How can a Service Worker prevent 'fetch' from consuming too much disk space? - javascript

May I know what strategy can be used to prevent addEventListener('fetch') handlers that call cache.put() from filling up the user's disk space? Is there any way to set an expiry for each request in the cache?

According to the service worker spec:
The Cache objects are exactly what authors have to manage themselves. The Cache objects do not get updated unless authors explicitly request them to be. The Cache objects do not expire unless authors delete the entries.
Note that the browser can throw your cache away if it wants to (and you should assume that sometimes it will), especially if it decides you're storing too much - it's not going to let you fill up the user's entire drive.
There do not appear to be any methods for examining the size of the cache either. However, note that there are related specifications in the works, for example the Quota Management API. The article by Jake Archibald linked above gives the following example of using this API (which would give you a result for all site storage, not just your caches):
navigator.storageQuota.queryInfo("temporary").then(function(info) {
  console.log(info.quota);
  // Result: <quota in bytes>
  console.log(info.usage);
  // Result: <used data in bytes>
});
However I just tried this in Chrome Canary and storageQuota was undefined - I don't think this is implemented anywhere yet. Hopefully someone will correct me if I'm wrong.
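Since the spec leaves expiry entirely to authors, a common strategy is to cap the number of entries in a runtime cache and delete the oldest ones as new responses are added. The following is only a minimal sketch of that idea, not something from the question or answer above; the cache name and entry limit are placeholders.

function trimCache(cacheName, maxEntries) {
  return caches.open(cacheName).then(function(cache) {
    return cache.keys().then(function(keys) {
      if (keys.length <= maxEntries) return;
      // keys() returns requests in insertion order, so keys[0] is the oldest.
      return cache.delete(keys[0]).then(function() {
        return trimCache(cacheName, maxEntries);
      });
    });
  });
}

self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.open('runtime-cache').then(function(cache) {
      return fetch(event.request).then(function(response) {
        cache.put(event.request, response.clone());
        // Fire-and-forget trim; it could also be wrapped in event.waitUntil().
        trimCache('runtime-cache', 50);
        return response;
      });
    })
  );
});

A variation with the same effect is to store a timestamp alongside each entry and delete anything older than a chosen age whenever the cache is opened.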

Related

Why can't we get a returned value from sendTransaction() run on a smart contract?

All discussions on this mention that it's impossible to get a returned value from sendTransaction() run on a contract function, where the contract state is being changed. I don't understand why the returned value can't be recorded in the transaction log on the blockchain, similarly to events, and so then it could be retrieved on the transaction confirmation:
web3.eth.sendTransaction(...)
  .on('confirmation', function(confirmationNumber, receipt) {
    // retrieving value returned by smart contract function here
  });
Logs are made for describing the events emitted from the contract, which is the current solution for getting data out of transactions, so the return data can't go in there.
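As a hedged illustration of that event-based workaround (contract name, ABI, account and event name are placeholders, and this assumes the contract emits an event such as ValueComputed(uint256 value) from the state-changing function):

const contract = new web3.eth.Contract(abi, contractAddress);

contract.methods.doSomething(42)
  .send({ from: account })
  .on('receipt', function(receipt) {
    // The "return value" is recovered from the event recorded in the
    // transaction logs, since actual return data is not stored on-chain.
    const value = receipt.events.ValueComputed.returnValues.value;
    console.log('Value emitted by the contract:', value);
  });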
Including return_data in the receipt, though, has been discussed and apparently forgotten. EIP 758 has the following solution:
EIP 658 originally proposed adding return data to transaction receipts. However, return data is not charged for (as it is not stored on the blockchain), so adding it to transaction receipts could result in DoS and spam opportunities. Instead, a simple Boolean status field was added to transaction receipts. This modified version of EIP 658 was included in the Byzantium hard fork. While the status field is useful, applications often need the return data as well.
The primary advantage of using the strategy outlined here is efficiency: no extra data needs to be stored on the blockchain, and minimal extra computational load is imposed on nodes. Since light clients have the current state, they can compute and send return data notifications without contacting a server. Although after-the-fact lookups of the return value would not be supported, this is consistent with the conventional use of return data, which are only accessible to the caller when the function returns, and are not stored for later use.
There is also this Go client pull request, which didn't go through because the preferred solution would instead be an Ethereum hard fork - even though there have been several hard forks since then and it still hasn't happened.

Can you prevent users editing the chrome.storage.local values in a Chrome extension? What is the best way to store persistent values for an extension?

So I'm working on a Chrome extension for someone else. I don't want to give away specific details about the project, so I'll use an equivalent example: let's assume it's an extension to run on an image/forum board. Imagine I have variables such as userPoints, isBanned, etc. The latter is fairly self-explanatory, while the former corresponds to points the user acquires as they perform certain actions, hence unlocking additional features, etc.
Let's imagine I have code like:
if (accountType !== "banned") {
  if (userPoints > 10000) accountType = "gold";
  else if (userPoints > 5000) accountType = "silver";
  else if (userPoints > 2500) accountType = "bronze";
  else if (userPoints <= 0) accountType = "banned";
  else accountType = "standard";
} else {
  alert("Sorry, you're banned");
  stopExtension();
}
Obviously though, it becomes trivial for someone with the knowledge to just browse to the extension's background page and paste chrome.storage.local.set({'userPoints': 99999999}) in the console, hence giving themselves full access to everything. And, with the Internet, someone can of course share this 'hack' on Twitter/YouTube/forums or whatever; then suddenly, since all they'd need to do is copy and paste a simple one-liner, you can have thousands of people, even with no programming experience, all using a compromised version of your extension.
And I realise I could use a database on an external site, but realistically, I might want to get/update variables such as userPoints 200+ times per hour if the user was browsing the extension's target site the entire time. So the main issues I have with using an external db are:
- efficiency: realistically, I don't want every user to be querying the db 200+ times per hour
- ease-of-getting-started: I want the user to just download the extension and go. I certainly don't want them to have to sign up. I realise I could create a non-expiring cookie for the user's ID which would be used to access their data in the db, but I don't want to do that, since users can e.g. clear all cookies etc.
- by default, I want all features to be disabled (i.e. effectively being considered like a 'banned' user) - if, for some reason, the connection with the db on my site fails, then the user wouldn't be able to use the extension, which I wouldn't want (and, speaking from experience of my parents being with Internet providers whose connection could drop 10 times per hour, failed connections could be a real issue for some people) - in contrast, accessing data from local storage will have something like a 99.999% success rate I'd assume, so for non-critical extensions like what I'm creating, that's more than good enough
Still, at least from what I've found searching, I've not found any Chrome storage method that doesn't also allow the user to edit the values. I would have thought there would be a storage method (or at least an option with chrome.storage.local.set(...)) to specify that the value could only be accessed from within the extension's context pages, but I've not found that option, at least.
Currently I'm thinking of encrypting the value to increment by, then obfuscating the code using a tool like obfuscator.io. With that, I can make a simple ~30-character JS file such as this
userPoints = userPoints + 1000;
become about 80,000...still, among all the junk, if you have the patience to scroll through the nonsense, it's still possible to find what you're looking for:
...[loads of code](_0x241f5c);}}}});_0x5eacdc(),***u=u+parseInt(decrypt('\u2300\u6340'))***;function _0x34ff36(_0x17398d)[loads more code]...
[note that, since it's an extension and the js files will be stored on the user's pc, things like file size/loading times of getting the js files from a server are irrelevant]
Hence meaning a user wouldn't be able to do something like chrome.storage.local.set({'userPoints': 99999999}), they'd instead have to set it to the encrypted version of a number - say, something like chrome.storage.local.set({'userPoints': "✀ເ찀삌ሀ"}) - this is better, but obviously, by no means secure.
So anyway, back to the original question: is there a way to store persistent values for a Chrome extension without the user being able to edit them?
Thanks

How can I or my app know that Chrome is discarding indexed db items?

I have an app which has recently started to randomly lose indexeddb items. By "lose" I mean they are confirmed as having been saved, but days later, are no longer present.
My hypothesis is that Chrome is discarding indexeddb items because the disk is full.
My question is specifically, are there any events that my app can listen for, or any Chrome log entries that I can refer to in order to confirm this is the case. NB. I'm not looking to fix the problem, I am only looking for ways in which I can detect it.
Each application can query how much data is stored or how much more space is available for the app by calling the queryUsageAndQuota() method of the Quota API.
You can use periodic Background Sync to estimate the used and free space allocated to temporary usage:
// index.html
navigator.serviceWorker.ready.then(registration => {
  registration.periodicSync.register('estimate-storage', {
    // Minimum interval at which the sync may fire.
    minInterval: 24 * 60 * 60 * 1000,
  });
});

// service_worker.js
self.addEventListener('periodicsync', event => {
  if (event.tag == 'estimate-storage') {
    event.waitUntil(estimateAndNotify());
  }
});
To estimate, use navigator.storage.estimate(), which returns a Promise that resolves with {usage, quota} values in bytes.
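The estimateAndNotify() helper referenced above isn't defined in the answer; a minimal sketch, assuming notification permission has already been granted to the origin, could look like this:

function estimateAndNotify() {
  return navigator.storage.estimate().then(({ usage, quota }) => {
    const percentUsed = Math.round((usage / quota) * 100);
    // Surface the estimate from the service worker as a notification.
    return self.registration.showNotification('Storage estimate', {
      body: `Using ${usage} of ${quota} bytes (${percentUsed}%)`,
    });
  });
}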
When TEMPORARY storage quota is exceeded, all the data (incl. AppCache, IndexedDB, WebSQL, File System API) stored for oldest used origin gets deleted.
You can switch to unlimitedStorage or persistent storage.
Not sure how it will work in real life; this feature was initially still somewhat experimental. But here https://developers.google.com/web/updates/2016/06/persistent-storage you can find that

This is still under development [...] the goal is to make users aware of “persistent” data before clearing it [...] you can presume that “persistent” means your data won’t get cleared without the user being explicitly informed and directly in control of that deletion.

Which can answer your clarification in the comments that you are looking for a way

how can a user or my app know for sure that this is happening

So, I don't know about your app, but the user can maybe get a notification. At least that page is from 2016; something must have been done by now.
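As a hedged sketch of the persistent-storage option mentioned above, called from the page (browsers differ in when they grant it, and some may require user engagement first):

if (navigator.storage && navigator.storage.persist) {
  navigator.storage.persist().then(granted => {
    if (granted) {
      console.log('Storage should only be cleared by explicit user action');
    } else {
      console.log('Storage may still be evicted under storage pressure');
    }
  });
}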
You may refer to the Browser storage limits and eviction criteria article on MDN.
My hypothesis is that Chrome is discarding indexeddb items because the disk is full
According to this article your hypothesis seems to be true.
One way to confirm this could be to save the unique ids of confirmed items in persistent storage (this will be very small compared to your actual data) and periodically check whether all of those items are still present in your IndexedDB.
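A minimal sketch of that check, assuming a hypothetical database 'app-db' with an object store 'items' keyed by id (both names are placeholders, not from the question):

function findMissingItems(confirmedIds) {
  return new Promise(function(resolve, reject) {
    var open = indexedDB.open('app-db');
    open.onerror = function() { reject(open.error); };
    open.onsuccess = function() {
      var db = open.result;
      var store = db.transaction('items', 'readonly').objectStore('items');
      var missing = [];
      var pending = confirmedIds.length;
      if (pending === 0) { resolve(missing); return; }
      confirmedIds.forEach(function(id) {
        var req = store.get(id);
        req.onsuccess = function() {
          if (req.result === undefined) missing.push(id);
          if (--pending === 0) resolve(missing);
        };
      });
    };
  });
}

Any id that comes back missing is evidence that data has been evicted since it was confirmed.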
You might also want to refer to best practices.
And finally, this comment might be helpful:

I think you need IndexedDB Observer. These links may help you if I got what you meant. Link-01 - Link-02

Can a Firebase client determine the bytes sent and received?

While inspecting the structure of the various Firebase JavaScript objects in the browser's console, I noticed that some objects have these properties:
bytes_received: 429
bytes_sent: 64
This seems to indicate the amount of data that was sent and received for this node/ref/snapshot.
Is there a structured way for a client to access this information?
Not really. :-)
You're seeing some internal stats the client keeps track of. The only way to get at them is with:
Firebase.INTERNAL.stats(firebaseRef);
And it'll dump them to the console. (Note that the stats are for all interactions with the Firebase, not just that particular firebaseRef.)
This isn't a supported API and could disappear or change at any time. It also may not match up exactly with what you see in the Forge dashboard (the client is unaware of some of the transport overhead that goes on). But if it helps you at all during development / debugging, great.
Since Firebase.INTERNAL.stats(ref) only dumps the stats to the console, I've written a module, firebase-stats, that does naughty things to access and return the byte stats so that you can do more useful programmatic things with the information.
var firebaseStats = require('firebase-stats'),
    Firebase = require('firebase'),
    ref = new Firebase('https://docs-examples.firebaseio.com');

firebaseStats(ref); // -> { bytes_received: 287, bytes_sent: 58 }
This works by looking for an object with a property called bytes_sent. This should continue to work across releases, but we're obviously well in to undocumented internal territory, so this comes with absolutely no warranty; use at your own risk. This module will throw if it can't find the stats object.

Publish data from browser app without writing my own server

I need users to be able to post data from a single page browser application (SPA) to me, but I can't put server-side code on the host.
Is there a web service that I can use for this? I looked at Amazon SQS (simple queue service) but I can't call their REST APIs from within the browser due to cross origin policy.
I favour ease of development over robustness right now, so even just receiving an email would be fine. I'm not sure that the site is even going to catch on. If it does, then I'll develop a server-side component and move hosts.
Not only are there web services, but nowadays there are robust systems that provide a way to run some server-side logic for your applications. They are called BaaS, or Backend as a Service, providers, and they usually provide some backbone for your front end applications.
Although they have multiple uses, I'm going to list the most common in my opinion:
For mobile applications - Instead of having to learn an API for each device you code for, you can use a standard platform to store logic and data for your application.
For prototyping - If you want to create a slick application, but you don't want to code all the backend logic for the data (much less deal with all the operations and system administration that represents), then through a BaaS provider you only need good front end skills to code the simplest CRUD applications you can imagine. Some BaaS providers even allow you to bind Reduce algorithms to the calls you perform against their API.
For web applications - When PaaS (Platform as a Service) came to town to ease the job of backend developers and spare them the hassle of system administration and operations, it was only logical that the same would happen to the backend itself. There are many clones that showcase the real power of this strategy.
All of this is amazing, but I have yet to mention any of them. I'm going to list the ones that I know best and have actually used in projects. There are probably many more, but as far as I know, these have satisfied most of my needs, whichever of the previously mentioned uses applied.
Parse.com
Parse's most outstanding features target mobile devices; however, nowadays Parse contains an incredible number of APIs that allow you to use it as a full-featured backend service for JavaScript, Android and even Windows 8 applications (the Windows 8 SDK was introduced a few months ago this year).
What does Parse code look like in JavaScript?
Parse works through classes and objects (ain't that beautiful?), so you first create a specific class (can be done through Javascript, REST or even the Data Browser manager) and then you add objects to specific classes.
First, add Parse as a script tag:
<script type="text/javascript" src="http://www.parsecdn.com/js/parse-1.1.15.min.js"></script>
Then, through a given Application ID and a Javascript Key, initialize Parse.
Parse.initialize("APPLICATION_ID", "JAVASCRIPT_KEY");
From there, it's all object manipulation
var Person = Parse.Object.extend("Person"); //Person is a class *cof* uppercase *cof*
var personObject = new Person();
personObject.save({name: "John"}, {
  success: function(object) {
    console.log("The object with the data " + JSON.stringify(object) + " was saved successfully.");
  },
  error: function(model, error) {
    console.log("There was an error! The following model and error object were provided by the Server");
    console.log(model);
    console.log(error);
  }
});
What about authentication and security?
Parse has a User-based authentication system, which pretty much allows you to store a base of users that can manipulate the data. If you map the data to User information, you can ensure that only a given user can manipulate specific data. Plus, in the settings of your Parse application, you can specify that no clients are allowed to create classes, to ensure unnecessary calls aren't performed.
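As a hedged sketch of that User-based flow, in the same callback style as the save example above (username and password are placeholders):

var user = new Parse.User();
user.set("username", "fred");
user.set("password", "hunter2");
user.signUp(null, {
  success: function(user) {
    console.log("Signed up and logged in as " + user.get("username"));
  },
  error: function(user, error) {
    console.log("Sign up failed: " + error.message);
  }
});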
Did you REALLY use it in a web application?
Yes, it was my tool of choice for a medium fidelity prototype.
Firebase.com
Firebase's main feature is the ability to provide Real Time updates to your application without all the hassle. You don't need a MeteorJS server in order to bring Push Notifications to your software. If you know JavaScript, you are halfway to bringing Real Time magic to your users.
What does Firebase code look like in JavaScript?
Firebase works in a REST fashion, and I think they do an amazing job structuring the Glory of REST. As a good example, look at the following Resource structure in Firebase:
https://SampleChat.firebaseIO-demo.com/users/fred/name/first
You don't need to be a rocket scientist to know that you are retrieving the first name of the user "Fred", given there's at least one - usually there should be a UUID instead of a name, but hey, it's an example, give me a break.
In order to start using Firebase, as with Parse, add their CDN JavaScript:
<script type='text/javascript' src='https://cdn.firebase.com/v0/firebase.js'></script>
Now, create a reference object that will allow you to consume the Firebase API
var myRootRef = new Firebase('https://myprojectname.firebaseIO-demo.com/');
From there, you can create a bunch of neat applications.
var USERS_LOCATION = 'https://SampleChat.firebaseIO-demo.com/users';
var userId = "Fred"; // Username
var usersRef = new Firebase(USERS_LOCATION);
usersRef.child(userId).once('value', function(snapshot) {
  var exists = (snapshot.val() !== null);
  if (exists) {
    console.log("Username " + userId + " is part of our database");
  } else {
    console.log("We have no register of the username " + userId);
  }
});
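The snippet above reads the value once; a hedged sketch of the real-time side (Firebase's headline feature) simply swaps once for on so the callback fires on every change:

usersRef.child(userId).child('name/first').on('value', function(snapshot) {
  // Fires immediately with the current value and again whenever it changes.
  console.log('First name is now: ' + snapshot.val());
});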
What about authentication and security?
You are in luck! Firebase released their Security API about two weeks ago! I have yet to explore it, but I'm sure it fills most of the gaps that allowed random people to use your reference for their own purposes.
Did you REALLY use it in a web application?
Eeehm... ok, no. I used it in a Chrome Extension! It's still in progress, but it's going to be a Real Time chat inside a Chrome Extension. Ain't that cool? Fine. I find it cool. Anyway, you can browse more awesome examples for Firebase in their examples page.
What's the magic of these services? If you've read up on Dependency Injection and Mock Object Testing, at some point you can completely replace all of those services with your own REST web service provider.
Since these services were created to be used inside any application, they are CORS ready. As stated before, I have successfully used both of them from multiple domains without any issue (I'm even trying to use Firebase in a Chrome Extension, and I'm sure I will succeed soon).
Both Parse and Firebase have Data Browser managers, which means that you can see the data you are manipulating through a simple web browser. As a final disclaimer, I have no relationship with either of those services other than the fact that James Taplin (Firebase Co-founder) was amazing enough to lend me some Beta access to Firebase.
You actually CAN use SQS from the browser, even without CORS, as long as you only need the browser to send messages, not receive them. Warning: this is a kludge that would make my CS professors cry.
When you perform a GET request via javascript, the browser will always perform the request, however, you'll only get access to the response if it was from the same origin (protocol, host, port). This is your ticket to ride, since messages can be posted to an SQS queue with just a GET, and who really cares about the response anyways?
Assuming you're using jquery, your queue is https://sqs.us-east-1.amazonaws.com/71717171/myqueue, and allows anyone to post a message, the following will post a message with the body "HITHERE" to the queue:
$.ajax({
  url: 'https://sqs.us-east-1.amazonaws.com/71717171/myqueue' +
       '?Action=SendMessage' +
       '&Version=2012-11-05' +
       '&MessageBody=HITHERE'
});
There'll be an error in the console saying that the request failed, but the message will show up in the queue anyway.
Have you considered JSONP? That is one way of calling cross-domain scripts from javascript without running into the same origin policy. You're going to have to set up some script somewhere to send you the data, though. Javascript just isn't up to the task.
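A minimal sketch of the JSONP idea, assuming a hypothetical endpoint that wraps its response in the named callback (the URL, parameter names and callback name are all placeholders):

function sendViaJsonp(data) {
  // The endpoint must respond with: handleResponse({...});
  window.handleResponse = function(result) {
    console.log('Server acknowledged:', result);
  };
  var script = document.createElement('script');
  script.src = 'https://example.com/collect' +
               '?callback=handleResponse' +
               '&payload=' + encodeURIComponent(JSON.stringify(data));
  document.head.appendChild(script);
}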
Depending on what kind of data you want to send, and what you're going to do with it, one way of solving it would be to post the data to a Google Spreadsheet using Ajax. It's a bit tricky to accomplish though. Here is another Stack Overflow question about it.
If presentation isn't that important you can just have an embedded Google Spreadsheet Form.
What about mailto:youremail#goeshere.com ? ihihi
In the meantime, you can turn on some free hosting like Altervista or Heroku or something else like them, so you can connect to their servers. If I remember correctly, these free services allow server-to-server connections, so you can create a sort of personal web service and push Ajax requests to it. Obviously their servers are slow for free accounts, but I think that's enough if you don't have much user traffic; otherwise you should move to a better VPS, hosting or cloud solution.
Maybe CouchDB can provide what you're after. IrisCouch provides free CouchDB instances. Lock it down so that users can't view documents and have a sensible validation function and you've got yourself an easy RESTful place to stick your data in.
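A hedged sketch of such a validation function (the 'submission' type and field names are placeholders; CouchDB runs a design document's validate_doc_update function on every write):

function(newDoc, oldDoc, userCtx) {
  // Reject deletions and anything that isn't a plain submission document.
  if (newDoc._deleted) {
    throw({ forbidden: 'Deletions are not allowed' });
  }
  if (newDoc.type !== 'submission') {
    throw({ forbidden: 'Only submission documents are accepted' });
  }
}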
