I want to invalidate a previously cached GET response in my service worker whenever a POST, PUT, or DELETE is made to the same URL, or to any URL of a resource or collection 'under' it. For example:
Let's say I cache /subscriptions and later do a POST to /subscriptions to add a new subscription, or PUT to /subscriptions/243 to update an existing subscription.
This means that my cached subscriptions collection is now stale data and I want to delete it from my cache so the next request will go to the server.
I've thought of two options, though I'm not sure either is possible:
Can I use a RegExp in the caches.match() call?
This way I could just match the parent collection piece of the requested url with keys found in the cache.
Can I get the keys of each cached data response?
If so, I could just loop through each response and see if the key meets my criteria for deleting it.
Any ideas?
Thanks!
You can't use a RegExp or anything other than string matching (optionally ignoring query parameters) when doing a lookup via caches.match().
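The only built-in flexibility comes from CacheQueryOptions, e.g. ignoring the query string when matching, which still won't give you prefix or pattern matching:
// ignoreSearch drops the query string during matching, but it's still exact-URL matching.
const cached = await caches.match('/subscriptions', { ignoreSearch: true });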
I'd recommend the second approach, in which you open a named cache, get its keys, and then filter for the ones you care about. It's not that much code, and it reads fairly nicely with async/await:
async function deleteCacheEntriesMatching(cacheName, regexp) {
  const cache = await caches.open(cacheName);
  const cachedRequests = await cache.keys();
  // request.url is a full URL, not just a path, so use an appropriate RegExp!
  const requestsToDelete = cachedRequests.filter(request => request.url.match(regexp));
  return Promise.all(requestsToDelete.map(request => cache.delete(request)));
}
// Call it like:
await deleteCacheEntriesMatching('my-cache', new RegExp('/subscriptions'));
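If you want the invalidation to happen automatically whenever a mutating request goes through the service worker, you could hook it into your fetch handler. A minimal sketch, where the cache name 'my-cache' and the /subscriptions pattern are just placeholders:
// Sketch: invalidate matching cache entries after a successful mutation.
self.addEventListener('fetch', event => {
  if (['POST', 'PUT', 'DELETE'].includes(event.request.method)) {
    event.respondWith((async () => {
      const response = await fetch(event.request);
      if (response.ok) {
        await deleteCacheEntriesMatching('my-cache', new RegExp('/subscriptions'));
      }
      return response;
    })());
  }
});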
I'm working on an app that uses the new(ish) File System Access API, and I want to save the fileHandles of recently loaded files, to display a "Recent Files..." menu option and let a user load one of these files without opening the system file selection window.
This article has a paragraph about storing fileHandles in IndexedDB and it mentions that the handles returned from the API are "serializable," but it doesn't have any example code, and JSON.stringify won't do it.
File handles are serializable, which means that you can save a file handle to IndexedDB, or call postMessage() to send them between the same top-level origin.
Is there a way to serialize the handle other than JSON? I thought maybe IndexedDB would do it automatically but that doesn't seem to work, either.
Here is a minimal example that demonstrates how to store and retrieve a file handle (a FileSystemHandle to be precise) in IndexedDB (the code uses the idb-keyval library for brevity):
import { get, set } from 'https://unpkg.com/idb-keyval@5.0.2/dist/esm/index.js';
const pre = document.querySelector('pre');
const button = document.querySelector('button');
button.addEventListener('click', async () => {
  try {
    const fileHandleOrUndefined = await get('file');
    if (fileHandleOrUndefined) {
      pre.textContent =
        `Retrieved file handle "${fileHandleOrUndefined.name}" from IndexedDB.`;
      return;
    }
    // This always returns an array, but we just need the first entry.
    const [fileHandle] = await window.showOpenFilePicker();
    await set('file', fileHandle);
    pre.textContent =
      `Stored file handle for "${fileHandle.name}" in IndexedDB.`;
  } catch (error) {
    // alert() takes a single argument, so combine name and message.
    alert(`${error.name}: ${error.message}`);
  }
});
I have created a demo that shows the above code in action.
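One thing the snippet doesn't show: a handle read back from IndexedDB in a later session doesn't automatically carry permission to access the file, so you usually have to query and, from a user gesture, re-request it. A rough sketch (verifyPermission is just a helper name used here, not part of the API):
// Sketch: re-check/re-request permission on a handle restored from IndexedDB.
async function verifyPermission(fileHandle, mode = 'read') {
  const options = { mode };
  if ((await fileHandle.queryPermission(options)) === 'granted') {
    return true;
  }
  // requestPermission must be triggered by a user gesture (e.g. a click handler).
  return (await fileHandle.requestPermission(options)) === 'granted';
}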
When a platform interface is [Serializable], it means it has associated internal serialization and deserialization rules that will be used by APIs that perform the “structured clone” algorithm to create “copies” of JS values. Structured cloning is used by the Message API, as mentioned. It’s also used by the History API, so at least in theory you can persist FileHandle objects in association with history entries.
In Chromium at the time of writing, FileHandle objects appear to serialize and deserialize successfully when used with history.state in general, e.g. across reloads and backwards navigation. Curiously, it seems deserialization may silently fail when returning to a forward entry: popStateEvent.state and history.state always return null when traversing forwards to an entry whose associated state includes one or more FileHandles. This appears to be a bug.
History entries are part of the “session” storage “shelf”. Session here refers to (roughly) “the lifetime of the tab/window”. This can sometimes be exactly what you want for FileHandle (e.g. upon traversing backwards, reopen the file that was open in the earlier state). However it doesn’t help with “origin shelf” lifetime storage that sticks around across multiple sessions. The only API that can serialize and deserialize FileHandle for origin-level storage is, as far as I’m aware, IndexedDB.
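A minimal sketch of the history-entry approach described above (session-lifetime only, and subject to the forward-traversal caveat):
// Sketch (run inside an async context, e.g. a click handler):
const [fileHandle] = await window.showOpenFilePicker();
// Stash the handle in the current session-history entry's state.
history.replaceState({ fileHandle }, '');
// Later, e.g. after a reload or a back navigation to this entry:
const restored = history.state && history.state.fileHandle;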
For those using Dexie to interface with IndexedDB, you will get an empty object unless you leave the primary key unnamed ('not inbound'):
db.version(1).stores({
  test: '++id'
});
const [fileHandle] = await window.showOpenFilePicker();
db.test.add({ fileHandle });
This results in a record with { fileHandle: {} } (empty object)
However, if you do not name the primary key, it serializes the object properly:
db.version(1).stores({
  test: '++'
});
const [fileHandle] = await window.showOpenFilePicker();
db.test.add({ fileHandle });
Result: { fileHandle: FileSystemFileHandle... }
This may be a bug in Dexie, as reported here: https://github.com/dfahlander/Dexie.js/issues/1236
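In case it's useful, reading the handle back with the unnamed primary key looks roughly like this (a sketch; Dexie's add() resolves with the auto-generated key):
// Sketch: store and read back the handle using the auto-generated key.
const [fileHandle] = await window.showOpenFilePicker();
const key = await db.test.add({ fileHandle });
const { fileHandle: restoredHandle } = await db.test.get(key);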
If I have a database structure like here and I make a query as shown below, is there a difference in the traffic used to download the snapshot from the database if I access each node with snapshot.forEach(function(childSnapshot)) versus if I don't access the nodes?
If there is no difference, is there a way to access only the keys in Chats without getting snapshot data for what each key contains? I'm assuming that this way it would generate less downloaded data.
var requests = db.ref("Chats");
requests.on('child_added', function(snapshot) {
var communicationId = snapshot.key;
console.log("Chat id = " + communicationId);
getMessageInfo(
communicationId,
function() {
snapshot.ref.remove();
}
);
When you call requests.on('child_added', ...), you are always going to access all of the data at the requests node. It doesn't matter what you do in the callback function. The entire node is loaded into memory, and the cost of the query is paid. What you do with the snapshot in memory doesn't cost anything extra.
If you don't want all of the child nodes under requests, you should find some way to filter the query for only the children you need.
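For example, something along these lines (a sketch; the child key "status" and its value are made up for illustration):
// Sketch: pull only a filtered/limited slice instead of everything under "Chats".
var filtered = db.ref("Chats").orderByChild("status").equalTo("pending").limitToFirst(20);
filtered.on('child_added', function(snapshot) {
  console.log("Chat id = " + snapshot.key);
});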
As mentioned in the documentation, either of these methods can be used:
1. Call a method to get the data.
2. Set a listener to receive data-change events.
Traffic depends on your usage. When your data need not be updated in realtime, you can just call a method to get the data (1). But if you want your data to be updated in realtime, then you should go for (2). When you set a listener, Firebase sends your listener an initial snapshot of the data, and then another snapshot each time the child changes.
(1) - Example
firebase.database().ref('/users/').once('value') // Single Call
(2) - Example
firebase.database().ref('/users/').on('child_added', callback) // Called on every update
Also, I don't think you can get only the keys, because when you reference a child and retrieve data, Firebase itself sends it as key-value pairs (a DataSnapshot).
Further Reference: https://firebase.google.com/docs/reference/js/firebase.database.DataSnapshot
https://firebase.google.com/docs/database/web/read-and-write
What I'm trying to do:
I want to add data (id, data) to the db. While doing so, I want to check if the id already exists: if so, append to the existing data; else add a new (id, data) entry.
Ex: This could be a db of student id and grades in multiple tests. If an id exists in the db already, I want to just append to the existing test scores already in db, else create a new entry: (id, grades)
I have set up an update handler function to do this. I understand that a CouchDB insert by default does not do the above. Now, how do I check the db for that id and, based on that, decide whether to add a new entry or append? I know there is db.get(). However, I presume that since the update handler function is already part of the db itself, there may be a more efficient way of doing it.
I see this sample code in the couchdb wiki:
function(doc, req){
  if (!doc){
    if ('id' in req && req['id']){
      // create new document
      return [req, 'New Document'];
    }
    // change nothing in database
    return [null, 'Incorrect data format'];
  }
  doc[id] = req;
  return [doc, 'Edited World!'];
}
A few things in this example aren't clear to me: where do we get the id from? Often the id is not explicitly passed in when adding to the db.
Does that mean we need to explicitly pass a field called "_id"?
How do I check the db for that id and, based on that, decide whether to add a new entry or append?
CouchDB does this for you, assuming the HTTP client triggers your update function with the document's ID. As the documentation describes:
When the request to an update handler includes a document ID in the URL, the server will provide the function with the most recent version of that document
In the sample code you found, the update function looks like function(doc, req), so inside that function the variable doc will hold the existing document already "inside" your CouchDB database. All of the incoming data from the client/user will be found somewhere within req.
So for your case it might look something like:
function(doc, req){
  if (doc) {
    doc.grades.push(req.form.the_grade);
    return [doc, "Added the grade to existing document"];
  } else {
    var newDoc = {_id:req.uuid, grades:[req.form.the_grade]};
    return [newDoc, "Created new document ("+newDoc._id+") for the grade"];
  }
}
If the client does a POST to /your_db/_design/your_ddoc/_update/your_update_function/existing_doc_id then the first part of the if (doc) will be triggered, since you will have gotten the existing document (pre-fetched for you) in the doc variable. If the client does a POST to just /your_db/_design/your_ddoc/_update/your_update_function, then there is no doc provided and you must create a new one (else clause).
Note that the client will need to know the ID to update an existing document. Either track it, look it up, or, if you must and understand the drawbacks, make IDs deterministic based on something that is known.
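For illustration, the two calls might look like this from a browser client (a sketch; the db, design-doc, update-function, and document names are the placeholders from above, and req.form is populated because the body is form-encoded):
// Sketch: append a grade to an existing student document (ID in the URL)...
await fetch('/your_db/_design/your_ddoc/_update/your_update_function/student_243', {
  method: 'POST',
  body: new URLSearchParams({ the_grade: 'A' })
});
// ...or create a new document (no ID, so the else branch with req.uuid runs).
await fetch('/your_db/_design/your_ddoc/_update/your_update_function', {
  method: 'POST',
  body: new URLSearchParams({ the_grade: 'B' })
});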
Aside: the db.get() you mentioned is probably from a CouchDB HTTP client library and is not available (and would not work) in the context of any update/show/list/etc. functions, which run sandboxed in the database "itself".
Cited from the documentation:
If you are updating an existing document, it should already have an _id set, and if you are creating a new document, make sure to set its _id to something, either generated based on the input or the req.uuid provided. The second element is the response that will be sent back to the caller.
So tl;dr, if the _id is not specified, use req.uuid.
Documentation link: http://docs.couchdb.org/en/stable/ddocs/ddocs.html#update-functions
I'm looking for a simple way to cache HTML that I pull using the request-promise library.
The way I've done this in the past is to specify a time-to-live, say one day. I take the parameters passed into request and hash them. Whenever a request is made, I save the HTML contents on the file system in a specific folder, naming the file with the hash and a Unix timestamp. When a request is later made with the same parameters, I check via the timestamp whether the cache is still valid, and either pull it or make a new request.
Is there any library that can help with this that can wrap around request? Does request have a method of doing this natively?
I went with the recommendation in the comments and used Redis. Note this only works for GET requests.
/* cached requests */
// crypto ships with Node; request-promise does the HTTP call.
// `client` is assumed to be a promisified Redis client exposing
// getAsync/setAsync (e.g. node_redis promisified with bluebird).
const crypto = require('crypto');
const request = require('request-promise');

async function cacheRequest(options) {
  const stringOptions = JSON.stringify(options);
  const optionsHashed = crypto.createHash('md5').update(stringOptions).digest('hex');
  const cached = await client.getAsync(optionsHashed);
  if (cached) return cached;
  const HTML = await request.get(options);
  await client.setAsync(optionsHashed, HTML);
  return HTML;
}
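If you also want the time-to-live behaviour described in the question, Redis can expire the key for you; for example (the one-day TTL is just an example value):
// Cache for one day (86400 seconds); Redis deletes the key after that.
await client.setAsync(optionsHashed, HTML, 'EX', 60 * 60 * 24);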
I am using Restangular to handle my token/header authentication in a single page Angular web application.
Using addFullRequestInterceptor, I set the correct headers for each outgoing REST API call, using a personal key for encrypting data.
Restangular
  .setBaseUrl(CONSTANTS.API_URL)
  .setRequestSuffix('.json')
  .setDefaultHeaders({'X-MyApp-ApiKey': CONSTANTS.API_KEY})
  .addFullRequestInterceptor(requestInterceptor)
  .addErrorInterceptor(errorInterceptor);
function requestInterceptor(element, operation, route, url, headers, params, httpConfig) {
  var timeStamp = Helpers.generateTimestamp(),
      //Condensed code for illustration purposes
      authSign = Helpers.generateAuthenticationHash(hashIngredients, key, token),
      allHeaders = angular.extend(headers, {
        'X-MyApp-Timestamp': timeStamp,
        'Authentication': authSign
      });
  return {
    headers: allHeaders
  };
}
Works great. There is one exception I need though: For a new visitor that has not logged in yet, a generic key/token pair is requested via REST. This key/token pair is used in the headers of the login authentication call.
So for this call, I create a separate Restangular sub-configuration. In this configuration I want to override the requestInterceptor. But this seems to be ignored (i.e. the original interceptor is still called). It doesn't matter if I pass null or a function that returns an empty object.
var specialRestInst = Restangular.withConfig(function(RestangularConfigurer) {
      RestangularConfigurer.addFullRequestInterceptor(function() { return {}; });
    }),
    timeStamp = Helpers.generateTimestamp(),
    header = {'X-MyApp-Timestamp': timeStamp};
specialRestInst.one('initialise').get({id: 'app'}, header);
So, as documented by Restangular, withConfig takes the base configuration and extends it. I would like to know how to removeFullRequestInterceptor (this function does not exist), override it, or something like that.
I would take a different approach and pass a flag to the interceptor. If the flag exists, then the authSign is excluded. You can do this using withHttpConfig. It's better to exclude in special cases than to always have to tell the interceptor to include the authSign.
So you would update the interceptor like this.
function requestInterceptor(element, operation, route, url, headers, params, httpConfig) {
  var timeStamp = Helpers.generateTimestamp();
  var allHeaders = {'X-MyApp-Timestamp': timeStamp};
  if (!httpConfig.excludeAuth) {
    //Condensed code for illustration purposes
    var authSign = Helpers.generateAuthenticationHash(hashIngredients, key, token);
    allHeaders['Authentication'] = authSign;
  }
  // Return the merged headers under the `headers` key, as a fullRequestInterceptor expects.
  return { headers: angular.extend(headers, allHeaders) };
}
When you need to exclude the authSign, you would use Restangular like this.
specialRestInst.one('initialise').withHttpConfig({excludeAuth: true}).get({id: 'app'});
You should be able to add any values you want to the http config, as long as they aren't already used.
I'm not sure if this will work as expected, but I can't see why it wouldn't work.