Is it possible to detect updates on the server, that is, updates to the page (HTML) or to the styles (CSS), and make a request to the server to get the updated data? If so, how?
In short:
When your service worker script changes, it is set to replace the old one the user has as soon as the user comes online with your PWA.
You should reflect your changes in the array of items to cache in your SW.
You should change the name of the cache, and remove the old cache.
As a result, your users will always have the latest version of your app.
In detail:
1) Change your SW so it replaces the one the user currently has.
Even the slightest change in your SW is enough for it to be considered a new version, so it will take over from the old one. This quote from Google Web Developers explains it:
Update your service worker JavaScript file. When the user navigates to your site, the browser tries to redownload the script file that defined the service worker in the background. If there is even a byte's difference in the service worker file compared to what it currently has, it considers it new.
Best practice would be keeping a version number somewhere in your SW and updating it programmatically as the content changes. It can even be inside a quoted string; it does not matter, it will still work. Remember: even a byte's difference is enough.
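For illustration, a minimal sketch (the version string and cache name are made up):

// sw.js
const VERSION = '1.0.4'; // bump on every deploy; even a one-byte change makes the browser treat this SW as new
const CACHE_NAME = `my-pwa-${VERSION}`;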
2) Reflect your changes in the list of items to be cached.
You keep a list of assets to be cached in your SW, so reflect added/updated/removed assets in that list. Best practice would be setting up a cache buster for your assets. That is because the SW sits a layer behind the browser cache, so to speak.
In other words: fetch requests from the SW go through the browser cache, so without a buster the SW will pick up assets from the browser cache instead of the server, and cache those until the SW changes again.
In that case you will end up with half of your users running your PWA with the new assets while the other half suffers from inexplicable bugs. And you will have a wonderful time, over-exposed to complaints and frustrated at being unable to find the cause or a way to reproduce those bugs.
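A minimal sketch of cache-busted precaching, reusing the CACHE_NAME from the sketch above (the asset names are made up); passing Request objects with cache: 'reload' makes the SW bypass the browser cache and hit the server:

// sw.js
const ASSETS = [
  '/index.html',
  '/css/style.a1b2c3.css', // content-hashed filenames act as cache busters
  '/js/app.d4e5f6.js',
];

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(ASSETS.map((url) => new Request(url, { cache: 'reload' })))
    )
  );
});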
3) Replace your old cache, do not merge it.
If you do not change the name of the cache for your updated list of assets, the new assets will be merged with the old ones.
The merge happens in such a way that old and removed assets are kept, new ones are added, and changed ones are replaced. While things will seem to work this way, you will be accumulating old assets on users' devices. And the storage space you are using is not infinite.
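A minimal sketch of that cleanup, again assuming the CACHE_NAME constant from above; on activation, every cache whose name differs from the current one is deleted:

// sw.js
self.addEventListener('activate', (event) => {
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(keys.filter((key) => key !== CACHE_NAME).map((key) => caches.delete(key)))
    )
  );
});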
4) Everyone is happy
It may look tedious to implement and keep track of all the things mentioned, but I assure you that otherwise you will have far greater and far more unpleasant things to deal with. And dissatisfied users will be a nice plus.
So I encourage you to design your SW once and for all.
Yes, it is possible by invalidating the cache in your service worker:
https://developers.google.com/web/fundamentals/getting-started/primers/service-workers#update-a-service-worker
Note also that at the moment there is an open issue, Service worker JavaScript update frequency (every 24 hours?), as the service worker itself may not be updated because of the browser cache.
However, you may also want to take a look at sw-precache, which does this for you (for example through a gulp task).
Have a look at LiveJS to dynamically update the CSS.
I believe the solution they use is to add a GET parameter with a timestamp to the CSS or HTML page, e.g. /css/style.css?time=1234, and calculate a hash of the result. If the hash has changed since the last check, update the CSS; otherwise, keep polling.
A similar implementation could be built for HTML, but I have not seen any comparable projects for it. You should have a look at Ajax if you want to automatically update data in your page.
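A minimal sketch of that polling idea (the URL, interval, and reload strategy are assumptions, not part of LiveJS):

async function hashOf(url) {
  const res = await fetch(`${url}?time=${Date.now()}`); // timestamp parameter defeats the HTTP cache
  const buf = await res.arrayBuffer();
  const digest = await crypto.subtle.digest('SHA-256', buf);
  return Array.from(new Uint8Array(digest)).join('-');
}

let lastHash = null;
setInterval(async () => {
  const current = await hashOf('/css/style.css');
  if (lastHash !== null && current !== lastHash) {
    location.reload(); // or swap the stylesheet's href to apply just the CSS change
  }
  lastHash = current;
}, 10000);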
I'm working on a Vue app that uses Vuex and gets objects from an API. The tables have paging and fetch batches of objects from the API, sometimes including related entities as nested objects. The UI allows some editing via inputs in a table, and adding via modals.
When the user wants to save all changes, I have a problem: how do I know what to patch via the API?
Idea 1: capture every change on every input and mark the object being edited as dirty
Idea 2: make a deep copy of the data after the fetch, and do a deep comparison to find out what's dirty
Idea 3: this is my question: please tell me that idea 3 exists and it's better than 1 or 2!
If the answer isn't idea 3, I'm really hoping it's not idea 1. There are so many inputs to attach change handlers to, and if the user edits something, then re-edits back to its original value, I'll have marked something dirty that really isn't.
The deep copy / deep compare at least isolates the problem to two places in code, but my sense is that there must be a better way. If this is the answer (also hoping not), do I build the deep copy / deep compare myself, or is there a package for it?
It looks like you have the final state in the UI and want to persist it on the server. Instead of sending over the delta, I would just send over the full final state and overwrite whatever was on the server side.
So if you have user settings, instead of sending which settings were toggled, just send over "this is what the new set of settings is".
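A minimal sketch of that approach (the endpoint and store shape are assumptions):

// Replace the server copy wholesale with the client's final state.
await fetch('/api/settings', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(store.state.settings),
});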
Heavy lifting needs to be done on the server rather than the client most of the time, so I'd follow the answer given by Asad. You're not supposed to compute huge object diffs; it's 2022, so we need to think about performance.
Of course, it also depends on your app and what this is all about. Maybe your API guy is opposed to it for a specific reason (not only related to performance). Set up a meeting with your team/PO and check what is feasible.
You can always build something on your side too; looping over all the inputs should be feasible without handling each one manually.
TL;DR: this needs to be a discussion in your company, with your very specific constraints and limitations. All the "reasonable solutions" are already listed, and you will probably not be able to go further, because these kinds of opinion-based questions are not allowed on SO anyway.
In a web app using PouchDB, I have a slow-running function that finishes by updating a document in the DB. I want to move it off the main UI thread and into a web worker. However, we have lots of other code using PouchDB still in the main thread (e.g. the change event listener, but also code that deals with other documents). (For reference, the database size is on the order of 100 MB; Vue 2 is used, so, in general, the UI can update when the data changes.)
This is where I seem to come unstuck immediately:
Shared memory is basically out, as all the browsers disable it by default
Even if it weren't, PouchDB is a class, and cannot be transferred(?).
Isolating all the db code, including the changes handler, into one web worker is a huge refactor; and then we still have the issue of having to pass huge chunks of data in and out of that web worker.
Moving all the code that uses the data into the web worker too, and just having the UI thread pass messages back and forth, is an even bigger refactor, and I've not thought through how it might interfere with Vue.
That seems to leave us with a choice of two extremes. Either rewrite the whole app from the ground up, possibly dropping Vue, or just do the slow, complex calculation in a web worker, then have it pass back the result, and continue to do the db.put() in the main UI thread.
Is it really an all or nothing situation? Are there any PouchDB "tricks" that allow working with web workers, and if so will we need to implement locking?
You're missing an option that I would choose in your situation: write a simple adapter that allows your worker code to query the DB in the main thread via messages. Get your data, process it in the worker, and send it back.
You only need to "wrap" the methods that you need in the worker. I recommend writing a class or a set of async functions in your worker to keep the code readable.
You don't need to worry about the amount of data passed. Serialization and deserialization are quite fast, and the transfer is basically a memcpy, so it does not take any appreciable time.
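A minimal sketch of such an adapter (the message shape and helper names are made up, not a real API):

// main.js -- owns the PouchDB instance and services requests from the worker
const db = new PouchDB('app');
const worker = new Worker('worker.js');

worker.onmessage = async ({ data: { id, method, args } }) => {
  try {
    const result = await db[method](...args); // e.g. 'get', 'put', 'allDocs'
    worker.postMessage({ id, result });
  } catch (error) {
    worker.postMessage({ id, error: error.message });
  }
};

// worker.js -- an async wrapper so worker code reads like direct DB access
let nextId = 0;
const pending = new Map();

self.onmessage = ({ data: { id, result, error } }) => {
  const { resolve, reject } = pending.get(id);
  pending.delete(id);
  error ? reject(new Error(error)) : resolve(result);
};

function dbCall(method, ...args) {
  return new Promise((resolve, reject) => {
    const id = nextId++;
    pending.set(id, { resolve, reject });
    self.postMessage({ id, method, args });
  });
}

// usage inside the worker:
// const doc = await dbCall('get', 'some-doc-id');
// await dbCall('put', { ...doc, processed: true });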
I found this adapter plugin, which I guess counts as the "PouchDB trick" I was after: https://github.com/pouchdb-community/worker-pouch
It was trivial to add (see below), and it has been used in production for 6-7 weeks and appears to have fixed the problems we saw. (I say appears, as it is quite hard to see it having any effect, and we didn't have a good way to reproduce the slowdown problems users were seeing.)
const PouchDB = require('pouchdb-browser').default
const pouchdbWorker = require('worker-pouch')
PouchDB.adapter('worker', pouchdbWorker)
The real code is like this, but usePouchDBWorker has always been kept as true:
const PouchDB = require('pouchdb-browser').default
// const pouchdbDebug = require('pouchdb-debug')
if (usePouchDBWorker) {
  // register the worker-pouch adapter so PouchDB operations run off the main UI thread
  const pouchdbWorker = require('worker-pouch')
  PouchDB.adapter('worker', pouchdbWorker)
}
This code is used in both web app and Electron builds. The web app is never used with older web browsers, so read the github site if that might be a concern for your own use case.
I'm implementing an app for iOS that is based on events. These events have a startTime and an endTime. They are visible to the user only during the interval from startTime to endTime.
Here's how it works: a user can create an event and post it to the Firebase database (the event contains startTime and endTime). If the current time falls within this interval, the user can see the events that are going on; but when the endTime arrives, the event gets deleted from the database and the user can no longer access it.
The thing is, I know nothing about JavaScript and how it could work in an iOS app with Firebase. I think (through my research) that I need something that checks the database for old events, and I have no idea how to implement it.
How does this check work in an iOS app with Firebase?
I'm sorry if I wasn't clear
The accepted answer is OK, but you do have a "proper" option.
You can use Google App Engine and write a backend which runs a script, say every X hours, to clear out old entries (be careful with time zones in your timestamps; make sure everything is consistent). Depending on whether you feel you will have other processing needs in the future, it may be worth going that route, as otherwise you are rather limited in what you can do (it all needs to go through the client apps, and you potentially have security, reliability, and performance issues depending on your use case).
For that you would need a full Google App Engine (Google Cloud) account and probably higher pay-as-you-use pricing.
For details on how to approach this, see here: https://cloud.google.com/solutions/mobile/firebase-app-engine-android-studio
From there you can also see the nice design-option diagrams they've made to visualize how your client apps + Firebase + App Engine work together: https://cloud.google.com/solutions/mobile/mobile-app-backend-services
You should still implement client-side filtering of events since the purging of old events won't be done in real time.
BTW, Google literally JUST NOW uploaded fancy new tutorials (codelabs) replacing all of Firebase's old docs. So I suspect they will very soon add a codelab with App Engine integration as well (I couldn't find one yet). In the meantime, you can see the one I pasted above; it is from Google's official Cloud site.
Best of luck.
I've had to implement something similar. Here's the breakdown of how you can achieve this with Firebase.
Firebase does not provide server-side logic, so you can't rely on Firebase deleting the data for you (in case the event owner terminates the app, turns off the phone, etc.).
Use FirebaseServerValue.timestamp to provide your event with a standard time value. This will be your constant for either allowing or preventing a user from seeing an event.
Use if / else statements to control the event's visibility only between startTime and endTime (also set as timestamps).
Because you can't use server-side logic to delete the data, you must use the client to remove data from Firebase. Use the if/else logic from #3 to determine whether the current timestamp value is past the endTime timestamp and, if so, remove that piece of data from Firebase.
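A minimal sketch of that cleanup, shown with the Firebase JS SDK for illustration (the iOS SDK has equivalent calls; the events path and field names are assumptions):

// Remove every event whose endTime has already passed (client-driven cleanup).
firebase.database().ref('events')
  .orderByChild('endTime')
  .endAt(Date.now())
  .once('value')
  .then((snapshot) => {
    snapshot.forEach((child) => { child.ref.remove(); });
  });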
Hope this helps.
When a cached value has expired, or a new cache must be generated for any reason, and we have huge traffic at a moment when no cache exists, there will be a heavy load on MongoDB and response times increase significantly. This is typically called the "dog-pile effect". Everything works well once the cache is created.
I know that it's a very common problem which applies to all web applications using a database & cache system.
What should one do to avoid dog-pile effect at a Node.js & MongoDB & Redis stack? What are best practices and common mistakes?
One fairly proven way to keep the dogs from piling up is to keep a "lock" (e.g. in Redis) that prevents the cache-populating logic from firing more than once. The first time the fetcher is called (for a given piece of content), the lock is acquired (for it) and set to expire (e.g. with SET ... NX EX 60). Any subsequent invocation of the fetcher for that content will fail to get the lock, so only one dog gets to the pile.
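A minimal sketch with the node-redis client (the key names, TTLs, and fetchFromMongo are assumptions):

const { createClient } = require('redis');
const redis = createClient(); // call await redis.connect() once at startup (node-redis v4)

async function getContent(id) {
  const cached = await redis.get(`content:${id}`);
  if (cached) return JSON.parse(cached);

  // NX: set only if absent; EX: auto-expire so a crashed fetcher cannot hold the lock forever.
  const gotLock = await redis.set(`lock:content:${id}`, '1', { NX: true, EX: 60 });
  if (gotLock) {
    const fresh = await fetchFromMongo(id); // the expensive MongoDB query
    await redis.set(`content:${id}`, JSON.stringify(fresh), { EX: 300 });
    await redis.del(`lock:content:${id}`);
    return fresh;
  }

  // Another caller is repopulating: wait briefly and retry (or serve stale data instead).
  await new Promise((resolve) => setTimeout(resolve, 100));
  return getContent(id);
}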
The other thing you may want to put in place is some kind of rate limiting on the fetcher, regardless of the content. That's also quite easily doable with Redis; feel free to look it up or ask another question :)
I'd just serve the expired content until the new content is done caching, so the database won't get stampeded.
I've been getting more and more into high-level application development with JavaScript/jQuery. I've been trying to learn more about the JavaScript language and dive into some of its more advanced features. I was just reading an article on memory leaks when I read this section of the article.
JavaScript is a garbage collected language, meaning that memory is allocated to objects upon their creation and reclaimed by the browser when there are no more references to them. While there is nothing wrong with JavaScript's garbage collection mechanism, it is at odds with the way some browsers handle the allocation and recovery of memory for DOM objects.
This got me thinking about some of my coding habits. For some time now I have been very focused on minimizing the number of requests I send to the server, which I feel is just good practice. But I'm wondering if sometimes I go too far. I am largely unaware of the efficiency issues and bottlenecks that come with the JavaScript language.
Example
I recently built an impound-management application for a towing company. I used the jQuery UI dialog widget and populated a datagrid with specific ticket data. Now, this sounds very simple on the surface... but there is a LOT of data being passed around here.
(and now for the question... drumroll please...)
I'm wondering what the pros/cons are for each of the following options.
1) Make only one request for a given ticket and store it permanently in the DOM, simply showing/hiding the modal window; this means only one request is sent out per ticket.
2) Make a request every time a ticket is opened and destroy it when it's closed.
My natural inclination was to store the tickets in the DOM, but I'm concerned that this will eventually start to hog a ton of memory if the application goes a long time without being reset (which it will).
I'm really just looking for pros/cons for both of those two options (or something neat I haven't even heard of =P).
The solution here depends on the specifics of your problem, as the 'right' answer will vary based on the length of time the page is left open, the size of the DOM elements, and request latency. Here are a few more things to consider:
Keep only the newest n items in the cache (a sketch follows after this list). This works well if you are only likely to redisplay items within a short period of time.
Store the data for each element instead of the DOM element, and reconstruct the DOM on each display.
Use HTML5 Storage to store the data instead of DOM or variable storage. This has the added advantage that data can be stored across page requests.
Any caching strategy will need to consider when to invalidate the cache and re-request updated data. Depending on your strategy, you will need to handle conflicts that result from multiple editors.
The best way is to get started with the simplest method, and add complexity to improve speed only where necessary.
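A minimal sketch of the newest-n idea from the list above (the size limit and key scheme are made up):

const MAX_CACHED = 20;
const ticketCache = new Map(); // a Map preserves insertion order

function cacheTicket(id, data) {
  ticketCache.delete(id);       // re-inserting moves the ticket to "most recent"
  ticketCache.set(id, data);
  if (ticketCache.size > MAX_CACHED) {
    const oldest = ticketCache.keys().next().value;
    ticketCache.delete(oldest); // evict the least recently cached ticket
  }
}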
The third path would be to store the data associated with a ticket in JS, and create and destroy DOM nodes as the modal window is summoned/dismissed (jQuery templates might be a natural solution here.)
That said, the primary reason you avoid network traffic seems to be user experience (the network is slower than RAM, always). But that experience might not actually be degraded by making a request every time, if it's something the user intuits involves loading data.
I would say number 2 would be best, because that way, if the ticket changes after you open it, the change will appear the next time the ticket is opened.
One important factor is the number of redraws/reflows triggered by DOM manipulation. It's much more efficient to build up your content changes and insert them in one go than to do it incrementally, since each increment causes a redraw/reflow.
See: http://www.youtube.com/watch?v=AKZ2fj8155I to better understand this.
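A minimal sketch of that batching (the data and selector are made up):

// Build all rows off-DOM, then insert once: one reflow instead of one per row.
const fragment = document.createDocumentFragment();
tickets.forEach((ticket) => {
  const row = document.createElement('tr');
  row.textContent = ticket.summary;
  fragment.appendChild(row);
});
document.querySelector('#ticket-table tbody').appendChild(fragment);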