I have a web app written in TypeScript and Vue.js that executes a collection of tasks (AJAX requests). To track the whole process (and to execute one task after another) I use a Vue instance as an event bus to notify changes between components.
If the user opens a new browser tab, the process stops. If the user comes back, the process resumes.
The issue is present in Firefox and in Chrome.
I put a simple window.setInterval in my code to log 'Hello' every 2 seconds and... surprise: I get a 'Hello' every 2 seconds without any temporal 'hole'.
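Concretely, the test was nothing fancier than:

window.setInterval(function () { console.log('Hello'); }, 2000);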
I found a very old GitHub issue for a similar situation: https://github.com/vuejs/Discussion/issues/76 but it seems too old to be relevant here.
I expect the process not to stop but to continue without interruption.
https://developers.google.com/web/updates/2017/03/background_tabs
Background tabs can have a dramatic negative effect on browser performance, especially on battery life. To mitigate this, Chrome has been placing various restrictions on background tabs for the last several years. Recently there’s been a number of efforts to make further improvements, and this document gives an overview of the Chrome policy. This document focuses on describing current policies in Chrome 57. Long-term strategy and further plans can be found in this document.
https://www.chromestatus.com/feature/6172836527865856
As an intervention we want to limit how much CPU a background page is allowed to use and to throttle timer queues when this limit is violated. Current target is that background page CPU load level should be under 1%.
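If the task chain really must keep ticking in a background tab, one common workaround (a minimal sketch, assuming the ticks can be driven from a message; bus and 'next-task' are placeholder names) is to move the timer into a dedicated Web Worker, since this throttling targets the page's own timers and worker timers have generally been left alone:

// Run the interval in a worker so the background tab's timer throttling doesn't apply to it.
const workerSrc = "setInterval(function () { postMessage('tick'); }, 2000);";
const blob = new Blob([workerSrc], { type: 'application/javascript' });
const worker = new Worker(URL.createObjectURL(blob));
worker.onmessage = function () { bus.$emit('next-task'); }; // placeholder bus/event names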
Related
On and off for the past year I've been trying to make PWAs that work reliably offline, but every time I write a new service worker, it works as expected for a week or two and then just breaks (until reconnecting). I thought it was due to the site's data getting evicted, but the local storage is often intact when I reconnect to the internet so the site can load. Recently I also had one of the service workers remain active while its cache storage was deleted, as were the other service workers (I've got multiple different sites on the same origin, some of which have service workers; it's my GitHub Pages).
According to the spec, it sounds like service workers should always remain registered unless the data for the whole origin is evicted. I also don't think my service workers are accidentally deleting their caches or unregistering themselves, as the issue only happens after not using them for a while, in which case they aren't running. Clearing Chrome for Android's cache also doesn't break the PWAs when offline, so I don't think I'm manually doing anything that's breaking them. Clearing an individual PWA's storage and cache also doesn't break it.
The relevant sentence in the spec:
"A user agent must persistently keep a list of registered service worker registrations unless otherwise they are explicitly unregistered."
(The unregistered service workers also don't show up in chrome://serviceworker-internals/.)
Any ideas? Do you think this is a bug? I've mostly seen this in Chrome for Android, but I think I'll try some other browsers as well to check. Unfortunately I can't test any of this very well as it's quite unpredictable and takes weeks.
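For what it's worth, next time it breaks you could dump what Chrome still thinks is registered and whether the origin's storage is marked persistent (just a debugging sketch using standard APIs):

navigator.serviceWorker.getRegistrations()
  .then(regs => console.log('registrations:', regs.map(r => r.scope)));
navigator.storage.persisted()
  .then(p => console.log('persistent storage granted:', p));
// navigator.storage.persist() requests that the origin's data not be evicted.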
These are the 2 main sites to try, as I use the same template for a few, although I don't think it'll be that helpful:
https://hedgehog125.github.io/Bagel-PWA/
https://hedgehog125.github.io/Bagel-V2/ see https://github.com/hedgehog125/SvelteKit-Plugin-Versioned-Worker/blob/main/src/worker.js for the proper service worker template
Thanks for any input
Edit: I thought it would be worth a try to see if that first site still works in Firefox while offline. It does, despite not visiting it in Firefox for maybe 6 months. I guess this is a Chrome bug/feature then?
I'm having issues with my Socket.IO project.
It's a complex back-office built with create-react-app. There are a lot of different WebSocket handlers all over the project. It's nicely architected, so there is little messy code. Nevertheless, on one of my pages (the main one), the server sends live metrics every 5s through WebSockets to any client connected to that main page.
If I take a look at my client app, everything works fine:
The live metrics update happens every 5s as expected.
If I stay on that page and reload it, the messages keep coming every 5s:
As seen in the WS Network panel of the Chromium DevTools.
But the JavaScript event listeners are not triggered. I've checked that they were set up properly after the refresh, and they are. I'm not sure where to look next to debug this.
I thought that if the network is receiving the WebSocket messages and the listeners are set up, there is no reason for the callback functions not to be called.
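One thing worth trying (a debugging sketch; assumes Socket.IO v3+ on the client, and 'liveMetrics' is a made-up event name) is logging every event that reaches the shared socket instance, to see whether the frames in the Network panel ever make it to the JS layer or whether the listeners are attached to a different, stale socket:

import { io } from 'socket.io-client';

const socket = io(); // or however your shared instance is created
socket.onAny((event, ...args) => console.log('socket received:', event, args));
socket.on('liveMetrics', data => console.log('handler fired:', data)); // made-up event name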
Thanks in advance,
I was wondering if it is possible to throttle a user's internet speed when they access the website, based on a choice the user makes. This is needed for a small-scale test of how users react to different internet speeds. My workaround would be to get the user to throttle the speed manually in Chrome DevTools, but I would prefer to keep that as a last resort. Any option to achieve this or something similar would be amazing. Thank you.
Edit: Just to clarify, I am looking to code the throttling functionality into the website itself so the user won't have to install anything or set up Chrome DevTools manually; I am aware of those solutions already.
What you want to do is not easily possible, for security reasons. Chrome, like most other browsers, prevents DevTools access from JS scripts. A user has to manually and interactively press the buttons in DevTools to change the network speed of the Chrome tab.
In your case, you should get the UX testers to use DevTools.
That being said, there are solutions for this. But they might be complex!
Solutions in JS:
Dirty fix:
Create a looping data-downloader script that effectively performs a DoS attack on the client.
Basically something like (the URL here is a placeholder):

const delay = 100 * (Math.random() + 0.5); // 50–150 ms between requests
setInterval(() => fetch('/some-large-file', { cache: 'no-store' }), delay); // placeholder URL
Issues with this fix:
This creates real network congestion on the client, which might not be optimal.
Introduces web page lag because of CPU usage.
Better, but more time-consuming fix:
You can simulate a slow network environment by doing the following (a minimal fetch-wrapper sketch follows this list):
Periodically call request.abort() on some AJAX/XHR requests. See here and here. And yes, you have to keep references to the pending requests. (Some inspirational code by bruth)
Randomly prevent some images from loading by changing their src attribute for a few seconds. See here.
And... there is more to it.
Iframes are tricky, as they can be from another domain, and Chrome does not support cross-domain requests. To simulate a slow network you have to stop the iframe once in a while and refresh it via its src attribute, just like the images. You could use window.frames[i].stop() to simulate a frozen/stopped iframe.
Videos are sometimes loaded in iframes, which again makes network lag hard to simulate. Unlike images, reloading a video will reset the playback time. AFAIK there is no way to simulate video lag easily (without heavily changing the video playback logic).
And... if you are really into it, go ahead and override different events, such as those from GlobalEventHandlers.
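Here is the promised sketch: assuming the app funnels its requests through fetch, you can wrap it and hold every response back for a random delay to mimic a slow connection (the delay range is arbitrary):

const realFetch = window.fetch.bind(window);
window.fetch = async (...args) => {
  const response = await realFetch(...args);
  // Hold each response back for 500–1500 ms to fake latency.
  await new Promise(resolve => setTimeout(resolve, 500 + Math.random() * 1000));
  return response;
};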
Solutions aside from JS:
Use Chrome DevTools (easiest as mentioned)
If the site is connected to a server you own, add a delay before responding to simulate server congestion.
Use/create a Chrome Extension that changes the network speed
Create your own browser that can run the site, and change the network speed accordingly
Install software to control network settings at the OS level
Change the network speed on the router
I'm not entirely sure what experience you're after beyond the Chrome DevTools approach, but here is an alternative:
clumsy makes your network condition on Windows significantly worse, but in a managed and interactive manner
https://jagt.github.io/clumsy/
https://serverfault.com/a/570702
I developed a web app to display a slideshow and want to display it on my secondary monitor (connected via HDMI) with IE's kiosk mode on Windows 10. Because of CPU and other resource constraints on the shared server, I want to pause the slideshow when the monitor is powered off (and therefore nobody is seeing it).
Is there a way to detect connected displays from Internet Explorer? Since this is a one-PC kiosk setup, add-ons, etc. are acceptable. Triggering JavaScript/jQuery events would be ideal. Thank you!
No, there is no reliable way to detect if a second monitor is physically switched off but still connected via the cable.
I have to ask though: why do you need to physically switch the second monitor off?
As an alternative, could you not:
Have the slideshow stop after a timed duration unless it receives an input?
Have the slideshow only on display at certain times of the day?
Accept events from, say, a Node server to control when to show the slideshow and when not to?
Having said that, these threads could provide you, albeit apparently unreliably, with what you need:
Is there any way to detect the monitor state in Windows (on or off)?
Monitoring a displays state in python?
You can't do it in JavaScript. Why not try an ASP component?
http://msdn.microsoft.com/en-us/library/windows/desktop/dd162617%28v=vs.85%29.aspx
You could potentially write a command-line program that listens on a particular port, continuously checks the monitor state locally, and then use HTML5 WebSockets in IE to communicate with it.
e.g. C#'s SystemEvents.PowerModeChanged event:
// Fires on suspend/resume; note it won't fire when only the monitor is switched off.
SystemEvents.PowerModeChanged += new PowerModeChangedEventHandler(
    SystemEvents_PowerModeChanged
);
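On the page side, the sketch below assumes your helper program broadcasts plain text messages on a local port; the port, the messages, and the function names are all made up:

var ws = new WebSocket('ws://localhost:8081'); // port chosen by your helper program
ws.onmessage = function (e) {
  if (e.data === 'monitor-off') pauseSlideshow();  // your app's own pause function
  if (e.data === 'monitor-on') resumeSlideshow();  // ...and resume function
};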
I don't think so....
CPU cycles are paused when the client computer is put into sleep mode (Win+L).
Start > Control Panel > Power configures how the monitor(s) behave when the client is powered down or put into sleep mode.
The screen object in JS returns the metric values (height/width) of the screen but not its power state.
The impact of the wasted CPU cycles on a powered-down secondary monitor should be unnoticeable...
Probably you have not selected the option to "Use software rendering instead of GPU rendering" on the Advanced tab of Internet Options...
You will notice that the CPU on your desktop will throttle up and the cooling fan will race when running graphics-intensive web pages or canvas scripts if you haven't set the above setting.
Here's the situation:
I have a web-based ticket application, multiple users.
One problem that might occur (and does happen in the old version I'm replacing) is that user1 opens a ticket, edits it, and saves it. But while he was editing it, user2 also opened and saved the ticket. The changes user2 made will be lost/overwritten by user1.
To prevent this I implemented a locking mechanism, which is fairly simple:
1. On opening a ticket, the PHP script checks for existing locks.
2. If it doesn't find any, it locks & opens the document.
3. In JS, setTimeout() and an XMLHttpRequest call unlock the ticket after 10 minutes (works without problems).
4. I also set an unload event to unlock the ticket when closing/moving away from the window/tab.
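Step 3, for reference, is nothing more than this (a sketch; the endpoint and ticketId are placeholders):

setTimeout(function () {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/unlock.php?ticket=' + ticketId); // placeholder endpoint
  xhr.send();
}, 10 * 60 * 1000); // 10 minutes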
The problem sits in step 4: the unload event (and its friend beforeunload) just doesn't work well enough to implement this reliably (and for this feature to have any serious meaning, it needs to be reliable). Many browsers don't always fire it when I would like them to (e.g. when pressing the back button, hitting F5, or closing the tab; this varies per browser).
The only alternative I can come up with is using setTimeout() and XMLHttpRequest() calls to a PHP script to tell it the page is still open. If this "heartbeat" monitor fails, we assume the user moved away from the ticket and unlock the document.
This seems horribly inefficient to me and quickly leads to many requests to the server, even with only a few users.
Anyone got a better idea on how to handle this?
It needs to work in IE8+ and other modern browsers (ideally Firefox, WebKit, Opera). I don't care about IE6/IE7; our organization doesn't use those.
Using heartbeat pings via XHR is the way to go. Depending on the use case, you might also want to send them after the user stops typing in a field instead of every x seconds; this ensures that a page kept open but inactive does not keep the ticket locked.
If you send those XHRs after the user stops typing, use one of the keydown/keyup/keypress events with a debounce/throttle script so the request is sent only when the user has stopped typing for e.g. 5 seconds, plus one every x seconds (in case the user is likely to keep typing for a long time).
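A minimal sketch of the debounce part (the idle delay, endpoint, and ticketId are placeholders; IE8 would need attachEvent instead of addEventListener):

var heartbeatTimer;
document.addEventListener('keyup', function () {
  clearTimeout(heartbeatTimer);
  heartbeatTimer = setTimeout(function () {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/keepalive.php?ticket=' + ticketId); // placeholder endpoint
    xhr.send();
  }, 5000); // fires once the user has been idle for 5 s
});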
Maybe it's not the best solution, but it's worth looking into: WebSockets.
You could establish a connection with the server at page load, and when the connection fails (i.e. the client does not respond to a ping), you unlock the ticket.
Using something like socket.io ensures that this procedure will work even on IE8.
The main advantage is that you do not send a request every n seconds; instead the server sends you a ping every n seconds, and you don't have to care about unload/beforeunload events. If the client doesn't respond to the ping, unlock the ticket.
The main disadvantage is that you need a server to handle all the WebSocket connections. That can be done in almost any server-side language, but it can be a bit harder than a simple web service (as in the XHR-polling case).
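A server-side sketch of the idea in Node.js with socket.io (lockTicket/unlockTicket and the query parameter are placeholders); socket.io's built-in heartbeat fires 'disconnect' when the client stops answering pings:

const { Server } = require('socket.io');
const io = new Server(3000);

io.on('connection', socket => {
  const ticketId = socket.handshake.query.ticketId;      // sent by the client on connect
  lockTicket(ticketId);                                  // placeholder helper
  socket.on('disconnect', () => unlockTicket(ticketId)); // ping failed or tab closed
});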
Implementing AJAX heartbeats or unload handlers to unlock the document automatically is tricky.
Your problem is that even if you had beforeunload support in all the browsers you target, it still might not be called if the browser crashes or the user falls asleep.
Look at how WebDAV works: you explicitly acquire a lock before you start editing, then you save and release the lock explicitly.
Other users can see who has acquired a lock, and admins can release locks that have been left behind by accident.
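Sketched on the client with fetch (the endpoints are placeholders), the flow would look like:

async function editTicket(id, makeChanges) {
  await fetch('/tickets/' + id + '/lock', { method: 'POST' }); // acquire; server rejects if already locked
  try {
    await makeChanges();                                       // user edits the ticket
    await fetch('/tickets/' + id, { method: 'PUT' });          // save (body omitted in this sketch)
  } finally {
    await fetch('/tickets/' + id + '/lock', { method: 'DELETE' }); // explicit release
  }
}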