I was curious if there was a way to detect the user pressing the "stop navigation" button in the browser using JavaScript (or, even better, jQuery). For example, if you click a link for a webpage that takes a while to load, you may want to show a spinning loader. But what if the user cancels navigation to the page? Is there any way to detect that, so you can get rid of the spinning loader you put up?
EDIT: I did a bit more research, and there seems to be an onStop event in JavaScript but, wouldn't you know it, it only works in Internet Explorer. If anyone has any other ideas for a cross-browser solution like onStop, that'd be wonderful; if not, I'll answer my own question in a few days to close this.
EDIT 2: https://stackoverflow.com/a/16216193 says it's not possible. As do a few other answers.
Alright so, as promised, I'm going to answer my own question.
I've thought about this quite a bit - and I've come up with a solution. I wasn't able to make it work in code (I didn't try too hard), but it should work in theory.
So I thought about the criteria for deciding when a webpage should conclude that stop was pressed. I came up with this:
If the script hasn't died after a reasonable amount of time, it can be assumed navigation has been canceled.
Then a jQuery event can be fired on the body or something like that. But what constitutes "a reasonable amount of time"? I figured it would be partially based on page render time (fetching images, etc.) to get an idea of how fast the user's internet is. That can be gotten by doing:
var start = new Date();
var time;
$(window).on("load", function () { // fires once all page resources have loaded
    time = new Date() - start; // rough page load time in milliseconds
    // ...
});
Multiply that by a coefficient (maybe 3 or something) to get an approximate transfer time. (This would have to be adjusted to account for how long the server takes to generate the next page, depending on how dynamic it is.) Then, using this newfound time*3, you'd write something like this:
$("a").click(function() { //Anything that could go to another page should filter through here
setInterval(function() {$(document).trigger("navstopped");},time*3);
}
$(document).on("navstopped") {
//Do stuff now that we assume navigation stopped.
}
Assume. That's really all we're doing here. We may have an inconsistent internet connection: fast one minute, slow the next. Server load could be inconsistent too. Maybe the server is serving up images like a ninja for this page, but it gets hit with a bunch of requests the next minute, making it generate/serve the next page a bit slower. So we're just assuming that something interrupted the navigation somehow, but we are not certain.
Now, of course, this could be used in conjunction with IE's onStop event, but this was really the only cross-browser solution I could think of. I wasn't able to get it to work, but maybe some jQuery god will be able to in the future.
Edit before post: Even before I posted this, I had another idea. More browsers support onAbort. If we have a picture that never loads and the user presses stop, will onAbort be fired? Even if another webpage is loading? It requires testing, but that may work too. I like my first idea better, though. Although unstable, it is more stable than this cockamamie idea, and I realize this could be bad practice.
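A minimal sketch of that second idea (the /slow-image URL is a hypothetical endpoint that deliberately never finishes loading); whether abort fires here across browsers is exactly what would need testing:

var probe = new Image();
probe.onabort = function () {
    $(document).trigger("navstopped"); // reuse the custom event from above
};
probe.src = "/slow-image"; // hypothetical: an image that never finishes loading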
First of all, apologies if this question was answered before.
I'm writing code in JS to read an Excel file, get the value of the first cell in a column, search for it (it's an ISBN code, which I'm searching with the Google Books API), get other relevant info made available through the search (like title, subtitle and author), then proceed to the next line and repeat the process.
My problem is writing the new data back into the Excel file. The code is writing all the info into the last used row in the file. While using window.alert to flag the code, I noticed that when the alert was in a for loop, right before the search was initiated, the new data was inserted just fine; but if I tried to use a pause (like a timer function or a while loop to consume time) it didn't help at all.
What I want to know is why that behavior might be happening and, if possible, of course, a possible solution for my problem, since having to use alert as a pause isn't exactly the most elegant solution.
Thanks in advance
Alert will always stop all execution of code, except for web workers. Therefore, if you need to continue execution, use a web worker. Have a look at this for reference (the note part covers this topic partially).
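A minimal sketch of that suggestion (worker.js is a hypothetical file containing the heavy processing); the worker thread keeps running even while the main thread is blocked by an alert, though its messages are delivered only once the main thread is free again:

var worker = new Worker("worker.js"); // hypothetical worker doing the lookups
worker.onmessage = function (e) {
    console.log("worker finished:", e.data);
};
worker.postMessage({ isbn: "9780131103627" }); // example ISBN to process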
When a browser shows a native modal interaction widget, such as an alert, it transitions into a state that waits for the response. In this state, it is allowed to redraw the page and process certain low-level events. Here's the code from Mozilla Firefox that alert() and confirm() use:
http://mxr.mozilla.org/mozilla-central/source/toolkit/components/prompts/src/nsPrompter.js#434
This openRemotePrompt function doesn't return until the user clicks "OK" on the alert. However, the browser behaves differently while the alert is open. A loop repeatedly calls thread.processNextEvent to do certain kinds of work until the dialog is closed. (It doesn't run the application's JavaScript code, since that's meant to be single-threaded.)
When you use a pure JavaScript busy wait, for example, by looping until a certain wall time, the browser doesn't take these measures to keep things moving. Most noticeably, the UI won't redraw while the JavaScript code is looping.
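For contrast, a pure JavaScript busy wait looks like this; while it runs, no events are processed and the page won't repaint:

function busyWait(ms) {
    var end = Date.now() + ms;
    while (Date.now() < end) {
        // spin: the main thread is blocked, so the UI freezes
    }
}
busyWait(3000); // freezes the page for roughly three seconds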
Here's the plot, which is a True Story (a problem that exists for a real person - that is, me):
You are working on a large enterprise site, which includes a lot of JavaScript and specifically jQuery code you don't have any control of, and can't possibly change (good luck even finding out who wrote it, or why). Layers of authentication and authority are involved, so just pretend it's written in stone and you can't touch it.
Somewhere in this code, there is an event that scrolls the document to the top of the page after it has loaded. "OK, that sounds harmless" one might think - but it is now your task to scroll the page to a specific item based on a query string or anchor.
Everything works fine generally, but when you click a link that goes to example.com/list#item11, the browser works as expected and you go directly down to the item you want to link to...and then, whammo, the page instantly jumps back to the top of the page.
Now, you might say "well, that's what document.ready() is for!" ...to your horror, you find that the rogue event comes along anyway.
After Stack Overflow searching for an even later event to tie into, you find this gem:
$(document).ready(function (e) {
    $(window).load(function (e) { });
});
And surely, this will definitely work! Only, it does not. You try return false and e.preventDefault(), but that does nothing for you here.
All you can be sure of is that this rogue scrolling event occurs after your code runs, after the DOM is ready, and definitely after the window.load() event. You are sure of nothing else.
Can you assassinate this rogue event? Is there some mechanism to intercept scroll events and prevent them from occurring? Can you hook into some even later event, like "the DOM is ready, the window is loaded, the page is settled, the children are in bed, and all other events are done being handled... event()"?
The only solutions I can imagine now are "give up - scrolling behavior on page load isn't going to work in your scenario", "use a timer and wait! then commit seppuku for being such a dirty hack!", and "ninja-assassination mission!" (since I don't know who wrote the offending code, I'd have to settle for killing their code instead of them - and I'm sure they had their reasons, or have already been assassinated... or at least I could wait for the code to pass by and do my thing).
Is there some Better Way, some hard to find function, some last resort that invokes the arcane Dark Lords of Reflection, or is it time to give up and solve the problem another way?
TL;DR:
How do you stop a disruptive scripted event - like scrolling - from occurring when you can't change the code that is causing it? Acceptable answers include how to make certain your code runs after it - without using a timer hack! - and/or, if your code always runs first, how to prevent the later code from messing up yours.
It might be helpful to find out how the event is defined, and what events are firing, but I feel that this is a separate question and may not necessarily be required to fix the situation. For illustration purposes, assume there are thousands of active events and listeners spread out across dozens of minified script files. It may just be so hard to narrow down what exactly is happening as to be too much trouble to even try.
The best solution would be to edit the source code where the ready event is declared.
If you can't, you can copy this code somewhere else and edit it.
If that's totally not possible, then:
You cannot unbind the ready event, because that can cause problems.
You can override the window.scrollTo() function by keeping a reference to the original and wrapping it:
window.scrollTo2 = window.scrollTo; // keep a reference to the original
window.scrollTo = function (x, y) {
    // Look in the URL for a hash:
    if (window.location.hash) {
        return false; // if yes, swallow the rogue scroll
    }
    // if not, fall through to the original
    window.scrollTo2(x, y);
};
Smack them with a timer if everything else fails:
$(document).ready(function() {
window.setTimeout(function() {
document.location = "#bottom";
}, 200);
});
Live test case.
Ugly, but working.
I would hook into the window.scrollTo function to try and catch the burglar in the act. If you know how it's done, it's easier to get rid of it.
If this rogue call is not embedded in too huge a pile of jQuery goo, it could even allow you to trace the call back to the original culprit, who would soon be smitten with great vengeance and furious anger.
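A minimal sketch of that hook: wrap window.scrollTo so every call logs a stack trace, which should point at whichever script is firing the rogue scroll:

var realScrollTo = window.scrollTo;
window.scrollTo = function () {
    console.trace("scrollTo called with", arguments); // logs the caller's stack
    return realScrollTo.apply(window, arguments); // then let the scroll proceed
};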
I'm looking to implement a warning if the user attempts to leave the order process before it's completed, in any fashion other than, of course, following the payment button.
Something like this:
<script type="text/javascript">
window.onbeforeunload = function(){
return 'You must click "Buy Now" to make payment and finish your order. If you leave now your order will be canceled.';
};
if document.getElementsByClassName('eStore_buy_now_button').onclick = function(){
};
</script>
I'm sure that's detrimentally wrong in a few ways, but it's the best way I can illustrate what I'm trying to do. And I understand some browsers will display default text instead of the actual warning I've written; that's fine.
A few notes: I'd rather use plain old JS instead of loading up jQuery for just this one simple task. There are no settings on the page, so it's a simple leave-page or click-"Buy Now" operation.
UPDATE:
I assure you it's not for my sake, it's for the user's sake. Although it's explicitly explained what to do, I think users are jumping the gun and leaving before the process is truly finished, out of an instant-gratification, ignore-the-messages kind of mentality. It's a simple 2-step process: they submit the details for the project and then make payment. For whatever reason they're submitting details and then not following through with payment about 50% of the time. And then they'll follow up with "So, are you working on the project or what?" and I have to explain "You never finished your order." They follow up with a "Whoops, here ya go."
Unfortunately, I would chalk this up as marketing and web design 101. Rule #1: people are dumb. Not to be taken in a rude or pessimistic sense; the idea is to assume everyone is dumb in your design and instructions, so that you make something so easy a five-year-old can do it. I totally agree with not holding users hostage. But this page is ONLY reached in the middle of an order process that THEY initiate (this page will never be reached in a browsing sort of way). So I think it's a pretty legitimate use case where you're saving users from a common mistake. It's a demographic of customers that are not tech-savvy, so they honestly need such guidance.
document.querySelector('.eStore_buy_now_button').addEventListener("click", function () {
    window.btn_clicked = true; // the user followed the intended path
});

window.onbeforeunload = function () {
    if (!window.btn_clicked) {
        return 'You must click "Buy Now" to make payment and finish your order. If you leave now your order will be canceled.';
    }
};
This will warn the user whenever the page is about to unload (e.g. when leaving the page) until btn_clicked is set to true.
DEMO: http://jsfiddle.net/DerekL/GSWbB/show/
Don't do it.
There is a fine line in terms of usability: on one hand, sometimes I may have intended to place an order but accidentally left the page; on the other hand, it could get annoying pretty quickly. When a browser is set up to save the previous session (i.e. reopen tabs on next launch) and one page behaves this way, you'll end up with only that tab re-opened next time (confirmed on Mac Safari), discarding the rest of the tabs. They'll not be buying from you again!
I'd suggest you make it clear to the user, by means of inline messages, that the order has not been submitted yet and they still need to confirm their action; but if they were to accidentally navigate away, you should make it easy to pick up where they left off. It would be fairly trivial to store such info in a cookie so that on subsequent page visits the user would be prompted with "you have an incomplete order for ..., would you like to finish it now?"
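A minimal sketch of the cookie idea (orderId is a hypothetical variable identifying the draft order):

// When the user reaches the payment step, remember the draft order:
var orderId = "12345"; // hypothetical: however you identify the order
document.cookie = "pendingOrder=" + encodeURIComponent(orderId) + "; max-age=86400; path=/";

// On a later visit, offer to resume:
if (document.cookie.indexOf("pendingOrder=") !== -1) {
    // show "you have an incomplete order ..., would you like to finish it now?"
}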
As an alternative, you could rely on an inactivity alert (think of online banking prompting you when your session is about to expire) to bring the user back to the "complete order" page if they get distracted.
If you are certain you want to rely on this event, the answers to this question may provide better insight. Basically, the functionality, or its implementation beyond a basic text warning, should not be relied on, because of inconsistent implementation across browsers as well as the possibility of having it blocked by the user.
Another update:
Prompted by Derek's comment about this approach being used by Gmail etc., I've come across an article suggesting you stick with onunload instead and rely on AJAX calls to save the state of the page - which backs my thoughts on allowing the user to pick up where they left off even if the JavaScript event is never triggered.
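A minimal sketch of that save-on-unload idea, assuming a hypothetical /save-order endpoint and payload:

window.addEventListener("unload", function () {
    var draft = JSON.stringify({ step: "payment" }); // hypothetical payload describing the unfinished order
    navigator.sendBeacon("/save-order", draft); // queues the request even as the page is torn down
});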
I implemented infinite scroll like so:
var new_page_value = 1;
$(window).scroll(function () {
    if ($(window).scrollTop() >= $(document).height() - $(window).height() - 200) {
        new_page_value += 1;
        get_page(new_page_value);
    }
});
When the user almost reaches the bottom of the page (200px left) the function get_page() is called. This contains an ajax call that gets all the contents of the new page and appends it to the <body> of the document.
Now I just realized that if my site gets big, and instead of having 10 small pages I have a gazillion giant pages, then the user's browser might crash if they are persistent enough to keep infinite scrolling for a long time.
Would this be a possible solution to this problem:
I will keep appending the new pages to the document <body> until the 10th page; after that, I will be replacing the <body> content entirely instead of appending. So, using html() rather than append().
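Roughly, in code, the idea might look like this minimal sketch (assuming get_page's AJAX callback hands the new page's markup to a helper like the hypothetical insertPage below):

var MAX_APPENDED_PAGES = 10; // the threshold from the idea above
function insertPage(html) {
    if (new_page_value <= MAX_APPENDED_PAGES) {
        $("body").append(html); // keep growing the document
    } else {
        $("body").html(html); // replace everything, discarding the older pages' nodes
    }
}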
I just don't know if this will actually work to prevent crashes. Will .html() clear the "memory" of prior html that was brought in via ajax?
I really think this is a common issue for many sites with AJAX list content. So let's look at some of the most popular websites (think of scale = experience) and their solutions:
Google Images
If you check out images.google.com and you search for, e.g., "guiness", you will see a page full of results (actually the images are AJAX-loaded, not the HTML code, so the page has a fixed height), and when you scroll to the bottom there is a button "Show more results". This might be one solution to your problem, but is it really necessary to place a button at the bottom after, e.g., the 10th page? I really think it is generally a good solution for page usability and for avoiding memory issues, but it is not strictly necessary, as we can see with:
Facebook
Facebook's News Feed is another story. There is a button "Show more posts", but I really don't know exactly when it is displayed rather than just loading the next page of posts on scroll. It happened to me once to load 10-15 pages of posts only by scrolling. And you know Facebook posts include videos, photos, AJAX comments and a lot more JavaScript fancy stuff, which take a lot of memory. I think they've arrived at this after a lot of research into how far users scroll toward the bottom.
Youtube
YouTube has a "Load more videos" button on every page, so the solution is basically similar to Google's, except that Google renders the whole HTML of the page and on scrolling just loads the images.
Twitter
Twitter supports infinite scrolling. Yep, they do, maybe because a tweet is 140 characters and they don't need to worry about memory so much. After all, who is willing to read more than 1000 pages of tweets in one page load? So they don't have a button for "load more" and they don't need one.
So there are two solutions:
Use infinite scrolling (you should consider how much content you load and how rich it is)
Use a button: "Load More"
Most of all, you should not delete already-loaded content from a list.
Nowadays everything is JavaScript, and JavaScript has garbage collection, so it is very hard to truly unload the DOM (if it has JavaScript attached, not plain text) and get the garbage collector to reclaim it. Which means that you won't free the whole allocated memory of the unloaded content from the browser.
Also think about your requests: why would you need to load again something that you have already loaded in the first place? It costs another server request, meaning another database request and so on.
I have worked with this before and here are some of my thoughts:
a) If you are appending data to the DOM a page at a time, then it is not an issue. Some browsers might not respond well, but most of the latest browsers will render without any problem as long as there is enough memory on the target machine; you can probably watch the RAM usage increase as you append pages. Use Chrome for this, as each tab is a separate process and it has a built-in task manager.
b) Regarding the usage of html(): it does remove the markup, but it does so at a heavy cost, as it tries to take care of special conditions and accesses all the controls nested within the container that you are replacing (not sure about the last part). A simpler way to clear the DOM is to use the innerHTML property and set it to empty; jQuery does this too, but at a later point in the html() implementation. Open up the source and look at the method.
Using innerHTML:
$(selector)[0].innerHTML = ""; // selector matches the container you want to empty
Also, deletion of pages sounds weird to me as a user: what if I want to go back to the initial comments? And please don't think about making it an infinite scroller in both directions. I have tried, and gave up after the number of bugs raised; we had a genuine use case for it, and I had to stick a button up there. But this wasn't when the user scrolled away from the first page; this was when the user landed on a 3rd page but then needed to see the results above it.
Hope that answers your question. And by the way, infinite scrolling is your friend; use it, and don't over-engineer a case which will probably only be tested by your QA team. It's better to spend your effort somewhere else.
Yes, it will. If I may suggest an idea: after, let's say, 5 pages, just delete the first page and append the new one, instead of deleting all of the previous pages. Good luck :)
Is there a better way to collect data on how many visitors don't have JS enabled? My idea was to add a hidden value at page load with JS; if it works, then JS was successful. But it looks like there's no way to actually read it back to know if it was successful unless I do some kind of page reload, and then it gets complicated (I have to delay whatever operations were about to happen, etc., so as I said, it gets complicated). Any tips or ideas on this? I'm sure there's a better-practice way than mine.
I should add, if there's already a ready-made solution for this, please let me know, I'm not really interested in reinventing the wheel :)
A good way to do this is to use <noscript><img src="track.php" width="1" height="1" /></noscript>; that will make browsers without JavaScript pull a tracking image, and the server can then get the user agent and IP from that image request.
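And to count both populations, a minimal sketch (the js=1 parameter on the same track.php is my assumption): browsers running JavaScript fire the request below, while browsers without it load the <noscript> image instead:

// Fired only when JavaScript runs; compare these hits against the
// <noscript> image hits to estimate the share of no-JS visitors.
(new Image()).src = "track.php?js=1";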
You can't know in advance which technologies the user has enabled client-side, so the only way to know for sure is after the first load. Even then, they might disable JS after the first page load and you're left running a different scenario.
In fact, try it here on SO: load a page with JS enabled, then disable JS and reload. You'll see a big red banner at the top telling you this page works better with JS enabled.
Bottom line: you should never rely on the client's technology, unless you really want to limit the people reaching your site. If you want to reach the largest number of people, you should code as if they had every technology, and none, at the same time.