I have a js widget which is embedded in random sites.
When a user clicks a link I want to report it to my server.
To call my server I dynamically create a tag whose src contains the params to report to the server.
If I do it in the onclick event and return true from the handler, the server probably will not get the call because the browser navigates away from the page (is this correct?).
If I do it in onclick and return false, then redirect once the call to the server has returned, I lose the back button functionality, because browsers like IE do not support the back button after a redirect.
Any idea of how to do this in a robust way?
There is no problem. If you add an onclick handler to the <a> elements that sends an AJAX POST to the server (registering the click), this will arrive at the server, the page will still move on to the clicked link, and the back button will work as expected.
Example in jQuery:
$(document).on('click', 'a', function () {
    // Anchors have no src attribute; report the href instead.
    $.post('/registerclick/', { data: $(this).attr('href') }, function () {});
});
The reason this works is that once the browser has dispatched a request, navigating away does not recall it; in practice the server still receives it. Therefore the following applies:
1. The event handler fires, and a request to the server is sent
2. The user gets redirected by default browser behavior to the page in the href attribute
3. The server receives the request, even though the user is on a whole different page
The browser isn't aware of the 3rd step; but that doesn't matter for the server.
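For completeness, here is a minimal sketch of the same idea without jQuery, using the dynamically created tag the question describes (an image beacon). The /registerclick/ endpoint is reused from the example above, and the link query parameter is an assumption:

function reportClick(href) {
    // Image beacon: creating the element is enough to dispatch the GET.
    var img = new Image();
    img.src = '/registerclick/?link=' + encodeURIComponent(href);
}

document.addEventListener('click', function (event) {
    var link = event.target.closest('a');
    if (link) reportClick(link.href);
    // No preventDefault() and no return false: navigation proceeds as
    // normal, and the request has already left the browser.
});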
I have a web page that shows remote asset data (for example weather station data) and that does background XMLHttpRequest()'s every 5 seconds to our server and reloads the page if new data from the remote asset has been received. This has all been working fine for years.
The page also has numerous links and submit buttons that can be used to go to other pages or submit commands to the server (which then sends a command to the asset). The issue I'm having is that some of the commands the server executes involve calls to 3rd-party web services, some of which can occasionally take up to 30 seconds to return or time out. In the meantime, if new data comes in from the asset, the background JS function reloads the page, thereby interrupting and cancelling the new HTTP request that the user initiated.
I could probably work around this by adding onclick or onsubmit tags to every link and submit button to call a function to disable the timer, but as there can be dozens of links on the page I am hoping there might be a simpler, more elegant way where one simple function can tell me if the user clicked on something and thereby initiated a new active http session.
I start my script by doing a setTimeout('myCheckServerFunction("'+url+'")',5000); from the HTML head. If the server then reports that there is new data, it does a setTimeout(function(){location.reload();},5000);
So I'd like to disable the JS timer and prevent any reload if the user has clicked any link or button, i.e. if a new HTTP request is active. Does a function like this exist, e.g. something like window.isNewHttpRequestActive()? Or maybe there's a way I can check whether window.location changed? (Not sure if that would get updated before the new HTTP request completes.)
Otherwise I could maybe attach a addEventListener() to every link and submit button on the page but I'm a PHP dev not JS so if anyone could recommend the best way to parse the DOM and attach a listener to every link and submit button that would work too.
I did try looking for events that "bubble" up to higher layers, e.g. the body element; that will catch link clicks, but it also catches any click, even one on a blank area. So I'm not sure how well that would work, as I'd still need to filter the event to determine whether it actually came from a link or button. Thank you.
Listening to all click events on body isn't necessarily a bad idea.
EDIT: As gre_gor pointed out in a comment, it might be: the perceived target of the click is not always the link or button if other elements are inside them.
So my original method, which used event.target.tagName, is to be avoided.
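If you would rather keep a single delegated listener (as you tried on body), event.target.closest() avoids that pitfall, since it walks up from the clicked element to the nearest matching ancestor. A sketch, with a selector list you would adapt to your page:

document.addEventListener("click", (event) => {
    // Matches even when the click landed on a child of the link/button.
    let trigger = event.target.closest("a, button, input[type='submit']");
    if (trigger && relocationTimeout !== undefined) {
        clearTimeout(relocationTimeout);
        relocationTimeout = undefined;
    }
});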
The following code would add an event listener for click on every a element of the document, and let you cancel the timer if it is set :
for (let element of document.getElementsByTagName("a")) {
    element.addEventListener("click", (event) => {
        if (relocationTimeout !== undefined) {
            clearTimeout(relocationTimeout);
            relocationTimeout = undefined;
        }
    });
}
Up to you to adapt the selector in the loop to fit your needs.
Of course, don't forget to store the timeout reference in a variable when you set it:
let relocationTimeout = setTimeout(function () { location.reload(); }, 5000);
This may seem like a simple question, but it doesn't seem to be answered anywhere that I can find.
I am writing an onClick event handler that simply calls dataLayer.push() when an anchor is clicked.
Is dataLayer.push() a synchronous operation?
Will the GET request to google definitely be sent, even though the browser has unloaded the page it was requested from due to the link being followed?
Some browsers show the connection as cancelled, some show it as successful.
My question is: if the computer is slow, is it possible for the page to be unloaded before the request is sent?
This is why I assume that Google started using the eventCallback property to redirect the user after the link has been followed.
e.g.
https://developers.google.com/tag-manager/enhanced-ecommerce#product-clicks
This source code does not include the click handler, but it implies that the onClick event should stop propagation and let the eventCallback function set document.location.
However, as soon as you cancel the event, all of its information is gone.
This (in my opinion) is just the wrong way to do it.
e.g.
(CTRL or COMMAND) + Click opens a new tab in browsers. This will not work unless the onClick event handler allows propagation to continue.
Relying on eventCallback also means that if the Google scripts didn't load, for one of the many reasons they might not (however unlikely), your links don't work. And your site is broken.
So the correct way to do it is for the onClick event handler to allow the event to propagate and return true.
Which also means that dataLayer.push() would need to return only after the GET request was sent for any of this to work properly.
Code example:
NOTE: You will get mixed results in mixed environments.
<a href="...">Link</a>

$(document).on('click', 'a', function(event) {
    // Is dataLayer.push() guaranteed to fire a GET?
    // (data is set externally)
    dataLayer.push(data);
    return true;
});
Is there anyone out there who can guarantee that the GET request will be fired to the Google server?
Have the Google developers forgotten something here?
EDIT: Updated title to be more relevant to the question.
dataLayer.push does not send anything to Google. It pushes objects with key/value pairs onto the dataLayer array. This might contain an event which in turn fires a tag. Whether the tag is sent depends on the setup of the tag, not on the dataLayer.push.
As a consequence, when you write your own click handlers, you yourself are responsible for making sure your tags are actually fired.
If you use the built-in click handler you can configure a delay to make sure your tag has time to fire before the link redirects:
Since link clicks usually cause the browser to load a new page and interrupt any pending HTTP request, you have the option to add a small delay to allow tags fired by Tag Manager to execute properly before redirecting to the next page. Checking the “Wait For Tags” option will delay opening of links until all tags have fired or the specified timeout has elapsed, whichever comes first.
You should be able to mix both methods (push data on the click, but still use the "native" link click handler for the event).
You can also try to specify "beacon" as the transport method in your Google Analytics tags; on browsers that support this (which I think is only Chrome at the moment), GA will then use the navigator.sendBeacon interface, which sends the data even if the page unloads.
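Outside of GA's transport option, the same interface can be called directly. A rough sketch, where the /collect endpoint and the payload shape are assumptions (this is not the GA measurement protocol):

document.addEventListener('click', function (event) {
    var link = event.target.closest('a');
    if (link && navigator.sendBeacon) {
        // The browser queues the beacon and delivers it even if the page
        // unloads immediately afterwards.
        navigator.sendBeacon('/collect', JSON.stringify({ href: link.href }));
    }
});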
You might think that Google's solution is not very elegant (but the simple delay has the advantage that it works for all tags, not just for GA), but they have not "forgotten" the problem.
Also, solutions that combine GA hit callbacks with timeouts that redirect if the callback fails, as proposed e.g. by Simo Ahava, should be doable with GTM, even if they are probably more cumbersome to implement than in GA.
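A rough sketch of that callback-plus-timeout pattern (the event name and the 500 ms budget are assumptions); note that, as the question points out, preventDefault() still costs you CTRL/COMMAND+click behavior:

$(document).on('click', 'a', function (event) {
    event.preventDefault();
    var href = this.href;
    var done = false;
    function go() {
        if (!done) { done = true; document.location = href; }
    }
    // Navigate once the tag has fired...
    dataLayer.push({ event: 'linkClick', eventCallback: go });
    // ...or after 500 ms, so the link still works if GTM never calls back.
    setTimeout(go, 500);
});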
What is the best method to handle an AJAX GET if JavaScript is not enabled in the browser?
For example, a simple click on an image thumbnail directs you to a controller that returns image details and a zoomed photo.
I wanted to do this with ajax and show the bigger picture with details on the same page.
But I don't know how to implement a fallback method for when the user doesn't have JavaScript.
Can I delegate the click method on the anchor container and then somehow stop bubbling? And then, when there is no JavaScript, does the anchor get triggered? The details won't be on the same page, but at least they will show.
Can I delegate the click method on the anchor container and then somehow stop bubbling?
Basically, yes. Have the link around the image (or whatever):
<a class="image-link" href="link/to/the/details">...</a>
And handle it in JavaScript:
$(document.body).on("click", ".image-link", function(e) {
    // We're handling it with ajax, don't do the default action
    // (which is following the link)
    e.preventDefault();
    // ...do the ajax stuff...
});
If JavaScript isn't enabled, the handler isn't hooked up, and the link gets followed normally. If JavaScript is enabled, we prevent the default action, which prevents the link being followed, and do the ajax instead.
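One hypothetical way to fill in the AJAX part, assuming the page has a #details container and that the details URL can return an HTML fragment:

$(document.body).on("click", ".image-link", function(e) {
    e.preventDefault();
    // Load the details fragment into #details on the current page.
    // (#details and the fragment-style response are assumptions.)
    $("#details").load(this.href);
});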
I'm writing a small website which has several pages that are very similar. Most of the time, only the content of one div is different; the navigation, header etc. stay the same.
So I implemented this with a "base" HTML file, some smaller HTML files with only a content div, and JavaScript code like this (which is triggered by a button click event):
$.get("content/text1.html", function(data) {
$("#content").html(data);
});
This works very smoothly, but the problem is that the URL in the address bar doesn't change with these kinds of requests. So it is not possible for the user to link to certain pages. I know it is possible with #-URLs, but I want to have URLs like:
example.com/talks/foo/bar
And not some workaround.
In another thread, someone gave me a hint about the HTML5 browser history API (especially History.js).
What I'm trying to achieve with it:
1. Someone clicks a button -> an AJAX request is triggered, the content of the content div gets updated, and the URL gets updated to something like example.com/talks/foo/bar
2. If someone requests example.com/talks/foo/bar in his browser directly, the same AJAX request and content update as in (1) should be performed
I tried to implement the first one with:
$.get("content/text1.html", function(data) {
$("#content").html(data);
History.pushState(null, null, "content/text1.html");
});
But how am I supposed to achieve the second point? With a rewrite rule that redirects everything to the base HTML file, plus some JS logic in it to decode the URL and trigger the AJAX request?
I have the feeling that I am a bit on the wrong path.
So is this the way History.js should be used?
How can I achieve the second bullet point?
To get the initial state in HTML5 browsers, no AJAX calls are required. Like you said, the URL itself gets changed, not the hash, so the server should reply to that URL with the correct content already loaded.
You should do all your ajax calls and DOM manipulation inside the statechange event handler.
So when the user clicks a link, all you do is call pushState and handle the DOM changes in the statechange event handler. This works because statechange is triggered when pushState is called.
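A sketch of that split with History.js, reusing the question's #content div. The assumption is that the server can answer the pushed URLs with just the content fragment for AJAX requests (e.g. by checking the X-Requested-With header, which jQuery sets):

// All loading happens in one place: the statechange handler.
History.Adapter.bind(window, 'statechange', function () {
    var state = History.getState();
    $.get(state.url, function (data) {
        $('#content').html(data);
    });
});

// Click handlers only push the new URL; statechange does the rest.
// (The #nav a selector is illustrative.)
$('#nav a').on('click', function (e) {
    e.preventDefault();
    History.pushState(null, null, this.href);
});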
I am submitting data multiple times on one button click.
First I submit data to a variable number of hidden iframes (the form is enctype="multipart/form-data"). Then I would like to run the normal submit button/function, which redirects the main page.
If I do this all in one function, though, I don't end up receiving all the data I send to the hidden iframes. I believe the redirect begins before the other forms finish sending (sometimes I receive the first data submitted; most of the time I receive none).
I have hacked together a solution using setTimeout(function(){$("#submitbtn").click()},3000)... but clearly the uploads won't always take 3 seconds. I want a way of detecting when they finish so I can start the submit.
Sorry if this is hard to understand; if you need more info, just comment/ask.
I'm not sure how you are submitting that data to the iframes, but if you are using $.ajax you can set async: false, which blocks script execution (and with it every jQuery event, animation, etc.) until the request completes. There may be something similar if you are using a different method.
If you are using AJAX I recommend using the success callback of jQuery's $.post.
e.g.
$.post('/process', form_data, function(data) { document.location = '/thankyou'; });
In this example, after the post data has been successfully submitted, the user will be redirected to /thankyou.
You can also use $("#submitbtn").click() in the callback.
I managed to do it by using the iframe load event to detect changes to the target iframes. Once all of the iframes' load events had fired, I fired the last submit (the one that redirects the main page).
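A sketch of that counting approach in jQuery (the .upload-frame class and #mainform id are illustrative assumptions):

var $frames = $('iframe.upload-frame');
var remaining = $frames.length;

$frames.on('load', function () {
    // Fire the real submit only once every hidden iframe has finished.
    if (--remaining === 0) {
        $('#mainform').submit();
    }
});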