My application relies on being able to set a user's Online/Offline state.
I have solved this by using window.onbeforeunload to detect when the user leaves a page.
The problem is that the code below does not always get executed, so some users are still shown as online after they have left the page.
This is my current code:
window.onbeforeunload = function (event) {
    $.ajax({
        type: 'GET',
        url: './changeStatus.php?cid=' + ccID + '&status=0',
        contentType: "application/json",
        dataType: 'json',
        cache: false,
        async: false,
        success: function (data) {},
        error: function (e) {
            //console.log(e);
        }
    });
    return;
};
Any ideas how I can force this to always get executed, in all browsers?
The script is probably always executed when the page closes, but that doesn't mean that it always succeeds in calling the server.
If, for example, the user disconnects from the internet before closing the page, the script can't send anything to the server. You simply can't rely on getting a call from the browser every time the page is closed.
You can keep track on the server of when the user was last heard from, so that you can mark the user as offline when he hasn't requested any pages for a while. Another alternative is to frequently send a request to the server for as long as the user is on the page; when the requests stop coming, the server knows that the user has gone away.
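A rough sketch of that second approach (ccID and changeStatus.php come from the question above; the 30-second interval, the status=1 value and the server-side cutoff are assumptions):
// Ping the server every 30 seconds while the page is open.
// The server marks the user offline when no ping has arrived for, say, a minute.
setInterval(function () {
    $.ajax({
        type: 'GET',
        url: './changeStatus.php?cid=' + ccID + '&status=1', // 1 = still online (assumed convention)
        cache: false
    });
}, 30000);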
Related
I am binding the beforeunload event of a page to make a quick synchronous ajax call (not best practice but it is essentially a ping so we are trying it as an idea).
$(window).bind('beforeunload', function(){
    makeAjaxPUTcall();
    //return 'Send a message!';
});
This works fine when built, deployed to a test environment and tested manually.
I was then trying to add some Selenium tests (ChromeDriver) to automate the testing, and the Ajax call does not seem to get made. Setting a breakpoint in the browser seems to show the code running through the Ajax call, and if I uncomment the return I get an (unwanted) alert before the page unloads. For the sake of argument, I have bound the JavaScript function to the blur event of the page controls
$(':input').on('blur', function() { makeAjaxPUTcall(); });
and it works without problems in Selenium (as well as in manual testing).
When run in the beforeunload event, logging seems to show that the call never hits the server.
The JavaScript function looks like this:
function makeAjaxPUTcall(){
    $.ajax({
        type: 'PUT',
        async: false,
        url: urlVal,
        data: null,
        processData: false,
        dataType: 'json',
        contentType: 'application/json',
        success: function() {}
    });
}
So, in summary, from where I'm standing I've proved in various ways that the code making the REST call works both client side and server side. I have also 'proved' by manual dev testing that these calls are triggered when a page navigation takes place manually. I have also proved that the beforeunload event is being triggered in the Selenium tests (by adding a return string and seeing an alert pop up).
I have also converted the PUT to a GET and, after putting a breakpoint in the test code, navigated a different tab to the URL provided, proving that the request is triggered and hits a breakpoint in the (Java) controller code.
It SEEMS to me that the issue must be something to do with the way Selenium handles the navigation. Can anyone point me in the right direction?!
Try it with fetch:
window.onbeforeunload = () => {
    fetch(url, {
        method: 'PUT',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({foo: "bar"})
    })
}
Why? fetch creates a promise, and I believe Chrome will try to let that resolve before unloading the page.
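A possible variation on the same idea (a sketch only; whether ChromeDriver honours it is an assumption) is fetch's keepalive flag, which is intended for requests that should outlive the page:
window.onbeforeunload = () => {
    fetch(url, {
        method: 'PUT',
        keepalive: true, // ask the browser to finish the request even after the page unloads
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({foo: "bar"})
    })
}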
I have a website that basically makes API calls and displays the data in a table; the API is on a different server from the website.
If the API server is down what is the best way to alert the user client-side (JavaScript) that the server is unavailable?
Could/should I put the alert in the API call's error handling (see the code for an example)? What is the best practice for this type of situation?
function apiCall(query, product){
    var p = product;
    var urlr = 'https://myFakeAPIUrl/api/' + query + '/' + product;
    $.ajax({
        contentType: 'application/json',
        crossDomain: true,
        url: urlr,
        type: "GET",
        success: function (result){
            alert("Yay, the API server is up.");
        },
        error: function(error){
            console.log(error);
            alert("Sorry, the server is down.");
        }
    });
}
var productData = apiCall("Produce", "112233");
I would ask myself what a user would like to see in this situation.
What I always do is put a timeout on the Ajax request; whenever that timeout of e.g. 9999 ms runs out, the user gets notified (with a toast, a heading, etc.) that something went wrong and that they should try again later.
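A minimal sketch of that idea, reusing apiCall from the question (the 9999 ms value, renderTable and showToast are placeholders, not existing functions):
function apiCall(query, product) {
    var urlr = 'https://myFakeAPIUrl/api/' + query + '/' + product;
    $.ajax({
        contentType: 'application/json',
        crossDomain: true,
        url: urlr,
        type: 'GET',
        timeout: 9999, // give up after ~10 seconds instead of waiting on the browser default
        success: function (result) {
            renderTable(result); // placeholder for the existing table-rendering code
        },
        error: function (xhr, textStatus) {
            if (textStatus === 'timeout') {
                showToast('The server is not responding. Please try again later.'); // placeholder UI helper
            } else {
                showToast('Sorry, the server is down.');
            }
        }
    });
}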
I have what should be a very simple little process. I have an interval timer which, when it expires, makes an Ajax call to the server. With the JSON that comes back, it parses it out and updates the DOM. The timer is set to one second. The server then does a very simple SELECT on the database, a query which takes milliseconds to execute, if that.
On our test server it works fine. But when it's deployed to our customer, most of the time it is NOT actually hitting the database. We can see this in Query Analyzer: there should be a continuous flow of queries, but they are sporadic at best, often with 40 or more seconds between hits.
Here's the code:
setTimeout(function run() {
    // When the timer elapses, get the data from the server
    GetData(0);
    setTimeout(run, _refreshRate);
}, 1000);

function GetData(isFirstLoad) {
    //console.log("Attempting to obtain the data...");
    jQuery.ajax({
        url: "something.ashx",
        type: "GET",
        contentType: 'application/json; charset=utf-8',
        success: function(resultData) {
            //console.log("Got the data.");
            ParseJson(resultData);
            // show the last refresh date and time
            $('#refreshTime').html(GetDateTime());
        },
        error: function(xhr, textStatus, errorThrown) {
            if (textStatus == 'timeout') {
                //console.log("Timeout occurred while getting data from the server. Trying again.");
                // If a timeout happens, DON'T STOP. Just keep going forever.
                $.ajax(this);
                return;
            }
        },
        timeout: 0
    });
}
Everything within ParseJson(resultData); works fine. And with this line...
$('#refreshTime').html(GetDateTime());
...the time gets refreshed every second like clockwork, even though the database never gets hit.
I can put a breakpoint inside the error handler in the dev tools, and it never gets hit.
If we hit refresh, it works for a few seconds (we can see queries hitting the database), but then it slows way down again.
The frustrating part is that it works flawlessly on our test server. But there is clearly something I'm overlooking.
EDIT:
Ok, this is really weird. When I have the debugger open, it works. As soon as I close the debugger, it stops working. I don't even have to have the Network tab running and capturing events; just having the debugger window open makes it work.
This is IE, which is what the client is using so it's our only option.
Found the answer here:
jQuery ajax only works in IE when the IE debugger is open
Turns out IE, and only IE, will cache Ajax responses. You have to tell it not to. Adding cache: false did the trick.
function GetData(isFirstLoad) {
    //console.log("Attempting to obtain the data...");
    jQuery.ajax({
        url: "something.ashx",
        type: "GET",
        contentType: 'application/json; charset=utf-8',
        cache: false, // the fix: stops IE from serving the cached response
        success: function(resultData) {
            // ...
        },
        // ... remaining options unchanged from the code above
    });
}
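For reference, cache: false makes jQuery append a throwaway _={timestamp} parameter to GET URLs so every request looks unique to IE's cache; the same effect can be had by hand (a sketch, the parameter name is arbitrary):
// Manual cache-buster: a unique query-string value defeats IE's response cache.
jQuery.ajax({
    url: "something.ashx?nocache=" + new Date().getTime(),
    type: "GET",
    contentType: 'application/json; charset=utf-8',
    success: function(resultData) { ParseJson(resultData); }
});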
I have a local application that contains a web server for exchanging JSON with a web application. The Web application itself is served from the web, meaning that browsers see them as cross-origin.
When the application is running, it provides the correct cross-origin headers to allow the interchange. However, I want a way to quickly detect whether or not the application is running.
The current method is to use AJAX to a "heartbeat" URL on the localhost service:
pg.init.getHeartbeat = function(port) {
    var url = pg.utils.baseUrl('heartbeat', port); // returns a localhost URL
    $.ajax({
        url: url,
        type: 'GET',
        async: true,
        success: function(data) {
            // Hooray! Application is there. Do 'hooray' stuff
        },
        error: function(data) {
            // Whoah the bus. Application not there. Do 'no application' stuff.
        }
    });
};
This works great in WebKit: it tries to get a heartbeat, quickly fails, and runs the failure handler very quickly.
The problem is in Firefox. Firefox tries to get a heartbeat and takes between 4 and 10 seconds to fail. It might not seem like much, but 4 seconds before the UI moves the user to the next step makes the app feel very slow and unresponsive.
Any other ideas out there? As far as I can tell, changing an iframe's src attribute and capturing a failure isn't working either; it's not triggering the error event. And even when I can get an error to trigger from sample code, it still takes 4 seconds, so there is no net improvement.
The web server side of things should not have any server-side scripting languages (PHP, etc.); I need the JavaScript to be able to take care of it independently.
You can set a timeout if the browser is Firefox:
var timeoutcall = 0;
if (navigator.userAgent.toLowerCase().indexOf('firefox') > -1)
{
    timeoutcall = 100;
}

$.ajax({
    url: url,
    type: 'GET',
    timeout: timeoutcall,
    async: true,
    success: function(data) {
        // Hooray! Application is there. Do 'hooray' stuff
    },
    error: function(data) {
        // Whoah the bus. Application not there. Do 'no application' stuff.
    }
});
If the timeout is 0 then there is no timeout. So, if I am in Firefox I set the timeout to 100 ms, and if I'm not in Firefox it stays unlimited.
I have an AJAX long polling request below:
$.ajax({
    type: "GET",
    url: "events_controller.php",
    dataType: "json",
    success: function (data) {
        eventsTimer = setTimeout(function(){ eventsTimerHandler() }, 1000);
    }
});
On the server, if some event happens it returns what has happened, and the request above displays a notification.
The problem I am having: if I do something in the browser that triggers an event, say, 10 seconds in the future and then immediately go to a different page, a new long-polling request is created, but the previous one is still active and no notification is sent to the user.
I hope I'm making sense.