I have a PHP web app.
When multiple simultaneous AJAX requests occur, they seem to be queued on the server side, as if only one process runs at a time. This only happens when all the requests come from the same browser.
The weirdest thing is that sometimes the requests run simultaneously, as they should (screen: https://imgur.com/8oDGV8t ), and then after about 10 minutes the server waits for one process to finish before starting the next, handling them one by one (screen: https://imgur.com/OPkzYNh ).
The code used for the test screenshots:
sleep(5);
exit();
P.S. When these AJAX requests are queued, normal HTML requests also wait in the queue.
I think it is highly likely that this has something to do with session management.
What happens is that a new request waits for the session in the previous request to be closed.
This only happens because session data is accessed, and thus a lock is obtained on the session file.
You can avoid this by not starting the session in the first place. If you do need the session, close it right after starting it. If you need to set $_SESSION variables, do so before closing the session. You can do this like so:
session_start();
$_SESSION['some'] = 'value';
session_write_close(); // From here on out, concurrent requests are no longer blocked
$_SESSION variables will still be available after closing the session.
See also: https://codingexplained.com/coding/php/solving-concurrent-request-blocking-in-php
So I have a quite expensive and complex PHP process whose execution is long-running; let's call it the function expensive_process().
I have an interface which, through the press of a button, makes an AJAX request to a PHP script, which in turn initiates expensive_process(). Here's the JavaScript code:
$('#run_expensive_process_button').click( function(){
    var url = "initiate_expensive_process.php";
    $.ajax({
        url: url
    });
});
And initiate_expensive_process.php code:
<?php
session_start();
run_expensive_process();
?>
Simple and trivial. Now the issue with this is that while expensive_process() is running, the browser loses the ability to navigate the domain. If I refresh the browser window, it hangs indefinitely for as long as the process lasts. If I redirect to a different URL under the same domain, same thing. This happens in all browsers. However, if I relaunch the browser (close and open a new window, not a tab), navigation works normally, even though expensive_process() is still running.
I've inspected network traffic, and the HTTP request to initiate_expensive_process.php doesn't get a response while expensive_process() is running, but I'm assuming this shouldn't be locking the browser, given the asynchronous nature of the request.
One more thing, which I believe is relevant. This situation is happening on a replica server. On my local machine, where I run WAMP and the same source code, this is not happening, i.e., while expensive_process() is running, I'm still able to navigate the hosting domain without having to relaunch the browser. This seems to be an indication of a server configuration problem of some sort, but I'm not sure I can rule out other possible reasons.
Anyone know what might be causing this or what can be done to figure out the source of the problem?
Thanks
Most likely the other PHP scripts also use session variables. Only one script can access a session at a time; if a second script tries to access the session while the first script is still running, it will be blocked until the first script finishes.
The first script can unlock the session by calling session_write_close() when it's done using the session. See "If call PHP page via ajax that takes a while to run/return (and it sets session variables), will a 2nd ajax call see those session changes?" for more details about how you can construct the script.
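For example, a minimal sketch of initiate_expensive_process.php with the session lock released before the long-running work starts (run_expensive_process() is the question's own placeholder; the session write shown is purely illustrative):
<?php
session_start();

// Do any session reads/writes first.
$_SESSION['expensive_process_started'] = time(); // illustrative only

// Release the session lock so other requests from this browser can proceed.
// $_SESSION stays readable afterwards, but later writes won't be persisted
// unless the session is reopened.
session_write_close();

run_expensive_process(); // the long-running work from the question
?>
With the lock released up front, refreshing the page or navigating the domain in the same browser no longer has to wait for expensive_process() to finish.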
I wonder whether it might be due to AJAX, since the JavaScript is being executed client-side.
Maybe you could consider a stringified JSON call instead of AJAX?
I have a PHP page named update_details.php?id=xyz which has a query for getting the details and updating the login time of the users.
The users have a profile page named profile.php?id=xyz. So for different users the profile page is different, like profile.php?id=abc, profile.php?id=def, etc. Now this profile.php has an AJAX function that sends the user id to update_details.php so that update_details.php can update the record.
Now, for example, suppose I have 2000 users and all of them open their profile pages simultaneously. My question is: will the update_details page be able to handle this? I mean, is it one update_details.php, or is each of update_details.php?id=abc, update_details.php?id=def, etc. considered a separate one?
To be more precise: when 2000 users update their records through 2000 AJAX calls, do the calls go to one update_details.php, or to one per user according to their ids, like update_details.php?id=abc, update_details.php?id=def, etc.? TIA
Okay, let's trace how a request travels from the browser until it is served and the browser gets a response.
The client clicks on a link, or maybe a button.
The browser makes an HTTP request and sends it to the server (that may be Apache, nginx, whatever you use).
The server analyzes the request and checks its rules, saying: "I found a rule: when I hit a URL with a .php extension, I run a PHP interpreter and pass it the request info."
The server spawns a new process or assigns the request to one of its workers (this depends on the internals of the server).
How many concurrent PHP processes will run? That depends on the web server's configuration and design.
So to answer your question: each running PHP process has its own isolated memory, even if all of them are executing the same instructions from update_details.php.
Think of it like 10 workers in a factory crafting a chair following the same instructions, but each one uses a different paint color, wood type, etc.
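For illustration, a minimal sketch of what update_details.php might look like ($pdo and the table/column names are assumptions for this sketch, not from the question). Every concurrent request runs this same script in its own process, with its own copy of $id:
<?php
// Each request gets its own PHP process and its own copy of these variables.
$id = $_GET['id'] ?? null;
if ($id === null) {
    http_response_code(400);
    exit('missing id');
}

// Update this user's login time. 2000 simultaneous users means 2000
// independent executions of this same script, one per request.
$stmt = $pdo->prepare('UPDATE users SET last_login = NOW() WHERE id = :id');
$stmt->execute([':id' => $id]);
?>
So there is only one file, but each request to it is a separate, isolated execution.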
1) I need the following requirement to be satisfied:
The client's request (a long-running process) should wait until the server has finished serving the request.
Current solution:
The client initiates the request, followed by a ping request every 5 seconds to check the request status; this also maintains the session.
2) If the client moves to another tab in the application and comes back, the client should still show the process status, and the server should continue working on the request.
3) If the client closes the browser or logs out, the server should stop the process.
P.S. The functionality is needed in all browsers after IE 9, plus Chrome and Firefox.
There are many ways to skin a cat, but this is how I would accomplish it.
1. Assign a unique identifier to the request (you have most likely done this already, since you're requesting the ready state every few seconds).
2. Set a member of the session data to the unique ID.
3. Have all your pages load the JS needed to continually check the process, but the JS should NOT use any identifier.
4. In the script that handles the AJAX request, check the session for the unique identifier, update an internal system (file or database) with the unique identifier and the time of the last request, and push back details if there are details to be pushed (a sketch of this follows below).
5. In another system (a cron job, for example) or within the process itself (if it runs in a loop, for example), check the same database or file for the last timestamp recorded against the unique identifier. If the timestamp is too old, say 15 seconds (remember that page load times may delay the 5-second interval), then kill the process if cron'd, or have the process exit itself if the check is within the process script.
6. Logging out will kill the session data, making the updating of the table/file impossible (and a check should be there for this), so within a few seconds of logout the process stops.
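A minimal sketch of the ping handler from step 4, assuming the unique ID was stored in the session when the process started, and that a heartbeats table and a PDO connection $pdo exist (all names are illustrative):
<?php
session_start();
$processId = $_SESSION['process_id'] ?? null; // set when the process started
session_write_close(); // release the session lock so pings don't block other requests

if ($processId !== null) {
    // Record that this client is still alive; the watchdog (cron or the
    // process itself) kills the process once last_seen gets too old.
    $stmt = $pdo->prepare(
        'UPDATE heartbeats SET last_seen = NOW() WHERE process_id = :id'
    );
    $stmt->execute([':id' => $processId]);
}
?>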
You will not be able to find a reliable solution for logout. window.onbeforeunload will not allow you to communicate with the server (you can only prompt the user with the built-in dialog, and that's pretty much it). Perhaps, instead of trying to capture logout/abandon, add some logic to the server's process to wait for those pings (maybe allow 30 seconds of no communication before abandoning); that way you're not wasting the server's cycles that much, and you still have the monitoring working as before.
I have a JavaScript application that regularly saves new and updated data. However I need it to work on slow connection as well.
Data is submitted in one single HTTP POST request. The response will return newly inserted ids for newly created records.
What I'm finding is that data submitted is fully saved, however sometimes the return result times out. The browser application therefore does not know the data has been submitted successfully and will try to save it again.
I know I can detect the timeout in the browser, but how can I make sure the data is saved correctly?
What are some good methods of handling this case?
I see from here https://dba.stackexchange.com/a/94309/2599 that I could include a pending state:
Get a transaction number from the server
Send the data; it gets saved as pending on the server
If a pending transaction already exists, do not overwrite the data, but send the same results back
If success is received, commit the pending transaction
If an error comes back, retry later
If it times out, retry later
However, I'm looking for a simpler solution.
Really, it seems you need to get to the bottom of why the client thinks the data was not saved, but it actually was. If the issue is purely one of timing, then perhaps a client timeout just needs to be lengthened so it doesn't give up too soon or the amount of data you're sending back in the response needs to be reduced so the response comes back quicker on a slow link.
But, if you can't get rid of the problem that way, there are a bunch of possibilities to program around the issue:
The server can keep track of the last save request from each client (or a hash of such a request) and, if it sees a duplicate save request come in from the same client, it can simply return something like "already-saved".
The code flow in the server can be modified so that a small response is sent back to the client immediately after the database operation has committed (no delays for any other types of back-end operations), thus lessening the chance that the client would timeout after the data has been saved.
The client can coin a unique ID for each save request; if the server sees the same save ID used on multiple requests, it knows the client is just retrying the same save (sketched below).
After any type of failure, before retrying, the client can query the server to see if the previous save attempt succeeded or failed.
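A minimal sketch of that save-ID approach (the save_id field and the saves table are illustrative assumptions):
<?php
$payload = json_decode(file_get_contents('php://input'), true);
$saveId  = $payload['save_id'] ?? null;

// If this save ID was already processed, replay the stored response
// instead of saving the data twice.
$stmt = $pdo->prepare('SELECT result FROM saves WHERE save_id = :id');
$stmt->execute([':id' => $saveId]);
if ($row = $stmt->fetch()) {
    echo $row['result'];
    exit;
}

// First time we see this ID: save the data, then store the response
// under the save ID so retries get the same answer.
$result = json_encode(['status' => 'saved']);
$stmt = $pdo->prepare('INSERT INTO saves (save_id, result) VALUES (:id, :result)');
$stmt->execute([':id' => $saveId, ':result' => $result]);
echo $result;
?>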
You can track the number of retries with a simple global int.
You can also retry automatically, but this isn't good for an auto-save app.
A third option is to use one of the auto-save plugins for jQuery.
A few suggestions:
Increase the timeout, and don't treat a timeout as success.
You can flush the output for each record as soon as you have it, using ob_flush() and flush() (a sketch follows below).
Since you are making requests at a regular interval, you can check connection_aborted() on each API call; if the client has disconnected, you can save the response in a temp file and, on the next request, append the last response to the new response. This method is more resource-consuming, though.
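A minimal sketch of that incremental flushing, assuming a hypothetical get_next_record() function that yields records one at a time:
<?php
while (($record = get_next_record()) !== null) {
    echo json_encode($record), "\n";
    if (ob_get_level() > 0) {
        ob_flush(); // push PHP's output buffer down to the web server
    }
    flush();        // ask the web server to send it on to the client
    if (connection_aborted()) {
        // Client disconnected: stop early (or persist progress to a temp file).
        break;
    }
}
?>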
I made a chat using PHP and JavaScript, and there is a disconnect button which removes a user from the chat by removing him from the user list first. But if the user closes the browser, he will remain in the user list. How do I check if he left?
This must be done without putting any handlers on page close in JS, because if the user kills the browser he will remain in the chat.
By the way, the JS always sends a request to a PHP page that constantly checks for new messages in a loop; when there are some, the script prints them out and exits. Then it repeats all over again.
EDIT: How do I make a heartbeat thing in PHP? If a user closes the page, the script's execution is terminated, so we won't be able to check whether the user is still connected from within the same script.
Sorry, there is no reliable way of doing this; that's the way HTTP was built: it's a "pull" protocol.
The only solution I can think of is that "valid", logged-in clients must query the server at a very short interval. If they don't, they're logged out.
You could send a tiny AJAX call to your server every 5 seconds; users who don't aren't in the room any more.
You answered your own question: if you don't detect a request for new messages from a user over a given length of time (more than a few seconds), then they left the room.
The nature of HTTP dictates that you need to do some AJAX type of communication. If you don't want to listen for the "give me more messages" request (not sure why you wouldn't want to), then build in a heartbeat type communication.
If you can't modify the JS code for some reason, there really is little you can do. The only thing you can do with PHP alone is check whether it has been, for example, over 15 minutes since the user's last activity and, if so, treat the user as having left. But this is in no way a smart thing to do: a user might just sit and watch the conversation for 15 minutes.
The only proper way to do it reliably is AJAX polling at set intervals.
You noted that a user polls the server for new messages constantly; can't you use that to detect whether the user has left?
Maintain a list of active users on the server, along with the last time each of them connected to the chat to request new messages.
When a user connects to check for messages, update their time.
Whenever your code runs, iterate through this list and remove users who haven't connected recently enough (a sketch follows below).
The only failure is that if the number of users in the channel drops to zero, the server won't notice until someone comes back.
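A minimal sketch of that bookkeeping, assuming MySQL, a PDO connection $pdo, and an active_users table keyed on user_id with a last_poll column (all names are illustrative):
<?php
// Call this whenever a user polls for new messages.
function touch_user(PDO $pdo, int $userId): void {
    $stmt = $pdo->prepare(
        'REPLACE INTO active_users (user_id, last_poll) VALUES (:id, NOW())'
    );
    $stmt->execute([':id' => $userId]);
}

// Call this on any request (or from cron) to prune users who stopped polling.
function prune_inactive(PDO $pdo, int $maxAgeSeconds = 15): void {
    $cutoff = date('Y-m-d H:i:s', time() - $maxAgeSeconds);
    $stmt = $pdo->prepare('DELETE FROM active_users WHERE last_poll < :cutoff');
    $stmt->execute([':cutoff' => $cutoff]);
}
?>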
To address your edit: you can ignore client termination by using ignore_user_abort().
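A minimal sketch of that, combined with connection_aborted() so the script itself notices the disconnect:
<?php
ignore_user_abort(true); // keep running even after the client closes the page
set_time_limit(0);       // assumption: let the loop run indefinitely

while (!connection_aborted()) {
    echo ' ';  // some output must be attempted for the abort to be detected
    flush();
    sleep(2);
}
// The client is gone: remove the user from the chat's user list here.
?>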
Using JavaScript you can do the following:
<script type="text/javascript">
window.onunload = unloadPage;
function unloadPage()
{
    alert("unload event detected!");
}
</script>
Make the necessary AJAX call to your PHP script inside the unloadPage() function.
Request a PHP script that goes a little something like this, with AJAX:
<?php
// disconnect_current_user() is your function that removes the user from the list.
register_shutdown_function("disconnect_current_user");
header('Content-type: multipart/x-mixed-replace; boundary="pulse"');
while (true) {
    // When the browser closes the connection, echo/flush aborts the script,
    // firing the shutdown function and disconnecting the user.
    echo "--pulse\r\n.\r\n";
    flush();
    sleep(2);
}
This way, you won't constantly be opening/closing connections.
The answers to all the questions asked by the OP are covered in the section in the manual about connection handling:
http://uk3.php.net/manual/en/features.connection-handling.php
No Ajax.
No JavaScript.
No keep-alives.
C.