Can the same PHP query handle simultaneous AJAX requests from different pages - javascript

I have a PHP page named update_details.php?id=xyz which has a query for getting the details and updating the login time of the users.
The users have a profile page named profile.php?id=xyz, so for different users the profile page is different, like profile.php?id=abc, profile.php?id=def, etc. Now this profile.php has an AJAX function that sends the user ID to update_details.php through an AJAX call so that update_details.php can update the record.
Now, for example, if I have 2000 users and all of them open their profile page simultaneously, will the update_details page be able to handle this? I mean, is it one update_details.php, or is each update_details.php?id=abc, update_details.php?id=def, etc. considered to be a separate one?
To be more precise, when 2000 users are updating their records through 2000 AJAX calls, are the calls going to one update_details.php, or to one per ID like update_details.php?id=abc, update_details.php?id=def, etc.? TIA

Okay, let's check how the request goes from the browser until it's served and the browser gets a response.
The client clicks on a link, maybe a button.
The browser makes an HTTP request and sends it to the server (that may be Apache, nginx, whatever you use).
The server analyzes the request and checks its rules, saying: I found a rule; when I hit a URL with a .php extension, I run a PHP interpreter and pass it the request info.
The server spawns a new process or assigns the request to one of its workers (this depends on the internals of the server).
How many concurrent PHP processes will run? That depends on the web server's configuration and design.
So to answer your question: each running PHP process has its own isolated memory segment, even if they are all executing the same instructions from update_details.php.
Think of it like 10 workers in a factory crafting a chair following the same instructions, but each one uses a different paint color, wood type, etc.
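In other words, all 2000 AJAX calls hit the same update_details.php script, but each call is handled as a separate, isolated request. As a minimal client-side sketch (assuming the profile page exposes its user ID in a variable called userId; that name is hypothetical, not code from the question):

// Every profile page fires the same call; only the id parameter differs.
// The server handles each request in its own isolated process/worker.
$.get("update_details.php", { id: userId }, function (data) {
    // each response belongs only to the request that triggered it
    console.log("login time updated for", userId);
});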

Related

jQuery $.get() is blocking other requests

I'm developing a web application and use jQuery to make asynchronous HTTP requests to my API. I have a detail view where you can see a lot of information about a specific object stored in the database. Because there is a lot of information and data linked to other objects, I make different calls to my API to gather the information for my views.
In the detail view I have some widgets that show the requested information. For those, I make about 5-7 HTTP GET requests to my API. When using the debugger (both Safari and Firefox), I can see that some requests block other requests, and the page takes a long time until everything is loaded and shown to the user.
I make a request like this:
$.get("api/api.php?object=myobject&endpoint=someendpoint", function(data) {
// data is JSON formatted
$("#my-widget input").val(data["name"]);
});
And another one e.g. like this:
$.get("api/api.php?object=anotherobject&endpoint=anotherendpoint", function(data) {
// data is JSON formatted
$("#other-widget input").val(data["somekey"]);
});
If the first request takes a little longer to finish, it blocks the second request until the callback function of the first request has finished. But why? I thought those calls were asynchronous and non-blocking.
I want to build a fast web application for a company where requests are only made inside the local network, so a request should only take about 10-50 ms (or even less). But the page takes about 10 seconds to show up with all the information.
Am I doing something wrong? Or is there a JavaScript framework that can be used for exactly this problem? Any help is appreciated!
EDIT: As you can see in the screenshot, the requests have to wait some seconds, and once a request is fired, it takes a few seconds until a response comes back.
If I call the URL directly in my browser or do a GET request using curl, it is a lot faster.
EDIT2: Thanks @CBroe! The session file write lock was the problem. As long as the session file is locked, no other script can run until the previous script finishes. I just called session_write_close() immediately after session_start() and it runs a lot faster now.
Attention: use session_write_close() only if you don't need to write to the $_SESSION array. Reading is still possible after that, but not writing. (See this topic for further details: https://stackoverflow.com/a/50368260/1427878)

Differentiate browser close event and logout

1) I need the following requirement to be satisfied:
The client's request (a long-running process) should wait while the server serves the request.
Current solution:
The client initiates the request, followed by a ping request every 5 seconds to check the request status, which also maintains the session.
2) If the client moves to another tab in the application and comes back, the client should still show the process status, and the server should continue working on the request.
3) If the client closes the browser or logs out, the server should stop the process.
PS: The functionality is needed for all browsers after IE 9, plus Chrome and Firefox.
There are many ways to skin a cat, but this is how I would accomplish it.
1. Assign a unique identifier to the request (you've most likely done this already, as you're requesting the ready state every few seconds).
2. Set a member of their session data to the unique ID.
3. Set all your pages to load the JS needed to continually check the process, but the JS should NOT use any identifier (see the sketch after these steps).
4. In the script that parses the AJAX request, have it check the session for the unique identifier, update an internal system (file or database) with the unique identifier and the time of the last request, and push back details if there are details to be pushed.
5. In another system (like a cron job), or within the process itself (if it runs in a loop, for example), check the same database or file system for the unique identifier and its last timestamp. If the timestamp is too old, let's say 15 seconds (remember page load times may delay the 5-second interval), then kill the process if cron'd, or have the process end itself if the check is inside the process script.
6. Logout will kill the session data, thus making the updating of the table/file impossible (and a check should be there for this), and that will make it so that within a few seconds of logout, the process stops.
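A minimal sketch of the client-side ping from step 3 (status.php and onProgress are hypothetical names; the session cookie carries the user's identity, so the JS itself sends no identifier):

// Ping the server every 5 seconds; the session cookie identifies the
// user, so no unique ID ever appears in the client-side code.
setInterval(function () {
    $.get("status.php", function (data) {
        // the server looked up the process via the session and
        // responded with its current state
        onProgress(data); // hypothetical UI-update callback
    });
}, 5000);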
You will not be able to find a reliable solution for logout. window.onbeforeunload will not allow you to communicate with the server (you can only prompt the user with the built-in dialog, and that's pretty much it). Perhaps, instead of finding a way to capture logout/abandon, add some logic to the server's process to wait for those pings (maybe allow 30 seconds of no communication before abandoning); that way you're not wasting the server's cycles as much, and you still have the monitoring working as before.

Send data to client page from mysql database without refreshing page (timeout)

I created a tabulation system for beauty pageants that shows the judges' scores on a projector. I created a presentation page for that using CodeIgniter.
The HTML of that presentation page is generated purely in JavaScript. The page refreshes each second to get real-time data sent by the judges.
The not-so-cool thing about this logic is that when the page writes a lot of data, it blinks every second, so the refreshing of the page is noticeable and somewhat disturbing.
This is a snippet of the code I'm working on.
$(document).ready(function() {
    getJudgesScore();
    setInterval(function() {
        if (getNumFinalists() == 0)
            getJudgesScore();
        else {
            window.open('presentationFinalists', '_self', false);
        }
    }, 1000);
});
You can imagine how much data is being sent and received every time this code is executed.
To sum this up, what I want to accomplish is that instead of the client asking for data every second, the server initiates the connection every time new data is saved to the database. Thank you for taking your time to read my concern.
This might help you take the necessary data from the MySQL server and send it to the client page.
This jQuery timer runs a function after a particular interval.
<script src="../JS/Timer/jquery.timer.js"></script>
var timer = $u.timer(function() {
getJudgesScore();
});
timer.set({time: 1000, autostart: true});
Refer to this link as well:
https://code.google.com/p/jquery-timer/source/browse/trunk/jquery.timer.js?r=12
What you are attempting is a tricky -- but not impossible -- proposition.
@Chelsea is right -- the server can't genuinely initiate a connection to the client -- but there are several technologies that can emulate that functionality, using client connections that are held open for future events.
Those that come to mind are EventSource, Web Sockets, and long polling... all of which have various advantages and disadvantages. There's not one "correct" answer, but Google (and Stack Overflow) are your friends.
Of course, by "server," above, I'm referring to the web server, not the database. The web server would need to notify the appropriate clients of data changes as it posts them to the database.
To get real-time notification of events from the MySQL server itself (delivered to the web server) is also possible, but requires access to and decoding of the replication event stream, which is a complicated proposition. The discovered events would then need to result in an action by the web server to notify the listening clients over the already-established connections using one of the mechanisms above.
You could also continue to poll the server from the browser, but only exchange enough data via AJAX to update what you need. If you included some kind of indicator in your refresh requests, such as a timestamp you received in the prior update, or some kind of monotonic global version ID such as the MySQL UUID_SHORT() function generates, the server could send a very lightweight 204 No Content response to the client, indicating that the browser does not need to update anything.
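For illustration, here's a minimal sketch of that lightweight polling approach (the scores.php endpoint, the version field, and renderScores are hypothetical names, not code from the question):

var lastVersion = null;

function pollScores() {
    // ask the server only for changes since the version we already have
    $.ajax({
        url: "scores.php",
        data: { since: lastVersion },
        success: function (data, textStatus, xhr) {
            if (xhr.status === 204) return; // nothing changed, skip the redraw
            lastVersion = data.version;     // remember the new version ID
            renderScores(data.scores);      // redraw only with fresh data
        }
    });
}

setInterval(pollScores, 1000);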

How can my ASP.NET page go back and forth from client to server code and back several times?

OK, the title seems a little confusing, so I'll try to explain more thoroughly...
The process the page does currently follows the following sequence:
- User clicks a button
- server-side code retrieves data from the DB and exposes said data to the client by populating, let's say, hidden fields.
- client-side code uses this data to fire up an ActiveX component which performs a few tasks with the data provided.
And this works fine; however, we need to optimize the process because the ActiveX component is not fit to handle high volumes of data. We need to send data to the component in "blocks", rather than sending all the data at once as is done today.
However, I just hit a roadblock here, on how can I make the page go back and forth from server to client code multiple times? Like... "user clicks a button, server retrieves first block of data, sends to client, client executes ActiveX for the first block, client requests next block, server retrieves second block, sends to client, client executes ActiveX for the second block, client requests third block... and so on"? I can't get past the first request, since I can't register a client script block 2 times and expect AJAX to handle those multiple sequential callbacks...
Or is there a way?
This sounds more like an architectural issue than anything else.
What you should be doing here is:
1) User clicks a button. This is NOT a regular submit button. Just a plain old button that executes some local javascript.
2) Local javascript makes an AJAX request to determine how many records are available.
3) That javascript then does a loop based on the number of available records divided by the amount you want to pull per chunk.
3.a) Execute AJAX request for a chunk
3.b) Throw the data into your ActiveX control - which, btw, I really would suggest you guys think about getting rid of. There are so many issues with ActiveX that it's not even funny.
4) Repeat 3.a and 3.b until completion (see the sketch below).
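A rough sketch of that flow (GetRecordCount.ashx, GetRecords.ashx, processChunkWithActiveX, and the chunk size are hypothetical placeholders, not your actual endpoints):

var CHUNK_SIZE = 100; // hypothetical block size

function loadChunks(totalRecords, offset) {
    if (offset >= totalRecords) return; // step 4: all chunks done
    // step 3.a: fetch one chunk
    $.get("GetRecords.ashx", { offset: offset, count: CHUNK_SIZE }, function (data) {
        // step 3.b: hand this block to the ActiveX control
        processChunkWithActiveX(data);
        // request the next chunk only after this one is handled
        loadChunks(totalRecords, offset + CHUNK_SIZE);
    });
}

// step 2: find out how many records exist, then start the loop
$.get("GetRecordCount.ashx", function (total) {
    loadChunks(parseInt(total, 10), 0);
});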
You'll notice that at no point was a full post back performed. You'll also notice that you shouldn't have to register any client script blocks.
Now the drawback here is purely in the ActiveX control. Can it be instantiated from javascript multiple times in a page, or are you forced to only use a single instance?
If it's limited to a single instance, then you'll need a different approach entirely.

How do popular mail websites handle server-side scripting?

I was wondering how popular mail websites handle / call their server-side scripts. How do they do it in such a way that users are not easily able to decipher which file they are calling to invoke, say, login authentication?
For example, on the Yahoo website I did a view-source on the login page and saw:
<form method="post" action="https://login.yahoo.com/config/login?" autocomplete="" name="login_form" onsubmit="return hash2(this)">
Usually the action is the server-side script file that is called on submit, right? So they are redirecting to some other website on .done (i.e. after authentication), but how do we know what file they are calling to run the script? Where are the username and password? I tried a Wireshark capture too; because they are using POST, I won't see the username/password in the URL, but in Wireshark I should see them, right?
Sorry, a lame question, but I was just curious as to how the big players work.
Are you merely confused about the URL https://login.yahoo.com/config/login??
Consider: a web server does not need to work with files at all. Having a URL like http://example.com/login.php is merely an extremely lazy way to map to a file on disk. Internally, the web server will receive the request as /login.php and will have to look through its configuration to see if there's a file login.php somewhere in a directory configured for the host example.com, execute that file, and send back the results to the user. That's a complicated task.
Instead it could just receive the query for /config/login? and do something completely different with it, like... logging you in.
You're never executing files directly on a remote server. This is important. There's always a program translating URLs to executable programs or actions. This is completely arbitrary and has nothing to do with the file system.
Try searching for "pretty URLs".
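As a toy illustration of that idea, here is a hypothetical URL-to-action dispatcher sketched with Node's built-in http module (this is not how Yahoo actually does it; the route and port are made up):

var http = require("http");

// The server maps URL paths to actions, not to files on disk.
var routes = {
    "/config/login": function (req, res) {
        // ...authenticate here, then respond...
        res.end("logged in");
    }
};

http.createServer(function (req, res) {
    var path = req.url.split("?")[0]; // strip the query string
    var handler = routes[path];
    if (handler) {
        handler(req, res);
    } else {
        res.statusCode = 404;
        res.end("not found");
    }
}).listen(8080);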
The /config/login? in this case is just an entry point into the server at login.yahoo.com. It could be an HTTP handler name, and when that handler gets invoked on that webserver, it just calls into some other server-side code (C++ or Java or anything else)...
So it's kinda hidden from you. They are (possibly) just executing a 'method' or a series of methods on the server side... which on completion return some data back to the browser via the same HTTP handler/entry point.
These server entry points or HTTP handlers get all the data from the browser when that form is POSTed, and it is forwarded to the actual handler for this call.
Search for HTTP handler modules.
