I have a web application with a big order form: 300+ input fields for item amounts, each with a button to increase and decrease the amount. Each change in an input field is sent to the server via Ajax. The buttons are "debounced": clicks are collected and the new amount is sent 200 ms after the last click. Now it seems that some requests fail, probably due to bad network conditions, but it could also be a server problem. This means that the displayed amount and the amount stored on the server differ. What strategies can I use to keep client and server in sync? At the moment I see two options:
Error handling on the client - when a request fails, re-send it (with a maximum number of tries); a rough sketch of this is below.
Calculate a checksum/hash of all amounts and send it together with the amount. If the server calculates a different checksum, it returns an error code and all field contents are sent to the server again.
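For reference, option 1 would look roughly like the sketch below (the /api/amounts endpoint and the payload shape are just placeholders):

// Rough sketch of option 1: re-send a failed update, up to a maximum number of tries.
async function sendAmount(fieldId, amount, maxTries = 3) {
  for (let attempt = 1; attempt <= maxTries; attempt++) {
    try {
      const res = await fetch("/api/amounts", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ fieldId, amount }),
      });
      if (res.ok) return true;   // server accepted the new amount
    } catch (err) {
      // network error: fall through and retry
    }
    // small back-off before the next attempt
    await new Promise(resolve => setTimeout(resolve, 500 * attempt));
  }
  return false;                  // give up; client and server may now differ
}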
Any other ideas or recommendations?
You can maintain a ChangeLog (a list of ids of records that are to be synced with the server) on the client side (just ids, not the record itself).
Then during a sync process make sure the server sends back an acknowledgement of having successfully processed a record.
If success, remove the record from the ChangeLog. If not, it stays and will be re-sent during the next sync process.
This is the protocol used by SyncML standard and may apply to your case.
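A minimal sketch of that idea in client-side JavaScript (the /api/sync endpoint and the payload shape are placeholders, not part of SyncML itself):

// Client-side ChangeLog: only record ids are queued, not the records themselves.
const changeLog = new Set();

function recordChanged(id) {
  changeLog.add(id);   // mark the record as needing a sync
}

async function sync(getRecordById) {
  for (const id of [...changeLog]) {
    try {
      const res = await fetch("/api/sync", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(getRecordById(id)),
      });
      if (res.ok) changeLog.delete(id);   // acknowledged: drop it from the ChangeLog
      // on failure the id stays and is re-sent during the next sync process
    } catch (err) {
      // network error: keep the id for the next sync process
    }
  }
}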
I've been scratching my head and trying this for about a week now, so I hope I can find some help here.
I'm making an application that provides real-time data to the client. I've thought about Server-Sent Events, but that doesn't allow per-user responses AFAIK.
WebSocket is also an option, but I'm not convinced about it. Let me sketch the scenario I built with WS:
Server fetches 20 records every second, and pushes these to an array
This array gets sent to all websocket connections every second; see the pseudocode below:
let items = [ { ... some-data ... } ];   // refreshed every second by the fetch described above

io.on("connection", socket => {
  // note: this sets up a new interval per connection, and each one broadcasts to every client
  setInterval(() => {
    io.emit("all_items", items);
  }, 1000);
});
The user can select some items in the front end; the websocket receives this per connection.
However, I'm convinced the way I'm taking this on is not a good one and is enormously inefficient. Let me sketch the scenario of what I want the program to achieve:
There is a database with, let's say, 1,000 records
User connects to the back-end from a (React) Front-end, gets connected to the main "stream" with about 20 fetched records (without filters), which the server fetches every second. SELECT * FROM Items LIMIT 20
Here comes the complex part:
The user clicks some checkboxes with custom filters (in the front-end) e.g. location = Shelf 2. Now, what's supposed to happen is that the websocket ALWAYS shows 20 records for that user, no matter what the filters are
I've imagined having a custom query for each user with custom options, but I think that's bad and will absolutely destroy the server if you have, say, 10,000 users
How would I be able to take this on? Please, everything helps a little, thank you in advance.
I have to do some guessing about your app. Let me try to spell it out while talking just about the server's functionality, without mentioning MySQL or any other database.
I guess your server maintains about 1k datapoints with volatile values. (It may use a DBMS to maintain those values, but let's ignore that mechanism for the moment.) I guess some process within your application changes those values based on some kind of external stimulus.
Your clients, upon first connecting to your server, start receiving a subset of twenty of those values once a second. You did not specify how to choose that initial subset. All newly-connected clients get the same twenty values.
Clients may, while connected, apply a filter. When they do that, they start getting a different, filtered, subset from among all the values you have. They still get twenty values. Some or all the values may still be in the initial set, and some may not be.
I guess the clients get updated values each second for the same twenty datapoints.
You envision running the application at scale, with many connected clients.
Here are some thoughts on system design.
Keep your datapoints in RAM in a suitable data structure.
Write JS code to apply the client-specified filters to that data structure; see the sketch after this list. If that code is efficient you can handle millions of data points this way.
Back up that RAM data structure to a DBMS of your choice; MySQL is fine.
When your server first launches, load the data structure from the database.
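As a rough sketch of the in-RAM structure and the filter code (the field names and the pickTwenty helper are illustrative assumptions, not your schema):

// Keep all datapoints in RAM; filtering 1,000 (or even millions of) plain objects is cheap.
const datapoints = [];   // e.g. [{ id: 1, location: "Shelf 2", value: 42 }, ...]

// Apply the client-supplied filters and return at most 20 matching records.
function pickTwenty(filters) {
  const matches = datapoints.filter(dp =>
    Object.entries(filters).every(([key, value]) => dp[key] === value)
  );
  return matches.slice(0, 20);
}

pickTwenty({});                         // no filter: the first 20 datapoints
pickTwenty({ location: "Shelf 2" });    // the first 20 datapoints on Shelf 2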
To get to the scale you mention you'll need to load-balance all this across at least five servers. You didn't mention the process for updating your datapoints, but it will have to fan out to multiple servers, somehow. You need to keep that in mind. It's impossible to advise you about that with the information you gave us.
But, YAGNI. Get things working, then figure out how to scale them up. (It's REALLY hard work to get to 10K users; spend your time making your app excellent for your first 10, then 100 users, then scale it up.)
Your server's interaction with clients goes like this (ignoring authentication, etc).
A client connects, implicitly requesting the "no-filtering" filter.
The client gets twenty values pushed once each second.
A client may explicitly request a different filter at any time.
Then the client continues to get twenty values, chosen by the selected filter.
So, most client communication is pushed out, with an occasional incoming filter request.
This lots-of-downbound-traffic, little-bit-of-upbound-traffic pattern is an ideal scenario for Server-Sent Events. WebSockets or socket.io are also fine. You could structure it like this:
New clients connect to the SSE endpoint at https://example.com/stream
When applying a filter they reconnect to another SSE endpoint at https://example.com/stream?filter1=a&filter2=b&filter3=b
The server sends data each second to each open SSE connection, applying the filter. (Streams work very well for this in Node.js; take a look at the server-side code for the signalhub package for an example.)
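A rough sketch of that SSE setup with Express (the in-RAM datapoints array and the filter matching are assumptions; adapt them to your schema):

// Sketch of the SSE endpoint: each open connection gets its own filter applied once a second.
const express = require("express");
const app = express();

const datapoints = [];   // kept in RAM and updated elsewhere by your ingest process

app.get("/stream", (req, res) => {
  res.set({
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  res.flushHeaders();

  const timer = setInterval(() => {
    // req.query holds the filters from the URL, e.g. /stream?location=Shelf%202
    const matches = datapoints
      .filter(dp => Object.entries(req.query).every(([k, v]) => String(dp[k]) === v))
      .slice(0, 20);
    res.write(`data: ${JSON.stringify(matches)}\n\n`);
  }, 1000);

  req.on("close", () => clearInterval(timer));   // stop pushing when the client disconnects
});

app.listen(3000);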
The project I am working on receives a request in which the main (or at least the largest) part consists of data coming from a database. Upon receiving it, my system parses all the data and ultimately concatenates the needed information to form a query, then inserts that data into my local database using that query.
It works fine with no issue at all, except that it takes too long to process when the request has over 6,000,000 characters and over 200,000 lines (or maybe fewer, but still large).
I have tested this with my system used as a server (the intended setup in production), and with Postman as well, but both drop the connection before the final response is built and sent. I have already seen that although the connection drops, my system still processes the data all the way to the query, and even sends its supposed response. But since the connection dropped somewhere in the middle of the processing, the response is ignored.
Is this about connection timeout in nodejs?
Or limit in 'app.use(bodyParser.json({limit: '10mb'}))'?
I really only see one way around this; I have done something similar in the past. Allow the client to send as much as you need/want. However, instead of trying to have the client wait around for some undetermined amount of time (at which point the client may time out), send an immediate response that is basically "we got your request and we're processing it".
Now the not-so-great part, but it's the only way I've ever solved this type of issue. In your "processing" response, send back some sort of id. Now the client can check once in a while to see if its request has been finished by sending you that id. On the server end you store the result for the client under the id you gave them. You'll have to make a few decisions about things like how long a response id is kept around and whether it can be requested more than once, things like that.
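Here is a bare-bones sketch of that pattern in Express; the route names, the in-memory job store, and processPayload are placeholders rather than a drop-in implementation:

// Accept the upload immediately, hand back a job id, and let the client poll for the result.
const express = require("express");
const crypto = require("crypto");
const app = express();
app.use(express.json({ limit: "10mb" }));

const jobs = new Map();   // jobId -> { status, result }; use a DB or Redis for anything real

app.post("/import", (req, res) => {
  const jobId = crypto.randomUUID();
  jobs.set(jobId, { status: "processing" });
  res.status(202).json({ jobId });   // respond right away, before the heavy work starts

  // Do the heavy parsing/inserting after the response has been sent.
  processPayload(req.body)
    .then(result => jobs.set(jobId, { status: "done", result }))
    .catch(err => jobs.set(jobId, { status: "failed", error: String(err) }));
});

// The client polls this with the id from the first response.
app.get("/import/:jobId", (req, res) => {
  res.json(jobs.get(req.params.jobId) || { status: "unknown" });
});

async function processPayload(body) {
  // placeholder for the real parse-and-insert work
  return { inserted: 0 };
}

app.listen(3000);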
1) I need the following requirement to be satisfied:
The client's request (a long-running process) should wait while the server is serving the request.
Current solution:
The client initiates the request, followed by a ping request every 5 seconds to check the request status, which also maintains the session.
2) If the client moves to another tab in the application and comes back, the client should still show the process status and the server should continue working on the request.
3) If the client closes the browser or logs out, the server should stop the process.
PS: The functionality is needed for all browsers after IE 9, plus Chrome and Firefox.
There are many ways to skin a cat, but this is how I would accomplish it.
1) Assign a unique identifier to the request (you have most likely done this already, since you're requesting the ready state every few seconds).
2) Set a member of the client's session data to the unique ID.
3) Have all your pages load the JS needed to continually check the process, but the JS should NOT use any identifier.
4) In the script that parses the Ajax request, have it check the session for the unique identifier, update an internal system (file or database) with the time of the last request and the unique identifier, and push back details if there are details to be pushed.
5) In another system (like a cron job), or within the process itself (if it runs in a loop, for example), check the same database or file for the unique identifier and its last timestamp. If the timestamp is too old, let's say 15 seconds (remember page load times may delay the 5-second interval), then kill the process if it's cron'd, or have the process terminate itself if the check is inside the process script; see the watchdog sketch below.
Logout kills the session data, which makes updating the table/file impossible (and a check should be there for this), so within a few seconds of logout the process stops.
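For illustration only, that "too old" check might look something like the following Node-style watchdog (the in-memory heartbeats map stands in for whatever file or database the Ajax handler updates):

// Kill any tracked process whose last heartbeat is older than 15 seconds.
const heartbeats = new Map();   // requestId -> { lastSeen, pid }

// Called by the Ajax handler each time a ping arrives for a request.
function recordPing(requestId, pid) {
  heartbeats.set(requestId, { lastSeen: Date.now(), pid });
}

// Watchdog loop: runs every 5 seconds, like a cron job.
setInterval(() => {
  const now = Date.now();
  for (const [requestId, { lastSeen, pid }] of heartbeats) {
    if (now - lastSeen > 15000) {   // no ping for 15 s: the client is gone
      try { process.kill(pid); } catch (e) { /* process already gone */ }
      heartbeats.delete(requestId);
    }
  }
}, 5000);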
You will not be able to find a reliable solution for logout. window.onbeforeunload will not allow you to communicate with the server (you can only prompt the user with the built-in dialog, and that's pretty much it). Perhaps, instead of trying to capture logout/abandon, add some logic to the server's process to wait for those pings (maybe allow 30 seconds of no communication before abandoning); that way you're not wasting the server's cycles that much and you still have the monitoring working as before.
I have a JavaScript application that regularly saves new and updated data. However I need it to work on slow connection as well.
Data is submitted in one single HTTP POST request. The response will return newly inserted ids for newly created records.
What I'm finding is that data submitted is fully saved, however sometimes the return result times out. The browser application therefore does not know the data has been submitted successfully and will try to save it again.
I know I can detect the timeout in the browser, but how can I make sure the data is saved correctly?
What are some good methods of handling this case?
I see from here https://dba.stackexchange.com/a/94309/2599 that I could include a pending state:
Get transaction number from server
send data, gets saved as pending on server
if pending transaction already exists, do not overwrite data, but send same results back
if success received, commit pending transaction
if error back, retry later
if timeout, retry later
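In rough client-side code, that flow would be something like the following (the endpoint names are made up):

// Sketch of the pending-transaction flow above.
async function saveWithTransaction(records) {
  // 1. get a transaction number from the server
  const { txId } = await (await fetch("/api/transactions", { method: "POST" })).json();

  while (true) {
    try {
      // 2. send the data; the server stores it as pending under txId and must not
      //    overwrite it if the same txId arrives again, just send back the same results
      const res = await fetch(`/api/transactions/${txId}/data`, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(records),
      });
      if (res.ok) {
        // 3. success received: commit the pending transaction
        await fetch(`/api/transactions/${txId}/commit`, { method: "POST" });
        return await res.json();   // the ids of the newly created records
      }
    } catch (err) {
      // error or timeout: fall through and retry later
    }
    await new Promise(r => setTimeout(r, 2000));
  }
}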
However, is there a simpler solution?
Really, it seems you need to get to the bottom of why the client thinks the data was not saved when it actually was. If the issue is purely one of timing, then perhaps the client timeout just needs to be lengthened so it doesn't give up too soon, or the amount of data you're sending back in the response needs to be reduced so the response comes back quicker on a slow link.
But, if you can't get rid of the problem that way, there are a bunch of possibilities to program around the issue:
The server can keep track of the last save request from each client (or a hash of such request) and if it sees a duplicate save request come in from the same client, then it can simply return something like "already-saved".
The code flow in the server can be modified so that a small response is sent back to the client immediately after the database operation has committed (no delays for any other types of back-end operations), thus lessening the chance that the client would timeout after the data has been saved.
The client can coin a unique ID for each save request; if the server sees the same saveID used on multiple requests, it knows the client is just trying to save the same data again (sketched below).
After any type of failure, before retrying, the client can query the server to see if the previous save attempt succeeded or failed.
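A minimal sketch of the unique-saveID idea (the /api/save endpoint and the X-Save-Id header are assumptions; the point is only that every retry re-uses the same id):

// The client coins a saveID once per logical save; retries send the same id,
// so the server can recognise a repeat and return the stored result instead of saving twice.
async function save(records) {
  const saveId = crypto.randomUUID();   // browser crypto API
  for (let attempt = 1; attempt <= 5; attempt++) {
    try {
      const res = await fetch("/api/save", {
        method: "POST",
        headers: { "Content-Type": "application/json", "X-Save-Id": saveId },
        body: JSON.stringify(records),
      });
      if (res.ok) return await res.json();   // the newly inserted ids
    } catch (err) {
      // timed out or dropped: retry with the same saveId
    }
  }
  throw new Error("save failed after 5 attempts");
}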
You can track the number of retries in a simple global int.
You can also automatically retry, but this isn't good for an auto-save app.
A third option is to use one of the auto-save plugins for jQuery.
A few suggestions:
Increase the timeout, and don't treat a timeout as success.
You can flush the output for each record as soon as you get it, using ob_flush and flush.
Since you are making requests at a regular interval, check connection_aborted on each API call. If the client has disconnected, you can save the response in a temp file and, on the next request, append the last response to the new one, but this method is more resource-consuming.
I have an ASP.NET page where a request is made and after a while the server returns either a new page or just a file for download. I want to indicate on screen that the server is "Processing..." while it takes time before returning data.
Calling JavaScript when the user hits submit is easy. Also, the page reload on postback causes any "Processing..." indicators (some DIVs popping up at the top of the page) to go away.
My problem is mostly the case where the data returned by the server is not a page but a file to store. How can I catch the moment the server starts to return data, and run JavaScript to remove the "Processing" DIV? Is there even a way to do so when the reply has a different MIME type?
In which cases is it even possible?
There are a couple of ways to approximate what you're trying to do with timers and assumptions about what happened, but to really do what you're describing, you need to be polling the server for an indication that the download occurred.
What I would do is take the file, Response.WriteFile it, and then write a flag to some store, either a db, or the file system, or whatever, that uniquely identifies that the transaction has completed. On the client side, your script is polling the server, and on the server, the poll response is checking the store for the flag indicating that the download has occurred.
The key here is that you have to take finer control of the download process itself...merely redirecting to the file is not going to give you the control you need. If you need more specifics on how to accomplish any of these steps, let me know.
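For example, the client-side polling piece could be as small as the sketch below; the /download-status endpoint, the token parameter, and the Download.aspx URL are assumptions, and the "done" flag is whatever you wrote to your store after Response.WriteFile completed:

// Show the "Processing..." DIV, start the download, then poll until the server
// reports that the completion flag for this token has been written.
function downloadWithIndicator(token) {
  document.getElementById("processing").style.display = "block";
  window.location = "/Download.aspx?token=" + encodeURIComponent(token);

  var timer = setInterval(function () {
    fetch("/download-status?token=" + encodeURIComponent(token))
      .then(function (res) { return res.json(); })
      .then(function (status) {
        if (status.done) {   // server-side flag was written after the file was sent
          document.getElementById("processing").style.display = "none";
          clearInterval(timer);
        }
      });
  }, 1000);
}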