Does JavaScript WebSocket.send method block? - javascript

If I'm sending a large Blob or ArrayBuffer over a JavaScript WebSocket via its send method... does the send method call block until the data is sent, or does it make a copy of the data to send asynchronously so the call can return immediately?
A related (unanswered) question, as I interpret it, is whether a rapid series of sends will cause onmessage events to be delayed, as someone seems to have described happening in Mobile Safari: Apparent blocking behaviour in JavaScript websocket on mobile Safari

Based on the description of the bufferedAmount attribute, I have deduced that send must return immediately, because otherwise bufferedAmount would always be zero. If it is non-zero, then there must be data buffered from a prior call to send, and if send buffers data, there's no reason for it to block.
From http://dev.w3.org/html5/websockets/
The bufferedAmount attribute must return the number of bytes of
application data (UTF-8 text and binary data) that have been queued
using send() but that, as of the last time the event loop started
executing a task, had not yet been transmitted to the network. (This
thus includes any text sent during the execution of the current task,
regardless of whether the user agent is able to transmit text
asynchronously with script execution.) This does not include framing
overhead incurred by the protocol, or buffering done by the operating
system or network hardware. If the connection is closed, this
attribute's value will only increase with each call to the send()
method (the number does not reset to zero once the connection closes).
In this simple example, the bufferedAmount attribute is used to ensure
that updates are sent either at the rate of one update every 50ms, if
the network can handle that rate, or at whatever rate the network can
handle, if that is too fast.
var socket = new WebSocket('ws://game.example.com:12010/updates');
socket.onopen = function () {
  setInterval(function() {
    if (socket.bufferedAmount == 0)
      socket.send(getUpdateData());
  }, 50);
};
The bufferedAmount attribute can also be used to saturate the network
without sending the data at a higher rate than the network can handle,
though this requires more careful monitoring of the value of the attribute over time.
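To make the non-blocking behaviour concrete, here is a minimal sketch along the same lines as the spec example: send() queues the data and returns immediately, and bufferedAmount is polled to pace further sends. The URL and getNextChunk() are placeholders, not part of the spec.
// Sketch: pace large binary sends by watching bufferedAmount drain.
var socket = new WebSocket('ws://example.com/upload'); // placeholder URL
socket.binaryType = 'arraybuffer';
socket.onopen = function () {
  var timer = setInterval(function () {
    // send() returns immediately; unsent bytes show up in bufferedAmount.
    if (socket.bufferedAmount === 0) {
      var chunk = getNextChunk(); // placeholder: e.g. an ArrayBuffer slice of a large file
      if (chunk) {
        socket.send(chunk);
      } else {
        clearInterval(timer);
      }
    }
  }, 50);
};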

Related

How do I know if a message has been sent successfully when using a raw WebSocket?

Unlike websocket/ws, whose send accepts an optional callback that fires once the data has been written out, the browser's raw WebSocket.prototype.send only accepts the message itself.
Note that sent != received... I'm assuming you want to make sure a message was received.
Implement ACK. It's super easy and it's the only way I know to make sure the message was received.
A good enough approach is to attach a message ID on the client side and send that ID as part of the message (e.g. as msgID in a JSON payload). I use a timer ID as the message ID, because I cancel the timer when the ACK arrives and it's a convenient way to keep the timer data together.
When the server recognizes a msgID, it should automatically send an ACK with the same message ID before processing the rest of the message (i.e., a JSON for {"event":"ACK", "data":m["msgID"]}).
If you're using binary data, you can prepend a fixed-size ID to each message (say, 8 bytes), or prefix a length indicator for a variable-sized ID.
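A rough client-side sketch of this pattern, assuming a JSON message format like the one above; the URL and the 3-second retry window are illustrative:
var socket = new WebSocket('ws://example.com/updates'); // placeholder URL
var pendingAcks = {};
function sendWithAck(payload) {
  // Use the setTimeout ID as the message ID (assumes IDs stay unique per window).
  var msgID = setTimeout(function () {
    delete pendingAcks[msgID];
    // No ACK within 3 seconds: retry or report failure here.
  }, 3000);
  pendingAcks[msgID] = payload;
  socket.send(JSON.stringify({ msgID: msgID, data: payload }));
}
socket.onmessage = function (event) {
  var m = JSON.parse(event.data);
  if (m.event === 'ACK') {
    clearTimeout(m.data);       // server echoes the original msgID in "data"
    delete pendingAcks[m.data];
  }
};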
EDIT: Reflecting on Remy's comment:
Yes, using timeout IDs might rely heavily on the way the ID "pool" is implemented.
A safer design might avoid that assumption by using a counter or some other ID scheme.
However, it should also be noted that on all the browsers with which I tested this approach, timeout IDs are always unique.
On the Mozilla Developer Documentation Site, it states that:
It is guaranteed that a timeout ID will never be reused by a subsequent call to setTimeout() or setInterval() on the same object (a window or a worker). However, different objects use separate pools of IDs.
I think this assumption is correct across the board for all browsers (subject to the ID type overflowing and resetting itself to zero).

Does jquery post ever timeout?

Official documentation at jQuery does not mention it.
Possible confusion: I know I can use ajax to gain control over timeout, but my question is different.
Scenario:
I am using post to grab data from a backend which I know will take a long (sometimes very very long) time to load.
Question:
Will my javascript request ever timeout or will it always wait until backend is loaded, even if it takes a few minutes?
jQuery uses the native XMLHttpRequest object to make requests.
The XMLHttpRequest.timeout property is an unsigned long representing the number of milliseconds a request can take before automatically being terminated. The default value is 0, which means there is no timeout.
Reading the source code of the jQuery library, the ajax method does not set a timeout in any way, so it is safe to say that the request itself does not time out.
But you can explicitly set a timeout in both jQuery and the native XMLHttpRequest.
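For example (the 5-second value and the URL are arbitrary):
// jQuery: per-request timeout in milliseconds.
$.ajax({
  url: '/slow-endpoint',   // placeholder URL
  type: 'POST',
  timeout: 5000,           // abort the request after 5 seconds
  success: function (data) { console.log('done', data); },
  error: function (xhr, status) {
    if (status === 'timeout') console.log('request timed out');
  }
});
// Native XMLHttpRequest: same idea via the timeout property.
var xhr = new XMLHttpRequest();
xhr.open('POST', '/slow-endpoint');
xhr.timeout = 5000;
xhr.ontimeout = function () { console.log('request timed out'); };
xhr.send();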
This does not mean that your request will never time out, though: the server usually imposes its own timeout, and long-running responses are typically terminated from the server side. You could consider chunking or streaming the response as a safer and more convenient solution.
jQuery ajax source on GitHub:
https://github.com/jquery/jquery/tree/2d4f53416e5f74fa98e0c1d66b6f3c285a12f0ce/src/ajax
The timeout of a request is, by default, controlled by the browser and the receiving server, whichever cancels the request first. I believe most browsers have a 60 second timeout by default. The server's timeout can be any arbitrary value.
Will my javascript request ever timeout or will it always wait until backend is loaded, even if it takes a few minutes?
The answer to this is therefore: yes, your request will time out at some arbitrary point. If you want to control how long your users wait for a request, you can set this time explicitly with the timeout property of the $.ajax call. This overrides the browser default, although the server can still terminate the request on its own schedule.
15 seconds should be more than enough. If a request is taking longer than that I'd suggest you change the pattern you're using to generate the response.
HTTP request timeout is a server-side configuration, not a client-side configuration. Requests submitted via jQuery code are no different.
You might want to test the return code from the last request and add error handling to your code (for example, resubmit the request).
Always check the response code; a common strategy is to retry. https://www.lifewire.com/g00/troubleshooting-network-error-messages-4102727
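A minimal retry wrapper along those lines (the endpoint, payload, and retry count are illustrative):
// Sketch: resubmit a failed request a limited number of times.
function postWithRetry(url, data, retriesLeft) {
  $.post(url, data)
    .done(function (response) {
      console.log('saved', response);
    })
    .fail(function () {
      if (retriesLeft > 0) {
        postWithRetry(url, data, retriesLeft - 1);
      } else {
        console.error('giving up after several attempts');
      }
    });
}
postWithRetry('/save', { value: 42 }, 3); // placeholder URL and payload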

Is it possible to cancel asynchronous call independently of its current state?

When I type text into my textfield widget I send request with every character typed into it to get matching data from the server.
When I type really fast, I swarm the server with requests and this freezes my control. I managed to create a throttling mechanism where I set how many milliseconds the client should wait before sending a request, but this requires choosing an arbitrary constant wait time. A better solution would be to simply cancel the previous request when the next key is pressed.
Is it possible to cancel AJAX request independently of its current state? If yes, how to achieve this?
Call XMLHttpRequest.abort()
See: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/abort
You'll have to track your requests somehow, perhaps in an array.
var requests = [];
var xhr = new XMLHttpRequest(),
    method = "GET",
    url = "https://developer.mozilla.org/";
xhr.open(method, url, true);
xhr.send();
requests.push(xhr);
// later: abort everything still tracked
requests.forEach(function (r) { r.abort(); });
MDN says:
The XMLHttpRequest.abort() method aborts the request if it has already
been sent. When a request is aborted, its readyState is set to 0
(UNSENT), but the readystatechange event is not fired.
What's important to note here is that while you will be able to .abort() requests on the client side (and thus not have to worry about the server's response), you are still swarming your server because all those requests are still being sent.
My opinion is that you had it right the first time, by implementing a mechanism that limits the frequency of AJAX requests. You mentioned that you had a problem with this freezing your control (I assume you mean that the browser is either taking longer to respond to user actions, or stops responding completely), and this could be a sign that there is a problem with the way your application handles asynchronous code.
Make sure you are using async APIs like Promise correctly, avoid loops that do heavy processing or just wait around in client code, and make your event processing (i.e your AJAX callback) simple and fast to reduce the impact on the user.
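For the original concern, a short debounce combined with aborting the previous in-flight request keeps both the server load and the client-side work small. In this sketch, textField, showSuggestions(), the /suggest endpoint, and the 200 ms delay are all assumptions:
// Sketch: debounce keystrokes and abort the previous in-flight request.
var debounceTimer = null;
var currentXhr = null;
textField.addEventListener('input', function (e) {
  clearTimeout(debounceTimer);
  debounceTimer = setTimeout(function () {
    if (currentXhr) currentXhr.abort();   // drop the stale request
    var xhr = new XMLHttpRequest();
    currentXhr = xhr;
    xhr.open('GET', '/suggest?q=' + encodeURIComponent(e.target.value));
    xhr.onload = function () {
      showSuggestions(JSON.parse(xhr.responseText));
    };
    xhr.send();
  }, 200);
});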

handle HTTP time out for ajax save

I have a JavaScript application that regularly saves new and updated data. However I need it to work on slow connection as well.
Data is submitted in one single HTTP POST request. The response will return newly inserted ids for newly created records.
What I'm finding is that the submitted data is fully saved, but sometimes the response times out. The browser application therefore does not know the data has been saved successfully and will try to save it again.
I know I can detect the timeout in the browser, but how can I make sure the data is saved correctly?
What are some good methods of handling this case?
I see from here https://dba.stackexchange.com/a/94309/2599 that I could include a pending state:
Get transaction number from server
send data, gets saved as pending on server
if pending transaction already exists, do not overwrite data, but send same results back
if success received, commit pending transaction
if error back, retry later
if timeout, retry later
However I'm looking for a simpler solution?
Really, it seems you need to get to the bottom of why the client thinks the data was not saved, but it actually was. If the issue is purely one of timing, then perhaps a client timeout just needs to be lengthened so it doesn't give up too soon or the amount of data you're sending back in the response needs to be reduced so the response comes back quicker on a slow link.
But, if you can't get rid of the problem that way, there are a bunch of possibilities to program around the issue:
The server can keep track of the last save request from each client (or a hash of each such request) and if it sees a duplicate save request come in from the same client, then it can simply return something like "already-saved".
The code flow in the server can be modified so that a small response is sent back to the client immediately after the database operation has committed (no delays for any other types of back-end operations), thus lessening the chance that the client would timeout after the data has been saved.
The client can coin a unique ID for each save request; if the server sees the same save ID used on multiple requests, it knows the client is just retrying the same data (a sketch follows this list).
After any type of failure, before retrying, the client can query the server to see if the previous save attempt succeeded or failed.
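A sketch of the client-coined unique ID mentioned above; the field names, endpoint, and retry policy are assumptions, and a real server would need to persist the IDs it has already processed:
// Client: reuse the same saveId on every retry of the same payload so the
// server can recognise and ignore duplicate saves.
function saveOnce(payload, retriesLeft) {
  var saveId = Date.now() + '-' + Math.random().toString(36).slice(2);
  function attempt(left) {
    $.ajax({
      url: '/save',                               // placeholder endpoint
      type: 'POST',
      contentType: 'application/json',
      data: JSON.stringify({ saveId: saveId, payload: payload }),
      timeout: 30000
    }).fail(function () {
      // Same saveId on the retry, so a duplicate arriving at the server is harmless.
      if (left > 0) setTimeout(function () { attempt(left - 1); }, 2000);
    });
  }
  attempt(retriesLeft);
}
saveOnce({ name: 'example record' }, 3);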
You can track the number of retries with a simple global counter.
You can also automatically retry, but this isn't good for an auto save app.
A third option is to use one of the auto-save plugins for jQuery.
A few suggestions:
Increase the timeout, and don't treat a timeout as success.
You can flush the output for each record as soon as you have it, using ob_flush() and flush().
Since you are making requests at a regular interval, check connection_aborted() on each API call; if the client has disconnected, you can save the response in a temp file and, on the next request, append the previous response to the new one, but this approach is more resource-intensive.

Checking sync state of client and server when using Ajax

I have a web application with a big order form: 300+ input fields for item amounts, each with buttons to increase and decrease the amount. Each change in an input field is sent to the server via Ajax. The buttons are "debounced": clicks are collected and the new amount is sent 200 ms after the last click. Now it seems that some requests fail, probably due to bad network conditions, but it could also be a server problem. This means that the displayed amount and the amount stored on the server differ. What strategies can I use to keep client and server in sync? At the moment I see two options:
Error handling on the client - when a request fails, re-send it (with a maximum number of tries).
Calculate a checksum/hash of all amounts and send it together with the amount. If the server calculates a different amount, it returns an error code and all field contents are sent to the server.
Any other ideas or recommendations?
You can maintain a ChangeLog (a list of ids of records that are to be synced with the server) on the client side (just ids, not the record itself).
Then during a sync process make sure the server sends back an acknowledgement of having successfully processed a record.
If success, remove the record from the ChangeLog. If not, it stays and will be re-sent during the next sync process.
This is the protocol used by the SyncML standard and may apply to your case.
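A minimal client-side sketch of that idea; sendRecord() is a placeholder for whatever Ajax call you use, and the callback shape is an assumption:
// Sketch: a client-side ChangeLog of record ids awaiting acknowledgement.
var changeLog = new Set();            // ids of records not yet confirmed
function recordChanged(id) {
  changeLog.add(id);                  // mark the record as needing a sync
}
function syncAll() {
  changeLog.forEach(function (id) {
    // sendRecord() stands in for your Ajax call; its callback should report
    // whether the server acknowledged the record.
    sendRecord(id, function (acknowledged) {
      if (acknowledged) changeLog.delete(id);
      // otherwise the id stays in the log and is re-sent on the next sync
    });
  });
}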
