Reduce AJAX request size. Simple chat with Polling system - javascript

NOTICE: I have since replaced my polling system with WebSockets, but I still want to know the answer to the questions below.
I'm trying to reduce the size of the AJAX requests in a traditional-polling message system, but I don't know how to do it:
$chatbox = $("#chatbox");

setInterval(function () {
    // Send the SHA-1 of the chatbox HTML content to check for changes.
    $.post("post.php", {checksum: hex_sha1($chatbox.html())}, function (data, status) {
        switch (status) {
            case "success":
                // If the checksum computed by "post.php" differs from mine
                // (i.e. there are changes), data is non-empty; assign it as
                // the new content of the chatbox.
                if (data) {
                    $chatbox.html(data);
                    $chatbox.scrollTop($chatbox[0].scrollHeight);
                }
                break;
            default:
                $chatbox.html('Connection error...');
                break;
        }
    });
}, 1000);
Well, as you can see, I use setInterval() with 1000 milliseconds as the parameter, and thanks to the SHA-1 checksum system I can reduce the size of every AJAX response to 343 B (except when "post.php" returns some new message, obviously).
Questions:
Why do all my AJAX requests always have the same size (343 B), even though I changed the hash from SHA-1 (20 B) to MD5 (16 B)?
My checksum variable (SHA-1) occupies 20 B: where do the remaining 323 B come from?
Could I reduce the AJAX request size any further? How?
NOTE:
hex_sha1() is an implementation of the SHA-1 algorithm in JavaScript: http://pajhome.org.uk/crypt/md5/sha1.html
NOTE 2:
Unfortunately I can't use a server-push technique like node.js. I can only use JavaScript (client-side) and PHP.

Why not use a plain JavaScript AJAX request? If your AJAX payload itself is too large, the only real fix is to send less data in it.
Do you want something like Facebook's AJAX polling? Do it like this in the server-side PHP:
// Long polling: hold the request open until there is new chat data.
$chat_data = null;
while (!$chat_data) {
    // While there is no chat data, idle the request without disconnecting
    // the client from the AJAX request. fetch_new_chat_data() is a
    // placeholder for however you look up new messages (database, file, etc.).
    $chat_data = fetch_new_chat_data();
    sleep(1);
}
exit(json_encode($chat_data));
On the JavaScript client side:
function repoll() {
    var chat_poll = new XMLHttpRequest();
    // The chat_req variable holds the POST data sent to the server.
    var chat_req = new FormData();
    chat_req.append("do", "chatpoll");
    chat_poll.open("POST", "post.php");
    chat_poll.onload = function () {
        // Do something here with the chat data, then
        // re-poll the server for the next message.
        repoll();
    };
    chat_poll.send(chat_req);
}
repoll();
By doing this, you're implementing Facebook-like server polling.
For the WebSocket example on the JavaScript client side:
web_socket = new WebSocket("ws://[thesocket]:[theport]");

web_socket.onmessage = function (w) {
    // This fires whenever a message is received from the websocket;
    // the message itself is in the w.data variable.
    alert("Data Received: " + w.data);
};

// To send data to the web socket, do this:
web_socket.send("the data you want. I prefer JSON for configuration and chat data, or XML if you want");

Here's my take on your questions, even though you'd be better off using a library like socket.io, which has fallback support for older browsers (simulating WebSockets via long-polling and the like).
Why do all my AJAX requests always have the same size (343 B), even though I
changed the hash from SHA-1 (20 B) to MD5 (16 B)?
Most HTTP communication between browser and server is compressed with gzip by default. The bulk of your request/response stream consists of HTTP headers, so the 4 bytes of difference in your hashing algorithm's output get lost in the noise and, after gzip compression, make no effective difference in the transferred size.
My checksum variable (SHA-1) occupies 20 B: where do the remaining 323 B come from?
See above; to check for yourself, you can use an HTTP monitor, tcpdump, or your browser's developer tools to inspect the raw data transferred.
Could I reduce the AJAX request size any further? How?
WebSocket has far less per-message overhead than HTTP requests, so using it seems the best option here (and polling in intervals is almost never a good idea; even without WebSocket, you would be better off implementing long-polling on your server).

I have assembled a simple page with a jQuery $.post request. It produces request headers of this form:
POST /post_.js HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0
Accept: */*
Accept-Language: it-IT,it;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Referer: http://localhost/test.html
Content-Length: 49
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
You can see the request and response headers by using Firebug on Firefox or the F12 developer tools on Chrome.
In my opinion the extra bytes are those of the HTTP request headers shown above.

Related

How to set HTTP headers for client (browser) while sending a response containing the headers to be set and the redirect url from backend (Node.js)?

I am really sorry if I am missing something very basic here, but here goes...
BRIEF: My question is the same as the one found here: How to set headers while requesting a page in nodejs?, and Mr Khan's answer there falls just short of explaining how to set the headers from the backend (Node.js). I would have commented there, but I don't have enough karma for that :(
This is what I've done so far:
const newTokens = await jwt.generateJWT(user); // generateJWT is a custom function that returns two tokens
res.setHeader("Authorization", `Bearer ${newTokens.accessToken}`);
res.setHeader("refresh-token", newTokens.refreshToken);
return res.redirect("/member/dashboard");
The above code sends the HTTP headers to the browser, but it does not get them set in the browser for the domain.
The response headers as in Firefox are:
HTTP/1.1 302 Found
X-Powered-By: Express
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiNjA3MDgyNDlmNjBjNjE1YWU4NTdjMmU4IiwidXNlcl9yb2xlIjoibWVtYmVyIiwidXNlcl9uYW1lIjoiQWxleCIsImlhdCI6MTYxNzk5OTM5NywiZXhwIjoxNjE3OTk5OTk3fQ.Odb6TrWBnf9dq00T_ddxD9hqVjhFQYdqA5pP2u6y-2k
refresh-token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoiNjA3MDgyNDlmNjBjNjE1YWU4NTdjMmU4IiwiaWF0IjoxNjE3OTk5Mzk3LCJleHAiOjE2MTc5OTk5OTd9.kY9DZWprHxZFMI3btX-yzZxiUrqZY3kdmxzyc3apAyw
Location: /member/dashboard
Vary: Accept
Content-Type: text/html; charset=utf-8
Content-Length: 78
Date: Fri, 09 Apr 2021 20:16:37 GMT
Connection: keep-alive
Note: the "Authorization" and "refresh-token" headers have been sent, and the redirect "Location" has also been set, causing the 302 status code.
Unfortunately, the headers are not being sent on subsequent requests from the client, since they never get set in the browser.
Please let me know if I am doing something obviously wrong.
EDIT: The reason I am trying to do this directly from the backend is that I don't want to depend on the frontend to handle this job, as I do not intend to implement a framework-specific frontend, i.e., it should work across all frameworks.
PS: Forgive me if my English is bad, it isn't my native language.
When you do res.redirect(), the browser will NOT apply the headers you set on that response to the redirected request. Those headers are part of the response back to the requesting client and that's all. They will NOT be sent with the redirected request.
Headers on the redirected request cannot be controlled by the server. Browsers just don't work that way so you can't design things that way if you're relying on a standard browser to be the client.
If you're using redirection and you want something sent back with the redirection, then your best option is typically to put stuff into a cookie or into the query string of the redirect URL. That cookie or query string will be sent with the redirected request and the server can get it from there.
You could also establish a server-side session and put data into that session. This will set a session cookie which will be present on future client requests and the server can then access data from the server-side session object on future requests from that client.
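As an illustration, here is a minimal sketch of the cookie approach in Express, reusing the jwt.generateJWT helper from the question (the cookie names and options are assumptions, and this runs in the same async route handler as the original code):

// Put the tokens in cookies instead of plain response headers. The browser
// stores cookies for the domain and sends them automatically with the
// redirected request and with all future requests.
const newTokens = await jwt.generateJWT(user);
res.cookie("accessToken", newTokens.accessToken, { httpOnly: true, secure: true });
res.cookie("refreshToken", newTokens.refreshToken, { httpOnly: true, secure: true });
return res.redirect("/member/dashboard");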

HTTP POST using XHR with Chunked Transfer Encoding

I have a REST API that accepts an Audio file via an HTTP Post. The API has support for Transfer-Encoding: chunked request header so that the file can be uploaded in pieces as it is being created from a recorder running on the client. This way the server can start processing the file as it arrives for improved performance. For example:
HTTP 1.1 POST .../v1/processAudio
Transfer-Encoding: chunked
[Chunk 1 256 Bytes] (server starts processing when arrives)
[Chunk 2 256 Bytes]
[Chunk 3 256 Bytes]
...
The audio files are typically short and are around 10K to 100K in size. I have C# and Java code that is working so I know that API works. However, I cannot seem to get the recording and upload working in a browser using javascript.
Here is my Test Code that does a POST to localhost with Transfer-Encoding:
<html>
<script type="text/javascript">
    function streamUpload() {
        var blob = new Blob(['GmnQPBU+nyRGER4JPAW4DjDQC19D']);
        var xhr = new XMLHttpRequest();
        // Add any event handlers here...
        xhr.open('POST', '/', true);
        xhr.setRequestHeader("Transfer-Encoding", "chunked");
        xhr.send(blob);
    }
</script>
<body>
    <div id='demo'>Test Chunked Upload using XHR</div>
    <button onclick="streamUpload()">Start Upload</button>
</body>
</html>
The problem is that I'm receiving the following error in Chrome:
Refused to set unsafe header "Transfer-Encoding"
streamUpload # uploadTest.html:14
onclick # uploadTest.html:24
After looking at the XHR documentation I'm still confused, because it does not talk about unsafe request headers. I'm wondering whether XHR simply does not allow or implement Transfer-Encoding: chunked for HTTP POST?
I've looked at workarounds using multiple XHR.send() requests and WebSockets, but both are undesirable because they would require significant changes to the server APIs, which are already in place, simple, stable and working. The only issue is that we cannot seem to POST from a browser with pseudo-streaming via the Transfer-Encoding: chunked request header.
Any thoughts or advice would be very helpful.
As was mentioned in a comment, you're not allowed to set that header as it's controlled by the user agent.
For the full set of headers, see 4.6.2 The setRequestHeader() method from W3C XMLHttpRequest Level 1, and note that Transfer-Encoding is one of the headers controlled by the user agent so that it can manage these aspects of transport:
Accept-Charset
Accept-Encoding
Access-Control-Request-Headers
Access-Control-Request-Method
Connection
Content-Length
Cookie
Cookie2
Date
DNT
Expect
Host
Keep-Alive
Origin
Referer
TE
Trailer
Transfer-Encoding
Upgrade
User-Agent
Via
There is a similar list of forbidden header names in the WHATWG Fetch Living Standard.
https://fetch.spec.whatwg.org/#terminology-headers
As other replies have already mentioned, you aren't allowed to set the "Transfer-Encoding" header yourself.
However, you also don't actually need HTTP chunked transfer encoding in order to incrementally stream a file to your server and start processing parts of it right away. A regular HTTP POST works just fine for that. Even though it is transmitted as a single HTTP request, I believe the streaming/chunking magic happens for you at the TCP level (others are welcome to correct me on where exactly the magic happens). I can confirm this works because I've done it with node.js and Express on the backend, and I'm sure it probably works with other server-side technologies as well.
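For illustration, here is a minimal sketch of that Node/Express backend (the route path mirrors the question's API; processAudioChunk is a hypothetical incremental processor, and this assumes no body-parsing middleware is mounted on the route):

const express = require("express");
const app = express();

// Hypothetical incremental processor; here it just logs the chunk size.
function processAudioChunk(chunk) {
    console.log("received", chunk.length, "bytes");
}

app.post("/v1/processAudio", (req, res) => {
    // Express exposes the request as a readable stream, so each piece of
    // the POST body arrives here as soon as the client has sent it.
    req.on("data", processAudioChunk);
    req.on("end", () => res.sendStatus(200));
});

app.listen(3000);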
HTTP chunked transfer encoding is only useful when you DON'T know the size of the stream you are going to send in advance (live video, video conference calls, remote desktop sessions, chats, etc.). For those cases, WebSockets are a more widely deployed solution to the same problem:
https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API
For your use case, where you DO know the size of the file in advance, you are probably better off sticking with your XmlHttpRequest and abandoning the chunked transfer encoding. Alternatively, you can give the newer Fetch API a try:
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
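For instance, a minimal sketch of the same test upload using fetch (same test blob as above; I'm assuming the real endpoint accepts a raw request body):

// POST the blob as a single request body; the browser computes
// Content-Length itself, so no Transfer-Encoding header is needed.
var blob = new Blob(['GmnQPBU+nyRGER4JPAW4DjDQC19D']);

fetch('/v1/processAudio', { method: 'POST', body: blob })
    .then(function (response) {
        console.log('Upload finished with status', response.status);
    });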

No auto-decompression of gzipped json on browser's side when using angular's http get method

I'm trying to load a json file using angular (v1.2.6):
$http.get('myfile.json').success(function(data) { ... });
This works fine, except when I create a (static) compressed version of the file on the server, and try to load 'myfile.json.gz' instead (to reduce the loading time).
The request headers seem correct (Chrome 31.0 on Mac) (as stated here and here):
Accept: application/json, text/plain, */*
Accept-Encoding: gzip,deflate,sdch
while the response headers contain:
Connection: close
Accept-Ranges: bytes
Content-Length: 702468
Content-Type: application/x-gzip
Content-Encoding: gzip
However, the content is not automatically decompressed by the client browser, as I understand it should be. data.length is ~700 KB instead of the original uncompressed ~3 MB.
Although this one post suggests it needs to be done manually, I understand decompression should be automatic and transparent.
My question is: should it be decompressed automatically, and why isn't that happening here?
Your content type should not be "application/x-gzip"; it should stay:
application/json
The content encoding is enough to say to the browser that the content is zipped.
What HTTP server are you running?
You should configure it to return the correct mime type, regardless of the .gz extension.
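As an illustration, here is a minimal Node/Express sketch of serving the pre-compressed file with the right headers (the file name comes from the question; the choice of server is an assumption):

const express = require("express");
const fs = require("fs");
const path = require("path");
const app = express();

app.get("/myfile.json", (req, res) => {
    // Keep the JSON mime type and declare the gzip encoding so the
    // browser decompresses the body transparently before Angular sees it.
    res.set("Content-Type", "application/json");
    res.set("Content-Encoding", "gzip");
    fs.createReadStream(path.join(__dirname, "myfile.json.gz")).pipe(res);
});

app.listen(3000);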

Setting the timeout when using Dojo

I recently updated an app that uses Dojo to send asynchronous requests to my server, which serves these requests with CGI.
My problem is as follows. For example, the variable that makes the requests is
parent.sc_dojo.io.script.jsonp_sc_dojoIoScript2
The new service takes too long to send the response, approximately 40-60 seconds, and after this time the variable parent.sc_dojo.io.script.jsonp_sc_dojoIoScript2 appears as UNDEFINED.
I made an analysis using Firebug; see the following image for more details.
The response from the server has the following headers:
Connection: Keep-Alive
Content-Type: text/javascript; charset=utf-8
Date: Tue, 10 Sep 2013 12:39:22 GMT
Keep-Alive: timeout=5, max=100
Server: Apache/2.2.22 (Ubuntu)
Transfer-Encoding: chunked
The Keep-Alive header shows timeout=5 and max=100, but I don't really know the units of these values. Any ideas?
About Connection Keep-Alive
When a client browser sends the "Connection: Keep-alive" header to an HTTP/1.1 server, the browser is saying "hey, I want to carry on a long conversation, so don't close the connection after the first exchange."
The keep-alive "timeout" value is in seconds. The "max" value is unit-less, representing the maximum number of requests to service per connection. Taken together, these augment the client's request to "hey, I want to carry on a long conversation, so don't close the connection after the first exchange BUT if nothing exchanges in 5 seconds (timeout) OR if more than 100 requests go back and forth (max), I'm ok with you closing the connection." The server responds with the actual values it will service for timeout and max.
The penalty for a closed connection is that a new one has to be opened up. Some modern browsers limit the number of simultaneous open connections, so keeping these values too small may introduce latency (while your app waits for free connections). On the other hand, the server need not agree to the timeout and max values requested: the server sets its own limits.
See these articles for details:
http://www.feedthebot.com/pagespeed/keep-alive.html
http://en.wikipedia.org/wiki/HTTP_persistent_connection
http://www.hpl.hp.com/personal/ange/archives/archives-95/http-wg-archive/1661.html
About dojo timeouts
I don't see your code or dojo version, but dojo does allow you to set how long it will wait for a response via the timeout property in the XHR request. The default timeout is "never". Code below.
In practice, "never" is misleading: browsers have their own defaults for keep-alive timeouts and upstream routers might have their own timeouts.
Try to keep it short. If the response takes more than 15 seconds, the problem may call for a different design approach: reverse ajax, polling, combined response, etc.
require(['dojo/request/xhr'], function (xhr) {
    xhr(
        'http://www.example.com/echo',
        { timeout: 15000 /* change this; units are milliseconds */, handleAs: 'json' }
    ).then(function (r) {
        console.log(r);
    });
});
The specific problem
OK, finally. If you have a long-running server-side job, here's what I would do:
Send a request from client to server that starts the job
Server responds with a unique URL that can be polled for status
In Javascript, use setInterval() to periodically check the returned URL for status
When the URL shows "status" done, kill the setInterval and issue a final call to get the result
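A hedged sketch of that flow using dojo/request (the /start-job URL and the statusUrl, resultUrl and done fields are assumptions about what your server would return):

require(['dojo/request'], function (request) {
    // 1. Start the long-running job on the server.
    request.post('/start-job', { handleAs: 'json' }).then(function (job) {
        // 2-3. Poll the returned status URL every 2 seconds.
        var timer = setInterval(function () {
            request(job.statusUrl, { handleAs: 'json' }).then(function (status) {
                if (status.done) {
                    // 4. Job finished: stop polling and fetch the result.
                    clearInterval(timer);
                    request(job.resultUrl, { handleAs: 'json' }).then(function (result) {
                        console.log(result);
                    });
                }
            });
        }, 2000);
    });
});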

Removing HTTP headers from an XMLHttpRequest

I am working on an ajax long polling type application, and I would like to minimize the amount of bandwidth I am using. One of the big costs right now is the client-side HTTP headers. Once I have a connection established and a session id stored on the client, I don't really want to squander any more bandwidth transferring redundant HTTP information (such as browser type, accept encodings, etc.). Over the course of many connections, this quickly adds up to a lot of data!
I would really like to just take my XMLHttpRequest and nuke all of the headers so that only the absolute minimum gets transmitted to the server. Is it possible to do this?
You have very little control over request headers, but you can still do a few things:
Reduce the size of the cookie. In general, you only want the session id, everything else can be eliminated and stored server side.
Minimize http referrer by keeping a short URL. The longer your page url, the more data will have to be sent via the http referrer. One trick is to store data in the fragment identifier (the portion of the url after the #). The fragment identifier is never sent to the server, so you save a few bytes over there.
Some request headers are only sent if you previously set corresponding response headers. For example, you can indirectly control the ETag and If-Modified-Since request headers (see the sketch after this list).
You may want to consider Web Sockets. Support is pretty good (IE10+).
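For the ETag point above, here is a minimal Express sketch of the round-trip (the route and tag value are hypothetical):

const express = require("express");
const app = express();

app.get("/poll", (req, res) => {
    const currentTag = '"v42"'; // hypothetical version tag for the current state
    // If the browser already has this version, it echoes the tag back in its
    // If-None-Match request header and we can answer 304 with no body.
    if (req.get("If-None-Match") === currentTag) {
        return res.status(304).end();
    }
    // Otherwise send the data along with the ETag for the browser to cache.
    res.set("ETag", currentTag).json({ messages: [] });
});

app.listen(3000);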
You may be able to override some of the standard headers using setRequestHeader() before sending the request, but the browser may not allow overriding some of them, and there seems to be no way to get a list of the headers actually sent (besides asking the server to echo them back to you) to know which ones to try to override.
I think it's possible to remove all headers, at least in some browsers.
Take a look at the communication between the Gmail/Calendar apps and Google's backend in Chrome (it's not the same in Firefox); it's possible Google has some hidden API for the XMLHttpRequest object.
You'll see something like the output below (notice there is no request headers section):
Request URL:https://mail.google.com/mail/u/0/channel/bind?XXXXXXXXXXXXXX
Request Method:POST
Status Code:200 OK
Query String Parameters
OSID:XXXXXXXXXXXXX
OAID:XXXXXXXXX
VER:8
at:XXXXXXXXXXXXXX
it:30
SID:XXXXXXXXXXXX
RID:XXXXXXXXX
AID:XXXXXXXXXX
zx:XXXXXXXXXXXX
t:1
Request Payload
count=1&ofs=211&req0_type=cf&req0_focused=1&req0__sc=c
Response Headers
cache-control:no-cache, no-store, max-age=0, must-revalidate
content-encoding:gzip
content-type:text/plain; charset=utf-8
date:Tue, 09 Oct 2012 08:52:46 GMT
expires:Fri, 01 Jan 1990 00:00:00 GMT
pragma:no-cache
server:GSE
status:200 OK
version:HTTP/1.1
x-content-type-options:nosniff
x-xss-protection:1; mode=block
