Efficient HTTP streaming in Firefox OS - javascript

I need to handle an infinite HTTP response (with Transfer-Encoding: chunked header).
This response contains a stream of images, so it must be handled as efficiently as possible.
XMLHttpRequest is not a solution here, since it keeps the entire response in memory. Plus, when reading an ArrayBuffer, the response isn't populated until the end of the stream, which here means never.
So, since I am under Firefox OS, the TCPSocket API seems to be my only hope.
I already started to implement a dirty HTTP stack (here and here), taking inspiration from the IMAP/SMTP implementations, but it is still very slow.
So, two questions:
Is it worth spending time on this, or did I miss something easier?
If I want to implement it, what are the best practices not to forget about?
PS: I communicate with an external device, so changes on the server side are just not possible here.

As stated in the XMLHttpRequest documentation on MDN, Firefox actually makes extra responseType values available for streaming data (and so does Firefox OS), such as moz-chunked-arraybuffer.
var xhr = new XMLHttpRequest({ mozSystem: true });
xhr.responseType = "moz-chunked-arraybuffer";
xhr.open('GET', deviceStreamingUrl);
xhr.addEventListener('progress', event => {
  // With moz-chunked-arraybuffer, xhr.response holds only the latest chunk.
  processChunk(xhr.response);
});
xhr.send();
Thanks to fabrice on #fxos on irc.mozilla.org!
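Since each progress event delivers only the bytes received since the previous one, frames that span chunk boundaries have to be reassembled by hand. Below is a rough sketch under the assumption that the device emits an MJPEG-style stream of raw JPEG frames; processChunk matches the handler above, while displayFrame and the frame-boundary logic are hypothetical (a real multipart/x-mixed-replace stream would also carry part headers between frames, which this sketch ignores):

var pending = new Uint8Array(0); // bytes carried over between chunks

function processChunk(chunk) {
  // Append the new chunk to whatever was left over from the last one.
  var bytes = new Uint8Array(chunk);
  var merged = new Uint8Array(pending.length + bytes.length);
  merged.set(pending, 0);
  merged.set(bytes, pending.length);

  // Scan for a JPEG end-of-image marker (0xFF 0xD9) to find a complete frame.
  for (var i = 0; i + 1 < merged.length; i++) {
    if (merged[i] === 0xFF && merged[i + 1] === 0xD9) {
      displayFrame(merged.subarray(0, i + 2)); // hypothetical rendering helper
      pending = merged.subarray(i + 2);        // keep the tail for the next pass
      return;
    }
  }
  pending = merged; // no complete frame yet
}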

Related

CORS response not received on different computers

This is my first question on SO, but you have all helped me enormously in the past from existing posts - so thank you!
I am working on a web/database system using localhost through XAMPP, but I need to back up an SQL file to my 1&1 online server. I am using CORS with JavaScript to make the backup, and it works on my PC but not my client's. The request's onload works for both of us, as the files are saved, but my client does not receive the response message confirming the save!! Does anyone know why this might be? We are both running IE9 and the same XAMPP version.
Code I am using for CORS request is:
var request = new XMLHttpRequest();
request.open('POST', "http://www.mysite/Backups", true);
request.onload = function () {
  if (request.status === 200) {
    // response functions here
  }
};
request.send("Content=" + backupContent);
Hope this is in the correct question format - its my first time remember!
A year ago I had a really similar problem with IE. Your client is using IE, which suggests they are quite big and serious, so I bet they also have specific IE security settings.
Go to your IE security preferences and restrict everything you can - I cannot tell you the exact name of the setting, as I no longer have Internet Explorer, but this way you can reproduce the behaviour.
How to solve the issue? Usually such clients don't agree to change their security settings, so the only way that worked for me was using JSONP instead of CORS. I know: not modern, ugly... But it works.
This is just a guess; I trust that everything is done correctly on your side.
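For completeness, a minimal JSONP sketch, assuming the /Backups endpoint could be adapted to accept GET and wrap its JSON reply in a named callback (the endpoint, callback, and response shape here are hypothetical):

// The server must reply with something like: handleBackup({"saved": true});
function handleBackup(response) {
  alert(response.saved ? "Backup saved" : "Backup failed");
}

var script = document.createElement('script');
script.src = 'http://www.mysite/Backups?callback=handleBackup';
document.getElementsByTagName('head')[0].appendChild(script);

Note that JSONP only supports GET, so a large backup payload would have to travel another way; it mainly solves receiving the confirmation response.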

Using XHR to precache resources not behaving as expected

I'm simply trying to use XHR to precache some resources, but the cache is not behaving as expected.
Here are the bounds of the problem:
I know the resource URLs in advance (of course).
I don't know their content-types (mix of CSS, images, and other).
They will always be same-origin.
I control the cache headers.
They can be cached forever for all I care.
I've always been under the impression that XHR used the browser cache more or less like any other resource, but never rigorously tested that. Here's what I'm doing:
Request all resources up-front with XHR.
Explicitly set the request header Cache-Control: max-age=3600, as sketched below (Chrome was setting max-age=0 for some reason).
Set the following response headers on the server:
Cache-Control: public, max-age=3600
Date: now
Expires: now + 1 hour
[Content-Type, Content-Length]
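Concretely, the request side of the steps above looks roughly like this (the resource URL is hypothetical):

var xhr = new XMLHttpRequest();
xhr.open('GET', '/assets/sprite.png', true);
// Override Chrome's default of max-age=0 on programmatic requests.
xhr.setRequestHeader('Cache-Control', 'max-age=3600');
xhr.onload = function () {
  // The resource should now be in the browser cache... in theory.
};
xhr.send();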
Here's what I'm seeing:
XHR always fetches the resource (confirmed on server and with dev tools).
Subsequent requests (via image/css/etc elements) always fetch (even after the XHRs have completed) on a cold cache.
But they always use the cache when it's warm.
I've poked at it in various ways, but this behavior never seems to change.
After much wailing and gnashing of teeth, I believe I've proven that this approach simply won't work on all browsers. Here's my experience thus far:
Firefox: Works a charm.
Chrome: Seldom works -- it's as though XHR uses a different cache than the elements (even though I'm pretty sure that's not the case; I haven't had time to delve into the Chrome code to figure out exactly what's going on).
Safari: Apparently random. Sometimes resource requests kicked off from elements retrieve from the cache, sometimes not. I'm sure there's a method, but it appears to be madness from the outside.
In the end, I had to switch to the somewhat more craptastic-but-reliable approach of creating a hidden iframe, injecting script/img elements into it, then waiting on the iframe window's onload event. This works, but gives no fine-grained feedback in terms of which elements have loaded (getting reliable onload events from the individual elements is more "cross-browser tricky" than just waiting on the whole frame).
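For reference, a minimal sketch of that iframe fallback (the function and callback names are hypothetical; img elements are used for every URL here, which is enough to warm the HTTP cache regardless of content-type):

function precache(urls, done) {
  var iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  document.body.appendChild(iframe);

  // Write the elements into the frame, then wait for its window to load.
  var doc = iframe.contentWindow.document;
  doc.open();
  doc.write('<html><body>' +
            urls.map(function (u) { return '<img src="' + u + '">'; }).join('') +
            '</body></html>');
  doc.close();

  // Fires once every injected element has settled (loaded or errored).
  iframe.contentWindow.addEventListener('load', function () {
    document.body.removeChild(iframe);
    done();
  });
}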
I'd love to understand more precisely what's going on in Chrome/Safari, but sadly don't have the time to dig in further.

GET query to URL using XMLHttpRequest in Chrome

I'm attempting to retrieve a url using XMLHttpRequest directly:
req = new XMLHttpRequest
req.onreadystatechange = ->
  console.log req.readyState
  if req.readyState == 1
    console.log "sending..."
    req.send
  if req.readyState == 4
    handler(req.response, req.status)
req.open("GET", info.srcUrl, true)
req.responseType = "arraybuffer"
But I never see the object transitioning beyond the 1 readyState. What am I missing?
If you are attempting to retrieve an arbitrary resource from a source other than the server from which you received the running script, you are more than likely hitting the browser's same-origin policy.
Except under very limited circumstances, you cannot retrieve resources from any other site but the one that served the page you are currently viewing.
For an explanation, see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Same_origin_policy_for_JavaScript?redirectlocale=en-US&redirectslug=JavaScript%2FSame_origin_policy_for_JavaScript
For the limited circumstances I mentioned above, see https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS?redirectlocale=en-US&redirectslug=HTTP_access_control
Further, since you do not appear to provide an error handler for your XMLHttpRequest, you're more than likely missing the error message that would have informed you why your request failed.
Update
A quick tutorial on XMLHttpRequest, including how to handle error events, can be found at https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Using_XMLHttpRequest?redirectlocale=en-US&redirectslug=DOM%2FXMLHttpRequest%2FUsing_XMLHttpRequest
Coming from Ruby, I did not realize that there is a subtle but important difference between req.send and req.send(). As @RobW pointed out in the comments, this method should also not be called in the event handler but at the end of the code, after open().
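In plain JavaScript, the corrected flow looks like this (info.srcUrl and handler are as in the question):

var req = new XMLHttpRequest();
req.onreadystatechange = function () {
  if (req.readyState === 4) {
    handler(req.response, req.status);
  }
};
req.open("GET", info.srcUrl, true);
req.responseType = "arraybuffer";
req.send(); // actually invoked (with parentheses), after open(), outside the handler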

Javascript access another webpage

I know very, very little of JavaScript, but I'm interested in writing a script which needs information from another webpage. Is there a JavaScript equivalent of something like urllib2? It doesn't need to be very robust, just enough to issue a simple GET request and store the result; there's no need to handle cookies or anything.
There is XMLHttpRequest, but that would be limited to the same domain as your website, because of the Same Origin Policy.
However, you may be interested in checking out the following Stack Overflow post for a few solutions around the Same Origin Policy:
Ways to circumvent the same-origin policy
UPDATE:
Here's a very basic (non cross-browser) example:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/questions/3315235', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {
    console.log(xhr.responseText);
  }
};
xhr.send(null);
If you run the above in Firebug, with Stack Overflow open, you'd see the HTML of this question printed in your JavaScript console.
You could issue an AJAX request and process it.
Alternatively, write your own server which runs a script to load the data from other websites. Then, from your web page, ask your server to fetch the data and send it back to you.
See http://www.storminthecastle.com/2013/08/25/use-node-js-to-extract-data-from-the-web-for-fun-and-profit/
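As a rough sketch of that approach, a tiny Node.js proxy might look like the following (the /fetch route and url query parameter are hypothetical, only http:// targets are handled, and a real version would need an allowlist of targets):

var http = require('http');
var url = require('url');

http.createServer(function (req, res) {
  // e.g. GET /fetch?url=http://example.com/page
  var target = url.parse(req.url, true).query.url;
  if (!target) {
    res.statusCode = 400;
    return res.end('missing url parameter');
  }

  // For https:// targets you would use the https module instead.
  http.get(target, function (upstream) {
    res.writeHead(upstream.statusCode, upstream.headers);
    upstream.pipe(res); // stream the remote page back to the browser
  }).on('error', function () {
    res.statusCode = 502;
    res.end('fetch failed');
  });
}).listen(8080);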

xmlhttprequest to spoof referer then redirect to another page?

I've created some code using cURL (PHP) which allows me to spoof or blank the referrer, then direct the user to another page with the spoofed referrer.
However, the drawback is that the IP address in the headers will always be my server's IP, which isn't a valid solution.
The question:
Is it possible, using client-side scripting (i.e. XMLHttpRequest), to "change" the referrer and then direct the user to a new page?
This would keep the user's IP address intact while spoofing the referrer.
If yes, any help would be much appreciated.
Thanks!
Not from JavaScript in a modern browser when the page is rendered.
Update:
See the comments for some manual tools and other JavaScript-based platforms where you technically can spoof the referrer. In the context of the 8-year-old original question, which seems to be about making web requests, the answer is still generally "no".
I don't plan to edit all of my decade-old answers though, so downvoters, have at 'em. I apologize in advance for not correctly foreseeing the future and providing an answer that would last for eternity.
This appears to work in the Firefox JavaScript console:
var xhr = new XMLHttpRequest();
xhr.open("get", "http://www.example.com/", true);
xhr.setRequestHeader('Referer', 'http://www.fake.com/');
xhr.send();
In my server log I see:
referer: http://www.fake.com/
A little late to the table, but it seems there's been a change since the last post.
Chrome (and probably most modern browsers at this time) no longer allows 'Referer' to be altered programmatically - it's now static-ish.
However, it does allow a custom header to be sent. E.g.:
var xhr = new XMLHttpRequest();
xhr.open("get", "http://www.example.com/", true);
xhr.setRequestHeader('CustomReferer', 'http://www.fake.com/');
xhr.send();
In PHP that header can be read through "HTTP_(header in uppercase)":
$_SERVER['HTTP_CUSTOMREFERER'];
That was the trick for my project...
For many of us probably common knowledge, but for some hopefully helpful!
You can use the Fetch API to partially modify the Referer header.
fetch(url, {
  referrer: yourCustomizedReferer, // note: it's `referrer` with the correct spelling, and it is NOT nested inside the `headers` option
  // ...
});
However, I think it only works when the original Referer header and the Referer you want are under the same domain, and it doesn't seem to work in Safari.
That the Referer header can be modified at all is quite unexpected, though it's argued here that other tricks (e.g. pushState()) can achieve this anyway.
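For reference, the pushState() trick alluded to above exploits the fact that browsers derive the Referer from the document's current URL, which the History API can rewrite (same-origin paths only); a rough sketch, with hypothetical paths:

var realPath = location.pathname + location.search;

function restore() {
  history.replaceState(null, '', realPath); // put the real URL back
}

// Temporarily rewrite the address bar; the fake path must stay on this origin.
history.replaceState(null, '', '/any/fake/path');
fetch('http://www.example.com/', { mode: 'no-cors' }).then(restore, restore);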
