Can one read an XML response as a stream in Ext JS - javascript

I have a request that generates an XML response. Due to certain present constraints the response can be quite slow to complete but does begin to return data quickly.
Is it possible to read the response stream in Ext JS before the response is complete? My understanding is that *Readers are only given the response text once it is finished.

There doesn't seem to be an Ext.Proxy for Comet techniques, and I think one would be pretty tough to write. First of all, you'll need to write a SAX parser to make any sense of an incomplete XML document. Then, you'll need to write a proxy that repeatedly executes Operations as new data comes through. ExtJS isn't really designed for this, but it'd be an interesting thing to try. I'm not sure how practical it would be, though; any such Ext.Proxy would behave very differently from the other Proxies; though it might have methods with the same names, it'd be a very different interface. You'd also have to extend Ext.Model and Ext.Store to understand how to populate themselves with streamed data, writing new listeners for chunk events and new data contracts for the consumers of Stores and Models. I'm not sure it'd be worth your time!
However, if all you need is an event fired when a stream chunk arrives, then that's possible in Gecko and WebKit browsers. You just need to attach a handler to the XHR's onreadystatechange event, which fires repeatedly (with readyState 3) as data is received.
Experimentally:
Ext.define('Ext.proxy.StreamEventedAjax', {
    extend: 'Ext.data.proxy.Ajax',
    doRequest: function(operation, callback, scope) {
        if (Ext.isIE) return null;

        // do other doRequest setup here, use this.buildRequest, etc.
        var me = this,
            request = me.buildRequest(operation),
            req = new XMLHttpRequest(),
            responseLength = 0,
            newText = "";

        req.onreadystatechange = function(e) {
            // readyState 3 fires repeatedly as new data arrives (Gecko/WebKit)
            newText = req.responseText.substring(responseLength);
            responseLength = req.responseText.length;
            operation.fireEvent('datareceived', e, newText);
        };

        req.open(me.getMethod(request), request.url);
        req.send(null);
    }
});

Related

Call stack size exceeded on re-starting Node function

I'm trying to overcome a "Maximum call stack size exceeded" error, but with no luck.
The goal is to re-run the GET request until I get 'music' in the type field.
//tech: node.js + mongoose
//import components
const https = require('https');
const options = new URL('https://www.boredapi.com/api/activity');

//obtain data using GET
https.get(options, (response) => {
    //console.log('statusCode:', response.statusCode);
    //console.log('headers:', response.headers);
    response.on('data', (data) => {
        //process.stdout.write(data);
        apiResult = JSON.parse(data);
        apiResultType = apiResult.type;
        returnDataOutside(data);
    });
})
.on('error', (error) => {
    console.error(error);
});

function returnDataOutside(data) {
    apiResultType;
    if (apiResultType == 'music') {
        console.log(apiResult);
    } else {
        returnDataOutside(data);
        console.log(apiResult); //Maximum call stack size exceeded
    };
};
Your function returnDataOutside() is calling itself recursively. If it doesn't get an apiResultType of 'music' the first time, it just keeps calling itself deeper and deeper until the stack overflows, with no chance of ever getting the music type because you're calling it with the same data over and over.
It appears that you want to re-run the GET request when you don't have the music type, but your code is not doing that - it's just calling your response handler over and over. Instead, you need to put the code that makes the GET request into a function, and call that function to make a fresh GET request whenever the apiResultType isn't what you want.
In addition, you shouldn't code something like this that keeps going forever, hammering some server. You should have a maximum number of attempts, a back-off timer, or both.
And, you can't just assume that a single response.on('data', ...) event contains a perfectly formed piece of JSON. If the data is anything but very small, it may arrive in arbitrarily sized chunks, and it may take multiple data events to deliver your entire payload. That may happen to work on fast networks and through some proxies, but not on slow networks or through others. Instead, you have to accumulate the data from the entire response (all the data events, concatenated together) and then process that final result on the end event.
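For example, a minimal sketch of accumulating the chunks with the plain https.get() from the question, parsing only once the stream ends:

const https = require('https');

https.get('https://www.boredapi.com/api/activity', (response) => {
    let body = '';
    response.on('data', (chunk) => {
        body += chunk;                           // collect every chunk as it arrives
    });
    response.on('end', () => {
        const apiResult = JSON.parse(body);      // parse only the complete payload
        console.log(apiResult.type);
    });
}).on('error', (error) => {
    console.error(error);
});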
While you can code the plain https.get() to collect all the results for you as sketched above (there's an example of that right in the doc here), it's a lot easier to just use a higher-level library that brings support for a bunch of useful things.
My favorite library to use in this regard is got(), but there's a list of alternatives here and you can find the one you like. Not only do these libraries accumulate the entire response for you without you writing any extra code, but they are promise-based, which makes the asynchronous coding easier, and they also automatically check status code results for you, follow redirects, etc. - many things you would want an http request library to "just handle" for you.
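And a hedged sketch of the fresh-request-per-attempt approach using got() (assuming got v11, where got(url).json() resolves to the parsed body; the attempt cap and delay values are arbitrary):

const got = require('got');

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function fetchMusicActivity(maxAttempts = 20) {
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        // A brand new GET request on every attempt, not a re-run of the old handler.
        const apiResult = await got('https://www.boredapi.com/api/activity').json();
        if (apiResult.type === 'music') {
            return apiResult;
        }
        await delay(500);   // back off a bit instead of hammering the server
    }
    throw new Error('No music activity found within the attempt limit');
}

fetchMusicActivity()
    .then((result) => console.log(result))
    .catch((error) => console.error(error));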

Non blocking Javascript and concurrency

I have code in a web worker, and because I can't post an object with methods (functions) to it, I don't know how to stop blocking the UI with this code:
if (data != 'null') {
    obj['backupData'] = obj.tbl.data().toArray();
    obj['backupAllData'] = data[0];
}
obj.tbl.clear();
obj.tbl.rows.add(obj['backupAllData']);
var ext = config.extension.substring(1);
$.fn.dataTable.ext.buttons[ext + 'Html5'].action(e, dt, button, config);
obj.tbl.clear();
obj.tbl.rows.add(obj['backupData']);
This code exports records from an HTML table. Data is an array returned from a web worker and can sometimes have 50k or more objects.
As obj and all the methods it contains are not transferable to the web worker, the UI blocks when the data length is 30k, 40k, 50k or more.
Which is the best way to do this?
Thanks in advance.
You could try wrapping the heavy work in an asynchronous callback such as a timeout, to allow the engine to queue the whole block of logic and run it as soon as it has time:
setTimeout(function() {
    if (data != 'null') {
        obj['backupData'] = obj.tbl.data().toArray();
        obj['backupAllData'] = data[0];
    }
    //heavy stuff
}, 0);
Or, if the work is extremely long, you can figure out a strategy to split it into chunks of operations and execute each chunk in a separate asynchronous callback (timeout) - see the sketch after the link below.
Best way to iterate over an array without blocking the UI
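A rough sketch of that chunking strategy (the chunk size, processRow callback and done callback are illustrative placeholders, not part of the question's code):

function processInChunks(rows, processRow, chunkSize, done) {
    var index = 0;
    function nextChunk() {
        var end = Math.min(index + chunkSize, rows.length);
        for (; index < end; index++) {
            processRow(rows[index]);
        }
        if (index < rows.length) {
            setTimeout(nextChunk, 0);   // yield to the event loop so the UI can repaint
        } else if (done) {
            done();
        }
    }
    nextChunk();
}

// e.g. processInChunks(obj['backupAllData'], addRowToTable, 1000, exportTable);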
Update:
Sadly, ImmutableJS doesn't work at the moment across webworkers. You should be able to transfer the ArrayBuffer so you don't need to parse it back into an array. Also read this article. If your workload is that heavy, it would be best to actually send back one item at a time from the worker.
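A minimal sketch of that ArrayBuffer transfer (assuming the worker's result can be packed into a typed array; the Float64Array layout and variable names are only illustrative):

// worker.js: pack the computed values into a typed array
var values = new Float64Array(results.length);       // 'results' stands for the worker's output
for (var i = 0; i < results.length; i++) {
    values[i] = results[i];
}
// The second argument is the transfer list: the buffer moves to the main
// thread without a copy and becomes unusable ("neutered") in the worker.
self.postMessage(values.buffer, [values.buffer]);

// main thread
worker.onmessage = function (e) {
    var received = new Float64Array(e.data);          // a view over the transferred buffer
};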
Previously:
The code is converting all the data into an array, which is immediately costly. Try returning an immutable data structure from the web worker if possible. This guarantees that it doesn't change when the references change, and you can keep iterating over it slowly in batches.
The next thing you can do is to use requestIdleCallback to schedule small batches of items to be processed.
This way you should be able to make the UI breathe a bit.
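A minimal sketch of that requestIdleCallback batching (the processItem callback is a placeholder): work is only done while the browser reports spare time in the current frame.

function processWhenIdle(items, processItem) {
    var index = 0;
    function work(deadline) {
        while (index < items.length && deadline.timeRemaining() > 0) {
            processItem(items[index++]);
        }
        if (index < items.length) {
            requestIdleCallback(work);   // more left to do: wait for the next idle period
        }
    }
    requestIdleCallback(work);
}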

Getting large AJAX responses in Chrome without crashing

I am working on a project which is aimed at the Chrome browser. Our goal is to get a one-million-record array into the browser so we can work with the data. When I generated a test file that contained a million records, it was a bit more than one gigabyte.
For reasons I will explain, I believe we can accomplish the goal if we can get the browser to collect the garbage when necessary. I believe the browser holds the text of the AJAX responses when it doesn't need to, and crashes for that reason.
Now, I can generate a million records within the browser and manipulate them as I need to. However, I have trouble getting the AJAX response into the browser without crashing it.
Since sending one million records crashes it, I tried sending batches of one hundred thousand. I can get two such batches across and parse the JSON. If I do not have an onreadystatechange on my AJAX call, I can make the call a number of times. Also, if I receive a hundred thousand records, I can loop over them ten times and build the full array.
Because I seem to be able to actually hold one million records, I believe that, as I said, holding the response texts is overwhelming the browser.
In order to try to get better memory management, I have pushed the AJAX requests and parsing into a web worker. When the web worker gets the AJAX response and builds the hundred-thousand-record array, it pushes it to the DOM thread. When the DOM thread has taken the data, it has the web worker do another AJAX request.
However, it still crashes.
I am open to using websockets or something else, if that would help somehow.
Here is the code in the DOM thread:
var iterations = 3;
var url = 'hunthou.json';
var worker = new Worker('src/worker.js');
var count = 0;
var bigArr = [];    // one batch per entry

worker.addEventListener('message', function(e) {
    alert('count: ' + count);
    //bigArr=bigArr.concat(e.data);
    console.log('e.data length: ' + e.data.length);
    bigArr[count] = e.data;
    console.log('bigArr length: ' + bigArr.length);
    if (count < (iterations - 1)) {
        worker.postMessage(url);
    } else {
        alert('done');
        console.log('done');
        worker.terminate();
        console.log('bye');
    }
    count++;
});
worker.postMessage(url);
Here is the webworker:
var arr = [];
var request = new XMLHttpRequest();

request.onreadystatechange = function () {
    var DONE = this.DONE || 4;
    if (this.readyState === DONE) {
        arr = JSON.parse(request.responseText);
        self.postMessage(arr);
        arr.length = 0;
        request.responseText.length = 0;  // has no effect: responseText is a read-only string
        console.log('okay');
    }
};

self.addEventListener('message', function(e) {
    var url = e.data;
    console.log('url: ' + url);
    request.open("GET", '../' + url, true);
    request.send(null);
}, false);
Instead of retrieving all the data at once, you can create multiple requests that each retrieve a chunk of the data; this will prevent your browser from crashing.
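A hedged sketch of that idea, run inside the worker. The records endpoint and the offset/limit query parameters are hypothetical - the server would have to support returning slices of the record set for this to work (a static JSON file would not):

var PAGE_SIZE = 100000;

function fetchPage(offset, onPage, onDone) {
    var request = new XMLHttpRequest();
    request.onreadystatechange = function () {
        if (request.readyState === 4 && request.status === 200) {
            var page = JSON.parse(request.responseText);
            if (page.length > 0) {
                onPage(page);                                   // hand one slice to the caller
                fetchPage(offset + PAGE_SIZE, onPage, onDone);  // then ask for the next slice
            } else {
                onDone();                                       // an empty page means we're finished
            }
        }
    };
    request.open('GET', '../records?offset=' + offset + '&limit=' + PAGE_SIZE, true);
    request.send(null);
}

// Post each slice to the DOM thread as it arrives instead of holding them all in the worker.
fetchPage(0, function (page) { self.postMessage(page); }, function () { self.postMessage('done'); });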

Parsing a large JSON array in Javascript

I'm supposed to parse a very large JSON array in Javascript. It looks like:
mydata = [
    {'a':5, 'b':7, ... },
    {'a':2, 'b':3, ... },
    .
    .
    .
]
Now the thing is, if I pass this entire object to my parsing function parseJSON(), then of course it works, but it blocks the tab's process for 30-40 seconds (in case of an array with 160000 objects).
During this entire process of requesting this JSON from a server and parsing it, I'm displaying a 'loading' gif to the user. Of course, after I call the parse function, the gif freezes too, leading to a bad user experience. I guess there's no way to get around the parse time itself, but is there a way to at least keep the loading gif from freezing?
Something like calling parseJSON() on chunks of my JSON every few milliseconds? I'm unable to implement that though, being a noob in javascript.
Thanks a lot, I'd really appreciate if you could help me out here.
You might want to check this link. It's about multithreading.
Basically :
var url = 'http://bigcontentprovider.com/hugejsonfile';

// Build the worker source as a string: the worker fetches the JSON via a
// JSONP-style callback and posts the result back to the page.
var f = '(function() {' +
        '  send = function(e) {' +
        '    postMessage(e);' +
        '    self.close();' +
        '  };' +
        '  importScripts("' + url + '?format=json&callback=send");' +
        '})();';

var _blob = new Blob([f], { type: 'text/javascript' });
_worker = new Worker(window.URL.createObjectURL(_blob));
_worker.onmessage = function(e) {
    // Do what you want with your JSON
};
Haven't tried it myself to be honest...
EDIT about portability: Sebastien D. posted a comment with a link to mdn. I just added a ref to the compatibility section id.
I have never encountered a complete page lock down of 30-40 seconds, I'm almost impressed! Restructuring your data to be much smaller or splitting it into many files on the server side is the real answer. Do you actually need every little byte of the data?
Alternatively, if you can't change the file, @Cyrill_DD's answer of a worker thread will be able to parse the data for you and send it to your primary JS. This is not a perfect fix, as you would guess. Passing data between the two threads requires the information to be serialised and reinterpreted, so you could see a significant slow-down when the data is passed between the threads and be back to square one if you try to pass all the data across at once. Building a query system into your worker thread for requesting chunks of the data when you need them, and using the message callback, will avoid the parsing slow-down on the main thread and give you complete access to the data without loading it all into your main context.
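A rough sketch of that query-system idea (the 'load'/'slice' message shapes, the slice size and the file name are illustrative, not from the answer):

// worker.js: keep the parsed array here and only ship requested slices.
var records = null;

self.addEventListener('message', function (e) {
    if (e.data.cmd === 'load') {
        var xhr = new XMLHttpRequest();
        xhr.onload = function () {
            records = JSON.parse(xhr.responseText);   // the heavy parse stays off the UI thread
            self.postMessage({ ready: true, total: records.length });
        };
        xhr.open('GET', e.data.url, true);
        xhr.send();
    } else if (e.data.cmd === 'slice') {
        // Only the requested chunk crosses the thread boundary (and gets serialised).
        self.postMessage({ start: e.data.start, rows: records.slice(e.data.start, e.data.end) });
    }
});

// main thread
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
    if (e.data.ready) {
        worker.postMessage({ cmd: 'slice', start: 0, end: 1000 });  // ask for the first chunk
    } else if (e.data.rows) {
        // render or process one chunk of rows here, then request the next
    }
};
worker.postMessage({ cmd: 'load', url: 'hugejsonfile.json' });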
I should add that worker threads are relatively new, main browser support is good but mobile is terrible... just a heads up!

Receiving a stream from rails 4.0 in JS callback

I'm trying to transmit an image file from the server to the client, but my javascript callback becomes active before the stream closes. I'm doing this because sending it in a traditional render json: times out and takes way too long anyway. The stream takes much less time, but I can't get all the data before the callback fires.
controller code
def mytest
  image = ImageList.new(AssistMe.get_url(image_url))
  response.stream.write image.export_pixels(0, 0, image.columns, image.rows, 'RGBA').to_s
  response.stream.close
end
javascript
var getStream, runTest;

runTest = function() {
    return $.post('/dotest', getStream);
};

getStream = function(params) {
    return document.getElementById('whatsup2').innerHTML =
        "stream is here " + params.length;
};
The response is an array. I can make it an array of arrays by adding a "[" at the front and a "],['finish']" at the end to be able to detect the end of the data, but I haven't been able to figure out how to get the javascript to wait until the end of the stream to run. I assume I need to set up some kind of poll to check for the end, but how do I attach it to the callback?
Okay, here's a blog that describes this pretty well
blog
But I decided to forgo the stream and use .to_s. Since you can chain several calls together - render object.method.method.to_s - you get all the server-side benefits of using a stream without the complexity. If you have a slow process where you need to overlap the client and server actions, then go to the blog and do it. Otherwise, to_s covers it pretty well.
