Fetch vs Request - JavaScript

I'm trying to consume a JSON stream using fetch. The stream emits some data every few seconds. Using fetch to consume the stream gives me access to the data only when the stream closes server side. For example:
var target; // the url
var options = {
  method: "POST",
  body: bodyString,
};
var drain = function (response) {
  // hit only when the stream is killed server side.
  // response.body is always undefined. Can't use the reader it provides.
  return response.text(); // or response.json();
};
var listenStream = fetch(target, options).then(drain).then(console.log).catch(console.log);
/*
logs data to the console with a 200 code, but only when the server stream has been killed.
*/
However, there have been several chunks of data already sent to the client.
Using a Node-inspired method in the browser like this works every single time an event is sent:
var request = require('request');
var JSONStream = require('JSONStream');
var es = require('event-stream');

request(options)
  .pipe(JSONStream.parse('*'))
  .pipe(es.map(function (message) { // pipe catches each fully formed message
    console.log(message);
  }));
What am I missing? My instinct tells me that fetch should be able to mimic the pipe or stream functionality.

response.body gives you access to the response as a stream. To read a stream:
fetch(url).then(response => {
  const reader = response.body.getReader();
  reader.read().then(function process(result) {
    if (result.done) return;
    console.log(`Received a ${result.value.length} byte chunk of data`);
    return reader.read().then(process);
  }).then(() => {
    console.log('All done!');
  });
});
Here's a working example of the above.
Fetch streams are more memory-efficient than XHR, as the full response doesn't buffer in memory, and result.value is a Uint8Array, making it far more useful for binary data. If you want text, you can use TextDecoder:
fetch(url).then(response => {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  reader.read().then(function process(result) {
    if (result.done) return;
    const text = decoder.decode(result.value, {stream: true});
    console.log(text);
    return reader.read().then(process);
  }).then(() => {
    console.log('All done!');
  });
});
Here's a working example of the above.
TextDecoder is also available as a transform stream (TextDecoderStream), allowing you to do response.body.pipeThrough(new TextDecoderStream()), which is much simpler and allows the browser to optimise.
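A minimal sketch of that version, assuming a browser that ships TextDecoderStream:
fetch(url).then(response => {
  // pipeThrough hands the byte stream to the decoder; the reader then
  // yields decoded strings instead of Uint8Arrays
  const reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
  return reader.read().then(function process(result) {
    if (result.done) return;
    console.log(result.value); // already decoded text
    return reader.read().then(process);
  });
});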
As for your JSON case, streaming JSON parsers can be a little big and complicated. If you're in control of the data source, consider a format that's chunks of JSON separated by newlines. This is really easy to parse, and leans on the browser's JSON parser for most of the work. Here's a working demo; the benefits can be seen at slower connection speeds.
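As a hedged sketch of that newline-delimited approach (assuming the server emits one JSON object per line):
fetch(url).then(response => {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  return reader.read().then(function process(result) {
    if (result.done) {
      buffer += decoder.decode(); // flush any bytes the decoder buffered
      if (buffer.trim()) console.log(JSON.parse(buffer)); // last object
      return;
    }
    buffer += decoder.decode(result.value, {stream: true});
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any trailing partial line for the next chunk
    for (const line of lines) {
      if (line.trim()) console.log(JSON.parse(line));
    }
    return reader.read().then(process);
  });
});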
I've also written an intro to web streams, which includes their use from within a service worker. You may also be interested in a fun hack that uses JavaScript template literals to create streaming templates.

Turns out I could get XHR to work - which doesn't really answer the request vs. fetch question. It took a few tries and the right ordering of operations to get it right. Here's the abstracted code. @jaromanda was right.
var _tryXhr = function (target, data) {
  console.log(target, data);
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    console.log("state change.. state: " + this.readyState);
    console.log(this.responseText);
    if (this.readyState === 4) {
      // gets hit on completion
    }
    if (this.readyState === 3) {
      // gets hit each time a new chunk arrives;
      // responseText holds everything received so far
    }
  };
  xhr.open("POST", target);
  xhr.setRequestHeader("cache-control", "no-cache");
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.send(data);
};
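One practical note, as a hedged sketch: since readyState 3 fires repeatedly and responseText accumulates everything received so far, you can track an offset to process only the newly arrived data (assuming the default text responseType):
var seen = 0;
xhr.onreadystatechange = function () {
  if (this.readyState === 3 || this.readyState === 4) {
    var chunk = this.responseText.substring(seen); // only the new bytes
    seen = this.responseText.length;
    if (chunk) console.log("new data:", chunk);
  }
};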

Related

JavaScript: XMLHttpRequest returning 'undefined' string on first run of script; I need it to load during the script

Preface: I'm a novice at JS, have no formal training in it, and usually make things on the fly by researching what I am trying to do. That failed this time.
I am currently trying to make a short JS script that will serve as a bookmarklet. The intent is to leverage the Tinder API to show users of Tinder some of the profile pictures of users who liked them, normally available with the Gold Feature.
Currently, it looks like this:
var stringz;
var xhr = new XMLHttpRequest();
var tokez = localStorage.getItem("TinderWeb/APIToken");
var url = "https://api.gotinder.com/v2/fast-match/teasers?locale=en";
xhr.withCredentials = true;
xhr.open("GET", url);
xhr.setRequestHeader("accept", "application/json");
xhr.setRequestHeader("content-type", "application/json; charset=utf-8");
xhr.setRequestHeader("x-auth-token", tokez);
xhr.setRequestHeader("tinder-version", "2.35.0");
xhr.setRequestHeader("platform", "web");
xhr.send();
xhr.onreadystatechange = function () {
  if (xhr.readyState == 4 && xhr.status == 200) {
    stringz = xhr.responseText;
    return stringz;
  }
};
//Turn the xhr response into a JSON string
var jasonstring = JSON.parse(stringz);
//Grab the URLs
var jasonstrung = jasonstring.data.results.map(x => x.user.photos.map(y => y.url));
//Turn the URLs into a nicely formatted JSON string
var jason = JSON.stringify(jasonstrung, null, 4);
//See what we got
console.log(jason);
The reason I am doing both JSON.parse and JSON.stringify is that the data returned from the xhr is a text string formatted like JSON, but it isn't actually JSON yet, so I have to parse it to grab the pieces I want and then format them so they aren't a goopy block (although the stringify part isn't strictly necessary).
On the first run of this in the Chrome Dev Console, it spits out the following:
VM5418:1 Uncaught SyntaxError: Unexpected token u in JSON at position 0
at JSON.parse (<anonymous>)
at <anonymous>:18:24
My assumption as to why it does this is that stringz is not yet "filled up" and comes back undefined when JSON.parse tries to cut through it.
However, once the script completes, if one were to type console.log(stringz), the expected string appears! If one runs the entire script 2x, it prints out the final desired dataset:
[
    [
        "https://preview.gotinder.com/5ea6601a4a11120100e84f58/original_65b52a4a-e2b2-4fdb-a9e6-cb16cf4f91c6.jpeg"
    ],
    [
        "https://preview.gotinder.com/5a4735a12eced0716745c8f1/1080x1080_9b15a72b-10c3-47c6-8680-a9c1ff6bdbf7.jpg"
    ],
    [
        "https://preview.gotinder.com/5e8d4231370407010088281b/original_adb4a1e3-06c0-4984-bca1-978200a5a311.jpeg"
    ],
    [
        "https://preview.gotinder.com/5ea77de583887d0100f385b8/original_af32971d-6d80-4076-a0f8-92ab54f820b3.jpeg"
    ],
    [
        "https://preview.gotinder.com/5bf7a1a29c0764cc3409bb02/1080x1350_c9784773-b937-4564-8c96-1a380832fdab.jpg"
    ],
    [
        "https://preview.gotinder.com/5d147c0560364e16004bcf5e/original_bf550230-baba-4d70-8c75-da64a9ce1b6c.jpeg"
    ],
    [
        "https://preview.gotinder.com/5c9ca2c2c8a4501600a979aa/original_915f4c0f-eb58-4283-bc58-00fdadc3c33c.jpeg"
    ],
    [
        "https://preview.gotinder.com/541efb64f5d81ab67f4b599f/original_7f11dea4-41c8-4e9c-8c7a-0c886484a076.jpeg"
    ],
    [
        "https://preview.gotinder.com/5a8b56376c220c1f5d8b43d9/original_7c19a078-8bd7-48f9-8e30-123b8f937814.jpeg"
    ],
    [
        "https://preview.gotinder.com/5d0c18341ea6e416002bfb1d/original_41d203ce-d116-4714-a223-90ccfd928ff2.jpeg"
    ]
]
Is there any way to make this thing work in one go (bookmarklet style)? setTimeout doesn't work unfortunately, assuming it is a problem in terms of taking too long to fill "stringz" before I use JSON.parse on it.
Thank you!
The problem comes from the fact that XHR makes your code asynchronous: it sends a request, and the response arrives later; during that time your next (and next, and next, ...) lines of code are executed.
You have to start your JSON string transformation only when the response has already arrived - that means you should place your code inside xhr.onreadystatechange (I had to comment out a lot of things so the snippet works):
var stringz;
var xhr = new XMLHttpRequest();
// var tokez = localStorage.getItem("TinderWeb/APIToken");
// var url = "https://api.gotinder.com/v2/fast-match/teasers?locale=en";
var url = "https://jsonplaceholder.typicode.com/posts";
// xhr.withCredentials = true;
xhr.open("GET", url);
// xhr.setRequestHeader("accept", "application/json");
// xhr.setRequestHeader("content-type", "application/json; charset=utf-8");
// xhr.setRequestHeader("x-auth-token", tokez);
// xhr.setRequestHeader("tinder-version", "2.35.0");
// xhr.setRequestHeader("platform", "web");
xhr.onreadystatechange = function () {
  if (this.readyState == 4 && this.status == 200) {
    // the response arrives here
    stringz = this.responseText;
    // start your JSON transformation when the
    // response arrives
    jsonTransform(stringz);
    return stringz;
  }
};
xhr.send();

// this part of the code will be executed synchronously - it
// doesn't wait until your response arrives
var synchronous = 'this will be logged before response arrives';
console.log(synchronous);

function jsonTransform(stringz) {
  // turn the xhr response text into a JSON object
  var jasonstring = JSON.parse(stringz);
  // grab the URLs
  // var jasonstrung = jasonstring.data.results.map(x => x.user.photos.map(y => y.url));
  // turn the URLs into a nicely formatted JSON string
  // var jason = JSON.stringify(jasonstrung, null, 4);
  // see what we got
  const jason = JSON.stringify(jasonstring);
  console.log(jason);
}
Another method
I suggest you use fetch() instead of xhr - with xhr you have to take care of everything yourself - fetch() is quite new, with a Promise-based syntax (you'll meet that a lot if you work with APIs).
More on fetch():
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch
const url = "https://jsonplaceholder.typicode.com/posts";
// fetch returns a Promise, so you can use
// .then, .catch, .finally - very handy!
fetch(url)
  .then(resp => {
    return resp.json();
  })
  .then(json => {
    // you have the json formatted response here:
    console.log(json);
  })
  .catch(err => {
    console.log(err);
  });
You could use fetch() with the friendly async-await syntax, which makes your code feel synchronous:
const url = "https://jsonplaceholder.typicode.com/posts";
// await can only be placed in an async function!
async function fetchAPI(url) {
  // try-catch block to handle errors of the fetch()
  try {
    const response = await fetch(url);
    const json = await response.json();
    console.log(json);
  } catch (err) {
    console.log(err);
  }
}
fetchAPI(url);

How do I read a large Solr response object by object while the response is still returning

I'm querying Solr 7.5 for some large objects and would like to render them to a browser UI as they are returned.
What are my options for reading the response bit by bit when using the select request handler?
I don't think there is anything native to Solr to do what you are asking.
One approach would be to return only the IDs of the documents that match the criteria in your query (and not include the heavy part of the document), and then fetch the heavy part of each document asynchronously from the client.
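A hedged sketch of that two-step approach (the /people core, field names, and renderToUI are assumptions for illustration, not from the question):
const base = 'https://my-solr7/people/select';

// Step 1: fetch only the matching IDs; fl=id keeps the response small.
async function fetchIds(q) {
  const resp = await fetch(`${base}?q=${encodeURIComponent(q)}&fl=id&rows=100&wt=json`);
  const json = await resp.json();
  return json.response.docs.map(doc => doc.id);
}

// Step 2: fetch each heavy document on demand and render it as it arrives.
async function renderMatches(q) {
  for (const id of await fetchIds(q)) {
    const resp = await fetch(`${base}?q=id:"${id}"&wt=json`);
    const json = await resp.json();
    renderToUI(json.response.docs[0]); // renderToUI is your own rendering code
  }
}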
I was looking in the wrong place. I just needed to read up on the web API fetch().
response.json() reads the response to completion.
response.body.getReader() allows you to grab the stream in chunks and decode it from there.
let test = 'https://my-solr7/people/select?q=something';
fetchStream(test);

function fetchStream(uri, params = {}) {
  const options = {
    method: 'GET',
  };
  var decoder = new TextDecoder();
  fetch(uri, options)
    .then((response) => {
      const reader = response.body.getReader();
      reader.read().then(function read(result) {
        if (result.done) return;
        console.log(result.value);
        let chunk = decoder.decode(result.value || new Uint8Array(), {stream: !result.done});
        console.log(chunk);
        reader.read().then(read);
      });
    });
}

How to correctly send and receive deflated data

I'm using protobufs for serializing my data. So far I serialize my data on the server (Node restify), send it, receive it (the request is made by XMLHttpRequest), and deserialize it on the client.
Now I want to employ zipping to reduce the transferred file size. I tried using the library pako, which implements zlib.
In a basic script that I used to compare the protobuf and zipping performance to JSON, I used it this way, and there were no problems:
var buffer = proto.encode(data,'MyType'); // Own library on top of protobufs
var output = pako.deflate(buffer);
var unpacked = pako.inflate(output);
var decoded = proto.decode(buffer,'MyType');
However if I try to do this in a client-server model I can't get it working.
Server:
server.get('/data', function (req, res) {
  const data = getMyData();
  const buffer = proto.encode(data, 'MyType');
  res.setHeader('content-type', 'application/octet-stream;');
  res.setHeader('Content-Encoding', 'gzip;');
  return res.send(200, buffer);
});
My own protolibrary serializes the data in protobuf and then deflates it:
...
let buffer = type.encode(message).finish();
buffer = pako.deflate(buffer);
return buffer;
The request looks like this:
public getData() {
  return new Promise((resolve, reject) => {
    const request = new XMLHttpRequest();
    request.open("GET", this.url, true);
    request.responseType = "arraybuffer";
    request.onload = function (evt) {
      const arr = new Uint8Array(request.response);
      const payload = proto.decode(request.response, 'MyType');
      resolve(payload);
    };
    request.send();
  });
}
The proto.decode method first inflates the buffer (buffer = pako.inflate(buffer);) and then deserializes it from protobuf.
If the request is made, I get the following error, "Uncaught incorrect header check", returned by the inflate method of pako:
function inflate(input, options) {
  var inflator = new Inflate(options);
  inflator.push(input, true);
  // That will never happens, if you don't cheat with options :)
  if (inflator.err) { throw inflator.msg || msg[inflator.err]; }
  return inflator.result;
}
Also, I looked at the request in Postman and found the following:
The deflated response looks like this: 120,156,60,221,119,64,21,237,119,39,240,247,246,242,10,49,191,244,178,73,54,157 and has a length of 378564
The same request without deflating (the protobuf) looks like this
�:�:
 (� 0�8#H
 (� 0�8#H
� (�0�8#�H and has a length of 272613.
I'm assuming that I'm doing something incorrectly on the server side, since the compressed response is larger than the one not using compression.
Is it the Content-Type header? I'm out of ideas.
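For reference, one self-consistent pairing looks like the sketch below: compress manually on the server, inflate manually on the client (as the question's proto helpers already do), and leave Content-Encoding unset, since that header tells the browser to decompress the body itself before your code ever sees it. A sketch under those assumptions, not the asker's exact setup:
server.get('/data', function (req, res) {
  const data = getMyData();
  const buffer = proto.encode(data, 'MyType'); // deflates internally via pako
  // no Content-Encoding header, and no stray semicolons in header values,
  // so the deflated bytes reach the client untouched
  res.setHeader('content-type', 'application/octet-stream');
  return res.send(200, buffer);
});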

What is the output of a piped file stream?

Perhaps the question is not worded in the greatest way but here's some more context. Using GridFSBucket, I'm able to store a file in mongo and obtain a download stream for that file. Here's my question. Let's say I wanted to send that file back as a response to my http request.
I do:
downloadStream.pipe(res);
On the client side now, when I print the responseText, I get some long string with some funky characters that look to be encrypted. What is the format/type of this string/stream? How do I set up my response so that I can get the streamed data as an ArrayBuffer on my client side?
Thanks
UPDATE:
I haven't solved the problem yet; however, the suggestion by @Jekrb gives exactly the same output as doing console.log(this.responseText). It looks like the string is not a buffer. Here is the output from these 2 lines:
console.log(this.responseText.toString('utf8'))
var byteArray = new Uint8Array(arrayBuffer);
UPDATE 2 - THE CODE SNIPPETS
Frontend:
var savePDF = function (blob) {
  //fs.writeFile("test.pdf",blob);
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (this.readyState === XMLHttpRequest.DONE && this.status === 200) {
      // TO DO: handle the file in the response, which will be displayed
      console.log(this.responseText.toString('utf8'));
      var arrayBuffer = this.responseText;
      if (arrayBuffer) {
        var byteArray = new Uint8Array(arrayBuffer);
      }
      console.log(arrayBuffer);
    }
  };
  xhr.open("POST", "/pdf", true);
  xhr.responseType = 'arrayBuffer';
  xhr.send(blob);
};
BACKEND:
app.post('/pdf', function (req, res) {
  MongoClient.connect("mongodb://localhost:27017/test", function (err, db) {
    if (err) return console.dir(err);
    console.log("Connected to Database");
    var bucket = new GridFSBucket(db, { bucketName: 'pdfs' });
    var CHUNKS_COLL = 'pdfs.chunks';
    var FILES_COLL = 'pdfs.files';
    // insert file
    var uploadStream = bucket.openUploadStream('test.pdf');
    var id = uploadStream.id;
    uploadStream.once('finish', function () {
      console.log("upload finished!");
      var downloadStream = bucket.openDownloadStream(id);
      downloadStream.pipe(res);
    });
    // This pipes the POST data to the file
    req.pipe(uploadStream);
  });
});
My guess is that either the response is being output as plain binary which is not base64 encoded (still a buffer), or it is a compressed (gzip) response that needs to be uncompressed first.
Hard to pinpoint the issue without seeing the code though.
UPDATE:
Looks like you're missing the proper response headers.
Try setting these headers before the downloadStream.pipe(res):
res.setHeader('Content-disposition', 'attachment; filename=test.pdf');
res.set('Content-Type', 'application/pdf');
Your stream is likely already a buffer. You might be able to call responseText.toString('utf8') to convert the streamed data into a readable string.
I solved it!!!
Basically, preset the response type to "arraybuffer" before you make the request using
req.responseType = "arraybuffer"
Now, once you receive the response, don't use responseText; instead use response. response contains the ArrayBuffer with the data for the file.
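A minimal sketch of that fix applied to the earlier snippet (note the all-lowercase "arraybuffer"; the mixed-case 'arrayBuffer' in the question is an invalid value and is silently ignored):
var xhr = new XMLHttpRequest();
xhr.open("POST", "/pdf", true);
xhr.responseType = "arraybuffer"; // must be all lowercase
xhr.onload = function () {
  if (xhr.status === 200) {
    var byteArray = new Uint8Array(xhr.response); // use response, not responseText
    var pdfBlob = new Blob([byteArray], { type: "application/pdf" });
    window.open(URL.createObjectURL(pdfBlob)); // e.g. display the returned PDF
  }
};
xhr.send(blob); // `blob` is the uploaded file, as in the question's savePDF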

Chrome extension: how to pass ArrayBuffer or Blob from content script to the background without losing its type?

I have this content script that downloads some binary data using XHR, which is sent later to the background script:
var self = this;
var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.responseType = 'arraybuffer';
xhr.onload = function (e) {
  if (this.status == 200) {
    self.data = {
      data: xhr.response,
      contentType: xhr.getResponseHeader('Content-Type')
    };
  }
};
xhr.send();
... later ...
sendResponse({data: self.data});
After receiving this data in background script, I'd like to form another XHR request that uploads this binary data to my server, so I do:
var formData = new FormData();
var bb = new WebKitBlobBuilder();
bb.append(data.data);
formData.append("data", bb.getBlob(data.contentType));
var req = new XMLHttpRequest();
req.open("POST", serverUrl);
req.send(formData);
The problem is that the file uploaded to the server contains just this string: "[object Object]". I guess this happens because the ArrayBuffer type is lost somehow while transferring it from the content process to the background? How can I solve that?
Messages passed between a Content Script and a background page are JSON-serialized.
If you want to transfer an ArrayBuffer object through a JSON-serialized channel, wrap the buffer in a view, before and after transferring.
Here is an isolated example, so that the solution is generally applicable and not just in your case. The example shows how to pass around ArrayBuffers and typed arrays, but the method can also be applied to File and Blob objects, by using the FileReader API.
// In your case: self.data = { data: new Uint8Array(xhr.response), ...
// Generic example:
var example = new ArrayBuffer(10);
var data = {
  // Create a view
  data: Array.apply(null, new Uint8Array(example)),
  contentType: 'x-an-example'
};

// Transport over a JSON-serialized channel. In your case: sendResponse
var transportData = JSON.stringify(data);
// "{"data":[0,0,0,0,0,0,0,0,0,0],"contentType":"x-an-example"}"

// At the receiver's end. In your case: chrome.extension.onRequest
var receivedData = JSON.parse(transportData);

// data.data is a plain Array, NOT an ArrayBuffer or Uint8Array
receivedData.data = new Uint8Array(receivedData.data).buffer;
// Now, receivedData is the expected ArrayBuffer object
This solution has been tested successfully in Chrome 18 and Firefox.
new Uint8Array(xhr.response) is used to create a view of the ArrayBuffer, so that the individual bytes can be read.
Array.apply(null, <Uint8Array>) is used to create a plain array, using the keys from the Uint8Array view. This step reduces the size of the serialized message. WARNING: This method only works for small amounts of data. When the size of the typed array exceeds 125836, a RangeError will be thrown. If you need to handle large pieces of data, use other methods to do the conversion between typed arrays and plain arrays (see the sketch after these notes).
At the receiver's end, the original buffer can be obtained by creating a new Uint8Array and reading its buffer attribute.
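A hedged sketch of such a conversion that avoids the apply() argument limit. Array.from is the modern route; the chunked fallback suits older engines:
// Modern engines: no argument-count limit involved.
var plain = Array.from(new Uint8Array(xhr.response));

// Older engines: convert in chunks so no single apply() call
// receives too many arguments.
function toPlainArray(view) {
  var out = [];
  var CHUNK = 0x8000; // 32768 arguments per apply() call
  for (var i = 0; i < view.length; i += CHUNK) {
    out.push.apply(out, Array.prototype.slice.call(view.subarray(i, i + CHUNK)));
  }
  return out;
}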
Implementation in your Google Chrome extension:
// Part of the content script
self.data = {
  data: Array.apply(null, new Uint8Array(xhr.response)),
  contentType: xhr.getResponseHeader('Content-Type')
};
...
sendResponse({data: self.data});

// Part of the background page
chrome.runtime.onMessage.addListener(function (data, sender, callback) {
  ...
  data.data = new Uint8Array(data.data).buffer;
Documentation
MDN: Typed Arrays
MDN: ArrayBuffer
MDN: Uint8Array
MDN: Function.prototype.apply
Google Chrome Extension docs: Messaging > Simple one-time requests
"This lets you send a one-time JSON-serializable message from a content script to extension, or vice versa, respectively"
SO bonus: Upload a File in a Google Chrome Extension - Using a Web worker to request, validate, process and submit binary data.
For Chromium extensions using manifest v3, the URL.createObjectURL() approach doesn't work anymore because it is prohibited in service workers.
The best (easiest) way to pass data from a service worker to a content script (and vice versa) is to convert the blob into a base64 representation.
const fetchBlob = async url => {
  const response = await fetch(url);
  const blob = await response.blob();
  const base64 = await convertBlobToBase64(blob);
  return base64;
};

const convertBlobToBase64 = blob => new Promise(resolve => {
  const reader = new FileReader();
  reader.readAsDataURL(blob);
  reader.onloadend = () => {
    const base64data = reader.result;
    resolve(base64data);
  };
});
Then send the base64 to the content script.
Service worker:
chrome.tabs.sendMessage(sender.tab.id, { type: "LOADED_FILE", base64: base64 });
Content script:
chrome.runtime.onMessage.addListener(async (request, sender) => {
  if (request.type == "LOADED_FILE" && sender.id == '<your_extension_id>') {
    // do anything you want with the data from the service worker,
    // e.g. convert it back to a blob
    const response = await fetch(request.base64);
    const blob = await response.blob();
  }
});
