node.js POST callbacks not received - javascript

I am POSTing a JSON file to a node.js listener, and I must not fully understand how POSTs are constructed, because in the code below the on('data') callback is never invoked. I can see the actual JSON string in the body, so I can work around the missing callback, but it seems like I'm doing something wrong in how I generate my POST request. [Postman details further below]
// Server initialization
var server = restify.createServer();
server.use(restify.queryParser());
server.use(CookieParser.parse);
server.use(restify.bodyParser());

// Later, at the point where I register callbacks.
this.server.post('receive', function (request, respond) {
    console.log('Received POST');
    console.log("Headers: %s", JSON.stringify(request.headers));
    console.log("Body: %s", JSON.stringify(request.body));

    var body = '';
    var filePath = './data/notes.json';

    // This event is never triggered.
    request.on('data', function (data) {
        console.log('Data received.');
        body += data;
    });

    request.on('end', function () {
        console.log('End of POST');
        fs.writeFile(filePath, body, function () {
            respond.end();
        });
    });
});
POST details:
I'm using Postman to create a POST request with Content-Type: application/json, putting the JSON string in the raw body. What will normally trigger the 'data' event in a POST request? If I ignore data events and just read from the request body, will I run into issues?

Since you're using restify.bodyParser, that middleware has already read the request body, so there is nothing left for your handler to read (hence no data events). A stream like request can only be read once; once it's exhausted, you can't "re-read" it.
Which also means that you can just use request.body, which should be the (parsed) result of the JSON that you're posting.
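For example, a minimal sketch of the same handler relying on the already-parsed body (keeping the question's route and file path; error handling via next is restify's convention):

var fs = require('fs');

this.server.post('receive', function (request, respond, next) {
    // restify.bodyParser() has already parsed the JSON into request.body,
    // so no 'data'/'end' listeners are needed.
    fs.writeFile('./data/notes.json', JSON.stringify(request.body), function (err) {
        if (err) {
            return next(err);
        }
        respond.send(200);
        return next();
    });
});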
As an aside: I don't know Postman very well, but it looks like you're sending a JSON-encoded string to the server, as opposed to a JSON-encoded object.
To send the latter, I would expect that this should be the raw body data:
{"body":"this is a test"}

Related

Cloudflare Worker TypeError: One-time-use body

I'm trying to use a Cloudflare Worker to proxy a POST request to another server.
It throws a JS exception; by wrapping it in a try/catch block I've established that the error is:
TypeError: A request with a one-time-use body (it was initialized from a stream, not a buffer) encountered a redirect requiring the body to be retransmitted. To avoid this error in the future, construct this request from a buffer-like body initializer.
I would have thought this could be solved by simply copying the Response so that it's unused, like so:
return new Response(response.body, { headers: response.headers })
That's not working. What am I missing about streaming vs buffering here?
addEventListener('fetch', event => {
    var url = new URL(event.request.url);
    var reqType = event.request.method;
    if (url.pathname.startsWith('/blog') || url.pathname === '/blog') {
        if (reqType === 'POST') {
            event.respondWith(handleBlogPost(event, url));
        } else {
            handleBlog(event, url);
        }
    } else {
        event.respondWith(fetch(event.request));
    }
})
async function handleBlog(event, url) {
    var newBlog = "https://foo.com";
    var originUrl = url.toString().replace('https://www.bar.com/blog', newBlog);
    event.respondWith(fetch(originUrl));
}

async function handleBlogPost(event, url) {
    try {
        var newBlog = "https://foo.com";
        var srcUrl = "https://www.bar.com/blog";
        const init = {
            method: 'POST',
            headers: event.request.headers,
            body: event.request.body
        };
        var originUrl = url.toString().replace(srcUrl, newBlog);
        const response = await fetch(originUrl, init);
        return new Response(response.body, { headers: response.headers });
    } catch (err) {
        // Display the error stack.
        return new Response(err.stack || err);
    }
}
A few issues here.
First, the error message is about the request body, not the response body.
By default, Request and Response objects received from the network have streaming bodies -- request.body and response.body both have type ReadableStream. When you forward them on, the body streams through -- chunks are received from the sender and forwarded to the eventual recipient without keeping a copy locally. Because no copies are kept, the stream can only be sent once.
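You can see this one-shot behavior directly: reading a body a second time fails, because the first read consumed the stream. A small illustration (not from the original post):

async function readTwice(request) {
    const first = await request.text();   // consumes the body stream
    const second = await request.text();  // rejects: the body was already used
}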
The problem in your case, though, is that after streaming the request body to the origin server, the origin responded with a 301, 302, 307, or 308 redirect. These redirects require that the client re-transmit the exact same request to the new URL (unlike a 303 redirect, which directs the client to send a GET request to the new URL). But, Cloudflare Workers didn't keep a copy of the request body, so it can't send it again!
You'll notice this problem doesn't happen when you do fetch(event.request), even if the request is a POST. The reason is that event.request's redirect property is set to "manual", meaning that fetch() will not attempt to follow redirects automatically. Instead, fetch() in this case returns the 3xx redirect response itself and lets the application deal with it. If you return that response on to the client browser, the browser will take care of actually following the redirect.
However, in your worker, it appears fetch() is trying to follow the redirect automatically, and producing an error. The reason is that you didn't set the redirect property when you constructed your Request object:
const init = {
    method: 'POST',
    headers: event.request.headers,
    body: event.request.body
};
// ...
await fetch(originUrl, init);
Since init.redirect wasn't set, fetch() uses the default behavior, redirect = "follow", i.e. fetch() tries to follow redirects automatically. If you want fetch() to use manual redirect behavior, you could add redirect: "manual" to init. However, it looks like what you're really trying to do here is copy the whole request. In that case, you should just pass event.request in place of the init structure:
// Copy all properties from event.request *except* URL.
await fetch(originUrl, event.request);
This works because a Request has all of the fields that fetch()'s second parameter wants.
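If you do build the init structure by hand and want the manual behavior, it is one extra property (a sketch in the same fragment style as above):

const init = {
    method: 'POST',
    headers: event.request.headers,
    body: event.request.body,
    // Hand 3xx responses back to the caller instead of following them.
    redirect: 'manual'
};
const response = await fetch(originUrl, init);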
What if you want automatic redirects?
If you really do want fetch() to follow the redirect automatically, then you need to make sure that the request body is buffered rather than streamed, so that it can be sent twice. To do this, you will need to read the whole body into a string or ArrayBuffer, then use that, like:
const init = {
    method: 'POST',
    headers: event.request.headers,
    // Buffer the whole body so that it can be re-sent if fetch()
    // needs to follow a redirect.
    body: await event.request.arrayBuffer()
};
// ...
await fetch(originUrl, init);
A note on responses
I would have thought this could be solved by simply copying the Response so that it's unused, like so:
return new Response(response.body, { headers: response.headers })
As described above, the error you're seeing is not related to this code, but I wanted to comment on two issues here anyway to help out.
First, this line of code does not copy all properties of the response. For example, you're missing status and statusText. There are also some more-obscure properties that show up in certain situations (e.g. webSocket, a Cloudflare-specific extension to the spec).
Rather than try to list every property, I again recommend simply passing the old Response object itself as the options structure:
new Response(response.body, response)
The second issue is with your comment about copying. This code copies the Response's metadata, but does not copy the body. That is because response.body is a ReadableStream. This code initializes the new Response object to contain a reference to the same ReadableStream. Once anything reads from that stream, the stream is consumed for both Response objects.
Usually, this is fine, because usually, you only need one copy of the response. Typically you are just going to send it to the client. However, there are a few unusual cases where you might want to send the response to two different places. One example is when using the Cache API to cache a copy of the response. You could accomplish this by reading the whole Response into memory, like we did with requests above. However, for responses of non-trivial size, that could waste memory and add latency (you would have to wait for the entire response before any of it gets sent to the client).
Instead, what you really want to do in these unusual cases is "tee" the stream so that each chunk that comes in from the network is actually written to two different outputs (like the Unix tee command, which comes from the idea of a T junction in a pipe).
// ONLY use this when there are TWO destinations for the
// response body! tee() returns a pair of streams.
let [stream1, stream2] = response.body.tee();
let response1 = new Response(stream1, response);
let response2 = new Response(stream2, response);
Or, as a shortcut (when you don't need to modify any headers), you can write:
// ONLY use this when there are TWO destinations for the
// response body!
response.clone()
Confusingly, response.clone() does something completely different from new Response(response.body, response). response.clone() tees the response body, but keeps the Headers immutable (if they were immutable on the original). new Response(response.body, response) shares a reference to the same body stream, but clones the headers and makes them mutable. I personally find this pretty confusing, but it's what the Fetch API standard specifies.
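To make the "two destinations" case concrete, here is a sketch of caching a copy of a response while also returning it to the client, using the Workers Cache API (caches.default and event.waitUntil are Workers built-ins; this assumes a cacheable GET request):

async function handleRequest(event) {
    const response = await fetch(event.request);
    // clone() tees the body: one copy goes to the cache, one to the client.
    event.waitUntil(caches.default.put(event.request, response.clone()));
    return response;
}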

Native JS fetch API complete error handling. How?

I've put together the following code from learning about the fetch API. I am trying to replace AJAX and this looks wonderful so far.
Main Question:
According to the Fetch API documentation...
A fetch() promise will reject with a TypeError when a network error is
encountered or CORS is misconfigured on the server side, although this
usually means permission issues or similar — a 404 does not constitute
a network error, for example.
Having the 3 technologies working together...
If I disable the Web Server I get:
NetworkError when attempting to fetch resource.
Wonderful. That works great.
If I disable MySQL I get my custom error from PHP:
MySQL server down?
Wonderful. That works great.
If I disable PHP I get exactly nothing because the only way I can think of to pass through the Web Server request and trigger an error at the PHP level is with a... timeout.
After some research, I don't think there is a timeout option... at least not yet.
How could I implement it in the code below?
// CLICK EVENT
$("#btn_test").on('click', function () {
    // Call function
    test1();
});

function test1() {
    // json() - Returns a promise that resolves with a JSON object.
    function json_response(response) {
        // Check if response was ok.
        if (response.ok) {
            return response.json();
        }
    }

    // data - Access JSON data & process it.
    function json_response_data(data) {
        console.log('json_response_data: ', data);
    }

    // URL to post request to...
    var url = 'data_get_json_select_distinct_client.php';

    // Sample serializeArray() from html form data.
    // <input type="text" name="CLIENT_ID" value="1000">
    var post_data = [{
        "name": "CLIENT_ID",
        "value": "1000"
    }];

    // STRINGIFY
    post_data = JSON.stringify(post_data);

    // FETCH
    fetch(url, {
        method: 'post',
        headers: new Headers({'Content-Type': 'application/json; charset=utf-8'}),
        body: post_data
    })
    // VALID JSON FROM SERVER?
    .then(json_response)
    // ACCESS JSON DATA.
    .then(json_response_data)
    // ERROR.
    .catch(function (error) {
        console.log('Web server down?: ', error.message);
    });
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<button type="button" id="btn_test">FETCH RECORD</button>
Your server should return some sort of 5xx status code should there be a problem server-side. A 5xx response does not reject the fetch() promise by itself, but your json_response function already checks response.ok; throw an error there when the check fails and it will land in your catch.
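As for the timeout the question asks about: fetch() has no built-in timeout option, but you can abort the request yourself. A minimal sketch using AbortController (fetchWithTimeout is a name of my own; the other names come from the question's code):

function fetchWithTimeout(url, options, ms) {
    var controller = new AbortController();
    var timer = setTimeout(function () { controller.abort(); }, ms);
    // Pass the controller's signal so abort() cancels this fetch.
    return fetch(url, Object.assign({}, options, { signal: controller.signal }))
        .finally(function () { clearTimeout(timer); });
}

// Usage: the promise rejects with an AbortError if no response arrives in 5s.
fetchWithTimeout(url, { method: 'post', body: post_data }, 5000)
    .then(json_response)
    .then(json_response_data)
    .catch(function (error) {
        console.log('Timed out or server down?: ', error.message);
    });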

Receiving submitted POST file Object in server side Javascript

When a user submits a multipart form with files/images selected, the server-side POST handler (in Meteor or Node.js) generally uses this.request, req, or request to detect whether the method is POST, read the headers, and so on. What I don't understand is: where on that request object is the actual file located, and how do I retrieve it so it can be used for an image/file upload or for manipulation on the server?
node provides the querystring api to parse strings that look like:
foo=bar&baz=qux&baz=quux&corge
...which is how URL-encoded form data (Content-Type: application/x-www-form-urlencoded, the default for HTML forms) is sent. Parsing this with that API will return an object:
{ foo: 'bar', baz: ['qux', 'quux'], corge: '' }
So, you can first detect the method of the request, and if it is POST, you can attach a handler to 'data', accumulate the data into your own variable, and on 'end', parse it using querystring:
var qs = require('querystring');

// Your request callback function would be something like:
function handler(request, response) {
    if (request.method == 'POST') {
        var body = '';
        request.on('data', function (data) {
            body += data;
            // Reject requests that have sent too much data (e.g. 2MB):
            if (body.length > 2e6) {
                // Send HTTP status code for `Request Entity Too Large`:
                response.writeHead(413);
                response.end();
            }
        });
        request.on('end', function () {
            var form = qs.parse(body);
            // Use `form` as an object.
        });
    } // end if
} // end handler
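Note that actual file uploads are sent as multipart/form-data, which is not in querystring form, so qs.parse won't find the files; a dedicated multipart parser is the usual route. A rough sketch using the formidable package (assuming its classic IncomingForm API):

var formidable = require('formidable');

function uploadHandler(request, response) {
    if (request.method === 'POST') {
        var form = new formidable.IncomingForm();
        // formidable streams the multipart body to temp files, then hands
        // back the plain fields and the uploaded file descriptors.
        form.parse(request, function (err, fields, files) {
            if (err) {
                response.writeHead(500);
                return response.end();
            }
            // `files` holds the uploaded file objects (temp path, size, name).
            response.end('upload received');
        });
    }
}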

node.js missing post data in async request

I'm making a simple form in Node.js. Everything else seems to be working correctly, but the function that is supposed to receive post request data is never getting called. Here's the relevant code snippet:
if (request.method == 'POST') {
    var body = '';
    console.log(request.body);
    request.on('data', function (chunk) {
        console.log("got the post request data"); // nothing logged to console
        body += chunk;
    });
    request.on('end', onRequestEnd(body, response));
}
The function onRequestEnd does get called, but later my code breaks when there's nothing but an empty string in the parameter body. Is the keyword 'data' correct?
The code was modified from an answer here: How do you extract POST data in Node.js?. I'll post more if needed.
After lots of frustration I solved the problem myself!
I changed the line:
request.on('end', onRequestEnd(body, response));
to:
request.on('end', function () {
    onRequestEnd(body, response);
});
It had something to do with callbacks: the original line called onRequestEnd(body, response) immediately, while body was still empty, and registered its return value (undefined) as the 'end' handler. Wrapping it in a function defers the call until 'end' actually fires, by which point the 'data' handler has filled in body. This is how I feel: http://www.masti-xpress.com/images/Story-of-Every-Programmer.jpg
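In miniature, the difference between the two registrations:

// Calls onRequestEnd NOW, while body is still '', and registers its
// return value (undefined) as the 'end' handler:
request.on('end', onRequestEnd(body, response));

// Registers a function; onRequestEnd runs only when 'end' fires,
// after the 'data' handler has filled body in:
request.on('end', function () {
    onRequestEnd(body, response);
});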
I'll share how I solved this problem; I took a somewhat different view of it. What I wanted was to have something like this in my "view":
app('/urlToView', function (request, response) {
    request.on('end', function () {
        var post = request.data; // sanitize data
        resolver.renderTemplateOr404('start.html', post, request, response);
    });
});
The request.data is the important thing to notice here.
However I haven't really solved how to "not" have the request.on('end'...) in my view yet.
As for why your console.log() never shows anything: it comes down to how the callback from the function where you do all this work is handled.
I hijack the request before it lands in my view when I start the server
self.preProcess(self, request, response);
and
preProcess: function onRequest(app, request, response) {
    processor.preRequest(request);
}
and lastly, in the preRequest() function, I do
if (request.method === 'POST') {
    var postdata = "";
    request.on('data', function (postdataChunk) {
        postdata += postdataChunk;
    });
    request.on('end', function () {
        // This sets the accumulated data on the request.
        _transformRequest(request, _transformPostdata(postdata));
    });
}
and adding a console.log(postdataChunk); here isn't a problem since all of the callbacks are properly handled.
Also, this might be very stupid of me to ask, but are you aware that console.log() outputs to the terminal window, not to the browser?
This might not be an exact answer for you but I hope this helps a bit.

After creating new instance in collection, don't do a GET request on the endpoint (backbone)

After I add a model instance to a collection, I do a POST request to add it. Then a GET request is done to get the model from the server. Is there a way to not do the GET request, only the POST request? Also, is it possible to get the success and error callback functions to respond to the success and failure of the POST request?
I want to do this because the collection has a URL that parses the JSON data that gets back, so the GET request doesn't work, but the POST request does work. I don't want to do a GET request on a endpoint that doesn't work.
The GET request is unnecessary. On the server in your POST handler you should return a JSON result back to the client representing the model. This is especially useful when there are generated fields such as an id. Then on the client in the success callback you can grab the model returned from the POST.
In the following example a new model is added to the collection if successful. I've also included the error callback which will fire if either client side validation fails or the POST fails:
var isNew = this.model.isNew();
this.model.save({}, {
    success: function (model, response) {
        if (isNew && this.collection) {
            this.collection.add(model);
        }
    },
    error: function (model, response) {
        var errorMsg;
        // Response may be a string (if client-side validation failed)
        // or an AJAX response (if it failed server-side).
        if (_.isString(response))
            errorMsg = response;
        else
            errorMsg = response.responseText;
    }
});
The process you follow is indeed unnecessary. You should be using create on the collection to add the model directly and invoke the sync (the POST in this case) at the same time.
For example:
collection.create({foo: 'bar'});
// or, with an existing unsaved model:
collection.create(unsaved_model);
Invoking create returns either the (saved) model or false if this was not successful. In addition, it is possible to wait for the model to be saved before it is added to the collection:
collection.create({foo: 'bar'}, {wait: true});
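create passes its options through to save, so the success and error callbacks the question asks about can be attached directly. A sketch:

collection.create({foo: 'bar'}, {
    wait: true,
    success: function (model, response) {
        // POST succeeded; model now carries any server-assigned attributes.
    },
    error: function (model, response) {
        // Client-side validation failed, or the POST itself failed.
    }
});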
The documentation is your friend.
