Node JS + DIME - sending binary data in POST

There's a file 1740 bytes long; its contents are read into a Buffer res. res.length is 1740, and res.toString('binary', 0, res.length).length is also 1740.
I send a POST request using the request lib:
request.post({
    url: endpoint,
    headers: headers,
    body: res.toString('binary', 0, res.length)
}, callback);
The request goes to a gSOAP server. After hours of debugging on the receiving server, we found the following: the request that arrives is 1753 bytes long and some characters have been converted. In particular, hex B7 becomes C2 B7, i.e. it's converted as described here: http://www.fileformat.info/info/unicode/char/b7/index.htm
I tried setting encoding: 'binary' and encoding: null in the request params, same result (with encoding: null I only get the error message as a buffer, but that's all).
I tried using the https library and piping a stream into the request, same result.
Best regards, Alexander
EDIT
At the moment I have a workaround with cURL: sending the request from the CLI with --data-binary "@file_to_which_i_dumped_the_request" does the trick. But the app and the Node.js server itself are shipped within an installer, so we'd have to install cURL on users' machines too, which is... acceptable, but not the best option.
So is there a way to send a binary POST body with Node.js?
Thanks.

Don't use the 'binary' string encoding: it is deprecated, and it only makes sense if "the other side" will decode it back to a buffer. Worse, converting to a string at all is what corrupts the payload here: a JavaScript string holds characters, not bytes, and when request writes it out it is encoded as UTF-8, so every byte in the 0x80-0xFF range (like 0xB7) becomes a two-byte sequence (0xC2 0xB7). That is exactly the inflation you observed, from 1740 to 1753 bytes.
Just use the buffer directly:
request.post({
    url: endpoint,
    headers: headers,
    body: res
}, callback);
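For reference, the same idea with only Node's built-in modules, in case dropping the request lib is an option. This is a minimal sketch assuming a plain-http endpoint; the file name and Content-Type are placeholders, not from the question:

var fs = require('fs');
var http = require('http');
var url = require('url');

// Read the file as a Buffer and keep it a Buffer end to end.
var buf = fs.readFileSync('dime_payload.bin'); // placeholder file name

var opts = url.parse(endpoint); // `endpoint` as in the question
opts.method = 'POST';
opts.headers = {
    'Content-Type': 'application/dime',  // placeholder
    'Content-Length': buf.length         // byte length of the Buffer
};

var req = http.request(opts, function (res) {
    res.pipe(process.stdout);
});
req.end(buf); // writing a Buffer skips string re-encoding entirely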

Related

How can I send new line feed in a POST call using plain text in JavaScript ES6 / Prometheus

I am trying to send a payload using POST to a Prometheus gateway server.
This server expects the request body to be text/plain, and it must end with a line-feed character.
"Using the Prometheus text protocol, pushing metrics is so easy that no separate CLI is provided. Simply use a command-line HTTP tool like curl. Your favorite scripting language has most likely some built-in HTTP capabilities you can leverage here as well.
Note that in the text protocol, each line has to end with a line-feed character (aka 'LF' or '\n'). Ending a line in other ways, e.g. with 'CR' aka '\r', 'CRLF' aka '\r\n', or just the end of the packet, will result in a protocol error."
var payload = 'http_requests_total{method="post", code="200"} 1000 1295000000000'
var apiRequest = http.request({
    'endpoint': 'http://myserver/api/metrics',
    'method': 'POST',
    'headers': {
        'content-type': 'text/plain',
        'Accept-Encoding': 'UTF-8'
    }
})
var resp = apiRequest.write(payload)
If I send it as is, I receive a 400 response saying "unexpected end of input stream".
This is because the payload does not end with a line-feed character.
I have tried appending "\n", but this doesn't work:
var payload = 'http_requests_total{method="post", code="200"} 1000 1295000000000' + '\n'
Am I missing something fundamental? If I send a curl request it works! But I am limited to using JavaScript ES6.
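For comparison, here is a minimal sketch of the same push with Node's built-in http module (the host and path are placeholders): the payload keeps its trailing \n, and the request is finished with end(), since write() alone never completes it.

const http = require('http');

const payload = 'http_requests_total{method="post", code="200"} 1000 1295000000000\n';

const req = http.request({
    host: 'myserver',      // placeholder
    path: '/api/metrics',  // placeholder
    method: 'POST',
    headers: {
        'Content-Type': 'text/plain',
        'Content-Length': Buffer.byteLength(payload)
    }
}, res => {
    console.log(res.statusCode); // 200 on success
});
req.end(payload); // sends the body and finishes the request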

how to post data using python [duplicate]

I have got Apache2 installed and Python working.
I am having a problem though. I have two pages:
one a Python page and the other an HTML page with jQuery.
Can someone please tell me how I can get my Ajax POST to work correctly?
<html>
<head>
<!-- jQuery is needed for $.ajax; any 1.x build from the official CDN works -->
<script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
</head>
<body>
<script>
$(function() {
    alert('Im going to start processing');
    $.ajax({
        url: "saveList.py",
        type: "post",
        data: {'param': {"hello": "world"}},
        dataType: "application/json",
        success: function(response) {
            alert(response);
        }
    });
});
</script>
</body>
</html>
And the Python code:
import sys
import json

def index(req):
    result = {'success': 'true', 'message': 'The Command Completed Successfully'}
    data = sys.stdin.read()
    myjson = json.loads(data)
    return str(myjson)
OK, let's move to your updated question.
First, you should pass the Ajax data property as a string. Then, since you've mixed up the dataType and contentType properties (dataType describes the expected response, not the request), change the dataType value to "json":
$.ajax({
    url: "saveList.py",
    type: "post",
    data: JSON.stringify({'param': {"hello": "world"}}),
    dataType: "json",
    success: function(response) {
        alert(response);
    }
});
Finally, modify your code a bit to work with JSON request as follows:
#!/usr/bin/python
import sys, json

result = {'success': 'true', 'message': 'The Command Completed Successfully'}
myjson = json.load(sys.stdin)
# Do something with 'myjson' object
print 'Content-Type: application/json\n\n'
print json.dumps(result)  # or "json.dump(result, sys.stdout)"
As a result, in the success handler of the Ajax request you will receive an object with success and message properties.
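If jQuery isn't a hard requirement, the same request can be made with the standard fetch API; this is just an equivalent sketch, not part of the original answer:

fetch("saveList.py", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ param: { hello: "world" } })
})
    .then(function(response) { return response.json(); }) // parse the JSON body
    .then(function(data) { alert(data.message); });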
You should read the JSON data like this:
#!/usr/bin/env python3
import os
import sys
import json
content_len = int(os.environ["CONTENT_LENGTH"])
req_body = sys.stdin.read(content_len)
my_dict = json.loads(req_body)
With the following code, you can run into problems:
myjson = json.load(sys.stdin)
or, written less succinctly:
req_body = sys.stdin.read()
my_dict = json.loads(req_body)
That does work for me when my CGI script is on an Apache server, but you can't count on it working in general, as I found out when my CGI script was on another server. According to the CGI spec:
RFC 3875 (CGI Version 1.1, October 2004), section 4.2, Request Message-Body:
Request data is accessed by the script in a system-defined method; unless defined otherwise, this will be by reading the 'standard input' file descriptor or file handle.
Request-Data   = [ request-body ] [ extension-data ]
request-body   = <CONTENT_LENGTH>OCTET
extension-data = *OCTET
A request-body is supplied with the request if the CONTENT_LENGTH is not NULL. The server MUST make at least that many bytes available for the script to read. The server MAY signal an end-of-file condition after CONTENT_LENGTH bytes have been read or it MAY supply extension data. Therefore, the script MUST NOT attempt to read more than CONTENT_LENGTH bytes, even if more data is available. However, it is not obliged to read any of the data.
The key line is: "the script MUST NOT attempt to read more than CONTENT_LENGTH bytes, even if more data is available."
Apparently, Apache signals end-of-file to the CGI script immediately after sending it the request body, which causes sys.stdin.read() to return. But according to the CGI spec, a server is not required to signal EOF after the body of the request; when my script was on another server, it hung on sys.stdin.read() until it eventually produced a timeout error.
Therefore, in order to read in json data in the general case, you should do this:
content_len = int(os.environ["CONTENT_LENGTH"])
req_body = sys.stdin.read(content_len)
my_dict = json.loads(req_body)
The server sets a bunch of environment variables for cgi scripts, which contain header information, one of which is CONTENT_LENGTH.
Here is what a failed curl request looked like when I used myjson = json.load(sys.stdin):
-v      verbose output
-H      specify one header
--data  implicitly specifies a POST request
Note that curl automatically calculates a Content-Length header for you.
~$ curl -v \
> -H 'Content-Type: application/json' \
> --data '{"a": 1, "b": 2}' \
> http://localhost:65451/cgi-bin/1.py
* Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 65451 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 65451 (#0)
> POST /cgi-bin/1.py HTTP/1.1
> Host: localhost:65451
> User-Agent: curl/7.58.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 16
>
* upload completely sent off: 16 out of 16 bytes
=== hung here for about 5 seconds ====
< HTTP/1.1 504 Gateway Time-out
< Date: Thu, 08 Mar 2018 17:53:30 GMT
< Content-Type: text/html
< Server: inets/6.4.5
* no chunk, no close, no size. Assume close to signal end
<
* Closing connection 0
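For comparison (not from the original answers), the same read-exactly-CONTENT_LENGTH pattern in a Node.js CGI script might look like this sketch:

// Read exactly CONTENT_LENGTH bytes from stdin; don't wait for an EOF
// that the server may never send.
const contentLength = parseInt(process.env.CONTENT_LENGTH, 10);
const chunks = [];
let received = 0;

process.stdin.on('data', chunk => {
    chunks.push(chunk);
    received += chunk.length;
    if (received >= contentLength) {
        process.stdin.pause(); // stop reading once the body is complete
        const body = Buffer.concat(chunks).slice(0, contentLength);
        const myDict = JSON.parse(body.toString('utf-8'));
        // CGI response: header, blank line, then the body
        process.stdout.write('Content-Type: application/json\n\n');
        process.stdout.write(JSON.stringify(myDict) + '\n');
    }
});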
Adding a little to 7stud's great answer:
I had some problems with the content length when reading Unicode. CONTENT_LENGTH counts bytes, but in text mode sys.stdin.read(n) counts characters, so multibyte UTF-8 input throws the count off. I fixed it by reading from the underlying byte buffer:
content_length = int(os.environ["CONTENT_LENGTH"])
data = sys.stdin.buffer.read(content_length).decode('utf-8')

Azure Storage put blob REST API: The MAC signature differs from Azure computed signature (missing content-type)

Since my backend is Beego (Golang) and Azure didn't provide a Storage Service SDK for Go, I have to manually create my blob-uploading procedure. Here is my workflow:
1. With Dropzone.js as the frontend, the user drags a file into the browser to be uploaded.
2. In the Dropzone addedfile handler, the client asks my backend to generate an auth sig from data like my storage account, the file name/length, and the other x-ms-* headers.
3. The auth sig is returned, and I trigger Dropzone to call XMLHttpRequest.send() against the Azure Put Blob API URL with the auth sig.
4. Azure returns an error, and I found that my backend didn't compute the sig with the content-type data:
AuthenticationFailed: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:daefeaf4-0001-0021-74d6-f4a4cf000000
Time:2016-08-12T20:16:35.5410039Z
The MAC signature found in the HTTP request 'xxxx' is not the same as any computed signature. Server used following string to sign: 'PUT
multipart/form-data; boundary=----WebKitFormBoundaryn9qZe6obJbXmk5Ko
x-ms-blob-type:BlockBlob
x-ms-date:Fri, 12 Aug 2016 20:16:36 GMT
x-ms-version:2015-12-11
/myaccount/user-files/id_hex/57116204071.pdf'.
The problem is that the random boundary string in the content-type (multipart/form-data; boundary=----WebKitFormBoundaryn9qZe6obJbXmk5Ko) is generated by the browser only AFTER xmlhttprequest.send() (in step 3). How do I know what the boundary string will be BEFORE xmlhttprequest.send() (in step 2)?
Have you tried the solution in Dropzone Issue #590? It seems to be a scenario similar to yours: someone who wants to remove the multipart/form-data; boundary from the header.
With the solution provided in that issue, it would look similar to this:
$("#dropzone").dropzone({
url: "<url>",
method: "PUT",
headers: {"Authorization": auth,
"x-ms-date":strTime,
"x-ms-version": "2015-12-11"
},
sending: function(file, xhr) {
var _send = xhr.send;
xhr.send = function() {
_send.call(xhr, file);
};
}
});
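With the raw file as the body, the Content-Type Azure sees is the file's own type rather than multipart/form-data, so the string-to-sign in step 2 has to be computed over that same value. One way to keep the two sides in agreement is to pin the header explicitly in the sending callback; this is an untested sketch, not from the linked issue:

sending: function(file, xhr) {
    var _send = xhr.send;
    // Pin the Content-Type so the server-side signature can be computed
    // over a value known before send() is called.
    xhr.setRequestHeader('Content-Type', file.type || 'application/octet-stream');
    xhr.send = function() {
        _send.call(xhr, file);
    };
}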

Cross-domain Jquery JSONP POST to Rails app

This is killing me. I'm trying to load data from a different domain, via an API of sorts that I'm trying to write. When sending JSON parameters as POST they get discarded. I've read somewhere that some special headers must be set in a before_filter:
def cors_headers # set_access_control_headers
  headers['Access-Control-Allow-Origin'] = '*'
  headers['Access-Control-Allow-Methods'] = 'POST, GET, OPTIONS'
  headers['Access-Control-Max-Age'] = "1728000"
  headers['Access-Control-Allow-Headers'] = 'content-type, accept'
end
Haven't had any luck with these though. I guess it's a browser limitation.
When I try sending the data as GET instead of POST, it gets added to the URL like this:
Completed in 959ms (View: 0, DB: 2) | 200 OK [http://www.somedomain.com/connector/browse/Sport.json?callback=jQuery16105855946165975183_1379526705493&{%22filters%22:[{%22filter%22:{%22attribute%22:%22id%22,%22operator%22:%22%3E%22,%22value%22:%222%22}},{%22filter%22:{%22attribute%22:%22id%22,%22operator%22:%22%3C%22,%22value%22:%227523%22}}]}&_=1379526723982]
So Rails basically can't see the filters, which are the params I'm trying to send:
Parameters: {"{\"filters\":"=>{}, "id"=>"Sport", "_"=>"1379526723982", "callback"=>"jQuery16105855946165975183_1379526705493"}
The jquery snippet I'm playing with is:
$jq.ajax({
    url: "http://www.somedomain.com/connector/browse/" + x + ".json" + "?callback=?",
    type: "get", // tried post too
    dataType: "json", // tried jsonp too
    accepts: "json",
    data: req_data, // this is JSON.stringified already
    processData: false,
    contentType: "application/json; charset=UTF-8;",
    success: output
});
The sample data I'm trying to send is this:
{"filters":[{"filter":{"attribute":"id","operator":">","value":"2"}},{"filter":{"attribute":"id","operator":"<","value":"7523"}}]}
Does anyone have an idea how to sort this out?
Many thanks!
Basically the JS SOP (same-origin policy) prevents us from sending a POST request and reading the response, but it can be worked around like this:
1) Compose the request data and send it as POST. Don't expect to receive a response: don't use the success callback, use complete instead. Add a random-ish variable to the request.
2) Temporarily store the response on the server side, in a file, a session variable, or a memcached server, using the random var from step 1 as the key within the store.
3) Send a second, JSONP AJAX call to fetch the cached object.
With memcached, make sure the cached responses expire or get removed from time to time; in my case the app gets a lot of traffic, and it would spam my memcache servers with junk if not set to expire.
Here's some sample code:
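(A sketch of the client side of the flow above; the endpoints and the key parameter are hypothetical, not from the original answer.)

// Step 1: fire-and-forget POST, keyed by a random token.
var token = 'req_' + Math.random().toString(36).slice(2);

$jq.ajax({
    url: 'http://www.somedomain.com/connector/store', // hypothetical endpoint
    type: 'post',
    data: { key: token, payload: JSON.stringify(req_data) },
    // SOP blocks reading this response, so only `complete` is reliable here.
    complete: function() {
        // Step 3: fetch the cached result with a JSONP GET, same token.
        $jq.ajax({
            url: 'http://www.somedomain.com/connector/fetch', // hypothetical endpoint
            dataType: 'jsonp',
            data: { key: token },
            success: output
        });
    }
});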

In node.js, how do I get the Content-Length header in response to http.get()?

I have the following script, and it seems Node is not including the Content-Length header in the response object. I need to know the length before consuming the data, and since the data could be quite large, I'd rather not buffer it.
http.get('http://www.google.com', function(res) {
    console.log(res.headers['content-length']); // DOESN'T EXIST
});
I've navigated all over the object tree and don't see anything. All other headers are in the 'headers' field.
Any ideas?
www.google.com does not send a Content-Length. It uses chunked encoding, which you can tell by the Transfer-Encoding: chunked header.
If you want the size of the response body, listen for res's data events and add the size of each received buffer to a counter variable. When end fires, you have the final size.
If you're worried about large responses, abort the request once your counter goes above however many bytes you're willing to accept.
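A sketch of that counting approach with a size cap (the cap value is arbitrary):

var http = require('http');

var MAX_BYTES = 10 * 1024 * 1024; // arbitrary 10 MB cap

var req = http.get('http://www.google.com', function(res) {
    var size = 0;
    res.on('data', function(chunk) {
        size += chunk.length; // chunk is a Buffer; length is in bytes
        if (size > MAX_BYTES) {
            req.abort(); // stop downloading past the cap
        }
    });
    res.on('end', function() {
        console.log('body was ' + size + ' bytes');
    });
    res.on('aborted', function() {
        console.log('aborted after ' + size + ' bytes');
    });
});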
Not every server will send content-length headers.
For example:
http.get('http://www.google.com', function(res) {
    console.log(res.headers['content-length']); // undefined
});
But if you request SO:
http.get('http://stackoverflow.com/', function(res) {
    console.log(res.headers['content-length']); // 1192916
});
You are correctly pulling that header from the response; Google just doesn't send it on their homepage (they use chunked encoding instead).
