NodeJS base64 image encoding not quite working - javascript

I am using an API to get a user's profile photo from the O365 cloud. The docs say the response contains "the binary data of the requested photo."
I would like to display this image using the Data URI format, e.g.:
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO9TXL0Y4OHwAAAABJRU5ErkJggg=="
where everything after data:image/png;base64, is the image data in Base64.
I am unable to get the Base64 encoding working for the image data I am getting from the API. I'm not sure whether I am consuming the raw binary image data correctly, or converting it to Base64 correctly.
To verify: I upload my data URI to an online editor (https://www.site24x7.com/tools/datauri-to-image.html) and it never renders my image. However, if I upload an image there to get its Base64 data and paste that URI back into the editor, it displays correctly. So I am guessing my Base64 conversion isn't correct.
CODE in nodejs:-
let base64ImgTry1 = Buffer.from('binary-data-from-api').toString('base64')
// OR (note: new Buffer() is deprecated in favor of Buffer.from())
var base64ImgTry2 = new Buffer('binary-data-from-api', 'binary').toString('base64');
let imgURI_1 = 'data:image/png;base64,' + base64ImgTry1
let imgURI_2 = 'data:image/png;base64,' + base64ImgTry2
Neither imgURI_1 nor imgURI_2 works. I'm not sure if I am going wrong in consuming binary-data-from-api.
I also tried the NPM library https://www.npmjs.com/package/image-data-uri, using its encode(data, mediaType) method, where data was the direct response from the API.
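A common cause of exactly this symptom is the HTTP client decoding the binary response body as a UTF-8 string before Buffer.from ever sees it, which corrupts the bytes irreversibly. A minimal sketch of requesting the photo as raw bytes instead, assuming the axios client (the endpoint and token are placeholders, not necessarily the poster's setup):

const axios = require('axios');

async function getPhotoDataUri(token) {
  // Ask for raw bytes; without responseType the body is decoded as a string.
  const response = await axios.get(
    'https://graph.microsoft.com/v1.0/me/photo/$value', // placeholder endpoint
    { headers: { Authorization: `Bearer ${token}` }, responseType: 'arraybuffer' },
  );
  // response.data is now a Buffer/ArrayBuffer, so base64 encoding is safe.
  const mime = response.headers['content-type'] || 'image/png';
  return `data:${mime};base64,` + Buffer.from(response.data).toString('base64');
}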

Related

Rendering data:application/pdf Base64 encoded data uri to HTMLCanvasElement?

I'm wondering if it is possible to somehow render a data:application/pdf Base64-encoded data URI to an HTMLCanvasElement?
I need to do this because a business requirement is that the PDF is only accessible/downloadable once a user has paid.
Where I am with my code:
I am using jsPDF to render a PDF dynamically based on some input data. This is re-rendered as the user updates the input.
I obtain a Base64-encoded data URI string using that library's pdf.output('datauristring') method: const base64Data = pdf.output('datauristring').
I then convert this base64 string to a blob.
const base64 = await fetch(base64Data)
const blob = await base64.blob()
this.PDF = URL.createObjectURL(blob)
However, I'm thinking some of these steps might not be needed?
Would anyone recommend any potential solutions for rendering a PDF inside an HTMLCanvasElement?
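One common approach is PDF.js, which can rasterize a page of a PDF onto a canvas; the blob URL created above works as input. A sketch, assuming the pdfjs-dist package (its worker must also be configured per that library's docs):

import * as pdfjsLib from 'pdfjs-dist';
// pdfjsLib.GlobalWorkerOptions.workerSrc must point at the pdf.js worker build.

async function renderFirstPage(pdfUrl, canvas) {
  const pdf = await pdfjsLib.getDocument(pdfUrl).promise; // e.g. this.PDF from above
  const page = await pdf.getPage(1);
  // Size the canvas to the page, then draw.
  const viewport = page.getViewport({ scale: 1.5 });
  canvas.width = viewport.width;
  canvas.height = viewport.height;
  await page.render({ canvasContext: canvas.getContext('2d'), viewport }).promise;
}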

Read pdf as ArrayBuffer to store it in json file with detailed information

I want to build a wrapper around a pdf document to store more information. I tried the FileReader with reader.readAsBinaryString(), but this broke the pdf file (some parts, like images, were missing).
So I tried reader.readAsArrayBuffer(), which seems to get the content without any damage. But I don't know how to convert the ArrayBuffer to a string so I can write its value into a json file to export it.
When I use btoa(new TextDecoder("utf-8").decode(e.target.result))
I get an error: "The string to be encoded contains characters outside of the Latin1 range."
That sounds like a terrible idea in general, but anyway, might help someone else...
The easiest and most reliable way to encode a binary file to a string is to encode it as base64.
The FileReader API has a readAsDataURL() method, which will return a data URI composed of both a URI header and the base64 binary data.
So all you need, if you want only the data as a string, is to grab whatever comes after "base64," in the returned data URI.
inp.onchange = e => {
  const reader = new FileReader();
  reader.onload = e => {
    var myObj = {
      name: inp.files[0].name,
      data: reader.result.split('base64,')[1]
    };
    console.log(JSON.stringify(myObj));
  };
  reader.readAsDataURL(inp.files[0]);
};
<input type="file" id="inp">
Now, I can't advise storing a whole pdf file, let alone one containing images, in a JSON file. Encoded as base64, the binary data grows by about 33%. So you might want to consider saving both the meta-data and the original pdf file in a single compressed binary file (e.g. zip) instead.
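That said, if you do store it in JSON, reading the object back is the same steps in reverse. A sketch, where myObj is the object logged above:

// Decode the base64 payload back into bytes and wrap it in a Blob.
const bytes = Uint8Array.from(atob(myObj.data), c => c.charCodeAt(0));
const blob = new Blob([bytes], { type: 'application/pdf' });
const url = URL.createObjectURL(blob); // usable in an <a download> or <iframe>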

Extracting gzip data in Javascript with Pako - encoding issues

I am trying to run what I expect is a very common use case:
I need to download a gzip file (of complex JSON datasets) from Amazon S3, and decompress (gunzip) it in JavaScript. I have everything working correctly except the final 'inflate' step.
I am using Amazon API Gateway, and have confirmed that the Gateway is properly transferring the compressed file (I used curl and 7-Zip to verify the resulting data coming out of the API). Unfortunately, when I try to inflate the data in JavaScript with Pako, I get errors.
Here is my code (note: response.data is the binary data transferred from AWS):
apigClient.dataGet(params, {}, {})
  .then((response) => {
    console.log(response); // shows response including header and data
    const result = pako.inflate(new Uint8Array(response.data), { to: 'string' });
    // ERROR HERE: 'buffer error'
  }).catch((itemGetError) => {
    console.log(itemGetError);
  });
I also tried a version that splits the binary input into an array by adding the following before the inflate:
const charData = response.data.split('').map(function(x){return x.charCodeAt(0); });
const binData = new Uint8Array(charData);
const result = pako.inflate(binData, { to: 'string' });
//ERROR: incorrect header check
I suspect I have some sort of issue with the encoding of the data and I am not getting it into the proper format for Uint8Array to be meaningful.
Can anyone point me in the right direction to get this working?
For clarity:
As the code above is listed, I get a buffer error. If I drop the Uint8Array and pass response.data directly, I get the error 'incorrect header check', which is what makes me suspect that the encoding/format of my data is the issue.
The original file was compressed in Java using GZIPOutputStream with UTF-8 and then stored as a static file (i.e. randomname.gz).
The file is transferred through the AWS Gateway as binary, so it comes out exactly the same as the original file: 'curl --output filename.gz {URLtoS3Gateway}' === the file downloaded from S3.
I had the same basic issue when I used the gateway to encode the binary data as base64, but I did not experiment much with that, as it seemed easier to work with the "real" binary data than to add a base64 encode/decode step in the middle. If that step is needed, I can add it back in.
I have also tried some of the example processing found halfway through this issue: https://github.com/nodeca/pako/issues/15, but that didn't help (I might be misunderstanding binary format vs. array vs. base64).
I was able to figure out my own problem. It was related to the format of the data being read in by JavaScript (either JavaScript itself or the Angular HttpClient implementation). I was reading the data in a "binary" format, but it was not the same as the format recognized/used by pako. When I read the data in as base64 and then converted it to binary with atob, I got it working. Here is what I actually implemented (starting from fetching the stored file from S3).
1) Build an AWS API Gateway that will read a previously stored *.gz file from S3.
Create a standard "get" API request to S3 that supports binary. (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html)
Make sure the Gateway will recognize the input type by setting 'Binary types' (application/gzip worked for me; others like application/octet-stream and image/png should work for other types of files besides *.gz). NOTE: that setting is under the main API selections list on the left of the API config screen.
Set the 'Content Handling' to "Convert to text (if needed)" by selecting the API Method/{GET} -> Integration Request box and updating the 'Content Handling' item. (NOTE: the example in the link above recommends "passthrough". DON'T use that, as it will pass through the unreadable binary format.) This is the step that actually converts from binary to base64.
At this point you should be able to download a base64 version of your binary file via the URL (test in a browser or with curl; base64 of gzip data starts with "H4sI").
2) I then had the API Gateway generate the SDK and used the respective apiGClient.{get} call.
3) Within the call, translate the base64->binary->Uint8 and then decompress/inflate it. My code for that:
apigClient.myDataGet(params, {}, {})
  .then((response) => {
    // HttpClient result is in response.data
    // convert the incoming base64 -> binary string
    const strData = atob(response.data);
    // split it into an array of character codes rather than a "string"
    const charData = strData.split('').map(function (x) { return x.charCodeAt(0); });
    // convert to a typed array
    const binData = new Uint8Array(charData);
    // inflate
    const result = pako.inflate(binData, { to: 'string' });
    console.log(result);
  }).catch((itemGetError) => {
    console.log(itemGetError);
  });
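For completeness: with an HTTP client that can deliver a real ArrayBuffer (fetch, or Angular's HttpClient with responseType: 'arraybuffer'), the base64 detour can be skipped entirely. A sketch, assuming an endpoint that serves the gzip bytes untouched (hypothetical URL):

async function fetchAndInflate(url) {
  const response = await fetch(url);        // e.g. 'https://example.com/randomname.gz'
  const buf = await response.arrayBuffer(); // raw bytes, no string decoding involved
  return pako.inflate(new Uint8Array(buf), { to: 'string' });
}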

How to post and get data uri image from web api?

I am trying to post a data URI image from JavaScript to a backend ASP.NET Web API. However, it gives me an "input is not a valid Base-64 string" error. I understand that this may be due to the "data:image/png;base64," prefix that the data URI contains.
Now, even if I remove this prefix from the data URI and send only the rest of the string to the server, how do I store the Base64 string on the server?
Moreover, how do I retrieve this data as an image from the Web API?
NOTE: The image will be less than 200kB and is therefore to be stored as varbinary(max) in SQL Server.
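On the client side, stripping the prefix before posting is a one-liner. A sketch, assuming the image is already in a variable dataUri and a hypothetical /api/photo endpoint:

// Keep only the payload after the comma, so the server gets a clean Base64 string.
const base64Payload = dataUri.split(',')[1];
fetch('/api/photo', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ image: base64Payload }),
});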
The thing is, you should convert your image to a byte[] and store it on your server as varbinary:
byte[] arr = new byte[image1.ContentLength];
image1.InputStream.Read(arr, 0, image1.ContentLength);
When retrieving, you should convert the varbinary data to a Base64 string, and the Base64 string to an image:
string imageBase64Data = Convert.ToBase64String(img);
The line above converts the varbinary to a Base64 string. It then needs to be in the proper data-URI format for the browser to display it as an image, which is what the following code does:
string imageDataURL = string.Format("data:image/png;base64,{0}", imageBase64Data);
Session["Photo"] = imageDataURL;
Now you should be able to view your image.
Post the image from your client in the form of a string, without specifying the type. In your action you can then get the image the following way:
var bytes = Convert.FromBase64String(yourStringHere);
using (var ms = new MemoryStream(bytes))
{
    image = Image.FromStream(ms);
}
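On the client, whatever the API hands back as imageDataURL can be assigned straight to an image element (a sketch; the element id is hypothetical):

document.getElementById('preview').src = imageDataURL;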

How do I encode/decode a file correctly after reading it through javascript and pass the file data through ajax?

I have a Django FileField with the multiple attribute set to true. I am trying to make a multiple-file uploader where I get the file objects with a plain JavaScript FileReader object. While looping through the file list I read each file's data through
reader.readAsBinaryString(file);
and get the desired file data. After passing this data to my views through ajax, I am trying to create a copy of the file in the media folder. I am presently using the following view function:
@csrf_exempt
def storeAttachment(request):
    '''
    stores the files in the media folder
    '''
    data = simplejson.loads(request.raw_post_data)
    user_org = data['user_org']
    fileName = data['fileName']
    fileData = data['fileData']
    file_path = MEDIA_ROOT + 'icts_attachments/'
    try:
        path = open(file_path + str(user_org) + "_" + fileName, "w+")
        path.write(fileData)
        path.close()
        return HttpResponse(1)
    except IOError:
        return HttpResponse(2)
I am able to write simple text files, .js, .html, and a few other formats, but when I try to upload pdf, word, excel, or rar formats I get the following error in my response, even though a file (with invalid data) is saved at my MEDIA path (the file does not open):
'ascii' codec can't encode characters in position 41-42: ordinal not in range(128)
I tried to encode/decode the file data using various references, but with no effect. Any advice will be greatly appreciated.
You got that error because Python 2's default ASCII codec was used. Characters greater than 127 cause an exception, so use str.encode to encode from Unicode to text/bytes.
Good practice is to use the with keyword when dealing with file objects.
path = u''.join((file_path, user_org, '_', fileName)).encode('utf-8')
with open(path, 'w+') as f:
    f.write(fileData)
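On the JavaScript side, the encoding trouble can be sidestepped entirely by sending base64 instead of a raw binary string: read the file with readAsDataURL (as in the FileReader answer earlier on this page), then base64-decode on the server before writing in binary mode. A sketch; the endpoint and the file/userOrg variables are assumptions:

const reader = new FileReader();
reader.onload = () => {
  // reader.result is a data URI; keep only the payload after "base64,".
  const payload = {
    user_org: userOrg, // hypothetical: however you track the user's org
    fileName: file.name,
    fileData: reader.result.split('base64,')[1],
  };
  fetch('/storeAttachment/', { // hypothetical URL mapped to the view above
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
};
reader.readAsDataURL(file);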
