I have a jrxml (JasperReports) file stored in a PostgreSQL database in binary format (bytea). I'm trying to read that file, convert it into a plain jrxml (XML) file, and save it to disk.
Here is what I've tried so far:
var fs = require('fs');

exports.saveFile = function (pg) {
  // pg is the postgres connection used to query the db
  pg.query('Select data from data_file where id = 123', function (err, result) {
    if (err) {
      console.log(err);
      return;
    }
    var data = result.rows[0].data;
    // Buffer.isBuffer(data) === true
    // I can get the data here. Now I try to convert it into text
    var file = data.toString('utf8');
    fs.writeFile('report.jrxml', file, function (er) {
      if (er) {
        console.log('an error occurred while saving the file');
        return;
      }
      console.log('file saved');
    });
  });
};
If I run the code above, the file is saved, but its content is still somehow binary.
How can I convert this to a plain XML file in text format that I can import into iReport, for example?
You might try going through a buffer first. I have used this technique to transform DB BLOBs into base64 strings.
var fileBuffer = Buffer.from(result.rows[0].data, 'binary'); // Buffer.from replaces the deprecated new Buffer()
var file = fileBuffer.toString('utf8');
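The base64 variant mentioned above is the same call with a different encoding, e.g. (a one-line sketch):

var base64String = fileBuffer.toString('base64'); // handy for embedding the blob in JSON or a data URI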
I use the 'pako' npm package to resolve that issue:
import { connection, Message } from 'websocket';
import * as pako from 'pako';

protected async onCustomMessage(message: Message, con: connection): Promise<void> {
  let data;
  let text;
  if (message.type === 'utf8') {
    // console.log("Received UTF8: '" + message.utf8Data + "'");
    text = message.utf8Data;
    data = JSON.parse(text);
  } else {
    // binary frames are assumed to carry deflate-compressed JSON
    const binary = message.binaryData;
    text = pako.inflate(binary, {
      to: 'string',
    });
    data = JSON.parse(text);
  }
}
npm i pako && npm i -D @types/pako
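Tying this back to the original question: if the bytea column turns out to hold deflate-compressed XML (an assumption; inspect the first bytes of the buffer to be sure), a minimal sketch with pako would be:

const pako = require('pako');
const fs = require('fs');

// pg is the postgres connection from the question
pg.query('Select data from data_file where id = 123', (err, result) => {
  if (err) return console.log(err);
  // result.rows[0].data is a Buffer, which pako accepts as a Uint8Array
  const xml = pako.inflate(result.rows[0].data, { to: 'string' });
  fs.writeFile('report.jrxml', xml, (er) => {
    if (er) return console.log('an error occurred while saving the file');
    console.log('file saved');
  });
});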
I'm sending an image encoded as base64 through sockets, but decoding is not working: the file that should contain the new image is written as base64 text instead of a JPG file.
encoding socket:
function encode_base64(filename) {
  fs.readFile(path.join(__dirname, filename), function (error, data) {
    if (error) {
      throw error;
    } else {
      console.log(data);
      var dataBase64 = data.toString('base64');
      console.log(dataBase64);
      client.write(dataBase64);
    }
  });
}

rl.on('line', (data) => {
  encode_base64('../image.jpg');
});
decoding socket:
function base64_decode(base64str, file) {
  var bitmap = new Buffer(base64str, 'base64');
  fs.writeFileSync(file, bitmap);
  console.log('****** File created from base64 encoded string ******');
}

client.on('data', (data) => {
  base64_decode(data, 'copy.jpg');
});
// the first few characters in the new file
//k1NRWuGwBGJpmHDTI9VcgOcRgIT0ftMsldCjFJ43whvppjV48NGq3eeOIeeur
Change the encode function as below. Also keep in mind that new Buffer() has been deprecated, so use the Buffer.from() method.
function encode_base64(filename) {
  fs.readFile(path.join(__dirname, filename), function (error, data) {
    if (error) {
      throw error;
    } else {
      // console.log(data);
      var dataBase64 = Buffer.from(data).toString('base64');
      console.log(dataBase64);
      client.write(dataBase64);
    }
  });
}
And decode as below:
function base64_decode(base64Image, file) {
  // the socket delivers a Buffer holding base64 text; convert it to a string
  // and let writeFileSync decode it by passing the 'base64' encoding
  fs.writeFileSync(file, base64Image.toString(), { encoding: 'base64' });
  console.log('******** File created from base64 encoded string ********');
}

client.on('data', (data) => {
  base64_decode(data, 'copy.jpg');
});
You can decode the base64 image using the following method.
EDITED
To strip off the header
let base64String = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgA'; // Not a real image
// Remove header
let base64Image = base64String.split(';base64,').pop();
To write to a file
import fs from 'fs';

fs.writeFile('image.png', base64Image, { encoding: 'base64' }, function (err) {
  if (err) {
    return console.log(err);
  }
  console.log('File created');
});
Note: don't forget the {encoding: 'base64'} here and you will be good to go.
You can use Buffer.from to decode the Base64 and write it to a file using fs.writeFileSync:
const { writeFileSync } = require("fs")
const base64 = "iVBORw0KGgoA..."
const image = Buffer.from(base64, "base64")
writeFileSync("image.png", image)
If you have the Base64 string inside a file, you need to read it into a string first, like:
const { writeFileSync, readFileSync } = require("fs")
const base64 = readFileSync(path, "ascii")
const image = Buffer.from(base64, "base64")
writeFileSync("image.png", image)
It seems that the decoding function base64_decode gets the data as a buffer.
Thus, the encoding argument in new Buffer(base64str, 'base64') is ignored.
(Compare the docs of Buffer.from(buffer) vs Buffer.from(string[, encoding])).
I suggest converting it to a string first:
function base64_decode(base64str, file) {
  // Buffer.from replaces the deprecated new Buffer(); toString() makes the encoding apply
  var bitmap = Buffer.from(base64str.toString(), 'base64');
  fs.writeFileSync(file, bitmap);
  console.log('******** File created from base64 encoded string ********');
}
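To see the difference, here is a tiny hedged demo (the values are illustrative):

const received = Buffer.from('aGVsbG8='); // what the socket hands you: base64 text as raw bytes
console.log(Buffer.from(received).toString()); // prints 'aGVsbG8=': the bytes are copied, no decoding happens
console.log(Buffer.from(received.toString(), 'base64').toString()); // prints 'hello': with a string input, 'base64' applies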
I am trying to send a SOAP request with an attachment. Everything works fine except that the attachment I send is always zero bytes. The SOAP server accepts a Base64-encoded file, and I had managed to do it in Java using the code
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
outputStream.writeTo(fileOutputStream);
Base64.encode(outputStream.toByteArray()); // argument passed to the function which sends this to the SOAP API
I want to replicate the same with Node, but I am unable to do so. Below is the function I am using. I am reading some files from the client and trying to send them to the SOAP API. The parts responsible for reading the data and attaching it are marked with comments; the rest is just for reference.
function createSoapEntryWithAtt(req, response) {
  var form = new formidable.IncomingForm();
  form.parse(req, function (err, fields, files) {
    let filesArr = [];
    for (objkeys in files) {
      filesArr.push(files[objkeys]);
    }
    return Promise.all(filesArr.map(item => {
      return new Promise((res, rej) => {
        var oldpath = item.path;
        var newpath = 'C:/user/' + item.name;
        // ---- marked: reading the file and building the byte array ----
        var data = fs.readFileSync(oldpath).toString('base64');
        let result = [];
        for (var i = 0; i < data.length; i += 2) // trying to create a 64bit byte array
          result.push('0x' + data[i] + '' + data[i + 1]);
        // ---- end marked section ----
        console.log(result);
        if (data)
          res({ [`${item.name}`]: result });
        rej("Error occurred");
      });
    })).then(data => {
      let url = config.url;
      var credentials = {
        AuthenticationInfo: {
          userName: "user",
          password: "passwd"
        }
      };
      let args = {
        Notes: "Testing From Node App",
      };
      let count = 0;
      for (index in data) {
        if (count <= 3) {
          // ---- marked: attaching the file that was read ----
          for (keys in data[index]) {
            // console.log(data[index][keys])
            args[`Attachment${++count}_Name`] = keys;
            args[`Attachment${++count}_Data`] = data[index][keys]; // attaching the file read
          }
          // ---- end marked section ----
        }
      }
      soap.createClient(url, function (err, client) {
        client.addSoapHeader(credentials);
        client.CreateWorkInfo(args, function (err, res) {
          if (err) {
            console.log("Error is ----->" + err);
          } else {
            console.log("Response is -----> " + res);
            response.end();
          }
        });
      });
    });
  });
}
Please ignore this question; thanks, and sorry if anyone wasted time on it. The error was a careless mistake on my side, in the lines args[`Attachment${++count}_Name`] = keys and args[`Attachment${++count}_Data`] = data[index][keys]. Because I am incrementing count in both lines, there is a mismatch: the attachment name is stored under Attachment1 while its data is stored under Attachment2, so the name never has any data attached to it.
I'm trying to create a new CSV file in a directory and store the data of a variable inside that CSV file:
handleRequest(req, res) {
  var svcReq = req.body.svcReq;
  var csvRecData = JSON.stringify(req.body);
  console.log("DATA WE ARE GETTING IS: " + csvRecData);
  if (svcReq == 'invDetails') {
    var checking = fs.writeFile('../i1/csvData/myCsvFile.csv', csvRecData, function (err) {
      if (err) throw err;
      console.log("Saved! got the file");
      console.log("Checking csvData:" + checking);
    });
  }
}
I don't see any errors in the console or terminal but the file is not generated. What is my issue?
The path in writeFile must point to a valid location; you cannot simply use '../i1/csvData/...' relative to your current file. First check your current directory using the built-in path module (it ships with Node, so there is nothing to install):
var path = require('path');
var fs = require('fs');

console.log(path.join(__dirname));

fs.writeFile(path.join(__dirname, "test123.csv"), "Sally Whittaker,2018,McCarren House,312,3.75!", function (err) {
  if (err) {
    return console.log(err);
  }
  console.log("The file was saved!");
});
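If the file really must land in another directory, a hedged variant is to create that directory first (the folder name here is illustrative):

const dir = path.join(__dirname, 'csvData');
fs.mkdirSync(dir, { recursive: true }); // no-op if it already exists (Node 10+)
fs.writeFileSync(path.join(dir, 'myCsvFile.csv'), csvRecData);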
I'm trying to create an image file from chunks of ArrayBuffers.
var all = fs.createWriteStream("out." + imgtype);
for (i = 0; i < end; i++) {
  all.write(picarray[i]);
}
all.end();
where picarray contains ArrayBuffer chunks. However, I get the error TypeError: Invalid non-string/buffer chunk.
How can I convert ArrayBuffer chunks into an image?
Have you tried first converting it into a Node.js Buffer? (Buffer is the native Node.js interface, whereas ArrayBuffer is the browser interface and is not fully supported for Node.js write operations.)
Something along the lines of this should help:
var all = fs.createWriteStream("out." + imgtype);
for (i = 0; i < end; i++) {
  // copy the ArrayBuffer into a Node.js Buffer (Buffer.from replaces the deprecated new Buffer())
  var buffer = Buffer.from(new Uint8Array(picarray[i]));
  all.write(buffer);
}
all.end();
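On newer Node versions you can likely skip the Uint8Array step, since Buffer.from accepts an ArrayBuffer directly (creating a view over the same memory rather than a copy):

all.write(Buffer.from(picarray[i]));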
After spending some time I got this; it worked perfectly for me.
As mentioned by @Nick, you will have to convert the ArrayBuffer you received from the browser into a Node.js Buffer.
var readWriteFile = function (req) {
  var fs = require('fs');
  var data = Buffer.from(req); // new Buffer() is deprecated
  fs.writeFile('fileName.png', data, 'binary', function (err) {
    if (err) {
      console.log("There was an error writing the image");
    } else {
      console.log("The file was written");
    }
  });
};
ArrayBuffer is a browser type and is not supported for writing files; we need to convert it to Buffer, the native API of the Node.js runtime. These few lines of code will create the image:
const fs = require('fs');

let data = arrayBuffer; // your image, stored in the arrayBuffer variable
data = Buffer.from(data);
fs.writeFile(`Assets/test.png`, data, err => { // Assets is a folder present in your root directory
  if (err) {
    console.log(err);
  } else {
    console.log('File created successfully!');
  }
});
I want to download a zip file from the internet and unzip it in memory without saving to a temporary file. How can I do this?
Here is what I tried:
var url = 'http://bdn-ak.bloomberg.com/precanned/Comdty_Calendar_Spread_Option_20120428.txt.zip';
var request = require('request'), fs = require('fs'), zlib = require('zlib');

request.get(url, function(err, res, file) {
  if (err) throw err;
  zlib.unzip(file, function(err, txt) {
    if (err) throw err;
    console.log(txt.toString()); // outputs nothing
  });
});
[EDIT]
As suggested, I tried using the adm-zip library and I still cannot make this work:
var ZipEntry = require('adm-zip/zipEntry');

request.get(url, function(err, res, zipFile) {
  if (err) throw err;
  var zip = new ZipEntry();
  zip.setCompressedData(new Buffer(zipFile.toString('utf-8')));
  var text = zip.getData();
  console.log(text.toString()); // fails
});
You need a library that can handle buffers. The latest version of adm-zip will do:
npm install adm-zip
My solution uses the http.get method, since it returns Buffer chunks.
Code:
var file_url = 'http://notepad-plus-plus.org/repository/7.x/7.6/npp.7.6.bin.x64.zip';
var AdmZip = require('adm-zip');
var http = require('http');
http.get(file_url, function(res) {
  var data = [], dataLen = 0;

  res.on('data', function(chunk) {
    data.push(chunk);
    dataLen += chunk.length;
  }).on('end', function() {
    // buffers cannot be resized, so collect the chunks and copy them into one buffer at the end
    var buf = Buffer.alloc(dataLen);
    for (var i = 0, len = data.length, pos = 0; i < len; i++) {
      data[i].copy(buf, pos);
      pos += data[i].length;
    }
    var zip = new AdmZip(buf);
    var zipEntries = zip.getEntries();
    console.log(zipEntries.length);
    for (var i = 0; i < zipEntries.length; i++) {
      if (zipEntries[i].entryName.match(/readme/))
        console.log(zip.readAsText(zipEntries[i]));
    }
  });
});
The idea is to create an array of buffers and concatenate them into a new one at the end. This is due to the fact that buffers cannot be resized.
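For reference, Node's built-in Buffer.concat performs the same concatenation in one call, so the 'end' handler above could start with (same data and dataLen variables):

var buf = Buffer.concat(data, dataLen);
var zip = new AdmZip(buf);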
Update
This is a simpler solution that uses the request module to obtain the response in a buffer, by setting encoding: null in the options. It also follows redirects and resolves http/https automatically.
var file_url = 'https://github.com/mihaifm/linq/releases/download/3.1.1/linq.js-3.1.1.zip';
var AdmZip = require('adm-zip');
var request = require('request');
request.get({ url: file_url, encoding: null }, (err, res, body) => {
  var zip = new AdmZip(body);
  var zipEntries = zip.getEntries();
  console.log(zipEntries.length);

  zipEntries.forEach((entry) => {
    if (entry.entryName.match(/readme/i))
      console.log(zip.readAsText(entry));
  });
});
The body of the response is a buffer that can be passed directly to AdmZip, simplifying the whole process.
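If you want the files on disk as well as in memory, adm-zip can also extract the whole archive (the target folder is illustrative):

zip.extractAllTo('./extracted', /*overwrite*/ true);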
Sadly you can't pipe the response stream into the unzip job the way the Node zlib library allows; you have to cache the data and wait for the end of the response. For big files I suggest piping the response to an fs stream instead, otherwise you will fill up your memory in a blink!
I don't completely understand what you are trying to do, but IMHO this is the best approach. You should keep your data in memory only for the time you really need it, and then stream it to the csv parser.
If you want to keep all your data in memory, you can replace the csv parser method fromPath with from, which takes a buffer instead, and return unzipped directly in getData.
You can use adm-zip (as @mihai said) instead of node-zip; just pay attention, because adm-zip is not yet published on npm, so you need:
$ npm install git://github.com/cthackers/adm-zip.git
N.B. Assumption: the zip file contains only one file
var request = require('request'),
    fs = require('fs'),
    csv = require('csv'),
    NodeZip = require('node-zip');

function getData(tmpFolder, url, callback) {
  var tempZipFilePath = tmpFolder + new Date().getTime() + Math.random();
  var tempZipFileStream = fs.createWriteStream(tempZipFilePath);
  request.get({
    url: url,
    encoding: null
  }).on('end', function() {
    fs.readFile(tempZipFilePath, 'base64', function (err, zipContent) {
      var zip = new NodeZip(zipContent, { base64: true });
      Object.keys(zip.files).forEach(function (filename) {
        var tempFilePath = tmpFolder + new Date().getTime() + Math.random();
        var unzipped = zip.files[filename].data;
        fs.writeFile(tempFilePath, unzipped, function (err) {
          callback(err, tempFilePath);
        });
      });
    });
  }).pipe(tempZipFileStream);
}
getData('/tmp/', 'http://bdn-ak.bloomberg.com/precanned/Comdty_Calendar_Spread_Option_20120428.txt.zip', function (err, path) {
  if (err) {
    return console.error('error: %s', err.message);
  }
  var metadata = [];
  csv().fromPath(path, {
    delimiter: '|',
    columns: true
  }).transform(function (data) {
    // do things with your data
    if (data.NAME[0] === '#') {
      metadata.push(data.NAME);
    } else {
      return data;
    }
  }).on('data', function (data, index) {
    console.log('#%d %s', index, JSON.stringify(data, null, ' '));
  }).on('end', function (count) {
    console.log('Metadata: %s', JSON.stringify(metadata, null, ' '));
    console.log('Number of lines: %d', count);
  }).on('error', function (error) {
    console.error('csv parsing error: %s', error.message);
  });
});
If you're on macOS or Linux, you can use the unzip command to unzip from stdin.
In this example I'm reading the zip file from the filesystem into a Buffer object, but it works with a downloaded file as well:
// Get a Buffer with the zip content
var fs = require("fs")
, zip = fs.readFileSync(__dirname + "/test.zip");
// Now the actual unzipping:
var spawn = require('child_process').spawn
, fileToExtract = "test.js"
// -p tells unzip to extract to stdout
, unzip = spawn("unzip", ["-p", "/dev/stdin", fileToExtract ])
;
// Write the Buffer to stdin
unzip.stdin.write(zip);
// Handle errors
unzip.stderr.on('data', function (data) {
  console.log("There has been an error: ", data.toString("utf-8"));
});
// Handle the unzipped stdout
unzip.stdout.on('data', function (data) {
  console.log("Unzipped file: ", data.toString("utf-8"));
});
unzip.stdin.end();
Which is actually just the node version of:
cat test.zip | unzip -p /dev/stdin test.js
EDIT: It's worth noting that this will not work if the input zip is too big to be read in one chunk from stdin. If you need to read bigger files, and your zip file contains only one file, you can use funzip instead of unzip:
var unzip = spawn("funzip");
If your zip file contains multiple files (and the file you want isn't the first one), I'm afraid you're out of luck. unzip needs to seek in the .zip file, since a zip archive is a container whose index (the central directory) sits at the end, so from a pipe it may only manage to unzip the last file in it. In that case you have to save the file temporarily (node-temp comes in handy).
Two days ago the module node-zip was released; it is a wrapper for JSZip, the JavaScript-only Zip implementation.
var NodeZip = require('node-zip')
  , zip = new NodeZip(zipBuffer.toString("base64"), { base64: true })
  , unzipped = zip.files["your-text-file.txt"].data;
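As a short follow-up sketch, the extracted text can be written back to disk (zipBuffer and the entry name are assumed from the snippet above):

require('fs').writeFileSync('your-text-file.txt', unzipped);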