I have PDF buffer data coming from my Node.js backend, something like:
"%PDF-1.5
%����..."
How can I save this into a file using the browser's JavaScript?
You can save it as a file on the Node.js end and then have the browser download that file. That is probably the best way, as JavaScript running in the browser cannot write directly to the local file system.
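For example, a rough sketch of that approach on the Node.js side, assuming an Express server (the route name, file name, and pdfBuffer placeholder are all just illustrations):
const fs = require('fs');
const express = require('express');

const app = express();
// Placeholder for the PDF buffer you already have on the backend.
const pdfBuffer = Buffer.from('%PDF-1.5\n');

app.get('/report.pdf', (req, res) => {
  // Save the buffer to disk first...
  fs.writeFile('report.pdf', pdfBuffer, (err) => {
    if (err) return res.status(500).send('Could not write file');
    // ...then send it for download; res.download sets Content-Disposition.
    res.download('report.pdf');
  });
});

app.listen(3000);
On the page, a plain link (or window.location) pointing at /report.pdf is then enough to trigger the download.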
Sorry for the late reply. You can pipe the response straight into a write stream like this (using the request package; fs and request need to be required):
const fs = require('fs');
const request = require('request');

function downloadPdf(options, callback) {
  const filepath = 'test.pdf';
  const dest = fs.createWriteStream(filepath);
  request(options, function (error, response, body) {
    if (error)
      throw new Error(error);
  }).on('end', function () {
    return callback(null, filepath);
  }).on('error', function (err) {
    return callback(err);
  }).pipe(dest);
}
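It could then be called like this (the URL is just a placeholder):
downloadPdf({ url: 'http://example.com/file.pdf' }, function (err, filepath) {
  if (err) return console.error(err);
  console.log('Saved to ' + filepath);
});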
I want to be able to:
- edit the data of a .dat file on my computer for a website
- pull data from the file to use later on
I know a tiny bit about JavaScript and have heard that JavaScript cannot directly edit databases.
Is a .dat file on my computer a database?
I have done a few things in JavaScript for websites, but I haven't built anything complicated entirely on my own. I have created some websites before and have a basic understanding of HTML and CSS.
Please phrase your response as simply as possible. Explain the meaning of any complicated but necessary terms.
You need a server-side script, such as PHP or Node.js, to access the filesystem of the server.
Here is a Node.js example.
let fs = require('fs');
Appending to a file:
fs.appendFile('mynewfile1.txt', 'Hello content!', function (err) {
  if (err) throw err;
  console.log('Saved!');
});
Deleting a file:
fs.unlink('mynewfile2.txt', function (err) {
  if (err) throw err;
  console.log('File deleted!');
});
Reading a file:
fs.readFile('demofile1.html', function (err, data) {
  if (err) throw err;
  console.log(data); // data is a Buffer here, since no encoding was passed to readFile
});
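To connect this to a website, the page's JavaScript talks to such a script over HTTP rather than touching the file directly. A minimal sketch, assuming Node's built-in http module and a file named data.dat (the file name and port are just examples):
const http = require('http');
const fs = require('fs');

// GET returns the current contents of data.dat; POST appends the request body to it.
http.createServer((req, res) => {
  if (req.method === 'GET') {
    fs.readFile('data.dat', 'utf8', (err, data) => {
      if (err) { res.writeHead(500); return res.end('read error'); }
      res.end(data);
    });
  } else if (req.method === 'POST') {
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
      fs.appendFile('data.dat', body, (err) => {
        if (err) { res.writeHead(500); return res.end('write error'); }
        res.end('saved');
      });
    });
  } else {
    res.writeHead(405);
    res.end();
  }
}).listen(3000);
The website's JavaScript would then request these URLs (for example with fetch) instead of reading or writing the file itself.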
There are a lot of solutions based on the fetch API or XMLHttpRequest, but they return CORS or same-origin-policy errors.
The File/FileReader API works out of the box, but only for files chosen by the user via a file input (because that is the only way to get them as a File object).
Is there a way to do something simple and minimal like
const myfile = new File('relative/path/to/file') //just use a path
const fr = new FileReader();
fr.readAsText(myfile);
Thanks
Try the following JS (it uses fs, so it runs under Node.js): it reads the file and, if the read succeeds, logs the contents to the console as a UTF-8 string. You can change it up however you'd like.
var fs = require('fs');
fs.readFile('test.txt', 'utf8', function (err, data) {
  if (err) {
    return console.log(err);
  }
  console.log(data);
});
I want to overwrite or clear out the contents of a CSV file before I append new data. Is there a method for this?
Just write to the file using fs.writeFile; it will overwrite an existing file (if one exists).
const fs = require('fs');
fs.writeFile('message.csv', 'new content', (err) => {
  if (err) throw err;
  console.log('The file has been saved!');
});
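If you just want to clear the file rather than write new content straight away, either of these does the job as well (both are standard fs calls; pick one):
const fs = require('fs');

// Option 1: overwrite the file with an empty string.
fs.writeFile('message.csv', '', (err) => {
  if (err) throw err;
  console.log('File cleared');
});

// Option 2: truncate the file to zero bytes.
fs.truncate('message.csv', 0, (err) => {
  if (err) throw err;
  console.log('File truncated');
});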
I can't figure out why these frisby tests won't run!
Basically I'm trying to import JSON from a file and check it against the response from a request. The test runner doesn't seem to find any tests when I run this file.
Could anyone suggest a better way to do this? I'm thinking about handling the file reading differently. I know about readFileSync(), but I don't want to use that if I don't have to. Any help would be appreciated.
function readContent(callback, url, file) {
  fs.readFile(file, 'utf8', function (err, content) {
    if (err) return callback(err)
    var data = JSON.parse(content)
    callback(null, data)
  })
}
readContent(function (err, content) {
  frisby.create('Testing API')
    .get(url)
    .expectStatus(200)
    .expectBodyContains(content)
    .toss()
}, url, file)
Here's one I prepared earlier:
// Read a JSON file from disk, and compare its contents with what comes back from the API.
var fs = require('fs');
var path = require('path');
var frisby = require('frisby');

fs.readFile(path.resolve(__dirname, 'GET_ReferenceTypes.json'), 'utf-8', function (error, data) {
  if (error) throw error;
  frisby.create('GET ReferenceTypes, inside readFile callback')
    // URL is the API base address, defined elsewhere in the test file.
    .get(URL + 'ReferenceTypes?requestPersonId=2967&id=99')
    .expectStatus(200)
    // JSON.parse() is required to convert the string into a proper JSON object for comparison.
    // The .replace() strips the BOM character from the beginning of the unicode file.
    .expectJSON(JSON.parse(data.replace(/^\uFEFF/, '')))
    .toss();
});
Double check the encoding on your JSON file, because this whole thing comes apart without the .replace() call.
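If you want to confirm whether the BOM is the culprit, a quick check (reusing the fs and path requires from above) could be:
// Logs true when the decoded file starts with the U+FEFF BOM character.
fs.readFile(path.resolve(__dirname, 'GET_ReferenceTypes.json'), 'utf-8', function (error, data) {
  if (error) throw error;
  console.log('Starts with BOM:', data.charCodeAt(0) === 0xFEFF);
});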
I'm using this package elsewhere and it's working just fine; however, in one particular example with one XML file I'm getting "undefined" errors.
Example:
var fs = require('fs');
var parseString = require('xml2js').parseString;

fs.readFile('./XML/theXMLfile13mb.xml', 'ascii', function (err, data) {
  if (err) {
    console.log("Could not open file " + err);
    process.exit(1);
  }
  parseString(data, function (err, result) {
    console.log(result); // Returns undefined
    var json1 = JSON.stringify(result); // Gives an error
    var json = JSON.parse(json1);
  });
});
The xml2js docs don't really mention how this might be possible or what it might mean. I've tried other XML files and they work fine. This particular XML file is no bigger than the others, nor does it appear to be any less intact (it opens fine in my browser and all the data is presented as expected).
Any ideas on how I could troubleshoot this?
You need to convert the data from a Buffer to a string. Use this:
parseString(data.toString(), function (err, result) {
Instead of:
parseString(data, function (err, result) {
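For reference, a self-contained sketch of the fixed call might look like this (note that data only arrives as a Buffer when readFile is called without an encoding argument):
var fs = require('fs');
var parseString = require('xml2js').parseString;

// No encoding argument, so readFile hands back a Buffer.
fs.readFile('./XML/theXMLfile13mb.xml', function (err, data) {
  if (err) throw err;
  // Convert the Buffer to a string before handing it to xml2js.
  parseString(data.toString(), function (err, result) {
    if (err) throw err;
    console.log(JSON.stringify(result));
  });
});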