There are a lot of solutions based on the fetch API or XMLHttpRequest, but they return CORS or same-origin-policy errors.
The File/FileReader API works out of the box, but only for files chosen by the user via a file input (because that is the only way to get them as a File object).
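For reference, here is the pattern that does work, as a minimal sketch (it assumes an <input type="file" id="file-input"> element on the page):
// Minimal sketch: read a user-selected file with FileReader.
// Assumes an <input type="file" id="file-input"> element exists on the page.
document.getElementById('file-input').addEventListener('change', function (event) {
    const myfile = event.target.files[0]; // a File object, only obtainable via user selection
    const fr = new FileReader();
    fr.onload = function () {
        console.log(fr.result); // the file contents as text
    };
    fr.readAsText(myfile);
});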
Is there a way to do something simple and minimal like
const myfile = new File('relative/path/to/file') //just use a path
const fr = new FileReader();
fr.readAsText(myfile);
Thanks
Try the following JS. It uses fs to read the file and, if it exists, turns it into a string and outputs it to the console. You can change it however you'd like.
var fs = require('fs');

fs.readFile('test.txt', 'utf8', function (err, data) {
    if (err) {
        return console.log(err);
    }
    console.log(data);
});
I'm looking for the best way to send image files to my server using Apollo Express and Node.
Getting the information there doesn't seem to be an issue: I convert the object into a string, but I can't figure out how to convert it back into a regular file object to store away.
What I have so far:
JS - let buffer = await toBase64(file);
Then, through Apollo Server:
Node - let buffer = Buffer.from(args.image, 'base64');
This gives me a Buffer. I'm unsure how to proceed with NodeJS to convert this back to a file object.
Thanks
I hope this will be helpful for you:
const file = new File([
    new Blob(["decoded_base64_String"])
], "output_file_name");
You can use one of the various write or writeFile methods which accept a Buffer.
const fs = require("fs");
let buffer = Buffer.from(
"iVBORw0KGgoAAAANSUhEUgAAAAgAAAAGCAIAAABxZ0isAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAAQSURBVBhXY/iPAwygxP//AAjcj3EdtT3BAAAAAElFTkSuQmCC",
"base64"
);
fs.writeFile("pic.png", buffer, (err) => {
if (err) throw err;
console.log("The file has been saved!");
});
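If you prefer promises over callbacks, the same write can be done with the promise-based fs API (a minimal sketch, reusing the buffer from above):
// Minimal sketch: the same write with the promise-based fs API (Node 10+).
const fsPromises = require("fs").promises;

fsPromises.writeFile("pic.png", buffer)
    .then(() => console.log("The file has been saved!"))
    .catch((err) => console.error(err));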
There is a pretty nice example available for uploading large files to S3 via the aws-sdk-js library, but unfortunately it uses Node.js fs.
Is there a way to achieve the same thing in plain JavaScript? There is also a nice Gist that breaks the large file into smaller chunks, but it is still missing the .pipe functionality of Node.js fs, which is needed to pass a stream to the aws-sdk-js upload function. Here is a relevant code snippet in Node.
var fs = require('fs');
var zlib = require('zlib');
var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body})
    .on('httpUploadProgress', function (evt) {
        console.log('Progress:', evt.loaded, '/', evt.total);
    })
    .send(function (err, data) { console.log(err, data); });
Is there something similar available in plain JS (non-Node.js), usable with Rails?
Specifically, I need an alternative to the following line in plain JS.
var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());
The same link you provided contains an implementation intended for the browser, and it also uses the AWS client SDK:
// Get our File object
var file = $('#file-chooser')[0].files[0];
// Upload the File
var bucket = new AWS.S3({params: {Bucket: 'myBucket'}});
var params = {Key: file.name, ContentType: file.type, Body: file};

bucket.upload(params, function (err, data) {
    $('#results').html(err ? 'ERROR!' : 'UPLOADED.');
});
** EDITS **
Note the documentation for the Body field includes Blob, which means streaming will occur:
Body — (Buffer, Typed Array, Blob, String, ReadableStream)
You can also use the event emitter convention offered by the AWS SDK's ManagedUpload interface if you want to monitor progress. Here is an example:
var managed = bucket.upload(params);

managed.on('httpUploadProgress', function (bytes) {
    console.log('progress', bytes.total);
});

managed.send(function (err, data) {
    $('#results').html(err ? 'ERROR!' : 'UPLOADED.');
});
If you want to read the file from your local system in chunks before you send it to s3.uploadPart, you'll want to do something with Blob.slice, perhaps defining a pipe chain.
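As a rough illustration only (the helper name, bucket, key, and the multipart bookkeeping below are assumptions, not taken from the question), slicing a browser File with Blob.slice and handing each chunk to s3.uploadPart could look something like this:
// Rough sketch: multipart upload of a browser File in 5 MB chunks.
// Assumes an AWS.S3 client (`s3`) and a File object (`file`) already exist;
// `uploadFileInChunks`, `bucket`, and `key` are illustrative names.
async function uploadFileInChunks(s3, file, bucket, key) {
    const partSize = 5 * 1024 * 1024; // S3 parts must be at least 5 MB (except the last one)
    const { UploadId } = await s3.createMultipartUpload({ Bucket: bucket, Key: key }).promise();

    const parts = [];
    for (let start = 0, partNumber = 1; start < file.size; start += partSize, partNumber++) {
        const chunk = file.slice(start, start + partSize); // Blob.slice returns a Blob for this byte range
        const { ETag } = await s3.uploadPart({
            Bucket: bucket,
            Key: key,
            UploadId: UploadId,
            PartNumber: partNumber,
            Body: chunk
        }).promise();
        parts.push({ ETag: ETag, PartNumber: partNumber });
    }

    return s3.completeMultipartUpload({
        Bucket: bucket,
        Key: key,
        UploadId: UploadId,
        MultipartUpload: { Parts: parts }
    }).promise();
}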
I would like my server.js to basically save a string to a .txt file for a history/log on the server.
Since you cannot use PHP or jQuery in server.js, I don't know how to do this, nor has anyone asked the same question.
Do you know how?
Thank you.
First you get the file system library:
var fs = require('fs');
Then, you can just output like this:
fs.writeFile("log.txt", stringText, function(error) {
if(error) throw error; // Handle the error just in case
else console.log("Success!");
});
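Since this is meant as a history/log, note that writeFile replaces the whole file on every call; if you want to keep previous entries, fs.appendFile adds to the end instead (a small sketch):
// Sketch: append a line to the log instead of overwriting it each time.
fs.appendFile("log.txt", stringText + "\n", function (error) {
    if (error) throw error;
    console.log("Appended!");
});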
You can use the fs module.
Something like this will do the job:
let myString = "very very important string";
let fs = require("fs");
// you can use async if you prefer, check the doc
fs.writeFileSync("./myFile.txt", myString);
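With the synchronous call, errors are thrown rather than passed to a callback, so you may want to wrap it (a minimal sketch):
// Sketch: synchronous write with explicit error handling.
try {
    fs.writeFileSync("./myFile.txt", myString);
    console.log("saved");
} catch (err) {
    console.error("could not write file:", err);
}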
I'm pretty new to Node.js, and I am trying to read a file into a variable.
Here is my code.
var fs = require("fs"),
path = require("path"),
util = require("util");
var content;
console.log(content);
fs.readFile(path.join(__dirname,"helpers","test.txt"), 'utf8',function (err,data) {
if (err) {
console.log(err);
process.exit(1);
}
content = util.format(data,"test","test","test");
});
console.log(content);
But every time I run the script I get
undefined and undefined
What am I missing? Help please!
As stated in the comments under your question, Node is asynchronous, meaning the readFile callback has not yet run when your second console.log call executes.
If you move the log statement inside the callback after reading the file, you should see the contents printed:
var fs = require("fs"),
path = require("path"),
util = require("util");
var content;
console.log(content);
fs.readFile(path.join(__dirname, "helpers", "test.txt"), 'utf8', function (err, data) {
if (err) {
console.log(err);
process.exit(1);
}
content = util.format(data, "test", "test", "test");
console.log(content);
});
Even though this will solve your immediate problem, without an understanding of the async nature of Node you're going to encounter a lot of issues.
This similar Stack Overflow answer goes into more detail about what other alternatives are available.
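One of those alternatives is the promise-based fs API, which lets you wait for the read with async/await instead of nesting logic inside callbacks (a minimal sketch of the same read):
// Sketch: the same read using fs.promises and async/await (Node 10+).
const fsPromises = require("fs").promises;
const path = require("path");
const util = require("util");

async function readContent() {
    const data = await fsPromises.readFile(path.join(__dirname, "helpers", "test.txt"), "utf8");
    return util.format(data, "test", "test", "test");
}

readContent()
    .then((content) => console.log(content))
    .catch((err) => console.error(err));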
The following code snippet uses ReadStream. It reads your data in separate chunks; if your data file is small, it will read the data in a single chunk. However, this is an asynchronous task, so if you want to perform any task with your data, you need to do it within the ReadStream portion.
var fs = require('fs');

var readStream = fs.createReadStream(__dirname + '/readMe.txt', 'utf8');
/* include the file directory and file name instead of <__dirname + '/readMe.txt'> */
var content;

readStream.on('data', function (chunk) {
    content = chunk;
    performTask();
});

function performTask() {
    console.log(content);
}
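Note that each 'data' event above overwrites content with the latest chunk; for files larger than a single chunk you would normally collect the chunks and run your task on the 'end' event instead (a small sketch):
// Sketch: accumulate all chunks and run the task once the stream ends.
var chunks = [];
readStream.on('data', function (chunk) {
    chunks.push(chunk);
});
readStream.on('end', function () {
    content = chunks.join(''); // chunks are strings because the stream was opened with 'utf8'
    performTask();
});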
There is also another, easier way using a synchronous task. As this is a synchronous task, you do not need to worry about execution order: unlike the asynchronous version, the program only moves to the next line after the current line has finished executing.
A clearer and more detailed answer is provided at the following link:
Get data from fs.readFile
var fs = require('fs');

var content = fs.readFileSync('readMe.txt', 'utf8');
/* include your file name instead of <'readMe.txt'> and make sure the file is in the same directory. */
Or, just as easily, using the doasync package:
const fs = require('fs');
const doAsync = require('doasync');
doAsync(fs).readFile('./file.txt')
    .then((data) => console.log(data));
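On newer Node versions you can get the same promise-based behaviour without the extra dependency, for example with util.promisify (a minimal sketch):
// Sketch: promisify fs.readFile with the built-in util module instead of doasync.
const fs = require('fs');
const { promisify } = require('util');
const readFileAsync = promisify(fs.readFile);

readFileAsync('./file.txt', 'utf8')
    .then((data) => console.log(data));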
I have PDF buffer data coming from my Node.js backend, something like:
"%PDF-1.5
%����..."
How can I save this to a file using the browser's JavaScript?
You can save it as a file on the Node.js end and then download the file using JavaScript. That is probably the best way, as browser JavaScript does not have direct write access to the file system.
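That said, if the goal is just to let the user save the PDF bytes the backend already sent, a common browser-side pattern is to wrap the data in a Blob and trigger a download link (a minimal sketch; pdfData is a placeholder for the binary response body from your backend):
// Sketch: offer the received PDF bytes as a download in the browser.
// `pdfData` is assumed to be the binary response body (e.g. an ArrayBuffer or Uint8Array).
const blob = new Blob([pdfData], { type: 'application/pdf' });
const url = URL.createObjectURL(blob);

const link = document.createElement('a');
link.href = url;
link.download = 'document.pdf';
document.body.appendChild(link);
link.click();

// Clean up the temporary object URL and the link element.
document.body.removeChild(link);
URL.revokeObjectURL(url);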
Sorry for the late reply. You can do the piping like this:
var fs = require('fs');
var request = require('request');

// Wrapped in a function so that `options` and `callback` are defined;
// in the original snippet they came from surrounding code that was not shown.
function savePdf(options, callback) {
    var filepath = 'test.pdf';
    var dest = fs.createWriteStream(filepath);

    request(options, function (error, response, body) {
        if (error)
            throw new Error(error);
    }).on('end', function () {
        return callback(filepath);
    }).on('error', function (err) {
        return callback(err);
    }).pipe(dest);
}
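For completeness, a call to the function above could look like this (hypothetical usage; the URL is just a placeholder):
// Hypothetical usage of the snippet above; the URL is a placeholder.
savePdf({ url: 'https://example.com/report.pdf' }, function (result) {
    // `result` is the saved file path on success, or an Error on failure
    console.log('done:', result);
});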