node.js unable to serve image when it is being overwritten - javascript

I have a node.js app that periodically polls images and stores them on the filesystem.
The problem is, while node.js is overwriting the images, anyone visiting the website at that moment sees blank images everywhere (because the images are being overwritten at that instant).
This only happens for the few seconds it takes to poll the images, but it is annoying. Is there any way to keep serving the image while we are overwriting it?
Code to save/overwrite image:
// This method saves a remote path into a file name.
// It will first check if the path has something to download
function saveRemoteImage(path, fileName)
{
    isImagePathAvailable(path, function(isAvailable)
    {
        if (isAvailable)
        {
            console.log("image path %s is valid. download now...", path);
            console.log("Downloading image file from %s -> %s", path, fileName);
            var ws = fs.createWriteStream(fileName);
            ws.on('error', function(err) { console.log("ERROR DOWNLOADING IMAGE FILE: " + err); });
            request(path).pipe(ws);
        }
        else
        {
            console.log("image path %s is invalid. do not download.", path);
        }
    });
}
Code to serve image:
fs.exists(filePath, function(exists)
{
    if (exists)
    {
        // serve file
        var stat = fs.statSync(filePath);
        res.writeHead(200, {
            'Content-Type': 'image/png',
            'Content-Length': stat.size
        });
        var readStream = fs.createReadStream(filePath);
        readStream.pipe(res);
        return;
    }
});

I'd suggest writing the new version of the image to a temporary file:
var ws = fs.createWriteStream(fileName + '.tmp');
var temp = request(path).pipe(ws);
and renaming it when the file is entirely downloaded:
temp.on('finish', function() {
    fs.rename(fileName + '.tmp', fileName, function(err) {
        if (err) { console.log("ERROR RENAMING TEMP FILE: " + err); }
    });
});
We use the 'finish' event, which is fired when all the data has been written to the underlying system, i.e. the filesystem.
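Putting it together, a minimal sketch of saveRemoteImage with the temp-file-and-rename step (same fs, request and isImagePathAvailable helpers as in the question):
function saveRemoteImage(path, fileName)
{
    isImagePathAvailable(path, function(isAvailable)
    {
        if (!isAvailable)
        {
            console.log("image path %s is invalid. do not download.", path);
            return;
        }
        var tmpName = fileName + '.tmp';
        var ws = fs.createWriteStream(tmpName);
        ws.on('error', function(err) { console.log("ERROR DOWNLOADING IMAGE FILE: " + err); });
        // 'finish' fires once all data has been flushed to the temp file;
        // only then is the old image replaced, so readers never see a partial file.
        ws.on('finish', function() {
            fs.rename(tmpName, fileName, function(err) {
                if (err) { console.log("ERROR RENAMING TEMP FILE: " + err); }
            });
        });
        request(path).pipe(ws);
    });
}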

Maybe it is better to:
serve the old version of the file while downloading;
download the new file to a temporary file (say _fileName, for example);
rename the file after downloading, thus overwriting the original file.

Related

why I get ENONET: no such file or directory when the file exists? node.js

I'm doing a crash course on YouTube before I dive deeper into React and Node.js.
I'm trying to check the URL in the code and display the right page,
so if the URL ends with '/about' I display the about HTML.
The problem is that it's not displaying the page.
The file exists.
It says: ENOENT: no such file or directory.
This is the code:
const http = require('http')
const path = require('path')
const fs = require('fs')

const server = http.createServer((req, res) => {
    let filePath = path.join(__dirname, 'public', req.url === '/' ? 'index.html' : req.url)

    // Checking the extension
    let extName = path.extname(filePath)

    // Content type
    let contentType = 'text/html'

    // Check the extension and set the content type
    switch (extName) {
        case '.js':
            contentType = 'text/javascript'
            break
        case '.css':
            contentType = 'text/css'
            break
        case '.json':
            contentType = 'application/json'
            break
        case '.png':
            contentType = 'image/png'
            break
        case '.jpg':
            contentType = 'image/jpg'
            break
    }

    console.log(contentType, "here")

    // Read file
    fs.readFile(filePath, (err, content) => {
        console.log(filePath)
        if (err) {
            console.log(err.message)
            if (err.code == 'ENONET') {
                // Page not found
                fs.readFile(path.join(__dirname, 'public', '404.html'), (err, content) => {
                    res.writeHead(200, {'Content-Type': 'text/html'})
                    res.end(content, 'utf-8')
                })
            } else {
                // Some server error
                res.writeHead(500)
                res.end(`Server Error ${err.code}`)
            }
        } else {
            // Success
            res.writeHead(200, {'Content-Type': contentType})
            res.end(content, 'utf-8')
        }
    })
})

// Use the port provided by the host environment, or default to 5000
const PORT = process.env.PORT || 5000

// Start listening on the port; the callback logs once the server is up
server.listen(PORT, () => console.log(`Server Listening on port ${PORT}`))
I know that the problem is that I'm missing the extension in the filePath name.
But the thing is, the person I'm learning from has the exact same code, and it works for him.
This code could work only if the file about exists in the public directory (with no file extension on it). So, rather than discuss how it worked for someone else, we should discuss how this code can work for you or what you would have to change in it to make it work for you.
Your code expects the path that is passed in the request to be an entire filename in your public directory.
So, when you send a request in for /about, you try to do fs.readFile("about", ...). That's the file that has to exist.
If you want /about to serve a file named /about.html in your file system, then you have to check if extName is empty and, if so, add ".html" to the filename to give it a default extension. Or, in some cases, you might check for more than one possibility in the file system: if about isn't found, then check for about.html (a sketch of that fallback follows the snippet below).
You could add a default .html path by changing this part:
// Checking the extension
let extName = path.extname(filePath);
to this:
// Checking the extension
let extName = path.extname(filePath);
if (!extName) {
    extName = ".html";
    filePath += ".html";
}
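If you prefer the fallback behaviour instead (try about as-is, then about.html), a minimal sketch that could replace the snippet above; fs.existsSync is used here for brevity, an async fs.access check works just as well:
// Hypothetical fallback: if the requested path has no extension and does not
// exist on disk, retry with ".html" appended (so /about serves about.html).
if (!path.extname(filePath) && !fs.existsSync(filePath)) {
    filePath += '.html';
}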
FYI, there's a misspelling too. Change this:
if (err.code == 'ENONET')
to this:
if (err.code == 'ENOENT')
Another word of caution: your server may be vulnerable to requests that put ../ into the path and can then access files outside of your public directory. Most browsers will stop that, but scripted requests could do it.
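A minimal guard (a sketch, not a complete hardening pass) is to resolve the requested path against the public directory inside the request handler and refuse anything that escapes it:
const publicDir = path.join(__dirname, 'public');
let filePath = path.join(publicDir, req.url === '/' ? 'index.html' : req.url);

// Reject any request whose resolved path escapes publicDir (e.g. contains ../ segments).
if (!path.normalize(filePath).startsWith(publicDir + path.sep)) {
    res.writeHead(403, {'Content-Type': 'text/plain'});
    return res.end('Forbidden');
}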

res.download file from Amazon S3

I am trying to download a file from outside of my root directory; however, every time I try, it tries to take it from the root directory. I will need users of my site to be able to download these files.
The file has initially been uploaded to Amazon S3 and I have accessed it using the getObject function.
Here is my code:
app.get('/test_script_api', function(req, res){
    var fileName = req.query.File;
    s3.getObject(
        { Bucket: "bucket-name", Key: fileName },
        function(error, s3data){
            if (error != null) {
                console.log("Failed to retrieve an object: " + error);
            } else {
                // I have tried passing the S3 data but it asks for a string
                res.download(s3data.Body);
                // So I have tried just passing the file name & an absolute path
                res.download(fileName);
            }
        }
    );
});
This returns the following error:
Error: ENOENT: no such file or directory, stat '/home/ec2-user/environment/test2.txt'
When I enter an absolute path it just appends this onto the end of /home/ec2-user/environment/
How can I change the directory res.download is trying to download from?
Is there an easier way to download your files from Amazon S3?
Any help would be much appreciated here!
I had the same problem and I found this answer:
NodeJS How do I Download a file to disk from an aws s3 bucket?
Based on that, you need to use createReadStream() and pipe().
Read more about stream.pipe() here: https://nodejs.org/en/knowledge/advanced/streams/how-to-use-stream-pipe/
res.attachment() will set the headers for you.
-> https://expressjs.com/en/api.html#res.attachment.
This code should work for you (based on the answer in the above link):
app.get('/test_script_api', function (req, res) {
    var fileName = req.query.File;
    res.attachment(fileName);
    var file = s3.getObject({
        Bucket: "bucket-name",
        Key: fileName
    }).createReadStream()
        .on("error", error => {
        });
    file.pipe(res);
});
In my case, on the client side, I used
This made sure that the file is downloading.
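The original client-side snippet isn't shown above; as one hedged example (an assumption, not necessarily what the poster used), simply pointing the browser at the endpoint is enough, since res.attachment() sets Content-Disposition so the browser saves the response as a file:
// Hypothetical client-side trigger for the /test_script_api route above.
function downloadFile(fileName) {
    window.location.href = '/test_script_api?File=' + encodeURIComponent(fileName);
}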

socket.io, node.js forwarding image from server to client

I want to receive an image via socket.io on node.js and forward it to a client (browser), but the image sent in the message to the browser is not recognised and therefore not shown.
However, when I first save the message/image on the node.js server and load the saved file again to forward it, it works fine. I can also open the jpeg file on the server from the file system without a problem. Sending a different jpeg directly from the server also works as expected.
socket.on('image', function(msg) {
    var fileName = 'clientImage.jpg';
    // First save the file
    fs.writeFile(fileName, msg.buffer, function() {});
    // reload the image and forward it to the client
    fs.readFile(__dirname + '/clientImage.jpg', function(err, buf) {
        socket.emit('serverImage', {image: true, buffer: buf});
    });
});
If I simplify the function to forward the received message (msg) without the "fs" workaround, like:
socket.emit('serverImage', {image: true, buffer: msg.buffer});
or in the simplest way:
socket.emit('serverImage', msg);
the message is not recognised as an image in the browser and the client does not fire the "onload" event for the Image.
Client code (works with jpeg files fine):
socket.on('serverImage', function(msg) {
    var blob = new Blob([msg.buffer], {type: 'image/jpeg'});
    var url = URL.createObjectURL(blob);
    var limg = new Image();
    limg.onload = function () {
        console.log(' -- image on load!');
        rcontext.drawImage(this, 0, 0);
        URL.revokeObjectURL(url);
    };
    limg.src = url;
});
Is there a way the message can be adapted/converted somehow, i.e. re-encoded, so that it is recognised directly without the "fs" library, or any other suggestions?
many thanks!
Many thanks for the responses.
I did further tests and found a workaround/solution: use an additional buffer variable specifying the type before the socket.emit:
var nbuffer = new Buffer(msg.buffer,'image/jpeg');
socket.emit('serverImage', {image: true, buffer: nbuffer});
With this additional step, the browser now recognises the message as an image.
Many thanks for your help!
writeFile is asynchronous. It takes time to write a file to the disk. You passed it a callback function, but it's empty. Re-read the image inside that callback function.
// First save the file
fs.writeFile(fileName, msg.buffer, function() {
    // When writing is done (that's the important part),
    // reload the image and forward it to the client
    fs.readFile(__dirname + '/clientImage.jpg', function(err, buf) {
        socket.emit('serverImage', {image: true, buffer: buf});
    });
});
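If you want to avoid the disk round trip entirely, a minimal sketch (assuming msg.buffer arrives as an ArrayBuffer, typed array, or Buffer) that normalizes it to a Node Buffer before re-emitting, using Buffer.from rather than the deprecated new Buffer:
socket.on('image', function(msg) {
    // Normalize whatever arrived into a Node Buffer before forwarding it.
    var buf = Buffer.isBuffer(msg.buffer) ? msg.buffer : Buffer.from(msg.buffer);
    socket.emit('serverImage', {image: true, buffer: buf});
});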

Generate Download URL After Successful Upload

I have successfully uploaded files to Firebase's storage via Google Cloud Storage through JS! What I noticed is that, unlike files uploaded directly, the files uploaded through Google Cloud only have a Storage Location URL, which isn't a full URL and so cannot be read. I'm wondering if there is a way to generate a full URL on upload for the "Download URL" part of Firebase's actual storage.
Code being used:
var filename = image.substring(image.lastIndexOf("/") + 1).split("?")[0];
var gcs = gcloud.storage();
var bucket = gcs.bucket('bucket-name-here.appspot.com');
request(image).pipe(bucket.file('photos/' + filename).createWriteStream(
        {metadata: {contentType: 'image/jpeg'}}))
    .on('error', function(err) {})
    .on('finish', function() {
        console.log(imagealt);
    });
When using the GCloud client, you want to use getSignedUrl() to download the file, like so:
bucket.file('photos/' + filename).getSignedUrl({
    action: 'read',
    expires: '03-17-2025'
}, function(err, url) {
    if (err) {
        console.error(err);
        return;
    }
    // The file is now available to read from this URL.
    request(url, function(err, resp) {
        // resp.statusCode = 200
    });
});
You can either:
a) create a download URL through the Firebase console, or
b) request the download URL programmatically from a Firebase client, in which case one will be created on the fly for you.
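For option (b), a minimal sketch assuming the Firebase JS client SDK (v8-style namespaced API) is already initialized in the browser; getDownloadURL() mints a tokenized HTTPS URL on demand:
firebase.storage()
    .ref('photos/' + filename)
    .getDownloadURL()
    .then(function(url) {
        console.log('Download URL:', url); // usable in an <img src> or shareable link
    })
    .catch(function(err) {
        console.error('Could not get download URL:', err);
    });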

Node.js/Mongodb/GridFS resize images on upload

I am saving uploaded images in MongoDB GridFS with Node.js/Express/gridfs-stream/multiparty using streams.
Works fine.
Now I would like to "normalize" (resize) images to some standard format before storing them in the database.
I could use gm (https://github.com/aheckmann/gm) and keep streaming, but I would have to install native ImageMagick (not an option), or
use something like lwip (https://github.com/EyalAr/lwip) and have a "pure Node" setup, but then I cannot stream.
So is there a streaming solution for request -> resize -> store to GridFS without installing external libraries?
Current solution (missing the resize step):
function storeImage(req, err, succ){
    var conn = mongoose.connection;
    var gfs = Grid(conn.db);
    var context = {};
    var form = new multiparty.Form();

    form.on('field', function(name, value){
        context[name] = value;
        console.log(context);
    });

    form.on('part', function(part){
        // handle events only if file part
        if (!part.filename) { return; }

        var options = {
            filename: part.filename,
            metadata: context,
            mode: 'w',
            root: 'images'
        };
        var ws = gfs.createWriteStream(options);

        // success GridFS
        ws.on('close', function (file) {
            console.log(file.filename + file._id);
            succ(file._id);
        });

        // error GridFS
        ws.on('error', function (errMsg) {
            console.log('An error occurred!', errMsg);
            err(errMsg);
        });

        part.pipe(ws);
    });

    // Close emitted after form parsed
    form.on('close', function() {
        console.log('Upload completed!');
    });

    form.parse(req);
}
For posterity:
1) Initially I used lwip while I was storing images locally. When people started uploading bigger images (which was added as a requirement), lwip started exploding my instance on Heroku, so I switched to
2) gm over ImageMagick running on AWS Lambda, which has ImageMagick preconfigured in the default instance. Images are now stored on S3 and distributed via CloudFront.
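For reference, a rough sketch of the gm-based streaming resize inside the form.on('part') handler above (this assumes ImageMagick is installed on the host, which was the original blocker); the upload is resized on the fly and piped straight into GridFS with no temp files:
var gm = require('gm');

form.on('part', function(part){
    // handle events only if file part
    if (!part.filename) { return; }
    var ws = gfs.createWriteStream({
        filename: part.filename,
        metadata: context,
        mode: 'w',
        root: 'images'
    });
    // Resize to a maximum width of 800px while streaming through ImageMagick.
    gm(part)
        .resize(800)
        .stream(function(err, stdout, stderr) {
            if (err) { return console.log('Resize failed:', err); }
            stdout.pipe(ws);
        });
});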
