I'm new to streams, and I'm trying to check a stream's length before uploading it to S3. It's not that efficient, but performance is not an issue at the moment.
The code below works fine for many images, but for one specific image it gets stuck at the byte-length validation.
//using graphql-upload
const { createReadStream, filename, encoding, mimetype } = file;
const stream: ReadStream = createReadStream();
const validationStream = stream.pipe(new Stream.PassThrough());
const uploadStream = stream.pipe(new Stream.PassThrough());
try {
  debugConsoleLog("File upload begun"); // OK
  // ... extension validation
  // ... mime validation
  debugConsoleLog("Type check good"); // OK
  let byteLength = 0;
  for await (const uploadChunk of validationStream) {
    debugConsoleLog(byteLength); // prints 0 once, never prints again
    byteLength += (uploadChunk as Buffer).byteLength;
  }
  debugConsoleLog("Counted byteSize"); // never called
  // ... check if size is too big, then upload // never called
} catch (err) {
  console.log(err);
  throw err;
} finally {
  stream.destroy();
  validationStream.destroy();
  uploadStream.destroy();
}
The image that doesn't work is a screenshot of a Mac Touch Bar. Not that anyone is likely to upload one, but it shows that some images get permanently stuck in the processing code.
Something in this part of the code goes wrong for that image. How can I prevent the endless processing of such a file if something is wrong with the image?
let byteLength = 0;
for await (const uploadChunk of validationStream) {
  debugConsoleLog(byteLength); // prints 0 once, never prints again
  byteLength += (uploadChunk as Buffer).byteLength;
}
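One theory I have, but haven't verified: backpressure. The source stream is piped into two PassThrough branches, and if the uploadStream branch is never read, its internal buffer fills to its highWaterMark and the source pauses, which starves validationStream too (which might explain why only larger images hang). A minimal sketch of draining both branches concurrently, where uploadToS3 is a hypothetical stand-in for my actual upload code:

async function countBytes(readable) {
  let byteLength = 0;
  for await (const chunk of readable) {
    byteLength += chunk.byteLength; // each chunk is a Buffer
  }
  return byteLength;
}

// Consume both branches at the same time so neither pipe stalls the source
const [byteLength] = await Promise.all([
  countBytes(validationStream),
  uploadToS3(uploadStream), // hypothetical helper; alternatively, give this branch a large enough highWaterMark
]);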
Please let me know if you need more info about the stream in question!
Thank you.
Related
I am struggling through learning jQuery/JavaScript and have a web application that uses Chrome's experimental Web Serial API. When I enter a command and get a response back, the string is broken into two pieces at a random place, usually in the first third:
<p0><iDCC-EX V-0.2.1 / MEGA / STANDARD_MOTOR_SHIELD G-9db6d36>
All the other return messages are shorter and also wrapped in "<" and ">" brackets.
In the code below, the log window only ever shows the second chunk, even though the ChunkTransformer() routine simultaneously displays it properly in the DevTools console log.
How can I get all my return messages to appear as one string? It is OK if the chunks are split into separate return values by the brackets, as long as they display in the log. I think the <p0> is not displaying because the log window treats it as a special character; it would not even display here until I wrapped it in a code tag. So I think I have at least two issues.
async function connectServer() {
  try {
    port = await navigator.serial.requestPort(); // prompt the user to select the device connected to a COM port
    await port.open({ baudRate: 115200 }); // open the port at the proper supported baud rate

    // create a text encoder output stream and pipe the stream to port.writable
    const encoder = new TextEncoderStream();
    outputDone = encoder.readable.pipeTo(port.writable);
    outputStream = encoder.writable;

    // send a CTRL-C and turn off the echo
    writeToStream('\x03', 'echo(false);');

    let decoder = new TextDecoderStream();
    inputDone = port.readable.pipeTo(decoder.writable);
    inputStream = decoder.readable
      // test why only getting the second chunk in the log
      .pipeThrough(new TransformStream(new ChunkTransformer()));

    // get a reader and start the non-blocking asynchronous read loop to read data from the stream
    reader = inputStream.getReader();
    readLoop();
    return true;
  } catch (err) {
    console.log("User didn't select a port to connect to");
    return false;
  }
}
async function readLoop() {
  while (true) {
    const { value, done } = await reader.read();
    if (value) {
      displayLog(value);
    }
    if (done) {
      console.log('[readLoop] DONE' + done.toString());
      displayLog('[readLoop] DONE' + done.toString());
      reader.releaseLock();
      break;
    }
  }
}
class ChunkTransformer {
  transform(chunk, controller) {
    displayLog(chunk.toString()); // only shows the last chunk!
    console.log('dumping the raw chunk', chunk); // shows all chunks
    controller.enqueue(chunk);
  }
}
function displayLog(data){
  $("#log-box").append("<br>" + data + "<br>");
  $("#log-box").animate({scrollTop: $("#log-box").prop("scrollHeight"), duration: 1}, "fast");
}
First Step:
Modify the displayLog() function in one of the following ways. In the original, duration: 1 sits inside the properties object passed to .animate(), where jQuery treats it as a CSS property to animate rather than as an option.
With Animate:
function displayLog(data){
  $("#log-box").append("<br>" + data + "<br>");
  $("#log-box").animate({scrollTop: $("#log-box").prop("scrollHeight")}, "fast");
}
Without Animate:
function displayLog(data){
  $("#log-box").append("<br>" + data + "<br>");
  $("#log-box").scrollTop($("#log-box").prop("scrollHeight"));
}
Or, just for your understanding:
function displayLog(data){
  $("#log-box").append("<br>" + data + "<br>");
  var scrollHeight = $("#log-box").prop("scrollHeight");
  $("#log-box").scrollTop(scrollHeight);
}
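That fixes the scrolling, but there are two more issues in the question: the split messages and the vanishing <p0>. Two sketches, offered as untested suggestions rather than verified fixes.

Serial data arrives wherever the buffer happens to split it, so one common approach is a transformer that buffers input and only emits complete >-terminated messages (the class name here is made up, not part of the Web Serial API):

class MessageAssembler {
  constructor() {
    this.buffer = '';
  }
  transform(chunk, controller) {
    this.buffer += chunk;
    // emit every complete <...> message; keep the unfinished tail for the next chunk
    const parts = this.buffer.split('>');
    this.buffer = parts.pop();
    for (const part of parts) {
      controller.enqueue(part + '>');
    }
  }
  flush(controller) {
    if (this.buffer) {
      controller.enqueue(this.buffer);
    }
  }
}

// used in place of ChunkTransformer:
// inputStream = decoder.readable.pipeThrough(new TransformStream(new MessageAssembler()));

As for <p0> not displaying: jQuery's .append() parses its argument as HTML, so <p0> is interpreted as a tag instead of text, exactly as suspected in the question. Escaping the data first makes it display literally:

function displayLog(data) {
  var escaped = $('<div>').text(data).html(); // escapes <, > and & so markup shows as text
  $("#log-box").append("<br>" + escaped + "<br>");
  $("#log-box").scrollTop($("#log-box").prop("scrollHeight"));
}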
I am writing a Firebase function, following the sample code provided by Firebase on GitHub.
However, I am consistently getting the error
Function returned undefined, expected Promise or value
in my Firebase functions log.
I have pretty much modified my code to be exactly the same, and yet no respite. Has anyone tried this code? Is it error free? Why am I getting the error? The same code is also in the Firebase guide.
The sample code producing the error is below:
exports.imageToJPG = functions.storage.object().onChange(event => {
  const object = event.data;
  const filePath = object.name;
  const baseFileName = path.basename(filePath, path.extname(filePath));
  const fileDir = path.dirname(filePath);
  const JPEGFilePath = path.normalize(path.format({dir: fileDir, name: baseFileName, ext: JPEG_EXTENSION}));
  const tempLocalFile = path.join(os.tmpdir(), filePath);
  const tempLocalDir = path.dirname(tempLocalFile);
  const tempLocalJPEGFile = path.join(os.tmpdir(), JPEGFilePath);

  // Exit if this is triggered on a file that is not an image.
  if (!object.contentType.startsWith('image/')) {
    console.log('This is not an image.');
    return;
  }

  // Exit if the image is already a JPEG.
  if (object.contentType.startsWith('image/jpeg')) {
    console.log('Already a JPEG.');
    return;
  }

  // Exit if this is a move or deletion event.
  if (object.resourceState === 'not_exists') {
    console.log('This is a deletion event.');
    return;
  }

  const bucket = gcs.bucket(object.bucket);
  // Create the temp directory where the storage file will be downloaded.
  return mkdirp(tempLocalDir).then(() => {
    // Download file from bucket.
    return bucket.file(filePath).download({destination: tempLocalFile});
  }).then(() => {
    console.log('The file has been downloaded to', tempLocalFile);
    // Convert the image to JPEG using ImageMagick.
    return spawn('convert', [tempLocalFile, tempLocalJPEGFile]);
  }).then(() => {
    console.log('JPEG image created at', tempLocalJPEGFile);
    // Upload the JPEG image.
    return bucket.upload(tempLocalJPEGFile, {destination: JPEGFilePath});
  }).then(() => {
    console.log('JPEG image uploaded to Storage at', JPEGFilePath);
    // Once the image has been converted, delete the local files to free up disk space.
    fs.unlinkSync(tempLocalJPEGFile);
    fs.unlinkSync(tempLocalFile);
  });
});
Any pointers?
It seems that Firebase recently updated their SDK, and their sample code and documentation are a little out of date. You must return a value even if you are just trying to exit the function. So it must be return true for each of the return statements in the code above where no Promise is being returned.
I will delete this question and answer once Firebase has updated their sample code and documentation. Until then, I am leaving it here for those who may stumble upon this issue without knowing why.
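For illustration, this is what the described change looks like on one of the early-exit branches (a sketch of the fix, not the updated official sample):

// Exit if this is triggered on a file that is not an image.
if (!object.contentType.startsWith('image/')) {
  console.log('This is not an image.');
  return true; // any value will do; returning undefined is what triggers the log message
}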
With JavaScript and Chrome (on Electron) I am reading files in chunks and appending them to a string. I can see that if I try to read a file of 462 MB, I get the error RangeError: Invalid string length; if I print the string length on every chunk, the last read shows 268,400,000, reading in chunks of 100,000 bytes.
What is this error about? A JavaScript string length limit? My computer saying stop? I can see that the CPU stays below 50% and memory doesn't go higher than 55%.
I am about to start thinking about a workaround, but I cannot find anything about a length limit, so maybe I am facing another type of error?
The code I'm using to read files:
var start, temp_end, end;
var BYTES_PER_CHUNK = 100000;

function readFile(file_to_read, param) {
  if (param.start < param.end) {
    return new Promise(function(resolve) {
      var chunk = file_to_read.file.slice(param.start, param.temp_end);
      var reader = new FileReader();
      reader.onload = function(e) {
        if (e.target.readyState == 2) { // the file is read in chunks, and this chunk has been successfully read
          document.getElementById('file_monitor').max = param.end;
          document.getElementById('file_monitor').value = param.temp_end;
          //file.data += new TextDecoder("utf-8").decode(e.target.result);
          Promise.resolve()
            .then(function() {
              file_to_read.data += e.target.result;
            }).then(function() {
              param.start = param.temp_end; // 0 if a new file, the previous end if still reading the same file
              param.temp_end = param.start + BYTES_PER_CHUNK;
              if (param.temp_end > param.end)
                param.temp_end = param.end;
              resolve(readFile(file_to_read, param));
            }).catch(function(e) {
              console.log(e);
              console.log(file_to_read.data.length);
              console.log(file_to_read.data);
              console.log(e.target.result);
              resolve();
            });
        }
      };
      reader.readAsText(chunk);
      // reader.readAsBinaryString(chunk);
    });
  } else
    return Promise.resolve();
}
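For anyone hitting the same wall: the reported length lines up with V8's maximum string length (on the order of 2^28 characters, roughly 268 million, in V8 builds of that era), so this looks like the engine refusing to grow the string, not the machine giving up. A sketch of a workaround under that assumption: keep the chunks in an array and process them piece by piece, since joining them back into one string would hit the same cap.

// Sketch: accumulate chunk strings in an array instead of one giant string
file_to_read.chunks = file_to_read.chunks || [];

// inside reader.onload, instead of file_to_read.data += e.target.result:
file_to_read.chunks.push(e.target.result);

// later: iterate file_to_read.chunks and handle each piece (e.g. parse it line by line);
// concatenating the whole array into a single string would hit the same limit again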
I am working with pngjs and many of its methods. Most of the time they work fine. However, as in the following example, I sometimes get the error "Stream is not writable":
var fs = require('fs'),
    PNG = require('pngjs').PNG;

var dst = new PNG({width: 100, height: 50});

fs.createReadStream('http://1.1m.yt/hry7Eby.png') // download this picture in order to examine the code
  .pipe(new PNG())
  .on('parsed', function(data) {
    console.log(data);
  });
This case is not singular; I get this error on one random PNG image about once a day, across all of pngjs's methods, and that error crashes my app.
(Note: you can't use the HTTP link I gave you with a read stream; you will have to download and rename the picture and do something like:)
fs.createReadStream('1.png')
Thank you for your time and effort.
This seems to be a bug in the library, though I'm wary of saying so as I'm no expert in PNGs. The parser seems to complete while the stream is still writing: it encounters the IEND chunk and so calls this:
ParserAsync.prototype._finished = function() {
  if (this.errord) {
    return;
  }
  if (!this._inflate) {
    this.emit('error', 'No Inflate block');
  }
  else {
    // no more data to inflate
    this._inflate.end();
  }
  this.destroySoon();
};
If you comment out the this.destroySoon();, it finishes the image correctly instead of eventually calling this function:
ChunkStream.prototype.end = function(data, encoding) {
  if (data) {
    this.write(data, encoding);
  }
  this.writable = false;
  // already destroyed
  if (!this._buffers) {
    return;
  }
  // enqueue or handle end
  if (this._buffers.length === 0) {
    this._end();
  }
  else {
    this._buffers.push(null);
    this._process();
  }
};
...which would otherwise end up setting stream.writable to false or, if you comment that out, pushing a null value into the _buffers array and breaking ChunkStream._processRead.
I'm fairly certain this is a timing problem between how long the zlib inflater takes to complete and how long the stream takes to complete, since if you do it synchronously it works fine:
var data = fs.readFileSync('pic.png');
var png = PNG.sync.read(data);
var buff = PNG.sync.write(png);
fs.writeFileSync('out2.png', buff);
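As a stopgap while the race exists upstream, attaching an 'error' handler to the PNG stream should at least keep one bad image from crashing the whole app, since a Node stream 'error' event with no listener is thrown as an exception. A sketch against the async API from the question:

fs.createReadStream('1.png')
  .pipe(new PNG())
  .on('parsed', function(data) {
    console.log(data);
  })
  .on('error', function(err) {
    // handle the failure for this one image instead of letting the process die
    console.error('PNG parsing failed:', err);
  });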
I want to log to a file continuously, but after every 1000 lines I want to switch to a new file. Right now my method works like this:
var fs = require('fs');
...
var outputStream = fs.createWriteStream(fileName + '.csv');
outputStream.write(content, 'utf8', callback);
...
if (lineCounter === 1000) {
  outputStream.end(function(err) {
    outputStream = fs.createWriteStream(fileName2 + '.csv');
    outputStream.write(content, 'utf8', callback);
  });
}
In the end, the files don't contain the last few lines. I'm open to any solution; I just need the stream to write into several files.
Thanks in advance!
At first I tried using the streams of Highland.js, but I couldn't pause them for some reason. The script I am posting is tested and working; I share the original source at the end. I haven't actually started reading a second file, but I believe that is easy now, as you have a point from which to proceed once the script has reached the defined limit of lines.
var stream = require('stream'),
    fs = require('fs'),
    readStream = fs.createReadStream('./stream.txt', {highWaterMark: 15}),
    limitStream = new stream.Transform(),
    limit = 0;

limitStream._transform = function(chunk, encoding, cb) {
  if (++limit <= 5) {
    console.log('before', limit);
    return cb(null, chunk + '\n');
  }
  console.log('after', limit);
  this.end();
  cb();
};

limitStream.on('unpipe', function() { console.log('unpipe emitted from limitStream'); });
limitStream.on('end', function() { console.log('end emitted from limitStream'); });

readStream.pipe(limitStream).pipe(process.stdout);
Source: https://groups.google.com/forum/#!topic/nodejs/eGukJUQrOBY
After posting the answer, I found a library that may also work, but I admit that I haven't tested it. I just share it as a reference point: https://github.com/isaacs/truncating-stream
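Coming back to the write side of the original question, here is a minimal sketch of the same rotation idea applied directly to write streams (the constructor and file naming are mine; it assumes lines arrive one at a time). The key is to swap in the new stream synchronously, so no line lands on the finished file, and let end() flush the old one in the background:

var fs = require('fs');

function RotatingWriter(baseName) {
  this.baseName = baseName;
  this.fileIndex = 0;
  this.lineCounter = 0;
  this.stream = fs.createWriteStream(baseName + '-0.csv');
}

RotatingWriter.prototype.write = function(line, callback) {
  this.stream.write(line, 'utf8', callback);
  if (++this.lineCounter === 1000) {
    this.lineCounter = 0;
    this.fileIndex += 1;
    var old = this.stream;
    // swap first: writes issued after this point go to the new file
    this.stream = fs.createWriteStream(this.baseName + '-' + this.fileIndex + '.csv');
    // end() flushes anything still buffered before closing, so no lines are lost
    old.end();
  }
};

RotatingWriter.prototype.close = function(callback) {
  this.stream.end(callback);
};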