I have one JavaScript source file I am processing that I want to end up in two or more destination folders. Piping to multiple destinations works if I chain the pipes, but not if I add them to the stream one at a time. This prevents me from making the number of destination folders dynamic.
For example, the following works:
var rebundle = function() {
  var stream = bundler.bundle();
  stream = stream.pipe(source("bundled.js"));
  stream.pipe(gulp.dest(dests[0]))
    .pipe(gulp.dest(dests[1]))
    .pipe(gulp.dest(dests[2]));
  return stream;
};
But the following works inconsistently: sometimes one folder gets no output at all, other times it gets output but some contents are missing.
var rebundle = function() {
  var stream = bundler.bundle();
  stream = stream.pipe(source("bundled.js"));
  dests.map(function(d) {
    stream.pipe(gulp.dest(d));
  });
  return stream;
};
In short, is there a way to modify this so it cleanly supports a dynamic number of destinations when starting from one source, all in one file?
Versions
gulp 3.9
browserify 9
Each invocation of stream.pipe() returns a new stream. You have to apply each following invocation of .pipe() to the previously returned stream.
You're doing it right when you do stream = stream.pipe(source("bundled.js")), but in your dests.map() callback you're just adding one pipe after another to the same stream. That means you're creating lots of new streams, but those never get returned from your task, so gulp doesn't wait for them to finish.
You have to store the returned stream each time, so that it's used in the next iteration:
dests.map(function(d) {
  stream = stream.pipe(gulp.dest(d));
});
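Putting it all together, a complete rebundle might look like the minimal sketch below (my own illustration, assuming the same bundler, source, gulp, and dests variables as in the question):

var rebundle = function() {
  var stream = bundler.bundle()
    .pipe(source("bundled.js"));
  // Re-assign on each iteration so every gulp.dest() is chained onto the
  // stream returned by the previous .pipe() call.
  dests.forEach(function(d) {
    stream = stream.pipe(gulp.dest(d));
  });
  // Return the final stream so gulp waits for all destinations to finish.
  return stream;
};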
I want to record the user's microphone in 5-second-long segments and upload each one to the server. I tried using MediaRecorder and calling its start() and stop() methods at 5-second intervals, but when I concatenate these recordings there is a "drop" sound between them. So I tried to record 5-second segments using the timeslice parameter of start():
navigator.mediaDevices.getUserMedia({
  audio: { channelCount: 2, volume: 1.0, echoCancellation: false, noiseSuppression: false }
}).then(function(stream) {
  const Recorder = new MediaRecorder(stream, {
    audioBitsPerSecond: 128000,
    mimeType: "audio/ogg; codecs=opus"
  });
  Recorder.start(5000);
  Recorder.addEventListener("dataavailable", function(event) {
    const audioBlob = new Blob([event.data], { type: 'audio/ogg' });
    upload(audioBlob);
  });
});
But only the first segment is playable. What can I do, or how can I make all blobs playable?
I MUST record then upload each segment. I CAN'T build up an array of blobs (because the user could record 24 hours of data or even more, and the data needs to be uploaded to the server while the user is recording - with a 5-second delay).
Thank you!
You have to understand how media files are built.
They are not just raw data that can be converted directly to audio or video.
It depends on the chosen format, but the basic case is that a file contains what is called metadata, which acts like a dictionary describing how the file is structured.
This metadata is necessary for the software that later reads the file to know how it should parse the actual data the file contains.
The MediaRecorder API is in a strange position here, since it must write this metadata while also appending data whose length is not known in advance (it is a live recorder).
So what browsers do is put the main metadata at the beginning of the file, in a way that lets them simply push new data onto the file and still have a valid file (even though some info, like the duration, will be missing).
Now, what you get in the dataavailable event's data is only one part of a whole file that is being generated.
The first part will generally contain the metadata and some other data, depending on when the event was told to fire, but the following parts won't necessarily contain any metadata.
So you can't just grab these parts as standalone files, because the only valid file is the one made of all these parts joined together in a single Blob.
So, to your problem, you have different possible approaches:
You could send the latest slices you got from your recorder to your server at a set interval, and merge them server-side.
const recorder = new MediaRecorder(stream);
const chunks = [];
recorder.ondataavailable = e => chunks.push(e.data);
recorder.start(); // you don't need the timeslice argument

setInterval(() => {
  // here we both empty the 'chunks' array and send its content to the server
  sendToServer(new Blob(chunks.splice(0, chunks.length)));
}, 5000);
And on your server side, you would append the newly sent data to the file being recorded.
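As a minimal sketch of what that could look like (my own illustration, not part of the original answer; the /upload-chunk route and recording.ogg filename are made up), the server just appends each posted body to the growing file:

const http = require('http');
const fs = require('fs');

// Hypothetical endpoint: the client POSTs each Blob as the raw request body,
// and the server appends the received bytes to the single file being recorded.
http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/upload-chunk') {
    const out = fs.createWriteStream('recording.ogg', { flags: 'a' }); // append mode
    req.pipe(out);
    out.on('finish', () => res.end('ok'));
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(3000);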
Another way would be to generate a lot of small standalone files; to do this, you could simply create a new MediaRecorder in an interval:
function record_and_send(stream) {
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.onstop = e => sendToServer(new Blob(chunks));
  setTimeout(() => recorder.stop(), 5000); // we'll have a 5s media file
  recorder.start();
}
// generate a new file every 5s (pass the stream along on each call)
setInterval(() => record_and_send(stream), 5000);
Doing so, each file will be standalone, with a duration of approximately 5 seconds, and you will be able to play these files one by one.
Now, if you wish to store only a single file on the server while still using this method, you can merge these files together on the server side too, using e.g. a tool like ffmpeg.
Using a version of one of @Kalido's suggestions I got this working. It sends small standalone files to the server that won't produce any glitch in image or sound when they are concatenated into a unified file on the server side:
var mediaRecorder;
var recordingRunning = false;
var chunks;

// call this function to start the process
function startRecording(stream) {
  recordingRunning = true;
  chunks = [];
  mediaRecorder = new MediaRecorder(stream, { mimeType: "video/webm; codecs=vp9" });
  mediaRecorder.ondataavailable = function (e) {
    chunks.push(e.data);
  };
  mediaRecorder.onstop = function () {
    var actualChunks = chunks.splice(0, chunks.length);
    const blob = new Blob(actualChunks, { type: "video/webm; codecs=vp9" });
    uploadVideoPart(blob); // Upload to server
  };
  recordVideoChunk(stream);
}

// call this function to stop the process
function stopRecording(stream) {
  recordingRunning = false;
  mediaRecorder.stop();
}

function recordVideoChunk(stream) {
  mediaRecorder.start();
  setTimeout(function() {
    if (mediaRecorder.state == "recording")
      mediaRecorder.stop();
    if (recordingRunning)
      recordVideoChunk(stream);
  }, 10000); // 10 seconds videos
}
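For completeness, a hypothetical usage sketch (not part of the original code): obtain a MediaStream via getUserMedia, hand it to startRecording, and call stopRecording when done.

// Hypothetical usage of the functions above.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(function(stream) {
    startRecording(stream);
    // later, e.g. from a "Stop" button handler:
    // stopRecording(stream);
  })
  .catch(function(err) {
    console.error("could not access camera/microphone:", err);
  });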
Later, on the server, I concatenate them with this command:
# list.txt
file 'chunk1'
file 'chunk2'
file 'chunk3'
# command
ffmpeg -avoid_negative_ts 1 -f concat -safe 0 -i list.txt -c copy output.mp4
I have a lot of devices sending messages to a TCP server written in node. The main task of the TCP server is to route some of those messages to redis in order to be processed by another app.
I've written a simple server that does the job quite well. The structure of the code is basically this (not the actual code, details hidden):
const net = require("net");

net.createServer(socket => {
  socket.on("data", buffer => {
    const data = buffer.toString();
    if (shouldRouteMessage(data)) {
      redis.publish(data);
    }
  });
});
Most of the messages are like: {"text":"message body"}, or {"lng":32.45,"lat":12.32}. But sometimes I need to process a message like {"audio":"...encoded audio..."} that spans several "data" events.
What I need in this case is to save the encoded audio to a file and send {"audio":"path/to/audio-file.mp3"} to redis, where the path points to the file with the received audio data.
One simple option is to store the buffers until I detect the end of the message and then save them all to a file, but that means, among other things, keeping the whole file in memory before saving it to disk.
I hope there are better options using streams and pipes. Any suggestions? (Some code examples would be nice.)
Thanks
I finally solved it, so I'm posting the solution here for documentation purposes (and, with some luck, to help others).
The solution was, indeed, quite simple: just open a write stream to a file and write the data packets as they are received. Something like this:
const net = require("net");
const fs = require("fs");

net.createServer(socket => {
  // keep these per connection so they survive across "data" events
  let file = null;
  let filePath = null;
  socket.on("data", buffer => {
    const data = buffer.toString();
    if (shouldRouteMessage(data)) {
      // just publish the message
      redis.publish(data);
    } else if (isAudioStart(data)) {
      // create a write stream to a file and write the first data packet
      filePath = buildFilePath(data);
      file = fs.createWriteStream(filePath);
      file.write(data);
    } else if (isLastFragment(data)) {
      // if it is the last fragment, write it, close the file and publish the result
      file.write(data);
      file.close();
      redis.publish(filePath);
      file = filePath = null;
    } else if (isDataFragment(data)) {
      // just write (stream) it to the file
      file.write(data);
    }
  });
});
Note: shouldRouteMessage, buildFilePath, isAudioStart, isDataFragment, and isLastFragment are custom functions that depend on the kind of data.
In this way, the incoming data is streamed directly to the file, with no need to hold the contents in memory first. Node's streams rock!
As always, the devil is in the details. Some checks are necessary to, for example, ensure there's always a file when you want to write to it. Remember also to set the proper encoding when converting to a string (for example, buffer.toString('binary') did the trick for me). Depending on your data format, shouldRouteMessage, isAudioStart... and all these custom functions can be more or less complex.
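As one possible sketch of such a check (my own addition, not the code I actually run), a small helper can refuse to write a fragment when no file stream is open:

// Hypothetical guard: only write a fragment if a file stream is currently open.
function writeFragment(file, data) {
  if (!file || file.destroyed) {
    console.warn("received a fragment but no file is open; dropping it");
    return false;
  }
  return file.write(data);
}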
Hope it helps.
I am trying to achieve the following error handling:
1. Say we have a readable stream.
2. We pipe it into a transform stream.
3. Somehow the transform stream emits an error.
4. We would like to recover the readable stream (and all of its data), and re-pipe it into another transform stream.
Step 4 appears to be difficult: I can listen to the unpipe event on the target stream (the transform stream) and retrieve a reference to the source stream (the readable stream), but at least some chunks of its data have been lost.
Can we do this without creating a custom transform stream?
A real-world example is deflate content encoding, where in some cases you need zlib.createInflateRaw() instead of zlib.createInflate(), but you can't decide which one is the correct choice before looking at the response body buffers.
You do not need to introduce a stream in the middle just to read the first byte. For example:
(function readChunk() {
  var chunk = res.read();
  if (!chunk)
    return res.once('readable', readChunk);
  var inflate;
  // A zlib-wrapped stream starts with a byte whose low nibble is 8 (CM = deflate);
  // otherwise assume a raw deflate stream
  if ((chunk[0] & 0x0F) === 0x08)
    inflate = zlib.createInflate();
  else
    inflate = zlib.createInflateRaw();
  // Put chunk back into stream
  res.unshift(chunk);
  // Prepare the decompression
  res.pipe(inflate);
  output = new Response(inflate, response_options);
  resolve(output);
})();
Also, var body = res.pipe(new stream.PassThrough()); is unnecessary, you can just use res where appropriate.
I'm playing around with node and fifos and am getting some bizarre behavior I can't explain. Basically, I'm creating a fifo using spawn, creating a write stream to that fifo, piping data to the write stream, and spawning a cat command that reads from the fifo. It works if I then pipe the results of the cat command to stdout, but it doesn't work if I pipe them to another file. See the code below.
Note: this behavior only shows up when you are writing enough data to fill the write buffer.
var fs = require('fs'),
    stream = require('stream');

// create read stream
var rs = fs.createReadStream('testinput');

var spawn = require('child_process').spawn;

// create fifo
var fifo = spawn('mkfifo', ['testfifo']);

fifo.on('exit', function() { // when fifo is created, proceed
  // create outfile and attach fifostream
  var ws = fs.createWriteStream('testoutput');
  var fifows = fs.createWriteStream('testfifo');
  // pipe to fifo
  rs.pipe(fifows);
  // spawn process to read fifo
  var prog = spawn('cat', ['testfifo']);
  // send results somewhere
  prog.stdout.pipe(process.stdout); // this works
  // prog.stdout.pipe(ws); // this doesn't
});
I should probably say why I'm trying to do this. I want a stream coming in from a websocket to be used as the argument of a spawned command that is expecting a file. So in the example above, an incoming stream can be fed to cat and behave as if cat were just reading a file.
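For reference, if the spawned command can also read from stdin (an assumption on my part, not something stated in the question), the fifo can be skipped entirely and the incoming stream piped straight into the child process, roughly like this:

const { spawn } = require('child_process');

// Hypothetical sketch: feed an incoming stream to a child process via stdin.
// 'cat -' stands in for any command that accepts its input on stdin.
function runWithStream(inputStream) {
  const prog = spawn('cat', ['-']);
  inputStream.pipe(prog.stdin);   // stream data into the child
  return prog.stdout;             // the command's output comes back as a readable stream
}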
Background -
I'm trying to use node.js and the fs module to accomplish an end goal of monitoring a file, and detecting the lines that have been appended to it.
Current Implementation -
I'm currently using fs.watch to monitor the changes to the file persistently, and fs.readFile to read the file once the watch has been triggered.
Drawbacks -
The downside of this implementation is that deriving the appended lines this way is computationally expensive and slow, especially since it requires reading the entire file contents even though I'm only interested in the appended lines.
Ideal Solution -
I would like to instead use fs.createReadStream to somehow read the file up until the end, leave the file descriptor at the end, and start reading again once the file has been appended to.
I've found two ways to read the contents of a stream buffer, readable.read() and readable.on('data', ...), but in both implementations it seems the stream ends once there is no more data to read, even though the stream is not closed. I'm not exactly sure how to continue using an ended stream, as readable.resume() does not seem to do anything.
My Question -
How do I read appended lines from a file in a way that is triggered once the file is modified? Is my ideal solution down the right track?
Thank you for your time and help!
This is a problem I once had, and it was quite a headache. This is the implementation that I came up with.
var fs = require('fs');
var file = __dirname + '/file.log';

fs.stat(file, function(err, stats) {
  var start = stats.size;
  // read the entire file here if you need it
  watchFile(start);
});

function watchFile(start) {
  fs.watch(file, function(event, filename) {
    fs.stat(file, function(err, stats) {
      var stream = fs.createReadStream(file, {
        start: start,
        end: stats.size
      });
      var lines = '';
      stream.on('data', function(data) {
        lines += data;
      });
      stream.on('end', function() {
        // you have the new lines
      });
      start = stats.size + 1;
    });
  });
}
First I find the size of the file, and pass it to a watch function. Every time the file changes, I find out the new size of the file and read from the old position to the new position. On some systems the watch function might fire twice per change, so you might want to add checks to get rid of useless reads such as when the start and end are the same byte.
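As a minimal sketch of that kind of guard (my own addition, not part of the original answer), the read can simply be skipped whenever the file has not grown:

var fs = require('fs');
var file = __dirname + '/file.log';

// Hypothetical variant of the watcher above that ignores duplicate events.
function watchFileAppends(start) {
  fs.watch(file, function(event, filename) {
    fs.stat(file, function(err, stats) {
      // skip duplicate watch events and changes that appended nothing
      if (err || stats.size <= start) return;
      var stream = fs.createReadStream(file, { start: start, end: stats.size });
      var lines = '';
      stream.on('data', function(data) { lines += data; });
      stream.on('end', function() {
        // `lines` holds only the newly appended content
      });
      start = stats.size;
    });
  });
}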