I'm creating screenshots at 15 fps with this Node.js code:
var spawn = require('child_process').spawn;
var args = ['-ss', '00:00:07.86', '-i', 'filename.mp4', '-vf', 'fps=15', '/out%d.png'];
var ffmpeg = spawn('ffmpeg', args);
This works fine, but I want the time stamp of each screenshot in the filename.
From the FFmpeg docs:
%t is expanded to a timestamp
But putting ... ,'/out%t.png'] fails and prints:
grep stderr: [image2 @ 0x7f828c802c00] Could not get frame filename number 2 from pattern '/Users/***/projects/out%t.png' (either set updatefirst or use a pattern like %03d within the filename pattern)
av_interleaved_write_frame(): Invalid argument
...
grep stderr: Conversion failed!
child process exited with code 1
So that doesn't look like the way to go.
How do I get the timestamp for each screenshot?
Thanks
As far as I know, %d is the only placeholder you can use in this case.
%t works for the report filename when using the -report flag, but not for video frame filenames.
Knowing the video length, the start time (00:00:07.86) and the FPS (15), you can map the frame numbers in the filenames to frame timestamps, and rename all the files after ffmpeg finishes extracting frames. It's an ugly workaround for sure, but it's the only thing I can come up with...
Following poohitan's comment, I ended up building a workaround that correlates the generated frame number, start time and FPS with the actual frame timestamp.
Video.frameToSec = function (imageName, start, fps) {
  // imageName => out209.png
  start = start || 0;
  // Extract the frame number from the filename (e.g. "out209.png" -> 209)
  var frameNumber = parseInt(imageName.match(/\d+/)[0], 10);
  // Duration of a single frame in seconds
  var frameDuration = 1 / fps;
  // Timestamp at the middle of the frame's display interval
  var sec = frameNumber * frameDuration - frameDuration / 2 + start;
  return { imageName: imageName, sec: sec };
};
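For reference, a minimal sketch of how this helper could be used to rename the extracted frames once ffmpeg exits. The out%d.png pattern and the start/fps values come from the question; the directory and new filename scheme are my assumptions:
var fs = require('fs');
var path = require('path');

var outDir = './frames'; // hypothetical output directory
fs.readdirSync(outDir)
  .filter(function (name) { return /^out\d+\.png$/.test(name); })
  .forEach(function (name) {
    var info = Video.frameToSec(name, 7.86, 15); // start time and fps from the question
    // e.g. out209.png -> out-21.760s.png
    fs.renameSync(path.join(outDir, name), path.join(outDir, 'out-' + info.sec.toFixed(3) + 's.png'));
  });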
Too bad ffmpeg doesn't have a built-in option for this, because when creating screenshots the timestamp is important.
Anyway, thanks for the help!
In my app I have a big object (>600 MB) that I'd like to stringify and save to a JSON file. Because of its size, I'd like to use the big-json library and its json.createStringifyStream method, which returns a stringified object stream. My script looks like this:
import fs from 'fs'
import json from 'big-json'

const fetchAt = new Date();
const myData = // big object >600MB

const myDataFileWriteStream = fs.createWriteStream('./myfile.json');

json.createStringifyStream({body: myData})
  .pipe(myDataFileWriteStream)
  .on('finish', function () {
    const writeAt = new Date();
    console.log(`Data written in ${(writeAt - fetchAt) / 1000} seconds.`); // this line is never printed out
  });
When I run it, I can see that it saves data for some time; after that it freezes and stops doing anything, but the script doesn't finish:
It doesn't print the log hooked on the finish event
The file is much smaller than I anticipate
The script doesn't exit
I ran it for about 30 minutes with no sign of finishing, and it uses 0 CPU. Do you spot any problem? What might be the cause?
Edit:
I have a working example - please clone https://github.com/solveretur/big-json-test and run npm start. After ~15 minutes it should freeze and won't finish. I use Node 14.
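An aside, not from the original post: pipe() does not forward error events, so a stalled pipeline can fail silently. A minimal debugging sketch, assuming the same myData object as above, that attaches error handlers to both ends of the pipe:
import fs from 'fs'
import json from 'big-json'

const writeStream = fs.createWriteStream('./myfile.json');

json.createStringifyStream({body: myData})
  .on('error', (err) => console.error('stringify error:', err)) // errors from the stringify stream
  .pipe(writeStream)
  .on('error', (err) => console.error('write error:', err)) // errors from the file stream
  .on('finish', () => console.log('done'));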
I would like to send a message from a text file every 24 hours to a channel in my server (discord.js).
I think it would be like "appendFile" or something like that. If you know how to do this I would appreciate it!
The way I am looking to use this is every morning with a random good morning message.
You can use setInterval; here is what you can do:
var fs = require("fs"); // require fs
var file = fs.readFileSync("./path/to/file", "utf8"); // read the file as text
setInterval(() => {
  channel.send(file); // `channel` is your TextChannel instance
}, 86400000); // 24 hours in milliseconds
What you need is a cron job.
This is the one I use.
Install
npm install cron
var CronJob = require('cron').CronJob;
var job = new CronJob('* * * * * *', function() {
console.log('You will see this message every second');
}, null, true, 'America/Los_Angeles');
job.start();
You can read about cron patterns here
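Putting the two answers together, here is a sketch of what the morning message could look like. The file name, the 8:00 schedule, the channel ID, and the client variable are my assumptions; adjust them to your bot:
const fs = require('fs');
const { CronJob } = require('cron');

// At 8:00:00 every day, pick a random line from the text file and post it.
const job = new CronJob('0 0 8 * * *', () => {
  const lines = fs.readFileSync('./goodmornings.txt', 'utf8') // hypothetical file, one message per line
    .split('\n')
    .filter((line) => line.trim().length > 0);
  const message = lines[Math.floor(Math.random() * lines.length)];
  const channel = client.channels.cache.get('YOUR_CHANNEL_ID'); // `client` is your logged-in discord.js Client
  if (channel) channel.send(message);
});
job.start();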
I'm trying to convert ANSI color codes from console output into HTML. I have found a script to do this, but I can't seem to make it parse the strings inside Node.js. I have tried JSON.stringify to also include the special chars, but it's not working.
forever list
[32minfo[39m: Forever processes running
[90mscript[39m [37mforever[39m [37mpid[39m [37mid[39m
[90mdata[39m: [37m [39m [37muid[39m [90mcommand[39m
I get output like this back from ssh2shell in Node.js. I have a script:
https://github.com/pixelb/scripts/blob/master/scripts/ansi2html.sh
This is supposed to convert the above to HTML and add the appropriate color codes. It works fine with normal terminal output, for example:
npm install --color=always | ansi2html.sh > npminstall.html
This is the raw output on the Linux machine piped to a file. It seems the JS strings are missing these escapes when they are shown in console.log, but they are also missing newlines there. Perhaps it's because I'm concatenating them directly into the string and it's removing special chars?
total 24
-rwxr-xr-x 1 admin admin 17002 May 13 02:52 ^[[0m^[[38;5;34mansi2html.sh^[[0m
drwxr-xr-x 4 admin admin 4096 May 13 00:00 ^[[38;5;27mgit^[[0m
-rw-r--r-- 1 admin admin 0 May 13 02:57 ls.html
Hopefully some of this makes sense.
Thanks
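(An editor's aside: one quick way to verify whether the escape bytes actually survive into the JavaScript string — a debugging sketch where output stands in for whatever ssh2shell handed back:)
// If the ESC byte is present, it shows up as \u001b in the stringified form.
console.log(JSON.stringify(output));
// Or inspect the raw bytes directly:
console.log(Buffer.from(output).toString('hex'));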
There are a couple of filters that SSH2shell applies to the output from commands: the first removes non-standard ASCII from the response, and then the colour formatting codes are removed.
In v1.6.0 I added pipe()/unpipe(), the events for both, and exposed the stream.on('data', function(data){}) event, so you can access the stream output directly without SSH2shell interacting with it in any way.
This should resolve the problem of not getting the right output from SSH2shell by giving you access to the raw data.
var fs = require('fs')

var host = {
  server: {
    host: "mydomain.com",
    port: 22,
    userName: "user",
    password: "password"
  },
  commands: [
    "`Test session text message: passed`",
    "msg:console test notification: passed",
    "ls -la"
  ],
}

//until npm published use the cloned dir path.
var SSH2Shell = require('ssh2shell')

//run the commands in the shell session
var SSH = new SSH2Shell(host),
  callback = function (sessionText) {
    console.log("-----Callback session text:\n" + sessionText)
    console.log("-----Callback end")
  },
  firstLog = fs.createWriteStream('first.log'),
  secondLog = fs.createWriteStream('second.log'),
  buffer = ""

//multiple pipes can be added but they won't be bound to the stream until the connection is established
SSH.pipe(firstLog).pipe(secondLog)

SSH.on('data', function (data) {
  //do something with the data chunk
  console.log(data)
})

SSH.connect(callback)
Have you tried this?
https://github.com/hughsk/ansi-html-stream
var spawn = require('child_process').spawn
, ansi = require('ansi-html-stream')
, fs = require('fs')
var npm = spawn('npm', ['install', 'browserify', '--color', 'always'], {
cwd: process.cwd()
})
var stream = ansi({ chunked: false })
, file = fs.createWriteStream('browserify.html', 'utf8')
npm.stdout.pipe(stream)
npm.stderr.pipe(stream)
stream.pipe(file, { end: false })
stream.once('end', function() {
file.end('</pre>\n')
})
file.write('<pre>\n');
I am trying out calculation of MD5 using JavaScript. Looking at the
fastest MD5 Implementation in JavaScript post, the 'JKM' implementation is supposed to be one of the faster ones. I am using SparkMD5, which is based off the JKM implementation. However, the example provided at https://github.com/satazor/SparkMD5/blob/master/test/readme_example.html takes about 10 seconds for a 13 MB file (~23 seconds with the debugger), while the same file takes only 0.03 seconds using the md5sum command in Linux. Are these results too slow for a JavaScript implementation, or is this poor performance expected?
It is expected.
First, I don't think I need to tell you that JAVASCRIPT IS SLOW. Yes, even with modern JIT optimization etc. JavaScript is still slow.
To show you that it is not your JS implementation's fault, I will do some comparisons with Node.js, so that the browser DOM stuff doesn't get in the way of benchmarking.
Test file generation:
$ dd if=/dev/zero of=file bs=6M count=1
(my server only has 512 MB of RAM and Node.js can't take anything higher than 6M)
Script:
//var md5 = require('crypto-js/md5')
var md5 = require('MD5')
//var md5 = require('spark-md5').hash
//var md5 = require('blueimp-md5').md5
require('fs').readFile('file', 'utf8', function(e, b) { // Using string here to be fair for all md5 engines
console.log(md5(b))
})
(you can uncomment the contestants/benchmarkees)
The result is: (file reading overhead removed)
MD5: 5.250s - 0.072s = 5.178s
crypto-js/md5: 4.914s - 0.072s = 4.842s
Blueimp: 4.904s - 0.072s = 4.832s
MD5 with Node.js binary buffer instead of string: 1.143s - 0.063s = 1.080s
spark: 0.311s - 0.072s = 0.239s
md5sum: 0.023s - 0.003s = 0.020s
So no, spark-md5 is in reality not bad at all.
When looking at the example HTML page, I saw that they are using the incremental API. So I did another benchmark:
var md5 = require('spark-md5')
var md5obj = new md5()
var chunkNum = 0
require('fs').createReadStream('file')
.on('data', function (b) {
chunkNum ++
md5obj.append(b.toString())
})
.on('end', function () {
console.log('total ' + chunkNum + ' chunks')
console.log(md5obj.end())
})
With 96 chunks, it is 0.313s.
So no, it is not the MD5 implementation's fault at all. Performance this poor is, to be honest, a little surprising, but not impossible either, since you are running this code in a browser.
BTW, my server is a DigitalOcean VPS with SSD. The file reading overhead is about 0.072s:
require('fs').readFile('file', 'utf8', function() {})
while with native cat it's about 0.003s.
For MD5 with native Buffer, the overhead is about 0.063s:
require('fs').readFile('file', function() {})
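For reference (my addition, not part of the original benchmark): Node's built-in crypto module hashes in native code, so it should land much closer to md5sum than any pure-JS implementation:
// Hash the same test file with Node's OpenSSL-backed MD5.
var crypto = require('crypto')
require('fs').readFile('file', function (e, buf) {
  console.log(crypto.createHash('md5').update(buf).digest('hex'))
})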
We've recently started seeing a new error in our apache logs:
[Wed Mar 16 08:32:59 2011] [error] [client 10.40.1.2] (36)File name too long: Cannot map GET /static/app/js <..lots of javascript...>
It looks as though JavaScript from a page is being sent in a request to the server. However, it's unclear how this would occur. From searching the internet, it looks like this kind of thing has occurred with certain WordPress plugins, but there isn't much other information out there.
Note about the environment: Clients use IE8 running on a Citrix thin client in the UK. The web servers are 1700km away, so there's a bit of latency. The site makes heavy use of AJAX and large cookies.
Could anyone advise on how to debug this issue please?
Thanks
Andrew
I'm getting this too, with a PHP framework that allows URLs formatted so that
index.php?controller=doohickey&id=z61
can be rewritten as
index.php/controller/doohickey/z61
along with a regex in the framework code.
The errors look like this (/var/log/apache/error_log):
GET /index.php/accounts_badoink/confirmaction/WUW%253DWBW%25253DV0tXPWM3Nzc1....
-> in this case, Apache is parsing the filename as
/index.php/accounts_badoink/confirmaction/WUW%253DWBW%25253DV0tXPWM3Nzc1....
(I'm serializing an object state and passing it around).
I have to rewrite this (at least the URLs with long appended serialized objects) to the more-customary style:
GET /index.php?controller=accounts_badoink&confirmaction=WUW%253DWBW%25253DV0tXPWM3Nzc1....
-> in this case, Apache is parsing the file name as index.php
So in short, rewrite your URLs and include a ? as early as possible, to pass data as CGI-style parameters instead of path elements.
I ran strace -p $PID & for each Apache process ID (as reported by pidof apache2):
# pidof apache2 | tr ' ' '\n' | grep -v 21561 | sed "s|\(.*\)|strace -p \1 \&|g" | sh -
To finish:
# kill -HUP `pidof strace`
And watched the kernel calls made by apache2:
accept(3, {sa_family=AF_INET, sin_port=htons(38985), sin_addr=inet_addr("127.0.0.1")}, [16]) = 13
fcntl(13, F_GETFD) = 0
fcntl(13, F_SETFD, FD_CLOEXEC) = 0
fcntl(13, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(13, F_SETFL, O_RDWR|O_NONBLOCK) = 0
read(13, "GET /newregcon/index.php/account"..., 8000) = 4949
write(2, "[Wed May 11 15:39:36 2011] [erro"..., 4451) = 4451
writev(13, [{"HTTP/1.1 403 Forbidden\r\nDate: We"..., 219}, {"<!DOCTYPE HTML PUBLIC \"-//IETF//"..., 4610}], 2) = 4829
As these system calls don't return errors (e.g. '... = -1'), I downloaded the apache2 sources and found the following.
Grep for "Cannot map":
server/core.c
3489:AP_DECLARE_NONSTD(int) ap_core_translate(request_rec *r)
3490:{
3520: if ((rv = apr_filepath_merge(&r->filename, conf->ap_document_root, path,
3521: APR_FILEPATH_TRUENAME
3522: | APR_FILEPATH_SECUREROOT, r->pool))
3523: != APR_SUCCESS) {
3524: ap_log_rerror(APLOG_MARK, APLOG_ERR, rv, r,
3525: "Cannot map %s to file", r->the_request);
3526: return HTTP_FORBIDDEN;
3527: }
look for apr_filepath_merge ...
srclib/apr/file_io/unix/filepath.c
81:APR_DECLARE(apr_status_t) apr_filepath_merge(char **newpath,
82: const char *rootpath,
83: const char *addpath,
84: apr_int32_t flags,
85: apr_pool_t *p)
86:{
87: char *path;
88: apr_size_t rootlen; /* is the length of the src rootpath */
89: apr_size_t maxlen; /* maximum total path length */
149: rootlen = strlen(rootpath);
150: maxlen = rootlen + strlen(addpath) + 4; /* 4 for slashes at start, after
151: * root, and at end, plus trailing
152: * null */
153: if (maxlen > APR_PATH_MAX) {
154: return APR_ENAMETOOLONG;
155: }
find APR_PATH_MAX ...
Netware
./srclib/apr/include/apr.hnw:424:#define APR_PATH_MAX PATH_MAX
WIN32
./srclib/apr/include/apr.hw:584:#define APR_PATH_MAX 8192
./srclib/apr/include/apr.h.in
/* header files for PATH_MAX, _POSIX_PATH_MAX */
#if APR_HAVE_LIMITS_H
#include <limits.h>
/usr/src/linux-headers-2.6.35-28/include/linux/limits.h
#define PATH_MAX 4096 /* # chars in a path name including nul */
Another related thing I ran into was PHP 5.2's SUHOSIN security patchset,
which (inter alia) limits get-parameter length (to 512 by default):
http://www.hardened-php.net/suhosin/configuration.html#suhosin.get.max_value_length