Send a message from a file every hour - javascript

I would like to send a message from a text file every 24 hours to a channel in my server (discord.js).
I think it would involve something like "appendFile". If you know how to do this, I would appreciate it!
The way I am looking to use this is to send a random good-morning message every morning.

You can use setInterval. Here is what you can do:
var fs = require("fs"); // require fs
var file = fs.readFileSync("./path/to/file"); // read file
setInterval(()=>{
MessageChannel.send(file); // the `TextChannel` class
},86400000) // setting the time to 24 hours in ms
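Since the goal is a random good-morning message, here is a minimal sketch along the same lines; the file name goodmornings.txt, the one-message-per-line format, and the channel variable are assumptions, not part of the original question:
const fs = require("fs");
// Pick a random non-empty line from the file each time the interval fires.
function randomLine(path) {
  const lines = fs.readFileSync(path, "utf8")
    .split("\n")
    .filter((line) => line.trim().length > 0);
  return lines[Math.floor(Math.random() * lines.length)];
}
setInterval(() => {
  channel.send(randomLine("./goodmornings.txt")); // `channel` is assumed to be a discord.js TextChannel
}, 86400000); // 24 hours in ms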

What you need is a cron job.
This is the one I use.
Install
npm install cron
var CronJob = require('cron').CronJob;
// Constructor args: cron pattern, onTick, onComplete, start, time zone.
var job = new CronJob('* * * * * *', function() {
  console.log('You will see this message every second');
}, null, true, 'America/Los_Angeles');
job.start();
You can read about cron patterns here
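Applied to the original question, a sketch that fires every morning at 08:00 could look like the following; the hour, the time zone, and the sendGoodMorning() helper (which would post the message to your channel) are assumptions:
var CronJob = require('cron').CronJob;
// '0 0 8 * * *' = second 0, minute 0, hour 8, every day.
var morningJob = new CronJob('0 0 8 * * *', function() {
  sendGoodMorning(); // hypothetical helper that sends the random message to your channel
}, null, true, 'America/Los_Angeles');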


How to set max_receive_message_length globally?

I am using the grpc-dynamic-gateway package, which uses the grpc package as a client. I'm trying to figure out how I can set max_receive_message_length globally on the grpc object I require in my script.
const grpc = require('grpc');
grpc.max_receive_message_length = 1024 * 1024 * 100; // <-- this does not work, how can I do this?
app.use('/', grpcGateway(
  ['api.proto'],
  '0.0.0.0:5051',
  grpc.credentials.createInsecure(),
  true,
  process.cwd(),
  grpc // this is being passed to the middleware; I'd like it to have the option above set on it
));
I figured this out. The change needs to be made directly in the grpc-dynamic-gateway package. In index.js on line 77 change:
getPkg(clients, pkg, true)[svc] = new (getPkg(protos[si], pkg, false))[svc](grpcLocation, credentials)
to
getPkg(clients, pkg, true)[svc] = new (getPkg(protos[si], pkg, false))[svc](grpcLocation, credentials, {
  "grpc.max_receive_message_length": 1024 * 1024 * 1000,
  "grpc.max_send_message_length": 1024 * 1024 * 1000
})
I'm not sure if this is the only way to fix this, but it worked for me. Also, obviously adjust the message sizes to a value more appropriate for your case; 1000 MB is probably excessive.
According to this response in their GitHub issues, you have to set this when you instantiate the server, as it is a server option:
var server = new grpc.Server({'grpc.max_receive_message_length': 1024*1024*100})
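If you only control the client side, generated clients in the classic grpc package also accept these channel arguments as the third constructor argument (the same slot the patched gateway code above uses). A rough sketch, where my.proto, the mypkg package, and MyService are placeholders for your own definitions:
const grpc = require('grpc');
const protoLoader = require('@grpc/proto-loader');
// my.proto is a placeholder proto file defining package mypkg and service MyService.
const packageDefinition = protoLoader.loadSync('my.proto');
const proto = grpc.loadPackageDefinition(packageDefinition);
const client = new proto.mypkg.MyService(
  '0.0.0.0:5051',
  grpc.credentials.createInsecure(),
  {
    'grpc.max_receive_message_length': 1024 * 1024 * 100, // 100 MB
    'grpc.max_send_message_length': 1024 * 1024 * 100
  }
);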

Stuck script when saving stringified object using the big-json library

In my app I have a big object (>600 MB) that I'd like to stringify and save to a JSON file. Because of its size, I'd like to use the big-json library and its json.createStringifyStream method, which returns a stringified object stream. My script looks like this:
import fs from 'fs' // needed for createWriteStream below
import json from 'big-json'
const fetchAt = new Date();
const myData = // big object >600MB
const myDataFileWriteStream = fs.createWriteStream('./myfile.json');
json.createStringifyStream({body: myData})
  .pipe(myDataFileWriteStream)
  .on('finish', function () {
    const writeAt = new Date();
    console.log(`Data written in ${(writeAt - fetchAt) / 1000} seconds.`); // this line is never printed out
  });
When I run it I can see that it's saving the data for some time; after that it freezes and stops doing anything, but the script doesn't finish:
It doesn't print the log hooked on the finish event
The file is much smaller than I anticipate
The script doesn't exit
I ran it for about 30 minutes with no sign of finishing, and it uses 0% CPU. Do you spot any problem? What might be the cause?
Edit:
I have a working example: please clone https://github.com/solveretur/big-json-test and run npm start. After ~15 minutes it should freeze and never finish. I am using Node 14.
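One thing worth checking in the snippet above: no 'error' handler is attached to either stream, so a failure could be swallowed silently. Here is a small sketch of the same pipeline using Node's stream.pipeline, which forwards an error from any stage to a single callback (myData stands in for the real object):
import fs from 'fs'
import { pipeline } from 'stream'
import json from 'big-json'
const myData = {}; // stands in for the real >600MB object
// pipeline() reports an error from either the stringify stream or the write stream.
pipeline(
  json.createStringifyStream({body: myData}),
  fs.createWriteStream('./myfile.json'),
  (err) => {
    if (err) {
      console.error('Pipeline failed:', err);
    } else {
      console.log('Pipeline succeeded.');
    }
  }
);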

Make redis cache expire at a certain time every day

I am using node-redis. I have a cron job that updates my DB, and a Redis cache that caches the DB response.
The problem I'm having is that my cron job runs every day at 12 AM, but I can only set the Redis cache to expire x seconds from now. Is there a way to make the node-redis cache expire at exactly 12 AM every day? Thanks.
Code:
const saveResult = await SET_CACHE_ASYNC('cacheData', response, 'EX', 15);
Yes, you can use the EXPIREAT command (https://redis.io/commands/expireat). If you use the redis package (https://www.npmjs.com/package/redis) as your Redis driver, the code will look like this:
const redis = require('redis')
const client = redis.createClient();
const when = Math.floor((Date.now() + 24 * 60 * 60 * 1000) / 1000); // Unix timestamp in seconds
client.expireat('cacheData', when, function(error) { /* ... */ });
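Note that Date.now() + 24 hours is 24 hours from now, not the next 12 AM exactly. A small variation that computes the next local midnight for EXPIREAT (reusing the same client and 'cacheData' key) could be:
// Unix timestamp (in seconds) of the next local midnight.
function nextMidnightUnix() {
  const next = new Date();
  next.setHours(24, 0, 0, 0); // 24:00 today == 00:00 tomorrow, local time
  return Math.floor(next.getTime() / 1000);
}
client.expireat('cacheData', nextMidnightUnix(), function(error) {
  if (error) console.error(error);
});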
Recently I had the same problem with an application. My workaround was creating a new timespan based on the time difference between my set time and the expiration time. Here is my code:
private TimeSpan GetTimeSpanUntilNextDay(int hour)
=> new DateTime(DateTime.Now.Date.AddDays(1).Ticks).AddHours(hour) - DateTime.Now;
Using the StackExchange.Redis library for a 6 AM absolute expiration time, the code looks like this:
public async Task<bool> SetEntranceAsync(string key, MYTYPE unserializedObject)
{
    var db = _multiplexer.GetDatabase();
    var jsonValue = JsonConvert.SerializeObject(unserializedObject);
    return await db.StringSetAsync(key, jsonValue, GetTimeSpanUntilNextDay(6));
}
I used C# but you should be able to do the trick in any language.
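For the node-redis setup in the question, a rough JavaScript port of the same idea could compute the seconds until a given hour and pass it as the EX value; SET_CACHE_ASYNC is the promisified SET from the question, and the rest is a sketch:
// Seconds from now until `hour` o'clock tomorrow (local time), mirroring the C# helper above.
function secondsUntilNextDayHour(hour) {
  const next = new Date();
  next.setDate(next.getDate() + 1);
  next.setHours(hour, 0, 0, 0);
  return Math.ceil((next.getTime() - Date.now()) / 1000);
}
// 0 = midnight; the TTL now lands on 12 AM instead of a fixed 15 seconds.
const saveResult = await SET_CACHE_ASYNC('cacheData', response, 'EX', secondsUntilNextDayHour(0));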

Google BigQuery Python library is 2x as fast as the Node JS library for downloading results

I've been running a test to compare the speed at which the Google BigQuery Python client library downloads query results against the Node JS library. It would seem that, out of the box, the Python library downloads data about twice as fast as the Node JS client. Why is that?
Below I provide the two tests, one in Python and one in Javascript.
I've selected the usa_names public dataset of BigQuery as an example. The usa_1910_current table in this dataset is about 6 million rows and about 180 MB in size. I have a 200Mb fibre download link (for information about the last mile). The data, after being packed into a pandas dataframe, is about 1.1 GB (with pandas overhead included).
Python test
from google.cloud import bigquery
import time
import pandas as pd
bq_client = bigquery.Client("mydata-1470162410749")
sql = """SELECT * FROM `bigquery-public-data.usa_names.usa_1910_current`"""
job_config = bigquery.QueryJobConfig()
start = time.time()
#---------------------------------------------------
query_job = bq_client.query(
    sql,
    location='US',
    job_config=job_config)
#---------------------------------------------------
end = time.time()
query_time = end-start
start = time.time()
#---------------------------------------------------
rows = list(query_job.result(timeout=30))
df = pd.DataFrame(data=[list(x.values()) for x in rows], columns=list(rows[0].keys()))
#---------------------------------------------------
end = time.time()
iteration_time = end-start
dataframe_size_mb = df.memory_usage(deep=True).sum() / 1024 ** 2
print("Size of the data in Mb: " + str(dataframe_size_mb) + " Mb")
print("Shape of the dataframe: " + str(df.shape))
print("Request time:", query_time)
print("Fetch time:", iteration_time)
Node JS test
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const moment = require('moment')
async function query() {
  const bigqueryClient = new BigQuery();
  const query = "SELECT * FROM `bigquery-public-data.usa_names.usa_1910_current`";
  const options = {
    query: query,
    location: 'US',
  };
  // Run the query as a job
  const [job] = await bigqueryClient.createQueryJob(options);
  console.log(`Job ${job.id} started.`);
  // Wait for the query to finish
  let startTime = moment.utc()
  console.log('Start: ', startTime.format("YYYY-MM-DD HH:mm:ss"));
  const [rows] = await job.getQueryResults();
  let endTime = moment.utc()
  console.log('End: ', endTime.format("YYYY-MM-DD HH:mm:ss"));
  console.log('Difference (s): ', endTime.diff(startTime) / 1000)
}
query();
Python library test results with 180 MB of data:
Size of the data in Mb: 1172.0694370269775 Mb
Shape of the dataframe: (6028151, 5)
Request time: 3.58441424369812
Fetch time: 388.0966112613678 <-- This is 6.46 mins
Node JS library test results with 180 MB of data:
Start: 2019-06-03 19:11:03
End: 2019-06-03 19:24:12 <- About 13 mins
For further reference, I also ran the tests against a 2 GB table...
Python library test results with 2 GB of data:
Size of the data in Mb: 3397.0339670181274 Mb
Shape of the dataframe: (1278004, 21)
Request time: 2.4991791248321533
Fetch time: 867.7270500659943 <-- This is 14.45mins
Node JS library test results with 2 GB of data:
Start: 2019-06-03 15:30:59
End: 2019-06-03 16:02:49 <-- The difference is just below 31 mins
As far as I can see, Node JS uses pagination to manage the datasets, while Python looks like it brings in the entire dataset and starts working with it.
This may be affecting the performance of the Node JS client library. My recommendation is to take a look at the source code of both clients and to keep an eye on the Google Cloud Blog, where Google sometimes publishes tips and best practices for using their products, for example this article: Testing Cloud Pub/Sub clients to maximize streaming performance.
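If pagination is indeed the difference, one knob worth experimenting with on the Node side is the page size passed to getQueryResults (maxResults is one of its result-paging options). Whether it helps will depend on the library version, so treat this as a sketch to benchmark rather than a fix:
const {BigQuery} = require('@google-cloud/bigquery');
async function fetchWithLargerPages() {
  const bigqueryClient = new BigQuery();
  const [job] = await bigqueryClient.createQueryJob({
    query: 'SELECT * FROM `bigquery-public-data.usa_names.usa_1910_current`',
    location: 'US',
  });
  // Larger pages mean fewer HTTP round trips while the client auto-paginates the result set.
  const [rows] = await job.getQueryResults({maxResults: 100000});
  console.log(`Fetched ${rows.length} rows`);
}
fetchWithLargerPages().catch(console.error);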

NodeJS spawn ffmpeg thumb %t in filename

I'm creating screenshots at 15 fps with this NodeJS code:
var spawn = require('child_process').spawn;
var args = ['-ss', '00:00:07.86', '-i', 'filename.mp4', '-vf', 'fps=15', '/out%d.png'];
var ffmpeg = spawn('ffmpeg', args);
This works fine, but I want the time stamp of each screenshot in the filename.
From the FFmpeg docs:
%t is expanded to a timestamp
But putting ... ,'/out%t.png'] fails and prints:
grep stderr: [image2 @ 0x7f828c802c00] Could not get frame filename
number 2 from pattern '/Users/***/projects/out%t.png' (either set updatefirst or use a pattern like %03d within the filename pattern)
av_interleaved_write_frame(): Invalid argument
...
grep stderr: Conversion failed!
child process exited with code 1
So that doesn't look like the way to go.
How do I get the timestamp for each screenshot?
Thanks
As far as I know %d is the only param that you can use in this case.
You can use %t for report filename when using the -report param, but not for video frames.
Knowing the video length, start time (00:00:07.86), and FPS (15), you can match the frame numbers from the filenames with frame timestamps and rename all the files after ffmpeg finishes extracting frames. It's an ugly workaround for sure, but it's the only thing I can come up with...
Following poohitan's comment, I ended up building a workaround that correlates the generated frame number, start time, and FPS to the actual frame timestamp.
Video.frameToSec = function(imageName, start, fps) {
  // imageName => out209.png
  start = start || 0;
  // Extract the frame number from the file name, e.g. "out209.png" -> "209".
  var thenum = imageName.match(/\d/g).join('');
  var fps_sec = 1 / fps;
  // Take the midpoint of the frame's interval and add the -ss start offset.
  var sec = thenum * fps_sec - fps_sec / 2 + start;
  return {imageName: imageName, sec: sec};
};
Too bad ffmpeg doesn't have a built-in attribute for this, because when creating screenshots the timestamp is important.
Anyway, thanks for the help!
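As a usage sketch, the helper could drive a rename pass once the spawned ffmpeg process from the first snippet exits; the output directory, the out prefix, and the new file-name format are assumptions based on the snippets above:
const fs = require('fs');
const path = require('path');
const outDir = './';  // wherever the outN.png files were written
const start = 7.86;   // the -ss offset in seconds
const fps = 15;       // the fps filter value
// `ffmpeg` is the child process returned by spawn() in the question's snippet.
ffmpeg.on('close', function(code) {
  if (code !== 0) return;
  fs.readdirSync(outDir)
    .filter((name) => /^out\d+\.png$/.test(name))
    .forEach((name) => {
      const sec = Video.frameToSec(name, start, fps).sec;
      fs.renameSync(path.join(outDir, name), path.join(outDir, `out_${sec.toFixed(3)}s.png`));
    });
});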
