Make redis cache expire at certain time everyday - javascript

I am using node-redis. I have a cron job that updates my db, and I have a redis cache that caches the db response.
The problem I'm having is that my cron job runs every day at 12 AM; however, I can only set the redis cache to expire x seconds from now. Is there a way to make the node-redis cache expire every day at exactly 12 AM? Thanks.
Code:
const saveResult = await SET_CACHE_ASYNC('cacheData', response, 'EX', 15);

Yes, you can use the EXPIREAT command (https://redis.io/commands/expireat). If you use the redis package (https://www.npmjs.com/package/redis) as your redis driver, the code will look like this:
const redis = require('redis')
const client = redis.createClient();
// Unix timestamp, in seconds, of the next local midnight
const when = new Date().setHours(24, 0, 0, 0) / 1000;
client.expireat('cacheData', when, function(error){ /* handle error */ });

Recently I had the same problem with an application. My workaround was creating a new TimeSpan based on the time difference between my set time and the desired expiration time. Here is my code:
private TimeSpan GetTimeSpanUntilNextDay(int hour)
    => DateTime.Now.Date.AddDays(1).AddHours(hour) - DateTime.Now;
Using the StackExchange.Redis library, the code for a 6 AM absolute expiration time looks like so:
public async Task<bool> SetEntranceAsync(string key, MYTYPE unserializedObject)
{
    var db = _multiplexer.GetDatabase();
    var jsonValue = JsonConvert.SerializeObject(unserializedObject);
    return await db.StringSetAsync(key, jsonValue, GetTimeSpanUntilNextDay(6));
}
I used C#, but you should be able to do the same trick in any language.
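For completeness, here is a minimal node-redis sketch of the same idea, assuming SET_CACHE_ASYNC is the promisified client.set from the question: compute the number of seconds until the next local midnight and pass it as the EX TTL.
// Sketch: derive a TTL in seconds that lands on the next local midnight.
function secondsUntilNextMidnight() {
  const now = new Date();
  const nextMidnight = new Date(now);
  nextMidnight.setHours(24, 0, 0, 0); // rolls over to 00:00:00 of the next day
  return Math.ceil((nextMidnight.getTime() - now.getTime()) / 1000);
}
// Same SET call as in the question, with a midnight-aligned TTL.
const saveResult = await SET_CACHE_ASYNC('cacheData', response, 'EX', secondsUntilNextMidnight());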

Related

Invalid SAS token being created for Azure API Management

I am trying to create a SAS token to communicate with the Azure API Management REST API using JavaScript (Express.js), but using the token actually gets me a 401 Unauthorized. I am using the following lines of code.
// setting one day expiry time
const expiryDate = new Date(Date.now() + 1000 * 60 * 60 * 24)
const expiryString = expiryDate.toISOString()
const identifier = process.env.AZURE_APIM_IDENTIFIER
const key = process.env.AZURE_APIM_SECRET_KEY ?? ""
const stringToSign = `${identifier}\n${expiryString}`
const signature = CryptoJS.HmacSHA256(stringToSign, key)
const encodedSignature = CryptoJS.enc.Base64.stringify(signature)
// SAS Token
const sasToken = `SharedAccessSignature uid=${identifier}&ex=${expiryString}&sn=${encodedSignature}`
The above snippet returns me something like this:
SharedAccessSignature uid=integration&ex=2021-04-21T10:48:04.402Z&sn=**O8KZAh9zVHw6Dmb03t1xlhTnrmP1B6i+5lbhQWe**= (Some characters hidden for security, but number of characters is real)
Note that there is only one trailing equals sign (=) in the above SAS token, whereas the SAS tokens in all the examples, and those created manually from the API Management Portal, end with two (==).
Is there anything I am doing wrong?
Thanks in advance.
According to the documentation of SAS tokens for Azure APIM, the sample there is C# code.
The difference between the sample and your code is that the C# sample uses HMACSHA512, but your code uses HmacSHA256. So I think you need to use HMAC-SHA512 in your Node.js code as well. You can do it like:
var hash = crypto.createHmac('sha512', key);
You may also need to call hash.update(text) and hash.digest(); please refer to this document about them.
Thank you Hury Shen! I also figured out that we don't need crypto-js for this (which would mean importing an external library). Node has crypto as a native module, and we can use that. The following JavaScript snippet works fine.
import crypto from "crypto"
const identifier = <YOUR_AZURE_APIM_IDENTIFIER>
const secretKey = <YOUR_AZURE_APIM_SECRET_KEY>
// setting token expiry time
const expiryDate = new Date(Date.now() + 1000 * 60 * 60 * 24 * 29)
const expiryString = expiryDate.toISOString().slice(0, -1) + "0000Z"
const dataToSign = `${identifier}\n${expiryString}`
// create signature
const signedData = crypto
  .createHmac("sha512", secretKey)
  .update(dataToSign)
  .digest("base64")
// SAS Token
const accessToken = `SharedAccessSignature uid=${identifier}&ex=${expiryString}&sn=${signedData}`

Send a message from a file every hour

I would like to send a message from a text file to a channel in my server every 24 hours (discord.js).
I think it would involve something like "appendFile". If you know how to do this, I would appreciate it!
The way I am looking to use this is to post a random good morning message every morning.
You can use setInterval; here is what you can do:
var fs = require("fs"); // require fs
var file = fs.readFileSync("./path/to/file", "utf8"); // read the file as text
setInterval(() => {
  channel.send(file); // `channel` is a TextChannel instance
}, 86400000); // 24 hours in milliseconds
What you need is a cron job.
This is the one I use.
Install
npm install cron
var CronJob = require('cron').CronJob;
var job = new CronJob('* * * * * *', function() {
  console.log('You will see this message every second');
}, null, true, 'America/Los_Angeles');
job.start();
You can read about cron patterns here
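Putting the two answers together for the asker's actual use case (a random good morning message each morning), a rough sketch: it assumes a greetings.txt with one message per line, a cron job firing at 8 AM, and a discord.js TextChannel object (`channel`) you have already fetched elsewhere.
var fs = require('fs');
var CronJob = require('cron').CronJob;
var job = new CronJob('0 0 8 * * *', function() { // every day at 8:00 AM
  var lines = fs.readFileSync('./greetings.txt', 'utf8')
    .split('\n')
    .filter(function(line) { return line.trim().length > 0; }); // drop empty lines
  var message = lines[Math.floor(Math.random() * lines.length)]; // pick one at random
  channel.send(message); // `channel`: a discord.js TextChannel fetched elsewhere
}, null, true, 'America/Los_Angeles');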

Google BigQuery Python library is 2x as fast as the Node JS library for downloading results

I've been running a test to compare the speed at which the Google BigQuery Python client library downloads query results with that of the Node JS library. It would seem that, out of the box, the Python library downloads data about twice as fast as the JavaScript Node JS client. Why is that so?
Below I provide the two tests, one in Python and one in JavaScript.
I've selected the usa_names public dataset of BigQuery as an example. The usa_1910_current table in this dataset is about 6 million rows and about 180 MB in size. I have a 200 Mb fibre download link (for context about the last mile). The data, after being packed into a pandas dataframe, is about 1.1 GB (Pandas overhead included).
Python test
from google.cloud import bigquery
import time
import pandas as pd
bq_client = bigquery.Client("mydata-1470162410749")
sql = """SELECT * FROM `bigquery-public-data.usa_names.usa_1910_current`"""
job_config = bigquery.QueryJobConfig()
start = time.time()
#---------------------------------------------------
query_job = bq_client.query(
    sql,
    location='US',
    job_config=job_config)
#---------------------------------------------------
end = time.time()
query_time = end-start
start = time.time()
#---------------------------------------------------
rows = list(query_job.result(timeout=30))
df = pd.DataFrame(data=[list(x.values()) for x in rows], columns=list(rows[0].keys()))
#---------------------------------------------------
end = time.time()
iteration_time = end-start
dataframe_size_mb = df.memory_usage(deep=True).sum() / 1024 ** 2
print("Size of the data in Mb: " + str(dataframe_size_mb) + " Mb")
print("Shape of the dataframe: " + str(df.shape))
print("Request time:", query_time)
print("Fetch time:", iteration_time)
Node JS test
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const moment = require('moment')
async function query() {
  const bigqueryClient = new BigQuery();
  const query = "SELECT * FROM `bigquery-public-data.usa_names.usa_1910_current`";
  const options = {
    query: query,
    location: 'US',
  };
  // Run the query as a job
  const [job] = await bigqueryClient.createQueryJob(options);
  console.log(`Job ${job.id} started.`);
  // Wait for the query to finish
  let startTime = moment.utc()
  console.log('Start: ', startTime.format("YYYY-MM-DD HH:mm:ss"));
  const [rows] = await job.getQueryResults();
  let endTime = moment.utc()
  console.log('End: ', endTime.format("YYYY-MM-DD HH:mm:ss"));
  console.log('Difference (s): ', endTime.diff(startTime) / 1000)
}
query();
Python library test results with 180 MB of data:
Size of the data in Mb: 1172.0694370269775 Mb
Shape of the dataframe: (6028151, 5)
Request time: 3.58441424369812
Fetch time: 388.0966112613678 <-- This is 6.46 mins
Node JS library test results with 180 MB of data:
Start: 2019-06-03 19:11:03
End: 2019-06-03 19:24:12 <- About 13 mins
For further reference, I also ran the tests against a 2 GB table...
Python library test results with 2 GB of data:
Size of the data in Mb: 3397.0339670181274 Mb
Shape of the dataframe: (1278004, 21)
Request time: 2.4991791248321533
Fetch time: 867.7270500659943 <-- This is 14.45mins
Node JS library test results with 2 GB of data:
Start: 2019-06-03 15:30:59
End: 2019-06-03 16:02:49 <-- The difference is just below 31 mins
From what I can see, Node JS uses pagination to manage the result sets, while Python appears to bring in the entire result set and start working with it.
This may be affecting the performance of the Node JS client library. My recommendation is to take a look at the source code of both clients and to regularly read the Google Cloud Blog, where Google sometimes publishes tips and best practices for using their products, for example this article: Testing Cloud Pub/Sub clients to maximize streaming performance.
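To illustrate the pagination point, here is a hedged sketch of paging through the Node JS results manually rather than buffering everything in one call. The autoPaginate/maxResults options and the [rows, nextQuery] return shape follow the @google-cloud/bigquery manual-pagination pattern; treat them as assumptions to verify against your library version.
// Sketch: manual paging through query results (same `job` as in the test above).
async function fetchAllRows(job) {
  let allRows = [];
  let options = { autoPaginate: false, maxResults: 100000 };
  while (options) {
    // With autoPaginate: false, the promise resolves with [rows, nextQuery, response].
    const [rows, nextQuery] = await job.getQueryResults(options);
    allRows = allRows.concat(rows);
    options = nextQuery; // falsy when there are no more pages
  }
  return allRows;
}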

How do I set cryptoJS.sha256 output to binary in Postman pre-request script

I am trying to create an HMAC signature in Postman using a pre-request script. Without going too far into the implementation details,
I have confirmed that my means of generating the signature is messed up. I can see what the expected result should be with a proof-of-concept example, but I'm missing something somewhere and cannot tell if it is in the conversion. I've read in other questions on SO that binary is the default used by CryptoJS internally, and that simply asking for the hash is the equivalent of asking for the digest with the conversions completed for you. Here is the code I'm trying to run in Postman, followed by the working implementation code in Node.js.
var CryptoJS = require("crypto-js");
const d = new Date();
const timestamp = d.getTime();
const postData = {};
postData.nonce = 100; //timestamp * 1000; //nanosecond
postman.setEnvironmentVariable('nonce', postData.nonce);
const secret = CryptoJS.enc.Base64.parse(pm.environment.get("apiSecret"));
const path = pm.globals.get("balanceMethod");
const message = CryptoJS.SHA256( encodeURI(postData.nonce + postData)) ; // ...
const hmacDigest = CryptoJS.HmacSHA512(path + message, secret);
postman.setEnvironmentVariable('API-Signature', CryptoJS.enc.Base64.stringify(hmacDigest));
console.log(CryptoJS.enc.Base64.stringify(hmacDigest));
Does this apply to my situation, in that I'd need to convert my SHA-256 message into a byte array for it to work?
Reference code for the working Node.js implementation:
const getMessageSignature = (path, request, secret, nonce) => {
  const message = qs.stringify(request);
  const secret_buffer = new Buffer(secret, 'base64');
  const hash = new crypto.createHash('sha256');
  const hmac = new crypto.createHmac('sha512', secret_buffer);
  const hash_digest = hash.update(nonce + message).digest('binary');
  const hmac_digest = hmac.update(path + hash_digest, 'binary').digest('base64');
  return hmac_digest;
};
The same reference code for the implementation in Python 3:
req['nonce'] = 100  # int(1000*time.time())
postdata = urllib.parse.urlencode(req)
# Unicode-objects must be encoded before hashing
encoded = (str(req['nonce']) + postdata).encode()
message = urlpath.encode() + hashlib.sha256(encoded).digest()
signature = hmac.new(base64.b64decode(self.secret),
                     message, hashlib.sha512)
sigdigest = base64.b64encode(signature.digest())
The only post data I'm sending is the nonce at this time, and I've purposely set it to 100 to be able to reproduce the result while fixing the generated signature. The output seems close, but it doesn't match. The Python and Node.js versions do match the expected results and work properly.
Check out the answer in this thread. It helped me with my problem and may be what is happening in your case as well. All that is necessary is to break the input of the HMAC into two parts.
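Applied to the Postman snippet above, that two-part idea would look roughly like this: keep the SHA-256 digest as a CryptoJS WordArray (raw bytes) and concatenate it onto the UTF-8 bytes of the path, instead of letting path + message coerce the digest to a hex string. This is a sketch of the approach under those assumptions, not a verified drop-in.
// Build the HMAC input from two parts: path bytes + raw SHA-256 digest bytes.
const body = 'nonce=' + postData.nonce;                       // url-encoded post data, as in the reference code
const hash = CryptoJS.SHA256(postData.nonce + body);          // WordArray holding the raw digest
const hmacInput = CryptoJS.enc.Utf8.parse(path).concat(hash); // concatenate bytes, not strings
const hmacDigest = CryptoJS.HmacSHA512(hmacInput, secret);
console.log(CryptoJS.enc.Base64.stringify(hmacDigest));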

Node js / MongoDB replica set array in javascript

Warning: I'm a novice programmer (more of a sysadmin). We were given a Node.js application that uses MongoDB. From what I can tell, the mongo.js file uses the mongojs and monq modules. It was set up with only one MongoDB instance, and I'm trying to set up a new HA environment that uses a replica set. Here is what they provided:
var mongojs = require('mongojs');
var monq = require('monq');
var dbName = 'exampledb';
var db = mongojs(dbName, ['collections']);
var client = monq('mongodb://127.0.0.1/exampledb', { w: 1 });
exports.db = db;
exports.ObjectId = mongojs.ObjectId;
exports.monqClient = client;
Now for a replica set, according to this article, I need to make the following change:
var db = mongojs('replset0.com,replset1.com,replset2.com/mydb?slaveOk=true', ['collections']);
I'm not entirely sure what I need to do for the line after that. I'm guessing I would have to create an array containing the host name and port number for each member of the replica set (the setup is primary, secondary, arbiter), such as:
var replSet = [];
replSet[0] = "server0:port0";
replSet[1] = "server1:port1";
replSet[2] = "server2:port2";
How would I go about detecting which node is the primary? Also, if the primary were to fail, would I have to restart the Node.js application (using forever)?
I found the answer: it's MongoDB's connection string URI format:
http://docs.mongodb.org/manual/reference/connection-string/
It should be something like:
var client = monq('mongodb://server0:port0,server1:port1,server2:port2/[dbname]?replicaSet=[replicaSet Name]');
First question: As long as you give it all of the members in the connection string, the mongojs driver should be able to figure out which one is primary. No need to figure it out yourself.
Second question: No, you don't have to restart the node app. The other members in the set will elect a new primary, although it takes time for mongo to detect failure and run the election.
For more information, see the mongodb docs on replica sets.
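Putting the two answers together, the original mongo.js might end up looking roughly like this; a sketch with placeholder hosts, ports, and replica set name, assuming mongojs also accepts a full connection string (it does in recent versions, but verify against yours):
var mongojs = require('mongojs');
var monq = require('monq');
// Placeholders: substitute your own hosts, ports, and replica set name.
var uri = 'mongodb://server0:port0,server1:port1,server2:port2/exampledb?replicaSet=rs0';
var db = mongojs(uri, ['collections']);
var client = monq(uri, { w: 1 });
exports.db = db;
exports.ObjectId = mongojs.ObjectId;
exports.monqClient = client;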
