I want to sync with my server and found some code (which may be a starting point for my own):
var serverDateTimeOffset;
function getServerDateTime() {
    if (serverDateTimeOffset === undefined || serverDateTimeOffset === null) {
        serverDateTimeOffset = 0; // while syncing with the server, do not start a new sync, but return local time
        var clientTimestamp = (new Date()).valueOf();
        $.getJSON('/getDateTime.ashx?ct=' + clientTimestamp, function (data) {
            var nowTimeStamp = (new Date()).valueOf();
            var serverTimestamp = data.serverTimestamp;
            var serverClientRequestDiffTime = data.diff;
            var serverClientResponseDiffTime = nowTimeStamp - serverTimestamp;
            var responseTime = (serverClientRequestDiffTime - nowTimeStamp + clientTimestamp - serverClientResponseDiffTime) / 2;
            var syncedServerTime = new Date((new Date()).valueOf() - (serverClientResponseDiffTime - responseTime) / 2);
            serverDateTimeOffset = (serverClientResponseDiffTime - responseTime) / 2;
            console.log("synced time with server:", syncedServerTime, serverDateTimeOffset);
        });
    }
    return new Date((new Date()).valueOf() - serverDateTimeOffset);
}
But I do not understand the meaning of the variable responseTime (or the logical reasoning behind it). What does it represent? Does anyone have suggestions? It appears to combine some relative times with absolute times.
An example of my reasoning:
10 start at client ==+5 (req in transit)==> 15 (arrival at server) ==+5 (server processing)==> 20 (before sending) ==+7 (resp in transit)==> 27 (back at client). The calculation for responseTime would give (5 - 27 + 10 - 7)/2 = -19/2 = -9.5.
When the receive time is the same as the send time at the server (server processing would be 0), then responseTime = (5 - 22 + 10 - 7)/2, which results in -14/2 = -7.
And what about the variable serverDateTimeOffset?
Do I miss something? I found a description of NTP which may be helpful:
Synchronizing a client to a network server consists of several packet exchanges where each exchange is a pair of request and reply. When sending out a request, the client stores its own time (the originate timestamp) into the packet being sent. When a server receives such a packet, it will in turn store its own time (the receive timestamp) into the packet, and the packet will be returned after putting a transmit timestamp into the packet. When receiving the reply, the receiver will once more log its own receipt time to estimate the travelling time of the packet. The travelling time (delay) is estimated to be half of "the total delay minus remote processing time", assuming symmetrical delays.
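In those NTP terms, with t0 = client transmit time, t1 = server receive time, t2 = server transmit time, and t3 = client receive time, the usual estimates look like this (a minimal sketch for comparison, not the code above):

// NTP-style estimates (sketch). t0/t3 are client clock readings (ms),
// t1/t2 are server clock readings echoed back in the reply.
function ntpEstimate(t0, t1, t2, t3) {
    var delay = (t3 - t0) - (t2 - t1);        // total delay minus remote processing time
    var offset = ((t1 - t0) + (t2 - t3)) / 2; // server clock minus client clock
    return { delay: delay, offset: offset };
}
// With the transit example above (t0=10, t1=15, t2=20, t3=27):
// delay = 17 - 5 = 12, offset = (5 - 7) / 2 = -1
console.log(ntpEstimate(10, 15, 20, 27));

(The -1 here is the error introduced by the asymmetric 5/7 transit times, since both clocks in that example are on the same timeline.) Comparing the snippet's responseTime against these two quantities may be the quickest way to pin down what, if anything, it is supposed to represent.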
Edit:
The function is polled regularly (it is part of the update() function, so the first time it may indeed be 0)
$(document).ready(function () {
    ...
    setTimeout(function () { update() }, 2000);
    ...
});
Some tests resulted in the following examples:
responseTime= (serverClientRequestDiffTime( -578 )-nowTimeStamp( 1621632913398 )+clientTimestamp( 1621632913081 )-serverClientResponseDiffTime( 895 ))/2= -895
serverDateTimeOffset=(serverClientResponseDiffTime( 895 )-responseTime( -895 ))/2= 895
added req delay via proxy:
responseTime= (serverClientRequestDiffTime( 2456 )-nowTimeStamp( 1621632992161 )+clientTimestamp( 1621632988800 )-serverClientResponseDiffTime( 905 ))/2= -905
serverDateTimeOffset=(serverClientResponseDiffTime( 905 )-responseTime( -905 ))/2= 905
I spent the last 2 days trying to find a way to modify a virtually unlimited amount of data in my cloud function.
I am aware it's a topic already heavily discussed, but despite trying every single solution I found, I always end up with a similar scenario, so I guess it's not only about the way I treat my batches.
The goal here is to be able to scale up our activity and anticipate the growth of users so I'm running some stress tests.
I'm skipping the entire app idea because it would take a while, but here is the problem:
1) I am feeding an array with the list of the desired actions for the batch:
tempFeeObject["star"] = FieldValue.increment(restartFee); // queue the increment for this doc
let docRefFees = db.collection(collectedFeesDb).doc(element); // log action
finalArray.push([docRefFees, tempFeeObject, "set"]); // [docRef, data, operation]
2) I try to resolve all the final arrays. In this case, for the stress test, we are talking about 6006 documents.
Here is the code for it:
try {
    const batches = []
    finalArray.forEach((data, i) => {
        // start a new batch every 400 writes (Firestore allows up to 500 per batch)
        if (i % 400 === 0) {
            let newBatch = db.batch();
            batches.push(newBatch)
        }
        const batch = batches[batches.length - 1]
        let tempCommit = {...data[1]}
        Object.keys(tempCommit).forEach(key => {
            if (data[1][key] && data[1][key].seconds && !Number.isNaN(data[1][key].seconds)) {
                let sec = 0
                let nano = 0
                if (Object.keys(data[1][key])[1].includes("nano")) {
                    sec = parseInt(data[1][key][Object.keys(data[1][key])[0]], 10)
                    nano = parseInt(data[1][key][Object.keys(data[1][key])[1]], 10)
                } else {
                    sec = parseInt(data[1][key][Object.keys(data[1][key])[1]], 10)
                    nano = parseInt(data[1][key][Object.keys(data[1][key])[0]], 10)
                }
                let milliseconds = (sec * 1000) + (nano / 1000000)
                tempCommit[key] = admin.firestore.Timestamp.fromMillis(milliseconds)
            }
        });
        if (data[2] === "set") {
            batch.set(data[0], tempCommit, {merge: true});
        } else if (data[2] === "delete") {
            batch.delete(data[0]);
        }
    })
    await Promise.all(batches.map(batch => batch.commit()))
    console.log(`${finalArray.length} documents updated`)
    return {
        result: "success"
    }
}
catch (error) {
    console.log(`***ERROR: ${error}`)
    return {
        result: "error"
    }
}
The middle part might seem confusing, but it is there to recreate the timestamps (I got errors when, instead of a timestamp, I had its raw object of values; and I have to test that object to find the seconds and nanoseconds, because I also had cases where the seconds showed up in either the first or the second position). So my script checks each key of the committed document and recreates the timestamp if it finds one.
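For illustration, here is a sketch of that revival step with the field names written out. The seconds/nanoseconds (or _seconds/_nanoseconds) key names are an assumption on my part; the position test above exists precisely because the real payloads were less predictable:

// Sketch: revive one serialized Firestore timestamp value.
// Assumes the common seconds/nanoseconds (or _seconds/_nanoseconds)
// shape -- an assumption, see the position checks above.
function reviveTimestamp(value) {
    if (!value || typeof value !== 'object') return value;
    var sec = value.seconds !== undefined ? value.seconds : value._seconds;
    var nano = value.nanoseconds !== undefined ? value.nanoseconds : value._nanoseconds;
    if (sec === undefined || nano === undefined) return value;
    return admin.firestore.Timestamp.fromMillis(Number(sec) * 1000 + Number(nano) / 1e6);
}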
3) Results:
The results are my main problem:
On the one hand, it is working, since my database is being updated with the correct values.
But it is not happening the way it should.
Here are the function logs:
4:04:01.790 PM
autoStationEpochMakerTestNet
Function execution started <=== Function START
4:04:02.576 PM
autoStationEpochMakerTestNet
Auto epoch function treating : 1 actions
4:04:02.577 PM
autoStationEpochMakerTestNet
atation : Arcturian Space Station has 1 actions
4:04:02.582 PM
autoStationEpochMakerTestNet
Function execution took 791 ms. Finished with status: ok <== function END
4:04:02.682 PM
autoStationEpochMakerTestNet
list used : 1 <== But the functions called by the previous one are still printing
4:04:04.926 PM
autoStationEpochMakerTestNet
***ERROR: Error: 4 DEADLINE_EXCEEDED: Deadline exceeded
4:04:05.726 PM
autoStationEpochMakerTestNet
result : { result: 'error' }
4:04:05.726 PM
autoStationEpochMakerTestNet
Arcturian Space Station function epoch generator failed. testmode = true
4:07:02.107 PM
autoStationEpochMakerTestNet
Function execution started <== Second function Trigger START
4:07:02.156 PM
autoStationEpochMakerTestNet
6006 documents updated <== Receiving logs from what i assume is the previous function trigger
4:07:02.158 PM
autoStationEpochMakerTestNet
result : { result: 'success' }
4:07:02.210 PM
autoStationEpochMakerTestNet
Auto epoch function treating : 0 actions <== As expected there is no more action to treat since it was actually processed on the previous call
4:07:02.212 PM
autoStationEpochMakerTestNet
Function execution took 104 ms. Finished with status: ok
So my comprehension of what is happening is as follows :
The function triggers and builds the batches, but somehow finishes instantly, without waiting for the batch commits or returning anything.
Meanwhile there are still logs coming from the function, including this one:
***ERROR: Error: 4 DEADLINE_EXCEEDED: Deadline exceeded
So I feel my batch array isn't working as expected.
Then, on the following function triggers (scheduled function, every 3 min), all the functions seem to finish executing properly.
Once again, my database is being updated as it should be.
A few elements worth noticing:
Some of the data being committed are timestamps, which I think count as more than 1 action?
In this case, one document has more than 2000 fields updated (simple int values)
await Promise.all() is new to me, so it's also one of my leads (see the sketch after this list).
I set the function RAM to 2 GB and the timeout to 540 sec.
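On the Promise.all() lead: committing every batch in parallel puts thousands of writes in flight at once, which is a plausible source of the DEADLINE_EXCEEDED. As a sketch (using the same batches array as built above), the sequential alternative trades speed for a bounded number of concurrent writes:

// Sketch: commit the batches one at a time instead of in parallel.
// Slower overall, but only one batch of writes is in flight at a time.
for (const batch of batches) {
    await batch.commit();
}
console.log(`${finalArray.length} documents updated`);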
Also, here is the code for the main function (the scheduled one, which dispatches actions and waits for the answer):
Object.keys(finalObject).forEach(async (stationName) => {
    if (finalObject[stationName].finalList.length > 0) {
        let epochDbUsed = stationsEpochDb[stationName]
        let resultGenerator = await generateEpochsForStation(finalObject[stationName].finalList, finalObject[stationName].actionsToRemove, actionsDb, epochDbUsed, testMode)
        if (resultGenerator.result === "success") {
            await db.collection(logsDb).add({
                "user": "cloudFunction",
                "type": `createdEpochs${stationName}`,
                "station": stationName,
                "testMode": testMode,
                "actionQty": actionCount,
                "note": `New epoch has been created on ${stationName}. testmode : ${testMode}`,
                "date": FieldValue.serverTimestamp()
            });
        } else {
            console.log(`${stationName} function epoch generator failed. testmode = ${testMode}`)
            await db.collection(logsDb).add({
                "user": "cloudFunction",
                "type": `createdEpochs${stationName}Failure`,
                "station": stationName,
                "testMode": testMode,
                "actionQty": actionCount,
                "note": `New epoch has been created on ${stationName}. testmode : ${testMode}, FAILED`,
                "date": FieldValue.serverTimestamp()
            });
        }
    }
});
return null
I am not sure if the return null at the end could mess things up; since it's a scheduled function, I'm not sure what I'm supposed to return (I actually never really thought about that, and realizing it as I write, I'm going to look it up after posting this :) ).
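One detail worth knowing here: Array.prototype.forEach does not await an async callback, so the scheduled function can reach return null (and be reported as finished) while the per-station awaits are still running, which would explain the out-of-order logs above. A sketch of the same loop with for...of, which keeps the awaits on the function's own path:

// Sketch: for...of (unlike forEach with an async callback) makes the
// enclosing async function genuinely wait for each station.
for (const stationName of Object.keys(finalObject)) {
    if (finalObject[stationName].finalList.length === 0) continue;
    const epochDbUsed = stationsEpochDb[stationName];
    const resultGenerator = await generateEpochsForStation(
        finalObject[stationName].finalList,
        finalObject[stationName].actionsToRemove,
        actionsDb, epochDbUsed, testMode);
    // ... write the success/failure log document exactly as above ...
}
return null;

Returning null itself is fine for a scheduled function; what matters is that all the asynchronous work has actually been awaited (or returned as a promise) before the function ends.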
Well, I think this is enough data to understand the situation, feel free to ask for any more details of course.
Thanks a lot to whoever will take the time to help :)
Edit 1:
I think I might have found a solution:
const _datawait = [];
finalArray.forEach(data => {
    let docRef = data[0];
    _datawait.push(docRef.set(data[1], {merge: true}));
});
const _dataloaded = await Promise.all(_datawait);
console.log(`${actionCount} documents updated`);
// update date from last epoch
await updateLastEpochDateEnd(stationEpochDb, lastEpoch.epochId);
// add epoch
await db.collection(stationEpochDb).add(finalEpoch);
return _dataloaded;
Doing so waits for all the requests stored in finalArray ([docRef, data]) to resolve their promises before returning anything.
It took me a while to get there, but it feels good to see the logs in order:
8:46:59.311 PM
stationRestarter
Function execution started
8:46:59.355 PM
stationRestarter
docID : PO3UstEZt1lV1IHwxBIR
8:47:10.667 PM
stationRestarter
6004 documents updated
8:47:11.201 PM
stationRestarter
Function execution took 11890 ms. Finished with status: ok
I'm done for today, I will update tomorrow to see if this is a final solution :)
I have set up an HTTP server to send data at intervals of 20 seconds. The data starts at 101 and this number is incremented every time, so the sequence of numbers will be 101, 102, 103, etc.
I also append to the data, after a ; delimiter, the timestamp that the server sends the data.
I think I have some bug in my javascript code, because I am observing this behaviour:
the HTTP server sends data "105" at 12:28:52.654
in my web page, I see data item "105" at 12:29:12.690, i.e. 20 seconds later. 20 seconds is the interval at which I send the data. So it seems like the EventSource onmessage function is being called but is processing the previous data item, "104" in this case.
The web page code:
<!DOCTYPE HTML>
<html>
<head>
<script type="text/javascript">
    let source = new EventSource('/startmonitoring');

    function startCallMonitoring() {
        source.onmessage = function(event) {
            console.log(event.data);
            addCall(event.data);
        };
        source.addEventListener('error', function(e) {
            // the readyState lives on the EventSource itself (e.target)
            if (e.target.readyState == EventSource.CLOSED) {
                console.log("closed");
            }
        }, false);
    }

    function stopCallMonitoring() {
        source.close();
    }

    function gettime() {
        var currentDate = new Date();
        var hour = currentDate.getHours();
        var minute = currentDate.getMinutes();
        var second = currentDate.getSeconds();
        var millisecond = currentDate.getMilliseconds();
        return pad(hour) + ":" + pad(minute) + ":" + pad(second) + "." + millisecond;
    }

    function getdate() {
        var currentDate = new Date();
        var date = currentDate.getDate();
        var month = currentDate.getMonth(); // Be careful! January is 0 not 1
        var year = currentDate.getFullYear();
        return pad(date) + "/" + pad(month + 1) + "/" + pad(year);
    }

    function pad(n) {
        return n < 10 ? '0' + n : n;
    }

    function addCall(callerid) {
        // insert new row.
        var tableref = document.getElementById('CallsTable').getElementsByTagName('tbody')[0];
        var newrow = tableref.insertRow(0);
        var datecell = newrow.insertCell(0);
        var timecell = newrow.insertCell(1);
        var calleridcell = newrow.insertCell(2);
        var customerlinkcell = newrow.insertCell(3);
        datecell.innerHTML = getdate();
        timecell.innerHTML = gettime();
        calleridcell.innerHTML = callerid;
        customerlinkcell.innerHTML = "customerlink";
        console.log("added " + callerid + " at " + gettime());
    }
</script>
</head>
<body>
<button onclick="startCallMonitoring()">Start Call Monitoring</button>
<button onclick="stopCallMonitoring()">Stop Call Monitoring</button>
<table id="CallsTable">
    <thead>
        <tr>
            <th>Date</th>
            <th>Time added to table</th>
            <th>CallerID</th>
            <th>link</th>
        </tr>
    </thead>
    <tbody>
        <tr>
        </tr>
    </tbody>
</table>
</body>
</html>
Screenshot of event stream in Chrome developer tools.
Why this behaviour? How can I fix it?
Additional information regarding the server side.
I wrote the HTTP server myself, so that could be a cause. Without sending the whole code for the server, which is quite large, here is the code, using some helper functions, that creates an HTTP response message.
This timerfunc is called every 20 seconds.
Basically, when I see in the server console the output:
timerfunc() - sending: HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 29
Cache-Control: no-cache
Content-Type: text/event-stream
Access-Control-Allow-Origin: *
id: 7
data: 106;12:29:12.689
to 192
then in the web browser, data item 105 is populated.
void http_server::timerfunc() {
    http_response rs;
    rs.status = 200;
    rs.set_version(1, 1);
    rs.add_header("Connection", "keep-alive");
    rs.add_header("Content-Type", "text/event-stream"); // this is REQUIRED
    //header('Cache-Control: no-cache');
    rs.add_header("Cache-Control", "no-cache"); // not sure if required, investigate what it does
    rs.add_header("Access-Control-Allow-Origin", "*"); // think because for node.js demo was on different network - don't think need this
    //rs.add_header("Transfer-Encoding", "chunked"); // doesn't work if you don't do chunking - investigate - but don't need

    static unsigned number = 100;
    std::string callerid = std::to_string(number);
    char timebuf[50] = {};
    get_timestamp(timebuf);
    rs.set_body("id: 7\ndata: " + callerid + ";" + timebuf + "\n");
    rs.set_content_length_to_body_length();

    unsigned retcode = 0;
    const size_t len = rs.get_content_length();
    for (auto client : clients) {
        std::string s = codec.make_http_response_message(rs);
        retcode = send(client, s.c_str(), s.length());
        std::cout << "timerfunc() - sending: " << s << " to " << client << std::endl;
    }
    number++;
    if (number == 999)
        number = 100;
}
I thought it would be useful to publish how I eventually fixed the problem.
It was nothing to do with the client. I had to make the following changes to the web server sending the events to make it fully work:
Change the header of responses carrying an event payload to Transfer-Encoding: chunked.
The payload of the response must use the chunked format: the chunk length in hex, then \r\n, then the chunk data, then \r\n, with a final 0\r\n to terminate.
For Server-Sent Events, the data to send must be prepended with "data: ".
So if you wanted to send "My lovely streamed data part 1" as a stream message, the HTTP message would look like this:
HTTP/1.1 200 OK\r\n
Connection: Keep-Alive\r\n
Date: Wed, 15 Jan 2020 18:40:24 GMT\r\n
Content-Type: text/event-stream\r\n
Transfer-Encoding: chunked\r\n
Cache-Control: no-cache\r\n
\r\n
24\r\n
data: My lovely streamed data part 1\r\n
0\r\n
Not sure if the Cache-Control header is required.
The web server can send multiple segments of text, each prepended with its length, then \r\n, then the string, then \r\n.
Read up on Transfer-Encoding here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Transfer-Encoding
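For reference, here is what the framing looks like in a minimal Server-Sent Events endpoint in Node.js (a sketch, not the C++ server above; Node applies chunked transfer encoding automatically). The important detail is the blank line ending each event: per the SSE spec, EventSource dispatches an event only after it sees a blank line, so a message ending in a single newline sits buffered until the next message arrives, which matches the one-message lag described in the question:

// Minimal SSE endpoint (sketch). The crucial detail is the "\n\n"
// terminating each event: EventSource only dispatches an event once
// it has seen a blank line.
const http = require('http');

http.createServer((req, res) => {
    res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
    });
    let number = 100;
    const timer = setInterval(() => {
        res.write('id: 7\n');
        res.write('data: ' + (number++) + ';' + new Date().toISOString() + '\n\n');
    }, 20000);
    req.on('close', () => clearInterval(timer));
}).listen(8080);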
I am creating an application using Cordova which requires interpreting data from a Bluetooth HR monitor (one capable of recording raw RR intervals, such as the Polar H7). I am using the cordova-plugin-ble-central plugin.
I am having a difficult time making sense of the data received from the monitor, despite trawling the internet for answers and reading the Bluetooth Heart Rate Service Characteristic specification numerous times.
Here is my function which runs each time data is received:
onData: function(buffer) {
    console.log(buffer);
    // var data8 = new Uint8Array(buffer);
    var data16 = new Uint16Array(buffer);
    var rrIntervals = data16.slice(1); // everything after the first 16-bit word
    for (var i = 0; i < rrIntervals.length; i++) {
        var rrInterval = rrIntervals[i];
        heartRate.addReading(rrInterval); // process RR interval elsewhere
    }
},
When I log the data received in buffer, the following is output to the console:
console output
I know how to extract the RR intervals (highlighted in yellow), but I don't really understand what the other values represent, which I need to know, as users might be connecting with other monitors which don't transmit RR intervals, etc.
A quick plain-English explanation of what the received data means and how to parse it would be much appreciated. For example, what number constitutes the flags field, and how do I convert it to binary to extract the sub-fields (i.e. to check if RR intervals are present; I know this is determined by the 5th bit in the flags field)?
The plugin also states that 'Raw data is passed from native code to the success callback as an ArrayBuffer', but I don't know how to check the flags to determine whether the data from a specific HR monitor is in 8 or 16 bit format. Below is another console log from when I create both Uint8 and Uint16 arrays from the data received. Again, I have highlighted the heart rate and RR intervals, but I need to know what the other values represent and how to parse them correctly.
console log with Uint8 and Uint16 output
The whole code is below:
var heartRateSpec = {
    service: '180d',
    measurement: '2a37'
};

var app = {
    initialize: function() {
        this.bindEvents();
    },
    bindEvents: function() {
        document.addEventListener('deviceready', this.onDeviceReady, false);
    },
    onDeviceReady: function() {
        app.scan();
    },
    scan: function() {
        app.status("Scanning for Heart Rate Monitor");
        var foundHeartRateMonitor = false;

        function onScan(peripheral) {
            // this is demo code, assume there is only one heart rate monitor
            console.log("Found " + JSON.stringify(peripheral));
            foundHeartRateMonitor = true;
            ble.connect(peripheral.id, app.onConnect, app.onDisconnect);
        }

        function scanFailure(reason) {
            alert("BLE Scan Failed");
        }

        ble.scan([heartRateSpec.service], 5, onScan, scanFailure);
        setTimeout(function() {
            if (!foundHeartRateMonitor) {
                app.status("Did not find a heart rate monitor.");
            }
        }, 5000);
    },
    onConnect: function(peripheral) {
        app.status("Connected to " + peripheral.id);
        ble.startNotification(peripheral.id, heartRateSpec.service, heartRateSpec.measurement, app.onData, app.onError);
    },
    onDisconnect: function(reason) {
        alert("Disconnected " + reason);
        beatsPerMinute.innerHTML = "...";
        app.status("Disconnected");
    },
    onData: function(buffer) {
        var data = new Uint16Array(buffer);
        if (heartRate.hasStarted() == false) {
            heartRate.beginReading(Date.now());
        } else {
            var rrIntervals = data.slice(1);
            for (var i = 0; i < rrIntervals.length; i++) {
                var rrInterval = rrIntervals[i];
                heartRate.addReading(rrInterval);
            }
        }
    },
    onError: function(reason) {
        alert("There was an error " + reason);
    },
    status: function(message) {
        console.log(message);
        statusDiv.innerHTML = message;
    }
};

app.initialize();
app.initialize();
Many thanks in advance for any help or advice.
UPDATE: for a more in-depth explanation, check out this post I wrote on the subject.
I've figured it out - here's a quick explanation for anyone who encounters a similar problem:
The data passed into onData(buffer) is just binary data, so whether we convert it into a Uint8Array or a Uint16Array, it still represents the same underlying bytes. Of course, the integers in the Uint16Array will likely be larger, as each comprises 16 bits rather than 8.
The flags field is always represented by the first byte, so we can get it by converting the data (passed in as buffer) to a Uint8Array and accessing the first element of this array, i.e. the element with index 0.
We can then check the various bit fields using bitwise operations. For example, the Bluetooth Heart Rate Service Characteristic specification tells us that the fifth bit represents whether the reading contains RR intervals (1) or doesn't contain any (0).
Below we can see that the fifth bit is the number 16 in binary:
128 64 32 16 8 4 2 1
0 0 0 1 0 0 0 0
Therefore the operation 16 & flag (where flag is the byte containing the flags field) will return 16 (which coerces to true) if the reading contains RR intervals and 0 (which coerces to false) if it doesn't.
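Putting all of that together, here is a sketch of a fuller parser for the Heart Rate Measurement characteristic. The field layout follows my reading of the spec (flags byte, then uint8 or uint16 heart rate, then optional energy expended, then optional RR intervals); the names are my own:

// Sketch: parse a Heart Rate Measurement ArrayBuffer.
// DataView gives explicit control over endianness (BLE multi-byte
// fields are little-endian).
function parseHeartRateMeasurement(buffer) {
    var view = new DataView(buffer);
    var flags = view.getUint8(0);
    var hrIs16Bit = (flags & 0x01) !== 0;     // bit 0: HR value format
    var energyPresent = (flags & 0x08) !== 0; // bit 3: energy expended present
    var rrPresent = (flags & 0x10) !== 0;     // bit 4 (the "16" bit): RR intervals present
    var offset = 1;
    var heartRate;
    if (hrIs16Bit) {
        heartRate = view.getUint16(offset, true);
        offset += 2;
    } else {
        heartRate = view.getUint8(offset);
        offset += 1;
    }
    if (energyPresent) offset += 2; // skip the uint16 energy expended field
    var rrIntervals = [];
    if (rrPresent) {
        for (; offset + 1 < view.byteLength; offset += 2) {
            // RR intervals are uint16 values in units of 1/1024 s; convert to ms
            rrIntervals.push(view.getUint16(offset, true) * 1000 / 1024);
        }
    }
    return { heartRate: heartRate, rrIntervals: rrIntervals };
}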
I'm trying to query posts from Instagram by providing the hashtag and the time range (since and until dates).
I use the recent tags endpoint.
https://api.instagram.com/v1/tags/{tag-name}/media/recent?access_token=ACCESS-TOKEN
My code is written in Node.js using the instagram-node library (see the inline comments):
// Require the config file
var config = require('../config.js');
// Require and initialize the instagram instance
var ig = require('instagram-node').instagram();
// Set the access token
ig.use({ access_token: config.instagram.access_token });

// We export this function for public use
// hashtag: the hashtag to search for
// minDate: the since date
// maxDate: the until date
// callback: the callback function (err, posts)
module.exports = function (hashtag, minDate, maxDate, callback) {
    // Create the posts array (will be concatenated with new posts from pagination responses)
    var posts = [];
    // Convert the date objects into timestamps (seconds)
    var sinceTime = Math.floor(minDate.getTime() / 1000);
    var untilTime = Math.floor(maxDate.getTime() / 1000);
    // Fetch the IG posts page by page
    ig.tag_media_recent(hashtag, { count: 50 }, function fetchPosts(err, medias, pagination, remaining, limit) {
        // Handle error
        if (err) {
            return callback(err);
        }
        // Manually filter by time
        var filteredByTime = medias.filter(function (currentPost) {
            // Convert the created_time string into a number (seconds timestamp)
            var createdTime = +currentPost.created_time;
            // Check if it's after since date and before until date
            return createdTime >= sinceTime && createdTime <= untilTime;
        });
        // Get the last post on this page
        var lastPost = medias[medias.length - 1] || {};
        // ...and its timestamp
        var lastPostTimeStamp = +(lastPost.created_time || -1);
        // ...and its timestamp date object
        var lastPostDate = new Date(lastPostTimeStamp * 1000);
        // Concat the new [filtered] posts to the big array
        posts = posts.concat(filteredByTime);
        // Show some output
        console.log('found ' + filteredByTime.length + ' new items total: ' + posts.length, lastPostDate);
        // Check if the last post is BEFORE the until date and there are no new posts in the provided range
        if (filteredByTime.length === 0 && lastPostTimeStamp <= untilTime) {
            // ...if so, we can callback!
            return callback(null, posts);
        }
        // Navigate to the next page
        pagination.next(fetchPosts);
    });
};
This will fetch the posts from the most recent to the least recent and manually filter on created_time.
This works, but it's very, very inefficient, because if we want, for example, to get the posts from one year ago, we have to iterate through the pages until we reach that time, and this uses a lot of requests (probably more than 5k/hour, which is the rate limit).
Is there a better way to make this query? How to get the Instagram posts by providing the hashtag and the time range?
I think this is the basic idea you're looking for. I'm not overly familiar with Node.js, so this is all in plain JavaScript. You'll have to modify it to suit your needs and probably make a function out of it.
The idea is to convert an Instagram id (1116307519311125603 in this example) to a date and vice versa, to enable you to quickly grab a specific point in time rather than backtrack through all results until finding your desired timestamp. The portion of the id after the underscore '_' should be trimmed off, as that refers, in some way, to the user IIRC. There are 4 functions in the example that I hope will help you out.
Happy hacking!
//static
var epoch_hour = 3600,
    epoch_day = 86400,
    epoch_month = 2592000,
    epoch_year = 31557600;

//you'll need to set this part up/integrate it with your code
var dataId = 1116307519311125603,
    range = 2 * epoch_hour,
    count = 1,
    tagName = 'cars',
    access = prompt('Enter access token:'),
    baseUrl = 'https://api.instagram.com/v1/tags/' +
        tagName + '/media/recent?access_token=' + access;

//date && id utilities
function idToEpoch(n) {
    return Math.round((n / 1000000000000 + 11024476.5839159095) / 0.008388608);
}

function epochToId(n) {
    return Math.round((n * 0.008388608 - 11024476.5839159095) * 1000000000000);
}

function newDateFromEpoch(n) {
    var d = new Date(0);
    d.setUTCSeconds(n);
    return d;
}

function dateToEpoch(d) {
    return (d.getTime() - d.getMilliseconds()) / 1000;
}

//start with your id and range; do the figuring
var epoch_time = idToEpoch(dataId),
    minimumId = epochToId(epoch_time),
    maximumId = epochToId(epoch_time + range),
    minDate = newDateFromEpoch(epoch_time),
    maxDate = newDateFromEpoch(epoch_time + range);

var newUrl = baseUrl +
    '&count=' + count +
    '&min_tag_id=' + minimumId +
    '&max_tag_id=' + maximumId;

//used for testing
/*alert('Start: ' + minDate + ' (' + epoch_time +
')\nEnd: ' + maxDate + ' (' + (epoch_time +
range) + ')');
window.location = newUrl;*/
To support this excellent answer, an Instagram ID is generated via this PL/pgSQL function:
CREATE OR REPLACE FUNCTION insta5.next_id(OUT result bigint) AS $$
DECLARE
    our_epoch bigint := 1314220021721;
    seq_id bigint;
    now_millis bigint;
    shard_id int := 5;
BEGIN
    SELECT nextval('insta5.table_id_seq') % 1024 INTO seq_id;
    SELECT FLOOR(EXTRACT(EPOCH FROM clock_timestamp()) * 1000) INTO now_millis;
    result := (now_millis - our_epoch) << 23;
    result := result | (shard_id << 10);
    result := result | (seq_id);
END;
$$ LANGUAGE PLPGSQL;
from Instagram's blog
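To tie the two answers together: the function packs (milliseconds since a custom epoch) << 23, then the shard and sequence ids in the low bits, which is exactly where the magic constants in the JavaScript above come from. A sketch of the reverse trip using that layout (BigInt because these ids exceed Number's 53-bit integer precision):

// Sketch: recover a Date from an Instagram id using the bit layout above.
const OUR_EPOCH_MS = 1314220021721n; // our_epoch from the PL/pgSQL
function igIdToDate(id) {
    const ms = (BigInt(id) >> 23n) + OUR_EPOCH_MS;
    return new Date(Number(ms));
}
console.log(igIdToDate('1116307519311125603')); // the example id used above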
Despite a similar process for getting posts, the Data365.co Instagram API, which I currently work at, seems to be more suitable and efficient. It does not have a limit of 5,000 posts per hour, and you can specify the period of time for which you need posts in the request itself. Also, the billing takes into account only posts from the indicated period, so you won't have to pay for data you don't need.
Below is a task example to download posts with the hashtag bitcoins for the period from January 1, 2021, to January 10, 2021.
POST request: https://api.data365.co/v1.1/instagram/tag/bitcoins/update?max_posts_count=1000&from_date=2021-01-01&to_date=2021-01-10&access_token=TOKEN
A GET request example to get the corresponding list of posts:
https://api.data365.co/v1.1/instagram/tag/bitcoins/posts?from_date=2021-01-01&to_date=2021-01-10&max_page_size=100&order_by=date_desc&access_token=TOKEN
More detailed info can be found in the API documentation at https://api.data365.co/v1.1/instagram/docs#tag/Instagram-hashtag-search
Consider the following NodeJS program.
var crypto = require('crypto');
var overall = new Date().getMilliseconds();

var hashme = function myself(times) {
    times--;
    if (times > 0) {
        var name = new Buffer(100000).toString('utf8') + times;
        var hash = crypto.createHash('md5').update(name).digest("hex");
        console.log(hash);
        myself(times)
    } else {
        return console.log('Finished in ' + (new Date().getMilliseconds() - overall) + 'ms.')
    }
}
hashme(400)
It creates a new string buffer of 100,000 bytes, salts it with the iterated value, then calculates the MD5 sum of the buffer, and logs the elapsed time when finished.
When I run the program, I get wildly different results, between 200ms and 600ms, each time it is run.
What's going on here?
Creating a Buffer and calling toString() is going to be a part of that latency, so you're timing more than just the hashing there. Take the Buffer creation and toString() out of the equation and you'll get a much more accurate and precise reading.
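To illustrate, here is a sketch that hoists the buffer work (and the per-hash console.log, which also adds noise) out of the timed region, and uses a monotonic clock. Note also that new Date().getMilliseconds() in the original only returns the 0-999 millisecond component of the current second, which by itself makes the reported numbers unreliable:

// Sketch: time only the MD5 hashing. Buffer.alloc replaces the deprecated
// new Buffer(), and process.hrtime.bigint() is a monotonic nanosecond clock.
var crypto = require('crypto');

var base = Buffer.alloc(100000).toString('utf8'); // built once, outside the timer
var start = process.hrtime.bigint();
for (var times = 400; times > 0; times--) {
    crypto.createHash('md5').update(base + times).digest('hex');
}
var elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log('Finished in ' + elapsedMs.toFixed(1) + 'ms.');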