How to get the value of SELECT COUNT(*)? - javascript

I've literally been trying all day to make Firefox obey my will...
I want :
int c = SELECT COUNT(*) FROM ...
I've tried executeAsync({...});, but I believe it's the wrong paradigm, as I want the result immediately. (And mozIStoragePendingStatement results in errors)
var count = 0;
var conn = Services.storage.openDatabase(dbfile); // Will also create the file if it does not exist
let statement = conn.createStatement("SELECT COUNT(*) FROM edges LIMIT 42;");
console.log("columns: " + statement.columnCount); // prints "1"
console.log("col name: " + statement.getColumnName(0)); // is "COUNT(*)"
while (statement.executeStep()) {
    count = statement.row.getResultByIndex(0); // "illegal value"
    count = statement.row.getString(0); // "illegal value", too
    count = statement.row.COUNT(*); // hahaha. still not working
    count = statement.row[0]; // hahaha. "undefined"
    count = statement.row[1]; // hahaha. "undefined"
}
statement.reset();
It basically works, but I don't get the value. What's wrong with the statements inside the loop?
Thanks for any hints...

I've tried executeAsync({...});, but I believe it's the wrong paradigm, as I want the result immediately.
You shouldn't want that; the Storage API is asynchronous for a reason. Synchronous database access can cause a random delay (e.g. if the hard drive is busy), and since your code executes on the main thread (the same thread that services the user interface), the entire user interface would hang while your code waits for the database to respond. The Mozilla devs tried synchronous database access in Firefox 3 and quickly noticed that it degrades the user experience - hence the asynchronous API: the database processing happens on a background thread without blocking anything.
You should change your code to work asynchronously. Something like this should do, for example:
Components.utils.import("resource://gre/modules/Services.jsm");

var conn = Services.storage.openDatabase(dbfile);
if (conn.schemaVersion < 1)
{
    conn.createTable("edges", "s INTEGER, t INTEGER");
    conn.schemaVersion = 1;
}

var statement = conn.createStatement("SELECT COUNT(*) FROM edges");
statement.executeAsync({
    handleResult: function(resultSet)
    {
        var row = resultSet.getNextRow();
        var count = row.getResultByIndex(0);
        processResult(count);
    },
    handleError: function(error) {},
    handleCompletion: function(reason) {}
});

// Close connection once the pending operations are completed
conn.asyncClose();
See also: mozIStorageResultSet, mozIStorageRow.

Try aliasing COUNT(*) as total, then fetch that.
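A minimal sketch of that, building on the question's synchronous code (the alias name total is arbitrary):

let statement = conn.createStatement("SELECT COUNT(*) AS total FROM edges;");
while (statement.executeStep()) {
    count = statement.row.total; // the alias makes the column addressable by name
}
statement.reset();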


How to check what the throttling limit is for your access to an endpoint with JS

I need to implement code to check what my throttling limit is on an endpoint (I know it's x times per minute). I've only been able to find an example of this in Python, which I have never used. It seems like my options are to run a script that sends requests repeatedly until it throttles me or, if possible, to query the API to see what the limit is.
Does anyone have a good idea on how to go about this?
Thanks.
This starts concurrency workers (I'm using "workers" as a loose term here; don't @ me). Each one makes as many requests as possible until one of the requests is rate-limited or it runs out of time. It then reports how many requests completed successfully inside the given time window.
If you know the rate-limit window (1 minute, based on your question), this will find the rate limit. If you need to discover the window, you would want to intentionally exhaust the limit, then slow down the requests and measure the time until they start going through again. The provided code does not do this, but a sketch of the idea follows the example.
// Call apiCall() a bunch of times, stopping when an apiCall() resolves
// false or when the "until" time is reached, whichever comes first. For
// example, if your limit is 50 req/min (and you give "until" enough time
// to actually complete 50+ requests) this will call apiCall() 50 times.
// Each call should return a promise resolving to TRUE, so it will be
// counted as a success. On the 51st call you will presumably hit the
// limit, the API will return an error, apiCall() will detect that, and
// resolve to false. This will cause the worker to stop making requests
// and return 50.
async function workerThread(apiCall, until) {
    let successfulRequests = 0;
    while (true) {
        const success = await apiCall();
        // only count it if the request was successful
        // AND finished within the timeframe
        if (success && Date.now() < until) {
            successfulRequests++;
        } else {
            break;
        }
    }
    return successfulRequests;
}

// This just runs a bunch of workerThreads in parallel, since by doing a
// single request at a time, you might not be able to hit the limit
// depending on how slow the API is to return. It returns the sum of each
// workerThread(), AKA the total number of apiCall()s that resolved to
// TRUE across all threads.
async function testLimit(apiCall, concurrency, time) {
    const endTime = Date.now() + time;
    // launch "concurrency" workers
    const workers = [];
    while (workers.length < concurrency) {
        workers.push(workerThread(apiCall, endTime));
    }
    // Sum the number of requests that succeeded from each worker.
    // This implicitly waits for them to finish.
    let total = 0;
    for (const worker of workers) {
        total += await worker;
    }
    return total;
}

// Put in your own code to make a trial API call.
// Return true for success or false if you were throttled.
async function yourAPICall() {
    try {
        // this is a really sloppy example API
        // the limit is ROUGHLY 5/min, but because of the sloppy
        // server-side implementation you might get 4-6
        const resp = await fetch("https://9072997.com/demos/rate-limit/");
        return resp.ok;
    } catch {
        return false;
    }
}

// this is a demo of how to use the function
(async function() {
    // run 2 requests at a time for 5 seconds
    const limit = await testLimit(yourAPICall, 2, 5 * 1000);
    console.log("limit is " + limit + " requests in 5 seconds");
})();
Note that this method measures the quota available to itself. If other clients or previous requests have already depleted the quota, it will affect the result.
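A rough sketch of the window-discovery idea mentioned above (my own addition, not part of the answer's code; it reuses apiCall in the same true/false shape as yourAPICall() and assumes the quota is not being consumed by anyone else):

// Exhaust the quota, then poll slowly and measure how long it takes
// until a request succeeds again. That elapsed time approximates the
// rate-limit window.
async function discoverWindow(apiCall, pollIntervalMs) {
    // burn through the remaining quota
    while (await apiCall()) {}
    const throttledAt = Date.now();
    // poll until a request goes through again
    while (!(await apiCall())) {
        await new Promise(resolve => setTimeout(resolve, pollIntervalMs));
    }
    return Date.now() - throttledAt;
}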

IndexedDB-Bug in Google Chrome

I am trying to create a ring buffer so that I can store a lot of JSON data.
The goal is to save around 300,000 records and change them cyclically. For the test, I randomly created 1,000 records (with 10 float values per record) and saved them as JSON in IndexedDB.
To persist to IndexedDB, I used a loop (from 0 to 99) and the command "put".
My observation is the following:
On the first pass, the DB is created and the 100 records are saved successfully.
The first refresh also works; the newly generated random float values are saved. But memory usage increases significantly.
After a second refresh, the random data is no longer changed, because memory usage has exceeded the limit.
The keys for the IndexedDB are set in a loop (starting at 0 and ending at 99).
In other browsers like Firefox and MS Edge, the test runs fine, even after 100 refreshes.
Does anyone know the problem, or better yet, have a solution?
It would also be OK to delete all records from the IndexedDB while the page is reloading.
So I tried to remove all data while initializing, but even then the memory usage stayed at a high level (over 230 MB).
function getObjectStore(store_name, mode) {
    var tx = db.transaction(store_name, mode);
    return tx.objectStore(store_name);
}

function putDbElement(number, json, _callback) {
    var obj = {
        number: number,
        json: json
    };
    var store = getObjectStore(DB_STORE_NAME, 'readwrite');
    var req;
    try {
        req = store.put(obj);
        _callback();
    } catch (e) {
        throw e;
    }
}

for (var i = 0; i < 100; i++) {
    putDbElement(
        i,
        getRandomJson(1000),
        function() {
            console.log("created: " + i);
        }
    );
}
IndexedDB is asynchronous.
You are opening a new transaction for each iteration. That could be the reason for the high memory usage.
You need to handle success and error. You can use loops, but they must run within a single transaction (in onsuccess), and each put operation must have its own success/error handlers too.
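A minimal sketch of what that can look like (my addition, assuming db, DB_STORE_NAME and getRandomJson() from the question):

// One readwrite transaction for all 100 puts instead of one per record.
var tx = db.transaction(DB_STORE_NAME, 'readwrite');
var store = tx.objectStore(DB_STORE_NAME);
for (var i = 0; i < 100; i++) {
    var req = store.put({ number: i, json: getRandomJson(1000) });
    req.onsuccess = function(e) { /* this record was stored */ };
    req.onerror = function(e) { console.error(e.target.error); };
}
tx.oncomplete = function() { console.log('all 100 records written'); };
tx.onerror = function(e) { console.error(e.target.error); };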
Thanks a lot for your quick answer.
I've extended the code with onsuccess and onerror, but I still had the same problem.
I found no solution, but I did find an explanation for the problem: IndexedDB size keeps growing even though the data saved doesn't change.
Chrome uses LevelDB to be faster, but in my case I find that irritating.

TransactionInactiveError with subsequent put calls

I can't figure out if I'm doing something wrong or if I'm just pushing it too hard.
I'm trying to sync ~70000 records from my online db to IndexedDB in combination with EventSource and a Worker.
So I get 2000 records per package and then use the following code to store them in IndexedDB:
eventSource.addEventListener('package', function(e) {
    var data = JSON.parse(e.data);
    putData(data.type, data.records);
});

function putData(storeName, data) {
    var store = db.transaction([storeName], 'readwrite').objectStore(storeName);
    return new Promise(function(resolve, reject) {
        putRecord(data, store, 0);
        store.transaction.oncomplete = resolve;
        store.transaction.onerror = reject;
    });
}

function putRecord(data, store, recordIndex) {
    if (recordIndex < data.length) {
        var req = store.put(data[recordIndex]);
        req.onsuccess = function(e) {
            recordIndex += 1;
            putRecord(data, store, recordIndex);
        };
        req.onerror = function() {
            self.postMessage(this.error.name);
            recordIndex += 1;
            putRecord(data, store, recordIndex);
        };
    }
}
It all works for about ~10000 records; I didn't really test where the limit is, though. I suspect that at some point there are too many transactions in parallel, which causes a single transaction to become very slow and run into some timeout. According to the dev tools, the 70000 records are around 20 MB.
Complete error:
Uncaught TransactionInactiveError: Failed to execute 'put' on
'IDBObjectStore': The transaction has finished.
Any ideas?
I don't see an obvious error in your code, but you can make it much simpler and faster. There's no need to wait for the success of a previous put() to issue a second put() request.
function putData(storeName, data) {
    var store = db.transaction([storeName], 'readwrite').objectStore(storeName);
    return new Promise(function(resolve, reject) {
        for (var i = 0; i < data.length; ++i) {
            var req = store.put(data[i]);
            req.onerror = function(e) {
                self.postMessage(e.target.error.name);
            };
        }
        store.transaction.oncomplete = resolve;
        store.transaction.onerror = reject;
    });
}
It is possible that the error you are seeing is because the browser has implemented an arbitrary time limit on the transaction. But again, your code looks correct, including the use of Promises (which are tricky with IDB, but so far as I can tell you're doing it correctly!)
If this is still occurring I second the comment to file a bug against the browser(s) with a stand-alone repro. (If it's happening in Chrome I'd be happy to take a look.)
I think this is due to the implementation. If you read the spec, a transaction must keep a list of all the requests made in the transaction. When the transaction is committed, all these changes are persisted; otherwise the transaction is aborted. Specs
The maximum request list in your case is probably 1000 requests. You can easily test that by trying to insert 1001 records. So my guess is that when the 1000th request is reached, the transaction is set to inactive.
Maybe change your strategy and only make 1000 requests in each transaction, starting a new transaction when the previous one has completed; a sketch of that follows.
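A rough sketch of that chunking strategy (my addition; it assumes db from the question, with storeName and data being the store name and record array):

// Write at most chunkSize records per transaction, and start the
// next transaction only once the previous one has completed.
function putInChunks(storeName, data, chunkSize, offset) {
    offset = offset || 0;
    if (offset >= data.length) return;
    var store = db.transaction([storeName], 'readwrite').objectStore(storeName);
    var end = Math.min(offset + chunkSize, data.length);
    for (var i = offset; i < end; i++) {
        store.put(data[i]);
    }
    store.transaction.oncomplete = function() {
        putInChunks(storeName, data, chunkSize, end);
    };
}

// e.g. putInChunks('myStore', records, 1000);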

Saving To MongoDB In A Loop

I am having trouble saving a new record to MongoDB. I am pretty sure there is something in my code that I don't fully understand, and I was hoping someone might be able to help.
I am trying to save a new record to MongoDB for each of the cats. This code is for Node.js.
for (var x = 0; x < (cats.length - 1); x++) {
    if (!blocked) {
        console.log("x = " + x);
        var memberMessage = new Message();
        memberMessage.message = message.message;
        memberMessage.recipient = room[x].userId;
        memberMessage.save(function(err) {
            if (err) console.log(err);
            console.log(memberMessage + " saved for " + cats[x].name);
        });
    }
}
I log the value of cats before the loop and I do get all the names I expect, so I would think that looping through the array would store a new record for each iteration.
What seems to happen is that when I look at the database, it seems to have only saved the last record of the loop. I don't know how/why it would be doing that.
Any help on this is appreciated because I'm new to Node.js and MongoDB.
Thanks.
That's because the save is actually an I/O operation, which is async, while the for loop is sync.
Think of it this way: your JS engine serially executes each line it sees. Assume these lines are kept one after another on a stack. When it comes to the save, it sets it aside on a different queue (as it is an I/O operation, and thus would take time) and goes ahead with the rest of the loop. The engine only checks this queue after it has completed every line on the stack. By that time the loop has finished, so the loop variable x refers to the last index of the array. Thus, only the last value is saved.
To fight this tragedy, you can use multiple methods:
Closures - Read More
You can make a closure like so: cats.forEach() (see the sketch after this list)
Promises - Read More. There is a sweet library which promisifies the mongo driver to make it easier to work with.
Generators, etc. - Read More. Not ready for primetime yet.
Note about #2 - I'm not a contributor to the project, but I do work with the author. I've been using the library for well over a year now, and it's fast and awesome!
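A minimal sketch of the closure approach (my addition; it assumes Message, message, room and blocked from the question). forEach gives each iteration its own cat and x bindings, so the save callback logs the right cat:

cats.forEach(function(cat, x) {
    if (blocked) return;
    var memberMessage = new Message();
    memberMessage.message = message.message;
    memberMessage.recipient = room[x].userId;
    memberMessage.save(function(err) {
        if (err) return console.log(err);
        console.log(memberMessage + " saved for " + cat.name);
    });
});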
You can use the batch create feature from Mongoose:
var messages = [];
for (var x = 0; x < (cats.length - 1); x++) {
    if (!blocked) {
        var memberMessage = new Message();
        memberMessage.message = message.message;
        memberMessage.recipient = room[x].userId;
        messages.push(memberMessage);
    }
}
Message.create(messages, function (err) {
    if (err) // ...
});

Using setTimeout inside a loop to avoid blocking

Some lines of code to give you an idea of what I'm trying to ask.
Code starts with
var webSocketsServerPort = 8002;
var webSocketServer = require('websocket').server;
var conns = [];
I use the array conns to push the users after each successful connection. I store additional information there (their ID) so I can identify the user.
And when I need to send a specific information to a user I call the following function.
function sendMessage(userID, message) {
    for (var i = 0, len = conns.length; i < len; ++i) {
        if (conns[i].customData.ID == userID) {
            conns[i].sendUTF(message);
        }
    }
}
My question is:
Is it a better idea to replace conns[i].sendUTF(message); with setTimeout(function(){ conns[i].sendUTF(message); }, 1), so that if there are 5000 connected users, sendUTF(msg) cannot block the loop and, in the best case, all the messages are sent at the same time?
If you change your design to key everything by an ID instead of using an array of objects, there is no reason to loop to find all of a user's connections. You would only need to loop through the multiple connections of each user.
var connections = {};

function addConnection(userId, conn) {
    if (!connections[userId]) {
        connections[userId] = [];
    }
    connections[userId].push(conn);
}

function getUserConnections(userId) {
    return connections[userId];
}
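With that lookup table in place, sendMessage() could become something like this (my sketch, not part of the original answer):

function sendMessage(userId, message) {
    var userConns = getUserConnections(userId) || [];
    for (var i = 0; i < userConns.length; i++) {
        userConns[i].sendUTF(message);
    }
}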
That wouldn't help in the way you are thinking. If it's not going to "block" at that time, it will "block" in 1 ms.
Doing setTimeout that way only delays the execution, not the queueing. JS will still blockingly run your for loop to get all 5000 items into the waiting queue before clearing the stack for other things.
What you need is to yield between iterations. Since you're on Node.js, you can use process.nextTick() to schedule the next iteration. Here's a quick example.
var i = 0;
var length = conns.length;

function foo() {
    if (i >= length) return;
    // Run as usual
    if (conns[i].customData.ID == userID) conns[i].sendUTF(message);
    // Schedule the next iteration without blocking
    i++;
    process.nextTick(foo);
}
foo();
