Backbone fetch async timeout not taking effect - javascript

I am trying to do a fetch on a large backbone collection that involves some server side processing.
I tried setting timeout to 0 (for a infinite timeout) or to a large value:
aCollection.fetch({
// ...
timeout: 500000
});
// or:
aCollection.fetch({
// ...
timeout: 0
});
…but neither of them is taking effect; the GET request issued during the fetch operation times out after two minutes.
Is this a browser timeout overriding the fetch async options? Is there any workaround for this?

I ran into this problem too, and it was due to Node's default server timeout (I was using Express).
It's probably not the best solution, but I changed the server timeout:
this.server.timeout = 240000; // 4 minutes

Related

Download two URLs at once but process as soon as one completes

I have two API urls to hit. One known to be fast (~50-100ms). One known to be slow (~1s). I use the results of these to display product choices to the user. Currently I await-download one, then do the second. Pretty much sequential, and because of that it adds 50-100ms to the already-slow second hit.
I would like to:
Send both requests at once
Start processing data as soon as one request comes back
Wait for both requests before moving on from there.
I've seen the example Axios gives...
axios.all([getUserAccount(), getUserPermissions()])
.then(axios.spread(function (acct, perms) {
// Both requests are now complete
}));
But this appears to wait for both requests to complete. That would still be marginally faster, but I want the data from my 50ms API hit to start showing as soon as it's ready.
You can of course chain additional .thens to the promises returned by axios:
Promise.all([
getUserAccount()
.then(processAccount),
getUserPermissions()
.then(processPermissions)
]).then(([userAccount, permissions]) => {
//...
});
where processAccount and processPermissions are functions that take the axios response object as an argument and return the wanted results.
You can also attach multiple .thens to the same promise:
const account = getUserAccount();
const permissions = getUserPermissions();
// Show permissions when ready
permissions.then(processPermissions);
Promise.all([account, permissions])
.then(([account, permissions]) => {
// Do stuff when both are ready
});
I replaced axios.all with Promise.all - I don't know why axios provides that helper, as JS has a native implementation. I tried consulting the docs, but they don't even document that API.
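A self-contained sketch of that pattern, with setTimeout standing in for the fast and slow requests: the fast result is processed as soon as it arrives, while Promise.all still gates the final step on both.

```javascript
const order = [];

// Stand-ins for the fast (~10 ms) and slow (~50 ms) API calls.
const fast = new Promise((resolve) => setTimeout(() => resolve('fast data'), 10));
const slow = new Promise((resolve) => setTimeout(() => resolve('slow data'), 50));

// Process the fast result immediately, without waiting for the slow one.
fast.then((data) => order.push('processed ' + data));

// Continue only once both have resolved.
Promise.all([fast, slow]).then(([f, s]) => {
  order.push('both done');
});
```

Attaching a .then to a promise never delays it; it only schedules work for when that promise settles, which is why the early handler on fast costs nothing.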

Cypress wait for API after button click

I've made a React app, which all works perfectly and I'm now writing some end to end tests using Cypress.
The React app all works on the same url, it's not got any routes, and api calls from inside the app are handled through button clicks.
The basis of the app is the end user selects some options, then presses filter to view some graphs that are dependant on the selected options.
cy.get('button').contains('Filter').click()
When the button is pressed in Cypress, it runs the 3 API calls, which return as expected. But looking over the Cypress docs, there is no easy way to wait for them other than an inline cy.wait(15000), which isn't ideal: sometimes they return a lot faster, and sometimes slower, depending on the selected options.
Edit 1
I've tried using server and route:
cy.server({ method: 'GET' });
cy.route('/endpoint1*').as('one')
cy.route('/endpoint2*').as('two')
cy.route('/endpoint3*').as('three')
cy.get('button').contains('Filter').click()
cy.wait(['@one', '@two', '@three'], { responseTimeout: 15000 })
Which gives me the error:
CypressError: Timed out retrying: cy.wait() timed out waiting 5000ms for the 1st request to the route: 'one'. No request ever occurred.
After further investigation
Changing from responseTimeout to just timeout fixed the error.
cy.server({ method: 'GET' });
cy.route('/endpoint1*').as('one')
cy.route('/endpoint2*').as('two')
cy.route('/endpoint3*').as('three')
cy.get('button').contains('Filter').click()
cy.wait(['@one', '@two', '@three'], { timeout: 15000 }).then(xhr => {
// Do what you want with the xhr object
})
Sounds like you'll want to wait for the routes. Something like this:
cy.server();
cy.route('GET', '/api/route1').as('route1');
cy.route('GET', '/api/route2').as('route2');
cy.route('GET', '/api/route3').as('route3');
cy.get('button').contains('Filter').click();
// setting timeout because you mentioned it can take up to 15 seconds.
cy.wait(['@route1', '@route2', '@route3'], { responseTimeout: 15000 });
// This won't execute until all three API calls have returned
cy.get('#something').click();
Rather than using a .wait you can use a timeout parameter. That way, if it finishes faster, you don't have to wait.
cy.get('button').contains('Filter', {timeout: 15000}).click()
This is mentioned as one of the options parameters in the official docs here.
cy.server() and cy.route() are deprecated as of Cypress 6.0.0. In a future release, support for cy.server() and cy.route() will be removed. Consider using cy.intercept() instead.
This worked for me. Example:
cy.intercept('GET', 'http://localhost:4001/meta').as('route');
cy.get(':nth-child(2) > .nav-link').click();
cy.contains('Filter');
cy.wait(['@route'], { responseTimeout: 15000 });
Try this:
cy.contains('button', 'Save').click();
cy.get('[data-ng-show="user.manage"]', { timeout: 10000 }).should('be.visible').then(() => {
cy.get('[data-ng-show="user.manage"]').click();
});

Retrying failed attempts with NodeJS/ES6

Foreword: I am new to Javascript, coming from a C++ background.
I am writing a NodeJS app using a public npm library to request a few sets of data. The data source rate limits requests, and in a few extreme cases these rate limits are hit. When these limits are hit, the API will only return "Rate Limit Exceeded" for a few seconds before processing more requests.
When I receive the data, I try to parse it using .map(). However, when the rate limit is exceeded, the whole app comes crashing down, because map() is only available on arrays, and the "Rate Limit Exceeded" message is just a plain object.
if (message.message === 'Rate limit exceeded') { //This check doesn't work btw
console.log('There was a problem parsing data from the server: ' + err);
return;
}
var items = data.map(item => ({
time: new Date(item[0] * 1000),
low: item[1],
high: item[2],
open: item[3],
close: item[4],
volume: Number(item[5])
}));
for(var i = 0; i < items.length; i++)
dataStore.push(items[i]);
I want to approach this by detecting the "Rate Limit Exceeded" message, waiting a few seconds, and then retrying.
From my current understanding, setTimeout() would be a good candidate for this, but I do not understand how to get it to work recursively. Essentially, I would like it to re-request data every five seconds until the data is correctly processed.
TL;DR: I want a function to recursively call itself with setTimeout() until it properly receives data; or if there is a better way to achieve this, I am all ears.
What you can do here is use setInterval() and keep checking every n seconds. Once you get the data, you can exit the interval with clearInterval().
Note that I am using jQuery here, as I'm not sure which NPM package you are using to request the data.
let retryAfter = 10000; // 10 seconds

function fetchData() {
  $.get('//api.jsonbin.io/b/5a3823a38aaf400a97709c43', (data) => {
    console.log(data);
    // If you get the data, stop retrying
    if (data) {
      clearInterval(intervalId);
    }
  });
}

let intervalId = setInterval(fetchData, retryAfter);
fetchData(); // call once immediately, without the initial 10-second delay
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
As written, this example's interval stops after the first tick because it gets the data right away; in your case it will keep running until you receive valid data.
Also, I can't suggest exactly how to compare your error message, as I'm not sure whether the API returns the error as JSON or plain text; how you compare it depends on the response data type.
You could just do
if(Array.isArray(data) === false) return;
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/isArray
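Combining the two ideas above into the recursive setTimeout shape the question asks for, here is a minimal sketch; fetchData is a hypothetical function returning a promise that resolves to either an array of rows or a rate-limit object:

```javascript
// Retry every `delayMs` milliseconds until fetchData() resolves to an
// actual array, then hand the data to onData and stop.
function requestWithRetry(fetchData, delayMs, onData) {
  fetchData().then((data) => {
    if (Array.isArray(data)) {
      onData(data); // real data arrived: stop retrying
    } else {
      // Rate-limited (or otherwise not an array): try again later.
      setTimeout(() => requestWithRetry(fetchData, delayMs, onData), delayMs);
    }
  });
}
```

The Array.isArray check sidesteps comparing error-message strings entirely, which also avoids the "check doesn't work" problem in the question.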

mongoDB insert and process.nextTick

I have a list of 50k entries that I am entering into my db.
var tickets = [new Ticket(), new Ticket(), ...]; // 50k of them
tickets.forEach(function (t, ind){
console.log(ind+1 + '/' + tickets.length);
Ticket.findOneAndUpdate({id: t.id}, t, {upsert: true}, function (err, doc){
if (err){
console.log(err);
} else {
console.log('inserted');
}
});
});
Instead of the expected interleaving of
1 / 50000
inserted
2 / 50000
inserted
I am getting all of the indices followed by all of the inserted confirmations
1 / 50000
2 / 50000
...
50000 / 50000
inserted
inserted
...
inserted
I think something is happening with process.nextTick. There is a significant slow down after a few thousand records.
Does anyone know how to get the efficient interleaving?
Instead of the expected interleaving
That would be the expected behavior only for synchronous I/O.
Remember that these operations are all asynchronous, which is a key idea of node.js. What the code does is this:
for each item in the list,
'start a function' // <-- this will immediately look at the next item
output a number (happens immediately)
do some long-running operation over the network with connection pooling
and batching. When done,
call a callback that says 'inserted'
Now the code will launch a ton of those functions that, in turn, send requests to the database. All that will happen long before the first request has even reached the database. It is likely that the OS doesn't even bother to actually send the first TCP packets before you're at, say ticket 5 or 10 or so.
To answer the question from your comment: no, the requests will be sent out relatively soon (that is up to the OS), but the results won't reach your single-threaded JavaScript code until your loop has finished queuing up the 50k entries. This is because the forEach is the currently running piece of code, and any events that arrive while it runs are processed only after it finishes - you'd observe the same thing if you used setTimeout(function() { console.log("inserted... not") }, 0) instead of the actual DB call, because setTimeout is also an async event.
To make your code fully async, your data source should be some kind of (async) iterator that provides data, instead of a huge array of items.
You are running into the wonders of node's asynchronicity. It sends the upsert requests off into the ether, then continues to the next record without waiting for a response. Does it matter? It's just an informational message that is not in sync with the upsert. If you need to make sure the operations happen in order, you might want to use the Async library to step through your array.
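As one way to get the interleaving, the loop can await each operation before starting the next. In this sketch, upsertOne is a hypothetical stand-in for the Ticket.findOneAndUpdate call (assuming it returns a promise when no callback is given), and the logger is injectable only so the example is self-contained:

```javascript
// Process items one at a time so each progress line interleaves with
// its 'inserted' confirmation.
async function upsertAll(tickets, upsertOne, log = console.log) {
  for (let i = 0; i < tickets.length; i++) {
    log((i + 1) + '/' + tickets.length);
    await upsertOne(tickets[i]); // wait for this upsert before the next
    log('inserted');
  }
}
```

Note this serializes the 50k requests, trading throughput for ordering; batching a few at a time is a middle ground if pure sequencing is too slow.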

Strange issue with socket.on method

I am facing a strange issue with calling socket.on methods from the Javascript client. Consider below code:
for(var i=0;i<2;i++) {
var socket = io.connect('http://localhost:5000/');
socket.emit('getLoad');
socket.on('cpuUsage',function(data) {
document.write(data);
});
}
Here basically I am calling a cpuUsage event which is emitted by socket server, but for each iteration I am getting the same value. This is the output:
0.03549148310035006
0.03549148310035006
0.03549148310035006
0.03549148310035006
Edit: Server side code, basically I am using node-usage library to calculate CPU usage:
socket.on('getLoad', function (data) {
usage.lookup(pid, function(err, result) {
cpuUsage = result.cpu;
memUsage = result.memory;
console.log("Cpu Usage1: " + cpuUsage);
console.log("Cpu Usage2: " + memUsage);
/*socket.emit('cpuUsage',result.cpu);
socket.emit('memUsage',result.memory);*/
socket.emit('cpuUsage',cpuUsage);
socket.emit('memUsage',memUsage);
});
});
Whereas on the server side, I am getting different values for each emit and socket.on. I find this very strange. I tried setting data = null after each socket.on call, but it still prints the same value. I don't know what phrase to search for, so I posted here. Can anyone please guide me?
Please note: I am primarily a Java developer and have less experience on the Javascript side.
You are making the assumption that when you use .emit(), a subsequent .on() will wait for a reply, but that's not how socket.io works.
Your code basically does this:
it emits two getLoad messages directly after each other (which is probably why the returned value is the same);
it installs two handlers for a returning cpuUsage message being sent by the server;
This also means that each time you run your loop, you're installing more and more handlers for the same message.
Now I'm not sure what exactly it is you want. If you want to periodically request the CPU load, use setInterval or setTimeout. If you want to send a message to the server and want to 'wait' for a response, you may want to use acknowledgement functions (not very well documented, but see this blog post).
But you should assume that for each type of message, you should only call socket.on('MESSAGETYPE', handler) once during the runtime of your code.
EDIT: here's an example client-side setup for a periodic poll of the data:
var socket = io.connect(...);
socket.on('connect', function() {
// Handle the server response:
socket.on('cpuUsage', function(data) {
document.write(data);
});
// Start an interval to query the server for the load every 30 seconds:
setInterval(function() {
socket.emit('getLoad');
}, 30 * 1000); // milliseconds
});
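The acknowledgement functions mentioned above follow a simple request/response shape: the client passes a callback as the last argument to emit, and the server invokes it with the reply instead of emitting a separate event. This tiny in-process simulation (plain objects standing in for the real socket.io sockets, which work the same way) illustrates the idea:

```javascript
// Hypothetical stand-ins for a socket.io server/client socket pair.
const handlers = {};
const serverSocket = { on: (event, handler) => { handlers[event] = handler; } };
const clientSocket = { emit: (event, ack) => handlers[event](ack) };

// Server: reply via the acknowledgement callback, not a separate emit.
serverSocket.on('getLoad', (ack) => ack(0.035)); // 0.035 = sample CPU usage

// Client: the callback receives the server's reply directly, so each
// request is paired with exactly one response.
clientSocket.emit('getLoad', (cpu) => console.log('cpuUsage:', cpu));
```

Because the reply is delivered to the per-call callback, there is no need to register a cpuUsage handler at all, which avoids the duplicate-handler problem from the loop in the question.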
Use this line instead:
var socket = io.connect('iptoserver', {'force new connection': true});
Replace iptoserver with the actual ip to the server of course, in this case localhost.
Edit: that is, if you want to create multiple clients. Otherwise, move the initialization of the socket variable to before the for loop.
I suspected the call returns the average CPU usage since process startup, which turns out to be the case here. Checking the node-usage documentation page (average-cpu-usage-vs-current-cpu-usage), I found:
By default CPU Percentage provided is an average from the starting time of the process. It does not correctly reflect the current CPU usage. (this is also a problem with linux ps utility)
But If you call usage.lookup() continuously for a given pid, you can turn on keepHistory flag and you'll get the CPU usage since last time you track the usage. This reflects the current CPU usage.
The docs also give an example of how to use it:
var pid = process.pid;
var options = { keepHistory: true }
usage.lookup(pid, options, function(err, result) {
});
