chrome.cookies.getAll and remove all cookies then wait for finish - javascript

The chrome.cookies API is not clear to me. I want to getAll cookies for 3 different domains, delete them, wait for that process to finish, and afterwards set some cookies.
If I understand correctly, chrome.cookies.getAll does not return a promise; I can only define a callback. It is clear to me that I can write callbacks for all three getAll calls, but inside those callbacks I am removing several cookies, and each removal is again asynchronous. So I am totally lost on how to tell when all cookies of the 3 domains have been completely removed.
One option I could think of is to run the 3 getAll calls once ahead of time and count the cookies, then increment a counter in each remove callback and check whether I have reached the total. That seems so convoluted that I can't believe it is the correct way to do it.
Thanks

I don't think it's optimal, but this is a quick answer.
function RemoveCookies(cookies, domain) {
  for (var i = 0; i < cookies.length; i++) {
    chrome.cookies.remove({url: 'https://' + domain + cookies[i].path, name: cookies[i].name});
  }
}

function RemoveDomain1(callback, ...params) {
  chrome.cookies.getAll({domain: domain1}, function(cookies) {
    RemoveCookies(cookies, domain1);
    callback(...params);
  });
}

function RemoveDomain2(callback, ...params) {
  chrome.cookies.getAll({domain: domain2}, function(cookies) {
    RemoveCookies(cookies, domain2);
    callback(...params);
  });
}

function RemoveDomain3(callback, ...params) {
  chrome.cookies.getAll({domain: domain3}, function(cookies) {
    RemoveCookies(cookies, domain3);
    callback();
  });
}

RemoveDomain1(RemoveDomain2, RemoveDomain3, DoSomethingAfterAll);
Check this link too; maybe it helps.
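For completeness, here is a minimal Promise-based sketch of waiting until every removal has finished before setting new cookies. It assumes Manifest V3, where the chrome.cookies methods return promises when called without a callback (on older versions you would wrap the callbacks in promises yourself); the domain names are placeholders.
// Sketch: remove all cookies for several domains, wait for all removals,
// then set new cookies. Domain names are placeholders.
const domains = ['example.com', 'example.org', 'example.net'];

async function removeAllCookiesFor(domain) {
  const cookies = await chrome.cookies.getAll({ domain: domain });
  // Wait for every single removal for this domain to resolve.
  await Promise.all(cookies.map(cookie =>
    chrome.cookies.remove({
      url: 'https://' + domain + cookie.path,
      name: cookie.name
    })
  ));
}

Promise.all(domains.map(removeAllCookiesFor)).then(() => {
  // All cookies of all three domains are gone; safe to set new ones here.
  chrome.cookies.set({ url: 'https://example.com/', name: 'foo', value: 'bar' });
});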

Related

Clearing timeouts for one item in array - javascript

Say I have a messaging service that schedules messages to friends, and a user uploads their friends along with when they want to send to them.
But say 10000 milliseconds after scheduling, the uploader wants to take Bob out of the (for) loop. How do I take Bob out without cancelling the whole scheduler? Or is there a better way to do this? (It's on a Node server.)
var friends = ['John', 'bob', 'billy', 'dan'];
for (var i in friends) {
  setTimeout(function() {
    sendMessage(friends[i]);
  }, 3000000);
}
I feel like there is a much better way to do this but I haven't found anything.
Thanks, I'm new to JS and appreciate the help!
setTimeout returns a handle (an object on Node). Store that somewhere, and then call clearTimeout with that handle if you want to cancel the timeout, as described here:
https://stackoverflow.com/a/6394645/1384352
The way you have your code, all the timeouts will expire at the same time.
Instead you could only initiate the next timeout when the current one finishes. In the following demo 'Billy' is removed while the messages are being sent with an interval of 1 second. And indeed, Billy gets no message:
var friends = ['John', 'Bob', 'Billy', 'Dan'];

(function loop(i) {
  setTimeout(function() {
    if (i >= friends.length) return; // all done
    sendMessage(friends[i]);
    loop(i + 1); // next
  }, 1000);
})(0); // call the function with index 0

function sendMessage(friend) { // mock
  console.log('send message to ' + friend);
}

// now remove 'Billy' after 2.5 seconds
setTimeout(function () {
  console.log('remove ' + friends[2]);
  friends.splice(2, 1);
}, 2500);
One option could be to add a flag marking whether a user should be removed from the recipients; if the flag is set, you skip sending to that person. Then you clean up the marked users later, when you can, e.g.
if (!friends[i].isRemoved) {
  setTimeout(...);
}
Another option would be to define an individual timer for each friend; then you can cancel whichever ones you want, as in the sketch below.
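A minimal sketch of that idea (the names and the 10-second delay are just placeholders): keep one timer handle per friend in a map so individual sends can be cancelled.
var friends = ['John', 'Bob', 'Billy', 'Dan'];
var timers = {};

friends.forEach(function (friend) {
  timers[friend] = setTimeout(function () {
    sendMessage(friend);
    delete timers[friend];
  }, 10000);
});

// Later: take Bob out without touching the other scheduled messages.
clearTimeout(timers['Bob']);
delete timers['Bob'];

function sendMessage(friend) { // mock
  console.log('send message to ' + friend);
}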

How to do nested looping over many pages in CasperJS

I don't have a clue where to start with this. Basically I need CasperJS to run through about 15 different pages; for each page it runs through, it needs to get the data for 150 different locations, which need to be set as cookie values. For each location, I need to check the data for 5 different dates.
Any one of these on its own seems pretty straightforward, but trying to get all three to happen is confusing me.
I tried to set it up this way:
for (iterate through URLs) {
  for (iterate through locations) {
    for (iterate through dates) {
      phantom.addCookie({
        // Cookie data here based on location and date
      });
      casper.start(url)
        .then(function() {
          // Do some stuff here
        })
        .run();
    }
  }
}
Essentially what it does is loop through everything and then load the page for the last link, at the last location, on the last date; every other location gets skipped. Is there an easier way to do this? Or better, is there a way to tell my JavaScript loop to wait for Casper to finish what it needs to do before jumping to the next loop iteration?
I'm happy to provide more details if needed. I tried to simplify the process as best I can without cutting out needed info.
That's pretty much it. Two things to look out for:
casper.start() and casper.run() should only be called once per script. You can use casper.thenOpen() to open different URLs.
Keep in mind that all casper.then*() and casper.wait*() functions are asynchronous step functions and are only scheduled for execution after the current step. Since JavaScript has function level scope, you need to "fix" the iteration variables for each iteration otherwise you will get only the last URL. (More information)
Example code:
casper.start(); // deliberately empty

for (var url in urls) {
  for (var location in locations) {
    for (var date in dates) {
      (function(url, location, date) {
        casper.then(function() {
          phantom.addCookie({
            // Cookie data here based on location and date
          });
        }).thenOpen(url)
          .then(function() {
            // Do some stuff here
          });
      })(url, location, date);
    }
  }
}

casper.run(); // start all the scheduled steps
If you use Array.prototype.forEach instead of the for loop, then you can safely skip the IIFE that fixes the variables; a sketch follows.
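Roughly like this (assuming urls, locations and dates are plain arrays); each forEach callback gets its own scope, so no extra wrapping is needed:
casper.start(); // deliberately empty

urls.forEach(function (url) {
  locations.forEach(function (location) {
    dates.forEach(function (date) {
      casper.then(function () {
        phantom.addCookie({
          // Cookie data here based on location and date
        });
      }).thenOpen(url)
        .then(function () {
          // Do some stuff here
        });
    });
  });
});

casper.run(); // start all the scheduled steps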
I'm not sure, but you may need to open a page first before you can add a cookie for its domain. It may be that PhantomJS only accepts a cookie when the domain for that cookie is currently open.

Parse Cloud Code Ending Prematurely?

I'm writing a job that I want to run every hour in the background on Parse. My database has two tables. The first contains a list of Questions, while the second lists all of the user/question agreement pairs (QuestionAgreements). Originally my plan was just to have the client count the QuestionAgreements itself, but I'm finding that this results in a lot of requests that could really be done away with, so I want this background job to run the count and then update a field directly on Question with it.
Here's my attempt:
Parse.Cloud.job("updateQuestionAgreementCounts", function(request, status) {
  Parse.Cloud.useMasterKey();
  var query = new Parse.Query("Question");
  query.each(function(question) {
    var agreementQuery = new Parse.Query("QuestionAgreement");
    agreementQuery.equalTo("question", question);
    agreementQuery.count({
      success: function(count) {
        question.set("agreementCount", count);
        question.save(null, null);
      }
    });
  }).then(function() {
    status.success("Finished updating Question Agreement Counts.");
  }, function(error) {
    status.error("Failed to update Question Agreement Counts.");
  });
});
The problem is, this only seems to be running on a few of the Questions, and then it stops, appearing in the Job Status section of the Parse Dashboard as "succeeded". I suspect the problem is that it's returning prematurely. Here are my questions:
1 - How can I keep this from returning prematurely? (Assuming this is, in fact, my problem.)
2 - What is the best way of debugging cloud code? Since this isn't client side, I don't have any way to set breakpoints or anything, do I?
status.success is called before the asynchronous success calls of count are finished. To prevent this, you can use promises here. Check the docs for Parse.Query.each.
Iterates over each result of a query, calling a callback for each one. If the callback returns a promise, the iteration will not continue until that promise has been fulfilled.
So, you can chain the count promise and return it from the each callback:
// inside the query.each callback:
return agreementQuery.count().then(function (count) {
  question.set("agreementCount", count);
  return question.save(null, null);
});
You can also use parallel promises to make it more efficient.
There are no breakpoints in Cloud Code, which makes Parse really hard to debug. The only way is logging your variables with console.log.
I was able to use promises, as suggested by knshn, to make sure that all of my code completes before status.success is called.
Parse.Cloud.job("updateQuestionAgreementCounts", function(request, status) {
  Parse.Cloud.useMasterKey();
  var promises = []; // Set up a list that will hold the promises being waited on.
  var query = new Parse.Query("Question");
  query.each(function(question) {
    var agreementQuery = new Parse.Query("QuestionAgreement");
    agreementQuery.equalTo("question", question);
    agreementQuery.equalTo("agreement", 1);
    // Make sure that the count finishes running first!
    promises.push(agreementQuery.count().then(function(count) {
      question.set("agreementCount", count);
      // Make sure that the object is actually saved first!
      promises.push(question.save(null, null));
    }));
  }).then(function() {
    // Before exiting, make sure all the promises have been fulfilled!
    Parse.Promise.when(promises).then(function() {
      status.success("Finished updating Question Agreement Counts.");
    });
  });
});
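For what it's worth, here is a variant sketch that relies only on the fact that query.each waits for a promise returned from its callback, so the explicit promise array is not needed (untested, same field names as above):
Parse.Cloud.job("updateQuestionAgreementCounts", function(request, status) {
  Parse.Cloud.useMasterKey();
  var query = new Parse.Query("Question");
  query.each(function(question) {
    var agreementQuery = new Parse.Query("QuestionAgreement");
    agreementQuery.equalTo("question", question);
    agreementQuery.equalTo("agreement", 1);
    // Returning the chained promise makes each() wait for the count and
    // the save before moving on to the next Question.
    return agreementQuery.count().then(function(count) {
      question.set("agreementCount", count);
      return question.save();
    });
  }).then(function() {
    status.success("Finished updating Question Agreement Counts.");
  }, function(error) {
    status.error("Failed to update Question Agreement Counts.");
  });
});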

Strange issue with socket.on method

I am facing a strange issue with calling socket.on methods from the JavaScript client. Consider the code below:
for (var i = 0; i < 2; i++) {
  var socket = io.connect('http://localhost:5000/');
  socket.emit('getLoad');
  socket.on('cpuUsage', function(data) {
    document.write(data);
  });
}
Here I am basically listening for a cpuUsage event which is emitted by the socket server, but for each iteration I am getting the same value. This is the output:
0.03549148310035006
0.03549148310035006
0.03549148310035006
0.03549148310035006
Edit: Server side code, basically I am using node-usage library to calculate CPU usage:
socket.on('getLoad', function (data) {
usage.lookup(pid, function(err, result) {
cpuUsage = result.cpu;
memUsage = result.memory;
console.log("Cpu Usage1: " + cpuUsage);
console.log("Cpu Usage2: " + memUsage);
/*socket.emit('cpuUsage',result.cpu);
socket.emit('memUsage',result.memory);*/
socket.emit('cpuUsage',cpuUsage);
socket.emit('memUsage',memUsage);
});
});
Whereas on the server side, I am getting different values for each emit and socket.on. It feels very strange to me that this is happening. I tried setting data = null after each socket.on call, but it still prints the same value. I didn't know what phrase to search for, so I posted here. Can anyone please guide me?
Please note: I am basically a Java developer and have less experience on the JavaScript side.
You are making the assumption that when you use .emit(), a subsequent .on() will wait for a reply, but that's not how socket.io works.
Your code basically does this:
it emits two getLoad messages directly after each other (which is probably why the returned value is the same);
it installs two handlers for a returning cpuUsage message being sent by the server;
This also means that each time you run your loop, you're installing more and more handlers for the same message.
Now I'm not sure what exactly it is you want. If you want to periodically request the CPU load, use setInterval or setTimeout. If you want to send a message to the server and want to 'wait' for a response, you may want to use acknowledgement functions (not very well documented, but see this blog post).
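A minimal sketch of the acknowledgement approach (assuming a reasonably recent socket.io, where a function passed as the last argument to emit is invoked by the server as an acknowledgement; the server side reuses the usage.lookup call from the question):
// Client: pass a callback as the last argument of emit.
socket.emit('getLoad', function (cpuUsage) {
  document.write(cpuUsage);
});

// Server: call the acknowledgement callback with the result.
socket.on('getLoad', function (ack) {
  usage.lookup(pid, function (err, result) {
    ack(result.cpu);
  });
});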
But you should assume that for each type of message, you should only call socket.on('MESSAGETYPE', handler) once during the runtime of your code.
EDIT: here's an example client-side setup for a periodic poll of the data:
var socket = io.connect(...);

socket.on('connect', function() {
  // Handle the server response:
  socket.on('cpuUsage', function(data) {
    document.write(data);
  });
  // Start an interval to query the server for the load every 30 seconds:
  setInterval(function() {
    socket.emit('getLoad');
  }, 30 * 1000); // milliseconds
});
Use this line instead:
var socket = io.connect('iptoserver', {'force new connection': true});
Replace iptoserver with the actual ip to the server of course, in this case localhost.
Edit: that is, if you want to create multiple clients. Otherwise you have to initialise the socket variable before the for loop.
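In other words, something like this sketch, where the socket is created once and the cpuUsage handler is registered a single time:
var socket = io.connect('http://localhost:5000/');

socket.on('cpuUsage', function (data) {
  document.write(data);
});

for (var i = 0; i < 2; i++) {
  socket.emit('getLoad');
}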
I suspected the call was returning the average CPU usage since the process started, which seems to be the case here. Checking the node-usage documentation page (average-cpu-usage-vs-current-cpu-usage) I found:
By default CPU Percentage provided is an average from the starting
time of the process. It does not correctly reflect the current CPU
usage. (this is also a problem with linux ps utility)
But If you call usage.lookup() continuously for a given pid, you can
turn on keepHistory flag and you'll get the CPU usage since last time
you track the usage. This reflects the current CPU usage.
The docs also give an example of how to use it:
var pid = process.pid;
var options = { keepHistory: true };
usage.lookup(pid, options, function(err, result) {
  // result.cpu now reflects usage since the previous lookup
});

How to ensure order of outbound messages

I'm looking for advice on the best way to solve a problem I'm having with sending messages over a socket. I need to make sure the messages are sent in order, i.e. first in, first out, even when I can't guarantee that the socket is always open.
I have a network manager in my program, with a send method. This send method takes a message object and then attempts to push it out over the socket.
However, sometimes the socket will be closed, due to lost network connectivity, and I need to stop sending messages and queue up any new messages while I'm waiting for the socket to reopen. When the socket reopens, the queued messages should be sent in order.
I'm working with Javascript and Websockets. I have something like this right now, but it seems flawed:
function send(msg) {
  if (msg) {
    outbox.push(msg);
  }
  if (!readyState) {
    return setTimeout(send, 100);
  }
  while (outbox.length) {
    socket.send(outbox.shift());
  }
}
Has anyone ever tackled a problem like this before? I'm looking for a general approach to structuring the program or perhaps an algorithm that can be used.
Adam, here's a slightly more complete answer. If I were you, I'd wrap this up in a connection object (a sketch of that follows the code).
function send(msg) {
  if (msg) {
    if (socket.readyState === socket.OPEN) {
      socket.send(msg);
    } else {
      outbox.push(msg);
    }
  }
}

socket.onopen = function() {
  while (outbox.length) {
    socket.send(outbox.shift());
  }
};
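A rough sketch of the connection-object idea (reconnection handling is left out, and the WebSocket URL is a placeholder):
function Connection(url) {
  var outbox = [];
  var socket = new WebSocket(url);

  socket.onopen = function () {
    // Flush queued messages in FIFO order once the socket is open.
    while (outbox.length) {
      socket.send(outbox.shift());
    }
  };

  this.send = function (msg) {
    if (socket.readyState === WebSocket.OPEN) {
      socket.send(msg);
    } else {
      outbox.push(msg);
    }
  };
}

var conn = new Connection('wss://example.com/socket');
conn.send('hello'); // queued if the socket is not open yet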
Are you just asking in general as far as structuring?
Have you thought of using logic with push(), pop(), shift() and unshift()?
Essentially, in a socket error callback of some sort, push() the messages onto a queue. When the socket can be opened, or you want to try to send the messages again, shift() the correct object out. You could use whatever combination of those methods gets you the right ordering of the messages.
