While I'm pretty used to RxJS and reactive programming, there is one thing that's been bothering me that I can't get my head around.
Let's say we have a simple function that will be run every time someone clicks the SCAN button:
function scan() {
    this.startScanning(10).subscribe(scannedItem => console.log(scannedItem))
}
Inside our scan function, we use a startScanning method which starts scanning (e.g. for Bluetooth devices) for 10 seconds; it returns an observable to which we subscribe, logging all the discovered devices/items.
OK, so far so good, but what bothers me is what happens if the user clicks the button 10 times in a row. What happens to the previous subscriptions? And how am I supposed to handle this? Do I need to unsubscribe every time, or do I need to unsubscribe at all?
A nice explanation would be appreciated, with possible further reading/examples. Thanks!
The way I would handle this would be to flip a boolean while the process is running and bind the button's [disabled] property to that value, e.g.
isScanning: boolean
function scan() {
    this.isScanning = true
    this.startScanning(10).subscribe({
        next: scannedItem => console.log(scannedItem),
        complete: () => this.isScanning = false
    })
}
<button (click)="scan()" [disabled]="isScanning">Click me!</button>
(you might also want to add some sort of indicator that it's processing while the button is disabled - I like to use Font Awesome's spinner icons with *ngIf="isScanning" for that)
As for the rest, it depends on how exactly the startScanning method is implemented. Most likely you'd have ten separate observables, each of which would automatically complete ten seconds after its respective click, so there wouldn't be any need to worry about manually unsubscribing or anything, unless it was a really heavy process (but IMO you should still disable the button anyway for UX reasons).
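To make that concrete, here's a minimal sketch (assuming RxJS 6+ pipeable operators) of a startScanning that completes on its own; the interval source and fake device names are stand-ins for illustration, not your actual Bluetooth API:

import { interval, timer } from 'rxjs';
import { map, takeUntil } from 'rxjs/operators';

// Hypothetical stand-in for the real Bluetooth scan: emit a fake item
// every second and complete on its own once the timer fires.
function startScanning(seconds) {
    return interval(1000).pipe(
        map(i => 'device-' + i),
        takeUntil(timer(seconds * 1000))
    );
}

// Each click produces an independent observable that completes after
// `seconds` seconds, so RxJS tears the subscription down for you -
// there is nothing left to unsubscribe from manually.
startScanning(10).subscribe({
    next: item => console.log(item),
    complete: () => console.log('scan finished')
});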
Looking at your question again, I assumed you're using Angular but you didn't actually say that. If you're not, the general principle is the same, the only difference is you'll need to use a different way of setting the button's disabled state.
You could, on click, swap the button's handler for one that does nothing, and when the subscription returns its result, swap the button back to the original handler.
It doesn't solve the RxJS question, just your immediate problem: it makes the button do nothing while you're waiting for the result.
I suppose you could also use observables to map each call to a variable, but in your case it seems better to block the function call while the scan is running.
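Concretely, that handler-swapping idea might look something like this rough sketch (the scanButton id is a placeholder; startScanning is the same method from the question):

var button = document.getElementById('scanButton'); // hypothetical button id

function onScanClick() {
    // Swap the handler out so further clicks do nothing while we wait.
    button.removeEventListener('click', onScanClick);

    startScanning(10).subscribe({
        next: function (item) { console.log(item); },
        complete: function () {
            // Result is in: put the original handler back.
            button.addEventListener('click', onScanClick);
        }
    });
}

button.addEventListener('click', onScanClick);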
Related
I have a controller where I need to load content using ajax. While it's loading, I'd like a spinner to appear in the interim. The code looks something like the below:
<i class="fa fa-2x fa-spin fa-spinner" ng-show="isLoadingContent"></i>
And the corresponding js:
$scope.isLoadingContent = true;
$q.all(promises).then(function (values) {
$scope.isLoadingContent = false;
// more code - display returned data
However, in the UI the spinner does not appear where/when I expect it to when I step through the code:
$scope.isLoadingContent = true;
debugger; // the spinner does not appear on the UI
$q.all(promises).then(function (values) {
debugger; // the spinner finally does appear in the UI at this point
$scope.isLoadingContent = false;
// more code - display returned data
I have tried stepping through the code but came up short as to what's going on,
and I am sure I am misunderstanding the sequence of events in the event loop and where the Angular digest cycle plays its role in all of this.
Is someone able to provide an explanation as to why the spinner is set to appear within the promise's callback rather than where I set $scope.isLoadingContent? Is it not actually getting set, but rather getting queued up in the event loop's message queue?
------------ EDIT ------------
I believe I came across an explanation as to what's going on, thanks in large part to @jcford and @istrupin.
A little tidbit missing from the original post: the event firing the promise calls and the spinner update was actually based on a $scope.$on("some-name", function(){...}) handler - effectively a click event triggered outside of my current controller's scope. I believe this means the $digest cycle doesn't run as it normally would, because of where the event originates. So an update inside the $on handler doesn't trigger $apply/$digest like it normally does, meaning I have to make that $digest call explicitly.
Oddly enough, I realize now that within the $q.all() it must call $apply, since, when debugging, I saw the DOM changes that I had expected. FWIW.
tl;dr - call $digest.
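For reference, a minimal sketch of where that $digest call ends up (the controller and module names are placeholders, and promises stands in for whatever you're actually loading); if the broadcast could also arrive during a digest, you'd use $scope.$evalAsync instead to avoid a "$digest already in progress" error:

angular.module('app').controller('ContentCtrl', function ($scope, $q) {
    $scope.isLoadingContent = false;

    // Fired from outside Angular, so no digest is running when this handler executes.
    $scope.$on('some-name', function () {
        $scope.isLoadingContent = true;
        $scope.$digest(); // make ng-show pick up the change right away

        $q.all(promises).then(function (values) {
            // $q resolves inside Angular's digest, so no manual call is needed here.
            $scope.isLoadingContent = false;
        });
    });
});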
A combination of both answers will do the trick here. Use
$scope.$evalAsync()
This will combine scope apply with timeout in a nice way. The code within the $evalAsync will either be included in the current digest OR wait until the current digest is over and start a new digest with your changes.
i.e.
$q.all(promises).then(function (values) {
    $scope.$evalAsync(function () {
        $scope.isLoadingContent = false;
    });
});
Try adding $scope.$apply() after assigning $scope.isLoadingContent = true to force the digest. There might be something in the rest of your code keeping it from applying immediately.
As pointed out in a number of comments, this is absolutely a hack and is not the best way to go about solving the issue. That said, if this does work, you at least know that your binding is set up correctly, which will allow you to debug further. Since you mentioned it did, the next step would then be to see what's screwing up the normal digest cycle -- for example triggering outside of angular, as suggested by user JC Ford.
I usually use isContentLoaded (as opposed to isLoading). I leave it undefined at first, so ng-show="!isContentLoaded" is guaranteed to show up on the first template iteration.
When everything is loaded I set isContentLoaded to true.
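In code, that's roughly the following (reusing the spinner markup from the question, just with ng-show="!isContentLoaded"):

// isContentLoaded starts out undefined, so ng-show="!isContentLoaded"
// is truthy and the spinner renders on the very first template pass.
$q.all(promises).then(function (values) {
    $scope.isContentLoaded = true; // spinner disappears once everything is in
});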
To debug your template you need to use $timeout
$timeout(function () { debugger; })
That will stop the code execution right after first digest cycle with all the $scope variable values reflected in the DOM.
I'm working on an end-to-end test using Protractor. The part of the application I'm working on first uses ng-switch statements to show/hide questions in the registration process, one at a time. There's an animation between questions that gave me the hardest time. For example, attempting to load the page->go to next question->assert that an element exists was tough, among other things. The script would load the page, click the next button, then make the assert before the next slide was on screen.
What's worse is that for about half of a second between questions, both the old question and the new one existed on the DOM. The best non-sleep wait mechanism I could come up with was to do a browser.wait() that first waited for there to be two questions on the DOM, then chain another browser.wait() that waited for there to be only one question on the DOM again and then proceed from there. (this entire operation is wrapped into registerPage.waitForTransition() in the code)
The browser.wait()s were not always blocking long enough, so I ended up writing code that looks like this:
it('moves to previous question after clicking previous link', function() {
    var title;
    // Get the current slide title, then click previous, wait for transition,
    // then check the title again to make sure it changed
    registerPage.slideTitle.getText()
        .then(function(text) {
            title = text;
        })
        .then(registerPage.prevLink.click())
        .then(registerPage.waitForTransition())
        .then(function() {
            expect(registerPage.slideTitle.getText()).not.toBe(title);
        });
});
in order to ensure that each wait was properly completed before executing the next command. Now this works perfectly. What was happening before was that the tests would succeed 95% of the time, but would occasionally fire off the asserts or the next click action, etc. before the transition was actually 100% complete. That doesn't happen anymore, but I feel like this is almost OVERusing .then() on promises. But at the same time, it makes sense to force everything to occur sequentially since that's how interacting with a site actually works. Load the page, then wait for the next button to slide in, then make a selection, then click the next button, then wait for the next slide, etc.
Am I doing this in a completely bad-practice style or is this acceptable use of promises when using Protractor on an app with heavy animations?
I like these kind of code-review-like questions, so thanks for posting.
I do think some of your .thens are unnecessary. The .click() and expect shouldn't need them, as they should be added to the controlFlow for you. The expect should also handle the promise for your getText().
The problem you're having would seem to be within your waitForTransition() method, operating outside the controlFlow. Depending on how you're handling the waits within this method, you may need to add it to the controlFlow yourself. E.g., are you calling non-webdriver commands? I've also had good luck with using Expected Conditions (e.g. elementToBeClickable()) in cases like these.
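For illustration, a controlFlow-friendly waitForTransition might look roughly like this (the .question selector and the 5-second timeouts are assumptions about your page, not taken from your code). Because everything goes through browser.wait, it stays on the controlFlow and the returned promise keeps the rest of the test ordered:

this.waitForTransition = function () {
    var questions = element.all(by.css('.question'));
    // Wait until both the outgoing and incoming question are in the DOM...
    return browser.wait(function () {
        return questions.count().then(function (n) { return n === 2; });
    }, 5000).then(function () {
        // ...then wait until only the new question is left.
        return browser.wait(function () {
            return questions.count().then(function (n) { return n === 1; });
        }, 5000);
    });
};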
Additionally, I would also offload much of this code to your page object, especially when waiting is required. For example, if you add something like this to your page object:
registerPage:
this.getSlideTitleText = function() {
    return this.slideTitle.getText().then(function(text) {
        return text;
    });
};

this.clickPrevLink = function() {
    this.prevLink.click();
    return this.waitForTransition(); // fix this and the rest should work
};
then your test could be...
it('moves to previous question after clicking previous link', function() {
var title = registerPage.getSlideTitleText();
registerPage.clickPrevLink();
expect(registerPage.getSlideTitleText()).not.toBe(title);
});
I'm trying to get my script to wait for user input (a click of a button) before continuing. This is very feasible in other languages, but seems impossible in JS. Basically, I want the user to select an option within a given time frame; if the user selects the wrong option, they're told so and the script then continues. Otherwise, if after a certain amount of time there's no response, the script just continues anyway, showing them the correct answer. But there seems to be nothing in JS to make the script wait for that user input! I've tried a while loop, but that is just a big no-no in JS. I've used setTimeout, but it has no real effect because the script just continues as normal and then performs an action after x amount of time. I've tried setting variables and letting the script continue only if one has a particular value, which is set only when the user clicks (e.g. var proceed = false, set to true only when the user clicks a button), but it still doesn't work. I've tried so many other solutions but nothing actually seems to be working. I like the idea of a while loop, because it does exactly what I want, but it completely freezes my browser. Is there a more efficient type of loop that will behave in the same manner without crashing my browser?
Here's my code below that completely freezes my computer. This method is called within a for loop, which calls another method after it.
function getUserResp(){
$("#countdown").countdown({seconds: 15});
setTimeout("proceed=true", 16000);
$("#ans1").click(function(){
ansStr=$(this).text();
checkAns(ansStr);
});
$("#ans2").click(function(){
ansStr=$(this).text();
checkAns(ansStr);
});
$("#ans3").click(function(){
ansStr=$(this).text();
checkAns(ansStr);
});
I would like something like this... or just some sort of loop to make the script wait before going ahead, so at least it gives the user some time to respond rather than running straight through!
    do{
        $(".ans").mouseover(function(){
            $(this).addClass("hilite").fadeIn(800);
        });
        $(".ans").mouseout(function(){
            $(this).removeClass("hilite");
        });
    }while(proceed==false);
}
You're doing it wrong.
JavaScript in the browser uses an event-driven model. There's no main function, just callbacks that are called when an event happens (such as document ready or anchor clicked). If you want something to happen after a user clicks something, then put a listener on that thing.
What you've done just keeps adding an event listener every time round the loop.
If you want to wait for user input then just don't do anything - the browser waits for user input (it's got an internal event loop). The worst thing you can do is try to reimplement your own event loop on top of the browser's.
You need to learn JavaScript. Trying to write JavaScript like you would another language only leads to pain and suffering. Seriously.
Douglas Crockford said it best:
JavaScript is a language that most people don’t bother to learn before they use. You can’t do that with any other language, and you shouldn’t want to, and you shouldn’t do that with this language either. Programming is a serious business, and you should have good knowledge about what you’re doing, but most people feel that they ought to be able to program in this language without any knowledge at all, and it still works. It’s because the language has enormous expressive power, and that’s not by accident.
You can't block the Javascript from running in the same way that you can in some other imperative languages. There's only one thread for Javascript in the browser, so if you hang it in a loop, nothing else can happen.
You must use asynchronous, event-driven programming. Setting a click handler (or whatever) combined with a timeout is the right way to start. Start a 15 second setTimeout. Inside the click handler for the answers, cancel the timeout. This way the timeout's handler only happens if the user doesn't click an answer.
For example:
var mytimeout = setTimeout(function() {
    // This is an anonymous function that will be called when the timer goes off.
    alert("You didn't answer in time.");
    // Remove the answer so the user can't click it anymore, etc...
    $('#ans').hide();
}, 15000);

$('#ans').click(function() {
    // Clear the timeout, so it will never fire the function above.
    clearTimeout(mytimeout);
    alert("You picked an answer!");
});
See how the code must be structured such that it's event-driven. There's no way to structure it to say "do this thing, and wait here for an answer."
You're looking at client-side javascript as if it wasn't already in an event-driven loop. All you need to do is wait for the appropriate event to happen, and if it hasn't happened yet, continue to wait, or else perform some default action.
You don't need to:
create main loop: // All
wait for user input // Of
timer = start_timer() // This
// Is done for you
if [user has input data]:
process_data()
else if [timer > allowed_time]:
process_no_data()
else:
wait() // By the Browser
You only need the middle part. All you need to do is (Actual javascript follows, not pseudo-code):
// First, store all of the answer sections,
// so you're not grabbing them every time
// you need to check them.
var answers = {};
answers.ans1 = $("#ans1");
answers.ans2 = $("#ans2");
answers.ans3 = $("#ans3");
// This is a flag. We'll use it to check whether we:
// A. Have waited for 16 seconds
// B. Have correct user input
var clear_to_proceed = false;
var timer_id;
// Now we need to set up a function to check the answers.
function check_answers() {
    if ( ! clear_to_proceed ) {
        // I assume checkAns returns true if the answer is correct
        // and false if it is wrong.
        clear_to_proceed = checkAns(answers.ans1.text()) ||
                           checkAns(answers.ans2.text()) ||
                           checkAns(answers.ans3.text());
    }
    if ( clear_to_proceed ) {
        clearTimeout(timer_id);
        return true; // Or do whatever needs be done,
                     // as the client has answered correctly.
    } else {
        // If we haven't set a timer yet, set one.
        if ( typeof timer_id === 'undefined' ) {
            timer_id = setTimeout(function(){
                // After 16 seconds have passed we'll check their
                // answers one more time and then force the default.
                check_answers();
                clear_to_proceed = true;
                check_answers();
            }, 16000);
        }
        return false; // We're just waiting for now.
    }
}
// Finally, we check the answers any time the user interacts
// with the answer elements.
$("#ans1,#ans2,#ans3").bind("focus blur", function() {
    check_answers();
});
I have a function called save(), this function gathers up all the inputs on the page, and performs an AJAX call to the server to save the state of the user's work.
save() is currently called when a user clicks the save button, or performs some other action which requires us to have the most current state on the server (generate a document from the page for example).
I am adding the ability to auto-save the user's work every so often. First, I would like to prevent an auto-save and a user-generated save from running at the same time. So we have the following code (I am cutting most of the code and this is not 1:1, but it should be enough to get the idea across):
var isSaving=false;
var timeoutId;
var timeoutInterval=300000;
function save(showMsg)
{
//Don't save if we are already saving.
if (isSaving)
{
return;
}
isSaving=true;
//disables the autoSave timer so if we are saving via some other method
//we won't kick off the timer.
disableAutoSave();
if (showMsg) { /* show a saving popup */ }
params=CollectParams();
PerformCallBack(params,endSave,endSaveError);
}
function endSave()
{
isSaving=false;
//hides popup if it's visible
//Turns auto saving back on so we save x milliseconds after the last save.
enableAutoSave();
}
function endSaveError()
{
alert("Ooops");
endSave();
}
function enableAutoSave()
{
timeoutId=setTimeout(function(){save(false);},timeoutInterval);
}
function disableAutoSave()
{
clearTimeout(timeoutId);
}
My question is: is this code safe? Do the major browsers only allow a single thread to execute at a time?
One thought I had is that it would be worse for the user to click save and get no response because we are auto-saving (and I know how to modify the code to handle this). Does anyone see any other issues here?
JavaScript in browsers is single threaded. You will only ever be in one function at any point in time. Functions will complete before the next one is entered. You can count on this behavior, so if you are in your save() function, you will never enter it again until the current one has finished.
Where this sometimes gets confusing (and yet remains true) is when you have asynchronous server requests (or setTimeouts or setIntervals), because then it feels like your functions are being interleaved. They're not.
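A tiny illustration of that run-to-completion behavior (nothing here is from your code, it's just the general rule):

function save() {
    setTimeout(function () {
        // Even with a 0 ms delay, this cannot run until save() has
        // returned - there is only one thread working through the queue.
        console.log("second");
    }, 0);
    console.log("first");
}

save(); // always logs "first", then "second"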
In your case, while two save() calls will not overlap each other, your auto-save and user save could occur back-to-back.
If you just want a save to happen at least every x seconds, you can do a setInterval on your save function and forget about it. I don't see a need for the isSaving flag.
I think your code could be simplified a lot:
var intervalTime = 300000;
var intervalId = setInterval("save('my message')", intervalTime);
function save(showMsg)
{
if (showMsg) { /* show a saving popup */ }
params=CollectParams();
PerformCallBack(params, endSave, endSaveError);
// You could even reset your interval now that you know we just saved.
// Of course, you'll need to know it was a successful save.
// Doing this will prevent the user clicking save only to have another
// save bump them in the face right away because an interval comes up.
clearInterval(intervalId);
intervalId = setInterval("save('my message')", intervalTime);
}
function endSave()
{
// no need for this method
alert("I'm done saving!");
}
function endSaveError()
{
alert("Ooops");
endSave();
}
All major browsers only support one javascript thread (unless you use web workers) on a page.
XHR requests can be asynchronous, though. But as long as you disable the ability to save until the current request to save returns, everything should work out just fine.
My only suggestion, is to make sure you indicate to the user somehow when an autosave occurs (disable the save button, etc).
All the major browsers currently single-thread JavaScript execution (as long as you don't use web workers, which a few browsers already support), so this approach is safe.
For a bunch of references, see Is JavaScript Multithreaded?
Looks safe to me. Javascript is single threaded (unless you are using webworkers)
It's not quite on topic, but this post by John Resig covers JavaScript threading and timers:
http://ejohn.org/blog/how-javascript-timers-work/
I think the way you're handling it is best for your situation. By using the flag you're guaranteeing that the asynchronous calls aren't overlapping. I've had to deal with asynchronous calls to the server as well and also used some sort of flag to prevent overlap.
As others have already pointed out, JavaScript is single threaded, but asynchronous calls can be tricky if you're expecting things to stay the same or not happen during the round trip to the server.
One thing, though, is that I don't think you actually need to disable the auto-save. If the auto-save tries to happen when a user is saving then the save method will simply return and nothing will happen. On the other hand you're needlessly disabling and reenabling the autosave every time autosave is activated. I'd recommend changing to setInterval and then forgetting about it.
Also, I'm a stickler for minimizing global variables. I'd probably refactor your code like this:
var saveWork = (function() {
    var isSaving=false;
    var timeoutId;
    var timeoutInterval=300000;

    function endSave() {
        isSaving=false;
        //hides popup if it's visible
    }

    function endSaveError() {
        alert("Ooops");
        endSave();
    }

    function _save(showMsg) {
        //Don't save if we are already saving.
        if (isSaving) {
            return;
        }
        isSaving=true;
        if (showMsg) { /* show a saving popup */ }
        params=CollectParams();
        PerformCallBack(params,endSave,endSaveError);
    }

    return {
        save: function(showMsg) { _save(showMsg); },
        enableAutoSave: function() {
            timeoutId=setInterval(function(){_save(false);},timeoutInterval);
        },
        disableAutoSave: function() {
            clearInterval(timeoutId);
        }
    };
})();
You don't have to refactor it like that, of course, but like I said, I like to minimize globals. The important thing is that the whole thing should work without disabling and reenabling autosave every time you save.
Edit: Forgot I had to create a private _save function to be able to reference it from enableAutoSave.
Are you able to halt JavaScript execution without locking up the browser? The way you would normally halt execution is to do an infinite while() loop, but in the case of Firefox, it locks up the browser until the loop has ended.
What's your take on this?
I am trying to override window.confirm() to implement my own dialog using HTML. I am doing this so I don't have to change existing code (it's a pretty big code-base).
I need to be able to halt execution to allow user-input; to in turn return a boolean like the standard confirm function does:
if (confirm("..."))
{
// user pressed "OK"
}
else
{
// user pressed "Cancel"
}
Update
To my knowledge, this cannot be done using setTimeout() or setInterval(), since these functions execute the code that's given to them asynchronously.
confirm(), prompt() and alert() are special functions--they call out of the JavaScript sandbox into the browser, and the browser suspends JavaScript execution. You can't do the same thing, since you need to build your functionality in JavaScript.
I don't think there's a great way to drop in a replacement without doing some restructuring along the lines of:
myconfirmfunction(function() {
/* OK callback */
}, function() {
/* cancel callback */
});
Either use callbacks or make your code Firefox-only. In Firefox with support for JavaScript 1.7 and higher, you can use the yield statement to simulate your desired effect. I have created a library for this purpose called async.js. The standard library for async.js includes a confirm method, which can be used as such:
if (yield to.confirm("...")) {
// user pressed OK
} else {
// user pressed Cancel
}
You cannot stop the event thread in JavaScript, so instead you have to work around the problem, usually by using callback functions. These are functions that are run at a later time, but can be passed around like any other object in JavaScript. You might be familiar with them from AJAX programming. So, for example:
doSomeThing();
var result = confirm("some importart question");
doSomeThingElse(result);
Would be converted into:
doSomeThing();
customConfirm("some importart question", function(result){
doSomeThingElse(result);
});
where customConfirm now takes a question and passes the result to the function it takes as an argument. If you implement a DOM dialog with a button, then connect an event listener to the OK and CANCEL buttons, and call the callback function when the user clicks on one of them.
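A bare-bones customConfirm along those lines could look like this rough sketch (the confirm-dialog, confirm-text, confirm-ok and confirm-cancel elements are hypothetical markup you'd add to the page yourself):

function customConfirm(question, callback) {
    var dialog = document.getElementById("confirm-dialog");
    var okBtn = document.getElementById("confirm-ok");
    var cancelBtn = document.getElementById("confirm-cancel");

    document.getElementById("confirm-text").textContent = question;
    dialog.style.display = "block";

    function finish(result) {
        // Hide the dialog, detach the handlers, and hand the boolean to the
        // caller - the callback plays the role of confirm()'s return value.
        dialog.style.display = "none";
        okBtn.onclick = null;
        cancelBtn.onclick = null;
        callback(result);
    }

    okBtn.onclick = function () { finish(true); };
    cancelBtn.onclick = function () { finish(false); };
}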
There is an extension to the JavaScript language called StratifiedJS. It runs in every browser, and it allows you to do just that: halting one line of JavaScript code without freezing the browser.
You can enable Stratified JavaScript e.g. by including Oni Apollo ( http://onilabs.com/docs ) in your webpage like:
<script src="http://code.onilabs.com/latest/oni-apollo.js"></script>
<script type="text/sjs"> your StratifiedJS code here </script>
Your code would look like this:
var dom = require("dom");
displayYourHtmlDialog();
waitfor {
dom.waitforEvent("okbutton", "click");
// do something when the user pressed OK
}
or {
dom.waitforEvent("cancelbutton", "click");
}
hideYourHtmlDialog();
// go on with your application
The way you normally halt execution should hardly ever be an infinite while loop.
Break up your work into parts that you call with setTimeout.
Change this:
DoSomeWork();
Wait(1000);
var a = DoSomeMoreWork();
Wait(1000);
DoEvenMoreWork(a);
to this:
DoSomeWork();
setTimeout(function() {
    var a = DoSomeMoreWork();
    setTimeout(function() {
        DoEvenMoreWork(a);
    }, 1000);
}, 1000);
I don't think there's any way to reasonably re-create the functionality of confirm() or prompt() in your own JavaScript. They're "special" in the sense of being implemented as calls into the native browser library. You can't really do a modal dialog of that sort in JavaScript.
I have seen various UI libraries that simulate the effect by putting an element on top of the page, that looks & acts like a modal dialog, but those are implemented using async callbacks.
You will have to modify the existing library, rather than replacing window.confirm.
I tried using tight looping for this. I needed to slow down a native event (which AFAIK is the only use case for a synchronous wait that can't be re-architected asynchronously). There are lots of example loops out there that claim not to lock up the browser; but none of them worked for me (the browser didn't lock up, but they prevented it from doing the thing I was waiting for in the first place), so I abandoned the idea.
Next I tried this - storing and replaying the event, which seems to be impossible cross-browser too. However depending on the event and how flexible you need to be, you can get close.
In the end I gave up, and feel much better for it; I found a way to make my code work without having to slow down the native event at all.