How can I update DOM while a method is running? - javascript

I'm writing a simple tool that works on the client side. Basically, the user brings in a file, presses a button to start it, it does stuff with it (takes about 10-15 seconds), and then it gives the modified file back.
Unfortunately, as soon as they press the button to start the method, the DOM doesn't update until the method is finished, so there's no feedback until it's all finished, which is quite frustrating.
In the template section, I have:
<p v-if="processingStatus==1">Processing data...</p>
The "processingStatus" variable is set to 0 by default.
In the method, which is "processData" and is called when the button is pressed, it starts with
this.processingStatus = 1
And then proceeds to process the data.
Unfortunately, the "Processing data..." tag doesn't show up until the method is finished. How can I force VueJS to render the DOM while the method is running?

Based on what you are saying processData does (looping through a bunch of data in arrays), my guess is that it runs synchronously and locks the JavaScript thread until it is done.
What you need to do inside processData is set the processing flag as you are now, but then do all the actual work inside a promise or some other mechanism that releases control so the UI can update.
Potentially you could also call
vm.$forceUpdate();
before starting all your array work.
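One way to sketch the deferred-work approach: set the flag, then yield control so the framework can repaint before the heavy loop starts. The component shape and the heavyWork method here are illustrative stand-ins, not the asker's actual code.

```javascript
// Sketch: flip the flag, then defer the heavy synchronous work to a later
// event-loop turn so the "Processing data..." message can render first.
const component = {
  processingStatus: 0,
  processData() {
    this.processingStatus = 1;   // flag flips, but the DOM won't repaint yet
    // Yield control: the timeout callback runs on a later event-loop turn,
    // after the framework has had a chance to patch the DOM.
    setTimeout(() => {
      this.heavyWork();          // the long synchronous loop
      this.processingStatus = 0; // hide the "Processing data..." message
    }, 0);
  },
  heavyWork() {
    // stand-in for the 10-15 seconds of array crunching
    let total = 0;
    for (let i = 0; i < 1e6; i++) total += i;
    return total;
  },
};
```

In real Vue code, wrapping the work in `this.$nextTick(...)` is another way to guarantee the DOM has been patched before the loop begins, though with a fully synchronous loop the `setTimeout` deferral is still the safer bet.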

How to refresh screen data in JavaScript without making the page slow?

I have a question in terms of code and NOT user experience, I have the JS:
$(document).on( "click", "input:radio,input:checkbox", function() {
    getContent($(this).parent(), 0);
});
The above JS gets the contents from radios and checkboxes, and it refreshes the page to show dependencies. For example if I check on yes, and the dependency is on yes, show text box, the above works!
What I want to know is, if there is a better way to do the same thing, but in a more friendly way, as this is at times, making the pages slow. Especially if I do a lot of ticks/checks in one go, I miss a few, as the parent refreshes!
If you have to hit your server in getContent(), then it will inevitably be slow.
However, you can save a lot if you send all the elements once instead of hitting the server each time a change is made.
If creating one super large page is not an option, then you need to keep your getContent() function, but there is one possible solution, in case you have not already implemented it, which is to cache all the data you queried earlier.
So you could have an object (a map) whose keys identify the data you're interested in. If the key is defined, the data is already available, and you return and use that data directly from the cache. Otherwise, you have to hit the server.
One thing to do, since you mentioned slowness as you 'tick' things back and forth, is to never send more than one request at a time to the server (with a timeout in case the server never replies). The process here is:
1. Need data 'xyz'
2. Is that data already cached? If yes, skip steps 3 and 4
3. Is a request already being worked on? If yes, push this request on the request queue and return
4. Send a request to the server, which blocks any further requests until the answer for 'xyz' is received
5. Receive the answer, cache the data in an object (map), and release the request queue
6. Make use of the data as required
7. Check the request queue; if it is not empty, pop the next request and continue from step 2
The request processing is expected to run on a timer because (1) it can time out and (2) it needs to run in the background (not preempting the GUI).
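The steps above can be sketched roughly as follows. fetchFromServer is a stand-in for the real server round trip, and a production version would also add the timeout mentioned above:

```javascript
// Cache plus single-in-flight-request queue, per the numbered steps above.
const cache = {};    // key -> data already received from the server
const pending = [];  // requests queued while another request is in flight
let inFlight = false;

function fetchFromServer(key) {
  // stand-in for the real AJAX round trip
  return new Promise((resolve) => setTimeout(() => resolve("data-for-" + key), 5));
}

function getData(key, onReady) {
  if (key in cache) {              // step 2: already cached, use it directly
    onReady(cache[key]);
    return;
  }
  pending.push({ key, onReady });  // step 3: queue the request
  processQueue();
}

function processQueue() {
  if (inFlight || pending.length === 0) return;  // one request at a time
  const next = pending.shift();
  if (next.key in cache) {         // answered while it sat in the queue
    next.onReady(cache[next.key]);
    processQueue();
    return;
  }
  inFlight = true;                 // step 4: block further requests
  fetchFromServer(next.key).then((data) => {
    cache[next.key] = data;        // step 5: cache and release the queue
    inFlight = false;
    next.onReady(data);            // step 6: use the data
    processQueue();                // step 7: pop the next queued request
  });
}
```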

JavaScript callbacks and control flow

When are callbacks executed, for example the callback of a setTimeout() or a click event?
Do they pause code, that is already running, or do they wait until it has finished?
Example
I have a data structure (incrementalChanges) that records state changes caused by user interactions, for example mouse clicks. If I want to send all changes to another peer, I send him this data structure.
Another possibility is a full synchronisation (makeFullSync()): I send him my complete current state, after which I must empty the incremental changes (deleteIncrementalChanges()). That is what you can see in the code. However, I am not sure what happens if a user clicks something exactly between these two function calls. If that event fires immediately, an item would be added to the incrementalChanges structure and then deleted right away by the second call, so it would never be sent and the other peer's state would become invalid.
makeFullSync();
/* what if between these 2 calls a new change is made, that is saved in the
changes data structure, that will be deleted by deleteIncrementalChanges()?
Then this change would be lost? If I change the order it is not better ...
*/
deleteIncrementalChanges();
Some good links and, in the case the first scenario (it pauses running code) is true, solutions are welcomed.
JavaScript is single threaded, and keeps an event queue of work it needs to get to once it's done running the code it's currently working on. It will not start the next event in the queue until the current one is finished.
If you make multiple asynchronous calls, such as calls for a server to update data on another client, you need to structure your code to handle the case where they don't necessarily reach the second client in the same order.
If you're sending changes one at a time to another user, you can time stamp the changes to track what order they were made on the first client.
Do they pause code, that is already running, or do they wait until it has finished?
They wait until it has finished. JavaScript is single threaded: no two pieces of code can run at once. JS uses an event loop to handle asynchronous work. If an event such as a click handler or a timer firing happens while another piece of code is running, that event is queued up and runs after the currently running code finishes executing.
Assuming makeFullSync(); and deleteIncrementalChanges(); are called in the same chunk of code they will be executed one after another without any click events being processed until after they have both run.
One near-exception to the nothing-runs-in-parallel rule in JS is Web Workers. You can send data off to a worker for processing, which will happen in another thread. Even though workers run in parallel, their results are inserted back into the event loop like any other event.
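A minimal illustration of this queuing behaviour, with makeFullSync and deleteIncrementalChanges represented by simple log entries:

```javascript
// A timer that fires "during" a synchronous block is queued and only runs
// after the whole block finishes, even with a 0 ms delay.
const log = [];

setTimeout(() => log.push("click/timer callback"), 0); // queued immediately

log.push("makeFullSync");
// ... an arbitrarily long synchronous stretch could sit here ...
log.push("deleteIncrementalChanges");

// At this point the timer callback has still not run: both calls execute
// back to back before any queued event is processed.
```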

OpenGraph watch action - less than 50% watched

I'm using the Javascript SDK to contact the API.
Using Built-in Watch action:
When a user watches less than 50% of a video, or if a user removes a watch activity from your app/site, you should also remove the corresponding action instance that was published to Open Graph by invoking the following call
DELETE https://graph.facebook.com/[watch action instance id]
My problem is with when the user navigates away from the page.
I've tried to use the jquery unload method to make the delete call to the API but it fails to. I've also tried using ajax to make the call synchronously but this holds/freezes the browser for at least 5 seconds on average.
Any ideas?
I've tried to use the jquery unload method to make the delete call to the API but it fails to.
Well, that’s a problem with the call being asynchronous. Your unload handler fires, starts the request, and then the browser navigates away from the page. Wait, what, there’s a request still running? Let’s terminate that, since I’m about to load and display another page anyway …
I've also tried using ajax to make the call synchronously but this holds/freezes the browser for at least 5 seconds on average.
If that’s how long your call takes, then I see no realistic way of shortening that.
You could try setting up a script that terminates straight away, without giving a return value (or a yeah OK, go on with your stuff, browser response) – and finishes the rest (making the actual API call) afterwards, server-side.

Delaying a setTimeout()

I'm having an issue with some asynchronous JavaScript code that fetches values from a database using ajax.
Essentially, what I'd like to do is refresh a page once a list has been populated. For that purpose, I tried inserting the following code into the function that populates the list:
var timer1;
timer1 = setTimeout([refresh function], 1000);
As you might imagine, this works fine when the list population takes less than 1 second, but causes issues when it takes longer. So I had the idea of inserting this code into the function called on the success of each ajax call:
clearTimeout(timer1);
timer1 = setTimeout([refresh function], 1000);
So in theory, every time a list element is fetched the timer should reset, meaning that the refresh function should only ever be called 1 second after the final list element is successfully retrieved. However, in execution all that happens is that timer1 is reset once, the first time the second block of code is reached.
Can anybody see what the problem might be? Or if there's a better way of doing this? Thanks.
==========
EDIT: To clear up how my ajax calls work: one of the issues with the code's structure is that the ajax calls are actually nested; the callback method of the original ajax call is itself another ajax call, whose callback method contains a database transaction (incorrect - see below). In addition, I have two such methods running simultaneously. What I need is a way to ensure that ALL calls at all levels have completed before refreshing the page. This is why I thought that giving both methods one timer, and resetting it every time one of the callback methods was called, would keep pushing its execution back until all threads were complete.
Quite honestly, the code is very involved-- around 140 lines including auxiliary methods-- and I don't think that posting it here is feasible. Sorry-- if no one can help without code, then perhaps I'll bite the bullet and try copying it here in a format that makes some kind of sense.
==========
EDIT2: Here's a general workflow of what the methods are trying to do. The function is a 'synchronisation' function, one that both sends data to and retrieves data from the server.
I. A function is called which retrieves items from the local database
    i. Every time an item is fetched, it is sent to the server (ajax)
        a. When the ajax call returns, the item is updated locally to reflect its success/failure
II. A (separate) list of items is retrieved from the local database
    i. Every time an item is fetched, an item matching that item's ID is fetched from the server (ajax)
        a. On successful fetch from the server, the items are compared
        b. If the server-side item is more recent, the local item is updated
So the places I inserted the second code block above are in the 'i.' sections of each method, in other words, where the ajax should be calling back (repeatedly). I realize that I was in error in my comments above; there is actually never a nested ajax call, but rather a database transaction inside an ajax call inside a database transaction.
You're doing pretty well so far. The trick you want to use is to chain your events together, something like this:
function refresh()
{
    invokeMyAjaxCall(param1, param2, param3, onSuccessCallback, onFailureCallback);
}

function onSuccessCallback()
{
    // Update my objects here

    // Once all the objects have been updated, trigger another ajax call
    setTimeout(refresh, 1000);
}

function onFailureCallback()
{
    // Notify the user that something failed

    // Once you've dealt with the failures, trigger another call in 1 sec
    setTimeout(refresh, 1000);
}
Now, the difficulty with this is: what happens if a call fails? Ideally, it sounds like you want to ensure that you are continually updating information from the server, and even if a temporary failure occurs you want to keep going.
I've assumed here that your AJAX library permits you to do a failure callback. However, I've seen some cases when libraries hang without either failing or succeeding. If necessary, you may need to use a separate set of logic to determine if the connection with the server has been interrupted and restart your callback sequence.
EDIT: I suspect that the problem you've got is a result of queueing the next call before the first call is done. Basically, you're setting up a race condition: can the first call finish before the next call is triggered? It may work most of the time, or only once, or nearly always; but unless the setTimeout() is the very last statement in your "response-processing" code, this kind of race condition will always be a potential problem.
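As an alternative to resetting a timer, the "refresh only when everything has finished" condition can be tracked directly with a counter of outstanding calls. This is a sketch, not the asker's actual code; startCall, finishCall, and refreshPage are illustrative names:

```javascript
// Count outstanding async operations and refresh only when the count
// returns to zero, so nested and parallel calls are all accounted for.
let outstanding = 0;
let refreshed = false;

function startCall() {
  outstanding++;
}

function finishCall() {
  outstanding--;
  if (outstanding === 0 && !refreshed) {
    refreshed = true;
    refreshPage(); // every call, at every nesting level, is done
  }
}

function refreshPage() {
  // reload the list / page here
}

// Usage: call startCall() right before issuing each ajax request or database
// transaction (including nested ones), and finishCall() in its
// success/failure callback. Nested calls keep the counter above zero until
// the innermost callback returns, so no timer guesswork is needed.
```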

When doing AJAX edit to the database, should I update the interface immediately with the new data

I'm using inline-edit to update text in the database with AJAX. This is basically the process, pretty usual stuff:
text is not editable
I click the text, it becomes editable
I type the new text
then click to send the updated text to the database
then return the text to non-editable format
My question is when should I update the interface with the new data? Should I update it immediately before the ajax call, or should I wait for the update response to return from the database?
My concern:
If I don't update the interface immediately and wait to get the response from the database, then I've lost the asynchronous benefit that comes with ajax.
But if I update it immediately, then if the database response has an error, I somehow have to track the change I already made, and reverse it, which is a lot more work.
So how is this sort of thing usually done?
I think it is completely reasonable to wait for the response and update as a result of a callback. Doing so does not detract from the async approach. It is still fully async because you are not blocking the entire page or reloading it completely.
Plenty of times in apps, especially in mobile ones where the bandwidth might be limited, I will see a spinner indicating that the field is submitting. This does not hold up any other part of the app. Even stackoverflow does this when I use the mobile view. Rely on the callbacks in order to stay async and still be synced to database return values.
AJAX calls are pretty quick, excepting network issues of course. Personally, I don't think you will lose the benefit of AJAX by waiting for a response from the database. That is, unless you plan on it being slow because of server-side processing, etc...
Now if you were to set the textfield to a non-editable state, the user might think that his change has been accepted and will be confused when the server returns an error and the value is reverted to its original state. I would leave the field editable until the server returns.
If you are using jQuery it's pretty simple, but if you are using your homebrew ajax call script you will have to add some mechanism to see whether everything went well or badly.
$.ajax({
    url: '/ajax/doSomething.php',
    type: 'POST',
    dataType: 'json',
    data: {
        'q': $("#editableText").val()
    },
    success: function(json) {
        $("#editableText").html(json.value);
    },
    error: function() {
        alert('something went wrong!');
    }
});
So when your doSomething.php returns true or false, our ajax call does something accordingly.
Yes, ajax calls are pretty fast, but before changing the data displayed on the page you must be sure that everything went OK; otherwise the user might leave the page without knowing whether their edit succeeded or not.
The case that you have mentioned is an optimistic UI update. In this case you are assuming (implicitly) that the update will be performed on the server without any error.
The disadvantage of this approach shows up in the following scenario:
User clicks on non-editable text
Text becomes editable
User types in new text
User clicks send
The UI changes to the new text and makes it uneditable
User closes the browser window (or navigates away from the page) before the reply ever comes back (assuming that the change was performed)
Next time the user logs in (or comes back to the page) they are confused as to why the change did not apply!
However you also want to use the asynchronous nature of ajax and make sure that the user can still interact with your app (or the rest of the page) as this change is being performed.
The way we do that (at my work-place) would typically be (using long polling or http push)
The user interacts with non-editable text
The text becomes editable
User types in new text
User clicks send
Instead of updating the text optimistically, we show some kind of spinner (only on the text) that indicates to the user that we are waiting for a response from the server. Note that since we are only disabling the part of the page that shows this text, the user does not have to wait for the ajax call to return in order to interact with the rest of the page. (In fact we have many cases that support updating multiple parts of the page in parallel.) Think of Gmail, where you might be chatting with someone in the browser window and at the same time receive an email: the chat can go on and the email counter is also incremented. Both are on the same page but do not depend on (or wait for) each other.
When the update is complete we take away the spinner (which is usually shown using css - toggle class) and replace the element's value with the new text.
Lastly if you are doing something like this make sure that your server also has support for adding a pending class to the text field (that will cause a spinner to appear). This is done for the following reason
User updates text and submits
User immediately navigates to a different page
User then comes back to the original page again (and let us assume that the update is not yet complete)
Since your server knows that the field is being updated and can add a pending css class to the label / display field the page comes up loaded with a spinner.(On the other hand if there is no pending request there will be no spinner shown)
Finally when the long poller message comes back from the server the spinner is taken off and the current value is displayed
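The pending-spinner flow described above can be sketched roughly like this. The field object and the saveToServer callback are stand-ins for the real DOM element and ajax call:

```javascript
// Non-optimistic update: mark only this field as pending (spinner), keep the
// rest of the page interactive, and swap in the new text only on confirmation.
function submitEdit(field, newText, saveToServer) {
  field.pending = true; // in the browser: add a "pending" CSS class -> spinner
  return saveToServer(newText)
    .then((savedText) => {
      field.value = savedText; // server confirmed: show the new text
    })
    .catch(() => {
      // server failed: keep the old value and let the user retry
    })
    .finally(() => {
      field.pending = false; // remove the spinner either way
    });
}
```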
