Firebase's atomic increment can be used in update or set, but neither returns the updated value on completion. So I have to call once('value') immediately after the update or set:
var submitref = firebase.database().ref('/sequence/mykey');
return submitref.set(firebase.database.ServerValue.increment(1)).then(_ => {
  return submitref.once('value').then(snap => snap.val());
});
Let's assume two threads are executing this code concurrently. submitref.set() will work fine because of the atomic increment. But if they both complete submitref.set() at the same time and then execute submitref.once('value') at the same time, both threads will receive the same value, incremented by 2.
Is this a possibility or am I not understanding it correctly?
The increment operation executes atomically on the server, but there is no guarantee that all clients (or even any client) will see every intermediate state.
Your use case of keeping a sequential, monotonically incrementing counter is better suited to a transaction, since with a transaction the client controls the new value based on the current value.
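For example, a counter kept with a transaction could look roughly like this (a sketch against the same /sequence/mykey path; error handling omitted):
var submitref = firebase.database().ref('/sequence/mykey');
// The update function receives the current value and returns the new one,
// so each client computes its increment from what it actually read.
return submitref.transaction(function (current) {
  return (current || 0) + 1;
}).then(function (result) {
  // result.snapshot holds the value this client's commit produced
  return result.snapshot.val();
});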
JavaScript is a single-threaded language. Every bit of code executes in some order relative to any other bit of code. There is no threading contention except through any native libraries, which is not the case here. Also, Realtime Database pipelines all of its operations over a single connection in the order they were received, so there is a consistent order to its operations as well.
All that said, I imagine you could get into a situation where the two calls to set() happen before the two calls to once(), which means they would both show the twice-incremented value.
In this case, you might be better off overall using a listener with on(), so that you simply know the most recent value at any time and can act on it whenever it's seen to change, regardless of what else happens after the listener is established.
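A minimal sketch of that approach (detach the listener with off() when you no longer need updates):
var submitref = firebase.database().ref('/sequence/mykey');
// Fires once with the current value and again on every subsequent change,
// so you always act on the latest counter rather than a one-shot read.
submitref.on('value', function (snap) {
  var current = snap.val();
  // react to the latest value here
});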
My server-side Blazor app calls a JavaScript function multiple times; the function is supposed to move a div around (using setInterval).
The issue is that each call does not wait for the previous one to finish. As a result, the DOM is changed by different JS interop calls at the same time, which leads to unpredictable results. I was expecting the calls to be stacked up and run one by one.
Do you have an idea how I can solve this issue? Thanks a lot!
Seems like the only way is to use custom data attributes on the DOM as a lock.
Something like this: data-lockincrement="0"
Each JSInterop call should include that lockincrement value, and the setInterval keeps waiting until lockincrement matches it, which means the calls will be executed sequentially.
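On the JavaScript side that could look roughly like this (just a sketch; moveDiv, the element id, the step logic and the 16 ms interval are all made up):
// Each interop call passes the lock value it expects to see on the element.
// The interval only starts moving the div once the previous call has finished
// and bumped data-lockincrement to that expected value.
function moveDiv(elementId, expectedLock) {
  var el = document.getElementById(elementId);
  var timer = setInterval(function () {
    if (parseInt(el.dataset.lockincrement, 10) !== expectedLock) {
      return; // a previous call is still running; keep waiting
    }
    // ... move the div one step here ...
    var done = true; // replace with the real end-of-animation check
    if (done) {
      el.dataset.lockincrement = String(expectedLock + 1); // release the lock for the next call
      clearInterval(timer);
    }
  }, 16);
}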
I'm making modifications to this file https://github.com/davidguttman/react-pivot/blob/master/index.jsx#L84 to move the Dimensions component out to a parent component.
One strange thing I noticed is that I have to call setTimeout(this.updateRows, 0) instead of this.updateRows() for the views to update correctly.
Any idea why this is so? AFAIK, setTimeout(_,0) simply makes the function call asynchronous (i.e. allows concurrent execution for performance). Why would that help with rendering views correctly? I'm asking this question to avoid "Programming by Coincidence".
This is because setState is asynchronous.
Since you are reading from this.state in the updateRows function, it won't work until the state is actually updated.
Using setTimeout as you did is one way to allow the state to update: setState will complete, and then updateRows will execute on a later turn of the event loop.
A better way would be to use the callback parameter of setState:
this.setState({dimensions: updatedDimensions}, () => {
this.updateRows();
});
Another option is to keep any state changes in an object and pass it into the function instead of reading directly from this.state, but this can lead to much more complexity.
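For instance, something along these lines (a sketch; computeDimensions stands in for however the new dimensions are produced, and it assumes updateRows can take them as an argument):
// Compute the new dimensions once and hand them to both setState and updateRows,
// so updateRows never depends on this.state having been flushed yet.
var updatedDimensions = computeDimensions();
this.setState({dimensions: updatedDimensions});
this.updateRows(updatedDimensions);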
In this instance it's probably less about "concurrent execution" and more about the event loop. The setTimeout call removes function execution from the current call stack and adds an entry to the message queue. The currently executing stack will run to completion before the next message in the queue begins execution.
I don't know why this is required in this particular instance - some sort of state must be getting set in the current stack that's required for updateRows to produce the desired result.
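The ordering the message queue imposes is easy to see with a trivial example:
console.log('first');
setTimeout(function () {
  console.log('third'); // queued as a message; runs only after the current stack finishes
}, 0);
console.log('second');
// logs: first, second, third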
I'm trying to adjust Google's diff-match-patch javascript library to make one of its calls asynchronous. The problem is that the entire library is built synchronously, so closures return objects rather than sending them through callback functions. I'd like to make one of the functions asynchronous without having to rewrite the entire library, which would be an enormous task because it would require rewriting every method up the stack (since the library is very modular). Instead, I would like to make one call asynchronous with setTimeout, have it set the data I want to return on an appropriately scoped variable, and then have the function waiting for that data loop until it receives it. Is this a reliable way to handle this, and will it work? What's a good practice for this?
I have an interesting situation that my usually clever mind hasn't been able to come up with a solution for :) Here's the situation...
I have a class that has a get() method... this method is called to get stored user preferences... what it does is call on some underlying provider to actually get the data... as written now, it's calling on a provider that talks cookies... so, get() calls providerGet() let's say, providerGet() returns a value and get() passes it along to the caller. The caller expects a response before it continues its work, obviously.
Here's the tricky part... I'm now trying to implement a provider that is asynchronous in nature (using local storage in this case)... so, providerGet() would return right away, having dispatched a call to local storage that will, some time later, call a callback function that was passed to it... but, since providerGet() already returned, and so did get() by extension to the original caller, it obviously hasn't returned the actual retrieved data.
So, the question is simply: is there a way to essentially "block" the return from providerGet() until the asynchronous call returns? Note that for my purposes I'm not concerned with the performance implications this might have, I'm just trying to figure out how to make it work.
I don't think there's a way, certainly I know I haven't been able to come up with it... so I wanted to toss it out and see what other people can come up with :)
edit: I'm just learning now that the core of the problem, the fact that the Web SQL API is asynchronous, may have a solution... turns out there's a synchronous version of the API as well, something I didn't realize... I'm reading through docs now to see how to use it, but that would solve the problem nicely since the only reason providerGet() was written asynchronously at all was to allow for that provider... the code that get() is a part of is my own abstraction layer above various storage providers (cookies, Web SQL, localStorage, etc) so the lowest common denominator has to win, which means if one is asynchronous they ALL have to be asynchronous... the only one that was is Web SQL... so if there's a way to do that synchronously my point becomes moot (still an interesting question generically I think though)
edit2: Ah well, no help there it seems... it looks like the synchronous version of the API isn't implemented in any browser, and even if it were, it's specified that it can only be used from worker threads, so this doesn't seem like it'd help anyway. Although, reading some other things, it sounds like there's a way to pull off this trick using recursion... I'm throwing together some test code now, I'll post it if/when I get it working; it seems like a very interesting way to get around any such situation generically.
edit3: As per my comments below, there's really no way to do exactly what I wanted. The solution I'm going with to solve my immediate problem is to simply not allow usage of Web SQL for data storage. It's not the ideal solution, but as that spec is in flux and not widely implemented anyway, it's not the end of the world... hopefully when it's properly supported the synchronous version will be available and I can plug in a new provider for it and be good to go. Generically, though, there doesn't appear to be any way to pull off this miracle... it confirms what I expected was the case, but I wish I were wrong this one time :)
spawn a webworker thread to do the async operation for you.
pass it info it needs to do the task plus a unique id.
the trick is to have it send the result to a webserver when it finishes.
meanwhile...the function which spawned the webworker sends an ajax request to the same webserver
use the synchronous flag of the XMLHttpRequest object (yes, it has a synchronous option). since it will block until the http request is complete, you can just have your webserver script poll the database for updates or whatever until the result has been sent to it.
ugly, i know. but it would block without hogging cpu :D
basically
function get(...) {
  spawnWebworker(...);               // kick off the async work off the main thread
  var xhr = sendSynchronousXHR(...); // blocks until the server has the result
  return xhr.responseText;
}
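for what it's worth, the synchronous option mentioned above is the third argument to open(), so sendSynchronousXHR could be roughly this (the /poll.php URL is just a made-up placeholder for whatever your webserver script is):
function sendSynchronousXHR(id) {
  var xhr = new XMLHttpRequest();
  // passing false as the third argument makes send() block until the response arrives
  xhr.open('GET', '/poll.php?id=' + id, false);
  xhr.send(null);
  return xhr;
}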
No, you can't block until the asynch call finishes. It's that simple.
It sounds like you may already know this, but if you want to use asynchronous ajax calls, then you have to restructure the way your code is used. You cannot just have a .get() method that makes an asynchronous ajax call, blocks until it's complete and returns the result. The design pattern most commonly used in these cases (look at all of Google's javascript APIs that do networking, for example) is to have the caller pass you a completion function. The call to .get() will start the asynchronous operation and then return immediately. When the operation completes, the completion function will be called. The caller must structure their code accordingly.
You simply cannot write straight, sequential procedural javascript code when using asynchronous networking like:
var result = abc.get()
document.write(result);
The most common design pattern is like this:
abc.get(function(result) {
document.write(result);
});
If your problem is several calling layers deep, then callbacks can be passed along to different levels and invoked when needed.
FYI, newer browsers support the concept of promises which can then be used with async and await to write code that might look like this:
async function someFunc() {
let result = await abc.get()
document.write(result);
}
This is still asynchronous. It is still non-blocking. abc.get() must return a promise that resolves to the value result. This code must be inside a function that is declared async and other code outside this function will continue to run (that's what makes this non-blocking). But, you get to write code that "looks" more like blocking code when local to the specific function it's contained within.
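For instance, abc.get() could wrap a callback-style provider like the providerGet() described in the question in a promise, roughly like this (just a sketch):
// Wrap a callback-style providerGet in a promise so callers can await get().
function get() {
  return new Promise(function (resolve) {
    providerGet(function (data) {
      resolve(data);
    });
  });
}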
Why not just have the original caller pass in a callback of its own to get()? This callback would contain the code that relies on the response.
The get() method will forward the callback to providerGet(), which would then invoke it when it invokes its own callback.
The result of the fetch would be passed to the original caller's callback.
function get( arg1, arg2, fn ) {
// whatever code
// call providerGet, passing along the callback
providerGet( fn );
}
function providerGet( fn ) {
// do async activity
// in the callback to the async, invoke the callback and pass it the data
// ...
fn( received_data );
// ...
}
get( 'some_arg', 'another_arg', function( data ) {
alert( data );
});
When your async method starts, I would open some sort of modal dialog (that the user cannot close) telling them that the request is in process. When the request finishes, close the modal in your callback.
One possible way to do this is with jqModal, but that would require you to load jQuery into your project. I'm not sure if that's an option for you or not.
This is ugly, but anyway I think the question is kindof implying an ugly solution is desired...
1. In your get function, serialize your query into a string.
2. Open an iframe, passing (A) this serialized query and (B) a random number in the querystring to this iframe.
3. Your iframe has some javascript code that reads the SQL query and number from its own querystring.
4. Your iframe asynchronously begins running the query.
5. When your iframe's query asynchronously finishes, it sends the result, along with the random number, to a server of yours, say to /write.php?rand=###&result="blahblahblah"
6. write.php saves this info somewhere.
7. Back in your main script, after creating and loading the iframe, you create a synchronous AJAX request to your server, say to /read.php?rand=####
8. /read.php blocks until the written info is available, then returns it to your main page.
Alternatively, to avoid sending the data over the network, you could instead have your iframe encode the result into a canvas-generated image that the browser caches (similar to the approach that zombie cookies reportedly use). Then your blocking script would try to load this image over and over again (with the small delay each network request naturally introduces) until the cached version is available, which you could recognize via some flag you've set to indicate it's done.