Why is a synchronous XMLHttpRequest deprecated?
I would appreciate it if the browser waited for the data instead of
letting the user keep clicking around before the data query has finished.
I would like to write a program with just one step after the other.
Moreover, the code should be properly structured, and I wonder how
I can achieve this without the promise-hell.
Is there any synchronous alternative to XMLHttpRequest that is not "deprecated"?
Thanks
Thomas
Why is a synchronous XMLHttpRequest deprecated?
Because it blocks the event loop and gives your end users a bad experience.
Basically, it "freezes" the page while the request is in flight.
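To make that concrete, here is a minimal sketch (the endpoint URL is made up): passing false as the third argument to open() is what makes the request synchronous, and the script - along with the whole page - stalls on send() until the response arrives.
// Synchronous (deprecated on the main thread): freezes the page until the server responds.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some-endpoint', false); // third argument false = synchronous
xhr.send();
console.log(xhr.responseText);

// Asynchronous: the browser stays responsive; the handler runs when the data arrives.
var xhr2 = new XMLHttpRequest();
xhr2.open('GET', '/some-endpoint', true);
xhr2.onload = function () { console.log(xhr2.responseText); };
xhr2.send();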
I would appreciate it if the browser waited for the data
That is not how browsers work, though. Fortunately, with practice the way browsers do work starts to make an awful lot of sense - enough that languages which previously relied mostly on synchronous I/O, like Python, have added "promises" too.
I would like to write a program with just one step after the other.
You can return responses from asynchronous calls in JavaScript. Using async/await, the code looks pretty similar to synchronous code and stays readable. See the async function page on MDN.
Moreover, the code should be properly structured, and I wonder how I can achieve this without the promise-hell.
"Promise-hell" refers to the fact that once a function does asynchronous I/O, every function that calls it has to deal with promises as well. I'm not convinced that's a bad thing, or that there is anything wrong with:
async function sequence() {
  let one = await fetch('/your-endpoint?someParam').then(x => x.json());
  // do something with one
  let param = one.param;
  let two = await fetch('/your-other?param=' + param).then(x => x.json());
  // do something with second call
}
sequence();
That is, async/await lets you write asynchronous code in a synchronous style.
Don't expect to understand this all at once - it's OK if it takes time.
Related
This is a question about performance more than anything else.
Node exposes three different types of methods to accomplish various filesystem tasks:
Promises API (async)
Callback API (async)
Synchronous API (sync)
I've read more articles and Stack Overflow answers than I can count, all of which claim that you never need the sync methods.
I recently wrote a script which required a couple of directories to be created if they didn't already exist. While doing this, I noticed that if I used the async/await methods (primarily fs.promises.mkdir and fs.promises.access), the event loop would simply continue to the next async bit of code, regardless of the fact that the next bits require those directories to exist. This is expected behavior; after all, it's async.
I understand this could be solved with a nice little callback-hell session, but that isn't the question; the question is whether the Promises API can really be used in place of all the other methods.
The question then becomes:
Is it ever better to use Node's filesystem sync methods over the same async methods?
Is it ever truly required in situations like this to block the process?
Or said differently:
Is it possible to completely avoid sync methods and ONLY use the promises api (NOT promises + callbacks)?
It seems like using the sync methods (given my situation above, where the directories are required to exist before any other call is made) can be EXTREMELY useful for writing readable, clear code, even though it may negatively impact performance.
With that being said, there's an overwhelming amount of information saying that the sync API is completely useless and never required.
Again, this purely concerns the Promises API. Yes, callbacks and promises are both async, but the difference between the job queue and the message queue makes the two APIs behave quite differently in this context.
PS: For additional context, I've provided a code sample so you don't have to imagine my example ;)
Thanks! :)
// Checks if a dir exists and, if not, creates it. (not the actual code, just an example)

// Sync version
if (!fs.existsSync(dirPath)) {
  fs.mkdirSync(dirPath);
}

// Async version (runs inside an async function)
try {
  await fs.promises.access(dirPath);
} catch {
  await fs.promises.mkdir(dirPath);
}
It depends on the situation. The main benefit of the sync methods is that they make their results easier to consume, and the main disadvantage is that they prevent all other code from executing while they work.
If you find yourself in a situation where other code not being able to respond to events is not an issue, you might consider it reasonable to use the sync methods - that is, if the code in question has no chance of, or reason for, running in parallel with anything else.
For example, you would definitely not want to use the sync methods inside, say, a server while it's handling a request.
If your code needs to read some configuration files (or create some folders) when the script first runs, and there aren't enough of them that parallelism would be a benefit, you can consider using the sync methods.
That said, even if your current implementation doesn't require parallelism, keep in mind that the situation may change: if you later find that you do need to allow for parallel processing, you won't have to change any of your existing code if you started out with the promise-based methods in the first place. And if you understand the language, using promises properly should be pretty easy, so if there's any chance of that, you might consider using the promise-based methods anyway.
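For the startup case in the question, here is a minimal promise-based sketch (the main() wrapper and the directory path are just placeholders): fs.promises.mkdir with { recursive: true } does not reject when the directory already exists, so the separate access() check isn't needed.
const fs = require('fs');
const path = require('path');

async function main() {
  const dirPath = path.join(__dirname, 'output'); // placeholder directory

  // With { recursive: true }, mkdir resolves even if the directory is already there.
  await fs.promises.mkdir(dirPath, { recursive: true });

  // ...code here can rely on the directory existing.
}

main().catch(console.error);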
In Node 13, an external 3rd party library calls my code:
const myInput = myCode.run(somVar); // it doesn't use await
As my code then has to perform nested asynchronous calls, how can I provide an appropriate return value to the 3rd party library that is not a promise, but the result of my promises? Ideally something like this:
const run = (inputVar) => {
  let result;
  (async () => {
    result = await doSyncCalls(inputVar);
  })(); // code should not proceed until after await
  return result;
}; // will return undefined, but ideally should return the doSyncCalls result
deasync would be a good solution, except it has an unresolved bug that causes nested promises to not resolve.
Well, there is no way to synchronously return (from the function the library calls) a value that you obtain asynchronously within it. JavaScript does not currently support that. So, these are your options:
Put the asynchronous code in a child process. Then run that child process with something like child_process.execFileSync(). That allows you to run the asynchronous code in the child process, but have the parent process block and wait for the result (a rough sketch of this is below, after these options). This is a hack because it blocks the parent while waiting for the result, but it can be made to work.
Before calling the library function, prefetch whatever value you will need when the callback is called. This allows you to use regular asynchronous programming before you call the library, and then, once you have the desired value in some sort of cache, you call the 3rd party library; when it calls you back and wants the value, you have it synchronously in the cache. Obviously, this only works if you can figure out what value, or range of possible values, will be required in the callback so you can prefetch them.
Modify the code in the library to add support for an asynchronous callback.
Redesign code to work a different way that doesn't require this library or that can use some other library that doesn't have this problem.
Write or find a native-code add-on (like deasync) that lets you block during the asynchronous operation while still letting the event queue do what it needs to do to process the asynchronous completion. This would have to hook deep into the internals of the V8 engine. Or fix deasync so it works for your case.
Write a blocking add-on that carries out your asynchronous operation entirely in native code while blocking the V8 engine.
FYI, since every option except #2, #3, and #4 blocks the main JS thread, which is generally a bad thing to do in any server environment, and you've said that #2 is not practical, my preference would be #3 or #4. Since you don't share the actual code, the actual library, or the detailed problem, we can't help you with any specifics.
Probably the most straightforward option to implement is #1 (package the code up into a child process that you run synchronously), but it blocks the app while it runs, which is a downside.
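As a rough sketch of option #1 (the child script's file name is made up; doSyncCalls is the asynchronous work from the question): the child does the async work and prints JSON, while the parent blocks on execFileSync() until the child exits.
// do-async-work.js (the child): performs the asynchronous work, prints the result.
//   const result = await doSyncCalls(process.argv[2]);
//   process.stdout.write(JSON.stringify(result));

// The parent: blocks until the child process has finished.
const { execFileSync } = require('child_process');

const run = (inputVar) => {
  const stdout = execFileSync(
    process.execPath,                       // the node binary
    ['do-async-work.js', String(inputVar)], // made-up child script
    { encoding: 'utf8' }
  );
  return JSON.parse(stdout); // a synchronous return value built from asynchronous work
};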
I am trying to migrate an old JavaScript application from synchronous AJAX calls (SJAX) to asynchronous ones (AJAX). I have considered a few approaches, each of which would take quite some time:
1) Using async/await
In order to get this to work, I would have to prepend every function that ends up calling the SJAX somewhere in the stack with "async", and every call to the SJAX with "await" (see the sketch at the end of this question).
2) Analyze all possible stack traces (I do not know of any tool that can do this for JS) and do the rewrite by hand using callbacks/Promises (I don't really care about the cleanliness of the code right now). There are more than 200 occurrences of SJAX calls in the app, so I expect there may be many more possible stack traces to handle.
3) Throw an error before the actual data fetch (the SJAX call), record the caller somewhere, and repeat the call when the data is readily available (there might be other issues - imagine running the same code twice with a "stateful" variable).
4) Something completely different.
My question is: which approach do you think would be best? I know this is opinion-based, but maybe there is another way to handle this elegantly (point 4). If you know an answer to that... shoot.
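For illustration, here is a rough before/after sketch of approach 1 (the function names are invented): once the lowest-level call becomes asynchronous, every function above it in the stack has to become async, and every call to it has to be awaited.
// Before: a synchronous XHR (SJAX) somewhere deep in the stack.
//   function loadUser(id)   { return sjaxGet('/user/' + id); }
//   function renderUser(id) { var user = loadUser(id); draw(user); }

// After: the SJAX call is replaced, and async/await propagates up through every caller.
async function loadUser(id) {
  var res = await fetch('/user/' + id);  // AJAX instead of SJAX
  return res.json();
}

async function renderUser(id) {
  var user = await loadUser(id);         // the caller must await, so it becomes async too...
  draw(user);                            // ...and so on, all the way up the stack
}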
I am attempting to create a library to make API calls to a web application (Jira, if you care to know). I have my API calls working no problem, but I am looking to make the code a bit more readable and usable. I have tried searching for my needs, but it turns out I am not sure what I need to be searching for.
I am having an issue with asynchronous calls that depend on each other. I understand that I have to wait until the callback has run before starting my next item, but I am not sure of the best way to design this.
I really would like to make chaining a feature of my API, which I would hope to look like this:
createProject(jsonProjectStuff)
  .setLeadUser("myusername")
  .createBoard("boardName")
  .setBoardPermissions(boardPermissionJSONvar)
  .addRole(personRoleJSONvar);
With this example, everything would have to wait on createProject, as it returns the project. createBoard doesn't normally rely on the project, but used in this context it should be "assigned" to the project just made; setting the board permissions only relies on createBoard working; addRole is specific to the project again.
The questions I have are:
Is it possible to switch context like this and keep data between the calls, without having to hard-code the next function call inside each response handler?
If this is possible, is it a good idea? If not, I am open to other schemes.
I can think of a couple of ways to make it work, including registering the function calls in a dependency tree and then fulfilling promises as we go, although that is mostly conceptual for me at this point, as I am trying to decide on the best approach.
Edit 2/19/2016
So I have looked into this more, and I have decided on a selective "then", used only when creating a new item that doesn't relate directly to the parent.
// Numbers are IDs, the string is a name
copyProject(IDorName)
  .setRoles(JSONItem)
  .setOwner("Project.Owner")
  .setDefaultEmail("noreply@fake.com")
  .then(
    copyBoard(IDorName)
      .setName("Blah blah Name {project.key}"),
    saveFilterAs(IDorName, "Board {project.key}",
                 "project = {project.key} ORDER BY Rank ASC")
      .setFilterPermissions({shareValuesJSON})
  );
I like this solution a lot; the only thing I am unsure of is how to do the string "variables". I suppose it could be "Blah blah Name " + this.project.key,
but either way I am unsure how to give copyBoard or saveFilterAs access to the project via the "then" function.
Any thoughts?
I've been using Nightmare (a headless browser) lately.
It has a fluent API that uses a nice design pattern.
Calling the API doesn't directly execute the actions; it only queues them, and when you are ready to execute you call the end function, which returns a promise. The promise is resolved when the queue has completed its async execution.
For example, in your situation:
createProject(jsonProjectStuff)
  .setLeadUser("myusername")
  .createBoard("boardName")
  .setBoardPermissions(boardPermissionJSONvar)
  .addRole(personRoleJSONvar)
  .end() // Execute the queue of operations.
  .then(() => {
    // All operations completed.
  })
  .catch(err => {
    // An error occurred.
  });
I feel like this pattern is quite elegant. It allows you to have a fluent API for building up a sequence of actions; then, when you are ready to execute those operations, you call end (or whatever you name it). The sequence of operations is completed asynchronously, and you use the promise to handle completion and errors.
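Here is a minimal sketch of that queue-then-execute idea applied to your chain (the class name, method bodies, and the api object are all invented): each chained call only records an async step, and end() runs the steps in order, threading the created project between them.
class JiraChain {
  constructor(api) {
    this.api = api;    // invented object that actually talks to Jira
    this.queue = [];   // queued async steps, not yet executed
  }

  createProject(projectJson) {
    this.queue.push(async (ctx) => {
      ctx.project = await this.api.createProject(projectJson);
    });
    return this; // returning `this` is what makes the chaining work
  }

  setLeadUser(username) {
    this.queue.push(async (ctx) => {
      await this.api.setLeadUser(ctx.project, username);
    });
    return this;
  }

  async end() {
    const ctx = {};                  // shared state threaded through the steps
    for (const step of this.queue) {
      await step(ctx);               // each step waits for the previous one
    }
    return ctx;
  }
}

new JiraChain(api)
  .createProject(jsonProjectStuff)
  .setLeadUser("myusername")
  .end()
  .then(ctx => { /* all operations completed */ })
  .catch(err => { /* an error occurred */ });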
I have an interesting situation that my usually clever mind hasn't been able to come up with a solution for :) Here's the situation...
I have a class that has a get() method... this method is called to get stored user preferences... what it does is call some underlying provider to actually get the data... as written now, it's calling a provider that talks cookies... so get() calls providerGet(), let's say, providerGet() returns a value, and get() passes it along to the caller. The caller obviously expects a response before it continues its work.
Here's the tricky part... I am now trying to implement a provider that is asynchronous in nature (using local storage in this case)... so providerGet() would return right away, having dispatched a call to local storage that will, some time later, call a callback function that was passed to it... but since providerGet() has already returned, and so by extension has get() to the original caller, it obviously hasn't returned the actual retrieved data.
So the question is simply: is there a way to essentially "block" the return from providerGet() until the asynchronous call returns? Note that for my purposes I'm not concerned with the performance implications this might have; I'm just trying to figure out how to make it work.
I don't think there's a way - certainly I haven't been able to come up with one - so I wanted to toss it out and see what other people can come up with :)
edit: I'm just learning now that the core of the problem, the fact that the Web SQL API is asynchronous, may have a solution... it turns out there's a synchronous version of the API as well, something I didn't realize... I'm reading through the docs now to see how to use it, but that would solve the problem nicely, since the only reason providerGet() was written asynchronously at all was to allow for that provider... the code that get() is a part of is my own abstraction layer above various storage providers (cookies, Web SQL, localStorage, etc.), so the lowest common denominator has to win, which means if one is asynchronous they ALL have to be asynchronous... the only one that was is Web SQL... so if there's a way to do that synchronously, my point becomes moot (still an interesting question generically, I think).
edit2: Ah well, no help there it seems... it looks like the synchronous version of the API isn't implemented in any browser, and even if it were, the spec says it can only be used from worker threads, so it doesn't seem like it'd help anyway. Although, reading some other things, it sounds like there's a way to pull off this trick using recursion... I'm throwing together some test code now and I'll post it if/when I get it working; it seems like a very interesting way to get around any such situation generically.
edit3: As per my comments below, there's really no way to do exactly what I wanted. The solution I'm going with for my immediate problem is to simply not allow the use of Web SQL for data storage. It's not the ideal solution, but as that spec is in flux and not widely implemented anyway, it's not the end of the world... hopefully when it's properly supported the synchronous version will be available, and I can plug in a new provider for it and be good to go. Generically, though, there doesn't appear to be any way to pull off this miracle... it confirms what I expected was the case, but I wish I'd been wrong this one time :)
Spawn a web worker thread to do the async operation for you.
Pass it the info it needs to do the task, plus a unique id.
The trick is to have it send the result to a web server when it finishes.
Meanwhile... the function which spawned the web worker sends an AJAX request to the same web server.
Use the synchronous flag of the XMLHttpRequest object (yes, it has a synchronous option). Since it will block until the HTTP request is complete, you can just have your web server script poll the database for updates (or whatever) until the result has been sent to it.
Ugly, I know. But it would block without hogging the CPU :D
Basically:
function get(...args) {
  // start the async work in a worker, then block on a synchronous XHR for the result
  spawnWebworker(...args);
  var xhr = sendSynchronousXHR(...args);
  return xhr.responseText;
}
No, you can't block until the async call finishes. It's that simple.
It sounds like you may already know this, but if you want to use asynchronous AJAX calls, then you have to restructure the way your code is used. You cannot just have a .get() method that makes an asynchronous AJAX call, blocks until it's complete, and returns the result. The design pattern most commonly used in these cases (look at all of Google's JavaScript APIs that do networking, for example) is to have the caller pass in a completion function. The call to .get() starts the asynchronous operation and then returns immediately. When the operation completes, the completion function is called. The caller must structure their code accordingly.
You simply cannot write straight, sequential, procedural JavaScript code when using asynchronous networking, like this:
var result = abc.get()
document.write(result);
The most common design pattern is like this:
abc.get(function(result) {
  document.write(result);
});
If your problem is several calling layers deep, then callbacks can be passed along to different levels and invoked when needed.
FYI, newer browsers support the concept of promises which can then be used with async and await to write code that might look like this:
async function someFunc() {
  let result = await abc.get();
  document.write(result);
}
This is still asynchronous and still non-blocking. abc.get() must return a promise that resolves to the value result. This code must be inside a function declared async, and other code outside this function will continue to run (that's what makes it non-blocking). But you get to write code that "looks" more like blocking code within the specific function it's contained in.
Why not just have the original caller pass its own callback to get()? This callback would contain the code that relies on the response.
The get() method forwards the callback to providerGet(), which then invokes it from within its own async callback.
The result of the fetch is passed to the original caller's callback.
function get( arg1, arg2, fn ) {
  // whatever code
  // call providerGet, passing along the callback
  providerGet( fn );
}

function providerGet( fn ) {
  // do async activity
  // in the callback to the async, invoke the callback and pass it the data
  // ...
  fn( received_data );
  // ...
}

get( 'some_arg', 'another_arg', function( data ) {
  alert( data );
});
When your async method starts, I would open some sort of modal dialog (one that the user cannot close) telling them that the request is in progress. When the request finishes, close the modal in your callback.
One possible way to do this is with jqModal, but that would require you to load jQuery into your project. I'm not sure if that's an option for you or not.
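As a plain-DOM sketch of the same idea, without jqModal (the overlay element id and the providerGet signature are assumptions): show the overlay before kicking off the request and hide it in the callback.
function get(fn) {
  var modal = document.getElementById('loading-modal'); // assumed overlay element in the page
  modal.style.display = 'block';                        // request in progress: block the UI

  providerGet(function (data) {
    modal.style.display = 'none';                       // request finished: close the modal
    fn(data);
  });
}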
This is ugly, but then I think the question is kind of implying that an ugly solution is acceptable...
In your get() function, serialize your query into a string.
Open an iframe, passing (A) this serialized query and (B) a random number to the iframe in its query string.
Your iframe has some JavaScript code that reads the SQL query and the number from its own query string.
Your iframe asynchronously begins running the query.
When the iframe's query asynchronously finishes, it sends the result, along with the random number, to a server of yours, say to /write.php?rand=###&result="blahblahblah".
write.php saves this info somewhere.
Back in your main script, after creating and loading the iframe, you make a synchronous AJAX request to your server, say to /read.php?rand=####.
/read.php blocks until the written info is available, then returns it to your main page.
Alternatively, to avoid sending the data over the network, you could instead have your iframe encode the result into a canvas-generated image that the browser caches (similar to the approach the Zombie cookie reportedly used). Then your blocking script would try to load this image over and over (with some small network-generated delay on each request) until the cached version is available, which you could recognize via some flag you've set to indicate it's done.
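For what it's worth, here is a rough sketch of the main-page half of that scheme (worker.html and serializeQuery are made up; /read.php, /write.php, and the rand parameter come from the steps above): the synchronous XHR to /read.php is what makes get() block until the iframe has posted its result.
function get(query) {
  var rand = Math.floor(Math.random() * 1e9);  // unique id shared with the iframe
  var iframe = document.createElement('iframe');
  iframe.src = '/worker.html?q=' + encodeURIComponent(serializeQuery(query)) + '&rand=' + rand;
  document.body.appendChild(iframe);           // the iframe runs the query asynchronously

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/read.php?rand=' + rand, false); // synchronous: blocks until read.php responds
  xhr.send();
  return xhr.responseText;                     // whatever the iframe sent to write.php
}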