I have two sites, and each one contains a script that performs an XHR task.
The second site's body is initially empty; its content is only added once its XHR task resolves.
So when I try to fetch the second site's content from the first one, I just get empty content back. Basically, the fetch from the first site runs before the second site's XHR task has finished.
I tried to do some research on this, but sadly I couldn't find anything. I'd rather not duplicate code by having the same functions in both sites, etc.
Thanks for reading; any suggestion would be useful. Best regards.
This is a weird situation; I've looked at loads of SO questions and nothing is quite like it. Hopefully I can get some feedback on it.
I'm creating a new web page in an existing application and am trying to execute a simple PUT API call, and for some reason it shows a status of cancelled in the Network tab of Chrome dev tools. The server I'm hitting is a VM on my local machine. I can hit the same endpoint from a different existing page in my application and it goes through just fine, so I know there's nothing wrong with the endpoint. Here are some screenshots:
This is what the Network tab in Chrome dev tools looks like:
This is what I see when I click on the "cancelled" PUT call:
And this is what shows in the Console tab of Chrome dev tools:
One thing to note: in the second screenshot, under the General section on the right, nothing is listed for Request Method, Status Code or Remote Address. See this screenshot of the successful API PUT request I referred to earlier for reference:
The really weird thing is that my database is getting updated with the updated data, so even though the PUT is showing as cancelled, it's working to some degree.
The call originates from a Vue component on my page, and my backend is in PHP, if that matters at all.
Here is the call in my .js file that executes the PUT:
return await SimpleService.put(
    `${app.API_URL}/matching/questions/${borrowerId}`,
    JSON.stringify(answerData),
    { contentType: 'application/json' }
);
So, I recognize that without seeing all the code attached to this, it isn't really realistic to ask for a black-and-white answer, but if someone could even just give me some ideas of things to check, I would greatly appreciate it.
I've tried to include everything I can think of without including unnecessary things, but if any additional information is needed from me to figure this out, please let me know.
Phil was right in his comment; here's an explanation as I understand it. When the button that submitted the API call was clicked, the default behavior for the button click was executed, which was to reload the page. When a page is reloaded, any in-flight network requests are no longer tracked, which is why the request was showing as "cancelled" in my Network tab. That also explains why the API call still updated the database: there wasn't any problem with the actual request. Here are the steps I took to fix my problem:
Remove the onClick event from my button that was calling the JavaScript function that starts the API call process.
Add this to the form tag my button lives inside: @submit.prevent="myJavascriptFunctionThatStartsAPICall()"
Adding .prevent stops the default page-reload behavior, so when the response comes back, the page is still there listening for it. Problem solved.
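For reference, here's a minimal sketch of what the fixed component could look like (the component name, props and handler name are placeholders I made up; only the @submit.prevent binding and the PUT call come from the question, and it assumes Vue 2 loaded globally):

// Hypothetical Vue 2 component: the handler is bound on the form with
// .prevent, so submitting no longer reloads the page mid-request.
Vue.component('matching-questions-form', {
    props: ['borrowerId', 'answerData'],
    template: `
        <form @submit.prevent="saveAnswers">
            <!-- form fields here -->
            <button type="submit">Save</button>
        </form>
    `,
    methods: {
        async saveAnswers() {
            // Same call as above; SimpleService and app.API_URL come from the question.
            return await SimpleService.put(
                `${app.API_URL}/matching/questions/${this.borrowerId}`,
                JSON.stringify(this.answerData),
                { contentType: 'application/json' }
            );
        }
    }
});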
I hope this helps someone else out.
It's just the request status. On the backend you should return status code 200 if everything is correct. Chrome thinks the request failed because you are returning an error code such as 499 Client Closed Request or 444 No Response.
I need some help with my application, and sorry for my English.
I'm working on the front end of a website. The final app should work fine with a lot of tabs (~100 in a single browser). Each tab needs to listen for a series of events sent from the server and change its content dynamically.
I implemented this feature using WebSocket. Since opening a connection is very expensive, I assigned a master tab, which listens for events from the server and distributes them to the other tabs using BroadcastChannel. I have the following questions:
How do I pick a master tab from all of them and make the other tabs listen to it?
I had these ideas:
1. Using BroadcastChannel.
During initialization, a tab asks over BroadcastChannel: "is there a master tab?". If it receives an answer, it continues working as a slave. If it receives no response, it makes itself the master tab (a rough sketch of this handshake follows below).
Problem:
If the master tab freezes inside a heavy loop, it won't be able to respond within a short amount of time, resulting in two open connections to the server and a conflict that needs to be resolved.
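Here is the sketch of idea 1 referred to above; the channel name, message strings and the 250 ms timeout are placeholder assumptions of mine, and it inherits exactly the weakness just described: a frozen master cannot answer in time.

// Sketch of the BroadcastChannel handshake (idea 1). All names are made up.
const channel = new BroadcastChannel('master-election');
let isMaster = false;
let electionTimer = null;

channel.onmessage = (event) => {
    if (event.data === 'who-is-master?' && isMaster) {
        // The current master answers newcomers.
        channel.postMessage('i-am-master');
    } else if (event.data === 'i-am-master' && electionTimer !== null) {
        // Someone already owns the WebSocket: stay a slave.
        clearTimeout(electionTimer);
        electionTimer = null;
    }
};

// Ask whether a master exists; if nobody answers in time, promote ourselves.
channel.postMessage('who-is-master?');
electionTimer = setTimeout(() => {
    isMaster = true;
    // openWebSocketAndBroadcast();  // hypothetical helper that opens the single connection
}, 250);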
2. Using LocalStorage.
During initialization, a tab reads some field called "X" or similar. If the field is empty, the tab creates it, assigns some value, and then makes itself the master tab. If field "X" is not empty, the tab makes itself a slave tab.
Problem:
If two tabs initialize at the same time, there might be a conflict:
tab_Alpha -> localStorage.getItem("haveMaster") -> null // There is no master, so I will make myself the master tab!
tab_Beta -> localStorage.getItem("haveMaster") -> null // There is no master, so I will make myself the master tab!
tab_Alpha -> localStorage.setItem("haveMaster", true) // Now it's time to open a connection and listen for events
tab_Beta -> localStorage.setItem("haveMaster", true) // Now it's time to open a connection and listen for events
As a result, I have a conflict and two open connections.
Can someone point me to a lightweight solution? I would appreciate it.
After some research.
Halcyon suggested using the code from gist.github.com/neilj/4146038. But I was not satisfied with this solution, because if Math.random() returns the same number for two different tabs while they initialize, the browser will call masterDidChange() twice and end up with a conflict and two connections.
Then Dimitry Boger suggested reading this: alpha.de/2012/03/javascript-concurrency-and-locking-the-html5-localstorage.
I liked this solution more, but I still managed to find a way to break it.
If each tab freezes in a couple of places, it can again result in both of them executing criticalSection().
Then I found wikipedia.org/wiki/Peterson's_algorithm. I have not found a way to break this locking mechanism, but there is a problem.
For this algorithm to work, the browser has to know the exact number of tabs before execution, because if a new tab initializes, the previous ones can simply miss this event and start their criticalSection() before the new tab finishes its own.
So, if you are not scared of 1024 tabs initializing and freezing themselves at the same time, you can use any of these solutions. But I need a bulletproof one, so I decided to give the honor of picking the master tab to the backend server.
Thanks, everyone, for the help.
After more research.
There is a bulletproof answer below this post. It can work without a backend server.
SharedWorker can be used to solve the problem.
It runs in a single dedicated thread that handles all communication between itself and the other browsing contexts (tabs/iframes), so it can be used to synchronize them.
MDN Article about SharedWorker
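A minimal sketch of this approach (the file name, message shape and WebSocket URL are placeholders of mine): the SharedWorker owns the single WebSocket and relays every server event to all connected tabs, so no master election is needed at all.

// shared-worker.js (hypothetical file name)
// The worker is the single owner of the WebSocket; every tab that connects
// gets a MessagePort, and server events are fanned out to all of them.
const ports = [];
const socket = new WebSocket('wss://example.com/events'); // placeholder URL

socket.onmessage = (event) => {
    ports.forEach((port) => port.postMessage(event.data));
};

onconnect = (event) => {
    const port = event.ports[0];
    ports.push(port);
    port.start();
};

// In each tab: connect to the shared worker and listen for the relayed events.
const worker = new SharedWorker('shared-worker.js');
worker.port.onmessage = (event) => {
    updateTabContent(event.data); // hypothetical function that updates the page
};
worker.port.start();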
I ran into this exact problem a while ago. I found this windowcontroller.js from an app called Overture.
It correctly handles race conditions and timeouts.
I think it works out of the box but I modified it a bit to suit my needs.
I'm having a hard time figuring out the solution to a problem that I thought would be very common or straightforward to solve, but apparently I was wrong.
I have to re-write some web code (that I didn't write) that's causing a problem. When a user clicks a link, a request is sent to the web server, which in turn fetches or creates a PDF document from somewhere. The PDF data is returned with the Content-Disposition header set to attachment, and the browser shows the save-as dialog.
The reason the save-as dialog appears is that when the user clicks the link, the JavaScript sets window.location.href to the server URL (with some parameters).
There's no loading animation other than the one the browser shows in the tab etc. while the request is being processed.
The problem is that if a request hangs or takes a while, users tend to click the link again (possibly multiple times) which means requests for that same resource just keep building up on the server (even accidental double clicks on a link, which are common, cause two requests to be processed).
How can I prevent this from happening? If I do something like this (with window.location.href replaced by window.open):
var REQUEST_PENDING = false;

function getPDF(param1, param2) {
    if (REQUEST_PENDING) return;
    REQUEST_PENDING = true;
    var w = window.open("/GetPdf.servlet?param1=" + param1 + "&param2=" + param2);
    w.onload = function() {
        REQUEST_PENDING = false;
    };
}
...then only one request will be processed at a time, but the onload callback only fires if the returned content is HTML. When it's an attachment, which is what I have, the REQUEST_PENDING flag is never set back to false, so no further requests can be made.
I know that the ultimate solution should probably be implemented server-side, but is it not possible to achieve what I'm trying to do client-side? (I can use jQuery).
The question linked to in the comments above by @Cory does seem to be a duplicate of my question, and while I'm sure the accepted answer is perfectly fine, there is a bit involved in it. Another answer for that question, a little further down the list, provides a link to this jQuery plugin:
http://johnculviner.com/jquery-file-download-plugin-for-ajax-like-feature-rich-file-downloads/
...and for me anyway, this is the ultimate solution. Easy to use and works great.
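For anyone curious how plugins like that typically detect that the download has finished (this is my understanding of the general technique, not that plugin's exact code): the server sets a short-lived, non-HttpOnly cookie on the same response that carries the attachment, and the page polls for it before clearing its in-progress flag. A rough sketch, with the cookie name and poll interval as assumptions:

// Sketch of the cookie-polling technique; "fileDownload" and 500 ms are made up.
// The server must set the cookie on the same response as the PDF attachment.
var REQUEST_PENDING = false;

function getPDF(param1, param2) {
    if (REQUEST_PENDING) return;
    REQUEST_PENDING = true;

    window.open("/GetPdf.servlet?param1=" + param1 + "&param2=" + param2);

    var poll = setInterval(function() {
        if (document.cookie.indexOf("fileDownload=true") !== -1) {
            // The attachment response has arrived: clear the cookie and the flag.
            document.cookie = "fileDownload=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/";
            REQUEST_PENDING = false;
            clearInterval(poll);
        }
    }, 500);
}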
I want to implement AJAX like Facebook does, so my sites can be really fast too. I've spent weeks researching it and have also learned about BigPipe (which is not AJAX).
The only thing left was to figure out how they pull in other requests, like navigating to a page or profile. I opened up Firebug and checked what happens when I click on different profiles. The problem is that Firebug doesn't record any such request, yet the page still gets loaded via AJAX and the HTML changes; Firebug does show the HTML changing.
So I'm wondering: are they using an iframe to stop Firebug from seeing the request, or what? I want to know how much data they pull on each request. Is it the complete page or only part of it? The page layout changes as well, depending on which page it is (for example: groups, page, profile, ...).
I would be really grateful if a pro could give some feedback on this, because I haven't been able to find an answer anywhere for weeks.
The reason they use an iframe is usually security. Iframes are like new tabs: there is no communication between your page and the framed Facebook page. The iframe has its own cookies and session, so you really need to think of it as another window rather than as part of your own page (except for the obvious fact that its output is shown within your page).
That said, the developer tools in Chrome do show you the communication to and from the iframe.
When I click on a user's profile on Facebook, in Firebug I can clearly see the request for data happening and the divs' content changing.
So, what is the question about?
After a click on some user profile, Facebook makes the following GET request:
http://www.facebook.com/ajax/hovercard/user.php?id=100000655044XXX&__a=1
The response to this request is complex JS data that contains all the information necessary to build the new page. There is an array of the profile's friends (with names, avatar thumbnail links, etc.) and an array of the profile's latest entries (again, with thumbnail URLs, annotations, etc.).
There is no magic, nothing like code hiding or obfuscation. =)
Looking at Facebook through Google Chrome's inspector, they use AJAX to request files that give back JavaScript, which is then used to make any changes to the page.
I don't know why/whether Facebook uses IFRAMEs to asynchronously load data, but I guess there is no special reason behind it. We used IFRAMEs too but have now switched to XMLHttpRequest for our projects because it's more flexible. Perhaps the IFRAME method works better on (much) older browsers, but even IE6 supports XMLHttpRequest fine.
Anyway, I'm certain there is no performance advantage to using IFRAMEs. If you need fast asynchronous data loading to dynamically update your page, go with XMLHttpRequest, since any modern browser supports it and it's as fast as HTTP can be.
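For completeness, a bare-bones example of that kind of dynamic update with XMLHttpRequest (the URL and element ID are placeholders):

// Minimal XMLHttpRequest sketch; "/profile-fragment" and "content" are made up.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/profile-fragment?id=123', true);
xhr.onreadystatechange = function() {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // Inject the returned markup into the page without a full reload.
        document.getElementById('content').innerHTML = xhr.responseText;
    }
};
xhr.send();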
If you know about BigPipe, then you will understand this.
As you have read about BigPipe, their response looks like this:
<script type="text/javascript"> bigpipe.onPageArrive({ 'css' : '', '__html' : ' ' }); </script>
So if they used AJAX, they would not be able to use BigPipe: if they used AJAX and the server flushed its buffer, that would have no effect on the client, because the AJAX oncomplete callback only fires once the complete response has been received and the connection closed. In other words, they would not be able to use one of their best page-speed techniques there.
But what if they use an iframe instead of AJAX? That's the point: they can use BigPipe inside the iframe, and the server will send data like this:
<script type="text/javascript"> parent.bigpipe.onPageArrive({ 'some' : 'some' });
So the server can flush its buffer, and as soon as the buffer is flushed, the browser receives and executes that chunk, which is not possible in the AJAX case.
Important:
They use the iframe only when the page URL changes, i.e. when a new page containing the pagelets needs to be downloaded. For other requests, like a popup box or notifications, they simply send an AJAX request.
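A rough sketch of what such a streamed-iframe transport could look like on the client side (apart from the bigpipe.onPageArrive name taken from the snippets above, everything here, including the /pagelets URL and the content element, is my own illustration):

// The parent page exposes a bigpipe object, then points a hidden iframe at a
// streaming endpoint. Each chunk the server flushes is a <script> that calls
// parent.bigpipe.onPageArrive, so pagelets render as they arrive instead of
// waiting for the full response.
window.bigpipe = {
    onPageArrive: function(pagelet) {
        if (pagelet.css) {
            var style = document.createElement('style');
            style.textContent = pagelet.css;
            document.head.appendChild(style);
        }
        if (pagelet.__html) {
            document.getElementById('content').innerHTML += pagelet.__html;
        }
    }
};

var frame = document.createElement('iframe');
frame.style.display = 'none';
frame.src = '/pagelets?page=profile'; // the server keeps this response open and flushes chunks
document.body.appendChild(frame);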
All of this information is unofficial; it's just what I found while researching the topic.
(I'm not a native English speaker, sorry for spelling and grammar mistakes!)
When you click on a different profile, Facebook doesn't use AJAX to load the profile.
You simply open a new link, plain old HTML... but maybe I misunderstood you.
So I've set up a pagination system similar to Twitter's, where 20 results are shown and the user can click a link to show the next twenty or all results. The number of results shown can be controlled by a parameter at the end of the URL; however, this isn't updated with AJAX, so if the user clicks on one of the results and then chooses to go back, they have to start over at only 20 results.
One thought I've had is that if I update the URL while I'm pulling in the results with AJAX, it should (I hope) enable users to move back and forth without losing how many results are shown.
Is this actually possible or have I got things completely wrong?
Also, how would I go about changing the URL? I have a way to build the URL with JavaScript and hold it in a variable, but I'm not sure how to apply that variable to the actual URL.
Any help here would be great!
A side note: I'm using jQuery's load() function to do all my AJAX.
Not mentioned in the duplicate threads, but useful nonetheless: Really Simple History (RSH).
This would be the answer I would put here:
Browser back button and dynamic elements
You can't actually change the URL of the page from JavaScript without reloading the page (apart from the hash fragment).
You may wish to consider using cookies instead. By setting a client cookie you could "remember" how many results that user likes to see.
A good page on JavaScript cookies.
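As a quick illustration of the cookie idea (the cookie name and expiry are placeholders):

// Remember the user's preferred result count in a cookie.
function saveResultCount(count) {
    document.cookie = "resultCount=" + count + "; max-age=" + (60 * 60 * 24 * 30) + "; path=/";
}

function readResultCount() {
    var match = document.cookie.match(/(?:^|;\s*)resultCount=(\d+)/);
    return match ? parseInt(match[1], 10) : 20; // default to 20 results
}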
The answer to this question will be more or less the same as my answers to these questions:
How to show Ajax requests in URL?
How does Gmail handle back/forward in rich JavaScript?
In summary, two projects that you'll probably want to look at, which explain the whole hashchange process and how to use it with AJAX, are:
jQuery History (using hashes to manage your pages state and bind to changes to update your page).
jQuery Ajaxy (ajax extension for jQuery History, to allow for complete ajax websites while being completely unobtrusive and gracefully degradable).
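For reference, the underlying technique those plugins build on looks roughly like this (the count parameter and loadResults() are placeholders; the plugins above also handle older browsers that lack the hashchange event):

// Bare-bones hash-based state: store the result count in location.hash so the
// back/forward buttons restore it without a page reload.
function showResults(count) {
    window.location.hash = "count=" + count;   // updates the URL, no reload
}

window.onhashchange = function() {
    var match = window.location.hash.match(/count=(\d+)/);
    var count = match ? parseInt(match[1], 10) : 20;
    loadResults(count);  // hypothetical function that does the jQuery load() call
};

// Apply the state on initial page load too (e.g. when the user comes back via Back).
window.onhashchange();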
The first 3 results Google returns:
first
second
third
I'll eat my shorts if none of them are useful. ^^
And yeah, you can't change the URL through JS (other than the hash).