How to implement window load callback when content has Content Disposition attachment? - javascript

I'm having a hard time figuring out the solution to a problem that I thought would be very common or straightforward to solve, but apparently I was wrong.
I have to rewrite some web code (that I didn't write) that's causing a problem. When a user clicks a link, a request is sent to the web server, which in turn fetches or creates a PDF document from somewhere. The PDF data is returned with the Content-Disposition header set to attachment, and the browser shows the save-as dialog.
The save-as dialog appears because, when the user clicks the link, the JavaScript sets window.location.href to the server URL (with some parameters).
There's no loading animation other than the one the browser shows in the tab etc. while the request is being processed.
The problem is that if a request hangs or takes a while, users tend to click the link again (possibly multiple times), which means requests for that same resource just keep building up on the server (even accidental double clicks on a link, which are common, cause two requests to be processed).
How can I prevent this from happening? If I do something like this (with window.location.href replaced by window.open):
var REQUEST_PENDING = false;
function getPDF(param1, param2) {
    if (REQUEST_PENDING) return;
    REQUEST_PENDING = true;
    var w = window.open("/GetPdf.servlet?param1=" + param1 + "&param2=" + param2);
    w.onload = function() {
        REQUEST_PENDING = false;
    };
}
...then only one request will be processed at any one time, but the onload callback only fires if the returned content is HTML. When it's an attachment, which is what I have, the REQUEST_PENDING variable is never set back to false, so no further requests can be made.
I know that the ultimate solution should probably be implemented server-side, but is it not possible to achieve what I'm trying to do client-side? (I can use jQuery).

The question linked to in the comments above by @Cory does seem to be a duplicate of mine, and while I'm sure the accepted answer is perfectly fine, there's a fair bit involved in it. Another answer for that question, a little further down the list, links to this jQuery plugin:
http://johnculviner.com/jquery-file-download-plugin-for-ajax-like-feature-rich-file-downloads/
...and for me anyway, this is the ultimate solution. Easy to use and works great.
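For anyone landing here later, basic usage looks roughly like this (a sketch based on the plugin's documented options; the servlet URL and the callback bodies are placeholders):
// Sketch using the jQuery File Download plugin linked above.
// The servlet URL and the callback bodies are placeholders.
function getPDF(param1, param2) {
    $.fileDownload("/GetPdf.servlet?param1=" + param1 + "&param2=" + param2, {
        successCallback: function (url) {
            // The download started; safe to re-enable the link here.
        },
        failCallback: function (responseHtml, url) {
            // The server sent back an error page instead of the attachment.
        }
    });
}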

Related

Redirecting to a page after loading another page

Currently I have a page that, when you fill out a text box and click a button, redirects you to another page.
The page needs to be loaded, since it updates and shows XML. (I cannot currently change how this works.)
However, what I want to do is, after the page has been redirected once, redirect it again, or just load another page in general.
The thing to note about the XML link is that part of it is created from the text box, so it is dynamic.
I currently have something along the lines of this:
//please note that username is a textbox, I've just left it out
<script runat="server">
void Button_Click(Object sender, EventArgs e)
{
    var url = "http://website.com/scripts/" + username.Text + "/value/0";
    try
    {
        Response.Redirect(url, true);
    }
    catch (Exception ex)
    {
        // From what I learnt, passing true to Redirect throws an exception,
        // which is how I tried executing another redirect, but it doesn't seem
        // to load the first redirect and skips straight to this. I also put this
        // in finally, because it seemed more appropriate, to no avail.
        Response.Redirect(someurl, true);
    }
}
</script>
So I'm wondering if this is actually possible; I also wonder if I'm just looking up the wrong keywords to find a solution.
I've spent a bit of time on this and have yet to come to a solution, but I'm new to web development, so I may just be missing something incredibly simple.
Also, I only really understand how C# works in ASP, but I'm willing to learn how to add in JavaScript or VB if necessary.
Thanks in advance for the help.
Edit: Solution!
So I managed to use JavaScript to append the textbox value to the XML link and request it without showing the user (showing the user is not necessary in this case).
After that, a popup confirms that it was successful, and then the page reloads.
It is fairly self-explanatory, but what I did was:
url = "website";
var xmlHttp = new XMLHttpRequest();
xmlHttp.open("GET", url, true);
window.alert("success");
return true;//this reloads the page, that or just window.location.reload();
For an added check, I will see if I can verify that the username is a valid username, and pop up failure text if not.
You seem to have a misunderstanding about what Response.Redirect(...) actually does. The method name is, in my opinion, a bit misleading: it suggests that somehow the response to the currently executing request will be sent somewhere other than the requesting browser. This is not the case. The name could just as well have been Response.SendRedirectResponseToBrowser, because that's what Response.Redirect does.
So when you do Response.Redirect(url), you are telling the server executing your page that it should send a response to the browser, telling the browser to do a GET request for the supplied url. The browser will then do that, at which point that page needs to send a separate redirect in order to tell the browser where to go next.
In this case, then, the page at "http://website.com/scripts/" + username.Text + "/value/0" needs to be patched up so that, after processing the request, it also sends a redirect response with the url you want to display next.
If you have no control over that page, then you must solve this some other way. Some options:
Use AJAX to request the "http://website.com/scripts/" + username.Text + "/value/0" url, then after completion set the page location to the url you want to show next (see the sketch after this list).
Open the http://website.com/.... url in a _blank target, then set the location to the next page.
Use System.Net.Http.HttpClient in your code-behind method to request the http://website.com/.... url, then do a redirect. This means that the server requests the url as part of processing the button click.
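A minimal sketch of the first option; the element id, the url pattern, and nextUrl are placeholders:
// AJAX-then-redirect: request the scripts url, then move on.
// "username", the url pattern, and nextUrl are placeholders.
var username = document.getElementById("username").value;
var url = "http://website.com/scripts/" + encodeURIComponent(username) + "/value/0";

var xhr = new XMLHttpRequest();
xhr.open("GET", url, true);
xhr.onload = function () {
    window.location.href = nextUrl; // wherever you want to go next
};
xhr.send();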
Notes:
If the http://website.com/.... url updates some state (like storing some changes in a database or similar), then you should request it using a POST request, not a GET. GET requests can get a cached response, which means that the server might never actually see the request and therefore not do any processing.
Piecing together the url like this, "http://website.com/scripts/" + username.Text + "/value/0", looks risky. You should at the very minimum url-encode the username: HttpUtility.UrlEncode(username.Text). Better yet would be to first validate that the entered username is actually a valid user name.
You can add a Refresh header (not a meta-refresh element) to the response that contains the XML. In the header you can specify another URL and the number of seconds to wait before redirecting, e.g. Refresh: 5; url=http://website.com/next (the target url here is a placeholder).
I'd suggest handling this with JavaScript on the front end rather than with back-end error handling, since the goal is to go to another page. Use a promise to handle the exception.

Dynamic web page with no reloading

I've just recently discovered slack.com and I fell in love with the way they handle their interface. If you've never used it before, it's quite simple:
There is a side navbar and a main container on the right. Every time you click an item in the side navbar, its content is loaded in the container. The focused item changes, the container's content changes, but the page doesn't reload.
If the data changes in the meantime, it is magically updated.
What would it take to achieve something like that?
URL changing, page not reloading
Content always up to date
I've been looking at meteorjs in the past few days, but the URL part is never mentioned.
Yes, Slack is awesome. We (my team) use it every day. I use it so regularly that at some point I stopped checking email and started checking Slack instead.
So, on to your question.
URL changing, page not reloading
It can easily be done with JavaScript [ Tl;dr ].
Code:
window.history.pushState("object or string", "Title", "/new-url");
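To keep the back and forward buttons working, you would typically pair pushState with a popstate listener. A minimal sketch (loadContent is a placeholder for whatever renders your container):
// Record the new URL without reloading the page.
function navigateTo(path) {
    window.history.pushState({ path: path }, "", path);
    loadContent(path); // placeholder: fetch and render the container for this path
}

// Re-render when the user moves back/forward through history.
window.addEventListener("popstate", function (event) {
    if (event.state && event.state.path) {
        loadContent(event.state.path);
    }
});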
Content always up to date
Well, this can be done in two ways:
i. via AJAX and JavaScript
ii. via sockets
i. via AJAX and JavaScript:
In JavaScript you can set up a timer (setInterval, or a chained setTimeout) to fire an AJAX request at some interval. Via AJAX it will get the newest messages from the backend, and they will be shown (see the sketch after this list).
ii. via sockets:
For sockets, in your case, if you use node.js there is a very popular library named socket.io, which will get and update messages in real time.
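A minimal sketch of the polling approach, assuming a hypothetical /api/messages endpoint that returns a JSON array of new messages:
// Poll the backend every 5 seconds for new messages.
// The endpoint, lastMessageId, and renderMessages are assumptions.
var lastMessageId = 0;
setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/api/messages?since=" + lastMessageId, true);
    xhr.onload = function () {
        if (xhr.status === 200) {
            var messages = JSON.parse(xhr.responseText);
            if (messages.length > 0) {
                lastMessageId = messages[messages.length - 1].id;
                renderMessages(messages); // placeholder: append to the chat container
            }
        }
    };
    xhr.send();
}, 5000);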
Good luck!
You need AJAX. You can use it in conjunction with a script, probably PHP, that checks the state of the database on a timer interval (a "heartbeat"); if anything has changed, you load in the new data. I'd recommend having a dedicated datetime-stamp column to compare against, to keep the load on your database as small as possible, since a lot of users being on the page at the same time will generate a lot of requests.
For the "url changing feature but no reload", I think #Kavan Pancholi answered your question. Another way to achieve that is by using the yield templates feature of iron-router.
You are using Meteor, which means you can do it without too much trouble (forget about AJAX and sockets).
I don't know Slack (but I'll definitely have a look at it), but from what I understand, all data is preloaded/lazy-loaded and they only change the displayed elements. In other terms, you keep all your client subscriptions ready and loaded, or you bring them up when your yield template is loaded.
I will have a look at Slack and edit this if I realize I did not understand correctly what you are aiming for.
Edit: OK, I tried it. You need to use yield templates with iron-router; they also added some transition effects you can achieve with _uihooks + a loading template.
On top of that, if you use a framework like Angular, you'll notice URLs like this:
http://localhost:3000/#/chat/room
You've probably seen something similar on Wikipedia, with URLs like this:
https://en.wikipedia.org/wiki/Cat#Cats_and_humans
That little # won't cause the page at the URL to reload, so you can use it to perform a URL routing action without changing the page. You can access it with window.location.hash. So on that Wikipedia article, you'd get:
> window.location.hash
#Cats_and_humans
Combine that with AJAX and event listeners and you can do something similar.
// using jQuery
// set a callback for when the hash changes
$(window).on('hashchange', function (e) {
    e.preventDefault();
    var hash = window.location.hash;
    // get the container you want to add data to, and clear it out
    var $container = $('#container');
    var $list = $('<ul></ul>');
    $container.html($list);
    if (hash === '#/movies') {
        // request JSON from an endpoint and append the data to the DOM
        $.getJSON('/api/movies', function (data) {
            $.each(data, function (i, el) {
                $list.append('<li>' + el.name + '</li>');
            });
        });
    }
});

img requests before window closes

I have a situation where data needs to be reliably sent before the browser window closes. My current implementation uses synchronous AJAX calls. However, that's unlikely to keep working in the near future, because browsers are deprecating synchronous XHR calls according to https://xhr.spec.whatwg.org/#synchronous-flag
What I'm trying is to replace the AJAX call with a fake "img" request: parameterize the data to be sent and append it as the image's URL query string. It has seemed to work in what I've tried so far. I don't really care about the server response, as long as the request is made and pushed onto the wire before the browser window is unloaded. Roughly, the idea is sketched below.
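A minimal sketch of the technique (the /collect endpoint and the payload variable are placeholders):
// Fire-and-forget "image beacon": the browser kicks off the GET
// request for the image URL even though nothing is ever rendered.
// "/collect" and "payload" are placeholders, not a real API.
window.addEventListener('unload', function () {
    var img = new Image();
    img.src = '/collect?data=' + encodeURIComponent(JSON.stringify(payload)) +
        '&t=' + new Date().getTime(); // cache-buster so the GET hits the server
});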
My question is: how reliable is it? Does anyone have any experience with it?
My other option is to keep the data in a cookie or web storage and send it on the next request, but that's based on the assumption that the user will revisit, which may not be true in my case.
Thanks.
You can do it in the window's unload event using AJAX.
You can refer to the following links to learn more about the problems and details you need to take care of in this situation:
Is there any possibility sending request before window closes
Is it reliable?
Is $(window).unload wait for AJAX call to finish before leaving a webpage
Hope this helps
I think it's better to use an AJAX request. I have no proof, but from my experience, the DOM works more slowly than JS. For example, when you do this:
var div = document.createElement('div');
div.innerHTML = "mama";
div.className = "myDiv";
document.getElementById("myWrapper").appendChild(div);
var text = document.getElementsByClassName('myDiv')[0].innerHTML;
sometimes you will get an exception with the message "can't read property innerHTML of undefined".
But if you do this:
setTimeout(function(){
    var text = document.getElementsByClassName('myDiv')[0].innerHTML;
}, 50);
it always works fine. That's because the DOM had not yet been updated. So when you add an image, the DOM may not be able to process it in time. But when you send an AJAX request, I think it will be sent in any case.

Facebook makes their AJAX calls through iframe?

I want to implement AJAX like Facebook, so my sites can be really fast too. After weeks of research I also know about BigPipe (which is not AJAX).
So the only thing left was how they pull in other requests, like going to a page/profile. I opened up Firebug and was checking what I get when I click on different profiles. But the problem is, Firebug doesn't record any such request, yet the page still gets loaded via AJAX and the HTML changes; Firebug does show the change in the HTML.
So I'm wondering: are they using an iframe to keep Firebug from seeing the request, or what? I want to know how much data they pull on each request. Is it the complete page or only a part of the page? The page layout changes as well, depending on what kind of page it is (for example: groups, page, profile, ...).
I would be really grateful if a pro could give some feedback on this, because I can't find it anywhere, and I've been looking for weeks.
The reason they use an iframe is usually security. iframes are like new tabs: there is no communication between your page and the iframe'd Facebook page. The iframe has its own cookies and session, so really you need to think about it as another window rather than as part of your own page (except for the obvious fact that the output is shown within your page).
That said, the developer tools in Chrome do show you the communication to and from the iframe.
When I click on a user's profile on Facebook, in Firebug I clearly see the request for data happen and the div's content changing.
So, what is the question about?
After a click on some user profile, Facebook does the following GET request:
http://www.facebook.com/ajax/hovercard/user.php?id=100000655044XXX&__a=1
The response to this request is complex JS data, which contains all the information necessary to build the new page. There is an array of the profile's friends (with names, avatar thumbnail links, etc.) and an array of the profile's latest entries (again with thumbnail URLs, annotations, etc.).
There is no magic, nothing like code hiding or obfuscation. =)
Looking at Facebook through Google Chrome's inspector, they use AJAX to request files that give back JavaScript, which is then used to make changes to the page.
I don't know why or whether Facebook uses IFRAMEs to asynchronously load data, but I guess there is no special reason behind it. We used IFRAMEs too but have now switched to XMLHttpRequest for our projects because it's more flexible. Perhaps the IFRAME method works better on (much) older browsers, but even IE6 supports XMLHttpRequest fine.
Anyway, I'm certain that there is no performance advantage to using IFRAMEs. If you need fast asynchronous data loading to dynamically update your page, go with XMLHttpRequest, since any modern browser supports it and it's as fast as HTTP can be.
If you know about BigPipe, then you will be able to understand this.
As you have read about BigPipe, its responses look like this:
<script type="text/javascript"> bigpipe.onPageArrive({ 'css' : '', '__html' : ' ' }); </script>
So if they used plain AJAX they would not be able to use BigPipe: if the server flushed its buffer mid-response, the client would see no effect, because the AJAX oncomplete callback only fires once the complete response has been received and the connection closed. In other words, they would lose one of their best page-speed techniques there.
But if they use an iframe for the "AJAX", that makes the point: they can use BigPipe inside the iframe, and the server will send data like this:
<script type="text/javascript"> parent.bigpipe.onPageArrive({ 'some' : 'some' }); </script>
So the server can flush its buffer, and as soon as the buffer is flushed the browser receives it; that is not possible in the AJAX case.
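A minimal sketch of that idea; the bigpipe object and the endpoint are stand-ins, not Facebook's actual implementation:
// Parent page: receives pagelets from the hidden iframe as the
// server flushes each <script> chunk. All names here are stand-ins.
window.bigpipe = {
    onPageArrive: function (pagelet) {
        // Render each pagelet as soon as its chunk arrives,
        // instead of waiting for the whole response.
        document.getElementById(pagelet.id).innerHTML = pagelet.__html;
    }
};

// Start the streamed "navigation" in a hidden iframe. Its response
// contains <script>parent.bigpipe.onPageArrive(...)</script> chunks,
// each executed by the browser as soon as it is received.
var frame = document.createElement('iframe');
frame.style.display = 'none';
frame.src = '/next-page-pagelets'; // hypothetical streaming endpoint
document.body.appendChild(frame);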
Important:
They use the iframe only when the page URL changes, i.e. when a new page containing the pagelets needs to be downloaded. For other requests, like popup boxes or notifications, they simply send AJAX requests.
All of this information is unofficial; it's what I found while researching this.
(I'm not a native English speaker, sorry for spelling and grammar mistakes!)
When you click on a different profile, Facebook doesn't use AJAX to load the profile.
You simply open a new link, plain old HTML... but maybe I misunderstood you.

What are techniques to get around the IE file download security rules?

Internet Explorer (with default settings, which I generally assume will be in effect on the desktops of the Great Unwashed) seems to dislike the idea of accepting attachment content in an HTTP response if the corresponding request wasn't made directly from a user action (like a "click" handler, or a native form submit). There are probably more details and nuances, but that's the basic behavior that's frustrating me.
It seems to me that this situation is common: the user interface in front of some downloadable content — say, a prepared PDF report — allows for some options and inputs to be used in the creation of the content. Now, as with all forms that allow the user to stipulate how an application does something, it's possible that the input will be erroneous. Not always, but sometimes.
Thus there's a dilemma. If the client tries to do something fancy, like run an AJAX transaction to let the server vet the form contents, and then resubmit to get the download, IE won't like that. It won't like it because the actual HTTP transaction that carries the attachment back will happen not in the original user-action event handler, but in the AJAX completion callback. Worse, since the IE security bar seems to think that the solution to all one's problems is to simply reload the outer page from its original URL, its invitation to the user to go ahead and download the suspicious content won't even work.
The other option is to just have the form fire away. The server checks the parameters, and if there's anything wrong it responds with the form-container page, peppered appropriately with error messages. If the form contents are OK, it generates the content and ships it back in the HTTP response as an attached file. In this case (I think), IE is happy because the content was apparently directly requested by the user (which is, by the way, a ridiculously flimsy way to tell good content from bad content). This is great, but the problem now is that the client environment (that is, the code on my page) can't tell that the download worked, so the form is still just sitting there. If my form is in some sort of dialog, then I really need to close that up when the operation is complete — really, that's one of the motivations for doing it the AJAX way.
It seems to me that the only thing to do is to equip the form dialogs with messaging that says something like, "Close this when your download begins." That really seems lame to me, because it's an example of a "please push this button for me" interface: ideally, my own code should be able to push the button when it's appropriate. A key thing that I don't know is whether there's any way for client code to detect that a form submission has resulted in an attachment download. I've never heard of a way to detect that, but that'd break the impasse for me.
I take it you're submitting the form with a different target window; hence the form staying in place.
There are several options.
Keep the submit button disabled and do ongoing validation in the background, polling the form for changes to fields and then firing off the validation request for a field as it changes. When the form is in a valid state, enable the button; when it isn't, disable the button. This isn't perfect, as there will tend to be a delay, but it may be good enough for whatever you're doing.
Do basic validation that doesn't require round-trips to the server in a handler for the form's submit event, then submit the form and remove it (or possibly just hide it). If the further validation on the server detects a problem, it can return a page that uses JavaScript to tell the original window to re-display the form.
Use a session cookie and a unique form ID (the current time from new Date().getTime() would do); when the form is submitted, disable its submit button but keep it visible until the response comes back. Make the response set a session cookie with that ID indicating success/failure. Have the window containing the form poll for the cookie every second or so and act on the result when it sees it. (I've never done this last one; I don't immediately see why it wouldn't work. A sketch of it follows below.)
I expect there are about a dozen other ways to skin this cat, but those are three that came to mind.
(Edit) If you're not submitting to a different target, you might want to go ahead and do that -- to a hidden iframe on the same page. That (possibly combined with the above or other answers) might help you get the user experience you're looking for.
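A minimal sketch of the cookie-polling idea from the third option above, assuming the server echoes the token back in a cookie on the attachment response (the cookie name, element ids, and closeDialog are placeholders):
// Minimal sketch of the cookie-polling idea. The cookie name,
// element ids, and closeDialog() are placeholders, not a real API.
function submitDownloadForm(form) {
    var token = new Date().getTime().toString();
    form.elements['downloadToken'].value = token; // hidden input sent to the server
    document.getElementById('submitBtn').disabled = true;
    form.submit();

    // The server is assumed to add "Set-Cookie: downloadToken=<token>"
    // to the response that carries the attachment.
    var timer = setInterval(function () {
        if (document.cookie.indexOf('downloadToken=' + token) !== -1) {
            clearInterval(timer);
            document.getElementById('submitBtn').disabled = false;
            closeDialog(); // placeholder: hide the form dialog
        }
    }, 1000);
}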
There are a whole number of really good reasons IE does this, and I'm sure it's not something anyone would argue with, so the main objective is to get around it somehow to make things better for your users.
Sometimes it's worth rethinking how things are done. Perhaps disable the button, use JavaScript to check when all the fields are filled out, and fire off an AJAX request once they are. If the AJAX call was successful, enable the button. This is but one suggestion; I'm sure there will be more...
Edit: more...
Do a simple submission (non-AJAX), and if the checks fail, send a page back rather than an attachment. The page sent back could contain all the information originally submitted (plus whatever error message for the user) so the user doesn't need to fill out the entire form again. And I'm also sure there will be more ideas...
Edit: more...
I'm sure you've seen this type of thing before, and yes, it is an extra click (not ideal, but not hard): an "if your download fails, click here" link. In this case, do it the way you want to do it, but add a new link/button to the page when the AJAX returns, so that if the download failed, the user can submit the already-validated form via a "direct user action". And I'm sure I'll think of more (or someone else will)...
I have been fighting a similar issue for a while. In my case, posting to a hidden iframe didn't work if my web app was embedded in an iframe on another site (third-party cookie issues) unless our site was added to the Trusted Sites list.
I found that I could break the download up into a POST/GET sequence. The POST returns a short-lived GUID that can be used in a GET request to initiate the download. The POST can do the form validation as well as return the GUID in a successful response. Once the client has the GUID, you can set the src property of a hidden iframe element to the download URL. The browser sees the 'Content-Disposition: attachment' header and gives the user a download ribbon to download the file.
So far it appears to work in all the latest browsers. Unfortunately, it requires you to modify your server-side API for downloading the file.
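A minimal sketch of that POST-then-GET sequence; the endpoints, form id, and JSON shape are assumptions, not a real API:
// 1) POST the form for validation; the server responds with a
//    short-lived GUID on success. Endpoints and shape are assumptions.
var xhr = new XMLHttpRequest();
xhr.open('POST', '/api/validate-download');
xhr.onload = function () {
    if (xhr.status === 200) {
        var guid = JSON.parse(xhr.responseText).guid;
        // 2) GET the file in a hidden iframe. The response carries
        //    Content-Disposition: attachment, so the browser shows
        //    its download UI instead of navigating the iframe.
        var frame = document.createElement('iframe');
        frame.style.display = 'none';
        frame.src = '/api/download?guid=' + encodeURIComponent(guid);
        document.body.appendChild(frame);
    }
};
xhr.send(new FormData(document.getElementById('downloadForm')));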
