Simple way to include browser dimensions in request (Flask) - javascript

I would like to have the user's browser dimensions come through on each request. I tried having a global javascript snippet that would put it in a cookie, but that seemed to work only on every other request for some reason.
It looks like this is setting the cookie after a page has loaded, in effect getting the cookie ready for the next request. This would kind of work, but as I mentioned it only works every other time.
jQuery(document).ready(function() {
    var browser_width = jQuery(window).width();
    document.cookie = 'width=' + browser_width;
    console.log('width cookie set to ' + browser_width);
});
Is there a way to reliably provide the server (Flask, uwsgi) the dimensions on each request? I understand how getting dimensions the very first time the user visits the domain may be tricky, but you guys are smarter than I am.
(from the second request on would probably be ok if that's the reality)
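To make the idea concrete, here is a minimal sketch of the cookie approach, assuming jQuery as in the snippet above (the cookie names are arbitrary). Setting path=/ scopes the cookie to the whole domain, and refreshing it on resize keeps it current, so from the second request onward Flask can read it with request.cookies.get('width'):
// Sketch only: keep width/height cookies up to date so they accompany
// every subsequent request to the same domain.
function storeDimensions() {
    // path=/ makes the cookie apply to every URL on the site,
    // not just the page that set it.
    document.cookie = 'width=' + jQuery(window).width() + '; path=/';
    document.cookie = 'height=' + jQuery(window).height() + '; path=/';
}
jQuery(document).ready(storeDimensions);
jQuery(window).resize(storeDimensions);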

Related

img requests before window closes

I have a situation where data needs to be reliably sent before the browser window closes. My current implementation uses synchronous AJAX calls. However, that's unlikely to keep working in the near future because browsers are deprecating synchronous XHR calls, according to https://xhr.spec.whatwg.org/#synchronous-flag
What I'm trying now is to replace the AJAX call with a fake "img" request: serialize the data to be sent and append it to the image URL as a query string. It has seemed to work in my testing so far. I don't really care about the server response, as long as the request is made and pushed onto the wire before the browser window is unloaded.
My question is: how reliable is this? Does anyone have experience with it?
My other option is to keep the data in a cookie or web storage and send it on the next request, but that assumes the user will revisit, which may not be true in my case.
Thanks.
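For reference, a bare-bones sketch of the image-beacon idea described in the question (the /track.gif endpoint and payload shape are made up):
// Fire-and-forget: encode the payload into the query string and let the
// browser request it as an image during unload; the response is ignored.
window.addEventListener('unload', function () {
    var payload = encodeURIComponent(JSON.stringify({ closedAt: Date.now() }));
    new Image().src = '/track.gif?data=' + payload + '&t=' + Date.now();
});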
You can do it in the window's unload event using AJAX.
You can refer to the following links to learn more about the problems and behaviours you need to take care of at this point:
Is there any possibility sending request before window closes
Is it reliable?
Is $(window).unload wait for AJAX call to finish before leaving a webpage
Hope this helps
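For completeness, a rough sketch of the unload-plus-AJAX approach this answer describes, assuming jQuery (the /track endpoint is made up; see the links above for the caveats about requests being cancelled and synchronous XHR being deprecated):
jQuery(window).on('unload', function () {
    jQuery.ajax({
        url: '/track',                 // hypothetical endpoint
        type: 'POST',
        data: { closedAt: Date.now() },
        async: false                   // "reliable" but deprecated
    });
});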
I think it's better to use an AJAX request. I have no proof, but from my experience, DOM updates can lag behind the JavaScript that triggers them. For example, when you do this:
var div = document.createElement('div');
div.innerHTML = "mama";
div.className = "myDiv";
document.getElementById("myWrapper").appendChild(div);
var text = document.getElementsByClassName('myDiv')[0].innerHTML;
sometimes you will get an exception with the message "Cannot read property 'innerHTML' of undefined".
But if you do this:
setTimeout(function(){
var text = document.getElementsByClassName('myDiv')[0].innerHTML;
}, 50);
it always works fine. That's because in the first case the DOM hadn't been updated yet. So when you add an image, the DOM may not manage to process it in time. Whereas when you send an AJAX request, it will be sent in any case, I think.

How to implement window load callback when content has Content Disposition attachment?

I'm having a hard time figuring out the solution to a problem that I thought would be very common or straight-forward to solve, but apparently I was wrong.
I have to re-write some web code (that I didn't write) that's causing a problem. When a user clicks a link, a request is sent to the web server, which in turn fetches or creates a PDF document from somewhere. The PDF data is returned with the Content-Disposition header set to attachment, and the browser shows the save-as dialog.
The reason the save-as dialog appears is because when the user clicks the link, the Javascript sets window.location.href to the server URL (with some parameters).
There's no loading animation other than the one the browser shows in the tab etc. while the request is being processed.
The problem is that if a request hangs or takes a while, users tend to click the link again (possibly multiple times) which means requests for that same resource just keep building up on the server (even accidental double clicks on a link, which are common, cause two requests to be processed).
How can I prevent this from happening? If I do something like this (with window.location.href replaced by window.open):
var DOC_REQUEST_PENDING = false;
function getPDF(param1, param2) {
    if (DOC_REQUEST_PENDING) return;
    DOC_REQUEST_PENDING = true;
    var w = window.open("/GetPdf.servlet?param1=" + param1 + "&param2=" + param2);
    w.onload = function() {
        DOC_REQUEST_PENDING = false;
    };
}
...then only one request will be processed at any one time, but the onload callback only works if the return content is HTML. When it's an attachment, which is what I have, the DOC_REQUEST_PENDING variable is never set back to false, so no further requests can be made.
I know that the ultimate solution should probably be implemented server-side, but is it not possible to achieve what I'm trying to do client-side? (I can use jQuery).
The question linked to in the comments above by #Cory does seem to be a duplicate of my question, and while I'm sure the accepted answer is perfectly fine, there is a bit involved in it. There's another answer for that question a bit further down the list that provides a link to this jQuery plugin:
http://johnculviner.com/jquery-file-download-plugin-for-ajax-like-feature-rich-file-downloads/
...and for me anyway, this is the ultimate solution. Easy to use and works great.
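In case it helps, this is roughly how the plugin gets wired up (treat it as a sketch; the option names are from the plugin's documentation as I remember them, so double-check them there):
// jquery.fileDownload reports success or failure even when the response is a
// Content-Disposition: attachment, so the pending flag can always be cleared.
var DOC_REQUEST_PENDING = false;
function getPDF(param1, param2) {
    if (DOC_REQUEST_PENDING) return;
    DOC_REQUEST_PENDING = true;
    jQuery.fileDownload('/GetPdf.servlet?param1=' + param1 + '&param2=' + param2, {
        successCallback: function (url) { DOC_REQUEST_PENDING = false; },
        failCallback: function (html, url) { DOC_REQUEST_PENDING = false; }
    });
}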

Problems with cached result when sending a XMLHttpRequest?

I'm new to the idea of AJAX as well as caching.
On the "AJAX - Send a Request To a Server" page from W3Schools, it says you should add "?t=" + Math.random() to the end of the URL of the script you want to run, to prevent caching.
On Wikipedia, the simple definition of "cache" is:
In computer science, a cache is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere.
But, shouldn't this be better? The script will run faster if the computer already has some duplicate data stored. Also, the first example on the tutorial page, without the addition to the URL, worked fine.
Can somebody please tell me the reason behind using "?t=" + Math.random()?
But, shouldn't this be better?
Yes, it's better to have a caching system for performance reasons: your application's pages will load more quickly because elements loaded once can be reused without making an HTTP request to the server each time.
Can somebody please tell me the reason behind using "?t=" + Math.random()?
Adding "?t=" + Math.random() is like renaming the URL of the script each time you reload it. The caching system sees it as a new element rather than one it has already stored, even if nothing has really changed, so it is forced to reload the element from the server.
Generally, you may want to do that for elements (like images or scripts) that are updated often. For example, take a profile picture on a website that a user can change: if the old picture file is in the cache, the user will not see the new picture appear immediately unless you use that random-number trick. They could think their upload didn't work, and would otherwise have to empty the browser cache manually, which is not always very intuitive.
A second reason is that it's convenient to do this while developing, because you don't have to empty the cache every time you want your code changes to be picked up...
However, don't use this trick on elements you are sure will never change, or will change only very rarely.
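As a concrete illustration of the profile-picture example above (the element id and file path are made up):
// Force the browser to re-fetch the avatar right after an upload by giving
// the image a URL it hasn't cached yet.
document.getElementById('avatar').src = '/uploads/profile.jpg?t=' + Math.random();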
The reason behind adding some random element to the end of a web service request like that is because in many cases you want the data to always be fresh. If you are caching it, it is possible the data won't be fresh.
For example, say you have an AJAX request which gives you the current high score of a game. You call it with http://example.com/get_high_score.php. Say it returns 100. Now, say you wait 5 seconds and call this again (or the user refreshes their page). If that request was cached, it may return 100 again. However, in that time, the score may actually now be 125.
If you call http://example.com/get_high_score.php?t=12345786, the score would be the latest value, because it wasn't cached.
url + "?t=" + Math.random() is just one means of doing this. I actually prefer to use a timestamp instead, as that is guaranteed to always be unique.
url + "?t=" + (new Date()).getTime()
On the flip side, if you don't need the data to always be fresh (e.g., you are just sending a list of menu item options which almost never change), then caching is okay and you'd want to leave off the extra bit.
An alternative is to use a timestamp, or a value derived from it that changes only every few seconds. Although the best method (if you can) is to add headers to your server response that tell the browser not to cache the result.
var t = new Date().getTime();
var t2 = Math.floor(t/10000);
url = target_url + "?t=" + t2;
Although it's unlikely in this case, be aware that if your site continually generates links to random internal URLs, say through server-side code, it becomes a "spider trap": crawlers such as search engines get stuck in a loop following these random links around, causing spikes in your server load.
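As a sketch of the "tell the browser not to cache it" option mentioned above, assuming a Node/Express backend (the question doesn't say what the server actually is, and the route and data are made up):
var express = require('express');
var app = express();
var currentHighScore = 100; // placeholder value

app.get('/get_high_score', function (req, res) {
    // Tell browsers and proxies not to reuse a cached copy of this response,
    // so no random query-string parameter is needed on the client.
    res.set('Cache-Control', 'no-store, no-cache, must-revalidate');
    res.set('Expires', '0');
    res.json({ score: currentHighScore });
});

app.listen(3000);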

Can I tell from javascript whether my page was hard refreshed?

I've given up on this, but I thought I'd post here out of curiosity.
What I call a "hard refresh" is the Ctrl+R or Shift+F5 that you do during development to see your changes.
This causes the browser to add a Cache-Control: max-age=0 header to the request and "child" requests like images and scripts, etc.
If you're doing your job, you'll get a 304 on everything but the resource that's changed. (Okay, well, see comments. This is assuming that other validators are sent based on browser caches.)
So far, so good.
The problem is that I'm not loading scripts directly from the page, but through a load.js, and the browsers are inconsistent about whether they include that Cache-Control header on those requests. Chrome doesn't do it at all, and Firefox seems to stop in the middle of a series.
Since I can't access the headers of the current request, there's no way to know whether that header should be included or not.
The result is that when I change a script (other than load.js), a hard refresh does not reliably work, and I have to, e.g., clear the browser cache (which is a bit heavy-handed).
Any thoughts on this?
Unfortunately you cannot detect a hard refresh from JavaScript (there is no access to the headers for the currently loaded page).
However, the server can tell from the request headers if this is a hard refresh, so there's the option of cooperating. For example the server can include a custom <meta> tag in the response or add a special class to <body> and your script will then have access to this information.
Once load.js detects a hard refresh it can then propagate it to the dependent scripts by e.g. attaching a URL parameter to the requests (think "?t=" + timestamp).
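A rough sketch of that cooperation, assuming the server adds something like class="hard-refresh" to <body> when it sees Cache-Control: max-age=0 on the page request (the class name and parameter name are made up):
// load.js side: if the server flagged this page load as a hard refresh,
// append a cache-busting parameter to every script loaded dynamically.
var hardRefresh = document.body.className.indexOf('hard-refresh') !== -1;

function loadScript(src) {
    if (hardRefresh) {
        src += (src.indexOf('?') === -1 ? '?' : '&') + 't=' + Date.now();
    }
    var s = document.createElement('script');
    s.src = src;
    document.getElementsByTagName('head')[0].appendChild(s);
}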
You could try checking localStorage. Set a localStorage variable and check it. If it's there, it's not a hard refresh, otherwise, it is a hard refresh.

How is this working?

I was browsing a site called BSEINDIA.com (http://www.bseindia.com/stockreach/stockreach.htm?scripcd=532667) and noticed that clicking "Get Quote" seems to fire an AJAX request that gets the price of the selected equities. I tried to isolate this request and fire it separately, but it doesn't seem to work.
I copied the code over from the HTML of that same page (http://www.bseindia.com/stockreach/stockreach.htm?scripcd=532667).
Any pointers on why this is not working? Is there some sort of authentication going on? I am not even a member of this site.
The following is what I am trying to do:
<script type="text/javascript">
var oHTTP = getHTTPObject();
var seconds = Math.random().toString(16).substring(2);

if (oHTTP) {
    oHTTP.open("GET", "http://www.bseindia.com/DotNetStockReachs/DetailedStockReach.aspx?GUID=" + seconds + "&scripcd=532667", true);
    oHTTP.onreadystatechange = AJAXRes;
    oHTTP.send(null);
}

function AJAXRes() {
    if (oHTTP.readyState == 4) alert(oHTTP.responseText);
}

function getHTTPObject() {
    var obj;
    try {
        obj = new ActiveXObject("Msxml2.XMLHTTP");
    } catch (e) {
        try {
            obj = new ActiveXObject("Microsoft.XMLHTTP");
        } catch (e1) {
            obj = null;
        }
    }
    if (!obj && typeof XMLHttpRequest != 'undefined') {
        try {
            obj = new XMLHttpRequest();
        } catch (e) {
            obj = false;
        }
    }
    return obj;
}
</script>
Found my answer here:
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.referer%28VS.71%29.aspx
Actually, it is fairly easy. When you send an HTTP request, a header called Referer (that's the HTTP spec's spelling; often written "Referrer") gets sent with the request. It is basically the URL of the page which initiated the request.
BSEINDIA checks the Referrer value to make sure that the request is coming from their site. If it is, it sends the data. If not, it sends its 404 page.
You can easily test that theory by disabling the Referrer in your browser. In Firefox, you can do that by typing about:config and setting network.http.sendRefererHeader to 0.
If you still want to get the data, you will need to write a script (in PHP or another language) which will make the request with the proper Referrer and output the results.
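For example, a minimal sketch of such a script in Node.js (rather than PHP), with the URL and parameters copied from the question and everything else assumed:
var http = require('http');

var options = {
    hostname: 'www.bseindia.com',
    path: '/DotNetStockReachs/DetailedStockReach.aspx?GUID=' +
          Math.random().toString(16).substring(2) + '&scripcd=532667',
    headers: {
        // Pretend the request originated from the stock page itself.
        'Referer': 'http://www.bseindia.com/stockreach/stockreach.htm?scripcd=532667'
    }
};

http.get(options, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { console.log(body); });
}).on('error', console.error);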
There might be some form of IP restriction in place on the files/data, to protect the site from third-party scripts accessing its data. That's what I'd do.
Possibly the HTTP Referer. Make sure you do not break any copyright restrictions.
