Cache-Control in node.js http.get

I'm making requests for JSON objects with node using http.get(). It all works fine but, in some cases, I get an outdated version of the page (there's a date field that lets me check). The behaviour is really inconsistent: I can get the right thing one moment and the wrong thing the next... Here's my request:
var options = {
    host: 'host.com',
    path: urlPath,
    headers: { 'Cache-Control': 'no-cache' }
};
http.get(options, function (res) {
    // JSON.parse the result and check the date; sometimes 17/1, sometimes 10/1
});
Is there anything wrong with the request header? I tried 'max-age=0' instead of 'no-cache', to no avail. Does anyone have an idea where this could come from? In my browser I get the latest version every time, so I'm a bit lost here. Help!

Solved it, thanks to a user's comment. What I did is:
urlPath += "&ie=" + (new Date()).getTime();
var options = {
    host: 'host.com',
    path: urlPath,
    headers: { 'Cache-Control': 'no-cache' }
};
http.get(options, function (res) {
    // JSON.parse the result and check the date; sometimes 17/1, sometimes 10/1
});
Stupid and awesome at the same time...
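For anyone wondering why this works: the timestamp makes every request URL unique, so any cache along the way (browser, proxy, or a misconfigured server-side cache) sees a brand-new resource and cannot serve a stale copy. A minimal sketch of the same trick as a reusable helper (the parameter name _ is arbitrary):

var http = require('http');

// Append a throwaway timestamp parameter so no cache ever sees the same URL twice.
function getFresh(host, path, callback) {
    var sep = path.indexOf('?') === -1 ? '?' : '&';
    http.get({
        host: host,
        path: path + sep + '_=' + Date.now(),
        headers: { 'Cache-Control': 'no-cache' }
    }, callback);
}

getFresh('host.com', urlPath, function (res) {
    // consume res exactly as before
});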

Related

AngularJS $http.jsonp Geoserver with custom callback

I'm trying to make a $http.jsonp request in AngularJS to a GeoServer WFS which expects the callback name to be "parseResponse". However, I can see the request being controlled by Angular, which sets the callback to its standard name such as "angular.callbacks._0".
The vendor parameters for the WFS can be found at http://docs.geoserver.org/maintain/en/user/services/wms/vendor.html#wms-vendor-parameters, which shows that the callback can be set via format_options=callback:myCallback in the URL and defaults to "parseResponse". Angular, however, seems to disregard this and adds its own callback parameter regardless, which results in the HTTP call failing because parseResponse is undefined.
This has left me in the unusual position of having to set the callback name to "angular.callbacks._0" in the URL to get a response from the request, which is obviously a messy solution if I want to make any other calls to this WFS (which I do/will be doing).
http://jsfiddle.net/ADukg/15394/
myApp.controller("searchCont", function ($scope, $http, $sce) {
    $scope.name = 'Superhero';
    jsonSchools = "http://inspire.dundeecity.gov.uk/geoserver/inspire/wfs?service=wfs&version=2.0.0&request=GetFeature&typeNames=inspire:SCHOOL_CATCHMENTS_PRIMARY&%20srsName=EPSG:27700&bbox=338906.9,732790.9,338907.1,732791.1&&outputFormat=text/javascript&format_options=callback:angular.callbacks._0";
    //jsonSchools = "http://inspire.dundeecity.gov.uk/geoserver/inspire/wfs?service=wfs&version=2.0.0&request=GetFeature&typeNames=inspire:SCHOOL_CATCHMENTS_PRIMARY&%20srsName=EPSG:27700&bbox=338906.9,732790.9,338907.1,732791.1&&outputFormat=text/javascript&format_options=callback:parseResponse";
    var trustedUrlSchools = $sce.trustAsResourceUrl(jsonSchools);
    $http.jsonp(trustedUrlSchools).then(function (response) {
        $scope.schools = response.data.features;
        console.log($scope.schools);
    });
});
I've set up a JS Fiddle to demonstrate the problem, with the first URL showing the working interim solution and the second, commented-out URL showing how the URL should look in theory. Notably, if you request the URL without the format_options parameter at all, it still defaults to parseResponse.
Lastly, I came across a previous Stack Overflow question that looks similar (how to custom set angularjs jsonp callback name?), but it didn't seem to help and gave generic method-undefined errors.
Any help or direction on this problem would be appreciated. I have thus far tried setting jsonpCallbackParam in the $http request, but to no avail. Hoping this is an oversight on my part that I can put down to inexperience.
Thanks in advance
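For what it's worth, jsonpCallbackParam in Angular 1.6+ only renames the query parameter, not its value; Angular always supplies its own generated function name, which would explain why that attempt failed. One workaround, since GeoServer's callback name is fixed, is to skip $http.jsonp entirely: define the global parseResponse yourself and load the URL as a plain script tag, wrapped in a $q promise so it still fits Angular's flow. A minimal sketch, assuming the commented-out URL above and a controller that also injects $q:

// Hedged sketch: manual JSONP with a fixed callback name.
// GeoServer will call window.parseResponse(data) when the script loads.
function jsonpFixedCallback(url, $q) {
    var deferred = $q.defer();
    window.parseResponse = function (data) {
        delete window.parseResponse; // clean up the global
        deferred.resolve(data);
    };
    var script = document.createElement('script');
    script.src = url; // must include format_options=callback:parseResponse
    script.onerror = function () { deferred.reject('JSONP load failed'); };
    document.body.appendChild(script);
    return deferred.promise;
}

// Usage inside the controller:
jsonpFixedCallback(jsonSchools, $q).then(function (data) {
    $scope.schools = data.features;
});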

download file and then leave page

$http.post('#Url.Action("donePressed")', {
    id: $scope.id
}).then(
    function (response) { // success callback
        window.location = '#Url.Action("PdfCreator", "someController")?Id=' + $scope.id;
        window.location = '#Url.Action("Index","AnotherController")';
    },
    function (response) { // failure callback
        alert(response.statusText);
    });
Hi, I guess I am doing something wrong. I want to call a function that sends me a file as a response, and afterwards I want to leave the page and go somewhere else.
The problem is that, because this is async, I don't get my download.
How can I make this synced?
Async has nothing to do with it. Once you're inside the success callback, the async part is already done. The problem is that you're changing the window location again before the first change has had time to load. In other words, it's the exact opposite of an async problem; the problem is that this code is synchronous and runs too fast.
However, the approach here is flawed to begin with. It might work if the browser were forced to download the file, since then the first change to window.location would not itself cause the browser view to change. PDF is typically a browser-viewable type, though, so that is not guaranteed. Regardless, you still have the same issue of needing to delay the second call until the first has gotten a response, which is basically impossible. There's no built-in event for this sort of thing, so the best you could do is use setTimeout with a 1-2 second delay and hope that is enough time to get the first response. Even then, if it ever took longer, your code would break again. In other words, it's going to be extremely brittle.
The simple fact is that this is just not how HTTP works. You're basically trying to return two responses for a single request, which is not possible. It's a clever way to try to skirt around inherent restrictions in the protocol, I'll give you that, but it's ultimately still insufficient.
All that said, you can actually make this happen via the HTML5 File API and AJAX, but your solution then will only be compatible with modern browsers (basically everything except IE 10 and under). If you do not need to support lesser versions of IE, then you can use the following code instead:
function (response) { // success callback
    // responseType 'blob' makes $http hand back binary data that
    // createObjectURL can accept (see the note below)
    $http.get('#Url.Action("PdfCreator", "someController")?Id=' + $scope.id,
        { responseType: 'blob' }).then(
        function (response) { // success callback
            var a = document.createElement('a');
            var url = window.URL.createObjectURL(response.data);
            a.href = url;
            a.download = 'myfile.pdf';
            a.click();
            window.URL.revokeObjectURL(url);
            window.location = '#Url.Action("Index","AnotherController")';
        },
        function (response) { // failure callback
            alert(response.statusText);
        }
    );
},
The secret sauce is in fetching the PDF via AJAX and then creating an object URL out of the PDF data. You can then use that to create an anchor element in the DOM and "click" it dynamically to prompt the download. The caveat is that the response has to come back as binary: createObjectURL needs a Blob, not a string. With jQuery you tell it that the XHR object's response type is 'blob'; Angular's $http accepts an equivalent responseType option in its request config, which is why it appears in the snippet above. As an alternative, you can use XMLHttpRequest directly for this particular call and set xhr.responseType = 'blob'.
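For completeness, the raw-XMLHttpRequest variant mentioned above might look like this (same Razor-generated URLs as the snippet):

var xhr = new XMLHttpRequest();
xhr.open('GET', '#Url.Action("PdfCreator", "someController")?Id=' + $scope.id);
xhr.responseType = 'blob'; // ask the browser for binary data directly
xhr.onload = function () {
    if (xhr.status === 200) {
        var a = document.createElement('a');
        var url = window.URL.createObjectURL(xhr.response); // xhr.response is a Blob
        a.href = url;
        a.download = 'myfile.pdf';
        a.click();
        window.URL.revokeObjectURL(url);
        window.location = '#Url.Action("Index","AnotherController")';
    } else {
        alert(xhr.statusText);
    }
};
xhr.send();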

How to make a subdomain ajax call with jQuery (without iFrames)

site.com/api/index.php is where I need the AJAX request to go. From site.com/sub/ the request works perfectly, but sub.site.com sends the request to sub.site.com/api/index.php, which obviously does not exist... I've Googled and StackOverflowed the hell out of this question but can't seem to find an answer that works.
Code:
var jQuery_ajax = {
    url: "site.com/api/index.php",
    type: "POST",
    data: $.param(urlData),
    dataType: "json"
};
var request = $.ajax(jQuery_ajax);
The most common answer was to set document.domain to the regular site, but that does not seem to do anything... I've also seen answers talking about iFrames, but I want to stay away from iFrames at all costs.
document.domain = "site.com";
Note: everything is on the same server.
HACKY SOLUTION: made sub.site.com/api/index.php a file that simply reads
include_once("$path2site/api/index.php");
Once you've corrected the URL to http://site.com/api/index.php, try adding the following to api/index.php:
header("Access-Control-Allow-Origin: http://sub.site.com");
Edit: it's possible that doing so may disallow use from site.com itself as well; the header only accepts a single value, so you may need to decide per-request which origin to send back, for example by echoing the request's Origin header when it is on a whitelist (sketched below).
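The endpoint in this question is PHP, but the whitelist logic is the same in any server language; a minimal Node-flavoured sketch of the idea:

var http = require('http');

// Echo the caller's Origin back only if it is one we trust.
var allowed = ['http://site.com', 'http://sub.site.com'];

http.createServer(function (req, res) {
    var origin = req.headers.origin;
    if (allowed.indexOf(origin) !== -1) {
        res.setHeader('Access-Control-Allow-Origin', origin);
    }
    res.end('{"ok": true}');
}).listen(8080);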

JSON.parse causes Chrome tab to crash ("Aw, Snap") despite use of try/catch

I'm loading some JSON data from an AJAX query:
$.ajax({
    url: url,
    type: params.method,
    data: data,
    timeout: this.settings.timeout,
    success: function (d, a, x) {
        console.log('request Complete', params.endpoint, params.params);
        var json = null;
        try {
            json = JSON.parse(d);
        } catch (e) {
            console.error(e);
        }
        console.log('json');
        // omitted for brevity...
    }
});
I'm seeing occasional "Aw, Snap" crashes in Chrome where the last console.log shown is "request Complete" (neither the error nor the second log ever appears).
I suppose it's important to note that the data may be large (sometimes as big as ~15 MB), which is why I'm not printing out d on every request and looking for malformed JSON (yet... I may resort to that). FWIW, I've also tried $.parseJSON instead of JSON.parse.
Research I've done into the "Aw, Snap" error is vague at best. My best guess at the moment is that this is an out-of-memory crash. Unfortunately, there's not much I can do to decrease the footprint of the result set.
Is there any way I could, at the least, gracefully fail?
What happens when you tell jQuery the response data is JSON via the dataType property? Doing so should cause jQuery to pre-parse it and just give you the data. If you're right about what is going on, it seems this might cause a crash too; jQuery does some sanity checks before parsing, though.
$.ajax({
    url: url,
    type: params.method,
    data: data,
    dataType: 'json',
    timeout: this.settings.timeout,
    success: function (d, a, x) {
        // `d` should already be parsed into an object or array, and ready to use
    }
});
If that doesn't help, please post your actual JSON response for us to take a look at.
"#the system" nailed it. 15MB of info coming back is too much for your browser to handle. I tripped upon this trying to see why my 64 Byte encoded image string is crashing the chrome tab, and it's only 200-500KB.
To 'gracefully fail', it seems you will need to have server side logic in place to prevent this from happening; perhaps limit the size of the JSON string. Only have X characters of length in the initial JSON string and add a property for 'isFinishedSending: false' or 'leftOff: [index/some marker]' so you can timeout for a bit and hit your server again with the the place you left off.
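A sketch of what that chunking might look like on the client; leftOff, isFinishedSending, payload, and handleAll are all hypothetical names the server and page would need to agree on:

// Pull the result down in slices instead of one 15 MB response.
function fetchChunked(url, leftOff, pieces) {
    $.ajax({
        url: url,
        dataType: 'json',
        data: { leftOff: leftOff }, // hypothetical resume marker
        success: function (chunk) {
            pieces.push(chunk.payload);
            if (chunk.isFinishedSending) {
                handleAll(pieces); // hypothetical: assemble and use the full set
            } else {
                fetchChunked(url, chunk.leftOff, pieces);
            }
        }
    });
}

fetchChunked(url, 0, []);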

Creating an Amazon S3 Bucket using jQuery and REST

I want to try to do a handful of things with jQuery and Amazon's S3 API via REST. My key issue is not being familiar enough with REST (or not as familiar as I thought) to know whether this approach would even remotely work. I have searched endlessly for some tidbit of an example and come up fruitless; maybe I'm searching for the wrong things, I don't know, but as a last-ditch effort I figured I'd hit up my new favorite place, here.
What I need to do is send a PUT request to the API to create the bucket. Based on the S3 API docs I came up with:
var AWSAccessKeyId = "";
var AWSSecretAccessKey = "";
var AWSDomain = ".s3.amazonaws.com";

function createNewBucket(bucketName) {
    var bucketString = 'HTTP/1.1\n';
    bucketString += bucketName + AWSDomain + '\n';
    bucketString += 'Content-Length: 0 \n';
    bucketString += 'Date: Wed, 01 Mar 2009 12:00:00 GMT \n';
    bucketString += 'Authorization: AWS ' + sha1_string;
    $.ajax({
        url: bucketName + AWSDomain,
        type: 'PUT',
        data: bucketString,
        success: function (data) {
        },
        error: ''
    });
}
Though the concept isn't complete above (I am just starting it out), I began questioning whether this approach was even going to work. And if it is to work, how would I also handle the response to know whether it was successful or not? I know if I can nail this one piece down, I can handle most of the rest of my issues to come; it's just tackling the first hump and figuring out if I'm going about it the right way. It's also worth mentioning that I've been tasked with doing this purely in JavaScript, with or without the help of a lib like jQuery; I can't use PHP or the like in this concept. So if anyone can throw me a bone, I'd be greatly appreciative.
On a side note, does anyone know if there's a means of actually testing something like this without having an S3 account? I can't afford to pay for an account just for the sake of testing, let alone any other reason.
Firstly, I get the feeling that you are quite new to consuming web services client-side.
It is often best to start with something simple.
If I have a resource that returns a string... say test.html -> "Hello World!"
And the URL for this web-service is some-realy-long-id.s3.amazonaws.com
then we have the following:
$.ajax({
    url: 'some-realy-long-id.s3.amazonaws.com/test.html',
    type: 'PUT',
    data: {
        'myKey': 'myValue'
    },
    success: function (data) {
        // alert dialog containing "Hello World!"
        alert(data);
    },
    error: ''
});
You must remember that requests from the browser follow the same-origin policy, so unless you are planning to use JSONP or some other cross-domain hack, you will run into trouble.
P.S. Another little piece of advice: keep your opening braces on the same line in JavaScript, because the language performs semicolon insertion (which will bite you if you return an object literal).
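To illustrate that semicolon-insertion pitfall:

// Automatic semicolon insertion turns a bare `return` followed by a
// newline into `return;`, so the object literal below is never reached.
function broken() {
    return
    { answer: 42 };
}

// Keeping the opening brace on the same line avoids the trap.
function works() {
    return {
        answer: 42
    };
}

broken(); // undefined
works();  // { answer: 42 }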
Oh yes, and a lot of older browsers do not support 'PUT', which you may need to consider.
