Navigate in JavaScript but adjust the `Cache-Control` header

I use a 5-second timeout to detect whether the user is offline.
If a user tries an AJAX POST while offline, it times out after 5 seconds and the AJAX POST fails. When the AJAX POST fails, I want to navigate to a different URL, but because I already know the server is down, I want to tell the service worker: don't query the server and waste another 5 seconds, just hit the cache (the response will be pre-cached).
From my research, the most "correct" way to tell the service worker to hit the cache seems to be setting the Cache-Control header to only-if-cached.
How does one perform:
window.location.replace(url);
But set the Cache-Control header?
At the moment a failed AJAX POST takes 10 seconds to handle, whereas it should only take 5 seconds.
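A navigation triggered by window.location.replace() cannot carry custom request headers, so one workaround is to flag the outage to the service worker and let its fetch handler answer the next navigation from the pre-cache. This is only a sketch, with every name illustrative and a controlling service worker assumed:

```javascript
// Hypothetical sketch (sw.js): instead of a Cache-Control header, the
// page tells the service worker the server is down, and the fetch
// handler then serves navigations cache-first.

// Pure helper so the routing decision is easy to test in isolation.
function shouldServeFromCache(requestMode, serverDown) {
  return requestMode === 'navigate' && serverDown;
}

// Only wire up the listeners when actually running as a service worker.
if (typeof self !== 'undefined' && typeof window === 'undefined') {
  let serverDown = false;

  self.addEventListener('message', (event) => {
    if (event.data === 'server-down') serverDown = true;
  });

  self.addEventListener('fetch', (event) => {
    if (shouldServeFromCache(event.request.mode, serverDown)) {
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    }
  });
}

// Page side, when the AJAX POST times out:
//   navigator.serviceWorker.controller.postMessage('server-down');
//   window.location.replace(url);
```

A query parameter on the navigated URL would work as well and avoids the message round-trip, at the cost of a slightly uglier address.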

Related

Nest.js long requests are blocked by CORS and return null

I have an endpoint in a Nest.js controller that receives a POST request with an Excel file and has to run a long operation over that file. The problem is that every request that takes more than one minute (60000 ms) returns null. The most confusing thing is that it doesn't even throw a timeout error; the process on the server runs correctly under the hood, but on the front end I receive a CORS block error.
Something like this:
Access to XMLHttpRequest at 'https://api-asd.com/endpoint/upload-excel' from origin
'http://asd.com.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin'
header is present on the requested resource.
When I want to process the response I can't, because I receive null on the front end. I tried setting the Nest.js server timeout to three minutes, but it still doesn't work. I also checked the timeout on the client side, but the axios default timeout is 0 and I didn't change it, so it should wait until the request finishes.
CORS is enabled and works perfectly if the operation lasts less than one minute. I even checked whether the file size limit was blocking the upload, but the back end receives the file and the process runs anyway.
Is it possible to solve this problem?
Note: I'm using Next.js on the front end.

Handling long response time for REST API

I have created a JavaScript-based REST API page (a private Chrome extension) which integrates with the Oracle tool and fetches a response. It works fine if the response is received within around 3-5 minutes; however, if it takes additional time it gives an ERR_EMPTY_RESPONSE error.
I have tried xhr.timeout but it still gives the same ERR_EMPTY_RESPONSE error. How can we ask the JavaScript to wait for more time?
Thanks..
If you are making an AJAX call to the server and want to increase the response waiting time, you need to set the timeout interval on the server side.
For Node.js, here is how to increase the timeout period on the server side, in app.js (Express framework):
app.use(function (req, res, next) {
    // Set the request timeout to 24 hours
    req.connection.setTimeout(24 * 60 * 60 * 1000);
    next();
});
You can also refer to these:
HTTP keep-alive timeout
Proper use of KeepAlive in Apache .htaccess
You need to do this on the server side.

Preventing spammy XMLHTTP requests to php

I have a site that sends XMLHttpRequests to a PHP file that handles the HTTP POST request and returns data in JSON format. The URLs of the post-request files are public information (since a user can just view the JS code for a page and find the URLs I'm sending HTTP requests to).
I mainly handle HTTP POST requests in PHP by doing this:
// First verify it's an XMLHttpRequest, then get the POST data
if (isset($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) === 'xmlhttprequest')
{
    $request = file_get_contents('php://input');
    $data = json_decode($request);
    // Do stuff with the data
}
Unfortunately, I'm fairly sure that the headers can be spoofed, and some devious user or click bot can just spam my POST endpoints, repeatedly querying my database until either my site goes down or they go down fighting.
I'm not sure their requests will play a HUGE role in freezing the server (as 20 requests per second isn't that much). Should I be doing something about this (especially in the case of a DDoS attack)? I've heard of rate limiting, where you record an instance of every time some IP requests data and then trace whether they are spammy in nature:
INSERT INTO logs (ip_address, page, date) values ('$ip', '$page', NOW())
// And then every time someone loads the PHP POST endpoint, check whether they loaded the same one in the past second or 10 seconds
But that means every time there's a request from a normal user, I have to expend resources to log them. Is there a standard or better practice (maybe some server configuration?) for preventing or dealing with this concern?
Edit: Just for clarification. I'm referring to some person coding a software (with a cookie or is logged in) that just sends millions of requests per second to all my PHP post request files on my site.
The solution for this is to rate-limit requests, usually per client IP.
Most webservers have modules which can do this, so use one of them - that way your application only receives requests it's supposed to handle.
nginx: ngx_http_limit_req
Apache: mod_evasive
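For the nginx module named above, a minimal ngx_http_limit_req sketch might look like this (the zone name, rate, and location are all illustrative, not from the question):

```nginx
# Shared 10 MB zone keyed by client IP, allowing 5 requests per second.
limit_req_zone $binary_remote_addr zone=postlimit:10m rate=5r/s;

server {
    location /post_request.php {
        # Allow bursts of up to 10 queued requests; excess requests
        # are rejected before they ever reach PHP.
        limit_req zone=postlimit burst=10;
    }
}
```

Because the counter lives in shared memory inside nginx, nothing is logged to your database for well-behaved users.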
There are many things you can do:
Use tokens to authenticate requests. Save the token in the session and allow only a certain number of requests per token (e.g. 20). Also make tokens expire after some amount of time (e.g. 5 min). The exact values depend on your site's usage patterns. This of course will not stop an attacker, as they can refresh the site and grab a new token, but it is a small and almost costless aggravation.
Once you have tokens, require a captcha after several token-refresh requests. Again, adjust it to your usage patterns to avoid showing the captcha to regular users.
Adjust your server's firewall rules. Use the iptables connlimit and recent modules (see http://ipset.netfilter.org/iptables-extensions.man.html). This will reduce the request rate handled by your HTTP server, so it will be harder to exhaust resources.

javascript listen during 10 minutes for a http post request, if not cancel it

I am developing a little Bitcoin payment platform. I am using Node.js as the backend and HTML5 and JavaScript as the frontend. I have coded the part where the Node.js server receives the payment-done notification callback and posts a 'paid' string to the client. On the client side I need to write code that waits up to 10 minutes for that POST request. If the request does not arrive within those 10 minutes, the payment is cancelled.
I just have no clue how to make the frontend listen for an HTTP POST request for 10 minutes and cancel it if nothing arrives in that period.
Regards,
Aitor
You can't "POST" to the client.
Also, 10 minutes would be too long for a request timeout (the browser would time out before then, about 2 minutes max depending on the browser).
You can do polling after the request has been POSTed (from the client to the nodejs backend) - you'd use AJAX for that
Or you could set up a WebSockets socket.io connection and "push" a result from the nodejs backend to the client - see here http://en.wikipedia.org/wiki/WebSocket
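The polling approach can be sketched like this; the checkStatus callback is hypothetical and stands in for the AJAX status request, so nothing here is from the original code:

```javascript
// Poll a status callback until it reports 'paid', or give up after
// timeoutMs and reject, which is where the payment gets cancelled.
function pollPaymentStatus(checkStatus, { intervalMs = 5000, timeoutMs = 10 * 60 * 1000 } = {}) {
  return new Promise(function (resolve, reject) {
    const started = Date.now();
    const timer = setInterval(async function () {
      if (Date.now() - started >= timeoutMs) {
        clearInterval(timer);
        return reject(new Error('payment cancelled: timed out'));
      }
      if (await checkStatus() === 'paid') { // e.g. an AJAX GET to the backend
        clearInterval(timer);
        resolve('paid');
      }
    }, intervalMs);
  });
}
```

In practice checkStatus would be an AJAX call to a Node.js endpoint that returns 'paid' once the payment callback has arrived.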
You can use a timer for this.
Set a timer with JavaScript and check whether the POST parameter has been received or not. When the time runs out, you can redirect the user to another page.
With jQuery:
$.ajax({
    url: "your url",
    timeout: 1000 * 60 * 10, // 10 minutes, in milliseconds
    success: function (result) {
        // handle response here...
    }
});
Change "your url" to your URL. This code makes an AJAX call to your server and, when it responds, enters the success function with the result. The timeout of 1000*60*10 is in milliseconds and evaluates to 10 minutes.

Setting the timeout when using Dojo

I recently updated an app that uses Dojo to send asynchronous requests to my server, which serves them with CGI.
My problem is as follows. The variable that makes the requests is
parent.sc_dojo.io.script.jsonp_sc_dojoIoScript2
The new service takes too long to send the response, approximately 40-60 seconds, and after this time the variable parent.sc_dojo.io.script.jsonp_sc_dojoIoScript2 appears as UNDEFINED.
I made an analysis using Firebug; see the following image for more details.
The server's response has the following headers:
Connection: Keep-Alive
Content-Type: text/javascript; charset=utf-8
Date: Tue, 10 Sep 2013 12:39:22 GMT
Keep-Alive: timeout=5, max=100
Server: Apache/2.2.22 (Ubuntu)
Transfer-Encoding: chunked
The Keep-Alive header shows timeout=5 and max=100, and I don't really know the units of these values. Any ideas?
About Connection Keep-Alive
When a client browser sends the "Connection: Keep-alive" header to an HTTP/1.1 server, the browser is saying "hey, I want to carry on a long conversation, so don't close the connection after the first exchange."
The keep-alive "timeout" value is in seconds. The "max" value is unit-less, representing the maximum number of requests to service per connection. Taken together, these augment the client's request to "hey, I want to carry on a long conversation, so don't close the connection after the first exchange BUT if nothing exchanges in 5 seconds (timeout) OR if more than 100 requests go back and forth (max), I'm ok with you closing the connection." The server responds with the actual values it will service for timeout and max.
The penalty for a closed connection is that a new one has to be opened up. Some modern browsers limit the number of simultaneous open connections, so keeping these values too small may introduce latency (while your app waits for free connections). On the other hand, the server need not agree to the timeout and max values requested: the server sets its own limits.
See these articles for details:
http://www.feedthebot.com/pagespeed/keep-alive.html
http://en.wikipedia.org/wiki/HTTP_persistent_connection
http://www.hpl.hp.com/personal/ange/archives/archives-95/http-wg-archive/1661.html
About dojo timeouts
I don't see your code or Dojo version, but Dojo does allow you to set how long it waits for a response via the timeout property in the XHR request. The default timeout is "never". Code below.
In practice, "never" is misleading: browsers have their own defaults for keep-alive timeouts and upstream routers might have their own timeouts.
Try to keep it short. If the response takes more than 15 seconds, there may need to be a different design approach to the problem: reverse ajax, polling, combined response, etc.
require(['dojo/request/xhr'], function (xhr) {
    xhr('http://www.example.com/echo', {
        timeout: 15000, /* change this; units are milliseconds */
        handleAs: 'json'
    }).then(function (r) {
        console.log(r);
    });
});
The specific problem
Ok, finally. If you have a long server side run, here's what I would do:
Send a request from client to server that starts the job
Server responds with a unique URL that can be polled for status
In Javascript, use setInterval() to periodically check the returned URL for status
When the URL shows "status" done, kill the setInterval and issue a final call to get the result
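The four steps above can be sketched as follows. Here startJob and getStatus stand in for the two HTTP calls (with Dojo they would be xhr requests), and every name and endpoint shape is illustrative:

```javascript
// Step 3: check the returned status URL periodically with setInterval.
function pollUntilDone(getStatus, intervalMs) {
  return new Promise(function (resolve) {
    const timer = setInterval(async function () {
      const status = await getStatus();
      if (status.done) {
        clearInterval(timer); // step 4: kill the interval once done
        resolve(status);
      }
    }, intervalMs);
  });
}

async function runLongJob(startJob, getStatus, intervalMs) {
  // Steps 1-2: start the job; the server answers with a pollable URL.
  const { statusUrl } = await startJob();
  // Step 3: poll that URL until the job reports done.
  const finished = await pollUntilDone(() => getStatus(statusUrl), intervalMs);
  // Step 4: hand back the final result.
  return finished.result;
}
```

Keeping the two HTTP calls injectable like this also makes the flow easy to unit-test with fakes.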
