We run our staging environment on a shared host. Occasionally we run into trouble with unknown errors, which Cloudflare surfaces as error 520 when the page loads.
It's not a resource problem, and we don't have server access. It really does not happen often, so I want to run a Cloudflare Worker that refreshes the page if there is a 520 error.
Does anyone know how to write this, please?
Something like below should do it. Note that this doesn't "refresh" the page. Instead, the error page never reaches the user's browser at all, because on an error, the whole request is retried and the retry response goes to the browser instead.
Of course, it would be better to figure out why the error is happening. Cloudflare's error 520 means that your origin server is returning invalid responses to Cloudflare. Here is a page discussing what to do about it.
That said, while the problem is being investigated, a Worker can offer a convenient way to "sweep the problem under the rug" so that your visitors can access your site without problems.
export default {
  async fetch(request, env, ctx) {
    if (request.body) {
      // This request has a body, i.e. it's submitting some information to
      // the server, not just requesting a web page. If we wanted to be able
      // to retry such requests, we'd have to buffer the body so that we
      // can send it twice. That is expensive, so instead we'll just hope
      // that these requests (which are relatively uncommon) don't fail.
      // So we just pass the request to the server and return the response
      // normally.
      return fetch(request);
    }

    // Try the request the first time.
    let response = await fetch(request);

    if (response.status === 520) {
      // The server returned status 520. Let's retry the request. But
      // we'll only retry once, since we don't want to get stuck in an
      // infinite retry loop.

      // Let's discard the previous response body. This is not strictly
      // required, but it helps let the Workers Runtime know that it doesn't
      // need to hold open the HTTP connection for the failed request.
      await response.arrayBuffer();

      // OK, now we retry the request and replace the response with the
      // new version.
      response = await fetch(request);
    }

    return response;
  }
}
Is it possible (without an application layer cache of requests) to prevent sending an HTTP request for the same resource multiple times when it's cacheable? And if yes, how?
E.g. instead of
at time 0: GET /data (request#1)
at time 1: GET /data (request#2)
at time 2: received response#1 for request#1 // headers indicate that the response can be cached
at time 3: received response#2 for request#2 // headers indicate that the response can be cached
something like this:
at time 0: GET /data (request#1)
at time 1: GET /data (will wait for the response of request#1)
at time 2: received response#1 for request#1 // headers indicate that the response can be cached
at time 3: returns response#1 for request#2
This would require that it's possible to indicate to the browser that the response will be cacheable before the response headers are read. I am asking if there is such a mechanism, e.g. a preceding OPTIONS or HEAD request of some kind.
My question is whether there is a mechanism to signal the browser that the response for a URI will be cacheable
Yes, this is what the Cache-Control headers do.
and any subsequent requests for that URI can return the response of any in-flight request... Ideally this would be part of the HTTP spec
No, HTTP does not do this for you; you need to implement the caching yourself. This is what browsers do.
I did want to check if there is already something ready out-of-the-box
JavaScript libraries don't typically honour caching, as an AJAX request is usually for data and any caching of data usually happens on the server. I don't know of such a library, and of course asking for JS libraries is out of scope on SO.
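For illustration only, here is a minimal sketch of the kind of deduplication you would have to implement yourself (the inflight map and dedupedGetJson wrapper are made-up names, and Cache-Control semantics are ignored entirely; this only coalesces requests that overlap in time):

const inflight = new Map();

function dedupedGetJson(url) {
    // If a request for this URL is already in flight, piggyback on it.
    if (inflight.has(url)) {
        return inflight.get(url);
    }
    const promise = fetch(url)
        .then(r => r.json())
        .finally(() => inflight.delete(url)); // allow a fresh request once settled
    inflight.set(url, promise);
    return promise;
}

// Both callers share at most one network request while it is in flight.
dedupedGetJson('/data.json').then(console.log);
dedupedGetJson('/data.json').then(console.log);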
Depending on the browser, the second request could be stalled and served from cache if cacheable, e.g. in Chromium for non-range requests:
The cache implements a single writer - multiple reader lock so that only one network request for the same resource is in flight at any given time.
https://www.chromium.org/developers/design-documents/network-stack/http-cache
Here is an example where three concurrent requests result in only a single server call:
fetch('/data.json').then(async r => console.log(await r.json()));
fetch('/data.json').then(async r => console.log(await r.json()));
setTimeout(() => fetch('/data.json').then(async r => console.log(await r.json())), 2000);
The subsequent requests transfer 0 B and log the same random number, showing that only a single server call was made.
This behavior is not the same in, for example, Firefox.
An interesting question that comes to mind is what would happen when a request for a resource is made while an H2 push for that resource was initiated earlier but has not yet finished.
To reproduce, here is the test code:
https://gist.github.com/nickrussler/cd74ac1c07884938b205556030414d34
I am using Fetch to make cross-origin requests in JavaScript.
Cloudflare (proxying my traffic) will sometimes return a 429 (rate limiting).
When they return 429, they do not include the Access-Control-Allow-Origin header.
So now my fetch with mode: 'cors' fails, and throws a TypeError
How can I catch when this happens, vs. when it throws for other reasons like network errors?
My code is as follows:
try {
    let response = await fetch(uri, config); // this throws
    if (!response.ok) { // this line does not run
        throw response.statusText;
    }
    let json = await response.json();
    return json;
} catch (e) {
    console.log(e.message); // "Failed to fetch"
}
Checking the MDN docs, I'm not sure if it's possible to detect this 429 separately from other network errors.
A fetch() promise will reject with a TypeError when a network error is encountered or CORS is misconfigured on the server-side, although this usually means permission issues or similar
The short version is you can't, and this is by design.
The only place where this could be fixed is the server side. Either Cloudflare needs to be configured to send the appropriate headers, or you need to use a different service that does send them on error responses.
Without a server-side change, the error will be a generic CORS error.
The other alternative might be to build something like an 'iframe proxy', effectively letting you circumvent CORS entirely.
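In the meantime, about all the client can do is treat the TypeError as an opaque failure. Here is a minimal sketch of separating inspectable HTTP errors from indistinguishable network/CORS failures (getJson is just an illustrative wrapper name):

async function getJson(uri, config) {
    let response;
    try {
        response = await fetch(uri, config);
    } catch (e) {
        // A TypeError here means the browser never exposed a response:
        // network failure, blocked request, or missing CORS headers
        // (the 429 case above). The browser deliberately does not say which.
        throw new Error('Network or CORS failure: ' + e.message);
    }
    if (!response.ok) {
        // HTTP errors that do carry CORS headers land here with a real status.
        throw new Error('HTTP ' + response.status + ' ' + response.statusText);
    }
    return response.json();
}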
Status code 429 is returned when the user sends too many requests in a given time.
That means the event listeners are making multiple requests (that's what I think).
So, to stop event listeners from making multiple requests, use
event.stopImmediatePropagation();
If several listeners are attached to the same element for the same event type, they are called in the order in which they were added. If stopImmediatePropagation() is invoked during one such call, no remaining listeners will be called.
Read more about this in the MDN docs.
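For what it's worth, here is a minimal, generic illustration of the MDN behaviour quoted above (the element id and handlers are made up, not taken from the question):

const button = document.getElementById('submit-btn'); // hypothetical element

// First listener: runs, then blocks any other listeners on this element.
button.addEventListener('click', (event) => {
    console.log('first listener: sending request');
    event.stopImmediatePropagation();
});

// Second listener: never runs for this click, so no duplicate request.
button.addEventListener('click', () => {
    console.log('second listener: never logged');
});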
I have set up a NodeJS server, listening on localhost:3000, and I am testing it with Chrome browser.
When the response status code is 500:
If the response body is empty, then the client shows:
This page isn’t working
localhost is currently unable to handle this request.
HTTP ERROR 500
If the response body is not empty, then the client (surprisingly) shows it with no error
Here is a sample code to reproduce this:
let http = require("http");

let state = true;

let server = http.createServer(async function(request, response) {
    if (request.url == "/") {
        response.statusCode = 500;
        response.end(state ? "" : "data");
        console.log(state);
        state = !state;
    }
});

server.listen(3000, async function(error) {
    if (error)
        return console.log(error);
    console.log("server is listening");
});
To tell you the truth, I am not worried about how the browser displays it, because my real client is some other process in the system. But I want to make sure that this process receives the correct status code (500) even if I enclose some data in the response body.
So I'd like to know if there is something wrong in my code (i.e., if I am not allowed to send data along with status 500), or if it's just Chrome's default behavior.
According to this post and this discussion, Chrome displays a "friendly" error page if the server responds with an error code and an empty body (earlier the threshold was a body of less than 512 bytes) - that's why you see the different behavior.
In most cases you would send a custom error page along with the error code, to display it to the user. So there's no need for the browser to show its own error page if there's a custom page displaying it anyway.
And if you just need to inspect the response headers, there are Chrome extensions that let you see them; for example, HTTP Headers.
You could also use a debugging proxy, for example Fiddler, to capture the traffic and inspect the requests and responses.
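To confirm that a programmatic client really does receive the 500 together with the body, a quick check along these lines should work (a hypothetical verification script run against the server above):

let http = require("http");

// Request the server above and print exactly what a programmatic client sees.
http.get("http://localhost:3000/", function(response) {
    let body = "";
    response.on("data", chunk => body += chunk);
    response.on("end", function() {
        console.log("status:", response.statusCode); // 500 in both cases
        console.log("body:", JSON.stringify(body));  // "" or "data"
    });
});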
I am making a GET request to an HTTP REST API from an Ionic/Angular app. When making the request on a home network, the API receives and responds correctly to every request.
However, when I make the same request on a 4G data connection, only the first request is successful. For all following requests, the server never receives the request, but $http.get acts as though it received a response and reports a status code of 200. In these cases the "response" never changes from the response the initial request produced.
Clearing the app data restarts this cycle: the first request after the clear succeeds, and then that response is repeated.
This is the function which makes the GET request. The contents of res are shown in the screenshot below.
this.getSongInfo = function() {
    return $http.get('http://REDACTED:8080/getName').then(function(res) {
        console.log(res);
        return res.data;
    }, function(err) {
        console.log(err);
        return false;
    });
};
UPDATE
Lex's answer helped. I had not realised I couldn't inject $httpProvider into a service. Once I added it to my app.config, everything seemed to work correctly. I'm still not sure why it was only caching on a data connection, though; I couldn't find anything in the Angular documentation to say that caching is enabled for data connections.
It sounds like the response is being cached somewhere. This is useful when you expect the same request to return the same response every time, but in your case that's obviously not the desired behavior. Based on what you've provided, there is no way to know which external tool or setting is causing the response to be cached.
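For reference, the kind of configuration the update mentions usually looks something like the sketch below: cache-busting default GET headers registered on $httpProvider inside app.config. The module name 'app' and the header values are illustrative; whether they are needed depends on what is actually caching the response.

angular.module('app').config(['$httpProvider', function($httpProvider) {
    // $http defines no default headers for GET requests, so create the object first.
    if (!$httpProvider.defaults.headers.get) {
        $httpProvider.defaults.headers.get = {};
    }
    // Ask intermediaries (e.g. a carrier proxy on the 4G connection) not to serve a cached copy.
    $httpProvider.defaults.headers.get['Cache-Control'] = 'no-cache';
    $httpProvider.defaults.headers.get['Pragma'] = 'no-cache';
    // A date far in the past defeats conditional-request caching.
    $httpProvider.defaults.headers.get['If-Modified-Since'] = 'Mon, 26 Jul 1997 05:00:00 GMT';
}]);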
I'm writing e2e tests for a single page application with nightwatch.js.
I have some API requests, such as authentication. So I want to use sinon.js's fakeServer to mock the response data. Here's my code.
import sinon from 'sinon';
const WAIT_TIME = 5000;
const host = 'http://localhost:3000/#/';
const uri = new RegExp(escape('/users/login'));
module.exports = {
    'Login Test': function(browser) {
        let server;
        browser
            .windowSize('basicTest', 1440, 710)
            .url(host + 'account/login')
            .waitForElementVisible('body', WAIT_TIME)
            .setValue('input[type=email]', 'sample@sample.com')
            .setValue('input[type=password]', 'password')
            .execute(function() {
                server = sinon.fakeServer.create();
                server.respondWith('POST', uri, [
                    200, { 'Content-Type': 'application/json' }, JSON.stringify(someResponseData),
                ]);
            })
            .submitForm('form')
            .execute(function() {
                server.respond();
            })
            .waitForElementNotPresent('input[type=submit]', WAIT_TIME) // the page should be redirected to another page
            .execute(function() {
                server.restore();
                server = null;
            })
            .end();
    },
};
I can't mock the response, and I get the error below. (When the API server is running, I get no error, but the response isn't the mocked one.)
Error: Origin is not allowed by Access-Control-Allow-Origin
I want to know, first of all, whether this is the correct way to use sinon.js's fakeServer, and whether it is possible in e2e tests (with nightwatch.js).
Please help.
To answer your question I first need to explain the nature of the error you receive.
That's a CORS (cross origin resource sharing) error.
Basically, some service behind your single page app is configured not to allow requests that don't originate from your app. The service returning that error (I can't tell which one it is from the information you've posted) detects a request coming from somewhere other than your app and rejects it.
You could disable the CORS security of the service (I highly recommend you don't do this) or you could attempt to change the origin header of the request. This is tricky, as most modern browsers specifically prevent this change in order to protect users. Since you are using Nightwatch, your environment will, in general, be that of a browser.
Based on this error, you must have set up the server incorrectly, because it seems as if your request is still hitting your actual API and not the mocked server.
Probably because when you submit the form, the browser will still submit it to where it is supposed to go (and not your mock server) unless it is told otherwise. Looking at your code, you are setting up a mock server, but from this file alone it is not clear how the browser is supposed to know to send requests to that mocked server.
See nock for an alternative solution; I'm about to start using it :)
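If you want to keep trying sinon's fakeServer, two things are worth knowing: fakeServer works by stubbing XMLHttpRequest inside the page, and functions passed to execute() run in the browser, so they cannot see variables declared in your Nightwatch file (like your local server). A very rough sketch of the shape that could take, assuming sinon is already loaded in the page under test and keeping the fake server on window (window.__fakeServer and the response payload are illustrative names):

browser
    .execute(function() {
        // Runs in the page, so it cannot see variables from the test file.
        // Keep the fake server on window so a later execute() call can reach it.
        window.__fakeServer = sinon.fakeServer.create();
        window.__fakeServer.respondWith('POST', /\/users\/login/, [
            200,
            { 'Content-Type': 'application/json' },
            JSON.stringify({ token: 'fake' }), // illustrative payload
        ]);
    })
    .submitForm('form')
    .execute(function() {
        window.__fakeServer.respond();
    });

Note that fakeServer only stubs XMLHttpRequest, so this can only work if the login form is submitted via XHR from the single page app; a plain (non-XHR) form POST will still go to the real backend.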