Service worker: intercept responses - javascript

I know that I can use a service worker to intercept outgoing fetch operations, and even generate custom responses to them, e.g.
self.addEventListener('fetch', (event) => {
  if (/\.jpg$/.test(event.request.url)) {
    event.respondWith(fetch('/images/anotherimage.jpg'));
  }
});
However, what if I want to intercept the response to a given fetch request before that response is passed back to the page? Is this doable?
To be clear, I do not in any way want to modify the request itself - I just want to access the response to it.

I'm assuming that you mean you'd like to add logic to a service worker so that it requests a resource and then modifies the response from the network, resulting in a response passed back to the page that is a mix of what you'd get from the network and what the service worker added.
If so, the answer is yes, you can do that for same-origin responses, and for cross-origin responses when CORS is used. (You can't modify opaque responses, which is what you get when making a cross-origin request without using CORS.)
Here's an example of a fetch handler that responds to requests for a hypothetical /api endpoint that returns JSON by making the request to /api, and then adding an additional field to the API response before returning it to the page.
async function modifyAPIResponse(request) {
  const apiResponse = await fetch(request);
  const json = await apiResponse.json();
  json.extraField = 'set by fetch handler';
  return new Response(JSON.stringify(json), {
    // Ensure that the Content-Type: and other headers are set.
    headers: apiResponse.headers,
  });
}

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (url.pathname === '/api') {
    event.respondWith(modifyAPIResponse(event.request));
  }
});

Related

Abort a JS fetch() after request is sent and before the wait/download time

My problem:
I am currently trying to send several fetch() requests to a url without having to wait for the server's response.
The exact use case is a small Chrome extension that increments a counter on an external server (which I do not control) each time a GET request is made.
But, for the sake of UX, I would want to do this as quickly as possible.
What I tried:
My first try was to cancel the fetch after 100 milliseconds.
This works OK with a good connection and a quick computer, but without both, the fetch() is aborted too soon: the request is never sent.
Here is the code I have so far:
let controller = new AbortController();
let timer = setTimeout(() => controller.abort(), 100);
await fetch(url, {signal: controller.signal}).catch(err => {
  if (err.name === 'AbortError') {
    // The abort was triggered by the timer; swallow it.
  }
});
clearTimeout(timer);
Question:
Is there a way to know when a fetch has passed the "send request" part of the fetch so I can abort it right there?
I am trying to send several requests to a url without having to wait for the server's response.
That's what sendBeacon is meant to do; there's no need to involve fetch.
If that doesn't meet your requirements, you can still use fetch and simply not wait for the response - nothing forces you to use await.
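For example, a minimal fire-and-forget sketch (assuming url holds the target; the catch handler just prevents an unhandled promise rejection):
// Kick off the request and deliberately ignore the result.
fetch(url).catch(() => {
  // Ignore network errors; we never wanted the response anyway.
});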
Is there a way to know when a fetch has passed the "send request" part of the fetch so I can abort it right there?
No. And aborting the request is not the right solution anyway. Note that even if you close the connection only after the HTTP request has been sent, the server might still notice the closed connection and decline to process the request, since there is no longer a writable destination for the response.
If you want to send a request but don't need the body of the reply, you should use the HEAD method. This instructs the server to reply with just the headers and no body, so you can still check the status, content type, or anything else carried in the headers.
Send 10 requests as fast as possible, then await the responses:
const pending = [];
for (let i = 0; i < 10; i++) {
  pending.push(fetch(url, {method: "HEAD"}));
}
for (const req of pending) {
  const res = await req;
  // Check for a 200 status code or any other header-only data.
}
Or, if you really don't care about the response at all, there is no need to await the promise completion:
for (let i = 0; i < 10; i++) {
  fetch(url, {method: "HEAD"});
}
Aborting is for when you want to terminate the call even if it hasn't been sent yet.
As mentioned by @Bergi, if you just want to ping the url then you can use
navigator.sendBeacon(url)
but this sends a POST request, giving you much less control over the request.

Cloudflare Worker TypeError: One-time-use body

I'm trying to use a Cloudflare Worker to proxy a POST request to another server.
It is throwing a JS exception; by wrapping the code in a try/catch block I've established that the error is:
TypeError: A request with a one-time-use body (it was initialized from a stream, not a buffer) encountered a redirect requiring the body to be retransmitted. To avoid this error in the future, construct this request from a buffer-like body initializer.
I would have thought this could be solved by simply copying the Response so that it's unused, like so:
return new Response(response.body, { headers: response.headers })
That's not working. What am I missing about streaming vs buffering here?
addEventListener('fetch', event => {
  var url = new URL(event.request.url);
  if (url.pathname.startsWith('/blog') || url.pathname === '/blog') {
    if (event.request.method === 'POST') {
      event.respondWith(handleBlogPost(event, url));
    } else {
      handleBlog(event, url);
    }
  } else {
    event.respondWith(fetch(event.request));
  }
})
async function handleBlog(event, url) {
  var newBlog = "https://foo.com";
  var originUrl = url.toString().replace('https://www.bar.com/blog', newBlog);
  event.respondWith(fetch(originUrl));
}
async function handleBlogPost(event, url) {
  try {
    var newBlog = "https://foo.com";
    var srcUrl = "https://www.bar.com/blog";
    const init = {
      method: 'POST',
      headers: event.request.headers,
      body: event.request.body
    };
    var originUrl = url.toString().replace(srcUrl, newBlog);
    const response = await fetch(originUrl, init)
    return new Response(response.body, { headers: response.headers })
  } catch (err) {
    // Display the error stack.
    return new Response(err.stack || err)
  }
}
A few issues here.
First, the error message is about the request body, not the response body.
By default, Request and Response objects received from the network have streaming bodies -- request.body and response.body both have type ReadableStream. When you forward them on, the body streams through -- chunks are received from the sender and forwarded to the eventual recipient without keeping a copy locally. Because no copies are kept, the stream can only be sent once.
The problem in your case, though, is that after streaming the request body to the origin server, the origin responded with a 301, 302, 307, or 308 redirect. These redirects require that the client re-transmit the exact same request to the new URL (unlike a 303 redirect, which directs the client to send a GET request to the new URL). But, Cloudflare Workers didn't keep a copy of the request body, so it can't send it again!
You'll notice this problem doesn't happen when you do fetch(event.request), even if the request is a POST. The reason is that event.request's redirect property is set to "manual", meaning that fetch() will not attempt to follow redirects automatically. Instead, fetch() in this case returns the 3xx redirect response itself and lets the application deal with it. If you return that response on to the client browser, the browser will take care of actually following the redirect.
However, in your worker, it appears fetch() is trying to follow the redirect automatically, and producing an error. The reason is that you didn't set the redirect property when you constructed your Request object:
const init = {
  method: 'POST',
  headers: event.request.headers,
  body: event.request.body
};
// ...
await fetch(originUrl, init)
Since init.redirect wasn't set, fetch() uses the default behavior, redirect = "follow", i.e. fetch() tries to follow redirects automatically. If you want fetch() to use manual redirect behavior, you could add redirect: "manual" to init. However, it looks like what you're really trying to do here is copy the whole request. In that case, you should just pass event.request in place of the init structure:
// Copy all properties from event.request *except* URL.
await fetch(originUrl, event.request);
This works because a Request has all of the fields that fetch()'s second parameter wants.
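For completeness, a sketch of the redirect: "manual" variant mentioned above (same hypothetical init as before):
const init = {
  method: 'POST',
  headers: event.request.headers,
  body: event.request.body,
  // Hand any 3xx response back to the worker instead of following it.
  redirect: 'manual'
};
const response = await fetch(originUrl, init);
// response may now be a 301/302/307/308; returning it to the client
// lets the browser perform the redirect itself.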
What if you want automatic redirects?
If you really do want fetch() to follow the redirect automatically, then you need to make sure that the request body is buffered rather than streamed, so that it can be sent twice. To do this, you will need to read the whole body into a string or ArrayBuffer, then use that, like:
const init = {
  method: 'POST',
  headers: event.request.headers,
  // Buffer whole body so that it can be redirected later.
  body: await event.request.arrayBuffer()
};
// ...
await fetch(originUrl, init)
A note on responses
I would have thought this could be solved by simply copying the Response so that it's unused, like so:
return new Response(response.body, { headers: response.headers })
As described above, the error you're seeing is not related to this code, but I wanted to comment on two issues here anyway to help out.
First, this line of code does not copy all properties of the response. For example, you're missing status and statusText. There are also some more-obscure properties that show up in certain situations (e.g. webSocket, a Cloudflare-specific extension to the spec).
Rather than try to list every property, I again recommend simply passing the old Response object itself as the options structure:
new Response(response.body, response)
The second issue is with your comment about copying. This code copies the Response's metadata, but does not copy the body. That is because response.body is a ReadableStream. This code initializes the new Response object to contain a reference to the same ReadableStream. Once anything reads from that stream, the stream is consumed for both Response objects.
Usually, this is fine, because usually, you only need one copy of the response. Typically you are just going to send it to the client. However, there are a few unusual cases where you might want to send the response to two different places. One example is when using the Cache API to cache a copy of the response. You could accomplish this by reading the whole Response into memory, like we did with requests above. However, for responses of non-trivial size, that could waste memory and add latency (you would have to wait for the entire response before any of it gets sent to the client).
Instead, what you really want to do in these unusual cases is "tee" the stream so that each chunk that comes in from the network is actually written to two different outputs (like the Unix tee command, which comes from the idea of a T junction in a pipe).
// ONLY use this when there are TWO destinations for the
// response body! tee() returns an array of two streams.
const [body1, body2] = response.body.tee();
const response1 = new Response(body1, response);
const response2 = new Response(body2, response);
Or, as a shortcut (when you don't need to modify any headers), you can write:
// ONLY use this when there are TWO destinations for the
// response body!
response.clone()
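Putting this together with the Cache API scenario mentioned above, a hedged sketch (the cache name 'my-cache' is made up; clone() does the teeing for us):
async function fetchAndCache(event) {
  const response = await fetch(event.request);
  // clone() tees the body: one copy for the cache, one for the client.
  const responseForCache = response.clone();
  const cache = await caches.open('my-cache');
  // Don't block the client's response on the cache write.
  event.waitUntil(cache.put(event.request, responseForCache));
  return response;
}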
Confusingly, response.clone() does something completely different from new Response(response.body, response). response.clone() tees the response body, but keeps the Headers immutable (if they were immutable on the original). new Response(response.body, response) shares a reference to the same body stream, but clones the headers and makes them mutable. I personally find this pretty confusing, but it's what the Fetch API standard specifies.
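A minimal illustration of that difference (variable names are made up):
// clone(): tees the body; headers stay immutable if they were immutable.
const teed = response.clone();
// new Response(...): shares the SAME body stream, but copies the headers
// and makes them mutable.
const shared = new Response(response.body, response);
shared.headers.set('X-Debug', '1'); // allowed, because the headers were copied
// Reading shared's body also consumes response's body - they are one stream.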

Koa cookie returning `undefined`

After a POST request is sent from the browser to the /generate url in the server, I want to create a string and save it as a cookie. When a GET request is later sent from the browser to the /retrieve url in the server, I want to send that string as a response to the client.
Here is what I tried:
routes.js
const Router = require('koa-router')
const router = new Router()

router.post('/generate', function * () {
  this.cookies.set('generatedString', 'example')
  this.response.body = 'String saved as cookie!'
})

router.get('/retrieve', function * () {
  const cookie = this.cookies.get('generatedString')
  console.log(cookie) // undefined!
  this.response.body = cookie
})
Why does doing this.cookies.get('generatedString') return undefined even though the POST request handler has already run and should have set that cookie? Any help would be appreciated!
EDIT: In case it is of importance, I thought it would be worth mentioning that I am using the fetch API to make the POST and GET requests.
In case it is of importance, I thought it would be worth mentioning that I am using the fetch API to make the POST and GET requests.
The fetch API mentions that "By default, fetch won't send any cookies to the server, resulting in unauthenticated requests if the site relies on maintaining a user session."
If you want fetch to send cookies, you will need to add an option to the request you send out called credentials and set it to a value of include.
Example POST request:
const request = {
  method: 'POST',
  credentials: 'include',
  headers: ...,
  body: ...
}

fetch('/generate', request).then(...)
Example GET request:
fetch('/retrieve', { credentials: 'include' }).then(...)

JavaScript: parsing JSONP callback response from GitHub API

I'm creating an Express route that calls the GitHub API with a ?callback=foo pattern added to the endpoint so that the response will include the headers; I need to parse out the Link: header because it contains the URL I'll have to call to get the next page of the response.
The problem is that the response has the expected pattern, but when I try to create a function to tease out the meta and data portions of the response, they turn up undefined.
My code:
app.get('/populate', function(req, res, next) {
  console.log('/populate route hit');
  var token = "<something>";
  var options = {
    url: 'https://api.github.com/users?callback=resp',
    headers: {
      'User-Agent': 'Our-App',
      'Authorization': 'token ' + token
    }
  };
  api(options) // 'api' is the request-promise module; it makes HTTP requests
    .then(function(response) {
      console.log(response); // Note 1
      function resp(res) {
        var meta = res.meta;
        var data = res.data;
        console.log('meta ', meta); // Note 2
        console.log('data ', data);
      }
      resp(response);
    });
});
Note 1: The response looks like:
/**/resp({"meta":{"X-RateLimit-Limit":"5000","X-RateLimit-Remaining":"4993",
"X-RateLimit-Reset":"1435297775","X-OAuth-Scopes":"public_repo, user:email",
"X-Accepted-OAuth-Scopes":"repo","Cache-Control":"private, max-age=60, s-maxage=60",
"Vary":"Accept, Authorization, Cookie, X-GitHub-OTP",
"ETag":"\"0cbbd180648a54f839a237b0302025db\"",
"X-GitHub-Media-Type":"github.v3; format=json",
"Link":[["https://api.github.com/users?callback=resp&since=46",
{"rel":"next"}],["https://api.github.com/users{?since}",
{"rel":"first"}]],"status":200},"data":[{"login":"mojombo","id":1,
...etc etc...
}]})
The response looks like it has been JSON.stringified, but when I JSON.parse(response) it throws an error. I don't know how to access the deeply-embedded Link: headers, or even the data, which looks like JSON too.
Note 2: The res.meta and res.data log as undefined.
The response isn't JSON, it's JSONP. JSONP is a cross-domain mechanism for retrieving data. You don't use XHR (e.g., app.get) to request JSONP, you use a script tag. (Because XHR is limited by the Same Origin Policy; script tags aren't.)
If your call retrieving that data via XHR works, it means cross-domain XHR calls are allowed in your situation (the server supports Cross-Origin Resource Sharing with your page's origin, and the browser supports CORS). You can get JSON instead of JSONP by removing the ?callback=resp in the URL.
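For example, a sketch of the same route without the callback parameter (reusing the token variable and the api helper from the question; json and resolveWithFullResponse are standard request-promise options):
var options = {
  url: 'https://api.github.com/users', // no ?callback= -> plain JSON
  headers: {
    'User-Agent': 'Our-App',
    'Authorization': 'token ' + token
  },
  json: true,                   // parse the response body as JSON
  resolveWithFullResponse: true // resolve with the full response, not just the body
};

api(options).then(function(response) {
  console.log(response.headers.link); // the real Link: header, for pagination
  console.log(response.body);         // the array of users
});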

How can we check if the response to a request came from a Service Worker

In the Google Chrome console, next to the status code of an HTTP request, we see the note (from ServiceWorker). Can the request somehow be aware that the response came from a ServiceWorker? By comparing the date from the response headers, maybe?
By design, a response returned via a FetchEvent#respondWith() is meant to be indistinguishable from a response that had no service worker involvement. This applies regardless of whether the response we're talking about is obtained via XMLHttpRequest, window.fetch(), or setting the src= attribute on some element.
If it's important to you to distinguish which responses originated via service worker involvement, the cleanest way I could think of would be to explicitly add an HTTP header to the Response object that is fed into FetchEvent#respondWith(). You can then check for that header from the controlled page.
However, depending on how your service worker is obtaining its Response, that might be kind of tricky/hacky, and I can't say that I recommend it unless you have a strong use case. Here's what an (again, not recommending) approach might look like:
event.respondWith(
  fetch(event.request).then(function(response) {
    if (response.type === 'opaque') {
      return response;
    }

    var headersCopy = new Headers(response.headers);
    headersCopy.set('X-Service-Worker', 'true');

    return response.arrayBuffer().then(function(buffer) {
      return new Response(buffer, {
        status: response.status,
        statusText: response.statusText,
        headers: headersCopy
      });
    });
  })
);
If you get back an opaque Response, you can't do much with it other than return it directly to the page. Otherwise, this code copies a bunch of things over into a new Response that has an X-Service-Worker header set to true. (This is a roundabout way of working around the fact that you can't directly modify the headers of the Response returned by fetch().)
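On the page side, checking for that (again, hypothetical) header might look like:
fetch('/some/resource').then((response) => {
  if (response.headers.get('X-Service-Worker') === 'true') {
    console.log('This response passed through the service worker.');
  }
});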
