React/Redux + superagent, first call gets terminated - javascript

I am writing a React/Redux app that makes service calls from its middleware using superagent. I have run into a very strange behavior where the first call to my search API always gets terminated. I have tried waiting 10-30 seconds before making the first call and logging every step along the way, but I cannot pinpoint why this is happening.
My action creator looks like this:
export function getSearchResults(searchQuery) {
  return {
    query: searchQuery,
    type: actions.GO_TO_SEARCH_RESULTS
  };
}
It hits the middleware logic here:
var defaultURL = '/myServer/mySearch';

callPendingAction();

superagent.get(defaultURL)
  .query({query: action.query})
  .end(requestDone);

// sets state pending so we can use loading spinner
function callPendingAction() {
  action.middlewares.searchIRC.readyState = READY_STATES.PENDING;
  next(action);
}
// return error or response accordingly
function requestDone(err, response) {
  console.log("call error", err);
  const search = action.search;
  if (err) {
    search.readyState = READY_STATES.FAILURE;
    if (response) {
      search.error = response.err;
    } else if (err.message) {
      search.error = err.message;
    } else {
      search.error = err;
    }
  } else {
    search.readyState = READY_STATES.SUCCESS;
    search.results = fromJS(response.body);
  }
  return next(action);
}
The query is correct even when the call is terminated. I get this error message back:
Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
at Request.crossDomainError (http://localhost:8000/bundle.js:28339:14)
at XMLHttpRequest.xhr.onreadystatechange (http://localhost:8000/bundle.js:28409:20)
It appears the page refreshes each time too.
I cannot find any clues as to why this happens; no matter what, the first call fails, but everything is fine after that first terminated call. I would appreciate any input, thanks!
UPDATE: It seems this is related to Chrome; I am on Version 47.0.2526.80 (64-bit). This app runs in an iframe within another app, and I believe that is what trips up Chrome, because there is no issue in Firefox. What is strange is that only the first call gives the CORS error; after that it seems to correct itself. If anyone has input or a workaround, I would greatly appreciate it. Thanks for reading.

Had the same problem and just figured it out thanks to the answer provided by @KietIG on the question "ReactJS with React Router - strange routing behaviour on Chrome".
The answer had nothing to do with CORS. The request was cancelled because Chrome had navigated away from the page in the middle of the request. This was happening because event.preventDefault() had not been called in one of the form submit handlers. It seems Chrome handles this differently than other browsers.
See the answer link above for more detail.
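For reference, here is a minimal sketch of the kind of submit handler that was missing the call (the handler and form field names are made up):

// Hypothetical search form submit handler. Without event.preventDefault(),
// Chrome performs the native form submission and navigates away, cancelling
// any request the handler has just started.
function handleSearchSubmit(event) {
  event.preventDefault(); // stop the native submit / page reload
  store.dispatch(getSearchResults(event.target.elements.query.value));
}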

In my case this was happening when I tried to set a custom HTTP request header (like X-Test) on the client side, and either AWS Lambda rejected it during the OPTIONS preflight request or something else did.
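For illustration, a request like the sketch below is "non-simple" because of the custom header, so the browser sends an OPTIONS preflight first; if the server does not allow the header via Access-Control-Allow-Headers, the actual request is never sent (the URL is a placeholder):

// Setting a non-standard header such as X-Test forces a CORS preflight.
superagent.get('https://api.example.com/search') // hypothetical endpoint
  .set('X-Test', 'debug')
  .query({ query: 'foo' })
  .end(function (err, res) {
    // If the preflight is rejected, err reports the terminated request.
    if (err) console.error(err);
  });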

I don't know about the side effects, but you're getting CORS errors. Add the .withCredentials() method to your request.
From the superagent docs:
The .withCredentials() method enables the ability to send cookies from
the origin, however only when "Access-Control-Allow-Origin" is not a
wildcard ("*"), and "Access-Control-Allow-Credentials" is "true".
This should fix it:
superagent.get(defaultURL)
  .query({query: action.query})
  .withCredentials()
  .end(requestDone);
More information on Cross Origin Resource Sharing can be found here.

Related

What can cause fetch() to return success, but subsequently accessing the body to throw TypeError or AbortError?

I use fetch to get some resources from the server. I can see from logs that, from time to time, obtaining the JSON body fails with "TypeError: Failed to fetch", which is very interesting because this type of error should only happen when the request itself fails.
The simplified code I use:
const response = await fetch('https://something.com/api')
try {
  // below throws TypeError: Failed to fetch in Chrome/Safari
  // and AbortError in Firefox
  await response.json();
} catch(error) {
  console.log('error happened', error);
}
I cannot really find a case when this might happen. I tested a number of plausible scenarios, and all of them failed on the first line of code, i.e. fetch('https://something.com/api'). I have no idea when this might happen. I should also mention that it happens in modern browsers like Chrome 99, so it is definitely not some Internet Explorer quirk.
I found this useful example, which shows that requests are cancelled when you unload the document. Obviously, that cancellation happens on the fetch line, but I decided to stop logging these errors when the document is unloaded/hidden. Even with those cases excluded, I can see it happening on visible documents too.
https://github.com/shuding/request-cancellation-test
Cases tested:
Network error - user disconnected from the internet
CORS - missing CORS headers
Obviously, this does not prove anything, but people in the comments think I am doing something wrong in the implementation and do not trust me when I say it really does happen while obtaining the JSON body. This is the information I am able to log when I catch the error; most properties are from the Response object.
This is the log captured from a visitor using Chrome 100. Firefox throws AbortError instead of TypeError, but it also happens when converting to JSON.
This looks like a network error that happened after the response HTTP headers have already been received. In such a situation, the fetch method returns success, but subsequent attempts to access the response body may fail.
I managed to trigger this kind of error with this simple server:
#!/usr/bin/env python3
import http.server
import time

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/':
            self.wfile.write(b'HTTP/1.0 200 OK\r\n')
            self.wfile.write(b'Content-type: text/html\r\n')
            self.wfile.write(b'\r\n')
            self.wfile.write(b"""\
<script type="module">
const f = await fetch("/j");
try {
  await f.json();
} catch (e) {
  alert(e);
}
</script>
""")
            return
        self.wfile.write(b'HTTP/1.0 200 OK\r\n')
        self.wfile.write(b'Content-encoding: gzip\r\n')  # bogus
        self.wfile.write(b'Content-type: application/json\r\n')
        self.wfile.write(b'\r\n')
        self.wfile.write(b'[]')

server = http.server.HTTPServer(('127.0.0.1', 31337), Handler)
server.serve_forever()
It introduces a deliberate framing error when serving the JSON response; the headers indicate that gzip compression is employed, but no compression is actually used. When I open http://127.0.0.1:31337/ in Chromium 100.0.4896.127, it displays the following alert:
TypeError: Failed to fetch
Firefox ESR 91.8.0 displays a marginally more helpful message:
TypeError: Decoding failed.
The particular framing error demonstrated above is rather contrived, and I doubt it is exactly the kind that the asker experienced. But the fact that it appeared in the middle of the response body is probably at the heart of the described problem.
The specific pair of error types from the question can be triggered by modifying the server thus:
self.wfile.write(b'HTTP/1.1 200 OK\r\n')
self.wfile.write(b'Content-type: application/json\r\n')
# sic: larger than the actual body length
self.wfile.write(b'Content-length: 2\r\n')
self.wfile.write(b'\r\n')
self.wfile.write(b'[')
This displays the same alert:
TypeError: Failed to fetch
in Chromium (same version), and
AbortError: The operation was aborted.
in Firefox (also same).
As such, a likely cause would be a flaky connection dropped in the middle of receiving the body. The browsers’ errors are not particularly detailed here (and sometimes are downright misleading), so we are resigned to speculate, but this seems like the best guess.
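If the goal is just to report these failures more precisely, one defensive pattern (a sketch, not tied to the asker's API) is to wrap the request and the body read separately so the logs show which phase broke:

// Sketch: distinguish a failed request from a failure while reading the body.
async function fetchJson(url) {
  let response;
  try {
    response = await fetch(url);
  } catch (error) {
    throw new Error(`request failed: ${error.message}`);
  }
  if (!response.ok) {
    throw new Error(`unexpected status ${response.status}`);
  }
  try {
    return await response.json();
  } catch (error) {
    // connection dropped or framing/decoding error while streaming the body
    throw new Error(`body read failed: ${error.message}`);
  }
}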
Your error comes from this line:
const response = await fetch('https://something.com/api')
I removed await response.json() to test, and the error still exists.
Please look at the Console tab in DevTools and make sure you use this setting.
These are two example errors showing Failed to fetch:
(async () => {
  try {
    const response = await fetch('http://my-api.com/example')
    await response.json()
  } catch (e) {
    console.log('e')
    console.log(e)
  }
})()

Is it a bad practice to not check if fetch request failed?

In my app (I'm using Next.js, but it's more of a general question) I have a button that updates the number of likes when clicked (+1). Here is the relevant part of the code:
const handleLikeClick = () => {
  setNumberLikes(numberLikes + 1)
  fetch('/api/updateLikes?origin=so-filter', {
    method: 'POST'
  })
}
And my API:
import { connectToDatabase } from '../../utils/mongodb'

export default async (req, res) => {
  try {
    const { db } = await connectToDatabase()
    const { origin } = req.query
    if (req.method === 'POST') {
      await db.collection('likes').findOneAndUpdate({ page: origin }, { $inc: { likes: 1 } })
      res.status(200)
    }
  } catch (err) {
    res.status(500)
  }
}
I don't really care much whether this POST request fails or not, therefore I'm not checking for it and there is no additional logic if it actually fails. Is it bad practice to do so? Should I actually res.status(200).json({success:'updated'}) and .then my fetch request? Thank you.
Depends on what you want to achieve at the user level.
Although the result doesn't influence the flow of your program and doesn't break it, most of the time it is important to let the caller/user know what happened with the request.
Sometimes (like in your case) it can have an impact on the user experience. In your example, in case of failure, I think the user should get an error message or some sort of visual indication that the like wasn't registered, so they could try again or at least know that there was a problem.
(I'm pretty sure Facebook, YouTube, and Stack Overflow just gray out the upvote or like if something goes wrong. On Stack Overflow you even get a message with the specific error.)
Edit
Code-wise it will work just fine, since you take care to return a status code in any case (success or failure).
From the docs:
The Promise returned from fetch() won't reject on HTTP error status even if the response is an HTTP 404 or 500. Instead, it will resolve normally (with ok status set to false), and it will only reject on network failure or if anything prevented the request from completing.
(Notice that you will want to handle network failures though).
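As a rough sketch, the click handler could surface failures like this (setLikeError is a hypothetical state setter; any toast or inline message works):

const handleLikeClick = () => {
  setNumberLikes(numberLikes + 1) // optimistic update
  fetch('/api/updateLikes?origin=so-filter', { method: 'POST' })
    .then((res) => {
      // fetch resolves even on HTTP 4xx/5xx, so check res.ok explicitly
      if (!res.ok) throw new Error(`HTTP ${res.status}`)
    })
    .catch(() => {
      setNumberLikes(numberLikes) // roll back the optimistic update
      setLikeError('Could not save your like, please try again.') // hypothetical setter
    })
}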

Can't login to Skype SDK with non-existing domain name

I have a problem with SignInManager. When I log in with tan.tastan@abcd.com, where abcd.com is a reachable domain, everything works. But if I enter a wrong domain, for example tan.tastan@abcdE.com, I do not get a response and my application just waits. Nothing happens and no error code is returned.
Here is my sample code; settings includes the username, password, and domain information.
function doLogin(settings) {
  return new Promise((resolve, reject) => {
    window.skypeWebSDKApi.signInManager.signIn(settings).then((response) => {
      resolve(response);
    }, (error) => {
      reject(error);
    }).catch(reject);
  });
}
What is the problem?
It's hard to know exactly what is going on without seeing the contents of settings, but I suspect your issue here is a promise not getting resolved. Try simplifying your call:
function doLogin(settings) {
  var app = new api.application;
  app.signInManager.signIn(settings).then(function () {
    console.log('success');
  }, function (error) {
    console.log(error);
  });
}
I've been using the SDK for quite a while now and this is my experience:
When trying to log in using a non-existing domain, the Web SDK never returns an error. I've tried different SDK versions and both General Availability and Public Preview API keys.
I ended up starting my own sign-in timer when trying to sign in.
When no response is received within 20 seconds, I send a signOut request (which cancels the sign-in) and show a message to the user (Please make sure you've entered the right username, etc.).
It's really lame to have a workaround like this, but unfortunately I haven't found a better way to deal with this issue yet, especially since Microsoft is presumably not going to fix this anymore...
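A rough sketch of that workaround, racing signIn() against a timer (the 20-second limit and the error message are just examples):

// Sketch: cancel the pending sign-in via signOut() if no response arrives in time.
function signInWithTimeout(app, settings, timeoutMs) {
  return new Promise(function (resolve, reject) {
    var timer = setTimeout(function () {
      app.signInManager.signOut(); // cancels the pending sign-in
      reject(new Error('Sign-in timed out. Please check your username and try again.'));
    }, timeoutMs || 20000);
    app.signInManager.signIn(settings).then(function (result) {
      clearTimeout(timer);
      resolve(result);
    }, function (error) {
      clearTimeout(timer);
      reject(error);
    });
  });
}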

How to handle ETIMEDOUT error?

How do I handle an ETIMEDOUT error on this call?
var remotePath = "myremoteurltocopy";
var localStream = fs.createWriteStream("myfil");
var out = request({ uri: remotePath });

out.on('response', function (resp) {
  if (resp.statusCode === 200) {
    out.pipe(localStream);
    localStream.on('close', function () {
      copyconcurenceacces--;
      console.log('aftercopy');
      callback(null, localFile);
    });
  } else {
    callback(new Error("No file found at given url."), null);
  }
});
Is there a way to wait longer, or to request the remote file again?
What exactly can cause this error? Timeout only?
This happens when the response to your request is not received within the given time (set by the request module's timeout option).
Basically, to catch that error you first need to register a handler for error so the unhandled error is no longer thrown: out.on('error', function (err) { /* handle errors here */ }). Some more explanation here.
In the handler you can check whether the error is ETIMEDOUT and apply your own logic: if (err.code === 'ETIMEDOUT') { /* apply logic */ }.
If you want to request the file again, I suggest using the node-retry or node-backoff modules. They make things much simpler.
If you want to wait longer, you can set the timeout option of request yourself. You can set it to 0 for no timeout.
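Putting those pieces together, a sketch based on the question's snippet (the timeout value and retry strategy are just examples):

var out = request({ uri: remotePath, timeout: 60000 }); // or 0 for no timeout
out.on('error', function (err) {
  // without this handler the timeout becomes an uncaught exception
  if (err.code === 'ETIMEDOUT') {
    // apply your own logic here, e.g. retry with node-retry / node-backoff
    callback(new Error('Timed out fetching ' + remotePath), null);
  } else {
    callback(err, null);
  }
});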
We can look at the error object for a code property that identifies the underlying system error and, in the case of ETIMEDOUT, where a network call fails, act accordingly.
const util = require('util');

if (err.code === 'ETIMEDOUT') {
  console.log('My dish error: ', util.inspect(err, { showHidden: true, depth: 2 }));
}
If you are using Node.js, this could be a possible solution:
const express = require("express");
const app = express();
const server = app.listen(8080);

// keep idle connections open slightly longer than a typical load balancer's
// 60-second idle timeout, so the proxy does not reuse a socket the server just closed
server.keepAliveTimeout = 61 * 1000;
https://medium.com/hk01-tech/running-eks-in-production-for-2-years-the-kubernetes-journey-at-hk01-68130e603d76
Try switching to a different network and test your code again. I got this error and the only solution was switching to another internet connection.
Edit: I now know of other people who have had this error, and the solution was contacting the ISP and asking them to check the DNS configuration because the HTTP requests were failing. So switching networks definitely could help with this.
That is why I will not delete this post; it could save people a few days of headaches (especially noobs like me).
Simply use a different network. Using a different network solved this issue for me within seconds.

OAuth.io connection failed

I'm new to web programming and I'm trying to use oauth.io in my web app. I finished the configuration for Facebook and Google following the instructions, and everything works fine when I test the configuration from their site. However, when I try to implement it in my web app, OAuth won't connect to the provider.
I loaded oauth.js in the HTML, created a button, and used onclick="pop" to invoke the function in JavaScript. Within the pop() function I've added:
OAuth.initialize('the-public-key-in-my-acc');
OAuth.popup('facebook', function(err, res) { if (err) { alert(something); } });
Then I click the button; a popup window just flashes up and closes immediately. I've also tried to use OAuth.redirect and redirect to http://oauth-io.github.io/oauth-js or my localhost, but then it says the connection failed.
Is there something missing/wrong in the implementation?
Thanks a lot for the help.
PS: I'm working on localhost and I've tried to set the redirect URL to localhost:portnr, but it still failed. :(
Here is the sample code I've written:
HTML:
<div><button onclick="oauthPop()">Try OAuth-io</button></div>
JS:
var oauthPop = function() {
  OAuth.initialize('my-pub-key-on-authio');
  OAuth.popup('facebook', function(err, res) { // or OAuth.callback
    // handle error with err
    if (err) {
      alert("error");
    } else {
      // get my name from fb
      res.get('/me').done(function(data) {
        alert(data.name);
      });
    }
  });
}
OAuth.io needs jQuery to be loaded in order to make HTTP requests with the result of OAuth.popup(). It uses jQuery.ajax() behind the scenes to give you a well-known interface with all the options you might need.
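So make sure jQuery is included on the page before oauth.js; a purely illustrative runtime check:

// Illustrative guard: oauth.js relies on jQuery.ajax() for calls like res.get(),
// so fail loudly if jQuery was not loaded before this script runs.
if (typeof window.jQuery === 'undefined') {
  console.error('Load jQuery before oauth.js, otherwise requests made with the OAuth.popup() result will fail.');
}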
