In my app (I'm using Next.js, but this is more of a general question) I have a button that updates the number of likes when clicked (+1). Here is the relevant part of the code:
const handleLikeClick = () => {
  setNumberLikes(numberLikes + 1)
  fetch('/api/updateLikes?origin=so-filter', {
    method: 'POST'
  })
}
And my API:
import { connectToDatabase } from '../../utils/mongodb'

export default async (req, res) => {
  try {
    const { db } = await connectToDatabase()
    const { origin } = req.query
    if (req.method === 'POST') {
      await db.collection('likes').findOneAndUpdate({ page: origin }, { $inc: { likes: 1 } })
      res.status(200)
    }
  } catch (err) {
    res.status(500)
  }
}
I don't really care much whether this POST request fails or not, so I'm not checking for it and there is no additional logic if it actually fails. Is it bad practice to do this? Should I actually res.status(200).json({ success: 'updated' }) and .then my fetch request? Thank you.
It depends on what you want to achieve at the user level.
Although the result doesn't influence the flow of your program and doesn't break it, most of the time it's important to let the fetcher/user know what happened with the request.
Sometimes (like in your case) it can have an impact on the user experience. In your example, in case of failure, I think the user should get an error message or some visual indication that the like wasn't registered, so they can try again or at least know that there was a problem.
(I'm pretty sure Facebook, YouTube, and Stack Overflow just gray out the upvote or like if something goes wrong. On Stack Overflow you even get a message with the specific error.)
Edit
Code-wise it will work just fine, since you take care to return a status code in either case (success or failure).
From the docs:
The Promise returned from fetch() won't reject on HTTP error status even if the response is an HTTP 404 or 500. Instead, it will resolve normally (with ok status set to false), and it will only reject on network failure or if anything prevented the request from completing.
(Notice that you will want to handle network failures though).
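If you do decide to surface failures, a minimal sketch of the client side could look like the following. It assumes the setNumberLikes setter from the question plus a hypothetical setLikeError setter for showing a message; since fetch resolves even on 4xx/5xx responses, res.ok has to be checked explicitly, and .catch covers network failures.

const handleLikeClick = () => {
  setNumberLikes((n) => n + 1)  // optimistic update
  fetch('/api/updateLikes?origin=so-filter', { method: 'POST' })
    .then((res) => {
      if (!res.ok) {
        // HTTP error (4xx/5xx): roll back the optimistic +1 and tell the user
        setNumberLikes((n) => n - 1)
        setLikeError('Your like could not be saved')  // hypothetical error state
      }
    })
    .catch(() => {
      // network failure (offline, request blocked, ...)
      setNumberLikes((n) => n - 1)
      setLikeError('Your like could not be saved')
    })
}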
Related
I have this function...
function waterCalibrator() {
  axios.post("http://localhost:3001/api/update-water", {
    waterValue: props.moisture
  }).then(function (response) {
    console.log("Water calibration worked")
    console.log(response);
  }).catch(function (error) {
    console.log(error);
  })
  props.airValueObject.setWaterFlag(true);
  props.airValueObject.setWaterValue(props.moisture);
}
Can anyone explain why then() is never triggered? I get no errors; it simply isn't firing. Everything works except for this...
}).then(function (response) {
  console.log("Water calibration worked")
  console.log(response);
}).catch(function (error) {
  console.log(error);
})
The server side looks like this...
app.post("/api/update-water", (req, res) => {
const waterValue = req.body.waterValue;
const sqlUpdateWater = "UPDATE user SET waterValue=? WHERE uid='j' ";
db.query(sqlUpdateWater, [waterValue], (err, result)=>{
console.log(`water is: ${waterValue}`)
})
})
You are getting a 204 status code, which means the request went through successfully but there is no content at all, so there is nothing for you to log.
The HTTP 204 No Content success status response code indicates that the request has succeeded, but... The common use case is to return 204 as a result of a PUT request, updating a resource, without changing the current content of the page displayed to the user. If the resource is created, 201 Created is returned instead. If the page should be changed to the newly updated page, the 200 should be used instead.
You also need, as suggested in the previous comments, to open the dev console and check the status of the promise itself. The promise seems to be successful.
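If you want the .then callback to receive something meaningful, have the handler respond explicitly once the query finishes. A minimal sketch under that assumption, reusing the db connection and route from the question:

app.post("/api/update-water", (req, res) => {
  const waterValue = req.body.waterValue;
  const sqlUpdateWater = "UPDATE user SET waterValue=? WHERE uid='j' ";
  db.query(sqlUpdateWater, [waterValue], (err, result) => {
    if (err) {
      // surfaces the failure to the client's .catch
      return res.status(500).json({ error: "Update failed" });
    }
    // send an explicit body so the client has something to log
    res.status(200).json({ waterValue });
  });
})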
Can we explicitly and specifically catch the Puppeteer (Chrome/Chromium) error net::ERR_ABORTED? Or is string matching the only option currently?
page.goto(oneClickAuthPage).catch(e => {
  if (e.message.includes('net::ERR_ABORTED')) {}
})

/* "net::ERR_ABORTED" occurs for sub-resources on a page if we navigate
 * away too quickly. I'm specifically awaiting a 302 response for successful
 * login and then immediately navigating to the auth-protected page.
 */
await page.waitForResponse(res => res.url() === href && res.status() === 302)
page.goto(originalRequestPage)
Ideally, this would be similar to a potential event we could catch with page.on('requestaborted')
I'd recommend putting your API calls and the like in a try/catch block.
If it fails, you catch the error, like you are currently doing, but it just looks a bit nicer:
try {
  await page.goto(PAGE)
} catch (error) {
  console.error(error) // or console.log(error)
  // do specific functionality based on error codes
  if (error.status === 300) {
    // I don't know what app you are building this in,
    // but if it's in React, here you could do
    // setState to display error messages and so forth
    setError('Action aborted')
    // if it's in an Express app, you can respond with your own data
    res.send({ error: 'Action aborted' })
  }
}
If there are no specific error codes in the error responses when Puppeteer aborts, it means that Puppeteer's API has not been coded to return data like that, unfortunately :')
It's not too uncommon to do error message checks like the one in your question. It's, unfortunately, the only way we can do it, since this is what we're given to work with :'P
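If you would rather react to an event, Puppeteer does emit requestfailed for aborted or failed requests, and the failure text can be inspected there. A sketch along those lines (still string matching on the error text, since there is no dedicated error class for it):

page.on('requestfailed', (request) => {
  // request.failure() returns something like { errorText: 'net::ERR_ABORTED' }
  const failure = request.failure()
  if (failure && failure.errorText === 'net::ERR_ABORTED') {
    console.log(`Aborted: ${request.url()}`)
  }
})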
As said in the title, nothing is happening when I subscribe to my observable. There is no error in the console or during the build. Here is my code:
My service
getBlueCollars(): Observable<BlueCollar[]> {
  return this.http.get(this.defaultAPIURL + 'bluecollar?limit=25').map(
    (res: Response) => {
      return res.json();
    });
}
My component
ngOnInit() {
  this.planifRequestService.getBlueCollars().subscribe(
    data => {
      this.blueCollars = data;
      console.log('Inner Blue Collars', this.blueCollars);
    },
    err => console.log(err)
  );
  console.log('Value BlueCollars : ', this.blueCollars);
}
So the second console.log fires with "Value BlueCollars: undefined", and the log in my subscribe is never shown. Also, I can't see the request being sent in the Network tab of Chrome.
So I tried to simplify everything with the following code :
let response: any;
this.http.get('myUrl').subscribe(data => response = data);
console.log('TestRep: ', response);
Same problem here: no error, and response is undefined. It seems the subscribe is not triggering the observable. (The URL is correct; it works in Swagger and with Postman.)
I'm on Angular 2.4.9
Edit
So I tried to copy/paste the code of my request into a brand new project, and everything works fine. The request is triggered and I get the JSON response correctly. So there must be something in the configuration of my project that is preventing the request from firing correctly.
OK, I just found what was going on. I am using a fake backend to test my login connections, which is supposed to intercept only specific URLs. However, for whatever reason it was intercepting all requests, which explains everything. Thanks for your help, everybody.
Try adding a catch block to your service code:
getBlueCollars(): Observable<BlueCollar[]> {
  return this.http.get(this.defaultAPIURL + 'bluecollar?limit=25')
    .map(
      (res: Response) => {
        return res.json();
      })
    .catch(err => Observable.throw(err))
}
Don't forget to
import 'rxjs/add/observable/throw';
import 'rxjs/add/operator/catch';
I imagine this will surface the error and give you an idea of where your code is going wrong.
The reason the console.log outside the subscribe call shows undefined is that the subscribe/HTTP call happens asynchronously, so, in effect, the order (in time!) in which the code runs is:
1) the observable is subscribed to (and then waits for a response)
2) the outer console log runs with blueCollars undefined
3) when the response (or error) comes back from the HTTP request (potentially after several seconds), only then will the inner assignment of this.blueCollars = data happen (and the inner console log), OR an error will get logged
Apart from that, the subscribe code looks fine...!
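In other words, anything that depends on the response has to live inside the subscribe callback. A minimal sketch of that reordering, assuming the same service and component fields as above:

ngOnInit() {
  this.planifRequestService.getBlueCollars().subscribe(
    data => {
      this.blueCollars = data;
      // only here is the data guaranteed to have arrived
      console.log('Value BlueCollars: ', this.blueCollars);
    },
    err => console.log(err)
  );
  // anything out here runs before the HTTP response comes back
}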
Question: Would you consider dangling callbacks bad Node.js style, or even dangerous? If so, under which premise?
Case: As described below, imagine you need to make a call to a DB in an Express server that updates some data, yet the client doesn't need to be informed about the result. In this case you could return a response immediately, without waiting for the asynchronous call to complete. I call this a dangling callback for lack of a better name.
Why is this interesting? Because tutorials and documentation mostly show the case of waiting, in the worst cases teaching callback hell. Recall your first experiences with, say, Express, MongoDB and Passport.
Example:
'use strict'

const assert = require('assert')
const express = require('express')
const app = express()

function longOperation (value, cb) {
  // might fail and: return cb(err) ...here
  setTimeout(() => {
    // after some time invokes the callback
    return cb(null, value)
  }, 4000)
}

app.get('/ping', function (req, res) {
  // do some declarations here
  //
  // do some request processing here
  // call a long op, such as a DB call, here.
  // however the client does not need to be
  // informed about the result of the operation
  longOperation(1, (err, val) => {
    assert(!err)
    assert(val === 1)
    console.log('...fired callback here though')
    return
  })
  console.log('sending response here...')
  return res.send('Hello!')
})

let server = app.listen(3000, function () {
  console.log('Starting test:')
})
Yeah, this is basically what's called a "fire and forget" service in other contexts, and could also be the first step in a good design implementing command-query responsibility segregation.
I don't consider it a "dangling callback"; the response in this case acknowledges that the request was received. Your best bet here would be to make sure your response includes some kind of hypermedia that lets clients get the status of their request later, and if it's an error they can fix, have the content at the new resource URL tell them how.
Think of it in the case of a user registration workflow where the user has to be approved by an admin, or has to confirm their email before getting access.
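A hypothetical sketch of that pattern, building on the /ping handler and longOperation from the question; the in-memory jobs store and the /status/:id route are purely illustrative, not part of the original code.

const jobs = {}  // illustrative in-memory job status store

app.get('/ping', function (req, res) {
  const jobId = Date.now().toString()
  jobs[jobId] = { state: 'pending' }

  longOperation(1, (err, val) => {
    // record the outcome where the status route can read it
    jobs[jobId] = err ? { state: 'failed' } : { state: 'done', value: val }
  })

  // acknowledge immediately and point the client at a status resource
  res.status(202).send({ status: '/status/' + jobId })
})

app.get('/status/:id', function (req, res) {
  res.send(jobs[req.params.id] || { state: 'unknown' })
})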
I am writing a react-redux app where I am making some service calls in my middlewares using superagent. I have found a very strange behavior where the first call to my search API always gets terminated. I have tried waiting 10-30 seconds before making the first call and logging every step along the way, and I cannot seem to pinpoint why this is happening.
My action creator looks like
export function getSearchResults(searchQuery) {
  return {
    query: searchQuery,
    type: actions.GO_TO_SEARCH_RESULTS
  }
}
It hits the middleware logic here:
var defaultURL = '/myServer/mySearch';

callPendingAction();

superagent.get(defaultURL)
  .query({query: action.query})
  .end(requestDone);

// sets state pending so we can use loading spinner
function callPendingAction() {
  action.middlewares.searchIRC.readyState = READY_STATES.PENDING;
  next(action);
}

// return error or response accordingly
function requestDone(err, response) {
  console.log("call error", err);
  const search = action.search;
  if (err) {
    search.readyState = READY_STATES.FAILURE;
    if (response) {
      search.error = response.err;
    } else if (err.message) {
      search.error = err.message;
    } else {
      search.error = err;
    }
  } else {
    search.readyState = READY_STATES.SUCCESS;
    search.results = fromJS(response.body);
  }
  return next(action);
}
The query is correct even when the call is terminated. I get this error message back:
Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
at Request.crossDomainError (http://localhost:8000/bundle.js:28339:14)
at XMLHttpRequest.xhr.onreadystatechange (http://localhost:8000/bundle.js:28409:20)
It appears the page refreshes each time too.
I cannot seem to find any clues as to why this happens; no matter what, the first call fails, but it is fine after that first terminated call. Would appreciate any input, thanks!
UPDATE: It seems this is related to Chrome; I am on Version 47.0.2526.80 (64-bit). This app is an iframe within another app, and I believe that is causing a problem with Chrome, because when I try this in Firefox there is no issue. What is strange is that only the first call gives the CORS issue; it seems to be fine after that. If anyone has input or a workaround, I would greatly appreciate it. Thanks for reading.
I had the same problem and just figured it out thanks to the answer provided by #KietIG on the topic "ReactJS with React Router - strange routing behaviour on Chrome".
The answer had nothing to do with CORS. The request was cancelled because Chrome had navigated away from the page in the middle of the request. This was happening because event.preventDefault() had not been called in one of the form submit handlers. It seems Chrome handles this differently than other browsers.
See the answer link above for more detail.
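For reference, a minimal sketch of the kind of submit handler fix that answer describes (the handleSubmit name and the form it is attached to are illustrative, not from the question):

function handleSubmit(event) {
  // without this, the browser performs a full form submit and navigates away,
  // which cancels the in-flight XHR ("Request has been terminated")
  event.preventDefault();

  // ...dispatch the search action / fire the superagent request here
}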
In my case this was happening when I tried to set an arbitrary HTTP request header (like X-Test) on the client side, and either AWS Lambda rejected it during the OPTIONS preflight request or something else did.
I don't know about the side effects, but you're getting CORS errors. Add the .withCredentials() method to your request.
From the superagent docs:
The .withCredentials() method enables the ability to send cookies from the origin, however only when "Access-Control-Allow-Origin" is not a wildcard ("*"), and "Access-Control-Allow-Credentials" is "true".
This should fix it:
superagent.get(defaultURL)
  .query({query: action.query})
  .withCredentials()
  .end(requestDone);
More information on Cross Origin Resource Sharing can be found here.
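Note that .withCredentials() also needs matching configuration on the server side. A hedged sketch of what that could look like with the Express cors middleware (the origin value is an assumption for a local dev setup, not taken from the question):

const express = require('express')
const cors = require('cors')

const app = express()
app.use(cors({
  origin: 'http://localhost:8000',  // must be a concrete origin, not the '*' wildcard
  credentials: true                 // sets Access-Control-Allow-Credentials: true
}))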