Sails.js: error when connecting to cloud MongoDB - javascript

I'm using cloud MongoDB (Atlas) in the Sails adapter, but when I run the app it throws an error. Can someone help me work out how to solve it?
default: {
  adapter: 'sails-mongo',
  url: 'mongodb://USERNAME:PASS@cluster0-shard-00-00.ikncs.mongodb.net:27017,cluster0-shard-00-01.ikncs.mongodb.net:27017,cluster0-shard-00-02.ikncs.mongodb.net:27017/test?ssl=true&replicaSet=atlas-qhs0wy-shard-0&authSource=admin&retryWrites=true&w=majority'
}
error: Error: Consistency violation: Unexpected error creating db connection manager:
MongoError: connection 3 to cluster0-shard-00-01.ikncs.mongodb.net:27017 closed
error: Could not tear down the ORM hook. Error details: Error: Consistency violation: Attempting to tear down a datastore (default) which is not currently registered with this adapter. This is usually due to a race condition in userland code (e.g. attempting to tear down the same ORM instance more than once), or it could be due to a bug in this adapter. (If you get stumped, reach out at http://sailsjs.com/support.)

This looks like you are unable to reach the cluster hosted on Atlas.
You need to whitelist your IP in Atlas: in the Security section, under Network Access, add your IP address to the IP access list (or the IP of the server you're connecting to the cluster from, if you are using a remote server).
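For reference, here is a minimal config/datastores.js sketch (assuming Sails 1.x and an environment variable named MONGODB_URL, which is a placeholder name) so the credentials don't live in the config file:

// config/datastores.js - minimal sketch, not the asker's exact file.
// MONGODB_URL is an assumed environment variable holding the full Atlas
// connection string (the same one shown in the question).
module.exports.datastores = {
  default: {
    adapter: 'sails-mongo',
    url: process.env.MONGODB_URL
  }
};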

Related

Debugging ENOTFOUND errors using AWS SDK v3

I've been using AWS JS SDK v3 and have noticed that my Lambdas are intermittently hitting errors connecting to AWS resources. Below I have an example for DynamoDB, but I have also had issues connecting to Secrets Manager. My Lambdas and resources are all contained within a VPC. I've noticed that these issues seem to be hit more often during a Lambda cold start, but I'm not entirely sure. If the request is re-sent (the user on the frontend refreshes the page), the error seems to go away. I was hoping that the built-in client retries would reduce the errors that my code sees, but it appears that no retries are attempted.
I am looking for debugging tips that might reveal the cause of these issues. So far I've been looking through CloudWatch logs, which do not appear to offer any good insights. I believe this is being caused by bad DNS resolution, but I am surprised by the frequency of these errors. Stopping short of moving my Lambdas to EC2 and using a cache, what are some ways I can fix this?
Reading this article: https://aws.amazon.com/premiumsupport/knowledge-center/vpc-find-cause-of-failed-dns-queries/ suggests increasing the DNS retry timer, but I'm unsure how I would do that either.
{
  "errorType": "Error",
  "errorMessage": "getaddrinfo ENOTFOUND dynamodb.us-east-1.amazonaws.com",
  "code": "ENOTFOUND",
  "errno": -3008,
  "syscall": "getaddrinfo",
  "hostname": "dynamodb.us-east-1.amazonaws.com",
  "$metadata": {
    "attempts": 1,
    "totalRetryDelay": 0
  },
  "stack": [
    "Error: getaddrinfo ENOTFOUND dynamodb.us-east-1.amazonaws.com",
    "    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:71:26)"
  ]
}
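Since the $metadata above shows attempts: 1, one thing worth checking is the client's maxAttempts setting. This is only a sketch, assuming @aws-sdk/client-dynamodb v3, and whether an ENOTFOUND is classified as retryable depends on the SDK's retry strategy, so it may not be enough on its own:

// Sketch: raise the retry budget on the v3 DynamoDB client so transient
// failures get more than one attempt. Table name and key are placeholders.
const { DynamoDBClient, GetItemCommand } = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({
  region: "us-east-1",
  maxAttempts: 5, // the default standard retry strategy allows 3
});

async function getItem() {
  return client.send(new GetItemCommand({
    TableName: "my-table",          // placeholder table name
    Key: { pk: { S: "example" } },  // placeholder key schema
  }));
}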
How do you connect to DynamoDB from within your VPC?
Are you using a NAT instance or gateway?
Are you using custom DNS resolution?
I would suggest adding a DynamoDB VPC endpoint (VPCe) to your VPC so you can reach DynamoDB over the AWS private network.
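If you go the VPC endpoint route, a gateway endpoint for DynamoDB is created once per VPC and associated with its route tables. A hedged sketch using the v3 EC2 client (the VPC and route table IDs below are placeholders):

// Sketch: create a DynamoDB gateway endpoint so traffic from the VPC reaches
// DynamoDB over the AWS private network instead of public DNS / NAT.
const { EC2Client, CreateVpcEndpointCommand } = require("@aws-sdk/client-ec2");

const ec2 = new EC2Client({ region: "us-east-1" });

async function createDynamoDbEndpoint() {
  return ec2.send(new CreateVpcEndpointCommand({
    VpcEndpointType: "Gateway",
    VpcId: "vpc-0123456789abcdef0",            // placeholder
    ServiceName: "com.amazonaws.us-east-1.dynamodb",
    RouteTableIds: ["rtb-0123456789abcdef0"],  // placeholder
  }));
}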

MongoDB and Heroku Deployment

Alright, I'm stuck here. This is the app: https://budget-vacation-offline.herokuapp.com/.
I have added my environment variables to Heroku as well. There is also an error in the console regarding the Mongo network. What am I missing? I'm happy to link the repository if needed. This is my error locally:
App running on port 7894!
(node:1177) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [localhost:27017] on first connect [Error: connect ECONNREFUSED 127.0.0.1:27017
The error means that your app needs to connect to an instance of MongoDB, which is not available at the address you set (localhost). So to fix the error you need to provide the correct URL to connect to.
To get a MongoDB instance you can host your own MongoDB server or use an online MongoDB service such as MongoDB's official offering, Atlas.
Previously there was an mLab MongoDB add-on on Heroku that you could use, but it seems it has been discontinued.
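A minimal sketch of what that change looks like in the app, assuming Mongoose and a Heroku config var named MONGODB_URI (both names are assumptions about the asker's setup):

// Sketch: read the Atlas connection string from the environment instead of
// hard-coding localhost. MONGODB_URI is an assumed config var set in the
// Heroku dashboard (Settings -> Config Vars); 'budget' is a placeholder
// database name for local development.
const mongoose = require('mongoose');

const uri = process.env.MONGODB_URI || 'mongodb://localhost:27017/budget';

mongoose.connect(uri)
  .then(() => console.log('Connected to MongoDB'))
  .catch((err) => console.error('MongoDB connection failed:', err));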

Unit testing with firebase auth emulator requires real service account?

We are using the firebase emulators to write integration tests. One of our functions modifies the claims on a user. As such, our test checks to see if the claim has been added. In our test, we call the following function:
admin.auth().getUser(user.userId)
Our intention is to then check the claims. Unfortunately, when this function is called, we get an error.
(node:96985) UnhandledPromiseRejectionWarning: Error: Credential
implementation provided to initializeApp() via the "credential"
property failed to fetch a valid Google OAuth2 access token with the
following error: "Error fetching access token: Error while making
request: getaddrinfo ENOTFOUND metadata.google.internal. Error code:
ENOTFOUND".
Keep in mind we are running against the local Auth emulator, not a cloud service. We found an issue on GitHub which seems to be related: https://github.com/firebase/firebase-tools/issues/1708
Unfortunately, the recommended course of action in that issue is to use an actual service account file from an actual cloud service. We do not check such files into our repos as this would be a security hazard. Does anyone know of a better way to deal with this situation?
In case it is relevant, we also get the following warning:
{"severity":"WARNING","message":"Warning, FIREBASE_CONFIG and
GCLOUD_PROJECT environment variables are missing. Initializing
firebase-admin will fail"}
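For what it's worth, here is a sketch of pointing firebase-admin at the local Auth emulator without any service account file; the project id demo-test and the emulator port 9099 are assumptions, so match them to your own project and firebase.json:

// Sketch: with FIREBASE_AUTH_EMULATOR_HOST set, admin.auth() talks to the
// emulator and never tries to fetch a real OAuth2 access token.
// 'demo-test' and port 9099 are placeholders for your project/emulator setup.
process.env.FIREBASE_AUTH_EMULATOR_HOST = 'localhost:9099';
process.env.GCLOUD_PROJECT = 'demo-test';

const admin = require('firebase-admin');
admin.initializeApp({ projectId: 'demo-test' });

// Later, in the test:
// const user = await admin.auth().getUser(userId);
// ...then assert on user.customClaims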

Problems with MongoDB trying to deploy a Meteor App

I have been setting up a server on a DigitalOcean droplet in order to host a couple of Meteor apps. I'm doing everything from scratch so I can learn as much as possible. I am trying to use Meteor Up (mup) to deploy an app, but it is having problems communicating with MongoDB. When I run "mup setup" I get the following error:
Started TaskList: Setup (linux)
[Gibson] - Installing Docker
[Gibson] - Installing Docker: SUCCESS
[Gibson] - Setting up Environment
[Gibson] - Setting up Environment: SUCCESS
[Gibson] - Copying MongoDB configuration
[Gibson] - Copying MongoDB configuration: SUCCESS
[Gibson] - Installing MongoDB
[Gibson] x Installing MongoDB: FAILED
-----------------------------------STDERR-----------------------------------
docker: Error response from daemon: driver failed programming external connectivity on endpoint mongodb (1e188b51b171446cd22d96f40ceab1e696019e5ac33ca713d78827246ae37ec8): Error starting userland proxy: listen tcp 127.0.0.1:27017: bind: address already in use.
-----------------------------------STDOUT-----------------------------------
latest: Pulling from library/mongo
Digest: sha256:beff97308c36f7af664a1d04eb6ed09be1d14c17427065b2ec4b0de90967bb3f
Status: Image is up to date for mongo:latest
mongodb
c17e5ac9e9369b779da4aff639c16578dedbc7c357985f67d6e7b005d9cf3939
----------------------------------------------------------------------------
But I can't get any indication from this of what's going wrong. Is the problem with Mongo, Meteor, mup, or Docker?
EDIT:
So far I understand from the message that mup is trying to connect to Mongo on port 27017 and failing; I just don't understand why, or how to fix it. I have a database that I want the app to connect to, which I moved onto the server from my local machine using mongodump and mongorestore. The thing I can't solve is how to connect my Meteor app to that Mongo DB.
mup does not just try to connect to mongod; it installs mongod in a container and tries to bind port 27017 on the local interface.
If you already have MongoDB installed and prefer to use it instead, you need to disable the installation of MongoDB in mup.js, mup.json, or whatever configuration file is used by your version of mup.
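As a sketch, a mup.js along these lines leaves out the mongo section entirely and points the app at the existing database through MONGO_URL (the server details, app name, and database name are all placeholders):

// mup.js sketch: with no `mongo` section, mup will not install or start its
// own MongoDB container. MONGO_URL must be reachable from inside the app's
// container; the address and names below are placeholders.
module.exports = {
  servers: {
    one: { host: '1.2.3.4', username: 'root', pem: '~/.ssh/id_rsa' }
  },
  app: {
    name: 'myapp',
    path: '../',
    servers: { one: {} },
    env: {
      ROOT_URL: 'http://example.com',
      MONGO_URL: 'mongodb://1.2.3.4:27017/myapp'
    }
  }
  // intentionally no `mongo: { ... }` block here
};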

Can't deploy todos; Failed to remove container (todos-frontend)

This is my first time with Linux and Meteor Up, so sorry if there's a stupid mistake. I'm trying to deploy the Meteor example app todos with mupx and followed the instructions from the readme, but I'm getting the following error. (I'm using Ubuntu 14.04 LTS Server.) Thanks for any help.
Configuration file : mup.json
Settings file : settings.json
“ Checkout Kadira!
It's the best way to monitor performance of your app.
Visit: https://kadira.io/mup ”
Meteor app path : /home/jan/todos
Using buildOptions : {}
Currently, it is only possible to build iOS apps on an OS X system.
Started TaskList: Deploy app 'todos' (linux)
[h2544161.stratoserver.net] - Uploading bundle
[h2544161.stratoserver.net] - Uploading bundle: SUCCESS
[h2544161.stratoserver.net] - Sending environment variables
[h2544161.stratoserver.net] - Sending environment variables: SUCCESS
[h2544161.stratoserver.net] - Initializing start script
[h2544161.stratoserver.net] - Initializing start script: SUCCESS
[h2544161.stratoserver.net] - Invoking deployment process
Invoking deployment process: FAILED
-----------------------------------STDERR-----------------------------------
Failed to remove container (todos-frontend): Error response from daemon: No such container: todos-frontend
docker: Error response from daemon: failed to create endpoint todos on network bridge: Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use.
-----------------------------------STDOUT-----------------------------------
todos
base: Pulling from meteorhacks/meteord
518dc1482465: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
537c534356b6: Already exists
b65a0e1e554b: Already exists
a3ed95caeb02: Already exists
a3ed95caeb02: Already exists
Digest: sha256:b5a4f6efa98e4070792ed36d33b14385a28e6ceda691a492ee5b9f2431b1515a
Status: Image is up to date for meteorhacks/meteord:base
d6d192579495851d5817288ff89abb69512562d7c2a7075f965484e64583c61b
Failed to remove container (todos-frontend): Error response from daemon: No such container: todos-frontend
docker: Error response from daemon: failed to create endpoint todos on network bridge: Bind for 0.0.0.0:80 failed: port is already allocated.
Just had the same issue; I finally deployed after changing the port number in my deployment's mup.json to an unused port (credit to this). Somehow the Docker service only releases ports when it wants to. I've used 80, 8000, and 8001 so far, but I haven't successfully deployed to the same port twice. It seems that different deployments can conflict with each other pretty easily; I have no real resolution for this.
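For reference, the port the answer talks about lives under env in the mupx-era mup.json. A sketch of just that part of the file (the ROOT_URL and port value are placeholders; the rest of the config is omitted):

{
  "appName": "todos",
  "env": {
    "ROOT_URL": "http://h2544161.stratoserver.net",
    "PORT": 8001
  }
}

Whatever is already holding port 80 on the server still needs to be found and stopped or moved if the app has to answer on 80.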
