I have the following code:
async function bulkInsert(db, collectionName, documents) {
  try {
    const cosmosResults = await db.collection(collectionName).insertMany(documents);
    console.log(cosmosResults);
    return cosmosResults;
  } catch (e) {
    console.log(e);
  }
}
If I run it with a large array of documents, I get (not unexpectedly):
{ MongoError: Message: {"Errors":["Request rate is large"]}
ActivityId: b3c83c38-0000-0000-0000-000000000000,
Request URI: /apps/DocDbApp/services/DocDbServer24/partitions/a4cb4964-38c8-11e6-8106-8cdcd42c33be/replicas/1p/,
RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.102.5
at G:\Node-8\NodeExample\node_modules\oracle-movie-ticket-demo\node_modules\mongodb-core\lib\connection\pool.js:596:61
at authenticateStragglers (G:\Node-8\NodeExample\node_modules\oracle-movie-ticket-demo\node_modules\mongodb-core\lib\connection\pool.js:514:16)
at Connection.messageHandler (G:\Node-8\NodeExample\node_modules\oracle-movie-ticket-demo\node_modules\mongodb-core\lib\connection\pool.js:550:5)
at emitMessageHandler (G:\Node-8\NodeExample\node_modules\oracle-movie-ticket-demo\node_modules\mongodb-core\lib\connection\connection.js:309:10)
at TLSSocket.<anonymous> (G:\Node-8\NodeExample\node_modules\oracle-movie-ticket-demo\node_modules\mongodb-core\lib\connection\connection.js:452:17)
at emitOne (events.js:116:13)
at TLSSocket.emit (events.js:211:7)
at addChunk (_stream_readable.js:263:12)
at readableAddChunk (_stream_readable.js:250:11)
at TLSSocket.Readable.push (_stream_readable.js:208:10)
name: 'MongoError',
message: 'Message: {"Errors":["Request rate is large"]}\r\nActivityId: b3c83c38-0000-0000-0000-000000000000,
Request URI: /apps/DocDbApp/services/DocDbServer24/partitions/a4cb4964-38c8-11e6-8106-8cdcd42c33be/replicas/1p/, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.102.5',
_t: 'OKMongoResponse',
ok: 0,
code: 16500,
errmsg: 'Message: {"Errors":["Request rate is large"]}\r\nActivityId: b3c83c38-0000-0000-0000-000000000000,
Request URI: /apps/DocDbApp/services/DocDbServer24/partitions/a4cb4964-38c8-11e6-8106-8cdcd42c33be/replicas/1p/,
RequestStats: ,
SDK: Microsoft.Azure.Documents.Common/1.19.102.5',
'$err': 'Message: {"Errors":["Request rate is large"]}\r\nActivityId: b3c83c38-0000-0000-0000-000000000000,
Request URI: /apps/DocDbApp/services/DocDbServer24/partitions/a4cb4964-38c8-11e6-8106-8cdcd42c33be/replicas/1p/, RequestStats: ,
SDK: Microsoft.Azure.Documents.Common/1.19.102.5' }
It appears that some (approx. 165) of the 740 records I was processing have been loaded. All of them appear to have been assigned '_id' attributes.
Does anyone have any idea how to handle this (or at least tell which records were inserted and which were not processed)?
Requests against Cosmos DB consume Request Units (RUs). Your insert request exceeded the provisioned RU throughput, so error code 16500 occurred.
Applications that exceed the provisioned request units for a collection will be throttled until the rate drops below the reserved level. When a throttle occurs, the backend will preemptively end the request with a 16500 error code - Too Many Requests. By default, the API for MongoDB will automatically retry up to 10 times before returning a Too Many Requests error code.
You can find more details in the official documentation.
You could try the following approaches to resolve the issue:
Import your data in smaller batches to reduce the request rate.
Add your own retry logic in your application (see the sketch after this list).
Increase the reserved throughput for the collection. Of course, this increases your cost.
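As a rough illustration of the first two suggestions, here is a sketch (not production code) that inserts the documents in small batches and retries a batch with a back-off delay when the 16500 / "Request rate is large" error comes back. The function name, batch size, retry count, and delay are all placeholders you would need to tune against your provisioned RUs:
```javascript
// Sketch only: batch size, retry count, and delay are assumptions to tune,
// not recommended values.
async function bulkInsertInBatches(db, collectionName, documents, batchSize = 50) {
  const collection = db.collection(collectionName);
  const insertedIds = [];
  for (let i = 0; i < documents.length; i += batchSize) {
    const batch = documents.slice(i, i + batchSize);
    let attempts = 0;
    while (true) {
      try {
        const result = await collection.insertMany(batch);
        insertedIds.push(...Object.values(result.insertedIds));
        break;
      } catch (e) {
        // 16500 is the "Request rate is large" throttling error from Cosmos DB.
        // Note: if a batch partially succeeded before the error, a plain retry
        // can insert duplicates unless you pre-assign your own _id values.
        if (e.code === 16500 && attempts < 5) {
          attempts++;
          await new Promise(resolve => setTimeout(resolve, 1000 * attempts));
        } else {
          throw e;
        }
      }
    }
  }
  return insertedIds;
}
```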
You could refer to this article.
Hope this helps.
Updated Answer:
It looks like your documents are not uniquely identifiable, so the "_id" attribute automatically generated by Cosmos DB cannot be used to determine which documents have been inserted and which have not.
I suggest increasing the throughput setting, emptying the database, and then bulk importing the data.
Considering the cost, please refer to this document for setting an appropriate RU value.
Or you could test the bulk import operation locally via the Cosmos DB Emulator.
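If emptying the database is not practical, another option (a sketch that assumes your source records have, or can be given, some stable key of their own; `externalId` below is a hypothetical field name) is to assign your own `_id` values before inserting. After a failed run you can then query which ids already exist and re-insert only the missing documents:
```javascript
// Sketch: 'externalId' is a hypothetical stable key on your source records.
async function insertMissing(db, collectionName, documents) {
  const collection = db.collection(collectionName);

  // Pre-assign deterministic _id values instead of letting Cosmos DB generate them.
  const withIds = documents.map(doc => Object.assign({}, doc, { _id: doc.externalId }));

  // Find which of those _id values are already present in the collection.
  const existing = await collection
    .find({ _id: { $in: withIds.map(d => d._id) } })
    .project({ _id: 1 })
    .toArray();
  const existingIds = new Set(existing.map(d => d._id));

  // Insert only the documents that have not made it in yet.
  const missing = withIds.filter(d => !existingIds.has(d._id));
  if (missing.length > 0) {
    await collection.insertMany(missing, { ordered: false });
  }
  return missing.length;
}
```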
Related
Sails Version: 1.2.3
Node Version: v10.14.2
Sails Mongo Version: 1.0.1
Datastore configuration:
default: {
  adapter: 'sails-mongo',
  url: 'mongodb://user:password@xxx.xxx.xxx.xxx:27017/user_db'
}
The password contains special characters like #!_-. Is that the problem?
For some reason, even after changing the password to one without special characters, it still can't connect.
I would also appreciate help on establishing a connection to localhost without a username and password. How do I construct such a connection string?
error: A hook (`orm`) failed to load!
error: Could not tear down the ORM hook. Error details: Error: Consistency violation: Attempting to tear down a datastore (`default`) which is not currently registered with this adapter. This is usually due to a race condition in userland code (e.g. attempting to tear down the same ORM instance more than once), or it could be due to a bug in this adapter. (If you get stumped, reach out at http://sailsjs.com/support.)
at Object.teardown (/Users/apple/Documents/projects/ozone-login-system/node_modules/sails-mongo/lib/index.js:390:19)
at /Users/apple/Documents/projects/ozone-login-system/node_modules/waterline/lib/waterline.js:758:27
at /Users/apple/Documents/projects/ozone-login-system/node_modules/waterline/node_modules/async/dist/async.js:3047:20
at eachOfArrayLike (/Users/apple/Documents/projects/ozone-login-system/node_modules/waterline/node_modules/async/dist/async.js:1002:13)
at eachOf (/Users/apple/Documents/projects/ozone-login-system/node_modules/waterline/node_modules/async/dist/async.js:1052:9)
at Object.eachLimit (/Users/apple/Documents/projects/ozone-login-system/node_modules/waterline/node_modules/async/dist/async.js:3111:7)
at Object.teardown (/Users/apple/Documents/projects/ozone-login-system/node_modules/waterline/lib/waterline.js:742:11)
at Hook.teardown (/Users/apple/Documents/projects/ozone-login-system/node_modules/sails-hook-orm/index.js:246:30)
at Sails.wrapper (/Users/apple/Documents/projects/ozone-login-system/node_modules/@sailshq/lodash/lib/index.js:3275:19)
at Object.onceWrapper (events.js:273:13)
at Sails.emit (events.js:182:13)
at Sails.emitter.emit (/Users/apple/Documents/projects/ozone-login-system/node_modules/sails/lib/app/private/after.js:56:26)
at /Users/apple/Documents/projects/ozone-login-system/node_modules/sails/lib/app/lower.js:67:11
at beforeShutdown (/Users/apple/Documents/projects/ozone-login-system/node_modules/sails/lib/app/lower.js:45:12)
at Sails.lower (/Users/apple/Documents/projects/ozone-login-system/node_modules/sails/lib/app/lower.js:49:3)
at Sails.wrapper [as lower] (/Users/apple/Documents/projects/ozone-login-system/node_modules/@sailshq/lodash/lib/index.js:3275:19)
error: Failed to lift app: Error: Consistency violation: Unexpected error creating db connection manager:
```
MongoError: Authentication failed.
at flaverr (/Users/apple/Documents/projects/ozone-login-system/node_modules/flaverr/index.js:94:15)
at Function.module.exports.parseError (/Users/apple/Documents/projects/ozone-login-system/node_modules/flaverr/index.js:371:12)
at Function.handlerCbs.error (/Users/apple/Documents/projects/ozone-login-system/node_modules/machine/lib/private/help-build-machine.js:665:56)
at connectCb (/Users/apple/Documents/projects/ozone-login-system/node_modules/sails-mongo/lib/private/machines/create-manager.js:130:22)
at connectCallback (/Users/apple/Documents/projects/ozone-login-system/node_modules/mongodb/lib/mongo_client.js:428:5)
at /Users/apple/Documents/projects/ozone-login-system/node_modules/mongodb/lib/mongo_client.js:376:13
at process._tickCallback (internal/process/next_tick.js:61:11)
```
at Object.error (/Users/apple/Documents/projects/ozone-login-system/node_modules/sails-mongo/lib/index.js:268:21)
at /Users/apple/Documents/projects/ozone-login-system/node_modules/machine/lib/private/help-build-machine.js:1514:39
at proceedToFinalAfterExecLC (/Users/apple/Documents/projects/ozone-login-system/node_modules/parley/lib/private/Deferred.js:1153:14)
at proceedToInterceptsAndChecks (/Users/apple/Documents/projects/ozone-login-system/node_modules/parley/lib/private/Deferred.js:913:12)
at proceedToAfterExecSpinlocks (/Users/apple/Documents/projects/ozone-login-system/node_modules/parley/lib/private/Deferred.js:845:10)
at /Users/apple/Documents/projects/ozone-login-system/node_modules/parley/lib/private/Deferred.js:303:7
at /Users/apple/Documents/projects/ozone-login-system/node_modules/machine/lib/private/help-build-machine.js:952:35
at Function.handlerCbs.error (/Users/apple/Documents/projects/ozone-login-system/node_modules/machine/lib/private/help-build-machine.js:742:26)
at connectCb (/Users/apple/Documents/projects/ozone-login-system/node_modules/sails-mongo/lib/private/machines/create-manager.js:130:22)
at connectCallback (/Users/apple/Documents/projects/ozone-login-system/node_modules/mongodb/lib/mongo_client.js:428:5)
at /Users/apple/Documents/projects/ozone-login-system/node_modules/mongodb/lib/mongo_client.js:376:13
at process._tickCallback (internal/process/next_tick.js:61:11)
I am using MongoDB Atlas for learning, and the following worked for me:
default: {
  adapter: require('sails-mongo'),
  url: 'mongodb://dbUser:dbpass@learning-cluster-shard-XXXXXXXXXXX',
},
If you are using Atlas, don't forget to allow access from your current IP address on the MongoDB Atlas website.
If you're viewing this in 2020, you need to use the older MongoDB connection-string format (listing the shard hosts rather than using the mongodb+srv:// form) to connect MongoDB Atlas to Sails.
//config/datastores.js
default: {
  adapter: require('sails-mongo'),
  url: "mongodb://<YOUR_USERNAME>:<YOUR_PASSWORD>@cluster0-shard-00-00-kiodk.mongodb.net:27017,cluster0-shard-00-01-kiodk.mongodb.net:27017,cluster0-shard-00-02-kiodk.mongodb.net:27017/<YOUR_DB_NAME>?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin&retryWrites=true&w=majority",
}
For it to work with Sails, your URL should be in the format above.
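On the original question about special characters (#, !, _, -) in the password: characters like # and ! have to be percent-encoded in a MongoDB connection string, otherwise the URL is parsed incorrectly. Here is a minimal sketch of a config/datastores.js covering both cases asked about, an authenticated remote database and a plain localhost connection without credentials (all names, passwords, and addresses are placeholders):
```javascript
// config/datastores.js -- sketch with placeholder credentials and host
const user = 'user';
const password = encodeURIComponent('p#ss!w_rd-'); // percent-encode special characters

module.exports.datastores = {
  default: {
    adapter: 'sails-mongo',
    url: 'mongodb://' + user + ':' + password + '@xxx.xxx.xxx.xxx:27017/user_db',

    // For a local MongoDB with no username/password, the URL is simply:
    // url: 'mongodb://localhost:27017/user_db',
  },
};
```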
The program I am writing is a status display screen for alarms, each of which is represented by a channel.
When the server is started (it runs on a Vagrant virtual machine), an InfluxDB database is accessed, and the data (comprising 1574 'channels') is processed and put into a Redis database. This runs fine, and the GUI is displayed with no issues when the webpage is refreshed, although it takes a long time to load (up to 20 s), and nearly all of this time is spent in the method below.
However, after a few refreshes/moving around the site, it often crashes with the following error:
{ AbortError: Redis connection lost and command aborted. It might have been processed.
    at RedisClient.flush_and_error (/vagrant/node_modules/redis/index.js:362:23)
    at RedisClient.connection_gone (/vagrant/node_modules/redis/index.js:664:14)
    at RedisClient.on_error (/vagrant/node_modules/redis/index.js:410:10)
    at Socket. (/vagrant/node_modules/redis/index.js:279:14)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at onwriteError (_stream_writable.js:417:12)
    at onwrite (_stream_writable.js:439:5)
    at _destroy (internal/streams/destroy.js:39:7)
    at Socket._destroy (net.js:568:3)
  code: 'UNCERTAIN_STATE',
  command: 'HGETALL',
  args: [ 'vista:hash:Result:44f59707-c873-11e8-93b9-7f551d0bdd1f' ],
  origin:
   { Error: Redis connection to 127.0.0.1:6379 failed - write EPIPE
       at WriteWrap.afterWrite (net.js:868:14)
     errno: 'EPIPE', code: 'EPIPE', syscall: 'write' } }
This error is displayed 1574 times (once for each channel), and occurs when the program reaches this function:
Result.getFormattedResults = async function (cycle) {
  const channels = await Channel.findAndLoad()
  const formattedResults = await mapAsyncParallel(channels, async channel => {
    const result = await this.findAndLoadByChannel(channel, cycle)
    const formattedResult = await result.format(channel)
    return formattedResult
  })
  return formattedResults
}
mapAsyncParallel() is as follows:
export const mapAsyncParallel = (arr, fn, thisArg) => {
  return Promise.all(arr.map(fn, thisArg))
}
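Promise.all(arr.map(fn)) starts the async work for all 1574 channels at once, so every channel's Redis commands are in flight at the same time. For comparison, a concurrency-limited variant (just a sketch, assuming simple fixed-size chunking is acceptable; the chunk size is an arbitrary placeholder) would look like this:
```javascript
// Sketch: process the array in fixed-size chunks so only `chunkSize`
// channel loads run against Redis at any one time.
export const mapAsyncChunked = async (arr, fn, chunkSize = 50) => {
  const results = []
  for (let i = 0; i < arr.length; i += chunkSize) {
    const chunk = arr.slice(i, i + chunkSize)
    const chunkResults = await Promise.all(chunk.map(fn))
    results.push(...chunkResults)
  }
  return results
}
```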
findAndLoadByChannel() finds the channel and loads it with this line:
const resultModel = await this.load(resultId)
And format() takes the model and outputs the data in JSON format.
There are two fetch(...) calls in the front end (both are needed and cannot be combined), and the problem rarely occurs when I comment out either one of them. This makes me think it could be a max-memory or max-connections problem (increasing maxmemory in the config file didn't help), or a problem with using so many promises (a concept I am fairly new to).
This has only started to occur as I have added more functionality, and I assume the function needs optimizing, but I have taken over this project from someone else and am still quite new to Node.js and Redis.
Versions:
Vagrant: 2.0.1
Ubuntu: 16.04.5
Redis: 4.0.9
Node: 8.12.0
npm: 5.7.1
I've now moved all the 'getting' of the data (from Redis) to the server-side channels.controller file.
So, where before I would have:
renderPage: async (req, res) => {
  res.render('page')
},
I now have a method like:
renderPage: async (req, res) => {
  const data1 = getData1()
  const data2 = getData2()
  res.render('page', {data1, data2})
},
(Don't worry, these aren't my actual variable names)
Where the two 'data' variables were previously retrieved using the 'fetch' method.
I export the data once it's loaded into redis, and import it in the controller file, where I have the getters to combine it all into one return array.
The pages now take milliseconds to refresh, and I haven't had any crashes.
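For completeness, the controller-side getters are roughly of this shape (a simplified sketch with placeholder names and an assumed module path, not the real implementation):
```javascript
// Simplified sketch with placeholder names -- the data is loaded into Redis
// once at startup, exported from that module, and imported here.
const { getData1, getData2 } = require('../services/redis-data') // assumed path

module.exports = {
  renderPage: async (req, res) => {
    // If the getters are async, await them; if they read from an
    // already-loaded structure, the awaits are harmless.
    const [data1, data2] = await Promise.all([getData1(), getData2()])
    res.render('page', { data1, data2 })
  },
}
```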
I'm kind of new to JS and I can't solve this problem, so I hope you can help me.
I will briefly explain the situation: I installed the Homebridge app from GitHub on my Raspberry Pi: https://github.com/nfarina/homebridge
Installation was successful, so far so good. I then installed the eWeLink plugin for Homebridge: https://github.com/gbro115/homebridge-ewelink. That installation went fine as well, but on startup there seems to be a problem in the plugin's index.js, and I get the following output:
[2018-5-31 23:10:37] [eWeLink] A total of [0] accessories were loaded from the local cache
[2018-5-31 23:10:37] [eWeLink] Requesting a list of devices from eWeLink HTTPS API at [https://eu-ota.coolkit.cc:8080]
[2018-5-31 23:10:37] Homebridge is running on port 51826.
[2018-5-31 23:10:37] [eWeLink] eWeLink HTTPS API reports that there are a total of [108] devices registered
/usr/lib/node_modules/homebridge-ewelink/index.js:98
        body.forEach((device) => {
             ^
TypeError: body.forEach is not a function
    at /usr/lib/node_modules/homebridge-ewelink/index.js:98:22
    at Object.parseBody (/usr/lib/node_modules/homebridge-ewelink/node_modules/request-json/main.js:74:12)
    at Request._callback (/usr/lib/node_modules/homebridge-ewelink/node_modules/request-json/main.js:148:26)
    at Request.self.callback (/usr/lib/node_modules/homebridge-ewelink/node_modules/request/request.js:186:22)
    at emitTwo (events.js:126:13)
    at Request.emit (events.js:214:7)
    at Request. (/usr/lib/node_modules/homebridge-ewelink/node_modules/request/request.js:1163:10)
    at emitOne (events.js:116:13)
    at Request.emit (events.js:211:7)
    at IncomingMessage. (/usr/lib/node_modules/homebridge-ewelink/node_modules/request/request.js:1085:12)
So the terminal tells me there is an error on line 98 of index.js, which is in the following part of the script:
var devicesFromApi = new Map();
var newDevicesToAdd = new Map();

body.forEach((device) => {
    platform.apiKey = device.apikey;
    devicesFromApi.set(device.deviceid, device);
});

// Now we compare the cached devices against the web list
platform.log("Evaluating if devices need to be removed...");

function checkIfDeviceIsStillRegistered(value, deviceId, map) {
    var accessory = platform.accessories.get(deviceId);
    if (devicesFromApi.has(deviceId)) {
        platform.log('Device [%s] is registered with API. Nothing to do.', accessory.displayName);
    } else {
        platform.log('Device [%s], ID : [%s] was not present in the response from the API. It will be removed.', accessory.displayName, accessory.UUID);
        platform.removeAccessory(accessory);
    }
}
I found some similar problems with the forEach function, but I still can't seem to figure out what I should change in the script.
Hope you can help me :)
body is not an Array, therefore you cannot invoke .forEach on it. You can try converting it like this:
Array.from(body).forEach(function (device) { ... });
Take a look at this answer, which might help: forEach is not a function error with JavaScript array
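Another option (a sketch of the defensive approach, not a drop-in patch for the plugin) is to check the shape of the response before iterating, so that an unexpected reply from the eWeLink API, for example an error object instead of the device list, produces a readable log message rather than a crash:
```javascript
// Sketch: guard against the API returning something other than an array
// (for example an error object) before iterating over it.
if (Array.isArray(body)) {
    body.forEach((device) => {
        platform.apiKey = device.apikey;
        devicesFromApi.set(device.deviceid, device);
    });
} else {
    platform.log('Unexpected response from the eWeLink API: %s', JSON.stringify(body));
}
```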
I'm trying to make a request to PayPal's API using PayPal-node-SDK:
exports.requestPayment = functions.https.onRequest((req, res) => {
  return new Promise(function (fullfilled, rejected) {
    paypal.payment.create(create_payment_json, {}, function (error, payment) {
      if (error) {
        rejected(error);
      } else {
        console.log("Create Payment Response");
        console.log(payment);
        res.status(200).send(JSON.stringify({
          paymentID: payment.id
        })).end();
        fullfilled(payment);
      }
    });
  });
});
but I'm constantly getting an error:
Error: getaddrinfo ENOTFOUND api.sandbox.paypal.com api.sandbox.paypal.com:443
at errnoException (dns.js:28:10)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:76:26)
Things I've tried:
Making a request to a totally different host, still ENOTFOUND
Wrapping the request with cors(req,res, ()=>{...})
Prepending https:// to the host
What is the problem?
You'll need to be on a paid plan to make external API requests.
Firebase's Blaze plan (pay as you go) has a free allotment for Cloud Functions. https://firebase.google.com/pricing/
In my situation, I had to wait and let whatever lag was happening pass. Now it's fine again.
I was having this issue because of a weak internet connection; changing the connection fixed it.
You need to include a service account in the admin initialization. This fixed the same issue for me.
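For reference, that initialization looks roughly like this (a sketch; the key filename is a placeholder for the service-account JSON you download from the Firebase console):
```javascript
// Sketch: serviceAccountKey.json is a placeholder for your downloaded key file.
const admin = require('firebase-admin');
const serviceAccount = require('./serviceAccountKey.json');

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
});
```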
Switch to the Firebase "Blaze" plan, which includes the free usage tier of the Spark plan before incurring any costs. Use the Blaze pricing calculator to see what you'd be charged for a given usage.
The first 5GB of outbound (egress) networking is free, which is the same as what "native" Google Cloud Functions would give you.
I am having a problem similar to this socket.io issue when using Sails.js. Every once in a while (once per day, or even every few hours; it varies), a visitor to the web site/app will crash Node, seemingly due to the way their websocket client tries to connect. Anyway, here's the crash log:
debug: Lowering sails...
/Volumes/Two/Sites/lsdfinder/node_modules/sails/node_modules/express/node_modules/connect/lib/utils.js:216
return 0 == str.indexOf('s:')
^
TypeError: Cannot call method 'indexOf' of undefined
at exports.parseSignedCookie (/Volumes/Two/Sites/lsdfinder/node_modules/sails/node_modules/express/node_modules/connect/lib/utils.js:216:19)
at Manager.socketAttemptingToConnect (/Volumes/Two/Sites/lsdfinder/node_modules/sails/lib/hooks/sockets/authorization.js:35:26)
at Manager.authorize (/Volumes/Two/Sites/lsdfinder/node_modules/sails/node_modules/socket.io/lib/manager.js:910:31)
at Manager.handleHandshake (/Volumes/Two/Sites/lsdfinder/node_modules/sails/node_modules/socket.io/lib/manager.js:786:8)
at Manager.handleRequest (/Volumes/Two/Sites/lsdfinder/node_modules/sails/node_modules/socket.io/lib/manager.js:593:12)
at Server.<anonymous> (/Volumes/Two/Sites/lsdfinder/node_modules/sails/node_modules/socket.io/lib/manager.js:119:10)
at Server.EventEmitter.emit (events.js:98:17)
at HTTPParser.parser.onIncoming (http.js:2076:12)
at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:120:23)
at Socket.socket.ondata (http.js:1966:22)
9 Oct 10:42:24 - [nodemon] app crashed - waiting for file changes before starting...
In config/sockets.js, authorization is set to true. I'm not sure what else to do or where to fix this. Any suggestions? I can read the Sails docs too, but this appears to be a problem in Express/Connect, no? Thanks.
...René
The problem is that once every so often, a client will connect that has no cookies. Sails.js is using util.parseSignedCookie() from Connect without checking for errors, and therefore an error is thrown. This is what it looks like in Sails:
if (handshake.headers.cookie) {
  handshake.cookie = cookie.parse(handshake.headers.cookie);
  handshake.sessionID = parseSignedCookie(handshake.cookie[sails.config.session.key], sails.config.session.secret);
}
If you take a look into the cookieParser() middleware of Connect, you can see error checking is required:
if (cookies) {
  try {
    req.cookies = cookie.parse(cookies);
    if (secret) {
      req.signedCookies = utils.parseSignedCookies(req.cookies, secret);
      req.signedCookies = utils.parseJSONCookies(req.signedCookies);
    }
    req.cookies = utils.parseJSONCookies(req.cookies);
  } catch (err) {
    err.status = 400;
    return next(err);
  }
}
I've created a Gist here that fixes the problem, and will submit a pull request to Sails.js when I have the time. The Gist uses Connect's cookieParser() middleware to automatically handle errors. If you want to use this, modify this file in your modules folder:
node_modules/sails/lib/hooks/sockets/authorization.js
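The guarded version is roughly along these lines (a sketch of the idea rather than the exact contents of the Gist; `accept` stands for whatever handshake callback the surrounding authorization function receives):
```javascript
// Sketch: parse the cookie defensively so a handshake without a valid
// session cookie is rejected instead of throwing and crashing Node.
if (handshake.headers.cookie) {
  try {
    handshake.cookie = cookie.parse(handshake.headers.cookie);
    var rawSessionCookie = handshake.cookie[sails.config.session.key];
    if (rawSessionCookie) {
      handshake.sessionID = parseSignedCookie(rawSessionCookie, sails.config.session.secret);
    }
  } catch (err) {
    // A malformed cookie should not bring the server down.
    return accept('Could not parse the session cookie.', false);
  }
}
```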
If you are doing a cross-domain request, you could turn off authorization: in *site_dir/config/sockets.js*, set authorization to false. That is one way of doing it. You can also call your API with something like this:
**http://localhost:1337?cookie=smokeybear**
This is mentioned in the comments in the sockets.js file.