MongoDB profiler is set to the admin database only - javascript

I have set the MongoDB profiler on my QA database:
db.setProfilingLevel(1, 100)
And querying the status gives:
db.getProfilingStatus()
{
  "was" : 1,
  "slowms" : 100,
  "sampleRate" : 0.42
}
When I look for profiled operations with:
db.system.profile.find()
the result is empty (0 documents).
The QA environment is live and queries are triggered from my Node.js server.
The operations are performed on the QA database (test_Qa) via the Node.js service.
But I still cannot see any query in the system profile, even when a query takes more than 30 seconds.
Whatever query is run on the admin database (via the mongo client directly) does get logged in the system profile (if it takes long enough).
Is the profiler applicable to the admin database only? (Given the command above, I don't think so.)
What's going wrong here?
Thanks for your help in advance.

Before applying the profiler you should switch to (use) that DB. In the case above, the logs were being added to the admin database because admin was the currently selected database.
Here I need to apply the profiler to the test_QA db, so before running all the queries above I should use test_QA:
use test_QA;
and then:
db.setProfilingLevel(1, 100);
This applies the profiler to the test_QA database.
Hope it helps if someone runs into the same problem.
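Putting it together, a minimal end-to-end sketch (run in the mongo shell; test_QA is the database name from the question):
use test_QA
db.setProfilingLevel(1, 100)  // level 1: record operations slower than slowms (100 ms)
db.getProfilingStatus()       // should now report { "was" : 1, "slowms" : 100, ... }
// after the Node.js service has run some slow queries against test_QA:
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()
Note that system.profile is per-database too: you must query it while test_QA is the current database.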

Related

Is there any way I can update a Firestore document automatically even if the app is closed? Is there a listener or something?

I have a Firestore project that needs documents to be updated automatically without user interaction, but I do not know how to go about it; any help would be appreciated. Take a look at the JSON to understand better:
const party = {
  id: 'bgvrfhbhgnhs',
  isPrivate: 'true',
  isStarted: false,
  created_At: '2021-12-26T05:20:29.000Z',
  start_date: '2021-12-26T02:00:56.000Z'
}
I want to update the isStarted field to true once the current time equals start_date.
I think you will need a Firebase Cloud Function, although I don't understand exactly what you mean.
With Cloud Functions, you can automatically run code (add, delete, update, anything) on Google's servers without any application or user interaction.
For example, matching your example, it can automatically set isStarted to true when the start_date time is reached. If you want a system that requires no user interaction and should work automatically, you should definitely use Cloud Functions; you cannot do this on the application side alone.
For more info visit Cloud Functions
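A minimal sketch of such a scheduled function (assumptions: the v1 firebase-functions API, a parties collection, and start_date stored as an ISO-8601 string as in the example):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// runs every minute on Google's servers, with no app or user interaction
exports.startDueParties = functions.pubsub
  .schedule('every 1 minutes')
  .onRun(async () => {
    // ISO-8601 strings compare correctly as plain strings
    const due = await admin.firestore().collection('parties')
      .where('isStarted', '==', false)
      .where('start_date', '<=', new Date().toISOString())
      .get();
    const batch = admin.firestore().batch();
    due.forEach((doc) => batch.update(doc.ref, { isStarted: true }));
    return batch.commit();
  });
Combining the equality and range filters like this may require a composite index; the Firestore console will prompt you to create it.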
OK, I managed to find a workaround to update my documents automatically without user interaction, since the Google billing service wouldn't accept my card to enable Cloud Functions for my project. I did what I could to make my code work, and I don't know if other people would follow my idea, but it solved my issue.
What I did: in my Next.js app I created an API endpoint after installing the Firebase Admin SDK. It fetches all documents, converts the start_date field of each document to a time, and checks for documents whose start date is less than or equal to the current date; for each match it runs a Firestore update on that document.
This will only run when a request hits mydomain.com/api/update-parties, and never again on its own.
To make it run at scheduled intervals, I signed up for a free-tier account at https://www.easycron.com and pointed EasyCron at my API endpoint at a one-minute interval, so every time a request hits my endpoint, it runs my code like any other serverless function 😜. Easy peasy.
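A minimal sketch of such an endpoint (assumptions: a pages/api/update-parties.js file, firebase-admin initialized from application-default credentials, and a parties collection):
import admin from 'firebase-admin';

// initialize once per serverless instance
if (!admin.apps.length) {
  admin.initializeApp(); // assumes GOOGLE_APPLICATION_CREDENTIALS is configured
}

export default async function handler(req, res) {
  const db = admin.firestore();
  const nowIso = new Date().toISOString();
  // fetch parties that have not started yet
  const snapshot = await db.collection('parties')
    .where('isStarted', '==', false)
    .get();
  const updates = [];
  snapshot.forEach((doc) => {
    // start_date is an ISO-8601 string in the example, so string comparison works
    if (doc.data().start_date <= nowIso) {
      updates.push(doc.ref.update({ isStarted: true }));
    }
  });
  await Promise.all(updates);
  res.status(200).json({ updated: updates.length });
}
Each cron request to /api/update-parties then flips every due party exactly once.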

How do we save logs of our application's actions in MongoDB (Node.js) with one schema and limited information?

I am working on a Node.js app with a MongoDB database. We want to log our application's actions with limited information like success, action name, user ID, time, etc. How can we do this in Node.js with MongoDB?
You can look at the official MongoDB website for how it handles its own logs (Mongo Logs), and if you want a custom one, just create a new db/collection for your logs and write to it like any regular request.
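A minimal sketch of the custom-collection route, assuming Mongoose (the question only says Node.js + MongoDB, so the ODM choice is an assumption):
const mongoose = require('mongoose');

// one schema, limited fields: action name, success flag, user id, timestamp
const actionLogSchema = new mongoose.Schema({
  action: { type: String, required: true },
  success: { type: Boolean, required: true },
  userId: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
  time: { type: Date, default: Date.now },
});

const ActionLog = mongoose.model('ActionLog', actionLogSchema);

// call this from your route handlers or services
async function logAction(action, success, userId) {
  await ActionLog.create({ action, success, userId });
}
If the log must not grow without bound, capped-collection options on the schema (e.g. { capped: { size: 1048576, max: 10000 } }) keep only the most recent entries.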

MongoDB: Can't connect to new replica set master

Trying to set up MongoDB for my Node.js application. I'm running this command:
mongo "mongodb+srv://cluster0-gjc2u.mongodb.net/test" --username <myusername>
And getting this response every single time.
MongoDB shell version v4.2.1
Enter password:
connecting to: mongodb://cluster0-shard-00-00-gjc2u.mongodb.net:27017,cluster0-shard-00-01-gjc2u.mongodb.net:27017,cluster0-shard-00-02-gjc2u.mongodb.net:27017/test?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&replicaSet=Cluster0-shard-0&ssl=true
2019-12-07T12:14:39.630-0600 I NETWORK [js] Starting new replica set monitor for Cluster0-shard-0/cluster0-shard-00-00-gjc2u.mongodb.net:27017,cluster0-shard-00-01-gjc2u.mongodb.net:27017,cluster0-shard-00-02-gjc2u.mongodb.net:27017
2019-12-07T12:14:39.630-0600 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cluster0-shard-00-00-gjc2u.mongodb.net:27017
2019-12-07T12:14:39.631-0600 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cluster0-shard-00-01-gjc2u.mongodb.net:27017
2019-12-07T12:14:39.631-0600 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cluster0-shard-00-02-gjc2u.mongodb.net:27017
2019-12-07T12:14:40.259-0600 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for Cluster0-shard-0 is Cluster0-shard-0/cluster0-shard-00-00-gjc2u.mongodb.net:27017,cluster0-shard-00-01-gjc2u.mongodb.net:27017,cluster0-shard-00-02-gjc2u.mongodb.net:27017
2019-12-07T12:14:40.799-0600 I NETWORK [js] Marking host cluster0-shard-00-00-gjc2u.mongodb.net:27017 as failed :: caused by :: Location40659: can't connect to new replica set master [cluster0-shard-00-00-gjc2u.mongodb.net:27017], err: AuthenticationFailed: bad auth Authentication failed.
*** It looks like this is a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.
2019-12-07T12:14:40.800-0600 E QUERY [js] Error: can't connect to new replica set master [cluster0-shard-00-00-gjc2u.mongodb.net:27017], err: AuthenticationFailed: bad auth Authentication failed. :
connect#src/mongo/shell/mongo.js:341:17
#(connect):2:6
2019-12-07T12:14:40.803-0600 F - [main] exception: connect failed
2019-12-07T12:14:40.804-0600 E - [main] exiting with code 1
I have whitelisted my IP address and made sure everything is in order. What could be causing this problem and how can I fix it?
What I tried:
Created a new user.
Made sure the username and password in the mongoURI my application connects with are the same as the ones I use when connecting with the mongo shell.
Ran the connection string from the command line as well as in my application.
I realise that "just wait a bit" isn't a terribly constructive answer, but I had the same issue and no luck finding a solution, so I left it for a couple of hours and came back to find it working perfectly.
The text that says your change has been deployed is misleading. I'm not sure why it takes several hours to kick in, but for reference I hit this on the M0 Sandbox cluster tier.
So you have to take care of two things.
1) First off, have mongo in your $PATH. Create a .bash_profile under your home folder if you don't have one already, then add the following (for Ubuntu):
export PATH="$PATH:/usr/bin"
Make sure you use the directory that contains the mongo binary on your computer; if you don't know the location, type whereis mongo in the terminal.
After saving, run source ~/.bash_profile in the terminal.
2) Lastly, copy the connection string from MongoDB Atlas, and when asked for a username and password, provide the credentials set to access the database, not your MongoDB Atlas account.
This resolution might be specific to the mLab-to-MongoDB-Atlas migration tool provided by Cloud MongoDB.
My resolution was to:
Re-create the user with the same user name.
Change from specific mLab grants to Atlas built-in roles.
Set new credentials; I also avoided using special characters.
Good luck!
Create a simple password that doesn't have any special characters, only letters and numbers.
I wasn't able to connect at first, but changing the password worked for me.
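If the password must keep special characters, they have to be percent-encoded when placed inside the connection URI. A small sketch (the host is the cluster from the question; the user and password are placeholders):
const user = encodeURIComponent('myusername');
const pass = encodeURIComponent('p@ss/word!'); // '@', '/', ':' and '%' must be escaped
const uri = `mongodb+srv://${user}:${pass}@cluster0-gjc2u.mongodb.net/test`;
// pass uri to MongoClient.connect(uri) or mongoose.connect(uri)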

CouchDB / PouchDB: relation between multiple users and multiple documents

I have a problem here:
I'm building a mobile app with the Ionic framework that needs to be able to work offline.
I'd like to adopt the CouchDB / PouchDB solution, but to do that I need to know how to lay out my data in the NoSQL database (I was a MySQL user before). NoSQL is new to me, but it seems interesting.
My app has a login part, so a user database. Each user has documents attached to him, but many users can have many documents (shared documents). And I want to replicate the data of one user (his information plus his documents) to the mobile app.
What I thought is this:
One database per user, plus one database for all documents with server-side filtering to send only the documents that belong to the user.
And on the client side I'd just have to call:
var localDB = new PouchDB("myuser");
var remoteDB = new PouchDB("http://128.199.48.178:5984/myuser");
localDB.sync(remoteDB, {
  live: true
});
And like that, on the client side I'd have something like this:
{
  username: "myuser",
  birthday: "Date",
  documents: [{
    "_id": "2",
    "szObject": "My Document"
  },
  {
    "_id": "85",
    "szObject": "My Document"
  }]
}
Do you think something like that is possible using CouchDB and PouchDB, and if yes, am I thinking about it the right way?
I read that it's not a problem to have one database per user, but I don't know if the replication will work the way I imagine.
Plain CouchDB doesn't have any per-document access options, but these could be your solutions:
A. Create a view, then sync Pouch-to-Couch with a filter. Although this will only sync the documents the user is supposed to see, anyone with enough knowledge could alter the code and view someone else's documents, or really do anything with the database (probably not what you're looking for).
B. Create a master DB with all documents, then a database for each user, and a filtered replication between the master and the per-user DBs. Probably the simplest and most proper way to handle this (see the sketch after this list).
C. Unfortunately there isn't a validate_doc_read (as there is a validate_doc_update), but perhaps you could write a simple HTTP proxy that parses incoming JSON, checks whether the particular user may view it, and if not, throws a 403 Forbidden. You'd also have to catch any views that query with include_docs=true.
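A sketch of option B under assumed names (user_filter, owner and user_myuser are placeholders for illustration). First, a filter function in a design document in the master database:
{
  "_id": "_design/user_filter",
  "filters": {
    "by_owner": "function (doc, req) { return doc.owner === req.query.owner; }"
  }
}
Then a continuous filtered replication, posted as a document to CouchDB's _replicator database:
{
  "source": "master",
  "target": "user_myuser",
  "filter": "user_filter/by_owner",
  "query_params": { "owner": "myuser" },
  "continuous": true
}
The client then syncs only with its own per-user database, exactly as in the question's PouchDB snippet.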
(late reply, I hope it's still useful - or if not, that you found a good solution for your problem)

Dropping a Mongo Database Collection in Meteor

Is there any way to drop a Mongo database collection from within the server-side JavaScript code with Meteor? (Really drop the whole thing, not just remove its contents with Meteor.Collection.remove({}).)
In addition, is there also a way to drop a Meteor.Collection from within the server-side JavaScript code without dropping the corresponding database collection?
Why do that?
Searching the subdocuments (subdocuments of the user document, e.g. userdoc.mailbox[12345]) with underscore or similar turns out to be quite slow (e.g. for large mailboxes).
On the other hand, putting all messages (in the mailbox example) of all users into one big collection and then searching* it for one or more particular messages turns out to be very, very slow too (for many users with large mailboxes).
There is also the size limit for MongoDB documents, so if I store all messages of a user in his/her user document, the mailbox's maximum size is < 16 MB together with all other user data.
So I want to have a collection for each of my users to use as a mailbox; then the maximum size for one message is 16 MB (very acceptable) and I can search a mailbox using Mongo queries.
Furthermore, since I'm using Meteor, it would be nice to have this Mongo collection loaded as a Meteor.Collection whenever a user logs in. When a user deactivates his/her account, the collection should of course be dropped; if the user just logs out, only the Meteor.Collection should be dropped (and restored when he/she logs in again).
To some extent I have this working already: each user has his/her own collection for the mailbox, but if anybody cancels his/her account, I have to delete that particular Mongo collection manually. Also, I have to keep all Mongo collections alive as Meteor.Collections at all times because I cannot drop them.
This is a working server-side code snippet for one-collection-per-user mailboxes:
var mailboxes = {};
Meteor.users.find({}, {fields: {_id: 1}}).forEach(function (user) {
  mailboxes[user._id] = new Meteor.Collection("Mailbox_" + user._id);
});

Meteor.publish("myMailbox", function (_query, _options) {
  if (this.userId) {
    return mailboxes[this.userId].find(_query, _options);
  }
});
while a client just subscribes with a certain query with this piece of client-code:
myMailbox = new Meteor.Collection("Mailbox_" + Meteor.userId());
Deps.autorun(function () {
  var filter = Session.get("mailboxFilter");
  if (_.isObject(filter) && filter.query && filter.options) {
    Meteor.subscribe("myMailbox", filter.query, filter.options);
  }
});
So if a client manipulates the session variable "mailboxFilter", the subscription is updated and the user gets a new batch of messages in the minimongo.
It works very nicely; the only thing missing is dropping the collection.
Thanks for any hints!
*I previously wrote "dropping" here, which was a total mistake; I meant searching.
A solution that doesn't use a private method is:
myMailbox.rawCollection().drop();
This is better in my opinion because Meteor could drop or rename the private method at any time without warning.
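A minimal sketch of using this from a Meteor method, reusing the mailboxes map from the question (the method name is an assumption):
Meteor.methods({
  dropMailbox: function () {
    var mailbox = mailboxes[this.userId]; // the per-user collection map from the question
    if (mailbox) {
      mailbox.rawCollection().drop();     // drops the underlying MongoDB collection
      delete mailboxes[this.userId];      // also forget the Meteor.Collection wrapper
    }
  }
});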
You can completely drop the collection myMailbox with myMailbox._dropCollection(), directly from Meteor.
I know the question is old, but it was the first hit when I searched for how to do this.
Searching in the subdocuments...
Why use subdocuments? A document per user I suppose?
each message must be its own document
That's the better way: a collection of messages, each ID'ed to the user. That way, you can filter what a user sees when doing publish/subscribe.
dropping all messages in one db turns out to be very slow for many users with large mailboxes
That's because most NoSQL DBs (if not all) are geared toward read-intensive operations rather than write-intensive ones, so writing (updating, inserting, removing, wiping) will take more time.
Also, some online services (I think it was Twitter or Yahoo) tell you when you deactivate the account: "Your data will be deleted within the next N days," or something that resembles that. One reason is that your data takes time to delete.
The user is leaving anyway, so you can just tell the user that the account has been deactivated and the data will be deleted from your databases in the following days. On top of that, so you can respond to the user immediately, do the remove operation asynchronously by passing it a callback (see the sketch below).
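A sketch of that asynchronous removal (Messages and leavingUserId are hypothetical names; in Meteor, passing a callback makes the server-side call non-blocking):
Messages.remove({ userId: leavingUserId }, function (err) {
  if (err) console.log("background mailbox cleanup failed:", err);
});
// the method can return to the client immediately; deletion continues in the background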
