I have some objects that encode text formatting methods and CSS styles, and I'd like to save them in a MongoDB collection (using Mongoose). The objects are a more complicated version of this:
const myStyle = {
  book: {
    templates: ["/authors/. ", "/title/. ", "/date/. "],
    authors: {
      format: formatAuthors
    },
    title: {
      format: formatTitle,
      style: {fontStyle: "italic"}
    }
  }
}
Does anyone know how to send this sort of thing to a server and save it in a MongoDB collection? According to the Mongoose documentation, Object is not a valid schemaType, so I can't just save it straightforwardly as a JS object.
You can add methods through the Schema.methods field.
SchemaName.methods.methodName = function(){...}
Also take a look at this example here for basic mongoose usage.
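As a small sketch of that pattern (the schema, model, and method names below are placeholders, not from the question):
const mongoose = require('mongoose');

// Placeholder schema; swap in your own fields.
const bookSchema = new mongoose.Schema({ title: String, authors: [String] });

// Instance methods are defined on schema.methods and run with `this`
// bound to the document, so they can use its data.
bookSchema.methods.citation = function () {
  return this.authors.join(', ') + '. ' + this.title + '.';
};

const Book = mongoose.model('Book', bookSchema);

const doc = new Book({ title: 'Some Title', authors: ['A. Author'] });
console.log(doc.citation()); // "A. Author. Some Title."
Defining the formatting as a method keeps the functions in application code; only the document's data fields are persisted.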
Object is not a valid mongoose schema type but Mixed is.
An "anything goes" SchemaType, its flexibility comes at a trade-off of
it being harder to maintain. Mixed is available either through
Schema.Types.Mixed or by passing an empty object literal.
You can read more about it using the above link.
In your case, as long as the data you want to save is a valid JS object, you will be able to insert it.
Please note, however, that since it is a schema-less type there are some performance limitations when using Mixed, and you need to call .markModified(path) when it changes, as again stated in the docs.
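As a rough sketch of how that might look for the style object in the question (the schema and field names here are made up; note that only plain data is stored, so the format functions themselves would not survive the round trip):
const mongoose = require('mongoose');

// "styleData" is a placeholder field name; Mixed accepts any plain-object shape.
const styleSchema = new mongoose.Schema({
  name: String,
  styleData: mongoose.Schema.Types.Mixed // or simply: styleData: {}
});

const Style = mongoose.model('Style', styleSchema);

const doc = new Style({
  name: 'book',
  styleData: {
    templates: ["/authors/. ", "/title/. ", "/date/. "],
    title: { style: { fontStyle: "italic" } }
  }
});

doc.save()
  .then(() => {
    // Mongoose can't track changes inside a Mixed path, so flag it manually.
    doc.styleData.title.style.fontStyle = "normal";
    doc.markModified('styleData');
    return doc.save();
  });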
No, you can't save methods to MongoDB; the functions themselves won't be persisted, only the data.
While developing an API, I often need to set extra properties on the results of MongoDB queries, but I can't find a good way to do it. For example:
Model
const Cat = mongoose.model('Cat', { name: String, age: Number });
Query
Cat.findOne({age: 2}).then(
  cat => {
    cat.breed = "puppy";
    console.log(cat);
  }
)
Here, after I get the result from MongoDB, I want to set the breed property on the result, but I can't because the property is not defined in the schema.
So to set an extra property I use a hack:
cat = JSON.parse(JSON.stringify(cat));
cat.favFood = "Milk"
I don't think this is a good way to code. Please suggest a better way of setting the property, and explain how the hack works.
Mongoose can actually do the conversion to a plain object for you with the .lean() option. This is preferred over manual conversion after the query (as willis mentioned) because it optimizes the query by skipping the conversion of the raw MongoDB document coming from the DB into a Mongoose document, leaving it as a plain JavaScript object. So your query will look something like this:
Cat.findOne({age: 2}).lean().then(
  cat => {
    cat.breed = "puppy";
    console.log(cat);
  }
)
The result will be the same, except that this skips the Mongoose document-to-object conversion middleware. However, note that when you use .lean() you lose all the Mongoose document methods like .save() or .remove(), so if you need any of those after the query, you will need to follow willis's answer.
Rather than using JSON.parse and JSON.stringify, you can call toObject to convert cat into a regular JavaScript object.
Mongoose documents have methods like save and set on them that allow you to easily modify and update the corresponding document in the database. Because of that, they try to disallow adding non-schema properties.
Alternatively, if you are trying to save these values to the database, you may wish to look into the strict option (which is true by default).
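A small sketch of both routes, reusing the Cat model from the question (the LooseCat name is invented for illustration):
// Route 1: convert the Mongoose document to a plain object, then attach anything.
Cat.findOne({ age: 2 }).then(cat => {
  const plainCat = cat.toObject();
  plainCat.breed = "puppy";
  console.log(plainCat);
});

// Route 2: if the extra properties should actually be persisted, a schema
// created with { strict: false } will accept (and save) unknown paths.
const LooseCat = mongoose.model('LooseCat',
  new mongoose.Schema({ name: String, age: Number }, { strict: false }));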
I have a big JSON string containing 10 records, each with their own properties. I need to ingest them into my MongoDB with JavaScript. I'm basically useless with JavaScript, and Google has largely failed me. The JSON looks like this, basically:
[{"ID":1,"Name":"bob"},{"ID":2,"Name":"Jim"}]
Obviously there's a lot more, but that's the basic structure. How would one, using Node.js for example, import that into Mongo? Mongo's documentation largely covers only their shell commands, and those don't directly translate into JavaScript.
You could do a bulk insert like so:
var MyObject = mongoose.model('MyObject', MyObjectSchema);
var objectsArray = [/* array of MyObject objects */];
MyObject.collection.insert(objectsArray, callback);
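For completeness, a rough sketch of the whole flow; this uses Model.insertMany (available since Mongoose 4.4) instead of the raw collection.insert above, and the connection string and model name are placeholders:
const mongoose = require('mongoose');

// Placeholder connection string; point it at your own database.
mongoose.connect('mongodb://localhost/test');

const Person = mongoose.model('Person',
  new mongoose.Schema({ ID: Number, Name: String }));

// The JSON string from the question, parsed into an array of plain objects.
const json = '[{"ID":1,"Name":"bob"},{"ID":2,"Name":"Jim"}]';
const records = JSON.parse(json);

Person.insertMany(records)
  .then(docs => console.log('Inserted ' + docs.length + ' documents'))
  .catch(err => console.error(err));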
Well, I normally use the Mongoose driver. To save such a document, define the schema first; at a glance your schema seems to have two fields, ID and Name, with ID being custom. It is custom because MongoDB uses its own _id; to change this, use an auto-increment plugin. Once you define your schema, Mongoose will only save or insert an object if its fields match the schema.
db.collection.insert(
  <document or array of documents>,
  {
    writeConcern: <document>,
    ordered: <boolean>
  }
)
The above is the general format for an insert: the first parameter is a document or an array of documents to insert into the collection.
Hope this helps.
I am using Simple Schema to validate my database entries in a Meteor application. I started developing a module to create forms automatically (I know autoform is quite good, but it was not exactly what I needed). To build the radio component I need to know the allowed values for a field, and since they are already specified in the schema I wanted to know if it is possible to retrieve them. Any ideas?
Consider a very simple schema:
s = new SimpleSchema({
  list: {
    type: String,
    allowedValues: ["foo", "bar"]
  }
});
If you explore the created object you'll find that:
s._schema['list'].allowedValues
returns
["foo", "bar"]
One can deduce the general pattern is:
schemaObject._schema['keyName'].allowedValues
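For the radio component mentioned in the question, those values can then be mapped to options; a small sketch (the helper and option shape are invented, and it relies on the internal _schema field shown above):
// Turn a schema key's allowedValues into options for a radio group.
function radioOptions(schemaObject, keyName) {
  const allowed = schemaObject._schema[keyName].allowedValues || [];
  return allowed.map(value => ({ value: value, label: value }));
}

console.log(radioOptions(s, 'list'));
// [{ value: "foo", label: "foo" }, { value: "bar", label: "bar" }]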
Should I store objects in an array or inside an object, with top importance given to write speed?
I'm trying to decide whether data should be stored as an array of objects, or using nested objects inside a mongodb document.
In this particular case, I'm keeping track of a set of continually updating files that I add and update; the file name acts as a key, and I store the number of lines processed within the file.
The document looks something like this:
{
  t_id: 1220,
  "some-other-info": {}, // there's other info here not updated frequently
  files: {
    "log1-txt": {filename: "log1.txt", numlines: 233, filesize: 19928},
    "log2-txt": {filename: "log2.txt", numlines: 2, filesize: 843}
  }
}
or this
{
  t_id: 1220,
  "some-other-info": {},
  files: [
    {filename: "log1.txt", numlines: 233, filesize: 19928},
    {filename: "log2.txt", numlines: 2, filesize: 843}
  ]
}
I am making the assumption that when it comes to updates, it is easier to deal with objects, because the location of the entry can be determined by its name; unlike an array, where I have to look through each element's value until I find a match.
Because the object key will have periods, I will need to convert (or drop) the periods to create a valid key (fi.le.log to filelog or fi-le-log).
I'm not worried about the files' possible duplicate names emerging (such as fi.le.log and fi-le.log) so I would prefer to use Objects, because the number of files is relatively small, but the updates are frequent.
Or would it be better to handle this data in a separate collection for best write performance...
{
  "_id": ObjectId('56d9f1202d777d9806000003'),
  "t_id": "1220",
  "filename": "log1.txt",
  "filesize": 1843,
  "numlines": 554
},
{
  "_id": ObjectId('56d9f1392d777d9806000004'),
  "t_id": "1220",
  "filename": "log2.txt",
  "filesize": 5231,
  "numlines": 3027
}
From what I understand you are talking about write speed, without any read consideration. So we have to think about how you will insert/update your document.
We have to compare (assuming you know the _id you are replacing, replace {key} by the key name, in your example log1-txt or log2-txt):
db.Col.update({ _id: '' }, { $set: { 'files.{key}': object }})
vs
db.Col.update({ _id: '', 'files.filename': '{key}'}, { $set: { 'files.$': object }})
The second one means that MongoDB has to scan the array, find the matching index, and update it. The first one means MongoDB just updates the specified field.
The worst:
The second command will not work if the matching filename is not present in the array! So you have to execute it, check whether nMatched is 0, and create the entry if it is. That's really bad for write speed (see MongoDB: upsert sub-document).
If you will never (or almost never) use read queries or the aggregation framework on this collection, go for the first one; it will be faster. If you want to aggregate, unwind, or do some analytics on the parsed files to get statistics about file sizes and line counts, consider the second one; it will save you some headaches.
Pure write speed will be better with the first solution.
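To make the first form concrete, a small sketch with the document from the question (the updated counts are made up, and "log1-txt" is the sanitized key for "log1.txt" with the period dropped, as discussed earlier):
// Targets the nested object directly by its key; no array scan needed.
db.Col.update(
  { t_id: 1220 },
  { $set: { "files.log1-txt": { filename: "log1.txt", numlines: 234, filesize: 20112 } } }
);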
I'm starting to make use of virtual getter methods in Mongoose in a real-world application and am wondering if there is a performance impact to using them that would be good to know about up-front.
For example:
var User = new Schema({
name: {
first: String,
last: String
}
});
User.virtual('name.full').get(function () {
return this.name.first + ' ' + this.name.last;
});
Basically, I don't understand yet how the getters are attached to the objects Mongoose uses, and whether the values are populated on object initialisation or on demand.
__defineGetter__ can be used to map a property to a method in Javascript but this does not appear to be used by Mongoose for virtual getters (based on a quick search of the code).
An alternative would be to populate each virtual path on initialisation, which would mean that for 100 users in the example above, the method to join the first and last names is called 100 times.
(I'm using a simplified example, the getters can be much more complex)
Inspecting the raw objects themselves (e.g. using console.dir) is a bit misleading because internal methods are used by Mongoose to handle translating objects to 'plain' objects or to JSON, which by default don't include the getters.
If anyone can shed light how this works, and whether lots of getters may become an issue at scale, I'd appreciate it.
They're probably done using the standard way:
Object.defineProperty(someInstance, propertyName, {get: yourGetter});
... meaning "not on initialization". Reading the virtual properties on initialization would defeat the point of virtual properties, I'd think.