Right, I have a really weird problem that is probably more related to how MongoDB works than to a coding issue.
I'm making a website in JavaScript (mostly) that is very heavy on jQuery, using a MongoDB database and a Node.js server. I have users who have walls, and lots of information updates asynchronously. Because of this, I have to dive into the database quite often, mainly to update things - for example when a user leaves a message on somebody's wall or adds a friend.
The problem is that once I make a query, another one just like it afterwards fails. They are literally identical (I tested this with copy & paste; they were slightly different at first). It doesn't matter whether I use find or update: the result of the second query is always null. I am closing the database after each dive, and returning as well just for good measure.
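In rough terms, the pattern looks something like this (a simplified sketch with hypothetical collection and field names, not my actual code; I'm on the callback-style Node.js MongoDB driver):

var MongoClient = require('mongodb').MongoClient;

function findUser(username, callback) {
  MongoClient.connect('mongodb://localhost:27017/mysite', function (err, db) {
    if (err) return callback(err);
    db.collection('users').findOne({ username: username }, function (err, user) {
      db.close(); // closing the database after each dive, as described above
      callback(err, user);
    });
  });
}

// Two identical dives, back to back:
findUser('alice', function (err, first) {
  // 'first' is the expected document
  findUser('alice', function (err, second) {
    // 'second' comes back null for me, even though the query is identical
  });
});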
I can't find any reports anywhere of multiple queries to the same place failing after one succeeds. It's as if there is a lock somewhere, because I'm 100% sure my query is correct and shouldn't return null. Not even hard-coding the values works. The first query works perfectly - same data, same collection.
So, my question is: does MongoDB have some kind of query cache that could be blocking the query, or is there something else I've missed about how queries work? Do you have any good tips and hints for when you have to make multiple database queries?
I get TypeError: Cannot read property '_id' of undefined as a result of the query returning null.
I hope this is enough information for someone to have a clue about what's wrong. I'm not posting my actual code, as I think this is more a matter of me not really understanding how MongoDB works than a coding issue - the sketch above only shows the general shape of what I'm doing.
Problem is now solved. I have no idea what caused it, but I rewrote some large chunks and that did the trick. Thanks for your time!
I was checking Sequelize's examples and documentation the other day, and I came across:
Albums.update(myAlbumDataObject, {
  where: { id: req.params.albumId },
  returning: true /* or: returning: ['*'], depending on version */
});
I was very excited when I saw this - a very good way to avoid quite a few lines of code. What I was doing before was to fetch the object with Model.findOne(), re-set every field to its new value and invoke the instance method .save(), instead of using a static method like that.
Needless to say, I was quite happy and satisfied to see that such a static method existed. Disappointing, however, was to learn that the method only returns the instances it updates if you're running Sequelize with PostgreSQL.
Very sad to learn that, as I'm using MySQL.
The method does issue a SQL statement containing the proper UPDATE in it, but that's it. I don't know whether it hit anything, and I don't have a copy of the updated data to return.
It turns out I need a Model.findOne() first, to know whether an object with that id (and/or other filtering parameters) exists, then Model.update() to issue the updates, and finally Model.findByPk() to return the updated model to the layer above (all of it inside a transaction, naturally) - see the sketch below. That's too much code!
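For reference, a minimal sketch of that workaround (hypothetical model and variable names, using Sequelize's managed-transaction API):

const updatedAlbum = await sequelize.transaction(async (t) => {
  const album = await Albums.findOne({
    where: { id: req.params.albumId },
    transaction: t,
  });
  if (!album) throw new Error('Album not found'); // rejecting rolls the transaction back

  await Albums.update(myAlbumDataObject, {
    where: { id: req.params.albumId },
    transaction: t,
  });

  // Fetch a fresh copy to hand back to the layer above.
  return Albums.findByPk(req.params.albumId, { transaction: t });
});

It works, but it's still three round trips for what PostgreSQL users get in a single call.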
Also, during the update, if a UniqueConstraintError exception is thrown (which can be quite common), its errors[] array carries no valid model 'instance' - it's just 'undefined' - so it complicates matters if you want details about what happened and/or want to throw custom error messages defined inside the models.
My questions are: are there workarounds out there better than the ones I'm already implementing? Any Sequelize plugins that might give me this with MySQL? Any Sequelize beta code that could? Is there any effort on the part of the Sequelize dev team to give us this? I'd appreciate any help.
I'm running Sequelize 6.7.0 with Node.js v14.17.5.
P.S.: I've now also realized that the static Model.update() under MySQL will even update something that doesn't exist without complaining about it.
In the last few weeks, I tried to code my own Blockchain, just to understand the whole concept better.
You can find my code here: https://github.com/Snixells/js-blockchain.
I have already implemented the blockchain + transactions, built with Node.js arrays and JSON.
The problem I am working on right now is that the data does not get saved. I want to run the whole blockchain on an Express server (maybe) and access it through a RESTful API. Because of that, I need a way to store the blockchain somewhere. I already have some ideas, but none of them seems like a good one:
I could save the whole chain as a single JSON file, open it whenever it's needed and write it back afterwards, but that won't scale at all later.
I thought about saving each block as its own JSON file, but I don't think that would work that well either.
I could use some kind of database, like RethinkDB or MongoDB, but that conflicts with the whole idea of the blockchain being the database itself.
I would love to hear some answers - for example, which frameworks and so on I could use, or any ideas on how to store the chain at all.
Thanks for your help :)
Update:
I tried out RethinkDB and it seems to be a great choice, because you can simply store JSON objects in the database. It's perfect for what I need!
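In case anyone is curious, here is a rough sketch of what this could look like with the official rethinkdb Node.js driver (the database and table names, and the block's index field, are just placeholders):

const r = require('rethinkdb');

async function saveBlock(block) {
  const conn = await r.connect({ host: 'localhost', port: 28015, db: 'blockchain' });
  await r.table('blocks').insert(block).run(conn);
  await conn.close();
}

async function loadChain() {
  const conn = await r.connect({ host: 'localhost', port: 28015, db: 'blockchain' });
  const cursor = await r.table('blocks').orderBy('index').run(conn);
  const chain = await cursor.toArray();
  await conn.close();
  return chain;
}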
We have built a small script and a database based on PouchDB in order to display all the products of one of our clients in a so-called "product tree".
You can find the product tree here: http://www.bodyrevitaliser.nl/nl/service/product-tree/
As you can see, the tree loads properly only in Chrome. If you check the console in Safari and Firefox, the DB seems to be loaded as well, but something seems to be preventing the tree itself from loading.
What are your thoughts? Any ideas about what might be causing this, and possible solutions?
The problem with your code is that your usage of promises is not correct. I strongly recommend you read this blog post: We have a problem with promises. I know it's long, but it's worthwhile to read the whole thing.
In particular, read the section called "WTF, how do I use forEach() with promises?", because that is exactly the mistake you're making. You are doing a bunch of insertions inside a $.each, and then you are immediately doing an allDocs() inside the same function. So you have zero guarantees that any documents have actually been inserted into PouchDB by the time you try to read from PouchDB. Perhaps they have, perhaps they haven't, but it all depends on subtle timing differences between browsers, so you can't count on it.
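One way to fix it is to collect the promise from every insertion and only read once they have all resolved - roughly like this (a sketch only; the product objects and the rendering function are placeholders for whatever your script does):

var promises = [];
$.each(products, function (i, product) {
  promises.push(db.put(product)); // db.put() returns a promise when no callback is given
});

Promise.all(promises)
  .then(function () {
    // Only now is every document guaranteed to be in PouchDB.
    return db.allDocs({ include_docs: true });
  })
  .then(function (result) {
    renderTree(result.rows); // placeholder for your tree-building code
  })
  .catch(function (err) {
    console.error(err);
  });

Alternatively, db.bulkDocs() takes an array of documents and does all the insertions in one call, which avoids the loop entirely.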
I'm using jasmine-node to test my API, and it has worked great for my GET routes. Now, however, I need to test some POSTs and I'm not sure how to go about this without changing my database.
One thought I had was to reset whatever value I change at the end of each spec.
Is this reasonable or is there a better way to go about testing POST requests to my API?
Wrap anything that modifies your database in a transaction. You can make your database changes and then roll back after each test.
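A minimal sketch of the idea, assuming your specs and the code under test share the same connection and you're on a SQL database (shown here with the pg driver; your setup may differ):

var pg = require('pg');
var client = new pg.Client(); // connection settings assumed to come from the environment
client.connect();

beforeEach(function (done) {
  client.query('BEGIN', function () { done(); }); // start a transaction before each spec
});

afterEach(function (done) {
  client.query('ROLLBACK', function () { done(); }); // undo everything the spec wrote
});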
Usually you are supposed to have a test database, so modifying that one is not a big issue. Also, a general approach would be not to rely on predefined values in the database (i.e., the GET always requesting the SAME object) but to try different objects each time (using predefined objects may hide problems when the data is slightly different).
To implement the second strategy, you can execute a test that POSTs pseudo-random data to create a new object, then use the returned ID to feed the following GET, UPDATE and finally DELETE tests, as in the sketch below.
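A rough sketch of that chain with jasmine-node and the request module (the endpoints, payload and status codes are placeholders for whatever your API actually uses):

var request = require('request');
var base = 'http://localhost:3000/api/items'; // placeholder base URL

describe('items API', function () {
  var created;

  it('creates an item from pseudo-random data', function (done) {
    var payload = { name: 'item-' + Date.now() };
    request.post({ url: base, json: payload }, function (err, res, body) {
      expect(res.statusCode).toBe(201);
      created = body; // keep the returned ID for the following specs
      done();
    });
  });

  it('reads the created item back', function (done) {
    request.get({ url: base + '/' + created.id, json: true }, function (err, res, body) {
      expect(body.name).toEqual(created.name);
      done();
    });
  });

  it('deletes the item again', function (done) {
    request.del({ url: base + '/' + created.id }, function (err, res) {
      expect(res.statusCode).toBe(204);
      done();
    });
  });
});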
Just make a duplicate processing page/function and send the data to that for debugging. Comment out anything that makes changes to the database.
Alternatively, pass a variable in your call such as "debug" and have an if/else section in your original function for debugging, ignoring the rest of the function.
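For example, a tiny sketch of that "debug" flag idea in an Express handler (the route and insertItem are hypothetical):

app.post('/api/items', function (req, res) {
  if (req.query.debug) {
    // Dry run: validate and echo back, but skip the database write.
    return res.json({ wouldInsert: req.body });
  }
  insertItem(req.body, function (err, item) { // insertItem is a placeholder for your real write
    if (err) return res.status(500).send(err.message);
    res.status(201).json(item);
  });
});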
Yet another alternative is to duplicate your database table and name it something like debug_table. It will have the same structure as your original; send the test data to it instead and it won't change your original database tables.
I'm pretty sure that you've come up with some solution for your problem already.
BUT, if you don't, the Angular $httpBackend will solve your problem. It is a
"fake HTTP backend implementation suitable for unit testing applications that use the $http service."
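A small sketch of what that looks like in an AngularJS unit test with ngMock (the module, service and endpoint names are placeholders):

describe('ItemService', function () {
  var $httpBackend, ItemService;

  beforeEach(module('myApp')); // placeholder module name

  beforeEach(inject(function (_$httpBackend_, _ItemService_) {
    $httpBackend = _$httpBackend_;
    ItemService = _ItemService_;
  }));

  it('POSTs a new item without touching a real backend', function () {
    $httpBackend.expectPOST('/api/items', { name: 'test' })
      .respond(201, { id: 1, name: 'test' });

    var result;
    // assumes the hypothetical ItemService.create() resolves with the response body
    ItemService.create({ name: 'test' }).then(function (item) { result = item; });

    $httpBackend.flush(); // delivers the fake response
    expect(result.id).toBe(1);
  });

  afterEach(function () {
    $httpBackend.verifyNoOutstandingExpectation();
    $httpBackend.verifyNoOutstandingRequest();
  });
});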
I feel like this should be obvious from reading the documentation, but maybe somebody can save me some time. We are using Ruby's CookieStore, and we want to share the cookie with another server that is part of our website and is using WCF. We're already b64-decoding the cookie and we are able to validate the signature (by means of sharing the secret token); all of that is great... but of course the session object is marshalled as a Ruby object, and it's not clear what the best way to proceed is. We could probably have the WCF application make a call to Ruby and have it unmarshal the object and write it out as JSON, but that seems like it would add an unnecessary layer of complexity to the WCF server.
What I'd really like to do is maybe subclass CookieStore, so that instead of just b64 encoding the session object, it writes the object to JSON and then b64's it. (And does the reverse on the way back in, of course) That way, the session token is completely portable, I don't have to worry about Ruby version mismatches, etc. But I'm having trouble figuring out where to do that. I thought it would be obvious if I pulled up the source for cookie_store.rb, but it's not (at least not to me). Anybody want to point me in the right direction?
(Anticipating a related objection: Why the hell do we have two separate servers that need to be so intimately coordinated that they share the session cookie? The short answer: Deadlines.)
Update: So from reading the code, I found that when the MessageVerifier class gets initialized, it looks to see if there is an option for :serializer, and if not it uses Marshal by default. There is already a class called JSON that fulfills the same contract, so if I could just pass that in, I'd be golden.
Unfortunately, the initialize function for CookieStore very specifically only grabs the :digest option to pass along as the options to MessageVerifier. I don't see an easy way around this... If I could get it to just pass along that :serializer option to the verifier_for call, then achieving what I want would literally be as simple as adding :serializer => JSON to my session_store.rb.
Update 2: A co-worker found this, which appears to be exactly what I want. I haven't gotten it to work yet, though... getting a (bah-dump) stack overflow. Will update once again if I find anything worthy of note, but I think that link solves my problem.