How to save my Node.js blockchain - javascript

In the last few weeks, I have been trying to code my own blockchain, just to understand the whole concept better.
You can find my code here: https://github.com/Snixells/js-blockchain .
I have already implemented the blockchain and its transactions using Node.js arrays and JSON.
The problem I'm working on right now is that the data doesn't get persisted. I want to run the whole blockchain on an Express server (probably) and expose it through a RESTful API. Because of that, I need a way to store the blockchain somewhere. I already have some ideas, but none of them seems like a good one.
I could save the whole chain as a single JSON file, open it whenever needed, and save it again afterwards (a minimal sketch of this appears below). But that won't scale at all later.
I thought about saving each block as its own JSON file, but I don't think that would work much better.
I could use some kind of database, like RethinkDB or MongoDB, but that conflicts with the whole idea of the blockchain being the database itself.
I would love to hear some answers, for example which frameworks I could use, or any other ideas on how to store the chain at all.
Thanks for your help :)
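For reference, the whole-chain-as-one-JSON-file idea from above could look like this minimal sketch (the file path is hypothetical, and the chain is assumed to be a plain array of block objects):

```javascript
// Minimal sketch: persisting the whole chain as one JSON file.
const fs = require('fs');

const CHAIN_FILE = './chain.json'; // hypothetical path

function saveChain(chain) {
  // Serialize the array of blocks; pretty-printing is optional.
  fs.writeFileSync(CHAIN_FILE, JSON.stringify(chain, null, 2));
}

function loadChain() {
  // Fall back to an empty chain if the file does not exist yet.
  if (!fs.existsSync(CHAIN_FILE)) return [];
  return JSON.parse(fs.readFileSync(CHAIN_FILE, 'utf8'));
}
```

This works for experiments, but as noted above it rereads and rewrites the entire chain on every access, which is why it won't scale.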

Update:
I tried out RethinkDB and it seems to be a great choice because you can simply store JSON objects in the database. It's perfect for what I need!
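For illustration, storing blocks in RethinkDB from Node could look like this minimal sketch (assuming the official `rethinkdb` driver, hypothetical database/table names, and that each block carries an `index` field):

```javascript
// Minimal sketch: persisting blocks as plain JSON documents in RethinkDB.
const r = require('rethinkdb');

async function saveBlock(block) {
  const conn = await r.connect({ host: 'localhost', port: 28015, db: 'blockchain' });
  // Blocks are plain JSON objects, so they can be inserted directly.
  await r.table('blocks').insert(block).run(conn);
  await conn.close();
}

async function loadChain() {
  const conn = await r.connect({ host: 'localhost', port: 28015, db: 'blockchain' });
  const cursor = await r.table('blocks').run(conn);
  const blocks = await cursor.toArray();
  await conn.close();
  // Sort by block index so the chain comes back in order.
  return blocks.sort((a, b) => a.index - b.index);
}
```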

Related

How do you use npm fs?

I'm trying to make a Discord bot, and I want to store a boolean value that will stay the same even when my bot restarts. I'm pretty sure I need fs to do this, but I can't figure out how to use it, and I'm not having any luck finding documentation for it on GitHub or npm...
So how would I go about storing a variable in a JavaScript file? (Or JSON, if I need to.)
Use, for example, LowDB. It's super lightweight and allows easy reads and writes to a simple JSON file. All you're storing is a single boolean? Setting up a full DB engine seems like overkill.
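For illustration, persisting a single boolean with LowDB could look like this minimal sketch (assuming lowdb 1.x with its FileSync adapter; the file and key names are hypothetical):

```javascript
const low = require('lowdb');
const FileSync = require('lowdb/adapters/FileSync');

// The backing store is a plain JSON file next to the bot.
const db = low(new FileSync('bot-state.json'));

// Seed a default so the first read never fails.
db.defaults({ myFlag: false }).write();

// Persist a new value; it survives bot restarts.
db.set('myFlag', true).write();

// Read it back after a restart.
console.log(db.get('myFlag').value()); // true
```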
Hesitant to provide a solution here... You should NOT store values this way; you need to use a DB. Jonas' solution will work, but consider the DB route going forward.

Waiting for one publish to finish before starting another

I have yet to find a relatively good solution for this. Maybe the community can help?
I'm pulling data into my Meteor app from some RESTful endpoints. One builds on the other: for example, I hit one endpoint and get a collection of authors, then I need to hit a second endpoint to pull the books each of those authors has written.
Right now I have two separate publish functions on the server side to get the two sets of data; however, the second one relies on the data from the first. (My initial approach was simply to do it all in one publish, but that didn't feel like the best architecture.)
Is there any way to subscribe to one publish from within another publish on the server side? Or some other kind of check I can do?
So far the internet and Stack Overflow have yielded few results. I am aware of the publishComposite packages that are available, but they seem relatively heavy-handed and don't really seem applicable to what I'm trying to do. Any advice would be greatly appreciated.
I suggest a divide-and-conquer strategy. You have basically two questions to answer:
For the collections, am I going to do a client-side or server-side join?
What drives calling the remote service to get the new data?
I think you can build these pieces separately and tie them together with the DB and Meteor's reactivity.
E.g., you can start by writing the code that hits the remote REST APIs (a sketch follows below). I think the strategy there is to make the authors call, get the data, then make the books calls. I would do that in one function, tied together with promises. When the book data returns, write it and the authors data to their respective collections (if you don't already have that data), ensuring the foreign keys are intact. Now you can tie that function to a button press, and that part is done for now.
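A minimal sketch of that fetch-then-write step, assuming node-fetch, hypothetical endpoint URLs, and Authors/Books as existing Mongo.Collection instances:

```javascript
import fetch from 'node-fetch';

// Hypothetical sync function: authors first, then each author's books.
async function syncAuthorsAndBooks() {
  const authors = await (await fetch('https://api.example.com/authors')).json();

  for (const author of authors) {
    // Upsert so re-running the sync doesn't duplicate documents.
    Authors.upsert({ externalId: author.id }, { $set: author });

    const books = await (
      await fetch(`https://api.example.com/authors/${author.id}/books`)
    ).json();

    for (const book of books) {
      // Keep the foreign key intact so the join works later.
      Books.upsert({ externalId: book.id }, { $set: { ...book, authorId: author.id } });
    }
  }
}
```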
Next you can move on to the collections and publishing that data. You'll have to decide, as I mentioned, where to do that join. But do the publish(es) in such a way that, per standard Meteor practice, when the collections update in the DB, your client gets the updated data (see the sketch below).
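For example, a single publish can return both cursors and leave the join to the client; returning an array of cursors is standard Meteor API, though the publication name and any join filtering here are assumptions:

```javascript
// Publish both collections; Meteor keeps the client reactively in sync.
Meteor.publish('authorsWithBooks', function () {
  return [
    Authors.find(),
    Books.find(), // or filter by authorId here for a server-side join
  ];
});
```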
At this point, you can test that everything is storing correctly and updating reactively when you push the button.
The last piece is to decide what drives the API call, to replace the button push. As I mentioned in the comments, perhaps a cron job, but maybe there's something else going on in your app that makes it more natural. The danger of putting it in the publish, as I think you already know, is that you could get 50 simultaneous subscribes, and you don't want to hit that REST API 50 times.

Strategy for testing POST to API without changing database

I'm using jasmine-node to test my API, and it has worked great for my GET routes. Now, however, I need to test some POSTs and I'm not sure how to go about this without changing my database.
One thought I had was to reset whatever value I change at the end of each spec.
Is this reasonable or is there a better way to go about testing POST requests to my API?
Wrap anything that modifies your database in a transaction. You can make your database changes and then roll back after each test.
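A minimal sketch of that idea, assuming a SQL database accessed through knex (the library and connection details are assumptions; the pattern ports to any client that exposes transactions):

```javascript
const knex = require('knex')({ client: 'pg', connection: process.env.DATABASE_URL });

// Run a test body inside a transaction, then always roll back,
// so nothing the POST handler wrote survives the spec.
async function withRollback(testBody) {
  const trx = await knex.transaction();
  try {
    await testBody(trx); // hand the transaction to the code under test
  } finally {
    await trx.rollback(); // undo everything, pass or fail
  }
}
```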
Usually you are supposed to have a test database, so modifying that one is not a big issue. Also, a general approach is not to rely on predefined values in the database (i.e., the GET always requesting the SAME object) but to try different objects each time; using predefined objects may hide problems when the data is slightly different.
To implement the second strategy, you can run a test that POSTs pseudo-random data to create a new object, then use the returned ID to feed the following GET, UPDATE, and finally DELETE tests.
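A minimal sketch of that POST-then-reuse-the-ID chain, using supertest against an Express app (the routes, payload shape, and status codes are all hypothetical):

```javascript
const request = require('supertest');
const app = require('../app'); // hypothetical Express app export

describe('items API', function () {
  it('creates, reads, updates, and deletes an item', async function () {
    // Pseudo-random payload so the test never depends on predefined rows.
    const payload = { name: 'item-' + Math.random().toString(36).slice(2) };

    const created = await request(app).post('/items').send(payload).expect(201);
    const id = created.body.id;

    await request(app).get('/items/' + id).expect(200);
    await request(app).put('/items/' + id).send({ name: 'renamed' }).expect(200);
    await request(app).delete('/items/' + id).expect(204);
  });
});
```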
Just make a duplicate processing page/function and send the data to that for debugging. Comment out anything that makes changes to the database.
Alternatively, pass a variable in your call such as "debug" and have an if/else section in your original function for debugging, ignoring the rest of the function.
Yet another alternative is to duplicate your database table and name it something like debug_table. It will have the same structure as the original. Send the test data there instead, and it won't change your original database tables.
I'm pretty sure that you've come up with some solution to your problem already.
BUT, if you haven't, the Angular $httpBackend will solve your problem. It is a
Fake HTTP backend implementation suitable for unit testing applications that use the $http service.
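A minimal sketch of $httpBackend in a Jasmine spec (the module, service, route, and payload names are all hypothetical; the $httpBackend calls themselves are standard angular-mocks API):

```javascript
describe('itemService', function () {
  let $httpBackend, itemService;

  beforeEach(module('myApp')); // hypothetical module name
  beforeEach(inject(function (_$httpBackend_, _itemService_) {
    $httpBackend = _$httpBackend_;
    itemService = _itemService_;
  }));

  it('POSTs without touching a real database', function () {
    $httpBackend.expectPOST('/items', { name: 'x' }).respond(201, { id: 1 });
    itemService.save({ name: 'x' });
    $httpBackend.flush(); // deliver the fake response synchronously
    $httpBackend.verifyNoOutstandingExpectation();
  });
});
```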

Can I make Rails' CookieStore use JSON under the hood?

I feel like this should be obvious from reading the documentation, but maybe somebody can save me some time. We are using Rails' CookieStore, and we want to share the cookie with another server that is part of our website and is built on WCF. We're already b64-decoding the cookie and we are able to validate the signature (by sharing the secret token); all of that is great. But of course the session object is marshalled as a Ruby object, and it's not clear what the best way to proceed is. We could probably have the WCF application call out to Ruby to unmarshal the object and write it out as JSON, but that seems like it would add an unnecessary layer of complexity to the WCF server.
What I'd really like to do is maybe subclass CookieStore, so that instead of just b64 encoding the session object, it writes the object to JSON and then b64's it. (And does the reverse on the way back in, of course) That way, the session token is completely portable, I don't have to worry about Ruby version mismatches, etc. But I'm having trouble figuring out where to do that. I thought it would be obvious if I pulled up the source for cookie_store.rb, but it's not (at least not to me). Anybody want to point me in the right direction?
(Anticipating a related objection: Why the hell do we have two separate servers that need to be so intimately coordinated that they share the session cookie? The short answer: Deadlines.)
Update: So from reading the code, I found that when the MessageVerifier class gets initialized, it looks to see if there is an option for :serializer, and if not it uses Marshal by default. There is already a class called JSON that fulfills the same contract, so if I could just pass that in, I'd be golden.
Unfortunately, the initialize function for CookieStore very specifically only grabs the :digest option to pass along as the options to MessageVerifier. I don't see an easy way around this... If I could get it to just pass along that :serializer option to the verifier_for call, then achieving what I want would literally be as simple as adding :serializer => JSON to my session_store.rb.
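For what it's worth, once the payload is JSON, the other server no longer needs Ruby to read the session at all. A hedged sketch in Node of the classic MessageVerifier scheme ("base64data--HMAC-SHA1-hexdigest"); the exact format varies by Rails version, so treat this as an assumption, and the same logic ports to WCF:

```javascript
const crypto = require('crypto');

function decodeRailsSessionCookie(rawCookie, secret) {
  // Cookie values arrive URL-encoded; split off the HMAC digest.
  const [data, digest] = decodeURIComponent(rawCookie).split('--');
  const expected = crypto.createHmac('sha1', secret).update(data).digest('hex');
  if (expected !== digest) throw new Error('invalid signature');
  // With :serializer => JSON, the payload is just base64-encoded JSON.
  return JSON.parse(Buffer.from(data, 'base64').toString('utf8'));
}
```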
Update 2: A co-worker found this, which appears to be exactly what I want. I haven't gotten it to work yet, though... getting a (bah-dump) stack overflow. Will update once again if I find anything worthy of note, but I think that link solves my problem.

MongoDB query issue - error after second query

Right, I have this really weird problem that is probably related more to how MongoDB works than to my code.
I'm making a website in JavaScript (mostly) that leans heavily on jQuery, using a MongoDB database and a Node.js server. I have users who have walls, and lots of information updates asynchronously. Because of this, I have to dive into the database quite often, mainly to update things: for example, when a user leaves a message on somebody's wall or adds a friend.
The problem is that once I make a query, trying to make another one just like it afterwards fails. They are literally identical (I tested this with copy & paste; they were slightly different at first). It doesn't matter whether I use find or update; the result of the second query is always null. I am closing the database after each dive, and returning as well, just for good measure.
I can't find any reports of multiple queries to the same place failing after the first one succeeds. It's as if there is a lock somewhere, because I'm 100% sure my query is correct and shouldn't return null. Not even hard-coding the values works. The first query works perfectly, with the same data and the same collection.
So, my question is: does MongoDB have some query cache that could be blocking the query, or is there something else I've missed about how the queries work? Do you have any good tips for when you have to make multiple database queries?
I get TypeError: Cannot read property '_id' of undefined as a result of the query returning null.
I hope this is enough information for anyone to have a clue what's wrong. I'm not providing any code as I think this is more a matter of me not really getting how MongoDB works rather than a coding issue.
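For what it's worth, closing the connection after each query, as described above, is a common cause of exactly this symptom with the Node MongoDB driver: the second query runs against a client that is already closed or closing. A hedged sketch of the usual fix, connecting once and reusing the client (the URL, database, and collection names are hypothetical):

```javascript
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');
let db;

// Connect once at startup and reuse the handle everywhere.
async function init() {
  await client.connect();
  db = client.db('mysite');
}

async function addWallMessage(userId, message) {
  await db.collection('walls').updateOne(
    { userId },
    { $push: { messages: message } }
  );
  // Note: no client.close() here; the shared connection stays open.
}
```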
Problem is now solved. I have no idea what caused it, but I rewrote some large chunks and that did the trick. Thanks for your time!
