Persistent data, pure functions and RAM - JavaScript

I'm currently reading Eloquent JavaScript, and I don't really understand the point of using persistent data structures as described in this paragraph.
If I get it right, we are using pure functions (methods?) in this example, since the this.move method returns a new VillageState object without affecting the state of the original VillageState.
This way, all the objects created until the problem gets solved are stored somewhere in RAM, right? So shouldn't all that extra data storage also slow the program down?
I don't really understand how this can be simpler to reason about than using mutable data in this case, so I'd be glad if some of you could clarify this for me. Thanks.
And please correct me if I'm wrong somewhere!
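For context, here is a minimal sketch of the pattern in question, roughly the shape of the book's VillageState example (simplified; the real version also checks that a road exists before moving):

    class VillageState {
      constructor(place, parcels) {
        this.place = place;
        this.parcels = parcels;
      }

      move(destination) {
        // Compute the parcels' new positions without touching this.parcels:
        // map() and filter() both return new arrays.
        let parcels = this.parcels
          .map(p => p.place === this.place
            ? {place: destination, address: p.address}
            : p)
          .filter(p => p.place !== p.address); // drop delivered parcels
        // Return a fresh state; the original VillageState is left unchanged.
        return new VillageState(destination, parcels);
      }
    }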

Related

Using OrderedMap.merge to translate Objects to OrderedMaps instead of Maps?

I'm trying to use OrderedMap.merge to store application state using reflux (specifically reflux-immutable), but I noticed that it does not translate Objects into OrderedMaps, only into regular Maps, which do not guarantee order when iterated over. There are several parts of my application where I need the order to be retained, so I was wondering if there was a way to accomplish this using OrderedMap.merge or something like it. I came up with this, but it's super ugly and relies on ripping source code out of Immutable.js, which I'm not comfortable with.
Does anyone have any other ideas? Thanks in advance.
I decided to solve this problem a different way, namely by explicitly converting the objects that needed to stay ordered into OrderedMaps before invoking OrderedMap.merge on the entire store's state. This works because merge essentially ignores values that are already Immutable, so there's no risk of duplicate work, and merge's functionality is retained without all the silly hacked-together stuff I was doing.
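In code, the workaround looks something like this (a minimal sketch with made-up keys, assuming Immutable.js):

    import { Map, OrderedMap } from 'immutable';

    // merge() deep-converts plain objects into regular Maps, which lose
    // iteration order. Converting the order-sensitive object to an
    // OrderedMap first avoids that, because merge() leaves values that
    // are already Immutable collections untouched.
    const prefs = OrderedMap({ theme: 'dark', lang: 'en', tz: 'UTC' });

    const state = Map({ user: 'alice' }).merge({ prefs });

    state.get('prefs').keySeq().toArray(); // ['theme', 'lang', 'tz'] (order kept)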

Web workers with OO Javascript, ThreeJS and ScrollMagic

I'm developing a personal website that combines Three.js and ScrollMagic with OO JavaScript. As the user scrolls, the 3D objects transform. This all works well, but there is a slight performance issue. To improve it, I want to move some of the loops that calculate positions into a web worker (whenever I call one of those loop functions, the scrolling lags).
The problem is that I'm trying to pass an array of 512 class instances (THREE.PointCloud) to the web worker, and I can't seem to get any meaningful properties from these instances inside the worker.
Firstly, I just tried to pass the array to the worker and got this error: 'Uncaught DataCloneError: Failed to execute 'postMessage' on 'Worker': An object could not be cloned.'
Then I realised I couldn't do this, so I used JSON.stringify() and JSON.parse(). I could get the length of the array; however, I couldn't get the properties of each instance.
I think I need to use an ArrayBuffer? But I have no idea how to convert my array of instances to an ArrayBuffer. Anyone? Or is there an easier way to improve the performance?
Help would be really appreciated.
Thanks.
I think you are probably right that you need an ArrayBuffer (or something similar).
Plain postMessage() won't really get you what you want, I think, because the JSON (de)serialisation process can be fairly time-consuming.
What you are probably looking for is "transferable objects". Instead of cloning the object(s), this transfers ownership, so no copying is required.
Quite a few places online talk about transferable objects, so Google will be your friend here. But here is one: https://developer.mozilla.org/en-US/docs/Web/API/Worker/postMessage
Hope that helps.
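As a rough sketch (the names are made up; pointClouds stands in for your array of THREE.PointCloud instances), the idea is to copy just the numbers the worker needs into a typed array and transfer its buffer:

    // Main thread: pack each cloud's position into a Float32Array.
    const positions = new Float32Array(pointClouds.length * 3);
    pointClouds.forEach((cloud, i) => {
      positions[i * 3]     = cloud.position.x;
      positions[i * 3 + 1] = cloud.position.y;
      positions[i * 3 + 2] = cloud.position.z;
    });

    const worker = new Worker('positions-worker.js');
    // The second argument lists buffers to transfer instead of clone.
    // After this call, `positions` is no longer usable on this thread.
    worker.postMessage(positions.buffer, [positions.buffer]);

    // positions-worker.js
    self.onmessage = function (e) {
      const positions = new Float32Array(e.data);
      // ...do the heavy math here, then transfer results back the same way.
    };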

Can I make Rails' CookieStore use JSON under the hood?

I feel like this should be obvious from reading the documentation, but maybe somebody can save me some time. We are using Rails' CookieStore, and we want to share the cookie with another server that is part of our website and uses WCF. We're already b64-decoding the cookie and we are able to validate the signature (by sharing the secret token), all of which is great... but of course the session object is marshalled as a Ruby object, and it's not clear what the best way to proceed is. We could probably have the WCF application call into Ruby and have it unmarshal the object and write it out as JSON, but that seems like it would add an unnecessary layer of complexity to the WCF server.
What I'd really like to do is maybe subclass CookieStore, so that instead of just b64 encoding the session object, it writes the object to JSON and then b64's it. (And does the reverse on the way back in, of course) That way, the session token is completely portable, I don't have to worry about Ruby version mismatches, etc. But I'm having trouble figuring out where to do that. I thought it would be obvious if I pulled up the source for cookie_store.rb, but it's not (at least not to me). Anybody want to point me in the right direction?
(Anticipating a related objection: Why the hell do we have two separate servers that need to be so intimately coordinated that they share the session cookie? The short answer: Deadlines.)
Update: From reading the code, I found that when the MessageVerifier class gets initialized, it looks for a :serializer option, and if there isn't one it uses Marshal by default. There is already a class called JSON that fulfills the same contract, so if I could just pass that in, I'd be golden.
Unfortunately, CookieStore's initialize function very specifically grabs only the :digest option to pass along to MessageVerifier. I don't see an easy way around this... If I could get it to pass the :serializer option along to the verifier_for call, achieving what I want would literally be as simple as adding :serializer => JSON to my session_store.rb.
Update 2: A co-worker found this, which appears to be exactly what I want. I haven't gotten it to work yet, though... getting a (bah-dump) stack overflow. Will update once again if I find anything worthy of note, but I think that link solves my problem.

What's a clean way to handle ajax success callbacks through a chain of object methods?

So, I'm trying to improve my JavaScript skills and get into using objects more (and correctly), so please bear with me here.
Take this example: http://jsfiddle.net/rootyb/mhYbw/
Here, I have a separate method for each of the following:
Loading the ajax data
Using the loaded ajax data
Obviously, I have to wait until the load is completed before I use the data, so I'm accessing it in a callback.
As I have it now, it works. I don't like adding the initData callback directly into the loadData method, though. What if I want to load data and do something to it before I use it? What if I have more methods to run when processing the data? Chaining this way would get unreadable pretty quickly, IMO.
What's a better, more modular way of doing this?
I'd prefer something that doesn't rely on jQuery (if there even is a magical jQuery way), for the sake of learning.
(Also, I'm sure I'm doing some other things horribly in this example. Please feel free to point out other mistakes I'm making, too. I'm going through Douglas Crockford's JavaScript: The Good Parts, and even for a rank amateur it's made a lot of sense, but I still haven't wrapped my head around it all.)
Thanks!
I don't see a lot that should be different. I made an updated version of the fiddle here.
A few points I have changed though:
Use the var keyword for local variables, e.g., self.
Don't store temporary data, e.g., ajaxData, as part of an object's state if you are only going to use it once.
Encapsulate as much as possible: instead of calling loadData with an ajaxURL argument, let the object decide which URL it should load its data from.
One last remark: don't try to meet requirements you don't have yet, even if they might come up in the future (I'm referring to your "What if...?" questions). If you try, you will most likely find that you either don't need that functionality, or that the requirements are slightly different from what you expected. If a new requirement does come up, you can always refactor your model to meet it. So, design for change, but not for potential change.
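That said, if you do want to decouple the steps without jQuery, one common pattern is to let loadData take its continuation as a parameter instead of hard-coding initData. A sketch (app, its URL, and preprocess are illustrative names, not from the fiddle):

    var app = {
      url: '/data.json',

      // loadData no longer knows what happens next; the caller decides.
      loadData: function (onDone) {
        var self = this;
        var xhr = new XMLHttpRequest();
        xhr.open('GET', self.url);
        xhr.onload = function () {
          onDone.call(self, JSON.parse(xhr.responseText));
        };
        xhr.send();
      },

      initData: function (data) {
        console.log(data); // use the data
      }
    };

    app.loadData(app.initData); // the plain case

    // Or massage the data first, without touching loadData:
    app.loadData(function (data) {
      this.initData(preprocess(data)); // preprocess is a hypothetical helper
    });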

Mobile-Web-App: keeping huge data in json-string vs. object vs. localstorage

I'm currently thinking about an architecture for one-page mobile web apps that lets them work offline with a lot of data.
My concern is that loading and keeping all the data in objects wastes too much memory. I'm thinking of older Android phones, iPhones, etc.
Would it be a good idea to start with variables initialized with JSON strings of the data-model objects, which I load/parse into an object when I need them?
I could free the loaded object as soon as the use of the model changes (of course, only if it's unlikely that the object is needed in the near future).
Or are those string variables held in memory anyway, so I don't save any memory?
What is the difference in memory consumption between a JavaScript object and its (stringified JSON) string?
UPDATE:
I found the answer to that question in this article about JavaScript object size.
Comparing a JSON string and its corresponding loaded object shows that the string is smaller. That's what I expected.
Would it be better to retrieve the strings via ajax and put them into localStorage? After the anonymous ajax callback finishes, the GC could do its job...
Is this even the right direction? What is the best approach to keeping data like this?
I know all this is very vague, so any help is highly appreciated!
localStorage is stored on the actual disk, so reading from it will not be as quick as reading from an object. localStorage is good for offline use. If a large piece of data doesn't need to be available offline and isn't read very often, just keeping it in a hidden element may work better.
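Something along these lines (a sketch; the URL and key are made up) would keep only the raw string around and parse it on demand:

    // Fetch the JSON once and keep only the raw string in localStorage.
    function cacheModel(url, key, done) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.onload = function () {
        localStorage.setItem(key, xhr.responseText);
        if (done) done();
      };
      xhr.send();
      // Nothing outside this function holds a reference to the response,
      // so once the callback returns, the GC is free to reclaim it.
    }

    // Parse only when the model is actually needed.
    function loadModel(key) {
      var json = localStorage.getItem(key);
      return json ? JSON.parse(json) : null;
    }

    cacheModel('/api/catalog.json', 'catalog', function () {
      var catalog = loadModel('catalog');
      // ...use catalog, then drop the reference when the view changes:
      catalog = null;
    });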
