spine.js: why does it serialize POSTs blindly? [closed]

This came up during a discussion on another spine.js question:
Since this is significantly distinct from that question, I am upholding the "one question per post" policy of SO by branching this off as a new question.
@numbers1311407 mentioned that
spinejs will queue the requests and submit them serially, each consecutive request being made only after its predecessor completes.
But that seems so wrong! Why should Spine assume so much control and sequentialize all POSTs (for example) from a client? What if the POSTs are on related URIs? Even if Spine tries to achieve some sanity by sequentializing all POSTs for each client, it still has no way of preventing concurrent and conflicting POSTs happening from different clients. So why bother?

But that seems so wrong!
It makes sense when you consider the design goal of spine, the realization of what MacCaw calls an "Asynchronous UI" (the title of the blog post you linked to in your related question). This paradigm attempts to eliminate all traces of the traditional request/response pattern from the user experience. No more "loading" graphics; instantly responsive user interaction.
This paradigm expects a different approach to development. Persisting to the server becomes more of a background thread that, under normal circumstances, should never fail.
This means that under normal circumstances your app logic should not be dependent on, or care about, the order or scheduling of requests by spine.
If your app is highly dependent on server responses, then spine might be the wrong library.
Why should Spine assume so much control and sequentialize all POSTs (for example) from a client?
Because of spine's non-blocking design philosophy, your app relinquishes the flow control that you might have in another lib like Backbone, wherein you might do something like disable a "submit" button while a request is being made, or otherwise prevent users from spamming the server with non-idempotent requests.
Spine's Model#save, for example, returns immediately and as far as the client is concerned, has already happened before the server even gets the request. This means that spinejs needs to collect and handle requests in the order that they come to ensure that they are handled properly. Consider a user jamming on a "Save" button for a new record. The first click will send a POST, the second, a PUT. But the button spamming user does not know or care about this. Spine needs to ensure that the POST completes successfully before the PUT is sent, or there will be problems in the client.
Combine the order sensitivity of non-blocking UI input with the first point, that your app shouldn't concern itself overmuch with the persistence layer, and you can begin to see why spinejs serializes requests.
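As a rough illustration (this is not Spine's actual implementation, and fetch stands in for whatever transport the library uses), serializing requests can be as simple as chaining each one onto the previous one's promise:

// A rough sketch of a serial request queue, not Spine's actual code:
// each request starts only after the previous one has completed.
var queue = Promise.resolve();

function enqueue(makeRequest) {
  // Chain onto the tail; run the next request even if the previous one failed.
  queue = queue.then(makeRequest, makeRequest);
  return queue;
}

// A button-spamming user triggers a create and then an update; the POST is
// guaranteed to finish before the PUT is sent.
var record = JSON.stringify({ name: 'example' });
enqueue(function () { return fetch('/records', { method: 'POST', body: record }); });
enqueue(function () { return fetch('/records/1', { method: 'PUT', body: record }); });

The point is not this particular code, but that ordering is enforced in the background without the UI ever blocking.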
Even if Spine tries to achieve some sanity by sequentializing all POSTs for each client, it still has no way of preventing concurrent and conflicting POSTs happening from different clients. So why bother?
Because the point of the solution is to give a consistent, somewhat user-error-proof UI experience to a single client. E.g. if a client creates, then updates, then deletes a record, spine should ensure that all these requests make it to the server successfully and in the proper order. Handling conflicting posts between clients is a separate issue, and not the point of the request queue.

How can I cancel a node thread prematurely from my frontend? [closed]

I have a React web app which generates solutions for Rubik's Cubes. When the user makes a query on my site, it starts a long computation process (anywhere from 1 to 240 seconds). Every time a solution is found, the state is changed and the user can see the new solution.
However, this app often crashes on mobile for large queries; I believe the page demands too much memory and the browser kills it. Because of this, I want to add a node.js backend to handle the computation.
I would like for the following functionality:
When the user makes a query request, it sends that to the backend which can start computing. Every so often, the frontend can update to show the current tally of solutions.
If the user prematurely wants to cancel the process, they can do so, also killing the backend thread.
How can I set this up? I know I can very easily make HTTP requests to my backend and receive a response when it is done. However, I'm not sure how to accomplish the dynamic updating as well as how to cancel a running thread. I have heard of long polling but I'm not sure if this is the right tool for the job, or if there is a better method.
I would also like this app to support multiple people trying to use it at the same time, and I'm not sure if I need to consider that in the equation as well.
Any help is appreciated!
However, I'm not sure how to accomplish the dynamic updating. I have heard of long polling but I'm not sure if this is the right tool for the job, or if there is a better method.
Three main options (a minimal sketch of the SSE approach follows the list):
1) A webSocket or socket.io connection from the client to the server, over which the server can then push updates.
2) Server-sent events (SSE): another way for the server to push updates to the client.
3) The client polls the HTTP server on some time interval to get a regular progress report.
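Here is a minimal sketch of option 2 (server-sent events) on an Express backend. The /progress route, the jobs map, and the payload shape are hypothetical placeholders, not a fixed API:

const express = require('express');
const app = express();

// Hypothetical in-memory store of running jobs: jobId -> { solutionCount: n }
const jobs = new Map();

app.get('/progress/:jobId', (req, res) => {
  // Tell the browser to keep the connection open and expect an event stream.
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  // Push the current tally every second; each SSE message is "data: ...\n\n".
  const timer = setInterval(() => {
    const job = jobs.get(req.params.jobId) || { solutionCount: 0 };
    res.write('data: ' + JSON.stringify({ count: job.solutionCount }) + '\n\n');
  }, 1000);

  // Stop pushing when the client disconnects.
  req.on('close', () => clearInterval(timer));
});

app.listen(3000);

On the client, new EventSource('/progress/' + jobId) fires an onmessage callback for each update, which is where you would set the React state holding the tally.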
as well as how to cancel a running thread
If by "thread" here, you mean a WorkerThread in nodejs, then there are a couple of options:
From your main nodejs process, you can send the thread a message telling it to exit. You would have to program whatever processing you're doing in the thread to respond to incoming messages, so that it will receive that message from the parent and be able to act on it. A solution like this allows for an orderly shut-down by the thread (it can release any resources it may have opened).
You can call worker.terminate() from the parent to proactively just kill the thread.
Either of these options can be triggered by the client sending a particular HTTP request to your server that includes some sort of ID, so the server's main thread can tell which worker thread it should stop. A rough sketch of both options:
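The sketch below assumes Node's worker_threads module; ./solver.js and the message shape are made-up placeholders:

const { Worker } = require('worker_threads');

const worker = new Worker('./solver.js');

// Option 1: cooperative shutdown. The worker must listen for this message
// on parentPort and exit cleanly, releasing anything it has opened.
function requestStop() {
  worker.postMessage({ type: 'stop' });
}

// Option 2: forceful shutdown. terminate() kills the thread immediately,
// so the worker gets no chance to clean up.
function forceStop() {
  return worker.terminate(); // resolves once the thread has exited
}

// Inside solver.js, option 1 looks roughly like this:
//   const { parentPort } = require('worker_threads');
//   let stopped = false;
//   parentPort.on('message', (msg) => { if (msg.type === 'stop') stopped = true; });
//   // ...check `stopped` inside the solving loop and return early when it flips.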
I would also like this app to support multiple people trying to use it at the same time, and I'm not sure if I need to consider that in the equation as well.
This means you'll have to program your nodejs server so that each of these client/thread combinations has some sort of ID, letting you associate one with the other and keep more than one pair in operation at once. For example:
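A rough sketch of that bookkeeping with Express and worker_threads (the routes, the randomUUID ID scheme, and ./solver.js are just illustrative choices):

const express = require('express');
const { Worker } = require('worker_threads');
const crypto = require('crypto');

const app = express();
app.use(express.json());

const jobs = new Map(); // jobId -> Worker, one entry per running query

app.post('/solve', (req, res) => {
  const jobId = crypto.randomUUID();
  const worker = new Worker('./solver.js', { workerData: req.body });
  worker.on('exit', () => jobs.delete(jobId)); // clean up when the solve finishes
  jobs.set(jobId, worker);
  res.json({ jobId }); // the client uses this ID for progress and cancel requests
});

app.delete('/solve/:jobId', (req, res) => {
  const worker = jobs.get(req.params.jobId);
  if (worker) worker.terminate();
  jobs.delete(req.params.jobId);
  res.sendStatus(204);
});

app.listen(3000);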

Should I use two separate projects for frontend/backend in addition to an API, or merge them into one? [closed]

I have a few questions which I'd appreciate some answers to.
So I've created a backend node server with express & mongo which runs specific tasks on the net and saves the results in the database in a loop. I've also added an admin page with express & bootstrap, and that works fine. What I needed then was a frontend page; for this I chose VueJS. I started that project separately for multiple reasons. I felt that this would be easier to get started with, since I didn't have any frontend framework experience before, the backend project was written in TypeScript, and I'd rather use normal ES6 JS for now.
Right now, the site has already made some pretty decent progress and is at the point where I need to establish a connection with the database and also use some of the functions already implemented in the backend project.
And this created the question:
Should I create new functions and/or create and use APIs? Would there be any problem with MongoDB being accessed and written to by two different processes? Would there be security issues if I created "public" APIs from my already existing backend logic? (I haven't written any APIs yet.)
Or should I use the time and import the frontend project into the backend (meaning also either translating the new code to TypeScript or switching to normal ES6 JS)? Would this be a security risk? I'd rather not have the backend logic in my frontend site.
I appreciate any answer to that!
Thank you :)
This is a question of whether you can afford to run two servers. Separating your front end from your back end is actually a good move, microservices-wise, since it allows you to scale them separately in the future: your backend may need more resources once you start catering to mobile users as well, or once you get more API calls, while your front end server need only serve the UI and assets, nothing more. The clear downside is the increase in costs, since you do need to run two servers instead of one, something that is difficult when you are just starting out.
Should I create new functions and/or create and use APIs?
For your backend? Yes. APIs are the way to do things now in the web space, as they future-proof you and allow a more controlled and uniform way to access your backend (everything goes through the API). So if your front end isn't accessing your database through APIs yet, I suggest you refactor it to do so.
For your concerns about Mongo, I'm pretty sure Mongo already has features in place to avoid deadlocks.
As for security of your API, I suggest checking out JWT.
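As a rough sketch of what a JWT check can look like on an Express API, assuming the jsonwebtoken package (the route, secret handling, and token format are placeholders, not a prescribed setup):

const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
const SECRET = process.env.JWT_SECRET; // keep the secret out of source control

function requireAuth(req, res, next) {
  // Expect "Authorization: Bearer <token>" from the Vue frontend.
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    req.user = jwt.verify(token, SECRET); // throws if missing, invalid, or expired
    next();
  } catch (err) {
    res.sendStatus(401);
  }
}

app.get('/api/tasks', requireAuth, (req, res) => {
  res.json({ tasks: [] }); // only authenticated clients reach this handler
});

app.listen(3000);

The frontend then attaches the token it received at login to every request; only the API decides whether it is valid.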
should I use the time and import the frontend project into the backend
Should you go down this path instead due to cost concerns, I would suggest rewriting one of the codebases to match the other for uniformity's sake, though do that at your leisure (we can't have you wasting all your precious time rewriting code that already works just fine). This isn't really that much of a security issue, since backend code isn't being sent to the front end for all your users to see.
Let me start by saying I've never used Vue. However, whenever I use react, I always make separate projects for the front end and the back end. I find it's 'cleaner' to keep the two separate.
I see no reason for you to translate your entire project from TypeScript. Simply have your frontend make requests to your backend.
If you're looking to brush up on your web security, I recommend you look into the Open Web Application Security Project.

What are the benefits of using multiple method tokens for ajax? [closed]

That is, instead of doing all your server operations via the POST method token (with the content type set to JSON).
I've done some research here, and I am referring to the method tokens mentioned in the IETF document.
https://www.rfc-editor.org/rfc/rfc2616#section-5.1.1
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest
I don't see the benefit of using all the other request types. I know they are used; in particular, what spurred this interest was Backbone's usage, as seen here:
var methodMap = {
  'create': 'POST',
  'update': 'PUT',
  'patch':  'PATCH',
  'delete': 'DELETE',
  'read':   'GET'
};
These properties are eventually passed to the xhr open method which you can read about in the links I posted above.
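Roughly speaking, each mapped verb ends up as the first argument to xhr.open. Here is a simplified sketch (not Backbone's actual sync implementation; the /articles URL is made up):

var method = methodMap['update'];   // 'PUT'
var xhr = new XMLHttpRequest();
xhr.open(method, '/articles/42');   // the method token goes here
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify({ title: 'Hello' }));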
Actually, the MDN article has pretty much no information, while the W3 article seems a bit esoteric.
What you've described is an application design philosophy called Representational State Transfer (REST). The philosophy is much more encompassing than just using multiple request methods. It also covers the idea that each type of data needs its own URL, how that URL should be logically structured, and what should belong in query parameters versus the URL path. REST is one of the earliest ideas related to the Semantic Web: the idea that websites should be as easily readable to machines as they are to humans (or, to put it another way, that a website should be as easily understandable to developers as it is to regular users).
You can read the original paper describing REST here: http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm
REST is actually just one chapter in the whole paper. The paper describes what a web architecture should ideally look like.
Do you need REST?
The short answer is of course no. Technically speaking, you're allowed to do whatever works. Indeed, back when the ideas behind REST were first introduced, there was no easy way to make PUT and DELETE requests in some browsers. So people stuck to GET and POST, and the HTTP spec was updated specifically to give GET and POST a RESTful meaning.
The HTTP specification recommends that GET only be used for idempotent operations (requests with no side effects), while POST should be used whenever a request causes something to change on the server. But developers have been using GET to update databases because it makes debugging easy: you can just construct the query in the URL field of your browser. The RESTful way is to only allow POST requests to update the database (or save anything to file).
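For example, a RESTful client keeps GET free of side effects and routes changes through other methods. A minimal sketch with fetch (the /articles URLs are made up):

// Read: no side effects, safe to repeat or cache.
fetch('/articles/42')
  .then(function (res) { return res.json(); })
  .then(function (article) { console.log(article); });

// Change: the update goes through POST (or PUT/PATCH), never GET.
fetch('/articles', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ title: 'Hello' })
});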
What's the advantage?
The advantage is to essentially allow developers to treat the web as an API. Allowing machines to read webpages allows for things like mashups, creating mobile apps as front-ends etc.
A good API or library is consistent. A consistent API is easier to use, easier to remember, and doesn't require the developer to look up documentation too often. REST attempts to provide this consistency by giving real meaning to the request types. Therefore, if you see a PUT request, you don't have to guess what it's doing.
As such, as a programmer, it is to your advantage to not only be RESTful as much as possible but also convince as many other programmers as possible to create RESTful websites. If all websites are RESTful, writing scripts to do smart things with online data becomes much easier.
On the other hand, as a programmer, you also have the freedom to disagree with other people's ideas.

Prevent running js code after download from site [closed]

How can I prevent the user from running my web page after downloading the page source?
We can't prevent downloading of the source code, but we can encrypt it.
But that isn't good enough for me, because the encrypted code still works after downloading.
Thanks in advance.
As a general rule, if the code runs on a machine you don't control, it can be manipulated, so users can execute it anyway.
You can make this more difficult through code obfuscation or by implementing some sort of DRM, but I would suggest that this will largely be more trouble than it's worth (since it just takes one person to break it and your code is back out).
1) You could, for example, require that some key be downloaded from a site you control before it'll execute (a sketch of this idea follows the list), but the recipient might simply sniff their traffic and pass that value to the game themselves.
2) Or you could possibly set up your game to stream each of the levels or some important aspect of it to your game client, but again, there's not a whole lot stopping someone from just reading these aspects and implementing this mechanism themselves.
3) Perhaps you could encrypt these level packages dynamically on the server with a time-based key, but it just takes that one bored programmer with the technical know-how to reverse-engineer what your method is.
4) Another option that comes to mind is requiring some regular polling to a server you control and requiring some sort of response, but again, if your client can predict what this response is supposed to look like, it's easy for someone to rewrite the game to talk to their own program instead of your server.
5) You could also daisy chain a ridiculous number of dependencies of your javascript logic (breaking your own code into a number of dependencies) so it's slightly more difficult for another user to rebuild the required paths on their system. This might be useful to put off a casual user, but I doubt it'd put off a more knowledgeable user.
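As a sketch of option 1 (every name here, including the /game-key endpoint and startGame, is hypothetical), the page can refuse to start until it fetches a key from a server you control; as noted above, this only deters casual copying:

// Refuse to start until a key arrives from the server; an offline copy
// of the page never gets past this fetch.
fetch('https://example.com/game-key')
  .then(function (res) {
    if (!res.ok) { throw new Error('no key'); }
    return res.text();
  })
  .then(function (key) { startGame(key); }) // hypothetical entry point
  .catch(function () {
    document.body.textContent = 'This game must be played online.';
  });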
All in all, I'd suggest that you simply make the game available as is. Various game companies larger than you have attempted to implement DRM measures of their own, with results that were disastrous (when they didn't work as advertised) or just plain annoying for the end user.

What are the security trends for client side coding in web development [closed]

There are lots of new frameworks and technologies coming up, and it's becoming hard to follow all of them. One of the things that confuses me is client-side frameworks. I heard that AngularJS, Backbone, Knockout, JsViews, Knockback, SPA... are the most popular right now. But I can't understand how the security concept applies. If we take the example of querying a table from a database, it's now possible to make queries from the client side by specifying the table name, fields, etc. So if it works that way, then everyone else can write another query and get all the other information. I am pretty sure that I am missing something very important here, and it just doesn't click. So please, can anybody explain where I can start learning these primitives?
I'd really appreciate it; I am eager to learn, but I guess I am searching the wrong way.
Whatever the framework used, the security concerns stay the same, and are very similar to mobile apps:
which data you can afford to have handled in an untrusted environment
which processing can be applied in an untrusted environment
By "untrusted environment" I mean the browser itself. You have to understand that any code executed in the browser can be corrupted by a medium/good JS developer.
Data security suffers the same threat: giving access to data from your client means that you no longer control who is using it.
Once you've dealt with this simple matter, it becomes easier to decide what must stay on the server side, and what can be moved to the client.
That said, there are various ways to make data/algorithm theft more difficult:
Obfuscation that comes with minification
Double data validation (forms, for example): both client and server side (see the sketch after this list)
Authentication protocols, like OAuth
Binary over WebSockets, instead of plain JSON and AJAX calls...
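As a minimal sketch of the double-validation point with Express (the /signup route and the email rule are made-up examples): the client can run the same check before submitting, but only the server-side check is trustworthy.

const express = require('express');
const app = express();
app.use(express.json());

app.post('/signup', (req, res) => {
  const email = req.body && req.body.email;
  // Re-validate on the server even if the form already validated in the browser,
  // because any client-side check can be bypassed.
  if (typeof email !== 'string' || email.indexOf('@') === -1) {
    return res.status(400).json({ error: 'invalid email' });
  }
  res.status(201).json({ ok: true }); // persist the user only after validation
});

app.listen(3000);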
The browser sandbox imposes some limitations, but mainly to protect the local computer from damage caused by malicious JS code. It does not protect your code or your data from being seen and manipulated by the user.
I am using Angular for some of my projects. I haven't used other frameworks, but in Angular you usually consume an API to get the data. You don't query your database directly. So the responsibility for securing your data lies more with your API (backend) than with your Angular client.
You can use OAuth, or whatever other security method you want, to make your API safe.
