I've been tinkering with a small application that would show a limited amount of data to the viewer in a nicer way. Instead of opting for a database (be it SQLite or MongoDB), I was thinking of storing my data in a simple JSON file. It would have the following characteristics:
Static data (will never have to be updated - 100-150 arrays)
Not private data - can be freely accessed by anybody who has access to the application
Offline application (no internet connection at all)
Multiple users who would only read the data
JavaScript being used for this
What I am wondering about, though, is simultaneous reads. The application would never be used to update the data; it remains static. However, several people might be using the application simultaneously, as the tool will be stored on a shared drive, accessible by several clients at the same time (only to read the file).
As I haven't worked with data or databases yet, I wanted to see if anybody has already tried this out before I go into it deeper.
I am aware of the security implications; however, the data inside the application is not sensitive and can be accessed by anybody freely. I only want to show it in a nice way. And as it is static anyway, I was going to opt for a JSON file instead of starting to work with a database, to speed up development.
As far as I can see from your description, I think that there should be no conflict. You should be fine.
It's actually fairly common to use JSON-formatted files to store truly-static data.
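For what it's worth, here is a minimal sketch of that setup, assuming the app runs under Node.js (or Electron) and a hypothetical data.json sits next to the application on the shared drive:

```js
const fs = require('fs');
const path = require('path');

// Every client simply opens the file, parses it once, and keeps the result
// in memory. Concurrent readers don't conflict because nobody ever writes.
function loadData() {
  const file = path.join(__dirname, 'data.json'); // placeholder file name
  return JSON.parse(fs.readFileSync(file, 'utf8'));
}

const records = loadData();
console.log(`Loaded ${records.length} entries`);
```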
I want to use model-viewer or three.js to showcase some of my 3d models on a personal website. In order to display 3d models on the web, the client needs to fetch the files from the server (the 3d mesh and the texture images).
But I don't want my visitors to be able to access any of these files. I hope you can point me in the right direction. Here are some ideas I had, but I don't think they'll work:
(1) Using something like crypto-js to encrypt and decrypt files
But when decrypting files on the frontend aren't users able to decrypt the files, too?
The key has to be transferred to the frontend code somehow, doesn't it?
(2) Splitting the files up into little pieces and recomposing them on the client
Same issue as with #1
The code for recomposition needs to sit on the client and can be used to access the files
When elaborating on those ideas, I am not quite sure if what I am trying to do is even possible 🤔
In case it is impossible... is there anything I can do to make it really hard for users to get access to the files?
The short answer is: if it is on a website, you don't stand a chance of protecting it against a determined person with enough time on their hands. The only exception is video streams, which can use the 'Encrypted Media Extensions' API to get video to the screen without any part of the browser being able to interact with the raw data.
Whatever you do to protect the files, the code to read them needs to be sent to the browser as well. Eventually, the raw data will be somewhere in the memory of the js-runtime where it can be extracted using the built-in debugger. The same goes for any mechanism to somehow encrypt the code. It makes it more difficult, but not impossible. You could use WebAssembly to make that part of the code even harder to reverse-engineer, but I wouldn't need to do that:
In the end, the data needs to get to the webgl-api, so I could just use a browser-extension to intercept the relevant webgl-calls and obtain all the raw data there. You could go on and also encrypt the vertex-data in a way that can be decoded in the vertex-shader, but guess what: I can read the vertex-shader code as well.
And so the list goes on. There is just no way to do it that cannot be somehow circumvented. But maybe you can make it difficult enough that nobody bothers...
For me the most promising options seem to be:
use low-fidelity or partial models for rendering in the browser alongside renders of the full-resolution model. I've seen that on several sites for downloading CAD/3D models. They used merged models, sometimes reduced vertex counts, low-res textures and so on, while providing images of what the final result will look like once you've paid for it.
make up your own file-format or hide the file-format used in the network-view of the developer-tools. Google maps/earth for instance does that with their 3d-data (they are probably using something based on protobuf, but it's incredibly hard to reverse-engineer)
and yes, I guess you could also use the WebCrypto-API with a pre-shared secret so it is at least not too obvious which of the files contain the 3d-data.
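To illustrate that last point, a rough sketch using the SubtleCrypto API with AES-GCM; the key bytes and URL are placeholders, and since the key has to ship with the client code, this only obscures the data rather than protecting it:

```js
// Sketch only: the pre-shared secret is baked into the bundle, so anyone can
// extract it; this merely makes the 3d files less obvious in the network view.
const KEY_BYTES = new Uint8Array(32); // placeholder key material

async function fetchModel(url) {
  const key = await crypto.subtle.importKey('raw', KEY_BYTES, 'AES-GCM', false, ['decrypt']);
  const payload = await (await fetch(url)).arrayBuffer();
  const iv = new Uint8Array(payload, 0, 12);        // assume the IV is prepended
  const ciphertext = new Uint8Array(payload, 12);   // the rest is the encrypted mesh
  return crypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, ciphertext);
}
```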
I'm currently in the process of making an app using JavaScript and PhoneGap that needs to save a database or something similar locally while offline, until it is later synced with an external database (not the main problem).
So what's the best solution for managing relatively big chunks of data that have to be modified a lot during runtime: being able to delete entries, add new entries, read entries using attributes and IDs, sort entries, and import and export data as a file (i.e. give me a string or object that I can save in a file using PhoneGap)?
I already looked at TaffyDB (abandoned for two years now) and PouchDB (which seems to work using Ajax and therefore to require an internet connection).
It's good that you have already tried PouchDB, which is a client-side implementation of the CouchDB database and is supported and tested on all the major browsers and platforms.
IndexedDB is actually the current web browser standard for storing large chunks of data in the form of objects.
All the major storage libraries, including PouchDB, are built on top of it.
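A minimal sketch of what that looks like with PouchDB, which stores its data locally in IndexedDB, so no connection is needed until you sync; the remote URL below is just a placeholder for the later sync target:

```js
// PouchDB keeps everything in IndexedDB on the device, so this works offline.
const db = new PouchDB('entries');

async function demo() {
  await db.put({ _id: 'entry-1', type: 'note', text: 'hello', created: Date.now() });
  const one = await db.get('entry-1');                   // read by id
  const all = await db.allDocs({ include_docs: true });  // read everything
  console.log(one, all.rows.length);

  // Later, when a connection is available, push local changes to a CouchDB server.
  await db.replicate.to('https://example.com/couchdb/entries'); // placeholder URL
}
```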
Kindly mark this answer if this is what you need or comment back for more explanations.
If I take a look at the stream library landscape, I see a lot of nice stuff (like mapping/reducing streams), but I'm not sure how to use them effectively.
Say I already have an Express app that serves static files and has some JSON REST handlers connecting to a MongoDB database server. I have a client-heavy app that can display information in widgets and charts (think Highcharts), with the user filtering, drilling down into information, etc. I would like to move to real-time updating of the interface, and this is the perfect little excuse to introduce node.js into the project, I think. However, the data isn't really real-time, so pushing new data to a lot of clients isn't what I'm trying to achieve (yet). I just want a fast experience.
I want to use browserify, which gives me access to the node.js streams API in the browser (and more). Given the size of the data sets, processing is done server-side (by a backend API over JSONP).
I understand that most of the connections at some point are already expressed as streams, but I'm not sure where else I could use streams effectively to solve a problem:
Right now, when sliders/inputs are changed, spinning loaders appear in the affected components until the new JSON has arrived, been parsed, and is ready to be shot into the chart/widget. With a Node.js server in between, could streaming the data, instead of request/responding with chunks of JSONP-ified numbers, speed up the interactivity of the application?
Say that I have some time-series data. Can a stream be reused so that when I want to see only a subset of the data (by time), I can have the stream resend its data, filtering out the points I don't care about?
Would streaming data to a (high)chart be a better user experience than using a for loop and an array?
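On the second question, a rough sketch of what a reusable time filter could look like as an object-mode Transform stream, assuming data points shaped like { time, value }; rather than reusing one stream, you would pipe the source through a fresh filter whenever the window changes:

```js
const { Transform } = require('stream');

// Pass through only the points that fall inside the requested time window.
function timeFilter(from, to) {
  return new Transform({
    objectMode: true,
    transform(point, _enc, done) {
      if (point.time >= from && point.time <= to) this.push(point);
      done();
    }
  });
}

// Hypothetical usage: source.pipe(timeFilter(start, end)).pipe(chartWriter);
```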
I'm building a project that relies heavily on data read from a bunch of Garmin eTrex HC devices. To do that I use the Garmin Communicator Plugin API; I have successfully found the device, read the data, and uploaded it to a server where I will do further data manipulation.
However, I now want to delete the data I have read from the device. I have found nothing in the API reference provided by Garmin, so now I need to turn to you clever folks to solve my problem, since I've been tearing my hair out all morning trying to figure this out.
I cannot rely on the person carrying the device to reset it properly since there is an angle of competition in the mix.
Any way I can delete the data from the device will be greatly appreciated: any solution that involves deleting data, resetting the device, or really, whatever.
If the community answer is "This cannot be done" I will have to accept that and do some checking on the server side (which I might do anyway) in order to prevent data from being uploaded multiple times.
It looks like Garmin simply hasn't included this natively in the API, but...
When you delete data you're actually writing data.
It doesn't look like the JS file GarminDevice.js has any method for removing data. I would write zeros (or other placeholder values) into the data space.
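For the server-side fallback mentioned in the question (since the plugin itself offers no delete), here is a sketch of duplicate detection by fingerprinting each upload; seenHashes would be a database table or key-value store in practice:

```js
const crypto = require('crypto');

// In-memory stand-in for whatever store the server actually uses.
const seenHashes = new Set();

// Returns true if this exact track payload has been uploaded before.
function isDuplicateUpload(trackData) {
  const hash = crypto.createHash('sha256').update(trackData).digest('hex');
  if (seenHashes.has(hash)) return true;
  seenHashes.add(hash);
  return false;
}
```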
I'm developing an HTML5 browser multiplayer RPG with node.js running in the backend and a WebSockets plug-in for client data transfer. The problem I'm facing is accessing and updating user data; as you can imagine, this will be happening many times a second, even with few users connected.
I've done some searching and found only two plug-ins for node.js that enable MySQL capabilities, but they are both in early development, and I figure that querying the database for every little action the user makes is not efficient.
My idea is to have node.js access the database through PHP when a user connects and retrieve all the information related to that user. The information collected will then be stored in a JavaScript object in node.js. This will happen for all users playing. Updates will then be applied to the object. When a user logs off, the data stored in the object will be written back to the database and removed from the object.
A few things to note: I will separate different types of data into different objects so that more commonly accessed data isn't mixed in with data that would slow down lookups. Theoretically, if this project gained a lot of users, I would introduce a cap on how many users can log onto a single server at a time, for obvious reasons.
I would like to know if this is a good idea. Would having large objects considerably slow down the node.js server? If you happen to have any ideas for other possible solutions to my situation, I welcome them.
Thanks
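For reference, a sketch of the approach described above; the PHP endpoints (user_load.php, user_save.php) are hypothetical names, and a recent Node.js with a global fetch is assumed:

```js
const players = {}; // in-memory cache keyed by user id

async function onConnect(userId) {
  const res = await fetch(`http://localhost/user_load.php?id=${userId}`); // placeholder endpoint
  players[userId] = await res.json(); // keep the user's data in memory while they play
}

function onAction(userId, update) {
  Object.assign(players[userId], update); // only the cached object is touched per action
}

async function onDisconnect(userId) {
  await fetch('http://localhost/user_save.php', {   // placeholder endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(players[userId])           // write back once, then drop the cache entry
  });
  delete players[userId];
}
```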
As far as your strategy goes, keeping the data in intermediate objects and going through PHP adds a very high level of complexity to your application.
Just the communication between node.js and PHP seems complex, and there is no guarantee this will be any faster than just putting things straight into MySQL. Putting any unneeded barrier between you and your data is going to make things more difficult to manage.
It seems like you need a faster data solution. You could consider using an asynchronous database like MongoDB or Redis that will read and write quickly (Redis writes in memory, which should be incredibly fast).
These are both commonly used with node.js precisely because they can handle the real-time data load.
Actually, Redis is what you're really asking for: it stores things in memory and then persists them to disk periodically. You can't get any faster than that, but you will need enough RAM. If RAM looks like an issue, go with MongoDB, which is still really fast.
The disadvantage is that you will need to relearn your ideas about data persistence, and that is hard. I'm in the process of doing that myself!
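A minimal sketch of what the Redis route might look like from node (using the node-redis v4 promise API; the key and field names are made up for illustration):

```js
const { createClient } = require('redis');

async function demo() {
  const client = createClient();
  await client.connect();

  // Player state lives in a Redis hash: reads and writes stay in memory,
  // and Redis persists to disk on its own schedule.
  await client.hSet('player:42', { hp: '100', x: '10', y: '7' });
  await client.hSet('player:42', 'hp', '95');        // cheap per-action update
  const state = await client.hGetAll('player:42');   // read it back when needed
  console.log(state);

  await client.quit();
}
```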
I have an application doing almost what you describe. I chose to do it that way since the MySQL drivers for node were unstable/undocumented at the time of development.
I have 200 connected users requesting data 3-5 times each second, and I fetch entire tables (approx. 1000 rows) through PHP pages returning JSON from Apache every 200-800 ms, putting the contents into arrays. I loop through the arrays and find the relevant data on request. It works, it's fast, and it puts no significant load on CPU or memory.
All data insertion/updating, which is limited, goes through PHP/MySQL.
Advantages:
1. It's a simple solution, with known, stable services.
2. Only one client connects to Apache/PHP/MySQL every 200-800 ms.
3. All node clients get the benefit of non-blocking I/O.
4. Runs on two small "PC-style" servers and handles about 8000 req/second (Apache Bench).
Disadvantages:
1. Many, but it gets the job done.
I found that my node script COULD stop (1-2 times a week), maybe due to some connection problems (unsolved), but combined with Upstart and Monit it restarts and alerts with no problems.
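A rough sketch of that polling pattern, with a placeholder URL for the PHP page; one Node process refreshes an in-memory copy of the table and every connected client is served from it:

```js
const http = require('http');

let cache = []; // the whole table, refreshed every few hundred milliseconds

function poll() {
  http.get('http://localhost/export_table.php', (res) => { // placeholder PHP page
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
      try { cache = JSON.parse(body); } catch (e) { /* keep the old cache on bad JSON */ }
    });
  }).on('error', () => { /* keep the old cache if Apache is unreachable */ });
}

setInterval(poll, 500); // roughly the 200-800 ms window mentioned above

// Client requests are answered by filtering the cached array.
function findByUser(userId) {
  return cache.filter((row) => row.user_id === userId);
}
```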