How to import a JSON file into an AngularJS application? - javascript

I am using AngularJS and I need to import/export an array.
I was able to export the array by converting it into a JSON object and then using the FileSave.js library to save the file locally.
Now I can't find any information on how to import this JSON file from my PC into my application and convert it back into an object so I can display the array.
Thanks

Client-side JavaScript is unable to access the local file system by design, for security reasons. As far as I am aware, there are 4 possible solutions for you. I've listed them below in order of ease.
1) Create a variable in your program and simply copy-paste the contents of your JSON file into your JS file as the value. This takes two seconds, but it can get really messy if your JSON file is large or if you need to use multiple JSON files.
var localJSONFile = [literally copy-pasted JSON text]
2) Check out Brackets by Adobe. I just did some quick googling and found this page that shows how to access local files. Open that and do a ctrl+f > 'local' and you'll find it. This is my recommended approach, as it's fast and easy. You will have to switch your IDE, but if you are just starting out, most editors (Brackets, Sublime, VSCode, Atom) will feel the same anyway.
3) Create a basic Angular service to inject into your program with the sole purpose of storing copy-pasted JSON files as variables. This is ultimately the same as 1), but it will keep the files you are working in less cluttered and easier to manage. This is probably the best option if you don't want to switch IDEs and will only be working with a couple of JSON files. A minimal sketch of this option follows the list below.
4) Get a local server going. There are tons of different options; when I was in your position I went the Node.js route. There is definitely a learning curve involved, as there is with setting up any server, but at least with Node you are still using JavaScript, so you won't have to learn a new language. This is the recommended approach if you know you will have lots of different data files flowing back and forth in the project you are working on; if that is the case, you will ideally have a back-end developer joining you soon. If not, you can set up a server quickly by downloading Node.js and npm (which comes with it) and using npm from your command prompt to install something called express, and then express-generator. With express-generator you can run an init command from your command line and it will scaffold an entire fully functioning web server for you, including the local folder structure. Then you just go to the file it provides for your routes and adjust it. Node.js CAN read your local file system, so you could set up a quick route that, when hit, reads the file from your file system and sends it to the requester (a rough sketch follows below). That would let you move forward immediately. If you need to add a database later on, you will need to install one locally, get the npm plugins for that database (there are tons, so no worries there), and then update your route to read from the database instead.
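For option 3, here is a minimal sketch of what such a service could look like, assuming an app module named 'myApp' is already defined elsewhere (all names here are made up for illustration):
angular.module('myApp')
  .constant('localData', {
    // paste the contents of your JSON file here
    items: []
  });

// Inject it wherever you need the data:
angular.module('myApp').controller('MainCtrl', function($scope, localData) {
  $scope.items = localData.items;
});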
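And for option 4, a rough sketch of the kind of route you could add to an Express server to serve a JSON file from the local file system (the port, paths, and file name are placeholders):
var express = require('express');
var path = require('path');
var app = express();

// When this route is hit, read the JSON file from disk and send it to the requester.
app.get('/myJsonFile.json', function(req, res) {
  res.sendFile(path.join(__dirname, 'data', 'myJsonFile.json'));
});

app.listen(3000);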

This seems too easy, so forgive me if I'm oversimplifying:
$http.get('/myJsonFile.json')
  .success(function(data, status, headers, config) {
    $scope.myJsonData = data;
  });
Or, if your response headers aren't set up to serve application/json:
$http.get('/myJsonFile.json')
  .success(function(data, status, headers, config) {
    $scope.myJsonData = JSON.parse(data);
  });
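Note that .success() was removed in AngularJS 1.6, so on newer versions the same thing is written with .then() - a minimal sketch, assuming the same file path:
$http.get('/myJsonFile.json').then(function(response) {
  // $http parses the body automatically when it is served as application/json
  $scope.myJsonData = response.data;
});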

Related

Saving filename in DB after uploading to GCP Storage or using bucket.getFiles()

I've been searching on StackOverflow, but it seems that this question has not been asked yet. It's an architecture question about files being uploaded to GCP Storage.
TL;DR: Is there any issue with using bucket.getFiles() directly (from a server), rather than storing each filename in my DB and then asking for them one by one and returning the array to the client?
The situation:
I'm working on a feature that will allow the user to upload image attachments linked to a delivery note. A delivery note can have multiple attachments.
I use a simple upload button on my client (mobile device) and upload the content to GCP in a path/to/id-deliveryNote folder, such as: path/to/id-deliveryNote/filename.jpg, path/to/id-deliveryNote/filename2.jpg, etc.
Somewhere else in the app the user should be able to click on and download each of those attachments.
The solution
After the upload is done in GCP, I asked myself how to read those files and give the user a download link to each one. That's when I found the bucket.getFiles() function.
Since my file paths are all under the same id-deliveryNote/ prefix, I can leverage bucket.getFiles(prefix) and, after the promise resolves, safely return the list of available links to my user.
The issue
I do not store the filenames in my deliveryNote table in my DB, which can sound a bit problematic: I am relying on GCP to know the attachments of a delivery note. The way I see it, I do not need to replicate the information in our DB (and possibly handle failures in two spots); if I need those files, I just ask GCP to give me their links. The opposite way of thinking is that by storing the names you are able to list the attachments for the clients, and then generate the download link only when the user clicks a specific attachment.
My question is: is there any issue with using bucket.getFiles() directly (from a server), rather than storing each filename in my DB and then asking for them one by one and returning the array to the client?
Some points that could influence the chosen method:
A difference in GCP cost per call?
An invalid application data structure?
Other things?
There is no issue with using this method to return the links for the files to download. In the API documentation for this method - accessible here - they even show an example of returning files using prefixes. Just be aware that Cloud Storage doesn't actually use real folders, only names that look like they are in folders - more details on that here - so you don't mix up the concepts when working with names and prefixes.
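For reference, a rough sketch of what that could look like from a Node.js server with the @google-cloud/storage client - the bucket name, prefix, and expiry below are placeholders, and signed URLs are just one common way to hand the client a download link:
const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
const bucket = storage.bucket('my-bucket');

async function getAttachmentLinks(deliveryNoteId) {
  // List every object stored under the delivery note's prefix.
  const [files] = await bucket.getFiles({ prefix: `path/to/${deliveryNoteId}/` });

  // Turn each object into a time-limited download link.
  return Promise.all(files.map(async (file) => {
    const [url] = await file.getSignedUrl({
      action: 'read',
      expires: Date.now() + 60 * 60 * 1000, // valid for one hour
    });
    return { name: file.name, url };
  }));
}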
For the pricing point, you can find the whole pricing for Google Cloud Storage in this documentation, including how much each operation will cost - for example, $0.02 per 50,000 operations for object gets and retrieval of bucket and object metadata - plus the cost of storing the data itself, etc. After you check that, you can compare it with your database costs to see whether this point will impact you.
To summarize, there is no problem with following this approach. The actual advantage of also storing the names in the database is that, even though you could then have a failure in two spots, you are more likely to face an issue in only one place at a time, so the replication would be a great thing to have. You just need to decide which option fits you best.

Auto add files to IPFS network

On my Raspberry Pi, I have a Python script running that measures the room temperature (via a DHT22) and then logs the temperature to a CSV file every half hour.
A new CSV file is created for every day that the script is running, so the files are named temp_dd-mm-yy.csv. These files are all saved in my loggings folder.
I now want to automatically add and pin the CSV files to the IPFS network, because I don't want to type ipfs add <file.csv> in the terminal every day.
In other words, is there a way to have a piece of code running that makes sure all files in the logging folder are added to the IPFS network every 24 hours?
I have experimented with the IPFS API, but I didn't manage to get anything useful out of it.
From Python directly, there are two ways to do this: either you call the ipfs binary using subprocess, or you use the REST API directly with something like urllib.
To use the REST API, you send the data in a POST request, passing it as form data.
Here is the equivalent curl request to add two "files":
$ curl -X POST -F 'file1=somedata' -F 'file2=somemoredata' http://localhost:5001/api/v0/add
{"Name":"QmaJLd3cTDQFULC4j61nye2EryYTbFAUPKVAzrkkq9wQ98",
"Hash":"QmaJLd3cTDQFULC4j61nye2EryYTbFAUPKVAzrkkq9wQ98","Size":"16"}
{"Name":"Qman7GbdDxgT3SzkzeMinvUkaiVduzKHJGE5P2WGPqV2uq",
"Hash":"Qman7GbdDxgT3SzkzeMinvUkaiVduzKHJGE5P2WGPqV2uq","Size":"20"}
With shell, you could just do a cron job that does e.g.
ipfs add -r /logging
every day. This re-adds the files each time, but it will stay reasonably efficient until your logging directory becomes really large.
Of course, you will need to put the hashes somewhere or use IPNS so people can actually see this data.
A simple solution could be to use ipfs-sync to keep the directory in-sync on IPFS. It'd keep a directory pinned for you, as well as update an IPNS key for you if you'd like a consistent address for the data.
It can also be tuned to only update IPNS every 24 hours if you desire.

How to read a large file (>1GB) in javascript?

I use Ajax ($.get) to read a file from my local server. However, the page crashes because my file is too large (> 1 GB). How can I solve this problem? Are there other solutions or alternatives?
$.get("./data/TRACKING_LOG/GENERAL_REPORT/" + file, function(data){
console.log(data);
});
A solution, assuming that you don't have control over the report generator, would be to download the file in multiple smaller pieces, using range headers, process the piece, extract what's needed from it (I assume you'll be building some html components based on the report), and move to the next piece.
You can tweak the piece size until you find a reasonable value for it, a value that doesn't make the browser crash, but also doesn't result in a large number of http requests.
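A rough sketch of that idea using fetch and a Range header - the chunk size and the processChunk function are placeholders, and the server has to support range requests for this to work:
async function fetchInChunks(url, chunkSize) {
  let start = 0;
  while (true) {
    // Ask the server for just the next slice of the file.
    const response = await fetch(url, {
      headers: { Range: 'bytes=' + start + '-' + (start + chunkSize - 1) }
    });
    const buffer = await response.arrayBuffer();
    const text = new TextDecoder().decode(buffer);

    processChunk(text); // e.g. build the HTML components for this piece of the report

    if (buffer.byteLength < chunkSize) break; // the last piece is shorter than a full chunk
    start += chunkSize;
  }
}

fetchInChunks("./data/TRACKING_LOG/GENERAL_REPORT/" + file, 5 * 1024 * 1024);
Note that a fixed-size byte slice can end in the middle of a line, so processChunk needs to keep the trailing partial line and prepend it to the next chunk.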
If you can control the report generator, you can configure it to generate multiple smaller reports instead of a huge one.
Split the file into a lot of smaller files, or give a designated user FTP access. I doubt you'd want too many people downloading a gig each off your web server.

How do I write data to a file with Javascript?

I made a game with javascript using this tutorial as a base: http://html5gamedev.samlancashire.com/making-a-simple-html5-canvas-game-part-3-drawing-images/
How do I get it to write the data from the item counter (var itemCounter = 50;) to a text file named savedata.txt? I googled it, but no helpful results came up. Can someone help me?
Technically, you can create a server with Node.js, which you program in JavaScript. Details can be found here.
It's not possible to store the data as a file on the client.
But you can use localStorage, WebSQL, IndexedDB, or simply cookies for it.
Note that all of these storage options have different properties in terms of lifetime.
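For example, a minimal sketch of the localStorage route, using the itemCounter from the question (the key name is made up):
// Save the counter whenever it changes.
localStorage.setItem('savedata', JSON.stringify({ itemCounter: itemCounter }));

// On the next page load, restore it if a saved value exists.
var saved = JSON.parse(localStorage.getItem('savedata') || '{}');
if (saved.itemCounter !== undefined) {
  itemCounter = saved.itemCounter;
}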
You could also create a Blob using the Blob API, then create a data URL and ask the user to save it, and use drag-and-drop plus the File API to read the data back in. This approach, however, makes it easy for users to modify the data.
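A rough sketch of that idea, using an object URL and the download attribute rather than a raw data URL (the link text and file contents are just examples):
var blob = new Blob(['itemCounter=' + itemCounter], { type: 'text/plain' });

var link = document.createElement('a');
link.href = URL.createObjectURL(blob);
link.download = 'savedata.txt';  // the browser suggests this file name in the save dialog
link.textContent = 'Save game data';
document.body.appendChild(link); // the user clicks the link to save the file locally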
Writing a file is possible with the newer FileWriter and FileSystem APIs.
More mature solutions (not using files) have already been mentioned.
JavaScript does not support working with files directly; for data storage, several options are available:
cookies
Local Storage
Server side storage

Ruby on Rails - Storing and accessing large data sets

I am having a hard time managing the storage of and access to a large dataset within a Ruby on Rails application. Here is my application in a nutshell: I run Dijkstra's algorithm over a road network and then display the nodes it visits using the Google Maps API. I am using an open dataset of the US road network to construct the graph by iterating over two txt files given in the link, but I am having trouble storing this data in my app.
I am under the impression that a large dataset like this should not be an ActiveRecord object - I don't need to modify the contents of this data, I just need to be able to access it and cache it locally in a hash so I can run Ruby methods on it. I have tried a few things, but I am running into trouble.
I figured that it would make the most sense to parse the txt files and store the graph in YAML format. I would then be able to load the graph into a DB as seed data and grab it using Node.all, or something along those lines. Unfortunately, the YAML file becomes too large for Rails to handle; running the rake task pegs the system at 100% indefinitely...
Next I figured that, since I don't need to modify the data, I could just create the graph every time the application loads, as part of its initialization. But I don't know exactly where to put this code. I need to run some methods, or at least load a block of data, and then store it in some sort of global/session variable that I can access from all controllers/methods. I don't want to be passing this large dataset around, I just want access to it from anywhere.
The way I am currently doing it is just not acceptable: I am parsing the text files that create the graph in a controller action and hoping that it finishes computing before the server times out.
Ideally, I would store the graph in a database and grab its entire contents to use locally, or at least only parse the data once when the application loads and then be able to access it from different page views, etc. I feel like this would be the most efficient approach, but I am running into hurdles at the moment.
Any ideas?
You're on the right path. There are a couple of ways to do this. One is, in your model class, outside of any method, set up constants like these examples:
MY_MAP = Hash[ActiveRecord::Base.connection.select_all('SELECT thingone, thingtwo from table').map{|one| [one['thingone'], one['thingtwo']]}]
RAW_DATA = `cat the_file` # However you read and parse your file
CA = State.find_by_name 'California'
NY = State.find_by_name 'New York'
These will get executed once in a production app: when the model's class is loaded. Another option: do this initialization in an initializer or other config file. See the config/initializers directory.
