I'm attempting to write a web page that loads upwards of a gigabyte of binary data. The page is run directly off the local disk, not from a web server.
I tried encoding the data as base64 and embedding it in a script file, but the browser slowed to a crawl and eventually crashed with out-of-memory errors. I then tried encoding the data as a Uint8Array, but that ran into the same problem. I've also tried breaking the binary data into multiple files and retrying both methods on them, but by the time I had over 100 script files the browser still ran into out-of-memory errors.
So far, the only way to load the data into memory is to use the File API. However, it's not intuitive to have to select these files every time in order to use the web page.
Are there any other methods for reading large binary data into a web page without having to use the File API? I can't use AJAX/fetch because the page is run directly off the local disk.
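For reference, here is a minimal sketch of the File API approach that does work for me, reading the selected file in fixed-size slices so the whole gigabyte never has to sit in one contiguous buffer; the 16 MB chunk size and the processChunk handler are just placeholders:

    // Assumes an <input type="file" id="dataFile"> element in the page.
    var CHUNK_SIZE = 16 * 1024 * 1024; // 16 MB slices; the size is an arbitrary choice

    document.getElementById('dataFile').addEventListener('change', function () {
      var file = this.files[0];
      var offset = 0;

      function readNext() {
        if (offset >= file.size) { return; }          // finished
        var reader = new FileReader();
        reader.onload = function (e) {
          var chunk = new Uint8Array(e.target.result);
          processChunk(chunk, offset);                // hypothetical per-chunk handler
          offset += CHUNK_SIZE;
          readNext();                                 // read the following slice
        };
        reader.readAsArrayBuffer(file.slice(offset, offset + CHUNK_SIZE));
      }

      readNext();
    });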
Related
Let's say I have a web application whose function is to display data from a fairly large JSON file. The web page is one folder on my desktop (a data folder storing JSON files, an HTML file, a CSS folder storing stylesheets, and a JavaScript folder storing JS scripts) that I've pushed to GitHub. Very straightforward.
"My application folder"
- css folder
-style.css
- data folder
-json1
-json2
-json3
-json4
-json5
-json6
- javascript folder
-main.js
-index.html
My question is this: will storing many large JSON files (each with different data) in the data folder cause my website to run slower, even if only one JSON file is being rendered by the web page at any given time? Does the loading time of the page increase if there are more data files in the data folder, regardless of whether they are actively being used by the web app and loaded into the DOM?
I'm asking because I'm thinking of storing many JSON files in the data folder and dynamically loading different JSON files into the web page based on what the user does. But I don't want to use this approach if it's going to make the web page load extremely slowly.
Yes - I know I could use a database in the cloud like Postgres, etc. But for now, I would like to avoid using a database if possible.
It will not slow down your website: files that are not in use do nothing other than take up space, which could only affect performance if the disk starts becoming full.
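Since only the file you actually request gets fetched, lazy-loading works fine. A minimal sketch, assuming the folder layout above and that the page is served over HTTP (fetch will not work from file://); the render function is just a placeholder:

    // Fetch one JSON file from the data folder only when it is actually needed.
    function loadData(name) {
      return fetch('data/' + name).then(function (response) {
        if (!response.ok) { throw new Error('Failed to load ' + name); }
        return response.json();
      });
    }

    // Example: load a different file depending on what the user picks.
    loadData('json1').then(function (data) {
      render(data); // hypothetical function that puts the data into the DOM
    });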
I am creating a Grails app that loads JSON files in my controller, and I am faced with a high-risk data issue.
The JSON files are loaded using requests in the index.gsp file under views.
I have various drop-downs, a date slider, and charts in my web application, built with various JavaScript libraries (D3, jQuery, Highcharts, dc.js, and Crossfilter).
Now, the issue is that every drop-down selection loads a new JSON table into the browser, which can be seen in the Network tab of the developer tools (F12).
So, after a set of selections (depending on the JSON table sizes), as soon as the net data transferred exceeds 200 MB, the browser and the page become very slow, eventually leading to a browser/page crash.
So, I started searching for ways to clear the cache using JavaScript, and apparently it's quite tough and not practical, because it runs into the browser's security restrictions.
How to programmatically empty browser cache?
Clear the cache in JavaScript
So, I need a method for the browser to clear the cache every time a drop-down is selected.
But is the JSON file data being loaded into cache memory, or somewhere else?
I also tried clearing the cache manually from the browser's tools menu, and the browser still crashed.
Can anyone tell me an approach to go about solving the problem?
Two questions, mainly:
1. Where in the browser does the loaded JSON data end up (which memory)?
2. How do I keep clearing this data to reduce the load on the browser?
Any help would be appreciated.
I am currently trying gzip and other compression methods.
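I am also experimenting with dropping my references to the previous table before loading the next one, so the garbage collector can reclaim it. A rough sketch of what I mean (the variable and function names are just illustrative):

    // Keep only the currently displayed table; old ones become garbage-collectable.
    var currentData = null;
    var currentCf = null;

    function showTable(url) {
      $.getJSON(url, function (json) {
        currentData = null;                 // drop references to the previous table
        currentCf = null;
        currentData = json;
        currentCf = crossfilter(currentData);
        redrawCharts(currentCf);            // illustrative: rebuild the d3/dc charts
      });
    }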
My JavaScript application has to load many micro text files from my web server asynchronously (there are about 200 files of roughly 5 kB each). I know that downloading one large file is far faster than downloading many tiny files, but I cannot predict which files are going to be loaded (the client makes the requests), and I have tons of files like this.
How can I speed up the transfer of those files?
I thought about concatenating requested files with PHP. Is that a good idea?
"I thought about concatenating requested files with PHP. Is that a good idea?"
We do the same thing in production with a servlet in Java, and it works quite well. But to get it right we had to cache the concatenated files rather than re-reading them for each request; the file I/O has a lot of overhead.
Here's a list of PHP cache tools. From a cursory look at the docs for XCache, you should be able to write a PHP file that collects all of your individual files, concatenates them, and then stores the result in memory to be served as a resource.
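On the client side the request itself can stay simple. A rough sketch, assuming a hypothetical concat.php endpoint that takes a comma-separated list of file names and returns the files joined with an agreed delimiter:

    // Ask the server for several micro files in a single round trip.
    function loadFiles(names) {
      return fetch('concat.php?files=' + encodeURIComponent(names.join(',')))
        .then(function (response) { return response.text(); })
        .then(function (body) {
          // Split on the delimiter the server is assumed to insert between files.
          return body.split('\n--file-boundary--\n');
        });
    }

    loadFiles(['a.txt', 'b.txt', 'c.txt']).then(function (parts) {
      console.log(parts.length + ' files received in one request');
    });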
Is it possible to download files with the help of AJAX requests via XHR2? I am able to upload files to the server, but I cannot find a way to download them.
PHP Version 5.4.12
Apache/2.4.4 (Win64) PHP/5.4.12
I need to find a solution to be able to download very large files (up to 4GB). To save server memory, the files need to be downloaded in chunks. I need to monitor download progress too, to provide feedback especially on large files.
I have tried many things, but nothing works well. I run out of PHP memory, it takes very long for the download to start, I cannot monitor the progress, cURL is very slow, X-Sendfile provides no download progress, or I cannot run two PHP scripts at once (one sending the data while the other monitors the progress).
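For the browser side, here is a minimal sketch of an XHR2 download with progress reporting. It assumes the PHP script streams the file in chunks and sends a Content-Length header; the URL and file name are placeholders, and note the finished blob is still held by the browser until it is saved:

    // Download a large file via XHR2 and report progress as bytes arrive.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'download.php?file=big.zip', true); // placeholder URL
    xhr.responseType = 'blob';                          // keep the body as binary

    xhr.onprogress = function (e) {
      if (e.lengthComputable) {
        console.log(Math.round(e.loaded / e.total * 100) + '% downloaded');
      }
    };

    xhr.onload = function () {
      if (xhr.status === 200) {
        // Hand the finished blob to the user as a normal file download.
        var url = URL.createObjectURL(xhr.response);
        var a = document.createElement('a');
        a.href = url;
        a.download = 'big.zip';                         // placeholder file name
        document.body.appendChild(a);
        a.click();
        URL.revokeObjectURL(url);
      }
    };

    xhr.send();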
Simperium looks like an awesome way to sync data across a variety of platforms and to deal with on/offline access from mobile.
For a project I'm working on, some of the data is in the form of generated image and video files. I can't find any information about whether it is possible to sync this kind of data through Simperium (I guess I could base64-encode the images, but that seems like a hack).
Or would I need to sync the URLs and then manually download these resources and somehow store them locally?
Simperium has basic support for binary files on the iOS side, currently in testing. This isn't yet available in the JavaScript library, but it will be. The way it works is similar to what you described. Simperium can handle the syncing of both a URL and its associated binary content in cases where that makes sense.
On iOS, binary files are stored to the local file system (though small files can indeed be stored as base64 encoded strings if you prefer).
In JavaScript, if you're working on the client side, the situation is less clear given the storage limits imposed by browsers, but you always have the option of syncing and using standard links, depending on what you're trying to do. On the server side, there are of course more options. If you have some use cases to share, you should get in touch.