Access JSON property value before loading the contents of the file - javascript

I have an AngularJS project which retrieves JSON files from a server and uses their contents to display the data on the screen.
I'm using a service to load the data, and this service calls the server for a new JSON file every 2 seconds (I removed that from the code below for simplicity).
var data = $resource(':file.json', {}, {
  query: {method: 'GET', params: {file: '@file'}}
});

this.load = function(file, myFunction) {
  data.query({file: file}, function(data) {
    myFunction(data);
  });
};
Now, these files can be really big and sometimes there's no need to process the file because there are no changes from the previous one received. I have a property in the JSON file with the version number, and I should not process the file unless that version number is higher than the one in the previous file.
I can do that by calling the query service, which loads the file contents into a JS object, and then checking the version, but if the file is really big it might take a while to load. Is there a way to access ONLY that property value (version) and then, depending on it, load the file into a JS object?
EDIT: My guess is that loading a 1 MB JSON file just to check a version number inside it might take a while (or maybe not, and that $resource action is really fast, does anyone know?), but I'm not really sure it can be done any other way, as I'm checking a specific property inside the file.
Many thanks in advance.

HTML5 and JavaScript now provide a File API which can be used to read a file line by line. You can find information regarding this feature here:
http://www.html5rocks.com/en/tutorials/file/dndfiles/
This will slice the full file contents (as a string) and take just the first line (assuming the version is in there):
data.substr(0, data.indexOf("\n"));
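For example, a minimal sketch of skipping the expensive JSON.parse when the version hasn't changed (this assumes the version property sits on the first line of the file and that the previous version is cached in a variable, both assumptions for illustration; the file itself is still downloaded):
// Sketch: pull the version out of the first line before parsing the whole file.
// Assumes the file starts like {"version": 42, ... with the version on line one.
var lastVersion = 0; // hypothetical cache of the previously processed version

function processIfNewer(rawText) {
  var firstLine = rawText.substr(0, rawText.indexOf("\n"));
  var match = firstLine.match(/"version"\s*:\s*(\d+)/);
  var version = match ? parseInt(match[1], 10) : 0;

  if (version > lastVersion) {
    lastVersion = version;
    var obj = JSON.parse(rawText); // only pay for the full parse when the file is newer
    // ... process obj ...
  }
}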
--
Bonus:
Also in this answer you will find out how to read the first line of a file:
https://stackoverflow.com/a/12227851/2552259
var XHR = new XMLHttpRequest();
XHR.open("GET", "http://hunpony.hu/today/changelog-en.txt", true);
XHR.send();
XHR.onload = function () {
  console.log(XHR.responseText.slice(0, XHR.responseText.indexOf("\n")));
};
Another question with the same topic:
https://stackoverflow.com/a/6861246/2552259
var txtFile = new XMLHttpRequest();
txtFile.open("GET", "http://website.com/file.txt", true);
txtFile.onreadystatechange = function() {
  if (txtFile.readyState === 4) { // document is ready to parse
    if (txtFile.status === 200) { // file was found
      var allText = txtFile.responseText;
      var lines = txtFile.responseText.split("\n");
    }
  }
};
txtFile.send(null);

Do you have access to the JSON files?
I'm not sure how you generate your JSON files, but you could try adding the version number to the filename and checking whether a newer filename exists. I have not tested this, but maybe it's worth a try.
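A rough sketch of that idea (the data-vN.json naming scheme, the HEAD probe, and the loadFullFile helper are assumptions for illustration, not part of the original setup):
// Sketch: probe for the next versioned file before downloading anything big.
// Assumes files are published as data-v1.json, data-v2.json, ... (hypothetical naming).
var currentVersion = 1;

function checkForNewerFile() {
  var nextFile = 'data-v' + (currentVersion + 1) + '.json';
  var probe = new XMLHttpRequest();
  probe.open('HEAD', nextFile, true); // HEAD: only headers, no body transferred
  probe.onload = function() {
    if (probe.status === 200) {
      currentVersion += 1;
      // a newer file exists, so now it is worth doing the full GET / $resource query
      loadFullFile(nextFile); // hypothetical function that does the real load
    }
  };
  probe.send();
}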

Related

Rails file download using send_data with follow-up action

I am working on a code base where I need to allow the user to download a PDF document that already resides on AWS S3. I have implemented a download concern that was used for a previous feature.
For this feature, I need to update the UI (a progress stepper) after the user has completed the file download. I was initially thinking that this would be as simple as:
User clicks download
API call is made where the file is downloaded using send_data. In this API call, I'd also update the Foo model to change state to indicate that the user has downloaded the file;
Execute a redirect_to request.referer to reload the data. The changed state in Foo will be responsible for showing the updated progress in the UI;
I was mistakenly thinking that this was going to be simple. The reasons for complexity:
send_data is already rendering data, so I can't refresh the page using redirect_to as this triggers a multiple render error;
send_data does not work with the remote: true option, so requesting data via an AJAX link and updating the ERB template is out;
I can write everything into a JS on-click function, but this seems like a bit of a hack. I probably need to retrieve the file directly from AWS and skip my API? I suspect that I might run into CORS issues, as I don't have control over the server.
This is what my rails download method looks like currently:
def download
  attachment = Attachment.find_by_id(params[:attachment_id])
  content = send_data(
    attachment.file.read,
    filename: "#{attachment.title}.#{attachment.file.file.extension}",
    type: attachment.content_type,
    disposition: "attachment",
  )
end
The JS code that basically worked looks like this, where all the relevant paths & filenames are passed to the JS via data attributes:
$(document).on("click", "#download", function(e){
e.preventDefault();
const data = $('#temp-information').data();
var req = new XMLHttpRequest();
req.open("GET", data.path, true);
req.responseType = "blob";
const filename = data.title;
req.onload = function (event) {
var blob = req.response;
console.log(blob.size);
var link=document.createElement('a');
link.href=window.URL.createObjectURL(blob);
link.download= filename;
document.body.appendChild(link);
link.click();
};
if (typeof window.navigator.msSaveBlob !== 'undefined') {
// Fix to work in IE11
window.navigator.msSaveBlob(blob, filename);
} else {
req.send();
}
});
What is the most effective & rails'y way of handling a file download & updating the UI after the download has been completed?
It's not 100% clear what you're trying to accomplish. If you're trying to let the user see download progress, I'm not sure that you really need to do anything except send_data, and most browsers will then begin downloading the file, including showing a progress bar.
Since it seems you want to do something after the file download is complete, that's quite a bit trickier. There's nothing Rails-specific about the problem, and the approach you have used looks pretty reasonable to me.
On this SO thread you'll find a lengthy discussion of this problem and various ways that people have tried to solve it. In general the solutions follow the same basic structure, which is to simply poll the server.
In your Rails app you could implement that roughly as follows. Suppose you added a field status to your attachment model...
def download
  attachment = Attachment.find_by_id(params[:attachment_id])
  attachment.update(status: "downloading")
  send_data(
    attachment.file.read,
    filename: "#{attachment.title}.#{attachment.file.file.extension}",
    type: attachment.content_type,
    disposition: "attachment",
  )
  attachment.update(status: "complete")
end
Then you can add an endpoint that returns the status of a file. Thus when the user starts to download the file you begin to poll that endpoint.
def attachment_status
  attachment = Attachment.find_by_id(params[:attachment_id])
  respond_to do |format|
    format.json do
      render json: { status: attachment.status }
    end
  end
end
Then in JavaScript, for example using HttpPromise:
var http = new HttpPromise();

function poll(doneFn) {
  http.get("/status.json") // you will need to set your actual status endpoint path here
    .success(function(data, xhr) {
      if (data.status == "complete") {
        doneFn();
      }
    });
}

function downloadFinished() {
  // ... do whatever you want on finish here ...
}

setInterval(function() { poll(downloadFinished); }, 5000);
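If HttpPromise isn't part of your stack, the same poll can be written with a plain XMLHttpRequest (a sketch assuming the same hypothetical /status.json endpoint):
// Sketch: poll the status endpoint with a plain XMLHttpRequest instead of HttpPromise.
function pollWithXhr(doneFn) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/status.json", true); // replace with your actual status endpoint path
  xhr.onload = function() {
    if (xhr.status === 200) {
      var data = JSON.parse(xhr.responseText);
      if (data.status == "complete") {
        doneFn();
      }
    }
  };
  xhr.send();
}

setInterval(function() { pollWithXhr(downloadFinished); }, 5000);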
It's not the most beautiful thing in the world, but it should get the job done.
Good luck!

Large blob file in Javascript

I have an XHR object that downloads a 1 GB file.
function getFile(callback) {
  var xhr = new XMLHttpRequest();
  xhr.onload = function () {
    if (xhr.status == 200) {
      callback.apply(xhr);
    } else {
      console.log("Request error: " + xhr.statusText);
    }
  };
  xhr.open('GET', 'download', true);
  xhr.onprogress = updateProgress;
  xhr.responseType = "arraybuffer";
  xhr.send();
}
But the File API can't load all of that into memory; even from a worker it throws an out-of-memory error...
btn.addEventListener('click', function() {
  getFile(function() {
    var worker = new Worker("js/saving.worker.js");
    worker.onmessage = function(e) {
      saveAs(e.data); // FileSaver.js creates a URL from the blob... but it's too large
    };
    worker.postMessage(this.response);
  });
});
Web Worker
onmessage = function (e) {
  var view = new DataView(e.data, 0);
  var file = new File([view], 'file.zip', {type: "application/zip"});
  postMessage(file);
};
I'm not trying to compress the file; it's already compressed by the server.
I thought about storing it first in IndexedDB, but I'd have to load the blob or file anyway; even if I request it by byte ranges, sooner or later I'll have to build this giant blob.
I want to create a blob: URL and hand it to the user after it has been downloaded by the browser.
I'll use the FileSystem API for Google Chrome, but I want to make something for Firefox too; I looked into the FileHandle API but found nothing...
Do I have to build an extension for Firefox in order to do the same thing the FileSystem API does for Google Chrome?
Ubuntu 32-bit.
Loading 1 GB+ with Ajax isn't convenient; you just end up monitoring the download progress and filling up the memory.
Instead I would just send the file with a Content-Disposition header to save the file.
There are, however, ways to work around this and still monitor the progress. Option one is to have a second WebSocket that signals how much you have downloaded while you download normally with a GET request. The other option is described further down.
I know you talked about using Blink's sandboxed filesystem in the conversation, but it has some drawbacks. It may need permission if using persistent storage. It only allows 20% of the remaining available disk. And if Chrome needs to free some space, it will throw away other domains' temporary storage, starting with the least recently used. Besides, it doesn't work in private mode.
Not to mention that support for it is being dropped and it may never end up in other browsers - but they will most likely not remove it, since many sites still depend on it.
The only way to process this large a file is with streams. That is why I have created StreamSaver. This is only going to work in Blink (Chrome & Opera) ATM, but it will eventually be supported by other browsers, with the WHATWG spec to back it up as a standard.
fetch(url).then(res => {
  // One idea is to get the filename from the Content-Disposition header...
  const size = ~~res.headers.get('Content-Length')
  const fileStream = streamSaver.createWriteStream('filename.zip', size)
  const writeStream = fileStream.getWriter()
  // Later you will be able to just simply do
  // res.body.pipeTo(fileStream)
  // instead of pumping

  const reader = res.body.getReader()
  const pump = () => reader.read()
    .then(({ value, done }) => {
      // here you know how large the value (chunk) is and you can
      // figure out the download speed/progress when comparing it to the size
      return done
        ? writeStream.close()
        : writeStream.write(value).then(pump)
    })

  // Start the reader
  pump().then(() =>
    console.log('Closed the stream, Done writing')
  )
})
This will not take up any memory
I have a theory: if you split the file into chunks, store them in IndexedDB, and then later merge them together, it will work.
A blob isn't made of data... it's more like a pointer to where a file can be read from.
Meaning if you store them in IndexedDB and then do something like this (using FileSaver or an alternative):
finalBlob = new Blob([blob_A_fromDB, blob_B_fromDB])
saveAs(finalBlob, 'filename.zip')
But I can't confirm this since I haven't tested it; it would be good if someone else could.
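An untested sketch of that chunk-in-IndexedDB idea (the database name 'chunkDB', the store name 'chunks', and the assumption that each stored value is a Blob chunk are all made up for illustration):
// Untested sketch: read previously saved chunk blobs out of IndexedDB and merge them.
function readAllChunks(done) {
  var open = indexedDB.open('chunkDB', 1);
  open.onupgradeneeded = function() {
    open.result.createObjectStore('chunks', { autoIncrement: true });
  };
  open.onsuccess = function() {
    var db = open.result;
    var store = db.transaction('chunks', 'readonly').objectStore('chunks');
    var req = store.getAll(); // each stored value is assumed to be a Blob chunk
    req.onsuccess = function() {
      done(req.result);
    };
  };
}

readAllChunks(function(chunks) {
  // The chunk blobs only reference data on disk, so this should stay cheap in memory
  var finalBlob = new Blob(chunks, { type: 'application/zip' });
  saveAs(finalBlob, 'filename.zip'); // FileSaver.js, as in the snippet above
});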
Blob is cool until you want to download a large file; there is a 600 MB limit (Chrome) for blobs, since it stores everything in memory.

javascript - reading in an external local .txt file to load data into an array

I currently have javascript code (please see below) that searches an array for a month/day combination, and if it is found, assigns the name of a .jpg file (which is used for the background image of a page). Instead of hard-coding all of the data in the array, I'd like to be able to create an external .txt file with month/day codes and associated image file names, that could be read and loaded into the array. Thanks for your help!
var ourdates = ['0000', '0118', '0215', '0530', '0614', '0704', '0911', '1111', '1207'];
if (ourdates.indexOf(monthday) != -1) {
  ourimage = "flag";
}
If you mean loading it from your server, that's a classic use-case for ajax, frequently combined with JSON:
var ourdates = null;
var xhr = new XMLHttpRequest();
xhr.open("GET", "/path/to/your/data");
xhr.onreadystatechange = function() {
  if (xhr.readyState == 4 && xhr.status == 200) {
    ourdates = JSON.parse(xhr.responseText);
    // call something that uses `ourdates`
  }
};
xhr.send(null);
If you mean from the user's computer, my answer here shows how to do that with the File API. Doing that requires an <input type="file"> input (or a drag-and-drop event) that the user uses to grant your script access to the file. You can't read a file from their machine without them specifically giving you access to it.
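For completeness, a minimal sketch of that File API route (the input id "datefile" and the expectation that the file contains a JSON array are assumptions for illustration):
// Sketch: read a user-selected file with the File API and fill `ourdates` from it.
// Assumes <input type="file" id="datefile"> in the page and a JSON array in the file.
document.getElementById('datefile').addEventListener('change', function(event) {
  var file = event.target.files[0];
  if (!file) {
    return;
  }
  var reader = new FileReader();
  reader.onload = function() {
    ourdates = JSON.parse(reader.result); // e.g. ["0000","0118","0215", ...]
    // call something that uses `ourdates`
  };
  reader.readAsText(file);
});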

load json from external file

I've never touched JSON, but I just need some bits clearing up so that I can research how to solve my problem properly.
I have
- HTML file
- JS file
- JSON file
All are linked in the HTML file.
My challenge is to load the JSON file and add together some of the values that are located within it.
So far I'm struggling to find anything other than jQuery to open it... I can find things about parsing, but many examples use code inline, and I'm lost as to whether they're coding in the JS file or the JSON one!
I'm seeing AJAX mentioned too, but I plead ignorance to its use so far (I'm very new to JS).
So, what would you recommend to load it?
What should I research to learn about obtaining the values and adding them together?
Loading a JSON file:
jQuery:
$.getJSON('/my/url', function(data) {
  console.log(data);
});
Non-jQuery:
var request = new XMLHttpRequest();
request.open('GET', '/my/url', true);
request.onload = function() {
  if (request.status >= 200 && request.status < 400) {
    // Success!
    var data = JSON.parse(request.responseText);
    console.log(data);
  } else {
    // We reached our target server, but it returned an error
  }
};
request.onerror = function() {
  // There was a connection error of some sort
};
request.send();
Note that the console.log prints the contents of the JSON file to the JavaScript console. You can do whatever you want with the "data" variable.
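Since the goal is to add together some of the values, here is a small sketch of that step (the JSON shape and the "items"/"price" property names are made up for illustration):
// Sketch: assuming the JSON file looks like {"items": [{"price": 3}, {"price": 5}]}
var request = new XMLHttpRequest();
request.open('GET', '/my/url', true);
request.onload = function() {
  if (request.status >= 200 && request.status < 400) {
    var data = JSON.parse(request.responseText);
    var total = data.items.reduce(function(sum, item) {
      return sum + item.price;
    }, 0);
    console.log('Total: ' + total); // 8 for the example above
  }
};
request.send();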

How to load a PDF into a blob so it can be uploaded?

I'm working on a testing framework that needs to pass files to the drop listener of a PLUpload instance. I need to create blob objects to pass inside a Data Transfer Object of the sort generated on a drag/drop event. I have it working fine for text files and image files. I would like to add support for PDFs, but it seems that I can't get the encoding right after retrieving the response. The response is coming back as text because I'm using Sahi to retrieve it in order to avoid cross-domain issues.
In short: the string I'm receiving is UTF-8 encoded and therefore the content looks like you opened a PDF with a text editor. I am wondering how to convert this back into the necessary format to create a blob, so that after the document gets uploaded everything looks okay.
What steps do I need to go through to convert the UTF-8 string into the proper blob object? (Yes, I am aware I could submit an XHR request, change the responseType property, and (maybe) get closer; however, due to complications with the way Sahi operates, I'm not going to explain here why I would prefer not to go this route.)
Also, I'm not familiar enough but I have a hunch maybe I lose data by retrieving it as a string? If that's the case I'll find another approach.
The existing code and the most recent approach I have tried is here:
var data = '%PDF-1.7%����115 0 obj<</Linearized 1/L ...'
var arr = [];
var utf8 = unescape(encodeURIComponent(data));
for (var i = 0; i < utf8.length; i++) {
  arr.push(utf8.charCodeAt(i));
}
var file = new Blob(arr, {type: 'application/pdf'});
It looks like you were close. I just did this for a site which needed to read a PDF from another website and drop it into a fileuploader plugin. Here is what worked for me:
var url = "http://some-websites.com/Pdf/";
//You may not need this part if you have the PDF data locally already
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
if (this.readyState == 4 && this.status == 200) {
//console.log(this.response, typeof this.response);
//now convert your Blob from the response into a File and give it a name
var fileOfBlob = new File([this.response], 'your_file.pdf');
// Now do something with the File
// for filuploader (blueimp), just use the add method
$('#fileupload').fileupload('add', {
files: [ fileOfBlob ],
fileInput: $(this)
});
}
}
xhr.open('GET', url);
xhr.responseType = 'blob';
xhr.send();
I found help on the XHR as blob here. Then this SO answer helped me with naming the File. You might be able to use the Blob by itself, but you won't be able to give it a name unless it's passed into a File.
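If changing responseType really isn't an option and all you have is the PDF content as a string, here is a rough sketch of the conversion the question asks about (with the caveat the asker already raised: if the string was decoded as UTF-8 somewhere along the way, some bytes may already be lost; the function and file names are hypothetical):
// Sketch: turn a binary string into a typed array, then wrap it in a Blob/File.
// `data` is assumed to be the raw PDF content as a string, one character per byte.
function pdfStringToFile(data, filename) {
  var bytes = new Uint8Array(data.length);
  for (var i = 0; i < data.length; i++) {
    bytes[i] = data.charCodeAt(i) & 0xff; // keep only the low byte of each char code
  }
  var blob = new Blob([bytes], { type: 'application/pdf' });
  return new File([blob], filename, { type: 'application/pdf' });
}

var fileOfBlob = pdfStringToFile(data, 'your_file.pdf');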
