Force IE to re-download text file on load - javascript

My JavaScript has a function, run onload, that takes a .txt file and creates an array from it. The page works fine in Chrome, but doesn't update in IE -- IE seems to cache the .txt file and re-create the array from the cache, ignoring any updates made to the .txt. Is there any way to force IE to re-download the .txt before creating the array, so the user isn't working with an outdated version of the information?
edit: Code! (I changed the file pathname; all else is the same.)
function createArray() {
    var txtFile = new XMLHttpRequest();
    txtFile.open("GET", "http://PATHNAME/names.txt", true);
    txtFile.onreadystatechange = function() {
        if (txtFile.readyState === 4) {
            if (txtFile.status === 200 || txtFile.status === 0) {
                nameArray = txtFile.responseText.split("\n");
            }
        }
    };
    txtFile.send(null);
}
Furthermore, the file is stored on the server in the same folder as the page that displays the data, one level above the scripts folder. So the directories look like:
page.html
names.txt
SCRIPTS FOLDER
    array.js

GET requests are cached; force the browser to fetch a fresh copy by changing the URL:
txtFile.open("GET", "http://PATHNAME/names.txt?ts=" + new Date().getTime(), true);
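A minimal sketch of a reusable helper for this (the function name is my own; it just appends a timestamp so each request URL is unique):

```javascript
// Append a cache-busting timestamp parameter to a URL.
// Handles URLs that already have a query string.
function addCacheBuster(url) {
    var separator = url.indexOf("?") === -1 ? "?" : "&";
    return url + separator + "ts=" + new Date().getTime();
}

// Usage inside createArray():
// txtFile.open("GET", addCacheBuster("http://PATHNAME/names.txt"), true);
```

Alternatively, if you control the server, serving the file with a Cache-Control: no-cache response header avoids filling caches with one-off URLs.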

Related

Large blob file in Javascript

I have an XHR object that downloads a 1 GB file.
function getFile(callback) {
    var xhr = new XMLHttpRequest();
    xhr.onload = function () {
        if (xhr.status == 200) {
            callback.apply(xhr);
        } else {
            console.log("Request error: " + xhr.statusText);
        }
    };
    xhr.open('GET', 'download', true);
    xhr.onprogress = updateProgress;
    xhr.responseType = "arraybuffer";
    xhr.send();
}
But the File API can't load all of that into memory, even from a worker; it throws an out-of-memory error...
btn.addEventListener('click', function() {
    getFile(function() {
        var worker = new Worker("js/saving.worker.js");
        worker.onmessage = function(e) {
            saveAs(e.data); // FileSaver.js creates a URL from the blob... but it's too large
        };
        worker.postMessage(this.response);
    });
});
Web Worker
onmessage = function (e) {
    var view = new DataView(e.data, 0);
    var file = new File([view], 'file.zip', {type: "application/zip"});
    postMessage(file); // was postMessage('file'), which sends the string, not the File
};
I'm not trying to compress the file; it is already compressed by the server.
I thought about storing it in IndexedDB first, but I'll have to load the blob or file anyway. Even if I request it by byte ranges, sooner or later I will have to build this giant blob.
I want to create a blob: URL and hand it to the user after the browser has downloaded it.
I'll use the FileSystem API for Google Chrome, but I want something that works in Firefox too. I looked into the FileHandle API, but found nothing.
Do I have to build a Firefox extension to do the same thing the FileSystem API does for Google Chrome?
Ubuntu 32-bit.
Loading 1 GB+ with Ajax isn't convenient just for the sake of monitoring download progress, while filling up the memory.
Instead I would just send the file with a Content-Disposition header, so the browser saves it to disk.
There are however ways to work around that and still monitor progress. One option is to have a second WebSocket that signals how much has been downloaded while you download normally with a GET request; the other option is described at the bottom.
I know you talked about using Blink's sandboxed filesystem in the conversation, but it has some drawbacks. It may need permission if using persistent storage. It only allows 20% of the available disk space that is left. And if Chrome needs to free some space, it will throw away other domains' temporary storage, least recently used first. Besides, it doesn't work in private mode.
Not to mention that support for it is being dropped and it may never land in other browsers, though it will most likely not be removed, since many sites still depend on it.
The only way to process this large file is with streams. That is why I have created a StreamSaver. This is only going to work in Blink (chrome & opera) ATM but it will eventually be supported by other browsers with the whatwg spec to back it up as a standard.
fetch(url).then(res => {
    // One idea is to get the filename from the Content-Disposition header...
    const size = ~~res.headers.get('Content-Length')
    const fileStream = streamSaver.createWriteStream('filename.zip', size)
    const writeStream = fileStream.getWriter()
    // Later you will be able to just simply do
    // res.body.pipeTo(fileStream)
    // instead of pumping
    const reader = res.body.getReader()
    const pump = () => reader.read()
        .then(({ value, done }) => {
            // here you know how large the value (chunk) is and you can
            // figure out the download speed/progress by comparing it to the size
            return done
                ? writeStream.close()
                : writeStream.write(value).then(pump)
        })
    // Start the reader
    pump().then(() =>
        console.log('Closed the stream, done writing')
    )
})
This will not take up any memory.
I have a theory: if you split the file into chunks, store them in IndexedDB, and later merge them together, it will work.
A blob isn't made of data... it's more like a pointer to where a file can be read from.
Meaning that if you store the chunks in IndexedDB and then do something like this (using FileSaver or an alternative):
finalBlob = new Blob([blob_A_fromDB, blob_B_fromDB])
saveAs(finalBlob, 'filename.zip')
But I can't confirm this, since I haven't tested it; it would be good if someone else could.
Blob is cool until you want to download a large file. There is a 600 MB limit (Chrome) for blobs, since everything is stored in memory.
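The chunk-merge theory above can be sketched without the IndexedDB plumbing. Here an in-memory array stands in for the object store (an assumption for illustration only; real code would read the blobs back from IndexedDB):

```javascript
// Pretend each element came back from an IndexedDB read; in a real
// implementation these would be Blob values stored during the download.
const chunkStore = [
    new Blob(['chunk-one-']),  // 10 bytes
    new Blob(['chunk-two'])    // 9 bytes
];

// Merging does not eagerly copy the bytes: the new Blob just references
// its parts, which is what makes the theory plausible for large files.
const finalBlob = new Blob(chunkStore, { type: 'application/zip' });

// In the browser you would then hand it to the user:
// saveAs(finalBlob, 'filename.zip');      // FileSaver.js
// or: URL.createObjectURL(finalBlob)
```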

javascript - reading in an external local .txt file to load data into an array

I currently have javascript code (please see below) that searches an array for a month/day combination, and if it is found, assigns the name of a .jpg file (which is used for the background image of a page). Instead of hard-coding all of the data in the array, I'd like to be able to create an external .txt file with month/day codes and associated image file names, that could be read and loaded into the array. Thanks for your help!
var ourdates = ['0000','0118','0215','0530','0614','0704','0911','1111','1207'];
if (ourdates.indexOf(monthday) != -1) {
    ourimage = "flag";
}
If you mean loading it from your server, that's a classic use-case for ajax, frequently combined with JSON:
var ourdates = null;
var xhr = new XMLHttpRequest();
xhr.open("GET", "/path/to/your/data");
xhr.onreadystatechange = function() {
    if (xhr.readyState == 4 && xhr.status == 200) {
        ourdates = JSON.parse(xhr.responseText);
        // call something that uses `ourdates`
    }
};
xhr.send(null);
If you mean from the user's computer, my answer here shows how to do that with the File API. Doing that requires an <input type="file"> input (or drag-and-drop event) that the user uses to grant access to the file to your script. You can't read a file from their machine without them specifically giving you access to the file.
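For the local-file case, a sketch of the File API route (the parsing helper and the wiring function are my own, for illustration):

```javascript
// Turn the file's text into the date-code array used above.
function parseDates(text) {
    return text.split("\n")
        .map(function (line) { return line.trim(); })
        .filter(function (line) { return line.length > 0; });
}

// Browser-only wiring: read a user-selected file from an
// <input type="file"> element the user interacts with.
function watchFileInput(input, onDates) {
    input.addEventListener("change", function () {
        var reader = new FileReader();
        reader.onload = function () { onDates(parseDates(reader.result)); };
        reader.readAsText(input.files[0]);
    });
}
```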

Access JSON property value before loading the contents of the file

I have an angularjs project which retrieves JSON files from a server and uses the contents to display the data in the screen.
I'm using a service to load the data, and this service calls the server for a new JSON file every 2 seconds (I removed that from the code below for simplicity).
var data = $resource(':file.json', {}, {
    query: {method: 'GET', params: {file: '#file'}}
});
this.load = function(file, myFunction) {
    data.query({file: file}, function(data) {
        myFunction(data);
    });
};
Now, these files can be really big and sometimes there's no need to process the file because there are no changes from the previous one received. I have a property in the JSON file with the version number, and I should not process the file unless that version number is higher than the one in the previous file.
I can do that by calling the query service, which loads the file contents into a js object and then check the version, if the file is really big it might take a while to load it. Is there a way to access that property value (version) ONLY and then, depending on it, load the file into a js object?
EDIT: My guess is that loading a 1 MB JSON file just to check a version number inside it might take a while (or maybe not, and that $resource action is really fast; does anyone know?), but I'm not sure it can be done any other way, since I'm checking a specific property inside the file.
Many thanks in advance.
HTML5 and JavaScript now provide a File API which can be used to read a file line by line. You can find information about this feature here:
http://www.html5rocks.com/en/tutorials/file/dndfiles/
This will slice the full file string and take just the first line (assuming the version is in there):
data.substr(0, data.indexOf("\n"));
--
Bonus:
Also in this answer you will find out how to read the first line of a file:
https://stackoverflow.com/a/12227851/2552259
var XHR = new XMLHttpRequest();
XHR.open("GET", "http://hunpony.hu/today/changelog-en.txt", true);
XHR.send();
XHR.onload = function () {
    console.log(XHR.responseText.slice(0, XHR.responseText.indexOf("\n")));
};
Another question with the same topic:
https://stackoverflow.com/a/6861246/2552259
var txtFile = new XMLHttpRequest();
txtFile.open("GET", "http://website.com/file.txt", true);
txtFile.onreadystatechange = function() {
    if (txtFile.readyState === 4) { // document is ready to parse
        if (txtFile.status === 200) { // file was found
            allText = txtFile.responseText;
            lines = txtFile.responseText.split("\n");
        }
    }
};
txtFile.send(null);
Do you have access to the JSON files?
I'm not sure how you generate them, but you could try adding the version number to the filename and checking whether a newer filename exists. I have not tested this, but maybe it's worth a try.
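Another option, assuming the server honors HTTP Range requests (which the question doesn't confirm): fetch only the first few hundred bytes and pull the version out of that fragment. The helpers below are my own sketch; extracting the version from a partial JSON string with a regex is a deliberate shortcut, since a truncated fragment won't parse as JSON:

```javascript
// Pull "version" out of a (possibly truncated) JSON fragment.
function extractVersion(fragment) {
    var match = /"version"\s*:\s*(\d+)/.exec(fragment);
    return match ? parseInt(match[1], 10) : null;
}

// Browser-only part: request just the first 256 bytes of the file.
function fetchVersion(url, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.setRequestHeader("Range", "bytes=0-255");
    xhr.onload = function () {
        // Status 206 (Partial Content) means Range was honored;
        // 200 means we received the whole file anyway.
        callback(extractVersion(xhr.responseText));
    };
    xhr.send();
}
```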

How to read a local json file?

I have seen similar questions here but I just can't understand them.
I am building a small web page and I want to read a .json file from my file system and get the object in it.
The web page is also local and the .json file is in the same folder as the .html file.
How to do that on my Ubuntu machine without using any servers and without jquery if it is possible?
Here's some vanilla JavaScript XMLHttpRequest code, which takes into account IE's ActiveX object quirks:
var useActiveX = typeof ActiveXObject !== 'undefined';

function loadJSON(file, callback) {
    var xobj;
    if (useActiveX) {
        xobj = new ActiveXObject('Microsoft.XMLHTTP');
    } else {
        xobj = new XMLHttpRequest();
    }
    xobj.callback = callback;
    if (xobj.overrideMimeType) {
        xobj.overrideMimeType('application/json');
    }
    xobj.open('GET', file, false);
    xobj.onreadystatechange = function() {
        if (this.readyState === 4) {
            this.callback(this);
        }
    };
    xobj.send(null);
}
Then you just run it by feeding it a file path and a callback function:
loadJSON('filename.json', function(obj) {
    alert(obj.responseText);
});
You can simply append a <script> tag to your page, with its src pointing to the local .js file in the same folder. You don't need to use Ajax.
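A sketch of that approach. Renaming the file to data.js and wrapping the JSON in an assignment are my additions, not part of the answer above:

```javascript
// data.js -- the JSON payload wrapped in a global assignment:
var myData = { "name": "example", "items": [1, 2, 3] };

// In page.html, load it with an ordinary script tag, then the
// global is available to any later script:
// <script src="data.js"></script>
// <script> console.log(myData.items.length); </script>
```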

Get different image file formats in JavaScript variable

I'm developing my own portfolio website, which is based on a JavaScript gallery. The script shows and preloads images by tracking the current position, and it works brilliantly when it only has to load one file type. Here's an extract:
var $current = 1;
var $sourceImage = 'path-to-images/'+$current+'.jpg';
var $newImage = new Image();
$newImage.src = $sourceImage;
But what if the directory holds more than one file type, for example: 1.jpg 2.jpg 3.gif 4.png ...? What's the best way to find the extension of the file that exists on the server and pass it to the variable?
Thanks for any advice.
To check if a file exists with JavaScript you have to send an ajax request:
var req = this.window.ActiveXObject ? new ActiveXObject("Microsoft.XMLHTTP") : new XMLHttpRequest();
if (!req) {
    throw new Error('XMLHttpRequest not supported');
}
// HEAD results are usually shorter (faster) than GET
req.open('HEAD', url, false);
req.send(null);
if (req.status == 200) {
    console.log('file exists');
} else {
    console.log('file does not exist');
}
from phpjs.
Your options are limited when using only JavaScript. The simplest way is to have an array containing all of the file names:
var imageFiles = ["1.jpg", "2.jpg", "3.gif", "4.png"];
However, this may be undesirable if there are a large number of images.
Alternatively, you can write a page in PHP (or any language of your choice) that returns all the images in the directory as a JSON array.
["1.jpg", "2.jpg", "3.gif", "4.png"]
Then just use a framework such as jQuery to request the page; getJSON() would work nicely here. You can always reinvent the wheel, but I highly recommend a framework for AJAX.
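A third option, sketched below: probe the likely extensions one by one using the Image object's onerror fallback. The helper names and the extension list are assumptions for illustration:

```javascript
// Build the candidate URLs for one gallery slot.
function candidateUrls(basePath, index, extensions) {
    return extensions.map(function (ext) {
        return basePath + index + "." + ext;
    });
}

// Browser-only: try each candidate in order until one loads.
function loadFirstExisting(urls, onFound) {
    var img = new Image();
    img.onload = function () { onFound(img); };
    img.onerror = function () {
        if (urls.length > 1) loadFirstExisting(urls.slice(1), onFound);
    };
    img.src = urls[0];
}

// loadFirstExisting(candidateUrls('path-to-images/', 3, ['jpg', 'gif', 'png']),
//                   function (img) { /* use img.src */ });
```

This costs one failed request per missing extension, so the JSON-listing approach above scales better for large galleries.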
