I'm trying to read all the files in a user's directory and display their content in a text box.
Reading single files works perfectly; however, when I try to read a whole directory, things get weird.
While iterating through a directory, only the last file in the directory is read correctly. This behavior is consistent no matter how many files are in the directory.
Here's the code I use for reading the files:
results.forEach(function(item) {
    reader = new FileReader();
    // This line is reached
    console.log("filename: " + item.name);
    item.file(function(File) {
        // This one only for the last file in that directory
        reader.readAsText(File);
        console.log("success");
    });
    // This line is reached
    console.log("read: " + item.name);
});
Here's the log (from the dev tools):
filename: app.js
read: app.js
filename: main.js
read: main.js
filename: SharedPreferences.js
read: SharedPreferences.js
filename: KeyConstants.js
read: KeyConstants.js
success
If you have any questions, please ask them. I've been trying this for hours now and I'm slowly getting tired of failing over and over...
This happens because FileReader works asynchronously: it starts a task (reading the file) and returns immediately while the rest of your code keeps executing. If you want to do something with the result of each file as soon as its load has finished, you need to use the onloadend handler:
reader.onloadend = function(evt) {
    // file is loaded
    // do something with evt.target object
};
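For example (a minimal sketch, assuming results and item are the directory entries from the question), each iteration can get its own reader and its own onloadend handler:

results.forEach(function(item) {
    item.file(function(file) {
        // A new FileReader per file, so later iterations don't overwrite it
        var reader = new FileReader();
        reader.onloadend = function(evt) {
            // evt.target.result holds the file's text once loading has finished
            console.log("read " + item.name + ": " + evt.target.result.length + " characters");
        };
        reader.readAsText(file);
    });
});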
My final solution:
Like abcdn said, the problem was that I was overwriting the reader with a new one on every iteration.
I solved this by using JavaScript closures (which I had no idea existed, because I'm coming from C#).
Here's the full code I used in the end:
chrome.fileSystem.chooseEntry({type: "openDirectory"}, function(dir) {
    readFolderAsArrayBuffer(dir, function() {
        console.log("read folder");
    });
});
function readFolderAsArrayBuffer(dir, callback) {
    if (dir && dir.isDirectory) {
        var reader = dir.createReader();
        var handlefile = function(entries) {
            for (var i = 0; i < entries.length; i++) {
                // The immediately invoked function creates a closure, so each
                // file callback sees its own entry instead of the loop's last value
                (function(fileEntry, number) {
                    console.log("reading entry " + number);
                    fileEntry.file(function(file) {
                        handleread(fileEntry, file);
                        console.log("reading " + fileEntry.name);
                    });
                })(entries[i], i);
            }
            // Signal the caller that all entries have been dispatched
            if (callback) callback();
        };
        var handleerror = function() {
            console.log("error");
        };
        reader.readEntries(handlefile, handleerror);
    }
}
var handleread = function(fileEntry, file) {
    var fileReader = new FileReader();
    fileReader.onloadend = function(evt) {
        console.log("Read file: " + fileEntry.name + " with the content: " + evt.target.result);
    };
    fileReader.readAsText(file);
};
This reads a whole user-selected directory and outputs each file's content to the console.
I'm trying to display images that can be edited in the back end, but they have to be available while offline too. To accomplish this I'm downloading the images with the following method (which is the only one that worked for me):
download_save_picture(picture_url) {
    let splited_url = picture_url.split('/');
    var filename = splited_url[splited_url.length - 1];
    var config = { responseType: 'blob' as 'blob' };
    this.httpClient.get(picture_url, config).subscribe(
        data => {
            this.file.writeFile(this.file.dataDirectory, filename, data, { replace: true });
        },
        err => console.error(err),
        () => { }
    );
}
And then I can verify that these files exist and that their size is non-zero, so I guess everything is correct up to this point.
this.file.checkFile(this.file.dataDirectory, filename)
    .then(() => {
        alert("File Exist!!!"); // I always enter here
    })
    .catch((err) => {
        alert("ERR : " + err);
    });

this.file.resolveLocalFilesystemUrl(this.file.dataDirectory + filename).then((entry: any) => {
    entry.getMetadata((metadata) => {
        alert("SIZE: " + metadata.size);
    });
}).catch((error) => {
    alert(error);
});
So the next step is to display the image, which is at the path this.file.dataDirectory + filename. How can I do this?
After searching for a solution and reading, I understand that I have to:
1. Load the file's binary content.
2. Convert this binary content to Base64.
3. Then display it with src="data:image/jpeg;base64," + {{image_in_base64}}.
But so far I've not been able to do steps 1 (load the file content) and 2 (convert it to Base64). How can I do that?
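For what it's worth, if you are using the Ionic Native File plugin (as the writeFile call above suggests), readAsDataURL should cover steps 1 and 2 in a single call. This is only a sketch; image_src is a placeholder property for whatever the template binds to:

// Sketch: read the saved file back as a data URL (steps 1 and 2 in one call).
// this.file is the same Ionic Native File service used in writeFile above;
// image_src is a placeholder property.
load_picture(filename) {
    this.file.readAsDataURL(this.file.dataDirectory, filename)
        .then(dataUrl => {
            // dataUrl already looks like "data:image/jpeg;base64,..."
            this.image_src = dataUrl;
        })
        .catch(err => console.error(err));
}

In the template that could then be bound with something like <img [src]="image_src">.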
In the end it was easier and faster to use localStorage instead of files.
First I made the function download_save_picture(picture_url), which saves the content of the image at picture_url as Base64 in localStorage; the key is everything after the last /, so if we use the URL https://www.w3schools.com/w3css/img_lights.jpg the content will be saved under icon.img_lights.jpg.
download_save_picture(picture_url) {
    let splited_url = picture_url.split('/');
    var name = splited_url[splited_url.length - 1];
    if (localStorage.getItem('icon.' + name) === null) {
        var config = { responseType: 'blob' as 'blob' };
        this.httpClient.get(picture_url, config).subscribe(
            data => {
                var reader = new FileReader();
                reader.readAsDataURL(data);
                reader.onload = function() {
                    window.localStorage.setItem('icon.' + name, reader.result);
                };
            },
            err => console.error(err),
            () => { }
        );
    }
}
Then in the view I display the image with <img src={{useLocalImage(item.image)}}>, where useLocalImage simply returns the content saved in localStorage.
useLocalImage(picture_url) {
    let splited_url = picture_url.split('/');
    var name = splited_url[splited_url.length - 1];
    var icon_name = window.localStorage.getItem('icon.' + name);
    return icon_name;
}
Following is the code that worked for me.
<input #fileinput type="file" [(ngModel)]="file_obj" (click)="resetFileInput()" (change)="onUploadChange()"/>
Then in your TypeScript code:
@ViewChild('fileinput') file_input;
file_obj: any;
onUploadChange() {
    const file = this.file_input.nativeElement.files[0];
    const fReader = new FileReader();
    fReader.readAsDataURL(file);
    fReader.onloadend = function(event) {
        // do whatever you want with the result here.
        console.log(event.target['result']);
    };
}

resetFileInput() {
    this.file_input.nativeElement.value = '';
}
UPDATE - If you are using the Ionic Native File or Cordova File plugin
An Ionic Native file is different from the browser File object.
There seems to be a method called getFile(), which returns a FileEntry object.
That entry has a .file() method, which returns a native File object.
You can then use a FileReader to read that file as a data URL with its readAsDataURL method.
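Roughly, that chain would look like this (a sketch only; 'example.jpg' is a placeholder file name and error handling is minimal):

// Sketch: FileEntry -> native File -> FileReader -> data URL.
// 'example.jpg' is a placeholder; replace it with your file name.
this.file.resolveDirectoryUrl(this.file.dataDirectory)
    .then(dirEntry => this.file.getFile(dirEntry, 'example.jpg', { create: false }))
    .then(fileEntry => {
        fileEntry.file(file => {
            const reader = new FileReader();
            reader.onloadend = () => {
                // reader.result is something like "data:image/jpeg;base64,..."
                console.log(reader.result);
            };
            reader.readAsDataURL(file);
        });
    })
    .catch(err => console.error(err));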
There are many questions similar to this one; however, when I used the code they provided, it didn't work. My code is as follows:
function write(fs) {
    fs.root.getFile('archive.txt', {create: true}, function(fileEntry) {
        // Create a FileWriter object for our FileEntry (archive.txt).
        fileEntry.createWriter(function(fileWriter) {
            fileWriter.onwriteend = function(e) {
                console.log('Write completed.');
            };
            fileWriter.onerror = function(e) {
                console.log('Write failed: ' + e.toString());
            };
            var blob = new Blob([prompt("MESSAGE: ")], {type: 'text/plain'});
            fileWriter.write(blob);
        }, errorHandler);
    }, errorHandler);
}
function onInitFs(fs) {
    fs.root.getFile('archive.txt', {}, function(fileEntry) {
        // Get a File object representing the file,
        // then use FileReader to read its contents.
        fileEntry.file(function(file) {
            var reader = new FileReader();
            reader.onloadend = function(e) {
                var txtArea = document.createElement('textarea');
                txtArea.value = this.result;
                document.body.appendChild(txtArea);
            };
            reader.readAsText(file);
        }, errorHandler);
    }, errorHandler);
}
window.requestFileSystem(window.TEMPORARY, 5*1024*1024 /*5MB*/, onInitFs, errorHandler);
The file archive.txt does exist, but when I call the function it doesn't work, so instead I used window.requestFileSystem(), which I found on a website. However, when I run this code from GitHub, it still doesn't work.
Also, could someone tell me a way to read from and write to a file without using PHP, since this is all in an HTML file on GitHub, without Git? I have another file on GitHub in the same directory as this one. Would I need to include the full path rather than just archive.txt?
It's not possible without server-side code.
I asked a similar question a while ago.
Does anyone know why using 'fileEntry.file' keeps failing in my Windows 8 app?
If I use the following code it fails:
Windows.Storage.StorageFile.getFileFromApplicationUriAsync(new Windows.Foundation.Uri(cordova.file.applicationDirectory + 'www/assets/pages/en/navigation.html')).done(usethisfile, fail);
function usethisfile(fileEntry) {
    console.log("I'm going to use the file... " + fileEntry.path);
    fileEntry.file(function (file) {
        var reader = new FileReader();
        reader.onloadend = function() {
            console.log("Successful file read: " + this.result);
        };
        reader.readAsText(fileEntry);
    }, onErrorReadFile);
}
but if I remove the 'fileEntry.file' part it works fine:
Windows.Storage.StorageFile.getFileFromApplicationUriAsync(new Windows.Foundation.Uri(cordova.file.applicationDirectory + 'www/assets/pages/en/navigation.html')).done(usethisfile, fail);
function usethisfile(fileEntry) {
    console.log("I'm going to use the file... " + fileEntry.path);
    //fileEntry.file(function (file) {
    var reader = new FileReader();
    reader.onloadend = function() {
        console.log("Successful file read: " + this.result);
    };
    reader.readAsText(fileEntry);
    //}, onErrorReadFile);
}
The official docs say to use 'fileEntry.file': https://cordova.apache.org/docs/en/latest/reference/cordova-plugin-file/index.html and I already have the app running on both the Android and the Apple stores so I'm hoping I can continue to use all the current functions that already use 'fileEntry.file' for the Windows version.
The error I get is:
0x800a01b6 - JavaScript runtime error: Object doesn't support property or method 'file'.
I'm using Cordova via the command line, and Visual Studio to run it, if that helps at all.
Not 100% sure, but try adding the event argument when you define the onloadend handler.
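For instance (only a sketch of the handler itself; the surrounding code stays as in the question):

reader.onloadend = function (evt) {
    // Read the result off the event argument instead of relying on `this`
    console.log("Successful file read: " + evt.target.result);
};
reader.readAsText(file);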
I'm trying to read the first byte of the selected file.
But when I select a large file (>100 MB) I get an error: "NotReadableError".
See the code below. Is an "array buffer" really a buffer, or does it just load the whole thing into memory, so that I MUST use File#slice?
function readFile(file) {
    var reader = new FileReader();
    reader.onload = function() {
        var buffer = reader.result;
        var view = new Int8Array(buffer);
        try {
            view.forEach(function(v, index, array) {
                console.log(v);
                alert("ok - " + v);
                throw "BreakException";
            });
        } catch (e) {
            if (e !== "BreakException") throw e;
        }
    };
    reader.onerror = function() {
        alert("error");
        console.log(reader.error);
    };
    reader.readAsArrayBuffer(file);
}

var fileField = document.getElementById("file");
fileField.onchange = function(e) {
    var file = e.target.files[0];
    readFile(file);
};
<form>
    <input id="file" type="file"/>
</form>
An ArrayBuffer is really a buffer, an in-memory buffer. That's how buffers work. Your code tries to load the whole file into memory. To access specific ranges of a file without loading the whole file into memory, you must use Blob.slice (Files implement all the methods of Blobs), as you suspected.
Example:
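A minimal sketch (reusing the file from the question's input element) that slices off just the first byte before handing it to the FileReader:

function readFirstByte(file) {
    var reader = new FileReader();
    reader.onload = function() {
        var view = new Int8Array(reader.result);
        console.log("first byte: " + view[0]);
    };
    reader.onerror = function() {
        console.log(reader.error);
    };
    // Only the first byte is loaded into memory, not the whole file
    reader.readAsArrayBuffer(file.slice(0, 1));
}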
I have
var r = new FileReader();
r.onload = function(e) {
    drawGraph(r.result);
};
r.readAsText(f);
drawing a graph from the file f input by the user.
Is there a way to check whether the file f has been changed and then reload its content without the user needing to pick the file again?
Yes, this can be achieved using Node.js's filesystem API, which provides a watch function.
Node.js filesystem API docs: https://nodejs.org/api/fs.html
Similar question:
Observe file changes with node.js
fs.watch('somedir', function (event, filename) {
    console.log('event is: ' + event);
    if (filename) {
        console.log('filename provided: ' + filename);
    } else {
        console.log('filename not provided');
    }
});