FileReader not triggering load event - javascript

I have this function which I am trying to unit-test. It is called when a file is selected from an HTML <input>. I am creating a FileReader, which I expect to fire a load event, and I want my function _handleReaderLoaded to be called when load fires.
handleFileSelect(files: ArrayLike<File>) {
    console.log("got file upload event: ");
    console.log(" image attachment count: ", this.currentImageAttachmentCount);
    if (this.currentImageAttachmentCount >= this.maxImageAttachmentCount) {
        console.log("reached max attachment size");
        this.showDialog("You can't attach more files", new DialogContext("", ""));
        return;
    }
    console.log("files selected:", files);
    console.log("total selected: ", files.length);
    for (let i = 0; i < files.length; i++) {
        console.log("files name:", files[i].name);
        console.log("files object:", files[i]);
    }
    //working with only 1 file at the moment
    let file = files[0];
    console.log("file at index 0 ", file);
    if (files && file) {
        console.log("reading file");
        let reader: FileReader = new FileReader();
        reader.onload = this._handleReaderLoaded.bind(this);
        reader.onerror = this.debugPrintFileEvents.bind(this); //what is the purpose of bind and what does this refer to?
        reader.onloadend = this.debugPrintFileEvents.bind(this);
        reader.onloadstart = this.debugPrintFileEvents.bind(this);
        reader.onprogress = this.debugPrintFileEvents.bind(this);
        reader.onabort = this.debugPrintFileEvents.bind(this);
        //The readAsBinaryString method is used to start reading the contents of the specified Blob or File.
        reader.readAsBinaryString(file);
        this.currentImageAttachmentCount++;
    }
}
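As an aside on the bind question in the code comment above: Function.prototype.bind returns a new function whose this is permanently set to the given value, so when the FileReader later invokes the handler, this still refers to the component rather than the reader. A minimal, standalone illustration (the names here are made up for the example, not taken from the component):

// Without bind, "this" inside the callback would be the FileReader that fired
// the event, not the object that owns the handler method.
const owner = {
    name: "owner-object",
    _handleReaderLoaded(event) {
        console.log(this.name, "saw event:", event.type);
    }
};

const reader = new FileReader();
reader.onload = owner._handleReaderLoaded.bind(owner); // forces "this" to be owner
reader.readAsText(new File(["foo"], "foo.txt"));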
But I notice that the load event is not getting fired. The unit test case is:
fit('should upload maximum 3 image files', () => {
    let newPracticeQuestionComponent = component;
    expect(newPracticeQuestionComponent.currentImageAttachmentCount).toBe(0);
    let file1 = new File(["foo1"], "foo1.txt");
    spyOn(newPracticeQuestionComponent, '_handleReaderLoaded');
    spyOn(newPracticeQuestionComponent, 'showDialog');
    newPracticeQuestionComponent.handleFileSelect([file1]);
    expect(newPracticeQuestionComponent.currentImageAttachmentCount).toBe(1);
});
The following are the debug prints in the browser window. You'll notice that there is no load event, thus my function _handleReaderLoaded is not getting executed
got file upload event:
context.js:1972 image attachment count: 0
context.js:1972 files selected: [File(4)]
context.js:1972 total selected: 1
context.js:1972 files name: foo1.txt
context.js:1972 files object: File(4) {name: "foo1.txt", lastModified: 1548101766552, lastModifiedDate: Mon Jan 21 2019 20:16:06 GMT+0000 (Greenwich Mean Time), webkitRelativePath: "", size: 4, …}
context.js:1972 file at index 0 File(4) {name: "foo1.txt", lastModified: 1548101766552, lastModifiedDate: Mon Jan 21 2019 20:16:06 GMT+0000 (Greenwich Mean Time), webkitRelativePath: "", size: 4, …}
context.js:1972 reading file
context.js:1972 got file reader event ProgressEvent {isTrusted: true, lengthComputable: true, loaded: 0, total: 4, type: "loadstart", …}
context.js:1972 got file reader event ProgressEvent {isTrusted: true, lengthComputable: true, loaded: 4, total: 4, type: "progress", …}
context.js:1972 got file reader event ProgressEvent {isTrusted: true, lengthComputable: true, loaded: 4, total: 4, type: "loadend", …}
Interestingly, if I change the onload handler to the following, then I see that the handler is called:
reader.onload = function () {
    console.log('onload event for reader ', reader);
};
got file reader event ProgressEvent {isTrusted: true, lengthComputable: true, loaded: 0, total: 4, type: "loadstart", …}
context.js:1972 got file reader event ProgressEvent {isTrusted: true, lengthComputable: true, loaded: 4, total: 4, type: "progress", …}
context.js:1972 onload event for reader _global.(anonymous function) {__zone_symbol__originalInstance: FileReader} <<----- THIS GETS CALLED
context.js:1972 got file reader event ProgressEvent {isTrusted: true, lengthComputable: true, loaded: 4, total: 4, type: "loadend", …}
The _handleReaderLoaded method is:
_handleReaderLoaded(event: FileReaderProgressEvent) {
    console.log("got load event of file reader ", event);
    let thumbnailTemplateViewRef: EmbeddedViewRef<any>;
    /*
     When the read operation is finished, the result attribute contains the raw binary data from the file.
     */
    let binaryString = event.target.result;
    this.base64textString = btoa(binaryString);
    console.log(this.base64textString);
    /*show image as thumbnail*/
    let src = "data:image/png;base64,";
    src += this.base64textString;
    //create new ids for div, img and a in the template
    ++this.consecutiveIdGenerator;
    let divId = "thumbnail-" + (this.consecutiveIdGenerator);
    console.log("div id " + divId);
    let imgId = "img-" + (this.consecutiveIdGenerator);
    console.log("img id " + imgId);
    let closeId = "close-button-" + (this.consecutiveIdGenerator);
    console.log("close Id is " + closeId);
    //TODOM - define context as a class so that it can be used in new question and question details
    thumbnailTemplateViewRef = this.thumbnailContainerRef.createEmbeddedView(this.thumbnailTemplateRef, {
        option: {
            divId: divId,
            imgId: imgId,
            closeId: closeId,
            imgSrc: src
        }
    });
    //store the reference of the view in the context of the template. This will be used later to retrieve the index of the view when deleting the thumbnail
    thumbnailTemplateViewRef.context.option.viewRefId = thumbnailTemplateViewRef;
}

Urrrg!!!! I am spying on _handleReaderLoaded, so obviously its original implementation will not be called! I should have used spyOn(newPracticeQuestionComponent,'_handleReaderLoaded').and.callThrough();
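For reference, a minimal sketch of what the corrected spy setup could look like (same Jasmine test as above; the done callback and timeout are added here only because FileReader fires its load event asynchronously, so the assertion on the spy has to wait):

fit('should upload maximum 3 image files', (done) => {
    let newPracticeQuestionComponent = component;
    expect(newPracticeQuestionComponent.currentImageAttachmentCount).toBe(0);
    let file1 = new File(["foo1"], "foo1.txt");
    // callThrough records the call but still runs the original _handleReaderLoaded
    spyOn(newPracticeQuestionComponent, '_handleReaderLoaded').and.callThrough();
    spyOn(newPracticeQuestionComponent, 'showDialog');
    newPracticeQuestionComponent.handleFileSelect([file1]);
    expect(newPracticeQuestionComponent.currentImageAttachmentCount).toBe(1);
    // FileReader events arrive on a later task, so assert after a short delay
    setTimeout(() => {
        expect(newPracticeQuestionComponent._handleReaderLoaded).toHaveBeenCalled();
        done();
    }, 100);
});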

Related

File only 15MB - drive.files.insert failed with error: Request Too Large

I have a short script to OCR jpg files by converting them to GDOCS. It works fine for JPGs around 5MB, but for a 600dpi scan, where the file size is more like 15MB, I get the following error for a single image:
5:40:58 PM Notice Execution started
5:41:03 PM Error
GoogleJsonResponseException: API call to drive.files.insert failed with error: Request Too Large
convertJpegToGdoc # convertJpegToGdoc.gs:27
The relevant line of code is:
Drive.Files.insert({title: file.getName(), parents: [{id: dstFolderId}]}, file.getBlob(), {ocr: true});
I am aware of the Quotas for Google Services; the error I am getting is not one of those. The execution time shows that the script is not exceeding the 6 minutes listed in the docs. BTW, I can convert multiple images, each approx 1.5MB, with 24-character JPG file basenames, into gdocs without problems using this script.
The Google Drive API example for inserting docs in JavaScript suggests, perhaps, that I may need to upgrade my code to handle larger files, but I am not sure where to start.
Any suggestion appreciated.
Full code:
// this function does OCR while copying from ocrSource to ocrTarget
function convertJpegToGdoc() {
    var files = DriveApp.getFolderById(srcFolderId).getFilesByType(MimeType.JPEG);
    while (files.hasNext()) {
        var file = files.next();
        Drive.Files.insert({title: file.getName(), parents: [{id: dstFolderId}]}, file.getBlob(), {ocr: true});
    }
    // this moves files from ocrSource to ocrScriptTrash
    // handy for file counting & keeping ocrSource free for next batch of files
    var inputFolder = DriveApp.getFolderById(srcFolderId);
    var processedFolder = DriveApp.getFolderById(trshFolderId);
    var files = inputFolder.getFiles();
    while (files.hasNext()) {
        var file = files.next();
        file.moveTo(processedFolder);
    }
}
I believe your goal is as follows.
You want to convert a JPEG to a Google Document with OCR using the Drive API.
When I saw your question, I remembered that I had experienced the same situation. At that time, even when a resumable upload was used, the Request Too Large error could not be avoided. I suspected the file size, the image size, the resolution of the image, and so on, but I couldn't find a clear reason for the issue. So, in that case, I used the following workaround.
My workaround is to reduce the image size, which in turn reduces the file size; this removed the issue for me. In this answer, I would like to propose this workaround.
When your script is modified, it becomes as follows.
From:
Drive.Files.insert({title: file.getName(), parents: [{id: dstFolderId}]}, file.getBlob(), {ocr: true});
To:
try {
    Drive.Files.insert({ title: file.getName(), parents: [{ id: dstFolderId }] }, file.getBlob(), { ocr: true });
} catch ({ message }) {
    if (message.includes("Request Too Large")) {
        const link = Drive.Files.get(file.getId()).thumbnailLink.replace(/=s.+/, "=s2000");
        Drive.Files.insert({ title: file.getName(), parents: [{ id: dstFolderId }] }, UrlFetchApp.fetch(link).getBlob(), { ocr: true });
    }
}
In this modification, when the Request Too Large error occurs, the image size is reduced by modifying the thumbnail link. In this sample, the horizontal size is set to 2000 pixels while keeping the aspect ratio.
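As a rough illustration of what that replace does (the URL below is made up; the real one comes from Drive.Files.get(fileId).thumbnailLink), only the trailing size parameter of the thumbnail link is rewritten:

// Hypothetical thumbnail link; Drive returns something of this shape with a
// small default size parameter at the end.
var thumbnailLink = "https://lh3.googleusercontent.com/abc123=s220";
var resized = thumbnailLink.replace(/=s.+/, "=s2000");
Logger.log(resized); // https://lh3.googleusercontent.com/abc123=s2000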
Note:
This modified script assumes that the Drive API has already been enabled at Advanced Google services. Please be careful about this.
Added:
The script from your question and my proposed From/To modification are quoted above.
However, your current script is as follows:
// convertJpegToGdoc.js - script converts .jpg to .gdoc files
// Google Script Project - ocrConvert https://script.google.com/home/projects/1sDHfmK4H19gaLxxtXeYv8q7dql5LzoIUHto-OlDBofdsU2RyAn_1zbcr/edit
// clasp location C:\Users\david\Google Drive\ocrRollConversion
// Begin with empty folders (see below)
// Transfer a set of Electoral Roll .JPG from storage into ocrSource folder
// Running this script performs OCR conversions on the .JPG files
// .JPG files are converted to .GDOC & stored in ocrTarget
// The .JPG ocrSource files are transferred to ocrScriptTrash leaving the ocrSource folder empty if all goes well
// Uses Google Drive root folders (~\Google Drive\)
// 1. ocrSource
// 2. ocrTarget
// 3. ocrScriptTrash
// to check Id value open folder in Google Drive then examine URL
let srcFolderId = "###"; //ocrSource
let dstFolderId = "###"; //ocrTarget
let trshFolderId = "###"; //ocrScriptTrash
// this function does OCR while copying from ocrSource to ocrTarget (adjusted try/catch for larger jpgs)
function convertJpegToGdocRev1() {
    var files = DriveApp.getFolderById(srcFolderId).getFilesByType(MimeType.JPEG);
    try {
        Drive.Files.insert({ title: file.getName(), parents: [{ id: dstFolderId }] }, file.getBlob(), { ocr: true });
    } catch ({ message }) {
        if (message.includes("Request Too Large")) {
            const link = Drive.Files.get(file.getId()).thumbnailLink.replace(/=s.+/, "=s2000");
            Drive.Files.insert({ title: file.getName(), parents: [{ id: dstFolderId }] }, UrlFetchApp.fetch(link).getBlob(), { ocr: true });
        }
    }
    // this moves files from ocrSource to ocrScriptTrash
    // handy for file counting & keeping ocrSource free for next batch of files
    var inputFolder = DriveApp.getFolderById(srcFolderId);
    var processedFolder = DriveApp.getFolderById(trshFolderId);
    var files = inputFolder.getFiles();
    while (files.hasNext()) {
        var file = files.next();
        file.moveTo(processedFolder);
    }
}
Unfortunately, it seems that you miscopied my proposed answer. In your current script, the while loop over var files = DriveApp.getFolderById(srcFolderId).getFilesByType(MimeType.JPEG); is missing, so file is never defined inside the try block. Without the try-catch this would throw an error, but because the try-catch swallows it, running the script simply ends with "No result. No errors." I think this is the reason for your current issue.
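As a small illustration of why the error disappears (a standalone sketch, not part of the proposed fix): with the while loop gone, file is never defined, the resulting ReferenceError is caught, and since its message does not contain "Request Too Large" the catch block does nothing, so the run finishes with no output and no error.

function demoSilentFailure() {
    try {
        Drive.Files.insert({ title: file.getName() }); // ReferenceError: file is not defined
    } catch ({ message }) {
        if (message.includes("Request Too Large")) {
            // never reached for a ReferenceError, so the failure is swallowed silently
        }
    }
}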
In order to use my proposed modification, please copy and paste the script correctly. The modified script then becomes as follows.
Modified script:
Please enable Drive API at Advanced Google services.
let srcFolderId = "###"; //ocrSource
let dstFolderId = "###"; //ocrTarget
let trshFolderId = "###"; //ocrScriptTrash

function convertJpegToGdocRev1() {
    var files = DriveApp.getFolderById(srcFolderId).getFilesByType(MimeType.JPEG);
    while (files.hasNext()) {
        var file = files.next();
        var name = file.getName();
        console.log(name); // You can see the file name in the log.
        try {
            Drive.Files.insert({ title: name, parents: [{ id: dstFolderId }] }, file.getBlob(), { ocr: true });
        } catch ({ message }) {
            if (message.includes("Request Too Large")) {
                const link = Drive.Files.get(file.getId()).thumbnailLink.replace(/=s.+/, "=s2000");
                Drive.Files.insert({ title: file.getName(), parents: [{ id: dstFolderId }] }, UrlFetchApp.fetch(link).getBlob(), { ocr: true });
            }
        }
    }
    var inputFolder = DriveApp.getFolderById(srcFolderId);
    var processedFolder = DriveApp.getFolderById(trshFolderId);
    var files = inputFolder.getFiles();
    while (files.hasNext()) {
        var file = files.next();
        file.moveTo(processedFolder);
    }
}

File length is undefined

Using the Filesystem API of the Tizen SDK, I'm getting a JavaScript File object that prints the following data on console.log:
File
created: Thu Dec 14 2017 09:59:51 GMT+0100 (CET)
fullPath: "/opt/share/folder/image.jpg"
get fileSize: function fileSizeGetter() {var _realPath=commonFS_.toRealPath(this.fullPath);var _result=native_.callSync('File_statSync',{location:_realPath});var _aStatObj=native_.getResultObject(_result);return _aStatObj.isFile?_aStatObj.size:undefined;}
isDirectory: false
isFile: true
length: undefined
mode: "rw"
modified: Thu Dec 14 2017 09:59:51 GMT+0100 (CET)
name: "image.jpg"
parent: File
path: "/opt/share/folder/"
readOnly: false
set fileSize: function () {}
__proto__: File
The problem is that the length of the File is undefined. This causes my FileReader readyState to stay at 0 (EMPTY) (or maybe the problem is somewhere else).
Why is my code returning undefined for the length property?
My code:
tizen.filesystem.resolve('/opt/share/folder/image.jpg', function(file) {
    console.log(file);
    var reader = new FileReader();
    console.log(reader);
    reader.readAsArrayBuffer(file);
    reader.onload = fileLoad;
    reader.onerror = function(evt) {
        console.log(evt.target.error.name);
    };
});
Console value for reader:
FileReader
constructor: FileReaderConstructor
error: null
onabort: null
onerror: function (evt) {
onload: function fileLoad(evt) {
onloadend: null
onloadstart: null
onprogress: null
readyState: 0
result: null
__proto__: FileReaderPrototype
Clarification:
Using the file URL to insert the image into a canvas works, and the file does exist on the device.
According to the documentation, length is for File instances representing directories (it tells you how many files and directories the directory contains). For a File actually representing a file, you'd use fileSize.
I don't see a FileReader anywhere in the Tizen file system documentation. Instead, examples reading and writing files use a FileStream via openStream.
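To make that concrete, here is a rough sketch of the Tizen-style approach (the signatures follow the pattern in the Tizen Filesystem docs, so double-check them against your SDK version; reading as base64 is just one option):

tizen.filesystem.resolve('/opt/share/folder/image.jpg', function(file) {
    // for a regular file, fileSize is the property to use; length is only for directories
    console.log('size in bytes: ' + file.fileSize);
    file.openStream('r', function(stream) {
        // read the whole stream as base64 and close it
        var base64Data = stream.readBase64(stream.bytesAvailable);
        stream.close();
        console.log('read ' + base64Data.length + ' base64 characters');
    }, function(error) {
        console.log('openStream failed: ' + error.message);
    });
}, function(error) {
    console.log('resolve failed: ' + error.message);
}, 'r');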

correct implementation of plupload

I'm implementing this upload library; maybe not many people use it, but perhaps somebody can help me figure out how to solve this.
I'm already uploading; the thing is that I want to work with the "uploader" object, like
upload.bind();
I would like to know if anybody here can provide links or help clear up my idea.
Thank you so much.
This is my code:
uploader = $("#uploader").plupload({
// General settings
runtimes: 'html5,flash,silverlight,html4',
url: objMaterial.geturl(),
urlstream_upload: true, //cambiar url en tiempo real
multi_selection: multiSelection,
unique_names: unicoNombre,
// User can upload no more then 20 files in one go (sets multiple_queues to false)
max_file_count: 1,
chunk_size: '1mb',
// Resize images on clientside if we can
filters: {
// Maximum file size
max_file_size: '50mb',
// Specify what files to browse for
mime_types: [
{
title: titulo,
extensions: extensiones
}
]
},
// Rename files by clicking on their titles
rename: true,
// Sort files
sortable: true,
// Enable ability to drag'n'drop files onto the widget (currently only HTML5 supports that)
dragdrop: true,
// Views to activate
views: {
list: true,
thumbs: true, // Show thumbs
active: 'thumbs'
},
// Flash settings
flash_swf_url: '../../js/Moxie.swf',
// Silverlight settings
silverlight_xap_url: '../../js/Moxie.xap'
});
//uploader = $("#uploader").plupload();
uploader = $('#uploader').plupload();
console.log(uploader);
//uploader = $("#flash_uploader").pluploadQueue();
uploader.bind('QueueChanged', function (up, files)
{
files_remaining = uploader.files.length;
});
I want to answer my own question, since I found a solution.
All of these objects are events.
Here is a complete example of how to implement them.
uploader = $("#uploader").plupload({
// General settings
runtimes: 'html5,html4',
url: objMaterial.geturl(),
// Maximum file size
max_file_size: '50mb',
chunk_size: '1mb',
max_file_count: 1,
// Resize images on clientside if we can
resize: {
width: 200,
height: 200,
quality: 90,
crop: true // crop to exact dimensions
},
// Specify what files to browse for
filters: [
{title: "PDF", extensions: "PDF"}
],
// Rename files by clicking on their titles
rename: true,
// Sort files
sortable: true,
// Enable ability to drag'n'drop files onto the widget (currently only HTML5 supports that)
dragdrop: true,
// Views to activate
views: {
list: true,
thumbs: true, // Show thumbs
active: 'thumbs'
},
// Post init events, bound after the internal events
init: {
PostInit: function () {
// Called after initialization is finished and internal event handlers bound
log('[PostInit]');
document.getElementById('uploadfiles').onclick = function () {
uploader.start();
return false;
};
},
Browse: function (up) {
                // Called when file picker is clicked                
            },
            Refresh: function (up) {
                // Called when the position or dimensions of the picker change                 
            }, 
            StateChanged: function (up) {
                // Called when the state of the queue is changed                 
            }, 
            QueueChanged: function (up) {
                // Called when queue is changed by adding or removing files                 
            },
OptionChanged: function (up, name, value, oldValue) {
// Called when one of the configuration options is changed
},
BeforeUpload: function (up, file) {
// Called right before the upload for a given file starts, can be used to cancel it if required
},
            UploadProgress: function (up, file) {
                // Called while file is being uploaded                 
            },
FileFiltered: function (up, file) {
// Called when file successfully files all the filters                 
},
            FilesAdded: function (up, files) {
                // Called when files are added to queue                
                plupload.each(files, function (file) {                     
                });
            },
            FilesRemoved: function (up, files) {
                // Called when files are removed from queue                 
                plupload.each(files, function (file) {                     
                });
            }, 
            FileUploaded: function (up, file, info) {
                // Called when file has finished uploading
jQueryMessage('El archivo se ha enviado exitosamente!', 1);                 
            }, 
            ChunkUploaded: function (up, file, info) {
                // Called when file chunk has finished uploading                 
            },
UploadComplete: function (up, files) {
// Called when all files are either uploaded or failed                 
},
Destroy: function (up) {
// Called when uploader is destroyed                 
},
            Error: function (up, args) {
                // Called when error occurs                 
            }
    },
// Flash settings
flash_swf_url: '/plupload/js/Moxie.swf',
// Silverlight settings
silverlight_xap_url: '/plupload/js/Moxie.xap'
});
I hope this can help you.
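As a side note on the original upload.bind() idea: the same handlers can also be attached after initialization with bind() on the underlying plupload.Uploader instance (a sketch, assuming the jQuery UI widget setup shown above; getUploader is the widget method that exposes that instance):

// Fetch the plupload.Uploader behind the jQuery UI widget, then bind an event.
var up = $("#uploader").plupload("getUploader");
up.bind("FilesAdded", function (up, files) {
    plupload.each(files, function (file) {
        console.log("added: " + file.name);
    });
});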

How to save the data as a string from a csv file in javascript?

I'm making a local HTML5 stats page using Highcharts, and I want to get the data for my charts from a CSV file located on my own laptop. The JavaScript code is:
var arch = new FileReader();
var content = arch.readAsArrayBuffer('./csvs/sample1.csv');
//var content = arch.readAsText('./csvs/sample1.csv'.files);
var sample = $.csv.toArrays(content);
console.log(sample1);
$(function () {
$('#container').highcharts({
xAxis: {
min: -0.5,
max: 5.5
},
yAxis: {
min: 0
},
title: {
text: 'Scatter plot with regression line'
},
series: [{
type: 'line',
name: 'Regression Line',
data: [[0, 1.11], [5, 4.51]],
marker: {
enabled: true
},
states: {
hover: {
lineWidth: 0
}
},
enableMouseTracking: false
}, {
type: 'scatter',
name: 'Observations',
data: sample,
marker: {
radius: 4
}
}]
});
});
I'm using the jquery-csv plugin too, but it doesn't work. I've also tried fopen, with no luck. The console tells me:
Uncaught TypeError: Failed to execute 'readAsArrayBuffer' on 'FileReader': parameter 1 is not of type 'Blob'.
Thanks.
To read a local file you need an input of type file:
function readSingleFile(evt) {
//Retrieve the first (and only!) File from the FileList object
var f = evt.target.files[0];
if (f) {
var r = new FileReader();
r.onload = function(e) {
var contents = e.target.result;
document.getElementById('output').innerHTML = contents;
}
r.readAsText(f);
} else {
alert("Failed to load file");
}
}
document.getElementById('fileinput').addEventListener('change', readSingleFile, false);
<input type="file" id="fileinput"/>
<textarea id="output" cols="60" rows="10"></textarea>
You need to read the file into an object, then pass that object to your FileReader.readAsXXX method.
FileReader.readAsArrayBuffer() doesn't take a string.
Here are the API docs, which may help.
FileReader.readAsArrayBuffer(..) expects a Blob as a parameter, not a string. A blob is binary data read from a file. You can find the documentation about FileReader on MDN.
It tells us that we can pass a File (docs) instead, which we can extract from a FileList (docs).
You cannot read files directly from your computer; that would be a major security breach. What we need to do is have an input element where we either select the file or drag-and-drop the file onto, then read the file and execute your code:
<input type="file" id="myfile"> <input type="button" id="csvbutton" value="Load">
And javascript:
$("#csvbutton").on( "click", function() {
var file = $("#myfile")[0].files[0];
if( file == undefined ) {
//RIP
return;
}
var arch = new FileReader();
var content = arch.readAsArrayBuffer(file);
//etc
} );

How to get selected filepath in dropzone.js

I'm using Dropzone.js for uploading files. Now I want to do some additional validation for some files, but I haven't found a way to get the absolute path of the selected file. Can anyone tell me whether there is any way to get the file path?
This is the file object returned by Dropzone when we add a file:
accepted: true
lastModifiedDate: Wed Dec 17 2014 13:01:03 GMT+0530 (IST)
name: "xxxxxx.pdf"
previewElement: div.dz-preview.dz-file-preview.dz-processing
previewTemplate: div.dz-preview.dz-file-preview.dz-processing
processing: true
size: 407552
status: "uploading"
type: "application/pdf"
upload: Object
webkitRelativePath: ""
xhr: XMLHttpRequest
__proto__: File
The full path comes through in the POST data:
Dropzone.options.yourDropzoneID = {
init: function() {
this.on("sending", function(file, xhr, data) {
if(file.fullPath){
data.append("fullPath", file.fullPath);
}
});
}
};
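Note that browsers don't expose the absolute path of a file on disk for security reasons; file.fullPath, when present, is typically the path relative to a dropped folder. If the goal is extra validation before upload, Dropzone's accept callback can inspect the File object itself (a sketch that can be merged with the init option above; the PDF check is just an example condition):

Dropzone.options.yourDropzoneID = {
    // accept runs for every added file before it is queued for upload
    accept: function (file, done) {
        if (file.type !== "application/pdf") {
            done("Only PDF files are accepted."); // rejecting with a message marks the file as errored
        } else {
            done(); // calling done() with no argument accepts the file
        }
    }
};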
