How to generate a blob URL from an actual URL - JavaScript

How do I convert a video source URL like http://localhost:3000/videos/abc.mp4 to a blob URL?
let dataUrl = "http://localhost:3000/videos/abc.mp4"
this.videoBlobUrl = URL.createObjectURL(dataUrl);

All you are looking for is fetch's blob() function:
fetch("URL")
.then(response => response.blob())
.then(blobData => /* you got the blob data */)
.catch(reason => /* handling errors */)
You can also use URL.createObjectURL(blobData); to create a temporary URL, as noted in the other answer.
For example:
fetch("https://i.picsum.photos/id/682/200/300.jpg")
.then(r => r.blob())
.then(blobData => console.log(URL.createObjectURL(blobData)))
.catch(console.error)
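For the video case in the question, a minimal sketch along the same lines (assuming a video element with id "player" exists on the page; that id is hypothetical, and the object URL should eventually be released with URL.revokeObjectURL):
fetch("http://localhost:3000/videos/abc.mp4")
    .then(response => response.blob())
    .then(blobData => {
        // Create a blob: URL and use it as the video source
        const video = document.getElementById("player"); // hypothetical element
        video.src = URL.createObjectURL(blobData);
    })
    .catch(console.error)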

what is a blob url?
in my opinion, it's not even a real standard on the internets.
i always use my own way of generating a 'blob url'.
for that, i'd use (in javascript) :
var
    dataURL = 'http://localhost:3000/videos/abc.mp4',
    generateRandomID = function() {
        var
            r = '',
            returnLength = 10,
            cipher = 'abc0123';
        for (var i = 0; i < returnLength; i++) {
            r += cipher.substr(Math.random() * cipher.length, 1);
        }
        return r;
    },
    generateBlobURL = function(url) {
        var
            urlStripped = url.replace('http://localhost:3000/videos', ''),
            r = urlStripped + generateRandomID();
        return r;
    },
    dataURL_blob = generateBlobURL(dataURL);
and do not forget to press F12 (the developer tools) to test this code, and to see where the errors in it might be :)

Related

Validating & Converting pdf file type to base 64

I am using this tutorial to create a drag and drop interface in React. In the tutorial, the valid file types are for images.
const validTypes = ['image/jpeg', 'image/jpg', 'image/png', 'image/gif', 'image/x-icon'];
However, since I am customizing my component, I want to use PDF and convert it to base64.
Validate Files
const validateFile = (file) => {
    const validTypes = ['file/pdf'];
    if (validTypes.indexOf(file.type) === -1) {
        return false;
    }
    console.log('valid?', validateFile);
    return true;
};
uploadFiles function
const uploadFiles = () => {
    uploadModalRef.current.style.display = 'block';
    uploadRef.current.innerHTML = 'File(s) Uploading...';
    for (let i = 0; i < validFiles.length; i++) {
        const formData = new FormData();
        formData.append('file', validFiles[i]);
        axios
            .post('/data', formData, {
                onUploadProgress: (progressEvent) => {
                    const uploadPercentage = Math.floor(
                        (progressEvent.loaded / progressEvent.total) * 100
                    );
                    progressRef.current.innerHTML = `${uploadPercentage}%`;
                    progressRef.current.style.width = `${uploadPercentage}%`;
                    if (uploadPercentage === 100) {
                        uploadRef.current.innerHTML = 'File(s) Uploaded';
                        validFiles.length = 0;
                        setValidFiles([...validFiles]);
                        setSelectedFiles([...validFiles]);
                        setUnsupportedFiles([...validFiles]);
                    }
                },
            })
            .catch(() => {
                // If error, display a message on the upload modal
                uploadRef.current.innerHTML = `<span class="error">Error Uploading File(s)</span>`;
                // set progress bar background color to red
                progressRef.current.style.backgroundColor = 'red';
            });
    }
};
I've never done this, but I assumed that for PDF it would be 'file/pdf'. This, however, always returns an invalid file type. This article https://www.geeksforgeeks.org/file-type-validation-while-uploading-it-using-javascript/#:~:text=Using%20JavaScript%2C%20you%20can%20easily,complete%20file%20type%20validation%20code. discusses file type validation, but again it is for images.
How can I check that I have the correct file type and convert it to base64?
In your validTypes array you must include the standard MIME type used for PDFs. According to RFC 3778, it is application/pdf.
And to convert a string to base64, you can use the JavaScript helper function btoa('my_string');.
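A minimal sketch putting both pieces together (assuming a File object coming from the drop handler; FileReader.readAsDataURL is used instead of btoa because it handles binary file contents directly):
const validateFile = (file) => {
    // 'application/pdf' is the registered MIME type for PDF (RFC 3778)
    const validTypes = ['application/pdf'];
    return validTypes.indexOf(file.type) !== -1;
};

// Read a File/Blob as a base64-encoded data URL
const fileToBase64 = (file) =>
    new Promise((resolve, reject) => {
        const reader = new FileReader();
        // reader.result looks like "data:application/pdf;base64,JVBERi0x..."
        reader.onloadend = () => resolve(reader.result);
        reader.onerror = reject;
        reader.readAsDataURL(file);
    });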

Fetch image from API

Q1) In my reactjs application, I am trying to fetch an API from my backend Nodejs server. The API responds with an image file on request.
I can access and see image file on http://192.168.22.124:3000/source/592018124023PM-pexels-photo.jpg
But in my reactjs client side I get this error on console log.
Uncaught (in promise) SyntaxError: Unexpected token � in JSON at position 0
Reactjs:
let fetchURL = 'http://192.168.22.124:3000/source/';
let image = name.map((picName) => {
    return picName;
});
fetch(fetchURL + image)
    .then(response => response.json())
    .then(images => console.log(fetchURL + images));
Nodejs:
app.get('/source/:fileid', (req, res) => {
    const { fileid } = req.params;
    res.sendFile(__dirname + '/data/' + fileid);
});
Is there any better way to do than what I am doing above?
Q2) Also, how can I assign a value to an empty variable (which lives outside the fetch function)
jpg = fetchURL + images;
So I can access it somewhere.
The response from the server is a binary file, not JSON formatted text. You need to read the response stream as a Blob.
const imageUrl = "https://.../image.jpg";

fetch(imageUrl)
    // vvvv
    .then(response => response.blob())
    .then(imageBlob => {
        // Then create a local URL for that image and print it
        const imageObjectURL = URL.createObjectURL(imageBlob);
        console.log(imageObjectURL);
    });
Equivalent to the solution by @maxpaj, but using async and await.
async function load_pic() {
    const url = '<REPLACE-WITH-URL>';
    const options = {
        method: "GET"
    };
    let response = await fetch(url, options);
    if (response.status === 200) {
        const imageBlob = await response.blob();
        const imageObjectURL = URL.createObjectURL(imageBlob);
        const image = document.createElement('img');
        image.src = imageObjectURL;
        const container = document.getElementById("your-container");
        container.append(image);
    } else {
        console.log("HTTP-Error: " + response.status);
    }
}
This question is 4 years old, and I think in 2022 there are many ways to solve this. This is an ES6 version using async calls.
First, I don't know if you are trying to download the image or insert the image into an img tag, so I will assume we want to download the image.
The process is simple: a) fetch the image as a blob; b) create a temporary object URL for the blob with URL.createObjectURL(blob) (note this is an object URL, not a Base64 string); and c) trigger the download using a ghost a tag.
const $btn = document.getElementById('downloadImage');
const url = 'https://s3-ap-southeast-1.amazonaws.com/tksproduction/bmtimages/pY3BnhPQYpTxasKfx.jpeg';

const fetchImage = async url => {
    const response = await fetch(url);
    const blob = await response.blob();
    return blob;
};

const downloadImage = async url => {
    const imageBlob = await fetchImage(url);
    const imageObjectURL = URL.createObjectURL(imageBlob);
    console.log({ imageObjectURL });
    const a = document.createElement('a');
    a.style.setProperty('display', 'none');
    document.body.appendChild(a);
    a.download = url.replace(/^.*[\\\/]/, '');
    a.href = imageObjectURL;
    a.click();
    a.remove();
};

$btn.onclick = event => downloadImage(url);
<button id="downloadImage">Download Image</button>
Note:
Stack Overflow uses sandboxed iframes, so the embedded snippet may not be able to trigger the download, but you can use my codepen
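One small addition worth considering (not part of the original answer): an object URL keeps the blob alive in memory until it is revoked, so releasing it after the click is a cheap cleanup:
a.click();
a.remove();
// Release the blob reference once the download has been triggered
URL.revokeObjectURL(imageObjectURL);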

How to convert an image src from a blob string to data URI

I have a page where the user can paste an image into a content editable div. When I get the image the src returns a string. When I look in debug tools this is what I see:
<img src="blob:http://www.example.com/3955202440-AeFf-4a9e-b82c-cae3822d96d4"/>
How do I convert that to a base64 string?
Here is the test script, http://jsfiddle.net/bt7BU/824/.
// We start by checking if the browser supports the
// Clipboard object. If not, we need to create a
// contenteditable element that catches all pasted data
if (!window.Clipboard) {
    var pasteCatcher = document.createElement("div");
    // Firefox allows images to be pasted into contenteditable elements
    pasteCatcher.setAttribute("contenteditable", "");
    // We can hide the element and append it to the body,
    pasteCatcher.style.opacity = 0.5;
    document.body.appendChild(pasteCatcher);
    // as long as we make sure it is always in focus
    pasteCatcher.focus();
    document.addEventListener("click", function() { pasteCatcher.focus(); });
}

// Add the paste event listener
window.addEventListener("paste", pasteHandler);

/* Handle paste events */
function pasteHandler(e) {
    // We need to check if event.clipboardData is supported (Chrome)
    if (e.clipboardData) {
        // Get the items from the clipboard
        var items = e.clipboardData.items || e.clipboardData.files;
        var itemcount = items ? items.length : 0;
        pasteArea.value = "items found: " + itemcount;
        if (itemcount) {
            // Loop through all items, looking for any kind of image
            for (var i = 0; i < items.length; i++) {
                if (items[i].type.indexOf("image") !== -1) {
                    // We need to represent the image as a file,
                    var blob = items[i].getAsFile();
                    // and use a URL or webkitURL (whichever is available to the browser)
                    // to create a temporary URL to the object
                    var URLObj = window.URL || window.webkitURL;
                    var source = URLObj.createObjectURL(blob);
                    // The URL can then be used as the source of an image
                    createImage(source);
                }
            }
        } else {
            console.log("no items found. checking input");
            // This is a cheap trick to make sure we read the data
            // AFTER it has been inserted.
            setTimeout(checkInput, 1);
        }
    // If we can't handle clipboard data directly (Firefox),
    // we need to read what was pasted from the contenteditable element
    } else {
        console.log("checking input");
        // This is a cheap trick to make sure we read the data
        // AFTER it has been inserted.
        setTimeout(checkInput, 1);
    }
}

/* Parse the input in the paste catcher element */
function checkInput() {
    console.log("check input");
    // Store the pasted content in a variable
    var child = pasteCatcher.childNodes[0];
    // Clear the inner html to make sure we're always
    // getting the latest inserted content
    //pasteCatcher.innerHTML = "";
    //console.log("clearing catcher");
    console.log(child);
    if (child) {
        // If the user pastes an image, the src attribute
        // will represent the image as a base64 encoded string.
        if (child.tagName === "IMG") {
            createImage(child.src);
            reader = new FileReader();
            reader.readAsDataURL(child.src);
            reader.loadend = function(e) {
                console.log(e.target.result);
            }
        }
    }
}

/* Creates a new image from a given source */
function createImage(source) {
    var pastedImage = new Image();
    pastedImage.onload = function(e) {
        //pasteArea.text = pastedImage.src;
        console.log(1);
        console.log(e);
        loadImage.src = e.target.src;
        console.log(loadImage.src);
    }
    pastedImage.src = source;
}
<textarea id="pasteArea" placeholder="Paste Image Here"></textarea>
<img id="loadImage" />
I'm testing this in Safari on Mac.
Since the blob URI is generated automatically by the browser, you can use this, which will download the produced image as a new Blob:
const toDataURL = url => fetch(url)
    .then(response => response.blob())
    .then(blob => new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onloadend = () => resolve(reader.result);
        reader.onerror = reject;
        reader.readAsDataURL(blob);
    }));
Then, inside your function createImage(source), you can call it:
toDataURL(source)
    .then(dataUrl => {
        console.log('RESULT:', dataUrl);
    });
This answer is complementary to @BrunoLM's answer, for when you don't have ES6 or you want to read in a different image type.
ES6:
const toDataURL = url => fetch(url)
    .then(response => response.blob())
    .then(blob => new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.onloadend = () => resolve(reader.result);
        reader.onerror = reject;
        reader.readAsDataURL(blob);
    }));
Not ES6 (seems to work the same):
const toDataURL = function(url) {
    return fetch(url).then(function(response) {
        return response.blob();
    }).then(function(blob) {
        var type = blob.type;
        var size = blob.size;
        return new Promise(function(resolve, reject) {
            const reader = new FileReader();
            reader.onerror = reject;
            reader.readAsDataURL(blob);
            reader.onloadend = function() {
                return resolve(reader.result);
            };
        });
    });
};
Based on my understanding of ES6 (each arrow function followed by its non-ES6 equivalent):
var a = url => fetch(url)
var a = function(url) { return fetch(url) }

var b = (param1, param2) => { fetch(param1, param2) }
var b = function(param1, param2) { fetch(param1, param2) }

var c = () => resolve(reader.result)
var c = function() { return resolve(reader.result) }
Making a call:
toDataURL(url).then(function(dataUrl) {
    console.log("RESULT:" + dataUrl);
});
Note:
The value returned by the above method is of type "image/tiff" when run in Safari on OSX. If you want to specify another type, such as PNG, there is more info on that here.
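One common way to force a specific output type (a hedged sketch, not the code from the linked answer) is to redraw the already-loaded image on a canvas and re-encode it:
// Re-encode a loaded <img> element as a PNG data URL via canvas
function toPngDataURL(img) {
    const canvas = document.createElement("canvas");
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    canvas.getContext("2d").drawImage(img, 0, 0);
    // Throws on cross-origin images unless they were loaded with CORS
    return canvas.toDataURL("image/png");
}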

Chrome memory issue - File API + AngularJS

I have a web app that needs to upload large files to Azure BLOB storage. My solution uses HTML5 File API to slice into chunks which are then put as blob blocks, the IDs of the blocks are stored in an array and then the blocks are committed as a blob.
The solution works fine in IE. On 64 bit Chrome I have successfully uploaded 4Gb files but see very heavy memory usage (2Gb+). On 32 bit Chrome the specific chrome process will get to around 500-550Mb and then crash.
I can't see any obvious memory leaks or things I can change to help garbage collection. I store the block IDs in an array, so obviously there will be some memory creep, but this shouldn't be massive. It's almost as if the File API is holding the whole file it slices in memory.
It's written as an Angular service called from a controller, I think just the service code is pertinent:
(function() {
    'use strict';

    angular
        .module('app.core')
        .factory('blobUploadService', [
            '$http', 'stringUtilities',
            blobUploadService
        ]);

    function blobUploadService($http, stringUtilities) {
        var defaultBlockSize = 1024 * 1024; // Default to 1024KB
        var stopWatch = {};
        var state = {};

        var initializeState = function(config) {
            var blockSize = defaultBlockSize;
            if (config.blockSize) blockSize = config.blockSize;
            var maxBlockSize = blockSize;
            var numberOfBlocks = 1;
            var file = config.file;
            var fileSize = file.size;
            if (fileSize < blockSize) {
                maxBlockSize = fileSize;
            }
            if (fileSize % maxBlockSize === 0) {
                numberOfBlocks = fileSize / maxBlockSize;
            } else {
                numberOfBlocks = parseInt(fileSize / maxBlockSize, 10) + 1;
            }
            return {
                maxBlockSize: maxBlockSize,
                numberOfBlocks: numberOfBlocks,
                totalBytesRemaining: fileSize,
                currentFilePointer: 0,
                blockIds: new Array(),
                blockIdPrefix: 'block-',
                bytesUploaded: 0,
                submitUri: null,
                file: file,
                baseUrl: config.baseUrl,
                sasToken: config.sasToken,
                fileUrl: config.baseUrl + config.sasToken,
                progress: config.progress,
                complete: config.complete,
                error: config.error,
                cancelled: false
            };
        };

        /* config: {
            baseUrl: // baseUrl for blob file uri (i.e. http://<accountName>.blob.core.windows.net/<container>/<blobname>),
            sasToken: // Shared access signature querystring key/value prefixed with ?,
            file: // File object using the HTML5 File API,
            progress: // progress callback function,
            complete: // complete callback function,
            error: // error callback function,
            blockSize: // Use this to override the defaultBlockSize
        } */
        var upload = function(config) {
            state = initializeState(config);
            var reader = new FileReader();
            reader.onloadend = function(evt) {
                if (evt.target.readyState === FileReader.DONE && !state.cancelled) { // DONE === 2
                    var uri = state.fileUrl + '&comp=block&blockid=' + state.blockIds[state.blockIds.length - 1];
                    var requestData = new Uint8Array(evt.target.result);
                    $http.put(uri,
                        requestData,
                        {
                            headers: {
                                'x-ms-blob-type': 'BlockBlob',
                                'Content-Type': state.file.type
                            },
                            transformRequest: []
                        })
                        .success(function(data, status, headers, config) {
                            state.bytesUploaded += requestData.length;
                            var percentComplete = ((parseFloat(state.bytesUploaded) / parseFloat(state.file.size)) * 100).toFixed(2);
                            if (state.progress) state.progress(percentComplete, data, status, headers, config);
                            uploadFileInBlocks(reader, state);
                        })
                        .error(function(data, status, headers, config) {
                            if (state.error) state.error(data, status, headers, config);
                        });
                }
            };
            uploadFileInBlocks(reader, state);
            return {
                cancel: function() {
                    state.cancelled = true;
                }
            };
        };

        function cancel() {
            stopWatch = {};
            state.cancelled = true;
            return true;
        }

        function startStopWatch(handle) {
            if (stopWatch[handle] === undefined) {
                stopWatch[handle] = {};
                stopWatch[handle].start = Date.now();
            }
        }

        function stopStopWatch(handle) {
            stopWatch[handle].stop = Date.now();
            var duration = stopWatch[handle].stop - stopWatch[handle].start;
            delete stopWatch[handle];
            return duration;
        }

        var commitBlockList = function(state) {
            var uri = state.fileUrl + '&comp=blocklist';
            var requestBody = '<?xml version="1.0" encoding="utf-8"?><BlockList>';
            for (var i = 0; i < state.blockIds.length; i++) {
                requestBody += '<Latest>' + state.blockIds[i] + '</Latest>';
            }
            requestBody += '</BlockList>';
            $http.put(uri,
                requestBody,
                {
                    headers: {
                        'x-ms-blob-content-type': state.file.type
                    }
                })
                .success(function(data, status, headers, config) {
                    if (state.complete) state.complete(data, status, headers, config);
                })
                .error(function(data, status, headers, config) {
                    if (state.error) state.error(data, status, headers, config);
                    // called asynchronously if an error occurs
                    // or server returns response with an error status.
                });
        };

        var uploadFileInBlocks = function(reader, state) {
            if (!state.cancelled) {
                if (state.totalBytesRemaining > 0) {
                    var fileContent = state.file.slice(state.currentFilePointer,
                        state.currentFilePointer + state.maxBlockSize);
                    var blockId = state.blockIdPrefix + stringUtilities.pad(state.blockIds.length, 6);
                    state.blockIds.push(btoa(blockId));
                    reader.readAsArrayBuffer(fileContent);
                    state.currentFilePointer += state.maxBlockSize;
                    state.totalBytesRemaining -= state.maxBlockSize;
                    if (state.totalBytesRemaining < state.maxBlockSize) {
                        state.maxBlockSize = state.totalBytesRemaining;
                    }
                } else {
                    commitBlockList(state);
                }
            }
        };

        return {
            upload: upload,
            cancel: cancel,
            startStopWatch: startStopWatch,
            stopStopWatch: stopStopWatch
        };
    }
})();
Are there any ways I can move the scope of objects to help with Chrome's GC? I have seen other people mentioning similar issues, but understood Chromium had resolved some of them.
I should say my solution is heavily based on Gaurav Mantri's blog post here:
http://gauravmantri.com/2013/02/16/uploading-large-files-in-windows-azure-blob-storage-using-shared-access-signature-html-and-javascript/#comment-47480
I can't see any obvious memory leaks or things I can change to help garbage collection. I store the block IDs in an array, so obviously there will be some memory creep, but this shouldn't be massive. It's almost as if the File API is holding the whole file it slices in memory.
You are correct. The new Blobs created by .slice() are being held in memory.
The solution is to call Blob.prototype.close() on the Blob reference when processing Blob or File object is complete.
Note also that the JavaScript in the question creates a new instance of FileReader each time the upload function is called.
4.3.1. The slice method
The slice() method returns a new Blob object with bytes ranging
from the optional start parameter up to but not including the
optional end parameter, and with a type attribute that is the
value of the optional contentType parameter.
Blob instances exist for the life of the document, though a Blob should be garbage collected once it is removed from the Blob URL Store:
9.6. Lifetime of Blob URLs
Note: User agents are free to garbage collect resources removed from
the Blob URL Store.
Each Blob must have an internal snapshot state, which must be
initially set to the state of the underlying storage, if any such
underlying storage exists, and must be preserved through
StructuredClone. Further normative definition of snapshot state can
be found for Files.
4.3.2. The close method
The close() method is said to close a Blob, and must act as
follows:
If the readability state of the context object is CLOSED, terminate this algorithm.
Otherwise, set the readability state of the context object to CLOSED.
If the context object has an entry in the Blob URL Store, remove the entry that corresponds to the context object.
If the Blob object is passed to URL.createObjectURL(), call URL.revokeObjectURL() with the resulting Blob URL, then call .close().
The revokeObjectURL(url) static method
Revokes the Blob URL provided in the string url by removing the corresponding entry from the Blob URL Store. This method must act
as follows:
1. If the url refers to a Blob that has a readability state of CLOSED OR if the value provided for the url argument is
not a Blob URL, OR if the value provided for the url argument does
not have an entry in the Blob URL Store, this method call does
nothing. User agents may display a message on the error console.
2. Otherwise, user agents must remove the entry from the Blob URL Store for url.
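Putting those two steps together as a hedged sketch (Blob.prototype.close() came from a File API draft and most browsers never shipped it, hence the feature check):
const objectUrl = URL.createObjectURL(blob);
// ... use objectUrl ...
// First remove the entry from the Blob URL Store,
URL.revokeObjectURL(objectUrl);
// then close the Blob if the draft method is available
if (typeof blob.close === "function") {
    blob.close();
}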
You can view the result of these calls by opening
chrome://blob-internals
and reviewing the details before and after the calls that create and close the Blob.
For example, from
xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Refcount: 1
Content Type: text/plain
Type: data
Length: 3
to
xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Refcount: 1
Content Type: text/plain
following call to .close(). Similarly from
blob:http://example.com/c2823f75-de26-46f9-a4e5-95f57b8230bd
Uuid: 29e430a6-f093-40c2-bc70-2b6838a713bc
An alternative approach could be to send the file as an ArrayBuffer, or in chunks of array buffers, then reassemble the file at the server.
Or you can call the FileReader constructor, FileReader.prototype.readAsArrayBuffer(), and the load event of FileReader each only once.
At the load event of FileReader, pass the ArrayBuffer to a Uint8Array, then use a ReadableStream, TypedArray.prototype.subarray(), .getReader(), and .read() to get N chunks of the ArrayBuffer as TypedArrays at pull from the Uint8Array. When N chunks equalling the .byteLength of the ArrayBuffer have been processed, pass the array of Uint8Arrays to the Blob constructor to recombine the file parts into a single file in the browser; then send the Blob to the server.
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<input id="file" type="file">
<br>
<progress value="0"></progress>
<br>
<output for="file"><img alt="preview"></output>
<script type="text/javascript">
    const [input, output, img, progress, fr, handleError, CHUNK] = [
        document.querySelector("input[type='file']")
        , document.querySelector("output[for='file']")
        , document.querySelector("output img")
        , document.querySelector("progress")
        , new FileReader
        , (err) => console.log(err)
        , 1024 * 1024
    ];
    progress.addEventListener("progress", e => {
        progress.value = e.detail.value;
        e.detail.promise();
    });
    let [chunks, NEXT, CURR, url, blob] = [Array(), 0, 0];
    input.onchange = () => {
        NEXT = CURR = progress.value = progress.max = chunks.length = 0;
        if (url) {
            URL.revokeObjectURL(url);
            if (blob.hasOwnProperty("close")) {
                blob.close();
            }
        }
        if (input.files.length) {
            console.log(input.files[0]);
            progress.max = input.files[0].size;
            progress.step = progress.max / CHUNK;
            fr.readAsArrayBuffer(input.files[0]);
        }
    }
    fr.onload = () => {
        const VIEW = new Uint8Array(fr.result);
        const LEN = VIEW.byteLength;
        const {type, name: filename} = input.files[0];
        const stream = new ReadableStream({
            pull(controller) {
                if (NEXT < LEN) {
                    controller
                        .enqueue(VIEW.subarray(NEXT, !NEXT ? CHUNK : CHUNK + NEXT));
                    NEXT += CHUNK;
                } else {
                    controller.close();
                }
            },
            cancel(reason) {
                console.log(reason);
                throw new Error(reason);
            }
        });
        const [reader, processData] = [
            stream.getReader()
            , ({value, done}) => {
                if (done) {
                    return reader.closed.then(() => chunks);
                }
                chunks.push(value);
                return new Promise(resolve => {
                    progress.dispatchEvent(
                        new CustomEvent("progress", {
                            detail: {
                                value: CURR += value.byteLength,
                                promise: resolve
                            }
                        })
                    );
                })
                .then(() => reader.read().then(data => processData(data)))
                .catch(e => reader.cancel(e))
            }
        ];
        reader.read()
            .then(data => processData(data))
            .then(data => {
                blob = new Blob(data, {type});
                console.log("complete", data, blob);
                if (/image/.test(type)) {
                    url = URL.createObjectURL(blob);
                    img.onload = () => {
                        img.title = filename;
                        input.value = "";
                    }
                    img.src = url;
                } else {
                    input.value = "";
                }
            })
            .catch(e => handleError(e))
    }
</script>
</body>
</html>
plnkr http://plnkr.co/edit/AEZ7iQce4QaJOKut71jk?p=preview
You can also use fetch():
fetch(new Request("/path/to/server/", {method:"PUT", body:blob}))
To transmit body for a request request, run these steps:

1. Let body be request's body.
2. If body is null, then queue a fetch task on request to process request end-of-body for request and abort these steps.
3. Let read be the result of reading a chunk from body's stream.

When read is fulfilled with an object whose done property is false and whose value property is a Uint8Array object, run these substeps: let bytes be the byte sequence represented by the Uint8Array object; transmit bytes; increase body's transmitted bytes by bytes's length; then run the above step again.
When read is fulfilled with an object whose done property is true, queue a fetch task on request to process request end-of-body for request.
When read is fulfilled with a value that matches neither of the above patterns, or read is rejected, terminate the ongoing fetch with reason fatal.
See also
Progress indicators for fetch?
Fetch with ReadableStream

Converting byte array output into Blob corrupts file

I am using the Office Javascript API to write an Add-in for Word using Angular.
I want to retrieve the Word document through the API, then convert it to a file and upload it via POST to a server.
The code I am using is nearly identical to the documentation code that Microsoft provides for this use case: https://dev.office.com/reference/add-ins/shared/document.getfileasync#example---get-a-document-in-office-open-xml-compressed-format
The server endpoint requires uploads to be POSTed through a multipart form, so I create a FormData object on which I append the file (a blob) as well as some metadata, when creating the $http call.
The file is being transmitted to the server, but when I open it, it has become corrupted and it can no longer be opened by Word.
According to the documentation, the Office.context.document.getFileAsync function returns a byte array. However, the resulting fileContent variable is a string. When I console.log this string it seems to be compressed data, like it should be.
My guess is I need to do some preprocessing before turning the string into a Blob. But which preprocessing? Base64 encoding through atob doesn't seem to be doing anything.
let sendFile = (fileContent) => {
    let blob = new Blob([fileContent], { type: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document' }),
        fd = new FormData();
    blob.lastModifiedDate = new Date();
    fd.append('file', blob, 'uploaded_file_test403.docx');
    fd.append('case_id', caseIdReducer.data());

    $http.post('/file/create', fd, {
        transformRequest: angular.identity,
        headers: { 'Content-Type': undefined }
    })
    .success(() => {
        console.log('upload succeeded');
    })
    .error(() => {
        console.log('upload failed');
    });
};

function onGotAllSlices(docdataSlices) {
    let docdata = [];
    for (let i = 0; i < docdataSlices.length; i++) {
        docdata = docdata.concat(docdataSlices[i]);
    }
    let fileContent = new String();
    for (let j = 0; j < docdata.length; j++) {
        fileContent += String.fromCharCode(docdata[j]);
    }
    // Now all the file content is stored in 'fileContent' variable,
    // you can do something with it, such as print, fax...
    sendFile(fileContent);
}

function getSliceAsync(file, nextSlice, sliceCount, gotAllSlices, docdataSlices, slicesReceived) {
    file.getSliceAsync(nextSlice, (sliceResult) => {
        if (sliceResult.status === 'succeeded') {
            if (!gotAllSlices) { // Failed to get all slices, no need to continue.
                return;
            }
            // Got one slice, store it in a temporary array.
            // (Or you can do something else, such as
            // send it to a third-party server.)
            docdataSlices[sliceResult.value.index] = sliceResult.value.data;
            if (++slicesReceived === sliceCount) {
                // All slices have been received.
                file.closeAsync();
                onGotAllSlices(docdataSlices);
            } else {
                getSliceAsync(file, ++nextSlice, sliceCount, gotAllSlices, docdataSlices, slicesReceived);
            }
        } else {
            gotAllSlices = false;
            file.closeAsync();
            console.log(`getSliceAsync Error: ${sliceResult.error.message}`);
        }
    });
}

// User clicks button to start document retrieval from Word and uploading to server process
ctrl.handleClick = () => {
    Office.context.document.getFileAsync(Office.FileType.Compressed, { sliceSize: 65536 /*64 KB*/ },
        (result) => {
            if (result.status === 'succeeded') {
                // If the getFileAsync call succeeded, then
                // result.value will return a valid File Object.
                let myFile = result.value,
                    sliceCount = myFile.sliceCount,
                    slicesReceived = 0, gotAllSlices = true, docdataSlices = [];
                // Get the file slices.
                getSliceAsync(myFile, 0, sliceCount, gotAllSlices, docdataSlices, slicesReceived);
            } else {
                console.log(`Error: ${result.error.message}`);
            }
        }
    );
};
I ended up doing this with the fileContent string:
let bytes = new Uint8Array(fileContent.length);
for (let i = 0; i < bytes.length; i++) {
    bytes[i] = fileContent.charCodeAt(i);
}
I then proceed to build the Blob with these bytes:
let blob = new Blob([bytes], { type: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document' });
If I then send this via a POST request, the file isn't mangled and can be opened correctly by Word.
I still get the feeling this can be achieved with less hassle / less steps. If anyone has a better solution, I'd be very interested to learn.
thx for your answer, Uint8Array was the solution. Just a little improvement, to avoid creating the string:
let bytes = new Uint8Array(docdata.length);
for (var i = 0; i < docdata.length; i++) {
    bytes[i] = docdata[i];
}
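Assuming docdata is a plain array of byte values, as in the snippets above, the copy loop can likely be collapsed into a single call:
// Same result as the loop above: copy the byte values into a typed array
let bytes = Uint8Array.from(docdata);
// or equivalently: new Uint8Array(docdata)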
Pff! What is wrong with getting an instance of File and not using the FileReader API? C'mon, Microsoft!
You should take the byte array and throw it into the Blob constructor. Turning a binary blob into a string in JavaScript is a bad idea that can lead to "out of range" errors or incorrect encoding.
Just do something along the lines of this:
var byteArray = new Uint8Array(3);
byteArray[0] = 97;
byteArray[1] = 98;
byteArray[2] = 99;
new Blob([byteArray]);
If the chunk is an instance of a typed array or an instance of Blob/File, you can just do:
blob = new Blob([blob, chunk])
And please... don't base64 encode it (~33% larger + slower).
