How to handle blob data in Electron? - javascript

I'm using GitHub's Electron, which is used to build native desktop applications with HTML/JS. I need to handle some blob data from the clipboard, but there are only methods to read text, HTML, images (JPG and PNG) and RTF data (http://electron.atom.io/docs/v0.37.3/api/clipboard/).
I don't mind not being able to handle blob data in any specific way; I just need to be able to store it in a local database and then reload it into the clipboard. I assumed I could do this using readText and writeText, but I'm not sure that's possible. When I copy a PSD file and then print out the clipboard contents with readText, for example, I get 0 bytes.
I see blob data as being anything other than the formats listed above. So things like: .psd, .doc, .img, .bin, or anything with binary data that cannot be read in plain text.
How can I read, store and put this data back into the clipboard?

In your scenario, I suggest using Electron's File object API and storing the file path in the clipboard for later use.

Related

In Javascript, when opening a file via an "input file" button, is the entire file read into memory

In JavaScript, opening a file via an "input file" button returns a Blob object (e.g. blob1).
I can then get the actual data of the blob via blob1ArrayBuffer = await blob1.arrayBuffer();
When the Blob object (e.g. blob1) is created, does it load all the bytes into memory?
Or does it just return a reference, so that the actual bytes can be read later via blob1.arrayBuffer()?
No, the whole file isn't read into memory (you can try opening a file of a few TB; that should still work).
However, note that the OS will still need to read some of the file to produce its metadata. This may take some time in certain conditions (e.g. when selecting a folder with many files, or a file on a network disk, etc.).
Even when calling blob1.arrayBuffer(), the full file isn't necessarily put in memory, since the specs ask that all consumers of the Blob use a ReadableStream to get its data. But obviously, in this case, the full data will be copied into the resulting ArrayBuffer, which will most probably live in memory.
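A minimal sketch of the two access patterns, using the standard Blob API (a File picked from an input element behaves the same way, since File is a subclass of Blob):

```javascript
// Reading lazily: stream() hands out chunks on demand, so only the chunks
// actually pulled by the reader need to be materialized.
async function firstChunk(blob) {
  const reader = blob.stream().getReader(); // ReadableStream reader
  const { value } = await reader.read();    // pull just the first chunk
  await reader.cancel();                    // stop without reading the rest
  return value;                             // a Uint8Array
}

// Reading eagerly: arrayBuffer() copies the blob's full contents into one
// in-memory ArrayBuffer.
async function wholeBuffer(blob) {
  return blob.arrayBuffer();
}
```

For small blobs the difference is academic, but for very large files the streaming path is what keeps memory bounded.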

Export custom file to disk via Excel Add-in?

I'm new to Excel Web Add-Ins and want to figure out if it's possible to make an add-in that can export a custom file.
I've looked around and all I can find are Excel-specific commands like Workbook.SaveAs(), but nothing on writing custom export functions. I need to convert the file to XML, but with a specific XML structure, so I could just transform the data before saving it. Again, though, I can't find anything to suggest that this is supported.
How would I go about writing a file to disk from Excel that isn't just the Workbook?
There's no API that supports exporting a custom file to disk. There is a workaround, but it only works for Excel Online.
Please see this link:
How to create a file in memory for user to download, but not through server?
The closest thing there is for what you want to do is:
Office.context.document.getFileAsync(Office.FileType.Compressed, (result) => {
const file = result.value;
// do whatever ...
});
The file variable in this case contains the entire document in Office Open XML (OOXML) format as a byte array.
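To actually get those bytes out, the file handle delivers the document in slices that have to be fetched one at a time. A sketch, assuming an Office.js host context (`exportOoxmlBytes` and its `done` callback are illustrative names, not part of the API):

```javascript
// Fetch every slice of the current document and hand the assembled
// OOXML byte array to `done`.
function exportOoxmlBytes(done) {
  Office.context.document.getFileAsync(Office.FileType.Compressed, (result) => {
    const file = result.value;
    const slices = [];
    let received = 0;

    const getSlice = (index) => {
      file.getSliceAsync(index, (sliceResult) => {
        slices[index] = sliceResult.value.data; // bytes for this slice
        received += 1;
        if (received === file.sliceCount) {
          file.closeAsync();                    // release the file handle
          done([].concat(...slices));           // full document as one array
        } else {
          getSlice(index + 1);
        }
      });
    };
    getSlice(0);
  });
}
```

From there, the byte array could be transformed into your custom XML and offered to the user with the in-browser download technique from the linked answer.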

AWS S3 getting Not a valid bitmap file

I have been struggling with this for a while and I am going to provide you with as much information as possible (some maybe irrelevant) because I am completely stuck. I am using Ionic and I would like to be able to take a picture with my phone and upload it to an AWS S3 bucket. I used Cordova camera to accomplish this.
As far as I know, these pictures come out as a large base64 string, and I have to convert it to a Blob, then to a File object, then upload that File to AWS. However, when I do this it always uploads as something other than an image. Whenever I open it I get an error saying:
"Not a valid bitmap file. its format is not currently supported."
https://s3.amazonaws.com/mng-moment/moment/PA/40.008446_-75.26046_1502414224619.jpg
Here is an example of a working one (the upload used to work but somehow broke):
https://s3.amazonaws.com/mng-moment/bestMoments/40.008446_-75.26046_1499659199473.jpg
I tried to open each one in a text editor to see what is going on. For the first one (The broken one) I get this:
When I try to open the working one in a text editor I get this:
Now it seems like a conversion problem but I think I am converting it correctly.
Here is the code I am using to upload (you can see the console.logs later in the post):
core.js
awsServices.js
If you look at the comments in the code I labeled some of the console logs. I will display them here for more information:
A - (uploadToAWS):
B - (awsServices.upload):
This is how I convert the dataURI to a Blob (called in uploadToAWS, the first screenshot):
This is what gets passed into the 'dataURI' parameter in the code right above:
If there is any more information please let me know. I've been scratching my head at this for a while. Any help is appreciated. Thanks!
As stated in MDN File API:
A File object is a specific kind of a Blob, and can be used in any context that a Blob can. In particular, FileReader, URL.createObjectURL(), createImageBitmap(), and XMLHttpRequest.send() accept both Blobs and Files.
So, I think your problem resides in your uploadToAWS function: you first create a Blob and then use that Blob to create a File, when I think you should simply initialize the File object with the byte array returned by dataURItoBlob, since a File object is in fact already a Blob.

how to get data from a base64 encoding of a .pptx file in javascript

I get the data of a .pptx file from a server in base64 encoding, and I would like to extract the text that is present inside that base64 data.
Is there any third-party JavaScript library to do this, especially one that scans the base64 content directly rather than taking a file path? I would then like to insert these strings into a PowerPoint presentation using Office.js.
Client side would be preferred.
Thanks!
It seems that what you need is a JavaScript decoder for base64; there are many projects on GitHub doing this, for instance https://github.com/mathiasbynens/base64.
That said, I am not sure about your scenario and what types of files are being base64-encoded. Base64 is, at the end of the day, a text "representation" of (usually) a binary file, like an image or a compressed zip file. I wonder if, once you decode it, you will get what you expect. And if you are expecting text, I wonder why your service is even encoding it like this.
Anyway, once you have whatever text you want to insert, you can use the setSelectedDataAsync method of Office.js in PowerPoint to write it into your presentation's active selection: https://dev.office.com/reference/add-ins/shared/document.setselecteddataasync
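A sketch of that last step, assuming an Office.js host. Note that `extractText` is a hypothetical placeholder: a .pptx is a zip archive, so real text extraction needs a zip/XML parser, not just a base64 decode:

```javascript
// Decode the base64 payload, pull text out of it, and write that text
// into the active selection of the open presentation.
function insertDecodedText(base64Data) {
  const binary = atob(base64Data);  // raw bytes of the .pptx, as a string
  const text = extractText(binary); // hypothetical: parse slide XML for text
  Office.context.document.setSelectedDataAsync(text, (result) => {
    if (result.status === Office.AsyncResultStatus.Failed) {
      console.error(result.error.message);
    }
  });
}
```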

Storing base64 strings to GridFS(mongoDB file storage)

I'm trying to paste an image from the clipboard into an <img> tag. The source of this image is the Prnt Scrn command, not a file. This clipboard image would be in base64 format. This base64 string can be inserted into the src attribute of the <img> tag (once Ctrl-V is pressed) using JavaScript for display purposes. This is accomplishable by using this plugin.
So the <img> tag would be something like this:
<img id="screen_image" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABVYAAAMACAI......(long string here)"
Although I could persist this entire string in a MongoDB collection and retrieve it to display the image, my ultimate goal is to persist this image in GridFS. Is there a way I could interpret the base64 string as a file and persist it to GridFS?
I hope I've made it clear. Comments welcome.
UPDATE: I want to maintain a common collection to store images, or any file for that matter (I'm already using GridFS to persist file attachments, so I do not want to create a new collection just for clipboard images). I have also tried decoding the string using window.atob(), but I don't know how the result could be persisted to GridFS.
I'm using Mongo for my senior project right now. Previously I was storing child pictures (the client is a local non-profit like World Vision) with GridFS, but recently I've moved to storing them in the actual child document in base64. All of the images are around 3 MB, and base64 converts out to mostly 4-5 MB. If you can store them as base64, it makes for a much simpler schema, I think.
I know you said you wanted to use GridFS, but I would only go that route if you have files over 16 MB (MongoDB's per-document limit). For what I hope are obvious reasons, it's much simpler to just store them as strings in docs.
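That said, if the shared GridFS collection from the question's UPDATE is the goal, a sketch using the Node.js MongoDB driver's GridFSBucket. The bucket name, file name, and content type here are illustrative assumptions:

```javascript
const { GridFSBucket } = require('mongodb');

// Strip the data-URI prefix, decode the base64 payload into a Buffer
// (the server-side equivalent of window.atob), and stream it into GridFS.
async function saveClipboardImage(db, dataURI, fileName) {
  const base64 = dataURI.split(',')[1];         // drop "data:image/png;base64,"
  const buffer = Buffer.from(base64, 'base64'); // decoded binary data
  const bucket = new GridFSBucket(db, { bucketName: 'attachments' });

  return new Promise((resolve, reject) => {
    const upload = bucket.openUploadStream(fileName, {
      contentType: 'image/png',
    });
    upload.on('finish', () => resolve(upload.id)); // GridFS file _id
    upload.on('error', reject);
    upload.end(buffer);
  });
}
```

This keeps clipboard images in the same bucket as the existing file attachments, so no new collection is needed.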
