I made this picture that explains what I'm trying to achieve.
Brief summary: I want users to be able to upload an encrypted video that only those who have the password will be able to watch as a stream.
The point of all this is that the server should never be able to decipher it.
What I've managed so far is only the first part: encrypting the chunks and sending them:
const encryptedStream = [] // collects the AES-encrypted chunks

fileInput.addEventListener("change", async () => {
  const file = fileInput.files[0]
  const stream = file.stream()
  const reader = stream.getReader()
  while (true) {
    const { value, done } = await reader.read()
    if (done) break
    handleChunk(value)
  }
  // metadata I intend to send to the server along with the encrypted chunks
  const out = {
    size: file.size,
    type: file.type,
    encryptedStream: encryptedStream
  }
})
function handleChunk(chunk) { // chunk: Uint8Array
  // wrap the raw bytes as a WordArray (Utf8.parse would mangle binary data)
  const parsedChunk = CryptoJS.lib.WordArray.create(chunk)
  encryptedStream.push(CryptoJS.AES.encrypt(parsedChunk, "encryptionKey").toString())
}
I guess you can also do this with File.slice() but I did not test it.
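Something like this is what I had in mind for the slice approach (untested, and the chunk size is an arbitrary value I picked):
const CHUNK_SIZE = 1024 * 1024 // 1 MiB, arbitrary
async function readInSlices(file) {
  for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
    const slice = file.slice(offset, offset + CHUNK_SIZE)
    handleChunk(new Uint8Array(await slice.arrayBuffer()))
  }
}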
At this point I feel like I've looked into every API related to files/streams, both in Node and in vanilla JS, but I can't find a way to make this work. All I've managed is to make a Readable out of the encryptedStream array in the backend:
const { Readable } = require("stream")

const readable = new Readable({ read() {} }) // no-op read(); chunks are pushed manually
encryptedStream.forEach(aesChunk => readable.push(aesChunk))
readable.push(null) // signal the end of the stream
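My vague idea for sending it back (completely untested; the Express route below is purely illustrative) was to pipe that Readable into the response:
app.get("/video/:id", (req, res) => {
  res.setHeader("Content-Type", "application/octet-stream")
  readable.pipe(res) // the Readable built from encryptedStream above
})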
Even with that, I have doubts about how to stream it back to another frontend, and literally no idea how to play the video after deciphering it.
From what I've seen online, it looks like it isn't possible to stream an encrypted file, only to download it.
Is this possible to do?
I am trying to download an Excel file and then upload it to Azure Blob Storage for use in Azure Data Factory. I have a Playwright JavaScript script that worked when the file was a .csv, but when I try the same code with an Excel file, it will not open in Excel. It says:
"We found a problem with some content in 'order_copy.xlsx'. Do you want us to try to recover as much as we can?"
After clicking Yes, it says:
"Excel cannot open the file 'order_copy.xlsx' because the file format or file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file."
Any ideas on how to use the createReadStream more effectively to do this and preserve the .xlsx format?
I don't think the saveAs method will work since this code is being executed in an Azure Function with no access to a local known path.
My first thought was the content type was not right, so I set that, but it still did not work. I tried a UTF-8 encoder but that also did not work.
//const data = await streamToText(download_csv.createReadStream())
const download_reader = await download_csv.createReadStream();
let data = '';
for await (const chunk of download_reader) {
  data += chunk; //---I suspect I need to do something different here
}
// const data_utf8 = utf8.encode(data) //unescape( encodeURIComponent(data) );
const AZURE_STORAGE_CONNECTION_STRING = "..." //---Removed string here
// Create the BlobServiceClient object which will be used to create a container client
const blob_service_client = BlobServiceClient.fromConnectionString(AZURE_STORAGE_CONNECTION_STRING);
// Get a reference to a container
const container_client = blob_service_client.getContainerClient('order');
const blob_name = 'order_copy.xlsx';
// Get a block blob client
const block_blob_client = container_client.getBlockBlobClient(blob_name);
const contentType = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
const blobOptions = { blobHTTPHeaders: { blobContentType: contentType } };
//const uploadBlobResponse = await block_blob_client.upload(data_utf8, data_utf8.length, blobOptions);
const uploadBlobResponse = await block_blob_client.upload(data, data.length, blobOptions);
console.log("Blob was uploaded successfully. requestId: ", uploadBlobResponse.requestId);
Any guidance would be appreciated. Thank you in advance for your help!
-Chad
Thanks @Gaurav for the suggestion not to collect the data into a string. The following code worked after I changed to using an array of the chunks and concatenating them with Buffer.concat, similar to your suggested code.
let chunks = []
for await (const chunk of download_reader) {
  chunks.push(chunk)
}
const fileBuffer = Buffer.concat(chunks)
...
const uploadBlobResponse = await block_blob_client.upload(fileBuffer, fileBuffer.length, blobOptions);
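For larger files it should also be possible (untested on my side) to skip the in-memory buffer entirely and hand the download stream straight to the SDK's uploadStream method:
// untested alternative: stream straight into the blob instead of buffering it all
const uploadStreamResponse = await block_blob_client.uploadStream(
  download_reader,   // the Readable from createReadStream()
  4 * 1024 * 1024,   // buffer size per chunk (4 MB, arbitrary)
  5,                 // max concurrency (arbitrary)
  blobOptions
);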
Thanks everyone!
So I am using this npm library that returns SoundCloud music as a stream, but after a while of searching I couldn't find an answer. I read that it's impossible to know the full size of the data in a stream up front. Still, is there a way to get the size of the data in that stream? I plan on using it to implement download progress in my app. Thanks a lot in advance.
From the library docs:
const scdl = require('soundcloud-downloader').default
const fs = require('fs')
const SOUNDCLOUD_URL = 'https://soundcloud.com/askdjfhaklshf'
const CLIENT_ID = 'asdhkalshdkhsf'
scdl.download(SOUNDCLOUD_URL).then(stream => stream.pipe(fs.createWriteStream('audio.mp3')))
Everything seems to work perfectly, but I am not able to count the bytes available in the stream instance returned in the callback.
Yes: if the API returns a stream, you can calculate the size yourself as you read it. Whichever library it is, as long as it gives you a stream you can add up the length of each chunk of data as it comes in.
e.g. (adapted straight from the Node.js docs):
// get the stream from some API.
const readable = getReadableStreamSomehow();
let readBytes = 0;
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readBytes += chunk.length;
});
readable.on('end', () => {
  console.log(`All done. ${readBytes} bytes in total.`);
});
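If you can learn the total size through some other channel, for example a Content-Length header on the underlying request (whether your library exposes that is an assumption on my part), the 'data' handler above could report a percentage instead:
const totalBytes = 5 * 1024 * 1024; // hypothetical figure, e.g. taken from a Content-Length header
readable.on('data', (chunk) => {
  readBytes += chunk.length;
  console.log(`progress: ${((readBytes / totalBytes) * 100).toFixed(1)}%`);
});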
I'm building an application that will allow me to take a picture from my React app, which accesses the webcam; then I need to upload the image to Google Cloud Storage using a Hapi Node.js server. The problem I'm encountering is that the React app snaps a picture and gives me this blob string (I actually don't even know if that's what it's called). The string is very large and looks like this (I've shortened it due to its really large size):
"imageBlob": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/...
I'm finding it hard to find resources that show me how to do this exactly; I need to upload that blob file and save it to a Google Cloud Storage bucket.
This is what I have in my app so far:
Item.postImageToStorage = async (request, h) => {
  const image = request.payload.imageBlob;
  const projectId = 'my-project-id'
  const keyFilename = 'path-to-my-file'
  const gc = new Storage({
    projectId: projectId,
    keyFilename: keyFilename
  })
  const bucket = gc.bucket('my-bucket.appspot.com/securityCam');
  const blob = bucket.file(image);
  const blobStream = blob.createWriteStream();

  blobStream.on('error', err => {
    h.response({
      success: false,
      error: err.message || '=-->' + err
    })
  });
  console.log('===---> ', 'no errors::::')

  blobStream.on('finish', () => {
    console.log('done::::::', `https://storage.googleapis.com/${bucket.name}/${blob.name}`)
    // The public URL can be used to directly access the file via HTTP.
    const publicUrl = format(
      `https://storage.googleapis.com/${bucket.name}/${blob.name}`
    );
  });
  console.log('===---> ', 'past finish::::')

  blobStream.end(image);
  console.log('===---> ', 'at end::::')

  return h.response({
    success: true,
  })
  // Utils.postRequestor(path, payload, headers, timeout)
}
I get to the success response (h.response), but no console logs appear except the ones outside of the blobStream.on handlers; I see all the ones that start with ===---> but nothing else.
Not sure what I'm doing wrong. Thanks in advance!
At the highest level, let us assume you want to write a file my-file.dat into folder my-folder of bucket my-bucket (the folder path goes into the file name, since bucket names cannot contain slashes). Let us assume that the data you want to write is a binary chunk of data stored in a JavaScript Buffer object referenced by a variable called my_data. We would then want to code something similar to:
const bucket = gc.bucket('my-bucket');
const my_file = bucket.file('my-folder/my-file.dat');
const my_stream = my_file.createWriteStream();
my_stream.write(my_data);
my_stream.end();
In your example, something looks fishy with the value you are passing in as the file name in the line:
const blob = bucket.file(image);
It looks as though you may be passing in the content of the file rather than the name of the file.
Also realize that your JavaScript object field called "imageBlob" will be a string. It may be that this is indeed what you want to save, but I can also imagine that what you actually want to save is the binary data corresponding to your webcam image. In that case you will have to decode the string into a binary Buffer: strip the leading data:image/jpeg;base64, prefix and then create a Buffer from the remainder by treating it as Base64-encoded binary.
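A rough sketch of that decoding step (the field name, regex, and generated file name here are only illustrative assumptions, not your exact payload):
// strip the data-URL prefix and decode the remaining Base64 into a Buffer
const dataUrl = request.payload.imageBlob;
const base64Data = dataUrl.replace(/^data:image\/\w+;base64,/, '');
const imageBuffer = Buffer.from(base64Data, 'base64');

// give the object a real name instead of passing the image data as the name
const blob = bucket.file(`securityCam/${Date.now()}.jpg`);
const blobStream = blob.createWriteStream({ metadata: { contentType: 'image/jpeg' } });
blobStream.end(imageBuffer);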
What I'm trying to achieve is to make Chrome load a video file as data (via the Fetch API, XHR, whatever) and to play it using <video> while it's still being downloaded without issuing two separate requests for the same URL and without waiting until the file is completely downloaded.
It's easy to get a ReadableStream from the Fetch API (response.body), yet I can't find a way to feed it into the video element. I've figured out I need a blob URL for this, which can be created using a MediaSource object. However, the SourceBuffer#appendStream method, which sounds like just what is needed, isn't implemented in Chrome, so I can't connect the stream directly to the MediaSource object.
I can probably read the stream in chunks, create Uint8Arrays out of them, and use SourceBuffer#appendBuffer, but this means playback won't start immediately unless the chunk size is really small. It also feels like manually doing something that all these APIs should be able to do out of the box. If there are no other solutions and I go this way, what caveats should I expect?
Are there perhaps other ways to create a blob URL for a ReadableStream? Or is there a way to make fetch and <video> share a request? There are so many new APIs that I could easily have missed something.
After hours of experimenting, I found a half-working solution:
const video = document.getElementById('audio'); // the media element on the page
const mediaSource = new MediaSource();
video.src = window.URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('audio/webm; codecs="opus"');
  const response = await fetch(audioSRC); // audioSRC is the URL of the media file
  const reader = response.body.getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // wait for the previous chunk to finish appending before pushing the next one
    await new Promise((resolve) => {
      sourceBuffer.onupdateend = () => resolve(true);
      sourceBuffer.appendBuffer(value);
    });
  }
  mediaSource.endOfStream(); // signal that the whole stream has been appended
});
It works using the MediaSource API: https://developer.mozilla.org/en-US/docs/Web/API/MediaSource
Also, I tested this only with the webm/opus format, but I believe it should work with other formats as well, as long as you specify the right MIME type and codecs.
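One note I would add (my own assumption, not something I have tested across browsers): the string passed to addSourceBuffer has to match the actual file, and MediaSource.isTypeSupported lets you check support up front:
const mime = 'video/webm; codecs="vp9, opus"'; // example codec string
if (!MediaSource.isTypeSupported(mime)) {
  console.error(`${mime} cannot be played via MediaSource in this browser`);
}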
I have a little "big" problem.
I use agile-uploader to upload multiple images; this component resizes all the pictures (it works very well), but by doing this I lose the EXIF data.
Can I read the EXIF data client-side using JS, given that it isn't the same domain name?
Yes. There's a new library, exifr, with which you can do exactly that. It's a maintained, actively developed library with a focus on performance that works in both Node.js and the browser.
Simple example of extracting exif from one file:
document.querySelector('#filepicker').addEventListener('change', async e => {
  let file = e.target.files[0]
  let exifData = await exifr.parse(file)
  console.log('exifData', exifData)
})
Complex example of extracting exif from multiple files:
document.querySelector('#filepicker').addEventListener('change', async e => {
  let files = Array.from(e.target.files)
  let promises = files.map(file => exifr.parse(file))
  let exifs = await Promise.all(promises)
  let dates = exifs.map(exif => exif.DateTimeOriginal.toGMTString())
  console.log(`${files.length} photos taken on:`, dates)
})
And you can even extract the thumbnail that's embedded in the file:
let img = document.querySelector("#thumb")
document.querySelector('input[type="file"]').addEventListener('change', async e => {
  let file = e.target.files[0]
  img.src = await exifr.thumbnailUrl(file)
})
You can also try out the library's playground and experiment with images and their output, or check out the repository and docs.
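Since it also runs in Node.js, usage there should look roughly like this (a sketch based on the docs; the file path is just a hypothetical example):
const exifr = require('exifr')

exifr.parse('./photo.jpg') // hypothetical path to a local photo
  .then(output => console.log('Camera:', output.Make, output.Model))
  .catch(console.error)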