I have been developing a video uploading app using react-native. Users can record video with the in-app camera.
Is there any way to reduce the file size and/or upload time, as well as the display/loading time, in react-native or javascript? Is there anything on the front end that can help solve this?
Yes, you can optimize the file size and also display upload progress. FFmpeg is generally used to manipulate video files. With the help of WebAssembly, you can execute FFmpeg right within the app and manipulate video files. Refer to the GitHub link. [You may have to check whether this works with React Native.]
The idea is to read the blob received from MediaRecorder (available in Chrome and Firefox; there is also an npm package for React Native) and pass it to the FFmpeg WebAssembly port. The optimized bytes can later be sliced using Blob and sent to the server (check the slice method on the JavaScript Blob).
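A minimal browser-side sketch of that flow, skipping the FFmpeg compression step; the /upload endpoint and the 1 MB slice size are assumptions for illustration only:

// Run inside an async function.
// Collect recorded chunks, then upload the resulting Blob in slices.
const CHUNK_SIZE = 1024 * 1024; // 1 MB per slice, an arbitrary choice
const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
const recorder = new MediaRecorder(stream);
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = async () => {
  // In the full pipeline this Blob would first go through the FFmpeg
  // WebAssembly port for compression; here it is sliced and uploaded as-is.
  const blob = new Blob(chunks, { type: 'video/webm' });
  for (let start = 0; start < blob.size; start += CHUNK_SIZE) {
    const slice = blob.slice(start, start + CHUNK_SIZE);
    await fetch('/upload', { method: 'POST', body: slice });
  }
};
recorder.start();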
Recommendation
While you can perform all of the above steps right within the client app, it is better to run video optimization utilities on the server rather than in the client app. The right approach is to stream the data received from MediaRecorder to the server at regular intervals, using either WebSocket or Ajax requests.
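A sketch of that interval-streaming approach over WebSocket; the wss://example.com/ingest URL and the one-second timeslice are placeholder assumptions:

// Run inside an async function.
const socket = new WebSocket('wss://example.com/ingest');
const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
const recorder = new MediaRecorder(stream);
// A timeslice argument to start() makes ondataavailable fire at regular intervals.
recorder.ondataavailable = (e) => {
  if (e.data.size > 0 && socket.readyState === WebSocket.OPEN) {
    socket.send(e.data); // Blobs can be sent directly over a WebSocket
  }
};
recorder.start(1000); // emit a chunk roughly every second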
Related
What I am Doing
I have a JavaScript file that uses getUserMedia() to get audio and video from the client's computer. How do I apply computer vision to the video feed and speech recognition to the audio feed, both captured with getUserMedia(), using Python?
Why do I want to use Python?
I want to use Python because I have already written all the code to run this locally on my machine using Flask. To run it on other computers, though, I have to use JavaScript to access the client's mic and cam; Python can't do that, or rather it would detect the mic and cam on the server instead of on the client's machine.
What I Tried
Attempt #1
I have tried using js2py to translate the .js file to a .py file:
js2py.translate_file('static/sketch.js', 'sketch.py')
from sketch import sketch
# I use getUserMedia() function after that from the .py file
I got an error which, I found out, meant the .js file was too large.
Attempt #2
I looked at MDN docs articles:
https://developer.mozilla.org/en-US/docs/Web/Guide/AJAX
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest
https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Client-side_web_APIs/Fetching_data
I got really confused about how to use Ajax and didn't even try it, because I didn't know where to start. I strongly suspect this could solve my problem, but I could be wrong, which is why I am asking this question.
Underlying Question
My question is: how do I deliver audio/video data from a user's browser to a server?
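To illustrate, I imagine the Ajax approach from Attempt #2 would look roughly like the sketch below, though I haven't gotten it working; the /ingest endpoint is hypothetical, and the Flask route on the server would read the raw bytes from request.data:

// Run inside an async function.
const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
const recorder = new MediaRecorder(stream);
recorder.ondataavailable = async (e) => {
  await fetch('/ingest', {
    method: 'POST',
    headers: { 'Content-Type': 'application/octet-stream' },
    body: e.data, // the recorded Blob for this interval
  });
};
recorder.start(2000); // POST a chunk every 2 seconds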
I am looking for functionality like the following for a web application:
The user selects a local folder with high-resolution images.
Resize these images and send them to the server for processing.
Make folders in the local file system and copy the original high-resolution images into them according to the server response.
Can we achieve this with a web application only, without any locally installed application? Please guide me on which method/technology I can start with.
Browsers will not allow access to the local file system for security reasons. Imagine any website being able to play with your local files. The browser provides a sandbox in which your JS app can run.
Your best option could be to use web storage. You have limited capacity there, though, and it is not directly accessible to the user. Different browsers can have varying implementations.
You can do this with node.js and sharp or ImageMagick.
As others have mentioned, in-browser JavaScript cannot access the local file system for security reasons. So you'd have to provide an upload interface to upload the image to a Node server first; then you can convert the image into a buffer/data stream, resize the buffer, and save it again on the server, ready to be downloaded.
Node.js is a JavaScript runtime:
https://nodejs.org/en/
Express is an application framework with which you can build a web server that runs your JavaScript:
https://expressjs.com/
(You've already said you'd prefer not to, but) you could build it as a Node desktop application using Electron, which would have access to the filesystem; it would be a self-contained app, though, not an in-browser application.
https://electronjs.org/
Sharp and ImageMagick are the most popular Node.js-based image processors:
https://www.npmjs.com/package/sharp
https://www.npmjs.com/package/imagemagick
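To make the upload-resize-download flow concrete, here is a rough sketch with Express, multer, and sharp; the 'image' field name, the 800px width, and the output filename are assumptions, not fixed requirements:

const express = require('express');
const multer = require('multer');
const sharp = require('sharp');
const path = require('path');

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // keep the upload in memory

app.post('/resize', upload.single('image'), async (req, res) => {
  // req.file.buffer holds the uploaded image bytes
  const outPath = path.join(__dirname, `resized-${Date.now()}.jpg`);
  await sharp(req.file.buffer)
    .resize({ width: 800 }) // scale down, preserving aspect ratio
    .jpeg({ quality: 80 })
    .toFile(outPath);
  res.download(outPath); // send the resized image back to the client
});

app.listen(3000);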
Hope this helps you get started
I am using this library to upload videos from my web app to Vimeo. It is working fine. However, I need some assistance with the upload speed. I have run two tests to determine why my implementation of the library is quite slow compared to uploading the same video directly on the Vimeo platform.
I observed something major that had me quite concerned, and I just had to reach out for assistance. When I open Task Manager in Windows to watch network performance, I can see that the upload using the library averages/locks around 2 Mbps, while the direct upload averages around 20 Mbps; the direct upload on the Vimeo site is therefore around 10 times faster.
I tried researching this issue, and the closest I got to an explanation is what one writer termed the size of the upload chunk. According to the writer, in their case the problem was that the upload chunk was around 100 KB, and the solution was simply to make it bigger, say 1 MB, after which the upload became faster. However, that was a different situation, uploading somewhere other than Vimeo. Looking for the same setting in this library, I've found that there doesn't seem to be any defined upload chunk size, or it's somewhere I can't locate.
My request is for you to kindly assist me with tips and hints to make uploads from the library as fast as uploading directly to Vimeo.
Thank you for your assistance in advance.
That websemantics library, at current time, does not provide a version header in its requests. Because no version header is provided, the Vimeo API defaults to the version specified on your app management page. If you have an older application defaulting to <3.4, then that library will work for uploading. If you have a newer application defaulting to >3.4, that library will not work for uploading unless the library is modified to include a version header.
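If you do modify a library (or make raw API calls), the version is pinned via the Accept header. A quick sketch with fetch, where the token value is a placeholder:

// Run inside an async function.
// Pin the Vimeo API version explicitly instead of relying on the app default.
const response = await fetch('https://api.vimeo.com/me', {
  headers: {
    Authorization: 'Bearer YOUR_ACCESS_TOKEN', // placeholder token
    Accept: 'application/vnd.vimeo.*+json;version=3.4',
  },
});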
All that being said, the "upload chunk" size you mention only applies to uploads using open source tus. The websemantics library does not use tus, instead it uses an older resumable upload method that has been deprecated by Vimeo.
Vimeo provides a NodeJS library that is in active development and supported by Vimeo. Internally, the library uses the tus upload method, as implemented by Vimeo. The library will send the video file all at once, not in chunks, from the client.
I suggest migrating to the Vimeo Node library first, or another library that supports Vimeo's tus implementation, and once that is up and running, evaluate any upload speed issues you may encounter.
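For reference, an upload with the official vimeo npm package looks roughly like this; the credentials and file path are placeholders:

const { Vimeo } = require('vimeo');
const client = new Vimeo(CLIENT_ID, CLIENT_SECRET, ACCESS_TOKEN);

client.upload(
  '/path/to/video.mp4',
  { name: 'My Video' },
  (uri) => console.log('Upload complete, video URI:', uri),
  (bytesUploaded, bytesTotal) =>
    console.log(`Progress: ${((bytesUploaded / bytesTotal) * 100).toFixed(1)}%`),
  (error) => console.error('Upload failed:', error)
);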
I have a live stream of raw h.264 (no container) coming from a remote webcam. I want to stream it live in the browser using DASH. DASH requires creating an mpd file (and segmentation). I found tools (such as mp4box) that accomplish this for static files, but I'm struggling to find a solution for live streams. Any suggestions, preferably using node.js modules?
Threads I have checked:
mp4box - on the one hand, I saw this comment stating "You cannot feed MP4Box with some live content. You need to feed MP4Box -live with pre-segmented chunks." On the other hand, a lot of people point to this bitmovin tutorial, which does implement a solution using mp4box. In the tutorial they use mp4box (which has a node.js API implementation) and x264 (which doesn't have a node.js module? or is it contained in ffmpeg/mp4box?).
nginx - nginx has a module that supports streaming to DASH using rtmp, for example in this tutorial. I prefer not to go this path; as mentioned, I'm trying to do it all in node.js.
Although I read a couple of posts describing a similar problem, I couldn't find a suitable solution. Help would be much appreciated!
The typical architecture is to send your live stream to a streaming server which will then do the heavy lifting to make the stream available to other devices, using streaming protocols such as HLS and DASH.
So the client devices connect to the server rather than to your browser.
This allows the video to be encoded and packaged to reach as many devices as possible, with the server doing any necessary transcoding and potentially also creating different bit-rate versions of your stream to allow for different network conditions, if you want to provide that level of service.
The typical structure is an encoded stream (e.g. h.264 video), packaged into a container (e.g. fragmented mp4) and delivered via a streaming protocol such as HLS or DASH.
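If you do want to stay in node.js on your own server, one common shortcut is to spawn ffmpeg to do the DASH packaging. A sketch, assuming ffmpeg is installed and the raw h.264 arrives on stdin; the segment length and output paths are placeholders, and the public/ directory must already exist:

const { spawn } = require('child_process');

const ffmpeg = spawn('ffmpeg', [
  '-f', 'h264',            // tell ffmpeg the input is raw h.264
  '-i', 'pipe:0',          // read the live stream from stdin
  '-c:v', 'copy',          // already encoded; just repackage
  '-f', 'dash',            // fragmented-mp4 DASH output
  '-seg_duration', '4',    // ~4-second segments
  'public/stream.mpd',     // manifest; segments land alongside it
]);

// e.g. pipe the incoming webcam connection into ffmpeg:
// webcamSocket.pipe(ffmpeg.stdin);
ffmpeg.stderr.on('data', (d) => process.stderr.write(d));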
I am working on integrating a chrome extension that captures video from the current tab with the PNaCl SDK, in order to record the video stream into a .webm file. I already did this in a JavaScript-only version (with whammy), but I am interested in replacing whammy with native code for performance reasons.
I wonder how to pass the stream obtained from chrome.tabCapture.capture in JS to the native side (I guess it is through a postMessage, but I'm not sure whether the JS stream object can be passed as-is, or what kind of C++ structure should receive it on the native side).
I appreciate any suggestions or feedback.
The Native Client SDK has an example plugin that does this. It's an API demo called media_stream_video.
Here are instructions on how to build and run the examples:
https://developer.chrome.com/native-client/sdk/examples
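On the JS side, the pattern in that demo is to post the video track to the NaCl embed, where the C++ module wraps it as a pp::MediaStreamVideoTrack. A rough sketch modeled on the demo; the element id and message shape are assumptions:

chrome.tabCapture.capture({ video: true }, (stream) => {
  const plugin = document.getElementById('nacl_module'); // the <embed> element
  // Pepper converts the posted MediaStreamTrack into a track resource
  // that the C++ side can consume.
  plugin.postMessage({ command: 'init', track: stream.getVideoTracks()[0] });
});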