I have some large audio files in an S3 bucket.
I want to play them in a browser on-demand.
I don't want to first download the whole file and only then play it.
I want to achieve something like YouTube, where the whole video is not immediately downloaded but rather downloaded in small pieces.
Are there any native out-of-the-box solutions provided by AWS? Can a pre-signed S3 URL be used for this purpose? Do I need other services like AWS MediaLive or AWS IVS?
Also, maybe there are advanced JavaScript (browser-based) players that support this type of workflow?
Thank you.
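One relevant fact here: S3 serves objects over plain HTTP with support for Range requests, including through pre-signed GET URLs, and browser media elements issue Range requests on their own as playback and seeking demand data. As a minimal sketch (the URL is a placeholder you would generate with the AWS SDK), a plain audio element is often enough:

```javascript
// Minimal sketch: point an <audio> element at a pre-signed S3 URL.
// With preload="metadata" the browser fetches only the file headers up
// front, then streams the rest in Range-requested pieces on demand.
function playPresignedAudio(presignedUrl) {
  const audio = document.createElement('audio');
  audio.controls = true;
  audio.preload = 'metadata'; // don't download the whole file immediately
  audio.src = presignedUrl;
  document.body.appendChild(audio);
  return audio;
}
```

This covers on-demand playback of static files; services like MediaLive or IVS are aimed at live streaming rather than this use case.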
Related
I am going to build an educational site where users will not have access to the original video file and will only be able to watch it through the site. Is there a way to keep the file from being recognized by the IDM plugin?
According to articles I read online, a blob can be used, but I found no solution for doing this locally with Azure Storage blobs.
In terms of preventing downloads via browser plugins like IDM, you can use streaming to send the video content in small chunks rather than as a single file. This can make it hard for plugins to intercept and download the entire file. Azure Blob Storage supports "pseudo streaming" of video content via the "Range" header in HTTP requests. This can be useful - https://stackoverflow.com/a/15166547/13105803
Another option is to use Azure Media Services.
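A hedged sketch of the Range-header approach described above (the blob URL and 1 MiB chunk size are placeholder choices, not from the original posts):

```javascript
// Build an HTTP Range header value; byte ranges are inclusive,
// so "bytes=0-1048575" asks for exactly the first 1 MiB.
function rangeHeader(start, chunkSize) {
  return `bytes=${start}-${start + chunkSize - 1}`;
}

// Fetch one chunk of a blob; a server that honors Range replies with
// 206 Partial Content and returns only the requested bytes.
async function fetchChunk(blobUrl, start, chunkSize = 1024 * 1024) {
  const res = await fetch(blobUrl, {
    headers: { Range: rangeHeader(start, chunkSize) },
  });
  if (res.status !== 206) {
    throw new Error(`expected 206 Partial Content, got ${res.status}`);
  }
  return new Uint8Array(await res.arrayBuffer());
}
```

Note that an HTML5 video element already issues such Range requests by itself; explicit chunk fetching like this is mainly useful if you want to feed the bytes to a MediaSource buffer yourself.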
In our project we only need the audio of a video file. To reduce the upload size for the client, I'm looking for a way to convert the chosen video file to audio on the client side and then upload the result to the backend.
It's a react app by the way.
Any idea or solution or library will help.
Extracting the audio track from a video is a conversion process that uses audio/video codecs and, if the streams have to be re-encoded, can take almost as long as the video itself.
You can either use a client application (installed on the user's PC), upload the file to your backend, or use an online service that provides an MP4-to-MP3 converter, for example this one - https://conversiontools.io/convert/mp4-to-mp3. Its free tier has limitations.
Check other links:
https://stackoverflow.com/a/17532410/4299666
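If client-side conversion is acceptable despite the cost described above, ffmpeg.wasm (mentioned in another answer below) can do it in the browser. A hedged sketch, assuming the @ffmpeg/ffmpeg v0.12 API (verify the names against its docs); "-acodec copy" only remuxes the existing audio stream, which avoids the slow re-encode entirely when the container allows it:

```javascript
// Argument list for extracting the audio track without re-encoding:
// -vn drops the video stream, "-acodec copy" copies audio bytes as-is.
function audioExtractArgs(input, output) {
  return ['-i', input, '-vn', '-acodec', 'copy', output];
}

// Browser-side extraction with ffmpeg.wasm (assumed @ffmpeg/ffmpeg v0.12 API).
async function extractAudio(videoFile) {
  const { FFmpeg } = await import('@ffmpeg/ffmpeg');
  const ffmpeg = new FFmpeg();
  await ffmpeg.load();
  await ffmpeg.writeFile('input.mp4', new Uint8Array(await videoFile.arrayBuffer()));
  await ffmpeg.exec(audioExtractArgs('input.mp4', 'output.m4a'));
  const data = await ffmpeg.readFile('output.m4a');
  return new Blob([data], { type: 'audio/mp4' });
}
```

The resulting Blob can then be uploaded to the backend with fetch or FormData as usual.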
I'm building a web app that uploads a local .mp4 to an S3 bucket. It's intended for low bandwidth environments. Is it possible to upload every 5th frame of an .mp4 in JavaScript on the frontend and upload this reduced .mp4?
I don't know if it's possible (I didn't find anything online) but it's certainly not easy.
It would be easier to maybe have an endpoint in a server that receives a video and returns a minified version. Like this:
Client --can you minify this video?---> Server
Client <--minified video---- Server
Client ---upload minified video---> S3 bucket
No, it's not possible. Frames are not independent. Uploading a frame without the frames it references will create a corrupt video.
You could make use of FFMPEG in your browser in fact. From ffmpeg.wasm:
ffmpeg.wasm is a pure WebAssembly / JavaScript port of FFmpeg. It enables video & audio record, convert and stream right inside browsers.
Check out the parameters for selecting every nth frame in the documentation on FFmpeg.org.
PS: There might be better solutions to shrink the upload size, e.g. compress harder, resize the video dimensions, lower the frame rate, or a combination of these.
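A hedged sketch of the nth-frame parameters mentioned above (the select filter and -vsync flag come from FFmpeg's filter documentation; the argument array is in the form ffmpeg.wasm's exec call expects):

```javascript
// FFmpeg arguments that keep only every nth frame:
// select='not(mod(n,5))' passes frames whose index is a multiple of 5,
// and -vsync vfr drops the gaps instead of duplicating frames. Note the
// output must be re-encoded, since the kept frames lose their references.
function everyNthFrameArgs(input, output, n) {
  return [
    '-i', input,
    '-vf', `select='not(mod(n,${n}))'`,
    '-vsync', 'vfr',
    output,
  ];
}
```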
I have been developing a video uploading app using react-native. Users can record video with the in-app camera.
Is there any way to reduce the file size and/or upload time, as well as display/loading time, in react-native or JavaScript? Is there anything that can help solve this on the front end?
Yes, you can optimize file size and also display upload progress. FFmpeg is generally used to manipulate video files. With the help of WebAssembly, you can execute FFmpeg right within the app and manipulate video files. Refer to the GitHub link. [You may have to check if this works with React Native]
The idea is to read the blob received from MediaRecorder (available in chrome and firefox, there is also npm package for react native) and pass it to FFMpeg WebAssembly port. Later the optimized bytes can be sliced using Blob and sent to the server (check the slice method in JavaScript Blob).
Recommendation
While you can perform all the above steps right within the client app, it is better to run video optimization utilities on the server rather than within the client app. The right approach is to stream the data received from MediaRecorder at regular intervals to the server using either WebSocket or Ajax requests.
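A hedged sketch of that recommendation, using only the standard MediaRecorder, Blob, and WebSocket browser APIs (the 1-second timeslice and webm mime type are arbitrary illustrative choices):

```javascript
// Compute [start, end) boundaries for chunking a Blob with Blob.slice().
function sliceRanges(totalSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < totalSize; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, totalSize)]);
  }
  return ranges;
}

// Stream MediaRecorder output to the server as it is produced: a timeslice
// makes the recorder emit a dataavailable event at regular intervals, and
// each Blob chunk is forwarded over the socket immediately.
function startStreaming(mediaStream, socket, timesliceMs = 1000) {
  const recorder = new MediaRecorder(mediaStream, { mimeType: 'video/webm' });
  recorder.ondataavailable = (event) => {
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data);
    }
  };
  recorder.start(timesliceMs);
  return recorder;
}
```

With this shape the server receives the recording incrementally and can run its own FFmpeg-based optimization without the client ever holding the full file in memory.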
I have a live stream of raw h264 (no container) coming from a remote webcam. I want to stream it live in the browser using DASH. DASH requires creating an mpd file (and segmentation). I found tools (such as mp4box) that accomplish this for static files, but I'm struggling to find a solution for live streams. Any suggestions, preferably using node.js modules?
Threads I have checked:
mp4box - on the one hand, I saw a comment that states "You cannot feed MP4Box with some live content. You need to feed MP4Box -live with pre-segmented chunks." On the other hand, a lot of people point to this bitmovin tutorial, which does implement a solution using mp4box. In the tutorial they use mp4box (which has a node.js API implementation) and x264 (which doesn't have a node.js module? Or is it contained in ffmpeg/mp4box?).
nginx - nginx has a module that supports streaming to DASH using RTMP, for example in this tutorial. I prefer not to go down this path; as mentioned, I'm trying to do it all in node.js.
Although I read a couple of posts with a similar problem, I couldn't find a suitable solution. Help would be much appreciated!
The typical architecture is to send your live stream to a streaming server which will then do the heavy lifting to make the stream available to other devices, using streaming protocols such as HLS and DASH.
So the client devices connect to the server rather than to your browser.
This allows the video to be encoded and packaged to reach as many devices as possible, with the server doing any transcoding necessary and potentially also creating different bit-rate versions of your stream to allow for different network conditions, if you want to provide this level of service.
The typical structure is an encoded stream (e.g. h.264 video), packaged into a container (e.g. fragmented mp4) and delivered via a streaming protocol such as HLS or DASH.