I have a webserver (Node) that accesses Sony cameras via their API. The webserver is currently set up on the same wifi network as the cameras, but will later run on its own external server. The cameras have been paired with my wifi router via WPS.
I can access my two cameras at the same time, and send requests to take photo, zoom etc. However, my questions are:
What would be the best way to "stream" the liveview from the Sony camera to my webpage? I've been trying both socket.io and watching files, with roughly similar results. I need the best possible performance here, since the cameras and the server will not be on the same network later on. Right now I'm saving the image buffer to a file on the server, watching that file for changes, and then emitting an event to the webpage to load that image.
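For reference, a stripped-down version of what I do right now on the server (file path and event name are placeholders):

// Node.js sketch of my current approach: watch the file the liveview frames are
// written to and tell the page to reload the image. httpServer is created elsewhere.
const fs = require('fs');
const io = require('socket.io')(httpServer);

fs.watch('/tmp/liveview.jpg', (eventType) => {
  if (eventType === 'change') {
    // cache-busting query string so the browser actually refetches the image
    io.emit('liveviewFrame', '/liveview.jpg?ts=' + Date.now());
  }
});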
When I start the liveview from just one camera, it works pretty well (it stops sometimes, but comes back most of the time). However, when I have two liveviews running on the same page, the image updates pretty much stop right away and everything becomes very laggy. Any ideas why?
Thanks!
I want to enable users of my web app to upload videos with a maximum length of 10s, cropped/scaled to a certain resolution. Therefore users should be able to trim their selected video to 10s with a simple editor before uploading it.
Is there any library or example enabling client-side video editing to cut the video length as well as crop it before uploading it to a server? I found some canvas approaches for filters, single video frames, and export to WebM videos, but nothing bringing it all together. Has anyone done that before?
Appreciate any ideas :)
Typically video processing is a server-side thing, because it's easier to find and run complex (often compiled) libraries like ffmpeg there than in the browser, and it may cause fewer performance problems for the end user. Anyway, I think there are two options:
1. Process video on server - send file and configuration
The first approach assumes that you prepare a client-side "editor" based on canvas which simulates video editing. After the user sets up all the filters, crops, etc., the client sends the original video file plus a video-processing configuration, which is then used on the server to do the actual work.
Depending on which language you prefer on the backend, the implementation will differ, so I won't give you a ready-made snippet of code.
Of course you can switch the order of tasks: upload the original file first, then simulate the video processing on the client side, and finally send the mentioned configuration to the backend and process the video there.
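For example, the configuration sent along with the file could be nothing more than a JSON description of what the user did in the editor (all field names and the upload URL here are made up for illustration):

// Hypothetical edit description collected from the canvas "editor"
const editConfig = {
  trim: { start: 2.5, duration: 10 },               // seconds
  crop: { x: 0, y: 120, width: 1280, height: 720 },
  scale: { width: 640, height: 360 }
};

// Send the untouched original plus the configuration to the backend
const formData = new FormData();
formData.append('video', fileInput.files[0]);       // fileInput: the <input type="file">
formData.append('config', JSON.stringify(editConfig));
fetch('/process-video', { method: 'POST', body: formData });

The backend then translates that object into the corresponding ffmpeg (or similar) calls.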
2. Process video within WebAssembly
If you really need to keep everything on the client side, you can try a WebAssembly port of a library like https://github.com/ffmpegwasm/ffmpeg.wasm and send the already-processed video file to the server.
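A minimal sketch of that, assuming the 0.11.x API of ffmpeg.wasm (the API has changed between releases, so check the project README):

import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

const ffmpeg = createFFmpeg({ log: true });

async function trimAndCrop(file) {
  if (!ffmpeg.isLoaded()) await ffmpeg.load();
  // copy the user's file into ffmpeg.wasm's in-memory file system
  ffmpeg.FS('writeFile', 'input.mp4', await fetchFile(file));
  // keep the first 10 seconds and crop/scale to the target resolution
  await ffmpeg.run('-i', 'input.mp4', '-t', '10',
                   '-vf', 'crop=640:480,scale=640:360', 'output.mp4');
  const data = ffmpeg.FS('readFile', 'output.mp4');
  return new Blob([data.buffer], { type: 'video/mp4' });
}

Keep in mind that this downloads a multi-megabyte wasm build and that in-browser encoding is much slower than doing the same work on a server.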
I need to record a webpage and save it as a video, in an automated manner, without human interaction.
I am creating a NodeJS app that generates MP4 videos on the request of the user. The user provides an MP3 file, and the app generates an animated waveform for the sound file on top of an illustration.
What I came up with so far is a system that opens a generated web page in the backend, plays the audio file, and shows an audio visualization for it on an HTML canvas element, layered on top of another canvas with mainly static components (such as images) that do not animate. The system records this, and the output is a video file. Finally, I merge the video file with the sound file to create the final file for the user.
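The final merge itself is the easy part; from Node something along these lines should do it (file names are placeholders):

const { spawn } = require('child_process');
// copy the rendered video stream as-is, encode the MP3 to AAC, stop at the shorter input
spawn('ffmpeg', ['-i', 'render.mp4', '-i', 'input.mp3',
                 '-c:v', 'copy', '-c:a', 'aac', '-shortest', 'final.mp4']);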
For the recording part, I came up with 2 possible solutions, but both of them have problems which I am not able to solve at the moment.
Solution #1
Use a headless browser API such as PhantomJS or Puppeteer to take a screenshot x times per second and pipe the frames to FFmpeg.
The problem
The problem with this is that the process is not realtime. It would work fine if it were JUST an animation, but mine is dependent on the audio file. The audio file keeps playing during the render, which results in a glitchy, 1 FPS-esque video.
Possible solution?
Don't play the audio file live but convert the audio file into raw data. Animate the audio visualization based on the raw data instead.
I'm not sure how to do this or whether it's even possible.
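Roughly what I imagine, as an untested sketch (decodeAudioData and getChannelData are the standard Web Audio API calls; drawFrame and captureScreenshot are placeholders for my own code):

async function renderFromRawAudio(url) {
  // decode the MP3 once instead of playing it live
  const audioCtx = new AudioContext();
  const response = await fetch(url);
  const audioBuffer = await audioCtx.decodeAudioData(await response.arrayBuffer());
  const samples = audioBuffer.getChannelData(0);   // raw PCM samples, channel 0

  const fps = 30;
  const samplesPerFrame = Math.floor(audioBuffer.sampleRate / fps);

  // every frame is rendered deterministically from the samples,
  // no matter how slowly the headless browser advances
  for (let frame = 0; frame * samplesPerFrame < samples.length; frame++) {
    const slice = samples.subarray(frame * samplesPerFrame, (frame + 1) * samplesPerFrame);
    drawFrame(slice);            // placeholder: draw the waveform for this slice
    await captureScreenshot();   // placeholder: e.g. tell Puppeteer to grab this frame
  }
}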
Solution #2
Play, record, and save the animation, all in the frontend.
Could use ccapture.js to record and save a canvas.
Use a headless browser to open the page and save it to disk when it's done playing.
Doesn't sound like it's the best solution.
The problem(s)
I have more than 1 canvas.
It takes a while, especially when the audio file is longer than 10 minutes.
Making users wait for a long time can be a deal-breaker.
Possible solution?
Merge canvases into one.
No idea how to speed up the rendering time and I doubt it's possible this way.
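For the "merge canvases into one" part at least, compositing them onto a single canvas each frame seems doable; an untested sketch (element ids are placeholders):

const composite = document.getElementById('composite');      // the canvas CCapture would record
const ctx = composite.getContext('2d');
const staticLayer = document.getElementById('staticLayer');   // illustration / images
const waveLayer = document.getElementById('waveLayer');       // animated waveform

function drawComposite() {
  ctx.clearRect(0, 0, composite.width, composite.height);
  ctx.drawImage(staticLayer, 0, 0);   // a canvas element is a valid drawImage source
  ctx.drawImage(waveLayer, 0, 0);
  requestAnimationFrame(drawComposite);
}
requestAnimationFrame(drawComposite);

The rendering-time problem remains, though.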
Late answer from someone looking for similar options due to the convenience of some browser SVG APIs:
My first recommendation, as someone who has written a fair amount of my own audio visualization software, is to use a graphics library and language that don't require a browser or GPU, like GD, Anti-Grain Geometry, or Cairo, with any server-side language. You might also check out Processing.org (which I haven't used); I'm not sure if there's a headless version.
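As one concrete illustration of that (untested sketch; the waveform math is a placeholder), node-canvas is a Cairo-backed option for Node.js that renders 2D-canvas calls without any browser:

// npm install canvas   (Cairo-backed node-canvas; assumes a frames/ directory exists)
const { createCanvas } = require('canvas');
const fs = require('fs');

const width = 1280, height = 720, fps = 30, seconds = 10;
const canvas = createCanvas(width, height);
const ctx = canvas.getContext('2d');

for (let frame = 0; frame < fps * seconds; frame++) {
  ctx.fillStyle = '#000';
  ctx.fillRect(0, 0, width, height);
  ctx.strokeStyle = '#0f0';
  ctx.beginPath();
  for (let x = 0; x < width; x++) {
    // placeholder waveform; real code would read amplitudes from the audio file
    ctx.lineTo(x, height / 2 + Math.sin((x + frame * 10) / 40) * 100);
  }
  ctx.stroke();
  const name = 'frames/frame-' + String(frame).padStart(5, '0') + '.png';
  fs.writeFileSync(name, canvas.toBuffer('image/png'));
}
// the numbered PNGs can then be fed to ffmpeg together with the audio track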
If that's not possible, I've found these so far but haven't tried them:
https://github.com/tungs/timecut
https://github.com/myplanet/headless-render
https://wave.video/blog/how-we-render-animated-content-from-html5-canvas/
I'm currently working on the following:
On one computer, I have a browser with a white canvas that you can draw on.
On many other computers, you should be able to receive that canvas as a video stream. The plan would be to somehow convert the canvas surface to a video stream and send it via UDP to the other computers.
What I have achieved so far is that the canvas is redrawn on the other computers with node.js and socket.io (so I basically just send the drawing information, like the coordinates). I also use the captureStream() method to feed the canvas surface into a video tag. So "visually" it's working: I draw on one computer, and on the other computers I can set the video to fullscreen and it seems to work.
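For reference, the captureStream part is essentially just this (element ids are placeholders):

const canvas = document.getElementById('drawingCanvas');
const stream = canvas.captureStream(30);            // a 30 fps MediaStream from the canvas
const video = document.getElementById('preview');
video.srcObject = stream;                           // this is the video shown fullscreen
video.play();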
But that's not yet what I want and need. I need it as a real video stream, so that I could receive it with e.g. MPV. So the question is: how can I send the canvas surface as a UDP live video stream? Probably I would also need to push it through FFmpeg or something to transcode it.
I have read a lot so far, but haven't completely figured out what to do.
I had a look at the MediaStream you get back from captureStream(), but that doesn't seem to help a lot, as getTracks() isn't working when capturing from a canvas.
Also, when talking about WebRTC, I'm not sure if it would work; isn't it P2P? Or can I somehow broadcast it and send packets to a UDP address? What I read here is that it is not directly possible. But even if it were, what should I send then? So how can I get the canvas surface out as a video?
So there are basically two questions: 1. What would I have to send, i.e. how can I turn the canvas into a video stream? And 2. How can I send it as a stream to other clients?
Any approaches or tips are welcome.
The timetocode.org site is an example of streaming from an HTML5 canvas (on the host computer) to a video element (on a client computer).
There's help in the "More on the demos" link on the main page; read the topic on the multiplayer stuff there. But basically you just check the "Multiplayer" option, name a "room", connect to that room (that makes you the host of that room), follow one of the links to the client page, then connect the client to the room that you set up. You should shortly see the canvas video streaming out to the client.
It uses socket.io for signaling in establishing WebRTC (P2P) connections. Note that the client side sends mouse and keyboard data back to the host via a WebRTC datachannel.
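The signaling itself is just forwarding SDP offers/answers and ICE candidates through the socket.io room; a stripped-down sketch of the host side (event names here are placeholders, not the actual timetocode code):

const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

// trickle ICE candidates to the client via socket.io
pc.onicecandidate = (evt) => {
  if (evt.candidate) socket.emit('ice-candidate', evt.candidate);
};

// create and send the offer once the canvas track has been added
async function callClient() {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  socket.emit('offer', pc.localDescription);
}

socket.on('answer', (answer) => pc.setRemoteDescription(answer));
socket.on('ice-candidate', (candidate) => pc.addIceCandidate(candidate));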
Key parts of the host-side code for the video stream are the captureStream method of the canvas element,
var hostCanvas = document.getElementById('hostCanvas');
videoStream = hostCanvas.captureStream(); // optionally captureStream(60) to request 60 fps
and the addTrack method of the WebRTC peer connection object,
pc.addTrack( videoStream.getVideoTracks()[0], videoStream);
and on the client-side code, the ontrack handler that directs the stream to the srcObject of the video element:
pc.ontrack = function (evt) {
    videoMirror.srcObject = evt.streams[0];
};
Can anyone guide me on how to achieve this? I am listing the requirements as pointers:
A Linux binary captures frames from the locally attached webcamera and stores them in a folder. This is a continuous process. The images are stored numerically.
I have a webserver which outputs the latest image received from the webcamera. This is a PHP file which fetches the most recent image and prints it out.
What I have now is a JavaScript that refreshes the image every second and displays it in the img tag. Though it works, the output is slow and updates one frame at a time.
I am trying to display the images quickly, in a way that looks like an MJPEG movie being played (not that it has to be that good, as I learned from the forums that HTTP does have its overhead).
<script type="text/javascript">
function refresh() {
    document.images["pic1"].src = "/latimage.php?camid=$selectedcamid&ref=" + new Date();
    setTimeout(refresh, 1000);
}
if (document.images) window.onload = refresh;
</script>
<img src='/latimage.php?camid=$selectedcamid' id='pic1'>
The above code works perfectly. But I'd like to display the frames obtained from the webcam more quickly, at least 3 to 4 frames per second.
As I understood from my searches so far, it is not too feasible to do the refresh too quickly, as the HTTP round trip does take time.

I am trying to find some details on getting this done using a method by which I can prefetch 100 frames into an image array (I would call it buffering) and start displaying one image at a time at a rate of 3 images per second.

While displaying the images, the older images should be removed from the array and the latest ones fetched should be inserted at the end. Thus the loop is infinite.
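A rough, untested sketch of the kind of buffering loop I have in mind (plain JavaScript, same PHP endpoint as above):

var buffer = [];                  // preloaded Image objects, oldest first
var BUFFER_SIZE = 100;
var DISPLAY_INTERVAL = 1000 / 3;  // 3 frames per second

// keep topping up the buffer with the latest frames
function prefetch() {
  if (buffer.length < BUFFER_SIZE) {
    var img = new Image();
    img.onload = function () { buffer.push(img); };          // newest goes to the end
    img.src = '/latimage.php?camid=$selectedcamid&ref=' + new Date().getTime();
  }
  setTimeout(prefetch, 100);
}

// display and discard the oldest buffered frame at 3 fps
function display() {
  if (buffer.length > 0) {
    document.images['pic1'].src = buffer.shift().src;
  }
  setTimeout(display, DISPLAY_INTERVAL);
}

prefetch();
display();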
I am sorry for asking so many questions, but I am unable to find a proper direction to start off with. I can do the above in a .NET Windows application quite easily, but in the web browser I am unable to get any ideas. I am not sure if a jQuery image array, JSON, or plain JavaScript would do.
I need some guidance, please.
If you need to capture the camera output to disk, then I suggest capturing the camera output as video (at 3 FPS) and then streaming that video file to your browser using WebSockets. Here is an example of doing that. If you are willing to run nginx on your server then live_thumb is a complete solution that captures and streams video via WebSockets.
On the other hand, if your goal is just to view the output of the camera and you don't need to store the video, you could consider using WebRTC and running a browser at both ends and then just hooking up the media stream. In other words one browser (perhaps a headless variant) would run on the system with your camera and would stream the video to your other browser using WebRTC. With WebRTC you could get much higher frame rates and your bandwidth would probably still be significantly lower than sending individual images at a slow frame rate.
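A bare-bones sketch of what that hookup looks like with the standard WebRTC/getUserMedia calls (signaling between the two browsers is omitted, and the element id is a placeholder):

// camera side: send the webcam track over the peer connection
const pc = new RTCPeerConnection();
navigator.mediaDevices.getUserMedia({ video: true, audio: false })
  .then((stream) => {
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  });

// viewer side: attach the incoming stream to a <video> element
pc.ontrack = (evt) => {
  document.getElementById('cameraView').srcObject = evt.streams[0];
};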
I want to stream images (screenshots of the server) using a local server (Apache). For example, I will go to the website using a machine on the same network, and the website will show me a set of images at around 30 fps (so I will see it as a video). The image quality has to be good.
At the moment I can reach this website from a machine connected to the local network, but I cannot figure out a way to stream the images. And I have no knowledge of PHP.
Is this possible to achieve?
Can anyone point me in the right direction?
Thanks.
ffmpeg can help you create videos from images. It has a CLI binary that can do the task.
Ref: https://trac.ffmpeg.org/wiki/Create%20a%20video%20slideshow%20from%20images
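For example, following the pattern described on that page (adjust the file name pattern and frame rate to your capture setup):

ffmpeg -framerate 30 -i img%03d.png -c:v libx264 -pix_fmt yuv420p out.mp4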