I'm working on a video-heavy site where an event triggers a few videos to start playing, but one of the larger ones unloads itself after a second or two, resulting in an error:
FAILED TO LOAD RESOURCE ERROR
even though it was loaded a moment ago.
Staggering the buffering of each video helps slightly, but the unloading still happens occasionally. Any suggestions on managing this issue would be greatly appreciated.
There is perhaps too little information in the post to give an exact answer, but I would look into bandwidth (computer and internet) and video bit-rates as a first step. An important question is what dimensions the videos have and at what bit-rate they are encoded (HD, PAL/NTSC, custom).
Bandwidth problems can happen at several stages:
Is the server capable of delivering the total bit-rate required (the sum of the video bit-rates plus overhead)? It must sustain this bit-rate continuously, as a minimum. This is not just about the internet bandwidth the server has available, but also about factors such as loading from storage, server load and so forth.
Is the internet connection (the bottleneck point) able to pass this bit-rate through? If the total bit-rate of the videos exceeds the available bandwidth, including overhead, you won't be able to load the streams fast enough.
Is the computer able to buffer and decode all these video streams simultaneously? If the videos are, for example, HD (even if they are scaled down in the browser window, each frame is decoded at its full dimensions), the computer needs to decode and process a huge amount of data, even with hardware acceleration.
It could be any of these points, but I would start with point 3 if you already know your internet connection is more than capable (including overhead). Also, if the browser uses the disk as a temporary cache for the buffer, the disk becomes a factor as well (seek times, fragmentation).
To narrow it down, find out the bit-rate of each video and sum them; for example, four 5 Mbps streams need a sustained 20 Mbps plus overhead. Then check whether your internet connection can handle that total. If it can, test against the server to see if it has problems delivering the content streams. If neither shows any sign of problems, try running your application with the videos served from local disk (through a local server) and see if your computer is capable of decoding them all simultaneously.
Even if unlikely, there is also the possibility of (packet) errors in transmission despite good bandwidth, as well as problems in the video streams' encoding itself (general file errors, an atypical encoding scheme if these are video container files, etc.).
This is my website, hosted by Netlify. All is good, except that when I cycle through this array of objects I get an initial lag of 0.5-2 seconds in my audio; the code that plays the audio is "audio.play()".
After I have cycled through them once, the lag almost completely disappears. Is this a Netlify thing?
On my localhost it works like in the movies, so perfect!
Would love to get a helpful link/video/advice, thanks.
https://csgo-weapons.netlify.app/
It isn't a Netlify thing in particular, just an internet thing in general.
File loading isn't instantaneous on the web. When someone requests a file (in this case, the gun sound), it needs to get from the server to the client, and that takes some time (depending on things like network speeds, physical distance, etc.). On your local machine, these loading times are negligible, since the files are not traveling over the web.
After a file is loaded it's cached in the browser, which is why you're noticing no delay after cycling through all the guns.
A method to mitigate this issue would be to request and load all the sound files before the user starts cycling through the guns, so they don't need to be requested one at a time, on demand. You could also try to reduce file sizes, although that won't help as much as preloading.
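For instance, here is a minimal preloading sketch, assuming each weapon object carries a soundUrl field (a hypothetical name; adjust it to your real data shape):

// Create every Audio object up front so the browser fetches the files
// before the user starts cycling. `weapons` and `soundUrl` are assumed names.
const sounds = weapons.map(function (weapon) {
  const audio = new Audio(weapon.soundUrl);
  audio.preload = "auto"; // a hint asking the browser to download the whole file
  return audio;
});

// Later, when the user lands on gun number i:
sounds[i].play();

preload = "auto" is only a hint, but in practice it gets the files into the browser cache so the first play() is immediate.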
I have a somewhat working system that:
Produces audio on a server into 1-second WAV files
Reads each WAV file and sends it through a websocket
Passes the binary websocket data to AudioContext.decodeAudioData
Buffers the decoded audio until 4 packets have arrived (4 seconds)
Processes the buffer and hands each clip to AudioBufferSourceNode.start(time), where time = (clip_count * duration)
So if I have 4 audio clips, the calls would look like
AudioBufferSourceNode.start(0);
AudioBufferSourceNode.start(1);
AudioBufferSourceNode.start(2);
AudioBufferSourceNode.start(3);
I thought this would perfectly schedule 4 seconds of audio, but I seem to be facing clock issues, perhaps because I am expecting the audio clock to be perfect. I have already used a gain node to remove the clicks between the 1-second clips, but I start to get timing issues either right away or after a long period of time. Basically, in the worst case, my audio plays like this:
-----------------------     ---------     ---------     ---------
| 1 second | 1 second |     | 950ms |     | 900ms |     | 850ms |
-----------------------     ---------     ---------     ---------
                         gap           gap           gap
In this diagram, "1 second" and "#ms" show how much audio is actually playing; it should always be 1 second. As the audio progresses, it also seems to develop gaps. I guess that even when I tell the audio context to play a clip at exactly 0 it's fine, but other scheduled clips may or may not start on time.
Is this correct, or is there something else going wrong in my system? Can I rely 100% on an audio clip playing at exactly the scheduled time, or do I need to add some calculations and allow a tolerance of a few ms either way?
It looks like the tool that serves this purpose is AudioWorkletNode.
According to the AudioBufferSourceNode documentation:
The AudioBufferSourceNode interface is an AudioScheduledSourceNode which represents an audio source consisting of in-memory audio data, stored in an AudioBuffer. It's especially useful for playing back audio which has particularly stringent timing accuracy requirements, such as for sounds that must match a specific rhythm and can be kept in memory rather than being played from disk or the network. To play sounds which require accurate timing but must be streamed from the network or played from disk, use an AudioWorkletNode to implement its playback.
This case is exactly streaming from the network. AudioBufferSourceNode is not designed to be fed on the fly from the network.
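As a very rough illustration of that API's shape (a skeletal sketch, not a complete player: a real implementation must handle mismatched chunk sizes, underruns and sample-rate conversion), the worklet side could look like this:

// processor.js -- PCM chunks (Float32Array) arrive over the node's message
// port and are drained one 128-sample render quantum at a time.
class StreamPlayer extends AudioWorkletProcessor {
  constructor() {
    super();
    this.queue = [];
    this.port.onmessage = (e) => this.queue.push(e.data);
  }
  process(inputs, outputs) {
    const out = outputs[0][0]; // first output, first channel
    const chunk = this.queue.shift();
    if (chunk) out.set(chunk.subarray(0, out.length)); // simplified framing
    return true; // keep the processor alive
  }
}
registerProcessor("stream-player", StreamPlayer);

And on the main thread (assuming `socket` is the websocket and the server sends raw float32 PCM; a WAV payload would need its header stripped first):

const ctx = new AudioContext();
ctx.audioWorklet.addModule("processor.js").then(function () {
  const node = new AudioWorkletNode(ctx, "stream-player");
  node.connect(ctx.destination);
  socket.binaryType = "arraybuffer";
  socket.onmessage = (e) => node.port.postMessage(new Float32Array(e.data));
});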
What can lead to desync:
By the nature of the JavaScript scheduler, there is no guarantee that code executes at an exact time. Node might be busy with another job at the same moment, which delays sending the data.
The timer fires on the next tick after all the data has been sent, which can take some time.
The client-side scheduler has even more restrictions than the server-side one: browsers clamp nested timers to a minimum of about 4 ms, i.e. roughly 250 ticks per second.
The API used here is not designed for this flow.
Recommendations:
Always keep a buffer. If for some reason the buffered frames have already been played, it may be reasonable to request new ones sooner.
Grow the buffer on the fly. Starting playback after two messages is fine, but it may be worth increasing the number of buffered messages over time, to perhaps something like 15 seconds' worth.
Prefer a dedicated tool for the connection and data transfer. Nginx will serve this purpose well; otherwise a slow client connection will tie up Node until the data has been transferred.
If the connection drops for a second (on a mobile network, for example), there should be something to restore state from the proper frame, refill the buffer, and do all of that without interruption.
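On the scheduling itself, one small change often helps regardless of the transport (a sketch, not the poster's code): keep a running playhead on the AudioContext clock and advance it by each decoded buffer's actual duration instead of computing clip_count * duration. Decoding and resampling can make clips fractionally shorter or longer than 1 second, and those rounding differences accumulate into exactly the kind of drift shown in the diagram.

const ctx = new AudioContext();
let playhead = 0; // next start time on the audio clock, in seconds

function enqueue(audioBuffer) {
  const source = ctx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(ctx.destination);
  // never schedule in the past; keep a little headroom if we fell behind
  playhead = Math.max(playhead, ctx.currentTime + 0.05);
  source.start(playhead);
  playhead += audioBuffer.duration; // the actual duration, not an assumed 1.0 s
}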
Suppose you have a website running on a server, and the website comes in three different versions: heavy, medium and lite. You have to load the lite version if the client's speed is below a certain limit (let's say 500 kbps), the medium version between, say, 500 kbps and 25 Mbps, and the heavy version above 25 Mbps. Can you do it?
I was thinking of making a server-side script that first checks the connection speed of the client (I don't know how), then redirects them to the respective version based on the result.
If there is another way, please do tell...
There is no definitive, reliable way to do this and I recommend that you focus on building an optimized site for your intended target audience and their devices.
Internet connections are pretty good around the world. The effort and ongoing maintenance of updating and managing three frontends is not feasible. Instead, focus on serving optimized content and use modern techniques to serve media targeting screen size and device: limit unnecessary media, compile and bundle scripts, ensure servers are serving gzipped content, and place your servers/CDNs near your audience.
If you did, however, want to pursue this exercise, you could play with the following idea: make an initial request to the server to get a timestamp (we want to work with the server's time, not the client's, which could be off). The client receives the timestamp and responds immediately, passing the timestamp back to the server. The server considers the difference between the two and redirects accordingly.
The problem is that connections are not consistent, and you cannot rely on that first request to represent the client's connection quality. There may be a dip in connection quality while they are connecting, etc.
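If you still wanted to prototype the exercise, here is a rough sketch with Express (an assumption on my part; any server framework works, and the route names and thresholds are purely illustrative):

const express = require("express");
const app = express();

// Step 1: hand the client a server-side timestamp.
app.get("/ping", (req, res) => res.json({ t: Date.now() }));

// Step 2: the client echoes it straight back; the difference is one round
// trip measured entirely on the server's clock. Note this measures latency,
// not bandwidth -- the two only loosely correlate.
app.get("/pong", (req, res) => {
  const rtt = Date.now() - Number(req.query.t);
  if (rtt > 300) return res.json({ version: "lite" });
  if (rtt > 80) return res.json({ version: "medium" });
  res.json({ version: "heavy" });
});

app.listen(3000);

// Client side:
// const { t } = await (await fetch("/ping")).json();
// const { version } = await (await fetch("/pong?t=" + t)).json();
// location.href = "/" + version + "/";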
Maintaining two or more server-side codebases is neither easy nor ideal. Focus on optimization instead, especially of website assets such as images; images commonly account for a large share of a site's load time (figures around 75% are often cited).
Ideally you can start with multiple image sources using img srcset. What you are asking for is attainable with respect to images and videos: you can offer several candidate files, and the browser will select the most appropriate one for the current conditions (the spec even allows it to factor in connection speed).
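For example (the file names here are hypothetical):

<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 600px"
     alt="Product photo">

The browser downloads only the candidate it deems appropriate for the display size and pixel density, so smaller devices never pay for the 1600px file.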
I'm using VideoJs to play various videos. Some bigger than others.
Here's a simple scenario: a video that is 100 MB in total, with a duration of 10 minutes, starts playing. If the user skips to minute 2, a call is made to the backend to serve the whole remaining video.
That's not good as far as user experience goes. The download time can be quite long, and the player is stuck loading until it's finished.
Ideally, I'd want it to download in chunks of 5-10 seconds.
Honestly, JavaScript isn't my strong point, so I don't really know where to begin doing that.
The backend accepts byte ranges, and I also have Varnish in front of it.
Also, I'm not opposed to using another video player if the one I'm currently using isn't right for this or doesn't support what I'm looking for.
Any pointer in the right direction is greatly appreciated.
For anybody who comes across this question and has the same problem:
https://info.varnish-software.com/blog/caching-partial-objects-varnish
Also make sure that Varnish forwards the Range header.
This is quite possibly an issue with your file or server configuration, and not necessarily VideoJS. When you want users to be able to seek beyond the current buffer, you're usually talking about pseudo-streaming.
To do this, your server must:
Support byte-range requests (you indicated that your back-end does support this)
Return the correct Content-Type header
Since you stated your server does support byte-range requests, I'd double-check the Content-Type header.
Also, if you are using H.264 MP4 files, you might need to optimize them for streaming by moving the metadata (MOOV atom) to the beginning of the file. Some video encoders also refer to this as "fast start". A standalone application that can do this to already encoded MP4s is qtfaststart.
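If you have ffmpeg available, it can also relocate the moov atom with a plain stream copy, no re-encoding:

ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4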
Otherwise, VideoJS should support seeking automatically. You can find a number of examples on JSFiddle.
You can also try to seek programmatically to see if that behaves any differently:
var player = videojs("my-video"); // the id of your <video> element
player.ready(function () {
  player.play();
  player.currentTime(340); // time to seek to, in seconds
});
Can anyone guide me on how to achieve this? I am listing the pieces in pointers:
A Linux binary captures frames from the locally attached webcam and stores them in a folder. This is a continuous process. The images are stored numerically.
I have a web server that outputs the latest image received from the webcam: a PHP file which fetches the most recent image and prints it out.
What I have now is a piece of JavaScript that refreshes the image every second and displays it in the img tag.
Though it works, the output is slow and updates one frame at a time.
I am trying to display the images quickly, so that it looks like an MJPEG movie being played (not that it has to be that good, as I learned from the forums that HTTP does have its overhead):
<script type="text/javascript">
  function refresh() {
    // cache-bust with the current time so the browser refetches the image
    document.images["pic1"].src = "/latimage.php?camid=$selectedcamid&ref=" + new Date();
    setTimeout(refresh, 1000);
  }
  if (document.images) window.onload = refresh;
</script>
<img src='/latimage.php?camid=$selectedcamid' id='pic1'>
The above code works perfectly, but my unsatisfied mind wants to display the frames from the webcam more quickly: at least 3 to 4 frames per second.
As I understand from my searches so far, it is not feasible to refresh too quickly, as each HTTP request does take time.
I am trying to find a method by which I can prefetch, say, 100 frames into an image array (I would call it buffering) and then display one image at a time at a rate of 3 images per second.
While the images are being displayed, the older images would be removed from the array and the latest ones fetched appended to the end. Thus the loop is infinite.
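What I picture is roughly this (a sketch with made-up fill/drain rates, reusing my existing latimage.php endpoint):

var queue = [];
var MAX_BUFFER = 100;       // frames to keep buffered
var DISPLAY_INTERVAL = 333; // ~3 frames per second

// Fill: fetch new frames in the background, faster than they are drained.
setInterval(function () {
  if (queue.length >= MAX_BUFFER) return;
  var img = new Image();
  img.onload = function () { queue.push(img); };
  img.src = "/latimage.php?camid=$selectedcamid&ref=" + Date.now();
}, 100);

// Drain: swap the already-loaded node into the page, so there is no
// network wait at display time.
setInterval(function () {
  var next = queue.shift();
  if (next) {
    var current = document.getElementById("pic1");
    next.id = "pic1";
    current.parentNode.replaceChild(next, current);
  }
}, DISPLAY_INTERVAL);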
I am sorry for asking so many questions; I am unable to find a proper direction to start off with. I can do the above in a .NET Windows application quite easily, but in the web browser I am unable to get any ideas. I am not sure whether a jQuery image array, JSON or plain JavaScript would do.
I need some guidance, please.
If you need to capture the camera output to disk, then I suggest capturing the camera output as video (at 3 FPS) and then streaming that video file to your browser using WebSockets. Here is an example of doing that. If you are willing to run nginx on your server then live_thumb is a complete solution that captures and streams video via WebSockets.
On the other hand, if your goal is just to view the output of the camera and you don't need to store the video, you could consider using WebRTC and running a browser at both ends and then just hooking up the media stream. In other words one browser (perhaps a headless variant) would run on the system with your camera and would stream the video to your other browser using WebRTC. With WebRTC you could get much higher frame rates and your bandwidth would probably still be significantly lower than sending individual images at a slow frame rate.
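To give a flavour of the WebRTC route (a sketch only; the offer/answer exchange and ICE signaling still have to be wired up through a channel of your choosing):

// On the camera machine: grab the webcam and attach it to a peer connection.
const pc = new RTCPeerConnection();
navigator.mediaDevices.getUserMedia({ video: { frameRate: 4 } })
  .then(function (stream) {
    stream.getTracks().forEach(function (track) {
      pc.addTrack(track, stream);
    });
  });

// On the viewing browser: render the incoming stream in a <video> element.
pc.ontrack = function (event) {
  document.querySelector("video").srcObject = event.streams[0];
};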