This is my website, hosted by Netlify. All is good except that when I try to cycle through this array of objects, I get an initial lag of 0.5-2 seconds in my audio (the code that plays the audio is "audio.play()").
After I have cycled through them once, the lag almost completely disappears. Is this a Netlify thing?
On my localhost it works perfectly, like in the movies!
Would love to get a helpful link/video/advice, thanks.
https://csgo-weapons.netlify.app/
It isn't a Netlify thing in particular, just an internet thing in general.
File loading isn't instantaneous on the web. When someone requests a file (in this case, the gun sound), it needs to get from the server to the client, and that takes some time (depending on things like network speeds, physical distance, etc.). On your local machine, these loading times are negligible, since the files are not traveling over the web.
After a file is loaded it's cached in the browser, which is why you're noticing no delay after cycling through all the guns.
A way to mitigate this issue would be to request and load all the sound files before the user starts cycling through the guns, as in the sketch below. That way, they don't need to be requested one at a time, on demand. You could also try to reduce file sizes, although that won't help as much as the preloading.
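A minimal preloading sketch, assuming the sounds live in an array of objects with a soundUrl property (both names are placeholders; adjust them to match your data):

function preloadSounds(items) {
  return Promise.all(
    items.map(({ soundUrl }) => new Promise((resolve) => {
      const audio = new Audio();
      audio.preload = "auto";
      // "canplaythrough" fires once enough data is buffered to play
      // without stalling; resolve on error too, so one bad file
      // doesn't block the rest.
      audio.addEventListener("canplaythrough", () => resolve(audio), { once: true });
      audio.addEventListener("error", () => resolve(null), { once: true });
      audio.src = soundUrl;
    }))
  );
}

// Kick this off on page load; afterwards, calling .play() on the
// returned Audio objects should be near-instant.
preloadSounds(weapons).then((sounds) => { /* ready to use */ });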
I need to record a webpage and save it as a video, in an automated manner, without human interaction.
I am creating a NodeJS app that generates MP4 videos on the request of the user. The user provides an MP3 file, the app generates animated waveforms for the sound file on top of an illustration.
What I came up with so far is a system that opens a generated web page in the backend, plays the audio file, and shows an audio visualization for it on an HTML canvas element, layered on top of another canvas with mainly static components, such as images, that do not animate. The system records this, and the output is a video file. Finally, I merge the video file with the sound file to create the final file for the user.
I came up with two possible solutions, but both of them have problems which I am not able to solve at the moment.
Solution #1
Use a headless browser API such as PhantomJS or Puppeteer to take a screenshot x times per second and pipe the frames to FFmpeg.
The problem
The problem with this is that the process is not realtime. It would work fine if it were JUST an animation, but mine is dependent on the audio file. The audio file keeps playing during the render, which results in a glitchy, 1FPS-esque video.
Possible solution?
Don't play the audio file live; instead, convert the audio file into raw data and animate the audio visualization based on the raw data (see the sketch below).
Not sure how to do this, or whether it's even possible.
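What I imagine is something along these lines (untested; drawWaveform is a placeholder for my own canvas rendering code, and I'm assuming the Web Audio API's decodeAudioData can hand me the raw samples):

async function renderFrames(mp3ArrayBuffer, fps, drawWaveform) {
  const ctx = new AudioContext();
  const buffer = await ctx.decodeAudioData(mp3ArrayBuffer);
  const samples = buffer.getChannelData(0);               // mono PCM, values in -1..1
  const samplesPerFrame = Math.floor(buffer.sampleRate / fps);
  const frameCount = Math.ceil(samples.length / samplesPerFrame);
  for (let i = 0; i < frameCount; i++) {
    // Each frame gets its own slice of samples, so rendering no
    // longer depends on live playback timing.
    const slice = samples.subarray(i * samplesPerFrame, (i + 1) * samplesPerFrame);
    drawWaveform(slice, i);
  }
}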
Solution #2
Play, record, and save the animation, all in the frontend.
Could use ccapture.js to record and save a canvas.
Use a headless browser to open the page and save it to disk when it's done playing.
Doesn't sound like it's the best solution.
The problem(s)
I have more than 1 canvas.
It takes a while, especially when the audio file is longer than 10 minutes.
Making users wait for a long time can be a deal-breaker.
Possible solution?
Merge canvases into one (see the sketch below).
No idea how to speed up the rendering time, and I doubt it's possible this way.
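Merging itself seems doable by compositing the layers onto one output canvas each frame with drawImage (canvas names and dimensions are placeholders); it's the rendering time I can't solve:

const out = document.createElement("canvas");
out.width = 1280;
out.height = 720;
const outCtx = out.getContext("2d");

function composite(staticCanvas, waveformCanvas) {
  outCtx.clearRect(0, 0, out.width, out.height);
  outCtx.drawImage(staticCanvas, 0, 0);    // static illustration layer
  outCtx.drawImage(waveformCanvas, 0, 0);  // animated waveform layer on top
}
// Then record "out" (e.g. with ccapture.js) instead of the individual layers.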
Late answer from someone looking for similar options, due to the convenience of some browser SVG APIs:
My first recommendation, as someone who has written a fair amount of my own audio visualization software, is to use a graphics library and language that don't require a browser or GPU, like GD, Anti-Grain Geometry, or Cairo, with any server-side language. You might also check out Processing.org (which I haven't used); I'm not sure if there's a headless version.
If that's not possible, I've found these so far but haven't tried them:
https://github.com/tungs/timecut
https://github.com/myplanet/headless-render
https://wave.video/blog/how-we-render-animated-content-from-html5-canvas/
I am working on a web-based tool that includes a number of videos. This tool will be stored on a network share, and will normally run from there. However, users will occasionally make local copies of this tool to take offline. So, I want to include some JS to check if it can load the video from the C: drive before loading it over the network, to cut down on bandwidth use. The local copy will be managed by an automated utility, so the paths should be consistent.
However, when I try to load a video from the local machine, I get a "Video playback was aborted" error. I found this thread, which led me to this link. I checked the suggested registry keys and they were already set to 0, except for restricted sites.
I believe it might have something to do with the video being in a different zone than the HTML page. When run from the local machine, the page is in the My Computer zone, and when run from a network share, it is in the Local Intranet zone.
Is there any way I can make this work? Or even a setting I can look at to verify that I'm correct in assuming that the issue is due to the video being loaded across security zones?
Can anyone guide me on how to achieve this? I am listing the details in pointers:
A Linux binary captures frames from the locally attached webcam and stores them in a folder. This is a continuous process. The images are stored numerically.
I have a webserver which gives as output the latest image received from the webcam. This is a PHP file which gets the most recent image received and prints it out.
What I have now is a JavaScript snippet which refreshes the image every second and displays it in the img tag.
Though it works, the output is slow and updates only one frame at a time.
I am trying to display the images quickly, in a way that looks like an MJPEG movie being played (not that it has to be that good, as I learned from the forums that HTTP does have its overhead).
<script type="text/javascript">
function refresh() {
    // Append a timestamp so the browser doesn't serve a cached copy.
    document.images["pic1"].src = "/latimage.php?camid=$selectedcamid&ref=" + new Date().getTime();
    // Schedule the next refresh in one second.
    setTimeout(refresh, 1000);
}
if (document.images) window.onload = refresh;
</script>
<img src='/latimage.php?camid=$selectedcamid' id='pic1'>
The above code works perfectly. But my unsatisfied mind wants to display the frames obtained from the webcam more quickly: at least 3 to 4 frames per second.
As I understood from my searches so far, it is not feasible to do the refresh act too quickly, as the HTTP process does take time.
I am trying to find some details on getting this done using a method by which I can prefetch 100 frames into an image array (I would call it buffering) and start displaying one image at a time at the rate of 3 images per second, as sketched below.
While displaying the images, the older images should be removed from the array and the latest ones fetched should be inserted at the end. Thus the looping is infinite.
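Something like this is what I have in mind (untested, and it assumes the PHP script could be extended to serve a specific frame by number via a hypothetical frame parameter):

var buffer = [];
var BUFFER_SIZE = 100;
var DISPLAY_FPS = 3;
var nextFrameId = 0;

// Keep topping up the buffer with new frames from the server.
function prefetch() {
  if (buffer.length >= BUFFER_SIZE) return;
  var img = new Image();
  img.onload = function () { buffer.push(img); };
  img.src = "/latimage.php?camid=$selectedcamid&frame=" + (nextFrameId++);
}
setInterval(prefetch, 100);

// Display the oldest buffered frame 3 times per second; since each
// image is already loaded, swapping the src should be instant.
setInterval(function () {
  if (buffer.length === 0) return; // wait for the buffer to fill
  document.images["pic1"].src = buffer.shift().src;
}, 1000 / DISPLAY_FPS);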
I am sorry for asking too many questions; I am unable to find any proper direction to start off with. I can do the above in a .NET Windows application quite easily, but in the web browser I am unable to get any ideas. I am not sure whether a jQuery image array, JSON, or simple JavaScript would do.
I need some guidance, please.
If you need to capture the camera output to disk, then I suggest capturing the camera output as video (at 3 FPS) and then streaming that video file to your browser using WebSockets. Here is an example of doing that. If you are willing to run nginx on your server then live_thumb is a complete solution that captures and streams video via WebSockets.
On the other hand, if your goal is just to view the output of the camera and you don't need to store the video, you could consider using WebRTC and running a browser at both ends and then just hooking up the media stream. In other words one browser (perhaps a headless variant) would run on the system with your camera and would stream the video to your other browser using WebRTC. With WebRTC you could get much higher frame rates and your bandwidth would probably still be significantly lower than sending individual images at a slow frame rate.
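If you go the WebRTC route, here is a sketch of the capture side only; a real setup also needs a signaling channel (not shown) to exchange the offer/answer and ICE candidates:

navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
  var pc = new RTCPeerConnection();
  // Attach every camera track to the peer connection.
  stream.getTracks().forEach(function (track) {
    pc.addTrack(track, stream);
  });
  // From here: pc.createOffer(), send it over your signaling channel,
  // and apply the remote answer with pc.setRemoteDescription().
});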
I'm working on a video-heavy site, and an event triggers a few videos to start playing, but one of the larger ones unloads itself after a second or two, resulting in an error:
FAILED TO LOAD RESOURCE ERROR
even though it was loaded a moment ago.
Staggering the buffering of each video helps slightly, but the unloading still happens occasionally. Any suggestions on managing this issue would be greatly appreciated.
There is perhaps too little information in the post to give an exact answer, but I would look into bandwidth (computer and internet) and video bit-rates as a first point. What dimensions the videos have and at what bit-rate they are encoded (HD, PAL/NTSC, custom) would be an important question.
Bandwidth problems can happen at several stages:
Is the server capable of delivering the total bit-rate required (the sum of the video bit-rates plus overhead)? It must sustain this bit-rate continuously as a minimum. This is not just about the internet bandwidth the server has available, but also factors such as loading from storage, server load, and so forth.
Is the internet connection (the bottleneck point) able to pass through this bit-rate? If the total bit-rate of the videos exceeds the available bandwidth, including overhead, you won't be able to load the streams fast enough.
Is the computer able to buffer and decode all these video streams simultaneously? If the videos are, for example, HD (even if they are scaled down in the browser window, the initial frame will be decoded at full frame dimension), the computer will need to decode and compute a huge amount of data, even if it is hardware accelerated.
It could be any of these points, really, but I would perhaps start with point 3 if you already know your internet connection is more than capable (including overhead). Also, if the browser uses the disk as a temporary cache for the buffer, the disk will become a factor as well (seek times, fragmentation).
To eliminate possibilities, find out the bit-rate of each video, sum them, and see if your internet connection can handle the total. If it does, run a test against the server to see if it has problems delivering the content streams. If neither shows any sign of problems, try to run your application with videos from the local disk (through a local server) and see if your computer is capable of decoding them all simultaneously.
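As a worked example (all numbers assumed for illustration): four videos encoded at 4 Mbit/s each need a sustained 16 Mbit/s; with 10-20% protocol overhead on top, the connection should comfortably deliver roughly 18-19 Mbit/s. Anything less and the larger streams will stall, which could surface as the kind of failure you are seeing.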
Even if unlikely, there is also the possibility of (packet) errors in transmission regardless of good bandwidth, as well as problems with the video streams' encoding itself (general file errors, an atypical encoding scheme in case these are video container files, etc.).
I've created a web app that caches all necessary code and data for offline use through applicationCache. However, every time the app starts up, it immediately tries to check for updates. This blocks the browser for a significant chunk of time, even if it doesn't find anything to update. This behavior is highly disruptive to the app (shouldn't updates be done in the background, anyway?). The checking stage alone takes a lot of time on a mobile device, and if it finds updates, all bets are off as to how long downloading will take (because it has to re-download all the files), which also freezes up the browser.
So, I am wondering:
Is there a way to delegate applicationCache updates to a shared Web Worker? OR
Is there a way to block all applicationCache updates until the user specifically wants to check for updates and presses a button that will initiate updates through applicationCache.update()? OR
Are there other ways to mitigate the time spent on checking for updates?
Shouldn't application cache updates run asynchronously in the background?
edit: perhaps a carefully-constructed cache-control header on the manifest file is the answer? I'll be investigating this, but I hope somebody can give me more info on these updates. Thanks.
UPDATE
Ok, I've played with headers, and nothing has helped. I'm starting a bounty. If you can help, please do!
If you want to give the user more control over the actual update, you could parameterise the manifest URL with something specific to that user, as illustrated below. Then, when the user wants to update, you fire off a request to the server, which rolls that particular user's manifest file, and then you reload the page client-side to force a reload of the manifest.
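For illustration (the manifest attribute is standard; the URL and parameter names are assumed):

<html manifest="/manifest.php?user=12345">

The PHP script emits a normal manifest plus a per-user revision comment (e.g. "# rev 7"); bumping that comment on the server is enough for the browser to treat the manifest as changed and re-fetch the cached files on the next load.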
I've since been doing some reading and came across this article which seems to be a much more elegant solution if the user is already on the cached page -
http://www.html5rocks.com/tutorials/appcache/beginner/#toc-updating-cache
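The core of that article's approach is to listen for the updateready event, swap in the freshly downloaded cache, and offer the user a reload:

window.addEventListener("load", function () {
  window.applicationCache.addEventListener("updateready", function () {
    if (window.applicationCache.status === window.applicationCache.UPDATEREADY) {
      // A new version of the cached files has been downloaded in the background.
      window.applicationCache.swapCache();
      if (confirm("A new version of this site is available. Load it?")) {
        window.location.reload();
      }
    }
  }, false);
}, false);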
As for the load time involved in the manifest up-to-date check, that's not something I've ever had a problem with. My understanding was that it happens in the background; are you just concerned about the browser showing loading hints?
The abort() method may be the answer...someday, but I haven't come across any browsers that implement it yet.
I had a similar problem and tried everything, including the wild idea of putting the manifest inside itself to see if it would cache itself, so that I could do the updates manually with AJAX requests and eval'ing JavaScript stuffed into localStorage...yikes.
Finally, I created a very simple html page with a simple manifest. When I tested it, the UI didn't lock up. Slowly I started to add things into the page, and play around with the manifest contents to see what things might cause it to freeze during the applicationCache check. I finally got failure when I added an image to the page, but left it out of the manifest -- that's when the UI started to lock up again. I went back to my original project, and found a few images that needed to be in the manifest, and that fixed the locking UI issue there too.
The checking phase of the applicationCache tries to be asynchronous (at least on the devices I've tested). However, if there are any files missing from the manifest, then everything must wait for the applicationCache to finish checking.
It appears that when the browser needs a file that has not been cached, it waits for the applicationCache to complete the update before it will make a request for the file -- which kind of makes sense, since other resources may rely on the missing file. This puts the brakes on the rendering and makes the UI freeze up. If the manifest is unreachable (e.g. on a different network), the UI can be locked up for about a minute.
To find the files that need to be added to the manifest, watch the server logs as you refresh the app a few times. The suspects will be any GET requests for files other than the manifest.
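For illustration, here is the shape of a manifest where everything the page requests is listed explicitly (filenames assumed); in my case, a forgotten image along the lines of logo.png was the culprit:

CACHE MANIFEST
# rev 12 -- bump this comment to trigger an update

CACHE:
index.html
app.js
style.css
logo.png

NETWORK:
*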