Improve Unity 3D AR game performance [Low FPS] - javascript

We have created a Unity-based AR car game that uses the user's head position to move the car left or right, similar to any simple mobile car racing game.
The game is web-based and uses Unity's WebGL output. For head tracking we are using MediaPipe.js. We have tested the experience on different OSes, and the FPS results are as follows.
| Model | Browser | FPS (range) | Benchmark |
| --- | --- | --- | --- |
| MediaPipe Face Detection | Chrome PC | 62 (60-74) | 60/100 |
| MediaPipe Face Detection | Chrome Android | 10 (6-11) | 30 |
| MediaPipe Face Detection | Chrome iOS | 12 (8-12) | 30 |
We are looking to reach the 30 fps benchmark on mobile devices using MediaPipe. Any solutions or insights on improving performance to make the game smoother are appreciated.
References:
MediaPipe Face Detection {Model: #mediapipe/face_detection#0.4.1 | model-config: short}
Tech stack: HTML5/CSS, JS (ES6)

Start with profiling. First you need to understand which operations are slowing down your application (see the timing sketch below). It could be:
Lots of CPU work per frame
Load on the GPU due to unoptimized, non-mobile shaders, a large number of vertices (100-150K+), or a large number of draw calls (if there are many different materials that are not batched)
Frequent memory allocation and deallocation, which triggers garbage-collection pauses
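A minimal per-frame timing sketch, assuming hypothetical detectFace() and renderGame() functions standing in for your MediaPipe inference call and the Unity WebGL render step, and a videoElement holding the camera feed:

```javascript
// Hypothetical stand-ins: detectFace() wraps the MediaPipe call,
// renderGame() wraps the Unity WebGL update.
const samples = [];

async function frameLoop() {
  const t0 = performance.now();
  await detectFace(videoElement);   // head-tracking inference
  const t1 = performance.now();
  renderGame();                     // game update + draw
  const t2 = performance.now();

  samples.push({ detect: t1 - t0, render: t2 - t1 });
  if (samples.length === 300) {     // report roughly every 5 s at 60 fps
    const avg = key => samples.reduce((s, x) => s + x[key], 0) / samples.length;
    console.log(`detect ${avg('detect').toFixed(1)} ms, render ${avg('render').toFixed(1)} ms`);
    samples.length = 0;
  }
  requestAnimationFrame(frameLoop);
}
requestAnimationFrame(frameLoop);
```

If the detect time dominates the frame budget on mobile, the bottleneck is the MediaPipe inference rather than the Unity render, and vice versa; that tells you which of the causes above to attack first.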

Related

Performance of video stream processing in Google Chrome

I am developing a web application for mobile browsers which processes the video stream from the back camera in real time. Based on a computed feature, we may store the current frame for subsequent operations.
The detailed workflow is as follows:
The input video stream in the browser has 4K resolution. We use three canvases: two have 4K resolution and the last one has a significantly lower resolution (approximately 400x600). We grab a frame from the video stream and draw it on the first 4K canvas. Next we draw this 4K canvas onto the smallest one. Then we get an array representation of the image from the small canvas and perform some calculation on it. Based on that calculation we decide whether this frame should be stored (we say we found the "best frame"). This best frame is stored at the original 4K resolution in the second 4K canvas for final processing, and we continue processing the next frames in the hope of finding a slightly better one.
In my work I ran into the problem that Google Chrome performs worse than Firefox on the same device, and dramatically worse than Safari on devices of the same class.
To demonstrate the problem I created a test HTML example. In it I use all the operations I consider critical: drawing a frame from the video stream onto the first 4K canvas, scaling that canvas down onto the smallest canvas, and obtaining the array buffer from the smallest canvas for the calculation. These three operations are executed for every frame, so their performance is the most critical for me.
Repository with example
Deployed example
Measured timers:
Execution time originalCanvas.drawFrame, ms - drawing the video frame onto the 4K canvas
Execution time scaledCanvas.drawFrame, ms - drawing the 4K canvas onto the small canvas
Execution time scaledCanvas.getBmpData, ms - obtaining the byte array from the small canvas
In testing, all big canvases have resolution 3840x2160 and all small ones have resolution 711x400.
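For reference, a minimal sketch of the three timed operations (canvas sizes taken from the question; the video element and its camera stream are assumed to be set up already):

```javascript
// Assumes `video` is a <video> element already playing the 4K camera stream.
const video = document.querySelector('video');

const originalCanvas = document.createElement('canvas');
originalCanvas.width = 3840;
originalCanvas.height = 2160;
const scaledCanvas = document.createElement('canvas');
scaledCanvas.width = 711;
scaledCanvas.height = 400;

const origCtx = originalCanvas.getContext('2d');
const scaledCtx = scaledCanvas.getContext('2d');

function processFrame() {
  // 1. originalCanvas.drawFrame: draw the current video frame at 4K.
  origCtx.drawImage(video, 0, 0, originalCanvas.width, originalCanvas.height);
  // 2. scaledCanvas.drawFrame: scale the 4K canvas down onto the small canvas.
  scaledCtx.drawImage(originalCanvas, 0, 0, scaledCanvas.width, scaledCanvas.height);
  // 3. scaledCanvas.getBmpData: read back the pixel bytes for the calculation.
  return scaledCtx.getImageData(0, 0, scaledCanvas.width, scaledCanvas.height).data;
}
```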
Now let's move on to the most important thing: why is there such a big inequality in performance between different browsers and between devices of the same class? Unfortunately I can't test the example in Chrome and Firefox on iPhones, because access to the camera is prohibited there. I would consider manipulating canvases a simple operation that shouldn't take this long. And why does Safari have such amazing performance compared with Chrome or even Firefox?
I hope my topic wasn't too boring. I would be glad to hear anything about my workflow or my conclusions. Thanks a lot!

Can GLSL be used instead of WebGL?

This may be a bit of a naive question, so please go easy on me. I was looking at shaders on shadertoy.com and I'm amazed at how small the GLSL code is for the 3D scenes. Digging deeper, I noticed that most of the shaders use a technique called ray marching.
This technique makes it possible to avoid using vertices/triangles altogether and just employ the pixel shader and some math to create some pretty complex scenes.
So I was wondering: why do 3D scenes usually use triangle meshes with WebGL instead of just pixel shaders? Can't we just render the entire scene with GLSL and pixel shaders (a.k.a. fragment shaders)?
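For context, the core of the ray-marching technique the question describes looks roughly like this toy fragment shader, shown here as a GLSL ES source string (the u_resolution uniform and all constants are illustrative):

```javascript
// Toy ray-marching fragment shader: march a ray from the camera, stepping by
// the scene's signed distance each iteration, until it hits a unit sphere.
const raymarchFrag = `
precision mediump float;
uniform vec2 u_resolution;   // canvas size in pixels (illustrative uniform)

// Signed distance from point p to a unit sphere at the origin.
float sdSphere(vec3 p) { return length(p) - 1.0; }

void main() {
  vec2 uv = (gl_FragCoord.xy * 2.0 - u_resolution) / u_resolution.y;
  vec3 ro = vec3(0.0, 0.0, -3.0);        // ray origin (camera)
  vec3 rd = normalize(vec3(uv, 1.5));    // ray direction through this pixel
  float t = 0.0;
  for (int i = 0; i < 64; i++) {         // the "march": step by the SDF value
    float d = sdSphere(ro + rd * t);
    if (d < 0.001) break;                // close enough: we hit the surface
    t += d;
  }
  float shade = t < 10.0 ? 1.0 - t * 0.2 : 0.0;  // darker with distance
  gl_FragColor = vec4(vec3(shade), 1.0);
}`;
```

The whole scene lives in that one function of the pixel coordinate, which is exactly why Shadertoy code is so compact, and also why every pixel pays the full cost of the march every frame.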
The simple answer is that the techniques on Shadertoy are probably 10, 100, or 1000 times slower than using vertices and triangles.
Compare this Shadertoy forest, which runs at 1 fps at best fullscreen on my laptop:
https://www.shadertoy.com/view/4ttSWf
to this Skyrim forest, which runs at 30 to 60 fps:
https://www.youtube.com/watch?v=PjqsYzBrP-M
Compare this Shadertoy city, which runs at 5 fps on my laptop:
https://www.shadertoy.com/view/XtsSWs
to this Cities: Skylines city, which runs at 60 fps:
https://www.youtube.com/watch?v=0gI2N10QyRA
Compare this Shadertoy Journey clone, which runs at 1 fps fullscreen on my laptop:
https://www.shadertoy.com/view/ldlcRf
to the actual Journey game on PS3, a machine with an arguably slower GPU than my laptop given that the PS3 came out in 2006, which nevertheless runs at 60 fps:
https://www.youtube.com/watch?v=61DZC-60x20#t=0m46s
There are plenty of other reasons. A typical 3D world uses gigabytes of data for textures, characters, animations, collisions, etc.; none of that is available in just GLSL. Another is that these shaders often use fractal techniques, so there's no easy way to actually design anything; instead you just search the math for something interesting. That would not be a good way to design game levels, for example. In other words, using vertex data makes things far more flexible and editable.
Compare the Journey examples above: the Shadertoy example is a single scene, whereas the game is a vast designed world with buildings and ruins and puzzles, etc.
There's a reason it's called ShaderTOY. It's meant as a fun challenge: given a single function whose only input is which pixel is currently being drawn, write code to draw something. The images people have managed to draw under that constraint are amazing!
But these aren't generally the techniques used to write real apps. If you want your app to run fast and be flexible, you use the more traditional techniques of vertices and triangles, the techniques used by GTA5, Red Dead Redemption 2, Call of Duty, Apex Legends, Fortnite, etc.

How to detect slow GPU on mobile device with three.js?

I've found that my game is extremely slow with shadows enabled on old mobile devices (Samsung Galaxy S4, iPhone 5). When I turn shadows off, performance improves greatly.
Does anyone know how to detect a slow GPU so I can turn off shadows completely on slow devices, or how to improve shadow performance?
I've tried using different shadow.mapSize values on the lights and different shadowMap.type settings on the renderer, and it doesn't improve performance.
Some details:
I use a PerspectiveCamera and a WebGLRenderer with render size 1700x667.
Lights used: new THREE.AmbientLight(0xffffff, 0.7) and new THREE.SpotLight(0xffffff, 0.4, 4000, 100)
Materials used: MeshPhongMaterial
Options:
1. As Andrey pointed out, run a benchmark.
2. Try failIfMajorPerformanceCaveat: true when creating the WebGL context (see the sketch below).
3. Make a fingerprint: query all the various gl.getParameter stats related to GPU limits and build a fingerprint from them, then see whether certain fingerprints correlate with slow devices.
4. Try getting and using the WEBGL_debug_renderer_info extension's unmasked renderer/vendor strings (this is really just more data for #3; also shown below).
5. Like most PC games, have an options screen that lets users choose which graphics features to use.
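A minimal sketch combining options 2 and 4, where disableShadows() is a hypothetical hook into your game settings and the Adreno check is just an example heuristic:

```javascript
const canvas = document.createElement('canvas');
// Option 2: refuse a context that would run with a major performance caveat
// (e.g. a software rasterizer).
const gl = canvas.getContext('webgl', { failIfMajorPerformanceCaveat: true });

if (!gl) {
  disableShadows(); // hypothetical game setting; no hardware-accelerated WebGL
} else {
  // Option 4: read the unmasked GPU strings (the extension may be unavailable).
  const info = gl.getExtension('WEBGL_debug_renderer_info');
  if (info) {
    const vendor = gl.getParameter(info.UNMASKED_VENDOR_WEBGL);
    const renderer = gl.getParameter(info.UNMASKED_RENDERER_WEBGL);
    console.log(vendor, renderer); // e.g. "Qualcomm", "Adreno (TM) 305"
    if (/Adreno \(TM\) 3/.test(renderer)) disableShadows(); // example heuristic
  }
}
```

Note that any hard-coded list of slow GPUs goes stale quickly, which is why the options screen in #5 remains the most robust fallback.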

Paper.js/Canvas performance and simple Raster animations

I am trying to create a tiny Guitar Hero game using Paper.js as an experiment.
Here is a live and testable version of my code (wait for a second):
http://codepen.io/anon/pen/GgggwK
I have an array of delays, e.g. intervalArray = [200, 300, 500, 100, 200], and I use that array to fire a function that pushes a raster image into a group.
There are a total of 5 groups (for 5 guitar chords) which are animated with view.onFrame, so their position.y changes at a specified dropSpeed. Hence whatever I push into those groups is animated (flowing) down the canvas.
There are also 5 circles, and the images/notes flowing down overlap the circles at some point. If the user clicks a circle at the right time (when a note overlaps it), he gets some points.
I am recycling the images when they reach the end of the canvas, so I don't end up with too many objects eating up memory.
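A hedged sketch of the recycling idea described above (noteGroups and dropSpeed are illustrative names, not taken from the actual pen):

```javascript
// Paper.js onFrame handler: instead of destroying off-screen rasters and
// allocating new ones, move them back above the top of the view.
view.onFrame = function (event) {
  for (const group of noteGroups) {        // illustrative: the 5 chord groups
    for (const raster of group.children) {
      raster.position.y += dropSpeed;
      if (raster.position.y > view.size.height + raster.bounds.height) {
        raster.position.y = -raster.bounds.height; // recycle to the top
      }
    }
  }
};
```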
The thing is, I was expecting very, very fast performance from this. I am using raster images, which are supposed to be much faster to render than vectors, I am recycling the images, and I use few extra items on the canvas, but on mobile browsers I am still having some serious performance issues.
Even on my iMac I was expecting this to run at the full 60 fps frame rate that requestAnimationFrame (which is what view.onFrame uses internally) allows, but sometimes the frame rate fluctuates there as well.
I have tested this on:
Galaxy S3, stock and Chrome browsers (laggy at some points when animating; the framerate freezes for 5-6 frames every 35 frames)
Google Nexus 5, stock and Chrome browsers (works somewhat better; freezes for 5-7 frames and continues)
iPhone 4, Safari browser (very sluggish)
iMac 2011, 8 GB RAM, Core 2 Duo processor (fairly good framerate, sometimes fluctuates)

Is it possible to create a polarized 3D website?

Is it possible to create a website which would display in 3D, similar to 3D games or movies? What if I were to modulate the website using some sort of CSS or WebGL technique?
Something I've found quite interesting is adjusting the angle of an image, e.g.:
Polarized 3D would require APIs that don't exist (yet). You can, however, play with anaglyph techniques.
I made this 3D spinning cube (requires a browser with 3D transforms) that works with red/cyan glasses: http://css3.bradshawenterprises.com/demos/3d.php
It kinda works!
Making a 3D website is possible, but hard, using red/cyan anaglyph glasses. Getting 3D with active-shutter glasses is perhaps theoretically possible, but almost certainly unfeasible without huge timing issues. Polarized glasses are impossible without a projector with a split-image lens and polarized filters (or two projectors).
The latest stable version of Chrome (version 9) now has built-in support for WebGL. Some famous examples are the Aquarium, Jellyfish, and even a virtual globe.
