So I'm trying to create an online game using Babylon.js but have run into a problem that's got me a little stumped, so I'm hoping someone here is willing to help me out. Please bear with me on this one; I'm a complete newbie with Babylon, as I've only ever worked with Three.js. Right now my game consists of a scene comprising multiple meshes, with multiple users represented as avatars (created from basic circle geometry for the moment) loaded into an environment. What I want to do is highlight the outline of these avatars ONLY when they are occluded by another object, meaning that when they are not occluded they look normal with no highlight, but when behind an object their highlighted silhouette can be seen by others (including yourself, as you can see your own avatar). This is very akin to effects used in many other video games (see the example below).
Example of Effect
Thus far, based on some googling and forum browsing (Babylonjs outline through walls & https://forum.babylonjs.com/t/highlight-through-objects/8002/4), I've figured out how to highlight the outline of objects using BABYLON.HighlightLayer, and I know that I can render objects above others via rendering groups, but I can't seem to figure out how to use them in conjunction to create the effect I want. The best I've managed is to get the highlighted avatar to render above everything, but I need just the silhouette, not the entire mesh. I'm also constrained by the fact that my scene has many meshes in it that are loaded dynamically, and I'm trying to keep things as optimal as possible; I can't afford very computationally expensive procedures.
Does anybody know of the best way to approach this? I would greatly appreciate any advice or assistance you can provide. Thanks!
So I asked the same question on the Babylon forums, which helped me find a solution. All credit goes to the guys who helped me out over there, but in case someone else comes across this question seeking an answer, here is a link to that forum question: https://forum.babylonjs.com/t/showing-highlighted-silhouette-of-mesh-only-when-it-is-occluded/27783/7
Edit:
OK, I thought I'd include the two possible solutions here properly, as well as their Babylon playgrounds. All credit goes to roland & evgeni_popov, who came up with these solutions on the forum linked above.
The first solution is easier to implement but slightly less performant than the second solution.
Clone Solution: https://playground.babylonjs.com/#JXYGLT%235
// roland#babylonjs.xyz, 2022
const createScene = function () {
    const scene = new BABYLON.Scene(engine);

    const camera = new BABYLON.ArcRotateCamera('camera', -Math.PI / 2, Math.PI / 2, 20, new BABYLON.Vector3(0, 0, 0), scene);
    camera.attachControl(canvas, true);

    const light = new BABYLON.HemisphericLight("light", new BABYLON.Vector3(0, 1, 0), scene);
    light.intensity = 0.7;

    const wall = BABYLON.MeshBuilder.CreateBox('wall', { width: 5, height: 5, depth: 0.5 }, scene);
    wall.position.y = 1;
    wall.position.z = -2;

    const sphere = BABYLON.MeshBuilder.CreateSphere('sphere', { diameter: 2, segments: 32 }, scene);
    sphere.position.y = 1;

    // The clone is what gets highlighted; its material never writes color or
    // depth, so only the highlight layer's silhouette is visible.
    const sphereClone = sphere.clone('sphereClone');
    sphereClone.setEnabled(false);

    const matc = new BABYLON.StandardMaterial("matc", scene);
    matc.depthFunction = BABYLON.Constants.ALWAYS;
    matc.disableColorWrite = true;
    matc.disableDepthWrite = true;
    sphereClone.material = matc;

    // Use an occlusion query so we know when the real sphere is hidden.
    sphere.occlusionQueryAlgorithmType = BABYLON.AbstractMesh.OCCLUSION_ALGORITHM_TYPE_ACCURATE;
    sphere.occlusionType = BABYLON.AbstractMesh.OCCLUSION_TYPE_STRICT;

    const hl = new BABYLON.HighlightLayer('hl1', scene, { camera: camera });
    hl.addMesh(sphereClone, BABYLON.Color3.Green());
    hl.addExcludedMesh(wall);

    let t = 0;
    scene.onBeforeRenderObservable.add(() => {
        sphere.position.x = 10 * Math.cos(t);
        sphere.position.z = 100 + 104 * Math.sin(t);

        // Only show the highlighted clone while the real sphere is occluded.
        if (sphere.isOccluded) {
            sphereClone.setEnabled(true);
            sphereClone.position.copyFrom(sphere.position);
        } else {
            sphereClone.setEnabled(false);
        }
        t += 0.03;
    });

    return scene;
};
This second solution is slightly more performant than the one above, as you don't need a clone, but it involves overriding the AbstractMesh._checkOcclusionQuery function (the function that updates the isOccluded property for meshes) so that the mesh is always rendered even when occluded. There's no overhead if you are using occlusion queries only for the purpose of drawing silhouettes; however, if you are also using them to avoid drawing occluded meshes, then there is an overhead, because the meshes will be drawn even when occluded. In that case you're probably best off going with the first solution.
Non-Clone solution: https://playground.babylonjs.com/#JXYGLT#14
// roland#babylonjs.xyz, 2022
const createScene = function () {
    const scene = new BABYLON.Scene(engine);

    const camera = new BABYLON.ArcRotateCamera('camera', -Math.PI / 2, Math.PI / 2, 20, new BABYLON.Vector3(0, 0, 0), scene);
    camera.attachControl(canvas, true);

    const light = new BABYLON.HemisphericLight("light", new BABYLON.Vector3(0, 1, 0), scene);
    light.intensity = 0.7;

    const wall = BABYLON.MeshBuilder.CreateBox('wall', { width: 5, height: 5, depth: 0.5 }, scene);
    wall.position.y = 1;
    wall.position.z = -2;

    const sphere = BABYLON.MeshBuilder.CreateSphere('sphere', { diameter: 2, segments: 32 }, scene);
    sphere.position.y = 1;

    sphere.occlusionQueryAlgorithmType = BABYLON.AbstractMesh.OCCLUSION_ALGORITHM_TYPE_ACCURATE;
    sphere.occlusionType = BABYLON.AbstractMesh.OCCLUSION_TYPE_STRICT;

    const mats = new BABYLON.StandardMaterial("mats", scene);
    sphere.material = mats;

    const hl = new BABYLON.HighlightLayer('hl1', scene, { camera: camera });
    hl.addExcludedMesh(wall);

    let t = 0;

    // Override _checkOcclusionQuery so the mesh is still rendered while
    // occluded (the query still runs, but the early-out is disabled).
    // Restore the original implementation when the scene is disposed.
    const cur = BABYLON.AbstractMesh.prototype._checkOcclusionQuery;
    scene.onDisposeObservable.add(() => {
        BABYLON.AbstractMesh.prototype._checkOcclusionQuery = cur;
    });
    BABYLON.AbstractMesh.prototype._checkOcclusionQuery = function () {
        cur.apply(this);
        return false;
    };

    scene.onBeforeRenderObservable.add(() => {
        sphere.position.x = 10 * Math.cos(t);
        sphere.position.z = 100 + 104 * Math.sin(t);

        // While occluded, highlight the sphere itself and stop it from
        // writing color, so only the silhouette shows through the wall.
        if (sphere.isOccluded) {
            hl.addMesh(sphere, BABYLON.Color3.Green());
            mats.depthFunction = BABYLON.Constants.ALWAYS;
            mats.disableColorWrite = true;
        } else {
            hl.removeMesh(sphere);
            mats.depthFunction = BABYLON.Constants.LESS;
            mats.disableColorWrite = false;
        }
        t += 0.03;
    });

    return scene;
};
I have tried a lot of ways to get around this before asking, and now I really have no clue how to accomplish object picking with the GPU on a glTF-loaded file, so I'm hoping for any help I can get.
I've loaded a huge glTF file with a lot of little objects in it. Due to the object count, it's not possible to achieve a good FPS if I just add them to the scene, so I've managed to reach 60fps by merging the glTF's children into chunks. But when I try to implement the webgl_interactive_cubes_gpu example, it doesn't seem to work for me: I always get the same object when I'm clicking.
To debug, I have tried rendering the pickingScene, and everything seems to be in place graphically speaking, but when it comes to picking it doesn't work as I expected, unless I'm doing something wrong.
Raycast picking is not a suitable option for me, as there are a lot of objects (55k), and adding/rendering them individually would kill the FPS.
Below is the code once the gltf is loaded:
var child = gltf.scene.children[i];
var childGeomCopy = child.geometry.clone();
childGeomCopy.translate(geomPosition.x, geomPosition.y, geomPosition.z);
childGeomCopy.scale(child.scale.x * Scalar, child.scale.y * Scalar, child.scale.z * Scalar);
childGeomCopy.computeBoundingBox();
childGeomCopy.computeBoundingSphere();
childGeomCopy.applyMatrix(new THREE.Matrix4());
geometriesPicking.push(childGeomCopy);

var individualObj = new THREE.Mesh(childGeomCopy, IndividualObjMat);
individualObj.name = "individual_" + child.name;
pickingData[childCounter] = {
    object: individualObj,
    position: individualObj.position.clone(),
    rotation: individualObj.rotation.clone(),
    scale: individualObj.scale.clone()
};
childCounter++;
Edit:
gltf.scene.traverse(function (child) {
    //console.log(child.type);
    if (child.isMesh) {
        let geometry = child.geometry.clone();
        let position = new THREE.Vector3();
        position.x = child.position.x;
        position.y = child.position.y;
        position.z = child.position.z;
        let rotation = new THREE.Euler();
        rotation.x = child.rotation.x;
        rotation.y = child.rotation.y;
        rotation.z = child.rotation.z;
        let scale = new THREE.Vector3();
        scale.x = child.scale.x;
        scale.y = child.scale.y;
        scale.z = child.scale.z;
        quaternion.setFromEuler(rotation);
        matrix.compose(position.multiplyScalar(Scalar), quaternion, scale.multiplyScalar(Scalar));
        geometry.applyMatrix(matrix);
        applyVertexColors(geometry, color.setHex(Math.random() * 0xffffff));
        geometriesDrawn.push(geometry);
        geometry = geometry.clone();
        applyVertexColors(geometry, color.setHex(childCounter));
        geometriesPicking.push(geometry);
        pickingData[childCounter] = {
            object: new THREE.Mesh(geometry.clone(), new THREE.MeshBasicMaterial({ color: 0xffff00, blending: THREE.AdditiveBlending, transparent: true, opacity: 0.8 })),
            id: childCounter,
            position: position,
            rotation: rotation,
            scale: scale
        };
        childCounter++;
        //console.log("%c [childCounter] :", "", childCounter);
    }
});
...
var pickingGeom = THREE.BufferGeometryUtils.mergeBufferGeometries(geometriesPicking);
pickingGeom.rotateX(THREE.Math.degToRad(90));
pickingScene.add(new THREE.Mesh(pickingGeom, pickingMaterial));
Then in my MouseUp handler I call pick(mouse) and pass in the mouse information:
function pick(mouse) {
    camera.setViewOffset(renderer.domElement.width, renderer.domElement.height, mouse.x * window.devicePixelRatio | 0, mouse.y * window.devicePixelRatio | 0, 1, 1);
    renderer.setRenderTarget(pickingTexture);
    renderer.render(pickingScene, camera);
    camera.clearViewOffset();

    var pixelBuffer = new Uint8Array(4);
    renderer.readRenderTargetPixels(pickingTexture, 0, 0, 1, 1, pixelBuffer);

    var id = (pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | (pixelBuffer[2]);
    var data = pickingData[id];
    if (data) {
        console.log(data.object.name, ":", data.position); // Always return the same object
    }
}
I have started working with PixiJS to develop a simple game. I am trying to rotate a sprite when one button is clicked, and then allow the user to stop the rotation with another button click.
What I am not able to achieve is determining how many "cycles" the rotation has done, for example whether the image did a full rotation 3 or 4 times, and, from its stopping position, how much more rotation is needed for another full cycle. Is there something in place to easily retrieve this?
The code I have so far is quite basic and simple:
initGameLayout() {
    const top = new PIXI.Graphics();
    top.beginFill(0x2185c7);
    top.drawRect(0, 0, this.app.screen.width, this.margin);

    const headerStyle = new PIXI.TextStyle({
        fontSize: 24,
        fontStyle: 'italic',
        fontWeight: 'bold',
    });
    const headerText = new PIXI.Text('', headerStyle);
    headerText.x = Math.round((top.width - headerText.width) / 2);
    headerText.y = Math.round((this.margin - headerText.height) / 2);
    top.addChild(headerText);

    const spinButton = new PIXI.Graphics();
    spinButton.beginFill(0x2185c7);
    spinButton.drawRect(0, 0, this.app.screen.width, this.margin);
    spinButton.width = 150;
    spinButton.height = 100;
    spinButton.x = 620;
    spinButton.y = 500;
    spinButton.buttonMode = true;
    spinButton.interactive = true;
    spinButton.on('pointerdown', this.spinWheel);

    const spinButton2 = new PIXI.Graphics();
    spinButton2.beginFill(0x2185c3);
    spinButton2.drawRect(0, 0, this.app.screen.width, this.margin);
    spinButton2.width = 150;
    spinButton2.height = 100;
    spinButton2.x = 420;
    spinButton2.y = 500;
    spinButton2.buttonMode = true;
    spinButton2.interactive = true;
    spinButton2.on('pointerdown', this.stopWheel);

    this.bunny = PIXI.Sprite.from('https://pixijs.io/examples-v4/examples/assets/bunny.png');
    this.bunny.width = 50;
    this.bunny.height = 50;
    this.bunny.anchor.set(0.5);
    this.bunny.x = this.app.screen.width / 2;
    this.bunny.y = this.app.screen.height / 2;
    this.bunny.rotation += 0.1; // the sprite property is `rotation` (in radians)

    this.app.stage.addChild(top);
    this.app.stage.addChild(spinButton);
    this.app.stage.addChild(spinButton2);
    this.app.stage.addChild(this.bunny);
}

spinWheel() {
    if (!this.running) {
        this.running = true;
        this.app.ticker.add((delta: any) => {
            this.bunny.rotation += 0.1;
        });
    } else {
        this.running = false;
        this.bunny.rotation -= -0.1;
    }
}

stopWheel() {
    this.bunny.rotation -= -0.1;
    this.running = false;
}
Appreciate any help anyone could give on the above issue
-Jes
The rotation member of a sprite is its rotation measured in radians. There are 2*Math.PI radians in a full circle. You can use this information to calculate the desired values:
When the sprite is first clicked, store originalRotation = bunny.rotation;
When the sprite is clicked again, calculate angleRotated = Math.abs(bunny.rotation - originalRotation);
Then numCycles = Math.floor(angleRotated / (2*Math.PI));
And radiansUntilNextCycle = 2*Math.PI - (angleRotated % (2*Math.PI));
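For example, here is a minimal sketch of the two handlers, assuming a bunny sprite and a running flag like in your code (the other names are just for illustration):

let originalRotation = 0;

function spinWheel() {
    if (!running) {
        running = true;
        originalRotation = bunny.rotation; // remember where the spin started
    }
}

function stopWheel() {
    running = false;
    // Total angle travelled since the spin started, in radians.
    const angleRotated = Math.abs(bunny.rotation - originalRotation);
    // Completed full turns.
    const numCycles = Math.floor(angleRotated / (2 * Math.PI));
    // How much further the sprite must turn to complete the current turn.
    const radiansUntilNextCycle = 2 * Math.PI - (angleRotated % (2 * Math.PI));
    console.log(numCycles, radiansUntilNextCycle);
}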
If you are more familiar with degrees, you can use those instead. Swap:
bunny.rotation with bunny.angle
2*Math.PI with 360
I'm assuming by "cycle" you mean a single rotation of 360 degrees. However, your question is difficult to understand because each time you use the word "rotation" it seems to have a different meaning. So it doesn't quite make sense.
It may also help to explain why you want these values; what will you do with them?
And pixiplayground.com is a great place to share live, functional code.
I have made a Three.js project where I dynamically create a ground from Perlin noise. Here is the code:
createGround() {
    const resolutionX = 100
    const resolutionY = 100
    const actualResolutionX = resolutionX + 1 // plane adds one vertex
    const actualResolutionY = resolutionY + 1

    const geometryPlane = new THREE.PlaneGeometry(this.sizeX, this.sizeY, resolutionX, resolutionY)
    const noise = perlin.generatePerlinNoise(actualResolutionX, actualResolutionY)

    // Displace each vertex by the corresponding noise sample
    let i = 0
    for (let x = 0; x < actualResolutionX; x++) {
        for (let y = 0; y < actualResolutionY; y++) {
            let h = noise[i]
            geometryPlane.vertices[i].z = h
            i++
        }
    }
    geometryPlane.verticesNeedUpdate = true
    geometryPlane.computeFaceNormals()

    const materialPlane = new THREE.MeshStandardMaterial({
        color: 0xffff00,
        side: THREE.FrontSide,
        roughness: 1,
        metalness: 0,
    })

    const ground = new THREE.Mesh(geometryPlane, materialPlane)
    geometryPlane.computeVertexNormals()
    ground.name = GROUND_NAME
    ground.receiveShadow = true
    scene.add(ground)
}
I am happy with the geometry that is generated, but the problem is the shadows look really inaccurate.
Here is my code for the light:
const light = new THREE.DirectionalLight(
    'white',
    1,
)
light.shadow.mapSize.width = 512 * 12 // default is 512; doesn't seem to do anything
light.shadow.mapSize.height = 512 * 12 // default is 512
light.castShadow = true
light.position.set(100, 100, 100)
scene.add(light)
light.shadowCameraVisible = true
My question is: how can I make the ground's shadow look more accurate and defined, and show off the ground's geometry?
Rest of the code can be found here: https://github.com/Waltari10/workwork
This shadow doesn't look so bad given the fact that you didn't set up any lights :).
I suppose this render works with some default lighting, or lighting that you added before thinking about shadows.
Try adding a directional light simulating the Sun's direction and color and see if it helps, or tell us more about your lighting setup.
If you have any lights that already render a shadow map, check the map's resolution; you may need to make it bigger.
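If it helps, here's a rough sketch of a sun-like directional light with shadows enabled; the shadow camera frustum numbers are assumptions you'd tune to your ground's size:

// Renderer must have shadow maps enabled.
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap;

const sun = new THREE.DirectionalLight(0xfff4e5, 1); // warm, sun-like color
sun.position.set(100, 100, 100);
sun.castShadow = true;

// A bigger shadow map gives sharper, more defined shadows.
sun.shadow.mapSize.width = 2048;
sun.shadow.mapSize.height = 2048;

// The orthographic shadow camera should tightly cover the ground;
// a frustum that is too large wastes shadow-map resolution.
sun.shadow.camera.left = -100;
sun.shadow.camera.right = 100;
sun.shadow.camera.top = 100;
sun.shadow.camera.bottom = -100;
sun.shadow.camera.near = 1;
sun.shadow.camera.far = 500;

scene.add(sun);

// Visualize the shadow camera while tuning (the modern replacement for the
// deprecated light.shadowCameraVisible flag).
scene.add(new THREE.CameraHelper(sun.shadow.camera));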
I'm trying to make a WebVR environment using Three.js. I exported some scenes from Cinema 4D and loaded them in with Three.js's ColladaLoader. Now I wanted to try this environment in my Google Cardboard, but of course I needed the split screen for both my eyes.
I used the npm module three-stereo-effect to achieve the VR effect, but it's overlapping when used in a Cardboard. I looked it up and saw that a lot of WebVR examples had a rounded rectangle for each eye (example of what I mean), not a straight rectangle, so I thought I needed to find matrices to fix that (when looking at the examples in this repository). But then I downloaded a VR tunnel racing game and saw that they used straight rectangles and the vision was fine.
Now I'm thinking the eyeSeparation of my stereo effect is incorrect. I saw someone use the eyeSeparation property on the StereoEffect module and tried that out, but I don't think I should just be guessing a value...
Am I on the right track here to find a solution? Or am I looking in the totally wrong direction as to why my 3D scene does not give good vision when used with a Cardboard?
This is the code I'm experimenting with.
import {sets} from './data/';
import * as THREE from 'three';
import threeOrbitControls from 'three-orbit-controls';
import ColladaLoader from 'three-collada-loader';
import threeStereoEffect from 'three-stereo-effect';
import {BufferLoader} from './modules/sound';
import {SpawnObject} from './modules/render';
const OrbitControls = threeOrbitControls(THREE);
const StereoEffect = threeStereoEffect(THREE);
let scene, camera, renderer;
let audioCtx, bufferLoader;
const notes = [];
let stereoEffect = null;
const init = () => {
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    audioCtx = new AudioContext();
    bufferLoader = new BufferLoader(audioCtx);
    bufferLoader.load(sets.drums)
        .then(data => spawnObject(data));
    initEnvironment();
};

const spawnObject = data => {
    for (let i = 0; i < 5; i++) {
        const bol = new SpawnObject(`object.dae`, audioCtx, data[0], scene, false);
        notes.push(bol);
    }
    // console.log(notes);
};

const initEnvironment = () => {
    scene = new THREE.Scene();
    camera = new THREE.PerspectiveCamera(
        45, window.innerWidth / window.innerHeight,
        1, 10000
    );
    renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);

    stereoEffect = new StereoEffect(renderer);
    // stereoEffect.eyeSeparation = 1;
    stereoEffect.setSize(window.innerWidth, window.innerHeight);
    console.log(stereoEffect);

    document.querySelector(`main`).appendChild(renderer.domElement);
    camera.position.set(0, 0, 2);
    camera.lookAt(scene.position);
    new OrbitControls(camera);

    //LIGHTS
    const light = new THREE.PointLight(0xFFFFFF);
    light.position.set(0, 0, 9);
    light.castShadow = true;
    light.shadow.mapSize.width = 1024;
    light.shadow.mapSize.height = 1024;
    light.shadow.camera.near = 10;
    light.shadow.camera.far = 100;
    scene.add(light);

    // const hemiLight = new THREE.HemisphereLight(0xffffff, 0xffffff, 0.6);
    // hemiLight.color.setHSL(0.6, 1, 0.6);
    // hemiLight.groundColor.setHSL(0.095, 1, 0.75);
    // hemiLight.position.set(0, 500, 0);
    // scene.add(hemiLight);
    //
    // const dirLight = new THREE.DirectionalLight(0xffffff, 1);
    // dirLight.color.setHSL(0.1, 1, 0.95);
    // dirLight.position.set(- 1, 1.75, 1);
    // dirLight.position.multiplyScalar(50);
    // scene.add(dirLight);
    // dirLight.castShadow = true;

    //FLOOR
    const matFloor = new THREE.MeshPhongMaterial();
    const geoFloor = new THREE.BoxGeometry(2000, 1, 2000);
    const mshFloor = new THREE.Mesh(geoFloor, matFloor);
    matFloor.color.set(0x212E39);
    mshFloor.receiveShadow = true;
    mshFloor.position.set(0, - 1, 0);
    scene.add(mshFloor);

    //ENVIRONMENT
    const loader = new ColladaLoader();
    loader.load(`../assets/environment.dae`, collada => {
        collada.scene.traverse(child => {
            child.castShadow = true;
            child.receiveShadow = true;
        });
        scene.add(collada.scene);
        render();
    });
};

const render = () => {
    // stereoEffect.render(scene, camera);
    // effect.render(scene, camera);
    renderer.shadowMap.enabled = true;
    renderer.shadowMap.type = THREE.PCFSoftShadowMap;
    renderer.gammaInput = true;
    renderer.gammaOutput = true;
    renderer.setClearColor(0xdddddd, 1);
    stereoEffect.render(scene, camera);
    requestAnimationFrame(render);
};
init();
From the PDF "Scalable Multi-view Stereo Camera Array for Real World Real-Time Image Capture and Three-Dimensional Displays":
2.1.1 Binocular Disparity
Binocular disparity is the positional difference between the two retinal projections of a given point in space. This positional difference results from the fact that the two eyes are laterally separated and therefore see the world from the two slightly different vantage points. For the average person the mean lateral separation also known as the interocular is 65mm. Most of the population has an eye separation within ±10mm of the average interocular.
It would seem that, with a little testing with friends with a variety of face shapes, you will find a happy average for the eyeSeparation value for the device and the people using it. I would then also provide a settings panel which offers a few different eyeSeparation values for users to choose from if they find disparity or overlap in their stereo experience. Normally I think this would be done with a keyboard connected to the same system to dial in the stereo alignment, but you're in Cardboard, so the user may need trial and error to get it right.
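A small sketch of the idea, assuming the stereoEffect from your snippet; the 0.064 starting value is just the average 65mm interocular expressed in meters, so scale it to whatever units your scene uses:

// Start from the average human interocular distance (~65 mm).
let eyeSeparation = 0.064;
stereoEffect.eyeSeparation = eyeSeparation;

// Let the user nudge the separation until the two views fuse comfortably.
const adjustEyeSeparation = step => {
    eyeSeparation = Math.max(0, eyeSeparation + step);
    stereoEffect.eyeSeparation = eyeSeparation;
    console.log(`eyeSeparation: ${eyeSeparation.toFixed(3)}`);
};

// Example bindings; on Cardboard you'd expose on-screen controls instead.
window.addEventListener('keydown', e => {
    if (e.key === '+') adjustEyeSeparation(0.005);
    if (e.key === '-') adjustEyeSeparation(-0.005);
});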
My display has a resolution of 7680x4320 pixels. I want to display up to 4 million differently colored squares, and I want to change the number of squares with a slider. I currently have two versions. One with canvas fillRect, which looks something like this:
var c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
for (var i = 0; i < num_squares; i++) {
    ctx.fillStyle = someColor;
    ctx.fillRect(pos_x, pos_y, square_width, square_height); // fillRect takes (x, y, width, height)
    // set pos_x and pos_y for next square
}
And one with webGL and three.js. Same loop, but I create a box geometry and a mesh for every square:
var geometry = new THREE.BoxGeometry(width_height, width_height, 0);
for (var i = 0; i < num_squares; i++) {
    var material = new THREE.MeshLambertMaterial({ color: Math.random() * 0xffffff });
    material.emissive = new THREE.Color(Math.random(), Math.random(), Math.random());
    var object = new THREE.Mesh(geometry, material);
}
They both work quite well for a few thousand squares. The first version can do up to one million squares, but anything over a million is just awfully slow. I want to update the color and the number of squares dynamically.
Does anyone have tips on how to be more efficient with three.js/WebGL/canvas?
EDIT1: Second version: this is what I do at the beginning and whenever the slider changes:
// Remove all objects from scene
var obj, i;
for (i = scene.children.length - 1; i >= 0; i--) {
    obj = scene.children[i];
    if (obj !== camera) {
        scene.remove(obj);
    }
}

// Fill scene with new objects
num_squares = gui_dat.squareNum;
var window_pixel = window.innerWidth * window.innerHeight;
var pixel_per_square = window_pixel / num_squares;
var width_height = Math.floor(Math.sqrt(pixel_per_square));
var geometry = new THREE.BoxGeometry(width_height, width_height, 0);
var pos_x = width_height / 2;
var pos_y = width_height / 2;
for (var i = 0; i < num_squares; i++) {
    //var object = new THREE.Mesh( geometry, );
    var material = new THREE.MeshLambertMaterial({ color: Math.random() * 0xffffff });
    material.emissive = new THREE.Color(Math.random(), Math.random(), Math.random());
    var object = new THREE.Mesh(geometry, material);
    object.position.x = pos_x;
    object.position.y = pos_y;
    pos_x += width_height;
    if (pos_x > window.innerWidth) {
        pos_x = width_height / 2;
        pos_y += width_height;
    }
    scene.add(object);
}
The fastest way to draw squares is to use the gl.POINTS primitive and then set gl_PointSize to the pixel size.
In three.js, gl.POINTS is wrapped inside the THREE.PointCloud object.
You'll have to create a geometry object with one position for each point and pass that to the PointCloud constructor.
Here is an example of THREE.PointCloud in action:
http://codepen.io/seanseansean/pen/EaBZEY
geometry = new THREE.Geometry();
for (i = 0; i < particleCount; i++) {
    var vertex = new THREE.Vector3();
    vertex.x = Math.random() * 2000 - 1000;
    vertex.y = Math.random() * 2000 - 1000;
    vertex.z = Math.random() * 2000 - 1000;
    geometry.vertices.push(vertex);
}
...
materials[i] = new THREE.PointCloudMaterial({ size: size });
particles = new THREE.PointCloud(geometry, materials[i]);
I didn't dig through all the code, but I set the particle count to 2M, and from my understanding 5 point clouds are generated, so 2M*5 = 10M particles, and I'm getting around 30fps.
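Note that THREE.Geometry and THREE.PointCloud come from the older three.js API used in that codepen. In recent three.js versions the same idea would be THREE.Points over a BufferGeometry, with one color per point stored as a vertex attribute; a sketch (names are illustrative):

const count = 4000000;
const positions = new Float32Array(count * 3);
const colors = new Float32Array(count * 3);
for (let i = 0; i < count; i++) {
    // Random position on the plane and a random RGB color per point.
    positions[i * 3] = Math.random() * 2000 - 1000;
    positions[i * 3 + 1] = Math.random() * 2000 - 1000;
    positions[i * 3 + 2] = 0;
    colors[i * 3] = Math.random();
    colors[i * 3 + 1] = Math.random();
    colors[i * 3 + 2] = Math.random();
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));

// All squares in a single draw call; size is the point size in pixels.
const material = new THREE.PointsMaterial({ size: 4, vertexColors: true });
scene.add(new THREE.Points(geometry, material));

Changing the slider then means refilling the arrays and setting geometry.attributes.color.needsUpdate = true, rather than rebuilding thousands of meshes.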
The highest number of individual points I've seen so far was with Potree.
http://potree.org/, https://github.com/potree
Try some of the demos; I was able to view 5 million points in 3D at 20-30fps. I believe this is also close to the current technological limit.
I didn't test Potree myself, so I can't say much about the tech, but there is a data converter and a viewer (three.js based), so you should only have to figure out how to convert your data.
Briefly, about your question:
The best way to handle large data is to group it in a quad-tree (2D) or oct-tree (3D). This lets you skip the parts that are too far from the camera or not visible at all.
On the other hand, programs don't like it when you make too many WebGL calls. Think of it like this: you want to create ~60 images each second, but every time you set some parameter on the GPU, the program must do some synchronization. Splitting the data means more setup work, so the tree must not be too detailed.
One last thing; someone said:
You'll probably want to pass an array of values as one of the shader uniforms
I don't suggest it; bad idea. Texture lookups are quite fast, but attributes are always faster. If we are talking about 4M points, you can't afford to read data from uniforms.
Sorry I can't help you with the code; I could do it without three.js, but I'm not a three.js expert :)
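To make the grouping idea concrete, here is a minimal sketch of the 2D case (my own illustration, not code from this thread): the points are bucketed into a coarse grid, one THREE.Points per cell, so three.js can frustum-cull whole cells via their bounding spheres. A real quad-tree just applies the same split recursively wherever points are dense.

const count = 4000000;
const material = new THREE.PointsMaterial({ size: 4 });

// Bucket the points into an 8x8 grid of chunks; keep the grid coarse so
// the number of draw calls stays low.
const CHUNKS = 8;
const buckets = Array.from({ length: CHUNKS * CHUNKS }, () => []);

for (let i = 0; i < count; i++) {
    const x = Math.random() * 2000 - 1000;
    const y = Math.random() * 2000 - 1000;
    const cx = Math.min(CHUNKS - 1, Math.floor((x + 1000) / (2000 / CHUNKS)));
    const cy = Math.min(CHUNKS - 1, Math.floor((y + 1000) / (2000 / CHUNKS)));
    buckets[cy * CHUNKS + cx].push(x, y, 0);
}

// One THREE.Points per cell; cells outside the camera frustum are skipped.
for (const bucket of buckets) {
    const geometry = new THREE.BufferGeometry();
    geometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array(bucket), 3));
    geometry.computeBoundingSphere(); // used for per-chunk frustum culling
    scene.add(new THREE.Points(geometry, material));
}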
I would recommend trying the Pixi framework (as mentioned in the comments above).
It has a WebGL renderer, and some benchmarks are very promising:
http://www.goodboydigital.com/pixijs/bunnymark_v3/
It can handle a lot of animated sprites.
If your app only displays the squares and doesn't animate them, and they are very simple sprites (only one color), then it would give even better performance than the demo linked above.