Pixel Mapping for Rendering DICOM Monochrome2 - javascript

Trying to render DICOM MONOCHROME2 onto an HTML5 canvas.
What is the correct pixel mapping from grayscale to canvas RGB?
Currently using the incorrect mapping of:
const ctx = canvas.getContext( '2d' )
const imageData = ctx.createImageData( 512, 512 )
const pixelData = getPixelData( dataSet )
let rgbaIdx = 0
let rgbIdx = 0
let pixelCount = 512 * 512
for ( let idx = 0; idx < pixelCount; idx++ ) {
imageData.data[ rgbaIdx ] = pixelData[ rgbIdx ]
imageData.data[ rgbaIdx + 1 ] = pixelData[ rgbIdx + 1 ]
imageData.data[ rgbaIdx + 2 ] = 0
imageData.data[ rgbaIdx + 3 ] = 255
rgbaIdx += 4
rgbIdx += 2
}
ctx.putImageData( imageData, 0, 0 )
Reading through open source libraries, it is not very clear how they do this. Could you please suggest a clear introduction of how to render?
Fig 1. incorrect mapping
Fig 2. correct mapping, dicom displayed in IrfanView

There are two problems here. First, your monochrome data has a higher resolution (i.e. value range) than can be shown in RGB, so you cannot just map the pixel data into the RGB data directly.
The value range depends on the Bits Stored tag - for a typical value of 12 the data range would be 4096. The simplest implementation could just downscale the number, in this case by 16.
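As a minimal sketch, the downscale factor can be derived from the Bits Stored value (assuming it has been read into a hypothetical bitsStored variable):
// 2^bitsStored levels squeezed into 256 display levels, e.g. 4096 / 256 = 16 for 12 bits
const scaleFactor = Math.pow( 2, bitsStored ) / 256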
The second problem with your code: to represent a monochrome value in RGB, you have to set all 3 color components to the same value:
let rgbaIdx = 0
let rgbIdx = 0
let pixelCount = 512 * 512
let scaleFactor = 16 // has to be calculated in real code
for ( let idx = 0; idx < pixelCount; idx++ ) {
// assume Little Endian
let pixelValue = pixelData[ rgbIdx ] + pixelData[ rgbIdx + 1 ] * 256
let displayValue = Math.round(pixelValue / scaleFactor)
imageData.data[ rgbaIdx ] = displayValue
imageData.data[ rgbaIdx + 1 ] = displayValue
imageData.data[ rgbaIdx + 2 ] = displayValue
imageData.data[ rgbaIdx + 3 ] = 255
rgbaIdx += 4
rgbIdx += 2
}
To get a better representation, you have to take the VOI LUT into account instead of just downscaling. In case you have the Window Center / Window Width tags defined, you can calculate the minimum and maximum values and get the scale factor from that range:
let minValue = windowCenter - windowWidth / 2
let maxValue = windowCenter + windowWidth / 2
let scaleFactor = (maxValue - minValue) / 256
...
let pixelValue = pixelData[ rgbIdx ] + pixelData[ rgbIdx + 1 ] * 256
let displayValue = Math.min( Math.max( Math.round( (pixelValue - minValue) / scaleFactor ), 0 ), 255 )
...
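Putting the pieces together, a sketch of the complete windowing loop (reusing the loop structure from above and clamping the result to the 0-255 range):
let minValue = windowCenter - windowWidth / 2
let maxValue = windowCenter + windowWidth / 2
let scaleFactor = (maxValue - minValue) / 256
let rgbaIdx = 0
let rgbIdx = 0
for ( let idx = 0; idx < pixelCount; idx++ ) {
  // assume Little Endian
  let pixelValue = pixelData[ rgbIdx ] + pixelData[ rgbIdx + 1 ] * 256
  let displayValue = Math.min( Math.max( Math.round( (pixelValue - minValue) / scaleFactor ), 0 ), 255 )
  imageData.data[ rgbaIdx ] = displayValue
  imageData.data[ rgbaIdx + 1 ] = displayValue
  imageData.data[ rgbaIdx + 2 ] = displayValue
  imageData.data[ rgbaIdx + 3 ] = 255
  rgbaIdx += 4
  rgbIdx += 2
}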
EDIT: As observed by @WilfRosenbaum: if you don't have a VOI LUT (as suggested by the empty values of WindowCenter and WindowWidth), you best calculate your own. To do this, you have to calculate the min/max values of your pixel data:
let minValue = 1 << 16
let maxValue = 0
let rgbIdx = 0
for ( let idx = 0; idx < pixelCount; idx++ ) {
let pixelValue = pixelData[ rgbIdx ] + pixelData[ rgbIdx + 1 ] * 256
minValue = Math.min(minValue, pixelValue)
maxValue = Math.max(maxValue, pixelValue)
rgbIdx += 2
}
let scaleFactor = (maxValue - minValue) / 256
and then use the same code as shown for the VOI LUT.
A few notes:
if you have a modality LUT, you have to apply it before the VOI LUT (see the sketch after these notes); CT images usually have one (RescaleSlope/RescaleIntercept), though this one only has an identity LUT, so you can ignore it
you can have more than one WindowCenter / WindowWidth value pair, or could have a VOI LUT sequence, which is also not considered here
the code is out of my head, so it may have bugs
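Applying the modality LUT mentioned in the notes is a per-pixel linear transform; a minimal sketch, assuming RescaleSlope (0028,1053) and RescaleIntercept (0028,1052) have been read from the dataset into hypothetical variables:
// hypothetical variable names; apply before any VOI LUT / windowing
let modalityValue = pixelValue * rescaleSlope + rescaleIntercept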

It turned out 4 main things needed to be done (found by reading through the fo-dicom source code):
Prepare Monochrome2 LUT
export const LutMonochrome2 = () => {
let lut = []
for ( let idx = 0, byt = 255; idx < 256; idx++, byt-- ) {
// r, g, b, a
lut.push( [byt, byt, byt, 0xff] )
}
return lut
}
Interpret pixel data as signed short
export const bytesToShortSigned = (bytes) => {
let byteA = bytes[ 1 ]
let byteB = bytes[ 0 ]
let pixelVal
const sign = byteA & (1 << 7);
pixelVal = (((byteA & 0xFF) << 8) | (byteB & 0xFF));
if (sign) {
pixelVal = 0xFFFF0000 | pixelVal; // fill in most significant bits with 1's
}
return pixelVal
}
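For comparison, the same little-endian signed 16-bit read can be done with a DataView; a minimal sketch, assuming pixelData is a Uint8Array over the raw pixel buffer and byteIdx is the even byte offset of a pixel:
const view = new DataView( pixelData.buffer, pixelData.byteOffset, pixelData.byteLength )
const pixelVal = view.getInt16( byteIdx, true ) // true = little endian; sign extension is handled for us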
Get the minimum and maximum pixel values, then compute WindowWidth to eventually map each pixel to the Monochrome2 color map
export const getMinMax = ( pixelData ) => {
let pixelCount = pixelData.length
let min = Infinity, max = -Infinity // start outside the data range so the first pixel initializes both
for ( let idx = 0; idx < pixelCount; idx += 2 ) {
let pixelVal = bytesToShortSigned( [
pixelData[idx],
pixelData[idx+1]
] )
if (pixelVal < min)
min = pixelVal
if (pixelVal > max)
max = pixelVal
}
return { min, max }
}
Finally draw
export const draw = ( { dataSet, canvas } ) => {
const monochrome2 = LutMonochrome2()
const ctx = canvas.getContext( '2d' )
const imageData = ctx.createImageData( 512, 512 )
const pixelData = getPixelData( dataSet )
let pixelCount = pixelData.length
let { min: minPixel, max: maxPixel } = getMinMax( pixelData )
let windowWidth = Math.abs( maxPixel - minPixel );
let windowCenter = ( maxPixel + minPixel ) / 2.0;
console.debug( `minPixel: ${minPixel} , maxPixel: ${maxPixel}` )
let rgbaIdx = 0
for ( let idx = 0; idx < pixelCount; idx += 2 ) {
let pixelVal = bytesToShortSigned( [
pixelData[idx],
pixelData[idx+1]
] )
let binIdx = Math.min( 255, Math.floor( (pixelVal - minPixel) / windowWidth * 256 ) ); // clamp so maxPixel maps to the last LUT entry
let displayVal = monochrome2[ binIdx ]
if ( displayVal == null )
displayVal = [ 0, 0, 0, 255]
imageData.data[ rgbaIdx ] = displayVal[0]
imageData.data[ rgbaIdx + 1 ] = displayVal[1]
imageData.data[ rgbaIdx + 2 ] = displayVal[2]
imageData.data[ rgbaIdx + 3 ] = displayVal[3]
rgbaIdx += 4
}
ctx.putImageData( imageData, 0, 0 )
}
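A hypothetical wiring of the above, assuming the DICOM file was already parsed into dataSet elsewhere (for example with a parser such as dicom-parser, and getPixelData( dataSet ) returning the raw pixel bytes):
const canvas = document.getElementById( 'dicomCanvas' ) // hypothetical element id
draw( { dataSet, canvas } )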

Related

Canvas Draws Points it should not draw

Consider the following snippet: Why are there visible points outside the rectangle?
Is the switching of the context color slower than the drawing of the rectangle?
const templateCanvas = document.getElementById( "template" );
const tctx = templateCanvas.getContext( "2d" );
tctx.fillStyle = "red";
tctx.fillRect( 300, 300, 200, 200 )
const canvas = document.getElementById( "canvas" );
const ctx = canvas.getContext( "2d" );
const max = {
x: 800,
y: 800
};
const sites = [];
const points = 10000;
for ( let i = 0; i < points; i++ ) sites.push( {
x: Math.floor( Math.random() * max.x ),
y: Math.floor( Math.random() * max.y )
} );
const c = ( alpha ) => 'rgba(255,0,0,' + alpha + ')';
const c2 = ( alpha ) => {
let colors = [
'rgba(78,9,12,' + alpha + ')',
'rgba(161,34,19,' + alpha + ')',
'rgba(171,95,44,' + alpha + ')',
'rgba(171,95,44,' + alpha + ')',
'rgba(252,160,67,' + alpha + ')'
]
return colors[ Math.round( Math.random() * colors.length ) ];
}
sites.forEach( p => {
let imgData = tctx.getImageData( p.x, p.y, 1, 1 ).data;
ctx.fillStyle = ( imgData[ 0 ] == 255 ) ? c2( 1 ) : c2( 0 );
ctx.fillRect( p.x, p.y, 2, 2 )
} );
<canvas id="canvas" width="800" height="800"></canvas>
<canvas id="template" width="800" height="800"></canvas>
I think what's happening is that your random color function sometimes returns an invalid color, because it's fetching from an undefined array element. That's caused by the use of Math.round() instead of Math.floor():
return colors[ Math.round( Math.random() * colors.length ) ];
Because of that, every once in a while a bad color expression will be used for the fill style, and that will be ignored by the canvas mechanism. Thus you get some dots outside the area covered by red pixels (the square).
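The fix is to use Math.floor(), which always yields a valid index in the range 0 to colors.length - 1:
return colors[ Math.floor( Math.random() * colors.length ) ];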

Programmatically determine best foreground color to be placed onto an image

I'm working on a node module that will return the color that will look best on a background image, which of course will have multiple colors.
Here's what I have so far:
'use strict';
var randomcolor = require('randomcolor');
var tinycolor = require('tinycolor2');
module.exports = function(colors, tries) {
var topColor, data = {};
if (typeof colors == 'string') { colors = [colors]; }
if (!tries) { tries = 10000; }
for (var t = 0; t < tries; t++) {
var score = 0, color = randomcolor(); //tinycolor.random();
for (var i = 0; i < colors.length; i++) {
score += tinycolor.readability(colors[i], color);
}
data[color] = (score / colors.length);
if (!topColor || data[color] > data[topColor]) {
topColor = color;
}
}
return tinycolor(topColor);
};
So the way it works is first I provide this script with the 6 most dominant colors in an image like this:
[ { r: 44, g: 65, b: 54 },
{ r: 187, g: 196, b: 182 },
{ r: 68, g: 106, b: 124 },
{ r: 126, g: 145, b: 137 },
{ r: 147, g: 176, b: 169 },
{ r: 73, g: 138, b: 176 } ]
and then it will generate 10,000 different random colors and then pick the one that has the best average contrast ratio with the 6 given colors.
The problem is that depending on which script I use to generate the random colors, I'll basically get the same results regardless of the image given.
With tinycolor2 I'll always end up with either a very dark gray (almost black) or a very light gray (almost white). And with randomcolor I'll either end up with a dark blue or a light peach color.
My script might not be the best way of going about this but does anybody have any ideas?
Thank you
Finding dominant hue.
The provided snippet shows an example of how to find a dominant colour. It works by breaking the image into its hue, saturation and luminance components.
The image reduction
To speed up the process the image is reduced to a smaller image (in this case 128 by 128 pixels). Part of the reduction process also trims some of the outside pixels from the image.
const IMAGE_WORK_SIZE = 128;
const ICOUNT = IMAGE_WORK_SIZE * IMAGE_WORK_SIZE;
if(event.type === "load"){
rImage = imageTools.createImage(IMAGE_WORK_SIZE, IMAGE_WORK_SIZE); // reducing image
c = rImage.ctx;
// This is where you can crop the image. In this example I only look at the center of the image
c.drawImage(this,-16,-16,IMAGE_WORK_SIZE + 32, IMAGE_WORK_SIZE + 32); // reduce image size
Find mean luminance
Once reduced I scan the pixels converting them to hsl values and get the mean luminance.
Note that luminance is a logarithmic scale so the mean is the square root of the sum of the squares divided by the count.
pixels = imageTools.getImageData(rImage).data;
l = 0;
for(i = 0; i < pixels.length; i += 4){
hsl = imageTools.rgb2hsl(pixels[i],pixels[i + 1],pixels[i + 2]);
l += hsl.l * hsl.l;
}
l = Math.sqrt(l/ICOUNT);
Hue histograms for luminance and saturation ranges.
The code can find the dominant colour in a range of saturation and luminance extents. In the example I only use one extent, but you can use as many as you wish. Only pixels that are inside the lum (luminance) and sat (saturation) ranges are used. I record a histogram of the hue for pixels that pass.
Example of hue ranges (one of)
hues = [{ // lum and sat have extent 0-100. The high test is not inclusive, hence high = 101 if you want the full range
lum : {
low :20, // low limit lum >= this.lum.low
high : 60, // high limit lum < this.lum.high
tot : 0, // sum of lum values
},
sat : { // all saturations from 0 to 100
low : 0,
high : 101,
tot : 0, // sum of sat
},
count : 0, // count of pixels that passed
histo : new Uint16Array(360), // hue histogram
}]
In the example I use the mean Luminance to automatically set the lum range.
hues[0].lum.low = l - 30;
hues[0].lum.high = l + 30;
Once the range is set I get the hue histogram for each range (one in this case)
for(i = 0; i < pixels.length; i += 4){
hsl = imageTools.rgb2hsl(pixels[i],pixels[i + 1],pixels[i + 2]);
for(j = 0; j < hues.length; j ++){
hr = hues[j]; // hue range
if(hsl.l >= hr.lum.low && hsl.l < hr.lum.high){
if(hsl.s >= hr.sat.low && hsl.s < hr.sat.high){
hr.histo[hsl.h] += 1;
hr.count += 1;
hr.lum.tot += hsl.l * hsl.l;
hr.sat.tot += hsl.s;
}
}
}
}
Weighted mean hue from hue histogram.
Then using the histogram I find the weighted mean hue for the range
// get weighted hue for image
// just to simplify code hue 0 and 1 (reds) can combine
for(j = 0; j < hues.length; j += 1){
hr = hues[j];
wHue = 0;
hueCount = 0;
hr.histo[1] += hr.histo[0];
for(i = 1; i < 360; i ++){
wHue += (i) * hr.histo[i];
hueCount += hr.histo[i];
}
h = Math.floor(wHue / hueCount);
s = Math.floor(hr.sat.tot / hr.count);
l = Math.floor(Math.sqrt(hr.lum.tot / hr.count));
hr.rgb = imageTools.hsl2rgb(h,s,l);
hr.rgba = imageTools.hex2RGBA(imageTools.rgba2Hex4(hr.rgb));
}
And that is about it. The rest is just display and stuff. The above code requires the imageTools interface (provided) that has tools for manipulating images.
The ugly complement
What you do with the colour/s found is up to you. If you want the complementary colour just convert the rgb to hsl imageTools.rgb2hsl and rotate the hue 180 deg, then convert back to rgb.
var hsl = imageTools.rgb2hsl(rgb.r, rgb.g, rgb.b);
hsl.h = (hsl.h + 180) % 360; // rotate the hue half way around the colour wheel
var complementRgb = imageTools.hsl2rgb(hsl.h, hsl.s, hsl.l);
Personally, I find only some colours work well with their complement. Adding to a palette is risky, and doing it via code is just crazy. Stick with colours in the image. Reduce the lum and sat range if you wish to find accented colours. Each range has a count of the number of pixels found; use that to find the extent of pixels using the colours in the associated histogram.
Demo "Border the birds"
The demo finds the dominant hue around the mean luminance and uses that hue and mean saturation and luminance to create a border.
The demo uses images from Wikipedia's image of the day collection, as they allow cross-site access.
var images = [
// "https://upload.wikimedia.org/wikipedia/commons/f/fe/Goldcrest_1.jpg",
"https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Cistothorus_palustris_CT.jpg/450px-Cistothorus_palustris_CT.jpg",
"https://upload.wikimedia.org/wikipedia/commons/thumb/3/37/Black-necked_Stilt_%28Himantopus_mexicanus%29%2C_Corte_Madera.jpg/362px-Black-necked_Stilt_%28Himantopus_mexicanus%29%2C_Corte_Madera.jpg",
"https://upload.wikimedia.org/wikipedia/commons/thumb/c/cc/Daurian_redstart_at_Daisen_Park_in_Osaka%2C_January_2016.jpg/573px-Daurian_redstart_at_Daisen_Park_in_Osaka%2C_January_2016.jpg",
"https://upload.wikimedia.org/wikipedia/commons/thumb/d/da/Myioborus_torquatus_Santa_Elena.JPG/675px-Myioborus_torquatus_Santa_Elena.JPG",
"https://upload.wikimedia.org/wikipedia/commons/thumb/e/ef/Great_tit_side-on.jpg/645px-Great_tit_side-on.jpg",
"https://upload.wikimedia.org/wikipedia/commons/thumb/5/55/Sarcoramphus_papa_%28K%C3%B6nigsgeier_-_King_Vulture%29_-_Weltvogelpark_Walsrode_2013-01.jpg/675px-Sarcoramphus_papa_%28K%C3%B6nigsgeier_-_King_Vulture%29_-_Weltvogelpark_Walsrode_2013-01.jpg",,
];
function loadImageAddBorder(){
if(images.length === 0){
return ; // all done
}
var imageSrc = images.shift();
imageTools.loadImage(
imageSrc,true,
function(event){
var pixels, topRGB, c, rImage, wImage, botRGB, grad, i, hsl, h, s, l, hues, hslMap, wHue, hueCount, j, hr, gradCols, border;
const IMAGE_WORK_SIZE = 128;
const ICOUNT = IMAGE_WORK_SIZE * IMAGE_WORK_SIZE;
if(event.type === "load"){
rImage = imageTools.createImage(IMAGE_WORK_SIZE, IMAGE_WORK_SIZE); // reducing image
c = rImage.ctx;
// This is where you can crop the image. In this example I only look at the center of the image
c.drawImage(this,-16,-16,IMAGE_WORK_SIZE + 32, IMAGE_WORK_SIZE + 32); // reduce image size
pixels = imageTools.getImageData(rImage).data;
h = 0;
s = 0;
l = 0;
// these are the colour ranges you wish to look at
hues = [{
lum : {
low :20,
high : 60,
tot : 0,
},
sat : { // all saturations
low : 0,
high : 101,
tot : 0,
},
count : 0,
histo : new Uint16Array(360),
}]
for(i = 0; i < pixels.length; i += 4){
hsl = imageTools.rgb2hsl(pixels[i],pixels[i + 1],pixels[i + 2]);
l += hsl.l * hsl.l;
}
l = Math.sqrt(l/ICOUNT);
hues[0].lum.low = l - 30;
hues[0].lum.high = l + 30;
for(i = 0; i < pixels.length; i += 4){
hsl = imageTools.rgb2hsl(pixels[i], pixels[i + 1], pixels[i + 2]);
for(j = 0; j < hues.length; j ++){
hr = hues[j]; // hue range
if(hsl.l >= hr.lum.low && hsl.l < hr.lum.high){
if(hsl.s >= hr.sat.low && hsl.s < hr.sat.high){
hr.histo[hsl.h] += 1;
hr.count += 1;
hr.lum.tot += hsl.l * hsl.l;
hr.sat.tot += hsl.s;
}
}
}
}
// get weighted hue for image
// just to simplify code hue 0 and 1 (reds) can combine
for(j = 0; j < hues.length; j += 1){
hr = hues[j];
wHue = 0;
hueCount = 0;
hr.histo[1] += hr.histo[0];
for(i = 1; i < 360; i ++){
wHue += (i) * hr.histo[i];
hueCount += hr.histo[i];
}
h = Math.floor(wHue / hueCount);
s = Math.floor(hr.sat.tot / hr.count);
l = Math.floor(Math.sqrt(hr.lum.tot / hr.count));
hr.rgb = imageTools.hsl2rgb(h,s,l);
hr.rgba = imageTools.hex2RGBA(imageTools.rgba2Hex4(hr.rgb));
}
gradCols = hues.map(h=>h.rgba);
if(gradCols.length === 1){
gradCols.push(gradCols[0]); // this is a quick fix if only one colour the gradient needs more than one
}
border = Math.floor(Math.min(this.width / 10,this.height / 10, 64));
wImage = imageTools.padImage(this,border,border);
wImage.ctx.fillStyle = imageTools.createGradient(
c, "linear", 0, 0, 0, wImage.height,gradCols
);
wImage.ctx.fillRect(0, 0, wImage.width, wImage.height);
wImage.ctx.fillStyle = "black";
wImage.ctx.fillRect(border - 2, border - 2, wImage.width - border * 2 + 4, wImage.height - border * 2 + 4);
wImage.ctx.drawImage(this,border,border);
wImage.style.width = (innerWidth -64) + "px";
document.body.appendChild(wImage);
setTimeout(loadImageAddBorder,1000);
}
}
)
}
setTimeout(loadImageAddBorder,0);
/** ImageTools.js begin **/
var imageTools = (function () {
// This interface is as is.
// No warranties, no guarantees, and
/*****************************/
/* NOT to be used commercially */
/*****************************/
var workImg,workImg1,keep; // for internal use
keep = false;
const toHex = v => (v < 0x10 ? "0" : "") + Math.floor(v).toString(16);
var tools = {
canvas(width, height) { // create a blank image (canvas)
var c = document.createElement("canvas");
c.width = width;
c.height = height;
return c;
},
createImage (width, height) {
var i = this.canvas(width, height);
i.ctx = i.getContext("2d");
return i;
},
loadImage (url, crossSite, cb) { // cb is callback. Check first argument for status
var i = new Image();
if(crossSite){
i.setAttribute('crossOrigin', 'anonymous');
}
i.src = url;
i.addEventListener('load', cb);
i.addEventListener('error', cb);
return i;
},
image2Canvas(img) {
var i = this.canvas(img.width, img.height);
i.ctx = i.getContext("2d");
i.ctx.drawImage(img, 0, 0);
return i;
},
rgb2hsl(r,g,b){ // integers in the range 0-255
var min, max, dif, h, l, s;
h = l = s = 0;
r /= 255; // normalize channels
g /= 255;
b /= 255;
min = Math.min(r, g, b);
max = Math.max(r, g, b);
if(min === max){ // no colour so early exit
return {
h, s,
l : Math.floor(min * 100), // Note there is loss in this conversion
}
}
dif = max - min;
l = (max + min) / 2;
if (l > 0.5) { s = dif / (2 - max - min) }
else { s = dif / (max + min) }
if (max === r) {
if (g < b) { h = (g - b) / dif + 6.0 }
else { h = (g - b) / dif }
} else if(max === g) { h = (b - r) / dif + 2.0 }
else {h = (r - g) / dif + 4.0 }
h = Math.floor(h * 60);
s = Math.floor(s * 100);
l = Math.floor(l * 100);
return {h, s, l};
},
hsl2rgb (h, s, l) { // h in range integer 0-360 (cyclic) and s,l 0-100 both integers
var p, q;
const hue2Channel = (h) => {
h = h < 0.0 ? h + 1 : h > 1 ? h - 1 : h;
if (h < 1 / 6) { return p + (q - p) * 6 * h }
if (h < 1 / 2) { return q }
if (h < 2 / 3) { return p + (q - p) * (2 / 3 - h) * 6 }
return p;
}
s = Math.floor(s)/100;
l = Math.floor(l)/100;
if (s <= 0){ // no colour
return {
r : Math.floor(l * 255),
g : Math.floor(l * 255),
b : Math.floor(l * 255),
}
}
h = (((Math.floor(h) % 360) + 360) % 360) / 360; // normalize
if (l < 1 / 2) { q = l * (1 + s) }
else { q = l + s - l * s }
p = 2 * l - q;
return {
r : Math.floor(hue2Channel(h + 1 / 3) * 255),
g : Math.floor(hue2Channel(h) * 255),
b : Math.floor(hue2Channel(h - 1 / 3) * 255),
}
},
rgba2Hex4(r,g,b,a=255){
if(typeof r === "object"){
g = r.g;
b = r.b;
a = r.a !== undefined ? r.a : a;
r = r.r;
}
return `#${toHex(r)}${toHex(g)}${toHex(b)}${toHex(a)}`;
},
hex2RGBA(hex){ // Not CSS colour as can have extra 2 or 1 chars for alpha
// #FFFF & #FFFFFFFF last F and FF are the alpha range 0-F & 00-FF
if(typeof hex === "string"){
var str = "rgba(";
if(hex.length === 4 || hex.length === 5){
str += (parseInt(hex.substr(1,1),16) * 16) + ",";
str += (parseInt(hex.substr(2,1),16) * 16) + ",";
str += (parseInt(hex.substr(3,1),16) * 16) + ",";
if(hex.length === 5){
str += (parseInt(hex.substr(4,1),16) / 16);
}else{
str += "1";
}
return str + ")";
}
if(hex.length === 7 || hex.length === 9){
str += parseInt(hex.substr(1,2),16) + ",";
str += parseInt(hex.substr(3,2),16) + ",";
str += parseInt(hex.substr(5,2),16) + ",";
if(hex.length === 9){
str += (parseInt(hex.substr(7,2),16) / 255).toFixed(3);
}else{
str += "1";
}
return str + ")";
}
return "rgba(0,0,0,0)";
}
},
createGradient(ctx, type, x, y, xx, yy, colours){ // Colours MUST be array of hex colours NOT CSS colours
// See this.hex2RGBA for details of format
var i,g,c;
var len = colours.length;
if(type.toLowerCase() === "linear"){
g = ctx.createLinearGradient(x,y,xx,yy);
}else{
g = ctx.createRadialGradient(x,y,xx,x,y,yy);
}
for(i = 0; i < len; i++){
c = colours[i];
if(typeof c === "string"){
if(c[0] === "#"){
c = this.hex2RGBA(c);
}
g.addColorStop(Math.min(1, i / (len - 1)), c); // clamp to 1: floating point errors can push the value over 1, which makes addColorStop throw a RangeError
}
}
return g;
},
padImage(img,amount){
var image = this.canvas(img.width + amount * 2, img.height + amount * 2);
image.ctx = image.getContext("2d");
image.ctx.drawImage(img, amount, amount);
return image;
},
getImageData(image, w = image.width, h = image.height) { // cut down version to prevent integration
if(image.ctx && image.ctx.imageData){
return image.ctx.imageData;
}
return (image.ctx || (this.image2Canvas(image).ctx)).getImageData(0, 0, w, h);
},
};
return tools;
})();
/** ImageTools.js end **/
Sounds like an interesting problem to have!
Each algorithm you're using to generate colors likely has a bias toward certain colors in its random color generation.
What you're likely seeing is the end result of that bias for each. Both are selecting darker and lighter colors independently.
It may make more sense to keep a hash of common colors and use that hash as opposed to using randomly generated colors.
Either way, your 'fitness' check (the algorithm that checks which color has the best average contrast) is picking lighter and darker colors for both color sets. This makes sense: lighter images should have darker backgrounds and darker images should have lighter backgrounds.
Although you don't explicitly say, I'd bet my bottom dollar you're getting dark backgrounds for lighter average images and brighter backgrounds for darker ones.
Alternatively rather than using a hash of colors, you could generate multiple random color palettes and combine the result sets to average them out.
Or rather than taking the 6 most commonly occurring colors, why not take the overall color gradient and try against that?
I've put together an example where I get the most commonly occurring color and invert it to get the complementary color. This in theory at least should provide a good contrast ratio for the image as a whole.
Using the most commonly occurring color in the image seems to work quite well, as outlined in my example below. This is a similar technique to the one Blindman67 uses, without the massive bloat of including libraries and performing unnecessary steps; I borrowed the same images that Blindman67 uses for a fair comparison of the result set.
See Get average color of image via Javascript for getting average color (getAverageRGB() function written by James).
var images = [
"https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Cistothorus_palustris_CT.jpg/450px-Cistothorus_palustris_CT.jpg",
"https://upload.wikimedia.org/wikipedia/commons/thumb/3/37/Black-necked_Stilt_%28Himantopus_mexicanus%29%2C_Corte_Madera.jpg/362px-Black-necked_Stilt_%28Himantopus_mexicanus%29%2C_Corte_Madera.jpg",
"https://upload.wikimedia.org/wikipedia/commons/thumb/c/cc/Daurian_redstart_at_Daisen_Park_in_Osaka%2C_January_2016.jpg/573px-Daurian_redstart_at_Daisen_Park_in_Osaka%2C_January_2016.jpg",
"https://upload.wikimedia.org/wikipedia/commons/thumb/d/da/Myioborus_torquatus_Santa_Elena.JPG/675px-Myioborus_torquatus_Santa_Elena.JPG",
"https://upload.wikimedia.org/wikipedia/commons/thumb/e/ef/Great_tit_side-on.jpg/645px-Great_tit_side-on.jpg",
"https://upload.wikimedia.org/wikipedia/commons/thumb/5/55/Sarcoramphus_papa_%28K%C3%B6nigsgeier_-_King_Vulture%29_-_Weltvogelpark_Walsrode_2013-01.jpg/675px-Sarcoramphus_papa_%28K%C3%B6nigsgeier_-_King_Vulture%29_-_Weltvogelpark_Walsrode_2013-01.jpg",
];
// append images
for (var i = 0; i < images.length; i++) {
var img = document.createElement('img'),
div = document.createElement('div');
img.crossOrigin = "Anonymous";
img.style.border = '1px solid black';
img.style.margin = '5px';
div.appendChild(img);
document.body.appendChild(div);
(function(img, div) {
img.addEventListener('load', function() {
var avg = getAverageRGB(img);
div.style = 'background: rgb(' + avg.r + ',' + avg.g + ',' + avg.b + ')';
img.style.height = '128px';
img.style.width = '128px';
});
img.src = images[i];
}(img, div));
}
function getAverageRGB(imgEl) { // not my work, see http://jsfiddle.net/xLF38/818/
var blockSize = 5, // only visit every 5 pixels
defaultRGB = {
r: 0,
g: 0,
b: 0
}, // for non-supporting envs
canvas = document.createElement('canvas'),
context = canvas.getContext && canvas.getContext('2d'),
data, width, height,
i = -4,
length,
rgb = {
r: 0,
g: 0,
b: 0
},
count = 0;
if (!context) {
return defaultRGB;
}
height = canvas.height = imgEl.offsetHeight || imgEl.height;
width = canvas.width = imgEl.offsetWidth || imgEl.width;
context.drawImage(imgEl, 0, 0);
try {
data = context.getImageData(0, 0, width, height);
} catch (e) {
return defaultRGB;
}
length = data.data.length;
while ((i += blockSize * 4) < length) {
++count;
rgb.r += data.data[i];
rgb.g += data.data[i + 1];
rgb.b += data.data[i + 2];
}
// ~~ used to floor values
rgb.r = ~~(rgb.r / count);
rgb.g = ~~(rgb.g / count);
rgb.b = ~~(rgb.b / count);
return rgb;
}
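As a minimal sketch of the inversion step described above, assuming the avg object returned by getAverageRGB():
// complementary colour by inverting each channel
function invertRGB( rgb ) {
  return { r: 255 - rgb.r, g: 255 - rgb.g, b: 255 - rgb.b };
}
// e.g. use as a foreground/text colour over the image
var fg = invertRGB( avg );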
It depends on where the text overlaid on the background image is placed. If the background has some large feature on part of it, the text will likely be placed away from that, so it must contrast with that part of the image; but you may also want to pick up a certain color or complement the other colors in the image. Practically speaking, I think you will need to create a widget for people to easily slide/adjust the foreground color interactively, or you will need to create a deep learning system to do this really effectively.

Intersection of 2 SVG Paths

I need to check if two SVG Path elements intersect. Checking for intersection of the bounding boxes with .getBBox() is too inaccurate.
What I'm currently doing is iterating over both paths with .getTotalLength() and then checking whether two points from .getPointAtLength() are equal. Below is a snippet, but as you can see this is very slow and blocks the browser tab.
There must be a more efficient method to check for intersections between two paths.
var path1 = document.getElementById("p1");
var path2 = document.getElementById("p2");
var time = document.getElementById("time");
var btn = document.getElementById("start");
btn.addEventListener("click", getIntersection);
function getIntersection() {
var start = Date.now();
for (var i = 0; i < path1.getTotalLength(); i++) {
for (var j = 0; j < path2.getTotalLength(); j++) {
var point1 = path1.getPointAtLength(i);
var point2 = path2.getPointAtLength(j);
if (pointIntersect(point1, point2)) {
var end = Date.now();
time.innerHTML = (end - start) / 1000 + "s";
return;
}
}
}
}
function pointIntersect(p1, p2) {
p1.x = Math.round(p1.x);
p1.y = Math.round(p1.y);
p2.x = Math.round(p2.x);
p2.y = Math.round(p2.y);
return p1.x === p2.x && p1.y === p2.y;
}
svg {
fill: none;
stroke: black;
}
#start {
border: 1px solid;
display: inline-block;
position: absolute;
}
<div id="start">Start
</div>
<svg xmlns="http://www.w3.org/2000/svg">
<path d="M 50 10 c 120 120 120 120 120 20 z" id="p1"></path>
<path d="M 150 10 c 120 120 120 120 120 20 z" id="p2"></path>
</svg>
<div id="time"></div>
I'm not sure but it may be possible to solve this mathematically if you could extract the vectors and curves from the paths. However, your function can be optimized by caching the points from one path, and reducing the number of calls to getTotalLength and getPointAtLength.
function getIntersection() {
var start = Date.now(),
path1Length = path1.getTotalLength(),
path2Length = path2.getTotalLength(),
path2Points = [];
for (var j = 0; j < path2Length; j++) {
path2Points.push(path2.getPointAtLength(j));
}
for (var i = 0; i < path1Length; i++) {
var point1 = path1.getPointAtLength(i);
for (var j = 0; j < path2Points.length; j++) {
if (pointIntersect(point1, path2Points[j])) {
var end = Date.now();
time.innerHTML = (end - start) / 1000 + "s";
return;
}
}
}
}
This can calculate the example paths in around 0.07 seconds instead of 4-5 seconds.
jsfiddle
time 0.027s
function getIntersection2() {
function Intersect(p1, p2) {
return p1.z!==p2.z && p1.x === p2.x && p1.y === p2.y;
}
var paths = [path1,path2];
var start = Date.now(),
pathLength = [path1.getTotalLength(),path2.getTotalLength()],
pathPoints = [],
inters = [];
for (var i = 0; i < 2; i++) {
for (var j = 0; j < pathLength[i]; j++) {
var p = paths[i].getPointAtLength(j);
p.z=i;
p.x=Math.round(p.x);
p.y=Math.round(p.y);
pathPoints.push(p);
}
}
pathPoints.sort((a,b)=>a.x!=b.x?a.x-b.x:a.y!=b.y?a.y-b.y:0)
// all points
.forEach((a,i,m)=>i&&Intersect(m[i-1],a)?inters.push([a.x,a.y]):0)
// only the first
//.some((a,i,m)=>i&&Intersect(m[i-1],a)?inters.push([a.x,a.y]):0);
result.innerHTML = inters;
var end = Date.now();
time.innerHTML = (end - start) / 1000 + "s";
return;
}
And this, while totally not being what the OP asked for, is what I was looking for.
A way to detect intersections over a large number of paths by sampling:
function pointIntersects(p1, p2) {
return (Math.abs(p1.x - p2.x) > 10)
? false
: (Math.abs(p1.y - p2.y) < 10)
}
function pointsIntersect(points, point) {
return _.some(points, p => pointIntersects(p, point))
}
function samplePathPoints(path) {
const pathLength = path.getTotalLength()
const points = []
for (let i = 0; i < pathLength; i += 10)
points.push(path.getPointAtLength(i))
return points
}
let pointCloud = []
_(document.querySelectorAll('svg path'))
.filter(path => {
const points = samplePathPoints(path)
if (_.some(pointCloud, point => pointsIntersect(points, point)))
return true
points.forEach(p => pointCloud.push(p))
})
.each(path => path.remove())
note: underscore/lodash has been used for brevity
You can further optimize performance by reducing the amount of sample points.
getPointAtLength() is rather expensive especially when run >100 times.
The following examples should usually need only a few milliseconds.
Example 1: Boolean result (intersecting true/false)
let svg = document.querySelector("svg");
function check() {
perfStart();
let intersections = checkPathIntersections(p0, p1, 24);
time.textContent = '1. stroke intersection: ' + perfEnd().toFixed(3) * 1 + ' ms; \n ';
//render intersection point
gInter.innerHTML = '';
renderPoint(gInter, intersections[0], 'red', '2%');
}
function checkPathIntersections(path0, path1, checksPerPath = 24, threshold = 2) {
/**
* 0. check bbox intersection
* to skip sample point checks
*/
let bb = path0.getBBox();
let [left, top, right, bottom] = [bb.x, bb.y, bb.x + bb.width, bb.y + bb.height];
let bb1 = path1.getBBox();
let [left1, top1, right1, bottom1] = [bb1.x, bb1.y, bb1.x + bb1.width, bb1.y + bb1.height];
let bboxIntersection =
left <= right1 - threshold &&
top <= bottom1 - threshold &&
bottom >= top1 - threshold &&
right >= left1 - threshold ?
true :
false;
if (!bboxIntersection) {
return false;
}
// path0
let pathLength0 = path0.getTotalLength();
// set temporary stroke
let style0 = window.getComputedStyle(path0);
let fill0 = style0.fill;
let strokeWidth0 = style0.strokeWidth;
path0.style.strokeWidth = threshold;
// path1
let pathLength1 = path1.getTotalLength();
// set temporary stroke
let style1 = window.getComputedStyle(path1);
let fill1 = style1.fill;
let strokeWidth1 = style1.strokeWidth;
path1.style.strokeWidth = threshold;
/**
* 1. check sample point intersections
*/
let checks = 0;
let intersections = [];
/**
* 1.1 compare path0 against path1
*/
for (let c = 0; c < checksPerPath && !intersections.length; c++) {
let pt = path1.getPointAtLength((pathLength1 / checksPerPath) * c);
let inStroke = path0.isPointInStroke(pt);
let inFill = path0.isPointInFill(pt);
// check path 1 against path 2
if (inStroke || inFill) {
intersections.push(pt)
} else {
/**
* no intersections found:
* check path1 sample points against path0
*/
let pt1 = path0.getPointAtLength(
(pathLength0 / checksPerPath) * c
);
let inStroke1 = path1.isPointInStroke(pt1);
let inFill1 = path1.isPointInFill(pt1);
if (inStroke1 || inFill1) {
intersections.push(pt1)
}
}
// just for benchmarking
checks++;
}
// reset styles
path0.style.fill = fill0;
path0.style.strokeWidth = strokeWidth0;
path1.style.fill = fill1;
path1.style.strokeWidth = strokeWidth1;
console.log('sample point checks:', checks);
return intersections;
}
/**
* simple performance test
*/
function perfStart() {
t0 = performance.now();
}
function perfEnd(text = "") {
t1 = performance.now();
total = t1 - t0;
console.log(`execution time ${text}: ${total} ms`);
return total;
}
function renderPoint(
svg,
coords,
fill = "red",
r = "2",
opacity = "1",
id = "",
className = ""
) {
//console.log(coords);
if (Array.isArray(coords)) {
coords = {
x: coords[0],
y: coords[1]
};
}
let marker = `<circle class="${className}" opacity="${opacity}" id="${id}" cx="${coords.x}" cy="${coords.y}" r="${r}" fill="${fill}">
<title>${coords.x} ${coords.y}</title></circle>`;
svg.insertAdjacentHTML("beforeend", marker);
}
svg {
fill: none;
stroke: black;
}
<p><button onclick="check()">Check intersection</button></p>
<svg xmlns="http://www.w3.org/2000/svg">
<path d="M 50 10 c 120 120 120 120 120 20 z" id="p0"></path>
<path d="M 150 10 c 120 120 120 120 120 20 z" id="p1"></path>
<g id="gInter"></g>
</svg>
<p id="time"></p>
How it works
1. Check BBox intersections
Checking for intersection of the bounding boxes with .getBBox() is too inaccurate.
That's true; however, we should always start with a bbox intersection test to avoid unnecessary calculations.
2. Check intersection via isPointInStroke() and isPointInFill()
These natively supported methods are well optimized so we don't need to compare retrieved point arrays against each other.
By increasing the stroke-width of the paths we can also increase the tolerance threshold for intersections.
3. Reduce sample points
If we don't need all intersecting points but only a boolean value, we can drastically reduce the number of intersection checks by performing them progressively within the testing loop.
Once we find any intersection (in stroke or fill), we stop the loop and return true.
Besides, we can usually reduce the number of sample points by splitting the path length into e.g. 24-100 steps.
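A minimal sketch of that early-exit idea as a standalone boolean helper (hypothetical name, reusing the native hit-testing calls from above):
function pathsIntersectBoolean( path0, path1, checks = 24 ) {
  const len1 = path1.getTotalLength();
  for ( let c = 0; c < checks; c++ ) {
    const pt = path1.getPointAtLength( ( len1 / checks ) * c );
    // stop at the first hit instead of collecting every intersection
    if ( path0.isPointInStroke( pt ) || path0.isPointInFill( pt ) ) return true;
  }
  return false;
}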
Example 2: get all intersection points
let svg = document.querySelector("svg");
let paths = svg.querySelectorAll("path");
function check() {
// reset results
gInter2.innerHTML = '';
gInter.innerHTML = '';
time.textContent = '';
/**
* Boolean check
*/
perfStart();
let intersections = checkPathIntersections(p0, p1, 24);
time.textContent += '1. stroke intersection: ' + perfEnd().toFixed(3) * 1 + ' ms; \n ';
renderPoint(svg, intersections[0]);
perfStart();
let intersections1 = checkPathIntersections(p2, p3, 24);
time.textContent += '2. fill intersection: ' + perfEnd().toFixed(3) * 1 + ' ms; \n ';
renderPoint(svg, intersections1[0])
/**
* Precise check
*/
perfStart();
let intersections3 = checkIntersectionPrecise(p4, p5, 100, 1);
time.textContent += '3. multiple intersections: ' + perfEnd().toFixed(3) * 1 + ' ms; \n ';
if (intersections3.length) {
intersections3.forEach(p => {
renderPoint(svg, p, 'red')
})
}
// no bbox intersection
perfStart();
let intersections4 = checkPathIntersections(p5, p6, 24);
time.textContent += '4. no bbBox intersection: ' + perfEnd().toFixed(3) * 1 + ' ms; \n ';
perfStart();
let intersections5 = checkIntersectionPrecise(p8, p9, 1200, 0);
time.textContent += '5. multiple intersections: ' + perfEnd().toFixed(3) * 1 + ' ms; \n ';
if (intersections5.length) {
intersections5.forEach(p => {
renderPoint(gInter2, p, 'green', '0.25%');
})
}
}
function checkIntersectionPrecise(path0, path1, split = 1000, decimals = 0) {
/**
* 0. check bbox intersection
* to skip sample point checks
*/
let bb = path0.getBBox();
let [left, top, right, bottom] = [bb.x, bb.y, bb.x + bb.width, bb.y + bb.height];
let bb1 = path1.getBBox();
let [left1, top1, right1, bottom1] = [bb1.x, bb1.y, bb1.x + bb1.width, bb1.y + bb1.height];
let bboxIntersection =
left <= right1 &&
top <= bottom1 &&
bottom >= top1 &&
right >= left1 ?
true :
false;
if (!bboxIntersection) {
console.log('no intersections at all');
return false;
}
// path0
let pathData0 = path0.getPathData({
normalize: true
})
let points0 = pathDataToPolygonPoints(pathData0, true, split);
let points0Strings = points0.map(val => {
return val.x.toFixed(decimals) + '_' + val.y.toFixed(decimals)
});
// filter duplicates
points0Strings = [...new Set(points0Strings)];
// path1
let pathLength1 = path1.getTotalLength();
let pathData1 = path1.getPathData({
normalize: true
})
let points1 = pathDataToPolygonPoints(pathData1, true, split);
let points1Strings = points1.map(val => {
return val.x.toFixed(decimals) + '_' + val.y.toFixed(decimals)
});
points1Strings = [...new Set(points1Strings)];
// 1. compare
let intersections = [];
let intersectionsFilter = [];
for (let i = 0; i < points0Strings.length; i++) {
let p0Str = points0Strings[i];
let index = points1Strings.indexOf(p0Str);
if (index !== -1) {
let p1 = p0Str.split('_');
intersections.push({
x: +p1[0],
y: +p1[1]
});
}
}
// filter nearby points
if (intersections.length) {
intersectionsFilter = [intersections[0]];
let length = intersections.length;
for (let i = 1; i < length; i += 1) {
let p = intersections[i];
let pPrev = intersections[i - 1];
let diffX = Math.abs(pPrev.x - p.x);
let diffY = Math.abs(pPrev.y - p.y);
let diff = diffX + diffY;
if (diff > 1) {
intersectionsFilter.push(p)
}
}
} else {
return false
}
return intersectionsFilter;
}
/**
* convert path d to polygon point array
*/
function pathDataToPolygonPoints(pathData, addControlPointsMid = false, splitNtimes = 0, splitLines = false) {
let points = [];
pathData.forEach((com, c) => {
let type = com.type;
let values = com.values;
let valL = values.length;
// optional splitting
let splitStep = splitNtimes ? (0.5 / splitNtimes) : (addControlPointsMid ? 0.5 : 0);
let split = splitStep;
// M
if (c === 0) {
let M = {
x: pathData[0].values[valL - 2],
y: pathData[0].values[valL - 1]
};
points.push(M);
}
if (valL && c > 0) {
let prev = pathData[c - 1];
let prevVal = prev.values;
let prevValL = prevVal.length;
let p0 = {
x: prevVal[prevValL - 2],
y: prevVal[prevValL - 1]
};
// cubic curves
if (type === "C") {
if (prevValL) {
let cp1 = {
x: values[valL - 6],
y: values[valL - 5]
};
let cp2 = {
x: values[valL - 4],
y: values[valL - 3]
};
let p = {
x: values[valL - 2],
y: values[valL - 1]
};
if (addControlPointsMid && split) {
// split cubic curves
for (let s = 0; split < 1 && s < 9999; s++) {
let midPoint = getPointAtCubicSegmentLength(p0, cp1, cp2, p, split);
points.push(midPoint);
split += splitStep
}
}
points.push({
x: values[valL - 2],
y: values[valL - 1]
});
}
}
// linetos
else if (type === "L") {
if (splitLines) {
//let prevCoords = [prevVal[prevValL - 2], prevVal[prevValL - 1]];
let p1 = {
x: prevVal[prevValL - 2],
y: prevVal[prevValL - 1]
}
let p2 = {
x: values[valL - 2],
y: values[valL - 1]
}
if (addControlPointsMid && split) {
for (let s = 0; split < 1; s++) {
let midPoint = interpolatedPoint(p1, p2, split);
points.push(midPoint);
split += splitStep
}
}
}
points.push({
x: values[valL - 2],
y: values[valL - 1]
});
}
}
});
return points;
}
/**
* Linear interpolation (LERP) helper
*/
function interpolatedPoint(p1, p2, t = 0.5) {
//t: 0.5 - point in the middle
if (Array.isArray(p1)) {
p1.x = p1[0];
p1.y = p1[1];
}
if (Array.isArray(p2)) {
p2.x = p2[0];
p2.y = p2[1];
}
let [x, y] = [(p2.x - p1.x) * t + p1.x, (p2.y - p1.y) * t + p1.y];
return {
x: x,
y: y
};
}
/**
* calculate single points on segments
*/
function getPointAtCubicSegmentLength(p0, cp1, cp2, p, t=0.5) {
let t1 = 1 - t;
return {
x: t1 ** 3 * p0.x + 3 * t1 ** 2 * t * cp1.x + 3 * t1 * t ** 2 * cp2.x + t ** 3 * p.x,
y: t1 ** 3 * p0.y + 3 * t1 ** 2 * t * cp1.y + 3 * t1 * t ** 2 * cp2.y + t ** 3 * p.y
}
}
function checkPathIntersections(path0, path1, checksPerPath = 24, threshold = 2) {
/**
* 0. check bbox intersection
* to skip sample point checks
*/
let bb = path0.getBBox();
let [left, top, right, bottom] = [bb.x, bb.y, bb.x + bb.width, bb.y + bb.height];
let bb1 = path1.getBBox();
let [left1, top1, right1, bottom1] = [bb1.x, bb1.y, bb1.x + bb1.width, bb1.y + bb1.height];
let bboxIntersection =
left <= right1 - threshold &&
top <= bottom1 - threshold &&
bottom >= top1 - threshold &&
right >= left1 - threshold ?
true :
false;
if (!bboxIntersection) {
return false;
}
// path0
let pathLength0 = path0.getTotalLength();
// set temporary stroke
let style0 = window.getComputedStyle(path0);
let fill0 = style0.fill;
let strokeWidth0 = style0.strokeWidth;
path0.style.strokeWidth = threshold;
// path1
let pathLength1 = path1.getTotalLength();
// set temporary stroke
let style1 = window.getComputedStyle(path1);
let fill1 = style1.fill;
let strokeWidth1 = style1.strokeWidth;
path1.style.strokeWidth = threshold;
/**
* 1. check sample point intersections
*/
let checks = 0;
let intersections = [];
/**
* 1.1 compare path0 against path1
*/
for (let c = 0; c < checksPerPath && !intersections.length; c++) {
let pt = path1.getPointAtLength((pathLength1 / checksPerPath) * c);
let inStroke = path0.isPointInStroke(pt);
let inFill = path0.isPointInFill(pt);
// check path 1 against path 2
if (inStroke || inFill) {
intersections.push(pt)
} else {
/**
* no intersections found:
* check path1 sample points against path0
*/
let pt1 = path0.getPointAtLength(
(pathLength0 / checksPerPath) * c
);
let inStroke1 = path1.isPointInStroke(pt1);
let inFill1 = path1.isPointInFill(pt1);
if (inStroke1 || inFill1) {
intersections.push(pt1)
}
}
// just for benchmarking
checks++;
}
// reset styles
path0.style.fill = fill0;
path0.style.strokeWidth = strokeWidth0;
path1.style.fill = fill1;
path1.style.strokeWidth = strokeWidth1;
console.log('sample point checks:', checks);
return intersections;
}
/**
* simple performance test
*/
function perfStart() {
t0 = performance.now();
}
function perfEnd(text = "") {
t1 = performance.now();
total = t1 - t0;
console.log(`execution time ${text}: ${total} ms`);
return total;
}
function renderPoint(
svg,
coords,
fill = "red",
r = "2",
opacity = "1",
id = "",
className = ""
) {
//console.log(coords);
if (Array.isArray(coords)) {
coords = {
x: coords[0],
y: coords[1]
};
}
let marker = `<circle class="${className}" opacity="${opacity}" id="${id}" cx="${coords.x}" cy="${coords.y}" r="${r}" fill="${fill}">
<title>${coords.x} ${coords.y}</title></circle>`;
svg.insertAdjacentHTML("beforeend", marker);
}
body {
font-family: sans-serif;
}
svg {
width: 100%;
}
path {
fill: none;
stroke: #000;
stroke-width: 1px;
}
p {
white-space: pre-line;
}
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 500 150">
<path id="p0" d="M27.357,21.433c13.373,3.432,21.433,17.056,18.001,30.43
c-3.432,13.374-17.057,21.434-30.43,18.002" />
<path id="p1" d="M80.652,80.414c-12.205,6.457-27.332,1.8-33.791-10.403
c-6.458-12.204-1.8-27.333,10.404-33.791" />
<path id="p2"
d="M159.28 40.26c6.73 12.06 2.41 27.29-9.65 34.01s-27.29 2.41-34.01-9.65s-2.41-27.29 9.65-34.01c12.06-6.73 27.29-2.41 34.01 9.65z" />
<path id="p3"
d="M191.27 53.72c-0.7 13.79-12.45 24.4-26.24 23.7s-24.4-12.45-23.7-26.24s12.45-24.4 26.24-23.7s24.4 12.45 23.7 26.24z" />
<path id="p4"
d="M259.28 40.26c6.73 12.06 2.41 27.29-9.65 34.01s-27.29 2.41-34.01-9.65s-2.41-27.29 9.65-34.01c12.06-6.73 27.29-2.41 34.01 9.65z" />
<path id="p5"
d="M291.27 53.72c-0.7 13.79-12.45 24.4-26.24 23.7s-24.4-12.45-23.7-26.24s12.45-24.4 26.24-23.7s24.4 12.45 23.7 26.24z" />
<path id="p6"
d="M359.28 40.26c6.73 12.06 2.41 27.29-9.65 34.01s-27.29 2.41-34.01-9.65s-2.41-27.29 9.65-34.01c12.06-6.73 27.29-2.41 34.01 9.65z" />
<path id="p7"
d="M420 53.72c-0.7 13.79-12.45 24.4-26.24 23.7s-24.4-12.45-23.7-26.24s12.45-24.4 26.24-23.7s24.4 12.45 23.7 26.24z" />
<g id="gInter"></g>
</svg>
<p>Based on @Netsi1964's codepen:
https://codepen.io/netsi1964/pen/yKagwx/</p>
<svg id="svg2" viewBox="0 0 2000 700">
<path d=" M 529 664 C 93 290 616 93 1942 385 C 1014 330 147 720 2059 70 C 1307 400 278 713 1686 691 " style="stroke:orange!important" stroke="orange" id="p8"/>
<path d=" M 1711 363 C 847 15 1797 638 1230 169 C 1198 443 1931 146 383 13 C 1103 286 1063 514 521 566 " id="p9"/>
<g id="gInter2"></g>
</svg>
<p><button onclick="check()">Check intersection</button></p>
<p id="time"></p>
<script src="https://cdn.jsdelivr.net/npm/path-data-polyfill#1.0.4/path-data-polyfill.min.js"></script>
This example calculates sample points from a parsed pathData array - retrieved with getPathData() (needs a polyfill).
All commands are normalized/converted via
path.getPathData({normalize:true})
to absolute coordinates using only M,C,L and Z.
We can now easily calculate points on bézier C commands with an interpolation helper.
function getPointAtCubicSegmentLength(p0, cp1, cp2, p, t=0.5) {
let t1 = 1 - t;
return {
x: t1 ** 3 * p0.x + 3 * t1 ** 2 * t * cp1.x + 3 * t1 * t ** 2 * cp2.x + t ** 3 * p.x,
y: t1 ** 3 * p0.y + 3 * t1 ** 2 * t * cp1.y + 3 * t1 * t ** 2 * cp2.y + t ** 3 * p.y
}
}
p0 = previous commands last point
cp1 = first C control point
cp2 = second control point
p = C end point
t = split position: t=0.5 => middle of the curve
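For example, sampling the midpoint of a simple cubic segment (hypothetical coordinates, equivalent to the path "M 0 0 C 10 0 10 10 20 10"):
let mid = getPointAtCubicSegmentLength(
  { x: 0, y: 0 },   // p0: previous command's last point
  { x: 10, y: 0 },  // cp1
  { x: 10, y: 10 }, // cp2
  { x: 20, y: 10 }, // p: end point
  0.5               // t: middle of the curve
);
// mid is { x: 10, y: 5 }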
Admittedly, this is quite a chunk of code. However, it is way faster for calculating hundreds of sample points than using getPointAtLength().
Codepen example.

How to add faces to THREE.BufferGeometry?

I have created programmatically a simple mesh:
var CreateSimpleMesh = new function () {
var xy = [],
maxX = 7,
maxY = 10,
river = [[0, 5], [0, 4], [1, 3], [2, 2], [3, 2], [4, 1], [5, 1], [6, 0]],
grassGeometry = new THREE.BufferGeometry(),
grassVertexPositions = []
this.init = function () {
for (i = 0; i < maxX; i++) {
for (j = 0; j < maxY; j++) {
xy.push([i, j])
}
}
for (var i = 0; i < xy.length; i++) {
grassVertexPositions.push([xy[i][0], xy[i][1], 0])
grassVertexPositions.push([xy[i][0] + 1, xy[i][1], 0])
grassVertexPositions.push([xy[i][0], xy[i][1] + 1, 0])
grassVertexPositions.push([xy[i][0] + 1, xy[i][1] + 1, 0])
grassVertexPositions.push([xy[i][0], xy[i][1] + 1, 0])
grassVertexPositions.push([xy[i][0] + 1, xy[i][1], 0])
}
for (var i = 0; i < grassVertexPositions.length; i++) {
for (var j = 0; j < river.length; j++) {
if (river[j][0] == grassVertexPositions[i][0] && river[j][1] == grassVertexPositions[i][1]) {
grassVertexPositions[i][2] = -0.5
}
}
}
var grassVertices = new Float32Array(grassVertexPositions.length * 3)
for (var i = 0; i < grassVertexPositions.length; i++) {
grassVertices[i * 3 + 0] = grassVertexPositions[i][0];
grassVertices[i * 3 + 1] = grassVertexPositions[i][1];
grassVertices[i * 3 + 2] = grassVertexPositions[i][2];
}
grassGeometry.addAttribute('position', new THREE.BufferAttribute(grassVertices, 3))
var grassMaterial = new THREE.MeshLambertMaterial({color: 0x00ff00}),
grassMesh = new THREE.Mesh(grassGeometry, grassMaterial)
grassMesh.rotation.x = -Math.PI / 2
Test.getScene().add(grassMesh);
}
}
Problem is that this mesh has only vertices. I have tried to add faces to it like in this question using THREE.Shape.Utils.triangulateShape, but BufferGeometry is different from normal Geometry and it does not work. Is it possible to add faces to BufferGeometry?
EDIT:
Working fiddle
Here is how to create a mesh having BufferGeometry. This is the simpler "non-indexed" BufferGeometry where vertices are not shared.
// non-indexed buffer geometry
var geometry = new THREE.BufferGeometry();
// number of triangles
var NUM_TRIANGLES = 10;
// attributes
var positions = new Float32Array( NUM_TRIANGLES * 3 * 3 );
var normals = new Float32Array( NUM_TRIANGLES * 3 * 3 );
var colors = new Float32Array( NUM_TRIANGLES * 3 * 3 );
var uvs = new Float32Array( NUM_TRIANGLES * 3 * 2 );
var color = new THREE.Color();
var scale = 15;
var size = 5;
var x, y, z;
for ( var i = 0, l = NUM_TRIANGLES * 3; i < l; i ++ ) {
if ( i % 3 === 0 ) {
x = ( Math.random() - 0.5 ) * scale;
y = ( Math.random() - 0.5 ) * scale;
z = ( Math.random() - 0.5 ) * scale;
} else {
x = x + size * ( Math.random() - 0.5 );
y = y + size * ( Math.random() - 0.5 );
z = z + size * ( Math.random() - 0.5 );
}
var index = 3 * i;
// positions
positions[ index ] = x;
positions[ index + 1 ] = y;
positions[ index + 2 ] = z;
//normals -- we will set normals later
// colors
color.setHSL( i / l, 1.0, 0.5 );
colors[ index ] = color.r;
colors[ index + 1 ] = color.g;
colors[ index + 2 ] = color.b;
// uvs
uvs[ index ] = Math.random(); // just something...
uvs[ index + 1 ] = Math.random();
}
geometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
geometry.addAttribute( 'normal', new THREE.BufferAttribute( normals, 3 ) );
geometry.addAttribute( 'color', new THREE.BufferAttribute( colors, 3 ) );
geometry.addAttribute( 'uv', new THREE.BufferAttribute( uvs, 2 ) );
// optional
geometry.computeBoundingBox();
geometry.computeBoundingSphere();
// set the normals
geometry.computeVertexNormals(); // computed vertex normals are orthogonal to the face for non-indexed BufferGeometry
See the three.js examples for many additional examples of creating BufferGeometry. Also check out the source code for PlaneGeometry and SphereGeometry, which are reasonably easy to understand.
three.js r.143
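For completeness, an indexed BufferGeometry shares vertices between triangles via an index buffer; a minimal sketch of a single quad built from two triangles, using the setAttribute/setIndex names from newer three.js releases:
// four shared vertices, six indices
const geometry = new THREE.BufferGeometry();
const vertices = new Float32Array( [
  0, 0, 0, // v0
  1, 0, 0, // v1
  1, 1, 0, // v2
  0, 1, 0  // v3
] );
geometry.setAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );
geometry.setIndex( [ 0, 1, 2, 2, 3, 0 ] ); // triangles v0-v1-v2 and v2-v3-v0
geometry.computeVertexNormals(); // normals are averaged across shared vertices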
You can add faces using the three.js internal function fromBufferGeometry. In your case it would be something like this:
var directGeo = new THREE.Geometry();
directGeo.fromBufferGeometry(grassGeometry);
Then use directGeo to build your mesh, and it will have faces.

three.js plane buffergeometry uvs

I'm trying to create a BufferGeometry plane, but I'm having trouble with the uv coordinates. I've tried to follow Correct UV mapping Three.js, yet I don't get a correct result.
The uv code is below. I also saved the entire buffergeometry code at http://jsfiddle.net/94xaL/.
I would very much appreciate a hint on what I'm doing wrong here!
Thanks!
var uvs = terrainGeom.attributes.uv.array;
var gridX = gridY = TERRAIN_RES - 1;
for ( iy = 0; iy < gridY; iy++ ) {
for ( ix = 0; ix < gridX; ix++ ) {
var i = (iy * gridY + ix) * 12;
//0,0
uvs[ i ] = ix / gridX
uvs[ i + 1 ] = iy / gridY;
//0,1
uvs[ i + 2 ] = ix / gridX
uvs[ i + 3 ] = ( iy + 1 ) / gridY;
//1,0
uvs[ i + 4 ] = ( ix + 1 ) / gridX
uvs[ i + 5 ] = iy / gridY;
//0,1
uvs[ i + 6 ] = ix / gridX
uvs[ i + 7 ] = ( iy + 1 ) / gridY;
//1,1
uvs[ i + 8 ] = ( ix + 1 ) / gridX
uvs[ i + 9 ] = ( iy + 1 ) / gridY;
//1,0
uvs[ i + 10 ] = ( ix + 1 ) / gridX
uvs[ i + 11 ] = iy / gridY;
}
}
The latest three.js version in the dev branch now builds planes with BufferGeometry: https://github.com/mrdoob/three.js/blob/dev/src/extras/geometries/PlaneGeometry.js
If you still want to build your own you can get some inspiration there.
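For reference, a minimal sketch of the indexed-grid UV layout that PlaneGeometry uses: one uv per shared vertex on a (gridX + 1) x (gridY + 1) grid, with the v coordinate flipped so the texture is upright:
var uvs = [];
for ( var iy = 0; iy <= gridY; iy++ ) {
  for ( var ix = 0; ix <= gridX; ix++ ) {
    uvs.push( ix / gridX, 1 - iy / gridY );
  }
}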
