I want to make a little Photoshop JavaScript. Technically, I just need to know how to compare the color values of pixels as if they were an array with three integer values each, for example (pseudocode):
for all pixels x
for all pixels y
if left pixel's green channel is bigger than red channel:
set the blue channel to 25
else
if the blue channel is greater than 50
set the green channel to 0
in the documentation, there's a ton of things like filters, text and layers you can do, but how do you do something as simple as this?
Reading and writing pixel values in Photoshop scripts is indeed not as simple as it could be ... Check out the following script, which inverts the green channel of an image:
var doc = app.open(new File("~/Desktop/test1.bmp"));
var sampler = doc.colorSamplers.add([0, 0]);
for (var x = 0; x < doc.width; ++x) {
for (var y = 0; y < doc.height; ++y) {
sampler.move([x, y]);
var color = sampler.color;
var region = [
[x, y],
[x + 1, y],
[x + 1, y + 1],
[x, y + 1],
[x, y]
];
var newColor = new SolidColor();
newColor.rgb.red = color.rgb.red;
newColor.rgb.green = 255 - color.rgb.green;
newColor.rgb.blue = color.rgb.blue;
doc.selection.select(region);
doc.selection.fill(newColor);
}
}
I'm not sure if there's a prettier way of setting a pixel color than the select + fill trick.
This script runs super slow, so maybe Photoshop scripts are not the best tool for pixel manipulation ...
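The channel logic from the question itself can be sketched in plain JavaScript, separate from Photoshop's API, if you treat the image as a flat [r, g, b, r, g, b, ...] array (the array layout and function name here are illustrative; inside Photoshop you would still read each pixel via the sampler and write it back with the select + fill trick above). I'm reading "left pixel" as the current pixel for simplicity:

```javascript
// Sketch of the question's pseudocode over a flat RGB array.
// Each pixel occupies three consecutive entries: r, g, b.
function transformPixels(pixels) {
  for (var i = 0; i < pixels.length; i += 3) {
    var r = pixels[i], g = pixels[i + 1], b = pixels[i + 2];
    if (g > r) {
      pixels[i + 2] = 25;   // green beats red: set blue to 25
    } else if (b > 50) {
      pixels[i + 1] = 0;    // otherwise, if blue exceeds 50: zero out green
    }
  }
  return pixels;
}
```

For example, `transformPixels([10, 20, 99])` sets the blue channel to 25 because green (20) exceeds red (10).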
I'm trying to translate the following JavaScript into R but having problems:
/*
Author of the script: Carlos Bentes
*/
// Normalized Difference Vegetation Index
var ndvi = (B08-B04)/(B08+B04);
// Threshold for vegetation
var veg_th = 0.4;
// Simple RGB
var R = 2.5*B04;
var G = 2.5*B03;
var B = 2.5*B02;
// Transform to Black and White
var Y = 0.2*R + 0.7*G + 0.1*B;
var pixel = [Y, Y, Y];
// Change vegetation color
if(ndvi >= veg_th)
pixel = [0.1*Y, 1.8*Y, 0.1*Y];
return pixel;
This is the Green City Index which is part of the repository of custom scripts that can be used with Sentinel-Hub services.
Specifically I'm having issues with:
var pixel = [Y, Y, Y];
And
if(ndvi >= veg_th)
pixel = [0.1*Y, 1.8*Y, 0.1*Y];
return pixel;
I think var pixel = [Y, Y, Y]; is the same as pixel <- c(Y, Y, Y)? Which means in the if statement I'd have:
if(ndvi >= veg_th){
pixel <- c(0.1*Y, 1.8*Y, 0.1*Y)
return(pixel)
}
But I'm getting an error:
Error in if (ndvi >= veg_th) { : argument is not interpretable as logical
Some notes about the parts I do understand for those not familiar with sentinel:
B04 and B08 are raster bands from Sentinel so
ndvi is a raster as well (normalized difference vegetation index)
veg_th is a value set as a threshold for interpreting results of ndvi
R, G, and B are making a color composite from the sentinel rasters
I want to draw StackOverflow's logo with this Neural Network:
The NN should ideally become [r, g, b] = f([x, y]). In other words, it should return RGB colors for a given pair of coordinates. The FFNN works pretty well for simple shapes like a circle or a box. For example after several thousands epochs a circle looks like this:
Try it yourself: https://codepen.io/adelriosantiago/pen/PoNGeLw
However since StackOverflow's logo is far more complex even after several thousands of iterations the FFNN's results are somewhat poor:
From left to right:
StackOverflow's logo at 256 colors.
With 15 hidden neurons: The left handle never appears.
50 hidden neurons: Pretty poor result in general.
0.03 as learning rate: Shows blue in the results (blue is not in the original image).
A time-decreasing learning rate: The left handle appears but other details are now lost.
Try it yourself: https://codepen.io/adelriosantiago/pen/xxVEjeJ
Some parameters of interest are synaptic.Architect.Perceptron definition and learningRate value.
How can I improve the accuracy of this NN?
Could you improve the snippet? If so, please explain what you did. If there is a better NN architecture to tackle this type of job could you please provide an example?
Additional info:
Artificial Neural Network library used: Synaptic.js
To run this example in your localhost: See repository
By adding another layer, you get better results:
let perceptron = new synaptic.Architect.Perceptron(2, 15, 10, 3)
There are also small changes you can make that improve efficiency (marginally):
Here is my optimized code:
const width = 125
const height = 125
const outputCtx = document.getElementById("output").getContext("2d")
const iterationLabel = document.getElementById("iteration")
const stopAtIteration = 3000
let perceptron = new synaptic.Architect.Perceptron(2, 15, 10, 3)
let iteration = 0
let inputData = (() => {
const tempCtx = document.createElement("canvas").getContext("2d")
tempCtx.drawImage(document.getElementById("input"), 0, 0)
return tempCtx.getImageData(0, 0, width, height)
})()
const getRGB = (img, x, y) => {
  // row-major index: the stride is the row width. width === height here,
  // but width is the correct factor for non-square images.
  var k = (width * y + x) * 4;
  return [
    img.data[k] / 255, // R
    img.data[k + 1] / 255, // G
    img.data[k + 2] / 255, // B
    //img.data[k + 3], // Alpha not used
  ]
}
const paint = () => {
var imageData = outputCtx.getImageData(0, 0, width, height)
for (let x = 0; x < width; x++) {
for (let y = 0; y < height; y++) {
var rgb = perceptron.activate([x / width, y / height])
var k = (width * y + x) * 4; // row-major: stride is width (equal to height here)
imageData.data[k] = rgb[0] * 255
imageData.data[k + 1] = rgb[1] * 255
imageData.data[k + 2] = rgb[2] * 255
imageData.data[k + 3] = 255 // Alpha not used
}
}
outputCtx.putImageData(imageData, 0, 0)
setTimeout(train, 0)
}
const train = () => {
iterationLabel.innerHTML = ++iteration
if (iteration > stopAtIteration) return
let learningRate = 0.01 / (1 + 0.0005 * iteration) // Attempt with dynamic learning rate
//let learningRate = 0.01 // Attempt with non-dynamic learning rate
for (let x = 0; x < width; x += 1) {
for (let y = 0; y < height; y += 1) {
perceptron.activate([x / width, y / height])
perceptron.propagate(learningRate, getRGB(inputData, x, y))
}
}
paint()
}
const startTraining = (btn) => {
btn.disabled = true
train()
}
EDIT: I made another CodePen with even better results:
https://codepen.io/xurei/pen/KKzWLxg
It is likely to be over-fitted BTW.
The perceptron definition:
let perceptron = new synaptic.Architect.Perceptron(2, 8, 15, 7, 3)
Taking some insights from the lecture/slides of Bhiksha Raj (from slides 62 onwards), and summarizing as below:
Each node can be treated as a linear classifier, and a combination of several nodes in a single layer of a neural network can approximate any basic shape. For example, a rectangle can be formed by 4 nodes, one contributing each edge, with the shape approximated by the final output layer.
By the same reasoning, a complex shape such as a circle may require a very large (in the limit, infinite) number of nodes in a single layer. The same likely holds for a single layer with two disjoint shapes (a non-overlapping triangle and rectangle). However, this can still be learnt using more than one hidden layer: the 1st layer learns the basic shapes, and the 2nd layer approximates their disjoint combinations.
Thus, you can treat this logo as a combination of disjoint rectangles (5 rectangles for orange and 3 for grey). We can use at least 32 nodes in the 1st hidden layer and a few nodes in the 2nd hidden layer. However, we don't have control over what each node learns, so a few more neurons than strictly required should help.
I have a Bezier curve: (0,0), (.25,.1), (.25,1), and (1,1).
This is graphically seen here: http://cubic-bezier.com/#.25,.1,.25,1
We see that the x axis is time. This is my unknown. The curve lives in a unit cell, so I was wondering: how can I get x when y is 0.5?
Thanks
I saw this topic: y coordinate for a given x cubic bezier
But it loops, and I need to avoid loops.
So I found this topic: Cubic bezier curves - get Y for given X
But I can't figure out how to solve a cubic polynomial in js :(
This is mathematically impossible to do directly unless you can guarantee that there will only be one y value per x value, which even on a unit rectangle you can't (for instance, {0,0},{1,0.6},{0,0.4},{1,1} will be rather interesting at the mid point!). The fastest approach is to simply build a lookup table (LUT), for instance:
// x1..x4 and y1..y4 are the curve's control point coordinates
var LUT_x = [], LUT_y = [], t, a, b, c, d;
for(let i=0; i<100; i++) {
t = i/100;
a = (1-t)*(1-t)*(1-t);
b = (1-t)*(1-t)*t;
c = (1-t)*t*t;
d = t*t*t;
LUT_x.push( a*x1 + 3*b*x2 + 3*c*x3 + d*x4 );
LUT_y.push( a*y1 + 3*b*y2 + 3*c*y3 + d*y4 );
}
Done. Now if you want to look up an x value for some y value, just run through LUT_y until you find your y value (or, more realistically, until you find two values at indices i and i+1 such that your y value lies between them), and you will immediately know the corresponding x value, because it'll be at the same index in LUT_x.
For non-exact matches between two indices i and i+1, you simply do a linear interpolation: your y value lies at some fractional distance between LUT_y[i] and LUT_y[i+1], and x lies at the same fractional distance between LUT_x[i] and LUT_x[i+1].
All the solutions that use a look up table can only give you an approximate result. If that is good enough for you, you are set. If you want a more accurate result, then you need to use some sort of numeric method.
For a general Bezier curve of degree N, you do need to loop. Meaning, you need to use bi-section method or Newton Raphson method or something similar to find the x value corresponding to a given y value and such methods (almost) always involve iterations starting with an initial guess. If there are mutiple solutions, then what x value you get will depend on your initial guess.
However, if you only care about cubic Bezier curves, then analytic solution is possible as roots of cubic polynomials can be found using the Cardano formula. In this link (y coordinate for a given x cubic bezier), which was referenced in the OP, there is an answer by Dave Bakker that shows how to solve cubic polynomial using Cardano formula. Source codes in Javascript is provided. I think this will be your good source to start your investigation on.
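If an exact analytic root feels like overkill, the bisection method from the previous paragraph is short to sketch. This assumes y(t) is monotonically increasing on [0,1], which holds for the curve in the question (y controls 0, 0.1, 1, 1); the function names are illustrative:

```javascript
// Evaluate one coordinate of a cubic Bezier at parameter t.
function cubicAt(p0, p1, p2, p3, t) {
  var mt = 1 - t;
  return mt*mt*mt*p0 + 3*mt*mt*t*p1 + 3*mt*t*t*p2 + t*t*t*p3;
}

// Find x for a given y by bisecting on t, assuming y(t) is increasing.
function xForY(xs, ys, targetY) {
  var lo = 0, hi = 1;
  for (var i = 0; i < 50; i++) { // 50 halvings is well below float precision
    var mid = (lo + hi) / 2;
    if (cubicAt(ys[0], ys[1], ys[2], ys[3], mid) < targetY) lo = mid;
    else hi = mid;
  }
  return cubicAt(xs[0], xs[1], xs[2], xs[3], (lo + hi) / 2);
}
```

For the question's curve that would be `xForY([0, .25, .25, 1], [0, .1, 1, 1], 0.5)`. Technically this still loops, but a fixed 50 iterations with no early exit, which is usually what "avoid loops" concerns boil down to in practice.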
Thanks again to Mike's help, we found the fastest way to do this. I put this function together; it takes 0.28ms on average:
function getValOnCubicBezier_givenXorY(options) {
/*
options = {
cubicBezier: {xs:[x1, x2, x3, x4], ys:[y1, y2, y3, y4]};
x: NUMBER //this is the known x, if provide this must not provide y, a number for x will be returned
y: NUMBER //this is the known y, if provide this must not provide x, a number for y will be returned
}
*/
if ('x' in options && 'y' in options) {
throw new Error('cannot provide known x and known y');
}
if (!('x' in options) && !('y' in options)) {
throw new Error('must provide EITHER a known x OR a known y');
}
var x1 = options.cubicBezier.xs[0];
var x2 = options.cubicBezier.xs[1];
var x3 = options.cubicBezier.xs[2];
var x4 = options.cubicBezier.xs[3];
var y1 = options.cubicBezier.ys[0];
var y2 = options.cubicBezier.ys[1];
var y3 = options.cubicBezier.ys[2];
var y4 = options.cubicBezier.ys[3];
var LUT = {
x: [],
y: []
}
for(var i=0; i<100; i++) {
var t = i/100;
LUT.x.push( (1-t)*(1-t)*(1-t)*x1 + 3*(1-t)*(1-t)*t*x2 + 3*(1-t)*t*t*x3 + t*t*t*x4 );
LUT.y.push( (1-t)*(1-t)*(1-t)*y1 + 3*(1-t)*(1-t)*t*y2 + 3*(1-t)*t*t*y3 + t*t*t*y4 );
}
if ('x' in options) {
var knw = 'x'; //known
var unk = 'y'; //unknown
} else {
var knw = 'y'; //known
var unk = 'x'; //unknown
}
for (var i = 0; i < 99; i++) { // LUT has indices 0..99, so stop at 98 to read i+1
    var lo = LUT[knw][i], hi = LUT[knw][i + 1];
    // bracket the known value (handles increasing or decreasing segments)
    if ((options[knw] >= lo && options[knw] <= hi) ||
        (options[knw] <= lo && options[knw] >= hi)) {
      // proper linear interpolation between the two bracketing samples
      var frac = (options[knw] - lo) / (hi - lo);
      return LUT[unk][i] + frac * (LUT[unk][i + 1] - LUT[unk][i]);
    }
  }
}
var ease = { //cubic-bezier(0.25, 0.1, 0.25, 1.0)
xs: [0, .25, .25, 1],
ys: [0, .1, 1, 1]
};
var linear = {
xs: [0, 0, 1, 1],
ys: [0, 0, 1, 1]
};
//console.time('calc');
var x = getValOnCubicBezier_givenXorY({y:.5, cubicBezier:linear});
//console.timeEnd('calc');
//console.log('x:', x);
I have a set number of color palettes (8), each with 5 colors. The goal is to process an image with canvas and determine which color palette is the closest match.
At the minute I am getting the average RGB value from the palette and doing the same with the source image, before converting both to LAB and using CIE1976 to calculate the color difference. The closest match is the smallest distance.
This works to an extent, but many of the images I'm testing match two particular palettes. Is there a better way to calculate the most relevant palettes for an image?
So I've changed it to work with histograms. I'll put some of the code below but basically I'm:
Creating a 3D RGB histogram from the selected image, splitting rgb values into one of 8 banks, (8*8*8) so 512.
Flattening the histogram to create a single 512 array.
Normalizing the values by dividing by the total pixels in the image.
I do the same for the color palettes, creating a flat 512 histogram.
Calculating the chi-squared distance between the two histograms to find the closest color palette.
With my color palettes only having 5 colors, their histograms are quite sparse. Would this be an issue when comparing histograms with chi-squared?
This is how I create the flat histogram for the images to be analysed.
var canvas = document.createElement('canvas'),
ctx = canvas.getContext('2d'),
imgWidth = this.width,
imgHeight = this.height,
totalPixels = imgWidth * imgHeight;
ctx.drawImage(this, 0, 0, this.width, this.height);
var data = ctx.getImageData(0, 0, imgWidth, imgHeight).data;
var x, y, z, histogram = new Float64Array(512);
for(x=0; x<imgWidth; x++){
for(y=0; y<imgHeight; y++){
// each pixel spans 4 array elements (RGBA), so scale the index by 4
var index = (imgWidth * y + x) * 4;
var rgb = [data[index], data[index+1], data[index+2]];
// put into relevant bank (clamp so a value of 255 lands in the top bin)
var xbin = Math.min(7, Math.floor((rgb[0]/255)*8));
var ybin = Math.min(7, Math.floor((rgb[1]/255)*8));
var zbin = Math.min(7, Math.floor((rgb[2]/255)*8));
histogram[ (ybin * 8 + xbin) * 8 + zbin ]++;
}
}
// normalize values.
for(var i=0; i<512; i++) {
histogram[i] /= totalPixels;
}
This is how I am creating the histograms for the color palettes. The colors are just stored in an array of RGB values, each palette has 5 colors.
var pals = [];
palettes.forEach(function(palette){
var paletteH = new Float64Array(512);
palette.forEach(function(color){
// clamp so a channel value of 255 lands in the top bin
var xbin = Math.min(7, Math.floor((color[0]/255)*8));
var ybin = Math.min(7, Math.floor((color[1]/255)*8));
var zbin = Math.min(7, Math.floor((color[2]/255)*8));
paletteH[ (ybin * 8 + xbin) * 8 + zbin ] ++;
});
for(var i=0; i<512; i++) { paletteH[i] /= 5; }
pals.push(paletteH);
});
To calculate the chi-squared distance I'm looping through each palette, getting its distance to the image histogram; the smallest distance should be the most similar.
for(var p = 0; p<pals.length; p++){
var result = 0;
for(var i = 0; i < 512; i++) {
        var a = histogram[i], b = pals[p][i];
        result += 0.5 * (Math.pow(a - b, 2) / (a + b + 1e-10));
    }
console.log(result);
}
This works, but the results seem wrong. For example I'll analyse an image of a forest scene expecting it to result in the green color palette, but it will return another. I'd appreciate any guidance at all.
You need to use a least-squares difference between your palette color and sample color.
Also, you need to do this for each channel: R, G, B and possibly A.
It would look something like this (pseudo code in [...]):
var min = 999999;
var paletteMatch;
[loop sample colors] {
[loop palette colors] {
float lsd = (Math.pow(paletteR - sampleR, 2) + [green] + [blue]) / 3;
if (lsd < min) {
min = lsd;
paletteMatch = currentPaletteInThisLoop;
}
}
[award a point for paletteMatch for this sample color]
}
[which palette has the most points?]
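The pseudocode above can be made concrete as follows. This is a sketch, with illustrative names and palettes given as arrays of [r, g, b] triplets; each sample pixel votes for the palette that contains its nearest color:

```javascript
// Return the index of the palette that wins the most per-pixel votes.
// palettes: array of palettes, each an array of [r, g, b] colors.
// samplePixels: array of [r, g, b] pixels sampled from the image.
function closestPalette(palettes, samplePixels) {
  var votes = palettes.map(function () { return 0; });
  samplePixels.forEach(function (px) {
    var min = Infinity, paletteMatch = 0;
    palettes.forEach(function (palette, pi) {
      palette.forEach(function (col) {
        // mean squared difference across the three channels
        var lsd = (Math.pow(col[0] - px[0], 2) +
                   Math.pow(col[1] - px[1], 2) +
                   Math.pow(col[2] - px[2], 2)) / 3;
        if (lsd < min) { min = lsd; paletteMatch = pi; }
      });
    });
    votes[paletteMatch]++; // award a point for this sample color
  });
  // which palette has the most points?
  return votes.indexOf(Math.max.apply(null, votes));
}
```

With a mostly-green set of sample pixels and one reddish and one greenish palette, the greenish palette's index wins the vote.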
I'm trying to implement a ColorPicker using Canvas, just for fun, but I seem lost, as my browser freezes for a while when it loads due to all these for loops.
I'm adding the screenshot of the result of this script:
window.onload = function(){
colorPicker();
}
function colorPicker(){
var canvas = document.getElementById("colDisp"),
frame = canvas.getContext("2d");
var r=0,
g=0,
b= 0;
function drawColor(){
for(r=0;r<255;r++){
for(g=0;g<255;g++){
for(b=0;b<255;b++){
frame.fillStyle="rgb("+r+","+g+","+b+")";
frame.fillRect(r,g,1,1);
}
}
}
}
drawColor();
Currently, I only want a solution to the freezing problem (with a better algorithm); also, it's not displaying the BLACK and GREY colors.
Please someone help me.
Instead of calling fillRect for every single pixel, it might be a lot more efficient to work with a raw RGBA buffer. You can obtain one using context.getImageData, fill it with the color values, and then put it back in one go using context.putImageData.
Note that your current code overwrites each single pixel 255 times, once for each possible blue value. The final pass on each pixel leaves blue at 254, so you see no grey and black in the output.
Finding a good way to map all possible RGB values to a two-dimensional image isn't trivial, because RGB is a three-dimensional color-space. There are a lot of strategies for doing so, but none is really optimal for any possible use-case. You can find some creative solutions for this problem on AllRGB.com. A few of them might be suitable for a color-picker for some use-cases.
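As one concrete (and entirely optional) layout choice among those strategies: mapping hue along x and lightness along y gives you a picker that does include black, white, and the greys in between. The sketch below fills a raw RGBA buffer, as suggested above, with no per-pixel fillRect calls; hslToRgb is the standard HSL-to-RGB conversion, and the function names are illustrative:

```javascript
// Standard HSL -> RGB conversion. h in [0, 360), s and l in [0, 1].
function hslToRgb(h, s, l) {
  var c = (1 - Math.abs(2 * l - 1)) * s;          // chroma
  var x = c * (1 - Math.abs((h / 60) % 2 - 1));
  var m = l - c / 2;
  var rgb = h < 60 ? [c, x, 0] : h < 120 ? [x, c, 0] : h < 180 ? [0, c, x]
          : h < 240 ? [0, x, c] : h < 300 ? [x, 0, c] : [c, 0, x];
  return rgb.map(function (v) { return Math.round((v + m) * 255); });
}

// Fill an RGBA buffer: hue varies along x, lightness along y
// (white at the top row, black at the bottom row).
function makePickerBuffer(width, height) {
  var data = new Uint8ClampedArray(width * height * 4);
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var rgb = hslToRgb(x / width * 360, 1, 1 - y / (height - 1));
      var k = (y * width + x) * 4;
      data[k] = rgb[0]; data[k + 1] = rgb[1]; data[k + 2] = rgb[2];
      data[k + 3] = 255;
    }
  }
  return data; // in a browser: ctx.putImageData(new ImageData(data, width), 0, 0)
}
```

This touches each pixel exactly once, so there is no freezing, and the lightness axis restores the missing blacks and greys.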
If you want to fetch the rgba of the pixel under the mouse, you must use context.getImageData.
getImageData returns an array of pixels.
var pixeldata=context.getImageData(0,0,canvas.width,canvas.height);
Each pixel is defined by 4 sequential array elements.
So if you have gotten a pixel array with getImageData:
// first pixel defined by the first 4 pixel array elements
pixeldata[0] = red component of pixel#1
pixeldata[1] = green component of pixel#1
pixeldata[2] = blue component of pixel#1
pixeldata[3] = alpha (opacity) component of pixel#1
// second pixel defined by the next 4 pixel array elements
pixeldata[4] = red component of pixel#2
pixeldata[5] = green component of pixel#2
pixeldata[6] = blue component of pixel#2
pixeldata[7] = alpha (opacity) component of pixel#2
So if you have a mouseX and mouseY then you can get the r,g,b,a values under the mouse like this:
// get the offset in the array where mouseX,mouseY begin
var offset=(imageWidth*mouseY+mouseX)*4;
// read the red,blue,green and alpha values of that pixel
var red = pixeldata[offset];
var green = pixeldata[offset+1];
var blue = pixeldata[offset+2];
var alpha = pixeldata[offset+3];
Here's a demo that draws a colorwheel on the canvas and displays the RGBA under the mouse:
http://jsfiddle.net/m1erickson/94BAQ/
A way to go, using .createImageData():
window.onload = function() {
var canvas = document.getElementById("colDisp");
var frame = canvas.getContext("2d");
var width = canvas.width;
var height = canvas.height;
var imagedata = frame.createImageData(width, height);
var index, x, y;
for (x = 0; x < width; x++) {
for (y = 0; y < height; y++) {
index = (y * width + x) * 4; // row-major: y selects the row, x the column
imagedata.data[index + 0] = x;
imagedata.data[index + 1] = y;
imagedata.data[index + 2] = x + y - 255; // negative values clamp to 0
imagedata.data[index + 3] = 255;
}
}
frame.putImageData(imagedata, 0, 0);
};
http://codepen.io/anon/pen/vGcaF