I am trying to create a macro in Photoshop that will resize my canvas to 4x the size of the existing image, leaving the original image as the upper left quadrant.
Currently I am doing it by hand, using the crop tool and pulling the right corner down on each photo until it is approximately a 4x larger square.
The images I am modifying are all squares and I want to create a 4x bigger square leaving the original square as the upper left quadrant.
Effectively it would be like resizing the canvas, pegging the image to the upper left and doubling the two dimensions.
Is this possible, via JavaScript or any other method?
Thanks
Corey
Welcome to Stack Overflow
You can adjust the canvas size with a Photoshop script. Scripts come in three flavours: VBScript, AppleScript, or JavaScript.
Here's something to get you started
// Call the source doc
var srcDoc = app.activeDocument;
// get original width and height
var w = srcDoc.width.value;
var h = srcDoc.height.value;
// double each dimension (four times the area) with the anchor in the top left,
// so the original image becomes the upper left quadrant
srcDoc.resizeCanvas(w * 2, h * 2, AnchorPosition.TOPLEFT);
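If you want to run this over a whole folder of images rather than just the active document, a rough batch sketch could look like the following; the folder path and file mask are placeholders, and saving over the originals is my assumption, not something you asked for:
// process every JPEG in a folder (path and mask are illustrative)
app.displayDialogs = DialogModes.NO; // avoid save dialogs while batching
var files = Folder("~/Desktop/squares").getFiles("*.jpg");
for (var i = 0; i < files.length; i++) {
    var doc = app.open(files[i]);
    // double each dimension with the anchor top left,
    // so the original image ends up as the upper left quadrant
    doc.resizeCanvas(doc.width.value * 2, doc.height.value * 2, AnchorPosition.TOPLEFT);
    doc.close(SaveOptions.SAVECHANGES);
}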
You also might want to look at the Photoshop CC JavaScript reference PDF.
I am using createjs.Bitmap to add an SVG to the stage. This is in Adobe Animate HTML5 Canvas which uses the Create.js/Easel.js frameworks. The project is using a responsive layout.
The problem is the SVG does not scale relative to the stage and the other canvas objects around it. The SVG was created in Adobe Illustrator and its size in Illustrator is 151.5px wide × 163.7px high.
var root = this;
var patientImagePath = data['patient_image_path'];
patientImage = new createjs.Bitmap(patientImagePath);
patientImage.x = 96;
patientImage.y = 36.05;
patientImage.scaleX = 0.16;
patientImage.scaleY = 0.16;
root.stage.addChild(patientImage);
'Normal' view on large monitor showing the SVG (person's face). The surrounding elements are canvas objects...
and after reducing the browser size by dragging (not zoom...).
Also tried:
patientImage = new createjs.Bitmap(patientImagePath).set({scaleX: 0.16, scaleY: 0.16});
Makes no difference.
The odd thing is that the SVG code shows:
viewBox="0 0 151.5 163.7"
which shows the correct width and height of the SVG, but if I don't apply any scale to the SVG on load in JS, it gets loaded at an enormous size (it virtually occupies the whole monitor...).
When I change the format to PNG (same dimensions), the PNG loads with createjs.Bitmap without any scaling at its original size, which is perfect. No issues and the PNG scales in relation to the other Canvas objects when the browser size is changed. Why is SVG different?
I want to use SVG, not PNG. It seems the SVG just gets scaled up significantly on load by create.js without reference to its original size in the SVG's viewBox...
So how do I get the SVG to load at its original size?
See also my logged issue on GitHub on this for create.js
https://github.com/CreateJS/EaselJS/issues/1070
Without attaching or linking your SVG file it's hard to guess (besides the one comment above).
I'll assume Adobe Illustrator (I don't have it anymore) saves the SVG with a viewBox attribute AND width and height attributes. What happens if you remove those two attributes and rely on the browser's (and create.js's) automatic resizing of the SVG?
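If you want to test that without hand-editing the file, something like the sketch below might work. All of the fetch/Blob plumbing is my own assumption, not create.js API; only patientImagePath, the positions/scales, and the stage come from your snippet:
// hypothetical sketch: load the SVG text, strip width/height, and feed
// createjs.Bitmap an object URL so only the viewBox determines sizing
fetch(patientImagePath)
  .then(function (res) { return res.text(); })
  .then(function (svgText) {
    var svg = new DOMParser().parseFromString(svgText, 'image/svg+xml').documentElement;
    svg.removeAttribute('width');
    svg.removeAttribute('height');
    var blob = new Blob([new XMLSerializer().serializeToString(svg)], { type: 'image/svg+xml' });
    var patientImage = new createjs.Bitmap(URL.createObjectURL(blob));
    patientImage.x = 96;
    patientImage.y = 36.05;
    patientImage.scaleX = 0.16;
    patientImage.scaleY = 0.16;
    root.stage.addChild(patientImage);
  });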
If you are not aware of the different preserveAspectRatio attribute values, take a look at the specification:
https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/preserveAspectRatio
Consider this simple example of a cube centered on the origin of the world. Since the camera is looking directly at it, the resulting rendered image shows the cube in the middle of the rendered 2D image and only its front face is visible. I'd like to have control over that cube's placement. I.e. I'd like to shift the rendered output up and to the left by some amount. That way, I can for example shift everything by half of the canvas's width and height and have the cube centered on the top left corner of the rendered output.
To be clear: I don't want to move the camera nor the object in the 3D world (nor the canvas). I just want the rendered result itself to shift, and I'd like to define this shift in 2D screen units rather than in 3D space. It entails that after the said shift, the sides of the cube will still not be visible — only the front face as it is currently. It also entails that if I shift the output to the left, some geometry that's on the right side of the view but previously out of the frame would now shift into view and get rendered.
In some 3D software I've encountered the ability to do this by modifying the camera's X and Y "center shift". Maybe in three.js I'd have to do it by applying a transformation to the camera or to the renderer? I'm not familiar enough with the library to know where to dig.
There's no relevant code to share, but StackOverflow won't let me submit this question without some code ;)
You offset the camera using a pattern like so:
var fullWidth = window.innerWidth;
var fullHeight = window.innerHeight;
var xPixels = 600;
var yPixels = 200;
camera.setViewOffset ( fullWidth, fullHeight, xPixels, yPixels, fullWidth, fullHeight );
To undo it, call
camera.clearViewOffset();
See the docs for more info about this method and multi-monitor setups. It works for OrthographicCamera, too.
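For the specific shift you describe (the cube ending up centered on the top-left corner of the output), something like this should do it; the full-window sizes are an assumption on my part:
// shift the rendered output up and to the left by half the canvas size,
// so what was at the centre of the frame lands on the top-left corner
var w = window.innerWidth;
var h = window.innerHeight;
camera.setViewOffset(w, h, w / 2, h / 2, w, h);
// camera.clearViewOffset() puts it back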
three.js r.84
Why don't you just use canvas translate?
Just do something like this:
// adjust for camera
ctx.translate(-this.camera.x, -this.camera.y);
// render scene here
// end of camera view
ctx.translate(this.camera.x, this.camera.y);
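If you go this route (note it applies to a 2D canvas context, not the WebGL renderer), a save/restore pair is an equivalent pattern that undoes the translate for you:
// equivalent pattern: restore() undoes the translate automatically
ctx.save();
ctx.translate(-this.camera.x, -this.camera.y);
// render scene here
ctx.restore();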
I have an image that was changed through JavaScript, and that image is placed on top of a background image. However, when I look at it on the site you can see a clear square where the image was placed.
I wanted to know if there is a way to fade the image's edges and have the background come through, making it look like one image.
Side note: I also wish to avoid jQuery at this time; if possible I would like to achieve this through plain JavaScript.
Here you can see the site page: http://imgur.com/a/vC9VV
JavaScript code:
// attempting to move the images only
var y = document.getElementById("pic2");
// Sorted by pathing the image to the folder ( can use .. to get in the folder)
y.setAttribute("src", "../images/water_image1.png");
y.style.cssFloat = 'right';
// change the images to the same size as the replacing pic
y.style.width = "400px";
y.style.height = "250px";
// This is sorting out the alignment with the text
y.style.margin = "110px 20px 10px";
y.style.opacity = "0.8";
You could loop through each pixel of the image and set its alpha channel based on the distance to the closest edge of the image, then render the result on a <canvas>.
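A minimal sketch of that idea, assuming the image has already been drawn onto a same-origin canvas; the function name and the fade parameter (the feathered border width in pixels) are illustrative:
function fadeEdges(canvas, fade) {
  var ctx = canvas.getContext('2d');
  var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  var px = imageData.data;
  for (var y = 0; y < canvas.height; y++) {
    for (var x = 0; x < canvas.width; x++) {
      // distance to the nearest edge of the canvas
      var d = Math.min(x, y, canvas.width - 1 - x, canvas.height - 1 - y);
      if (d < fade) {
        // scale the alpha channel down towards the edge
        px[(y * canvas.width + x) * 4 + 3] *= d / fade;
      }
    }
  }
  ctx.putImageData(imageData, 0, 0);
}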
For easier but less generic approaches, take a look at this.
I have a super simple need for OCR.
My application allows creating an image from text. It's very simple. People choose a font face, bold or not, and a size.
So they get outputs like this, ignoring the border:
I wanted to create a very simple OCR to read these. I thought of this approach:
In the same way I generate an image for the message, I could generate an image for each character. Then I go through and try to match each character image to the black occurrences on the canvas. Is this the right approach?
The method I use to draw the element to an image is the copy-paste example here: MDN :: Drawing DOM objects into a canvas
Ok, another couple of tries...
Another method that's simpler than OCR: use Steganography to embed the text message as part of the image itself. Here's a script that uses the alpha channel of an image to store text: http://www.peter-eigenschink.at/projects/steganographyjs/index.html
You can try this "home brewed" OCR solution...but I have doubts about its effectiveness.
1. Use the clipping form of context.drawImage to draw just the message-text area of your image onto the canvas.
2. Use context.getImageData to grab the pixel information.
3. Examine each vertical column starting from the left until you find an opaque pixel (this is the left side of the first letter).
4. Continue examining each vertical column until you find a column with all transparent pixels (this is the right side of the first letter).
5. Resize a second canvas to exactly contain the discovered letter and drawImage just that first letter onto the second canvas.
6. Set globalCompositeOperation='destination-out' so that any new drawing will erase any existing drawing where the new and old overlap.
7. fillText the letter "A" on the second canvas.
8. Use context.getImageData to grab the pixel information on the second canvas.
9. Count the opaque pixels on the second canvas (a sketch of steps 6-9 follows this list).
10. If the opaque pixel count is high, then you probably haven't matched the letter A, so repeat steps 5-9 with the letter B.
11. If the opaque pixel count is low, then you may have found the letter A.
12. If the opaque pixel count is medium-low, you may have found the letter A but the two A's are not quite aligned. Repeat steps 5-9, but offset the A in step 7 by 1 pixel horizontally or vertically. Continue offsetting the A in 1-pixel steps and see if the opaque pixel count becomes low.
13. If step 12 doesn't produce a low pixel count, continue with the letters B, C, etc. and repeat steps 5-9.
14. When you're done discovering the first letter, go back to step 1 and draw only the message-text with an offset that excludes the first letter.
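Here is a hedged sketch of the comparison core (steps 6-9 above). The names letterCanvas, font, and the offsets are illustrative, and the "low count means a good match" logic is simply the heuristic described in the list:
function mismatchScore(letterCanvas, candidate, font, dx, dy) {
  // work on a copy so the isolated-letter canvas is left untouched
  var copy = document.createElement('canvas');
  copy.width = letterCanvas.width;
  copy.height = letterCanvas.height;
  var ctx = copy.getContext('2d');
  ctx.drawImage(letterCanvas, 0, 0);
  // destination-out erases existing pixels wherever the candidate glyph overlaps them
  ctx.globalCompositeOperation = 'destination-out';
  ctx.font = font;
  ctx.textBaseline = 'top';
  ctx.fillText(candidate, dx || 0, dy || 0);
  // count the opaque pixels that survived the erase
  var px = ctx.getImageData(0, 0, copy.width, copy.height).data;
  var count = 0;
  for (var i = 3; i < px.length; i += 4) {
    if (px[i] > 0) count++;
  }
  return count; // a low count suggests `candidate` matches the letter
}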
OCR is always complex and often inaccurate.
I hate to wave you off of a solution, but don't use OCR for your purpose.
Simple and effective solution...
Put your message in the image's file name.
Solution found - GOCR.js - https://github.com/antimatter15/gocr.js/tree/d820e0651cf819e9649a837d83125724a2c1cc37
download gocr.js
decide if you want to run it from a Web Worker or the main thread
worker
In the worker put this code:
importScripts('gocr.js');
var text = GOCR(aImgData);
where aImgData is the image data: take an image, load it, draw it to a canvas, then send the pixel data to the web worker (see the mainthread method below).
mainthread
<script src="gocr.js"></script>
<script>
var img = new Image();
img.onerror = function() {
    console.error('failed');
};
img.onload = function() {
    var can = document.createElement('canvas');
    can.width = img.width;
    can.height = img.height;
    var ctx = can.getContext('2d');
    ctx.drawImage(img, 0, 0);
    // to use this in a worker, do ctx.getImageData(0, 0, img.width, img.height),
    // then transfer the image data to the WebWorker
    var text = GOCR(can);
};
img.src = 'message.png'; // the generated message image (path is illustrative)
</script>
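Putting the two together, here is a rough sketch of the worker wiring described above. The file name ocr-worker.js and the message passing are my assumptions, and GOCR accepting ImageData is taken from the answer rather than verified:
// main thread: send the pixel data to the worker
var worker = new Worker('ocr-worker.js'); // hypothetical file name
worker.onmessage = function (e) {
  console.log('recognized text:', e.data);
};
worker.postMessage(ctx.getImageData(0, 0, img.width, img.height));

// ocr-worker.js
importScripts('gocr.js');
onmessage = function (e) {
  postMessage(GOCR(e.data)); // pass the ImageData straight to GOCR
};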
TLDR;
given this svg element:
<image width="30" height="48" x="3.75" y="6" href="http://some/image.jpg">
How can I retrieve the image's actual height and width (seeing as it is defined in part by the image's aspect ratio)?
I have a d3js script that draws a bunch of <rect>s and a bunch of <image>s.
Now stuff is laid out so that the images fall inside the rects, as though the rects were borders. There is other stuff inside the rects too.
Now each of these images has its own unique and special aspect ratio, and this bothers me because it means each of the rects has a different amount of blank space. This is untidy.
To avoid this I want to load the images then get the actual image dimensions and then adjust the positions and sizes of the surrounding goodies. The crux of the matter is getting the actual image sizes. How can I do this?
I've googled around a bit and spent some quality time with the debugging console to no avail. Just nothing comes up. I'll keep hunting but an answer would be really nice.
First, set the width attribute only, keep height unspecified.
Then, call getBBox on the image element.
Note that the image's box is only available after it has been properly rendered, hence the setTimeout:
const image = parent.append('image').attr('xlink:href', url).attr('width', 100);
setTimeout(() => {
const box = image.node().getBBox();
const ratio = box.width / box.height;
}, 0);
This is the best I can come up with. I would be surprised and horrified if there isn't an easier way. Anyway:
add a normal html <img> element with a suitable src.
use js to fetch the image height and width
remove the extra html
Ugly but it works...
Here's some code:
var oImage = document.createElement("img");
oImage.onload = function() {
    // width/height are only reliable once the image has loaded
    var iWidth = oImage.width;
    var iHeight = oImage.height;
    oImage.remove();
    // use iWidth / iHeight here
};
oImage.setAttribute("src", sUrl);
document.body.appendChild(oImage);
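A slightly tidier variation on the same idea (my own suggestion, not part of the answer above): skip the DOM insertion entirely and read the intrinsic size from naturalWidth/naturalHeight once the image has loaded:
var probe = new Image();
probe.onload = function() {
    // intrinsic pixel size of the file, independent of any layout
    var ratio = probe.naturalWidth / probe.naturalHeight;
    // adjust the surrounding <rect> and <image> attributes here
};
probe.src = sUrl; // same URL as the SVG <image> href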