drawImage Houdini CSS Paint API - javascript

I am trying out Houdini's CSS Paint API on Chrome 70 and I can't get this example to work:
https://github.com/w3c/css-houdini-drafts/blob/master/css-paint-api/EXPLAINER.md#drawing-images
I am passing an Image via custom property like this:
--my-image: url("bridge.jpg");
and I use the CSS Properties and Values API (enabled via the Experimental Web Platform features flag) to register my image property:
CSS.registerProperty({
  name: '--my-image',
  syntax: '<image> | none',
  initialValue: 'none',
  inherits: false,
});
Then I am trying to draw it in my worklet:
class MyWorklet {
  static get inputProperties() {
    return ['--my-image'];
  }
  paint(ctx, geom, properties) {
    const img = properties.get('--my-image');
    console.log(img);
    ctx.drawImage(img, 10, 10, 150, 180);
  }
}
registerPaint('myworklet', MyWorklet);
To display the worklet I made a simple 500×500 div and added the style:
background-image: paint(myworklet);
The console tells me that img is a CSSImageValue, so it seems to work to some extent, but the image does not get drawn onto the background of the div.
Has anyone managed to get this to work, or are some of the features required for this simply not implemented yet?

Drawing images in a paint worklet is not implemented in Chrome so far. It should be available in the latest Safari Technology Preview (v72), but it requires inline code; see https://vitaliy-bobrov.github.io/blog/trying-paint-wroklet-in-safari-tp/

Related

Get Video Width and Height from Azure Media Player

I have videos I'm streaming from Azure Media Services that are being rendered in my web page using the Azure Media Player API.
I don't know ahead of time what the videos dimensions are (and they will vary). My issue is that when I play the video there is a black border (either at top/bottom or at left/right) around the video if I don't create the video element with the correct ratio to match the video. See for example the image below, notice the large black borders on the left and right of the video. I'd like to get the video size so I can correct the dimensions and get rid of the border.
The Azure Media Player API seems to say I can get the videoWidth and videoHeight. But I'm not sure (in Javascript) what object to get those values from.
In my script below, when I console.log the player object I don't see videoWidth or videoHeight as part of the player object.
let myOptions = {
  controls: true,
  autoplay: true,
  logo: { enabled: false }
};
myPlayer = amp(video, myOptions, () => {
  console.log(myPlayer);
});
myPlayer.src([{
  src: "<manifestURL>",
  type: "<type>"
}]);
The following screenshot is what gets logged. Unless I'm missing something, I don't see the videoWidth or videoHeight values.
Any assistance is greatly appreciated.
Actually, videoWidth/videoHeight are functions, not properties.
Also, you should use a regular function (not an arrow function) as the ready handler, so that the this keyword refers to the player.
For example:
amp(video, options, function () {
  console.log(this.videoWidth());
});
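Once you have the dimensions, removing the black bars is just a matter of resizing the player container to match the video's aspect ratio. A minimal sketch of the math (fitToWidth is a hypothetical helper, not part of the Azure Media Player API):

```javascript
// Hypothetical helper: given a fixed container width, compute the
// height that matches the video's aspect ratio, so the player needs
// no letterboxing above/below the video.
function fitToWidth(containerWidth, videoWidth, videoHeight) {
  return Math.round(containerWidth * (videoHeight / videoWidth));
}

// e.g. a 1920x1080 video in a 640px-wide container needs a 360px-tall player
const height = fitToWidth(640, 1920, 1080); // 360
```

You would call this from the ready handler (or the loadedmetadata event), then apply the result to the player element's height.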

Puppeteer - How to change orientation of some pages in a document?

I'm using Puppeteer to create a 30-page long pdf and a few of the pages need to be landscape orientated. How can I specify it only for page x and page y ?
Pseudo-classes for @page
According to the CSS Paged Media spec, you can set a different orientation for some pages using CSS:
@page :pseudo-selector {
  size: landscape;
}
The acceptable and working pseudo-classes (which I tested with Puppeteer and Google Chrome) include:
:blank
:first
:left
:right
PS: At the time of this answer, other selectors like :nth-child and the page identifiers mentioned in the draft do not work on Chrome version 73.
Alternative way
The only other way to deal with this is to print pages separately and then merge them later on. You can print specific pages with pageRanges,
page.pdf({pageRanges: '1-5', path: 'first.pdf'})
Then use a package like pdf-merge to merge the PDF files.
const PDFMerge = require('pdf-merge');
PDFMerge(['first.pdf', 'second.pdf'], {output: `${__dirname}/3.pdf`})
Well, according to caniuse, you can use the page property with Chrome 85 and up.
So you can use @page followed by a named page name, in combination with the page property, to set a different orientation (or any other properties) for any page you want.
example:
@page rotated {
  size: landscape;
}
.rotated_class {
  page: rotated;
}
Page example, just right click -> print to see it in action
EDIT
Well, after writing this solution I found a major problem with the @page approach. If you try to print the example page I linked just above, you'll see that the viewport of the rotated pages is limited to the width of the first page if the first page is portrait (and limited in height if the first page is landscape).
The solution I ended up using for the page rotation is a bit whack and needs a second library, but it works :
Set a different size on the pages you want rotated (here I shrink it by 0.01 inch, which is about 1 pixel in the end) and assign it to the corresponding pages:
@page rotated {
  size: 8.49in 11in;
}
.rotated_class {
  page: rotated;
}
After the Puppeteer generation, I use pdf-lib to open the pdf and rotate every page smaller than a normal page (in my case I check whether the width is smaller than 612 points):
const pages = pdf.getPages();
for (let i = 0; i < pages.length; ++i) {
  const pageToRotate = pages[i];
  if (pageToRotate.getWidth() < 612) {
    pageToRotate.setRotation(degrees(-90));
  }
}
pdfBuffer = Buffer.from(await pdf.save());
The page size is the only information I found that could be "transferred" from the CSS to pdf-lib to flag a page for rotation, hence this solution.
Hopefully the "viewport bug" will be fixed in the future and we'll be able to use the @page landscape/portrait property.
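As an aside, the 612 threshold comes from PDF's default user-space unit of 1/72 inch: a US Letter page is 8.5in wide, i.e. 612 points, and the shrunken 8.49in pages land just below that. A tiny sketch making the conversion explicit (inchesToPoints is a hypothetical helper, not part of pdf-lib):

```javascript
// PDF user-space units default to 1/72 inch ("points").
const POINTS_PER_INCH = 72;

function inchesToPoints(inches) {
  return inches * POINTS_PER_INCH;
}

// A full-width US Letter page is 8.5in -> 612pt, while the shrunken
// "rotated" pages are 8.49in -> 611.28pt, so they fall under a
// 612pt threshold and get picked up by the rotation loop above.
const letterWidth = inchesToPoints(8.5);   // 612
const shrunkWidth = inchesToPoints(8.49);  // 611.28
```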
You can also set the landscape property to true in the options, which makes the whole PDF landscape:
var options = {
  ...
  landscape: true
};
page.pdf(options);

Slidein element is visible before the animation

I'm trying to recreate the animations when loading from this website:
https://uchuhimo.me
I think they are using velocity.js to do the animations.
I tried to recreate some of this and kind of succeeded (though I'm not sure I'm doing it properly). There is one problem, though: the elements are visible and then they animate (slide in), whereas they should be hidden first and become visible as they slide in (like on the website). I looked into the documentation and I think that should be the expected behaviour, but in my example it does not work like that.
https://codepen.io/pokepim/pen/EpyKWR
The sequence of animations I run is the following; it should imitate the animations on the website I linked above:
var loading = [
  { elements: $(".logo-line-before"), properties: { width: '100%' } },
  { elements: $(".logo-line-after"), properties: { width: '100%' }, options: { sequenceQueue: false } },
  { elements: $(".ttl"), properties: "transition.slideDownIn" },
  { elements: $(".ui.top.vertical.segment"), properties: "transition.slideDownBigIn" }
];
$.Velocity.RunSequence(loading);
That's all using Velocity v1, so there's limited help available (it's not supported any more). However, you do need to pre-set the elements to opacity: 0; there's no need to change the display property on them, as it's just a "get it visible" animation on an element that should still take up space.
I'd suggest simply adding a style="opacity:0;" on each of those elements in the HTML source and going from there.

How to prevent partially loaded images from displaying?

If you have, let's say, a 3MB image in an img tag, it will take a few seconds to load. While the image is loading, the browser sort of "prints" it: it shows the top part first, then the middle, and then the bottom. How do I prevent this from happening?
I'd rather have the image hidden and shown after a second or two, when it is fully loaded.
One way would be to give them a class that gives them opacity: 0 so they don't show:
<img src="/path/to/image" class="loading">
And in CSS:
.loading {
  opacity: 0;
}
In head, we override that if JavaScript is disabled (so we're not unfriendly to non-JavaScript visitors):
<noscript>
  <style>
    .loading {
      opacity: 1;
    }
  </style>
</noscript>
...and then in code at the bottom of your page, find all your images and remove the class when they've loaded, and...(see comments):
(function() {
  // Get an array of the images
  var images = Array.prototype.slice.call(document.querySelectorAll("img.loading"));
  // Hook their load and error events, even though they may have already fired
  images.forEach(function(image) {
    image.addEventListener("load", imageDone.bind(null, image));
    image.addEventListener("error", imageDone.bind(null, image)); // Could handle errors differently
  });
  // Check to see if any images are already complete
  checkImages();
  function imageDone(img) {
    img.classList.remove("loading");
    images = images.filter(function(entry) { return entry != img; });
  }
  function checkImages() {
    images.forEach(function(image) {
      if (image.complete) {
        imageDone(image);
      }
    });
    if (images.length) {
      // Check back in a second
      setTimeout(checkImages, 1000);
    }
  }
})();
That's a belt-and-braces approach. It proactively checks to see if images have finished loading, and also reactively handles the load and error event of images. In theory, we shouldn't need the setTimeout, and you might do testing without it, but...
Notice how once an image is complete, we remove the class so it's visible.
Old school:
To avoid the partial display of an image as it renders, save your large images as progressive, rather than baseline jpgs.
a progressive jpg renders as a series of scans of increasing quality
a baseline jpg renders top to bottom (what you described as “printing”).
The progressive option is considered more user friendly than both the sudden appearance of the image or the slow top to bottom rendering you dislike. The progressive file variant can even be smaller than its baseline counterpart.
For more about this read: The Return of the Progressive JPEG.
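If you want to check programmatically which variant a file is, a JPEG's start-of-frame marker distinguishes the two: baseline files use SOF0 (0xFF 0xC0) and progressive files use SOF2 (0xFF 0xC2). A rough sketch over the raw bytes (illustrative only: it doesn't skip embedded marker segments, so it can misread unusual files):

```javascript
// Scan a JPEG's bytes for the start-of-frame marker to tell
// baseline (SOF0, 0xFF 0xC0) apart from progressive (SOF2, 0xFF 0xC2).
function isProgressiveJpeg(bytes) {
  for (let i = 0; i < bytes.length - 1; i++) {
    if (bytes[i] === 0xff) {
      if (bytes[i + 1] === 0xc0) return false; // baseline SOF0
      if (bytes[i + 1] === 0xc2) return true;  // progressive SOF2
    }
  }
  return false; // no SOF marker found
}
```

In Node you would call it with the Buffer from fs.readFile; in the browser, with a Uint8Array from an ArrayBuffer.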
I think everyone here gave you some good answers, so I just want to add one thing: 3MB is fairly big for a web image. Don't use something that large for a logo or layout image; reserve that much pixel data for cases where you want to preserve the quality of a nice, large-scale real-life photo (or provide a download of a high-quality graphic). Beyond that, a Google search turns up tons of image-loading solutions; for larger images, a jQuery/ajax approach works nicely.

Using javascript to show a grey-scale version of an image on mouse-over

I need a way to display a grayscale version of an image on mouseover. I've seen this implemented using the Canvas functionality of the browser but don't want to use that method as it will be a while before canvas is implemented on all browsers.
Has anyone done such a thing?
Assuming, as reko_t has commented, you can't just create grey scale versions of the images on the server for some reason, it's possible in IE using the proprietary filter CSS attribute, BasicImage with grayScale. You don't need JS to do this, it can be declared in CSS:
a {
  display: block;
  width: 80px;
  height: 15px;
  background-image: url(http://www.boogdesign.com/images/buttons/microformat_hcard.png);
}
a:hover {
  filter: progid:DXImageTransform.Microsoft.BasicImage(grayScale=1);
}
In Firefox, you could apply an SVG mask, or you could try using the canvas element.
However, the simplest solution may be to either manually create grey scale versions of your images, or do it server side with something like GD.
If you don't use canvas and don't want to rely on browser-specific features, you are going to need to generate your grayscale images on the server, either beforehand or on demand. How to do that has been answered elsewhere on Stack Overflow.
Found on the net:
HTML5 introduces the canvas object, which can be used to draw and manipulate images.
The Script:
function grayscale(image, bPlaceImage)
{
  var myCanvas = document.createElement("canvas");
  var myCanvasContext = myCanvas.getContext("2d");
  var imgWidth = image.width;
  var imgHeight = image.height;
  // You'll get a string error if you fail to specify the dimensions
  myCanvas.width = imgWidth;
  myCanvas.height = imgHeight;
  myCanvasContext.drawImage(image, 0, 0);
  // getImageData cannot be called if the image is not from the same domain.
  // You'll get a security error if you do.
  var imageData = myCanvasContext.getImageData(0, 0, imgWidth, imgHeight);
  // This loop visits every pixel and replaces its RGB channels with their average
  for (var j = 0; j < imageData.height; j++)
  {
    for (var i = 0; i < imageData.width; i++)
    {
      var index = (j * imageData.width + i) * 4;
      var red = imageData.data[index];
      var green = imageData.data[index + 1];
      var blue = imageData.data[index + 2];
      var alpha = imageData.data[index + 3];
      var average = (red + green + blue) / 3;
      imageData.data[index] = average;
      imageData.data[index + 1] = average;
      imageData.data[index + 2] = average;
      imageData.data[index + 3] = alpha;
    }
  }
  // Write the modified pixels back onto the canvas
  myCanvasContext.putImageData(imageData, 0, 0);
  if (bPlaceImage)
  {
    var myDiv = document.createElement("div");
    myDiv.appendChild(myCanvas);
    image.parentNode.appendChild(myDiv);
  }
  return myCanvas.toDataURL();
}
The usage:
<img id="myImage" src="image.gif" onload="grayscale(this, true);">
Tests Passed Using:
Firefox 3.5.4
Chrome 3.0
Safari 4.0
Tests Failed Using:
Internet Explorer 6
Internet Explorer 7
Resources:
http://www.permadi.com/tutorial/jsCanvasGrayscale/index.html
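The per-pixel averaging in the script above can also be written as a small pure function over the flat RGBA array, which makes it easy to test outside the browser (toGrayscale is a hypothetical helper, not from the linked tutorial):

```javascript
// Average the RGB channels of a flat RGBA pixel array (as found in
// ImageData.data), leaving the alpha channel untouched.
function toGrayscale(data) {
  const out = data.slice();
  for (let i = 0; i < out.length; i += 4) {
    const avg = (out[i] + out[i + 1] + out[i + 2]) / 3;
    out[i] = out[i + 1] = out[i + 2] = avg;
  }
  return out;
}

// e.g. one pixel (30, 60, 90, 255) becomes (60, 60, 60, 255)
```

In the canvas version you would call it on imageData.data and write the result back with putImageData.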
img {
mix-blend-mode: luminosity;
background: #000;
}
