Is it possible to create the same transparency effects as with Color Burn, Color Dodge, Soft Light, etc. which are available in Photoshop? I know about 'background-blend-mode' in CSS3, but its support is limited. Is there some way to make this work across the most commonly used browsers, including IE9? Maybe with some hack similar to CSS3 PIE?
You mentioned background-blend-mode, and you're right - it's one of the best methods for blending colors, but surprise, surprise, IE doesn't support it, even in Edge, which is saddening. This fantastic article lists all of the blend-mode options and their effects, including color-burn and color-dodge as you mentioned.
There's also filter, which is supported by some browsers but not Internet Explorer/Edge.
Polyfills get us closer, but not all the way.
There is already a Filter Polyfill on Github that provides IE6-9 support for filter (the other major browsers already support it).
It's tougher going for blend modes. There are polyfills available, but they often don't support IE, or support no more than one type of blend mode.
Where does that leave us? If even polyfills won't save us, then it's time to abandon the idea of propping up an unsupported feature and simply use a full-featured library, such as a jQuery plugin.
A little research turned up Blend-Mode, which appeared to work okay on Internet Explorer, although it was by no means perfect.
There is also the Pixastic library, which has many other features and did seem to work on IE9, but struggled on IE8 and below.
What's it mean?
Basically - and I know this is hardly the answer you wanted - according to my research, finding a truly IE-supportive solution at the moment isn't likely. Support for canvas blend modes is non-existent, and these shims are questionable at best.
You have a choice - use these techniques on only the modern, supporting browsers, or use a Photoshop-esque program first, then upload the images to your webserver (and lose the ability to do it on the fly, all because of Internet Explorer).
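If you go the first route, a simple feature test is enough to decide at runtime whether to use the blend mode or swap in a pre-flattened image. A rough sketch only - the element id and image URLs below are made-up placeholders, not anything from your project:

    // Rough sketch: use background-blend-mode where the style object exposes it,
    // otherwise fall back to an image that was blended in an editor beforehand.
    var hero = document.getElementById('hero');

    if ('backgroundBlendMode' in hero.style) {
      hero.style.backgroundImage = 'url(texture.png), url(photo.jpg)';
      hero.style.backgroundBlendMode = 'color-burn';
    } else {
      // IE9 and friends get the pre-blended bitmap instead.
      hero.style.backgroundImage = 'url(photo-blended.jpg)';
    }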
I hope my research here helps you out.
Related
There are many tools to work around browser prefixes, be it IDE auto-expansion for troublesome properties or making mixins with preprocessors, but why exactly do browser manufacturers NEED to implement prefixes? From what I've heard it's so they can ship their own implementation of something, but what prevents them from doing so without a prefix? Nowadays especially, almost all browsers try to follow the W3C specifications, so I doubt Safari would make, say, box-shadow draw a rainbow inside an element.
Isn't it better to leave their "solution", even if a bit buggy or lacking in functionality, unprefixed, so that when they actually roll out a working version that meets all criteria, sites written using that property can function as intended? I just really can't see how everyone decided that it's a good idea to implement them.
The idea was that authors who wanted to try out these experimental features locally could do so with the prefixed properties on shipping browsers, rather than having to deal with nightly builds or even downloading and compiling the source manually. Shipping buggy behavior unprefixed, after all, was considered akin to shipping beta software as production-quality.
What the vendors didn't see coming was that authors would take the prefixed properties, put them on production sites, and encourage other authors to do the same. As a result the prefixes spread like wildfire, to the point where vendors were too afraid of breaking sites to remove the prefixes after shipping stable, unprefixed implementations as originally intended. I mean, just look at what happened when Mozilla dropped support for -moz-opacity, a prefixed property that was relatively little used compared to today's -webkit- properties, less than 4 years ago. For perspective, -moz-opacity was unprefixed in Firefox 0.9, almost 12 years ago.
Another unfortunate result of this prefix fiasco? Opera, Microsoft, and finally Mozilla, have all reluctantly changed their CSS implementations to recognize -webkit- prefixes because WebKit was making its way into practically every mobile device and every niche browser, and authors were under the impression that WebKit was the One True Layout Engine™, and so they coded for WebKit and nothing else. Clearly we haven't learned from the IE/Netscape browser wars.
And this is why vendors have agreed not to use prefixes for experimental implementations of upcoming standards going forward. New CSS features will ship unprefixed but will not be enabled by default. For example, Firefox hides these features behind special about:config options and switches them on by default at a later date, while Chrome hides them in about:flags in a similar fashion.
In Coders at Work, Douglas Crockford discusses how bugs in browsers cause JavaScript to be a complex, clunky language, and how fixing it is a catch-22. In Beginning JavaScript with DOM Scripting and Ajax, Christian Heilmann says something similar: "[The] large variety of user agents, of different technical finesse [...] is a great danger to JavaScript."
Why doesn't JS have a breaking new version? Is there something inherent in the language design where backwards compatibility becomes a must?
Update
Why can't JavaScript run with multiple engines in parallel, similar to how .NET runs versions 2, 3, and 4 on the same machine?
Lazy copypasta at OP's request:
JavaScript is just a programming language: syntax and semantics. It has no built-in support for browsers (read: the browser DOM). You could create a JS program that runs outside of a browser. You (should) know what an API is - the DOM is just a JavaScript API for manipulating an HTML page. There are other DOM APIs in other languages (C#, Java, etc.), though they are used more for things like XML. Does that make sense?
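To make that distinction concrete, here is a small illustrative sketch (nothing below is from any specific library): the function is pure ECMAScript and runs in any engine, while the commented-out last line depends on the DOM that only a browser host provides.

    // Pure language: works in any JavaScript engine, browser or not.
    function sum(numbers) {
      var total = 0;
      for (var i = 0; i < numbers.length; i++) {
        total += numbers[i];
      }
      return total;
    }
    sum([1, 2, 3]); // 6

    // Host API: 'document' is provided by the browser's DOM, not the language,
    // so this line only makes sense inside a browser.
    // document.title = 'Hello';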
Perhaps this MDC article can clarify further.
Well a breaking change would break a lot of existing websites, which would make a lot of people very angry :)
Backwards compatibility is important because of the large number of browsers deployed and the wide variety of versions of those browsers.
If you serve a new, incompatible kind of Javascript to old browsers, they all break.
If you invent a new language that is not considered to be Javascript by existing browsers, then it doesn't work with the majority of browsers. Very few users will be willing to download a new browser just to work with your new language. So web developers have to keep writing compatible Javascript to support the majority of the users, no matter how great the new language is.
A lot of people would like to see something better than current Javascript be supported by browsers, but it just isn't going to happen any time soon. All the makers of browsers and development tools would have to support the new thing, and continue to support the old Javascript stuff too. Many interested parties just wouldn't consider the benefit to be worth the cost. Slow evolution of Javascript seems to be the only viable solution.
As a matter of fact, ECMAScript 5 is not fully backwards-compatible for the very reasons you mentioned.
Inertia.
Making a breaking change would break too many sites; no browser vendor would want to deal with all the bug reports.
And PHBs would be against targeting a new version: why should they have their developers write JavaScript for both the broken and the fixed language? Their developers will have to write it for the broken version anyway, so why bother with two implementations (which from a developer's perspective sucks too, since now they have to update, support, and debug two separate trees)?
ECMAScript 5 has a "strict" mode. I think this strict mode is intended to combat the problem you mention. Eventually you'd mark the scripts you want run by the new engine as "strict", and all others would run in an old, crufty VM, or with unoptimized code paths, or whatever (a small illustration follows below).
This is kind of like IE and Mozilla browsers having multiple "modes" for rendering websites (IE even swaps out rendering engines).
See this question about it
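As a tiny, hedged illustration of that opt-in (not tied to any particular engine): the directive only affects the code that asks for it, so legacy scripts keep their old behaviour alongside it.

    function sloppy() {
      oops = 1;          // non-strict code silently creates a global
      return oops;
    }

    function strict() {
      'use strict';
      // oops = 2;       // the same assignment would throw a ReferenceError here
      var fine = 2;
      return fine;
    }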
JavaScript has subtle differences across different browsers. This is because each browser manufacturer has a different set of responsibilities to its users to support backwards compatibility (if any). If I had to pick, I'd say the biggest barrier to the advancement of JavaScript is older versions of Internet Explorer. Due to service agreements with their users, Microsoft is contractually obliged to support older browsers. Even if other browsers cut off backwards compatibility, Microsoft will not. To be fair, Microsoft does realize how terrible their browsers are and will hopefully push IE 9.0 very hard. Despite the inconsistencies of JavaScript across different browsers, they are subtle enough to make cross-browser programming more than feasible. Abruptly cutting off backwards compatibility would make web development a nightmare; incrementally cutting off backwards compatibility for specific aspects of JavaScript is feasible.
There is much else wrong with JavaScript. You can't be fully backwards-compatible with things that were never fully compatible when they were fresh... Say, the length of the array [1,] is reported as 2 by at least older versions of Internet Explorer.
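For the curious, that quirk looks like this:

    var a = [1,];
    a.length; // 1 per the spec (and in modern browsers); old IE's JScript reported 2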
The biggest fault of JavaScript is that it comes with a tiny, incomplete and pretty much unusable standard library. That is why everyone retreats to using jQuery, Dojo, MochiKit etc. - these offer mostly functionality that should be part of some standard library included with the browsers, instead of floating around in thousands of copies and versions. It's actually what makes .NET and Java so popular: the language comes with a reasonable standard library. With C and C++, you have to dig out the nice libraries (Boost, e.g.) yourself.
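A hedged example of that gap: before ES5 there was no String.prototype.trim, so practically every library shipped its own copy of something like the following.

    // One of countless copies of the same tiny utility floating around.
    if (!String.prototype.trim) {
      String.prototype.trim = function () {
        return this.replace(/^\s+|\s+$/g, '');
      };
    }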
But other than that, the ECMAScript standard is occasionally updated.
Google is also trying to take a bold step forward and redo JavaScript in a slightly saner way. The effort is known as Dart: http://www.dartlang.org/
As far as I can tell, Dart largely uses the syntax of JavaScript minus a couple of its quirks. Apart from that, it is also nicer to the virtual machine and will thus likely run faster (unless of course you compile Dart to JavaScript and use a JavaScript VM, which is offered as a compatibility option). But of course any hardcore JavaScript nazi^W enthusiast will not like anything that claims to be better than JavaScript. Whereas for me, they don't go far enough. In particular, they still don't provide enough "classpath".
I want to determine the best javascript framework to use in order to maintain IE 6 compatibility.
Specifically I want to know which best supports IE6 - Dojo or JQuery.
I determine compatibility based on the amount of work you have to do to make the framework work with IE6 (framework may have features that need special coding to enable them to work with IE6, or there may be features that are incompatible altogether).
Are there any benchmarks, or compatibility matrices, for the various javascript frameworks that quantify the work you would have to do to maintain IE6 compatibility?
Both jQuery and DOJO claim they support IE6:
http://docs.jquery.com/Browser_Compatibility
http://o.dojotoolkit.org/support/faq/what-browsers-does-dojo-support
DOJO does seem to have quite a grandiose claim:
... 100% of the available functionality works, that accessibility is handled correctly, and that all internationalization and localization is supported. This is a very high bar, ...
And jQuery claims they test regularly in IE6.
Personally, I would let other requirements dictate which framework you use. One of the fundamental jobs for a JavaScript library is to be cross browser compatible, so any decent library is going to be good at it.
"Better" in your question indicates subjectivity, so I'd probably change this to a community wiki.
The best thing to do in each case is to look at what the libraries say they support. I know that the following frameworks handle IE6 well:
Prototype
jQuery
I don't have up-to-date personal experience with Dojo or ExtJS, but they supported IE6 well a couple of years back when I looked into them — I'd be surprised if they don't still support it (for now). (ExtJS's "learn more" page says IE6 and up; the "supported" list on the Dojo front page is not, shockingly, a link to an actual list.)
The Closure team originally said they supported IE6 (although they have no official list), but that may have changed with Google's recent decision to drop IE6 support from their web apps.
Among other things, one of the main priorities of a JavaScript library is to solve cross-browser issues. Having said that, I personally use jQuery, and yes, it smooths over the IE6 issues as well as those of later versions of IE.
I'm not a JavaScript wiz, but is it possible to create a single embeddable JavaScript file that makes all browsers standards compliant? Like a collection of all known JavaScript hacks that force each browser to interpret the code properly?
For example, IE6 does not recognize the :hover pseudo-class in CSS for anything except links, but there exists a JavaScript file that finds all references to :hover and applies a hack that forces IE6 to do it right, allowing me to use the hover command as I should.
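Roughly speaking, scripts of that kind work along these lines (the selector and class name below are purely illustrative, not taken from any specific fix): duplicate the :hover rule under a class, then toggle that class from JavaScript.

    // Stylesheet would contain e.g. both "li:hover {...}" and "li.fake-hover {...}".
    var items = document.getElementsByTagName('li');
    for (var i = 0; i < items.length; i++) {
      items[i].onmouseover = function () { this.className += ' fake-hover'; };
      items[i].onmouseout = function () {
        this.className = this.className.replace(' fake-hover', '');
      };
    }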
There is an unbelievable amount of time (and thus money) that every webmaster has to spend on learning all these hacks. Imagine if there were an open source project where all one had to do was add one line to the header embedding the code, and then be free to code their site per accepted web standards (XHTML Strict, CSS3).
Plus, this would give an incentive for web browsers to follow the standards or be stuck with a slower browser due to all the JavaScript code being executed.
So, is this possible?
Plus, this would give an incentive for web browsers to follow the standards or be stuck with a slower browser due to all the JavaScript code being executed.
Well... That's kind of the issue. Not every incompatibility can be smoothed out using JS tricks, and others will become too slow to be usable, or retain subtle incompatibilities. A classic example is the set of scripts that fake support for translucency in PNG files on IE6: they worked for simple situations, but fell apart or became prohibitively slow on pages that used such images creatively and extensively.
There's no free lunch.
Others have pointed out specific situations where you can use a script to fake features that aren't supported, or a library to abstract away differences. My advice is to approach this problem piecemeal: write your code for a decent browser, restricting yourself as much as possible to the common set of supported functionality for critical features. Then bring in the hacks to patch up the browsers that fail, allowing yourself to drop functionality or degrade gracefully when possible on older / lesser browsers.
Don't expect it to be too easy. If it was that simple, you wouldn't be getting paid for it... ;-)
Check out jQuery; it does a good job of standardizing browser JavaScript.
Along those same lines, explorercanvas brings support for the HTML5 canvas tag to IE browsers.
You can't get full standards compliance, but you can use a framework that smooths over some of the worst breaches. You can also use something called a reset style sheet.
There's a library for IE to make it act more like a standards-compliant browser: Dean Edwards' IE7.
Like a collection of all known javascript hacks that force each browser to interpret the code properly
You have two choices: read browser compatibility tables and learn each exception a browser has and create one yourself, or use available libraries.
If you want a javascript correction abstraction, you can use jQuery.
If you want a css correction abstraction, you can check /IE7/.
I usually don't like to use CSS corrections made by JavaScript. It's another layer of complexity in my code, another library that can introduce bugs into already buggy browsers. I prefer using conditional comments to target IE6, IE7 and so on, and creating separate stylesheets for each of them. This approach works and doesn't generate a lot of overhead.
EDIT: (I know that we have problems in other browsers, but since IE is the major browser out there, and we usually need really strange hacks to make it work, conditional comments are a good approach IMO).
Actually you can; there are lots of libraries that handle this issue. Since the beginning, JavaScript compliance has always been a problem for developers, and thankfully innovative ones have developed libraries to get around it...
One of them, and my favorite, is jQuery.
Before JavaScript 1.4 there was no global arguments Array, and it is impossible to implement the arguments array yourself without a highly advanced source filter. This means it is going to be impossible for the language to maintain backwards compatibility with Netscape 4.0 and Internet Explorer 4.0. So right off I can say that no, you cannot make all browsers standards compliant.
Post-Netscape, you can implement nearly all of the features in the core of the language in JavaScript itself. For example, I coded all methods of the Array object in 100% JavaScript code.
http://openjsan.org/doc/j/jh/jhuni/StandardLibrary/1.81/index.html
You can see my implementation of Array here if you go to the link and then go down to Array and then "source."
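To give a flavour of what such re-implementations look like, here is a simplified sketch (not the author's actual code - the real ES5 algorithm also handles a fromIndex argument and sparse arrays):

    // Typical shape of a pure-JS fallback for a missing Array built-in.
    if (!Array.prototype.indexOf) {
      Array.prototype.indexOf = function (searchElement) {
        for (var i = 0; i < this.length; i++) {
          if (this[i] === searchElement) {
            return i;
          }
        }
        return -1;
      };
    }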
What most of you are probably referring to is implementing the DOM objects yourself, which is much more problematic. Using VML you can implement the Canvas tag across all the modern browsers; however, you will get buggy, barely-working performance in Internet Explorer because VML is markup, which is not a good format for implementing the Canvas tag...
http://code.google.com/p/explorercanvas/
Flash/Silverlight: Using either of these you can implement the Canvas tag and it will work quite well, you can also implement sound. However, if the user doesn't have any browser plugins there is nothing you can do.
http://www.schillmania.com/projects/soundmanager2/
DOM Abstractions: On the issue of the DOM, you can abstract away from the DOM by implementing your own Event object, as in the case of QEvent, or even your own Node object, as in the case of YAHOO.util.Element. However, these usually make some subtle changes to the standard API, so people are usually just abstracting away from the standard, and there are hundreds of libraries that do exactly that.
http://code.google.com/p/qevent/
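A minimal sketch of the kind of abstraction these libraries provide (the function name is made up; real libraries also normalise the event object, fix `this`, and work around old-IE memory leaks):

    function addEvent(element, type, handler) {
      if (element.addEventListener) {
        element.addEventListener(type, handler, false);   // W3C browsers
      } else if (element.attachEvent) {
        element.attachEvent('on' + type, function () {    // old IE
          handler.call(element, window.event);
        });
      }
    }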
This is probably the best answer to your question. It makes browsers as standards-compliant as possible.
http://dean.edwards.name/weblog/2007/03/yet-another/
It's the year 2009. Internet Explorer 8 has finally been released, and Firefox is coming up to 3.5. Many of the large browsers are starting to integrate features from CSS3 and HTML 5, or have been doing that for quite a while now. Still, I find myself developing web pages exactly the same way I did back in 2005.
A lot of progress has been made since then, and I think the reason I haven't started taking advantage of these new possibilities is that it's so hard to know which of the new features work in all major browsers. Since I'm mostly a backend developer, I just don't have the time to keep up with these developments anymore. Still, I feel like I'm missing out on a lot of cool stuff that would actually make my life a lot easier.
How can I quickly determine if a feature of CSS3 or HTML5 is supported by all major modern browsers?
Can I Use is a website which tracks browser support for current and upcoming web standards. Check it out if you would like to know whether or not a given feature is widely supported.
Font embedding through CSS, using #font-face. Webkit/Safari has been supporting it since version 3.1, Microsoft since IE4, Mozilla since Firefox 3.5 (browser support overview).
Also, the varied implementations of the Selectors API, which provides a browser-native CSS selector engine for use in DOM scripting.
For other examples, When Can I Use... seems to be a very good reference.
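As a small illustration of the Selectors API mentioned above (the selector here is arbitrary), with a feature test so older browsers can fall back to a library's selector engine:

    if (document.querySelectorAll) {
      var warnings = document.querySelectorAll('div.warning > p'); // browser-native
    } else {
      // IE6/7 and friends: fall back to jQuery/Sizzle or similar
    }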
I would say display:table and a range of CSS2.1 selectors are the big wins for designers. display:table solves some otherwise unsolvable or difficult layouts, like 100% height and inside borders, without breaking semantics by resorting to actual tables.
Multiple classes (.c1.c2)
I use min/max-width/height a lot.
Also, a working :hover and !important are awesome.
I would have liked to add SVG support to that list but naturally Microsoft messed that up.
BTW, a big warning to those getting excited about HTML5 features: there is no official date for the adoption of this spec. It's even been implied it could take another 10 years (though I doubt that). The point is that anything you do with HTML5 now is subject to breakage when the official spec does arrive, and in the meantime you can expect plenty of browser inconsistencies, bugs and API changes (not to mention browsers that don't support the features at all).
Browser support for local storage should enable a bunch of new ideas now that some content can be saved on a user's machine.
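A small, hedged sketch of the Web Storage API (the key and value are invented); data persists per origin across page loads:

    if (window.localStorage) {                    // feature-test: older browsers lack it
      localStorage.setItem('draft', 'Dear diary...');
      var saved = localStorage.getItem('draft');  // "Dear diary..."
      localStorage.removeItem('draft');
    }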
Reference docs:
Mozilla Firefox
Internet Explorer