Are there any advanced solutions for capturing a hand drawing (from a tablet, touch screen or iPad like device) on a web site in JavaScript, and storing it on server side?
Essentially, this would be a simple mouse drawing canvas with the specialty that its resolution (i.e. the number of mouse movements it catches per second) needs to be very high, otherwise round lines in the drawing will become "polygonal" when moving the pen / mouse fast:
(If this weren't the case, the InputDraw solution suggested by @Gregory would be 100% perfect.)
It would also need a high level of graphical quality, i.e. antialiased pen strokes. Nothing fancy here, but an MS Paint-style 1x1-pixel stroke won't cut it.
I find this a very interesting thing in general, seeing as Tablet PCs are becoming at least a bit more common. (Not that they get the attention I feel they deserve).
Any suggestions are highly appreciated. I would prefer an Open Source solution, but I am also open to proprietary solutions like ActiveX controls or Java Applets.
FF4, Chrome support is a must; Opera, IE8/9 support is desired.
Please note that most "canvas" libraries around, and most answers to other questions similar to mine, refer to programmatically drawing onto a canvas. This is not what I am looking for. I am looking for something that records the actual pen or mouse movements of the user drawing on a certain area.
Starting a bounty out of curiosity whether anything has changed during the time since this question was asked.
I doubt you'll get anything higher resolution than the "onmousemove" event gives you, without writing an efficient assembler program on some embedded system custom built for the purpose. You run inside an OS, you play by the OS's rules, which means you're limited by the frequency of the timeslices an OS will give you (usually about 100 per second, fluctuating depending on load). I've not used a tablet that can overcome the "polygon" problem, and I've used some high-end tablets. Photoshop overcomes the problem with cubic interpolation.
That is, unless you have a very special tablet that will capture many movement events and queue them up in some internal buffer, then send a whole packet of coordinates at a time when it dispatches data to the OS. I've looked at tablet APIs, though, and they only give one set of coordinates at a time, so if this is going to happen, you'll need custom hardware, a custom driver, and custom APIs that can handle packets of multiple coordinates.
Or you could just use a damned canvas tag, the onmousemove event, event.pageX|pageY, some cubic interpolation, the "toDataURL" API of canvas, post the result to your PHP script, and then just say you did all that other fancy stuff.
onmousemove, in my tests, will give you one event per pixel of movement, limited only by the speed of the event loop in the browser. You'll get sparse data points (polygons) with fast movement, and that's as good as it gets without a huge research grant and a hardware designer. Deal.
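For what it's worth, here's a minimal sketch of that canvas + onmousemove + smoothing + toDataURL approach. The canvas id and the /save.php endpoint are made-up placeholders, and the quadratic-curve-through-midpoints trick is just a cheap stand-in for proper cubic interpolation:

```js
// Minimal sketch, assuming a <canvas id="draw"> and a /save.php endpoint
// (both placeholders). Strokes are smoothed by drawing quadratic curves
// between segment midpoints.
var canvas = document.getElementById('draw');
var ctx = canvas.getContext('2d');
ctx.lineWidth = 2;
ctx.lineJoin = ctx.lineCap = 'round';

var drawing = false, points = [];

canvas.onmousedown = function (e) {
  drawing = true;
  points = [{ x: e.pageX - canvas.offsetLeft, y: e.pageY - canvas.offsetTop }];
};

canvas.onmousemove = function (e) {
  if (!drawing) return;
  points.push({ x: e.pageX - canvas.offsetLeft, y: e.pageY - canvas.offsetTop });
  var n = points.length;
  if (n < 3) return;
  // Curve from midpoint to midpoint, using the middle captured point as
  // the control point: this rounds off the "polygon" corners.
  var a = points[n - 3], b = points[n - 2], c = points[n - 1];
  ctx.beginPath();
  ctx.moveTo((a.x + b.x) / 2, (a.y + b.y) / 2);
  ctx.quadraticCurveTo(b.x, b.y, (b.x + c.x) / 2, (b.y + c.y) / 2);
  ctx.stroke();
};

canvas.onmouseup = function () {
  drawing = false;
  // Serialize the bitmap and POST it to the server.
  var png = canvas.toDataURL('image/png');
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/save.php');
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr.send('image=' + encodeURIComponent(png));
};
```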
There are some applets for this in the oekaki world: Shi-Painter, ChibiPaint or PaintBBS. Here you have PHP classes for integration.
Drawings produced by these applets can have quite good quality. If you register at oekakicentral.com you can see all the galleries, and some drawings have an animation link that shows how they were drawn (it depends on the applet), so you can compare what the applets can do. Some of them are open source.
Edit: See also this one, made in HTML5.
Have a look at <InputDraw/>, a Flash component that turns freehand drawing into SVG. You could then send the generated SVG back to your server.
It's free for non commercial use. According to their site, commercial use price is 29€. It's not open source though.
IMHO it's worth a look.
Alternatively, you could implement something based on svg-edit, which is open source and uses jQuery (demo). It requires the Google Chrome Frame plugin for IE6+ support though.
EDIT: I just found the svg-freehand-signature project (demo), which captures your handwritten signature and sends it to a server as an SVG via POST. It's distributed as a straightforward, self-contained zip (it works out of the box with Safari and Firefox; you may want to combine it with svgweb, which brings SVG support to Internet Explorer).
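In case it helps, the "send the SVG to the server via POST" part can be as simple as serializing the inline <svg> element and posting the markup; the element id and endpoint below are placeholders, not names from that project:

```js
// Rough sketch: serialize an inline <svg> and POST it to the server.
// '#signature' and '/signature.php' are made-up placeholders.
var svgEl = document.querySelector('#signature');
var markup = new XMLSerializer().serializeToString(svgEl);

var xhr = new XMLHttpRequest();
xhr.open('POST', '/signature.php');
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.send('svg=' + encodeURIComponent(markup));
```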
EDIT: I successfully combined Cesar Oliveira's canvaslol (just look at the source of the page to see how it works) with ExplorerCanvas to get something working on IE. You can also have a look at Anne van Kesteren's Paintr experiment.
markup.io is doing that with an algorithm applied after the mouseup.
I asked a similar question recently, and got interesting but not satisfying answers: Is there any way to accelerate the mousemove event?
I am currently working on my thesis project, where I am building a JavaScript/Node library that makes it easier for developers to merge the browser canvases of multiple devices into one, so that objects can live within one large canvas spanning all the devices. Basically, the idea is that you'll be able to put multiple phones/pads next to each other in different positions relative to one another, but use all their browsers as a single canvas.
I will also create another library extension with a bunch of restrictions on it, and hold a hackathon to see what developers create with this tool and within these restrictions.
Anyway, I have run into a problem. To make the tool more versatile and flexible, I ideally want every device to be able to detect where in space the other devices are in relation to itself. But I have run out of ideas about how to solve it. Do you have any ideas? Do you think it is possible? Or will I have to come up with a manual solution? Could any other technology help? Bluetooth?
I have looked at projects like:
Google Chrome Racer (https://www.chrome.com/racer)
Coca-Cola Penguin Curling (http://cargocollective.com/rafaeldante/Coca-Cola-Penguin-Curling)
How do you think these projects solved the issue of positioning order? Which device is where in the order?
Sadly, Chrome Racer doesn't seem to be running anymore, but as far as I can remember from playing it a while ago, you did not have to put in the position of your device manually. Analyzing this clip (https://youtu.be/17P67Uz0kcw?t=4m46s), it looks like the application understands where in line that specific device is, right? Any ideas on this?
Just a random musing on possible paths to a solution.
They all have cameras that face up. If any two can capture an image that overlaps, you have a way of orienting them relative to each other. If every device had a view that overlapped with at least one other, then you could get a reasonable approximation of the relative orientations and positions of them all. The more devices, the better the result.
You can listen to the ambient sound environment and use the arrival time of sounds as another relative positional clue. Devices can also emit both sound and light; if done in a predetermined order, the sound can yield relative positions. The display, if flashed on and off in specific patterns, could also be detected (not directly, but as a subtle ambient reflection).
Ambient light levels are also a source of relative position and orientation.
If only two devices tried these methods they would fail most of the time. But each extra device adds its relative information compared to all the others, growing the data rapidly and making a solution easier to find. Given enough devices, a solution for position and orientation may be possible via passive sensing only.
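To make the sound idea concrete, here's a toy sketch of the arithmetic behind the arrival-time clue; it assumes the devices share a clock (which is the genuinely hard part) and uses made-up numbers:

```js
// Toy sketch of the arrival-time idea: if two devices hear the same chirp
// and can compare timestamps on a shared clock (clock sync is assumed away
// here), the arrival-time difference maps to a distance difference from
// the sound source.
var SPEED_OF_SOUND = 343; // metres per second, at roughly room temperature

function distanceDifference(arrivalA, arrivalB) {
  // arrivalA / arrivalB: arrival timestamps in seconds on a shared clock
  return SPEED_OF_SOUND * Math.abs(arrivalA - arrivalB); // metres
}

// e.g. a 3 ms difference means one device is about 1 metre closer to the source
console.log(distanceDifference(0.120, 0.123)); // ~1.03
```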
I am looking to simulate a subtle movement (think of a wave) of cloth in the browser, entirely with an image effectively placed on top, and then have the background do the processing of the movement. Is this possible?
Effectively, imagine it as a cloth simulation with an image (as this will need to change depending on the product) stuck on top of it.
Hope that makes sense.
It would also be good to know if this could work on a mobile phone.
You could process the image on a per pixel basis, apply time dependent transformations on it, and draw it on a canvas element.
It will involve some interesting maths, and one of the primary concerns would be performance, but I think a fairly optimal implementation should work easily on modern desktops.
Whether it works on mobile devices depends on their HTML5 support and processing power.
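As a minimal sketch of that idea (the image file name and canvas id below are placeholders), you can redraw the image every frame with a time-dependent, row-wise sine displacement; using getImageData for true per-pixel work follows the same pattern but is slower:

```js
// Minimal sketch: shift each row of the source image sideways by a
// time-dependent sine offset and redraw every frame. 'cloth.png' and the
// canvas id 'cloth' are placeholders; the canvas should be at least as
// large as the image.
var canvas = document.getElementById('cloth');
var ctx = canvas.getContext('2d');
var img = new Image();
img.src = 'cloth.png';

img.onload = function () {
  function frame(t) {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    // Draw the image one row at a time, displaced by a travelling sine wave.
    for (var y = 0; y < img.height; y++) {
      var offset = Math.sin(y / 20 + t / 300) * 4;   // 4px amplitude, tweak to taste
      ctx.drawImage(img, 0, y, img.width, 1,         // source row
                    offset, y, img.width, 1);        // destination row, shifted
    }
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
};
```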
You might want to start with simple things such as this question:
Add flaglike waving to 2d Context
Related links:
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/canvas
https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Pixel_manipulation_with_canvas
The stylus on the MS Surface Pro is really good, especially in OneNote. For engineers and designers, it makes sketching diagrams and wireframes super simple.
On the other hand, OneNote is fairly limited in its online capabilities, and I can think of many other uses for a good stylus within web applications.
Is there way, or better yet an existing library, to replicate OneNote's stylus behavior (e.g. palm rejection, smoothing lines, erasing strokes, etc.) in JavaScript for use in a browser?
EDIT: To be clear, I mean more than just basic drawing on a <canvas>; more like the inking features in MS Office, where you can draw freehand on top of document content without interfering with how you interact with the app, and the ink stays put when you scroll and zoom.
I envisage some kind of fullscreen transparent canvas that appears when a pen hover is detected and is hidden when the pen is removed, to allow regular mouse input. The contents would be converted to some kind of SVG and displayed with position: relative; or something.
Would this be possible in raw HTML5, or am I looking at some kind of plugin?
The Pointer Events draft spec provides the primitives you would need to write such a library, but unfortunately the Chrome team has voted against supporting it, and it remains to be seen whether it will become a sufficiently supported standard to be usable on public sites.
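For what it's worth, the primitives it provides (pointerType, pressure, hover events) are enough for a rough sketch of the overlay idea, assuming a browser that implements the draft; the element id below is a placeholder:

```js
// Sketch of the overlay idea using Pointer Events primitives, assuming a
// browser that implements the draft spec. '#ink-overlay' is a placeholder
// id for a full-screen, transparent <canvas> laid over the page.
var overlay = document.getElementById('ink-overlay');
var ctx = overlay.getContext('2d');
var inking = false;

overlay.addEventListener('pointerdown', function (e) {
  if (e.pointerType !== 'pen') return;   // crude palm rejection: ignore touch/mouse
  inking = true;
  ctx.beginPath();
  ctx.moveTo(e.offsetX, e.offsetY);
});

overlay.addEventListener('pointermove', function (e) {
  if (!inking || e.pointerType !== 'pen') return;
  ctx.lineWidth = 1 + e.pressure * 4;    // pressure-sensitive stroke width
  ctx.lineTo(e.offsetX, e.offsetY);
  ctx.stroke();
});

overlay.addEventListener('pointerup', function () {
  inking = false;
});
```

Toggling the overlay's CSS pointer-events between none and auto when a pen enters or leaves range (pointerover/pointerout with pointerType === 'pen') would approximate the show-on-hover behaviour described in the question.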
What is the best cross-browser way to get flat mouse coordinate input data and simple callbacks for mouse events for my rectangular game area on my web page, even when it has loads of larger and smaller images and text strings overlaid haphazardly onto it?
And what is the best way to insert or remove a text string or semi-transparent image overlay at an arbitrary location (and Z order, specified relative to existing objects) in a board game rectangle with cross-browser DHTML?
And how can I stop the user selecting part or all of my montage of images (I just want them to interact with it as if it were Flash), and can I stop the right-click menus coming up in IE, FF, etc.?
I want to do this without Flash because I want something that will work both on desktops and on iPhone and potentially other mobile platforms too.
I appreciate there are serious limitations (e.g. less image-scaling capability, no vectors, no rotation) to what I can do if I'm not using Flash, but I am very interested to know what capabilities are available.
Are there perhaps any frameworks available to make it easier than coding from scratch?
Would jQuery be a good match for some of the requirements? What else do I need?
I would recommend Google Web Toolkit. It lets you program in Java, which gives you all the type-safety and nice IDE functionality that Java entails, but compiles to Javascript so that you can just run it in a browser. It also does a ton of optimization and supports tons of features.
jQuery is excellent at doing this. I used jQuery's UI and Ajax functionality to implement the frontend for a game of chess.
I made it a little easier by creating an 8-by-8 table with unique div names for each tile, so JavaScript can access them by getting the elements by id. If you can't create something like that, you do have the option of placing elements anywhere on the page (either absolutely or relative to a given element). You can also easily change the z-index, including when the user is dragging a piece or when they have dropped it.
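A rough sketch of that kind of board setup (the container id, tile naming scheme and square size are made up for illustration):

```js
// Build an 8x8 board of absolutely positioned tiles with predictable ids.
// '#board' is assumed to be an empty container with position: relative.
var board = document.getElementById('board');

for (var row = 0; row < 8; row++) {
  for (var col = 0; col < 8; col++) {
    var tile = document.createElement('div');
    tile.id = 'tile-' + row + '-' + col;        // unique id so scripts can grab any square
    tile.className = (row + col) % 2 ? 'dark' : 'light';
    tile.style.position = 'absolute';
    tile.style.left = (col * 50) + 'px';        // 50px squares, adjust to taste
    tile.style.top  = (row * 50) + 'px';
    tile.style.width = tile.style.height = '50px';
    board.appendChild(tile);
  }
}

// Raise a piece above everything else while it is being dragged.
function liftPiece(piece) { piece.style.zIndex = 1000; }
function dropPiece(piece) { piece.style.zIndex = 1; }
```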
As far as disabling right-click and item selection goes, that's something that I didn't figure out how to do. You might want to take a look at some other Ajax games like Grand Strategy, which are much more polished than my experiment and may have figured out how to do this.
There are two main APIs for working with arbitrary drawing and positioning on the web, Canvas and SVG.
Take a look at Chrome Canvas Experiments and the Raphael Javascript toolkit to see some examples and Javascript abstractions.
The key is element.style.position = 'absolute'. To illustrate just what's possible, here's how far I've managed to push JavaScript (and from scratch at that!):
http://slebetman.110mb.com/tank3.html - RTS in DOM! Click on units/squads then click somewhere else to tell them where to go. You can control both sides.
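If it helps, the core trick boils down to something like this: an absolutely positioned element stepped towards a click target on a timer (the names and numbers here are illustrative, not taken from that demo):

```js
// Move an absolutely positioned "unit" towards wherever the user clicks.
var unit = document.createElement('div');
unit.style.position = 'absolute';
unit.style.left = '0px';
unit.style.top = '0px';
unit.style.width = unit.style.height = '16px';
unit.style.background = 'green';
document.body.appendChild(unit);

var moveTimer = null;

document.onclick = function (e) {
  var targetX = e.pageX, targetY = e.pageY;
  if (moveTimer) clearInterval(moveTimer);      // cancel any previous order
  moveTimer = setInterval(function () {
    var x = parseInt(unit.style.left, 10);
    var y = parseInt(unit.style.top, 10);
    var dx = targetX - x, dy = targetY - y;
    var dist = Math.sqrt(dx * dx + dy * dy);
    if (dist < 2) { clearInterval(moveTimer); return; }  // arrived
    unit.style.left = (x + 2 * dx / dist) + 'px';        // move 2px per tick
    unit.style.top  = (y + 2 * dy / dist) + 'px';
  }, 16);
};
```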
I'm doing some animations and I want to implement something like this on the web. I was thinking that the HTML canvas can do this kind of job, because I can scale parts of an image. I just need the algorithm to actually make it work.
The effect is elastic: the smaller the window, the greater the elasticity when you restore it. I was thinking I could make this work with web images: if the user clicks an image, it will scale with this kind of effect, not the boring way of scaling.
This is Ubuntu; I know that we could maybe look at the source code to see how it actually implements the animation, but I don't know where to find it, and I don't really understand code written for Linux because I only know PHP and JavaScript. Basically I'm not a software developer; my core expertise is in web development.
http://www.youtube.com/watch?v=hgQP-aFragQ
I believe your best bet is having a look at John Resig's Processing.js.
Processing is an animation language for Java; John has ported it to the browser using canvas.
You're not going to find a web-based solution that will do this for you. If you need something like this done, it will have to be in Flash or some other application (Lenni mentioned Java) that runs in a separate media box embedded in a web page.
People don't want big flashy animations; seeing something 'boring' is much better if it is more usable.
First up - I don't know the actual algorithm they use here.
However, I'd attack this by creating a grid of points (say 10x10), each point attached to its neighbors by damped springs. It might be worth anchoring the edge/corner points to the screen with springs too.
By deforming the grid (stretching and compressing the springs) and then modeling the spring responses, you'd get some interesting effects like those shown. You might then be able to record the patterns so that the points can follow a pre-computed path for faster animation if your animations are predictable.
Then you need to work out how to split the image and map it onto the grid. The splitting may be better done once on the server, but the client can do it if you use canvas.
SVG and VML are a possibility - they'll work without plugins and are similar enough to code for, but I don't think you'll get correct enough image deformation. However, you can scale and rotate with impunity (and quickly), so if you anchor just two of each cell's image points to the grid rather than all four, you'll get an interesting animation - not quite like the video, but pretty good.
As for how to model damped springs, you'll need to keep track of the mass of each point (how heavy it is), how much force each spring exerts on a point (a scalar for how compressed/stretched it is, plus its direction vector), and a damping force on the points (a resistive force proportional to the square of the point's velocity).
It's physics modeling, to be sure, but quite possible.
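For concreteness, here's roughly what the per-point update could look like; all the constants are guesses to be tuned, and I'm not claiming this is the algorithm the window manager actually uses:

```js
// Sketch of the spring model described above, for a single grid point
// attached to its neighbours. All constants are placeholders to be tuned.
var STIFFNESS = 8;      // spring constant k
var REST_LEN  = 10;     // natural spring length
var DAMPING   = 0.04;   // drag coefficient (force ~ speed squared, as above)
var MASS      = 1;
var DT        = 1 / 60; // timestep in seconds

function updatePoint(p, neighbours) {
  var fx = 0, fy = 0;

  // Hooke's law for each spring: force proportional to how far the spring
  // is stretched/compressed, directed along the spring.
  neighbours.forEach(function (n) {
    var dx = n.x - p.x, dy = n.y - p.y;
    var len = Math.sqrt(dx * dx + dy * dy) || 1;
    var f = STIFFNESS * (len - REST_LEN);
    fx += f * dx / len;
    fy += f * dy / len;
  });

  // Damping: resistive force proportional to the square of the speed,
  // opposing the direction of motion.
  var speed = Math.sqrt(p.vx * p.vx + p.vy * p.vy);
  if (speed > 0) {
    fx -= DAMPING * speed * p.vx;
    fy -= DAMPING * speed * p.vy;
  }

  // Simple Euler integration: update velocity, then position.
  p.vx += (fx / MASS) * DT;
  p.vy += (fy / MASS) * DT;
  p.x  += p.vx * DT;
  p.y  += p.vy * DT;
}
```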
The response may well be slow, especially on IE. Canvas needs a plug-in on IE, so if you use canvas, IE folk won't see it. SVG works on almost everything except IE, but IE does have VML, which is similar. http://raphaeljs.com/ is a library that uses whatever's available. This will be a challenge to tune up :)
However you do this, it will always look best in Chrome; the V8 JavaScript engine outstrips everything else for this kind of work. IE has the slowest JavaScript engine.