Leaflet.js path clicking on mobile - javascript

I'm using leaflet in a PhoneGap project, with offline map tiles. Everything technically works just fine. However I'm finding that it is very hard to get the click event to fire on the paths, requiring the user to do a lot of frustrating tapping over and over in the same spot until it finally fires.
These aren't teeny tiny paths. Think bike paths in a mid-sized city, spiderwebbed everywhere.
Is this just something I have to live with in Leaflet, or are there some tips and tricks to make the maps more touch-friendly?
EDIT: And if not: is there a better way to do PhoneGap / Web based cached maps with better touch responsiveness?

Since Leaflet.js claims to already be mobile-optimized, with the following features:
Multi-touch zoom (iOS, Android 4+, Win8)
Double tap zoom
Various events: click (tap), mouseover, contextmenu, etc
Tap delay elimination on mobile devices
I do not think there is much you can do to improve the touch-friendliness of it, unfortunately.

For those interested, about the only solution I've come up with on this one was to simply make the lines thicker. I chose a weight of 8, which seems to work slightly better for my fat, sausage-like fingers on my iPhone, but you'll want to test different devices and see what works for you.
You might also want to adjust your opacity, as the thicker lines will overlap more and might not look the greatest in congested path areas.
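Putting the two tweaks together, one way to apply them only on touch devices is a small style helper. This is a sketch, not from the original answer: `weight` and `opacity` are real Leaflet path options, but the helper function, its name, and the thresholds (8 and 0.6) are my own assumptions to illustrate the idea.

```javascript
// Hypothetical helper: fatten the stroke on touch devices so taps land,
// and back off opacity so the thicker, overlapping lines stay readable.
function touchFriendlyPathStyle(baseStyle, isTouchDevice) {
  if (!isTouchDevice) return baseStyle;
  return {
    ...baseStyle,
    weight: Math.max(baseStyle.weight || 3, 8),   // bigger tap target
    opacity: Math.min(baseStyle.opacity ?? 1, 0.6) // less clutter where paths overlap
  };
}

// Usage with Leaflet would look something like:
// L.polyline(latlngs, touchFriendlyPathStyle({ color: 'blue', weight: 3 }, L.Browser.touch))
//   .addTo(map);
```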

Related

How to view 3d html5/css3/native javascript page stereoscopically on mobile?

Short version: Keith Clark has a 3D HTML FPS shooter demo. It uses 3D transforms on HTML5 elements to produce a 3D world experience. It is not VR. Is there an API to view it stereoscopically?
I have a similar engine. I came up with a way to view it stereoscopically using a Cardboard-style viewer, a 3D TV/monitor, or red/cyan anaglyph glasses. I had to use a pair of iframes, however, and load a copy of the "world" document into each frame.
This doubles the load on the GPU, requires duplicating all changes to "the world" across both iframes, and needs workarounds for focusable items such as textareas. It all works great, but diminishes the capacity for detail before RAF noticeably slows down and gets jumpy. This is especially true in Firefox on mobile, and of course there is also the added problem of security limitations on iframes.
If there's an API to just view and control a 3D HTML5 page in stereo without explicitly duplicating everything, that would make things a lot simpler and more efficient.
I'm using Google Chrome on a Galaxy Note 3 as my standard-level target device, if anyone needs to know.
Long version (old):
I have a 3D game I'm writing in native HTML5/CSS3/JavaScript. It is primarily for mobile and already contains a fully-functional camera system with the ability to zoom in or out of first-person, second-person, overhead views etc., rotate the yaw and pitch of the view, as well as the location on the map around the avatar. Is there an easy way to view it stereoscopically? It will be embedded in an Android app, or at least accessed through one, or through Chrome as a web app.
I thought Chrome Dev with VR Shell would be a possibility to try it out and hopefully integrate into an app eventually, but I've had no luck with that yet. Theoretically, I just need to be able to view an ordinary HTML5 page that has CSS3 3D transforms. For example, if you had a 3D cube made of divs or whatever, view it from two points of view, one for each eye, without changing anything in the page itself. Basically, if you could view anything 3D in the page stereoscopically, much like the VR Shell sounds like it does, it should work.
All I seem to come up with is how to turn on the flag in Chrome Dev, but I'm not seeing anything to actually activate it. It's been fully restarted etc. The page is already 3D and fully functional with orientation control in first person or otherwise. All I seem to find is how to turn the flag on, or material about 3D videos. Can this be done in Google's VR libs for Studio without using all the other stuff? I just need the second eyeball.
OK. I was hoping an API existed for this, but my solution was to make a parent document with two iframes and load the doc with the 3D transforms into both iframes. Then offset the perspective-origin in each by about 1% (i.e. 49% in one and 51% in the other). This worked great without added mods using device orientation, but obviously not for mouse control. Ideally, both iframed documents should be controlled from JS in the parent doc. The downside is that you have to control two objects for every change, and complications arise if you have inputs or textareas that take exclusive focus. I fixed all that, but this is the down-low version of the solution.
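The perspective-origin offset described above can be sketched as a tiny helper. The function name and the assumption that the parent document holds two iframes (`leftFrame`, `rightFrame`) are mine; `perspective-origin` itself is the standard CSS property the answer refers to.

```javascript
// Compute the perspective-origin for each eye by offsetting ~1% from center,
// matching the 49% / 51% values described in the answer.
function eyeOrigins(offsetPercent = 1) {
  return {
    left:  `${50 - offsetPercent}% 50%`,
    right: `${50 + offsetPercent}% 50%`
  };
}

// Applying it from the parent document (browser-only, shown for illustration):
// const { left, right } = eyeOrigins();
// leftFrame.contentDocument.body.style.perspectiveOrigin  = left;
// rightFrame.contentDocument.body.style.perspectiveOrigin = right;
```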

Recognise other devices' positions

I am currently working on my thesis project, where I am building a JavaScript/Node library that makes it easier for developers to merge the browser canvases of multiple devices together, so that objects can live within one large canvas spanning all the devices. Basically, the idea is that you'll be able to put multiple phones/pads next to each other in different positions relative to each other, but use all their browsers as just one canvas.
I will also create another library extension with a bunch of restrictions, and hold a hackathon to see what developers create with this tool and within these restrictions.
Anyway, I have run into a problem. To make the tool more versatile and flexible, I ideally want every device to be able to detect where in space the other devices are in relation to itself. But I have run out of ideas about how to solve it; do you guys have any ideas? Do you think it is possible? Or will I have to settle for a manual solution? Can any other technology help? Bluetooth?
I have looked at projects like:
Google Chrome Racer (https://www.chrome.com/racer)
Coca-Cola Penguin Curling (http://cargocollective.com/rafaeldante/Coca-Cola-Penguin-Curling)
How do you think these projects solved the issue of positioning order? Which device is where in the order?
Sadly, Chrome Racer doesn't seem to be running anymore. But as far as I can remember from playing it a while ago, you did not have to put in the position of your device manually. Analyzing this clip (https://youtu.be/17P67Uz0kcw?t=4m46s), it looks like the application understands where in line that specific device is, right? Any ideas on this?
Just a random musing on possible paths to a solution.
They all have cameras that are facing up. If any two can capture an image that overlaps, you have a way of orienting them relative to each other. If every device had a view that overlapped with at least one other, then you can get a reasonable approximation of the relative orientation and positions of them all. The more devices, the better the result.
You can listen to the ambient sound environment and use the arrival time of sounds to give another relative positional clue. Devices can also emit both sound and light; if done in a predetermined order, the sound can produce relative position. The display, if flashed on and off in specific patterns, could also be detected (not directly, but as a subtle ambient reflection).
Ambient light levels are also a source of relative position and orientation.
If only two devices tried these methods, they would fail most of the time. But with each extra device you get its relative information compared to all the others, growing the data quadratically (one measurement per pair) and making a solution easier to find. Given enough devices, a solution for position and orientation may be possible via passive sensing only.
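The arrival-time idea above reduces to simple math: if device A emits a chirp at a known moment and device B timestamps when it hears it, the gap times the speed of sound gives their separation. A minimal sketch (function names are my own; this assumes the two clocks are already synchronised, which is the hard part in practice):

```javascript
// Speed of sound in dry air at ~20 °C, in metres per second.
const SPEED_OF_SOUND_M_S = 343;

// Distance between emitter and listener, from emit/arrival timestamps (ms).
function distanceFromArrival(emitTimeMs, arrivalTimeMs) {
  const deltaSeconds = (arrivalTimeMs - emitTimeMs) / 1000;
  return deltaSeconds * SPEED_OF_SOUND_M_S; // metres
}

// A 10 ms delay corresponds to roughly 3.4 m of separation, so for phones
// sitting centimetres apart you'd need sub-millisecond timing accuracy.
```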

How can I make a rotating polygon in the background?

I'd like to create a polygon like in the picture below, but it should rotate and bounce between the borders of the web page; it should also sit in the background and not interfere with other elements of the page.
I can do this in pure JavaScript with a lot of math, but maybe there's another approach which can simplify this task?
UPDATE: It needs to work without user actions on Safari/Chrome/FF, at least the last few releases of these browsers, and on mobile devices of course.
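For what it's worth, the per-frame math is smaller than it might seem: precompute the polygon's shape once and just rotate the vertices each frame. A minimal sketch (the function and its names are my own, not from the question):

```javascript
// Vertices of a regular polygon with `sides` sides, centred at the origin,
// rotated by `angleRad`. Call once per animation frame with a growing angle.
function polygonVertices(sides, radius, angleRad) {
  const pts = [];
  for (let i = 0; i < sides; i++) {
    const a = angleRad + (i * 2 * Math.PI) / sides;
    pts.push([radius * Math.cos(a), radius * Math.sin(a)]);
  }
  return pts;
}

// In the browser this would feed a requestAnimationFrame loop drawing onto a
// full-page <canvas> styled with position: fixed and pointer-events: none,
// so the background animation never intercepts clicks meant for the page.
```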

Recording and storing high-res hand drawing

Are there any advanced solutions for capturing a hand drawing (from a tablet, touch screen or iPad like device) on a web site in JavaScript, and storing it on server side?
Essentially, this would be a simple mouse drawing canvas with the specialty that its resolution (i.e. the number of mouse movements it catches per second) needs to be very high, otherwise round lines in the drawing will become "polygonal" when moving the pen / mouse fast:
(if this weren't the case, the inputDraw solution suggested by @Gregory would be 100% perfect.)
It would also have to have a high level of graphical quality, i.e. antialias the penstroke. Nothing fancy here but a MS Paint style, 1x1 Pixel stroke won't cut it.
I find this a very interesting thing in general, seeing as Tablet PCs are becoming at least a bit more common. (Not that they get the attention I feel they deserve).
Any suggestions are highly appreciated. I would prefer an Open Source solution, but I am also open to proprietary solutions like ActiveX controls or Java Applets.
FF4, Chrome support is a must; Opera, IE8/9 support is desired.
Please note that most "canvas" libraries around, and most answers to other questions similar to mine, refer to programmatically drawing onto a canvas. This is not what I am looking for. I am looking for something that records the actual pen or mouse movements of the user drawing on a certain area.
Starting a bounty out of curiosity whether anything has changed during the time since this question was asked.
I doubt you'll get anything higher resolution than the "onmousemove" event gives you, without writing an efficient assembler program on some embedded system custom-built for the purpose. You run inside an OS, you play by the OS's rules, which means you're limited by the frequency of the timeslices an OS will give you (usually about 100 per second, fluctuating depending on load). I've not used a tablet that can overcome the "polygon" problem, and I've used some high-end tablets. Photoshop overcomes the problem with cubic interpolation.
That is, unless you have a very special tablet that will capture many movement events and queue them up in some internal buffer, sending a whole packet of coordinates at a time when it dispatches data to the OS. I've looked at tablet APIs though, and they only give one set of coordinates at a time, so if this is going to happen, you'll need custom hardware, a custom driver, and custom APIs that can handle packets of multiple coordinates.
Or you could just use a damned canvas tag, the onmousemove event, event.pageX|pageY, some cubic interpolation, the "toDataURL" API of canvas, post the result to your PHP script, and then just say you did all that other fancy stuff.
onmousemove, in my tests, will give you one event per pixel of movement, limited only by the speed of the event loop in the browser. You'll get sparse data points (polygons) with fast movement and that's as good as it gets without a huge research grant and a hardware designer. Deal.
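The cubic interpolation mentioned above (the trick the answer attributes to Photoshop) can be sketched with a Catmull-Rom spline between the sparse onmousemove samples, so fast strokes stop looking polygonal. This is my own illustration, not code from the thread:

```javascript
// Catmull-Rom interpolation between p1 and p2, using p0/p3 as neighbours.
// Points are [x, y] pairs; t runs from 0 (at p1) to 1 (at p2).
function catmullRom(p0, p1, p2, p3, t) {
  const t2 = t * t, t3 = t2 * t;
  const f = (a, b, c, d) =>
    0.5 * (2 * b + (-a + c) * t +
           (2 * a - 5 * b + 4 * c - d) * t2 +
           (-a + 3 * b - 3 * c + d) * t3);
  return [f(p0[0], p1[0], p2[0], p3[0]), f(p0[1], p1[1], p2[1], p3[1])];
}

// Densify a captured stroke by inserting `steps` points per sample pair,
// ready to draw with canvas lineTo calls.
function smoothStroke(points, steps = 8) {
  const out = [];
  for (let i = 0; i + 3 < points.length; i++) {
    for (let s = 0; s < steps; s++) {
      out.push(catmullRom(points[i], points[i + 1], points[i + 2], points[i + 3], s / steps));
    }
  }
  return out;
}
```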
There are some applets for this in the oekaki world: Shi-Painter, ChibiPaint or PaintBBS. Here you have PHP classes for integration.
Drawings produced by these applets can have quite good quality. If you register at oekakicentral.com you can see all the galleries, and some drawings have an animation link that shows how it was drawn (it depends on the applet), so you can compare the possibilities of the applets. Some of them are open source.
Edit: See also this made in HTML 5.
Have a look at <InputDraw/>: a flash component that turns freehand drawing into SVG. Then you could send back the generated SVG to your server.
It's free for non commercial use. According to their site, commercial use price is 29€. It's not open source though.
IMHO it's worth a look.
Alternatively, you could implement something based on svg-edit, which is open source and uses jQuery (demo). It requires the Google Chrome Frame plugin for IE6+ support though.
EDIT: I just found the svg-freehand-signature project (demo), which captures your handwritten signature and sends it to a server as an SVG using POST. It's distributed as a straightforward, self-contained zip (it works out of the box with Safari and Firefox; you may want to combine it with svgweb, which brings SVG support to Internet Explorer).
EDIT: I successfully combined Cesar Oliveira's canvaslol (just look at the source of the page to see how it works) with ExplorerCanvas to have something on IE. You can also have a look at Anne van Kesteren's Paintr experiment.
markup.io is doing that with an algorithm applied after the mouseup.
I asked a similar question recently, and got interesting but not satisfying answers: Is there any way to accelerate the mousemove event?

How to trigger Mouse-Over on iPhone?

This might seem like a really dumb question, but I am writing an application and I have come across a situation where my mouse-over, mouse-click and mouse-hover need different events bound to them. On Internet Explorer, Firefox, and Safari, it all works as expected.
However, on my iPhone the actions will not trigger. My question is: are there any specific ways I can have the mouse-over essentially be fired when I hold my finger down, and trigger an event?
An example where this doesn't work is right on this website: when you hover over a comment, it is supposed to display the +1 or flag icon.
I am using jQuery.
The answer is in the documentation that Remus posted. If you add an onclick = "void(0)" declaration, you will instruct Mobile Safari that the element is clickable, and you will gain access to the mouseover event on that element.
More info here
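A sketch of that fix: giving the element an empty onclick attribute is enough to mark it "clickable" to Mobile Safari, so mouseover fires on the first tap. The helper function and the stand-in element below are my own illustration; with jQuery (which the question uses) it would be a one-liner like `$('.hoverable').attr('onclick', 'void(0)')`, where `.hoverable` is a hypothetical selector for your hover targets.

```javascript
// Mark an element as clickable so Mobile Safari dispatches mouseover on tap.
// Works on any object exposing a DOM-style setAttribute method.
function enableTapHover(el) {
  el.setAttribute('onclick', 'void(0)');
  return el;
}

// In the browser: document.querySelectorAll('.hoverable').forEach(enableTapHover);
```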
I think you need to reconsider your design for the iPhone (and any mobile for that matter). iPhone web interfaces shouldn't depend on mouse-overs and hovers, as they just complicate the interface significantly.
I strongly recommend that you design a new interface that is optimized for mobile viewing and doesn't require clicking on tiny little arrows just to show more options.
Mobile Safari has no mouse and hover events (at least not in the usual accepted sense); they are explicitly called out in Creating Compatible Web Content, under "Unsupported iPhone OS Technologies":
Mouse-over events: The user cannot “mouse-over” a nonclickable element on iPhone OS. The element must be clickable for a mouseover event to occur, as described in “One-Finger Events.”
Hover styles: Since a mouseover event is sent only before a mousedown event, hover styles are displayed only if the user touches and holds a clickable element with a hover style. Read “Handling Events” for all the events generated by gestures on iPhone OS.
Yeah... I don't think anyone posing the question actually expected the device to "sense" a hover or mouseover. Actually, you'd have to be pretty arrogant to assume someone meant that. Some method of triggering those event handlers is what is desired. I can definitely see a use for them in "hint" text appearing above items.
And whoever said not using mouse events makes a cleaner, simpler experience is taking their own opinion a bit too seriously. Mouse events can greatly enhance a web page/application experience or make it worse. It's a matter of prudent usage.
The only worthwhile answer anyone provided here is the suggestion to have an alternate site optimized for mobile. Or possibly use a content management system that generates the page based on the browser type (similar to how Wikipedia works).
Congratulations on discovering the first thing about touch screen UI design. The bad news is that what you want just is not going to happen.
The good news is that this will force you to make a much easier interface, for both iphone users and regular web users.
You simply cannot have mouseover or hover functionality on touch screen devices unless you can move a virtual pointer (though no touch UI offers that kind of functionality), and that would defeat the point of a touch screen UI anyway.
Touch screen UIs are a paradigm shift, and retrofitting mouse-pointer interfaces back into touch UI design only limits and damages your solution.
Writing a mouse handler in JavaScript seems fairly straightforward, although I can imagine it being easy to get a lot of edge cases wrong.
The good news is, someone wrote a JavaScript mouse handler/emulator, whatever, as a bookmarklet. It's called iCursor (not to be confused with the pointless Mac app of the same name).
The bad news is, the guy's site (icursor.mobi) has gone off the air, and I can't find a copy, so I can't tell you how well it works. Here's a review (because I can only post one link):
What Apple should have done for the iPhone/iPad was make one-finger panning move a virtual mouse pointer, and two-finger panning move within the viewport (as one finger does now).
Two-finger panning is easy; the only reason I can imagine for Apple not doing this is that they actually wanted to break 50% of the websites in the world. Seriously. It's right up there with the evil, manipulative attempts to break standards that Microsoft has been making all these years.
You're a web developer. What do you hate most? Internet Explorer. Because of all the extra headaches it causes you. Well, Stevie had to have his "me too" moment, and you're going to pay for it.
