Web app here:
http://www.digitaltransitions.com/visualizer/visualizer.html
Main javascript here:
http://www.digitaltransitions.com/visualizer/visualizer.js
Relevant functions are at the bottom of visualizer.js, named "dragger", "move", and "up".
I was a programmer a decade ago, and recently took it back up to help my company create a web app that helps our customers visualize how a specific lens will look on a specific camera.
Never mind the info wall (info request form); feel free to put in any garbage entries. Or you can add a call to unlock() at the end of the window.onload handler and it will bypass the info wall screen and go straight into the app.
I've been very proud to get this far. But now I am majorly stuck and have been banging my head against the wall.
My web app passed testing on Mac Safari, Mac Chrome, and Mac Firefox. But it failed testing on an iPhone 4S and iPad 1; the sliders for focal length (the ##mm gizmo in the top right which changes how "zoomed" the lens is) do not function correctly. When the user grabs the slider, some of the time it correctly slides back and forth, but other times it jumps to the far left of the screen, at which point the app stops working entirely.
Any thoughts would be greatly appreciated!!
By the way, if you were wondering how to create a custom Google Docs Form with validation and a custom confirmation page I got my methodology from here:
http://www.morningcopy.com.au
I'd say your first job is to determine whether this is your bug or Raphael's bug. I'd start by swapping out your "move" method for an empty method, and see what happens.
Another debugging approach would be to put a fixed-position div down in the corner of the page, and spit the x/y values into it, so you can see in real-time what the numbers look like.
I'd guess that you're running into a mathematical difference in how touch-points are calculated vs. how mouse-cursor-position is calculated.
Based on the hack you added, it looks like Raphael might be sending you a NaN value for Dx?
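One quick way to test that theory is to normalize the two event shapes up front. Below is a minimal sketch (the function name is mine, not from visualizer.js) of a helper that reads coordinates from either a touch event or a mouse event; reading e.clientX directly on a touch event yields undefined, and arithmetic on undefined is a classic source of NaN offsets like the Dx above.

```javascript
// Normalize mouse and touch events to one { x, y } shape.
// On touch devices the coordinates live in e.touches[0] (or in
// e.changedTouches[0] for touchend), not on the event object itself,
// so e.clientX is undefined there.
function pointFromEvent(e) {
  if (e.touches && e.touches.length > 0) {
    return { x: e.touches[0].clientX, y: e.touches[0].clientY };
  }
  if (e.changedTouches && e.changedTouches.length > 0) {
    return { x: e.changedTouches[0].clientX, y: e.changedTouches[0].clientY };
  }
  return { x: e.clientX, y: e.clientY };
}
```

If the drag handlers always go through a helper like this, the same "move" code can serve both desktop and iOS without the slider jumping to a NaN position.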
I really hope this is a case of not being able to see the wood for the trees, because I just can't believe my own eyes at the moment! If you look at the image below you'll see my mouse pointer is hovering over an IMG tag in the Chrome Debugger/Inspect Elements tab, and the handy tool-tip in the mobile display is usefully showing "img 45 x 40" exactly where my hg.png image should be rendering. Yet Hansel and Gretel are still stuck where they were before the: -
HandG.setPosition(path[currStep]);
See https://github.com/RichardMaher/Brotkrumen/blob/master/HandleMap.js line# 321 for complete example.
How can this possibly be? When can the SRC attribute completely divorce itself from its IMG tag?
This didn't use to happen a few years ago (trust me :-). Has there been some optimization in Chrome? Buffering of map marker moves? Can I turn this behaviour off?
The "optimize" marker option is set to false. Anyway surely we can just forget Google Maps here, this is just a basic HTML DOM issue right?
NB: PLEASE keep your opinions on the wisdom or otherwise of relying on how Google Maps renders markers to yourselves. I consider myself suitably chastised, and if you have a better way to smoothly transition markers from point to point over a set period of time then please mail me directly. What I'm asking is what witchcraft (my destination marker is a witch's hat :-) is at work here.
Edit 1
I'll investigate the FF debugging options but am really leaning towards a novel Google Maps optimization, because the first leg of the trip completes as expected but then my polygon.setPath and my marker.setPosition calls don't take effect progressively. The marker (img) moves after 2 more legs (then on the final leg) and no further path is plotted till the last leg completes.
Edit 2
Please note: - The https://github.com/RichardMaher/Brotkrumen repository can be cloned by anyone! Just stick it in an internet-facing folder/directory and go for "https://your.domain/TravelManager.html". You'll need a Google Maps API key to use the maps and at least a couple of GPS readings before you can press "Arrive".
Thanks to @Bravo I ran it on Firefox(1) and got a slightly more illuminating response. It does now look like a coding issue (an event race condition or some such), as the pattern displayed was: -
The first leg (as with Chrome) Transitioned from A to B in correct time.
The second leg was skipped then HandG floated up to the third geolocation position but with what appeared to be a combined duration?
Likewise, the 4th leg was not visible but the 5th was peachy.
Unlike Chrome the progressive path was in sync with the marker.
So, yes, it looks like my code is firing 2 events before the browser can give me a transition and an accompanying transitionend.
(1) I have work to do on FF aesthetic compatibility :-( Also (I'm not asking you to teach me FF debugging), with Chrome remote USB debugging I get to enter the URL on my PC and it appears in the phone's browser. I can then unplug it, go for a walk around the block, connect it back to the USB again, press "inspect", and have a full debug session going. On FF I just entered http://localhost:1234 into the phone browser and it activated, but I couldn't see how to get a debug session happening.
Please feel free to delete the question because, as pointed out by @Bravo, it is a red herring :-(
The only take-away is: - There are now at least two of us who strongly recommend using Firefox to debug your mobile web apps and avoiding Chrome!
Once I could believe my eyes again, I realized I was getting 2 transitions for my marker reposition: one for each transitioned property (height and width).
It is with hand on heart that I tell you that several years ago I only got one transition for the multi-property move, but that's neither here nor there. We are where we are.
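For anyone landing here with the same symptom: transitionend fires once per animated CSS property, so a single multi-property move delivers multiple events. A hedged sketch of the kind of filter that avoids double-stepping (the property name 'left' and the handler names are my own assumptions, not from the Brotkrumen code):

```javascript
// transitionend fires once per transitioned property, so a move that
// animates two properties produces two events. Advance the trip only
// on one chosen property to avoid skipping legs.
function shouldAdvance(evt, watchedProperty) {
  return evt.propertyName === watchedProperty;
}

// Browser wiring (assumed element and handler names):
// markerImg.addEventListener('transitionend', function (e) {
//   if (shouldAdvance(e, 'left')) advanceToNextLeg();
// });
```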
Thanks again to https://stackoverflow.com/users/10549313/bravo
I aim to create this pattern of game play with one main Lobby Scene and other GameScenes opening through window.open.
But even when I try this with an empty hello-world project opening multiple windows, I am restricted by a heavy drop in FPS.
So basically I need to know whether this setup is possible in Cocos Creator: can even four windows render simultaneously without FPS taking a hit?
Any guidelines that can be provided to help achieve this will be appreciated.
The game in the reference pic is, I think, made with Angular; maybe that's why it is so smooth even after ten windows.
My team posted an issue on the cocos2d-x forum but no help: - https://discuss.cocos2d-x.org/t/help-regarding-multi-window-game-in-cocos-creator/42688
After a little digging, and going by your answer in the comment, I think you can try a different approach: a "split-screen game". I believe that when a new window is opened it uses the same assets, and that is what drops the FPS.
I don't know what the best practice for a "split-screen game" is, but I have one suggestion on how to implement it:
Create a prefab template of the main screen.
Create different layers (node) for each screen
Add the prefab to the layer, for example :
layer with 1 screen - 1 prefab
layer with 2 screen - 2 prefabs (duplicate prefab)
etc.
If you move between screens (layers), don't forget to set the previous layer's node to active = false and destroy all of its children.
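The switching step can be sketched engine-agnostically. In Cocos Creator the equivalent calls would be flipping node.active and calling node.destroyAllChildren(); here plain objects stand in for nodes so the bookkeeping is visible (all names are mine):

```javascript
// Minimal layer switcher: deactivate the current layer, drop its
// children, then activate the requested one. "Nodes" are plain
// objects here; in Cocos Creator you would set node.active = false
// and call node.destroyAllChildren() instead.
function makeLayer(name) {
  return { name: name, active: false, children: [] };
}

function switchLayer(current, next) {
  if (current) {
    current.active = false;
    current.children.length = 0; // stand-in for destroyAllChildren()
  }
  next.active = true;
  return next;
}
```

Keeping only one layer's prefab instances alive at a time is what keeps the draw-call count from stacking up as screens accumulate.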
Also, I think your draw calls are a little high even for a one-window app; maybe check that too.
I hope I helped you.
Oculus Connect 3
I know this site was built with React. I want to know more about how to get that background animation and mouse-over effect. It's simply awesome to experience. Based on this I will decide whether to go with React or Angular 2.
If you open your browser's inspection tool (almost always F12) you can see the layout of the webpage. It contains a canvas element with the id "grid". The animation is made using this.
The animation itself looks like a simple node graph, where if you move your cursor the nodes close to the cursor try to stay away from it, thus creating an explosion-like effect.
If your cursor stays fixed for 2-3 seconds, the animation switches to a point moving randomly across the page instead.
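The repulsion itself is just a bit of vector math. A minimal sketch of a per-node update (the names and the linear falloff are my guesses at what the demo does, not its actual code):

```javascript
// Push a node away from the cursor with a linear falloff: the closer
// the node, the stronger the push. Nodes outside the radius, or
// sitting exactly on the cursor (to avoid division by zero), are
// left where they are.
function repel(node, cursor, radius, strength) {
  const dx = node.x - cursor.x;
  const dy = node.y - cursor.y;
  const dist = Math.hypot(dx, dy);
  if (dist >= radius || dist === 0) return { x: node.x, y: node.y };
  const push = strength * (1 - dist / radius);
  return { x: node.x + (dx / dist) * push, y: node.y + (dy / dist) * push };
}
```

Run that over every node each animation frame, draw lines between nearby nodes on the canvas, and you get the explosion-like effect described above.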
I doubt this animation uses much of any of the libraries you mentioned in your question, so deciding which one of them to use based on this demo (which is, let's be honest, at most 200 lines of vanilla JavaScript) is like deciding what to eat for breakfast based on the food statistics of Mongolia.
Also, animations like this are what scare off many users. I don't think you can show me any big multimedia or social network site which has an animation close to this.
There is an idea I have been toying with for the past few weeks. I am extremely serious about realising this concept, but I totally lack any know-how about the implementation. I have some thoughts which I'll share as I explain what the idea is.
We have websites. Most of them are responsive.
What is responsive web design?
By and large, responsive web design means that design and development should respond to the user's behaviour and environment based on screen size, platform and orientation. If I change my window size, my website too should change its dimensions accordingly.
If I scale some object on the screen, then the website should rearrange/ rescale accordingly.
This is good, but nothing exciting (nowadays!).
I am still so limited by the screen, and by whatever happens inside it, that what I do outside it seems very external, not seamless. Apart from my contact with the mouse/keyboard, I do not know of any other way to communicate with the things that happen inside the screen. My screen experience is still a very external affair, not seamless with my cognition. I think this is also a reason why my computer always remains a computer and does not behave like a natural extension of the human body.
I was toying with an idea which I have no clue how to realize. I have a basic idea, but I need some technical/design help from those who are fascinated by this as much as I am.
The idea is simple: Make the screen more responsive, but this time without use of a mouse or any such input device. All laptops and most desktops have a microphone. How about using this small device for input?
Let me explain:
Imagine, a website in which screen icons repopulate when you blow a whiff onto the screen. The song on your favourite playlist changes when you whistle to the screen.
Say you have an animated tree on the screen. That tree sheds its leaves when you blow air to it. The shedding depends on how fast you blow. Getting a small idea?
Let me put up some graphics (see the hyperlink after this paragraph) which I think will make it clearer. I plan to make a website/API in which there is a person with long hair staring at you. If you blow air from the right side of your screen, her hair moves to the left. If you blow air from the left, her hair blows to the right. If you blow faintly, her hair scatters faintly. Some naughty stuff: say you whistle - the character on the screen winks, or throws a disgusted expression - whatever.
The whole concept is that every element of the web must have a direct relation with the user who is sitting outside the screen. It gives a whole lot of realism to the website architecture if things like my whistle, whiff or say even my sneeze can do something to the website! I am not tied to the mouse or the keyboard for my response to be noted. Doesn’t that reduce a hell of a lot of cognitive load on the user?
See this image: http://imgur.com/mg4Whua
Now coming to the technical aspect that I need guidance on.
If I was building a regular responsive website in JavaScript, I'd use addEventListener("click", animate) or addEventListener("resize", animate) - something like that. Here I want my event handler to be the audio input coming from the microphone. Also, I need to know the place the audio is originating from, so that I can decide which side the hair must fall and play that animation.
So across the microphone's 180/360-degree span, I need to capture not just the audio but also its angle, so that the right animation can be played. It'd be a crashing failure if the same animation played wherever I blew air. It needs to have that element of realism.
I have asked around and some people suggested that I try HTML5's WebRTC. I am still seeing if that works, but otherwise are there any more options? I guess Processing is one. Has anyone worked with its audio features?
I want to build a simple prototype first before I delve into the immense possibilities this idea could have. But if you have some really awesome thing in mind, please let me know about it. Exciting ideas are one thing, and exciting implementation totally another. We want both.
Are there such websites already? Any work happening in this side?
Any small guidance counts!
There are plenty of ways to create your own events. Most libraries have some built-in way of doing so. Basically you're talking about the observer pattern and here's a nice article to explain it in greater detail: https://dottedsquirrel.com/javascript/observer-pattern/
Also, as far as listening to audio goes: with an AnalyserNode on the input signal and some ingenious code to determine that the sound is the one you want to listen for, firing the event is a piece of cake using the aforementioned custom events.
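To make that concrete, here is a hedged sketch: a blow into the mic shows up as sustained broadband energy, and the RMS of the analyser's time-domain samples is a crude but serviceable detector. The threshold (0.2) and the event name 'blow' are my own assumptions; tune both against a real mic.

```javascript
// RMS (root mean square) of a block of time-domain samples: a cheap
// loudness measure that spikes when someone blows into the mic.
function rms(samples) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  return Math.sqrt(sum / samples.length);
}

// Browser wiring (assumed; runs only in a browser with mic access):
// navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
//   const ctx = new AudioContext();
//   const analyser = ctx.createAnalyser();
//   ctx.createMediaStreamSource(stream).connect(analyser);
//   const buf = new Float32Array(analyser.fftSize);
//   (function poll() {
//     analyser.getFloatTimeDomainData(buf);
//     if (rms(buf) > 0.2) document.dispatchEvent(new CustomEvent('blow'));
//     requestAnimationFrame(poll);
//   })();
// });
```

Anything on the page can then react via document.addEventListener('blow', ...), which is exactly the custom-event pattern described above.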
But before diving into those: determining the angle of the sound? I do not think that is even possible. You might be able to tell which side the sound originates from with a stereo setup, but that won't give you a precise angle. I think you'd need something rather more ingenious than a simple stereo mic setup to determine the angle.
I've seen similar questions asked and the answers were not quite what I'm after. Since this question is slightly different, I'm asking again - Hopefully you'll agree this isn't a duplicate.
What I want to do: Generate an image showing the contents of my own website as seen by the user (actually, each specific user).
Why I want to do it: I've got some code that identifies places on the page where the user's mouse hovers for a significant length of time (people tend to move the mouse to areas of interest). I also record click locations. These are recorded as X/Y coordinates relative to the top-left of the page.
NB: This is only done for users who are doing usability testing.
I'd ideally like to be able to capture a screenshot and then use something server-side to overlay the mouse data on the image (hotspots, mouse path, etc.)
The problem I have is that page content is very dynamic (not so much during display but during server-side generation) - depending on the type of user, assigned roles, etc... whole boxes can be missing - and the rest of the layout readjusts accordingly - consequently there's no single "right" screenshot for a page.
Option 1 (which feels a little nasty): would be to walk the DOM and serialize it and send that back to the server. I'd then open up the appropriate browser and de-serialize the DOM. This should work but sounds difficult to automate. I suspect there'd also be some issues around relative URLs, etc.
Option 2: Once the page has finished loading, capture an image of the client area (I'd ideally like to capture the whole length of the page but suspect this will be even harder). Most pages don't require scrolling so this shouldn't be a major issue - something to improve for version 2. I'd then upload this image to the server via AJAX.
NB: I don't want to see anything outside the contents of my own page (chrome, address bar, anything)
I'd prefer to be able to do this without installing anything on the end-user pc (hence javascript). If the only possibility is a client-side app, we can do that but it will mean more hassle when getting random users to usability test (currently, we just email friends/family/guinea pigs a different URL)
One alternative solution would be to "record" the positions and dimensions of the main structural elements on the page:
(using jQuery)
var pageStructure = {};
$("#header, #navigation, #sidebar, #article, #ad, #footer").each(function() {
var elem = $(this);
var offset = elem.offset();
var width = elem.outerWidth();
var height = elem.outerHeight();
pageStructure[this.id] = [offset.left, offset.top, width, height];
});
Then you send the serialized pageStructure along with the mouse-data, and based on that data you can reconstruct the layout of the given page.
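For the sending step, the layout snapshot and the mouse log can be bundled into one JSON payload; a minimal sketch (the '/usability-log' endpoint name is invented, and the pure part is split out so it works outside a browser too):

```javascript
// Bundle the layout snapshot and the mouse data into one JSON string.
// URL and viewport are included when running in a browser, since the
// layout only makes sense at the viewport size it was captured at.
function buildPayload(pageStructure, mouseData) {
  return JSON.stringify({
    url: typeof location !== 'undefined' ? location.href : null,
    viewport: typeof window !== 'undefined'
      ? { w: window.innerWidth, h: window.innerHeight }
      : null,
    structure: pageStructure,
    mouse: mouseData
  });
}

// Browser usage (assumed endpoint name):
// navigator.sendBeacon('/usability-log', buildPayload(pageStructure, mouseData));
```

On the server you can then redraw the recorded boxes at their recorded offsets and overlay the hotspots, without ever shipping a screenshot.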
One thing we always talk about where I work is the value of ownership vs the cost required to make something from scratch. With the group I have, we could build just about anything...however, at a per-hour rate in the $100 range, it would need to be a pretty marketable tool or replace a very expensive product for it to be worth our time. So, when it comes to things like this that are already done, I'd consider looking elsewhere first. Think of what you could do with all that extra time....
A simple, quick Google search found this: http://www.trymyui.com/ It's likely not perfect, but it shows that solutions like this are out there and already working/tested. Or you could download a script such as this heatmap. Obviously, you'd need to add a bit to allow you to re-create what was on the screen while the map was created.
Good Luck.
IMO, it's not worth reinventing the wheel. Just buy an existing solution like ClickTale.
http://www.clicktale.com/