How to Program Parrot AR.Drone to Fly Straight - javascript

I am using node.js & node-ar-drone to program my AR.Drone 2.0 to perform some basic flight maneuvers indoors. From what I can tell, the drone seems to never fly straight. It will always sway to the left and right, hover for a few seconds, or crash into a wall regardless of where I set the takeoff point from. In other words, if I run the same program to fly down a hallway 10 times, each time it will do something different.
If it does make it down the hallway, it will land somewhere different each time. I have tried building in counter-moves to adjust for the random swaying, such as telling it to shift to the left if it sways to the right, but it never seems to be enough. No amount of counter-moves seems to get it to fly straight. I am using the latest firmware on the drone.
I was told that there is nothing on board the drone that corrects errors during flight, such as a feedback loop. In addition, I was also told that these drones were primarily made for use outdoors or in very wide open spaces where they won't crash.
I wanted to see if this held true for anyone else, or if anyone had any suggestions to get it to fly straight. Any input or comment would be helpful.

The AR.Drone does use feedback from its combination of sensors to improve its flight, as shown in the control-loop diagram in "The Navigation and Control technology inside the AR.Drone micro UAV".
For your situation, probably the most important thing is how well attitude and speed estimation is working, which uses the accelerometers, gyrometers and cameras. There are a few things you can do to help those systems work:
1. Make sure you take off from a completely level surface.
2. Call ftrim to set the flat trim level before taking off (a minimal sketch of this takeoff sequence appears at the end of this answer).
3. The vision algorithms are designed to try to do a good job even if the surface under the downward-facing camera doesn't have much texture, but they can still get confused if the floor/ground is too featureless. Try flying over something with more texture and contrast.
For #3, flying over something like a uniformly colored carpet or a concrete floor can make it harder for the drone to see what it's doing, very similar to the problem of using an optical mouse on a smooth, featureless surface. When you see Parrot showing off the AR.Drone's abilities, you'll notice they often fly over a surface that is obviously chosen to make navigation easier; see, for example, https://www.youtube.com/watch?v=IcxBf-kegKo and https://www.youtube.com/watch?v=pEMD6P_j5uQ#t=8m25s.
That said, with my drone I've sometimes experienced situations where immediately on takeoff, the drone veers off to the side until it crashes even though I called ftrim and thought I took off from a flat surface. You may need to use trial and error to find a good takeoff point.
The drone is designed to be able to fly indoors (e.g. the styrofoam hull with the propeller protectors is recommended for indoor flight but not recommended for outdoor flight, and the FreeFlight app has indoor & outdoor flight modes), but in my experience the drone still wanders a bit and so you'll have the best results in a larger room.
Here's a demo where my drone flies in a very stable manner indoors, in a large room, with well textured carpet, from a very flat location: https://www.youtube.com/watch?v=uhBa11gdbeU
Even then you can see the drone make a small, quick correction at 0:23.
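For reference, here is a minimal node-ar-drone sketch of the takeoff sequence suggested above (level surface, flat trim, then a slow straight run). It only uses the documented client API (createClient, takeoff, after, front, stop, land); whether your version of node-ar-drone exposes a flat-trim helper such as ftrim() is an assumption, so that call is guarded.

```javascript
// Minimal sketch, assuming the documented node-ar-drone client API.
// The ftrim() call is an assumption about your library version and is
// therefore guarded; if it is unavailable, set flat trim some other way.
var arDrone = require('ar-drone');
var client  = arDrone.createClient();

// Set flat trim while the drone is sitting on a level surface.
if (typeof client.ftrim === 'function') {
  client.ftrim();
}

client.takeoff();

client
  .after(4000, function () {
    this.front(0.1);   // fly forward slowly down the hallway
  })
  .after(5000, function () {
    this.stop();       // hover
    this.land();
  });
```

Even with a script like this, expect some drift indoors; the takeoff surface and floor texture tips above tend to matter more than the exact speeds.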

Related

changing layout background based on weather in users location

I am trying to make the top part of my site change background and have particles according to the user's weather and locale. I don't know how to approach this.
I have heard from some people that I should use the WeatherBug API; would that be suitable? I am brand new to JavaScript.
The four main particles I am using are:
Sun Rays for clear skies
https://codepen.io/elijahskinner/pen/dyvGyJe
Rain for the obvious
https://codepen.io/elijahskinner/pen/vYxKxoq
(I need to change the color of the rain for this one so it shows up better)
This for Snow
https://vincentgarreau.com/particles.js/#snow
I also have a falling leaves particle thing but I am not sure how to use that with weather instead of seasons.
I cannot say whether WeatherBug would be suitable (and SO is not keen on us giving opinions/recommendations on libraries and other services), but you can search for weather services and weigh up what they offer: price, accuracy, etc.
However, there are a couple of other factors in what you propose which you may like to contemplate before implementing this:
The particles for your background can be very costly in processor time. I have just tried your rays particles on my fairly modern laptop with a good GPU: nearly 30% CPU and 98% GPU were being used, and the fan was whirring. This is battery-flattening stuff, so your users may not thank you.
To get weather info for where your user is, you need to know where they are, and to do that with any accuracy you will probably want their geolocation, for which you will have to ask their permission (see the Geolocation API on MDN).
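If it helps, here is a minimal sketch of that flow: ask for geolocation, look up the weather for the returned coordinates, and pick one of your particle presets. The weather URL, its response shape, and the load*Particles() helpers are hypothetical placeholders to swap for whichever service and CodePen setups you end up using.

```javascript
// Minimal sketch: geolocation -> weather lookup -> particle preset.
// The weather URL, response shape, and load*Particles() helpers are
// hypothetical placeholders.
navigator.geolocation.getCurrentPosition(
  function (position) {
    var lat = position.coords.latitude;
    var lon = position.coords.longitude;

    // Hypothetical weather service call; replace with your chosen API.
    fetch('https://example.com/weather?lat=' + lat + '&lon=' + lon)
      .then(function (res) { return res.json(); })
      .then(function (weather) {
        if (weather.condition === 'rain') {
          loadRainParticles();
        } else if (weather.condition === 'snow') {
          loadSnowParticles();
        } else {
          loadSunRayParticles();
        }
      });
  },
  function () {
    // Permission denied or lookup failed: fall back to a default background.
    loadSunRayParticles();
  }
);
```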
My apologies that this is not a complete answer to your actual question but it got too long for a comment.

Training Neural Network to play flappy bird with genetic algorithm - Why can't it learn?

I have been learning about neural networks and genetic algorithms, and to test my learning I have tried to make an AI that learns to play Flappy Bird.
I have left it running for at least 10 hours (overnight and longer), but the fittest member still fails to show any significant advancement over when I began the simulation, apart from avoiding the floor and ceiling.
The inputs are the lengths of a set of rays cast from the bird that act as sight lines, plus the bird's vertical velocity. It seems that the best bird is essentially ignoring all the sight lines except the horizontal one, and jumping when that one gets very short.
The output is a number between 0 and 1; if the output is larger than 0.5, the bird jumps.
There are 4 hidden layers with 15 neurons each: the input layer feeds forward to the first hidden layer, each hidden layer feeds forward to the next, and the final hidden layer feeds forward to the output. The DNA of a bird is an array of real numbers representing the weights of its neural network. I have made another project using the same style of neural network and genetic algorithm, in which ants had to travel to food, and it worked perfectly.
Here is the code: https://github.com/Karan0110/flappy-bird-ai
Please say in the comments if you need any additional information.
Please can you say whether my method is flawed or not, as I am almost certain the code works correctly (I took it from the previous working project).
I like your idea, but I suggest you change some things.
Don't use a network with a fixed structure. Look up NeuroEvolution of Augmenting Topologies (NEAT) and either implement it yourself or use a library like neataptic.
I don't believe your network needs that many inputs. I believe 3-5 sensors (20-50° gaps) would be enough, since many of the input values seem to be very similar.
If you are not sure why exactly your project is not working try this:
Try viewing an image of your current best network. If the network doesn't take important sensors (like the velocity) into account, you'll see it instantly.
Make sure all of your sensors are working fine (they look fine in your screenshot) and be sure to normalize the values in a meaningful way.
Check if the maximum & average score increases over time. If it doesn't, your GA isn't working properly, or your network receives inputs that are not good enough to solve the problem.
One trick that helped me out a lot is to keep the elite of the GA in a separate array. Only replace elite networks if some other network has performed better than the elite. Keep the elite through all the generations, so once your algorithm finds an extraordinarily good solution, it won't be lost in any future generation if nothing else performs better.
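As an illustration, here is a minimal, generic sketch of that elitism bookkeeping in plain JavaScript. It is not tied to your repository; evaluate() (returns a fitness score) and breed() (produces offspring from the scored population) are hypothetical placeholders.

```javascript
// Minimal elitism sketch. evaluate(net) and breed(scored) are hypothetical
// placeholders for your own fitness function and reproduction step.
var ELITE_SIZE = 5;
var elite = []; // best networks seen so far, kept across all generations

function nextGeneration(population) {
  // Score the current population.
  var scored = population.map(function (net) {
    return { net: net, fitness: evaluate(net) };
  });

  // Merge with the stored elite and keep only the overall best.
  elite = elite.concat(scored)
    .sort(function (a, b) { return b.fitness - a.fitness; })
    .slice(0, ELITE_SIZE);

  // Breed the next generation, but always re-seed it with the elite so an
  // extraordinarily good solution is never lost.
  var offspring = breed(scored);
  return elite.map(function (e) { return e.net; })
    .concat(offspring)
    .slice(0, population.length);
}
```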

How do I detect WebRTC shapes and object position in JavaScript?

I am looking to validate when an object is being held close enough to the camera in WebRTC, and thought that I might detect a shape and compare its bounds to the size of the stream. It seemed like a simple enough task, but I am having trouble detecting shapes with JavaScript. There are lots of examples of detecting faces or parts of faces with Haar cascades, but I am not sure if that is really what I should be looking for. The end goal would be something similar to banking apps that take a picture of a check once it's lined up or taking up enough space in the stream. I just want to let the user know that they have the item they will be taking a picture of centered and close enough to the camera. I have been looking at jsFeat, which seems pretty cool and works well with predefined cascades such as faces, but how do I detect shapes, or at least the position of the main item in a video stream, without training my own cascades?
Wow, shape recognition in a video stream sounds like a challenge, and it will need a powerful processor. These links may steer you in the right direction.
The first is titled "Object Detection with HTML5 getUserMedia"; it is a discussion of face recognition using JavaScript and provides a bunch of links to projects:
http://techslides.com/object-detection-with-html5-getusermedia
The second is tracking.js
The tracking.js library brings different computer vision algorithms and techniques into the browser environment. By using modern HTML5 specifications, we enable you to do real-time color tracking, face detection and much more — all that with a lightweight core (~7 KB) and intuitive interface.
https://trackingjs.com/
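For the "close enough to the camera" part specifically, a rough sketch with tracking.js could compare the tracked bounding box to the video size. This uses the documented ColorTracker; the magenta colour and the 0.4 coverage threshold are assumptions you would tune for your actual object.

```javascript
// Rough sketch using tracking.js. Assumes a <video id="video" width="400"
// height="300"> element; the colour and coverage threshold are assumptions.
var tracker = new tracking.ColorTracker(['magenta']);

tracking.track('#video', tracker, { camera: true });

tracker.on('track', function (event) {
  var video = document.getElementById('video');

  event.data.forEach(function (rect) {
    // Fraction of the frame covered by the detected object.
    var coverage = (rect.width * rect.height) / (video.width * video.height);

    if (coverage > 0.4) {
      // Object looks close enough to the camera.
      console.log('Hold still, capturing...');
    }
  });
});
```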
Have fun, it sounds like a cool project!

ThreeJS human body painting

Having been misdiagnosed over the past 6 months, only to find a long overdue pneumothorax (lung-collapse) two weeks ago, I've decided to create a tool to help patients visually describe their pain-related symptoms.
The idea revolves around projecting the human body (only the skin layer) and allowing the user to paint radiating/local/stabbing etc. pain on top of this model.
I've researched the area a bit, but just can't work out how the user painting should be facilitated.
My question is therefore: if I (as the user) want to paint a red area WITHIN, say, the left thigh, does the scene need to have rendered selectable objects (imagine 3D pixels) within said thigh?
In other words, the user chooses to draw, say, 50x50x50 "pixels" within the upper leg, while the leg itself is 100x140x250.
Am I going about this the right way? I've never worked with WebGL, but I consider myself very comfortable with JavaScript.
By the way, if anyone is interested in helping me in any way, you are more than welcome - I plan for the tool to be 100% free to use.
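For what it's worth, a common alternative to filling the body with selectable 3D "pixels" is to raycast from the pointer onto the skin mesh and tint the vertices near the hit point. Below is a minimal, hedged three.js sketch of that idea; it assumes an existing scene, camera and a mesh called bodyMesh whose geometry has a color attribute and whose material uses vertex colours, all of which are assumptions about your setup.

```javascript
// Minimal sketch of surface painting via raycasting in three.js.
// Assumes existing `camera` and `bodyMesh` objects; bodyMesh.geometry must
// have a 'color' attribute and its material must enable vertexColors.
var raycaster = new THREE.Raycaster();
var pointer = new THREE.Vector2();
var brushRadius = 0.05; // world units, an arbitrary assumption to tune

window.addEventListener('pointermove', function (event) {
  if (event.buttons !== 1) return; // paint only while the button is held

  // Convert the pointer position to normalized device coordinates.
  pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
  pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;

  raycaster.setFromCamera(pointer, camera);
  var hits = raycaster.intersectObject(bodyMesh);
  if (hits.length === 0) return;

  var hitPoint = hits[0].point; // world-space intersection
  var position = bodyMesh.geometry.attributes.position;
  var color = bodyMesh.geometry.attributes.color;
  var vertex = new THREE.Vector3();

  // Tint every vertex within the brush radius of the hit point red.
  for (var i = 0; i < position.count; i++) {
    vertex.fromBufferAttribute(position, i);
    bodyMesh.localToWorld(vertex);
    if (vertex.distanceTo(hitPoint) < brushRadius) {
      color.setXYZ(i, 1, 0, 0);
    }
  }
  color.needsUpdate = true;
});
```

Whether this is better than volumetric (voxel) selection depends on whether you need the pain region to have depth inside the limb or just a painted surface area.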

In JavaScript, is it possible to have audio input as an event listener? (Idea construction)

There is an idea which I have been toying with for the past few weeks. I am extremely serious about realising this concept, but I totally lack any know-how about the implementation. I have some thoughts which I'll share as I explain what the idea is.
We have websites. Most of them are responsive.
What is responsive web design?
By and large, responsive web design means that design and development should respond to the user's behaviour and environment based on screen size, platform and orientation. If I change my window size, my website should change its dimensions accordingly.
If I scale some object on the screen, then the website should rearrange/ rescale accordingly.
This is good, but nothing exciting (nowadays!).
I am still so limited by the screen, and by whatever happens inside it, that what I do outside of it feels external and not seamless. Apart from my contact with the mouse/keyboard, I do not know any other way to communicate with the things that happen inside the screen. My screen experience is still a very external thing, not seamless with my cognition. I think this is also a reason why my computer always remains a computer and does not behave like a natural extension of the human body.
I was toying with an idea which I have no clue how to realize. I have a basic idea, but I need some technical/design help from those who are fascinated by this as much as I am.
The idea is simple: Make the screen more responsive, but this time without use of a mouse or any such input device. All laptops and most desktops have a microphone. How about using this small device for input?
Let me explain:
Imagine a website in which screen icons repopulate when you blow a whiff onto the screen. The song on your favourite playlist changes when you whistle to the screen.
Say you have an animated tree on the screen. That tree sheds its leaves when you blow air to it. The shedding depends on how fast you blow. Getting a small idea?
Let me put up some graphics (see the hyperlink after this paragraph) which I think will make it clearer. I plan to make a website/API in which there is a person with long hair staring at you. If you blow air from the right side of your screen, her hair moves to the left. If you blow air from the left, her hair blows to the right. If you blow faintly, her hair scatters faintly. Some naughty stuff: say you whistle, and the character on the screen winks, or throws a disgusted expression - whatever.
The whole concept is that every element of the web must have a direct relation with the user who is sitting outside the screen. It gives a whole lot of realism to the website architecture if things like my whistle, whiff or say even my sneeze can do something to the website! I am not tied to the mouse or the keyboard for my response to be noted. Doesn’t that reduce a hell of a lot of cognitive load on the user?
See this image: http://imgur.com/mg4Whua
Now coming to the technical aspect that I need guidance on.
If I were building a regular responsive website in JavaScript, I'd use addEventListener("click", animate) or addEventListener("resize", animate) - something like that. Here I want my event handler to be the audio input that is coming from the microphone. Also, I need to know the place from which the audio is originating, so that I can decide which side the hair must fall and play that animation.
So across the 180/360 degree span of the microphone, I need to catch not just the audio but also its angle, so that the right animation can be played. It would be a crashing failure if the same animation played wherever I blow air. It needs to have that element of realism.
I have asked around, and some people suggested that I try WebRTC in HTML5. I am still seeing if that works, but otherwise are there any more options? I guess Processing is one. Has anyone handled its audio features?
I want to build a simple prototype first before I delve into the immense possibilities this idea could have. But if you have some really awesome thing in mind, please let me know about it. Exciting ideas are one thing, and exciting implementation totally another. We want both.
Are there such websites already? Is there any work happening in this area?
Any small guidance counts!
There are plenty of ways to create your own events. Most libraries have some built-in way of doing so. Basically you're talking about the observer pattern and here's a nice article to explain it in greater detail: https://dottedsquirrel.com/javascript/observer-pattern/
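In the browser you do not even need a library for the observer part; a plain CustomEvent works as the hook. For example (the event name "blow" is just an illustrative choice):

```javascript
// Plain DOM custom events as the observer hook; 'blow' is an illustrative name.
window.addEventListener('blow', function (event) {
  // React to the detected sound, e.g. start the hair animation.
  console.log('blow detected, strength:', event.detail.strength);
});

// Somewhere in your audio-analysis code, fire the event:
window.dispatchEvent(new CustomEvent('blow', { detail: { strength: 0.8 } }));
```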
Also, as far as listening to audio goes: put an AnalyserNode on the input signal, add some ingenious code to determine that the sound is the one you want to listen to, and firing the event is a piece of cake using the aforementioned custom events.
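A rough sketch of that analysis step, assuming a simple "is it loud enough?" heuristic rather than real whiff/whistle classification (the 0.6 threshold is an arbitrary assumption):

```javascript
// Rough sketch: microphone -> AnalyserNode -> dispatch the custom 'blow' event.
// Note: some browsers only start an AudioContext after a user gesture.
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  var audioCtx = new AudioContext();
  var source = audioCtx.createMediaStreamSource(stream);
  var analyser = audioCtx.createAnalyser();
  analyser.fftSize = 256;
  source.connect(analyser);

  var data = new Uint8Array(analyser.frequencyBinCount);

  (function poll() {
    analyser.getByteFrequencyData(data);

    // Average loudness across the spectrum, normalised to 0..1.
    var sum = 0;
    for (var i = 0; i < data.length; i++) sum += data[i];
    var average = sum / data.length / 255;

    if (average > 0.6) { // arbitrary threshold, tune for a whiff vs. a whistle
      window.dispatchEvent(new CustomEvent('blow', { detail: { strength: average } }));
    }
    requestAnimationFrame(poll);
  })();
});
```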
But, before diving into those, determining the angle of the sound? I do not think that is even possible with a standard microphone. With a stereo setup you might be able to tell roughly which side the sound is coming from, but that certainly won't give you a precise angle. I think you'd need something rather more ingenious than a simple stereo mic setup to determine the angle.
