We have a client who has asked us to build an app for a music artist (in a short period of time). It's generally basic stuff, other than one feature...
They would like it to be able to 'listen' and distinguish between certain songs at a concert, so that it can trigger a jQuery/JavaScript function that makes the phone display a certain colour during a certain song. It doesn't have to actually detect the song - the BPM would possibly suffice.
To add to the complication, I am a front-end developer - so the app would need to be HTML5 and written using PhoneGap, and it needs to run on iOS.
I have no idea how to approach this or whether it's even possible - but if anyone has any ideas, please let me know!
Implementing BPM analysis with web tools sounds pretty scary...
Have a look at this DSP library: https://github.com/corbanbrook/dsp.js/ and this beat-detection project: http://sourceforge.net/projects/beatdetektor/
Not sure if you can do this in real time, though.
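For what it's worth, a minimal sketch of how the pieces might fit together, assuming BeatDetektor's process(seconds, spectrum) method and its win_bpm_int field (BPM x 10) behave as its README describes. Two caveats: BeatDetektor expects a dsp.js-style magnitude spectrum, so the analyser's dB output may need rescaling, and microphone access from a PhoneGap WebView on iOS will likely need a native plugin:

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  var ctx = new AudioContext();
  var analyser = ctx.createAnalyser();
  analyser.fftSize = 1024;
  ctx.createMediaStreamSource(stream).connect(analyser);

  var bd = new BeatDetektor();                      // default ~85-169 BPM range
  var spectrum = new Float32Array(analyser.frequencyBinCount);

  function tick() {
    analyser.getFloatFrequencyData(spectrum);       // dB values; may need rescaling
    bd.process(ctx.currentTime, spectrum);          // feed one FFT frame
    if (bd.win_bpm_int > 1200) {                    // i.e. above 120 BPM
      document.body.style.background = '#ff2255';   // fast song: one colour
    } else if (bd.win_bpm_int > 0) {
      document.body.style.background = '#2255ff';   // slower song: another
    }
    window.requestAnimationFrame(tick);
  }
  tick();
});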
Thanks kindly for your time and attention. I recognise this is a long shot, but I'm hoping someone might be generous enough to relay some advice or guidance. I am in the beginning phases of researching how I might build an app for a mobile device using JavaScript and related tools, libraries and packages.

The concept for the app is that it will access the device camera, recognise faces, and overlay animated AR assets onto the device display. However, I want users to couple their phone with a wearable headset and use the app through a split-screen, VR-style display. I assume I'll need some sort of VR wrapper for the core AR application.

At the moment, I am focussed primarily on the graphical display and UI aspects, so that I can build a proof of concept to test whether the idea is even viable. I recognise I may be misguided to attempt this in JavaScript rather than native mobile languages - if this is the case, I would welcome any opinions on the matter. I'm asking about JavaScript because that's what I know, basically.
Thus far, I've been reading about various libraries such as WebXR, Three.js and others. I assume I'll need to use React Native, though it's not easy to get a clear sense of whether I should even be trying to achieve what I want in JavaScript. I have no code to show as yet.
Additionally, I recognise there are similar questions already posted to the forum - for example, this one: VR+AR on mobile phone.
I haven't found any recent threads that address this specific set of requirements so I do apologise if I've missed something. If there is info on the forum, grateful if someone could point me to the relevant thread. At the very least, thanks for reading. Cheers, all.
I found Snap's Lens Studio extremely intuitive and powerful. It provides templates for feature recognition, tracking, and physics, plus advanced controls for custom creative work. I'd also expect it to keep receiving feature development, and it can be monetized.
...or do you want to expose yourself to more computer vision terminology and patterns? Try searching CodePen or CodeSandbox for terms such as MediaPipe, OpenCV, and face detection webcam. The broader overhead of a VR/AR app - topics like pupil distance, foot tracking, and predictive tracking - is probably best covered in an O'Reilly book or a John Carmack keynote.
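If you want to prototype the face detection part in the browser first, here is a minimal sketch using the experimental Shape Detection API's FaceDetector (Chrome-only and possibly behind a flag, so treat it as illustrative - a library such as MediaPipe or face-api.js would be the more portable route):

const video = document.querySelector('video');
navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream;
  video.play();
  const detector = new FaceDetector({ fastMode: true });  // experimental API
  async function scan() {
    const faces = await detector.detect(video);
    faces.forEach(({ boundingBox }) => {
      // Position your animated AR overlay using boundingBox.x, .y,
      // .width and .height (e.g. an absolutely positioned <canvas>).
    });
    requestAnimationFrame(scan);
  }
  scan();
});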
I'm working on a school project this semester and I want to try to program a Phantom 3 Standard to fly some simple flight paths. Prior to acquiring the Phantom 3, I was playing around with Parrot's AR.Drone 2.0. I was able to write a couple of files using JavaScript and Node.js to help me program that drone for autonomous flight. I would like to do something similar with the Phantom 3, but it seems a lot more complex than just downloading something like Node.js (you have to sign up as a developer on DJI's website, and I don't think the SDK is easy enough for me to understand).
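For reference, the sort of Node.js script I mean for the AR.Drone 2.0 (a rough sketch using the community ar-drone package) looked something like this:

var arDrone = require('ar-drone');       // npm install ar-drone
var client = arDrone.createClient();

client.takeoff();
client.after(5000, function () {
  this.clockwise(0.5);                   // spin at half speed
}).after(3000, function () {
  this.stop();                           // hover
  this.land();
});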
Does anyone have any recommendations on how to do this? Like I said, it would be optimal if programming the Phantom 3 could be as easy as programming the AR.Drone 2.0 - downloading something like Node.js and running some scripts. Thank you!
You can choose to program for iOS or Android. Your mobile device plugs into the remote controller, and it can control the UAV as long as the RC is in autonomous mode. I will talk about the Android code, as I haven't used the iOS SDK, but I assume it's similar.
Creating a developer account is simple. You just enter your information on DJI's website; the form only takes a minute to fill out. This data is used in your manifest file, and when your app starts for the first time it will connect to DJI's servers to verify your account.
The Android project has a sample application, which can get you started. You can download DJI's sample, and be up and running in 30 minutes (provided you know how to make Android apps).
In my opinion, the DJI SDKs are EXTREMELY buggy. I have been using the Android SDK for over a year, and have briefly used their onboard SDK. Their code is sloppy, their documentation incomplete, and their support non-existent. So if you end up using DJI's SDK, you can be up and running in a short period of time, but expect that the only help you'll get is on Stack Overflow.
The simplest way to start with the DJI SDK is to use the DJI UI Library.
https://github.com/dji-sdk/Mobile-UILibrary-Android
https://github.com/dji-sdk/Mobile-UILibrary-iOS
It is a suite of ready-to-use UI components. You just drop these elements into your Android or iOS application and they should work with DJI products.
Glhf
I was wondering if there is a way to control audio output device patching in HTML5/JavaScript. For example, the user might want one sound in my web app to go out of one audio device, and another sound out of a different audio device. I know the user can set the default output device on their computer, but for the web app I'm working on, I would like them to be able to send individual sounds to individual outputs while other sounds are playing, similar to the interface below (from a program called QLab).
I feel like the obvious answer is NO, and I do not want to resort to using Flash or Java. I MIGHT be okay with having to write some sort of browser plugin that interfaces with JavaScript.
So, after receiving basically zero helpful answers - and finding no further helpful information online - I think I've figured out something that we, as developers, NEED to start requesting from browser vendors and the W3C. We need to be able to request hardware access from users, in a similar fashion to how we can request a user's location or permission to send push notifications.
Until web developers are allowed the same control over hardware as native application developers, we will be left at a huge disadvantage in what we can offer our users. I don't want my users to have to install third/fourth-party plugins to gain a little more control over, or access to, their I/O. Users should not be inundated with keeping more software than just their web browser updated for websites to run well and securely. And I, for one, do not feel it should be necessary to write in more languages than HTML, JavaScript, CSS, and PHP to deliver the same experience a user would get from a native application.
I have no idea how we approach browser vendors about this, but I feel like it would be good to start doing this.
I know this is a little old, but just this year a method called setSinkId was added that you can call on a media element (video, audio) to set the device the audio will be output to.
// setSinkId lives on the DOM element itself (hence the [0]) and returns a Promise
$('#video-element')[0].setSinkId('default');
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/setSinkId
Though currently it seems only Chrome supports it; I haven't tested Firefox or other browsers.
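To target a device other than the default, you can enumerate the available outputs first. A minimal sketch (note that device labels may be empty until the user has granted a media permission):

navigator.mediaDevices.enumerateDevices().then(function (devices) {
  var outputs = devices.filter(function (d) { return d.kind === 'audiooutput'; });
  var video = document.querySelector('#video-element');
  if (outputs.length > 1) {
    video.setSinkId(outputs[1].deviceId)           // route to the second output
      .then(function () { console.log('audio rerouted'); })
      .catch(function (err) { console.error(err); });
  }
});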
I suggest you take a look at the Web Audio API:
Specs --- Tutorial
There is the destination property in the Web Audio API. However, it is a read-only property... so it is not settable.
Here:
The destination property always correlates to the default hardware output of sound, whether it’s through speakers, attached headphones, or a Bluetooth headset.
I'm working on a sound installation based on web audio and have run into the same problem. I want to map different channel outputs to different speakers. Have you had any progress on this?
This gentleman seems to have managed to do it: http://www.brucewiggins.co.uk/?p=311
I tested this out on an Apogee Quartet and it worked - outputting to 8 different channels.
I also found this article useful: http://www.html5audio.org/2013/03/surround-audio-comes-to-the-web.html
// Use all the hardware channels if at least four are exposed...
if (context.destination.maxChannelCount >= 4) {
  context.destination.channelCount = 4;
} else {
  // otherwise, let's down-mix to 2.0
  context.destination.channelCount = 2;
}
context.destination.channelCountMode = "explicit";
context.destination.channelInterpretation = "discrete";
// Note: context.destination.numberOfOutputs is read-only and cannot be assigned.
While you can certainly use the splitter and merger nodes to assign audio to specific channels on the output, the actual devices you output to are abstracted by the browser and inaccessible to your code.
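For example, a minimal sketch of per-channel assignment with a ChannelMergerNode (the channel numbers are illustrative):

var ctx = new AudioContext();
var merger = ctx.createChannelMerger(4);   // four mono inputs -> one 4-channel stream
var oscA = ctx.createOscillator();
var oscB = ctx.createOscillator();
oscA.connect(merger, 0, 2);                // oscA output 0 -> merger input 2 (channel 3)
oscB.connect(merger, 0, 3);                // oscB output 0 -> merger input 3 (channel 4)
merger.connect(ctx.destination);
oscA.start();
oscB.start();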
I have done some experiments with 8-channel virtual audio cables and relaying that data to other sound devices outside of the browser. Unfortunately, I can't find a browser that will actually open an 8-channel sound card with more than 2 channels.
Hopefully browsers will provide more options in the future. But this flexibility will never come directly to JavaScript, nor should it: the output device is an abstraction handled for you, and if the browser handles it correctly, it won't be an issue.
I've made a typing game using HTML5 and JavaScript: http://tweetraitor.herokuapp.com/.
Now I wanted to ask a couple of things:
1. Images load each time the user plays the game, even though I have used img.src to load the images at the beginning. Why are they not loading from cache?
2. How can I speed up the game in general, the bullet firing mainly?
Code for the game area is here.
https://github.com/rohit-jain/Tweetraitor/blob/master/app/views/guests/play.html.erb
Thanks!
Rohit Jain
I can tell you, for the first part of the question, that you may want to try caching the images as base64 in the user's localStorage. This way the data will already be on the client side and the client won't have to download everything each time.
Here is a blog post about how to do it.
In short, you check whether the image is already in local storage:

if (localStorage.getItem('myImageId')) {
  img.src = localStorage.getItem('myImageId');   // reuse the cached data URL
}

and if not, you save it into it:

localStorage.setItem('myImageId', image);        // image must be a base64 data URL string
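One common way to get that base64 string in the first place is to draw the image to a canvas. A rough sketch (getBase64Image is a made-up helper name):

function getBase64Image(img) {
  var canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  canvas.getContext('2d').drawImage(img, 0, 0);
  return canvas.toDataURL('image/png');   // throws on a cross-origin (tainted) canvas
}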
Our general experience with HTML5/JavaScript/CSS-based games on Android is that they are very slow - we have games we use across platforms (desktop, TVs, iPhone), but for Android we rewrite them natively in Java...
Even simple games like Pong (the bouncing ball) are slow even on new devices. The situation is much worse when you try them on underpowered ~100-150 EUR devices from ZTE, Huawei or HTC.
If you find some general solution we will be more than happy to use it, but I doubt there is any. It does not matter whether you use a framework (e.g. Sencha Touch) or not.
See this thread also.
Regards,
STeN
Basically I'm just looking to see if you can capture an image from the webcam in JavaScript. I know you can get the GPS position on an iPhone in JavaScript, so I figured there might also be extensions to capture images as well.
Ideally we'd do this in a cross-platform way, as it would mean we could develop a web app instead of a custom app. (If it requires a custom app, then we're unlikely to support the iPhone, as it's too much of a diversion from our normal development.)
The answer is (still*) no.
You can do this in Flash or, God help us, in an applet. But if you do the latter, you'll get fat, have no friends, and many puppies will die.
* This has already been asked before, but I can't find the thread.