I need to create a TeamViewer clone. Basically, I need it to work on all 3 major platforms (Windows, Mac, Linux). I'm racking my brains trying to solve this using NodeJS + Electron's desktopCapturer to capture video, and transmit it via P2P.
I was thinking of doing it with https://recordrtc.org instead of Electron, since it works right in the browser, and I could package it into an executable later.
But I think my biggest problems are:
1) How do I take the real-time video capture from Electron / RecordRTC and stream it via a server? I can only get the recorded file (.webm) afterwards, but that's useless for live screen sharing.
2) How will I control another user's mouse and keyboard? RobotJS, maybe? As if I were receiving the video on one channel and transmitting the keyboard and mouse on another?
I know how to program, I just need a pointer in the right direction!
It's for my job! (Freelancer)
Ty <3
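The two-channel idea above (screen video over a WebRTC media track, input events over a data channel, applied on the other end with RobotJS) can be sketched roughly as follows. Everything here is illustrative: the event shapes, the helper names (`encodeInputEvent`, `applyRemoteInput`), and the wiring are assumptions, not a working TeamViewer clone; robotjs (`moveMouse`, `mouseClick`, `keyTap`) is just one possible input-injection library.

```javascript
// Sketch: one WebRTC media track carries the screen, one RTCDataChannel
// carries input events. All names are illustrative.

// Pure helpers: encode/decode input events sent over the data channel.
function encodeInputEvent(event) {
  return JSON.stringify(event); // e.g. { type: "move", x: 10, y: 20 }
}

function decodeInputEvent(message) {
  return JSON.parse(message);
}

// On the controlled machine, apply a decoded event using an injected
// robot object (e.g. the robotjs package: moveMouse, mouseClick, keyTap).
function applyRemoteInput(robot, event) {
  switch (event.type) {
    case "move":  robot.moveMouse(event.x, event.y); break;
    case "click": robot.mouseClick(event.button || "left"); break;
    case "key":   robot.keyTap(event.key); break;
  }
}

// Browser/Electron side (guarded so the sketch also loads under plain Node):
if (typeof RTCPeerConnection !== "undefined") {
  // 1. Grab the screen as a live MediaStream (in Electron, desktopCapturer
  //    supplies a source id used as getUserMedia's chromeMediaSourceId).
  // 2. pc.addTrack(videoTrack, stream) streams it live; no file recording.
  // 3. const input = pc.createDataChannel("input");
  //    input.onmessage = (m) =>
  //      applyRemoteInput(require("robotjs"), decodeInputEvent(m.data));
}
```

The point of the injected `robot` argument is that the input-applying logic stays testable without a real mouse and keyboard.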
I'm currently working on an experiential project building a sound recorder.
I will use an arduino with a button to start/stop recording, so that there are no interactions with the machine (a concealed Windows laptop) at all for the end user.
Looking for a way to record the sound on the machine from a microphone, ideally in Javascript.
The sound should be recorded locally (no dependency on Wifi connection) and each sound should be saved as a separate file.
Once the project is done/installed, I will have no access to the machine any more so the files need to be easily accessed by a non technical user (hence the arduino/laptop combo and not a raspberry pi for example).
My forte is JS so I was hoping to do it using Electron but I haven't found a way to do this just yet.
I have tried the obvious navigator.mediaDevices.getUserMedia, which doesn't work in Electron for security reasons. There are a number of libraries out there, but the ones I saw either don't work at all or haven't been updated in years.
I also tried using p5.js, which despite being a bit convoluted worked quite well, but it requires user input when saving the audio file, which is not an option given the installation will have only one button to start/stop recording as an interface.
Has anybody done this or is anyone able to put me in the right direction?
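If getUserMedia can be made to work in your Electron build (it generally can once the app grants media permission), the one-file-per-recording flow looks roughly like this. This is a minimal sketch assuming an Electron renderer with Node integration, so `fs` is available next to MediaRecorder; the `recordingFilename` helper is my own naming, not an Electron API.

```javascript
// One file per recording: build a safe, sortable name from a Date.
function recordingFilename(date) {
  return "recording-" + date.toISOString().replace(/[:.]/g, "-") + ".webm";
}

// Browser/Electron side (guarded so the sketch also loads under plain Node):
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  const fs = require("fs");
  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (e) => chunks.push(e.data);
    recorder.onstop = async () => {
      // Write straight to disk: no save dialog, no user interaction.
      const buf = Buffer.from(await new Blob(chunks).arrayBuffer());
      fs.writeFileSync(recordingFilename(new Date()), buf);
      chunks.length = 0;
    };
    // Wire recorder.start() / recorder.stop() to the Arduino button,
    // e.g. via a serial-port library listening for press events.
  });
}
```

Writing the file with `fs` directly is what avoids the save-prompt problem hit with p5.js.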
I was wondering if there is a way to control audio output device patching in HTML5/JavaScript? Like, if the user wanted to have one sound in my web app to go out of one audio device, and another sound out of a different audio device. I know the user can set the default output device on their computer, but for the web app I'm working on, I would like them to be able to send individual sounds to individual outputs while other sounds are playing, similar to the interface below (from a program called QLab).
I feel like the obvious answer is NO, and I do not want to resort to using flash or java. I MIGHT be okay with having to write some sort of browser plugin that interfaces with javascript.
So, after receiving basically zero helpful answers - and finding no further helpful information online, I think I figured out something that we, as developers, NEED to start requesting from browser vendors and w3c. We need to be able to request hardware access from users, in a similar fashion that we can request to access a user's location, or how we can request to send a user push notifications.
Until web developers are allowed the same control as native application developers over hardware, we will be left at a huge disadvantage over what we can offer our users. I don't want to have my users install third/fourth party plugins to enable a little more control/access to their I/O. Users should not have to be inundated with keeping more software than just their web browser updated to have websites run well and securely. And I, for one, do not feel like it should be necessary to write in more languages than HTML, JavaScript, CSS, and PHP to get the same experience a user would get from a native application.
I have no idea how we approach browser vendors about this, but I feel like it would be good to start doing this.
I know this is a little old, but just this year a method was added called "setSinkId" that you can apply to a media element (video, audio) to set the device that audio will be outputted to.
$('#video-element')[0].setSinkId('default');
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/setSinkId
Though currently it seems only Chrome supports it. I haven't tested on Firefox or other web browsers.
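To route to something other than the default device, you can combine setSinkId with enumerateDevices. A hedged sketch follows; the `audioOutputs` filter helper is my own addition, and note that setSinkId returns a promise that may reject if the page lacks permission for that device.

```javascript
// Keep only audio output devices from an enumerateDevices()-style list.
function audioOutputs(devices) {
  return devices.filter((d) => d.kind === "audiooutput");
}

// Browser side (guarded so the sketch also loads under plain Node):
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  navigator.mediaDevices.enumerateDevices().then((devices) => {
    const outputs = audioOutputs(devices);
    const el = document.getElementById("video-element");
    if (outputs.length > 1) {
      // Route this element's audio to the second output device.
      el.setSinkId(outputs[1].deviceId).catch(console.error);
    }
  });
}
```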
I suggest you take a look at the Web Audio API:
Specs --- Tutorial
There is the destination property in the Web Audio API. However, it is a read-only property, so it is not settable.
Here:
The destination property always correlates to the default hardware output of sound, whether it’s through speakers, attached headphones, or a Bluetooth headset.
I'm working on a sound installation based on web audio and have run into the same problem. I want to map different channel outputs to different speakers. Have you made any progress on this?
This gentleman seems to have managed to do it: http://www.brucewiggins.co.uk/?p=311
I tested this out on an Apogee Quartet and it worked, outputting to 8 different channels.
I also found this article useful: http://www.html5audio.org/2013/03/surround-audio-comes-to-the-web.html
// Use four discrete output channels if the hardware supports them.
if (context.destination.maxChannelCount >= 4) {
  context.destination.channelCount = 4;
} else {
  // otherwise, let's down-mix to 2.0
  context.destination.channelCount = 2;
}
context.destination.channelCountMode = "explicit";
context.destination.channelInterpretation = "discrete";
// Note: numberOfOutputs is a read-only AudioNode property, so assigning
// to it (as some examples do) has no effect.
While you can certainly use the splitter and merger nodes to assign to specific channels on the output, the actual devices you output are abstracted by the browser and inaccessible by your code.
I have done some experiments with 8-channel virtual audio cables and relaying that data to other sound devices outside of the browser. Unfortunately, I can't find a browser that will actually open an 8-channel sound card with more than 2 channels.
Hopefully, browsers in the future will provide more options. This flexibility will never come directly to JavaScript... and nor should it. This is an abstraction done for you, and if the browser uses it correctly, it won't be an issue.
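The splitter/merger assignment mentioned above can be sketched like this. The `routingPlan` helper (round-robin assignment of sources to output channels) is my own illustration, not a Web Audio API; the real channel mapping happens through ChannelMergerNode's connect indices.

```javascript
// Assign each named source round-robin to an output channel.
function routingPlan(sources, outputChannels) {
  return sources.map((name, i) => ({ source: name, channel: i % outputChannels }));
}

// Browser side (guarded so the sketch also loads under plain Node):
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const channels = ctx.destination.maxChannelCount || 2;
  const merger = ctx.createChannelMerger(channels);
  // For each source node: source.connect(merger, 0, plan[i].channel);
  // connect(destinationNode, outputIndexOfSource, inputIndexOfMerger)
  merger.connect(ctx.destination);
}
```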
I'm trying to use the Lamba Labs Beirut Hackerspace's RPiTv.
I have configured my Raspberry Pi for it (Raspbian, node.js, omxplayer, youtube-dl...)
If I download a video and play it in omxplayer, everything works fine.
When I try to do it via the node.js app, the video plays fine, but there is no audio.
The screen is an HP ProDisplay P201 connected by a DVI cable.
Audio goes through a headset connected to the 3.5mm jack.
I can modify the code, but since I'm still learning JavaScript, I'd like to avoid it for now.
I'm thinking the lack of audio could come from:
- A limitation between omxcontrol and omxplayer? (Is the volume just set to 0 when omxplayer is called, or is there no audio at all?)
- In that case, what is the volume control option for omxcontrol? I tried Google but it seems it doesn't like me today.
- Something I did wrong when installing omxcontrol or node.js? (I assume youtube-dl and omxplayer are fine, since they play well when launched manually)
I know I'm asking a lot, but even a hint will help.
I figured it out, I just wasn't looking at the right code.
I thought the problem came from remote.js, but then I saw that omxcontrol starts omxplayer with the -o hdmi flag (in omxcontrol/index.js). Since the screen has no audio output, the audio was lost.
I removed the flag and everything works fine.
Hope this helps somebody else.
We have a client who has asked us to build an app for a music artist (in a short period of time). It's generally basic stuff, other than one feature...
They would like it to be able to 'listen' and distinguish between certain songs at a concert, so that it can trigger a jQuery/JavaScript function that makes the phone display a certain colour during a certain song. It doesn't have to actually detect the song; the BPM would possibly suffice.
To add to the complication, I am a front-end developer, so the app would need to be HTML5 and written using PhoneGap, and the app needs to be iOS.
I have no idea how to approach this or whether it's even possible, but if anyone has any ideas, please let me know!
Implementing BPM Analysis with Web Tools sounds pretty scary...
Have a look at this library: https://github.com/corbanbrook/dsp.js/ and this here: http://sourceforge.net/projects/beatdetektor/
Not sure if you can do this in real time, though.
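To show the core idea behind the simpler beat detectors (before reaching for dsp.js or BeatDetektor): compare short-window energy against the recent average, and call a window a beat when its energy clearly exceeds it. This is a toy sketch, not those libraries' actual algorithms, and the threshold is arbitrary.

```javascript
// Sum of squared samples over one analysis window.
function windowEnergy(samples, start, size) {
  let sum = 0;
  for (let i = start; i < start + size && i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }
  return sum;
}

// Count windows whose energy exceeds `threshold` times the average energy.
function countBeats(samples, windowSize, threshold) {
  const energies = [];
  for (let s = 0; s + windowSize <= samples.length; s += windowSize) {
    energies.push(windowEnergy(samples, s, windowSize));
  }
  const avg = energies.reduce((a, b) => a + b, 0) / energies.length;
  return energies.filter((e) => e > threshold * avg).length;
}

// Beats over a known duration give a crude BPM estimate.
function estimateBpm(beatCount, durationSeconds) {
  return (beatCount / durationSeconds) * 60;
}
```

On a real signal you would feed this PCM data from the Web Audio API's AnalyserNode in chunks; whether that keeps up in real time on a phone is exactly the open question above.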
I have made a typing game using HTML5 and JavaScript: http://tweetraitor.herokuapp.com/.
Now I wanted to ask a couple of things
Images load each time the user plays the game, even though I have used img.src to preload them at the beginning. Why are they not loading from the cache?
How can I speed up the game in general, the bullet firing mainly?
Code for the game area is here.
https://github.com/rohit-jain/Tweetraitor/blob/master/app/views/guests/play.html.erb
Thanks!
Rohit Jain
For the first part of the question, you may want to try caching the images as base64 in the user's localStorage. This way the data will already be on the client side and the browser won't have to download everything again.
Here is a blog post about how to do it.
In short, you check whether the image is already in local storage:
if (localStorage.getItem('myImageId')) { ... }
and, if it is not, you save it there:
localStorage.setItem('myImageId', image);
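A slightly fuller sketch of the same idea, with the storage object injectable so the cache logic is testable; `getOrStore` and `toDataUrl` are my own names, and note that canvas.toDataURL throws on cross-origin images and localStorage.setItem can throw when the quota is exceeded.

```javascript
// Return the cached value for `key`, producing and storing it on a miss.
function getOrStore(storage, key, produce) {
  let value = storage.getItem(key);
  if (value === null) {
    value = produce();
    storage.setItem(key, value); // may throw if the quota is exceeded
  }
  return value;
}

// Browser side (guarded so the sketch also loads under plain Node):
if (typeof document !== "undefined") {
  // Convert a loaded <img> to a base64 data URL via a canvas.
  const toDataUrl = (img) => {
    const canvas = document.createElement("canvas");
    canvas.width = img.width;
    canvas.height = img.height;
    canvas.getContext("2d").drawImage(img, 0, 0);
    return canvas.toDataURL("image/png"); // throws on cross-origin images
  };
  // Usage:
  // img.src = getOrStore(localStorage, "myImageId", () => toDataUrl(img));
}
```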
Our general experience with HTML5/JavaScript/CSS based games on Android is that they are very slow. We have games we use across platforms (desktop, TVs, iPhone), but for Android we rewrite them natively in Java...
Even simple games like Pong (the bouncing ball) are slow even on new devices. The situation is much worse when you try them on underpowered ~100-150 EUR devices from ZTE, Huawei or HTC.
If you find a general solution we will be more than happy to use it, but I doubt there is one. It does not matter whether you use a framework (e.g. Sencha Touch) or not.
See here also.
Regards,
STeN