Recording sound in Electron.js

I'm currently working on an experiential project building a sound recorder.
I will use an arduino with a button to start/stop recording, so that there are no interactions with the machine (a concealed Windows laptop) at all for the end user.
Looking for a way to record the sound on the machine from a microphone, ideally in Javascript.
The sound should be recorded locally (no dependency on Wifi connection) and each sound should be saved as a separate file.
Once the project is done/installed, I will have no access to the machine any more so the files need to be easily accessed by a non technical user (hence the arduino/laptop combo and not a raspberry pi for example).
My forte is JS so I was hoping to do it using Electron but I haven't found a way to do this just yet.
I have tried the obvious navigator.mediaDevices.getUserMedia, which doesn't work in Electron for security reasons. There are a number of libraries out there, but the ones I saw either don't work at all or are outdated and haven't been updated in years.
I also tried using p5.js, which, despite being a bit convoluted, worked quite well, but it requires user input when saving the audio file, which is not an option given the installation will only have one button to start/stop recording as an interface.
Has anybody done this or is anyone able to put me in the right direction?

VS Code: How do I synchronize workspaces between multiple systems?

Update:
It turns out that what I really wanted was to be able to do remote development on my laptop, and - if I also did something local on my robot, to have the changes show up on my main development system.
Ref:
This substantially similar question was asked about 10 months ago and has received no replies since then.  As there have been a lot of improvements in VS Code since then, (and since Stack Overflow discourages "Me Too!" replies), I have decided to re-ask the question in hope that someone will notice it and reply.
Viz.: https://stackoverflow.com/questions/60034690/how-to-sync-workspace-folder-beween-host-and-remote-target
Environment:
A Windows 10 system running VS Code, both current as of this writing.
A Raspberry Pi based robot, (a GoPiGo3) that has the remote development using SSH software installed that allows my Windows 10 system to communicate with it via VS Code.
I have made an exact copy of the workspace environment, in its entirety, including the enclosing workspace folder, from the Windows 10 system to the robot, using FileZilla.
My previous workflow was to develop on the Windows box, transfer to the robot, run on the 'bot using Thonny, note any errors, and either fix them in place (within Thonny) and transfer back to the Win-10 machine, or fix them within Windows 10 and transfer back to the 'bot.
"Clumsy" is a masterpiece of understatement.
Now that I have set up Remote Development on the bot, I believe I can escape most of that.
What I notice is that within the robot's copy of the workspace, most, (if not all), of the files are now either "modified" or "untracked" and updating my GitHub repo from the 'bot will cause all kinds of confusion.
What I want is the ability to develop on either platform seamlessly. (i.e. Changes made on the one are automagically reflected on the other when next connected.) And I want to do this in such a way that the commit and/or change status is accurately reflected on both machines.
I could go into a long explanation as to why this is useful to me, but this question is long enough already.
Any help would be gratefully appreciated.
OK people, I think I have this figured out.
Lesson #1:
It turns out that my original problem was actually more about workflow and "what's the best way to do a specific something", as opposed to how to do dual-development.  So, in essence, I was asking the wrong question.
Lesson #2:
You do not have to install the entire VS Code IDE on the remote device.
That's the original mistake - I misunderstood "install VS Code on the remote device" and I installed the IDE itself in both locations.
The result was that it slowed down the robot so much that it was unusable.
Having more than one instance of the VS Code IDE installed created confusion about what was happening where.
Lesson #3:
I did not realize that VS Code can install a small server module, (like a shim of sorts), and do some SSH magic on the remote device that allows VS Code to use the remote device as if it were local to your main computer or laptop.
What you do is open VS Code on the local device, and then tell it you want to connect to a remote device for development.
Once you have that sorted out - this is very site specific and a web search is your friend - you can edit code and even execute code from your local computer and have it run on the remote device as if you were physically there.
In my case, (after experimenting with several different ways to work on my projects), I discovered that placing the VS Code IDE on my Windows based laptop and the "server", (shim) module on the robot, (with the appropriate extensions installed), provides an almost seamless environment that doesn't appreciably load the robot's processor - a Raspberry Pi 4.
Make sure the workspace on the local computer is fully up to date on GitHub (which is where my project repo is located).
Install the requisite VS Code remote development modules and make sure you can communicate with the remote system.  Exactly how to do this is specific to your environment.
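As a concrete example of the "make sure you can communicate" step: the Remote-SSH extension reads your SSH config, so a minimal entry like the one below is usually all it takes. (Host alias, address, user and key path here are placeholders for your own robot's details.)

```
# ~/.ssh/config on the local (Windows) machine -- names are placeholders
Host gopigo
    HostName 192.168.1.42
    User pi
    IdentityFile ~/.ssh/id_rsa
```

With that in place, "Remote-SSH: Connect to Host..." in VS Code will offer `gopigo` as a target.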
Either "sync" or "clone" the relevant GitHub repo down to the device using the remote dev tools in VS Code as if it were your local box.
Note that this is very system and site specific.  VS Code does a good job of helping walk you through this, and a web-search will rapidly clear up any lingering questions or issues.
Eventually you will have a fully up-to-date version on the remote platform.
When this is done, you won't have to mess with manually syncing code as the code is already on the 'bot. All you do is edit code on your local machine, (my Windows laptop for example), and run it from VS Code.
An additional advantage is that if you have to duplicate or clone the robot's workspace, or restore the workspace from a backup, (you DO have your projects in a separate folder don't you?), all the "vscode" and "git" information is located there too and you can re-open your project after moving it with everything intact.
Additionally, if you have VS Code set up on different machines in different places, it might be possible to connect to the same server endpoint and have the same environment available.
(i.e. One installation on a desktop at work and another installation on a laptop for use on the road, (or while quarantined), both connecting to the same server endpoint.)
Note: I have not done this personally and it might require further research.
What I ended up doing, workflow-wise, is the lion's share of the development within VS Code, executing remotely on the robot itself.
Sometimes, if I want to try a quick-and-dirty fix, I'll "break the fourth wall" and open an editor directly on the 'bot itself, and the file automatically shows up as "modified" within VS Code.

Best step to create simple TeamViewer clone

I need to create a TeamViewer clone. Basically, I need it to work on all 3 major platforms (Win, Mac, Linux). I'm racking my brains trying to solve this using Node.js + Electron's desktopCapturer to capture video, and transmit it via P2P.
I was thinking of doing it with https://recordrtc.org instead of Electron, since it works right from the browser, and I could package it into an executable later.
But I think my biggest problems are:
1) How do I take the real-time video capture from Electron / RecordRTC and stream it via a server? I can only get the recorded file (.webm) afterwards, but this is useless.
2) How will I control another user's mouse and keyboard? RobotJS, maybe? As if I were to watch the video on one channel and transmit the keyboard and mouse on another?
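The "second channel" idea for input can be sketched roughly like this: the viewer sends small JSON input events over a WebRTC data channel (or socket), and the host replays them with RobotJS. The message format below is my own assumption, not any standard:

```javascript
// Sketch of a control channel: the viewer sends JSON input events, the host
// replays them with RobotJS. The message shapes are assumptions of mine.

// Pure helper: validate and normalize an incoming event message.
function parseControlMessage(json) {
  const msg = JSON.parse(json);
  if (msg.type === 'move' && Number.isFinite(msg.x) && Number.isFinite(msg.y)) {
    return { type: 'move', x: msg.x, y: msg.y };
  }
  if (msg.type === 'key' && typeof msg.key === 'string') {
    return { type: 'key', key: msg.key };
  }
  throw new Error('unknown control message');
}

// Host side: apply a parsed event with RobotJS (requires `npm install robotjs`).
// The robot object is injected so this can be exercised without the native module.
function applyEvent(robot, event) {
  if (event.type === 'move') robot.moveMouse(event.x, event.y);
  else if (event.type === 'key') robot.keyTap(event.key);
}

module.exports = { parseControlMessage, applyEvent };
```

On the host you'd wire `dataChannel.onmessage = (e) => applyEvent(require('robotjs'), parseControlMessage(e.data))`, keeping video and input on separate channels as you describe.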
I know how to program, I just need a light!
It's for my job! (Freelancer)
Ty <3

Video via OMXControl doesn't have sound, omxplayer plays it fine

I'm trying to use the Lamba Labs Beirut Hackerspace's RPiTv.
I have configured my Raspberry Pi for it (Raspbian, node.js, omxplayer, youtube-dl...).
If I download a video and play it in omxplayer, everything works fine.
When I try to do it via the node.js app, the video plays fine, but there is no audio.
The screen is an HP ProDisplay P201 connected by a DVI cable.
Audio goes through a headset connected to the jack.
I can modify the code, but since I'm still learning JavaScript, I'd like to avoid it for now.
I'm thinking the lack of audio could come from:
- Is there a limitation between omxcontrol and omxplayer? (Is the volume just set to 0 when omxplayer is called, or is there no audio at all?)
- In that case, what is the volume control option for omxcontrol? I tried Google, but it seems it doesn't like me today.
- Is it something I did wrong when installing omxcontrol or node.js? (I assume youtube-dl and omxplayer are fine, since they work well when launched manually.)
I know I ask a lot, but even a hint will help.
I figured it out; I just wasn't looking at the right code.
I thought the problem came from remote.js, but then I saw that omxcontrol was starting omxplayer with the -o hdmi flag (in omxcontrol/index.js). Since the screen didn't have audio output, the audio was lost.
I removed the flag and everything works fine.
Hope it can help somebody else.

What would be the best way to program a board game controlled by one person?

My dad wants me to program a board game. This board game will be used in a classroom setting, where the teacher will control the board game for the class. So, if a student gets a question right, the teacher presses one of the buttons and the board game advances.
The game state must also be savable as a file that the teacher can save on a USB drive.
To me, the backend sounds beyond simple, including storing the data in a file. In my opinion, I don't need a database for something this simple.
However, I'm more concerned about the front end, and which programming language to use in general. From what I've discerned from the computers he needs it to run on, they are all running Windows 7. I do have all of the graphics required. It will basically be the pieces moving from one square to the next in a linear fashion, with each piece having its own path.
Now, I was considering doing it in HTML5/JS, but I am worried that the computers do not support it. My dad is looking into it, including if they could install Chrome on the computers. I believe this would be ideal, as he also expressed interest in an iPad version, which would just work as a web page then.
But if these computers could not support HTML5/JS, or you more experience people tell me that HTML5/JS would be a terrible choice, what would you recommend for this project?
Thank you very much for your help.
You might want to consider Adobe Flash. It publishes to the web using the Flash plugin, and publishes to Windows, Mac OS, Android, and iOS as an Adobe AIR app. Plus, it has really good handling of animation and media, should you want to include sound, video, or animations.
If you don't want to buy Flash from Adobe, you might be able to publish it at least partially with other tools such as Stencyl.
Considering your graphics requirements, you don't even need advanced HTML5 features; everything that browsers have supported for a decade already with CSS2/HTML4 will do just fine. Simplify state saving to outputting a serialized dump for the user to copy, generating a data: URL for download, or even prompt()-ing the serialized save data, and you'll have code that will work on pretty much anything that has a browser.
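The data: URL save idea is only a few lines; a sketch (field names in the example state are placeholders for whatever your game tracks):

```javascript
// Serialize the game state to JSON and wrap it in a data: URL the teacher
// can download to a USB drive (or copy out of a prompt). No backend needed.
const PREFIX = 'data:application/json;charset=utf-8,';

function saveStateAsDataUrl(state) {
  return PREFIX + encodeURIComponent(JSON.stringify(state));
}

function loadStateFromDataUrl(url) {
  if (!url.startsWith(PREFIX)) throw new Error('not a saved game');
  return JSON.parse(decodeURIComponent(url.slice(PREFIX.length)));
}

module.exports = { saveStateAsDataUrl, loadStateFromDataUrl };
```

In the page you'd set the URL as the `href` of a "Save game" link with a `download` attribute; loading is the same round trip in reverse from a file input.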

Displaying a local gstreamer stream in a browser

I have a camera feed coming into a linux machine using a V4l2 interface as the source for a gstreamer pipeline. I'm building an interface to control the camera, and I would like to do so in HTML/javascript, communicating to a local server. The problem is getting a feed from the gst pipeline into the browser. The options for doing so seem to be:
A loopback from gst to a v4l2 device, which is displayed using Flash's webcam support
Outputting an MJPEG stream which is displayed in the browser
Outputting an RTSP stream which is displayed by Flash
Writing a browser plugin
Overlaying a native X application over the browser
Has anyone had experience solving this problem before? The most important requirement is that the feed be as close to real time as possible. I would like to avoid flash if possible, though it may not be. Any help would be greatly appreciated.
You already thought about multiple solutions. You could also stream in ogg/vorbis/theora or vp8 to an icecast server, see the OLPC GStreamer wiki for examples.
Since you are looking for a Python solution as well (according to your tags), have you considered using Flumotion? It's a streaming server written on top of GStreamer with Twisted, and you could integrate it with your own solution. It can stream over HTTP, so you don't need an icecast server.
Depending on the codecs, there are various tweaks to allow low-latency. Typically, with Flumotion, locally, you could get a few seconds latency, and that can be lowered I believe (x264enc can be tweaked to reach less than a second latency, iirc). Typically, you have to reduce the keyframe distance, and also limit the motion-vector estimation to a few nearby frames: that will probably reduce the quality and raise the bitrate though.
What browsers are you targeting? If you can ignore Internet Explorer, you should be able to stream Ogg/Theora and/or WebM video directly to the browser using the <video> tag. If you need to support IE as well, though, you're probably reduced to a Flash applet. I just set up a web stream using Flumotion and the free version of Flowplayer (http://flowplayer.org/) and it's working very well. Flowplayer has a lot of advanced functionality that I have barely even begun to explore.
