VS Code: How do I synchronize workspaces between multiple systems? - javascript

Update:
It turns out that what I really wanted was to be able to do remote development on my laptop, and - if I also did something local on my robot, to have the changes show up on my main development system.
Ref:
A substantially similar question was asked about 10 months ago and has received no replies since then. As there have been a lot of improvements in VS Code since then (and since Stack Overflow discourages "Me Too!" replies), I have decided to re-ask the question in the hope that someone will notice it and reply.
Viz.: https://stackoverflow.com/questions/60034690/how-to-sync-workspace-folder-beween-host-and-remote-target
Environment:
A Windows 10 system running VS Code, both current as of this writing.
A Raspberry Pi based robot (a GoPiGo3) with the VS Code Remote Development over SSH server software installed, which allows my Windows 10 system to communicate with it via VS Code.
I have made an exact copy of the workspace environment, in its entirety, including the enclosing workspace folder, from the Windows 10 system to the robot, using FileZilla.
My previous workflow was to develop on the Windows box, transfer to the robot, run on the 'bot using Thonny, note any errors, and then either fix them in place (within Thonny) and transfer back to the Win-10 machine, or fix them within Windows 10 and transfer back to the 'bot.
"Clumsy" is a masterpiece of understatement.
Now that I have set up Remote Development on the bot, I believe I can escape most of that.
What I notice is that within the robot's copy of the workspace, most, (if not all), of the files are now either "modified" or "untracked" and updating my GitHub repo from the 'bot will cause all kinds of confusion.
What I want is the ability to develop on either platform seamlessly. (i.e. Changes made on the one are automagically reflected on the other when next connected.) And I want to do this in such a way that the commit and/or change status is accurately reflected on both machines.
I could go into a long explanation as to why this is useful to me, but this question is long enough already.
Any help would be gratefully appreciated.

OK people, I think I have this figured out.
Lesson #1:
It turns out that my original problem was actually more about workflow and "what's the best way to do a specific something", as opposed to how to do dual-development.  So, in essence, I was asking the wrong question.
Lesson #2:
You do not have to install the entire VS Code IDE on the remote device.
That's the original mistake - I misunderstood "install VS Code on the remote device" and I installed the IDE itself in both locations.
The result was that it slowed down the robot so much that it was unusable.
Having more than one instance of the VS Code IDE installed created confusion about what was happening where.
Lesson #3:
I did not realize that VS Code can install a small server module, (like a shim of sorts), and do some SSH magic on the remote device that allows VS code to use the remote device as if it were local to your main computer or laptop.
What you do is open VS Code on the local device, and then tell it you want to connect to a remote device for development.
Once you have that sorted out - this is very site specific and a web search is your friend - you can edit code and even execute code from your local computer and have it run on the remote device as if you were physically there.
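For example, the Remote-SSH extension picks up hosts from the standard SSH configuration file, so an entry along the following lines is enough for the target to show up in VS Code's list of remote connections (the host alias, address and user below are placeholders, not my actual values):

    # ~/.ssh/config on the local (Windows) machine
    # Host alias, address and user are example values only
    Host gopigo3
        # use the robot's IP address, or its mDNS name if that resolves on your network
        HostName 192.168.1.42
        User pi
        # a key file avoids repeated password prompts when VS Code reconnects
        IdentityFile ~/.ssh/id_rsa

Once the host is defined, "Remote-SSH: Connect to Host..." opens a new VS Code window whose terminal, file explorer and debugger all operate on the robot.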
In my case, (after experimenting with several different ways to work on my projects), I discovered that placing the VS Code IDE on my Windows based laptop and the "server", (shim) module on the robot, (with the appropriate extensions installed), provides an almost seamless environment that doesn't appreciably load the robot's processor - a Raspberry Pi 4.
Make sure the workspace on the local computer is fully up to date on GitHub (which is where my project repo is located).
Install the requisite VS Code remote development modules and make sure you can communicate with the remote system.  Exactly how to do this is specific to your environment.
Either "sync" or "clone" the relevant GitHub repo down to the device using the remote dev tools in VS Code as if it were your local box.
Note that this is very system and site specific.  VS Code does a good job of helping walk you through this, and a web-search will rapidly clear up any lingering questions or issues.
Eventually you will have a fully up-to-date version on the remote platform.
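As a concrete illustration of the clone step, the integrated terminal in that remote VS Code window is already running on the 'bot, so the usual git commands do the job (the repository URL is a placeholder):

    # Terminal > New Terminal in the remote window opens a shell on the robot
    git clone https://github.com/your-user/your-robot-project.git
    cd your-robot-project
    # confirm the working tree is clean and matches what is on GitHub
    git status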
When this is done, you won't have to mess with manually syncing code as the code is already on the 'bot. All you do is edit code on your local machine, (my Windows laptop for example), and run it from VS Code.
An additional advantage is that if you have to duplicate or clone the robot's workspace, or restore the workspace from a backup, (you DO have your projects in a separate folder don't you?), all the "vscode" and "git" information is located there too and you can re-open your project after moving it with everything intact.
Additionally, if you have VS Code set up on different machines in different places, it might be possible to connect to the same server endpoint and have the same environment available.
(i.e. One installation on a desktop at work and another installation on a laptop for use on the road, (or while quarantined), both connecting to the same server endpoint.)
Note: I have not done this personally and it might require further research.
What I ended up doing, workflow-wise, is that I do the lion's share of the development within VS Code, executing remotely on the robot itself.
Sometimes, if I want to try a quick and dirty fix, I'll "break the fourth wall" and open an editor directly on the 'bot itself, and the change automatically shows up as "modified" within VS Code.

Related

Cordova web server with configurable responses

Is there any way to set up a local web server in a Cordova application such that I can control the responses via Javascript? I'm currently developing a custom plugin that communicates with a remote system via HTTP. I'd like to be able to run integration tests on it that are written in the Javascript code of a Cordova application, so I can easily test them on all supported platforms (Android, iOS, and ideally in the browser too... although the latter seems a little unlikely to be possible). This means I need to be able to set up mock API responses from Javascript code, which in turn requires a mock server that the plugin can communicate with.
I'm familiar with this plugin, but it can only respond using files in the local system -- I want to be able to generate responses and capture sent POST data in a Javascript callback. Is there any existing way of doing this?
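To make the requirement concrete, the kind of API I'm after would look roughly like this (a hypothetical sketch of the desired plugin interface, not an existing one):

    // Hypothetical mock-server plugin API - illustrates the desired behaviour only
    var server = window.mockServer.create({ port: 8080 });

    // Generate the response from a Javascript callback instead of a static file,
    // and capture whatever POST body the plugin under test sends
    server.onRequest(function (request, respond) {
        console.log('Plugin sent:', request.method, request.url, request.body);
        respond({
            status: 200,
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ ok: true })
        });
    });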
Having searched extensively, I came to the conclusion that no such plugin exists.
I have therefore started an implementation of such a plugin myself: the initial version is now available on NPM and GitHub, with a sample project available in the github repository.
Android support is currently functioning, and I intended to start on iOS support in the next few days.
(Updated to add: unfortunately the project I was working on was cancelled, so I have not had employer-supported time to finish the iOS version, and as I don't personally have a Mac I can't work on it in my own time either; this seems unlikely to happen in the near future, but if anyone else needs it, it should be relatively simple to add the iOS version.)

How to debug javascript with Eclipse running local Tomcat server

There are a lot of questions on SO about how to debug a standalone piece of Javascript - that isn't what I want. None of the previous Eclipse/javascript questions seem to be on point, which surprised me.
I am using Eclipse for Java EE (Neon, the latest version) to develop a JSP/servlet website - a full website, not just javascript, and not just java/jsp - everything together. I can compile my Java and "debug as" on an instance of Tomcat spawned by Eclipse, and the web pages show up inside of a window in Eclipse. I can set and hit Java breakpoints all day long while using "debug as" - but setting breakpoints in javascript doesn't do diddly squat. I've been having to run a standalone instance of Tomcat, deploy war files to it, wait for the war files to decompress, then debug my Javascript inside of Firefox. This is particularly annoying because I'm relatively new to javascript and am doing some complex things on the page (and truth be told, making some silly mistakes a compiler in a typed language would catch for me before letting me waste my time trying to run the code) and the "change, watch Eclipse chew deploying war, wait for Tomcat to chew uncompressing the war, test" cycle is just unacceptably long.
Isn't there an easier way to debug BOTH java and javascript from the same IDE without having to export and deploy WAR files? Is there is a setting I can toggle or something I can install in Eclipse to make it an all-in-one IDE? Ideally I would like to be able to step through, for example, an AJAX call into my servlet AND watch what happens in the javascript after it returns - within the same debugging session - so let me preemptively state that copying the changed js file(s) directly to the decompressed folder in tomcat/webapps as a faster way to continue to do split debugging is not the kind of "workaround answer" I'm looking for.
JavaScript debugging is going to be supported in the Eclipse Neon 1 release (September 2016). Here is a demo video in which the step-by-step process of debugging both front-end and back-end is explained - https://youtu.be/7oQz1Ja1H08.
Basically, running Chrome / Chromium with extra parameters and tuning source mapping manually is not really user-friendly now, but we are going to improve it for Neon 1 and future releases.
Contributions of any kind are most welcome ;)

Breakpoints not hit in javascript files in Visual Studio

I am experiencing some weird behaviors when debugging my MVC Web Application. Some days I experience these issues, but other days everything works fine.
The breakpoints in my javascript files are not getting hit. I get the dreaded "The breakpoint will not currently be hit. No symbols have been loaded for this document." error.
The debugger will detach from the process without me clicking the Stop Debugging button.
I have tried everything I can think of including:
Refreshing the page in IE to force the browser to get the latest version of the javascript files.
Clean / Rebuild the application in Visual Studio
Close / Reopen Visual Studio
Delete all files from bin and obj folders and rebuild
Cleaned up all old sites from my IIS Express applicationhost.config file
Installed VS2013 Update 4
Deleted / Reinstalled VS2013
Removed / Added IE11
Installed VS2015. Same behavior as VS2013
Deleted all local project files and performed a "Get Latest" from TFS
I can manually attach the debugger to an iexplore process and then I'm able to debug that specific file, but it seems like there is a different iexplore instance for each javascript file. I end up having to guess which one to use for each javascript file. To top it off, the debugger keeps detaching in the middle of me trying to find the right process to attach to. It is nearly impossible, and definitely not feasible to try and debug this way.
Our solution is in TFS, we're using IIS Express, and the three other developers on our team have none of the problems I have. We all have the exact same hardware.
Another clue that might help is that we are using the OWIN functionality to connect to ACS for security. If I bypass authentication through OWIN / ACS I can step into the javascript. This, however, creates other problems since the code is expecting me to be authenticated. This is not an acceptable workaround and, again, the other developers on the team are using OWIN/ACS and do not have any problems.
I'm extremely frustrated and at a loss for how to go about figuring out what is wrong with my environment. Any help will be greatly appreciated.

Obscure/Protect Javascript Source Code in Windows 8 Apps

As you might know, in Windows 8 (Metro/RT/WinJS) apps, when they are acquired from the windows app store and installed to a local computer, all the original source code javascript files are clearly viewable in the windows filesystem.
As such, is there some way that I can obscure, hide, or protect the javascript code so as to avoid the possibility of it being stolen and used to make a new app?
At the very least, I'd like it to be a bit harder than for someone to just open the folder and read the code in its original state....
Thanks!
I wrote up some notes on the options you have for Windows 8/8.1: http://www.kraigbrockschmidt.com/2013/04/04/protecting-your-code/. Windows 10 might offer some better options. For example, the Hosted Web App option described on http://blogs.windows.com/buildingapps/2015/07/06/project-westminster-in-a-nutshell/ will let you keep lots of code on the server. But I haven't looked at everything that's being done there.
Look into minification, like the Closure compiler: http://closure-compiler.appspot.com/home
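As a rough illustration of what minification buys you, a readable function like the first one below comes out looking something like the second after a pass through a minifier (exact output varies by tool and settings):

    // Before minification
    function calculateDiscount(price, customer) {
        var rate = customer.isPremium ? 0.2 : 0.05;
        return price - price * rate;
    }

    // After minification (approximate) - names, whitespace and structure are gone,
    // so a casual reader can no longer skim the logic
    function a(b,c){var d=c.isPremium?0.2:0.05;return b-b*d}

It is obfuscation rather than real protection - a determined person can still reverse it - but it raises the bar well above just opening the folder and reading the source.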

Testing browser extensions

I'm going to write a bunch of browser extensions (the same functionality for each popular browser). I hope that some of the code can be shared, but I'm not sure about this yet. Some of the extensions will certainly use native APIs. I don't have much experience with TDD/BDD, and I thought it's a good time to start following these ideas, beginning with this project.
The problem is, I have no idea how to handle it. Should I write different tests for each browser? How far should I go with these tests? These extensions will be quite simple - some data in local storage, refreshing a page and listening through web sockets.
And my observation about why this is hard for me: there is a lot of behaviour and not many models, and the models are also platform-dependent.
I practise two different ways of testing my browser extensions:
Unit tests
Integration tests
Introduction
I will use the cross-browser YouTube Lyrics by Rob W extension as an example throughout this answer. The core of this extension is written in JavaScript and organized with AMD modules. A build script generates the extension files for each browser. With r.js, I streamline the inclusion of browser-specific modules, such as the one for cross-origin HTTP requests and persistent storage (for preferences), and a module with tons of polyfills for IE.
The extension inserts a panel with lyrics for the currently played song on YouTube, Grooveshark and Spotify. I have no control over these third-party sites, so I need an automated way to verify that the extension still works well.
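As an illustration of the browser-specific module idea (the module names here are invented, not the extension's real ones), shared code depends on a generic module name and the r.js build profile maps that name to a per-browser implementation:

    // panel.js - shared code only knows about a generic "storage" module
    define(['storage'], function (storage) {
        return {
            savePreference: function (key, value) {
                storage.set(key, value);
            }
        };
    });

    // storage-chrome.js - one possible browser-specific implementation
    define(function () {
        return {
            set: function (key, value) { localStorage.setItem(key, JSON.stringify(value)); },
            get: function (key) { return JSON.parse(localStorage.getItem(key)); }
        };
    });

    // In the r.js build profile for the Chrome build, the generic name is
    // pointed at the Chrome implementation, e.g.  paths: { storage: 'storage-chrome' }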
Workflow
During development:
Implement / edit feature, and write a unit test if the feature is not trivial.
Run all unit tests to see if anything broke. If anything is wrong, go back to 1.
Commit to git.
Before release:
Run all unit tests to verify that the individual modules are still working.
Run all integration tests to verify that the extension as a whole is still working.
Bump versions, build extensions.
Upload update to the official extension galleries and my website (Safari and IE extensions have to be hosted by yourself) and commit to git.
Unit testing
I use mocha + expect.js to write tests. I don't test every method for each module, just the ones that matter. For instance:
The DOM parsing method. Most DOM parsing methods in the wild (including jQuery) are flawed: Any external resources are loaded and JavaScript is executed.
I verify that the DOM parsing method correctly parses DOM without negative side effects.
The preference module: I verify that data can be saved and returned.
My extension fetches lyrics from external sources. These sources are defined in separate modules. These definitions are recognized and used by the InfoProvider module, which takes a query (black box) and outputs the search results.
First I test whether the InfoProvider module functions correctly.
Then, for each of the 17 sources, I pass a pre-defined query to the source (with InfoProvider) and verify that the results are expected:
The query succeeds
The returned song title matches (by applying a word similarity algorithm)
The length of the returned lyrics falls within the expected range.
The UI is not obviously broken, e.g. by clicking on the Close button.
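For illustration, a unit test in this style might look like the following (the module path and function names are invented for the example, not the extension's actual ones):

    // test/preferences.test.js - mocha + expect.js
    var expect = require('expect.js');
    var prefs = require('../src/preferences');   // hypothetical preference module

    describe('preferences module', function () {
        it('returns the value that was saved', function () {
            prefs.set('fontSize', 14);
            expect(prefs.get('fontSize')).to.be(14);
        });

        it('falls back to the default for unknown keys', function () {
            expect(prefs.get('doesNotExist', 'fallback')).to.be('fallback');
        });
    });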
These tests can be run directly from a local server, or within a browser extension. The advantage of the local server is that you can edit the test and refresh the browser to see the results. If all of these tests pass, I run the tests from the browser extension.
By passing an extra parameter debug to my build script, the unit tests are bundled with my extension.
Running the tests within a web page is not sufficient, because the extension's environment may differ from the normal page. For instance, in an Opera 12 extension, there's no global location object.
Remark: I don't include the tests in the release build. Most users don't take the effort to report and investigate bugs, they will just give a low rating and say something like "Doesn't work". Make sure that your extension functions without obvious bugs before shipping it.
Summary
View modules as black boxes. You don't care what's inside, as long as the output matches what is expected for a given input.
Start with testing the critical parts of your extension.
Make sure that the tests can be built and run easily, possibly in a non-extension environment.
Don't forget to run the tests within the extension's execution context, to ensure that there's no constraint or unexpected condition inside the extension's context that breaks your code.
Integration testing
I use Selenium 2 to test whether my extension still works on YouTube, Grooveshark (3x) and Spotify.
Initially, I just used the Selenium IDE to record tests and see if it worked. That went well, until I needed more flexibility: I wanted to conditionally run a test depending on whether the test account was logged in or not. That's not possible with the default Selenium IDE (it's said to be possible with the FlowControl plugin - I haven't tried).
The Selenium IDE offers an option to export the existing tests in other formats, including JUnit 4 tests (Java). Unfortunately, this result wasn't satisfying. Many commands were not recognized.
So, I abandoned the Selenium IDE, and switched to Selenium.
Note that when you search for "Selenium", you will find information about Selenium RC (Selenium 1) and Selenium WebDriver (Selenium 2). The former is old and deprecated; the latter (Selenium WebDriver) should be used for new projects.
Once you've discovered how the documentation works, it's quite easy to use.
I prefer the documentation at the project page, because it's generally concise (the wiki) and complete (the Java docs).
If you want to get started quickly, read the Getting Started wiki page. If you've got spare time, look through the documentation at SeleniumHQ, in particular the Selenium WebDriver and WebDriver: Advanced Usage.
Selenium Grid is also worth reading. This feature allows you to distribute tests across different (virtual) machines. Great if you want to test your extension in IE8, 9 and 10, simultaneously (to run multiple versions of Internet Explorer, you need virtualization).
Automating tests is nice. What's even nicer? Automating the installation of extensions!
The ChromeDriver and FirefoxDriver support the installation of extensions, as seen in this example.
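With the JavaScript bindings for Selenium WebDriver, loading a packed extension into Chrome looks roughly like this (the .crx path is a placeholder):

    // Node.js, using the selenium-webdriver package
    var webdriver = require('selenium-webdriver');
    var chrome = require('selenium-webdriver/chrome');

    // point Chrome at the packed extension before the browser starts
    var options = new chrome.Options()
        .addExtensions('build/my-extension.crx');

    var driver = new webdriver.Builder()
        .forBrowser('chrome')
        .setChromeOptions(options)
        .build();

    driver.get('https://www.youtube.com/')
        .then(function () {
            // assert here that the extension injected its panel, then clean up
            return driver.quit();
        });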
For the SafariDriver, I've written two classes to install a custom Safari extension. I've published it and sent in a PR to Selenium, so it might be available to everyone in the future: https://github.com/SeleniumHQ/selenium/pull/87
The OperaDriver does not support installation of custom extensions (technically, it should be possible though).
Note that with the advent of Chromium-powered Opera, the old OperaDriver doesn't work any more.
There's an Internet Explorer Driver, and this one definitely does not allow one to install a custom extension. Internet Explorer doesn't have built-in support for extensions. Extensions are installed through MSI or EXE installers, which are not even integrated in Internet Explorer. So, in order to automatically install your extension in IE, you need to be able to silently run an installer which installs your IE plugin. I haven't tried this yet.
Testing browser extensions posed some difficulty for me as well, but I've settled on implementing tests in a few different areas that I can invoke simultaneously from browsers driven by Selenium.
The steps I use are:
First, I write test code integrated into the extension code that can be activated by simply going to a specific URL. When the extension sees that URL, it begins running the tests.
Then, in the page that activates the testing in the extension I execute server-side tests to be sure the API performs, and record and log issues there. I record the methods invoked, the time they took, and any errors. So I can see the method the extension invoked, the web performance, the business logic performance, and the database performance.
Lastly, I automatically invoke browsers to point at that specific URL and record their performance along with other test information, errors, etc on any given client system using Selenium:
http://docs.seleniumhq.org/
This way I can break down the tests in terms of browser, extension, server, application, and database and link them all together according to specific test sets. It takes a bit of work to put it all together, but once it's done you can have a very nice extension testing framework.
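As a sketch of the first step above - test code in the extension that activates when the browser visits a specific URL - a content script could look like this (the URL and message name are made up, and the Chrome API is used for simplicity):

    // content script - injected into every page, but only activates on the test URL
    (function () {
        if (window.location.href.indexOf('https://example.com/extension-tests') !== 0) {
            return; // an ordinary page: do nothing
        }

        // ask the background page to run the in-extension test suite
        chrome.runtime.sendMessage({ type: 'RUN_EXTENSION_TESTS' }, function (results) {
            // expose the results in the DOM so the Selenium-driven browser can read them
            var pre = document.createElement('pre');
            pre.id = 'extension-test-results';
            pre.textContent = JSON.stringify(results, null, 2);
            document.body.appendChild(pre);
        });
    })();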
Typically, for cross-browser extension development, in order to maintain a single code-base I use crossrider, but you can do this with any framework or with native extensions as you wish; Selenium won't care, it is just driving the extension to a particular page and allowing you to interact and perform tests.
One nice thing about this approach is you can use it for live users as well. If you are providing support for your extension, have a user go to your test url and immediately you will see the extension and server-side performance. You won't get the Selenium tests of course, but you will capture a lot of issues this way - very useful when you are coding against a variety of browsers and browser versions.
