Is it possible to write a script for VS or TFS in order to simulate the work of a programmer? For example, it would make minor changes in a js file (swap two functions) and commit it to TFS on a schedule.
Using automatic check-ins/commits is not a recommended way to work with a mature source control system.
If files are committed automatically, several problems follow. Who is going to add the check-in comment, what should that comment say, and who is going to associate the check-in with the related work items? These are really important features of TFS as an ALM tool: they enable traceability and visibility, and they make the source control history more detailed and easier to manage.
Moreover, how do you guarantee the quality of the code available on the server? Even a minor change can have a huge impact: when another developer gets the latest version, they may receive code that does not work because of work checked in automatically by the script.
The answers to this question (even though it is not about TFS) contain some useful points for your reference. (http://stackoverflow.com/questions/8397918/any-way-to-make-vault-auto-check-in-files)
If you really need this feature, you could create a VS add-in project and then use the TFS API to check in the files. For how to use the TFS API to check in files, please refer to this tutorial.
Hey, I'm not really familiar with JavaScript or React.
So I hope this isn't too easy a question:
I want to have a "one-page" website and change the page dynamically with AJAX requests.
I have, for example, code for four visibility levels (guest user, normal user, moderator, administrator).
If you log into my page as an admin, you get the JS code for all levels; for example, the JSON response contains a list of URLs pointing to the JavaScript code.
If you log in as a normal user, you should get only the normal-user JS code. The guest-user JS code you already have; you got it when you entered the page.
So I guess it's clear what I want.
But how should I implement this?
Are there ready-made solutions out there?
https://reactjs.org/docs/code-splitting.html
Maybe I have to adjust this here?
And maybe there are good bundlers out there that I can use to do that splitting while hiding the endpoint URLs (which I only get via an AJAX request if I have the rights)?
Kind regards, knotenpunkt
As I said in the comments, I think the question is very, very broad. Each one of the requests is a full standalone topic.
Generally speaking, I hope this will lead you in the right direction.
You can split your code by using CommonJS or ES6 modules (read more here) to keep it "modular". Then, during the bundling process, other splitting techniques may be applied, but this will depend on your development environment and the tools you use.
Your best option for bundling would be Webpack, without any doubt. However, dealing with Webpack directly or setting up a custom development environment is not an easy task. You'll certainly want to read about Create React App, which is a good place to start for a Single Page Application. It will allow you to write your code in a "modular" fashion and will bundle, split and process it automatically (it uses Webpack under the hood).
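To make this concrete, here is a rough sketch of role-based splitting with dynamic import(), which bundlers like Webpack turn into separate chunks; the /api/role endpoint and the module paths below are made-up placeholders, not anything from your setup:

// Rough sketch: load role-specific modules on demand with dynamic import().
// The /api/role endpoint and the module paths are hypothetical placeholders.
async function loadRoleModules() {
  const response = await fetch('/api/role');        // ask the server for the current role
  const { role } = await response.json();

  if (role === 'admin') {
    const admin = await import('./adminTools.js');  // bundlers emit this as a separate chunk
    admin.init();
  } else if (role === 'moderator') {
    const mod = await import('./moderatorTools.js');
    mod.init();
  }
  // guest and normal-user code ship with the initial bundle
}

loadRoleModules();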
Finally, securing access must be done server-side (there is a whole other world of options there).
I'm writing a UI for my R script, which asks the user for the names of some organisms and the location of a folder, using JavaScript/HTML that will always run locally (never hosted).
At the moment, I have just that: a couple of text boxes that take input and pass it to an executable R script. Originally this UI was written as a very user-friendly option, but I've slowly realized that some nifty tricks can be added, such as a text box that completes the word for the user (so if the user misspells the name of an organism, the UI corrects the input based on the uploaded files; the suggestions would come from a list-of-organisms text file that R generates as soon as the files have been added).
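To make the autocomplete idea concrete, something like this is what I have in mind (a rough sketch only; the element ids and the organisms.txt file that R would write are placeholders, and the HTML side would be an <input list="organism-options"> plus an empty <datalist id="organism-options">):

// Rough sketch: fill a <datalist> from a text file of organism names so the
// input box auto-completes. All names here are placeholders.
fetch('organisms.txt')
  .then(function (response) { return response.text(); })
  .then(function (text) {
    var datalist = document.getElementById('organism-options');
    text.split('\n').filter(Boolean).forEach(function (name) {
      var option = document.createElement('option');
      option.value = name.trim();
      datalist.appendChild(option);
    });
  });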
Is there a way to make this more efficient? For example, retrieving plots from R (as .pngs) and updating my local webpage, or sharing a log file between R and the UI (mind you, I am aware of the potential file I/O errors) - but this is for the sake of brainstorming.
I'm aware of Shiny, but what I would like is a simple local UI, as I will be dealing with big data (on average ~1 gigabyte of files that my script will process).
Another way to ask my question that is more to the point:
Here's an example of integrating PHP and R: http://www.r-bloggers.com/integrating-php-and-r/
I am looking to create something similar with javascript/css/html/jquery etc.
Thanks
You could definitely use nodejs (nodejs.org) for that. Take a look at https://github.com/elijah/r-node and r-node. Confusingly enough, these are two different projects with the same name. More info on the latter here: squirelove.net/r-node/doku.php
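If you go the Node.js route, a rough sketch of calling an R script from Node might look like this (it assumes Rscript is on the PATH; analyze.R and its argument are made-up names):

// Rough sketch: run an R script from Node.js and read its output.
// Assumes Rscript is on the PATH; analyze.R and its argument are hypothetical.
var execFile = require('child_process').execFile;

execFile('Rscript', ['analyze.R', 'organisms.txt'], function (err, stdout, stderr) {
  if (err) {
    console.error('R script failed:', stderr);
    return;
  }
  console.log('R output:', stdout);  // e.g. the paths of generated .png plots
});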
In recent years JavaScript has become one of the faster programming languages; in one benchmark case I know of (regex-dna), JavaScript even beats C++. See: benchmarksgame.alioth.debian.org/u32/performance.php?test=regexdna
Bear in mind, though, that memory leaks are easy to introduce and hard to track down in JavaScript, so you should run some sort of memory leak detection on your code if you plan to create long-running processes.
E.g. memwatch (npmjs.org/package/memwatch) or nodeheap.
Good luck with your endeavors!
PS. sorry for the lack of real links. I'm apparently not allowed to post more than 2 links.
Why wouldn't you be able to use Shiny locally? You design your app on your computer and run it locally with runApp('myapp') from an R prompt. Unless you are experienced with JavaScript, I would give Shiny another look: http://www.rstudio.com/shiny/
The example you linked to can be very easily implemented using Shiny. See link below for a tutorial on how to write the app:
http://rstudio.github.com/shiny/tutorial/#hello-shiny
To run that example locally:
install.packages('shiny')
shiny::runExample('01_hello')
I have a similar case, and Shiny looked like a good idea to me. However, after I took a few first steps, I am no longer sure about it. Note that most of the examples use Shiny to display results. When you get into editing fields and using a database, things can become messy; the reactive-ness gets in the way once fields can be changed both by the program and by the user.
As an example, see https://gist.github.com/dmenne/4721235/edit. The main problem with the current state of Shiny is that you must use the dynamic UI for this type of work, which kills any separation of UI and server because you have to create the UI elements in the server.
Shiny is a great idea, but for anything larger with interaction it is too early right now. Knowing that the amazing RStudio team is behind it, I am sure the stress should be on "now".
What else is there around for making user interfaces for R? Tcl/Tk makes me shudder. I work in C# a lot, and I had been using R(D)COM for interfacing some years ago, but gave up after installation and licensing problems. There is R.DOTNet, which works better now; it is the most hassle-free installation-wise, but it is not a very active project and tends to crash. Interfacing via RServe/RServeCLI is stable, but it is too difficult to install on Windows, for example on hospital computers with their strict security policies.
And there is Qt. With the active RInside community, it would be a good choice, and the interface is great. I wish, however, that my programming skills were at the level of the RStudio guys. The fact that even Dirk is at the proof-of-concept level (using RInside with Qt on Windows) is not encouraging.
These days when I code JavaScript I find myself using more and more plugins to accomplish common tasks. Often times when I use an existing plugin, e.g. to display tooltips, I'm not quite happy with certain options - so I extend them, or fix a bug.
This raises the question of how to fix/extend third-party code. Do you just modify the source file? That makes it almost impossible to update the plugin at a later date. You could extend it by cloning the object or prototyping the existing one, but this often leads to duplicate code or scoping issues.
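For illustration, wrapping one of the plugin's methods in place is the kind of thing I mean (a rough sketch; the tooltip plugin and its show method are invented names):

// Rough sketch: extend a third-party plugin method without touching its source.
// "tooltip" and its "show" method are made-up stand-ins for the real plugin.
(function (plugin) {
  var originalShow = plugin.show;

  plugin.show = function (element, options) {
    options = options || {};
    options.delay = options.delay || 200;  // e.g. add a default the plugin lacks
    return originalShow.call(this, element, options);
  };
}(window.tooltip));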
I've played with the idea of modifying the plugin code directly and generating a patch file at the end. These could then be applied "on build" with phing or some similar framework.
How do you folks handle this problem? Are there any existing projects/frameworks to make this easier?
Thanks for your thoughts.
If the third party code is a good citizen of open source, you fork it, enhance it, write tests for your enhancement, and submit a pull request back to the project's maintainer. Then your enhancement is available for all to use, including you. This is how open source happens.
If it's open source but not a good citizen of open source (no active maintainer, or perhaps your enhancement is too specific or unwelcome for some reason), the process is the same; they just never accept your pull request. Later on, you can merge changes committed to the official repository on top of your own work in your private repository, even if your changes never make it into the official repository of the project.
This is the reason distributed version control systems like git are so awesome. No one owns the only canonical repository, and your private hacked version can be hosted and treated like the official version very easily.
Of course, most of that assumes that the project is on GitHub somewhere. If it's not, well, things get much more tricky.
I am a Java programmer, and a few months ago I discovered frontend programming with JavaScript and HTML, and it is absolutely cool. So here is a JS newbie's question.
Is there any IDE, default workflow, or maybe best practices for JavaScript library development? I don't mean coverage and unit testing or anything like that. I mean: are there any tools or techniques I should use while developing, for example, an HTML control with some JS logic? Should I write a small test page in a text editor and open it in a dozen browsers, pressing F5 every minute and watching the browsers' consoles, or is there some magic IDE that will reload all browser instances at the press of a button and collect reports from them?
JavaScript Fundamentals
Be sure that you know the language. JavaScript is a bit tricky in this regard: you can easily think that you understand the language, but there are weird things that can sneak up on you. I highly recommend JavaScript: The Good Parts by Douglas Crockford. He also has some talks on YouTube with the same name that are definitely worth watching.
JSLint
Integrate JSLint or a similar tool into your workflow. It is a style checker and static analysis tool for JavaScript and helps catch subtle bugs.
Avoiding F5
Check out live.js:
Just include Live.js and it will monitor the current page, including local CSS and Javascript, by sending consecutive HEAD requests to the server. Changes to CSS will be applied dynamically and HTML or Javascript changes will reload the page. Try it!
Browser Developer Tools
Get familiar with them. I personally strongly prefer Chrome's developer tools over Firefox/Firebug, but regardless which one you choose, learn how to use the debugger.
Node.js
You should also be aware that you don't have to test JavaScript logic in your browser: you can run it like any other scripting language using Node.js.
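For instance, here is a tiny sketch of checking some plain logic with Node's built-in assert module (slugify is only an illustrative function, not part of any real library):

// Rough sketch: exercise plain JavaScript logic in Node.js, no browser needed.
// slugify() is just an illustrative function.
var assert = require('assert');

function slugify(text) {
  return text.toLowerCase().trim().replace(/\s+/g, '-');
}

assert.strictEqual(slugify('Hello World'), 'hello-world');
console.log('All checks passed');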
Speaking as a seasoned programmer, here are the steps I would follow for JS lib production.
Separate the UI from the application logic. In this case, create one component for the application logic that is completely separated from all API access. API access, meaning the DOM, WSH, and Node.js, should be separated out. I go so far as to use different files to force and ensure separation (a rough sketch of this separation follows the last step below).
Create UI environments to access your logic. Have a production UI control for access by your audience and also a separate internal UI control for your sandboxed development. For example, I have written my application at http://prettydiff.com/ to work from the command line and from a browser. I have also written access methods to the application that are similar to the published HTML but different, for my own development and more rapid unit testing. In a sandboxed UI you can use a recursive setTimeout to refresh the page at intervals for automated test-plan verification as you write and save code.
Focus on availability and distribution. People can get my libraries directly from my website, GitHub, and NPM for Node. I have a regimented process whereby I upload code to the site during a production release and then perform a quick verification in the browser. If the release did not break the application, I then push to GitHub and push that exact same code to NPM.
Distributed access to code operation is even better. It's nice having rapid and even automated access to libraries online, but being able to access unit testing via a request is even better. I can remotely unit test my application's code by accessing code samples online directly, by telling my application to request a code sample via a URI. This means that not only is my static code available for distribution and testing, but so is its operation.
Good documentation is everything. I will never even bother to examine a lib if the documentation is weak. I am not talking about code comments, though these are important; I am talking about end-user documentation. I want to be able to read the documentation and know everything about the subject. Node.js became popular because even before the codebase was stable its documentation kicked ass. Get other people to QA your documentation before they QA your code. Without stunning documentation your lib is less than worthless to me.
Know your mission. Each and every lib should have a clear, simple, and well-stated purpose. If this is not established, the lib is not ready for release. If an enhancement confuses the lib's purpose, it may be time to divide one library into multiple smaller libraries. Focus on precision, clarity, directness, and only the sole stated mission of the lib.
Independence is vital to adoption. I don't like libs that depend on other libraries or frameworks. It's great that your jQuery library may be the best thing since sliced bread, but I won't be looking at it. Independence means greater portability and the freedom to mix and match it with other libraries that you are not aware of.
Style is important. This is a touchy subject, but style is important. Keep the logic in your lib as simple and declarative as possible. If your code is absolutely declarative in nature then your algorithmic patterns are awesome. Avoid use of the new keyword unless you are an experienced JavaScript badass, and severely limit use of the this keyword, as it will fail you in future maintenance operations. Do not algorithmically build out large strings with concatenation, and continually watch the execution speed of your code. Because even trivial changes to style or seemingly minor enhancements to the logic can destroy execution efficiency, I put a timer on all my UI controls. During development, use profilers such as Chrome's dev tools to track down long operations in JS execution.
Be open about your failures. Software is never perfect, and other developers will always respect this. If you encounter a bug before anybody else, be open about the bug's existence. If it will take more than a week to get the solution out, don't delay notification; notify your users immediately so that they are aware. I recently rolled back a major enhancement to the logic of my diff algorithm because additional unit testing showed heavy slippage. I rolled back the same day I decided the prior release or two was flawed, and I was open about the rollback. If you want people to contribute to your code base or provide bug reports, openness is critical.
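Here is the rough sketch of the UI/logic separation mentioned in the first step (file names, element ids, and the wordStats function are all invented for illustration):

// logic.js - pure application logic, no DOM, WSH, or Node API access
var wordStats = function (text) {
  var words = text.split(/\s+/).filter(Boolean);
  words.sort(function (a, b) { return b.length - a.length; });
  return { count: words.length, longest: words[0] || '' };
};

// ui.js - the browser adapter, the only place the DOM is touched
document.getElementById('analyze').onclick = function () {
  var stats = wordStats(document.getElementById('input').value);
  document.getElementById('output').textContent =
    stats.count + ' words, longest: ' + stats.longest;
};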
I know you may have explicitly said that you didn't mean unit testing, but that is precisely the way I recommend you write javascript libraries.
If you're a Java developer, you may be familiar with jUnit. If so, qUnit may be more natural to you. Otherwise, I recommend you take a look at Jasmine or Mocha. While I prefer Mocha, my experience with Jasmine has generally been better for in-browser development thanks to the awesome Jasmine-Jquery plugin.
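To give a feel for the style, here is a rough sketch of a Mocha test using Node's built-in assert (add() is just a stand-in for your own code):

// Rough sketch of a Mocha test, run with the "mocha" command line tool.
// add() stands in for whatever function your library exposes.
var assert = require('assert');

function add(a, b) { return a + b; }

describe('add()', function () {
  it('sums two numbers', function () {
    assert.strictEqual(add(2, 3), 5);
  });

  it('yields NaN when an argument is missing', function () {
    assert.ok(isNaN(add(2)));
  });
});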
If you're writing qUnit tests, take a look at IntelliJ IDE which allows you to execute tests in addition to providing code coverage.
If you're developing on the browser, take a look at LiveReload. It'll watch files for you and auto-refresh your browser - great for instant feedback.
For browser compatibility, I would recommend you just get it to work reasonably on one browser first before worrying about the others. Check in from time to time to see if you find issues. See if jQuery can abstract some of that mess for you. Otherwise, take a look at Adobe's BrowserLab.
So, you are using a bunch of JavaScript libraries on a website. Your JavaScript code calls their various APIs, but every once in a while, after an upgrade, one of the APIs changes and your code breaks without you knowing it.
How do you prevent this from happening?
I'm mostly interested in JavaScript, but any answer regarding dynamically typed languages would be valuable.
I don't think there's much you can do. You always run a risk when updating any piece of software. The best advice is to:
Read and understand documentation about upgrading
Upgrade in your test environment
TEST
Roll out live when you are happy there are no regressions
You should consider building unit tests using tools such as JsUnit and Selenium. As long as your code passes the tests, you're good to go. If some tests fail, you would quickly identify what needs to be fixed.
As an example of a suite of Selenium tests, you can check the Google Maps API Tests, which you can download and run locally in your browser.
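Even without a full suite, a small hand-rolled smoke check of the API surface you depend on can catch renames or removals right after an upgrade; here is a rough sketch (the jQuery/tooltip names are only examples):

// Rough sketch: fail fast after an upgrade if an entry point you rely on
// disappeared. The names checked here are hypothetical examples.
function assertIsFunction(name, value) {
  if (typeof value !== 'function') {
    throw new Error(name + ' is missing or is no longer a function');
  }
}

assertIsFunction('$', window.$);
assertIsFunction('$.fn.tooltip', window.$ && window.$.fn && window.$.fn.tooltip);
console.log('API surface looks intact');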
Well there are two options:
Don't upgrade
Retest everything after you upgrade.
There is no way to guarantee that an upgrade won't break something. Even if you have something that could check the underlying API and make sure it still all lines up, you can't be certain that the underlying functionality is the same.