These days when I code JavaScript I find myself using more and more plugins to accomplish common tasks. Often, when I use an existing plugin, e.g. to display tooltips, I'm not quite happy with certain options, so I extend it or fix a bug.
This raises the question of how to fix or extend third-party code. Do you just modify the source file? That makes it almost impossible to update the plugin at a later date. You could extend it by cloning the object or prototyping the existing one, but this often leads to duplicate code or scoping issues.
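For example, one way I've extended a plugin without touching its source is to wrap one of its prototype methods. The tooltip plugin and option names below are invented, it's just a sketch of the idea:

// Hypothetical example: override one method of a tooltip plugin
// without editing its source file, by wrapping the original.
(function () {
  var originalShow = ThirdPartyTooltip.prototype.show;

  ThirdPartyTooltip.prototype.show = function (options) {
    // Tweak the options before delegating to the original implementation.
    options = options || {};
    options.delay = options.delay || 200;
    return originalShow.call(this, options);
  };
}());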
I've played with the idea of modifying the plugin code directly and generating a patch file at the end. The patches could then be applied "on build" with Phing or a similar framework.
How do you folks handle this problem? Are there any existing projects/frameworks to make this easier?
Thanks for your thoughts.
If the third party code is a good citizen of open source, you fork it, enhance it, write tests for your enhancement, and submit a pull request back to the project's maintainer. Then your enhancement is available for all to use, including you. This is how open source happens.
If it's open source but not a good citizen of open source (no active maintainer, or perhaps your enhancement is too specific or unwelcome for some reason), the process is the same; they just never accept your pull request. Later on you can merge changes committed to the official repository on top of your own in your private repository, even if your changes never make it into the official repository of the project.
This is the reason distributed version control systems like git are so awesome. No one owns the only canonical repository, and your private hacked version can be hosted and treated like the official version very easily.
Of course, most of that assumes that the project is on GitHub somewhere. If it's not, well, things get much more tricky.
Hey, I'm not really familiar with JavaScript or React.
So I hope this isn't too easy a question:
I want to have a "one-page" website and change the page dynamically with AJAX requests.
For example, I have code for four visibility levels (guest, normal user, moderator, administrator).
If you log into my page as an admin, you get the JS code for all levels. For example, the JSON response contains a list of URLs pointing to the JavaScript code.
If you log in as a normal user, you should only get the normal-user JS code. You already have the guest-user JS code; you got that when you entered the page.
So I guess it's clear what I want.
But how should I implement this?
Are there any ready-made solutions out there?
https://reactjs.org/docs/code-splitting.html
Maybe I have to adapt this here?
And maybe there are some good bundlers out there that I can use to do that splitting while hiding the endpoint URLs (which I only get from an AJAX request if I have the rights)?
Best regards, knotenpunkt
As I said in the comments, I think that the question is very, very broad. Each one of your requests is a full standalone topic.
Generally speaking, I hope this will lead you in the right direction.
You can split your code by using CommonJS or ES6 modules (read more here) to keep it "modular". Then, during the bundling process, other splitting techniques may be applied, but this will depend on your development environment and the tools you use.
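For instance, with ES6 modules and a bundler that understands dynamic import(), you can load role-specific code on demand. The module paths below are made up, purely for illustration:

// A bundler such as Webpack emits each import() target as a separate chunk
// that is only fetched when the matching branch runs.
export async function loadRoleCode(role) {
  if (role === 'admin') {
    await import('./admin-tools.js');
  } else if (role === 'moderator') {
    await import('./moderator-tools.js');
  }
  // Guest code is already part of the initial bundle.
}

// e.g. after a successful login request:
// loadRoleCode(response.role);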
Your best option for bundling would be Webpack, without any doubt. However, directly dealing with Webpack or setting up a custom development environment is not an easy task. You'll certainly want to read about Create React App, which is a good place to start for a Single Page Application. It will allow you to write your code in a "modular" fashion and will bundle, split and process it automatically (it uses Webpack under the hood).
Finally, securing access must be done server-side (there is a whole other world of options there).
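As a rough illustration of that server-side part (Express is just one possible choice here, and the route, role names and file path are assumptions for the sake of the example):

// Sketch: only serve a role-specific bundle to users allowed to see it.
const express = require('express');
const app = express();

// Assume earlier authentication middleware has populated req.user.
app.get('/js/moderator-tools.js', (req, res) => {
  const role = req.user && req.user.role;
  if (role !== 'moderator' && role !== 'admin') {
    return res.sendStatus(403); // unauthorized users never receive the code
  }
  res.sendFile(__dirname + '/private/moderator-tools.js');
});

app.listen(3000);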
I hope you can shed some more light on this topic for me. The website I'm currently working on has a terrible page speed test grade. One of the failing items is the external JavaScript files: the test tells me to combine these files in order to resolve the issue. I'm hesitant to move forward with this because the JS files belong to plugins. Please provide some feedback on what you think would be the best approach. Thank you.
Option 1: Regardless of which plugin it is, combine all JS files into one and store it in the root directory.
Option 2: Only combine JS files that are related to each of the specified plugins.
Option 3: Use a WordPress minify plugin to get the job done.
With option one, I'm nervous that once it comes time to update a plugin it may break. I can't have that happen, since the site relies on heavy slider animation for galleries and I can't afford the downtime.
Option two seems to be the most logical approach. However, what impact will this have when updating the plugins?
Option three seems too good to be true. If it does work, though, please share some of the plugins that you've had success using.
It's true: the reason WP can be sluggish is that it's so easy to bloat it with third-party plugins. From my experience, you can only fiddle around with other people's code so much before it's easier to write your own, especially in an ecosystem that gets updated as often as WP's.
I wouldn't recommend trying to combine files manually unless you know what every bit of code in each of the installed plugins does and how the files are included in the background. Also, all your work will be overwritten on the next plugin update.
Ideally, what I would do is look for the plugins which cause the most code bloat (you know, those plugins that come with all the nice addons and integrations and skins and layouts and features, but which you only really use to display a single widget on your contact page - we all have one) and roll my own lightweight solution so I can remove the plugins altogether.
If you can't do that due to lack of time/money/patience/interest, your best bet is to use a minifier or "scripts to footer" plugin to at least move the slow stuff down the rendering line.
And don't forget: in the end, don't aim for a higher score on a pagespeed test (like Google's); aim for a load time that is reasonable for your content and your users.
Try to use the Cloudflare service (it has a free plan). It contains an "Auto Minify" feature, where you can choose to minify:
CSS
JavaScript
HTML
That way, you don't need to deal with the various plugins and you can quickly turn the minification off if it creates any problems.
Cloudflare also has other features that can improve your page speed test results, so give it a try.
Is it possible to write a script for VS or TFS in order to simulate the work of a programmer? For example, it would make minor changes in a JS file (swap two functions) and commit it to TFS on a schedule.
Using automatic check-ins/commits in a mature source control system is not a recommended practice.
If files are committed automatically, several problems follow. Who is going to add the comment, what comment should be added, and who is going to associate the check-in with related work items? These are important features of TFS as an ALM tool that enable traceability and visibility, and they make the source control history more detailed and easier to manage.
Moreover, how to guarantee the quality of the code on the server is another serious problem. Even a minor change may have a huge impact. When another developer gets the latest version, they may receive code that doesn't work because of the changes checked in automatically by the script, and so on.
The answers to this question (even though it's not about TFS) contain some useful points for your reference: http://stackoverflow.com/questions/8397918/any-way-to-make-vault-auto-check-in-files
If you really need this feature, you could create a VS add-in project and then use the TFS API to check in the files. For how to use the TFS API to check in files, please refer to this tutorial.
As far as I know, Aurelia does not support server-side rendering as mentioned here.
But the question is: is it possible to do this with some hacks/workarounds?
The most obvious idea would be to use Phantom, Nightmare.js or whatever to simply render the page in a headless browser on the server and serve that to the client, but this is very likely to cause serious performance issues.
Thanks!
UPD
According to Rob Eisenberg's response on FDConf today (16 Apr 2016), server-side rendering will be implemented in 2016, there's one core team member working on that and there's a deadline for this feature.
There is an open issue for Universal/Isomorphic Aurelia which you can monitor. In particular EisenbergEffect (who is Rob Eisenberg, the creator of Aurelia) states that they are gradually working towards providing Universal support for Aurelia. This post from him provides most of the detail:
EisenbergEffect commented on Aug 25
We are trying to lock things down within the next month. That doesn't mean we won't add anything after that, but we need to work towards stabilization, performance and solid documentation without distractions of lots of new features for a little bit.

Primarily, "isomorphism" isn't a use case we want to tackle for the initial v1 release. Again, that doesn't mean we won't do it later. But, we want to have a solid framework for browser-based apps as well as phone gap and electron/nwjs desktop apps first. That was our original goal and we want to make sure we handle those scenarios better than any other framework or library.

After that, we've got some other features we want to do, which are valuable in their own right, but will also take us closer to isomorphism.

1. Enable all aurelia libraries to run on the server. This enables some new testing scenarios, so it's valuable if only from that perspective.
2. Once code can run on the server, we can then implement server view compilation. This isn't isomorphic rendering, but rather the ability to run Aurelia's view compiler as part of your build and bundle process. This enables more work to be done ahead of time, as part of your build, and then it doesn't need to be done in the browser at runtime. So, this will improve the startup time for all apps and reduce initial render times for all components. It also will make it possible to store compiled views in browser local cache to improve performance of successive runs of the application.
3. After both of those things are in place, then we can look at doing a full server render for each route. This isn't quite isomorphic in the truest sense, but it solves the SEO problem without needing 3rd party libraries. So, it's nice to have a solution there.
4. Finally, we can then "sync" a server pre-rendered app with a stateful Aurelia app running in the browser, giving us 100% isomorphic support.

So, those are the stages. The first two would be beneficial to all developers, even those who are not interested in isomorphic apps. The 3rd stage can be done today with 3rd party libraries, so this is a nice-to-have for us, for those who don't want an extra dependency. All of that leads into 4, which adds the final pieces.

We have already begun some of the work on 1. That might get into our first release. We aren't going to push it, but it's already in progress and we're looking for the problem areas so we can make it work. Steps 2-4 involve significant work. Really, we are talking about a collection of features here, each one being rather complex. So, those will probably come in stages after v1, as point releases.

We really don't want to do what Angular 2 has done. They have massively complicated their architecture...to the point that very few people will be able to understand it and developing applications with it has become much more complicated, with many nuances. We really don't want that, so we're focusing on the developer experience we want first, then we'll come back and see about isomorphic support (yes, we already have ideas how to do this cleanly, but want to give those ideas some time to mature). In all of this, our goal is to be modular. So, if you don't care about isomorphism, you don't have to think or worry about it. If you do, you would install the necessary packages, agree to the "constraints" of the system and be on your way.

So, to all who are interested in this topic, I would just ask you kindly to be patient. For those who aren't interested in isomorphism, don't worry, we aren't going to break the developer experience on you. For those of you who want it badly, you will have to wait longer and it will come in stages and in modular pieces so as not to disrupt others.
Just for now
The only way I can propose: render pages with PhantomJS and use Redis to speed up that process.
But you will have lots of trouble restoring the state on the client side.
.......
Dirty solution
Load the rendered page from the server and, on the client side, render a new one in the usual way, then switch the UIs.
It won't be truly isomorphic, but it's something like https://github.com/rails/turbolinks on the first page load.
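A very rough sketch of that switch, assuming the server-rendered markup and a hidden container both already exist in the page (the element IDs and view name are made up):

// main.js - keep the server-rendered markup visible while Aurelia
// boots into a hidden container, then swap the two.
export function configure(aurelia) {
  aurelia.use.standardConfiguration();

  const appHost = document.getElementById('client-app'); // hidden at first

  aurelia.start()
    .then(() => aurelia.setRoot('app', appHost))
    .then(() => {
      document.getElementById('server-rendered').remove();
      appHost.style.display = 'block';
    });
}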
.....
I hope the Aurelia team will soon provide something simpler for this case.
In the current Aurelia there is the possibility to enhance existing HTML.
The documentation says:
So far you've seen Aurelia replacing a portion of the DOM with a root component. However, that's not the only way to render with Aurelia. Aurelia can also progressively enhance existing HTML.
Check out the enhancement section at http://aurelia.io/docs.html#/aurelia/framework/1.0.0-beta.1.0.8/doc/article/app-configuration-and-startup
I'm looking forward to better documentation of this feature.
It seems to me that rendering the HTML on the server and injecting Aurelia into it will work, and Google will like it as well.
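Roughly, startup code using that feature might look like the sketch below. It is only based on the enhancement section linked above, and the binding context is a made-up example:

// main.js - progressively enhance existing, server-rendered HTML
// instead of replacing it with a client-rendered root component.
export function configure(aurelia) {
  aurelia.use
    .standardConfiguration()
    .developmentLogging();

  aurelia.start().then(a => {
    // Bind the markup that is already in the page to a view-model.
    a.enhance({ message: 'rendered on the server' }, document.body);
  });
}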
A hack I just came up with is to put a static copy of the initial rendering into the index.html file:
<html>
<body aurelia-app="main">
<h1>static copy of the website here</h1>
<script src="scripts/vendor-bundle.js" data-main="aurelia-bootstrapper"></script>
</body>
</html>
This is of course completely manual, and if the initial rendering contains any content from a database, then the static copy may need to be updated every time the database content changes (which is of course what isomorphic rendering is supposed to solve).
However, for my needs, which is a simple website with some information that is rarely updated, this solution is good enough. It will at least suffice until I am able to implement proper isomorphic rendering.
I was recently tasked to document a large JavaScript application I have been maintaining for some time. So I do have a good knowledge of the system.
But due to the sheer size of the application, it will probably take a lot of time, even with prior knowledge of the code and access to the uncompressed source.
So I'm looking for tools that would help me explore classes and methods and their relationships in JavaScript and, if possible, document them along the way. Is there one available?
Something like object browser in VS would be nice, but any tools that help me get things done faster will do.
Thanks!
Firebug's DOM tab lets you browse the contents of the global window object, and you can inspect a particular object by entering inspect(whatever) in the command line.
You won't be able to use it to detect relationships unless an instance of one object holds an instance of a related object, but it's a start.
You can also use the Options menu on the DOM tab to restrict what's shown to user-defined functions and properties, which should help reduce clutter.
Take a look at Aptana; it has an outline view that can help you determine what the objects are and sometimes their relationships.
Firebug + uneval(obj) is a simple trick that is often helpful.
I see a lot of people talking about examining the DOM within Firebug. However, from your question it looks like you want something like jsdoc? Just add type and class information through comments and jsdoc generates documentation, including class relationships: http://jsdoc.sourceforge.net/
Google has a fork of it with added functionality http://code.google.com/p/jsdoc-toolkit/
UPDATE: It's not a fork, it's a rewrite by the developer who originally wrote jsdoc as a Perl script. It aims at being more adaptable, so you can use whatever JS inheritance/events/properties style you'd like. Another feature is that it lets you modify the templates used to generate the HTML in a much simpler way.
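For illustration, this is the kind of comment jsdoc picks up; the Tooltip classes are invented and the exact tag set varies a little between jsdoc versions:

/**
 * A hypothetical tooltip widget, annotated so jsdoc can document it.
 * @constructor
 * @param {Element} element The DOM node the tooltip attaches to.
 */
function Tooltip(element) {
  this.element = element;
}

/**
 * Shows the tooltip.
 * @param {string} text The text to display.
 * @return {Tooltip} The instance, for chaining.
 */
Tooltip.prototype.show = function (text) {
  this.element.title = text; // placeholder rendering
  return this;
};

/**
 * A tooltip that follows the mouse.
 * @constructor
 * @augments Tooltip
 */
function FollowTooltip(element) {
  Tooltip.call(this, element);
}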
We don't know if this JS application is designed to run in a Web browser...
If yes, as advised, Firebug (a Firefox extension) is excellent at debugging JS and exploring the DOM.
On the IE side, you have some tools like IEDocMon, Web Accessibility Toolbar (it does more than its name) or Fiddler (unrelated to your question, but still a good tool to have).
Firebug (Firefox) / Dragonfly (Opera) can help you with viewing objects in real time
Aptana / JS/UML(Eclipse) can help with relationships of objects
This is an old question, but let me answer it anyway.
Use an IDE. Integrated Development Environments were made for jumping around rapidly among the code. The key features you will exercise during exploration are viewing the file structure or outline, jumping to a declaration or usage, and searching the entire project for all instances of a string. If you are using WebStorm, set up a custom scope that excludes generated files and node_modules to aid in searching.
Run 'npm la | less', which lists all your dependent modules with one-line descriptions. You may have never seen moment.js and never need to read its documentation, but taking the time to read a one-line summary of it is worthwhile. If you need more information on a tool than a one-line summary, search for the term on SlideShare. Slides are faster than ReadTheDocs.
Document a little as you go. I'm a fan of forcing people to use notebooks constantly rather than scratch paper. Also, I find adding a one-line comment to each JavaScript file is worthwhile. You want to know what should be in each directory of your project. I also recommend building a glossary of the exact meanings of domain terms in your system, e.g., what "job" means in your system.
Finally, you may need to just fire up the application in a debugger and start stepping through parts of it. Most large projects have accreted work from programmers of various skill levels and motivations.
You are aiming for a level of "conceptual integrity" (to quote Yourdon) or to "grok" the software (to quote Heinlein). It does take some time, cannot be bypassed, and can be done efficiently.