I am working on a website where users can write HTML, CSS and JavaScript code in an online code editor. The editor can highlight and pretty-print the code when the user clicks the corresponding buttons. Lately, I've implemented those features by means of Highlight.js and Prettier.js. After completing the code, I found that those two libraries are very large, especially Prettier.js, which takes up 753 KB after being parsed by webpack, and still requires 218.96 KB after being gzipped. It has significantly slowed down page loading and worsened the user experience. So, my question is: what should I do to reduce the bundle size and speed up loading?
P.S. I don't think Prettier.js is meant to be used in production; rather, it is a development tool for developers. It originally runs in a Node.js environment, and the version I used is a simple "browserify" build without much optimization.
Look at Prism.js. There you can decide exactly which features and language support you need.
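For example, here is a minimal sketch, assuming a webpack build like the one in the question: the core bundle of the standard prismjs npm package already covers HTML, CSS, and JavaScript, and the heavyweight formatter can be pulled in with a dynamic import() so webpack splits it into a chunk that is only downloaded when the pretty-print button is first clicked. The element IDs and the './formatter' wrapper module are hypothetical.

    // Sketch only: '#highlight', '#format', '#editor' and './formatter' are
    // hypothetical; the prismjs imports follow the standard npm package layout.
    import Prism from 'prismjs';        // core bundle includes HTML/CSS/JS grammars
    import 'prismjs/themes/prism.css';  // default theme

    document.querySelector('#highlight').addEventListener('click', () => {
      Prism.highlightElement(document.querySelector('#editor code'));
    });

    document.querySelector('#format').addEventListener('click', async () => {
      // Dynamic import() makes webpack emit a separate chunk, keeping the
      // large formatter out of the initial bundle until it is actually used.
      const { format } = await import('./formatter');
      const editor = document.querySelector('#editor code');
      editor.textContent = format(editor.textContent);
    });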
I often compile informal datasets by running some kind of XPath/XQuery on publicly available web pages. Usually the structure of the HTML is regular enough that useful information can be extracted easily.
But today I've come across tunefind.com. This website makes extensive use of the React framework, so most of the structure of the page is built client-side by JavaScript. The pages, when initially downloaded, are very basic and missing a lot of information. They are populated by a script that works from a hopelessly messy blob of JSON data at the bottom of the page.
The only way I can think of to deal with this would be to use some kind of GUI-based web engine and just not display the GUI part. But that is a preposterous amount of work for these casual little CLI tools that I use to gather information.
Is there any way to perform the JavaScript preprocessing without dealing with unnecessary graphics?
Even if you were to process the page without the graphics, the React JavaScript will be geared towards running in a browser context: at the very least it will expect a functioning DOM to exist, and the application itself may also require clicks/transitions to happen before you can see some data.
Your best bet, then, is to load the page in a browser. To keep this simple, there are plenty of good browser automation frameworks designed for exactly this.
I've used a fair few libraries over the years, including PhantomJS, and recently I've gotten the most mileage out of Nightmare.js.
It runs an Electron browser for you and gives you a useful promisified JavaScript API to control it, with common browser actions such as clicking, following links, etc.
You can configure it to hide the browser, which is useful for making a CLI tool; however, it's a bit of a pseudo-headless mode and will still require a windowing/graphical context (e.g. X Window).
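For illustration, a minimal sketch of that workflow; the URL and the '.song-title' selector are made up, and you'd substitute ones from the real page:

    // Minimal sketch: load a client-rendered page, wait for React to finish,
    // then read data out of the live DOM.
    const Nightmare = require('nightmare');
    const nightmare = Nightmare({ show: false }); // hide the window for CLI use

    nightmare
      .goto('https://www.tunefind.com/show/example')
      .wait('.song-title')            // block until the client-side render lands
      .evaluate(() =>
        Array.from(document.querySelectorAll('.song-title'),
                   (el) => el.textContent.trim()))
      .end()
      .then((titles) => titles.forEach((t) => console.log(t)))
      .catch((err) => console.error(err));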
Hope this helps.
PS - If you're at all used to Docker, it's not hard to run all of this as a container!
I recently started working with a large code base that makes many (15-20) JS requests per page. I'm tasked with optimization and performance improvement of these sites.
I've been using tools like Google's PageSpeed and Yahoo's YSlow in conjunction with WebPageTest.org's tests to determine a baseline speed of the site and area of improvement. I'm curious if there are some standard or best-practice solutions for concatenation and minification of JS and CSS files.
I watched: http://www.youtube.com/watch?v=30_AIEhar-I and the first 20 minutes did a really good job of making the case for mod_pagespeed.
I'm currently considering mod_pagespeed with a YUI compressor and perhaps a sprite generator on top of it all.
What are some good tools that I may have missed or things that I should be concerned about with my current build?
Edit: It should be noted that this is one page of many (possibly hundreds), and the site receives a new build every two weeks, so being able to automate this concatenation and minification is a must; I can't just do it once and call it good.
Edit 7/30/2012 -
I spent some time looking at different tools. It's hard to say which ones are the best, but at this time not very many people use mod_pagespeed.
Closure is certainly more widely used, but even that adoption is lacking. It seems the optimal way to do this is to just use a build plugin with YUI.
There are other places that suggest Packer, but it seems that many believe the smaller file sizes are offset by the need to unpack them on the client machine. This Stack Overflow response is a good read regarding these types of tools.
Google's Closure Compiler is quite nice for concatenating and minifying JavaScript. It has the added bonus of linting your code for you when you compile, it will remove dead code, and it can also perform compile-time type checking if you include type hints in docblocks.
In certain cases, the dead code removal feature gives Closure a huge advantage over other minifiers... for example, think of cases where you include a library, but only use about 10% of the functionality. The other 90% can be removed if you compress the library along with the rest of the project.
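As a rough sketch of what that looks like in practice (the function names here are made up): compiled with --compilation_level ADVANCED_OPTIMIZATIONS, Closure will type-check the annotated code and strip anything that is never called.

    // Hypothetical input file. Compiled with ADVANCED_OPTIMIZATIONS, Closure
    // checks the @param/@return types at compile time and drops dead code.

    /**
     * @param {number} a
     * @param {number} b
     * @return {number}
     */
    function add(a, b) {
      return a + b;
    }

    /** Never called anywhere, so Closure removes it from the output. */
    function unusedHelper() {
      return 'stripped at compile time';
    }

    console.log(add(1, 2));        // add(1, 2) may even be inlined to 3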
As for CSS, YUI compressor is probably your best bet if you want something fancy. Otherwise, you could just concatenate the files together using cat and take the hit of a few extra bytes from whitespace.
I am a Java programmer, and a few months ago I discovered frontend programming with JavaScript and HTML, and it is absolutely cool. So here is a JS newbie's question.
Is there any IDE, standard workflow, or set of best practices for JavaScript library development? I don't mean coverage and unit testing or anything like that. I mean: are there tools or techniques I should use while developing, for example, an HTML control with some JS logic? Should I write a small test page in a text editor and open it in a dozen browsers, pressing F5 every minute and watching each browser's console, or is there some magic IDE that will, at the press of a button, reload all browser instances and collect reports from them?
JavaScript Fundamentals
Be sure that you know the language. JavaScript is a bit tricky in this regard: you can easily think that you understand the language, but there are weird things that can sneak up on you. I highly recommend JavaScript: The Good Parts by Douglas Crockford. He also has some talks on YouTube with the same name that are definitely worth watching.
JSLint
Integrate JSLint or a similar tool into your workflow. It is a style checker and static analysis tool for JavaScript and helps catch subtle bugs.
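For instance, here is a made-up snippet with the sort of subtle bugs such a tool flags:

    // Two classic slips JSLint-style tools catch: the missing `var` makes `i`
    // an accidental global, and `==` silently coerces types.
    function sumVisible(items) {
      var total = 0;
      for (i = 0; i < items.length; i += 1) {   // flagged: 'i' is undeclared
        if (items[i].visible == true) {         // flagged: expected '==='
          total += items[i].value;
        }
      }
      return total;
    }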
Avoiding F5
Check out live.js:
Just include Live.js and it will monitor the current page including
local CSS and Javascript by sending consecutive HEAD requests to the
server. Changes to CSS will be applied dynamically and HTML or
Javascript changes will reload the page. Try it!
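That isn't magic, by the way; the idea behind it can be sketched in a few lines. This is an illustration of the technique, not Live.js's actual source:

    // Illustration only, not Live.js's real code: poll the server with HEAD
    // requests and reload when the validator headers change.
    (function poll(previous) {
      fetch(location.href, { method: 'HEAD' }).then(function (res) {
        var stamp = res.headers.get('ETag') || res.headers.get('Last-Modified');
        if (previous && stamp && stamp !== previous) {
          location.reload();                    // the resource changed on disk
        }
        setTimeout(function () { poll(stamp); }, 1000);
      });
    }(null));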
Browser Developer Tools
Get familiar with them. I personally strongly prefer Chrome's developer tools over Firefox/Firebug, but regardless of which one you choose, learn how to use the debugger.
Node.js
You should also be aware that you don't have to test JavaScript logic in your browser: you can run it like any other scripting language using Node.js.
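For example, a minimal sketch with hypothetical file names: any logic that doesn't touch the DOM can be exercised straight from the shell with node.

    // slugify.js - pure logic, no DOM, so it runs anywhere
    function slugify(title) {
      return title.toLowerCase().trim().replace(/[^a-z0-9]+/g, '-');
    }
    module.exports = slugify;

    // test.js - run with `node test.js`; no browser involved
    const assert = require('assert');
    const slugify = require('./slugify');
    assert.strictEqual(slugify('Hello, World'), 'hello-world');
    console.log('all tests passed');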
As a seasoned programmer, here are the steps I would follow for JS lib production.
Separate the UI from the application logic. Create one component for the application logic that is completely separated from all API access. API access, meaning the DOM, WSH, and Node.js, should be split out; I go so far as to use different files to force and ensure that separation. A sketch of this split follows below.
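A minimal sketch of what I mean (all names are illustrative): the logic file has no idea whether it is running under a browser, WSH, or Node.

    // logic.js - pure application logic; no DOM, WSH, or Node API access
    function lineCountDelta(before, after) {
      return Math.abs(before.split('\n').length - after.split('\n').length);
    }
    // Export only when a module system is present, so the same file also
    // loads via a plain <script> tag in the browser.
    if (typeof module !== 'undefined' && module.exports) {
      module.exports = { lineCountDelta: lineCountDelta };
    }

    // browser-ui.js - a thin adapter that is the ONLY file touching the DOM
    document.querySelector('#run').addEventListener('click', function () {
      var a = document.querySelector('#before').value;
      var b = document.querySelector('#after').value;
      document.querySelector('#out').textContent = lineCountDelta(a, b);
    });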
Create UI environments to access your logic. Have a production UI control for access by your audience and also a separate internal UI control for your sandboxed development. For example, I have written my application at http://prettydiff.com/ to work from the command line and from a browser. I have also written access methods similar to the published HTML, but different, for my own development and more rapid unit testing. In a sandboxed UI you can use a recursive setTimeout to refresh the page at intervals, for automated test plan verification as you write and save code.
Focus on availability and distribution. People can get my libraries directly from my website, GitHub, and npm for Node. I have a regimented process whereby I upload code to the site during a production release and then perform a quick verification in the browser. If the release did not break the application, I push to GitHub and then push that exact same code to npm.
Distributed access to code operation is even better. It's nice having rapid and even automated access to libraries online, but being able to access unit testing via a request is even better. I can remotely unit test my application's code by accessing code samples online directly, telling my application to request a code sample via a URI. This means that not only is my static code available for distribution and testing, but so is its operation.
Good documentation is everything. I will never even bother to examine a lib if the documentation is weak. I am not talking about code comments, though those are important; I am talking about end user documentation. I want to be able to read the documentation and know everything about the subject. Node.js became popular because even before the codebase was stable its documentation kicked ass. Get other people to QA your documentation before they QA your code. Without stunning documentation your lib is less than worthless to me.
Know your mission. Each and every lib should have a clear, simple, and well stated purpose. If this is not established then the lib is not ready for release. If an enhancement confuses the lib's purpose then it may be time to divide one library into multiple smaller libraries. Focus on precision, clarity, directness, and only the sole stated mission of the lib.
Independence is vital to adoption. I don't like libs dependent upon other libraries or frameworks. It's great that your jQuery library may be the best thing since sliced bread, but I won't be looking at it. Independence means greater portability and the freedom to mix and match it with other libraries that you are not aware of.
Style is important. This is a touchy subject, but style is important. Keep the logic in your lib as simple and declarative as possible. If your code is absolutely declarative in nature, then your algorithmic patterns are awesome. Avoid use of the new keyword unless you are an experienced JavaScript badass, and severely limit use of the this keyword, as it will fail you in future maintenance operations. Do not algorithmically build out large strings with concatenation (see the sketch after this item), and continually watch the execution speed of your code. Because even trivial changes to style, or seemingly minor enhancements to the logic, can destroy execution efficiency, I put a timer on all my UI controls. During development, use profilers, such as Chrome's dev tools, to track down long operations in JS execution.
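To illustrate the string-building point with a made-up example: the first version allocates a fresh intermediate string on every pass, while the second collects the pieces and joins once. Engines have gotten smarter about concatenation over time, which is exactly why the timers and profilers mentioned above matter more than folklore.

    // Hypothetical example of the style point above. Profile both: results
    // vary by engine, so trust the timer, not the rule of thumb.
    function renderRowsSlow(rows) {
      var html = '';
      var i;
      for (i = 0; i < rows.length; i += 1) {
        html += '<li>' + rows[i] + '</li>';   // new intermediate string each pass
      }
      return html;
    }

    function renderRowsFast(rows) {
      var parts = [];
      var i;
      for (i = 0; i < rows.length; i += 1) {
        parts.push('<li>', rows[i], '</li>');
      }
      return parts.join('');                  // single join at the end
    }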
Be open about your failures. Software is never perfect, and other developers will always respect this. If you encounter a bug before anybody else, be open about the bug's existence. If it will take more than a week to get the solution out, don't delay notification; notify your users immediately so that they are aware. I recently rolled back a major enhancement to the logic of my diff algorithm because additional unit testing showed heavy slippage. I rolled back the same day I decided the prior release or two was flawed, and I was open about the rollback. If you want people to contribute to your code base or provide bug reports, then openness is critical.
I know you may have explicitly said that you didn't mean unit testing, but that is precisely the way I recommend you write javascript libraries.
If you're a Java developer, you may be familiar with jUnit. If so, qUnit may feel more natural to you. Otherwise, I recommend you take a look at Jasmine or Mocha. While I prefer Mocha, my experience with Jasmine has generally been better for in-browser development thanks to the awesome jasmine-jquery plugin.
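A minimal qUnit test, to show how close the shape is to jUnit (modern qUnit API; `slugify` is a hypothetical function under test):

    // qUnit sketch: assert-style flow a jUnit user would expect.
    QUnit.module('slugify');

    QUnit.test('collapses punctuation into dashes', function (assert) {
      assert.strictEqual(slugify('Hello, World'), 'hello-world');
    });

    QUnit.test('lower-cases the result', function (assert) {
      assert.strictEqual(slugify('ABC'), 'abc');
    });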
If you're writing qUnit tests, take a look at IntelliJ IDEA, which allows you to execute tests in addition to providing code coverage.
If you're developing on the browser, take a look at LiveReload. It'll watch files for you and auto-refresh your browser - great for instant feedback.
For browser compatibility, I would recommend you just get it working reasonably on one browser first before worrying about the others. Check in from time to time to see if you find issues. See if jQuery can abstract some of that mess for you. Otherwise, take a look at Adobe's BrowserLab.
While I imagine that a definitive answer would require more specifics, I'm interested in a high-level perspective on the appropriateness of heavy JavaScript integration in either CMS. Thanks.
I'm going to answer the question you actually asked, rather than the question you may have meant, by focusing on "appropriateness." In my mind, JS integration is appropriate when 1) it actually improves usability, and is not simply decorative; 2) it is done with careful attention to maintaining accessibility; 3) it is done in a JavaScript integration layer that achieves separation from the content layer and modularity in implementation; and 4) it minimizes page-load impact.
In regard to Plone, all I can say is that we've spent a lot of time on these issues, and that our framework team is aggressive in demanding that we meet those tests before integrating new functionality. I'm particularly proud of how we're doing on accessibility, and expect us to continue to focus on that. We won't deliberately break accessibility, and if we break it accidentally, we're going to fix it.
One very tough area is consistency. We've bounced back and forth between consistency within the platform's use of JS and choosing individual tools as best-of-breed for a particular functionality. I can't say we've done well on either score. The results are OK, but it leaves the new developer with a burden to understand the toolbox.
Plone has an integrated portal_javascripts registry which allows you to:
Merge and compress JavaScript files into bundles
Set "safe compress" flags per file, etc.
Set conditions for when JavaScript files are served (e.g. only for authenticated users)
Use IE conditional includes
etc.
I.e., it's a very heavy-duty tool for JavaScript processing, and it's supported out of the box.
All of this can be edited dynamically through-the-web in the admin interface. This is a far superior tool compared to manually adding your files to <head> or using an external tool to manage JavaScript compression bundles.
Plone also sets Expires headers and generates cache-busting URLs for JS files correctly, something many CMSes lack out of the box.
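The idea behind a cache-busting URL, sketched generically (an illustration, not Plone's implementation): embed something derived from the file's contents in the URL, so a far-future Expires header is safe and a new build automatically misses the old cache entry.

    // Illustration only (not Plone's code): a content-derived token in the
    // URL means clients re-fetch exactly when the file changes.
    var bundleHash = '3f9a2c1';   // e.g. hash of the bundle contents at build time
    var script = document.createElement('script');
    script.src = '/static/bundle.' + bundleHash + '.js';
    document.head.appendChild(script);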
More info can be found as follows:
1) Install Plone
2) Go to the Zope Management Interface
3) Find portal_javascripts
I'm in the process of selecting an API for building a GWT application. The answers to the following questions will help me choose among a set of libraries.
Does third-party code rewritten in GWT run faster than code using a wrapped JavaScript library?
Will code using a wrapped library have the same performance as pure GWT code if the underlying JavaScript framework is well written and tuned?
While JavaScript libraries get a lot of programming eyeballs and attention, GWT has the advantage of being able to do some hideously non-human-readable things to the generated JavaScript code, per browser, for the sake of performance.
In theory, anything the GWT compiler does, the JavaScript writers should be able to do. But in practice the JS library writers have to maintain their code. Look at the jQuery code. It's obviously not optimized per browser. With some effort, I could take jQuery and target it for Safari only, saving a lot of code and speeding up what remains.
It's an ongoing battle. The JavaScript libraries compete against each other, getting faster all the time. GWT gets better and better, and has the advantage of being able to write ugly unmaintainable JavaScript per browser.
For any given task, you'll have to test to see where the arms race currently places us, and it'll likely vary between browsers.
In some cases you don't have another option: you cannot rewrite everything when moving to GWT.
As a first step you could just wrap your existing code, and if it turns out to be a performance bottleneck you can still move the code to Java/GWT.
The code optimization in GWT will certainly be better than what the majority of JS developers can write. And when browsers change, it is just a matter of modifying the GWT optimizer, and your code will be better tuned for the latest advances in JS technology.
Depends on how well the code is written.
I would think so.
Generally, look at the community around a third-party library before using it, unless it is open source (so you can fix bugs yourself), and specifically look for posts concerning bugs: how quickly do the maintainers respond to issues? How long is a release cycle? Etc.