I was shown a trick: instead of keeping the data in datafile.json and loading it with Ajax, the data is wrapped in a single object in a datafile.js, such as var Data = { /* all data goes here */ }. The datafile.js is then simply loaded as an external script in the HTML head.
It works just as well as if the objects were loaded with Ajax. Are there any drawbacks to this?
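A minimal sketch of the pattern (the file and property names here are just examples):

// datafile.js - nothing but an assignment statement
var Data = {
    labels: ["a", "b", "c"],
    count: 3
};

<!-- in the HTML head, before any script that reads Data -->
<script src="datafile.js"></script>
<script src="app.js"></script>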
JSON is JavaScript. (Back in the day, the idea was that you could simply eval it ...) Therefore, the file that you speak of is simply ... "a JavaScript assignment-statement, stored in a file."
The only potential issue with storing this as a separate file might be "a timing hole." The source code must be separately retrieved, and I'm not sure whether the browser would wait for that, so other JavaScript might conceivably execute before var Data exists, because that file hasn't been retrieved and run yet.
When you have "lots of invariant fixed data," I customarily put everything, including the data, into one ("yeah, it's big ...") JS file, so that I know "it's all there" before any of it tries to execute. Yes, there are definite advantages to including fixed data directly in your source file, as you're effectively doing here, but I'm not sure I see an advantage (and I might see a hole ...) in using several JS files.
How can I parse an HTML page in Android and get the JavaScript results? The main problem is that if I simply use the Jsoup.connect() method, the Document object doesn't contain the JS results, because the JavaScript needs some time to run. Is it possible to delay the connection?
As already mentioned in the comments, JSOUP does not run any JavaScript. For that you would need a JavaScript interpreter.
Since you mentioned that the page you are wanting to read takes some time to render, it seems clear that you actually need to run the JavaScript to render the DOM.
However, if you look into the source code of the page you may be able to figure out how the JavaScript actually renders the page. I see two possibilities:
1) The JavaScript really just runs to dynamically render the page with information that was already delivered with the initial access. That frequently happens on modern websites that send along all relevant data with the first response (aka isomorphic rendering). In that case the data you want is usually embedded in the page as JSON objects; you can extract the JSON and parse it with a JSON parser (see the sketch below).
2) The JavaScript actually loads some data asynchronously. In that case you can identify these HTTP requests and use JSOUP to fetch the data directly. Usually such data is in JSON format, so here too it makes sense to use a JSON parser to read out the relevant parts.
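For possibility 1, the page source often embeds the data in a script element, something like this (the variable name is hypothetical and varies from site to site):

<script>
// server-rendered state for the client-side code
window.__INITIAL_STATE__ = {"articles": [{"id": 1, "title": "..."}]};
</script>

You can locate that element with Jsoup, cut the JSON literal out with plain string operations, and feed it to a JSON parser.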
What are the differences in performance and memory footprint, if any, between these two approaches:
1. Using Src
<script type="text/javascript" src="1MBOfjavascript.js"></script>
2. Directly injecting in the Head
$('head').append("<script type='text/javascript'>1MBOfJavascriptCode</script>");
I'm interested because we are developing a Cordova app where we use the second method to inject into the DOM a previously downloaded JavaScript bundle read from HTML local storage.
Given that the script will probably grow in size, I'd like to know whether the second method could run into memory issues or other DOM problems.
I believe the overhead in such cases should be insignificant, as memory consumption depends mainly on what the script actually does. The memory used by the file itself is just its total size, 1 MB at most here, whereas the same script can easily use 100 MB during execution.
Anyway, coming to the point:
Plain-text inclusion would be better if the script has to be included in all cases, as it skips an extra script execution and does not cause the browser to re-render after the append.
The append option should be used when you only need the script under specific conditions on the client side, so that it isn't loaded unnecessarily.
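For the conditional case, a rough sketch (the URL and the init function are placeholders):

if (featureIsNeeded) {
    var s = document.createElement('script');
    s.src = 'https://example.com/feature.js';
    s.onload = function () { initFeature(); }; // hypothetical entry point
    document.head.appendChild(s);
}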
The browser goes through all the elements before rendering the page, so I would imagine it is exactly the same. If anything, with scripts added after the page has loaded, chances are you will get "call to undefined function" errors when calling a function that was added after the load.
My advice would be to add them at load time using src, but keep things tidy.
JavaScript's impact depends a lot on how and where you load your file; there are many ways to load a file or code.
The first method you mention is the conventional way to load a JavaScript file: via src, usually just before the closing </head> tag. Nowadays, though, the trend is to load files just before the </body> tag instead; it speeds up application load and gets the DOM ready faster.
The second method is obviously not ideal, because some code should only run once your DOM is ready, and here you are loading your JavaScript via a jQuery append, which depends on the jQuery file having loaded first and therefore delays your code's execution. Some of your code may also not produce the desired output, depending on how your functions are called and how much they depend on DOM ready.
I would prefer to load a JavaScript file with the first method, at the bottom of the page/app if possible.
Asynchronous loading can also be tried, as sketched below.
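A sketch of both suggestions (file names are placeholders):

<body>
    <!-- page content first, so it can render before the scripts arrive -->
    <script src="app.js"></script>            <!-- classic: just before </body> -->
    <script src="extras.js" async></script>   <!-- or let the browser fetch it without blocking -->
</body>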
I think this comes down to browser behaviour. After retrieving the response from the web server, the browser first parses all the elements to build the DOM; only once the DOM is ready does your script execute. Based on that, it is clear why the script that comes first in the document executes before one that is injected later.
About the second method: I think the JavaScript engine in any browser can handle it easily and without problems. The engine doesn't really care about the size, only about how long the script takes to load and then execute. Where you can run into trouble is injecting elements into the DOM in a loop; there you might hit a memory problem.
When you say big JavaScript files, yes, we should think about memory, but as far as the two methods you describe go, it is really just about which one executes first.
That's my opinion.
My script adds some annotations to each page on a site, and it needs a few MBs of static JSON data to know what kind of annotations to put where.
Right now I'm including it as var data = { ... } directly in the script, but that's really awkward to maintain and edit.
Are there any better ways to do it?
I can only think of two choices:
Keep it embedded in your script, but in a separate file to stay maintainable (a few megabytes means your editor might not like it much), and add a compilation step to your workflow that concatenates the two. Since you are adding a compilation step anyway, you can also uglify your script, so it might be slightly faster to download the first time.
Get it dynamically using JSONP. Put it on your web server, Amazon S3 or, even better, a CDN. Make sure it is served cacheable and gzipped so it won't slow down the client's network by being downloaded on every page! This solution works better if you want to update your data regularly but not your script (I think Tampermonkey doesn't support auto-updates).
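A minimal JSONP-style sketch (the URL and callback name are made up):

// On the CDN, data.js wraps the JSON in a function call:
// handleData({"annotations": [ /* ... */ ]});

function handleData(data) {
    // use the static data here
}

var s = document.createElement('script');
s.src = 'https://cdn.example.com/data.js';
document.head.appendChild(s);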
My bet would definitely be to use the special storage functions provided by Tampermonkey: GM_getValue, GM_setValue, GM_deleteValue. You can store your objects there for as long as needed.
Just download the data from your server once, at the first run. If it's just for your own use, you can even insert all the data directly into a variable from the console, or use a temporary textarea, and have the script save that value with GM_setValue.
This way you can even optimize the speed of your script by keeping unrelated objects in different GM variables.
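A sketch of the download-once approach (the URL is a placeholder, and each GM_* function needs its own @grant line in the userscript header):

// @grant GM_getValue
// @grant GM_setValue
// @grant GM_xmlhttpRequest

function start(data) { /* add the annotations */ }

var cached = GM_getValue('annotationData', null);
if (cached === null) {
    GM_xmlhttpRequest({
        method: 'GET',
        url: 'https://example.com/data.json',
        onload: function (resp) {
            GM_setValue('annotationData', resp.responseText); // stored for later runs
            start(JSON.parse(resp.responseText));
        }
    });
} else {
    start(JSON.parse(cached)); // no network request on repeat runs
}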
I've been trying to get this sorted all day but really can't figure it out. I've got a page with <div id="ajax"></div> that is populated from a tab-based menu, pulling in a snippet of HTML stored in an external file that contains some JavaScript (mainly form validation).
I've seen in places that I need to eval() the code, but elsewhere people say that's the last thing you should do.
Can someone point me in the right direction, and provide an example if possible, as I am very new to jQuery/JavaScript?
Many thanks :)
pulling in a snippet of HTML stored in an external file that contains some JavaScript (mainly form validation).
Avoid doing this. Writing <script> to innerHTML doesn't cause the script to get executed... though moving the element afterwards can cause it to get executed, at different times in different browsers.
So it's inconsistent in practice, and it doesn't really make any sense to include script anyway:
when you load the same snippet twice, you'd be running the same script twice, which might redefine some of the functions or variables bound to the page, which can leave you in extremely strange and hard-to-debug situations
non-async/defer scripts are expecting to run at parse time, and may include techniques which can't work when inserted into an existing document (in the case of document.write this typically destroys the whole page, a common symptom when trying to load third-party ad/tracking scripts).
Yes, jQuery tries to make some of this more consistent between browsers by extracting script elements and executing them. But no, it doesn't manage to fix all cases (and in principle can't). So don't ask it to. Keep your scripts static, and run any binding script code that needs to happen in the load callback.
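In practice that means the snippet stays markup-only and the wiring happens in the load callback, roughly like this (the selector and helper function are illustrative):

$('#ajax').load('snippet.html', function () {
    // snippet.html contains no <script>; bind behaviour here instead
    attachFormValidation($('#ajax form')); // hypothetical helper defined in your static script
});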
If you fetch html via Ajax, and that html has a <script> tag in it, and you write that html into your document via something like $('#foo').append(html), then the JS should run immediately without any hassle whatsoever.
jQuery automatically processes scripts received in an Ajax request when adding the content to your page. If you are having a particular problem, post some code.
From the jQuery documentation: dataType: "html" - Returns HTML as plain text; included script tags are evaluated when inserted in the DOM.
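For example (snippet.html is a placeholder):

$.get('snippet.html', function (html) {
    $('#ajax').append(html); // jQuery extracts and runs any <script> in the markup
});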
I know, for instance, that when Chrome downloads a JavaScript file, it is interpreted and JITted.
My question is: when IE 6, 7, or 8 first downloads a JavaScript file, is the entire thing parsed and interpreted?
My understanding was that only top-level function signatures and anything executed in the global scope were parsed on load, and that function bodies and the rest were parsed on execution.
If they are fully parsed on load, what do you think the time savings would be from deferring the download and parsing of the function bodies until later?
They are fully parsed on load. (IE has to parse the script to know where each function body ends, of course.) In the open-source implementations, every function is compiled to bytecode or even to machine code at the same time, and I imagine IE works the same way.
If you have a page that's actually loading too slowly, and you can defer loading 100K of script that you're probably not going to use, it might help your load times. Or not—see the update below.
(Trivia: JS benchmarks like Sunspider generally do not measure the time it takes to parse and compile the code.)
UPDATE – Since I posted this answer, things have changed! Implementations still parse each script on load at least enough to detect any SyntaxErrors, as required by the standard. But they sometimes defer compiling functions until they are first called.
Because defining a function is itself an operation, yes, your entire JavaScript file is parsed, and all of the top-level operations are interpreted. The code inside your functions is not actually executed until they're called, but it is parsed.
For example:
var i = 0;
var print = function( a ) {
    document.getElementById( 'foo' ).innerHTML = a;
};
Everything gets parsed in the above example, and lines 1 and 2 get executed. Line 3, however, doesn't get executed until print() is called.
There are little "perceptual games" you can play with your users, like putting the script tags at the bottom of the HTML instead of at the top, so that the browser renders the top of the page before it receives the instruction to download and parse the JavaScript. You could probably also push your function definitions into a window.onload handler, so that they don't get executed until the whole page is loaded and in memory. However, this can cause a "flash of unstyled content" if your JavaScript applies visual styles to things (such as jQuery UI widgets).
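For instance (a minimal sketch; the init function is made up):

window.onload = function () {
    // runs only after the whole page, images included, has loaded
    initPageEnhancements();
};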
Yes, in all browsers, downloading the resource via a <script> tag blocks everything else on the page: CSS downloads, other JS downloads, and rendering.
If you're loading all the JavaScript at the beginning of, or throughout, your page, you will see hiccups: a request takes about 50 ms, and the parsing of a library file or something similar can take more than 100 ms. 100 ms is the usual threshold above which a delay is noticed as "lag" by the user.
The time savings may be negligible, but the slight loss of user experience if there are pauses when your page is loading may be significant depending on your situation.
See LABjs' site for a lot of articles and great explanations on the benefits of deferring loading and parsing.
What do you mean by "downloads"? When it's included with a <script> tag, or when it's downloaded through XMLHttpRequest?
If you mean inclusion via a script tag, then IE interprets the whole JS file at once; otherwise you would not be able to call functions in that file or see syntax error messages.
If you mean downloading via XMLHttpRequest, then you have to evaluate the content of the file yourself.
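For the XMLHttpRequest case, the classic pattern is roughly this (the URL is a placeholder, and eval carries the usual warnings about trusting the source):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'extra.js', true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        eval(xhr.responseText); // executes the downloaded code
    }
};
xhr.send();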