How to attach large amounts of data with tampermonkey script? - javascript

My script adds some annotations to each page on a site, and it needs a few MBs of static JSON data to know what kind of annotations to put where.
Right now I'm including it with just var data = { ... } as part of the script, but that's really awkward to maintain and edit.
Are there any better ways to do it?

I can only think of two choices:
Keep it embedded in your script, but to keep it maintainable (a few megabytes means your editor might not like it much), put the data in a separate file and add a build step to your workflow that concatenates the two. Since you're adding a build step anyway, you can also uglify your script so it's slightly faster to download the first time.
Fetch it dynamically using JSONP. Put it on your web server, Amazon S3 or, even better, a CDN. Make sure it is cacheable and gzipped so it doesn't slow down the client by being re-downloaded on every page! This solution works better if you want to update your data regularly but not your script (I think Tampermonkey doesn't support auto-updates).
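A rough sketch of the JSONP route (the URL and the callback name annotationData are placeholders, and the hosted file would have to wrap the JSON in a call to that callback):
// In the userscript (assuming @grant none, so window is the page's window):
// define the callback, then inject a script tag pointing at the hosted data.
window.annotationData = function (data) {
    // "data" is the multi-megabyte JSON payload; hand it to the annotation logic here
};
var s = document.createElement('script');
s.src = 'https://cdn.example.com/annotations.js'; // file contents: annotationData({...});
document.head.appendChild(s);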

My bet would definitely be to use the special storage functions provided by Tampermonkey: GM_getValue, GM_setValue, GM_deleteValue. You can store your objects there for as long as needed.
Just download the data from your server once, on the first run. If it's just for your own use, you can even paste all the data into a variable from the console, or use a temporary textarea, and have the script save that value with GM_setValue.
This way you can even optimize the speed of your script by having unrelated objects stored in different GM variables.
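A rough sketch of the first-run download and caching (the URL, the storage key and the annotate() function are placeholders; Tampermonkey can store objects directly, while other script managers may need JSON.stringify):
// Requires @grant GM_getValue, GM_setValue and GM_xmlhttpRequest in the metadata block.
var data = GM_getValue('annotationData', null);
if (data === null) {
    GM_xmlhttpRequest({
        method: 'GET',
        url: 'https://example.com/annotations.json', // placeholder: wherever you host the data
        onload: function (response) {
            var parsed = JSON.parse(response.responseText);
            GM_setValue('annotationData', parsed); // cached for every later run
            annotate(parsed); // your own function that places the annotations
        }
    });
} else {
    annotate(data);
}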

Related

LSL HttpServer - Serving large Notecards from Prim Inventory on Secondlife

I am writing a Media-HUD that runs totally on local notecard files stored within the prim's inventory, with notecards named like index.html, style.css, icon.svg etc.
My hope is to use the LSL HttpServer functions, and the script's URL to create a totally self-contained media based HUD that is easy to edit like editing any web page.
This is completely possible on its own; however, there is a limitation in that the pages must fit into the memory allocated to the LSL script. Under Mono this is only 64 KB.
I want to remove this limitation by somehow, perhaps from JavaScript, reading each 'file' from a notecard line by line in the user's browser itself (thus getting around the memory limit by only bringing one notecard line into memory at a time).
Is there a way to do this? Generate an entire file in JavaScript procedurally by loading in the strings making it up line by line, and then serve it as though it were a whole file? I'm not sure how feasible this is.
Any ideas/guidance greatly appreciated!
You could do this through JavaScript using XMLHttpRequest (jQuery's wrapper for this is $.ajax). You could request each line individually, which would be slightly slower, or read in a number of lines at a time, at the script's leisure. The http_request event is not throttled, so either works. Note that the loader has to be sent in a single response, because the LSL server has no way of pushing data "piecemeal" like an actual server does.
Notes:
llGetNotecardLine only returns the first 255 bytes per line.
llHTTPResponse must be called within ~20 seconds of the request, so you can't feasibly read more than 20 lines from a notecard at a time.
I'm not sure how this would work for non-DOM file types. All files would need to be embeddable in the HTML using the JavaScript DOM. To my knowledge, JavaScript can't arbitrarily create an external file and serve it to itself. Obviously it would not work for non-text file types, but you could certainly load in the rest of the HTML, CSS, and HTML5 SVG. Basically, if it can go in a single HTML file, you can load it in through JavaScript.
I have no experience with React but it gives a good example of what is possible on the UI side with loading things in purely through Javascript.
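A rough sketch of that line-by-line loader, assuming the LSL server answers GET requests of the form url?file=<name>&line=<n> with the text of that line and "EOF" past the end (those conventions are made up here; the in-world script would have to implement them):
function loadNotecard(baseUrl, name, onDone) {
    var lines = [];
    function next(i) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', baseUrl + '?file=' + encodeURIComponent(name) + '&line=' + i);
        xhr.onload = function () {
            if (xhr.responseText === 'EOF') {
                onDone(lines.join('\n')); // the whole "file", assembled line by line
            } else {
                lines.push(xhr.responseText);
                next(i + 1);
            }
        };
        xhr.send();
    }
    next(0);
}
// Example: fetch style.css from the prim and inject it into the served page.
loadNotecard('http://sim.example/cap/xxxx', 'style.css', function (css) {
    var style = document.createElement('style');
    style.textContent = css;
    document.head.appendChild(style);
});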
So that's less than 64 thousand characters in memory, at most, per script. I will give you some advice that might make your task feasible:
External resources
Minimize the amount of code you have to keep in your notecards by sourcing popular libraries (Bootstrap, React and such) from the web.
You will have to rely on their mirrors' availability, though, but it will greatly reduce the amount of memory needed to provide pretty and functional pages.
Minify your code
Use tools like UglifyJS or Closure Compiler to make your JavaScript lighter. You must be careful, though: these tools fit all your code on a single long line by default, and you can't read lines longer than 255 characters with LSL. Luckily you can customize these tools' options to limit the number of characters per line.
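For example, with UglifyJS's Node API this might look like the following sketch (the max_line_len output option exists in recent versions, but check the docs of the version you use; the file names are placeholders):
// Build step: minify the HUD script while capping output lines below
// llGetNotecardLine's 255-byte limit.
var UglifyJS = require('uglify-js'); // npm install uglify-js
var fs = require('fs');
var source = fs.readFileSync('hud.js', 'utf8');
var result = UglifyJS.minify(source, {
    output: { max_line_len: 240 } // leave some headroom under 255
});
fs.writeFileSync('hud.min.js', result.code);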
Divide and conquer
Since a single script can't handle much memory, make dedicated scripts: one could serve resources (act as a file server, providing your HTML and JS) while another receives the API calls and handles the application's logic.

Duplicate an HTML file (and its content) with a different name in Javascript

I have an HTML file with some JavaScript and CSS applied to it.
I would like to duplicate that file, making file1.html, file2.html, file3.html, ...
All of that using JavaScript, jQuery or something like that!
The idea is to create a different page (from that kind of template) that will be printed afterwards with different data in it (from an XML file).
I hope it is possible!
Feel free to ask for more details if you want!
Thank you all in advance.
Note: I do not want to copy the content only but the entire file.
Edit: I know I should use a server-side language, I just don't have the option ):
There are a couple ways you could go about implementing something similar to what you are describing. Which implementation you should use would depend on exactly what your goals are.
First of all, I would recommend some sort of template system such as VueJS, AngularJS or React. However, given that you say you don't have the option of using a server side language, I suspect you won't have the option to implement one of these systems.
My next suggestion, would be to build your own 'templating system'. A simple implementation that may suit your needs could be something mirroring the following:
In your primary file (root file), which you want to route or copy the other files through, you could use JS to include the correct HTML files. For example, you could have JS conditionally load a file depending on certain circumstances by putting something like the snippet further below after a conditional statement.
Note that while doing this could optimize your server's data usage (as it would only serve required files and not everything all the time), it would also probably increase loading times. Your site would need to wait for the additional HTTP request to come through and for the requested content to load and render on the client. While this sounds very slow, it has the potential of not being that bad if you don't have too many discrete requests and none of your code is unusually large or computationally expensive.
If using vanilla JS, the following snippet will illustrate the above:
In a script that comes loaded with your routing file:
function read(path) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', path);
  xhr.onload = show;
  xhr.send();
}
function show() {
  var text = this.response;
  // you can replace document.body with whatever element you want to wrap your imported HTML
  document.body.innerHTML = text;
}
read('path/to/file/on/server'); // note: the path must be passed as a string
Note a couple of things about the above code. If you are testing on your computer (i.e. opening your HTML file in a browser with a path like file://__) without a local server, you will get some sort of cross-origin request error when trying to make an XMLHttpRequest. To bypass this error, either test your code on an actual server (not ideal, constantly pushing code, I know) or, preferably, set up a local testing server. If this is something you want to explore, it's not that difficult to do; let me know and I'd be happy to walk you through the process.
Alternatively, you could implement the above loading system with jQuery and the .load() function: http://api.jquery.com/load/
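A minimal jQuery sketch (the selector and path are placeholders):
// Fetch a fragment from the server and insert it into the chosen wrapper element.
$('#content').load('path/to/file/on/server.html', function () {
    // runs once the fragment has been inserted
});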
If none of the above solutions work for you, let me know more specifically what it is that you need, and I'll be happy to give a more useful/ relevant answer!

Loading Coldfusion javascript asynchronously?

Whenever a page is loaded with ColdFusion, it loads many default ColdFusion JavaScript files. When I run the Google PageSpeed tools, they always complain about render-blocking JavaScript. Apparently, ColdFusion includes many scripts when a page is loaded, such as
...scripts/ajax/yui/yahoo-dom-event/yahoo-dom-event.js
...scripts/ajax/yui/aniax/yui/autocomplete/autocomplete-min.js
...scripts/ajax/yui/autocomplete/autocomplete-min.js
...scripts/ajax/messages/cfmessage.js
...scripts/ajax/package/cfajax.js
...scripts/ajax/package/cfautosuggest.js
...scripts/cfform.js
...scripts/masks.js
These are all considered render-blocking scripts. I can't find any information on how to make them non-render-blocking, because obviously I can't add the async="async" attribute to ColdFusion script tags I can't see. How can I make the ColdFusion scripts non-render-blocking, or am I stuck with them?
Can someone please shed some light on this?
If you really want to do something about this instead of rewriting your UI code, you can actually grab the output buffer before it is sent to the client and modify it. Your modifications could be as simple as removing a hardcoded list of script tags and replacing them with a custom script file that you host in your webroot. The custom script file would simply be all of the other script files' contents combined.
To do this:
In onRequestEnd in application.cfc you can call var outputBuffer = getPageContext().popBody().getBuffer() which will return the body to be sent to the client.
Run replacements on outputBuffer, looking for those script tags and removing them via regular expressions. You'll want to keep track of whether or not you've actually removed any to use as a flag in the next step.
Finally, you would append to the output with your new master script tag if your flag that replacements have been made is true. After you do that, the buffer will be automatically flushed to the client.
I believe that Adobe no longer updates the script files, so you basically don't have to worry about versioning your master script file to clear the browser cache.
Edit/Note: Definitely avoid using the CF UI stuff, but we don't know what your situation is like right now. If you have hundreds or thousands of hours of rewriting to do, then obviously rewriting it all is likely not something that is practical at this time.

URL to address in page data?

We have a JavaScript widget which loads data from a URL.
To reduce round-trips I would like to avoid a second HTTP request and put the data into the HTML page.
It would be great if I could leave the JavaScript widget unchanged.
Is there a URL scheme to read data from the current HTML page?
Example: Instead of https://.... this dom://....
No, but you can use data URIs, if that's a feasible approach for you. It's not the best choice for large amounts of data though.
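If the widget only needs a URL it can fetch, one option is to hand it a data URI built from the inline data; a rough sketch (myWidget.load is a placeholder for whatever your widget actually exposes, and whether this works depends on how the widget fetches the URL, e.g. fetch() accepts data: URLs):
var payload = { items: [1, 2, 3] }; // the data you would otherwise serve over HTTP
var url = 'data:application/json;charset=utf-8,' + encodeURIComponent(JSON.stringify(payload));
myWidget.load(url); // the widget reads the "URL" without a second network round-trip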
I'm not sure I have completely understood your needs; zeroflagL's answer may well be the right one, but possibly also read
http://blog.teamtreehouse.com/using-data-uris-speed-website before discarding that option.
Otherwise, although it might take a little adaptation of your JavaScript, consider that HTML5 has a feature called data blocks;
read about it at https://developer.mozilla.org/en/docs/Using_XML_Data_Islands_in_Mozilla.
Leveraging this feature you can reduce round-trips and put one or more datasets into the HTML page, namely into namespaced script blocks like this:
<script id="purchase-order" type="application/xml">
<purchaseOrder xmlns="http://entities.your.own.domain/PurchaseOrderML">
or this
<script id="another-set-of-data" type="application/xml">
<dataSet xmlns="http://entities.your.own.domain/DataSetML">
Your JavaScript can therefore access the data by reading it from the current HTML page. Example:
<script>
function runDemo() {
    // Read the XML data block embedded in the page and parse it into a document.
    var orderSource = document.getElementById("purchase-order").textContent;
    var parser = new DOMParser();
    var doc = parser.parseFromString(orderSource, "application/xml");
    // Query the parsed document using its namespace.
    var lineItems = doc.getElementsByTagNameNS("http://entities.your.own.domain/PurchaseOrderML", "lineItem");
    var firstPrice = lineItems[0].getElementsByTagNameNS("http://entities.your.own.domain/PurchaseOrderML", "price")[0].textContent;
    document.body.textContent = "The purchase order contains " + lineItems.length + " line items. The price of the first line item is " + firstPrice + ".";
}
</script>
I am also an advocate of data URIs, as they are the most transparent (client-code-wise) way to embed data in web pages.
They were, however, first used to embed small images and other resources that would hamper performance due to connection overhead and the parallel-download limitations of HTTP/1. The tradeoff is delicate, since encoding data as data URIs causes a ballpark 30% increase in data size; however, the point where data URIs stop being helpful is around the size of small images, which is usually orders of magnitude above serialized data.
The critical point here for a single page application scenario is that there's more than the single data-fetch roundtrip to consider.
Embedding the data for use by page's scripts on otherwise static HTML has the following implications:
The HTML itself can't be cached (you would need a cached copy for every different set of embedded data and every version of the page).
The (multiple versions of the) entire page must be generated on a server that also has knowledge of how to get the data.
The inlined data might block page rendering for a user-perceivable time (this might be worked around by embedding the data at the end of the HTML, but client script execution would probably have to wait completely, which also makes things like displaying a loading indicator harder to implement).
On the other hand, keeping the data on a separate round trip, despite the round trip itself, would:
Probably keep your already working implementation as it is
Allow for clients to use the cached HTML and scripts which would only need a refresh on an actual version change (there was a failed specification called AppCache for this purpose, now superseded by the experimental Service Workers)
Allow for that HTML and scripts to be fully static assets that can be served from 'dumb' CDNs that are faster and closer to the client browser and don't need to query the database or run any server-side code
All those are big wins in my view, so I recommend you seriously consider whether you need to embed the data, because it could be an early optimization that leads to a lot of pain and an actual decrease in performance! Especially because SPDY and now HTTP/2 are already coming in to address these round-trip and connection-number issues.
You could put the data, whatever it is, in the global window object, and use it later on.
But that requires you to change the code.
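If you do go that way, a minimal sketch might look like this (the global name __WIDGET_DATA__ and myWidget.render are made up for illustration):
<script>
  // Emitted by the server into the page; the global name is a placeholder.
  window.__WIDGET_DATA__ = { "items": [1, 2, 3] };
</script>
<script>
  // The widget code would then need a small change to read the global
  // instead of fetching a URL.
  myWidget.render(window.__WIDGET_DATA__);
</script>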

Help me to understand <script src="some.js?param1=one;param2=two" />

I have sometimes observed chunks like the one below on web pages, so I am curious to know what this really does, and why it's written this way.
<script src="somefile.js?param1=one&param2=two" />
I can only make out the following few intentions behind it:
It's not a page URL (I mean .aspx/.php/.jsp etc.), so it's not the hacking kind of code where someone adds code like this to pass data without getting the user's attention (since a script tag does not render in the UI), or an implementation of an old-style AJAX alternative.
This kind of URL parameter is useful if you do not want the JS file (or any other resource, like an image) to get cached. It can be a quick way to manage caching.
But I am unable to figure out the following:
They look like page URL parameters, but are these parameters readable in the JavaScript file in any way, and do they have some additional utility?
Do these parameters have any extra role to play here?
What are the other possible practical scenarios where code like this can be/is used?
So please provide some input on this.
Thanks,
Running Non-JS Code within .js Extensions
In cases like this, that source .js file might (given proper server-configurations) actually have PHP/.NET code within it, which can read those appended values.
As you said, Avoiding Cache...
Additionally, people will at times append a random string at the end of their referenced elements to avoid loading cached data.
The URL having '.js' means nothing. It could still be handled by a server-side script like an ASP or PHP.
Either the javascript file is not static (it is generated by the server based on the parameters in its querystring)
OR
In the JavaScript file itself, you can have it check its own querystring parameters (not just that of the page, but that of the javascript source url).
OR
(This doesn't exactly match your scenario, but) you can also add parameters at the end of image and script URLs as a way of versioning. The version with the URL "somescript.js?V=3" will be cached by the user until the page changes the URL to "somescript.js?V=4"; the file will then be replaced by the version on the server no matter what the browser's cache settings may be.
My guess (without looking at this specific case) is that the JavaScript file is reading its own query string. I have done this, and it's very helpful.
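A minimal sketch of a script reading its own query string (document.currentScript only works while the script is executing at the top level, so do this during the initial run, not in a callback):
// Inside somescript.js, loaded as <script src="somescript.js?param1=one&param2=two"></script>
var params = new URL(document.currentScript.src).searchParams;
console.log(params.get('param1')); // "one"
console.log(params.get('param2')); // "two"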
Looks like page URL parameters but are these parameters anyway readable in JavaScript file and have some additional utility?
Yes you can read them in JavaScript, Scriptaculous uses that approach for loading modules, eg:
<script type="text/javascript" src="scriptaculous.js?load=effects,dragdrop">
</script>
Do these parameters have any extra role to play here ?
What are the other possible practical scenarios where code like this can be/is used?
That can also be used for server-side script joining and minifying (of course using some URL-rewriting technique to keep the .js extension), and, as you say, it's a common technique to add timestamp parameters to break the browser cache.
It can be used for three different reasons:
1) To generate the JavaScript file in the server depending on the parameters;
2) To avoid caching;
3) To pass parameters to JavaScript itself
An example of this in practice would be a server-side handler for somefile.js that uses the parameters (names of other scripts) to determine which scripts are actually required, then combines/minifies them and returns them as a single somefile.js script file.
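A rough Node/Express sketch of such a handler (the module folder layout and the route are assumptions; a real implementation would add caching, minification and input validation):
var express = require('express'); // npm install express
var fs = require('fs');
var path = require('path');
var app = express();

// GET /somefile.js?load=effects,dragdrop -> concatenation of modules/effects.js and modules/dragdrop.js
app.get('/somefile.js', function (req, res) {
    var names = (req.query.load || '').split(',').filter(Boolean);
    var combined = names.map(function (name) {
        return fs.readFileSync(path.join(__dirname, 'modules', name + '.js'), 'utf8');
    }).join('\n;\n');
    res.type('application/javascript').send(combined);
});

app.listen(3000);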
