How to compress JSON files into JavaScript

I am making a browser game (client side only) and I am trying to make it smaller (meaning file sizes), which is the first step towards a mobile version. I have minified the CSS using LESS, the JS using uglify, and the Angular templates using grunt-angular-templates. So at this moment I am loading a very small number of files:
index.html
app.js
app.css
images.png (one file with all images)
But the remaining problem is the JSON data files. There are (or will be) many levels, and each level has its own JSON data file. There are also some rule definitions etc. The problem is that these JSON files are loaded dynamically when needed.
I am now trying to find a way to get these files (at build time, probably with some grunt task) into one file, or even better, directly into app.js. I have no problem writing a PHP script + JS class that would do this, but I first tried to find an existing solution.
Does anybody know about something like that, or is there another solution that I am not thinking about? Thanks for any help.
====
EDIT:
1) The point of this is getting rid of X requests and making one request (or zero) for the JSON files.
2) The compiled thing does not have to be JSON at all. Part of my idea:
JsonManager.add('path/to/json/file.json', '{"json":"content of file"}');
Writing all these lines manually is a bad idea; I was asking whether there is anything that could do this job for me.
3) Ideally I am looking for a solution similar to what the grunt-angular-templates task does with HTML templates (minifies them and adds them to app.js using Angular's $templateCache).
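Roughly, I imagine a custom Grunt task along these lines (this is only a sketch; the task name jsonbundle, the output path and the JSON_DATA constant are placeholders I made up):
// Gruntfile.js
module.exports = function (grunt) {
  grunt.registerMultiTask('jsonbundle', 'Bundle JSON files into one JS file', function () {
    var bundle = {};
    this.filesSrc.forEach(function (filepath) {
      // parsing and re-stringifying also minifies the JSON
      bundle[filepath] = JSON.parse(grunt.file.read(filepath));
    });
    var js = 'angular.module("app").constant("JSON_DATA", ' + JSON.stringify(bundle) + ');';
    grunt.file.write('build/json-data.js', js);
  });

  grunt.initConfig({
    jsonbundle: {
      all: { src: ['data/**/*.json'] }
    }
  });
};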

Say you have two JSONs: {"a":1} and {"b":2}.
You cannot simply concatenate them into one chunk, as together they would not be valid JSON, e.g. {"a":1}{"b":2} is not valid JSON. You can do this with JS and CSS, but not with JSON.
The only option is to include them in a larger structure:
[
{"a":1},
{"b":2}
]
If your code structure allows this, then you can use any existing JS compressor/uglifier to compress the result.

For anybody who has the same problem as me:
I gave up on finding an existing solution and made my own:
The solution
I wrote a PHP script that iterates over the files in the data directory and lists all JSON files. It also minifies their contents and builds one big array, with relative file names as keys and the JSON content of the files as values. It then creates a .js file in which this big array is encoded as JSON again and assigned to a JavaScript variable (a module constant in my case - Angular).
I created a wrapper class, which serves this data as if it were files, e.g.:
var data = dataStorage.getData('levels/level01.json'); // returns JSON content of file located at path/to/data/files/levels/level01.json but without any AJAX call or something
I used grunt-shell to automate running this PHP script.
I added the resulting .js file to the list of files that are minified by uglify (and concatenated together).
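A minimal sketch of what the generated file and the wrapper look like in spirit (the constant name JSON_DATA, the paths and the example contents here are illustrative, not my exact code):
// generated .js file: one big map of relative path -> JSON content
angular.module('app').constant('JSON_DATA', {
  "levels/level01.json": { "name": "Level 1", "moves": 10 },
  "rules/base.json": { "gravity": true }
});

// wrapper that serves the bundled data as if it were files
angular.module('app').factory('dataStorage', ['JSON_DATA', function (JSON_DATA) {
  return {
    getData: function (path) {
      return JSON_DATA[path]; // no AJAX call needed
    }
  };
}]);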
The result:
I can create any number of JSON files in any structure and reference them from JS code using that wrapper class, and no AJAX calls are fired.
I decreased the number of files needed at startup (but increased app.js size a bit, which is better than a second request).
Thanks for your ideas and help. Hope this also helps someone

Related

How do I check if there is a duplicate file in a folder, without comparing file names using glob/listdir/etc..?

I have a folder that contains several images, the directory structure looks like this:
./images/
./images/1.png
./images/2.png
./images/3.png
./images/4.png
./images/{n}.png
These images have been downloaded and saved using the request and fs modules by a script called update.js.
Each file is named after the number of items in the folder (i.e. length + 1).
The update.js script downloads (and saves) each image, regardless of whether or not it exists.
I can get around this by deleting the images folder but this is a waste of resources.
What's the most efficient way to prevent this behaviour?
NOTE: I can't use a simple file name check since the names are indexes.
Thanks.
You can issue an HTTP HEAD request for each file and read its headers. Then you can see how big the target file is and avoid re-downloading it if the size matches exactly.
This isn't ideal though, as different files may have the same size.
Some servers give you a Content-MD5 header, which would probably be the best. The MD5 is unlikely to match between any two of your files unless your collection is very large.
You would be better served by just fixing the script so it stores proper metadata; all this is quite hacky :). You can store the real file names and modified timestamps in another file in a sibling directory and be fairly sure it won't affect anything. Then you can just check those before doing a download.
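A rough sketch of the HEAD-request idea, assuming the script really does use the request module as described (the function name downloadIfNew and the arguments are placeholders):
var fs = require('fs');
var request = require('request');

function downloadIfNew(url, localPath, done) {
  request.head(url, function (err, res) {
    if (err) return done(err);
    var remoteSize = parseInt(res.headers['content-length'], 10);
    var localSize = fs.existsSync(localPath) ? fs.statSync(localPath).size : -1;
    if (remoteSize === localSize) {
      // same size: assume we already have it (not bulletproof, as noted above)
      return done(null, false);
    }
    request(url).pipe(fs.createWriteStream(localPath))
      .on('finish', function () { done(null, true); });
  });
}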

Load Random Image From Directory on Page Load Without a Listed Array of File Names

I've done some looking around on the site, and every time I pull up a solution to this problem, one of the requirements is to have a naming convention and a list of every image to pull from the directory (example: image1.jpg, image2.jpg, etc.). All of the file names are different and there are thousands of them to pick from, so listing each one as a random opportunity in an array is not going to work.
I typically use CMS services and I'm writing this webpage from scratch in Notepad in an attempt to better my coding skills... and I'm not sure where to begin. I'm decent with HTML and CSS, but jQuery and JavaScript are not my friends haha.
Thank you for any help! (Even if it's just pointing me to a tutorial or a solution I could not find!!!)
Are all file names image1, image2, image3, etc.? Then you could try to generate a random number, create a new img element, set its source to image+randomnumber.jpg and append it to the DOM.
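For example, if the files really are numbered and you know how many there are, a sketch like this would do (the count of 40 and the images/ folder are assumptions):
var totalImages = 40; // assumption: you know how many numbered images exist
var n = Math.floor(Math.random() * totalImages) + 1; // random integer 1..totalImages
var img = document.createElement('img');
img.src = 'images/image' + n + '.jpg';
document.body.appendChild(img);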
One of the main problems you're facing here is how content is delivered: in a standalone static website you do not have access to the file system. If we want to query things outside of the browser's context we are not allowed to, and obviously without being able to access directories we cannot generate a list of file names to load.
If you're wondering why we can't access the file system directly from, say, the JavaScript, it's because of the sandbox that most modern browsers live in; otherwise people could attack your native directories from the front-end languages. Your question is interesting because Electron removes this sandboxing in a sophisticated manner, which is necessary as it's used for building desktop apps with Chromium.
These days the most obvious solution would be to use some form of back-end language and create a web server that has direct access to the directories around it. Node, PHP, Go and many other popular back-end languages can read a directory of files and then interpolate those into the front-end code, which is the most common method.
The other popular method at the moment is to create an API, which is just a web server with a queryable endpoint that runs code on the server and returns a list of such items. You could then, for instance, take the items and print them out using JavaScript.
Directory functions reference in PHP:
http://php.net/manual/en/ref.dir.php
List contents of directory in nodejs:
https://code-maven.com/list-content-of-directory-with-nodejs
The easiest route to understanding more would be to start with a back-end language, either Node or PHP, with PHP being the simpler of the two.
https://www.w3schools.com/php/
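To make the API idea above concrete, here is a rough Node sketch of an endpoint that returns the file names in an images directory as JSON (the /api/images route, the folder name and the port are assumptions). The front end can then fetch that list and pick a random entry from it.
var fs = require('fs');
var http = require('http');
var path = require('path');

http.createServer(function (req, res) {
  if (req.url === '/api/images') {
    fs.readdir(path.join(__dirname, 'images'), function (err, files) {
      if (err) { res.writeHead(500); return res.end(); }
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(files)); // e.g. ["photo-a.jpg","photo-b.png"]
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);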
First you need to get your file list from the server side. Then you can use code like the following:
var imageList = []; // your image list as an array of URLs
var imageNumber = Math.floor(Math.random() * imageList.length); // random index within imageList
var imageToLoad = new Image();
imageToLoad.addEventListener("load", function(){
  console.log("image has loaded");
  $('#my-container').append(this); // here 'this' is the loaded image DOM element
});
imageToLoad.src = imageList[imageNumber];
This will add the image to a container with the id 'my-container'. It's just an example; you can do anything you want with 'this'.
So after much help and guidance from the community, I have figured out the answer! To clarify my process in extreme detail, here is what I did to achieve the desired outcome:
Create the page as a .php file instead of a .html file (in my case, index.php). If you are using Notepad to create the file, make sure you change the file extension to .php, the encoding to UTF-8, and the save file type to "All Files". As I understand it, PHP can pick the file at random but cannot pass this info to a static HTML page.
Place this block of code into the webpage where the image should show. Currently, it is set up to reference a folder named "images" in the root directory (i.e. mysite.com/images/). This can be changed by modifying the text between the apostrophes after $imagesDir. All other HTML markup on the page will work correctly as long as it is outside of the PHP code block.
Code Block:
<?php
$imagesDir = 'images/';
$images = glob($imagesDir . '*.{jpg,jpeg,png,gif}', GLOB_BRACE);
$randomImage = $images[array_rand($images)];
echo "<img src='$randomImage'>";
?>
Thank you #bardizba for the code! Although there may be less resource-intensive ways to write this, my situation was a bit different because the file names in the directory did not follow a naming convention and there was a mix of file types (jpg, gif, etc.).
Thanks again to everyone that helped me out!

Adjusting included js scripts in Magento in page.xml

I am trying to clean out some dead js includes, but am not having any luck. I have deleted the lines from page.xml, for example:
<action method="addJs"><script>custom/custom.js</script></action>
And have checked local.xml as well to ensure the lines are not there. But the page is still showing these files included and being loaded (I have not deleted the actual js files yet). I am not sure if I need to delete references in other places? If anyone can point me in the right direction, it would be much appreciated. Please let me know if I can provide anything else to help. Thanks!
Try clearing your cache and checking again; you can do this via the admin panel.
JavaScript can be included on any page from literally any Layout XML file (provided that Layout XML file is being parsed by Magento). It can also be included in any template file.
Magento has a very complicated layout/theme hierarchy with tons of files, some of them parsed and rendered, most of them not parsed or rendered.
It makes it difficult to try to guess or intuit where/how a given piece of HTML is being rendered, so in situations like this I usually search for references to the file in the design folder:
cd /magento/document/root
grep -Ri 'custom.js' app/design
Assuming you're on Linux (or most other Unix-like operating systems), this should list every Layout XML file (or phtml template) referencing that dead JS file (custom.js). Then you just go to those files and remove the offending XML nodes (or script elements).

One JavaScript File Per Page or Combine when using Jquery and Document Ready Function

OK, so I know it always depends on the situation, but I have, thus far, combined my jQuery files/plugins into a single compressed file.
Now I am wondering what I should do with my page-specific js/jQuery code. Should I have a single file with one document.ready function and my entire site's JS code inside of it? Or split it up into separate JS files per page with a document ready call in each?
These files will include things such as .click handlers and other jQuery code specific to certain pages.
What's the best practice here to optimize load times and maintainability?
One way to do it would be to use require.js and then have a mapping of page types to files. Give each body tag an ID and use it to reference which files should be loaded in.
<body id="pageName">
Keep your global files to everything you need for the core functionality to work, and then lazy-load the features that aren't required, so your site runs faster. I've seen huge speed improvements from this technique.
http://requirejs.org/
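A minimal sketch of that idea, assuming RequireJS is already on the page (module names, paths and the page IDs are placeholders):
// main.js - loaded via <script data-main="js/main" src="js/require.js"></script>
require.config({
  paths: { jquery: 'lib/jquery' } // assumption: jQuery lives at js/lib/jquery.js
});

require(['jquery'], function ($) {
  // core behaviour needed on every page goes here

  // lazy-load page-specific modules based on the body id
  var pageModules = {
    home:    ['home'],
    contact: ['contact'],
    faq:     ['faq']
  };
  var page = document.body.id; // e.g. <body id="home">
  if (pageModules[page]) {
    require(pageModules[page]);
  }
});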
We can do this in multiple ways; I did it in the following way.
Aggregate your files broadly as follows:
1) Aggregate all the files required by every page.
2) Aggregate the files specific to each page.
Include the common aggregated file on all pages, and include the other aggregated files conditionally per page (a sketch of the resulting script tags is below):
1) jquery and other plugins common to all pages // goes on all pages
2) homepage-aggregation // for the homepage
3) gallerypage-aggregation // for the gallery page
If you include the same file on every page, it may carry code that is not necessary for all of them.
I did this recently; let me know if you need anything else.
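The sketch mentioned above: every page gets the common bundle, and each page additionally pulls in only its own bundle (common-aggregation.js is a placeholder name; the page-specific names follow the list above):
<!-- on every page -->
<script src="/js/common-aggregation.js"></script>
<!-- only on the homepage -->
<script src="/js/homepage-aggregation.js"></script>
<!-- only on the gallery page -->
<script src="/js/gallerypage-aggregation.js"></script>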
Because you're almost certain to want different things executed in the document.ready function depending on what page you're on, I don't think that having one function executed on every page is helpful.
Personally I mix my $.ready calls in with my HTML. These are simple calls to functions stored in a single, minified JavaScript file, so they don't take up too many bytes and prevent the need for a separate JavaScript file per page. It also allows me to initiate the JavaScript where I create the markup, so it's all in one place.
If you're minifying your JavaScript and serving it with the correct headers, you've got most of the benefits already; don't compromise readability more than you have to.
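A small sketch of that pattern: one minified file defines named init functions and each page calls only the ones it needs next to its markup (the site namespace and function names are made up):
// site.min.js - shared by every page
window.site = window.site || {};

site.initContactForm = function () {
  $('#contact-form').on('submit', function (e) {
    e.preventDefault();
    // contact-page behaviour goes here
  });
};

site.initSearchBox = function () {
  $('#search').on('keyup', function () {
    // search behaviour goes here
  });
};

// then, next to the markup on the contact page:
// <script>$(function () { site.initContactForm(); });</script>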
It also depends on the server-side technology you are using; you may find tools to assist you with this task. If you are coding a Java server side, you may try JAWR. It allows you to keep separate JS/CSS files and merge and compress them server-side, turning all the separate files into a single file.
About document.ready, I prefer to keep page-specific code in separate files, avoiding incorrect code execution and behavior. It is also cleaner and easier to maintain.

Javascript - To combine or not combine, that is the question

OK, so I know it's the obvious move to combine all of a page's JavaScript into a single external file for efficiency purposes, but that's not quite the question here.
Say I have Default.htm with a search field that has a little Javascript magic attached to it. Then I have Contact.htm with a contact form that has some Javascript magic attached to it. And finally I have a FAQ.htm with some jQuery panels showing the answers... you get the picture.
Basically I have three pages that all have "some" javascript need, but none of the Javascript is used on any other pages.
Is it better to combine all of that Javascript into one big minified file that loads once and is then stored in Cache, or is it better to use an individual Javascript file on the Default page, but not use it on the Contact page... etc?
What works best in this scenario?
Option: 1
Default.htm
jquery.js
default.js
Contact.htm
jquery.js
contact.js
Faq.htm
jquery.js
faq.js
Option: 2
Default.htm
jquery-default-contact-faq-min.js
Contact.htm
jquery-default-contact-faq-min.js
Faq.htm
jquery-default-contact-faq-min.js
PS: for all you asp.net guys, I'm using Combres to Combine, Minify, and Version my Javascript files
I would definitely vote to combine them. If you are concerned about the parse or setup time of the "not used" JavaScript, then I would recommend structuring your JavaScript with each file in a closure, and then running only the closures you need on the pages that need them. For example:
// File 1
window.closures = window.closures || {}
window.closures["page1"] = (function() {
// Javascript for Page 1
});
// File 2
window.closures = window.closures || {}
window.closures["page2"] = (function() {
// Javascript for Page 2
});
// File 3
window.closures = window.closures || {}
window.closures["page3"] = (function() {
// Javascript for Page 3
});
Then, in your page:
<!-- This one combined.js file will be downloaded once and cached //-->
<script type="text/javascript" src="combined.js"></script>
<script>
// Run the Javascript in your combined.js intended for page2
window.closures["page2"]()
</script>
Combine them into 1 file and let it get cached. It loads once on any page, and any subsequent pages can use the cached copy.
Because it doesn't sound like there's a lot of JavaScript, combining it into one file would be better. Only if there were significant amounts of JavaScript that don't need to be loaded unless a user visits a particular page would you consider keeping the files separate.
It is always an act of balancing the number of HTTP requests against transferring bytes that are not really needed yet.
There are three possibilities:
combine everything in 1 file
have three separate files, and load them as needed
have three separate files, load the one needed for that page right away and preload the others (when the time is right)
You will only know what is best for your situation by doing some A-B load testing.
Everything depends on the size of the transferred data, the overlap of needed functionality and the probability that some functionality is needed.
If the combined file is under, say, 25kb minified, then go for it. But if it is more than that, I'd say identify the biggest one of them and let that one JS file be separate. Combine the rest. That 25kb limit thingy is not a hard rule either; it is up to you.
If your individual files are each in the magnitude of, say, 30kb, I'd recommend not combining them, and letting them be cached as individual JS files.
Hope that helps.
