Generating code vs file format - JavaScript

So my friend and I had an argument that we couldn't resolve. He is writing a general-purpose web game library and a map editor. The map editor saves maps as XML, but it can also load a Lua script that exports the details of the map into a JavaScript file that looks something like this (he didn't want to post the code, so this is just a snippet with the names changed):
// This probably isn't valid code, but this is the idea of the code generator
(function() {
    Game.Level1 = function (state) {
        GameEngine.Group.call(this);
        var Object0 = new Game.Lo(new GameEngine.Point(654, 975.13), 15, state);
        var slot123 = new GameEngine.TimeSlot(123); // Start
        slot123.addEvent(new GameEngine.Event(Object0, "x", "current", 15, 200));
        ...
The idea is that the game library would just run this code instead of having to parse a map file and generate objects on the fly. And the Lua script in the map editor that generates the code could be modified by anyone who wanted to output code in a different language for a different library (not limited to scripting languages).
I've never heard of this idea; usually I'd expect the map data to be in a standard format like JSON or XML and have the game library parse it.
So, given that his library is written in JavaScript and his map editor can generate JavaScript to load maps, what are the tradeoffs between running the generated code vs parsing JSON/XML and generating objects from that?

In a generic sense, loading the metadata from another script can give the script generator added flexibility in how the data is sent, displayed, etc. For example, you can have complete math expressions, conditionals, etc. as part of the loaded script, and they will be parsed and loaded seamlessly by the script interpreter. It might be harder to do the same thing using XML or JSON (imagine sending an expression to do a Miller Cylindrical Projection via XML, on the fly).
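To make the contrast concrete, here is a hedged sketch (the property name and numbers are made up, not from the friend's code): a generated script can carry a live expression, while JSON can only carry data that the loader must interpret itself.
// Generated script: the engine just calls the function; the math travels with the map.
Game.Level1.spawnX = function (t) {
    return 654 + 15 * Math.sin(t / 200);
};
// JSON equivalent: the expression has to be encoded as data and evaluated by a loader,
// e.g. { "spawnX": { "op": "+", "args": [654, { "op": "*", "args": [15, ...] }] } }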
I've seen many situations where an app creates its own scripting language (MAXScript, MEL for Maya) to provide flexibility to the user. These are probably not exactly analogous to your friend's use of JavaScript to load metadata, but IMHO it is a continuous spectrum, starting from plain-text metadata files, to XML/JSON, to expression parsing, to full-fledged script parsing.
On the other hand, sending complicated scripts means exposing part of your code base. Also, everyone knows what XML does, and you can expect a non-programmer to use or modify XML files; they are comfortable doing it. They may not be comfortable even reading what they think of as 'programs' or 'scripts'. I've seen this first hand in my company, where the artists were uneasy modifying a Lua file but were comfortable modifying the same information in a simple text file. There might be some security issues as well, but I'm really not that familiar with them, so I can't comment.

Related

How can I read a file's metadata in Node.js, beyond what fs.statSync provides, without using a library?

This is a topic where I can't seem to find the answer in the Node.js docs (I know it's possible because of libraries like exif), nor can I find an answer on the internet without everyone saying to just use a library.
I don't want to use a library, so I want to do this natively and learn more about reading file metadata, and maybe eventually updating the metadata too while building my own mini-tool.
If I run something like fs.statSync() I can get the generic metadata returned in the Stats object; but in my case I'm looking for all the other metadata, NOT just basic file info like size, birthtime, etc.
I want the other metadata like dimensions, date taken, and especially things you'd see in image, video, or audio files.
Maybe there's something like:
const deepMetaData = fs.readFileSync().getMetaDataAsString();
console.info(/Date Taken/.test(deepMetaData)); // true
or
const deepMetaData = fs.createReadStream().buffer().toString();
const dateTaken = deepMetaData.match(/Date Taken: (\d{4}-\d{2}-\d{2})/)[1];
console.info(dateTaken);
If I need to work with buffers, streams, whatever, instead of a string output, that's cool too. Ideally something synchronous. So if there's a simple example someone could provide of how to read that kind of meta data without a library, I'll at least be able to look up the methods used from that to understand more later and leverage the docs associated with whatever approach. Thank you!
Nodejs fs functions like fs.statSync() provide OS-level metadata for the file only (such as create date, modification date, file size, etc.). These are properties of the file in the file system; they do NOT have anything to do with the actual data of the file itself.
When you talk about EXIF (for a photo), this is parsed from the file data itself. To know about that type of data, you must read and parse at least the beginning of the file and you must be able to recognize and understand all the different file formats that you might encounter. For photos, this would include JPEG, PNG, HEIC, GIF, etc... Each of those have different file formats and will require unique code for understanding the metadata embedded in the file.
Nodejs does not have support for any of that built-in.
So, it will take custom code for each file type. If you further want to include other types of files like videos, you need to extend the list of file types you can read, parse, and understand. For the breadth of files you're talking about, this is a big job, particularly when it comes to testing against all the different variants of files and metadata that exist out in the wild.
I personally would be fine with implementing my own code for one particular file type like JPEG, but if I was tasked with supporting dozens of types of files and particularly if tasked with supporting the wide range of video file formats, I'd immediately seek out help from existing libraries that have already done all the time consuming work to research, write and test how to properly read and understand all the variants.
I know it's possible because of libraries like exif
This is an example of a library that reads the beginning of the image file, parses it according to the expected format and knows how to interpret all the possible tags that can be in the EXIF header and what they all mean.
So if there's a simple example someone could provide of how to read that kind of meta data without a library
Go study the code for the EXIF library and see how it works. If you're going to implement it yourself, that's how you have to do it. I'm still not sure why you'd avoid using working libraries if they already exist. That is one of the biggest advantages of the nodejs ecosystem - you can build on all the open source code that already exists without reimplementing it all from scratch yourself, and spend your coding time on the parts of your problem that someone else has not already implemented.
how would one read that metadata using node?
You literally have to read the data from the file (usually at the start of the file). You can use any of the mechanisms that the fs module provides. For example, you can use fs.createReadStream(), stream in the file, parsing and interpreting it as the data arrives, and stop the stream once you get past the end of the metadata. Or, you can open a file handle using fs.open() and use fs.read() to read chunks of the file until you have read enough to have all the metadata.
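As a minimal sketch of that streaming approach (the file path is hypothetical, and 64 KB is an arbitrary guess at how much of the file to read before parsing):
const fs = require('fs');

const chunks = [];
// Only ask the OS for the first 64 KB; `end` is an inclusive byte offset.
const stream = fs.createReadStream('/path/to/photo.jpg', { start: 0, end: 64 * 1024 - 1 });
stream.on('data', (chunk) => chunks.push(chunk));
stream.on('end', () => {
    const header = Buffer.concat(chunks);
    // From here you would parse the format-specific structures (JPEG markers,
    // PNG chunks, ...) to locate the embedded metadata.
    console.info(header.length);
});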
You HAVE an example sitting right in front of you of code that does this in the EXIF library on NPM that you already seem to know about. Just go examine its code. The code is ALL there.
I'm just looking for a simple answer on getting that info, even if it's a blob of strings.
This is perhaps your main problem. There is no simple answer to get that info and it doesn't just exist as a blob of strings. These files are sometimes binary files (for space efficiency reasons). You have to learn how to read and parse binary data. Go study the code in the EXIF library and see what it is already doing and you can learn from that. There is no better example to start with.
But, for a simple example using the HEIC file type, this reads the first 5000 bytes of the file into a string, which can then be searched:
const fs = require('fs');

const fileDescriptor = fs.openSync(absPathToHeicPhoto, 'r');
const charCount = 5000;
const buffer = Buffer.alloc(charCount);
const bytesRead = fs.readSync(fileDescriptor, buffer, 0, charCount, 0);
fs.closeSync(fileDescriptor);
const bufferAsStr = buffer.toString('utf8', 0, bytesRead);
// EXIF-style dates look like "2021:06:15", so this reports whether one is present.
console.info(/\d{4}:\d{2}:\d{2}/.test(bufferAsStr));
FYI, I looked at the code for this EXIF library on NPM and it's poorly implemented. It uses fs.readFile() to load the ENTIRE image into RAM, even though it only needs a fraction of the data at the start of the file, which makes it memory and disk inefficient.
But, it does have a method called processImage and one called extractExifData that process the binary data of the file to parse out the EXIF info. Those are in its actual code; you can start learning there.
FYI, as a photographer, I use a command line program called exiftool that will dump EXIF info to stdout or to a file for many image formats. As a different approach, you could just run that tool from your nodejs program (using the child_process module), capture its output, and work with that, letting exiftool do the hard work while you just operate on the generated output.
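A minimal sketch of that approach, assuming exiftool is installed and on the PATH (the image path is hypothetical):
const { execFileSync } = require('child_process');

// Ask exiftool to emit the metadata for one image as JSON, then parse its output.
const output = execFileSync('exiftool', ['-json', '/path/to/photo.jpg'], { encoding: 'utf8' });
const [metadata] = JSON.parse(output);
console.info(metadata.DateTimeOriginal); // e.g. "2021:06:15 12:34:56"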

Why declare variable with json in js file instead of reading json?

Recently I've been working with leaflets, and I'm currently looking at several plugins.
Several of the plugins I've seen (including Leaflet.markercluster) use JSON to plot points, but instead of using an actual stream or a .json file, the programmer includes a .js file in which a variable is set to the JSON data. So the .js file with the JSON data looks like this:
var data = [
{"loc":[41.575330,13.102411], "title":"aquamarine"},
{"loc":[41.575730,13.002411], "title":"black"},
{"loc":[41.807149,13.162994], "title":"blue"},
{"loc":[41.507149,13.172994], "title":"chocolate"}
]
I've been working with other types of JavaScript charts, and most of them read and process a JSON stream.
It seems these plugins will not work if I create a service that returns JSON.
Why not use JSON instead of including a .js file that sets a variable to the JSON data?
I'm not a JavaScript expert, but I find it easier to generate JSON than a JavaScript file with JSON in it.
You are mixing up a couple of concepts.
1st. JavaScript as a language has its own syntax, so if you have a function that expects an object as a parameter and you pass it a Number or a String, it won't give you the property you expect when you try to access it. For example:
function myjson(obj) {
    console.log(obj.prop);
}
myjson(34);            // wrong: logs undefined
myjson("{prop: 123}"); // wrong: a string is not an object, logs undefined
myjson({prop: 123});   // good: will print 123
Now, imagine that you have some scripts, many .js files that you have included in your HTML file, like
<script src="/mycode.js"> </script>
<script src="/myapp.js"> </script>
and you want to add some data, like the plot points you show; then you can include it in one of two ways: putting it in a .js file, or getting it from a service with an AJAX call.
If you add it in a .js file, you'll have access to it directly from your code, like this
var data = [
{"loc":[41.575330,13.102411], "title":"aquamarine"},
{"loc":[41.575730,13.002411], "title":"black"},
{"loc":[41.807149,13.162994], "title":"blue"},
{"loc":[41.507149,13.172994], "title":"chocolate"}
]
console.log(data)
and if you put it in a .json file like this
/mydata.json
[
{"loc":[41.575330,13.102411], "title":"aquamarine"},
{"loc":[41.575730,13.002411], "title":"black"},
{"loc":[41.807149,13.162994], "title":"blue"},
{"loc":[41.507149,13.172994], "title":"chocolate"}
]
you'll have to fetch and parse the data yourself
fetch("/mydata.json").then(async data => {
var myjson = await data.text();
myjson = JSON.parse(myjson);
console.log(myjson) //A Javascript object
console.log(myjson[1]) //The first element
})
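For what it's worth, fetch can also parse the JSON for you via response.json(), which shortens the above to:
fetch("/mydata.json")
    .then((response) => response.json())
    .then((myjson) => console.log(myjson[0]));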
I like #FernandoCarvajal's answer, but I would add more context to it:
JSON is more recent than JS (you could see JSON as a "spin-off" of JS, even though it is now used in combination with many more languages than just JS).
Before JSON was widespread, the main and easiest way to load external data in Browsers was the technique you saw in the plugins demo: assign data into a global variable, which you can use in the next script. Browsers happily execute JS even from cross domain (unless you explicitly specify a Content Security Policy). The only drawback is that you have to agree on a global variable name beforehand. But for static sites (like GitHub pages in the case of the plugins demo you mention), it is easy for the developer(s) to agree on such a convention.
At this stage, you should understand that this simple technique already does the job for the plugins' static demo. It also avoids browser compatibility issues, in line with Leaflet's wide browser compatibility.
With the advent of richer clients / Front-End code, typically with AJAX, we can get rid of the global-variable-name agreement issue, but now we may face cross-domain difficulties, as pointed out by #Barmar's comment. We also start hitting browser compatibility hell.
Now that we can load arbitrary data without having to agree on a static name beforehand, we can leverage Back-End-served dynamic content at a bigger scale.
To work around the cross-domain issue, and before CORS was widespread, we started using JSONP: the Front-End specifies the agreed (callback) name in its request. But in fact we just fall back to a technique similar to point 2, mainly adding asynchronicity (see the sketch at the end of this answer).
Now that we have CORS, we can more simply use AJAX / fetch techniques and avoid the security issues inherent to JSONP. We also replace old-school XML with the more JS-like JSON.
There is nothing preventing you from replacing the old-school technique in point 2 with more modern JSON consumption. If you want to ensure wide browser compatibility, make sure to use libraries that take care of it (like jQuery, etc.). And if you need cross-domain requests, make sure to enable CORS.
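For reference, here is a rough sketch of the JSONP pattern mentioned above; the URL and callback name are illustrative, not from any specific plugin:
// The page defines the agreed-upon callback, then injects a <script> tag whose
// URL tells the server which callback name to wrap the JSON in.
function handleData(data) {
    console.log(data); // the server response, already a JS value
}

var script = document.createElement('script');
script.src = 'https://example.com/points?callback=handleData';
document.head.appendChild(script);

// The server responds with executable JS such as:
//   handleData([{"loc": [41.57, 13.10], "title": "aquamarine"}]);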

Breeze js - how to create an entity from a JSON string and import it into the breeze cache

I am working on a mobile single page site that uses breeze js, angular js, web API, entity framework, etc.
To optimize the site, I included the breeze metadata in a bundled JavaScript that contains all the other JavaScript the site needs. Ideally, all I would like the browser to request is index.html, which should contain everything the app needs to run including bundled and minified inline styles and JavaScript.
However, just as the breeze metadata is very important for the site to function and is therefore embedded in the bundled JavaScript, there is also a required complex entity (with some deep navigation properties to some other entities) that must also be present for the site to be fully functional. I would like to embed this entity and all the entities it references in the bundled JavaScript as well. How can I do this?
I can always create a JSON string that represents this entity and all the entities it references. Then embed that JSON string in the bundled JavaScript along with the rest. However, how can I easily import this complex entity into the breeze entity system using the entity JSON string I have embedded in JavaScript? Or is there a better solution to preload the breeze entity system with a complex entity without having to make a request for that entity from the server?
I would also like to avoid writing server code to spit out JavaScript that creates the entity on the client.
The simplest approach would be to use the "initializer" argument to the EntityManager.createEntity call.
See
http://www.breezejs.com/documentation/creating-entities and http://www.breezejs.com/sites/all/apidocs/classes/EntityManager.html#method_createEntity
The call looks like this:
myEntityManager.createEntity("Employee", { lastName: "Smith", firstName: "John" });
So in your case you could try:
var initialValues = JSON.parse(json);
myEntityManager.createEntity("Employee", initialValues);
Depending on your use case, you may also want to set the 'entityState' of this newly created entity.
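As a sketch of that (assuming the usual breeze global, an "Employee" type in your metadata, and that your Breeze version supports the entityState argument), createEntity accepts an EntityState as a third argument, so an entity that already exists on the server can enter the cache as Unchanged rather than Added:
var initialValues = JSON.parse(json);
var employee = myEntityManager.createEntity(
    "Employee",
    initialValues,
    breeze.EntityState.Unchanged // don't treat it as a new, unsaved entity
);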
Here's a technique I often use to create entity data for automated tests:
Preparation
prime an EntityManager with the entities (and entity graphs) that you want available at launch.
export it as a string with var exported = manager.exportEntities(). The exported string has the metadata embedded in it, so you won't have to bring that down separately. Two-for-one!
capture the contents of exported to a JavaScript file that you load as script in index.html. My "capture" process is usually just to display in the console and scrape it.
Usage
Now when you need it:
load that JavaScript metadata+data file.
create a new EntityManager (remember to target the same dataservice endpoint).
import the entities you captured in your script: manager.importEntities(launchData);.
And you are good to go.
Read up on the EntityManager exportEntities and importEntities methods.
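Put together, a rough sketch of that flow looks like this (the variable names and serviceName are illustrative, and it assumes a manager already primed with the entities you want):
// 1. Ahead of time: export the primed manager's cache (metadata + entities) as a string.
var exported = primedManager.exportEntities();

// 2. Capture that string into a script loaded by index.html, e.g.:
//    var launchData = "...the exported string...";

// 3. At launch: create a fresh manager against the same dataservice and import.
var manager = new breeze.EntityManager(serviceName);
manager.importEntities(launchData);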
Example
One place you can see a variation on this technique is in the test directory of the "Zza-Node-Mongo".
I personally do not combine the data with the metadata so I export using the "no metadata" option. I put the metadata in one script and the data-to-load-on-launch (lookups usually) in a separate script and load both in index.html.
Caution
You say
to optimize the site, I included the breeze metadata in a bundled JavaScript that contains all the other JavaScript the site needs. Ideally, all I would like the browser to request is index.html, which should contain everything the app needs to run including bundled and minified inline styles and JavaScript.
Beware of premature optimization
I rather doubt that you will measurably improve the launch time of the app by embedding metadata and launch data in script files. Maybe some of the time, if the browser caches these scripts. But that comes with its own risks and isn't a reliable strategy.
The data you want has to come over the wire to the client one way or another. It isn't self-evident that loading a script file - even a minimized script file - is any faster than pulling the metadata and launch data (both gzipped) down from the server via a web api AJAX call.
The techniques I described do speed up testing because I have to recreate metadata and the launch data before each test. I can measure the performance gain from avoiding repeated trips to the server. I gain nothing for the first trip ... which is the equivalent of your application launch.
Be mentally prepared to discover that your hard earned optimization efforts did not improve launch times ... and might even make them worse for some users.

Parsing XML in a Web Worker

I have been using a DOMParser object to parse a text string to an XML tree. However it is not available in the context of a Web Worker (and neither is, of course, document.ELEMENT_NODE or the various other constants that would be needed). Is there any other way to do that?
Please note that I do not want to manipulate the DOM of the current page. The XML file won't contain HTML elements or anything of the sort. In fact, I do not want to touch the document object at all. I simply want to provide a text string like the following:
<car color="blue"><driver/></car>
...and get back a suitable tree structure and a way to traverse it. I also do not care about schema validation or anything fancy. I know about XML for <SCRIPT>, which many may find useful (hence I'm linking to it here), however its licensing is not really suitable for me. I'm not sure if jQuery includes an XML parser (I'm fairly new to this stuff), but even if it does (and it is usable inside a Worker), I would not include an extra ~50K lines of code just for this function.
I suppose I could write a simple XML parser in JavaScript, I'm just wondering if I'm missing a quicker option.
According to the spec:
The DOM APIs (Node objects, Document objects, etc) are not available to workers in this version of this specification.
I guess that's why DOMParser is not available, but I don't really understand why that decision was made (fetching and processing an XML document in a Web Worker does not seem unreasonable).
But you can import other available tools, e.g. a "Cross Platform XML Parsing in JavaScript" library.
At this point I'd like to share my parser: https://github.com/tobiasnickel/tXml
With its tXml() method you can parse a string into an object, and it takes only 0.5 kB minified + gzipped.
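As a rough sketch of using such a parser inside a worker (the file name and exact parser API are assumptions based on the description above, not verified against the library):
// worker.js
importScripts('tXml.min.js'); // pull the parser into the worker's global scope

self.onmessage = function (e) {
    // e.data is an XML string such as '<car color="blue"><driver/></car>'
    var tree = tXml(e.data); // plain objects/arrays, so it can be posted back
    self.postMessage(tree);
};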

What are benefits of serving static HTML and generating content with AJAX/JSON?

https://urbantastic-blog.tumblr.com/post/81336210/tech-tuesday-the-fiddly-bits/amp
Heath from Urbantastic writes about his HTML generation system:
All the HTML in Urbantastic is completely static. All dynamic data is sent via AJAX in JSON format and then combined with the HTML using Javascript. Put another way, the server software for Urbantastic produces and consumes JSON exclusively. HTML, CSS, Javascript, and images are all sent via a different service (a vanilla Nginx server).
I think this is an interesting model as it separates presentation from data physically. I am not an expert in architecture but it seems like there would be a jump in efficiency and stability.
However, the following concerns me:
[subjective] Clojure is extremely powerful; JavaScript is not. Writing all the content generation in a language created for other goals will create some pain (imagine writing JavaScript-style code in CSS). Unless he has a macro system for generating JavaScript, Heath is probably in for constant switching between JavaScript and Clojure. He'll also have a lot of JS code, probably a lot more than Clojure. That might not be good in terms of power, rapid development, succinctness and all the things we look for when switching to LISP-based languages.
[performance] I am not sure about this, but rendering everything on the user's machine might lag.
[accessibility] If you have JS disabled you can't use the site at all.
[accessibility#2] I suspect that a lot of dynamic data filling with JavaScript will create cross-browser issues.
Can anyone comment? I'd be interested in reading your opinions on this type of architecture.
References:
Link to discussion on HN.
Link to discussion on /r/programming.
"All the HTML in Urbantastic is completely static. All dynamic data is sent via AJAX in JSON format and then combined with the HTML using Javascript."
I think that's the standard model of an RIA. The emphasized word seems to be 'All' here, because on many websites a lot of the dynamic content is still not obtained through Ajax; only key features are.
I don't think the rendering issues would be a major bottleneck if you don't have a huge webpage with a lot of elements.
JS accessibility is indeed a problem. But then, users who want to experience AJAX must have JS enabled. Have you done a survey on how many of YOUR users don't have it enabled?
The advantage is that you can serve 99% (by weight) of the content through a CDN (like Akamai) or even put it on external storage (e.g. S3). When the server only has to serve the JSON, it's almost impossible for a site to get slashdotted.
When AJAX began to hit it big, in late 2005, I wrote a client-side template engine and basically turned my Blogger template into a full-fledged AJAX experience.
The thing is, that template stuff, it was really easy to implement and it eliminated a lot of the grunt work.
Here's how it was done.
<div id="blogger-post-template">
<h1><span id="blogger-post-header"/></h1>
<p><span id="blogger-post-body"/><p>
<div>
And then in JavaScript:
var response = // <- AJAX response
var container = document.getElementById("blogger-post-template");
if (!template) { // `template` lives in an enclosing scope so the clone is reused
    template = container.cloneNode(true); // deep clone
}
// clear the container
while (container.firstChild) {
    container.removeChild(container.firstChild);
}
container.appendChild(instantiate(template, response));
The instantiate function makes a deep clone of the template then searches the clone for identifiers to replace with data found in the response. The end result is a populated DOM tree which was originally defined in HTML. If I had more than one result I just looped through the above code.
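The answer doesn't show instantiate itself, but a rough sketch of what such a function could look like is below (the mapping of span ids to response fields is an assumption based on the markup above):
function instantiate(template, response) {
    var clone = template.cloneNode(true); // deep clone of the cached template
    // Fill each placeholder span with the matching field from the AJAX response.
    clone.querySelector("#blogger-post-header").textContent = response.title;
    clone.querySelector("#blogger-post-body").textContent = response.body;
    return clone;
}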
