Using JSON in XUL <template>s

As far as I can tell, the template feature in XUL doesn't allow you to load JSON data into your listbox/tree/etc. element; it only supports XML and RDF. The closest thing I found to an indication that it might someday support JSON is the comments on this blog post from 2007, saying that a bug had been filed. But the bug in question is marked RESOLVED FIXED, and JSON is still not supported. So I guess my options are:
1. Get the data I need in XML, and display it using templates.
2. Get the data in JSON, and display it by direct DOM manipulation.
3. Use one of these third-party templating solutions.
So my question is, am I correct that templates don't support JSON? If not, where is that feature documented? If I am correct, what should I consider when choosing among the above three options?

It turns out that writing your own custom object that implements nsITreeView is a lot simpler than I expected, and makes everything seem nice and fast.
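For reference, here is a minimal sketch of such a view; the data array, its field names, and the tree/column ids are assumptions for illustration, while the member list follows the nsITreeView interface:
var data = [{ title: "aquamarine" }, { title: "black" }]; // parsed JSON records

var treeView = {
  rowCount: data.length,
  // look up the cell text by the <treecol> id, assumed to match the JSON keys
  getCellText: function (row, column) { return data[row][column.id]; },
  setTree: function (treebox) { this.treebox = treebox; },
  isContainer: function (row) { return false; },
  isSeparator: function (row) { return false; },
  isSorted: function () { return false; },
  getLevel: function (row) { return 0; },
  getImageSrc: function (row, col) { return null; },
  getRowProperties: function (row, props) {},
  getCellProperties: function (row, col, props) {},
  getColumnProperties: function (colid, col, props) {},
  cycleHeader: function (col) {}
};

// hook the view up to a <tree id="my-tree"> element
document.getElementById("my-tree").view = treeView;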

I'm not sure about JSON in XUL templates; however, I'd suggest option 2, given the ease with which JSON can be used within the browser.
From Firefox 3.5, you can just do
var obj = JSON.parse(xhr.responseText);
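For option 2, a sketch of what that might look like end to end, assuming a hypothetical data.json endpoint that returns an array of objects with label and value fields:
var xhr = new XMLHttpRequest();
xhr.open("GET", "data.json", true);
xhr.onload = function () {
  var items = JSON.parse(xhr.responseText);
  var listbox = document.getElementById("my-listbox");
  for (var i = 0; i < items.length; i++) {
    // XUL listboxes expose appendItem(label, value)
    listbox.appendItem(items[i].label, items[i].value);
  }
};
xhr.send();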

Related

Why declare a variable with JSON in a .js file instead of reading the JSON?

Recently I've been working with Leaflet, and I'm currently looking at several plugins.
Several of the plugins I've seen (including Leaflet.markercluster) use JSON to plot points, but instead of using the actual stream, or a .json file, the programmer includes a JavaScript .js file where a variable is set to the JSON data. So the .js file with the JSON data looks like this:
var data = [
{"loc":[41.575330,13.102411], "title":"aquamarine"},
{"loc":[41.575730,13.002411], "title":"black"},
{"loc":[41.807149,13.162994], "title":"blue"},
{"loc":[41.507149,13.172994], "title":"chocolate"}
]
I've been working with other types of JavaScript charts, and most of them read and process a JSON stream.
It seems these plugins will not work if I create a service that returns JSON.
Why not use JSON instead of including a .js file that sets a variable to a JSON stream?
I'm not a JavaScript expert, but I find it easier to generate JSON than a JavaScript file with JSON in it.
You are mixing up a couple of concepts.
1st. JavaScript as a language has its own syntax, so if you have a function that receives an object as a parameter and you pass it a Number or a String, it won't behave as expected when you try to access some property (you'll get undefined instead of your data). For example:
function myjson (obj) {
  console.log(obj.prop)
}
myjson(34); // wrong: a Number has no prop property, prints undefined
myjson("{prop: 123}") // wrong: a String, not an object; prints undefined
myjson({prop: 123}) // good, will print 123
Now, imagine that you have some scripts, many .js files that you have included in your HTML file, like
<script src="/mycode.js"> </script>
<script src="/myapp.js"> </script>
And you want to add some data, like the plot points you showed; then you can include it in one of two ways: putting it in a .js file, or getting it from a service with an AJAX call.
If you add it in a .js file, you'll have access to it directly from your code, like this:
var data = [
{"loc":[41.575330,13.102411], "title":"aquamarine"},
{"loc":[41.575730,13.002411], "title":"black"},
{"loc":[41.807149,13.162994], "title":"blue"},
{"loc":[41.507149,13.172994], "title":"chocolate"}
]
console.log(data)
and if you put it in a .json file, like this:
/mydata.json
[
{"loc":[41.575330,13.102411], "title":"aquamarine"},
{"loc":[41.575730,13.002411], "title":"black"},
{"loc":[41.807149,13.162994], "title":"blue"},
{"loc":[41.507149,13.172994], "title":"chocolate"}
]
you'll have to fetch and parse the data yourself
fetch("/mydata.json").then(async data => {
var myjson = await data.text();
myjson = JSON.parse(myjson);
console.log(myjson) //A Javascript object
console.log(myjson[1]) //The first element
})
I like @FernandoCarvajal's answer, but I would add more context to it:
1. JSON is more recent than JS (you could see JSON as a "spin-off" of JS, even though it is now used in combination with many more languages than just JS).
2. Before JSON was widespread, the main and easiest way to load external data in browsers was the technique you saw in the plugins demo: assign the data to a global variable, which you can use in the next script. Browsers happily execute JS even from a cross-domain source (unless you explicitly specify a Content Security Policy). The only drawback is that you have to agree on a global variable name beforehand. But for static sites (like GitHub Pages in the case of the plugins demo you mention), it is easy for the developer(s) to agree on such a convention.
At this stage, you should understand that this simple technique already fits the job for the sake of the plugins' static demo. It also avoids browser compatibility issues, in line with Leaflet's wide browser compatibility.
3. With the advent of richer clients / Front-End, typically with AJAX, we can get rid of the global variable name agreement issue, but now we may face cross-domain difficulties, as pointed out by @Barmar's comment. We also start hitting browser compatibility hell.
4. Now that we can load arbitrary data without having to agree on a static name beforehand, we can leverage Back-End served dynamic content at a bigger scale.
5. To work around the cross-domain issue, and before CORS was widespread, we started using JSONP: the Front-End specifies the agreed (callback) name in its request. But in fact we just fall back to a technique similar to the one in point 2, mainly adding asynchronicity (a sketch follows below).
6. Now that we have CORS, we can more simply use AJAX / fetch techniques and avoid the security issues inherent to JSONP. We also replace the old-school XML with the more JS-like JSON.
There is nothing preventing you from replacing the old-school technique in point 2 with a more modern JSON consumption. If you want to ensure wide browser compatibility, make sure to use libraries that take care of it (like jQuery, etc.). And if you need cross-domain requests, make sure to enable CORS.
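For illustration, a minimal JSONP round trip might look like this; the endpoint URL and the handleData callback name are hypothetical:
// Front-End: the agreed callback name travels in the query string
function handleData(points) {
  console.log(points); // e.g. [{loc: [41.57, 13.10], title: "aquamarine"}, ...]
}
var script = document.createElement("script");
script.src = "https://example.com/points?callback=handleData";
document.head.appendChild(script);

// The server is expected to respond with JS that invokes the callback:
// handleData([{"loc":[41.575330,13.102411],"title":"aquamarine"}, ...]);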

Parsing RSS in JS without a third-party service (vanilla JS or Angular)

I want to retrieve an RSS feed in JS.
I searched the web for a whole day, and found that nearly everybody uses the Google Feed API, the Yahoo API, or a nodejs/php page for the processing and JSONification. And I don't want to depend on a service like the Google Feed API.
My goal is to fetch an RSS feed, and then create an array where each article in the feed is an object, in pure JavaScript.
I'm using AngularJS, so if the help could use the benefits of this lib, that would be great, but I'm not closed to any vanilla-JS code if needed.
For those who may want to ask why: it is for a Firefox OS application, and that's why I can't have any php/nodejs. Everything has to be done in JS.
Thanks,
Tom
What is the problem with fetching the XML structure directly?
I think a regular AJAX request using the systemXHR permission should work fine for you.
Then you'll be able to get whatever you need from the XML in any way you like.
So my best guess would be to just use the normal DOM parser, and then query the document:
var parser = new DOMParser();
var xmlDoc = parser.parseFromString(txt, "text/xml"); // txt is the XML string from your request
I think nowadays you can also use things like querySelectorAll to quickly iterate over the document, similar to normal DOM. E.g. something like this would work:
[].forEach.call(xmlDoc.querySelectorAll('item'), function(item) {
console.log(item.querySelector('title').textContent);
});
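Putting the two together, here is a sketch of the full flow for a Firefox OS app; the feed URL is hypothetical, and the mozSystem flag assumes the systemXHR permission is declared in the app manifest:
var xhr = new XMLHttpRequest({ mozSystem: true }); // cross-origin XHR for privileged apps
xhr.open("GET", "https://example.com/feed.rss", true);
xhr.onload = function () {
  var xmlDoc = new DOMParser().parseFromString(xhr.responseText, "text/xml");
  // build one plain object per <item>, as asked in the question
  var articles = [].map.call(xmlDoc.querySelectorAll("item"), function (item) {
    return {
      title: item.querySelector("title").textContent,
      link: item.querySelector("link").textContent
    };
  });
  console.log(articles);
};
xhr.send();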
The short answer is that you can't fetch and parse XML feeds on the client without using a 3rd-party service, because of the browser's Same Origin Policy.
From there, there are 2 options:
fetch and parse on the server side. You'll have to do all the grunt work yourself, but then you can easily load the data from the browser, because it will be under the same domain and hence the Same Origin Policy won't apply (see the sketch after this list);
compromise on your requirement to not use a 3rd party, and use one that transforms the XML feeds into JSON to circumvent the SOP.
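For the first option, a minimal server-side proxy sketch; the feed URL is hypothetical, and Node.js is just used for illustration (any server stack would do):
// proxy.js: fetches the feed server-side so the browser only talks to your own origin
const http = require("http");
const https = require("https");

http.createServer(function (req, res) {
  https.get("https://example.com/feed.rss", function (feed) {
    res.writeHead(200, { "Content-Type": "text/xml" });
    feed.pipe(res); // relay the XML from under your own domain
  });
}).listen(8080);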
In both cases, I suggest you check Superfeedr (which I created!), which I believe can help a lot... we also have an Angular module for feeds.
Thanks to the people who took the time to answer me :)
It appears it is not really possible without any server-side computing.
I have to confess that I'm pretty lucky, because the service I wanted to call has just released a new API, so happy end for me :)
Thanks everybody!

Parsing XML in a Web Worker

I have been using a DOMParser object to parse a text string to an XML tree. However it is not available in the context of a Web Worker (and neither is, of course, document.ELEMENT_NODE or the various other constants that would be needed). Is there any other way to do that?
Please note that I do not want to manipulate the DOM of the current page. The XML file won't contain HTML elements or anything of the sort. In fact, I do not want to touch the document object at all. I simply want to provide a text string like the following:
<car color="blue"><driver/></car>
...and get back a suitable tree structure and a way to traverse it. I also do not care about schema validation or anything fancy. I know about XML for <SCRIPT>, which many may find useful (hence I'm linking to it here); however, its licensing is not really suitable for me. I'm not sure if jQuery includes an XML parser (I'm fairly new to this stuff), but even if it does (and it is usable inside a Worker), I would not include an extra ~50K lines of code just for this one function.
I suppose I could write a simple XML parser in JavaScript, I'm just wondering if I'm missing a quicker option.
According to the spec:
The DOM APIs (Node objects, Document objects, etc.) are not available to workers in this version of this specification.
I guess that's why DOMParser is not available, but I don't really understand why that decision was made (fetching and processing an XML document in a Web Worker does not seem unreasonable).
But you can import other available tools, like "Cross Platform XML Parsing in JavaScript".
At this point I'd like to share my parser: https://github.com/tobiasnickel/tXml
With its tXml() method you can parse a string into an object, and it is only 0.5kB minified + gzipped.
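To use it in a worker, something like the following should work; this sketch assumes the bundled txml.min.js script defines a global tXml() function, as the project README describes:
// worker.js
importScripts("txml.min.js"); // workers can pull in scripts with importScripts()

self.onmessage = function (e) {
  var tree = tXml(e.data); // e.g. '<car color="blue"><driver/></car>'
  self.postMessage(tree);  // the result is plain objects, so it can be posted back
};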

How to access JSON from an iframe originating from same domain?

In my webpage a hidden iframe is loaded with some JSON in it. This JSON is refreshed by some actions on the page. How do I access this JSON in the iframe from my web page? For some unknown, arcane, unexplainable reason I am forced to use jQuery 1.3.2, so no $.parseJSON().
I think you can use:
var json = $.parseJSON($("#hiddeniframe").contents().text());
Something along those lines will work at least.
All modern browsers include a JSON parsing library:
var data = JSON.parse($("#hiddeniframe").contents().text());
If you need to support older browsers there are several libraries to choose from that will provide the same interface. The better ones will check to see if the browser is providing a native implementation and not override it, since it's bound to be faster.
See also JSON.stringify()
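As a sketch of the feature-detection pattern those libraries use (json2.js, for instance, only installs itself when no native JSON object exists):
if (typeof JSON === "undefined" || typeof JSON.parse !== "function") {
  // no native implementation: load or define a fallback (e.g. json2.js) here
}
var data = JSON.parse($("#hiddeniframe").contents().text());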
The code @Paulpro posted:
var json = $.parseJSON($("#hiddeniframe").contents().text());
doesn't work for me.
I changed the code to:
var json = $.parseJSON($("#hiddeniframe").contents().find("*").first().text());
And now it works.

What are benefits of serving static HTML and generating content with AJAX/JSON?

https://urbantastic-blog.tumblr.com/post/81336210/tech-tuesday-the-fiddly-bits/amp
Heath from Urbantastic writes about his HTML generation system:
All the HTML in Urbantastic is completely static. All dynamic data is sent via AJAX in JSON format and then combined with the HTML using Javascript. Put another way, the server software for Urbantastic produces and consumes JSON exclusively. HTML, CSS, Javascript, and images are all sent via a different service (a vanilla Nginx server).
I think this is an interesting model as it separates presentation from data physically. I am not an expert in architecture but it seems like there would be a jump in efficiency and stability.
However, the following concerns me:
[subjective] Clojure is extremely powerful; JavaScript is not. Writing all the content generation in a language created for other goals will create some pain (imagine writing JavaScript-style code in CSS). Unless he has a macro system for generating JavaScript, Heath is probably in for constant switching between JavaScript and Clojure. He'll also have a lot of JS code, probably a lot more than Clojure. That might not be good in terms of power, rapid development, succinctness and all the things we look for when switching to LISP-based languages.
[performance] I am not sure about this, but rendering everything on the user's machine might lag.
[accessibility] If you have JS disabled you can't use the site at all.
[accessibility#2] I suspect that a lot of dynamic data filling with JavaScript will create cross-browser issues.
Can anyone comment? I'd be interested in reading your opinions on this type of architecture.
References:
Link to discussion on HN.
Link to discussion on /r/programming.
"All the HTML in Urbantastic is completely static. All dynamic data is sent via AJAX in JSON format and then combined with the HTML using Javascript."
I think that's the standard model of an RIA. The emphasized word seems to be 'All' here, because on many websites a lot of the dynamic content is still not obtained through Ajax; only key features are.
I don't think the rendering issues would be a major bottleneck if you don't have a huge webpage with a lot of elements.
JS accessibility is indeed a problem. But then, users who want to experience AJAX must have JS enabled. Have you done a survey on how many of YOUR users don't have it enabled?
The advantage is, you can serve 99% (by weight) of the content through a CDN (like Akamai) or even put it on external storage (e.g. S3). Serving only the JSON, it's almost impossible for a site to get slashdotted.
When AJAX began to hit it big in late 2005, I wrote a client-side template engine and basically turned my Blogger template into a fully fledged AJAX experience.
The thing is, that template stuff was really easy to implement, and it eliminated a lot of the grunt work.
Here's how it was done.
<div id="blogger-post-template">
<h1><span id="blogger-post-header"/></h1>
<p><span id="blogger-post-body"/><p>
<div>
And then in JavaScript:
var response = // <- AJAX response
var container = document.getElementById("blogger-post-template");
if (!template) { // template context
template = container.cloneNode(true); // deep clone
}
// clear container
while(container.firstChild)
container.removeChild(template.firstChild);
container.appendChild(instantiate(template, response));
The instantiate function makes a deep clone of the template, then searches the clone for identifiers to replace with data found in the response. The end result is a populated DOM tree that was originally defined in HTML. If I had more than one result, I just looped through the above code.
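A sketch of what such an instantiate function might look like; the id lookups mirror the template markup above, but the response field names are assumptions, not the original code:
function instantiate(template, response) {
  var clone = template.cloneNode(true); // deep clone keeps the cached template pristine
  // swap each identifier placeholder for the matching field of the AJAX response
  var header = clone.querySelector("#blogger-post-header");
  if (header) header.textContent = response.header;
  var body = clone.querySelector("#blogger-post-body");
  if (body) body.textContent = response.body;
  return clone;
}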
