I have recently begun to work with Node.js after a long time working with PHP, and I'm wondering if there is anything similar to 'echo': a way of sending the data in parts, allowing me to send my own values in between. For example, writing the structure for a section of a website in HTML, then using Node to send the relevant data inside the div.
The short answer is "yes". You can repeatedly write to the response object and then call end on it when you are done.
response.setHeader("Content-Type", "text/plain");
response.write("foo");
response.write("bar");
response.write("baz");
response.end();
However, that generally isn't the approach taken for good reason.
Separating your concerns (e.g. splitting "data fetching" and "HTML generation") usually makes things more manageable.
This is why the most common way to build pages server-side with Node.js is to collect all the data for the page into an object and then pass it into a template.
You can, of course, split the data collection logic up into functions in their own modules so you can end up with something like:
const my_data = {
    header: get_header_data(request),
    content: get_template_name_data(request),
};
response.render('template_name', my_data);
With as much division into smaller chunks as you like.
For some background, I have a number of enterprise systems management products that run a REST API called Redfish. I can visit the root folder on one such device by going here:
https://ip.of.device/redfish/v1
If I browse the raw JSON from a GET request, the output includes JSON data that looks like this. I'm truncating it due to how long it is, so there may be some JSON syntax errors here.
{
    "Description": "Data",
    "AccountService": {
        "#odata.id": "/redfish/v1/AccountService"
    },
    "CertificateService": {
        "#odata.id": "/redfish/v1/CertificateService"
    }
}
Perhaps in my searching I'm using the wrong terminology, but each of those #odata.id items is basically a 'folder' I can navigate into. Each folder has additional data values, but also still more folders. I can capture the contents of folders I know about via JavaScript and parse the JSON easily enough, but there are hundreds of folders here, some multiple layers deep, and from one device to the next the folders are sometimes different.
Due to the size and dynamic nature of this data, is there a way to either recursively query this from the API itself, or recursively 'scrape' an API's #odata.id 'folder' structure like this using JavaScript? I'm tempted to write a bunch of nested queries in foreach loops, but there's surely a better way to accomplish this.
My eventual goal is to perform this from Node.js, parse the data, then present it in a web form for the user to select which fields to keep; we'll store those in a MongoDB database, along with the path to the original data, for more targeted API queries later.
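To make it concrete, here is a rough sketch of the recursive walk I'm imagining (assuming Node 18+ for the global fetch; the '#odata.id' key is copied from the sample above, and authentication/certificate handling is omitted):
const visited = new Set();

async function crawl(baseUrl, path, tree = {}) {
    if (visited.has(path)) return tree; // don't walk the same folder twice
    visited.add(path);
    const response = await fetch(baseUrl + path);
    const json = await response.json();
    for (const [key, value] of Object.entries(json)) {
        if (value && typeof value === 'object' && value['#odata.id']) {
            // a 'folder': recurse into it
            tree[key] = await crawl(baseUrl, value['#odata.id']);
        } else {
            tree[key] = value; // a plain data value
        }
    }
    return tree;
}

crawl('https://ip.of.device', '/redfish/v1').then(console.log);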
Thanks in advance for any suggestions.
I have an HTML file with some JavaScript and CSS applied to it.
I would like to duplicate that file, making something like file1.html, file2.html, file3.html, ...
All of that using JavaScript, jQuery or something like that!
The idea is to create a different page (from that kind of template) that will be printed afterwards with different data in it (from an XML file).
I hope it is possible !
Feel free to ask for more details if you want!
Thank you all in advance
Note: I do not want to copy only the content but the entire file.
Edit: I know I should use a server-side language, I just don't have the option ):
There are a couple of ways you could go about implementing something similar to what you are describing. Which implementation you should use depends on exactly what your goals are.
First of all, I would recommend some sort of template system such as VueJS, AngularJS, or React. However, given that you say you don't have the option of using a server-side language, I suspect you won't have the option to implement one of these systems.
My next suggestion would be to build your own 'templating system'. A simple implementation that may suit your needs could be something like the following:
In your primary file (the root file), which you want to route or copy the other files through, you could use JS to include the correct HTML files. For example, you could have JS conditionally load a file depending on certain circumstances by putting something like the following after a conditional statement:
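For instance (a sketch: the condition and file names are placeholders, and read() is the vanilla-JS loader defined a bit further down):
if (userWantsPageTwo) { // hypothetical condition
    read('pages/page2.html'); // placeholder paths
} else {
    read('pages/page1.html');
}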
Note that while doing this could optimize your server's data usage (as it would only serve required files, not everything all the time), it would also probably increase loading times: your site would need to wait for the additional HTTP request to complete and for the requested content to load and render on the client. While this sounds very slow, it has the potential of not being that bad if you don't have too many discrete requests and none of your code is unusually large or computationally expensive.
If using vanilla JS, the following snippet will illustrate the above:
In a script that comes loaded with your routing file:
function read(text) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', text);
    xhr.onload = show; // runs once the file has been fetched
    xhr.send();
}
function show() {
    var text = this.response;
    document.body.innerHTML = text; // you can replace document.body with whatever element you want to wrap your imported HTML
}
read('path/to/file/on/server');
Note a couple of things about the above code. If you are testing on your computer (i.e. opening your HTML file in a browser with a path like file://__) without a local server, you will get some sort of cross-origin request error when trying to make an XMLHttpRequest. To bypass this error, either test your code on an actual server (constantly pushing code isn't ideal, I know) or, preferably, set up a local testing server. If this is something you would want to explore, it's not that difficult to do; let me know and I'd be happy to walk you through the process.
Alternatively, you could implement the above loading system with jQuery and the .load() function. http://api.jquery.com/load/
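With it, the whole read/show pair above collapses to a one-liner (the selector and path are placeholders):
$('#content').load('path/to/file.html'); // fetches the file and injects it into #content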
If none of the above solutions work for you, let me know more specifically what it is that you need, and I'll be happy to give a more useful/relevant answer!
I am working on a JavaScript library that is supposed to be used by different applications targeting various platforms such as Web/iOS/Android.
In it, the application would need to make REST calls to retrieve and parse large, multilevel XML data (with namespaces). I'm planning to use jQuery here.
The large XML can contain data that may not be useful at times.
For example, depending on the locale setting in the application that uses this library, only a subset of the metadata needs to be parsed.
<meta>
    <data>
        ...
    </data>
    <component local="EN-US">
        ...
    </component>
    <component local="EN-UK">
        ...
    </component>
    <component local="EN-AU">
        ...
    </component>
</meta>
So, parsing all the data received in the Ajax response seems to be a waste of effort.
Keeping this in mind, I wonder if the following is a good way to approach this problem:
1. Store the responseXML received in jQuery's Ajax success callback.
2. When the application requests the data for, say, "EN-US", parse the appropriate sub-nodes from the stored responseXML document and return the result.
(If the user changes the locale setting, no problem: we can go and parse the appropriate section of the XML data.)
I do understand that parsing on the fly will add a delay to the application, but it seems to me that overall application performance will be better: there is no need to parse all the data upfront and then use only a portion of it.
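To make the approach concrete, here is roughly what I have in mind (the /meta URL is a placeholder, and namespace handling is omitted for brevity):
var cachedXml; // step 1: the raw XML document, stored from the success callback

$.ajax({
    url: '/meta', // placeholder endpoint
    dataType: 'xml',
    success: function(xml) {
        cachedXml = xml; // keep the whole document around
    }
});

// step 2: pull out only the requested subtree on demand
function componentFor(locale) {
    return $(cachedXml).find('component[local="' + locale + '"]');
}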
Please share your thoughts/ideas.
EDIT: The server is controlled by a different team with different priorities, so changing the server-side code to return only the appropriate data is not an option at the moment.
Thanks in advance,
Why don't you let the server return only the data that the client needs at the moment, or at least only the data for the selected locale? Thinking of mobile users, you will hurt performance and use expensive bandwidth for data that the user doesn't need.
If the user changes the locale, you can make a new request and let the server return data for the new locale. This will improve performance on the client and will also save bandwidth on the server...
Edit: Also, you cannot parse only some subnodes of an XML document and skip the others. To reach a specific node in the XML, you have to parse all the nodes that come before it, so you might as well process the data only when you really need it.
https://urbantastic-blog.tumblr.com/post/81336210/tech-tuesday-the-fiddly-bits/amp
Heath from Urbantastic writes about his HTML generation system:
All the HTML in Urbantastic is completely static. All dynamic data is sent via AJAX in JSON format and then combined with the HTML using Javascript. Put another way, the server software for Urbantastic produces and consumes JSON exclusively. HTML, CSS, Javascript, and images are all sent via a different service (a vanilla Nginx server).
I think this is an interesting model, as it physically separates presentation from data. I am not an expert in architecture, but it seems like there would be a jump in efficiency and stability.
However, the following concerns me:
[subjective] Clojure is extremely powerful; JavaScript is not. Writing all the content generation in a language created for other goals will create some pain (imagine writing JavaScript-style code in CSS). Unless he has a macro system for generating JavaScript, Heath is probably in for constant switching between JavaScript and Clojure. He'll also have a lot of JS code, probably a lot more than Clojure. That might not be good in terms of power, rapid development, succinctness, and all the things we look for when switching to Lisp-based languages.
[performance] I am not sure about this, but rendering everything on the user's machine might lag.
[accessibility] If you have JS disabled, you can't use the site at all.
[accessibility #2] I suspect that a lot of dynamic data filling with JavaScript will create cross-browser issues.
Can anyone comment? I'd be interested in reading your opinions on this type of architecture.
References:
Link to discussion on HN.
Link to discussion on /r/programming.
"All the HTML in Urbantastic is completely static. All dynamic data is sent via AJAX in JSON format and then combined with the HTML using Javascript."
I think that's the standard model of an RIA. The emphasized word seems to be 'All' here, because on many websites a lot of the dynamic content is still not obtained through Ajax; only key features are.
I don't think the rendering issues would be a major bottleneck if you don't have a huge webpage with a lot of elements.
JS accessibility is indeed a problem. But then, users who want to experience AJAX must have JS enabled. Have you done a survey on how many of YOUR users don't have it enabled?
The advantage is that you can serve 99% (by weight) of the content through a CDN (like Akamai) or even put it on external storage (e.g. S3). Serving only the JSON, it's almost impossible for a site to get slashdotted.
When AJAX began to hit it big in late 2005, I wrote a client-side template engine and basically turned my Blogger template into a fully fledged AJAX experience.
The thing is, that template stuff was really easy to implement, and it eliminated a lot of the grunt work.
Here's how it was done.
<div id="blogger-post-template">
    <h1><span id="blogger-post-header"></span></h1>
    <p><span id="blogger-post-body"></span></p>
</div>
And then in JavaScript:
var response = // <- AJAX response
var template; // keep this in an outer scope so the clone survives between calls
var container = document.getElementById("blogger-post-template");
if (!template) { // template context: capture the pristine template on first run
    template = container.cloneNode(true); // deep clone
}
// clear container
while (container.firstChild)
    container.removeChild(container.firstChild);
container.appendChild(instantiate(template, response));
The instantiate function makes a deep clone of the template, then searches the clone for identifiers to replace with data found in the response. The end result is a populated DOM tree that was originally defined in HTML. If I had more than one result, I just looped through the above code.
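A minimal sketch of what that instantiate function looked like, give or take (it assumes the response keys match the span ids, e.g. response["blogger-post-header"]):
function instantiate(template, response) {
    var clone = template.cloneNode(true); // deep clone; leave the template untouched
    // find each identifier in the clone and fill it with the matching response data
    for (var key in response) {
        var placeholder = clone.querySelector('#' + key);
        if (placeholder) placeholder.textContent = response[key];
    }
    return clone;
}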
I'm working on an AJAXy project (Dojo and Rails, if the particulars matter). There are several places where the user should be able to sort, group, and filter results. There are also places where a user fills out a short form and the resulting item gets added to a list on the same page.
The non-AJAXy implementation works fine: the view layer server-side already knows how to render this stuff, so it can just do it again in a different order or with an extra element. This, however, adds a lot of burden to the server.
So we switched to sending JSON from the server and doing lots of (re-)rendering client-side. The downside is that now we have duplicate code for rendering every page: once in Rails, which was built for this, and once in Dojo, which was not. The latter is basically just string concatenation.
So question part one: is there a good Javascript MVC framework we could use to make the rendering on the client-side more maintainable?
And question part two: is there a way to generate the client-side views in Javascript and the server-side views in ERB from the same template? I think that's what the Pragmatic Programmers would do.
Alternatively, question part three: am I completely missing another angle? Perhaps send JSON from the server but also include the HTML snippet as an attribute so the Javascript can do the filtering, sorting, etc. and then just insert the given fragment?
Well, every time you generate HTML snippets on the client and on the server, you may end up with duplicated code; there is no good way around it in general. But you can do two things:
Generate everything on the server. Use AHAH when you need to generate HTML snippets dynamically: basically, you ask the server to generate an HTML fragment, receive it asynchronously, and plug it in place using innerHTML or any other similar mechanism (a sketch follows the next option).
Generate everything on the client (AKA the thick-client paradigm). In this case, even for the initial rendering, you pass data instead of pre-rendered HTML and process the data client-side using JavaScript to make HTML. Depending on the situation, you can use the data island technique or request data asynchronously. Variant: include it as a <script> using JSONP so the browser will make the request for you while loading the page.
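For illustration, a bare-bones AHAH sketch of the first option (the URL and element id are placeholders):
function loadFragment(url, targetId) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = function() {
        // plug the server-rendered snippet straight into the page
        document.getElementById(targetId).innerHTML = xhr.responseText;
    };
    xhr.send();
}

loadFragment('/fragments/user-list', 'user-list');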
Both approaches are very simple and have different sets of pros and cons. Sometimes it is possible to combine both techniques within one web application for different parts of the data.
Of course you can go for exotic solutions, like using some JavaScript-based server-side framework. In this case you can share the code between the server and the client.
I don't have a complete answer for you; I too have struggled with this on a recent project. But here is a thought:
Ajax call to Rails
Rails composes the entire grid again, with the new row.
Rails serializes HTML, which is returned to the browser.
Javascript replaces the entire grid element with the new HTML.
Note: I'm not a Rails person, so I'm not sure if those bits fit.
Has anyone tried something like the following? There's redundant data now, but all the rendering is done server-side and all the interaction is done client-side.
Server Side:
render_table_initially:
    if nojs:
        render big_html_table
    else:
        render empty_table_with_callback_to_load_table

load_table:
    render '{ "rows": [
        { "foo": "a", "bar": "b", "renderedHTML": "<tr><td>...</td></tr>" },
        { "foo": "c", "bar": "d", "renderedHTML": "<tr><td>...</td></tr>" },
        ...
    ]}'
Client side:
dojo.xhrGet({
    url: '/load_table',
    handleAs: 'json',
    load: function(response) {
        dojo.global.all_available_data = response; // keep everything for client-side sorting/filtering
        dojo.forEach(response.rows, function(row) {
            insert_row(row.renderedHTML);
        });
    }
});
Storing the all_available_data lets you do sorting, filtering, grouping, etc. without hitting the server.
I'm only cautious because I've never heard of it being done. It seems like there's an anti-pattern lurking in there...
"Perhaps send JSON from the server but also include the HTML snippet as an attribute so the Javascript can do the filtering, sorting, etc. and then just insert the given fragment?"
I recommend keeping the presentation layer on the client and simply downloading data as needed.
For a rudimentary templating engine, you can extend Prototype's Template construct:
http://www.prototypejs.org/api/template
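For example (assuming Prototype is loaded; the markup string and data are made up):
var rowTemplate = new Template('<tr><td>#{name}</td><td>#{price}</td></tr>');
rowTemplate.evaluate({ name: 'Apple', price: '0.40' });
// -> "<tr><td>Apple</td><td>0.40</td></tr>"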
As your client scales and you need a rich and flexible MVC, try PureMVC.
http://puremvc.org/content/view/102/181/
As in regular server-side programming, you should strive to encapsulate your entities with controls; in your case, client-side controls that have data properties and also render methods + events.
So, for example, let's say you have an area on the page that shows a tree of employees. Effectively, you should encapsulate its behavior in a client-side class that can receive a JSON object of the employee list (or by default connect to the service), with a render method to render the output, events, and so on.
In ExtJS it is relatively easy to create these kinds of controls - look at this article.
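Framework aside, a bare sketch of the idea (the service URL and markup are placeholders):
function EmployeeTree(container, serviceUrl) {
    this.container = container;   // the DOM element this control owns
    this.serviceUrl = serviceUrl; // placeholder service endpoint
    this.data = null;             // data property: the JSON employee list
}
EmployeeTree.prototype.load = function() {
    var self = this;
    var xhr = new XMLHttpRequest();
    xhr.open('GET', this.serviceUrl);
    xhr.onload = function() {
        self.data = JSON.parse(xhr.responseText);
        self.render(); // re-render whenever fresh data arrives
    };
    xhr.send();
};
EmployeeTree.prototype.render = function() {
    // turn this.data into markup; real rendering and event wiring elided
    this.container.innerHTML = '<ul><li>' + this.data.length + ' employees</li></ul>';
};

// usage: new EmployeeTree(document.getElementById('employees'), '/employees.json').load();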
Maybe I'm not fully understanding your problem but this is how I would solve your requirements:
Sort, Filter
It can all be done in JavaScript, without hitting the server. It's a matter of manipulating the table rows: move rows for sorting, hide rows for filtering; there is no need for re-rendering. You will need to mark columns with their data type, using custom attributes or extra class names, in order to be able to parse numbers or dates.
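For instance, a sort could look something like this (a sketch; it assumes you pass in the column's data type read from those attributes):
function sortTable(table, columnIndex, type) {
    var tbody = table.tBodies[0];
    var rows = Array.prototype.slice.call(tbody.rows);
    rows.sort(function(a, b) {
        var x = a.cells[columnIndex].textContent;
        var y = b.cells[columnIndex].textContent;
        if (type === 'number') return parseFloat(x) - parseFloat(y);
        if (type === 'date') return new Date(x) - new Date(y);
        return x.localeCompare(y); // plain text comparison
    });
    // re-appending moves the existing rows into sorted order; nothing is re-rendered
    rows.forEach(function(row) { tbody.appendChild(row); });
}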
If your reports are paginated I think it's easier and better to refresh the whole table or page.
Group
I can't help here, as I don't understand how you will enable grouping. Are you switching columns to show accumulates? How do you show data that doesn't support accumulates, like text or dates?
New item
This time I would use AJAX with a JSON response to confirm that the new item was correctly saved, possibly along with calculated data.
In order to add the new item to the results table, I would clone a row and alter the values. Again, we don't need to render anything client-side. The last time I needed something like this, I added an empty hidden row at the end of the results; this way you always have a row to clone, even when the results are empty.
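A sketch of that cloning trick (the class names and fields are placeholders):
function addRow(table, item) {
    var template = table.querySelector('tr.hidden-template'); // the empty hidden row
    var row = template.cloneNode(true);
    row.className = ''; // make the clone visible
    row.querySelector('.name').textContent = item.name;
    row.querySelector('.total').textContent = item.total; // e.g. server-calculated data
    template.parentNode.insertBefore(row, template); // keep the template at the end
}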
Number 5 in my list of five AJAX styles tends to work pretty well.