Is it possible to pretty-print output JSON in hapi-swagger? - javascript

I'm using hapi-swagger (https://github.com/glennjones/hapi-swagger) to generate Swagger docs for my API. It also lets me try out the API calls from the browser, which is great. The problem is that the results of the API calls come back in a compact format: large JSON structures on a single line. It makes sense for the API to produce output like this, but I would like the Swagger web client to pretty-print it for me.
I note that regular Swagger seems to be able to do this, based on my reading of this issue (https://github.com/swagger-api/swagger-core/issues/810), but I'm unsure whether it's possible to pass along options that get the same behavior in hapi-swagger.
Is it possible, and if so, how do I do it?
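For context, pretty-printing a JSON value in plain JavaScript only takes the third argument of `JSON.stringify`; the open question above is whether the Swagger UI shipped with hapi-swagger can be configured to do the equivalent:

```javascript
// The same object, compact vs. pretty-printed.
var payload = { id: 42, tags: ["a", "b"], nested: { ok: true } };

var compact = JSON.stringify(payload);          // one long line
var pretty = JSON.stringify(payload, null, 2);  // indented with 2 spaces

console.log(compact);
console.log(pretty);
```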

Related

How to load large JSON file into javascript

Firstly, thanks for taking the time to view my post. I am working on a project with a few people. We basically have a webpage, and once you log in, it displays all the data from a MySQL database that has 6 tables, 3 of which contain data. We figured out that in order to go about this, we need to transfer the data by exporting it into JSON file(s), then load the JSON files into JavaScript so it can communicate with the web server. We were wondering what would be the best way to go about this. One way we found is to reference the JSON data as variables and just list everything out, but several of our files have loads and loads of data. Would there be an easier approach?
This would be the first time we are doing something like this, so we are learning and appreciate your feedback!!
If you're using some kind of AJAX library, you can easily request the .json file from the browser.
For example with jQuery:
$.post("ourJSONFile.json", function (data) {
    // ...logic to display the data...
});
NOTE: The comment about not using PHP had not been posted while I was writing this. Until I know why there can be no PHP, I'm just going to leave this answer here.
I think you're going about this the wrong way. There is no need for JSON here; just query your database for the information.
The overview of what you need to do (in PHP, or whatever language you use to talk to your server) is:
connect to the database
query the database for information (a query, in your case, more or less means getting the information)
do something with the returned information (such as echoing it in PHP so it goes to the user)
Now, it doesn't look like much effort has been put into getting this done, only planning. So, I'll just show you a few links to read up on. (This is also why people have downvoted your question; Stack Overflow doesn't like questions posted with no effort to research answers.)
I'd use php.net for looking up methods, such as the mysqli_query method. This is very useful for learning small but important things about the method, like what it returns when an error occurs.
http://php.net/manual/en/mysqli.query.php
Taking a quick look through this guide, I think it should suffice. Besides syntax and such, the other important points are to use MySQLi (when using PHP 5 or higher) and to use prepared statements.
http://www.pontikis.net/blog/how-to-use-php-improved-mysqli-extension-and-why-you-should
The point of prepared statements is to protect your queries from injection. w3schools gives a good explanation of this at http://www.w3schools.com/sql/sql_injection.asp
MySQLi is MySQL Improved; it is more secure and better supported. The old MySQL extension has been deprecated (as of PHP 5.5), and the php.net page for any mysql_* function actually says so at the very top.
Finally, Andrew mentioned AJAX. AJAX is (for example) a way of doing things that would normally require reloading the page, without reloading the page. There is more to it than that, and I would recommend looking into it once you get used to the languages you are using.
Note that AJAX does not require a library; it can be done with pure JavaScript. Libraries simply (as seen above) simplify AJAX a lot.
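As a sketch of what library-free AJAX looks like (the file name is a placeholder, matching the jQuery example above), the same request can be made with a bare XMLHttpRequest:

```javascript
// Fetch and parse a JSON file without any library.
// "ourJSONFile.json" is a placeholder path.
function loadJSON(url, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onload = function () {
        if (xhr.status >= 200 && xhr.status < 300) {
            callback(null, JSON.parse(xhr.responseText));
        } else {
            callback(new Error("HTTP " + xhr.status));
        }
    };
    xhr.onerror = function () { callback(new Error("network error")); };
    xhr.send();
}

// Browser-only demo call (XMLHttpRequest does not exist outside the browser).
if (typeof XMLHttpRequest !== "undefined") {
    loadJSON("ourJSONFile.json", function (err, data) {
        if (err) { return console.error(err); }
        // ...logic to display the data...
        console.log(data);
    });
}
```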

What is the best way to handle i18n in an Angular app with dynamic translations

So I have been tasked with i18n on a new Angular app we are creating. I already know how to implement it if the translations are stored in JSON format on the client. However, I have been told I cannot implement it like this, as the translations will be updated on a regular basis by the client, so they will have to be fetched from the API.
I have also been told that I cannot map directly to the JSON response; instead I must create TypeScript objects which sit between the JSON and the UI.
What is the best way to achieve this? The header has a dropdown for languages. Do I need to call all the languages when the application loads and cache them, or do I just call each language as I need to? Do I translate only what I see on screen or does the entire app need to be translated?
do I just call each language as I need to
Definitely only the ones you need.
Do I translate only what I see on screen or does the entire app need to be translated
The relevant portions get redirected to an in-memory i18n Angular service that caches the results.
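A minimal, framework-agnostic sketch of such a caching service; `fetchLanguage` is a hypothetical stand-in for the real API call that returns a promise of `{ key: translation }` for one language:

```javascript
// Hypothetical in-memory translation cache. fetchLanguage(lang) stands in
// for the HTTP call that returns a promise of one language's translations.
function TranslationService(fetchLanguage) {
    this.fetchLanguage = fetchLanguage;
    this.cache = {}; // lang -> promise of { key: translation }
}

// Hits the API only the first time a language is requested;
// every later lookup is served from the cache.
TranslationService.prototype.getLanguage = function (lang) {
    if (!this.cache[lang]) {
        this.cache[lang] = this.fetchLanguage(lang);
    }
    return this.cache[lang];
};

TranslationService.prototype.translate = function (lang, key) {
    return this.getLanguage(lang).then(function (table) {
        return table[key] || key; // fall back to the key itself
    });
};
```

In a real Angular app this would be an injectable service wrapping `$http` (or `HttpClient`), but the caching idea is the same.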

Parsing RSS in JS without a third-party service (vanilla JS or Angular)

I want to retrieve an RSS feed in JS.
I searched the web for a whole day, and found that nearly everybody uses the Google Feed API, the Yahoo API, or a Node.js/PHP page for the processing and conversion to JSON. I don't want to depend on a service like the Google Feed API.
My goal is to fetch an RSS feed and then create an array where each article in the feed is an object, in pure JavaScript.
I'm using AngularJS, so if the answer can take advantage of this lib, that would be great, but I'm not closed to vanilla-JS code if needed.
For those who may want to ask why: it is for a Firefox OS application, and that's why I can't have any PHP/Node.js. Everything has to be done in JS.
Thanks,
Tom
What is the problem with fetching the XML structure directly?
I think a regular AJAX request using the systemXHR permission should work fine for you.
Then you'll be able to get whatever you need from the XML in any way possible.
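A sketch of what that request could look like, assuming a Firefox OS app with the systemXHR permission declared in its manifest (the feed URL is a placeholder):

```javascript
// Firefox OS: with the "systemXHR" permission, a cross-origin request
// is allowed by creating the XHR with the mozSystem flag.
function fetchFeed(url, callback) {
    var xhr = new XMLHttpRequest({ mozSystem: true });
    xhr.open("GET", url, true);
    xhr.onload = function () {
        var xmlDoc = new DOMParser().parseFromString(xhr.responseText, "text/xml");
        callback(null, xmlDoc);
    };
    xhr.onerror = function () { callback(new Error("request failed")); };
    xhr.send();
}

// fetchFeed("https://example.com/feed.rss", function (err, doc) { /* ... */ });
```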
So my best guess would be to just use a normal DOM parser, and then query the document:
var parser = new DOMParser();
var xmlDoc = parser.parseFromString(txt, "text/xml"); // txt holds the raw XML of the feed
I think nowadays you can also use things like querySelectorAll to quickly iterate over the document, just as with the normal DOM. E.g. something like this would work:
[].forEach.call(xmlDoc.querySelectorAll('item'), function(item) {
console.log(item.querySelector('title').textContent);
});
The short answer is that you can't fetch and parse XML feeds on the client without using a 3rd-party service, because of the browser's Same-Origin Policy.
From there, there are 2 options:
fetch and parse on the server side. You'll have to do all the grunt work yourself, but then you can easily load the data from the browser, because it will be under the same domain and hence the Same-Origin Policy won't apply
compromise on your requirement to not use a 3rd party, and use one that transforms the XML feeds into JSON to circumvent the SOP.
In both cases, I suggest you check Superfeedr (which I created!), which I believe can help a lot... we also have an Angular module for feeds.
Thanks to the people who took the time to answer me :)
It appears it is not really possible without any server-side computing.
I have to confess that I'm pretty lucky, because the service I wanted to call has just released a new API, so happy end for me :)
Thanks everybody!

Piping/streaming JavaScript objects in Node.js

I'm trying to wrap my head around Node.js streams, as I'm pretty new to JavaScript and Node; the last languages I really got were Perl and PHP :-D
I've read the Buffer/Streams documentation at nodejs.org, watched James Halliday at LXJS, read his stream-handbook and Thorsten Lorenz's event-stream post. I'm starting to understand the basics :)
I process data which is serialized in RDF (which is neither JSON nor XML). I manage to fetch the data (in the real code via request) and parse it into a JS object using the rdfstore module.
So far I do this:
fs.createReadStream('myRDFdata.ttl').pipe(serialize()).pipe(process.stdout);
Where serialize() does the job of parsing and serializing the code at the same time right now. I use the through module to interface with the stream.
Now I have some more methods (not the real function declaration but I hope you get the point):
getRecipe(parsedRDF) -> takes the parsed RDF (as a JavaScript object) and tells me how to use it
createMeal(parsedRDF, recipe) -> takes the parsed RDF and the recipe from above and creates a new RDF object out of it
this new object needs to get serialized and sent to the browser
(In the real world getRecipe will have to do a user interaction in the browser)
I like the idea of chaining this together via pipes for greater flexibility when I enhance the code later. But I don't want to serialize it to an RDF serialization every time; I'd rather just send around the JS object. From what I've read in the documentation, I could use the stringify module to get a string out of each step for piping it to the next step. But:
does this actually make sense? In terms of do I add unnecessary overhead or is this negligible?
I don't see how I could give the parsedRDF to both methods with the dependency that getRecipe would have to be called first and the output is input for createMeal as well. Are there modules which help me on that?
It might be that I have to ask the user for the final recipe selection so I might need to send stuff to the browser there to get the final answer. Can I do something like this over sockets while the pipe is "waiting"?
I hope this shows what I'm trying to do, if not I will try to give more details/rephrase.
Update: After sleeping over it I figured out some more things:
It probably doesn't make sense to serialize a format like RDF into something non-standard if there are official serialization formats. So instead of using stringify, I will simply pass an official RDF serialization between the steps
This does imply that I parse/serialize the objects in each step, and this surely adds overhead. The question is: do I care? I could extend the RDF module I use to parse from a stream and serialize into one
I can solve the problem of the dependency between getRecipe and createMeal by simply adding some information from getRecipe to parsedRDF; this can be done very easily with RDF without breaking the original data model. But I would still be interested to know whether I could handle dependencies like this with pipes
Yes, it's okay to make a stream of JS objects;
you just have to remember to pipe it through something that will serialize the stream again before writing it to IO.
I'd recommend writing a module called rdf-stream that parses and serializes RDF. You would use it like this:
var fs = require('fs')
var rdf = require('rdf-stream')

fs.createReadStream(file)  // get a text stream
  .pipe(rdf.parse())       // turn it into objects
  .pipe(transform)         // optional, do something with the objects
  .pipe(rdf.stringify())   // turn the objects back into text
  .pipe(process.stdout)    // write to IO
and it could also be used by other people working with RDF in Node. Awesome!

Is there a good way of automatically generating JavaScript client code from server-side Python?

I basically want to be able to:
Write a few functions in python (with the minimum amount of extra meta data)
Turn these functions into a web service (with the minimum of effort / boilerplate)
Automatically generate some JavaScript functions / objects for RPC (this should prevent me from doing as many stupid things as possible, like mistyping method names, forgetting the names of methods, or passing the wrong number of arguments)
Example
python:
def hello_world():
    return "Hello world"
javascript:
...
<!-- This file is automatically generated (either dynamically or statically) -->
<script src="http://myurl.com/webservice/client_side_javascript"> </script>
...
<script>
$('#button').click(function () {
    hello_world(function (data) { $('#label').text(data); });
});
</script>
A bit of research has shown me some approaches that come close to this:
Automatic generation of JSON-RPC services from functions, with a little boilerplate code in Python, and then using jQuery and JSON to do the calls (it's still easy to make mistakes with method names, you still need to be aware of URLs when calling, and it's very irritating to write these calls yourself in the Firebug shell)
Using a library like soaplib to generate WSDL from Python (by adding copious type information), and then somehow converting this into JavaScript (not sure if there is even a library to do this)
But are there any approaches closer to what I want?
Yes there is: Pyjamas. Some people bill it as the "GWT for Python".
It looks like using a JavaScript XML-RPC client (there is a jQuery plugin for this) together with an XML-RPC server is a good way to go.
The jQuery plugin will introspect your RPC service and will populate the method names, making it impossible to mistype the name of a method call without getting an early warning. It will not, however, check the number of arguments that you pass, or their type.
There doesn't seem to be the same support for introspection in JSON-RPC (or rather, there doesn't seem to be a consistent standard). This approach can also be used with Django.
I've put together some example code and uploaded it here (I hope that linking to one's own blog posts isn't considered terrible form; a brief search of the internet didn't seem to suggest it was)...
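For reference, a hand-written JSON-RPC call from the browser is small but easy to get wrong, which is exactly what a generated client would protect against. A sketch assuming a hypothetical /webservice endpoint; nothing here checks the method name or arguments for typos:

```javascript
// Hypothetical hand-written JSON-RPC 2.0 call; the endpoint URL and
// the method name are placeholders and are not validated anywhere.
var nextId = 0;
function rpcCall(method, params, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/webservice", true);
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.onload = function () {
        var res = JSON.parse(xhr.responseText);
        callback(res.error, res.result);
    };
    xhr.send(JSON.stringify({ jsonrpc: "2.0", id: ++nextId,
                              method: method, params: params }));
}

// rpcCall("hello_world", [], function (err, data) { $('#label').text(data); });
```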
