Having a fixed response structure with Node.js and Express - JavaScript

We have recently started using Node.js for our API server instead of Java. Apart from all the good things Node.js provides, the thing I miss most is having a proper response object for an API.
Since JavaScript is a dynamically typed language, objects can be created on the fly while returning the response. This is in contrast to Java, where I can have a class, an instance of which is serialized into the response. I can look up this class at any time to determine what the response of the API will be.
Is there such a design pattern in Node.js/JavaScript? We would like our APIs to conform strictly to such a templated object.

You can make them yourself.
If you're using ES6 for example, you can have various error and response modules, and perform your own validation in the class that creates those responses.
For example,
// sample-response.js
class SampleResponse {
  constructor(message) {
    // validate `message` somehow
    this.data = message;
  }
}

module.exports = {
  SampleResponse
};
Then however you're structuring your HTTP interface, you can send back whichever response you'd like (for example, with Express):
res.send(new SampleResponse(message))
The same goes for errors, etc. You're not necessarily limited by a lack of types in JavaScript; you just have to enforce things differently.
If you're not using ES6, you can do something like this:
module.exports = {
  SampleResponse: function(message) {
    // do some validation
    return { data: message }; // or whatever you want
  }
};
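As a sketch of how that enforcement might look end to end, building on the ES6 class above (the non-empty-string check and the /greeting route below are illustrative assumptions, not a prescribed shape):

// sample-response.js (sketch)
class SampleResponse {
  constructor(message) {
    // refuse to build a response that doesn't match the agreed shape
    if (typeof message !== 'string' || message.length === 0) {
      throw new TypeError('SampleResponse expects a non-empty string');
    }
    this.data = message;
  }
}

// app.js -- hypothetical Express route funnelling its payload through the class
const express = require('express');
const app = express();

app.get('/greeting', (req, res) => {
  // every handler returns the same wire format
  res.send(new SampleResponse('hello'));
});

app.listen(3000);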

You can use Flow or TypeScript for your code.
You can also use contract testing tools, depending on the contract format of your REST API:
abao for RAML
dredd for api blueprint
SoapUI for Swagger
My choice is to use TypeScript and Abao+RAML.

Related

How to manage data for the tests?

I have only one account - admin by default. I need to change the user's permissions to a non-admin one. I believe I have a few options available (not great, yet they can be used):
Create a new endpoint on the server just for testing - to make it possible to set a user's permissions to non-admin. Changing something like that through a test-only endpoint seems pretty odd to me.
Have multiple users so I can switch between them for different roles (all in all, not that simple for now).
Connect to the DB within the tests and make those changes directly on the DB - probably the easiest option.
Is it OK to create new endpoints just for testing, e.g. /publish-cypress? Is it OK to populate the database just for tests by running some operations on the DB?
In my personal opinion, you shouldn't create a testing endpoint.
What you should be testing are the methods that will be called by these routes (your service), and for the front end you can 'fake' these calls to the API.
The simplest way to do that is to use something similar to interfaces (you might want to look at TypeScript ;) ).
An example of faking these calls in your tests:
I'll go with TypeScript, as it'll be much easier to understand (less code):
// here you define all calls possible to your api
// don't forget to always pass the gateway by reference so that you can switch whenever you want between fake & real
interface MyGateway {
  changeUserPermission(userId, permissions);
}

// here do the actual implementation that will be used by your app
class MyApiGateway implements MyGateway {
  public changeUserPermission(userId, permissions) {
    // here is your real api call with fetch, axios or whatever
    fetch('http://example.com/perms')...
  }
}

// now you can do another implementation for testing purposes
class MyTestGateway implements MyGateway {
  // here you can return hard-coded values or do whatever you want so that your app doesn't depend on your backend
  public changeUserPermission(userId, permissions) {
    console.log(`Updating ${userId} to ${permissions}`);
  }
}
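To illustrate the "pass the gateway by reference" point, here is a minimal wiring sketch in plain JavaScript; the bootstrap code and the NODE_ENV check are assumptions, and it reuses the two gateway classes from the example above:

// composition root: pick one implementation at startup
const gateway = process.env.NODE_ENV === 'test'
  ? new MyTestGateway()
  : new MyApiGateway();

// application code depends only on the gateway shape, not on fetch/axios
function promoteUser(gw, userId) {
  return gw.changeUserPermission(userId, 'admin');
}

// in production this hits the real API; under test it just logs
promoteUser(gateway, 42);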

Embeddable js interpreter for user's code?

Imagine a website where a user can generate content via JS.
For example:
The user clicks a button.
It requests our API (not the user's API).
The API returns an object with specific fields.
We show a select with user-defined options generated by the user's code, or some calculated result based on the data we sent.
The idea is to give the user the ability to edit the visible content (using our structures; we know beforehand which fields in the returned object do what).
First solution, "developed" in 5 minutes:
The user clicks a button.
It sends all required data as context to our API.
We fetch the user's defined code from the database.
// here is the code which we write (not the user) and we know this code is safe
const APP_CONTEXT = parseInput(); // this can be parameters from the command line
const ourLibrary = require('ourLibrary');
// APP_CONTEXT is a variable which contains the data from the frontend. We control the data inside APP_CONTEXT; the user can not write to it
// here is user defined code
const someVar = APP_CONTEXT['fieldDescribedInOurDocumentation'];
const anotherVar = APP_CONTEXT['anotherFieldFromDocumentation'];
ourLibrary.sendToFrontend(someVar + anotherVar);
In this very simple example, once the user clicked the button, we sent a request to our API, the user's code was executed, and we show the result of the execution. ourLibrary abstracts how the handling is completed.
The main problem, as I see it, is security. I am thinking about using a restricted Node.js process: no network access, no file system access.
Is it possible to deny any import/require in a Node.js process? I want to let the user only call built-in JS functions (Math.min, Math.max, new Date(), +, -), declare functions, and so on, so it works like a sophisticated calculator. We should also be able to send the result back to the frontend, for example via RabbitMQ + Node.js + WebSockets. We can use a simple console.log if the former is a problem.
A possible solution (not secure, of course) using the Node.js interpreter; we execute the interpreter every time the action is required:
const APP_CONTEXT = parseInput();
const ourLibrary = require('ourLibrary');
const usersCode = getUsersCode();
eval(usersCode);
Inside usersCode the user calls ourLibrary.sendToFrontend to produce the result. But this solution lets the user use any built-in Node.js functionality, like const fs = require('fs'). Of course access will be restricted at the OS level (SELinux or similar), but can I configure/set up Node.js to run as a plain JS interpreter? Or maybe some other JS interpreter exists that is safe to use? Safe means: only arithmetic, the Date and Math functions and so on; no filesystem access, no network access.
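To make the kind of restriction I have in mind concrete, here is a sketch using Node's built-in vm module. The vm docs are explicit that it is not a security mechanism, so this only illustrates the scoping idea, not a real sandbox; the field values below are made up.

const vm = require('vm');

const APP_CONTEXT = {
  fieldDescribedInOurDocumentation: 2,
  anotherFieldFromDocumentation: 3
};
const results = [];

// only what we put on the sandbox object is visible to the user's code;
// Math, Date, arithmetic etc. are available because every context gets the
// standard built-ins, but require, process and fs are simply not in scope
const sandbox = {
  APP_CONTEXT,
  ourLibrary: { sendToFrontend: (value) => results.push(value) }
};

const usersCode = `
  const someVar = APP_CONTEXT['fieldDescribedInOurDocumentation'];
  const anotherVar = APP_CONTEXT['anotherFieldFromDocumentation'];
  ourLibrary.sendToFrontend(someVar + anotherVar);
`;

vm.runInNewContext(usersCode, sandbox, { timeout: 100 });
console.log(results); // [5]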

Generate static Javascript client from Swagger for use in React Native

I'm building a React Native app that will consume an API with Swagger 2.0 definition. I went to Swagger's repo at https://github.com/swagger-api/swagger-codegen#where-is-javascript and it points to their Javascript generator at https://github.com/swagger-api/swagger-js.
The problem is that the generator is dynamic, and since I'll be embedding the client in a mobile app, a dynamic generator is not an option. They also say that there's a third-party project available at https://github.com/wcandillon/swagger-js-codegen, which says it is no longer maintained and points back to https://github.com/swagger-api/swagger-codegen. (While that third-party generator works, I don't want to use a deprecated tool that might break at any time, since I'll be updating the API client as new endpoints arrive. And that tool doesn't generate really good code anyway, as it says in its own repo.)
At this point I'm stuck. What is the supported way of generating a static JavaScript client from a Swagger definition for use in React Native?
You can use Swagger Codegen to generate a JavaScript client SDK. However, the JavaScript code used in it will not work with React Native's fetch implementation. To overcome that, you can simply extend the implementation of ApiClient to use the React Native fetch, like:
class CustomApiClient extends ApiClient {
  callApi(path, httpMethod, pathParams, queryParams, collectionQueryParams,
          headerParams, formParams, bodyParam, authNames, contentTypes,
          accepts, returnType, callback) {
    return fetch(`${this.basePath}${path}`, {
      method: httpMethod
    });
  }
}
Later you can use it in your other API classes, such as:
class CustomUsersApi extends UsersApi {
  constructor() {
    super(new CustomApiClient());
  }
}
For a detailed implementation, you can refer to the blog post https://medium.com/@lupugabriel/using-swagger-codegen-with-reactnative-4493d98cac15

How to make API abstraction layer code cleaner

Introduction
I'm developing a React application that has to communicate with a REST API. The full API isn't fully implemented yet, so I'm making mock-ups, and to avoid wasted code I'm adding an abstraction layer between the mock-up/API and the application.
Current situation
Currently I have classes representing the components (like 'a user') in the API. A GET request to a URL obj1/obj2/obj3/ is translated into JavaScript as server.get("obj1").get("obj2").get("obj3").fetch(...args).then(onsuccess, onerror).
fetch would return a promise; the other methods work analogously (see below).
The question
My question has 2 parts.
First, is there a way to clean up this part: .get("obj1").get("obj2").get("obj3")? (I don't think React supports Proxies.)
Secondly, if you have recursive requests like
server.get("user").get(<id>).fetch(
(user)=>{
update_ui(user);
user.books.fetch(
(books)=>{
update_ui(books);
},(error)=>{}
)
},(error)=>{}
)
they can get ugly quickly. Is there a way, similar to .then(...).then(...) for promises, to flatten them, or something completely different that would result in better code?
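For instance (a sketch of what I mean, assuming fetch also returns a promise when no callbacks are passed, as described above), the nested example might flatten into:

server.get("user").get(<id>).fetch()
  .then((user) => {
    update_ui(user);
    return user.books.fetch();
  })
  .then((books) => update_ui(books))
  .catch((error) => { /* handle both failures in one place */ });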
What get().get()...fetch() does
The gets construct a path from which fetch and the other operations are executed. For an actual API these would be URLs; for a mock-up this could be a hard-coded dictionary.
For example, get("users").get(<userid>) would correspond to an object of the form
{
  path: "users.<userid>", // (or any other separator)
  fetch: function(...args), // GET specified in the api
  push: null, // api doesn't specify a POST request for this url
  ...
}
The translation of HTTP requests to JavaScript is as follows:
GET to fetch
POST to push
PUT to set
PATCH to update
DELETE to pop
The implementation of these methods (fetch, ...) then uses the path and the specified arguments to GET, POST, ... the data.
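A minimal sketch of the kind of builder I mean, using a hard-coded dictionary as the mock-up (the makeServer name and the sample data are only illustrative):

// each .get() only extends the path; nothing is requested until fetch()
function makeServer(mockData) {
  const node = (segments) => ({
    get(segment) {
      return node([...segments, segment]);
    },
    fetch() {
      // mock-up: resolve from the dictionary; the real implementation would
      // issue a GET to segments.join('/') instead
      const value = segments.reduce((obj, key) => obj && obj[key], mockData);
      return Promise.resolve(value);
    }
  });
  return node([]);
}

// mirrors server.get("users").get(42).fetch()
const server = makeServer({ users: { 42: { name: 'Ada' } } });
server.get('users').get(42).fetch().then(console.log); // { name: 'Ada' }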

Expose GraphQL schema from non-JS server to a JS client

I have a GraphQL server implemented in Java and a JavaScript client querying it. What I don't like is that the client has to just know the schema and cannot instead get it from the server and build queries against it dynamically.
Now, I understand GraphiQL somehow does just that, but I'm guessing it's because its backend is also written in JavaScript so both the client and server can use it. My schema is defined in Java, but there might be a way to automatically generate a JavaScript representation that the client could use.
Does such a thing already exist?
Now, I understand GraphiQL somehow does just that, but I'm guessing it's because its backend is also written in JavaScript so both the client and server can use it.
Actually, (fortunately) this is not the case. It is written in JavaScript, but it need not be to achieve this behavior.
I've got some great news for you...
Introspection!
One of the awesome things about GraphQL is that, in fact, the client doesn't have to know anything about the schema, because it can just query the server for it using introspection. GraphiQL uses exactly this to automagically populate the schema if you don't provide one explicitly.
From the Props section of the GraphiQL README:
schema: a GraphQLSchema instance or null if one is not to be used. If undefined is provided, GraphiQL will send an introspection query using the fetcher to produce a schema.
The official GraphQL Introspection docs will give you a lot more information and sample queries. Their example of querying their Star Wars example schema:
{
  __schema {
    types {
      name
    }
  }
}
This returns the names of all of the types. Introspection is part of the GraphQL spec, so every GraphQL server should be able to do it out of the box: you don't need to explicitly add any functionality.
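For instance, a plain JavaScript client could run that same query itself; the /graphql URL below is an assumption about where your Java server exposes its endpoint:

// send the standard introspection query with fetch
const query = `
  {
    __schema {
      types {
        name
      }
    }
  }
`;

fetch('http://localhost:8080/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query })
})
  .then((response) => response.json())
  .then((result) => {
    // result.data.__schema.types lists every type the server knows about
    console.log(result.data.__schema.types.map((t) => t.name));
  });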
