I'm running integration tests on several AWS Lambdas and I need a way to route API calls to a dummy Express server on my local machine. Normally I would just change the URLs of the API calls, but the URLs are generated in projects that are not a part of this one and are imported via npm, so hardcoding a new URL isn't practical.
My goal is to have these modules use the URLs they generate, but have those calls routed to a dummy Express server that I am running, where I will have prepackaged responses so I can test the functionality of these Lambdas. For example, there is a request for an authorization token from an outside service. Instead of requesting it from the actual service, the call would be routed to my local Express server, which would just provide a static authorization token. There is another point where that token is verified, and I would again want that call routed to the same server (though in reality it's a different service), which would verify the token.
Ultimately I will have this dummy Express server, DynamoDB, and SQS running in Docker containers locally to essentially imitate this software running live.
I've seen that Docker can route traffic, but I'm not sure whether what I'm attempting is possible. I've googled around, but most of what I've found seems a bit simpler than what I'm attempting.
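For concreteness, this is roughly the kind of dummy server I have in mind (a minimal sketch; the endpoint paths and token value are placeholders, not the real services' routes):

    // Minimal sketch of the dummy Express server; routes and values are placeholders.
    const express = require('express');
    const app = express();
    app.use(express.json());

    // Stand-in for the external auth service: always hands back a static token.
    app.post('/oauth/token', (req, res) => {
      res.json({ access_token: 'static-test-token', expires_in: 3600 });
    });

    // Stand-in for the (separate) verification service: accepts only that token.
    app.get('/verify', (req, res) => {
      const token = (req.headers.authorization || '').replace('Bearer ', '');
      if (token === 'static-test-token') {
        res.json({ valid: true });
      } else {
        res.status(401).json({ valid: false });
      }
    });

    app.listen(8080, () => console.log('Dummy server listening on port 8080'));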
Related
I have an application that is composed of an Angular front end, an app layer, and a DB layer. You can see the architecture in this image.
I am using an nginx instance both to serve the JS front-end bits to the client and to proxy requests from the client to the app layer. So let's say I deploy this nginx on a cloud VM with IP 18.1.1.1 (fake): I can point my browser to that IP, the client will download the JS code, and the JS code is configured (see here) to set the app server IP/FQDN to the same IP/FQDN I pointed my browser to in order to download the UI.
At this point the nginx proxy configuration kicks in and forwards all /api requests made by the JS code to a specific FQDN. Right now this is a fixed FQDN simply because I am deploying these components as containers, and the nginx container always knows how to reach http://yelb-appserver:4567/api.
I would now like to create additional deployment methods, and in particular I would like to host the Angular bits on an S3 bucket (or any other web server) and have the JS point directly to something like an API Gateway, a separate EC2 instance, a cloud load balancer, or anything that represents an IP/FQDN endpoint different from the IP/FQDN of the web server serving the JS files.
In this case I can no longer use the appserver_env: 'http://' + window.location.host that I have used here.
Since I would like to create a dynamic and repeatable deployment workflow (using CloudFormation or similar), I am wondering whether there is a way to work with a single compiled JS artifact, parametrizing the Angular code to point to the /api endpoint created at deployment time, OR whether my only option is, at every deployment, to 1) create/read the /api endpoint at deployment time, 2) programmatically customize the Angular code with that endpoint, 3) re-build the Angular app dynamically (now including the specific /api endpoint), and 4) finally deploy the web site with the JS code built ad hoc with the custom /api endpoint for that specific application instance.
Thanks.
Use environment variables and keep them in a config file (like environment.prod.ts in your case), which will be given to the Node process running your build. Your Angular JavaScript code can use these variables; for the API endpoint, for instance, you can reference process.env.API_ENDPOINT wherever you need it. To supply these variables you can use something as simple as API_ENDPOINT='/api' npm run build, or, for a more advanced approach, you can use Docker.
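As a rough sketch of one way to wire this up (assuming an Angular CLI layout and a small helper script of my own invention, set-env.js, that rewrites environment.prod.ts before the build runs):

    // set-env.js - hypothetical pre-build helper; run as:
    //   API_ENDPOINT='https://my-api.example.com/api' node set-env.js && ng build --prod
    const fs = require('fs');

    const apiEndpoint = process.env.API_ENDPOINT || '/api';

    const contents = `export const environment = {
      production: true,
      appserver_env: '${apiEndpoint}'
    };
    `;

    // Overwrite the Angular environment file so the built bundle picks up the endpoint.
    fs.writeFileSync('./src/environments/environment.prod.ts', contents);
    console.log(`Wrote API endpoint ${apiEndpoint} to environment.prod.ts`);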
Hi, I am new to Neo4j. I have been searching for two days for a way to give everyone access to Neo4j through a public URL.
In the Neo4j configuration file I have made the following modifications:
dbms.connectors.default_listen_address=0.0.0.0
dbms.connector.http.listen_address=:7474
I can only get access within my router's network (at the IPv4 address level), but I want to give access to everyone.
This is because I am using ASP.NET MVC with JavaScript and the Neo4j API. I installed Neo4j on one server and the app is published on a separate server, and I want to access the Neo4j API from it.
Setting an external public IP is a function of the router managing the connections. If this is a corporate setup, ask the administrator. If this is a personal setup, you will need to do a bunch of things, such as:
Set up a local HTTP server
Allow inbound traffic on port 80
Set up a DNS service
Set up an SSH server
Forward requests on your router to your computer for the different ports
This is highly risky and you could be open to problems if you're not careful. A better approach would be to use a public service like AWS to host your application.
I am quite new to JavaScript and the web application environment. I have seen a React web application project which had a public directory, a client directory, and a server directory. I have a few questions:
Why do we need an Express server set up in the frontend project if we already have the backend APIs and backend server ready?
Do we need an Express server if we build the frontend in React and call the APIs to fetch the data for the application?
Aren't the backend server and the Express server in the frontend project the same thing?
Why do we need an Express server set up in the frontend project if we already have the backend APIs and backend server ready?
You don't.
You need an HTTP server to listen for and respond to any Ajax requests you make from your client side code.
You need an HTTP server to listen for and respond to any requests for the HTML documents and static resources (JS, CSS, images, etc) that your pages need.
These can be the same HTTP server, different HTTP servers, written with Express or not written with Express.
React tutorials tend to ignore mentioning this and just dive in showing how to use Express for everything. Don't read too much into that.
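To make that concrete, a single Express app can play both roles (static file server and API); a minimal sketch with made-up directory and route names:

    // One HTTP server doing both jobs; 'build' and '/api/items' are just examples.
    const express = require('express');
    const app = express();

    // Serve the HTML document and static resources (the React build output).
    app.use(express.static('build'));

    // Listen for and respond to the Ajax requests made by the client-side code.
    app.get('/api/items', (req, res) => {
      res.json([{ id: 1, name: 'example' }]);
    });

    app.listen(3000, () => console.log('Listening on http://localhost:3000'));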
Do we need an Express server if we build the frontend in React and call the APIs to fetch the data for the application?
No. See above.
Aren't the backend server and the Express server in the frontend project the same thing?
Maybe. It is up to you. See above.
There is no such thing as a "backend server" and a "frontend server"; a simple web application is composed of two main parts:
1/ an application that serves HTML pages. It runs on a backend, so it is usually called a server, but a typical cloud server nowadays can run hundreds of different serving apps at the same time
2/ a frontend, which is typically a complex piece of JavaScript software plus HTML pages that are dynamically sent to the user's browser and executed locally
The minimum you require for a working website is a server application that returns one or several HTML pages upon user request. A typical React + Node project is organized as follows:
A server directory, which contains all the code for the serving app (the one returning the web pages). It can also contain the code that handles the REST API, in case your client app needs dynamic data or your server connects to a database. Note that the web page server and the API server could be two (or more) different applications.
You usually don't want to share your server code with users, so typically you have a public directory that contains the HTML pages; theoretically this is the only location on disk that users can access. This directory can also contain the images and other resources needed by the web pages, which is why it is also called static resources.
To keep things more organized, the code of the frontend application is placed in a client directory, but in production it is usually bundled into one or a few files (depending on the size of the app) and also placed in the public directory, so that the public directory contains everything needed to serve the app.
Hope it helps
We don't need an Express server; however, adding one comes with great benefits:
It helps add compression to an Angular/React application that uses only an in-memory server (if that is your case).
It defines the base path from which to serve your project's static files and can add gzip compression headers to each response so that the server returns the compressed versions.
It helps you parse API responses into the format expected by the UI, so that the parsing logic stays in the Express server and not in the UI. This is helpful if the API response changes in the future, or when the final backend endpoint changes: there is no need to modify the UI, only the route in the Express server.
I found out these and other benefits while looking into how to add compression to an Angular application (it turns out you cannot without Express or an actual web server).
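For reference, a minimal sketch of that compression setup, assuming the compression middleware from npm and a made-up dist directory:

    // Gzip-compress responses for the static bundle; paths are just examples.
    const express = require('express');
    const compression = require('compression');

    const app = express();
    app.use(compression());          // adds Content-Encoding: gzip when the client accepts it
    app.use(express.static('dist')); // base path for the built Angular/React files

    app.listen(8080, () => console.log('Serving compressed static files on port 8080'));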
OK so I may be asking too much here and/or showing my naivety, but bear with me.
At present I have HTML (with JS) hosted at A, and a Node.js app hosted at B. The HTML/JS fetches data from the Node app via an XMLHttpRequest, and the Node app at B dutifully generates the requested data and sends it to A.
I'm trying to reduce the number of HTTP requests generally and to streamline performance, and I wonder whether it's possible to host the HTML/JS via the Node app (via express.static()) so that when the HTML/JS requests data from the Node server, it's actually requesting data from the same server, and indeed from within the same app (since the Node app is generating the data and is also exposing the HTML/JS on a static route).
So is there any way for the JS in the HTML to access the Node app's functions more directly? That is, rather than sending an HTTP request to the same Node app, can it just call the data-generating function within the Node app directly, or at least without using an HTTP request?
I have things set up in my Node app so that the HTML/JS can be hosted successfully via express.static() -- so it's working OK to that extent -- but I just need to know whether it's possible to avoid an HTTP request all the way around a big loop and back to the same Node app!
The simple answer is, if A and B are far apart, yes, hosting them on the same server will help.
Serving them from the same application won't help as you'll still need to talk via HTTP.
The question of whether you can remove the HTTP calls from A to B comes down to application design. You have a static web app and an API, and you're basically thinking of scrapping that split and making it one application.
There are pros and cons to both but I'll be going down the road of personal opinion if I start listing them.
My vote, don't bother :)
When you serve HTML and JS files with express.static(), they do not run on the server; they are served from the server to the browser, and the JS scripts run in the browser. For browser scripts to communicate with a server, they must use HTTP/HTTPS requests or sockets. Your browser scripts served from server A can communicate with server B (but check out CORS).
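For example, if the page is served by express.static() from the same app that exposes the data route, the browser code can use a relative URL and stay same-origin; a small sketch, with an assumed /api/data route:

    // Runs in the browser, inside the page served by express.static().
    // The request still travels over HTTP, but to the same origin, so no CORS setup is needed.
    fetch('/api/data')
      .then((response) => response.json())
      .then((data) => console.log('Got data from the same app:', data))
      .catch((err) => console.error('Request failed:', err));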
One of my team members works only on client-side (JavaScript) development for a web app with a large and complex backend.
I would like to avoid the need for him to install and configure a local copy of the backend.
However, I wouldn't want him to have to push every small change to the dev server just so that he can test it.
We thought about having the client make requests directly to the dev server instead of to the same domain (localhost), but this doesn't seem practical due to cross-domain request policies and authentication problems (cookies aren't getting sent).
What are some elegant solutions for developing clients without having a local backend?
Depending on how complicated your backend is, you might be able to create a mock backend using a lightweight web framework like Sinatra. I've had some success with this technique, but the services I've been mocking have been fairly simple. In some cases the mock backend mostly serves static JSON files.
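The same idea works with Express if your stack is Node-based; a hedged sketch, with an invented fixtures directory of canned JSON files:

    // Tiny mock backend that mostly serves canned JSON fixtures from disk.
    const express = require('express');
    const app = express();

    // e.g. GET /api/users serves the contents of fixtures/users.json
    app.use('/api', express.static('fixtures', { extensions: ['json'] }));

    app.listen(4000, () => console.log('Mock backend on http://localhost:4000'));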
I use Charles Proxy to map the URIs of the dev server's web services to localhost (where I run a lightweight web server that serves up my static development code).