Consume REST API on same IP but different port - javascript

Suppose a backend application that exposes some REST APIs, running on a Jetty web server at 192.168.1.10:8889.
I would like a frontend application (HTML/JavaScript only, on an Apache2 web server) running at the same IP but on a different port (e.g. 8000), which should consume the APIs exposed by the backend application.
How can I get this architecture working without running into the "No 'Access-Control-Allow-Origin'" error?

I think you should install an nginx proxy and configure it as a reverse proxy; you can see the documentation here:
https://www.nginx.com/resources/admin-guide/reverse-proxy/
You can search Google for more specific documentation on what you want to do.
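For example, here is a minimal sketch of such a reverse-proxy setup, assuming nginx serves the frontend on port 8000 (in place of Apache) and the backend routes all live under an /api/ prefix; the file paths are illustrative:

    server {
        listen 8000;

        # Serve the static frontend files directly.
        location / {
            root /var/www/frontend;        # illustrative path to the HTML/JS files
            try_files $uri /index.html;
        }

        # Forward API calls to the Jetty backend; the trailing slash strips
        # the /api/ prefix, so /api/foo becomes /foo on the backend. The
        # browser only ever talks to port 8000, so it never makes a
        # cross-origin request and the CORS error never comes up.
        location /api/ {
            proxy_pass http://192.168.1.10:8889/;
        }
    }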

Related

Export my localhost server (Node.js) online with a web server like Apache

I just made a simple chat service with Node.js and I want to publish it "online"; so far I have used Ngrok and Localtunnel, but they are very limited, so I looked at the Apache web server, but I have not found a tutorial on how to use it for this.
Thanks, and I hope you can help me.
Ngrok and Localtunnel are services which let you open a connection from inside your network to an external server which then forwards traffic back down the tunnel so clients on the Internet can make requests to your service running inside your LAN.
Apache is HTTP server software. It is nothing like Ngrok and Localtunnel.
While you can set up a reverse proxy using it, for that to be useful in this use case you would have to install it on your router … and most routers don't let you install software on them.
You could possibly run it on a computer inside your LAN and then configure port forwarding on the router … but if you are going to do that then you might as well forget about Apache HTTPD and just forward traffic directly to the service you've written using Node.js.
There are security risks and bandwidth considerations to take into account when running services from your LAN. It's almost always a better idea to just invest in a proper hosting service like Amazon AWS, DigitalOcean Droplets, or Heroku.
By "online" I suppose you mean to host it globally. For that my friend you will be in need of a server (preferably a cloude server) and a static IP address. Both of these are provided by a lot of providers like aws, digitalocean etc as a platform as a service, which we can leverage. So pls do the following:
Register for a cloud service (AWS, DigitalOcean, GCP, etc.).
Create a server instance with an operating system of your choice (my preference would be a Linux instance).
Attach a public static IP to the server.
Log into the server (SSH is the most secure way, and most providers provide this to log into your server).
Install dependencies (in your case Node.js, etc.).
Make sure that the port on which the app is hosted is open publicly; most providers provide a dashboard in which you can configure port settings (see the sketch after this list).
Use Apache or nginx to configure a reverse proxy (this is just for keeping your environment secure).
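As a starting point, here is a minimal sketch of a Node.js server prepared for this setup; the port and address are illustrative, and your actual chat service would replace the handler:

    // Minimal HTTP server; binding to 0.0.0.0 makes the process reachable
    // from outside the instance (subject to the provider's firewall rules).
    const http = require('http');

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('chat service is up\n');
    });

    server.listen(3000, '0.0.0.0', () => {
      console.log('Listening on port 3000');
    });

The reverse proxy from the last step would then forward public traffic on port 80 or 443 to port 3000.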

Deploying a React.js and Node.js full stack on AWS in production?

I have currently deployed React and Node.js behind nginx, which sits on AWS. I have no issues with deployment and no errors.
The current environment is: PRODUCTION.
But I have a doubt about whether the method I followed is right or wrong. This is the method I followed: https://jasonwatmore.com/post/2019/11/18/react-nodejs-on-aws-how-to-deploy-a-mern-stack-app-to-amazon-ec2
The following is my nginx configuration:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        root /var/apps/front_end/build;
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://0.0.0.0:3005/;
    }
}
As shown above, I copied the build folder (produced by npm run build) to the AWS instance and pointed nginx at that location. The backend is copied as-is to the AWS instance; after npm start it runs on port 3005, and I gave that address to the /api location to proxy_pass to.
I see a couple of others using a server.js for the front end, serving the build folder files from there, and pointing nginx at that server.js.
So should I do it that way, or am I good with the current method as seen in the link above?
Just like everything else, there are multiple ways to go about this. Judging by the way you have ended the question, it looks like you are open to exploring them.
Here are my preferences, in increasing order of responsibilities on my side versus what AWS handles for me:
AWS Amplify:
Given that you are already using React and Node, this will be a relatively easy switch. Amplify is not only a very useful frontend framework that makes it easy to add functionality like authentication, social logins, rotating API keys (via Cognito and API Gateway), etc., but it also handles backend logic that can eventually be deployed on AWS API Gateway and AWS Lambda. On top of that, Amplify also provides a CI/CD pipeline and connects with GitHub.
In minutes, you can have a scalable service, with the option to host the frontend via AWS CloudFront, a global CDN service, or via S3 hosting, deploy the API via API Gateway and Lambda, have a CI/CD pipeline set up via AWS CodeDeploy and CodeBuild, and also have user management via AWS Cognito. You can have multiple environments (dev, test, beta, etc.) and set it up such that any push to the master branch is automatically deployed on the infrastructure, and so on, with other branches mapped to specific environments. To top it all off, the same stack can be used to test and develop locally.
If you are instead tied down to using a specific service or function in a specific way, you can build any combination of the above services: API Gateway for managing APIs, Cognito for user management, Lambda for compute capacity, etc.
Remember, these are managed services, so you offload a lot of engineering hours to AWS, and being serverless means you pay only for what you use.
Coming to the example you have shared: you don't want your Node process to be responsible for serving static assets. It's a waste of compute power, since there is no intelligence attached to serving JS, CSS, or images, and it also introduces an extra process in the loop. Instead, have nginx serve the static assets itself; refer to this official guide or this StackOverflow answer.
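To make the idea concrete, here is a minimal sketch of a location block that serves a create-react-app style build directly from nginx with long-lived caching; the path and cache lifetime are illustrative:

    # Fingerprinted assets under build/static/ can be cached aggressively,
    # since a new build produces new file names.
    location /static/ {
        root /var/apps/front_end/build;
        expires 30d;
        add_header Cache-Control "public";
    }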

How to publish to a GCP Pub/Sub topic from inside a Cloudflare Worker

Is it possible to publish to a GCP Pub/Sub topic via a basic HTTP request? I have a Cloudflare Worker from which I'd like to publish directly to a topic. I originally tried bundling the Node.js module, but webpack (via wrangler) was unable to build due to dependencies (specifically tls) that are unavailable in the service-worker environment.
It seems API keys aren't supported on the PubSub API and I can't for the life of me find a way to use a service account without using an SDK.
Sort of. There is a publish method that uses REST, but it requires OAuth 2.0 authorization. I'm not sure if you'll run into the same issues as with the Node.js client library, but if so, you'll have to use an intermediate service (i.e. Cloud Functions/Compute Engine/App Engine) that exposes an HTTP endpoint and does the authentication for you.
For more information on using OAuth2.0, see this link here: https://cloud.google.com/pubsub/docs/authentication#user-accounts
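If you do have an OAuth 2.0 access token available (for instance one minted by such an intermediate service), the REST call itself is a plain HTTP request that fetch in a Worker can make. A minimal sketch, where projectId, topicId, and accessToken are assumed to come from elsewhere:

    async function publishMessage(projectId, topicId, accessToken, payload) {
      // REST publish endpoint for a Pub/Sub topic.
      const url = `https://pubsub.googleapis.com/v1/projects/${projectId}/topics/${topicId}:publish`;

      const response = await fetch(url, {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${accessToken}`,
          'Content-Type': 'application/json',
        },
        // Pub/Sub expects the message data to be base64-encoded.
        body: JSON.stringify({
          messages: [{ data: btoa(JSON.stringify(payload)) }],
        }),
      });

      if (!response.ok) {
        throw new Error(`Publish failed: ${response.status} ${await response.text()}`);
      }
      return response.json(); // contains the server-assigned messageIds
    }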

How to expose a Node.js API on a Namecheap VPS server?

I have purchased a subscription to the Namecheap VPS service.
I have a Node.js API running locally that I want to expose.
Currently, when visiting mydomain.com, a static page is served. How do I expose my Node.js API and handle requests to, for example, mydomain.com/books?
I have run the steps described here, but that guide turned out to be useless.

Using https with decoupled front-end and backend MEAN applications that run on same host

I have a MEAN project in which the Angular front-end application is decoupled from the backend Node + Express + MongoDB application. Each application is committed to its own git repository and can be staged or deployed independently.
The problem I have is that the applications are being deployed from the same host and both applications desire to use the https protocol. Are there best practice approaches to allowing the two apps to use the same protocol with default port number 443?
One option that has been suggested is to use nginx to proxy to selected port numbers (e.g., 3000 for the front-end and 3001 for the backend). Is this best practice, or are there better options available?
Thanks,
I came across this post https://gist.github.com/soheilhy/8b94347ff8336d971ad0. I will try the approach suggested to see if it works.
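For the nginx option mentioned above, a minimal sketch of a single server block terminating TLS on 443 and fanning out to the two apps might look like the following; the server name, certificate paths, and the /api/ prefix are placeholders:

    server {
        listen 443 ssl;
        server_name example.com;                              # placeholder

        ssl_certificate     /etc/ssl/certs/example.com.crt;   # placeholder paths
        ssl_certificate_key /etc/ssl/private/example.com.key;

        # Angular front-end app listening locally on port 3000.
        location / {
            proxy_pass http://127.0.0.1:3000;
        }

        # Node + Express backend listening locally on port 3001;
        # without a URI part on proxy_pass, requests keep their /api/... path.
        location /api/ {
            proxy_pass http://127.0.0.1:3001;
        }
    }

With TLS terminated at the proxy, both apps share port 443 from the browser's point of view and can keep speaking plain HTTP on their local ports.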
