I have a VPS that runs Apache and serves a number of WordPress sites. I also managed to get a NodeJS server running on the same VPS using the MEAN stack. Things worked fine with this setup.
I decided to add a second NodeJS/MEAN app to the same server, running on a separate port, and everything operates fine, except that I've noticed a significant impact on page-load performance across all sites once I got this third server running.
I found this question and this question here on SO, but neither of them addresses performance. So my question is:
Is it possible/practical to run two separate/unique domains on the same NodeJS server app? Or is that going to create more problems than it solves? (Note: I don't mean the same machine, I mean the same NodeJS instance.)
If not, how can I improve performance? Is upgrading my VPS the only option?
You can indeed run multiple apps on the same port/process. This can be done with the express-vhost module if you need to separate by domain. You can also use the cluster module to run a pool of processes that share resources (though they end up being the same 'app'); you could combine that with the vhost approach to have a pool of processes serve a number of domains.
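For illustration, here's a minimal sketch of the vhost approach (using the standalone vhost middleware rather than express-vhost specifically; the domains and routes are placeholders):

const express = require('express');
const vhost = require('vhost');

// Each domain gets its own Express sub-app...
const siteA = express();
siteA.get('/', (req, res) => res.send('Hello from site-a'));

const siteB = express();
siteB.get('/', (req, res) => res.send('Hello from site-b'));

// ...and one parent app dispatches on the Host header, so a single
// Node process (and port) serves both domains.
const main = express();
main.use(vhost('site-a.example.com', siteA));
main.use(vhost('site-b.example.com', siteB));

main.listen(3000);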
That said, I don't think you're actually going to get the results you want. The overhead of a NodeJS process is pretty trivial compared to most runtimes (e.g. a JVM); the cost comes mostly from whatever your custom code is doing. I think what's more likely happening is that whatever size server you've chosen for your VPS just isn't enough to run everything you're throwing at it, or the Node apps you've written are hogging the event loop with long-running operations. It could also be that Apache is the hog; you'll need to do more diagnostics to get to the root of it.
I am working on an app that uses machine learning, and recently ran into a problem: I found out that Expo and Python do not really work together.
I thought a solution would be to host and run my machine-learning model (Python) on a server and make requests to it from my JS Expo app. I figured this would also have the hidden bonus of faster processing compared to running it on my laptop.
So, can people confirm, firstly, whether this is possible (calling Python code running on a server from a JS Expo app), and if so, provide some recommendations as to where I can host it?
This is for a university project, so I am hoping to find something either free or cheap, thanks.
I figured this would also have the hidden bonus of faster processing compared to running it on my laptop.
If you want to do it for free, the lowest-tier instances on any cloud will have 1 or 2 cores plus 1-2 GB of RAM and no hardware acceleration, so they won't be stronger than your local device. But if you had planned to run the model on your phone, that might well be enough.
So, can people confirm, firstly, whether this is possible (calling Python code running on a server from a JS Expo app)
Short answer: yes, but...
The way you formulated the question suggests that you don't know how this communication would work. You would need to (a rough client-side sketch follows the list):
- send an HTTP request from the Expo app to some endpoint on your server
- implement server code that accepts that request; in your case it's probably best to do that in Python (with something like Django or Flask)
- in that server code, execute your TensorFlow code
- if the TensorFlow code executes quickly, you can send the results in the response; if not, you will need a second endpoint for the client to check for results
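To make the client side concrete, a request from the Expo app might look roughly like this (the URL, endpoint name, and payload shape are made up for the example):

// Somewhere in the Expo app: post the input to a (hypothetical) /predict
// endpoint on your server and read the result back as JSON.
async function getPrediction(inputFeatures) {
    const response = await fetch('https://your-server.example.com/predict', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ features: inputFeatures }),
    });
    if (!response.ok) {
        throw new Error('Server responded with ' + response.status);
    }
    return response.json(); // e.g. { prediction: ... }
}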
This is for a university project, so I am hoping to find something either free or cheap, thanks.
For local development you will need to host it locally either way, and for a university project that should be enough. But if you want it on a public server, you can get a free tier or some starting credits on almost any cloud service; e.g. DigitalOcean gives $100 of credit to students.
Alternatively, you can use TensorFlow directly in a React Native app; you just can't do it in Python: https://blog.tensorflow.org/2020/02/tensorflowjs-for-react-native-is-here.html?m=1
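If you go that route, a rough sketch of what using it looks like (the model URL is a placeholder, and the full setup steps are in the linked post):

import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-react-native'; // registers the React Native platform adapter

// Load a model you host somewhere (modelUrl is a placeholder) and run a prediction.
async function loadAndPredict(modelUrl, inputArray) {
    await tf.ready();                                  // wait for the backend to initialise
    const model = await tf.loadLayersModel(modelUrl);  // model.json + weight shards
    const input = tf.tensor2d([inputArray]);
    const output = model.predict(input);
    return output.array();                             // plain JS array with the prediction
}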
I am trying to develop a Word add-in. It will provide corrections for a word based on its context. The language will be Bengali.
My idea is to run a Python server on localhost. Yes, it will run on localhost; if it were a central server, there would be too many requests. So somehow (maybe through an .exe file) I will make the server run on localhost. The .exe file can be distributed to anyone, and when they run it, the server starts on their machine; they can then use the Word add-in (that I am developing) to call localhost and get the desired output. It should be noted that the platform will be Windows.
But the problem is that the Word JavaScript API doesn't let me call localhost over HTTP; it can only call HTTPS, and localhost isn't HTTPS. Is there any way I can call my localhost server over HTTP from the Word API? Also, as I am trying to make a complete product, I want to skip OS-related configuration, for example making localhost certified. When the application (the Python server) is distributed, I don't know how, or whether, it is possible to run a script on other people's Windows machines and make their localhost certified. So it would be very helpful if the Word API could just call my localhost server over HTTP (or at most https://localhost) and not some https://somedomainname.com.
As the coding environment for developing the add-in, I am using Script Lab, so it would be better to find a solution that Script Lab supports.
There are a few problems with this approach.
For one, you will never pass validation when you try to call localhost, since that fails by default, and you would be asking users to install additional software to support your JS add-in. Add-ins should work standalone.
Is there any way I can call my localhost server over HTTP from the Word API?
No, there isn't in this context. Your final manifest URLs need to be HTTPS, and they should be externally available.
I want to skip OS-related configuration, for example making localhost certified. When the application (the Python server) is distributed, I don't know how, or whether, it is possible to run a script on other people's Windows machines and make their localhost certified.
One of the benefits of the Office Store is that Microsoft can check whether the add-in customers are installing (which we're developing) is malicious or not. Saying you want to bypass validation doesn't help that case.
My idea is to run a Python server on localhost. Yes, it will run on localhost; if it were a central server, there would be too many requests.
It should be a central server, and you should handle the scalability aspect of it; you might not have as huge a number of users as you imagine to begin with. I'd recommend Heroku: they support Python and their first tier is free. You can develop and publish your add-in there, and once it starts getting significant traffic, you can look into moving it elsewhere.
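The add-in side then just calls that public HTTPS endpoint; for example (the URL, route, and payload are hypothetical), something like this from your Script Lab snippet:

// Send the word plus its context to the hosted service and get suggestions back.
async function getSuggestions(word, context) {
    const response = await fetch('https://your-app.herokuapp.com/correct', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ word: word, context: context, lang: 'bn' }),
    });
    return response.json(); // e.g. { suggestions: [...] }
}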
I've just started dealing with NodeJS web apps and have a fundamental question.
Since I came from the PHP world, I know PHP has a built-in HTTP server, but nobody actually uses it; we used nginx (and, in prehistoric projects, Apache) as the HTTP server. When I came to ExpressJS, I found that all the examples talk about listening with the HTTP server that ExpressJS opens (via the Node http module, of course), but nobody talks about using it via FastCGI (nginx -> FastCGI (e.g. node-fastcgi) -> my ExpressJS app) the way I used to with PHP (nginx -> PHP-FPM -> my PHP environment), and I wonder why.
As far as I understand, a NodeJS app is very fast, has non-blocking I/O and so on, but there is a security hole in running the app the way everybody shows: since the service shares common resources in one JavaScript environment, one user can, by mistake (or not), share sensitive information with others. For instance, let's assume the developer made a mistake like this:
router.post('/set-user-cc', function(req, res){
    global.user = new User({
        creditCard: req.param('cc')
    });
});
And another user makes a request like this:
router.get('/get-user-cc', function(req, res){
    res.json(global.user);
});
At this point, every user will get that user's CC info.
Running my ExpressJS app via FastCGI would open a clean JavaScript environment for each HTTP request, so users couldn't hurt each other.
It would be nice to hear from experienced NodeJS (web) app developers why nobody suggests the FastCGI solution (I searched Google and found almost nothing) and, if so, why it's such a bad idea.
(P.S. The example is just to demonstrate the problem; it's not something someone actually did, but as we know, plenty of careless people exist in the universe. :)
Thank you!
You won't make mistakes like that if you lint your code, run in strict mode, and don't use global variables like that.
Also, in NodeJS web applications you generally want to make the server stateless and keep all the data in the database. This also makes for a more scalable architecture.
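Just to make that concrete, here's roughly what the same two endpoints look like without shared state, assuming a Mongoose-style User model, body parsing, and session middleware (all placeholders, not anyone's actual code):

// The data is scoped to the authenticated user and lives in the database, not in a global.
router.post('/set-user-cc', function (req, res) {
    User.findByIdAndUpdate(req.session.userId, { creditCard: req.body.cc })
        .then(function () { res.sendStatus(204); })
        .catch(function (err) { res.status(500).json({ error: err.message }); });
});

router.get('/get-user-cc', function (req, res) {
    User.findById(req.session.userId, 'creditCard')
        .then(function (user) { res.json(user); })
        .catch(function (err) { res.status(500).json({ error: err.message }); });
});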
In applications where security is super important, you can also throw heavy fuzz testing at them to find problems like that.
If you do all this, plus have a strict code-review process, you won't have to worry about it at all.
FastCGI doesn't prevent this problem: a single connection (or a few) is used to communicate with the server that processes the requests (node.js in this case), and HTTP requests are multiplexed through it. A node.js process will still handle multiple requests at a time.
You could potentially hack together a solution that launches a separate thread for each request, but it would be a lot slower. If you are using node.js for things that require high reliability or can't afford small mistakes (for example, health-related devices), node.js is the wrong platform.
I'm new to Amazon AWS and want to create a cloud-based REST API in Node.js.
Usually I develop the program alongside its tests: I write some tests, then write the code that makes those tests pass. So in a typical programming session, I may run the tests or the app tens of times.
When I do this locally it is easy and quick. But what if I want to do the whole process on the Amazon cloud? What does this code-test-code cycle look like? Should I upload my code to AWS every time I make a change, and then run it against some server address?
I read somewhere in the documentation that when I run a task for a few minutes (for example 15 minutes), Amazon rounds it up to one hour. So if in a typical development session I run my program 100 times in an hour, do I end up paying for 100 hours? If so, what is the solution to avoid these huge costs?
When I do this locally it is easy and quick.
You can continue to do so. Deploying in the cloud does not require developing in the cloud.
But what if I want to do the whole process on the Amazon cloud?
When I do this, I usually edit the code locally, then rsync my git directory up to the server and restart the service. It's super quick.
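For what it's worth, that loop is small enough to script; a sketch, assuming SSH access, rsync on both ends, and a systemd-managed service (the host, remote path, and service name are placeholders):

// deploy.js - a rough sketch of the edit -> rsync -> restart loop as a Node script.
const { execSync } = require('child_process');

const target = 'deploy@my-server.example.com';
const remoteDir = '/srv/my-app';

// Push the working tree (minus node_modules), then restart the service remotely.
execSync(`rsync -az --delete --exclude node_modules ./ ${target}:${remoteDir}`, { stdio: 'inherit' });
execSync(`ssh ${target} "sudo systemctl restart my-app"`, { stdio: 'inherit' });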
Most people develop locally, and occasionally test on a real AWS server to make sure they haven't broken any assumptions (i.e. forgot something at boot/install time).
There are tools like Vagrant that can help you keep your server installation separate from your development environment.
As you grow (and you've got more money), you'll want to spin up staging/QA servers. These don't have to be run all the time, just when changes happen. (i.e. have Jenkins spin them up.) But it's not worth automating everything from the start. Make sure you're building the right thing (what people want) before you build it right (full automation, etc.)
So if in a typical development session I run my program 100 times in an hour, do I end up paying for 100 hours?
Only if you launch a new instance every time. Generally, you want to continue to edit-upload-run on the same server until it works, then occasionally kill and relaunch that server to make sure that you haven't screwed up the boot process.
I am currently trying to implement a simple HTTP server for a comet technique (long-polling XHR requests). Since JavaScript is very strict about cross-domain requests, I have a few questions:
As I understand it, any Apache worker is blocked while serving a request, so writing the "script" as a usual website would block Apache once every worker has a request to serve. --> Does not work!
I came up with the idea of writing my own simple HTTP server just for serving these long-polling requests. This server should be non-blocking, so each worker could handle many requests at the same time. As my site also contains content/images etc. and my comet server does not need to serve content, I started it on a port other than 80. The problem now is that the JavaScript delivered by Apache can't interact with my comet server running on a different port, because of cross-domain restrictions. --> Does not work!
Then I came up with the idea of using mod_proxy to map my server onto a new subdomain. I couldn't really figure out how mod_proxy works, but I imagine I would end up with the same effect as in my first approach?
What would be the best way to combine this kind of classic website with these long-polling XHR requests? Do I need to implement content delivery in my own server?
I'm pretty sure using mod_proxy will block a worker while the request is being processed.
If you can use 2 IPs, there is a fairly easy solution.
Let's say IP A is 1.1.1.1 and IP B is 2.2.2.2, and let's say your domain is example.com.
This is how it will work:
- Configure Apache to listen on port 80, but ONLY on IP A.
- Start your other server on port 80, but only on IP B.
- Configure the XHR requests to go to a subdomain of your domain, but on the same port, so the cross-domain restrictions don't prevent them. So your site is example.com, and the XHR requests go to xhr.example.com, for example.
- Configure your DNS so that example.com resolves to IP A, and xhr.example.com resolves to IP B.
- You're done.
This solution will work if you have 2 servers and each one has its own IP, and it will work just as well if you have one server with 2 IPs.
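If the comet server happens to be written in Node, for instance, binding it to IP B only is a one-liner; the matching Apache side is a "Listen 1.1.1.1:80" directive (the IPs are the placeholders from above):

var http = require('http');

// Sketch: the comet server binds port 80 on IP B only (2.2.2.2 here),
// leaving IP A (1.1.1.1) free for Apache.
var server = http.createServer(function (req, res) {
    // ...hold the response open until there is something to push...
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ ok: true }));
});

server.listen(80, '2.2.2.2');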
If you can't use 2 IPs, I may have another solution, I'm checking if it's applicable to your case.
This is a difficult problem. Even if you get past the security issues you're running into, you'll end up having to hold a TCP connection open for every client currently looking at a web page. You won't be able to create a thread to handle each connection, and you won't be able to "select" on all the connections from a single thread. Having done this before, I can tell you it's not easy. You may want to look into libevent, which memcached uses to a similar end.
Up to a point you can probably get away with setting long timeouts and allowing Apache to have a huge number of workers, most of which will be idle most of the time. Careful choice and configuration of the Apache worker module will stretch this to thousands of concurrent users, I believe. At some point, however, it will not scale up any more.
I don't know what your infrastructure looks like, but we have load-balancing boxes in the network racks called F5s. These present a single external domain, but redirect the traffic to multiple internal servers based on their response times, cookies in the request headers, etc. They can be configured to send requests for a certain path within the virtual domain to a specific server. Thus you could have example.com/xhr/foo requests mapped to a specific server to handle these comet requests. Unfortunately, this is not a software solution, but a rather expensive hardware one.
Anyway, you may need some kind of load-balancing system (or maybe you have one already), and perhaps it can be configured to handle this situation better than Apache can.
I had a problem years ago where I wanted customers using a client-server system with a proprietary binary protocol to be able to access our servers on port 80, because they were continuously having problems with firewalls on the custom port that the system used. What I needed was a proxy that would live on port 80 and direct the traffic to either Apache or the app server depending on the first few bytes of what came across from the client. I looked for a solution and found nothing that fit. I considered writing an Apache module, a plugin for DeleGate, etc., but eventually rolled my own custom content-sensing proxy service. That, I think, is the worst-case scenario for what you're trying to do.
To answer the specific question about mod_proxy: yes, you can set up mod_proxy to serve content that is generated by a server (or service) that is not public-facing (i.e. one that is only available via an internal address or localhost).
I've done this in a production environment and it works very, very well: Apache forwarding some requests to Tomcat via AJP workers, and others to a GIS application server via mod_proxy. As others have pointed out, cross-site security may stop you working on a subdomain, but there is no reason why you can't proxy requests to mydomain.com/application.
To talk about your specific problem: I think you are getting bogged down in looking at the problem as "long-lived requests", i.e. assuming that when you make one of these requests, that's it, and the whole process needs to stop. It seems as though you are trying to solve an application-architecture issue via changes to the system architecture. In fact, what you need to do is treat these background requests exactly as such, and multi-thread it (there's a rough sketch at the end of this answer):
Client makes the request to the remote service "perform task X with data A, B and C"
Your service receives the request: it passes it on to a scheduler, which issues a unique ticket/token for the request. The service then returns this token to the client: "thanks, your task is in a queue running under token Z"
The client then hangs onto this token, shows a "loading/please wait" box, and sets up a timer that fires, say, every second
When the timer fires, the client makes another request to the remote service: "have you got the results for my task? It's token Z"
Your background service can then check with the scheduler and will likely return either an empty "no, not done yet" response or the results
When the client gets the results back, it can simply clear the timer and display them.
So long as you're reasonably comfortable with threading (which you must be, if you've indicated you're looking at writing your own HTTP server), this shouldn't be too complex. On top of the HTTP-listener part, you need:
Scheduler object: a singleton that really just wraps a "first in, first out" queue. New tasks go onto the end, and jobs can be pulled off from the beginning; just make sure the code that hands out a job is thread-safe (lest you get two workers pulling the same job off the queue).
Worker threads can be quite simple: get access to the scheduler and ask for the next job. If there is one, do the work and send the results; otherwise just sleep for a period and start over.
This way, you're never going to block Apache for longer than necessary, as all you are doing is issuing requests for "do X" or "give me the results for X". You'll probably want to build in some safety features at a few points, such as handling tasks that fail, and making sure there is a timeout on the client side so it doesn't wait indefinitely.
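Purely to make that shape concrete (not this answer's actual implementation; your server could be in any language), a minimal in-memory sketch in Express, where the endpoints, the stand-in "work", and the polling interval are all placeholders:

const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

// Scheduler: a first-in, first-out queue of pending jobs, plus finished results keyed by token.
const queue = [];
const results = new Map();

// "Perform task X with data A, B and C" -> respond immediately with a token.
app.post('/tasks', (req, res) => {
    const token = crypto.randomUUID();
    queue.push({ token, data: req.body });
    res.status(202).json({ token });
});

// "Have you got the results for token Z?"
app.get('/tasks/:token', (req, res) => {
    if (results.has(req.params.token)) {
        res.json({ done: true, result: results.get(req.params.token) });
    } else {
        res.json({ done: false });
    }
});

// Worker: pull the next job, do the (stand-in) work, store the result for polling.
setInterval(() => {
    const job = queue.shift();
    if (job) {
        results.set(job.token, { echo: job.data }); // placeholder for the real long-running task
    }
}, 1000);

app.listen(3000);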
For number 2: you can get around cross-domain restrictions by using JSONP.
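A bare-bones sketch of how that looks (the host, endpoint, and callback name are placeholders): the page injects a script tag instead of an XHR, and the comet server replies with JavaScript that calls the named function.

// Client side: instead of an XHR, inject a <script> tag pointing at the comet
// server on its own port/subdomain.
function poll() {
    var script = document.createElement('script');
    script.src = 'http://comet.example.com:8080/events?callback=onEvents';
    document.getElementsByTagName('head')[0].appendChild(script);
}

// The comet server must reply with "onEvents({...});", i.e. the JSON payload
// wrapped in the callback name taken from the query string, so the browser
// executes it as an ordinary script, bypassing the XHR cross-domain restriction.
function onEvents(data) {
    // handle the pushed data, then immediately start the next long poll
    poll();
}

poll();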
Three alternatives:
Use nginx. This means you run 3 servers: nginx, Apache, and your own server.
Run your server on its own port.
Use Apache mod_proxy_http (as per your own suggestion).
I've confirmed mod_proxy_http (Apache 2.2.16) works proxying a Comet application (powered by Atmosphere 0.7.1) running in GlassFish 3.1.1.
My test app with full source is here: https://github.com/ceefour/jsfajaxpush