I started learning Meteor recently and I am developing an application similar to a bus timetable app. I have several screens (about 30 devices with browsers) that all connect to one host machine running my application from cmd (the app runs offline on Windows 10 with a local MongoDB, no online server). Each set of screens displays a relevant bus and its timetable, extracted from a MongoDB collection.
The app used to work just fine, but since I added 10 more screens, the clients seem to get disconnected from the server after a couple of minutes. I have Meteor calls that display the server time on screen; this time shows up as undefined when the server drops, and I can't see any of my collections' documents via Meteor Toys (I see the collections, but the document count is 0). I also can't log in to my admin page (a basic user interface I made on top of a simple MongoDB Accounts collection).
It's worth mentioning that the app as a whole does not crash; I can still navigate to my pages, and my layout (HTML and CSS) still shows up. It's only the server-related functionality that stops.
I assume it's a traffic issue, because when I disconnect all of the screens and run the app, it works just fine, and when I reconnect them one by one, it also seems to work fine.
I get no errors in the client console, and in the server cmd window the application does not crash; it stays up with no errors whatsoever.
Also, I log Meteor.status() to my console every second, and I get this:
{status: "connected", connected: true, retryCount: 0}
which means technically the server is not offline?
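For reference, the check I'm running is roughly this (a sketch from memory, not my exact code):

// client: log the DDP connection status once a second
Meteor.setInterval(function () {
  console.log(Meteor.status());
}, 1000);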
I'm very lost, what can I do to rectify this?
Update:
I noticed that I had several ServerSessions that run every second to get the time from the server. I changed them to normal Sessions and I'm now facing a different issue; I think there is a memory leak somewhere. When the app freezes, my RAM usage skyrockets to 8 GB (I have --max-old-space-size set to 8912, so that shouldn't be the problem). Normal usage while the app runs is around 600-900 MB.
Then I get FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
=> Exited with code: 3
and "Server Unreachable" in the browser with a 503 (Service Unavailable).
The server then restarts, and it does the same thing. Help :(
I experienced the same behavior with an app that loads lots of data and does heavy computations. When the Node.js server's CPU is under load, the app loses its connection to its Node back end, and when the database server's CPU is under load, the app loses its connection to the database.
Here is some advice:
Start with this reading if you haven't already: https://bulletproofmeteor.com/ It was written some years ago, but there are still a lot of good practices inside.
Monitor your app on your machine and try to understand what the bottleneck is: data transfers, disk reads, CPU-intensive tasks?
Look into your publication/subscription model. You may be sending too much data to the client; maybe you can limit the amount of data each screen actually needs (see the sketch after this list).
If your setup is critical, don't run it on a single Windows machine using the meteor command! Start by building a proper Node.js app using meteor build: https://guide.meteor.com/build-tool.html
Use nginx to handle incoming connections; it is robust and scalable.
Use a separate host for the database, and maybe a Mongo replica set of 3 machines (this may solve your issue by giving you more database availability, but I haven't tested that yet).
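For example, a publication that sends each screen only its own bus and only the fields it actually renders might look like this (the collection and field names here are assumptions, adjust them to your schema):

// server: publish only one bus's timetable rows, and only the fields the screen renders
// (Timetables, busId, departureTime, destination are assumed names, not from the original app)
Meteor.publish('timetableForScreen', function (busId) {
  check(busId, String); // requires the standard check package
  return Timetables.find(
    { busId: busId },
    { fields: { busId: 1, departureTime: 1, destination: 1 }, limit: 50 }
  );
});

// client: each screen subscribes only to the bus it displays
// (myBusId is a placeholder for however you identify the screen's bus)
Meteor.subscribe('timetableForScreen', myBusId);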
Edit regarding the build: After building the app, you will obtain a folder. Here are the steps to launch the app in production mode:
cd into your meteor app folder
meteor build targetFolder --directory
cd targetFolder/bundle
(cd programs/server && npm install)
meteor node main.js # I use meteor node to ensure version compatibility; also set MONGO_URL, ROOT_URL and PORT in your environment before starting
Related
I have deployed a simple JavaScript memory game to Heroku with lite-server.
https://happy-birthday-eline.herokuapp.com
To my surprise, when a user turns a card, all other users see the card turn too. I can't figure out why. I thought client-side actions were limited to the client and could in no way update the server or impact other users.
How do I prevent a user action (click on card) from propagating to all other users?
Thanks
Answer: I thought I could just deploy using lite-server (rather than Express), but lite-server runs BrowserSync with syncing enabled, which is why user actions were impacting all other users. The (obvious) solution was to use Express on Heroku, not lite-server!
It's caused by BrowserSync. Looks like you deployed a development version of your code and BrowserSync is connected.
In order to avoid it, you have to deploy a production version of your application.
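For reference, a minimal production setup can be as simple as serving the built static files with Express (a sketch; the public folder name and fallback port are assumptions):

// server.js - minimal static file server for production
const express = require('express');
const path = require('path');

const app = express();

// serve the game's static assets instead of running a dev tool like lite-server/BrowserSync
app.use(express.static(path.join(__dirname, 'public')));

// Heroku injects the port to listen on via the PORT environment variable
const port = process.env.PORT || 3000;
app.listen(port, function () {
  console.log('Listening on port ' + port);
});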
For some reason, on all my Bluemix services I intermittently get the error "Cannot GET /pathname" from my Node.js Express services (it works about 1/3 of the time). There is no error or logging shown in the application when this happens (although I assume that response is coming from Express).
Any ideas? I have no idea how to progress here. The server has ample resources (memory + CPU).
I've seen this happen before when the user accidentally has 2 different applications mapped to the same route/URL. The load balancer then hits a different application each time.
Try changing the route to something else (e.g. myappname2.mybluemix.net) and try to recreate the problem.
If that seems to fix it, log in to the UI and confirm that you do not have duplicate applications and that all applications have a unique route.
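If it helps narrow things down, you can also log every incoming request before your route handlers, so you can see whether a request that ends in "Cannot GET /pathname" actually reached this instance at all (a minimal Express sketch, assuming a standard app object):

// register this before any routes so every request is logged,
// including ones that would otherwise just return "Cannot GET /pathname"
app.use(function (req, res, next) {
  console.log(new Date().toISOString(), req.method, req.originalUrl);
  next();
});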
I have a VPS that runs Apache and serves a number of WordPress sites. I also managed to get a Node.js server running on this same VPS account using the MEAN stack. Things worked fine with this setup.
I decided to add a second Node.js/MEAN app to this same server, running on a separate port, and everything is operating fine - except I've noticed a significant impact on page load performance across all sites once I got this third server running.
I found this question and this question here on SO, but neither of them addresses performance. So my question is:
Is it possible/practical to run two separate/unique domains on the same Node.js server app? Or is that going to create more problems than it solves? (Note: I don't mean the same machine, I mean the same Node.js instance.)
If not, how can I improve performance? Is upgrading my VPS the only option?
So you can indeed run multiple apps on the same port/process. This can be done using the express-vhost module if you need to separate by domain. You can also use the cluster module to run a pool of processes that share resources (though they end up being the same 'app', you could combine that with the vhost approach to have a pool of processes serve a number of domains).
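A rough sketch of the domain-based separation (here using the expressjs/vhost middleware rather than express-vhost; the domain names and port are placeholders):

// one Node.js process, one port, two "apps" separated by hostname
const express = require('express');
const vhost = require('vhost');

const siteOne = express();
siteOne.get('/', function (req, res) { res.send('site one'); });

const siteTwo = express();
siteTwo.get('/', function (req, res) { res.send('site two'); });

const root = express();
root.use(vhost('one.example.com', siteOne));
root.use(vhost('two.example.com', siteTwo));

root.listen(3000);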
That said, I don't think you're actually going to get the results you want. The overhead of a Node.js process is pretty trivial compared to most (e.g. a JVM); the costs come mostly from whatever your custom code is doing. I think what's more likely happening is that whatever size server you've chosen for your VPS is just not enough to run everything you're throwing at it, or the Node apps you've written are hogging the event loop with long-running operations. It could also be the case that Apache is the hog; you'll need to do more diagnostics to get to the root of it.
I'm learning Ruby on Rails to build a real-time web app with WebSockets on Heroku, but I can't figure out why the WebSocket connection fails when running on a Unicorn server. I have my Rails app configured to run on Unicorn both locally and on Heroku using a Procfile...
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
...which I start locally with $ foreman start. The failure occurs when creating the WebSocket connection on the client in JavaScript...
var dispatcher = new WebSocketRails('0.0.0.0:3000/websocket'); //I update the URL before pushing to Heroku
...with the following error in the Chrome Javascript console, 'websocket connection to ws://0.0.0.0:3000/websocket' failed. Connection closed before receiving a handshake response.
...and when I run it on Unicorn on Heroku, I get a similar error in the Chrome Javascript console, 'websocket connection to ws://myapp.herokuapp.com/websocket' failed. Error during websocket handshake. Unexpected response code: 500.
The stack trace in the Heroku logs says, RuntimeError (eventmachine not initialized: evma_install_oneshot_timer):
What's strange is that it works fine when I run it locally on a Thin server using the command $rails s.
I've spent the last five hours researching this problem online and haven't found the solution. Any ideas for fixing this, or even ideas for getting more information out of my tools, would be greatly appreciated!
UPDATE: I found it strange that websocket-rails only supported EventMachine-based web servers, while faye-websocket, which websocket-rails is based upon, supports many multithread-capable web servers.
After further investigation and testing, I realised that my earlier assumption had been wrong. Instead of requiring an EventMachine-based web server, websocket-rails appears to require a multithread-capable web server (so not Unicorn) which supports rack.hijack. (Puma meets these criteria while being comparable in performance to Unicorn.)
With this assumption, I tried solving the EventMachine not initialized error using the most direct method, namely, initializing EventMachine, by inserting the following code in an initializer config/initializers/eventmachine.rb:
# start the EventMachine reactor in a background thread unless it is already running
Thread.new { EventMachine.run } unless EventMachine.reactor_running? && EventMachine.reactor_thread.alive?
and.... success!
I have been able to get Websocket Rails working on my local server over a single port using a non-EventMachine-based server without Standalone Server Mode. (Rails 4.1.6 on ruby 2.1.3p242)
This should be applicable on Heroku as long as you have no restriction in web server choice.
WARNING: This is not an officially supported configuration for websocket-rails. Care must be taken when using multithreaded web servers such as Puma, as your code and that of your dependencies must be thread-safe. A (temporary?) workaround is to limit the maximum threads per worker to one and increase the number of workers, achieving a system similar to Unicorn.
Out of curiosity, I tried Unicorn again after fixing the above issue:
The first websocket connection was received by the web server (Started GET "/websocket" for ...), but the state of the websocket client was stuck on "connecting", seeming to hang indefinitely.
A second connection resulted in HTTP error code 500, along with app error: deadlock; recursive locking (ThreadError) showing up in the server console output.
By the (potentially dangerous) action of removing Rack::Lock, the deadlock error can be resolved, but connections still hang, even though the server console shows that the connections were accepted.
Unsurprisingly, this fails. From the error message, I think Unicorn is incompatible due to reasons related to its network architecture (threading/concurrency). But then again, it might just be some bug in this particular Rack middleware...
Does anyone know the specific technical reason for why Unicorn is incompatible?
ORIGINAL ANSWER:
Have you checked the ports for both the web server and the WebSocket server and their debug logs? Those error messages sound like they are connecting to something other than a WebSocket server.
A key difference between the two web servers you have used seems to be that one (Thin) is EventMachine-based and one (Unicorn) is not. The Websocket Rails project wiki states that Standalone Server Mode must be used for non-EventMachine-based web servers such as Unicorn (which would require an even more complex setup on Heroku, as it needs a Redis server). The error message RuntimeError (EventMachine not initialized: evma_install_oneshot_timer): suggests that standalone mode was not used.
As far as I know, Heroku only exposes one internal port (provided as an environment variable) externally as port 80. A WebSocket server normally requires its own socket address (port number), which can be worked around by reverse-proxying the WebSocket server. Websocket-Rails appears to get around this limitation by hooking into an existing EventMachine-based web server (which Unicorn does not provide) or by hijacking the Rack socket (rack.hijack).
I'm new to Amazon AWS and want to create a cloud-based REST API in Node.js.
Usually I develop the program alongside its tests: I write some tests, then write the code that makes those tests pass. So in a typical programming session, I may run the tests or the app tens of times.
When I do this locally it is easy and quick. But what if I want to do the whole process on the Amazon cloud? What does this code-test-code cycle look like? Should I upload my code to AWS every time I make a change, and then run it against some server address?
I read somewhere in the documentation that when I run a task for a few minutes (for example 15 minutes), Amazon rounds it up to 1 hour. So if in a typical development session I run my program 100 times in an hour, do I end up paying for 100 hours? If so, what is the solution for avoiding these huge costs?
When I do this locally it is easy and quick.
You can continue to do so. Deploying in the cloud does not require developing in the cloud.
But what if I want to do the whole process on Amazon cloud?
When I do this, I usually edit the code locally, then rsync my git directory up to the server and restart the service. It's super quick.
Most people develop locally, and occasionally test on a real AWS server to make sure they haven't broken any assumptions (i.e. forgot something at boot/install time).
There are tools like Vagrant that can help you keep your server installation separate from your development environment.
As you grow (and you've got more money), you'll want to spin up staging/QA servers. These don't have to be run all the time, just when changes happen. (i.e. have Jenkins spin them up.) But it's not worth automating everything from the start. Make sure you're building the right thing (what people want) before you build it right (full automation, etc.)
So if in a typical development session I run my program 100 times in an hour, do I end up paying for 100 hours?
Only if you launch a new instance every time. Generally, you want to continue to edit-upload-run on the same server until it works, then occasionally kill and relaunch that server to make sure that you haven't screwed up the boot process.