Running Multiple Instances Per Host To Improve Performance
You may find that pimentaCHAT slows down once you have many concurrent users. When this sluggishness begins, you will likely see the pimentaCHAT node process approaching 100% CPU (even if overall host CPU load is low). This is due to the single-threaded nature of Node.js applications: a single process can’t take advantage of multiple cores natively.
While it’s possible to scale out by adding more servers (and this is recommended for high-availability purposes), you can achieve better utilization of your existing hardware by running multiple instances of the pimentaCHAT application (the Node.js/Meteor app) on your current host(s). Of course, you only want to do this if you’re already running on a multi-core machine. A reasonable rule of thumb is to run N-1 pimentaCHAT instances, where N is the number of cores.
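To apply that rule of thumb, you can check the core count directly. This is just a sketch of the heuristic above; nproc is standard on Linux coreutils:

```shell
# Count cores and derive the suggested instance count (N - 1),
# keeping at least one instance on a single-core machine.
CORES=$(nproc)
INSTANCES=$((CORES > 1 ? CORES - 1 : 1))
echo "cores=$CORES suggested_instances=$INSTANCES"
```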
Running multiple instances of pimentaCHAT on a single host requires a reverse proxy in front of your application. This tutorial assumes that you’ve already followed the tutorial for Running behind an Nginx SSL Reverse Proxy.
There are essentially just three steps:
- Enable ReplicaSet on your MongoDB installation (https://docs.mongodb.com/manual/tutorial/deploy-replica-set/)
- Start multiple instances of pimentaCHAT bound to different ports
- Update your proxy to point at all local pimentaCHAT instances
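As a sketch of the first step: on a single host you can enable a replica set by naming it in mongod.conf and initiating it once. The name rs0 below is an example; use whatever name the replicaSet parameter in your MONGO_URL expects.

```yaml
# /etc/mongod.conf -- enable a replica set (name "rs0" is an example)
replication:
  replSetName: rs0
```

After restarting mongod, run rs.initiate() once from the mongo shell to create the replica set. See the linked MongoDB tutorial for multi-member deployments.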
We’ll be working with Nginx in our examples, but it should be possible with other reverse proxies as well.
Run multiple instances of pimentaCHAT
We’ll assume that you’ve configured pimentaCHAT to run as a systemd service. Since we want to run multiple instances simultaneously, we need at least two services; the only differences between them are the service name and the port. If you don’t have a service yet, the easiest way to create one is to add a file named pimentachat.service in /usr/lib/systemd/system/:
[Unit]
Description=pimentaCHAT Server
After=syslog.target
After=network.target
[Service]
Type=simple
Restart=always
StandardOutput=syslog
SyslogIdentifier=pimentachat
User=pimentachat
Group=pimentachat
Environment=MONGO_URL=mongodb://your_mongodb:27017/your_database?replicaSet=your_replica_set_name
Environment=MONGO_OPLOG_URL=mongodb://your_mongodb1:27017/local?replicaSet=your_replica_set_name
Environment=ROOT_URL=https://your_pimentachat_domain.com
Environment=PORT=3000
WorkingDirectory=/path.to.pimentachat/pimenta.chat
ExecStart=/usr/local/bin/node /path.to.pimentachat/pimenta.chat/bundle/main.js
[Install]
WantedBy=multi-user.target
Make sure the User and Group exist and that both have read/write/execute permissions on the pimentaCHAT directory. Now you can start, stop, restart, and check the status of your pimentachat service.
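For example, after placing the unit file, the setup might look like this. The user name and home path are the placeholders from the unit above; adjust them to your install:

```shell
# Create the service account, reload systemd, and manage the service.
sudo useradd --system --home /path.to.pimentachat --shell /usr/sbin/nologin pimentachat
sudo systemctl daemon-reload
sudo systemctl start pimentachat
sudo systemctl status pimentachat
```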
If you want multiple services, create another file in /usr/lib/systemd/system and call it pimentachat@.service with the following content:
[Unit]
Description=pimentaCHAT Server
After=syslog.target
After=network.target
[Service]
Type=simple
Restart=always
StandardOutput=syslog
SyslogIdentifier=pimentachat
User=pimentachat
Group=pimentachat
Environment=MONGO_URL=mongodb://your_mongodb:27017/your_database?replicaSet=your_replica_set_name
Environment=MONGO_OPLOG_URL=mongodb://your_mongodb1:27017/local?replicaSet=your_replica_set_name
Environment=ROOT_URL=https://your_pimentachat_domain.com
Environment=PORT=%I
WorkingDirectory=/path.to.pimentachat/pimenta.chat
ExecStart=/usr/local/bin/node /path.to.pimentachat/pimenta.chat/bundle/main.js
[Install]
WantedBy=pimentachat.service
Start the additional pimentaCHAT services with
systemctl start pimentachat@3001 (or any other desired port after the @)
If you want pimentaCHAT to run at boot, just enable the services with
systemctl enable pimentachat
The other services will be enabled as well, since they are WantedBy=pimentachat.service.
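Putting that together, starting and enabling two extra instances might look like this. The ports 3001 and 3002 are examples; pick any free ports:

```shell
# Start one templated instance per extra port and enable it at boot,
# then enable the base service on port 3000.
for port in 3001 3002; do
    sudo systemctl start "pimentachat@${port}"
    sudo systemctl enable "pimentachat@${port}"
done
sudo systemctl enable pimentachat
```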
Ensure nodes can communicate
If you run pimentaCHAT instances on multiple physical nodes, or even in multiple containers, make sure they can communicate with each other.
pimentaCHAT instances use peer-to-peer connections to inform each other of events. Let’s say you type a message and tag a friend or coworker who is connected to another instance.
Two different events are fired:
1. The user (you) is typing
2. Notify user (your friend)
Each pimentaCHAT instance registers in your database the IP address it detected for itself. Other instances then use this list to discover and establish connections with each other.
If you find that instances are unable to talk to each other, you can try setting the INSTANCE_IP environment variable to the IP address the other instances can use to reach it.
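One way to set this per instance is a systemd drop-in. The path and address below are illustrative; use the port of the instance you are overriding and an address that is actually routable from the other instances:

```ini
# /etc/systemd/system/pimentachat@3001.service.d/override.conf
[Service]
Environment=INSTANCE_IP=192.168.1.10
```

Run systemctl daemon-reload and restart the instance for the override to take effect.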
Update your Nginx proxy config
Edit /etc/nginx/sites-enabled/default (or, if you run Nginx from Docker, /etc/nginx/conf.d/default.conf), and be sure to use your actual hostname in place of the sample hostname “your_hostname.com” below.
You just need to set up an upstream backend if one doesn’t already exist, and add all local pimentaCHAT instances to it. Then swap out the host listed in the proxy section for the backend you defined, with no port.
Continuing the example, we’ll update our Nginx config to point at the two pimentaCHAT instances running on ports 3000 and 3001.
# Upstreams
upstream backend {
server 127.0.0.1:3000;
server 127.0.0.1:3001;
#server 127.0.0.1:3002;
#server 127.0.0.1:3003;
# ...add more instances as needed
}
Now just replace proxy_pass http://IP:3000/; with proxy_pass http://backend;.
Updating the sample Nginx configuration would result in a config like this:
# HTTPS Server
server {
listen 443 ssl;
server_name your_hostname.com;
error_log /var/log/nginx/pimentachat.error.log;
ssl_certificate /etc/nginx/certificate.crt;
ssl_certificate_key /etc/nginx/certificate.key;
ssl_protocols TLSv1.2 TLSv1.3; # don’t use SSLv3 or early TLS, ref: POODLE
location / {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Nginx-Proxy true;
proxy_redirect off;
}
}
Now check the config and restart Nginx: nginx -t && service nginx restart
Visit https://your_hostname.com just as before the update. Ooh, so fast!
To confirm that you’re actually using both services as expected, stop one pimentachat service at a time and confirm that chat still works. Then restart that service and stop the other. Still works? Yep, you’re using both services!
Check your database
Another very important part is your database. As mentioned above, you need to make sure you are running a replica set.
This is important for a couple of reasons:
1. Database reliability. You want your data replicated, and you want another node to fail over to if something happens to your primary.
2. pimentaCHAT does what’s called oplog tailing. The oplog is enabled when you set up a replica set; MongoDB uses it to publish events so the other nodes in the replica set can keep their data up to date. pimentaCHAT taps into this to watch for database events. If someone sends a message on Instance 1 while you are connected to Instance 2, Instance 2 sees the message insert event and can show you that a new message has arrived.
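You can look at the oplog that pimentaCHAT tails by querying the local database from the mongo shell. This is read-only; the query below just shows the most recent operation:

```
// In the mongo shell: show the newest oplog entry.
use local
db.oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()
```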
Database engine
Another thing to keep in mind is the storage engine you are using. By default, MongoDB uses WiredTiger. Under some loads, WiredTiger can be very CPU- and memory-intensive. In small single-instance setups we don’t typically see issues, but when you run multiple instances of pimentaCHAT it can sometimes get a bit unruly.
Because of this, in multiple-instance deployments we recommend switching the MongoDB storage engine to mmapv1.
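Selecting the engine is a mongod.conf setting, sketched below. Note that an existing WiredTiger data directory cannot simply be reopened as mmapv1; you need to dump and restore your data (or resync a replica set member) after switching:

```yaml
# /etc/mongod.conf -- select the mmapv1 storage engine
storage:
  engine: mmapv1
```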