Socket io, Try multiple URLs to establish connection after "connect_error" - javascript

I have an app with divided code (client / server). On the client side, I'd like socket io to attempt multiple URLs (one at a time) until it connects successfully.
Here's my code:
const BAD_HOST = "http://localhost:8081";
const LOCAL_HOST = "http://localhost:8080";
const SOCKET_CONFIG = {
upgrade: false,
transports: ["websocket"],
auth: { ... }, // Trimmed for brevity
extraHeaders: { ... }, // Trimmed for brevity
};
let socket = io(BAD_HOST, SOCKET_CONFIG); // This connects fine when I use LOCAL_HOST
socket.on("connect_error", (err) => {
console.log(err);
socket = io(LOCAL_HOST, SOCKET_CONFIG); // DOES NOT WORK
});
socket.on("connect", () => { ... } // Trimmed for brevity
In short, when I try to reassign the value for socket to a new io connection, it seems to retain the old, failed connection. My browser continues to throw 'connect_error' messages from the bad url:
WebSocket connection to 'ws://localhost:8081/socket.io/?EIO=4&transport=websocket' failed:
I checked but couldn't find any official documentation on this question.

I think an approach is already discussed here:
https://stackoverflow.com/a/22722710/656708
Essentially you have an array of URLs, which in your case would be:
const socketServerURLs = ["http://localhost:8081","http://localhost:8080"];
and then iterate over them, trying to initiate a socket connection, like this:
// something along these lines
socketServerURLs.forEach((url) => {
// ...
socket.connect(url, socketConfiguration, (client) => {});
// ...
});
Then again, I don't know what a BAD_HOST entails. Assuming that you mean that a connection to that host failed, how would you know that without actually trying to connect to it?
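One detail worth adding: with the default settings the first socket keeps retrying in the background even after you create a second one, which is why the connect_error messages for the bad URL never stop. A rough sketch (not from the linked answer, the function name is illustrative) that walks the URL list one at a time and closes each failed socket before moving on:
const URLS = ["http://localhost:8081", "http://localhost:8080"];

function connectToFirstAvailable(urls, config, onConnected) {
  if (urls.length === 0) {
    console.error("Could not connect to any socket server");
    return;
  }
  const [url, ...rest] = urls;
  const socket = io(url, config);

  socket.on("connect", () => onConnected(socket));

  socket.once("connect_error", (err) => {
    console.log(`Connection to ${url} failed: ${err.message}`);
    socket.close(); // stop this socket from retrying in the background
    connectToFirstAvailable(rest, config, onConnected);
  });
}

connectToFirstAvailable(URLS, SOCKET_CONFIG, (socket) => {
  // use the connected socket here
});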

Related

Websocket is unable to reconnect after restarting the server in Javascript

I have a simple client-side script like this:
function connect() {
const { contextBridge } = require('electron');
var ws = new WebSocket('ws://localhost:3000');
ws.onerror = (error) => {
console.error(`Lost connection to server. Reason: ${error.message}`);
console.error('Attempting to reconnect...');
ws.close();
}
ws.onclose = (e) => {
setTimeout(() => {
connect();
}, 500);
}
ws.addEventListener('open', () => {
console.log('Connected to server!');
});
// Some other stuff to call functions via the browser console
const API = {
ws_isOpen: () => { return ws.readyState === ws.OPEN }
}
contextBridge.exposeInMainWorld('api', API);
function send_msg(msg) {
// Process some data...
ws.send(msg);
}
}
connect();
It works normally when the server is running and the client is trying to connect, or when the server restarts and the client connects for the first time, but not while it is already connected. What I mean is that if I suddenly shut the server down while the client is connected to it, it attempts to reconnect as usual and the success message does pop up. However, if I type window.api.ws_isOpen() in the browser console, it returns false. When I try to send a message, an error pops up saying something like WebSocket is already in CLOSING or CLOSED state. I tried declaring the ws variable with let and const instead, but it doesn't work.
Turns out the answer is really simple. When I declare the ws variable outside the connect() function and only reassign it inside the function, it works. As far as I can tell, the functions exposed through the contextBridge close over the ws variable itself, so with a single outer variable they always read the most recently created socket instead of the one from the first connect() call. It looks something like this:
var ws = null;
function connect() {
ws = new WebSocket('ws://localhost:3000');
// the exact same as above here....
}
connect();
After rebooting the server and letting it reconnect:
>> window.api.ws_isOpen()
true
I feel like I'm supposed to know how this works...
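For what it's worth, the underlying reason is ordinary closure behaviour: a closure reads the variable binding it captured, not a snapshot of the value it held. A tiny illustration (hypothetical names, not the asker's code):
let shared = 'old socket';
const readShared = () => shared;   // captures the outer binding
shared = 'new socket';             // reassignment is visible to the closure
console.log(readShared());         // "new socket"

function makeLocal() {
  let local = 'old socket';
  return () => local;              // captures THIS call's binding
}
const readLocal = makeLocal();
// Calling makeLocal() again creates a brand-new `local`; the function returned
// earlier still reads the old one. That is what happened to `ws` when it was
// declared inside connect(): the API exposed on the first call kept reading the
// first, already-closed socket.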

Javascript: How can I interact with a TCP Socket API using net.connect properly?

I'm fairly new to Javascript and am trying to wrap my head around async, promises, etc.
I have an application running a TCP API (non-HTTP) on the localhost. I'm building an Electron app to interact with this API. I need to send a single request to the API every second and retrieve a single JSON object it returns.
I'm able to do this successfully (for a while) by running something like this:
const net = require('net');
function apiCall() {
if (running) {
setTimeout(() => {
// Send the request
request = '{"id":1,"jsonrpc":"2.0","method":"getdetails"}'
socketClient = net.connect({host:'localhost', port:8888}, () => {
socketClient.write(request + '\r\n');
});
// Listen for the response
var response;
socketClient.on('data', (data) => {
response = JSON.parse(data).result;
updateUI(response);
socketClient.end();
});
// On disconnect
socketClient.on('end', () => {
console.log('Disconnected from API');
});
apiCall();
}, refreshRate)
}
}
After running this for an extended amount of time, it appears that the API server is crashing:
Error: connect ECONNREFUSED 127.0.0.1:8888
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1146)
Unfortunately, I have no control over the API server or its source code. I'd like some clarification on whether my client might be causing the API server to crash by sending requests this way.
Should I be opening and closing the connection for each request or keep it open and send requests only every second?
If I should be keeping the connection open, how can I do this, and do I need to worry about keep-alive?
It looks like every time you call apiCall you are creating a new socket client and never removing the old socket client instances. This is a memory leak, and it will cause the application to crash after running for some time.
You can keep a long-lived connection instead, like below:
const net = require("net");
const { once } = require("events");
let socketClient;
function apiCall() {
if (running) {
setTimeout(async () => {
const request = '{"id":1,"jsonrpc":"2.0","method":"getdetails"}';
// Create the socket client if it was not already created
if (!socketClient) {
socketClient = net.connect({ host: "localhost", port: 8888 });
// On disconnect
socketClient.on("end", () => {
console.log("Disconnected from API");
socketClient.destroy();
socketClient = null;
});
// Wait until connection is established
await once(socketClient, "connect");
}
// Send the request
socketClient.write(request + "\r\n");
// Listen for the response
// once() resolves with an array of the event's arguments, so destructure the data chunk
const [data] = await once(socketClient, "data");
const response = JSON.parse(data).result;
updateUI(response);
apiCall();
}, refreshRate);
}
}
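One thing to note (my addition, not part of the answer above): a net socket that fails to connect emits an 'error' event, and if nothing listens for it the Node process crashes; events.once() also rejects its promise when 'error' fires while it is waiting. A rough sketch of guarding the connection step, assuming the same host and port as above:
const net = require("net");
const { once } = require("events");

async function connectWithRetry(host, port, retries = 3, delayMs = 1000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const socket = net.connect({ host, port });
    try {
      // once() rejects if the socket emits 'error' (e.g. ECONNREFUSED) first
      await once(socket, "connect");
      return socket;
    } catch (err) {
      socket.destroy();
      console.error(`Connect attempt ${attempt} failed: ${err.message}`);
      if (attempt === retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}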

How to close socket connection for GraphQL subscription in Apollo

I have GraphQL Subscriptions on my Apollo server that I want to close after the user logs out. The initial question is whether we should close these (socket) connections on the client side or in the backend.
On the front-end, I am using Angular with Apollo Client and I handle GraphQL subscriptions by extending the Subscription class from apollo-angular. I am able to close the subscription channels with a typical takeUntil rxjs implementation:
this.userSubscription
.subscribe()
.pipe(takeUntil(this.subscriptionDestroyed$))
.subscribe(
({ data }) => {
// logic goes here
},
(error) => {
// error handling
}
);
However, this does not close the websocket on the server, which, if I'm right, will result in a subscription memory leak.
The way the Apollo Server (and express) is set up for subscriptions is as follows:
const server = new ApolloServer({
typeDefs,
resolvers,
subscriptions: {
onConnect: (connectionParams, webSocket, context) => {
console.log('on connect');
const payload = getAuthPayload(connectionParams.accessToken);
if (payload instanceof Error) {
webSocket.close();
}
return { user: payload };
},
onDisconnect: (webSocket, context) => {
console.log('on Disconnect');
}
},
context: ({ req, res, connection }) => {
if (connection) {
// set up context for subscriptions...
} else {
// set up context for Queries, Mutations...
}
}
});
When the client registers a new GraphQL subscription, I always get to see console.log('on connect'); on the server logs, but I never see console.log('on Disconnect'); unless I close the front-end application.
I haven't seen any example on how to close the websocket for subscriptions with Apollo. I mainly want to do this to complete a Logout implementation.
Am I missing something here? Thanks in advance!
I based my solution on this post.
Essentially, the way we created the subscription socket connection was using subscriptions-transport-ws:
export const webSocketClient: SubscriptionClient = new SubscriptionClient(
`${environment.WS_BASE_URL}/graphql`,
{
reconnect: true,
lazy: true,
inactivityTimeout: 3000,
connectionParams: () => ({
params: getParams()
})
}
);
As specified in the question, I wanted to unsubscribe from all channels and close the subscription socket connection before logging the user out. We do this by using the webSocketClient SubscriptionClient in the logout function and calling:
webSocketClient.unsubscribeAll();
webSocketClient.close();
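For completeness, a minimal sketch of what that logout hook can look like (the import path and function name are placeholders, not the original code):
import { webSocketClient } from './graphql.module';

export function closeSubscriptionSocket() {
  // Stop every active GraphQL subscription running over this socket
  webSocketClient.unsubscribeAll();
  // Close the underlying websocket; the server's onDisconnect hook should now fire
  webSocketClient.close();
}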

Nodejs + SocketIO + MySql Connections Not Closing Properly and Creating Database Overhead

I've been having this issue for over a couple of months now and still can't seem to figure out how to fix it. I'm seeing a high number of connections to our database, and I assume it's because our connections aren't closing properly, which leaves them hanging for long periods of time. In turn, this creates a lot of overhead and occasionally crashes our web application. Currently the application uses the promise-mysql npm package to create connections and query the database, and our web application uses socketio to request these connections to our mysql database.
I'm working with existing code that was here before me, so I did not set it up this way. This makes it a bit more confusing for me to debug this issue because I'm not that familiar with how the connections get closed after a successful / unsuccessful query.
When logging errors from our server I'm getting messages like this:
db error { Error: Connection lost: The server closed the connection.
at Protocol.end (/home/ec2-user/myapp/node_modules/mysql/lib/protocol/Protocol.js:113:13)
at Socket.<anonymous> (/home/ec2-user/myapp/node_modules/mysql/lib/Connection.js:109:28)
at Socket.emit (events.js:185:15)
at Socket.emit (domain.js:422:20)
at endReadableNT (_stream_readable.js:1106:12)
at process._tickCallback (internal/process/next_tick.js:178:19) fatal: true, code: 'PROTOCOL_CONNECTION_LOST' }
(Not sure if that has anything to do with the high number of connections I'm seeing or not)
I recently changed wait_timeout and interactive_timeout to 5000 in MySQL, which is way lower than the default of 28800, but this change stopped the application from crashing so often.
This is the code for creating the database connection:
database.js file
import mysql from 'promise-mysql';
import env from '../../../env.config.json';
const db = async (sql, descriptor, serializedParameters = []) => {
return new Promise( async (resolve, reject) => {
try {
const connection = await mysql.createConnection({
//const connection = mysql.createPool({
host: env.DB.HOST,
user: env.DB.USER,
password: env.DB.PASSWORD,
database: env.DB.NAME,
port: env.DB.PORT
})
if (connection && env.ENV === "development") {
//console.log(/*"There is a connection to the db for: ", descriptor*/);
}
let result;
if(serializedParameters.length > 0) {
result = await connection.query(sql, serializedParameters)
} else result = await connection.query(sql);
connection.end();
resolve(result);
} catch (e) {
console.log("ERROR pool.db: " + e);
reject(e);
};
});
}
export default db;
And this is an example of what the sockets look like:
sockets.js file
socket.on('updateTimeEntry', async (time, notes, TimeEntryID, callback) => {
try {
const results = await updateTimeEntry(time, notes, TimeEntryID);
callback(true);
//socket.emit("refreshJobPage", false, "");
}
catch (error) {
callback(false);
}
});
socket.on('selectDatesFromTimeEntry', (afterDate, beforeDate, callback) => {
const results = selectDatesFromTimeEntry(afterDate, beforeDate).then((results) => {
//console.log('selectLastTimeEntry: ', results);
callback(results);
})
});
And this is an example of the methods that get called from the sockets to make a connection to the database
timeEntry.js file
import db from './database';
export const updateTimeEntry = (time, notes, TimeEntryID) => {
return new Promise(async (resolve, reject) => {
try {
const updateTimeEntry = `UPDATE mytable SET PunchOut = NOW(), WorkTimeTotal = '${time}', Notes = "${notes}" WHERE TimeEntryID = '${TimeEntryID}';`
const response = await db(updateTimeEntry, "updateTimeEntry");
resolve(response[0]);
} catch (e) {
console.log("ERROR TimeEntry.updateTimeEntry: " + e);
reject(e);
}
});
};
//Gets a List for Assigned Jobs
export const selectDatesFromTimeEntry = (afterDate, beforeDate) => {
return new Promise(async (resolve, reject) => {
try {
const selectDatesFromTimeEntry = `SELECT * FROM mytable.TimeEntry WHERE PunchIn >= '${afterDate}' && PunchIn < '${beforeDate}';`
//console.log("Call: " + selectDatesFromTimeEntry);
const response = await db(selectDatesFromTimeEntry, "selectDatesFromTimeEntry");
//console.log("Response: " + response);
resolve(response);
} catch (e) {
console.log("ERROR TimeEntry.selectDatesFromTimeEntry: " + e);
reject(e);
}
});
};
I just really want to figure out why I'm noticing so much overhead with my database connections, and what I can do to resolve it. I really don't want to have to keep restarting my server each time it crashes, so hopefully I can find some answers to this. If anyone has any suggestions or knows what I can change in my code to solve this issue that would help me out a lot, thanks!
EDIT 1
These are the errors I'm getting from mysql
2020-04-30T11:12:40.214381Z 766844 [Note] Aborted connection 766844 to db: 'mydb' user: 'xxx' host: 'XXXXXX' (Got timeout reading communication packets)
2020-04-30T11:12:48.155598Z 766845 [Note] Aborted connection 766845 to db: 'mydb' user: 'xxx' host: 'XXXXXX' (Got timeout reading communication packets)
2020-04-30T11:15:53.167160Z 766848 [Note] Aborted connection 766848 to db: 'mydb' user: 'xxx' host: 'XXXXXX' (Got timeout reading communication packets)
EDIT 2
Is there a way I can see why some of these connections would be hanging or going idle?
EDIT 3
I've been looking into using a pool instead, as it seems that it is a more scalable and appropriate solution for my application. How can I achieve this with the existing code that I have?
You are opening a new connection for each and every query. Opening a connection is slow and carries a lot of overhead, and your server certainly does not allow an unlimited number of connections. The NodeJS mysql package provides a pooling mechanism, which would be a lot more efficient for you.
The goal is to reuse the connections as much as possible instead of always disposing of them right after the first query.
In your db.js, create a pool on startup and use it:
var pool = mysql.createPool({
connectionLimit: 10, // Maximum number of connections kept in the pool.
host: env.DB.HOST,
user: env.DB.USER,
password: env.DB.PASSWORD,
database: env.DB.NAME,
port: env.DB.PORT
});
To execute your query, you would then simply do this (with promise-mysql, createPool itself returns a promise, so resolve it first):
const p = await pool;
return p.query(sql, serializedParameters);
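Pulling that into the existing db.js, a rough refactor could look like this (a sketch that assumes promise-mysql, where createPool itself returns a promise that resolves to the pool):
import mysql from 'promise-mysql';
import env from '../../../env.config.json';

// Create the pool once at startup instead of a new connection per query
const poolPromise = mysql.createPool({
  connectionLimit: 10,
  host: env.DB.HOST,
  user: env.DB.USER,
  password: env.DB.PASSWORD,
  database: env.DB.NAME,
  port: env.DB.PORT
});

const db = async (sql, descriptor, serializedParameters = []) => {
  const pool = await poolPromise;
  // The pool checks connections out and back in internally, so there is
  // no connection.end() to forget about
  return serializedParameters.length > 0
    ? pool.query(sql, serializedParameters)
    : pool.query(sql);
};

export default db;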

Cycle.js - Driver - PhoenixJS (Websockets)

We currently have a VueJS application and I am looking at migrating it to Cycle.js (first major project).
I understand in Cycle.JS we have SI and SO for drivers (using adapt()); naturally a WebSocket implementation fits this as it has both read and write effects.
We use Phoenix (Elixir) as our backend, using Channels for soft real-time communication. Our client-side WS library is Phoenix as well: https://www.npmjs.com/package/phoenix.
The example on Cycle.js.org is perfect if you know how to connect.
In our case, we authenticate using a REST endpoint which returns a token (JWT) which is used to initialize the WebSocket (token parameter). This token cannot simply be passed into the driver, as the driver is initialized when the Cycle.js application runs.
An example (not actual code) of what we have now (in our VueJS application):
// Code ommited for brevity
socketHandler = new vueInstance.$phoenix.Socket(FQDN, {
_token: token
});
socketHandler.onOpen(() => VueBus.$emit('SOCKET_OPEN'));
//...... Vue component (example)
VueBus.$on('SOCKET_OPEN', function () {
let chan = VueStore.socketHandler.channel('PRIV_CHANNEL', {
_token: token
});
chan.join()
.receive('ok', () => {
//... code
})
})
The above is an example, we have a Vuex store for a global state (socket etc), centralized message bus (Vue app) for communicating between components and channel setups which come from the instantiated Phoenix Socket.
Our channel setup relies on an authenticated Socket connection which needs authentication itself to join that particular channel.
The question is, is this even possible with Cycle.js?
1. Initialize the WebSocket connection with token parameters from a REST call (JWT token response) - we have implemented this partially
2. Create channels based off that socket and token (channel streams off a driver?)
3. Access multiple channel streams (I am assuming it may work like sources.HTTP.select(CATEGORY))
We have a 1: N dependency here which I am not sure is possible with drivers.
Thank you in advance,
Update 17/12/2018
Essentially what I am trying to imitate is the following (from Cycle.js.org):
The driver takes a sink in order to perform write effects (sending messages on specific channels), but it may also return a source; this means there are two streams that are asynchronous with respect to each other. As a result, creating the socket at runtime may lead to one stream accessing the "socket" before it is instantiated; please see the comments in the snippet below.
import { adapt } from '@cycle/run/lib/adapt';

function makeSockDriver(peerId) {
  // This socket may be created at an unknown point in time
  //let socket = new Sock(peerId);
  let socket = undefined;

  // Sending is perfect
  function sockDriver(sink$) {
    sink$.addListener({
      next: listener => {
        sink$.addListener({
          next: ({ channel, data }) => {
            if (channel === 'OPEN_SOCKET' && socket === undefined) {
              token = data;
              // Initialising the socket
              socket = new phoenix.Socket(FQDN, { token });
              socket.onOpen(() => listener.next({
                channel: 'SOCKET_OPEN'
              }));
            } else {
              if (channels[channel] === undefined) {
                channels[channel] = new Channel(channel, { token });
              }
              channels[channel].join()
                .receive('ok', () => {
                  sendData(data);
                });
            }
          }
        });
      },
      error: () => {},
      complete: () => {},
    });

    const source$ = xs.create({
      start: listener => {
        sock.onReceive(function (msg) {
          // There is no guarantee that "socket" is defined here, as this may fire before the socket is actually created
          socket.on('some_event'); // undefined
          // This works, however, because the callback is deferred to a later event-loop tick, which gives the sink handler above a chance to assign "socket" first. But this is far from ideal
          setTimeout(() => socket.on('some_event'));
        });
      },
      stop: () => {},
    });

    return adapt(source$);
  }

  return sockDriver;
}
Jan van Brügge, the solution you provided is perfect (thank you), except I am having trouble with the response part. Please see the example above.
For example, what I am trying to achieve is something like this:
// login component
return {
DOM: ...
WS: xs.of({
channel: "OPEN_CHANNEL",
data: {
_token: 'Bearer 123'
}
})
}
//////////////////////////////////////
// Some authenticated component
// Intent
const intent$ = sources.WS.select(CHANNEL_NAME).startWith(null)
// Model
const model$ = intent$.map(resp => {
if (resp.some_response !== undefined) {
return {...}; // some model
}
return resp;
})
return {
DOM: model$.map(resp => {
// Use response from websocket to create UI of some sort
})
}
First of all, yes, this is possible with a driver, and my suggestion will result in a driver that feels quite like the HTTP driver.
To start, here is some rough pseudo code I can use to explain everything. I might have misunderstood parts of your question, so this might be wrong.
interface WebsocketMessage {
  channel: string;
  data: any;
}

function makeWebSocketDriver() {
  let socket = null;
  let token = null;
  let channels = {};

  return function websocketDriver(sink$: Stream<WebsocketMessage>) {
    return xs.create({
      start: listener => {
        sink$.addListener({
          next: ({ channel, data }) => {
            if (channel === 'OPEN_SOCKET' && socket === null) {
              token = data;
              socket = new phoenix.Socket(FQDN, { token });
              socket.onOpen(() => listener.next({
                channel: 'SOCKET_OPEN'
              }));
            } else {
              if (channels[channel] === undefined) {
                channels[channel] = new Channel(channel, { token });
              }
              channels[channel].join()
                .receive('ok', () => {
                  sendData(data);
                });
            }
          }
        });
      }
    });
  };
}
This would be the rough structure of such a driver. You can see that it waits for a message carrying the token and then opens the socket. It also keeps track of the open channels and sends/receives on them based on the channel name of each message. This method just requires that all channels have unique names; I am not sure how your channel protocol works in that regard or what you want in particular.
I hope this is enough to get you started. If you clarify the API of the channel send/receive and the socket, I might be able to help more. You are also always welcome to ask questions in our gitter channel.
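To connect this back to the sources.WS.select(CHANNEL_NAME) usage from the question, the driver can return an object with a select() helper instead of a bare stream, much like the HTTP driver returns an HTTPSource. A minimal sketch (names such as incoming$ are illustrative, and the sink handling is elided):
import xs from 'xstream';
import { adapt } from '@cycle/run/lib/adapt';

function makeWebSocketDriver() {
  return function websocketDriver(sink$) {
    const incoming$ = xs.create({
      start: listener => {
        // ...handle sink$ here as in the driver above, and call
        // listener.next({ channel, data }) for every message received
        // on any joined channel
      },
      stop: () => {},
    });

    return {
      // Components can then write sources.WS.select(CHANNEL_NAME)
      select: name => adapt(incoming$.filter(msg => msg.channel === name)),
    };
  };
}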
