Cycle.js - Driver - PhoenixJS (Websockets) - javascript

We currently have a VueJS application and I am looking at migrating it to Cycle.js (my first major Cycle.js project).
I understand that in Cycle.js a driver has a sink in and a source out (using adapt()); a WebSocket implementation naturally fits this, as it has both read and write effects.
We use Phoenix (Elixir) as our backend, with Channels for soft real-time communication. Our client-side WS library is Phoenix: https://www.npmjs.com/package/phoenix.
The example on Cycle.js.org is perfect if you already know how to connect.
In our case, we authenticate against a REST endpoint which returns a token (JWT) that is then used to initialize the WebSocket (as a token parameter). This token cannot simply be passed into the driver, because the driver is initialized when the Cycle.js application starts.
An example (not actual code) of what we have now (in our VueJS application):
// Code omitted for brevity
socketHandler = new vueInstance.$phoenix.Socket(FQDN, {
  _token: token
});
socketHandler.onOpen(() => VueBus.$emit('SOCKET_OPEN'));

// ...... Vue component (example)
VueBus.$on('SOCKET_OPEN', function () {
  let chan = VueStore.socketHandler.channel('PRIV_CHANNEL', {
    _token: token
  });
  chan.join()
    .receive('ok', () => {
      // ... code
    });
});
The above is an example: we have a Vuex store for global state (socket etc.), a centralized message bus (Vue app) for communication between components, and channel setups created from the instantiated Phoenix Socket.
Our channel setup relies on an authenticated Socket connection, and joining a particular channel requires authentication itself.
The question is: is this even possible with Cycle.js?
1. Initialize the WebSocket connection with token parameters from a REST call (JWT token response) - we have implemented this partially.
2. Create channels based off that socket and token (channel streams off a driver?).
3. Access multiple channel streams (I am assuming it may work like sources.HTTP.select(CATEGORY)).
We have a 1:N dependency here (one socket, many channels), which I am not sure is possible with drivers.
Thank you in advance,
Update (17/12/2018)
Essentially, what I am trying to imitate is the following (from Cycle.js.org):
The driver takes a sink in order to perform write effects (sending messages on specific channels), but it may also return a source. This means there are two independent asynchronous streams, so creating the socket at runtime may lead to one stream accessing the "socket" before it is instantiated; please see the comments in the snippet below.
import xs from 'xstream';
import {adapt} from '@cycle/run/lib/adapt';

function makeSockDriver(peerId) {
  // This socket may be created at an unknown point in time
  // let socket = new Sock(peerId);
  let socket = undefined;
  let token = undefined;
  const channels = {};

  function sockDriver(sink$) {
    const source$ = xs.create({
      start: listener => {
        // Sending is perfect
        sink$.addListener({
          next: ({ channel, data }) => {
            if (channel === 'OPEN_SOCKET' && socket === undefined) {
              token = data;
              // Initialising the socket
              socket = new phoenix.Socket(FQDN, { token });
              socket.onOpen(() => listener.next({
                channel: 'SOCKET_OPEN'
              }));
            } else {
              if (channels[channel] === undefined) {
                channels[channel] = socket.channel(channel, { token });
              }
              channels[channel].join()
                .receive('ok', () => {
                  sendData(data);
                });
            }
          },
          error: () => {},
          complete: () => {},
        });

        // There is no guarantee that "socket" is defined here, as this may
        // run before the socket is actually created
        socket.on('some_event'); // undefined
        // This works, however, because the callback is placed back onto the
        // event loop, which gives the sink handler above a chance to assign
        // the local variable "socket" first. But this is far from ideal
        setTimeout(() => socket.on('some_event'));
      },
      stop: () => {},
    });

    return adapt(source$);
  }

  return sockDriver;
}
Jan van Brügge, the solution you provided is perfect (thank you), except I am having trouble with the response part. Please see the example above.
For example, what I am trying to achieve is something like this:
// login component
return {
  DOM: ...,
  WS: xs.of({
    channel: "OPEN_CHANNEL",
    data: {
      _token: 'Bearer 123'
    }
  })
};

//////////////////////////////////////
// Some authenticated component

// Intent
const intent$ = sources.WS.select(CHANNEL_NAME).startWith(null);

// Model
const model$ = intent$.map(resp => {
  if (resp.some_response !== undefined) {
    return {...}; // some model
  }
  return resp;
});

return {
  DOM: model$.map(resp => {
    // Use the response from the websocket to create UI of some sort
  })
};

First of all: yes, this is possible with a driver, and my suggestion will result in a driver that feels quite like the HTTP driver.
To have some rough pseudocode that I can explain against (I might have misunderstood parts of your question, so this might be wrong):
interface WebsocketMessage {
  channel: string;
  data: any;
}

function makeWebSocketDriver() {
  let socket = null;
  let token = null;
  let channels = {};

  return function websocketDriver(sink$: Stream<WebsocketMessage>) {
    return xs.create({
      start: listener => {
        sink$.addListener({
          next: ({ channel, data }) => {
            if (channel === 'OPEN_SOCKET' && socket === null) {
              token = data;
              socket = new phoenix.Socket(FQDN, { token });
              socket.onOpen(() => listener.next({
                channel: 'SOCKET_OPEN'
              }));
            } else {
              if (channels[channel] === undefined) {
                channels[channel] = socket.channel(channel, { token });
              }
              channels[channel].join()
                .receive('ok', () => {
                  sendData(data);
                });
            }
          }
        });
      }
    });
  };
}
This would be the rough structure of such a driver. You see, it waits for a message carrying the token and only then opens the socket. It also keeps track of the open channels and sends/receives on them based on the channel name of the message. This method just requires that all channels have unique names; I am not sure how your channel protocol works in that regard or what you want in particular.
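To cover the sources.WS.select(CHANNEL_NAME) usage from your example, one possible extension of the sketch above (the select helper is my own naming, not an established API) is to forward every incoming payload as a { channel, data } pair and return an object wrapping the stream instead of the bare stream:

import xs from 'xstream';
import {adapt} from '@cycle/run/lib/adapt';

function makeWebSocketDriver() {
  return function websocketDriver(sink$) {
    const message$ = xs.create({
      start: listener => {
        sink$.addListener({
          next: ({ channel, data }) => {
            // Open the socket / join channels exactly as in the sketch
            // above, and push everything received to the listener, e.g.:
            // channels[channel].on('new_msg', payload =>
            //   listener.next({ channel, data: payload }));
          },
          error: err => listener.error(err),
          complete: () => {},
        });
      },
      stop: () => {},
    });

    return {
      // Mirrors sources.HTTP.select(category): narrow the shared message
      // stream down to one channel
      select: name =>
        adapt(message$.filter(msg => msg.channel === name).map(msg => msg.data)),
    };
  };
}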
I hope this is enough to get you started. If you clarify the API of the channel send/receive and the socket, I might be able to help more. You are also always welcome to ask questions in our gitter channel

Related

Socket io, Try multiple URLs to establish connection after "connect_error"

I have an app with divided code (client/server). On the client side, I'd like Socket.IO to attempt multiple URLs (one at a time) until it connects successfully.
Here's my code:
const BAD_HOST = "http://localhost:8081";
const LOCAL_HOST = "http://localhost:8080";
const SOCKET_CONFIG = {
  upgrade: false,
  transports: ["websocket"],
  auth: { ... }, // Trimmed for brevity
  extraHeaders: { ... }, // Trimmed for brevity
};

let socket = io(BAD_HOST, SOCKET_CONFIG); // This connects fine when I use LOCAL_HOST

socket.on("connect_error", (err) => {
  console.log(err);
  socket = io(LOCAL_HOST, SOCKET_CONFIG); // DOES NOT WORK
});

socket.on("connect", () => { ... }); // Trimmed for brevity
In short, when I try to reassign the value for socket to a new io connection, it seems to retain the old, failed connection. My browser continues to throw 'connect_error' messages from the bad url:
WebSocket connection to 'ws://localhost:8081/socket.io/?EIO=4&transport=websocket' failed:
I checked but couldn't find any official documentation on this question.
I think an approach is already discussed here:
https://stackoverflow.com/a/22722710/656708
Essentially you have an array of URLs, which in your case would be:
const socketServerURLs = ["http://localhost:8081","http://localhost:8080"];
and then iterate over them, trying to initiate a socket connection, like this:
// something along these lines
socketServerURLs.forEach((url) => {
  // ...
  socket.connect(url, socketConfiguration, (client) => {});
  // ...
});
Then again, I don't know what a BAD_HOST entails. Assuming you mean a host that fails to connect, how would you know that without actually trying to connect to it?
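Since the question asks for trying the URLs one at a time, a sequential sketch along these lines might work (assuming socket.io-client v4 and the BAD_HOST/LOCAL_HOST/SOCKET_CONFIG constants from the question; the tryConnect helper is mine):

import { io } from "socket.io-client";

const urls = [BAD_HOST, LOCAL_HOST];

function tryConnect(index = 0) {
  if (index >= urls.length) {
    console.error("All hosts failed");
    return;
  }
  const socket = io(urls[index], SOCKET_CONFIG);
  socket.once("connect", () => {
    console.log("Connected to", urls[index]);
  });
  socket.once("connect_error", (err) => {
    console.log(err);
    socket.close(); // stop the failed manager from retrying forever
    tryConnect(index + 1); // move on to the next URL
  });
}

tryConnect();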

Web USB: An operation that changes interface state is in progress error

Suddenly I get an error on a web usb device that connects with my Angular app.
The error reads: An operation that changes interface state is in progress.
Edit: more code:
Selecting device & opening connection:
getDeviceSelector() {
  return navigator.usb
    .requestDevice(this.options)
    .then((selectedDevice) => {
      this.device = selectedDevice;
      return this.device.open(); // Begin a session.
    });
}
Communicating with the device (Raspberry Pi)
Start communication with the web-usb on the Pi:
connectedDevice
  .then(() => this.device.selectConfiguration(1)) // Select configuration #1 for the device.
  .then(() => this.device.claimInterface(0)) // Request exclusive control over interface #0.
  .then(() => {
    // Read data every 40 ms
    this.interval = interval(40).subscribe(async () => {
      await this.read();
    });
  });
Handle the reading of all the data that is being sent:
async read() {
  const result = await this.readOneLine();
  this.readCallbacks.forEach((callback) => {
    callback(result);
  });
}

readOneLine() {
  return this.device.transferIn(1, 8 * 1024).then(
    (result) => {
      return new Uint8Array(result.data.buffer);
    },
    (error) => {
      console.error(error);
    }
  );
}
From there on, we use the readCallbacks to pass the data we got from the device to a custom event that is then fired.
The error might be related to the new Chrome update, but I cannot find any changes to navigator.usb or other USB-related mechanics.
New info will be added as soon as I have it!
In my case, the problem only occurred on some Windows laptops. I had to (1) safely remove the USB device and (2) plug it into another port. After that, the connection was normal, just as before (also on the original USB port).
It suddenly happened on different computers at the same time, but we were unable to pin down the exact environment that caused the issue.
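If the problem is not hardware-related, the same error can also show up when state-changing calls such as open(), selectConfiguration() or claimInterface() are issued again while a previous one is still pending (for example when the device selector runs twice). A hedged sketch that serializes the setup chain (the connect wrapper is my own naming):

let setupPromise = null;

function connect(device) {
  // Reuse any in-flight setup so state-changing calls never overlap
  if (!setupPromise) {
    setupPromise = device.open()
      .then(() => device.selectConfiguration(1))
      .then(() => device.claimInterface(0))
      .catch((err) => {
        setupPromise = null; // allow a retry after a failure
        throw err;
      });
  }
  return setupPromise;
}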

How to close socket connection for GraphQL subscription in Apollo

I have GraphQL Subscriptions on my Apollo server that I want to close after the user logs out. The initial question is whether we should close these (socket) connections on the client side or in the backend.
On the front-end, I am using Angular with Apollo Client and I handle GraphQL subscriptions by extending the Subscription class from apollo-angular. I am able to close the subscription channels with a typical takeUntil rxjs implementation:
this.userSubscription
  .subscribe()
  .pipe(takeUntil(this.subscriptionDestroyed$))
  .subscribe(
    ({ data }) => {
      // logic goes here
    },
    (error) => {
      // error handling
    }
  );
However, this does not close the websocket on the server, which, if I'm right, will result in a subscription memory leak.
The way the Apollo Server (and express) is set up for subscriptions is as follows:
const server = new ApolloServer({
  typeDefs,
  resolvers,
  subscriptions: {
    onConnect: (connectionParams, webSocket, context) => {
      console.log('on connect');
      const payload = getAuthPayload(connectionParams.accessToken);
      if (payload instanceof Error) {
        webSocket.close();
      }
      return { user: payload };
    },
    onDisconnect: (webSocket, context) => {
      console.log('on Disconnect');
    }
  },
  context: ({ req, res, connection }) => {
    if (connection) {
      // set up context for subscriptions...
    } else {
      // set up context for Queries, Mutations...
    }
  }
});
When the client registers a new GraphQL subscription, I always see console.log('on connect'); in the server logs, but I never see console.log('on Disconnect'); unless I close the front-end application.
I haven't seen any example on how to close the websocket for subscriptions with Apollo. I mainly want to do this to complete a Logout implementation.
Am I missing something here? Thanks in advance!
I based my solution on this post.
Essentially, the way we created the subscription over sockets was using subscriptions-transport-ws:
export const webSocketClient: SubscriptionClient = new SubscriptionClient(
  `${environment.WS_BASE_URL}/graphql`,
  {
    reconnect: true,
    lazy: true,
    inactivityTimeout: 3000,
    connectionParams: () => ({
      params: getParams()
    })
  }
);
As specified in the question, I wanted to unsubscribe all channels and close the subscription socket connection before logging the user out. We do this by using the webSocketClient SubscriptionClient in the logout function and calling:
webSocketClient.unsubscribeAll();
webSocketClient.close();
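Put together, a minimal sketch of such a logout handler (clearSession is a hypothetical placeholder; webSocketClient is the client exported above):

function logout() {
  // Tear down all active GraphQL subscriptions first...
  webSocketClient.unsubscribeAll();
  // ...then close the underlying websocket so the server's onDisconnect fires
  webSocketClient.close();
  // Finally clear local auth state (hypothetical helper)
  clearSession();
}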

How to make sure AMQP message is not lost in case of error in subscriber with rhea?

So I have designed a basic publisher-subscriber model using rhea in JS: it takes an API request for saving data in a DB and then publishes it to a queue.
From there a subscriber (code added below) picks it up and tries to save it in a DB. Now, my issue is that this DB instance goes through a lot of changes during the development period, which can result in errors during insert operations.
So when the subscriber tries to push to this DB and hits an error, the data is lost, since it was already dequeued. I'm a total novice in JS, so is there a way to make sure that a message isn't dequeued unless we are sure it was saved properly, without having to publish it again on error?
The code for my subscriber:
const Receiver = require("rhea");

const config = {
  port: 5672,
  host: "localhost"
};

let receiveClient;

function connectReceiver() {
  const receiverConnection = Receiver.connect(config);
  const receiver = receiverConnection.open_receiver("send_message");

  receiver.on("connection_open", function () {
    console.log("Subscriber connected through AMQP");
  });
  receiver.on("error", function (err) {
    console.log("Error with Subscriber:", err);
  });
  receiver.on("message", function (element) {
    if (element.message.body === 'detach') {
      element.receiver.detach();
    } else if (element.message.body === 'close') {
      element.receiver.close();
    } else {
      // save in DB
    }
  });

  receiveClient = receiver;
  return receiveClient;
}
You can use code like this to explicitly accept the message or release it back to the sender:
try {
  save_in_db(event.message);
  event.delivery.accept();
} catch {
  event.delivery.release();
}
See the delivery docs for more info.
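Note that rhea accepts messages automatically by default, so for manual disposition the receiver has to be opened with autoaccept disabled. A sketch wired into the subscriber above (saveInDb stands in for your actual DB call):

// Open the receiver with automatic accept turned off
const receiver = receiverConnection.open_receiver({
  source: "send_message",
  autoaccept: false
});

receiver.on("message", function (element) {
  try {
    saveInDb(element.message.body); // hypothetical synchronous DB call
    element.delivery.accept();      // settle (dequeue) only after a successful save
  } catch (err) {
    console.log("Save failed, releasing message:", err);
    element.delivery.release();     // make the message available for redelivery
  }
});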

Responding to fetch request via cache xor indexedDB

I am trying to have a service worker respond to fetch events depending on the type of request made. For static resources I use cache:
// TODO: make cache update when item found
const _fetchOrCache = (cache, request) => {
  return cache.match(request).then(cacheResponse => {
    // found in cache
    if (cacheResponse) {
      return cacheResponse;
    }
    // has to add to cache
    return fetch(request)
      .then(fetchResponse => {
        // needs cloning since a response works only once
        cache.put(request, fetchResponse.clone());
        return fetchResponse;
      });
  }).catch(e => { console.error(e); });
};
For API responses I have already wired up IndexedDB with Jake Archibald's IndexedDB Promised to return content like this:
const fetchAllItems = () => {
  return self.idbPromise
    .then(conn => conn.transaction(self.itemDB, 'readonly'))
    .then(tx => tx.objectStore(self.itemDB))
    .then(store => store.getAll())
    .then(storeContents => JSON.stringify(storeContents));
};
When I call everything in the service worker, the cache part works, but the IndexedDB part fails miserably, throwing an error that it cannot reach the API URL:
self.addEventListener("fetch", event => {
  // analyzes request url and constructs a resource object
  const resource = getResourceInfo(event.request.url);

  // handle all cachable requests
  if (resource.type == "other") {
    event.respondWith(
      caches.open(self.cache)
        .then(cache => _fetchOrCache(cache, event.request))
    );
  }

  // handle api requests
  if (resource.type == "api") {
    event.respondWith(
      new Response(fetchAllItems())
    );
  }
});
My questions would be as follows:
1.) Is there any point in handling and storing fetch requests separately like this?
2.) How do I make the IndexedDB part work?
Good catch on using Jake Archibald's promise-based idb. There are many ways to install his idb. The quickest: download the idb.js file somewhere (this is the library), then import it on the first line of the service worker like so:
importScripts('./js/idb.js');
// .....

// SW installation event
self.addEventListener('install', function (event) {
  console.log("[ServiceWorker] Installed");
});

// SW activation event (where we create the idb)
self.addEventListener('activate', function (event) {
  console.log("[ServiceWorker] Activating");
  createIndexedDB();
});

// .....
// Intercept fetch events and save data in IDB
// .....

// IndexedDB
function createIndexedDB() {
  self.indexedDB = self.indexedDB || self.mozIndexedDB || self.webkitIndexedDB || self.msIndexedDB;
  if (!(self.indexedDB)) { console.log('IDB not supported'); return null; }
  return idb.open('mydb', 1, function (upgradeDb) {
    if (!upgradeDb.objectStoreNames.contains('items')) {
      upgradeDb.createObjectStore('items', { keyPath: 'id' });
    }
  });
}
Judging by the code you pasted above to retrieve IDB data, it is unclear to me what exactly idbPromise is... Are you sure you declared this variable?
You should have something like this:
importScripts('./js/idb.js');
//...
//createIdb and store
//...
var idbPromise = idb.open('mydb');
//and after that you have your code like idbPromise.then().then()...
So you create the IDB and the tables during the SW activation. After that you intercept the fetch events and start using the indexeddb like in the tutorials you've seen.
Good luck
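One more detail, since question 2 was how to make the IndexedDB part work: in the fetch handler from the question, new Response(fetchAllItems()) passes a Promise as the response body, which is why the api branch fails. respondWith accepts a promise of a Response, so building the Response once the data is ready should behave as expected (the content type header is an assumption on my part):

self.addEventListener("fetch", event => {
  const resource = getResourceInfo(event.request.url);
  if (resource.type == "api") {
    event.respondWith(
      fetchAllItems().then(json =>
        new Response(json, {
          headers: { "Content-Type": "application/json" }
        })
      )
    );
  }
});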
