I want to create a basic chat system which allows the user to attach a file to a given message.
This is a problem with WebSockets because it is difficult to send a file through them unless you convert it to binary data. That won't work for me, however, because I want to be able to interact with the files on the server, and since I have multiple file types (different images, videos, and GIFs), I can't just decode the binary.
Here is my code:
Where I make the request to upload the message:
// this.state._new_message : {
//   _message_type_id: int,
//   _message: string || file,
//   _user_id: int,
//   _conversation_id: int
// } -> this is what a _new_message object looks like.
let fd = new FormData()
if (this.state._new_message._message_type_id !== 1) {
  fd.append("file", this.state._new_message._message)
}
fd.append("message", JSON.stringify(this.state._new_message))
Axios.post(API_URL + "/api/chat/send_message", fd)
  .then(res => res.data)
  .then(res => {
    this.socket.emit("message", res._message)
  })
Where I handle the request:
def send_message(self, request):
    data = request.form["message"]
    data = json.loads(data)
    if data["_message_type_id"] != 1:
        data["_message"] = "Loading Message. Please Wait."
    query = fdb.prepare(data, "_messages")  # Converts the data dict to a query string
    fdb.create(query["query_string"], query["query_params"])  # Interacts with the database
    _message = fdb.read("""SELECT * FROM _messages WHERE _id = LAST_INSERT_ID()""", (), False)
    if data["_message_type_id"] != 1:
        file = request.files["file"]
        # OpenCV magic... #
        path = "path/to/file"
        file.save(path)
        url_path = "url/path/to/file"
        fdb.update("""UPDATE _messages SET _message = %s WHERE _id = %s""", (url_path, _message["_id"]))  # Interacts with the database
        _message["_message"] = url_path  # If the message is not text, set it to the file path instead
    return jsonify({
        "_message": _message
    })
This is where I listen for the event (server):
@socket.on("message")
def handleMessage(msg):
    print(msg)
    send(msg, broadcast=True)
    return None
This is where I listen for the event (client):
this.socket.on("message", data => {
  if (data == null || data._message_type_id == undefined) return
  if (this.props.chat.messages.messages !== null)
    if (this.props.chat.messages.messages.filter(message => parseInt(message._id) === parseInt(data._id)).length > 0) return
  this.props.addMessage(data)
})
From this code, you'd expect a message to be sent over HTTP, processed, and sent back in the HTTP response; that response is then emitted to the server through the socket, and the socket broadcasts the message to all the clients listening to it. But it doesn't work that way. Sometimes it does, other times it doesn't.
One thing I can confirm is that the messages are saved to the database and file system 100% of the time. I also always get a response from the HTTP request, but when I emit the event from the client and print the message on the server, I sometimes get None or the previous message logged to the console.
So when that message is sent to the clients, my "error catching block" sees that it's either undefined or already exists, so it returns without adding it (because it shouldn't add it, of course).
Has anyone experienced this issue before? Do I have a wrong idea about something? Or is there something wrong with my code?
I figured it out.
After a bit of time analyzing my code, I noticed that I do not execute the following two queries in the same transaction:
fdb.create(query["query_string"], query["query_params"]) # Interacts with Database
_message = fdb.read("""SELECT * FROM _messages WHERE _id = LAST_INSERT_ID()""", (), False)
The fdb extension I developed is basically a high-level abstraction over SQLAlchemy with a very loose transaction management system (auto open/close). So I had to explicitly tell it not to close the transaction until it had executed both queries.
As a result, I now always get the row I just inserted, and not an insert that occurred beforehand (which is what was happening before I updated my code).
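For anyone not using my fdb wrapper, here is a minimal sketch of the same idea in plain SQLAlchemy (the connection string, column values, and placeholder parameters are just illustrations): both statements run on the same connection inside one transaction, and since MySQL's LAST_INSERT_ID() is scoped to the connection, the SELECT is guaranteed to see the row that was just inserted.
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:password@localhost/chat")  # placeholder DSN

with engine.begin() as conn:  # one connection, one transaction for both statements
    conn.execute(
        text("INSERT INTO _messages (_message_type_id, _message, _user_id, _conversation_id) "
             "VALUES (:type_id, :message, :user_id, :conversation_id)"),
        {"type_id": 2, "message": "Loading Message. Please Wait.",
         "user_id": 1, "conversation_id": 1},  # placeholder values
    )
    _message = conn.execute(
        text("SELECT * FROM _messages WHERE _id = LAST_INSERT_ID()")
    ).mappings().first()  # dict-like row, e.g. _message["_id"]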
Goes to show you that abstracting code away isn't always a good idea if you don't think of all the use cases in advance.
Related
I have a Python server side which sends updates to the client using SSE.
Here is an example of the Python code. It sends an 'action-status' and data which the JS has to handle:
async def sse_updates(request):
    loop = request.app.loop
    async with sse_response(request) as resp:
        while True:
            # Tell the client to remove a queue token when it has been removed on the server.
            if request.app['sse_requests']['update_queue_vis_remove']:
                await resp.send("update-remove")
                request.app['sse_requests']['update_queue_vis_remove'] = False
            # Tell the client to append a queue token when it has been added on the server.
            if request.app['sse_requests']['update_queue_vis_append'][0]:
                await resp.send(f"update-append {request.app['sse_requests']['update_queue_vis_append'][1]} {request.app['sse_requests']['update_queue_vis_append'][2]}")
                request.app['sse_requests']['update_queue_vis_append'][0] = False
            # Tell the client to rewrite its list of redundant tokens.
            if request.app['sse_requests']['redundant_tokens_vis'][0]:
                await resp.send('update-redtokens ' + ''.join(token + ' ' for token in request.app['sse_requests']['redundant_tokens_vis'][1]))
                request.app['sse_requests']['redundant_tokens_vis'][0] = False
            await asyncio.sleep(0.1, loop=loop)
    return resp
And the JS script which handles a response:
evtSource = new EventSource("http://" + window.location.host + "/update")
evtSource.onmessage = function(e) {
  // Data from the server arrives as "<server-event-name> <data1> <data2> <data3> ..."
  let fetched_data = e.data.split(' ');
  // First option: a token was removed on the server, so the removal has to be shown on the client side.
  if (fetched_data[0] === "update-remove")
    displayQueueRemove();
  // Second option: a token was appended on the server and should also be shown to the user.
  else if (fetched_data[0] === "update-append")
    // fetched_data[1] - the token
    // fetched_data[2] - its (the token's) position
    displayQueueAdd(fetched_data[1], parseInt(fetched_data[2]));
  // Last option: the page was refreshed, so the data in redundant_tokens should be rewritten.
  else if (fetched_data[0] === "update-redtokens") {
    fetched_data.shift();
    // Variables for wrapping each token.
    let tag;
    let text;
    // Wrap the tokens and store them in the array.
    for (let i = 0; i < fetched_data.length - 1; i++) {
      tag = document.createElement("div");
      text = document.createTextNode(fetched_data[i]);
      tag.appendChild(text);
      tag.setAttribute("class", "token-field");
      redundant_tokens[i] = tag;
    }
  }
}
The problem is that if I open two or more browser windows (sessions), only one of them catches the event and displays it. Moreover, there have been cases where I send a request from one session but the response arrives in another one.
Is there a way to fix this using SSE? (I was considering some other methods, but I'd like to try SSE.)
I think your problem is synchronizing the data (app["sse_requests"]).
Depending on how you modify the data and who needs to be notified, you might need to keep a list of clients (sessions).
For example, if all clients need to be notified of all events, then keep a list (or, even better, a set) of connected clients and create a periodic function (using create_task) in which to notify all of them.
If a client only needs to be notified of certain events, then you need to identify that client using some sort of key in the request object.
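As a rough sketch of the "notify every connected client" idea, assuming the aiohttp_sse setup from the question (the names sse_clients and notify_all_clients are mine, not part of any API): give each connection its own asyncio.Queue, keep the queues in a set on the app, and push every event to all of them instead of flipping shared flags.
import asyncio
from aiohttp import web
from aiohttp_sse import sse_response

async def sse_updates(request):
    queue = asyncio.Queue()
    request.app['sse_clients'].add(queue)          # one queue per connected session
    try:
        async with sse_response(request) as resp:
            while True:
                event = await queue.get()          # wait for the next event meant for this client
                await resp.send(event)
    finally:
        request.app['sse_clients'].discard(queue)  # forget the queue once the client disconnects
    return resp

def notify_all_clients(app, event):
    # Call this wherever the flags in app['sse_requests'] are currently being set,
    # e.g. notify_all_clients(app, "update-remove").
    for queue in app['sse_clients']:
        queue.put_nowait(event)

app = web.Application()
app['sse_clients'] = set()
app.router.add_get('/update', sse_updates)
A per-client queue also makes the second case straightforward: to target a single session, put the event only on that session's queue instead of on all of them.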
I am writing a simple app to generate content for a blog. A POST request to the Express server sends an object that contains an HTML string. While I was testing the API, I got the error Maximum call stack size exceeded once. The HTML also contains base64-encoded images, but the size of the HTML content was only about 1.5 MB when I got the error.
I tested again with content larger than 10 MB, and it worked fine. I don't know why it happened that one time.
My code to handle the post request looks something like this:
async function create(postData) {
  if (await Post.findOne({ title: postData.title })) {
    throw 'Post with same title already exists!';
  }
  // find the actual user (user id is sent in req.body)
  const user = await User.findById(postData.userId);
  if (user === null || user === undefined) {
    throw 'Cannot find user with given user id while creating the post!';
  }
  // create new post
  const newPost = postData;
  delete newPost.userId;
  const post = new Post(newPost);
  post.user = user;
  await post.save();
  // add post to user
  user.posts.push(post);
  await user.save();
  return post;
}
I thought user.posts.push(post); was taking longer to execute and blocking the call stack, but every other test succeeded with larger object sizes. I thought of using streams like JSONStream to handle large objects, but even tests with objects around 15 MB worked fine.
What am I missing here?
I just wanted to know if there's a way to count how many messages have been sent in my Discord server, so the bot can send that count in a message. I'm new to coding, so I don't know many things. Thank you in advance!
Explanation
To store the amount of messages sent in a guild, you'll have to keep track of a count somehow. Each time a message is sent, you can increment it by 1. Then, upon a user's request, you can display that number.
One easy option would be to store this "message count" for each guild inside of a JSON file. However, this would greatly impact performance. Consider a database for much better speeds and reliability.
Example Setup
Before using this system, create a guilds.json file with a blank object ({}).
Declaring the necessary variables...
const fs = require('fs'); // fs is the built-in Node.js file system module.
const guilds = require('./guilds.json'); // This path may vary.
Adding the system to the message event listener...
client.on('message', message => {
  // If the author is NOT a bot...
  if (!message.author.bot) {
    // If the guild isn't in the JSON file yet, set it up.
    if (!guilds[message.guild.id]) guilds[message.guild.id] = { messageCount: 1 };
    // Otherwise, add one to the guild's message count.
    else guilds[message.guild.id].messageCount++;
    // Write the data back to the JSON file, logging any errors to the console.
    try {
      fs.writeFileSync('./guilds.json', JSON.stringify(guilds)); // Again, path may vary.
    } catch(err) {
      console.error(err);
    }
  }
});
Using the system in a command...
// Grab the message count.
const messageCount = guilds[message.guild.id].messageCount;
// Send the message count in a message. The template literal (${}) adds an 's' if needed.
message.channel.send(`**${messageCount}** message${messageCount !== 1 ? 's' : ''} sent.`)
  .catch(console.error);
JSON files are highly prone to corruption unless you build a queue system that makes sure multiple reads and writes to the file never happen at the same time. For what you want, I would use something like SQLite, which requires minimal setup, is easy to learn, and has helper libraries such as Keyv and Sequelize that make it easier to use.
Here is a good guide on how to use SQLite in the Node.js runtime environment.
Push notifications in Chrome via GCM are driving me crazy.
I've got everything up and running. I send the push from my Python server to GCM. A service worker displays the push notification fine.
To my knowledge, there is NO data passed with push events. Sounds like it's coming soon but not available yet.
So just before the push notification shows, I call my server to get extra data for the push notification. But I have no information on the push notification to send to my server to match and return relevant data.
Everything I can think of to match a notification and user data is purely speculative. The closest thing I can find is a timestamp object on the PushEvent{} that roughly matches the successful return of the GCM call for each user.
So how are other people handling custom payload data to display Chrome push notifications?
The PushEvent{} does not seem to have any ID associated with it. I know the user that the push is for because I've previously stored that information at the time of registration.
But once I receive a push, I have no idea of knowing what the push was for.
I would like to avoid:
Trying to match based on timestamp (since notification displays are not guaranteed to be instant).
Trying to pull the 'latest' data for a user because in my case, there could be several notifications that are sent for different bits of data around the same time.
How are other sites like WhatsApp and Facebook linking custom payload data with the seemingly sterile event data that results from a push notification?
How are you doing it? Any suggestions?
Here's what my receiver code looks like:
self.addEventListener('push', function(event) {
  event.waitUntil(
    fetch("https://oapi.co/o?f=getExtraPushData&uid=" + self.userID + "&t=" + self.userToken).then(function(response) {
      if (response.status !== 200) {
        console.log('Looks like there was a problem. Status Code: ' + response.status);
        throw new Error();
      }
      return response.json().then(function(data) {
        if (data.error || !data.notification) {
          console.error('The API returned an error.', data.error);
          throw new Error();
        }
        var title = data.notification.title;
        var message = data.notification.message;
        var icon = data.notification.icon;
        return self.registration.showNotification(title, {
          body: message,
          icon: icon,
        });
      });
    }).catch(function(err) {
      console.error('Unable to retrieve data', err);
      var title = 'An error occurred';
      var message = 'We were unable to get the information for this push message';
      var icon = "https://oapi.co/occurrences_secure/img/stepNotify_1.png";
      var notificationTag = 'notification-error';
      return self.registration.showNotification(title, {
        body: message,
        icon: icon,
        tag: notificationTag
      });
    })
  );
});
I understand your problem; I've been fiddling with the same thing when I wanted to use Chrome notifications. You can use IndexedDB to save an object store and retrieve the data in the service worker.
IndexedDB is accessible from service workers. I am using it to store user information, and when the user receives a push notification I pass the stored access key to identify the user and send them the relevant information.
Here's Matt Gaunt's tutorial, which says IndexedDB is accessible to service workers:
http://www.html5rocks.com/en/tutorials/service-worker/introduction/
Here's a good indexedDB tutorial:
http://blog.vanamco.com/indexeddb-fundamentals-plus-a-indexeddb-example-tutorial/
Assuming you are still doing it the old way, that is, sending only a push trigger to your browser with no payload, it's time to move on: you can now send a payload with your push events. Since you seem familiar with GCM, it's OK to keep using it, though there is now the Web Push Protocol, which is browser-vendor independent.
Briefly, to make that work you need to encrypt your payload on your server according to the specifications found here.
There are a Node implementation by Google Chrome and a PHP implementation for that, that I know of.
You may check out the PHP Web Push here.
In the browser, you now need to provide the subscription object with the p256dh and auth keys on top of the endpoint as before.
You may check this out for more details.
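Since the asker's server is in Python, a library such as pywebpush (my suggestion; the answer above only mentions the Node and PHP implementations) takes care of the payload encryption. A minimal sketch, assuming a subscription saved at registration time that includes the endpoint plus the p256dh and auth keys, and placeholder VAPID details:
import json
from pywebpush import webpush, WebPushException

# Subscription as saved at registration time; the endpoint, p256dh and auth values
# all come from pushManager.subscribe() on the client. The values here are placeholders.
subscription_info = {
    "endpoint": "https://fcm.googleapis.com/fcm/send/REGISTRATION_ID",
    "keys": {"p256dh": "<client public key>", "auth": "<client auth secret>"},
}

try:
    webpush(
        subscription_info,
        data=json.dumps({"title": "New message", "message": "Hello", "icon": "/img/icon.png"}),
        vapid_private_key="private_key.pem",               # placeholder key file
        vapid_claims={"sub": "mailto:admin@example.com"},  # placeholder contact
    )
except WebPushException as err:
    print("Push failed:", err)
The service worker can then read the payload directly with event.data.json() instead of making an extra fetch to look the data up.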
As a side project at work, I am currently working on implementing a LiveChat Report web app. Basically, it will download all of our LiveChat data to .csv files for analysis, etc.
I have the entire thing working quite well; however, I am having a hard time wrapping my head around how to loop through requests based on the pages of results returned. For example, I want certain information from each chat. The LiveChat API only returns 25 chats per page. Unfortunately, I cannot call each page one at a time, and depending on the date range parameters, the number of pages varies each time. I want to get all of these pages into one CSV, if possible.
My request looks like:
function chatListReport() {
  document.getElementById('chat_list_submit').addEventListener('click', function(event) {
    var req = new XMLHttpRequest();
    var params = { date_from: null, date_to: null, page: null };
    params.date_from = document.getElementById('date_from').value;
    params.date_to = document.getElementById('date_to').value;
    params.page = document.getElementById('page').value;
    req.onreadystatechange = function() { // when the response is received
      if (req.readyState == 4 && req.status == 200) {
        var response = (req.responseText);
        var final = "data:text/csv;charset=utf-8," + encodeURI(response);
        var link = document.createElement('a');
        link.setAttribute('href', final);
        link.setAttribute('download', 'ChatListSurveyReport.csv');
        link.click();
      }
    };
    req.open('POST', '/chat_list_report', true); // submit via POST to the server
    req.setRequestHeader('Content-Type', 'application/json'); // set request header
    req.send(JSON.stringify(params));
    event.preventDefault();
  });
}
My server side looks like this (using the liveChatAPI NPM package):
app.post('/chat_list_report', function(req, res) {
  var params = req.body;
  api.chats.list(params, function(data) {
    var headers = 'Chat Date,Agent,PostChat Survey Rating,Comments';
    var result = (data.chats || [])
      .filter(function(chat) { return chat.type === "chat" })
      .map(function(chat) {
        var postSurvey = chat.postchat_survey || [];
        return [
          chat.ended.replace(",", ""),
          chat.agents[0].display_name,
          (postSurvey[0] || {}).value || "",
          (postSurvey[1] || {}).value || "",
        ].join(',');
      });
    result.unshift(headers);
    res.send(result.join('\n'));
  });
});
I am able to get the number of pages; it is included in the returned JSON. So, in short, is there a way to feed that page count back into the request, loop through the request that many times, and then create one CSV with all of the information? Right now I am limited to only 25 chats per request.
Thanks! Any help is greatly appreciated.
We see two possible solutions:
1:
A. The front-end sends a request to the back-end, and the back-end returns the ID of a job, which the front-end must store on its side, for example in localStorage (see the sketch after these two options).
B. Meanwhile, the back-end works on the job on its own and tries to finish it as a complete task.
C. The front-end asks the back-end about the task every few seconds, passing its ID. While the task is not complete, the back-end replies with 'in progress', and the front-end continues to enquire about the task until it is finished.
D. If the front-end enquires about the task and this time receives notification that the task is completed, the back-end returns a message like 'finished' along with the completed information.
E. The GUI should indicate:
o Waiting for response
o Response successful, with a link to download the file.
---------OR----------
2:
A. The front-end sends a request to the back-end; the back-end registers the task and, after completing it, sends an e-mail with the completed data.
B. The front-end informs the user that the task has been added to the queue and that an e-mail with the results will only be sent once it is completed.
C. This requires adding an additional input on the front-end side (an e-mail address) and handling some sort of e-mail API, e.g. https://postmarkapp.com/
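Here is a minimal sketch of option 1's job/polling endpoints. Your back-end is Express, but the shape of the protocol is the same; this sketch is in Python/Flask purely for illustration, and all of the names (jobs, /report/start, /report/status, build_report) are made up.
import threading
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
jobs = {}  # job_id -> {"status": "in-progress" | "finished", "download_url": ...}

def build_report(job_id):
    # Loop over every LiveChat page here, write the combined CSV,
    # then mark the job as finished and record where to download it.
    jobs[job_id]["status"] = "finished"
    jobs[job_id]["download_url"] = "/downloads/%s.csv" % job_id

@app.route("/report/start", methods=["POST"])
def start_report():
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "in-progress", "download_url": None}
    threading.Thread(target=build_report, args=(job_id,)).start()
    return jsonify({"job_id": job_id})  # the front-end stores this, e.g. in localStorage

@app.route("/report/status/<job_id>")
def report_status(job_id):
    # The front-end polls this every few seconds until "status" is "finished".
    return jsonify(jobs.get(job_id, {"status": "unknown"}))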
Let me know how it works,
Cheers,
Adam