As a side project at work, I am currently working on implementing a LiveChat Report web app. Basically, it will download all of our LiveChat data to .csv files for analysis, etc.
I have the entire thing working quite well; however, I am having a hard time wrapping my head around how to loop through requests based on the pages of results returned. For example, I want certain information from each chat, but the LiveChat API only returns 25 chats per page. Unfortunately, I can't just request each page by hand, because the number of pages varies with the date range parameters. I want to get all of these pages into one CSV, if possible.
My request looks like:
function chatListReport() {
    document.getElementById('chat_list_submit').addEventListener('click', function(event) {
        var req = new XMLHttpRequest();
        var params = {date_from: null, date_to: null, page: null};
        params.date_from = document.getElementById('date_from').value;
        params.date_to = document.getElementById('date_to').value;
        params.page = document.getElementById('page').value;
        req.onreadystatechange = function() { // when response received
            if (req.readyState == 4 && req.status == 200) {
                var response = req.responseText;
                var final = "data:text/csv;charset=utf-8," + encodeURI(response);
                var link = document.createElement('a');
                link.setAttribute('href', final);
                link.setAttribute('download', 'ChatListSurveyReport.csv');
                link.click();
            }
        };
        req.open('POST', '/chat_list_report', true); // submit update via POST to server
        req.setRequestHeader('Content-Type', 'application/json'); // set request header
        req.send(JSON.stringify(params));
        event.preventDefault();
    });
}
My server side looks like this (using NPM liveChatAPI):
app.post('/chat_list_report', function(req, res) {
    var params = req.body;
    api.chats.list(params, function(data) {
        var headers = 'Chat Date,Agent,PostChat Survey Rating,Comments';
        var result = (data.chats || [])
            .filter(function(chat) { return chat.type === "chat"; })
            .map(function(chat) {
                var postSurvey = chat.postchat_survey || [];
                return [
                    chat.ended.replace(",", ""),
                    chat.agents[0].display_name,
                    (postSurvey[0] || {}).value || "",
                    (postSurvey[1] || {}).value || ""
                ].join(',');
            });
        result.unshift(headers);
        res.send(result.join('\n'));
    });
});
I am able to get the number of pages; that is included in the returned JSON. So, in short, is there a way to feed that page count back into the request, loop through the request that many times, and then create one CSV with all of the information? Right now I am limited to only 25 chats per request.
Thanks! Any help is greatly appreciated.
We see two possible solutions:
1:
A. The front-end sends a request to the back-end, and the back-end returns the ID of a job, which the front-end must store on its side, for example in localStorage.
B. In the meantime, the back-end works on the job on its own and tries to finish it as a complete task (a sketch of this flow follows after the list).
C. The front-end asks the back-end about the task every few seconds, giving its ID. While the task is not complete, the back-end replies with 'in-progress', and the front-end keeps asking until it is finished.
D. If the front-end asks about the task and this time it has been completed, the back-end returns a message like 'finished' together with the completed data.
E. The GUI should show:
o Waiting for response
o Response successful, with a link to download the file.
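To make the shape of solution 1 concrete, here is a minimal sketch of the back-end side. It is written in Python with aiohttp purely for illustration (the actual server is Node/Express), and the route names, the 'pages' field, and the fetch_chat_page placeholder are assumptions rather than the real LiveChat client:

import asyncio
import uuid
from aiohttp import web

jobs = {}  # job_id -> {"status": "in-progress"} or {"status": "finished", "csv": ...}

async def fetch_chat_page(params, page):
    # Placeholder for one chat-list call (api.chats.list in the asker's Node app).
    # Replace with a real request; this fake payload only keeps the sketch runnable.
    return {"pages": 1, "chats": []}

def chat_row(chat):
    # Same row mapping as the Express handler above.
    survey = chat.get("postchat_survey") or []
    def survey_value(i):
        return (survey[i] if i < len(survey) else {}).get("value", "")
    return ",".join([
        chat["ended"].replace(",", ""),
        chat["agents"][0]["display_name"],
        survey_value(0),
        survey_value(1),
    ])

async def build_report(job_id, params):
    # Step B: the back-end works through every page on its own, then marks the job done.
    rows = ["Chat Date,Agent,PostChat Survey Rating,Comments"]
    page, pages = 1, 1
    while page <= pages:
        data = await fetch_chat_page(params, page)
        pages = data.get("pages", 1)  # total page count reported in the response
        rows += [chat_row(c) for c in data.get("chats", []) if c.get("type") == "chat"]
        page += 1
    jobs[job_id].update(status="finished", csv="\n".join(rows))

async def start_report(request):
    # Step A: register the job and hand its ID back to the front-end.
    params = await request.json()
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "in-progress"}
    jobs[job_id]["task"] = asyncio.create_task(build_report(job_id, params))
    return web.json_response({"job_id": job_id})

async def report_status(request):
    # Steps C/D: the front-end polls this with the stored ID until the CSV is ready.
    job = jobs.get(request.match_info["job_id"], {"status": "unknown"})
    if job["status"] != "finished":
        return web.json_response({"status": job["status"]})
    return web.Response(text=job["csv"], content_type="text/csv")

app = web.Application()
app.router.add_post("/chat_list_report", start_report)
app.router.add_get("/chat_list_report/{job_id}", report_status)

if __name__ == "__main__":
    web.run_app(app)

The front-end then stores job_id (for example in localStorage) and polls GET /chat_list_report/<job_id> every few seconds, downloading the CSV once the reply is no longer 'in-progress'.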
---------OR----------
2:
A. The front-end sends a request to the back-end; the back-end registers the task and, after completing it, sends an e-mail with the completed data.
B. The front-end informs the user that the task has been added to the queue and that an e-mail with the results will be sent only once it is completed.
C. This requires an additional e-mail input on the front-end side and some sort of e-mail API, e.g. https://postmarkapp.com/
Let me know how it works,
Cheers,
Adam
Related
I want to create a basic chat system which allows the user to attach a file to a given message.
This is a problem with WebSockets because it is very difficult to send a file through them, unless you want to convert it to binary data. That won't work for me, however, because I want to be able to interact with the files on the server, and since I have multiple file types (different images, videos and GIFs), I can't just decode the binary.
Here is my code:
Where I make the request to upload the message:
// this.state._new_message : {
//     _message_type_id: int,
//     _message: string || file,
//     _user_id: int,
//     _conversation_id: int
// } -> this is what a _new_message object looks like.
let fd = new FormData()
if (this.state._new_message._message_type_id !== 1) {
    fd.append("file", this.state._new_message._message)
}
fd.append("message", JSON.stringify(this.state._new_message))
Axios.post(API_URL + "/api/chat/send_message", fd)
    .then(res => res.data)
    .then(res => {
        this.socket.emit("message", res._message)
    })
Where I handle the request:
def send_message(self, request):
    data = request.form["message"]
    data = json.loads(data)
    if data["_message_type_id"] != 1:
        data["_message"] = "Loading Message. Please Wait."
    query = fdb.prepare(data, "_messages")  # Converts the data dict to a query string
    fdb.create(query["query_string"], query["query_params"])  # Interacts with Database
    _message = fdb.read("""SELECT * FROM _messages WHERE _id = LAST_INSERT_ID()""", (), False)
    if data["_message_type_id"] != 1:
        file = request.files["file"]
        # OpenCV magic... #
        path = "path/to/file"
        file.save(path)
        url_path = "url/path/to/file"
        fdb.update("""UPDATE _messages SET _message = %s WHERE _id = %s""", (url_path, _message["_id"]))  # Interacts with Database
        _message["_message"] = url_path  # If the message is not text, set it to the path of the file instead
    return jsonify({
        "_message": _message
    })
This is where I listen for the event (server):
@socket.on("message")
def handleMessage(msg):
    print(msg)
    send(msg, broadcast=True)
    return None
This is where I listen for the event (client):
this.socket.on("message", data => {
    if (data == null || data._message_type_id == undefined) return
    if (this.props.chat.messages.messages !== null)
        if (this.props.chat.messages.messages.filter(message => parseInt(message._id) === parseInt(data._id)).length > 0) return
    this.props.addMessage(data)
})
From this code, you'd expect a message to be sent through HTTP, processed and sent back in the HTTP response, then emitted to the server through the socket, and the socket to then send that message to all the clients listening to it. But it doesn't work that way. Sometimes it does, other times it doesn't.
One thing I can confirm is that the messages are saved to the database and file system 100% of the time. I also always get a response from the HTTP request, but when I emit the event from the client and print the message on the server, I sometimes get None or the previous message logged to the console.
So when that message is sent to the client, my "error catching block" sees that it is either undefined or already exists, so it returns without adding it (as it should, of course).
Has anyone experienced this issue before? Do I have a wrong idea about something? Or is there something wrong with my code?
I figured it out.
After a bit of time analyzing my code, I noticed that I do not execute the following two queries in the same transaction:
fdb.create(query["query_string"], query["query_params"]) # Interacts with Database
_message = fdb.read("""SELECT * FROM _messages WHERE _id = LAST_INSERT_ID()""", (), False)
The fdb extension I developed is basically a high-level abstraction over SQLAlchemy with a very loose transaction management system (auto open/close). So I had to explicitly tell it not to close the transaction until it had executed both queries.
As a result, I always get the latest insert, and not the insert which occurred beforehand (as was the case before I updated my code).
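The fdb wrapper itself isn't shown here, but the shape of the fix, sketched with plain SQLAlchemy (the connection string and column names below are assumptions), is to run the INSERT and the follow-up SELECT on the same connection and transaction, so that LAST_INSERT_ID() refers to the row that was just inserted:

from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:password@localhost/chat_db")  # assumed DSN

def create_and_fetch_message(fields):
    # Keeping both statements on one connection means MySQL's LAST_INSERT_ID()
    # is the id of *this* insert, not one made concurrently by another request.
    with engine.begin() as conn:
        conn.execute(
            text(
                "INSERT INTO _messages (_message_type_id, _message, _user_id, _conversation_id) "
                "VALUES (:_message_type_id, :_message, :_user_id, :_conversation_id)"
            ),
            fields,
        )
        row = conn.execute(text("SELECT * FROM _messages WHERE _id = LAST_INSERT_ID()"))
        return dict(row.mappings().first())

MySQL scopes LAST_INSERT_ID() to the connection, so once both statements share a connection, an insert from another request can no longer leak in between them.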
Goes to show you that abstracting code away isn't always a good idea if you don't think of all the use cases in advance.
I have a Python server side which pushes updates to the client using SSE.
Here is an example of the Python code. It sends an 'action-status' and data which the JS has to handle:
async def sse_updates(request):
    loop = request.app.loop
    async with sse_response(request) as resp:
        while True:
            # Tell the client to remove a queue token once it has been removed on the server.
            if request.app['sse_requests']['update_queue_vis_remove']:
                await resp.send("update-remove")
                request.app['sse_requests']['update_queue_vis_remove'] = False
            # Tell the client to append a queue token once it has been added on the server.
            if request.app['sse_requests']['update_queue_vis_append'][0]:
                await resp.send(f"update-append {request.app['sse_requests']['update_queue_vis_append'][1]} {request.app['sse_requests']['update_queue_vis_append'][2]}")
                request.app['sse_requests']['update_queue_vis_append'][0] = False
            # Tell the client to rewrite its list of redundant tokens (client side, of course).
            if request.app['sse_requests']['redundant_tokens_vis'][0]:
                await resp.send('update-redtokens ' + ''.join(token + ' ' for token in request.app['sse_requests']['redundant_tokens_vis'][1]))
                request.app['sse_requests']['redundant_tokens_vis'][0] = False
            await asyncio.sleep(0.1, loop=loop)
    return resp
And the JS script which handles a response:
evtSource = new EventSource("http://" + window.location.host + "/update")
evtSource.onmessage = function(e) {
    // Data from the server arrives as "<server-event-name> <data1> <data2> <data3> ..."
    let fetched_data = e.data.split(' ');
    // First option: a token has been removed on the server, so reflect that on the client side.
    if (fetched_data[0] === "update-remove")
        displayQueueRemove();
    // Second option: a token was appended on the server and should also be shown to the user.
    else if (fetched_data[0] === "update-append")
        // fetched_data[1] - the token
        // fetched_data[2] - its (the token's) position
        displayQueueAdd(fetched_data[1], parseInt(fetched_data[2]));
    // Last option: the page has been refreshed, so the data in redundant_tokens should be rewritten.
    else if (fetched_data[0] === "update-redtokens") {
        fetched_data.shift();
        // Variables for wrapping each token
        let tag;
        let text;
        // Wrap each token and store it in the array.
        for (let i = 0; i < fetched_data.length - 1; i++) {
            tag = document.createElement("div");
            text = document.createTextNode(fetched_data[i]);
            tag.appendChild(text);
            tag.setAttribute("class", "token-field");
            redundant_tokens[i] = tag;
        }
    }
}
The problem is that if I open two or more browser windows (sessions), only one of them catches a response and displays it. Moreover, there have been cases where I send a request from one session but the response arrives in another one.
Is there a way to fix this using SSE? (I was considering some other methods, but I'd like to try it with SSE.)
I think your problem is synchronizing the data (app["sse_requests"]).
Depending on how you modify the data and who needs to be notified you might need to keep a list of clients (sessions).
For example if all clients need to be notified of all events then keep a list (or even better a set) of connected clients and create a periodic function (using create_task) in which to notify all of them.
If a client needs to only be notified of certain events then you need to identify that client using some sort of key in the request object.
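As a rough sketch of that first idea, and assuming the sse_response in the question comes from aiohttp-sse: give every connected session its own queue, keep the queues in a set on the app, and push each event to all of them instead of flipping shared flags in app['sse_requests']:

import asyncio
from aiohttp import web
from aiohttp_sse import sse_response  # assumed to be the sse_response used in the question

async def sse_updates(request):
    async with sse_response(request) as resp:
        queue = asyncio.Queue()
        request.app["sse_clients"].add(queue)          # register this session
        try:
            while True:
                payload = await queue.get()            # wait for the next broadcast
                await resp.send(payload)               # stops once the client disconnects
        finally:
            request.app["sse_clients"].discard(queue)  # unregister the session
    return resp

def broadcast(app, payload):
    # Call this wherever the flags in app['sse_requests'] are currently being set,
    # or from a periodic task started with asyncio.create_task, as suggested above.
    for queue in app["sse_clients"]:
        queue.put_nowait(payload)

app = web.Application()
app["sse_clients"] = set()
app.router.add_get("/update", sse_updates)

With this shape, a message such as 'update-remove' or 'update-append ...' reaches every open window, rather than whichever single session happens to check the shared flag first.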
I'm trying to build a real-time notification system for users using server-sent events in Python.
The issue I'm facing is that when I refresh the browser, the browser will hit the EventSource URL again, and per the docs it should send event.lastEventId as part of the headers in the next request. I'm getting None every time I refresh the page.
<!DOCTYPE html>
<html>
<head><title>SSE test</title></head>
<body>
<ul id="list"></ul>
<script>
const evtSource = new EventSource("/streams/events?token=abc");
evtSource.onmessage = function(event) {
console.log('event', event)
console.log('event.lastEventId', event.lastEventId)
const newElement = document.createElement("li");
const eventList = document.getElementById("list");
newElement.innerHTML = "message: " + event.data;
eventList.appendChild(newElement);
}
</script>
</body>
</html>
On the server side
from sse_starlette.sse import EventSourceResponse
from asyncio.queues import Queue
from starlette.requests import Request
import asyncio

@event_router.get(
    "/streams/events",
    status_code=HTTP_200_OK,
    summary='',
    description='',
    response_description='')
async def event_stream(request: Request):
    return EventSourceResponse(send_events(request))

async def send_events(request: Request):
    try:
        key = request.query_params.get('token')
        last_id = request.headers.get('last-event-id')
        print('last_id ', last_id)  # this value is always None
        connection = Queue()
        connections[key] = connection
        while RUNNING:
            next_event = await connection.get()
            print('event', next_event)
            yield dict(data=next_event, id=next_event['id'])
            connection.task_done()
    except asyncio.CancelledError as error:
        pass
Now, as per every doc on SSE, when the client reconnects or refreshes the page it will send the last-event-id in the headers. I'm trying to read it using request.headers.get('last-event-id'), but this is always None.
Any pointers on how to get the last event id would be helpful. Also, how would I make sure that I don't send the same events again later once the user has seen them? My entire logic is based on the last-event-id received by the server, so if it is None after the client has read events with IDs 1 to 4, how would I make sure on the server that I don't send those back, even though last-event-id is null for that user?
Adding browser snaps:
The first screenshot shows that the events are being received by the browser, e.g. {alpha: abc, id: 4}.
The second shows that the received event sets lastEventId correctly.
I think this is a wrong understanding. Where did you get the part that says "when the client reconnects or refreshes the page it will send the last-event-id in headers"?
My understanding is that the last ID is sent on a reconnect. When you refresh the page, that is not seen as a broken connection and a reconnect; you are completely restarting the whole page and forming a brand-new SSE connection to the server from the browser.
Keep in mind that your SSE URL might, for example, include a request parameter driven by something on the page. You could have two tabs open from the same "page" but with different inputs, so that this request parameter differs between the two SSE connections. They are two different SSE connections altogether; one does not affect the other, except that you can reach the browser's maximum connection limit if you have too many. In the same way, clicking refresh is like opening a new tab: it is a totally new connection.
Just a question: I'm confused about how I could achieve this. I want to implement a progress bar for a long-running HTTP client request. I've seen a lot of tutorials on how to implement a progress bar, but they all center on a file being downloaded from the server. I want to ask what the best approach would be here.
This is my situation. I have a web API that generates a zip file containing multiple Word files, which are generated based on the parameters sent from the calling site. A lot happens, from pulling the records from the database to generating these files, which is why the response takes time to be sent. I want to show at least a progress bar so that when our users click the button to generate the file, they can see the progress of the report being generated.
This is how I consume the API:
using (var client = new HttpClient())
{
client.BaseAddress = new Uri ("MYBASEURI");
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/zip"));
var content = new StringContent(strModel, Encoding.UTF8, "application/json");
var response = await client.PostAsync("API_URI", content);
if (!response.IsSuccessStatusCode)
{
ExceptionResponse ex = await response.ExceptionResponse();
return null;
}
var data = await response.Content.ReadAsStreamAsync();
return File(data, "application/zip", "File.zip");
}
I'm calling this method on form submit since it has parameters needed for the report to generate.
Hello :) I am working on a project that requires authenticating the user, not just via a normal login. I have certain information in my database (let's say project names) and I only want to show certain projects to certain users. The whole thing is already written in PHP; what I wanted to do now is use AJAX/XMLHttpRequests from JS. The PHP file takes the value of uid and returns a JSON containing the information (in this case, the names of the projects the user is working on). At the moment I have it written like this:
var req = new XMLHttpRequest();
var uid = 1; // for testing
req.onreadystatechange = function () {
    if (req.readyState == 4 && req.status == 200) {
        var json = JSON.parse(req.responseText);
    }
};
req.open("GET", "getprojects.php?uid=" + uid);
req.send();
Basically this works great, and the PHP is also working. The problem I have now is that someone could simply brute-force their way into my system by iterating through the user IDs. My question is how I could secure my code so that not everyone can easily iterate through all the uids.
If you need more info, just ask.
Thanks already :)