Weird (caching) issue with Express/Node - javascript

I've built an Angular/Express/Node app that runs on Google Cloud and currently uses a JSON file as the data source for my application. For some reason (and this only happens in the cloud), when I save data through an Ajax call and write it to the JSON file, everything seems to work fine. However, when refreshing the page, the server (sometimes!) sends me the version from before the edit. I can't tell whether this is an Express-related, Node-related or even Angular-related problem, but what I know for sure is that I'm checking the JSON that comes in the response from the server, and it really is sometimes the modified version, sometimes not, so it most probably isn't Angular cache-related.
The GET:
router.get('/concerts', function (request, response) {
    delete require.cache[require.resolve('../database/data.json')];
    var db = require('../database/data.json');
    response.send(db.concerts);
});
The POST:
router.post('/concerts/save', function (request, response) {
    delete require.cache[require.resolve('../database/data.json')];
    var db = require('../database/data.json');
    var concert = request.body;
    console.log('Received concert id ' + concert.id + ' for saving.');
    if (concert.id != 0) {
        var indexOfItemToSave = db.concerts.map(function (e) {
            return e.id;
        }).indexOf(concert.id);
        if (indexOfItemToSave == -1) {
            console.log('Couldn\'t find concert with id ' + concert.id + ' in database!');
            response.sendStatus(404);
            return;
        }
        db.concerts[indexOfItemToSave] = concert;
    }
    else if (concert.id == 0) {
        concert.id = db.concerts[db.concerts.length - 1].id + 1;
        console.log('Concert id was 0, adding it with id ' + concert.id + '.');
        db.concerts.push(concert);
    }
    console.log("Added stuff to temporary db");
    var error = commit(db);
    if (error)
        response.send(error);
    else
        response.status(200).send(concert.id + '');
});
This probably doesn't say much, so if someone is interested in helping, you can see the issue live here. If you click on modify for the first concert, change the programme to something like asd and then save, everything looks fine. But if you refresh the page a few times (usually up to 6-7 tries are needed), the old, unchanged programme is shown. Any clue or advice is greatly appreciated, thanks.

The solution: do not use local files to store data in the cloud! This is what databases are for!
What was the actual problem?
The problem was caused by the fact that App Engine had 2 VM instances running for my application. The POST request was sent to one instance, which did its job, saved the data by modifying its local JSON file, and returned a 200. However, after a few refreshes, the load balancing causes the GET to arrive at the other machine, which has its own copy of the source code, including the initial, unmodified JSON. I am now using a MongoDB instance, and everything seems to be solved. Hopefully this discourages people who attempt to do the same thing I did.
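For anyone landing here, a minimal sketch of what the shared-datastore version of the two routes can look like with the official MongoDB Node.js driver. The connection string, database and collection names are illustrative (not taken from the actual app), and the new-id logic from the original POST is omitted:

// Minimal sketch: both routes read and write one shared MongoDB collection,
// so every App Engine instance behind the load balancer sees the same data.
const { MongoClient } = require('mongodb');
const client = new MongoClient('mongodb://localhost:27017'); // illustrative connection string
const concerts = () => client.db('mydb').collection('concerts'); // assume client.connect() was awaited at startup

router.get('/concerts', async function (request, response) {
    response.send(await concerts().find().toArray());
});

router.post('/concerts/save', async function (request, response) {
    const concert = request.body;
    // Upsert by id instead of rewriting a local JSON file.
    await concerts().replaceOne({ id: concert.id }, concert, { upsert: true });
    response.status(200).send(String(concert.id));
});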

Related

Why is Net.Socket.Write() not sending the full string to my TCP client?

EDIT: I'm desperate to get this thing done so I can move on to a new project; here's the source: https://github.com/eanlockwood/TalkmonClient
Everything else works fine. The server is well aware of the usernames associated with the clients and it sends information to them accordingly, but the weird thing is the string cuts off after the first variable. Here is the loop for a typical send message:
function SendMessage(msg, client)
{
    const clientName = ClientNames.get(client);
    let recieverMsg = `${clientName}: ${msg}`;
    console.log(recieverMsg);
    Clients.forEach(elm => {
        if (elm != client)
        {
            elm.write(recieverMsg);
        }
        else {
            elm.write("You: " + msg);
        }
    });
}
msg is chunk.toString() from the socket.on('data') event. Again, Node handles the string just fine and it logs correctly when debugging, but in the client (imagine the Windows cmd window) it'll just print the username and go to the next line.
The loop on the client side is pretty simple too:
char buff[4090];
do
{
    ZeroMemory(buff, 4090);
    int messageRevieved = recv(sock, buff, 4090, 0);
    if (messageRevieved > 0)
    {
        userInput->FreezeInput();
        cout << buff << endl;
        userInput->UnFreezeInput();
    }
} while (0 == 0);
User input is handled in its own class on a separate thread, and again, that works just fine.
I think it has to do with a misunderstanding of what socket.write and recv actually do, or maybe I just don't understand JavaScript strings well enough. Either way this problem is annoying because it's the last step in creating my app.
I've done some extensive tests too, and it really just doesn't like the concatenated strings. It'll print everything up until the first variable, meaning I could have socket.write("hehehehehehe " + variable1 + " kadjgkdgad"); and it would print hehehehehe [variable1 value] and just stop.
Tl;dr: The server will write to the sockets up to the first variable in a concatenated string and then stop; it's so weird.
EDIT: The dependencies used server-side are Net and dotnet, if that makes a difference.
EDIT 2: I found another issue. After sending a few messages, the client who sent them will stop printing the "you: " part and will only print the message. It does this server-side too, but it's weird because the receiving clients will still print out who the message is from.
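For what it's worth, socket.write just sends raw bytes and recv on the other side may see them split or merged arbitrarily, so any stray control byte that ends up inside the JavaScript string (for example a trailing '\0' carried over from a fixed-size C buffer) will truncate cout on the client. A minimal sketch of one defensive approach on the Node side; frameMessage is a hypothetical helper, not part of the repository above:

// Hypothetical helper: strip trailing NUL/CR/LF bytes that a C client buffer may carry,
// and terminate each outgoing message with '\n' so the receiver has an explicit delimiter.
function frameMessage(name, msg) {
    const clean = (s) => s.replace(/[\0\r\n]+$/g, '');
    return `${clean(name)}: ${clean(msg)}\n`;
}

// elm.write(frameMessage(clientName, msg));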

Capturing refresh on haskell websockets example server?

The websockets server example works as expected. On browser refresh (e.g. Shift-F5 in Chrome), the websocket disconnects, still working as expected. After the refresh, the user has to give their name again to connect to the server.
How would you capture the refresh-event and keep the user connected? E.g., is this doable only on the server side, or does the client require modifications as well? Haskell examples or links to such would be nice, as well as hints on how to do this!
How would you capture the refresh-event...
There isn't really such a thing as a refresh event to detect (I would love to be proved wrong in this!)
... and keep the user connected...
The refresh, or rather, the leaving of the page before loading it again, causes the websocket to disconnect, and (especially if this is the only page on the site that is open), there isn't really much you can do about it.
So the only thing that can be done is to have some sort of auto-reconnect the next time the page loads. A solution that allows this is one where:
when the name is initially entered, the name is saved somewhere in the browser;
when the page reloads, it checks for a previously saved name;
and if it's found, it connects again using that name.
The browser's session storage (localStorage would work similarly) is one such place to save this, as in the example below, modified from https://github.com/jaspervdj/websockets/tree/master/example to save/retrieve the name from session storage.
$(document).ready(function () {
    // Auto-rejoin with a previously saved name, if there is one.
    var savedUser = sessionStorage.getItem("rejoin-user");
    if (savedUser) {
        joinChat(savedUser);
    }

    $('#join-form').submit(function () {
        joinChat($('#user').val());
        return false; // prevent the form from actually submitting and reloading the page
    });

    function joinChat(user) {
        sessionStorage.setItem("rejoin-user", user);
        $('#warnings').html('');
        var ws = createChatSocket();
        ws.onopen = function () {
            ws.send('Hi! I am ' + user);
        };
        ws.onmessage = function (event) {
            if (event.data.match('^Welcome! Users: ')) {
                /* Calculate the list of initial users */
                var str = event.data.replace(/^Welcome! Users: /, '');
                if (str != "") {
                    users = str.split(", ");
                    refreshUsers();
                }

                $('#join-section').hide();
                $('#chat-section').show();
                $('#users-section').show();

                ws.onmessage = onMessage;

                $('#message-form').submit(function () {
                    var text = $('#text').val();
                    ws.send(text);
                    $('#text').val('');
                    return false;
                });
            } else {
                $('#warnings').append(event.data);
                ws.close();
            }
        };

        $('#join').append('Connecting...');
        return false;
    }
});
... Is this doable only on server side or does the client require modifications as well?
It definitely needs something done in the client to auto-reconnect. The bare bones version above needs no changes to the server, but if you wanted something fancier, like having the cases of initial connect and auto reconnect handled/shown differently somehow, then the server might need to be modified.
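For example, a minimal, hypothetical way to signal the difference from the client; this greeting format is not part of the example protocol, so the Haskell server would need a matching change to make use of it:

// Hypothetical: check for a saved name *before* joinChat stores it, and flag the reconnect
// in the greeting. The server would have to be taught to treat the two cases differently.
var isReconnect = sessionStorage.getItem("rejoin-user") !== null;
ws.onopen = function () {
    ws.send('Hi! I am ' + user + (isReconnect ? ' (rejoining)' : ''));
};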

How to process streaming HTTP GET data?

Right now, I have a node.js server that's able to stream data on a GET request, using the stream API. The GET request has Transfer-Encoding set to 'chunked'. The data can be on the order of 10 to 30 MB (sometimes 3D models).
On the browser side, I wish to be able to process the data as I'm downloading it--I wish to be able to display the data on Canvas as I'm downloading it. So you can see the 3D model appear, face by face, as the data is coming in. I don't need duplex communication, and I don't need a persistent connection. But I do need to process the data as soon as it's downloaded, rather than waiting for the entire file to finish downloading. Then after the browser downloads the data, I can close the connection.
How do I do this?
jQuery's ajax only calls back when all the data has been received.
I also looked at portal.js (which was jquery-streaming) and socket.io, but they seem to assume a persistent connection.
So far, I was able to hack together a solution using raw XMLHttpRequest, making a callback when readyState >= 2 && status == 200, and keeping track of the last position read. However, that keeps all the downloaded data in the raw XMLHttpRequest, which I don't want.
There seems to be a better way to do this, but I'm not sure what it is. Anyone have suggestions?
oboe.js is a library for streaming responses in the browser.
However, that keeps all the data downloaded in the raw XMLHttpRequest, which I don't want.
I suspect this may be the case with oboe.js as well and potentially a limitation of XMLHttpRequest itself. Not sure as I haven't directly worked on this type of use case. Curious to see what you find out with your efforts and other answers to this question.
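For reference, in browsers that support readable streams on the Fetch API you can avoid buffering the whole response; a rough sketch (the '/blob' endpoint and handleChunk are placeholders, not taken from the question):

// Sketch only: read a chunked GET response incrementally instead of waiting for the whole body.
async function streamBlob() {
    const response = await fetch('/blob');
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        handleChunk(decoder.decode(value, { stream: true })); // process each chunk as it arrives
    }
}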
So I found the answer, and it's Server-Sent Events. It basically enables one-way HTTP streams that the browser can handle a chunk at a time. It can be a little tricky because some existing stream libraries are broken (they don't assume you have \n in your stream, and hence you get partial data), or have little documentation. But it's not hard to roll your own (once you figure it out).
You can define your SSE transform stream like this:
// file sse_stream.coffee
var Transform = require('stream').Transform;
var util = require('util');

util.inherits(SSEStream, Transform);

function SSEStream(option) {
    Transform.call(this, option);
    this.id = 0;
    this.retry = (option && option.retry) || 0;
}

SSEStream.prototype._transform = function (chunk, encoding, cb) {
    var data = chunk.toString();
    if (data) {
        this.push("id:" + this.id + "\n" +
            data.split("\n").map(function (e) {
                return "data:" + e;
            }).join("\n") + "\n\n");
        //"retry: " + this.retry);
    }
    this.id++;
    cb();
};

SSEStream.prototype._flush = function (next) {
    this.push("event: end\n" + "data: end" + "\n\n");
    next();
};

module.exports = SSEStream;
Then on the server side (I was using express), you can do something like this:
sse_stream = require('sse_stream')

app.get '/blob', (req, res, next) ->
  sse = new sse_stream()
  # It may differ here for you, but this is just a stream source.
  blobStream = repo.git.streamcmd("cat-file", { p: true }, [blob.id])
  if (req.headers["accept"] is "text/event-stream")
    res.type('text/event-stream')
    blobStream.on("end", () -> res.removeAllListeners()).stdout
      .pipe(
        sse.on("end", () -> res.end())
      ).pipe(res)
  else
    blobStream.stdout.pipe(res)
Then on the browser side, you can do:
source = new EventSource("/blob")

source.addEventListener('open', (event) ->
  console.log "On open..."
, false)

source.addEventListener('message', (event) ->
  processData(event.data)
, false)

source.addEventListener('end', (event) ->
  console.log "On end"
  source.close()
, false)

source.addEventListener('error', (event) ->
  console.log "On Error"
  if event.currentTarget.readyState == EventSource.CLOSED
    console.log "Connection was closed"
    source.close()
, false)
Notice that you need to listen for the 'end' event, which is sent from the server in the transform stream's _flush() method. Otherwise, EventSource in the browser is just going to request the same file over and over again.
Note that you can use libraries on the server side to generate SSE. On the browser side, you can use portal.js to handle SSE. I just spelled things out so you can see how things would work.

Reduce Ajax requests

I'm making a chat script using jQuery and JSON, but my hosting provider suspends it due to a 'resource usage limit'. I want to know if it is possible (and how) to reduce these requests. I read one question that mentions something about an Ajax timeout, but I'm not very good at Ajax. The code is:
function getOnJSON() {
    var from;
    var to;
    var msg_id;
    var msg_txt;
    var new_chat_string;

    //Getting the data from the JSON file
    $.getJSON("/ajax/end.emu.php", function(data) {
        $.each(data.notif, function(i, data) {
            from = data.from;
            to = data.to;
            msg_id = data.id;
            msg_txt = data.text;
            if ($("#chat_" + from + "").length === 0) {
                $("#boxes").append('...some stuf...');
                $('#' + from + '_form').submit(function(){
                    contactForm = $(this);
                    valor = $(this + 'input:text').val();
                    destinatary = $(this + 'input[type=hidden]').val();
                    reponse_id = destinatary + "_input";
                    if (!$(this + 'input:text').val()) {
                        return false;
                    }
                    else {
                        $.ajax({
                            url: "/ajax/end.emu.php?ajax=true",
                            type: contactForm.attr('method'),
                            data: contactForm.serialize(),
                            success: function(data){
                                responsed = $.trim(data);
                                if (responsed != "success") {
                                    alert("An error occured while posting your message");
                                }
                                else {
                                    $('#' + reponse_id).val("");
                                }
                            }
                        });
                        return false;
                    }
                });
                $('#' + from + '_txt').jScrollPane({
                    stickToBottom: true,
                    maintainPosition: true
                });
                $('body').append('<embed src="http://cdn.live-pin.com/assets/pling.mp3" autostart="true" hidden="true" loop="false">');
            }
            else {
                var pane2api = $('#' + from + '_txt').data('jsp');
                var originalContent = pane2api.getContentPane().html();
                pane2api.getContentPane().append('<li id="' + msg_id + '_txt_msg" class="chat_txt_msg">' + msg_txt + '</li>');
                pane2api.reinitialise();
                pane2api.scrollToBottom();
                $('embed').remove();
                $('body').append('<embed src="http://cdn.live-pin.com/assets/pling.mp3" autostart="true" hidden="true" loop="false">');
            }
        });
    });
}
The limit is 600 requests/5 min, and I need to make a request almost every second. I've already paid for a year and they don't give refunds; also, I can't modify the server, I just have access to cPanel.
Well, 600 req/5 min is pretty restrictive if you want to make a request/sec for each user. Essentially, each user would make 60 req/min, or 300 per 5 minutes. In other words, even if you optimize your script to combine the two requests into one, you can have at most two users on your site ;) Not much, I guess...
You have two options:
Stick with making a chat system through Ajax requests and change the hosting provider. This might actually be cheaper if you don't have the skills to do option 2.
Forget about making an Ajax request to poll and potentially another to push every second. Implement something around web sockets, long-polling or even XMPP.
If you go that route, I would look at socket.io for a transparent library that uses web sockets where they are supported and has fallbacks to long polling and others for the rest. For the XMPP-way, there is the excellent Strophe.js. Note that both routes are much more complex than your Ajax requests and will require a lot of server logic changes.
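To give a feel for how little code the socket.io route needs, here is a rough sketch (the port and event names are made up for the example, and the real server logic would of course be more involved):

// Server sketch (Node): clients get each message pushed to them instead of polling for it.
const io = require('socket.io')(3000);
io.on('connection', (socket) => {
    socket.on('chat message', (msg) => {
        io.emit('chat message', msg); // broadcast to every connected client
    });
});

// Browser sketch: no polling loop at all.
// const socket = io('http://yourserver:3000');
// socket.on('chat message', (msg) => appendMessage(msg)); // appendMessage is a placeholder
// socket.emit('chat message', 'hello');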
I don't think that checking every second is really a good idea; in my opinion, for an online chat, checking every 2-3 seconds should be more than enough.
To make fewer requests, you can also check for user activity on the client side: if the window is inactive, you can lengthen the polling interval, going back to 2-3 seconds when the user becomes active again. That will let you save resources and requests per minute.
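A rough sketch of that idea (the interval values are arbitrary and getOnJSON is the function from the question):

// Sketch: poll every 3s while the window has focus, back off to 15s while it is inactive.
var pollDelay = 3000;
$(window).on('focus', function () { pollDelay = 3000; });
$(window).on('blur', function () { pollDelay = 15000; });

(function poll() {
    getOnJSON();                  // the existing request from the question
    setTimeout(poll, pollDelay);  // re-arm with whatever the current delay is
})();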
I'm working on a project right now that requires keeping the UI in sync with server events. I've been using long polling, which does indeed reduce the number of Ajax calls, but then it puts the burden on the server to listen for the event that the client is interested in, which isn't fun either.
I'm about to switch over to socket.io, which I will set up as a separate push service.
existing server --> pushes to socket.io server --> pushes to subscribing client
ggozad's response is good, and I also recommend web sockets. They only work in newer browsers, so if you want to make it available in all browsers you will need a small Flash bridge (Flash can communicate with sockets very easily, and it can also call JavaScript functions and be called from JavaScript). Also, Flash offers P2P if you are interested. http://labs.adobe.com/technologies/cirrus/
Also, for the server side you can look into Node.js if you are a JavaScript fan like me :)
To complete my response: there is no way to make an Ajax-based chat when you are limited to 600 requests/5 min (2 requests/second), want to make a request per second, and want more than two users.
Solution: switch to sockets or P2P.
I recommend calling that paid service from the server side in a single thread (as an API proxy). You can still poll within the 600 requests/5 min limit in that thread. Then every client can poll or long-poll your server's API proxy via Ajax without limitation.
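A minimal sketch of that proxy idea in Node (assuming Node 18+ for the built-in fetch; the route names, the one-second interval and the upstream URL are illustrative):

// Sketch: one server-side poller hits the rate-limited upstream once per second (well within 600/5 min),
// caches the result, and any number of browser clients poll the cached copy without limit.
const express = require('express');
const app = express();

let cached = null;
setInterval(async () => {
    try {
        const res = await fetch('https://upstream.example.com/ajax/end.emu.php'); // single upstream request
        cached = await res.json();
    } catch (err) {
        console.error('upstream poll failed:', err);
    }
}, 1000);

app.get('/ajax/cached', (req, res) => res.json(cached)); // clients hit this as often as they like
app.listen(3000);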

CCTV webserver javascript

Bit of a strange one here: I have a CCTV system and contacted the manufacturers to ask if there was an API. The answer was no.
I've been trying to understand how I can take the live JPEG picture and use it in my own app (C#).
Here is a link to the live-view page that displays the live feeds: http://pastebin.com/jCp4jZRh
The line I'm interested in is:
img_buf[0].src = "ivop.get?action=live&piccnt=0&THREAD_ID=" + thd_id;
Now, piccnt seems to be there to stop browsers from caching the data, so this number keeps changing, and thd_id seems to be the channel number. When trying to access this I get the following message:
Authentication Error:Access Denied, authentication error
Even if I log in first and then try the above URL in my own context, I still get the access denied message.
Here's the source of the login page: http://pastebin.com/q7nLJ4tk
Here's the source of the md5.js file: http://pastebin.com/du1ggaQB
I'm just a little stuck on how to authenticate and then display the feed; does anyone have any pointers?
Thanks.
I answered a similar question a while back, and the solution ended up being that you had to set the referrer.
In any case, to find your solution, download a copy of Fiddler.
Once it's running, hit your camera page, and you will see several requests. When you find one of the requests for ivop.get, drag it into the request builder and execute it a second time.
If after executing it a second time it still works (check using the inspectors), then start changing the headers, removing bits one by one until you find the key element. I suspect there will either be a cookie or a referrer that is required.
Once you have figured out those elements, it should be easy to make the appropriate request in your application.
If you can post a live URL, I can help you with this.
Best guess given the limited info available: they're checking the referrer. You can check the details of the requests using Fiddler (you can even replay the request with a slightly different referrer, confirm if that's what's happening, etc). If this is it, you can set the referrer in HTTPWebRequest: http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.referer.aspx
There are many possibilities, and without having access to the source code of the CCTV server, it's hard to say which one it might be.
I'd suggest popping open an HTTP header sniffing utility (such as https://addons.mozilla.org/en-US/firefox/addon/live-http-headers/ for Firefox) and watching the headers for the successful IMG request. Then replay that request using netcat or curl. Once you've got that working, try removing HTTP headers one at a time (you're probably sending some kind of session ID, HTTP referrer, etc. - these may all be important to the CCTV server).
In any case, it's almost certainly going to be important that you at least authenticate with mlogin.get and pass along the resulting session ID in subsequent requests.
Whilst this may be old, I had the same problem. The DVR requires an authenticated login with a key sent in the URL during the first redirect to the login page, and the password in hex_hmac_md5. Below is a Python function to log in, retrieve two channel images, and then log out:
import re
import hmac
import hashlib
import datetime
import requests
from requests.exceptions import HTTPError

def getcamimg():
    baseurl = 'http://<IPADDRESS>/'
    content = str(getUrl(baseurl))
    # The login page embeds a key that is used to HMAC the password
    x = re.search(r"key=(\w+)", content)
    keystr = x.group(1)
    key = bytes(keystr, 'utf-8')
    password = bytes(<YOURPASS>, 'utf-8')
    hmacobj = hmac.new(key, password, hashlib.md5)
    hmacpass = hmacobj.hexdigest()
    #-----------------------------------------------------
    loginurl = baseurl + 'mlogin.get?account=<USERNAME>&passwd=' + str(hmacpass) + '&key=' + keystr + '&Submit=Login'
    lcontent = str(getUrl(loginurl))
    if ("another administrator" in lcontent):
        print("another admin online")
        exit()
    y = re.search(r'href="([\w\d\.\?=&_-]+)"', lcontent)
    finalurl = baseurl + y.group(1)
    z = re.search(r'id=(\w+)', lcontent)
    thid = z.group(1)
    #-----------------------------------------------------
    imgurl = baseurl + "ivop.get?action=live&piccnt=1&THREAD_ID=" + thid
    imgcontent = getUrl(imgurl)
    ctime = datetime.datetime.today().strftime("%Y%m%d%H%M%S")
    with open("chan1_" + ctime + ".jpg", "wb") as file0:
        file0.write(imgcontent)
    #-----------------------------------------------------
    chanset = "showch.set?channel=3&THREAD_ID=" + thid
    getUrl(baseurl + chanset)
    #-----------------------------------------------------
    icontent1 = getUrl(imgurl)
    with open("chan3_" + ctime + ".jpg", "wb") as file1:
        file1.write(icontent1)
    #-----------------------------------------------------
    logout = "Forcekick.set?ITSELF=1&Logout=Logout&THREAD_ID=" + thid
    getUrl(baseurl + logout)
    #-----------------------------------------------------

def getUrl(url):
    try:
        response = requests.get(url)
        response.raise_for_status()
    except HTTPError as http_err:
        print('HTTP error occurred: ' + str(http_err))
    except Exception as err:
        print('Other error occurred:' + str(err))
    else:
        return response.content
