"Run and forget" a link from javascript - javascript

I'm working on a site that will keep track of when a user enters a page and how long they stay there (or, to be more specific, when they leave).
I have set up a Node server and I would like to run the action there, but the question isn't really about Node itself; it's a JavaScript question.
My goal is to have JavaScript call a certain method with specific parameters and then forget about it. However, I would like to avoid Ajax if possible. I know I could do it with Ajax, but I think that's overkill, and I'm not even sure whether Ajax would work against a Node server.
What I'm looking for is something like:
*User opens the web page
*JavaScript runs something like
Run("http://server.com/User/EnteredPage/IDOFUSER");
and when the user closes the page
Run("http://server.com/User/LeftPage/IDOFUSER");
The point is: I don't need anything back from that call. I just want JavaScript to fire it, let the server save the data I need, and that's it.

HTTP is stateless. The browser asks for a resource. The server gives the browser the resource. The end.
The request is done and dealt with. There is no further communication about that request so the server doesn't know when the visitor has left the page that it served up. If you want to know that, then you need another request to tell the server about it.
The problem with that approach is that the visitor might leave the page by:
Quitting their browser entirely
Running out of battery
Getting disconnected from the network
… so you can't reliably send a new request when the user leaves the page.
So the best you can do is to have the browser tell the server that the user hasn't left yet (you could do this with Ajax or, potentially more efficiently, WebSockets).
Combine this with a timer-based action on the server that checks how long it has been since the visitor's browser last sent an "I'm still here" message, and use that to call your "visitor has left" function.
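As a rough sketch of what both pieces could look like on the client (the /EnteredPage and /LeftPage routes are taken from the question; the /StillHere heartbeat route is an assumption added for the example), something like this does a fire-and-forget request without caring about the response:

function ping(url) {
  // Fire-and-forget: sendBeacon queues the request and ignores the response,
  // and it still works while the page is being unloaded.
  if (navigator.sendBeacon) {
    navigator.sendBeacon(url);
  } else {
    new Image().src = url; // old-school fallback: a GET nobody waits on
  }
}

ping("http://server.com/User/EnteredPage/IDOFUSER");

// Heartbeat: tell the server every 30 seconds that the user is still here,
// so the server can mark the visitor as gone once the heartbeats stop.
setInterval(function () {
  ping("http://server.com/User/StillHere/IDOFUSER"); // hypothetical route
}, 30000);

// Best-effort "left" notification; unreliable (crash, dead battery, network loss),
// which is why the heartbeat above is the safer signal.
window.addEventListener("pagehide", function () {
  ping("http://server.com/User/LeftPage/IDOFUSER");
});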

Related

Running a client-script on loading a webpage

I want to run a script when a user tries to access a webpage.
E.g. I type in google.com and, as the page loads, I want a client-side script to know the protocol of the page (https in this case).
I know that window.location.protocol is one way of knowing the protocol in JS, but how do I make this run when a user accesses a webpage in a browser?
Also, can I send a request for a webpage and analyse the response using, say, Ajax? Suppose I send an HTTP request to facebook and get a 301 redirect message. How do I analyse this response and know that it is a redirect?
Does it require browser modifications? Can it be done without them? Thanks
Like #SachinKumar said...
1) Getting and using the protocol: create a browser plugin in your preferred browser. Chrome is the most obvious choice.
2) Determining the response: do you mean analyzing it with your own code when the user receives a 301? (Probably doable with a browser plugin.) Or do you mean you just want to check the response of some page yourself? (jQuery's HTTP/GET functions are one clear way to do that.)
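For the second part, here is a minimal sketch of checking a response yourself from page script, using fetch rather than jQuery (the URL is only an example, and cross-origin requests are still subject to CORS):

// The protocol of the current page:
console.log(window.location.protocol); // e.g. "https:"

// Checking a response yourself. Note that fetch (like XHR) follows redirects
// transparently, so you never see the 301 itself from page script; you only
// see whether a redirect happened and where it ended up.
fetch("https://example.com/some-page")
  .then(function (response) {
    console.log(response.status);     // status of the final response
    console.log(response.redirected); // true if a redirect was followed
    console.log(response.url);        // the final URL after any redirects
  })
  .catch(function (err) {
    console.error("request failed", err);
  });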

javascript setInterval() load

I am trying to figure out what kind of load the window function setInterval() places on a user's computer. I would like to place a setInterval() on a page that is viewable only by my company's employees that will be checking a basic text file every 5 seconds or so and then if there is something to display, it will throw up some html code dynamically on the screen.
Any thoughts? Any better, less intrusive way to do this?
It appears it should not cause a problem, provided the function that setInterval() fires is not heavy. Since I will only be reading a text file that should never be too large (the file will be overwritten about every minute by a completely separate job or bash script), the load should be minimal: it will be read in as a string, analyzed, and, if necessary, a small amount of HTML will be written to the page.
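A minimal sketch of that approach, assuming the file is called status.txt and there is a container element with id "notice" (both names are made up for the example):

setInterval(function () {
  fetch("status.txt", { cache: "no-store" }) // avoid getting a cached copy
    .then(function (response) { return response.text(); })
    .then(function (text) {
      if (text.trim()) {
        // Something to display: inject it into the page.
        document.getElementById("notice").innerHTML = text;
      }
    })
    .catch(function (err) {
      // Network hiccup: just wait for the next tick.
      console.error("poll failed", err);
    });
}, 5000); // every 5 seconds, as in the question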
I agree with all the comments that a single polling setInterval() is trivial.
However, if you want alternatives:
Long Polling
The browser makes an Ajax-style request to the server, which is kept
open until the server has new data to send to the browser, which is
sent to the browser in a complete response.
Also see:
PHP example and explanation
SignalR
Web Sockets
WebSockets is an advanced technology that makes it possible to open an
interactive communication session between the user's browser and a
server. With this API, you can send messages to a server and receive
event-driven responses without having to poll the server for a reply.
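A hedged sketch of what the WebSocket alternative looks like on the client (the ws:// URL and the JSON message shape are assumptions):

var socket = new WebSocket("ws://example.com/updates");

socket.addEventListener("open", function () {
  socket.send(JSON.stringify({ msg: "subscribe" })); // optional hello/subscribe message
});

socket.addEventListener("message", function (event) {
  // The server pushes updates whenever it has something; no polling needed.
  var update = JSON.parse(event.data);
  document.getElementById("notice").textContent = update.text;
});

socket.addEventListener("close", function () {
  console.log("connection closed; reconnect if you still need updates");
});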

Prevent recursive calls of XmlHttpRequest to server

I've been googling for hours for this issue, but did not find any solution.
I am currently working on this app, built on Meteor.
Now the scenario is: after the website is opened and all the assets have been loaded in the browser, the browser constantly makes recursive XHR calls to the server. These calls are made at a regular interval of 25 seconds.
This can be seen in the Network tab of the browser console. See the pending request in the last row of the image.
I can't figure out where it originates, or why it is invoked automatically even when the user is idle.
Now the question is, How can I disable these automatic requests? I want to invoke the requests manually, i.e. when the menu item is selected, etc.
Any help will be appreciated.
[UPDATE]
In response to Jan Dvorak's comment:
When I type "e" in the search box, the list of events whose names start with the letter "e" is displayed.
The request goes with all valid parameters and the Payload like this:
["{\"msg\":\"sub\",\"id\":\"8ef5e419-c422-429a-907e-38b6e669a493\",\"name\":\"event_Coll_Search_by_PromoterName\",\"params\":[\"e\"]}"]
And this is the response, which is valid.
a["{\"msg\":\"data\",\"subs\":[\"8ef5e419-c422-429a-907e-38b6e669a493\"]}"]
The code for this action is posted here
But in the case of the automatic recursive requests, the request goes without any payload and the response is just the letter "h", which is strange, isn't it? How can I get rid of this?
Meteor has a feature called
Live page updates.
Just write your templates. They automatically update when data in the database changes. No more boilerplate redraw code to write. Supports any templating language.
To support this feature, Meteor needs to do some server-client communication behind the scenes.
Traditionally, HTTP was created to fetch static data: the client tells the server it needs something, and it gets it. There is no way for the server to tell the client it has something new. Later, it became necessary to push some data to the client, and several alternatives came into existence:
polling:
The client makes periodic requests to the server. The server responds with new data or says "no data" immediately. It's easy to implement and doesn't use many resources. However, it's not exactly live: it can work for a news ticker, but it's not good for a chat application.
If you increase the polling frequency, you improve the update rate, but the resource usage grows with the polling frequency, not with the data transfer rate. HTTP requests are not exactly cheap. One request per second from multiple clients at the same time could really hurt the server.
hanging requests:
The client makes a request to the server. If the server has data, it sends it. If the server doesn't have data, it doesn't respond until it does. Changes are picked up immediately and no data is transferred when it doesn't need to be. It does have a few drawbacks, though:
If a web proxy sees that the server is silent, it eventually cuts off the connection. This means that even if there is no data to send, the server needs to send a keep-alive response anyways to make the proxies (and the web browser) happy.
Hanging requests don't use up (much) bandwidth, but they do take up memory. Nowadays' servers can handle multiple concurrent TCP connections, so it's less of an issue than it was before. What does need to be considered is the amount of memory associated with the threads holding on to these requests - especially when the connections are tied to specific threads serving them.
Browsers have hard limits on the number of concurrent requests per domain and in total. Again, this is less of a concern now than it was before. Thus, it seems like a good idea to have one hanging request per session only.
Managing hanging requests feels kinda manual as you have to make a new request after each response. A TCP handshake takes some time as well, but we can live with a 300ms (at worst) refractory period.
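A minimal sketch of that loop on the client (the /poll endpoint and the handleUpdate function are assumptions):

function poll() {
  // The server holds this request open until it has data (or a keep-alive timeout).
  fetch("/poll")
    .then(function (response) { return response.json(); })
    .then(function (data) {
      handleUpdate(data);  // application-specific handler, assumed to exist
      poll();              // immediately issue the next hanging request
    })
    .catch(function () {
      setTimeout(poll, 3000); // back off briefly on errors, then retry
    });
}
poll();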
Chunked response:
The client creates a hidden iFrame with a source corresponding to the data stream. The server responds with an HTTP response header immediately and leaves the connection open. To send a message, the server wraps it in a pair of <script></script> tags that the browser executes when it receives the closing tag. The upside is that there's no connection reopening but there is more overhead with each message. Moreover, this requires a callback in the global scope that the response calls.
Also, this cannot be used with cross-domain requests as cross-domain iFrame communication presents its own set of problems. The need to trust the server is also a challenge here.
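For completeness, a rough client-side sketch of the technique (the /stream URL and the handleMessage callback name are made up; the server is assumed to keep the response open and write script tags that call parent.handleMessage(...) into it):

// The global callback that the streamed <script> tags call.
window.handleMessage = function (data) {
  console.log("pushed from server:", data);
};

// The hidden iframe whose response never finishes.
var frame = document.createElement("iframe");
frame.style.display = "none";
frame.src = "/stream";
document.body.appendChild(frame);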
Web Sockets:
These start as a normal HTTP connection but they don't actually follow the HTTP protocol later on. From the programming point of view, things are as simple as they can be. The API is a classic open/callback style on the client side and the server just pushes messages into an open socket. No need to reopen anything after each message.
There still needs to be an open connection, but it's not really an issue here with the browser limits out of the way. The browser knows the connection is going to be open for a while, so it doesn't need to apply the same limits as to normal requests.
These seem like the ideal solution, but there is one major issue: IE < 10 doesn't support them. As long as IE8 is alive, web sockets cannot be relied upon. The native Android browser and Opera Mini are out as well (ref.).
Still, web sockets seem to be the way to go once IE8 (and IE9) finally die.
What you see are hanging requests with the timeout of 25 seconds that are used to implement the live update feature. As I already said, the keep-alive message ("h") is used so that the browser doesn't think it's not going to get a response. "h" simply means "nothing happens".
Chrome supports web sockets, so Meteor could have used them with a fallback to long requests, but, frankly, hanging requests are not at all bad once you've got them implemented (sure, the browser connection limit still applies).

Change content in all users' pages when at least one user triggers an event, without a page refresh

What I want to do: when one user checks a checkbox (just as an example; it could be any event), everyone who has the same page open sees the change immediately, without a page refresh.
I know how to do it with Ajax and setTimeout (or setInterval): with setTimeout we open a stream where an infinite loop checks whether an event was triggered and, if so, we update the content with Ajax; or we set an interval to refresh the page every so often.
I'm looking for a more optimized and cross-browser solution, so any help will be appreciated.
Search google for: Comet or long polling
For the solution to be cross-browser, you have to bend what web servers/HTTP were designed for: a browser sends a request, is served a page as quickly as possible, and the connection is closed. There are new methods in new browsers, and new definitions in the HTTP model, but they will not work in old browsers.
The basic principle behind long polling is that a request is sent to the server, which idles around, pretending to be generating the page, and if any event happens that requires the client to be updated, the server sends the information as the response to that earlier request. This is inefficient in terms of server resources, but about as snappy as you can get in terms of user experience.
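As a rough sketch of the client side for the checkbox example (the /change and /changes endpoints are assumptions; /changes is a long-poll request the server holds open until some user changes the box):

var box = document.getElementById("shared-checkbox"); // made-up id

// Tell the server when this user changes the box.
box.addEventListener("change", function () {
  fetch("/change", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ checked: box.checked })
  });
});

// Apply changes made by other users as soon as the server reports them.
function listenForChanges() {
  fetch("/changes")
    .then(function (response) { return response.json(); })
    .then(function (data) {
      box.checked = data.checked;
      listenForChanges(); // re-issue the long poll
    })
    .catch(function () {
      setTimeout(listenForChanges, 3000); // retry after a short pause on errors
    });
}
listenForChanges();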

Ajax Security

We have a heavily Ajax-dependent application. What are good ways of making sure that requests to the server-side scripts are not coming from standalone programs but from an actual user sitting at a browser?
There aren't any really.
Any request sent through a browser can be faked up by standalone programs.
At the end of the day does it really matter? If you're worried then make sure requests are authenticated and authorised and your authentication process is good (remember Ajax sends browser cookies - so your "normal" authentication will work just fine). Just remember that, of course, standalone programs can authenticate too.
What are good ways of making sure that requests to the server-side scripts are not coming from standalone programs but from an actual user sitting at a browser?
There are no ways. A browser is indistinguishable from a standalone program; a browser can be automated.
You can't trust any input from the client side. If you are relying on client-side co-operation for any security purpose, you're doomed.
There isn't a way to automatically block "non browser user" requests hitting your server side scripts, but there are ways to identify which scripts have been triggered by your application and which haven't.
This is usually done using something called "crumbs". The basic idea is that the page making the AJAX request should generate (server side) a unique token (which is typically a hash of unix timestamp + salt + secret). This token and timestamp should be passed as parameters to the AJAX request. The AJAX handler script will first check this token (and the validity of the unix timestamp e.g. if it falls within 5 minutes of the token timestamp). If the token checks out, you can then proceed to fulfill this request. Usually, this token generation + checking can be coded up as an Apache module so that it is triggered automatically and is separate from the application logic.
Fraudulent scripts won't be able to generate valid tokens (unless they figure out your algorithm) and so you can safely ignore them.
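A rough sketch of the crumb idea in Node (an assumption: the answer suggests an Apache module and a plain hash; this version uses an HMAC of the timestamp with a server-side secret, and the names are made up):

const crypto = require("crypto");

const SECRET = "keep-this-only-on-the-server"; // made-up secret

function makeCrumb(timestamp) {
  return crypto.createHmac("sha256", SECRET).update(String(timestamp)).digest("hex");
}

// When rendering the page: embed these two values and send them with every AJAX request.
const ts = Date.now();
const crumb = makeCrumb(ts);

// In the AJAX handler: check freshness and that the crumb matches.
function crumbIsValid(ts, crumb) {
  const fresh = Date.now() - Number(ts) < 5 * 60 * 1000; // within 5 minutes, as above
  const expected = makeCrumb(ts);
  return fresh &&
    crumb.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(crumb), Buffer.from(expected));
}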
Keep in mind that storing a token in the session is also another way, but that won't buy any more security than your site's authentication system.
I'm not sure what you are worried about. From where I sit I can see three things your question can be related to:
First, you may want to prevent unauthorized users from making a valid request. This is resolved by using a browser cookie to store a session ID. The session ID needs to be tied to the user, be regenerated every time the user goes through the login process, and must have an inactivity timeout. Any request coming in without a valid session ID you simply reject.
Second, you may want to prevent a third party from running replay attacks against your site (i.e. sniffing an innocent user's traffic and then sending the same calls over). The easy solution is to go over HTTPS. The SSL layer will prevent somebody from replaying any part of the traffic. This comes at a cost on the server side, so you want to be sure you really cannot take that risk.
Third, you may want to prevent somebody from using your API (that's what AJAX calls are, in the end) to implement their own client to your site. For this there is very little you can do. You can always look for the appropriate User-Agent, but that's easy to fake and will probably be the first thing somebody trying to use your API thinks of. You can always implement some statistics, for example looking at the average AJAX requests per minute on a per-user basis and seeing whether some users are way above your average. It's hard to implement and it's only useful if you are trying to prevent automated clients reacting faster than a human can.
Is Safari a web browser for you?
If it is, note that the same engine is embedded in many applications, for example those using Qt's QtWebKit libraries. So I would say there is no way to recognize it.
A user can forge any request they want, faking headers like User-Agent however they like...
One question: why would you want to do what you ask for? What's the difference for you whether they request from a browser or from anything else?
Can't think of one reason you'd call "security" here.
If you still want to do this, for whatever reason, think about making your own application with a browser embedded. It could somehow authenticate to the application in every request; then you'd only send valid responses to your application's browser.
Users would still be able to reverse engineer the application, though.
Interesting question.
What about browsers embedded in applications? Would you mind those?
You can probably think of a way of "proving" that a request comes from a browser, but it will ultimately be heuristic. The line between browser and application is blurry (e.g. embedded browser) and you'd always run the risk of rejecting users from unexpected browsers (or unexpected versions thereof).
As has been mentioned before, there is no way of accomplishing this... But there is one thing worth noting, useful for preventing CSRF attacks that target the specific AJAX functionality: setting a custom header with the help of the AJAX object, and verifying that header on the server side.
And if, as the value of that header, you set a random (one-time-use) token, you can prevent automated attacks.
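A small sketch of that idea (the header name, the meta tag, and the /api/update endpoint are all illustrative; the server side would compare the header against the token it issued for the session):

// The server renders a one-time token into the page, e.g. in a meta tag (assumption).
var token = document.querySelector('meta[name="csrf-token"]').content;

fetch("/api/update", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-CSRF-Token": token // custom header a plain cross-site form post cannot set
  },
  body: JSON.stringify({ checked: true })
});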
