Time to first byte with javascript?

Is there any modern browser that, via javascript, exposes time to first byte (TTFB) and/or time to last byte (TTLB) on an HTTP request without resorting to any plugin?
What I would like is a javascript snippet that can access these values and post them back to the server for performance monitoring purposes.
Clarification:
I am not looking for any js timers or developer tools. What I am wondering and hoping is whether there are any browsers that measure load times and expose those values via javascript.

What you want is the W3C's PerformanceTiming interface. Browser support is good (see this survey from Sep 2011). Like you speculated in response to Shadow Wizard's answer, these times are captured by the browser and exposed to javascript in the window object. You can find them in window.performance.timing. The endpoint of your TTFB interval will be window.performance.timing.responseStart (defined as "the time immediately after the user agent receives the first byte of the response from the server, or from relevant application caches or from local resources"). There are some options for the starting point, depending on whether you're interested in time to unload the previous document, or time to resolve DNS, so you probably want to read the documentation and decide which one is right for your application.
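For example, here is a minimal sketch that computes TTFB from the timing object and reports it back to the server; the /perf-log endpoint is a hypothetical example, and requestStart is used as the starting point (swap in navigationStart if you want the full user-perceived delay):
<script type="text/javascript">
window.addEventListener("load", function () {
    var t = window.performance.timing;
    // requestStart -> responseStart isolates the network round trip
    // plus server think time
    var ttfb = t.responseStart - t.requestStart;
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/perf-log", true); // hypothetical reporting endpoint
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(JSON.stringify({ ttfb: ttfb }));
});
</script>
Running it from the load event keeps the reporting request itself from competing with the page's own resources.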

I fear it's just not possible.
JavaScript becomes "active" only after part of the response has been sent by the server, accepted by the browser and parsed.
What you ask is kind of like asking "Can I measure the weight of a cake after eating it?" - you need to weigh the cake first, and only then eat it.

You can see the response time in the Chrome Developer Tools.

It's impossible to get the true TTFB in JS, as the page gets a JS context only after the first byte has been received. The closest you can get is with something like the following:
<script type="text/javascript">var startTime = (new Date()).getTime();</script>
very early in your <head> tag. Then, depending on whether you want to check when the HTML finishes or when everything finishes downloading, you can either put a similar tag near the bottom of your HTML page (and subtract the values) and then do an XHR back to the server (or set a cookie, which you can retrieve server side on the next page request), or listen to the onload event and do the same.
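A minimal sketch of the onload variant, posting the result back to a hypothetical /timing endpoint:
<script type="text/javascript">
window.onload = function () {
    // startTime was recorded by the snippet at the top of <head>
    var loadTime = (new Date()).getTime() - startTime;
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/timing", true); // hypothetical reporting endpoint
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.send("loadTime=" + loadTime);
};
</script>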

Related

Does PerformanceTiming.responseStart point to the start of the headers or of the HTML?

I have a question about PerformanceTiming.responseStart.
Is it the time to first byte of the headers or the time to first byte of the HTML? These times can be very different in some projects, e.g. when progressive page rendering is used.
[...] must return the time immediately after the user agent receives the first byte of the response from the server
http://www.w3.org/TR/2012/REC-navigation-timing-20121217/#dom-performancetiming-responsestart
The response is everything including the HTTP headers; this is even before the HTML <head>. It's the moment when data is on the network socket and being read for the first time.
Here is a neat little animation and explanation page about that: https://varvy.com/performance/responsestart.html
When a resource is retrieved via the network (rather than the application cache) responseStart represents part of the HTTP request / response timeline.
It is this point in time in your browser's network tool (F12).

How to get the number of seconds a page loads after all data are shown on the page?

Is it possible to get the TOTAL NUMBER OF SECONDS it takes for a page to fetch and display its data?
Like from the moment I click a link to the moment all data are displayed on the page, across OnInit, OnRender, PageLoad, OnPreRender and so on. Is it possible?
thanks
Yes, it's possible. You just need to add the following to your web.config file and run the application; it will show you the loading time right after the page renders. Scroll down to see the details.
<system.web>
<trace pageOutput="true" requestLimit="10" enabled="true" localOnly="true" traceMode="SortByTime" mostRecent="true"/>
</system.web>
Note: add the <trace> element inside the <system.web> element, which already exists in your web.config file.
You can easily check the time PHP needs to run the script: start to finish.
Simply store the time at start and end. Look here:
http://nl.php.net/manual/en/function.microtime.php
However, this is not the time the user experiences.
For example: If PHP needs 0.1 sec to produce the HTML, and the HTML contains 100 big images, the actual pageloading takes a lot longer than 0.1 sec.
The actual time the end user experiences depends on a lot of factors, like the webserver that sits in between (and that needs to invoke PHP), network speed, caching, etc.
I think your best bet is to approach this via Javascript, and use the onLoad event handler that can be attached to body.
Use some external window to do the timing between clicking and the firing of onload.
Also, keep in mind that results might differ for other visitors with different cache settings, different network speeds, etc. So you can only get an approximation.
It's possible, but kind of complicated, because the load time from click to full load consists of so many things:
request to the server (connection round trip, sometimes a DNS lookup, etc.)
request processing server side (this you can measure inside your ASP code)
request load till any of the events fire
etc.
Long story short: it would be impossible to measure it with any single method, and combining many would be a pain and would still not include all the parts to be measured.
In this particular case the best thing you could do is: bind to onclick (on the link) an ajax request with the current timestamp (millisecond precision), do a 2nd request with the current timestamp onload, and subtract the two.
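A rough sketch of that idea; the /mark endpoint that records the two timestamps server side is hypothetical:
// on the page containing the link
document.getElementById("myLink").onclick = function () {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/mark?event=start&t=" + (new Date()).getTime(), true);
    xhr.send();
};
// on the target page, after everything has loaded
window.onload = function () {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/mark?event=end&t=" + (new Date()).getTime(), true);
    xhr.send();
};
// the server subtracts the two timestamps it received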
Send a variable from the server containing its current time before displaying the page.
On the HTML page, run a javascript function on onload(). This function is called after the page has loaded. Get the current time again in this function.
Compare the two time variables, the one sent from the server and the one from the onload() function, and you will get the number of seconds.
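A minimal sketch of that approach; the <?= serverTimeMs ?> placeholder stands in for whatever server-side templating you use:
<script type="text/javascript">
// the server injects its current time (in ms) when rendering the page
var serverTime = <?= serverTimeMs ?>;
window.onload = function () {
    var seconds = ((new Date()).getTime() - serverTime) / 1000;
    alert("Page displayed after " + seconds + " seconds");
};
</script>
Note that this mixes the server clock with the client clock, so any skew between the two shows up directly in the measurement.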

Disable browser cache

I implemented a REST service and I'm using a web page as the client.
My page has some javascript functions that perform the same HTTP GET request to the REST server several times and process the replies.
My problem is that the browser caches the first reply and doesn't actually send the following requests.
Is there some way to force the browser to execute all the requests without caching?
I'm using Internet Explorer 8.0.
Thanks
Not sure if it can help you, but sometimes I add a random parameter to the URL of my request in order to avoid it being cached.
So instead of having:
http://my-server:8080/myApp/foo?bar=baz
I will use:
http://my-server:8080/myApp/foo?bar=baz&random=123456789
Of course, the value of the random parameter is different for every request. You can use the current time in milliseconds for that.
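For example, using the URL from above (the current time in milliseconds is unique enough per request):
var url = "http://my-server:8080/myApp/foo?bar=baz&random=" + (new Date()).getTime();
var xhr = new XMLHttpRequest();
xhr.open("GET", url, true); // the changing query string defeats the cache
xhr.send();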
Not really. This is a known issue with IE; the classic solution is to append a random parameter to the end of the query string for every request. Most JS libraries do this natively if you ask them to (jQuery's cache: false AJAX option, for instance).
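With jQuery that looks like this; cache: false makes jQuery append a _=<timestamp> parameter to the URL for you:
$.ajax({
    url: "http://my-server:8080/myApp/foo",
    data: { bar: "baz" },
    cache: false, // jQuery appends _=<timestamp> to every request URL
    success: function (reply) {
        // process the reply
    }
});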
Well, of course you don't actually want to disable the browser cache entirely; correct caching is a key part of REST and the fact that it can (if properly followed by both client and server) allow for a high degree of caching while also giving fine control over the cache expiry and revalidation is one of the key advantages.
There is though an issue, as you have spotted, with subsequent GETs to the same URI from the same document (as in DOM document lifetime, reload the page and you'll get another go at that XMLHttpRequest request). Pretty much IE seems to treat it as it would a request for more than one copy of the same image or other related resource in a web page; it uses the cached version even if the entity isn't cacheable.
Firefox has the opposite problem, and will send a subsequent request even when caching information says that it shouldn't!
We could add a random or time-stamped bogus parameter at the end of a query string for each request. However, this is a bit like screaming "THIS IS SPARTA!" and kicking our hard-won download into a deep pit that no Health & Safety inspector considered putting a safety rail around. We obviously don't want to repeat a full unconditional request when we don't need to.
However, this behaviour has a time component. If we delay the subsequent request by a second, then IE will re-request when appropriate while Firefox will honour the max-age and expires headers and not re-request when needless.
Hence, if two requests could be within a second of each other (either we know they are called from the same function, or there's the chance of two events triggering it in close succession) using setTimeout to delay the second request by a second after the first has completed will make it use the cache correctly, rather than in the two different sorts of incorrect behaviour.
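A sketch of that delay, where refresh(callback) stands in for a hypothetical function that performs the GET and invokes callback once the response has arrived:
refresh(function () {
    // wait a second after the first request completes before issuing the
    // second one, so both IE and Firefox apply normal caching rules to it
    setTimeout(function () {
        refresh(function (reply) {
            // handle the second response
        });
    }, 1000);
});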
Of course, a second's delay is a second's delay. This could be a big deal or not, depending primarily on the size of the downloaded entity.
Another possibility is that something that changes so rapidly shouldn't be modelled as GETting the state of a resource at all, but as POSTing a request for a current status to a resource. This does smell heavily of abusing REST and POSTing what should really be a GET though.
Which can mean that on balance the THIS IS SPARTA approach of appending random stuff to query strings is the way to go. It depends, really.

Monitoring User Sessions to Prevent Editing Conflict

I'm working on something similar to a pastebin (yeah, it's that generic) but allowing for multiple user editing. The obvious problem is that of multiple users attempting to edit the same file. I'm thinking along the lines of locking down the file when one user is working on it (it's not the best solution, but I don't need anything too complex), but to prevent/warn the user I'd obviously need a system for monitoring each user's edit sessions. Working with database and ajax, I'm thinking of two solutions.
The first would be to have the edit page ping the server at an arbitrary interval, say a minute, and update the edit session entry in the db. Then the next time a script requests to edit, it checks for the most recent ping, and if the most recent was another arbitrary time ago, say five minutes, then we assume that the previous user has quit and the file can be edited again. Of course, the problem with this method is that the assumption that the previous user has quit is simply an assumption. He could have a flaky wi-fi connection and simply drop out for ten minutes, all the time with the window still open.
Of course, to deal with this problem, we'd have to have the server respond to new requests from previously closed sessions with an error, telling the client side to point out to the user that his session has ended, and then deal with it by, say, saving it as another file on the server and asking the user to manually merge it, etc. It goes without saying that this is rather horrible for the end user.
So I've come around to thinking of another solution. It may also be possible to get an unload event to fire when the user's session ends, but I cannot be sure whether this will work reliably.
Does anybody have any other, more elegant solution to this problem?
If you expect the number of concurrent edits to the file to be minor, you could just store a version number for the file in the db, and when the user downloads the file into their browser they also get the version number. They are only allowed to upload their changes if the version number matches. First one to upload wins. When a conflict is detected you should send back the latest file and the user's changes so that the user can manually merge in the changes. The advantage is that this works even if it's the same user making two simultaneous edits. If this feature ends up being frequently used you could add client-side merging similar to what a diff tool uses (but you might need to keep the old revisions in that case).
You're probably better off going for a "merge" solution. Using this approach you only need to check for changes when the user posts their document to the server.
The basic approach would be:
1. User A gets the document for editing, document is at version 1
2. User B gets the document for editing, document is at version 1
3. User B posts some changes, including the base version number of 1
4. Server updates document, document now at version 2
5. User B posts some changes, including the base version number of 1
6. Server responds saying the document has changed since the user started editing, and sends the user the new document and their version - the user will then need to merge their changes into document version 2, and post back to the server. The user is essentially now editing document version 2
7. User A posts some changes, including the version number of 2
8. Server updates the document, which is now at version 3
You can still do a "ping" every minute, to get the current version number - you already know what version they're editing, so if a new version is available you can let them know and let them download the latest version to make their changes into.
The main benefit of this approach is that users never lock files, so you don't need any arbitrary "time-outs".
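A client-side sketch of the version check; the /document endpoint and the 409 conflict response are hypothetical conventions:
function saveDocument(doc) {
    // doc is { id: ..., version: ..., text: ... } as fetched from the server
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/document/" + doc.id, true);
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) return;
        if (xhr.status === 409) {
            // version conflict: the server returns the latest document so
            // the user can merge their changes into it
            var latest = JSON.parse(xhr.responseText);
            mergeAndRetry(doc, latest); // hypothetical merge UI
        } else {
            doc.version += 1; // accepted; now editing the new version
        }
    };
    xhr.send(JSON.stringify({ version: doc.version, text: doc.text }));
}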
I would say you are on the right track. I would probably implement a hybrid solution:
Have a single table called "active_edits" or something like that, with columns for the document_id, the user, and the last_update_time. Let's say your ping time is 1 minute and your timeout is 5 minutes. So a use-case would look like this:
Bob opens a document. It checks the last_update_time. If it is over 5 minutes ago, update the table with Bob and the current time. If it is not, someone else is working on the document, so give an error message. Assuming it is not being edited, Bob works on the document for a while and the client pings an update time every minute.
I would say do include a "finish editing" button and an onunload handler. Onunload, from what I understand, can be flaky, but you might as well add it. Both of these would send a single send-only post to the server saying that Bob is done. Even if Bob doesn't hit "finish editing" and onunload flakes out, the worst case is that another user would have to wait 5 more minutes to edit. The advantage is that if these normally work (a fair assumption) then the system works a bit better.
In the case you described where a Bob is on a bad wireless connection or takes a break: I would say this isn't a big deal. Your ping function should make sure that the document hasn't been taken over by someone else since Bob's last ping. If it has, just give Bob a message saying "someone else has started working on the document" and give them the option to reload.
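A sketch of the client side of that, with hypothetical /ping and /done endpoints:
var DOC_ID = 42; // hypothetical document id

// heartbeat: refresh last_update_time once a minute while editing
var heartbeat = setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/ping?doc=" + DOC_ID, true);
    xhr.send();
}, 60 * 1000);

// single send-only post saying that Bob is done
function finishEditing() {
    clearInterval(heartbeat);
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/done?doc=" + DOC_ID, true);
    xhr.send();
}

document.getElementById("finishButton").onclick = finishEditing; // hypothetical button
window.onunload = finishEditing; // best effort; onunload can be flaky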
EDIT: Also, I would be looking into window.onbeforeunload, not onunload. I believe it executes earlier. I believe this is the function websites (slashdot included) use to allow you to confirm that you actually want to leave the page. I think it works in the major browsers except Opera.
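The classic pattern for that confirmation (older browsers show the returned string in the dialog; newer ones display generic text instead):
window.onbeforeunload = function () {
    // returning a string triggers the "are you sure?" confirmation dialog
    return "You are still editing this document. Leave anyway?";
};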
As with this SO question How do you manage concurrent access to forms?, I would not try to implement pessimistic locking. It is simply too difficult to get working reliably in a stateless environment. Instead, I would use optimistic locking. However, in this case I used something like a SHA hash of the file to determine if the file had changed since the user last read from the file. For each request to change the file, you would run a SHA hash of the file bytes and compare it with the version you pulled when you first read the data. If it had changed, you reject the change and either force the user to do their edits again (pulling a fresh copy of the file contents) or you provide fancier conflict resolution.

Ajax using JS, but WITHOUT XMLhttp AND using the same socket every time?

Is it possible to communicate and update data in a page without reloading, but without using the XMLHttpRequest object, AND sharing the same connection or socket every time (so, without closing the connection for every request)?
Make your server send back a "page" which is the usual HTML followed by a series of <script> tags that are output slowly over time. The whole thing works over the single socket that delivered the HTML page.
You can't communicate back from the client to the server that way - you'd need to make a new request to the server each time you did that, but with HTTP 1.1 that will reuse the same socket each time anyway.
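A sketch of what that streamed "page" might look like; handleUpdate is a hypothetical function defined in the initial HTML, and the server flushes each <script> tag as new data becomes available:
<html>
<head>
<script type="text/javascript">
function handleUpdate(data) {
    // apply the pushed data to the page
    document.getElementById("status").innerHTML = data.message;
}
</script>
</head>
<body>
<div id="status"></div>
<!-- the server keeps the connection open and flushes these over time -->
<script>handleUpdate({message: "first update"});</script>
<script>handleUpdate({message: "second update"});</script>
</body>
</html>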
No.
You can change the content on the page with just Javascript, however if you want content from the server, you're going to have to use an XMLHttpRequest object.
Edit: Looking at the link above about "long polling"
My answer changes depending upon what you mean. Do you mean you don't want to use an XMLHttpRequest object at any level? Or do you mean that you don't want to have to use the raw XMLHttpRequest object?
Because in the end jQuery is going to use an XMLHttpRequest object. However if you just don't want to have to deal with the raw object, then you can use something like jQuery.
Looking at the answer above:
Okay, I understand what you were talking about... however the page you are linking to is talking about something completely different.
