Right now I'm using AJAX to pull in a list of active streams (TwitchTV) and their viewers, and I'm requesting this every second. At times the list of streams to check can get quite lengthy, so I plan on splitting the ajax requests into 2 or 3 parts:
1) Get Number of Viewers for Current Stream (Check every 1 Second)
2) Split Stream in Half and Check 1st Half of List for Active Streamers (Check every 5 Seconds)
3) Check 2nd Half of List for Active Streamers (Check every 5 Seconds)
So I would have 3 requests running simultaneously, but I'm worried about what the load time will come down to. Since it is constantly pulling in data, would it make the page slower? Would the user likely notice? Is it better to keep 1 ajax request for big amounts of data, or is it better to use multiple ajax requests for smaller pieces of data? Is ajax really the best way to pull in constantly changing live data?
The answer to your various questions is probably "It depends":
The ajax requests by themselves shouldn't make anything slower. These are asynchronous requests, so they will only actually cause the user's browser any significant (and probably still not noticeable) load when the request completes.
One thing that could potentially slow your app down (or cause the user to notice in an unpleasant way) is the DOM manipulation when the request completes. Changing your current number of streaming users in-place probably won't hurt, but depending on the number of streams and how you are displaying them in a list, redrawing that list could be very expensive and cause noticeable lag.
An alternative to using Ajax (depending on which browsers you need to support) is to use websockets. This way you can keep a connection open and the server can tell the application when the data has changed, instead of the client having to poll for it.
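A minimal sketch of the websocket approach (the wss:// URL, the message shape, and updateStreamList() are made up, not part of any particular API):

var socket = new WebSocket("wss://example.com/streams");

socket.onmessage = function (event) {
    // The server pushes an update only when something changed,
    // so there is no polling loop on the client.
    var update = JSON.parse(event.data);
    updateStreamList(update.streams);   // placeholder for the existing DOM update
};

socket.onclose = function () {
    // Optionally reconnect after a short delay.
    setTimeout(function () { /* re-create the socket here */ }, 5000);
};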
Why do you need to break your list up into a first half and a second half?
One way to cut down on the amount of data you're sending back and forth might be to send some sort of signal indicating the last bit of data you received. For example, when your timeline on twitter.com updates every few seconds, the ajax request sends along the id of the most recent tweet it received, so that the server knows not to waste time sending any data older than that. Depending on your use case this might be effective.
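A rough sketch of that idea with jQuery (the /viewers endpoint and its since parameter are made up):

// Poll every second, but tell the server the newest id we already
// have so it only sends what changed since then.
var lastId = 0;

function poll() {
    $.getJSON("/viewers", { since: lastId }, function (data) {
        if (data.items.length > 0) {
            lastId = data.items[data.items.length - 1].id;
            updateViewerCounts(data.items);   // placeholder for the existing DOM update
        }
    });
}

setInterval(poll, 1000);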
I've been doing some research about infinite scrolling and came across what people call "lazy loading". I have done it on one of my website elements (a chatbox), and I have 2 ways to do it, but I can't decide which one is more efficient than the other. Here are my ways:
Let's say I have 1,000,000 rows of data in the database, all of which I need to fetch.
1st way:
Load content from the database, truncate it in the server-side code (PHP), and only show the first 50.
Upon user scroll on the page, another request is sent to fetch the next 50 results and display them, and so on.
2nd way:
Load content from the database, render it in my HTML as hidden elements but display only the first 50; then upon user scroll, show 50 more hidden elements.
The 1st way is requesting from the server whenever the need to display more results arises.
The 2nd way just does 1 request from the server, then hides the result except for the first few that should be shown.
I have no problem doing either of the two.
Now the dilemma: the 1st way sends more HTTP requests, and the 2nd way (though sending only 1 HTTP request) fetches a huge amount of data in a single request, which can be slow.
Which method is "better", and if there are other options, please let me know.
I know this is an old question, but I want to give my opinion:
1st way
This is always preferred, especially with that number of rows. It is much more efficient to request only a set number of rows; if the user wants to see more (e.g. clicks to the next page), another request is made to the server to fetch the next page. The response time will be much better, and it will also be easier for the client to manipulate the list if there is other processing that needs to be done before it is returned to the user.
You also need to make sure that you apply the limits in your DB query, otherwise you will be loading all the objects into memory, which is not efficient.
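A rough client-side sketch of the 1st way (the /items endpoint and its offset/limit parameters are made up; the PHP side would apply the same values in its SQL, e.g. LIMIT 50 OFFSET n):

var offset = 0, pageSize = 50, loading = false;

function loadMore() {
    if (loading) return;               // don't fire duplicate requests while one is in flight
    loading = true;
    $.getJSON("/items", { offset: offset, limit: pageSize }, function (rows) {
        appendRows(rows);              // placeholder for the existing rendering code
        offset += rows.length;
        loading = false;
    });
}

$(window).on("scroll", function () {
    // Load the next page when the user nears the bottom of the page.
    if ($(window).scrollTop() + $(window).height() > $(document).height() - 200) {
        loadMore();
    }
});

loadMore();   // initial page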
2nd way
If you fetch 1,000,000 rows at once, the user will have to wait until the response comes back, which can result in a bad user experience. Also, as the number of rows returned keeps growing, the response time will keep increasing and you can eventually hit a time-out. Consider as well that you will be loading all those objects into memory on your server before the response is returned.
The only use case I see for this approach is a list that doesn't grow over time, or one with a set number of items that doesn't affect the response time.
I'm working on an app a little like Vine, where several looped videos are displayed on the user's screen. I need to count one view per loop. That means if the user repeats the video 5 times, it counts as 5 views. And this is the model I want to use for every video in my app.
I use Parse for my back-end and a webview to show the videos. It means that I use Javascript to send requests to Parse, with Ajax calls.
My problem is that I don't really know how to limit the number of requests sent to Parse when I add a view on a video.
Maybe I should save the video views to a MySQL database and then, once a day with a cron task, save the MySQL results to Parse? I don't really know how to proceed, but I really need to limit the number of requests to Parse.
How would you design this?
Thanks!
My first thought is to not optimize too early. There should be plenty of time, as you accrue zillions of users, to improve the design.
If you want to improve it early (and still use Parse), keep the object that tracks views "pinned" locally (see this blog entry). Update the view count as often as needed, then update Parse on an NSTimer.
The app may become inactive at any time, and if unsaved views have been counted since last time the timer fired, then there's one more problem to solve. The app delegate gets told that applicationDidEnterBackground, and can request a moment to finish "one last thing". See here under "Executing Finite Length Tasks".
There (in the dispatch block suggested by the sample code), save the object that counts views (saveInBackgroundWithBlock:), invalidate the timer, and tell iOS you're done with [application endBackgroundTask:bgTask];
What I would do is store the videos somewhere else and save 1 view per click.
You can save this click in the background using something like this:
userClick.saveInBackground()
It saves the click in a background process so the user doesn't have to wait for the sync with Parse.
Note: you should use Bolts (https://github.com/BoltsFramework/Bolts-iOS) to get saveInBackground() working.
* edit *
Maybe it's smart to sync with Parse every x clicks, maybe 5 or 10, to limit the number of requests.
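A rough sketch of that on the JavaScript side (sendViewsToParse() is a placeholder for whatever Ajax call or Parse SDK save you already use):

// Count views locally and only talk to Parse every FLUSH_EVERY views.
var pending = {};          // videoId -> views not yet sent
var FLUSH_EVERY = 10;

function countView(videoId) {
    pending[videoId] = (pending[videoId] || 0) + 1;
    if (pending[videoId] >= FLUSH_EVERY) {
        sendViewsToParse(videoId, pending[videoId]);   // hypothetical helper
        pending[videoId] = 0;
    }
}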
I have a question in terms of code and NOT user experience. I have this JS:
$(document).on( "click", "input:radio,input:checkbox", function() {
getContent($(this).parent(),0);
});
The above JS gets the values from the radios and checkboxes, and it refreshes the page to show dependencies. For example, if I check "yes" and the dependency is "on yes, show the text box", the above works!
What I want to know is whether there is a better way to do the same thing, in a more friendly way, as this at times makes the pages slow. Especially if I do a lot of ticks/checks in one go, I miss a few, as the parent refreshes!
If you have to hit your server in getContent(), then it will inevitably be slow.
However, you can save a lot if you send all the elements once instead of hitting the server each time a change is made.
Yet, if creating one super large page is not an option, then you need to keep your getContent() function. There is one possible solution, in case you did not already implement it: cache all the data that you queried earlier.
So you could have an object (a map) whose keys identify the data you're interested in. If the key is defined, then the data is already available and you return and use that data directly from the cache. Otherwise, you have to hit the server.
One thing to do, since you mentioned slowness as you 'tick' things back and forth, is to never send more than one request at a time to the server (with a timeout in case the server never replies). So the process here is (a rough sketch follows the list):
1) Need data 'xyz'
2) Is that data already cached? If yes, skip to step 6 and use the cached data
3) Is a request already being worked on? If yes, push this request onto the queue and return
4) Send a request to the server, which blocks any further requests until the answer for 'xyz' is received
5) Receive the answer, cache the data in an object (map), and release the request queue
6) Make use of the data as required
7) Check the request queue; if it is not empty, pop the next request and start processing again from step 2
The request process is expected to run on a timer because (1) it can time out and (2) it needs to run in the background (without blocking the GUI).
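A rough sketch of that queue in jQuery-style JavaScript (the /content endpoint and the key format are made up; the timeout value is arbitrary):

// Only one request is in flight at a time; answers are cached, and a
// timeout guards against a server that never replies.
var cache = {}, queue = [], busy = false;

function request(key, onReady) {
    if (cache.hasOwnProperty(key)) {              // step 2: already cached
        onReady(cache[key]);                      // step 6: use it directly
        return;
    }
    queue.push({ key: key, onReady: onReady });   // step 3: queue it
    processQueue();
}

function processQueue() {
    if (busy || queue.length === 0) return;
    busy = true;
    var job = queue.shift();
    $.ajax({                                      // step 4: send the request
        url: "/content",                          // hypothetical endpoint
        data: { key: job.key },
        timeout: 5000,                            // give up if the server never replies
        success: function (data) {
            cache[job.key] = data;                // step 5: cache the answer
            job.onReady(data);                    // step 6: use the data
        },
        complete: function () {
            busy = false;
            processQueue();                       // step 7: next queued request
        }
    });
}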
I am not really sure it is possible in JavaScript, so I thought I'd ask. :)
Say we have 100 requests to be done and want to speed things up.
What I was thinking of doing is:
Create a loop that will launch the first 5 ajax calls
Wait until they all return (success - call a function to update the dom / error) - not sure how, maybe with a global counter?
Repeat until all requests are done.
Considering that browser JavaScript does not support threads, can we "exploit" the async functionality to do that?
Do you think it would work, or there are inherent problems doing that in JavaScript?
Yes, I have done something similar to this before. The basic process is:
Create a stack to store your jobs (requests, in this case).
Start out by executing 3 or 4 of the requests.
In the callback of the request, pop the next job out of the stack and execute it (giving it the same callback).
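A sketch of that pattern with jQuery (the URLs and updateDom() are placeholders):

// A stack of jobs with a small pool of "workers": each completed
// request pulls the next job off the stack.
var jobs = [];
for (var i = 0; i < 100; i++) {
    jobs.push("/data/" + i + ".json");   // placeholder URLs
}

function next() {
    if (jobs.length === 0) return;
    var url = jobs.pop();
    $.getJSON(url, function (data) {
        updateDom(data);                 // placeholder for your DOM update
    }).always(next);                     // success or error, start the next job
}

// Start 4 workers, so at most 4 requests are in flight at once.
for (var w = 0; w < 4; w++) {
    next();
}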
I'd say, the comment from Dancrumb is the "answer" to this question, but anyway...
Current browsers do limit the number of parallel HTTP requests, so you can easily just start all 100 requests immediately; the browser will take care of sending them as fast as possible, limited to a decent number of parallel requests.
So, just start them all immediately and trust on the browser.
However, this may change in the future (the number of parallel requests that a browser sends increases as end-user internet bandwidth increases and technology advances).
EDIT: You should also think and read about the meaning of "asynchronous" in a JavaScript context. Asynchronous here just means that you give up control over something to some other part of the system. So "sending" an async request just means that you tell the browser to do so! You do not control the browser; you just tell it to send that request and to notify you about the outcome.
It's actually slower to break up 100 requests and post them in batches of 5, waiting for each batch to complete before you send the next. You might be better off simply sending all 100 requests; remember, JavaScript is single-threaded, so it can only process one response at a time anyway.
A better way is set up a batch request service that accepts something like:
/ajax_batch?req1=/some/request.json&req2=/other/request.json
And so on. Basically you send multiple requests in a single HTTP request. The response of such a request would look like:
[
    {"reqName":"req1","data":{}},
    {"reqName":"req2","data":{}}
]
Your ajax_batch service would resolve each request and send back the results in the proper order. Client side, you keep track of what you sent and what you expect, so you can match the results to the correct requests. The downside is that it takes quite some coding.
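A rough sketch of the client side (the /ajax_batch URL follows the format above; the server-side part is left out):

// Build one batched URL, then match each result back to its request
// by the reqN name.
function batchRequest(urls, onEach) {
    var params = [];
    for (var i = 0; i < urls.length; i++) {
        params.push("req" + (i + 1) + "=" + encodeURIComponent(urls[i]));
    }
    $.getJSON("/ajax_batch?" + params.join("&"), function (results) {
        // results is expected to look like the array shown above
        for (var j = 0; j < results.length; j++) {
            onEach(results[j].reqName, results[j].data);
        }
    });
}

// Usage:
batchRequest(["/some/request.json", "/other/request.json"], function (name, data) {
    console.log(name, data);
});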
The speed gain would come entirely from a massive reduction of HTTP requests.
There's a limit on how many requests you can combine this way, because the URL length has a limit, IIRC.
DWR does exactly that, AFAIK.
I implemented a REST service and I'm using a web page as the client.
My page has some JavaScript functions that perform the same HTTP GET request to the REST server several times and process the replies.
My problem is that the browser caches the first reply and does not actually send the following requests.
Is there some way to force the browser execute all the requests without caching?
I'm using Internet Explorer 8.0.
Thanks
Not sure if it can help you, but sometimes I add a random parameter to the URL of my request in order to avoid it being cached.
So instead of having:
http://my-server:8080/myApp/foo?bar=baz
I will use:
http://my-server:8080/myApp/foo?bar=baz&random=123456789
Of course, the value of the random parameter is different for every request. You can use the current time in milliseconds for that.
Not really. This is a known issue with IE, the classic solution is to append a random parameter at the end of the query string for every request. Most JS libraries do this natively if you ask them to (jQuery's cache:false AJAX option, for instance)
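For example, with jQuery (which appends a unique "_=<timestamp>" parameter for you when cache is false):

$.ajax({
    url: "http://my-server:8080/myApp/foo?bar=baz",
    cache: false,        // jQuery adds a cache-busting timestamp parameter to each request
    dataType: "json",
    success: function (data) {
        // handle the reply
    }
});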
Well, of course you don't actually want to disable the browser cache entirely; correct caching is a key part of REST and the fact that it can (if properly followed by both client and server) allow for a high degree of caching while also giving fine control over the cache expiry and revalidation is one of the key advantages.
There is though an issue, as you have spotted, with subsequent GETs to the same URI from the same document (as in DOM document lifetime, reload the page and you'll get another go at that XMLHttpRequest request). Pretty much IE seems to treat it as it would a request for more than one copy of the same image or other related resource in a web page; it uses the cached version even if the entity isn't cacheable.
Firefox has the opposite problem, and will send a subsequent request even when caching information says that it shouldn't!
We could add a random or time-stamped bogus parameter at the end of a query string for each request. However, this is a bit like screaming "THIS IS SPARTA!" and kicking our hard-won download into a deep pit that no Health & Safety inspector considered putting a safety rail around. We obviously don't want to repeat a full unconditional request when we don't need to.
However, this behaviour has a time component. If we delay the subsequent request by a second, then IE will re-request when appropriate, while Firefox will honour the max-age and expires headers and not re-request needlessly.
Hence, if two requests could come within a second of each other (either we know they are called from the same function, or there's a chance of two events triggering it in close succession), using setTimeout to delay the second request until a second after the first has completed will make it use the cache correctly, rather than exhibit either of the two sorts of incorrect behaviour.
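A sketch of that idea (here requests are simply spaced at least a second apart; fetchStatus(callback) is a placeholder for the existing XMLHttpRequest GET):

// Spaces requests out so each starts at least a second after the
// previous one, letting IE revalidate and Firefox use its cache.
var nextSlot = 0;   // earliest time (ms since epoch) the next request may start

function spacedFetch(callback) {
    var now = new Date().getTime();
    var wait = Math.max(0, nextSlot - now);
    nextSlot = now + wait + 1000;          // reserve the slot after this one
    setTimeout(function () {
        fetchStatus(callback);             // placeholder for the existing GET
    }, wait);
}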
Of course, a second's delay is a second's delay. This could be a big deal or not, depending primarily on the size of the downloaded entity.
Another possibility is that something that changes so rapidly shouldn't be modelled as GETting the state of a resource at all, but as POSTing a request for a current status to a resource. This does smell heavily of abusing REST and POSTing what should really be a GET though.
Which can mean that on balance the THIS IS SPARTA approach of appending random stuff to query strings is the way to go. It depends, really.