I have been working with Ajax for a while now and have used it for a lot of nifty jobs, but my most recent challenge is push notifications.
I want to build a site that does not need to poll the server every so often, but instead is notified by the server only when a particular DB field is updated. I want to implement this in PHP, JavaScript and/or jQuery, or any other web technology, but I have no idea how to go about it, or whether it is even possible.
I would like directions on where and how to start. Thanks, all.
Since WebSocket support isn't quite there yet, you would want to fall back on long polling. There already seems to be a jQuery plugin for this:
https://code.google.com/p/jquery-graceful-websocket/
A nice slide deck on this subject can be found here: http://www.slideshare.net/ffdead/the-html5-websocket-api
To implement WebSockets in PHP, have a look at https://code.google.com/p/phpwebsocket/
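For the long-polling fallback, a minimal client loop with jQuery could look like the sketch below. The endpoint name (/poll.php) and the response shape are assumptions; the idea is that the server holds the request open until the watched DB field changes, then answers with JSON.

    // Minimal long-polling loop (jQuery). /poll.php is a hypothetical
    // endpoint that blocks server-side until the DB field changes.
    function poll() {
        $.ajax({
            url: '/poll.php',
            dataType: 'json',
            timeout: 30000,              // give up after 30s and reconnect
            success: function (data) {
                updatePage(data);        // hypothetical: apply the update to the DOM
                poll();                  // reconnect immediately for the next change
            },
            error: function () {
                setTimeout(poll, 5000);  // back off briefly on dropped connections
            }
        });
    }
    poll();

The key difference from plain polling is that a request only returns when there is something new, so an idle client costs one open connection instead of a request every few seconds.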
I am working on a website, and want to allow my users to work without having to click save to manually save their data (similar to how Google Docs lets you work without pressing save). I was able to achieve this by using an onchange event in jQuery and using AJAX to post to the server every time that event occurred. The only problem is that this results in MANY requests to the server. How do I achieve the same result while reducing the number of requests sent to the server?
Any help would be greatly appreciated!
Thanks!
The terms you are looking for are "throttle" and "debounce". There are numerous solutions (e.g., jQuery plugins such as https://code.google.com/archive/p/jquery-debounce/ -- disclaimer: I haven't used jQuery in years, so I have no experience with this plugin), but having read a little about it, this is not hard to implement yourself. If you use RxJS (unlikely in a project that still uses jQuery), it also has debounce and throttle operators.
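For illustration, a hand-rolled debounce is only a few lines. In this sketch the selector, endpoint, and 2-second delay are all assumptions; the save fires only once the user has gone idle:

    // fn runs only after `delay` ms with no new calls, so a burst of
    // change events collapses into a single save request.
    function debounce(fn, delay) {
        var timer = null;
        return function () {
            var context = this, args = arguments;
            clearTimeout(timer);
            timer = setTimeout(function () {
                fn.apply(context, args);
            }, delay);
        };
    }

    // Usage sketch -- /save and #editor are hypothetical:
    $('#editor').on('change keyup', debounce(function () {
        $.post('/save', { text: $('#editor').val() });
    }, 2000));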
Have you thought about using a timer? For example, you could still use the onchange event, but instead of transmitting on every change you could start a countdown and only transmit if the user hasn't typed for, say, 10 seconds.
I would like to have a page where a restaurant can log in and see all of their current reservations/take-out orders, and I want this page to automatically update when someone (from another computer) makes a reservation or places an order. The idea is that the restaurant would leave this page open at all times to show their current status. What is the best way to do this? Can it be done without refreshing the page?
I wasn't even sure how to refer to a setup like this, so I wasn't really able to find much using Google. Is there a word for this type of setup?
I am using Rails, and I am considering using AngularJS for the front end. Any suggestions?
There are two approaches to solving this.
The first, oldest, and simplest is that your webpage contains some JavaScript that polls the server at regular intervals (e.g. every 10-30 seconds) to check whether something has changed, and then applies the changed data (e.g. reloads a partial).
The second approach is a bit cleaner, and it allows the server to push the changed data to the connected clients, only when it is changed.
There are a few available approaches/libraries for this:
use WebSockets
use Pusher
use Juggernaut (note that the author of Juggernaut has since deprecated it in favor of HTML5 SSE, server-sent events; see the client sketch after this list)
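If you go the SSE route, the client side is tiny. A minimal sketch, assuming a hypothetical /reservations/stream endpoint that responds with Content-Type: text/event-stream:

    // Server-Sent Events client: the browser keeps one open connection
    // and the server pushes a "data: ..." line whenever an order arrives.
    var source = new EventSource('/reservations/stream');
    source.onmessage = function (event) {
        var order = JSON.parse(event.data);
        renderReservation(order);  // hypothetical helper that updates the page
    };
    source.onerror = function () {
        // EventSource reconnects automatically; nothing to do but log
        console.log('stream interrupted, the browser will retry');
    };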
The advantage of polling is that it is easy and works in every browser, but you have to write more code yourself, and you put some load on your server even when nothing has changed (although that load is minimal).
The push technologies are newer, work very cleanly, and need less code. But some only work in newer browsers (usually not a real issue), and some require extra support or setup on your server side.
On that note: Pusher is really easy to get started with, and if your load is limited, it is free.
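As a rough idea of how little client code Pusher needs (the app key, channel, and event names below are placeholders following Pusher's usual quick-start shape, with their client script already included on the page):

    var pusher = new Pusher('YOUR_APP_KEY');
    var channel = pusher.subscribe('reservations');
    channel.bind('new-order', function (data) {
        renderReservation(data);  // hypothetical DOM update
    });

The server side then triggers the 'new-order' event through Pusher's API whenever a reservation is saved.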
There are still a lot of others, but this should get you started in the right direction.
Alright, here it goes:
I'm currently implementing a piece of software which auto-refreshes/auto-pulls/auto-reloads data to keep the screen live, using AJAX.
This actually works, but I know I've used the simplest approach, which is:
setInterval (JavaScript)
Call the refresh method over and over, every n seconds.
Read the JSON data, rebuild the HTML, and update it.
This can also be done by calling setTimeout (JavaScript) at the end of the AJAX request.
In the refresh method I internally check that it's not being called simultaneously, etc.
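For reference, the setTimeout-at-the-end variant mentioned above might look like this (the endpoint and helper names are assumptions). Unlike setInterval, the next cycle is only scheduled after the current one completes, so requests can never pile up on a slow machine:

    var refreshing = false;

    function refresh() {
        if (refreshing) return;          // guard against overlapping calls
        refreshing = true;
        $.ajax({
            url: '/data.json',           // hypothetical endpoint
            dataType: 'json',
            success: function (data) {
                rebuildHtml(data);       // hypothetical: rebuild and swap the markup
            },
            complete: function () {
                refreshing = false;
                setTimeout(refresh, 10000);  // schedule the next cycle
            }
        });
    }
    refresh();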
However... this is the simplest approach. It works, but on slow computers, in Firefox and IE, I can see that this activity sometimes freezes the browser for short periods. I know the AJAX call itself might not be to blame; the question is how intensive the JavaScript work is overall. But after running a profiler, the JavaScript (using jQuery, by the way) seems to be fine. Also, if I disable the auto-refresh, the browser doesn't freeze on slow computers.
I decided to investigate how several of the major AJAX applications out there work.
Facebook, for instance, makes a request all the time, every N seconds, interprets the JSON, and updates the screen. But with Google Docs I can't seem to find any requests. Is this maybe because they tell the JavaScript debugger engine not to log their requests, or are they using another approach to the refresh dilemma?
I read in another answer here on Stack Overflow that Google Docs keeps an open connection.
Can this be the answer? http://ajaxpatterns.org/HTTP_Streaming
What do you guys know about this?
Just as a side note: the application I'm developing is meant to be accessed by thousands of users at a time. I know the JavaScript refresh routine is only a small part of the story, but the server-side application and the database currently support such a load, according to the stress tests I ran with several thousand virtualized stations. I just want to know what you think about the client-side browser problem specifically.
Regards, and if you are still reading this, thank you for your time.
I suspect they're using WebSockets. Browser support is flaky, so your mileage may vary with this approach.
You may also want to look at APE (Ajax Push Engine), which is a decent implementation of long polling with a client/server architecture.
You can read up on Long Polling. But then you'll have to handle dropped connections etc.
I have a simple User ActiveRecord model class, and I need to update a user counter visible on all pages (a partial) each time a new User is created. It should basically look like the download counter for Firefox here. I can imagine that writing some JavaScript code that constantly polls the DB would do the trick, but I guess there is a better way to do it. I generally do mostly server-side programming, and many UI techniques are quite new to me.
I'm using Rails 3.0.7 with jQuery enabled. I thank you in advance for suggestions/solutions to my predicament.
You could, if you want, use some kind of WebSockets solution. That means you can push the data to all the clients whenever a new User is created, and then just use jQuery to re-render the counter area with the new data.
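A bare-bones client sketch (the endpoint and message format are assumptions; the server pushes the new total whenever a User row is created):

    var socket = new WebSocket('ws://example.com/users/count');
    socket.onmessage = function (event) {
        var payload = JSON.parse(event.data);
        $('#user-counter').text(payload.count);  // re-render the counter partial
    };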
There are also third-party services that make this push technology really easy to set up. http://pusher.com/ is one of them.
If you want to investigate Node.js, there's also Socket.IO.
All of these "custom" solutions use some kind of fallback, because WebSockets aren't available in all browsers. They usually fall back to Flash sockets or long polling.
I have been attempting to scrape and eventually parse some data (specifically availability and price) from hostels.com, for example http://www.hostels.com/hosteldetails.php/HostelNumber.11890. The problem is that once you select the number of nights and click "book now", nothing is passed through the URL string (it's all done through Ajax, I believe), so I can't go directly to a specific date or time frame.
I have attempted browser emulators such as Selenium, IRobotSoft and FakeApp, and although I did get Selenium and Fake to do much of the work of capturing the full source, it was ugly and still tedious when I had to scrape (and parse with other software) multiple pages a day.
I have also tried HTML DOM Parser, PHP Scriptable Web Browser, HTMLUnit, cScrape.php, and Crowbar. Either they couldn't handle the Ajax, or I had no luck getting them to run at all.
Ideally I would like something that can run from a server, with as few dependencies as possible, but at this point I would just like to get it running.
Now, after spending many hours trying to get this working, I still feel I'm not sure where to begin. Can someone just point me in the right direction? Should I go back and spend more time with HTMLUnit? What would be the best practice for a site like this?
Thanks
I'm really into Node.js at the moment (server-side JavaScript, in case you're not familiar), so that's what I'm recommending. What's awesome about using it to scrape sites is that you can use jQuery, or whatever your favorite JS framework is, to do all the work of parsing out the info you want! See the following resources to get started, and the sketch after the list:
http://blog.dtrejo.com/scraping-made-easy-with-jquery-and-selectorga
https://github.com/tmpvar/jsdom
https://github.com/chriso/node.io/wiki/Scraping
https://github.com/joshfire/node-crawler
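As a rough sketch of the workflow (using the classic jsdom.env API from the jsdom releases of that era; the selector is a placeholder you would replace after inspecting the hostel page's markup):

    var jsdom = require('jsdom');

    jsdom.env(
        'http://www.hostels.com/hosteldetails.php/HostelNumber.11890',
        ['http://code.jquery.com/jquery.js'],   // inject jQuery into the page
        function (errors, window) {
            if (errors) { return console.error(errors); }
            var $ = window.$;
            $('.availability-row').each(function () {   // hypothetical selector
                console.log($(this).text().trim());
            });
        }
    );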
The page you are referring to does not seem to be using AJAX. Instead, what you are referring to as AJAX is a POST request (as opposed to data passed in the URL, which is a GET request). I suggest you read up on the difference between them. Try to understand what's going on; it is more important than relying on some third-party tool which might turn out to be very inflexible.
Install Firebug and watch which variables are sent in the POST request.
Now do the same thing in your favourite programming language: send the same POST request, then parse the response HTML for the necessary information.
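For example, staying with JavaScript, replaying the observed POST from Node.js might look like this; the form field names and values are placeholders you would copy from Firebug's Net panel:

    var http = require('http');
    var querystring = require('querystring');

    // Hypothetical field names -- use the ones Firebug shows you.
    var body = querystring.stringify({
        checkin: '2012-06-01',
        nights: '3'
    });

    var req = http.request({
        host: 'www.hostels.com',
        path: '/hosteldetails.php/HostelNumber.11890',
        method: 'POST',
        headers: {
            'Content-Type': 'application/x-www-form-urlencoded',
            'Content-Length': Buffer.byteLength(body)
        }
    }, function (res) {
        var html = '';
        res.setEncoding('utf8');
        res.on('data', function (chunk) { html += chunk; });
        res.on('end', function () {
            // parse `html` here for prices and availability
            console.log(html.length + ' characters received');
        });
    });
    req.write(body);
    req.end();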
Also, +1 for the effort of trying so many different solutions and not giving up.
I've found Celerity (http://celerity.rubyforge.org), a JRuby library that uses HTMLUnit under the hood, to be a very robust solution for "data acquisition via the Web".
Celerity being Ruby, I found it much faster to develop with than full-blown Java (HTMLUnit). Also, because Celerity "wraps" HTMLUnit, I was able to drop down to HTMLUnit whenever I needed to do some heavier lifting.
I've had success with sites that are rich in DHTML as well as ones that use Ajax; and while I may have used some sleep() calls to wait on the Ajax responses, everything worked as expected.
Give it a try!