This question already has answers here: Is there a real server push over http? (7 answers). Closed 3 years ago.
I am currently making a simple clock-in and clock-out system for staff. The warehouse has a TV with a list of names on the right showing people who have not signed in, and a list on the left showing those who are signed in. When someone clocks in on the scanning device downstairs, I need the TV/dashboard upstairs to update and move their name to the appropriate side.
I have made the function that logs the details into the database. And I know I could use code such as:
<meta http-equiv="refresh" content="3;url=thecurrentpagesurl" />
Doing that would refresh the page and run a function that checks for changes and updates the display accordingly. However, I was wondering whether there is a way of "listening" for a change without having to hit the server too often; ideally I want the dashboard to update in real time. Is there a less resource-intensive or more efficient way of doing this than a meta refresh plus a function that checks for DB changes and updates the HTML on the dashboard?
(I'm not a JavaScript expert, but I have enough understanding to use some at a basic level.)
Edit
This question has been asked before, and I have looked at the other answers, which is why I came to the conclusion about the meta refresh. What I wanted to know is whether there is a more efficient way of doing it given my specific setup.
In your described environment your current approach is not bad.
Here are some ideas/options on which you can read up and improve your application if needed:
Use Ajax and just refresh the content of your lists; you can combine this with neat animations. This is still not real time, but it is especially easy when using jQuery or similar libraries (see the sketch after this list).
You could use Ajax with long polling. This means the Ajax request is held open on the server side until a change happens (or a timeout is reached). When a request gets a response, you open another long-polling request and wait for more changes. This is not real time, but it gets really close. In some environments this can stress the server, since each open request occupies resources or blocks sockets/threads, etc.
Another, more modern option is WebSockets. This is the most complex approach, but the communication is in real time: the browser and server establish a TCP connection and keep it open to communicate over. There are also some good open-source libraries for WebSockets.
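To make ideas 1 and 3 concrete, here is a minimal sketch in plain browser JavaScript. The /clock-status endpoint, the wss:// URL, the element IDs, and the response shape are made-up placeholders for your own setup, not a definitive implementation:

// Idea 1: plain Ajax polling - refresh just the lists every few seconds.
function refreshLists() {
  fetch('/clock-status')                      // hypothetical endpoint returning JSON
    .then(function (response) { return response.json(); })
    .then(function (status) {
      document.getElementById('signed-in').innerHTML = status.signedInHtml;
      document.getElementById('signed-out').innerHTML = status.signedOutHtml;
    });
}
setInterval(refreshLists, 5000);

// Idea 3: WebSockets - the server pushes a message the moment someone clocks in.
var socket = new WebSocket('wss://example.test/clock');  // hypothetical URL
socket.onmessage = function (event) {
  var status = JSON.parse(event.data);
  document.getElementById('signed-in').innerHTML = status.signedInHtml;
  document.getElementById('signed-out').innerHTML = status.signedOutHtml;
};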
In your described situation I would go with your current solution and also take a look into the first two ideas. They are not that complex and can give the feel of "real-time" updates.
I decided to run the refresh at set times. I used PHP's date function to calculate the time; if the time is between 7 and 9 (the earliest staff clock in) or 5 and 7 (the latest anyone clocks out), it runs the meta refresh. That way it doesn't keep checking all day while everyone is at work and signed in. Thanks again!
Related
This is my first HTML/JavaScript question, which I raised on Meta without results yet.
On Stack Overflow I reload my tags page, which displays questions related to my selected tags.
If I then want to go back to the previous results page to check a question, it isn't possible, because going back in history takes me to a different page (or a blank page).
Can I go to the previous view? Is it impossible?
Sometimes questions are deleted or have changed tags so it's hard to find them.
Is it possible in JavaScript/HTML to view the last results?
I've been in the same situation many times, wishing to see what I just saw, only to find the internet has changed out from under me. (It actually happened while I was researching this answer...) Obligatory XKCD: https://xkcd.com/1309/
Let's talk a little about what this question is actually asking. The example given in this question is this very site, SO, but the purpose can apply to any website with dynamic information. As a user browses through information, an often-used pattern is to link into more specific details. When the user moves back to the general list, it sometimes happens that new information has been added, old information removed, or things have just generally moved about. In those cases it can be difficult for the user to navigate back to information they wanted to look at, or have looked at but have now lost the link to. We ask what can be done to make this UX better, both in user habits and in website design.
I think this question truly does apply to both meta and non-meta, since there are many aspects: what can SO specifically do, as a website, to improve its UX (the meta question); what could developers of sites do in general to improve UX in this regard; and what can a user do to better utilize their technology or improve their workflow to improve their UX without special consideration from a website. Let's talk about what the user can do first.
What can a user do?
In some senses, this should be a simple task. All the browser would need to do is keep a record of the current DOM in its cache when moving to a new page, then, on back press, load the cached DOM as it stood when you last looked at it. But what do browsers actually do?
Browsers won't consistently help you right now by default. The answer by Stephan P gives us some insight. Essentially, going 'back' could result in a number of different things, from loading the cached resources like I described above, to reloading some of the resources, to just refetching everything. And this is all decided by the arcane machinations of your browser. You have basically no say in what happens. (Except of course when you use one of the many methods to forcibly skip cache and reload all the resources, but there doesn't seem to be any method to forcibly load from cache.) So, no dice there.
The closest browser functionality I was able to find is the setting in Chrome to load from cache when offline. But this would require us to constantly connect and disconnect from the internet while browsing, which is not very helpful.
The above point actually just proves that this should be 100% possible, if we could just look at our browser's cache of the page. Which we can! GolfWolf lets us know about the handy chrome://cache address that lets us view the complete contents of our browser's cache. Unfortunately, it is all in hex, so not super useful for browsing...
There are tools that will convert the hex dump to a viewable webpage, as Senseful points out. But going into your cache, finding the old page, copying the hex over to a translator, and then viewing your page is not a very friendly workflow for regular browsing.
So, our browser is basically useless for what we are trying to accomplish. We only have a few options:
Write our own browser
Something very custom...
A. One possibility would be to set up a proxy of sorts that hard-caches all the pages that pass through it and serves from cache when you go back in your browser. Perhaps a simple browser-extension button to click to tell the proxy to clear its cache?
B. While we're talking about crazy ideas... It could be possible to make a webpage that loads pages in iframes, intercepts link clicks, and instead of opening a link in the same iframe opens a new iframe for the new page, just hiding the old iframe in its previous state. When you go 'back' it simply unhides your old page. But that is super kludgy and will break a lot of stuff with all those discontinuities. (And don't forget, not all links open a new page; there are a lot of ways this plan could go wrong.)
Outside of actually solving this, what could we do to mitigate or work around our problems? ...Never leave a webpage... ever...
In all seriousness, this is my workflow: open the main page, open all links in new tabs (you can usually cmd/ctrl-click to quickly open a link in a new tab), and keep all the tabs open until I've finished.
What can a website do?
Several things. There are many strategies that can be used to make information traversable, and they all have their own benefits and ramifications. This is not an exhaustive list:
The site can implement custom URLs with parameters for everything that could affect the rendering of information. Google search does this. That way, going back to a link (or sharing the link with a different person) loads the page in the same deterministic way as before. This can be used to great success, but can also result in inordinately long links (see the sketch after this list).
The site can use cookies or local storage to keep track of where you are to restore your state when you return to a previous page.
The site can create its own history for you to browse. This is similar to what SO does now in other aspects, but taken to the extreme. Basically, record every action, every comment, every view, every post, absolutely everything. Then let the user browse through this history of things (obviously with lots of sorting and filtering options) to let them sift down to what they are looking for. i.e. "What was that post I was looking at yesterday? ...searching... Oh here it is!"
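As a rough sketch of the first two strategies (the function and storage key names here are made up for illustration): put the view state in the URL so history restores it, and mirror it in localStorage as a fallback.

// Strategy 1: encode everything that affects rendering in the URL.
function applyFilters(filters) {
  var params = new URLSearchParams(filters);
  history.pushState(filters, '', '?' + params.toString());
  renderList(filters);                        // hypothetical render function
}

// Going back/forward restores the exact view that was pushed.
window.addEventListener('popstate', function (event) {
  renderList(event.state || {});
});

// Strategy 2: remember the last view locally as well.
function rememberView(filters) {
  localStorage.setItem('lastView', JSON.stringify(filters));
}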
Each of these is a huge topic on its own, way outside the scope of an answer here on SO.
TL;DR
A user can't do much of anything that isn't either a lot of work, breaks flow in one way or another, or both.
A website can do a lot of things to ease the user's pain. But there's no silver bullet. Only lots of work to make a good UX.
You can try to play with the IDs to go back using only the browser, but if the question is deleted I think there's no way to recover its content unless you use web crawler software :/
https://stackoverflow.com/questions/48630484/
https://stackoverflow.com/questions/{ID}/ (it's an autoincremental id)
I would like to have a page where a restaurant can log in and see all of their current reservations/take-out orders, and I want this page to automatically update when someone (from another computer) makes a reservation or places an order. The idea is that the restaurant would leave this page open at all times to show their current status. What is the best way to do this? Can it be done without refreshing the page?
I wasn't even sure how to refer to a setup like this, so I wasn't really able to find much using Google. Is there a word for this type of setup?
I am using Rails, and I am considering AngularJS for the front end. Any suggestions?
There are two approaches to solving this.
The first, oldest, and simplest is that your webpage contains some JavaScript that polls the server at regular intervals (e.g. every 10-30 seconds) to check whether something has changed, and then applies the changed data (e.g. reloads a partial).
The second approach is a bit cleaner: it allows the server to push the changed data to the connected clients only when something has changed.
There are a few available approaches/libraries for this:
use WebSockets
use Pusher
use Juggernaut. The author of Juggernaut has deprecated it in favor of HTML5 SSE (server-sent events) (see the sketch after this list).
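For example, the server-sent-events route only needs a few lines on the client. The /orders/stream endpoint and element ID below are assumptions for illustration, not part of any library:

// The browser keeps one connection open; the server pushes an event
// whenever a reservation or take-out order changes.
var source = new EventSource('/orders/stream');   // hypothetical endpoint

source.onmessage = function (event) {
  var update = JSON.parse(event.data);
  document.getElementById('reservations').innerHTML = update.html;
};

// EventSource reconnects automatically after dropped connections.
source.onerror = function () {
  console.log('stream interrupted, browser will retry');
};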
The advantage of polling is that it is easy and works in every browser. The disadvantages are that you have to write more code yourself, and you put some load on your server even when nothing has changed (although the load is minimal).
The push technologies are newer, work very cleanly, and need less code. But some work only in newer browsers (most of the time not really an issue), and some require extra support/setup on your server side.
On that note: Pusher is really easy to get started with, and if your load is limited, it is free.
There are still a lot of others, but this should get you started in the right direction.
Alright, here it goes:
I'm currently implementing a piece of software which auto-refreshes/auto-pulls/auto-reloads data to keep the screen live, using AJAX.
This is actually working, but I know I've used the simplest approach, which is:
setInterval (JavaScript)
Call the refresh method over and over, every n seconds.
Read the JSON data, rebuild the HTML, and update it.
This can also be done by just calling setTimeout (JavaScript) at the end of the AJAX request.
In the refresh method I internally check that it's not being called simultaneously, etc. (sketched below).
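For reference, a sketch of that pattern (the endpoint and render function are placeholders), using setTimeout at the end of each request so calls can never overlap:

var refreshing = false;

function refresh() {
  if (refreshing) return;            // guard against simultaneous calls
  refreshing = true;
  $.getJSON('/data.json')            // hypothetical endpoint
    .done(function (data) {
      rebuildHtml(data);             // hypothetical: rebuild and swap the HTML
    })
    .always(function () {
      refreshing = false;
      setTimeout(refresh, 5000);     // schedule the next poll only when this one is done
    });
}

refresh();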
However... this is the simplest approach. It works, but on slow computers, in Firefox and IE, I can see this activity sometimes freezing the browser for short moments. I know this shouldn't be caused by the AJAX call itself, so the question is how "intensive" the JavaScript work is overall. After running a profiler, the overall JavaScript (using jQuery, by the way) seems to be fine. Also, if I disable the auto-refresh, the browser doesn't freeze for short moments on slow computers.
I decided to investigate how several of the major AJAX applications out there work.
Facebook, for instance, makes a request all the time, every N seconds, interprets the JSON, and updates the screen. But with Google Docs... I can't seem to find any requests. Is this maybe because they tell the JavaScript debugger engine that they do not want their requests logged? Or are they using another approach to the refresh dilemma?
I read in another answer here on Stack Overflow that Google Docs keeps an open connection.
Can this be the answer? http://ajaxpatterns.org/HTTP_Streaming
What do you guys know about this?
Just as a side note, the application I'm developing is meant to be accessed by thousands of users at a time, and I know the JavaScript refresh routine is only a small part of the story. The server-side application and the database are currently supporting such a load, according to the stress tests I did using several thousand virtualized stations. I just want to know what you think about the client browser problem specifically.
Regards, and if you are still reading this... thank you for your time.
I suspect they're using WebSockets. Browser support is flaky, so your mileage may vary with this approach.
You may also want to look at APE (ajax push engine), which is a decent implementation of long polling with a client/server architecture.
You can read up on long polling, but then you'll have to handle dropped connections, etc. (a sketch follows).
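Handling dropped connections mostly means reconnecting, ideally with a capped backoff. A sketch under those assumptions (the endpoint and render function are invented for illustration):

function longPoll(delay) {
  fetch('/updates?longpoll=1')                 // hypothetical endpoint
    .then(function (response) {
      if (!response.ok) { throw new Error('HTTP ' + response.status); }
      return response.json();
    })
    .then(function (update) {
      applyUpdate(update);                     // hypothetical render function
      longPoll(1000);                          // reconnect right away, reset backoff
    })
    .catch(function () {
      // dropped connection or server error: retry later, doubling the wait
      setTimeout(function () { longPoll(Math.min(delay * 2, 30000)); }, delay);
    });
}

longPoll(1000);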
Understanding that if someone has JavaScript disabled the site will not work, is there any other reason not to do this?
I am in the design phase of a new site and want to make it easy to change the server code without having to change the UI - just like a form.
This is using Python server side.
One problem, arguably, is that Ajax techniques break the back button.
By making all of your calls to the server Ajax calls, the user loses their ability to 'go back' to a previous view. Facebook exemplifies this: clicking the back button on Facebook generally does not take you to the previous view you were presented with.
In addition, it is more difficult for a user to bookmark their current view of the site. This can make it difficult for them to share what they are seeing with others and can make it difficult to debug problems that users see; rather than just sending you a URL to recreate their problem, they have to figure out the numerous steps they took before they spotted a problem.
Personally, I think the best place for Ajax is updating small segments of a page. If you have a page where more than, say, 50% is changing, you may want to rethink the sole use of Ajax.
Potentially yes, here are two items that come to mind.
Search indexing: this can have a profound impact on how much of your site's content search engines like Google can index, because crawlers do not execute AJAX scripts when reading your page.
Performance: too many AJAX calls can actually hinder performance and page-load response times. AJAX should generally be used to update only specific parts of a given page, if at all possible. If you can emit the majority of the content in the first page GET request, you should, period.
For a desktop-like rich web application, I would say that the all AJAX approach is acceptable.
However unobtrusive JavaScript and progressive enhancement may be a better strategy for most categories of public web-facing interfaces.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 6 years ago.
I'm building a web application with the Zend Framework. I have wanted to include some AJAX type forms and modal boxes, but I also want my application to be as accessible as possible. I want my application to be enhanced by AJAX, but also fully functional without AJAX.
So as a general guideline...when should I not use AJAX? I mean, should I bother making my application usable without AJAX? Or does everyone have AJAX enabled browsers these days?
If you mean "accessible" in the ADA sense, AJAX is usually a no-no - your site should provide all its content and core functionality using only standard (X)HTML and CSS. Any javascript used should merely extend the core functionality, and your site should be coded to work elegantly in the absence of a javascript-enabled browser.
Examples: if you want a user to click on a thumbnail and get a full-size version of the image as a result, you can make the thumbnail a link. Then the onclick event fires a jQuery method that cancels the navigation behavior of the link and pops up a jQuery floating div to show the image on the current page. If the user's browser doesn't support JavaScript, the onclick event never fires, and the user is shown the image on a new page. The core functionality is the same with or without scripting.
EDIT: Skeleton example, sans jQuery-specific code (showImage is a placeholder for the method described below):
<html>
<body>
<!-- Without JavaScript the link simply navigates to the full-size image;
     with JavaScript, showImage runs and returning false cancels that. -->
<a href="fullsize.jpg" onclick="return showImage(this.href);">Some URL</a>
</body>
</html>
To cancel the navigation operation, simply make sure that the method invoked by the onclick event returns false at the end.
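A minimal sketch of such a handler (showImage and the popup div are placeholders for illustration, not library code):

function showImage(url) {
  var popup = document.getElementById('popup');   // hypothetical floating div
  popup.innerHTML = '';
  var img = document.createElement('img');
  img.src = url;
  popup.appendChild(img);
  popup.style.display = 'block';
  return false;   // cancels the link's default navigation
}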
A neat example of the jQuery image popup I described can be found here.
Use ajax if it adds value for the user.
If the Ajax version adds a lot more value than the non-Ajax version, then it might justify the expense of developing a solution that caters for both clients. Generally I wouldn't recommend doing the extra work (remember: more code means more maintenance).
I think one point is missing here: use Ajax only for content search engines do not need to know about.
98% of users will have AJAX enabled browsers.
A significant percentage of those people won't have it turned on when they first visit your site though (or at all, ever perhaps).
I've seen websites that look like a blank page without JavaScript. Don't be one of them. JavaScript to fix layout issues is a horrible idea, in my opinion. Make sure the site loads and looks OK without JavaScript; if people can at least see what they are missing out on, they are likely to switch it on, but if your website just looks broken, then...
I often have noscript block Flash and JavaScript until I make the decision that your site is worthy.
So be sure to tell me what I'm missing if I have JavaScript turned off.
It depends on the complexity of your web application.
If you can, having it functional with javascript disabled is great, because it makes your application usable not only by users on js-disabled browsers but also by robots. The day you decide to write an application to automatically fill your forms, for example, you don't have to write an API from the ground up.
In any case, do not use AJAX for EVERYTHING! I have just inherited a project that basically consists of a single page populated by a ton of AJAX calls, and just thinking about it gives me physical pain. I guess the original developer didn't like the concept of using the back/forward buttons in the browser as a means of navigation.
Unless you are targeting mobile devices or other non-standard web users, you can be fairly sure that the vast majority has Javascript enabled, because most major sites (including SO) rely heavily on it.
I want my application to be as accessible as possible.
You can do things like rendering your modals and forms as a page that can operate standalone.
The AJAX version pulls the template into a modal/container; the standalone version checks whether it's an AJAX request and renders the page including the header/footer. (Both can be served from the same URL if planned well.)
The AJAX version intercepts the submit and does an AJAX submission, then shows an inline thank-you; the non-AJAX version opens a thank-you page. Once again, you can likely use the same pages for both if it's thought out correctly.
Reusing templates and URLs helps avoid additional maintenance for the AJAX/non-AJAX versions.
I want my application to be enhanced by AJAX, but also fully functional without AJAX.
Thinking through the structure of your URLs and templates can go a long way towards this. If you make most of your AJAX requests pull in completely rendered templates (as opposed to just data), then you can usually use the same URL to serve both versions: you serve only the guts of the modal/form to the AJAX request, and the entire page to a regular request (see the sketch below).
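A sketch of that idea, using Node/Express purely as an example (the route and HTML fragments are invented; most frameworks expose the same X-Requested-With check):

var express = require('express');
var app = express();

var headerHtml = '<html><body><h1>My Site</h1>';   // placeholder chrome
var formHtml = '<form>...</form>';                 // placeholder form guts
var footerHtml = '</body></html>';

app.get('/contact', function (req, res) {
  // req.xhr is true when the X-Requested-With: XMLHttpRequest header is set,
  // which jQuery and most Ajax helpers send automatically.
  if (req.xhr) {
    res.send(formHtml);                            // just the guts of the modal/form
  } else {
    res.send(headerHtml + formHtml + footerHtml);  // the entire page
  }
});

app.listen(3000);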
When should I not use AJAX?
You should not use AJAX if doing so will cause a poor experience for a significant portion of your user base (there are of course techniques that can be used to mitigate this)
You should not use AJAX if the development time associated with implementing it will be too significant to justify the improvements in user experience
You should not use AJAX for content which has significant SEO value without implementing an appropriate fallback that allows it to be indexed (Crawlers are improving constantly but it's still a good idea)
I mean, should I bother making my application usable without AJAX? Or does everyone have AJAX enabled browsers these days?
I'd say a lot of the time it's unnecessary, as the vast majority of users will have AJAX-enabled browsers. But there are scenarios where it's critical, such as SEO optimization, or when a large portion of your user base is likely to use browsers with poor JavaScript support, or to have JavaScript/AJAX disabled.
A few examples of these scenarios:
A website for a company or government that uses an outdated browser as standard
A website where a large portion of the users may have a disability that affects their experience; for example, a site for vision- or motor-skill-impaired users may be negatively impacted by content updating via AJAX, especially if it happens rapidly.
A site accessed regularly via a less common device or browser, where using AJAX would negatively impact a large portion of users.
So what should I do?
Think about who is going to be using the site, how they're going to access it, and what they're going to access it with. Also try to think about not just the present but also the future.
Design the site in a manner that will cater to the majority of these users.
Think about who will gain and who will lose based on your decision to use AJAX. If in doubt, have a look at your analytics data to help weigh the decision; if you lack the data, it may be worth updating your tracking and obtaining a sample to aid the decision.
Think about whether your decision to use AJAX contradicts any core requirements for the project.
Use AJAX to enhance content where possible, as opposed to making it mandatory; i.e. the content should work with or without JS/AJAX.
Consider the additional development time involved with the use of AJAX (if any)
My experience is that we should add Ajax only after the site works without it, for a couple of reasons.
First, if something breaks in the Ajax and you don't have the site working without it, the site simply doesn't work. For example, a product list with pagination: it should work with the links alone, then use Ajax when possible.
Second, for site indexing and accessibility. If it works without ajax, it's better.
And it's easy to break something (even if only for a few moments): a bad piece of code, an uncaught exception, an external library that fails to load, a blocking browser extension...
Once everything works without Ajax, it's quite easy to add Ajax: just have the Ajax code catch the action and add ajax=1, and when returning the result, return only what you need if ajax=1, otherwise return everything.
In the product list example, I would return only the products and the pagination HTML, and add them to the correct div (see the sketch below). If Ajax stops working, the whole page is loaded and the customer sees the second page as it loads.
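A sketch of that pagination example in jQuery (the selectors are assumptions; the ajax=1 convention follows the answer above):

// Pagination links work on their own; with JS we intercept them and
// ask the server for just the products + pagination HTML.
$(document).on('click', '.pagination a', function (e) {
  e.preventDefault();
  var url = this.href;
  $.get(url, { ajax: 1 }, function (fragment) {
    $('#product-list').html(fragment);
  }).fail(function () {
    window.location = url;   // if Ajax breaks, fall back to a full page load
  });
});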
Ajax adds a lot of value to UX. If done right, the user gets a great feel when using the site, and better data usage, because it doesn't load the whole page every time.
But the question being "when not to use ajax", I would say, you should always count on it to improve UX but not rely on it for the site to work (as other users also mentioned). And nowadays we need both, great code and great user experience.
My practice is to use two main pages, let's say index.py and ajax.py. The first is responsible for generating the full website and is the default target of forms. The other generates only the output specific to the corresponding Ajax query. The logic behind both is the same; only the method of generating output differs a bit.
In jQuery I simply change the action parameter when sending a request (see the sketch below). It works both with and without Ajax, although it has been a long time since I've seen someone with JS and Ajax disabled.
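For example (index.py and ajax.py as above; the form and result IDs are assumptions):

// The form's default action posts to index.py, which renders the full page,
// so everything still works without JavaScript.
$('#search-form').on('submit', function (e) {
  e.preventDefault();
  $.post('ajax.py', $(this).serialize(), function (fragment) {
    $('#results').html(fragment);   // ajax.py returns only the fragment
  });
});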
I like the thought of coding your application without JavaScript / Ajax and then adding it in later to enhance the UI without depriving users of functionality just because they don't have JavaScript enabled. I read about this in Pro ASP.NET MVC but I think I've seen it elsewhere in reading about unobtrusive JavaScript.
You should not make your service bloated with Web 2.0 effects like accordions, modal forms, image zoomers, etc.
Use modern tech smartly (AJAX is one example) and your users will be happy. Do not fear AJAX; it is a very good way to make the user experience smooth. But don't do things because you like them. Do them because your users need them ;)
When you want to make a website that looks like a website, not a fugly imitation of a desktop app?
You should not use AJAX or JavaScript in cases where:
your system needs to be accessible
your system needs to be search friendly
However, by using a modern JS framework with some solid "unobtrusive" practices, you can progressively enhance pages so that they remain accessible and search-friendly while offering a slick UI to users.
This totally depends on the type of application or feature you're developing. If it is crucial that the application is accessible despite the absence of JavaScript, then it helps to have fallback methods (i.e. alternative forms) that allow your users to use the functionality/feature. For that, you will need to invest some of your time developing methods to collect information not just via client-side scripts but also on the server side.
For miscellaneous features that only serve to enhance the user experience, it's mostly not worth it to develop fallback methods.
There's no reason to totally not use AJAX. AJAX helps minimize your traffic after all.
You can, if you wish, always use AJAX and update the history state using pushState, or, for wider compatibility, use the hash with non-HTML5-compliant browsers.
With this, your server can load a page, then JavaScript reads location.hash and resumes the state of the application based on the hash.
For example, I go to /index.html and click something, say a client, to open the client view. You can change the hash to #/view/client/{client_id}/. Then, if the user reloads or goes back using the browser, the hash changes, and you can use the hashchange event to capture it and match the site's state to the new hash. The same works if they bookmark a certain state (see the sketch below).
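A sketch of the hash variant (the route shape follows the example above; the view functions are placeholders):

// Keep the UI in sync with the hash so back/forward and bookmarks work.
function route() {
  var match = location.hash.match(/^#\/view\/client\/(\d+)\/$/);
  if (match) {
    showClient(match[1]);   // hypothetical client view
  } else {
    showIndex();            // hypothetical default view
  }
}

window.addEventListener('hashchange', route);
route();   // also resumes state on the initial page load

// Navigating just means changing the hash; route() runs via the event.
function openClient(clientId) {
  location.hash = '#/view/client/' + clientId + '/';
}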
A couple of other scenarios where one may be better off NOT using AJAX:
Letting someone log into the web application. Use a traditional form submit instead.
Searching and returning more than a few hundred rows from the database. Either break the process down or let the server-side language handle it.