How do Facebook and Twitter reload their page content without any refresh? - javascript

I am very curious about this technology: I want to know how Facebook, Twitter, and many other websites update the page after clicking on a link, without any refreshing.
I searched for this on Google but did not find any helpful information. In this Quora article, someone says that they use the WebSocket API or AJAX to request content like that.
So, what is this technique/technology called?

Almost all modern websites are powered by front-end (FE) frameworks like React, Angular, Vue and many others, whose main feature is dynamically constructing the DOM in response to user actions without the need for a page reload.
One of the power tools of these frameworks is the router, which pretty much reconstructs the page from a blueprint stored on the FE side.
Please have a look at this working demo of React Router:
https://codesandbox.io/s/nn8x24vm60
P.S.: Essentially, JS hides/removes specific elements in the DOM and replaces them with the expected ones when the user navigates using router links (which can look like normal links to another developer inspecting the DOM, unless you really inspect the attached event listeners).
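For illustration, here is a minimal, framework-free sketch of what such a router does under the hood (the page names, the #app container and the data-route attribute are made up for this example): it intercepts clicks on marked links, swaps the relevant part of the DOM, and updates the URL with the History API so the back button still works.

// Minimal client-side "router" sketch (hypothetical pages, for illustration only).
const pages = {
  '/home':    '<h1>Home</h1><p>Welcome!</p>',
  '/profile': '<h1>Profile</h1><p>Your profile data.</p>',
};

function render(path) {
  // Swap only the content area instead of reloading the whole document.
  document.querySelector('#app').innerHTML = pages[path] || '<h1>Not found</h1>';
}

document.addEventListener('click', (event) => {
  const link = event.target.closest('a[data-route]');
  if (!link) return;
  event.preventDefault();                       // stop the normal full-page navigation
  const path = link.getAttribute('href');
  history.pushState({}, '', path);              // change the URL without a reload
  render(path);
});

// Handle the browser back/forward buttons.
window.addEventListener('popstate', () => render(location.pathname));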

Related

Remove specific pages from browser history and back button with javascript?

Situation:
I have a sensitive website about domestic violence with an EXIT button that links directly to Google, so that anyone visiting the website can quickly jump to Google if they feel unsafe or uncomfortable.
I would love to be able to clear any references to this website from both the history list and the back button functionality. Basically, remove any proof of visiting that website. Keep in mind that not all people know how to browse anonymously, and some people cannot even get out of the house to browse the internet. Yes, this scenario is for seriously bad situations.
I've tried using location.replace instead of regular links to keep them from being saved into the history, but they keep being saved in the history anyway.
I've also tried to use browser.history.deleteUrl({url:"https://thewebsite"}), but this throws an error because browser is undefined.
Is this even possible from a website? Or are there other options?
Thanks for thinking with me!
As you state in the question, you can use window.location.replace() to prevent your site from appearing in the window’s history (back button). Of course, this only works if your site had only one entry in the window’s history to begin with.
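As a minimal sketch (assuming the exit button is an element with the made-up id exit-button and simply jumps to Google), replacing the current history entry instead of pushing a new one looks like this:

// EXIT button handler: location.replace swaps the current history entry,
// so the back button will not return here (only helps if this was the only entry).
document.querySelector('#exit-button').addEventListener('click', () => {
  window.location.replace('https://www.google.com');
});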
As you also state, there is a bigger problem: this does not prevent the site from appearing in the browser’s history. I believe you cannot solve this problem with scripts on your website: you need some external solution, like a browser extension.
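For what it's worth, the browser.history.deleteUrl call from the question is a WebExtensions API: it only exists inside a browser extension whose manifest requests the "history" permission, which is why browser is undefined in an ordinary page script. A sketch of how an extension (not the website itself) could call it:

// Only works in a browser extension with the "history" permission,
// never in a normal page script (there, `browser` is undefined).
browser.history.deleteUrl({ url: "https://thewebsite" });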
(This does not really answer your question, but you could try using URLs and titles that disguise the nature of your site. I have heard of that being done with this sort of resource.)
In response to my idea of disguises, someone asked for examples and asked about discoverability. I was referring to the Aspire News App, featured on Dr Phil’s TV show. On that show, they made a big deal out of not showing what the app looked like, to avoid tipping off abusers. They also said the app is disguised as an ordinary app.
When I was researching this answer, I learned that disguises are indeed a terrible idea. I had no trouble finding information about the app online, and one review said the app is “pointless” because “with all of the media coverage this app has gotten abusers know exactly what it is and what to look for”.
I also learned that the app still had a fundamental security flaw 7 years after it was released. This shows that even supposedly reputable apps, dealing with sensitive matters, cannot be trusted. And perhaps it means that supposedly reputable websites looking to hide themselves from the browser’s history cannot be trusted either.

SEO and crawling: UI-Router ui-sref VS ng-click

After looking around a bit I came to no conclusion about this matter: do Google and other search engines crawl pages that are only accessible through ng-click, without an anchor tag? Or does an anchor tag always need to be present for the crawling to work successfully?
I have to build various elements which link to other pages in a generic way and ng-click is the best solution for me in terms of flexibility, but I suppose Google won't "click" those elements since they have no anchor tag.
Besides the obvious ui-sref attribute, I have thought about other solutions like:
<a ng-click="controller.changeToLink()">Link name</a>
Although I am not sure if this is a good practice either.
Can someone please clarify this issue for me? Thanks.
Single page applications are in general very SEO unfriendly, ng-click not being followed being the least of the problems.
The application does not get rendered server side, so search engine crawlers have a hard time properly indexing the content.
According to this latest recommendation, the Google crawler can render and index most dynamic content.
The way it works is that the crawler waits for the JavaScript to kick in and render the application, and only indexes the page after the content has been injected. This process is not 100% foolproof, and until recently single page applications could not compete with static pages for SEO.
This is the main reason why many sites only use these techniques for things like their menu system, where they give a much better user experience than full page reloads and indexing matters less: single page apps are not SEO friendly.
This is slowly changing: Angular Universal, Ember FastBoot and React now make it possible to render an SEO-friendly page on the server side, and still have the SPA take over on the client side.
I think your best bet to try to improve your SEO is to submit a site map file to google using their webmaster tools. This will let google know about those pages that you trigger via ng-click.
Note that this only has a chance of working if you are using HTML5 mode for the router and not hash-based URLs (URLs using #), as Google does not index the hash fragment.
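For reference, here is a minimal sketch of enabling HTML5 mode in an AngularJS 1.x app (the module name myApp is a placeholder), so that the router produces real paths instead of # URLs:

// Switch the router to HTML5 (pushState) URLs instead of #-based routes.
angular.module('myApp')                          // 'myApp' is a placeholder module name
  .config(['$locationProvider', function ($locationProvider) {
    $locationProvider.html5Mode(true);           // also requires <base href="/"> in index.html
  }]);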
In general it's very hard to get good SEO for an Angular 1 app, and that's why it's mostly not used for publicly indexable content. The sweet spot of AngularJS is building the private "dashboard" section of your app that users can access after logging in.
Try using prerender.io to prerender these AngularJS pages, filter out bot requests, and serve the prerendered pages from the page cache.

How do you use React.js for SEO?

Articles on React.js like to point out that React.js is great for SEO purposes. Unfortunately, I've never read how you actually do it.
Do you simply implement _escaped_fragment_ as in https://developers.google.com/webmasters/ajax-crawling/docs/getting-started and let React render the page on the server, when the url contains _escaped_fragment_, or is there more to it?
Not having to rely on _escaped_fragment_ would be great, as probably not all potentially crawling sites (e.g. in sharing functionality) implement _escaped_fragment_.
I'm pretty sure anything you've seen promoting React as being good for SEO has to do with being able to render the requested page on the server, before sending it to the client. So it will be indexed just like any other static page, as far as search engines are concerned.
Server rendering is made possible via ReactDOMServer.renderToString. The visitor will receive the already rendered page of markup, which the React application will detect once it has downloaded and run. Instead of replacing the content when ReactDOM.render is called, it will just attach the event bindings. For the rest of the visit, the React application takes over and further pages are rendered on the client.
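A minimal sketch of that idea, assuming an Express server and a placeholder App component (and note that newer React versions attach to server-rendered markup with ReactDOM.hydrate rather than plain render):

// server.js - render the app to an HTML string on the server (sketch, not production-ready).
const express = require('express');
const React = require('react');
const ReactDOMServer = require('react-dom/server');
const App = require('./App');                    // placeholder component

const app = express();

app.get('*', (req, res) => {
  const html = ReactDOMServer.renderToString(React.createElement(App));
  res.send('<!doctype html><html><body>' +
           '<div id="root">' + html + '</div>' +   // already-rendered markup
           '<script src="/bundle.js"></script>' +  // client bundle attaches the event handlers
           '</body></html>');
});

app.listen(3000);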
If you are interested in learning more about this, I suggest searching for "Universal JavaScript" or "Universal React" (formerly known as "isomorphic react"), as this is becoming the term for JavaScript applications that use a single code base to render on both the server and client.
As the other responder said, what you are looking for is an isomorphic approach. This allows the page to come from the server with rendered content that will be parsed by search engines. As another commenter mentioned, this might make it seem like you are stuck using Node.js as your server-side language. While it is true that having JavaScript run on the server is needed to make this work, you do not have to do everything in Node. For example, this article discusses how to achieve an isomorphic page using Scala and React:
Isomorphic Web Design with React and Scala
That article also outlines the UX and SEO benefits of this sort of isomorphic approach.
Two nice example implementations:
https://github.com/erikras/react-redux-universal-hot-example: Uses Redux, my favorite app state management framework
https://github.com/webpack/react-starter: Uses Flux, and has a very elaborate webpack setup.
Try visiting https://react-redux.herokuapp.com/ with javascript turned on and off, and watch the network in the browser dev tools to see the difference…
Going to have to disagree with a lot of the answers here since I managed to get my client-side React App working with googlebot with absolutely no SSR.
Have a look at the SO answer here. I only managed to get it working recently but I can confirm that there are no problems so far and googlebot can actually perform the API calls and index the returned content.
It is also possible via ReactDOMServer.renderToStaticMarkup:
Similar to renderToString, except this doesn't create extra DOM
attributes such as data-react-id, that React uses internally. This is
useful if you want to use React as a simple static page generator, as
stripping away the extra attributes can save lots of bytes.
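For example, a minimal sketch (the element here is hard-coded just for illustration):

const React = require('react');
const ReactDOMServer = require('react-dom/server');

// Produces plain HTML with no React-specific attributes - suitable for a static page generator.
const html = ReactDOMServer.renderToStaticMarkup(
  React.createElement('h1', null, 'Hello, static world!')
);
console.log(html); // "<h1>Hello, static world!</h1>"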
There is nothing you need to do if you care about your site's rank on Google, because Google's crawler can handle JavaScript very well! You can check how your site is indexed by searching for site:your-site-url.
If you also care about your site's rank on other search engines such as Baidu, and your server side is implemented in PHP, maybe you need this: react-php-v8js.

Another "how to change the URL without leaving the webpage"

I am creating a website which uses jquery scrolling as the method of navigation that never leaves a single html page.
I have noticed that some websites are able to change the URL and have looked at posts/answers (such as How does GitHub change the URL without reloading a page? and Attaching hashtag to URL with javascript) which refer to these changes being either push states, AJAX scripts or history API's (all of which I am not too savvy in).
Currently I am looking into which method is best for my website and have been looking at some examples which I like.
My question is: why do the websites below use /#/ in the path for the changing URL? The only reason I ask is because I am seeing this more and more often with jQuery-heavy websites.
http://na.square-enix.com/ffxiii-2/
http://www.airwalk.com
If anyone could simply shed some light on what these guys are using to do this, it would be much appreciated so I can possibly create my own script.
My question is why do the websites below use /#/ in the path for the changing URL
If we discount the possibility of ignorance to the alternatives then: Because they are willing to accept the horrible drawbacks in exchange for making it work in Internet Explorer (which doesn't support the history API).
GitHub takes the sensible approach of using the history API if it is available and falling back to the server if it isn't, rather than generating links that will break without JavaScript.
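A minimal sketch of that kind of feature detection (loadContent is a placeholder for whatever fetches and swaps the page content):

// Navigate to a new "page" without a full reload, falling back to a hash URL
// in browsers that lack the History API (e.g. old Internet Explorer).
function navigate(path) {
  loadContent(path);                              // placeholder: fetch/swap the content
  if (window.history && history.pushState) {
    history.pushState({ path: path }, '', path);  // real URL, e.g. /about
  } else {
    location.hash = '#' + path;                   // fallback, e.g. /#/about
  }
}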
http://probablyinteractive.com/url-hunter
This has a nice example on how to change the url with javascript.
I've not tried it myself, but read many reviews/opinions about History.js
It's supposed to have the "# in the path" option as you said (for older -- incompatible -- browsers) and the facebook-like direct changing of URL. Plus, when you hit the back button, you will get to the previous AJAX-loaded page with no problem.
I've implemented such a feature (AJAX tabs with URL changing), but if you will have other JavaScript on the pages that you want to load dynamically, I wouldn't recommend using AJAX-loaded pages, because when you load content with AJAX, the JavaScript inside that content won't be executed.
So I vote for either HistoryJS or making your own module.
Well, they're using the anchor "#" because they need to differentiate between multiple bookmarkable/browser-navigable places in the site, while still having everything on the same page. By adding browser history entries of the form /mySamePage.html#page1, /mySamePage.html#page2 when the user does something that Ajax-loads some content into the current HTML page, you get the advantage of (well, obviously) still staying on the current page, but at the same time the user can bookmark that specific content, and pressing back/forward in the browser will switch between the different Ajax-loaded states.
It's not a bad trick; the only issue is SEO. Google has a nice page explaining this: http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
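A minimal sketch of the hash-based pattern described above (the content lookup is a placeholder for the real Ajax load):

// Hash-based navigation: URLs like /mySamePage.html#page2 are bookmarkable,
// and back/forward fire 'hashchange' so the matching content can be shown again.
function showContent() {
  const page = location.hash.replace('#', '') || 'page1';
  document.querySelector('#content').textContent = 'Loaded: ' + page; // placeholder
}

window.addEventListener('hashchange', showContent);
window.addEventListener('load', showContent);
// Navigating is just a matter of setting the hash, e.g. <a href="#page2">Page 2</a>.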

What are some good JavaScript/AJAX interface patterns for websites?

I really like how sites like FogBugz and Facebook offer snappy user interfaces by loading page content asynchronously.
What are some good resources and patterns for applying this to other websites? I am looking for a solution that creates a unique hash URL for each page, preserves history and basic browser functions, and degrades gracefully if JavaScript is not enabled (a great example of this is Facebook).
This blog post is a good start, but it's far from a complete solution/pattern - and any approaches using jQuery would be great.
IMO, in order to allow a site to degrade gracefully, you should first build at least the framework of the site at the lowest level that you're going to support. In your case, this is going to be standard postbacks.
Once you've got this in place, you can then start adding ajax interactions.
The approach I've taken when using ASP.NET MVC is to have one function which builds the whole page from scratch (for regular postbacks) and then have some extra methods which I use to dynamically refresh content via Ajax. If I want to implement a 'Single Page' method like you describe, then I would handle the onclick event of a hyperlink and call an Ajax method that renders the 'build whole page' method to a string, then pump that string into my content div.
HTH
I have found pjax to be the most promising solution so far. From https://github.com/defunkt/jquery-pjax:
pjax loads HTML from your server into the current page without a full
reload. It's ajax with real permalinks, page titles, and a working
back button that fully degrades.
pjax enhances the browsing experience - nothing more.
You can find a demo on http://pjax.heroku.com/
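A minimal usage sketch based on the jquery-pjax README (the link selector and container id are just examples):

// Enhance links marked with data-pjax so they load into #pjax-container via AJAX,
// with a real URL change (pushState) and a working back button.
$(document).pjax('a[data-pjax]', '#pjax-container');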
Here is an example of building an Ajax-based website using jQuery and PHP.
Here is a great article about loading content with jQuery that degrades gracefully when JS is disabled.
