Use multiple HTML pages or show/hide divs with JavaScript

First of all, I have a background in C (++), Java, MATLAB and Python, mainly used for scientific and electronic applications (math operations on data, reading data from sensors, microcontrollers).
But I'm relatively new to both HTML (CSS) and JavaScript.
I've read some books on both. In the HTML books, multiple pages are done with links (<a></a>).
In JavaScript (which feels a lot more natural to me than HTML), I've seen examples where there is only one HTML page, full of divs, which are shown and hidden whenever a certain "page" needs to be displayed.
This is done with the jQuery calls $('#div1').hide() and $('#div2').show().
Now my question is: what is the best practice? When is it better to have multiple HTML pages, and when is it better to just hide/show divs with JavaScript?
Thanks

Not everyone can use JavaScript: not every browser has it enabled. But every browser can read HTML.
To identify whether a visitor has Java available, see:
How to check whether Java plugins are installed or not in a browser using Code .?
Java mostly ships with the browser these days with some basic functions, but older Navigator or IE browsers don't always have it installed by default.
More info here as well:
How can I detect the Java runtime installed on a client from an ASP .NET website?
(Note that Java and JavaScript are different technologies; the two links above concern the Java plugin specifically.)
The easiest approach is a <ul><li> CSS navigation theme. Check this site out for more info:
https://medialoot.com/blog/how-to-create-a-responsive-navigation-menu-using-only-css/
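For illustration, a minimal sketch of such a CSS-only <ul><li> navigation might look like this; the page names and class names are hypothetical:
<style>
  /* Hypothetical class names; style the list as a horizontal menu bar. */
  .nav { list-style: none; margin: 0; padding: 0; }
  .nav li { display: inline-block; }
  .nav a { display: block; padding: 8px 16px; text-decoration: none; }
  .nav a:hover { background: #eee; }
</style>
<ul class="nav">
  <li><a href="index.html">Home</a></li>
  <li><a href="data.html">Data</a></li>
  <li><a href="about.html">About</a></li>
</ul>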

When you have multiple HTML pages and the user clicks a link, each click means a new page has to be fetched from the server and then rendered.
When you do it in JavaScript instead, the same page is altered in place, so there are no additional requests to the server, which is much faster than loading a new page.
But remember that the initial load of the second approach takes longer, although that is usually negligible.
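As a rough sketch of the single-page approach from the question (the div ids and page names are made up), the idea is to hide every "page" div and show only the requested one:
<div id="page1">Content of page 1</div>
<div id="page2" style="display:none">Content of page 2</div>
<a href="#" onclick="showPage('page2'); return false;">Go to page 2</a>
<script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
<script>
  function showPage(id) {
    $('#page1, #page2').hide(); // hide all "page" divs
    $('#' + id).show();         // reveal only the requested one
  }
</script>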

Let me point out that there is no single "best practice" answering the question you are asking. It is entirely up to the team whether to push all the content into one page or keep it separate.
If you have content that requires a decent number of images to be loaded, or content that you are sure will rarely be seen, you might want to keep it on separate pages so the page loads faster.
If you have heavy content that requires a lot of interaction with JavaScript/jQuery, then you certainly might want to keep it on separate pages, so that debugging and extending the code later is easy.
The vice versa of the above holds true as well: if you just have small content, or simple text content, then you can easily do it in a single page.

Maybe you should use a tab component? Bootstrap wraps one very nicely:
http://getbootstrap.com/javascript/
That may be the best approach. Also take a look at AngularJS (routes in particular); it should do whatever you are looking for.
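For reference, a minimal Bootstrap 3 tab setup looks roughly like this; the pane ids are hypothetical, and the CDN includes are one possible way to pull in the required jQuery and Bootstrap files:
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<ul class="nav nav-tabs" role="tablist">
  <li class="active"><a href="#home" data-toggle="tab">Home</a></li>
  <li><a href="#profile" data-toggle="tab">Profile</a></li>
</ul>
<div class="tab-content">
  <div class="tab-pane active" id="home">Home content...</div>
  <div class="tab-pane" id="profile">Profile content...</div>
</div>
<script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>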

Related

Appending base tag to head with JavaScript

Can you append a base tag to the head of a document from a div in the body of the document using JavaScript? More precisely: what are the drawbacks of doing that? My concern is that I'll run into a sort of race condition, because the base tag is understood to live in the head, so it won't be respected if the page has already been rendered. I haven't experienced this problem yet, but I was wondering whether it should be a concern.
To be clear, I know how to do this via JavaScript. My question is whether the tag will be respected/honored if it's appended to the DOM after the page loads/renders...
My code is an HTML fragment that is likely to appear in the body, but I need to set the base tag because my assets are referenced relatively. Let's assume that I can't change that (because I can't, at least not right away). You can also assume that setting the base won't break anything outside my HTML fragment, and that there are no other base tags...ever.
Yes, for example:
<script>
// Create a <base> element, point its href at the desired URL base,
// and append it to the document head.
var base = document.createElement('base');
base.href = 'http://www.w3.org/';
document.getElementsByTagName('head')[0].appendChild(base);
</script>
I don’t see why you would want to do this, but it’s possible.
I might be wrong (or partially wrong, depending on how each browser chose to implement this), but AFAIK the document's base URL is parsed only once. By the time you append that base element to the DOM, it is already too late.
EDIT: It looks like I was wrong.
Apparently there is a way, but there are also downsides regarding search engines.
Jukka, to answer your question of why you would want to do it that way, here is an example.
Consider a mobile application, such as a PhoneGap app, that is a thin wrapper around a web app but smart enough to know whether it's running in a browser or on the device.
Once it knows it's on a device, it needs to know the base URL so it can properly locate everything that was previously referenced with relative URLs.
In our case we have four different systems (dev, test, beta and live), each with different URLs.
Usually changes are incremental, but quite often we do want to switch back and forth between systems, for instance for A/B testing.
Since the routing layouts are basically identical, switching the base URL back and forth makes a lot of sense.
Remember that many web apps use a static asset such as an HTML page for the application skeleton, JavaScript for the glue logic, and a web-based backend that is really nothing more than a thin layer over a DB; MEAN apps are frequently built this way.
Building your apps this way provides a phenomenal gain in scalability and responsiveness, since the web server doesn't have to slow down long enough to construct the page view, as happens with template languages.
Anyway, setting the base URL means being able to change where the app sources its data on the fly, which can be an incredible boost to developer productivity thanks to code reuse.
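A sketch of what that on-the-fly base switch might look like; the hostnames are hypothetical, and checking window.cordova is just one way a PhoneGap/Cordova wrapper can reveal itself:
<script>
  // Hypothetical base URLs for the four systems.
  var bases = {
    dev:  'http://dev.example.com/',
    test: 'http://test.example.com/',
    beta: 'http://beta.example.com/',
    live: 'http://www.example.com/'
  };
  // Crude environment detection for the sketch: assume "live" on a
  // device (Cordova present) and "dev" in a plain browser.
  var env = window.cordova ? 'live' : 'dev';
  var base = document.createElement('base');
  base.href = bases[env];
  document.getElementsByTagName('head')[0].appendChild(base);
</script>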
Search engines?
There was a time when search engine crawling bots did not "understand" or run any JavaScript code. Such bots would get all the links wrong, and crawling would stop right there.
So it might hamper some crawlers from crawling and indexing your links.

Is it possible to edit the CSS of an existing webpage?

Specifically, what I'm trying to do is create a mobile version of a site I don't have access to. The best approach I can think of is this:
My site executes their PHP search file and then displays the results page, but first modifies its DOM to use my CSS. Is that technically possible?
Your site can definitely access web content from another site, filter/transform it however it wants, and then forward the result wherever it wants. Potentially it is not a simple problem, as so much web content is dynamic. For example, if the source site has content formatted with CSS that is dynamically built by JavaScript, it would be fairly difficult to come up with an automated transformation.
Whether the original site's owners will be happy about your site doing that is a separate issue.

Javascript isn't good for SEO, is it?

If I decided to use some JavaScript on my website, like
$('#body').load(URL);
or
$.get(URL, {param:value}, function(){ ... });
or
document.title = 'TEXT';
Is that good for SEO? Or should I use pure PHP to put the data on the page for SEO purposes?
The question of whether JavaScript is good for SEO or not misses the point. We should pretty much assume that any content which is only available via JavaScript will not be crawled by search engines. Google at least claims to be able to crawl some JavaScript-only content, but is fairly tight-lipped about what exactly it can crawl. Other search engines probably don't crawl it, and it's certainly the case that not all do. So assume it doesn't get crawled.
That doesn't mean JavaScript is bad for SEO.
If the content would contribute to your SEO, then delivering it only via JavaScript is bad for SEO. If the content is neutral to SEO, then so is the delivery method. So the answer to your question really depends on the nature of your content. If the content is part of your SEO campaign, stick with server-side HTML generation, be it PHP or some other method. Otherwise the question of SEO has no bearing on the decision to use JavaScript or not. Accessibility is another thing to take into account; JavaScript-only content is terrible for that.
The larger search engines can and do render limited amounts of JavaScript. However, for SEO purposes your best bet is rendering the content as HTML rather than via JavaScript. A good rule of thumb is to use HTML for content and basic content structure (e.g. paragraph text = p, lists = ul/ol, headings = h1/h2/h3, etc.), CSS for presentation, and JS for client-side programming. With that said, always ensure a good user experience first. If you can do the above while providing a great user experience, great! If you can't, put users first. It's likely you can keep both users and bots happy 95% of the time if you take the time to do so.
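A tiny illustration of that division of labor, with made-up class names and content:
<!-- HTML carries the content and its structure -->
<h2>Sensor readings</h2>
<ul class="readings">
  <li>Temperature: 21 C</li>
  <li>Humidity: 40%</li>
</ul>
<!-- CSS handles presentation -->
<style>.readings li { color: #333; }</style>
<!-- JS adds behavior on top, without holding any content -->
<script>
  document.querySelector('.readings').onclick = function () {
    alert('Readings clicked');
  };
</script>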
Further reading (sorry, I can only post one link as a new user):
Matt Cutts Interview (Check out #26 on Google Javascript Rendering)
A spider's view of Web 2.0
I think you should first consider what SEO means: "Search Engine Optimization". How does a search engine get the data to be optimized in the first place?
It does a GET on the page, and whatever data is returned from that GET is processed. No JS engine, no POST data. So you should be optimizing for whatever data is returned on a GET.
Additionally, you tagged this with PHP, but the question has nothing to do with PHP.
Have you seen any of the questions on this list?
https://stackoverflow.com/search?q=javascript+seo
No sir: Google does not interpret Flash and JavaScript properly, so it may not crawl areas built with JavaScript or Flash content. I suggest you keep your website simple, but if flashy/JavaScript content is necessary, then keep a text-based backup as well.
The first thing you should be asking is not what is good for SEO, but what is good for users. For users, loading data with JavaScript will give them an interactive page, where they can start seeing the page immediately while it is still loading and where the page can update without having to reload it.
From Google's Webmaster Guidelines and its article on cloaking, you should not assume that crawlers can understand JavaScript. This does not mean you should not use JavaScript on your website, but rather that you should provide the textual equivalent in noscript tags, for use both by users with JavaScript disabled and by crawlers. Bear in mind that the content of these noscript tags should be roughly equivalent to what is shown with JavaScript enabled; showing different content to users and to search engines is called "cloaking" and is frowned upon, to say the least.
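A minimal sketch of that noscript idea, with hypothetical ids and content:
<div id="news"></div>
<script>
  // Content injected for visitors with JavaScript enabled.
  document.getElementById('news').innerHTML = '<p>Latest news...</p>';
</script>
<noscript>
  <!-- Roughly equivalent textual content for crawlers and
       for visitors with JavaScript disabled. -->
  <p>Latest news...</p>
</noscript>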
Google doesn't (yet) execute a page's JavaScript (JS). So if your JS replaces or creates content on a page, that content would normally be invisible to the crawlers (not good).
But the Googlers have implemented a URL hack that enables your server to create pages (from the server, not from JS) for all the different variants of your JS page's content.
This solves the SEO problem of Ajax-powered pages, at least for Google searches...
See Crawlable Ajax.
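Roughly, that scheme (which Google has since deprecated) mapped hash-bang URLs to server-renderable ones; the paths below are hypothetical:
// User-facing Ajax URL; the "#!" marks the state as crawlable:
//   http://example.com/#!/products/42
// What the crawler requests instead, expecting the server to return
// fully rendered HTML for that state:
//   http://example.com/?_escaped_fragment_=/products/42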
JavaScript, or any scripts for that matter, should never be used to house your site's content, ever! The entire web is driven by HTML and CSS, and in rare cases XML languages; everything else is a headache when it comes to SEO.
Ask yourself this question: what exactly is SEO, and what is it that search engines are indexing? JavaScript and other programming/scripting languages are proprietary in the sense that they are not standards defined by the W3C, which means they are essentially worthless when it comes to indexing content. HTML, CSS, and XML, on the other hand, are real standards developed for the web! It's fine to use scripts to add extra functionality to your pages, embed apps like social networking plugins, and so on, but you should never use them to hold your website's HTML, CSS, or actual content, for any reason. Here's a link to a good article that explains why you should be using HTML and CSS, and not a million scripts: optimizing webpages using proper html markup.
Scripts cause other problems besides code that is hard for search engines to decipher. For one, they are harder for browsers to process, causing pages to load much slower than "static" pages made with HTML and CSS would. Pages made with PHP tend to create "dynamic" URLs that users and search engines cannot read. This is why Google recommends that people who use JSP or PHP for their web pages include a sitemap; otherwise your links will never be found and might as well not exist.
Stick to the conventions! Let's face it, we have standards for a reason. If every electronic component in your home had a different type of plug that required a special socket, and all those devices had differing voltage and amperage requirements, what would happen? You would essentially burn down your house! And you'd be spending five hours a day at the hardware store looking for special adapters to fit your wall sockets.
If you plan on designing a website, use scripts only for embedding apps or connecting to a database, and use HTML and CSS to build "static" web pages. Also, use text links, as they are both human- and search-engine-readable, and easy to index and make sense of. Never use scripts for your links. Programming and scripting can be fun, but not on the internet.
Search engines index HTML, CSS, and content (multimedia, graphics, videos, text; that's it!); everything else is pointless and annoying to both users and search engines alike. For best results use XML and design a custom language.
Google can crawl, index and rank JavaScript-generated content.
But... it uses an old Chrome version (42) with an old JavaScript rendering engine.
The consequence is that your JavaScript code needs to work in older browsers and older Chrome versions (older than 42). So no fancy ES6 functions; you need to use polyfills or, for example, Babel.
Although you can do a lot with JavaScript (like click events or injecting your mobile menu), it's still recommended to use a normal <a href> link instead of a button with a JavaScript event and a function to get to a new page.
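In other words, prefer the first form below over the second (the URL is made up):
<!-- Crawlable: a plain link that the classic crawler can follow -->
<a href="/products.html">Products</a>
<!-- Not crawlable by itself: navigation hidden behind JavaScript -->
<button onclick="window.location.href = '/products.html'">Products</button>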
You can check Google's mobile testing tool, https://search.google.com/test/mobile-friendly, and inspect the errors/warnings/logs. If the rendered output looks as intended, Google will see your content.
In Search Console you can also ask for the page to be indexed. Sometimes the JavaScript crawler comes first, sometimes the "classic" crawler.
Double-check it some days afterwards by googling a sentence or paragraph from your page.
There's no answer as to whether one is better or not. Content is content, and Google should rank your website, SPA, PWA, AMP site, PDF document, online doc, wiki page, and so on based on their content, not on the underlying technique.
If you are familiar with JavaScript, give it a go.
Regards, Peter

Javascript based redirect: will it hurt SEO?

I recently implemented a fix to create separate landing pages depending on whether or not the user has JavaScript enabled. Basically, it works like this:
The default page is an HTML page with no JavaScript, a basic version of the site. On it there is a script saying that if JavaScript is enabled, go to another page. That landing page is generated by sending the user's request through a JSP file that renders the page (header, footer, etc.). The final landing page is http://whatever.com/home.jsp if the user has JavaScript enabled.
My question is whether this will hurt SEO. Considering that 99% of the world has JavaScript enabled, I would hate to compromise any SEO benefit to accommodate the 1% who don't.
Hope that makes sense.
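For concreteness, the detection/redirect described above can be as small as this sketch on the no-JavaScript default page; browsers with JavaScript enabled leave immediately, everyone else stays put:
<script>
  // Only runs when JavaScript is enabled; replace() avoids adding the
  // basic page to the browser history.
  window.location.replace('http://whatever.com/home.jsp');
</script>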
In general, search bots should be treated as browsers with JS disabled. I think you can now imagine where they'll land.
This whole question is, by the way, completely unrelated to JSP. JSP is just a server-side view technology that provides a template to write HTML/CSS/JS in, with taglibs to control the page flow dynamically and EL to access backend data. All that web browsers and bots see (and thus all that counts for SEO) is its generated HTML output.
http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Short version: if your JS sends them to entirely different content, it's probably bad, and Google may give you a hard time. Other than that, you should be good.
If the alternative version is an (almost) full-featured, full-content version, then it's perfectly OK.
Google even advises making alternatives for Flash-only sites, for example, for the sake of usability.
Read the Google FAQ.
You touch on two topics, one known as "cloaking", the other as "duplicate content". With cloaking, you present different (optimized-with-bad-intent) content based on identification of the client that accesses it, e.g. by inspecting the User-Agent header (googlebot versus browser). You are not doing this; you just want to present content in the way that suits your client best, like a redirect to a page optimized for mobile clients ("m.example.com").
The other thing is how to avoid duplicate content. One way is to indicate the original content source with a canonical tag; see here: http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html
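For reference, the canonical tag is a single line in the page's head (the URL is a placeholder):
<link rel="canonical" href="http://example.com/original-page.html">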

Using Javascript to render data onload

This post will probably need some modification. I'll do my best to explain...
Basically, as a tester, I have noticed that programmers who use template-based web backends sometimes push a lot of work into onload handlers, which then do things like load menu items, change display values in forms, etc.
For example, a page that displays your network configuration loads with blank (or dummy) values for the IP info, then runs an onload function that sets the real values when the page is rendered.
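A sketch of the pattern being described, with hypothetical ids and values:
<!-- The markup ships with blank/dummy values... -->
<form>
  IP address: <input id="ip" value="">
  Netmask: <input id="mask" value="">
</form>
<script>
  // ...and an onload handler fills them in after the page renders.
  window.onload = function () {
    document.getElementById('ip').value = '192.168.1.10';
    document.getElementById('mask').value = '255.255.255.0';
  };
</script>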
My experience (and gut feeling) is that this is a really bad practice, for a couple of reasons.
1- If the page is displayed in an environment where JavaScript is off (such as via "Send Page"), the page will not display properly in that environment.
2- The HTML page becomes very hard to diagnose, because what is actually on screen needs to be pieced together by executing the JavaScript in your head (this problem is less prominent with Firefox because of Firebug).
3- Most of the time, this is not being done via a standard practice or feature of the environment. In other words, there isn't a service on the back-end; the back-end code looks just as spaghetti-like as the resulting HTML.
And, not really a reason, more a correlation:
I have noticed that most coders who do this are generally the coders with a lot of code-related bugs or critical integration bugs.
So I'm not saying we shouldn't use JavaScript. I think what I'm saying is: when you produce a page dynamically, the dynamic behavior should be isolated to the back-end, and you should avoid changing the displayed information after the page is loaded and rendered.
I think what you're saying is that we should be doing Progressive Enhancement with JavaScript.
Also related: Progressive Enhancement with CSS, Understanding Progressive Enhancement and Test-Driven Progressive Enhancement.
So the actual question is: what are the advantages/disadvantages of JavaScript content generation?
Here's one: a lot of the things designers want are hard in straight HTML/CSS, or not fully supported. Using jQuery to do zebra tables with ":odd", for instance. Sometimes the server-side framework doesn't have a good way to accomplish this, so the way to get the cleanest code is actually to split it up like that.
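For example, striping a table with jQuery's :odd selector might look like this sketch (the class name and rows are made up):
<style>.zebra { background: #f2f2f2; }</style>
<table>
  <tr><td>Row 1</td></tr>
  <tr><td>Row 2</td></tr>
  <tr><td>Row 3</td></tr>
</table>
<script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
<script>
  // :odd matches every second row (0-indexed), producing the stripes.
  $('table tr:odd').addClass('zebra');
</script>
These days plain CSS can achieve the same with tr:nth-child(odd), no script required.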
