I am creating a website that uses jQuery scrolling as its method of navigation, so the user never leaves a single HTML page.
I have noticed that some websites are able to change the URL without reloading, and I have looked at posts/answers (such as How does GitHub change the URL without reloading a page? and Attaching hashtag to URL with javascript) which explain these changes as pushState, AJAX scripts or the history API (none of which I am too savvy in).
Currently I am looking into which method is best for my website and have been studying some examples that I like.
My question is: why do the websites below use /#/ in the path of the changing URL? I ask because I am seeing this more and more often on jQuery-heavy websites.
http://na.square-enix.com/ffxiii-2/
http://www.airwalk.com
If anyone could shed some light on what these sites are using to do this, it would be much appreciated, so I can possibly create my own script.
My question is why do the websites below use /#/ in the path for the changing URL
If we discount the possibility of ignorance of the alternatives, then: because they are willing to accept the horrible drawbacks in exchange for making it work in Internet Explorer (which doesn't support the history API).
GitHub takes the sensible approach of using the history API if it is available and falling back to the server if it isn't, rather than generating links that will break without JavaScript.
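A minimal sketch of that fallback approach (the link class, container id and fragment convention are illustrative, not GitHub's actual code):
if (window.history && window.history.pushState) {
    // Hijack navigation links only when the history API exists; without
    // it (or without JavaScript at all) the links remain normal page loads.
    $(document).on('click', 'a.nav', function (e) {
        e.preventDefault();
        history.pushState(null, '', this.href);          // update the address bar
        $('#content').load(this.href + ' #content > *'); // fetch just the new content
    });
    $(window).on('popstate', function () {
        // Back/forward: re-fetch the content for the now-current URL.
        $('#content').load(location.href + ' #content > *');
    });
}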
http://probablyinteractive.com/url-hunter
This is a nice example of how to change the URL with JavaScript.
I've not tried it myself, but I've read many reviews/opinions about History.js.
It's supposed to have the "# in the path" option you mention (as a fallback for older, incompatible browsers) and the Facebook-like direct changing of the URL. Plus, when you hit the back button, you get to the previous AJAX-loaded page with no problem.
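A rough sketch of how it is typically wired up (untested here; check the History.js README for the exact API of your version; the link class and container id are illustrative):
var History = window.History; // History.js, note the capital H
History.Adapter.bind(window, 'statechange', function () {
    var state = History.getState();
    $('#content').load(state.url + ' #content > *'); // re-fetch content for the new state
});
$('a.nav').on('click', function (e) {
    e.preventDefault();
    History.pushState(null, document.title, this.href); // real URL, or #-fallback in old browsers
});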
I've implemented such a feature (AJAX tabs with URL changing), but if the pages you want to load dynamically contain their own JavaScript, I wouldn't recommend AJAX-loaded pages, because scripts inside AJAX-loaded content won't necessarily be executed.
So I vote for either History.js or writing your own module.
Well, they're using the anchor "#" because they need to differentiate between multiple bookmarkable, browser-navigable places in the site while still keeping everything on the same page. By adding browser history entries of the form /mySamePage.html#page1, /mySamePage.html#page2 whenever a user action Ajax-loads some content into the current HTML page, you (obviously) stay on the current page, but at the same time the user can bookmark that specific content, and pressing back/forward in the browser will switch between the different Ajax-loaded states.
It's not bad as a trick; the only issue is SEO. Google has a nice page explaining this: http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
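The basic pattern behind those #-URLs looks something like this (the fragment paths are illustrative):
$(window).on('hashchange', function () {
    var page = location.hash.replace(/^#\/?/, '') || 'page1';
    $('#content').load('/fragments/' + page + '.html'); // Ajax-load the matching content
});
$(window).trigger('hashchange'); // honour a hash already present on first load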
Related
I would like to know if there is a way to prevent an HTML page that uses jQuery or JavaScript from being modified by the user to change its behavior.
A user can modify it using tools such as Firebug or the Google Chrome developer tools to hide or show divs, add event listeners to page elements, and so on.
I've seen some web pages that show a blocking div when the page loads, along with a pop-up telling you to answer some question. If you answer it, the div hides and you can see the page normally.
But if you try to hide the blocking div using Firebug, the page reloads, and there is no way to see the page correctly if you don't answer the question asked in the pop-up.
I want to know how I could prevent users from doing such things.
Thanks a lot.
It is not possible (which is a very good thing).
To defeat the method described in the question:
You can use the keyboard shortcuts to open the console/tools (Ctrl+Shift+I in Chrome)
You can use the resource/net panel to see the source
You can see it at any other level, e.g. in Fiddler
You can use a bookmarklet for easier access
No, you can't prevent people from seeing or modifying your source/script if they want to. The ones you most want to stop are the ones most able to circumvent anything you put in place (and anything you do is only a deterrent, not a stop).
The only way to do it (in my opinion) is to not load the page content until the user performs your desired action. After the user answers the question (or whatever), you send an AJAX request for the content (and, as thejh said, you should also validate the answer on the server, preferably in the same request). So you load the header, banners and anything non-critical up front, but the actual content (say, an article on a blog) is not loaded until the user acts.
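Roughly like this (the endpoint name and response shape are purely illustrative):
$('#answer-form').on('submit', function (e) {
    e.preventDefault();
    $.post('/api/unlock', { answer: $('#answer').val() }, function (res) {
        if (res.ok) {
            $('#content').html(res.html); // the real content only arrives now
        } else {
            alert('Wrong answer, try again.');
        }
    }, 'json');
});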
Everything that a user's browser receives belongs to the user, so you can't enforce anything on that.
For sure you can't prevent anybody from doing what they want, but you can make changes more difficult.
Take a look at the DOM events, especially the mutation events.
They give you the ability to see when something has changed (attributes, removed/inserted nodes, data in text nodes, ...). For example, you could build a function that watches particular attributes that you don't want changed and reloads the page if that happens.
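A sketch of the idea using a legacy mutation event (browser support for these events varied, and the element and attribute names here are illustrative; as said elsewhere, a determined user can still get around it):
document.addEventListener('DOMAttrModified', function (e) {
    // If someone tampers with the blocking div's style, reload the page.
    if (e.target.id === 'blocker' && e.attrName === 'style') {
        window.location.reload();
    }
}, false);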
As others have already said, it isn't possible to control what the end user does with data you've sent them.
It may be possible to detect the console object that Firebug and others use, but what can your site do with that information once you've got it? You can't disable Firebug or prevent it from being used, or even know whether it has been used.
The bottom line is that once the web page and JavaScript code have been sent to the browser, they are out of your control.
The closest you can get to what you want is to move some of your code out of Javascript and to the server, where it will be untouchable by the user. However you'll still have to have some client-side code, which will still be at the mercy of malicious users.
The other alternative is to move to Flash or something similar, where the end user doesn't have direct access to the code or the object model. This has its own downsides though, and you'd be bucking the trend, which is to move away from Flash toward HTML5 and JavaScript.
It's impossible. When you send code to the client, the client can look at it and modify it. Only code that runs on your server is protected against that.
I don't think it's possible to do that unless you can make sure (or force) users to use browsers that don't have developer tools.
Use AJAX to get remote information; don't send the user all the information, such as the answers to polls. Get the answer from the server, via AJAX, after the user has picked a choice, for example. Client-side validation on its own is never a good thing; that's how I used to delete careless people's databases. Until people learn properly how things really work, they get taught the hard way, such as losing everything to a root-access vulnerability.
I don't know why anyone expects HTML to be blocked; it never has been since browsers came out. I could write my own browser with a socket, have the HTML transferred straight into a textbox, and read it in my favorite editor.
As for JavaScript, you can simply type JavaScript commands into the browser's address bar (how convenient of browsers to support hackers, though it's also used for interop with other technologies such as Flash, so like everything it has a good side and an evil side).
If you didn't know, you can just do:
javascript: alert('hi');
or, if your JavaScript game (or whatever) has globally scoped variables, you can modify them easily:
javascript: score=9999;damage=99999;
Like I said, it's all good: it weeds out the bad programmers, gets them fired, or teaches them a lesson for the future.
I've seen many big sites still fall to a simple XSS (cross-site scripting) attack, which makes it baffling how those programmers got the job; I'd run a much tougher interview. It's ridiculous.
In order to see a live preview of another website that users link to on my website, I'm using iframes.
However, this is probably not the best solution, as the website is loaded directly into mine, with all the JavaScript etc. that is on the linked page.
My question: how dangerous is it to do such a thing? What is the worst case scenario that could happen? Could a linked site just by using JavaScript (or other technologies) do any serious harm to my site or my user's data?
And then, the second part of my question is, of course, about the website preview.
All I have found so far are scripts that involve more than one PHP and JS file in order to generate a website preview picture.
Isn't there an easier way to do this? What do you suggest?
how dangerous is it to do such a thing?
Some websites do not like to be embedded in frames. Such websites can take over the whole browser by ensuring they are loaded in the topmost window. Aside from that, as long as your website and the website you are loading aren't from the same domain, the framed site won't be able to access your cookies, DOM, etc. So it's pretty safe in that respect.
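For reference, the classic frame-busting check such sites use boils down to this:
if (window.top !== window.self) {
    // We are inside someone else's frame: replace the whole window with this page.
    window.top.location = window.self.location;
}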
about the website preview
There aren't many foolproof mechanisms other than generating the preview image server-side, which is what I believe the scripts you've seen do.
I really like how sites like FogBugz and Facebook offer snappy user interfaces by loading page content asynchronously.
What are some good resources and patterns for applying this to other websites? I am looking for a solution that creates a unique hash URL for each page, preserves history and basic browser functions, and degrades gracefully if JavaScript is not enabled (a great example of this is Facebook).
This blog post is a good start, but it's far from a complete solution/pattern - and any approaches using jQuery would be great.
IMO, in order to allow a site to degrade gracefully, you should first build at least the framework of the site at the lowest level you're going to support. In your case, that is going to be standard postbacks.
Once you've got this in place, you can then start adding ajax interactions.
The approach I've taken when using ASP.NET MVC is to have one function that builds the whole page from scratch (for regular postbacks) and then some extra methods that I use to refresh content dynamically via Ajax. If I wanted to implement a 'single page' approach like you describe, I would handle the onclick event of a hyperlink and call an Ajax method that renders the 'build whole page' method to a string, then pump that string into my content div.
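The client side of that boils down to something like this (the link class, query flag and container id are illustrative):
$('a.tab-link').on('click', function (e) {
    e.preventDefault();
    // Ask the server for the same page; a flag tells it to render only the content.
    $.get(this.href, { partial: true }, function (html) {
        $('#content').html(html); // pump the rendered string into the content div
    });
});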
HTH
I have found pjax to be the most promising solution so far. From https://github.com/defunkt/jquery-pjax:
pjax loads HTML from your server into the current page without a full reload. It's ajax with real permalinks, page titles, and a working back button that fully degrades.
pjax enhances the browsing experience - nothing more.
You can find a demo at http://pjax.heroku.com/
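Wiring it up is essentially a one-liner (the container id is illustrative):
$(document).pjax('a', '#pjax-container'); // pushState-load links into the container, with full-page fallback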
Here is an example of building an Ajax-based website using jQuery and PHP.
Here is a great article about loading content with jQuery; it degrades gracefully when JS is disabled.
I recently implemented a fix to create separate landing pages depending on whether or not the user has JavaScript enabled. Basically, the way it works is this.
The default page is an HTML page with no JavaScript, a basic version of the site. Upon landing on it, a script redirects to another page if JavaScript is enabled. That landing page is generated by sending the user's request through a JSP file that renders the page (header, footer, etc.). The final landing page is http://whatever.com/home.jsp if the user has JavaScript enabled.
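The detection script on the default page boils down to something like this (simplified):
// This only runs when JavaScript is enabled, so non-JS users stay on the basic page.
window.location.replace('/home.jsp');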
My question is whether this will hurt SEO. Considering 99% of the world has JavaScript enabled, I would hate to compromise any SEO benefit to accommodate the 1% who don't.
Hope that makes sense.
In general, searchbots should be treated as browsers with JS disabled. I think you can now imagine where they'll land.
This whole question is, by the way, completely unrelated to JSP. JSP is just a server-side view technology that provides a template to write HTML/CSS/JS in, with capabilities to control the page flow dynamically with taglibs and to access backend data with EL. All web browsers and bots see (and thus all that counts for SEO) is its generated HTML output.
http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Short version: if your JS sends them to entirely different content, it's probably bad, and Google may give you a hard time. Other than that, you should be good.
If the alternative version is an (almost) full-featured, full-content version, then it's perfectly OK.
Google even advises making alternatives for Flash-only sites, for example, with regard to usability.
Read the Google FAQ.
You touch on two topics; one is described as "cloaking", the other as "duplicate content". With cloaking, you present different (optimized-with-bad-intent) content based on identification of the client that accesses it, e.g. by inspecting the User-Agent header (google-bot versus browser). You are not doing this; you just want to present content in the way that suits each client best, like redirecting mobile clients to an optimized page ("m.example.com").
The other topic is how to avoid duplicate content. There's a way to indicate the original content source with a canonical tag; see here: http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html
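For reference, the canonical tag is a single line in the page's head, pointing at the preferred URL (example.com is a placeholder):
<link rel="canonical" href="http://example.com/original-page" />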
A friend of mine uses a web-app for work related purposes.
The app is built using PHP/MySQL, and while it has some JavaScript to make it easier to work with, it's not user-friendly enough, and with a bit of extra JS a lot of stuff could be automated.
I would like to enhance that app, but without having to modify the original server-side code. All I could think of for this was Greasemonkey. Is this the only way to do it, or am I missing something? I'd also like to be able to use a modern JS framework, like jQuery.
EDIT: I should say what improvements I want to make. There are a lot of fields on the page, so autocompletion would really help. This will be used for data entry, so AJAX may be used for some error checking as well.
Greasemonkey is certainly an option. Another idea is to code up your improvements, and then make bookmarklets out of them. Your friend can use the bookmarks (probably in a bookmark bar) to do the things you've improved. Bookmarklets have access to the page as though they were a part of the page.
Edit 1: In fact, now that I think about it, a bookmarklet should be able to load a script file (from a different origin) into a document by adding a script tag to the head section (well, or anywhere, really), since the same-origin policy is based on where the document came from, not the script. That way, he'd just have to press the button once (for any given page he goes to) to load your improvements.
Edit 2: Yup, a bookmarklet can be used to bootstrap any script file into the page; here's an example:
javascript:(function(){var%20d=document,db=d.body||d.documentElement,elm;elm=d.createElement('script');elm.src="http://example.com/yourscript.js";db.appendChild(elm);db.removeChild(elm);})();
That adds a script element for the file http://example.com/yourscript.js to the body of the current document, which executes it. (The bookmarklet then removes the script element; just adding it is enough, it doesn't have to stick around.) Your script can then do things like add other scripts (jQuery, in your example) in the same sort of way, fire up auto-completers, etc. I tested the above (which probably needs tuning) with Chrome and Firefox; IE isn't liking it, but I think that's an issue with my encoding of the bookmarklet rather than a fundamental problem. (I'm relatively new to bookmarklets.)
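Un-minified, the bookmarklet above is just:
(function () {
    var d = document,
        db = d.body || d.documentElement,
        elm = d.createElement('script');
    elm.src = 'http://example.com/yourscript.js'; // your hosted improvements
    db.appendChild(elm);  // appending the script tag executes it
    db.removeChild(elm);  // once run, the tag itself can go
})();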
I think JavaScript can manipulate across frames, can't it?
Can't you just make a page that loads the original site in one frame and your JS interface improvements in another?
(Getting the permission of the employer is also a good idea, if that hasn't been addressed.)