I'm building a website that focuses on loading only the data that has to be loaded.
I've built an example here and would like to know if this is a good way to build a webpage.
There are some problems when building a site like that, e.g.

bookmarking
going back and forth in history
SEO (since the content is basically not really connected)

So here is the example:
index.html
<html>
<head>
    <title>Somebody's Website</title>
    <!-- JavaScript -->
    <script type="text/javascript" src="jquery-1.3.2.min.js"></script>
    <script type="text/javascript" src="pagecode.js"></script>
</head>
<body>
    <div id="navigation">
        <ul>
            <li class="nav" id="link_Welcome">Welcome</li>
            <li class="nav" id="link_Page1">Page1</li>
        </ul>
    </div>
    <div id="content">
    </div>
</body>
</html>
pagecode.js
var http = null;

$(document).ready(function()
{
    // create XMLHttpRequest (with fallbacks for older IE)
    try {
        http = new XMLHttpRequest();
    }
    catch(e){
        try{
            http = new ActiveXObject("Msxml2.XMLHTTP");
        }
        catch(e){
            http = new ActiveXObject("Microsoft.XMLHTTP");
        }
    }
    // set navigation click events
    $('.nav').click(function(e)
    {
        loadPage(e);
    });
});

function loadPage(e){
    // e.g. "link_Welcome" becomes "Welcome.html"
    var page = e.currentTarget.id.slice(5) + ".html";
    http.open("POST", page);
    http.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    http.setRequestHeader("Connection", "close");
    http.onreadystatechange = function(){changeContent(e);};
    http.send(null);
}

function changeContent(e){
    if(http.readyState == 4){
        // load page
        var response = http.responseText;
        $('#content')[0].innerHTML = response;
    }
}
Welcome.html
<b>Welcome</b>
<br />
To this website....
So as you can see, I'm loading the content based on the IDs of the links in the navigation section. So to make the "Page1" navigation item linkable, I would have to create a "Page1.html" file with some content in it.
Is this way of loading data for your web page very wrong? And if so, what is a better way to do it?
Thanks for your time.
EDIT:
This was just a very short example, and I'd like to say that for users with JavaScript disabled it is still possible to provide the whole page (additionally) in static form.
e.g.
<li><a href="Welcome.html">Welcome</a></li>
and this Welcome.html would contain all the overhead of the basic index.html file.
By doing so, the AJAX-using version of the page would be some kind of extra feature, wouldn't it?
No, it isn't a good way to do it.
Ajax is a tool best used with a light touch.
Reinventing frames using it simply recreates all the problems of frames, except that it replaces the issue of orphan pages with complete invisibility to search engines (and other user agents that don't support JS or have it disabled).
By doing so, the AJAX-using version of the page would be some kind of extra feature, wouldn't it?
No. Users won't notice, and you break bookmarking, linksharing, etc.
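If you do keep the AJAX navigation anyway, one common mitigation (my sketch, not part of the original question) is to drive it from the URL hash, so each view gets a bookmarkable address and back/forward keep working:

// Hash-routing sketch: assumes the nav links become <a href="#Welcome"> etc.
// and that Welcome.html / Page1.html exist as in the question. The hashchange
// event needs a reasonably modern browser.
function loadFromHash() {
    var page = location.hash.slice(1) || 'Welcome'; // "#Page1" -> "Page1"
    $('#content').load(page + '.html');             // fetch and inject
}
window.onhashchange = loadFromHash;  // fires on back/forward and hash links
$(document).ready(loadFromHash);     // render whatever the URL names on entry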
It's wrong to use AJAX (or any JavaScript, for that matter) only for the sake of using it (unless you're learning how to use AJAX, which is a different matter).
There are situations where the use of JavaScript is good (mostly when you're building a custom user interface inside your browser window) and where AJAX really shines. But loading static web pages with JavaScript is very wrong: first, you tie yourself to a browser that can run your JS; second, you increase the load on your server and on the client side.
More technical details:
The function loadPage should be re-written using jQuery's $.post(). This is a random shot, not tested:
function loadPage(e){
    // e.g. "link_Welcome" becomes "Welcome.html"
    var page = e.currentTarget.id.slice(5) + ".html";
    $.post( page, null, function(response){
        $('#content').html(response);
    } );
}
Be warned, I did not test it, and I might have gotten this function a little wrong. But... dude, you are using jQuery already - now abuse it! :)
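Along the same lines, jQuery's .load() does the fetch and the injection in one call; an equally untested sketch:

function loadPage(e){
    // e.g. "link_Welcome" becomes "Welcome.html"
    var page = e.currentTarget.id.slice(5) + ".html";
    $('#content').load(page); // GET the page and inject the response in one step
}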
When considering implementing an AJAX pattern on a website you should first ask yourself the question: why? There are several good reasons to implement AJAX but also several bad reasons depending on what you're trying to achieve.
For example, if your website is like Facebook, where you want to offer end-users with a rich user interface where you can immediately see responses from friends in chat, notifications when users post something to your wall or tag you in a photo, without having to refresh the entire page, AJAX is GREAT!
However, if you are creating a website where the main content area changes for each of the top-level menu items using AJAX, this is a bad idea for several reasons.

First, and what I consider to be very important, SEO (Search Engine Optimization) is NOT optimized. Search engine crawlers do not follow AJAX requests unless they are loaded via the onclick event of an anchor tag. Ultimately, in this example, you are not getting the value out of the rich experience, and you are losing a lot of potential viewers.
Second, users will have trouble bookmarking pages unless you implement a smart way to parse URLs to map to AJAX calls.
Third, users will have problems properly navigating using the back and forward buttons if you have not implemented a custom client-side mechanism to manage history.
Lastly, each browser interprets JavaScript differently, and the more JavaScript you write, the more potential there is for losing cross-browser compatibility, unless you implement a framework such as jQuery, Dojo, Ext, or MooTools that handles most of that for you.
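For what it's worth, here is a sketch of such a URL-and-history mechanism (points two and three above) using the HTML5 History API. It assumes a modern browser and that the server can also serve /Welcome, /Page1 directly, so deep links and crawlers still get content:

// History API sketch: gives each AJAX view a real URL and working
// back/forward buttons. The server-side routes for /Welcome and /Page1
// are an assumption, not something from the question.
function show(page) {
    $('#content').load(page + '.html');
}
$('.nav').click(function (e) {
    e.preventDefault();
    var page = this.id.slice(5);                       // "link_Page1" -> "Page1"
    history.pushState({ page: page }, '', '/' + page); // update the address bar
    show(page);
});
window.onpopstate = function (e) {                     // back/forward buttons
    if (e.state) show(e.state.page);
};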
gabtub, you are not wrong; you can get AJAX-intensive web sites that are SEO-compatible, with bookmarking and back/forward buttons (history navigation in general), that work with JavaScript disabled (avoiding site duplication), and that are accessible...
There is one problem: you must go back to server-centric.
You can get some "howtos" here.
And take a look at ItsNat.
How about unobtrusiveness (or whatever I should call it)?
If the user has no JavaScript for some reason, he'll only see a list with Welcome and Page1.
Yes it's wrong. What about users without JavaScript? Why not do this sort of work server-side? Why pay the cost of multiple HTTP requests instead of including the files server-side so they can be downloaded in a single fetch? Why pay the cost of non-JavaScript enabled clients not being able to view your stuff (Google's spider being an important user who'll be alienated by this approach)? Why? Why?
Related
So I know this may sound stupid, but I want to know if there is any way that I could redirect someone to a website and then display an alert box on that web page, using either JavaScript or any other interface that can be weaved into HTML5. I asked some of my classmates, and they didn't know, so I just need a confirmation that this isn't possible.
I have run a few trials I found, but on further review, they wouldn't work.
Edit: I have control over the site, but I wish for the box to only pop up if I redirect to it.
I can give the code if it would help, but I'm doubtful it will.
Sorry for wasting your time if I did. Thanks.
Similar to Gerard's answer, but using query strings. A possibly more standard solution.
On the first page:
<script>
window.location = 'https://{your website here}/?showAlert=true';
</script>
Then on the 2nd page.
const urlParams = new URLSearchParams(window.location.search);
const showAlert = urlParams.get('showAlert');
if (showAlert === "true") {
    alert("Hello");
}
Note this will not work in Internet Explorer, which doesn't support URLSearchParams.
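If IE matters, a hand-rolled check avoids URLSearchParams entirely (my variant, same behaviour):

// IE-compatible variant: test the query string directly instead of
// relying on URLSearchParams.
if (/[?&]showAlert=true(&|$)/.test(window.location.search)) {
    alert("Hello");
}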
You could communicate to the new site that it's a redirect by appending a hash to the URL:
On the site you want to redirect from:
<script>
// if you can't use a normal link, change the url with JS
window.location = 'https://{your website here}/#redirect';
</script>
On the site you want to show the alert:
<script>
if (window.location.hash.includes('#redirect')) alert('What you want to say');
</script>
If you only have control over the first site, you could alert before redirecting; there wouldn't be much difference to the user, since the alert blocks execution of other code.
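That last option is simply (a sketch, using the same placeholder URL as above):

// Alert first, then redirect; alert() blocks until the user dismisses it.
alert('What you want to say');
window.location = 'https://{your website here}/';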
You can call an alert box in a HTML document with JavaScript like this:
window.onload = function () {
    alert("Hello World!");
}
-> https://jsfiddle.net/u8x6s31v/
You can also redirect somebody, if the page he's visiting is yours. Is that what you mean?
Update:
I think you can't see the URL where someone comes from with JavaScript. The only way to trigger scripts for some people is to add a hash.
window.onload = function () {
    var hashUrl = window.location.hash;
    if (hashUrl == "#alert") {
        alert("Hello World!");
    }
}
Then call for example: domain.tld/path/index.html#alert
-> https://jsfiddle.net/u8x6s31v/1/
This is a vague question with a lot of possible answers, but seeing as how you're new, I'm going to speculate on what you might be asking and try to answer it.
It's not clear what you mean by "redirect".
Is this a server-side redirect, like when you move/change a URL and redirect the old URL to a new one? Is this a JavaScript/meta refresh that you put in a 404 template? Is this a redirect that occurs after a user takes an action?
Regardless, you're going to have to "annotate" that user and then take action upon that annotation. The most obvious method would be based on the "Referer" HTTP header, but it is also possible to do it based on the presence of a cookie.
Additionally, adding URL parameters to a redirect is trivially easy and often used for stuff like this.
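As a rough sketch of the referrer-based annotation (the origin shown is hypothetical, and document.referrer is often empty after script-driven redirects, so treat it as best-effort):

// Best-effort referrer check; "https://old-site.example" is a hypothetical
// origin. Combine with a URL parameter or cookie for reliability.
if (document.referrer.indexOf('https://old-site.example') === 0) {
    alert('You arrived via the redirect.');
}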
The immediate things that come to my mind would be via Google Analytics (preferably implemented via Google Tag Manager because it'll make it easier).
Look into "outbound link tracking", "cross-domain tracking" and "UTM campaign tracking" (all related to Google Analytics and/or Google Tag Manager) and you'll probably find something that suits your needs.
URL shorteners are commonly used to mask parameters in links and there are open source libraries that allow you to host your own URL shortener (which you could integrate into your redirects) or do some other type of link tracking like is often used for affiliates.
So I run a site that uses a lot of JavaScript and AJAX. I understand how to make users' browsers fetch fresh files when the page first loads. But what happens if I need them to refresh their browser after they have already loaded the site?
I want to change the AJAX that is served to the client to speed things up, but this is going to cause errors for the users who have not yet refreshed their browser.
The only solution I can come up with is that when a new version of the JavaScript file is required, the site uses a popup that asks the users to force refresh their browsers. (This won't really fix the current version, but would prevent future issues.)
I hate to use a popup for something that I could do automatically. Is there a better way to force updates for the client?
window.location.href = "http://example.com"
replaces the current page with the one pointed to by http://example.com.
You sound like you are having trouble with your JavaScript getting an updated version of the data it loads through Ajax methods, is that correct? For instance, if two Ajax calls try to load 'data.txt', then the second call merely uses the cached version.
You may also be having trouble with loading new versions of your script itself.
The way around both of these problems is to add a randomly-generated query string to your script source and your Ajax source.
For example, make one script that loads your main script, like this:
/* loader1.js */
document.write('<script src="mainjavascript.js?.rand=', Math.random(), '"></script>');
And in your HTML, just do
<script src="loader1.js"></script>
The same method works for JavaScript Ajax requests as well. Assuming that "client" is a new XMLHttpRequest() object and has been properly set up with a readystatechange function and so on, you simply append the same query string, like this:
client.open('GET', 'data.txt?.rand=' + Math.random(), true);
client.send();
You may be using a library to do your Ajax requests, in which case it's even easier. Just specify the data URL as 'data.txt?.rand=' + Math.random() instead of merely 'data.txt'.
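A common refinement (my suggestion, untested against the setup above) is a deploy-time version token instead of Math.random(), so files stay cacheable until you actually ship a change:

/* loader1.js, version-pinned variant */
// APP_VERSION is a hypothetical token you bump on each deploy; unchanged
// versions remain cacheable, unlike Math.random(), which defeats the cache
// on every page load.
var APP_VERSION = '20110601-1';
document.write('<script src="mainjavascript.js?v=' + APP_VERSION + '"><\/script>');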
I need to create a simple static web part - the only dynamic part is the current user login name (needs to be included in a URL parameter in a link).
I'd rather just use a content editor web part instead of a custom web part that I'll need to deploy. In fact, I already did that using JavaScript and ActiveX (WScript.Shell - full code below).
Is there a better, more native way to do it? What I don't like about the ActiveX approach is that it requires more liberal configuration of IE, and even when I enable everything ActiveX-related in the security settings there is still a prompt that needs to be clicked.
Cross-browser support is not a major issue since it's an intranet - but it would be a nice extra, too.
One way I can think of is to scrape the username from the top-right hand corner with jQuery (or just JavaScript), but is there something even cleaner?
Or maybe someone has already done the scraping and can share the code to save me some time ;-)
Here's my current solution:
<script language="javascript">
function GetUserName()
{
    var wshell = new ActiveXObject("WScript.Shell");
    var username = wshell.ExpandEnvironmentStrings("%USERNAME%");
    var link = document.getElementById('linkId');
    link.href = link.href + username.toUpperCase();
}
</script>
<P align=center>
<a id="linkId" onclick="GetUserName();" href="<my_target_URL>?UserID=">open username-based URL</a>
</P>
Include this Web part token in your content editor web part:
_LogonUser_
It outputs the same value as Request.ServerVariables("LOGON_USER")
I've done this before by including it in a hidden span and using plain old document.getElementById to get the innerText.
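A sketch of that hidden-span approach (the span id and the rendered value are hypothetical; the server must render the login name into the span):

<!-- rendered by the server somewhere on the page -->
<span id="currentUser" style="display:none">DOMAIN\jsmith</span>
<script>
// Read the server-rendered name back out and append it to the link.
var span = document.getElementById('currentUser');
var username = span ? (span.innerText || span.textContent) : '';
var link = document.getElementById('linkId');
link.href = link.href + encodeURIComponent(username.toUpperCase());
</script>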
My conclusion is that there's no better way to do that (which is also sufficiently easy).
I checked the source of the page and the username is not anywhere there, so can't be scraped by JavaScript/jQuery. The browser doesn't seem to be sending it with the request (as it does with your local IP and other client-related information), so can't be obtained from headers either.
The only other approach I can conceive of is calling a web service from the client side, which would be overkill for my scenario.
I have been asked to automate logging into a webapp (what I assume to be one; it runs a lot of .aspx and .js scripts) that, currently, can only run in IE. Now, I am programming in Perl and have tried to use Win32::IE::Mechanize to run the IE browser and log in. What I did was try to extract all the forms from the webapp and, given the user's information, fill out the required forms, but this is where the problem arises: when I try to run the subroutine, no forms appear...
So then I transitioned to WWW::Mechanize and used the post subroutine (from LWP::UserAgent), which solved the problem for the most part. Now I've run into a problem with the response from the server: I get this script as the content of the response and I don't know what to do with it.
So my question is: using Perl, how can I go about manipulating JavaScript functions on a website (so that I can fully log in to the webapp)? Would that even be a valid solution to the problem?
I am open to writing this in other programming languages as well. Thanks in advance for the help!
Update: The content of the response:
var msgTimerID;
var strForceLogOff = "false";

function WindowOnLoad(){
    if ("false" == "true" && "false" == "false")
        MerlinSystemMsg("",64);
    if ("false"=="true")
        msgTimerID = window.setInterval("MerlinSystemMsg(10095,64)", 300000,'javascript');
}
function MyShowModal(){
    showModalDialog("", window, strFeatures);
}
function clearMsgInterval(){
    window.clearInterval(msgTimerID);
}
function WindowOnUnLoad(){
    if(top.frames(0).document.getElementById("OPMODE").value =="LOGOFF"){
        strFeatures = "width=1,height=1,left=1000,top=1000,toolbar=no,scrollbars=no,menubar=no,location=no,directories=no,status=yes,resizable=1";
        window.open("ForceLogOff.aspx","forcelogout",strFeatures);
    }
}
window.onbeforeunload = WindowOnUnLoad;
window.onload = WindowOnLoad;
There is also this Frame Title that has the src:
FRAME TITLE="Service Desk Express Navigator" SRC="options_nailogo.aspx" MARGINWIDTH=0 MARGINHEIGHT=0 NORESIZE scrolling=no
Trying to emulate the browser with a fully functioning JS engine is going to be a mighty big task. Instead, I'd suggest that you just try to emulate the actual interaction with the web site and not care what HTML/JS is actually sent back. Your server side code doesn't care how the HTTP submissions take place, only that they do. Admittedly this is more fragile if the forms change a lot, but at least you're not trying to implement a full browser.
So look at modules like LWP::UserAgent, HTTP::Request and HTTP::Response.
I'm copying and pasting my answer to your other duplicate question here
(You should consider deleting one of these?)
That content is the website source :)
How WWW::Mechanize deals with FRAME SRC as a link:
Note that <FRAME SRC="..."> tags are parsed out of the HTML and treated as links, so this method works with them.
You'll want to use follow_link on that link.
As far as dealing with Javascript, there is support for a Firefox Add-on called MozRepl that you can use in conjunction with WWW::Mechanize::Firefox that I have used in the past to call Javascript code while crawling a page.
For support reasons I want a user to be able to take a screenshot of the current browser window as easily as possible and send it over to the server.
Any (crazy) ideas?
That would appear to be a pretty big security hole in JavaScript if you could do this. Imagine a malicious user installing that code on your site with a XSS attack and then screenshotting all of your daily work. Imagine that happening with your online banking...
However, it is possible to do this sort of thing outside of JavaScript. I developed a Swing application that used screen capture code like this which did a great job of sending an email to the helpdesk with an attached screenshot whenever the user encountered a RuntimeException.
I suppose you could experiment with a signed Java applet (shock! horror! noooooo!) that hung around in the corner. If executed with the appropriate security privileges given at installation it might be coerced into executing that kind of screenshot code.
For convenience, here is the code from the site I linked to:
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
...
public void captureScreen(String fileName) throws Exception {
    Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
    Rectangle screenRectangle = new Rectangle(screenSize);
    Robot robot = new Robot();
    BufferedImage image = robot.createScreenCapture(screenRectangle);
    ImageIO.write(image, "png", new File(fileName));
}
...
Please see the answer shared here for a relatively successful implementation of this:
https://stackoverflow.com/a/6678156/291640
Utilizing:
https://github.com/niklasvh/html2canvas
You could try to render the whole page into a canvas and save that image back to the server. Have fun :)
A webpage can't do this (or at least, I would be very surprised if it could, in any browser) but a Firefox extension can. See https://developer.mozilla.org/en/Drawing_Graphics_with_Canvas#Rendering_Web_Content_Into_A_Canvas -- when that page says "Chrome privileges" that means an extension can do it, but a web page can't.
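For reference, the privileged API that page describes looks roughly like this; it only runs with chrome privileges (i.e. inside an extension), so treat it as a sketch rather than drop-in code:

// Firefox-extension-only sketch: drawWindow is a chrome-privileged API and
// throws in ordinary web pages.
var canvas = document.createElement('canvas');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
var ctx = canvas.getContext('2d');
// drawWindow(window, x, y, width, height, backgroundColor)
ctx.drawWindow(window, 0, 0, canvas.width, canvas.height, 'rgb(255,255,255)');
var dataUrl = canvas.toDataURL('image/png'); // PNG snapshot of the rendered page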
Seems to me that support needs (at least) the answers for two questions:
What does the screen look like? and
Why does it look that way?
A screenshot -- a visual -- is very necessary and answers the first question, but it can't answer the second.
As a first attempt, I'd try to send the entire page up to support. The support tech could display that page in his browser (answers the first question); and could also see the current state of the customer's html (helps to answer the second question).
I'd try to send as much of the page as is available to the client JS by way of AJAX or as the payload of a form. I'd also send info not on the page: anything that affects the state of the page, like cookies or session IDs or whatever.
The customer might have a submit-like button to start the process.
I think that would work. Let's see: it needs some CGI somewhere on the server that catches the incoming user page and makes it available to support, maybe by writing a disk file. Then the support person can load (or have loaded automatically) that same page. All the other info (cookies and so on) can be put into the page that support sees.
PLUS: the client JS that handles the submit-button onclick() could also include any useful JS variable values!
Hey, this can work! I'm getting psyched :-)
HTH
-- pete
I've seen people either do this with two approaches:
set up a separate server for screenshotting and run a bunch of Firefox instances on there; check out these two gems if you're doing it in Ruby: selenium-webdriver and headless
use a hosted solution like http://url2png.com (way easier)
You can also do this with the Fireshot plugin. I use the following code (that I extracted from the API code so I don't need to include the API JS) to make a direct call to the Fireshot object:
var element = document.createElement("FireShotDataElement");
element.setAttribute("Entire", true);
element.setAttribute("Action", 1);
element.setAttribute("Key", "");
element.setAttribute("BASE64Content", "");
element.setAttribute("Data", "C:/Users/jagilber/Downloads/whatev.jpg");
if (typeof(CapturedFrameId) != "undefined")
    element.setAttribute("CapturedFrameId", CapturedFrameId);
document.documentElement.appendChild(element);
var evt = document.createEvent("Events");
evt.initEvent("capturePageEvt", true, false);
element.dispatchEvent(evt);
Note: I don't know if this functionality is only available for the paid version or not.
Perhaps http://html2canvas.hertzen.com/ could be used. Then you can capture the display and then process it.
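A sketch of what that could look like with html2canvas's promise-based API (recent releases; the /screenshot upload endpoint is hypothetical):

// Capture the page with html2canvas, then POST the PNG data URL to a
// hypothetical /screenshot endpoint.
html2canvas(document.body).then(function (canvas) {
    var png = canvas.toDataURL('image/png');
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/screenshot', true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('image=' + encodeURIComponent(png));
});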
You might try PhantomJS, a headless browsing toolkit.
http://phantomjs.org/
The following JavaScript example demonstrates basic screenshot functionality:
var page = require('webpage').create();
page.settings.userAgent = 'UltimateBrowser/100';
page.viewportSize = { width: 1200, height: 1200 };
page.clipRect = { top: 0, left: 0, width: 1200, height: 1200 };
page.open('https://google.com/', function () {
    page.render('output.png');
    phantom.exit();
});
I understand this post is 5 years old, but for the sake of future visits I'll add my own solution here which I think solves the original post's question without any third-party libraries apart from jQuery.
var pageClone = $('html').clone();
// Make sure that CSS and images load correctly when opening this clone
pageClone.find('head').append("<base href='" + location.href + "' />");
// OPTIONAL: Remove potentially interfering scripts so the page is totally static
pageClone.find('script').remove();
var htmlString = pageClone.html();
You could remove other parts of the DOM you think are unnecessary, such as the support form if it is in a modal window. Or you could choose not to remove scripts if you prefer to maintain some interaction with dynamic controls.
Send that string to the server, either in a hidden field or by AJAX, and then on the server side just attach the whole lot as an HTML file to the support email.
The benefits of this are that you'll get not just a screenshot but the entire scrollable page in its current form, plus you can even inspect and debug the DOM.
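As a concrete example of the AJAX option (the /support-snapshot endpoint is hypothetical):

// Ship the snapshot plus some context to a hypothetical server endpoint.
$.post('/support-snapshot', {
    html: htmlString,
    url: location.href,       // where the user was
    cookies: document.cookie  // session context, as suggested earlier
});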
Print Screen? Old school and a couple of keypresses, but it works!
This may not work for you, but on IE you can use the snapsie plugin. It doesn't seem to be in development anymore, but the last release is available from the linked site.
I think you need an ActiveX control; without one I can't imagine how. You can force the user to install it first; after the installation on the client side, the ActiveX control should work and you can capture.
We are temporarily collecting Ajax states, data in form fields and session information. Then we re-render it at the support desk. Since we test and integrate for all browsers, there are hardly any support cases for display reasons.
Have a look at the red button at the bottom on holidaycheck
Alternatively there is html2canvas. But it is only applicable to newer browsers and I've never tried it.
In JavaScript? No. I do work for a security company (sort of NetNanny type stuff) and the only effective way we've found to do screen captures of the user is with a hidden application.