Images through XML into HTML - JavaScript

I have a folder of images that need to be displayed on an HTML page. I'm using an Excel spreadsheet exported to XML, which is imported into the HTML using the JavaScript below (copied and pasted from w3schools).
What I need to do is get the images from the images folder via the XML and display each one between the h2 and the h3.
How do I do this, and what would it look like in the XML file and in the JavaScript below?
Each div (below) then needs to be a link to a different page.
Also, the items in the XML need to be indexable/searchable; I have Google Custom Site Search.
Thanks in advance.
<script type="text/javascript">
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.open("GET","cdcat.xml",false);
xmlhttp.send();
xmlDoc=xmlhttp.responseXML;
var x=xmlDoc.getElementsByTagName("CD");
for (i=0;i<x.length;i++)
{
document.write("<div class=\"feat_product\"><h2>");
document.write(x[i].getElementsByTagName("ARTIST")[0].childNodes[0].nodeValue);
document.write("</h2><h3>");
document.write(x[i].getElementsByTagName("TITLE")[0].childNodes[0].nodeValue);
document.write("</h3></div>");
}
</script>

I will not write your code for you, but I'll explain the approach:
In the JavaScript, you need to write out img elements and set their src attribute to the public path the images are exposed on.
In your XML, each CD element will also need to carry the image path, so the JavaScript can read it.
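For concreteness, here is a minimal sketch of that approach; the IMAGE element name and the images/ folder path are assumptions, so adjust them to match your export. The loop in your script would become something like:
// Assumed XML shape per CD (not your actual file):
//   <CD><ARTIST>...</ARTIST><TITLE>...</TITLE><IMAGE>cover1.jpg</IMAGE></CD>
var cds = xmlDoc.getElementsByTagName("CD");
for (var i = 0; i < cds.length; i++)
{
  var artist = cds[i].getElementsByTagName("ARTIST")[0].childNodes[0].nodeValue;
  var title  = cds[i].getElementsByTagName("TITLE")[0].childNodes[0].nodeValue;
  var image  = cds[i].getElementsByTagName("IMAGE")[0].childNodes[0].nodeValue;
  document.write("<div class=\"feat_product\"><h2>" + artist + "</h2>");
  // the image goes between the h2 and the h3, as asked
  document.write("<img src=\"images/" + image + "\" alt=\"" + title + "\">");
  document.write("<h3>" + title + "</h3></div>");
  // to make each block a link, wrap the div's contents in an <a href> written the
  // same way; the target URL would also need to come from the XML
}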

Related

External PHP page taking too long to parse and load with JavaScript

I have some code that is supposed to fetch an external page (in this case, PHP), parse it, and load it inside another page. It works like an iframe, but it uses JavaScript. The code works, but it's the last thing executed on the page: it has to wait for everything else on the page to load before it runs.
What I need is for it to load with the page, or before the page loads.
I believe it's because of the window.addEventListener('load', ...) event.
<div style="min-height:300px;
display: block;">
<script type="text/javascript">
function loadXMLDoc()
{
var xmlhttp;
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("myDiv020103").innerHTML=xmlhttp.responseText;
}
}
xmlhttp.open("GET","http://www.jonasweb.net/samples.php",true);
xmlhttp.send();
}
if(window.addEventListener){
window.addEventListener('load',loadXMLDoc,false); //W3C
}
else{
window.attachEvent('onload',loadXMLDoc); //IE
}
</script>
<div width="100%" id="myDiv020103"></div>
</div>
<div style="clear:both"></div>
I tried creating a fiddle, but it doesn't work there: http://jsfiddle.net/5wcncs1x/ (it only works on a host).
I figured it out myself. I just had to change:
if(window.addEventListener){
window.addEventListener('load',loadXMLDoc,false); //W3C
}
else{
window.attachEvent('onload',loadXMLDoc); //IE
}
to:
document.addEventListener("DOMContentLoaded", loadXMLDoc, false);
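One caveat, not in the original answer: old IE (IE8 and below) never fires DOMContentLoaded, so if you still need those browsers you can keep the attachEvent branch as a fallback. A small sketch:
if (document.addEventListener) {
  // fires as soon as the DOM is parsed, before images and other subresources finish loading
  document.addEventListener("DOMContentLoaded", loadXMLDoc, false);
} else {
  // IE8 and older: no DOMContentLoaded, fall back to the load event
  window.attachEvent("onload", loadXMLDoc);
}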

JavaScript links and Google SEO

I have an XML file which contains links to several internal HTML pages. I am using the HTML DOM to read these links and display them in a table. They are simple HTML links with no parameters, and the HTML pages reside on the server.
My question is: when I used Fetch as Google in Webmaster Tools, Google fetched the JavaScript but not the table it populates. Will Google crawl and index these links? I want to make sure that the pages linked here get indexed; please guide me through this issue. Also let me know if there is a better way to display content from XML so that Google crawls these links.
<script>
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.open("GET","/jobs/jobs.xml",false);
xmlhttp.send();
xmlDoc=xmlhttp.responseXML;
document.write('<table id="example">');
document.write('<thead><tr><th>Job ID</th><th>Job Title</th><th class=\"mobexcl\">Location</th><th class=\"mobexcl\">Country</th><th class=\"mobexcl\">Date Posted</th><th>Status</th><th class=\"mobexcl\">View</th></tr></thead><tbody>');
var x=xmlDoc.getElementsByTagName("CD");
for (i=0;i<x.length;i++)
{
if(i%2==0){
document.write('<tr class="alt">');
}
else{
document.write('<tr class="alt1">');
}
document.write("<td>");
document.write(''+x[i].getElementsByTagName("JOBID")[0].childNodes[0].nodeValue+'');
document.write("</td><td>");
document.write(x[i].getElementsByTagName("TITLE")[0].childNodes[0].nodeValue);
document.write("</td><td class=\"mobexcl\">");
document.write(x[i].getElementsByTagName("LOCATION")[0].childNodes[0].nodeValue);
document.write("</td><td class=\"mobexcl\">");
document.write(x[i].getElementsByTagName("COUNTRY")[0].childNodes[0].nodeValue);
document.write("</td><td class=\"mobexcl\">");
document.write(x[i].getElementsByTagName("DATE")[0].childNodes[0].nodeValue);
document.write("</td><td>");
document.write(x[i].getElementsByTagName("STATUS")[0].childNodes[0].nodeValue);
document.write("</td><td class=\"mobexcl\">");
document.write('View/Apply');
document.write("</td></tr>");
}
document.write("</tbody></table>");
</script>
Crawlers won't execute the scripts on your page.
Google has devised a method for crawling AJAX-populated sites; you can read about it here.
The third item in that list seems to apply to your case: basically, your server needs to create an HTML snapshot of the AJAX-rendered page for the Google bot to crawl.
Google also provides a tool to test this setup.
HTH.
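As a rough illustration of the snapshot idea, here is a hypothetical Node.js sketch (not part of the original answer) that renders the same jobs.xml into static HTML on the server, so the links exist in the markup the crawler receives. It assumes the xml2js package and a CATALOG root element; adjust both to your actual file:
var fs = require('fs');
var xml2js = require('xml2js'); // npm install xml2js

xml2js.parseString(fs.readFileSync('jobs.xml', 'utf8'), function (err, doc) {
  if (err) throw err;
  // doc.CATALOG.CD is an array; each child element comes back as an array of strings
  var rows = doc.CATALOG.CD.map(function (cd) {
    return '<tr><td>' + cd.JOBID[0] + '</td><td>' + cd.TITLE[0] + '</td>' +
           '<td>' + cd.LOCATION[0] + '</td><td>' + cd.STATUS[0] + '</td></tr>';
  }).join('\n');
  // serve this file (or inline it) as the snapshot the crawler gets
  fs.writeFileSync('jobs-snapshot.html', '<table>' + rows + '</table>');
});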

Dynamically adding HTML using .innerHTML (formatting issue in IE7)

I have a select dropdown that, when changed, takes the new value and dynamically updates the div directly below it.
Here is my select input:
<select name='region_name' onchange='showDistrict(this.value)'>
Here is the JavaScript that controls the dynamic part:
<script>
function showDistrict(str)
{
if (str=="")
{
document.getElementById("district_div").innerHTML="";
return;
}
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("district_div").innerHTML=xmlhttp.responseText;
}
}
xmlhttp.open("GET","getdistrict.php?q="+str,true);
xmlhttp.send();
}
</script>
The page getdistrict.php?q= does another MySQL call and loops the results into the div "district_div". MY PROBLEM IS that this div does not stretch correctly in IE7, so my dynamic data overlays everything below it.
When I look at the source with Firebug I don't even see the new HTML from innerHTML, so I am not sure whether this is a CSS issue or something to do with .innerHTML.
Do not set any width or height on this div, or set only min-width/min-height; you can also set overflow:hidden on it so it contains its floated children.
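For example, applied from script right before the div is filled (the min-height value is just an assumption):
var div = document.getElementById("district_div");
div.style.width = "";          // no fixed width, let it size to its content
div.style.minHeight = "50px";  // a minimum rather than a fixed height
div.style.overflow = "hidden"; // makes the div wrap floated children instead of collapsing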

javascript document.write() removes the html from page and display result in a blank page [duplicate]

Possible Duplicate:
JavaScript - what are alternatives to document.write?
I am creating a JavaScript function which I want to execute after a few seconds, but when it executes it removes all the page content and displays only the result I write with document.write(). Here is my JavaScript code:
<script language="javascript">
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
setTimeout(function(){
xmlhttp.open("GET","some.xml",false);
xmlhttp.send();
xmlDoc=xmlhttp.responseXML;
var x=xmlDoc.getElementsByTagName("offer");
var page = parseInt(x.length) / 10;
document.write("<div class='pagination_btn_cont'>");
for (i=1;i<=page;i++)
{
document.write("<div class='pagination_btn'>"+i+"</div>");
}
document.write("</div>");
},10000);
</script>
When I open the webpage it displays all the page content, but after 10 seconds the page goes blank and shows only the numbers produced by the loop.
Any suggestions on how I can do this?
Use innerHTML to change the content of a specific element.
HTML
<div id="container"></div>
JavaScript
document.getElementById('container').innerHTML += '<div>content</div>';
Or you can use the jQuery library; life will be much easier:
$("#container").append("<div>content</div>");
You can use innerHTML in JavaScript. It inserts data into a particular div without affecting the rest of the page's contents.
Example:
var results = "";
for(var i=1;i<=10;i++)
{
results += "<div class='pagination_btn'>"+i+"</div>";
}
document.getElementById("your result show div id").innerHTML = results;
You can specify $('.pagination_btn').bind("click")... inside your document.ready handler, as sketched below.
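A short sketch of that binding (the handler body is an assumption; wire it to your own paging logic):
$(document).ready(function () {
  $('.pagination_btn').bind('click', function () {
    var page = parseInt($(this).text(), 10); // number shown on the clicked button
    // fetch and show the offers for `page` here
  });
});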
Try this:
// append the container div to the body, then fill it with the page buttons
document.body.innerHTML += '<div id="parent" class="pagination_btn_cont"></div>';
for (i=1;i<=page;i++)
{
document.getElementById('parent').innerHTML += "<div class='pagination_btn'>"+i+"</div>";
}

How to get unnamed elements from an external webpage using AJAX?

At the moment I'm trying to get an element off an external website using AJAX. So far I've (hopefully) managed to fetch the page:
var xmlhttp;
var version;
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
version=xmlhttp.responseText; // an XHR object has no innerHTML; the fetched markup is in responseText
}
}
xmlhttp.open("GET","http://www.minecraftwiki.net/wiki/Minecraft_Wiki",true);
xmlhttp.send();
Now I just need to find a way to get the contents:
<dd> Current PC version:
<b>
<a href="/wiki/Version_history#1.2.5" title="Version history">
1.2.5
</a>
</b>
</dd>
I've checked the source code of the URL and sadly the element I want is unnamed (it has no id=""), so is it still possible to do this? And if so, how? Thanks
First, you're making a cross-domain request, so unless you're using a browser that allows cross-domain AJAX, this is most likely not going to work for you without using a server-side proxy.
However, to answer your original question: you don't need an id attribute to access an element. While an id is helpful, you can reach an element in any number of ways.
Class Selectors
var col = document.getElementsByClassName("the-class");
Then loop through the collection until you find the element you want.
jQuery Selectors:
jQuery Selectors are perhaps the easiest way to handle DOM manipulations and get access to the element you are interested in:
// example of an attribute selector:
var exampleHTML = $('div[title="example"]').html();
There is also XPath, but in my experience jQuery's CSS-style selectors are more modern, more robust, and help speed up development.
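Putting it together for the markup quoted in the question (and assuming the cross-domain problem is solved first, e.g. via a same-origin proxy), a sketch that pulls the version out of the fetched HTML with a jQuery attribute selector:
xmlhttp.onreadystatechange = function () {
  if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
    // parse the raw response text into a detached DOM that jQuery can query
    var page = $('<div>').html(xmlhttp.responseText);
    version = page.find('a[title="Version history"]').first().text(); // e.g. "1.2.5"
  }
};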
