I'm working with some legacy code, and in many places there's code to get XML data from a URL. It's pretty straightforward.
var xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
xmlDoc.async = false; // note: boolean false, not the string "false"
xmlDoc.load(url);
and in some places
var httpRequest= new ActiveXObject("Msxml2.XMLHTTP.6.0");
httpRequest.open('GET', url, false);
httpRequest.send(null);
in most cases we're picking results from response XML.
I've seen in a number of places that "Microsoft.XMLDOM" is antiquated, so I've been replacing it with ActiveXObject("Msxml2.XMLHTTP.6.0").
For other browsers I should use the standard W3C XMLHttpRequest. This seems to work without any problems.
The problem is loading the result xml string.
I see that loadXML is defined when using "Microsoft.XMLDOM" but not with ActiveXObject("Microsoft.XMLHTTP");
For other browsers, as well as IE 11, DOMParser is the suggested method.
This is what I did to retrieve information from a url then
parse that information and then finally attempt to load the XML string to the DOM.
My main problem is that I'm getting more and more confused about which solution is appropriate for manipulating XML in Internet Explorer, or maybe it's just too late in the day.
I wanted to remove the usage of "Microsoft.XMLDOM", but to perform the loadXML I had to go back to it. Is there a better way to approach this?
// Get the information using either XMLHttpRequest or ActiveXObject
if (window.ActiveXObject || 'ActiveXObject' in window) {
    // IE (the 'in' test also catches IE 11, where window.ActiveXObject is hidden)
    httpRequest = new ActiveXObject("Msxml2.XMLHTTP.6.0");
} else if (window.XMLHttpRequest) {
    httpRequest = new XMLHttpRequest();
    if (httpRequest.overrideMimeType) {
        httpRequest.overrideMimeType('text/xml');
    }
}
httpRequest.open('GET', url, false);
httpRequest.send();
var xmlDoc = httpRequest.responseXML;

// Pull the embedded XML string out of the response document
// (.text is the MSXML property; .textContent is the W3C DOM equivalent)
var xml = xmlDoc.getElementsByTagName("XML_STRING_SETTINGS")[0].text;

// Load the XML string into the DOM
if (window.DOMParser) {
    var parser = new DOMParser();
    xmlDoc = parser.parseFromString(xml, "text/xml");
} else { // code for older IE
    xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
    xmlDoc.async = false;
    // is there another way to do this load?
    xmlDoc.loadXML(xml);
}
You may take the following steps:
Go to Internet Options.
Select the Security tab.
Click Custom level.
Under "Initialize and script ActiveX controls not marked as safe for scripting", select the Enabled radio button, then try to run your code again.
I am sure your problem will be solved.
I think the code for IE5 and 6 is
new ActiveXObject("Microsoft.XMLHTTP")
And for IE7+ (and all other browsers)
new XMLHttpRequest();
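That detection can be wrapped in a small helper. This is only a sketch; the window object is passed in as a parameter here purely so the branch choice can be exercised outside a browser (in a page you would call createXHR(window)):

```javascript
// Returns an XHR object using whichever constructor the environment provides.
// `win` would be `window` in a browser.
function createXHR(win) {
    if (win.XMLHttpRequest) {
        return new win.XMLHttpRequest();                   // IE7+ and all other browsers
    }
    if (win.ActiveXObject) {
        return new win.ActiveXObject("Microsoft.XMLHTTP"); // IE5 and IE6
    }
    throw new Error("No XMLHttpRequest support in this environment");
}
```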
Related
I want to make a JavaScript that reads an XML document as input (let's say "C010.xml"), searches for certain tags, and then returns the value within those tags.
For example, in the expression
<lesson_mode>normal</lesson_mode>
to return the value "normal".
Could you suggest something, please?
Thanks!
You need to get the XML first. Use XMLHttpRequest for that, then parse the response via DOMParser, which returns a Document instance. Then you can just access the value like this: doc.getElementsByTagName('lesson_mode')[0].textContent
I don't know what experience you have, so this is the basic structure:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'C010.xml', true);
xhr.onload = function () {
    var parser = new DOMParser();
    var doc = parser.parseFromString(xhr.responseText, 'application/xml');
    var value = doc.getElementsByTagName('lesson_mode')[0].textContent;
};
xhr.send(null);
Note that this is not by any means cross-browser nowadays. You would have to search for a slightly different way to parse the response in IE.
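For reference, the "different way" in older IE is MSXML's loadXML. A sketch of a combined parse helper is below; the host object is injected as a parameter here only so both branches can be tested with stubs, and in a browser you would pass window:

```javascript
// Parses an XML string into a document, preferring the W3C DOMParser and
// falling back to MSXML in older Internet Explorer.
function parseXml(win, xmlString) {
    if (win.DOMParser) {
        return new win.DOMParser().parseFromString(xmlString, "application/xml");
    }
    // Older IE: MSXML via ActiveX.
    var doc = new win.ActiveXObject("Microsoft.XMLDOM");
    doc.async = false;
    doc.loadXML(xmlString);
    return doc;
}
```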
I want to retrieve the data contained in a text file (from a given URL) through JavaScript (running on the client's browser).
So far, I've tried the following approach:
var xmlhttp, text;
xmlhttp = new XMLHttpRequest();
xmlhttp.open('GET', 'http://www.example.com/file.txt', false);
xmlhttp.send();
text = xmlhttp.responseText;
But it only works for Firefox. Does anyone have any suggestions to make this work in every browser?
Thanks
It works using xmlhttp = new ActiveXObject("Microsoft.XMLHTTP"); in older IE versions. Chrome, Firefox, and all sensible browsers use XMLHttpRequest.
Frankly, if you want cross-browser compatibility, use jQuery.
It's pretty simple there:
var text = "";
$.get(url, function (data) {
    // Note: $.get is asynchronous, so use the response inside this callback;
    // `text` will still be empty immediately after the $.get call returns.
    text = data;
});
var xhr = new XMLHttpRequest();
xhr.open('POST', '/uploadFile');
var form = new FormData();
form.append('file', fileInput.files[0]);
xhr.send(form);
It was previously impossible to upload binary data with the XMLHttpRequest object, because it did not support the FormData object (which, in any case, did not exist at the time). However, since the arrival of FormData and the second version of XMLHttpRequest, this is now easily achievable.
It's very simple: we just append our File object to a FormData object and upload it.
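The same idea, factored into a helper so the FormData assembly can be seen (and stubbed) on its own. The 'file' field name and the /uploadFile endpoint are just this example's; the constructor is a parameter only for testability, and in a browser you would pass the real FormData:

```javascript
// Builds the multipart payload for an upload.
function buildUpload(FormDataCtor, file) {
    var form = new FormDataCtor();
    form.append('file', file); // the field name must match what the server expects
    return form;
}

// In a browser:
//   var xhr = new XMLHttpRequest();
//   xhr.open('POST', '/uploadFile');
//   xhr.send(buildUpload(FormData, fileInput.files[0]));
```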
Ok, this piece of code:
http_request = new XMLHttpRequest();
if (http_request.overrideMimeType) {
    http_request.overrideMimeType('text/xml');
}
if (!http_request){return false;}
http_request.open('GET', realXmlUrl, true);
http_request.send(null);
xmlDoc = http_request.responseXML;
seems to successfully get an external xml file.
But when I try to view it by doing something like alert(xmlDoc), it won't let me see the actual XML file ;(
how do I see the actual XML file?
Thanks!
R
Check http_request.responseText. As long as responseXML isn't null, it should be a Document object and can be interacted with as such.
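If you want the markup itself rather than the [object Document] that alert(xmlDoc) shows, XMLSerializer can turn the document back into a string. A sketch (the constructor is a parameter here only so the helper can be exercised with a stub; in a browser you would pass the real XMLSerializer):

```javascript
// Serializes a parsed XML document back to its source text for inspection.
function xmlDocToString(SerializerCtor, doc) {
    return new SerializerCtor().serializeToString(doc);
}

// In a browser: alert(xmlDocToString(XMLSerializer, http_request.responseXML));
```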
I'm trying to download an HTML page and parse it using XMLHttpRequest (on the most recent Safari browser). Unfortunately, I can't get it to work!
var url = "http://google.com";
xmlhttp = new XMLHttpRequest();
xmlhttp.open("GET", url);
xmlhttp.onreadystatechange = function () {
    if (xmlhttp.readyState == 4) {
        response = xmlhttp.responseText;
        var doc = new DOMParser().parseFromString(response, "text/xml");
        console.log(doc);
        var nodes = document.evaluate("//a/text()", doc, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
        console.log(nodes);
        console.log(nodes.snapshotLength);
        for (var i = 0; i < nodes.snapshotLength; i++) {
            thisElement = nodes.snapshotItem(i);
            console.log(thisElement.nodeName);
        }
    }
};
xmlhttp.send(null);
The text gets downloaded successfully(response contains the valid HTML), and is parsed into a tree correctly(doc represents a valid DOM for the page). However, nodes.snapshotLength is 0, despite the fact that the query is valid and should have results. Any ideas on what's going wrong?
If you are using either a JS library, or a modern browser with the querySelectorAll method available (Safari is one), you can try to use CSS selectors to query the DOM instead of XPath.
HTML is not XML. The two are not interchangeable. Unless the "HTML" is actually XHTML, you will not be able to use XPath to process it. And even when it is XHTML, parsing it as XML puts the elements in the XHTML namespace, so a plain //a expression matches nothing; you would need to pass a namespace resolver to document.evaluate.
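As a sketch of the CSS-selector route: collecting the text of every a element from a parsed document. This works on any object that exposes querySelectorAll, which is also how it is exercised below:

```javascript
// Returns the text content of every <a> element in the given document.
function anchorTexts(doc) {
    var nodes = doc.querySelectorAll("a");
    var out = [];
    for (var i = 0; i < nodes.length; i++) {
        out.push(nodes[i].textContent);
    }
    return out;
}
```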
I'm playing about creating an RSS reader widget using Konfabulator/Yahoo. At the moment I'm pulling in the RSS using
var xmlDoc = COM.createObject("Microsoft.XMLDOM");
xmlDoc.loadXML("http:foo.com/feed.rss");
I've simplified it here by removing the error handling, but what else could I use to do the same task using Konfabulator? And how cross-platform is this?
COM is Windows-specific, and Yahoo Widgets has XML parsing built-in; so stay away from MSXML :P
You should use the built-in XMLDOM object instead. But since you want to download the XML document from the ’net anyway, XMLHttpRequest supports getting a DOMDocument directly, without having to pass the data to XMLDOM:
var request = new XMLHttpRequest();
request.open( "GET", "http://www.example.com/feed.rss", false);
request.send();
var xmlDoc = request.responseXML;
It works exactly like the XMLHttpRequest on a browser.
For completeness, if you need to parse XML from a string:
var xmlDoc = XMLDOM.parse("<foo>hello world</foo>");