Servlet Unable to Fetch the updated value from XML - javascript

In a Java web project we are using a servlet to update an XML tag value with values coming from a web page. For further execution, the servlet then has to fetch this updated tag value from the XML file and proceed; however, it retrieves the old value (from before the update) and proceeds with that.
public void setPeriodID(String bookingsBOPeriodID) throws InterruptedException {
try{
final String FilePath=UtilLib.getEnvVar("ConfigXMLFilePath");
String filepath = FilePath;
String bwperiodid=" and ";
DocumentBuilderFactory docFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder docBuilder = docFactory.newDocumentBuilder();
Document doc = docBuilder.parse(filepath);
// Get the staff element by tag name directly
Node Parameters = doc.getElementsByTagName("Parameters").item(0);
// loop the staff child node
NodeList list = Parameters.getChildNodes();
for (int i = 0; i < list.getLength(); i++) {
Node node = list.item(i);
//BookingsBO
if (bookingsBOPeriodID!=null && bookingsBOPeriodID.length()!=0 && "BookingsBOINPeriodId".equals(node.getNodeName()) && bookingsBOPeriodID.indexOf(bwperiodid)==-1 ){
System.out.println("***** Updating Bookings BO IN Period id ********");
System.out.println("inside updateEnvPeriodID::"+bookingsBOPeriodID);
node.setTextContent(bookingsBOPeriodID);
// node.setNodeValue(bookingsBOPeriodID);
}
}
// write the content into xml file
TransformerFactory transformerFactory = TransformerFactory.newInstance();
Transformer transformer = transformerFactory.newTransformer();
DOMSource source = new DOMSource(doc);
StreamResult result = new StreamResult(new File(filepath));
transformer.transform(source, result);
System.out.println("******* Period Id details updated **************");
} catch (ParserConfigurationException pce) {
pce.printStackTrace();
} catch (TransformerException tfe) {
tfe.printStackTrace();
} catch (IOException ioe) {
ioe.printStackTrace();
} catch (SAXException sae) {
sae.printStackTrace();
}
System.out.println("in period id after update :"+ UtilLib.getParam("BookingsBOINPeriodId"));
}
The new value from the web interface is passed in through "bookingsBOPeriodID". Immediately after this method, the new value is not reflected in the XML.

Make your code wait for some time before reading the data back from the XML.
You should have your resources and processes synchronized for such operations, so that the read only happens after the write to the file has completed.

Related

Return multiple values from stored procedure

The idea is that each subject has multiple topics, and when I call the function getTopicsForSubject() to get this data onto a website page, it returns only one of the records from the table. I'm testing this using console.log(response) in the JavaScript file to see what is being passed in from the stored procedure/API connection. I'm thinking I need to read what's being passed by the stored procedure as if it were an array, although I'm not too sure how this is done.
Stored Procedure:
USE [Capstone]
GO
/****** Object: StoredProcedure [dbo].[getTopicsForSubject] Script Date: 2/21/2021 11:30:03 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[getTopicsForSubject]
@SubjectID int
AS
BEGIN
select *
from Topic
where SubjectID = @SubjectID
return;
END
API Code
private static string ExecuteSPGetSubjectsForTopic(string queryString, string subjectID)
{
string json = "";
string connectionString = ConfigurationManager.AppSettings["dbconn"].ToString();
using (SqlConnection conn = new SqlConnection(connectionString))
{
conn.Open();
// 1. create a command object identifying the stored procedure
SqlCommand cmd = new SqlCommand(queryString, conn);
// 2. set the command object so it knows to execute a stored procedure
cmd.CommandType = CommandType.StoredProcedure;
// 3. add parameter to command, which will be passed to the stored procedure
cmd.Parameters.Add(new SqlParameter("@SubjectID", subjectID));
// execute the command
using (SqlDataReader rdr = cmd.ExecuteReader())
{
// iterate through results, printing each to console
while (rdr.Read())
{
json = (string)rdr[0].ToString() + "|" + (string)rdr[1].ToString()+ "|" + (string)rdr[2].ToString() + "|" + (string)rdr[3].ToString();
}
}
}
return json;
}
JavaScript Code
function getTopicsForSubject()
{
var postObj = {
subjectID: localStorage.getItem('myFutureCurrentSubject')
};
console.log(postObj);
var req = new XMLHttpRequest();
req.open('POST', 'https://localhost:44303/api/JSON/getTopicsForSubject', true);
req.setRequestHeader('Content-Type', 'application/json');
req.onreadystatechange = function() { // Call a function when the state changes.
if (this.readyState === XMLHttpRequest.DONE && this.status === 200) {
console.log(req.response);
}
}
req.send(JSON.stringify(postObj));
return false;
}
You're reinitializing your json variable each time you read a row. Try this:
json += (string)rdr[0].ToString() + "|" + (string)rdr[1].ToString()+ "|" + (string)rdr[2].ToString() + "|" + (string)rdr[3].ToString();
This is not the right way to return data, though. In JS you will still get it as a string and then have to parse it like this to get the actual values:
var array = req.response.split('|');
for (var i = 0; i < array.length; i++) {
console.log(array[i]);
}
I would suggest you handle this data in a proper way by returning an HTTP response from the API instead of a string. E.g. create a list, populate it while reading from the reader, and return it. Try this:
List<object[]> topics = new List<object[]>();
while (rdr.Read())
{
object[] row = new object[rdr.FieldCount];
for (int i = 0; i < rdr.FieldCount; i++)
{
row[i] = rdr[i];
}
topics.Add(row);
}
return Ok(new { Data = topics });
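Note that Ok(...) is only available inside an ApiController action; the original method is a private static string helper, so the reader loop has to move into the controller itself. Below is a minimal sketch of that, shown as a GET with a query-string parameter for brevity. The controller name, route, and attribute are assumptions; the stored procedure and the "dbconn" setting come from the question.
// Sketch only: the reader loop hosted in a Web API controller action so Ok(...) works.
using System.Collections.Generic;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Web.Http;

public class JSONController : ApiController
{
    [HttpGet]
    public IHttpActionResult GetTopicsForSubject(int subjectID)
    {
        string connectionString = ConfigurationManager.AppSettings["dbconn"].ToString();
        var topics = new List<object[]>();
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("getTopicsForSubject", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(new SqlParameter("@SubjectID", subjectID));
            conn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    object[] row = new object[rdr.FieldCount];
                    for (int i = 0; i < rdr.FieldCount; i++)
                    {
                        row[i] = rdr[i];
                    }
                    topics.Add(row);
                }
            }
        }
        // Web API serializes this to JSON, so the JavaScript side can use
        // JSON.parse(req.response).Data instead of splitting a '|'-delimited string.
        return Ok(new { Data = topics });
    }
}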

After using window.location the async function behaving as sync function

I worked on an ASP.NET MVC 4 project in which users can apply a filter on the UI & extract an Excel report. The data is stored in MS SQL Server. To improve performance, I decided to adopt async in my application to:
Reduce the time of extraction
Allow users to have parallel extraction
To do this, my approach is:
When the user clicks the extraction button on the UI, an AJAX call is made from the client to an async function on the server. This async function in turn creates a Command object & does ExecuteReaderAsync(), uses the DbDataReader to generate an Excel file with NPOI, and saves the file content to TempData. The handler to retrieve the file is returned to the client for later download using window.location. I adopted these techniques from this post: Download Excel file via AJAX MVC
After the first extraction, if users want to extract another dataset in parallel, they can click the extraction button again and the application will repeat step 1.
The result is that 2 or more data extractions can happen at the same time.
My problem is, for example, with 4 extractions currently running in parallel: if any of these extractions finishes & one file is downloaded (using window.location), then the next time the user clicks the extraction button (which repeats step 1), it is no longer async & later extractions wait for the previous extraction to finish before executing.
While debugging, if I restart the IIS server, the problem goes away for a while until one file is downloaded, so I suspected that window.location does something that blocks the threads on the server when any file is downloaded.
UPDATE 1
Class:
public class QUERYREADER
{
public DbConnection CONNECTION { get; set; }
public DbDataReader READER { get; set; }
}
Model:
public async Task<QUERYREADER> GET_DATA(CancellationToken ct)
{
//Create the query reader
QUERYREADER qr = new QUERYREADER();
//Set up the database instances
DbProviderFactory dbFactory = DbProviderFactories.GetFactory(db.Database.Connection);
//Defined the query
var query = "SELECT * FROM Table";
//Set up the sql command object
using (var cmd = dbFactory.CreateCommand())
{
//Try to open the database connection
try
{
//Check if SQL connection is set up
if (cmd.Connection == null)
{
cmd.CommandType = CommandType.Text;
cmd.Connection = db.Database.Connection;
}
//Open connection to SQL if current state is closed
if (cmd.Connection.State == ConnectionState.Closed)
{
//Change the connection string to set the packet size to max. value = 32768 to improve efficiency for I/O transmit to SQL server
cmd.Connection.ConnectionString = cmd.Connection.ConnectionString + ";Packet Size=20000";
//Open connection
await cmd.Connection.OpenAsync(ct);
}
//Save the connection
qr.CONNECTION = cmd.Connection;
} catch (Exception ex) {
//If errors throw, close the connection
cmd.Connection.Close();
};
//Retrieve the database reader of provided sql query
cmd.CommandText = query;
DbDataReader dr = await cmd.ExecuteReaderAsync(ct);
qr.READER = dr;
}
//Return the queryreader
return qr;
}
Controller:
public async Task<JsonResult> SQL_TO_EXCEL()
{
//Set up the subscription to client for "cancellation request, browser closing"
CancellationToken disToken = Response.ClientDisconnectedToken;
//Get the datareader
try
{
qr = await GET_DATA(disToken);
}
catch(Exception ex) { }
//Open the connection to SQL server
using (qr.CONNECTION)
{
using (var dr = qr.READER)
{
while (await dr.ReadAsync(disToken))
{
for (int k = 0; k < dr.FieldCount; k++)
{
//.... using NPOI to write Excel file to MemoryStream
}
}
dr.Close();
}
}
//Generate XL file if controller action is still running (no "cancellation request, browser closing")
if (!disToken.IsCancellationRequested)
{
string file_id = Guid.NewGuid().ToString();
//... Write the NPOI excel file to TempData and then create a handler for later download at client
//This line caused trouble
TempData["file_id"] = XLMemoryStream.ToArray();
HANDLER["file_id"] = file_id;
HANDLER["file_name"] = FILE["FILE_NAME"].ToString().NonUnicode() + FILE["FILE_TYPE"].ToString() ;
}
//Return JSON to caller
var JSONRESULT = Json(JsonConvert.SerializeObject(HANDLER), JsonRequestBehavior.AllowGet);
JSONRESULT.MaxJsonLength = int.MaxValue;
return JSONRESULT;
}
public async Task<ActionResult> DOWNLOAD_EXCEL(string file_id, string file_name)
{
if (TempData[file_id] != null)
{
byte[] data = await Task.Run(() => TempData[file_id] as byte[]);
return File(data, "application/vnd.ms-excel", file_name);
}
else
{
return new EmptyResult();
}
}
Javascript
$.ajax({
type: 'POST',
async: true,
cache: false,
url: 'SQL_TO_EXCEL',
success: function (data)
{
var response = JSON.parse(data);
window.location =
(
"DOWNLOAD_EXCEL" +
'?file_id=' + response.file_id +
'&file_name=' + response.file_name
);
},
error: function (XMLHttpRequest, textStatus, errorThrown) {
console.log(errorThrown);
}
});
UPDATE 2:
After a lot of tests, I figured out that window.location has nothing to do with the threads on the server; the line TempData[file_id] = XLMemoryStream.ToArray() caused the issue. It looks like the problem is similar to the one described in this post: Two parallel ajax requests to Action methods are queued, why?
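If session-state locking is indeed the cause here (TempData is backed by session state by default, and ASP.NET serializes concurrent requests that need writable session), one hedged workaround is sketched below: keep the generated file out of session entirely and relax the controller's session access. The controller name and the two helper methods are assumptions, not part of the original code.
// Sketch only: avoids TempData so parallel export requests are not queued behind
// the session lock, as described in the linked post.
using System;
using System.Runtime.Caching;
using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.ReadOnly)] // or Disabled if session is not needed at all
public class ExportController : Controller    // hypothetical controller name
{
    // In SQL_TO_EXCEL, instead of TempData[file_id] = XLMemoryStream.ToArray():
    internal static void StoreFile(string fileId, byte[] bytes)
    {
        MemoryCache.Default.Set(fileId, bytes, DateTimeOffset.UtcNow.AddMinutes(10));
    }

    // In DOWNLOAD_EXCEL, instead of reading TempData[file_id]:
    internal static byte[] TakeFile(string fileId)
    {
        return MemoryCache.Default.Remove(fileId) as byte[];
    }
}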

Retrieving file data in chunks using Web API for display in browser (WIP)

I have this working but I want to share this out to see if I missed anything obvious and to solve a mystery as to why my file chunk size has to be a multiple of 2049. The main requirements are:
Files uploaded from website must be stored in SQL server, not as files
Website must be able to download and display file data as a file (opened in a separate window).
Website is angularjs/javascript SPA, no server side code, no MVC
API is Web API 2 (again not MVC)
I'm just going to focus on the download part here. Basically what I'm doing is:
Read a chunk of data from SQL server varbinary field
Web API 2 api returns file name, mime type and byte data as a base64 string. NOTE - tried returning byte array but Web API just serializes it into base64 string anyway.
concatenate the chunks, convert the chunks to a blob and display
VB library function that returns a dataset with the chunk (I have to use this library, which handles the database connection but doesn't support parameterized queries)
Public Function GetWebApplicationAttachment(ByVal intId As Integer, ByVal intChunkNumber As Integer, ByVal intChunkSize As Integer) As DataSet
' the starting number is NOT 0 based
Dim intStart As Integer = 1
If intChunkNumber > 1 Then intStart = ((intChunkNumber - 1) * intChunkSize) + 1
Dim strQuery As String = ""
strQuery += "SELECT FileName, "
strQuery += "SUBSTRING(ByteData," & intStart.ToString & "," & intChunkSize.ToString & ") AS ByteData "
strQuery += "FROM FileAttachments WHERE Id = " + intId.ToString + " "
Try
Return Query(strQuery)
Catch ex As Exception
...
End Try
End Function
Web API business rules bit that creates the file object from the dataset
...
result.FileName = ds.Tables[0].Rows[0]["FileName"].ToString();
// NOTE: Web API converts a byte array to base 64 string so the result is the same either way
// the result of this is that the returned data will be about 30% bigger than the chunk size requested
result.StringData = Convert.ToBase64String((byte[])ds.Tables[0].Rows[0]["ByteData"]);
//result.ByteData = (byte[])ds.Tables[0].Rows[0]["ByteData"];
... some code to get the mime type
result.MIMEType = ...
Web API controller (simplified - all security and error handling removed)
public IHttpActionResult GetFileAttachment([FromUri] int id, int chunkSize, int chunkNumber) {
brs = new Files(...);
fileResult file = brs.GetFileAttachment(id, chunkNumber, chunkSize);
return Ok(file);
}
angularjs service that gets the chunks recursively and puts them together
function getFileAttachment2(id, chunkSize, chunkNumber, def, fileData, mimeType) {
var deferred = def || $q.defer();
$http.get(webServicesPath + "api/files/get-file-attachment?id=" + id + "&chunkSize=" + chunkSize + "&chunkNumber=" + chunkNumber).then(
function (response) {
// when completed string data will be empty
if (response.data.StringData === "") {
response.data.MIMEType = mimeType;
response.data.StringData = fileData;
deferred.resolve(response.data);
} else {
if (chunkNumber === 1) {
// only the first chunk computes the mime type
mimeType = response.data.MIMEType;
}
fileData += response.data.StringData;
chunkNumber += 1;
getFileAttachment2(id, chunkSize, chunkNumber, deferred, fileData, mimeType);
}
},
function (response) {
... error stuff
}
);
return deferred.promise;
}
angular controller method that makes the calls.
function viewFile(id) {
sharedInfo.getWebPortalSetting("FileChunkSize").then(function (result) {
// chunk size must be a multiple of 2049 ???
var chunkSize = 0;
if (result !== null) chunkSize = parseInt(result);
fileHelper.getFileAttachment2(id, chunkSize, 1, null, "", "").then(function (result) {
if (result.error === null) {
if (!fileHelper.viewAsFile(result.StringData, result.FileName, result.MIMEType)) {
... error
}
result = {};
} else {
... error;
}
});
});
}
And finally the bit of javascript that displays the file as a download
function viewAsFile(fileData, fileName, fileType) {
try {
fileData = window.atob(fileData);
var ab = new ArrayBuffer(fileData.length);
var ia = new Uint8Array(ab); // ia provides window into array buffer
for (var i = 0; i < fileData.length; i++) {
ia[i] = fileData.charCodeAt(i);
}
var file = new Blob([ab], { type: fileType });
fileData = "";
if (window.navigator.msSaveOrOpenBlob) // IE10+
window.navigator.msSaveOrOpenBlob(file, fileName);
else { // Others
var a = document.createElement("a"),
url = URL.createObjectURL(file);
a.href = url;
a.download = fileName;
document.body.appendChild(a);
a.click();
setTimeout(function () {
document.body.removeChild(a);
window.URL.revokeObjectURL(url);
}, 0);
}
return true;
} catch (e) {
... error stuff
}
}
I see already that a more RESTful approach would be to use headers to indicate the chunk range and to separate the file metadata from the file chunks. Also, I could try returning a data stream rather than a Base64-encoded string. If anyone has tips on that, let me know.
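For reference, a hedged sketch of the header-based idea in Web API 2 is below, using the standard Range/Content-Range headers. The controller name and the LoadAttachmentBytes helper are assumptions standing in for the existing data access.
// Sketch only: serves chunks via the standard HTTP Range header instead of
// chunkNumber/chunkSize query parameters.
using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class FilesRangeController : ApiController   // hypothetical name
{
    public HttpResponseMessage GetFileAttachment([FromUri] int id)
    {
        byte[] bytes = LoadAttachmentBytes(id);      // hypothetical helper
        var stream = new MemoryStream(bytes);
        var mediaType = new MediaTypeHeaderValue("application/octet-stream");

        // No Range header: return the whole file in one response.
        if (Request.Headers.Range == null)
        {
            var full = new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StreamContent(stream)
            };
            full.Content.Headers.ContentType = mediaType;
            return full;
        }

        // Range header present: ByteRangeStreamContent slices the stream and
        // sets the matching Content-Range header on a 206 Partial Content response.
        return new HttpResponseMessage(HttpStatusCode.PartialContent)
        {
            Content = new ByteRangeStreamContent(stream, Request.Headers.Range, mediaType)
        };
    }

    private static byte[] LoadAttachmentBytes(int id)
    {
        // Placeholder for the existing data-access call without the SUBSTRING
        // chunking; assumed, not part of the original code.
        throw new System.NotImplementedException();
    }
}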
Well that was entirely the wrong way to go about that. In case it helps here's what I ended up doing.
Dynamically create the href address of an anchor tag to return a file (security token and parameters in query string)
get byte array from database
web api call return response message (see code below)
This is much faster and more reliable, but provides less in the way of progress monitoring.
business rule method uses...
...
file.ByteData = (byte[])ds.Tables[0].Rows[0]["ByteData"];
...
web api controller
public HttpResponseMessage ViewFileAttachment([FromUri] int id, string token) {
HttpResponseMessage response = new HttpResponseMessage();
... security stuff
fileInfoClass file = ... code to get file info
response.Content = new ByteArrayContent(file.ByteData);
response.Content.Headers.ContentDisposition =
new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment") {
FileName = file.FileName
};
response.Content.Headers.ContentType = new MediaTypeHeaderValue(file.MIMEType);
return response;
This could even be improved with streaming
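A hedged sketch of that streaming variant with PushStreamContent follows. OpenAttachmentReader and the controller name are assumptions standing in for the existing data access, and the reader is assumed to be opened with CommandBehavior.SequentialAccess and already positioned on the attachment row, with the ByteData column at ordinal 0.
// Sketch only: streams the varbinary column to the client in buffered pieces
// instead of materializing the whole byte array first.
using System.Data.SqlClient;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class FileStreamingController : ApiController   // hypothetical name
{
    public HttpResponseMessage ViewFileAttachmentStreamed([FromUri] int id, string token)
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK);

        response.Content = new PushStreamContent(async (outputStream, httpContent, transportContext) =>
        {
            using (SqlDataReader reader = OpenAttachmentReader(id))
            using (outputStream)   // closing the output stream ends the response
            {
                var buffer = new byte[81920];
                long offset = 0;
                long read;
                while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                {
                    await outputStream.WriteAsync(buffer, 0, (int)read);
                    offset += read;
                }
            }
        }, new MediaTypeHeaderValue("application/octet-stream"));

        response.Content.Headers.ContentDisposition =
            new ContentDispositionHeaderValue("attachment") { FileName = "attachment.bin" }; // placeholder name
        return response;
    }

    private static SqlDataReader OpenAttachmentReader(int id)
    {
        // Placeholder for data access; assumed, not part of the original code.
        throw new System.NotImplementedException();
    }
}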

Retrieving XML as downloadable attachment from MVC Controller action

I am rewriting an existing webform to use js libraries instead of using the vendor controls and microsoft ajax tooling (basically, updating the web app to use more contemporary methodologies).
The AS-IS page (webform) uses a button click handler on the server to process the submitted data and return a document containing XML, which can then either be saved or opened (opening it opens another tab in the browser). This happens asynchronously.
The TO-BE page uses jQuery AJAX to submit the form to an MVC controller, where virtually the same code is executed as in the server-side postback case. I've verified in the browser that the same data is being returned to the caller, but, after it returns, the user is NOT prompted to save/open - the page just remains as if nothing ever happened.
I will put the code below, but I think I am just missing some key difference between the postback and ajax/controller contexts that prompts the browser to recognize the returned data as a separate attachment to be saved. My problem is that I have looked at and tried so many ad-hoc approaches that I'm not certain what I am doing wrong at this point.
AS-IS Server Side Handler
(Abridged, since the SendXml() method is what generates the response)
protected void btnXMLButton_Clicked(object sender, EventArgs e)
{
//generate server side biz objects
//formattedXml is a string of xml iteratively generated from each selected item that was posted back
var documentStream = MemStreamMgmt.StringToMemoryStream(formattedXml);
byte[] _documentXMLFile = documentStream.ToArray();
SendXml(_documentXMLFile);
}
private void SendXml(byte[] xmlDoc)
{
string _xmlDocument = System.Text.Encoding.UTF8.GetString(xmlDoc);
XDocument _xdoc = XDocument.Parse(_xmlDocument);
var _dcpXMLSchema = new XmlSchemaSet();
_dcpXMLSchema.Add("", Server.MapPath(@"~/Orders/DCP.xsd"));
bool _result = true;
try
{
_xdoc.Validate(_dcpXMLSchema, null);
}
catch (XmlSchemaValidationException)
{
//validation failed raise error
_result = false;
}
// return error message
if (!_result)
{
//stuff to display message
return;
}
// all is well .. download xml file
Response.ClearContent();
Response.Clear();
Response.ContentType = "text/plain";
Response.AddHeader("Content-disposition", "attachment; filename=" + "XMLOrdersExported_" + string.Format("{0:yyyy-MM-dd_hh-mm-ss-tt}.xml", DateTime.Now));
Response.BinaryWrite(xmlDoc);
Response.Flush();
Context.ApplicationInstance.CompleteRequest();
Response.End();
}
TO-BE (Using jquery to submit to a controller action)
Client code: button click handler:
queueModel.getXmlForSelectedOrders = function () {
//create form to submit
$('body').append('<form id="formXmlTest"></form>');
//submit handler
$('#formXmlTest').submit(function(event) {
var orderNbrs = queueModel.selectedItems().map(function (e) { return e.OrderId() });
console.log(orderNbrs);
var ordersForXml = orderNbrs;
var urlx = "http://localhost:1234/svc/OrderServices/GetXml";
$.ajax({
url: urlx,
type: 'POST',
data: { orders: ordersForXml },
dataType: "xml",
accepts: {
xml: 'application/xhtml+xml',
text: 'text/plain'
}
}).done(function (data) {
/*Updated per comments */
console.log(data);
var link = document.createElement("a");
link.target = "blank";
link.download = "someFile";//data.name
console.log(link.download);
link.href = "http://localhost:23968/svc/OrderServices/GetFile/demo.xml";//data.uri;
link.click();
});
event.preventDefault();
});
$('#formXmlTest').submit();
};
//Updated per comments
/*
[System.Web.Mvc.HttpPost]
public void GetXml([FromBody] string[] orders)
{
//same code to generate xml string
var documentStream = MemStreamMgmt.StringToMemoryStream(formattedXml);
byte[] _documentXMLFile = documentStream.ToArray();
//SendXml(_documentXMLFile);
string _xmlDocument = System.Text.Encoding.UTF8.GetString(_documentXMLFile);
XDocument _xdoc = XDocument.Parse(_xmlDocument);
var _dcpXMLSchema = new XmlSchemaSet();
_dcpXMLSchema.Add("", Server.MapPath(@"~/Orders/DCP.xsd"));
bool _result = true;
try
{
_xdoc.Validate(_dcpXMLSchema, null);
}
catch (XmlSchemaValidationException)
{
//validation failed raise error
_result = false;
}
Response.ClearContent();
Response.Clear();
Response.ContentType = "text/plain";
Response.AddHeader("Content-disposition", "attachment; filename=" + "XMLOrdersExported_" + string.Format("{0:yyyy-MM-dd_hh-mm-ss-tt}.xml", DateTime.Now));
Response.BinaryWrite(_documentXMLFile);
Response.Flush();
//Context.ApplicationInstance.CompleteRequest();
Response.End();
}
}*/
[System.Web.Mvc.HttpPost]
public FileResult GetXmlAsFile([FromBody] string[] orders)
{
var schema = Server.MapPath(@"~/Orders/DCP.xsd");
var formattedXml = OrderXmlFormatter.GenerateXmlForSelectedOrders(orders, schema);
var _result = validateXml(formattedXml.DocumentXmlFile, schema);
// return error message
if (!_result)
{
const string message = "The XML File(s) are not valid! Please check with your administrator!.";
return null;
}
var cd = new System.Net.Mime.ContentDisposition
{
FileName = "blargoWargo.xml",
Inline = false
};
System.IO.File.WriteAllBytes(Server.MapPath("~/temp/demo.xml"),formattedXml.DocumentXmlFile);
return File(formattedXml.DocumentXmlFile,MediaTypeNames.Text.Plain,"blarg.xml");
}
[System.Web.Mvc.HttpGet]
public FileResult GetFile(string fileName)
{
var cd = new System.Net.Mime.ContentDisposition
{
// for example foo.bak
FileName = fileName,
Inline = false
};
Response.AppendHeader("Content-Disposition", cd.ToString());
var fName = !string.IsNullOrEmpty(fileName)?fileName:"demo.xml";
var fArray = System.IO.File.ReadAllBytes(Server.MapPath("~/temp/" + fName));
System.IO.File.Delete(Server.MapPath("~/temp/" + fName));
return File(fArray, MediaTypeNames.Application.Octet);
}
UPDATE:
I just put the AS-IS/TO-BE side by side, and in the dev tools verified the ONLY difference (at least as far as dev tools shows) is that the ACCEPT: header for TO-BE is:
application/xhtml+xml, */*; q=0.01
Whereas the header for AS-IS is
text/html, application/xhtml+xml, image/jxr, */*
Update II
I've found a workaround using a 2-step process with a hyperlink. It is a mutt of a solution, but as I suspected, apparently when making an ajax call (at least a jQuery ajax call, as opposed to a straight XmlHttpRequest) it is impossible to trigger the open/save dialog. So, in the POST step, I create and save the desired file, then in the GET step (using a dynamically-created link) I send the file to the client and delete it from the server. I'm leaving this unanswered for now in the hopes someone who understands the difference deeply can explain why you can't retrieve the file in the course of a normal ajax call.

Create application based on Website

I have searched and tried a lot to develop an application which uses the content of a website. I just saw the StackExchange app, which looks like how I want to develop my application. The difference between the web version and the app is shown here:
Browser:
App:
As you can see, there are some differences between the Browser and the App.
I hope somebody knows how to create an app like that, because after hours of searching I have only found the solution of using a simple WebView (which is just a 1:1 copy of the browser) or using JavaScript in the app to remove some content (which is actually a bit buggy...).
To repeat: the point is, I want to get the content of a website (on start of the app) and put it inside my application.
Cheers.
What you want to do is scrape the websites in question by getting their HTML code and sorting it using some form of logic - I recommend XPath for this. Then you can present this data in a nice native interface.
You need, however, to be very aware that the data you get is not always formatted the way you want, so all of your algorithms have to be very flexible.
The process can be cut into steps like this:
retrieve data from the website (DefaultHttpClient and AsyncTask)
analyse and retrieve the relevant data (your relevant algorithm)
show the data to the user (your interface implementation)
UPDATE
Below is some example code that fetches some data from a website. It uses the HtmlCleaner library, which you will need to include in your project.
class GetStationsClass extends AsyncTask<String, String, String> {
@Override
protected String doInBackground(String... params) {
HttpClient httpclient = new DefaultHttpClient();
httpclient.getParams().setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1);
httpclient.getParams().setParameter(CoreProtocolPNames.HTTP_ELEMENT_CHARSET, "iso-8859-1");
HttpPost httppost = new HttpPost("http://ntlive.dk/rt/route?id=786");
httppost.setHeader("Accept-Charset", "iso-8859-1, unicode-1-1;q=0.8");
try {
// Add your data
List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(3);
httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs, "utf-8"));
// Execute HTTP Post Request
HttpResponse response = httpclient.execute(httppost);
int status = response.getStatusLine().getStatusCode();
String data = "";
if (status != HttpStatus.SC_OK) {
ByteArrayOutputStream ostream = new ByteArrayOutputStream();
response.getEntity().writeTo(ostream);
data = ostream.toString();
} else {
BufferedReader reader = new BufferedReader(new InputStreamReader(response.getEntity().getContent(),
"iso-8859-1"));
String line = null;
while ((line = reader.readLine()) != null) {
data += line;
}
XPath xpath = XPathFactory.newInstance().newXPath();
try {
Document document = readDocument(data);
NodeList nodes = (NodeList) xpath.evaluate("//*[@id=\"container\"]/ul/li", document,
XPathConstants.NODESET);
for (int i = 0; i < nodes.getLength(); i++) {
Node thisNode = nodes.item(i);
Log.v("",thisNode.getTextContent().trim);
}
} catch (XPathExpressionException e) {
e.printStackTrace();
}
}
} catch (Exception e) {
e.printStackTrace();
}
return null;
}
@Override
protected void onPostExecute(String result) {
super.onPostExecute(result);
//update user interface here
}
}
private Document readDocument(String content) {
Long timeStart = new Date().getTime();
TagNode tagNode = new HtmlCleaner().clean(content);
Document doc = null;
try {
doc = new DomSerializer(new CleanerProperties()).createDOM(tagNode);
return doc;
} catch (ParserConfigurationException e) {
e.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
}
return doc;
}
To run the code above, use:
new GetStationsClass().execute();
