Server Side Logging Of Client Side Javascript Crashes

I have a large, complex web app with thousands of lines of Javascript. There is a small set of intermittent Javascript bugs that are reported by users.
I think these are epiphenomena of race conditions - something has not initialised correctly and the Javascript crashes, causing 'downstream' js not to run.
Is there any way to get Javascript execution crashes to log back server side?
All the js logging libraries like Blackbird and Log4JavaScript are client-side only.

I have written a remote error logging function using window.onerror, as suggested by pimvdb:
Err = {};
Err.Remoterr = {};
Err.Remoterr.onerror = function (msg, errorfileurl, lineno) {
    var jsonstring, response, pageurl, cookies;
    // Get some user input
    response = prompt("There has been an error. " +
        "It has been logged and will be investigated.",
        "Put in comments (and e-mail or phone number for" +
        " response.)");
    // Get some context of where and how the error occurred
    // to make debugging easier
    pageurl = window.location.href;
    cookies = document.cookie;
    // Make the JSON message we are going to post
    // Could use JSON.stringify() here if you are sure that
    // JSON will have run when the error occurs
    // http://www.JSON.org/js.html
    jsonstring = "{\"set\": {\"jserr\": " +
        "{\"msg\": \"" + msg + "\", " +
        "\"errorfileurl\": \"" + errorfileurl + "\", " +
        "\"pageurl\": \"" + pageurl + "\", " +
        "\"cookies\": \"" + cookies + "\", " +
        "\"lineno\": \"" + lineno + "\", " +
        "\"response\": \"" + response + "\"}}}";
    // Use the jQuery cross-browser post
    // http://api.jquery.com/jQuery.post/
    // This assumes that no errors happen before jQuery has initialised
    $.post("?jserr", jsonstring, null, "json");
    // I don't want the page to 'pretend' to work,
    // so I am going to return 'false' here.
    // Returning 'true' would suppress the error in the browser.
    return false;
};
window.onerror = Err.Remoterr.onerror;
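If you can rely on JSON.stringify being available when the handler fires (it is native in all modern browsers; the comment above only matters if you depend on json2.js), building the payload with it avoids malformed JSON when msg, cookies, or the user's response contain quotes. A sketch of the same payload built that way:
jsonstring = JSON.stringify({
    set: {
        jserr: {
            msg: msg,
            errorfileurl: errorfileurl,
            pageurl: pageurl,
            cookies: cookies,
            lineno: lineno,
            response: response
        }
    }
});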
I deploy this handler between the head and body tags of the webpage.
You will want to change the JSON and the URL that you post it to depending on how you are going to log the data server side.
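On the server, anything that accepts a POST and writes to a log will do. As a minimal sketch, assuming a Node.js/Express backend and a /jserr endpoint that receives the report as a JSON request body (the path, port, and body handling are placeholders, not part of the original setup):
var express = require('express');
var app = express();
app.use(express.json()); // parse JSON request bodies

app.post('/jserr', function (req, res) {
    // Append the client-side error report to the server log
    console.error('Client JS error:', JSON.stringify(req.body));
    res.sendStatus(204);
});

app.listen(3000);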

Take a look at https://log4sure.com (disclosure: I created it) - check it out and decide for yourself. It lets you log errors/events and also create your own custom log table. It also allows you to monitor your logs in real time. And the best part: it's free.
You can also use bower to install it: bower install log4sure
The set up code is really easy too:
// setup
var _logServer;
(function() {
    var ls = document.createElement('script');
    ls.type = 'text/javascript';
    ls.async = true;
    ls.src = 'https://log4sure.com/ScriptsExt/log4sure.min.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(ls, s);
    ls.onload = function() {
        // use your token here.
        _logServer = new LogServer("use-your-token-here");
    };
})();
// example for logging text
_logServer.logText("your log message goes here.");
// example for logging an error
var divide = function(numerator, divisor) {
    try {
        if (isNaN(parseFloat(numerator)) || isNaN(parseFloat(divisor))) {
            throw new TypeError("Invalid input", "myfile.js", 12, {
                numerator: numerator,
                divisor: divisor
            });
        } else if (divisor == 0) {
            throw new RangeError("Divide by 0", "myfile.js", 15, {
                numerator: numerator,
                divisor: divisor
            });
        }
        return numerator / divisor;
    } catch (e) {
        _logServer.logError(e.name, e.message, e.stack);
    }
};
// another use of logError in window.onerror
// be careful with window.onerror: you might be overwriting someone else's
// window.onerror functionality, and someone else can overwrite yours
window.onerror = function(msg, url, line, column, err) {
    // you may want to check that url belongs to your javascript file
    var data = {
        url: url,
        line: line,
        column: column
    };
    // older browsers do not pass the error object as the fifth argument
    if (err) {
        _logServer.logError(err.name, err.message, err.stack, data);
    } else {
        _logServer.logError("Error", msg, "", data);
    }
};
// example for custom logs
var foo = "some variable value";
var bar = "another variable value";
var flag = "false";
var temp = "yet another variable value";
_logServer.log(foo, bar, flag, temp);
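Note that _logServer is assigned asynchronously in ls.onload above, so calls like these will throw if they run before the script has finished loading. A minimal guard (safeLog is an illustrative name, not part of the library):
function safeLog(message) {
    if (_logServer) {
        _logServer.logText(message);
    } else {
        // not loaded yet; fall back to the console (or queue the message)
        console.log(message);
    }
}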

Related

PhantomJS 2.0.0 - Select: Invalid argument error

I wrote a PhantomJS program (source below) to send some requests and calculate the time taken, in order to compare the performance of 2.0.0 and 1.9.8. I get links using the sitemap.xml files of the sites I hardcode in the "links" array.
The script starts with some URLs in the "links" array. The function gatherLinks() gathers more URLs from the sitemap.xml of the URLs already in "links". Once the "links" array has enough URLs (decided by the variable "limit"), the function request() is called for each URL in "links" to send a request to the server and fetch the response. The time taken for each response is reported, and the total time taken by the program is reported when it ends.
When run using PhantomJS 2.0.0, after some 65 requests the program (the page.open() call in the request() function) starts outputting the following:
select: Invalid argument
select: Invalid argument
select: Invalid argument
select: Invalid argument
select: Invalid argument
.
.
.
.
When run using PhantomJS 1.9.8, it crashes after about 200 requests with the following error.
"PhantomJS has crashed. Please read the crash reporting guide at https://github.com/ariya/phantomjs/wiki/Crash-Reporting and file a bug report at https://github.com/ariya/phantomjs/issues/new with the crash dump file attached: /tmp/2A011800-3367-4B4A-A945-3B532B4D9B0F.dmp"
I tried to send the crash report but their guide is not very useful for me.
It's not the URLs that I use; I have tried other URLs with the same results.
Is there something wrong with my program? I am using OS X.
var system = require('system');
var fs = require('fs');
var links = [];
links = [
    "http://somesite.com",
    "http://someothersite.com",
    // ...
];
var index = 0, fail = 0, limit = 300;
var finalTime = Date.now();
var gatherLinks = function(link){
    var page = require('webpage').create();
    link = link + "/sitemap.xml";
    console.log("Fetching links from " + link);
    page.open(link, function(status){
        if(status != "success"){
            console.log("Sitemap Request FAILED, status: " + status);
            fail++;
            return;
        }
        var content = page.content;
        var parser = new DOMParser();
        var xmlDoc = parser.parseFromString(content, 'text/xml');
        var loc = xmlDoc.getElementsByTagName('loc');
        for(var i = 0; i < loc.length; i++){
            if(links.length < limit){
                links[links.length] = loc[i].textContent;
            } else{
                console.log(links.length + " Links prepared. Starting requests.\n");
                index = 0;
                request();
                return;
            }
        }
        if(index >= links.length){
            index = 0;
            console.log(links.length + " Links prepared\n\n");
            request();
        }
        gatherLinks(links[index++]);
    });
};
var request = function(){
    var t = Date.now();
    var page = require('webpage').create();
    page.open(links[index], function(status) {
        console.log('Loading link #' + (index + 1) + ': ' + links[index]);
        console.log("Time taken: " + (Date.now() - t) + " msecs");
        if(status != "success"){
            console.log("Request FAILED, status: " + status);
            fail++;
        }
        if(index >= links.length-1){
            console.log("\n\nAll links done, final time taken: " + (Date.now() - finalTime) + " msecs");
            console.log("Requests sent: " + links.length + ", Failures: " + fail);
            console.log("Success ratio: " + ((links.length - fail)/links.length)*100 + "%");
            phantom.exit();
        }
        index++;
        request();
    });
};
gatherLinks(links[0]);
After playing around with the program, I couldn't find any particular pattern to the problems I mention above. For 2.0.0, I succeeded only once in sending 300 requests without an error. I have tried many different combinations of URLs; the program usually fails between requests 50 and 80. I maintain a log of the URLs that failed, and all of them run fine when I send a single request using another PhantomJS program. For 1.9.8, it's much more stable and the crash I mention above is not very frequent. But again, I couldn't find any pattern to the crashing; it still crashes once in a while.
There are lots of problems with your code. The main one is probably that you're creating a new page for every single request and never closing it afterwards. I think you're running out of memory.
I don't see a reason to create a new page for every request, so you can easily reuse a single page for all requests. Simply move the line var page = require('webpage').create(); out of gatherLinks() and request() into the global scope. If you don't want to do that, you can call page.close() after you're done with each page (see the sketch below), but keep the asynchronous nature of PhantomJS in mind.
If the reason to use multiple page objects was to prevent cache reuse for later requests, then I have to tell you that this doesn't solve that problem. page objects in a single PhantomJS process can be regarded as tabs or windows: they share cookies and cache. If you want to isolate every request, then you will need to run every request in its own process, for example through the use of the Child Process Module.
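A minimal sketch of the second option, closing each page inside its own callback (this mirrors the structure of your request() function; the response handling is elided):
var request = function(){
    var page = require('webpage').create();
    page.open(links[index], function(status) {
        // ... handle the response here ...
        page.close(); // release the page's memory before moving on
        index++;
        request();
    });
};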
There is another problem with your code. You probably wanted to write the following in gatherLinks():
if(index >= links.length){
    index = 0;
    console.log(links.length + " Links prepared\n\n");
    request();
    return; // ##### THIS #####
}
gatherLinks(links[index++]);

InDesign ExtendScript script sometimes creates a corrupted PDF during export

I've had this problem for a while now. Close to the end of my "Proofing" script, the currently opened document in InDesign is exported to two different .pdf files. The first is password-protected while the second is not. I don't seem to have any problems with the latter, but the former often becomes corrupted somehow and cannot be opened by any PDF reader, including Acrobat itself. Here's the code block that does the exporting (it is not runnable by itself, btw):
/********** BEGIN PDF EXPORTING **********/
// First, let's create and set PDF export preferences.
// This begins with creating a temporary preset if it doesn't already exist.
// This preset will be used for both the Proof page and the Cover sheet.
var tempPreset = app.pdfExportPresets.item("tempPreset");
try
{
    tempPreset.name;
}
catch (eNoSuchPreset)
{
    tempPreset = app.pdfExportPresets.add({name: "tempPreset"});
}
with (tempPreset)
{
    acrobatCompatibility = AcrobatCompatibility.ACROBAT_5;
    bleedMarks = false;
    colorBars = false;
    colorBitmapCompression = BitmapCompression.AUTO_COMPRESSION;
    colorBitmapQuality = CompressionQuality.MAXIMUM;
    colorBitmapSampling = Sampling.BICUBIC_DOWNSAMPLE;
    colorBitmapSamplingDPI = 300;
    compressTextAndLineArt = true;
    cropImagesToFrames = true;
    cropMarks = false;
    exportGuidesAndGrids = false;
    exportNonprintingObjects = false;
    exportReaderSpreads = false;
    exportWhichLayers = ExportLayerOptions.EXPORT_VISIBLE_PRINTABLE_LAYERS;
    generateThumbnails = false;
    grayscaleBitmapCompression = BitmapCompression.AUTO_COMPRESSION;
    grayscaleBitmapQuality = CompressionQuality.MAXIMUM;
    grayscaleBitmapSampling = Sampling.BICUBIC_DOWNSAMPLE;
    grayscaleBitmapSamplingDPI = 300;
    includeBookmarks = false;
    includeHyperlinks = false;
    includeSlugArea = false;
    includeStructure = true;
    monochromeBitmapCompression = MonoBitmapCompression.CCIT4;
    monochromeBitmapSampling = Sampling.BICUBIC_DOWNSAMPLE;
    monochromeBitmapSamplingDPI = 1200;
    omitBitmaps = false;
    omitEPS = false;
    omitPDF = false;
    optimizePDF = true;
    pageInformationMarks = false;
    pageMarksOffset = 0.0833;
    pdfMarkType = MarkTypes.DEFAULT_VALUE;
    printerMarkWeight = PDFMarkWeight.P25PT;
    registrationMarks = false;
    standardsCompliance = PDFXStandards.NONE;
    subsetFontsBelow = 100;
    thresholdToCompressColor = 450;
    thresholdToCompressGray = 450;
    thresholdToCompressMonochrome = 1800;
    useDocumentBleedWithPDF = false;
}
currentProcess.text = "PDF export preferences"; progressWin.show();
progressIndividual.value++; if (aProducts.length > 1) {progressOverall.value++;}
// Now let's actually set the export preferences. These are for the proof page.
with (app.pdfExportPreferences)
{
    pageRange = proofRange;
    useSecurity = true;
    disallowChanging = true;
    disallowCopying = false;
    disallowDocumentAssembly = true;
    disallowExtractionForAccessibility = false;
    disallowFormFillIn = true;
    disallowHiResPrinting = true;
    disallowNotes = true;
    disallowPlaintextMetadata = true;
    disallowPrinting = false;
    changeSecurityPassword = "sky";
    if (multiColor)
    {
        pageRange = colorTable.toString();
    }
    if (currentProduct.pLabel != "")
    {
        pageRange += "," + labelPage.name;
    }
}
currentProcess.text = "Exporting PDF proof page"; progressWin.show();
progressIndividual.value++; if (aProducts.length > 1) {progressOverall.value++;}
// Before exporting the Proof page(s), hide the color bar on multicolor products.
if (multiColor) {document.layers.item("COLOR BAR").visible = false;}
// Then we save the proof page.
document.exportFile(ExportFormat.PDF_TYPE, File(jobFolder.toString() + "/" + saveName + ".pdf"), false, tempPreset);
When that produced corrupted PDFs once in a while, I thought that perhaps our less-than-ideal network structure was causing the problem, so I instead tried exporting the PDF file to the local hard drive rather than directly to the network, and then moving the file to the network afterward. So the last line in the above code block was replaced with:
// First, to the local HDD.
document.exportFile(ExportFormat.PDF_TYPE, File("~/Documents/" + saveName + ".pdf"), false, tempPreset);
$.sleep(1000);
File("~/Documents/" + saveName + ".pdf").copy(File(jobFolder.toString() + "/" + saveName + ".pdf"));
$.sleep(1000);
File("~/Documents/" + saveName + ".pdf").remove();
I even added in those 1-second delays, just in case. Sadly, this hasn't helped. I am still getting a corrupted PDF every now and then. If there is any pattern to the corrupted files, I haven't been able to discern it. Does anyone have any thoughts?
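One cheap sanity check before copying (a sketch, not part of the original script) is to wait until the exported file actually exists on disk and has a nonzero size, rather than sleeping for a fixed second:
var exported = File("~/Documents/" + saveName + ".pdf");
var tries = 0;
// Wait up to ~10 seconds for the export to land on disk
while ((!exported.exists || exported.length == 0) && tries < 20)
{
    $.sleep(500);
    tries++;
}
if (exported.exists && exported.length > 0)
{
    exported.copy(File(jobFolder.toString() + "/" + saveName + ".pdf"));
}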
It finally hit me that, if the corrupted files cannot be opened in Acrobat, why not just test for that after the file is created? So I created a loop that exports the PDF file and tries to open it in Acrobat. If it opens fine, Acrobat prints and closes the file and returns a "true" message. If it is unable to do so, it returns a "false" message to the script, and the loop repeats for as long as that message is "false". While not a great fix for the underlying cause (whatever it may be), it is at least a workaround that will do just fine for our needs. The trick is that, because we work with Macs, we have to route the message through an AppleScript instead of using BridgeTalk to communicate directly with Acrobat.
Here's the code snippet from the main InDesign script which goes through the PDF-checking loop:
// Then we save the proof page.
// The loop is to make sure that the file was saved properly.
var validFile = false; // Flag that states whether or not the file is corrupted after saving.
var rString; // String returned from Acrobat that should be either "true" or "false".
var testAndPrintFile = File("~/Documents/testAndPrint.applescript"); // The AppleScript file that calls Acrobat and runs a folder-level script.
var pdfFile; // A String of the filename & path that will be passed through the AppleScript file to Acrobat.
var pdfArray = new Array(4); // An array to send to Acrobat. [0] is the PDF filename as a String,
// [1] is duplex if true, [2] is the printer name, and [3] is to enable printing.
if (multiTwoSided || twoPages) pdfArray[1] = "true";
else pdfArray[1] = "false";
pdfArray[2] = localPrinter;
pdfArray[3] = "true";
while (!validFile)
{
    $.writeln("If this message is seen more than once, then the Proof PDF was corrupted.");
    try
    {
        document.exportFile(ExportFormat.PDF_TYPE, File(jobFolder.toString() + "/" + saveName + ".pdf"), false, tempPreset);
    }
    catch (e)
    {
        alert("Could not save the Proof PDF. Please close any open copies of the Proof PDF, then save and print it manually.");
    }
    pdfFile = jobFolder.toString() + "/" + saveName + ".pdf";
    pdfArray[0] = pdfFile;
    $.writeln("pdfArray contains: " + pdfArray);
    try
    {
        rString = app.doScript(testAndPrintFile, ScriptLanguage.APPLESCRIPT_LANGUAGE, pdfArray);
        validFile = rString == "true";
        // validFile = true;
        $.writeln("validFile is " + validFile);
        if (!validFile)
        {
            alert("It seems that the file " + unescape(pdfArray[0]) + " is corrupted. Will try to export it again.");
        }
    }
    catch (e)
    {
        $.writeln("ERROR at line number " + e.line);
        $.writeln(e.description);
        throw new Error("ERROR at line number " + e.line + "\n" + e.description);
    }
}
The testAndPrint.applescript file that this loop calls:
set pdfFile to item 1 of arguments
set duplexed to item 2 of arguments
set printerName to item 3 of arguments
set printEnabled to item 4 of arguments
tell application "Adobe Acrobat Pro"
    set result to do script ("testAndPrint(\"" & pdfFile & "\", \"" & duplexed & "\", \"" & printerName & "\", \"" & printEnabled & "\");")
end tell
return result
And, finally, the folder-level Javascript file that is loaded into memory when Acrobat starts, ready to have its function called by the above AppleScript file:
var testAndPrint = app.trustedFunction(function (fName, duplexed, sPrinterName, bEnablePrinting)
{
    var success = true;
    app.beginPriv();
    console.println("fName is " + unescape(fName));
    console.println("sPrinterName is " + sPrinterName);
    try
    {
        var printDoc = app.openDoc(unescape(fName));
        var pp = printDoc.getPrintParams();
        if (duplexed == "true") pp.DuplexType = pp.constants.duplexTypes.DuplexFlipLongEdge;
        else pp.DuplexType = pp.constants.duplexTypes.Simplex;
        pp.printerName = sPrinterName;
        pp.interactive = pp.constants.interactionLevel.silent;
        pp.pageHandling = pp.constants.handling.none;
        if (bEnablePrinting == "true") printDoc.print({bUI: false, bSilent: true, bShrinkToFit: false, printParams: pp});
        printDoc.closeDoc(true);
    }
    catch (e)
    {
        console.println("ERROR at line number " + e.lineNumber);
        console.println(e.message);
        success = false;
    }
    app.endPriv();
    console.println("success is " + success);
    return success;
});
I hope that, perhaps, this information might be useful to anyone else running into a similar problem. It's not pretty, of course, but it certainly gets the job done.

Node.js net library: getting complete data from 'data' event

I've searched around, and either I can't find the exact question I'm trying to ask, or I need someone to explain it to me like I'm 5.
Basically, I have a Node.js script using the Net library. I'm connecting to multiple hosts, sending commands, and listening for return data.
var net = require('net');
var nodes = [
    'HOST1,192.168.179.8',
    'HOST2,192.168.179.9',
    'HOST3,192.168.179.10',
    'HOST4,192.168.179.11'
];
function connectToServer(tid, ip) {
    var conn = net.createConnection(23, ip);
    conn.on('connect', function() {
        conn.write(login_string); // login string hidden in pretend variable
    });
    conn.on('data', function(data) {
        var read = data.toString();
        if (read.match(/Login Successful/)) {
            console.log("Connected to " + ip);
            conn.write(command_string);
        }
        else if (read.match(/Command OK/)) { // command_string returned successful,
            // read until /\r\nEND\r\n/
            // First part of data comes in here
            console.log("Got a response from " + ip + ':' + read);
        }
        else {
            // rest of data comes in here
            console.log("Autonomous message from " + ip + ':' + read);
        }
    });
    conn.on('end', function() {
        console.log("Lost connection to " + ip + "!!");
    });
    conn.on('error', function(err) {
        console.log("Connection error: " + err + " for ip " + ip);
    });
}
nodes.forEach(function(node) {
    var nodeinfo = node.split(",");
    connectToServer(nodeinfo[0], nodeinfo[1]);
});
The data ends up being split into two chunks. Even if I store the data in a hash and append the first part to the remainder when I read the /\r\nEND\r\n/ delimiter, there's a chunk missing out of the middle. How do I properly buffer the data in order to make sure I get the complete message from the stream?
EDIT: Ok, this seems to be working better:
function connectToServer(tid, ip) {
    var conn = net.createConnection(23, ip);
    var completeData = '';
    conn.on('connect', function() {
        conn.write(login_string);
    });
    conn.on('data', function(data) {
        var read = data.toString();
        if (read.match(/Login Successful/)) {
            console.log("Connected to " + ip);
            conn.write(command_string);
        }
        else {
            completeData += read;
        }
        if (completeData.match(/Command OK/)) {
            if (completeData.match(/\r\nEND\r\n/)) {
                console.log("Response: " + completeData);
            }
        }
    });
    conn.on('end', function() {
        console.log("Connection closed to " + ip);
    });
    conn.on('error', function(err) {
        console.log("Connection error: " + err + " for ip " + ip);
    });
}
My biggest problem was apparently a logic error. I was either waiting for the chunk that began the reply or the chunk that ended it; I wasn't saving everything in between.
I guess if I wanted to get all Node-ish about it, I should fire an event whenever a complete message comes in (beginning with a blank line and ending with 'END' on a line by itself) and do the processing there.
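A minimal sketch of that event-based approach, assuming the same \r\nEND\r\n delimiter (the emitter and the 'message' event name are illustrative, not part of the original code):
var net = require('net');
var EventEmitter = require('events').EventEmitter;

function connectToServer(tid, ip) {
    var conn = net.createConnection(23, ip);
    var messages = new EventEmitter();
    var buffer = '';
    conn.on('data', function(data) {
        buffer += data.toString();
        var boundary = buffer.indexOf('\r\nEND\r\n');
        while (boundary !== -1) {
            // Emit each complete message and keep whatever follows it
            messages.emit('message', buffer.slice(0, boundary));
            buffer = buffer.slice(boundary + '\r\nEND\r\n'.length);
            boundary = buffer.indexOf('\r\nEND\r\n');
        }
    });
    messages.on('message', function(msg) {
        console.log('Complete message from ' + ip + ':\n' + msg);
    });
}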
You shouldn't do anything with the data you receive until you receive the end event. The end callback means that all data chunks have been sent through the stream to your callbacks. If data comes in more than one chunk, you need to create a variable within your function closure to store it. Most programs work just fine ignoring this fact, because data usually arrives in one chunk, but sometimes it doesn't, and it doesn't even necessarily depend on the amount of data. If you're in a situation where this is happening, I created an example that demos how to handle it. I basically used your code but removed all the fluff; this just demos the logic you need to collect all the data and do work on it.
function connectToServer(tid, ip) {
    var conn = net.createConnection(23, ip);
    var completeData = '';
    conn.on('connect', function() {
        conn.write(login_string); // login string hidden in pretend variable
    });
    conn.on('data', function(data) {
        completeData += data;
        var dataArray = completeData.split('your delimiter');
        if (dataArray.length > 1) { // If our data was split into several pieces, we have a complete chunk saved in the 0th position in the array
            doWorkOnTheFirstHalfOfData(dataArray[0]);
            completeData = dataArray[1]; // The second portion of data may yet be incomplete; this may need more complete logic if you can get more than one delimiter at a time...
        }
    });
    conn.on('end', function() {
        // do stuff with the "completeData" variable in here.
    });
}
My problem was a logic problem. I was either looking for the chunk that began the message or the chunk that ended it, and ignoring everything in between. I guess I expected the entirety of the reply to come in one or two chunks.
Here's the working code, pasted from above. There's probably a more Node-ish way of doing it (I should really emit an event for each chunk of information), but I'll mark this as the answer unless someone posts a better version by this time tomorrow.
function connectToServer(tid, ip) {
    var conn = net.createConnection(23, ip);
    var completeData = '';
    conn.on('connect', function() {
        conn.write(login_string);
    });
    conn.on('data', function(data) {
        var read = data.toString();
        if (read.match(/Login Successful/)) {
            console.log("Connected to " + ip);
            conn.write(command_string);
        }
        else {
            completeData += read;
        }
        if (completeData.match(/Command OK/)) {
            if (completeData.match(/\r\nEND\r\n/)) {
                console.log("Response: " + completeData);
            }
        }
    });
    conn.on('end', function() {
        console.log("Connection closed to " + ip);
    });
    conn.on('error', function(err) {
        console.log("Connection error: " + err + " for ip " + ip);
    });
}

Sending all Javascript console output into a DOM element

How does one send all console output into a DOM element so it can be viewed without having to open any developer tools? I'd like to see all output, such as JS errors, console.log() output, etc.
I found the accepted answer below helpful, but it does have a couple of issues, as indicated in the comments:
1) It doesn't work in Chrome, because "former" does not take into account that the this context is no longer the console. The fix is to use the JavaScript apply method.
2) It does not account for multiple arguments being passed to console.log.
I also wanted this to work without jQuery.
var baseLogFunction = console.log;
console.log = function() {
    baseLogFunction.apply(console, arguments);
    var args = Array.prototype.slice.call(arguments);
    for (var i = 0; i < args.length; i++) {
        var node = createLogNode(args[i]);
        document.querySelector("#mylog").appendChild(node);
    }
}

function createLogNode(message) {
    var node = document.createElement("div");
    var textNode = document.createTextNode(message);
    node.appendChild(textNode);
    return node;
}

window.onerror = function(message, url, linenumber) {
    console.log("JavaScript error: " + message + " on line " +
        linenumber + " for " + url);
}
Here is an updated working example with those changes.
http://jsfiddle.net/eca7gcLz/
This is one approach for a quick solution:
Javascript
var former = console.log;
console.log = function(msg) {
    former(msg); // maintains existing logging via the console.
    $("#mylog").append("<div>" + msg + "</div>");
}

window.onerror = function(message, url, linenumber) {
    console.log("JavaScript error: " + message + " on line " +
        linenumber + " for " + url);
}
HTML
<div id="mylog"></div>
Working Example http://jsfiddle.net/pUaYn/2/
Simple console.log redefinition, without error handling:
const originalConsoleLog = console.log
console.log = (...args) => {
    args.map(arg => document.querySelector("#mylog").innerHTML += arg + '<br>')
}
// ...and to restore the original behaviour later:
console.log = originalConsoleLog
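Since the question asks for all console output, the same pattern generalises over several console methods. A short sketch (the #mylog element id matches the examples above):
['log', 'warn', 'error', 'info'].forEach(function (method) {
    var original = console[method];
    console[method] = function () {
        original.apply(console, arguments); // keep normal console behaviour
        var line = document.createElement('div');
        line.textContent = method.toUpperCase() + ': ' +
            Array.prototype.slice.call(arguments).join(' ');
        document.querySelector('#mylog').appendChild(line);
    };
});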

Parse JSON received with WebSocket results in error

I have written a very simple test application that creates a websocket and parses the data it receives (server sends valid JSON data).
The problem is that the first JSON object is parsed successfully, but all the subsequent objects are parsed with errors.
Here's all the code:
$("#connect").click(function ()
{
socket = new WebSocket("my server address");
socket.onopen = function ()
{
$("#log").append("Connection opened.<br/>");
socket.send(/* login information goes here */));
};
socket.onerror = function (e)
{
$("#log").append("Error: " + e.data + "<br/>");
};
socket.onclose = function ()
{
$("#log").append("Connection closed.<br/>");
};
socket.onmessage = function (e)
{
$("#log").append(index.toString() + ": " + e.data + "<br/><br/>");
console.log("Parsing " + index);
index++;
var obj = JSON.parse(e.data);
console.log("Parsed:");
console.log(obj);
};
});
What I'm getting is this: the first time socket.onmessage is called, the JSON is parsed and the JS console displays an object. When the second message arrives, it is output to my "log", but JSON.parse fails with the error "Uncaught SyntaxError: Unexpected token ILLEGAL".
What is puzzling me is that the received string is a valid JSON object - I have tested it with several JSON validators. I have even copy-pasted it from my "log", put it in a separate file, and parsed it with $.getJSON - and it worked fine, no errors.
Browser: Chrome 13.0.782.112
Any ideas would be helpful.
Thank you.
The ES5 spec http://ecma262-5.com/ELS5_HTML.htm#Section_15.12.1 defines JSON whitespace as tab, CR, LF, or space only. The Crockford skip-space uses the following code:
white = function () {
    // Skip whitespace.
    while (ch && ch <= ' ') {
        next();
    }
},
So if you have any spurious null characters or form-feeds etc in your response then the ES5 JSON parse will throw an error while the Crockford version will not.
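One way to confirm this diagnosis (a sketch, assuming the offending characters sit outside the JSON string values) is to strip ASCII control characters from the frame before parsing. Note the character class deliberately excludes tab, LF, and CR, which are legal JSON whitespace:
socket.onmessage = function (e)
{
    // Remove control characters such as NUL or form-feed before
    // handing the frame to the strict ES5 parser.
    var cleaned = e.data.replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/g, '');
    var obj = JSON.parse(cleaned);
    console.log(obj);
};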
You should do:
$.parseJSON( json );
see this for more info:
http://api.jquery.com/jQuery.parseJSON/

Categories

Resources