I'm trying to display this data with JSONView, but it won't display when the data is built inside the API callback function; non-API data placed outside the callback displays fine.
// Call FreeGeoIP API to get browser IP address
$.getJSON('https://freegeoip.net/json/', function(data) {
    var ipaddress = data.ip;
    // Get browser language
    var language = window.navigator.language;
    // Get software
    var software = window.navigator.appVersion;
    var regExp = /\(([^)]+)\)/;
    software = regExp.exec(software)[1];
    // Add data to obj
    var obj = {
        'ipaddress': ipaddress,
        'language': language,
        'software': software
    };
    // Write obj to document
    $('body').html(JSON.stringify(obj));
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
The JSONView extension (or any other extension) in Chrome needs permission to access file URLs if it is reading files from your local system.
To allow this:
Visit chrome://extensions/
Click the Details button on the extension's card
Switch on "Allow access to file URLs"
JSONView (or any other JSON formatter) detects whether you are viewing JSON based on the contentType of the loaded document (as set in the HTTP header).
Since you are running this code on the client side (in the browser), the contentType is set to text/html.
For the plugin to format the JSON correctly, it must know that what you're looking at is indeed JSON, and it determines that by reading the contentType header.
That is why the JSON fetched by this script shows up as plain text in the body but is not picked up by the plugin.
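If you do want JSONView to format it, the JSON has to arrive as its own document with an application/json content type. A minimal sketch, assuming a Node/Express endpoint you control (the route name and port are illustrative):
// Minimal sketch, assuming a Node/Express server you control.
const express = require('express');
const app = express();

app.get('/client-info', (req, res) => {
    const obj = {
        ipaddress: req.ip,                      // IP as seen by the server
        language: req.get('Accept-Language'),
        software: req.get('User-Agent')
    };
    res.type('application/json');               // the Content-Type header JSONView checks
    res.send(JSON.stringify(obj));              // res.json(obj) would work as well
});

app.listen(3000);
Opening that URL directly in the browser would then be picked up by the formatter, because the header tells it the document is JSON.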
I'm trying to get a direct download URL for a file using Google's Picker API so that I can choose a file and pass this URL to server side code to download and store a copy of the item on the server.
I'm able to authorize through the picker API and get info of a picked file including the file name and preview URL (which is confusingly referred to as simply "A URL to this item" in the JSON response docs: https://developers.google.com/picker/docs/results)
I noticed that there is a post about using the Drive API to get a direct download URL here: Get google drive file download URL
However, when I do this in my picker callback function (based on the docs here: https://developers.google.com/picker/docs/), I get the following error:
"Project [number here] is not found and cannot be used for API calls. If it is recently created, enable Drive API by visiting https://console.developers.google.com/apis/api/drive.googleapis.com/overview?project=[project number here] then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry."
I have the API enabled in my developer console and the URL added to the JS allowed origins.
The documentation is very confusing, and there seem to be three versions of the REST API to use with Drive, which are based on a gapi.auth2 object, whereas the Picker API uses the gapi.auth object.
I'm not sure if I need to authenticate again using the Google Drive API before performing the GET request. This all seems very messy and I believe there must be an easier approach for what is a simple request!
My picker callback function:
pickerCallback: function(data) {
    if (data[google.picker.Response.ACTION] == google.picker.Action.PICKED) {
        var doc = data[google.picker.Response.DOCUMENTS][0];
        var fileName = doc[google.picker.Document.NAME];
        var url = doc[google.picker.Document.URL];
        var docId = doc[google.picker.Document.ID];
        var request = null;
        gapi.client.load('drive', 'v2', function() {
            request = gapi.client.drive.files.get({
                'fileId': docId
            });
            request.execute(function(resp) {
                console.log(resp);
            });
        });
        // Write upload details to page
        // Populate hidden field
    }
}
Developer console screen (screenshot): the first app is the Picker API, the second is for the Drive API.
You may want to try the simple callback implementation shown in this documentation. Notice that url was initialized before the if statement:
function pickerCallback(data) {
    var url = 'nothing';
    if (data[google.picker.Response.ACTION] == google.picker.Action.PICKED) {
        var doc = data[google.picker.Response.DOCUMENTS][0];
        url = doc[google.picker.Document.URL];
    }
    var message = 'You picked: ' + url;
    document.getElementById('result').innerHTML = message;
}
Also, when authorizing, set the AppId value and choose the user account with the app's current OAuth 2.0 token. Note that the AppId set and the client ID used for authorizing access to a user's files must belong to the same app. Then, after successfully obtaining the fileId, you can send a request using files.get. By default, this responds with a Files resource in the response body, which includes downloadUrl.
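A rough sketch of that files.get call, assuming gapi.client is already loaded and the user is authorized (field names per the Drive API v2 Files resource):
// Sketch only: assumes gapi.client is available and an OAuth token has been obtained.
gapi.client.load('drive', 'v2', function() {
    var request = gapi.client.drive.files.get({ 'fileId': docId });
    request.execute(function(resp) {
        // In the v2 Files resource, downloadUrl points at the file content
        // (fetching it still requires an authorized request).
        console.log(resp.title, resp.downloadUrl);
    });
});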
For additional insights, see this related SO post.
I am new to jQuery and JavaScript. I am trying to read the content of a .doc file and display it in a textarea.
var url = "D:\\way2Jobs\\way2jobz\\WebContent\\pages\\Resumes\\";
var firstName = $("#first").val();
var extn = ".doc";
jQuery.ajax({
    url: url + firstName + extn,
    dataType: "doc",
    success: function(data) {
        alert(firstName);
        document.getElementById("candResume").innerHTML = data;
    }
});
You just can't (thank god) make an AJAX request to your local filesystem. It's a security restriction and you can't bypass it.
1 - You have to at least use a web server such as Apache.
2 - You will never successfully make a request to D:\.
3 - You CAN request a DOC file and process it using JavaScript, but it's not that easy, because a DOC file isn't plain text and JavaScript was not made for it. It is probably easier to do it with a server-side language such as PHP or Java, or even Node.js if you are familiar with JavaScript.
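For illustration, a sketch of what the request could look like once the Resumes folder is exposed through the web server (the /Resumes/ path is an assumption); a .doc file is binary, so you would still need server-side conversion to show readable text:
// Sketch: assumes Apache (or similar) serves the Resumes folder at /Resumes/.
var firstName = $("#first").val();

$.ajax({
    url: "/Resumes/" + firstName + ".doc",
    dataType: "text",                     // "doc" is not a valid jQuery dataType
    success: function(data) {
        // This shows raw bytes, not formatted content - convert to text/HTML server-side instead.
        $("#candResume").text(data);
    },
    error: function(xhr, status) {
        console.log("Request failed: " + status);
    }
});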
I am trying to find a way to auto-save a file in Firefox using JS. The way I have done it so far, using FireShot on a Windows desktop:
var element = content.document.createElement("FireShotDataElement");
element.setAttribute("Entire", EntirePage);
element.setAttribute("Action", Action);
element.setAttribute("Key", Key);
element.setAttribute("BASE64Content", "");
element.setAttribute("Data", Data);
element.setAttribute("Document", content.document);
if (typeof(CapturedFrameId) != "undefined")
    element.setAttribute("CapturedFrameId", CapturedFrameId);
content.document.documentElement.appendChild(element);
var evt = content.document.createEvent("Events");
evt.initEvent("capturePageEvt", true, false);
element.dispatchEvent(evt);
But the issue is that it opens a dialog box to confirm the local drive location details. Is there a way I can hard code the local drive storage location and auto save the file?
If you are creating a Firefox add-on then FileUtils and NetUtil.asyncCopy are your friends:
Components.utils.import("resource://gre/modules/FileUtils.jsm");
Components.utils.import("resource://gre/modules/NetUtil.jsm");

var TEST_DATA = "this is a test string";

var source = Components.classes["@mozilla.org/io/string-input-stream;1"].
             createInstance(Components.interfaces.nsIStringInputStream);
source.setData(TEST_DATA, TEST_DATA.length);

var file = new FileUtils.File("c:\\foo\\bar.txt");
var sink = FileUtils.openSafeFileOutputStream(file, FileUtils.MODE_WRONLY |
                                                    FileUtils.MODE_CREATE);

NetUtil.asyncCopy(source, sink);
This will asynchronously write the string this is a test string into the file c:\foo\bar.txt. Note that NetUtil.asyncCopy closes both streams automatically; you don't need to do it yourself. However, you might want to pass a function as the third parameter to this method - it will be called when the write operation has finished.
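For example, building on the snippet above (the callback receives an nsresult status code):
NetUtil.asyncCopy(source, sink, function(status) {
    if (!Components.isSuccessCode(status)) {
        Components.utils.reportError("Writing to bar.txt failed: " + status);
        return;
    }
    // The data has been safely written to c:\foo\bar.txt at this point.
});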
See also: Code snippets, writing to a file
Every computer has a different file structure. But still, there is a way: you can save the data to a cookie or the session, depending on how "permanent" you need it to be.
Do not consider writing a physical file, as it requires extra permissions.
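A small sketch of both options (the data variable and the key name are illustrative); keep in mind that cookies are limited to roughly 4 KB, so they only suit small payloads:
// 'capturedData' and 'data' are illustrative names.
sessionStorage.setItem("capturedData", JSON.stringify(data));          // lives for the tab's session

document.cookie = "capturedData=" + encodeURIComponent(JSON.stringify(data)) +
                  "; max-age=" + 60 * 60 * 24 * 7;                     // persists for about a week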
I have some JavaScript code that transforms XML with XSLT. I now want the user to be able to save that new XML, either by prompting them or by serving the new XML as a file download, so that the user can then save it. Does anybody know how to do that?
var xmldoc = new ActiveXObject("Microsoft.XMLDOM");
xmldoc.async = false;
xmldoc.loadXML(responseText); // responseText is xml returned from ajax call
//apply the xslt
var xsldoc = new ActiveXObject("Microsoft.XMLDOM");
xsldoc.async = false;
xsldoc.load("../xslt/ExtraWorkRequest.xslt");
var content = xmldoc.transformNode(xsldoc);
How do I get the user to save the XML (content) as a file?
By default, you can't. The browser isn't supposed to access your local disks, for security reasons.
But if you can ask your user to change their security settings (and you shouldn't ask), you can use FileSystemObject or even the Microsoft.XMLDOM save method.
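For completeness, a sketch of the MSXML save route; it is IE-only and only works with relaxed security settings (e.g. a trusted zone or an HTA), which is exactly why it isn't recommended:
// IE-only sketch; the output path is illustrative and requires lowered security settings.
var outDoc = new ActiveXObject("Microsoft.XMLDOM");
outDoc.async = false;
outDoc.loadXML(content);                           // 'content' is the transformNode result above
outDoc.save("C:\\temp\\ExtraWorkRequest.xml");     // DOMDocument.save writes to disk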
You cannot do it with 100% client-side JavaScript with the default security settings. You'll need to implement some server-side logic. In your case, you'll be able to do the XML transformation server-side, as well.
http://www.bigresource.com/Tracker/Track-javascripts-ijfTJlI9/
http://support.microsoft.com/kb/q260519/
You could construct a data: URI with a media type of application/octet-stream.
function download(data, charset) {
    if (!charset) {
        charset = document.characterSet;
    }
    location.href = ["data:application/octet-stream;charset=",
                     charset, ",", encodeURIComponent(data)].join("");
}
All browsers except IE support data: URIs. I think IE8 may support them, but only for images. For IE, a workaround could be to send the data to a server (including document.characterSet) and then load a page that has headers like the following:
Content-Type: application/xml; charset={document.characterSet}
Content-Disposition: attachment
If you want to give the file a name too, use Content-Disposition: attachment; filename=....
Also, for any of this to work, you have to convert your XML to a string first.
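For example, a sketch of feeding the transformed result into download(); in IE, transformNode() already returns a string, while other browsers would need XMLSerializer:
// Sketch: serialize the transformed output to a string before handing it to download().
var xmlString = (typeof content === "string")
    ? content
    : new XMLSerializer().serializeToString(content);

download(xmlString, "UTF-8");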
I do this with code snippets on my blog (the user can click the save button, and the snippet will come up in their default text editor, where they can tweak it and/or copy it into their app).
It works by putting all the textual data inside a hidden field and then submitting it to a very simple server-side HTTP handler. The handler just grabs the hidden field value and spits it right back out in the response with the right Content-Disposition header, giving the user the open/save download prompt.
This is the only way I could get it to work.
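Roughly, the client side looks like this (the /download handler URL and field name are illustrative; the handler simply echoes the posted value back with a Content-Disposition: attachment header):
// Sketch: build a form with a hidden field and submit it to the echo handler.
function sendForDownload(text) {
    var form = $('<form>', { method: 'POST', action: '/download' });
    $('<input>', { type: 'hidden', name: 'payload', value: text }).appendTo(form);
    form.appendTo('body').submit();
}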
I want a robust way to upload a file. That means that I want to be able to handle interruptions, error and pauses.
So my question is: is something like the following possible using JavaScript only, on the client?
If so, I would like pointers to libraries, tutorials, books or implementations.
If not, I would like an explanation of why it's not possible.
Scenario:
Open a large file
Split it into parts
For each part I would like to
Create checksum and append to data
Post data to server (the server would check if data uploaded correctly)
Check a web page on server to see if upload is ok
If yes, upload the next part; if not, retry
Assume all posts to the server are accompanied by relevant metadata (session ID and so on).
No. You can, through a certain amount of hackery, begin a file upload with AJAX, in which case you'll be able to tell when it's finished uploading. That's it.
JavaScript does not have any direct access to files on the visitor's computer for security reasons. The most you'll be able to see from within your script is the filename.
Firefox 3.5 adds support for DOM progress event monitoring of XMLHttpRequest transfers, which allows you to keep track of at least upload status as well as completion and cancellation of uploads.
It's also possible to simulate progress tracking with iframes in clients that don't support these newer XMLHttpRequest additions.
For an example of a script that does just this, take a look at NoSWFUpload. I've been using it successfully for a few months now.
It's possible in Firefox 3 to open a local file as chosen by a file upload field and read it into a JavaScript variable using the field's files array. That would allow you to do your own chunking, hashing and sending by AJAX.
There is some talk of getting something like this standardised by W3, but for the immediate future no other browser supports this.
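For reference, a rough sketch of the chunking idea using the File API that later became standard (file.slice), which goes beyond what Firefox 3 actually shipped; the /upload-chunk endpoint and its retry behaviour are assumptions about the server side, and the checksum step is omitted:
function uploadInChunks(file, chunkSize) {
    var offset = 0;

    function sendNext() {
        if (offset >= file.size) {
            return;                                      // all parts uploaded
        }
        var chunk = file.slice(offset, offset + chunkSize);
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "/upload-chunk?offset=" + offset, true);
        xhr.onload = function() {
            if (xhr.status === 200) {
                offset += chunkSize;                     // server confirmed this part
            }
            sendNext();                                  // next part, or retry the same one
        };
        xhr.send(chunk);
    }

    sendNext();
}

// Usage: wire it up to a file input the user has chosen a file with.
document.getElementById("fileInput").onchange = function() {
    uploadInChunks(this.files[0], 1024 * 1024);          // 1 MB parts
};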
Yes. Please look at the following code -
function Upload() {
    var self = this;
    this.btnUpload;
    this.frmUpload;
    this.inputFile;
    this.divUploadArea;

    this.upload = function(event, target) {
        event.stopPropagation();
        if (!$('.upload-button').length) {
            return false;
        }
        if (!$('.form').length) {
            return false;
        }
        self.btnUpload = target;
        self.frmUpload = $(self.btnUpload).parents('form:first');
        self.inputFile = $(self.btnUpload).prev('.upload-input');
        self.divUploadArea = $(self.btnUpload).next('.uploaded-area');
        var target = $(self.frmUpload).attr('target');
        var action = $(self.frmUpload).attr('action');
        $(self.frmUpload).attr('target', 'upload_target'); //change the form's target to the iframe's id
        $(self.frmUpload).attr('action', '/trnUpload/upload'); //change the form's action to the upload iframe function page
        $(self.frmUpload).parent("div").prepend(self.iframe);

        $('#upload_target').load(function(event) {
            if (!$("#upload_target").contents().find('.upload-success:first').length) {
                $('#upload_target').remove();
                return false;
            } else if ($("#upload_target").contents().find('.upload-success:first') == 'false') {
                $('#upload_target').remove();
                return false;
            }
            var fid = $("#upload_target").contents().find('.fid:first').html();
            var filename = $("#upload_target").contents().find('.filename:first').html();
            var filetype = $("#upload_target").contents().find('.filetype:first').html();
            var filesize = $("#upload_target").contents().find('.filesize:first').html();
            $(self.frmUpload).attr('target', target); //restore the form's original target
            $(self.frmUpload).attr('action', action); //restore the form's original action
            $('#upload_target').remove();
            self.insertUploadLink(fid, filename, filetype, filesize);
        });
    };

    // iframe the form posts into; its id/name must match the 'upload_target' used above
    this.iframe = '<iframe id="upload_target" name="upload_target">' +
                  'false' +
                  '</iframe>';

    this.insertUploadLink = function(fid, filename, filetype, filesize) {
        $('#upload-value').attr('value', fid);
    };
}
$(document).ready(function(event) {
    var myupload = new Upload();
    myupload.upload(event, event.target);
});
By also using PHP's APC to query how much of the file has been uploaded, you can show a progress bar with a periodical updater (I would use jQuery, which the above class also requires). You can use PHP to output both the periodic progress results and the result of the upload in the iframe that is temporarily created.
This is hackish. You will need to spend a lot of time to get it to work, and you will need admin access to whatever server you want to run it on so you can install APC. You will also need to set up the HTML form to correspond to the js Upload class. A reference on how to do this can be found here: http://www.ultramegatech.com/blog/2008/12/creating-upload-progress-bar-php/
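A sketch of the periodical updater half (the /upload-progress endpoint and the {current, total} JSON shape are assumptions about what the PHP/APC side returns):
function pollProgress(uploadKey) {
    var timer = setInterval(function() {
        $.getJSON('/upload-progress', { key: uploadKey }, function(info) {
            var percent = Math.round(100 * info.current / info.total);
            $('#progress-bar').css('width', percent + '%');
            if (percent >= 100) {
                clearInterval(timer);                    // upload finished
            }
        });
    }, 1000);                                            // poll once a second during the upload
}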