Displaying .dcm files with XTK's X.volume() - javascript

Following lesson 15, I pass a .dcm file to the volume. I get no errors (e.g. none of the parsing errors I faced before), but nothing is displayed. What could be wrong with the .dcm files?
Here is an excerpt of the code I use:
function demo(file) {
  // XTK accepts an array of file paths for a DICOM series;
  // here the series contains just the single file passed in.
  var _dicom = [file];
  var r = new X.renderer3D();
  r.init();
  var v = new X.volume();
  v.file = _dicom.map(function (d) {
    return d;
  });
  r.add(v);
  r.render();
  r.onShowtime = function () {
    v.volumeRendering = true;
  };
}
I pass the full file path here. I'm not sure that's the correct wording, but I'd like to know what DICOM settings or parameters could cause such behaviour: no errors and no displayed data. Thanks in advance.

Did you try dragging the .dcm file into http://slicedrop.com to see if you get the same effect? DICOM support currently only exists for uncompressed DICOM files, and we didn't get Ultrasound to work yet. So it should work on uncompressed MR or CT data in DICOM format.
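For reference, here is a minimal sketch (the file names and folder are hypothetical) of loading a whole series the way the map() call in the question suggests; with a single slice there is very little volume to render, which can look like nothing is displayed:

function demoSeries() {
  // Hypothetical slice names; XTK accepts an array of per-slice paths in v.file.
  var slices = ['IM-0001-0001.dcm', 'IM-0001-0002.dcm', 'IM-0001-0003.dcm'];
  var r = new X.renderer3D();
  r.init();
  var v = new X.volume();
  v.file = slices.map(function (name) {
    return 'data/series1/' + name; // prefix each slice with its folder
  });
  r.add(v);
  r.render();
  r.onShowtime = function () {
    v.volumeRendering = true;
  };
}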

Related

javascript - unzip tar.gz archive in google script

I am trying to unzip the file https://wwwinfo.mfcr.cz/ares/ares_vreo_all.tar.gz into a Google Drive folder.
I have downloaded the file using Google Apps Script, but cannot properly unzip it. Could you please help me with it?
Thank you in advance!
Update:
Here is my code
function updateDB() {
  var url = 'https://wwwinfo.mfcr.cz/ares/ares_vreo_all.tar.gz';
  var blob = UrlFetchApp.fetch(url).getBlob();
  var folder = DriveApp.getFolderById('fileid');
  var archive = folder.createFile(blob);
  unzip(archive);
}

function unzip(archive) {
  var zipblob = archive.getBlob();
  var uncompressed1 = Utilities.ungzip(zipblob);
}
So I receive the following error:
Exception: Could not decompress gzip.
I guess it does not decompress normally, which is why I am asking if you know of a different way.
You can use the Utilities class to unzip your file.
To decompress the gzip, you have to call the ungzip() method:
var textBlob = Utilities.newBlob("Some text to compress using gzip compression");
// Create the compressed blob.
var gzipBlob = Utilities.gzip(textBlob, "text.gz");
// Uncompress the data.
var uncompressedBlob = Utilities.ungzip(gzipBlob);
That's the example provided in the official documentation.
You can also take a look at the answer given to this question, which also explains how to use the Utilities class in combination with Google Drive.
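A note on the error itself: this is an assumption, but Utilities.ungzip() rejects blobs whose content type is not application/x-gzip, and UrlFetchApp may return the archive as a generic octet-stream. A minimal sketch of that workaround, using the asker's URL:

function fetchAndUngzip() {
  var url = 'https://wwwinfo.mfcr.cz/ares/ares_vreo_all.tar.gz';
  var blob = UrlFetchApp.fetch(url).getBlob();
  // Assumption: ungzip() checks the content type, so set it explicitly.
  blob.setContentType('application/x-gzip');
  var tarBlob = Utilities.ungzip(blob);
  // Note: Apps Script has no built-in untar, so the resulting .tar blob
  // still needs separate handling.
  DriveApp.createFile(tarBlob);
}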

store and access image file paths when templating (from cloudinary or other service)

I’m using gulp and nunjucks to automate some basic email templating tasks.
I have a chain of tasks which can be triggered when an image is added to the images folder e.g.:
images compressed
new image name and dimensions logged to json file
json image data then used to populate template when template task is run
So far so good.
I want to be able to define a generic image file path for each template which will then concatenate to each image name (as stored in the json file). So something like:
<img src="{{data.path}}{{data.src}}" >
If I want to nominate a distinct folder to contain the images for each template generated, then Cloudinary requires a mandatory unique version component in the file path. So the image path can never be consistent throughout a template.
if your public ID includes folders (elements divided by '/'), the version component is mandatory (but you can make it shorter).
For example:
http://res.cloudinary.com/demo/image/upload/v1312461204/sample_email/hero_image.jpg
http://res.cloudinary.com/demo/image/upload/v1312461207/sample_email/footer_image.jpg
Same folder. Different path.
So it seems I would now need to create a script/task that logs and stores each distinct file path (with its unique ID generated by Cloudinary) for every image, any time an image is uploaded or updated, and then reruns the templating process to publish them.
This just seems like quite a convoluted process, so if there's an easier approach I'd love to know.
Otherwise, if that really is the required route, it would be great if someone could point me to an example of the kind of script that achieves something similar.
Presumably some hosting services don't have the mandatory unique key, which would make life easier. I have spent some time getting to know Cloudinary, and it's a free service with a lot of scope, so I'm reluctant to abandon ship, but open to all suggestions.
Thanks
Note that the version component (e.g., v1312461204) isn't mandatory anymore for most use-cases. The URL can indeed work without it, e.g.:
http://res.cloudinary.com/demo/image/upload/sample_email/hero_image.jpg
Having said that, it is highly recommended to include the version component in the URL in cases where you'd like to replace the image with a new one while keeping the exact same public ID. In that case, if you access the exact same URL, you might get a CDN-cached version of the image, which may be the old one.
Therefore, when you upload, you can take the version value from Cloudinary's upload response and store it in your DB; the next time you update your image, also update the URL with the new version value.
Alternatively, you can also ask Cloudinary to invalidate the image while uploading. Note that while including the version component "busts" the cache immediately, invalidation may take a while to propagate through the CDN. For more information:
http://cloudinary.com/documentation/image_transformations#image_versions
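As an illustration (a sketch only; saveVersionToDb is a hypothetical storage hook, and the file and folder names are made up), reading the version from the Node SDK's upload response looks roughly like this:

var cloudinary = require('cloudinary').v2;

cloudinary.uploader.upload('images/hero_image.jpg',
  { folder: 'sample_email', use_filename: true, unique_filename: false },
  function (err, image) {
    if (err) { return console.warn(err); }
    // image.version (e.g. 1312461204) identifies this exact upload;
    // image.url / image.secure_url already embed it.
    saveVersionToDb(image.public_id, image.version); // hypothetical DB hook
  });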
This is the solution I came up with. It's based on adapting the generic script I use to upload images from a folder to Cloudinary; it now stores the updated file paths from Cloudinary and generates a JSON data file to publish the hosted src details to a template.
I'm sure it could be semantically much better, so I welcome any revisions if someone stumbles on this, but it seems to do the job:
// Points to the config file where we define file paths
var path = require('./gulp.path')();

// IMAGE HOSTING
var fs = require('fs'); // used below to write the JSON output
var cloudinary = require('cloudinary').v2;
var uploads = {};
var dotenv = require('dotenv');
dotenv.load();

// Finds the images in a specific folder and returns an array
var read = require('fs-readdir-recursive');

// Set location of images
var imagesInFolder = read(path.images);

// The array that will be populated with image src data
var imgData = new Array();

(function uploadImages() {
  // Loop through all images in the folder and upload each one
  for (var i = 0; i < imagesInFolder.length; i++) {
    cloudinary.uploader.upload(path.images + imagesInFolder[i], {
      folder: path.hosted_folder,
      use_filename: true,
      unique_filename: false,
      tags: 'basic_sample'
    }, function (err, image) {
      if (err) { console.warn(err); }
      console.log("** Public Id");
      console.log("* Same image, uploaded with a custom public_id");
      console.log("* " + image.public_id);
      // Generate the category title for each image. The category is defined
      // within the image name: it's anything prior to the first hyphen.
      var title = image.public_id.substr(image.public_id.lastIndexOf('/') + 1).replace(/\.[^/.]+$/, "").replace(/-.*$/, "");
      console.log("* " + title);
      console.log("* " + image.url);
      // Add the updated src for each image to the output array
      imgData.push({
        [title]: { "src": image.url }
      });
      // Stringify with no spacing so the .replace regex can easily remove
      // the unwanted curly braces
      var imgDataJson = JSON.stringify(imgData, null, null);
      // Remove the unwanted [] that wraps the JSON imgData array
      imgDataJson = imgDataJson.substring(1, imgDataJson.length - 1);
      // Replace the unwanted "},{" with "," so the output is valid JSON
      imgDataJson = imgDataJson.replace(/(},{)/g, ',');
      var outputFilename = "images2-hosted.json";
      // Output the hosted image path data to a JSON file
      // (a separate gulp task then merges the new 'src' data into an
      // existing image data JSON file)
      fs.writeFile(path.image_data_src + outputFilename, imgDataJson, function (err) {
        if (err) {
          console.log(err);
        } else {
          console.log("JSON saved to " + outputFilename);
        }
      });
    });
  }
})();
A gulp task is then used to merge the newly generated JSON, overriding the existing JSON data file:
// COMPILE live image hosting data
var merge = require('gulp-merge-json');

gulp.task('imageData:comp', function () {
  // Return the stream so gulp knows when the task has finished
  return gulp
    .src('src/data/images/*.json')
    .pipe(merge('src/data/images.json'))
    .pipe(gulp.dest('./'))
    .pipe(notify({ message: 'imageData:comp task complete' }));
});

How to create file(.apk) from URL in Jaggery?

I have an application store, and the applications have their own URLs. I want to download APKs from those URLs to my Jaggery server. Although the code below (my first solution) creates myApp.apk successfully, it does not work properly.
First I tried the code below:
var url = "http://img.xxx.com/006/someApp.apk";
var data = get(url, {});
var file = new File("myApp.apk");
file.open("w");
file.write(data.data);
file.close();
When I print the data.data value, it looks like raw binary data.
I also tried:
var file = new File("http://img.xxx.com/006/someApp.apk");
file.saveAs("myApp.txt");
Can anyone help me?
.apk files are Android application files, and they are expected to start with PK, because they are actually zip archives!
They're not meant to be unzipped, although you can do so to see some of the application resources (but there are better ways of reverse engineering .apk files, such as Apktool, if that's what you're looking for).
According to the Jaggery documentation, file.write writes the String representation of the object to the file. That's why you are getting an APK file which cannot be installed.
However, you can make it work using copyURLToFile from the Apache commons-io Java library, as follows, since Jaggery supports Java itself and all WSO2 products have the commons-io library on their classpath.
<%
var JFileUtils = Packages.org.apache.commons.io.FileUtils;
var JUrl = Packages.java.net.URL;
var JFile = Packages.java.io.File;
var url = new JUrl("http://img.xxx.com/006/someApp.apk");
// Streams the URL's bytes directly to the file, avoiding the String round-trip
JFileUtils.copyURLToFile(url, new JFile("myApp.apk"));
print("done");
%>
Your file will be stored in the $CARBON_HOME directory by default, unless you specify a relative or absolute path for it.
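For instance (the target path here is hypothetical), to control where the file lands you can pass an absolute path:

// Hypothetical absolute target path instead of the default $CARBON_HOME
JFileUtils.copyURLToFile(url, new JFile("/opt/apps/downloads/myApp.apk"));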

NetUtil.asyncCopy from one file to append to another in Firefox extension

I'm trying to use NetUtil.asyncCopy to append data from one file to the end of another file from a Firefox extension. I have based this code upon a number of examples at https://developer.mozilla.org/en-US/docs/Code_snippets/File_I_O, particularly the 'Copy a stream to a file' example. Given what it says on that page, my code below:
1. Creates nsIFile objects for the file to copy from and the file to append to, and initialises these objects with the correct paths.
2. Creates an output stream to the output file.
3. Runs the NetUtil.asyncCopy function to copy between the file (which, I believe, behaves as an nsIInputStream) and the output stream.
I run this code as append_text_from_file("~/CopyFrom.txt", "~/AppendTo.txt");, but nothing gets copied across. The Appending Text and After ostream dumps appear on the console, but not the Done or Error dumps.
Does anyone have any idea what I'm doing wrong here? I'm fairly new to both Firefox extensions and JavaScript (although I am a fairly experienced programmer), so I may be doing something really silly. If my entire approach is wrong then please let me know; I would have thought this approach would let me append to a file easily and asynchronously, but it may not be possible for some reason I don't know about.
function append_text_from_file(from_filename, to_filename) {
  var from_file = Components.classes["@mozilla.org/file/local;1"]
                            .createInstance(Components.interfaces.nsILocalFile);
  from_file.initWithPath(from_filename);
  var to_file = Components.classes["@mozilla.org/file/local;1"]
                          .createInstance(Components.interfaces.nsILocalFile);
  to_file.initWithPath(to_filename);
  dump("Appending text\n");
  var ostream = FileUtils.openFileOutputStream(to_file,
      FileUtils.MODE_WRONLY | FileUtils.MODE_APPEND);
  dump("After ostream\n");
  NetUtil.asyncCopy(from_file, ostream, function(aResult) {
    dump("Done\n");
    if (!Components.isSuccessCode(aResult)) {
      // an error occurred!
      dump(aResult);
      dump("Error!\n");
    }
  });
}
asyncCopy() requires an input stream, not a file.
You can do this:
var fstream = Cc["@mozilla.org/network/file-input-stream;1"].createInstance(Ci.nsIFileInputStream);
fstream.init(from_file, 0x01, 4, null);
NetUtil.asyncCopy(fstream, ostream, function(aResult)....
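Put together with the question's function, a sketch (keeping the same flags as above) might look like this:

// Open a read-only input stream over from_file, then hand it to asyncCopy
var fstream = Cc["@mozilla.org/network/file-input-stream;1"]
                .createInstance(Ci.nsIFileInputStream);
fstream.init(from_file, 0x01, 4, null); // 0x01 = PR_RDONLY
NetUtil.asyncCopy(fstream, ostream, function (aResult) {
  if (Components.isSuccessCode(aResult)) {
    dump("Done\n"); // asyncCopy closes both streams by default
  } else {
    dump("Error: " + aResult + "\n");
  }
});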

Loaded data truncated when using nsIFileInputStream & nsIConverterInputStream

I'm working on a project (BrowserIO; go to browserio.googlecode.com if you want to check out the code and work on it, help welcome!) in which I'm using Firefox's nsIFileInputStream in tandem with nsIConverterInputStream, per their example (https://developer.mozilla.org/en/Code_snippets/File_I%2F%2FO#Simple), but only a portion of the full data is being loaded. The code is:
var file = Components.classes["@mozilla.org/file/local;1"].createInstance(Components.interfaces.nsILocalFile);
file.initWithPath(path);
var data = "";
var fstream = Components.classes["@mozilla.org/network/file-input-stream;1"].createInstance(Components.interfaces.nsIFileInputStream);
var cstream = Components.classes["@mozilla.org/intl/converter-input-stream;1"].createInstance(Components.interfaces.nsIConverterInputStream);
fstream.init(file, -1, 0, 0);
cstream.init(fstream, "UTF-8", 0, 0); // you can use another encoding here if you wish
var str = {};
cstream.readString(-1, str); // read the whole file and put it in str.value
data = str.value;
cstream.close(); // this closes fstream
If you want to see this behaviour, check out the code from the BrowserIO project page and use Firebug to set a breakpoint at the data = str.value; line in file_io.js. Then select a text file from the list and click the "Open" button. In Firebug's watch panel, set a watch for str.value. Look at the file: it should be truncated, unless it's really short.
For reference, the code above is the main body of the openFile() function in trunk/scripts/file_io.js.
Anybody have any clue what's happening with this?
See nsIConverterInputStream; basically, -1 doesn't mean "give me everything" but rather "give me the default amount", which the docs claim is 8192.
More generally, if you want to exhaust the contents of an input stream, you have to loop until it's empty. Nothing in any of the stream contracts guarantees that the amount of data returned by a call is the entirety of the contents of the stream; it could even return less than it has immediately available if it wanted.
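As a sketch of that loop, using the same cstream as in the question (this is the pattern the MDN snippets use; readString() returns 0 once the stream is exhausted):

var data = "";
var str = {};
var read = 0;
do {
  // 0xffffffff asks for as much as is available in this call
  read = cstream.readString(0xffffffff, str);
  data += str.value;
} while (read != 0);
cstream.close();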
I discovered how to do the file read without converting, to avoid issues from not knowing the file's encoding. The answer is to use nsIScriptableInputStream with nsIFileInputStream:
var sstream = Components.classes["@mozilla.org/scriptableinputstream;1"].createInstance(Components.interfaces.nsIScriptableInputStream);
fstream.init(file, 0x01, 0004, 0);
sstream.init(fstream);
data = sstream.read(sstream.available());
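The caveat from the accepted answer applies here too: a single available() call isn't guaranteed to cover the whole stream, so a draining loop is safer. A rough sketch:

// Drain the scriptable stream in chunks instead of trusting one
// available() call to report the entire file
var data = "";
var chunk = sstream.read(4096);
while (chunk.length > 0) {
  data += chunk;
  chunk = sstream.read(4096);
}
sstream.close();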
