GraphicsMagick For Node - Get Multi-Page TIF Page Frame - javascript

I have a script where I can pass in a single-page TIF and it converts to a JPEG properly. However, I am having trouble finding out whether "GraphicsMagick For Node" (https://github.com/aheckmann/gm) has the ability to extract a single page (using the "pageNumber" parameter) out of a multi-page TIFF and convert it to a JPEG, i.e. navigate to a frame and extract just that one page. I cannot find this anywhere else on this site. Thanks!
function GetImageAndConvert(input, output, pageNumber, idx, callback)
{
    gm(input).setFormat('jpeg').noProfile().toBuffer(function (err, buffer)
    {
        if (!err)
        {
            console.log('Converted to jpeg');
        }
        else
        {
            // record error here
        }
        callback(buffer, idx);
    });
}

Answering my own question. Completely missed it in the documentation:
gm("img.png").selectFrame(0)
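For anyone else looking, here is an untested sketch of how selectFrame() could slot into the conversion function from the question. It assumes pageNumber is the zero-based index of the page to extract; the unused output parameter is kept only to match the original signature.
var gm = require('gm');

function GetImageAndConvert(input, output, pageNumber, idx, callback)
{
    gm(input)
        .selectFrame(pageNumber)   // select one page of the multi-page TIFF
        .setFormat('jpeg')
        .noProfile()
        .toBuffer(function (err, buffer)
        {
            if (err)
            {
                // record error here; buffer will be undefined
            }
            callback(buffer, idx);
        });
}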

Related

protractor-html-reporter screenshot is corrupted

I'm using Protractor to automate testing an AngularJS application. I use protractor-html-reporter to generate reports.
In a hooks.js file I have placed the following code in order to generate a screenshot whenever a test scenario fails. The image is then attached to the report.
this.After(function (scenario, callback) {
    if (scenario.isFailed()) {
        browser.takeScreenshot().then(function (base64png) {
            var decodedImage = new Buffer(base64png, 'base64').toString('binary');
            scenario.attach(decodedImage, 'image/png');
            callback();
        }, function (err) {
            return callback(err);
        });
    } else {
        callback();
    }
});
The JSON, HTML and screenshot files are generated. The reports are readable and can be viewed in a browser or text editor. However, the screenshot ("png") file is always "damaged".
In the report, the image displays as reserved space (an X symbol inside a black square). When trying to open the screenshot in a graphics editor, it reports that the image is damaged.
(Screenshots of the HTML report and of the image error were attached to the original post.)
Am I missing something?

Electron will-download keeps getting interrupted

I am trying to download a file, but it keeps getting interrupted, and I have no idea why. I cannot find any information on how to debug the reason it got interrupted, either.
Here is where I am saving the file:
C:\Users\rnaddy\AppData\Roaming\Tachyon\games\murware\super-chain-reaction\web.zip
window.webContents.session.on('will-download', (event, item, webContents) => {
    let path = url.parse(item.getURL()).pathname;
    let dev = path.split('/')[3] || null;
    let game = path.split('/')[4] || null;

    if (!dev && !game) {
        item.cancel();
    } else {
        item.setSavePath(Settings.fileDownloadLocation(dev, game, 'web'));

        item.on('updated', (event, state) => {
            let progress = 0;

            if (state == 'interrupted') {
                console.log('Download is interrupted but can be resumed');
            } else if (state == 'progressing') {
                progress = item.getReceivedBytes() / item.getTotalBytes();

                if (item.isPaused()) {
                    console.log('Download is paused');
                } else {
                    console.log(`Received bytes: ${item.getReceivedBytes()}; Progress: ${progress.toFixed(2)}%`);
                }
            }
        });
    }
});
Here is my listener that will trigger the above:
ipcMain.on(name, (evt) => {
    window.webContents.downloadURL('http://api.gamesmart.com/v2/download/murware/super-chain-reaction');
});
Here is the output that I am getting in my console:
Received bytes: 0; Progress: 0.00%
Received bytes: 233183; Progress: 0.02%
Download is interrupted but can be resumed
I have a hosts file entry set up:
127.0.0.1 api.gamesmart.com
When I try to access the path http://api.gamesmart.com/v2/download/murware/super-chain-reaction in chrome, the file downloads just fine into my Downloads folder. So, what is causing this?
If you set a specific directory for the download, you should pass the full file path, including the file name, to the item.setSavePath() method. The best way to do that is to fetch the file name from the DownloadItem object (item in your case) itself: item.getFilename() returns the name of the current download item. Here is the doc.
There is also a convenient way to get frequently used public system directory paths in Electron: the app.getPath(name) method, where name is one of the strings Electron pre-defines for several directories. Here is the doc.
So your complete save path would be app.getPath("downloads") + "/" + item.getFilename().
In your case, if you are OK with your file-path extraction method, the only thing you are missing is the file name at the end of the download path.
Of course you can use any other string as the file name if you wish, but remember to use the correct extension. :)
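For example, here is a minimal (untested) sketch of the will-download handler with the file name appended, assuming window is the BrowserWindow from the question:
const { app } = require('electron');
const path = require('path');

window.webContents.session.on('will-download', (event, item, webContents) => {
    // Save into the user's Downloads folder, keeping the name reported by the DownloadItem.
    item.setSavePath(path.join(app.getPath('downloads'), item.getFilename()));
});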
My solution was to use the correct Windows path separator (\), e.g. 'directory\\file.zip'. Node.js generally accepts / on any platform, but setSavePath seems to be sensitive to the path separator.
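If you prefer not to hard-code backslashes, a hedged alternative is to let Node's path module supply the platform separator, e.g.:
const path = require('path');

// path.join uses the platform's separator (path.sep), so there is no need
// to hand-write 'directory\\file.zip' on Windows.
const savePath = path.join('directory', 'file.zip');
console.log(savePath); // "directory\file.zip" on Windows, "directory/file.zip" elsewhere

item.setSavePath(savePath); // assumes this runs inside the will-download handler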

Unity WebGL External Assets

I'm developing a WebGL project in Unity that has to load some external images from a directory. It runs fine in the editor; however, when I build it, it throws a Directory Not Found exception in the web console. I am putting the images in the Assets/StreamingAssets folder, which becomes the StreamingAssets folder in the built project (at the root, same level as index.html). The images are located there, yet the browser still complains about not being able to find that directory. (I'm opening it on my own computer, with no web server running.)
I guess I'm missing something very obvious, but it seems like I could use some help. I just started learning Unity a week ago, and I'm not that great with C# or JavaScript (I'm trying to get better...). Is this somehow related to some JavaScript security issue?
Could someone please point me in the right direction on how I should be reading images (no writing needs to be done) in Unity WebGL?
string appPath = Application.dataPath;
string[] filePaths = Directory.GetFiles(appPath, "*.jpg");
According to unity3d.com, in WebGL builds everything except threading and reflection is supported, so IO should be working - or so I thought :S
I experimented a bit more, and now I'm trying to load a text file containing the paths of the images (separated by ';'):
TextAsset ta = Resources.Load<TextAsset>("texManifest");
string[] lines = ta.text.Split(';');
Then I convert each line to a proper path and add them to a list:
string temp = Application.streamingAssetsPath + "/textures/" + s;
filePaths.Add(temp);
Debug.Log tells me it looks like this:
file://////Downloads/FurnitureDresser/build/StreamingAssets/textures/79.jpg
So that seems to be all right, except for all those slashes (that looks a bit odd to me).
And finally create the texture:
WWW www = new WWW("file://" + filePaths[i]);
yield return www;
Texture2D new_texture = new Texture2D(120, 80);
www.LoadImageIntoTexture(new_texture);
And around this last part (I'm not sure: WebGL projects do not seem easily debuggable) it tells me: NS_ERROR_DOM_BAD_URI: Access to restricted URI denied
Can someone please enlighten me as to what is happening? And most of all, what would be the proper solution for creating a directory from which I can load images at runtime?
I realise this question is now a couple of years old but, since this still appears to be a commonly asked question, here is one solution (sorry, the code is C#, but I am guessing the JavaScript implementation is similar). Basically you need to use UnityWebRequest and coroutines to access a file from the StreamingAssets folder.
1) Create a new Loading scene (which does nothing but query the files; you could have it display some status text or a progress bar to let the user know what is happening).
2) Add a script called Loader to the Main Camera in the Loading scene.
3) In the Loader script, add a variable to indicate whether the asset has been read successfully:
private bool isAssetRead;
4) In the Start() method of the Loader script:
void Start ()
{
    // if WebGL, this will be something like "http://..."
    string assetPath = Application.streamingAssetsPath;
    bool isWebGl = assetPath.Contains("://") ||
                   assetPath.Contains(":///");

    try
    {
        if (isWebGl)
        {
            StartCoroutine(
                SendRequest(
                    Path.Combine(
                        assetPath, "myAsset")));
        }
        else // desktop app
        {
            // do whatever you need if the app is not WebGL
        }
    }
    catch
    {
        // handle failure
    }
}
5) In the Update() method of the Loader script:
void Update ()
{
    // check to see if asset has been successfully read yet
    if (isAssetRead)
    {
        // once asset is successfully read,
        // load the next screen (e.g. main menu or gameplay)
        SceneManager.LoadScene("NextScene");
    }

    // need to consider what happens if
    // asset fails to be read for some reason
}
6) In the SendRequest() method of the Loader script:
private IEnumerator SendRequest(string url)
{
    using (UnityWebRequest request = UnityWebRequest.Get(url))
    {
        yield return request.SendWebRequest();

        if (request.isNetworkError || request.isHttpError)
        {
            // handle failure
        }
        else
        {
            try
            {
                // the entire file is returned via downloadHandler
                string fileContents = request.downloadHandler.text;
                // or, for binary assets:
                //byte[] fileContents = request.downloadHandler.data;

                // do whatever you need to do with the file contents
                if (loadAsset(fileContents))
                    isAssetRead = true;
            }
            catch (Exception x)
            {
                // handle failure
            }
        }
    }
}
Put your image in the Resources folder and use Resources.Load to open the file and use it.
For example:
Texture2D texture = Resources.Load("images/Texture") as Texture2D;
if (texture != null)
{
    GetComponent<Renderer>().material.mainTexture = texture;
}
The directory listing and file APIs are not available in WebGL builds.
Basically, no low-level IO operations are supported.

Google Docs Add-On - Dealing with Images

I'm trying to create a Google Docs add-on in which someone:
1) Selects an image
2) Clicks a menu item
3) A dialog is displayed, showing the image (on a canvas) with a couple of tools
4) The canvas is modified using the tools
5) The canvas data is saved and replaces the original image
6) Metadata for the image is saved, so it can be re-edited from the original.
I know how to get the image selection (from the GS code) and trigger the menu item and dialog. I also know how to do all of my custom code things.
I need to know:
1) How to get the original image URL (or extract it as a base64 string) that I can put into a canvas
2) How to replace the image and save it in the document
3) How to save metadata on a per-image basis so it can be re-edited
Examples would be awesome, though links to documentation would also be great. I've found a lot of things, but nothing concrete on how to extract the data as anything but a blob.
(This answer is a work in progress. I'm starting it to record the bits as I figure them out. If someone else helps me figure out the missing bits, I'll accept theirs instead of this one.)
How to get the original image URL (or extract it as a base64 string)
As far as I can tell, there isn't a way to get the default URL. I was however able to get the base64 string. It's slightly convoluted, but works.
Code.gs
// Gets an InlineImage in some way. I'm using the currently selected image,
// but that's irrelevant to the code sample.
// @return {InlineImage}
function getImage() {
    // gets the InlineImage element somehow
}

// Gets the actual data URI.
// Note: this function uses image/png only. You can change this by changing
// it in the two places, or using a variable. Just be sure the two spots
// match.
// @return {string}
function getDataUri() {
    return 'data:image/png;base64,' + Utilities.base64Encode(getImage().getAs('image/png').getBytes());
}
MyDialogJavaScript.html
$(function () {
    google.script.run
        .withSuccessHandler(function (data) { console.log(data); })
        .withFailureHandler(function (err) { console.log('failure: ' + err); })
        .getDataUri();
});
An important note: you must use SandboxMode.IFRAME when creating the dialog, or else you'll get something like:
Rejecting <img>.setAttribute('src', blahblahblah
This is apparently due to a limitation in the Caja compiler normally used. See the answer here for more info: Using base64-encoded images with HtmlService in Apps Script
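For completeness, a minimal sketch of how the dialog might be created with that sandbox mode; the HTML file name 'MyDialog' and the dialog title are placeholders, not taken from the original post:
function showEditDialog() {
    var html = HtmlService.createHtmlOutputFromFile('MyDialog') // placeholder file name
        .setSandboxMode(HtmlService.SandboxMode.IFRAME)
        .setWidth(600)
        .setHeight(400);
    DocumentApp.getUi().showModalDialog(html, 'Edit Image');
}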

Stream file uploaded with Express.js through gm to eliminate double write

I'm using Express.js and have a route to upload images that I then need to resize. Currently I just let Express write the file to disk (which I think uses node-formidable under the covers) and then resize using gm (http://aheckmann.github.com/gm/) which writes a second version to disk.
gm(path)
    .resize(540, 404)
    .write(dest, function (err) { ... });
I've read that you can get a hold of the node-formidable file stream before it writes it to disk, and since gm can accept a stream instead of just a path, I should be able to pass this right through eliminating the double write to disk.
I think I need to override form.onPart but I'm not sure where (should it be done as Express middleware?) and I'm not sure how to get a hold of form or what exactly to do with the part. This is the code skeleton that I've seen in a few places:
form.onPart = function (part) {
    if (!part.filename) { form.handlePart(part); return; }

    part.on('data', function (buffer) {
        // ...
    });

    part.on('end', function () {
        // ...
    });
};
Can somebody help me put these two pieces together? Thanks!
You're on the right track by rewriting form.onPart. Formidable writes to disk by default, so you want to act before it does.
Parts themselves are Streams, so you can pipe them to whatever you want, including gm. I haven't tested it, but this makes sense based on the documentation:
var formidable = require('formidable');
var fs = require('fs');
var gm = require('gm');

var form = new formidable.IncomingForm;
form.onPart = function (part) {
    if (!part.filename) return this.handlePart(part);
    gm(part).resize(200, 200).stream(function (err, stdout, stderr) {
        stdout.pipe(fs.createWriteStream('my/new/path/to/img.png'));
    });
};
As for the middleware, I'd copy-paste the multipart middleware from Connect/Express and add the onPart function to it: http://www.senchalabs.org/connect/multipart.html
It'd be a lot nicer if formidable didn't write to disk by default or if it took a flag, wouldn't it? You could send them an issue.
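Building on the onPart override above, here is a rough, untested sketch of how it could be wired into a plain Express route instead of patching the Connect multipart middleware; the route path, destination path, and resize dimensions are placeholders:
var express = require('express');
var formidable = require('formidable');
var fs = require('fs');
var gm = require('gm');

var app = express();

app.post('/upload', function (req, res) {
    var form = new formidable.IncomingForm();

    form.onPart = function (part) {
        // Let formidable handle ordinary (non-file) fields as usual.
        if (!part.filename) return form.handlePart(part);

        // Stream the uploaded image straight through gm, skipping the temp file.
        gm(part)
            .resize(540, 404)
            .stream(function (err, stdout, stderr) {
                if (err) return;
                stdout.pipe(fs.createWriteStream('my/new/path/to/img.png'));
            });
    };

    form.parse(req, function (err) {
        // For brevity this responds as soon as parsing ends; a real route
        // should also wait for the write stream to finish.
        if (err) { res.statusCode = 500; return res.end(); }
        res.end('uploaded');
    });
});

app.listen(3000);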
