I am currently using jasmine-spec-reporter to create a spec report for my Protractor test cases.
The output on the terminal looks great! Is there any way to save this output to a file, or to somehow use protractor-jasmine2-screenshot-reporter to create a summary but disable the screenshots?
I have tried looking online for solutions, but so far haven't been successful.
var SpecReporter = require('jasmine-spec-reporter');
jasmine.getEnv().addReporter(new SpecReporter({displayStacktrace: 'none'}));
https://github.com/jintoppy/protractor-html-screenshot-reporter
https://github.com/bcaudan/jasmine-spec-reporter
My current workaround is to use protractor-jasmine2-screenshot-reporter to generate the report. This also generates screenshots, which is not very practical given the volume being created.
If anyone has a way to disable the screenshots, or to stop the .png files from being saved at all, please share.
The output on the terminal looks great! Is there any way to save this output to file
The jasmine-reporters package is what you want: https://www.npmjs.com/package/jasmine-reporters. It contains several different reporting options. If you want to convert the XML into an HTML file, you can use https://www.npmjs.com/package/jasmine-xml2html-converter.
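For example, something roughly like this in the Protractor config's onPrepare would keep the console spec output and also write a JUnit-style XML report to disk. This is only a sketch: savePath and consolidateAll are options documented for jasmine-reporters' JUnitXmlReporter, the output path is a placeholder, and the spec-reporter require style matches the older version shown in the question.

// protractor.conf.js (sketch)
var SpecReporter = require('jasmine-spec-reporter');
var reporters = require('jasmine-reporters');

exports.config = {
  // ...the rest of your existing config...
  onPrepare: function () {
    // keep the readable console output
    jasmine.getEnv().addReporter(new SpecReporter({displayStacktrace: 'none'}));
    // additionally save a JUnit-style XML report to disk
    jasmine.getEnv().addReporter(new reporters.JUnitXmlReporter({
      savePath: 'reports/xml',   // placeholder output directory
      consolidateAll: true       // one XML file for all suites
    }));
  }
};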
It seems that this guy had the same need: https://github.com/Kenzitron/protractor-jasmine2-html-reporter
You can turn off screenshots if needed:
var Jasmine2HtmlReporter = require('protractor-jasmine2-html-reporter');

jasmine.getEnv().addReporter(new Jasmine2HtmlReporter({
  takeScreenshots: false
}));
This is a topic where I can't seem to find the answer in the Node.js docs (I know it's possible because of libraries like exif), nor can I find an answer on the internet that doesn't just say to use a library.
I don't want to use a library, so I want to do this natively and learn more about reading file metadata, and maybe eventually updating the metadata too while building my own mini-tool.
If I run something like fs.statSync() I can get the generic metadata returned in the Stats object; but in my case I'm looking for all the other metadata, NOT just the basic file info like size, birthtime, etc.
I want the other metadata like dimensions, date taken, and especially things you'd see in image, video, or audio files.
Maybe there's something like:
const deepMetaData = fs.readFileSync().getMetaDataAsString();
console.info(/Date Taken/.test(deepMetaData)); // true
or
const deepMetaData = fs.createReadStream().buffer().toString();
const dateTaken = deepMetaData.match(/Date Taken: (\d{4}-\d{2}-\d{2})/)[1];
console.info(dateTaken);
If I need to work with buffers, streams, or whatever instead of a string output, that's cool too; ideally something synchronous. So if someone could provide a simple example of how to read that kind of metadata without a library, I'll at least be able to look up the methods it uses, understand more later, and lean on the docs for whatever approach it takes. Thank you!
Node.js fs functions like fs.statSync() provide OS-level metadata about the file only (such as create date, modification date, file size, etc.). These are properties of the file in the file system and do NOT have anything at all to do with the actual data inside the file itself.
When you talk about EXIF (for a photo), that is parsed from the file data itself. To know about that type of data, you must read and parse at least the beginning of the file, and you must be able to recognize and understand all the different file formats you might encounter. For photos, this would include JPEG, PNG, HEIC, GIF, etc. Each of those has a different file format and will require unique code for understanding the metadata embedded in the file.
Nodejs does not have support for any of that built-in.
So, it will take custom code for each file type. If you further want to include other types of files, like videos, you need to extend the list of different file types you can read, parse, and understand. For the breadth of files you're talking about, this is a big job, particularly when it comes to testing against all the different variants of files and metadata that exist out in the wild.
I personally would be fine with implementing my own code for one particular file type like JPEG, but if I were tasked with supporting dozens of types of files, and particularly the wide range of video formats, I'd immediately seek out help from existing libraries that have already done all the time-consuming work to research, write, and test how to properly read and understand all the variants.
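To make the "unique code per format" point concrete, here is a minimal sketch for one of the simplest cases: PNG stores its pixel dimensions in the IHDR chunk that immediately follows the 8-byte PNG signature, so width and height can be read as big-endian 32-bit integers at byte offsets 16 and 20. The file path is a placeholder, and this only covers PNG; JPEG/EXIF, HEIC, GIF, video containers, etc. each need their own parser.

const fs = require('fs');

// Read just enough bytes to cover the PNG signature (8 bytes) plus the
// IHDR chunk header (8 bytes) and the width/height fields (8 bytes).
function readPngDimensions(filePath) {
  const buffer = Buffer.alloc(24);
  const fd = fs.openSync(filePath, 'r');
  fs.readSync(fd, buffer, 0, 24, 0);
  fs.closeSync(fd);

  // Every PNG starts with the fixed signature 89 50 4E 47 0D 0A 1A 0A.
  const signature = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
  if (!buffer.subarray(0, 8).equals(signature)) {
    throw new Error('Not a PNG file');
  }

  // IHDR data starts at byte 16: width then height, each a big-endian uint32.
  return {
    width: buffer.readUInt32BE(16),
    height: buffer.readUInt32BE(20)
  };
}

console.info(readPngDimensions('./some-image.png')); // e.g. { width: 4032, height: 3024 }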
I know it's possible because of libraries like exif
This is an example of a library that reads the beginning of the image file, parses it according to the expected format and knows how to interpret all the possible tags that can be in the EXIF header and what they all mean.
So if there's a simple example someone could provide of how to read that kind of meta data without a library
Go study the code for the EXIF library and see how it works. If you're going to implement it yourself, that's how you have to do it. I'm still not sure why you'd avoid using working libraries that already exist. That is one of the biggest advantages of the Node.js ecosystem: you can build on all the open source code that already exists, rather than reimplementing it from scratch, and spend your coding time on the parts of your problem that someone else has not already solved.
how would one read that metadata using node?
You literally have to read the data from the file (usually at the start of the file). You can use any of the mechanisms the fs module provides. For example, you can use fs.createReadStream(), stream in the file, parse and interpret the data as it arrives, and stop the stream once you get past the end of the metadata. Or, you can open a file handle using fs.open() and use fs.read() to read chunks of the file until you have read enough to have all the metadata.
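As a rough sketch of the streaming variant, fs.createReadStream() accepts start and end byte offsets, so you can pull in just the first chunk of the file (64 KB here, an arbitrary choice; the path is a placeholder) and hand it to whatever format-specific parsing you write:

const fs = require('fs');

// Read only the first 64 KB of the file; for most image formats this is
// more than enough to cover the metadata stored at the start of the file.
const chunks = [];
fs.createReadStream('./photo.jpg', { start: 0, end: 64 * 1024 - 1 })
  .on('data', (chunk) => chunks.push(chunk))
  .on('error', (err) => console.error(err))
  .on('end', () => {
    const header = Buffer.concat(chunks);
    // hand `header` to your format-specific metadata parser here
    console.info('read', header.length, 'bytes of header data');
  });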
You HAVE an example sitting right in front of you of code that does this in the EXIF library on NPM that you already seem to know about. Just go examine its code. The code is ALL there.
I'm just looking for a simple answer on getting that info, even if it's a blob of strings.
This is perhaps your main problem. There is no simple answer to get that info and it doesn't just exist as a blob of strings. These files are sometimes binary files (for space efficiency reasons). You have to learn how to read and parse binary data. Go study the code in the EXIF library and see what it is already doing and you can learn from that. There is no better example to start with.
But, for a simple example using the HEIC file type, this reads the first 5000 bytes of the file into a string, which can then be searched:
const fs = require('fs');

// absPathToHeicPhoto is the absolute path to a .heic photo on disk
const charCount = 5000;
const buffer = Buffer.alloc(charCount);
const fileDescriptor = fs.openSync(absPathToHeicPhoto, 'r');
const bytesRead = fs.readSync(fileDescriptor, buffer, 0, charCount, 0);
fs.closeSync(fileDescriptor);
const bufferAsStr = buffer.toString('utf8', 0, bytesRead);
console.info(/\d{4}:\d{2}:\d{2}/.test(bufferAsStr)); // does the header contain an EXIF-style date?
FYI, I looked at the code for this EXIF library on NPM and it's poorly implemented: it uses fs.readFile() to load the ENTIRE image into RAM, even though it only needs a fraction of the data at the start of the file, which is memory and disk inefficient.
But, it does have a method called processImage and one called extractExifData that parse the EXIF info out of the binary data of the file. Both are in its source on GitHub; you can start learning there.
FYI, as a photographer, I use a command line program called exiftool that will dump EXIF info to stdout or to a file for many image types. As a different approach, you could just run that tool from your Node.js program (using the child_process module), capture its output, and work with that output, letting exiftool do the hard work while you operate on what it generates.
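A minimal sketch of that approach, assuming exiftool is installed and on the PATH (its -json flag makes the output easy to consume; the file path and tag name are just examples):

const { execFileSync } = require('child_process');

// Ask exiftool to dump all metadata for the file as JSON, then parse it.
const output = execFileSync('exiftool', ['-json', './photo.heic'], { encoding: 'utf8' });
const [metadata] = JSON.parse(output); // exiftool returns an array, one object per file

console.info(metadata.DateTimeOriginal); // e.g. '2021:06:12 14:03:55' (if the tag exists)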
I'm trying to get it to read and parse a JSON file so I can update it, but it's not reading the full file: it stops partway through and just never reads the rest. It's a massive JSON file, because I can't really store the data as anything else besides splitting it across multiple JSON files.
The code of CacheManager is here
The size of what it read is 143,360 bytes, while the actual size of the file is 153,840 bytes. I've never run into this issue before, so I have no clue how to remedy it. I'm using fs-extra in the code, but I've verified that the same issue happens with the built-in fs module. I've printed out the content of what it got, so I can see that it is reading the file and reading the right content; it's just not getting all of it. The output is cut off at the end, right at the part of the JSON for the md5. The code writing it to the file just writes the raw content of the read file.
If the issue is caused by the size of the file, you may want to look into streaming alternatives to the standard JSON.parse, such as https://www.npmjs.com/package/stream-json.
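For example, a rough sketch with stream-json, assuming the file's top-level value is an object (the package also ships a StreamArray streamer for array tops); the file name is a placeholder:

const fs = require('fs');
const StreamObject = require('stream-json/streamers/StreamObject');

// Parse the large JSON file incrementally: each top-level key/value pair
// arrives as its own 'data' event instead of one huge JSON.parse call.
const pipeline = fs.createReadStream('./cache.json').pipe(StreamObject.withParser());

const result = {};
pipeline.on('data', ({ key, value }) => {
  result[key] = value;
});
pipeline.on('end', () => {
  console.info('parsed', Object.keys(result).length, 'top-level entries');
});
pipeline.on('error', (err) => console.error(err));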
Note: I'll check and let you know
Edit for the reader: so far it seems to be some kind of race condition plus OS caching; see the discussion in the comments.
In one of my tests I am expecting some kind of DOM change. However, the page it's on is quite long.
So what I usually do for smaller components is use the screen.debug() method. But since the output is quite long, I also started running the test task with DEBUG_PRINT_LIMIT=50000. That eventually got me the output I needed.
But that made me wonder, is it perhaps possible to save the output in a file?
According to the docs, screen.debug is essentially a shortcut for console.log(prettyDOM()).
So you could just use prettyDOM() directly, and do whatever with the result.
I would do copy(prettyDOM()) to put it on the clipboard and then paste it into a text file manually (in Chrome), or save it into a file directly (in Node).
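A rough sketch of the Node/Jest side, using the fact that prettyDOM takes an optional maxLength as its second argument (which sidesteps the DEBUG_PRINT_LIMIT truncation); the output file name and the render step are placeholders:

import fs from 'fs';
import { prettyDOM } from '@testing-library/dom'; // also re-exported by wrappers like @testing-library/react

test('dumps the rendered DOM to a file', () => {
  // ...render your component / page here as you normally do...

  // Write the full pretty-printed DOM to disk; the large maxLength argument
  // keeps the output from being truncated at the default limit.
  fs.writeFileSync('dom-dump.html', prettyDOM(document.body, 1000000));
});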
I need to provide text information in a file, read it in JavaScript, and then work with the output. The output is not supposed to go to a web page; it is for a plugin for another program (RPG Maker MV).
I'm trying to load a text file (*.txt) and output it with document.write(). I'm currently testing the code in Firefox. I get why this is prevented from working, but ultimately I don't want to run it in a browser anyway. However, I need to try the code before implementing it.
My text file looks like this:
What is the Capital of France?
-Berlin
-Paris [X]
-Koppenhagen
Who came up with the Theory of Evolution?
-Charles Darwin [X]
-Thomas Eddison
-Nicolas Flamel
The JavaScript should do something like:
var fs = require('fs'); // Node/NW.js only; not available to a plain browser script
var mytext = fs.readFileSync('quiz.txt', 'utf8'); // pass an encoding to get a string instead of a Buffer
document.write(mytext); // document.write only exists where there is a DOM (browser/NW.js)
I could also use file types other than *.txt, of course. What's important is that the quiz file can be adapted without messing with the original JavaScript file.
Unfortunately, you can't use the same code to read a file in Node.js and in code running in a browser, as they use very different APIs.
Node.js relies on the fs module, while the browser environment relies on the File APIs specified by the W3C.
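Since RPG Maker MV runs on NW.js, where require('fs') is available to plugins, the Node side is the one that matters in the end. A rough sketch of reading and splitting the quiz file there; the parsing rules are assumptions based on the sample quiz.txt above (question lines don't start with '-', answer lines do, and '[X]' marks the correct answer):

var fs = require('fs');

var raw = fs.readFileSync('quiz.txt', 'utf8');
var questions = [];

raw.split(/\r?\n/).forEach(function (line) {
  if (line.trim() === '') {
    return; // skip blank lines
  }
  if (line.charAt(0) !== '-') {
    // A line that does not start with '-' begins a new question.
    questions.push({ question: line.trim(), answers: [] });
  } else {
    // Answer line: strip the leading '-' and the optional '[X]' marker.
    questions[questions.length - 1].answers.push({
      text: line.replace(/^-\s*/, '').replace(/\s*\[X\]\s*$/, ''),
      correct: line.indexOf('[X]') !== -1
    });
  }
});

console.log(JSON.stringify(questions, null, 2));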
I've got a text file that contains terminal output which includes all kinds of character codes such as moving the cursor around, etc. How can I render this properly in a browser?
There are several options I've found, based on terminal emulation using JavaScript:
jQuery Terminal plugin
GateOne
http://cb.vu/
shellinabox
The first option seems to be the closest solution to what you need.
https://github.com/drudru/ansi_up is probably what you need; it renders ANSI terminal output as HTML.
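A rough browser-side sketch, assuming a recent ansi_up version that exports an AnsiUp class with an ansi_to_html() method; the /logs/session.txt URL and the #terminal-output element (e.g. a <pre>) are placeholders:

import { AnsiUp } from 'ansi_up';

const ansiUp = new AnsiUp();

// Fetch the captured terminal output and convert its ANSI colour/style
// codes to HTML. Note that ansi_up handles colours and styles, not
// cursor-movement sequences.
fetch('/logs/session.txt')
  .then((response) => response.text())
  .then((text) => {
    document.getElementById('terminal-output').innerHTML = ansiUp.ansi_to_html(text);
  });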
I used term.js, bone.io, and express.io for terminal emulation. It's working pretty well.
https://github.com/PrimeEuler/ShellServer.js
https://github.com/chjj/tty.js
https://github.com/thlorenz/hypernal renders terminal output as HTML to simplify reusing server-side modules in the browser.