Google Apps Script hasNext() returns false - javascript

I'm trying a very simple thing but can't seem to get it to work.
I have a number of files with the same name, "Facturas 21", living in different subfolders of my Drive. I want to do stuff to them, but for some reason I cannot access them with the file iterator.
My code is as simple as it gets:
function getFiles() {
  const drive = DriveApp.getFolderById("XXXXXXXXX");
  const files = drive.getFilesByName("Facturas 21");
  Logger.log(files.hasNext()); // logs false!
  while (files.hasNext()) {
    let file = files.next();
    // ... some code
  }
}
Why is it giving me false when those files definitely exist? FYI, I've copied and pasted the string, so there's no misspelling or mistyping.
According to the docs, getFilesByName() "gets a collection of all files in the user's Drive that have the given name" and returns a FileIterator object. What am I not seeing??

You need to traverse the whole folder tree yourself; Drive won't do that for you.
You're getting an empty FileIterator, presumably because there are no files named "Facturas 21" directly inside folder XXXXXXXXX.
getFilesByName() on a Folder only searches that folder's immediate children, so you have to recursively call getFolders() on each subfolder and repeat the search there, as in the sketch below.
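A minimal recursive sketch (assuming the same folder ID and file name as in the question; processFile() is a hypothetical handler standing in for "... some code"):
function getFilesRecursively(folder) {
  // Process matches directly in this folder
  const files = folder.getFilesByName("Facturas 21");
  while (files.hasNext()) {
    processFile(files.next()); // hypothetical handler
  }
  // Then descend into every subfolder
  const subfolders = folder.getFolders();
  while (subfolders.hasNext()) {
    getFilesRecursively(subfolders.next());
  }
}

function getFiles() {
  getFilesRecursively(DriveApp.getFolderById("XXXXXXXXX"));
}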

The reason Logger.log(files.hasNext()); logs false is that getFilesByName() on a Folder only searches the specific folder whose ID you provided, not its subfolders.
As mentioned in the other answer, you would otherwise have to traverse the whole Drive tree yourself.
However, you may benefit from using the below snippet in order to list all the files named "Facturas 21":
function listFiles() {
  const query = "title contains 'Facturas 21'";
  const files = Drive.Files.list({q: query}).items;
  // code
}
The code above makes use of the Drive advanced service and returns matching files from the whole Drive, not just one folder (note that "contains" matches any title containing the string, not only exact matches). The files variable will be an array of File resource objects, so depending on the end result you are expecting, you can manipulate these for your needs.
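The items you get back are Drive API v2 metadata resources (with fields like id and title), not DriveApp File objects, so if you want to reuse DriveApp methods on them, one possible follow-up to the snippet above (a sketch) is:
// inside listFiles(), after the lines above
files.forEach(function(item) {
  const file = DriveApp.getFileById(item.id); // back to a DriveApp File
  Logger.log(item.title + " -> " + file.getUrl());
});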
Note
Please bear in mind that the Drive advanced service makes use of Drive API v2.
Reference
Drive API v2: Search Terms
Drive API v2: Files.list
Apps Script: Advanced Drive Service

Explanation:
According to the documentation, the hasNext() method of Class FileIterator returns a Boolean, which is why Logger.log(files.hasNext()); logs either true or false.
Additionally, I tried your code and was able to get the "Facturas 21" file by adding the file's extension (in my case it's .txt) on line 3: const files = drive.getFilesByName("Facturas 21.txt");. This is the main reason why you're only getting the false result.

Related

How to filter out non-json documents in MarkLogic?

I have a lot of data loaded in my database where some of the loaded documents are not JSON documents but just binary files. Correct data looks like this: "/foo/bar/1.json", but the incorrect data is in the format "/foo/bar/*". Is there a mechanism in MarkLogic, using JavaScript, where I can filter out this junk data and delete it?
PS: I'm unable to extract files with mlcp that have a "?" in the URI, and that may be why I get this error when I try to reload the data. Is there a way to fix that extract along with this?
If all of the document URIs contain a ? and are in that directory, then you could use cts.uriMatch()
declareUpdate();
for (const uri of cts.uriMatch('/foo/bar/*?*')) {
  xdmp.documentDelete(uri);
}
Alternatively, if you are looking to find the binary documents, you can apply the format-binary option to a cts.search() with a cts.directoryQuery() and then delete them.
declareUpdate();
for (const doc of cts.search(cts.directoryQuery("/foo/bar/"), ['format-binary'])) {
  xdmp.documentDelete(fn.baseUri(doc));
}
They are probably being persisted as binary because there is no file extension when the URI ends with a question mark and querystring parameters, e.g. 1.json?foo=bar instead of 1.json.
It is difficult to diagnose and troubleshoot without seeing what your MLCP job configs are and knowing more about what you are doing to load the data.
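That said, if the underlying problem is just the querystring suffix on the URIs, one option (a rough sketch, not tested against your data, and note the document keeps whatever format it was loaded with; only the URI changes) is to copy each such document to a cleaned URI inside MarkLogic before re-extracting:
declareUpdate();
for (const uri of cts.uriMatch('/foo/bar/*?*')) {
  const cleanUri = fn.substringBefore(uri, '?'); // drop the querystring part
  xdmp.documentInsert(cleanUri, cts.doc(uri));   // copy to the clean URI
  xdmp.documentDelete(uri);                      // remove the old one
}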

Pentaho/Kettle - Javascript or java that gets file names older than a specified date

Please excuse the rookie question as I'm not a programmer :)
We're using Pentaho 8
I'm looking for a way to have Javascript or Java read a directory and return the file names of any files that are older than a date that will be provided by a Pentaho parameter.
Here is what I currently have using a Modified Java Script Value step that only lists the directory contents:
var _getAllFilesFromFolder = function(dir) {
  var filesystem = require("fs");
  var results = [];
  filesystem.readdirSync(dir).forEach(function(file) {
    file = dir + '\\' + file;
    var stat = filesystem.statSync(file);
    if (stat && stat.isDirectory()) {
      results = results.concat(_getAllFilesFromFolder(file));
    } else results.push(file);
  });
  return results;
};
Is Javascript/Java the right way to do this?
There's a step called "Get file names". You just need to provide the path you want to poll. It can also do so recursively, show only filenames that match a given filter, and the Filters tab lets you show only folders, only files, or both.
nsousa's answer would be the easiest: after you get your file list, use a Filter Rows step on the lastmodifiedtime field returned by Get file names. That's 2 steps, or 3 if you want to format the date/time into something easier to sort/filter on. This is the approach I use, and it is generally faster than the transformation can keep up with.
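If you do want to stay inside a Modified Java Script Value step instead, a rough sketch using the JVM's java.io.File class (reachable from the step's Rhino engine) could look like the following, assuming dir and cutoffMillis (epoch milliseconds) arrive as fields or parameters:
// List files in 'dir' older than 'cutoffMillis'
var oldFiles = [];
var folder = new java.io.File(dir);
var entries = folder.listFiles();
if (entries != null) {
  for (var i = 0; i < entries.length; i++) {
    var f = entries[i];
    if (f.isFile() && f.lastModified() < cutoffMillis) {
      oldFiles.push(f.getName());
    }
  }
}
var oldFileNames = oldFiles.join(";"); // expose as a step output field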

How to get Model object metadata properties in Javascript AutoDesk

I am working with AutoDesk Forge Viewer (2D) in Javascript with Offline svf file.
I have converted the .dwg file to svf file.
How can I get Model Object Metadata properties in Javascript like we get using the api "https://developer.api.autodesk.com/modelderivative/v2/designdata/{urn}/metadata/{guid}/properties" ?
I tried using viewer.model.getProperties(dbId, function, function), but this only gives me the details of that particular dbId, whereas I want the list of properties for all objects.
Please help me with this.
Firstly, the other blog talks about how Model Derivative extracts properties. In theory, if you get the raw property data, either the JSON form (json.gz) or the SQLite form (sdb/db), you can extract the properties yourself with other tools:
How properties.db is used in Forge Viewer?
I believe you already know http://extract.autodesk.io/, as you said you have downloaded the SVF. http://extract.autodesk.io/ provides the logic to download the translated data, including json.gz and the SQLite db.
If you prefer to dump all properties within the browser using the Forge Viewer, the only way I can think of is as below:
function getAllDbIds(viewer) {
  var instanceTree = viewer.model.getData().instanceTree;
  var allDbIdsStr = Object.keys(instanceTree.nodeAccess.dbIdToIndex);
  return allDbIdsStr.map(function(id) { return parseInt(id); });
}

var allDbIds = getAllDbIds(myViewer);
myViewer.model.getBulkProperties(allDbIds, null,
  function(elements) {
    console.log(elements); // this includes all properties of a node
  });
Actually, I combined two blogs:
https://forge.autodesk.com/cloud_and_mobile/2016/10/get-all-database-ids-in-the-model.html
https://forge.autodesk.com/blog/getbulkproperties-method
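As a small follow-up, if you only need a few specific properties rather than everything, getBulkProperties also accepts an array of property names as its second argument instead of null. A sketch, assuming property names like 'Name' and 'Layer' actually exist in your model:
myViewer.model.getBulkProperties(allDbIds, ['Name', 'Layer'],
  function(elements) {
    elements.forEach(function(el) {
      console.log(el.dbId, el.properties); // only the filtered properties
    });
  });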

What is the purpose of the computed_hashes.json and verified_contents.json files in a secure Chrome extension?

I've seen some Chrome extensions that hash their folder and file names. They have a folder named 'metadata' with two files inside it: 'computed_hashes.json' and 'verified_contents.json'. What are these files, what do they do, and how can I get or use them?
computed_hashes.json
computed_hashes.json records the SHA256 hashes of the blocks of the files included in the extension, which is presumably used for file integrity and/or security purposes to ensure the files haven't been corrupted or tampered with.
I go into this in depth in this StackOverflow answer, where I reference the various relevant sections in the Chromium source code.
The main relevant files are:
extensions/browser/computed_hashes.h
extensions/browser/computed_hashes.cc
And within this, the main relevant functions are:
Compute
ComputeAndCheckResourceHash
GetHashesForContent
And the actual hash calculation can be seen in the ComputedHashes::GetHashesForContent function.
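To illustrate the block-hashing idea (this is a sketch of the concept, not the exact Chromium implementation or the on-disk JSON layout), a rough Node.js example that hashes a file in fixed-size blocks, assuming a 4096-byte block size:
const crypto = require('crypto');
const fs = require('fs');

// Split a file into fixed-size blocks and SHA256-hash each block.
function computeBlockHashes(filePath, blockSize) {
  const data = fs.readFileSync(filePath);
  const hashes = [];
  for (let offset = 0; offset < data.length; offset += blockSize) {
    const block = data.slice(offset, offset + blockSize);
    hashes.push(crypto.createHash('sha256').update(block).digest('base64'));
  }
  return hashes;
}

console.log(computeBlockHashes('manifest.json', 4096));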
verified_contents.json
Short Answer
This is used for file integrity and/or security purposes to ensure that the extension files haven't been corrupted/tampered with.
verified_contents.json contains a Base64 encoded payload inside the signed_content object (within the object whose description is treehash per file), which is validated against the signature in the signatures array whose header.kid is webstore. The validation uses crypto::SignatureVerifier::RSA_PKCS1_SHA256 over the concatenated value protected + . + payload.
If the signature validates correctly, the SHA256 hash of the blocks of the files included in the extension are then calculated and compared as per computed_hashes.json (described above).
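As a sketch of that verification step (using Node's crypto module rather than Chromium's internal SignatureVerifier, and assuming you already have the webstore public key in PEM form plus the protected, payload, and decoded signature values from the file):
const crypto = require('crypto');

// RSA PKCS#1 v1.5 with SHA-256 over "<protected>.<payload>", JWS-style.
// protectedB64 and payloadB64 stay base64url-encoded strings, exactly as read
// from verified_contents.json; signatureBytes is the base64url-decoded signature.
function verifyWebstoreSignature(publicKeyPem, protectedB64, payloadB64, signatureBytes) {
  const verifier = crypto.createVerify('RSA-SHA256');
  verifier.update(protectedB64 + '.' + payloadB64);
  return verifier.verify(publicKeyPem, signatureBytes);
}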
Deep Dive Explanation
To determine the internal specifics of how verified_contents.json is created/validated, we can search the chromium source code for verified_contents as follows:
https://source.chromium.org/search?q=verified_contents
This returns a number of interesting files, including:
extensions/browser/verified_contents.h
extensions/browser/verified_contents.cc
Looking in verified_contents.h we can see a comment describing the purpose of verified_contents.json, and how it's created by the webstore:
// This class encapsulates the data in a "verified_contents.json" file
// generated by the webstore for a .crx file. That data includes a set of
// signed expected hashes of file content which can be used to check for
// corruption of extension files on local disk.
We can also see a number of function prototypes that sound like they are used for parsing and validating the verified_contents.json file:
// Returns verified contents after successfully parsing verified_contents.json
// file at |path| and validating the enclosed signature. Returns nullptr on
// failure.
// Note: |public_key| must remain valid for the lifetime of the returned
// object.
static std::unique_ptr<VerifiedContents> CreateFromFile(
base::span<const uint8_t> public_key,
const base::FilePath& path);
// Returns verified contents after successfully parsing |contents| and
// validating the enclosed signature. Returns nullptr on failure. Note:
// |public_key| must remain valid for the lifetime of the returned object.
static std::unique_ptr<VerifiedContents> Create(
base::span<const uint8_t> public_key,
base::StringPiece contents);
// Returns the base64url-decoded "payload" field from the |contents|, if
// the signature was valid.
bool GetPayload(base::StringPiece contents, std::string* payload);
// The |protected_value| and |payload| arguments should be base64url encoded
// strings, and |signature_bytes| should be a byte array. See comments in the
// .cc file on GetPayload for where these come from in the overall input
// file.
bool VerifySignature(const std::string& protected_value,
const std::string& payload,
const std::string& signature_bytes);
We can find the function definitions for these in verified_contents.cc:
VerifiedContents::CreateFromFile calls base::ReadFileToString (which then calls ReadFileToStringWithMaxSize that reads the file as a binary file with mode rb) to load the contents of the file, and then passes this to Create
VerifiedContents::Create
calls VerifiedContents::GetPayload to extract/validate/decode the contents of the Base64 encoded payload field within verified_contents.json (see below for deeper explanation of this)
parses this as JSON with base::JSONReader::Read
extracts the item_id key, validates it with crx_file::id_util::IdIsValid, and adds it to verified_contents as extension_id_
extracts the item_version key, validates it with Version::IsValid(), and adds it to verified_contents as version_
extracts all of the content_hashes objects and
verifies that the format of each is treehash
extracts the block_size and hash_block_size, ensures they have the same value, and adds block_size to verified_contents as block_size_
extracts all of the files objects
extracts the path and root_hash keys and ensures that root_hash is Base64 decodeable
calculates the canonical_path using content_verifier_utils::CanonicalizeRelativePath and base::FilePath::FromUTF8Unsafe, and inserts it into root_hashes_ in the verified_contents
finally, returns the verified_contents
VerifiedContents::GetPayload
parses the contents as JSON with base::JSONReader::Read
finds an object in the JSON that has the description key set to treehash per file
extracts the signed_content object
extracts the signatures array
finds an object in the signatures array that has a header.kid set to webstore
extracts the protected / signature keys and Base64 decodes the signature into signature_bytes
extracts the payload key
calls VerifySignature with protected / payload / signature_bytes
if the signature is valid, Base64 decodes the payload into a JSON string
VerifiedContents::VerifySignature
calls SignatureVerifier::VerifyInit using crypto::SignatureVerifier::RSA_PKCS1_SHA256
uses this to validate protected_value + . + payload
Since this didn't show how the file hashes themselves were verified, I then searched for where VerifiedContents::CreateFromFile was called:
https://source.chromium.org/search?q=VerifiedContents::CreateFromFile
Which pointed me to the following files:
extensions/browser/content_verifier/content_hash.cc
Where
VerifiedContents::CreateFromFile is called by ReadVerifiedContents
ReadVerifiedContents is called by ContentHash::GetVerifiedContents, and when the contents are successfully verified, it will pass them to verified_contents_callback
GetVerifiedContents is called by ContentHash::Create, which passes ContentHash::GetComputedHashes as the verified_contents_callback
ContentHash::GetComputedHashes calls ContentHash::BuildComputedHashes
ContentHash::BuildComputedHashes will read/create the computed_hashes.json file by calling file_util::GetComputedHashesPath, ComputedHashes::CreateFromFile, CreateHashes (which calls ComputedHashes::Compute), etc
Note that ComputedHashes::CreateFromFile and ComputedHashes::Compute are the functions described in the computed_hashes.json section above (used to calculate the SHA256 hash of the blocks of the files included in the extension), and which I go into much more detail about in this StackOverflow answer.

Trouble using QuaggaJS - Javascript Barcode Scanner

I am using QuaggaJS. The home page there has basic descriptions of its main methods, as well as an example html folder in its downloadable zip. My problem is that one of the example HTMLs, called static_images, takes image srcs for its scanning procedure, but I cannot figure out how to give it a single custom src that I specify. (The example HTML seems to use a pre-set list of images in the folder.)
I read (on QuaggaJS git homepage) that the method Quagga.decodeSingle(config, callback) does exactly what I want.
In contrast to the calls described above, this method does not rely on
getUserMedia and operates on a single image instead. The provided
callback is the same as in onDetected and contains the result data
object.
But I cannot figure out how to implement that method in the example code. Can someone guide me, and explain how to use that method within QuaggaJS? (quagga/example/static_images.html/js)
The method Quagga.decodeSingle takes an object as the first parameter (config) that has a property called "src". You can pass your src to this property.
The example the author gives is:
Quagga.decodeSingle({
readers: ['code_128_reader'],
locate: true, // try to locate the barcode in the image
src: '/test/fixtures/code_128/image-001.jpg' // or 'data:image/jpg;base64,' + data
}, function(result){
console.log(result);
});
where the readers property indicates the method will only decode code_128 barcodes. You can add other barcode types to this array; they are basically the protocol names with underscores instead of spaces and "_reader" appended (e.g., ["code_128_reader", "code_39_reader", "code_39_vin_reader", "ean_reader", "ean_8_reader", "upc_reader", "upc_e_reader", "codabar_reader"]).
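To feed it a single image you pick at runtime rather than a hard-coded path, one approach is to read the file from an <input type="file"> as a data URL and pass that as src. A sketch, assuming an input element with id file-input exists on the page:
document.getElementById('file-input').addEventListener('change', function(e) {
  var reader = new FileReader();
  reader.onload = function() {
    Quagga.decodeSingle({
      readers: ['code_128_reader'],
      locate: true,
      src: reader.result // data URL of the chosen image
    }, function(result) {
      if (result && result.codeResult) {
        console.log(result.codeResult.code);
      } else {
        console.log('not detected');
      }
    });
  };
  reader.readAsDataURL(e.target.files[0]);
});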
