How to associate objects in Sequelize (Node.js)?

I have User and File models in Sequelize. A User can have several Files.
I have this association:
db.User.hasMany(db.File, {as: 'Files', foreignKey: 'userId', constraints: false});
I want to initialize a User object with several Files and save everything to the database.
I wrote the following code:
var files = [];

var file1 = models.File.build();
file1.name = "JPEG";
file1.save().then(function () {
});
files.push(file1);

var file2 = models.File.build();
file2.name = "PNG";
file2.save().then(function () {
});
files.push(file2);

var newUser = models.User.build();
newUser.email = email;
newUser.save().then(function (usr) {
    files.forEach(function (item) {
        newUser.addFile(item);
    });
});
But I found a bug: sometimes some of the files were not associated with the user.
In the Node.js logs I found the UPDATE commands that should set the foreign keys for these files, but those commands were not executed, and the foreign keys (userId) of several files stayed empty.
I think the problem is in the asynchronous queries.
How should I structure the code to avoid this bug?

The problem is exactly what you think it is: the async code.
You need to move the code inside the callbacks, otherwise it runs before the file is created.
JavaScript doesn't wait for an asynchronous call to finish before moving on to the next line, so the next line runs whether the save is done or not.
So you're basically adding a file that doesn't exist in the database yet, because the code doesn't wait for the file to be saved before moving on.
This would work, though, just by moving the code inside the then callbacks:
var files = [];

var file1 = models.File.build();
file1.name = "JPEG";
file1.save().then(function () {
    files.push(file1);

    var file2 = models.File.build();
    file2.name = "PNG";
    file2.save().then(function () {
        files.push(file2);

        var newUser = models.User.build();
        newUser.email = email;
        newUser.save().then(function (usr) {
            files.forEach(function (item) {
                newUser.addFile(item);
            });
        });
    });
});
But that's messy. Instead you can chain promises like this:
var files = [];

var file1 = models.File.build();
file1.name = "JPEG";
file1.save()
    .then(function (file1) {
        files.push(file1);
        var file2 = models.File.build();
        file2.name = "PNG";
        return file2.save();
    })
    .then(function (file2) {
        files.push(file2);
        var newUser = models.User.build();
        newUser.email = email;
        return newUser.save();
    })
    .then(function (newUser) {
        files.forEach(function (item) {
            newUser.addFile(item);
        });
    });
Now that's a bit cleaner, but still somewhat messy and confusing. So you can use generator functions instead, like this:
var co = require('co');

co(function* () {
    var files = [];

    var file1 = models.File.build();
    file1.name = "JPEG";
    yield file1.save();
    files.push(file1);

    var file2 = models.File.build();
    file2.name = "PNG";
    yield file2.save();
    files.push(file2);

    var newUser = models.User.build();
    newUser.email = email;
    yield newUser.save(); // this save also needs to be yielded before adding the files

    files.forEach(function (item) {
        newUser.addFile(item);
    });
});
Now that's much better.
Look closely and you'll see what's happening. co accepts a generator function, which is basically a regular function declared with an asterisk *. This special kind of function adds support for yield expressions.
A yield expression basically waits for the promise to resolve before moving on, and if the then() callback would have received an argument, yield returns that value as well.
So you can do something like:
var gif = yield models.File.create({
    name: 'gif'
});
instead of:
models.File.create({
    name: 'gif'
}).then(function (gif) {
});
You do have to install a small Node module called co for this, though: just run npm install --save co.
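As a side note, on newer Node and Sequelize versions you can get the same flat structure without co by using async/await. This is only a minimal sketch under that assumption, reusing the models and email from the question; addFiles is the plural setter Sequelize generates for the hasMany association alongside addFile:
// Sketch only: assumes a Node version with async/await and the models/email from the question.
async function createUserWithFiles() {
    var file1 = await models.File.create({ name: "JPEG" });
    var file2 = await models.File.create({ name: "PNG" });

    var newUser = await models.User.create({ email: email });

    // associate both files in one call; addFiles is generated by the hasMany association
    await newUser.addFiles([file1, file2]);
    return newUser;
}

createUserWithFiles().catch(function (err) {
    console.error(err);
});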

Related

How can I optimize my JavaScript code to handle large log files (over 1 GB)? [duplicate]

I need to do some parsing of large (5-10 GB) log files in JavaScript/Node.js (I'm using Cube).
The logline looks something like:
10:00:43.343423 I'm a friendly log message. There are 5 cats, and 7 dogs. We are in state "SUCCESS".
We need to read each line, do some parsing (e.g. strip out 5, 7 and SUCCESS), then pump this data into Cube (https://github.com/square/cube) using their JS client.
Firstly, what is the canonical way in Node to read in a file, line by line?
It seems to be a fairly common question online:
http://www.quora.com/What-is-the-best-way-to-read-a-file-line-by-line-in-node-js
Read a file one line at a time in node.js?
A lot of the answers seem to point to a bunch of third-party modules:
https://github.com/nickewing/line-reader
https://github.com/jahewson/node-byline
https://github.com/pkrumins/node-lazy
https://github.com/Gagle/Node-BufferedReader
However, this seems like a fairly basic task - surely there's a simple way within the stdlib to read in a text file, line by line?
Secondly, I then need to process each line (e.g. convert the timestamp into a Date object, and extract useful fields).
What's the best way to do this, maximising throughput? Is there some way that won't block on either reading in each line, or on sending it to Cube?
Thirdly - I'm guessing that using string splits and the JS equivalent of contains (indexOf != -1?) will be a lot faster than regexes? Has anybody had much experience parsing massive amounts of text data in Node.js?
I searched for a solution to parse very large files (GBs) line by line using a stream. None of the third-party libraries and examples suited my needs, since they either did not process the files strictly line by line or read the entire file into memory.
The following solution can parse very large files line by line using stream and pipe. For testing I used a 2.1 GB file with 17,000,000 records; RAM usage did not exceed 60 MB.
First, install the event-stream package:
npm install event-stream
Then:
var fs = require('fs')
    , es = require('event-stream');

var lineNr = 0;

var s = fs.createReadStream('very-large-file.csv')
    .pipe(es.split())
    .pipe(es.mapSync(function (line) {
            // pause the readstream
            s.pause();

            lineNr += 1;

            // process line here and call s.resume() when ready
            // function below was for logging memory usage
            logMemoryUsage(lineNr);

            // resume the readstream, possibly from a callback
            s.resume();
        })
        .on('error', function (err) {
            console.log('Error while reading file.', err);
        })
        .on('end', function () {
            console.log('Read entire file.')
        })
    );
Please let me know how it goes!
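If the per-line work is itself asynchronous (a DB insert, an HTTP call to Cube, and so on), the "call s.resume() when ready" comment above translates to something like the following standalone sketch; sendToCube is a hypothetical placeholder for your own async processing, not a real API:
// Sketch only: sendToCube(line, cb) is a hypothetical async function standing in for your own work.
var fs = require('fs'),
    es = require('event-stream');

var s = fs.createReadStream('very-large-file.csv')
    .pipe(es.split())
    .pipe(es.mapSync(function (line) {
        s.pause();                       // stop emitting lines while we work
        sendToCube(line, function (err) {
            if (err) console.error(err);
            s.resume();                  // only ask for the next line once this one is done
        });
    }));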
You can use the built-in readline module (see the docs). I use stream to create a new output stream.
var fs = require('fs'),
    readline = require('readline'),
    stream = require('stream');

var instream = fs.createReadStream('/path/to/file');
var outstream = new stream;
outstream.readable = true;
outstream.writable = true;

var rl = readline.createInterface({
    input: instream,
    output: outstream,
    terminal: false
});

rl.on('line', function (line) {
    console.log(line);
    //Do your stuff ...
    //Then write to output stream
    rl.write(line);
});
Large files will take some time to process. Do tell if it works.
I really liked @gerard's answer, which actually deserves to be the accepted answer here. I made some improvements:
Code is in a class (modular)
Parsing is included
The ability to resume is exposed to the outside, in case an asynchronous job (like inserting into a DB or an HTTP request) is chained to reading the CSV
Reading happens in chunk/batch sizes that the user can declare. I took care of encoding in the stream too, in case you have files in a different encoding.
Here's the code:
'use strict'

const fs = require('fs'),
    util = require('util'),
    stream = require('stream'),
    es = require('event-stream'),
    parse = require("csv-parse"),
    iconv = require('iconv-lite');

class CSVReader {
    constructor(filename, batchSize, columns) {
        this.reader = fs.createReadStream(filename).pipe(iconv.decodeStream('utf8'))
        this.batchSize = batchSize || 1000
        this.lineNumber = 0
        this.data = []
        this.parseOptions = {delimiter: '\t', columns: true, escape: '/', relax: true}
    }

    read(callback) {
        this.reader
            .pipe(es.split())
            .pipe(es.mapSync(line => {
                    ++this.lineNumber

                    parse(line, this.parseOptions, (err, d) => {
                        this.data.push(d[0])
                    })

                    if (this.lineNumber % this.batchSize === 0) {
                        callback(this.data)
                    }
                })
                .on('error', function () {
                    console.log('Error while reading file.')
                })
                .on('end', function () {
                    console.log('Read entire file.')
                }))
    }

    continue () {
        this.data = []
        this.reader.resume()
    }
}

module.exports = CSVReader
So basically, here is how you will use it:
let reader = new CSVReader('path_to_file.csv')
reader.read(() => reader.continue())
I tested this with a 35 GB CSV file and it worked for me, which is why I chose to build on @gerard's answer. Feedback is welcome.
I used https://www.npmjs.com/package/line-by-line for reading more than 1,000,000 lines from a text file. In this case, RAM usage was about 50-60 MB.
const LineByLineReader = require('line-by-line'),
    lr = new LineByLineReader('big_file.txt');

lr.on('error', function (err) {
    // 'err' contains error object
});

lr.on('line', function (line) {
    // pause emitting of lines...
    lr.pause();

    // ...do your asynchronous line processing..
    setTimeout(function () {
        // ...and continue emitting lines.
        lr.resume();
    }, 100);
});

lr.on('end', function () {
    // All lines are read, file is closed now.
});
The Node.js Documentation offers a very elegant example using the Readline module.
Example: Read File Stream Line-by-Line
const { once } = require('node:events');
const fs = require('fs');
const readline = require('readline');

// wrapped in an async function so that 'await' is valid in a CommonJS module
(async function processLineByLine() {
    const rl = readline.createInterface({
        input: fs.createReadStream('sample.txt'),
        crlfDelay: Infinity
    });

    rl.on('line', (line) => {
        console.log(`Line from file: ${line}`);
    });

    await once(rl, 'close');
})();
Note: we use the crlfDelay option to recognize all instances of CR LF ('\r\n') as a single line break.
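On recent Node versions the readline interface is also async-iterable, so the same loop can be written with for await...of; a minimal sketch, assuming a Node version that supports async iteration of readline:
// Sketch only: assumes a Node version where readline.Interface is async-iterable.
const fs = require('fs');
const readline = require('readline');

(async function () {
    const rl = readline.createInterface({
        input: fs.createReadStream('sample.txt'),
        crlfDelay: Infinity
    });

    for await (const line of rl) {
        // each line of sample.txt arrives here, one at a time
        console.log(`Line from file: ${line}`);
    }
})();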
Apart from reading the big file line by line, you can also read it chunk by chunk. For more, refer to this article.
var fs = require('fs');

var offset = 0;
var chunkSize = 2048;
var chunkBuffer = Buffer.alloc(chunkSize); // Buffer.alloc replaces the deprecated new Buffer(size)
var fp = fs.openSync('filepath', 'r');
var bytesRead = 0;
var lines = [];

while ((bytesRead = fs.readSync(fp, chunkBuffer, 0, chunkSize, offset)) !== 0) {
    offset += bytesRead;
    var str = chunkBuffer.slice(0, bytesRead).toString();
    var arr = str.split('\n');

    if (bytesRead === chunkSize) {
        // the last item of arr may not be a full line, leave it to the next chunk
        offset -= arr.pop().length;
    }
    lines.push.apply(lines, arr); // append this chunk's lines to the result
}
console.log(lines);
I had the same problem. After comparing several modules that seemed to have this feature, I decided to do it myself; it's simpler than I thought.
gist: https://gist.github.com/deemstone/8279565
var fetchBlock = lineByline(filepath, onEnd);
fetchBlock(function(lines, start){ ... }); //lines{array} start{int} lines[0] No.
It keeps the opened file in a closure; the returned fetchBlock() fetches a block from the file and splits it into an array of lines (it handles the leftover segment from the last fetch).
I've set the block size to 1024 for each read operation. This may have bugs, but the code logic is straightforward; try it yourself.
Reading/writing files using streams with the native Node.js modules (fs, readline):
const fs = require('fs');
const readline = require('readline');

const rl = readline.createInterface({
    input: fs.createReadStream('input.json'),
    output: fs.createWriteStream('output.json')
});

rl.on('line', function (line) {
    console.log(line);

    // Do any 'line' processing if you want and then write to the output file
    this.output.write(`${line}\n`);
});

rl.on('close', function () {
    console.log(`Created "${this.output.path}"`);
});
Based on this question's answers I implemented a class you can use to read a file synchronously line by line with fs.readSync(). You can make it "pause" and "resume" by using a Q promise (jQuery seems to require a DOM, so you can't run it with Node.js):
var fs = require('fs');
var Q = require('q');

var lr = new LineReader(filenameToLoad);
lr.open();

var promise;
workOnLine = function () {
    var line = lr.readNextLine();
    promise = complexLineTransformation(line).then(
        function () { console.log('ok'); workOnLine(); },
        function () { console.log('error'); }
    );
}
workOnLine();

complexLineTransformation = function (line) {
    var deferred = Q.defer();
    // ... async call goes here, in callback: deferred.resolve('done ok'); or deferred.reject(new Error(error));
    return deferred.promise;
}

function LineReader (filename) {
    this.moreLinesAvailable = true;
    this.fd = undefined;
    this.bufferSize = 1024 * 1024;
    this.buffer = Buffer.alloc(this.bufferSize); // Buffer.alloc replaces the deprecated new Buffer(size)
    this.leftOver = '';
    this.read = undefined;
    this.idxStart = undefined;
    this.idx = undefined;
    this.lineNumber = 0;
    this._bundleOfLines = [];

    this.open = function () {
        this.fd = fs.openSync(filename, 'r');
    };

    this.readNextLine = function () {
        if (this._bundleOfLines.length === 0) {
            this._readNextBundleOfLines();
        }
        this.lineNumber++;
        var lineToReturn = this._bundleOfLines[0];
        this._bundleOfLines.splice(0, 1); // remove first element (pos, howmany)
        return lineToReturn;
    };

    this.getLineNumber = function () {
        return this.lineNumber;
    };

    this._readNextBundleOfLines = function () {
        var line = "";
        while ((this.read = fs.readSync(this.fd, this.buffer, 0, this.bufferSize, null)) !== 0) { // read next bytes until end of file
            this.leftOver += this.buffer.toString('utf8', 0, this.read); // append to leftOver
            this.idxStart = 0;
            while ((this.idx = this.leftOver.indexOf("\n", this.idxStart)) !== -1) { // as long as there is a newline-char in leftOver
                line = this.leftOver.substring(this.idxStart, this.idx);
                this._bundleOfLines.push(line);
                this.idxStart = this.idx + 1;
            }
            this.leftOver = this.leftOver.substring(this.idxStart);
            if (line !== "") {
                break;
            }
        }
    };
}
node-byline uses streams, so I would prefer that one for your huge files.
For your date conversions I would use moment.js.
For maximising your throughput you could think about using a software cluster. There are some nice modules which wrap the native Node cluster module quite well. I like cluster-master from isaacs. For example, you could create a cluster of x workers which all process a file.
For benchmarking splits vs. regexes, use benchmark.js. I haven't tested it so far; benchmark.js is available as a node module.
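A minimal sketch of such a comparison with benchmark.js, using the sample log line from the question; the two parse functions are placeholder assumptions, not the asker's real parsing logic:
// Sketch only: the parse bodies are illustrative, swap in your own parsing.
var Benchmark = require('benchmark');

var line = '10:00:43.343423 I\'m a friendly log message. There are 5 cats, and 7 dogs. We are in state "SUCCESS".';
var suite = new Benchmark.Suite();

suite
    .add('split/indexOf', function () {
        var timestamp = line.split(' ')[0];
        var success = line.indexOf('SUCCESS') !== -1;
    })
    .add('regex', function () {
        var m = /^(\S+).*state "(\w+)"/.exec(line);
    })
    .on('cycle', function (event) {
        console.log(String(event.target));
    })
    .on('complete', function () {
        console.log('Fastest is ' + this.filter('fastest').map('name'));
    })
    .run();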
import * as csv from 'fast-csv';
import * as fs from 'fs';

interface Row {
    [s: string]: string;
}

type RowCallBack = (data: Row, index: number) => object;

export class CSVReader {
    protected file: string;

    protected csvOptions = {
        delimiter: ',',
        headers: true,
        ignoreEmpty: true,
        trim: true
    };

    constructor(file: string, csvOptions = {}) {
        if (!fs.existsSync(file)) {
            throw new Error(`File ${file} not found.`);
        }
        this.file = file;
        this.csvOptions = Object.assign({}, this.csvOptions, csvOptions);
    }

    public read(callback: RowCallBack): Promise<Array<object>> {
        return new Promise<Array<object>>(resolve => {
            const readStream = fs.createReadStream(this.file);
            const results: Array<any> = [];
            let index = 0;

            const csvStream = csv.parse(this.csvOptions).on('data', async (data: Row) => {
                index++;
                results.push(await callback(data, index));
            }).on('error', (err: Error) => {
                console.error(err.message);
                throw err;
            }).on('end', () => {
                resolve(results);
            });

            readStream.pipe(csvStream);
        });
    }
}
import { CSVReader } from '../src/helpers/CSVReader';

(async () => {
    const reader = new CSVReader('./database/migrations/csv/users.csv');
    const users = await reader.read(async data => {
        return {
            username: data.username,
            name: data.name,
            email: data.email,
            cellPhone: data.cell_phone,
            homePhone: data.home_phone,
            roleId: data.role_id,
            description: data.description,
            state: data.state,
        };
    });
    console.log(users);
})();
I have made a node module to read large files asynchronously, as text or JSON.
Tested on large files.
var fs = require('fs')
    , util = require('util')
    , stream = require('stream')
    , es = require('event-stream');

module.exports = FileReader;

function FileReader() {
}

FileReader.prototype.read = function (pathToFile, callback) {
    var returnTxt = '';
    var s = fs.createReadStream(pathToFile)
        .pipe(es.split())
        .pipe(es.mapSync(function (line) {
                // pause the readstream
                s.pause();

                //console.log('reading line: '+line);
                returnTxt += line;

                // resume the readstream, possibly from a callback
                s.resume();
            })
            .on('error', function () {
                console.log('Error while reading file.');
            })
            .on('end', function () {
                console.log('Read entire file.');
                callback(returnTxt);
            })
        );
};

FileReader.prototype.readJSON = function (pathToFile, callback) {
    try {
        this.read(pathToFile, function (txt) { callback(JSON.parse(txt)); });
    }
    catch (err) {
        throw new Error('json file is not valid! ' + err.stack);
    }
};
Just save the file as file-reader.js, and use it like this:
var FileReader = require('./file-reader');
var fileReader = new FileReader();
fileReader.readJSON(__dirname + '/largeFile.json', function(jsonObj){/*callback logic here*/});

How would I assign a JavaScript variable to another JavaScript variable? (a little confusing, I know)

So, I have had a problem recently!
I was trying to assign a value to a variable's value (a little confusing, I know). I was trying to make a library system for my Discord bot, which uses JavaScript (Node.js)!
So, here's a part of the code that I was struggling with:
flist.forEach(item1 => {
    liblist.forEach(item => {
        eval(item1 = require(item));
    });
});
OK, so basically item1 has to be replaced with a library file's name (if a file is named cmdh, I use that as a variable name to assign require(item) to it)
Edit: the item1 is the file name, by the way.
Edit 2: Here's a part of the file util.js:
const Discord = require("discord.js");
const fs = require("fs");

function genEmbed(title, desc, col) {
    return new Discord.MessageEmbed().setTitle(title).setDescription(desc).setColor(col);
};

function log(inp) {
    console.log(`[UTIL] ${inp}`);
};

function loadDef() {
    log("Loading libraries...");
    dirlist = [];
    liblist = [];
    flist = [];
    fs.readdir("./libraries", (err, dirs) => {
        if(err) console.error(err);
        dirs.forEach(dir => {
            dirlist.push("./libraries/" + dir);
        });
        dirlist.forEach(dir => {
            fs.readdir(dir, (err, files) => {
                if(err) console.error(err);
                files.forEach(file => {
                    if(file.includes(".")) {
                        liblist.push(require(dir + "/" + file));
                        filename = file.split(".")[0];
                        flist.push(filename);
                    } else {
                        log(`${file} is a directory, ignoring...`)
                    };
                });
            });
        });
    });
    flist.forEach(item1 => {
        liblist.forEach(item => {
            eval(item1 = require(item));
        });
    });
    log("Libraries loaded!");
};

module.exports = {
    genEmbed,
    loadDef
};
If you are trying to assign a new value for each element in an array and return a new array with the updated values, you should use map instead of forEach.
If you want to return an array with the required libraries from liblist and store the result in flist, your code should look as follows:
flist = liblist.map(item => require(item));
Essentially, map takes each value from an array, applies a function to it (in your case, require), and then returns a new array with the results of the function.
It should be noted that you also need to replace fs.readdir with fs.readdirSync, otherwise liblist will be empty when the code is called.
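Put together, a minimal sketch of that readdirSync change, assuming the directory layout from the question ("./libraries/<dir>/<file>") is kept and that liblist holds the file paths to require:
// Sketch only: synchronous variant of the question's directory scan.
const fs = require("fs");

const dirlist = fs.readdirSync("./libraries").map(dir => "./libraries/" + dir);
const liblist = [];
dirlist.forEach(dir => {
    fs.readdirSync(dir)
        .filter(file => file.includes("."))               // skip nested directories
        .forEach(file => liblist.push(dir + "/" + file)); // collect paths to require
});
const flist = liblist.map(item => require(item));         // now map/require works as described above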
After our discussion on the chat I understood that you want to set the variables as globals in your main file. However, this is a bit hackish. To do that you need to modify the global object.
Note: I would advise you to statically require via const libraryName = require('./libraries/fileName.js') instead, as this way of dynamically defining libraries has no advantage in this case and only makes the code more complicated. Dynamically requiring libraries only makes sense when you, for example, add listeners via your libraries, but then you do not need to store them as globals.
Your full code for the utils file should now look as follows:
const Discord = require("discord.js");
const fs = require("fs");

function genEmbed(title, desc, col) {
    return new Discord.MessageEmbed().setTitle(title).setDescription(desc).setColor(col);
};

function log(inp) {
    console.log(`[UTIL] ${inp}`);
};

function loadDef() {
    log("Loading libraries...");
    libraries = {};
    const dirlist = fs.readdirSync("./libraries").map(dir => "./libraries/" + dir)
    dirlist.forEach(dir => {
        const files = fs.readdirSync(dir)
        files.forEach(file => {
            if(file.includes(".")) {
                filename = file.split(".")[0];
                libraries[filename] = require(dir + "/" + file);
            } else {
                log(`${file} is a directory, ignoring...`)
            };
        });
    });
    log("Libraries loaded!");
    return libraries;
};

module.exports = {
    genEmbed,
    loadDef
};
Now you still need to modify the code in your main file, as follows:
const libraries = loadDef()
global = Object.assign(global, libraries) // assign libraries to global variables

Get subfolders from folder with multiple levels

I have code which checks all the files in the subfolders of a folder. But how can I change it to check not only the first subfolder level, but also the subfolders of the subfolders, and so on?
This is the code I have for the folder and its subfolders:
var fso = new ActiveXObject("Scripting.FileSystemObject");
fso = fso.getFolder(path);
var subfolders = new Object();
subfolders = fso.SubFolders;
var oEnumerator = new Enumerator(subfolders);

for (; !oEnumerator.atEnd(); oEnumerator.moveNext())
{
    var itemsFolder = oEnumerator.item().Files;
    var oEnumerator2 = new Enumerator(itemsFolder);
    var clientFileName = null;

    for (; !oEnumerator2.atEnd(); oEnumerator2.moveNext())
    {
        var item = oEnumerator2.item();
        var itemName = item.Name;
        var checkFile = itemName.substring(itemName.length - 3);
        if (checkFile == ".ac")
        {
            var clientFileName = itemName;
            break;
        }
    }
}
On each level of subfolders I need to check all the files to see if there is a .ac file.
The solution I mentioned in my comment would look something like this (I don't know much about ActiveX, so there are a lot of comments; hopefully you can easily correct any mistakes):
//this is the function that looks for the file inside a folder.
//if it's not there, it looks in every sub-folder by calling itself
function getClientFileName(folder) {
    //get all the files in this folder
    var files = folder.Files;

    //create an enumerator to check all the files
    var enumerator = new Enumerator(files);
    for (; !enumerator.atEnd(); enumerator.moveNext()) {
        //get the file name we're about to check
        var file = enumerator.item().Name;

        //if the file name is too short skip this one
        if (file.length < 3) continue;

        //check if this file's name matches, if it does, return it
        if (file.substring(file.length - 3) == ".ac") return file;
    }

    //if we finished this loop, then the file was not inside the folder
    //so we check all the sub folders
    var subFolders = folder.SubFolders;

    //create an enumerator to check all sub folders
    enumerator = new Enumerator(subFolders);
    for (; !enumerator.atEnd(); enumerator.moveNext()) {
        //get the sub folder we're about to check
        var subFolder = enumerator.item();

        //try to find the file in this sub folder
        var fileName = getClientFileName(subFolder);

        //if it was inside the sub folder, return it right away
        if (fileName != null) return fileName;
    }

    //if we get this far, we did not find the file in this folder
    return null;
}
You would then call this function like so:
var theFileYouAreLookingFor = getClientFileName(theFolderYouWantToStartLookingIn);
Again, beware of the above code: I did not test it, nor do I know much about ActiveX, I merely took your code and changed it so it should look in all the sub folders.
What you want is a recursive function. Here's a simple recursive function that iterates thru each file in the provided path, and then makes a recursive call to iterate thru each of the subfolders' files. For each file encountered, this function invokes a provided callback (which is where you'd do any of your processing logic).
Function:
function iterateFiles(path, recursive, actionPerFileCallback) {
    var fso = new ActiveXObject("Scripting.FileSystemObject");

    //Get current folder
    folderObj = fso.GetFolder(path);

    //Iterate thru files in this folder
    for (var fileEnum = new Enumerator(folderObj.Files); !fileEnum.atEnd(); fileEnum.moveNext()) {
        //Get current file
        var fileObj = fso.GetFile(fileEnum.item());

        //Invoke provided perFile callback and pass the current file object
        actionPerFileCallback(fileObj);
    }

    //Recurse thru subfolders
    if (recursive) {
        //Step into each sub folder
        for (var subFolderEnum = new Enumerator(folderObj.SubFolders); !subFolderEnum.atEnd(); subFolderEnum.moveNext()) {
            var subFolderObj = fso.GetFolder(subFolderEnum.item());

            //Make recursive call
            iterateFiles(subFolderObj.Path, true, actionPerFileCallback);
        }
    }
};
Usage (here I'm passing in an anonymous function that gets called for each file):
iterateFiles(pathToFolder, true, function (fileObj) {
    WScript.Echo("File Name: " + fileObj.Name);
});
Now, that is a pretty basic example. Below is a more complex implementation of a similar function. In this function, we can recursively iterate thru each file as before. However, now the caller may provide a 'calling context' to the function, which is in turn passed back to the callback. This can be powerful, as the caller now has the ability to use previous information from its own closure. Additionally, I give the caller an opportunity to update the calling context at each recursive level. For my specific needs when designing this function, it was necessary to provide the option of checking whether each callback was successful or not, so you'll see checks for that in this function. I also include the option to perform a callback for each folder that is encountered.
More Complex Function:
function iterateFiles(path, recursive, actionPerFileCallback, actionPerFolderCallback, useFnReturnValue, callingContext, updateContextFn) {
    var fso = new ActiveXObject("Scripting.FileSystemObject");

    //If 'useFnReturnValue' is true, then iterateFiles() should return false IFF a callback fails.
    //This function simply tests that case.
    var failOnCallbackResult = function (cbResult) {
        return !cbResult && useFnReturnValue;
    }

    //Context that is passed to each callback
    var context = {};

    //Handle inputs
    if (callingContext != null) {
        context.callingContext = callingContext;
    }

    //Get current folder
    context.folderObj = fso.GetFolder(path);

    //Do actionPerFolder callback if provided
    if (actionPerFolderCallback != null) {
        var cbResult = Boolean(actionPerFolderCallback(context));
        if (failOnCallbackResult(cbResult)) {
            return false;
        }
    }

    //Iterate thru files in this folder
    for (var fileEnum = new Enumerator(context.folderObj.Files); !fileEnum.atEnd(); fileEnum.moveNext()) {
        //Get current file
        context.fileObj = fso.GetFile(fileEnum.item());

        //Invoke provided perFile callback function with current context
        var cbResult = Boolean(actionPerFileCallback(context));
        if (failOnCallbackResult(cbResult)) {
            return false;
        }
    }

    //Recurse thru subfolders
    if (recursive) {
        //Step into sub folder
        for (var subFolderEnum = new Enumerator(context.folderObj.SubFolders); !subFolderEnum.atEnd(); subFolderEnum.moveNext()) {
            var subFolderObj = fso.GetFolder(subFolderEnum.item());

            //New calling context that will be passed into recursive call
            var newCallingContext;

            //Provide caller a chance to update the calling context with the new subfolder before making the recursive call
            if (updateContextFn != null) {
                newCallingContext = updateContextFn(subFolderObj, callingContext);
            }

            //Make recursive call
            var cbResult = iterateFiles(subFolderObj.Path, true, actionPerFileCallback, actionPerFolderCallback, useFnReturnValue, newCallingContext, updateContextFn);
            if (failOnCallbackResult(cbResult)) {
                return false;
            }
        }
    }

    return true; //if we made it here, then all callbacks were successful
};
Usage:
//Note: The 'lib' object used below is just a utility library I'm using.
function copyFolder(fromPath, toPath, overwrite, recursive) {
    var actionPerFileCallback = function (context) {
        var destinationFolder = context.callingContext.toPath;
        var destinationPath = lib.buildPath([context.callingContext.toPath, context.fileObj.Name]);
        lib.createFolderIfDoesNotExist(destinationFolder);
        return copyFile(context.fileObj.Path, destinationPath, context.callingContext.overwrite);
    };

    var actionPerFolderCallback = function (context) {
        var destinationFolder = context.callingContext.toPath;
        return lib.createFolderIfDoesNotExist(destinationFolder);
    };

    var callingContext = {
        fromPath : fromPath,
        toPath : lib.buildPath([toPath, fso.GetFolder(fromPath).Name]), //include folder in copy
        overwrite : overwrite,
        recursive : recursive
    };

    var updateContextFn = function (currentFolderObj, previousCallingContext) {
        return {
            fromPath : currentFolderObj.Path,
            toPath : lib.buildPath([previousCallingContext.toPath, currentFolderObj.Name]),
            overwrite : previousCallingContext.overwrite,
            recursive : previousCallingContext.recursive
        }
    }

    return iterateFiles(fromPath, recursive, actionPerFileCallback, null, true, callingContext, updateContextFn);
};
I know this question is old but I stumbled upon it and hopefully my answer will help someone!

Test context missing in before and after test hook in nightwatch js globals

I have multiple Nightwatch tests with setup and teardown in every single test. I am trying to unify them into before/after hooks in globalModule.js (the path is set in globals_path in nightwatch.json).
//globalModule.js
before: function (test, callback) {
    // do something with test object
}

//sampletest.js
before: function (test) {
    ..
},
'testing': function (test) {
    ....
}
My problem is that the test context is not available in globalModule.js. How do I get it there? Can someone let me know?
The test context is not available right now. As beatfactor said, it will be available soon.
While it's not available, you can try to use the local before of the first file, but that's a hack.
Alternatively, you can merge all your test files into one object and export that to Nightwatch; then the local before hooks can still run at the right time.
For example:
var tests = {};
var befores = [];
var fs = require('fs');
var requireDir = require('require-dir');

var dirs = fs.readdirSync('build');

//if you have dirs that should be excluded
var usefull = dirs.filter(function (item) {
    return !(item == 'data')
});

usefull.forEach(function (item) {
    var dirObj = requireDir('../build/' + item);
    for (key in dirObj) {
        if (dirObj.hasOwnProperty(key))
            for (testMethod in dirObj[key])
                if (dirObj[key].hasOwnProperty(testMethod))
                    if (testMethod == 'before')
                        befores.push(dirObj[key][testMethod]);
                    else
                        tests[testMethod] = dirObj[key][testMethod];
    }
});

tests.before = function (browser) {
    //some global before actions here
    //...
    befores.forEach(function (item) {
        item.call(tests, browser);
    });
};

module.exports = tests;
For more information https://github.com/beatfactor/nightwatch/issues/388

Chrome Extension with Database API interface

I want to update a div with a list of anchors that I generate from a local database in Chrome. It's pretty simple stuff, but as soon as I try to add the data to the main.js file via a callback, everything suddenly becomes undefined, or the array length is set to 0 (when it's really 18).
Initially, I tried to store it in a new array and pass it back that way.
Is there a setting that I need to specify in the Chrome manifest.json in order to allow communication with the database API? I've checked, but all I've been able to find was 'unlimited storage'.
The code is as follows:
window.main = {};
window.main.classes = {};

(function (awe) {
    awe.Data = function (opts) {
        opts = opts || new Object();
        return this.init(opts);
    };

    awe.Data.prototype = {
        init: function (opts) {
            var self = this;
            self.modified = true;
            var db = self.db = openDatabase("buddy", "1.0", "LocalDatabase", 200000);
            db.transaction(function (tx) {
                tx.executeSql("CREATE TABLE IF NOT EXISTS listing ( name TEXT UNIQUE, url TEXT UNIQUE)", [], function (tx, rs) {
                    $.each(window.rr, function (index, item) {
                        var i = "INSERT INTO listing (name,url)VALUES('" + item.name + "','" + item.url + "')";
                        tx.executeSql(i, [], null, null);
                    });
                }, function (tx, error) {
                });
            });
            self._load()
            return this;
        },
        add: function (item) {
            var self = this;
            self.modified = true;
            self.db.transaction(function (tx) {
                tx.executeSql("INSERT INTO listing (name,url)VALUES(?,?)", [item.name, item.url], function (tx, rs) {
                    //console.log('success',tx,rs)
                }, function (tx, error) {
                    //console.log('error',error)
                })
            });
            self._load()
        },
        remove: function (item) {
            var self = this;
            self.modified = true;
            self.db.transaction(function (tx) {
                tx.executeSql("DELETE FROM listing where name='" + item.name + "'", [], function (tx, rs) {
                    //console.log('success',tx,rs)
                }, function (tx, error) {
                    //console.log('error',tx,error);
                });
            });
            self._load()
        },
        _load: function (callback) {
            var self = this;
            if (!self.modified)
                return;
            self.data = new Array();
            self.db.transaction(function (tx) {
                tx.executeSql('SELECT name,url FROM listing', [], function (tx, rs) {
                    console.log(callback)
                    for (var i = 0; i < rs.rows.length; i++) {
                        callback(rs.rows.item(i).name, rs.rows.item(i).url)
                        // var row = rs.rows.item(i)
                        // var n = new Object()
                        // n['name'] = row['name'];
                        // n['url'] = row['url'];
                    }
                }, function (tx, error) {
                    //console.log('error',tx,error)
                })
            })
            self.modified = false
        },
        all: function (cb) {
            this._load(cb)
        },
        toString: function () {
            return 'main.Database'
        }
    }
})(window.main.classes);
And the code to update the list:
this.database.all(function (name, url) {
    console.log('name', 'url')
    console.log(name, url)
    var data = []
    $.each(data, function (index, item) {
        try {
            var node = $('<div > ' + item.name + '</div>');
            self.content.append(node);
            node.unbind();
            node.bind('click', function (evt) {
                var t = $(evt.target).attr('href');
                chrome.tabs.create({
                    "url": t
                }, function (evt) {
                    self._tab_index = evt.index
                });
            });
        } catch (e) {
            console.log(e)
        }
    })
});
From looking at your code above, I notice you are executing "self._load()" at the end of each function in your API. The HTML5 SQL Database is asynchronous, so you can never guarantee when the result arrives. In this case, I would assume the result will always be 0 or random, because it is a race condition.
I have done something similar in my fb-exporter extension, feel free to see how I have done it: https://github.com/mohamedmansour/fb-exporter/blob/master/js/database.js
To debug a problem like this, did you check the Web Inspector to see if any errors occur in the background page? I assume this is all in a background page, eh? Try to see if any error occurs; if not, I believe you're encountering a race condition. Just move the load within the callback and it should properly call the load.
Regarding your first question about the unlimited storage manifest attribute, you don't need it in this case; that shouldn't be the issue. The limit for web databases is 5 MB (last I recall, it might have changed); if you're manipulating a lot of data, then you use that attribute.
Just make sure you can guarantee that this.database.all runs after the database has been initialized.
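A minimal sketch of that suggestion, as a drop-in replacement for the add method in awe.Data.prototype above: instead of calling self._load() right after starting the transaction, trigger the reload from the INSERT's success callback (the extra done parameter is a hypothetical addition, not part of the original code):
// Sketch only: reload from inside the success callback so it runs after the write has finished.
add: function (item, done) {           // 'done' is a hypothetical callback parameter
    var self = this;
    self.modified = true;
    self.db.transaction(function (tx) {
        tx.executeSql("INSERT INTO listing (name,url) VALUES (?,?)", [item.name, item.url],
            function (tx, rs) {
                // the insert has completed here, so it is now safe to reload
                self._load(done);
            },
            function (tx, error) {
                console.log('error', error);
            });
    });
},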
