Why is my required external file being ignored in React Native? - javascript

I have two React Native 'scenes', one of which launches puzzles in the other. In launch.js, I retrieve data from a separate data file as follows:
import fileData from './data.js';
//var fileData = require('./data.js'); <= I've had it this way, too; data.js is a module.exports of an array-type object
I pass the data to game.js as follows:
onSelect(passed) {
var inFileData = fileData; //*** this is just a debugging abstraction so I can see what the values are
this.props.navigator.replace({
id: 'game board',
passProps: {
title: passed,
theData: fileData,
},
});
}
The data is acquired in game.js,
constructor(props) {
super(props);
this.state = {
id: 'game board',
title: this.props.title,
theData: this.props.theData, //<= here
};
}
My problem is that when I do 'stuff' in the game, such as changing the value of a word with
var data = this.props.theData;
data[index].word = "test";
this.setState({theData: data});
the changes persist when I go back to launch.js (and then back into game.js). Even the values in the debugging variable inFileData (the *** comment above) reflect what was changed in the game instead of what's in the data.js file. Also, in game.js I'm unable to hold on to any sort of copy of the imported data (even with kludgy for-loops that try to duplicate the object) so that I can reset values. All of this leads me to believe I'm missing some key underpinning of React's state model. Can anyone shed some light on where I'm going wrong?

I finally resolved this issue (after spending two days on it!) with this cloning function, using its deepCopy call. The function is described in this excellent article, which I came across via the ancient and much-upvoted Stack Overflow question, How do I correctly clone a JavaScript object?
I included the entire "owl" function as deepCopy.js in my React Native project as
var owl = require('./deepCopy.js');
after appending
module.exports = owl;
to the deepCopy.js file, then used it with
var copy = owl.deepCopy(fileData);
and
this.props.navigator.replace({
id: 'game board',
passProps: {
title: passed,
theData: copy,
},
});
Note that because this isn't a web environment, I had to comment out the "HTML DOM Node" section of the owl code, since it references document (and window), which React Native doesn't provide.
I guess the issue here was the depth of the array being cloned, with references at deeper levels still being retained. It seems like a more native solution must exist, but at least this works. I'll wait to see if anyone has further input before marking this as answered.
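For what it's worth, the underlying cause turned out to be plain JavaScript reference semantics rather than anything React-specific: a module's exported array is created once and cached, and every require/import hands back a reference to that same object, so mutating theData in the game mutates the "file" data for every scene. If data.js only ever contains plain, JSON-serializable values (no functions, Dates, or cyclic references), a much smaller clone would likely do the job, e.g.
var copy = JSON.parse(JSON.stringify(fileData)); // fresh deep copy, no shared references
The owl deepCopy is more general (it handles Dates, prototype chains, and cyclic references), but for simple puzzle data the JSON round trip is usually enough.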
Edit: I'm going to include the code here in case the article gets taken down:
/* This file is part of OWL JavaScript Utilities.
OWL JavaScript Utilities is free software: you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation, either version 3 of
the License, or (at your option) any later version.
OWL JavaScript Utilities is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with OWL JavaScript Utilities. If not, see
<http://www.gnu.org/licenses/>.
*/
var owl = (function() {
// the re-usable constructor function used by clone().
function Clone() {}
// clone objects, skip other types.
function clone(target) {
if ( typeof target == 'object' ) {
Clone.prototype = target;
return new Clone();
} else {
return target;
}
}
// Shallow Copy
function copy(target) {
if (typeof target !== 'object' ) {
return target; // non-objects have value semantics, so target is already a copy.
} else {
var value = target.valueOf();
if (target != value) {
// the object is a standard object wrapper for a native type, say String.
// we can make a copy by instantiating a new object around the value.
return new target.constructor(value);
} else {
// ok, we have a normal object. If possible, we'll clone the original's prototype
// (not the original) to get an empty object with the same prototype chain as
// the original, and then just copy the instance properties. Otherwise, we have to
// copy the whole thing, property-by-property.
if ( target instanceof target.constructor && target.constructor !== Object ) {
var c = clone(target.constructor.prototype);
// give the copy all the instance properties of target. It has the same
// prototype as target, so inherited properties are already there.
for ( var property in target) {
if (target.hasOwnProperty(property)) {
c[property] = target[property];
}
}
} else {
var c = {};
for ( var property in target ) c[property] = target[property];
}
return c;
}
}
}
// Deep Copy
var deepCopiers = [];
function DeepCopier(config) {
for ( var key in config ) this[key] = config[key];
}
DeepCopier.prototype = {
constructor: DeepCopier,
// determines if this DeepCopier can handle the given object.
canCopy: function(source) { return false; },
// starts the deep copying process by creating the copy object. You
// can initialize any properties you want, but you can't call recursively
// into the DeepCopyAlgorithm.
create: function(source) { },
// Completes the deep copy of the source object by populating any properties
// that need to be recursively deep copied. You can do this by using the
// provided deepCopyAlgorithm instance's deepCopy() method. This will handle
// cyclic references for objects already deepCopied, including the source object
// itself. The "result" passed in is the object returned from create().
populate: function(deepCopyAlgorithm, source, result) {}
};
function DeepCopyAlgorithm() {
// copiedObjects keeps track of objects already copied by this
// deepCopy operation, so we can correctly handle cyclic references.
this.copiedObjects = [];
var thisPass = this;
this.recursiveDeepCopy = function(source) {
return thisPass.deepCopy(source);
}
this.depth = 0;
}
DeepCopyAlgorithm.prototype = {
constructor: DeepCopyAlgorithm,
maxDepth: 256,
// add an object to the cache. No attempt is made to filter duplicates;
// we always check getCachedResult() before calling it.
cacheResult: function(source, result) {
this.copiedObjects.push([source, result]);
},
// Returns the cached copy of a given object, or undefined if it's an
// object we haven't seen before.
getCachedResult: function(source) {
var copiedObjects = this.copiedObjects;
var length = copiedObjects.length;
for ( var i=0; i<length; i++ ) {
if ( copiedObjects[i][0] === source ) {
return copiedObjects[i][1];
}
}
return undefined;
},
// deepCopy handles the simple cases itself: non-objects and objects we've seen before.
// For complex cases, it first identifies an appropriate DeepCopier, then calls
// applyDeepCopier() to delegate the details of copying the object to that DeepCopier.
deepCopy: function(source) {
// null is a special case: it's the only value of type 'object' without properties.
if ( source === null ) return null;
// All non-objects use value semantics and don't need explicit copying.
if ( typeof source !== 'object' ) return source;
var cachedResult = this.getCachedResult(source);
// we've already seen this object during this deep copy operation
// so can immediately return the result. This preserves the cyclic
// reference structure and protects us from infinite recursion.
if ( cachedResult ) return cachedResult;
// objects may need special handling depending on their class. There is
// a class of handlers called "DeepCopiers" that know how to copy certain
// objects. There is also a final, generic deep copier that can handle any object.
for ( var i=0; i<deepCopiers.length; i++ ) {
var deepCopier = deepCopiers[i];
if ( deepCopier.canCopy(source) ) {
return this.applyDeepCopier(deepCopier, source);
}
}
// the generic copier can handle anything, so we should never reach this line.
throw new Error("no DeepCopier is able to copy " + source);
},
// once we've identified which DeepCopier to use, we need to call it in a very
// particular order: create, cache, populate. This is the key to detecting cycles.
// We also keep track of recursion depth when calling the potentially recursive
// populate(): this is a fail-fast to prevent an infinite loop from consuming all
// available memory and crashing or slowing down the browser.
applyDeepCopier: function(deepCopier, source) {
// Start by creating a stub object that represents the copy.
var result = deepCopier.create(source);
// we now know the deep copy of source should always be result, so if we encounter
// source again during this deep copy we can immediately use result instead of
// descending into it recursively.
this.cacheResult(source, result);
// only DeepCopier::populate() can recursively deep copy. So, to keep track
// of recursion depth, we increment this shared counter before calling it,
// and decrement it afterwards.
this.depth++;
if ( this.depth > this.maxDepth ) {
throw new Error("Exceeded max recursion depth in deep copy.");
}
// It's now safe to let the deepCopier recursively deep copy its properties.
deepCopier.populate(this.recursiveDeepCopy, source, result);
this.depth--;
return result;
}
};
// entry point for deep copy.
// source is the object to be deep copied.
// maxDepth is an optional recursion limit. Defaults to 256.
function deepCopy(source, maxDepth) {
var deepCopyAlgorithm = new DeepCopyAlgorithm();
if ( maxDepth ) deepCopyAlgorithm.maxDepth = maxDepth;
return deepCopyAlgorithm.deepCopy(source);
}
// publicly expose the DeepCopier class.
deepCopy.DeepCopier = DeepCopier;
// publicly expose the list of deepCopiers.
deepCopy.deepCopiers = deepCopiers;
// make deepCopy() extensible by allowing others to
// register their own custom DeepCopiers.
deepCopy.register = function(deepCopier) {
if ( !(deepCopier instanceof DeepCopier) ) {
deepCopier = new DeepCopier(deepCopier);
}
deepCopiers.unshift(deepCopier);
}
// Generic Object copier
// the ultimate fallback DeepCopier, which tries to handle the generic case. This
// should work for base Objects and many user-defined classes.
deepCopy.register({
canCopy: function(source) { return true; },
create: function(source) {
if ( source instanceof source.constructor ) {
return clone(source.constructor.prototype);
} else {
return {};
}
},
populate: function(deepCopy, source, result) {
for ( var key in source ) {
if ( source.hasOwnProperty(key) ) {
result[key] = deepCopy(source[key]);
}
}
return result;
}
});
// Array copier
deepCopy.register({
canCopy: function(source) {
return ( source instanceof Array );
},
create: function(source) {
return new source.constructor();
},
populate: function(deepCopy, source, result) {
for ( var i=0; i<source.length; i++) {
result.push( deepCopy(source[i]) );
}
return result;
}
});
// Date copier
deepCopy.register({
canCopy: function(source) {
return ( source instanceof Date );
},
create: function(source) {
return new Date(source);
}
});
// HTML DOM Node
// utility function to detect Nodes. In particular, we're looking
// for the cloneNode method. The global document is also defined to
// be a Node, but is a special case in many ways.
function isNode(source) {
if ( window.Node ) {
return source instanceof Node;
} else {
// the document is a special Node and doesn't have many of
// the common properties so we use an identity check instead.
if ( source === document ) return true;
return (
typeof source.nodeType === 'number' &&
source.attributes &&
source.childNodes &&
source.cloneNode
);
}
}
// Node copier
deepCopy.register({
canCopy: function(source) { return isNode(source); },
create: function(source) {
// there can only be one (document).
if ( source === document ) return document;
// start with a shallow copy. We'll handle the deep copy of
// its children ourselves.
return source.cloneNode(false);
},
populate: function(deepCopy, source, result) {
// we're not copying the global document, so don't have to populate it either.
if ( source === document ) return document;
// if this Node has children, deep copy them one-by-one.
if ( source.childNodes && source.childNodes.length ) {
for ( var i=0; i<source.childNodes.length; i++ ) {
var childCopy = deepCopy(source.childNodes[i]);
result.appendChild(childCopy);
}
}
}
});
return {
DeepCopyAlgorithm: DeepCopyAlgorithm,
copy: copy,
clone: clone,
deepCopy: deepCopy
};
})();
module.exports = owl;

Related

Check if array of objects has changed

I have an array that could contain objects. Objects can either be added to it or have a property modified. I want to check if the array has changed at all (it could be that elements were added, or just that one object had a key changed), and then update the DB based on the potential change.
Just wanna know if what I have will cover all cases and/or if there is a better way to do it.
const origArrayCopy = JSON.stringify(origArray);
someFnThatPotentiallyChanges(origArray);
if (origArrayCopy !== JSON.stringify(origArray)) {
updateDB(origArray);
} else {
console.log('NO DIFF');
}
And here's a jsFiddle I created to test around with https://jsfiddle.net/j4eqwmp6/
Converting the object to a string using stringify should account for deep-nested changes, right? Any insights on this implementation and is there now a more appropriate way to do it?
Using JSON.stringify is certainly a possibility.
An alternative is to wrap the object (array) in a proxy, and to do that for every nested object as well. Then trap all actions that mutate those objects.
Here is how that could look:
function monitor(obj, cb) {
if (Object(obj) !== obj) return obj;
for (let key of Object.keys(obj)) {
obj[key] = monitor(obj[key], cb);
}
return new Proxy(obj, {
defineProperty(...args) {
cb();
return Reflect.defineProperty(...args);
},
deleteProperty(...args) {
cb();
return Reflect.deleteProperty(...args);
},
set(...args) {
cb();
return Reflect.set(...args);
}
});
};
// Example array
let origArray = [{x: 1}, { child: { y: 1} }];
// Activate the proxy:
let dirty = false;
origArray = monitor(origArray, () => dirty = true);
// Perform a mutation
origArray[1].child.y++;
console.log(dirty); // true
console.log(origArray);
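One caveat with monitor() as written: objects assigned after it has run are not themselves wrapped, so later mutations inside those new objects go undetected. A variant that also wraps values as they are assigned (a sketch along the same lines, not exhaustively tested) could look like this:
function monitorDeep(obj, cb) {
  if (Object(obj) !== obj) return obj;
  for (let key of Object.keys(obj)) {
    obj[key] = monitorDeep(obj[key], cb);
  }
  return new Proxy(obj, {
    defineProperty(...args) {
      cb();
      return Reflect.defineProperty(...args);
    },
    deleteProperty(...args) {
      cb();
      return Reflect.deleteProperty(...args);
    },
    set(target, prop, value, receiver) {
      cb();
      // wrap newly assigned objects too, so their future mutations are also reported
      return Reflect.set(target, prop, monitorDeep(value, cb), receiver);
    }
  });
}
With that change, origArray.push({ z: 1 }) followed by origArray[2].z++ would set the dirty flag on both operations, not just the push.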

How to add getters to all objects

How can I add getters (or a prototype/method) to all objects?
I have an object that looks like:
foo.bar.text
//or
foo.bar.child.text
text is an array of strings, but I need only one of them.
Each time I get this value, I want to get only one fixed item, at an index that is saved in another variable.
So what I need as a result:
foo.text = ['a','b']
foo.bar.text = ['c','d']
foo.bar.someChild.text = [null, undefined]
x = 1;
// here we make some magic
console.log(foo.text) // b
console.log(foo.bar.text) // d
console.log(foo.bar.someChild.text) // undefined
So if any object contains an array named text, then when we try to get it, we should get not the array but one specific item from it.
I can't point at the item manually, so foo.bar.text[x] is not an option.
The names of the array and of the property we read don't have to match; for example, we could save the array in fullText and read text, as if text = fullText[x].
Can somebody advise how I can implement this: a getter, setter, or prototype?
Update
Proxy seems to be my option, thanks for the advice!
I would suggest you apply Proxy recursively to the foo object.
// handwritten, untested sketch
var handler = {
  get: (target, prop) => {
    const value = target[prop];
    if (prop === 'text') {
      if (value instanceof Array) {
        return value[x];
      }
      // non-array "text" value: return it unchanged
      return value;
    } else if (value !== null && typeof value === 'object') {
      // wrap nested objects so reads like foo.bar.text also go through this handler
      return new Proxy(value, handler);
    } else {
      return value;
    }
  }
}
var newFoo = new Proxy(foo, handler);
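With the data from the question, usage would then look roughly like this (x is the index variable mentioned in the question):
var x = 1;
var foo = {
  text: ['a', 'b'],
  bar: {
    text: ['c', 'd'],
    someChild: { text: [null, undefined] }
  }
};
var newFoo = new Proxy(foo, handler);
console.log(newFoo.text);               // 'b'
console.log(newFoo.bar.text);           // 'd'
console.log(newFoo.bar.someChild.text); // undefined
Note that nested reads like newFoo.bar return a fresh Proxy on every access; if that matters for performance or identity checks, the wrapped proxies could be cached.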

Defining an indexer for an object

One can make an object iterable by implementing [Symbol.iterator].
But how can one override the behavior of the [] operator?
For example, I have an object which has an array inside of it, and I want to be able to access that given an index, like obj[3].
Is that possible?
example
const SignalArray = (data = []) => {
...
return {
add,
remove,
onAdd,
onRemove,
onSort,
removeOnAdd,
removeOnRemove,
removeOnSort,
[Symbol.iterator]() {
return {
next: () => {
if (index < data.length) {
return { value: data[index++], done: false };
} else {
index = 0;
return { done: true };
}
}
}
}
}
}
how can one override the behavior of the [] operator?
Only via Proxy, added in ES2015. You'd provide a get trap and handle the property keys you want to handle.
Here's an example where we check for property names that can be successfully coerced to numbers and return the number * 2:
const o = new Proxy({}, {
get(target, prop, receiver) {
const v = +prop;
if (!isNaN(v)) {
return v * 2;
}
return Reflect.get(...arguments);
}
});
o.x = "ex";
console.log(o[2]); // 4
console.log(o[7]); // 14
console.log(o.x); // "ex"
If you want to override setting an array element, you'd use a set trap. There are several other traps available as well. For instance, in a reply to a comment, you said:
...if you hate es6 classes and want to write a wrapper around an array that gives it extra functionality, like an observable array for example...
...that would probably involve a set trap and overriding various mutator methods.
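A rough sketch of that idea (not a complete observable array implementation): wrap the array in a Proxy whose set trap reports every change. Mutator methods such as push and splice are caught automatically, because they assign indices and length through the same trap:
function observableArray(arr, onChange) {
  return new Proxy(arr, {
    set(target, prop, value, receiver) {
      const ok = Reflect.set(target, prop, value, receiver);
      onChange(prop, value); // fires for index writes and for 'length' updates
      return ok;
    }
  });
}

const a = observableArray([], (prop, value) => console.log('set', prop, value));
a.push(10); // logs: set 0 10, then set length 1
a[0] = 20;  // logs: set 0 20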

For inserting the first element of a binary tree, do you put it on the left or right?

So I'm trying to create a binary tree in JavaScript
function ToBinaryTree ( arr )
{
// creates a binary tree from an array arr of comparable objects
this.Tree = { left: undefined, right: undefined };
this.CreateNode = function ( value )
{
return { val : undefined, left : undefined, right : undefined }
};
this.Insert = function (elem)
{
var node = this.CreateNode(elem);
if ( this.Tree.left == undefined )
// ... ??
};
// insert elements from array provided in "constructor"
arr.forEach(function(x){
this.Insert(x);
}.bind(this));
this.Contains = function (elem)
{
// ...
};
return this;
}
and I can't figure out whether the very first element inserted should go on the left or the right. And if it is, say, the left (this.Tree.left), do I check that no elements have been inserted yet by checking this.Tree.left == undefined?
The first element goes on the root:
this.CreateNode = function (value) {
return {
val: value, // Store the root value here instead of undefined
left: undefined,
right: undefined
};
};
p.s. Just realized this bug is not the only problem; you also have a "Tree" constructor(?). Don't try to distinguish Nodes from the Tree as a whole. In most cases it's simpler to assume the tree is the same as its root node and to have an additional "outside" static insert function that can deal with a "null" or undefined tree parameter (returning a new tree as needed). There's a sketch of this below.
p.p.s. If you use undefined as an empty tree marker, you can just overwrite the value on insert.
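For example, a minimal sketch of that suggestion (plain object nodes, values assumed comparable with <): the tree is just its root node, and insert takes a possibly-undefined tree and returns the new root:
function createNode(value) {
  return { val: value, left: undefined, right: undefined };
}

// Returns the root after insertion; an undefined tree simply becomes a new node.
function insert(root, value) {
  if (root === undefined) return createNode(value);
  if (value < root.val) {
    root.left = insert(root.left, value);
  } else {
    root.right = insert(root.right, value);
  }
  return root;
}

// Usage: build a tree from an array.
var tree; // undefined means "empty tree"
[5, 3, 8, 1].forEach(function (x) { tree = insert(tree, x); });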

How to create a memoize function

I am stumped with this memoize problem. I need to create a function that will check to see if a value has already been calculated for a given argument, return the previous result, or run the calculation and return that value.
I have spent hours on this and, while I am new to JS, I cannot get my head around how to do this. I cannot use any built-in functions and would really like to understand what I need to do.
Here is what I have so far, which is so wrong it feels like pseudo-code at this point. I have searched existing memoize questions out here but I cannot seem to make any solution work yet. Any help is much appreciated.
myMemoizeFunc = function(passedFunc) {
var firstRun = passedFunc;
function check(passedFunc){
if(firstRun === undefined){
return passedFunc;
}else{return firstRun;}
}
};
Sorry, I should have been more clear. Here are my specific requirements:
myMemoizeFunc must return a function that will check if the calculation has already been calculated for the given arg and return that val if possible. The passedFunc is a function that holds the result of a calculation.
I understand this may seem like a duplicate, but I am marking as not so, as I am having some serious difficulty understanding what I should do here, and need further help than is given in other posts.
This is what my thought process is bringing me towards but again, I am way off.
myMemoizeFunc = function(passedFunc) {
var allValues = [];
return function(){
for(var i = 0; i < myValues.length; i++){
if(myValues[i] === passedFunc){
return i;
}
else{
myValues.push(passedFunc);
return passedFunc;
}
}
}
};
I should not be returning i or passedFunc here, but what else could I do within the if/else while checking for a value? I have been looking at this problem for so long, I am starting to implement code that is ridiculous and need some fresh advice.
I think the main trick for this is to make an object that stores arguments that have been passed in before as keys with the result of the function as the value.
For memoizing functions of a single argument, I would implement it like so:
var myMemoizeFunc = function (passedFunc) {
var cache = {};
return function (x) {
if (x in cache) return cache[x];
return cache[x] = passedFunc(x);
};
};
Then you could use this to memoize any function that takes a single argument, say for example, a recursive function for calculating factorials:
var factorial = myMemoizeFunc(function(n) {
if(n < 2) return 1;
return n * factorial(n-1);
});
Consider this an extension of the answer by Peter Olson.
For a variable number of arguments you could use something like this.
Note: This example is not optimal if you intend to pass complex arguments (arrays, objects, functions). Be sure to read further and not copy/paste blindly.
function memo(fn) {
const cache = {};
function get(args) {
let node = cache;
for (const arg of args) {
if (!("next" in node)) node.next = new Map();
if (!node.next.has(arg)) node.next.set(arg, {});
node = node.next.get(arg);
}
return node;
}
return function (...args) {
const cache = get(args);
if ("item" in cache) return cache.item;
cache.item = fn(...args);
return cache.item;
}
}
This builds the following cache tree structure:
const memoizedFn = memo(fn);
memoizedFn();
memoizedFn(1);
memoizedFn(1, 2);
memoizedFn(2, 1);
cache = {
item: fn(),
next: Map{ // <- Map contents depicted as object
1: {
item: fn(1),
next: Map{
2: { item: fn(1, 2) }
}
},
2: {
next: Map{
1: { item: fn(2, 1) }
}
}
}
}
This solution leaks memory when passing complex arguments (arrays, objects, functions) that are no longer referenced afterwards.
memoizedFn({ a: 1 })
Because { a: 1 } is not referenced after the memoizedFn call, it would normally be garbage collected. However, now it can't be, because cache still holds a reference to it. It can only be garbage collected once memoizedFn itself is garbage collected.
I showed the above first because it demonstrates the base concept and the cache structure in a fairly simple form. To let cache entries be cleaned up when their keys would normally be garbage collected, we should use a WeakMap instead of a Map for complex objects.
For those unfamiliar with WeakMap: its keys are held by "weak" reference, meaning they do not count as active references to an object. Once an object is no longer referenced (not counting weak references) it will be garbage collected, which in turn removes the key/value pair from the WeakMap instance.
const memo = (function () {
const primitives = new Set([
"undefined",
"boolean",
"number",
"bigint",
"string",
"symbol"
]);
function typeOf(item) {
const type = typeof item;
if (primitives.has(type)) return "primitive";
return item === null ? "primitive" : "complex";
}
const map = {
"primitive": Map,
"complex": WeakMap
};
return function (fn) {
const cache = {};
function get(args) {
let node = cache;
for (const arg of args) {
const type = typeOf(arg);
if (!(type in node)) node[type] = new map[type];
if (!node[type].has(arg)) node[type].set(arg, {});
node = node[type].get(arg);
}
return node;
}
return function (...args) {
const cache = get(args);
if ("item" in cache) return cache.item;
cache.item = fn(...args);
return cache.item;
}
}
})();
const fib = memo((n) => {
console.log("fib called with", n);
if (n == 0) return 0;
if (n == 1) return 1;
return fib(n - 1) + fib(n - 2);
});
// heavy operation with complex object
const heavyFn = memo((obj) => {
console.log("heavyFn called with", obj);
// heavy operation
return obj.value * 2;
});
// multiple complex arguments
const map = memo((iterable, mapFn) => {
console.log("map called with", iterable, mapFn);
const result = [];
for (const item of iterable) result.push(mapFn(item));
return result;
});
console.log("### simple argument demonstration ###");
console.log("fib(3)", "//=>", fib(3));
console.log("fib(6)", "//=>", fib(6));
console.log("fib(5)", "//=>", fib(5));
console.log("### exlanation of when cache is garbage collected ###");
(function () {
const item = { value: 7 };
// item stays in the memo cache until it is garbage collected
console.log("heavyFn(item)", "//=>", heavyFn(item));
console.log("heavyFn(item)", "//=>", heavyFn(item));
// Does not use the cached item. Although the object has the same contents,
// it is a different instance, so it is not considered the same.
console.log("heavyFn({ value: 7 })", "//=>", heavyFn({ value: 7 }));
// { value: 7 } is garbage collected (and removed from the memo cache)
})();
// item is garbage collected (and removed from the memo cache) once it is no longer in scope
console.log("### multiple complex arguments demonstration ###");
console.log("map([1], n => n * 2)", "//=>", map([1], n => n * 2));
// Does not use cache. Although the array and function have the same contents,
// they are new instances, so they are not considered the same.
console.log("map([1], n => n * 2)", "//=>", map([1], n => n * 2));
const ns = [1, 2];
const double = n => n * 2;
console.log("map(ns, double)", "//=>", map(ns, double));
// Does use cache, same instances are passed.
console.log("map(ns, double)", "//=>", map(ns, double));
// Does use cache, same instances are passed.
ns.push(3);
console.log("mutated ns", ns);
console.log("map(ns, double)", "//=>", map(ns, double));
The structure stays essentially the same, but depending on the type of the argument it will look in either the primitive: Map{} or complex: WeakMap{} object.
const memoizedFn = memo(fn);
memoizedFn();
memoizedFn(1);
memoizedFn(1, 2);
memoizedFn({ value: 2 }, 1);
cache = {
item: fn(),
primitive: Map{
1: {
item: fn(1),
primitive: Map{
2: { item: fn(1, 2) }
}
}
},
complex: WeakMap{
{ value: 2 }: { // <- cleared if { value: 2 } is garbage collected
primitive: Map{
1: { item: fn({ value: 2 }, 1) }
}
}
}
}
This solution does not memoize any errors thrown; arguments are considered equal based on Map key equality. If you also need to memoize thrown errors, I hope this answer gave you the building blocks to do so.
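As a starting point, one way to cache failures (sketched here for the single-argument case) is to store either the result or the thrown error on the cache entry and replay it on a hit:
function memoWithErrors(fn) {
  const cache = new Map();
  return function (arg) {
    if (cache.has(arg)) {
      const entry = cache.get(arg);
      if (entry.threw) throw entry.error; // replay the cached failure
      return entry.value;
    }
    try {
      const value = fn(arg);
      cache.set(arg, { threw: false, value: value });
      return value;
    } catch (error) {
      cache.set(arg, { threw: true, error: error });
      throw error;
    }
  };
}
The same idea can be combined with the Map/WeakMap tree above by storing the threw flag and value/error on the cache node instead of a bare item.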
There are a number of memoization libraries available. Doing memoization efficiently is not as straightforward as it seems, so I suggest using a library. Two of the fastest are:
https://github.com/anywhichway/iMemoized
https://github.com/planttheidea/moize
See here for a comprehensive(-ish) list of memoization libraries: https://stackoverflow.com/a/61402805/2441655
