Avoid effect reactivity on store variable update - javascript

I've declared an effect, something along the lines of:
createEffect(async () => {
  let localvar = 0;
  if (storeGetter.var1 && storeGetter.var2) {
    while (storeGetter.var3) {
      localvar = await storeGetter.someFunction1(storeSetter, storeGetter);
    }
  }
});
And then in store.someFunction I've declared:
someFunction: async function(setter, getter) {
  let localVar2 = 0;
  let ans = 0;
  if (getter.var_4) {
    /* do some calculations */
  }
  return ans;
}
The thing is that when var_4 is updated, SolidJS treats it as a dependency of the effect, so the effect re-runs.
Is there any way to avoid triggering the effect in this case?
I'm not sure if untrack is the way to go, or how to use it here.

There are many ways to ignore or selectively trigger effects: using untrack, using conditional statements, passing a custom equals option to createSignal, and using the on function are the most common ones.
How to listen to only a certain value of an object in solid-js?
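To illustrate the first and last of those, here is a minimal sketch with two stand-in signals:

import { createSignal, createEffect, untrack, on } from "solid-js";

const [a, setA] = createSignal(0);
const [b, setB] = createSignal(0);

// untrack: b is read but not subscribed to; the effect re-runs only when a changes
createEffect(() => {
  console.log(a(), untrack(b));
});

// on: dependencies are listed explicitly; everything inside the callback is untracked
createEffect(on(a, (value) => {
  console.log(value, b()); // reading b here creates no subscription
}));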
However, your intent is not clear, and there are a few problems with your code.
First and foremost, Solid's reactivity API is synchronous; making the effect async serves no purpose and makes your code very hard to reason about. Dependencies are only collected during the synchronous part of the effect, so any signal read after an await is not tracked at all.
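A small sketch of that pitfall, again with stand-in signals:

import { createSignal, createEffect } from "solid-js";

const [a, setA] = createSignal(1);
const [b, setB] = createSignal(2);

createEffect(async () => {
  console.log("a =", a()); // read synchronously: tracked
  await Promise.resolve();
  console.log("b =", b()); // read after an await: not tracked, so updates to b never re-run this effect
});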
The way you interact with localStorage has problems too. For example, why do you need a while statement? It is a blocking construct and a huge no. If your code worked the way you intended, the UI would be either frozen or janky.
You can find some examples on how to use localstorage here:
How to access values stored on a browser from a code file using sessionStorage in javascript?
How to update local storage values in SolidJS using hooks
How do I temporarily store form data on a user's computer from sessionstorage?

Related

When in the Chrome Debugger, is there anyway to reference data or functions inside an anonymous function block?

I'm trying to debug something live on a customer website and my code is all inside an anonymous function block. I don't know if there's any way to reach that code to execute functions or look at variables in there. I can't put a breakpoint either, because this code is dynamically generated each time the page is refreshed and the breakpoint doesn't stick.
(function() {
  var Date = "14 September 2022 14:44:55"; // different every refresh for example
  var Holder = {
    Items: {
      item1: "Value1",
      item2: "Value2"
    }
  };
  function getItem(name) {
    return Holder.Items[name];
  }
  function setItem(name, value) {
    Holder.Items[name] = value;
  }
  setTimeout(DoSomething, 2000);
})();
That's not the actual code, just a bare minimum example to illustrate the problem.
Is there any way to reach getItem() or Items?
Without a breakpoint that code probably runs to completion then POOF it's all gone anyway.
Redefine setTimeout
If it really is the case that the code inside the anonymous function calls other browser methods, you might be able to insert a detour at runtime that you can then put a breakpoint on.
For this to work, you will need to be able to inject new code into the page before the anonymous code, because there's no other way to invoke the IIFE.
Your example code uses setTimeout, so here's what I would try to insert:
let realSetTimeout = window.setTimeout
window.setTimeout = (...args) => {
  debugger
  return realSetTimeout(...args)
}
Lots of unrelated code might be calling setTimeout, in which case this could break the page or just make debugging really tedious. In that case, you might make it only debug if one of the setTimeout args has a value that's used in your example, e.g.:
// only break for our timeout
if(args[1] === 2000) debugger
Something like that might not trigger for only your code, but it would hugely reduce the number of other codepaths that get interrupted on their journey through the commonly-used browser capability.
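Putting the two snippets together, the filtered detour would look something like this (assuming the 2000 ms delay from the example above):

let realSetTimeout = window.setTimeout
window.setTimeout = (...args) => {
  // only break for our timeout
  if (args[1] === 2000) debugger
  return realSetTimeout(...args)
}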
Alternatively, use Charles Proxy to rewrite the body of the HTML page before it enters your browser. You could manually insert a debugger call directly into the anonymous function. Charles is not free, but I think they have a demo that might let you do this. If you do this professionally, it's probably a good purchase anyway. Your employer might even pay for the license.
If you can't use Charles (or a similar tool), you could instead set up a local proxy server using Node which does the rewrite for you. Something like that might only take an hour to throw together. But that is a bigger task, and deserves its own question if you need help with that.
No unfortunately.
The variables inside the anonymous function are created in a scope that is inaccessible from the outside.
One of the main benefits of using a closure!
You’ll have to find a way to insert your own code inside of it by modifying the function that is generating those objects. If you can’t do that, then you’ll have to take the fork in the road and find another way.

Is there a way to minimise CPU usage by reducing number of write operations to chrome.storage?

I am making a chrome extension that keeps track of the time I spend on each site.
In background.js I am using a map (stored as an array) that saves the list of sites, as shown:
let observedTabs = [['chrome://extensions', [time, time, 'icons/sad.png']]];
Every time I update my current site, the starting and ending time of my time on that particular site is stored in the map corresponding to the site's key.
To achieve this, I am performing the chrome.storage.sync.get and chrome.storage.sync.set inside the tabs.onActivated, tabs.onUpdated, windows.onFocusChanged and idle.onStateChanged.
This, however, results in very high CPU usage for Chrome (around 25%) due to multiple read and write operations from (and to) storage.
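Roughly, each of those listeners ends up doing something like this (the update step is a stand-in for my actual logic):

chrome.tabs.onActivated.addListener(() => {
  chrome.storage.sync.get("observedTabs", (result) => {
    const tabs = result.observedTabs;
    // ...update the start/end times for the site I just left...
    chrome.storage.sync.set({ observedTabs: tabs }); // a full write on every tab switch
  });
});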
I tried to solve the problem by using global variables in background.js and initialising them to undefined. Using the function shown below, I read from storage only when the current variable is undefined (the first time background.js tries to get the data); at all other times, it just uses the already-set global variable.
let observedTabs = undefined;
function getObservedTabs(callback) {
  if (observedTabs === undefined) {
    chrome.storage.sync.get("observedTabs", (result) => {
      observedTabs = result.observedTabs; // cache the value for subsequent calls
      callback(observedTabs);
    });
  } else {
    callback(observedTabs);
  }
}
This solves the problem of the costly repeated read operations.
As for the write operations, I considered using runtime.onSuspend to write to storage once my background script stops executing, as shown:
chrome.runtime.onSuspend.addListener(() => {
  getObservedTabs((_observedTabs) => {
    observedTabs = _observedTabs;
    chrome.storage.sync.set({ "observedTabs": _observedTabs });
  });
});
This, however, doesn't work, and the documentation also warns about it:
Note that since the page is unloading, any asynchronous operations started while handling this event are not guaranteed to complete.
Is there a workaround that would allow me to minimise my writing operations to storage and hence reduce my CPU usage?

A way to reverse all prototype re-definitions?

Let's assume that I do not trust the browsers visiting my website. What if they, for example, override XMLHttpRequest.prototype and redirect or change the requests? Is there any way to prevent this? Perhaps by resetting all objects to "factory settings" at the end of loading?
If your script runs first, you can save references to whatever you need, as below, and freeze them.
(() => {
  const XHR = XMLHttpRequest;
  Object.freeze(XHR);
  // use XHR here ...
})();
Otherwise, I think it is not possible to be sure, other than checking whether a function is native: casting it to a string (function + '') should show that it is a native function.
And running a few simple unit tests against the function to be sure it is OK.
Unit tests may be necessary because two functions can be swapped with each other while each still appears native:
const alertRef = alert;
alert = console.log;
console.log = alertRef;
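For reference, a rough version of the string check mentioned above; note it can be fooled, since Function.prototype.toString can itself be replaced, and bound functions also stringify with [native code]:

function seemsNative(fn) {
  return typeof fn === "function" && (fn + '').includes('[native code]');
}

seemsNative(XMLHttpRequest); // true in an untampered browser
seemsNative(() => {});       // false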
Another possible option that may be helpful is checking a function's length property, which reports how many arguments it accepts.
Be careful with that, because it may differ between browsers.
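For instance, length only counts parameters declared before the first default or rest parameter:

parseInt.length;                 // 2 (string, radix)
((a, b = 1, ...c) => {}).length; // 1: counting stops at the first default parameter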

How to initialize a child process with passed in functions in Node.js

Context
I'm building a general purpose game playing A.I. framework/library that uses the Monte Carlo Tree Search algorithm. The idea is quite simple: the framework provides the skeleton of the algorithm (the four main steps: Selection, Expansion, Simulation and Backpropagation), and all the user needs to do is plug in four simple(ish) game-related functions of his making:
a function that takes in a game state and returns all possible legal moves to be played
a function that takes in a game state and an action and returns a new game state after applying the action
a function that takes in a game state and determines if the game is over and returns a boolean and
a function that takes in a state and a player ID and returns a value based on whether the player has won, lost, or the game is a draw. With that, the algorithm has all it needs to run and select a move to make.
What I'd like to do
I would love to make use of parallel programming to increase the strength of the algorithm and reduce the time it needs to run each game turn. The problem I'm running into is that, when using Child Processes in NodeJS, you can't pass functions to the child process and my framework is entirely built on using functions passed by the user.
Possible solution
I have looked at this answer but I am not sure this would be the correct implementation for my needs. I don't need to be continually passing functions through messages to the child process, I just need to initialize it with functions that are passed in by my framework's user, when it initializes the framework.
I thought about one way to do it, but it seems so inelegant, on top of probably not being the most secure, that I find myself searching for other solutions. I could, when the user initializes the framework and passes his four functions to it, get a script to write those functions to a new js file (let's call it my-funcs.js) that would look something like:
const func1 = {... function implementation...}
const func2 = {... function implementation...}
const func3 = {... function implementation...}
const func4 = {... function implementation...}
module.exports = {func1, func2, func3, func4}
Then, in the child process worker file, I guess I would have to find a way to lazily require my-funcs.js. Or maybe I wouldn't; I guess it depends on how and when Node.js loads the worker file into memory. This all seems very convoluted.
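Sketched out, the worker side of that idea might look something like this (the message shape and file names are hypothetical):

// worker.js - lazily require the generated my-funcs.js once its path arrives
let funcs = null;
process.on("message", (msg) => {
  if (msg.type === "init") {
    funcs = require(msg.path); // path to the generated my-funcs.js
  } else if (msg.type === "search") {
    // ...run the MCTS steps here using funcs.func1 through funcs.func4...
  }
});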
Can you describe other ways to get the result I want?
child_process is less about running a user's function and more about starting a new process to exec a file or command.
Node is inherently a single-threaded system, so for I/O-bound things, the Node Event Loop is really good at switching between requests, getting each one a little farther. See https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/
What it looks like you're doing is trying to get JavaScript to run multiple threads simultaneously. Short answer: you can't ... or rather, it's really hard. See is it possible to achieve multithreading in nodejs?
So how would we do it anyway? You're on the right track: child_process.fork(). But it needs a hard-coded function to run. So how do we get user-generated code into place?
I envision a datastore where you can take userFn.toString() and save it to a queue. Then fork the process, and let it pick up the next unhandled thing in the queue, marking that it did so. Then write the results to another queue, which this "GUI" thread polls, returning the calculated results back to the user. At this point, you've got multi-threading ... and race conditions.
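A minimal sketch of that core idea (skipping the queue), assuming the user's function is self-contained and closes over nothing:

// parent.js
const { fork } = require('child_process');

const userFn = (gameState) => gameState.moves.length; // stand-in for a user-supplied function

const child = fork('./worker.js');
child.send({ fnSource: userFn.toString(), state: { moves: [] } });
child.on('message', (result) => console.log('result:', result));

// worker.js
process.on('message', ({ fnSource, state }) => {
  // Reviving the source executes untrusted code - see the security note below.
  const fn = new Function(`return (${fnSource})`)();
  process.send(fn(state));
});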
Another idea: create a REST service that accepts the userFn.toString() content and execs it. Then in this module, you call out to the other "thread" (service), await the results, and return them.
Security: Yeah, we just flung this out the window. Whether you're executing the user's function directly, calling child_process#fork to do it, or shimming it through a service, you're trusting untrusted code. Sadly, there's really no way around this.
Assuming that security isn't an issue you could do something like this.
// Client side
<input id="func1"> <!-- For example the user inputs '(gamestate)=>{return 1}' -->
<input id="func2">
<input id="func3">
<input id="func4">
<script>
socket.on('syntax_error',function(err){alert(err)});
function submit_funcs_strs() {
  // Get the function strings from user input and put them into an array
  socket.emit('functions', [document.getElementById('func1').value, document.getElementById('func2').value, ...]);
}
</script>
// Server side
// Socket listener is async
socket.on('functions', (funcs_strs) => {
  let funcs = [];
  for (let i = 0; i < funcs_strs.length; i++) {
    try {
      funcs.push(eval(funcs_strs[i]));
    } catch (e) {
      if (e instanceof SyntaxError) {
        socket.emit('syntax_error', e.message);
        return;
      }
    }
  }
  // Run algorithm here
});

Conflicting purposes of IndexedDB transactions

As I understand it, there are three somewhat distinct reasons to put multiple IndexedDB operations in a single transaction rather than using a unique transaction for each operation:
Performance. If you’re doing a lot of writes to an object store, it’s much faster if they happen in one transaction.
Ensuring data is written before proceeding. Waiting for the “oncomplete” event is the only way to be sure that a subsequent IndexedDB query won’t return stale data.
Performing an atomic set of DB operations. Basically, “do all of these things, but if one of them fails, roll it all back”.
#1 is fine, most databases have the same characteristic.
#2 is a little more unique, and it causes issues when considered in conjunction with #3. Let’s say I have some simple function that writes something to the database and runs a callback when it's over:
function putWhatever(obj, cb) {
  var tx = db.transaction("whatever", "readwrite");
  tx.objectStore("whatever").put(obj);
  tx.oncomplete = function () { cb(); };
}
That works fine. But now if you want to call that function as a part of a group of operations you want to atomically commit or fail, it's impossible. You'd have to do something like this:
function putWhatever(tx, obj, cb) {
  tx.objectStore("whatever").put(obj).onsuccess = function () { cb(); };
}
This second version of the function is very different from the first, because the callback runs before the data is guaranteed to be written to the database. If you try to read back the object you just wrote, you might get a stale value.
Basically, the problem is that you can only take advantage of one of #2 or #3. Sometimes the choice is clear, but sometimes not. This has led me to write horrible code like:
function putWhatever(tx, obj, cb) {
  if (tx === undefined) {
    tx = db.transaction("whatever", "readwrite");
    tx.objectStore("whatever").put(obj);
    tx.oncomplete = function () { cb(); };
  } else {
    tx.objectStore("whatever").put(obj).onsuccess = function () { cb(); };
  }
}
However even that still is not a general solution and could fail in some scenarios.
Has anyone else run into this problem? How do you deal with it? Or am I simply misunderstanding things somehow?
The following is just opinion as this doesn't seem like a 'one right answer' question.
First, performance is an irrelevant consideration. Avoid this factor entirely, unless later profiling suggests a material problem. Chances of perf issues are ridiculously low.
Second, I prefer to organize requests into transactions solely to maintain integrity. Integrity is paramount. Integrity as I define it here simply means that the database at any one point in time does not contain conflicting or erratic data. Essentially the database is never able to enter into a 'bad' state. For example, to impose a rule that cross-store object references point to valid and existing objects in other stores (a.k.a. referential integrity), or to prevent duplicated requests such as a double add/put/delete. Obviously, if the app were something like a bank app that credits/debits accounts, or a heart-attack monitor app, things could go horribly wrong.
My own experience has led me to believe that code involving indexedDB does not lend itself to the traditional facade pattern. I found that what worked best, in terms of organizing requests into different wrapping functions, was to design functions around transactions. Quite often there are very few DRY violations, because every request is nearly always unique to its transactional context. In other words, while a similar 'put object' request might appear in more than one transaction, it is so distinct in its behavior given its separate context that it merits violating DRY.
If you go the function-per-request route, I am not sure why you are checking whether the transaction parameter is undefined. Have the caller create the transaction and then pass it to the requests in turn. Expect the tx to always be defined and do not over-zealously guard against it. If it is ever not defined, there is either a serious bug in indexedDB or in your calling function.
Explicitly, something like:
function doTransaction1(db, onComplete) {
  var tx = db.transaction(...);
  tx.oncomplete = onComplete;
  doRequest1(tx);
  doRequest2(tx);
  doRequest3(tx);
}

function doRequest1(tx) {
  var store = tx.objectStore(...);
  // ...
}

// ...
If the requests should not execute in parallel and must run in series, that indicates a larger and more difficult design issue.
