I am looking for the most efficient way to update a property of an object in an array using modern JavaScript. I am currently doing the following, but it is way too slow, so I'm looking for an approach that will speed things up. For context, this code runs in a Redux Saga in a React app and is called on every keystroke* a user makes when writing code in an editor.
*OK, not EVERY keystroke. I do have debounce and throttling implemented; I just wanted to focus on the update, but I appreciate everyone catching this :)
function* updateCode({ payload: { code, selectedFile } }) {
  try {
    const tempFiles = stateFiles.filter(file => file.id !== selectedFile.id);
    const updatedFile = {
      ...selectedFile,
      content: code,
    };
    const newFiles = [...tempFiles, updatedFile];
  } catch (err) {}
}
the above works but is too slow.
I have also tried using splice, but I get an `Invariant Violation: A state mutation` error:
const index = stateFiles.findIndex(file => file.id === selectedFile.id);
const newFiles = Array.from(stateFiles.splice(index, 1, { ...selectedFile, content: code }));
Your splice version fails because splice mutates stateFiles in place, and Redux state must never be mutated. You can use Array.prototype.map in order to construct your new array:
const newFiles = stateFiles.map(file => {
  if (file.id !== selectedFile.id) {
    return file;
  }
  return {
    ...selectedFile,
    content: code,
  };
});
Also, please consider using debouncing in order not to run your code on every keystroke.
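If it isn't already wired up this way, redux-saga ships a debounce effect for exactly this case. A minimal sketch; 'UPDATE_CODE' is a hypothetical action type standing in for whatever triggers updateCode:

import { debounce } from 'redux-saga/effects';

function* watchUpdateCode() {
  // run updateCode at most once per 300 ms burst of UPDATE_CODE actions
  yield debounce(300, 'UPDATE_CODE', updateCode);
}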
To remove the global variables used in an MV2 background script when migrating to an MV3 service worker, all the guides I've found just give an example of replacing a single global variable with a few lines that set and then get it via chrome.storage, but it's still not clear to me how this can be used in a slightly more complicated scenario.
For instance:
const activatedTabs = [];
let lastActiveTabInfo;
chrome.tabs.onActivated.addListener((activeInfo) => {
  if (activatedTabs.length === 0) {
    activatedTabs.push(activeInfo.tabId);
    lastActiveTabInfo = activeInfo;
  }
});
How could the snippet above be refactored to use chrome.storage and remove the global variables?
The number of variables in the state doesn't change the approach:
read the state on the start of the script
save the state on change
For small data (1MB total), use chrome.storage.session, which is in-memory, i.e. it doesn't write to disk; otherwise use chrome.storage.local. Both can only store JSON-compatible types, i.e. strings, numbers, booleans, null, and arrays/objects of such types. There's also IndexedDB for Blob or Uint8Array.
let activatedTabs;
let lastActiveTabInfo;
// Read the state once when the script starts; `busy` lets event
// handlers await the initial read before touching the state.
let busy = chrome.storage.session.get().then(data => {
  activatedTabs = data.activatedTabs || [];
  lastActiveTabInfo = data.lastActiveTabInfo;
  busy = null;
});
const saveState = () => chrome.storage.session.set({
  activatedTabs,
  lastActiveTabInfo,
});
chrome.tabs.onActivated.addListener(async info => {
  if (busy) await busy; // the state isn't usable until the initial read finishes
  if (!activatedTabs.length) {
    activatedTabs.push(info.tabId);
    lastActiveTabInfo = info;
    await saveState();
  }
});
You can also maintain a single object with properties instead:
const state = {
  activatedTabs: [],
  lastActiveTabInfo: null,
};
const saveState = () => chrome.storage.session.set({ state });
// Read the stored state once at startup and merge it in.
let busy = chrome.storage.session.get('state').then(data => {
  Object.assign(state, data.state);
  busy = null;
});
chrome.tabs.onActivated.addListener(async info => {
  if (busy) await busy; // wait for the stored state to be merged in
  if (!state.activatedTabs.length) {
    state.activatedTabs.push(info.tabId);
    state.lastActiveTabInfo = info;
    await saveState();
  }
});
Note that if you subscribe to frequent events like tabs.onActivated, your service worker may restart hundreds of times a day, which wastes far more resources than keeping an idle persistent background page. The Chromium team has ignored this problem, but you shouldn't; luckily, there is a way to reduce the number of restarts by prolonging the service worker's lifetime. You still need to read/save the state as shown.
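For reference, a common keep-alive sketch. This relies on the documented behavior, since Chrome 110, that any extension API call resets the service worker's 30-second idle timer; it is a workaround, not an official lifetime API:

// Ping a cheap extension API more often than the 30-second idle timeout
// so the service worker's idle timer keeps getting reset (Chrome 110+).
setInterval(() => chrome.runtime.getPlatformInfo(() => {}), 25000);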
Within my function, through interaction from the user, I aim to slowly build up an array of responses, which I then pass off to an API. However, my different approaches to appending to the array simply produce a single-element array (the previous contents are overwritten).
My current code as follows:
const contribution: Array = [];
const handlePress = () => {
  var col = {
    response,
    user: 1,
    update: update.id,
    question: q.id,
  };
  contribution = [...contribution, col];
}
My understanding is that contribution = [...contribution, col] is the correct way to add to the array.
What is the best practice approach for doing this inside a function called each time the user interacts?
Although it is not clear from the question, I suspect this code is inside a component. If so, then a new contribution array is created on every render. You need to use useState to store this array so that a new array is not created on every render.
const [contribution, setContribution] = React.useState([]);
const handlePress = () => {
  var col = {
    response,
    user: 1,
    update: update.id,
    question: q.id,
  };
  setContribution([...contribution, col]);
}
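One refinement worth considering (a standard React pattern, not part of the original answer): if handlePress can fire several times before a re-render, the functional updater form always sees the latest array:

const handlePress = () => {
  const col = {
    response,
    user: 1,
    update: update.id,
    question: q.id,
  };
  // functional updater: `prev` is guaranteed to be the latest state,
  // even when several presses are batched into one render
  setContribution(prev => [...prev, col]);
};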
I need Cypress to wait for any XHR requests to complete by default before performing any operations. Is there any way to make this the default, or are there any other alternatives? The application I am testing is slow and makes a lot of API calls.
Edit: Writing a separate statement for every API request is getting messy and is unnecessary work. I need a way to make this easier.
If what you want is to wait for a specific XHR, you can do it by making use of cy.route(). I use this in some scenarios and it is really useful. The general steps to use it are:
cy.server()
cy.route('GET','**/api/my-call/**').as('myXHR');
Do things in the UI such as clicking on a button that will trigger such api calls
cy.wait('@myXHR')
This way, if such a call isn't triggered, your test will fail. You can find extensive documentation about this here.
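Note that cy.server() and cy.route() were deprecated in Cypress 6 and removed in Cypress 12; the equivalent setup with cy.intercept() looks like this:

cy.intercept('GET', '**/api/my-call/**').as('myXHR');
// click the button (or otherwise trigger the API call) in the UI ...
cy.wait('@myXHR');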
Found something that works for me here https://github.com/PinkyJie/cypress-auto-stub-example
Look for cy.waitUntilAllAPIFinished
I partially solved the problem by adding a waitAll command and overwriting the route command in the support folder:
const routeCallArr = [];
Cypress.Commands.overwrite('route', (originalFn, ...params) => {
  const localRoute = originalFn(...params);
  if (localRoute.alias === undefined) return localRoute;
  localRoute.onRequest = function() {
    // record the pending call so waitAll knows what to wait for
    routeCallArr.push({alias: `@${localRoute.alias}`, startTime: Date.now()});
  };
  localRoute.onResponse = function() {
    // clearCall isn't shown in the answer; it removes the finished alias from routeCallArr
    clearCall(`@${localRoute.alias}`);
  };
  return localRoute;
});
const waitAll = (timeOut = 50000, options = {verbose: false, waitNested: false}) => {
  // keep only the aliases that haven't already exceeded the timeout
  const filterRouteCallArr = [];
  const date = Date.now();
  for (const routeCall of routeCallArr) {
    if ((date - routeCall.startTime) > timeOut) continue;
    filterRouteCallArr.push(routeCall.alias);
  }
  if (options.verbose) {
    console.table(routeCallArr.map(routeCall => ({
      deltaTime: date - routeCall.startTime,
      alias: routeCall.alias,
      startTime: routeCall.startTime,
    })));
    console.log(routeCallArr, filterRouteCallArr);
  }
  routeCallArr.length = 0; // reset the pending-call list
  if (filterRouteCallArr.length > 0) {
    const waiter = cy.wait(filterRouteCallArr, {timeout: timeOut});
    // optionally wait again for calls that were triggered while waiting
    options.waitNested && waiter.then(() => {
      if (routeCallArr.length > 0) {
        waitAll(timeOut, options);
      }
    });
  }
};
Cypress.Commands.add('waitAll', waitAll);
And in the test, instead of using cy.wait(['@call01', ..., '@callN']), I use cy.waitAll().
The problem with this implementation comes when there are nested calls in a time interval relatively separate from the original calls. In that case you can use a recursive wait: cy.waitAll(50000, {waitNested: true});
I'm using rxjs.
I have a Browser that's responsible for a number of Page objects. Each page has an Observable<Event> that yields a stream of events.
Page objects are closed and opened at various times. I want to create one observable, called TheOneObservable that will merge all the events from all the currently active Page objects, and also merge in custom events from the Browser object itself.
Closing a Page means that the subscription to it should be closed so it doesn't prevent it from being GC'd.
My problem is that Pages can be closed at any time, which means that the number of Observables being merged is always changing. I've thought of using an Observable of Pages and using mergeMap, but there are problems with this. For example, a subscriber will only receive events of Pages that are opened after it subscribes.
Note that this question has been answered here for .NET, but using an ObservableCollection that isn't available in rxjs.
Here is some code to illustrate the problem:
class Page {
  private _events = new Subject<Event>();
  get events(): Observable<Event> {
    return this._events.asObservable();
  }
}
class Browser {
  pages = [] as Page[];
  private _ownEvents = new Subject<Event>();
  addPage(page: Page) {
    this.pages.push(page);
  }
  removePage(page: Page) {
    let ixPage = this.pages.indexOf(page);
    if (ixPage < 0) return;
    this.pages.splice(ixPage, 1);
  }
  get oneObservable() {
    // this won't work for aforementioned reasons
    return Observable.from(this.pages).mergeMap(x => x.events).merge(this._ownEvents);
  }
}
It's in TypeScript, but it should be understandable.
You can switchMap() on a Subject() linked to array changes, replacing oneObservable with a fresh one when the array changes.
pagesChanged = new Rx.Subject();
addPage(page: Page) {
  this.pages.push(page);
  this.pagesChanged.next();
}
removePage(page: Page) {
  let ixPage = this.pages.indexOf(page);
  if (ixPage < 0) return;
  this.pages.splice(ixPage, 1);
  this.pagesChanged.next();
}
get oneObservable() {
  return this.pagesChanged
    .switchMap(changeEvent =>
      Observable.from(this.pages).mergeMap(x => x.events).merge(this._ownEvents)
    );
}
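One caveat (my note, not part of the original answer): pagesChanged is a plain Subject, so a subscriber to oneObservable receives nothing until the next add/remove. Seeding the stream covers the pages that already exist at subscription time:

get oneObservable() {
  return this.pagesChanged
    .startWith(null) // emit once immediately so the current page list is merged right away
    .switchMap(() =>
      Observable.from(this.pages).mergeMap(x => x.events).merge(this._ownEvents)
    );
}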
Testing,
const page1 = { events: Rx.Observable.of('page1Event') };
const page2 = { events: Rx.Observable.of('page2Event') };
let pages = [];
const pagesChanged = new Rx.Subject();
const addPage = (page) => {
  pages.push(page);
  pagesChanged.next();
}
const removePage = (page) => {
  let ixPage = pages.indexOf(page);
  if (ixPage < 0) return;
  pages.splice(ixPage, 1);
  pagesChanged.next();
}
const _ownEvents = Rx.Observable.of('ownEvent')
const oneObservable =
  pagesChanged
    .switchMap(pp =>
      Rx.Observable.from(pages)
        .mergeMap(x => x.events)
        .merge(_ownEvents)
    )
oneObservable.subscribe(x => console.log('subscribe', x))
console.log('adding 1')
addPage(page1)
console.log('adding 2')
addPage(page2)
console.log('removing 1')
removePage(page1)
You will need to manage the subscriptions to the pages yourself and feed their events into the resulting subject yourself:
const theOneObservable$ = new Subject<Event>();

function openPage(page: Page): Subscription {
  return page.events$.subscribe(val => theOneObservable$.next(val));
}
Closing the page, i.e. calling unsubscribe on the returned subscription, will already do everything it has to do.
Note that theOneObservable$ is a hot observable here.
You can, of course, take this a bit further by writing your own observable type which encapsulates all of this API. In particular, this would allow you to unsubscribe all inner observables when it is being closed.
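For illustration, here is a minimal sketch of such an encapsulating type (the class and method names are mine, not from the question):

class PageEventHub {
  private _out = new Subject<Event>();
  private _subs = new Map<Page, Subscription>();

  // the single merged stream consumers subscribe to
  get events$(): Observable<Event> {
    return this._out.asObservable();
  }

  openPage(page: Page) {
    // forward the page's events into the merged stream
    this._subs.set(page, page.events$.subscribe(e => this._out.next(e)));
  }

  closePage(page: Page) {
    const sub = this._subs.get(page);
    if (sub) sub.unsubscribe(); // drop the inner subscription so the page can be GC'd
    this._subs.delete(page);
  }

  close() {
    // unsubscribe all inner observables and complete the merged stream
    this._subs.forEach(sub => sub.unsubscribe());
    this._subs.clear();
    this._out.complete();
  }
}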
A slightly different approach is this:
const observables$ = new Subject<Observable<Event>>();
const theOneObservable$ = observables$.mergeMap(obs$ => obs$);
// Add a page's events; note that takeUntil takes care of the
// unsubscription process here.
observables$.next(page.events$.takeUntil(page.closed$));
This approach is superior in the sense that it will unsubscribe the inner observables automatically when the observable is unsubscribed.
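For completeness, the page.closed$ stream is assumed above; a sketch of how a Page might expose it (not from the question):

class Page {
  events$ = new Subject<Event>();
  closed$ = new Subject<void>();

  close() {
    this.closed$.next();      // ends every takeUntil(page.closed$) subscription
    this.closed$.complete();
    this.events$.complete();
  }
}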
I have a problem creating the following observable.
I want it to receive a predefined array of values
And I want to filter by different things, and be able to work with these as individual observables.
And then when it comes time to merge these filtered observables, I want to preserve the order from the original one
//Not sure the share is necessary, just thought it would tie it all together
const input$ = Observable.from([0,1,0,1]).share();
const ones$ = input$.filter(n => n == 1);
const zeroes$ = input$.filter(n => n == 0);
const zeroesChanged$ = zeroes$.mapTo(2);
const onesChanged$ = ones$.mapTo(3);
const allValues$ = Observable.merge(onesChanged$,zeroesChanged$);
allValues$.subscribe(n => console.log(n));
//Outputs 3,3,2,2
//Expected output 3,2,3,2
EDIT: I am sorry I was not specific enough in my question.
I am using a library called Cycle.js, which separates side effects into drivers.
So what I am doing in my cycle is this
export function socketCycle({ SOCKETIO }) {
  const serverConnect$ = SOCKETIO.get('connect').map(serverDidConnect);
  const serverDisconnect$ = SOCKETIO.get('disconnect').map(serverDidDisconnect);
  const serverFailedToConnect$ = SOCKETIO.get('connect_failed').map(serverFailedToConnect);
  return { ACTION: Observable.merge(serverConnect$, serverDisconnect$, serverFailedToConnect$) };
}
Now my problem arose when I wanted to write a test for it. I tried the following, which works in the wrong manner (using Jest):
const inputConnect$ = Observable.from(['connect', 'disconnect', 'connect', 'disconnect']).share();
const expectedOutput$ = Observable.from([
  serverDidConnect(),
  serverDidDisconnect(),
  serverDidConnect(),
  serverDidDisconnect(),
]);
const socketIOMock = {
  get: (evt) => {
    if (evt === 'connect') {
      return inputConnect$.filter(s => s === 'connect');
    } else if (evt === 'disconnect') {
      return inputConnect$.filter(s => s === 'disconnect');
    }
    return Observable.empty();
  },
};
const { ACTION } = socketCycle({ SOCKETIO: socketIOMock });
Observable.zip(ACTION, expectedOutput$).subscribe(
  ([output, expectedOutput]) => { expect(output).toEqual(expectedOutput); },
  (error) => { expect(true).toBe(false); },
  () => { done(); },
);
Maybe there is another way I can go about testing it?
When a stream is partitioned, the timing guarantees between elements in different daughter streams are destroyed. In particular, even if connect events always come before disconnect events at the event source, the events of the connect Observable won't always come before their corresponding items in the disconnect Observable. At normal timescales this race condition is probably quite rare, but it is dangerous nonetheless, and this test shows the worst case.
The good news is that your function as shown is just a mapper between events and results from handlers. If you can continue this model generally over event types, then you can even encode the mapping in a plain data structure, which benefits expressiveness:
const event_handlers = new Map([
  ['connect', serverDidConnect],
  ['disconnect', serverDidDisconnect],
  ['connect_failed', serverFailedToConnect],
]);

// look up each event's handler and apply it, mirroring the per-stream .map(handler)
const ACTION = input$.map(evt => event_handlers.get(evt)(evt));
Caveat: if you were reducing over the daughter streams (or otherwise considering previous values, like with debounceTime), the refactor is not so straightforward, and would also depend on a new definition of "preserve order". Much of the time, it would still be feasible to reproduce with reduce + a more complicated accumulator.
The code below might give you the desired result, but IMHO there's no need to use rxjs to operate on an array:
Rx.Observable.combineLatest(
  Rx.Observable.from([0,0,0]),
  Rx.Observable.from([1,1,1])
).flatMap(value => Rx.Observable.from(value))
 .subscribe(console.log)
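In the same spirit, zip interleaves two streams pairwise, which for the question's input yields the expected 3,2,3,2. Note this only preserves order because the source strictly alternates, so pairwise zipping is valid:

Rx.Observable.zip(
  Rx.Observable.from([3,3]),
  Rx.Observable.from([2,2])
).flatMap(pair => Rx.Observable.from(pair)) // [3,2],[3,2] -> 3,2,3,2
 .subscribe(console.log)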