Parse After_Delete Update Counters - javascript

I'm using Parse cloud code to update some counters on a user when after_delete is called on certain classes. A user has counters for subscriptions, followers and following that are incremented in the before_save for subscriptions and follows and decremented in the before_delete for the same classes.
The issue I'm running into is when a user is deleted. The after_delete function destroys all related subscriptions/follows, but this triggers an update to the (deleted) user via before_delete for subscriptions/follows. This always causes the before_delete to error out.
Perhaps I'm conceptually mixed up on the best way to accomplish this, but I can't figure out how to properly set up the following code in the Follow before_delete:
var fromUserPointer = follow.get("fromUser");
var toUserPointer = follow.get("toUser");
fromUserPointer.fetch().then(function(fromUser) {
    // update following counter
    // if fromUser is already deleted, none of the rest of the promise chain is executed
}).then(function(fromUser) {
    return toUserPointer.fetch();
}).then(function(toUser) {
    // update followers count
});
Is there a way to determine if the fromUserPointer and toUserPointer point to a valid object short of actually performing the fetch?

It's not an error to not find the user, but by not handling the missing-object case on the fetch, it's being treated implicitly as an error.
So...
fromUserPointer.fetch().then(function(result) {
    // good stuff
}).then(function(result) {
    // good stuff
}).then(function(result) {
    // good stuff
}, function(error) {
    // this is good stuff too; if there's no mode of failure
    // above that would cause you to want NOT to delete, then...
    response.success();
});
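Putting that together, here is a minimal sketch of what the Follow beforeDelete could look like, assuming the older request/response Cloud Code API and counter fields named followingCount and followersCount (hypothetical names, adjust to your schema). The error handlers on each fetch swallow the missing-object case so that deleting a user no longer makes the trigger error out:
Parse.Cloud.beforeDelete("Follow", function(request, response) {
    var follow = request.object;
    var fromUserPointer = follow.get("fromUser");
    var toUserPointer = follow.get("toUser");

    fromUserPointer.fetch().then(function(fromUser) {
        fromUser.increment("followingCount", -1); // hypothetical counter field
        return fromUser.save();
    }, function(error) {
        // fromUser is already gone (e.g. the user itself was deleted): nothing to decrement
        return Parse.Promise.as();
    }).then(function() {
        return toUserPointer.fetch().then(function(toUser) {
            toUser.increment("followersCount", -1); // hypothetical counter field
            return toUser.save();
        }, function(error) {
            return Parse.Promise.as();
        });
    }).then(function() {
        response.success();
    }, function(error) {
        response.error(error);
    });
});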

How to use if else with protractor

I have a very strange scenario and I am not sure how to handle it.
I'm new to testing and I got a site to test where we check whether the cart function is working properly.
My problem is that we add x number of products and then do a stock check. If there is a stock conflict we need to solve it before continuing; otherwise we just continue.
I managed to create a function that looks like:
describe("Details page", function () {
detailsPage = new DetailsPage();
// The details page is accessible by the specified URL
it(`Is defined by the URL: ${userData["url"]}${browser.baseUrl}`,
async function () {
await detailsPage.navigateDesktop();
});
// Details page has a form and it can be filled out with user data
it("Has a form that can receive user data",
async function() {
await detailsPage.fillFormWithUserData();
await utils.click(detailsPage.getForm().buttons.nextStep);
});
if (detailsPage.hasStockConflict()) {
// Details page allows the user to fix conflicts in stocks
it('Enables resolution of stock conflicts', async function () {
// Wait for stock to fully load
await detailsPage.hasStockConflict();
await detailsPage.clickAllRemoveButtons();
await detailsPage.clickAllDecreaseButtons();
});
// Details page allows the user to proceed to the next stage when all conflicts (if any) has been resolved
it('Allows the user to proceed to the next stage of purchasing', async function () {
const nextStepButton = detailsPage.getForm().buttons.nextStep;
await utils.elementToBeClickable(nextStepButton);
await utils.click(nextStepButton);
});
}
});
However, my problem is that I need to wait until I get a response back from the server: either I get a stock conflict, which is detected by:
hasStockConflict() //checks if there is a stockConflict message in the DOM
or I get redirected to a new page.
My question is: how can I build something that checks whether there is a stock conflict, solves it in the if statement when there is one, and otherwise just continues without doing anything (which takes me to the next page)?
I have set a timeout of 1 minute. After 1 minute it will mark the test as failed.
Basically I want to run the if statement if there is a stock conflict and otherwise just skip it. I might have misunderstood the purpose of testing, so any sort of knowledge would also be appreciated!
To add to what Code-Apprentice has mentioned, you can set up mock data to get whichever response you need. You should have different responses mocked and, depending on the response, do one specific thing in one test. No if/else stuff in the steps.
In your case, for now, use items which you know are in stock, or add dummy items to your database which are always in stock and others which are always out of stock. Write separate tests for both, however you see fit.
Hope it helps!
Each test should test one specific thing. Tests should not contain if...else branching. Instead, you should have a test for each scenario. Each test should require initialized data that satisfies that specific scenario; a rough sketch is shown below.
You have two different ways to approach this:
Set up data in the resource that you query and request the specific data for the scenario being tested.
Mock the resource so that requests return mock data that is curated for the scenario being tested.
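As a rough illustration of the separate-tests-per-scenario idea (this reuses the page-object names from the question; addToCart, testData and nextPage are hypothetical helpers you would have to provide), each spec seeds the data it needs and contains no branching:
describe("Details page - no stock conflict", function () {
    it("proceeds straight to the next step", async function () {
        await detailsPage.addToCart(testData.inStockItems);    // hypothetical helper and fixture
        await detailsPage.fillFormWithUserData();
        await utils.click(detailsPage.getForm().buttons.nextStep);
        expect(await nextPage.isDisplayed()).toBe(true);       // hypothetical next page object
    });
});

describe("Details page - with stock conflict", function () {
    it("lets the user resolve the conflict and continue", async function () {
        await detailsPage.addToCart(testData.outOfStockItems); // hypothetical helper and fixture
        await detailsPage.fillFormWithUserData();
        await utils.click(detailsPage.getForm().buttons.nextStep);
        expect(await detailsPage.hasStockConflict()).toBe(true);
        await detailsPage.clickAllRemoveButtons();
        await utils.click(detailsPage.getForm().buttons.nextStep);
    });
});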
What everyone else is saying is that there are best practices one should follow in order to avoid pitfalls in the future...
However, best practice #1 is that it always depends on your company, your product, your needs. So if you decide you need to go this route, go for it.
Why your scenario doesn't work
Short answer: your it blocks are built before the browser starts. At that time your function can't run, and I assume it fails or returns undefined.
Answer
With that said ^, you can't skip the it block; just place your logic inside it, like this:
it('Enables resolution of stock conflicts', async function () {
    // Wait for the stock check to come back before deciding what to do
    if (await detailsPage.hasStockConflict()) {
        await detailsPage.clickAllRemoveButtons();
        await detailsPage.clickAllDecreaseButtons();
        const nextStepButton = detailsPage.getForm().buttons.nextStep;
        await utils.elementToBeClickable(nextStepButton);
        await utils.click(nextStepButton);
    }
});

Conflicting purposes of IndexedDB transactions

As I understand it, there are three somewhat distinct reasons to put multiple IndexedDB operations in a single transaction rather than using a unique transaction for each operation:
Performance. If you’re doing a lot of writes to an object store, it’s much faster if they happen in one transaction.
Ensuring data is written before proceeding. Waiting for the “oncomplete” event is the only way to be sure that a subsequent IndexedDB query won’t return stale data.
Performing an atomic set of DB operations. Basically, “do all of these things, but if one of them fails, roll it all back”.
#1 is fine, most databases have the same characteristic.
#2 is a little more unique, and it causes issues when considered in conjunction with #3. Let’s say I have some simple function that writes something to the database and runs a callback when it's over:
function putWhatever(obj, cb) {
    var tx = db.transaction("whatever", "readwrite");
    tx.objectStore("whatever").put(obj);
    tx.oncomplete = function () { cb(); };
}
That works fine. But now if you want to call that function as a part of a group of operations you want to atomically commit or fail, it's impossible. You'd have to do something like this:
function putWhatever(tx, obj, cb) {
    tx.objectStore("whatever").put(obj).onsuccess = function () { cb(); };
}
This second version of the function is very different than the first, because the callback runs before the data is guaranteed to be written to the database. If you try to read back the object you just wrote, you might get a stale value.
Basically, the problem is that you can only take advantage of one of #2 or #3. Sometimes the choice is clear, but sometimes not. This has led me to write horrible code like:
function putWhatever(tx, obj, cb) {
    if (tx === undefined) {
        tx = db.transaction("whatever", "readwrite");
        tx.objectStore("whatever").put(obj);
        tx.oncomplete = function () { cb(); };
    } else {
        tx.objectStore("whatever").put(obj).onsuccess = function () { cb(); };
    }
}
However even that still is not a general solution and could fail in some scenarios.
Has anyone else run into this problem? How do you deal with it? Or am I simply misunderstanding things somehow?
The following is just opinion as this doesn't seem like a 'one right answer' question.
First, performance is an irrelevant consideration. Avoid this factor entirely, unless later profiling suggests a material problem. Chances of perf issues are ridiculously low.
Second, I prefer to organize requests into transactions solely to maintain integrity. Integrity is paramount. Integrity as I define it here simply means that the database at any one point in time does not contain conflicting or erratic data. Essentially the database is never able to enter into a 'bad' state. For example, to impose a rule that cross-store object references point to valid and existing objects in other stores (a.k.a. referential integrity), or to prevent duplicated requests such as a double add/put/delete. Obviously, if the app were something like a bank app that credits/debits accounts, or a heart-attack monitor app, things could go horribly wrong.
My own experience has led me to believe that code involving indexedDB does not lend itself to the traditional facade pattern. I found that what worked best, in terms of organizing requests into different wrapping functions, was to design functions around transactions. I found that quite often there are very few DRY violations because every request is nearly always unique to its transactional context. In other words, while a similar 'put object' request might appear in more than one transaction, it is so distinct in its behavior given its separate context that it merits violating DRY.
If you go the function-per-request route, I am not sure why you are checking whether the transaction parameter is undefined. Have the caller create the transaction and then pass it to the requests in turn. Expect the tx to always be defined and do not over-zealously guard against it. If it is ever not defined, there is either a serious bug in indexedDB or in your calling function.
Explicitly, something like:
function doTransaction1(db, onComplete) {
    var tx = db.transaction(...);
    tx.oncomplete = onComplete;
    doRequest1(tx);
    doRequest2(tx);
    doRequest3(tx);
}

function doRequest1(tx) {
    var store = tx.objectStore(...);
    // ...
}
// ...
If the requests should not execute in parallel and must run in series, then this indicates a larger and more difficult design issue. A rough sketch of one way to sequence them is below.
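One way to run requests in series inside a single transaction (a sketch with a made-up "whatever" store, not a general solution) is to issue each request from the previous request's onsuccess handler; the transaction keeps accepting requests queued from its own callbacks and commits once none remain:
function putAllInSeries(db, objects, onComplete) {
    var tx = db.transaction("whatever", "readwrite");
    tx.oncomplete = onComplete;
    tx.onerror = function (event) {
        console.error("transaction failed:", event.target.error);
    };

    var store = tx.objectStore("whatever");
    var i = 0;

    function putNext() {
        if (i >= objects.length) {
            return; // nothing left to queue; the transaction will now commit
        }
        var request = store.put(objects[i]);
        i += 1;
        request.onsuccess = putNext; // queue the next put only after this one succeeds
    }

    putNext();
}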

array reference javascript angular

I'm trying to reference one item in an array, and I have no idea why this is not working:
console.log($scope.Times);
console.log($scope.Times[0]);
These two lines of code are EXACTLY after each other, but the output I get from the console is the following:
Output from the console (screenshot)
Any ideas why this is not working? The commands are exactly after each other, as I mentioned, and in the same function; the variable is global in my controller.
I can add more code if you think it would help, but I don't really see how.
Some more code:
$scope.Times = [];

$scope.getStatus = function(timer) {
    $http.post('getStatus.php', {timer : timer})
        .success(function(response) {
            $scope.response = response;
            if ($scope.response.Running === "0") {
                $scope.model = { ItemNumber : $scope.response.Part };
                $scope.loadTiming($scope.response.Part);
                console.log($scope.Times);
                console.log($scope.Times[0]);
            }
        });
};

$scope.loadTiming = function(itemNumber) {
    $http.post('getPartTimings.php', {itemNumber : itemNumber})
        .success(function(response) {
            $scope.selectedTiming = response;
            $scope.Times.splice(0);
            var i = 0;
            angular.forEach($scope.selectedTiming, function(value) {
                if (value !== 0)
                    $scope.Times.push({
                        "Process" : $scope.procedures[i],
                        "Duration" : value * 60
                    });
                i++;
            });
        });
};
<?php
$postData = file_get_contents("php://input");
$request = json_decode($postData);
require "conf/config.php";
mysqli_report(MYSQLI_REPORT_STRICT);
try {
    $con = mysqli_connect(DBSERVER, DBUSER, DBPASS, DBNAME);
} catch (Exception $exp) {
    echo "<label style='font-weight:bold; color:red'>MySQL Server Connection Failed. </label>";
    exit;
}
$query = 'SELECT *,
          TIME_TO_SEC(TIMEDIFF(NOW(), Timestamp))
          FROM live_Timers
          WHERE Timer=' . $request->timer;
$result = mysqli_query($con, $query);
$data = mysqli_fetch_assoc($result);
echo JSON_ENCODE($data);
thanks for your help.
OK, so more code does help. It looks like you have asynchronous logic happening here. loadTiming is fired, which does a POST and then a splice on the Times array. One console.log could be firing before this POST and the other after. There's no easy way to tell.
One possible fix would be to only log these once the loadTiming async process runs. Return a promise from the loadTiming function and then in the then callback of the promise, log your array.
$scope.getStatus = function(timer) {
    $http.post('getStatus.php', {timer : timer})
        .success(function(response) {
            $scope.response = response;
            if ($scope.response.Running === "0") {
                $scope.model = { ItemNumber : $scope.response.Part };
                $scope.loadTiming($scope.response.Part).then(function () {
                    console.log($scope.Times);
                    console.log($scope.Times[0]);
                });
            }
        });
};

$scope.loadTiming = function(itemNumber) {
    return $http.post('getPartTimings.php', {itemNumber : itemNumber})
        .success(function(response) {
            $scope.selectedTiming = response;
            $scope.Times.splice(0);
            var i = 0;
            angular.forEach($scope.selectedTiming, function(value) {
                if (value !== 0)
                    $scope.Times.push({
                        "Process" : $scope.procedures[i],
                        "Duration" : value * 60
                    });
                i++;
            });
        });
};
I think your issue is a $scope reference issue.
I would try this:
$scope.vm = {};
$scope.vm.Times = [];
Adding the "." is Angular best practice when attaching to $scope. This is best described here Understanding Scopes
I have experienced a similar situation a while ago, related with this issue.
Since then, I've encountered related issues a bunch of times (AngularJS, due to its cyclic nature, seems prone to producing this behaviour).
In your case, using JSON.stringify($scope.Times) might "fix" this.
Context
Usually this happens in this context:
An async call or an expensive DOM manipulation is made.
You make 2 (or more) calls to console.log in between.
The state of the DOM or object is changed
The output shows inconsistent (and strange) results
How
Take this example:
console.log(someObject);
console.log(someObject.property);
After digging a lot (and talking to Webkit developers) this is what I've found:
The second call to console.log is "resolved" first.
Why?
In your case, this has to do with how Console handles objects and "expressions" in different ways:
An "expression" is resolved at the time of the call, while for objects, a reference to the object is stored instead.
Note that expression is used loosely here. You can observe this behaviour in this fiddle
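A minimal reproduction of that difference, loosely modelled on the $scope.Times case (the names here are made up for illustration):
var times = [];

console.log(times);                 // a reference: expanding it later in the console can show the pushed item
console.log(times[0]);              // an "expression": resolved at call time, logs undefined
console.log(JSON.stringify(times)); // a snapshot: always logs "[]"

// mutate the array later, as the async $http callback does
setTimeout(function () {
    times.push({ Process: "Demo", Duration: 60 });
}, 0);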
More in depth analysis
Regarding display discrepancies, the behaviour posted above is not the only gotcha with Console. In fact, it is related to how Console works.
Console is an external tool
First you must realize that Console is an external tool and not part of the ECMAScript spec. Implementations differ between browsers and it shouldn't be used in production. It certainly won't work the same for every user.
Console is a non-standard external tool and is not on a standards track.
Console is dynamic
Console is a very dynamic tool. With Console you can make assertions (test), time and profile your code, group log entries, connect remotely to your server and debug server-side code. You can even change code itself, at runtime. So...
Console is not just a static log displayer... Its dynamic nature is one of its most powerful features.
Console has a slight delay
Being an external dynamic tool, Console works as a watcher process attached to the javascript engine.
This is useful in debugging and, among other things, prevents Console from inadvertently blocking the execution of the script. A simple and crude way of thinking about this is picturing console.log as a kind of non-blocking async call. This means that:
With Console, there's a slight delay between 1)call, 2)processing and 3)output.
However, calling Console is not "instant" per se. In fact, it can, by itself, delay script execution. If you mix this with complex DOM manipulations and events, it can cause weird behaviours.
I've encountered an issue with Chrome, when using MutationObserver and console.log. This happened because the DOM Painting was delaying the update of the DOM object but the event triggered by that DOM change was fired nevertheless. This meant the event callback was executed and finished before the DOM Object was fully updated, resulting in an invalid reference to the DOM object.
Using console.log in the observer caused a brief delay in the callback execution, that, in most of the times, was enough to let the DOM Object update first. This proves that console.log delays code execution.
But even when an invalid reference error occurred, console.log ALWAYS showed a valid object. Since the object couldn't have been changed by the code itself, this proves there is a delay between the call to console.log and the processing.
Console log order matches the code path
The order of Console log entries is unaffected by the entries' update status. In other words,
The order of the log entries reflect the order in which they are called, not their "freshness"
So, if an object is updated, it does not move to the end of the log. (makes sense to me)
Counterintuitive behaviour
This can lead to a number of possible counterintuitive behaviours because one might expect a console.log to be some kind of snapshot of the object, not a reference to it.
For instance, in your case, the object is changed between the call to console.log and the end of the script.
At the time of calling, $scope.Times is empty, so $scope.Times[0] is undefined.
However, the $scope.Times object is updated afterwards.
When the Console report is displayed, it shows an updated version of the object.
Fix
In your case, transforming the object in an "expression" can solve the "issue". For instance, you can use JSON.stringify($scope.Times).
Debate
It is debatable if the way console handles objects is a Bug or a Feature. Some propose that, when called with an object, console.log should clone that object making a kind of snapshot. Some argue that storing a reference to the object is preferable, since you can easily create a snapshot yourself if you wish to do so.

How to delete/remove nodes on Firebase

I'm using Firebase for a web app. It's written in plain Javascript using no external libraries.
I can "push" and retrieve data with '.on("child_added")', but '.remove()' does not work the way it says it should. According to the API,
"Firebase.remove() -
Remove the data at this Firebase location. Any data at child locations will also be deleted.
The effect of the delete will be visible immediately."
However, the remove is not occurring immediately; only when the entire script is done running. I need to remove and then use the cleared tree immediately after.
Example code:
ref = new Firebase("myfirebase.com") //works
ref.push({key:val}) //works
ref.on('child_added', function(snapshot){
//do stuff
}); //works
ref.remove()
//does not remove until the entire script/page is done
There is a similar post here but I am not using Ember libraries, and even so it seems like a workaround for what should be as simple as the API explains it to be.
The problem is that you call remove on the root of your Firebase:
ref = new Firebase("myfirebase.com")
ref.remove();
This will remove the entire Firebase through the API.
You'll typically want to remove specific child nodes under it though, which you do with:
ref.child(key).remove();
I hope this code will help someone - it is from the official Google Firebase documentation:
var adaRef = firebase.database().ref('users/ada');
adaRef.remove()
    .then(function() {
        console.log("Remove succeeded.");
    })
    .catch(function(error) {
        console.log("Remove failed: " + error.message);
    });
To remove a record:
var db = firebase.database();
var ref = db.ref();
var survey = db.ref(path + '/' + path); // e.g. path is company/employee
survey.child(key).remove();             // e.g. key is employee id
Firebase.remove(), like probably most Firebase methods, is asynchronous; thus you have to listen to events to know when something has happened:
parent = ref.parent()
parent.on('child_removed', function (snapshot) {
// removed!
})
ref.remove()
According to Firebase docs it should work even if you lose network connection. If you want to know when the change has been actually synchronized with Firebase servers, you can pass a callback function to Firebase.remove method:
ref.remove(function (error) {
    if (!error) {
        // removed!
    }
});
As others have noted the call to .remove() is asynchronous. We should all be aware nothing happens 'instantly', even if it is at the speed of light.
What you mean by 'instantly' is that the next line of code should be able to execute after the call to .remove(). With asynchronous operations the next line may run after the data has been removed, or it may not; it is totally down to chance and the amount of time that has elapsed.
.remove() takes one parameter, a callback function, to help deal with this situation and perform operations after we know that the operation has completed (with or without an error). .push() takes two params, a value and a callback, just like .remove().
Here is your example code with modifications:
ref = new Firebase("myfirebase.com")
ref.push({key:val}, function(error){
//do stuff after push completed
});
// deletes all data pushed so far
ref.remove(function(error){
//do stuff after removal
});
In case you are using axios and deleting via a service call:
URL: https://react-16-demo.firebaseio.com/
Schema Name: todos
Key: -Lhu8a0uoSRixdmECYPE
axios.delete('https://react-16-demo.firebaseio.com/todos/-Lhu8a0uoSRixdmECYPE.json')
    .then(() => { /* handle the response */ });
can help.

Meteor: Could a race condition happen with Meteor.collections on server side?

In my server/server.js:
Meteor.methods({
    saveOnServer: function() {
        var totalCount = Collections.find({
            "some": "condition"
        }).count();
        if (totalCount) {
            var customerId = Collections.update('someId', {
                "$addToSet": {
                    objects: object
                }
            }, function(err) {
                if (err) {
                    throw err;
                } else {
                    return true;
                }
            });
        } else {}
    }
});
I'm afraid that when saveOnServer() is called by 2 clients at the same time, it will return the same totalCount for each client and basically end up inserting the same integer number as the object id. The end goal is to insert a row on the server side with an atomic operation that only completes when the totalCount is successfully returned and the document is inserted, ensuring that no duplicate ids exist. I'm trying not to use the MongoDB _id but to have my own auto-incrementing integer id column.
I'm wondering how I can ensure that a field gets auto-incremented for each insert operation. I am currently relying on getting the total count of documents. Is a race condition possible here? If so, what is the Meteor way of dealing with this?
In Meteor's concurrency model, you can imagine a whole method as an uninterruptible block of stuff that happens. In order for Meteor to switch from running one method midway to, say, starting another method, you need to "yield": the method needs to signal, "I can be interrupted."
Methods yield whenever they do something asynchronous, which in practice means any time you do a database update or call a method with a callback in Meteor 0.6.5 and later. Since you give your update call a callback, Meteor will always try to do something in between the call to update and the update's callback. However, in Meteor 0.6.4.2 and earlier, database updates were uninterruptible regardless of the use of callbacks.
However, multiple calls to saveOnServer will happen in order and do not cause a race condition. You can call this.unblock() to allow multiple calls to saveOnServer to occur "simultaneously", i.e., not share the same queue of uninterruptible blocks of stuff.
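For reference, a minimal sketch of what that looks like (just the unblock call added to your method, nothing else):
Meteor.methods({
    saveOnServer: function () {
        // Let later method calls from this client start before this one finishes;
        // without this, calls to saveOnServer from one client run strictly in order.
        this.unblock();
        // ... rest of the method
    }
});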
Given the code you have, another method modifying Collections can change the value of count() between the call and the update.
You can prevent one method from making the other invalid midway by implementing the following data models:
saveOnServer: function () {
    // ...
    Collections.update({_id: someId, initialized: true, collectionCount: {$gt: 0}},
        {$addToSet: {objects: object}});
    // ...
}
When adding objects to Collections:
insertObject: function() {
    // ...
    var count = Collections.find({some: condition}).count();
    Collections.insert({_id: someId, initialized: false, collectionCount: count});
    Collections.update({initialized: false},
        {$set: {initialized: true}, $inc: {collectionCount: 1}});
}
Note, while this may seem inefficient, it reflects the exact cost of making an update and insert in different methods behave the way you intend. In saveOnServer you cannot insert.
Conversely, if you remove the callback from Collections.update, it will occur synchronously and there will be no race condition in Meteor 0.6.5 and later.
You can make this collection have a unique key on an index field, and then keep it updated as follows:
1) Whenever you insert into the collection, first do a query to get the maximum index and insert the document with index + 1.
2) To find out the number of documents just do the query to get the max of the index.
Insertion is now a pair of queries, a read and a write, so it can fail. (DB ops can always fail, though.) However, it can never leave the database in an inconsistent state - the Mongo index will guarantee that.
The syntax for building an index in Meteor is this:
MyCollection._ensureIndex('index', {unique: 1});
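A rough server-side sketch of steps 1 and 2 above (assuming the unique index is in place; the retry loop and the helper name are my own additions): read the current maximum, insert max + 1, and retry if the unique index rejects a concurrent duplicate.
function insertWithIndex(doc) {
    for (var attempt = 0; attempt < 5; attempt++) {
        var last = MyCollection.findOne({}, { sort: { index: -1 } });
        var nextIndex = last ? last.index + 1 : 1;
        try {
            // on the server, insert throws if the unique index rejects the value
            return MyCollection.insert(_.extend({ index: nextIndex }, doc));
        } catch (e) {
            // another insert won the race; read the max again and retry
        }
    }
    throw new Meteor.Error("insert-failed", "could not allocate a unique index");
}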
Another way to do this is a mechanism Hibernate/JPA follows, which is to set up a collision field. Most of the time this can be an update timestamp that is set on each update. Just prior to doing any update, query the update timestamp. Then you can specify the update with a condition that the update timestamp is still what you just fetched. If it has changed in the interim, the update won't happen, and you check the return code/count to see whether the row was updated or not.
JPA does this automatically for you when you add an annotation for this collision field, but this is essentially what it does behind the scenes.
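In Mongo/Meteor terms that collision-field idea might look roughly like this (a sketch reusing someId and object from the question, with a hypothetical updatedAt field): the update only matches while updatedAt is unchanged, so a concurrent writer makes the matched count come back as 0 and you know to re-read and retry.
var doc = Collections.findOne('someId');
var updatedCount = Collections.update(
    { _id: 'someId', updatedAt: doc.updatedAt },  // only match if nobody has updated it since we read it
    {
        $addToSet: { objects: object },
        $set: { updatedAt: new Date() }           // bump the collision field for the next writer
    }
);
if (updatedCount === 0) {
    // someone else updated the document in the meantime: fetch it again and retry
}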
