Getting trxId from PayPal buttons API when sending the request - javascript

I'm working on an integration with PayPal using the Smart Payment Buttons, and I need to catch the trxId generated when the user presses "Pay With PayPal". When I print the info to the console I can see the value I need, but when debugging or trying to read the variable I get undefined every time. What am I missing?
Below is an image that explains the issue.
My only guess so far is that there's some sort of security that prevents me from getting this value?

There is no transaction at the time createOrder is called. The buyer hasn't even signed in at that point, much less given their approval, much less has there been a successful capture.
A transaction is only created after a successful capture.
onApprove: function(data, actions) {
    return actions.order.capture().then(function(details) {
        console.log(details); // the transaction's ID will be within this object
    });
}
The above captures on the client side -- but you should not capture on the client side and then send data to a server.
If you need to do anything important with the transaction ID (such as store it in a database), you ought to be using a server-side integration instead. For this, create two routes, one for 'Create Order' and one for 'Capture Order', documented here. These routes should return JSON data.
Pair your two routes with the following approval flow: https://developer.paypal.com/demo/checkout/#/pattern/server
Reading your question again, though, you mention an ID "at the time the button is clicked", so perhaps you meant to ask about the order ID rather than the transaction ID. Well, the most obvious and best way to know the order ID is to create it directly on your own server.
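For illustration, here is a rough sketch of the two server routes mentioned above, assuming Express; the route paths and the createOrderWithPayPal / capturePayPalOrder helpers are hypothetical placeholders for whatever you use to call PayPal's Orders API (server SDK or direct REST calls).

app.post('/my-server/create-paypal-order', function (req, res) {
    // create the order on PayPal and hand its id back to the buttons as JSON
    createOrderWithPayPal(req.body).then(function (order) {
        res.json({ id: order.id });
    });
});

app.post('/my-server/capture-paypal-order', function (req, res) {
    // capture after buyer approval; the transaction id is inside the capture result,
    // so this is the place to store it in your database before responding
    capturePayPalOrder(req.body.orderID).then(function (capture) {
        res.json(capture);
    });
});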


How to establish communication between two users if the IDs generated by socket.io are auto-generated?

Hello, I hope you are doing well. I would like to ask a question whose answer I have not been able to figure out.
In a course I saw that to send a message to a specific user with socket.io you use the to() method with the target's id as a parameter. But that id is auto-generated by socket.io, as far as I understand, so how can the front-end know that id? The example in the course grabs the id from the console and passes it directly to the method, so it isn't very realistic. What I would like to know is: how do you build a one-to-one chat if the id is auto-generated by the socket?
For example, to start a conversation with another user you can click a button and trigger an event that emits the message along with the id of the user who wrote it, and the backend handles that event through the socket. But my question is: how does the server know who the message is being sent to? How do you know the id of the recipient when establishing communication between two users for the first time? Obviously the front-end must send it as a parameter, but how does the front-end get the id of the recipient in the first place? Can you store a fixed socket id for a user in a DB, or can you use your own DB id with sockets? That is the part I cannot work out.
I'm not sure I'm expressing the question well. Mainly, I don't know how the id of the message's recipient is obtained or assigned, whether it can be fixed and stored in a DB, or whether there is some other method for this.
Thank you in advance for your response; I would greatly appreciate any resources you can share on the topic, or a course recommendation.
As an example, I have this method:
io.on('connection', (client) => {
    client.on('privateMessage', (data) => {
        const person = user.getPersona(client.id); // look up the sender from their socket id
        client.broadcast.to(data.para).emit('privateMessage', createMsj(person.name, data.messages));
    });
});
But where does the front-end get the id of the person who will receive the message, so it can pass it to this method?
The front-end will not know the socket.io id of any other clients. This is where your server needs to be involved.
Each of your users presumably has some username that is displayed in the client UI and this is the name that other clients would know them by.
So, your server needs to keep a mapping between username and socket.io clientID. So, a user can send a request to your server to connect to BobS. Your server then needs to be able to look up BobS, find out if that user is currently connected and, if they are, then what is their socket.id value. That way, your server can facilitate connecting the two users.
This mapping would not typically be kept in a permanent store (such as a database) because the socket.id is a transient value and is only good for the duration of that client's socket.io connection. As such, it is more typically just kept in some sort of Javascript data structure (such as a Map object).
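A minimal sketch of that idea, assuming each client announces a username right after connecting; the 'register' event name, the users Map, and the data fields are illustrative, not part of socket.io itself.

const users = new Map(); // username -> socket.id, kept in memory only

io.on('connection', (client) => {
    client.on('register', (username) => {
        users.set(username, client.id);
    });

    client.on('privateMessage', (data) => {
        // data.to is the recipient's *username*; translate it to the current socket id
        const targetSocketId = users.get(data.to);
        if (targetSocketId) {
            io.to(targetSocketId).emit('privateMessage', data.message);
        }
    });

    client.on('disconnect', () => {
        // the socket id is no longer valid, so drop the mapping
        for (const [name, id] of users) {
            if (id === client.id) users.delete(name);
        }
    });
});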

BreezeJs: SaveChanges() server response getting dropped

I have BreezeJS running in an Angular app on a mobile device (Cordova), which talks to a .NET Web API.
Everything works great, except once in a while the device will get primary key violations (from my SQL Server).
I think I narrowed it down to only happening when the data connection on the device is shaky.
The only way I can figure these primary key violations are happening is that the server is saving the changes, but the mobile connection drops out before the response saying everything saved OK can come back from the server.
What is supposed to happen when BreezeJS doesn't hear back from the server after calling SaveChanges?
Does anyone familiar with BreezeJS know of a way to handle this scenario?
I've had to handle the same scenario in my project. The approach I took was two-part:
Add automatic retries to failed ajax requests. I'm using Breeze with jQuery, so I googled "jQuery retry ajax". There are many different implementations (mine is somewhat custom); all center around hijacking the onerror callback as well as the deferred's fail handler to inject retry logic. I'm sure Angular will have similar means of retrying dropped requests.
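For example, a minimal retry wrapper around $.ajax might look like this (ajaxWithRetry is a hypothetical helper, and your own implementation will differ): it re-issues a request a few times when it never reaches the server before giving up.

function ajaxWithRetry(settings, retries) {
    var dfd = $.Deferred();
    function attempt(remaining) {
        $.ajax(settings)
            .done(function () {
                dfd.resolve.apply(dfd, arguments);
            })
            .fail(function (jqXHR, textStatus, errorThrown) {
                // status 0 usually means the request never reached the server
                if (remaining > 0 && jqXHR.status === 0) {
                    attempt(remaining - 1);
                } else {
                    dfd.reject(jqXHR, textStatus, errorThrown);
                }
            });
    }
    attempt(retries);
    return dfd.promise();
}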
In the saveChanges fail handler, add logic like this:
...
function isConcurrencyException(reason: any) {
    return reason && reason.message && /Store update, insert, or delete statement affected an unexpected number of rows/.test(reason.message);
}

function isConnectionFailure(reason: any): boolean {
    return reason && reason.hasOwnProperty('status') && reason.status === 0;
}

entityManager.saveChanges()
    .then(... yay ...)
    .fail(function(reason) {
        if (isConnectionFailure(reason)) {
            // retry attempts failed to reach the server.
            // notify the user and save to local storage...
            return;
        }
        if (isConcurrencyException(reason)) {
            // EF is not letting me save the entities again because my previous save
            // (or another user's save) moved the concurrency stamps on the record.
            // There's also the possibility that a record I'm trying to save was
            // deleted by another user.
            // Recover... in my case I kept it simple and simply attempt to reload
            // the entity. If nothing is returned I know the entity was deleted;
            // otherwise I now have the latest version. In either case a message
            // is shown to the user.
            return;
        }
        if (reason.entityErrors) {
            // We have an "entityErrors" property... this means the save failed due
            // to server-side validation errors.
            // Do whatever you do to handle validation errors...
            return;
        }
        // an unexpected exception. let it bubble up.
        throw reason;
    })
    .done(); // terminate the promise chain (may not be an equivalent in Angular, not sure).
One of the ways you can test spotty connections is to use Fiddler's AutoResponder tab. Set up a *.drop rule with a regex that matches your breeze route and check the "Enable Automatic Responses" box when you want to simulate dropped requests.
This is a somewhat messy problem to solve; there's no one-size-fits-all answer. Hope this helps.
NOTE
Ward makes a good point in the comments below. This approach is not suitable in situations where the entity's primary key is generated on the server (which would be the case if your db uses identity columns for PKs) because the retry logic could cause duplicate inserts.

Prevent race conditions when concurrent API calls that write to the database occur (or when the server is slow)

Let's imagine a scenario where you have an endpoint used to create a user. This would be within a RESTful application, so let's imagine that a rich client calls this API endpoint.
exports.createUser = function(req, res) {
    if (req.body) {
        // Check if the email has already been used
        db.User.find({where: {email: req.body.email}}).success(function(user) {
            if (user === null || user === undefined) {
                // Create user
                res.send(201);
            } else {
                res.json(409, {error: 'User already exists'});
            }
        });
    } else {
        res.send(400);
    }
};
If I were to call this endpoint multiple times really fast, it would be possible to create multiple records with the same email in the database, even though the user table is queried first to make sure there are no duplicates.
I'm sure this is a common problem, but how would one go about preventing it? I thought about limiting the number of requests to certain endpoints, but that doesn't seem like a very good solution.
Any ideas?
Thank you very much!
The simplest option is to LOCK TABLE "users" IN EXCLUSIVE MODE at the beginning of the transaction that does the find and then the insert. This ensures that only one transaction can be writing to the table at a time.
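A rough sketch of that approach, assuming PostgreSQL and a newer promise-based version of Sequelize (the exact API differs from the callback-style code in the question, and the table name must match yours):

db.sequelize.transaction(function (t) {
    // serialize all find-then-insert sequences on the table
    return db.sequelize.query('LOCK TABLE "users" IN EXCLUSIVE MODE', { transaction: t })
        .then(function () {
            return db.User.findOne({ where: { email: req.body.email }, transaction: t });
        })
        .then(function (existing) {
            if (existing) {
                throw new Error('User already exists');
            }
            return db.User.create({ email: req.body.email }, { transaction: t });
        });
});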
For better concurrency, you can:
Define a UNIQUE constraint on email, then skip the find step. Attempt the insert and if it fails, trap the error and report a duplicate (see the sketch after this list); or
Use one of the insert-if-not-exists techniques known to be concurrency-safe
If using a unique constraint, one thing to consider is that your app might mark users as disabled w/o deleting them, and probably doesn't want to force email addresses to be unique for disabled users. If so, you might want a partial unique index instead (see the docs).
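Here is a sketch of the first option, assuming a newer promise-based version of Sequelize and a unique constraint already defined on the email column; the error name below is what that version of Sequelize raises on a constraint violation.

exports.createUser = function(req, res) {
    db.User.create({ email: req.body.email })
        .then(function (user) {
            res.send(201);
        })
        .catch(function (err) {
            if (err.name === 'SequelizeUniqueConstraintError') {
                // the database enforced uniqueness, even under concurrent requests
                res.json(409, { error: 'User already exists' });
            } else {
                res.send(500);
            }
        });
};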

Dropping a Mongo Database Collection in Meteor

Is there any way to drop a Mongo Database Collection from within the server side JavaScript code with Meteor? (really drop the whole thing, not just Meteor.Collection.remove({}); it's contents)
In addition, is there also a way to drop a Meteor.Collection from within the server side JavaScript code without dropping the corresponding database collection?
Why do that?
Searching in the subdocuments (subdocuments of the user-document, e.g. userdoc.mailbox[12345]) with underscore or similar turns out quite slow (e.g. for large mailboxes).
On the other hand, putting all messages (in context of the mailbox-example) of all users in one big DB and then searching* all messages for one or more particular messages turns out to be very, very slow (for many users with large mailboxes), too.
There is also the size limit for Mongo documents, so if I store all messages of a user in his/her user-document, the mailbox's maximum size is < 16 MB together with all other user-data.
So I want to have a separate collection for each of my users to use as a mailbox; then the maximum size for one message is 16 MB (very acceptable) and I can search a mailbox using Mongo queries.
Furthermore, since I'm using Meteor, it would be nice to then have this Mongo collection loaded as a Meteor.Collection whenever a user logs in. When a user deactivates his/her account, the collection should of course be dropped; if the user just logs out, only the Meteor.Collection should be dropped (and restored when he/she logs in again).
To some extent, I got this working already: each user has their own collection for the mailbox, but if anybody cancels his/her account, I have to delete that particular Mongo collection manually. Also, I have to keep all Mongo collections alive as Meteor.Collections at all times because I cannot drop them.
This is a well working server-side code snippet for one-collection-per-user mailboxes:
var mailboxes = {};
Meteor.users.find({}, {fields: {_id: 1}}).forEach(function(user) {
    mailboxes[user._id] = new Meteor.Collection("Mailbox_" + user._id);
});

Meteor.publish("myMailbox", function(_query, _options) {
    if (this.userId) {
        return mailboxes[this.userId].find(_query, _options);
    }
});
while a client just subscribes with a certain query with this piece of client-code:
myMailbox = new Meteor.Collection("Mailbox_" + Meteor.userId());
Deps.autorun(function() {
    var filter = Session.get("mailboxFilter");
    if (_.isObject(filter) && filter.query && filter.options) {
        Meteor.subscribe("myMailbox", filter.query, filter.options);
    }
});
So if a client manipulates the session variable "mailboxFilter", the subscription is updated and the user gets a new bunch of messages in the minimongo.
It works very nice, the only thing missing is db collection dropping.
Thanks for any hint already!
*I previously wrote "dropping" here, which was a total mistake. I meant searching.
A solution that doesn't use a private method is:
myMailbox.rawCollection().drop();
This is better in my opinion because Meteor could randomly drop or rename the private method without any warning.
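For example, wiring the drop into account deactivation could look roughly like this, reusing the server-side mailboxes map from the question; "deactivateAccount" is a hypothetical method name.

Meteor.methods({
    deactivateAccount: function () {
        if (!this.userId) {
            throw new Meteor.Error('not-authorized');
        }
        var mailbox = mailboxes[this.userId];
        if (mailbox) {
            mailbox.rawCollection().drop();  // drops the underlying MongoDB collection
            delete mailboxes[this.userId];   // forget the Meteor.Collection wrapper too
        }
    }
});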
You can completely drop the collection myMailbox with myMailbox._dropCollection(), directly from meteor.
I know the question is old, but it was the first hit when I searched for how to do this.
Searching in the subdocuments...
Why use subdocuments? A document per user I suppose?
each message must be its own document
That's a better way: a collection of messages, each id'ed to the user. That way you can filter what a user sees when doing publish/subscribe.
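A minimal sketch of that layout, with a single Messages collection, a userId field on every message, and a publication that only ever returns the logged-in user's mail (collection and field names are illustrative):

Messages = new Meteor.Collection('messages');

Meteor.publish('myMessages', function (query, options) {
    if (!this.userId) {
        return this.ready();
    }
    // force the userId filter so a client can never subscribe to someone else's mail
    return Messages.find(_.extend({}, query, { userId: this.userId }), options);
});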
dropping all messages in one db turns out to be very slow for many users with large mailboxes
That's because most NoSQL DBs (if not all) are geared towards read-intensive operations rather than write-intensive ones. So writing (updating, inserting, removing, wiping) will take more time.
Also, some online services (I think it was Twitter or Yahoo) will tell you when deactivating the account: "Your data will be deleted within the next N days." or something that resembles that. One reason is that your data takes time to delete.
The user is leaving anyway, so you can just tell them that their account has been deactivated and their data will be deleted from your databases in the following days. On top of that, so you can respond to the user immediately, do the remove operation asynchronously by passing it a blank callback.
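Sketched out, using the hypothetical Messages collection from the earlier sketch: passing a callback to remove makes it run asynchronously on the server, so you can answer the user right away (departingUserId is a placeholder for the id of the account being removed).

Messages.remove({ userId: departingUserId }, function (err) {
    // runs later; log any failure here instead of making the user wait for it
});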

How to check for a failed recurring subscription in Stripe

How should I design an on-login middleware that checks if the recurring subscription has failed? I know that Stripe fires events when things happen, and that the best practice is webhooks. The problem is, I can't use webhooks in the current implementation, so I have to check when the user logs in.
The Right Answer:
As you're already aware, webhooks.
I'm not sure what you're doing that webhooks aren't an option in the current implementation: they're just a POST to a publicly-available URL, the same as any end-user request. If you can implement anything else in Node, you can implement webhook support.
Implementing webhooks is not an all-or-nothing proposition; if you only want to track delinquent payments, you only have to implement processing for one webhook event.
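As a minimal sketch, assuming an Express app with a JSON body parser and the official stripe npm package, a single endpoint handling just the invoice.payment_failed event might look like this (the route path and the markCustomerDelinquent helper are hypothetical):

app.post('/webhooks/stripe', function (req, res) {
    var event = req.body;
    if (event.type === 'invoice.payment_failed') {
        var customerId = event.data.object.customer;
        // record the failed payment in your own database (hypothetical helper)
        markCustomerDelinquent(customerId);
    }
    // acknowledge receipt so Stripe stops retrying the event
    res.status(200).end();
});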
The This Has To Work Right Now, Customer Experience Be Damned Answer:
A retrieved Stripe Customer object contains a delinquent field. This field will be set to true if the latest invoice charge has failed.
N.B. This call may take several seconds—sometimes into the double digits—to complete, during which time your site will appear to have ceased functioning to your users. If you have a large userbase or short login sessions, you may also exceed your Stripe API rate limit.
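A rough sketch of what that on-login check could look like, assuming Express-style middleware and that you store each user's Stripe customer id yourself (req.user.stripeCustomerId and the session flag are hypothetical):

function checkDelinquentOnLogin(req, res, next) {
    stripe.customers.retrieve(req.user.stripeCustomerId, function (err, customer) {
        if (!err && customer.delinquent) {
            // flag the session so the app can nag the user about updating their card
            req.session.paymentFailed = true;
        }
        next(); // never block login on a slow or failed Stripe call
    });
}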
I actually wrote the Stripe support team an email complaining about this issue (the need to loop through every invoice or customer if you're trying to pull out delinquent entries) and it appears that you can actually do this without webhooks or wasteful loops... it's just that the filtering functionality is undocumented. The current documentation shows that you can only modify queries of customers or invoices by count, created (date), and offset... but if you pass in other parameters the Stripe API will actually try to understand the query, so the cURL request:
https://api.stripe.com/v1/invoices?closed=false&count=100&offset=0
will look for only open invoices... You can also pass a delinquent=true parameter when looking for delinquent customers. I've only tested this in PHP, so returning delinquent customers looks like this:
Stripe_Customer::all(array(
    "delinquent" => true
));
But I believe this should work in Node.js:
stripe.customers.list(
    { delinquent: true },
    function(err, customers) {
        // asynchronously called
    }
);
The big caveat here is that because this filtering is undocumented it could be changed without notice... but given how obvious the approach is, I'd guess that it's pretty safe.
