What is meant by webhook? - javascript

I have read http://en.wikipedia.org/wiki/Webhook but I am still not clear about the webhook concept.
I have the following doubts about webhooks:
1. Can anyone explain the use of a webhook with a real-world example?
2. Why should I use a webhook in an application?

As alluded to in the Wikipedia article, an excellent real-world example is a source code repository like github. Suppose you're using github to manage your source, and a separate tool (bamboo, jenkins, whatever) to perform continuous integration. Every time you push code to github, you want it to trigger a build in your CI tool. How're we going to make that happen?
Given the topic, it shouldn't be surprising that the answer is 'webhooks'.
Github offers a variety of webhook triggers. See https://developer.github.com/webhooks/ for their documentation - the concrete example may help. In brief, however, each webhook consists of:
An event which triggers the hook (such as 'code was pushed to the repository')
A URL that github should send a request to when the event occurs (such as an incoming trigger-point in your CI package)
A payload (the request body that will be sent to the selected URL).
The important thing here is that github doesn't know what CI system you're using. It doesn't care. It knows about events that occur in its domain, and it's up to the outside system to register its interest and decide what to do with the notification. This creates a highly generic and scalable interface, and avoids requiring github to make any (potentially limiting) assumptions about who or what may want to react to its events.

In simple terms, webhooks are extension points which allow others to extend your application.
You define webhooks (extension points), users register their functions (or callback URLs) with these hooks, and whenever an extension point is reached your application calls the registered functions.
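The register/fire pattern described above can be sketched as a toy in-process hook registry; real webhooks do the same thing over HTTP, with a URL standing in for the callback. The hook name and payload here are made up for illustration.

```javascript
// Toy illustration of extension points: the application defines named hooks,
// outsiders register callbacks, and the application invokes them when the
// extension point is reached.
const hooks = new Map();

function register(hookName, fn) {
  if (!hooks.has(hookName)) hooks.set(hookName, []);
  hooks.get(hookName).push(fn);
}

function fire(hookName, payload) {
  // Call every registered function and collect their results.
  return (hooks.get(hookName) || []).map(fn => fn(payload));
}

// Usage: a CI tool registers interest in pushes...
register('code-pushed', payload => `building ${payload.branch}`);
// ...and the "repository" fires the hook when the event occurs:
// fire('code-pushed', { branch: 'main' })
```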

How do I prevent the Google API from being used by others?

I'm going to make a project using the Google Translate API, and I'm thinking of uploading this project to a server and just sharing it with my friends. But unfortunately, the API key that I will use in the project can be seen in plain text in the JavaScript file. This is a very bad situation. To prevent this, I have restricted the Google Cloud API key, and as far as I understand it is only allowed to be used on the sites I allow; it cannot be used anywhere else. Now my main question is: is this method enough to protect the API key from malicious people? Do I need to do anything else? Thank you in advance for your answers.
Best practice in these cases is to keep data like API keys in .env files on the server, so they stay private.
You have to create a server for that, which will perform OAuth and then send the API request to Google.
You can get help on how to implement OAuth from this topic provided by Google: https://developers.google.com/identity/protocols/oauth2/javascript-implicit-flow
If you embed your API key in frontend JavaScript, anyone can extract it and use it to:
Send fake requests which will use up all of your quota, bandwidth, etc.
You should also consult the TOS.
On November 5th, 2014, Google made some changes to the APIs Terms of Service.
Like you, I had an issue with the following line:
Asking developers to make reasonable efforts to keep their private keys private and not embed them in open source projects.
That is, however, really only an issue if you are releasing the source code of your app as an open-source project, for example.
If you're just hosting this on a server, then what you should do is set up application restrictions for the API key: you can limit it so that the key can only be used from your server and nowhere else.

How to design a proactive monitoring system?

This is a vague question on design. I have a microservice which performs order management. The service orchestrates every order from Placed to Delivered; a lot of things happen in between. Let's say these are the statuses an order can be in:
Placed
Authorized
Shipped
Delivered
I have an Elasticsearch dashboard which visualizes when an order is stuck in a particular status and not moving forward - this is a kind of reactive approach. I want to design a monitoring subsystem which actually monitors that every order placed in the system moves to the next status within the configured SLA.
The general idea would be to tag every order placed and have a cron worker which checks whether each order crossed the configured SLA for its status. But I'm thinking this won't scale well: if we have, say, 100k orders placed in a single day, cron is not a good way to design this kind of system.
So how do people solve these design problems? Pointers to any existing approach / any idea are welcome.
You mentioned a microservice, so I think the most "scalable" way of doing it while respecting a microservice architecture would be to perform the monitoring asynchronously. If you don't already have one, you could set up a message queueing service like Google Pub/Sub or RabbitMQ. There are a lot of different message queueing services out there with specific features and performance characteristics, so you'd need to do some research to find the best fit for your use case.
Once you have set up your MQ service, your Order microservice would dispatch a message like { orderId: 12345, status: 'Authorized', timestamp: 1610118449538, whatEver: 'foo' }. That way the message can be consumed by any service subscribed to your specific topic (depending on the architecture of your MQ).
Then I would develop another microservice: the Monitoring microservice. It would subscribe to the topics published by the Order microservice. This way it is aware of every order status change, and you could set up crons on it to check, e.g. every 5 minutes, which orders it hasn't received a status-change message for, and act accordingly. This microservice could communicate with your Elasticsearch. I'd also recommend sharing as much as possible of the code managing the business logic around order status changes between the Order and Monitoring microservices, for example via private NPM packages. This way you are less likely to end up with business-requirement mismatches between the two microservices.
Using an MQ service allows you to scale as much as needed, because you can then horizontally scale your Monitoring and Order microservices. You'd need some kind of lock/semaphore mechanism between the different instances of your Monitoring service, though, so the same message isn't handled by multiple instances. If any microservice shuts down, the queue stores the messages to prevent data loss; once back up, it can process the queued messages. You'd have to consider how to handle downtime of your MQ service too.
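The Monitoring microservice's core logic can be sketched independently of any particular MQ. The message shape follows the example above; the SLA table (one limit per status) and the sweep cadence are assumptions for illustration.

```javascript
// Core of the Monitoring service: track the last-seen status per order from
// queue messages, and periodically sweep for orders that breached their SLA.
const SLA_MS = {                      // max time an order may sit in each status
  Placed:     15 * 60 * 1000,         // 15 minutes
  Authorized: 60 * 60 * 1000,         // 1 hour
  Shipped:    72 * 60 * 60 * 1000,    // 72 hours
};

const lastSeen = new Map();           // orderId -> { status, timestamp }

// Called for every message consumed from the queue.
function onStatusMessage(msg) {
  if (msg.status === 'Delivered') {
    lastSeen.delete(msg.orderId);     // terminal state: stop tracking
  } else {
    lastSeen.set(msg.orderId, { status: msg.status, timestamp: msg.timestamp });
  }
}

// Run periodically (cron / setInterval): return orders that breached their SLA,
// e.g. to raise alerts or write to Elasticsearch.
function findBreaches(now) {
  const breaches = [];
  for (const [orderId, { status, timestamp }] of lastSeen) {
    if (now - timestamp > SLA_MS[status]) breaches.push({ orderId, status });
  }
  return breaches;
}
```

Note that the sweep is O(orders currently in flight), not O(all orders ever placed), which is what makes this cheaper than scanning an order table from cron.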

Transfer Data from Click Event Between Bokeh Apps

I have two Bokeh apps (on Ubuntu \ Supervisor \ Nginx), one that's a dashboard containing a Google map and another that's an account search tool. I'd like to be able to click a point in the Google map (representing a customer) and have the account search tool open with info from the point.
My problem is that I don't know how to get the data from A to B in the current framework. My ideas at the moment:
Have an event handler for the click and have it both save a cookie and open the account web page. Then, have some sort of js that can read the cookie and load the account.
Throw my hands up, try to put both apps together and just find a way to pass it in the back end.
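The cookie idea in the first option can be sketched with two small helpers: one the dashboard's click handler would call before opening the search tool, one the search tool would call on load. The cookie name `selected_account` and the payload fields are assumptions, and both apps must be served from the same site for the cookie to be shared. Passing the document object in makes the helpers testable outside a browser.

```javascript
// Hand a clicked account from app A to app B via a shared cookie.
// In the browser, call these with the global `document`.

function writeSelection(doc, account) {
  // Written by the dashboard's click handler before opening the search tool.
  doc.cookie = `selected_account=${encodeURIComponent(JSON.stringify(account))}; path=/`;
}

function readSelection(doc) {
  // Read by the account-search app on load; null if nothing was handed over.
  const m = doc.cookie.match(/(?:^|;\s*)selected_account=([^;]*)/);
  return m ? JSON.parse(decodeURIComponent(m[1])) : null;
}
```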
The cookies idea might work fine. There are a few other possibilities for sharing data:
a database (e.g. redis or something else, that can trigger async events that the app can respond to)
direct communication between the apps (e.g. with ZeroMQ or similar). The Dask dashboard uses this kind of communication between remote workers and a Bokeh server.
files and timestamp monitoring if there is a shared filesystem (not great, but sometimes workable in very simple cases)
Alternatively if you can run both apps on the same single server (even though they are separate apps) then you could probably communicate by updating some mutable object in a module that both apps import. But this would not work in a scale-out scenario with more than one Bokeh server running.
For any/all of these somewhat advanced usages, a working example would make a great contribution to the docs, so that others can use it to learn from.

CouchDB and Cloudant Security

We have used CouchDB in production, mainly building apps in controlled environments. Most times, we use a middleware library to make the calls onto CouchDB/Cloudant, hence avoiding direct front-end JavaScript calls onto CouchDB/Cloudant.
For security reasons, it is obvious that for an authenticated CouchDB database, http://{username}:{password}@IPAddress:Port/DB, or for Cloudant, https://{username}:{password}@username.cloudant.com/DB,
if the call is made directly from JavaScript, the developer tools in today's browsers enable a person to see this call and hence gain access to your entire database.
Attachments are usually painful when handled in the middleware. It is advantageous to let Cloudant handle the caching and serving of attachments directly to the front end, hence relieving our middleware of that. However, on the web and with a huge audience, making direct calls to our Cloudant environment is tricky.
We started out by first of all having a separate Cloudant account for all attachments, such that an inquisitive boy cannot tamper with the actual metadata or information of our users. So the only Cloudant account they can have access to is that of the attachments, since we are making direct JavaScript calls to our database.
Question: How do we find a way to hide the username and password of our Cloudant environment, thereby allowing us to securely make direct JavaScript calls onto Cloudant? Our infrastructure is entirely in the cloud, so we don't have proxies and such to work with. We have heard of URL-shortening services, CDNs, etc., but we have not come up with a really conclusive solution.
Try using the _session endpoint. This will set up cookie authentication.
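As a sketch of that: CouchDB's /_session endpoint accepts a JSON body with `name` and `password` and answers with an AuthSession cookie, so later calls can omit the credentials from the URL. The helper below only builds the request; in the browser you would pass it to fetch() with `credentials: 'include'` so the cookie is stored and sent automatically. The base URL and user names are placeholders.

```javascript
// Build a cookie-auth login request for CouchDB/Cloudant's /_session endpoint.
// Subsequent requests carry the AuthSession cookie instead of user:pass in the URL.

function sessionRequest(baseUrl, name, password) {
  return {
    url: `${baseUrl}/_session`,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name, password }),
    credentials: 'include', // let the browser keep and resend the session cookie
  };
}

// Usage (browser):
// const req = sessionRequest('https://username.cloudant.com', 'member', 'secret');
// fetch(req.url, req).then(...); // later calls omit the password entirely
```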
How do we find a way in which we hide the Username and Password of our cloudant environment thereby allowing us to securely make direct JavaScript calls onto cloudant ?
As far as I know you can't do that without using middleware or some kind of proxy. But that does not mean we are completely defenceless; CouchDB gives us some spears to poke the inquisitive boy with :)
So a good thing you have done is to make the attachments database separate. You don't mention in your question whether you are using CouchDB's authorization scheme, so I'm going to assume that you are not. The first step, then, is to create a user in CouchDB's _users database and assign it as a member of the attachments database. More details here and here.
After this step you should have a member on the attachments database. The reason we want a member and not an admin is that members do not have permission to read or write design documents.
It's a start, but it's not enough, since a member can still read via _all_docs, and that is a DoS attack right there. So the problem we face now is that at the moment we do this:
https://{username}:{password}@username.cloudant.com/DB
A very good move would be to change it to
https://{username}:{password}@someurl.com/
What's the difference between the two? It hides the location of your database and makes accessing the built-in endpoints harder. This can be accomplished with the help of the vhosts configuration and some rewrite rules. Some very good stuff is on Caolan's blog too.
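As a sketch of the vhosts + rewrite idea (classic CouchDB 1.x-style configuration, which Cloudant also supported): in local.ini you map the public domain to a design document's _rewrite handler, e.g. a `[vhosts]` section containing `someurl.com = /attachments/_design/app/_rewrite`, and the design document then exposes only the routes you choose. All database, document, and field names here are made up.

```json
{
  "_id": "_design/app",
  "rewrites": [
    { "from": "/files/:docid/:att", "to": "../../:docid/:att", "method": "GET" }
  ]
}
```

With only this rule in place, a request for someurl.com/files/doc1/logo.png is rewritten to the attachment, while _all_docs and every other endpoint is simply not routable through the vhost.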
With this in place you have got two things going for you.
The inquisitive boy will be clueless about where the calls go.
There is no way he can get the contents of unknown documents by making direct calls. He can only access your database through the rules that you set.
Still not 100% secure, but it's okay as far as read-level security goes. Hope this helps.

Can I access an API without authentication in JavaScript?

Circumstances
I'm developing a WebApp with AngularJS.
I have a RESTful API on the server side with GET and POST commands.
I want to use the API within my module (meaning: in JavaScript) to display and edit my data.
I want to protect the API with some kind of authentication (basic auth with an API key, for example).
I don't want to protect the API when a user uses the app itself.
Actual question
Okay, I guess the last point is a bit unclear.
I want a user to be able to use the app in his browser without any authentication.
But when a third-party app wants to access the API, it has to use authentication.
Since JavaScript is executed on the client side, of course I can't write a master key into the JS or anything similar.
Is there any kind of pattern or solution to solve this problem?
More specifications
referring to #EliranMalka and #shaunhusain
On the server side I use Tornado with its built-in template engine. I actually use the template engine just to render the index page and insert CSS and JS dynamically.
The code for authentication would be just something like:
def is_authenticated(request):
    if 'api_key' in request.arguments:
        # parameterized query rather than string interpolation (avoids SQL injection)
        return sql('SELECT id FROM keys WHERE key = %s', request.arguments['api_key']).count == 1
    return False
My AngularJS module is doing something similar to:
$http.get('/api/foo?api_key=1234')
  .then(function (result) {
    $scope.data = result.data;
  });
As you can see, I'm writing my API key into the JS at the moment. But I want to avoid this.
Also, what do you mean exactly by third-party?
A request that is not third-party would be: using the app at http://app.example.com with a browser.
A third-party request would be from an Android app, for example - something that comes from outside or remote.
A JS request from the browser on the actual page would not count as remote (again: since it's JS, it technically is from remote - but I hope it's clearer now).
Oh and before I forget...
I'm aware that my plan is a bit weird - but it's just a learning(-web-development)-by-doing project.
Also, the API key is not really there to prevent abuse; it is rather to log third-party usage.
PS: I hope my question was clear to you.
Hmm, well I'll try to address the questions but here's a few things.
The question isn't really appropriate in its current format for stackoverflow.com (questions should be of the form "I tried X and Y happened"); it's perhaps closer to a StackExchange question, but it is still fairly open-ended.
Include more information about the specifics of the languages (and/or frameworks) you're using server-side, and any code you have that is relevant (authentication code?).
Putting the key into the client code and transmitting it from the client means anyone with a web proxy (check out Charles or Wireshark) can grab the key so just to reiterate you're right there that's not the way to go.
Check out how other organizations allow you to get access to their APIs (for example Google, LinkedIn, Facebook, Twitter) to get a feel for how it works. In all of these cases you are signed into the service to be able to create an API key, and in some cases you have to specify which domain requests with that API key will come from. If you take the same precautions, check the API key sent with a request against a database of registered API users, and verify the domain in the request, then I'd say you're in pretty good shape.
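The two-tier check described above (first-party requests pass without a key; third-party requests need a key that matches its registered domain) can be sketched server-side. The key table, the `x-api-key` header name, and the origins are illustrative assumptions; note that the Origin header is only a soft signal, which is part of why this scheme is better for logging usage than for hard security.

```javascript
// Decide whether a request may use the API: first-party (our own app's origin)
// passes freely; third-party must present a key registered for its domain.
const registeredKeys = new Map([
  // apiKey -> domain the key was registered for (hypothetical entries)
  ['abc123', 'partner.example.com'],
]);

const OWN_ORIGIN = 'https://app.example.com'; // requests from our own app

function isAllowed(req) {
  if (req.headers.origin === OWN_ORIGIN) return true; // first-party: no key needed
  // Third-party: require a key, and require it to match its registered domain.
  const key = req.headers['x-api-key'];
  const domain = (req.headers.origin || '').replace(/^https?:\/\//, '');
  const registered = registeredKeys.get(key);
  return registered !== undefined && registered === domain;
}
```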
