Azure Service Bus: automatically receive messages in JavaScript - javascript

I'm very new to Azure. Is there any way to receive messages from a queue/topic automatically, instead of building some kind of cron mechanism that polls for new messages? This kind of functionality is available in the RabbitMQ client.

The question is a bit vague... In the C# SDK there is an OnMessage callback that you can subscribe to; see the docs and a full example in C#.
For a fully automated, serverless way of handling messages, have a look at the Azure Functions Service Bus trigger. It's based on the WebJobs SDK, which you could use directly in self-hosted apps.
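For the JavaScript side of the original question: the @azure/service-bus SDK (v7) offers a push-style receiver, so no polling loop is needed. Below is a minimal sketch under assumed names (the queue name and connection string are placeholders, and the wiring function is defined but not invoked here):

```javascript
// Pure helper: turn a received message into a log line (testable without Azure).
function describeMessage(msg) {
  return `received ${JSON.stringify(msg.body)} (id: ${msg.messageId})`;
}

// Sketch: subscribe() registers callbacks, so the SDK pushes messages to you
// instead of you polling on a timer. Not invoked in this snippet.
function startListening(connectionString, queueName) {
  const { ServiceBusClient } = require("@azure/service-bus"); // npm install @azure/service-bus
  const client = new ServiceBusClient(connectionString);
  const receiver = client.createReceiver(queueName);
  receiver.subscribe({
    processMessage: async (msg) => {
      console.log(describeMessage(msg));
      // If autoCompleteMessages is disabled, you would also call:
      // await receiver.completeMessage(msg);
    },
    processError: async (err) => {
      console.error("Service Bus error:", err.error);
    },
  });
  return client; // caller can later: await client.close();
}
```

This is the closest JavaScript equivalent of the C# OnMessage pattern mentioned above.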

Related

How to communicate securely between shell app and micro application(frontend) via pubsub

I have a shell application, which is the container application that performs all the API communication. I also have multiple micro applications which just broadcast the API request signal to the shell application.
Now, keeping security in mind, how can the shell application ensure that the API request signal is coming from a trusted micro app that I own?
To be very precise, my ask is: is there a way to let the shell application know that the signal is coming from a micro app that it owns, and not from an untrusted source (e.g. hacking, XSS)?
As per the micro-frontend architecture, each micro frontend should make calls to its own API (microservice). However, your shell app can provide some common/global library which helps the micro frontends make the AJAX call. But the onus of making the call must remain with the individual micro frontend.
From your question it is unclear if your apps are running in iframes, or are being loaded directly into your page.
In the case of iframes, you're using postMessage, and you can check the origin of a received message via event.origin. Compare this with a list of allowed domains.
If your micro apps are loaded directly into your page, then you simply control what is allowed to load into them.
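The origin check described above can be sketched as follows (the origins are placeholders; the validation is a pure function so it can be tested with a mock event):

```javascript
// Origins the shell app trusts; placeholders for your real micro-app hosts.
const ALLOWED_ORIGINS = [
  "https://micro-a.example.com",
  "https://micro-b.example.com",
];

// Returns the message payload if the sender's origin is trusted, else null.
function acceptMessage(event, allowedOrigins) {
  if (!allowedOrigins.includes(event.origin)) {
    return null; // drop messages from unknown windows (ads, injected scripts, ...)
  }
  return event.data;
}

// In the browser you would wire it up like this (not executed here):
// window.addEventListener("message", (event) => {
//   const payload = acceptMessage(event, ALLOWED_ORIGINS);
//   if (payload !== null) { /* handle the API request signal */ }
// });
```

Note that event.origin is set by the browser, not the sender, which is what makes this check trustworthy for iframe-based setups.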
So, in most microfrontends, each microapp does its own API calls to the corresponding microservice on the backend, and the shell app is ignorant of it. The most the shell app would do relative to this is passing some app config to all microapps which has config like the hostname of the various backends, and, an auth token if all the backends are using the same auth.
But to ensure the shell app doesn't have, say, an advertisement with malicious code trying to pose as another microapp, well...
How are the microapps talking to the shell? Is there a common custom event? The name of the custom event would have to be known to the intruder, but that's only security by obscurity, which isn't real security.
other methods like postMessage are between window objects, which I don't think help your case.
You might be able to re-use the authToken that the shell and microapps both know, since it was communicated at startup. But if you have microapps that come and go that won't work either.

Sending and Receiving data to pubnub website

How do I send database data to PubNub, and how do I receive data from PubNub for live data streaming?
It's not clear.
Not sure what the actual question is here but I'll attempt to address it.
PubNub is not really a website, it's a realtime network service with serverless Functions that can process the messages that are published through that network.
You send data to PubNub using the publish or fire API. You receive data using the subscribe API. You just need to pick a PubNub SDK (there are over 70 to choose from) to implement your app using these APIs. Here's the JavaScript SDK, for example. Put Functions aside for now and concentrate on the publish/subscribe part first.
Also, you might want to review the How PubNub Works docs to understand what PubNub is (and is not) and what it can do for the applications you use it in.
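A minimal sketch of the publish/subscribe flow with the PubNub JavaScript SDK, under assumed names (the keys, channel name, and row shape are placeholders; the wiring function is defined but not invoked here):

```javascript
// Pure helper: shape a database row into a message payload (testable locally).
function rowToMessage(row) {
  return { id: row.id, value: row.value };
}

// Sketch: subscribe to a channel for receiving, publish rows for sending.
function startStreaming(publishKey, subscribeKey) {
  const PubNub = require("pubnub"); // npm install pubnub
  const pubnub = new PubNub({
    publishKey,
    subscribeKey,
    userId: "db-streamer", // called uuid in older SDK versions
  });

  // Receiving: register a listener, then subscribe to the channel.
  pubnub.addListener({
    message: (event) => console.log("got", event.message),
  });
  pubnub.subscribe({ channels: ["live-data"] });

  // Sending: publish each database row as it changes.
  return (row) =>
    pubnub.publish({ channel: "live-data", message: rowToMessage(row) });
}
```

The returned function would be called from whatever mechanism watches your database for changes (triggers, change streams, or a simple loop).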

Azure IOT Hub Rest API with Javascript

I would like to receive IoT Hub messages from an endpoint, using client-side JavaScript and REST.
I used this article https://msdn.microsoft.com/nl-nl/library/mt590786.aspx to create the URL.
This is my code:
function readIOTHub()
{
    // Note: jQuery's method is $.getJSON (not $.getJson), and this request
    // also needs an Authorization header carrying a SAS token.
    $.getJSON("https://MyIOTHub.azure-devices.net/devices/device1/messages/devicebound?api-version=2016-02-03", function(result)
    {
        alert(result);
    });
}
But my request is not receiving any messages.
Does someone know how to receive messages from IoT Hub with JavaScript and REST?
I don't think this is currently possible, first because, from what I see, the Azure IoT Hub REST API does not support CORS (i.e. it doesn't set the Access-Control-Allow-Origin response header), so your JS client can't access it from within the browser.
You might want to take a look at the Node.js SDK for IoT Hub, but then again this is in the context of Node.js.
If you are OK with Node.js, it becomes much simpler.
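For example, receiving cloud-to-device messages with the azure-iot-device Node.js SDK looks roughly like this (the connection string is a placeholder, and the wiring function is defined but not invoked here):

```javascript
// Pure helper: decode a message body to text (testable without a hub).
function messageText(msg) {
  return msg.data.toString("utf8");
}

// Sketch: connect as the device over MQTT and handle device-bound messages.
function receiveDeviceMessages(connectionString) {
  const { Client } = require("azure-iot-device");    // npm install azure-iot-device
  const { Mqtt } = require("azure-iot-device-mqtt"); // npm install azure-iot-device-mqtt
  const client = Client.fromConnectionString(connectionString, Mqtt);

  client.on("message", (msg) => {
    console.log("device-bound message:", messageText(msg));
    client.complete(msg, () => {}); // acknowledge so the hub doesn't redeliver
  });
  client.open((err) => {
    if (err) console.error("could not connect:", err);
  });
  return client;
}
```

Unlike the browser REST approach, the SDK handles authentication and transport for you, which is why the Node.js route is so much simpler.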
Hope this helps!
Mert

Azure Event Hubs: How to grant SAS tokens to Javascript publishers (running in browser)?

I am building a website analytics solution based on Azure Event Hubs. I have Javascript code embedded in each web page that publishes events directly to an Event Hub via the Azure Event Hubs REST API.
The REST API requires that each call be authenticated via a SAS token. My question is: do I have to code up a server-side endpoint that provides my publishers with temporary tokens before they can start publishing?
Are there alternative approaches?
1. Does the REST API provide this "authenticate" endpoint out of the box? (couldn't find it here)
2. Or, how terrible, security-wise, would it be to have a token hard-coded into the client-side code?
3. Or, technically feasible but security-wise much worse than option 2: hard-code the Event Hub's Shared Access Key in the client-side code and use something like the (unofficial) Azure ServiceBus JavaScript SDK to generate the SAS token on the fly?
The Event Hubs REST API does not provide an authentication endpoint. You will have to code up generation of SAS tokens per client (browser or device) on your server side (maybe as part of your AuthN/Z routines). Refer to the RedDog.ServiceBus NuGet package to generate SAS tokens for your Event Hub, per client. Also see this article on IoT, which explains authenticating against Event Hubs using the aforementioned package.
In my opinion, I would much rather do the above and rule out options 2 and 3. They leave the solution vulnerable and violate best practices.
Considering the example set by Google Analytics and other browser analytics providers, the second alternative in my question is quite acceptable.
That is, a SAS token can be generated on a "per site" (or "per analytics customer") basis and be shared by all browsers that this site is tracked on. The generation of the keys can be done via a tool like Sandrino Di Mattia's Event Hubs Signature Generator based on his RedDog Azure library.
This way tokens can be generated once when a publisher is onboarded and there is no need for an online Web API endpoint to be constantly available.
As an alternative approach, you could consider Application Insights for event ingestion. Depending on the type of event collection you're doing, you could use it and export data via its built-in archiving mechanisms, or query its endpoints for specific events from time to time. App Insights was designed for JS in-browser scenarios, can handle a large number of RPS, and you get reports, analytics, querying endpoints, and some other interesting features. It provides an SDK and a JS lib you can use, and implements batching for you using the browser's local storage.
As a side note, consider the browser (and any JS code running in it) an insecure client. That means even if you write a mechanism that requests a SAS key from a server-side app you wrote, any developer will be able to intercept it in memory. So the most secure thing you could do is a) have server-side code that generates a short-lived SAS key and b) let your clients authenticate before calling this server-side code. Or, ignore the problem and filter out the invalid events you receive.
Both GA and App Insights work by exposing a common key. As far as I know, Google Analytics uses heuristics to filter invalid requests. I suppose App Insights does the same.

How do I set up an offline environment to test my Google App Engine chat room?

I'll be using the Channel API, which will route the messages to a JavaScript client.
How do I set up the GAE SDK for this, and how do I create a JavaScript client that can work offline with GAE?
side note: I'm using Ubuntu.
Do you know how to work with GAE and use its SDK?
If yes, then just read the Channel API overview here; it has everything you need to get started.
If not, then you should read the GAE getting started guide:
for Python 2.5
for Python 2.7
Actually, there is no special thing you need to set up for the Channel API; it works fine offline. The development server uses an ordinary polling mechanism to simulate the service (the real one doesn't use polling), but of course you must know how to work with the SDK.
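The client side of the chat room can be sketched as follows (the token comes from your server's create_channel call; the payload shape is a placeholder, and the browser wiring is shown as comments because goog.appengine.Channel only exists once the page loads the /_ah/channel/jsapi script):

```javascript
// Pure helper: parse a chat payload out of message.data (testable locally).
function parseChatMessage(data) {
  const parsed = JSON.parse(data);
  return { from: parsed.from, text: parsed.text };
}

// Browser wiring (not executed here; the dev server simulates the channel
// service by polling, so this works offline too):
// const channel = new goog.appengine.Channel(token);
// const socket = channel.open();
// socket.onmessage = (message) => {
//   const chat = parseChatMessage(message.data);
//   console.log(chat.from + ": " + chat.text);
// };
// socket.onerror = () => console.error("channel error");
```

Because the dev server's polling simulation speaks the same client API, this exact code runs unchanged offline and in production.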
There is a Google App Engine tutorial on the Channel API at http://googcloudlabs.appspot.com/codelabexercise4.html. That may help.
