I have created a backend for an Android app (Xamarin.Android) using Azure EasyTables. Everything is working, but I also want to access my EasyTables database from a website.
Currently, as a test, I'm using the Azure Mobile Apps Javascript SDK. As an absolute beginner, I really have no idea how to make this secure. I have lines of code like
var MobService = WindowsAzure.MobileServiceClient;
var client = new MobService(MYAPPURL);
var reportsTable = client.getTable("rp_Table");
var totalActs;
var query = reportsTable;
query.where(function (){return this.LicensePlate == lplate || this.ReporterId == uname;})
.includeTotalCount().read().done(function (results){ });
all of which are EXPOSED to anyone. Where do I even begin to look to secure this? Is there a way to have some sort of stored procedure in Azure EasyTables so I can just disable anonymous CRUD permissions?
You can disable anonymous CRUD permissions on the table via the Azure portal like this:
navigate to your App Service -> Easy tables -> select the table -> Change permissions
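Once the table permission is set to require authentication, the Javascript client has to log in before it can read the table. A minimal sketch using the same SDK (client is the MobileServiceClient from the question; 'aad' is only an example provider name, use whichever identity provider you configured on the App Service):
// Log in first; table reads are only authorized after a successful login.
client.login('aad').then(function (user) {
    var reportsTable = client.getTable('rp_Table');
    return reportsTable.read();
}).then(function (results) {
    console.log(results);
}, function (error) {
    console.error('Login or read failed: ' + error);
});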
For more information, please refer to the following documentation articles:
How to: Use authentication claims with your tables
30 DAYS OF ZUMO.V2 (AZURE MOBILE APPS): DAY 6 – PERSONAL TABLES
Raphael,
Could you try setting the firewall on your EasyTables DB as mentioned here: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-security-tutorial
You would be able to specify the client IP addresses to which access should be granted.
I am new to web development and I want to build a webpage for remotely controlling my Raspberry Pi. The Raspberry Pi has a few sensors attached, and I can get their data by sending a request to 192.168.1.100:9997; the code on the Pi is written in Python. Everything works if I try to get the data with PuTTY, for example. Now I want to establish a TCP connection for reading the data from my webpage. I searched for a few days and found that this is possible by creating WebSockets. There are many tools; the best described one I found is Node.js. As I understand it, with Node.js it is possible to create WebSockets, and it can also serve the webpage (instead of Apache, for example)?
For example, I am running this WebSocket server, just for reading data from the RPi, in "server.js". Now I don't know how I can get this data from "server.js" into my .html. I didn't find any very basic examples. I can get the data via a database, but this is not what I want. I also want to send a request from my webpage to the RPi and then read the answer.
I hope you understand my problem. If you can point me to some good examples or tell me how it should be done, I will be very glad. I want to do this with Javascript if possible.
Thank you in advance.
EDIT: I now have a working example with Node.js, but I don't know how to integrate it into my web page so that the user can trigger this code from the .html and the answered data is shown in the .html page. I hope this helps.
var net = require('net');

var client = new net.Socket();

client.connect(9997, '192.168.1.100', function() {
    console.log('Connected');
    // sending request
    // THIS SHOULD BE TRIGGERED FROM HTML, onclick for example
    client.write('$DATA');
});

client.on('data', function(data) {
    console.log('Received: ' + data);
    // THIS DATA SHOULD BE SHOWN IN THE HTML, for example
    //client.destroy(); // kill client after server's response
});

client.on('close', function() {
    console.log('Connection closed');
});
For getting data off a Pi and into a Web page, take a look at some examples doing this using WAMP (an open protocol which runs on top of WebSocket) and Crossbar.io (open source router for WAMP) - http://crossbar.io/iotcookbook/Raspberry-Pi/
Full disclosure: I'm working on these projects - but they are open source, and a great fit for what the OP wants to do.
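If you'd rather stay with plain Node instead of WAMP, one common pattern is to let Node act as a bridge: the browser talks to Node over a WebSocket, and Node relays the request to the Pi over the raw TCP socket and pushes the answer back. A minimal sketch, assuming the ws package (npm install ws) and the same Pi address and port as in the question:
var net = require('net');
var WebSocket = require('ws');

// Browser-facing WebSocket server (the page connects to ws://<node-host>:8080).
var wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function (ws) {
    ws.on('message', function (message) {
        // The browser sent a command (e.g. '$DATA'); forward it to the Pi.
        var client = new net.Socket();
        client.connect(9997, '192.168.1.100', function () {
            client.write(message.toString());
        });
        client.on('data', function (data) {
            // Relay the Pi's answer back to the browser and close the TCP socket.
            ws.send(data.toString());
            client.destroy();
        });
    });
});
On the HTML side, an onclick handler would open new WebSocket('ws://<node-host>:8080'), send '$DATA', and write the onmessage payload into the page, for example via document.getElementById(...).textContent.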
I have a simple socket.io client and server program, running on node.js. The client and server exchange messages between them for a few minutes, before disconnecting (like chat).
Is there any function/method I can use to get the total bytes transferred (read/written) after the socket is closed?
At present I am adding up the message size for each message sent and received by the client. But, as per my understanding, in socket.io the size of the final payload being sent will differ depending on which transport is used (websocket, xhr-polling, etc.) because of the header/wrapper size. Hence, just adding up message bytes won't give me an accurate measure of the bytes transferred.
I can use monitoring tools like Wireshark to get this value, but I would prefer a javascript utility for it. Searching online didn't give me any reasonable answer.
For pure websocket connections, I am able to get this value using socket._socket.bytesRead and socket._socket.bytesWritten.
Any help is appreciated!
As of socket.io v2.2.0 I managed to get the byte data like this. The only problem is that these values are only populated when the client closes the browser window and the reason parameter is 'transport error'. If the client calls socket.close() or .disconnect(), or the server calls .disconnect(), then the bytes are 0.
socket.on('disconnect', (reason) => {
    // The underlying net.Socket keeps its counters behind symbols; indices 3 and 4
    // matched bytesRead/bytesWritten here, but they are internals and may change
    // between Node versions.
    let symbs = Object.getOwnPropertySymbols(socket.conn.transport.socket._socket);
    let bytesRead = socket.conn.transport.socket._socket[symbs[3]];
    let bytesWritten = socket.conn.transport.socket._socket[symbs[4]];
});
If you wanted such a feature that would work no matter what the underlying transport was below a socket.io connection, then this would have to be a fundamental feature of socket.io because only it knows the details of what it's doing with each transport and protocol.
But, socket.io does not have this feature built in for the various transports that it could use. I would conclude that if you're going to use the socket.io interface to abstract out the specific protocol and implementation on top of that protocol, then you give up the ability to know exactly how many bytes socket.io chooses to use in order to implement the connection on its chosen transport.
There are likely debug APIs (probably only available to browser extensions, not standard web pages) that can give you access to some of the page-wide info you see in the Chrome debugger, so that might be an option to investigate. See chrome.devtools.network for more details.
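If an approximation is good enough, you can also keep the tally yourself at the application level, much like the question already does. A rough sketch for the Node side (it ignores engine.io framing, handshakes and polling overhead, so it undercounts the real wire traffic; 'chat message' is just a placeholder event name):
var bytesSent = 0;
var bytesReceived = 0;

// Wrap emit so every outgoing payload is measured before it is sent.
var originalEmit = socket.emit.bind(socket);
socket.emit = function (event, payload) {
    bytesSent += Buffer.byteLength(JSON.stringify([event, payload]));
    return originalEmit(event, payload);
};

// Measure incoming payloads for the events you care about.
socket.on('chat message', function (payload) {
    bytesReceived += Buffer.byteLength(JSON.stringify(payload));
});

socket.on('disconnect', function () {
    console.log('approx. sent ' + bytesSent + ' B, received ' + bytesReceived + ' B');
});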
I need users to be able to post data from a single page browser application (SPA) to me, but I can't put server-side code on the host.
Is there a web service that I can use for this? I looked at Amazon SQS (simple queue service) but I can't call their REST APIs from within the browser due to cross origin policy.
I favour ease of development over robustness right now, so even just receiving an email would be fine. I'm not sure that the site is even going to catch on. If it does, then I'll develop a server-side component and move hosts.
Not only are there Web Services for this, but nowadays there are robust systems that let you keep some server-side logic for your applications. They are called BaaS, or Backend as a Service, providers, and they usually act as a backbone for your front end applications.
Although they have multiple uses, I'm going to list the most common in my opinion:
For mobile applications - Instead of having to learn an API for each device you code for, you can use a standard platform to store the logic and data for your application.
For prototyping - If you want to create a slick application but don't want to code all the backend logic for the data (let alone deal with all the operations and system administration that represents), then with a BaaS provider you only need good Front End skills to code the simplest CRUD applications you can imagine. Some BaaS providers even allow you to bind Reduce algorithms to the calls you perform against their API.
For web applications - When PaaS (Platform as a Service) came to town to ease the job of Back End developers by removing the hassle of System Administration and Operations, it was only logical that the same would eventually happen to the Backend itself. There are many clones that showcase the real power of this strategy.
All of this is amazing, but I have yet to mention any of them. I'm going to list the ones that I know best and have actually used in projects. There are probably many more, but as far as I know, these have satisfied most of my needs across the uses mentioned above.
Parse.com
Parse's most outstanding features target mobile devices; however, nowadays Parse contains an incredible number of APIs that let you use it as a full-featured backend service for Javascript, Android and even Windows 8 applications (the Windows 8 SDK was introduced a few months ago this year).
What does Parse code look like in Javascript?
Parse works through classes and objects (ain't that beautiful?), so you first create a specific class (can be done through Javascript, REST or even the Data Browser manager) and then you add objects to specific classes.
First, add Parse via a script tag:
<script type="text/javascript" src="http://www.parsecdn.com/js/parse-1.1.15.min.js"></script>
Then, through a given Application ID and a Javascript Key, initialize Parse.
Parse.initialize("APPLICATION_ID", "JAVASCRIPT_KEY");
From there, it's all object manipulation
var Person = Parse.Object.extend("Person"); //Person is a class *cof* uppercase *cof*
var personObject = new Person();
personObject.save({name: "John"}, {
success: function(object) {
console.log("The object with the data "+ JSON.stringify(object) + " was saved successfully.");
},
error: function(model, error) {
console.log("There was an error! The following model and error object were provided by the Server");
console.log(model);
console.log(error);
}
});
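Reading the data back is the same kind of object manipulation, done through Parse.Query (a small sketch in the same classic SDK syntax used above):
var Person = Parse.Object.extend("Person");
var query = new Parse.Query(Person);
query.equalTo("name", "John"); // filter on the attribute we saved earlier
query.find({
    success: function (results) {
        console.log("Found " + results.length + " matching objects.");
    },
    error: function (error) {
        console.log("Query failed: " + error.message);
    }
});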
What about authentication and security?
Parse has a User-based authentication system, which pretty much allows you to store a base of users that can manipulate the data. If you map the data to User information, you can ensure that only a given user can manipulate specific data. Plus, in the settings of your Parse application, you can specify that no clients are allowed to create classes, to prevent unnecessary calls from being made.
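Signing a user up and logging in look roughly like this with the same SDK (older callback-style API; method names may differ in later versions):
// Create an account...
var user = new Parse.User();
user.set("username", "fred");
user.set("password", "s3cret");
user.signUp(null, {
    success: function (user) { console.log("Signed up as " + user.get("username")); },
    error: function (user, error) { console.log("Sign up failed: " + error.message); }
});

// ...or log an existing user in.
Parse.User.logIn("fred", "s3cret", {
    success: function (user) { console.log("Logged in as " + user.get("username")); },
    error: function (user, error) { console.log("Login failed: " + error.message); }
});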
Did you REALLY use it in a web application?
Yes, it was my tool of choice for a medium fidelity prototype.
Firebase.com
Firebase's main feature is the ability to provide Real Time to your application without all the hassle. You don't need a MeteorJS server in order to bring Push Notifications to your software. If you know Javascript, you are half way through to bring Real Time magic to your users.
What does Firebase code look like in Javascript?
Firebase works in a REST fashion, and I think they do an amazing job structuring the Glory of REST. As a good example, look at the following Resource structure in Firebase:
https://SampleChat.firebaseIO-demo.com/users/fred/name/first
You don't need to be a rocket scientist to know that you are retrieving the first name of the user "Fred", given that there is at least one -usually there would be a UUID instead of a name, but hey, it's an example, give me a break-.
In order to start using Firebase, as with Parse, add their CDN Javascript:
<script type='text/javascript' src='https://cdn.firebase.com/v0/firebase.js'></script>
Now, create a reference object that will allow you to consume the Firebase API
var myRootRef = new Firebase('https://myprojectname.firebaseIO-demo.com/');
From there, you can create a bunch of neat applications.
var USERS_LOCATION = 'https://SampleChat.firebaseIO-demo.com/users';
var userId = "Fred"; // Username
var usersRef = new Firebase(USERS_LOCATION);
usersRef.child(userId).once('value', function(snapshot) {
var exists = (snapshot.val() !== null);
if (exists) {
console.log("Username "+userId+" is part of our database");
} else {
console.log("We have no register of the username "+userId);
}
});
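The Real Time part mentioned above is just a matter of listening with on() instead of once(): the callback fires again every time the value changes on the server, with no polling on your side (same legacy API as above):
var nameRef = new Firebase('https://SampleChat.firebaseIO-demo.com/users/fred/name/first');
// Re-fires on every change, which is what gives the "push" behaviour.
nameRef.on('value', function (snapshot) {
    console.log('First name is now: ' + snapshot.val());
});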
What about authentication and security?
You are in luck! Firebase released their Security API about two weeks ago! I have yet to explore it, but I'm sure it fills most of the gaps that allowed random people to use your reference for their own purposes.
Did you REALLY use it in a web application?
Eeehm... ok, no. I used it in a Chrome Extension! It's still in progress, but it's going to be a Real Time chat inside a Chrome Extension. Ain't that cool? Fine, I find it cool. Anyway, you can browse more awesome examples for Firebase on their examples page.
What's the magic of these services? If you have read up on Dependency Injection and Mock Object Testing, you'll know that at some point you can completely replace any of these services with your own REST Web Service.
Since these services were created to be used inside any application, they are CORS ready. As stated before, I have successfully used both of them from multiple domains without any issue (I'm even trying to use Firebase in a Chrome Extension, and I'm sure I will succeed soon).
Both Parse and Firebase have Data Browser managers, which means that you can see the data you are manipulating through a simple web browser. As a final disclaimer, I have no relationship with either of those services other than the fact that James Tamplin (Firebase Co-founder) was amazing enough to lend me some Beta access to Firebase.
You actually CAN use SQS from the browser, even without CORS, as long as you only need the browser to send messages, not receive them. Warning: this is a kludge that would make my CS professors cry.
When you perform a GET request via javascript, the browser will always perform the request; however, you'll only get access to the response if it came from the same origin (protocol, host, port). This is your ticket to ride, since messages can be posted to an SQS queue with just a GET, and who really cares about the response anyway?
Assuming you're using jquery, your queue is https://sqs.us-east-1.amazonaws.com/71717171/myqueue, and allows anyone to post a message, the following will post a message with the body "HITHERE" to the queue:
$.ajax({
url: 'https://sqs.us-east-1.amazonaws.com/71717171/myqueue' +
'?Action=SendMessage' +
'&Version=2012-11-05' +
'&MessageBody=HITHERE'
})
There'll be an error in the console saying that the request failed, but the message will show up in the queue anyway.
Have you considered JSONP? That is one way of calling cross-domain scripts from javascript without running into the same origin policy. You're going to have to set up some script somewhere to send you the data, though. Javascript just isn't up to the task.
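For reference, JSONP is just dynamic script injection: you add a script tag whose src points at the remote endpoint and pass it the name of a callback function to wrap the response in. A bare-bones sketch (the endpoint URL is hypothetical and the remote service must support JSONP):
function jsonp(url, onData) {
    var callbackName = 'jsonp_cb_' + Date.now();
    window[callbackName] = function (data) {
        delete window[callbackName];
        script.parentNode.removeChild(script);
        onData(data);
    };
    var script = document.createElement('script');
    script.src = url + (url.indexOf('?') === -1 ? '?' : '&') + 'callback=' + callbackName;
    document.head.appendChild(script);
}

// Usage: the service responds with something like  jsonp_cb_123({"ok": true})
jsonp('https://example.com/api/save?value=42', function (data) {
    console.log(data);
});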
Depending on what kind of data you want to send and what you're going to do with it, one way of solving it would be to post the data to a Google Spreadsheet using Ajax. It's a bit tricky to accomplish, though. Here is another Stack Overflow question about it.
If presentation isn't that important you can just have an embedded Google Spreadsheet Form.
What about mailto:youremail#goeshere.com ? ihihi
In the meantime, you can turn on a free host like Altervista or Heroku, or something else like them, so you can connect to their server. If I remember correctly, these free services allow server-to-server connections, so you can create a sort of personal web service and push ajax requests to it. Obviously their servers are slow for free accounts, but I think that's enough if you don't have much user traffic; otherwise you should move to a better VPS, hosting, or cloud solution.
Maybe CouchDB can provide what you're after. IrisCouch provides free CouchDB instances. Lock it down so that users can't view documents and have a sensible validation function and you've got yourself an easy RESTful place to stick your data in.
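CouchDB's validation functions are themselves written in Javascript and stored in a design document; a minimal sketch of one that only lets authenticated users write documents (the required field is just an example):
// _design/app -> validate_doc_update
function (newDoc, oldDoc, userCtx, secObj) {
    // Reject writes from anonymous users.
    if (!userCtx.name) {
        throw({ forbidden: 'You must be logged in to save data.' });
    }
    // Require a "type" field on every non-deleted document (example rule).
    if (!newDoc._deleted && !newDoc.type) {
        throw({ forbidden: 'Every document needs a "type" field.' });
    }
}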
I've got a production site that has been working for years with a SQL Server 2000 default instance on a server named MDWDATA. TCP port 1433 and Named Pipes are enabled there. My goal is to get this web app working with a copy of the database upgraded to SQL Server 2008. I've installed SQL2008 with SP1 on a server called DEVMOJITO and tested the new database using various VB6 desktop programs that exercise various stored procs in a client-server fashion, and parts of the website itself work fine against the upgraded database residing on this named instance of SQL2008. So, while I am happy that the database upgrade seems fine, there is a part of this website that fails with this error: Named Pipes Provider: Could not open a connection to SQL Server [1231]. I think this error is misleading. I disabled Named Pipes on the SQL2000 instance used by the production site, restarted SQL, and all the ASP code still continued to work fine (plus we have a firewall between both database servers and these web virtual directories on a public-facing webserver).
URL to my production virtual directory which demos the working page:
URL to my development v-directory which demos the failing page:
All the code is the same on both prod and dev sites except that on dev I'm trying to connect to the upgraded database.
I know there are dozens of things to check which I've been searching for but here are a few things I can offer to help you help me:
The code that is failing is server-side Javascript adapted from Brent Ashley's "Javascript Remote Scripting (JSRS)" code package years ago. It operates in an AJAX-like manner by posting requests back to different ASP pages and then handling a callback. I think the key thing to point out here is how I changed the connection to the database: (I cannot get Javascript to format right here!)
function setDBConnect(datasource)
{
    var strConnect; //ADO connection string
    //strConnect = "DRIVER=SQL Server;SERVER=MDWDATA;UID=uname;PASSWORD=x; DATABASE=StagingMDS;";
    strConnect = "Provider=SQLNCLI10;Server=DEVMOJITO\MSSQLSERVER2008;Uid=uname;Pwd=x;DATABASE=StagingMDS;";
    return strConnect;
}

function serializeSql( sql , datasource)
{
    var conn = new ActiveXObject("ADODB.Connection");
    var ConnectString = setDBConnect(datasource);
    conn.Open( ConnectString );
    var rs = conn.Execute( sql );
Please note how the connection string differs. I think that could be the problem, but I don't know what to do. I am surprised the returned error says "named pipes" was involved, because I really wanted to use TCP. The connection string syntax here is the same as the one used successfully on a different part of the site, which uses VBScript; I'll paste it here to show:
if DataBaseConnectionsAreNeeded(strScriptName) then
    dim strWebDB
    Set objConn = Server.CreateObject("ADODB.Connection")
    if IsProductionWeb() Then
        strWebDB = "DATABASE=MDS;SERVER=MDWDATA;DRIVER=SQL Server;UID=uname;PASSWORD=x;"
    end if
    if IsDevelopmentWeb() Then
        strWebDB = "Provider=SQLNCLI10;Server=DEVMOJITO\MSSQLSERVER2008;Database=StagingMDS;UID=uname;PASSWORD=x;"
    end if
    objConn.ConnectionString = strWebDB
    objConn.ConnectionTimeout = 30
    objConn.Open
    set oCmd = Server.CreateObject("ADODB.Command")
    oCmd.ActiveConnection = objConn
This code works in both the prod and dev virtual directories, and other code in other parts of the site which uses ASP.NET works against both databases correctly. Named Pipes and TCP are both enabled on each server. I don't understand the connection string used by Named Pipes, but I am always using the defaults.
I wonder why the Javascript call above results in the use of named pipes instead of TCP. Any ideas would be greatly appreciated.
Summary of what I did to get this working:
Add an extra backslash to the connection string, since this is server-side Javascript and a single backslash in a string literal is treated as an escape character:
Server=tcp:DEVMOJITO\\MSSQLSERVER2008,1219;
Explicitly code tcp: as a protocol prefix and port 1219. I learned that, by default, a named instance of SQL Server uses dynamic ports. I ended up turning that off and chose, somewhat arbitrarily, port 1219, which dynamic porting had assigned before I turned it off. There are probably other ways to get this part working.
Finally, I discovered that SET NOCOUNT ON needed to be added to the stored procedure being called. Otherwise, the symptom is the message: "Operation is not allowed when the object is closed".
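Putting the three changes together, the corrected connection function from the question ends up looking roughly like this (the instance name and port are of course specific to this setup):
function setDBConnect(datasource)
{
    // Doubled backslash: in a Javascript string literal a single backslash
    // before the instance name is an escape character.
    // "tcp:" forces the TCP provider and ",1219" is the fixed port chosen
    // after turning off dynamic ports for the named instance.
    var strConnect = "Provider=SQLNCLI10;" +
                     "Server=tcp:DEVMOJITO\\MSSQLSERVER2008,1219;" +
                     "Uid=uname;Pwd=x;DATABASE=StagingMDS;";
    return strConnect;
}
SET NOCOUNT ON in the stored procedure suppresses the extra "rows affected" results that can otherwise leave classic ADO looking at a closed recordset.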