I've got an existing WCF service that we've been using to communicate with a Silverlight client, and hence have been using it with the NetTCP binding. I'd like to start using this same service with a JavaScript client, ideally modifying the service as little as possible (i.e., allowing Silverlight and JS clients to call the same duplex service). Ideally this would happen through a reasonably performant and scalable tech, like WebSockets, rather than a hack like Comet.
What's the best way to do this?
Adding WebSocket support to the service (through the NetHttpBinding) would seem like one obvious way to do it - but there doesn't seem to be any documentation on how to call the resulting service from JavaScript. I suppose I could configure it to use a text-based transport, instead of the default binary transport, and then hack together some sort of JavaScript-based SOAP client (perhaps using WSDL2JS) to call it. That feels like it ought to work, but also pretty awkward, and with some pieces in the mix that haven't been well documented.
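For what it's worth, the "text transport plus hand-rolled client" idea would look roughly like the sketch below on the browser side. This is only an illustration: the ws:// endpoint address and the envelope text are assumptions, and building and correlating the SOAP frames that NetHttpBinding expects is exactly the awkward, under-documented part.

    // Hypothetical sketch: a raw browser WebSocket talking to a WCF endpoint
    // configured for text message encoding. The URL and envelope are assumptions.
    const socket = new WebSocket("ws://myserver/MyDuplexService");

    socket.onopen = () => {
        // A hand-built SOAP envelope would go here (e.g. generated via WSDL2JS);
        // producing and correlating these frames by hand is the painful part.
        socket.send('<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">...</s:Envelope>');
    };

    socket.onmessage = (event) => {
        // Duplex callbacks from the service would arrive here as text frames.
        console.log("Server pushed:", event.data);
    };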
I could also re-implement my service in a framework like XSockets or SuperWebSocket, but that's some real work, and keeping it in sync with the WCF implementation would be more on top of that.
Any other thoughts?
I am one of the guys behind XSockets.NET. I hope I can help you with this one.
XSockets has an "external" API that you can use from WCF (or anything else in .NET that talks TCP/IP) to send messages to the XSockets server. The server then passes the messages on to the client(s) (pub/sub pattern), and vice versa.
So there will be almost no changes to your WCF service.
Tell me if you need an example and I will provide one for you. Just send me an email on uffe at xsockets dot net and we can take it from there.
EDIT: Created an example on how to boost your WCF to realtime. It's on GitHub: Boost WCF to RealTime
Regards
Uffe, Team XSockets
I'm working on an Angular 4 front-end for an API built by another team. The API follows HATEOAS and provides me with hypermedia links with every single response.
I know the shape of the API and I figure I can just hard-code the URLs into Angular services with minimal fuss. However, a colleague (who is a backend developer) is trying to convince me that I should take full advantage of the hypermedia because it will mean less coupling between the frontend and backend (and less potential for breakage if the API changes).
However, I'm stumped on how I'd even go about implementing a simple HATEOAS pattern using Angular's built-in Http service. How would I store/share the hypermedia/URL information in a way that doesn't couple all the services together and make them hard to test? There seem to be no examples out there.
Would trying to create a HATEOAS-friendly HTTP client even be a good idea, or is it likely not worth the trouble?
Your colleague is right: you should use the meta-information that the back-end provides. That way you are not putting responsibility on the client that doesn't belong there. Why should the client know where to fetch the entities from? Storing the entities (in fact the data in general) is the responsibility of the back-end. The back-end owns the data; it decides where to put it, how to access it, when to change the location or the persistence type, and anything else related to storing the data.
How would I store/share the hypermedia/URL information in a way that doesn't couple all the services together and make them hard-to-test?
Why do you think using HATEOAS makes testing harder? It doesn't; in fact, not using it makes testing harder, because hard-coded static URLs make the back-end harder to stub out.
You can extract the information from the back-end response and store it as meta-information in the Angular model, on a _meta key or something like that.
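As a rough illustration, here is a minimal TypeScript sketch of that idea. The HAL-style _links shape, the ApiClient class, and the 'employees' relation are assumptions for the example, not your actual API; it uses the standard fetch API for brevity, but the same structure works behind Angular's Http service.

    // Hypothetical HAL-style response shape; your API's hypermedia format may differ.
    interface HalResource {
      _links: { [rel: string]: { href: string } };
      [key: string]: any;
    }

    class ApiClient {
      // Only the API root is hard-coded; every other URL is discovered at runtime.
      constructor(private rootUrl: string) {}

      private root: HalResource | null = null;

      // Fetch (and cache) the root document that advertises the available links.
      private async getRoot(): Promise<HalResource> {
        if (!this.root) {
          this.root = await fetch(this.rootUrl).then(r => r.json());
        }
        return this.root!;
      }

      // Follow a named link relation instead of hard-coding its URL.
      async follow(rel: string): Promise<HalResource> {
        const root = await this.getRoot();
        const link = root._links[rel];
        if (!link) { throw new Error(`Unknown link relation: ${rel}`); }
        return fetch(link.href).then(r => r.json());
      }
    }

    // Usage (relation name is an assumption for the example):
    // const api = new ApiClient('/api');
    // api.follow('employees').then(list => console.log(list));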
I've done some research and noticed that in a lot of example Symfony2/AngularJS apps the frontend and backend are combined; for example, the views use Twig.
I'd always thought that it's possible (and common practice) to create the frontend and backend separately and just join them with an API. In that case, if I want to change the PHP framework I can do it without any problems, as long as the API stays the same.
So what are the best practices for doing it? It would be great if you could explain it to me, and even better if you could just give me a link to a good example on GitHub or something.
We have been developing some projects using the same approach. Not only do I think it doesn't have any "side effects", but the solution is very elegant too.
We usually create the backend in Node.js, and it is just an API server (not necessarily entirely REST-compliant). We then create another, separate web application for the frontend, written entirely in HTML5/JavaScript (with or without Angular.js). The API server never returns any HTML, just JSON! Not even an index structure.
There are lots of benefits:
The code is very clean and elegant. Communication between the frontend and the backend follows standardized methods. The server exposes some APIs, and the client can use them freely.
It makes it easier to have different teams for the frontend and the backend, and they can work quite freely without interfering with each other. Designers, who usually have limited coding skills, appreciate this too.
The frontend is just a static HTML5 app, so it can easily be hosted on a CDN (and we often did exactly that). This means that your servers never have to worry about serving static content at all, their load is reduced, and you save money. Users are happier too, as CDNs are usually very fast for them.
Some hints that I can give you based on our experience:
The biggest issue is with authentication of users. It's not particularly complicated, but you may want to implement authentication using, for example, a protocol like OAuth 2.0 for your internal use. The frontend app will then act as an OAuth client and obtain an auth token from the backend. You may also want to consider moving the authentication server (with OAuth) to a separate resource from the API server.
If you host the webapp on a different hostname (e.g. a CDN) you may need to deal with CORS, and maybe JSONP (there is a small sketch of the CORS part after these hints).
The language you write the backend in is not really important. We have done it in PHP (including Laravel), even though we got the best results with Node.js. For Node.js, we published our boilerplate on GitHub, based on RestifyJS.
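To illustrate the CORS hint above, here is a minimal sketch of what the API server has to send back when the frontend lives on a different hostname. It uses Node's built-in http module in TypeScript; the allowed origin and port are placeholders, and a real API would normally do this in middleware rather than inline.

    import * as http from "http";

    // Placeholder origin for the CDN-hosted frontend; restrict this in production.
    const ALLOWED_ORIGIN = "https://app.example.com";

    http.createServer((req, res) => {
      // Every response tells the browser which foreign origin may read it.
      res.setHeader("Access-Control-Allow-Origin", ALLOWED_ORIGIN);
      res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
      res.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");

      // Preflight requests just need the headers above and an empty 204 reply.
      if (req.method === "OPTIONS") {
        res.writeHead(204);
        res.end();
        return;
      }

      // The actual API: JSON only, never HTML.
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ message: "hello from the API" }));
    }).listen(3000);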
I asked some questions in the past you may be interested in:
Web service and API: "chicken or egg"
Security of an API server: login with password and sessions
Background
My background is in high-scale, object-oriented middleware and application development for embedded devices and desktops, using C++. Now we need to create a high-scale web app for our startup.
Question
Current web-development frameworks based on request-response and continuous polling look very primitive and inefficient to me.
I am looking for completely server-side, object-oriented, event-based programming.
Here is an example of it.
There is a persistent object named employeeManager on the server.
Methods of this object:
empList getAllEmployeeList();
empList getEmployeeOfDepartment(string strDept);
/*Some more */
Events of this object:
employeeAdded(empID);
employeeEdited(empID);
employeeRemoved(empID);
/*Some more */
Now, client-side JavaScript should be able to call the methods of this (server-side) object and receive its events. Results of the method calls can be delivered asynchronously. The framework should also provide a way for the view (or HTML/JS page) to register for the server-side events it needs.
Are there any frameworks which work this way? Anything like this on top of Socket.IO? Any framework which provides good two-way RPC between client-side JavaScript and server-side objects?
Try the following combo:
Node + socket.io + Backbone.Model + a bit of imagination.
I think the missing piece is a model-like structure that can be used on both the server side and the client side. The model needs to synchronize state between server and client.
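To make that concrete, here is a minimal sketch of the "method call" and "event" halves over Socket.IO. The employeeManager object, the event names, and the port are assumptions for the example, not part of any framework.

    // --- server.ts (Node + socket.io) ---
    import { Server } from "socket.io";

    // Hypothetical persistent server-side object.
    const employeeManager = {
      employees: ["1", "2", "3"],
      getAllEmployeeList() { return this.employees; },
    };

    const io = new Server(3000);

    io.on("connection", (socket) => {
      // "Method call": the client emits a request and receives the result
      // through the acknowledgement callback (asynchronous RPC).
      socket.on("getAllEmployeeList", (ack: (list: string[]) => void) => {
        ack(employeeManager.getAllEmployeeList());
      });
    });

    // "Event": when an employee is added, push it to every connected client.
    function addEmployee(empID: string) {
      employeeManager.employees.push(empID);
      io.emit("employeeAdded", empID);
    }

    // --- client-side JavaScript (with socket.io-client) ---
    // const socket = io("http://localhost:3000");
    // socket.emit("getAllEmployeeList", (list) => console.log(list)); // method call
    // socket.on("employeeAdded", (empID) => console.log("added", empID)); // event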
Here is an article that I find very interesting, and maybe you can use the technique described?
The article:
http://blog.andyet.com/2011/feb/15/re-using-backbonejs-models-on-the-server-with-node/
Node.js and socket.io. These can help achieve the desired effect.
Meteor is a Node.js based framework that uses SockJS for websocket communication and MongoDB as its database, and it is oriented toward horizontally scalable apps. Meteor will pretty much do all the heavy lifting for you when it comes to client-server synchronization - you will not have to write any code for database syncing. The result is a minimal codebase containing mainly your application's logic instead of request/response overhead. You can have a look at the examples here: http://meteor.com/examples/leaderboard
If you want cross-language RPC you might find Apache Thrift useful. I believe there's a JavaScript client (but I have never used it). You could build an RPC framework on top of Socket.IO as many others have pointed out, but it feels like painting a cat to look like a cow... i.e. fun, but not particularly productive.
I'm sure you have already thought about this, or have some legacy constraint, but in case you haven't, I'd take a second to think about whether RPC is really the model you want to use. RPC leakily abstracts away the existence of network latency, and as such bakes a few shaky assumptions into the foundations of your app. There's a fairly short and readable critique of RPC in general (by AST no less) that might be worth a read.
If you're familiar with C++, you may want to check out G-WAN. They have a great example using Comet (what you're looking for), and there are Node.JS wrappers too.
G-WAN also allows for client-side applets written in whatever language you need. So for you, C++ might be just what you're looking for.
This is a very scalable web application server. From all the benchmarks I've seen, Node.JS doesn't scale well with high concurrency (I may be wrong about this; if I am, please let me know and point me to the information). That said, I've done things very similar to what you're talking about. All I had to do was write a very simple wrapper to translate from JS to whatever language I was using at the time (I've done it with PHP, MivaScript, SMT and C).
But the key (for me) was using Comet, to cut down on unnecessary polling of the server.
Has anyone an idea for the following scenario?
I have a RIA web application (built with ExtJS). What I want to implement is the ability to use local resources like card readers, fingerprint readers, or other serial devices, as well as filesystem access.
I thought about implementing this with a local WebSocket service which our customers would have to install before using our RIA for the first time. When the webapp loads, it should check whether a local WebSocket service is available and connect to it.
After that, local events (like a new card being read or a new fingerprint being recognized) should be passed to the browser via the WebSocket connection.
Any ideas how to get started with such a solution?
I have made something like that. Besides the obvious things such as reading/writing/polling data from the card readers and so on, you have to either implement everything yourself or use a library for the technology your web server is built on. So, if you use a LAMP stack, I think there are some WebSocket libraries for PHP you can use. However, if you do everything yourself, you have to implement everything from the handshake to creating the data packets. I did everything from scratch using .NET, which provides a number of useful libraries, for example for hashing. Java would also be a good option and has those kinds of libraries as well. In general, if you are doing everything yourself, I would say the trickiest part is splitting the data you want to transmit across multiple data packets. It is not that hard to do things from scratch; the RFC (https://www.rfc-editor.org/rfc/rfc6455) helped me a lot. Hopefully this helps.
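On the browser side, probing for the locally installed service is the easy part. Here is a minimal sketch; the localhost port, path, and message format are assumptions, since they depend entirely on the service you ship to the customer.

    // Hypothetical local endpoint; the port is whatever your installed service listens on.
    const LOCAL_SERVICE_URL = "ws://127.0.0.1:8123/devices";

    const socket = new WebSocket(LOCAL_SERVICE_URL);

    socket.onopen = () => {
      console.log("Local device service found");
    };

    socket.onerror = () => {
      // No local service installed (or it is not running): fall back or prompt the user.
      console.log("Local device service not available");
    };

    socket.onmessage = (event) => {
      // Assumed message format: JSON events such as { type: "cardRead", id: "..." }.
      const deviceEvent = JSON.parse(event.data);
      console.log("Device event:", deviceEvent);
    };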
We will be using custom Silverlight 4.0 controls on our ASP.NET MVC web page to display data from our database, and I was wondering what the most efficient method is. Return values can contain up to 100k records (of 2 properties per record).
We have a test that uses the HTML Bridge from JavaScript to Silverlight. First we perform a POST request to a controller action in the MVC web app, which returns JSON. This JSON is then passed to Silverlight, where it is parsed and the UI updated. This seems rather slow: the stored procedure (the select) takes about 3 seconds, and the entire update in the browser takes about 10-15 seconds.
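For context, the JavaScript half of that flow typically looks something like the sketch below. The controller URL, the plugin element id, the registered object name ("bridge"), and the UpdateData method are assumptions for illustration; the corresponding registration would be done in the Silverlight code with HtmlPage.RegisterScriptableObject.

    // Hypothetical names: the URL, "silverlightPlugin", "bridge" and UpdateData are examples only.
    fetch("/Employees/GetAll", { method: "POST" })
      .then((response) => response.text())   // the controller action returns JSON text
      .then((json) => {
        const plugin = document.getElementById("silverlightPlugin") as any;
        // HTML Bridge call into the Silverlight control; the JSON is re-parsed
        // on the managed side, which is part of the overhead being described.
        plugin.Content.bridge.UpdateData(json);
      });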
Having had a brief look on the net, it seems that WCF is another option, but not having used it, I wasn't sure of its capability or suitability.
Does anyone have any experiences or recommendations?
You should definitely consider a change in your approach. This just shouldn't have to be so complicated. WCF is a possible solution, and I am sure you are going to get better performance out of it.
It is designed to transfer data across the wire. Web services in general are considered the "right way" to provide data to your Silverlight app, and WCF services are definitely more configurable.
Another point in favour of web services is that this approach is more straightforward than the one you are using now. You don't have to serialize to JSON, parse it into JavaScript objects, and then pass them to Silverlight.
It is really easy to port your code and continue developing with WCF.
Last but not least, your code will be much more readable and maintainable.
It seems that performance is critical in your case, so you can take a look here for a comparison.
In conclusion, my advice is to consider a change in your approach. WCF services look like a possible solution.
Hope this helps.