I am working on a project which involves building a web service on top of a .js library. I completed that successfully, but now I have to create another library (or something similar) in which everything is customizable from the client side: the server should just provide JavaScript functions so that the client can attach them to buttons and work with them.
Right now, I can think of two possible implementations:
Creating a new .js library and providing the functions which the client can use after including my .js file.
(Overhead: the client will still have to include the other .js libraries that I use inside my library... so frankly speaking, this puts an extra burden on the client and is probably not a good way to go about the problem.) My server in this case would just process the data, which my .js library then uses on the client side.
Directly returning the necessary JavaScript functions along with the processed data, so that the client does not have to know about my internal implementation and can work with less overhead.
The problem is that I don't know how to go about the second approach, which looks quite promising. Is there an example implementation, or a better way to go about the first approach?
Ok. Let me try and explain this in my own words.
As far as I understand your problem, you do not want to include all the JS libraries in your code; you would rather have the data passed to the JS libraries so that the built-in functions within those libraries process your data and return the processed output.
I have not heard of this being done. Maybe it can be, so we can wait for an answer regarding that. But ideally what you would do is create a "Web service".
This "Web service" basically acts like your "external JavaScript function". You pass data to the Web service, the data is processed, and the result is returned to you. You then process this output and display it the way you like.
So, simply put, you will have to design a Web service with operations that behave the way your JavaScript functions do.
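To make that concrete, here is a minimal sketch of what the client side of such a Web service call could look like; the endpoint URL, the payload shape, and the button id are assumptions for illustration, not part of any existing service:

// Hypothetical endpoint and payload; adapt to the actual Web service.
function processOnServer(data) {
  return fetch('/api/process', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data)
  }).then(function (response) {
    return response.json();   // the processed output, ready to display
  });
}

// The client can attach this to any button they like:
document.getElementById('run').addEventListener('click', function () {
  processOnServer({ values: [1, 2, 3] }).then(function (result) {
    console.log(result);      // display however the client prefers
  });
});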
I am working on an old enterprise solution with these properties:
The solution has a MVC web application
The solution has a WCF service layer
The solution has javascript in the database, in the form of functions in a database column
The web application retrieves said javascript through the service layer and plugs it into certain pages
My team cannot modify the web application, nor the service layer
My team must write javascript by inserting functions into said database columns
This architecture leads to:
A very inefficient development loop
Very poor source control
I'd like to propose a solution to them for upgrading this, but here's where I fall a bit short on experience. My suggestion would be:
Migrate the javascript from the database to javascript files
Make some sort of hook in the web application for other teams' javascript files
My questions are:
Has anyone had this kind of problem and how did they solve it?
Is there an effective way to do this kind of javascript migration into files? My idea would be to write a small console program to do the migration
How would they make a hook to import our javascript files? My idea is to make a script bundle with some naming convention, so we can add scripts without them needing to change their code. Are there problems with this approach?
Any kind of input would be invaluable.
Edit:
Additional explanation:
The mechanism maps the javascript function names to certain DOM elements' event attributes and inlines the code right after the element
The functions are standalone functions, depending only on libraries already in the web application
The functions are grouped by a common form
So I suppose it would be better to group them into files bearing the form names.
If these are just simple, static function definitions being inlined into the web page, then I suppose it might be possible to serialize/aggregate them all into a giant file and run something like prettier on it to make it readable.
That wouldn't be ideal for gaining traction with your proposed migration, though. If the code has any volume at all, it would be nice to give it some structure and order so it can be maintained.
It's already a big assumption that this JavaScript consists of pure functions without any complex dependencies on each other, but it's possible these pieces of JavaScript already work in isolation if they are being pulled out of a database. It's hard to know without more context. It seems unlikely that your life will be that easy.
If you managed to extract this monolithic JavaScript file, the easiest thing to do would be to include it in a script tag for the entire site and be done with it. This could be a bad idea if the file is approaching ~MB size and slows your initial page load time.
Then again, once you have a bunch of functions in one file, you could probably do a lot there to optimize and reduce duplicated code.
This is still all conjecture because I don't know the mechanism by which your web application imports the javascript once it retrieves it from the database.
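If it helps, the console-program idea from the question could be sketched in Node.js roughly like the following; the table name, column names, connection string, and the assumption of SQL Server behind the service layer are all guesses about a schema I haven't seen:

// Sketch only: dump the javascript functions out of the database, one file per form.
var fs = require('fs');
var sql = require('mssql');   // assumption: SQL Server behind the WCF layer

sql.connect('mssql://user:pass@dbhost/LegacyDb')
  .then(function () {
    return new sql.Request().query('SELECT FormName, FunctionBody FROM ScriptFunctions');
  })
  .then(function (result) {
    var byForm = {};
    result.recordset.forEach(function (row) {
      // Group the functions by form, as proposed in the question
      byForm[row.FormName] = (byForm[row.FormName] || '') + row.FunctionBody + '\n\n';
    });
    Object.keys(byForm).forEach(function (form) {
      fs.writeFileSync(form + '.js', byForm[form]);
    });
    return sql.close();
  });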
If this question is more appropriate for another forum, please point me to it.
I am writing a web application that pulls data in JSON format from a number of REST sources. The UI uses a number of JavaScript technologies like Knockout.js, with which I display graphs, charts, etc.
I have written a mid-tier in Java that acts as a 'broker' between the JavaScript and the REST sources. The idea is to make the JavaScript layer's REST calls agnostic of the user, user role, etc., and let the mid-tier decide which REST server/endpoint to call. This Java code calls the actual REST endpoints and exposes a generic REST endpoint to be called by the JavaScript.
The problem is that the JSON returned by most of the REST calls has a different structure than the one required by my JavaScript technologies (they return a plain array of data, while each JavaScript component, like a graph, needs the data in a very specific format). Also, I can in no way modify the source of those REST calls.
This means I will have to do some amount of processing on the JSON I receive.
My question is: where should I do this processing? Should I do it in the JavaScript code, or would it be more appropriate to do it in the Java mid-tier code?
A friend of mine suggested that I should do it in JavaScript because:
In future I might end up making some REST calls directly from JavaScript, and then I would end up with similar logic in two places - JavaScript and Java
JSON being JSON, JavaScript will have better handling capabilities
If I do it in Java, I also increase the number of REST calls considerably
I am a bit uncomfortable doing it in JavaScript because:
I am not comfortable coding in JavaScript (I admit it)
If written in Java, the logic executes on the server instead of the user's browser, which I expect to be faster (a fast-loading page is kind of a must-have here)
Am I right or wrong? Any other pros/cons?
P.S. Not that I care so much, but down-voting/close voting without mentioning the reason doesn't help anyone.
Do not listen to your friend. His arguments are wrong.
While you are using Java on the server side, it is a bad idea to call your providers directly from JavaScript; mixing patterns like that is a very, very bad idea.
JSON is handled easily, and arguably better, in Java than in JavaScript, regardless of the fact that JSON stands for "JavaScript Object Notation". Just pick whichever library you like better: Jackson, Gson, or something else...
So... do it on the server, in Java. That is the proper place to do it.
You have a trivial integration use case: transforming data from external providers into the format the client requires.
I am working on a project which allows users to monitor energy consumption. The main dashboard page is a web app which is pretty neat and makes extensive use of javascript and ajax. The server currently runs apache and uses php; however, I am planning on installing node.js and updating the server side scripts in order to support websockets (and I also like the idea of using javascript on the server and client side).
I have followed several online introductions but I am struggling to find answers to specific questions which I need to get my head round before I can start, one of which is outlined below.
Is there a simple way to run page-specific javascript on the pages of the website in the same way you would with PHP? I have come across templating (for example using mustache), but I don't understand whether it is possible to run specific modules of javascript on the server when a specific file is about to be served.
Thank you very much for taking the time to read my questions. If you can answer any of them, or even provide any general advice, it would be greatly appreciated.
You should be looking into web frameworks for Node.js that help you with this kind of thing. My favorite is express.
Express will allow you to map a "route" to a handler, and that handler can be in any file you want. E.g.,
app.get('/energy/page-x', require('./routes/page-x-handler'));
Where ./routes/page-x-handler.js is something like:
module.exports = function (req, res, next) {
  // Runs for this route only; render the page's template here
  res.render('template-name');
};
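For completeness, a minimal sketch of the surrounding app file could look like the following; the view engine, port, and file layout are assumptions rather than anything Express requires:

// app.js - sketch only; assumes Express and a view engine such as EJS are installed
var express = require('express');
var app = express();

app.set('view engine', 'ejs');   // assumed template engine for res.render()
app.get('/energy/page-x', require('./routes/page-x-handler'));

app.listen(3000);

The handler module then runs only when /energy/page-x is requested, which gives you the page-specific server-side JavaScript you were asking about.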
If you use a framework like Express you can 'intercept' (or 'route', as it's called) requests for specific URLs and run server-side JS to handle that URL (including, at the end of processing, rendering and returning a template).
One of the differences with PHP is that PHP often mixes templates and code into one file, whereas with Node, those two are usually separated (so you have a clear separation between code and layout).
I've been looking at the API for Flattr, http://flattr.com/support/integrate/js , which has a cool way of accepting query variables for their JavaScript to load.
My question is, do most APIs use something other than JavaScript to accept these different variables for their services? EG:
Ruby on Rails
PHP
Python
These are then parsed by the respective language and the output is returned as JavaScript to the requesting website?
Cheers
JavaScript itself is totally capable of reading how it's embedded in the HTML it belongs to, by reading document.getElementsByTagName("script") and then parsing/matching the src attributes. Therefore, it's not a problem at all for it to parse the query variables attached at the end and dynamically (all in JavaScript, client-side) load the components it needs.
Any JavaScript library that lets you pack the whole thing and deploy it to your own web server should take this approach, since there's no server-side code to handle the request anyway.
On the other hand, JavaScript libraries that are hosted on other sites for you to use (like YUI) MAY take the server approach you mentioned.
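As a rough sketch of the client-only approach described above (the parameter names are just examples):

// widget.js - reads the query variables from its own script tag's src attribute
(function () {
  var scripts = document.getElementsByTagName('script');
  // For a synchronously loaded script, the last script tag is usually the current one
  var src = scripts[scripts.length - 1].src;
  var query = (src.split('?')[1] || '').split('#')[0];
  var params = {};
  query.split('&').forEach(function (pair) {
    if (!pair) return;
    var parts = pair.split('=');
    params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
  });
  // params.uid, params.button, ... can now drive what the widget loads
})();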
In my personal experience, projects that I have worked on have used server-side languages to deal with GET params.
So a request might be /myjavascript.js?id=123123, and the server-side language would generate the correct JavaScript for that request.
Keeping everything on the server side has the advantage of not letting the user see what is going on. If that isn't a concern for you, JavaScript is more than capable of handling different params.
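A sketch of that server-side variant, here using Node/Express purely for illustration (any server-side language works the same way; the route and property names are made up):

// Serves per-request JavaScript, e.g. /myjavascript.js?id=123123
var express = require('express');
var app = express();

app.get('/myjavascript.js', function (req, res) {
  var id = req.query.id;
  res.type('application/javascript');
  // The generated script is tailored to the requesting site
  res.send('window.myWidgetConfig = ' + JSON.stringify({ id: id }) + ';');
});

app.listen(3000);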
In my experience it's fairly common for widgets embedded into other people's sites to get their parameters by parsing them from their script tags. It makes the widget script static and self-contained, and thus easier to distribute through e.g. a fast CDN. Performance is important when you're trying to convince someone else to add your JavaScript to their site, as poor performance from the widget can make the entire site appear sluggish.
However, a better place to specify the parameters than the query string would be the URL's hash part, since that part isn't included when caches are checked; the script therefore has to be downloaded fewer times, which is of course good for performance, especially if the parameters vary a lot.
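Reading the hash part works much the same way as reading the query string; a tiny sketch, with made-up attribute values:

// <script src="https://cdn.example.com/widget.js#uid=alice&color=blue"></script>
var scripts = document.getElementsByTagName('script');
var src = scripts[scripts.length - 1].src;
var hashParams = src.split('#')[1] || '';   // "uid=alice&color=blue"
// ...then parse the key=value pairs exactly as with a query string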
I'm planning on writing a Spine/Backbone.js-style web application which basically just transfers a large application.js file to the client's browser, which then communicates with the Node.js backend using Ajax. The problem is that I don't know how to structure such a project, since I've never seen examples of such an application. I can picture some pros and cons of different ways of doing this:
Keep everything in one project folder. Both the server-side and client-side code reside in the same folders, which means they can share resources such as form input validation and language files. This seems like a good solution, but I have no clue how I would bundle only the code that the client needs, and not the server code. In general I don't know how to accomplish this. If it has been done before, I would like to see some sample code, perhaps even a git repo.
Create two separate projects, one for the client and one for the server. This seems a lot simpler and more straightforward, but not as elegant when it comes to sharing resources. I would have to write code such as form input validation twice.
Any thoughts?
Your first situation is a very tricky scenario, and I would suggest that we're not quite there yet. Some would argue that there's little reason to try to get there, as front and back ends will always be tasked with slightly, and sometimes drastically, different tasks. Libraries like Derby show promise, but aren't quite there yet.
I discussed this recently with a friend and we came to the conclusion that perhaps the best bet for now would be to serialize models over websockets, and then ensure that the node server and client app stay in sync.
I may work on such a library, but for now I'm still developing with two folders and copies of the models on both sides. Layout markup gets sent from the server, with all other content rendered client-side after receiving JSON from the server. Frankly, the amount of duplication isn't really that substantial. It's a little irritating, but it also maintains greater flexibility to grow in different directions.
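A very rough sketch of that model-serialization idea, assuming socket.io and Backbone; all names, ports, and data here are illustrative:

// server.js
var io = require('socket.io')(3000);
io.on('connection', function (socket) {
  // Push the current state of a model to a newly connected client
  socket.emit('model:update', { id: 1, kWh: 12.4 });
});

// client.js (after loading socket.io-client and Backbone in the page)
var socket = io('http://localhost:3000');
var reading = new Backbone.Model();
socket.on('model:update', function (attrs) {
  reading.set(attrs);   // keeps the client-side model in sync
});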
This won't be a complete answer to your question, but one library that might help if you choose to pursue such an endeavour might be Browserify.
It's designed so that you can use a similar require() function in the browser against a single JS file, either preprocessed or generated on the fly from your module sources, that contains many different modules. These modules can be shared with the server side through the same require() mechanism.
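For example, a shared validation module might look like this (the file name and the rule itself are assumptions):

// validation/email.js - a plain CommonJS module usable on both sides
module.exports = function isValidEmail(value) {
  // Deliberately simplistic rule, for illustration only
  return typeof value === 'string' && /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
};

// On the server: var isValidEmail = require('./validation/email');
// In the browser: bundle the client entry point with `browserify client.js -o bundle.js`
// and the same require() call works there too.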
I don't explicitly know of a Backbone implementation on the server side acting as a counterpart for model sync, which would seem to be the first goal you are looking for: allowing code that makes sense to share, such as models and validation, to be usefully shared.
Another thing to look at is RequireJS, which uses more traditional script-tag asynchronous loading of JS modules, but also works within Node.js, allowing the same AMD modules to be shared between Node and client code.
Is realtime required? If not, the Derby approach might be a little too heavy. Express.js proposes a structure where the client JS is separated into a public folder, and it provides methods to get a quick RESTful API running, which you can then access from your application.js.
I guess you could load "classic" js files from public into node via eval() too.
Things have moved ahead a lot now, and Browserify-influenced coding can help us achieve this easily.
There will always be some code that is not shared between the server and client sides, but the goal should always be to keep the logic code in separate modules (which are later used from both environments). This is better from a TDD point of view as well, and it also keeps your keystroke count lower.
Have a look at things like this stack -
http://mindthecode.com/lets-build-an-angularjs-app-with-browserify-and-gulp/
Having said that, your option 1 did not seem that unmanageable to me, provided you have the right coders coding the right code.