So, I have thought about designing a WebGL graphics engine that will facilitate building 3D interactive graphics for the web. Now, my question is:
WebGL is a JavaScript API, so in order to design an engine for WebGL graphics, do I need a JavaScript compiler or anything? What I want is a system that lets users see what they are creating (for example, like the Blender workspace: as you build up a scene, you can see it and make changes simultaneously).
You would have to create some kind of engine or framework to build your system on top of.
Creating just the framework/engine would take at least 2-3 months, and if you plan on creating something really big and advanced that supports various effects rather than simply rendering primitives, then that might come down to 5-6 months. After that you could start creating your web application. So 6-7 months in total? That shouldn't be a problem.
I don't know how advanced you are or how many people you are working with, but that seems very plausible and doable. Is it worth it, though? In a year, many different things will change: maybe a new OpenGL ES version for WebGL, a changing API, different supported browsers (IE recently joined the game)... it's really questionable.
You wouldn't need any kind of JS compiler or anything like it, just knowledge of advanced JS and of the many different techniques used in 3D. And since you plan on building a system that goes far beyond just-graphics stuff, that adds even more to the overall complexity and time consumption.
So, to answer your question: yes, it's very doable in a year, but will it pay off?
Similar things already exist in some form:
- http://errolschwartz.com/projects/threescene/
- http://badassjs.com/post/12885773103/threenodes-js-a-visual-webgl-scene-editor
- the CopperLicht engine has its own real-time editor
- there are more lab-playground-like editors
Newbie here. I am working with Phaser, specifically with the isometric plugin.
I would like to know if it is possible to create games in Phaser similar to Agar.io, in terms of handling multiple real-time connections, generating an enormous map with about 300 players on it, and all of this without too much impact on game performance. I seriously don't know how to handle the multiplayer part (probably sockets, Node.js) for it to work really well. And as for generating a really big map, I am quite blank too.
Is it possible, in Phaser, to create an isometric-type game that handles multiple real-time players and HUGE maps that are generated as the user gets to the edges of the visible "map"? How?
If not, what should I opt for (a JS game engine or other applications) in order to achieve what I want?
You're not asking the right question, but you're close!
Your first guess is correct. You wouldn't handle multiplayer with Phaser; you'd use WebSockets, or Node.js, or some other backend. So Phaser does not really limit what you can create with regards to multiplayer, since none of the networking code has anything to do with Phaser.
The idea of handling a huge map also just depends on how you optimize your graphics, regardless of what platform or framework you're using. For example, with huge or infinite maps you can render only what's on screen (or just past its edges) and use object pooling to show the rest of the map as the players move; a rough sketch of the idea follows.
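As a rough, framework-agnostic sketch (the camera fields, tile size, and renderTile callback are hypothetical names, not Phaser API):

```javascript
// Hypothetical sketch: only draw tiles that fall inside the camera view.
// Everything off-screen is simply never touched.
const TILE_SIZE = 64;

function drawVisibleTiles(camera, renderTile) {
  const firstCol = Math.floor(camera.x / TILE_SIZE);
  const firstRow = Math.floor(camera.y / TILE_SIZE);
  const cols = Math.ceil(camera.width / TILE_SIZE) + 1;  // +1 for partial tiles at the edge
  const rows = Math.ceil(camera.height / TILE_SIZE) + 1;

  for (let row = firstRow; row < firstRow + rows; row++) {
    for (let col = firstCol; col < firstCol + cols; col++) {
      renderTile(col, row); // generate-on-demand fits here for "infinite" maps
    }
  }
}
```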
For multiplayer in Node.js, check out Socket.io. It's really easy to use. I've set up a barebones example using it here. And in case you might find it helpful, here's an open-source game I made for Ludum Dare in Phaser, with networking. (That one is P2P only, so it's only made to handle 2 players connected to each other, but like I said, that's a limitation of the multiplayer framework I used, in this case peerjs.com, and has nothing to do with Phaser itself, which can take care of all your rendering and game-logic needs.)
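For a feel of how separate the networking is from Phaser, a barebones Socket.io relay server might look like this (the 'move' event name and payload shape are made up for illustration):

```javascript
// server.js -- minimal Socket.io relay; the client emits 'move' events and
// the server rebroadcasts them to every other connected player.
const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  console.log('player connected:', socket.id);

  // Relay one player's state to everyone else.
  socket.on('move', (state) => {
    socket.broadcast.emit('move', { id: socket.id, ...state });
  });

  socket.on('disconnect', () => {
    io.emit('playerLeft', socket.id);
  });
});
```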
Hopefully this helped answer some of your questions!
Phaser (at least in its 2.0 version) is not a good candidate for implementing real-time game networking, as explained here.
If you're looking for a JavaScript multiplayer game engine, you should check out Lance, which was written specifically for this purpose. You can then choose a renderer of your choice (Pixi.js, for example, if you're aiming to implement something like Agar.io; it's the same renderer Phaser uses).
Regarding PhasedEvolution's comment above: Firebase is a nice tool if you're doing turn-based games, but it's not up to par for real-time game development, as it doesn't allow the low-level access needed for game-critical features that mitigate latency, like client-side prediction, bending, interpolation, and extrapolation.
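To give a sense of what one of those techniques involves, here is a minimal sketch of snapshot interpolation (names and shapes are hypothetical, not Lance's API): the client renders slightly in the past and blends between the two most recent server snapshots.

```javascript
// Illustrative only. Render entities a bit behind real time, interpolating
// between the two most recent server snapshots to hide network jitter.
function interpolate(prev, next, renderTime) {
  // t is 0 at the older snapshot and 1 at the newer one
  const t = (renderTime - prev.time) / (next.time - prev.time);
  return {
    x: prev.x + (next.x - prev.x) * t,
    y: prev.y + (next.y - prev.y) * t,
  };
}

// Example: snapshots 100ms apart, rendering 50ms after the older one.
const pos = interpolate(
  { time: 0, x: 0, y: 0 },
  { time: 100, x: 10, y: 20 },
  50
); // -> { x: 5, y: 10 }
```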
Proper disclosure: I'm one of the co-creators of Lance :)
I am building a massive real-time collaboration web application. It is a web IDE with support for HTML, CSS & JS programming and a stage area that reflects the results, a la JSFiddle, Plunker, etc.
Now, the twist is that it would support multi-user real-time collaboration, where people viewing the same instance of the web application would be able to write code together, with changes reflected across all the open instances. I have figured out the race conditions, session management, and so on.
What I am having trouble with is how to reflect the typing and/or deleting, along with caret positioning, across these multiple instances, giving the illusion that when one person is typing, he is actually typing on all the instances.
The other thing is that RactiveJS says that it "updates only those parts of the page that are out of date. Tedious DOM manipulation is a thing of the past."
Which is a very nice effect in their demo. Imagine you are on JSFiddle and you don't have to re-run every time. So, my question is: how do they do it? What is the concept behind it?
I don't want to use any library for this. I am pretty good at JS; I am just having a hard time figuring out the algorithm.
Things I have considered:
- Dirty checking [but that's dirty, right? see the naive sketch below]
- Delta differencing [but, like ReactJS, it would have to update the entire application state every time]
- Object.observe [the browser compatibility is not realistic yet]
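Since dirty checking came up, here is its most naive form (illustrative names only, not how RactiveJS actually implements it): keep a snapshot of the state, compare on every tick, and touch only the DOM nodes whose data changed.

```javascript
// Naive dirty checking, purely illustrative: diff state against the last
// snapshot each tick and update only the DOM nodes whose data changed.
const state = { title: 'Hello', count: 0 };
const snapshot = {};

function digest() {
  for (const key of Object.keys(state)) {
    if (state[key] !== snapshot[key]) {
      const node = document.querySelector(`[data-bind="${key}"]`);
      if (node) node.textContent = state[key]; // patch only what's out of date
      snapshot[key] = state[key];
    }
  }
}

setInterval(digest, 100); // polling; real frameworks hook into events instead
```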
So, if you have anything that can help me go in the right direction, I would be really thankful.
Real-time collaboration tools that allow concurrent editing/manipulation of objects, text, etc. usually use a variant of the Operational Transformation (OT) algorithm.
OT is not trivial to understand, much less to implement, so I'd suggest you take a look at already-cooked libraries for this, such as:
- ShareJS
- Racer for Node
OT works, in some very basic way, similarly to Git; the toy sketch below shows the core transformation idea.
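To make that concrete, here is a toy transformation for two concurrent string inserts (purely illustrative; real libraries like ShareJS also handle deletes, operation composition, and many edge cases):

```javascript
// Toy OT: transform insert A against a concurrent insert B so that both
// replicas converge to the same document.
function transformInsert(opA, opB) {
  // Shift A right if B landed before it; ties are broken by site id so
  // both sides make the same decision.
  if (opB.pos < opA.pos || (opB.pos === opA.pos && opB.site < opA.site)) {
    return { ...opA, pos: opA.pos + opB.text.length };
  }
  return opA;
}

function applyInsert(doc, op) {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Starting from "abc", user A inserts "X" at 1 while user B inserts "YY" at 0.
const a = { pos: 1, text: 'X', site: 1 };
const b = { pos: 0, text: 'YY', site: 2 };

const site1 = applyInsert(applyInsert('abc', a), transformInsert(b, a));
const site2 = applyInsert(applyInsert('abc', b), transformInsert(a, b));
console.log(site1 === site2, site1); // true "YYaXbc"
```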
As an update to what you posted in the comments:
You say you are using Python. You wouldn't need to rebuild your whole codebase, I guess, but keep in mind that real-time collaboration tools usually benefit a lot from an event-driven server-side language.
Since you are using Python, you could check out the Twisted Framework
I'm about to use WebGL in an academic project to preview some 2D and 3D models in a given format.
While I'm reading some documentation, I would like to know, from your experience, what would be the best API to speed up development and abstract away some low-level calls, and also the best IDE to work with it.
Cross-browser compatibility is not a major problem.
I've decided on WebGL because I would like to create a web interface for my project to help share my progress.
Do you even recommend using WebGL for that?
At the end of the day, an IDE is only meant to help a little; you do the heavy lifting. Having said that, the best editors I use for JavaScript are Sublime Text and NetBeans IDE.
Then, as was already stated, Chrome DevTools is your best bet for debugging.
API
Three.js is really awesome for developing WebGL apps. It makes creating what you want very easy: create a scene object, create some things you want to show, then add them to the scene and render. No need to mess around with GLSL and low-level stuff right off the bat, although you could if you really wanted to.
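As a minimal sketch of that flow (assuming three.js is loaded on the page and exposes the global THREE):

```javascript
// Scene, camera, renderer, one spinning cube -- the whole three.js loop.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000
);
camera.position.z = 3;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
scene.add(cube);

(function animate() {
  requestAnimationFrame(animate);
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
})();
```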
IDE
Chrome's console and various tools are great for debugging in general. You can use whatever text editor / IDE for javascript that you want.
API (Framework)
If three.js is hard for you, or you are a professional developer who just doesn't want to spend time on simple things like setting up the environment (scene, camera, renderer), you may try whitestorm.js.
The WhitestormJS framework is a wrapper around three.js (you can use both at once, much like jQuery wraps the DOM). It has some extra features:
- Built-in physics based on Bullet Physics 3, including soft-body physics (you can use a light version without physics).
- A component structure (like ReactJS): you can share your plugins and components and use the ones made by others.
WhitestormJS is a non-commercial open-source project by three.js fans.
I'm looking for a good plugin for:
a) rapid rendering of
b) lines, shapes, and imagery on top of a
c) rectangular canvas area whose size can be declared at load time.
It needs to run at 20-30fps without putting a heavy load in the browser. It also needs to be able to interface with JavaScript and the DOM.
Creating my own Flash plugin is the first choice, but I'd like to aim for a free, open-source and/or non-proprietary solution first. HTML5 canvas is out of the question - it renders way too slowly.
Does anyone happen to know of anything that offers these features? (I'd even be okay with a pre-made Flash plugin that meets the requirements mentioned above.)
Your request is still vague. What do the lines and shapes need to do? Sit there looking pretty?
If your lines, shapes, and imagery are going to be fewer than say 5,000 objects total, I'd recommend using SVG and perhaps the Raphael library to go with it. Every SVG object is a DOM object from the get-go, which will save you some of the headaches associated with trying to use Flash or Canvas.
If you really need a lot more performance or plan on having 50,000 objects on screen, Canvas may be for you.
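To illustrate the SVG/DOM point, a small Raphael sketch; every shape it creates is a live DOM node you can attach ordinary event handlers to:

```javascript
// Raphael keeps shapes as retained SVG nodes -- no per-frame redraw needed.
const paper = Raphael(0, 0, 320, 200);     // x, y, width, height
const circle = paper.circle(160, 100, 40); // cx, cy, r
circle.attr({ fill: '#f80', stroke: 'none' });

circle.click(function () {
  this.animate({ r: 60 }, 300, 'bounce'); // animate the radius attribute
});
```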
"Creating my own Flash plugin is the
first choice"
I'm unclear what you mean by "plugin" here - I assume you just mean "flash file" (an SWF?).
I think @WTP is making a good point. You say "rapid rendering", but of what? How complex is it? Flash has very good speed when it comes to vector graphics, and much faster rendering of optimized bitmap data (the technique of choice here is blitting). It all comes down to optimization and the complexity of the graphics. No matter the plugin or tech solution, you will always be able to cripple a machine with inefficient design.
To answer your question, Flash will definitely meet your needs.
I would also venture that Canvas/JS would as well, but apparently you've tried that already? I've seen quite complex scenes running quite rapidly, so that surprises me.
I'll note, also, the upcoming Molehill APIs for Flash. These provide low-level access to the GPU and will create the potential for breathtaking 2D/3D performance in the browser. But it's still in alpha, so... don't hold your breath ;)
I know of only four major players in the "vector graphics capable" department: HTML Canvas, Flash, Silverlight, and Java applets. Aside from Canvas, all of them are proprietary in some way or another. The good news is that all of them can be compiled for free in some way or another, and by my understanding they are generally faster than Canvas. Now, I happen to know Flash, so that might color my opinion, but I am fairly certain it is your best option. It has decent performance and a solid install base. It also runs on Linux and does not raise major security issues.
Look up the Flash Builder (Flex) SDK. There is command-line compilation for it.
I am holding back on seriously pursuing Processing.js pieces, mostly due to the bloat of the library. I have found that pieces like Ball Droppings do not use the library's Processing syntax parser, which is good, since I imagine it would slow down the page more, especially adding to the initial load and setup time. Still, I am wondering if it's worthwhile to use it basically as a big utility library, like UnderscoreJS. For example, how good is its implementation with SVG compared with other libraries out there today like RaphaelJS? Has anyone gone through the implementation of the Processing API extensively enough? When I skim through the source I see a lot of boilerplate I don't really need, as well as a couple of instances of questionable coding practices. But the library still seems to perform decently, at least on the Processing.js homepage, although the examples are set to run at 15fps, and not the (in my opinion) minimally acceptable 24fps.
I think it strongly depends on the project you are working on and the background knowledge you have with the Processing library.
Processing.js is a great choice if you have already learned the original Processing API (Java) and want to leverage your existing knowledge in the web environment. It might be the only choice if you want to port an existing project to the web - actually, this is probably the best time to use it.
If you are a JavaScript programmer and don't know much about Processing, you will probably dislike writing Java syntax in the browser, and everything becomes even more problematic if you have to mix it with JS. The API doesn't feel like JavaScript, and there is a lot of code that could be written more elegantly.
Regarding performance, it is not a bad choice; actually, most projects run smoothly, and I can definitely recommend using Processing.js in circumstances like those explained above.
Here is a great list of various JavaScript engines out there:
Javascript Graphic/Game Engines
It is hard to recommend a single library, as the requirements are specific to each project.
For simple graphics/diagrams: RaphaelJS is really nice and performs decently.
"how good is its implementation with SVG compared with the other libraries out there today like RaphaelJS"
Processing.js doesn't use SVG as far as I know; it only uses canvas. RaphaelJS uses only SVG. There's an interesting read here and also at Wikipedia about the difference. The main difference is that SVG stores the vector data of objects, so you can easily change the position, colour, etc. of things, and it also provides mouseover events. Canvas - and Processing.js - does no such thing: it draws to the canvas and forgets everything, so you have to do more work. I don't know about the performance difference between the two.
As far as the Processing.js API is concerned, I don't have any clue how it is implemented, but since John Resig of jQuery is involved, it can't be that bad, to say the least.
I agree with user hlfcoding that writing Java in the browser feels weird. I too am looking for a cleaner solution for my future canvas experiments.
"I fail to see how re-rendering for each frame in JavaScript can be seen as performant."
That's exactly how canvas works: you have to calculate and render every frame in JS. It's not Processing.js-specific. I don't think that's such a performance hit; behind the scenes, a browser running SVG does the same anyway.
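To make that concrete, the canonical canvas animation pattern clears and redraws the whole scene every frame (a minimal sketch assuming a canvas element is on the page):

```javascript
// Immediate-mode canvas: forget the last frame, redraw everything, repeat.
const ctx = document.querySelector('canvas').getContext('2d');
let x = 0;

function frame() {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height); // wipe last frame
  ctx.fillRect(x, 50, 20, 20);                              // redraw the scene
  x = (x + 2) % ctx.canvas.width;
  requestAnimationFrame(frame);                             // browser-paced, ~60fps
}
requestAnimationFrame(frame);
```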