I want to draw a 3D cat (with animation), which is really nothing more than a bunch of 3D objects - ellipsoids, pyramids, spheres, etc.
And I have 2 questions:
1) Are there any ways to define your own complex geometrical 3D objects, rather than using the standard Three.js objects such as Sphere, Cube...
2) When animating the whole cat, should I define an animation function for each object? Is there any way to combine some objects together?
For question one I'd recommend reading up on parameter-driven modelling; this will allow you to make consistent complex objects without reinventing the wheel every time you create one. As for creating the custom objects: much as polylines are effectively a collection of lines with iterative implementations of the standard line methods (as well as object-specific methods), you'd create a JavaScript object which contains a collection of the objects necessary to build your custom shape. Here's a good WebGL cheat sheet to help you out a bit.
Question two is somewhat similar to the way we've described complex objects above: while you'll write a Cat object render/animate function, you'll handle the animation on a per-object basis (with the exception of whole-object static movement - imagine a cat on an escalator). Once again, constraint- or parameter-driven design will be your saviour here, since the fact that two or more objects are partially superposed in no way means that the objects are explicitly linked.
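To make both points concrete, here's a minimal sketch using a THREE.Group: a compound object built from primitives, where individual parts animate on their own while the group moves as a whole. It assumes an existing scene, camera and renderer, and all names and proportions are illustrative:

```js
// Build a compound "cat" from primitives inside one Group.
function makeCat() {
  const cat = new THREE.Group();
  const material = new THREE.MeshStandardMaterial({ color: 0x888888 });

  // Body: a sphere scaled into an ellipsoid.
  const body = new THREE.Mesh(new THREE.SphereGeometry(1, 32, 16), material);
  body.scale.set(1.5, 1, 1);
  cat.add(body);

  const head = new THREE.Mesh(new THREE.SphereGeometry(0.5, 32, 16), material);
  head.position.set(1.6, 0.6, 0);
  cat.add(head);

  // Ears: 4-sided cones standing in for pyramids.
  for (const side of [-1, 1]) {
    const ear = new THREE.Mesh(new THREE.ConeGeometry(0.15, 0.3, 4), material);
    ear.position.set(1.6, 1.1, side * 0.25);
    cat.add(ear);
  }

  const tail = new THREE.Mesh(new THREE.CylinderGeometry(0.05, 0.1, 1.2), material);
  tail.position.set(-1.6, 0.4, 0);
  tail.rotation.z = Math.PI / 4;
  cat.add(tail);

  cat.userData.tail = tail; // expose the parts the animation needs
  return cat;
}

// One render loop drives both per-part and whole-cat animation.
const cat = makeCat();
scene.add(cat);
function animate(time) {
  requestAnimationFrame(animate);
  cat.userData.tail.rotation.x = Math.sin(time / 300) * 0.4; // wag the tail
  cat.position.x = Math.sin(time / 2000) * 2;                // move the whole cat
  renderer.render(scene, camera);
}
requestAnimationFrame(animate);
```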
As an end note I'd recommend looking into ClojureScript. It might not be necessary for this type of work, but Lisp is very popular in the CAD scripting world, and you'd definitely be doing yourself a favour in the long run by at least familiarising yourself with its coding conventions. A lot of the questions you're going to have whilst working on this project will be answered in a variety of programming languages, but you'll likely find that many of the answers written by folk working on both sides of the fence (CAD/programming) will be written in Lisp. Here's a final general CAD forum that's a great resource for all things CAD.
So far I have seen many discussions on this topic, using different approaches to achieve it (https://github.com/tensorflow/models/issues/1809), but I want to know if anyone has managed to successfully use TensorFlow.js to do this.
I know some have also achieved it using transfer learning, but that is not the same as being able to add my own new class.
The short answer: no, not yet. Though it is technically possible, I have not seen an implementation of it in the wild.
The longer answer - why:
Given that "transfer learning" essentially means reusing the existing knowledge in a trained model to help you classify things of a similar nature without having to redo all the prior learning, there are actually 2 ways to do that:
1) This is the easier route, but it may not be possible for some use cases: use one of the high-level layers of a frozen model that you have access to (eg the models released by TF.js are frozen models, I believe - the ones on GitHub). This allows you to reuse some of its lower layers (or its final output), which may already be good at picking out features that are useful for your use case - eg object detection in a general sense. You then feed that sampled output into your own unfrozen layers that sit on top (which is where the new training happens). This is faster, as you are only updating weights etc for the new layers you have added. However, because the original model is frozen, if you wanted the full architecture (COCO-SSD in this case) you would have to replicate the bypassed layers in TF.js yourself to end up with the same resulting model architecture, which may not be trivial to do. See the sketch after this list.
2) Retraining the original model - think of it as fine-tuning the original model - but this is only possible if you have access to the original unfrozen model and the data used to train it. This would take longer, as you are essentially retraining the whole model on all the original data plus your new data. If you do not have the original unfrozen model, then the only way to do this is to implement the said model in TF.js yourself using the layers/ops APIs as needed, and then use that to train on your own data.
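To make route 1 concrete, here is a minimal TF.js sketch. It assumes a layers-format model (eg one converted with tfjs-converter) - the URL and the 'conv_pw_13_relu' layer name are placeholders, and as noted above this approach will not work directly on a frozen graph model such as the published COCO-SSD:

```js
import * as tf from '@tensorflow/tfjs';

async function buildTransferModel() {
  // Load a pre-trained model in layers format (placeholder URL).
  const base = await tf.loadLayersModel('https://example.com/mobilenet/model.json');

  // Tap a high-level layer; check base.summary() for the right name in
  // your model. 'conv_pw_13_relu' is a common MobileNet cut point.
  const cut = base.getLayer('conv_pw_13_relu');
  const truncated = tf.model({ inputs: base.inputs, outputs: cut.output });
  truncated.layers.forEach(l => (l.trainable = false)); // keep the base frozen

  // New unfrozen head, eg for 3 custom classes.
  const head = tf.sequential({
    layers: [
      tf.layers.flatten({ inputShape: cut.outputShape.slice(1) }),
      tf.layers.dense({ units: 64, activation: 'relu' }),
      tf.layers.dense({ units: 3, activation: 'softmax' }),
    ],
  });
  head.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy' });

  // Training then touches only the head:
  //   const embeddings = truncated.predict(images);
  //   await head.fit(embeddings, labels);
  return { truncated, head };
}
```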
What?!
So an easier-to-visualize example of this is PoseNet - the model that estimates where human joints/skeletons are.
Now imagine you wanted to make a new ML model on top of PoseNet that could detect when a person is in a certain position - eg waving a hand.
In this example you could use method 1: simply take the output of the existing PoseNet predictions for all the joints it has detected and feed that into a new layer - something simple like a multi-layer perceptron - which could then very quickly learn from example data when a hand is in a waving position. In this case we are simply adding to the existing architecture to achieve a new result - gesture prediction rather than the raw x-y point predictions for the joints themselves.
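A hedged sketch of what that might look like, assuming @tensorflow-models/posenet is available (the two-class head and the training data are up to you):

```js
import * as tf from '@tensorflow/tfjs';
import * as posenet from '@tensorflow-models/posenet';

// The new trainable head: a small multi-layer perceptron over keypoints.
const classifier = tf.sequential({
  layers: [
    tf.layers.dense({ inputShape: [34], units: 32, activation: 'relu' }),
    tf.layers.dense({ units: 2, activation: 'softmax' }), // [waving, not waving]
  ],
});
classifier.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy' });
// Train with classifier.fit(exampleKeypoints, labels) on labelled poses.

async function detectWave(videoElement) {
  const net = await posenet.load();
  const pose = await net.estimateSinglePose(videoElement);

  // Flatten PoseNet's 17 keypoints into one 34-value input vector.
  const input = tf.tensor2d([
    pose.keypoints.flatMap(k => [k.position.x, k.position.y]),
  ]);
  return classifier.predict(input); // gesture probabilities
}
```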
Now consider case 2 for PoseNet: you want to be able to recognise a new part of the body that it currently does not. For that to happen, you would need to retrain the original model so that it could learn to predict that new body part as part of its output.
This is much harder, as you would need to retrain the base model, which means you need access to the unfrozen model. If you didn't have access to the unfrozen model, you would have no choice but to attempt to recreate the PoseNet architecture entirely yourself and then train that with your own data. As you can see, this second use case is much harder and more involved.
I have somewhere between 2M and 10M static objects which I would like to overlay on Google Maps. I've previously tried HeatmapLayer successfully on much smaller sets. Due to the sheer volume I'm a bit concerned, and I need to lump the objects together to avoid performance problems. The target platform is Chrome on a standard desktop.
What is the best way to space-partition and merge objects in close proximity? Should I try some type of loose quadtree to lump the objects together, and then display each node with its respective weight using the HeatmapLayer? Or should I try to dynamically build some type of triangle mesh, where vertices can be merged dynamically and triangles gain weight as more objects are added to them, and then display the triangles on top of Google Maps? HeatmapLayer is pretty fast (it looks like it's implemented in GL shaders), but I doubt Polygon is.
I've tried searching for open-source loose quadtree implementations and other fast space-partitioning implementations in JavaScript, but found nothing. Is my best bet to port some C++ implementation? Any answers/comments from someone who has built something similar would be helpful!
I settled on preprocessing my data in the backend using a space partitioning implementation. I recommend it for anybody who has the luxury of doing so.
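For anyone wanting a starting point, here's a minimal sketch of that preprocessing under the simplest assumption - a uniform grid rather than a quadtree. Each occupied cell becomes one weighted point that HeatmapLayer can consume directly (it accepts {location, weight} objects):

```js
// Bin raw points into grid cells and emit one weighted centroid per cell.
function aggregate(points, cellSizeDeg) {
  const cells = new Map();
  for (const p of points) {
    const key = `${Math.floor(p.lat / cellSizeDeg)}:${Math.floor(p.lng / cellSizeDeg)}`;
    const c = cells.get(key) || { lat: 0, lng: 0, weight: 0 };
    c.lat += p.lat; c.lng += p.lng; c.weight += 1;
    cells.set(key, c);
  }
  // Centroid of each cell, weighted by how many raw points it absorbed.
  return [...cells.values()].map(c => ({
    location: { lat: c.lat / c.weight, lng: c.lng / c.weight },
    weight: c.weight,
  }));
}

// Client side (assumes the Maps JS API visualization library is loaded):
// new google.maps.visualization.HeatmapLayer({
//   data: aggregated.map(p => ({
//     location: new google.maps.LatLng(p.location.lat, p.location.lng),
//     weight: p.weight,
//   })),
// }).setMap(map);
```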
I realize this is not strictly a programming problem, but as SO is the best resource for programming-related problems, I decided to try it out. :)
I have a project where I need to do 3D pathfinding in JavaScript, inside a building. Dijkstra's algorithm is probably the best fit for this, as it handles irregular shapes quite nicely.
However, the problem is this:
Dijkstra requires a node structure for it to work. But how do I create that data? Obviously some sort of conversion needs to be done from the base data, but how do I create that base data in the first place? Going through the blueprint, getting x & y values for each possible path node and calculating the distances by hand seems a bit excessive... and prone to swearwords...
I was even thinking of using Google SketchUp for this - drawing lines for each possible path - but then the problem is getting the path data out of it. :/
I can't be the first person to have this problem... Any ideas? Are there any ready-made tools for creating path data?
Could not find any ready-made tools, so I ended up creating the path data as lines in Google SketchUp, exporting them as Collada files and writing my own converter for the Collada XML data.
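For reference, a hedged sketch of what such a converter can look like. It assumes the simplest possible export - one <float_array> of x,y,z positions and one <lines> primitive whose <p> element holds plain pairs of vertex indices; real Collada files can add extra inputs (normals etc.) that change the index stride, so adapt as needed:

```js
// Parse Collada line geometry into a weighted graph for Dijkstra.
function colladaToGraph(xmlText) {
  const doc = new DOMParser().parseFromString(xmlText, 'application/xml');

  // Vertex positions: a flat x,y,z float list.
  const floats = doc.getElementsByTagName('float_array')[0]
    .textContent.trim().split(/\s+/).map(Number);
  const nodes = [];
  for (let i = 0; i < floats.length; i += 3) {
    nodes.push({ x: floats[i], y: floats[i + 1], z: floats[i + 2], edges: [] });
  }

  // Each pair of indices in <p> is one path segment; weight = 3D distance.
  const lines = doc.getElementsByTagName('lines')[0];
  const idx = lines.getElementsByTagName('p')[0]
    .textContent.trim().split(/\s+/).map(Number);
  for (let i = 0; i < idx.length; i += 2) {
    const a = nodes[idx[i]], b = nodes[idx[i + 1]];
    const d = Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
    a.edges.push({ to: idx[i + 1], cost: d });
    b.edges.push({ to: idx[i], cost: d });
  }
  return nodes; // feed straight into Dijkstra
}
```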
This can all be done in code by constructing a 3D grid and removing the cells that intersect with 3D objects.
I would then layer multiple 3D grids on top (doubling the cell size each time) to give a more general idea of reachability (each constructed from the smaller grids below it). Then, by sheer virtue of how pathfinding algorithms work, you will always find the most efficient path from A to B, and it will automatically be routed through the largest cells (and therefore the fewest calculation steps). Note: give the larger 3D grids a slightly lower weighting so that their paths are favoured.
This can be used for many applications. For example, if you can only walk on the ground, simply remove the cells in unreachable areas.
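A minimal sketch of the basic single-resolution grid, with intersectsSolid standing in for whatever collision test your scene provides; the multi-resolution layering would be built on top of this:

```js
// Voxelise the building volume, drop blocked cells, connect the survivors.
function buildGrid(bounds, cell, intersectsSolid) {
  const nx = Math.ceil((bounds.max.x - bounds.min.x) / cell);
  const ny = Math.ceil((bounds.max.y - bounds.min.y) / cell);
  const nz = Math.ceil((bounds.max.z - bounds.min.z) / cell);
  const id = (x, y, z) => x + nx * (y + ny * z);

  // Keep only cells that don't intersect solid geometry.
  const open = new Set();
  for (let x = 0; x < nx; x++)
    for (let y = 0; y < ny; y++)
      for (let z = 0; z < nz; z++)
        if (!intersectsSolid(x, y, z)) open.add(id(x, y, z));

  // 6-connected neighbours; uniform edge cost = cell size.
  const edges = new Map();
  for (const n of open) edges.set(n, []);
  for (let x = 0; x < nx; x++)
    for (let y = 0; y < ny; y++)
      for (let z = 0; z < nz; z++) {
        const a = id(x, y, z);
        if (!open.has(a)) continue;
        for (const [dx, dy, dz] of [[1, 0, 0], [0, 1, 0], [0, 0, 1]]) {
          if (x + dx >= nx || y + dy >= ny || z + dz >= nz) continue;
          const b = id(x + dx, y + dy, z + dz);
          if (!open.has(b)) continue;
          edges.get(a).push({ to: b, cost: cell });
          edges.get(b).push({ to: a, cost: cell });
        }
      }
  return { open, edges }; // node set + adjacency for Dijkstra/A*
}
```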
I'm trying to nut out a high-level tech spec for a game I'm tinkering with as a personal project. It's a turn-based adventure game that's probably closest to Archon in terms of what I'm trying to do.
What I'm having trouble with is conceptualising the best way to develop a combat system that I can implement simply at first, but that will allow expansion and complexity to be added in the future.
Specifically, I'm having trouble figuring out how to handle combat special effects - that is, bonuses or negatives that may be applied or removed by an actor, an item or an environment.
Do I have the actor handle all effects that are in play for/against them, or should the game itself check each weapon, armour, actor and location each time it tries to make a decisive roll?
Are effects handled in individual objects or is there an 'effect' object or a bit of both?
I may well have not explained myself at all well here, and I'm more than happy to expand the question if my request is simply too broad and airy. But my initial thinking is that smarter people than me have spent the time and effort figuring things like this out, and frankly I don't want to taint the conversation with the cul-de-sac of my own stupidity too early.
The language in question is javascript, although at this point I don't imagine it makes a great difference.
What you're calling 'special effects' used to be called 'modifiers', but nowadays they go by the term popular in MMOs: 'buffs'. Handling these is as easy or as difficult as you want it to be, given that you get to choose how much versatility you want to bestow at each stage.
Fundamentally though, each aspect of the system typically stores a list of the modifiers that apply to it, and you can query them on demand. Typically there are only a handful of modifiers that apply to any one player at any given time so it's not a problem - take the player's statistics and any modifiers imparted by skills/spells/whatever, add on any modifiers imparted by worn equipment, then add anything imparted by the weapon in question. If you come up with a standard interface here (eg. sumModifiersTo(attributeID)) that is used by actors, items, locations, etc., then implementing this can be quick and easy.
Typically the 'effect' objects would be contained within the entity they pertain to: actors have a list of effects, and the items they wear or use have their own list of effects. Where effects are explicitly activated and/or time-limited, it's up to you where you want to store them - eg. if you have magical potions or other consumables, their effects will need to be appended to the Actor rather than the (presumably destroyed) item.
Don't be tempted to have the effects modify actor attributes in place, as you'll quickly find it's easy for the attributes to 'drift' if you don't ensure all additions and removals follow the correct protocol. It also makes it much harder to bypass certain modifiers later - eg. imagine a magical shield that only protects against other magic: you can pass some sort of predicate to your modifier-totalling function that disregards certain types of effect to do this.
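Putting those three points together, here's a minimal sketch of the sumModifiersTo(attributeID) interface, with a predicate parameter for cases like the anti-magic shield. All names are illustrative:

```js
class Effect {
  constructor(attribute, amount, tags = []) {
    this.attribute = attribute; // e.g. 'armor', 'toHit'
    this.amount = amount;       // signed: buffs positive, debuffs negative
    this.tags = tags;           // e.g. ['magic'], used by predicates
  }
}

class Actor {
  constructor() {
    this.effects = [];   // effects on the actor itself (spells, potions, ...)
    this.equipment = []; // items, each with its own .effects list
  }

  // Total all modifiers to one attribute, across actor and equipment.
  sumModifiersTo(attribute, include = () => true) {
    const all = [
      ...this.effects,
      ...this.equipment.flatMap(item => item.effects),
    ];
    return all
      .filter(e => e.attribute === attribute && include(e))
      .reduce((sum, e) => sum + e.amount, 0);
  }
}

// Usage: total armor, or armor ignoring magical bonuses (anti-magic attack).
// actor.sumModifiersTo('armor');
// actor.sumModifiersTo('armor', e => !e.tags.includes('magic'));
```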
Take a look at the book Head First Design Patterns by Elisabeth Freeman. Specifically, read up on the Decorator and Factory patterns and the method of programming to interfaces, not implementations. I found that book to be hugely effective in illustrating some of the complex concepts that may get you going on this.
Hope this helps to point you in the right direction.
At first blush I would say that the individual combatants (player and NPC) have a role in determining what their own combat characteristics are (e.g. armor value, to-hit number, damage range, etc.), given all the modifiers that apply to that combatant. That way the combat system is not trying to figure out whether the character's class gives him/her an armor bonus, whether a magic weapon weighs in on the to-hit, etc.
But I would expect the combat system itself to sit outside the individual combatants: it would take information about an attacker, a desired type of attack and a target (or set of targets), and resolve that.
To me, that kind of model reflects how we actually ran combat in pencil-and-paper RPGs. The DM asked each player for the details of his or her character and then ran the combat using that information as the inputs. That it works in the real world suggests it's a pretty flexible system.
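A minimal sketch of that separation, reusing the sumModifiersTo idea from the answer above (the names and the d20-style roll are illustrative only):

```js
// The resolver sits outside the combatants: each combatant folds its own
// modifiers into the numbers it reports, and the resolver just consumes them.
function resolveAttack(attacker, defender, rng = Math.random) {
  const toHit = attacker.baseToHit + attacker.sumModifiersTo('toHit');
  const armor = defender.baseArmor + defender.sumModifiersTo('armor');

  const roll = Math.floor(rng() * 20) + 1; // d20
  if (roll + toHit < armor) return { hit: false };

  const damage = attacker.baseDamage + attacker.sumModifiersTo('damage');
  return { hit: true, damage };
}
```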