What would be your representation of a particle as an Angular directive - JavaScript

I believe the answers will provide nice, complete examples of Angular directives, as well as different points of view on how particles are actually understood. It applies to both AngularJS and Angular! Many good answers are possible depending on the theory you start from, and I think it's a fun challenge that leads to great learning material.
A particle can have several properties: volume, density, or mass. Particles range from electrons, atoms, and molecules to macroscopic particles like powders and other granular materials. Particles can also be used to build scientific models of even larger objects, depending on their density.
Thank you!
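One minimal sketch of how such a directive might look in AngularJS; the directive name, bindings, and template are all illustrative, and density is derived from mass and volume so the three properties stay consistent:

```javascript
// Hypothetical <particle> element directive; names are illustrative only.
angular.module('matter', []).directive('particle', function () {
  return {
    restrict: 'E',
    scope: { mass: '<', volume: '<' }, // one-way bindings from the parent
    template: '<span>m = {{mass}} kg, V = {{volume}} m3, density = {{density()}} kg/m3</span>',
    controller: ['$scope', function ($scope) {
      // Density is computed, not stored, so it always reflects mass/volume.
      $scope.density = function () { return $scope.mass / $scope.volume; };
    }]
  };
});
```

Usage would be something like `<particle mass="0.5" volume="0.002"></particle>`.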

Related

Does anyone know how to retrain Object Detection (coco-ssd) of TFJS for object 91?

So far I have seen many discussions on this topic using different approaches (https://github.com/tensorflow/models/issues/1809), but I want to know if anyone has managed to achieve this successfully with TensorFlow.js.
I know some have achieved this using transfer learning, but that is not the same as being able to add my own new class.
The short answer: no, not yet. Though it is technically possible, I have not seen an implementation of this in the wild.
The longer answer - why:
Given that "transfer learning" essentially means reusing the existing knowledge in a trained model to help you classify things of a similar nature without having to redo all the prior learning, there are actually two ways to do that:
1) This is the easier route, but it may not be possible for some use cases: use one of the high-level layers of the frozen model that you have access to (e.g. the models released by TF.js are frozen models, I believe - the ones on GitHub). This allows you to reuse some of its lower layers (or its final output), which may already be good at picking out features useful for your use case, e.g. object detection in a general sense. You then feed that output into your own unfrozen layers that sit on top (which is where the new training happens). This is faster because you are only updating weights etc. for the new layers you have added. However, because the original model is frozen, if you wanted the full COCO-SSD architecture you would have to replicate in TF.js the layers you bypassed, which may not be trivial to do. A TF.js sketch of this route follows option 2 below.
2) Retraining the original model - think of it as fine-tuning the original model - but this is only possible if you have access to the original unfrozen model and the data used to train it. This takes longer because you are essentially retraining the whole model on all the original data plus your new data. If you do not have the original unfrozen model, the only way to do this is to implement the model yourself in TF.js using the layers / ops APIs as needed, and then train it on your own data.
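Here is what route 1 might look like in TF.js - a minimal sketch, assuming a hypothetical frozen model URL and intermediate node name ('FeatureNode'); both are placeholders you would replace after inspecting a real model:

```javascript
// Minimal feature-extraction sketch; the model URL, node name, feature size,
// and class count are all illustrative placeholders.
import * as tf from '@tensorflow/tfjs';

async function buildTransferModel() {
  // Load the frozen model (weights are read-only).
  const base = await tf.loadGraphModel('https://example.com/model/model.json');

  // New trainable head that learns your additional class(es).
  const head = tf.sequential({
    layers: [
      tf.layers.dense({ inputShape: [1024], units: 64, activation: 'relu' }),
      tf.layers.dense({ units: 2, activation: 'softmax' }) // new class vs. "other"
    ]
  });
  head.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy' });

  // Pull features from a frozen intermediate node and feed them to the head.
  const featuresOf = (imgTensor) => base.execute(imgTensor, 'FeatureNode');
  return { featuresOf, head };
}
```

You would then call head.fit(...) on the extracted features for your labelled examples; only the head's weights are updated.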
What?!
An easier-to-visualize example of this is PoseNet - the model that estimates where human joints/skeletons are.
Now, in this PoseNet example, imagine you wanted to make a new ML model that could detect when a person is in a certain position - e.g. waving a hand.
In this example you could use method 1: simply take the output of the existing PoseNet predictions for all the joints it has detected and feed it into a new layer - something simple like a multi-layer perceptron - which could then very quickly learn from example data when a hand is in a waving position. In this case we are simply adding to the existing architecture to achieve a new result - gesture prediction instead of the raw x-y point predictions for the joints themselves. A sketch of this idea follows.
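A minimal sketch of the PoseNet + MLP idea, assuming the @tensorflow-models/posenet package; the gesture labels and layer sizes are illustrative:

```javascript
// 17 keypoints x (x, y) = 34 input features per pose.
import * as tf from '@tensorflow/tfjs';
import * as posenet from '@tensorflow-models/posenet';

const gestureModel = tf.sequential({
  layers: [
    tf.layers.dense({ inputShape: [34], units: 32, activation: 'relu' }),
    tf.layers.dense({ units: 2, activation: 'softmax' }) // waving / not waving
  ]
});
gestureModel.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy' });

// Flatten PoseNet's keypoint positions into a single feature vector.
async function poseToFeatures(imageElement) {
  const net = await posenet.load();
  const pose = await net.estimateSinglePose(imageElement);
  return tf.tensor2d([pose.keypoints.flatMap(k => [k.position.x, k.position.y])]);
}
```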
Now consider case 2 for PoseNet - you want to be able to recognise a new part of the body that it currently does not detect. For that to happen you would need to retrain the original model so that it can learn to predict that new body part as part of its output.
This is much harder, as you would need to retrain the base model, which means you need access to the unfrozen model. If you don't have access to the unfrozen model, you have no choice but to attempt to recreate the PoseNet architecture entirely yourself and then train it with your own data. As you can see, this second use case is much harder and more involved.

Compare sound between source and microphone in JavaScript

I'm working with audio, but I'm a newbie in this area. I would like to match sound from the microphone against my source audio (just one sound), like the Coke ads from Shazam. Example video (0:45). However, I want to do it on a website with JavaScript. Thank you.
Building something similar to the backend of Shazam is not an easy task. We need to:
Acquire audio from the user's microphone (easy)
Compare it to the source and identify a match (hmm... how do... )
How can we perform each step?
Acquire Audio
This one is a definite no-biggie. We can use the Web Audio API for this. You can google around for good tutorials on how to use it. This link provides some good fundamental knowledge that you may want to understand when using it. A minimal capture sketch is below.
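A minimal sketch of microphone capture with the Web Audio API (standard browser APIs, nothing project-specific):

```javascript
// Capture the microphone and expose an AnalyserNode, which provides the
// windowed FFT data we will need for fingerprinting later.
async function captureMicrophone() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioCtx = new AudioContext();
  const source = audioCtx.createMediaStreamSource(stream);

  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048; // yields 1024 frequency bins
  source.connect(analyser);
  return analyser;
}
```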
Compare Samples to Audio Source File
Clearly this piece is going to be the algorithmic challenge in a project like this. There are various ways to approach it, and not enough time to describe them all here, but one feasible technique (which happens to be what Shazam actually uses, and which is also described in greater detail here) is to create, and compare against, a sort of fingerprint for smaller pieces of your source material, which you can generate using FFT analysis.
This works as follows:
Look at small sections of the sample, no more than a few seconds long, at a time (note that this is done with a sliding window, not discrete partitioning)
Calculate the Fourier Transform of the audio selection. This decomposes our selection into many signals of different frequencies. We can analyze the frequency domain of our sample to draw useful conclusions about what we are hearing.
Create a fingerprint for the selection by identifying critical values in the FFT, such as peak frequencies or magnitudes (see the sketch after this list)
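A minimal fingerprinting sketch, assuming the AnalyserNode from the capture step above; the peak-picking here is deliberately naive:

```javascript
// Keep the indices of the strongest frequency bins as a crude fingerprint.
function fingerprint(analyser, peakCount = 5) {
  const bins = new Float32Array(analyser.frequencyBinCount);
  analyser.getFloatFrequencyData(bins); // magnitude in dB per frequency bin

  return Array.from(bins.keys())
    .sort((a, b) => bins[b] - bins[a]) // strongest bins first
    .slice(0, peakCount)
    .sort((a, b) => a - b);            // normalize ordering for comparison
}
```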
If you want to be able to match multiple samples like Shazam does, you should maintain a dictionary of fingerprints, but since you only need to match one source, you can just keep them in a list. Since your keys are going to be arrays of numerical values, another possible data structure for quickly querying your dataset would be a k-d tree. I don't think Shazam uses one, but the more I think about it, the closer their system seems to an n-dimensional nearest-neighbour search, provided you keep the number of critical points consistent. For now, though, keep it simple and use a list.
Now we have a database of fingerprints primed and ready for use. We need to compare them against our microphone input now.
Sample our microphone input in small segments with a sliding window, the same way we did our sources.
For each segment, calculate the fingerprint and see if it closely matches any from storage. You can look for a partial match here, and there are lots of tweaks and optimizations you could try (a matching sketch follows this list).
This is going to be a noisy and inaccurate signal, so don't expect every segment to get a match. If lots of them are getting a match (you will have to figure out what "lots" means experimentally), then assume you have one. If there are relatively few matches, then figure you don't.
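A minimal matching sketch using the fingerprint() helper above; the overlap threshold and match ratio are placeholders you would tune experimentally:

```javascript
// A segment "matches" if enough peak bins coincide with any stored fingerprint.
function segmentMatches(micFingerprint, sourceFingerprints, minOverlap = 3) {
  return sourceFingerprints.some(src => {
    const overlap = micFingerprint.filter(bin => src.includes(bin)).length;
    return overlap >= minOverlap;
  });
}

// Over many windows, declare a hit if enough segments matched.
function isMatch(matchFlags, ratio = 0.4) {
  const hits = matchFlags.filter(Boolean).length;
  return hits / matchFlags.length >= ratio;
}
```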
Conclusions
This is not going to be a super easy project to do well. The amount of tuning and optimization required will prove to be a challenge. Some microphones are inaccurate, most environments have other sounds, and all of that will mess with your results, but it's also probably not as bad as it sounds. I mean, this is a system that from the outside seems unapproachably complex, and we just broke it down into some relatively simple steps.
Also, as a final note: you mention JavaScript several times in your post, but the language of implementation is not an important factor here. This system is complex enough that the hardest pieces of the puzzle are going to be the ones you solve on paper, so you don't need to think in terms of "how can I do X in Y" - just figure out an algorithm for X, and the Y should come naturally.

Poor performance with KO on a larger data grid

I have a decent-sized data grid (basically an interactive table) of around 600 rows.
I noticed that binding KO to this grid actually takes a substantial amount of time, especially during the data-bind. On older browsers the situation is even worse: the processor is pegged for almost a minute.
The biggest chunk of the performance block seems to come from the line that performs the data-bind. NOTE: this is the initial data-bind, so many of the replies about handling large updates do not seem applicable.
Also, the mapping plugin was used to convert JSON objects into view models on the fly. However, the line that performs the mapping itself did not seem to take much time compared to the line that data-binds.
Unfortunately, paging is out of the question due to the requirements. Are there any general tips/pointers on optimizing larger view models and KO?
I had a similar issue and posted in the Knockout Google group. Michael Best recommended trying some custom bindings.
Since you're doing edits, his knockout-table binding won't work for you, but you might try the knockout-repeat binding. It's supposed to be faster than Knockout's native foreach (at the cost of some additional complexity in your HTML). The final option is to create your own binding that builds your grid all in one go. In theory, building the entire grid in memory and inserting it into the DOM complete will be faster than modifying the DOM in discrete bits; a sketch of that idea follows.
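A minimal sketch of such a build-it-all-at-once custom binding; the binding name ('fastRows') and the cell layout are illustrative:

```javascript
// Build every <tr> off-DOM in a fragment, then attach in one operation.
ko.bindingHandlers.fastRows = {
  init: function (element, valueAccessor) {
    const rows = ko.unwrap(valueAccessor());
    const fragment = document.createDocumentFragment();

    rows.forEach(function (row) {
      const tr = document.createElement('tr');
      Object.keys(row).forEach(function (key) {
        const td = document.createElement('td');
        td.textContent = ko.unwrap(row[key]);
        tr.appendChild(td);
      });
      fragment.appendChild(tr);
    });

    element.appendChild(fragment);
    // Tell Knockout not to bind the rows we built ourselves.
    return { controlsDescendantBindings: true };
  }
};
```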
KoGrid is probably not what you want, but there are probably some hints and tips embedded in the source.
One piece of advice when using the mapping plugin is to map only the properties that you need. Mapping all properties to observables in large data sets can be a real performance killer. For example:
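A minimal sketch of selective mapping with the ko.mapping plugin; the property names are placeholders for your own grid fields:

```javascript
const mappingOptions = {
  // Only these become observables (editable columns).
  observe: ['price', 'quantity'],
  // Copied as plain, non-observable values - cheap for read-only columns.
  copy: ['id', 'name', 'description']
};

const viewModel = ko.mapping.fromJS(rawJsonRows, mappingOptions);
```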

Pathfinding: How to create path data for the pathfinding algorithm?

I realize this is not strictly a programming problem, but as SO is the best resource for programming-related problems, I decided to try it out. :)
I have a project where I need to do 3D pathfinding with JavaScript, inside a building. Dijkstra's algorithm is probably the best fit for this, as it handles irregular shapes quite nicely.
However, the problem is this:
Dijkstra requires a node structure to work. But how do you create that data? Obviously some sort of conversion needs to be done from the base data, but how do you create that base data? Going through the blueprint, getting x and y values for each possible path node, and calculating the distances by hand seems a bit excessive... and prone to swearwords...
I was even thinking of using Google SketchUp for this: drawing lines for each possible path. But then the problem is getting the path data out of it. :/
I can't be the first person to have this problem... Any ideas? Are there any ready-made tools for creating path data?
Could not find any ready-made tools, so I ended up creating the path data as lines in Google SketchUp, exporting them as Collada files, and writing my own converter for the Collada XML data.
This can all be done in code by constructing a 3D grid and removing cubes that intersect with 3D objects.
I would then layer multiple 3D grids (doubling in size each time), which gives a more general idea of reachability (constructed from the smaller grids). Then, by sheer virtue of the pathfinding algorithm, you will always find the most efficient path from A to B, because the path will automatically be directed through the largest cells (and therefore the fewest calculation steps). Note: give the larger 3D grids a slightly lower weighting so that their paths are favoured.
This can be used for many applications. For example, if you can only walk on the ground, simply remove blocks in unreachable areas. A sketch of the basic grid construction is below.
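A minimal sketch of building Dijkstra-ready nodes from a single 3D grid; isBlocked() is a placeholder for your own geometry-intersection test:

```javascript
// key "x,y,z" -> { x, y, z, neighbours: [{ node, cost }] }
function buildGridGraph(sizeX, sizeY, sizeZ, cellSize, isBlocked) {
  const nodes = new Map();

  // Create a node for every free cell.
  for (let x = 0; x < sizeX; x++)
    for (let y = 0; y < sizeY; y++)
      for (let z = 0; z < sizeZ; z++)
        if (!isBlocked(x * cellSize, y * cellSize, z * cellSize))
          nodes.set(`${x},${y},${z}`, { x, y, z, neighbours: [] });

  // Link each node to its 6 axis-aligned neighbours; edge cost is cellSize.
  for (const node of nodes.values()) {
    [[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]].forEach(([dx,dy,dz]) => {
      const key = `${node.x + dx},${node.y + dy},${node.z + dz}`;
      if (nodes.has(key)) node.neighbours.push({ node: nodes.get(key), cost: cellSize });
    });
  }
  return nodes;
}
```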

Organizing objects in html5 webgl canvas animation with three.js

I want to draw a 3D cat (with animation), which is nothing more than a bunch of 3D objects - ellipsoids, pyramids, spheres, etc.
And I have 2 questions:
1) Are there any ways to define your own complex geometric 3D objects rather than the standard Three.js objects such as Sphere, Cube...?
2) When animating the whole cat, should I define an animation function for each object? Is there any way to combine some objects together?
For question one I'd recommend reading up on parameter-driven modelling; this will allow you to make consistent complex objects without reinventing the wheel every time you create one. As for creating the custom objects: much in the way polylines are effectively a collection of lines with iterative implementations of the standard line methods (as well as object-specific methods), you'd create a JavaScript object which contains a collection of the objects necessary to create your custom shape. Here's a good WebGL cheat sheet to help you out a bit.
Question two is somewhat similar to the way we've described complex objects above: while you'll write a Cat object with render/animate functions, you'll handle the animation on a per-object basis (with the exception of whole-object movement; imagine a cat on an escalator). Once again, constraint- or parameter-driven design will be your saviour here, since the fact that two or more objects are partially superposed in no way means that the objects are explicitly linked. A sketch illustrating both ideas follows.
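A minimal Three.js sketch of both ideas: a composite "cat" built from primitives inside a Group, with per-part animation; the proportions and part list are placeholders:

```javascript
import * as THREE from 'three';

function makeCat() {
  const cat = new THREE.Group();
  const fur = new THREE.MeshStandardMaterial({ color: 0x996633 });

  // Ellipsoid body via non-uniform scaling of a sphere.
  const body = new THREE.Mesh(new THREE.SphereGeometry(1, 32, 32), fur);
  body.scale.set(1.6, 1, 1);

  const head = new THREE.Mesh(new THREE.SphereGeometry(0.5, 32, 32), fur);
  head.position.set(1.8, 0.5, 0);

  const tail = new THREE.Mesh(new THREE.ConeGeometry(0.15, 1.2, 16), fur);
  tail.position.set(-1.8, 0.5, 0);

  cat.add(body, head, tail);

  // Per-part animation: the tail swings, while the whole Group can still be
  // moved as one object (the "cat on an escalator" case).
  cat.userData.animate = (t) => {
    tail.rotation.z = Math.PI / 4 + Math.sin(t * 3) * 0.3;
  };
  return cat;
}
```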
As an end note, I'd recommend looking into ClojureScript. It might not be necessary for this type of work, but Lisp is very popular in the CAD scripting world, and you'd definitely be doing yourself a favour in the long run by at least familiarising yourself with its coding conventions. A lot of the questions you're going to have whilst working on this project will be answered in a variety of programming languages, but you'll likely find that many of the answers written by folk working on both sides of the fence (CAD/programming) are written in Lisp. Here's a final general CAD forum that's a great resource for all things CAD.
