I am working on a home automation project with two bulbs. Please refer to the following state chart I created using xstate. I also have a gist so you can see it in the visualizer as well.
https://xstate.js.org/viz/?gist=119995cdff639c5b99df55278a32cf57
You can see that I need to be in the autoInactive state to turn the bulbs on and off; this works fine. The problem is that in the autoActive state I still want the bulbs to turn on and off, but driven by a motion sensor.
So here is what I'm trying to do:
autoInactive - user can use UI to turn bulbs on and off.
autoActive - user cannot operate bulbs, but a motion sensor turns them on and off.
How can I achieve this using xstate?
I know you've said you already solved the problem, but I was just playing around to see how I would solve this - so maybe it helps you or someone else.
I basically approached the problem slightly differently (and that might not work in your context).
I think the condition of the lights (on or off) is actually quite independent of the mode you are in (automatic or manual). So I ended up with two parallel states: one purely controlling the state of the lights, and one management-interface state which allows for managing the lights differently based on the mode you are in:
https://xstate.js.org/viz/?gist=4b815be2cc42e6e51b15ba39c99d53dc
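The machine in that gist boils down to something like the sketch below (state and event names here are my own, not copied from the gist). The plain object is the config you would hand to xstate's Machine()/createMachine():

```javascript
// Illustrative sketch of the two parallel regions (names are invented,
// not taken from the gist). Pass this config to xstate's createMachine().
const lightsMachine = {
  id: 'home',
  type: 'parallel', // both regions are active simultaneously
  states: {
    // Region 1: the actual condition of the bulbs.
    bulbs: {
      initial: 'off',
      states: {
        off: { on: { TURN_ON: 'on' } },
        on: { on: { TURN_OFF: 'off' } }
      }
    },
    // Region 2: which interface is allowed to drive those events.
    mode: {
      initial: 'manual',
      states: {
        manual: { on: { ACTIVATE_AUTO: 'auto' } },  // UI sends TURN_ON/TURN_OFF
        auto: { on: { DEACTIVATE_AUTO: 'manual' } } // motion sensor sends them
      }
    }
  }
};
```

In manual mode the UI dispatches TURN_ON/TURN_OFF; in auto mode only the motion-sensor service does. If the machine itself should enforce who may operate the bulbs, xstate's guarded transitions (cond, or an in-state guard on the bulb transitions) can reject UI events while the mode region is in auto.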
Last year I tried to learn a bit of React for the project you can find here. I apologize for my rather vague / imprecise description below, but I'm by no means versed in this.
Basically, there is a single <svg> tag, which will contain a number of paths etc. as created by the user. The problem I have is that things become very slow the more paths are present. To my current understanding, this is due to the fact that the entire SVG DOM gets updated repeatedly upon user interactions that involve dragging the mouse or using the mouse wheel.
This holds true, particularly, for two user interactions:
a) Panning - all paths are moved at the same time. I think one might circumvent this by taking a snapshot image first and moving that around instead. However, that's not a solution for the other user interaction, which is:
b) Expanding/collapsing paths - here, the coordinates of some points of every path are modified. That is, every path must be modified in a different way, but all of them must be modified at once, and this must happen repeatedly, because it's a user interaction controlled with the mouse wheel where changes happen gradually and the user needs immediate visual feedback as they happen.
Particularly for b), I see no alternative that would involve a single transformation or something.
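For a), a common workaround (a sketch, assuming all paths can share one parent <g> element, which may not fit your markup) is to pan by rewriting a single transform on that wrapper group instead of touching every path:

```javascript
// Sketch: pan by rewriting one transform on a wrapper <g>, so the
// individual <path> elements are never modified. The browser applies
// the translation to the whole group instead of updating each path.
// Pure helper: compute the new transform string for the group.
function panTransform(originX, originY, dx, dy) {
  return `translate(${originX + dx}, ${originY + dy})`;
}

// In the browser you would use it roughly like this (ids illustrative):
// const group = document.getElementById('viewport'); // <g> wrapping all paths
// group.setAttribute('transform',
//   panTransform(0, 0, event.movementX, event.movementY));
```

This only helps with a); for b), where each path changes differently, no single transform covers it, as you note.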
After extensive research last year, I came to the conclusion that choosing SVG to display and modify many things dynamically on screen was the wrong decision in the first place, but I realized it too late, so I gave up and have never touched the project since. I'm fairly certain there is no way to deal with the low performance that builds on what I already have, and I have no intention of starting the project from scratch with a completely different approach. Also, the reason I chose SVG was that it's easy to manipulate.
In summary, I'd basically like to get confirmation that there is no feasible way to rescue this project.
I aim to create this gameplay pattern with one main lobby scene and other game scenes opened through window.open.
But even when I try this with an empty hello-world project that opens multiple windows, I hit a large drop in FPS.
So basically I need to know whether this setup is possible in Cocos Creator: can four windows render simultaneously without the FPS taking a hit?
Any guidelines that could help achieve this would be appreciated.
The game in the reference picture is, I think, made with Angular; maybe that's why it is so smooth even with ten windows open.
My team posted an issue on the cocos2d-x forum but got no help: https://discuss.cocos2d-x.org/t/help-regarding-multi-window-game-in-cocos-creator/42688
After a little digging, and based on your answer in the comment, I think you can try a different approach: a "split-screen game". I believe that when a new window is opened it loads the same assets again, and that is what drops the FPS.
I don't know what the best practice for a "split-screen game" is, but I have one suggestion on how to implement it:
Create a prefab template of the main screen.
Create a different layer (node) for each screen.
Add the prefab to each layer, for example:
layer with 1 screen - 1 prefab
layer with 2 screens - 2 prefabs (duplicate the prefab)
etc.
If you move between screens (layers), don't forget to set active = false on the previous node and destroy all of its children.
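The switching step above can be sketched as a small bookkeeping class. It is deliberately framework-agnostic (the names are mine, not the Cocos Creator API): in a real project the hide callback would set node.active = false and call node.destroyAllChildren(), and the show callback would instantiate the prefab(s) into the layer with cc.instantiate.

```javascript
// Sketch of the layer-switching bookkeeping only, kept free of engine
// calls so it can be tested anywhere. The two callbacks are where the
// real Cocos Creator work would happen:
//   hide(name) -> layer.active = false; layer.destroyAllChildren();
//   show(name) -> add cc.instantiate(prefab) per screen; layer.active = true;
class ScreenSwitcher {
  constructor(layerNames, hide, show) {
    this.layers = layerNames; // known layer names
    this.hide = hide;         // called with the layer being left
    this.show = show;         // called with the layer being entered
    this.current = null;      // no layer active initially
  }
  switchTo(name) {
    if (!this.layers.includes(name)) {
      throw new Error(`unknown layer: ${name}`);
    }
    if (this.current !== null) this.hide(this.current); // free the old screen
    this.show(name);
    this.current = name;
  }
}
```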
Also, I think your draw calls are a bit high even for a one-window app; maybe check that as well.
I hope this helps.
I am currently working on my thesis project, where I am building a JavaScript/Node library that makes it easier for developers to merge the browser canvases of multiple devices together, so that objects can live within one large canvas spanning all the devices. Basically, the idea is that you'll be able to put multiple phones/tablets next to each other in different positions relative to each other, and use all their browsers as a single canvas.
I will also create another library extension with a bunch of restrictions, and hold a hackathon to see what developers create with this tool and within these restrictions.
Anyway, I have run into a problem. To make the tool more versatile and flexible, I ideally want every device to be able to detect where in space the other devices are relative to itself. But I have run out of ideas about how to solve this. Do you have any ideas? Do you think it is possible? Or will I have to come up with a manual solution? Can any other technology help? Bluetooth?
I have looked at projects like:
Google Chrome Racer (https://www.chrome.com/racer)
Coca-Cola Penguin Curling (http://cargocollective.com/rafaeldante/Coca-Cola-Penguin-Curling)
How do you think these projects solved the issue of positioning order? Which device is where in the order?
Sadly, Chrome Racer doesn't seem to be running anymore. But as far as I remember from playing it a while ago, you did not have to enter the position of your device manually. Analyzing this clip (https://youtu.be/17P67Uz0kcw?t=4m46s), it looks like the application understands where in the line each specific device is, right? Any ideas on this?
Just a random musing on possible paths to a solution.
They all have cameras that face up. If any two can capture an image that overlaps, you have a way of orienting them relative to each other. If every device had a view that overlapped with at least one other, you could get a reasonable approximation of the relative orientations and positions of them all. The more devices, the better the result.
You can listen to the ambient sound environment and use the arrival time of sounds as another relative positional clue. Devices can also emit both sound and light; if done in a predetermined order, the sound can yield relative position. The display, if flashed on and off in specific patterns, could also be detected (not directly, but as a subtle ambient reflection).
Ambient light levels are also a source of relative position and orientation.
If only two devices tried these methods, they would fail most of the time. But each extra device adds its relative information against all the others, growing the data rapidly and making a solution easier to find. Given enough devices, determining position and orientation may be possible via passive sensing alone.
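As a rough illustration of the arrival-time idea: if an emitting device timestamps the sound and a receiving device timestamps its arrival, the gap times the speed of sound gives their separation. A minimal sketch, assuming the devices already share a synchronized clock (which is the hard part in practice):

```javascript
// Speed of sound in air at roughly 20°C, in metres per second.
const SPEED_OF_SOUND = 343;

// Distance between emitter and receiver, from emit/arrival timestamps
// in seconds. Assumes both devices share a synchronized clock; without
// that, you need round trips or time-difference-of-arrival across
// several microphones instead.
function distanceFromArrival(emitTime, arrivalTime) {
  const dt = arrivalTime - emitTime;
  if (dt < 0) throw new Error('arrival before emission');
  return dt * SPEED_OF_SOUND; // metres
}
```

With pairwise distances from several device pairs, relative positions can then be estimated by trilateration.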
I would like to ask if there are any examples related to Polymer's animated pages ( http://www.polymer-project.org/docs/elements/core-elements.html#core-animated-pages ) and how we can build a similar demo using the resources provided in the Angular/material repo (https://github.com/angular/material).
I would like to achieve http://www.polymer-project.org/components/core-animated-pages/demos/music.html but I don't want to use Polymer since I would like to use Angular.
Can you please provide me some directions in order to start?
What they typically do with Polymer is have two connected elements, of which only one is shown; when you perform some action, the other is shown (coming from display: none) and animates from certain dimensions to its final form. Sometimes elements also shift, but that depends on whether the content is able to move to its new position or not.
You have to work with CSS transition, transform, and display, and sometimes even custom animations. Mostly you are changing multiple divs to their final form. I think the hardest part would be animating colors (from white to pink, or from yellow to green, for example), as those are the most difficult to do well performance-wise.
If you look at the example you linked (the final link), you see there's a list of items with a detail div; once you click an item, you show the detail and transform its contents to their final dimensions.
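One common way to implement that item-to-detail transform is the FLIP pattern: measure the clicked item's box and the detail view's final box, apply the inverse transform so the detail starts where the item was, then animate back to identity. A sketch of just the geometry (all names invented here):

```javascript
// FLIP-style helper: given the bounding box of the clicked list item
// (first) and of the expanded detail view (last), compute the transform
// that makes the detail initially overlap the item. Animating this
// transform back to identity produces the grow effect, and transforms
// are far cheaper to animate than width/height.
function invertedTransform(first, last) {
  return {
    translateX: first.left - last.left,
    translateY: first.top - last.top,
    scaleX: first.width / last.width,
    scaleY: first.height / last.height
  };
}

// In the browser (illustrative): boxes come from getBoundingClientRect(),
// then:
// detail.style.transform =
//   `translate(${t.translateX}px, ${t.translateY}px) scale(${t.scaleX}, ${t.scaleY})`;
// requestAnimationFrame(() => {
//   detail.style.transition = 'transform 300ms';
//   detail.style.transform = '';
// });
```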
Just know that these things are pretty hard if you aren't deeply into Angular or HTML/CSS/JavaScript.
Polymer for the web is very much a work in progress, and I wouldn't be surprised if it took a few months to get similar results for both native and web.
You can take examples from things like this: https://medium.com/tictail-makers/giving-animations-life-8b20165224c5 or https://www.polymer-project.org/apps/topeka/ or http://codepen.io/collection/amheq/ . And don't forget to speed things up by using a Bootstrap theme like http://fezvrasta.github.io/bootstrap-material-design/ or something similar.
I've been struggling with the same problem, as there isn't much to go on right now. You mentioned the Angular project, but that will take time. If you want to do it now, you have to do quite some work (if you do, share it with us), but you might be better off postponing this until most of the bugs and problems have been solved.
That's what I'm doing now.
If you access your MobileMe account online with Safari, you can select an icon and log in directly to the selected service; a great feature, by the way.
But if you access the same page using another browser like Firefox or Chrome, you will see a gorgeous login page with a big, no, huge cloud in the middle (the MobileMe logo) and interesting light balls coming out of it.
Here's the link:
https://auth.me.com/authenticate?service=mail&ssoNamespace=appleid&formID=loginForm&returnURL=aHR0cHM6Ly9tZS5jb20vbWFpbC8=
And the greatest thing is that you can mouse over those little light balls and they follow your mouse movement.
It's just beautiful, and I have never seen anything like it in JavaScript. I couldn't work out how they did it from their code; their JavaScript is minified, so I couldn't read it, but in the markup those shiny lights are just a bunch of canvas tags.
Does anyone have an idea of how to make something like that? It's probably way beyond my JavaScript skills, but it would be great to add such an effect to one of my projects.
Thanks in advance for all your suggestions ;)
That takes a lot of skill. I believe it's achievable with Processing.js:
http://processingjs.org/
Take a look at this quote:
So, how is this eye candy accomplished? Through over 6000 lines of (unminified) JS. MobileMe usually uses SproutCore for its applications, but after looking through the source code, I didn't find a single reference to it. There did appear to be some resemblance of a library being used in the login page, however, but I think it is pretty custom. There appeared to be a class for each of the visual components on the screen, at least one if not two separate animation libraries (one 2D and one 3D), a particle rendering library, and libraries for dealing with canvas drawing and DOM manipulation.
So it looks like it was custom made. You can read more here: http://badassjs.com/post/1649735994/the-new-mobileme-login-page-has-some-badass-js
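For the follow-the-mouse behaviour specifically, a toy version needs only a simple easing step toward a target on every frame. A sketch of the motion part only (the glow rendering on a <canvas> is omitted, and all names are mine):

```javascript
// Minimal "follow the mouse" easing step, the core of such particle
// effects: each frame, move a fraction of the remaining distance
// toward the target, which gives a smooth trailing motion.
function stepToward(particle, target, easing = 0.1) {
  return {
    x: particle.x + (target.x - particle.x) * easing,
    y: particle.y + (target.y - particle.y) * easing
  };
}

// Illustrative browser loop:
// let p = { x: 0, y: 0 }, mouse = { x: 0, y: 0 };
// window.addEventListener('mousemove', e => {
//   mouse = { x: e.clientX, y: e.clientY };
// });
// (function frame() {
//   p = stepToward(p, mouse);
//   // ...draw a glowing circle at (p.x, p.y) on the canvas...
//   requestAnimationFrame(frame);
// })();
```

With many particles, each using a slightly different easing factor, you get the loose "swarm following the cursor" look.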
I hope this helps.