I am trying to build a React application where users can create card components and connect them, and can also pan and zoom the canvas and drag those cards around, like the design canvas in Figma or Adobe XD. Can anyone suggest a popular and effective solution or package for this? I have been researching for a few days now and could not find a proper solution.
You can use the browser's drag-and-drop functionality: handle the callbacks fired when drag-and-drop events happen (onDragStart, onDragEnd, onDragEnter, ...) and read the data from the event parameter. When something is dropped, update the state accordingly (add/remove the item from a list) and render the items with a map function. Here is a sample: https://codesandbox.io/s/reactdrag-and-drop-iq89y?file=/src/components/TaskList.jsx
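A minimal sketch of that pattern (the component name, list shape, and card labels are made up for illustration; the real logic is just the state update in the drop handler):

```jsx
import React, { useState } from "react";

// Minimal HTML5 drag-and-drop sketch: cards can be dragged between two lists.
export default function Board() {
  const [lists, setLists] = useState({ todo: ["Card A", "Card B"], done: [] });

  const handleDragStart = (e, card, from) => {
    // Stash the dragged card and its source list on the drag event
    e.dataTransfer.setData("text/plain", JSON.stringify({ card, from }));
  };

  const handleDrop = (e, to) => {
    e.preventDefault();
    const { card, from } = JSON.parse(e.dataTransfer.getData("text/plain"));
    if (from === to) return;
    // Update state: remove the card from the source list, append it to the target
    setLists((prev) => ({
      ...prev,
      [from]: prev[from].filter((c) => c !== card),
      [to]: [...prev[to], card],
    }));
  };

  return (
    <div style={{ display: "flex", gap: 16 }}>
      {Object.keys(lists).map((listId) => (
        <div
          key={listId}
          onDragOver={(e) => e.preventDefault()} // required to allow dropping
          onDrop={(e) => handleDrop(e, listId)}
          style={{ minWidth: 150, minHeight: 200, border: "1px solid #ccc" }}
        >
          <h4>{listId}</h4>
          {lists[listId].map((card) => (
            <div
              key={card}
              draggable
              onDragStart={(e) => handleDragStart(e, card, listId)}
            >
              {card}
            </div>
          ))}
        </div>
      ))}
    </div>
  );
}
```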
I am trying to create a dashboard where users can drag and drop widgets to any position they'd like. I've seen other similar examples, but they all seem to have predefined elements.
In my case, the user can create and remove elements on the dashboard and move them to any point on the board.
My question is: what would be the best way to create a dashboard like this, one that supports dragging and dropping an element anywhere on it? Also, how can I save this layout information?
Thanks in advance.
Are you looking for something like react-grid-layout?
In this demo you can see how the widgets' state could be encoded. This could be saved and retrieved upon page load.
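A rough sketch of how that could look with react-grid-layout, assuming the layout is persisted to localStorage (the widget ids and storage key are placeholders; check the library's docs for the exact props of the version you use):

```jsx
import React, { useState } from "react";
import GridLayout from "react-grid-layout";

// Sketch: persist the dashboard layout whenever widgets are moved or resized.
export default function Dashboard() {
  const [layout, setLayout] = useState(
    () =>
      JSON.parse(localStorage.getItem("dashboard-layout") || "null") || [
        { i: "chart", x: 0, y: 0, w: 4, h: 2 },
        { i: "table", x: 4, y: 0, w: 4, h: 2 },
      ]
  );

  const handleLayoutChange = (next) => {
    setLayout(next);
    // Save positions/sizes so they can be restored on page load
    localStorage.setItem("dashboard-layout", JSON.stringify(next));
  };

  return (
    <GridLayout
      layout={layout}
      cols={12}
      rowHeight={30}
      width={1200}
      onLayoutChange={handleLayoutChange}
    >
      <div key="chart">Chart widget</div>
      <div key="table">Table widget</div>
    </GridLayout>
  );
}
```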
You can use the react-dnd library. It is easy to implement and customise. Here is the repo link:
https://github.com/react-dnd/react-dnd/
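A minimal sketch of how react-dnd is typically wired up (the "CARD" type string, component names, and handlers are illustrative; the repo's examples cover the exact API of the version you install):

```jsx
import React from "react";
import { DndProvider, useDrag, useDrop } from "react-dnd";
import { HTML5Backend } from "react-dnd-html5-backend";

// Draggable item: the "CARD" type string is arbitrary, chosen for illustration.
function Card({ id, label }) {
  const [{ isDragging }, dragRef] = useDrag(() => ({
    type: "CARD",
    item: { id },
    collect: (monitor) => ({ isDragging: monitor.isDragging() }),
  }));
  return (
    <div ref={dragRef} style={{ opacity: isDragging ? 0.5 : 1 }}>
      {label}
    </div>
  );
}

// Drop target: receives the dragged item's id when it is dropped.
function DropZone({ onCardDropped }) {
  const [, dropRef] = useDrop(() => ({
    accept: "CARD",
    drop: (item) => onCardDropped(item.id),
  }));
  return <div ref={dropRef} style={{ minHeight: 200 }}>Drop cards here</div>;
}

export default function App() {
  return (
    <DndProvider backend={HTML5Backend}>
      <Card id="1" label="Card 1" />
      <DropZone onCardDropped={(id) => console.log("dropped", id)} />
    </DndProvider>
  );
}
```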
I would like to disable only the zoom interaction in LightningChartJs, but still tap into drag and other events.
But when I disable it (setMouseInteractionRectangleZoom), other interactions like drag don't run either.
Thanks in advance.
There is no issue with the library; it was a flaw in my implementation.
I was dynamically enabling and disabling zoom and other functionality with state variables, and the chart-creation function was inside a useCallback hook, so the chart stopped working: whenever those state variables changed, useCallback recreated the function and the chart was rebuilt.
Apologies to the LightningChartJs team.
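For anyone hitting the same thing, a rough sketch of the pattern that avoids it (assuming the `@arction/lcjs` package: create the chart once in an effect with an empty dependency array, and toggle interactions such as setMouseInteractionRectangleZoom in a separate effect instead of recreating the chart on every state change):

```jsx
import React, { useEffect, useRef, useState } from "react";
import { lightningChart } from "@arction/lcjs";

export default function Chart() {
  const [zoomEnabled, setZoomEnabled] = useState(false);
  const chartRef = useRef(null);

  // Create the chart exactly once; chart creation is not tied to state variables.
  useEffect(() => {
    const chart = lightningChart().ChartXY({ container: "chart-container" });
    chartRef.current = chart;
    return () => chart.dispose();
  }, []);

  // Toggle interactions on the existing chart instead of recreating it.
  useEffect(() => {
    chartRef.current?.setMouseInteractionRectangleZoom(zoomEnabled);
  }, [zoomEnabled]);

  return (
    <div>
      <button onClick={() => setZoomEnabled((z) => !z)}>Toggle zoom</button>
      <div id="chart-container" style={{ width: 600, height: 400 }} />
    </div>
  );
}
```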
I'm trying to understand how to integrate D3 and React. Specifically, I'm trying to understand how using D3 to render visualizations impacts React. As explained in this excellent question and its reply:
[...] there is currently no great way to work with React and D3 [...] this is because in the React world you don't do direct DOM manipulation, but in the d3 world that's the only thing you do.
The reply goes on to say
It seems to me that the current consensus for Force Layouts and the like is to simply drop out of React for those components and let d3 do its thing. This isn't ideal but it's way more performant.
What is the impact on React from letting D3 take care of rendering? Will it only impact the performance of the component using D3, or other components as well? Will direct manipulation of the DOM using D3 screw with React's virtual DOM in some way for example? I'm basically trying to get an idea of the price you have to pay for using D3.
I've worked on a project (private, unfortunately) where I used D3 to build a UML editor. Everything used SVG manipulation to draw an SVG representing the UML diagram.
The editor UI logic was implemented in a single React element (UMLEditor), using TypeScript and D3. You could pass the editor props to apply changes to the UML, and callbacks to get the data back. For instance, you can drag and drop a UML class (at 60 fps), but the UI only triggers two events (drag and drop) to React callbacks.
The key is to keep the logic and events separate from the UI manipulation, and to have a small number of big React elements rather than many small ones.
It could manage a UML diagram with around 4K classes at 30 fps.
Edit: Let's define a small application.
You have small React components with their children, like the root App element, a navigation bar, a viewport, etc.
Every element but the UMLEditor has a small impact on performance. The UMLEditor element is a complex element without any React children. Every UI element inside it is rendered using D3. The real DOM of the UMLEditor contains a complex SVG element managed entirely by D3.
To make this element interact with React, we pass callbacks as props for events like drag, drop, and creating a new UML class, as well as one JavaScript class with all the D3 render logic.
We don't pass the entire UML configuration as a prop, as that would have a negative impact on performance. Instead, when we need it for exporting purposes, the JavaScript class passed as a prop can return the whole UML configuration through a method.
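A stripped-down sketch of that split (the DiagramRenderer class, prop names, and node shape are invented for illustration): React owns the component boundary and the callbacks, while D3 owns everything inside the SVG.

```jsx
import React, { useEffect, useRef } from "react";
import * as d3 from "d3";

// Hypothetical renderer class: owns all direct DOM/SVG manipulation via D3.
class DiagramRenderer {
  mount(container, { onNodeDragEnd }) {
    this.svg = d3.select(container).append("svg").attr("width", 800).attr("height", 600);
    this.onNodeDragEnd = onNodeDragEnd;
  }
  render(nodes) {
    // D3 handles enter/update/exit and drag behaviour at full frame rate;
    // React is only notified through the callbacks passed in on mount.
    const sel = this.svg.selectAll("circle").data(nodes, (d) => d.id);
    sel
      .enter()
      .append("circle")
      .attr("r", 20)
      .merge(sel)
      .attr("cx", (d) => d.x)
      .attr("cy", (d) => d.y)
      .call(
        d3.drag().on("end", (event, d) => this.onNodeDragEnd(d.id, event.x, event.y))
      );
    sel.exit().remove();
  }
  dispose() {
    this.svg.remove();
  }
}

// The single React element: no React children inside, D3 owns the subtree.
export function DiagramEditor({ nodes, onNodeDragEnd }) {
  const containerRef = useRef(null);
  const rendererRef = useRef(null);

  useEffect(() => {
    rendererRef.current = new DiagramRenderer();
    rendererRef.current.mount(containerRef.current, { onNodeDragEnd });
    return () => rendererRef.current.dispose();
  }, []);

  useEffect(() => {
    rendererRef.current.render(nodes);
  }, [nodes]);

  return <div ref={containerRef} />;
}
```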
I have a fairly simple RN app with a layout similar to this:
As shown in my image, I have four Touchable columns equally splitting the screen.
Everything works fine. But here is what I'm trying to do now: wherever a user touches the screen, I need to append an element where the touch interaction occurred, on top of everything.
First, I naively added a new TouchableWithoutFeedback on top of my other views, but this simply prevented onPress from firing on those other views.
Then I began to realize that this was not as simple as I thought and took a look at the Gesture Responder System. I think I got the concept, but I haven't achieved anything with it yet...
Is it possible to let onPress events bubble up through every layer?
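For reference, a rough, untested sketch of the kind of capture-phase setup the Gesture Responder System allows: a wrapper observes the touch in onStartShouldSetResponderCapture but returns false, so the columns still receive onPress (the column labels, marker styling, and the assumption that the wrapper fills the screen are all just for illustration):

```jsx
import React, { useState } from "react";
import { View, TouchableOpacity, Text, StyleSheet } from "react-native";

export default function Screen() {
  const [marker, setMarker] = useState(null);

  return (
    <View
      style={styles.container}
      onStartShouldSetResponderCapture={(e) => {
        // Record where the touch started, but do NOT claim the responder,
        // so the Touchable columns keep handling the press as before.
        const { pageX, pageY } = e.nativeEvent;
        setMarker({ x: pageX, y: pageY });
        return false;
      }}
    >
      {["A", "B", "C", "D"].map((col) => (
        <TouchableOpacity key={col} style={styles.column} onPress={() => console.log(col)}>
          <Text>{col}</Text>
        </TouchableOpacity>
      ))}
      {marker && (
        // Rendered on top of everything; pointerEvents="none" keeps it from
        // blocking later touches. Assumes the container fills the screen.
        <View
          pointerEvents="none"
          style={[styles.marker, { left: marker.x - 10, top: marker.y - 10 }]}
        />
      )}
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, flexDirection: "row" },
  column: { flex: 1, alignItems: "center", justifyContent: "center" },
  marker: {
    position: "absolute",
    width: 20,
    height: 20,
    borderRadius: 10,
    backgroundColor: "rgba(255,0,0,0.5)",
  },
});
```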
Sencha Touch seems like an amazing way to develop mobile apps. I've seen posts by people incorporating jQuery and D3.
At the same time, the posts describing how to customize controls seem fairly narrow.
Adding a picture of a kitten next to a slider and labeling the slider seems kind of tame compared to what iOS can do in terms of custom controls, at least judging from the examples available. Most blog posts imply you can extend the control objects in Sencha or the CSS file.
These posts are not quite what I'm looking for - that's my problem. I can't see any examples of anyone changing the default controls in Sencha Touch, but they make it sound as if it might be possible to do anything.
This is my question:
Is Sencha Touch able to build an iOS or Android app incorporating any JavaScript library or HTML5? Are there any limitations here?
To give an example, I am trying to implement a custom slider where a touch along a continuous line, or a circle like this color selector, enters new values. Further, if you incorporate a library like Protovis or D3 (or Raphaël charts), can Sencha display anything the graph canvas element would otherwise display? Will it take touch input and interact with the graph libraries the way the HTML5 graph does?
The post you mentioned is not about customizing controls; it's about displaying a list from a bound store. Instead of using just Ext.XTemplate (the system behind Ext.view.View) to generate HTML, it uses a component view to generate Ext.Components instead.
It's hard to tell what you're asking; what in particular are you trying to do?
To address some of the questions you added:
Charts in Sencha are implemented using Raphael, which uses SVG; therefore, all the elements in the chart can be interacted with using HTML events.
Everything that Sencha generates is valid HTML, and you can listen to HTML events, but components usually abstract the lower-level events into something that is easier to consume (for example, a data view abstracts the click so that it passes the record being clicked along with the event).
Therefore, the answer to the question is: yes, Sencha can co-exist with regular HTML. If you want the full benefit of the framework, you should always create an Ext.Component so that your components can be easily used within the framework's layout containers.
It's very easy to misuse Ext when trying to write regular HTML and still place it within the layout rendering pipeline. Ext.Component has a built-in way of creating HTML out of templates; see http://docs.sencha.com/touch/2-0/#!/api/Ext.Component-cfg-data and http://docs.sencha.com/touch/2-0/#!/api/Ext.Component-cfg-tpl
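As a hedged sketch of that tpl/data approach in Sencha Touch 2 (the class name, xtype, CSS classes, and the tap-handling maths are all made up for illustration, and the exact Element helpers may differ slightly between versions):

```javascript
// Sketch: a custom component that renders its own HTML from a template and
// reacts to touch events, so it can live inside any layout container like a
// built-in control.
Ext.define('MyApp.view.ColorSlider', {
    extend: 'Ext.Component',
    xtype: 'colorslider',

    config: {
        cls: 'myapp-color-slider',
        // HTML generated from the tpl/data configs documented above
        tpl: '<div class="track"><div class="handle" style="left: {value}%"></div></div>',
        data: { value: 50 }
    },

    initialize: function () {
        this.callParent(arguments);
        // Listen to the low-level tap event on the component's element
        this.element.on('tap', this.onTap, this);
    },

    onTap: function (e) {
        // Translate the touch position into a 0-100 value and re-render
        var left = this.element.getX(),
            width = this.element.getWidth(),
            value = Math.round(((e.pageX - left) / width) * 100);
        this.setData({ value: value });
        this.fireEvent('valuechange', this, value);
    }
});
```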