I have a C++ algorithm which creates a 2D table of float values, and based on those values I want to create SVG graphics in HTML (or JavaScript). My question is: can I somehow do it in one piece of code (create the 2D array in C++, keep it in memory, and generate the graphics from it in HTML), or would it be better (or only possible) to save the 2D array to a .txt file and then write separate HTML code that opens the file, reads the values, and creates the SVG graphics? (If that is possible at all; I'm totally green in HTML/JavaScript.)
I hope you can give me some advice. :)
JavaScript in the browser has no access to your program's memory; you could probably only connect to it over HTTP requests.
However, there are engines like Rhino which allow Java classes to be exposed as JavaScript objects and vice versa; I have yet to hear of a C++ equivalent, though.
Node.js does have access to the filesystem, so any files you save will be accessible there, and Node also offers a native addon API for C++.
You could use Emscripten to compile C++ into JS, but it is hard.
You can write the table out as a JSON file, which is easy to interpret from JavaScript.
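For example, if the C++ program dumps the table to a file such as table.json (the filename and the array-of-rows format here are just assumptions for the sketch), the browser side could look roughly like this:

// Minimal sketch: load a 2D float table from "table.json" (assumed to be an
// array of rows, each row an array of numbers) and render it as an SVG
// heat map made of <rect> elements.
fetch('table.json')
  .then(response => response.json())
  .then(table => {
    const cell = 10; // pixel size of one cell, an arbitrary choice
    const svgNS = 'http://www.w3.org/2000/svg';
    const svg = document.createElementNS(svgNS, 'svg');
    svg.setAttribute('width', table[0].length * cell);
    svg.setAttribute('height', table.length * cell);

    const flat = table.flat();
    const min = Math.min(...flat);
    const max = Math.max(...flat);

    table.forEach((row, y) => {
      row.forEach((value, x) => {
        const rect = document.createElementNS(svgNS, 'rect');
        rect.setAttribute('x', x * cell);
        rect.setAttribute('y', y * cell);
        rect.setAttribute('width', cell);
        rect.setAttribute('height', cell);
        // Map the value to a grey level between 0 and 255.
        const level = Math.round(255 * (value - min) / ((max - min) || 1));
        rect.setAttribute('fill', `rgb(${level},${level},${level})`);
        svg.appendChild(rect);
      });
    });
    document.body.appendChild(svg);
  });

Note that fetch() is blocked on file:// URLs in most browsers, so for a purely offline page it may be simpler to have the C++ program write a table.js file containing var table = [...]; and include it with a script tag.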
Or, my recommendation: use an SVG library in C++ and skip HTML. See "Render a vector graphic (.svg) in C++", or search Google.
If you are more specific about the use case you will get better answers:
How will the application run: as part of a web site, or locally?
Why do you need HTML?
Who will use this: do you want to convert some data to SVG for a small project, or will other users use your program over a longer time?
I am trying to use Python to understand SVG drawings. I would like Python to behave similarly to JavaScript and get information from the SVG. I understand that there can be two types of information in an SVG:
XML-based information, such as getElementById and getElementsByTagName.
Structural information, i.e. positional information that takes transformations into consideration too, such as elementFromPoint and getBBox.
I have searched around and found Python libraries such as lxml for XML processing of SVG. I also found libraries such as svgpathtools and svg.path, but as I understand it, these deal only with SVG path elements.
So my question is:
Are there any good libraries that support processing SVG in Python (similar to JavaScript)?
I don't think this is feasible. I was directing you to an answer about rendering SVG with Python - but after that all you have got is pixels.
Extracting positional information from SVG data at arbitrary points between transformations is likely something only implemented in browsers themselves. You likely have two options: use a headless browser with selenium/splinter to load your SVG data in a real browser and run JavaScript statements in there, or make your Python code run in Brython and run everything inside the browser. From Brython you should be able to use the JavaScript calls as methods of the SVG object as it is exposed to Brython itself.
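For reference, the JavaScript you would run inside the browser (whether driven by selenium or called from Brython) to get the positional information looks roughly like this; the element id "myPath" is made up for the sketch:

// Sketch of the browser-side calls that expose positional information.
var el = document.getElementById('myPath');   // hypothetical element id
var box = el.getBBox();                       // bounding box before transforms

// Map a point from the element's user space to screen coordinates,
// i.e. with all ancestor transforms applied.
var svg = el.ownerSVGElement;
var pt = svg.createSVGPoint();
pt.x = box.x;
pt.y = box.y;
var onScreen = pt.matrixTransform(el.getScreenCTM());
console.log(box.width, box.height, onScreen.x, onScreen.y);

// Hit testing: which element sits at a given screen coordinate?
console.log(document.elementFromPoint(onScreen.x + 1, onScreen.y + 1));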
Try Pygal. It is a library for creating interactive SVG charts.
Thank you for all the answers and help.
After reading all the comments and doing even more web searching, I agree that there is much more support for dealing with SVG in JavaScript-capable web browsers. So I decided to use JS, and to use Python only when there is no other choice. I will probably still use Python libraries such as svgpathtools (https://pypi.python.org/pypi/svgpathtools/1.0.1). But as of now I have handed over all the SVG feature finding to JavaScript.
Start your search by visiting www.pypi.org and searching for "svg". Review what exists and see what suits your needs.
After working with DCMTK in C++, I'd like to use it from JavaScript, but I think it's not as easy as it is with C++.
Is there any way to do that?
Thank you in advance.
I agree with John; I would rather advise seeking a JavaScript DICOM toolkit instead of establishing an interface between DCMTK and JavaScript.
To answer your question, however:
First, make a basic decision: do you want to use the toolkit's executables through a kind of scripting layer invoked from JS functions, or to write C++ CGI functions based on the DCMTK libraries? I think it is obvious that the latter approach gives you far more flexibility in designing the DICOM functionality. In the following, I am going to mention executables which can accomplish particular tasks. In case you want to go the CGI way, the source code of each executable is a good starting point to learn how to use the library.
To read the DICOM header information, have a look at dcmdump. It can convert the binary DICOM header format into a text file which can be easily parsed with non-DICOM-aware JS functions.
To create binary DICOM objects, use the complementary tool dump2dcm, which converts a text file in the format that dcmdump creates back to a binary DICOM file.
To render images to a "web image format" (i.e. PNG or JPEG), you can use dcmj2pnm. It takes a DICOM image and renders it with some simple rendering functions (scale, rotate, windowing).
All of these tools provide a lot of options through the command-line interface to control the output.
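If you do go the scripting route, a rough sketch of wrapping one of these executables from Node.js could look like the following; the filename is a placeholder and dcmdump is assumed to be on the PATH:

// Sketch: run dcmdump on a DICOM file and hand the textual header dump
// to the rest of your JavaScript code.
const { execFile } = require('child_process');

execFile('dcmdump', ['image.dcm'], (error, stdout, stderr) => {
  if (error) {
    console.error('dcmdump failed:', stderr || error);
    return;
  }
  // stdout now contains the text representation of the DICOM header,
  // one element per line, which plain string/regex code can parse.
  console.log(stdout.split('\n').slice(0, 10).join('\n'));
});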
There are more tools around which may be helpful, but without knowing more about the use cases you want to support, this is the information I can provide. Please note again that I explicitly do not want to advise you to use these approaches, as they are very limited in terms of performance optimization and error handling.
I'm currently building a texture classifier with the C++ API of OpenCV. I'm looking to use it to recognise textures and, ideally, help a Parrot AR.Drone 2.0 navigate to a specific texture. I have found the documentation on NodeCopter and its OpenCV bindings, but I wasn't sure whether this would require me to rewrite my program in JavaScript.
If there is some sort of interface, is it feasible to run my program in the background, pull images from the Parrot, analyse them, and send control commands back to it?
I have been working with OpenCV for about 3 months and have a basic understanding of Node.
Thanks in advance!
There are lots of ways to interface with a Parrot AR.Drone. NodeCopter is one option, but there are others. ROS has good AR.Drone bindings I've used, which would give you tons of flexibility at the expense of some complexity.
You might also consider building your C++ program as a stand-alone executable and calling it from Node.js. You could also interface with the AR.Drone API directly.
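For the stand-alone route, the glue on the Node.js side can be fairly small. A sketch, where the binary name ./texture_classifier and its one-line "label confidence" output format are assumptions made up for the example:

// Sketch: spawn a stand-alone C++/OpenCV classifier from Node.js, feed it a
// PNG frame on stdin, and read its verdict from stdout.
const { spawn } = require('child_process');

function classifyFrame(pngBuffer, callback) {
  const child = spawn('./texture_classifier', ['--stdin-png']);
  let output = '';

  child.stdout.on('data', chunk => { output += chunk; });
  child.on('close', code => {
    if (code !== 0) {
      return callback(new Error('classifier exited with code ' + code));
    }
    const [label, confidence] = output.trim().split(' ');
    callback(null, { label: label, confidence: parseFloat(confidence) });
  });

  child.stdin.write(pngBuffer);   // send the image to the C++ program
  child.stdin.end();
}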
It's not too hard to write a program to control an AR.Drone with some sort of OpenCV-based tracking. JavaScript would probably be my suggestion as the easiest way to do that, but as @abarry alluded, you could do it with any language that has bindings for the AR.Drone communications protocol and OpenCV.
The easiest thing would be to have a single program that controls the drone, and processes images with OpenCV. You don't need to run anything in the background.
copterface is a Node.js application that uses node-ar-drone and node-opencv to recognize faces and steer the drone toward them. It might be a good starting point for your application.
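The control side of such an application, stripped of any vision code, is quite small with node-ar-drone. A minimal sketch from memory (check the node-ar-drone README for the current API; the movement values are arbitrary):

// Sketch: basic flight control plus access to camera frames.
const arDrone = require('ar-drone');
const client = arDrone.createClient();

client.takeoff();

client
  .after(5000, function () { this.clockwise(0.3); })  // turn slowly
  .after(3000, function () { this.stop(); })
  .after(1000, function () { this.land(); });

// PNG frames from the front camera, e.g. to hand to your classifier:
client.createPngStream().on('data', pngBuffer => {
  // pngBuffer is a Buffer containing one PNG frame
});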
Just to give an example in another language, turboshrimp-tracker is a Clojure application that shows you live video from the drone, lets you select a region of the video containing an object, and then tracks that object using OpenCV. It doesn't actually steer the drone toward the tracked object, but that would be pretty easy to add.
I posted a similar question earlier but don't think I explained my requirements very clearly. Basically, I have a .NET application that writes out a bunch of HTML files ... I additionally want this application to index these HTML files for full-text searching in a way that javascript code in the HTML files can query the index (based on search terms input by a user viewing the files offline in a web browser).
The idea is to create all this and then copy to something like a thumb drive or CD-ROM to distribute for viewing on a device that has a web browser but not necessarily internet access.
I used Apache Solr for a proof of concept, but that needs to run a web server.
The closest I've gotten to a viable solution is JSSindex (jssindex.sourceforge.net), which uses Lush, but our users' environment is Windows and we don't want to require them to install Cygwin.
It looks like your main problem is making the index accessible to the local HTML. A cheap way to do it: put the index in a JS file and reference it from the HTML pages.
var index=[ {word:"home", files:["f.html", "bb.html"]},....];
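A lookup over that structure is then only a few lines of client-side JavaScript, assuming the index array above has been loaded into each page with a script tag:

// Sketch: return the files that contain every word the user typed.
function search(query) {
  const words = query.toLowerCase().split(/\s+/).filter(Boolean);
  const fileSets = words.map(word => {
    const entry = index.find(e => e.word === word);
    return entry ? new Set(entry.files) : new Set();
  });
  // Keep only the files that appear for every search word.
  return fileSets.reduce(
    (acc, set) => acc.filter(f => set.has(f)),
    [...(fileSets[0] || [])]
  );
}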
Ladders could be a solution, as it provides on-the-spot indexing, but with 1,000 files or more I don't know how well it would scale. Sadly, I am not sure JS is the answer here. I'd go for a custom (compiled) app that serves both as the front end (HTML display) and the back end (text search and indexing).
Use a trie - they're ridiculously compact and very scalable - dead handy for text matching.
There is a great article covering performance and design strategies. They're slower to boot up than a dictionary, but take up a lot less room, particularly when you're working with larger datasets.
I'd tackle it as follows:
In your .NET code, index all the keywords that are important to you (track their document and offset).
Generate your trie structure from an alphabetically sorted list of keywords.
Decorate the terminal nodes with information about the documents in which the words they represent can be found.
C
  A
    R  [{docid, [hit offsets]}, ...]
    T  [{docid, [hit offsets]}, ...]
You don't have to store the offsets, but doing so would allow you to search for words by proximity or order.
Your .NET guys could build the trie sample code.
It will take a while to generate the map, but once it's done and you've serialised it to JSON, your JavaScript application will race through it.
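Once it is serialised, the lookup on the JavaScript side is a short walk down the structure. The nested-object encoding below is just one possible shape, not a fixed format:

// Sketch: walk a trie serialised as nested objects, e.g.
//   {"c": {"a": {"r": {"$": [{docid: "f.html", offsets: [12, 80]}]},
//                "t": {"$": [{docid: "bb.html", offsets: [3]}]}}}}
// where "$" marks a terminal node and holds the posting list for that word.
function lookup(trie, word) {
  let node = trie;
  for (const ch of word.toLowerCase()) {
    node = node[ch];
    if (!node) return [];      // no such prefix in the index
  }
  return node.$ || [];         // posting list if the word is a complete entry
}

// Example: lookup(trie, "cat") -> [{docid: "bb.html", offsets: [3]}]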
I'm looking for a way to load a virtual human (i.e., a model rigged with a skeleton, preferably with moving eyebrows and so on) onto a web page. The functionality should be similar to the dated Haptek Player library (http://www.youtube.com/watch?v=c2iIuiT3IW8), but allow for a transparent background. Ideally it would be in WebGL/O3D, since that can be directly integrated with my existing code. However, if there's an implementation out there already in Flash3D or a different plugin, I can quickly switch my codebase to ActionScript.
I've investigated trying to send the Haptek Player vertices to a Float32Array (used by WebGL) using an NPAPI plugin. I can place the vertex data into a JavaScript array and draw the virtual human. The vertex data cannot be changed afterwards, however, since the array must be copied into a typed array (Float32Array) to be used by WebGL.
Thanks for any input!
Try http://expression.sourceforge.net/
It is C++ and OpenGL.
You can experiment with converting the code to JavaScript using Emscripten.
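As a very rough sketch of what the Emscripten route looks like from the JavaScript side once the C++ compiles, where the exported function name load_model and the compile flags are invented for illustration:

// Assumes the C++ was built with something like:
//   emcc expression.cpp -o expression.js \
//        -s EXPORTED_FUNCTIONS='["_load_model"]' \
//        -s EXPORTED_RUNTIME_METHODS='["cwrap"]'
// Define Module before including the generated expression.js with a
// script tag; the callback fires once the compiled code is ready.
var Module = {
  onRuntimeInitialized: function () {
    // cwrap exposes the exported C function as a plain JS function.
    var loadModel = Module.cwrap('load_model', 'number', ['string']);
    console.log('load_model returned', loadModel('face.obj')); // placeholder asset
  }
};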