We are working on this website: http://bandaultralarga.italia.it/mappa-bul/
On that page we have a Google Map on top of which we draw some polygons, using D3 (v3) and TopoJSON.
Now we would like to switch to the new v4 to add new data-viz designs to the page while keeping what we already have.
The trouble comes in the overlay function below, where we translate the coordinates into the Google Maps coordinate system.
Referring to the D3 v4 API docs, we switched from:
path = d3.geo.path().projection(googleMapProjection);
to:
path = d3.geoPath().projection(googleMapProjection);
but this generates the error in question ("projectionStream is not a function").
Any clue how to get this to work again?
var overlay = new google.maps.OverlayView();

overlay.onAdd = function () {
    var div = document.createElement('div');
    div.id = 'svg_map';
    div.style.borderStyle = 'none';
    div.style.borderWidth = '0px';
    div.style.position = 'absolute';
    this.getPanes().overlayMouseTarget.appendChild(div);

    overlay.draw = function () {
        var overlayProjection = this.getProjection();
        var googleMapProjection = function (coordinates) {
            var googleCoordinates = new google.maps.LatLng(coordinates[0], coordinates[1]);
            var pixelCoordinates = overlayProjection.fromLatLngToDivPixel(googleCoordinates);
            return [pixelCoordinates.x + 4000, pixelCoordinates.y + 4000];
        };

        path = d3.geoPath().projection(googleMapProjection); // Here is where troubles come
        generaOverlay(overlay, datiShape);
    };
};

overlay.setMap(map);
mbostock's answer:
Please read the d3-geo release notes or CHANGES.md:
“Fallback projections”—when you pass a function rather than a projection to path.projection—are no longer supported. For geographic projections, use d3.geoProjection or d3.geoProjectionMutator to define a custom projection. For arbitrary geometry transformations, implement the stream interface; see also d3.geoTransform.
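For the Google Maps overlay above, the d3.geoTransform route could look roughly like the minimal sketch below. It reuses the overlay, generaOverlay and datiShape names from the question and mirrors the original code's coordinate order (the first coordinate is passed as latitude); adjust that if your data is stored as [longitude, latitude].

overlay.draw = function () {
    var overlayProjection = this.getProjection();

    // d3.geoTransform implements the stream interface, so the result can be
    // handed to d3.geoPath() in v4, where a plain function no longer can.
    var googleMapTransform = d3.geoTransform({
        point: function (x, y) {
            // Same ordering as the original googleMapProjection function
            var pixel = overlayProjection.fromLatLngToDivPixel(new google.maps.LatLng(x, y));
            this.stream.point(pixel.x + 4000, pixel.y + 4000);
        }
    });

    path = d3.geoPath().projection(googleMapTransform);
    generaOverlay(overlay, datiShape);
};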
I have used the THREE.js GLTFLoader to load a glb file, then used the USDZExporter to export it to usdz. When I tried to open it in the browser it opened in Safari, but it didn't show in ARKit in Object mode, and in AR mode it appeared above my head. This is a simple file:
https://drive.google.com/file/d/1uAZNZLWI-zdtjcetfT2tIBh9a0tyzGyi/view?usp=sharing
This is my attempt:
const loader = new GLTFLoader().setPath(`${origin}${folderPath}`);
loader.load(modelName, async function (gltf) {
    model = gltf.scene;
    scene.add(model);

    const exporter = new USDZExporter();
    const arraybuffer = await exporter.parse(model);
    const blob = new Blob([arraybuffer], { type: 'application/octet-stream' });

    const link = document.getElementById('usdz-link');
    link.style.display = '';
    link.href = URL.createObjectURL(blob);
});
I have also tried to center the glb model with this code:
const box = new THREE.Box3().setFromObject(gltf.scene);
const center = box.getCenter(new THREE.Vector3());
var sceneCopy = gltf.scene.clone();
sceneCopy.position.x += (gltf.scene.position.x - center.x);
sceneCopy.position.y += (gltf.scene.position.y - center.y);
sceneCopy.position.z += (gltf.scene.position.z - center.z);
and then exported sceneCopy to usdz instead, but unfortunately this didn't help.
I think we answered this one through Slack, but I'll repost here for closure.
The offset of the model needs to be close to the origin for it to appear correctly in the Quick Look viewer on iPad/iPhone.
You can manually set the offset by changing these three lines:
https://github.com/wallabyway/quicklook-example/blob/6f1e1453b983e0effc4ad82b2eda5eac905865a9/alliedbim-piping.gltf#L1023-L1025
which should be the opposite of these 3 values:
https://github.com/wallabyway/quicklook-example/blob/6f1e1453b983e0effc4ad82b2eda5eac905865a9/alliedbim-piping.gltf#L1044-L1046
Basically, when you use filtering in forge-convert-utils, the center option doesn't take into account the offset.
There is a feature request on the GitHub repo to fix this:
https://github.com/petrbroz/forge-convert-utils/issues/44
and a branch with a possible solution.
https://github.com/petrbroz/forge-convert-utils/commit/bb4bd0a13c685c34966a3bf5c2784ba9b1343a7d
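The same re-centering can also be done in code right before exporting. This is a minimal sketch assuming the model and USDZExporter setup from the question, run inside the async load callback; the updateMatrixWorld call is an assumption added here so the exporter sees the new transform:

// Move the model's bounding-box center to the origin before export
const box = new THREE.Box3().setFromObject(model);
const center = box.getCenter(new THREE.Vector3());
model.position.sub(center);      // model now sits near the origin
model.updateMatrixWorld(true);   // refresh world matrices before parsing (assumption)

const exporter = new USDZExporter();
const arraybuffer = await exporter.parse(model);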
Hello, I'm trying to reduce the point size of all the charts that I have. The documentation says to set:
Chart.defaults.global.elements.rectangle.radius = 2
But the point sizes do not change. I have managed to change the size by setting it manually on each chart's declaration, but it would be nice to do it globally, since I have to change it dynamically at some stage on the website.
Related code is below:
var ctx = document.getElementById('myChart').getContext('2d');
var myChart = new Chart(ctx, {
    //Some looong data here
});

var ctx2 = document.getElementById('myChart2').getContext('2d');
var myChart2 = new Chart(ctx2, {
    //Some looong data here
});

var ctx3 = document.getElementById('myChart3').getContext('2d');
var myChart3 = new Chart(ctx3, {
    //Some looong data here
});
Chart.defaults.global.elements.rectangle.radius = 2
Use global point options instead of global rectangle options.
// Chart.defaults.global.elements.rectangle.radius = 2; // remove this line
Chart.defaults.global.elements.point.radius = 2;
You also have to move that line to the top of your code; it must be executed before you create your charts.
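A minimal sketch of the intended order (Chart.js 2.x, where Chart.defaults.global exists; the chart configs are elided as in the question):

// Set the global default before any chart is constructed
Chart.defaults.global.elements.point.radius = 2;

var ctx = document.getElementById('myChart').getContext('2d');
var myChart = new Chart(ctx, {
    //Some looong data here
});

// myChart2 and myChart3 are created the same way afterwards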
I have a Feature Layer that I would like to buffer using user input and a geometry service.
FeatureLayer:
var texasPipeline = new FeatureLayer(pipeURL, {
    mode: FeatureLayer.MODE_ONDEMAND,
    outFields: ["*"],
    definitionExpression: texasPipeQuery
});
BufferParameters:
var params = new BufferParameters();
params.distances = [distance];
params.unit = units;
params.outSpatialReference = map.spatialReference;
params.geometries = texasPipeline;
map.graphics.clear();
geomSvc.buffer(params, showBuffer);
The server is returning an error saying that geometries must be supplied. My guess is that I need to pass in the geometries of the FeatureLayer rather than the FeatureLayer itself. How do I get at the geometries of the FeatureLayer and pass them into the BufferParameters appropriately?
EDIT:
Additionally, I have tried to loop through the graphics as you can see in the code below. Passing the array of geometries into the BufferParameters still does not return successfully.
var texasPipelineGeom = [];
var graphics = texasPipeline.graphics;
for (var G in graphics) {
    var g = graphics[G]["geometry"];
    console.log(g);
    texasPipelineGeom.push(g);
}
What is the error you're receiving with the edits you made? That looks to be a good start; BufferParameters does take an array of geometries rather than a feature layer.
You could use something like this (untested, just take as pseudo code):
params.geometries = texasPipeline.graphics.map(function (graphic) {
    return graphic.geometry;
});
If you are using polygons, the geometry service will sometimes complain about the polygons not being simplified (see the simplify sketch after the example below). You can find a full working example with polygons here: https://developers.arcgis.com/javascript/3/jssamples/util_buffergraphic.html. You will just need to correctly get your geometries out of the feature layer and add them to params.geometries. For reference, here is a similar buffer call for a single point:
var buffer = function buffer(point, radius) {
    var promise = new Deferred();
    var gsvc = new GeometryService(CONFIG.GEOMETRY_SERVICE_URL);
    var params = new BufferParameters();
    params.geometries = [point];
    params.distances = [radius];
    params.unit = GeometryService.UNIT_KILOMETER;
    params.outSpatialReference = new SpatialReference(54010);
    gsvc.buffer(params, promise.resolve, console.error);
    return promise;
};
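And if the service does complain about non-simple polygons, here is a hedged sketch of simplifying first, reusing the geomSvc, params and showBuffer names from the question (the callback wiring is illustrative):

// Pull the geometries out of the layer's graphics
var geometries = texasPipeline.graphics.map(function (graphic) {
    return graphic.geometry;
});

// Simplify first, then buffer the simplified polygons
geomSvc.simplify(geometries, function (simplifiedGeometries) {
    params.geometries = simplifiedGeometries;
    map.graphics.clear();
    geomSvc.buffer(params, showBuffer);
});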
What version of the JS API are you using? Starting with version 3.13 there is a module, esri/geometry/geometryEngine, which lets you do geometry operations on the client side without a geometry service. Here is a good example of using it.
Also take a look at the esri/graphicsUtils module to get geometries from graphics.
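A hedged sketch of that client-side approach (ArcGIS JS API 3.x, AMD style), reusing texasPipeline, distance and map from the question; the "miles" unit and the default fill symbol are placeholders:

require([
    "esri/geometry/geometryEngine",
    "esri/graphicsUtils",
    "esri/graphic",
    "esri/symbols/SimpleFillSymbol"
], function (geometryEngine, graphicsUtils, Graphic, SimpleFillSymbol) {
    // Pull the geometries out of the layer's graphics
    var geometries = graphicsUtils.getGeometries(texasPipeline.graphics);

    // Buffer client-side; geodesicBuffer expects WGS84 or Web Mercator input,
    // otherwise use geometryEngine.buffer for planar buffering.
    var buffers = geometryEngine.geodesicBuffer(geometries, distance, "miles");

    map.graphics.clear();
    buffers.forEach(function (polygon) {
        map.graphics.add(new Graphic(polygon, new SimpleFillSymbol()));
    });
});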
I use EaselJS in the implementation of the game robolucha; currently we display the characters' different colors by using shapes under transparent images.
We want to use Bitmaps and apply color filters to them.
Sadly, the ColorFilter is not working.
The fiddle with the code is here: https://jsfiddle.net/athanazio/7z6mqnrk/
And here is the code I'm using:
var stage = new createjs.Stage("filter");
var head = new createjs.Container();
head.x = 300;
head.y = 300;
head.regX = 100;
head.regY = 100;
var path = "https://raw.githubusercontent.com/hamilton-lima/javascript-samples/master/easejs/colorfilter/";
var layer1 = new createjs.Bitmap(path + "layer1-green.png");
layer1.image.onload = function () {
    layer1.filters = [new createjs.ColorFilter(0, 0, 0, 1, 0, 0, 255, 1)];
    layer1.cache(0, 0, 200, 200);
};
var layer2 = new createjs.Bitmap(path + "layer2.png");
head.addChild(layer1);
head.addChild(layer2);
stage.addChild(head);
createjs.Ticker.addEventListener("tick", headTick);
function headTick() {
    head.rotation += 10;
}
createjs.Ticker.addEventListener("tick", handleTick);
function handleTick() {
    stage.update();
}
The ColorFilter does not work in this example because the image is being loaded cross-domain. The browser will not be able to read the pixels to apply the filter. I am not exactly sure why there is no error in the console.
EaselJS has no mechanism to automatically handle cross-origin images when it creates images behind the scenes (which it does when you pass a string path). You will have to create the image yourself, set the "crossOrigin" attribute, and then set the path (in that order). Then you can pass the image into the Bitmap constructor.
var img = document.createElement("img");
img.crossOrigin = "Anonymous";
img.onload = function () {
    // apply the filter and cache it
};
img.src = path + "layer1.png";
layer1 = new createjs.Bitmap(img);
You don't have to wait for the image to load to create the Bitmap and apply the filter, but you will have to wait to cache the image.
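Putting the pieces together, a minimal sketch that reuses the image path, ColorFilter values and cache bounds from the question:

var img = document.createElement("img");
img.crossOrigin = "Anonymous";            // must be set before src
img.onload = function () {
    // Apply the filter from the question, then cache to bake it in
    layer1.filters = [new createjs.ColorFilter(0, 0, 0, 1, 0, 0, 255, 1)];
    layer1.cache(0, 0, 200, 200);
    stage.update();
};
img.src = path + "layer1-green.png";

var layer1 = new createjs.Bitmap(img);    // can be created before the image finishes loading
head.addChild(layer1);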
This fix also requires a server that sends a cross-origin header, which GitHub's raw hosting does. Here is an updated fiddle with that change. Note that if your image is loaded from the same server, this is not necessary.
https://jsfiddle.net/7z6mqnrk/10/
Cheers.
I am new to JavaScript and PaperJS. What I am trying to do is:
1) Load image to canvas using raster - done
var imageCanvas = document.getElementById('image-canvas');
var drawCanvas = document.getElementById('draw-canvas');
var img = document.getElementById('img');
imageCanvas.width = img.width;
imageCanvas.height = img.height;
var scope = new paper.PaperScope();
scope.setup(drawCanvas);
scope.setup(imageCanvas);//this will be active
var views = [2];
views[0] = scope.View._viewsById['image-canvas'];
views[1] = scope.View._viewsById['draw-canvas'];
views[0]._project.activate(); //making sure we are working on the right canvas
raster = new paper.Raster({source:img.src, position: views[0].center});
2) Get the sub raster of the image; the rectangle is drawn by the user using mouse drag events, but for the sake of convenience let's say I have a rectangle at position (10,10) with dimensions (100,100).
3) Get the sub raster and show the preview in the other canvas (draw-canvas):
views[1]._project.activate();// activating draw-canvas
var subras = raster.getSubRaster(new paper.Path.Rectangle(new paper.Point(10,10), new paper.Size(100,100))); //Ignore if the rectangle inside is not initialized correctly
But nothing happens on the draw-canvas.
I also tried using
var subrasdata = raster.getSubRaster(new paper.Path.Rectangle(new paper.Point(10,10), new paper.Size(100,100))).toDataURL();
but it gives me
Uncaught SecurityError: Failed to execute 'toDataURL' on 'HTMLCanvasElement': Tainted canvases may not be exported.
Is there any other way to get the sub raster and paste it into the other canvas?
Instead of getSubRaster(), I used Raster#getSubCanvas and CanvasRenderingContext2D#drawImage to paste it into the new canvas. In short:
canvasSrc = raster.getSubCanvas(rect);
destinationCanvasContext.drawImage(canvasSrc,0,0);
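Expanded a little, here is a hedged sketch of that approach, reusing the raster and drawCanvas variables from the question and the rectangle from step 2:

// The region to copy, as a paper.Rectangle
var rect = new paper.Rectangle(new paper.Point(10, 10), new paper.Size(100, 100));

// getSubCanvas returns a plain canvas element containing just that region
var canvasSrc = raster.getSubCanvas(rect);

// Paste it into the preview canvas with the standard 2D canvas API
var destinationCanvasContext = drawCanvas.getContext('2d');
destinationCanvasContext.drawImage(canvasSrc, 0, 0);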