OK, so I have a few levels of a simple platformer built out, and I just need some insight on how to assign images to bodies, handle the images, and so on. The game is a simple platform game with a ball as the player character, and you have to try to reach the other side. I have some obstacles like joints and swinging balls. I'm just getting it started, so please let me know if you can help. I'm using box2dweb. Here is an example of a few bodies inside my game in various places; any advice would be greatly appreciated. The player character:
function PC(gamePiece) {
    if (gamePiece == 1) {
        var ballSd1 = new b2CircleDef();
        ballSd1.density = 1.1;
        ballSd1.radius = 22;
        ballSd1.restitution = 0.5;
        ballSd1.friction = 1;
        ballSd1.userData = 'player';
        var ballBd = new b2BodyDef();
        ballBd.linearDamping = .03;
        ballBd.allowSleep = false;
        ballBd.AddShape(ballSd1);
        ballBd.position.Set(40, 0);
        player.object = world.CreateBody(ballBd);
    }
}
OK, so I am using Box2D and need some help adding images to bodies...
I guess you do not generate your world programmatically. You will need to find an editor suitable for this job, or write one yourself.
I can advise you to use R.U.B.E, which lets you create your Box2D worlds and attach images to the bodies. You can then export the scene as JSON, which fits very well with JavaScript.
Rendering those images attached to bodies will be a custom job, as will importing the scene if there is no loader for box2dweb yet.
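For the rendering side, the usual pattern with box2dweb is to redraw every frame, placing each body's image at the body's position and rotation on a 2D canvas. A minimal sketch, assuming a 30 pixels-per-metre scale, that each body's userData is a key into an images lookup (your snippet sets userData on the shape def; for this to work it would go on the body def), and that ctx is a plain CanvasRenderingContext2D:

// Draw every body that has an image registered for its userData.
// SCALE, world, images and ctx are assumed to exist elsewhere in your game.
var SCALE = 30; // pixels per Box2D metre

function drawBodies(ctx, world, images) {
    for (var body = world.GetBodyList(); body; body = body.GetNext()) {
        var img = images[body.GetUserData()];
        if (!img) continue;
        var pos = body.GetPosition();
        ctx.save();
        ctx.translate(pos.x * SCALE, pos.y * SCALE); // move to the body's position
        ctx.rotate(body.GetAngle());                 // match the body's rotation
        ctx.drawImage(img, -img.width / 2, -img.height / 2); // centre the image on the body
        ctx.restore();
    }
}

Call this once per frame after world.Step(), after clearing the canvas.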
I'm trying to make an interactive HTML ad using Adobe Animate, and I know how to do it with the tools it provides while the assets are stored on a local drive. The issue is that my client wants me to export the ad as HTML and use all the assets from the web (he's providing me with all the URL links), but I have no idea how to use CreateJS's libraries and commands to import the assets into the project from those links.
I'd really appreciate it if someone could help!
I asked the same question on Reddit and the Adobe forum.
I got one reply that said the following (that solution didn't work for me, or maybe I didn't paste the code into Animate correctly):
To load images:
if (!alreadyExecuted) {
    this.imageA = [];
    var imagePathA = ["./images/image1" /*, etc. */];
    alreadyExecuted = true;
    var index = 0;
    imageF.bind(this)(imagePathA[0]);
}

function loadF() {
    imageF(imagePathA[index]);
}

function imageF(imgPath) {
    var image = new Image();
    image.onload = onLoadF.bind(this);
    image.src = imgPath;
}

function onLoadF(e) {
    var img = new createjs.Bitmap(e.target);
    this.imageA.push(img); // still hasn't been added to display list or positioned
    index++;
    if (index < imagePathA.length) {
        imageF.bind(this)(imagePathA[index]);
    } else {
        // loading complete. do whatever
    }
}
To load a sound and play it:
createjs.Sound.registerSound("path/to/sound.mp3", "soundID");
this.your_button.addEventListener('click', playF.bind(this));

function playF() {
    var soundInstance = createjs.Sound.play("soundID");
    //soundInstance.addEventListener("complete", cF.bind(this)); // if you want to know when the sound completes playing
}
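If the real goal is simply to pull the client's assets in from their web URLs, PreloadJS (the loader library bundled with CreateJS) may be simpler than hand-rolled Image loading. A minimal sketch, assuming the remote URLs allow cross-origin loading and that stage comes from the Animate-generated HTML; the ids and URLs are placeholders:

// Queue the remote assets by id; true prefers XHR-based loading where possible.
var queue = new createjs.LoadQueue(true);
queue.installPlugin(createjs.Sound); // lets the same queue handle audio files too

queue.on("complete", function () {
    // Once everything is loaded, pull results out by id and add them to the stage.
    var bmp = new createjs.Bitmap(queue.getResult("logo"));
    stage.addChild(bmp);
    stage.update();
});

queue.loadManifest([
    { id: "logo",  src: "https://example.com/assets/logo.png" },
    { id: "click", src: "https://example.com/assets/click.mp3" }
]);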
In an effort to reduce client-side load, we are attempting to do the work of flattening Paper.js layers on a Node Express server. We have many layers to flatten, with lots of image data, and rather than overwriting our data structure, we want to end up with new objects containing the rasterized (flattened) layers.
So we have an Express route that looks like this:
app.post('/flatten', function (request, response) {
    var pdfs = JSON.parse(request.body.pdfs);

    // Attempt to set up canvas on the server side to work with
    var canvas = new paper.Canvas(1000, 1000);
    paper.setup(canvas);
    paper.view.draw();

    for (var i = 0; i < pdfs.length; i++) {
        var pdf = pdfs[i];
        if (pdf !== null) {
            for (var j = 0; j < pdf.pages.length; j++) {
                if (pdf.pages[j].layer !== undefined) {
                    paper.project.layers.push(pdf.pages[j].layer); // Attempt to add to current project; necessary?
                    pdf.pages[j].layer.activate(); // Blows up
                    pdf.pages[j].layer.visible = true;
                    var layerAsRaster = pdf.pages[j].layer.rasterize(); // Blows up
                    layerAsRaster.visible = false;
                    var dataString = layerAsRaster.toDataURL();
                    pdfs[i].pages[j].pageImageData = dataString.split(',')[1];
                    pdf.pages[j].layer.visible = false;
                }
            }
        }
    }
    response.send(pdfs);
});
The .layer is a native Paper.js layer that was made on the clientside.
We receive this error when hitting this route:
TypeError: pdf.pages[j].layer.activate is not a function
Thinking that perhaps we don't need to worry about activating layers on the serverside, I commented that out, but got the same error for the .rasterize line. (See the two lines commented "Blows up".)
Do I need to somehow import the layers we're receiving from the client into the project? I attempt to do that with the line:
paper.project.layers.push(pdf.pages[j].layer);
but to no avail.
How can I modify this method to successfully work with layers on the serverside?
The problem is that you are directly adding the layer to the project with the line paper.project.layers.push(pdf.pages[j].layer);
You're not allowed to directly manipulate Paper's data structures. If you want to add a layer to a project, use the following (note that this is not documented and will change with the next release of Paper, but I don't think you'll need to do this):
(paperscript)
project.addChild(layer);
(javascript)
paper.project.addChild(layer);
It's not clear how pdf.pages[i].layer was created on the server side, whether it was imported via JSON (in which case it could already be inserted into the project), or whether it was removed from another project, so there may be other complications.
I think there is another problem. It doesn't appear that pdf.pages[i].layer has been turned into a server-side layer, so the key question is: how was it transferred from the client to the server?
Here's a stab at the whole process:
(client side)
jsonLayer = paper.project.activeLayer.exportJSON();
// send jsonLayer to server using some method
(server side)
// get jsonLayer from client
layer = new paper.Layer();
layer.importJSON(jsonLayer);
layer should already be inserted into the project and should contain all the items that were in jsonLayer, which was the layer on the client.
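Putting that together with the original route, here is a sketch of what the server side might look like. It assumes the client sends the result of exportJSON() in a field I'm calling layerJSON here (an assumed name, not part of your current data structure), rather than a raw layer object:

app.post('/flatten', function (request, response) {
    var pdfs = JSON.parse(request.body.pdfs);
    var canvas = new paper.Canvas(1000, 1000); // as in the original route
    paper.setup(canvas);

    for (var i = 0; i < pdfs.length; i++) {
        var pdf = pdfs[i];
        if (pdf === null) continue;
        for (var j = 0; j < pdf.pages.length; j++) {
            var page = pdf.pages[j];
            if (page.layerJSON === undefined) continue; // assumes the client sent exportJSON() output
            var layer = new paper.Layer();       // a real server-side Layer, auto-inserted into the project
            layer.importJSON(page.layerJSON);    // rebuild the client layer's items
            var raster = layer.rasterize();      // works now, because layer is a genuine Layer
            page.pageImageData = raster.toDataURL().split(',')[1];
            layer.remove();                      // clean up so layers don't accumulate between requests
            raster.remove();
        }
    }
    response.send(pdfs);
});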
Here's a link to a discussion on how importJSON and exportJSON map to one another:
paperjs group discussion
I am currently trying to make a web editor that lets users easily adjust basic settings of their audio files. As a plugin I've integrated wavesurfer.js, as it has a very neat, cross-browser solution for its waveform.
After drawing up a must-have list of functionalities, I've decided that cut and paste are essential for making this product work. However, after spending hours trying to figure out how to implement this in the existing library, and even starting to rebuild the wavesurfer.js functionality from scratch to understand the logic, I have yet to succeed.
My question is whether anyone can give me some pointers on how to start building cut and paste functionality, or maybe even an example; that would be greatly appreciated.
Thanks in advance!
wavesurfer plugin:
http://wavesurfer-js.org
plucked web editor
http://plucked.de
EDIT: Solution (instance is the wavesurfer object):
function cut(instance) {
    var selection = instance.getSelection();
    if (selection) {
        var original_buffer = instance.backend.buffer;
        var new_buffer = instance.backend.ac.createBuffer(original_buffer.numberOfChannels, original_buffer.length, original_buffer.sampleRate);

        var first_list_index = (selection.startPosition * original_buffer.sampleRate);
        var second_list_index = (selection.endPosition * original_buffer.sampleRate);
        var second_list_mem_alloc = (original_buffer.length - (selection.endPosition * original_buffer.sampleRate));

        var new_list = new Float32Array(parseInt(first_list_index));
        var second_list = new Float32Array(parseInt(second_list_mem_alloc));
        var combined = new Float32Array(original_buffer.length);

        original_buffer.copyFromChannel(new_list, 0);
        original_buffer.copyFromChannel(second_list, 0, second_list_index);

        combined.set(new_list);
        combined.set(second_list, first_list_index);

        new_buffer.copyToChannel(combined, 0);
        instance.loadDecodedBuffer(new_buffer);
    } else {
        console.log('did not find selection');
    }
}
Reading this answer suggests you can create an empty AudioBuffer of the size of the audio segment you want to copy (size = length in seconds ⨉ sample rate), then fill its channel data with the data from the segment.
So the code might be like this:
var originalBuffer = wavesurfer.backend.buffer;
// segmentStart and segmentDuration (both in seconds) describe the region to copy
var startOffset = Math.floor(segmentStart * originalBuffer.sampleRate);
var emptySegment = wavesurfer.backend.ac.createBuffer(
    originalBuffer.numberOfChannels,
    Math.floor(segmentDuration * originalBuffer.sampleRate),
    originalBuffer.sampleRate
);
for (var i = 0; i < originalBuffer.numberOfChannels; i++) {
    var chanData = originalBuffer.getChannelData(i);
    var segmentChanData = emptySegment.getChannelData(i);
    for (var j = 0, len = segmentChanData.length; j < len; j++) {
        segmentChanData[j] = chanData[j + startOffset]; // read from the segment's start, not the track's
    }
}
emptySegment; // Here you go!
// Not empty anymore, contains a copy of the segment!
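For the paste half, the same idea works in reverse: allocate a buffer long enough for both pieces and write the original audio around the copied segment. A rough sketch, assuming the segment buffer has the same channel count and sample rate as the track, and reusing the loadDecodedBuffer call from the cut() solution above:

function paste(instance, segmentBuffer, positionSeconds) {
    var original = instance.backend.buffer;
    var ac = instance.backend.ac;
    var insertAt = Math.floor(positionSeconds * original.sampleRate);

    var pasted = ac.createBuffer(
        original.numberOfChannels,
        original.length + segmentBuffer.length,
        original.sampleRate
    );

    for (var ch = 0; ch < original.numberOfChannels; ch++) {
        var src = original.getChannelData(ch);
        var seg = segmentBuffer.getChannelData(ch);
        var dst = pasted.getChannelData(ch);
        dst.set(src.subarray(0, insertAt));                       // audio before the insert point
        dst.set(seg, insertAt);                                   // the copied segment
        dst.set(src.subarray(insertAt), insertAt + seg.length);   // the rest of the original
    }

    instance.loadDecodedBuffer(pasted);
}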
Interesting question. The first word that comes to mind is ffmpeg. I can't speak from experience, but if I were trying to achieve this I would approach it like this:
Let's assume you select a region of your audio track and you want to copy it and make a new track out of it (later maybe just appending it to an existing track).
Use the getSelection() method provided by the nice wavesurfer.js library. This will give you startPosition() and endPosition() (in seconds).
Given those points, you can now use ffmpeg on the backend to select the region and save it as a new file (and eventually upload it to S3, etc.). See this thread to get an idea of how to call ffmpeg from your Ruby app (the command-line parameters shown there can be helpful too).
Note that if you plan to copy and paste many regions to piece together a new track, doing this on the backend every time will probably make no sense, and I'd try to look for a client-side JS approach.
I hope this is helpful at least for an easy use case, and gets you started with the rest ;)
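To make the ffmpeg step concrete, here is a minimal sketch of invoking it from Node's child_process (the thread mentioned above calls it from Ruby, but the arguments are the same anywhere; the file paths and output name are placeholders):

var execFile = require('child_process').execFile;

// Extract [start, end] (seconds, e.g. from getSelection()) out of input.wav into segment.wav.
// -ss seeks to the region start and -to stops writing at the region end.
function cutRegion(start, end, done) {
    execFile('ffmpeg', ['-i', 'input.wav', '-ss', String(start), '-to', String(end), 'segment.wav'],
        function (err, stdout, stderr) {
            done(err);
        });
}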
Update
These might be worth reading:
Web Audio API, tutorial here.
HTML5 audio: State of play, here (don't miss the section on TimeRanges; it looks like a reasonable option to try).
This one (outdated, but worth a look; interesting links).
I'm making a web application that has to work offline. So far everything works and my last step is to take the map tiles offline. Luckily I know exactly what areas of the map will need to be accessible to users, so I don't have to allow caching of millions of tiles.
The map is split into areas and so the idea is to offer the tiles for these areas as downloadable 'packages.'
For instance, when I'm online, I go to the 'tile packages' page, which offers downloads for several areas. I choose the area which I'm interested in, it downloads the tiles, and when I go offline, I'm able to use these tiles. I only need about 2 zoom levels, one far out for quick navigation, and one more up close for more detail.
I'm using leaflet to serve up the map. Has anyone had to do something like this and could give me some guidance? I really just don't know how to even approach this, and it's the last piece of the puzzle.
Sadly you don't point out what the exact problem is or at which step you fail, so I will try to give a general answer:
Leaflet uses tiles from different providers to build a slippy map in JS. The map tiles (i.e. raster images) can be served via a Tile Map Service (TMS) or a slightly different scheme (for OSM, the numbering is described here).
So you can create a list of the images you want to get and transfer them, respecting the legal and technical terms. For OSM these are, for example:
http://wiki.openstreetmap.org/wiki/Legal_FAQ
https://wiki.openstreetmap.org/wiki/Tile_usage_policy
So you need to create a server/client script that can do such a bulk transfer (maybe as a packed archive file?) and place it somewhere your user can reach it. I'm not experienced enough in Leaflet to tell you how to feed the tiles back to it, other than adding them to the browser's cache itself, or serving them from a local server on localhost.
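To build that list of tile images for the OSM-style z/x/y numbering, the conversion from a latitude/longitude bounding box to tile coordinates is the standard slippy-map formula. A small sketch; the URL template is the usual {z}/{x}/{y}.png pattern and the bounding-box values are placeholders (mind the tile usage policy linked above when bulk-downloading):

// Standard OSM slippy-map tile numbering: lon/lat -> tile x/y at a given zoom.
function lon2tileX(lon, zoom) {
    return Math.floor((lon + 180) / 360 * Math.pow(2, zoom));
}
function lat2tileY(lat, zoom) {
    var rad = lat * Math.PI / 180;
    return Math.floor((1 - Math.log(Math.tan(rad) + 1 / Math.cos(rad)) / Math.PI) / 2 * Math.pow(2, zoom));
}

// List every tile URL covering a bounding box for the zoom levels you need.
function tileUrls(minLat, minLon, maxLat, maxLon, zooms) {
    var urls = [];
    zooms.forEach(function (z) {
        var x1 = lon2tileX(minLon, z), x2 = lon2tileX(maxLon, z);
        var y1 = lat2tileY(maxLat, z), y2 = lat2tileY(minLat, z); // y grows southward
        for (var x = x1; x <= x2; x++) {
            for (var y = y1; y <= y2; y++) {
                urls.push('https://tile.openstreetmap.org/' + z + '/' + x + '/' + y + '.png');
            }
        }
    });
    return urls;
}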
Anyway, if you have more questions, just ask.
So here's what I came up with. I import an area of the map into my database, then offer this section as a downloadable package. When the user downloads the package, the database is queried and returns all tiles associated with that area in JSON format; the images are stored as blobs. I then pass this array of tiles to a custom Leaflet layer that parses the data. Here's the code for the layer:
define([], function() {
    L.TileLayer.IDBTiles = L.TileLayer.extend({
        initialize: function(url, options, tiles) {
            options = L.setOptions(this, options);

            // detecting retina displays, adjusting tileSize and zoom levels
            if (options.detectRetina && L.Browser.retina && options.maxZoom > 0) {
                options.tileSize = Math.floor(options.tileSize / 2);
                options.zoomOffset++;
                if (options.minZoom > 0) {
                    options.minZoom--;
                }
                this.options.maxZoom--;
            }

            this._url = url;

            var subdomains = this.options.subdomains;
            if (typeof subdomains === 'string') {
                this.options.subdomains = subdomains.split('');
            }

            this.tiles = tiles;
        },

        getTileUrl: function (tilePoint) {
            this._adjustTilePoint(tilePoint);

            var z = this._getZoomForUrl();
            var x = tilePoint.x;
            var y = tilePoint.y;

            var result = this.tiles.filter(function(row) {
                return (row.value.tile_column === x
                    && row.value.tile_row === y
                    && row.value.zoom_level === z);
            });

            if (result[0]) return result[0].value.tile_data;
            else return;
        }
    });
});
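For completeness, a usage sketch of this custom layer; the tiles array is the JSON result from the package download, and the tile_data blobs are assumed to be stored as data URLs so getTileUrl can return them directly (the zoom values are placeholders):

// tiles: [{ value: { tile_column, tile_row, zoom_level, tile_data } }, ...]
// fetched from the downloaded package (e.g. read back out of IndexedDB when offline).
var offlineLayer = new L.TileLayer.IDBTiles('', {
    minZoom: 12,   // the two zoom levels kept offline
    maxZoom: 13
}, tiles);

map.addLayer(offlineLayer);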
I think you can use a quadtree, i.e. a space-filling curve. MS Bing Maps uses the simplest tile map: http://bcdcspatial.blogspot.de/2012/01/onlineoffline-mapping-map-tiles-and.html?m=1. I think the other map servers also use a space-filling curve, but it's not so obvious. You may search for MS Bing Maps quadkey or Nick's spatial index Hilbert curve. You can also download my PHP class hilbert-curve at phpclasses.org; you can use it with many different space-filling curves and to generate a quadkey. A good start is also the hacker's cookbook; there is a whole chapter dedicated to the Hilbert curve.
I am trying to capture a still frame from an (any) external swf file by using my own Flash movie as a proxy to load it and hand information about the Stage over to JavaScript. I want to keep it as widely compatible as possible, so I went with AS2 / Flash 8 for now.
The script works fine in the Flash debugger, i.e.
trace(flash2canvasScreenshot.getPixel(w, h).toString(16));
returns the correct pixel color, whereas:
ExternalInterface.call("sendToJS", flash2canvasScreenshot.getPixel(w, h).toString(16));
in the published movie doesn't.
This method can obviously be quite slow for large (dimension-wise) Flash movies, as it iterates over every single pixel. If someone has a better method in mind, feel free to share, but as said, the problem I am facing is that I am getting different results in debugging and publishing, with the pixel information not getting fetched when published.
import flash.display.BitmapData;
import flash.external.*;

var myLoader:MovieClipLoader = new MovieClipLoader();
var mclListener:Object = new Object();

mclListener.onLoadInit = function(target_mc:MovieClip)
{
    var stageW = Stage.width;
    var flash2canvasScreenshot:BitmapData = new BitmapData(stageW, Stage.height, false, 0x00000000);
    var pixels:Array = new Array();

    flash2canvasScreenshot.draw(element);

    for (w = 0; w <= stageW; w++)
    {
        trace(flash2canvasScreenshot.getPixel(w, h).toString(16)); // this gives correct color value for the pixels in the debugger
        ExternalInterface.call("sendToJS", flash2canvasScreenshot.getPixel(w, h).toString(16)); // this just returns the bitmap default color, 0 in this case.
        /*
        for (h = 0; h <= Stage.height; h++)
        {
            var pixel = flash2canvasScreenshot.getPixel(w, h).toString(16);
            pixels.push(pixel);
        }
        */
    }
    //ExternalInterface.call("sendToJS", pixels.toString());
};

myLoader.addListener(mclListener);
myLoader.loadClip("http://i.cdn.turner.com/cnn/cnnintl_adspaces/2.0/creatives/2010/6/9/21017300x250-03.swf", 0);
//myLoader.loadClip("https://s.ytimg.com/yt/swfbin/watch_as3-vflJjAza6.swf", 0);
//myLoader.loadClip(_level0.flash2canvasurl, _root.mc);
There are a few problems with the snippet you posted:
There's the one Joey mentioned, but the one that stands out from my point of view is the element variable, which isn't defined anywhere, so either that is a typo or you're trying to draw an undefined object.
You're drawing as soon as the load is finished, but the animation you're loading might start slightly later. Maybe take the snapshot a bit after the load is complete.
I haven't touched AS2 for some time and don't remember how security issues are handled, but if your swf is loading another swf from a different domain, then the domain hosting the swf you're loading should also have a crossdomain.xml policy file allowing you to access the content of the loaded swf. If you simply load and display a swf from another domain, that's fine. However, if you're trying to draw the swf using BitmapData, you're actually attempting to access pixel data from the content of that swf, so you would need permission. If you have no control over the crossdomain policy file, you might need a server-side script to copy/proxy the file over to a domain that can grant your loaded swf access.
Here's a simplified version of your snippet that works (sans the external interface/pixel values part):
var myLoader:MovieClipLoader = new MovieClipLoader();
var mclListener:Object = new Object();

mclListener.onLoadInit = function(target_mc:MovieClip)
{
    var pixels:Array = new Array();
    setTimeout(takeSnapshot, 2000, target_mc);
};

myLoader.addListener(mclListener);
myLoader.loadClip("http://www.bbc.co.uk/science/humanbody/sleep/sheep/reaction_version5.swf", 1);
//myLoader.loadClip("http://i.cdn.turner.com/cnn/cnnintl_adspaces/2.0/creatives/2010/6/9/21017300x250-03.swf", 1);
//myLoader.loadClip("https://s.ytimg.com/yt/swfbin/watch_as3-vflJjAza6.swf", 0);

function takeSnapshot(target:MovieClip):Void {
    var flash2canvasScreenshot:BitmapData = new BitmapData(150, 150, false, 0x00000000); // tiny sample
    flash2canvasScreenshot.draw(target);
    _level1._alpha = 20; // fade the loaded content
    _level0.attachBitmap(flash2canvasScreenshot, 0); // show the snapshot. sorry about using _root
}
Here's a quick zoomed preview of the 150x150 snap:
Here's an as3 snippet to illustrate the security sandbox handling issue:
var swf:Loader = new Loader();
swf.contentLoaderInfo.addEventListener(Event.COMPLETE, loaderComplete);
swf.contentLoaderInfo.addEventListener(SecurityErrorEvent.SECURITY_ERROR, loaderSecurityError);
swf.contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR, loaderIOError);
swf.load(new URLRequest("http://i.cdn.turner.com/cnn/cnnintl_adspaces/2.0/creatives/2010/6/9/21017300x250-03.swf"), new LoaderContext(true));

function loaderComplete(event:Event):void {
    setTimeout(takeSWFSnapshot, 2000);
}

function loaderSecurityError(event:SecurityErrorEvent):void {
    trace('caught security error', event.errorID, event.text);
}

function loaderIOError(event:IOErrorEvent):void {
    trace('caught I/O error', event.errorID, event.text, '\tattempting to load\t', swf.contentLoaderInfo.url);
}

function takeSWFSnapshot():void {
    var clone:BitmapData = new BitmapData(swf.content.width, swf.content.height, false, 0);
    try {
        clone.draw(swf.content);
    } catch (e:SecurityError) {
        trace(e.name, e.message, e.getStackTrace());
    }
    addChild(new Bitmap(clone));
}
HTH
My approach to this would be:
- Use AS3, for the reason lukevanin commented:
Just remember that AS3 can load an AS2 SWF, but an AS2 SWF cannot load an AS3 SWF, so you actually achieve greater compatibility (with your content) if you publish AS3.
- Use a proxy file to fetch the swf file to get around sandbox violation issues (although if the swf loads external resources and uses relative paths, it might get a bit more complex).
- Take a snapshot of the frame (see George Profenza's solution).
- Encode the image as base64 and send that** to a JS method, then decode it to get the image (see the sketch after this list for the JS side).
** I'm pretty sure there are no size limitations...
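On the JavaScript side, receiving that base64 string and painting it onto a canvas is straightforward. A minimal sketch, assuming the swf calls a global sendToJS (the callback name used in the question) with a base64-encoded PNG, and that a canvas with the hypothetical id 'flash2canvas' exists on the page:

// Receive the base64 PNG from ExternalInterface and draw it onto a canvas.
function sendToJS(base64Png) {
    var img = new Image();
    img.onload = function () {
        var canvas = document.getElementById('flash2canvas'); // hypothetical canvas id
        canvas.width = img.width;
        canvas.height = img.height;
        canvas.getContext('2d').drawImage(img, 0, 0);
    };
    img.src = 'data:image/png;base64,' + base64Png;
}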