07 Apr 2009
Click to run
Cylinders shaped with randomized 3D cubic Bezier splines and sine waves. Later it would be nice to develop the tentacle movement a good deal further with kinematics and possibly springs... Robert Hodgin's piece, "Relentless, The REV," was the point of inspiration for this.
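The core of the spline part can be sketched in a few lines of Java (Processing-style, but plain Java here). This is an illustrative reconstruction, not the original source; the control-point layout, wobble amplitude, and method names are my own assumptions.

```java
// Sketch (assumed, not the original code): a point on a tentacle's centerline,
// computed from a 3D cubic Bezier plus a sine-wave wobble that grows toward
// the tip. A cylinder mesh would then be extruded along these points.
public class TentacleSpine {
    // One axis of a cubic Bezier at parameter t in [0, 1].
    static double bezier(double p0, double p1, double p2, double p3, double t) {
        double u = 1.0 - t;
        return u * u * u * p0 + 3 * u * u * t * p1
             + 3 * u * t * t * p2 + t * t * t * p3;
    }

    // cp is a 4x3 array of control points; phase animates the sine wobble.
    static double[] spinePoint(double[][] cp, double t, double phase) {
        double x = bezier(cp[0][0], cp[1][0], cp[2][0], cp[3][0], t);
        double y = bezier(cp[0][1], cp[1][1], cp[2][1], cp[3][1], t);
        double z = bezier(cp[0][2], cp[1][2], cp[2][2], cp[3][2], t);
        // Sine offset scaled by t so the base stays anchored and the tip waves.
        x += Math.sin(phase + t * 4 * Math.PI) * 10.0 * t;
        return new double[] { x, y, z };
    }
}
```

Randomizing the four control points per tentacle (and easing them toward new random targets over time) gives the organic variation; the sine term adds the higher-frequency ripple on top.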
Update 8/2010:
Alternate untextured version
Java with Processing.
03 Mar 2009
Click to run
The color of each triangle in the mesh is obtained by averaging the pixel colors of the underlying image at the screen positions under each vertex, as well as at each triangle's center point. Some FlatShader-like lighting is used to add a little more three-dimensionality. Finally, a BlurFilter is applied to the source image to keep the transitions between colors from feeling too abrupt.
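The sampling step can be sketched as follows (a minimal Java reconstruction of the averaging rule described above, not the original ActionScript; pixel layout and names are assumptions):

```java
// Sketch: average the image's pixel colors under a triangle's three vertices
// and its centroid to pick the triangle's fill color. Pixels are 0xAARRGGBB.
public class TriangleColor {
    static int sample(int[] pixels, int w, int h, int x, int y) {
        x = Math.max(0, Math.min(w - 1, x));  // clamp to image bounds
        y = Math.max(0, Math.min(h - 1, y));
        return pixels[y * w + x];
    }

    // verts is three {x, y} screen positions, one per projected vertex.
    static int averageColor(int[] pixels, int w, int h, int[][] verts) {
        int cx = (verts[0][0] + verts[1][0] + verts[2][0]) / 3;
        int cy = (verts[0][1] + verts[1][1] + verts[2][1]) / 3;
        int[][] pts = { verts[0], verts[1], verts[2], { cx, cy } };
        int r = 0, g = 0, b = 0;
        for (int[] p : pts) {
            int c = sample(pixels, w, h, p[0], p[1]);
            r += (c >> 16) & 0xFF;
            g += (c >> 8) & 0xFF;
            b += c & 0xFF;
        }
        return 0xFF000000 | ((r / 4) << 16) | ((g / 4) << 8) | (b / 4);
    }
}
```

Four samples per triangle is cheap enough to run per frame, and the pre-blur on the source image does the rest of the smoothing.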
27 Feb 2009
[+] Large-scale art installation idea
A giant 'AR square' (not sure of the right terminology here...) is painted on the side of a building. People come by to view it through the web application - an AIR app, possibly. Because the 'geometry' of the landscape around the painted square is known in advance (eg, city streets, adjacent buildings, etc.) and can be assumed to remain 'constant', that information could be 'hardcoded' into the 3d scene for the dynamic 3d elements to interact with in a visually convincing manner. The 3d parts could also be properly occluded behind other buildings or certain street obstacles. The dynamic elements wouldn't even have to specifically interact with the location of the square itself; the square simply situates the camera in relation to the entire 3d space... Add a feature to save footage locally, to be uploaded to a central repository later. Also, it wouldn't necessarily have to be a large-scale context; the same treatment could be done within a room-sized scene.
[+] An interactive setup-phase to define scene geometry
With the square remaining in a fixed position, the user uses a few simple tools to draw 3d planes and rectangles on top of the video to describe the physical space around them. Eg, user draws 4 connecting line segments to describe a plane which represents a room's wall. (And then defines a few more planes that describe the other walls, floor, and ceiling). Actually, you might just be able to pinpoint the 8 corners of the inside box which makes up the room... Maybe the user can also 'overlay' a 3d cube over a coffee table or desk. Etc. The program could even draw from a library of furniture-like 3d objects for the user to translate, rotate, and scale to overlay onto the scene. These 3d elements then constitute solid objects which the dynamic 3d elements can interact with. The camera is free to move around within the scene as long as the square remains fixed.
(a) 3d rain falling and hitting indoor furniture in interesting, 'convincing' ways. Or snowflakes falling, and collecting on various surfaces. Or a room getting slowly flooded with water, starting from the floor up to the ceiling... The supplied room geometry forming the basis for a platform game...
(b) Actually, if the user-supplied data and the positioning information from the AR Toolkit are accurate enough, there's no reason why you couldn't take snapshots from the video imagery, dynamically snip out the various quadrilaterals corresponding to the scene geometry, correct for perspective, and then skin the 3d geometry with that video texture information... !
(c) A 3d box is positioned over a real-world rectangular table or something. The top of that box is then used as a playing surface for something like ping-pong or air hockey, which uses normal 3d game mechanics and assets but which is of course overlaid on top of live video. Imagine a replay feature where the winning shot is replayed in slow motion, but the user views it from different angles by moving the webcam. Virtual indoor handball by using AR in combination with motion detection.
(d) A square is stuck on a person's chest. If the person's height is supplied, we have enough information for a gross bounding box. This is enough information to go on to do a number of potentially interesting things. If we make the assumption that the subject remains generally upright and standing, we know the general position of the floor as well.
Update: Of course all these questions have already been dreamed up, and solved. This page from chronotext.org looks useful...
[+] Use of 'physics'; use of a physics engine (eg, Jiglib)
As the AR square describes a plane, it of course lends itself to being acted upon as a solid surface. If we introduce extra scene geometry as described in the first two points above, even more could be done. Cubes or spheres falling from the ceiling to fill up a room (of course). Apply that sweet-ass Jiglib rally car example to a scene where the AR square is placed on the floor...
Update: Cloth demo by Saqoosha. (I guess I should do more 'research' before clicking 'Publish' ;)
[+] A specific visual piece: Tentacles
The AR square, placed on the body. Multiple tentacles coming out of the square in anime/sci-fi style. A fun exercise to play with for... inverse kinematics; 3d bezier curve animation; animating bezier patches to generate mesh geometry (Away3D 2.3); tree-like branching of tentacles; 'generative art' generally; crazy, lurid motions. Assuming a fixed camera, various 'motion behaviors' based on the movement of the square on the body; tentacles reacting fluidly to translation and rotation of the square.
[+] Interaction of dynamic 3d elements between two or more squares
Particles coming out of one square and going into another; gravity-based motion between squares; arcs of electricity going from one to the other; tentacles (from point 4) coming out of one body and 'attacking' another body to which another square is attached, for some reason.
[+] Interaction of the video bitmap information with the 3d elements
(a) The video image used as an environment map applied to the dynamic 3d elements, to make it look reflective and vaguely chrome-like (with a little cleverness and finesse).
(b) Pixel Bender-like effects applied to the areas of the video that are 'underneath' or adjacent to the 3d elements. A 3d 'fire' coming out of the square, and the video imagery around the fire shimmering from the 'heat' of it. Wind-like motion-blur effect emanating from a virtual fan or something, and taking account of perspective. Pixel-dissolve-y action?
(c) Real-time chroma-keying to mask out background video imagery. Dynamic 3d elements can then be made to appear to circle around the subject by appearing both in front of and 'behind' the video.
(d) The idea of treating the entire video with various video filters momentarily/sporadically to put the artificial content in bolder relief appeals to me...
[+] Intelligent video color sampling
(a) Application polls the color information of the incoming video to try to mimic, vaguely, the scene's lighting as applied to the 3d elements. Again, with some clever hand-wavery and finesse.
(b) The dynamic 3d elements attempt to 'mimic' the colors of the video pixels around them. An animated lizard character or something. Maybe the colors of the faces of a mesh are assigned by averaging the colors of the area of the video image that the face normals are pointing at.
(c) Random 'remixing' of nearby patches of video imagery applied to a 3d mesh to create its texture. Another Pixel Bender possibility. The invisible suit in the movie 'A Scanner Darkly'...
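Idea (b) above could be roughed out like this. Everything here is hypothetical (the projection of the normal to screen space, the step size, the sample count are all made up for illustration):

```java
// Hypothetical sketch: tint a mesh face by averaging video pixels in the
// region its normal 'points at', approximated by stepping from the face's
// screen position (fx, fy) along the normal's screen-space projection (nx, ny).
public class NormalSampler {
    static int sampleAlongNormal(int[] px, int w, int h,
                                 int fx, int fy, double nx, double ny) {
        int r = 0, g = 0, b = 0, steps = 4;
        for (int s = 1; s <= steps; s++) {
            // Step outward in 5-pixel increments, clamped to the frame.
            int x = Math.max(0, Math.min(w - 1, fx + (int) Math.round(nx * s * 5)));
            int y = Math.max(0, Math.min(h - 1, fy + (int) Math.round(ny * s * 5)));
            int c = px[y * w + x];
            r += (c >> 16) & 0xFF;
            g += (c >> 8) & 0xFF;
            b += c & 0xFF;
        }
        return 0xFF000000 | ((r / steps) << 16) | ((g / steps) << 8) | (b / steps);
    }
}
```

Faces angled toward a red wall would pick up red, faces toward the window would pick up daylight, which is roughly the 'camouflage' effect described.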
[+] Save video to disk from within application
Add a built-in feature to easily save composited output to disk (eg, using SimpleFlvWriter).
[+] Science museum-style interactive art installation
As many AR ideas might require specific rules, setups, or optimal conditions, set them up in an expressly controlled setting...
[+] Use of augmented reality + head-mounted display/webcam + geotagging + wireless internet = William Gibson's Spook Country
The composited output is fed into a head-mounted display/'VR goggles', with a lightweight webcam mounted on it pointing outward. When combined with GPS tagging, you get "locative art" a la Spook Country. (Actually, the GPS tagging wouldn't even be necessary, just nice to have.) Ie: users view 3d sculptures and whatnot (pushed via wireless) associated with AR squares (made at various scales) that are 'tagged' around the (real-world) landscape by other users.
17 Feb 2009
Click to run demo
A green-screen/chroma key effect using Pixel Bender. With this version, the live video is compared against a reference image rather than a static color. Additionally, the alpha of the output is graduated based on the color difference between the base image and the incoming video (rather than just being either 'on' or 'off').
Use the sliders to adjust the falloff curve for the alpha. Click "Reset base image" to capture the base image against which further changes in the video are compared. You can also choose your own background image (click "Browse..." at the bottom). Works best against a solid-colored background, and probably best of all with proper lighting against a green background.
When I wrote the image-processing routine in pure ActionScript, it ran like a slideshow. The Pixel Bender version on a multi-core system runs about 30x faster.
Pixel Bender kernel: GreenScreenEffect.pbk
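The per-pixel rule the kernel implements can be sketched in Java (a reconstruction of the idea described above, not the kernel itself; the `lo`/`hi` threshold names stand in for the slider-controlled falloff curve and are my own):

```java
// Sketch: graduated alpha from the color distance between the stored base
// image and the incoming video frame, instead of a hard on/off key.
public class ChromaKey {
    // Euclidean distance in RGB space between two 0xAARRGGBB pixels.
    static double colorDistance(int a, int b) {
        int dr = ((a >> 16) & 0xFF) - ((b >> 16) & 0xFF);
        int dg = ((a >> 8) & 0xFF) - ((b >> 8) & 0xFF);
        int db = (a & 0xFF) - (b & 0xFF);
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }

    // lo/hi bound the falloff: below lo the pixel is keyed out entirely,
    // above hi it is fully opaque, and in between the alpha ramps linearly.
    static double alpha(int basePixel, int videoPixel, double lo, double hi) {
        double d = colorDistance(basePixel, videoPixel);
        if (d <= lo) return 0.0;
        if (d >= hi) return 1.0;
        return (d - lo) / (hi - lo);
    }
}
```

Running this over every pixel per frame is exactly the work that crawled in ActionScript but parallelizes trivially in Pixel Bender, since each output pixel depends only on the two corresponding input pixels.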
09 Feb 2009
Click to run demo
What would it look like if you took a webcam or camera, captured several frames of the same scene in quick succession, and then essentially averaged them out and composited them into one image? My guess was that the resulting image would retain most of its sharpness but lose most of the 'noise'.
Read more
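The averaging itself is simple enough to sketch in a few lines of Java (illustrative only; the demo's actual capture and compositing code isn't shown here):

```java
// Sketch: average N frames of the same scene per channel. Random sensor noise
// averages toward its mean while the static scene content stays put, so the
// result keeps its sharpness but loses most of the noise.
public class FrameAverage {
    // frames: N arrays of 0xAARRGGBB pixels, all the same length.
    static int[] average(int[][] frames) {
        int n = frames.length, len = frames[0].length;
        int[] out = new int[len];
        for (int i = 0; i < len; i++) {
            int r = 0, g = 0, b = 0;
            for (int[] f : frames) {
                r += (f[i] >> 16) & 0xFF;
                g += (f[i] >> 8) & 0xFF;
                b += f[i] & 0xFF;
            }
            out[i] = 0xFF000000 | ((r / n) << 16) | ((g / n) << 8) | (b / n);
        }
        return out;
    }
}
```

The key assumption is that the camera and scene hold still between frames; any motion shows up as ghosting rather than noise reduction.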