Introduction
Humans explore objects in many steps. Typically, we first
visually scan the environment with our eyes to locate the position of the
object. We then touch the object to feel its general shape. Finally, a
more careful manual exploration is made to investigate the surface and
material characteristics of the object. Manual sensing of the shape, softness,
and texture of an object occurs through tactile and kinesthetic sensory
systems that respond to the spatial and temporal distribution of forces on
the hand. Recent advances in virtual reality and robotics enable the human
tactual system to be stimulated in a controlled manner through force-feedback
devices, also known as haptic interfaces. A haptic interface is a device
that enables manual interaction with virtual environments or teleoperated
remote systems. Such devices are employed for tasks that are usually performed
using hands in the real world (Srinivasan, 1995). Force-reflecting haptic
devices generate computer-controlled forces to convey to the user a sense
of the natural feel of the virtual environment and the objects within it. In this
regard, haptic rendering can be defined as the process of displaying computer-controlled
forces on the user to make him or her sense the tactual feel of virtual objects.
In this tutorial, haptic rendering of 3D polygonal objects
is discussed. The techniques described here will enable you to understand
the principles of
(1) haptic rendering,
(2) haptic rendering of 3D primitives such as the cube, sphere, and cone, and
(3) haptic rendering of 3D polygonal objects,
with some implementation details and examples.
Applications
-
Medicine: surgical simulators for medical training; manipulating
micro and macro robots for minimally invasive surgery; remote diagnosis
for telemedicine; aids for the disabled such as haptic interfaces for the
blind.
-
Entertainment: video games and simulators that enable the
user to feel and manipulate virtual solids, fluids, tools, and avatars.
-
Education: giving students the feel of phenomena at nano,
macro, or astronomical scales; “what if” scenarios for non-terrestrial
physics; experiencing complex data sets.
-
Industry: integration of haptics into CAD systems such that
a designer can freely manipulate the mechanical components of an assembly
in an immersive environment.
-
Graphic Arts: virtual art exhibits, concert rooms, and museums
in which the user can log in remotely to play the musical instruments, and
to touch and feel the haptic attributes of the displays; individual or
cooperative virtual sculpting across the internet.
See the references of this tutorial for more on these applications.
General Principles
The basic process of haptic rendering of objects in virtual
environments with a force-feedback device is shown at http://touchlab.mit.edu/people/cagatay/hapticVR.html.
As the user manipulates the generic probe of the haptic device, the new
position and orientation of the probe are sensed by the encoders. Collisions
between the simulated stylus and virtual objects are detected. If the probe
collides with an object, the mechanistic model calculates the reaction
force based on the penetration depth of the probe into the virtual object.
The calculated force vectors may then be modified by appropriately mapping
them over the object surface to take into account the surface details.
The modified force vectors are fed back to the user through the haptic
device. Hence the haptic loop includes at least the following function
calls:
…
get_position(HIP);       // position and/or orientation of the end-effector
calculate_force(force);  // user-defined function to calculate forces
send_force(force);       // calculate joint torques and reflect the force back to the user
…
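To make the structure of this loop concrete, here is a minimal sketch of a haptic
servo loop built around these three calls. The Vector type, the infinite loop, and
the device functions get_position() and send_force() are assumptions for
illustration; a real device API will differ.

typedef float Vector[3];

extern void get_position(Vector hip);     // assumed: read the encoders
extern void send_force(const Vector f);   // assumed: command joint torques

Vector HIP;                               // haptic interface point

void calculate_force(Vector &force);      // user-defined (examples below)

void servo_loop()
{
    Vector force;
    while (1)                             // runs at ~1 kHz in practice
    {
        get_position(HIP);                // sense the probe position
        calculate_force(force);           // collision detection + response
        send_force(force);                // reflect the force to the user
    }
}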
Several haptic rendering techniques have been developed recently
to render 3D objects. Just as in computer graphics, the representation
of 3D objects can be either surface-based or volume-based for the purposes
of computer haptics. While the surface models are based on parametric or
polygonal representations, volumetric models are made of voxels. An alternative
way of distinguishing the existing haptic rendering techniques is based
on the type of haptic interaction: point-based or ray-based.
In point-based haptic interactions, only the end point
of the haptic device, also known as the end-effector point or haptic interface
point (HIP), interacts with objects. Since the virtual surfaces have finite
stiffnesses, the end point of the haptic device penetrates into the object
after collision. Each time the user moves the generic probe of the haptic
device, the collision detection algorithms check to see if the end point
is inside the virtual object. If so, the depth of indentation is calculated
as the distance between the current HIP and a surface point, such as the
nearest surface point. In ray-based haptic interactions, the generic probe
of the haptic device is modeled as a finite ray whose orientation is taken
into account, and the collisions are checked between the ray and the objects.
In both point- and ray-based force reflection, the reaction
force (F) is usually calculated using the linear spring law, F = kx, where
k is the stiffness of the object and x is the depth of indentation. For
frictionless interactions, the reaction force (F) is normal to the polygonal
face that the generic probe collides with. For rigid objects, the value
of k is set as high as possible, limited by the contact instabilities of
the haptic device. Studies show that the addition of a damping term to the
interaction dynamics improves the stability of the system and the haptic
perception of “rigidity”. Other forces such as those that arise from surface
friction and roughness also need to be displayed to improve the realism
of haptic interactions.
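As a concrete illustration of this spring law with a damping term, here is a
minimal sketch; the function name, the gains k and b, and the convention that v
is the probe's velocity component along the surface normal are all assumptions
for illustration.

void reaction_force(const float n[3],  // unit outward surface normal
                    float x,           // depth of indentation
                    float v,           // normal velocity (for damping)
                    float force[3])
{
    const float k = 0.5f;   // stiffness (device-dependent, illustrative)
    const float b = 0.05f;  // damping term improves perceived rigidity
    float magnitude = k * x - b * v;         // F = kx, plus damping
    if (magnitude < 0.0f) magnitude = 0.0f;  // never pull into the surface
    for (int i = 0; i < 3; i++)
        force[i] = magnitude * n[i];  // frictionless: force along the normal
}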
Two important issues any haptic interaction paradigm has
to specify are (1) collision detection: the detection of collisions
between the end-point of the generic probe and the objects in the scene
and (2) collision response: the response to the detection of collision
in terms of how the forces reflected to the user are computed. A good collision
detection algorithm not only reduces the computational time, but also helps
in correctly displaying interaction forces to the human operator to make
the haptic sensing of virtual objects more realistic. Whereas the collision
detection procedures for haptics and graphics are the same, the collision
response in haptic rendering differs from that in computer graphics.
In computer graphics, collision detection (Cohen et al., 1995, Lin, 1993,
Gottschalk et al., 1996, Smith et al., 1995, Hubbard, 1995) techniques
are used to detect if two objects overlap. When the collision is detected,
objects are separated from each other using collision response methods
(Moore and Wilhelms, 1988, Baraff, 1994, Mirtich, 1995 and 1996). In general,
the purpose of the collision detection and response in graphics is to avoid
the overlap between objects and to simulate the behavior of objects following
the overlap.
In contrast, the purpose of collision detection in haptic
rendering is not only to check collisions between objects, but more frequently
to check collisions between the probe and virtual objects to compute the
interaction forces. When simulating interactions between the probe and
the objects, the reflected force typically increases with penetration distance
such that it resists the probe from further penetrating the object. Thus,
the end point of the probe will always be inside the object during the
collision response phase. This is a major difference between the collision
response techniques developed for haptic and graphic interactions. In graphics,
typically the existence of the overlap needs to be detected, followed by
collision response algorithms. In haptics, the main goal is to compute
the reaction force. Hence the depth of penetration and how the penetration
evolves are important.
Basics of Collision Detection: Bounding Boxes and Binary Search
Trees
This section describes the concepts related to the bounding
box computations and binary search techniques which are useful in collision
detection computations. The techniques described here enable the user
to create
(1) an axis-aligned bounding box (AABB) of an object,
(2) an oriented bounding box (OBB) of an object, and
(3) binary search trees.
(1) Axis Aligned Bounding Box (see Figure)
Consider a situation in which you need to check whether your
point (e.g., the Haptic Interface Point, HIP) is inside an object at each
iteration of your servo loop. When you interact with virtual objects
through a haptic interface, you always need this check: the device becomes
alive (reflects forces to the user) only when the end point of the probe
is inside the object.
Let’s say we need to know if the point P(x, y) in Figure
1b will be inside the boundaries of Object1 at the next iteration of the servo
loop. Checking the point P against all the coordinates of all the objects in
the scene would be computationally very expensive. One remedy to this problem
is to first compute the Axis-Aligned Bounding Box (AABB) of the object and then
check if the point P is inside this box before proceeding further. The AABB of
an object can easily be constructed from the minimum and maximum global
coordinates of the object along the X, Y, and Z axes.
This box fully encloses the object (see Figure).
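A minimal sketch of this construction is given below, assuming the object's
vertices are stored as an n-by-3 array; the names are illustrative.

void compute_aabb(const float v[][3], int n,
                  float min_pt[3], float max_pt[3])
{
    int i, j;
    for (j = 0; j < 3; j++) {
        min_pt[j] = v[0][j];              // start from the first vertex
        max_pt[j] = v[0][j];
    }
    for (i = 1; i < n; i++)
        for (j = 0; j < 3; j++) {
            if (v[i][j] < min_pt[j]) min_pt[j] = v[i][j];  // running min
            if (v[i][j] > max_pt[j]) max_pt[j] = v[i][j];  // running max
        }
}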
Figure. (a) Bounding box computation in a 2D coordinate
system. (b) Is point P(x, y) inside Object1 or Object2?
Then, the bounding box check for collision detection purposes
will be as simple as adding the following lines to your code (the extension
of this algorithm to 3D objects is trivial):

Collision = FALSE;
for (j = 0; j < number_of_objects; j++)
{
    if (x < X_max[j] && x > X_min[j] &&
        y < Y_max[j] && y > Y_min[j])
    {
        Collision = TRUE;
        CollidedObject = j;
        break;   // stop at the first enclosing box
    }
}

Note that Collision is initialized once before the loop; resetting it to FALSE
inside the loop would erase a collision detected with an earlier object.
(2) Oriented Bounding Box (see Figure)
Although the AABB is easy to construct and speeds up collision
detection, it is certainly not the optimal solution. There is a better
alternative: the Oriented Bounding Box (OBB). The difference between the AABB
and OBB approaches is depicted in Figure 2, which shows that the OBB encloses
the object more tightly.
Figure (a) Axis Aligned Bounding Box (AABB)
(b) Oriented Bounding Box (OBB)
How can we find the OBB of a polyhedron that is made of
triangles? The solution is given by Gottschalk et al. (1996).
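The essence of their method is to align the box axes with the eigenvectors of a
covariance matrix computed from the model's geometry. The sketch below is a
simplified vertex-based variant (Gottschalk et al. compute the statistics over
the triangle surfaces), and jacobi_eigenvectors() stands for a 3x3 symmetric
eigen-solver that you would supply.

// Hypothetical 3x3 symmetric eigen-solver (e.g., Jacobi rotations):
// fills the columns of 'axes' with the eigenvectors of C.
void jacobi_eigenvectors(const float C[3][3], float axes[3][3]);

// Fit OBB axes to the vertex covariance (simplified from Gottschalk et al.).
void compute_obb_axes(const float v[][3], int n, float axes[3][3])
{
    float mean[3] = {0.0f, 0.0f, 0.0f};
    float C[3][3] = {{0.0f}};
    int i, j, k;

    for (i = 0; i < n; i++)               // mean of the vertices
        for (j = 0; j < 3; j++)
            mean[j] += v[i][j] / n;

    for (i = 0; i < n; i++)               // covariance of the vertices
        for (j = 0; j < 3; j++)
            for (k = 0; k < 3; k++)
                C[j][k] += (v[i][j] - mean[j]) * (v[i][k] - mean[k]) / n;

    jacobi_eigenvectors(C, axes);         // box axes = eigenvectors of C
    // Projecting the vertices onto the three axes then gives the box's
    // center and half-widths from the min/max extents along each axis.
}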
(3) Binary Space Partitioning (see Figure)
When you check collisions with multiple objects, or with a single
object made of multiple triangles, you need to consider techniques that enable
you to access the geometric database quickly. Figure 3a depicts the concept
of "binary space partitioning" in 2D space. Using the Binary Space Partitioning
(BSP) technique, you can divide the space into smaller regions. Since each
region is associated with an object or a group of objects, you can easily
eliminate unnecessary checks as you march along the branches of the
binary search tree from top to bottom. Partitioning of the space can be achieved
using the minimum and maximum coordinates of the objects (see the Y-axis
in Figure (a): the minimum and maximum y coordinates of the objects are projected
onto the Y-axis to sort the objects and to partition the space along the
Y axis).
Figure (a) Binary space partitioning
(b) Binary search tree
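A minimal sketch of such a partition along one axis is given below; the node
layout, the ObjectList container, and the append() helper are assumptions for
illustration.

struct ObjectList;   // assumed container of object indices
void append(ObjectList *out, const ObjectList *src);  // assumed helper

// One node of a binary partition of space along a chosen axis.
// Objects straddling the split plane are kept at the node itself.
struct BSPNode
{
    float split;               // partitioning coordinate (e.g., along Y)
    ObjectList *here;          // objects straddling the split plane
    BSPNode *left, *right;     // sub-regions below and above the split
};

// Collect the objects that could contain a point with coordinate y;
// each descent halves the region, skipping the other branch entirely.
void collect_candidates(BSPNode *node, float y, ObjectList *out)
{
    while (node != 0)
    {
        append(out, node->here);   // straddlers are always candidates
        node = (y < node->split) ? node->left : node->right;
    }
}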
Haptic Rendering of 3D Primitives
Initially, haptic rendering methods focused on displaying
simple, rigid, and frictionless 3D objects such as the cube, cylinder, and
sphere. The depth of indentation is taken to be the distance between the
HIP and the nearest surface point. For certain objects, it is quite easy
to compute the depth of penetration and the direction of the force vector.
Here is a simple function for calculating the forces needed to render a sphere
of radius R = 20 in virtual environments:
Figure. Rendering of a sphere.
// Requires <math.h> for sqrt().
void calculate_force (Vector &force)
{
    float X, Y, Z, distance;
    float R = 20.0;                     // sphere radius
    float k = 1.0;                      // surface stiffness (F = kx)

    // HIP: tip point of the end-effector
    X = HIP[0]; Y = HIP[1]; Z = HIP[2];
    distance = sqrt(X*X + Y*Y + Z*Z);

    if (distance < R && distance > 0.0) // HIP has penetrated the sphere
    {
        // depth of indentation (R - distance) along the outward normal
        force[0] = k * X/distance * (R - distance);
        force[1] = k * Y/distance * (R - distance);
        force[2] = k * Z/distance * (R - distance);
    }
    else
    {
        force[0] = force[1] = force[2] = 0.0;  // no contact, no force
    }
}
Computations for determining the force vector sometimes involve
dividing the object into sub-spaces associated with particular portions
of the object surface (see Figure 3). If the HIP penetrates a region that
is shared by multiple sub-spaces, then the superposition of surface normals
is used to calculate the direction of the resultant force vector.
Figure. Rendering of a cube: The cube is divided into
6 equal volumes to determine the direction of the force vector.
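For instance, here is a minimal sketch for an axis-aligned cube of half-width W
centered at the origin, with Vector and HIP as in the sphere example; the
sub-space containing the HIP is identified by the face with the smallest
penetration depth, and the names and gains are illustrative.

// Requires <math.h> for fabs().
void calculate_cube_force(Vector &force)
{
    float W = 20.0;   // half-width of the cube
    float k = 1.0;    // surface stiffness
    float p[3] = { HIP[0], HIP[1], HIP[2] };
    float depth, min_depth = 2.0 * W;
    int i, axis = 0;

    force[0] = force[1] = force[2] = 0.0;
    for (i = 0; i < 3; i++)
    {
        if (p[i] < -W || p[i] > W)    // outside the cube: no force
            return;
        depth = W - fabs(p[i]);       // distance to the nearest face
        if (depth < min_depth)        // pick the closest face; in shared
        {                             // regions one could superpose the
            min_depth = depth;        // face normals as described above
            axis = i;
        }
    }
    force[axis] = k * min_depth * (p[axis] >= 0.0 ? 1.0 : -1.0);
}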
A virtual scene can be constructed from multiple primitives
whose sub-regions can be defined in advance. However, there are several problems
with this haptic rendering technique: first, it is not easy either to divide
an object into sub-spaces intuitively or to construct virtual environments
from primitive objects; and second, the superposition of force vectors
breaks down for thin or complex-shaped objects.
Point-Based Haptic Interactions for Rendering 3D Polygonal Objects
When exploring virtual environments, we interact with
objects through the end point of the probe, known as the Haptic Interface
Point (HIP). At the same time, we also consider another point, called the
Ideal Haptic Interface Point (IHIP) (similar to the so-called god-object
point of Zilles and Salisbury, 1995), to follow the trace of the HIP. The
HIP is not constrained and can therefore penetrate the object, but we
constrain the IHIP so that it never penetrates any object. When the HIP is
outside the virtual object, the IHIP is coincident with the HIP. If the HIP
penetrates a virtual object, the IHIP stays on the surface of the object.

While the HIP is outside the object, we keep track of its path and check if
this path penetrates any polygon. For this purpose, we construct a line
segment between the previous and current coordinates of the HIP and detect
collisions between this line segment and the polygons of the 3D object
(since the servo rate is around 1 kHz and human motions are relatively slow,
this line segment is very short). To achieve fast detection of collisions
between the line segment and the polygonal objects, we utilize the
"hierarchical bounding boxes" approach (Gottschalk et al., 1996). In this
approach, the polygons of the 3D object are hierarchically partitioned a
priori until each polygon is enclosed by its own bounding box. The detection
of collisions occurs in three consecutive stages: first, collisions are
detected between the line segment and the bounding box of the object; if the
line segment is inside the bounding box, collisions are then checked with
the partitioned bounding boxes along the branches of the hierarchical tree
by marching from the top; finally, we detect the collision between the line
segment and the polygon in the last bounding box at the lowest level.

If the line segment penetrates a polygon, we set this polygon as the
contacted geometric primitive. The IHIP is constrained to stay on the
surface of this polygon: the nearest point on this polygon to the current
HIP is set as the IHIP, and the distance from the IHIP to the current HIP is
set as the depth of penetration. Although the first contacted geometric
primitive is always a polygon and the IHIP is on the surface of this
polygon, the contacted primitive can easily become an edge or a vertex in
subsequent iterations (i.e., the IHIP can be constrained to stay on an edge
or a vertex of a polygon as well as on its surface). In the next iteration,
we calculate the nearest distances from the current HIP to the contacted
geometric primitive and its neighboring primitives. For example, if the
contacted primitive is a polygon, then we check the distances from the
current HIP to the neighboring edges and vertices. This local search
approach significantly reduces the number of computations. Then, we set the
primitive that has the shortest distance to the current HIP as the new
contacted geometric primitive and move the IHIP to the point on this
primitive that is nearest to the current HIP (see Figure 4). The ensuing
interactions are iterated following the steps of this rule-based algorithm.
Figure. Haptic interactions between the end point
of the probe and the 3D objects in VEs: before the collision occurs, the HIP
is outside the object surface and identical to the IHIP (see HIPt-3, HIPt-2,
HIPt-1). When the HIP penetrates into the object at time = t, the IHIP is
constrained to stay on the surface. At time = t+1, the HIP moves to a new
location (HIPt+1), and the new location of the IHIP is determined from the
current HIP and its neighboring primitives based on the nearest-distance criterion.
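The line-segment test at the heart of the first stage can be sketched as
follows; this is a standard segment-triangle intersection routine (in the
style of Moller and Trumbore), with small helper types and functions that are
assumptions for illustration.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    return r;
}
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Does the HIP's path from p0 to p1 cross the triangle (v0, v1, v2)? */
int segment_hits_triangle(Vec3 p0, Vec3 p1, Vec3 v0, Vec3 v1, Vec3 v2)
{
    Vec3 d = sub(p1, p0), e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 h = cross(d, e2);
    float a = dot(e1, h);
    if (fabsf(a) < 1e-8f) return 0;       /* segment parallel to triangle */
    float f = 1.0f / a;
    Vec3 s = sub(p0, v0);
    float u = f * dot(s, h);
    if (u < 0.0f || u > 1.0f) return 0;   /* outside the triangle */
    Vec3 q = cross(s, e1);
    float v = f * dot(d, q);
    if (v < 0.0f || u + v > 1.0f) return 0;
    float t = f * dot(e2, q);
    return (t >= 0.0f && t <= 1.0f);      /* hit within the segment */
}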
At each iteration, one also needs to check if the current
HIP is still inside the virtual object. For this purpose, we construct
a vector from the current HIP to the IHIP. If the dot product of this vector
and the normal of the contacted primitive is negative, the current HIP
is no longer inside the object and there is no penetration any more. If
the dot product is positive, then the current HIP is still inside the object
and the magnitude of the vector from the current HIP to the IHIP can be
used as the depth of penetration for the force computations. The pseudo-code
for our haptic interaction algorithm is given below (Ho, et al., 1999).
if (collision == FALSE)
{
    if (the path of the HIP penetrates a polygon)
    {
        set the polygon as the contacted geometric primitive;
        move the IHIP to the point on this polygon that is closest to the current HIP;
        collision = TRUE;
    }
}
else
{
    contacted geometric primitive = the contacted geometric primitive in the previous loop;
    primitive1 = contacted geometric primitive;
    distance1 = closest distance from the current HIP to primitive1;
    do
    {
        primitive1 = contacted geometric primitive;
        for (i = 1 : number of neighboring primitives of primitive1)
        {
            primitive2 = the ith neighboring primitive of primitive1;
            distance2 = closest distance from the current HIP to primitive2;
            if (distance2 < distance1)
            {
                contacted geometric primitive = primitive2;
                distance1 = distance2;
            }
        }
    } while (primitive1 != contacted geometric primitive);

    move the IHIP to the point on the contacted geometric primitive that is nearest to the current HIP;
    vector1 = vector from the current HIP to the current IHIP;
    normal1 = normal of the contacted geometric primitive;
    if (dot product(vector1, normal1) < 0)
        collision = FALSE;
    else
    {
        collision = TRUE;
        penetration vector = vector1;
        penetration depth = magnitude of penetration vector;
    }
}
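The contact-state update at the end of this pseudo-code can be written out as
follows, assuming a simple Vec3 type as in the earlier segment-triangle sketch;
the function and type names are illustrative.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns 1 (still in contact) or 0 (the HIP has exited the object);
   writes the penetration depth when in contact. */
int update_contact(Vec3 hip, Vec3 ihip, Vec3 normal, float *depth)
{
    Vec3 v;                          /* vector1: from current HIP to IHIP */
    v.x = ihip.x - hip.x;
    v.y = ihip.y - hip.y;
    v.z = ihip.z - hip.z;

    if (dot(v, normal) < 0.0f)       /* opposes the outward normal */
        return 0;                    /* collision = FALSE */

    *depth = (float)sqrt(dot(v, v)); /* penetration depth for the force law */
    return 1;                        /* collision = TRUE */
}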
Using this algorithm, we can haptically render both
convex and concave objects in an efficient manner. The computational time
for detecting the first collision will be on the order of log(N) for a
single object, where N is the number of polygons (Gottschalk et al., 1996).
After the first collision, we only consider the distances from the current
HIP to the contacted primitive and its neighbors to determine the new location
of the IHIP, so the servo rate will depend only on the number of the contacted
geometric primitive’s neighbors. Because the number of neighbors of each
primitive is about the same for a homogeneously triangulated polyhedron,
this local search does not grow with N; hence, after the first collision,
the servo rate is fast and independent of the number of polygons.
Implementation Details: Integration of Vision and Touch
To have a satisfying experience when interacting with a
VE, the graphic and haptic update rates need to be maintained at around
30 Hz and 1000 Hz, respectively. To develop effective multimodal
VEs and make optimal use of the CPU, you need to experiment
with multi-threading and multi-processing techniques to synchronize the
visual and haptic servo loops. The conceptual difference between
the multi-threading and multi-processing structures is illustrated
in the figure below.
Figure. Software architectures for (a) multi-threading
and (b) multi-processing. In the multi-threading structure, both the haptic
and the graphic loops share the same database, so the synchronization of the
two loops in accessing the data is important. In the multi-processing
structure, the haptic and the graphic loops have their own copies of the
database, which are not shared; the two processes can run on the same machine
or on different machines, and the communication protocol between the two
loops that ensures consistent updates of both the graphics and haptics
databases is important.
Our experience is that both multi-threading and multi-processing
techniques are quite useful in achieving high graphic and haptic rendering
rates and stable haptic interactions. The choice between the two structures
should depend on the characteristics of the application. Although creating a
separate process for each modality requires more programming effort than
multi-threading, it enables the user to display the graphics and/or haptics
on any desired machine(s). If large amounts of data need to be transferred
back and forth frequently between the loops, we recommend multi-threading
techniques implemented with timer callbacks for synchronizing the haptic and
visual servo loops.
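As an illustration, here is a minimal multi-threading sketch using standard
C++ (std::thread, std::mutex); the Scene type, the loop bodies, the frame
count, and the timing are placeholders, not a complete implementation.

#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct Scene { /* shared object database */ };

Scene scene;
std::mutex scene_mutex;              // guards the shared database
std::atomic<bool> running(true);

void haptic_loop()                   // target: ~1000 Hz
{
    while (running)
    {
        {
            std::lock_guard<std::mutex> lock(scene_mutex);
            // read the probe position, detect collisions, compute forces
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

void graphic_loop()                  // target: ~30 Hz
{
    for (int frame = 0; frame < 300 && running; ++frame)
    {
        {
            std::lock_guard<std::mutex> lock(scene_mutex);
            // copy only the state needed for this frame
        }
        // render the frame outside the lock to keep the haptic loop fast
        std::this_thread::sleep_for(std::chrono::milliseconds(33));
    }
    running = false;                 // signal the haptic thread to stop
}

int main()
{
    std::thread haptics(haptic_loop);
    graphic_loop();                  // graphics runs in the main thread
    haptics.join();
    return 0;
}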
References
-
Adachi, Y., Kumano, T., Ogino, K. (1995). Intermediate Representation
for Stiff Virtual Objects. Proc. IEEE Virtual Reality Annual Intl. Symposium
’95 (Research Triangle Park, N. Carolina, March 11-15), 203-210.
-
Baraff, D. (1994). Fast Contact Force Computation for Nonpenetrating
Rigid Bodies. ACM (Proceedings of SIGGRAPH), 28, 23-34.
-
Basdogan, C., Ho, C., and Srinivasan, M.A. (1997). A Ray-Based
Haptic Rendering Technique for Displaying Shape and Texture of 3D Objects
in Virtual Environments. ASME Winter Annual Meeting, Dallas, TX, 61, 77-84.
-
Basdogan C., Ho, C., Srinivasan, M.A., Small, S., Dawson,
S. (1998). Force interactions in laparoscopic simulations: haptic rendering
of soft tissues. Proceedings of the Medicine Meets Virtual Reality VI Conference,
San Diego, CA, January 19-22, 385-391.
-
Bier, E.A., Sloan K.R. (1986). Two-Part Texture Mapping.
IEEE Computer Graphics and Applications, 40-53.
-
Blinn, J.F. (1978). Simulation of Wrinkled Surfaces. ACM
(Proceedings of SIGGRAPH), 12(3), 286-292.
-
Burdea, G. (1996). Force and Touch Feedback for Virtual Reality.
John Wiley and Sons, Inc., NY.
-
Chen, J., DiMattia, C., Falvo, M., Thiansathaporn P., Superfine,
R., Taylor, R.M. (1997). Sticking to the Point: A Friction and Adhesion
Model for Simulated Surfaces. Proceedings of the Sixth Annual Symposium
on Haptic Interfaces and Virtual Environment and Teleoperator Systems,
Dallas, TX, 167-171.
-
Cohen, J., Lin, M., Manocha, D., Ponamgi, K. (1995). I-COLLIDE:
An Interactive and Exact Collision Detection System for Large-Scaled Environments.
Proceedings of ACM Interactive 3D Graphics Conference, 189-196.
-
Ebert, D.S., Musgrave F.K., Peachey D., Perlin K., Worley
S. (1994). Texturing and Modeling. AP Professional, Cambridge, MA.
-
Fleischer, K. W., Laidlaw, D. H., Currin, B. L., and Barr,
A. H. (1995). Cellular Texture Generation. ACM (Proceedings of SIGGRAPH),
August, 239-248.
-
Foley, J.D., van Dam, A., Feiner, S. K., Hughes, J.F. (1995).
Computer Graphics: Principles and Practice. Addison-Wesley.
-
Fritz and Barner (1996). Haptic Scientific Visualization.
Proceedings of the First PHANToM Users Group Workshop, Eds: Salisbury J.K.
and Srinivasan M.A. MIT-AI TR-1596 and RLE TR-612.
-
Gottschalk, S., Lin, M., and Manocha, D. (1996). OBB-Tree:
A hierarchical Structure for Rapid Interference Detection. ACM (Proceedings
of SIGGRAPH), August.
-
Green, D. F. and Salisbury, J. K. (1997). Texture Sensing
and Simulation Using the PHANToM: Towards Remote Sensing of Soil Properties.
Proceedings of the Second PHANToM Users Group Workshop, Oct. 19-22.
-
Ho, C., Basdogan, C., Srinivasan M.A. (1997). Haptic Rendering:
Point- and Ray-Based Interactions. Proceedings of the Second PHANToM Users
Group Workshop, Dedham, MA, October 20-21.
-
Hubbard, P. (1995). Collision Detection for Interactive Graphics
Applications. IEEE Transactions on Visualization and Computer Graphics,
1(3), 219-230.
-
Lin, M. (1993). Efficient Collision Detection for Animation
and Robotics. Ph.D. thesis, University of California, Berkeley.
-
Mandelbrot, B. (1982). The Fractal Geometry of Nature. W.H.
Freeman.
-
Mark, W., Randolph S., Finch, M., Van Verth, J., Taylor,
R.M. (1996). Adding Force Feedback to Graphics Systems: Issues and Solutions.
Computer Graphics: Proceedings of SIGGRAPH’96, August, 447-452.
-
Massie, T. H. (1993). Initial Haptic Explorations with the
Phantom: Virtual Touch Through Point Interaction. MS thesis, Massachusetts
Institute of Technology.
-
Massie T.H., Salisbury J.K. (1994). The PHANToM Haptic Interface:
A Device for Probing Virtual Objects. Proceedings of the ASME Dynamic Systems
and Control Division, 55(1), 295-301.
-
Max, N.L., Becker, B.G. (1994). Bump Shading for Volume Textures.
IEEE Computer Graphics and App., 4, 18-20.
-
Minsky, M. D. R. (1995). Computational Haptics: The Sandpaper
System for Synthesizing Texture for a Force-Feedback Display. Ph.D. thesis,
Massachusetts Institute of Technology.
-
Minsky, M., Ouh-young, M., Steele, O., Brooks, F.P., and Behensky,
M. (1990). Feeling and seeing: issues in force display. Proceedings of
the Symposium on 3D Real-Time Interactive Graphics, 24, 235-243.
-
Mirtich, B. (1996). Impulse-based Dynamic Simulation of Rigid
Body Systems. Ph.D. thesis, University of California, Berkeley.
-
Mirtich, B., Canny, J. (1995). Impulse-based Simulation of
Rigid Bodies. Proceedings of Symposium on Interactive 3D Graphics, April.
-
Moore, M., Wilhelms, J. (1988). Collision Detection and Response
for Computer Animation. ACM (Proceedings of SIGGRAPH), 22(4), 289-298.
-
Morgenbesser, H.B., Srinivasan, M.A. (1996). Force Shading
for Haptic Shape Perception. Proceedings of the ASME Dynamic Systems and
Control Division, 58, 407-412.
-
Perlin, K. (1985). An Image Synthesizer. ACM SIGGRAPH, 19(3),
287-296.
-
Ruspini, D.C., Kolarov, K., Khatib O. (1996). Robust Haptic
Display of Graphical Environments. Proceedings of the First PHANToM Users
Group Workshop, Eds: Salisbury J.K. and Srinivasan M.A. MIT-AI TR-1596
and RLE TR-612.
-
Ruspini, D.C., Kolarov, K., Khatib O. (1997). The Haptic
Display of Complex Graphical Environments. ACM (Proceedings of SIGGRAPH),
July, 345-352.
-
Salcudean, S. E. and Vlaar, T. D. (1994). On the Emulation
of Stiff Walls and Static Friction with a Magnetically Levitated Input/Output
Device. ASME DSC, 55(1), 303 - 309.
-
Salisbury, J.K., Brock, D., Massie, T., Swarup, N., Zilles
C. (1995). Haptic Rendering: Programming touch interaction with virtual
objects. Proceedings of the ACM Symposium on Interactive 3D Graphics, Monterey,
California.
-
Salisbury, J. K., Srinivasan, M. A. (1997). Phantom-Based
Haptic Interaction with Virtual Objects. IEEE Computer Graphics and Applications,
17(5).
-
Siira, J., Pai D. K. (1996). Haptic Texturing - A Stochastic
Approach. Proceedings of the IEEE International Conference on Robotics
and Automation, Minneapolis, Minnesota, 557-562.
-
Smith, A., Kitamura, Y., Takemura, H., and Kishino, F. (1995).
A Simple and Efficient Method for Accurate Collision Detection Among Deformable
Polyhedral Objects in Arbitrary Motion. IEEE Virtual Reality Annual International
Symposium, 136-145.
-
Srinivasan, M.A. (1995). Haptic Interfaces. In Virtual Reality:
Scientific and Technical Challenges, Eds: N. Durlach and A. S. Mavor, National
Academy Press, 161-187.
-
Srinivasan, M.A., and Basdogan, C. (1997). Haptics in Virtual
Environments: Taxonomy, Research Status, and Challenges. Computers and
Graphics, 21(4), 393 – 404.
-
Turk, G. (1991). Generating Textures on Arbitrary Surfaces
Using Reaction-Diffusion. ACM (Proceedings of SIGGRAPH), 25(4), 289-298.
-
Watt, A., Watt, M. (1992). Advanced Animation and Rendering
Techniques. Addison-Wesley, NY.
-
Wijk, J. J. V. (1991). Spot Noise. ACM (Proceedings of
SIGGRAPH), 25(4), 309-318.
-
Witkin, A., and Kass, M. (1991). Reaction-Diffusion Textures.
ACM (Proceedings of SIGGRAPH), 25(4), 299-308.
-
Worley, S. (1996). A Cellular Texture Basis Function. ACM
(Proceedings of SIGGRAPH), August, 291-294.
-
Zilles, C.B., and Salisbury, J.K. (1995). A Constraint-Based
God-Object Method for Haptic Display. IEEE International Conference on
Intelligent Robots and System, Human Robot Interaction, and Co-operative
Robots, IROS, 3, 146-151.