3.1 The Sense of Touch and its Biology
Examples of haptic systems and the importance of the haptic sense have been discussed
in the preceding chapters without giving an exact idea of how haptic perception
actually works. For the design of haptic systems it is vital to have a basic understanding
of the characteristic biological parameters, as only these help to identify the
relevant technical requirements. This chapter introduces the most important terminology
and the basics for understanding the neurobiology of haptic perception. Please
note that research on haptic perception is far from complete. Consequently, this
short presentation of complex biological interrelations is a well-founded working
hypothesis which will be extended or refuted by further research. In order to perceive
information from our surroundings, humans are equipped with five senses: hearing,
smell, taste, sight and touch. Sensory physiology distinguishes five sensors and
sensory systems [219] that differ from this popular definition. They allow a classification
in a vocabulary borrowed from a technical description of the world:
• Thermal sensors for registering changes of temperature, especially within the
skin,
• Chemical sensors reacting to odorous or gustatory substances,
• Optical sensors reacting to the impact of photons, especially within the cones
and rods of the retina,
• Pain sensors, also named nociceptors, identifying chemical and physical tissue
damage,
• Mechanical sensors for detecting mechanical tensions and strains, e.g. within the
skin or muscles.
The capacities of these senses and their importance for haptic perception are rated differently.
The visual sense registers ≈ 10 Mbit/s, the sense of touch ≈ 1 Mbit/s
and the acoustic sense ≈ 100 kbit/s [18]. The processing of these sensory data
happens within the cerebral cortex, which is structured into functional brain areas. The primary
somatosensory cortex is the physiological location for processing data from the sense of
touch. A visualization of the distribution of body parts on this cortex area
(fig. 3.1) shows a significant portion being devoted to fingers and hand.
Within the sensorimotor system the haptic sense is of the highest importance.
It consists of a group of mechanical sensors detecting force-induced deformations
within tissues of the skin, muscles and joints. As a consequence haptic perception
is the sum of signals from a large number of measurement points distributed over
the human body, consisting of at least six types of sensors which can be divided into
two basic groups: tactile and kinaesthetic sensors (fig. 3.2).
Tactile sensors are located in the outer layers of the skin at exposed positions (e.g.
the fingertips). They react to strains of the skin and are activated proportionally
either to the elongation, to its velocity or to its acceleration. Neurophysiology
distinguishes between four different types of tactile sensors [236, 219]:
• Rapidly adapting or fast-adapting (RA or FA-I) Meissner corpuscles, with velocity-dependent
activation.
• Slowly adapting (SA-I and SA-II) Merkel cells and Ruffini corpuscles, with velocity-dependent
and elongation-proportional activation. They show a lower dynamic
response compared to the Meissner corpuscles.
• Fast-adapting (FA-II) Pacinian corpuscles, with acceleration-proportional activation.
The distribution of sensors varies between different skin areas (fig. 3.3) and is a subject
of current research. For example, [193] puts the existence of Meissner corpuscles
into question, in contrast to established doctrine.
Unlike tactile sensors, kinaesthetic sensors are located mainly within muscles,
joints and tendons. They acquire forces acting on whole extremities only. Their dynamic
requirements are reduced as a result of the mechanical low-pass characteristics
of the extremities (their mass, damping and stiffness). Their requirements on
the relative resolution between the smallest perceivable force and the maximum detectable
force (amplitude dynamics) are comparable to those of the tactile sensors.
Kinaesthetic sensors can be divided into two groups:
• stretch receptors of the muscle spindles: dynamic bag fibres and static bag fibres placed in parallel
to the muscle fibres,
• tendon tension receptors: Golgi tendon organs, in serial orientation to the muscle
fibres.
Summarizing all information about the biological sensors contributing to haptic
perception, it is interesting to see that nature has chosen a design for identifying forces
and vibrations which does not differ significantly from technical solutions to
comparable problems. The comparable technical solutions are, however, older than the
biological understanding of the sense of touch. It therefore seems likely that, given
the physical constraints, only solutions optimized in such a manner are adequate.
Having dealt with these sensors as the first part of the haptic perception chain,
in a next step a model of the neurological processing of haptic information has
to be considered, in order to get a feeling for the complexity of the system and to
outline the components relevant for the design of technical haptic systems.
Figure 3.4 shows a simplified picture of the neuronal subsystems participating
in a task like “grasping a glass of water”. The motivation phase starts with thirst,
e.g. due to the body’s salinity being too high, and with the knowledge of the availability
of a glass of water. As a result the decision is taken to “seize the glass of
water”. Within a programming phase this decision is translated into a definition of movements
for single extremities and body parts. As a subcomponent, each of them has
a position controller which controls the movement. Feedback is given by the motor
sensors within the joints, but also by visual control from a superordinate control loop.
Subordinate to the visual control, a closed-loop circuit with force feedback exists,
enabling the safe and secure holding of the glass based on a maximum force to be
exerted. Alternatively, a feedback loop can be assumed which controls the grasping
force so that the glass does not slip through.
It is remarkable how easily the analogy to technical control systems can be
drawn. Decision phase, programming phase and processing phase are accepted references
for central-neural structures [219]. The interconnection between position
and force controller is a direct result of the dynamic ranges and measurement errors
unique to the components of the closed-loop circuit. The position control loop including
the locomotor system and kinaesthetic sensors shows a dynamic range of
≤ 10 Hz [287, 83]. Additionally, angle positioning and absolute position measurement
without a line of sight show large errors (2◦ to 10◦ depending on the joints
participating [34]). Movements including visual control are much more precise.
Visual perception is able to resolve movements of up to 30 Hz depending
on the illumination level. With the aid of sight a human is able to move to a position
and hold it until immediately before physical contact - which is strictly impossible
with closed eyes. Tactile sensors, on the other hand, show a dynamic range of many
hundred hertz. This capability, combined with high amplitude dynamics, enables
humans to hold even slippery and fragile objects without breaking them.
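To make the nesting of these loops more tangible, the following minimal sketch mimics the described hierarchy: a slow outer position loop (kinaesthetic feedback, ≤ 10 Hz) and a fast inner grip-force loop (tactile feedback, several hundred hertz). All rates, gains and the grasp model are illustrative assumptions, not physiological data.

# Minimal sketch of the nested control loops described above.
# Outer loop: position control at ~10 Hz (kinaesthetic feedback).
# Inner loop: grip-force control at ~500 Hz (tactile feedback).
# All numbers are illustrative assumptions.

POSITION_RATE = 10      # Hz, outer loop
FORCE_RATE = 500        # Hz, inner loop
KP_POS, KP_FORCE = 2.0, 5.0

def position_controller(target_pos, measured_pos):
    """Slow loop: command a hand velocity towards the glass."""
    return KP_POS * (target_pos - measured_pos)

def force_controller(target_grip, measured_grip):
    """Fast loop: adjust grip force so the glass neither slips nor breaks."""
    return KP_FORCE * (target_grip - measured_grip)

def grasp_step(hand_pos, grip_force, target_pos, target_grip):
    # One outer-loop period contains FORCE_RATE / POSITION_RATE inner steps.
    hand_vel = position_controller(target_pos, hand_pos)
    hand_pos += hand_vel / POSITION_RATE
    for _ in range(FORCE_RATE // POSITION_RATE):
        grip_force += force_controller(target_grip, grip_force) / FORCE_RATE
    return hand_pos, grip_force

# One simulated second of reaching (30 cm away) and gripping (2 N target):
pos, grip = 0.0, 0.0
for _ in range(POSITION_RATE):
    pos, grip = grasp_step(pos, grip, target_pos=0.3, target_grip=2.0)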
3.2 Haptic Perception
Knowledge about the performance of haptic perception is essential for the formulation
of requirements as a basis for system design. For each body part there are
different characteristic values, as the haptic sense is not located in a single organ.
Additionally, haptic interaction is always bidirectional, which means - especially
in the case of kinaesthetic interaction - that haptic interaction can be mapped only by
considering positions and angles of body parts as well as forces and torques as mechanical
specifications. Furthermore, haptic perception depends greatly on the dynamics
of the excitation over a broad frequency range. Last but not least the aspect of multimodality
has to be considered: haptically inconspicuous keys and buttons are
judged to be of high quality when they are accompanied by a loud click-sound,
compared to silent buttons with identical haptic properties. As a consequence of
the complexity of effects influencing haptic perception, every characteristic value
taken from the literature has to be seen in the context of the individual test design and
weighted with the accuracy of the experiment’s layout. The characteristic values presented
within this chapter shall be taken as points of orientation only, and may
be modified or even disproved by future experiments.
3.2.1 Psychophysical Concepts
In order to understand the characteristic values of haptic perception, a basic
knowledge of some relevant psychophysical concepts is necessary. The definitions
given here are based on G. A. GESCHEIDER [66], which is recommended to any reader
interested in this subject.
3.2.1.1 Threshold and Difference-Limen
Two fundamental concepts for the analysis of thresholds are distinguished in psychophysics.
On the one hand there is the measurement of thresholds of differential
perception (thresholds of differential sensitivity), on the other hand there are thresholds
of absolute perception (thresholds of absolute sensitivity). All measurement
principles in psychophysics can be categorized according to these two concepts.
Additionally, the analyzed stimuli differ in their dimensions (e.g. spatial, temporal,
spectral).
The absolute threshold (fig. 3.5) of a stimulus describes the value at which
a stimulus φ begins to become perceivable.
Another relevant characteristic value is the change of a stimulus creating a just-noticeable
difference (JND). This change is called difference threshold
or, alternatively, difference limen (DL). Consequently the DL is the measure
of a Δφ, being the difference between a stimulus φ0 and another stimulus
φ1. The JNDs are numbered discretely (JND ∈ ℕ). The first JND
is the first DL after the absolute threshold; the second JND is the DL following the
sum of the absolute threshold and the first DL (fig. 3.6). To sum up: the JND is the
smallest physiological scale unit of the linearized perception of a physical stimulus
φ.
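One way to write this stacking formally (an illustrative notation, not taken from [66]): the stimulus level at which the n-th JND is reached is the absolute threshold plus the sum of the first n difference limens,

φ_JND,n = φ_abs + Δφ_1 + Δφ_2 + ... + Δφ_n ,  with n ∈ ℕ.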
The measurement methods applied allow conclusions concerning the neuronal processing of stimuli. A classical
method for the analysis of a DL is the presentation of a reference stimulus and its comparison
with a second stimulus, which is presented to a subject either in an automated way
or controlled manually by the test person (fig. 3.7).
Besides the aspects just mentioned, there are further ways of performing comparable analyses.
On the one hand there is the aspect of masking, with the question: “At which
point will two stimuli depending on a single parameter be perceived as different?”.
Aspects frequently analyzed for masking are temporal and spatial dependencies; accordingly,
the terms temporal masking and spatial masking have been coined.
Example dynamic masking: The perception of a change in frequency of a mechanical
oscillation of fixed amplitude shall be analyzed. For this purpose two stimuli
are given at the same time to a test subject. The subject is allowed to change the
frequency of one stimulus until he or she detects two independent stimuli. The measured
change in frequency Δ f is the value of the DL with respect to the reference
stimulus. Results of this kind of experiment are not always precise and should
be analyzed critically. For example, in the case of stimulus locations very near to each
other, the above experiment can easily be interpreted such that only the maximum
amplitude of a summed-up signal has been analyzed, and not the DL of a frequency
change. To prevent this kind of criticism the experiment should be designed carefully,
with a series of hypotheses for falsification and verification. In this case an additional
experiment would be adequate, showing a statistically significant difference in
the perceived JND between the summed-up amplitude of two stimuli with identical
frequency and a signal containing two different frequencies.
Example temporal masking: A stimulus φ0 with a frequency f is presented for a
long period t. Afterwards stimuli φn, e.g. of varying frequency, are given. The perception
of these stimuli (e.g. with regard to the absolute threshold) varies depending
on the prior period t. The measure of this variation is the temporal masking effect
of a certain masking frequency f.
Example spatial masking: Two stimuli φ0 and φ1, e.g. needles on the skin, are given
with a spatial distance d. Above a certain distance d both stimuli are perceived independently
from each other. This is a very specific example of spatial masking, frequently
used for measuring the resolution of tactile perception. It has therefore been given
its own term: the two-point threshold.
Another aspect of analysis is the successiveness limen (LM), connected with
the question: “How many stimuli presented consecutively can be perceived?”
Example LM: With the help of a vibratory motor a sequence of stimuli is presented
at one body location. The stimuli vary according to a temporal pattern. The LM is the
shortest temporal spacing that still enables a correct perception of the sequence.
3.2.1.2 Psychophysical Laws
An important way of presenting a DL Δφ is as a value related to a reference stimulus
φ0 according to the formula

c = Δφ / φ0        (3.2)

In 1834, E.H. WEBER found out that c is a constant quotient for a specific
perception. In his key experiment he placed weights on the skin and found that c
is almost 1/30. This means that the next noticeably heavier weight above a weight of 200 g is 1/30 · 200 g + 200 g ≈ 206.7 g.
The value c differs significantly between different
stimuli, but the general relationship of equation 3.2 (Weber’s law)
seems to hold for many situations. As a consequence Weber’s law allows
putting different senses and their perception in relation to each other. An exception
is the range of low stimuli (fig. 3.8a) near the absolute threshold, where c
increases significantly.
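As a small worked example of equation 3.2, the following lines compute the sequence of just-noticeable weight increases starting from the 200 g reference, using the Weber fraction c = 1/30 quoted above (a sketch for illustration only):

# Successive just-noticeable weights according to Weber's law (eq. 3.2),
# using the Weber fraction c = 1/30 reported for weights placed on the skin.
c = 1.0 / 30.0
weight = 200.0   # g, reference stimulus

for n in range(1, 6):
    weight = weight + c * weight   # add one DL = c * current reference
    print(f"JND {n}: {weight:.1f} g")
# prints approximately 206.7, 213.6, 220.7, 228.0, 235.6 g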
A modification of Weber’s law,

c = Δφ / (φ0 + a)        (3.3)

compensates this dependency in the range of the absolute threshold (fig. 3.8b). The
constant a is - like c - specific for each sense and, compared to c, quite
small. The physiological reason for a has not been finally determined. One existing
hypothesis assumes it to be a measure of the background noise of the corresponding
receptors.
Some senses, especially the acoustic but also the haptic sense, show a nonlinear,
logarithmic dependency between perceived intensity and physical excitation. For the
range of stimuli for which Weber’s law is valid in its original form
(equ. 3.2), a new dependency can be formulated. This dependency, named Fechner’s
law,

Ψ = k · log φ        (3.4)

provides a linearized measure Ψ of the perception amplitude.
Today Fechner’s law has mainly a historical significance. In 1975 it was replaced
by S.S. STEVENS, suggesting a law describing the intensity of a stimulus by an
exponential relation:
Ψ = kφ^a (3.5)
This relation is called Power-law and allows comparisons of numerous perceptiondependencies
by a look at its constants a and k. If a = 1, the equation 3.5 gives a
linear dependency. At values for a > 1 the law gives a dependency increasing with
increased stimulus, at a < 1 a damping of the perception with increased stimulus is
resulting. When logarithmizing equation 3.5, an interdependency easy to display on
diagrams with logarithmic axis (fig. 3.9) can be obtained.
logΨ = log k+alogφ (3.6)
with y-axis log k and a slope of a.
Table 3.1 gives an extraction of STEVENS’ published data [242] of the coefficient
a according to equation 3.5.
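Equation 3.6 is what makes the power law convenient in practice: plotted over logarithmic axes, log Ψ versus log φ is a straight line whose slope is a. A minimal sketch of such a fit with purely synthetic data (the coefficients k = 2.0 and a = 0.7 are illustrative and not taken from [242]):

import numpy as np

# Synthetic magnitude-estimation data following Psi = k * phi**a
# with illustrative coefficients k = 2.0, a = 0.7 (not values from [242]).
phi = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
psi = 2.0 * phi**0.7

# Fit a straight line in log-log space: log(psi) = log(k) + a * log(phi)
slope, intercept = np.polyfit(np.log10(phi), np.log10(psi), 1)
a_est, k_est = slope, 10**intercept
print(f"estimated a = {a_est:.2f}, k = {k_est:.2f}")   # -> a = 0.70, k = 2.00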
3.2.1.3 Mean Values and Percentiles
The analysis of psychophysical measures is always laborious due to large variances
in the results, both between individual subjects and, in certain tests, even among specifically
trained test persons. As a consequence, statistical experiment design and the application of signal-detection
algorithms should be considered for any such experiment. For details
of these procedures, the literature on statistical experiment design and [66] are recommended.
From a more general perspective the following remarks should be considered:
Frequently, experimental results in psychophysics follow a Gaussian normal distribution.
This holds with regard to a single person as well as with respect to a
larger number of people. A Gaussian distribution can be characterized
by a mean value μ and a standard deviation σ. For such a symmetric distribution the mean value
marks the point where exactly 50% of a given set (e.g. of experiments) lie above and below it.
For sets not following a normal distribution, the use of percentiles
is suggested. Typical examples of their application are anthropometric values
in ergonomics. The x-th percentile gives the point on a scale below which x percent of the values of a given set lie. In the special case of a normal
distribution the fiftieth percentile is identical to the mean value (fig. 3.10).
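A minimal sketch of how mean value, standard deviation and percentiles can be extracted from raw experimental data (synthetic numbers, assuming numpy is available):

import numpy as np

# Synthetic set of measured absolute thresholds (arbitrary units).
data = np.array([4.1, 3.8, 5.0, 4.4, 6.2, 3.9, 4.7, 5.5, 4.0, 4.3])

mean = data.mean()
std = data.std(ddof=1)                  # sample standard deviation
p5, p50, p95 = np.percentile(data, [5, 50, 95])

print(f"mean = {mean:.2f}, sigma = {std:.2f}")
print(f"5th / 50th / 95th percentile = {p5:.2f} / {p50:.2f} / {p95:.2f}")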
3.2.2 Frequency Dependency
As mentioned in section 2.2, every kinaesthetic interaction has a tactile component.
We know from the analysis of “grasping a glass of water” in section 3.1 that tactile
components are part of the interaction’s innermost feedback-loop. As a result
the requirements for their dynamic properties are extraordinarily high. This section
discusses the perception thresholds and difference-limens as identified in neurology
and psychophysics from an engineering perspective. It is therefore a preparation
for the identification of requirements for technical systems interfacing the sense of
touch.
The identification of the dynamics of haptic perception can be performed either with
psychophysical methods or with neurological tools. When focusing on the receptors
only, tactile and kinaesthetic sensors can be analyzed independently
from each other. Neuronal potentials on nerve fibres can be measured via
interventionally implanted electrodes, which can even be repositioned during the recording
[250, 131].
In [122] several tactile sensor types (compare fig. 3.2 on page 37) have been
analyzed with respect to their frequency dependency (dynamics) and their thresholds for the
detection of skin deformation (fig. 3.11). The frequency ranges of slow-adapting (SA) and
rapid-adapting (RA) sensors complement and overlap each other. The SA-II sensor
in particular covers a range around ≈ 8 Hz. According to this study the mean threshold of
the isolated sensors shows a sensitivity maximum at around ≈ 300 Hz with an
elongation of 10 μm.
WILKINSON performed a study [287] on the thresholds of isolated Golgi tendon
receptors (fig. 3.12). The results show an almost linear dependency between the
response of the receptors in mV and the stimulation in μm over frequency. The relevant
frequency range of these receptors is lower than that of the tactile receptors of
figure 3.11, especially as the masses and stiffnesses of limbs show distinct low-pass
characteristics. High-frequency components of forces and elongations are damped
anyway, and therefore the kinaesthetic sensory system does not have to be able to
measure them.
For the design process of haptic systems the focus point does not lie on single
biological receptors but rather on the human’s combined perception resulting
from the sum of all tactile and kinaesthetic sensors. In this area numerous studies
have been performed, three of which are given here showing the range of results.
In 1935, HUGONY already published a study on the perception of oscillations
dependent on the frequency of mechanical stimuli [101]. Additionally he quantified
different stimulus levels, ranging from the absolute perception threshold
to pain thresholds (fig. 3.13a). To complement this general study, TALBOT added
details about the interdependency of the isolated biological sensor and perception
(fig. 3.13b). Both scientists showed that the sensitivity of perception increases to
a frequency of ≈ 200Hz (HUGONY) and ≈ 300Hz (TALBOT). These two studies
along with several others were compiled by HANDWERKER [219] resulting in a
combined curve of haptic perception thresholds (fig. 3.14).
A recommendable source for the analysis of haptic perception are the publications
by GESCHEIDER, who followed a stringent methodology for the analysis
and discussion of the haptic sensory system. Beginning in 1970 and continuing until at
least 2002, a series of hypotheses and measurements is documented in numerous publications.
Other sources worth considering are the works by BÉKÉSY [23]
and by JOHANSSON.
Besides the already known dependency of haptic perception on frequency, another
dependency exists, connected to the surface area transmitting the mechanical
oscillations: large areas of force transmission (A > 1 cm²) and small areas of force
transmission (A < 1 mm²) differ significantly in their absolute perception
thresholds (fig. 3.15). Kinaesthetic devices usually provide a large area
of force transmission, whereas for tactile devices smaller force transmission areas
have to be considered. The perception curve is a combination of the four tactile sensor
types and shows a minimum (point of maximum sensitivity) at ≈ 350 Hz. The
frequency dependency is obvious and undoubted; only the precise shape and the
exact position of the minimum vary in a range of ≈ 100 Hz depending on measurement,
author and publication. Additionally it can be noted that the perception
of very low frequencies below 0.1 Hz has not been the subject of many studies. Typically the
perception curves are assumed to stay constant towards lower frequencies below
approximately 1 Hz.
Besides the frequency dependency there are two additional dependencies affecting
haptic perception. Ongoing mechanical stimuli result in a reversible desensitization
of the receptors. This time dependency is used in [69] to mask single receptor classes in
order to study the contribution of other receptor classes in overlapping frequency ranges. The
time dependency of the perception curves ΔK in dB can be approximated according to
ΔK(t) = 12 · (et )12. (3.7)
As a result, desensitization happens on a time scale of seconds (spectral components
below 10 Hz). Consequently, desensitization does not necessarily have to be
considered for the design of haptic devices, telemanipulation systems or
simulators, due to the large ratio between usage time and desensitization time frame; a
steady state can be assumed for almost all relevant applications. In practical applications
this approximation is not necessarily adequate, however. For example, when tactile
devices based on pin or shear-force actuation are used, there is some evidence, according
to the author’s purely subjective observation, that the mentioned effect still
occurs after minutes of usage.
The amplitude resolution (DL) of haptic perception shows a logarithmic dependency
analogous to visual and acoustic perception. The perception of smallest
changes depending on frequency and on varying base excitation was studied in [68].
Measurements were taken at two frequencies (25 Hz, 250 Hz) and with white noise.
The observed dependency of the DL on the amplitude of the base excitation is nonlinear,
with a maximum difference of ≈ +3 dB; it is larger for smaller amplitudes of
the base excitation. This allows the conclusion that the power law (section 3.2.1.2,
equ. 3.6) can be used for the description of perception; however, its coefficients have
to be identified for every contact situation independently.
3.2.3 Characteristics of Haptic Interaction
Besides the dynamics’ curves in the prior section there are numerous insular values
from experiments documenting the possibilities of haptic interaction. The results
can be divided into two groups. In the table of haptic perception (tab. 3.2) the parameters
from a receptive perspective are summarized. In the table of active movements
(tab. 3.3) border values of the capabilities of the active parts of motor systems
are summarized. The tables are based on a collection by DOERRER [46] and have
been extended by selected additional sources. However, when considering their application,
a very important statement of BURDEA [34] still has to be remembered:
“... that it is dangerous to bank on recommendations for the design of haptic devices,
especially when they are taken from different experiments, with varying methods,
and when only a small number of participants took part”. The characteristic values
given here can only represent a selection of the analyses presented in literature. For
quite an actual and a very compelling summary [118] is recommended.
3.3 Conclusions from the Biology of Haptics
Beyond studying the pure characteristic values of haptic perception, we should keep
an eye on the real meaning of μm elongations and frequencies of 1 kHz and more,
and on their impact on real technical systems. The following is a small “finger exercise”
to prepare for the challenges of the design of haptic systems; the idea
is based on a talk given by NIEMEYER at the Eurohaptics conference in 2006.
3.3.1 Stiffnesses
Already the initial touch of a material gives us information about its haptic properties.
A human is able to discriminate immediately whether he or she is touching a
wooden table, a piece of rubber or a concrete wall with the fingertip. Besides
the acoustic and thermal properties, especially the tactile and kinaesthetic feedback
plays a large role. Based on the simplified assumption of a plate fixed on both sides
(width b, thickness h, length l), its stiffness k can be calculated from the modulus of elasticity E
according to [158]

k = (2 b h³ / l³) · E        (3.8)
Figure 3.16a shows the calculated stiffnesses of a plate with an edge length of
1 m and a thickness of 40 mm for different materials. In comparison, the stiffnesses
of commercially available haptic systems are given in fig. 3.16b. It is obvious
that the stiffnesses of haptic devices are orders of magnitude lower than the stiffnesses
of concrete, everyday objects like tables and walls. However, stiffness is just one
criterion for the design of a good haptic system and should not be overestimated.
The comparison above shall make us aware of the fact that a pure reproduction of
solid objects can hardly be realized with a single technical system. It rather takes a
combination of stiff and dynamic hardware, since especially the dynamic interaction
at high frequencies dominates the quality of haptics, as discussed extensively
in the last section.
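The calculation behind figure 3.16a can be sketched directly from equation 3.8. The moduli of elasticity below are typical textbook orders of magnitude and not necessarily the exact values used for the figure:

# Stiffness of a plate fixed on both sides according to eq. 3.8:
# k = (2 * b * h^3 / l^3) * E, here for b = l = 1 m, h = 40 mm.
b, l, h = 1.0, 1.0, 0.04           # m

E_modulus = {                       # typical values in N/m^2 (orders of magnitude)
    "steel":     210e9,
    "aluminium":  70e9,
    "wood":       10e9,
    "rubber":    0.05e9,
}

for material, E in E_modulus.items():
    k = 2.0 * b * h**3 / l**3 * E
    print(f"{material:10s}: k = {k:.2e} N/m")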
3.3.2 One Kilohertz - Significance for the Mechanical Design?
Haptic perception extends up to frequencies of about 10 kHz, whereby the range of highest sensitivity
lies between 100 Hz and 1 kHz. This wide range of haptic perception enables
us to perceive microstructures on surfaces with the same accuracy as it enables us to
identify the point of impact when drumming with our fingers on a table. For a rough
calculation, a model according to figure 3.17 is considered: a parallel connection
of a mass m and a spring k. Assuming an identical “virtual” volume V of material
and taking the individual density ρ for a qualitative comparison, the border
frequency of the step response can be calculated according to

f_b = (1 / 2π) · √(k / m) = (1 / 2π) · √(k / (V ρ))        (3.9)
Figure 3.17 also lists the border frequencies of a selection of materials. Only for rubber
and soft plastics do border frequencies below 100 Hz appear. Harder plastic
materials (Plexiglas) and all other materials show border frequencies above 700 Hz.
One obvious interpretation is that any qualitatively good simulation of
such a collision demands at least this bandwidth within the signal
conditioning elements and the mechanical system.
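A sketch of this estimate based on equation 3.9. Because the absolute border frequency depends on the assumed geometry (k and V), the sketch only compares materials relative to rubber; for identical geometry, equations 3.8 and 3.9 imply f_b ∝ √(E/ρ). The material data are typical textbook values:

import math

def border_frequency(k, V, rho):
    """Step-response border frequency according to eq. 3.9 (shown for reference;
    absolute values require choosing a concrete stiffness k and volume V)."""
    return math.sqrt(k / (V * rho)) / (2.0 * math.pi)

# For identical geometry, f_b scales with sqrt(E / rho); the ratios relative to
# rubber illustrate why only rubber and soft plastics fall below 100 Hz.
materials = {"rubber": (0.05e9, 1100.0), "plexiglas": (3e9, 1190.0),
             "aluminium": (70e9, 2700.0), "steel": (210e9, 7850.0)}

ref = math.sqrt(materials["rubber"][0] / materials["rubber"][1])
for name, (E, rho) in materials.items():
    print(f"{name:10s}: f_b / f_b(rubber) = {math.sqrt(E / rho) / ref:5.1f}")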
As a consequence, a frequent recommendation for the design of haptic systems
is the transmission of a full bandwidth of 1 kHz (in some sources even up to
10 kHz). This requirement is valid with respect to software and communications engineering,
as sampling systems and algorithms can achieve such rates easily
today. Considering the mechanical part of the design, however, dynamics of
1 kHz are enormous, maybe even utopian. Figure 3.18 gives another rough calculation
of the oscillating force amplitude according to

F0 = |x · (2π f)² · m|        (3.10)
The basis of the analysis is a force source generating an output force F0. The
load of this system is a mass (e.g. a knob) of only 10 grams. The system does not
carry any additional load, i.e. it does not have to generate any haptically active force
on a user. A periodic oscillation of frequency f and amplitude x is assumed.
With an expected oscillation amplitude of 1 mm, a force of approximately
10 mN is necessary at 10 Hz. At a frequency of 100 Hz a force of
2-3 N is already needed. At a frequency of 700 Hz the force increases to 100 N - and
this just for moving a mass of 10 grams. Of course, in combination
with a user impedance as load, the oscillation amplitude will decrease to below
100 μm, proportionally decreasing the necessary force. But this calculation
should make us aware of the simple fact that the energetic design and power management
of electromechanical systems for haptic applications needs to
be done very carefully.
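The essential point of equation 3.10 is the quadratic growth with frequency; the absolute values quoted above additionally depend on the amplitude assumed at each frequency. A small sketch of this scaling (mass and amplitude are the example values from above):

import math

def reactive_force(x, f, m):
    """Force amplitude required to oscillate mass m with amplitude x at frequency f (eq. 3.10)."""
    return abs(x * (2.0 * math.pi * f) ** 2 * m)

# At constant amplitude and mass, eq. 3.10 grows with the square of the frequency:
base = reactive_force(1e-3, 10.0, 0.010)        # 1 mm, 10 Hz, 10 g
for f in (10.0, 100.0, 700.0):
    ratio = reactive_force(1e-3, f, 0.010) / base
    print(f"f = {f:5.0f} Hz -> F0 is {ratio:6.0f} x the 10 Hz value")
# -> 1x, 100x, 4900x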
The design of a technical haptic system is always a compromise between bandwidth,
stiffness, dynamics of the signal conditioning and maximum force amplitudes.
Even for simple systems the design process leads the engineer to the borders of
what is physically possible. It is therefore necessary to have a good model of the
user, both as a load to the mechanical system and with respect to his
or her haptic perception. Such a model enables the engineer to carry out an optimized
design of the technical system; its derivation is the focus of the following
chapter.
HAPTICS RENDERING ALGORITHM
In the last decade we’ve seen an enormous
increase in interest in the science of haptics.
The quest for better understanding and use of haptic
abilities (both human and nonhuman) has manifested
itself in heightened activity in disciplines ranging from
robotics and telerobotics; to computational geometry
and computer graphics; to psychophysics, cognitive science,
and the neurosciences.
This issue of IEEE CG&A focuses
on haptic rendering. Haptics broadly
refers to touch interactions (physical
contact) that occur for the purpose
of perception or manipulation of
objects. These interactions can be
between a human hand and a real
object; a robot end-effector and a
real object; a human hand and a simulated
object (via haptic interface
devices); or a variety of combinations
of human and machine interactions
with real, remote, or virtual
objects. Rendering refers to the
process by which desired sensory
stimuli are imposed on the user to
convey information about a virtual
haptic object. At the simplest level,
this information is contained in the representation of the
object’s physical attributes—shape, elasticity, texture,
mass, and so on. Just as a sphere visually rendered with
simple shading techniques will look different from the
same sphere rendered with ray-tracing techniques, a
sphere haptically rendered with a simple penalty function
will feel different from the same sphere rendered
with techniques that also convey mechanical textures
and surface friction.

Figure 1: Basic architecture for a virtual reality application incorporating
visual, auditory, and haptic feedback.
As in the days when people were astonished to see
their first wire-frame computer-generated images, people
are now astonished to feel their first virtual object.
Yet the rendering techniques we use today will someday
seem like yesterday’s wire-frame displays—the first
steps into a vast field.
To help readers understand the issues discussed in
this issue’s theme articles, we briefly overview haptic
systems and the techniques needed for rendering the
way objects feel. We also discuss basic haptic-rendering
algorithms that help us decide what force should be
exerted and how we will deliver these forces to users. A
sidebar discusses key points in the history of haptics.
Architecture for haptic feedback
Virtual reality (VR) applications strive to simulate real
or imaginary scenes with which users can interact and
perceive the effects of their actions in real time. Ideally
the user interacts with the simulation via all five senses;
however, today’s typical VR applications rely on a
smaller subset, typically vision, hearing, and more
recently, touch.
Figure 1 shows the structure of a VR application incorporating
visual, auditory, and haptic feedback. The
application’s main elements are:
■ the simulation engine, responsible for computing the
virtual environment’s behavior over time;
■ visual, auditory, and haptic rendering algorithms,
which compute the virtual environment’s
graphic, sound, and force
responses toward the user; and
■ transducers, which convert visual,
audio, and force signals from
the computer into a form the
operator can perceive.
The human operator typically
holds or wears the haptic interface
device and perceives audiovisual feedback from audio
(computer speakers, headphones, and so on) and visual
displays (a computer screen or head-mounted display,
for example).
Whereas audio and visual channels feature unidirectional
information and energy flow (from the simulation
engine toward the user), the haptic modality
exchanges information and energy in two directions,
from and toward the user. This bidirectionality is often
referred to as the single most important feature of the
haptic interaction modality.
Haptic interface devices
An understanding of some basic concepts about haptic
interface devices will help the reader through the
remainder of the text. A more complete description of
the elements that make up such systems is available
elsewhere.1
Haptic interface devices behave like small robots that
exchange mechanical energy with a user. We use the term
device-body interface to highlight the physical connection
between operator and device through which energy is
exchanged. Although these interfaces can be in contact
with any part of the operator’s body, hand interfaces have
been the most widely used and developed systems to
date. Figure 2 shows some example devices.
One way to distinguish between haptic interface
devices is by their grounding locations. For interdigit
tasks, force-feedback gloves, such as the Hand Force
Feedback (HFF),2 read finger-specific contact information
and output finger-specific resistive forces, but can’t
reproduce object net weight or inertial forces. Similar
handheld devices are common in the gaming industry
and are built using low-cost vibrotactile transducers,
which produce synthesized vibratory effects. Exoskeleton
mechanisms or body-based haptic interfaces, which
a person wears on the arm or leg, present more complex
multiple degree-of-freedom (DOF) motorized devices.
Finally, ground-based devices include force-reflecting
joysticks and desktop haptic interfaces.
HISTORY OF HAPTICS
In the early 20th century, psychophysicists introduced the word
haptics (from the Greek haptesthai meaning to touch) to label the
subfield of their studies that addressed human touch-based perception
and manipulation. In the 1970s and 1980s, significant
research efforts in a completely different field—robotics—also
began to focus on manipulation and perception by touch. Initially
concerned with building autonomous robots, researchers soon
found that building a dexterous robotic hand was much more
complex and subtle than their initial naive hopes had suggested.
In time these two communities—one that sought to understand
the human hand and one that aspired to create devices with dexterity
inspired by human abilities—found fertile mutual interest
in topics such as sensory design and processing, grasp control
and manipulation, object representation and haptic information
encoding, and grammars for describing physical tasks.
In the early 1990s a new usage of the word haptics began to
emerge. The confluence of several emerging technologies made
virtualized haptics, or computer haptics,1 possible. Much like
computer graphics, computer haptics enables the display of simulated
objects to humans in an interactive manner. However,
computer haptics uses a display technology through which
objects can be physically palpated.
This new sensory display modality presents information by
exerting controlled forces on the human hand through a haptic
interface (rather than, as in computer graphics, via light from
a visual display device). These forces depend on the physics of
mechanical contact. The characteristics of interest in these
forces depend on the response of the sensors in the human hand
and other body parts (rather than on the eye’s sensitivity to
brightness, color, motion, and so on).
Unlike computer graphics, haptic interaction is bidirectional,
with energy and information flows both to and from the user.
Although Knoll demonstrated haptic interaction with simple
virtual objects at least as early as the 1960s, only recently was sufficient
technology available to make haptic interaction with complex
computer-simulated objects possible. The combination of
high-performance force-controllable haptic interfaces, computational
geometric modeling and collision techniques, cost-effective
processing and memory, and an understanding of the
perceptual needs of the human haptic system allows us to assemble
computer haptic systems that can display objects of sophisticated
complexity and behavior. With the commercial availability
of 3 degree-of-freedom haptic interfaces, software toolkits from
several corporate and academic sources, and several commercial
haptics-enabled applications, the field is experiencing rapid and
exciting growth.
Another distinction between haptic interface devices
is their intrinsic mechanical behavior. Impedance haptic
devices simulate mechanical impedance—they read
position and send force. Admittance haptic devices simulate
mechanical admittance—they read force and send
position. Simpler to design and much cheaper to produce,
impedance-type architectures are most common.
Admittance-based devices, such as the Haptic Master,3
are generally used for applications requiring high forces
in a large workspace.
Haptic interface devices are also classified by the
number of DOF of motion or force present at the device-body
interface—that is, the number of dimensions characterizing
the possible movements or forces exchanged
between device and operator. A DOF can be passive or
actuated, sensed or not sensed.
Characteristics commonly considered desirable for
haptic interface devices include
■ low back-drive inertia and friction;
■ minimal constraints on motion imposed by the device
kinematics so free motion feels free;
■ symmetric inertia, friction, stiffness, and resonant-frequency
properties (thereby regularizing the device
so users don’t have to unconsciously compensate for
parasitic forces);
■ balanced range, resolution, and bandwidth of position
sensing and force reflection; and
■ proper ergonomics that let the human operator focus
when wearing or manipulating the haptic interface,
as pain or even discomfort can distract the user,
reducing overall performance.
We consider haptic rendering algorithms applicable
to single- and multiple-DOF devices.
System architecture for haptic rendering
Haptic-rendering algorithms compute the correct
interaction forces between the haptic interface representation
inside the virtual environment and the virtual
objects populating the environment. Moreover, haptic-rendering
algorithms ensure that the haptic device correctly
renders such forces on the human operator.
An avatar is the virtual representation of the haptic
interface through which the user physically interacts
with the virtual environment. Clearly the choice of avatar
depends on what’s being simulated and on the haptic
device’s capabilities. The operator controls the avatar’s
position inside the virtual environment. Contact between
the interface avatar and the virtual environment sets off
action and reaction forces. The avatar’s geometry and
the type of contact it supports regulate these forces.
Within a given application the user might choose
among different avatars. For example, a surgical tool
can be treated as a volumetric object exchanging forces
and positions with the user in a 6D space or as a pure
point representing the tool’s tip, exchanging forces and
positions in a 3D space.
Several components compose a typical haptic rendering
algorithm. We identify three main blocks, illustrated
in Figure 3.
Collision-detection algorithms detect collisions
between objects and avatars in the virtual environment
and yield information about where, when, and ideally
to what extent collisions (penetrations, indentations,
contact area, and so on) have occurred.
Force-response algorithms compute the interaction
force between avatars and virtual objects when a collision
is detected. This force approximates as closely as
possible the contact forces that would normally arise during
contact between real objects. Force-response algorithms
typically operate on the avatars’ positions, the
positions of all objects in the virtual environment, and
the collision state between avatars and virtual objects.
Their return values are normally force and torque vectors
that are applied at the device-body interface.
Hardware limitations prevent haptic devices from
applying the exact force computed by the force-response
algorithms to the user. Control algorithms command
the haptic device in such a way that minimizes the error
between ideal and applicable forces. The discrete-time
nature of the haptic-rendering algorithms often makes this difficult, as we explain further later in the article.
Desired force and torque vectors computed by force-response
algorithms feed the control algorithms. The
algorithms’ return values are the actual force and torque
vectors that will be commanded to the haptic device.
A typical haptic loop consists of the following
sequence of events:
■ Low-level control algorithms sample the position sensors
at the haptic interface device joints.
■ These control algorithms combine the information
collected from each sensor to obtain the position of
the device-body interface in Cartesian space—that is,
the avatar’s position inside the virtual environment.
■ The collision-detection algorithm uses position information
to find collisions between objects and avatars
and report the resulting degree of penetration or
indentation.
■ The force-response algorithm computes interaction
forces between avatars and virtual objects involved
in a collision.
■ The force-response algorithm sends interaction forces
to the control algorithms, which apply them on the
operator through the haptic device while maintaining
a stable overall behavior.
The simulation engine then uses the same interaction
forces to compute their effect on objects in the virtual
environment. Although there are no firm rules about how
frequently the algorithms must repeat these computations,
a 1-kHz servo rate is common. This rate seems to be
a subjectively acceptable compromise permitting presentation
of reasonably complex objects with reasonable
stiffness. Higher servo rates can provide crisper contact
and texture sensations, but only at the expense of reduced
scene complexity (or more capable computers).
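A skeletal version of this servo loop in code. The device and scene objects and their methods are hypothetical placeholders standing in for vendor-specific APIs and for the collision-detection, force-response and simulation blocks described above:

import time

SERVO_RATE = 1000.0                  # Hz, the commonly cited 1-kHz servo rate
DT = 1.0 / SERVO_RATE

def haptic_servo_loop(device, scene):
    """Illustrative iteration order: sense -> collide -> respond -> command."""
    while device.is_running():                                # hypothetical API
        joint_angles = device.read_joint_sensors()            # hypothetical API
        avatar_pos = device.forward_kinematics(joint_angles)  # avatar position
        contacts = scene.detect_collisions(avatar_pos)        # collision detection
        force = scene.compute_response_force(avatar_pos, contacts)  # force response
        device.command_force(force)                           # control / actuation
        scene.update_dynamics(force, DT)                      # simulation engine step
        time.sleep(DT)                                        # stand-in for real-time scheduling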
The following sections explain the basic principles of
haptic-rendering algorithms, paying particular attention
to force-response algorithms. Although the ability
to detect collisions is an important aspect of computing
contact force response, given the familiarity of CG&A’s
readership with the topic, we don’t dwell on it here. The
geometric problem of efficiently detecting when and
where contact and interobject penetrations occur continues
to be an important research topic in haptics and
related fields. The faster real-time needs of haptic rendering
demand more algorithmic performance. One
solution is to accept less accuracy and use simpler collision
model geometries. Alternately, researchers are
adapting graphics-rendering hardware to enable fast
real-time collision detection among complex objects.
Lin and Manocha give a useful survey of collision-detection
algorithms for haptics.4
Computing contact-response forces
Humans perceive contact with real objects through
sensors (mechanoreceptors) located in their skin, joints,
tendons, and muscles. We make a simple distinction
between the information these two types of sensors can
acquire. Tactile information refers to the information
acquired through sensors in the skin with particular reference
to the spatial distribution of pressure, or more
generally, tractions, across the contact area. Kinesthetic
information refers to the information acquired
through the sensors in the joints. Interaction forces are
normally perceived through a combination of these two.
A tool-based interaction paradigm provides a convenient
simplification because the system need only render
forces resulting from contact between the tool’s
avatar and objects in the environment. Thus, haptic
interfaces frequently use a tool handle as the physical
interface for the user.
To provide a haptic simulation experience, we’ve
designed our systems to recreate the contact forces a
user would perceive when touching a real object. The
haptic interfaces measure the user’s position to recognize
if and when contacts occur and to collect information
needed to determine the correct interaction force.
Although determining user motion is easy, determining
appropriate display forces is a complex process and a
subject of much research. Current haptic technology
effectively simulates interaction forces for simple cases,
but is limited when tactile feedback is involved.
In this article, we focus our attention on force-response
algorithms for rigid objects. Compliant object-response
modeling adds a dimension of complexity
because of nonnegligible deformations, the potential
for self-collision, and the general complexity of modeling
potentially large and varying areas of contact.
We distinguish between two types of forces: forces
due to object geometry and forces due to object surface
properties, such as texture and friction.
Geometry-dependent force-rendering algorithms
The first type of force-rendering algorithms aspires to
recreate the force interaction a user would feel when
touching a frictionless and textureless object. Such interaction
forces depend on the geometry of the object being
touched, its compliance, and the geometry of the avatar
representing the haptic interface inside the virtual environment.
Although exceptions exist,5 the DOF necessary
to describe the interaction forces between an avatar and
a virtual object typically matches the actuated DOF of
the haptic device being used. Thus for simpler devices,
such as a 1-DOF force-reflecting gripper (Figure 2a), the
avatar consists of a couple of points that can only move
and exchange forces along the line connecting them. For
this device type, the force-rendering algorithm computes
a simple 1-DOF squeeze force between the index finger
and the thumb, similar to the force you would feel when
cutting an object with scissors. When using a 6-DOF haptic
device, the avatar can be an object of any shape. In
this case, the force-rendering algorithm computes all the
interaction forces between the object and the virtual
environment and applies the resultant force and torque
vectors to the user through the haptic device.
We group current force-rendering algorithms by the
number of DOF necessary to describe the interaction
force being rendered.
One-DOF interaction. A 1-DOF device measures
the operator’s position and applies forces to the operator
along one spatial dimension only. Types of 1-DOF
interactions include opening a door with a knob that is
constrained to rotate around one axis, squeezing scissors
to cut a piece of paper, or pressing a syringe’s piston
when injecting a liquid into a patient. A 1-DOF interaction
might initially seem limited; however, it can render
many interesting and useful effects.
Rendering a virtual wall—that is, creating the interaction
forces that would arise when contacting an infinitely
stiff object—is the prototypical haptic task. As one
of the most basic forms of haptic interaction, it often
serves as a benchmark in studying haptic stability.6–8
The discrete-time nature of haptic interaction means
that the haptic interface avatar will always penetrate
any virtual object. A positive aspect of this is that the
force-rendering algorithm can use information on how
far the avatar has penetrated the object to compute the
interaction force. However, this penetration can cause
some unrealistic effects to arise, such as vibrations in
the force values, as we discuss later in the article. As Figure
4 illustrates, if we assume the avatar moves along
the x-axis and x < xW describes the wall, the simplest
algorithm to render a virtual wall is given by

F = K (xW − x)   if x < xW
F = 0            if x ≥ xW

where K represents the wall’s stiffness and thus is ideally
very large. More interesting effects can be accomplished
for 1-DOF interaction.
Figure 4: Virtual wall concept, a 1-DOF interaction. The operator moves and
feels forces only along one spatial dimension.
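A minimal sketch of this wall law (stiffness and wall position are example values; in a real system x would be read from the device's position sensors every servo tick):

K = 2000.0        # N/m, wall stiffness; ideally as large as stability permits
X_WALL = 0.0      # m, wall location along the x-axis

def virtual_wall_force(x):
    """1-DOF virtual wall: push back proportionally to penetration depth."""
    if x < X_WALL:                      # avatar has penetrated the wall
        return K * (X_WALL - x)         # restoring force along +x
    return 0.0                          # free space: no force

# Example: 1 mm of penetration yields a 2 N restoring force.
print(virtual_wall_force(-0.001))       # -> 2.0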
Two-DOF interaction. Examples of 2-DOF interactions
exist in everyday life—for example, using a
mouse to interact with a PC. Using 2-DOF interfaces to
interact with 3D objects is a bit less intuitive. It’s possible,
however, and is an effective way to interact with
simpler 3D virtual environments while limiting the
costs and complexity of haptic devices needed to render
the interactions. Two-DOF rendering of 3D objects
is, in some cases, like pushing a small ball over the surface
of a 3D object under the influence of gravity. Various
techniques enable this type of rendering by
projecting the ideal 3-DOF point-contact interaction
force on a plane,11,12 or by evaluating the height
change between two successive contact points on the
same surface.13
Three-DOF interaction. Arguably one of the most
interesting events in haptics’ history was the recognition,
in the early 1990s, of the usefulness of the point interaction
paradigm. This geometric simplification of the
general 6-DOF problem assumes that we interact with
the virtual world with a point probe, and requires that
we only compute the three interaction force components
at the probe’s tip. This greatly simplifies the interface
device design and facilitates collision detection and
force computation. Yet, even in this seemingly simple
case, we find an incredibly rich array of interaction possibilities
and the opportunity to address the fundamental
elements of haptics unencumbered by excessive
geometric and computational complexity.
To compute force interaction with 3D virtual objects,
the force-rendering algorithm uses information about
how much the probing point, or avatar, has penetrated
the object, as in the 1-DOF case. However, for 3-DOF
interaction, the force direction isn’t trivial as it usually
is for 1-DOF interaction.
Various approaches for computing force interaction
for virtual objects represented by triangular meshes exist.
Vector field methods use a one-to-one mapping between
position and force. Although these methods often work
well, they don’t record past avatar positions. This makes
it difficult to determine the interaction force’s direction
when dealing with small or thin objects, such as the interaction
with a piece of sheet metal, or objects with complex
shapes. Nonzero penetration of avatars inside
virtual objects can cause the avatars to cross through
such a thin virtual surface before any force response is
computed (that is, an undetected collision occurs). To
address the problems posed by vector field methods,
Zilles et al. and Ruspini et al. independently introduced
the god-object14 and proxy algorithms.15 Both algorithms
are built on the same principle: although we can’t stop
avatars from penetrating virtual objects, we can use additional
variables to track a physically realistic contact on
the object’s surface—the god object or proxy. Placing a
spring between avatar position and god object/proxy
creates a realistic force feedback to the user. In free space,
the haptic interface avatar and the god object/proxy are
collocated and thus the force response algorithm returns
no force to the user. When colliding with a virtual object,
the god object/proxy algorithm finds the new god
object/proxy position in two steps:
1. It finds a set of active constraints.
2. Starting from its old position, the algorithm identifies
the new position as the point on the set of active constraints
that is closest to the current avatar position.
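A sketch of the proxy idea for the simplest possible case, a single planar constraint. The plane representation, the spring constant and the use of numpy are illustrative assumptions; real implementations work on triangle meshes and search the whole set of active constraints starting from the previous proxy position:

import numpy as np

K_COUPLING = 800.0   # N/m, virtual spring between avatar and proxy

def update_proxy(avatar, plane_point, plane_normal):
    """Single-plane case: project a penetrating avatar back onto the surface.
    (The full god-object/proxy algorithm starts from the previous proxy position
    and searches the whole set of active constraints.)"""
    n = plane_normal / np.linalg.norm(plane_normal)
    penetration = np.dot(avatar - plane_point, n)
    if penetration >= 0.0:
        return avatar.copy()             # free space: proxy follows the avatar
    return avatar - penetration * n      # project avatar back onto the surface

def proxy_force(avatar, proxy):
    """Spring between avatar and proxy produces the rendered force."""
    return K_COUPLING * (proxy - avatar)

# Example: avatar 2 mm below the plane z = 0.
avatar = np.array([0.0, 0.0, -0.002])
proxy = update_proxy(avatar, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(proxy_force(avatar, proxy))        # -> [0, 0, 1.6] N, pushing the user out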
Morgenbesser et al.’s introduction of force shading—
the haptic equivalent of Phong shading—successively
refined both algorithms.16 Whereas graphic-rendering
interpolated normals obtain more smooth-looking
meshes, haptic-rendering interpolated normals obtain
smooth-changing forces throughout an object’s surface.
Walker et al. recently proposed an interesting variation
of the god-object/proxy algorithms applicable to
cases involving triangular meshes based on large quantities
of polygons.17
Salisbury et al. introduced an extension of the god-object
algorithm for virtual objects based on implicit surfaces
with an analytical representation.18 For implicit
surfaces, collision detection is much faster and we can
calculate many of the variables necessary for computing
the interaction force, such as its direction and intensity,
using closed analytical forms. Other examples of
3-DOF interaction include algorithms for interaction
with NURBS-based19 and with voxel-based objects.20
More than 3-DOF interaction. Although the
point interaction metaphor has proven to be surprisingly
convincing and useful, it has limitations. Simulating
interaction between a tool’s tip and a virtual environment
means we can’t apply torques through the contact.
This can lead to unrealistic scenarios, such as a user feeling
the shape of a virtual object using the tool’s tip while
the rest of the tool lies inside the object.
To improve on this situation, some approaches use
avatars that enable exertion of forces or torques with
more than three DOF. Borrowing terminology from the
robotic-manipulation community, Barbagli et al.21
developed an algorithm to simulate 4-DOF interaction
through soft-finger contact—that is, a point contact with
friction that can support moments (up to a torsional friction
limit) about the contact normal. This type of avatar
is particularly handy when using multiple-point interaction
to grasp and manipulate virtual objects.
Basdogan et al. implemented 5-DOF interaction, such
as occurs between a line segment and a virtual object, to
approximate contact between long tools and virtual environments.22
This ray-based rendering technique allows the
simulation of tool interaction by modeling the tool as
a set of connected line segments interacting with a virtual object.
Several researchers have developed algorithms providing
for 6-DOF interaction forces. For example,
McNeely et al.23 simulated interaction between modestly
complex rigid objects within an arbitrarily complex
environment of static rigid objects represented by
voxels, and Ming et al.24 simulated contact between
complex polygonal environments and haptic probes.
Surface property-dependent force-rendering
algorithms
All real surfaces contain tiny irregularities or indentations.
Obviously, it’s impossible to distinguish each
irregularity when sliding a finger over an object. However,
tactile sensors in the human skin can feel their
combined effects when rubbed against a real surface.
Although this article doesn’t focus on tactile displays,
we briefly present the state of the art for algorithms that
can render virtual objects’ haptic textures and friction
properties.
Micro-irregularities act as obstructions when two surfaces
slide against each other and generate forces tangential
to the surface and opposite to motion. Friction,
when viewed at the microscopic level, is a complicated
phenomenon. Nevertheless, simple empirical models
exist, such as the one Leonardo da Vinci proposed and
Charles Augustin de Coulomb later developed in 1785.
Such models served as a basis for the simpler frictional
models used in 3-DOF rendering.
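A sketch of the simplest such model, Coulomb kinetic friction at a point contact (the friction coefficient is an example value; stick-slip behavior, which the Karnopp model mentioned in the next paragraph addresses, is deliberately ignored):

import numpy as np

MU = 0.4   # example friction coefficient

def coulomb_friction(normal_force, tangential_velocity):
    """Friction force opposing tangential motion, limited to mu * |Fn|."""
    speed = np.linalg.norm(tangential_velocity)
    if speed < 1e-9:
        return np.zeros(3)                          # no sliding, no kinetic friction
    direction = -tangential_velocity / speed        # opposite to motion
    return MU * np.linalg.norm(normal_force) * direction

# Example: 2 N normal force, sliding along +x at 5 cm/s.
print(coulomb_friction(np.array([0, 0, 2.0]), np.array([0.05, 0, 0])))
# -> [-0.8, 0, 0] N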
Researchers outside the haptic community have
developed many models to render friction with higher
accuracy—for example, the Karnopp model for modeling
stick-slip friction, the Bristle model, and the reset
integrator model. Higher accuracy, however, sacrifices
speed, a critical factor in real-time applications. Any
choice of modeling technique must consider this trade
off. Keeping this trade off in mind, researchers have
developed more accurate haptic-rendering algorithms
for friction (see, for instance, Dupont et al.25).
A texture or pattern generally covers real surfaces.
Researchers have proposed various techniques for rendering
the forces that touching such textures generates.
Many of these techniques are inspired by analogous
techniques in modern computer graphics. In computer
graphics, texture mapping adds realism to computer-generated
scenes by projecting a bitmap image onto surfaces
being rendered. The same can be done haptically.
Minsky11 first proposed haptic texture mapping for 2D;
Ruspini et al. later extended his work to 3D scenes.15
Researchers have also used mathematical functions to
create synthetic patterns. Basdogan et al.22 and Costa et
al.26 investigated the use of fractals to model natural textures
while Siira and Pai27 used a stochastic approach.
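To make these two families concrete, the following sketch modulates a penalty force with a sinusoidal height field (a synthetic pattern) and, alternatively, with zero-mean noise in the spirit of a stochastic texture. All amplitudes, wavelengths, and variances are made-up illustration values.

import math, random

def sinusoidal_texture_force(x, y, penetration, k=500.0,
                             amplitude=0.0005, wavelength=0.002):
    """Modulate the penalty force magnitude with a sinusoidal height field
    h(x, y); k is the surface stiffness (N/m)."""
    h = amplitude * math.sin(2 * math.pi * x / wavelength) \
                  * math.sin(2 * math.pi * y / wavelength)
    return k * max(penetration + h, 0.0)

def stochastic_texture_force(penetration, k=500.0, sigma=0.05):
    """Add zero-mean Gaussian noise to the normal force while in contact,
    roughly in the spirit of a stochastic texture model."""
    base = k * max(penetration, 0.0)
    return base * (1.0 + random.gauss(0.0, sigma)) if base > 0 else 0.0

print(sinusoidal_texture_force(0.0005, 0.0005, 0.001))  # textured contact force
print(stochastic_texture_force(0.001))                  # noisy contact force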
Controlling forces delivered through
haptic interfaces
So far we’ve focused on the algorithms that compute
the ideal interaction forces between the haptic interface
avatar and the virtual environment. Once such forces
have been computed, they must be applied to the user.
Limitations of haptic device technology, however, have
sometimes made applying the force’s exact value as
computed by force-rendering algorithms impossible.
Various issues contribute to limiting a haptic device’s
capability to render a desired force or, more often, a
desired impedance. For example, haptic interfaces can exert forces of only limited magnitude, and not equally well in all directions; rendering algorithms must therefore ensure that no output components saturate, as saturation would lead to erroneous or discontinuous application of forces to the user.
In addition, haptic devices aren’t ideal force transducers.
An ideal haptic device would render zero impedance
when simulating movement in free space, and any
finite impedance when simulating contact with an
object featuring such impedance characteristics. The
friction, inertia, and backlash present in most haptic
devices prevent them from meeting this ideal.
A third issue is that haptic-rendering algorithms operate
in discrete time whereas users operate in continuous time, as Figure 5 illustrates. While moving into and
out of a virtual object, the sampled avatar position will
always lag behind the avatar’s actual continuous-time
position. Thus, when pressing on a virtual object, a user
needs to perform less work than in reality; when the user
releases, however, the virtual object returns more work
than its real-world counterpart would have returned. In
other terms, touching a virtual object extracts energy
from it. This extra energy can cause an unstable
response from haptic devices.7
Finally, haptic device position sensors have finite resolution.
Consequently, attempting to determine where
and when contact occurs always results in a quantization
error. Although users might not easily perceive this
error, it can create stability problems.
All of these issues, well known to practitioners in the
field, can limit a haptic application’s realism. The first
two issues usually depend more on the device mechanics;
the latter two depend on the digital nature of VR
applications.
As mentioned previously, haptic devices feature a
bidirectional flow of energy, creating a feedback loop
that includes user, haptic device, and haptic-rendering/
simulation algorithms, as Figure 5 shows. This loop
can become unstable due to energy leaks from the virtual environment.
The problem of stable haptic interaction has received
a lot of attention in the past decade. The main problem
in studying the loop’s stability is the presence of the
human operator, whose dynamic behavior can’t be generalized
with a simple transfer function. Researchers
have largely used passivity theory to create robust algorithms
that work for any user.
For a virtual wall such as the one in Figure 4, Colgate
analytically showed that a relation exists between the
maximum stiffness a device can render, the device’s level
of mechanical damping, the level of digital damping
commanded to the device, and the servo rate controlling
the device.6 More specifically, to have stable interaction,
the relationship b > KT/2 + B should hold. That is, the device's mechanical damping b should always be higher than the sum of the digital damping B commanded to the device and the product KT/2, where K is the stiffness to be rendered by the device and T is the
servo rate period. Stiffer walls tend to become unstable
for higher servo rate periods, resulting in high-frequency
vibrations and possibly uncontrollably high levels of
force. Increasing the device's mechanical damping can limit instability, even though this limits its ability to simulate null impedance during free-space movement.
Thus high servo rates (or low servo rate periods)
are a key issue for stable haptic interaction.
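As a back-of-the-envelope aid, the helper below simply rearranges the condition b > KT/2 + B to give the largest stiffness a device can stably render for given damping values and servo period; the numeric values in the example are arbitrary.

def max_stable_stiffness(b, B, T):
    """Largest stiffness K satisfying Colgate's condition b > K*T/2 + B.

    b  device (mechanical) damping, N*s/m
    B  digital damping commanded to the device, N*s/m
    T  servo period, s
    """
    if T <= 0:
        raise ValueError("servo period must be positive")
    return max(2.0 * (b - B) / T, 0.0)

# Example: 1 kHz servo rate (T = 0.001 s), 2 N*s/m of mechanical damping,
# no digital damping -> roughly 4000 N/m of renderable wall stiffness.
print(max_stable_stiffness(b=2.0, B=0.0, T=0.001))   # 4000.0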
Two main sets of techniques for limiting unstable
behavior in haptic devices exist. The first set includes
solutions that use virtual damping to limit the energy
flow from the virtual environment toward the user when
it could create unstable behavior.8,28 Colgate introduced
virtual coupling, a connection between haptic device and
virtual avatar consisting of stiffness and damping, which
effectively limits the maximum impedance that the haptic
display must exhibit.28 A virtual coupling lets users
create virtual environments featuring unlimited stiffness
levels, as the haptic device will always attempt to
render only the maximum level set by the virtual coupling.
Although this ensures stability, it doesn't enable a haptic device to stably render stiffness levels beyond that maximum.
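A minimal sketch of the virtual-coupling idea follows, assuming a plain spring-damper between the measured device position and the simulated avatar; the gains k_c and b_c are illustrative values and cap the impedance the device is asked to display.

import numpy as np

def virtual_coupling_force(device_pos, avatar_pos, device_vel, avatar_vel,
                           k_c=2000.0, b_c=1.0):
    """Force commanded to the device by a spring-damper virtual coupling.

    The same force (with opposite sign) acts on the avatar in the simulation,
    so device and avatar are pulled toward each other while the rendered
    impedance is capped by the coupling stiffness k_c and damping b_c.
    """
    return -k_c * (device_pos - avatar_pos) - b_c * (device_vel - avatar_vel)

# Example: avatar held at a wall while the device has moved 1 mm past it.
f = virtual_coupling_force(np.array([0.001, 0.0, 0.0]), np.zeros(3),
                           np.zeros(3), np.zeros(3))
print(f)   # -> [-2., 0., 0.]  (2 N pulling the device back toward the avatar)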
The second set of techniques includes solutions that
attempt to speed up haptic servo rates by decoupling
force-response algorithms from other slower algorithms,
such as collision-detection, visual-rendering,
and virtual environment dynamics algorithms.29 This
can be accomplished by running all of these algorithms
in different threads with different servo rates, and letting
the user interact with a simpler local virtual object
representation at the highest possible rate that can be
accomplished on the system.
Four main threads exist. The visual-rendering loop is
typically run at rates of up to 30 Hz. The simulation
thread is run as fast as possible congruent with the simulated
scene’s overall complexity. A collision-detection
thread, which computes a local representation of the
part of the virtual object closest to the user avatar, is run
at slower rates to limit CPU usage. Finally, a faster collision-detection and force-response loop is run at high servo rates.
An extremely simple local representation makes this
possible (typical examples include planes or spheres).
Surface discontinuities are normally not perceived,
given that the maximum speed of human movements is
limited and thus the local representation can always
catch up with the current avatar position. This approach
has gained success in recent years with the advent of
surgical simulators employing haptic devices, because
algorithms to accurately compute deformable object
dynamics are still fairly slow and not very scalable.30,31
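The following sketch shows the multirate idea schematically: a slow thread refreshes a simple local representation (here a plane) of the nearest surface, while a fast servo thread renders a penalty force against that plane only. Thread rates, data structures, and the commented-out send_force_to_device call are assumptions for illustration; a real system would use the device vendor's high-priority servo callback rather than Python threads.

import threading, time
import numpy as np

avatar_pos = np.zeros(3)                        # would be written by the device driver
local_plane = {"n": np.array([0.0, 0.0, 1.0]),  # local proxy: unit plane normal
               "d": 0.0}                        # and offset; the plane is n . x = d
lock = threading.Lock()
running = True

def collision_thread(rate_hz=100):
    """Slow loop: refresh the plane approximating the nearest virtual surface
    (placeholder; a real system runs full collision detection here)."""
    while running:
        with lock:
            local_plane["d"] = 0.0
        time.sleep(1.0 / rate_hz)

def servo_thread(rate_hz=1000, k=1500.0):
    """Fast loop: render a penalty force against the local plane only."""
    while running:
        with lock:
            n, d = local_plane["n"], local_plane["d"]
        penetration = d - float(n @ avatar_pos)
        force = k * penetration * n if penetration > 0 else np.zeros(3)
        # send_force_to_device(force)   # hypothetical driver call
        time.sleep(1.0 / rate_hz)

threads = [threading.Thread(target=collision_thread, daemon=True),
           threading.Thread(target=servo_thread, daemon=True)]
for t in threads:
    t.start()
time.sleep(0.05)       # let the loops run briefly for this example
running = False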
Conclusion
As haptics moves beyond the buzzes and thumps of
today’s video games, technology will enable increasingly
believable and complex physical interaction with
virtual or remote objects. Already haptically enabled
commercial products let designers sculpt digital clay
figures to rapidly produce new product geometry,
museum goers feel previously inaccessible artifacts, and
doctors train for simple procedures without endangering
patients.
Past technological advances that permitted recording,
encoding, storage, transmission, editing, and ultimately
synthesis of images and sound profoundly
affected society. A wide range of human activities,
including communication, education, art, entertainment,
commerce, and science, were forever changed
when we learned to capture, manipulate, and create
sensory stimuli nearly indistinguishable from reality. It’s
not unreasonable to expect that future advancements
in haptics will have equally deep effects. Though the
field is still in its infancy, hints of vast, unexplored intellectual
and commercial territory add excitement and
energy to a growing number of conferences, courses,
product releases, and invention efforts.
For the field to move beyond today’s state of the art,
researchers must surmount a number of commercial and
technological barriers. Device and software tool-oriented
corporate efforts have provided the tools we need to
step out of the laboratory, yet we need new business models.
For example, can we create haptic content and authoring
tools that will make the technology broadly attractive?
Can the interface devices be made practical and inexpensive
enough to make them widely accessible?
Once we move beyond single-point force-only interactions
with rigid objects, we should explore several
technical and scientific avenues. Multipoint, multihand,
and multiperson interaction scenarios all offer enticingly
rich interactivity. Adding submodality stimulation
such as tactile (pressure distribution) display and vibration
could add subtle and important richness to the
experience. Modeling compliant objects, such as for surgical
simulation and training, presents many challenging
problems to enable realistic deformations, arbitrary
collisions, and topological changes caused by cutting
and joining actions.
Improved accuracy and richness in object modeling
and haptic rendering will require advances in our understanding
of how to represent and render psychophysically
and cognitively germane attributes of objects, as
well as algorithms and perhaps specialty hardware
(such as haptic or physics engines) to perform real-time
computations.
Development of multimodal workstations that provide
haptic, visual, and auditory engagement will offer
opportunities for more integrated interactions. We’re
only beginning to understand the psychophysical and
cognitive details needed to enable successful multimodality
interactions. For example, how do we encode
and render an object so there is a seamless consistency
and congruence across sensory modalities—that is, does
it look like it feels? Are the object’s density, compliance,
motion, and appearance familiar and unconsciously
consistent with context? Are sensory events predictable
enough that we consider objects to be persistent, and
can we make correct inference about properties?
Finally we shouldn’t forget that touch and physical
interaction are among the fundamental ways in which
we come to understand our world and to effect changes
in it. This is true on a developmental as well as an evolutionary
level. For early primates to survive in a physical
world, as Frank Wilson suggested, “a new physics
would eventually have to come into this their brain, a
new way of registering and representing the behavior
of objects moving and changing under the control of the
hand. It is precisely such a representational system—a
syntax of cause and effect, of stories, and of experiments,
each having a beginning, a middle, and an end—
that one finds at the deepest levels of the organization
of human language.”32
Our efforts to communicate information by rendering
how objects feel through haptic technology, and the
excitement in our pursuit, might reflect a deeper desire
to speak with an inner, physically based language that
has yet to be given a true voice.
In the last decade we’ve seen an enormous
increase in interest in the science of haptics.
The quest for better understanding and use of haptic
abilities (both human and nonhuman) has manifested
itself in heightened activity in disciplines ranging from
robotics and telerobotics; to computational geometry
and computer graphics; to psychophysics, cognitive science,
and the neurosciences.
This issue of IEEE CG&A focuses
on haptic rendering. Haptics broadly
refers to touch interactions (physical
contact) that occur for the purpose
of perception or manipulation of
objects. These interactions can be
between a human hand and a real
object; a robot end-effector and a
real object; a human hand and a simulated
object (via haptic interface
devices); or a variety of combinations
of human and machine interactions
with real, remote, or virtual
objects. Rendering refers to the
process by which desired sensory
stimuli are imposed on the user to
convey information about a virtual
haptic object. At the simplest level,
this information is contained in the representation of the
object’s physical attributes—shape, elasticity, texture,
mass, and so on. Just as a sphere visually rendered with
simple shading techniques will look different from the
same sphere rendered with ray-tracing techniques, a
sphere haptically rendered with a simple penalty function will feel different from the same sphere rendered with techniques that also convey mechanical textures and surface friction.
Figure 1. Basic architecture for a virtual reality application incorporating visual, auditory, and haptic feedback.
As in the days when people were astonished to see
their first wire-frame computer-generated images, people
are now astonished to feel their first virtual object.
Yet the rendering techniques we use today will someday
seem like yesterday’s wire-frame displays—the first
steps into a vast field.
To help readers understand the issues discussed in
this issue’s theme articles, we briefly overview haptic
systems and the techniques needed for rendering the
way objects feel. We also discuss basic haptic-rendering
algorithms that help us decide what force should be
exerted and how we will deliver these forces to users. A
sidebar discusses key points in the history of haptics.
Architecture for haptic feedback
Virtual reality (VR) applications strive to simulate real
or imaginary scenes with which users can interact and
perceive the effects of their actions in real time. Ideally
the user interacts with the simulation via all five senses;
however, today’s typical VR applications rely on a
smaller subset, typically vision, hearing, and more
recently, touch.
Figure 1 shows the structure of a VR application incorporating
visual, auditory, and haptic feedback. The
application’s main elements are:
■ the simulation engine, responsible for computing the
virtual environment’s behavior over time;
■ visual, auditory, and haptic rendering algorithms,
which compute the virtual environment’s
graphic, sound, and force
responses toward the user; and
■ transducers, which convert visual,
audio, and force signals from
the computer into a form the
operator can perceive.
The human operator typically
holds or wears the haptic interface
device and perceives audiovisual feedback from audio
(computer speakers, headphones, and so on) and visual
displays (a computer screen or head-mounted display,
for example).
Whereas audio and visual channels feature unidirectional
information and energy flow (from the simulation
engine toward the user), the haptic modality
exchanges information and energy in two directions,
from and toward the user. This bidirectionality is often
referred to as the single most important feature of the
haptic interaction modality.
Haptic interface devices
An understanding of some basic concepts about haptic
interface devices will help the reader through the
remainder of the text. A more complete description of
the elements that make up such systems is available
elsewhere.1
Haptic interface devices behave like small robots that
exchange mechanical energy with a user. We use the term
device-body interface to highlight the physical connection
between operator and device through which energy is
exchanged. Although these interfaces can be in contact
with any part of the operator’s body, hand interfaces have
been the most widely used and developed systems to
date. Figure 2 shows some example devices.
One way to distinguish between haptic interface
devices is by their grounding locations. For interdigit
tasks, force-feedback gloves, such as the Hand Force
Feedback (HFF),2 read finger-specific contact information
and output finger-specific resistive forces, but can’t
reproduce object net weight or inertial forces. Similar
handheld devices are common in the gaming industry
and are built using low-cost vibrotactile transducers,
which produce synthesized vibratory effects. Exoskeleton
mechanisms or body-based haptic interfaces, which
a person wears on the arm or leg, present more complex
multiple degree-of-freedom (DOF) motorized devices.
Finally, ground-based devices include force-reflecting
joysticks and desktop haptic interfaces.
HISTORY OF HAPTICS
In the early 20th century, psychophysicists introduced the word
haptics (from the Greek haptesthai meaning to touch) to label the
subfield of their studies that addressed human touch-based perception
and manipulation. In the 1970s and 1980s, significant
research efforts in a completely different field—robotics—also
began to focus on manipulation and perception by touch. Initially
concerned with building autonomous robots, researchers soon
found that building a dexterous robotic hand was much more
complex and subtle than their initial naive hopes had suggested.
In time these two communities—one that sought to understand
the human hand and one that aspired to create devices with dexterity
inspired by human abilities—found fertile mutual interest
in topics such as sensory design and processing, grasp control
and manipulation, object representation and haptic information
encoding, and grammars for describing physical tasks.
In the early 1990s a new usage of the word haptics began to
emerge. The confluence of several emerging technologies made
virtualized haptics, or computer haptics,1 possible. Much like
computer graphics, computer haptics enables the display of simulated
objects to humans in an interactive manner. However,
computer haptics uses a display technology through which
objects can be physically palpated.
This new sensory display modality presents information by
exerting controlled forces on the human hand through a haptic
interface (rather than, as in computer graphics, via light from
a visual display device). These forces depend on the physics of
mechanical contact. The characteristics of interest in these
forces depend on the response of the sensors in the human hand
and other body parts (rather than on the eye’s sensitivity to
brightness, color, motion, and so on).
Unlike computer graphics, haptic interaction is bidirectional,
with energy and information flows both to and from the user.
Although Knoll demonstrated haptic interaction with simple
virtual objects at least as early as the 1960s, only recently was sufficient
technology available to make haptic interaction with complex
computer-simulated objects possible. The combination of
high-performance force-controllable haptic interfaces, computational
geometric modeling and collision techniques, cost-effective
processing and memory, and an understanding of the
perceptual needs of the human haptic system allows us to assemble
computer haptic systems that can display objects of sophisticated
complexity and behavior. With the commercial availability
of 3 degree-of-freedom haptic interfaces, software toolkits from
several corporate and academic sources, and several commercial
haptics-enabled applications, the field is experiencing rapid and
exciting growth.
Another distinction between haptic interface devices
is their intrinsic mechanical behavior. Impedance haptic
devices simulate mechanical impedance—they read
position and send force. Admittance haptic devices simulate
mechanical admittance—they read force and send
position. Simpler to design and much cheaper to produce,
impedance-type architectures are most common.
Admittance-based devices, such as the Haptic Master,3
are generally used for applications requiring high forces
in a large workspace.
Haptic interface devices are also classified by the
number of DOF of motion or force present at the device-body
interface—that is, the number of dimensions characterizing
the possible movements or forces exchanged
between device and operator. A DOF can be passive or
actuated, sensed or not sensed.
Characteristics commonly considered desirable for
haptic interface devices include
■ low back-drive inertia and friction;
■ minimal constraints on motion imposed by the device
kinematics so free motion feels free;
■ symmetric inertia, friction, stiffness, and resonant-frequency
properties (thereby regularizing the device
so users don’t have to unconsciously compensate for
parasitic forces);
■ balanced range, resolution, and bandwidth of position
sensing and force reflection; and
■ proper ergonomics that let the human operator focus when wearing or manipulating the haptic interface, since pain, or even discomfort, can distract the user, reducing overall performance.
We consider haptic rendering algorithms applicable
to single- and multiple-DOF devices.
System architecture for haptic rendering
Haptic-rendering algorithms compute the correct
interaction forces between the haptic interface representation
inside the virtual environment and the virtual
objects populating the environment. Moreover, haptic-rendering
algorithms ensure that the haptic device correctly
renders such forces on the human operator.
An avatar is the virtual representation of the haptic
interface through which the user physically interacts
with the virtual environment. Clearly the choice of avatar
depends on what’s being simulated and on the haptic
device’s capabilities. The operator controls the avatar’s
position inside the virtual environment. Contact between
the interface avatar and the virtual environment sets off
action and reaction forces. The avatar’s geometry and
the type of contact it supports regulates these forces.
Within a given application the user might choose
among different avatars. For example, a surgical tool
can be treated as a volumetric object exchanging forces
and positions with the user in a 6D space or as a pure
point representing the tool’s tip, exchanging forces and
positions in a 3D space.
Several components compose a typical haptic rendering
algorithm. We identify three main blocks, illustrated
in Figure 3.
Collision-detection algorithms detect collisions
between objects and avatars in the virtual environment
and yield information about where, when, and ideally
to what extent collisions (penetrations, indentations,
contact area, and so on) have occurred.
Force-response algorithms compute the interaction
force between avatars and virtual objects when a collision
is detected. This force approximates as closely as
possible the contact forces that would normally arise during
contact between real objects. Force-response algorithms
typically operate on the avatars’ positions, the
positions of all objects in the virtual environment, and
the collision state between avatars and virtual objects.
Their return values are normally force and torque vectors
that are applied at the device-body interface.
Hardware limitations prevent haptic devices from
applying the exact force computed by the force-response
algorithms to the user. Control algorithms command
the haptic device in such a way that minimizes the error
between ideal and applicable forces. The discrete-time
nature of the haptic-rendering algorithms often makes this difficult, as we explain further later in the article.
Desired force and torque vectors computed by force-response
algorithms feed the control algorithms. The
algorithms’ return values are the actual force and torque
vectors that will be commanded to the haptic device.
A typical haptic loop consists of the following
sequence of events:
■ Low-level control algorithms sample the position sensors
at the haptic interface device joints.
■ These control algorithms combine the information
collected from each sensor to obtain the position of
the device-body interface in Cartesian space—that is,
the avatar’s position inside the virtual environment.
■ The collision-detection algorithm uses position information
to find collisions between objects and avatars
and report the resulting degree of penetration or
indentation.
■ The force-response algorithm computes interaction
forces between avatars and virtual objects involved
in a collision.
■ The force-response algorithm sends interaction forces
to the control algorithms, which apply them on the
operator through the haptic device while maintaining
a stable overall behavior.
The simulation engine then uses the same interaction
forces to compute their effect on objects in the virtual
environment. Although there are no firm rules about how
frequently the algorithms must repeat these computations,
a 1-kHz servo rate is common. This rate seems to be
a subjectively acceptable compromise permitting presentation
of reasonably complex objects with reasonable
stiffness. Higher servo rates can provide crisper contact
and texture sensations, but only at the expense of reduced
scene complexity (or more capable computers).
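Put together, one servo iteration might look like the sketch below for a 1-DOF device. Every function is a placeholder standing in for the corresponding block above (sensor sampling, kinematics, collision detection, force response, and control), and all numbers are arbitrary example values.

def read_joint_sensors():
    """Placeholder: sample the device's joint position sensors."""
    return [0.08]                      # one joint angle (rad), arbitrary value

def forward_kinematics(joint_angles):
    """Placeholder: map joint angles to the avatar position (m)."""
    return 0.10 * joint_angles[0]

def detect_collision(x, x_wall=0.01):
    """Penetration depth into a wall occupying x < x_wall."""
    return max(x_wall - x, 0.0)

def force_response(penetration, k=2000.0):
    """Penalty force proportional to penetration depth."""
    return k * penetration

def command_device(force, f_max=5.0):
    """Control step: saturate the force before sending it to the device."""
    return max(-f_max, min(f_max, force))

def servo_tick():
    x = forward_kinematics(read_joint_sensors())
    return command_device(force_response(detect_collision(x)))

print(servo_tick())   # one iteration; a real loop repeats this at ~1 kHz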
The following sections explain the basic principles of
haptic-rendering algorithms, paying particular attention
to force-response algorithms. Although the ability
to detect collisions is an important aspect of computing
contact force response, given the familiarity of CG&A’s
readership with the topic, we don’t dwell on it here. The
geometric problem of efficiently detecting when and
where contact and interobject penetrations occur continues
to be an important research topic in haptics and
related fields. The faster real-time needs of haptic rendering
demand more algorithmic performance. One
solution is to accept less accuracy and use simpler collision
model geometries. Alternately, researchers are
adapting graphics-rendering hardware to enable fast
real-time collision detection among complex objects.
Lin and Manocha give a useful survey of collision-detection
algorithms for haptics.4
Computing contact-response forces
Humans perceive contact with real objects through
sensors (mechanoreceptors) located in their skin, joints,
tendons, and muscles. We make a simple distinction
between the information these two types of sensors can
acquire. Tactile information refers to the information
acquired through sensors in the skin with particular reference
to the spatial distribution of pressure, or more
generally, tractions, across the contact area. Kinesthetic
information refers to the information acquired
through the sensors in the joints. Interaction forces are
normally perceived through a combination of these two.
A tool-based interaction paradigm provides a convenient
simplification because the system need only render
forces resulting from contact between the tool’s
avatar and objects in the environment. Thus, haptic interfaces frequently utilize a tool handle as the physical interface for the user.
To provide a haptic simulation experience, we’ve
designed our systems to recreate the contact forces a
user would perceive when touching a real object. The
haptic interfaces measure the user’s position to recognize
if and when contacts occur and to collect information
needed to determine the correct interaction force.
Although determining user motion is easy, determining
appropriate display forces is a complex process and a
subject of much research. Current haptic technology
effectively simulates interaction forces for simple cases,
but is limited when tactile feedback is involved.
In this article, we focus our attention on force-response algorithms for rigid objects. Compliant object-response modeling adds a dimension of complexity
because of nonnegligible deformations, the potential
for self-collision, and the general complexity of modeling
potentially large and varying areas of contact.
We distinguish between two types of forces: forces
due to object geometry and forces due to object surface
properties, such as texture and friction.
Geometry-dependent force-rendering algorithms
The first type of force-rendering algorithms aspires to
recreate the force interaction a user would feel when
touching a frictionless and textureless object. Such interaction
forces depend on the geometry of the object being
touched, its compliance, and the geometry of the avatar
representing the haptic interface inside the virtual environment.
Although exceptions exist,5 the DOF necessary
to describe the interaction forces between an avatar and
a virtual object typically matches the actuated DOF of
the haptic device being used. Thus for simpler devices,
such as a 1-DOF force-reflecting gripper (Figure 2a), the
avatar consists of a couple of points that can only move
and exchange forces along the line connecting them. For
this device type, the force-rendering algorithm computes
a simple 1-DOF squeeze force between the index finger
and the thumb, similar to the force you would feel when
cutting an object with scissors. When using a 6-DOF haptic
device, the avatar can be an object of any shape. In
this case, the force-rendering algorithm computes all the
interaction forces between the object and the virtual
environment and applies the resultant force and torque
vectors to the user through the haptic device.
We group current force-rendering algorithms by the
number of DOF necessary to describe the interaction
force being rendered.
One-DOF interaction. A 1-DOF device measures
the operator’s position and applies forces to the operator
along one spatial dimension only. Types of 1-DOF
interactions include opening a door with a knob that is
constrained to rotate around one axis, squeezing scissors
to cut a piece of paper, or pressing a syringe’s piston
when injecting a liquid into a patient. A 1-DOF interaction
might initially seem limited; however, it can render
many interesting and useful effects.
Rendering a virtual wall—that is, creating the interaction
forces that would arise when contacting an infinitely
stiff object—is the prototypical haptic task. As one
of the most basic forms of haptic interaction, it often
serves as a benchmark in studying haptic stability.6–8
The discrete-time nature of haptic interaction means
that the haptic interface avatar will always penetrate
any virtual object. A positive aspect of this is that the
force-rendering algorithm can use information on how
far the avatar has penetrated the object to compute the interaction force. However, this penetration can cause
some unrealistic effects to arise, such as vibrations in
the force values, as we discuss later in the article. As Figure 4 illustrates, if we assume the avatar moves along the x-axis and x < xW describes the wall, the simplest algorithm to render a virtual wall is given by F = K(xW − x) if x < xW and F = 0 otherwise, where K represents the wall's stiffness and thus is ideally very large. More interesting effects can be accomplished for 1-DOF interaction.
Figure 4. Virtual wall concept, a 1-DOF interaction. The operator moves and feels forces only along one spatial dimension.
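A minimal sketch of the wall law above, assuming the wall occupies the region x < xW; the stiffness value is an arbitrary illustration.

def virtual_wall_force(x, x_wall=0.0, k=1000.0):
    """1-DOF virtual wall: F = K (xW - x) while the avatar is inside the
    wall region x < xW, and zero otherwise."""
    return k * (x_wall - x) if x < x_wall else 0.0

print(virtual_wall_force(-0.002))   # 2 mm inside the wall -> 2 N pushing out
print(virtual_wall_force(0.005))    # in free space        -> 0 N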
Two-DOF interaction. Examples of 2-DOF interactions
exist in everyday life—for example, using a
mouse to interact with a PC. Using 2-DOF interfaces to
interact with 3D objects is a bit less intuitive. It’s possible,
however, and is an effective way to interact with
simpler 3D virtual environments while limiting the
costs and complexity of haptic devices needed to render
the interactions. Two-DOF rendering of 3D objects
is, in some cases, like pushing a small ball over the surface
of a 3D object under the influence of gravity. Various
techniques enable this type of rendering by
projecting the ideal 3-DOF point-contact interaction
force on a plane,11,12 or by evaluating the height
change between two successive contact points on the
same surface.13
Three-DOF interaction. Arguably one of the most
interesting events in haptics’ history was the recognition,
in the early 1990s, of the point interaction paradigm's usefulness. This geometric simplification of the
general 6-DOF problem assumes that we interact with
the virtual world with a point probe, and requires that
we only compute the three interaction force components
at the probe’s tip. This greatly simplifies the interface
device design and facilitates collision detection and
force computation. Yet, even in this seemingly simple
case, we find an incredibly rich array of interaction possibilities
and the opportunity to address the fundamental
elements of haptics unencumbered by excessive
geometric and computational complexity.
To compute force interaction with 3D virtual objects,
the force-rendering algorithm uses information about
how much the probing point, or avatar, has penetrated
the object, as in the 1-DOF case. However, for 3-DOF
interaction, the force direction isn’t trivial as it usually
is for 1-DOF interaction.
Various approaches for computing force interaction
for virtual objects represented by triangular meshes exist.
Vector field methods use a one-to-one mapping between
position and force. Although these methods often work
well, they don’t record past avatar positions. This makes
it difficult to determine the interaction force’s direction
when dealing with small or thin objects, such as the interaction
with a piece of sheet metal, or objects with complex
shapes. Nonzero penetration of avatars inside
virtual objects can cause the avatars to cross through
such a thin virtual surface before any force response is
computed (that is, an undetected collision occurs). To
address the problems posed by vector field methods,
Zilles et al. and Ruspini et al. independently introduced
the god-object14 and proxy algorithms.15 Both algorithms
are built on the same principle: although we can’t stop
avatars from penetrating virtual objects, we can use additional
variables to track a physically realistic contact on
the object’s surface—the god object or proxy. Placing a
spring between avatar position and god object/proxy
creates a realistic force feedback to the user. In free space,
the haptic interface avatar and the god object/proxy are
collocated and thus the force response algorithm returns
no force to the user. When colliding with a virtual object,
the god object/proxy algorithm finds the new god
object/proxy position in two steps:
1. It finds a set of active constraints.
2. Starting from its old position, the algorithm identifies the new position as the point on the set of active constraints that is closest to the current avatar position.
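In the simplest case of a single active constraint plane, these two steps reduce to projecting the avatar position back onto the plane and attaching a spring between avatar and proxy. The sketch below assumes exactly that one-plane case with an illustrative stiffness; real implementations handle sets of constraint planes (typically via a small optimization over the active constraints).

import numpy as np

def update_proxy_single_plane(avatar, plane_n, plane_d):
    """Nearest point to the avatar on the boundary of the half-space n . x >= d,
    used as the new god-object/proxy position when that plane is the only
    active constraint. plane_n must be a unit vector."""
    signed_dist = float(plane_n @ avatar) - plane_d
    if signed_dist >= 0.0:                   # avatar outside the object: proxy = avatar
        return avatar.copy()
    return avatar - signed_dist * plane_n    # project back onto the surface

def coupling_force(avatar, proxy, k=1500.0):
    """Spring between avatar and proxy; this is the force fed back to the user."""
    return k * (proxy - avatar)

avatar = np.array([0.0, 0.0, -0.003])        # 3 mm below the plane z = 0
proxy = update_proxy_single_plane(avatar, np.array([0.0, 0.0, 1.0]), 0.0)
print(proxy, coupling_force(avatar, proxy))  # proxy on the surface, ~4.5 N upward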
Morgenbesser et al.’s introduction of force shading—
the haptic equivalent of Phong shading—successively
refined both algorithms.16 Whereas interpolated normals in graphic rendering yield smoother-looking meshes, interpolated normals in haptic rendering yield smoothly changing forces across an object's surface.
Walker et al. recently proposed an interesting variation
of the god-object/proxy algorithms applicable to
cases involving triangular meshes based on large quantities
of polygons.17
Terminology
In the introduction a number of terms originating from the context of haptic science and device design have already been used. In this chapter a systematic introduction to the area of designing haptic devices begins. The following sections explain the scientific and industrial disciplines participating in the research and development of haptic devices. Afterward, terms and their definitions are introduced and illustrated with examples showing how to characterize haptic systems based on concrete technical devices.
Scientific Disciplines as Part of Haptic Research:
In haptic science there are three groups of interest (fig. 2.1) with rather fluid borders between them: Scientists working in the area of “haptic perception” proceed according to strictly deductive scientific principles: from an observation a hypothesis is derived. For this hypothesis an experiment is designed that tests the point of the hypothesis while excluding other varying parameters. As a result the hypothesis is verified or falsified, leading to a new and improved hypothesis.
Research in the area of “haptic perception” is done by two scientific disciplines: Psychophysics and Neurobiology. Psychophysics deals with the analysis of the impression of physical stimuli - in the case of haptic perception this mainly refers to oscillations and forces of different spatial orientation. The aim of psychophysics is to create a model explaining perception. Neurobiology observes biologically measurable connections and analyzes the direct conversion of physical stimuli into neuronal signals and their processing within the brain. Both disciplines complement each other, so that the neuronal observation should be able to explain a part of the psychophysical model and vice versa. These scientific disciplines formulate technical tasks for the preparation of experiments, which are processed by two groups interested in “haptic synthesis” or “haptic measurement”, respectively.
Fig. 2.1 Overview about the disciplines participating in haptic research.
On an alternative track, both groups receive assignments from industry, which itself makes use of the knowledge gathered by research on haptic perception. These groups work according to engineering solution strategies: a set of requirements is derived from a technical question based on the current state of knowledge. A functional prototype, and later a product fulfilling the requirements, is designed in a developmental process accompanied by continuous tracking of the prior assumptions and their implications. The resulting product can then be used for the analysis of psychophysical questions or, respectively, as a product of the gaming, automotive or aviation industry.
In the case of the generation of haptic impressions for Virtual-Reality (VR) applications the technical requirements typically ask for tactile, kinaesthetic or combined feedback systems. In that area the emphasis is on the correct choice of actuators, control and driver electronics and on the processing and transmission of signals. Due to the coupling of devices and time-discrete simulation systems a consideration of discretization-effects and their influence on the haptic quality of the impression is necessary. In the case of telemanipulation systems technical challenges are comparable. The main difference lies in the necessary measurement technology for the acquisition of haptic object properties. Additionally, the control engineering questions are more complex, as this area typically deals with closed-loop systems with unknown loads on both ends.
Terms and Terminology Used for the Description of Haptic Systems:
The definition of the terminology within the context of haptic systems is subject to the current standard ISO 9241-910. Many of the definitions used in this book follow the terminology presented there. According to the author’s experience, all these terms have the status of recommendations shared by a large number of researchers associated with the haptic area. However, there is no binding consensus on their usage within the haptic community, so many current and future papers differ from the definitions presented here. The nomenclature mentioned here is based on prior publications, especially by HAYWARD [90], COLGATE [176], HANNAFORD [85], BURDEA [34], ADAMS [2] and many papers by other authors.
Basic Concepts of Haptics:
Haptics means the combined sensation of mechanical, thermal and noci-perception (fig. 2.2). It is more or less defined by the exclusion of the optical, acoustic, olfactory and gustatory perception from the sum of sensory perceptions. As a result haptics consists of nociceptive, thermoceptive, kinaesthetic and tactile perceptions. The sense of balance takes an exceptional position, as it is not counted among the five human senses having receptors of their own. Yet it does exist, making use of all the other senses’ receptors, especially the haptic ones.
Haptics describes the sensory as well as the motor capabilities within the skin,joints, muscles and tendons.
Tactile means the mechanical interaction with the skin. Therefore tactile perception is the sensation of exclusively mechanical interaction. Please note that tactile perception is not exclusively bound to forces or movements.
Kinaesthetics describes both the actuatory and sensory capabilities of muscles and joints. It refers to their forces, torques, movements, positions and angles. Due to this definition, any kinaesthetic interaction also has a tactile component.
Fig. 2.2 Distribution of senses.
2.2.2 Definition of Haptic Systems:
The technical terminology is listed from the special to the general and illustrated
by block diagrams. The arrows between the components of the block diagrams may
represent different kinds of information depending on the devices they refer to. They
remain unlabeled. Haptic devices are capable of transmitting elongations, forces and
temperature differences and in a few realizations they also stimulate pain receptors.
The terms “system”, “device” and “component” are not defined on an interdisciplinary basis. Depending on one’s point of view, the same object can be e.g. “a device” for a hardware designer, “a system” for the software engineer, or “just a component” for another hardware engineer. These terms are nevertheless part of any engineering discipline and are used accordingly here, but should be read with this ambiguity in mind.
A haptic device is a system generating an output which can be perceived haptically.
It has (fig. 2.3) at least one output, but not necessarily any input. The tactile
markers on the keys F and J of a keyboard represent information for the positioning
of the index finger. By these properties alone the keys are already tactile devices. On closer inspection the key itself shows a haptically notable point of actuation, the haptic click. This information is transmitted in a kinaesthetic and tactile way by the interaction of the key’s mechanics with the muscles and joints and by the force being transmitted through the skin. Such a key is a haptic device with no changing input and two outputs.
A user (in the context of haptic systems) is a receiver of haptic information.
A haptic controller describes a component of a haptic system for processing
haptic information flows and improving transmission. Quite pragmatic in the case
of telemanipulation systems these kinds of controllers are frequently either a spring damper coupling element between end-effector and the operating element or a local
abstraction model of the area of interaction to compensate transmission delays. In
the case of a haptic simulator it is quite frequently a simple LTI-model with a high
in- and output rate. The LTI model itself is then updated on a lower frequency than
the actual speed of the haptic in- and output.
Fig. 2.3 Haptic device, user and controller.
Haptic interaction describes the haptic transmission of information. This transmission
can be bi- or unidirectional (fig. 2.4). Moreover, specifically tactile (unidirectional)
or kinaesthetic (uni- or bidirectional) interaction may happen. A tactile
marker like embossed printing on a bill can communicate tactile information (the
bill’s value) as a result of haptic interaction.
Fig. 2.4 Haptic interaction.
The addressability of haptic systems refers to the subdivision (spatial or temporal)
of an output signal of a device (frequently a force) or of the user (frequently a
position).
The resolution of a haptic system refers to the capability to detect a subdivision
(spatial or temporal) of an input signal. With reference to a device this is in accordance with the measuring accuracy. With respect to the user this corresponds to his
perceptual resolution.
A haptic marker refers to a mark communicating information about the object
carrying the marker by way of a defined code of some kind. Examples are markers
in Braille on bills or road maps. Frequently these markers are just tactile, but there
are also kinaesthetically effective ones marking sidewalks and road crossings for
visually handicapped people.
A haptic display is a haptic device permitting haptic interaction, whereby the
transmitted information is subject to change (fig. 2.5). There are purely tactile as
well as kinaesthetic displays.
A tactor is a haptic purely tactile haptic display generating a dynamic and oscillating
output. They usually provide a translatory output (e.g. fig. 9.19), but could
also be rotatory (e.g. fig. 2.14).
Fig. 2.5 Haptic display.
A haptic interface is a haptic device permitting a haptic interaction, whereby the
transmitted information is subject to change and a measure of the haptic interaction
is acquired (fig. 2.6). A haptic interface always refers to data and device.
Force-Feedback (FFB) refers to the information transmitted by kinaesthetic interaction
(fig. 2.6). It is a term coined by numerous commercial products like FFBjoysticks,
FFB-steering wheels and FFB-mice. Due to its usage in advertising, the
term Force Feedback (FFB) is seldom consistent with the other terminology given
here.
A haptic manipulator is a system interacting mechanically with objects whereby
continuously information about positions in space and forces and torques of the interaction
is acquired.
>
Fig. 2.6 Haptic interface.
A telemanipulation system refers to a system enabling a spatially separated haptic
interaction with a real physical object. There are purely mechanical telemanipulation
systems (fig. 2.7), scaling forces and movements via a lever-cable-system. In
the area of haptic interfaces, there are mainly electromechanic telemanipulation systems
according to figure 2.8 relevant. These systems allow an independent scaling
of forces and positions and an independent closed-loop control of haptic interface
and manipulator.
Fig. 2.7 Mechanical telemanipulator for handling dangerous goods (CRL model L) .
A haptic assistive system is a system adding haptic information to a natural interaction
(fig. 2.9). For this purpose object or interaction properties are measured via
a sensor and used to add valuable information in the interaction path. An application
would be a vibrating element indicating the leaving of a lane in a drive assistance
system.
A haptic simulator is a system enabling interaction with a virtual object (fig. 2.10).
It always requires a computer for the calculation of the object’s physical properties.
Haptic simulators and simulations are important incitements for the development of haptic devices. They can be found in serious training applications, e.g. for surgeons,
as well as in gaming applications for private use (see also chapter 13).
haptic devices. They can be found in serious training applications, e.g. for surgeons,
as well as in gaming applications for private use (see also chapter 13).
2.2.3 Parameters of Haptic Systems:
In [156] LAWRENCE defines the transparency T as a factor between impedance as
the input source of the haptic interface Zin and the actually felt output impedance
Zout of the device.
The principle of transparency is mainly a tool for control engineering purposes
analyzing stability and should be within the range ±3dB. T may be regarded as
the sole established, frequency dependent, characteristic value of haptic interfaces.
Frequently only the transparency‘s magnitude is considered. A transparency close
to “one” shows that the input impedance is not altered by the technical system. The
user of the haptic device being the end of the transmission chain experiences the
haptic input data in a pristine way. The concept of transparency can be applied to
telemanipulation systems and as well as to haptic simulators .
In [39] COLGATE describes the impedance width (Z-width) of a haptic system
Z−width = Zmax−Zmin
as the difference between the maximum load Zmax and the perceivable friction
and inertia at free space movement Zmin. The Z-width describes the potential of devices
and enables the comparability between them, after technical changes, e.g. by
the integration of a closed-loop control and a force measurement.
Active haptic devices are systems requiring an external energy source for the
display of haptic information. Usually, these are at least haptic displays. Passive
haptic devices, on the contrary, are systems transmitting haptic information solely
by their shape. This may lead to a false conclusion: A passive system in a control
engineering sense is a system with a negative energy flow at its input, e.g. a system
not emitting energy into the outside world. This concept of passive control is an
important stability criterion which will be discussed in detail in subsection 7.3.3.
For the moment, it should be noted that a passive haptic system is not necessarily
identical with a haptic system designed according to the criterion of passivity1.
The mechanical impedance Z is the complex coefficient between force F and velocity
v respectively torque M and angular velocityΩ. Impedance and its reciprocal
value - the mechanical admittance Y - are used for the mathematical description of
dynamic technical systems. High impedance means that a system is “stiff” or “inert”
and “grinds”. Low impedance describes a “light” or “soft” and “sliding” system.
The concept of impedance is applied to haptic systems by way of the terms displayimpedance
or interface-impedance Zd. It describes the impedance a system shows
when it is moved at its mechanical output (e.g. its handle).The concept of impedance
cannot be applied only to technical systems, but also to a simplified model of the
user and his mechanical properties. This is described by the term user-impedance
ZH. User-impedance - how stiff a user tends to be - can be influenced at will up
to a certain point. Shaking hands can either be hard or soft depending on its frequency.
The mechanical resistance of a handshake is lower at low frequencies and higher at high frequencies resulting simply from the inertia of the hand’s material.
Detailed descriptions of the building of models and the application of the concept
of user-impedance are given in section 4.2. An introduction into calculating with a
complex basis and mechanical systems is given in appendix 16. Understanding complex
calculation rules and the mechanical impedances are fundamental to the design
of haptic devices in the context of this book. Therefore it is recommended to update
one’s knowledge by self-studies of the relevant literature of electromechanics [158]
and control-engineering [167].
2.2.4 Characterization of Haptic Object Properties:
Besides the terminology for haptic systems, there is another group of terms describing
solely haptic objects and their properties:
Haptic texture refers to those object properties, which can exclusively be felt by
touch. The roughness of a surface, the structure of leather, even the haptic markers
already mentioned are haptic textures of the objects they are located on. In some
cases a differentiation is made between tangential and normal textures, whereby
the directional information refers to the skin’s surface. This specific differentiation
is more a result of technical limitations, than of a specialty of tactile perceptions
as tactile displays are frequently unable to generate a feedback covering a two or
three-dimensional movement.
Haptic shape refers to object properties which can mainly be felt kinaesthetically.
This can be the shape of a cup held in one’s hand. But it can also be the shape
and geometric design of a table rendered to be touched in a virtual environment.
In fact terms like texture and shape are used analogically to their meaning in
graphical programming and software techniques for 3D objects, where meshes provide
shape and surface-textures give color and fine structures. However, in comparison
with graphical texture, haptic texture mainly describes three-dimensional
surface properties incorporating properties like adhesion or friction, i.e. a realistic
haptic texture is much more complex in its parameters than a typical graphical
texture, even when considering bump-, specular or normal-maps. Therefore numerous
haptic surface properties, e.g. specific haptic surface effects are defined and
described from the perspective of a software engineer. These surface effects are
partly derived from physical equivalents of real objects, narrowed down to softwaremotivated
concepts in order to increase the degree of realism of haptic textures:
• Surface friction describes the viscose (velocity-proportional) friction of a contact
point on a surface.
• Surface adhesion Surface adhesion describes a force binding the movement of
a contact point to a surface. This concept allows simulating magnetic or sticking
effects.
• Roughness describes an uniform, sinoid structure of a small, defined amplitude
making the movement of a contact point on a surface appears rough.
Tacton refers to a sequence of stimuli adressing the tactile sense. It usually encodes
an event within the sequence’s pattern. The stimuli vary in intensity and frequency.
Both, stimuli and tacton, may even be overlayed with a time-dependent
amplitude modulation, such as fade-in or fade-out.
2.2.5 Technical examples:
There are several commercial haptic control units available on the market for the application
in design, CAD and modeling. One major player on the market is SensAble
with their PHANTOM R -series and the actually most low-cost product PHANTOM
Omni (fig. 2.11a). The PHANTOM-series can most easily be identified by the free
positioning of a pen-like handle in a three dimensional space. The position and
orientation of this handle is measured in three translational and three rotational degrees
of freedom. Depending on the model of the series, the tip force can act on the
handle in at least three translational dimensions. The generation of forces is done
via electrodynamic actuators; depending on the model these are either mechanically
or electronically commutated. The actuators are located within the device’s
basis and transmit their mechanical energy via levers and Bowden cables on the
corresponding joints. As a result of changing level-lengths the transmission-ratio
of the PHANTOM devices is nonlinear. For the static situation these changes are
compensated within the software driver. The PHANTOM devices are connected to
common PCs. The electrical interface used depends largely on the device’s product
generation and ranges from parallel ports to IDE cards and FireWire connectors.
The PHANTOM devices from SensAble are haptic devices (fig. 2.11c) primarily
addressing the kinaesthetic perception of the whole hand and the arm. As the force
transmission happens via a hand-held pen tactile requirements are automatically relevant
for the design too. This bidirectional haptic display is a haptic interface to the
user transmitting force information of a software application in a PC and feeding
back positioning information to her or him.
The network model of one degree of freedom (fig. 2.11b) shows the electronic
commutated electrodynamic motor as an idealized torque source M0 with inertia of
Θ of the rotor and a rotary damping dR resulting from bearings and links. By the
use of a converter resembling levers the rotary movement is transformed in a linear
movement with a force F0 and a velocity v0. An inertia m describes the mass of
the hand-held pen. The portion of the generated force Fout is dependent on the ratio
between the sum of all display-impedances against the user impedance ZH.
2.2.5.2 Reconfigurable Keyboard:
The reconfigurable keyboard (fig. 2.12a) is made of a number of independent actuators
arranged in a matrix. The actuators are electrodynamic linear motors with
a moving magnet. Each actuator can be controlled individually either as an openloop
controlled force source or as a positioning actuator by a closed-loop control.
When being used as force source, the primary purpose of the actuator is to follow
a configurable force/displacement curve of a typical key. The application of this reconfigurable
keyboard [46] is an alternative to the classical touchscreen - a surface
providing different haptically accessible functions depending on a selection within
a menu. For this purpose single actuators can be combined to larger keys and may
change in size and switching characteristics.
The reconfigurable keyboard is a haptic device (fig. 2.12c) mainly addressing the
kinaesthetic sensation, but has strong tactile properties, too. The user of the device is
the controller of the keyboard, receiving haptic information in form of the changing
shape of keys and their switching characteristics during interaction. The keyboard is at least a haptic display. As it communicates with another unit about the switching
event and the selection, it is also a haptic interface.
The network model (fig. 2.12b) of a single key shows the open-loop controlled
force source F0 of the electrodynamic actuator, the mass of the moving magnet m
and the friction in the linear guides d. Elasticity does not exist, as the design does
not contain any spring. This is in contrast to what could be expected from the typical
designs of electrodynamic speakers and their membranes. The actuator is capable
of generating a force Fout dependent on the ratio between the complex impedance
of the haptic display ZD = sm+d and the user’s impedance ZH.
2.2.5.3 Tactile Pin-Array:
Tactile pin-arrays are the archetype of all systems generating spatially coded information
for the haptic sense. Conceptually they are based on Braille-displays whose
psychophysical impression has been studied comprehensively since the middle of
the 20th century, e.g. by BÉKÉSY [23]. Many approaches were made ranging from
electromagnetic actuators of dot matrix printers [232] to piezoelectric bending actuators
[149] and pneumatic [292], hydraulic [231], electrostatic [290] and thermal [5]
actuators. Tactile pin arrays mainly focus on the skin’s stimulation in normal direction.
Only lately spatially resolved arrays with lateral force generation are receiving
an increased interest [142].
A tactile pin-array with excitation in normal skin direction is a haptic device
(fig. 2.13c) mainly addressing the tactile perception. The user is in continuous haptic
interaction with the device and receives haptic information coded in changing
pin heights. A tactile pin array is a haptic display. In contrast to the systems examined
before this time the user’s interaction does not include any user-feedback. As a
result the device is not necessarily a haptic interface2.
In the mechanical network model (fig. 2.13) a tactile pin array corresponds to
a positioning or velocity source v with a mechanical stiffness k in series to it (a
combination of actuator and kinematics). In a stiff design the mechanical admittance
of the technical system is small resulting in the elongation being totally unaffected
by the user’s touch. The system is open-loop position controlled.
2.2.5.4 Vibration-Motor:
Vibration-motors are used to direct attention to a certain event. There is a vibration
motor similar to figure 2.14a within every modern mobile phone, made of a rotary
actuator combined with a mass located eccentrically on its axis. Its rotation speed is
controlled by the voltage applied. It typically ranges from 7000 to 12000 rotations per minute (117 to 200 Hz). It is possible to encode information into the felt vibration
by varying the control voltage. This is often done with mobile phones in order
to make the ring tone haptically perceptible.
A vibration-motor is a haptic device (fig. 2.14c) addressing tactile perception.
The user is haptically interacting with the device and receives haptic information
in the form of oscillation coded in frequency and amplitude. A vibration-motor is a
pure form of a haptic display, or more precisely a purely tactile display.
With vibration motors the relevant force effect is the centripetal force. Assuming
a rotational speed of ω = 2π 10000RPM
60 Hz and a moving mass of 0.5g on a radius
of 2mm a force amplitude of F = mω2 r = 1.1N is generated, producing a sinoid
force with a peak-to-peak amplitude of 2.2N. This is an extraordinary value of an
actuator with a length of only 20 mm. Considering the network model (fig. 2.14b)
the vibratory-motor can be regarded as a force-source with sinoid output. It has to
accelerate a housing (e.g. the phone) with a mass m which is coupled to the user
via an elastic material, e.g. clothes. It is important for the function of the device
that the impedance of the branch with spring/damper coupling and user-impedance
ZH is large against the mass m. This guarantees that most of the vibration energy is
directed to the user, thus generating a perception.
Up to this point, terminology from haptic science and device design has already been used without precise definition. In this chapter a systematic introduction into the area of designing haptic devices begins. The following sections explain the scientific and industrial disciplines participating in the research and development of haptic devices. Afterwards, terms and their definitions are introduced and illustrated with examples of how to characterize haptic systems based on concrete technical devices.
2.1 Scientific Disciplines as Part of Haptic Research:
In haptic science there are three groups of interest (fig. 2.1) with quite fluent borders in between. Scientists working within the area of "haptic perception" proceed according to strictly deductive scientific principles: Resulting from an observation, a hypothesis is derived. For this hypothesis an experiment is designed, testing the point of the hypothesis while excluding other varying parameters. As a result the hypothesis is verified or falsified, leading to a new and improved hypothesis.
Research in the area of "haptic perception" is done by two scientific disciplines: psychophysics and neurobiology. Psychophysics deals with the analysis of the impression of physical stimuli - in the case of haptic perception this mainly refers to oscillations and forces of different spatial orientation. The aim of psychophysics is to create a model explaining perception. Neurobiology observes biologically measurable connections and analyzes the direct conversion of physical stimuli into neuronal signals and their processing within the brain. Both disciplines complement each other, so that the neuronal observation should be able to explain a part of the psychophysical model and vice versa. These scientific disciplines formulate technical tasks for the preparation of experiments, which are processed by two groups interested in "haptic synthesis" or "haptic measurement", respectively.
Fig. 2.1 Overview of the disciplines participating in haptic research.
On an alternative track, both groups get assignments from industry, which itself makes use of the knowledge gathered by research on haptic perception. These groups work according to engineering solution strategies: An assumption of requirements is derived from a technical question based on the current state of knowledge. A functional prototype and later a product fulfilling the requirements is designed in a developmental process accompanied by a continuous tracking of the prior assumptions and their meaning. The product obtained can then be used for the analysis of psychophysical questions or, respectively, as a product of the gaming, automotive or aviation industry.
In the case of the generation of haptic impressions for Virtual-Reality (VR) applications, the technical requirements typically ask for tactile, kinaesthetic or combined feedback systems. In that area the emphasis is on the correct choice of actuators, control and driver electronics, and on the processing and transmission of signals. Due to the coupling of devices with time-discrete simulation systems, a consideration of discretization effects and their influence on the haptic quality of the impression is necessary. In the case of telemanipulation systems the technical challenges are comparable. The main difference lies in the measurement technology necessary for the acquisition of haptic object properties. Additionally, the control engineering questions are more complex, as this area typically deals with closed-loop systems with unknown loads on both ends.
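As a minimal illustration of such a discretization effect (a sketch, not taken from the source; the wall stiffness, update rate and penetration velocity are hypothetical), consider a virtual wall rendered as a pure spring whose force is only updated at discrete sample instants and held constant in between:

def wall_force(x, k=2000.0):
    """Spring wall at x = 0: restoring force in N for a penetration x > 0 (x in m)."""
    return -k * x if x > 0.0 else 0.0

def simulate(fs=1000.0, v=0.1, t_end=0.01, k=2000.0):
    """Constant-velocity penetration v (m/s) into the wall, sampled and held at rate fs (Hz)."""
    dt = 1.0 / fs
    t, x = 0.0, 0.0
    log = []
    while t <= t_end:
        f_held = wall_force(x, k)                    # force updated only at sample instants (zero-order hold)
        f_cont = wall_force(x + 0.5 * v * dt, k)     # force a continuous wall would show mid-interval
        log.append((t, x, f_held, f_cont))
        x += v * dt
        t += dt
    return log

if __name__ == "__main__":
    for t, x, f_held, f_cont in simulate()[:5]:
        print(f"t={t*1e3:4.1f} ms  x={x*1e3:4.2f} mm  held={f_held:6.2f} N  continuous={f_cont:6.2f} N")

The held force always lags the growing penetration; this lag is one of the discretization effects that degrade the perceived quality of a rendered contact.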
2.2 Terms and Terminology Used for the Description of Haptic Systems:
The definition of the terminology within the context of haptic systems is the subject of the current ISO 9241-910 standard. Many of the definitions used in this book follow the terminology presented there. According to the author's experience, all these terminologies have the status of recommendations shared by a large number of researchers associated with the haptic area. However, there is no binding consensus on their usage within the haptic community, so that many current and future papers differ from the definitions presented here. The nomenclature mentioned here is based on publications prior to this material, especially by HAYWARD [90], COLGATE [176], HANNAFORD [85], BURDEA [34], ADAMS [2] and many papers by other authors.
2.2.1 Basic Concepts of Haptics:
Haptics means the combined sensation of mechanical, thermal and noci-perception (fig. 2.2). It is more or less defined by the exclusion of the optical, acoustic, olfactory and gustatory perception from the sum of sensory perceptions. As a result, haptics consists of nociceptive, thermoceptive, kinaesthetic and tactile perceptions. The sense of balance takes an exceptional position, as it is not counted among the five human senses having receptors of their own. Yet it really exists, making use of all other senses' receptors, especially the haptic ones.
Haptics describes the sensory as well as the motor capabilities within the skin, joints, muscles and tendons.
Tactile means the mechanical interaction with the skin. Therefore tactile perception is the sensation of exclusively mechanical interaction. Please note that tactile perception is not exclusively bound to forces or movements.
Kinaesthetics describes both the actuatory and the sensory capabilities of muscles and joints. It refers to their forces, torques, movements, positions and angles. Due to this definition, any kinaesthetic interaction also has a tactile component.
Fig. 2.2 Distribution of senses.
2.2.2 Definition of Haptic Systems:
The technical terminology is listed from the special to the general and illustrated
by block diagrams. The arrows between the components of the block diagrams may
represent different kinds of information depending on the devices they refer to. They
remain unlabeled. Haptic devices are capable of transmitting elongations, forces and
temperature differences and in a few realizations they also stimulate pain receptors.
The terms “system”, “device” and “component” are not defined on an interdisciplinary basis. Depending on one's point of view, the same object can be e.g. “a device” for a hardware designer, “a system” for the software engineer, or “just a component” for another hardware engineer. These terms are nevertheless part of any engineering discipline and are used accordingly here, but should be read with this ambiguity in mind.
A haptic device is a system generating an output which can be perceived haptically.
It has (fig. 2.3) at least one output, but not necessarily any input. The tactile
markers on the keys F and J of a keyboard represent information for the positioning
of the index finger. By these properties the keys are already tactile devices. On closer inspection, the key itself also shows a haptically notable point of actuation, the haptic click. This information is transmitted in a kinaesthetic and tactile way by the interaction of the key's mechanics with the muscles and joints and by the force being transmitted through the skin. Such a key is a haptic device with two outputs but no changing input.
A user (in the context of haptic systems) is a receiver of haptic information.
A haptic controller describes a component of a haptic system for processing haptic information flows and improving their transmission. In the pragmatic case of telemanipulation systems, these controllers are frequently either a spring-damper coupling element between end-effector and operating element or a local abstraction model of the area of interaction to compensate transmission delays. In the case of a haptic simulator it is quite frequently a simple LTI model with a high in- and output rate. The LTI model itself is then updated at a lower rate than the actual speed of the haptic in- and output.
Fig. 2.3 Haptic device, user and controller.
Haptic interaction describes the haptic transmission of information. This transmission
can be bi- or unidirectional (fig. 2.4). Moreover, specifically tactile (unidirectional)
or kinaesthetic (uni- or bidirectional) interaction may happen. A tactile
marker like embossed printing on a bill can communicate tactile information (the
bill’s value) as a result of haptic interaction.
Fig. 2.4 Haptic interaction.
The addressability of haptic systems refers to the subdivision (spatial or temporal)
of an output signal of a device (frequently a force) or of the user (frequently a
position).
The resolution of a haptic system refers to the capability to detect a subdivision
(spatial or temporal) of an input signal. With reference to a device this is in accordance with the measuring accuracy. With respect to the user this corresponds to his
perceptual resolution.
A haptic marker refers to a mark communicating information about the object
carrying the marker by way of a defined code of some kind. Examples are markers
in Braille on bills or road maps. Frequently these markers are just tactile, but there
are also kinaesthetically effective ones marking sidewalks and road crossings for
visually handicapped people.
A haptic display is a haptic device permitting haptic interaction, whereby the
transmitted information is subject to change (fig. 2.5). There are purely tactile as
well as kinaesthetic displays.
A tactor is a purely tactile haptic display generating a dynamic, oscillating output. Tactors usually provide a translatory output (e.g. fig. 9.19), but can also be rotatory (e.g. fig. 2.14).
Fig. 2.5 Haptic display.
A haptic interface is a haptic device permitting a haptic interaction, whereby the
transmitted information is subject to change and a measure of the haptic interaction
is acquired (fig. 2.6). A haptic interface always refers to data and device.
Force-Feedback (FFB) refers to the information transmitted by kinaesthetic interaction (fig. 2.6). It is a term coined by numerous commercial products like FFB-joysticks, FFB-steering wheels and FFB-mice. Due to its usage in advertising, the term Force Feedback (FFB) is seldom consistent with the other terminology given here.
A haptic manipulator is a system interacting mechanically with objects, whereby information about positions in space and about the forces and torques of the interaction is continuously acquired.
Fig. 2.6 Haptic interface.
A telemanipulation system refers to a system enabling a spatially separated haptic interaction with a real physical object. There are purely mechanical telemanipulation systems (fig. 2.7), scaling forces and movements via a lever-and-cable system. In the area of haptic interfaces, mainly electromechanical telemanipulation systems according to figure 2.8 are relevant. These systems allow an independent scaling of forces and positions and an independent closed-loop control of haptic interface and manipulator.
Fig. 2.7 Mechanical telemanipulator for handling dangerous goods (CRL model L).
A haptic assistive system is a system adding haptic information to a natural interaction (fig. 2.9). For this purpose, object or interaction properties are measured via a sensor and used to add valuable information to the interaction path. An application would be a vibrating element indicating lane departure in a driver assistance system.
A haptic simulator is a system enabling interaction with a virtual object (fig. 2.10).
It always requires a computer for the calculation of the object’s physical properties.
Haptic simulators and simulations are important drivers for the development of haptic devices. They can be found in serious training applications, e.g. for surgeons, as well as in gaming applications for private use (see also chapter 13).
2.2.3 Parameters of Haptic Systems:
In [156] LAWRENCE defines the transparency T as the ratio between the impedance at the input of the haptic interface Zin and the output impedance Zout actually felt at the device. The concept of transparency is mainly a tool for control engineering purposes when analyzing stability, and its magnitude should lie within a range of ±3 dB. T may be regarded as the sole established, frequency-dependent characteristic value of haptic interfaces. Frequently only the transparency's magnitude is considered. A transparency close to “one” shows that the input impedance is not altered by the technical system: the user of the haptic device, being the end of the transmission chain, experiences the haptic input data in a pristine way. The concept of transparency can be applied to telemanipulation systems as well as to haptic simulators.
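The following minimal sketch (not taken from the source; the impedance values are purely illustrative) shows how the magnitude of the transparency could be checked against the ±3 dB range mentioned above:

import math

def transparency_db(z_out, z_in):
    """Magnitude of T = Z_out / Z_in expressed in dB."""
    return 20.0 * math.log10(abs(z_out / z_in))

# Hypothetical complex impedances in N*s/m: the device adds some damping and mass.
z_in = complex(10.0, 2.0)      # impedance fed into the haptic interface
z_out = complex(12.0, 2.5)     # impedance actually felt at the handle
db = transparency_db(z_out, z_in)
print(f"|T| = {db:+.2f} dB -> {'within' if abs(db) <= 3.0 else 'outside'} the +/-3 dB range")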
In [39] COLGATE describes the impedance width (Z-width) of a haptic system,

Z-width = Zmax − Zmin,

as the difference between the maximum displayable load Zmax and the perceivable friction and inertia at free-space movement Zmin. The Z-width describes the potential of devices and makes them comparable, also after technical changes, e.g. after the integration of a closed-loop control and a force measurement.
Active haptic devices are systems requiring an external energy source for the
display of haptic information. Usually, these are at least haptic displays. Passive
haptic devices, on the contrary, are systems transmitting haptic information solely
by their shape. This may lead to a false conclusion: A passive system in a control engineering sense is a system with a non-positive net energy flow out of its ports, i.e. a system not emitting more energy into the outside world than was put into it. This concept of passive control is an important stability criterion which will be discussed in detail in subsection 7.3.3. For the moment, it should be noted that a passive haptic system is not necessarily identical with a haptic system designed according to the criterion of passivity1.
The mechanical impedance Z is the complex ratio between force F and velocity v, or respectively between torque M and angular velocity Ω. Impedance and its reciprocal value - the mechanical admittance Y - are used for the mathematical description of dynamic technical systems. High impedance means that a system is “stiff” or “inert” and “grinds”; low impedance describes a “light” or “soft” and “sliding” system. The concept of impedance is applied to haptic systems by way of the terms display impedance or interface impedance Zd. It describes the impedance a system shows when it is moved at its mechanical output (e.g. its handle). The concept of impedance can not only be applied to technical systems, but also to a simplified model of the user and his mechanical properties. This is described by the term user impedance ZH. The user impedance - how stiff a user tends to be - can be influenced at will up to a certain point. A handshake, for example, can be either hard or soft depending on its frequency: the mechanical resistance of a handshake is lower at low frequencies and higher at high frequencies, resulting simply from the inertia of the hand's tissue.
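As a small numeric illustration of the impedance concept (a sketch, not taken from the source; the lumped parameters are hypothetical), the impedance Z = F/v of a simple mass-spring-damper model can be evaluated over frequency:

import math

def mechanical_impedance(m, d, k, f):
    """Z(jw) = jw*m + d + k/(jw) of a lumped mass m (kg), damper d (N*s/m) and spring k (N/m)."""
    w = 2.0 * math.pi * f
    return complex(0.0, w * m) + d + k / complex(0.0, w)

# Hypothetical hand-held knob: m = 50 g, d = 1 N*s/m, k = 300 N/m.
for f in (1.0, 10.0, 100.0):
    z = mechanical_impedance(0.05, 1.0, 300.0, f)
    print(f"f = {f:5.1f} Hz   |Z| = {abs(z):7.2f} N*s/m")

At low frequencies the spring dominates, at high frequencies the mass dominates; the "soft" region in between corresponds to the lowest impedance.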
Detailed descriptions of the modeling and the application of the concept of user impedance are given in section 4.2. An introduction to calculation with complex quantities and to mechanical networks is given in appendix 16. Understanding the rules of complex calculation and the mechanical impedances is fundamental to the design of haptic devices in the context of this book. It is therefore recommended to refresh this knowledge by self-study of the relevant literature on electromechanics [158] and control engineering [167].
2.2.4 Characterization of Haptic Object Properties:
Besides the terminology for haptic systems, there is another group of terms describing
solely haptic objects and their properties:
Haptic texture refers to those object properties which can exclusively be felt by touch. The roughness of a surface, the structure of leather, even the haptic markers already mentioned are haptic textures of the objects they are located on. In some cases a differentiation is made between tangential and normal textures, whereby the directional information refers to the skin's surface. This specific differentiation is more a result of technical limitations than of a specialty of tactile perception, as tactile displays are frequently unable to generate a feedback covering a two- or three-dimensional movement.
Haptic shape refers to object properties which can mainly be felt kinaesthetically.
This can be the shape of a cup held in one’s hand. But it can also be the shape
and geometric design of a table rendered to be touched in a virtual environment.
In fact, terms like texture and shape are used analogously to their meaning in graphical programming and software techniques for 3D objects, where meshes provide shape and surface textures give color and fine structure. However, in comparison with a graphical texture, a haptic texture mainly describes three-dimensional surface properties incorporating properties like adhesion or friction; i.e. a realistic haptic texture is much more complex in its parameters than a typical graphical texture, even when considering bump-, specular- or normal-maps. Therefore numerous haptic surface properties, i.e. specific haptic surface effects, are defined and described from the perspective of a software engineer. These surface effects are partly derived from physical equivalents of real objects, narrowed down to software-motivated concepts in order to increase the degree of realism of haptic textures (a minimal code sketch after the following list illustrates how such effects can be combined):
• Surface friction describes the viscous (velocity-proportional) friction of a contact point on a surface.
• Surface adhesion describes a force binding the movement of a contact point to a surface. This concept allows simulating magnetic or sticking effects.
• Roughness describes a uniform, sinusoidal structure of a small, defined amplitude making the movement of a contact point on a surface appear rough.
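A minimal sketch of how such surface effects can be combined in a haptic rendering loop (not from the source; the model and all parameters are illustrative, with the roughness modeled as a sinusoidal height profile):

import math

def surface_forces(x, z, v, k_adh=50.0, b_fric=2.0, a_rough=0.0003, lam=0.002):
    """
    Forces on a contact point at lateral position x (m), height z (m) above the
    nominal surface, moving laterally with velocity v (m/s).
    Returns (normal force, lateral force) in N.
    """
    # roughness: sinusoidal height profile with small amplitude a_rough and period lam
    z_surface = a_rough * math.sin(2.0 * math.pi * x / lam)
    # surface adhesion: spring-like pull binding the contact point to the (rough) surface
    f_normal = -k_adh * (z - z_surface)
    # surface friction: viscous, velocity-proportional resistance to lateral motion
    f_lateral = -b_fric * v
    return f_normal, f_lateral

# Example: contact point sliding at 5 cm/s, 0.1 mm above the nominal surface.
print(surface_forces(x=0.0015, z=0.0001, v=0.05))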
Tacton refers to a sequence of stimuli addressing the tactile sense. It usually encodes an event within the sequence's pattern. The stimuli vary in intensity and frequency. Both the individual stimuli and the tacton as a whole may additionally be overlaid with a time-dependent amplitude modulation, such as a fade-in or fade-out.
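A minimal sketch of how a tacton could be assembled from such stimuli (not from the source; burst durations, frequencies and amplitudes are purely illustrative):

import math

def tacton(bursts, fs=1000):
    """bursts: list of (duration_s, frequency_Hz, amplitude); returns one long sample list."""
    samples = []
    for dur, freq, amp in bursts:
        n = int(dur * fs)
        for i in range(n):
            # 10 % fade-in and fade-out as a simple time-dependent amplitude modulation
            env = min(1.0, i / (0.1 * n), (n - i) / (0.1 * n))
            samples.append(amp * env * math.sin(2.0 * math.pi * freq * i / fs))
    return samples

# Hypothetical "message received" tacton: two short strong bursts, one longer weak burst.
signal = tacton([(0.1, 250.0, 1.0), (0.1, 250.0, 1.0), (0.4, 150.0, 0.4)])
print(f"{len(signal)} samples generated")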
2.2.5 Technical Examples:
2.2.5.1 PHANTOM Devices:
There are several commercial haptic control units available on the market for applications in design, CAD and modeling. One major player on the market is SensAble with their PHANTOM®-series and the currently most low-cost product, the PHANTOM Omni (fig. 2.11a). The PHANTOM series can most easily be identified by the free positioning of a pen-like handle in three-dimensional space. The position and orientation of this handle is measured in three translational and three rotational degrees of freedom. Depending on the model of the series, the tip force can act on the handle in at least three translational dimensions. The generation of forces is done via electrodynamic actuators; depending on the model these are either mechanically or electronically commutated. The actuators are located within the device's base and transmit their mechanical energy via levers and Bowden cables to the corresponding joints. As a result of changing lever lengths, the transmission ratio of the PHANTOM devices is nonlinear. For the static situation these changes are compensated within the software driver. The PHANTOM devices are connected to common PCs. The electrical interface used depends largely on the device's product generation and ranges from parallel ports to IDE cards and FireWire connectors.
The PHANTOM devices from SensAble are haptic devices (fig. 2.11c) primarily addressing the kinaesthetic perception of the whole hand and the arm. As the force transmission happens via a hand-held pen, tactile requirements are automatically relevant for the design too. This bidirectional haptic display is a haptic interface transmitting force information from a software application on a PC to the user and feeding positioning information back to the application.
The network model of one degree of freedom (fig. 2.11b) shows the electronically commutated electrodynamic motor as an idealized torque source M0 with the inertia Θ of the rotor and a rotary damping dR resulting from bearings and links. By the use of a transformer element representing the levers, the rotary movement is transformed into a linear movement with a force F0 and a velocity v0. An inertia m describes the mass of the hand-held pen. The portion of the generated force Fout reaching the user depends on the ratio between the sum of all display impedances and the user impedance ZH.
2.2.5.2 Reconfigurable Keyboard:
The reconfigurable keyboard (fig. 2.12a) is made of a number of independent actuators
arranged in a matrix. The actuators are electrodynamic linear motors with
a moving magnet. Each actuator can be controlled individually, either as an open-loop controlled force source or as a positioning actuator with closed-loop control. When being used as a force source, the primary purpose of the actuator is to follow a configurable force/displacement curve of a typical key. The application of this reconfigurable keyboard [46] is an alternative to the classical touchscreen - a surface providing different haptically accessible functions depending on a selection within a menu. For this purpose, single actuators can be combined into larger keys and may change in size and switching characteristics.
The reconfigurable keyboard is a haptic device (fig. 2.12c) mainly addressing the kinaesthetic sensation, but it has strong tactile properties, too. The user of the device controls the keyboard and receives haptic information in the form of the changing shape of the keys and their switching characteristics during interaction. The keyboard is at least a haptic display. As it communicates with another unit about the switching event and the selection, it is also a haptic interface.
The network model (fig. 2.12b) of a single key shows the open-loop controlled
force source F0 of the electrodynamic actuator, the mass of the moving magnet m
and the friction in the linear guides d. Elasticity does not exist, as the design does
not contain any spring. This is in contrast to what could be expected from the typical
designs of electrodynamic speakers and their membranes. The actuator is capable
of generating a force Fout dependent on the ratio between the complex impedance
of the haptic display ZD = sm+d and the user’s impedance ZH.
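Reading the last sentence as a simple force divider (a sketch under this assumption, not a statement from the source; the user impedance value is hypothetical), the fraction of F0 reaching the user can be estimated as ZH/(ZD + ZH):

import math

def force_ratio(f, m=0.005, d=0.5, z_h=complex(5.0, 0.0)):
    """F_out / F_0 = Z_H / (Z_D + Z_H) with Z_D = jw*m + d at frequency f (Hz)."""
    w = 2.0 * math.pi * f
    z_d = complex(0.0, w * m) + d
    return z_h / (z_d + z_h)

# Hypothetical key: moving magnet of 5 g, guide friction 0.5 N*s/m, fingertip about 5 N*s/m.
for f in (10.0, 100.0, 500.0):
    print(f"f = {f:5.0f} Hz   |F_out/F_0| = {abs(force_ratio(f)):.2f}")

As the frequency rises, the mass term of the display impedance grows and an increasing share of the actuator force is consumed by accelerating the moving magnet instead of reaching the user.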
2.2.5.3 Tactile Pin-Array:
Tactile pin-arrays are the archetype of all systems generating spatially coded information
for the haptic sense. Conceptually they are based on Braille-displays whose
psychophysical impression has been studied comprehensively since the middle of
the 20th century, e.g. by BÉKÉSY [23]. Many approaches were made ranging from
electromagnetic actuators of dot matrix printers [232] to piezoelectric bending actuators
[149] and pneumatic [292], hydraulic [231], electrostatic [290] and thermal [5]
actuators. Tactile pin-arrays mainly focus on the stimulation of the skin in the normal direction. Only lately have spatially resolved arrays with lateral force generation been receiving increased interest [142].
A tactile pin-array with excitation in normal skin direction is a haptic device
(fig. 2.13c) mainly addressing the tactile perception. The user is in continuous haptic
interaction with the device and receives haptic information coded in changing
pin heights. A tactile pin-array is a haptic display. In contrast to the systems examined before, this time the interaction does not include any feedback from the user being acquired. As a result the device is not necessarily a haptic interface2.
In the mechanical network model (fig. 2.13) a tactile pin-array corresponds to a position or velocity source v with a mechanical stiffness k in series to it (a combination of actuator and kinematics). In a stiff design the mechanical admittance of the technical system is small, resulting in the elongation being largely unaffected by the user's touch. The system is open-loop position controlled.
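Interpreting this description as a position source x0 driving the series stiffness k against the user (a sketch under this assumption, not from the source; the user is simplified to a purely viscous impedance), the elongation reaching the skin is x_tip/x0 = k/(k + jω·ZH):

import math

def tip_elongation_ratio(f, k=20000.0, z_h=5.0):
    """x_tip / x_0 = k / (k + jw * Z_H) for a series stiffness k (N/m) and viscous Z_H (N*s/m)."""
    w = 2.0 * math.pi * f
    return k / (k + complex(0.0, w) * z_h)

# For a stiff design (k = 20 kN/m) the displayed elongation barely changes when touched.
for f in (10.0, 100.0):
    print(f"f = {f:5.0f} Hz   |x_tip/x_0| = {abs(tip_elongation_ratio(f)):.3f}")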
2.2.5.4 Vibration-Motor:
Vibration-motors are used to direct attention to a certain event. There is a vibration
motor similar to figure 2.14a within every modern mobile phone, made of a rotary
actuator combined with a mass located eccentrically on its axis. Its rotation speed is
controlled by the voltage applied. It typically ranges from 7000 to 12000 rotations per minute (117 to 200 Hz). It is possible to encode information into the felt vibration
by varying the control voltage. This is often done with mobile phones in order
to make the ring tone haptically perceptible.
A vibration-motor is a haptic device (fig. 2.14c) addressing tactile perception.
The user is haptically interacting with the device and receives haptic information
in the form of oscillation coded in frequency and amplitude. A vibration-motor is a
pure form of a haptic display, or more precisely a purely tactile display.
With vibration motors the relevant force effect is the centripetal force of the eccentric mass. Assuming a rotational speed of ω = 2π · 10000/60 s⁻¹ ≈ 1047 s⁻¹ and a moving mass of 0.5 g at a radius of 2 mm, a force amplitude of F = m·ω²·r ≈ 1.1 N is generated, producing a sinusoidal force with a peak-to-peak amplitude of 2.2 N. This is an extraordinary value for an actuator with a length of only 20 mm. Considering the network model (fig. 2.14b), the vibration motor can be regarded as a force source with sinusoidal output. It has to accelerate a housing (e.g. the phone) with a mass m which is coupled to the user via an elastic material, e.g. clothes. It is important for the function of the device that the impedance of the branch with the spring-damper coupling and the user impedance ZH is large compared to the impedance of the mass m. This guarantees that most of the vibration energy is directed to the user, thus generating a perception.
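The unbalance-force estimate given above can be reproduced in a few lines (a sketch only; the numbers are those from the text):

import math

rpm = 10000.0        # rotational speed in revolutions per minute
m = 0.5e-3           # eccentric mass in kg (0.5 g)
r = 2.0e-3           # radius of the eccentric mass in m (2 mm)

omega = 2.0 * math.pi * rpm / 60.0     # angular velocity in 1/s (about 1047 1/s)
force = m * omega ** 2 * r             # force amplitude F = m * omega^2 * r

print(f"omega = {omega:6.1f} 1/s, F = {force:.2f} N, peak-to-peak = {2.0 * force:.2f} N")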