if you could see what i hear

For every major Hollywood hit film that smashes box office records, there are a thousand modest little gems that get passed over and forgotten. One of those is an independent biopic from the 1980s called If You Could See What I Hear, a fictionalized retelling of the college years of Tom Sullivan, a blind man who went on to become a successful author, musician, occasional actor and (today) motivational speaker. The most refreshing thing about the film — and Sullivan himself, frankly, who golfs, skis and sky-dives in addition to being an avid runner — is that it doesn’t present him as a victim. He just happens to be blind.

His roommate, Will Sly, far from tip-toeing around the subject, cracks jokes with the fictional Sullivan about his lack of sight. In perhaps the funniest scene, Sullivan, Will, and a couple other pals are pulled over by a policeman, who notices the erratic driving — erratic because Sullivan is at the wheel. The officer is understandably incredulous:

Officer: Your friend is blind?
Will: More or less. Yeah.
Officer: Then why the hell is he driving?
Will: ‘Cause he’s the only one who’s sober!

Sullivan then does a backflip over the car, and Will deadpans, "See?" Later in the film, Sullivan falls in love with a black woman, and while things don’t work out between them — his being literally "color blind" can’t overcome the very real social issues surrounding inter-racial dating back in those days — he does save his girlfriend’s little sister from drowning in a pool, locating her by listening for the sound of air bubbles. As with any fictionalized biopic, the episode is based on an actual event, except that it happened many years later, and the infant girl Sullivan saved from drowning was his very own daughter. And yes: he located her by listening for air bubbles, an impressive example of human echolocation at work.

We normally associate echolocation with animals like bats or dolphins. But there are a few humans who can echolocate very well, most notably James Holman,
an early 19th-century British adventurer known as the "blind traveler." While serving in the Royal Navy, he contracted an illness that first affected his joints, then robbed him of his sight at the age of 25. In recognition of his service, he was awarded a lifetime grant of care. He could have simply lived out his days in leisure, but instead, he took multiple leaves of absence, the first to study medicine and literature at the University of Edinburgh. Then the travel bug bit.

From 1819 to 1821, Holman took a Grand Tour through France, Italy, Switzerland, parts of Germany, and the Netherlands, and later toured Russia (where he was accused of being a spy by the czar — yep, a blind spy — and deported), Austria, Saxony, Prussia, and Hanover. Near the end of his life, he journeyed through Spain, Portugal, Moldavia, Montenegro, and Turkey. I suspect he and Sullivan would have got along very well.

Holman used the sound of a tapping cane to navigate his environment. A more recent example is teenager Ben Underwood, who makes frequent clicking noises with his tongue and listens to the returning echoes to get his bearings and identify objects around him, augmented by a tapping cane. (This video clip shows him correctly identifying things like a fire hydrant and garbage cans.) Underwood was diagnosed with retinal cancer when he was 2, requiring the removal of his eyes a year later. By age 5, he’d discovered echolocation, developing the ability under the tutelage of another blind echolocator, Daniel Kish, who regularly leads blind teenagers on hiking and mountain-biking expeditions.

It’s very much a learned ability, apparently; most of us just don’t face the necessity of acquiring such skills, and even so, some people prove to be better at it than others. For those of us without our own built-in sonar-sense, researchers at Boston University have developed a prototype device that can enhance auditory cues while navigating an environment — designed, naturally, to assist the blind. The device emits an inaudible (to humans, which means it might drive the family dog a bit crazy) ultrasonic click several times per second, and each click reflects off any objects in the environment. The reflections are then detected by special head-mounted microphones. Computer processing does the rest, converting the ultrasonic signal into audible signals. The person wearing the device can then hear those signals over custom open-ear earphones.
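
Morland didn’t walk me through the exact signal processing, but the basic trick of shifting an ultrasonic echo down into the audible band can be sketched in a few lines of Python. The sample rate, click frequency, and oscillator frequency below are illustrative numbers of my own, not the device’s actual parameters:

```python
# Toy illustration only (not the BU team's actual algorithm): shift a simulated
# 40 kHz echo down into the audible band by mixing it with a local oscillator
# and keeping the low-frequency difference component.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 192_000                     # sample rate high enough for ultrasound, Hz
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of signal

f_echo = 40_000                  # assumed ultrasonic click/echo frequency, Hz
echo = np.exp(-t / 0.002) * np.sin(2 * np.pi * f_echo * t)   # decaying "echo"

f_lo = 38_000                    # local oscillator; difference tone lands at 2 kHz
mixed = echo * np.sin(2 * np.pi * f_lo * t)                  # heterodyne (mix)

# A low-pass filter keeps the audible 2 kHz difference tone and rejects the 78 kHz sum.
b, a = butter(4, 5_000 / (fs / 2))
audible = filtfilt(b, a, mixed)  # this is what would go to the open-ear earphones
```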

The result? An "auditory image" in which objects in the environment seem to emit sounds to the user. Objects of different shapes and textures emit subtly different sounds, such that the user can distinguish between them. Per BU researcher Cameron Morland (who kindly answered my emailed queries), the unique acoustic characteristics of the reflections enable the user to better distinguish the location, size, and "surface" properties of various objects. For instance, sounds emitted by an object to the left will arrive at the left ear a bit sooner and a bit louder — effects acousticians call the interaural time difference and the interaural level difference. Morland will present the group’s findings at the upcoming Acoustics ’08 meeting in Paris, the annual meeting of the Acoustical Society of America (ASA), running June 30th through July 4th.
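
Those two cues are easy to put numbers on. Here’s a quick back-of-the-envelope estimate of the interaural time difference using the standard spherical-head approximation; the head radius and angle are generic textbook values, nothing specific to the BU device:

```python
# Back-of-the-envelope interaural time difference (ITD) for a distant source
# off to one side, using the standard spherical-head (Woodworth) approximation.
# Head radius and azimuth are generic textbook values, not device parameters.
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate ITD for a far-field source; 0 degrees = straight ahead."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

print(f"{itd_seconds(45) * 1e6:.0f} microseconds")  # ~380 us sooner at the near ear
```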

Sounds also reflect off the body, the head, and the outer ear (pinna), modifying the spectrum of the signal. At normal frequencies — within the range of human hearing — this kind of spectral shaping is what enables us to determine the elevation of an object, according to Morland: "We can then synthesize this to make it sound ‘in front’." Unfortunately, the new device uses high-frequency pulses, so users won’t get those elevation cues. But who knows? Future prototypes might resolve the issue.

It just so happens you can tell a lot about your surroundings, if you know how to listen to aural cues. Sweeping the device over an object’s surface while remaining the same distance from it will produce a reflection with no change in pitch if that surface is flat, since the distance, and hence the relative velocity, isn’t changing. If the surface is tilted so it moves closer to the user, it will sound higher in pitch; if tilted the other way, it will sound lower in pitch, thanks to our old friend the Doppler shift. A roughly textured surface will have some regions that are closer, and others that are further away, and users can be trained to recognize the resulting pattern of increased and decreased pitch. "Venetian blinds sound quite different than a flat surface, or a bookshelf packed with different-sized books," says Morland. You can more fully appreciate these Doppler shift effects by watching the short movies at Morland’s website.
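
To get a feel for how big that pitch change is, here’s a rough estimate of the two-way Doppler shift on a reflected click. The click frequency and sweep speed are numbers I made up for illustration, not figures from the BU work:

```python
# Rough size of the two-way Doppler shift when the device sweeps toward a
# tilted surface: the echo comes back at about f * (1 + 2*v/c) for a closing
# speed v much smaller than c. Click frequency and speed are my own numbers.
c = 343.0            # speed of sound in air, m/s
f_click = 40_000.0   # assumed ultrasonic click frequency, Hz
v = 0.5              # closing speed between device and surface patch, m/s

f_echo = f_click * (1 + 2 * v / c)
print(f"Echo shifted up by about {f_echo - f_click:.0f} Hz")   # ~117 Hz higher
```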

The BU team’s prototype device is capable of simple detection of objects and open spaces. Preliminary tests show that most people can echolocate a little using the device, and improve quickly with practice. (As with anything else, practice makes perfect, or at least Most Improved.) Morland and cohorts are now refining their prototype so that it can help users get in touch with their inner bat and navigate effectively in more acoustically complex, real-world environments.

A few years ago, Peter Meijer, a research scientist in the Netherlands, developed a similar system he called the vOICe, enabling the user to "see" using sound. (There’s an entire project devoted to this area of research, actually, called Seeing with Sound.) It’s basically a tiny head-mounted camera, a laptop, and headphones; the camera collects visual data and the computer converts the video input into soundscapes, presented to the user in stereo. And again, with practice, users can learn how to relate the features of a given soundscape to objects or features in their real-world environment. For those who found the laptops a bit cumbersome, camera cell phones came to the rescue: users can now download a simplified version of the software and just point their camera phones in any direction they’d like to "see." The image at right is a "soundscape" of a face, based on one second of sound, giving a sense of what vOICe users are "seeing": ghostly grayscale images, a bit fuzzy in terms of resolution, but frankly, not far off from what a sighted person would "see."
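
The broad strokes of the mapping, as I understand it, go like this: the image is swept from left to right over about a second, a pixel’s height sets its pitch, and its brightness sets its loudness. Here’s a toy sketch of that idea (my own simplification, not Meijer’s actual code, and the frequency range and scan time are arbitrary choices of mine):

```python
# A toy simplification of an image-to-soundscape mapping in the spirit of the
# vOICe: scan the image left to right over about a second, mapping a pixel's
# height to pitch and its brightness to loudness.
import numpy as np

def image_to_soundscape(img, duration=1.0, fs=44_100, f_lo=500.0, f_hi=5_000.0):
    """img: 2D array of brightness values in [0, 1], with row 0 at the top."""
    n_rows, n_cols = img.shape
    samples_per_col = int(duration * fs / n_cols)
    freqs = np.geomspace(f_hi, f_lo, n_rows)       # higher rows get higher pitches
    columns = []
    for col in range(n_cols):                      # left-to-right sweep becomes time
        t = np.arange(samples_per_col) / fs
        tones = np.sin(2 * np.pi * freqs[:, None] * t)           # one sine per row
        columns.append((img[:, col, None] * tones).sum(axis=0))  # brightness = loudness
    return np.concatenate(columns)                 # a real system would smooth the joins

soundscape = image_to_soundscape(np.random.rand(64, 64))   # stand-in for a camera frame
```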

Morland & Company’s device is a tremendous achievement (as are similar technologies), even with its current limitations, if one stops to consider how sophisticated a bat’s system for echolocation actually is. The upcoming ASA meeting in the City of Lights also features some of the latest research on bat sonar. For instance, Cynthia Moss of the University of Maryland says that bats can control both the distance and direction of their acoustic "gaze." Not only does the bat use the returning echoes from its ultrasonic pulses to build a 3D acoustic "image" of its surroundings, but it adapts its behavior in response.

Moss’s team recorded a bat’s 3D flight path with high-speed stereo IR video and captured its sonar signals with a microphone array. This way, they could build their own "acoustic image," reconstructing the beam the bat emitted. They found that the big brown bat (Eptesicus fuscus) "points" its sonar beam in different directions to inspect objects in its environment. The bat’s beam is pretty wide, falling within a 60- to 90-degree cone, and this width would be sufficient for it to gather information about closely spaced objects — what Bat People like Moss consider a complex environment — simply by directing its beam in the general direction of those objects. But the bat doesn’t really do that: instead, it "points" the center of its beam sequentially at each of those objects.

Moss interprets this to mean that the bat is carefully analyzing the acoustic information from closely spaced objects separately. The bat also modifies the duration of its calls to avoid overlap between its vocalizations and the returning echoes. When the bat encounters objects at close range, it produces shorter sonar calls than when it encounters objects that are further away. In Moss’s latest study, for instance, the bat being tested encountered an obstacle that was closer to it than an edible food reward, and the animal adjusted the duration of its calls accordingly. The duration of those calls, say the researchers, indicates whether it was paying attention to the near or the distant object.
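
The arithmetic behind that call-shortening behavior is straightforward: a call has to end before its own echo gets back, so its duration is capped by the round-trip travel time. A quick sanity check, with distances I picked arbitrarily:

```python
# The arithmetic behind the call-shortening: a call must end before its own
# echo returns, so its duration is capped by the round-trip time 2*d/c.
c = 343.0   # speed of sound, m/s

for d in (0.5, 1.0, 3.0):   # example obstacle distances in meters (made up)
    round_trip_ms = 2 * d / c * 1000
    print(f"object at {d} m -> echo back in {round_trip_ms:.1f} ms; call must be shorter")
# 0.5 m -> ~2.9 ms, 1 m -> ~5.8 ms, 3 m -> ~17.5 ms
```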

But even bats aren’t perfect; they still have to compensate for so-called "ranging errors," according to Marc Holderied of the University of Bristol, who studies that very thing. Ranging errors are especially risky when flying in those "complex environments" filled with closely spaced objects. Bats depend on the accuracy of their auditory images, especially in determining the distance between themselves and an object, which they figure out by "measuring" the time delay between call and echo. But accuracy can get a bit shaky during flight. Bats are moving around as they emit their pulses and receive the echoes back, for starters, and their changing position must be accurately accounted for by the animal’s echolocation "system." Those echoes can also be modified by Doppler shifts.
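
The underlying relationship is simple (distance equals the speed of sound times the delay, divided by two), which means anything that corrupts the delay, or any motion during it that isn’t accounted for, feeds straight into a distance error. A toy example, with made-up numbers:

```python
# Toy ranging example: distance is inferred from the call-echo delay as
# d = c * dt / 2, so unaccounted-for motion during that delay (or any timing
# error) shows up directly as a distance error. All numbers here are made up.
c = 343.0    # speed of sound, m/s
dt = 0.01    # measured call-to-echo delay, s

d = c * dt / 2                  # nominal range: about 1.7 m
flight_speed = 5.0              # bat flying toward the object, m/s
drift = flight_speed * dt       # it has moved 5 cm closer by the time the echo arrives
print(f"nominal range {d:.2f} m, but the bat moved {drift * 100:.0f} cm during the delay")
```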

At the ASA meeting, Holderied will report new results from his studies of how 18 different species of bat (to date — his research is ongoing) control the spatial distribution of ranging errors via their signal sweep rate. He found that objects at one particular distance from the bat might have zero ranging error — a kind of echolocation "sweet spot," if you will — while ranging errors increase for closer or more distant objects. Apparently bats can adjust their signals so that the "sweet spot" shifts to whatever distance they need it to be (within reason, of course), much as our eyes readjust their focus, a process called accommodation, as we move around an environment. Holderied used 3D tracking techniques combined with 3D laser scans of bat habitats to study this adaptive calling behavior, and he can cite specific examples of the actual distances of focus for different bat species in different behavioral contexts: search flight, obstacle avoidance, and target approach, specifically.

What the bats are doing is probably not all that different from how echolocating humans adapt acoustically to their environment as they navigate. Yet another paper at the upcoming ASA meeting will focus on a study demonstrating that if humans are motivated and/or determined enough, they most certainly can learn to "hear" silent objects, just by listening to reflected and ambient sounds. You can read about some of the particulars here.

Not surprisingly, the University of California, Riverside, researchers also say their results confirm that there’s really no such thing as "true" silence, apart from the silences we create artificially in places like soundproof booths. There is always some ambient noise. Psychology professor Lawrence Rosenblum, who participated in the study, pointed to what we hear when we hold a seashell up to our ears as an example: something that sounds like the distant roar of the ocean. In fact, the seashell is merely amplifying specific frequencies of the ambient sound, which, to our ears, sounds like the ocean.
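
You can even ballpark why a shell gives you a low roar rather than a whistle: the cavity acts roughly like a tube closed at one end, boosting ambient noise near its resonant frequencies. (The cavity length below is a guess on my part, not a figure from Rosenblum’s study.)

```python
# Ballpark check on the "ocean in a seashell" effect: the shell cavity behaves
# roughly like a tube closed at one end, boosting ambient noise near its
# resonances f = (2n - 1) * c / (4 * L). The 10 cm length is just a guess.
c = 343.0   # speed of sound, m/s
L = 0.10    # effective cavity length, m (illustrative)

for n in (1, 2, 3):
    f = (2 * n - 1) * c / (4 * L)
    print(f"resonance {n}: {f:.0f} Hz")   # ~858, 2573, 4288 Hz: a broad, ocean-like hiss
```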

Kinda makes me want to go back to Paris for the ASA meeting, frankly, just to hear firsthand about all the nifty research going on in echolocation. I like acousticians. They see the world just a little bit differently from the rest of us, precisely because they’re so attuned to sound. If only we could see what they hear — we might learn to view our familiar world in a fresh new way, too.

3 thoughts on “if you could see what i hear”

  1. Partially Deflected

    “inaudible” is a relative term. I can hear higher than normal frequencies – and it can be painful. I once got an extreme case of tinnitus from some older computer terminals that had failing high-voltage power supplies. When those start to go bad they make an extremely loud noise just out of most people’s hearing range.

  2. The question is, how much does the technology of these new echolocation devices derive from naval sonar systems (and, if the answer is none, then why not?). On the other hand (for the cynical/conspiracy-theorists), what are the chances that these new echolocation devices will be classified as military technology?
    As for the ability to perform passive echolocation, I can usually tell when someone is standing behind me due to the change in the sound of the room. I’m definitely not good enough to navigate with it (having tried it a few times, I ended up with a mashed nose from walking into a wall).
    As for inaudible, the good news is that, as a person ages, the ability to hear high frequencies usually diminishes.
    Dave

  3. In the UK, where there is a perceived problem with teenagers’ generally antisocial behaviour, public places are equipped with high frequency noise generators that emit signals that are at once, to the young and unwelcome, a shrieking cacophony and, to the elderly and entitled, a blissful silence. This observation supports Dave’s final observation and also shows how far a once free and fair society has sunk.

