honey, i shrunk the camera

Every writer needs both encouragement and constructive input during the long, painful process of producing a manuscript. I owe a great debt of gratitude to the various members of my former writers’ group in Washington DC, who provided both of those things for my two books. One of those members was fellow science writer (and blogger) James Riordon (a.k.a. Buzz Skyline), who was working on his first science fiction novel over the same period. I’m thrilled to announce that the novel is now finished, and available for your purchasing pleasure via Lulu. It’s called The Dark Net — tag line: "You can log out any time you like, but you can never leave" — and features not just a "hero" who is epileptic and rendered pretty much impotent by the drugs that control his condition, but also an adorable pair of virtual penguins named Linus and Minus, the subjects of an experiment in neural network behavioral conditioning. In true science fiction fashion, it all goes horribly wrong, and the hero is compelled to ponder the question of what makes something "real."

James was inspired to write the novel after reading that as much as 5% of the Internet is "dark," i.e., inaccessible to the average user Googling around cyberspace. That portion is most likely home to, say, high-level classified government information, and probably quite a few hackers as well, so it’s natural fodder for science fiction. The Dark Net takes us into the not-so-distant future, where the trappings of everyday life are still familiar but advanced technological gadgetry has become commonplace, including biometrics and wearable computers with those nifty headset/eyewear display screens. You know you’re just itching to read it. Go! Order The Dark Net now! We’ll wait…

Are you back? Good. I love the fact that The Dark Net’s main character has a physical disability — it reminds me a bit of Ben Elton’s Gridlock, which features a wheelchair-bound protagonist with cerebral palsy — and that for all the technological developments, he still has to rely on imperfect pharmaceutical solutions to keep his seizures at bay.
It might have been easier (albeit less interesting) if James had made his protagonist blind, because scientists have made great strides over the last 10 years in developing artificial retinas that can restore some semblance of sight to the blind. One of the key enabling components of our visual system is a layer of cells at the base of the retina that send electrical pulses to the optic nerve when they detect light; the brain then interprets those signals as images.

Not all blindness is created equal, particularly when it comes to the cause of the vision loss, but artificial retinas will soon be a viable solution for people who lost their eyesight to retinitis pigmentosa or age-related macular degeneration — two of the most common causes of vision loss. In both conditions, the photoreceptor layer of the retina is destroyed, but the inner layers remain intact, still capable of integrating incoming signals and transmitting output signals to the brain’s visual cortex via the optic nerve.

Sometime around 1985, a neuro-ophthalmologist named Joseph Rizzo III was working on retinal transplants and had an epiphany as he was removing a lab animal’s retina: even though the light-sensing cells had died off, the nerve connections were still intact and could still transmit signals from those cells to the brain. Rizzo concluded that it should be possible to create a retinal prosthesis capable of gathering wireless video signals from an external camera and feeding those signals directly to the brain, bypassing the damaged cells altogether. He lost no time demonstrating proof of principle with experiments showing that direct electrical stimulation of the retinal ganglion cells in blind test subjects produced some sense of vision. And he launched the Boston Retinal Implant Project with MIT electrical engineer John Wyatt Jr., which now boasts more than 27 researchers at eight institutions.

There are lots of different options when it comes to finding substitutes for the mangled photoreceptor layer. One of my favorites can be found in the work of Oak Ridge National Laboratory’s Elias Greenbaum, who works with the photosynthetic reaction centers of spinach leaves. He’s got a lot of fancy, complicated equipment in his lab, like most scientists, but he’s also got a common kitchen blender. That’s what he uses to puree spinach leaves to a pulp, after which the pulp is put into a centrifuge to separate the various molecules. Greenbaum’s particular interest is one reaction center (Photosystem I, or PSI), which is basically a tiny photosensitive battery, converting light into energy by absorbing sunlight and emitting electrons. Spinach PSIs can’t generate a lot of current, it’s true, but Greenbaum thinks they might one day produce enough electricity to run minuscule molecular machines. PSIs also behave like simple diodes, passing current in one direction but not the other, which means they could eventually be used to construct logic gates for molecular computers, connected into functioning circuits with wires made of carbon nanotubes, for instance.
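Because PSIs pass current only one way, the classic diode-resistor logic trick applies: tie two of them to a shared output node and you get an OR gate. Here’s a minimal toy sketch of that idea in Python, assuming each PSI acts as an ideal diode with a small forward voltage drop (the 0.3 V figure and the function names are my illustration, not Greenbaum’s numbers):

```python
# Toy model of diode-resistor logic, treating a PSI as an ideal diode.

def conducts(v_in: float, v_out: float, v_forward: float = 0.3) -> bool:
    """An idealized PSI-as-diode: passes current only when forward-biased."""
    return v_in - v_out > v_forward

def or_gate(a: float, b: float, v_forward: float = 0.3) -> float:
    """Diode-logic OR: the output node follows the higher input, minus the drop."""
    out = 0.0  # a pull-down resistor would hold the output low by default
    for v in (a, b):
        if conducts(v, out, v_forward):
            out = max(out, v - v_forward)
    return out

HIGH, LOW = 1.0, 0.0
for a, b in [(LOW, LOW), (LOW, HIGH), (HIGH, LOW), (HIGH, HIGH)]:
    print(a, b, "->", or_gate(a, b))  # nonzero output whenever either input is high
```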

While we’re all waiting for molecular machines to become a reality, however, Greenbaum’s found another use for PSIs: as a raw material for constructing artificial retinas. This is biomimicry in its purest form: the PSIs perform a similar function to the retina’s photoreceptor cells, turning light into electrical signals, so why not use them to replace damaged photoreceptor cells? To do this, the PSIs are embedded into tiny fatty spheres (liposomes) and injected into the membranes of retinal cells, so that when light falls on them, they produce a voltage strong enough to trigger electrical impulses along the optic nerve to the brain. Whether or not these constitute "neural events" that the brain can then accurately interpret has yet to be determined.

Fascinating though Greenbaum’s work may be, even he emphasizes that there’s a long way to go before this is ready for any kind of real-world clinical application — last time I checked, he hadn’t yet started experimenting with animals, never mind human trials. So people suffering from damaged photoreceptor layers should really be looking to Rizzo and his growing army of collaborators for a short-term solution. As he’d envisioned, his breakthrough insight and subsequent proof of principle did indeed lead to the development of the first retinal prosthesis.

In such a system, an array of electrically activated microelectrodes fabricated onto a flexible substrate is implanted close to the retinal ganglion cell layer at the front of the retina. A small commercial video camera mounted on a pair of glasses takes the place of the damaged retinal photoreceptor layer, providing images of the outside world as the user "looks" around. The camera’s signals are transformed into a format suitable for directly stimulating the retinal ganglion cells and transmitted wirelessly via a transmitter coil to a corresponding receiver coil on an implant sitting on the surface of the eye — basically a very thin electrode array placed just under the retina. The electrodes stimulate any surviving nerve cells in response to the incoming images, providing a small patch of vision.
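To make that signal chain concrete, here’s a hypothetical sketch of the camera-to-array encoding step: one intensity value per electrode, quantized to a handful of stimulation levels before crossing the wireless link. The grid size, bit depth, and function names are my assumptions for illustration, not the actual device’s protocol:

```python
# Hypothetical camera-to-electrode encoding, loosely following the
# pipeline described above. All parameters are illustrative.
import numpy as np

GRID = 25     # electrodes per side (assumed)
LEVELS = 16   # stimulation levels per electrode (assumed)

def encode_frame(frame: np.ndarray) -> np.ndarray:
    """Reduce a grayscale camera frame to one quantized value per electrode."""
    h, w = frame.shape
    bh, bw = h // GRID, w // GRID
    # crop to a multiple of the block size, then block-average
    blocks = frame[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    mean = blocks.mean(axis=(1, 3)) / 255.0          # 0..1 per electrode
    return np.round(mean * (LEVELS - 1)).astype(np.uint8)

def transmit(codes: np.ndarray) -> bytes:
    """Stand-in for the coil-to-coil wireless hop: just serialize the codes."""
    return codes.tobytes()

frame = np.random.randint(0, 256, (480, 640)).astype(float)
packet = transmit(encode_frame(frame))
print(len(packet), "bytes per frame")  # 625 electrodes -> 625 bytes
```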

Ideally, the researchers would have liked a fully implantable prosthetic retina, but, as Wyatt told Technology Review, "The eye doesn’t like stuff inside; that’s why it doesn’t have a zipper." Considering the discomfort a single eyelash can cause if it falls onto the surface of the eye, I can’t imagine anyone being able to endure a full-scale implant. So they developed the outer device. They attach the implant to the surface of the eye with tiny sutures, and the only part that penetrates the eye is the electrode array, which is only 10 micrometers thick, two millimeters wide, and three millimeters long. In 2005, Wired reported that an artificial retina could be clinically available as early as 2008, thanks to a system developed by researchers at the University of Southern California and the Doheny Eye Institute. The system is called Argus, after the hundred-eyed giant of Greek mythology, and it wouldn’t be cheap: it would probably cost between $30,000 and $50,000 — still a small price to pay, I’d think, for the gift of restored sight.

Yet scientists haven’t quite given up hope that it might be possible to come up with a fully implantable device. The extraocular camera approach is not without its limitations. For instance, subjects must move their heads in order to scan the environment. This conflicts with natural eye movements, which can be confusing, since the only information the retina receives is from the camera. Even something as simple as shaking one’s head "yes" or "no" can be uncomfortable and disorienting.

To address that problem, researchers at the University of Southern California’s National Science Foundation Engineering Research Center (ERC) for Biomimetic MicroElectronic Systems (BMES) are developing a tiny "intraocular" camera for retinal prosthetic systems that can be implanted directly into the crystalline lens sac of the human eye. (Full disclosure: I helped write the linked press release, so if you hear echoes of the same wording in this post, that’s why. There’s a very nice writeup in Popular Science on the research, too. I had nothing to do with that.) The operation is similar to the techniques used in cataract surgery. The prototype being developed by USC/BMES team leader Armand Tanguay and his colleagues would allow for natural eye and head movements.

Any camera that small has to meet all kinds of stringent requirements to make it suitable for implantation into a human eye, not least of which is a certain degree of biocompatibility. But it also has to be very, very small, extremely lightweight, and not require too much power to operate (batteries are so very bulky). The researchers needed to include only the bare minimum of functioning components. In order to optimize the design constraints for their ultra-miniature camera, Tanguay’s group performed a series of psychophysical studies to determine the minimum requirements for several important tasks of human visual perception: object recognition, face recognition, navigation, and mobility. They did this by pixellating high-resolution images at increasingly lower resolutions to emulate the number of electrodes in the microstimulator array, with each electrode representing one "pixel," or picture element. They then tested both the accuracy and speed of image (object) recognition in human subjects.
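The pixellation procedure itself is easy to emulate at home. Here’s a rough sketch using Pillow, where each pixel of the shrunken image stands in for one electrode; the grid sizes and file names are illustrative, not the study’s actual parameters:

```python
# Emulate what a scene looks like through an n x n electrode array
# by downsampling a high-resolution image, one pixel per electrode.
from PIL import Image

def emulate_electrode_array(path: str, grids=(32, 25, 16)):
    img = Image.open(path).convert("L")  # grayscale: the array encodes intensity
    for n in grids:
        # shrink to n x n, then blow back up with no interpolation
        # so the blocky "electrode's-eye view" is visible
        small = img.resize((n, n), Image.BILINEAR)
        preview = small.resize(img.size, Image.NEAREST)
        preview.save(f"pixellated_{n}x{n}.png")

emulate_electrode_array("test_scene.png")  # hypothetical input image
```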

(Interesting side note: one of Tanguay’s team members, Noelle Stiles, worked extensively on the psychophysical aspects of the project, but she’s not a PhD: she is still an undergraduate at USC. Yet she’s been working with the group since her junior year in high school, after she met Tanguay at a science fair awards ceremony. When I asked him about it, Tanguay admitted in an email exchange that even he forgets sometimes that Noelle is still an undergraduate: "She really understands what research is all about, and what it means when we really know something is true as opposed to thinking (or even hoping) it is. Many PhD students aren’t this far along even when they graduate." At the Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO) in 2005, she became the only high school student ever to present a conference paper as a full member of USC’s Optical Materials and Device Laboratory. You go, girl! At this rate, she’ll win the Nobel Prize before she’s 30. Suddenly I feel like a total slacker.)

Tanguay’s team found that surprisingly few pixels were required to achieve good results for many of those tasks: approximately 625 pixels in total, compared to more than a million for a typical computer display. They also found that pre- and post-pixellation blurring of images resulted in significantly improved object recognition and tracking — even more so for moving objects than for static ones. Those findings have made it possible to substantially relax the design constraints on key components of the intraocular camera, thereby reducing the prototype’s size and weight from an object the size of a Tylenol tablet down to an object that is now about one-third the size of a Tic-Tac. Tanguay and ERC Director Mark S. Humayun (who was instrumental in developing Argus) predict that the next-generation prototype will be pretty much fully implantable — and that includes things like support arms for the device. Early prototypes have been highly successful in initial tests, although human FDA trials are still at least two years in the future.
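That blurring trick is also simple to try for yourself. Here’s a minimal sketch, assuming a plain Gaussian blur applied before and after the downsampling step (the sigmas are my guesses, not the values Tanguay’s group used):

```python
# Pre- and post-pixellation blurring around an n x n downsampling step.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def blurred_pixellation(img: np.ndarray, n: int = 25,
                        pre_sigma: float = 2.0, post_sigma: float = 1.0):
    """Blur, downsample to an n x n 'electrode' image, upsample, blur again."""
    pre = gaussian_filter(img, pre_sigma)        # anti-alias before sampling
    small = zoom(pre, n / img.shape[0])          # ~n x n, one value per electrode
    back = zoom(small, img.shape[0] / small.shape[0])
    return gaussian_filter(back, post_sigma)     # smooth the blocky output

img = np.random.rand(500, 500)
print(blurred_pixellation(img).shape)            # (500, 500)
```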

Tanguay and his collaborators’ work will be well represented at the upcoming meeting of the Optical Society of America, should you wish to hear more about their latest progress. And in between talks you can read your shiny new copy of The Dark Net. Go on, you know you want to!

3 thoughts on “honey, i shrunk the camera”

  1. marvin thalenberg

    About 30 years ago (off the top of my head; everything is about 30 years ago), Judah Folkman started looking for an angiogenesis factor. He had a tough time convincing people, but the factor has been isolated, and since wet macular degeneration is a proliferation of new blood vessels, preventing angiogenesis is a more elegant way of treating it than an elaborate device: an occasional injection into the eyeball (ouch). Cheaper, too.

    (Generic name: ranibizumab injection)

    Year Approved by the FDA: 2006

    Effective for: Wet macular degeneration

    How it works: Lucentis® is an antibody fragment that binds to and inhibits the biologic activity of human Vascular Endothelial Growth Factor A (VEGF-A), a protein that is believed to play a critical role in the formation of the new, abnormal, leaky blood vessels characteristic of wet macular degeneration. The drug is injected into the vitreous portion of the eye (the clear jelly-like substance that fills the eye from the lens back to the retina). Because the production of VEGF-A is ongoing, routine administration of this drug is required.

    According to data collected during clinical trials, nearly 95 percent of the participants who received a monthly injection maintained their vision at 12 months following the beginning of treatment compared to approximately 60 percent of patients who received the control treatment. Approximately one-third of patients in these trials had improved vision at 12 months.

    Most common side effects: The most commonly reported adverse events included hemorrhage of the conjunctiva (the membrane that covers the white part of the eye), floaters, eye pain, increased eye pressure, and inflammation of the eye. Serious adverse events such as endophthalmitis (severe inflammation of the interior of the eye), retinal detachment, retinal tear, increased eye pressure, and traumatic cataract were rare and often related to the injection procedure. There is also a small increase in the risk of stroke. Clinical trial data indicated that approximately 0.3 percent of patients suffered a stroke when given a 0.3 milligram dose of Lucentis®, compared to 1.2 percent of patients who received a 0.5 milligram dose. In addition, patients who have previously suffered a stroke may be at greater risk of having another stroke.

    Love Marvin

  2. I wonder how long it will be before they do away with the physical eye altogether and just use a small camera.

