Progress Toward Eradicating HIV

I was shocked to learn last summer of Malaysia Airlines Flight MH17, which crashed over Ukraine carrying among its passengers six AIDS researchers. Here is my [overdue] tribute: a brief summary of our progress toward eradicating HIV from this planet.

Human Immunodeficiency Virus (HIV) in brief:
HIV is a retrovirus of the genus lentivirus. Retroviruses carry their genetic material in the form of RNA, then reverse transcribe this RNA inside the host cell to create viral DNA, which is incorporated into the host genome. Lentiviruses are special in that they can infect non-dividing cells, while other retroviruses can only do so during cell division. For this reason, lentiviral vectors can be used to deliver copies of a correct gene to people with a mutated version. HIV in particular infects several types of immune cells, including CD4 T cells. In humans, this causes a progressive failure of the immune system, steadily increasing the patient's susceptibility to infections and cancers. If left untreated, average life expectancy after the time of infection is 9 to 11 years. Fortunately, there exist many effective treatments, collectively called antiretroviral therapy (ART). You can read about these drugs and how they inhibit different steps of the viral life cycle here. Unfortunately, these treatments cannot completely cure an HIV patient.

Source: Joshua L. Hood, MD, PhD

If you Google “HIV cure,” you’ll find all kinds of interesting stuff. For example, a compound found in bee venom, melittin, pokes holes in lipid bilayers and thus can kill the human immunodeficiency virus by destroying its outer envelope. The melittin can be delivered with nanoparticles, which are already known to be safe in humans. The figure at right illustrates this delivery system. The nanoparticles (purple) are modified with small molecular “bumpers” (red ovals) that prevent human cells from coming into contact with the melittin (green), but allow the much smaller virus to get close enough that the bee venom compound can rip it apart. A PLOS ONE journal article detailing a study of the safety of melittin as a virucide can be found here. This particular possibility is really exciting because unlike other existing therapies, it directly attacks the physical virus. I just hope that if this is pursued as an HIV solution, it can be synthesized in a lab and won’t have to be obtained via mass bee sacrifice.

While there's no cure currently available, there exist individuals with a natural resistance to HIV, called "elite controllers." Such individuals are infected with HIV but do not need to take ART because they naturally maintain a CD4 count greater than 500. You can read about several different types of genetic variation that confer this natural resistance here, but I'll focus on one in particular: the CCR5-delta-32 mutation, which nullifies the CCR5 gene. This gene normally encodes a cell-surface receptor protein on CD4 immune cells, which HIV uses to gain entry into the host cell. Clinical trials have already been conducted in humans to modify the CCR5 gene with gene therapy methods, so this viral entryway can be deleted in HIV patients who lack the natural mutation. One clinical study published last year in the New England Journal of Medicine found the therapy to be safe, within the limits of that trial. Efforts are currently underway to improve the therapy with newer and more reliable gene editing methods. I'm not sure when this treatment will become widely available, and it hasn't permanently cured anyone yet, but it definitely reduces viral loads and increases CD4 cell counts. Read more here.

There actually is one person who is confidently said to have been functionally cured of HIV (he has had undetectable levels of viral genetic material and a normal CD4 cell count for more than six years now). This is Timothy Brown, also known as the second "Berlin patient." Interestingly, however, the exact mechanism by which he was cured still eludes the researchers of the world. I'll detail his treatment briefly. Brown was diagnosed with acute myeloid leukemia more than a decade after he contracted HIV. His doctor had heard that patients with the CCR5-delta-32 mutation have natural resistance to HIV, and found a bone marrow donor who carried two copies of that mutation, meaning the donor made none of the CCR5 receptor. After radiation killed off Brown's white blood cells (eliminating the cancer), he underwent two separate bone marrow transplants from the aforementioned donor (the second was required because of a leukemia relapse). This allowed his immune system to start over with CD4 T cells that lacked the CCR5 receptor. One article describes a study that made some progress in elucidating why this treatment worked so well; researchers think that the foreign cells from the donor killed any of Brown's immune cells that survived the radiation and chemotherapy, allowing his immune system to truly start over after the transplant.

One other person may have been cured, using a transplant of umbilical cord blood from a donor carrying that all-important CCR5 mutation, but this is said with hesitance, because the "Barcelona patient" was HIV-free for only three months before dying of cancer.

Relying on transplants is not the answer for the majority of the HIV-positive population. It's simply a matter of supply and demand; people with two copies of the CCR5-delta-32 mutation are rare, and hunting these individuals down to ask for their bone marrow is unfair. Most experts agree that a cure must involve some combination of immune system therapy, gene therapy, and drug therapy to force HIV out of hiding so it can be destroyed.

There's one more thing that I'm really excited about, and perhaps you have already thought of it yourself. If gene therapy has been used in a human to modify the CCR5 receptor, why can't it also be used to delete the viral DNA from infected cells? Well, a research group at Temple University School of Medicine in Philadelphia has asked this very question, and they seem to have an answer. Their study, published last summer, details how they eliminated the HIV virus from cultured human cells for the first time using high-fidelity gene editing methods. Unfortunately, this therapy is not yet ready for clinical trials. An article from the Temple University School of Medicine website details the main obstacles: "The researchers must devise a method to deliver the therapeutic agent to every single infected cell. Finally, because HIV-1 is prone to mutations, treatment may need to be individualized for each patient's unique viral sequences." But imagine what a potent combination it will be when we can use gene therapy both to prevent infection by deleting the CCR5 receptor and to erase the viral DNA from cells that are already infected!
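To give a flavor of how such editing tools find their target, here is a toy, generic CRISPR-style illustration: the machinery scans DNA for a ~20-nucleotide "guide" match sitting next to a short PAM motif and cuts there. The guide and DNA below are invented for illustration, not the Temple group's actual sequences or method.

```python
# Toy illustration of the targeting step in CRISPR-style editing: scan DNA for
# a 20-nt guide sequence followed by an "NGG" PAM. The guide and the DNA here
# are invented, not the Temple group's actual LTR-targeting sequences.
guide = "GTGTGACTCTGGTAACTAGA"  # hypothetical 20-nt protospacer

def find_cut_sites(dna, guide):
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        protospacer = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if protospacer == guide and pam[1:] == "GG":   # "NGG" PAM
            sites.append(i + len(guide) - 3)           # Cas9 cuts ~3 bp from the PAM
    return sites

dna = "AAAC" + guide + "TGGCCCA"        # toy sequence containing one target site
print(find_cut_sites(dna, guide))       # -> [21]
```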

While we wait with great anticipation and hope for the cure for HIV, the World Health Organization (WHO) is making great progress towards providing universal access to antiretroviral therapy; they aim for all the world’s HIV victims to get access to ART by the end of 2015.

Even without a cure, we should be proud of how far we've come. Remember how I said that without treatment, HIV victims live on average only 9 to 11 years post-infection? Well, successfully treated HIV-positive individuals now have a normal life expectancy. That's no insignificant achievement.

So, thank you to all the AIDS researchers of the world!! It’s an exciting time to be alive.

Mind-Reading and Manipulation

So, as it has been some time since my last post (I have a really good excuse, trust me), I know you've had plenty of time to think about the possible negative ramifications of developing the technology to read someone's mind. We can start with the obvious: it's a plain and simple invasion of privacy. I remember hearing about the Patriot Act sometime after its implementation in 2001, and thinking, I'm so glad that we will always have one truly private refuge that cannot be invaded by the government: our thoughts. To my adolescent self, the technology to read minds seemed completely impossible. I knew very little about neuroscience back then. I didn't know that our brain waves comprise a code just waiting to be cracked. Now I'm sure it's merely a matter of time.

(Side note: I just Wikied the Patriot Act and realized that the USA in “USA PATRIOT Act” doesn’t stand for “United States of America.” It stands for “Uniting and Strengthening America.” For some reason, that is hilarious to me. Why use the acronym for the name of a country in the title of an Act of Congress to stand for anything other than the name of that country? Why??)

Of course, while I now believe it's entirely within the realm of possibility, I also think it will be a long time before a person's thoughts can be read. The business of brain wave decryption isn't exactly analogous to deciphering an ancient language. In the brain, it matters how many neurons are firing, in what region, with what frequency, and so forth. And the number of neurons in the adult human brain is estimated to be about 90 billion, so just imagine the complexity! These networks are different for every person, and moreover, they're constantly changing within each person. Neurons are gaining and losing synapses (connections with other neurons); they are dying and being replaced. As we learn, which we do every waking moment, our brains are altered.

Graphical Abstract: Neural portraits of perception: Reconstructing face images from evoked brain activity

When you think about that, it's actually really impressive how far we've gotten. Remember that Yale paper you were supposed to read? In this study, researchers were able to reconstruct impressively accurate facial images from the brain activity of subjects looking at photographs. The really crazy thing about their study in particular is that the reconstructions worked even without activity from the occipital cortex, the visual processing center of the brain. The authors write, "Visual stimuli based on patterns of activity outside occipital cortex have not, to our knowledge, been reported. The potential for reconstructions from higher-level regions (e.g., ventral temporal cortex or even fronto-parietal cortex) is enticing because reconstructions from these regions may be more closely related to perceptual experience as opposed to visual analysis." How did they do it? The most basic explanation is that they used a learning algorithm to match each participant's brain activity to various features of "training" facial images. Thus, with a decoding algorithm primed for each subject, researchers knew what brain activity would give information about certain features in the "test" photographs. They did reconstructions with several sets of neural activity, including ones that excluded input from the visual processing center of the brain! Above you can see their graphical abstract. The reconstructed image represented therein is one that integrated activity from all regions of the brain, but please see their paper for the reconstructions that exclude the occipital cortex!
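If that "train a decoder, then apply it to new brain data" recipe sounds abstract, here is a minimal sketch of the idea with fake data. This is not the Yale pipeline; the eigenface-style representation, the ridge regression, and all the numbers are illustrative assumptions.

```python
# Toy sketch of the train-then-decode idea behind face reconstruction.
# NOT the Yale group's pipeline; data, face basis, and model are invented.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Pretend data: 200 training faces, each summarized by 30 "eigenface"
# coefficients, plus the voxel pattern (1,000 voxels) each face evoked.
n_train, n_coeffs, n_voxels = 200, 30, 1000
face_coeffs = rng.normal(size=(n_train, n_coeffs))
true_mapping = rng.normal(size=(n_coeffs, n_voxels))
brain_patterns = face_coeffs @ true_mapping + rng.normal(scale=5.0, size=(n_train, n_voxels))

# Training phase: learn to predict face coefficients from brain activity.
decoder = Ridge(alpha=10.0).fit(brain_patterns, face_coeffs)

# Test phase: given the pattern evoked by a new face, predict its coefficients;
# rebuilding the image is then just a weighted sum of the basis faces.
test_face = rng.normal(size=(1, n_coeffs))
test_pattern = test_face @ true_mapping + rng.normal(scale=5.0, size=(1, n_voxels))
predicted = decoder.predict(test_pattern)
print("correlation with true face coefficients:",
      np.corrcoef(predicted.ravel(), test_face.ravel())[0, 1])
```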

A neuroscience lab at UC Berkeley is working in a similar vein–these scientists want to interpret the brain's activity in response to moving pictures rather than still ones. I presume that the end goal of all this is to enable blind persons to see again–perhaps a helmet could be devised that would live-stream the view a person would normally see with his eyes and translate it into electrical stimulation at exactly the right places in his brain, letting him see the world around him. Of course, we're still learning to map brain activity to particular visual stimuli, and we cannot begin to evoke the proper visual interpretation in the brain until we have the mapping part down. But Berkeley's Professor Jack Gallant has made some progress, using methods similar to those of the Yale researchers above, adapted for dynamic images. Here you can see some clips from movie trailers that Gallant and his team reconstructed from test subjects' brains, but I'll warn you: they're not that impressive yet. There's a big difference between reconstructing static facial images and reconstructing random movie clips in real time.

Now, this is something of a tangent, but if we're discussing mind reading and its possible evils, we have to touch on mind control. So, remember that scene in Avatar where Jake and Neytiri use their little built-in Ethernet cords to plug their minds into each other? …Yes, you do. Well, in some sense, that is now a reality. Researchers at the University of Washington have enabled a person to control the hand movements of another person located in a completely different building about a half mile away. No, really! The first participant sits in front of a computer game and must defend a city by blocking enemy fire and retaliating with cannons. The catch is, he has no physical controls. Instead of pushing buttons or clicking a mouse, this player must defend the city by thinking about moving his hand to push buttons. An electroencephalography (EEG) machine reads this intention from his brain and transmits the motor command, via the Internet, to electrodes hooked up to a transcranial magnetic stimulation coil placed over the region of the receiving player's brain that executes hand movements. (Man, that's a mouthful.) Within a fraction of a second, the second person's brain is stimulated and his hand taps a touch screen game control. He cannot see the game and does not know when to fire the cannon or block incoming fire, yet accuracy during these trials is as high as 83%.
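For the curious, the architecture is conceptually simple: classify an EEG window as "imagined hand movement," then ship a trigger across the network to the machine driving the stimulation coil. Below is a hypothetical sketch of the sender side only; the mu-band feature, the threshold, and the addresses are all invented, not UW's actual system.

```python
# Hypothetical sketch of the sender-side pipeline: classify an EEG window as
# "imagined hand movement" and, if so, send a fire command over the network.
# The feature, threshold, and addresses are made up for illustration.
import socket
import numpy as np

STIMULATOR_ADDR = ("10.0.0.42", 9000)  # hypothetical receiver machine (drives the TMS coil)
MU_BAND = (8, 12)                      # motor imagery suppresses the mu rhythm (~8-12 Hz)

def mu_power(eeg_window, fs=256):
    """Average power in the mu band over motor-cortex channels."""
    spectrum = np.abs(np.fft.rfft(eeg_window, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(eeg_window.shape[-1], d=1.0 / fs)
    band = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return spectrum[..., band].mean()

def maybe_fire(eeg_window, baseline, sock, threshold=0.6):
    # Imagining hand movement desynchronizes (reduces) mu power; a drop below
    # a calibrated fraction of baseline counts as "intent to move."
    if mu_power(eeg_window) < threshold * baseline:
        sock.sendto(b"FIRE", STIMULATOR_ADDR)  # receiver triggers the stimulation pulse

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
```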

This amazes and terrifies me. And guess what? The UW researchers have recently been given a $1 million grant from the W.M. Keck Foundation to try and expand the types of information that can be shared from brain to brain. Apparently, they’d like to enable transmission of visual and psychological concepts and thoughts. I can see the interest from a basic science perspective, but in a practical sense, do we want that technology to exist?

The one thing the UW researchers are looking into that has a clearly positive objective is learning how to influence neural activity involved with alertness and sleepiness. With such knowledge, they hope that signals from a dozing airplane pilot's brain, for example, could trigger an electrical stimulation to wake him so he doesn't…crash the plane, I guess. Which, I've just learned, actually happens sometimes.

What other helpful/scary things are neuroscientists cooking up nowadays? Memory erasure. Researchers from Shanghai Institute of Brain Functional Genomics and East China Normal University in Shanghai have achieved erasure of fear memories in mice. This is like, Forgetfulness Charm from Harry Potter and the Chamber of Secrets level stuff. You can see why such a thing might possibly be an attractive option for patients with Post Traumatic Stress Disorder, but jeez. The whole Neuron article abstract is terrifying. “We find that transient alphaCaMKII overexpression at the time of recall impairs the retrieval of both newly formed one-hour object recognition memory and fear memories, as well as 1-month-old fear memories. Systematic analyses suggest that excessive alphaCaMKII activity-induced recall deficits are not caused by disrupting the retrieval access to the stored information but are, rather, due to the active erasure of the stored memories. Further experiments show that the recall-induced erasure of fear memories is highly restricted to the memory being retrieved while leaving other memories intact. Therefore, our study reveals a molecular genetic paradigm through which a given memory, such as new or old fear memory, can be rapidly and specifically erased in a controlled and inducible manner in the brain.” And this was published back in 2008. The ability to erase fear memories is already six years old.

Okay, there's one more thing I have to mention, and maybe you've already thought of it. What will mind reading do to the criminal justice system? Lie detectors would be obsolete. There would be fewer trials, because a suspect's innocence could be determined by a simple fMRI test. On the one hand, perhaps innocent people will no longer be wrongly convicted. On the other hand, how will we know when the technology is accurate and reliable enough to entrust with a person's life? And of course, if we want to get really dramatic: taking one's secrets to the grave will be a thing of the past.

Perhaps you can guess what I'll say next: the technology does already exist, though it is imperfect, and though it is definitely unconstitutional to impose involuntarily. In a brief article on this subject, Jay Stanley, Senior Policy Analyst of the ACLU Speech, Privacy & Technology Project, wrote the following: "Unlike the polygraph, which measures heart rate and temperature in an attempt to detect a subject's response to lying, fMRI lie detection attempts to detect a subject's decision to lie. And for a polygraph to work you have to get a subject to actively participate by answering questions, while fMRI could be used to extract information from a person whether they actively provide an answer to a question or not." The ACLU views techniques for peering inside the human mind as a "violation of the 4th and 5th Amendments, as well as a fundamental affront to human dignity." And I think it would be hard to find someone who disagrees. I mean, we are our minds. This is seen clearly in some victims of severe brain trauma, who are not the same people afterwards (there are endless examples of this, but here's one I just happened to come across). So to manipulate a mind is to manipulate an identity, and to invade another's thoughts is to undress a soul.

But rest easy, for now. An article in Frontiers in Human Neuroscience, “Prospects of functional magnetic resonance imaging as lie detector,” attempts to tackle this ethical issue: “We argue that the current status of fMRI studies on lie detection meets neither basic legal nor scientific standards…and provide an overview on the stages and operations involved in fMRI studies, as well as the difficulties of translating these laboratory protocols into a practical criminal justice environment. It is our overall conclusion that fMRI is unlikely to constitute a viable lie detector for criminal courts.”

In conclusion, I think that there are several honorable applications of mind reading capabilities. As mentioned before, there are many ways in which all the technologies listed here and in my previous post can give injured persons their lives back. Sure, I dread to think of this technology falling into the wrong hands. I don’t want my mind invaded any more than the next guy; however, I cannot imagine the pain of losing the ability to speak, or perhaps worse–seeing a loved one lose that ability. In such a situation, I would give anything to be able to talk with him or her again. To be locked outside of a precious other’s mind, or locked inside oneself, seems like the cruelest fate imaginable.

…There’s also the thought that maybe one day we can learn to read animals’ thoughts and better communicate with them. Could be cool.

My main goal in writing on this topic is not to get you to pick a side, but to get you thinking about it, because it concerns all of us guys with that piece of anatomy called a brain. Thanks for reading, and please share your thoughts in the comments below!


How Advanced Is Mind-Reading Technology?

I don’t know how this topic hasn’t come up more frequently in my conversations. I mean, we all know about fMRI and EEG for studying brain activity, but do we realize these are the first steps towards mind-reading? And do we realize that scientists are actively trying to develop mind-reading technology for a good cause: to help mute people speak, and paralytics move prosthetic limbs? Such advanced technology seems ludicrous, but we’re starting to learn that it’s within the realm of possibility. And suddenly, the one thing you believed would always be private, your mind, becomes unfathomably precious and fragile.

Stephen Hawking has A.L.S. He currently communicates using facial recognition technology–a twitch of his cheek or eyebrow stops a cursor moving across an on-screen keyboard so he can select letters and spell words (the system also has a word prediction algorithm so he doesn't have to painstakingly choose every letter). But he sometimes wears a headband carrying an in-development computer chip called the iBrain, which reads his brain waves to learn which signals correspond to particular letters, words, or actions. Its developers at NeuroVigil hope that one day it will be able to read the minds of Hawking and others, allowing them to speak efficiently and expressively.

BrainGate is another research team endeavoring to decipher brain signals. They're working on something called "intracortical brain-computer interfaces," which aim to permit brain control of, among other things, a cursor on a computer screen. If perfected, this would replace Stephen Hawking's current method of communication. But they have another interesting technology in development, which they hope will one day allow people to control prosthetic limbs as naturally as they would real ones–through a direct link to the motor control region of the brain. BrainGate researchers state on their website: "Using a baby aspirin-sized array of electrodes implanted into the brain, early research from the BrainGate team has shown that the neural signals associated with the intent to move a limb can be 'decoded' by a computer in real-time and used to operate external devices."

In this endeavor, the researchers have already enjoyed incredible success–two stroke victims have been able to control a robotic arm using only their brains. Participant Cathy, who had been paralyzed for 15 years prior to the trial, was able to use the arm to raise a bottle of coffee to her lips and drink. But John Donoghue, the leader of the BrainGate2 clinical trial, has emphasized that the technology is far from perfected: "Movements right now are too slow and inaccurate — we need to improve decoding algorithms."
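What does "improve decoding algorithms" mean in practice? At its simplest, a decoder is a learned map from neural firing rates to cursor (or arm) velocity, refit from calibration data each session. Here is a toy linear version with simulated spike counts; real BrainGate decoders (Kalman filters and the like) are more sophisticated.

```python
# Toy linear velocity decoder of the kind used in early cursor-control BCIs.
# All data are simulated; real decoders are richer (e.g., Kalman filters).
import numpy as np

rng = np.random.default_rng(1)

# Calibration: the user imagines tracking a cursor whose 2-D velocity we know,
# while spike counts are recorded from ~100 motor-cortex electrodes.
n_bins, n_units = 2000, 100
velocity = rng.normal(size=(n_bins, 2))      # known cursor velocity per time bin
tuning = rng.normal(size=(2, n_units))       # each unit's directional tuning
rates = velocity @ tuning + rng.normal(scale=2.0, size=(n_bins, n_units))

# Fit the decoder: least-squares map from firing rates back to velocity.
weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Online use: each new bin of spike counts becomes a velocity command.
new_rates = rng.normal(size=(1, n_units)) + np.array([[1.0, -0.5]]) @ tuning
print("decoded velocity command:", new_rates @ weights)
```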

It seems that a company called Battelle, in collaboration with researchers at Ohio State University, has gotten even closer. A quadriplegic named Ian Burkhart is the first person to use Neurobridge, a device that reconnects the brain to muscles without the spinal cord. This happened in April of this year, guys. This is the future. When I first read about it, it sounded like science fiction. We're here already?? Science has done it??? We're curing paralysis???? It's real, but don't be misled: it doesn't communicate with the muscles internally. This article posted on the Ohio State University Wexner Medical Center website describes it accurately: "The tiny chip interprets brain signals and sends them to a computer, which recodes and sends them to the high-definition electrode stimulation sleeve that stimulates the proper muscles to execute [Ian's] desired movements." Maybe one day this can be made to work internally. But plainly and simply, Neurobridge's developers have restored hand function, and hope, to a guy who had been paralyzed for four years after a diving accident. That's no small deal.

As you can see, this technology has amazing potential to give many people their lives back. But can you think of some possible negative effects as well? Next time, we’ll discuss the scary implications. If you want to read ahead, here’s a paper by Yale researchers who’ve reconstructed imperfect but impressively recognizable facial images from brain scans of people viewing photographs: Neural portraits of perception: Reconstructing face images from evoked brain activity.

Discussion Topic: These are some pros; brainstorm the cons. Are you excited, scared, or both?

The Line Becomes Finer … A Post Script

While doing research for my next post, I came across the website of NeuroVigil, Inc., the company developing the iBrain (my next post will be pretty exciting, guys–exciting and/or terrifying). On their site, they link to a July 2014 article in the New York Times called "Zoo Animals and Their Discontents." As it's so perfectly related to my most recent post, I can't help but share it. Here's an excerpt:

The notion that animals think and feel may be rampant among pet owners, but it makes all kinds of scientific types uncomfortable. “If you ask my colleagues whether animals have emotions and thoughts,” says Philip Low, a prominent computational neuroscientist, “many will drop their voices to a whisper or simply change the subject. They don’t want to touch it.” Jaak Panksepp, a professor at Washington State University, has studied the emotional responses of rats. “Once, not very long ago,” he said, “you couldn’t even talk about these things with colleagues.”

That may be changing. A profusion of recent studies has shown animals to be far closer to us than we previously believed — it turns out that common shore crabs feel and remember pain, zebra finches experience REM sleep, fruit-fly brothers cooperate, dolphins and elephants recognize themselves in mirrors, chimpanzees assist one another without expecting favors in return and dogs really do feel elation in their owners’ presence. In the summer of 2012, an unprecedented document, masterminded by Low — “The Cambridge Declaration on Consciousness in Human and Nonhuman Animals” — was signed by a group of leading animal researchers in the presence of Stephen Hawking. It asserted that mammals, birds and other creatures like octopuses possess consciousness and, in all likelihood, emotions and self-awareness. Scientists, as a rule, don’t issue declarations. But Low claims that the new research, and the ripples of unease it has engendered among rank-and-file colleagues, demanded an emphatic gesture. “Afterward, an eminent neuroanatomist came up to me and said, ‘We were all thinking this, but were afraid to say it,’ ” Low recalled.

The article also details the work of animal behaviorist Dr. Vint Virga, who reminds me in many ways of Hagrid from Harry Potter, despite being short and completely devoid of facial hair.

Zoos contact Virga when animals develop difficulties that vets and keepers cannot address, and he is expected to produce tangible, observable results. Often, the animals suffer from afflictions that haven’t been documented in the wild and appear uncomfortably close to our own: He has treated severely depressed snow leopards, brown bears with obsessive-compulsive disorder and phobic zebras. “Scientists often say that we don’t know what animals feel because they can’t speak to us and can’t report their inner states,” Virga told me. “But the thing is, they are reporting their inner states. We’re just not listening.”

The article’s author, Alex Halberstadt, was fortunate to visit Virga’s home one day. When pulling into the driveway, Virga stopped suddenly because a frog halfway down a snake’s throat was impeding the route to the garage. He immediately called his wife to warn her so she wouldn’t run them over when she returned home.

This article is beautiful, and long, and completely worth 10-15 minutes of your time. Cozy up in an armchair with a glass of wine and read about Libby the bitchy Barbary sheep, Sukari the anxious giraffe, a mortally apathetic clouded leopard, and many more relatably flawed yet beautiful non-human individuals. Of Sukari, Halberstadt writes:

Standing eye to eye with a giraffe is weirdly peaceful. The creature is so unlike us in its particulars and scale, yet so deliberate in its design. It’s comforting not to be at the center of creation. Sukari chewed the leaves gamely, working her jaws with real gourmandise. And then her eye strayed toward the ceiling, and she quit chewing and slightly turned her head. No sound or movement had distracted her. For a span of some seconds, her eyes grew unfocused and rested upon no tangible object, and an expression crossed her distracted face that could only be a passing thought. Or so it looked to me.

The Line Becomes Finer

In the past year, as I've explored neuroscience in greater depth than ever before, what has struck me are the similarities, rather than the differences, between the brains of animals and our own. So many things I'd previously perceived as uniquely human are shared by an astonishing breadth of our relatives, near and distant. As always, when I want to learn more about a scientific subject, I blog about it. Here is my mama-bird science regurgitation; I hope it permits you to see our co-earthlings in a new light.

Humanity has evidently come a long way since the father of modern philosophy, René Descartes, justified cruel experimentation on animals by declaring they did not have souls. According to animalethics.org.uk, Descartes believed that animals cannot reason and do not feel pain. Can you imagine believing that if you torture an animal and it screams and cries, it's just a robotic reaction, no different than if a potato were screaming? Then again, if a potato could scream, it would not quite be a potato. Descartes maintained that humans are the only conscious living beings who have minds and souls, can learn, can speak language, can actually experience pain. He believed it was foolish to have compassion for non-human animals. In his own words: "But the greatest of all the prejudices we have retained from infancy is that of believing that brutes think" (René Descartes, 1649). Gary Francione wrote in his Introduction to Animal Rights that Descartes and his followers held public demonstrations in which they inflicted severe pain on animals (examples: nailing dogs' paws to boards, cutting open their chests to reveal the beating hearts, burning and otherwise mutilating them) in order to teach the crowd not to feel sympathy for these organic 'machines' that were only 'functioning properly.'

Thank god that the issue of consciousness in animals is no longer up for debate in the scientific community. And beyond that: I was very pleased to learn during my training for research animal handling that daily intellectual stimulation is a requirement for the care of research monkeys and apes; to deny them play and learning is inhumane and punishable by law. Theologically, however, the subject of animal consciousness remains contested; see this argument carried out through the medium of vintage church marquee, in which a Catholic church comically competes with a Presbyterian church for the souls of pre-believer passersby by promising to grant one dog a soul with each conversion.

Scientists now recognize that many things historically considered exclusively human traits are no such thing. We'll start with one of the most primitive: tool use. Elephants, bears, bottlenose dolphins, sea otters, mongooses, badgers, many birds, apes, fish, and even insects have been observed using tools. In fact, there's an entire Wikipedia page dedicated to tool use by animals, with examples, of course. Let's define this so we're all on the same page. One scientist, Benjamin B. Beck, characterized a tool as "the external employment of an unattached or manipulable attached environmental object to alter more efficiently the form, position, or condition of another object, another organism, or the user itself, when the user holds and directly manipulates the tool during or prior to use and is responsible for the proper and effective orientation of the tool."

Insects, guys. No, basic tools defined thus are not the dividing line. However, a stricter definition, "complex" tool use, seems more promising; it requires that two or more tools be used in a certain order to accomplish a task, or else that the tool be built from multiple elements. Even so, chimps fit the bill. According to journalist Kate Yoshida, a group of chimpanzees in Gabon extracts honey using a chronological succession of five tools, all of which are essential to the process. Chimpanzees alone of all non-human primates have been observed using a single material to construct a variety of tools (e.g., they use leaves to make sponges as well as probes to reach insects), and they are choosy about the materials, traveling considerable distances to obtain the correct tree species for constructing a probe. William McGrew writes in his article "Chimpanzee Technology": "Almost 50 years ago, Jane Goodall watched an adult male chimpanzee in the Gombe Stream Reserve, Tanzania, make and use a blade of grass to 'fish' termites from a mound for food. Her mentor, Louis Leakey, declared, 'Now we must redefine "tool," redefine "man," or accept chimpanzees as humans!'" While we get the idea, many other things, such as cognitive abilities and number of chromosomes, separate our species. Still, all the time, the line becomes finer.

Thanks to advances in technology that give us near mind-reading power (a little scary, no?), we know that chimps engage in resting-state brain activity very similar to ours (where resting-state thought includes the mind wandering to "past social interactions, potential future social interactions and to problems you need to solve"). Team member Dr. Preuss said the findings suggest that "humans and chimpanzees share brain systems involved in thinking about one's own behavior and that of others." The functional brain imaging revealed differences in addition to similarities, but the differences are never so shocking to me. My mind is still in the process of opening. That a chimp can sit and think about his day and anticipate hanging out with his friends is incredible to me. As for the disparities: humans showed more resting-state activity in regions of the brain associated with the analysis of meaning, and humans are the only animals known to think in words, as evidenced by the high activity in language regions of the brain during imaging. Read the Science Daily article for more information, and read this Cell Press Review to learn more about cognitive limits in chimps (we don't think they understand the concept of false beliefs, for example).


Nevertheless, while we outperform chimps in many higher processes, Caltech scientists have shown that chimps are better strategists than humans. Researchers administered a game theory test in which a human-chimp pair of opponents competed for rewards (food for the chimp, money for the human) by trying to predict each other's decisions. One player was the seeker, one the hider. The rules were simple: two rectangles appeared on a computer screen, and the seeker had to choose the one he thought the hider would choose, while the hider had to choose the one he thought the seeker would not. Astonishingly, the chimps consistently defeated their human competitors. They scraped dangerously close to the theoretical success limit defined by John F. Nash, Jr., winner of the Nobel Prize for his game theory discoveries. It's thought that chimps' superior short-term memory and their more competitive natures may have contributed to the observed result. But slice it how you like: in strategic thinking, at least, chimps have humans beat.
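To make "the theoretical success limit" concrete: in an asymmetric hide-and-seek game, Nash's theory prescribes an exact mixing probability, and a player at that equilibrium cannot be exploited. A quick sketch with invented payoffs (not the Caltech study's actual reward scheme):

```python
# Nash equilibrium for an asymmetric matching-pennies game. Payoffs are made
# up: the seeker scores `a` for a match on Left, `b` for a match on Right,
# and nothing on a mismatch; the hider "wins" the mismatches.
import random

a, b = 2.0, 1.0

# Indifference condition: each player mixes so the opponent's two options have
# equal expected value. Solving a*q = b*(1-q) gives P(Left) = b/(a+b), and the
# same probability falls out for the other player.
p_left = b / (a + b)
print(f"equilibrium P(choose Left) = {p_left:.3f}")  # 0.333 for these payoffs

def seeker_score(hider_p_left, trials=100_000):
    """Average seeker payoff when the seeker plays the equilibrium mix."""
    total = 0.0
    for _ in range(trials):
        s = "L" if random.random() < p_left else "R"
        h = "L" if random.random() < hider_p_left else "R"
        total += a if (s, h) == ("L", "L") else b if (s, h) == ("R", "R") else 0.0
    return total / trials

# At equilibrium the hider cannot exploit the seeker: the seeker's average
# score stays at the game's value (~0.667) however the hider deviates.
print(seeker_score(0.2), seeker_score(0.8))
```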

There are a few other things that I was surprised to learn this week during my research. Animals have been known to commit suicide. See also: Seven Cases of Animals that Committed Suicide. (Of course, the jury is still out as to whether these animals were aware of what it meant to end their own lives.) Apes have learned sign language, and one chimp used it to ask a zoo visitor to set him free (video here). Rats possess metacognitive abilities, as shown in 2007 by researchers at the University of Georgia (the test subjects appeared to know whether they knew the answer to a test). Chimps and dolphins are considered self-aware for reasons including that they can anticipate the effects of their actions and that they recognize themselves in a mirror (e.g., a chimp whose face was painted would try to wipe the paint off upon seeing his reflection). With astonishing memory and reasoning abilities, a crow has solved a difficult eight-step puzzle, as detailed in this BBC special. In her book Animal Madness, Laurel Braitman writes about mental illness in animals–examples: depressed gorillas, compulsive parrots, and a cow with anger management issues. And there's so much more information out there.

As a final challenge to your preconceptions: even plants can see, smell, feel, learn, and remember.

In conclusion, I am not a vegetarian and do not think that complete abstinence from consumption of animals is necessary or even healthy for most people. I would like to kill and pluck my own free-range chicken someday–I’m sure it would not only be healthier and happier than those chickens from Tyson Foods or similar, but I would also feel more gratitude as I ate it. Of course, I highly discourage the consumption of close relatives such as monkeys, for ethical and health reasons. I would not eat a dog or cat, and there are many other lines that I’ve personally drawn. But let’s be considerate, thankful, and respectful of the organisms that we use for sustenance, companionship, and research. We are not the only rightful inhabitants of this planet, after all.

Discussion Topic: Do you have a pet that seems quite intelligent at times? Name something it’s done that has surprised you.

Why Didn’t I Think of That?

It’s hard to believe that simple things can still be invented. Doesn’t anything useful require some type of engineering degree to conceive? Yet we learn time and time again that people are still doing it, and some of these people are quite young.

Last year, 16-year-old Ann Makosinski invented a flashlight powered by body heat. She employed Peltier tiles to convert the temperature difference between the hand on one side of the flashlight and the ambient air on the other into electricity. This relies on the thermoelectric effect: when a temperature gradient is applied to a material, the mobile charge carriers migrate from the hot side to the cold side. And that's what electricity is: a flow of charge that can be harnessed to do work. But to give Ann credit, it's not entirely that simple. Even her engineer father was amazed that she was able to manipulate the circuit to make it put out 20 millivolts.
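The back-of-envelope physics is friendly here. A thermoelectric tile's open-circuit voltage is roughly its Seebeck coefficient times the temperature difference across it; plugging in a representative multi-junction module coefficient (an assumed textbook-ish value, not a spec from Makosinski's build) and a plausible hand-to-air gradient lands right around her figure:

```latex
% Rough Seebeck estimate; S is an assumed representative module value.
V_{\mathrm{oc}} = S \,\Delta T \approx 4~\mathrm{mV/K} \times 5~\mathrm{K} = 20~\mathrm{mV}
```

An LED needs a couple of volts, so that 20 mV presumably had to be stepped up by roughly two orders of magnitude–which would explain why the circuit manipulation is the part that impressed her father.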

I can find tons of examples like this by reading about the annual Google Science Fair finalists (of which Makosinski was one). Peruse that list each year and you'll experience one or both of two feelings: "Why didn't I think of that?" and "Can I please get a do-over on my high school years?"

The feature of this brief post is a program conceived by a team of scientists at M.I.T. that amplifies tiny variations in videos to extract more visual information–so you could, for example, monitor the heartbeat of an infant in the ICU without attaching a physical EKG monitor. A Ph.D. student involved with the algorithm's development, Michael Rubinstein, said it's actually a very simple algorithm: it takes a spatial average of the pixel color intensities, amplifies the tiny variations in that average a hundredfold, then replays the video with the new intensities. I just can't get over what a good idea this is! And that it hadn't been done before–amazing, given how uncomplicated it is. Watch the video below (if for no other reason than that it includes a motion-magnified clip of Christian Bale from Batman):
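(While the video loads, here is a minimal sketch of the core trick in code: temporally filter each region's average color, amplify that signal, and add it back. This is my paraphrase of the idea, not the M.I.T. team's implementation; the block size and frequency band are assumptions.)

```python
# Minimal sketch of Eulerian-style color magnification (not the M.I.T. code).
# `frames` is a (T, H, W, 3) float array of video frames in [0, 1], with H and
# W assumed divisible by the block size.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_color(frames, fps, lo_hz=0.8, hi_hz=3.0, alpha=100.0, blur=8):
    T, H, W, C = frames.shape
    # Spatially average into coarse blocks to suppress noise (a crude stand-in
    # for the paper's spatial pyramid).
    coarse = frames.reshape(T, H // blur, blur, W // blur, blur, C).mean(axis=(2, 4))
    # Temporally band-pass each coarse pixel around the heart-rate band.
    b, a = butter(2, [lo_hz, hi_hz], btype="band", fs=fps)
    pulsation = filtfilt(b, a, coarse, axis=0)
    # Amplify the filtered signal and add it back at full resolution.
    boosted = np.repeat(np.repeat(pulsation * alpha, blur, axis=1), blur, axis=2)
    return np.clip(frames + boosted, 0.0, 1.0)
```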

I guess this is just a good reminder that anyone can have a worthwhile, novel idea. Innovation never ends, and simplicity is often a virtue. If you think you’ve got a good one, believe in yourself and run with it.

Discussion Topic: Have you recently heard of an invention that made you wonder why you hadn’t thought of it?

Is Free Will a Thing?

I'd like to call the following a neurophysiosophical rant, i.e., a philosophical rant grounded in neurophysiology. Feel free to borrow the term.

If any of you have studied for the GRE (Graduate Record Examination) within the past several years, chances are you've run across a reading passage describing the findings of Benjamin Libet, a neurophysiologist from UC San Francisco. When I think about it, I've actually learned a lot of interesting things from GRE reading passages. But this one in particular has stuck with me, because ever since I read it I've been skeptical about the idea of free will.

There is a lengthy (understatement!) Wikipedia page dedicated to this historical debate, but if you aren't particularly in the mood to tackle an 8,920-word scientific discourse (that's equivalent to 10% of a full-length novel), this post should suffice (however, if you are, you should seriously read it–it's pretty cool). Here's the meat:

Two scientists named Lüder Deecke and Hans Helmut Kornhuber discovered something called the readiness potential (RP) in 1964. RP quantifies the ramping up of electrical activity in the motor cortex and supplementary motor area of the brain that precedes a voluntary muscle movement. Benjamin Libet's contribution in the 1980s was to show through experimentation that this RP signal precedes the conscious will to move. Of course, his methods were a bit crude: RP was measured with an electroencephalogram (EEG), movement was detected with an electromyograph (EMG), and the first awareness of the will to move was noted by the test subject using a fancy oscilloscope timer. As you can imagine, the time of this first will to move would be impossible to record exactly, owing to the lag between the urge to act and the ability to note the location of the dot on the oscilloscope. Furthermore, if the subject follows Libet's direct instructions to note when "s/he was first aware of the wish or urge to act," s/he is not in fact willfully deciding to move. In this case, perhaps the "urge" felt by the subjects is the RP itself.

I wouldn't hang my hat on the results of this study. But a more recent study (2008) repeated the experiment with some modifications and extensions, including the use of fMRI with machine learning (multivariate pattern analysis) to predict which button, left or right, a test subject would choose to press. The authors write, "We found that the outcome of a decision can be encoded in brain activity of prefrontal and parietal cortex up to 10 seconds before it enters awareness." However, the accuracy rate was only 60%. Moreover, this experiment still relied on the test subjects noting the time of first awareness of the urge to move. In my opinion, the result is therefore more intuitive than the alternative: I would be much more surprised if a person felt an "urge" to move and it was not the result of an electrical signal in the brain. On the other hand–is it even possible to experience the will to move in the absence of an urge of any kind? The neural motor default is inhibition, not excitation.
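If "machine learning through multivariate pattern analysis" sounds exotic, it's conceptually just classification: feed a model many labeled voxel patterns, then ask it to label a new one. Here is a toy version with synthetic data; the weak signal strength is deliberate, to echo the study's modest 60% accuracy.

```python
# Toy multivariate pattern analysis: predict a left/right button press from a
# (simulated) voxel pattern recorded before the choice. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

n_trials, n_voxels = 400, 500
labels = rng.integers(0, 2, size=n_trials)          # 0 = left, 1 = right
signal = rng.normal(size=n_voxels) * 0.03           # deliberately weak pattern
patterns = rng.normal(size=(n_trials, n_voxels)) + np.outer(labels - 0.5, signal)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")    # modestly above chance (0.5)
```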

The issue of finding proper controls for a scientific experiment is pervasive and fascinating. This scientific question regarding free will is particularly troublesome because there are so many unanswered questions whose predicted answers necessarily contribute to the premise upon which the experiment is based. Such questions include: How might free will be observed if it does exist? If asked to exercise free will, does this conscious determination to make certain decisions at “random” intervals preclude the ability to act in a truly autonomous manner? (I hypothesize yes.) Under the right circumstances, can RP actually start after conscious will to move (and then, what are these circumstances)? Do different kinds of actions require different kinds of free will? What is free will, anyway?

To the last question, I present the Merriam-Webster definition: “Freedom of humans to make choices that are not determined by prior causes or by divine intervention.” Great–now what do these “prior causes” entail? Where philosophy meets science, the rabbit hole runs deep.

Okay, so here is one sub-question which some scientists, including Libet, have endeavored to answer: once started, can RP and/or the progression toward movement be stopped? Libet did observe that RP could be initiated without being followed by actual movement, implying that the subconscious decision to move was vetoed. Michael Egnor of Science News thinks that the buck, while perhaps not stopping, brakes to a school-zone-appropriate speed here, and Benjamin Libet would agree. According to Egnor, Libet wrote, "This kind of role for free will is actually in accord with religious and ethical strictures. These commonly advocate that you 'control yourself.' Most of the Ten Commandments are 'do not' orders." Interestingly, Libet firmly believed in free will (or at least free won't), maintaining that the 'veto' need not be neurophysiologically predetermined–in his words: "We would not need to view ourselves as machines that act in a manner completely controlled by the known physical laws." But in my opinion, it's very likely that there is a separate brain signal that competes with the readiness potential and overrides it. In fact, how could there not be? Our sensory neurons are constantly competing with each other to dominate our awareness, so why should the process of decision-making be any different? I don't think anyone would argue that consciousness is an entity wholly distinct from the physical wiring of the brain. Thus, the question that I think becomes most pertinent in this debate is: does the competing "veto" signal exist physically, and if so, where and how does it arise?

Scientists Simone Kühn and Marcel Brass suspected that the veto also arises subconsciously, and in 2009 they sought to answer this question. The premise: if the decision to veto is in fact an act of consciousness, test subjects should be able to distinguish deliberately permitting a movement from mere impulse (i.e., failing to make a decision either way). I won't go into the methods (read their paper if you're interested), but the results showed that the volunteers could not make this critical distinction. Thus, the evidence more strongly supports a model in which the decision to veto an action also arises subconsciously.

One of the most compelling experiments on this subject for me is a 1990 study by Ammon and Gandevia in which the scientists were able to manipulate test subjects’ perception of their control over decisions. Summary: any given right-handed volunteer would normally choose to move his right hand 60% of the time; however, when the right hemisphere of the brain was stimulated using magnets, he would choose to move his left hand 80% of the time. The incredible part: despite external influence, subjects still believed that their choices regarding which hand to move had been made freely.

After reading all this literature, if I had to say which side I'm leaning toward, it's definitely the one in which all our decisions result from an optimization calculation in the brain. It makes so much sense to me that we would integrate all our nature and nurture–observations, information, training, and genetic tendencies–as parameters for some extremely complex multivariate nonlinear regression, in order to make the best possible decision. I mean, I can understand situations in which even suicide might be computed by the brain to be the least negative/painful option.
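For what it's worth, here is the cartoon version of that picture: a decision as an argmax over options, scored by a nonlinear value function whose parameters stand in for nature and nurture. Purely illustrative, not a validated model of anything.

```python
# Cartoon of "decision as optimization": score each option with a nonlinear
# function of its features, weighted by learned parameters, and pick the max.
# Purely illustrative; not a validated model of human choice.
import numpy as np

def decide(options, weights):
    """options: dict of name -> feature vector; weights: learned parameters."""
    def value(features):
        return np.tanh(weights @ features).sum()   # toy nonlinearity
    return max(options, key=lambda name: value(options[name]))

# Features: [expected reward, risk, social cost]; weights encode one person's
# genetic tendencies and lifetime of training (all numbers invented).
weights = np.array([[1.2, -0.8, -0.5],
                    [0.3, -1.5,  0.1]])
options = {"speak up": np.array([0.9, 0.6, 0.4]),
           "stay quiet": np.array([0.2, 0.1, 0.0])}
print(decide(options, weights))
```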

Am I okay with the idea that I may be no more than a biotic cyborg? I guess so, yeah. But there's still a strong sense of personal responsibility. It's more important than ever to stay as informed as I possibly can about all issues (moral and otherwise) that might directly affect my life, so that when the need to decide presents itself, my neural networks will make the best decision for me and for those around me.

Discussion Topic: What do you think? Is free will a thing?