Progress Toward Eradicating HIV

I was shocked last summer by the loss of Malaysia Airlines Flight MH17, which crashed over Ukraine carrying among its passengers six AIDS researchers. Here is my [overdue] tribute: a brief summary of our progress toward eradicating HIV from this planet.

Human Immunodeficiency Virus (HIV) in brief:
HIV is a retrovirus of the genus lentivirus. Retroviruses carry their genetic material in the form of RNA, and then reverse transcribe this RNA inside the host cell to create viral DNA. The DNA is then incorporated into the host genome. Lentiviruses are special in that they can infect non-dividing cells, while other retroviruses can only do so during cell division. For this reason, lentiviral vectors can be used to deliver copies of a correct gene to people with a mutated version. HIV in particular infects several types of immune cells, including CD4 T cells. In humans, this results in a progressive failure of the immune system over time, which steadily increases patient susceptibility to infections and cancers. If left untreated, average life expectancy after the time of infection is 9 to 11 years. Fortunately, there exist many kinds of effective treatments, collectively called antiretroviral therapy (ART). You can read about these drugs and how they inhibit different steps of the viral life cycle here. Unfortunately, these treatments are not capable of completely curing an HIV patient.
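If you like to think in code, the reverse-transcription step can be illustrated with a toy base-pairing function. This is a sketch of the chemistry only; the real enzyme, reverse transcriptase, does much more (and does it sloppily):

```python
# Toy illustration of reverse transcription: each RNA base pairs with a
# complementary DNA base (A-T, U-A, G-C, C-G). Real reverse transcriptase
# also builds the second DNA strand and copies with frequent errors,
# which is one source of HIV's rapid mutation.
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna: str) -> str:
    """Return the complementary DNA strand for a viral RNA sequence."""
    return "".join(RNA_TO_DNA[base] for base in rna)

viral_rna = "AUGGCU"                  # hypothetical 6-base snippet
print(reverse_transcribe(viral_rna))  # -> TACCGA
```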

Source: Joshua L. Hood, MD, PhD

If you Google “HIV cure,” you’ll find all kinds of interesting stuff. For example, a compound found in bee venom, melittin, pokes holes in lipid bilayers and thus can kill the human immunodeficiency virus by destroying its outer envelope. The melittin can be delivered with nanoparticles, which are already known to be safe in humans. The figure at right illustrates this delivery system. The nanoparticles (purple) are modified with small molecular “bumpers” (red ovals) that prevent human cells from coming into contact with the melittin (green), but allow the much smaller virus to get close enough that the bee venom compound can rip it apart. A PLOS ONE journal article detailing a study of the safety of melittin as a virucide can be found here. This particular possibility is really exciting because unlike other existing therapies, it directly attacks the physical virus. I just hope that if this is pursued as an HIV solution, it can be synthesized in a lab and won’t have to be obtained via mass bee sacrifice.

While there’s no cure currently available, there exist individuals with a natural resistance to HIV, called “elite controllers.” Such individuals are infected with HIV but do not need to take ART because they are naturally able to maintain a CD4 count greater than 500. You can read about several different types of genetic variations that confer this natural resistance here, but I’ll focus on one in particular: the CCR5-delta-32 mutation, which nullifies the CCR5 gene. This gene normally encodes a cell-surface receptor protein on CD4 immune cells, which HIV uses to gain entry into the host cell. Clinical trials have already been conducted in humans to modify the CCR5 gene with gene therapy methods, so this viral entryway can be deleted in HIV patients who lack the natural mutation. The therapy was determined to be safe within the limits of one clinical study published last year in the New England Journal of Medicine. Efforts are currently underway to improve it with newer and more reliable gene editing methods. I’m not sure when this treatment will become widely available, and it hasn’t permanently cured anyone yet, but it definitely reduces viral loads and increases CD4 cell counts. Read more here.
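Incidentally, the “delta-32” in the name refers to a literal 32-base-pair deletion in the CCR5 coding sequence. Since 32 isn’t a multiple of three, the deletion shifts the reading frame, so a functional receptor never gets made. A toy sketch (the sequence below is a made-up placeholder, not the real gene):

```python
def delete_bases(gene: str, start: int, length: int = 32) -> str:
    """Remove `length` bases beginning at index `start` (0-indexed)."""
    return gene[:start] + gene[start + length:]

gene = "ATG" + "ACG" * 30      # made-up 93-base open reading frame
mutant = delete_bases(gene, 12)

print(len(gene) - len(mutant))        # -> 32 bases lost
print((len(gene) - len(mutant)) % 3)  # -> 2, so the reading frame shifts
```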

There actually is one person who is confidently said to have been functionally cured of HIV (he has had undetectable levels of viral genetic material and a normal CD4 cell count for more than six years now). This is Timothy Brown, also known as the second “Berlin patient.” Interestingly, however, the exact mechanism by which he was cured still eludes the researchers of the world. I’ll detail his treatment briefly. Brown was diagnosed with acute myeloid leukemia more than a decade after he contracted HIV. His doctor had heard that patients with the CCR5-delta-32 mutation have natural resistance to HIV, and found a bone marrow donor who carried two copies of that mutation, meaning the donor did not make any CCR5 receptor at all. After radiation and chemotherapy destroyed Brown’s white blood cells (eliminating the cancer), he underwent two separate bone marrow transplants from the aforementioned donor (the second was required because of a leukemia relapse). This allowed his immune system to start over with CD4 T cells that lacked the CCR5 receptor. One article describes a study that made some progress in elucidating why this treatment worked so well; researchers think that the foreign cells from the donor killed any of Brown’s immune cells that survived the conditioning treatment, allowing his immune system to truly start over after the transplant.

One other person may have been cured, using a transplant of umbilical cord blood from a donor carrying that very important CCR5 mutation, but this is said with hesitance, because the “Barcelona patient” was HIV-free for only three months before dying of cancer.

Relying on transplants is not the answer for the majority of the HIV-positive population. It’s simply a matter of supply and demand; people with two copies of the CCR5-delta-32 mutation are rare, and hunting these individuals down to ask for their bone marrow is unfair. Most experts agree that a cure must involve some combination of immune system therapy, gene therapy, and drug therapy to force HIV out of hiding so it can be destroyed.

There’s one more thing that I’m really excited about, and perhaps you have already thought about it yourself. If gene therapy has been used in a human to modify the CCR5 receptor, why can’t it be used to delete the viral DNA from the infected cells as well? Well, a research group at Temple University School of Medicine in Philadelphia has asked this very question, and they seem to have an answer. Their study, published last summer, details how they successfully eliminated the HIV virus from cultured human cells for the first time with high-fidelity gene editing methods. Unfortunately, this therapy is not yet ready to go into clinical trials. An article from the Temple University School of Medicine website details the main obstacles: “The researchers must devise a method to deliver the therapeutic agent to every single infected cell. Finally, because HIV-1 is prone to mutations, treatment may need to be individualized for each patient’s unique viral sequences.” But imagine what a potent combination it will be when we can use gene therapy both to prevent infection by deleting the CCR5 receptor and to erase the viral DNA from cells that have already been infected!

While we wait with great anticipation and hope for the cure for HIV, the World Health Organization (WHO) is making great progress towards providing universal access to antiretroviral therapy; they aim for all the world’s HIV victims to get access to ART by the end of 2015.

Even without a cure, we should be proud of how far we’ve come. Remember how I said that without treatment, HIV victims will live on average only 9-11 years post-infection? Well, successfully treated HIV-positive individuals now have a normal life expectancy. That’s no insignificant achievement.

So, thank you to all the AIDS researchers of the world!! It’s an exciting time to be alive.

Some Holiday Reindeer Science

Have you ever wondered how animals who live near the poles know when to sleep, what with the long seasons of mostly light or mostly dark days? Me too!

Before I get into that, let me explain how sleep/wake cycles work in the animals that live in normal parts of the earth. Circadian rhythms are biological oscillations that maintain an intrinsic periodicity of about 24 hours even in constant conditions (i.e. 24 hours of dark)–to simplify: they start over again each day. Circadian processes include:

  • behavior (sleep/wake, brain activity, physical activity)
  • physiology (hormone levels, metabolism, body temp)
  • cellular functions (cell cycle and maintenance)
  • gene expression (e.g. transcription)

These rhythms can all be entrained, or re-set, by cues such as light/dark (LD) cycles, temperature fluctuations, and feeding times. One hormone that’s very important for regulating circadian behavior and physiology is melatonin. It is produced rhythmically by the pineal gland, and it helps us get sleepy at night (among other roles). Reciprocally, melatonin secretion is also governed by the endogenous circadian clock; that’s why we humans can pretty well maintain our sleep schedules as day lengths fluctuate throughout the year. Even if the sun were blotted out for a day, our melatonin levels would still rise at the time we normally get ready to sleep.

However, and this is a side note: melatonin levels do decrease with exposure to light, even late at night. That is why some researchers have advised against staring at a bright computer or phone screen past your normal bedtime–this suppresses melatonin secretion and may make it harder for you to sleep. The consequences of the resulting sleep deficit may include increased stress, weakened immune function, impaired cognitive function, and other health issues. Of course, do what I say, not what I do–I’m up writing this at 1 a.m.
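If you like to think in code, here is a minimal toy model of the two effects just described: a clock-driven nightly rise in melatonin, plus acute suppression by light. All the numbers are illustrative, not physiological measurements:

```python
import math

def melatonin_level(hour: float, lights_on: bool) -> float:
    """Toy model of human melatonin (arbitrary units, 0 to 1).

    An endogenous ~24 h clock drives secretion at night (peak set at
    3 a.m. here), so the rhythm persists even in constant darkness;
    acute light exposure suppresses whatever the clock is producing.
    """
    clock_drive = max(0.0, math.cos(2 * math.pi * (hour - 3) / 24))
    suppression = 0.2 if lights_on else 1.0  # bright light blunts secretion
    return clock_drive * suppression

# The clock keeps melatonin high at 3 a.m. even on an imagined sunless day...
print(melatonin_level(3, lights_on=False))  # -> 1.0
# ...but staring at a bright screen at 3 a.m. suppresses it.
print(melatonin_level(3, lights_on=True))   # -> 0.2
```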

Image Credit: Thinkstock

Back to the point. Some scientists studied reindeer (or caribou, as many of us know them) to help answer the aforementioned question concerning their sleep regulation. Reindeer are native to the far north, including Arctic and subarctic regions, so they definitely have to spend their winters and summers in extreme (or “polar”) light/dark conditions. The scientists found that reindeer melatonin levels do not oscillate during midwinter. Experimentally, melatonin secretion was found simply to rise in the dark and fall during light exposure. Moreover, these reindeer do not have much of an endogenous clock at all during the most extreme times of the year. What does this mean for their behavior during midwinter and midsummer? Well, they definitely don’t sleep for eight straight hours like we do. Basically, their sleep rhythms are regulated by feeding. They eat, then they sleep or nap. And they eat about 8-10 times in one 24-hour period.
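To put this in the same toy terms as the human model above: midwinter reindeer melatonin needs no clock term at all, and their rest schedule falls out of meal timing. The code below just restates the figures from the paragraph above:

```python
def reindeer_melatonin(lights_on: bool) -> float:
    """Toy model of midwinter reindeer melatonin: no clock-driven term,
    secretion simply switches on in the dark and off in the light."""
    return 0.0 if lights_on else 1.0

# Feeding, not a clock, paces their rest: roughly 8-10 meals per 24 h,
# each followed by a sleep or a nap.
meals_per_day = 9
hours_per_eat_then_nap_cycle = 24 / meals_per_day
print(round(hours_per_eat_then_nap_cycle, 1))  # -> 2.7
```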

Interestingly, reindeer do show circadian rhythms in the spring and autumn, when there is a more normal LD cycle. But they’ve adapted to life in constant light/dark conditions so their clocks are very flexible. You might say that their circadian rhythms take summer and winter vacations.

Anyway, let’s just hope that Santa gives his reindeer some snacks and nap breaks on this long Christmas Eve night. Happy Holidays!

Mind-Reading and Manipulation

So, as it has been some time since my last post (I have a really good excuse, trust me), I know you’ve had plenty of time to think about the possible negative ramifications of developing the technology to read someone’s mind. We can start with the obvious: it’s a plain and simple invasion of privacy. I remember hearing about the Patriot Act sometime after its implementation in 2001, and thinking, I’m so glad we will always have one true privacy that the government can never invade: our thoughts. To my adolescent self, the technology to read minds seemed completely impossible. I knew very little about neuroscience back then. I didn’t know that our brain waves comprise a code just waiting to be cracked. Now I’m sure it’s merely a matter of time.

(Side note: I just Wikied the Patriot Act and realized that the USA in “USA PATRIOT Act” doesn’t stand for “United States of America.” It stands for “Uniting and Strengthening America.” For some reason, that is hilarious to me. Why use the acronym for the name of a country in the title of an Act of Congress to stand for anything other than the name of that country? Why??)

Of course, while I now believe it’s entirely within the realm of possibility, I also think it will be a long time before a person’s thoughts can be read. The business of brain wave decryption isn’t exactly analogous to deciphering an ancient language. In the brain, it matters how many neurons are firing in what region, with what frequency, and so forth. And the number of neurons in the adult human brain is estimated to be about 86 billion–so just imagine the complexity! These networks are different for every person, and moreover, they’re constantly changing within each person. Neurons are gaining and losing synapses (connections with other neurons) all the time. As we learn, which we do every waking moment, our brains are altered.

Graphical Abstract–Neural portraits of perception: Reconstructing face images from evoked brain activity

When you think about that, it’s actually really impressive how far we’ve gotten. Remember that Yale paper you were supposed to read? In this study, researchers were able to reconstruct impressively accurate facial images from the brain activity of subjects looking at photographs. The really crazy thing about their study in particular is that they didn’t factor in activity from the occipital cortex, the visual processing center of the brain. The authors write, “Visual stimuli based on patterns of activity outside occipital cortex have not, to our knowledge, been reported. The potential for reconstructions from higher-level regions (e.g., ventral temporal cortex or even fronto-parietal cortex) is enticing because reconstructions from these regions may be more closely related to perceptual experience as opposed to visual analysis.” How did they do it? The most basic explanation is that they used a learning algorithm to match brain activity for each participant to various features of a “training” facial image. Thus, with a decoding algorithm primed for each subject, researchers knew what brain activity would give information about certain features in the “test” photographs. They did reconstructions with several sets of neural activity, including ones that excluded input from the visual processing center of the brain!!! Above you can see their graphical abstract. The reconstructed image represented therein is one that integrated activity from all regions of the brain, but please see their paper for the reconstructions that exclude the occipital cortex!
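For the curious, here’s a drastically simplified sketch of the train-then-decode idea, using a single brain signal and a single face feature with ordinary least squares. The real study used whole patterns of fMRI activity and a much richer statistical model; every number below is invented for illustration:

```python
# Toy version of "train a decoder, then apply it to unseen responses":
# fit a linear map from one brain signal to one face feature using
# training faces with known feature values, then decode a test response.

def fit_slope(activity, feature):
    """Least-squares slope and intercept for one feature vs. one signal."""
    n = len(activity)
    mean_a = sum(activity) / n
    mean_f = sum(feature) / n
    cov = sum((a - mean_a) * (f - mean_f) for a, f in zip(activity, feature))
    var = sum((a - mean_a) ** 2 for a in activity)
    slope = cov / var
    return slope, mean_f - slope * mean_a

# Training: recorded activity while the subject viewed faces whose
# "eye spacing" (hypothetical units) we already know.
train_activity = [0.1, 0.4, 0.7, 1.0]
train_eye_spacing = [30.0, 33.0, 36.0, 39.0]

slope, intercept = fit_slope(train_activity, train_eye_spacing)

# Decoding: predict the feature from a new, unseen brain response.
test_activity = 0.55
decoded = slope * test_activity + intercept
print(round(decoded, 1))  # -> 34.5
```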

A neuroscience lab at UC Berkeley is working in a similar vein–these scientists want to be able to interpret the brain’s activity in response to a moving picture, rather than a still picture. I presume that the end goal of all this would be to enable blind persons to see again–perhaps a helmet could be devised which would live-stream the view that a person would normally see with his eyes and translate it into electrical stimulations at exactly the right places in the person’s brain to allow him to see the world around him. Of course, we’re still learning to map brain activity to certain visual stimuli, and cannot even begin to try to evoke the proper visual interpretation in the brain until we have the mapping part down. But Berkeley’s Professor Jack Gallant has made some progress, using methods similar to those of the Yale researchers above, adapted for dynamic images. Here you can see some clips from movie trailers that Gallant and his team reconstructed from test subjects’ brains, but I’ll warn you: they’re not that impressive yet. There’s a big difference between reconstructing static facial images and reconstructing random movie clips in real time.

Now, this is something of a tangent, but if we’re discussing mind reading and its possible evils, we have to touch on mind control. So, remember that scene in Avatar where Jake and Neytiri use their little built-in Ethernet cords to plug their minds into each other? …Yes, you do. Well, in some sense, that is now a reality. Researchers at the University of Washington have enabled a person to control the hand movements of another person located in a completely different building about a half mile away. No, really! The first participant sits in front of a computer game and must defend a city by blocking enemy fire and retaliating with cannons. The catch is, he has no physical controls. Instead of pushing buttons or clicking a mouse, this player must defend the city by thinking about moving his hand to push buttons. An electroencephalography (EEG) machine reads this intention from his brain and transmits the motor control command via electrical pulses through the Internet and then through electrodes hooked up to a transcranial magnetic stimulation coil placed over the region of the receiving player’s brain that executes hand movements. (Man, that’s a mouthful.) Within a fraction of a second, the second person’s brain is stimulated and his hand taps a touch-screen game control. He cannot see the game and does not know when to fire the cannon or block incoming fire, yet accuracy during these trials is as high as 83%.
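Conceptually, the demo is a three-stage pipeline: detect intent from EEG, ship a command over the network, and trigger stimulation on the other end. This toy sketch (my own invention, not the UW team’s software) captures just that flow:

```python
# Hedged sketch of the information flow in the brain-to-brain demo:
# sender-side EEG detection -> network hop -> receiver-side stimulation.
# The threshold and return strings are purely illustrative.

FIRE_THRESHOLD = 0.6  # made-up detection threshold

def detect_intent(eeg_motor_power: float) -> bool:
    """Sender side: classify 'imagining a hand movement' from EEG power."""
    return eeg_motor_power > FIRE_THRESHOLD

def transmit(intent: bool) -> str:
    """Network hop: in the real demo this crossed campus over the Internet."""
    return "FIRE" if intent else "HOLD"

def stimulate(command: str) -> str:
    """Receiver side: a TMS pulse over motor cortex makes the hand move."""
    return "hand taps touchscreen" if command == "FIRE" else "no movement"

print(stimulate(transmit(detect_intent(0.8))))  # -> hand taps touchscreen
print(stimulate(transmit(detect_intent(0.2))))  # -> no movement
```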

This amazes and terrifies me. And guess what? The UW researchers have recently been given a $1 million grant from the W.M. Keck Foundation to try and expand the types of information that can be shared from brain to brain. Apparently, they’d like to enable transmission of visual and psychological concepts and thoughts. I can see the interest from a basic science perspective, but in a practical sense, do we want that technology to exist?

The one thing the UW researchers are looking into that has a clear positive objective is learning how to influence neural activity involved with alertness and sleepiness. With such knowledge, they hope that signals from a dozing airplane pilot’s brain could activate an electrical stimulation to wake him so he doesn’t…crash the plane, I guess. Which, I’ve just learned, actually happens sometimes.

What other helpful/scary things are neuroscientists cooking up nowadays? Memory erasure. Researchers from Shanghai Institute of Brain Functional Genomics and East China Normal University in Shanghai have achieved erasure of fear memories in mice. This is like, Forgetfulness Charm from Harry Potter and the Chamber of Secrets level stuff. You can see why such a thing might possibly be an attractive option for patients with Post Traumatic Stress Disorder, but jeez. The whole Neuron article abstract is terrifying. “We find that transient alphaCaMKII overexpression at the time of recall impairs the retrieval of both newly formed one-hour object recognition memory and fear memories, as well as 1-month-old fear memories. Systematic analyses suggest that excessive alphaCaMKII activity-induced recall deficits are not caused by disrupting the retrieval access to the stored information but are, rather, due to the active erasure of the stored memories. Further experiments show that the recall-induced erasure of fear memories is highly restricted to the memory being retrieved while leaving other memories intact. Therefore, our study reveals a molecular genetic paradigm through which a given memory, such as new or old fear memory, can be rapidly and specifically erased in a controlled and inducible manner in the brain.” And this was published back in 2008. The ability to erase fear memories is already six years old.

Okay, there’s one more thing I have to mention, and maybe you’ve already thought of it. What will mind reading do to the criminal justice system? Lie detectors would be obsolete. There would be fewer trials, because a suspect’s innocence could be determined by a simple fMRI test. On the one hand, perhaps no more will innocent people be wrongly accused. On the other hand, how will we know when the technology is accurate/reliable enough to entrust with a person’s life? And of course, if we want to get really dramatic: the glory of dying with testimony will be a thing of the past.

Perhaps you can guess what I’ll say next: the technology does already exist, though it is imperfect, and though it is definitely unconstitutional to involuntarily impose. In a brief article on this subject, Jay Stanley, Senior Policy Analyst of the ACLU Speech, Privacy & Technology Project, wrote the following: “Unlike the polygraph, which measures heart rate and temperature in an attempt to detect a subject’s response to lying, fMRI lie detection attempts to detect a subject’s decision to lie. And for a polygraph to work you have to get a subject to actively participate by answering questions, while fMRI could be used to extract information from a person whether they actively provide an answer to a question or not.” The ACLU views techniques for peering inside the human mind as a “violation of the 4th and 5th Amendments, as well as a fundamental affront to human dignity.” And I think it would be hard to find someone who disagrees. I mean, we are our minds. This is seen clearly in some victims of severe brain trauma, who are not the same person afterwards (there are endless examples of this, but here’s one I just happened to come across). So to manipulate a mind is to manipulate an identity, and to invade another’s thoughts is to undress a soul.

But rest easy, for now. An article in Frontiers in Human Neuroscience, “Prospects of functional magnetic resonance imaging as lie detector,” attempts to tackle this ethical issue: “We argue that the current status of fMRI studies on lie detection meets neither basic legal nor scientific standards…and provide an overview on the stages and operations involved in fMRI studies, as well as the difficulties of translating these laboratory protocols into a practical criminal justice environment. It is our overall conclusion that fMRI is unlikely to constitute a viable lie detector for criminal courts.”

In conclusion, I think that there are several honorable applications of mind reading capabilities. As mentioned before, there are many ways in which all the technologies listed here and in my previous post can give injured persons their lives back. Sure, I dread to think of this technology falling into the wrong hands. I don’t want my mind invaded any more than the next guy; however, I cannot imagine the pain of losing the ability to speak, or perhaps worse–seeing a loved one lose that ability. In such a situation, I would give anything to be able to talk with him or her again. To be locked outside of a precious other’s mind, or locked inside oneself, seems like the cruelest fate imaginable.

…There’s also the thought that maybe one day we can learn to read animals’ thoughts and better communicate with them. Could be cool.

My main goal in writing on this topic is not to get you to pick a side, but to get you thinking about it, because it concerns all of us guys with that piece of anatomy called a brain. Thanks for reading, and please share your thoughts in the comments below!


How Advanced Is Mind-Reading Technology?

I don’t know how this topic hasn’t come up more frequently in my conversations. I mean, we all know about fMRI and EEG for studying brain activity, but do we realize these are the first steps towards mind-reading? And do we realize that scientists are actively trying to develop mind-reading technology for a good cause: to help mute people speak, and paralytics move prosthetic limbs? Such advanced technology seems ludicrous, but we’re starting to learn that it’s within the realm of possibility. And suddenly, the one thing you believed would always be private, your mind, becomes unfathomably precious and fragile.

Stephen Hawking has A.L.S. He currently communicates using facial recognition technology–a twitch of his cheek or eyebrow will stop a cursor moving across a keyboard on a computer screen so he can select letters and spell words (this system also has a word prediction algorithm so he doesn’t have to painstakingly choose every letter). But he sometimes wears a headband with an in-development computer chip called the iBrain, allowing it to read his brain waves and learn what signals correspond to certain letters, words or actions. Its developers at NeuroVigil hope that one day it will be able to read the mind of Hawking and others to allow them to speak efficiently and expressively.

BrainGate is another research team endeavoring to decipher brain signals. They’re working on something called “Intracortical brain computer interfaces,” which aim to permit brain control of, among other things, a cursor on a computer screen. If perfected, it would replace Stephen Hawking’s current method of communication. But they have another interesting technology in development, which they hope one day will allow people to naturally control prosthetic limbs the same way that they would control real ones–through a direct link to the motor control region of the brain. BrainGate researchers state on their website: “Using a baby aspirin-sized array of electrodes implanted into the brain, early research from the BrainGate team has shown that the neural signals associated with the intent to move a limb can be ‘decoded’ by a computer in real-time and used to operate external devices.”

In this endeavor, the researchers have already enjoyed incredible success–two stroke victims have been able to control a robotic arm using only their brains. Participant Cathy, who was paralyzed for 15 years prior to this trial, was able to use the arm to raise a bottle of coffee to her lips and drink. But John Donoghue, the leader of the BrainGate2 clinical trial, has emphasized that the technology is far from fully functional: “Movements right now are too slow and inaccurate — we need to improve decoding algorithms.”
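The “decoding” step BrainGate describes can be caricatured as a linear map from electrode firing rates to a cursor (or arm) velocity. Real decoders are calibrated per session and are far more sophisticated; the weights below are made up:

```python
# Toy neural decoder: each electrode's firing rate contributes linearly
# to the x- and y-velocity of a cursor. A pedagogical stand-in, not
# BrainGate's actual algorithm.

# Hypothetical weights: (x-weight, y-weight) per electrode.
WEIGHTS = [
    (0.5, 0.0),     # electrode 0 mostly codes rightward movement
    (0.0, 0.5),     # electrode 1 mostly codes upward movement
    (-0.25, 0.25),  # electrode 2 codes up-and-leftward movement
]

def decode_velocity(firing_rates):
    """Map a vector of firing rates (spikes/s) to a (vx, vy) velocity."""
    vx = sum(w[0] * r for w, r in zip(WEIGHTS, firing_rates))
    vy = sum(w[1] * r for w, r in zip(WEIGHTS, firing_rates))
    return vx, vy

# If electrode 0 fires strongly, the cursor should drift right.
print(decode_velocity([10.0, 0.0, 0.0]))  # -> (5.0, 0.0)
```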

It seems that a company called Battelle, in collaboration with researchers at Ohio State University, has gotten even closer. A quadriplegic named Ian Burkhart is the first person to use Neurobridge, a device that reconnects the brain to muscles without the spinal cord. This happened in April of this year, guys. This is the future. When I first read about it, it sounded like science fiction. We’re here already?? Science has done it??? We’re curing paralysis???? It’s real, but don’t be misled: it doesn’t communicate to the muscles internally. This article posted on the Ohio State University Wexner Medical Center website describes it accurately: “The tiny chip interprets brain signals and sends them to a computer, which recodes and sends them to the high-definition electrode stimulation sleeve that stimulates the proper muscles to execute [Ian’s] desired movements.” Maybe one day this can be made to work internally. But plain and simple, Neurobridge’s developers have restored hands and hope to a guy who’s been paralyzed for four years because of a diving accident. That’s no small deal.

As you can see, this technology has amazing potential to give many people their lives back. But can you think of some possible negative effects as well? Next time, we’ll discuss the scary implications. If you want to read ahead, here’s a paper by Yale researchers who’ve reconstructed imperfect but impressively recognizable facial images from brain scans of people viewing photographs: Neural portraits of perception: Reconstructing face images from evoked brain activity.

Discussion Topic: These are some pros; brainstorm the cons. Are you excited, scared, or both?

The Line Becomes Finer … A Post Script

While doing research for my next post, I came across the website of NeuroVigil, Inc., the company developing the iBrain (my next post will be pretty exciting, guys–exciting and/or terrifying). On their site, they link to a July 2014 article in the New York Times called “Zoo Animals and Their Discontents.” As it’s so perfectly related to my most recent post, I can’t help but share it. Here’s an excerpt:

The notion that animals think and feel may be rampant among pet owners, but it makes all kinds of scientific types uncomfortable. “If you ask my colleagues whether animals have emotions and thoughts,” says Philip Low, a prominent computational neuroscientist, “many will drop their voices to a whisper or simply change the subject. They don’t want to touch it.” Jaak Panksepp, a professor at Washington State University, has studied the emotional responses of rats. “Once, not very long ago,” he said, “you couldn’t even talk about these things with colleagues.”

That may be changing. A profusion of recent studies has shown animals to be far closer to us than we previously believed — it turns out that common shore crabs feel and remember pain, zebra finches experience REM sleep, fruit-fly brothers cooperate, dolphins and elephants recognize themselves in mirrors, chimpanzees assist one another without expecting favors in return and dogs really do feel elation in their owners’ presence. In the summer of 2012, an unprecedented document, masterminded by Low — “The Cambridge Declaration on Consciousness in Human and Nonhuman Animals” — was signed by a group of leading animal researchers in the presence of Stephen Hawking. It asserted that mammals, birds and other creatures like octopuses possess consciousness and, in all likelihood, emotions and self-awareness. Scientists, as a rule, don’t issue declarations. But Low claims that the new research, and the ripples of unease it has engendered among rank-and-file colleagues, demanded an emphatic gesture. “Afterward, an eminent neuroanatomist came up to me and said, ‘We were all thinking this, but were afraid to say it,’ ” Low recalled.

The article also details the work of animal behaviorist Dr. Vint Virga, who reminds me in many ways of Hagrid from Harry Potter, despite being short and completely devoid of facial hair.

Zoos contact Virga when animals develop difficulties that vets and keepers cannot address, and he is expected to produce tangible, observable results. Often, the animals suffer from afflictions that haven’t been documented in the wild and appear uncomfortably close to our own: He has treated severely depressed snow leopards, brown bears with obsessive-compulsive disorder and phobic zebras. “Scientists often say that we don’t know what animals feel because they can’t speak to us and can’t report their inner states,” Virga told me. “But the thing is, they are reporting their inner states. We’re just not listening.”

The article’s author, Alex Halberstadt, was fortunate to visit Virga’s home one day. When pulling into the driveway, Virga stopped suddenly because a frog halfway down a snake’s throat was impeding the route to the garage. He immediately called his wife to warn her so she wouldn’t run them over when she returned home.

This article is beautiful, and long, and completely worth 10-15 minutes of your time. Cozy up in an armchair with a glass of wine and read about Libby the bitchy Barbary sheep, Sukari the anxious giraffe, a mortally apathetic clouded leopard, and many more relatably flawed yet beautiful non-human individuals. Of Sukari, Halberstadt writes:

Standing eye to eye with a giraffe is weirdly peaceful. The creature is so unlike us in its particulars and scale, yet so deliberate in its design. It’s comforting not to be at the center of creation. Sukari chewed the leaves gamely, working her jaws with real gourmandise. And then her eye strayed toward the ceiling, and she quit chewing and slightly turned her head. No sound or movement had distracted her. For a span of some seconds, her eyes grew unfocused and rested upon no tangible object, and an expression crossed her distracted face that could only be a passing thought. Or so it looked to me.

The Line Becomes Finer

In the past year, as I’ve explored neuroscience in greater depth than ever before, what has struck me are the similarities, rather than the differences, between the brains of animals and our own. So many things I’d previously perceived as uniquely human are shared by an astonishing breadth of our relatives, near and distant. As always, when I want to learn more about a scientific subject, I blog about it. Here is my mama-bird science regurgitation; I hope it permits you to see our co-earthlings in a new light.

Humanity has evidently come a long way since the father of modern philosophy, René Descartes, justified cruel experimentation on animals by declaring they did not have souls. Descartes believed that animals cannot reason and do not feel pain. Can you imagine believing that if you torture an animal and it screams and cries, that’s just a robotic reaction, no different than if a potato were screaming? Then again, if a potato could scream, it would not quite be a potato. Descartes maintained that humans are the only conscious living beings who have minds and souls, can learn, can speak language, and can actually experience pain. He believed it was foolish to have compassion for non-human animals. In his own words: “But the greatest of all the prejudices we have retained from infancy is that of believing that brutes think” (René Descartes, 1649). Gary Francione wrote in his Introduction to Animal Rights that Descartes and his followers held public demonstrations in which they inflicted severe pain on animals (examples: nailing the paws of dogs to boards, cutting open their chests to reveal the beating hearts, burning, otherwise mutilating) in order to have the opportunity to educate the crowd not to feel sympathy for these organic ‘machines’ that were only ‘functioning properly.’

Thank god that the issue of consciousness in animals is no longer up for debate in the scientific community. And beyond that: I was very pleased to learn during my training for research animal handling that daily intellectual stimulation is a requirement for the care of research monkeys and apes; to deny them play and learning is inhumane and punishable by law. Theologically, however, the subject of animal consciousness remains contested; see this argument carried out through the medium of vintage church marquee, in which a Catholic church comically competes with a Presbyterian church for the souls of pre-believer passersby by promising to grant one dog a soul with each conversion.

Scientists now recognize that many things historically considered exclusively human traits are not. We’ll start with one of the most primitive: tool use. Elephants, bears, bottlenose dolphins, sea otters, mongooses, badgers, many birds, apes, fish, and even insects have been observed using tools. In fact, there’s an entire Wikipedia page dedicated to tool use by animals, with examples, of course. Let’s define this so we’re all on the same page: the scientist Benjamin B. Beck characterized a tool as “the external employment of an unattached or manipulable attached environmental object to alter more efficiently the form, position, or condition of another object, another organism, or the user itself, when the user holds and directly manipulates the tool during or prior to use and is responsible for the proper and effective orientation of the tool.”

Insects, guys. No, basic tools defined thus are not the dividing line. However, a stricter definition, “complex” tool use, seems more credible; it requires that two or more tools be used in a certain order to accomplish a task, or else that the tool be built from multiple elements. Even so, chimps fit that bill. According to journalist Kate Yoshida, a group of chimpanzees in Gabon extracts honey using a chronological succession of five tools, all of which are essential to the process. Chimpanzees alone among non-human primates have been observed using a single material to construct a variety of tools (e.g. they use leaves to make sponges as well as probes to reach insects), and they are choosy about materials, traveling considerable distances to obtain the correct tree species for a probe. William McGrew writes in his article “Chimpanzee Technology”: “Almost 50 years ago, Jane Goodall watched an adult male chimpanzee in the Gombe Stream Reserve, Tanzania, make and use a blade of grass to ‘fish’ termites from a mound for food. Her mentor, Louis Leakey, declared, ‘Now we must redefine “tool,” redefine “man,” or accept chimpanzees as humans!'” While we get the idea, many other things, such as cognitive abilities and number of chromosomes, separate our species. Still, all the time, the line becomes finer.

Thanks to advances in technology that give us near mind-reading power (a little scary, no?), we know that chimps engage in resting-state brain activity very similar to ours (where examples of resting-state thoughts include the mind wandering to “past social interactions, potential future social interactions and to problems you need to solve”). Team member Dr. Preuss said the findings suggest that “humans and chimpanzees share brain systems involved in thinking about one’s own behavior and that of others.” The functional brain imaging revealed differences in addition to similarities, but the differences are never so shocking to me. My mind is still in the process of opening. That a chimp can sit and think about his day and anticipate hanging out with his friends is incredible to me. Disparities: humans showed more resting-state activity in regions of the brain associated with the analysis of meaning. And humans are the only animals known to think in words, as evidenced by the high activity in language regions of the brain during imaging. Read the Science Daily article for more information, and read this Cell Press Review to learn more about cognitive limits in chimps (we don’t think they understand the concept of false beliefs, for example).


Nevertheless, while we outperform chimps in many higher processes, Caltech scientists have shown that chimps can be better strategists than humans. Researchers administered a game theory test in which a human and a chimp competed for rewards (food for the chimp, money for the human) by trying to predict each other’s decisions. One player was the seeker, one the hider. The rules were simple: two rectangles appeared on a computer screen, and the seeker had to choose the one he thought the hider would choose, while the hider had to choose the one he thought the seeker would not. Astonishingly, the chimps consistently defeated their human competitors. They scraped dangerously close to the theoretical success limit defined by John F. Nash, Jr., winner of the Nobel Prize for his game theory discoveries. It’s thought that chimps’ superior short-term memory and their more competitive natures may have contributed to the observed result. But slice it how you like: in this kind of strategic thinking, at least, chimps have humans beat.
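The seeker-hider setup is a version of the classic matching-pennies game, and the “theoretical success limit” is its mixed-strategy Nash equilibrium. As a rough sketch (the payoff numbers below are illustrative, not those from the Caltech study), the equilibrium can be found by computing the hiding frequency that leaves the seeker indifferent between his two options:

```python
# Sketch: mixed-strategy Nash equilibrium of a 2x2 hider-seeker game.
# Payoff values are made up for illustration, not taken from the study.

def hider_equilibrium_mix(seeker_payoff):
    """Probability that the hider should pick option 0, chosen so the
    seeker is indifferent between seeking option 0 and option 1."""
    (a, b), (c, d) = seeker_payoff  # a = seek 0 vs hide 0, b = seek 0 vs hide 1, ...
    # Indifference: a*p + b*(1-p) == c*p + d*(1-p)  =>  p = (d - b) / (a - b - c + d)
    return (d - b) / (a - b - c + d)

# Symmetric matching pennies: the seeker scores 1 on a match, 0 otherwise.
print(hider_equilibrium_mix([[1, 0], [0, 1]]))  # 0.5: hide in each spot equally often

# Asymmetric variant: a match on option 0 is worth double to the seeker,
# so the hider should pick option 0 only a third of the time.
print(hider_equilibrium_mix([[2, 0], [0, 1]]))
```

In the symmetric game the best strategy is a 50/50 coin flip; skew the rewards and the optimal frequencies shift, and playing at exactly those shifted frequencies is the limit the chimps approached.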

There are a few other things that I was surprised to learn this week during my research. Animals have been known to commit suicide. See also: Seven Cases of Animals that Committed Suicide. (Of course, the jury is still out as to whether these animals were aware of what it meant to end their own lives.) Apes have learned sign language, and one chimp used it to ask a zoo visitor to set him free (video here). Rats possess metacognitive abilities, as demonstrated in 2007 by researchers at the University of Georgia (the test subjects were found to understand whether they possessed knowledge of the answer to a test). Chimps and dolphins are considered self-aware for reasons including: they can anticipate the effects of their actions, and they recognize themselves in a mirror (i.e. a chimp whose face was painted would try to wipe the paint off upon seeing it in the mirror). With astonishing memory and reasoning abilities, a crow has solved a difficult eight-step puzzle, as detailed in this BBC special. In her book Animal Madness, Laurel Braitman writes about mental illness in animals–examples: depressed gorillas, compulsive parrots, and a cow with anger management issues. And there’s so much more information out there.

As a final challenge to your preconceptions: even plants can see, smell, feel, learn, and remember.

In conclusion, I am not a vegetarian and do not think that complete abstinence from consumption of animals is necessary or even healthy for most people. I would like to kill and pluck my own free-range chicken someday–I’m sure it would not only be healthier and happier than those chickens from Tyson Foods or similar, but I would also feel more gratitude as I ate it. Of course, I highly discourage the consumption of close relatives such as monkeys, for ethical and health reasons. I would not eat a dog or cat, and there are many other lines that I’ve personally drawn. But let’s be considerate, thankful, and respectful of the organisms that we use for sustenance, companionship, and research. We are not the only rightful inhabitants of this planet, after all.

Discussion Topic: Do you have a pet that seems quite intelligent at times? Name something it’s done that has surprised you.

Why Didn’t I Think of That?

It’s hard to believe that simple things can still be invented. Doesn’t anything useful require some type of engineering degree to conceive? Yet we learn time and time again that people are still doing it, and some of these people are quite young.

Last year, 16-year-old Ann Makosinski invented a flashlight powered by body heat. She employed Peltier tiles to convert the difference between the temperature of the hand on one side of the flashlight and the air temperature on the other side into electricity. This process relies on the thermoelectric effect: when a temperature gradient is applied to a material, mobile charge carriers diffuse from the hot side to the cold side. And that’s what electricity is: a flow of charge that can be harnessed to do work. But to give Ann credit, it’s not entirely that simple. Even her engineer father was amazed that she was able to manipulate the circuit to make it put out 20 millivolts.
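To make the thermoelectric idea concrete, here’s a back-of-the-envelope sketch. The Seebeck coefficient and couple count below are assumptions for illustration (ballpark figures for common bismuth-telluride-type tiles), not specs from Makosinski’s actual device:

```python
# Back-of-the-envelope sketch of the Seebeck (thermoelectric) effect.
# The numbers are illustrative assumptions, not the flashlight's real specs.

SEEBECK_PER_COUPLE = 400e-6  # V/K, rough value for one Bi2Te3 p-n couple

def open_circuit_voltage(n_couples, delta_t_kelvin):
    # V = N * S * dT: voltage scales with the number of couples wired in
    # series and the temperature difference across the tile.
    return n_couples * SEEBECK_PER_COUPLE * delta_t_kelvin

# A warm palm (~32 C) against ~22 C room air gives roughly a 10 K
# difference, though in practice much of it is lost to thermal resistance.
print(open_circuit_voltage(n_couples=10, delta_t_kelvin=10))  # ~0.04 V
```

The output scales linearly in both factors, which is why a lukewarm hand on a small tile yields only tens of millivolts, and why getting a usable circuit out of that tiny voltage was the genuinely hard part.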

I can find tons of examples like this by reading about the annual Google Science Fair finalists (of which Makosinski was one). Peruse that list each year and experience one or both of two feelings: “Why didn’t I think of that?” and “Can I please get a do-over on my high school years?”

The feature of this brief post is a program conceived by a team of scientists at M.I.T. that amplifies tiny variations in videos to extract more visual information, e.g. so you can monitor the heartbeat of an infant in the ICU without applying a physical EKG monitor. A Ph.D. student involved with the algorithm’s development, Michael Rubinstein, said it’s actually a very simple algorithm: it takes a spatial average of the pixel color intensities, amplifies their variations 100-fold, then replays the video with these new intensities. I just can’t get over what a good idea this is! And that it hadn’t been done yet–amazing, given how uncomplicated it is. Watch the video below (if for no other reason than it includes a motion-magnified clip of Christian Bale from Batman):

I guess this is just a good reminder that anyone can have a worthwhile, novel idea. Innovation never ends, and simplicity is often a virtue. If you think you’ve got a good one, believe in yourself and run with it.

Discussion Topic: Have you recently heard of an invention that made you wonder why you hadn’t thought of it?

Is Free Will a Thing?

I’d like to call the following a neurophysiosophical rant, i.e. a philosophical rant grounded in neurophysiology. Feel free to use the term.

If any of you have studied for the GRE (Graduate Record Examination) within the past several years, chances are you’ve run across a reading passage describing the findings of Benjamin Libet, a neurophysiologist from UC San Francisco. When I think about it, I’ve actually learned a lot of interesting things from GRE reading passages. But this one in particular has stuck with me, because ever since I read it I’ve been skeptical about the idea of free will.

There is a lengthy (understatement!) Wikipedia page dedicated to this historical debate, but if you aren’t particularly in the mood to tackle an 8,920-word scientific discourse (that’s equivalent to about 10% of a full-length novel), this post should suffice (however, if you are, you should seriously read it–it’s pretty cool). Here’s the meat:

Two scientists named Lüder Deecke and Hans Helmut Kornhuber discovered something called the readiness potential (RP) in 1964. RP quantifies the ramping up of electrical activity in the motor cortex and supplementary motor area of the brain that precedes a voluntary muscle movement. Benjamin Libet’s contribution in the 1980s was to show through experimentation that this RP signal precedes the conscious will to move. Of course, his methods were a bit crude: RP was measured with an electroencephalogram (EEG), movement was detected with an electromyograph (EMG), and first awareness of the will to move was noted by the test subject using an oscilloscope timer–as you can imagine, the time of this first will to move would be impossible to record exactly, due to the lag between the urge to act and the ability to note the location of the dot on the oscilloscope. Furthermore, if the subject follows Libet’s direct instructions to note when “s/he was first aware of the wish or urge to act,” s/he would not in fact be willfully deciding to move. In this case, perhaps the “urge” felt by the subjects is the RP.

I wouldn’t hang my hat on the results of this study. But a more recent study (2008) repeated the experiment with some modifications and extensions, including the use of fMRI and machine learning (multivariate pattern analysis) to predict which button, left or right, a test subject would choose to press. The authors write, “We found that the outcome of a decision can be encoded in brain activity of prefrontal and parietal cortex up to 10 seconds before it enters awareness.” However, the prediction accuracy was only about 60%. Moreover, this experiment still relied on the test subjects noting the time of first awareness of the urge to move. In my opinion, the result is therefore more intuitive than the alternative. I would be much more surprised if a person felt an “urge” to move and it was not the result of an electrical signal in the brain. On the other hand–is it even possible to experience the will to move in the absence of an urge of any kind? The neural motor default is inhibition, not excitation.
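For intuition about what “multivariate pattern analysis” means here, a toy decoder on synthetic data will do. Everything below (the voxel count, signal strength, noise level, and the simple nearest-centroid classifier) is made up for illustration; the real study used actual fMRI patterns and more sophisticated classifiers:

```python
import random

# Toy MVPA sketch: train a nearest-centroid decoder on noisy synthetic
# "voxel" patterns and see how well it predicts left vs right choices.
# All parameters here are illustrative assumptions, not the study's data.

random.seed(0)

def sample(label, n_voxels=20, signal=0.3):
    # Each choice nudges every voxel slightly; noise mostly drowns it out,
    # which is why real decoding accuracies are modest rather than perfect.
    shift = signal if label == "left" else -signal
    return [shift + random.gauss(0, 1) for _ in range(n_voxels)]

train = [(lab, sample(lab)) for lab in ["left", "right"] * 100]
test = [(lab, sample(lab)) for lab in ["left", "right"] * 50]

def centroid(patterns):
    return [sum(col) / len(col) for col in zip(*patterns)]

left_c = centroid([p for lab, p in train if lab == "left"])
right_c = centroid([p for lab, p in train if lab == "right"])

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(p):
    return "left" if dist2(p, left_c) < dist2(p, right_c) else "right"

acc = sum(predict(p) == lab for lab, p in test) / len(test)
print(acc)  # well above the 50% chance level, but not perfect
```

The point of the exercise: even a weak, distributed signal spread across many noisy measurements can be decoded above chance, which is exactly the statistical situation the 60% figure reflects.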

The issue of finding proper controls for a scientific experiment is pervasive and fascinating. This scientific question regarding free will is particularly troublesome because there are so many unanswered questions whose predicted answers necessarily contribute to the premise upon which the experiment is based. Such questions include: How might free will be observed if it does exist? If asked to exercise free will, does this conscious determination to make certain decisions at “random” intervals preclude the ability to act in a truly autonomous manner? (I hypothesize yes.) Under the right circumstances, can RP actually start after conscious will to move (and then, what are these circumstances)? Do different kinds of actions require different kinds of free will? What is free will, anyway?

To the last question, I present the Merriam-Webster definition: “Freedom of humans to make choices that are not determined by prior causes or by divine intervention.” Great–now what do these “prior causes” entail? Where philosophy meets science, the rabbit hole runs deep.

Okay, so here is one sub-question which some scientists, including Libet, have endeavored to answer: once started, can RP and/or the progression toward movement be stopped? Libet did observe that RP could be initiated without being followed by actual movement, implying that the subconscious decision to move was vetoed. Michael Egnor of Science News thinks that the buck, while perhaps not stopping, brakes to a school-zone-appropriate speed here, and Benjamin Libet would agree. According to Egnor, Libet wrote, “This kind of role for free will is actually in accord with religious and ethical strictures. These commonly advocate that you ‘control yourself.’ Most of the Ten Commandments are ‘do not’ orders.” Interestingly, Libet firmly believed in free will (or at least free won’t), maintaining that the ‘veto’ need not be neurophysiologically predetermined–in his words: “We would not need to view ourselves as machines that act in a manner completely controlled by the known physical laws.” But in my opinion, it’s very likely that there is a separate brain signal that competes with the readiness potential and overrides it. In fact, how could there not be? Our sensory neurons are constantly competing with each other to dominate our awareness, so why should the process of decision-making be any different? I don’t think anyone would argue that consciousness is an entity wholly distinct from the physical wiring of the brain. Thus, the question that I think becomes most pertinent in this debate is: does the competing “veto” signal exist physically, and if so, where and how does it arise?

Scientists Simone Kühn and Marcel Brass suspected that the veto also arises subconsciously. In 2009, they sought to answer this question. The premise: if the decision to veto is in fact an act of consciousness, test subjects should be able to distinguish the true permission of movement from mere impulse (i.e. failure to make a decision either way). I won’t go into the methods (read their paper if you’re interested), but the results showed that the volunteers could not make this critical distinction. Thus, the evidence more strongly supports a model in which the decision to veto an action also arises subconsciously.

One of the most compelling experiments on this subject, for me, is a 1990 study by Ammon and Gandevia in which the scientists were able to manipulate test subjects’ perception of their control over decisions. Summary: any given right-handed volunteer would normally choose to move his right hand 60% of the time; however, when the right hemisphere of his brain was stimulated using magnets (transcranial magnetic stimulation), he would choose to move his left hand 80% of the time. The incredible part: despite the external influence, subjects still believed that their choices of which hand to move had been made freely.

After reading all this literature, if I had to say which side I’m leaning toward, it’s definitely the one in which all our decisions result from an optimization calculation in the brain. It makes so much sense to me that we would integrate all our nature and nurture–observations, information, training and genetic tendencies–as parameters for some extremely complex multivariate nonlinear regression, in order to make the best possible decision. I mean, I can understand situations in which even suicide might be computed by the brain to be the least negative/painful option.

Am I okay with the idea that I may be no more than a biotic cyborg? I guess so, yeah. But there’s still a strong sense of personal responsibility. It’s more important than ever to stay as informed as I possibly can about all issues (moral and otherwise) that might directly affect my life, so that when the need to decide presents itself, my neural networks will make the best decision for me and for those around me.

Discussion Topic: What do you think? Is free will a thing?

Don’t Worry–You DO Make New Brain Cells

So, you know how your mom always told you to wear a helmet when riding your bike, because if you hit your head you’ll lose brain cells that you can never get back? Well, back in the 90s, researchers discovered that adult neural stem cells (NSCs, cells which can become new neurons) do in fact exist, and what’s more, the brain never stops developing and incorporating new neurons! However, these NSCs have remained shrouded in mystery for some time. The big questions have included:

  • “How are adult stem cells maintained in the adult?” and
  • “What are the factors that control adult stem cell proliferation and differentiation?”

The first question was answered by Duke researchers in 2011, when they discovered the cells that keep the brain’s stem cells neurogenic, or able to form new neurons. When NSCs are harvested for culture in a dish, they don’t form new neurons; instead they form a type of cell called an astrocyte, which, if permitted to proliferate unchecked, can lead to brain tumor formation. This has been a major impediment to cultivating neurons for replacement therapies to treat brain injuries. But in their Neuron journal article, the researchers explain that neighboring cells called ependymal cells produce proteins involved in a pathway that is required for neurogenesis. When the genes encoding these proteins were deleted from the ependyma, there was a dramatic depletion of neurogenesis. What’s special about these proteins? They instruct the ependymal cells to cluster around NSCs and morph into a pinwheel-like architecture, providing what seems to be critical structural support. The study’s senior author, Dr. Chay Kuo, said, “We believe these findings will have important implications for human therapy,” and how could they not? With this new knowledge, cultivating neurons in a lab dish to implant in a damaged brain is much more feasible. Woo hoo!

The second question was answered at least in part by…the same Duke researchers. In an advance online publication released on June 1, 2014 (that’s like two weeks ago, guys!) they describe the discovery of an entirely new kind of brain cell called a ChAT+ neuron within the subventricular zone (SVZ) of the brain, an area where neurogenesis occurs. This region is hot stuff right now. A recent Medical Xpress article speaks of experiments in rodents with stroke injury which demonstrate migration of SVZ cells into the neighboring striatum (just a subcortical part of the forebrain that helps coordinate motivation with motor activity–don’t freak out), apparently aiding in the healing process. Additionally, a recent Cell paper identifies the striatum as a destination for new interneurons (connector brain cells) from this area. What’s more, the researchers write that “postnatally generated striatal neurons are preferentially depleted in patients with Huntington’s disease.” If only we could figure out how to fix this! Now, thanks to the Duke scientists, we’re starting to put the puzzle together. The previously mentioned ChAT+ neurons were discovered to direct NSC differentiation–they use the neurotransmitter acetylcholine (ACh) to tell the stem cells to become neurons! When the ChAT+ neurons were stimulated by the researchers, there was an increase in neuroblast (dividing cells that will become neurons) formation. When they were inhibited, formation of neuroblasts was also inhibited. ChAT+ neurons are now a major target for medical research because the ability to stimulate new neuron formation will be invaluable in the treatment of traumatic brain injuries. The next step is to figure out what’s telling the ChAT+ cells to tell the stem cells to differentiate. What a beautiful and complex molecular bureaucracy!

Dr. Chay Kuo is on the ball lately, because he’s also behind some amazing recent discoveries about the brain’s response to injury. But first, some statistics. The CDC reports that in 2010, 2.5 million people in the U.S. suffered a traumatic brain injury. Additionally, 795,000 people a year suffer a stroke, a leading cause of death in the United States (it kills nearly 130,000 Americans each year). So it’s really exciting to be able to peek into the brain’s self-healing process, because the better we understand it, the better we can aid it medically.

What Kuo and colleagues discovered in this 2013 study was surprising, given the scientific understanding at the time. Besides neurons, neural stem cells can differentiate into several different types of brain cells, including astrocytes (see pic at right), as mentioned previously. When astrogenesis occurs prolifically, it can lead to malignant astrocytic gliomas (e.g. glioblastoma), which are the most invasive, aggressive and lethal type of intracranial tumor, especially due to their resistance to most current therapeutic approaches (read about this kind of brain cancer here). So it was pretty crazy when the Duke researchers found that instead of producing new neurons to replace the damaged ones, the brain’s initial response after severe trauma is to up-regulate production of a certain kind of astrocyte that migrates to the injured area to form a scar, stopping the bleeding and allowing the tissue to start recovering. Importantly, when the scientists experimentally prevented mouse neural stem cells from differentiating into these astrocytes after a brain injury, the result was hemorrhaging and failure of the region to heal. In fact, while this was an unexpected finding, it’s by no means counter-intuitive. Why would the brain produce new neurons to replace the dead ones while it is still bleeding? They wouldn’t have a chance.

So now we better understand the brain’s internal equivalent of a band-aid or scab, and we can use this knowledge to better treat brain traumas. According to this Medical Xpress article, the lead investigator Kuo commented, “We are very excited about this innate flexibility in neural stem cell behavior to know just what to do to help the brain after injury. Since bleeding in the brain after injury is a common and serious problem for patients, further research into this area may lead to effective therapies for accelerated brain recovery after injury.” And in the Nature letter itself the authors write, “We believe these results will have important implications for therapeutic interventions using transplanted and/or endogenous NSCs after brain injury, as well as astrocytic tumors that can arise from the SVZ niche.” Considering how many people are affected by brain trauma each year, this is a pretty big deal.

Of course, having said all that, you should still wear a helmet when you ride your bike.

Discussion Topic: What are some other things you thought were true ten years ago that you now know are completely false?

Gene Editing Update: Genetically Modified Primates Are Here!

Incredible progress in the world of gene editing:

  1. Researchers have successfully generated genetically-modified monkey babies!
    If you recall, in an earlier post I mentioned a few types of “molecular cursors” whose job is to find the right place in the DNA out of the whole genome (using a guide RNA to bring them to the right spot) in order to add or delete certain pieces of DNA. Well, using the newest cursor, CRISPR/Cas9, collaborators from Nanjing University, the Yunnan Key Laboratory of Primate Biomedical Research and Kunming Biomed International succeeded in making not one but two different precise genetic mutations at once. What’s more, they confirmed the absence of off-target mutations at other locations where the single guide RNAs (sgRNAs) could feasibly (albeit poorly) bind, which has been a concern with this particular editing system. The mutated embryos were then implanted into the uterus of a surrogate monkey mother and carried to term. See the highly technical “graphical abstract” at right (can’t you just picture an undergrad shrinking, copying and pasting those monkeys?), and a pic of the babies below. This is the first time that GM primates have been made–which means that we’re closer than ever (although still very far off, for ethical reasons) to the creation of genetically modified humans. But the truly revolutionary thing about this achievement is that now we can study human genetic diseases far more effectively, since humans are obviously more closely related to other primates than to rodents such as mice. And the fact that CRISPR/Cas9 successfully made simultaneous mutations in different genes is particularly exciting, because many human genetic diseases result from a combination of genetic mutations rather than a single one. Read an article on this paper here, or read the actual paper here.
  2. Fine-tuning of the editing process: A Nature Biotechnology paper was just published on May 18, 2014 (two weeks ago!) that analyzed and improved the accuracy of the CRISPR/Cas9 gene editing system. Many scientists (like the ones performing the monkey gene editing above) predict where Cas9 will bind based on where else the guide RNAs might hybridize with the organism’s DNA, and then test each possible off-target site for mutations one-by-freakin’-one. In contrast, the University of Virginia School of Medicine researchers in this study performed a genome-wide analysis of Cas9 binding using a technique called ChIP-Seq, or Chromatin Immunoprecipitation followed by high-throughput DNA sequencing. This accurately and without bias identified all the locations where Cas9 was binding, because it didn’t depend on the sequence of the sgRNA. Dr. Mazhar Adli and team were able to identify several factors that influence Cas9 binding (read the abstract if you’re interested to know what these factors are), which are already helping scientists everywhere design more effective and precise gene editing experiments. In addition, these researchers found that a certain variant of the Cas9 enzyme, while more difficult to use, is much more accurate and introduces far fewer unintended mutations than the wild-type enzyme does. Go science!
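The brute-force off-target search described in item 2, the approach ChIP-Seq improved upon, is easy to picture in code: slide the 20-nucleotide guide sequence along the genome and flag every site within a few mismatches. This is a naive sketch with made-up sequences; real tools also account for the PAM site next to the target and for insertions/deletions in the alignment:

```python
# Naive off-target scan: find genomic windows that differ from the 20-nt
# guide (sgRNA protospacer) by at most k mismatches. Sequences are made up.

def off_target_sites(genome, guide, max_mismatches=3):
    hits = []
    for i in range(len(genome) - len(guide) + 1):
        window = genome[i:i + len(guide)]
        mismatches = sum(a != b for a, b in zip(window, guide))
        if mismatches <= max_mismatches:
            hits.append((i, window, mismatches))
    return hits

guide = "GATTACAGATTACAGATTAC"  # hypothetical 20-nt protospacer
genome = "CCGATTACAGATTACAGATTACGGTTTGATTACAGATTACAGCTTAGGG"

# Finds the intended site (0 mismatches) plus one 2-mismatch near-miss
# that would need to be checked for unwanted edits:
for pos, seq, mm in off_target_sites(genome, guide, max_mismatches=2):
    print(pos, seq, mm)
```

Each near-miss this scan surfaces is a candidate off-target site to test individually, which is the “one-by-freakin’-one” workflow the unbiased ChIP-Seq approach replaces.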

Read more than you ever cared to know about CRISPR/Cas9 here.

Discussion Topic: Dr. Adli from the University of Virginia found that ChIP-Seq is highly effective at identifying all genomic locations that Cas9 binds during a given gene editing process. I can imagine this method one day being applied to IVF gene editing treatments–any cultured embryos showing the presence of undesired mutations would not be selected for transfer to the mother’s uterus. Such a reliable screening process would make gene editing far safer and more practical. Even if gene editing is someday proven safe and effective in all organisms, humans included, that’s not to say it will ever be permitted. Does the good of gene editing outweigh the evil? What laws could we put in place to make sure that it does?