Yesterday I attended a talk by Rebecca Saxe on how the brain learns to conceptualize other people’s thoughts. Having dabbled in neuroscience, I love learning about new developments. During the talk, Rebecca mentioned that the experimental techniques she uses, fMRI (a way to measure brain activity) and transcranial magnetic stimulation (a way to affect brain activity), have been around only for the last 10 years or so.
This brought me back to discussions we used to have in grad school whenever a significantly new paper or finding came out. A lot of new research is incremental, but every now and then there’s a leap. What enables a particular discovery to emerge when it does? Was it a technological hurdle that was overcome (e.g., Rebecca Saxe’s work could not have been done 15 years ago)? Or was it a conceptual one — people could have done it 10 years before, but it just didn’t occur to anyone? And what lights the spark for the person who makes the connection?
Although fMRI and transcranial magnetic stimulation are both exciting techniques, in some ways they are still crude. They can affect or measure regions of the brain, but not the individual circuits or connections within those regions. Using fMRI to learn about the brain is sort of like studying how a computer works by measuring which parts of the motherboard get hot when you run certain programs. You know the hot area is important, but you don’t know what’s going on inside it. What could we learn about the brain with more sophisticated methods? If we had that technology now, how would it change our understanding? Would we have to develop new math or new systems theory to explain it?
Rebecca’s talk is up here and starts around minute 51.