So the always-wise Big Easy pointed me to this particular article:
I’m no particular friend of the heavy-formalism crowd he’s poking fun at, but I think there is perhaps another explanation for the fact that a lot of research is not particularly useful.
I sort of think of research as a government-funded place to produce really great ideas. Pretty great ideas, like websites that let you build other websites, are supposed to be accomplished by enterprising people in industry. Really great ideas might be something like a brain-scanning device that lets you build websites.
The bad news about these ideas is that they are probably impossible. But of course it’s difficult to be 100% sure until you build a brain scanner and hook it up to Microsoft FrontPage. Actually even then you wouldn’t be 100% sure because maybe you need a slightly different version of FrontPage, or a slightly different kind of brain scanner.
What this means is that what academics seem to produce, from an external perspective, is bad ideas. If an idea has to be 99% unlikely to work before industry won't touch it, that implies that 99 out of 100 academic papers will describe things that don't work.
Of course, you can’t really say on a grant application that you’ve done 10 things and produced 10 uniformly uninteresting results. So some of your work is even more boring stuff where you study things, to attempt to reduce the risk of producing bad ideas. Like if you’re an AI guy studying natural language, you might painstakingly analyse 100 lunchroom conversations for pronouns, just to see whether the many potentially great language-processing algorithms that can’t handle pronouns well might actually be great ideas. And then you might publish a paper whose exciting finding is “pronoun usage of category X occurs in less than 5% of conversations”. Woo! But most likely, you’ll find that people use category X 55% of the time, which is the same thing as saying that natural language processing is just as hard as everybody thought.
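The frequency study described above is mechanically trivial; all the work is in the annotation. As a purely hypothetical sketch (the toy conversations and the "category X" predicate here are stand-ins I've made up, not anything from a real corpus):

```python
# Hypothetical sketch of the pronoun-frequency study described above.
# Each "conversation" is a list of utterances; category_x is a stand-in
# predicate for whichever pronoun usage the researcher cares about.

def fraction_with_category_x(conversations, category_x):
    """Return the fraction of conversations containing at least one
    utterance matching the category-X predicate."""
    if not conversations:
        return 0.0
    hits = sum(1 for conv in conversations
               if any(category_x(utt) for utt in conv))
    return hits / len(conversations)

# Toy stand-in predicate: flag any utterance containing the pronoun "it".
def toy_category_x(utterance):
    return " it " in f" {utterance.lower()} "

convs = [
    ["pass the salt", "sure"],
    ["did you see it", "yeah, amazing"],
    ["nice weather", "very"],
    ["put it over there"],
]
print(fraction_with_category_x(convs, toy_category_x))  # 0.5
```

If that number came out under 5%, the pronoun-blind algorithms look promising; at 55%, they don't.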
Researchers are always looking for huge wins, which means that even ideas that turn out to be pretty good are discarded. Say you are curious about type-checking, so you build some better type checking for C, and it turns out to make C a little bit better. From an academic’s perspective, that’s pretty much the same as a failure, so it’s no wonder you don’t spend a lot of time publicizing it, even though it might be a genuinely worthwhile idea. In some sense, this is not a mistake: your job is to search the idea space quickly, not to capitalize on every OK idea you run across. So you go on and find 5 other type-checking things that are even less useful. Or maybe you develop a formalism of your first OK idea on the theory that the formalism might reveal some groundbreaking abstraction you could use to reliably produce neat type-checking ideas.
So that’s how I think the system is supposed to work. Research produces craploads of bad ideas in the hopes of someday producing some good ideas. Now is this a good plan? I dunno.
One thing that is obvious is that it is a profoundly depressing system for the people involved. The freedom to come up with your own ideas is very much tempered by the knowledge that in all likelihood the idea will be a failure (where failure is defined as a lack of insane success). And once you get the failure, there’s the academic damage control of spinning it to imply that a huge success is just around the corner (and of course, your job depends on this lie, even though everyone realises that you have no idea whether success is really coming).
Another good question: in a field where you don’t need fancy stuff like supercolliders, aren’t industry and hobbyists sufficiently incentivized to do this idea search without the government kicking in money? Well, I suppose that depends on what you think the structure of idea-space looks like, which is surely a difficult thing to characterize with confidence.