For the last several weeks I’ve been thinking about a question which I’m almost afraid to ask, because if taken too literally, it could sound like I’m advocating restrictions on freedom of inquiry. Nevertheless, I think it’s an interesting question, so I’m going to ask it here anyway.
So here goes:
Whereas technology exerts an enormous influence on culture, to the point of being one of the major driving forces of history, and
whereas these influences are not always positive (or, at least, not always desired), and
whereas a technology, once having been invented, cannot be un-invented, and
whereas the dissemination of new technologies, unlike other major driving forces of history, is almost entirely under human control,
why should there not be some process in place for determining, based on the desirability of the likely effects of a new technology upon society, whether or not the new technology should be disseminated?
Now, I must admit, I feel a little bit dirty even for asking this question; indeed, it seems almost heretical, particularly to someone of my age group and scientific background: “information wants to be free” and all of that. But let’s at least entertain the idea as a thought experiment. Right now, the unwritten rule is, essentially, that any new technology upon which someone can turn a profit gets spread around. But given that any new technology can have profound effects upon a society, ranging well beyond what its inventor intended or what it was marketed for, and given that we all have to live in this society whatever the consequences, is it not right, based on the principles of democracy, that we should get some say over the process? Would this be any worse than living in a perpetual series of randomly selected social experiments with no controls?
Indeed, even hardened free marketeers agree, whether consciously or otherwise, that the line needs to be drawn somewhere. Few and far between, mercifully, are those who think that you should be able to walk into any corner drugstore and stock up on nuclear weapons*. So if we are agreed that avoiding an otherwise statistically certain nuclear holocaust is worth a little regulation, how about an environmental catastrophe? Why should we not, after all, use legislation to forbid the worst of the greenhouse-gas-emitting technologies? Phase out engines below a certain degree of efficiency and mandate the replacement of carbon-based power plants. This one attracts a lot more controversy than the first, though it shouldn’t. And of course, if you’re going to guard against nuclear and environmental catastrophes, biological catastrophes certainly seem like fair game as well. Antibiotics should be rationed, so as to slow the evolution of “superbugs.” So far, I don’t think I’ve said anything that any reasonable person should find particularly outrageous.
Problems only really start to emerge once we widen the circle a bit, looking beyond outcomes which are strictly and almost certainly catastrophic to those which are merely undesirable. Here, the question naturally becomes: undesirable to whom? And here is where the waters start to become muddied, because things like cultural value systems and entrenched power structures enter into the equation in a much more major way**. Imagine, for example, that in the early 1960s there had been some sort of board in place in the United States to review new inventions and give them the stamp of legislative approval based on their projected cultural impact; now, given the likely composition of this board (namely: men), how likely do you suppose they would have been to allow the dissemination of the combined oral contraceptive pill when it came across their desk? It certainly wouldn’t have been a sure thing. And can you imagine what the last 50 years would have been like without the sexual revolution? Now, of course, it’s entirely possible that the pill could have passed anyway, if its manufacturer had put enough pressure on the legislative process, which, of course, only highlights another of the system’s flaws.
Of course, all of this overlooks one key point: in most cases, it is impossible, or at least very difficult, to accurately predict all of the societal ramifications of a new technology. As such, debates over any new technology would come to be dominated by wild speculation, probably of the most sensational kind.
All in all, then, the idea is a bad one; while it is reasonable in the most extreme (i.e., likely catastrophic) cases, there’s just too much that could go wrong with it. If we lived in a perfect society where everyone had roughly the same amount of say in determining what our culture’s “values” were, and if there were some empirically verifiable method for predicting the likely outcomes of introducing any given new technology, and if the whole thing were run by people who were utterly incorruptible, then it might be different; but in any society populated by humans, it’s probably best to leave the boundary strictly at things like “preventing nuclear annihilation” or “stopping global warming.”
Science, as Isaac Asimov noted, may be gathering knowledge faster than society gathers wisdom, but there seems to be very little on the regulatory end that can be done to mitigate that.
*Although I don’t want to give the NRA any ideas for their next policy initiative.
**Not that they didn’t enter into it in the first place, mind you. The wealth of the petroleum industry is by far the largest reason why so little action has been taken on climate change.