Tuesday, September 16, 2008

the (likely) near singularity

For my first entry under "issues" (finally!), I'm going to talk about something I'm almost completely sure no other candidate will talk about at all.

Almost everyone now recognizes that technology is moving at a very fast pace. And some, given this recognition, are pointing out that we should be doing more to prepare ourselves for the new kinds of dangers this poses.

But what almost no one recognizes is that there is a very compelling argument that technology, over the coming decades, will actually move a hell of a lot faster. I never even thought about this until I read Ray Kurzweil's THE SINGULARITY IS NEAR (Viking 2005), which is, in large part, a book-length argument for this conclusion. But the core idea is actually rather simple.

Before turning to the core idea of the argument, it will help to remind ourselves of the idea of exponential growth. LINEAR GROWTH is growth by a constant ADDITION FACTOR per unit of time. (Strictly, the unit with respect to which we measure growth need not be time, but, except for an analogy/example to come very soon, I think it'll be easiest if I put things particularly in terms of time.) In contrast, EXPONENTIAL GROWTH is growth by a constant MULTIPLICATION FACTOR per unit of time. In the beginning, linear growth and exponential growth look rather similar, but as things progress, exponential growth begins to outpace linear growth in a rather extraordinary way.

An example (THE stock example) can illustrate both facts. Consider a standard 8x8 chessboard, which thus has a total of 64 squares. Put one penny on one of the squares in a corner. In both cases (linear growth and exponential growth), we will move square by square, increasing the amount of money on each square, until we have hit the 64th square. (It will help to imagine that we fill up one row, or column, before moving on to the next.)

For linear growth, our money will grow by a constant addition factor, let's say +2. Since there is one penny on our first square, there will be three pennies on our second square, five pennies on our third square, and so on. This means, at the end of our first row, we will have 15 pennies (15 cents) on square 8. And this means, at the final end of all of this, we will have 127 pennies ($1.27) on square 64.

In contrast, for exponential growth, our money will grow by a constant multiplication factor, let's say x2. Since there is one penny on our first square, there will be two pennies on our second square, four pennies on our third square, and so on. This means, at the end of our first row, we will have 128 pennies ($1.28) on square 8. Although quite a bit more than 15 cents (= what we have on square 8 with linear growth), it's not crazily different, and the initial stages were very similar (1, 3, 5, 7 for linear growth; 1, 2, 4, 8 for exponential growth). However, at the final end of all of this exponential growth, we will have 9,223,372,036,854,775,808 (2 to the 63rd power) pennies ($92,233,720,368,547,758.08). I'm assuming everyone can agree this is amazingly gigantic (and shockingly larger than 127 pennies = $1.27).

(For the record, 2 to the 63rd power is exactly 9,223,372,036,854,775,808; standard calculators won't display all nineteen digits, but any arbitrary-precision tool will confirm them. As for the basic magnitude: that is just over nine quintillion pennies, or roughly ninety-two quadrillion dollars. It goes: thousand, million, billion, trillion, quadrillion, quintillion.)
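For anyone who wants to verify these figures for themselves, here is a minimal sketch of the two growth patterns in Python (my own illustration, not anything from Kurzweil's book). Python's integers have arbitrary precision, so the exact nineteen-digit figure for square 64 comes out directly, with no hand calculation needed:

```python
# Chessboard illustration: linear (+2) vs. exponential (x2) growth in pennies.
linear, exponential = 1, 1  # one penny on square 1 in each scenario
for square in range(2, 65):
    linear += 2        # constant addition factor: +2 pennies per square
    exponential *= 2   # constant multiplication factor: x2 per square
    if square in (8, 64):
        print(f"square {square}: linear = {linear}, exponential = {exponential}")

# square 8:  linear = 15,  exponential = 128
# square 64: linear = 127, exponential = 9223372036854775808 (i.e., 2**63)
```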

Having reminded ourselves of the power of exponential growth, we can now understand the three claims that together make up the core idea of Kurzweil's argument.

FIRST, up to now, technology has been growing at an exponential rate.

As I understand it, almost anyone with decent knowledge in (as I'll here put it) technology studies will grant this fact. This is mostly a matter of empirical historical knowledge. But there is also a theoretical explanation to back it: the latest technology, which works better and faster than the previous generation, can be used in creating the next generation; so technology builds on itself in cumulative fashion.

SECOND, there is very good reason to think technology will continue growing at an exponential rate.

Some commentators have offered reasons for thinking that this is not true, but Kurzweil very carefully and elaborately offers rebuttals to all of these, almost always in part by way of reference to new technologies already in development (often in the prototype stage, though sometimes only in the conceptual development stage).

THIRD, there is very good reason to think we are about to approach the "knee" of the curve that represents all this exponential growth of technology from the past to the current time to the future. This "knee" is what Kurzweil and others call "the Singularity" - it is the transition period from slow exponential growth in early stages to extraordinarily rapid exponential growth in later stages.

When represented visually/graphically, every exponential curve has a "knee": a short transition from slow growth (graphically: low slope) to rapid growth (graphically: high slope). For actual graphs of a helpful sort, see some of the early pages of Kurzweil's book.

(This part of Kurzweil's argument is less clear. But, given that the multiplication factor involved in [perfectly regular] exponential growth is constant, whether or not we are at the so-called "knee", I really don't see that Kurzweil needs this third claim. That would be especially nice since, as far as I can tell, the perceived location of the so-called "knee" depends on arbitrary scaling factors when we represent the exponential curve graphically (see the short sketch just below). I'm well aware that some mathematical background is needed to make full sense of this parenthetical comment, so if it seems obscure to you, just ignore it. You can do that and still fully digest the basic gist of what Kurzweil is getting at.)
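To make the scaling point in that parenthetical concrete, here is a small sketch (again my own illustration, in Python). It defines the "knee" of the curve y = scale * 2^x as the first x at which the slope reaches 1, and shows that merely changing the units of y (pennies versus dollars) relocates the knee, even though it is the very same growth process:

```python
import math

# Slope of y = scale * 2**x is scale * ln(2) * 2**x.
# Setting the slope equal to 1 and solving for x gives the "knee":
def knee(scale):
    return math.log2(1 / (scale * math.log(2)))

print(knee(1.0))   # y measured in pennies: knee near x = 0.53
print(knee(0.01))  # same curve, y measured in dollars: knee near x = 7.17
# Same exponential, different units, different "knee" -- so the knee's
# location is an artifact of how we draw the curve, not of the growth itself.
```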

Given the speed of exponential growth, and given some of the examples Kurzweil gives of future technologies already in various stages of development, I claim that the US government should be doing a hell of a lot more in preparation for protecting US citizens (and all people around the globe more generally) from the following worries and dangers:

1. One to two decades: easily biologically engineered pathogens.

2. Two to four decades: easily synthetically engineered pathogenic nanobots.

3. Three to six decades: evil (or at least un-human-concerned) artificial intelligence at human-level or beyond-human-level intelligence.

These three are actually three that Kurzweil himself pays close attention to near the end of his book. (I would give you the exact chapter except for the fact that my copy is currently out on loan to a friend.)

But there are other things to worry about:

4. Privacy: I here don't have in mind US e-passports, which are very easily read by not-necessarily-benevolent-or-even-neutral third parties (as reported in Scientific American, Sep 2008, pp. 72-77, where all sorts of very important privacy concerns are discussed in very helpful detail), though such things are definitely a big concern that the US government is mostly ignoring.

Rather, I have in mind the following question: is the US government doing anything now to prepare for the possibility, for example, that within one to three decades super-spy-satellites will allow the government (or other agents/entities who have their own such satellite, have paid for rental time on one, or have hacked into one, etc.) to know virtually anything it wants about anyone at any time? Not at all.

5. The disappearance of sleep: is the US government doing anything now to prepare for the possibility that within one to three decades, scientists will have figured out ways to make it so that humans don't need to sleep, drastically changing the economy and human nightlife? Not at all.

6. Eternal life: is the US government doing anything to prepare for the possibility that within one decade (no joke - if you doubt, do some of your own research into the science of aging) the knowledge will exist to indefinitely extend human life (and canine life, and feline life, etc.), and the extreme likelihood that within three decades this knowledge will exist? Not at all.

7. Pleasure machines: is the US government doing anything to prepare for the likelihood that within one to three decades, droves of people will be spending enormous amounts of time hooking themselves up to machines which either directly produce pleasure, or indirectly do so by way of providing the machine-user with fully-life-like experiences of the user's choosing (these could have any nature: sexual, athletic, musical, intellectual, even political)? Not at all.

8. Boundless energy and cheap replication machines: is the US government doing anything to prepare for the possibility that within this century economies (or the one global economy) will be shockingly different in virtue of the fact that -- given the existence of boundless energy (say as the product of ultra-efficient and ultra-cheap solar panels, or as the product of genuinely effective fusion reactors) and cheap replication machines in the spirit of Star Trek (no joke, check out what Kurzweil has to say about some nanotechnology applications that are already in fairly detailed stages of conceptual development) -- almost no one will need to work in any traditional sense? Not at all.

9. Genetic engineering: is the US government doing anything now to prepare for the likelihood that within one to two decades the knowledge will exist allowing people to have their children genetically engineered to have all sorts of features (including "non-natural" ones from the relatively minimal, like purple irises, to the quite dramatic, like an extra functional appendage)? Not at all.

10. Non-evil, non-unconcerned artificial intelligence: is the US government doing anything now to prepare for questions that will need to be answered once we have, in the next two to five decades, produced near-human-like, human-like, or beyond-human-like artificial intelligence? Not at all.

And here I don't have in mind (as in 3. above) evil artificial intelligence and the like, but other questions: Will people be allowed to order/purchase tailor-made artificial intelligences? Will artificial intelligences be given the right to vote, or other standard political and economic rights? Will artificial intelligences themselves be allowed to work on producing even smarter artificial intelligences (or artificial intelligences that are less obviously artificial - think the new Battlestar Galactica)?

It may seem that the answers to some of these questions are easy, but I think that reaction would be a mistake. It's one thing in the abstract to say, e.g., that we should not give artificial intelligences the right to vote; it's another thing to say that when they have been given fully-human-like synthetic bodies and have gathered "in the town square" to engage in political protest of a very traditional and compelling sort. Likewise, it's one thing in the abstract to say we should not allow people to order tailor-made artificial intelligences with fully-human-like synthetic bodies, and another thing to stick with this answer in practice when droves of people, presented with the actual technological and economic possibility, want to place an order (given their failed romantic lives, or whatever) - especially when you might very well be among those tempted by such a possibility (or at least among those who find a part of themselves really wanting to exploit it, and so place such an order).

The way I have posed these questions on my list might make it seem that I am anti-technology or, at least, that I wish technology would not grow any further. This, in fact, is not at all the case. In trying to argue that the government is doing pathetically little to prepare ourselves for such technological growth, and the worries and dangers it poses, I have been emphasizing the negative possibilities that, if the government were to get its ass in gear, it could help protect us from. But all sorts of great things are very likely also on their way. In fact, many people, at least at a personal level, would welcome many of the developments on my list (such as lack of a need to sleep, eternal life, pleasure machines, etc.). And there are all sorts of other less dramatic future likelihoods: an all-purpose way to fight viral infections, self-driving cars, memory chips implanted directly in the brain, etc.

In fact, I've suspected for quite a while now that it will be technology, not the "human spirit", that will save us from various ills. This, I hope it is clear, is not to say that there aren't lots of people trying very hard to make the world a better place. And it is not to say that humans COULDN'T make the world a better place (e.g., completely eradicate starvation) without newer, fancier technology. (In fact, as I understand it, pretty much all the evidence points in the direction of the view that starvation, right now, is completely eradicable.) The problem is that the current overall social and economic structures of the globe prevent these forces for good (whether intentionally or not, at any individual point in the structure) from properly mobilizing themselves and existing resources to make it happen.

Even if Kurzweil is wrong, once you read his book, it is plainly obvious that, rather than being a crackpot, he is a brilliant thinker and has at minimum put together a compelling argument. (I am in fact right now voice-dictating this on voice-recognition software that descends from software he initially developed. In particular, I am using Dragon NaturallySpeaking 9, almost universally recognized as the best voice-recognition software on the market right now.)

And, when it comes to prediction of the future, where there can be no certainties, one has got to pay attention to likelihoods. And in these terms, I find it hard to deny that Kurzweil has presented an extremely strong argument.

He is not alone in predicting incredibly drastic near-future growth of technology. Much of the disagreement among techno-experts themselves simply concerns how optimistic or pessimistic we should be about the results of this growth. My whole point in this blog entry has been to say that the US government, completely contrary to what it is now doing (since it is now doing virtually nothing), should do all it can to make positive results as likely as possible.

In any case, I am telling you that, as president, I will do as much as I can to rectify this complete dearth of government action (indeed, government attention) in this respect. The other candidates, in contrast, will say absolutely nothing about any of this.


ADDED 9/19/08:

A BSU colleague of mine quite rightly pointed out to me that the exponential growth I described above is actually only a special case of exponential growth, what is usually called "geometric growth". Not being in physical possession of my copy of Kurzweil's book at this moment, I cannot check and let you know which term (or whether both) Kurzweil himself uses.

Basically, for the mathematically inclined, exponential growth in general is growth of the form y = b^(cx+d), where b is a constant (one which can always be taken to be e, given compensating adjustments to the constants c and d). Geometric growth is the special case where c=1 and d=0. Usual cases of geometric growth (such as the chessboard illustration above) set b=2. In such cases, we can't get an equivalent function by letting b=e (or anything other than 2), since insisting that c=1 and d=0 disallows the compensating adjustments.
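For a concrete check of the base-change point, here is a quick numerical sketch (my own, in Python). It verifies that b^(cx+d) equals e raised to the (c*ln b)x + d*ln b power for a handful of values of x, which is exactly why the base b is arbitrary once compensating adjustments to c and d are allowed:

```python
import math

b, c, d = 2.0, 3.0, 1.5  # arbitrary sample constants
for x in [0.0, 1.0, 2.7, 10.0]:
    lhs = b ** (c * x + d)
    rhs = math.exp(c * math.log(b) * x + d * math.log(b))
    assert math.isclose(lhs, rhs)  # same function, just rewritten in base e
print("base-2 and base-e forms agree")
```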

Actually, treating functions simply as something like sets of ordered pairs of numbers (or of mathematical entities more generally), what I just said makes me realize that we could distinguish two notions. Letting "gg" stand for "geometrical growth/growing", these two notions are: gg-functions, and gg-representations of functions. Then, at least one way of understanding the former in terms of the latter is "having at least one gg-representation". But I won't pursue this any further here. Except to say that perhaps a gg-function can be equivalently but independently defined as "an exponential function where d=0". This seems intuitively right to me, but I've done no checking, and my mathematical intuitions about such matters are surely to be treated as possessing almost no trustworthiness, given how long it's been and what it was even then.

3 comments:

Nathan Hall said...

Interesting post, and a great way to lead off, as it emphasizes the philosophical bent you bring to the campaign as compared with the other candidates. Personally, I welcome the singularity, and think it will do much more good than harm. I think most efforts at government interference with it will probably do more harm than good (because government--unless you're elected, of course!--is likely to react fearfully and incompetently, not because no good could be done).

I would point out that, although neither McCain nor Obama is likely to address this topic, the government is not wholly unaware of the singularity. For example, the President's Council on Bioethics produced a report which is quite skeptical of the desirability of emerging life-extension techniques. I would not characterize their report as stupid. It points out the obvious benefits of living longer, healthier lives, then bizarrely claims that these obvious benefits call for especially careful considerations of possible negative consequences. After some plausible, but extremely speculative, ruminations on harm that could conceivably come to society were lifespans to dramatically increase, they conclude with: "Is the purpose of medicine and biotechnology, in principle, to let us live endless, painless lives of perfect bliss? Or is their purpose rather to let us live out the humanly full span of life within the edifying limits and constraints of humanity's grasp and power? As that grasp expands, and that power increases, these fundamental questions of human purposes and ends become more and more important, and finding the proper ways to think about them becomes more vital but more difficult. The techniques themselves will not answer these questions for us, and ignoring the questions will not make them go away, even if we lived forever."

As I said, not stupid. There are, indeed, difficult questions that we will have to face. Nevertheless, this article frightens me, because it raises the spectre of some bureaucrats trying to determine the value of life for us. It raises the spectre of government officials declaring, according to all their munificent wisdom, that lifespans shall not be extended beyond X, even though different individuals may find completely different, and completely valid, answers to the challenging ethical questions raised above. The Council's skepticism about the ethics of life-extension isn't stupid; it's just arrogant and deadly.

I am afraid that Leon Kass wants to murder me in fifty years. If you can promise that your administration won't try to discourage anti-aging research, it'll go a long way toward winning my vote. And if you aren't elected, I'll gladly vote for your great-great-grandchild in 150 years. :)

Nathan Hall said...

I should preview my comments before submitting them; I'm sorry it was worded so clumsily, and more sorry that I left out the paragraph in which the Council's objections to life-extension are summarized. This probably makes it seem rather flippant when I dismiss the idea that the government could rightly impose a one-size-fits-all answer to the "questions raised above." For a full and fair summary of the Council's worries, one should of course consult their report. Since I don't want to hijack Dr. Kierland's thread completely, a more thorough explanation of exactly why I find their arguments against longevity unpersuasive will appear on my own blog in a few days, at which time I'll link to it from here. (I know, I know, you're on the edge of your seat.) Anyway, let me restate the question for Dr. Kierland: should the government encourage or discourage longevity research which may culminate in near-immortality?

Brian Kierland said...

Thanks for the comments.

With you and Kurzweil, my best guess is also that the singularity will overall do much more good than harm.

Thanks for the point that the federal government is in fact not right now doing absolutely nothing to prepare for near-future dramatic technological progress. Still, I think the federal government should be doing a lot more. Not by way of trying to make it not happen (with Kurzweil, I don't even think the government could succeed in doing that), but rather to prepare ourselves as well as possible to minimize/eliminate any potential negative consequences. While, again, I think the singularity will be overall quite good, that's just overall - that's completely consistent with some negative consequences. For example, although I myself would very much like to live billions of years (so long as we attach an adequate quality of life to that), there might be some real problems if we don't prepare ourselves ahead of time for the state of affairs where suddenly almost everyone lives indefinitely.

Also thanks for the particular reference to the President's Council on Bioethics. I've read some bits and pieces of that, and I agree, it's far from entirely stupid. But the authors do seem, when they reach the end of the discussions, nonetheless to offer fairly conservative (both in the old-fashioned and contemporary sense) views.

Leon Kass is to be feared. He is one of those individuals who is a very skilled writer in the sense that he can construct many turns of phrase that sound profound and appeal to certain reactionary emotions. This can make it seem like he's giving a good argument, when in fact he has a bad argument or no argument at all. My own view (having read several of his papers, but far from everything he's written) is that he's quite a bad philosopher. But as liberal proponents of views on such issues, if we want to be fully responsible thinkers, we should seek out more worthy and respectable opponents. Maybe Oderberg or Velleman? Or Foot, or Finnis, or Brody? I'll do some more thinking about this, and perhaps get back to you.

My administration would most certainly not attempt to hinder anti-aging research. Will it attempt to promote it? That's a good question, one which I had not thought about and need more time to think about.

One point to consider is this. From a utilitarian perspective, it's far from obvious that it's better to spend money giving an already existing individual 100 more years of high quality life than to spend the same money giving a new individual (one who wouldn't be born and so exist in the first scenario) a 100 year long high-quality life. (For this kind of issue, it is plausibly very important whether you are a classical pleasure/pain utilitarian, a more modern preference utilitarian, or some other kind of utilitarian altogether.)

Another point to consider (which works nicely with the previous point) is that, from a utilitarian perspective, there would seem to be many more-pressing projects. So although, as should be clear from other blog entries, I don't have any great fear of high taxation, there will have to be a limit somewhere, and we may reach our limit before we reach the project of supporting anti-aging research.

Yes, please do give the link to your own blog.