William Kahan, 1989 ACM Turing Award Recipient

I'm Thomas Haigh and this is an oral history interview conducted with Professor William Kahan. It's the 12th of March, 2016, and we're in Professor Kahan's house in Berkeley, California. Professor Kahan, thank you very much for taking part in this interview, which is part of the ACM's series of video oral history interviews with Turing Award winners. I should also mention, as you know, we did an interview back in 2005. That ran to almost 24 hours of tape and was very detailed. So what we'll be doing here is a shorter, more accessible interview format where we'll be recapping for a broad audience some of the key points of your career. To begin with, I wonder if you can tell us something about your early life, education, and family background.

I was lucky to have parents who valued education very highly. It had been denied them in their European homelands, Poland and what is now Moldova. They sacrificed a good deal of their comfort and pleasures to put us into Forest Hill Village, which was a sub-municipality in Toronto that had reputedly the best public school system. Our house was three doors from the boundary of this Forest Hill Village. Our backyard was separated from the railway tracks by a creek, so we were sort of out in the boonies of Forest Hill Village. But that really made us, my brother and me, made us what we are. It was a terrific education and, I thought, to give you some idea, I got skipped two grades. When I entered Grade 9, I had skipped Grade 8 and I forget which grade before it. So they thought I might not be able to handle the rigors of an academic program and put me into the tradesman's program. Which meant that while some of my contemporaries were taking classes in Latin, I was taking classes in shop. I was learning how to use saws and drill presses and lathes and a little bit of brazing, a tiny bit of welding. And that suited me perfectly. I loved it. And in fact I've got something here that I made when I was 12 years old. I've taken off a little cap that prevents it from scratching my shirt. Now you see, this looks like a screwdriver, doesn't it?

It does.

But it's not. It's three screwdrivers. If I unscrew it here, I then take out… there's another screwdriver. And if I unscrew that, why, there's another screwdriver. Really handy for tightening the screws on my glasses, which I'm not wearing today. I've shown you this because it will give you some idea that although I'm a mathematician, I was pretty handy with my hands and that is the way I've been all my life. Repair my own household appliances, usually can repair the plumbing, the cars, and so on. I don't think of life as something that should be lived just as a mathematician or just as an engineer. And I learned that at school. Well, after Grade 9, they realized I could handle the academic program, so they stuck me into Grade 10 with the academic class, and I had to catch up on Latin. I loved Latin. It's a very epigrammatic language, very different from my normal style. And this interest continued when I entered university. Now I won a scholarship…

Let's backtrack a little bit with the family. So your parents moved to, emigrated to Canada?

Oh, they emigrated to Canada in the '20s in order to escape pogroms and really vicious anti-Semitism in Poland, which is a foretaste really of what was to come when Germany invaded. They were in many ways ahead of their time.
My mother designed dresses for girls to work as receptionists or even female doctors, which were becoming more and more numerous. And my father ran the factory that made the dresses, and he bought the cloth and went out to sell them. They were ahead of their time because, for instance, they refused to use the epithets that are sometimes used now. We were not to use words like "nigger" or "shiksa" or disparaging terms for Italians or whatever. [phone rings] Oh, excuse me. I think Sheila will get that. Yes. The idea that women were disadvantaged in the workplace was planted firmly because that happened to my mother. That's why she ended up working with my father rather than for other, more established outfits.

Because they valued education, as I said, we went to this very good school system. There was a particular friend, David Gauthier, who was the son of one of the teachers there. That teacher is the one who, among other things, taught shop. That's where I made this. His son, who was in the same year as me there, he was outstanding, just a marvel to behold. And after Grade 13, which meant 13 in Canada at the time instead of the 12 that you have here, he won the province's most prestigious scholarship, and I think I got the second or third. But it wasn't tenable in a professional school, so that meant that I couldn't use it to go to engineering school, and instead I decided, "If the government's going to pay my way, why don't I learn something hard like mathematics?"

So you think otherwise you would have gone to engineering school?

Oh yes. Well, that was my mindset at the time. I expected to be an electronic engineer because that was my hobby.

Yes. So you've already spoken about enjoying crafting mechanical things. What was your first exposure to electronics?

Oh. Well, when I was 11, I wound wire around a cereal box and attached it with a tuning capacitor and a little crystal, the crystals in those days had these little feelers so you could find a sensitive spot, and had earphones and a long wire antenna. That was the first radio that I built. But I'd already understood about electricity, and I read a couple of books so that I became well acquainted with things like the vacuum tubes of that era and lots of other electronic and electrical things. But that was just a hobby. That was just something I did for fun. I used to repair things around the house, repaired my bicycle quite often. So when I entered college…

Now we'll just give people a sense of the chronology.

Say again?

Just to give people a sense of the chronology, I believe that was in 1950.

That's correct. The autumn of 1950. That was also the year when I suggested to Sheila that she might keep open the possibility that we might get together more firmly, but I didn't want her to feel trapped. I didn't say, "We'll go steady" or anything, just let her know I'm interested.

So had you been in the same high school?

We'd been in the same grade school briefly, but she went to a different high school than I did. We did go to the same Jewish synagogue youth group. And she was gorgeous. She just didn't know it. And she was genuine and didn't try to pretend that she was silly or helpless, as so many girls did. Their mothers told them, "Don't let the boys know how clever you are or intelligent, because…" Sheila wasn't like that.

And when I started in college, I started in math, physics, and chemistry, as did David Gauthier. But he decided that wasn't for him and he switched to philosophy. About 175 people started out in the first class. It was a calculus class. Of those, 27 graduated in math, physics, or chemistry.
The others had to go somewhere else and take other classes. That's an attrition ratio that nowadays would be considered scandalous, but then it was considered quite normal.

Well, it was shortly after that that I got a summer job. I think that was the summer of '51. I'm not quite sure. It may have been the summer of '52. I got a job through what amounted to a classroom acquaintance whose father owned this electronics company that made electrical instruments, and they were repairing mainly high-frequency direction-finding sets that were in use by the then Royal Canadian Navy. They were also repairing various instruments for the Air Force. I was working next to a bunch of immigrants of whom only one knew Ohm's law. The others knew nothing about electricity. They were just told, "Here, replace that part," and they had a soldering gun and soldering iron and so on. The foreman didn't know anything about electricity. He just took his orders from who knows who. But then they found out that I understood this subject and I was able to produce what are now called "fault trees", so that when one of the instruments came in from the Air Force and had been damaged somehow, it was just a sequence of tests or questions that you'd ask, and on the basis of that, you could pretty well identify the part that had to be replaced or recalibrated. And that saved an enormous amount of time, because what they had done previously was replace everything that looked a little bit darker than it should be. I was even able to repair the moving-coil meters. You know, these things that have a hand that swings back and forth in response to the voltage. My eyesight was good enough and my hands were steady enough that I could repair those, and that came in handy later.

In my third year in college, I got preoccupied with Sheila. She had very reasonable doubts, and I had to overcome those doubts to persuade her that we should try to grow old together. And I couldn't think of anything else that year, so I really made a mess of it academically.

Now did Canada at that point have three-year degrees under the British system, or was it four-year?

No, it was four-year. It was a four-year degree. This was my third year and I really did louse it up. I think I've mentioned that I may have spelled my name correctly on the thermodynamics exam, but if so, that was the only thing I got right. But I also took the Putnam exam in, I guess it was, December of '52, and I scored fairly high on that. I even won a little pin. That's the way my mother found out that I had other interests, when I gave the pin to Sheila instead of to my mother. Well, so I figured out that it's not enough to love someone. If you would like to be loved, you have to deserve it. So from being the laziest man on campus, I turned into someone industrious for my fourth year. Part of what I did was to fix the electrical and electronic equipment for the physics lab. I won't say that my contemporaries broke equipment so that I could fix it, but a lot of stuff did get broken and I fixed it. I also fixed electronic equipment for graduate students, mainly high-voltage pulse generators. I could fix the oscilloscopes and so on. So that got me a little bit of money and I was, I thought, pretty happy.

I'd made a choice to go into mathematics for the final year. Now that was done partly for a completely unreasonable reason. The professor who ran the optics lab that I took in my third year had noticed that I and my lab partner were getting superb results, much better than everyone else. My lab partner was Joe Vise. He was a friend from high school.
A guy with very similar inclinations, he also was good with his hands. So the first thing we did in the lab was to repair the equipment, and then of course we'd recalibrate it, and then we'd run the experiment. Of course our results in consequence came out rather better than those of people who didn't pay attention to these little details. So this professor cornered me in the hall one Friday afternoon and said, "Kahan, you really ought to choose to go into physics next year. You know, all the world's greatest scientists have been physicists." As he said that, the name Pasteur went through my mind. I don't think he was a physicist. Neither was Darwin. And I got a whiff of his breath. Then I concluded that if being drunk at 3:30 on a Friday afternoon was what it meant to be a physicist, I wanted none of it, and chose mathematics. This is the way these very important decisions get made.

Now am I correct that at this point in Canada, you wouldn't take a whole set of different courses at the same time, that you really would choose one subject for essentially the whole year?

It wasn't a smorgasbord system, that's correct. You committed yourself to a certain program, there were certain courses you had to take, and there was a small number of options. In my final year, one of the options I chose was a course in differential geometry and non-Euclidean geometry offered by Professor Coxeter. Now Coxeter was the 20th century's greatest 19th-century geometer, I think, and also experienced in group theory. And I took the course because everything he did was so beautifully done. It was so elegant and crystalline. It wasn't that I wanted to be a geometer. I just wanted some of him to rub off on me.

Are there any of the other professors or teachers that you would mention as particular influences?

Well, there were teachers who influenced me, some in perverse ways. Ralph Stanton was an algebra professor. Very clever and very nasty. So I learned not to treat students the way he did. I mean I remember once having a conversation with him in which I expressed some sympathy for the downtrodden, and he said, "How can you care about people you've never met and don't know?" Well, that told me what sort of guy he was. But he went on. He moved from Toronto to Waterloo. He was largely instrumental in founding their college of mathematics and computer science, statistics, business mathematics, combinatorics, a flourishing enterprise. But he was a mean-spirited person and finally they got rid of him. They banished him to Winnipeg, I think.

And there was Tutte, William Tutte. He was very quiet, and his sense of humour, these were always in-jokes, the jokes that only mathematicians or young mathematicians would appreciate. They were so sly. He was a delightful character, and we did not know at that time what he had done during the Second World War.

Yes. With the Fish code.

This was, yes, the machine that the British called Fish; the Germans called it the Lorenz machine. It was essentially a teletype machine on which you would type out your message, but what got sent was encrypted, and at the other end, when they received the encrypted message and punched up a tape and ran it through the tape reader on a similar machine, it would print out the plaintext. And apparently a message from, I don't know if it was France or Vienna, was sent to someone in Holland, and the person in Holland complained that it had become corrupted and asked to have it sent again. And the British were eavesdropping and they picked up both messages.
They could see there were similarities, and they had some ideas about what the message might be about, so they had some ideas of what phrases or words might appear. Bill Tutte was asked to look through this and see if he could figure it out, and he did. He figured out how the Lorenz machine worked without ever seeing one, and they built a replica without ever seeing one. Now all they needed was the key. And to get the key, they built the Colossus. That was the motivation for this huge electronic computer, I think probably the first one built, even if it didn't have a stored program.

Well, I had a number of other professors, but the one who influenced me most was unquestionably Kelly Gotlieb, Calvin C. Gotlieb. That happened because in March… It was March of 1953 and I was told that there was a thing called a computer. I asked, "What does this computer do?" and somebody told me in brief outline, and it set me back. I thought, "I didn't think that was possible. How would they do a thing like that?" So I designed my own electrical computer using telephone relays. Of course I didn't build it. I used I don't know how many hundreds of telephone relays, and I know that at one point when I tried to compute how much power this thing would use, I came up with a number that was a large fraction of the electrical power generated by Niagara Falls. So it was a good thing that it was never built. But I had it all drawn out on diagrams, and I went in one day, in I guess it was April, to Kelly Gotlieb's office. He was the guy who was the de facto manager of the computer. The person officially in charge, Watson, was the head of the physics department, but Kelly Gotlieb was the one who actually ran things. I went into his office and I said, "Look, I'll show you mine if you show me yours." So he did. And he became first one of my professors and then a friend and a guru, and I've admired him greatly. When you ask who influenced me a lot, well, let's put it this way. There are many times when I would have been better off if I had taken his advice. And I think he's still alive in Toronto.

Yes. In fact, there's a relatively recent oral history interview with him in the ACM Digital Library. So I'm glad that they were able to get his experiences and memories documented.

Oh good. Because he's played a vital role in initiating all sorts of activities using computers. I mean he was early on the scene for big data, library stuff and so on, on computers that couldn't hold much data. But he saw ahead.

Yes. So I believe the computer you're talking about here would be the FERUT?

This was… Yes. The first computer in Toronto was actually built by Joe Kates. He was an Austrian refugee who came to Canada and was getting an advanced degree in electrical engineering. He and a sidekick, Len Casciato, they built a small computer to demonstrate that Joe had successfully figured out how to improve the read-around ratio. That's the analogue for cathode-ray storage of what we do now for DRAM, dynamic random-access memory. DRAM will lose its memory after a while if you don't refresh it, and in those days that's what you had to do with the cathode-ray memory. But also there was a problem. That was, if you read a number from the cathode-ray memory, it would be done by really looking at secondary emission of electrons. The electrons emitted would splash into neighbouring numbers, and there were some very serious questions about what you could do to reduce the rate at which adjacent numbers would fill up with electrons that didn't really belong to them.
Joe's thesis was how to improve this, how to improve this read-around ratio. Although obviously in Toronto the people who knew about such things wanted a computer, on the other hand, building a computer according to the design of this Austrian immigrant didn't seem like such a great idea to the powers that be, so they decided to go with something designed in the old country, in Britain, and that was the Ferranti Manchester Mark 1, the first machine that I think was intended to be sold commercially. That was the machine I cut my teeth on.

I see that you've reproduced here parts of the Creed teleprinter code.

Oh, it's all very familiar. These were the Creed teleprinter characters, and when you programmed this computer, you did it with these characters, because there was no assembly language.

Yes. And there were five data channels on the tape…

That's right.

…so there were not an enormous number of characters available.

Well, there were 32 characters available. The tape had five holes plus a sixth small hole for the sprocket. The tape reader was a marvel. It was an electromechanical marvel. It had a magnetic clutch. The clutch has two faces, one driven by a motor all the time, and the other one would drive the spindle for the tape reader. They were both immersed in a cylinder filled with oil and ground-up iron, iron powder. If there was no magnetic field, the driven disk would spin and it wouldn't have much effect on the other, which at its other end had a magnetic clutch to hold it still, you see. But when they wanted to move the spindle so that they could move the tape forward on its sprocket, they would un-magnetize the brake, magnetize the driven part, and then it would become solid. The magnetic particles would all clump together as if it was a solid connection. This gadget would read 200 characters per second going all out. But the computer had a millisecond clock rate. It took a millisecond to do anything, three milliseconds to do a multiply. I think I'm the only person who ever got the tape reader to read integers, six-digit integers, without stopping the tape. The tape flew through the reader and into a big bin, and if you got in the way, you could get hurt. And if there was any flaw in the tape, it would break. But alright, so there I am boasting about trivia, but it was an interesting machine.

So if we step back a bit, can you give people a sense of what the machine was like physically? You described the tape reader. What else would be in the room?

Well, there was a console. The console had a number of cathode-ray screens. Two of them were in effect… Oh, what is the word? Something that repeats what's on some other screen.

Slave?

Slaves. Yes, that'll do. They were just slaves that would tell you what was in the memory on two of the four cathode-ray screens that we had, and then there were two other screens that showed us the accumulator and the B registers. The B registers were Turing's idea. They were ways of accessing indexed arrays by putting the index in a register; then you put the address in your instruction, that's the address at the beginning of the array, and the contents of the register get added to it, except for one peculiarity. Although you added the contents of that register to the whole instruction, if you wanted to change the contents of the register, the only instruction available was the subtract instruction, because it turns out that two subtracts are like an add. Talk about RISC, you know, reduced instruction set computers.
And that would have the advantage of eliminating the need to modify code in memory, because you can just change the…

Well, on other machines, if you wanted to compute the address of an element in the array, you actually had to do the computation in the accumulator and then plant what it had computed into an instruction, and then you execute the instruction, which loads the accumulator with the thing you wanted to fetch. But thanks to Turing's idea, what you did was you loaded the instruction with an address at the beginning of the array, knowing that three bits of that instruction specify which of the eight index registers you wanted, and that would get added before the instruction was executed. So you'd be able to pick up, if you want the 13th element of the array, you just put 13 in the index register. Well, actually there was a little bit more to it than that. During the summer, I'd become a pretty good programmer. It really seemed very natural to me.

So to write a program for this computer, you would need to punch it onto paper tape.

Oh, absolutely yes.

And then what happens next once you've got your program on a piece of paper tape?

Well, you go into this room where there's this console, and on the right-hand side of the console there is this tape reader. There's also a Creed teleprinter and a card punch on the teleprinter. Behind you at the side, there are these cabinets. The room, about twice as big as this room, was full of cabinets, and the cabinets were full of vacuum tubes. I think there were 5,000 of them. Of course they created a lot of heat, so there was the rush of air, air-conditioned air rushing through the cabinets to cool the vacuum tubes. And when you got on the computer, it was yours for typically a maximum of five minutes. You put your tape in there. I also built a tape winder so people could spool up, wind their tapes into these big spools. You'd put that into the reader and it would suck your tape in. Then it would execute your program. Then you would get a result, or maybe you wouldn't. And you'd get off the machine by the time your time was up, and the next person is right behind you, ready to start again, rebooting the machine from scratch to read the tape, and so on.

And the output would also be paper tape?

You could get it as paper, but it would typically be punched on paper tape so you could print it out at your leisure, so to speak.

So then you'd take the paper tape over to the…

You'd take it to another room and run it through what amounted to a Creed teleprinter, and it would do what the teleprinters did with paper tapes. It would read the tape with little mechanical fingers that would poke up. If they hit a hole they would notice it, and if they didn't hit a hole it would stop, and that would set up a bunch of complicated little levers and switches to get the print head to work.

And you have mentioned that doing this, you discovered that programming was something that you enjoyed and were good at.

Oh, I got really good at it because… Well, all I can say is that the idea seemed perfectly obvious once I saw what was there. It was just the right thing to do. But this machine had a fatal flaw. Its mean free time between errors was about five minutes on a good day. On a bad day, it could be less. So I learned to write programs with redundancy and checkpoints and backup and all sorts of other stuff. I think I and Harvey Gellman were the only ones who were good at that. Harvey Gellman worked for Atomic Energy of Canada and he used the machine for the atomic energy calculations.
He and I were the only ones who could get the machine to do anything worth doing on a bad day. Or at a bad time of day, because typically around five o'clock, give or take, the machine would get very cranky. It was because industry was shutting down and, they say, housewives were turning on, and so there were a lot of electrical transients. Those electrical transients shouldn't have gotten through, because the machine was powered by a generator that generated 400 Hz power, 400 Hz because that meant that the transformers needed for the filament voltage for the tubes could be much smaller. That generator was driven through a rubber-coupled flywheel by a motor that was driven from the power line. So the inertia of the flywheel should have absorbed any transients. And somehow the transients would get through, and we never really found out how. Although we got some idea. I remember one… It was a summer's day. It was August. It was very hot. Windows were open. And Pat Hume, Professor of Physics and a friend, Pat Hume and I were using the computer, and every now and then we'd hear a clackety-clack from down the hall and the B lines would fill up. So we'd reboot the computer and then there'd be this clackety-clack from down the hall and the B lines would fill up again. And Pat walked down the hall and I would tell him when the B lines were filling up. You know, shouting out the door. He found that the clackety-clack was one of these electromechanical desktop calculators, a Marchant, and every time the guy using it pushed the divide key, the calculator would go clackety-clack, clackety-clack, clackety-clack, because division is a "try it and see if you like it" business, and if you don't, you back off and move the carriage. So now we knew what was doing the clackety-clack, and that was correlated with filling the B lines. And what Pat did was ask the guy, "Look, would you unplug the calculator from this socket and plug it into that one? Let's see what happens." Oh, then the clackety-clack occurred but the B lines didn't fill up. And we still didn't figure out why that happened, because everything was in conduit. All the wiring was conduit, shielded, and so on. It was a mystery to the day that the computer died.

And I think the biggest job that you tackled on that, I understand, was a simulated airline reservation system.

Oh yes, that's right. Joe Kates was aware that I was a good programmer. I had written programs for General Electric's transformer people. I had psyched out one of the really devious codes that Joe Kates had written for binary-decimal and decimal-binary conversion. So he employed me in the summer of '54, starting early May '54, to write a simulation of an airline reservation system. The purpose was to show that if you can computerize the reservation system, you can cut down on communications costs, which landed on telegraph wires or telephone wires. Either way expensive, because people would call in from distant places saying, "I have a customer and he wants so many seats on such-and-such a flight," and "Is that okay?" and then they'd have to go through a huge rolodex system. It was very elaborate. I actually spent almost the whole month of May studying exactly what they did, watching their operation, watching the codes they used, watching the two-letter codes for the airports (at that time they were two letters, now three). I came, I thought, to understand really well exactly what they were doing and why they were doing it. Then it was time to write up flowcharts so that I could get the computer to simulate this.
So now what you'd do would be to enter your request to the computer, and instead of having to wait for somebody in a distant city, you'd get the response in two seconds. The computer would maintain the inventory of seats on the airlines and so on. The only problem that really weighed on me was the fact that the mean free time between errors of five minutes was death to a demonstration. The businesspeople who would come to see this demonstration could not tolerate a machine that crapped out every now and then. So I had to put in all of these measures in order to reduce the part of the program that was vulnerable to a crash to as few instructions as possible. All the rest was done redundantly, backed up on the drum, and the drum was checked redundantly, and so on. And the flowcharts were getting pretty elaborate by the end of July when Joe Kates said, "How's the program coming?" I was showing him a flowchart and he says, "Well, where's the code?" I said, "Don't worry about that. The code will write itself. We have to get the flowcharts right." So he got a friend of his, a physicist, Joe Shapiro, to come and help me. Then with Joe's help, the translation took about two weeks and then we had the program running. Alright, so maybe it took another day or two to debug it, but the program was running, and now what were we going to do with the time?

So first what we did was prepare manuals. These were designed to look exactly like the manuals that the TCA people would use to learn about what are the operations that they can request and what codes do they send on the teletype machine to say "I want a reservation" or "I want to cancel" or this or that. After I'd studied that, we ensured that the Creed teleprinter at our console could be used just like their machine, so that if you typed in the same stuff, it would go to the computer and it would execute the simulation of the same operation on the inventory. Then we still had time left over, so we put in management things like cancel flights, launch a new flight. If there's going to be the Stanley Cup, for example, in Edmonton, you get an awful lot of people flying to Edmonton, so they'd lay on extra flights. Sometimes they'd cancel flights because of bad weather or low demand. Then we put in some very limited ability to suggest alternate routes. Because although the Trans-Canada Air Lines route looked like a fish spine that was running more or less parallel with the border, and then there were little spines that went up to places like Ottawa and Calgary and down to New York and Chicago and I think Seattle (I'm not sure about Seattle), there was a certain amount of redundancy in the route. So we made up a table which said, "If somebody wants to go from hither to yon but the direct flight is booked, why doesn't he take a flight that goes from hither to an intermediate spot to yon?" and we would suggest that. It worked beautifully.

At the end of August, all these guys in gray suits came, and looking over our shoulders, some of them looking over one another's shoulders as we typed this stuff, said, "Oh, let's see if you can cancel this flight." Of course there was no such flight. The computer said, "There is no such flight." Or "Let's see if I can get 13 seats on that flight," and the computer would say, "Well, 13 is too many." And so on. It worked just the way their system worked. And you would think that they should be delighted, but they weren't. You see, Joe and I had explained the system and we were there at the demonstration. I was wearing a red sports jacket. It was really sort of tweed, a red tweed sports jacket.
And Joe was wearing a green tweed sports jacket, but they were all in gray. They were looking at these two young sprouts who were going to take over their life's work. And they wouldn't do it. So Trans-Canada Air Lines shelved this thing and I went on, got married in September, and became a graduate student. I went to England for two years. When I came back in 1960, Trans-Canada Air Lines had finally retired these guys and now they were going to go and set up a reservation system, but I think the company that they chose… I don't remember its name. I think it was something Packard. The company that they chose didn't get it done, and ultimately Trans-Canada Air Lines bought the same system that American Airlines was using, the SABRE system. Then of course these systems grew. I mean nowadays they have everything, not just your name and so on, but how to get in touch with you and what you have been doing in the past and so on. Well, that was a big job.

If Trans-Canada Air Lines had been interested in building on that to a real system, do you think you would have stayed with that project?

Well, the first thing to do is get reliable hardware, and that would have changed things a lot. Also, by the time… How shall I put it? This was 1954. By 1957, machines were already in existence with memories very much larger. By 1960, larger again. So that, and the ability to interact, the I/O channels for example on an IBM machine… An I/O channel was a little computer in its own right, and it had direct memory access. So the whole architecture would have had to be thought out all over again. It would have been an interesting problem, but I had even more interesting problems, I thought.

So at that point, you already knew that you wanted to go to graduate school?

Oh, uh… Well, I knew I wanted to work with computers, and where would I find a computer? I could move to the United States, but Toronto had the computer and I had the problems. So I was working on my master's degree and discovered, "Something interesting is happening here. It's not just a master's degree. I'm able to reproduce results that don't require the hypotheses that people used to think they needed." And…

And that was a master's degree in mathematics?

This was mathematics, that's right. So the hard part was the proof. The tedious part was running example after example after example to find out that what was going on was predictable. Well, the rate of convergence was predictable within a factor of 2, which, considering that we were going to increase the rate of convergence by an order of magnitude by choosing the right overrelaxation parameter, a factor of 2 was really inconsequential in the rate of convergence. So I managed to find some papers, I think it was Stein and Rosenberg, a paper in one of the London math journals, which triggered a realization that what they had done could be adapted to bigger and better things. So I was able to prove that this behaviour that I had observed was predictable and was very much like the behaviour that you'd expect if this combinatorial property was present. So you didn't need the combinatorial property. It meant that instead of having square meshes, you could have hexagonal meshes or irregular meshes. As long as it was a diffusion problem. A diffusion problem has a certain reciprocity which says that the influence of this node on that one is the same as the influence of that node on this one. Roughly speaking, that's what it says. But I'd also figured out a lot of other things that were interesting.
For example, I wrote a program for Professor Coxeter that figured out from generating relations what is the smallest group that can be generated that way, using something called the Todd-Coxeter coset enumeration algorithm. Of course he was delighted, because now he could deal with groups that were very much bigger than he could have handled by hand, but you know they weren't very big. They were just bigger than he could handle by hand. Other than that, nothing particularly awesome about them. So it seemed like a good idea to continue this. I remember a fellow graduate student, Peter Bandler, B-A-N-D-L-E-R. Peter Bandler and I both wanted to work on this Todd-Coxeter scheme, so we flipped a coin and I lost. So I ended up working on these diffusion problems. Peter never finished. And it's not surprising, because in order to get real traction, you had to be clever. I think it was John Leech in Edinburgh who wrote what was in the late '50s certainly the best such algorithm.

But I'm very lucky. I had all this computing experience. The engineers let me take the computer over on Friday night. Not to take it home, that's what your draft says. I think…

Yes. A transcription error, I suspect.

Yeah, I took it over from them on Friday evening and Sheila came down, and while my program was running on FERUT, she would babysit it and I could go and punch more tapes or print them out or whatever it took. If the computer crashed, it would usually crash because the memory filled up, and so we had an idea where it would go. And what I did was to put a little loop there so that it would beep. So Sheila would come and say, "The baby is crying," and I would go and reboot it and start it up again. And then if need be, I could wheel around a cart that had an oscilloscope, and if the computer malfunctioned, sometimes it would run a diagnostic program. Not everything worked but some things worked, and then I could see where the bug might be according to the diagnostic program. I could go to a cabinet and open it up, and I could touch one of the vacuum tubes and, pssh!, I knew it was too hot.

Well, that was interesting too. You see, they used 5,000 vacuum tubes, so they knew they were going to consume a lot of vacuum tubes. Where were they going to get them? Well, the Second World War was over and there were warehouses full of vacuum tubes intended for radar sets on aircraft. Somebody had decided that the vacuum tubes would be lighter if instead of steel or glass envelopes they used aluminum for the envelopes. Well, aluminum is permeable to hydrogen, so the tubes would get gassy. If you could have looked inside, you would have seen, beside the red glow of the cathode, this purple glow. It was hydrogen atoms bouncing. Once that happened, the vacuum tube was no longer a switch. It just wouldn't turn off, and then it would overheat. That was one way to find out which vacuum tube had to be replaced, and I was allowed to do that. So I would keep the machine going in many cases. There were some situations where I really didn't know what to do. I just had to shut it down. The engineers would come the next morning and try to figure out what was going on. From Friday night till early Monday morning, I had the computer mostly to myself. If someone came in on the weekend and wanted to use it for a little while, that was okay. You had to be humane about these things.
Now we actually start the ENIAC book with a quote from Hartree, who had written one of the first public descriptions of ENIAC. And something of an understatement, but he was trying to give a sense of what the availability of electronic computing could mean for mathematics, and he said, "You can do quite a lot with 10 million multiplications." So did you have a sense as you were tackling these various problems that mathematics was entering a new kind of era in terms of what could be done mathematically?

Oh, not just mathematics. It was just everything. In science and engineering. The biologists weren't interested at the time, but they've come around since. In the late summer of 1954, Sheila's family had a party at which I was to be introduced to her family, because we were going to get married in September. And one of her uncles was a very prominent lawyer in Toronto, Nathan Strauss, Queen's Counsel. Very prestigious. And he felt that he should, well, check out this young guy, and he said, "Look, I understand you're into computers." "Yes." "Well, what are you going to do when four or five computers have been built and that's all the computing that the country needs? What will you do with computers after that?" And I said, "Oh, they'll become companions." You see, that was really done more to pull his leg than anything else, but prescient in a way.

Yes. Would it have been possible for you to have done the work that you did for your thesis without a computer? Could you have tackled that?

I don't see how. It would have taken far too long, and I don't think the idea would have occurred to me that you didn't need this hypothesis, because I wouldn't have had the energy to run one of the electromechanical calculators on all the different kinds of meshes that I tried. On the computer, it took a while to program the thing. It might have taken a couple of days or three to write and get the program onto tape and debug it. But then it would go in a flash by comparison. Alright, so it took a few hours. But that's instantaneous compared with what it would take if you had to do it on electromechanical machines. And I was in charge of electromechanical machines for quite a while.

Now you got your PhD in 1958. Before that, I know you spent a summer at the University of Illinois, where they had their own computer.

That's right. Oh yes. Well, you see, it was clear that FERUT was… Well, it was out of its depth as computers went. The thought was that Toronto and the University of Illinois might collaborate on a new computer that they were thinking about. David Muller, M-U-L-L-E-R without an E after the U, Muller, he had an idea to use what we would now call self-timed or self-clocked circuits. These are logical circuits which send out a completion signal when their result has become stable. If you do it that way, you don't have to distribute a clock signal and synchronize everything, because when you synchronize everything, you've got to allow for the slowest, which could be a lot slower than other things, so it slows the computer down all told. But it became clear pretty soon, both to me and anyone else who would look, that this would require an awfully large number of these newfangled transistors, and each one of the transistors came in a little can like that, you see, and where were you going to put them all? We didn't have integrated circuits then. And now that we do have integrated circuits, the question is not where we'll put the transistors but where are you going to put the wires? So we don't have very many self-clocked circuits. There are some. There are some very clever divide circuits that use… I think the name of the guy at Stanford who did this, I think his name was Feller. I'm not sure.
I'm an old man and my memory has holes in it. But once it was clear that the ILLIAC II was not going to be born, Toronto went ahead and got an IBM 650 and later an IBM 7090. Meanwhile, that summer was hell in some respects, because it was 95 during the day, both temperature and humidity, and the only things that flourished were the cornfields and the bean fields, and the only places with air conditioning were the computer room and the student lounge. And the student lounge was already occupied by students on couches trying to get some sleep out of the heat. But alright, so I managed. And I met some really interesting people. Gene Golub. He was my host for my first week or so there. After that, he went off to Bell Labs. But he and I became friends from then on. And I met several other guys whom I found instructive. One of them was Bill Gear, C.W. Gear. Gear was reputed to be the only sane person around, and he was. I mean he was really very steady. But his thesis advisor was Abe Taub, T-A-U-B, and Taub had a personality peculiarity. If he detected a weakness, he would prey upon it. That taught me something important about what not to do with graduate students. Their life is uncertain enough, and so you really don't want to shake them. But Taub and Gear would get into arguments, which could be heard through the closed door and down the hall. But what was interesting was that Gear gave as good as he got, and he did it with a slightly Cockney accent. He had cultivated a more plummy accent at college, but he could revert to a slightly Cockney accent. So he wouldn't take this junk from Taub. But alas, Gene Golub did, and was I think damaged by Taub. Taub mellowed when he came to Berkeley. He and I were on reasonably good terms, but I don't think he had very many graduate students after that.

Yes. And I should mention for people who are interested in pulling up those stories, I did oral history interviews for SIAM with both Gene Golub and Bill Gear, so those are also available.

They corroborate the story, do they?

Essentially, I think, yes. Now when you got your PhD in 1958, then you were able to spend two years at Cambridge in England.

Yes, yes. That was because I wanted to work with Hartree. Hartree had come through Toronto in, I think, '57, in autumn, late autumn of '57, and I thought what he was doing was marvellous. He was computing the energy levels of molecules. These could be revealed by their spectra, but what you really wanted to know was how can you arrange them in order to minimize the total energy involved in their oscillations. And that computation was terrific. That's what I wanted to do. So we made the arrangements to go to Cambridge and I applied for a postdoc fellowship, which fortunately I got. Now this occurred after Kelly Gotlieb had taken me to Detroit, to a meeting where I had presented the results that became my thesis, that I just told you about, in 15 slides in 15 minutes. Glass slides. They had to be prepared by a photographer. And it caused quite a stir, I guess. So I was what the mafia would call a "made man", and that may have helped get the postdoc fellowship support from the Canadian government. But Hartree died before I even set out. I'm sure his family was distraught, and it upset me too. But all the arrangements had been made, so we went to Cambridge. Well, it was really not such a bad thing. We had a marvellous time. I met lots of really interesting people, both at Cambridge and Oxford, and London. And Gene Golub turned up after a while. He was taking his postdoc there also.
And if you can imagine packing Sheila and me and Gene Golub and our luggage into a Volkswagen Beetle that we drove, and we took a hovercraft ferry across the Channel and drove to Paris for a congress of what became the International Federation for Information Processing societies, that was in '59, how did we ever fit all of us in that car? I don't think I could do it now.

Among the people I met was J.C.P. Miller. Now he became my pro forma thesis advisor because, since my PhD was from Canada, in the colonies, they didn't recognize it, and so I had to enrol there as if I were going to be a graduate student. Oh well. So I had to pay some fees. It was like contributions to a charitable organization. I should say, by the way, that after I came home, I heard nothing from John's College until a few years ago when they decided that I should remember them in my will. So they've been sending me stuff ever since. But anyway, J.C.P. Miller had been computing for decades on hand-cranked machines and also on a National Cash Register machine, which was sort of an accounting machine that produced difference tables automatically. And he was a devious old coot and he taught me all sorts of tricks which he thought couldn't be used in electronic computers anymore, because you just can't see the digits, they go by too fast. But I got to use almost all of them to figure out what to do. So that was great.

Now Wilkes was very hospitable. We did have a small clash, because someone had to teach Hartree's numerical analysis class, and Wilkes thought he was up for the job and he wasn't. Wilkes was a competent and clever guy, but I think he thought numerical analysis was trivial, and it wasn't. He mangled some things, and I remember being called into his office and being told, "Kahan, I can appreciate your questions better if you ask them privately rather than in class." "After all," he said, "this is an elementary class." And I said, "Well, there is a difference between elementary and superficial." And his face darkened and that was the end of that interview. And strangely enough I've heard from friends that he tells that story about me, the same story as I tell about him, each of us to illustrate something about the other guy's personality. But I have no complaints about Wilkes. He was hospitable and I enjoyed the two years in his Math Lab.

I think the other future Turing Award winner that you met on that trip was James Wilkinson.

Oh yes, oh yes. Well, you see, I had started to figure out backward error analysis while I was still a graduate student, somewhere around 1957-58. It helped me explain an awful lot about how computations actually worked. And I wasn't the only one. Wallace Givens at Argonne National Lab had a glimpse of this in 1954. Turing had a glimpse of it in 1948-49, and even mentioned it in a throwaway line, one of the last paragraphs of one of his papers. But it took us until the late '50s for the penny to drop. And Wilkinson caught on in Teddington near London, and Fritz Bauer caught on in Munich. We all caught on approximately simultaneously. So when I went to Cambridge, I would travel down to Teddington to meet with Wilkinson and show him what I had just done. You know, "I've now figured this out for eigenvalue calculation," or for this calculation, or whatever. And he would reach up on a high shelf and pull off an envelope and, pff!, blow the dust off and pull out some pages: "Oh yes, I think I've got that here. And do you have this too?" And, well, it was great.
I felt I was with someone who was in many ways a kindred spirit, but he was really very good at explaining things lucidly. It would read like a novel. So it was a pleasure to read the stuff he wrote, and I was happy that he wrote the stuff up and I didn't have to.

Now to pull back slightly, so all the people that you've mentioned were pioneer users of electronic computers.

Yes.

Now clearly error analysis is something that predates electronic computing.

Oh yes.

But what was it about mathematical practice, with the availability of extremely fast automatic computation, that meant that existing methods were no longer sufficient?

Okay. Well, the atmosphere was poisoned by von Neumann. He disparaged floating point. His explanation in his written reports… Abe Taub was editor of von Neumann's works, and he allowed me to sit in his office and leaf through and even borrow things for a while. So I looked through very carefully, and I found that the only reason that von Neumann gave for disparaging floating point was that a good physicist should understand the range of his variables and could do his own scaling and didn't need floating point to do it. But I didn't find that out until afterwards. What I had found out was that numerical analysis was a somewhat crippled subject. Kelly Gotlieb had given a class on numerical analysis in 1956 and it wasn't very good. It was out of a text that was the latest text, but the text wasn't really very helpful. The problem was partly that engineers were using floating point on computers, including FERUT, despite what von Neumann said, because it just didn't make sense to try to do all the scaling yourself. You'd have to understand what all the intermediate quantities meant, and on electronic computers, you ended up with a lot of intermediate quantities because the computations got longer, much longer than you could afford to do by hand. So von Neumann was mistaken, and his mistake influenced others who believed that floating-point arithmetic is refractory to error analysis. It's just too complicated. You never know what the ranges of the variables are, and so what do you mean by "the last bit fluctuates according to the exponent"? How do you take care of all that? And I figured out how. Not for every computation. There are lots of computations for which we still don't have error analyses. You run the computation and you pray. But the major computations were susceptible to this kind of error analysis.

And when I had to write a thesis, I had asked myself, "Well, which of the various things I've been doing should I write a thesis about?" and Griffith, my thesis advisor, said one day, I guess it was in March of 1958, he said, "Kahan, write a thesis. I don't care which of the 10 theses you could write. Choose one and write it and have it in my hands by May, or find another thesis advisor." Well, I revered that man and so I took him very seriously, and Sheila and I went out and bought a typewriter with mathematical keys on it so we could type up my thesis, which then had to be limited to whatever there was on the typewriter. I chose the mathematically most challenging thing, which I've told you about, this rate-of-convergence problem. Then in 1957, Kelly took me to this Wayne State thing in Detroit and I gave this talk and it made quite a stir, but at the same meeting, the seeds of my idea's demise were sown. Peaceman and Rachford described an alternating-direction scheme which, if you choose the parameters appropriately, converges even faster. And to this day we don't know exactly why.
For rectangular regions, we understand it, but for arbitrary-shaped, like banana-shaped, regions, we don't really understand why the thing works as well as it does. In any event, their scheme was subsequently overtaken by what we now call multigrid methods, which are very much faster again. Well, if I had done my thesis about backward error analysis, maybe it would have had a more lasting impression. I don't know. In any event, Wilkinson and I were kindred spirits about that. We saw things the same way. What's more, so did David Wheeler at Cambridge. Now David Wheeler was involved in the microcoding of the EDSAC 2. He was really a genius, but a very reticent genius. In order to get information out of him, it was, well, it was like trying to open a can without a can opener, really. But if he did say something, you really wanted to think about it. He had also understood… I don't think he had invented it, but he had understood backward error analysis also. So some of my ideas could be bounced off him. There were other people at the Cambridge lab. Peter Swinnerton-Dyer, who was extraordinarily bright, really one of the brightest people I've ever seen. And my officemates were no slouches either. Colin Cryer, that's the name, a South African. It's not in those lists, but he was doing really interesting work on partial differential equations with moveable boundaries, boundaries that would move under the influence of the solution to the differential equation. So we had a great time.

In the meantime, I was working on various algorithms and insights, helped partly by J.C.P. Miller's tricks, in order to better understand how approximate computation could survive at not just a thousand operations per second; they were getting up to a million per second. You see, very soon, 1963, they were doing a million per second on the CDC 6600. So that pretty much determined my career. I was going to be an error analyst and figure out how floating-point computation worked, why it worked when it did, and why it didn't work when it didn't.

Now with the machines you've been describing so far, am I correct that there's no floating point in hardware, so it would then be done with subroutines?

Well, that's right. On FERUT it was a subroutine family designed by Pat Hume. The thing about FERUT was that, as early machines go, it had a very good subroutine management system, designed I was told by Alan Turing. There was a part of the memory that was reserved for the subroutine calling sequence, and it would automatically overlay your program with a subroutine that you called and executed, and then bring your original program back off a drum in its original state but now with the results from the subroutine. But the IBM 650 had decimal floating point built in, and so had the 7090, but it was binary floating point built in.

So let's get on track with the story then. In 1960, you returned to Toronto as a faculty member.

That's right.

Did they already have a 650 when you returned?

No. By the time I returned, in, I think it was, late August of 1960, they had already installed a 7090 to supplant the IBM 650. However, something funny was going on, because people were running the 650 emulator on the 7090. Why would they do that? It slows the 7090 down dreadfully to pretend it's a 650. Well, the reason appears to have been my matrix package. I wrote a matrix package for the IBM 650. It was very convenient. Apparently while I was off in Cambridge, unknown to me it was getting heavy use by statisticians.
Really by statisticians' clients, that is to say, the psychologists and the life-science people and so on who use statistics. They had used the scheme on the 650 because the 650 had to be programmed in assembly language. There was something called FORTRANSIT, F-O-R-T-R-A-N-S-I-T, and it was a crude form of FORTRAN, but it was inefficient. If you really wanted to use the machine efficiently, you'd have to program it in assembly language, and programming matrix-handling stuff in assembly language is just no joy. But I had found a way to pretend that the instruction set of the IBM 650 had as its operands not decimal numbers in memory but matrices. So the add operation, if interpreted this way, would add two matrices and put the result in the standard place, and similarly for the multiply operation. The divide essentially did matrix inversion or solved linear systems of equations. I'm pretty sure I had Jacobi's method for eigenvalues in there, and just stuff for preparing matrices and stuff like that. And I handed that package over on a Thursday so that there was time on Friday to pack for the ship that was going to sail on Sunday to take us to England. When I got to Cambridge, I let them have my phone number, so I said, "If you've got any trouble, call me," but they never called me. So I concluded that no one was using it. Oh well. Only to come back and discover yes, they'd been using it very heavily, so heavily that they couldn't wean themselves away from it when they got the 7090.

Well, as it happens I did write programs for the 7090, I did write matrix-handling programs. Matrix-handling programs including dealing with complex matrices, matrices consisting of complex numbers, which I got to go very, very fast, because instead of calling complex-arithmetic subprograms to do the complex add, subtract, multiply during the matrix stuff, I did it all with real arithmetic. So it went very fast. No subroutine-call overhead. Apparently Ontario Hydro became addicted to my programs, because they have to solve complex linear systems in order to understand the electric distribution networks. They have inductance and capacitance. They're really transmission lines. So when Toronto switched from a 7094 that ran this program of mine to the IBM 360, on the 360 it couldn't run the code. I forget exactly why. I'd have to think about it now. So the chemistry department took over the 7094 and put it in their basement and continued running it for several years, and Ontario Hydro ran their codes in the chemistry department's basement for several years until finally they couldn't get spare parts. And that was the end of the 7094. But it lasted for a long time. I'm sure they had it for at least five years after I left Toronto.

Now picking back up with the floating-point story, so you've mentioned that the 7090 obviously was a much, much larger, more reliable machine than FERUT.

Oh yes.

But you'd mentioned that one of the differences was that it had floating-point hardware support, and specifically for binary floating point.

Yes. Correct.

So what difference did the presence of floating point in hardware make for the kind of work that you were doing?

Well, for one thing it went an awful lot faster, not just because the… Ah, the 7090 had, I think it was, a 2-microsecond cycle time to read from core memory, and it took maybe a hundred microseconds to do a floating-point multiply or maybe a divide. I don't remember which. Multiply, I think. But a hundred microseconds is a lot faster than 3 milliseconds. Then the 7094 was even faster, about twice as fast.
It had the same instruction set with a few frills. It had double-precision operations. But it could run the 7090 code faster. And it had what we would now call 128K bytes of memory, which at that time seemed enormous. So of course it meant that scientific and engineering computations that were hopeless in the '50s were now routine in the '60s.

And what did that change in terms of error analysis?

Well, for one thing, differential equation solving became more practical. Differential equation solving is sort of an inchworm process. I think I've described it to you. You know, you take small steps, hoping that if the steps are small enough, you'll follow the correct trajectory and not get too far off it. But these things were plagued with certain kinds of errors, combinations of instabilities, because of the unfortunate choice of a numerical method. Although Dahlquist the Swede figured out in 1956 what it was that made predictor-corrector methods unstable, so we ended up with a better idea of which numerical methods to use. But still there were accumulations of errors just simply because every step had to get rounded, and that's partly why I came up with compensated summation, which I wrote a note about in, I think it was, 1960-61. But I wasn't the only one who did it. There was a guy in Denmark who figured it out about the same time. So that made it possible then to solve trajectory calculations, orbit calculations that would go on and on and on and on without being overwhelmed by rounding errors.

There were many computations, especially eigenvalue computations, matrix computations, there were many of them where it turned out that a certain amount of error analysis would give you confidence that you understood why you were getting the result you were getting. Now unfortunately, backward error analysis has been badly misunderstood. People describe it as follows... I've got a book where a guy actually says this: "I can't give you the answer to a problem you gave me. I can give you the answer to a slightly different problem. It's a pity you didn't give me the slightly different problem. That was your mistake." Well, that's a misunderstanding. It is as if there's an intrinsic limit to how accurately you can solve a problem because of backward error analysis. And if you don't get a good answer, well, backward error analysis explains that that's what happened and that's the justification for getting a bad answer. And not just for matrix calculations. For example, the logarithm of a number x near 1 is closely proportional to x minus 1 plus higher-order terms, powers of x minus 1. But if you believe that the argument to the logarithm function is uncertain in its last digit, well, then x minus 1 will largely cancel, leaving you with nothing much more than the last digit or so about which you don't know very much, and so you can't complain if the logarithm function is wrong. That is a mistake. It's a serious mistake. It's a serious mistake because first of all, backward error analysis, when it's applicable... It doesn't always work. Some computations don't submit to backward error analysis. But when it works, it's an explanation, not an excuse. It says why things happen this way.
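A minimal sketch in C of the compensated summation mentioned above, now commonly called Kahan summation; the loop and the little demonstration are illustrative, not code from the 1960-61 note:

    #include <stdio.h>

    int main(void) {
        const int n = 10000000;
        double naive = 0.0;
        double s = 0.0, c = 0.0;      /* compensated sum and its correction */

        for (int i = 0; i < n; i++) {
            naive += 0.1;             /* each add rounds; the errors accumulate */

            double y = 0.1 - c;       /* add back the low-order part lost last time */
            double t = s + y;         /* big + small: low bits of y are rounded away */
            c = (t - s) - y;          /* (t - s) is what was actually added; minus y
                                         gives the (negated) lost part for next pass */
            s = t;
        }
        printf("naive:       %.6f\n", naive);  /* drifts away from 1000000 */
        printf("compensated: %.6f\n", s);      /* essentially 1000000 */
        return 0;
    }

The correction variable carries the rounding error of each addition into the next one, which is why long sums of small steps, as in trajectory or orbit calculations, stop drifting.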
So if in your computation the result you get is no worse than if you had perturbed the data by n figures, but your data is uncertain by hundreds of units in the last place, not just a few, well, in that case the uncertainty in your answer is more because of your data than because of the algorithm. If on the other hand the uncertainty you get is intolerable, that you can't stand having perturbations in n figures, well, then carry more digits. Do it again with more digits. You don't just acquiesce. It's an explanation, not an excuse.

In the case of the logarithm code, somebody says, "Look at my logarithm algorithm. It's no worse than if you perturbed x by a unit or so in the last place." Well, that may be, but for all I know, x is exactly what somebody wants. It isn't uncertain by a unit in its last place. So if what you do is screw it up that way, then you're giving me a bad answer. And it did screw me up. The IBM logarithm did that on the 7090, and it caused one of my differential equation solvers to malfunction. It took me weeks to find out what was going wrong. And so I fixed the logarithm program. Actually I even made it a little bit faster.

Now I understand you finished up making quite a few changes to the system software...

Oh yes, indeed. Well, the system software... the math library is not now regarded as part of system software. It's regarded as part of the compiler nowadays. But yes, I rewrote the log and the exponential, I think sin and cos. Oh, square root, absolutely, because square root hadn't been monotonic. How can you have a square root that isn't monotonic? If you increase the argument, you expect the square root to... well, if it doesn't increase, it should at least stay the same, but not decrease. The graph of square root, you see, looks like this. So after a while it bends over. It doesn't flatten exactly, but you can change the argument without changing the square root, but you don't want it to go back. And that screwed up one of my programs, so I had to rewrite that one. And we contributed these things to SHARE. A guy at the University of Chicago, Clemens Roothaan, asked, "Well, why don't you just rewrite the whole library?" I said, "Well, because I'm a professor. I have to meet classes." "Okay," he said, and he gave the job to one of his programmers, Hirondo Kuki. Kuki did a superb job. He rewrote the whole math library, not just the parts of it that I had written.

That put the subject in a different light. It meant that you could analyze the errors in floating point, that you could get answers that were correct to within a unit or less in the last place, that you could do it. And if you could do it and it wouldn't hurt performance much, then you should do it. It was as simple as that. Does that answer your question?

Yes. I think it leads us to another aspect. Obviously implicit in what you're saying is that the application programmer would have a good understanding of what was happening, of how much accuracy was needed, and so on. And I know that you...

Well, some did and some didn't.

Yes. I think you had also mentioned that you tried to... you made some changes to the error messages...

Oh yes.

...so it would be reported to the programmers, to try and expose that information to them.

Well, getting an error message in octal wasn't very helpful. The floating-point error messages came from something called a floating-point trap routine, FPT.
What I did was rewrite it, not only to give better error messages but to give retrospective diagnostics. Retrospective diagnostics means that if you are willing to accept a patch of some sort... For example, if you say, "Well, if a number overflows, replace it by the biggest possible number. If it underflows, replace it by zero." Or actually put gradual underflow into the machine, because the hardware supported it, believe it or not. So what I did was to store in the operating system a cell that said, "Here is a list of the exceptions that have occurred and nothing has been done about them other than to put in the default fix-up." And I supplied code that said, "If you want, you can clear this flag." But if you don't clear the flag, it will stick up. And then at the end of your job when you get the accounting record, it might say, "You had an unrequited overflow at such-and-such." Lovely word, unrequited. And so on.

Well, I didn't get a lot of thanks for that, except once. A mathematician had thought that he had discovered a counterexample to a famous conjecture, but it turned out that his counterexample existed because division by zero in the 7090 gave you a quotient of zero unless you stopped the machine, and he didn't stop the machine and he didn't know that. But my message said, "You have an unrequited divide by zero at such-and-such a place." It hadn't occurred to him that he had a division by zero, and that came just in time to save him from sending in a paper to publish his counterexample.

And that theme of floating point not just being hardware but the interaction between the hardware and software environment is something that, as I understand, runs through the remainder of your career.

Oh yes. Well, they're married. So they have to learn to get along with each other.

Now we should say I guess just a few things about the broader development of your career, first at Toronto until 1968...

Say, could you speak up a bit, please?

Sure. Sorry, yes. We should say a few things about your broader career, both at Toronto until 1968 and since then at Berkeley.

Yes. Okay. Well, I told you I got back to Toronto in late August, end of August 1960. I thought I was going to work with the two numerical analysts. One was Jimmy Chung in the statistics department and the other was Boris Davison in the physics department. But Chung had died in August. He had some lung problems. I think he was living on one lung. And Boris Davison died in November. There I was, the only numerical analyst. Now that wasn't necessarily good news because I was fairly demanding. I wanted students to understand what they were doing. But there are many students who take classes in order to get the credits, and there were quite a few like that and they would have fared badly in my classes. I don't think it's a good idea to scoff at students who may be slow. It's like poking fun at somebody who's got only one leg. But I do get exasperated when people are in class and they don't want to know. In fact, I sometimes get exasperated with Sheila if she doesn't want to know. Well, she doesn't like to know about mechanical things other than that it'll work if you push this button or that button. Then of course if she breaks it, our contract says I have to fix it. So I was willing to deal with students who were slow on the uptake, but unwilling to deal with students who just wanted the credit and didn't want to actually do the work. And there's a lot of work in numerical computation, because when I started numerical analysis at Toronto, the students did not have easy access to a computer.
I mean the IBM 650 was fully subscribed with people who had serious computations. It wasn't used for educating undergraduates. So I used slide rules. Okay. So all of my problems were slide-rule accessible, which by the way takes a good deal of effort, to find interesting computations that are portents of things to come but you can do it with a slide rule. Anyway, that's what I did.

After a while, it was clear that I should not be unleashed on freshmen, although in that freshman class there were two outstanding students. One went into physics, became famous. Another one went into mathematics. She also became famous. In fact, she became for a while head of the Fields Institute in Toronto. Betty Keyfitz. Was that right? I think that's... Well, I'm really bad with names, especially women's names, but I'm pretty sure Keyfitz was the last name. She was very, very bright.

I really should digress a bit.

Oh, we don't have much time for digressing.

Well, this is important I think. At the time, there weren't very many women in the mathematical technologies. They were rare. They were rare in physics. One of the few women in physics was Harriet Thompson, who was in my class, and she was absolutely marvellous. She went on to get a PhD at Cambridge and got it shortly before I got mine, and then left with her husband to go to South Africa. And when I turned up in Cambridge, people would think first that I was American. "Oh, you're Canadian. Oh, then you must know Harriet Thompson." Harriet Thompson really shook the place up. For one thing, she and her fellow graduate students would often go to places that had been restricted to men alone. She would come in and some flunky would say, "Oh, no, no, you're not allowed here," and she would ask, "Why not?" Then her companions would say, "She's with us." Oh, there are lots of things she did that made it... Let's say she was a pioneer that made it more respectable for women to go into such things as physics and so on.

And at Toronto, we needed such pioneers too. Among them, one who worked for me, she was assigned as a programmer to work for me. I think it was sort of a part-time job, summer job. Marion Green had graduated at the top of a math class at Toronto. I think her husband was second. He went into government work consulting on computing and she for the time being got this part-time job and was working for me. She was so bright, so quick. I remember that I had to sit down with her and have a serious conversation. I said, "Marion, you've got to go to graduate school and get a PhD, and then instead of me telling you what to do, you're going to tell me what to do." Well, I don't know whether that frightened her or not. She decided ultimately that she would rather stay home with her babies. Many women made that decision. We didn't make it easy for them. It was very unfair. We lost a lot of really good talent that way. That's important enough that I felt I had to say that.

So what brought you to Berkeley?

Ah. Beresford Parlett. He was a friend whom I had met in, I guess it was, 1964 in New York at some conference or other. He also worked with matrix computations and he was English. He invited me to come to Berkeley to help set up a computer science department. Now that was '68. In '66, I had spent half a year at Stanford with my family. And as we were heading for home, my younger son, who was just about two and a half at the time, said, "Daddy, why can't you get a job in California?" So that and Beresford's invitation worked upon me and my family, and I thought, "Well, why don't we go off to Berkeley for say seven or eight years?"
Seven or eight years because it says in the Bible that after seven years, a slave is entitled to his manumission. He's entitled to be released. And if he decides he doesn't want to be released, then he is to be branded on the forehead and then becomes a slave for the rest of his days as a member of the family, a slave member of the family. Anyway, that's where the number seven and "sabbatical" comes from.

Sheila didn't want to go. Sheila had a large network of friends in Toronto, friends from her college days. She had gotten a degree in philosophy in 1955 and then a graduate degree in social work in, I think it was, 1956 or '7. She specialized in geriatric social work, which has come in handy now. She takes care of me. And she didn't want to go, but I thought about it and I figured that she'd like it once she got there. And she has liked it. She has a wide network of friends, including quite a few widows.

So we came here and found a really strange situation. Not a computer science department. It was going to be one of three computer science departments, one of them clandestine. Lotfi Zadeh in electrical engineering was forming his computer science department as a condition, that he was allowed to do that. That was the condition he extracted from the administration so that he would assent to their setting up a computer science department in the School of Letters & Science. But there was also the Computing Center, and the Computing Center had some guys in there who were attracted because of the thought that ultimately they would become part of a computer science department. That didn't quite happen that way.

When I was brought to Berkeley, I had a triple appointment: math, computer science, and a little sinecure in the Computing Center. Well, that Computing Center thing had been started by Abe Taub. He had been brought in to run the Computing Center under the auspices of the math department. He was brought in specifically because the math department feared Lotfi Zadeh, who had a well-earned reputation as a shark. The math department thought, "We have to bring in a shark to offset Zadeh." By the way, Zadeh is still alive, but he's very old and sort of hunched over. Comes in from time to time. Well, it turned out Zadeh was a better shark than Taub, and so Taub didn't really manage to set up a mathematical enterprise at all. But he did enlist some people who thought that they were going to be part of a computer science department. It included a guy from the math department, René DeVogelaere. The Computing Center also employed a guy from electrical engineering, Professor Martin Graham. Martin Graham died just last year. And they had a couple of others and me.

But they also had an annual deficit of $150,000, and no matter what they did, it seemed that this became a constant of nature. They couldn't get rid of this deficit. To make the deficit worse, the federal government finally cottoned on to the idea that people were using computers financed by federal research grants also as teaching instruments to teach undergraduates how to program, and the federal government said, "No, you can't do that. You can't use our money for that purpose." Well, that was a serious blow because computers at that time cost between $400 and $600 an hour no matter how fast they went, and teaching undergraduates consumed a lot of time. So as they tried to get funds from various places, they discovered they'd always had this deficit, and a committee was formed to see how the deficit could be relieved. I was a member of that committee. And I agreed with the...

What year was this?
Beg your pardon? What year was the...?

'69 I think.

So not long after you arrived?

That's right. I arrived in January '69 and I think by September of '69, I was on this committee. With a couple of other guys, a guy from chemistry and one from, I think, demography was it? Can't remember. I can't remember. Anyway, what we figured out was part of the deficit was caused by this sinecure. There were people in the Computing Center who were supposed to give advice that was hardly ever wanted. So we recommended that the sinecures be terminated, that the Computing Center find a competent director. It was currently being directed by a sort of temporary director because the previous director had become unpopular. He had just made the deficit worse. So the administration said, "Yes, we'll accept your recommendations," and what they did was promote the acting director to director. He was incompetent. They eliminated the sinecures except for Marty Graham, who consulted on communications networks for the campus. I didn't need the sinecure because I had a tenured position in math and one in computer science. René DeVogelaere was a little bit annoyed because the math department didn't want to take him back for a while, but ultimately they were compelled to. And a couple of other guys had to be let go. I think I earned their undying enmity for my recommendation that this thing be squashed. And you know what? The next year, the deficit was $150,000 again.

Hmm. So that got rid of one of the three computer science departments you have mentioned. Were the other two eventually merged?

Well, what happened with the L&S computer science department was that we were promised a certain number of FTEs, full-time equivalents. You know, we could appoint a certain number of people. But then demonstrations began around 1970-71. Students wanted an ethnic studies department. Finally the administration knuckled under and they compelled the Dean of Letters & Science to set up an ethnic studies department, and he only had a certain number of FTEs and he had to take one or two from us. Well, what happened then was that we wanted to appoint a very bright woman who got her PhD at Stanford in programming languages, and the dean was looking for any excuse to renege on his promise. And he got word that one of the members of our department and this young woman were tight, that this could therefore be considered a nepotism case. So they used that as an excuse to deny her appointment, which was really rather annoying. The chairwoman of the statistics department, Elizabeth... What was her name? Oh gosh. Her surname has fallen into a hole in my memory. Maybe it will come back in an hour or so. She got wind of this and she observed that the relationship between this young woman and one of my colleagues was only a rumour. They didn't have actual witnesses. So she said that they had denied her due process and so on, and she was going to make quite a fuss. So ultimately we did get her. She was appointed. But that was the last appointment that we could make. We couldn't grow.

So ultimately it appeared reasonable to merge our computer science department with the Electrical Engineering and Computer Science department. This wasn't a bad thing for all of us. Marty Graham had been in EE and he had become chairman of this computer science department when the merger occurred. And it didn't bother me because my hobby was electronics. It was a forced marriage, but it wasn't a really bad thing.
And ultimately we ended up... all the computer science people but a few ended up in the new building, Soda Hall, just on the north side of Hearst, just across from the campus. Some stayed in Cory Hall, which was just across the street on the south side of Hearst. That was electrical engineering's domain. I think the department has flourished, and I think it's partly because of, rather than despite, the forced marriage.

Now numerical analysis was a much more prominent part of computer science in the '60s when these disciplines first came together.

Oh yes.

So how did that change over the course of your career?

Well, numerical analysis has become what I describe as a sliver under the fingernail of computer science, because although numerical computation is more... there's more of it and it's more important than ever before, and especially floating-point computation on a grand scale... For example, biological systems are being simulated right down to the level of mitochondria inside the cell. You set up a bunch of differential equations, exchange equations, and delay differential equations, and gosh, that's floating point in vast quantities. And the astronomers are into much more of it now that they've been discovering exoplanets and they wonder how big they are, and physicists of course have all sorts of interesting theories they're trying to explore, and chemists are still trying to figure out how to put various molecules together, bigger and bigger proteins, and other things. So there's more and more of this scientific and engineering computation than there has ever been, but there's a lot more of the other stuff. Games, entertainment. Games use floating point a lot. They're simulating the movement of these avatars and other strange figures. They've got to figure out what you can see and what you can't see, how do things move, if you drop something it splashes. There has to be, well, a certain amount of artistic verisimilitude I guess you'd call it. It doesn't have to be a perfect simulation, but it has to look more or less like what you'd expect to see at least in a cartoon. And there's music videos, there's communications on a vast scale. Neural nets are now big consumers of floating point, but it's very short floating-point numbers. I don't think anybody knows what a neural net is doing, which I guess is why they're using it.

But the niceties of numerical computation are generally appreciated only by a small number of people who take graduate classes in numerical analysis, with one or two exceptions. One exception is the graduate class on big matrix computations. It's a math class, it's a graduate math class. Always heavily subscribed. It's taught by a former student of mine, Jim Demmel, now a colleague. He also teaches a CS class on parallel computation that is also very heavily subscribed. The people who work in graphics, they have a numerical analyst amongst them who specializes in meshes, and meshes are used for regenerating surface information. But the language people have other fish to fry, really. They aren't terribly interested in supporting numerical computation. Although some may say they are, they soon find out that there are other things that are in greater demand. The operating systems people have never been particularly interested in numerical analysis, but there are lots of operating systems challenges, especially in memory management issues, a lot of problems with memory hierarchies, what's going on in the cloud, and what have you got in the cloud up there, what is it?
And I could go on. The human interface people don't have a lot of demand for numerical stuff, and usually if they do, it's very simple. So computer science has proliferated. It's almost as if it's the Greater East Asia Co-Prosperity Sphere, if you remember that. Japanese. Yeah, an enormous empire. That's what computer science is becoming. And it frightens some people who feel as if whatever they were doing before is now being taken over by the computer nerds. And I try to persuade my colleagues, "For God's sake, don't frighten the customers."

Yes. So in the second half of the interview, we'll be looking specifically at your work as a consultant to HP and Intel, the IEEE standard, and the Turing Award. Is there anything else that you think you should particularly mention about your career at Berkeley? So you've talked about the origins of the department, you've talked a little about the changing role of numerical analysis.

Well, I've taught numerical analysis classes, but I've also taught classical analysis, complex variables, linear algebra. I introduced a discrete math class. I'm not really good at it, but I thought it was really important, chose a good text which continued to be used for a few years. I felt that combinatorics should be a compulsory part of the syllabus for a math major. It has become that now, although I think they promised me one year and then they reneged. It took a few years after that to finally get it going.

I helped students learn to solve problems, ostensibly practicing for the Putnam exam, but I specifically said, "You don't have to take the Putnam exam to take this class and you don't have to take this class to take the Putnam exam. But if you want to learn about how to solve problems, I'll try to help you learn that." Now my teaching style is very different from that of the folks who've taken it over. Well, it's been taken over by a couple of Russians, and they teach it as a class where they teach techniques. Now I don't do that, because when you give students lessons on problem solving, you could use a book like Pólya's book How to Solve It, a delightful little book, and there are others also on how to solve problems. What they do is they tell students certain principles that the students should follow, and of course students dutifully memorize those principles. But memorizing a principle is not the same thing as recognizing when it's applicable. What I would much rather do is have them struggle with various problems. I'd try to give some easy ones, every week some easier ones and some really hard ones, because there are a few guys who are such hot shots that the hard ones are the ones they need. And I try to... tried, when I was doing it, to cover the range of problems: algebra, geometry, a little bit of combinatorics, and problems off the wall.

Here's a problem off the wall if you want to hear one. Look, there are four ghostly galleons sailing on the sea so beset by fog that you can barely see from one side of the ship to the other. Each galleon has a steady course, steady in direction and speed, in order to reach its destination. One of the ships strikes another one amidships, and the captain of the ship struck amidships hears the captain of the other ship say, "Damn it, that's the second collision we've had this night." Then it happens again: the same ship is struck amidships. Now of course they're ghostly galleons, so they just pass through each other. They don't have any damage, you see. And it's the same: the guy on this galleon struck amidships hears the other one say, "Damnation, that's the second collision we've had this night."
What must this captain of the ship struck twice do in order to avoid being struck a third time? Well, I can explain it to you later. But you understand what I mean by an off-the-wall problem?

Yes.

Okay. Actually this is a problem in vector geometry. I got lots of them. I would scour The American Math Monthly and other places for a variety of problems, even theorems. Sometimes somebody would prove an interesting theorem. I would look at that and... You know, that would make an interesting problem for my class. I must have at least a couple hundred of them buried somewhere in some disk file with the solutions. The solutions are examples of exposition, mathematical exposition. My students who aren't accustomed to that... Look, some of them are inarticulate. They may have been born into an English-language home but they're still inarticulate. Some are not so hot at English because that's not their mother tongue. And one of the things they need is not just to solve the problem but to explain the solution, to persuade someone that "Yes, this is a solution." That was what I wanted them to practice, and I did this for over a dozen years.

And, well, we didn't often win anything much. There was one year when we had a team come in... I think it came in second. Beat Stanford. But we're competing against outfits that can cherry-pick as we cannot. We cannot pick which undergraduates will be admitted to the math major, and therefore I can't pick which students are going to want to come and learn about problem solving. Furthermore, and this is an unfortunate and important issue, some of the best students are so jealous of their grade point average that they will not take a course where they cannot see their way clear to getting an A, at least an A. Then there's problems with women. They don't come! I had, I think it was in 2005, I had a woman, a Romanian woman who had won European Olympiads, gotten her PhD in New York, came to Berkeley. It was sort of a postdoc. I had her help me with the Putnam exams one year and I made sure that she figured prominently, and she did play a prominent role. I thought that would attract women. Didn't. It attracted more boys, men, because she was very pretty. Then a year later or so, I took a sabbatical and left it in the hands of this woman and another, a Russian, who was a Morrey Professor (she's now permanently on the faculty), and I left it to them to set things up and to do it their way. And I thought, "Maybe this will attract women." No. Both of them are very pretty, so it attracted more boys. I don't know what to do about that. And it's not that women aren't bright enough. They're more than bright enough! They just don't believe it. Well, you asked me about something important. I've told you a couple of things that I think are important.

Okay. Great. Well, let's break for lunch there and then resume.

Okay.

[After a lunch break the interview continued with part 2.]

So having reviewed the earlier part of your career and talked in general about your time as a faculty member in Berkeley and Toronto, the second half of the interview will be looking specifically at your involvement in a number of high-profile projects. The first of these would be your consulting career, particularly with respect to Hewlett-Packard and its calculators, and the work on floating point for the Intel processor chips. So chronologically, I believe it would make sense to talk about Hewlett-Packard first.

Alright. I got into Hewlett-Packard because there was an advertisement by Texas Instruments of their calculator which made Hewlett-Packard's HP-45 look bad.
The exercise was the following. Type in your telephone number, including area code. That's a 10-digit number. Take its logarithm. Take its exponential. Do you get your telephone number back? Well, on the Texas Instruments machine you did and on the HP machine you didn't. And the guys at HP were worried about this. It cast what they regarded as unfortunate aspersions upon the integrity of their machine, their HP-45. I think it was Dick Sites who told them that I knew enough about floating point I might be able to help them with that. I can't remember who got them to get me, because all I got was an invitation from somebody at HP to come down.

And I did, and I showed them something interesting. I said, "Well, here's the TI machine. Look, I put in the telephone number for a telephone number in North Carolina. 919 area code. Logarithm, exponential. Oh, there it is back again. Logarithm, exponential. There it is, back again." And the seventh time, "Logarithm, exponential. Ah, the telephone number changed. What does that tell you is going on?" Apparently there are more digits to the right of the 10 than you can see here. That's what was going on. The TI calculator had sloppy 13-digit arithmetic, which seemed to be better than HP's more careful 10-digit arithmetic, and their arithmetic wasn't as careful as it could be. But this was an intrinsic problem. You're going to get into this type of problem no matter how many digits you carry. It's just a matter of looking at all the digits in order to see that somehow you are able to reproduce log and exponential.

Okay. Was this 1974?

I believe so. "However," I said, "there is a way to get better results and to get rid of anomalies, and that is carry 13 digits internally for every operation. You'll supply 10-digit operands, but you do the arithmetic to 13 and then you round it back to 10, and you'll find that that has the least anomalous behaviour," whereas on the TI calculator I could show them all sorts of strange things. Even though you're only looking at 10 digits, I could show them some very strange things were happening, and that's because the TI calculator didn't carry what we call a guard digit. It wasn't very careful about its transcendental functions, and so you'd have anomalous behaviour. On a programmable machine, the anomalous behaviour can be really, really bad news because the programmer has to assume something about the arithmetic, particularly when it comes to branches. If his assumptions about the arithmetic, based perhaps on intuition, maybe on an inadequate education... if his assumptions are wrong, his program may go wrong. So I really wanted them to improve the quality of the arithmetic, starting with the trig functions and transcendentals like log and exponential.

And they did it. Dennis Harms, he was the chief mathematician in the calculator group. He'd gotten a PhD in math at Cedar Rapids I think. So he implemented this. It didn't take him very long, and he demonstrated that yes, the arithmetic did seem to be much cleaner. Yes. They showed me a calculator. I think it was the HP-27, a little shirt-pocket machine about so big. I saw that yes, the trig functions and everything behaved better, but I said, "But what are these buttons here? N, I, PV, FV, and PMT. Five top-row keys. What do they do?" "Oh, you don't have to worry about that." So I worried about it and I discovered that they were the financial keys. Then I thought about it and I said, "How can they do that? That's not easy."
It's not easy to compute the interest rate; it's not easy to compute things that involve what amounts to exponentials and logarithms and so on in order to get nice, tidy formulas. So I found an instance where instead of delivering 10 figures they were delivering only about six. And I phoned up my contact at HP and he said, "Well, six will be enough." I said, "Yeah, but if I have an example where you just get six, there will be another example where you won't get that many," and I found one where now you had only two or three digits. So now he got worried. Then I said, "And here's what you do in order to avoid this problem," and I showed him that now that they were carrying 13 digits in some other computations, they could exploit that in order to make the financial calculations really clean. So I showed him the code and he tried it, and the next thing I knew I was invited to a meeting of Hewlett-Packard people including David Packard.

The problem was that they had a warehouse full of these HP-27s, and now that they knew that there were defects, what should they do? Well, they could send them out and tell people, "If you don't like it, send it back and we'll fix it or replace it," or we can just simply take all the ones in the warehouse, unsolder the ROM and put in a new ROM. And to David Packard's credit, he said, "We're not going to sell something which we know has a repairable defect." That and other comments made me think extremely highly of him. He seemed... Well, our word for it in Yiddish is a mensch, a real person. So they fixed it and then the HP-27 was able to do financial calculations as reliably as any of the other things. But that led to work on a bigger financial calculator. It's sort of a desk machine like this with a big display. That was the HP-92.

Let me just ask you a general question before you talk about that. So to what extent would these anomalies that you've talked about be connected with the feature that generally computers are holding numbers internally in binary but that the arithmetic that humans are concerned with is decimal?

I think you've got to put your question again because I'm not sure I understand it.

Sure. I mean you've spoken about a couple of specific anomalies and you've talked about a number more, and I'm sure you could list many, many, many, like a defect here, a defect there, or a workaround, a way to get around it. Now ENIAC and machine calculators had decimal arithmetic in the hardware. Humans are concerned with numbers in base 10 primarily, but the internal hardware is doing everything in binary. So are a significant proportion of these problems exacerbated or caused by the difference between the binary arithmetic and the decimal?

Yes. There is a serious problem for someone who is unaware or has forgotten that a machine's arithmetic is binary, or hexadecimal for that matter, but what he sees displayed and printed is in decimal. So what you see is not quite what you've got, and when you try to put a number in, unless it's a moderate-size integer or a few others, what you put in is not what gets there. Not quite. And that causes distress on occasion because you may know what the correct result would be if the number you put in the machine was the one that was really there, or if the one you saw being acted on by the machine...

Because you display an intermediate value, but of course it's displayed in decimal. It's really in binary, and it goes in and something else comes out? Weird.
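As a small illustration of the point just made, that the decimal numbers you type and see are generally not the binary numbers the machine actually holds, here is a brief C example; it is mine, not something from the interview:

    #include <stdio.h>

    int main(void) {
        double x = 0.1;               /* what you typed, in decimal */
        /* what the binary hardware actually stored, shown to 20 places */
        printf("%.20f\n", x);         /* 0.10000000000000000555... */

        /* ten steps of 0.1 do not land exactly on 1.0 */
        double s = 0.0;
        for (int i = 0; i < 10; i++) s += 0.1;
        printf("%.20f\n", s);         /* 0.99999999999999988898... */
        printf("%d\n", s == 1.0);     /* prints 0: not equal */
        return 0;
    }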
This is prevalent in spreadsheets, and Microsoft for many years, and still for all I know, has tried to cover up the fact that the Excel spreadsheet is doing binary arithmetic though everything is displayed in decimal. That is a serious mistake, because whatever way you choose to cover up this fact to make it appear as if it's decimal will simply exacerbate the anomalies later. They'll become quite inexplicable. And I've written about that on my webpage: errors designed not to be corrected.

I believe that decimal arithmetic is more humane. We have, you and I, already discussed some of that. The new IEEE standard, new as of 2008, merged binary and decimal arithmetic together so that many of the rules and so on apply to both, but the decimal arithmetic was not what you think. It was a very subtle concoction. We can go into that later. I believe decimal arithmetic is what ought to supplant binary arithmetic for all casual computations and many that aren't so casual, because what you see is what you get. It's going to be easier for people to understand what's going on. Even if what's going on is anomalous, at least they can see the numbers and they really are the numbers instead of phantoms of some sort. For example, MATLAB prints numbers that look like integers. If the numbers are spaced close together, they really are integers. If they're spaced far apart, then they aren't the integers you think. They're just floating-point numbers that happen to be awfully close to the integers and they got rounded to those integers. That's a fairly subtle distinction, you see. I mean, how often would you notice it? And it can be very annoying too, because again what you see is not what you've got, and sometimes the difference matters. However, I think that for the largest large-scale computations in science and engineering, people won't really care if it's binary. That will be the least of their worries.

Okay. So, sorry, returning to the Hewlett-Packard machines, so you mentioned then they were working on a large desk-sized calculator.

They were working on, I think it was, an HP-9000 machine that was about so big. It was a programmable decimal machine with a reasonable amount of memory, and I was lent one of those in order to develop some of the algorithms that were used later. I think my work with HP, which continued when they moved the calculator division from Cupertino up to Corvallis, Oregon, that work was very fruitful. I enjoyed the people with whom I was working.

And there were some interesting stories. One interesting story arose because the subdivision of the calculator division, the subdivision with whom I worked which made small calculators, little handheld shirt-pocket calculators, was told that they had incurred enormous losses up there in Corvallis. Enormous losses, but their calculators were the ones that were selling, and the other guys were behind or they're on the fab line, they don't sell directly, and so they aren't making any money. "We're the ones who are making money. How could we be the ones incurring enormous losses?" Well, after a while the accountants figured it out. You see, they were charging every subdivision pro rata to their sales. So if the other subdivisions didn't have any sales, that meant that these guys in the small calculator division were bearing all the expenses, including the fab lines and development of this machine and that one. Then of course they realized that these guys were the ones who were keeping the lights on.
So when I went up there one day, they told me that this is what happened, that they were going to have a celebration. They were having a picnic outside. They have a spacious lawn there. It was lovely green. Beautiful day. And I thought I'd bother the project manager, Stan Mintz, and I wanted to get a "solve" key on the calculator. I wanted to arrange that if it's a programmable calculator and you can program the left-hand side of an equation, that then I'd like the calculator to search for the root, and I had an algorithm for doing it which I thought would converge at an adequate speed and a way of deciding when to quit and so on. That's what I wanted to put into a calculator. Stan had resisted me because he'd said, "I've asked this of the marketing people, and they say no one has ever asked for a solve key." Of course they wouldn't ask for a solve key. It wouldn't have dawned on them perhaps that they could have it.

But finally we were at this picnic and Stan spiked the punch, which is very much contrary to HP rules. I'm teetotal, so it didn't affect me, but Stan got jolly. He said, "I tell you what, Professor Kahan. I'll let you have your solve key if you can provide me with an integrate key. I was never able to integrate when I went to college. Give me an integrate key and you can have a solve key too." Well, how could I resist that challenge? So I did come up with an "integrate" key that would fit in the space. The space available was very small. Just a handful of registers were available for scratch. And Dennis Harms helped me work on it and microcode it. So we got it working, and then the "solve" key, and they got put into the HP-34C. And I have one, but its battery is dead so I'm not going to bring it out.

Well, the thing was though that the HP-34C's "solve" and "integrate" keys were now interesting challenges, since things could go wrong. Some equations don't have solutions. Some equations give you the wrong solution because when you compute the left-hand side of the equation, roundoff gives you junk. And integrals can be even more interesting. So I wanted the guys who write the manuals to include this kind of information in the manual so it would help people use these very powerful keys correctly. But HP had a policy, and the policy was "We don't put tutorial material in our manuals because we're dealing with professionals like us. We know what we're doing, they know what they're doing. And besides, Professor Kahan, are you telling us that you've persuaded us to put something in a calculator that can get wrong answers?" Well, I seduced the marketing guys and they put in an extra chapter in the HP-34C manual, and then their manager said, "Hey, I told you not to put that in. Take it out." And they said, "Well, if we take it out, we're going to delay the completion of the manual, and that will delay getting the calculator on the market." So there he was, blackmailed in effect. He had to leave the stuff in, and they won a prize at the Willamette Valley technical writers' association for technical manuals, partly because of these chapters. And when the salespeople did surveys, they found that several of their customers said, "I bought the calculator because those chapters tell me how to use it and I can use it more easily than I can on mainframes." So we were vindicated.

In the meantime, the financial calculators were getting complicated, and that's because of something called internal rate of return. I've got one of the financial calculators here. Yes. It has an IRR key.
Now internal rate of return is a fictional interest rate, but it explains a transaction where you've got a certain number of cash flows. Say an initial cash flow goes into a bank and then you get some cash flows out, and maybe you'll have to put more cash in sometime, and finally you get all the cash out and you exhaust the account. And the question is, when you look at all these cash flows back and forth, what fictional interest rate would explain these sums of money? Because what you really intend to do is not put the money in the bank; you intend to invest it in some project, in some product. It will cost you so much to develop the product, then you'll start selling them. Oh, and then there's going to be taxes down there of some sort, and maybe that's going to be an outflow. So once you've got this schedule for the expenditures, which may be the first 10 cash flows, the cash going out to build it, and then there's an income and you expect to sell it for a certain period of time, and now what we want to know is, well, what rate of interest are we making on our investment? That is the internal rate of return.

It requires that you solve an equation which can be fairly complicated. In fact, under some circumstances, you could have a polynomial with degree in the tens of thousands to solve. But the calculator does only so many arithmetic operations per second, the slow ones are seven arithmetic operations per second, and you'd really like to get the answer in 20 or 30 seconds because most people won't wait any longer. That was a challenge, and I found a way to do it. Actually it's written up in my lecture notes on real roots of equations. But at that time, people hadn't thought about that. That made a lot of the difference.

Now we had a calculator, a financial calculator. Not only was it accurate, the errors confined to one unit in the last digit displayed. Not only is it accurate, but it solves the internal rate of return problem. And it solves these problems, including some from mortgages and, oh, various kinds of investments, discounts, it solves them accurately and quickly, whereas on the then-competing Texas Instruments Business Analyst, you could put in the data and the calculator was supposed to solve a problem, but it would go catatonic, or it would run, run, run, run, run, never stop, or it would give you a junk answer. And that wouldn't happen on my program. So I recall I thought I'd proved it. So we really had a superior beast. That was the HP-38C. That was the first one. Then this guy, the HP-12C. And having produced these, they found them very well adopted and people really liked them.

What HP liked best of all was that every now and then they'd get a phone call from someone who'd say, "Look, I've got this calculator and it's giving me an answer different from the one in the tables." "Oh dear," the telephone answerer would say, "we'll have to look into that. Can I have your number and we'll get back to you?" So what they would do would be to run the same calculation on their big HP 3000 and check as best they could, and they'd always find that the calculator was right, the tables were wrong. After a while, they had a list of tables that were wrong. After a while, the telephone answerer would say, "Oh yes, we know about that one. That table is mistaken. The number should be such-and-such instead of whatever you see in there." And that seemed to invite a great deal of confidence in this little machine.

So here I've got... This one here is a prototype. This is one of the... Oh, this is a prototype. It's one of the originals.
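As an illustration of the internal-rate-of-return problem described above: the IRR is an interest rate r at which the net present value of the cash flows comes out to zero, so it has to be found as a root of an equation. The sketch below, in C, uses plain bisection and made-up cash flows; it only illustrates the idea and is not the algorithm Kahan implemented for the HP calculators:

    #include <stdio.h>

    /* Net present value of cash flows c[0..n-1] at periodic rate r:
       NPV(r) = sum of c[i] / (1+r)^i.  The IRR is a root of NPV(r) = 0. */
    static double npv(const double *c, int n, double r) {
        double v = 0.0, d = 1.0;
        for (int i = 0; i < n; i++) {
            v += c[i] / d;
            d *= 1.0 + r;
        }
        return v;
    }

    /* Bisection on [lo, hi]; assumes NPV changes sign on the interval. */
    static double irr(const double *c, int n, double lo, double hi) {
        double flo = npv(c, n, lo);
        for (int k = 0; k < 200; k++) {
            double mid = 0.5 * (lo + hi);
            double fmid = npv(c, n, mid);
            if ((flo < 0) == (fmid < 0)) { lo = mid; flo = fmid; }
            else                         { hi = mid; }
        }
        return 0.5 * (lo + hi);
    }

    int main(void) {
        /* Spend 1000 now, receive 400 at the end of each of 4 periods. */
        double c[] = { -1000, 400, 400, 400, 400 };
        printf("IRR per period: %.6f\n", irr(c, 5, 0.0, 1.0));  /* about 0.2186 */
        return 0;
    }

A search this crude would likely have been far too slow within the seven-operations-per-second budget Kahan mentions, which is part of what made the calculator problem hard.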
It survived a certain amount of testing and then they gave it to me. In a box, I've got the 25th anniversary version. And here is the 30th anniversary version. That was in 2012. Where do you see an appliance, an electric or an electronic appliance, on the market for 30 years? And so far as I know, it's still on the market.

Well, the success of that calculator made my next demand sound more credible. I wanted a calculator that had not just the "solve" and "integrate" keys. I wanted to have complex variables and small matrices, so that it would have everything for sophomore engineering students except divs, grads, and curls. Divs, grads, and curls require a larger display. Can't do it on that little window. The fact that you can do matrices on that little window, that's a little bit remarkable to begin with, but of course I represent matrices simply by their name: it's matrix A, B, C, D, or E. And if you want to know what the elements are, then there's a walkthrough key, which allows you to walk through it and see what the elements are, and the same thing for putting the elements in. But normally you don't want to look at the elements in the matrix. Normally you just want to do the matrix algebra until you get a final result and then you want to walk through it. And I got them to build it. That was this one here. That's the 15C. It really is my favourite.

Now in order to get them to build it, I had to make an estimate of a market. So what I did was go to our library here, and I looked up all the colleges I could think of that had engineering schools, and I looked up their enrolments, and I figured that about half of the incoming class each year would want to buy this calculator. Why? Because when I used the HP-34C, the one with just "solve" and "integrate", I would plant myself in the library or I would plant myself out in Sproul Plaza at a table and I would pretend to be working with this calculator. Students would come up and ask what I was doing, and if they were engineering students, they'd say, "Oh, and what does that do?" and I'd show them. And half of them would go to the student bookstore just on the other side of the plaza and buy one with their own money, not waiting for a doting aunt or someone like that to buy one for them. So I figured half the students are going to buy one.

And would they have cost several hundred dollars?

At the time, it would have cost about $150, but the price came down rapidly. It came down to rather less than a hundred. I think it was $70. Well, I think the financial one costs $70, or it's priced at $70. Anyway, the marketing people wouldn't believe me. They figured that they could sell only a third of the number that I quoted, on the basis of their experience with calculators. So they told Dennis Harms... Now Dennis Harms was in charge of mechanizing the production line for this range of calculators. Their little shirt-pocket calculators are put together by robots. Their subassemblies are tested by robots. There's 20 seconds of hand labour between the stuff coming in on the input dock, the various parts and so on, and the cartons going out on the going-out dock. Twenty seconds of hand labour on average per calculator. The rest is all done by robots. And what Dennis did was set up the production line so it would produce about half again as many as these guys wanted, which turned out to be about half of what I wanted, and they were selling by word of mouth alone. There has never been an advertisement in a Western language for the HP-15C, but they were always sold out. There were waiting lists for them.
I think there was an advertisement in Japanese, but not in Danish, German, English, French, Spanish, none of those. Well, I think I have to say that there should not be an expectation that these things would survive all that long, because after a while you get the tablets, the laptops, the PCs and so on, and it's not impossible to reverse engineer these things so that you can have all the benefits of the calculator and more on a big screen. So I didn't expect the things to last forever. But the 12C seems still to be soldiering on. I don't quite understand it. And about three years ago, or maybe a bit more, there was a demand for... I think it was 150,000 people asked by email for the 15C to be reissued. I think what they did do was issue 15,000 of them and they all got snapped up. And I was looking on eBay for something else and I came across an advertisement, someone wants to sell a 15C for $500. Well, I'm not proud of a price, but I was proud of the calculator. MIT I think in one year if not two arranged to have one shipped in to be purchased by every incoming engineering student. And it satisfied my desire, my fervent desire, to allow professors to offer problems to students that were at the same time more realistic and less tedious. I thought that would make for a better educational experience. Well, now of course they can do it on PCs. But now they can look up the answer on the web. So problems don't serve the same role as they used to.

So your role as a consultant to HP was from 197-...

I'm sorry, I can't hear you.

According to our notes, you were with Hewlett-Packard consulting from 1974 to 1984.

Yeah, that sounds about right.

And that would overlap with the work as a consultant to Intel, which you began in 1976.

Yes, I think that's right. Now that had an interesting origin also. While I was at Stanford in 1966, I gave some classes on floating-point computation, including error analysis and a little bit about how it was done. One of the students there was a graduate student actually, Gene Golub's graduate student, John Palmer. He got his PhD there but then went to work for Intel. And when they were developing the 8086, 8088, and the Intel 432, if I remember the number correctly, they wanted to get floating point on these things, and I guess Palmer persuaded them that perhaps it would be an idea for every box with Intel on the outside to have the same floating point on the inside. He remembered the classes I'd given and so he brought me in and said he'd like to have a really nice floating-point design. I said, "Well, if you mimic IBM's floating point, you're going to have a very big market." "No," he said, "we want a good floating point." The story about IBM's floating point is interesting too, but that's from the 1960s, late '60s. "Alright," I said, "why not copy the DEC VAX floating point? As binary floating points go, that's among the best." And he said, "We want the very best." "For whom?" He said, "For a mass market." "How big a mass market?" He couldn't say, but he wanted me to understand that this was for a really, really big market. That's what they had in mind.

I had to think about this, because you see, up to that time, if a computer was used a lot for scientific and engineering computation, it normally had a numerical analyst attached, very much in the spirit of the Asian elephants where they tie a little string around the elephant's leg and tie it to a stake in the ground. The elephant could pull it all out easily but doesn't. You can think of the numerical analyst as attached to the computer in the same way.
But if they were going to have a mass market, there aren't that many numerical analysts. So I had to think about what would work not only for the specialist technicians, the numerical experts, but also for this mass market. And that is what induced me to design the 8087 arithmetic the way I did. Now I had to do it with a stack because they didn't have opcodes in abundance. So the stack architecture was a little bit of a hindrance, but I got them to include the ability to reach down into the stack for an operand to combine it with the top, and that made it a lot easier. I had lots of experience with that kind of stack and I knew that it would function well.

Now we maybe should just introduce the concept of a coprocessor.

Say again.

We should maybe just introduce the concept of a coprocessor at this point, since...

The coprocessor?

...yes, not everyone may be familiar with that.

Ah, right. Well, of course nowadays floating point is on chip with the main processor. In fact, you might even get one per, umm...

Core.

...core. Thank you. But at that time, the density of transistors made it seem miraculous that you could get the CPU on the chip at all. So if you wanted floating point, that would have to be a separate computing engine, and you'd plug it into a socket near the CPU and there would be conventions which would say, "If you see an instruction of a certain kind, you'll know that it's a floating-point instruction instead of an ordinary instruction." And since the coprocessor and the main processor are both reading the same instruction stream, the coprocessor knows when it's time to execute one of its own instructions. However, the address computations have to be done by the main processor, and so the main processor has to squirt the final address over to the coprocessor. So there has to be enough wires between them.

Yes. And of course minicomputers would have processors spread over many boards.

Say again.

Minicomputers would have processors spread over many different boards.

Oh yes.

With microprocessors, they could... As you say, managing to squeeze the whole of the processor on a chip, even only with integer arithmetic, was a real accomplishment.

Well, even on the minis, on the PDP-11 for example, yes, the processor took up a board or two, but the floating point was also on a board. You could buy that board. And in order to exercise the board, you had to tell the operating system to activate it and also to say whether you wanted to do float, four bytes wide, or a double, eight bytes wide. This was a nuisance because an operating system call would take longer than the floating-point operation. So Kernighan and Ritchie decided that they would do everything in double, save the time for conversation. And if your variable was a float, that's okay. You just pad it with zero so it becomes a double. And if you want to store something as a float, you just throw away the bottom half and store the leading half. That turned out to be advantageous, strangely enough. They used to get better results doing arithmetic on data that was all floats. But it was geometrical computations, some of which are a little bit dicey, especially for games and scenery. And they got better results than they would have gotten on the mainframes of the time working on the same data. But then the C language took off, and CDC and Cray got annoyed because their single precision was as wide, very nearly as wide, as everyone else's double, and their double had to be done in software and it was slow.
So they asked for permission to do FORTRAN-ish evaluation. If you've got an expression all on floats, you just evaluate everything in floats, whereas Kernighan and Ritchie would have said, "If you've got an expression with floating-point variables, it will all be evaluated in double. Whether they're floats or doubles, it doesn't matter." In fact, even integers, if they got mixed in with floats, then they'd get evaluated in double. So there was a standards committee. I guess it's for... I can't remember the organization. J-something-11. It wasn't the IEEE. I think it was the ISO. But there was a standards committee that said, "Okay, in C, you can use FORTRAN-ish evaluation," and then everybody did because they could use the same rear end for C as for FORTRAN. And then these beautiful results went away.

Back to the 808...

HP?

...the 8087. Or 8087?

Yes.

So I just looked. The 8086 itself apparently had 20,000 active transistors, and I understand you had 40,000 in...

Well, there were 20,000 in the ROM, and what's more, of those 20,000, the Israelis figured out how to pack two bits per transistor instead of just one by using very, very artful and delicate sense circuitry. So they managed to get all the microcode in, not only for add, subtract, multiply, divide, square root, assistance for decimal-binary conversion, and log (essentially it was really log(1+x)), exponential (the exp(x) - 1), tan, and arctan. And these kernels, as I call them, would make it easy for software to compute all the other transcendental functions: sin, cos, the hyperbolic functions, the inverse hyperbolics like arcsinh, arccos, and so on. You could have a math library. And there was a math library written by a friend of mine at Intel which was pretty good. Of course some compiler writers insisted on using their own library, much to the detriment of reliability and repeatability. But the library was going to serve the third floating-point format. The 8087 chip could bring in operands that were integers or four-byte floating-point numbers, or eight-byte, or 10-byte. And the stack operated on floating-point numbers 10 bytes wide with 64 significant bits. This was wider than float or double, but it was based on longstanding knowledge. They used to have a scratchpad with extra digits, so that for every operation that you would like to appear atomic, you can implement it in software in the scratchpad, round it back to the working precision, and out it goes and it looks beautiful. So for example in MATLAB, ideally when you multiply two matrices together, you'd like to accumulate the scalar products for each element with extra precision. Then you'd round it back to the ordinary precision and you would get a much better matrix product, especially in the circumstances where otherwise a lot of cancellation would occur. That was the design and I said to John, "It's 10 bytes wide now only because when I asked your engineers about the longest carry propagation they could tolerate within a clock cycle, they said 67 bits. Well, we need a guard, round, and sticky, and that leaves 64. And that's what determined the width of the word. But," I said, "if demand grows for wider precision, this can grow to a 12-byte word and a 16-byte word." All you want to do is make sure the programmers are aware that when you're programming with this extra width (we called it an extended format), all you can rely upon is that you have at least a certain number of significant bits, at least a certain exponent range.
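[A minimal C sketch of the wide-scratchpad idea described here: accumulate a dot product in C's long double, which is the x87 extended format on most x86 compilers (an assumption that varies by platform), and round back to double only at the end.]

    /* Accumulate a dot product in long double and round to double once at the
       end.  On x87-style long double (64-bit significand) the intermediate sum
       1e16 + 1 is held exactly, so the cancellation at the end still leaves 1;
       in pure double arithmetic the 1 would already have been rounded away. */
    #include <stdio.h>
    #include <float.h>

    double dot(const double *x, const double *y, int n) {
        long double acc = 0.0L;               /* the wide scratchpad           */
        for (int i = 0; i < n; i++)
            acc += (long double)x[i] * y[i];  /* products and sums kept wide   */
        return (double)acc;                   /* one rounding at the very end  */
    }

    int main(void) {
        double x[] = { 1e16, 1.0, -1e16 };
        double y[] = { 1.0,  1.0,  1.0  };
        printf("dot = %g (long double has %d significant bits)\n",
               dot(x, y, 3), LDBL_MANT_DIG);
        return 0;
    }

[On a machine whose long double is no wider than double, the printed answer would be 0 instead of 1, which is the sort of portability caveat the environmental inquiry mentioned next is meant to address.]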
And if you really want to know how many bits you've got, you do what we call an environmental inquiry, which was very easy on this machine, and it tells you how many significant bits you've got. Later, if they want to put in a chip with a full 16-byte-wide quadruple-precision word, that's okay, because your environmental inquiry will tell you that's how many bits you've got: 113. So I thought that this would be a design that would last into the future, and it would make it possible to implement math libraries extremely reliably. You had these extra digits. Like on the HP calculator, you got a 13-digit scratchpad in order to generate digits, of which 10 are actually going to be displayed and stored. The same principle applied here.

So basically what you're saying is that the chip would not only allow enormous speed-up compared to doing things in software, but also, basically by building best practices into the hardware, it would make the software much simpler and more accurate.

Simpler and more accurate, yes. You could afford to use an algorithm that might lose eight bits. But that's okay, you've got 11 extra bits. Lose 8? Who cares! When you finally deliver the result, it will be a nice double-precision result, correct within a fraction of a unit in the last place. Yes.

So you're not just designing a piece of hardware. You're also very much bearing in mind exactly how it would be programmed.

Yes. And I insisted upon flags. That meant that if an exception occurred, the hardware would not trap it; it would give you a default result, but it would raise a flag, and so you could test the flag at your convenience. You see, in Java, where you're not allowed to test flags, if you want to know that something bad has happened, you've got to test after every potentially aberrant arithmetic operation. Who's got the patience for that? It turns your code into spaghetti. But if you've got the flags, you can speculate. Let's do this operation. There's a chance of one in a million that it won't work. At the end I'll test the flag, and if it didn't work, okay, then I'll say, "Sorry about that. We'll go back up here, but now we're going to do the job in a different way, which I hope will be more robust but is somewhat slower. Sorry about that." And this is a sane way to do it. It corresponds to throw-catch-finally, but without the interference of the operating system. It means everything can be handled in the math library, and that includes the possibility that if you divide zero by zero, you don't want to get a NaN. Maybe you know that this occurs only because the correct result would be 13 if you took limits. Okay. What I want you to do is to be able to presubstitute and tell the system, "In this block of code, if you encounter a zero over zero, call it 13." Or call it Q. We'll store the address of Q instead of the NaN, and when somebody gets a zero over zero (it doesn't happen very often), then a trap will occur. It won't go to the operating system; it will go to the math library. The math library will look into a table: "Oh, what does he want? Oh, he's given me this address. Okay, I'll fetch from that address and that's the result I'll give him." Okay.

So the advantage of that would be the operating system really is not going to know what to do with it and the math library would?

Exactly. Unless of course the programmer has made no provision, and it might be that he enables traps and the traps have nowhere to go but up into the operating system. Well, that's bad. But programmers can do that anyway and I can't stop them, and won't. But now they have a better option.
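[A minimal C99 sketch of the speculate-then-test-the-flag pattern described here, using the standard <fenv.h> flags. The functions fast_norm and robust_norm are made-up stand-ins for the quick attempt and the slower, more robust retry.]

    /* Try the cheap formula first; test the exception flags once at the end,
       and only redo the work robustly if a flag was raised. */
    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    /* fast, but the squares can overflow or underflow */
    static double fast_norm(double x, double y)   { return sqrt(x * x + y * y); }
    /* slower, but scales internally so the squares cannot overflow */
    static double robust_norm(double x, double y) { return hypot(x, y); }

    double norm(double x, double y) {
        feclearexcept(FE_OVERFLOW | FE_UNDERFLOW | FE_INVALID);
        double r = fast_norm(x, y);            /* speculate: usually fine         */
        if (fetestexcept(FE_OVERFLOW | FE_UNDERFLOW | FE_INVALID)) {
            feclearexcept(FE_ALL_EXCEPT);
            r = robust_norm(x, y);             /* one test at the end, then retry */
        }
        return r;
    }

    int main(void) {
        printf("%g\n", norm(3.0, 4.0));        /* fast path suffices: 5           */
        printf("%g\n", norm(1e200, 1e200));    /* squares overflow; robust path   */
        return 0;
    }

[The point is that there is one test per block rather than one test after every arithmetic operation.]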
Also, the flag should ideally serve for retrospective diagnostics, and that means once the flag is raised, it should point by hashing to a place (now this would be maintained by the operating system if it's requested), to a place in your program keyed to the source code, so you know, "In my source code at line so-and-so, this event occurred." Wow! That gives you an enormous advantage in trying to figure out what went wrong. And maybe it didn't go wrong. Maybe it's okay. But that says you can look there and say, "Oh, is that what I did? Oh, that's okay. That'll work."

That also would require the... well, I suppose the hardware, the math library, and the operating system, the compiler, and the programmer all to work together in a particular kind of way.

Yes. Well, wouldn't it be nice if they collaborated? But you see our programs are getting hideously complicated. Scientific and engineering computations have burst the bounds of the most insane ambition of 20 years ago. And so now they're so big and so complicated that we can't debug them. I've written... it's called Boulder.pdf. Boulder, Colorado is where I presented this, so Boulder with a capital B, .pdf, on my webpage. It explains what I think we have to do in order that programs, well, floating-point programs, be debuggable. Now look, parallel programming has its own long list of ways for things to go wrong, and floating point isn't going to make that any better. But the floating-point exceptions are peculiar because there's so little you can do about them. They're very local. All you have to do to cope with a floating-point exception is look to see: What were the operands? What was the operation? Where is the destination? Then you go to the math library with that information and the math library can have things pre-set, like what to presubstitute. Or it can have something that says, "I'm going to generate a message." Or it can say, "Well, I'm going to abandon that computation. Let's jump somewhere else," that's the throw-catch-finally. All these options are available to you, but the most important one is that you don't have to test frequently, because those tests are murder. You see, computers look ahead in the instruction stream. They have an instruction pipeline and what they do is they first disassemble the instruction, the opcode, the address field and so on. Then they figure out, "Is this really the address we want or do I have to modify it and upgrade it?" And so on and so forth until finally they've got to the point where they know what they want to do and they know what they want to do it with, but it's taken several pipeline stages, you see, and they're ready to go at an issue rate for instructions which is essentially one per cycle. But if you've got branches, you can't do that, because you've got to have some idea of where you're going to go. After you branch, you're going to have to abandon the things that you've deciphered. And there's such a thing as speculative execution: let's go down both branches, and then at some point we'll figure out that we can abandon what we did on one of these branches. Well, that requires extra resources that could otherwise have been used for concurrency. I don't have to go into detail to tell you that tests and branches are not only going to rob you of performance. They're going to turn your code into spaghetti, and it's very, very hard to read and to understand. It's much better to have a statement at the beginning of a block which says, "In this ambience, understand these conventions. Understand that zero over zero makes 13, or infinity minus infinity makes zero." Or whatever it is you want.
Then as you read the code, you can check as a mathematician, "Oh, what happens in the special cases?" instead of having to branch somewhere to find out what happens in the special cases, in which case you may have a compounding of special cases and things can get very hairy. So that's what I wanted, and I didn't get it. I almost got it. I came pretty close, but no cigar. I'm still fighting with the language community. They seem... I guess the only word I can think of is refractory to this. They'd really rather do something else with their time. Because you see, you said it, there has to be collaboration. It's not just the programmer. The programming language has to afford these capabilities. It has to be tied in with the debugger so they can exercise appropriate options. It means that the math library has to be ready to handle things that otherwise the operating system would have had to handle. And the operating system has to let the math library handle it instead of insisting that every trap is going to go into the operating system, because they're worried who knows what insecurity you might create.

And lots of what you've said I think applies also to the floating-point standard that we'll be getting to. So let's backtrack ourselves in time. We've been talking about the 8087. Now clearly squeezing...

Well, the 8087 morphed into the 387, which morphed into the 486, and then came Pentium.

Yes, and the 486 integrated it onto the main processor.

You know, it was a crock. You could buy a 486DX that had floating point on it. You could buy a 486 that didn't have floating point. Actually it did, but it was disabled. Then if you had a 486 with the floating point disabled, you could buy a 486-something-or-other that had floating point on it, except it didn't; all it did was enable it on the main chip. Talk about marketing. Well, I don't want to spout a dirty word.

So we said the complexity... on the one hand squeezing all that onto the coprocessor required enormous ingenuity, but on the other hand the coprocessor was still approximately twice as complicated as the main processor. What kind of uptake did the coprocessors find? How widely instituted?

Oh, that's another terrific story. You see, Intel struck a devil's deal with Microsoft. The deal was that Microsoft would support this coprocessor, the 8087, in I think it was five languages. Let me see if I remember them. They were BASIC, Pascal, C, FORTRAN, COBOL. And Microsoft said, "Okay, we'll do it. But you, Intel, must not sell your own compilers in competition with ours," and Intel agreed to that. But Microsoft reneged. They did produce a BASIC and FORTRAN and C. I don't remember a Pascal and I know they didn't do a COBOL. Furthermore, they didn't really support the coprocessor. I know that because I went up with a delegation from Intel to Redmond in 1982, and what we wanted to do was to help them understand what they needed in their compilers to support the coprocessor. And starting at nine o'clock in the morning, there were the Intel bunch and me on one side of the table, and there were the Microsoft bunch on the other side of the table, and we were compatible. You know, we were talking to guys like us. Everything was going along swimmingly until a boss came in. Bill Gates comes into the room, and, you know, "Hello," etc., and says, "Look, here's this PC on the table. You see that socket for the 8087? Almost none of them are going to be filled," he said, "and so it really isn't worth going too far out of our way to support it." Ah, you can imagine the cloud that cast over us. What Microsoft did then was the following.
Microsoft had its own floating-point software before this. The way you got it was the way you call any other subprogram: you push the operands on the system stack, call this subprogram, the subprogram operates on these things, leaves the result on the top of the system stack, and backs up. What they were going to do was to change their subprogram so that instead of the subprogram doing it in software, they would simply invoke the chip, which means they had to then transfer the operands (remember, they've got the addresses on the software side); they have to transfer the operands by fetching the operands for this address into some little buffer area and then moving them onto the chip. You then do the operation and then you go and reverse the process. You never actually use most of the registers on the chip. That's what they did initially and it crippled the chip. It meant that instead of getting a speed-up by factors that could be as big as a hundred from the chip, you're getting speed-up by maybe factors as big as four. That was the worst of it of course, because you know performance is everything. But also they didn't support all three formats. They only supported the floats and the doubles, not the extended format. So it meant that the programmers in various languages didn't have access to a scratchpad where they could get extra precision in case they needed it.

Well, Borland did it differently. Borland produced a spreadsheet called Quattro, and the arithmetic for that was done by Roger Schlafly, the son of Phyllis Schlafly. Phyllis Schlafly's a well-known conservative nut. Roger wasn't always proud of her here in Berkeley, but anyway, he was the one who worked for Borland and he said, "I want to be able to use the widest format on a spreadsheet, because it's got almost 19 significant decimal digits worth of precision and on the spreadsheet that's a good thing to have." So Borland's C supported all three formats and Roger wrote the Quattro code in Borland's C. Which was just lovely except for one thing. Somebody figured out how Quattro could mimic Lotus 1-2-3, mimic it so well that if you had these macros for Lotus 1-2-3, the same macros could be made to work in Quattro. And Lotus sued. They said, "That's infringing on our copyrights." Initially the judge ruled in their favour. Borland appealed. Then Lotus got bought up by IBM, which was ominous because IBM can afford infinitely many lawyers. But ultimately Borland won. The judge ruled that if Lotus really had wanted to protect these things, they should have been patented, if they were patentable at all. But copyright wasn't the way to go. But it was a Pyrrhic victory for Borland because they had devoted so many resources to fighting this lawsuit that they fell behind in compilers.

Well, in the meantime GNU and others put in perfunctory or reluctant support for the third format, and ultimately Microsoft did the same thing. They didn't allow you to declare the 10-byte format as a variable type, but they used the stack and the 10-byte variable format in order to do the arithmetic. But now they killed one of the important capabilities of the chip, which was the invalid operation flag. If you do a divide zero by zero or infinity minus infinity or square root of a negative number in a real context, it will set the invalid flag. But also if stack overflow occurs, it'll set the invalid flag. Stack overflow, remember, was handled, alas, improperly, because the Israelis and the guys in Santa Clara didn't really get together properly. So stack overflow was a real pain to handle. It was best avoided.
But Microsoft's compiler wasn't designed to avoid it. They figured out that you'd have to have a fairly expensive expression to overflow, so what the hell. What they did was arrange that if at execution time the stack overflow occurred, this would be this invalid operation and Microsoft would issue this message at runtime: "Your expression is too complicated. Please simplify it." At runtime. Of course you might not have been the one who wrote the program, you're just running it, but you may then have executed a branch that the programmer forgot to test. And with this branch, it turns out the expression is too complicated. The programmer didn't know that. Now you've got this message. What are you supposed to do with it?

Well, that wasn't the only thing that was wrong. You see, it meant that Microsoft had to clear the invalid flag, because the hardware has the property that if the invalid flag is up and you enable the invalid trap, it will trap right then and there, and they didn't want that. They wanted the invalid flag to be down, the trap to be enabled, and if the stack overflow occurred, then you'd get the trap and you'd get this runtime message. But that meant that now I couldn't use the invalid flag to detect zero over zero or things like that, because the invalid flag would always be down. They always put it down. So that meant that you lost this capability. Well, then of course Java doesn't like flags at all because they're side effects. So people haven't really gotten the benefit of it. Then even in places where there is support, it's thought that the flags were these hardware things, but they're not. The flags are really storage cells which may contain a pointer into a hash table so you can figure out what went wrong. But whenever you access this flag, what you do is you check the hardware to see whether a flag has been raised since last time you looked, because that would override the fact that you don't see a flag raised here: "There's one over there. Uh oh, it got raised since I last looked." And that has to be understood, but it isn't understood. So people think the flag is just a hardware thing, and now that means that it's a global variable. And global variables are a pain in the ass. So what can I say? I guess I'd have to quote a military maxim whose origin I do not know, but it goes like this: in every army large enough, there is always somebody who doesn't get the message or gets it wrong or forgets it. And that's what happened to some of these features of the IEEE standard and the coprocessor chips.

So let's move back and talk about the standard now then. And I know a lot of the ideas that finished up in the standard I understand are coming from your work with...

Yes. As soon as I heard that a standards committee was deliberating these things, then I attended one of their meetings.

What year was that?

I think it was 1977. I'm not 100% sure.

Alright. So at that point, you had started consulting for Intel, but still the chip itself had not yet appeared?

No, no. But I have to warn you, I'm 82 years old and every now and then, something falls into a hole in my memory or gets bent. So if I say it was 1977 but it was really 1976, I just can't remember exactly. I think it was 1977 that I went to the first meeting. Al Despain sicced me onto them. And I realized that there were all these people that had different floating-point designs, none of which was fully satisfactory for a mass market. Even the DEC design, which was a good design for its era, it really wasn't designed for a mass market.
Not if you're going to have what could be, I thought, hundreds of thousands of people writing programs with floating point in them. Hundreds of thousands. Millions! Millions and millions! But anyway...

So there's two distinct problems then. One is that there's a lack of consistency between different architectures...

Oh yes.

...to the extent that if you run the same FORTRAN code, you'll get different answers.

Sometimes drastically different answers, that's right. And Robert Stewart was worried about that because he saw microprocessors proliferating, and on each different company's microprocessors, there was either software or firmware for doing floating point and they all did it differently. So he set this thing going and I thought I had the only design that was really suitable for what appeared to be an enormous mass market, reaching down to people who had never taken a numerical analysis class and might be very clever about whatever it was they were doing, except they're not experienced with floating-point arithmetic. So I persuaded Intel to let me divulge the rational and algebraic operations: add, subtract, multiply, divide, square root, plus one or two other little things that don't matter. Intel agreed. Intel felt at that time, first, that they can do anything, and second, that they're leading the world, and part of leading the world is sharing just this kind of information. But what was still to be kept very, very secret was the fact that this chip was also going to include the decimal-binary conversion assist and the kernels for the elementary transcendental functions. I think some Intel executive blabbed to an executive from a competing company at a golf game or something, because it was generally understood that Intel was going to put an awful lot on this chip, much, much more than just these few arithmetic operations, and one could suspect that maybe this IEEE committee was formed to slow Intel down. Well, that's perhaps an unworthy estimate of the competition, but there were times when it felt that way.

Anyway, Intel agreed, and so the KCS proposal was born. "C" for Jerome Coonen, who was a graduate student. He had made the mistake of coming into my office and asking if I could explain what the PDP-11 was doing. One thing led to another and the next thing you knew, he was ensnared. Stone, the "S", Stone, he was an experienced computer architect from the east coast, and his experience included drafting things for standards. So he helped shape the format for the document.

And you were the "K"?

I was the "K", that's right. As such documents go, it was comparatively readable and it wasn't terribly long. So we put that forward, and of course there was resistance. The resistance came initially from people who said, "Hey, you can't do that. How can you round correctly without an extra carry propagation?" On and on. "And so you're going to have terrible performance penalties." Well, part of the purpose of the meetings was educational. It was to show people that, yes, you can do it without a significant performance penalty, and here's how. And transistors are cheap, so you use transistors in a certain way and, gee, you could get it all correctly rounded. All it needed was a guard, round, and sticky bit. It's three extra bits. People knew about guard and round, but they hadn't been so well informed about sticky, although there were guys at Michigan who knew about sticky way back in the '60s.
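[A toy C sketch of the rounding decision those three extra bits allow; the lowest of the three bits stands for the sticky bit, the OR of everything shifted out further, and the integer significands are made up purely for illustration. The tie-to-even rule it applies is the one explained a little later in the interview.]

    /* Drop three extra low-order bits (guard, round, sticky) from a significand,
       rounding to nearest and breaking ties toward the even result. */
    #include <stdint.h>
    #include <stdio.h>

    uint64_t round_nearest_even(uint64_t wide) {
        uint64_t kept   = wide >> 3;        /* the bits we keep                   */
        unsigned extras = wide & 0x7u;      /* guard, round, sticky               */
        if (extras > 4)                     /* more than half a unit: round up    */
            kept += 1;
        else if (extras == 4 && (kept & 1)) /* exactly half: go to the even side  */
            kept += 1;
        return kept;                        /* less than half: just truncate      */
    }

    int main(void) {
        printf("%llu\n", (unsigned long long)round_nearest_even(84)); /* 1010.100 -> 10 (tie, already even) */
        printf("%llu\n", (unsigned long long)round_nearest_even(92)); /* 1011.100 -> 12 (tie, round to even) */
        printf("%llu\n", (unsigned long long)round_nearest_even(85)); /* 1010.101 -> 11 (sticky set, round up) */
        return 0;
    }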
So we showed people how to do it, including square root, how to get a correctly rounded square root. Well, it will cost you maybe one extra division over getting a square root that would otherwise have been regarded as adequate previously. And as people learned that they could do it, they became, the engineers in any event became, much more receptive to this, because it looked like a good design and it had a considerable rationale behind it.

Now the novelty was the inexact exception that is raised when a numerical result has to be approximated: you do an arithmetic operation and you have to approximate it. Maybe it overflows or underflows, so you have to approximate it. Or you have to round it and you have to approximate it. But if it's exact, don't raise the exception. And I'll explain what that was for in a moment. And the other was the NaN, the "not a number." Now not-a-number existed in vestigial form on Seymour Cray's machines and also on the DEC VAX. The trouble was you never knew what it would do, and so you couldn't program with it. The purpose of the not-a-number thing was so that you could defer judgment. Something has happened that's bad, but maybe you won't use it because your program may branch. Whatever condition caused this to be bad may cause your program to branch later to a place where it just ignores this. That happens fairly often. It certainly happens with speculative execution. You know, you come to a place and you know you've got a branch but you don't know yet which way you're going to go, because it takes a while. So you run down both paths, but one of those paths may be fatal. You'll know that later, when you'll be told, "Hey, you accept this path. Throw away that path." But in the meantime, that path has hit an exception like divide by zero. Do you want to stop the computer?

So that sounds very much akin to what you were describing earlier in terms of throwing the flag but not immediately going to the operating system to handle things.

That's right. You've got to give the programmer an opportunity to fix exceptional arithmetic events which are difficult to predict without doing extra testing that'll slow you down on the normal cases. So if on rare occasions something happens, you want to be able to test for it at your convenience. Then alright, so occasionally you're going to abandon this piece of computation or go over there and you do it in a slightly slower but more robust way. So we had inexact and NaN, but otherwise it was the divide by zero, which really means you create infinity out of finite operands. There's the overflow condition, which means you tried to compute something that was too big and now there's some interesting questions about what to do about it. And underflow, well, I made underflow gradual. Gradual underflow wasn't my invention. However, I had implemented it on the 7090 and the 7094, and it turned out to be really handy for some things.

So underflow is the opposite of overflow, then the number gets too small and it vanishes.

That's right, yeah. If you think of it in terms of digits, you imagine that your digits are pushed ever further to the right until finally they get to the end. The computer isn't infinite so, after all, digits have to fall off, and if you push it far enough to the right, they all fall off. Well, gradual underflow was a way of arranging that the most common and frequent kinds of underflows could be handled relatively humanely without bothering people about it.
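[A small C illustration of what gradual underflow buys in IEEE double: below DBL_MIN the significand loses bits gradually (subnormal numbers) instead of being flushed to zero, so, for instance, x - y can only be zero when x equals y. The particular constants are chosen only for illustration.]

    /* Gradual underflow: differences of tiny, distinct numbers stay nonzero. */
    #include <stdio.h>
    #include <float.h>

    int main(void) {
        double x = 1.5 * DBL_MIN;   /* just above the smallest normal number */
        double y = 1.0 * DBL_MIN;
        printf("x - y       = %g\n", x - y);        /* a subnormal, not zero */
        printf("DBL_MIN / 2 = %g\n", DBL_MIN / 2);   /* still nonzero         */
        /* With abrupt (flush-to-zero) underflow both lines would print 0,
           even though x != y and DBL_MIN != 0. */
        return 0;
    }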
You'd get an underflow signal, but you could ignore it, which is what people did anyway, because they're going to ignore an underflow signal no matter what you do. You can flush it to zero, you can do it gradually. People get in the habit of ignoring the underflow flag. So our task is not to correct a mistake but rather to treat underflow in such a way that the chances are pretty good that you could ignore it safely. That's the best we can do. And we'll raise a flag so that if you're a curious programmer and you're worried about it, yes, you can find out that it happened and then fix it up later.

So we had the exceptions, we had directed roundings. That meant that the normal mode of rounding was to round off to within half a unit in the last place. If it's half... Well, let's put it this way. If it's bigger than a half, you go up; smaller than a half, you go down; and if it's half, you go to the nearest even.

Why to nearest even?

Well, a statistician by the name of Hotelling had discovered that if you always round away from zero at the halfway case, you're going to introduce a weak statistical bias. And being a statistician, he said, "However, if you round to the nearest even or round to the nearest odd (choose one or the other, but round to the nearest even seems to work better in most cases), well, then you don't get this statistical bias." Now the law of averages is on your side. So that's what I did. And it wasn't very difficult. We got guard, round, sticky. It really is easy to decide to do that, and the logic isn't particularly expensive. Of course there were people who said, "Oh my God, performance of this thing is going to be awful." George Taylor was a graduate student and he built a board to replace the VAX floating point with IEEE standard-conforming float and double. How do you like that? And if anything it went faster. It was a marvellous design. They wrote it up for one of the IEEE symposia and it was going to be tested by Cary. What was his first name? Cary, C-A-R-Y, that was his last name. But he had a swimming accident. He dove into the pool and hit his head and nobody noticed that he was underwater for a while. So the guy who was going to do the tests didn't. So we really couldn't promulgate the board as a tested device even though initial tests had seemed okay. That was really... It was a very discouraging blow. There were a lot of us, not just his family, who were really grieving over that loss, because he was a nice kid. Ah, well.

Ultimately the tutorials, which is what these meetings had turned into, these tutorials bore fruit and we gained pretty much of a majority. Didn't get everything I wanted. I wanted to have two ways of treating infinity, but the engineers just choked at the idea that plus infinity might be regarded as equal to minus infinity. But it is, if you think of the real numbers as being on a circle and you have infinity up here. Both infinities fit. They're both the same. And you only need one zero. But if you have two infinities, you see, if you've got plus infinity and minus infinity, then you need two zeros, because what's the reciprocal of infinity? Well, it's zero with the same sign. That's a little bit troublesome. You've got two zeros and they're arithmetically equal, and the only way you can tell them apart is either by asking about their signs or dividing by them. But they go silently through computations and they allow physicists to deal with a common situation, which is you take a domain and you slit it, and when you slit it, the question is: to which side of the slit does the slit itself belong?
This way it could be both sides, and that gives you a closed domain, you see, because it's closed on both sides. So there were these interesting little things. How do you compare NaNs? And the answer is a NaN compares as unequal with everything, including itself. And so on. So we had reasonable rules. Now you could predict how NaNs would propagate through your code, you could predict how they would affect your branches, and that was new. The indefinite on a CDC machine or on a DEC VAX, you could never predict what it was going to do. In fact on a DEC VAX, the chances were pretty good that if you touched it, it would trap, unless you just moved it. But if you tried to do an arithmetic operation on the reserved operand, you'd get a trap, and then what would you do? I mean, you don't know where you are. That was part of the problem with the operating systems at the time. When you did your trap, you might have gotten a hexadecimal address, but the hexadecimal address couldn't be correlated with your source code because the compiler would rearrange operations in order to take advantage of pipelines and other things. Your instructions look like this, but when you look at them in assembly language, they're all jumbled up in order to be executed faster. I mean, isn't that what you want the compiler to do for you? So of course you couldn't figure out, if an exception occurred, where it had occurred, unless people did something more than just turn that off.

Well, you're getting some idea then of the IEEE standard. We settled on a final draft in 1981, but it wasn't adopted until 1985 because DEC opposed it. They brought a bevy of lawyers in to argue in front of the IEEE people and so on. However, while they were arguing, it was being implemented. Weitek implemented it. I mean Intel of course came out with it in 1981 I think, and I think AMD shortly after. I forget how many... I don't remember the details of how many people there were, but there were at least six implementations. So DEC's lawyers' arguments got undercut. They said it couldn't be a standard because it wasn't in general use. But as it happens, by 1985, it was. And so it was canonized then. It was a sad story for DEC because Mary Payne, who was their chief numerical analyst, was actually pretty competent, but she got bad advice from the hardware people who said that implementing this thing would be impossible. Especially gradual underflow; they said it was impossible to do that at a reasonable speed. They were mistaken. George Taylor did it.

Did DEC ever adopt the standard?

Yes. DEC produced the DEC Alpha. That was a pretty marvellous chip. Instead of waiting for things to settle down at the outputs of registers, they hand-tuned it so that the output of one register would go to the next without waiting to settle down, and that way they got rather higher speed for their technologies, for their line widths and so on. For their transistor densities, they were getting extra-high speed. And on the DEC Alpha, you could choose either to be compatible with DEC VAX floating-point arithmetic or with IEEE standard floating-point arithmetic and most of the other features of the IEEE standard. It was a little bit peculiar because the directions of rounding, whether you round up, down, or towards zero and so on, that was in the opcode and they didn't choose it well. They chose the... Oh, how shall I put it?
They chose I think the zero, zero pattern for round to nearest, and what they should have done was have the three directed roundings and the fourth from the control register. If they had done that, then that would have been a really good system, a very satisfactory system. But they just didn't quite understand what to do, so they didn't quite get that right. But the rest wasn't bad. And I think the DEC Alpha continued in production on an Intel fab line for a while after the company DEC had disappeared.

So we'll return to more recent developments with the standard a little later. We should say something about the Turing Award at this point. So you won in... Well, you won the award for 1989. I guess it would have been announced in 1990, the way they do it. The citation: "For his fundamental contributions to numerical analysis. One of the foremost experts on floating-point computations. Kahan has dedicated himself to making the world safe for numerical computations!"

Well, that was for a number of things I think, because they would have gotten word that a paper that Golub and I had written in 1964 made it possible to compute the singular value decomposition of matrices of not immoderate size much faster and more accurately than previously. And I had written a number of other papers on things like "Why does something converge the way it does?" or "How do you compute this without losing digits?" A transcendental function library was produced by some of my students for what is now the freely distributed math library. That was in the early 1980s. That included argument reduction for the trig functions sin, cos, and tan as if we knew π to infinite precision, but we didn't. However, a student, Bob Corbett, found a way to do it which required only about twice the precision of what we were using. You know, working precision is this wide, and if you carry that many digits, you can reduce your arguments as if π were known to infinite precision. And a similar scheme was discovered by Mary Payne and her workers at DEC, except they needed twice as many bits as we needed. There was a guy, ah, Vyssotsky, Vyssotsky at Bell Labs, who didn't believe that we could do it. But he tested it and he found, yeah, it worked. And it worked because I'd found some continued fraction manipulations, and another student, Stuart Macdonald, he helped to write a program that first got a continued fraction due to... was it Robert Gosper from MIT? He got his continued fraction for π and we used it to show that we didn't need more than this many digits. So it was fast, it was economical. So we had a really nice library and guys at Sun continued to develop it. The guys at Sun, well, many of them were my students, former students, and they continued to develop it until Oracle took over. So I think you can get the last version that Sun distributed, but there was an even later version. I think Oracle has not distributed that. Oh well. You can be grateful for what you can get, I guess.

And so these are some of your other contributions, and in the long interview that we did back in 2005, you gave a much broader perspective on those things than will be possible here. Were you surprised to win the award?

Well, I certainly hadn't expected it.

That sounds like surprised.

Hmm?

That sounds like surprised then.

Well, yeah, I guess so. I didn't do it for the award. I did it because it seemed like the only good way to go. And I had a lot of help. That was one of the things that the award didn't make as clear as I wish it had. It wasn't just Jerome Coonen, a graduate student who worked with me on the standard.
I had a number of friends and colleagues... Well, you've spoken with Jim Cody. He certainly helped. Then there was Fred Ris at IBM. IBM didn't meddle in the standard. They behaved as if they were above it all. But Fred Ris was nonetheless an important ally. And I had some reluctant allies from Hewlett-Packard. They didn't get quite all of what they wanted, but otherwise they were happy to see it going that way. Then of course there was George Taylor, the graduate student who actually built the thing. You see, he built a high-performance board to replace the floating-point board on a DEC VAX, and what's more, he did it twice, because there was a machine called the Elxsi, E-L-X-S-I, I think it was the 6400, and he built IEEE standard-conforming floating point for that machine. It was a supermini that was actually faster than a DEC VAX.

So the citation itself doesn't mention the floating-point standard, but it sounds as if your sense is that that would be the primary thing the award was given for?

I think so. Yeah. I mean normally mathematicians don't get awards from the ACM, unless they're working in computability theory or something like that. But really the work on floating point had an enormous commercial impact. It meant first of all that people could exchange data between even hostile machines. Even big endian versus little endian. As long as you knew which was which, you could exchange the bit patterns and you'd get exactly the same numbers on this machine as you had on that machine, until of course you started doing the arithmetic, because then there were variations in how people optimized things and what instruction sets they used and so on. And it also meant that if you stuck to a certain core set of operations, you could pretty much guarantee that the results would be exactly reproducible. Until we got threads, until stuff started happening in parallel, because when it happens in parallel, you can have a whole bunch of different threads and they come together each one in its own sweet time, and when you add them all together, the order in which you've added the numbers depends upon which thread got there sooner. Well, so now you lose reproducibility and you have to use special codes which are being developed, have been developed by students in my college. Jim Demmel, a former student, and some of his students have developed algorithms for dealing with that problem. But before parallelism became popular, if you stuck with the same core instruction set and you used the same math library (which you can get: FDLIBM), you could get exactly reproducible results. They might not be correct, but at least you're in the same boat.

So if you could write your own two- or three-sentence citation, what would it say?

Citation for what?

Well, citation for the Turing Award.

Oh, for the Turing Award. I see. It would say that I was very lucky in my choice of collaborators and opportunities to be able to get something done that was really worthwhile, but I didn't do it myself. And it's unfortunate that when you get these awards, it makes it appear as if "This guy's a hero. He did it all himself." Well, I've got a bowl and I think I got $25,000. Whoopee! But I was in some respects just a... how shall I put it? I was a guy near the centre of it all for a while, but I wasn't the only guy working on it. Jerome Coonen had a much better political sense than I had, and so he often tamped me down a little bit. You know, moderate things, moderate things. Don't irritate people unnecessarily.
And Jim Cody had an enormous amount of experience because Argonne had so many different machines then. And then there was Stan Brown at Bell Labs. Stan Brown came up with a model of floating-point arithmetic which ideally would have had the property that if your program uses only those axioms that Stan Brown articulates, then you should be able to prove your program works correctly, if it can be proved at all. You hope. But that turned out not to work out, first because his axioms didn't cover the CDC 6000 family. I don't think they covered all of Cray's either. But here was the thing: in order to cover all the different computers, the axioms couldn't be categorical. They limited the possibilities, but they didn't specify the possibilities. That meant that it's like having the design of a computer whose arithmetic is worse than any arithmetic ever actually built. You've got all these inequalities. What are you going to do with them? He came to appreciate this limitation of his scheme. It had looked so good at first, as if now at last we have an axiomatic foundation for floating point. And all it did was give you an opportunity to be unable to prove true theorems about the actual arithmetic. So that was a valuable lesson, an important lesson, which meant that a standard could not be descriptive, it had to be prescriptive. I could go on and on with various bits of experience culled from this place or that place, people who contributed to these things in many ways, and I just happen to have been around to have sort of accreted all this stuff.

And do you feel that winning the award gave you any opportunities to have your voice heard or to do things that you might not otherwise have been able to do?

Well, you might have thought so, but as I've said, the importance of floating-point computation has dwindled. When I started computing, the raison d'être for a computer was to do scientific and engineering computations. Specifically they were worrying about dredging the Saint Lawrence Seaway and where are you going to put the mud? That was the sort of computation they were doing. That was a civil engineering computation. That's what the machines were for initially, especially since they were binary, so the business world didn't think that that could be of any use. Well, it was, but they didn't think so. And over the years, although scientific computing has grown enormously, spread enormously, relative to the rest of computing it's dwindled to a little raindrop. So you've heard me say it, numerical computing is like a sliver under the fingernail of computer science. It's just an irritant because there are all these things that you have to watch out for that don't occur if you limit yourself to computation with integers or with character strings or with discrete structures like graphs, until you start approximating graphs, because they get too big. And when your graph has a billion nodes, ah, then there you are approximating again.

So we should say something about what you've been doing since the award. I know a chunk of that is the revision of the standard, but there are maybe some other...

Oh yeah, that's a very painful thing. The standard is supposed to have been revised every five years, but in fact the revision didn't start until 2000. Fifteen years. I think people were afraid to tamper with it. But that revision came under almost a direct order that we combine the binary floating-point standard with a decimal floating-point standard. And there was a guy in England who worked for IBM. Oh God, his name has just dropped into a hole in my memory.
Now would combining the standards be so that there's only one standard to maintain? Or would the implication be that chips should be able to do both to meet the standard?

No. What it amounted to... Oh, it was Mike Cowlishaw, C-O-W-L-I-S-H-A-W. His design for decimal arithmetic, promulgated through a language that he was fond of at IBM in England, allowed not only for floating-point computation in decimal, very much as it had been done on the IBM 650 although with better rounding properties. He also wanted to be able to use this decimal arithmetic in contexts where what you were really dealing with is essentially integers. So, for example, in most cases billing dollars and cents or pounds and pennies are really in integer multiples of pennies. What he wanted was to arrange that people could do that type of computation very fast, which meant without normalizing the floating-point numbers. So the floating-point number might look a little bit anomalous. You've got a string of digits and then a large bunch of zeroes and your exponent. And he had worked out a scheme for supporting that. But his scheme would have meant that decimal arithmetic in a given word size would have either a rather smaller range than binary or a rather smaller precision, or both. But Dan Zuras, Z-U-R-A-S, he had retired from Hewlett-Packard. He had been a hardware engineer for them. He and I came up with a packing scheme that meant that although it wasn't all that easy to read, the bit string that represented a decimal number, because chunks of it were compressed, you now had in a given word size a range and precision comparable to the binary format of the same word size. So you wouldn't sacrifice range or precision significantly, although there is a well-known wobble that's associated with decimal arithmetic, and that is that it just makes it harder to do an error analysis, that's all.

Well, that would have been tolerable. But then there's sort of a bandwagon effect. All sorts of creatures jump onto the bandwagon. These were creatures who came from companies that had their own axe to grind. They wanted their own product to be standard-conforming and they didn't want to have to modify it. That induced a number of what I would call centrifugal proposals, proposals that detracted, pulled away from the centre. People wanted certain options to be allowed so that you'd have your choice about whether you'll do it this way or that way, because this company does it this way and that company does it that way, or wanted to do it this way or that way. Well, the trouble with a standard is when you start putting in too many options, it's not a standard anymore. The outcome is unpredictable. And that's what has happened. This has caused real pain. For instance, Sylvie Boldo, S-Y-L-V-I-E, and then the last name is Boldo, B-O-L-D-O. Now she works for one of the French research outfits in one of the southern suburbs of Paris. And her accomplishment is to be able to use the theorem verification language Coq, C-O-Q, to verify the correctness of certain floating-point computations. But she has observed ruefully that although she's been able to do it for things that conform strictly to the IEEE standard, when she gets to programs where somebody exercises certain options, it invalidates her proof. It doesn't mean the code won't work (maybe it will, maybe it won't) but the proof doesn't work, that's for sure. So that's, you see, one of the expenses that you incur when you have too many options. And the 2008 standard, what evolved after 2005, that has too many options. It has become very complicated.
And part of the complication arises because nobody is willing to bite the bullet and say: Who should conform to this standard? What should conform to this standard? Should everybody conform to it? Well, if everybody's going to conform to it, then you have to have all these options, you see, and then you don't know what conformity means anymore. And I just couldn't get this across. Well, that may have been a failing on my part. I just failed to get it across. So I'm not happy with the way the standard has turned out. I think the older standard, the 1985 standard, is much more humane, even though it does have certain gaps. And here we are revising it again. We're on schedule to produce a revision at the end of this year. But the chairman of the revision committee, David Hough, a former student of mine, has said that he does not propose to incorporate anything new, only clarifications of the old. God knows it needs clarifications. And so it'll still be too complicated. And the decimal thing is so screwily written that I can't blame people if they don't understand what's going on. I really should do my own write-up to explain what's happening, but I've got... A year ago, I fell ill because a doctor had told me that I have benign prostatic hyperplasia. What does the word "benign" mean?

Doesn't cause harm.

No, that's not what it means. It means it's not cancerous. It wasn't benign. And I suffer from shrunken cartilages between my vertebrae. They pinched nerves, so half the nerves below my waist stopped reporting, they stopped functioning properly. That's why you see me hobble. I didn't know that the prostate was causing enormous pressure to build up and it damaged my kidneys, which is now why I have to have this little pipe, so that it'll drain my bladder and not allow back pressure on the kidneys. But all of that has taken a year out of my life, so I didn't get around to writing up what I should have liked to write up to explain the decimal part of that standard in intelligible terms. Because it's really two arithmetics in one. It's an ordinary floating-point arithmetic, just as you'd imagine it to be, with all the digits shifted as far as possible to the left, to normalize, and then you have an exponent (so in scientific notation, as scientists say), combined with a fixed-point format where now your decimal point or quantum point is on the right-hand side and you're dealing with numbers that are all jammed to the right as far as they can go, because that's the natural way to write them. And there is a transition point where if the numbers get too big, you end up with floating point and rounding occurs. And I hope you look at the inexact flag because it will warn you that you may have lost some pennies. Or who knows? Nowadays you may have lost some hundreds of dollars or millions of yen. That part is not what bothers me. What bothers me is all the other things that people wanted to stick in. Kulisch, Ulrich Kulisch at Karlsruhe in Germany, has people who like his ideas and they wanted to put in what's called a superaccumulator, which is a register so wide that if you multiply two numbers and then convert them to fixed point, they'll stick somewhere in this wide register. It allows you to do some things which can also be done in other ways, but this happens to be the way that Kulisch and his students do it. So there's an option, and I can go on and on with various options.

So are those features that have actually been adopted by any microprocessor designers or hardware?

Well, except for research purposes, that one hasn't been. There's a fused multiply-add.
That's partly my fault. In 1984, I went down to Austin, Texas to help with a problem that had been incurred because IBM wanted to implement John Cocke's very reduced instruction set computer. You know, R-I-S-C, RISC, a very, very popular buzzword. Actually the first RISC computer was Seymour Cray's 6000 family, but that's a story for another day. The trouble was that John Cocke had observed that all they really need is a multiply-add operation, an operation which multiplies two numbers, rounds the product, then adds a third and rounds the sum. He said, "If you've got that, you've got everything you need, because if you want to do a multiply, add zero; if you want to multiply... sorry, if you want to do an add, multiply one of the numbers by 1; and if you want to do a divide, well, you'll use an iterative process." Oh, it wasn't quite that simple, because if what you want to do is round correctly the way the IEEE standard specifies, rounding to nearest, ahhh, then it will cost you twice as much as just simply getting a quotient that's OK except maybe for the last bit or two. So that's where they called me and said, "What can we do about this? We can't afford divide to be that slow." And I looked at their chip and I said, "You know, you've got room here to do a fused multiply-add, it says. Do the multiply, don't round it. Then do the add, and then round it." And I said, "If you do that, then you can compute a remainder, and from the remainder, you can figure out how to round it, and you won't have to do something that's so lengthy." And then I said, "And besides, it helps with decimal-binary conversion and it will help with your logarithms and other things too." And they did; and it did.

And Peter Markstein, he had worked for IBM in Yorktown Heights. That's where I first met him, but he was working for this group in Austin. And yes, he, and I think with the help of his wife, they produced a math library and they found that the fused multiply-add made all sorts of things go better. It was almost like having a scratchpad, you see. And, well, Robert Montoye. Robert Montoye was the engineer who actually implemented the fused multiply-add. I found out later, years later, that his thesis had been about fused multiply-add, and it's just possible that he left room on the chip hoping that a visiting fireman like me would say, "Hey, fill it up and we'll use it for a fused multiply-add." Maybe. But I've asked about it and nobody seems to know. That is a crock for the historian, because I bet you would really like to know, and I would like to know, whether he was serious about this. Anyway, it got done. And then the question was, well, does it conform to the standard? I said, "Well, it goes beyond the standard. And as long as you let people know that you're using it and let them know that they can turn it off, that's okay, because," I said, "there are some situations where you'll wish you hadn't done this." An example is multiplying a complex number by its conjugate to get the squared magnitude. You'll wish you hadn't done this, because when you do it, you're going to find the squared magnitude, which should be real, has a tiny imaginary part due to roundoff. Bad news. And there's some other peculiar situations where this happens. So I said, "As long as you can turn it off, well, it's something that somebody can use if they want, or not." Then other folks decided that they'd like to have it too, and so there we are. It's another option in the standard, in the 2008 standard.

Has the 2008 standard had any effect on what vendors are doing with real machines at this point?
I don't think people have departed from the 1985 standard, to my knowledge, unless what they've done is depart from the standard altogether. For example, ARM, A-R-M, they have processors which do floating point in a way that does not conform to the standard. It doesn't round quite right. It doesn't have flags. The exceptions are very stilted. There are certain rudimentary options. I think they even underflow abruptly instead of gradually. Oh, abrupt underflow is now an option. Some people want it. I don't think it's a good idea, but if they want it, well, what can I say? No, I think that the old 1985 standard is the one that's still preponderant in hardware. But that's fading because the languages are slow to support it. There's a version of C, C99, that supported the IEEE standard to some reasonable extent, and the latest revision for C is underway to support the standard more fully. There's also a version of FORTRAN which, if I remember rightly, makes provision for access to flags, but it thinks that they're machine bits. It doesn't understand them as localizable variables. And there are serious questions about when you enter a subprogram, what should you do? Should you inherit the flags? Should you propagate them? If you raise flags and deal with them in your own subprogram, what should you put out? APL dealt with this problem. I don't know if you remember the APL language, Ken Iverson's language. Well, what they had was a situation where if you entered a function, you would inherit what were called system variables. System variables included a comparison tolerance, CT, and an index origin, which could be 0 or 1. So if you entered a subprogram, you'd inherit. Now if you declared these as local variables and altered them, then when you exited, you'd restore things the way they were when you entered. But if you didn't declare them as local variables and you exited, well, then whatever you may have changed, that's what you passed on. That's not a bad way to handle flags and the directed rounding modes. But that's not been implemented, so far as I know, in any current languages, and APL is a ghost. It's long gone. Well, I have an APL implementation for one of my computers, but I hardly ever use it.

Yes. It had a devoted following, but it was never a very mainstream...

Say again.

APL had a devoted following, but...

Oh yeah. There were people who loved to write programs in APL. The trouble was you couldn't read them.

So are there any other aspects of your work over the last few decades that you think it would be relevant to mention?

I'm having trouble hearing you. Sorry. Are there any other aspects of my work?

Over the last 20 years or so that you think it would be relevant to mention?

Well, I've worked on all sorts of things, as I always do. But what weighs most upon my mind is our inability to debug our large floating-point codes, and I've proposed what I think should be done in a posting called Boulder.pdf (that's the name of the city in Colorado). And this includes what amounts to two, let's say two major foci. One is: how do we deal with rounding errors? We can't name the rounding errors in our programs because there's so many of them we'd drown if we tried, so they're invisible. And when people write programs and they write out algebraic expressions and they imagine that that's what the computer is going to evaluate, that's not what's going to happen. Every operation is going to get rounded. Well, some operations like division by 2 or multiplication by 2 are hardly ever inexact; they hardly ever overflow or underflow, and otherwise they're exact as far as roundoff is concerned.
So are there any other aspects of your work over the last few decades that you think it would be relevant to mention?

I'm having trouble hearing you. Sorry. Are there any other aspects of my work?

Over the last 20 years or so that you think it would be relevant to mention?

Well, I've worked on all sorts of things, as I always do. But what weighs most upon my mind is our inability to debug our large floating-point codes, and I've proposed what I think should be done in a posting called "Boulder.pdf", that's the name of the city in Colorado. And this includes what amounts to two, let's say two major foci.

One is: how do we deal with rounding errors? We can't name the rounding errors in our programs because there are so many of them we'd drown if we tried, so they're invisible. And when people write programs and they write out algebraic expressions and they imagine that that's what the computer is going to evaluate, that's not what's going to happen. Every operation is going to get rounded. Well, some operations, like division by 2 or multiplication by 2, are hardly ever inexact; they hardly ever over- or underflow, and otherwise they're exact as far as roundoff is concerned. And there are lots of such things that are exact, but usually there's going to be a rounding error. The rounding error of course affects only the end figures of the immediate result, but then that can turn out to have a really large effect. It depends on where the singularities of your program are. And the singularities are very hard to find. I've given examples where, to most people's eyes, the singularities are invisible. Oh, they're there. They kill the accuracy of the result. But when you look at the program, you can't see why that should happen.

So let's face it: rounding errors are going to baffle even experts from time to time, and therefore we need tools to deal with rounding errors. And after a lot of experiments and after trying a lot of other suggestions, other people's suggestions and so on, I've come to the conclusion that the least expensive way that offers the most for your patience and time is to rerun a computation three more times, once for each of the directed rounding modes. That gives you four results: round to nearest, round up, round down, round toward zero. And very often (I can't say "almost always," though I'm tempted), very often the spread in those results gives you an idea of the sensitivity of your computation to roundoff in the internal computation, the internal works. Now it can't be infallible, because I also give examples showing where it won't do you any good. But the examples are pathological. They're designed specifically to show you that this scheme is not foolproof. But those computations don't compute anything you'd want to compute. They're just there to show that you can't always guarantee it.

Now that sounds easy, but it's not, because, for instance, do you want to redirect the roundings inside the math library? Probably not. And in order to rig these tests, what you want to do is identify some module in your program, like one of the blocks in the block diagram; you want to identify some module that seems to be hypersensitive. In order to do that, you've got to capture the input that came to this program at a time when your suspicions were raised, and then rerun it and look at the spread in the output. And if that spread is excessive, you can say, "Ah-ha, I think maybe this block is responsible for my troubles." But you can't do that without assistance from the debugger. That means the debugger and the compiler have to collaborate, so you can tell the debugger, "I want to save the input to this subprogram in order to be able to run these tests. And I want to protect this logarithm and exponential program and decimal-binary conversion, and so I want to protect them from redirected roundings." I always want to protect decimal-binary conversion from redirected roundings for this purpose. And so the debugger has to be able to insert the appropriate things into the running code, the object code.
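A rough C99 sketch of that rerun-under-directed-roundings test follows; the harmonic sum inside module() is only a hypothetical stand-in for whatever block is under suspicion, and a real test would also shield the math library and decimal-binary conversion from the redirected roundings, as just described:

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>
    #pragma STDC FENV_ACCESS ON

    /* Hypothetical stand-in for the suspect block: a naive harmonic sum. */
    static double module(void) {
        double s = 0.0;
        for (int i = 1; i <= 1000000; i++) s += 1.0 / i;
        return s;
    }

    int main(void) {
        const int   modes[4] = { FE_TONEAREST, FE_UPWARD, FE_DOWNWARD, FE_TOWARDZERO };
        const char *names[4] = { "to nearest", "upward", "downward", "toward zero" };
        double lo = INFINITY, hi = -INFINITY;

        for (int k = 0; k < 4; k++) {
            fesetround(modes[k]);               /* redirect every rounding in module() */
            double r = module();
            printf("%-12s %.17g\n", names[k], r);
            if (r < lo) lo = r;
            if (r > hi) hi = r;
        }
        fesetround(FE_TONEAREST);               /* restore the default mode */

        printf("spread       %.3g\n", hi - lo); /* a rough gauge of roundoff sensitivity */
        return 0;
    }

If the four results agree to the digits you care about, roundoff inside this block is probably not the culprit; a wide spread is the "Ah-ha" signal described above. (An optimizing compiler may need to be told that the rounding mode can change at run time; GCC, for instance, accepts -frounding-math.)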
Now there's another way to do it, and that is: you've got a program, let's say in float arithmetic, and maybe what you'd like to do is run the same program text-wise except that all the declarations are now in double. In C, that'll tell you what's gone wrong. Now if what you want to do is trace the programs in C where they diverge, you've had it, because that's not going to tell you anything. It turns out there are lots of programs where the results diverge as a consequence of rounding errors, but they always come back.

So, see, you'd get a false positive then.

No, I show how to do it in that document. I show how to do it so you can take advantage of the ability to run with extra precision. But that's not trivial either, because some of the subprograms you've obtained from other people come in object form and you can't recompile them for the higher precision, and some of the constants, some of the parameters, have to be modified if we're going to go to higher precision. I mean, if this is an iterative program, do you want to iterate until convergence to a higher precision, or do you want to iterate just the same number of times as you iterated here? Well, it depends, so you've got to be able to exercise these options in order to be able to benefit from running at the higher precision. Neither of these schemes, then, is altogether trivial. They require support from the language, compiler, debugger, and in some cases the operating system.

So that's one arena, dealing with roundoff, the invisible menace. Now the other arena has to do with exception handling. Floating-point exceptions are different from errors unless you handle them badly. The reason they're different is because there's so little that you can do with them. Okay, what can you do about a floating-point exception? Well, you might decide to abandon the computation. That's the try, catch, and so on. But a graduate student here, now he's a professor I think in Virginia, Weimer, W-E-I-M-E-R, that's his last name, he found out that if you use try-catch-finally for more than one condition to be caught, the chances are pretty high that you'll do it erroneously. He looked at some vast number of lines of code, maybe a million. I don't know how he did it, but apparently he set up a search procedure. What he found was that more often than not, if there was more than one condition to worry about, the programmer got it wrong. That would happen with floating point also, because with floating point, it's very rare that you have just one anomalous situation.

So as a programmer, you should certainly have the option to try something and, if you detect that it doesn't work, then go back and try something else. You should have the ability to presubstitute, which means you should be able to say at the beginning of a block, "In this block, if such-and-such an event occurs, substitute this number." All you have to know are the operand values and the destination and the operation. Then you look up a table, and you should be able to set whatever you like in that table. It can be either a value or the address of a value. And the operating system should not get into the act unless you've decided what you want to do is really abort. Maybe if what you're doing is debugging a program, that's what you may indeed want to do. Then you hope that if it aborts, it will tell you at what line, so you'll know where the exception occurred, not with a hexadecimal address but with a line in your source code. And that means that that information has to be preserved by the compiler and available to the debugger. And possibly the operating system has to get into the act, because if you're going to abort, where are you going to abort to? Well, you can abort to a calling program or you abort to the operating system. So the operating system has to be prepared.

Now there is a larger problem with exception and error handling that has nothing to do specifically with floating point. It has to do with the whole concept of a runtime error. Because it's ambiguous. Is a runtime error an error on the part of the program's user, or is it an error on the part of the programmer?
Hard to tell. What that means, as I see it, is that especially in real-time systems that are running something like an aircraft, you've got to think your way through very carefully so that before a program abandons control, it first of all finds a default situation which is as little harmful as you can imagine it to be, in a world where you can't imagine everything, and you provide a message which is usable at a higher level to say, "What went wrong?" And if what went wrong was potentially transient, programs should maintain a watch to see whether the transient has passed and now things are okay. I say this because if you don't do that, it will cost lives, and it has.

If you look at my "Boulder.pdf", you'll see it happened to Air France 447. In 2009 I think it was, it got lost in the middle of the Atlantic on the way from Rio de Janeiro to Paris. It got lost because it was caught in a thunderstorm, the pitot tubes iced up, and they stopped sending reliable airspeed indications. The automatic pilot said, "How can I be maintaining my altitude at 35,000 feet when I'm only doing 160 knots or less?" So the automatic pilot said, "Invalid data. You guys take over." Invalid data? What data was invalid? The pilots didn't know. Which of their instruments must be distrusted? The pilots didn't know. Could they look outside and see what's going on? No, it's pitch-black night and they're being tossed up and down, and they can't see anything outside the window.

Well, the younger co-pilot figured that if he doesn't know which way we're going, up is better than down, so he pulled back on the stick. That turned out to be a mistake, because when they pulled back on the stick, it increased the angle of attack of the wing and the aircraft stalled, and it started to fall like a rock. Then there was a signal that said, "Stall, stall, stall," but can you trust it? You look outside, you can't see what's going on. You're being tossed around anyway in a thunderstorm, so you can't tell by the seat of your pants. It took them more than three minutes to figure out what was going on. Three of them, two co-pilots and the captain. It took them more than three minutes, but it took them only three minutes to hit the surface of the ocean. So they and all the passengers died, and they died because they didn't have an informative indication of what was wrong. Because as they fell down through 20,000 feet, the pitot tubes lost their ice and started to record correctly, but the program had abandoned them. It was no longer looking.

This is a tragedy that was caused by software, but the board of inquiry didn't ask about software. They ended up blaming the younger co-pilot for pulling back on the stick, when what he should have done to get out of a stall (if he had known that he was in a stall, and certainly the plane was trying to tell him he was in a stall, he just didn't know he could trust it) is to push forward on the stick. You put the nose down, you gain speed. When you gain speed, the stall condition goes away, and now you can pull out and continue in a normal way. Of course you can, if you know what to do.

Well, they lost another aircraft on the way from, I think, somewhere in Indonesia to somewhere in Malaysia. This was not the one that was supposed to go to Beijing and ended up in the Indian Ocean. No, this is the one that was... it was a two-hour flight, and in the middle of the two-hour flight, they got caught in a thunderstorm. Inexplicably, one of the pilots popped the circuit breaker that supplied power to the automatic pilot, which I interpret as meaning that that was the only way he knew to reboot.
But it takes time to reboot, and during that time, they didn't know how to control the aircraft. So it crashed. They lost it. They managed to recover, I think, the black box, but the pilots and all the passengers got lost. Same thing. The software did not tell them why they were being abandoned.

I think we have to look at programming languages and ask ourselves whether a programming-language convention is worth enforcing. Namely, that every programmer must have in mind that if something happens that he has not anticipated and it causes an error condition that causes a program to fail, there should be a default status of the data structures that the program's responsible for, and that's the condition in which the program will put them before it abandons. And in the calling program, knowing that, knowing that if the program fails the data structures will be consistent, they'll be in some neutral or standard or predictable format, it can try to deal with the emergency. But you can't just abandon the guys and leave the data structures, which means the control of the aircraft, in some unpredictable state, and they don't know what it is, don't know what to do about it. That will cost lives. Well, I haven't yet updated that document with the Malaysia Airlines thing, but you can read about that in "Boulder.pdf". That is the most important thing that I'm doing. It's not my favourite thing. In a way, I don't want to get involved with that stuff, but what can you do? It seems to me to be necessary.

Other things I'm doing are what an error analyst would do. There are people who want to compute the energy levels of various molecules, which are represented by differential operators, by differential equations, Schrödinger's equation. The differential operator has eigenvalues, and they'd like to know the tiny eigenvalues. But the differential operator is an unbounded operator. That means that the derivative of a wiggly function can be an awful lot bigger than the amplitude of the function. Since the function can be arbitrarily wiggly, that means that the differential operator is essentially unbounded. So when you discretize it, of course, discretizing it, now you've only got something finite. But this finite thing can get rather big, really big, because it's an unbounded operator, and you want the small eigenvalues. What's more, you'd like to know them to reasonably high accuracy. This is a challenge.

And, well, my collaborators like Jim Demmel and others have found ways to do the computation which usually give you very high relative accuracy. You usually get practically all the digits you want from the small eigenvalues, even though they're small compared with the bigger ones, which can be enormously bigger. But how do you know? How do you check it out? So I've been trying to compute error bounds which say, "Under such-and-such circumstances, you can confirm the accuracy of what you've computed," so you know, "Yes, this many digits are okay." But it isn't often that you can do that, because it isn't often you can get high relative accuracy for the tiny eigenvalues. Usually the bigger eigenvalues determine the rounding-error level. So you've got your big eigenvalues here, and you look at their rightmost digits and you imagine them being somewhat fuzzy, and as the small eigenvalue gets smaller, after a while they disappear in the fuzz. That's what usually happens, so it's only on rather special occasions that you can get high relative accuracy, where you get digits that go like that.

Well, and then I'm reviewing a crazy book called THE END OF ERROR: Unum Computing.
The end of error means that if you do what he says, you'll never get incorrect results. This is provably impossible, because, as Derrick Lehmer said, once we acquiesce to approximation, to rounding errors, you're in a state of sin. And I can go into details, but I don't think that's what you have in mind right now.

No. Alright. Well, you had mentioned that you would like to say something about the role of luck in your life.

Yes. Well, I think there are lots of people who are at least as smart as I am and maybe a lot smarter, and maybe more virtuous and maybe more industrious, and they appear to get less recognition. The difference, I used to think, was that some guys are really a lot smarter than others, but I know better now. Luck plays a large role, a very large role. You've got to have the right opportunities. It's helpful to have the right friends. There are times when something is right and other times when it isn't. And my luckiest stroke, luckiest by far, was persuading a young woman to grow old with me back in 1953, and that's what we've been doing. Without her, I'd be a lazy bum. In order to deserve her love, I had to change my ways from being relatively lazy, do-what-comes-easy, to being industrious and trying to do what's right. She has supported me, she has taken care of me. She soothes and stimulates me, and has for some 63 years. So God, I was lucky. Because she could easily have been snapped up by someone else. I look around and I see a lot of people whose marriages seem temporary. I guess they just aren't as lucky as I was, because I haven't noticed that they're stupid, I haven't noticed that they have defects of character that are obvious to me. So I think we have to acknowledge that luck plays a large role.

I was lucky that Jerome Coonen wandered in on me one day and stayed to help out. I was very lucky that I was living down here instead of in Toronto, so it was actually possible, practicable, to participate in the consulting with HP and Intel and IBM and a few others, whereas from Toronto it would have been harder.

I left Toronto partly because of the damage done by John Diefenbaker. He was elected in 1957 on an anti-American platform. It's almost unbelievable that a Canadian prime minister could be elected on an anti-American platform under the circumstances, but that's what happened. How did it happen? Well, he came from the Prairie provinces, and they grow wheat. And it's a very good wheat. Durum wheat. It's very hard, makes very good flour. Now of course processed flour may not be all that good for you, but back in 1957 people thought it was just marvellous. And the United States subsidized American wheat farmers' exports, so of course Canadian farmers got testy about that, and Diefenbaker was their champion. As far as he was concerned, the farmers, the miners, the lumbermen, they were the salt of the earth. Those businesspeople and scientists and especially the Liberals... He was a member of the Progressive Conservative Party, about which nothing is progressive, and they didn't conserve anything except their families' wealth. But anyway, he was a member of that party, and the Liberals then of course were scoundrels as far as he was concerned. And as he campaigned, the Senate of the United States was debating laws that would restrict American defense purchases to American producers, but there was an exemption for Canada, as there had been during the Second World War and continuing into the '50s. But when they heard Diefenbaker spouting off, they got into a snit and they struck out this exemption.
Now of course the exemption could be put back. I mean, Congress can change its mind if it feels like it.

That had an impact on the Avro Arrow. The Avro Arrow was a jet interceptor designed by people living in Canada, though working for an English company. You see, we had different immigration policies. Canada allowed immigrants to come if they brought with them various technical skills that the government deemed to be in short supply, whereas in the United States there was a quota system. The H-1B system hadn't really been perfected yet, and so there was a quota system. Well, you could be in a country with an unlucky quota and have several years to wait before you'd be admitted or even considered for admission. So many of the European technologists, especially the Germans and British, came to Canada, and we were happy to have them. That included people who were experts in aeronautical engineering, in ceramics, materials for jet engines, and on and on.

And we needed this aircraft because people were worried that bombers from Russia would come sneaking over the Arctic in order to bomb American cities. Well, of course the Americans would have to intercept them, but the range of American fighters was such that these bombers would be intercepted over Canadian cities. We didn't feel happy about that, so we were willing to pay taxes to have an aircraft designed that had the range to intercept the bombers way up near the Arctic Circle, where hardly anybody lived. This aircraft was capable of supersonic speeds, capable of very considerable range. It had two very powerful engines. These were Canadian-designed Orenda jet engines, which were certainly the most powerful in the Western world, if not in the whole world. And three Arrows had been built and two of them had flown.

But Diefenbaker got elected. He was not quite the Donald Trump of Canadian politics, but it was similar to that sort of phenomenon. The first thing he did was have the Avro Arrows cut up into scrap, just like that, because they represented to him the Liberal Party. He foamed at the mouth when he talked about the Liberals. Well, what that meant was that all these people in the small machine shops and the ceramic shops and the aeronautical design shops and the computing and everything else, a lot of it around the bypass highway near Toronto, all of them realized that they weren't going to be employed anymore. And Americans realized that too, so Northrop came up and they bought the airframe. And Pratt & Whitney came up and they bought the engine. And Hughes came up and bought the electronics, and so on. But they didn't just buy the parts, they bought the people.

What has to be understood, and it isn't understood enough, is that you may think you've got high tech in patents on your shelf, but the high tech resides in the minds of people. It doesn't do any good on paper in patents, or for that matter in PDF files. And if you don't provide a path for your young people to progress, then they're going to go somewhere else, and they'll take your high technology with them, non-compete agreements notwithstanding. So of course that's how we got the British, and the Germans came because things were tough in Germany. We also got others: Italians, French, and so on. So we had all these high-tech people, and in a fortnight, in two weeks, they were gone, snapped up by American companies and transported. And I didn't realize at first that this was going to impact my students, because I was still in Cambridge when it happened.
But when I came back in 1960, I could see what had happened, that the jobs available for my students were preponderantly south of the border. I felt somewhat discouraged about that. Maybe I should have stayed and fought it out, but it just seemed like, it seemed so overwhelmingly difficult to fight it in that particular political and economic climate. So when I got an invitation to come down to the United States, I thought, "Oh boy. I'll come down here for seven or eight years. By that time, maybe Canada will have recovered." Well, it didn't. And after seven or eight years, I asked my family, "Hey, we've been here eight years. How would you like to go somewhere else? We could go to London or to Adelaide or to Boston. There are all sorts of places where we could go. How would you like to get a fresh change of scene?" No, no, no! Three votes no. My two young boys were now old enough to vote, and my wife was happy here, had made lots of friends. So I was stuck.

Now I didn't suffer. The decision to stay here has not created any significant disadvantage for me. Quite the contrary. But I feel sad that Canada lost another brain. Don't have that many, you know? Well, they've got a bright young Prime Minister now, young Trudeau. He seems like a pretty nifty guy, and maybe he can reverse, or at least start the reversal of, decades of this kind of government. I think a country that does not provide a role for the young people to gain experience, to make mistakes, is wasting them. Part of it is due to not doing your duty. And you asked me what I thought was important, and I've told you another one of the things I think is important.

So that I think concludes the outline that we had. Is there anything else that you would like to mention?

Well, I don't know. I think I've kept you for long enough. It's getting very foggy out. I hope you'll be able to find your way home.
