UNIVERSE

A DAY IN THE COUNTRY

Topic: 

  • UNIVERSE
http://vimeo.com/79412223

"Physics and everything we know in the world around us may really be tied to processes whose fundamental existence is not here around us, but rather exists in some distant bounding surface like some thin hologram, which by virtue of illuminating it in the right way can reproduce what looks like a 3-dimensional world. Perhaps our three dimensional world is really just a holographic illumination of laws that exist on some thin bounding slice, like that thin little piece of plastic, that thin hologram.

HOW FAST, HOW SMALL, AND HOW POWERFUL?

MOORE'S LAW AND THE ULTIMATE LAPTOP
Seth Lloyd
[7.22.01]

 

"Something else has happened with computers. What's happened with society is that we have created these devices, computers, which already can register and process huge amounts of information, which is a significant fraction of the amount of information that human beings themselves, as a species, can process. When I think of all the information being processed there, all the information being communicated back and forth over the Internet, or even just all the information that you and I can communicate back and forth by talking, I start to look at the total amount of information being processed by human beings — and their artifacts — we are at a very interesting point of human history, which is at the stage where our artifacts will soon be processing more information than we physically will be able to process."

THE REALITY CLUB: Joseph Traub, Jaron Lanier, John McCarthy, Lee Smolin, Philip W. Anderson, Antony Valentini, Stuart Hameroff and Paola Zizzi respond to Seth Lloyd

Introduction 

"Lloyd's Hypothesis" states that everything that's worth understanding about a complex system, can be understood in terms of how it processes information. This is a new revolution that's occurring in science.

Part of this revolution is being driven by the work and ideas of Seth Lloyd, a Professor of Mechanical Engineering at MIT. Last year, Lloyd published an article in the journal Nature — "Ultimate Physical Limits to Computation" (vol. 406, no. 6788, 31 August 2000, pp. 1047-1054) — in which he sought to determine the limits the laws of physics place on the power of computers. "Over the past half century," he wrote, "the amount of information that computers are capable of processing and the rate at which they process it has doubled every 18 months, a phenomenon known as Moore's law. A variety of technologies — most recently, integrated circuits — have enabled this exponential increase in information processing power. But there is no particular reason why Moore's law should continue to hold: it is a law of human ingenuity, not of nature. At some point, Moore's law will break down. The question is, when?"

His stunning conclusion?

"The amount of information that can be stored by the ultimate laptop, 10 to the 31st bits, is much higher than the 10 to the 10th bits stored on current laptops. This is because conventional laptops use many degrees of freedom to store a bit whereas the ultimate laptop uses just one. There are considerable advantages to using many degrees of freedom to store information, stability and controllability being perhaps the most important. Indeed, as the above calculation indicates, to take full advantage of the memory space available, the ultimate laptop must turn all its matter into energy. A typical state of the ultimate laptop's memory looks like a plasma at a billion degrees Kelvin — like a thermonuclear explosion or a little piece of the Big Bang! Clearly, packaging issues alone make it unlikely that this limit can be obtained, even setting aside the difficulties of stability and control."

Ask Lloyd why he is interested in building quantum computers and you will get a two-part answer. The first, and obvious one, he says, is "because we can, and because it's a cool thing to do." The second concerns some interesting scientific implications. "First," he says, "there are implications in pure mathematics, which are really quite surprising, that is that you can use quantum mechanics to solve problems in pure math that are simply intractable on ordinary computers." The second scientific implication is a use for quantum computers that was first suggested by Richard Feynman in 1982: that one quantum system could simulate another quantum system. Lloyd points out that "if you've ever tried to calculate Feynman diagrams and do quantum dynamics, simulating quantum systems is hard. It's hard for a good reason, which is that classical computers aren't good at simulating quantum systems."

Lloyd notes that Feynman suggested the possibility of making one quantum system simulate another. He conjectured that it might be possible to do this using something like a quantum computer. In the '90s Lloyd showed that Feynman's conjecture was correct, and that not only could you simulate virtually any other quantum system if you had a quantum computer, but you could do so remarkably efficiently. So by using quantum computers, even quite simple ones, you surpass the limits of classical computers once you get up to, say, 30 or 40 bits in your quantum computer. You don't need a large quantum computer to get a huge speedup over classical simulations of physical systems.
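
Counting amplitudes shows why 30 or 40 quantum bits already put brute-force classical simulation out of reach: a classical simulator must store 2 to the n complex amplitudes for an n-qubit system. The sketch below is my illustration of that counting argument, not something from the text above; the 16-bytes-per-amplitude figure (two double-precision floats) is an assumption.

    # Memory a brute-force classical simulator needs to hold the full state
    # of an n-qubit system: 2**n complex amplitudes. The 16 bytes per
    # amplitude (two 64-bit floats) is an assumption for illustration.
    BYTES_PER_AMPLITUDE = 16

    for n in (30, 40, 50):
        bytes_needed = (2 ** n) * BYTES_PER_AMPLITUDE
        print(f"{n} qubits: 2^{n} amplitudes, about {bytes_needed / 2**30:,.0f} GiB")
    # 30 qubits fit in a workstation's RAM; 40 qubits need ~16,000 GiB;
    # 50 qubits need ~16,000,000 GiB.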

"A salt crystal has around 10 to the 17 possible bits in it," he points out. "As an example, let's take your own brain. If I were to use every one of those spins, the nuclear spins, in your brain that are currently being wasted and not being used to store useful information, we could probably get about 10 to the 28 bits there."

Sitting with Lloyd in the Ritz Carlton Hotel in Boston, overlooking the tranquil Boston Public Garden, I am suddenly flooded with fantasies of licensing arrangements regarding the nuclear spins of my brain. No doubt this would be a first in distributed computing.

"You've got a heck of a lot of nuclear spins in your brain," Lloyd says. "If you've ever had magnetic resonance imaging, MRI, done on your brain, then they were in fact tickling those spins. What we're talking about in terms of quantum computing, is just sophisticated 'spin tickling'."

This leads me to wonder how "spin tickling" fits into intellectual property law. How about remote access? Can you in theory designate and exploit people who would have no idea that their brains were being used for quantum computation?

Lloyd points out that so far as we know, our brains don't pay any attention to these nuclear spins. "You could have a whole parallel computational universe going on inside your brain. This is, of course, fantasy. But hey, it might happen."

"But it's not a fantasy to explore this question about making computers that are much, much, more powerful than the kind that we have sitting around now — in which a grain of salt has all the computational powers of all the computers in the world. Having the spins in your brain have all the computational power of all the computers in a billion worlds like ours raises another question which is related to the other part of the research that I do."

In the '80s, Lloyd began working on how large complex systems process information. Understanding how things process information at a very small scale, and how to make ordinary stuff, like a grain of salt or a cube of sugar, process information, relates to the complex-systems work in the thesis he did with the late physicist Heinz Pagels, his advisor at Rockefeller University. "Understanding how very large complex systems process information is the key to understanding how they behave, how they break down, how they work, what goes right and what goes wrong with them," he says.

Science is being done in new and different ways, and the changes accelerate the exchange of ideas and the development of new ideas. Until a few years ago, it was very important for a young scientist to be "in the know" — that is, to know the right people, because results were distributed primarily by preprints, and if you weren't on the right mailing list, then you weren't going to get the information, and you wouldn't be able to keep up with the field.

"Certainly in my field, and fundamental physics, and quantum mechanics, and physics of information," Lloyd notes, "results are distributed electronically, the electronic pre-print servers, and they're available to everybody via the World Wide Web. Anybody who wants to find out what's happening right now in the field can go to http://xxx.lanl.gov and find out. So this is an amazing democratization of knowledge which I think most people aren't aware of, and its effects are only beginning to be felt."

"At the same time," he continues, "a more obvious way in which science has become public is that major newspapers such as The New York Times have all introduced weekly science sections in the last ten years. Now it's hard to find a newspaper that doesn't have a weekly section on science. People are becoming more and more interested in science, and that's because they realize that science impacts their daily lives in important ways."

A big change in science is taking place: science is becoming more public — that is, belonging to the people. In some sense, this is a realization of the capacity of science, because science is fundamentally public.

"A scientific result is a result that can be duplicated by anybody who has the right background and the right equipment, be they a professor at M.I.T. like me," he points out, "or be they from an underdeveloped country, or be they an alien from another planet, or a robot, or an intelligent rat. Science consists exactly of those forms of knowledge that can be verified and duplicated by anybody. So science is basically, at it most fundamental level, a public form of knowledge, a form of knowledge that is in principle accessible to everybody. Of course, not everybody's willing to go out and do the experiments, but for the people who are willing to go out and do that, — if the experiments don't work, then it means it's not science.

"This democratization of science, this making it public, is in a sense the realization of a promise that science has held for a long time. Instead of having to be a member of the Royal Society to do science, the way you had to be in England in the 17th, 18th, centuries today pretty much anybody who wants to do it can, and the information that they need to do it is there. This is a great thing about science. That's why ideas about the third culture are particularly apropos right now, as you are concentrating on scientists trying to take their case directly to the public. Certainly, now is the time to do it."

—JB

SETH LLOYD is an Associate Professor of Mechanical Engineering at MIT and a principal investigator at the Research Laboratory of Electronics. He is also an adjunct assistant professor at the Santa Fe Institute. He works on problems having to do with information and complex systems, from the very small (how do atoms process information? how can you make them compute?) to the very large (how does society process information? and how can we understand society in terms of its ability to process information?).

Seth Lloyd's Edge Bio Page


IT'S A MUCH BIGGER THING THAN IT LOOKS

David Deutsch
[11.19.00]

However useful the theory [of quantum computation] as such is today and however spectacular the practical applications may be in the distant future, the really important thing is the philosophical implications — epistemological and metaphysical — and the implications for theoretical physics itself. One of the most important implications from my point of view is one that we get before we even build the first qubit [quantum bit]. The very structure of the theory already forces upon us a view of physical reality as a multiverse. Whether you call this the multiverse or 'parallel universes' or 'parallel histories', or 'many histories', or 'many minds' — there are now half a dozen or more variants of this idea — what the theory of quantum computation does is force us to revise our explanatory theories of the world, to recognize that it is a much bigger thing than it looks. I'm trying to say this in a way that is independent of 'interpretation': it's a much bigger thing than it looks.

 

Introduction

In 1998 Oxford physicist David Deutsch was awarded the Paul Dirac Prize "For pioneering work in quantum computation leading to the concept of a quantum computer and for contributing to the understanding of how such devices might be constructed from quantum logic gates in quantum networks."

"Quantum computing," Deutsch says, "is information processing that depends for its action on some inherently quantum property, especially superposition. Typically we would superpose a vast number of different computations — potentially more than there are atoms in the universe — and then bring them together by quantum interference to get a result. Other quantum computations, notably quantum cryptography, couldn't be done by classical computers even in theory."
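
The "superpose, then interfere" recipe Deutsch describes can be seen in miniature in the two-qubit algorithm that bears his name, which decides with a single query whether a one-bit function is constant or balanced. The NumPy sketch below is my illustration of the textbook form of that algorithm under standard gate conventions, not code from the text.

    # Deutsch's algorithm: superpose both inputs of f: {0,1} -> {0,1},
    # query f once, then interfere the branches; measuring the input qubit
    # reveals whether f is constant or balanced. Illustrative sketch only.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    I = np.eye(2)

    def oracle(f):
        """U_f |x, y> = |x, y XOR f(x)>, with basis index 2*x + y."""
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        return U

    def deutsch(f):
        state = np.zeros(4)
        state[0b01] = 1                  # start in |x=0, y=1>
        state = np.kron(H, H) @ state    # superpose both values of x
        state = oracle(f) @ state        # a single query to f
        state = np.kron(H, I) @ state    # interfere the two branches
        p_x1 = state[0b10] ** 2 + state[0b11] ** 2  # P(input qubit reads 1)
        return "balanced" if p_x1 > 0.5 else "constant"

    print(deutsch(lambda x: 0))      # constant
    print(deutsch(lambda x: 1))      # constant
    print(deutsch(lambda x: x))      # balanced
    print(deutsch(lambda x: 1 - x))  # balanced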

Deutsch's work on quantum computation has led him into two important areas of research concerning (a) "the structure of the multiverse — making precise what we mean by such previously hand-waving terms as 'parallel', 'universes' and 'consists of'. It turns out that the structure of the multiverse is largely determined by the flow of quantum information within it, and I am applying the techniques we used in that paper to analyse that information flow"; and (b) "a generalization of the quantum theory of computation, to allow it to describe exotic types of information flow such as we expect to exist in black holes and at the quantum gravity level. This is all in the context of my growing conviction that the quantum theory of computation is quantum theory."

According to Deutsch, one spinoff from the quantum theory of computation is that "it provides the clearest and simplest language, and mathematical formalism, for setting out quantum theory itself."

-JB

DAVID DEUTSCH'S research in quantum physics has been influential and highly acclaimed. His papers on quantum computation laid the foundations for that field, breaking new ground in the theory of computation as well as physics, and have triggered an explosion of research efforts worldwide. His work has revealed the importance of quantum effects in the physics of time travel, and he is an authority on the theory of parallel universes.

Born in Haifa, Israel, David Deutsch was educated at Cambridge and Oxford universities. After several years at the University of Texas at Austin, he returned to Oxford, where he now lives and works. He is a member of the Centre for Quantum Computation at the Clarendon Laboratory, Oxford University. He is the author of The Fabric Of Reality.

David Deutsch's Edge Bio Page

TIME LOOPS

Paul Davies
[11.1.00]

"As providing an insight into the nature of reality, and the nature of the physical universe, this whole area is really fascinating. I've thought a lot about it over the years, and I'm still undecided as to whether nature could never permit such a crazy thing, or whether yes, these entities, these wormholes, or some other type of gravitational system do at least in principle exist, and in principle one could visit the past, and we have to find some way of avoiding the paradox. Maybe the way is to give up free will. Maybe that's an illusion. Maybe we can't go back and change the past freely."

Introduction

The theoretical physicist Paul Davies works in the fields of cosmology, gravitation, and quantum field theory, with particular emphasis on black holes and the origin of the universe. A prolific and influential popularizer of physics, he has written more than a dozen books. 

In recent years he has pursued an antireductionist agenda, making the case for moving both physics and biology onto "the synthetic path," recognizing the importance of the organizational and qualitative features of complex systems. He advocates a meeting of the minds between physicists and biologists, noting that complicated systems, whether biological or cosmological, are more than just the accretion of their parts but operate with their own internal laws and logic.

– JB

PAUL DAVIES is an internationally acclaimed physicist, writer and broadcaster, now based in South Australia. Professor Davies is the author of some twenty books, including Other Worlds, God and the New Physics, The Edge of Infinity, The Mind of God, The Cosmic Blueprint, Are We Alone?, About Time, and The Fifth Miracle: The Search for the Origin of Life.

He is the recipient of a Glaxo Science Writers' Fellowship, an Advance Australia Award and a Eureka prize for his contributions to Australian science, and in 1995 he won the prestigious Templeton Prize for his work on the deeper meaning of science. The Mind of God won the 1992 Eureka book prize and was also shortlisted for the Rhone-Poulenc Science Book Prize, as was About Time in 1996. Davies has just been awarded the Kelvin Medal by the UK Institute of Physics for his success in bringing science to the wider public.

Paul Davies Edge Bio Page

REALITY CLUB: Joseph Traub, Julian Barbour, Lee Smolin, Gregory Benford


TIME LOOPS

Topic: 

  • UNIVERSE
http://vimeo.com/79411856

As providing an insight into the nature of reality, and the nature of the physical universe, this whole area is really fascinating. I've thought a lot about it over the years, and I'm still undecided as to whether nature could never permit such a crazy thing, or whether yes, these entities, these wormholes, or some other type of gravitational system do at least in principle exist, and in principle one could visit the past, and we have to find some way of avoiding the paradox. Maybe the way is to give up free will. Maybe that's an illusion.

THE END OF TIME

Julian Barbour
[8.15.99]

Julian Barbour, a theoretical physicist, has worked on foundational issues in physics for 35 years. He is responsible for a radical notion of "time capsules which explain how the powerful impression of the passage of time can arise in a timeless world".

He lives on a farm in a village north of Oxford, and for the past 30 years he has made a living translating Russian while pursuing his interests in physics.

"I've been working for myself, following my ideas," he says. I wanted to be independent because I'm not the sort of person who can produce a lot of research papers with equations, on a regular basis — I've got quite a good intuition, at least it seems to me I'm always coming up with ideas at least for myself, and some of them stand up to the test of colleagues. I just wanted to be away of all pressure to publish just for the sake of having a publication.

In a profile in The Sunday Times (October 1998), Steve Farrar wrote: "Barbour argues that we live in a universe which has neither past nor future. A strange new world in which we are alive and dead in the same instant. In this eternal present, our sense of the passage of time is nothing more than a giant cosmic illusion. 'There is nothing modest about my aspirations,' he said. 'This could herald a revolution in the way we perceive the world.'" Cosmologist Lee Smolin notes that Barbour has presented "the most interesting and provocative new idea about time to be proposed in many years. If true, it will change the way we see reality. Barbour is one of the few people who is truly both a scientist and a philosopher."

SPECIAL RELATIVITY: WHY CAN'T YOU GO FASTER THAN LIGHT?

W. Daniel Hillis
[1.24.99]

Danny Hillis is one of the most inventive people I've ever met, and one of the deepest thinkers. He's contributed many important ideas to computer science — especially, but not exclusively, in the domain of parallel computation. He's taken many algorithms that people believed could run only on serial machines and found new ways to make them run in parallel — and therefore much faster. Whenever he gets a new idea, he soon sees ways to test it, to build machines that exploit it, and to discover new mathematical ways to prove things about it. After doing wonderful things in computer science, he got interested in evolution, and I think he's now on the road to becoming one of our major evolutionary theorists.

— Marvin Minsky

Danny Hillis, physicist and computer scientist, brings together, in full circle, many of the ideas circulating among third culture thinkers: Marvin Minsky's society of mind; Christopher G. Langton's artificial life; Richard Dawkins' gene's-eye view; the plectics practiced at Santa Fe. Hillis developed the algorithms that made possible the massively parallel computer. He began in physics and then went into computer science — where he revolutionized the field — and he brought his algorithms to bear on the study of evolution. He sees the autocatalytic effect of fast computers, which lets us design better and faster computers faster, as analogous to the evolution of intelligence. At MIT in the late seventies, Hillis built his "connection machine," a computer that makes use of integrated circuits and, in its parallel operations, closely reflects the workings of the human mind. In 1983, he spun off a computer company called Thinking Machines, which set out to build the world's fastest supercomputer by utilizing parallel architecture.

The massively parallel computational model is critical to an understanding of today's revolution in human communication. Hillis's computers, which are fast enough to simulate the process of evolution itself, have shown that programs of random instructions can, by competing, produce new generations of programs — an approach that may well lead to the first machine that truly "thinks." Hillis's work demonstrates that when systems are not engineered but instead allowed to evolve — to build themselves — then the resultant whole is greater than the sum of its parts. Simple entities working together produce some complex thing that transcends them; the implications for biology, engineering, and physics are enormous.
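
The "competing programs producing new generations" idea is, in spirit, what is now called evolutionary or genetic programming. As a toy illustration only (nothing like Hillis's actual simulations, whose details are not given here), the following sketch evolves random bit strings toward a simple fitness target by selection, crossover, and mutation.

    # Toy genetic algorithm: a population of random bit strings "competes"
    # on a fitness function; the fitter half survives and breeds the next
    # generation via crossover and mutation. Illustrative only.
    import random

    LENGTH, POP, GENERATIONS, MUTATION = 32, 50, 60, 0.02

    def fitness(bits):
        return sum(bits)  # number of 1s; LENGTH is a perfect score

    def crossover(a, b):
        cut = random.randrange(1, LENGTH)
        return a[:cut] + b[cut:]

    def mutate(bits):
        return [b ^ 1 if random.random() < MUTATION else b for b in bits]

    population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for generation in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == LENGTH:
            break
        parents = population[: POP // 2]  # selection: the fitter half survives
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    print(f"best fitness {max(map(fitness, population))} after {generation + 1} generations")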

So why is Danny Hillis working for Disney today? Well, the founder of Thinking Machines Corporation and the innovative designer of the massively parallel "connection machine" used to drive to work in a fire engine and was once a toy designer for Milton Bradley. In college, he became interested in building a computer out of anything. As a demonstration, he and some friends built the Tinkertoy computer, which was composed of 10,000 Tinkertoys and could play tic-tac-toe. His interest in building gadgets and games was, to some degree, influenced by his friend, the late physicist Richard Feynman, who would leave Caltech in the summer to go to Cambridge to work for Danny at Thinking Machines.

Part of Danny's charm is his childlike curiosity and demeanor. The first time we talked was on the telephone one Sunday morning in 1988 when he was at his home in Cambridge. We got into a serious discussion about the relationship of physics to computation. "This is interesting," he said. "I'd like to come to New York and continue the conversation face-to-face." Three hours later, my doorbell rang, and there stood a young man, looking like a clean-cut hippie. He had long hair, wore a plain white T-shirt and jeans, and carried nothing. We talked for hours.

I later returned the visit. This was a different side of Danny. He lived with his family in a huge old house off Brattle Street. The domestic scene I encountered included a bunch of babies, two au pairs (a blonde from France and a brunette from Argentina), a dog, and a houseful of interesting guests — all presided over by Danny and his wife, Patty. I sat in the living room with Danny and evolutionary biologist Stephen Jay Gould as they discussed the effect of massively parallel computers on evolutionary theory; meanwhile, Danny's mentor, computer scientist Marvin Minsky, played Mozart sonatas on the grand piano in the adjacent room.

Danny's energies have concentrated on getting processors to work together so that computation takes place with communicating processors, as happens with the Internet. The Net's potential to become an organism of intelligent agents interacting with each other, with an intelligence of its own that goes beyond the intelligence of the individual agents, is what fires Danny up. "In a sense," he says, "the Net can become smarter than any of the individual people on the Net or sites on the Net. Parallel processing is the way that kind of emergent phenomenon can happen. The Net right now is only a glimmer of that."

Danny describes the Internet of today simply as a huge document that is stored in a lot of different places and that can be modified by many people at once, but essentially a document in the old sense of the word. In principle, the Internet could be done on paper, but the logistics are much better handled with the computer. "I am interested in the step beyond that," he says, "where what is going on is not just a passive document, but an active computation, where people are using the Net to think of new things that they couldn't think of as individuals, where the Net thinks of new things that the individuals on the Net couldn't think of."

Danny asks questions like: What are the limits to what computers can do? Can they think? Do they learn? His intellectual range is startling. Unlike many other people engaged in the world of computing, he does not limit himself to any particular group of colleagues. Some of his biggest fans are among the brightest people on the planet. Marvin Minsky says, "Danny Hillis is one of the most inventive people I've ever met, and one of the deepest thinkers." Philosopher Daniel C. Dennett says, "What Danny did was to create if not the first then one of the first really practical, really massive, parallel computers. It precipitated a gold rush." Physicist and Nobel laureate Murray Gell-Mann notes that "he's not only a daring person, which we know, but also a deep thinker — and a very effective one."

THE CLOCK OF THE LONG NOW

Stewart Brand
[11.23.98]

In a sense, what we're doing with the clock is even more for time than what the photograph of the Earth did for space. Like understanding the earthly environment as one whole thing—we're trying to understand a period of time reaching 10,000 years into the past and 10,000 years into the future as one containable thought.

Introduction

When Danny Hillis first started talking about his 10,000-year clock, many of his friends worried that he was going through some kind of mid-life crisis. I was one of them. But eventually we all started listening. A group of Danny's friends, led by Stewart Brand, got together and created "The Long Now Foundation" to build the clock, and also to begin to address the bigger issue involved: how to get people to think in a longer term, how to stretch out their sense of time.

It's fitting that Stewart Brand got behind Danny's project. When I met him in 1965, he was sporting a button on which was printed: "America Needs Indians." His next conceptual piece was his 1968 campaign for a picture of "The Whole Earth," which led, in no small part, to the creation of the ecology movement. In 1983 he urged me to get involved with something called "online conferencing." This led to "The WELL" (the Whole Earth 'Lectronic Link), a precursor of the radical changes that our use of the Internet is bringing to human communications. Stewart is the king of initially obscure, ultimately compelling conceptual art. Call it reality.

A couple of years ago he was featured on the cover of The Los Angeles Times Magazine: "Always two steps ahead of others....[he] is the least recognized, most influential thinker in America." No question about it.

JB

STEWART BRAND is a founding member and a director of Global Business Network and president of The Long Now Foundation. He is on the board of the Santa Fe Institute, and maintains connections with Electronic Frontier Foundation, Wired magazine and MIT's Media Lab, while occasionally consulting for Ecotrust.

A substantial item on the Global Business Network site is a list of all the books he has recommended for the GBN "Book Club" since 1988, along with several hundred of his reviews.

GBN explores global futures and business strategy for 61 multinational clients (mostly among the Global Fortune 1,000, and half in the top 100) including ABC/Cap Cities, Arco, AT&T, Andersen Consulting, BellSouth, Leo Burnett, Fiat, IBM, Nissan, L'Oreal, Volvo, and Xerox.

He is the founder and original editor of The Whole Earth Catalog and author of The Media Lab, How Buildings Learn, and Clock of the Long Now: Responsibility and Time, forthcoming in January from Basic (a MasterMinds Book).

Stewart Brand's Edge Bio Page

ON THE NATURE OF MATHEMATICAL CONCEPTS: WHY AND HOW DO MATHEMATICIANS JUMP TO CONCLUSIONS?

Verena Huber-Dyson
[2.15.98]

In 1997, my son George Dyson handed me a batch of comments dated Oct. 29—Edge #29 ("What Are Numbers, Really? A Cerebral Basis For Number Sense"), by Stanislas Dehaene and Nov. 7—Edge #30 (the subsequent Reality Club discussion) by a group of Edge researchers. I read it all with great interest, and then my head started spinning.

Anyone interested in the psychology (or even psycho-pathology) of mathematical activity could have had fun watching me these last weeks. And now here I am with an octopus of inconclusive ramblings on the Foundations bulging in my "essays" file and a proliferation of hieroglyphs in the one entitled "doodles." It is so much easier to do mathematics than to philosophize about it. My group theoretic musings, the doodles, have been a refuge all my life.

Although I am a mathematician, did research in group theory and have taught in various mathematics departments (Berkeley, U of Ill in Chicago, and others), my move to Canada landed me in the philosophy department of the University of Calgary. That is where I got exposed to the philosophy of Mathematics and of the Sciences and even taught in these realms, although my main job was teaching logic, which led to a book on Gödel's theorems. To my mind the pure philosophers, those who believe there are problems that they can get to grips with by pure thinking, are the worst.

If I am right, you Reality people all have a definite subject of research and a down to earth approach to it. That is great.

There are two issues in your group's commentary that I would like to address and possibly clarify.

For a refutation of Platonism, George Lakoff appeals to nonstandard phenomena on the one hand and to the deductive incompleteness of geometry and of set theory on the other. First of all, these two are totally different situations, to each of which the Platonist would have an easy retort. The first one is simply a matter of the limitation inherent in first-order languages: they are not capable of fully characterizing the "intended models," the models that the symbolisms are meant to describe. The Platonist will of course exclaim: "If you do not believe in the objective existence of those standard models, how can you tell what is standard and what is not?" The deductive incompleteness of a theory such as geometry or set theory, however, simply means that the theory leaves some sentences undecided. Here the Platonist will point out that your knowledge of the object envisaged is incomplete and encourage you to forge ahead looking for more axioms, i.e., basic truths!
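
The distinction she draws between the two situations can be made concrete. What follows is my sketch of the standard compactness argument behind the first point (nonstandard models), with a one-line statement of the second (deductive incompleteness); it is textbook material, not part of her letter.

    % Sketch: why first-order axioms cannot pin down the intended model,
    % versus deductive incompleteness. Standard textbook material.
    \documentclass{article}
    \usepackage{amsmath, amssymb}
    \begin{document}
    Let $T$ be the set of all first-order sentences true in the standard model
    $\mathbb{N}$ of arithmetic. Add a new constant $c$ and the axioms
    \[
      c > 0, \quad c > 1, \quad c > 2, \quad \ldots
    \]
    Every finite subset of $T \cup \{\, c > n : n \in \mathbb{N} \,\}$ is
    satisfiable in $\mathbb{N}$ (interpret $c$ as a large enough number), so by
    compactness the whole set has a model $M$. Then $M$ satisfies exactly the
    same first-order sentences as $\mathbb{N}$ yet contains an element larger
    than every standard natural number, so $M \not\cong \mathbb{N}$: the
    first-order theory does not single out the intended model. Deductive
    incompleteness is a different matter: for any consistent, effectively
    axiomatized theory containing elementary arithmetic, G\"odel's theorem
    supplies a sentence that the theory neither proves nor refutes.
    \end{document}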

Incidentally I consider myself an Intuitionist not a Platonist.

I wonder whether it is appropriate for me to send you my rather lengthy discourse on nonstandard phenomena. You may find it tedious. Yet I believe that the question of how it is possible for us to form ideas so definite that we can make distinctions transcending the reach of formal languages is pertinent to your topic "What are numbers, really?" It is very difficult to put these phenomena into a correct perspective without explaining at least a little bit how they come about.

The other contribution is a simple illustration of the naive mathematical mind at work on the number 1729! And a remark about a prodigy.

—Verena Huber-Dyson

VERENA HUBER-DYSON is a mathematician who received her PhD from the University of Zurich in 1947. She has published research in group theory, and taught in various mathematics departments such as UC Berkeley and the University of Illinois at Chicago. She is now an emeritus professor in the philosophy department of the University of Calgary, where she taught logic and philosophy of the sciences and of mathematics, which led to a book on Gödel's theorems published in 1991.
