Life 3.0: Being Human in the Age of Artificial Intelligence


Description

Product Description

In this authoritative and eye-opening book, Max Tegmark describes and illuminates the recent, path-breaking advances in Artificial Intelligence and how it is poised to overtake human intelligence. How will AI affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology—and there’s nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who’s helped mainstream research on how to keep AI beneficial.
 
How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today’s kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?
 
What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn’t shy away from the full range of viewpoints or from the most controversial issues—from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.

Review

“Anyone who wants to discuss how artificial intelligence is shaping the world should read this book. Tegmark, a physicist by training, takes a scientific approach. He doesn’t spend a lot of time saying we should do this or that, and as a result,  Life 3.0 offers a terrific baseline of knowledge on the subject.”  —Bill Gates, “10 Favorite Books about Technology”

“Original, accessible, and provocative. . . . Tegmark successfully gives clarity to the many faces of AI, creating a highly readable book that complements  The Second Machine Age’s economic perspective on the near-term implications of recent accomplishments in AI and the more detailed analysis of how we might get from where we are today to AGI and even the superhuman AI in  Superintelligence. . . . At one point, Tegmark quotes Emerson: ‘Life is a journey, not a destination.’ The same may be said of the book itself. Enjoy the ride, and you will come out the other end with a greater appreciation of where people might take technology and themselves in the years ahead.” — Science

“Lucid and engaging, it has much to offer the general reader. Mr. Tegmark’s explanation of how electronic circuitry—or a human brain—could produce something as evanescent and immaterial as thought is both elegant and enlightening. But the idea that machine-based superintelligence could somehow run amok is fiercely resisted by many computer scientists. . . . Yet the notion enjoys more credence today than a few years ago, partly thanks to Mr. Tegmark.” —Wall Street Journal

“This is a compelling guide to the challenges and choices in our quest for a great future of life, intelligence and consciousness—on Earth and beyond.” —Elon Musk, Founder, CEO and CTO of SpaceX and co-founder and CEO of Tesla Motors

“All of us—not only scientists, industrialists and generals—should ask ourselves what can we do now to improve the chances of reaping the benefits of future AI and avoiding the risks. This is the most important conversation of our time, and Tegmark’s thought-provoking book will help you join it.” —Professor Stephen Hawking, Director of Research, Cambridge Centre for Theoretical Cosmology
 
“Tegmark’s new book is a deeply thoughtful guide to the most important conversation of our time, about how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation.” —Ray Kurzweil, author of The Singularity Is Near

“Being an eminent physicist and the leader of the Future of Life Institute has given Max Tegmark a unique vantage point from which to give the reader an inside scoop on the most important issue of our time, in a way that is approachable without being dumbed down.” —Jaan Tallinn, co-founder of Skype
 
“This is an exhilarating book that will change the way we think about AI, intelligence, and the future of humanity.” —Bart Selman, Professor of Computer Science, Cornell University

“The unprecedented power unleashed by artificial intelligence means the next decade could be humanity’s best—or worst.  Tegmark has written the most insightful and just plain fun exploration of AI’s implications that I’ve ever read. If you haven’t been exposed to Tegmark’s joyful mind yet, you’re in for a huge treat.” —Professor Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy and co-author of The Second Machine Age

“Tegmark seeks to facilitate a much wider conversation about what kind of future we, as a species, would want to create. Though the topics he covers—AI, cosmology, values, even the nature of conscious experience—can be fairly challenging, he presents them in an unintimidating manner that invites the reader to form her own opinions.” —Nick Bostrom, Founder of Oxford’s Future of Humanity Institute, author of Superintelligence

“I was riveted by this book. The transformational consequences of AI may soon be upon us­—but will they be utopian or catastrophic? The jury is out, but this enlightening, lively and accessible book by a distinguished scientist helps us to assess the odds.” —Professor Martin Rees, Astronomer Royal, cosmology pioneer, author of  Our Final Hour 

"In [Tegmark''s] magnificent brain, each fact or idea appears to slip neatly into its appointed place like another little silver globe in an orrery the size of the universe. There are spaces for Kant, Cold War history and Dostoyevsky, for the behaviour of subatomic particles and the neuroscience of consciousness. . . . Tegmark describes the present, near-future and distant possibilities of AI through a series of highly original thought experiments. . . . Tegmark is not personally wedded to any of these ideas. He asks only that his readers make up their own minds. In the meantime, he has forged a remarkable consensus on the need for AI researchers to work on the mind-bogglingly complex task of building digital chains that are strong and durable enough to hold a superintelligent machine to our bidding. . . . This is a rich and visionary book and everyone should read it."  —The Times (UK)

“Life 3.0 is far from the last word on AI and the future, but it provides a fascinating glimpse of the hard thinking required.” —Stuart Russell, Nature


“Tegmark’s book, along with Nick Bostrom’s  Superintelligence, stands out among the current books about our possible AI futures. . . . Tegmark explains brilliantly many concepts in fields from computing to cosmology, writes with intellectual modesty and subtlety, does the reader the important service of defining his terms clearly, and rightly pays homage to the creative minds of science-fiction writers who were, of course, addressing these kinds of questions more than half a century ago. It’s often very funny, too.”  —The Telegraph (UK)

“Exhilarating. . . . MIT physicist Tegmark surveys advances in artificial intelligence such as self-driving cars and Jeopardy-winning software, but focuses on the looming prospect of ‘recursive self-improvement’—AI systems that build smarter versions of themselves at an accelerating pace until their intellects surpass ours. Tegmark’s smart, freewheeling discussion leads to fascinating speculations on AI-based civilizations spanning galaxies and eons. . . . Engrossing.” — Publishers Weekly

About the Author

MAX TEGMARK is an MIT professor who has authored more than 200 technical papers on topics from cosmology to artificial intelligence. As president of the Future of Life Institute, he worked with Elon Musk to launch the first-ever grants program for AI safety research. He has been featured in dozens of science documentaries. His passion for ideas, adventure, and entrepreneurship is infectious.

Excerpt. © Reprinted by permission. All rights reserved.

THE THREE STAGES OF LIFE

The question of how to define life is notoriously controversial. Competing definitions abound, some of which include highly specific requirements such as being composed of cells, which might disqualify both future intelligent machines and extraterrestrial civilizations. Since we don’t want to limit our thinking about the future of life to the species we’ve encountered so far, let’s instead define life very broadly, simply as a process that can retain its complexity and replicate. What’s replicated isn’t matter (made of atoms) but information (made of bits) specifying how the atoms are arranged. When a bacterium makes a copy of its DNA, no new atoms are created, but a new set of atoms are arranged in the same pattern as the original, thereby copying the information. In other words, we can think of life as a self-replicating information processing system whose information (software) determines both its behavior and the blueprints for its hardware.

Like our universe itself, life gradually grew more complex and interesting, and as I’ll now explain, I find it helpful to classify life forms into three levels of sophistication: Life 1.0, 2.0 and 3.0.

It’s still an open question how, when and where life first appeared in our universe, but there is strong evidence that, here on Earth, life first appeared about 4 billion years ago. Before long, our planet was teeming with a diverse panoply of life forms. The most successful ones, which soon outcompeted the rest, were able to react to their environment in some way. Specifically, they were what computer scientists call “intelligent agents”: entities that collect information about their environment from sensors and then process this information to decide how to act back on their environment. This can include highly complex information-processing, such as when you use information from your eyes and ears to decide what to say in a conversation. But it can also involve hardware and software that’s quite simple.

For example, many bacteria have a sensor measuring the sugar concentration in the liquid around them and can swim using propeller-shaped structures called flagella. The hardware linking the sensor to the flagella might implement the following simple but useful algorithm: “If my sugar concentration sensor reports a lower value than a couple of seconds ago, then reverse the rotation of my flagella so that I change direction.”
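To make that rule concrete, here is a minimal sketch in Python. It is my own illustration, not anything from the book: the Bacterium class and the sample sugar readings are assumptions chosen only to show that the behavior is a single fixed rule, applied unchanged for the bacterium's whole lifetime.

```python
# Illustrative sketch only (not from the book): the hard-coded
# sugar-seeking rule described above, run on made-up readings.

class Bacterium:
    def __init__(self):
        self.previous_reading = None
        self.flagella_direction = +1  # +1 = keep going, -1 = reversed

    def step(self, sugar_reading):
        """Apply the fixed, never-updated rule to one sensor reading."""
        if self.previous_reading is not None and sugar_reading < self.previous_reading:
            # Sugar concentration dropped: reverse the flagella to change direction.
            self.flagella_direction *= -1
        self.previous_reading = sugar_reading
        return self.flagella_direction

bacterium = Bacterium()
for reading in [0.20, 0.30, 0.25, 0.40]:   # hypothetical sugar concentrations
    print(bacterium.step(reading))          # prints 1, 1, -1, -1
```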

Whereas you’ve learned how to speak and countless other skills, bacteria aren’t great learners. Their DNA specifies not only the design of their hardware, such as sugar sensors and flagella, but also the design of their software. They never learn to swim toward sugar; instead, that algorithm was hard-coded into their DNA from the start. There was of course a learning process of sorts, but it didn’t take place during the lifetime of that particular bacterium. Rather, it occurred during the preceding evolution of that species of bacteria, through a slow trial-and-error process spanning many generations, where natural selection favored those random DNA mutations that improved sugar consumption. Some of these mutations helped by improving the design of flagella and other hardware, while other mutations improved the bacterial information processing system that implements the sugar-finding algorithm and other software.

Such bacteria are an example of what I’ll call “Life 1.0”: life where both the hardware and software are evolved rather than designed. You and I, on the other hand, are examples of “Life 2.0”: life whose hardware is evolved, but whose software is largely designed. By your software, I mean all the algorithms and knowledge that you use to process the information from your senses and decide what to do—everything from the ability to recognize your friends when you see them to your ability to walk, read, write, calculate, sing and tell jokes.

You weren’t able to perform any of those tasks when you were born, so all this software got programmed into your brain later through the process we call learning. Whereas your childhood curriculum is largely designed by your family and teachers, who decide what you should learn, you gradually gain more power to design your own software. Perhaps your school allows you to select a foreign language: do you want to install a software module into your brain that enables you to speak French, or one that enables you to speak Spanish? Do you want to learn to play tennis or chess? Do you want to study to become a chef, a lawyer or a pharmacist? Do you want to learn more about artificial intelligence (AI) and the future of life by reading a book about it?

This ability of Life 2.0 to design its software enables it to be much smarter than Life 1.0. High intelligence requires both lots of hardware (made of atoms) and lots of software (made of bits). The fact that most of our human hardware is added after birth (through growth) is useful, since our ultimate size isn’t limited by the width of our mom’s birth canal. In the same way, the fact that most of our human software is added after birth (through learning) is useful, since our ultimate intelligence isn’t limited by how much information can be transmitted to us at conception via our DNA, 1.0-style. I weigh about 25 times more than when I was born, and the synaptic connections that link the neurons in my brain can store about a hundred thousand times more information than the DNA that I was born with. Your synapses store all your knowledge and skills as roughly 100 terabytes worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download. So it’s physically impossible for an infant to be born speaking perfect English and ready to ace her college entrance exams: there’s no way the information could have been pre-loaded into her brain, since the main information module she got from her parents (her DNA) lacks sufficient information-storage capacity.
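As a rough sanity check on those figures, here is a back-of-the-envelope sketch, assuming the approximate values quoted above (about one gigabyte of information in DNA versus about 100 terabytes in synapses):

```python
# Back-of-the-envelope comparison, assuming the rough figures quoted above.
dna_bytes = 1e9           # ~1 gigabyte of information inherited via DNA
synapse_bytes = 100e12    # ~100 terabytes stored in synaptic connections

print(f"Learned vs. inherited information: ~{synapse_bytes / dna_bytes:,.0f}x")
# -> Learned vs. inherited information: ~100,000x
```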

The ability to design its software enables Life 2.0 to be not only smarter than Life 1.0, but also more flexible. If the environment changes, 1.0 can only adapt by slowly evolving over many generations. 2.0, on the other hand, can adapt almost instantly, via a software update. For example, bacteria frequently encountering antibiotics may evolve drug resistance over many generations, but an individual bacterium won’t change its behavior at all, while a girl learning that she has a peanut allergy will immediately change her behavior to start avoiding peanuts. This flexibility gives Life 2.0 an even greater edge at the population level: even though the information in our human DNA hasn’t evolved dramatically over the past 50,000 years, the information collectively stored in our brains, books and computers has exploded. By installing a software module enabling us to communicate through sophisticated spoken language, we ensured that the most useful information stored in one person’s brain could get copied to other brains, potentially surviving even after the original brain died. By installing a software module enabling us to read and write, we became able to store and share vastly more information than people could memorize. By developing brain-software capable of producing technology (i.e., by studying science and engineering), we enabled much of the world’s information to be accessed by many of the world’s humans with just a few clicks.

This flexibility has enabled Life 2.0 to dominate Earth. Freed from its genetic shackles, humanity’s combined knowledge has kept growing at an accelerating pace as each breakthrough enabled the next: language, writing, the printing press, modern science, computers, the internet, etc. This ever-faster cultural evolution of our shared software has emerged as the dominant force shaping our human future, rendering our glacially slow biological evolution almost irrelevant.

Yet despite the most powerful technologies we have today, all life forms we know of remain fundamentally limited by their biological hardware. None can live for a million years, memorize all of Wikipedia, understand all known science or enjoy spaceflight without a spacecraft. None can transform our largely lifeless cosmos into a diverse biosphere that will flourish for billions or trillions of years, enabling our universe to finally fulfill its potential and wake up fully. All this requires life to undergo a final upgrade, to Life 3.0, which can design not only its software but also its hardware. In other words, Life 3.0 is the master of its own destiny, finally fully free from its evolutionary shackles.

The boundaries between the three stages of life are slightly fuzzy. If bacteria are Life 1.0 and humans are Life 2.0, then you might classify mice as 1.1: they can learn many things, but not enough to develop language or invent the internet. Moreover, because they lack language, what they learn gets largely lost when they die, not passed on to the next generation. Similarly, you might argue that today’s humans should count as Life 2.1: we can perform minor hardware upgrades such as implanting artificial teeth, knees and pacemakers, but nothing as dramatic as getting ten times taller or getting a thousand times bigger brains.

In summary, we can divide the development of life into three stages, distinguished by life’s ability to design itself:

• Life 1.0 (biological stage): evolves its hardware and software
• Life 2.0 (cultural stage): evolves its hardware, designs much of its software
• Life 3.0 (technological stage): designs its hardware and software

After 13.8 billion years of cosmic evolution, development has accelerated dramatically here on Earth: Life 1.0 arrived about 4 billion years ago, Life 2.0 (we humans) arrived about a hundred millennia ago, and many AI researchers think that Life 3.0 may arrive during the coming century, perhaps even during our lifetime, spawned by progress in AI. What will happen, and what will this mean for us? That’s the topic of this book.


Customer reviews

4.5 out of 5
2,265 global ratings

Top reviews from the United States

RJ
3.0 out of 5 stars, Verified Purchase
High on enthusiasm, low on content
Reviewed in the United States on May 27, 2018
While one cannot help being taken in by Tegmark's boundless enthusiasm, the book contains essentially no substantial content that I could extract. It may have some value in increasing general awareness about the dangers of AI, as well as providing an optimistic outlook for our technological future; and for those who are utterly unfamiliar with these sorts of ideas, it provides a gentle and entertaining introduction. I might recommend it to children interested in AI, and science/technology more generally, but for serious enthusiasts: don't waste your time, skip right to Bostrom's "Superintelligence" instead.
96 people found this helpful
Anthony Aguirre
5.0 out of 5 stars
A brilliant guide to the massive AI revolution headed our way
Reviewed in the United States on August 29, 2017
The first chapter of Tegmark’s new book is called “Welcome to the most important conversation of our time,” and that’s exactly what this book is. Before diving into the book, a few words about why this conversation is so important and why Tegmark is a central agent helping make it happen and, through the book, the perfect guide.

Have you noticed how you don't "solve" CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) anymore? That's because computers now can. Artificial Intelligence, after being a fairly niche area of mostly academic study a decade ago, has exploded in the last five years. Much more quickly than many anticipated, machine learning (a subset of AI) systems have defeated the best human Go players, are piloting self-driving cars, usefully if imperfectly translating documents, labeling your photos, understanding your speech, and so on. This has led to huge investment in AI by companies and governments, with every sign that progress will continue. This book is about what happens if and when it does.

But why hear about it from Tegmark, an accomplished MIT physicist and cosmologist, rather than (say) an AI researcher? First, Tegmark has over the past few years *become* an AI researcher, with 5 published technical papers in the past two years. But he's also got a lifetime of experience thinking carefully, rigorously, generally (and entertainingly to boot) about the “big picture” of what is possible, and what is not, over long timescales and cosmic distances (see his last book!) – which most AI researchers do not. Finally, he's played an active and very key role (as you can read about in the book’s epilogue) in actually creating conversation and research about the impacts and safety of AI in the long term. I don’t think anyone is more comprehensively aware of the full spectrum of important aspects of the issue.

So now the book. Chapter 1 lays out why AI is suddenly on everyone’s radar, and very likely to be extremely important over the coming decades, situating the present day as a crucial point within the wider sweep of human and evolutionary history on Earth. Chapter 2 takes the question of “what is intelligence?” and abstracts it from its customary human application, to “what is intelligence *in general*?” How can we define it in a useful way to cover both biological and artificial forms, and how do these tie to a basic understanding of the physical world? This lays the groundwork for the question of what happens as artificial intelligences grow ever more powerful. Chapter 3 addresses this question in the near future: what happens as more and more human jobs can be done by AIs? What about AI weapons replacing human-directed ones? How will we cope when more and more decisions are made by AIs that may be flawed or biased? This is about a lot of important changes occurring *right now* to which society is, for the most part, asleep at the wheel. Chapter 4 gets into what is exciting – and terrifying – about AI: as a designed intelligence, it can in principle *re*design itself to get better and better, potentially on a relatively short timescale. This raises a lot of rich, important, and extremely difficult questions that not that many people have thought through carefully (another in-print example is the excellent book by Bostrom). Chapter 5 discusses what happens to humans as a species after an “intelligence explosion” takes place. Here Tegmark is making a call to start thinking about where we want to be, as we may end up somewhere sooner than we think, and some of the possibilities are pretty awful. Chapter 6 exhibits Tegmark’s unique talent for tackling the big questions, looking at the *ultimate* limits and promise of intelligent life in the universe, and how stupefyingly high the stakes might be for getting the next few decades right. It’s both a sobering and an exhilarating prospect. Chapters 7 and 8 then dig into some of the deep and interesting questions about AI: what does it mean for a machine to have “goals”? What are our goals as individuals and a society, and how can we best aim toward them in the long term? Can a machine we design have consciousness? What is the long-term future of consciousness? Is there a danger of relapsing into a universe *without* consciousness if we aren’t careful? Finally, an epilogue describes Tegmark’s own experience – which I’ve had the privilege to personally witness – as a key player in an effort to focus thought and effort on AI and its long-term implications, of which writing this book is a part. (And I should also mention the prologue, which gives a fictional but less *science*-fictional depiction of an artificial superintelligence being used by a small group to seize control of human society.)

The book is written in a very lively and engaging style. The explanations are clear, and Tegmark develops a lot of material at a level that is understandable to a general audience, but rigorous enough to give readers a real understanding of the issues relevant to thinking about the future impact of AI. There are a lot of new ideas in the book, and although the style is sometimes breezy, that belies a lot of careful thinking about the issues.

It’s possible that real, general artificial intelligence (AGI) is 100 or more years away, a problem for the next generation, with large but manageable effects of “narrow” AI to deal with over a span of decades. But it’s also quite possible that it’s going to happen 10, 15, 20, or 30 years from now, in which case society is going to have to make a lot of very wise and very important (literally of cosmic import) decisions very quickly. It’s important to start the conversation now, and there’s no better way.
222 people found this helpful
AudreyLM
5.0 out of 5 stars, Verified Purchase
Accessible, delightful study of AI and its manifold implications
Reviewed in the United States on September 24, 2017
Max Tegmark, thank you for accomplishing the astounding feat of writing a book that will clearly delight and intrigue your fellow brainiacs, but that is also actually accessible to English majors like myself, intimidated by complex science. What an exhilarating ride!! The importance of understanding AI, the potential impact on humanity, is certainly not limited to physicists, so it is really a service to biological beings to make the subject (mostly) understandable. The topic also so quickly brings up the BIG questions . . . which continue to have no definitive answers but for which you've given us so much food for thought. I also greatly appreciated your playfulness, humility and awe. I have now purchased your book 7 times. First I downloaded the audiobook, but saw I was going to need pictures for this one and downloaded it to my Kindle. Then because with four other couples we have a monthly movie night (like a book club but with movies) and will watch "Her" at our next one, I sent copies to the other participants. This is going to be a loooooong conversation. So long in fact, that if you are reading this, I wonder if you would recommend other movies that intelligently explore this topic?
20 people found this helpful
J. Kutz
4.0 out of 5 stars, Verified Purchase
Deeper thought and action needed
Reviewed in the United States on October 27, 2017
Max Tegmark enthusiastically and excitedly writes about what life will be like for us humans with the rise in AI (Artificial Intelligence), AGI (Artificial General Intelligence – intelligence on par with humans) and the possibility/probability of creating Super-Intelligence (AI-enabled intelligence that far surpasses human intelligence and capabilities). He asks the reader to critically engage with him in imagining scenarios of what such AI reality could mean for us and to respond on his Age of AI website.

The book begins with the Tale of the Omega Team, a group of humans who decide to release advanced AI, named Prometheus, surreptitiously and in a controlled way into human society. The tale unfolds as a world takeover by Prometheus, which in a final triumph becomes the world’s first single power, able to enable life to flourish for billions of years on Earth and to spread throughout the cosmos.

If you have never read much post-modern futurology, Tegmark is a good way to take the plunge. He brings together much of the thinking about what humanity will have to deal with, the decisions it will have to make and the options it might have with the inevitable advancement of technology and specifically AI. Above all he encourages the reader to believe that she/he has an important role to play in what the future will hold for us and that we need not, indeed cannot, succumb to fatalism. The most commendable, concrete and hopeful part of the book is in his story of AI researchers coming to agreement about the path forward for AI that is pro-active in addressing the challenges it presents and the impact it will have on human society. The end of the book lays out this path in the Asilomar AI Principles, which were created, critiqued, refined and agreed through a process initiated in an AI conference in Puerto Rico in January 2015. The takeaway for Tegmark is that AI research can now confidently go forward with the knowledge that impacts and consequences for humanity have been and will be addressed in the process to mitigate any negatives. He and his colleagues deserve credit for such engagement and thoughtful commitment in their endeavors.

For the above I gave the book four stars. The book is also fun to read and challenging to our common political and economic realities. There are, however, areas of concern that are either untouched or passed over lightly, to which I now turn:

1. The quest for truth - Tegmark assumes that we have an “excellent framework for our truth quest: the scientific method.” I start my critique here because this assumption is not argued nor established. There is no argument against the formidable power of scientific methodology to give deep explanation to natural reality. However, the issue of truth is rightly not the purview of science, but of philosophy. This may seem nit-picky, but we are too used to the idea that science is the absolute arbiter of truth as though it can offer a complete picture of reality, when in fact that’s not within its job description.

2. The way Tegmark frames his definition of life is a case in point. To do this he makes two moves: first, using the scientific method he deconstructs life in a reductionist move; the second move is to decenter biotic, human life in its importance and necessity in the unfolding of what he calls Life 3.0. Tegmark's first move reduces the definition of life to “a process that can retain its complexity and replicate itself.” In this highly generalized definition he can then reduce life further to atoms arranged in a pattern that contains information.

This broad definition is important for the second move which is the decentering of biotic human life. Here he offers a post-modern notion that human life (anthropocentric) can no longer be the measure of all things. Humans have been displaced from the center of the universe in great steps since Copernicus. If we are going to promote Life 3.0, we must continue this decentering to make room for the expanded definition of life he offers. Life must now be imagined as other than biotic. It must include the possibilities imagined by our new technologies of superintelligence housed in robust substrates where human consciousness or even non-human consciousness can reside for great lengths of time and go beyond earth to the reaches of the universe. If it sounds utopian, there is that clear melody line in Tegmark’s writing, in spite of some protestations to the contrary.

This is Tegmark’s book. He can define life however he sees fit. From my perspective life was the good old fashioned, highly unlikely emergence of biotic generativity – the beginning of which we yet do not know. Evolution did its trial and error number over four billion years to produce humans. If and when there is ever the need to call something non-biotic, life, it will be apparent at that moment and not before. This does not mean that preparation for AI is not needed. It is that sapience is not sentience nor does intelligence to some superhuman degree make something life even if it can mimic or surpass human neurology. Call it what it is: a really smart human-made machine that is programed to learn, replicate, maybe have what we call consciousness and cause us all kinds of grief and gladness. Life? No.

3. It is good that Tegmark wades into the arena of ethics because they cry out for attention.
• First, can anyone actually account for, or accurately quantify/qualify, human behavior? History has yet to convince us that humans, whether naturally tending toward the moral or not, cannot be morally controlled. The scientific evidence is in our history. And yes, there are many heroes, but there are many who are classified “evil.” One need only look at the current “fad” of mass shootings in the USA. We may blame mentally unstable people for this, but we are those people. Tegmark points out that AI is morally neutral and like guns is not the evil element in the equation. But AI is initially and therefore ultimately a human endeavor and therefore is imbued with human imitation and limits. As good and needed an attempt as is made with the Asilomar AI Principles, we can be sure that AI will be used wrongly and perhaps fatally to all of life. Our certainty is because we know ourselves as humans. We are a product of Nature which models the whole spectrum of behaviors from the deeply violent to the deeply loving. More species of life on earth have gone extinct than are alive today. Dare we think that humans might escape a similar fate because we are intelligent or have benign superintelligent buddies? Before anything else can be discussed regarding the deep future of humanity, humanity itself has to come to grips with itself. Though Tegmark rhetorically acknowledges such negative possibilities, he is full steam ahead in his assumptions and commitment to the development of superintelligence.

• Second, in our modern world moral absolutes are hard to come by. In a purely naturalistic setting all morality is relative and therefore depends upon the decision of humans within a cultural setting within the personal psyches of the individuals making moral choices. It is not cynical to believe that if you scratch a beautiful public moral persona, you will get it to bleed a bewildering moral anomaly. Look at how many moral quibbles some of the scientists who were involved in developing atomic/nuclear weaponry had. When threatened, it seems “all options are on the table.” For all the good of Tegmark’s intentions this is a very uncertain area. Even his examples of several Russian men, who prevented nuclear holocaust, are frightening enough for us to understand just how serious the moment in which we live is morally. So, the question is: do we have a sufficient moral foundation and will to unleash AI invention and use?

• Third, in spite of trying to move away from human-centeredness rhetorically throughout his book, Tegmark does no better than anyone else when he, in the end, does not do so. In fact it is likely that humans will never be able to decenter themselves because all our concepts, heuristic overlays, thought processes, bodily constraints and needs make it impossible. At any rate, Tegmark, without great explanation or justification, joins others in believing that humans must spread their life and intelligence throughout as much of the universe as possible – in order to unleash its potential! That very idea is human-centered: colonialist, exploitative, presumptive and perhaps idolatrous. In a universe where life is located only on our planet, as far as we know for sure, why do we think life, our life, should interrupt that immense time/space with our angst? Do we think our machines will overcome human moral ambivalence? Why inflict our unfinished project on earth onto more territory? Why not make a moral stand to address earth and human issues so that until we have reached a greater potential morally, spiritually, intellectually, materially and relationally, we stay here and make sure our AI does too? Talk about a utopian dream! The point is that morally there is no good argument for taking human life and issues elsewhere, especially because that means unleashing the whole spectrum of human experience.

• Fourth, though the book’s subtitle is “Being Human in the Age of Artificial Intelligence,” Tegmark does not address in any depth what happens to humanity, or even whether it can last, in the face of superintelligence. This is even with the assumption that AI will be good for humans. Human and AI life forms are critically different from each other. Though there might be some compatibility between the two, AI is more like rocks and electrical switches than it is like humans. The human biotic substrate of our existence is, in comparison, obsolete. The issues this raises cannot be put aside cavalierly with the technological move of uploading our humanity into a more robust substrate. Humanity by definition is biotic. If one cannot accept Tegmark’s generous new definition of life, it means humans will be decentered in a devastating way.

4. One last thing needs mention: Tegmark’s use of the words “pessimistic” and “optimistic” in regard to the future path that AI will take. Both these words are unscientific. They describe a general psychological intuition or feeling about something based on a foundation that seems solid or not. To use such words in the context of AI value and possible future effects on humanity is misplaced. Better to stick with more concrete descriptions. One can say the same thing about Tegmark and his colleagues regarding their enthusiasm for technological future wonderments. History again has to keep us grounded. Who would have thought (no one obviously did) at the beginning of the Industrial Revolution that its descendants would see their lives threatened by a degree or two of warming because of the burning of plentiful fossil fuel? Whatever plans are put forth to mitigate the impact of humans messing around with nature, we can be assured that we will always miscalculate and create unintended consequences. Explorers, explore, but beware!
27 people found this helpful
T. V. Robertson
3.0 out of 5 stars, Verified Purchase
Written by an AI?
Reviewed in the United States on July 9, 2019
I enjoyed this author’s clear and interesting style, and that he takes pains to illuminate controversies and differing views concerning the future of AI. And he makes it very clear that discussion of AI implications matters.

His entertaining style brings in interesting facts and frameworks to help wrap our minds around these facts, and to consider and care about where things might lead, even a billion years hence.

While I very much appreciated Tegmark’s mind-expanding ruminations, there were several times his line of reasoning made me say “Huh”? It was as if this intelligent and well-informed author chose math and computation as his sole frame of reference. Like an AI would?

The first area I struggled with was his definition and characterization of intelligence: (a) intelligence is the ability to accomplish complex goals; (b) intelligence is a quantity like energy, that is in principle unlimited; and (c) with unlimited intelligence, any goal can be accomplished, subject to the laws of physics. This line of reasoning leads to super intelligent AIs rearranging particles in the known universe to satisfy a paper clip production goal, etc. Huh?

I think his definition of intelligence is problematic several ways. Clearly intelligence enables the accomplishment of complex goals, though the relationship between level of intelligence and goal complexity is tricky. Homing pigeon? Chess master? Also, actually accomplishing many goals requires matter and energy – are these parts of intelligence?

Experts such as Steven Pinker agree that intelligence is multi-faceted, and better described as a list of capabilities rather than a single capability with a range from low to high. For something specific like arithmetic, people and machines can be ranked. But human (general) intelligence is a bunch of specific capabilities integrated/embodied in a particular context.

And whatever intelligence is, it is an enabler, not omnipotent. We are not going to cure cancer just by being smarter.

The second area I struggled with is what I think is a confusion of the map with the territory.

Just because we can model brains as mathematical artificial neural networks (ANNs) doesn’t mean they are ANNs. Just because ANNs can do some wonderful things, it doesn’t mean they can in principle achieve human-level intelligence. For example, no ANN can learn a human language with as few examples as are heard by a human child. And the intelligence exercised by human engineers in designing ANNs is in a whole different league than that of the AIs they create.

At a more abstract level, computation and mathematics are human inventions that allow us to model and predict reality. It does not follow that human level intelligence can be created by solving some arbitrary number of mathematical functions or even by running an infinitely fast universal computer. (Unless perhaps you find “42” a useful answer to “What is the answer to life, the universe, and everything?”!)

Another of our abstractions, the model of an intelligent agent as a problem-solving goal-seeker, does lead to useful artifacts but it is likely a significant oversimplification of how organic intelligence works. For example, emergence is everywhere in natural intelligence, with the implication that some capabilities of natural intelligence might not be reachable through an intelligent agent model.

Although I find difficulty with the author’s arguments for the plausibility of human level AI and an intelligence explosion, I agree with his fundamental themes: nobody knows for sure, there are risks, and we need to talk about it. His book is an entertaining and valuable contribution to this discussion.
10 people found this helpful
Gilbert Reeser
5.0 out of 5 stars, Verified Purchase
Approaching the greatest transition in human history?
Reviewed in the United States on November 24, 2017
We may be approaching the greatest transition in history. This change is more dangerous than global warming or nuclear weapons. It is the possibility that superintelligence can be created by artificial intelligence which has been created by us. The idea is that once we have crossed a threshold in AI where consciousness is achieved, it can improve on itself, making itself smarter and smarter in a cycle of redesign, much smarter than any human. Then what?

The book explores the possibility that we may be closer to this than most people think. Maybe by mid-century. Once the threshold is reached, like critical mass in a nuclear reaction, the process could happen very quickly. It is not an issue of robots taking jobs, it is an issue of superintelligence taking control of civilization. Max Tegmark explores many possible scenarios that might ensue. Some are good and some are bad.

This is a very disturbing book. It is not particularly well written. I gave it five stars for the message but would give it only three stars otherwise. Too many details on organization and some name dropping. But what a message! Since he is a physics professor at MIT you can be confident of the material. It makes your mind whirl.

We have some experience with NAI (non-artificial intelligence). There are over 7.6 billion of us alive right now. But we are optimized for survival and reproduction and cannot easily improve our intelligence with software updates. So all existing NAIs have logic flaws and are often irrational. Still, an occasional NAI has been able to take control of a big part of civilization, like Hitler and Stalin. The problem is that AI may produce only one example of superintelligence and no one knows what that will be like. At least we NAIs have also produced George Washington and Charles Darwin, etc. On balance we are getting better and better at improving our lot. Read "The Better Angels of Our Nature" by Steven Pinker.

There are some positive ideas presented in "Life 3.0." One is the issue of free will, which has been debated endlessly by philosophers of science:
"Any conscious decision maker will subjectively feel that it has free will, regardless of whether it's biological or artificial."
So if a superintelligence does take over, it won't be debating the issue with you.
9 people found this helpful
Doug D.
4.0 out of 5 stars, Verified Purchase
You may lose some sleep, but you need to read it
Reviewed in the United States on June 1, 2018
Other reviewers commented on the writing style, but didn't want to go down to 4 stars, because the topic is so important. I agree the topic is important and the content of the book is great. But, I think it is important that someone buying the book knows what to expect, so I gave it 4 stars. At times the writing is tedious, and it would have been hard to get through the book if the topic wasn't so interesting. I accept that the author is a brilliant scientist, but his writing style does not draw the reader in. So, buy the book, read it, think about it, but be prepared to work some on reading it.

The content was more varied than I expected. I expected the history lesson on information technology. I expected the sections about advances in Artificial Intelligence (AI) and how they will be (are) smarter than us. And, I expected the sections on goals for AI development. I did not expect the sections on limits of technology, based on physical laws (as we currently know them), and how AI could eventually approach those limits. I expected a discussion of consciousness, but not a scientific approach to determine if a system is conscious or not. This book will give you a lot of important things to think about.

Personally, I don't think the author will ever come up with a scientific method to determine if an AI system is conscious. Being conscious is the only thing we really know about ourselves (Descartes, et al.). But, it is something we can never know about someone else. We take people's word that they are conscious; which makes sense, since it is coincident with our own experience. But, we can't objectively know it is true. Will we take an AI system's word for it too? I think we will have to. But, I wish the author well. I hope I am wrong. I hope he succeeds in developing a method to tell if an AI system is conscious.

As far as the guidelines for AI development, assembled by the FLI Team, I think they look great. But, the hardest part will be enforcement. Even if 90% of people accept the guidelines, the 10% who don't can pose a huge danger. As AI advances, it will likely be impossible for people to enforce AI rules. We will likely need to develop AI to police other AI.

Anyway, read the book. Think about these things. If you don't lose a little sleep over some of these things, then you are probably not thinking deeply enough. But, ultimately I agree with the author's optimism. I think it will turn out good.
6 people found this helpful
Orange Monkey
5.0 out of 5 stars, Verified Purchase
Yes. You DO want to join this conversation. Read it, share it and discuss it!
Reviewed in the United States on April 17, 2018
Fantastically well-written book, Max! (Not translated by Google, but written by a Swedish woman, at an American desk.)

The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.
Isaac Asimov

A quote from this highly accessible and eye-opening book which could be introduced with the title of the first chapter "Welcome to the Most Important Conversation of Our Time".

Technological ability is exploding. Concurrently we're facing massive challenges when it comes to dysfunctional, unsafe societies and poor education. There is a growing gap.

Don't expect billions-of-years-long trajectories of intelligence finding their way forward to be stopped simply because biology is no longer the best vehicle. The question we need to concern ourselves with is: how do we want to guide this development? What guardrails should be put in place, where and by whom?

Sun-Tzu said: "A wise leader always considers advantages and disadvantages equally."
Tegmark is a strong, powerful force in this movement: to create a rational and reflective dialogue about what Life 3.0 will be like.
Move away from sensational, divisive media stories, read this book, reflect and discuss over a glass of wine.

You don't think it'll happen? Or at least not during your lifetime?
Read the book.

A phenomenon has been observed when it comes to online social communities: whatever you are when you are small grows with your size. Thus, those communities that don't establish the right culture from the start implode from trolls or other kinds of destructive misbehavior.
We had better equip ourselves, our societies and our cultures to deal with the wave of artificial intelligence that is coming. There is still time.

But spring is coming if we make it such.
6 people found this helpful

Top reviews from other countries

Rahul Madhavan
2.0 out of 5 stars, Verified Purchase
Place in your library after Ray Kurzweil and Nick Bostrom
Reviewed in India on July 19, 2018
Adds to a wonderful discussion about the risks of AI, in which the seminal work so far has been by Nick Bostrom. But some extrapolations are drawn out of thin air. It looks like Tegmark is an expert on math/physics, but has only a cursory knowledge of the electronics behind the working of computers. Maybe he has forgotten his computer architecture basics.

Content: Tegmark writes on a topic that's not his area of expertise; he is a physicist and a reductionist. I'm not sure real AI and consciousness shouldn't be viewed like other emergent fields such as biology or, more specifically, neuroscience. It doesn't lend itself to the sparseness of equations (especially when they have not been clearly delineated); this focus had me a bit disappointed with the book.

When to read: This is definitely not the first read for anyone interested in the field of AI security. There is no alternative to Bostrom there. If you are in general interested in superintelligence, you should start with Vernor Vinge's original paper and then move to Ray Kurzweil. In both of these aspects this could be a supplementary read.

Standouts: There are some sublime pieces of writing where Tegmark's clear logic and incisive thought made me go wow. For example, when he tries to derive subprinciples and goals from any possible ultimate goals (page 264), or in the section where he defines consciousness through our knowledge of decision making (page 312). These were fresh perspectives for me and were my main takeaways from the book.

Depth: I felt much more could be said about the neuroscience of our brain (given that he chose to touch on the topic). If the topic of the book was AI security, more could be said about why each principle was taken up. Possibly there was too much emphasis (without enough mathematics) on going directly from physical substrates (quarks, electrons) to "sentronium" (sentient matter).

Fiction piece: The prelude is a masterful imagination of the future and reads like a fast sci-fi piece. Loved it.

Book quality: The print quality (I got the hard copy) is excellent for the price.

Disclaimer: As an AI researcher, my views may be colored by higher expectations.

PS: Irrespective of whether you buy the book, head over to futureoflife.org - it's a movement started by Tegmark that deserves a read-through.
79 people found this helpful
Daniel
3.0 out of 5 stars, Verified Purchase
Should have read the blurb more closely
Reviewed in the United Kingdom on January 10, 2019
I’m sure this book is well written and, given the number of positive reviews, has credibility. However, I really didn’t enjoy it and ended up skimming large chunks of it to the synopsis at each chapter. I bought it having become intrigued by AI after watching the Go documentary on Netflix and wanted to find out a bit more about the subject. This book doesn’t really do that (apart from the first few chapters) but is more of a societal analysis of the potential dystopian effects of AI, which reads like bad sci-fi and has very little depth. Good book but I’ll stick to the Google AI blog.
22 people found this helpful
John
5.0 out of 5 stars, Verified Purchase
Thought provoking humanistic paradigm
Reviewed in the United Kingdom on August 12, 2018
The future of intelligent life considered in an entirely humanistic paradigm. The meaning of the universe? None, unless there's an intelligent being - either human or created by humans. No possibility of religious meaning is entertained here. So this book is completely divorced from thousands of years of human culture and understanding. The AI that we end up creating is going to outstrip not just the speed of human thought (that has already happened) but the flexibility and imagination of human thought, by recursive self-modification. Its claim on the biosphere will likely displace human claims on the same resources. The implications for the future of humanity are both startling and horrific. Since it will outstrip human intelligence, any limitation placed on its goals is unlikely to apply for long. And since rival military powers will likely want the assistance of recursive intelligence for their programs, any optimism on the author's part appears misplaced. Assuming, of course, that humans are alone in the universe without God to oversee future events. This possibility isn't considered.
16 people found this helpful
Philip M
5.0 out of 5 stars, Verified Purchase
An essential read for anyone who might believe that AI is overhyped!
Reviewed in the United Kingdom on August 12, 2020
Life 3.0 poses an interesting question: What happens when humans are no longer the smartest species on the planet? Tegmark has written a compelling, challenging analysis of the choices facing us as we create ever more powerful AI supercomputers; will they usher in a new era – or will they replace us? This is a tale about our own future with AI. Tegmark covers concepts from computing to cosmology with extraordinary clarity, whilst reminding us that many of these ideas were created by science fiction writers more than 50 years ago. And throughout he asks us to consider how we want AI to impact our lives, jobs, laws and weapons. How will we live with a greater intelligence than our own, of our own creation? He doesn’t offer any simple answers to the challenge, but instead sets the reader thinking about what kind of future we would want to create. He does this in an insightful, unintimidating way that invites you to come to your own conclusions. Life 3.0 is an exciting, accessible read that has helped me think anew about the future in a world with artificial intelligence. Will it be Utopia or a catastrophe?
4 people found this helpful
Paolo66
3.0 out of 5 stars, Verified Purchase
Lost me half way through
Reviewed in the United Kingdom on August 27, 2018
I enjoyed some of the first half as it raised so many questions of a philosophical nature but the second half became so technical it lost me and I just sped through it. I do appreciate the important work the FLI is doing though.
9 people found this helpful
