Martin Ford: Author, Rise of the Robots: Technology and the Threat of a Jobless Future
DJ Kleinbaum: Co-Founder, Emerald Therapeutics
Brian Frezza: Co-Founder, Emerald Therapeutics
Luke Nosek: Partner, Founders Fund
Nick Pinkston: Founder, Plethora
Michael Solana: VP, Founders Fund
Solana: From the ancient Greeks’ Pygmalion to the Czech coining of the word “robot” in 1920, humanity has dreamt of building some semblance of itself from earth, in stone, in metal, for thousands of years. But by the 20th century, when robots were no longer merely the stuff of science fiction, our science fiction writers began to dream beyond the machines themselves and of the world in which those machines would be living. It was an exciting world, but it was also a world of considerable danger. What if humanity’s inventions turned against us?
I, Robot clip: We’re going to miss the good old days.
What good old days?
When people were killed by other people.
Solana: The fear of the machine is almost baked into the premise. If your goal is to build something stronger and faster and smarter than you, how, in any hypothetical worst-case scenario where you might have to stop it, would you stand any chance of doing so? Isaac Asimov famously outlined what he felt our first line of defense should be in his Three Laws of Robotics.
Isaac Asimov: The first law is as follows: A robot may not harm a human being or, through inaction, allow a human being to come to harm.
Number two, a robot must obey orders given it by qualified personnel unless those orders violate Rule No. 1. In other words, a robot can’t be ordered to kill a human being.
Rule No. 3, a robot must protect its own existence—after all, it’s an expensive piece of equipment—unless that violates Rules 1 or 2. A robot must cheerfully go into self-destruction if it is in order to follow an order or to save a human life.
Solana: These were meant to be programming guidelines to avoid any situation in which robots might hurt a human. And engineers today, especially engineers working in artificial intelligence, still refer back to them. A world of robotics or a massively automated world is coming, and even from its very first imaginings there was this question of: what does this mean for the human being? Previously in Anatomy of Next we’ve talked about power and biology. What does a world look like with unlimited power? And what does a world look like in which we control our biological destiny?
But the entrenched fears our culture holds of the nuclear sciences and of synthetic biology tend to be a little more far out. And even while these fears do shape the way we think and act, when we face the subjects directly, the flaws in their prospective nightmare scenarios become pretty obvious. But in robotics, especially in the automation of human labor, we find a much more grounded fear. And it’s not just writers or filmmakers telling the story.
News clip: The question right now: Could robots take over the workforce? A new study says that close to half of the jobs out there could be taken over by robots.
Solana: For the stuff that has people on edge today, we look to the news media. There, it’s an army of robots determined to make mankind irrelevant.
News clip: When I was going around talking to all the people who are really on the cutting edge of research for robotics, both where you are in Boston and other places, they are optimistic. They say, “Listen, this is going to free us as humans.” But when I talk to everyone else about doing this series, they were so worried about losing their jobs.
News clip 2: Well, the way technologists love to think about this is in terms of new problems to solve, new mountains to climb. But the social side of this that’s interesting is that robots only get better at everything they do today, and only cheaper. So take any job you want, draw a line into the future, and there will come a day when a robot can do that job cheaper than a human. And there’s simply no way for us to stop that dark side of technological progress.
News clip 3: Two scholars, Carl Frey and Michael Osborne, have estimated that nearly half of all U.S. occupational categories are vulnerable to automation within the next 20 years.
News clip 4: I don’t want to talk about Democrats and Republicans. I want to talk about robots. You’re going to drive up to a fast-food restaurant in the next five years. It’s just going to be a flat screen. You’re going to hit a couple of buttons, a window is going to open up and hand you your sandwich. There’s going to be no people in there. It’s just going to be a big robot.
News clip 5: It’s something to think about.
Solana: We’re going to spend two weeks on robotics. This week, we’ll look at the kind of automation that’s already happening today, especially in the automation of manual labor. And next week, we’ll focus a little more on the automation of thought and the stuff that really terrifies people in sentient AI and super-intelligence. But before we get to Arnold Schwarzenegger on a motorcycle with a shotgun and that crazy liquid-metal guy, there are a few technological steps.
Even in a world with less complex robots automating simple tasks, we’re looking at potentially—at least this is the fear we’ve been nursed on—a major economic crisis.
I called up Martin Ford. He wrote a book called Rise of the Robots: Technology and the Threat of a Jobless Future.
Ford: There are estimates that have been done at Oxford University. There are a couple of researchers that looked at this and came up with the estimate that about half of the jobs in the United States could potentially be automated because they are susceptible to machine learning. So those numbers are a little bit alarming. That’s a huge number. It doesn’t really matter whether it’s a quarter of the jobs or half of the jobs. That’s a devastating number in terms of our society and our economy.
Solana: What does a society with half joblessness look like?
Ford: Well, it could be very dystopian or it could be very utopian. That’s a choice that we’re going to have to make. The dystopia is those people are literally on the street. They don’t have the means to have a decent life even in an advanced society, and they also don’t have the ability to act as consumers in the economy and contribute to demand that is essential to economic growth.
Solana: You mentioned briefly you think maybe there’s a little bit of an elitist attitude out in Silicon Valley. Why are so many people so slow to really address this potentially huge problem?
Ford: Well, I think that if you look at Silicon Valley, everyone has got a college degree, very often in a technical area. The people that they interact with all have technical degrees. Even within that group, there is an impact. You can find people that have worked in IT, for example, in areas like system administration and so forth that have been impacted by automation.
But I think by and large in Silicon Valley, and especially among the super-elite—the people that have founded companies and become fabulously wealthy and so forth—they just don’t have a whole lot of interaction with people that are going to work and doing the same kind of thing again and again and again. Those are the people that are really going to be impacted in the near term, and that’s a very large percentage of our workforce. We need to understand it.
In the United States, we’ve got a workforce of about 140 million people. All those people are not engineers. They’re not software developers. They’re not working in artificial-intelligence research, and they’re not Silicon Valley entrepreneurs. They’re just regular people going to work, doing regular jobs that very often are repetitive and predictable. So that’s where the impact is going to be in the short term. I am worried that in Silicon Valley the connection between the people, the top thinkers in Silicon Valley, and the people doing these kinds of jobs is not very strong.
Solana: They make this argument often. You hear that, “Well, the economy is going to be so good when these machines get working that everybody’s life is going to be better.” What is wrong with that kind of thinking?
Ford: Well, I think that it’s potentially true. I absolutely believe in that promise. I just think that it doesn’t happen as the result of some automatic libertarian-type process where the market just works everything out, and then we get to that outcome.
Solana: Why not?
Ford: If jobs disappear, then incomes disappear. The way our world and our economy work right now is that jobs are a package deal. You get two things. One is that you get something to do and a sense of accomplishment and all of that. The second thing is you get an income. If jobs disappear, then income disappears. And we don’t have an adequate safety net in the United States right now to address that problem, so a lot of people will essentially be left out in the cold.
You can see evidence of that already. There are plenty of homeless people around and so forth that don’t have access to an income or don’t have any way to really enter and become a productive member of our economy already. There are huge numbers of those people. My concern is that that will get to be a bigger problem, and more and more people will essentially be left out and won’t have access to our economy and our society.
Solana: So what would you say, then, to your critics who would cite the Industrial Revolution and the Luddites and say, “Well, this is something that we’re always fearing: people in every generation lament this trend in culture and in technology. And actually, everything always turns out fine.”
Ford: Well, that’s sort of the central argument: there is certainly a historical basis for arguing that things will work out. I always say that this topic is a little bit like the story of the little boy who cried wolf. There’s been a series of false alarms raised, and it hasn’t happened. People, therefore, become kind of complacent and skeptical. And to some extent, that skepticism is justified.
But with anything, either it’s a fundamental truth and it will always be true or, at some point, it stops being true. And in the area of technology, you can point to certainly many examples where things were the case for thousands of years and then suddenly they change.
You look at powered flight and what the Wright brothers did. Reality shifted in an instant. And I think that that can happen in this case, as well.
So I do argue that this time is different. And the fundamental reason that I would point to is, number one: that machines and algorithms are beginning to, at least in a limited sense, take on cognitive tasks. They’re beginning to compete with human brain power, not just muscle power. And the second point is that this has become so ubiquitous, it’s truly a general-purpose technology.
And that’s quite different than what we’ve seen in the past, where technologies have disrupted particular industries, for example, the mechanization of agriculture, where millions of jobs on farms were lost. People did move on to other areas, yes they did; but that was a specific technology. It just impacted agriculture. It didn’t unfold across the board and scale across the whole economy the way information technology is doing.
Solana: In our conversation, Martin went on to put forward a solution to the problem. He prescribed a universal basic income. This is actually a pretty old libertarian idea, with roots in Milton Friedman and Friedrich Hayek, devised as a way both to take care of people who can’t take care of themselves and to manage what was, in their minds, an incredibly bloated government.
Today there are something like 83 overlapping welfare programs in the U.S. What if we just got rid of all of them and replaced the system with one simple idea: every week, every adult in the entire country gets a check, and you can do with that money whatever you want. Recently the UBI has picked up a lot of attention in Silicon Valley.
This week on our channel, we published some supporting material on the topic, digging into the idea a little further, featuring excerpts both from this interview with Martin and from another interview I did with economist Veronique de Rugy. I don’t think it solves the problem.
As Veronique explains in today’s supporting content: Let’s say we get a basic income. Great. Now we have a way to pay every single person in the country a little bit of money; but you still have a system in which a very small minority of people are now made fundamentally different in some way than the vast majority. You have the productive few and the subsistence-level many.
What I’m describing here is basically a terrifying and entrenched class system. The universal basic income is not a magical solution to this potentially enormous problem of mass global joblessness. It’s a huge experimental idea. We don’t know what’s going to happen in a system where people lose their work in exchange for, really, just enough money to live.
And consider the fact that it’s actually just not very likely to happen. At least it’s far less likely to happen than mass automation of work because, while technology is progressing basically predictably, there’s absolutely nothing routine or predictable about our political process. Just turn on CNN and watch it for five minutes to understand what I’m talking about here. The idea that we are all just going to peacefully and collectively decide to change our entire economic system in the next five to seven years is absolutely nuts.
And lastly, I think it’s important not to focus too much on the UBI because its basic premise fails to fully, or even largely, imagine what a world of automation is going to look like and what it’s going to mean.
The universal basic income is often conceived of as sitting on top of a world that really makes no sense: a world where robots automate at least 50% of the world’s labor and nothing else changes. That’s not an interesting idea. That’s a straw man. So let’s set it aside and talk about what the world is actually going to look like.
In other words, let’s talk about robots.
D.J. Kleinbaum and Brian Frezza are the co-founders and co-CEOs of Emerald Therapeutics and the men behind the Emerald Cloud Lab, a massive robotic laboratory automating experiments run by scientists from around the world. I talked to D.J. first.
Solana: Let’s start with: there’s a lot of fear surrounding robots taking our jobs. What do you think about this problem? And how do you think about it?
Kleinbaum: I do think about it, but not in the context of what Emerald is doing or our industry. Our whole goal with the Emerald Cloud Laboratory and with this idea of remote science in general is to empower scientists and actually to enable more people to do research. Right now the scientific world, in many ways, is not that different from what computing looked like before the personal computer became a thing—where you had to be at a giant company or at a top-tier research institute to have access to the best equipment.
Our goal with the ECL is to make it possible for anyone with an idea about a new diagnostic or a new therapeutic to do that research. So in many ways we think about it more in terms of not just empowering scientists who are already working in the field, which is obviously a huge part of what we’re doing by allowing them to run more experiments than they could do with their hands or than a small team they work with could do, but also allowing people who normally wouldn’t even have the chance to do experiments to run things. And so for us, I think about it the opposite way: we’ll allow more people to be doing science, not fewer.
Solana: What is science in a world of automation going to look like?
Kleinbaum: I think when you’ve decoupled the manual labor of running experiments from the actual hard and deep thought process that has to go into designing experiments and analyzing data, you not only let more people into this world and allow people who normally wouldn’t have access to this equipment to run experiments, but you also really empower the teams—whether they’re in academia or in small startups or even in large pharmaceutical companies—to just be much more highly leveraged and more efficient and able to do more.
I think you’ll see the number of teams multiply significantly but the size of the team shrink because you no longer need this gigantic team to undertake some giant research initiative. You can have small teams, these very small creative strike-force teams that are going after particular disease indications or diagnostics or new biofuels.
You have the ability to reduce the team size to just the scientists and the creative thinkers around a problem because what they’re doing is they’re designing the experiments. They’re analyzing the data, and they’re making decisions about what to do next. And that enables us to not only go after more things and try to solve more problems in the life sciences but to do it in a more efficient way.
Solana: You just said we’re going to be able to do this with much smaller teams, and I think that’s actually what scares a lot of people. But the other piece of that is, “yeah, we’ll have smaller teams, but we’re going to have a lot more teams.”
Kleinbaum: Right. If we’re successful, the total number of people doing research in the life sciences should be higher. One of the ways I really like to talk about this is in terms of rare or poorly served diseases, because there’s this trope people repeat about the pharmaceutical industry, even though it’s becoming less and less true over time: that these companies can only afford to go after indications where a drug is going to make a billion dollars, where you’re going to be able to recoup the very high cost of developing drugs.
If you’re able to lower the costs of developing drugs and make research more efficient, then all of a sudden—whether it’s in a big company or a small company—you can go after these smaller, rarer disease indications. And if you only need a small team of people to do that, that’s a really powerful thing.
My dream for the ECL and for this whole idea of building layers of abstraction into the life sciences is that we’ll eventually get to the point where people see advances in the life sciences with the same triviality that we see new iPhone apps. Like, “Oh, look, another rare-disease cure.” They’ll be so commonplace that people will actually start to see cures for diseases as something that’s almost trivial.
It’s not that that many jobs will go away because of this. It’s just that each scientist—and you can now more broadly define what a scientist is—will be much more highly leveraged in terms of the number of experiments they can run a day. The whole art of doing science, the idea of having to design good experiments and analyze data and see the insights in that data, that doesn’t go away. That becomes more important.
Solana: Here we have our first clear look at automation actually increasing the demand for labor. With slashed startup costs, a host of rare diseases becomes practically approachable for the first time. We don’t need fewer scientists; we need more scientists and, as D.J. said, scientists who are now more broadly defined.
So the question shifts, and becomes: How do you define yourself? Is science unfathomable to you? Have you already labeled yourself beyond scientific work? And if that’s the case, why? The longer I spoke to D.J., the more obvious it became to me that I was trying to understand what humans were going to be in this future of robots without addressing the question of what humans actually are today. What is the difference between humans and robots?
Now, if this sounds a little needlessly college-dorm-room philosophical, please just bear with me for a minute, because I do think this is an important point. And it’s one I talked about briefly with Luke Nosek, a founding partner of Founders Fund who is very interested in artificial intelligence. I want to share just a quick piece of that conversation, the rest of which we’ll listen to next week.
Nosek: People love to talk about threats: someone taking their jobs. You can see this in all kinds of political discourse we’re having today. Is someone going to replace you? What we don’t consider is: what if something just replaces the machine component of jobs? And what we don’t consider today is that there are a lot of jobs that are either physically or mentally back-breaking.
In fact, take just one thing, like farming. In farming, you used to take a hoe and a wooden plow and an ox. We invented the term “back-breaking labor” for a reason. Now you have an air-conditioned, GPS-controlled tractor that can semi-robotically drive itself around a field. We don’t look at how the quality of these jobs improves dramatically, or at how we can focus on the human component that still needs to be part of them.
If you look at a lot of occupations, there’s a human component that will take a very long time for a machine to be able to replicate it, if ever. Look at teaching, for instance. Look at the machine component and look at the human component. The machine component, it’s like an information-copier and video-delivery device. You have to copy stuff onto a board, and you have to repeat a lecture. Then an information grading mechanism—you have to sit for hours and hours grading.
All of that could be replaced by a machine that can do it much, much better and can also target individually, so that every student gets a lecture tailored to their learning level and is then immediately tested to see whether it worked or not. It’s called “adaptive learning.”
It removes all of that work and does it so much better. But what about the students in that class who still aren’t learning? The machine doesn’t know why. Maybe it’s because their family is going through a divorce and they’re in an emotional hell. A machine is not going to know that. A machine is not going to be able to do anything about that, but a teacher is.
Solana: Robots only automate the carrying out of repetitious tasks. When you remove that robotic work from a day job, what you’re left with are the qualities with which humans have defined themselves in art and literature and music forever. It’s imagination, creativity, agency—by our own definition and record, the very best of us.
Even an incredibly powerful AI, short of some hypothetical sentience which, don’t worry, we’re going to get to next week, is still just a tool. And ten years from now, twenty years from now, we’re still going to be using tools. The only difference is going to be how much one person can get done with the tools at his or her disposal.
And then there’s this question of flexibility. It’s been a relatively short time humans have considered themselves one thing: a farmer, as Luke mentioned, or a teacher, a doctor, a driver, a cook. As the shape of the world continues to change at an ever faster pace, we’re going to have to change our definition of that word, “human.” And the new definition is going to have to be a lot less robot. This is where Brian Frezza picked up.
Frezza: When we fret about whether or not this will leave us all jobless, the question to consider is not, “Should we do things in a more effective manner that leave us with more product for less work?” The right way to frame it is: “Well, how do we cope with the fact that we can’t stop learning now?” Maybe there was a time in the past where it was okay to say: “Well, I only learned things in the first three years of my career, and then I just did the same thing every day from then until when I retired.” Obviously, if we keep continuing at the pace we’re at, that’s not really possible.
We have to accept the fact that even year to year, decade to decade, we’re going to be coming in to work and coming up with new ways to do things. We’re going to have to continually be learning. Learning is a big part of the process. One of the things I test for in employees is, I look for—speaking of science fiction and nerds—you know the doctor from the first Star Trek, Bones?
He’s always getting these requests, and he responds: “Damn it, Jim. I’m a doctor, not a—” and then you fill in the blank: “car mechanic,” or whatever falls there. One of the things we look for in hiring is making sure someone is not in that camp, so they won’t say, “Well, my job description is this rigid, locked-in thing and, therefore, I can’t do Y,” because the requirements of what you’re doing day to day pull from what needs to happen, not necessarily from what you’ve prearranged as the set of activities your hands can do.
Solana: Yeah, but I think the fear is—you mentioned there was no shortage of work in the field of hardware. The answer to that from people today would be, “Well, there was no shortage of work for hardware engineers.” I guess there’s a sense that there’s this black curtain between the world of science and technology—this magical, ethereal world of mechanics—and the rest of the population. How do we get those people, maybe, into the field? What do you say to that?
Frezza: Well, I think you’ve definitely hit the nail on the head. If you want to analyze what makes people uncomfortable about the concept, it’s definitely any time you get exclusionary about it and say, “This is for this set of people and not that other set of people.”
I get the feeling it’s a Western issue of predestination, where we say, “I am this,” and that, too, I think of as reductionist. At this point, am I a scientist? Am I an engineer? I’m not sure. Am I a businessman? I don’t know. All of those things seem like reductionist views of what I can do in the world.
One thing I’ve seen for sure is that there’s a very real boundary in people’s idea of who is techie and who is not. But once you label yourself as a technological person, there’s definitely a sense in the field of not drawing boundaries, of not saying, “Well, I can’t understand this because I didn’t take a class on it.” I don’t think any self-respecting engineer or scientist would ever say that. In the field, you’d be expected to figure it out. It’s part of the culture. We laugh at the idea because it’s culturally assumed that once you’re in, you’re in. You figure it out.
The same is true I’m sure on the humanities side, too. So if you were trained as one type of writer—if you were trained as a journalist and then you wanted to write a novel, it would be kind of silly to say, “Well, I can’t possibly do this unless I go back to school and get a degree in writing novels.” It would be like, “Really?” Let’s just get some experience and figure it out.
And making those borders easy to cross, so that people who don’t start with a technological degree aren’t handed that starting point of “these people are on this side, and those people are on the other side,” would certainly be a healthy thing. We think about this. Our primary customer is a scientist. They’re all life scientists, and they may or may not come with any training whatsoever in computer science.
The majority of them don’t know how to write code. A lot of our focus in the company is making that transition as easy as possible. A lot of the software around what we’re delivering is designed so that someone who has never sat down and written a single line of code before can start using the Cloud Lab product. They’re given essentially a graphical interface that walks them through it, and it thinks about progression from the very first time you’ve logged on: how you get something done, how you start picking up skills over time, and then, “Oh, I’ve learned how to do that. Now I can do the next set of techniques to go further and further in the software.”
And bridging that gap and making sure it’s an easy thing to do seems, to me as an entrepreneur, a huge technological opportunity: coming up with the next generation of technologies that allow people who didn’t necessarily start with any technical background to sit down and, in a couple of sessions, pick up the general concepts of “How do I do X, Y, Z?” in any sort of engineering or any sort of science. That learning curve definitely could be compressed.
For someone who spent, I guess counting PhD, nine years in higher education, I can say definitively that all nine of those years were not required to get the skill set I have at the end of the day. A lot of it was just doing labor and the general inadequacies of the training system as a whole. So when we worry about the future of automation and those things, I think really the question is making sure that technology is available to get people educated and that it’s warm and inviting and that we don’t tell people: “You are in this camp, and you can’t do this.”
Solana: Jobs as we currently know and hold them are going to change. And work, the idea of what that means, is also going to change. Advances in energy and biology with automation, and maybe even the kinds of economic models popular among folks like Martin Ford, are going to eradicate material concerns like hunger and health and crippling poverty. But how will people live? How will people fit into this world?
At least in the short term, it’s important not to frame this as humans competing with robots in a static economy for a fixed number of jobs. Because, for the first time in history, the kinds of work we do will be limited only by the bounds of our imagination. With Brian and D.J., we considered men and women working on every single disease without a cure, on every single scientific query left unanswered.
And one can imagine further pursuits in art and engineering and, as Luke talked about, in education, which becomes increasingly important not only for young people but for people of all ages as we enter an age of dynamic humanity, freed from the robotic identities of the Industrial Revolution but now elevated to an almost totally unbridled ability by the robots themselves. And the expectation that people will now be learning for their entire lives, changing and growing, is far less daunting in a world of machines that teach you how to use them.
Pinkston: The noises we’re hearing are milling machines. So milling machines are basically—it’s like the opposite of a 3-D printer—instead of adding material to build something up, you take a block of material and you cut away what you don’t want. And then the thing that remains is the finished part. So that sound is the whining of the milling machine.
Solana: Nick Pinkston is the founder and CEO of Plethora, another company working on automation. But here it’s the automation of machining, or the automation of the automation.
Pinkston: Fundamentally the company is for helping people make physical products. Normally the process is you design something, you talk to someone at a factory, you have this conversation, and then eventually you make something. That whole process can take weeks to months. The point of Plethora is to make that process as push-button as ordering something on the Internet. So you can use our system to evaluate your files automatically. It prices them, tells you if there’s any feedback you need, you push a button, and the parts come out.
Solana: The notion that the job force is going to change for everyone on the planet except engineers and technologists in Silicon Valley fits very well into the popular anti-technology industry narrative but, frankly, does not correspond with reality at all. Everything is changing for everyone. This is something that Nick, at the nexus of this conversation, thinks a lot about.
Pinkston: Well, you can’t tell them what their next job is. That’s, I think, the hard part. Some people are like, “We’re just going to teach them to code.” If Plethora is any sort of rule, the machinists of fifty years ago are programmers today. And the programmers of today will be automated, too. There’s nothing special about being a programmer, actually.
It makes me wonder, “What is the ultimate job of humans?” In the beginning the job of humans was muscles. Then it was being the control on the machine. Now it’s more of being the programmer of the machine. But once that’s automated, maybe then the design of the object gets automated. I think that ultimately it’s human choice. People choose what they want. So you say, “I want an X widget,” or, “I want X experience.” And something, whether it’s the market or machines or whatever, spins up and does it.
As a society, we collectively vote to produce a thing that says, “What do we want to do?” We want parks. We want highways. Ultimately I think humans have choice, and everything else is going to be automated. And so then we’ll just have to figure out what choices we want to make. That’s the ultimate-ultimate end game on, say, some kind of AI revolution.
When people say “work,” it’s often as a pejorative: “Going to work.” “I’ve got a job.” “I’ve got to feed myself, so I need this job.” There’s a movement on the left for anti-work, and I’ve always been opposed to anti-work because I think that work fundamentally is helping people.
Most work today might not feel like that. The proper economy would remove all work that isn’t helping people. And then we have to say: “Okay, what work is helping people?” Can we help people today? Like you’re saying, what jobs don’t exist that we should fund? So maybe cancer research, all these different science things are underfunded. Maybe the arts are underfunded.
And there are a few ways of thinking about that. For science, you can increase funding or you can decrease the expenses. If the majority of, say, mechanical engineering research for robotics is actually the tools to do it, then Plethora would let you lower the costs of actually doing it.
Solana: Nick mentioned a ratio that had to be balanced. On the one hand, there’s the automation of jobs; on the other, there’s the reskilling of the labor force. It’s on this reskilling piece that we’re focused. Here we need education and we need tools that simplify the task. But four-year college degrees, or months or years spent in trade schools picking up new skills, are not going to cut it. Tools need to be more intuitively designed. And what if the technology taught you how to use it?
Engineers and designers are already thinking about this stuff. Plethora as a technology is hitting the tools side, and Plethora as a company under Nick’s leadership is already reskilling the workforce.
But we stepped back from machining for a moment and really focused on computer programming because it illustrates this broader point perfectly. Once it’s easier to understand how people are going to be working in the future, it becomes a little easier to imagine the kinds of things they’ll be working on.
Pinkston: To me, that is the main goal. How hard would Twitter be to describe to someone? “Oh. Well, you write something, check if it’s 140 characters, hit ‘post,’ and anyone who is subscribed to you—they get it, sorted chronologically.” It seems like that’s pretty straightforward, and yet the scaling of Twitter is so difficult.
I like to separate tools into interfaces and infrastructures. I think that with how programming is done right now, with Python or whatever language and frameworks you have, the two are very tied together. If you want to deploy to a server, you need to know all this stuff: the whole stack of the server and the database and all these things.
And that, to me, is infrastructure. I don’t think people need to know that in the same way that you’ve got a steering wheel but you don’t know what your engine does. Most of us don’t know that. I want to make better steering wheels, and I want the engine to just be auto-pilot.
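The posting flow Nick describes can be written down as a toy program: check the character limit, store the post, and let each subscriber read a chronological feed. Everything here, the `Feed` class, the method names, is an illustrative sketch, not Twitter’s actual design or API.

```python
# Toy sketch of the posting logic described above. Illustrative only;
# Twitter's real system is vastly more complex at scale.
from datetime import datetime, timezone

MAX_LENGTH = 140  # the character limit in effect at the time


class Feed:
    def __init__(self):
        self.posts = []  # list of (timestamp, author, text)

    def post(self, author, text):
        # "Check if it's 140 characters, hit 'post'."
        if len(text) > MAX_LENGTH:
            raise ValueError(f"post exceeds {MAX_LENGTH} characters")
        self.posts.append((datetime.now(timezone.utc), author, text))

    def timeline(self, subscriptions):
        # "Anyone who is subscribed to you—they get it, sorted
        # chronologically." Stable sort preserves posting order on ties.
        return sorted(
            (p for p in self.posts if p[1] in subscriptions),
            key=lambda p: p[0],
        )


feed = Feed()
feed.post("alice", "hello world")
feed.post("bob", "second post")
print([text for _, _, text in feed.timeline({"alice", "bob"})])
# → ['hello world', 'second post']
```

The point of the sketch is Nick’s: the interface, the few lines a person cares about, is trivially easy to describe, while all the difficulty lives in the infrastructure underneath it.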
Solana: This first robotics dystopia sits on top of three foundational premises. One, at least half the jobs done by humans are going to be taken by robots. Two, this naturally results in mass joblessness with no way at all for people without work to quickly reskill and compete with robots. And last, this mass joblessness results in major global unrest.
For the sake of argument we’ve assumed that first premise is correct, and half the jobs currently being done by humans are going to be taken by robots. But next, before we consider what people will be doing, it’s important to consider how people will be living. Here we need to think about context, and in this the fear we have of the future is fundamentally challenged, because we are approaching a world of mass automation. As illustrated by DJ, Brian, and Nick, with automation come cheaper services and products, and far faster, far more efficient production. We’re already looking at a dramatically lower cost of living, without even considering what’s happening in every other technological vertical.
For the past two weeks we’ve talked about unlimited energy. We’ve talked about the malleability of biology. We’ve talked about unlimited food, and healthcare, and the biological amplification of the human being’s natural abilities. Robots are not emerging from a giant clam in the middle of the ocean like Botticelli’s Venus. They’re coming online at a time of humanity’s broader technological maturity.
This world of mass joblessness is a lot less scary once we know mass joblessness does not necessarily mean mass poverty. And it is well within our ability to build a world where people are well fed and well cared for, regardless of how much money they have.
But then there is the question of class, and of productive, meaningful lives. A copacetic world is a world in which everyone participates. So when last century’s work begins to dry up, and it will, we need to reskill: we turn to natural language programming, to tools that teach us how to use them, and to technology that better interfaces with people and integrates into our lives. And we turn to adaptive learning programs and educational AIs that not only teach us what we’d like to be taught, but that guide us through our new society.
Nick wondered if the ultimate job of the human being was choice. And in the context of automation, that means our primary responsibility and purpose is dreaming up the shape of things to come. But is that it? Is choice the boundary computers can’t cross? And what might a world look like in which androids dream of electric sheep?
Next week: sentient machine super-intelligence.
I’m Mike Solana, and this is Anatomy of Next.
Why do we think this is important?