This content is provided in partnership with Tokyo-based startup podcast Disrupting Japan. Please enjoy the podcast and the full transcript of this interview on Disrupting Japan's website!
Japan is lagging behind in AI, but that might not be the case for long.
Today we sit down with Jad Tarifi, current founder of Integral AI and previously founder of Google’s first generative AI team, and we talk about some of Japan’s potential advantages in AI, the most likely path to AGI, and how small AI startups can compete against the over-funded AI giants.
It’s a great conversation, and I think you’ll enjoy it.


***
Transcript
Welcome to Disrupting Japan, Straight Talk from Japan’s most innovative founders and VCs.
I’m Tim Romero and thanks for joining me.
Japan is lagging behind in AI, but that was not always the case. And it won’t necessarily be the case in the future.
Today we sit down with Jad Tarifi, current founder of Integral AI, and previously founder of Google’s first generative AI team.
We talk about his decision to leave Google after over a decade of groundbreaking research to focus on what he sees as a better, faster path to AGI, or artificial general intelligence, and then to superintelligence.
It’s a fascinating discussion that begins very practically and gets more and more philosophical as we go on.
We talk about the key role robotics has to play in reaching AGI, how to leverage the overlooked AI development talent here in Japan, how small startups can compete against today’s AI giants, and then how we can live with AI and how to keep our interests aligned.
And at the end, one important thing Elon Musk shows us about our relationship to AI. And I guarantee it’s not what you, and certainly not what Elon, thinks it is.
But you know, Jad tells that story much better than I can.
So, let’s get right to the interview.
(Part 3 of 4. Continuing from Part 2)
***
Interview

Tim: Excellent, do you want to talk about AGI?
Jad: Sure.
Tim: All right, before we dive in let’s do definitions. How do you define AGI?
Jad: For me, AGI is the ability to learn new, unseen skills. And this ability to learn new skills has to be executed, first, safely, so without unintended side effects, and second, efficiently, so with energy consumption at or below a human learning that same skill. Learning new skills as opposed to existing skills. Why? Because you can always brute force existing skills by training on a lot of human data. So the key is not existing skills, but the ability to acquire new skills.
Tim: But haven’t we already crossed that threshold? I mean, don’t we have both robots and software that can learn new skills and adapt?
Jad: If you limit the skill scope, you can have a model that can learn. But if you’re talking about a completely unseen skill, we can brute force it by giving a lot of examples of that skill, and that gives you the lack of efficiency. Or we could have the robot just try every potential movement itself, and that leads you to the lack of safety.
Tim: I mean, those are worthy goals for any type of AI. It seems like they should be kind of table stakes. You know, you don’t want to burn down the factory, you don’t want to waste resources, you want to learn new skills. But it seems like a definition of general intelligence should have some form of intention or autonomy. Like, would it be learning skills that the software decides on its own that it wants to learn? It just seems like there should be something more.
Jad: I would separate AGI from autonomous general intelligence. Artificial general intelligence is just the ability to solve problems across the world. So it’s like a neocortex, it has little to do with free will or autonomy. That’s something we can add on top. And I’m happy to discuss that.

Tim: Oh, no, let’s discuss. This is great. When I discuss AGI, I usually think of it in terms of some form of consciousness and self-awareness. Do you think that is a necessary component or is that something that can be mixed in later?
Jad: I will discuss it, but let me just take one step back. Just because you don’t have intention of your own and you just respond to the user’s request doesn’t mean the user can’t give you a very high-level intention. Like, find out the truth of the universe, or become as powerful as possible. And then that becomes an inner sub-goal. So there are higher-level goals, and if you give me a high enough goal, you can always create sub-goals and sub-goals of those. And from the outside it looks like a completely autonomous being. So in a sense, what I’m saying is AGI doesn’t have to have intentionality. Intentionality is something you get for free.
Do you need self-awareness and consciousness? I think we need to discuss the meaning of the word self-awareness. If you mean having a model of yourself, you absolutely need to have that. Even ChatGPT has a model of itself. You can ask it, who are you?
As for consciousness, you can divide consciousness into two components. One is what we call an inner theater, and you absolutely need to have that. An inner theater just means I can imagine what the world is like. I can create this kind of internal representation of the world that I can play with, that I can control, that I can do experiments with in my mind. And I think that’s necessary. There’s another component of consciousness that goes a little bit beyond that, and that’s usually discussed in the mind-body problem or in the context of physics. For that, I think it really depends on how you define consciousness.
Tim: Okay. To drill down on something you said, you were talking about designing an intention and designing this kind of intelligence, but you also mentioned that like you get this intention for free, that it’s kind of an emergent property. And I think like intelligence itself and probably consciousness as well is an emergent property in humans and it quite likely will be in AGI as well. It seems to me it’s more likely that AGI will emerge rather than be designed in.
Jad: I think by AGI here you mean the inner theater.
Tim: The inner theater, yes.
Jad: This will be learned, yes. I think having a world model means you’re able to simulate universes in your mind. And that is something that is learned by the model. So I would say it’s something that will happen naturally when we have AGI. What I was mentioning about intentionality is a little bit different. So you can ask ChatGPT, hey, you are a doctor, act like a doctor. So now you’re giving it the intention of being a doctor. But ChatGPT is more powerful than any single intention. It lets you give it the intention.
Tim: But that’s still your intention. It’s not ChatGPT’s intention of being a doctor.
Jad: Yes. The point is, the way we train the models right now, we train them across intentions, so we get to decide after the training: hey, I want you to have this intention for now. And one of the things that has fascinated me for years is, what is the intention you want to give to an AGI to let it scale to superintelligence? It comes down to giving it the intention of amplifying what we call freedom, or a sense of agency, not for itself, but for the entire ecosystem around it. If you define freedom carefully as a state of infinite agency, then you have a very strong story for how this can be the right intention for an AGI to achieve superintelligence. An intention that not only guides it, but also gives it alignment with human values and makes it beneficial.

Tim: I want to get to alignment, but when we’re talking about an intention, if we have AGI that emerges, there’s no survival instinct, there’s no desire to reproduce, there’s no response to pleasure and pain. These are the things that pretty much all of human intelligence evolved to deal with, and at some level of abstraction or another, it’s still mostly what we humans are focused on. So would we even be able to recognize an intelligence that’s emergent from artificial intelligence?
Jad: So the way I like to think of AGI, it’s not replicating the entire brain, it’s just replicating our neocortex. And the neocortex, all it’s trying to do is minimize surprise. Now, the limbic system and other parts of your brain and body, they have expectations. You don’t want to be hungry. Being hungry is very surprising. All of these are fundamental drives that we have, that we learned across evolutionary history. The neocortex comes in and says, I don’t know what these motivations are, I just don’t want them to be surprised. And then all the other stuff emerges from that. And in that sense, when we build AGI, what I’m building is the neocortex, and then we can add those drives if we want. All these drives can be adapted to human drives. For example, a neocortex that is focused on you might adopt your drives as its own drives. In that sense, that AGI would be there as an augmentation of who you are.
Tim: In a sense, without all this emotional and evolutionary baggage, the AI intelligence is very likely to emerge as just a problem-solving, optimizing machine.
Jad: Exactly.
Tim: That’s pretty interesting. So the alignment problem is always talked about like, okay, how do we keep these AIs from rising up and killing us? But I think it’s simpler than that. I think it’s addressed really well by what you just brought up. So technology progresses incredibly quickly, and so the time period between when robots can do your dishes and when robots start wondering why they have to do your dishes is really small. But if I’m understanding what you’re saying correctly, they don’t have the emotional baggage to feel resentment or to feel like they’re being exploited, and they would just focus on problem solving and optimization.
Jad: Yeah. They can simulate feeling that way though.
Tim: If we put that into them?
Jad: We don’t have to put that into them. That’s the beauty of building products with AGI: you get to integrate it with human civilization in a way that actually empowers us. But once you want to make them fully autonomous, then we have to ask ourselves what fundamental intention or aspiration they should seek.
Tim: Well, actually, let me back up just a minute. One thing does occur to me: robots are a little bit different because robots can interact with the outside world. They would have a rudimentary sense of pain and pleasure. They can break. They’re not self-contained beings like software is. So wouldn’t they start to develop some of their own evolutionary baggage, if you will?
Jad: Well, just because something can break doesn’t mean that it can feel pain or pleasure from breaking.
Tim: I guess you’re right. I’m kind of taking my own biases and evolutionary baggage and projecting them onto the robots, aren’t I?
Jad: It’s natural that we do.
(To be continued in Part 4)
In Part 4, we’ll focus on AI alignment and discuss the coexistence of AGI and humanity.
Top photo: Envato
***
Click here for the Japanese version of the article