Table of contents

0:00 AI guru
2:11 What is AGI, and what are the risks it could bring?
3:51 Nobody knows why AI does what it does
5:41 From an AI enthusiast to advocating for a more cautious approach to AI development
7:58 What does Connor expect to happen when we lose control to AGI?
11:40 People like Sam Altman are rockstars
14:38 Connor's vision of a safe AI future
15:24 Imran: "One year left?"
17:26 Normal people do have the right intuition about AI
20:58 ChatGPT, limitations of AI, and a story about a frog
24:53 Control AI
Video tags

AI doom
AI extinction
AI manipulation
AGI
AI development
AI laws
AI regulation
Artificial General Intelligence
ChatGPT
Conjecture
Connor Leahy
Connor Leahy Imran Garda
Connor Leahy TRT World
Connor Leahy interview
Geoffrey Hinton
OpenAI
Sam Altman
Tech people
AI
artificial intelligence
Control AI
robots
tech
technology

Transcript

[00:00:00] Narrator: Connor Leahy is one of the world's leading minds in artificial intelligence. He's a hacker who sees the rise of AI as an existential threat to humanity, and he has dedicated his life to making sure its success doesn't spell our doom.

Connor Leahy: "There will be intelligent creatures on this planet that are not human. This is not normal, and there will be no going back. And if we don't control them, then the future will belong to them, not to us."

Narrator: Leahy is the CEO of Conjecture, a startup that tries to understand how AI systems think, with the aim of aligning them to human values. He speaks to The InnerView about why he believes the end is near, and explains how he's trying to stop it.
[00:01:19] Imran Garda: Connor Leahy joins us now on The InnerView. He's the CEO of Conjecture, and he's in our London studio. Good to see you there; good to have you on the program. Connor, you're something of an AI guru, and you're also one of those voices saying we need to be very, very careful right now. A lot of people don't quite have the knowledge, the vocabulary, or the deeper understanding as to why they should be worried. They just feel some sort of sense of doom, but they can't quite map it out. So maybe you can help us along that path: why should we be worried about AGI? And tell me the difference between AGI and what is widely perceived as AI right now.

Connor Leahy: I'll answer the second question first, just to get some definitions out of the way. The truth is that there's really no true definition of the word AGI, and people use it to mean all kinds of different things. When I talk about AGI, usually what I mean is AI systems, or computer systems, that are more capable than humans at all tasks they could do. This involves any scientific task, programming, remote work, science, business, politics, anything. These are systems that do not currently exist, but that people are actively attempting to build. Many people are working on building such systems, and many experts believe these systems are close. As for why these systems are going to be a problem, I actually think a lot of people have the right intuition here. The intuition is just: if you build something that is more competent than you, that is smarter than you and all the people you know and all the people in the world, that is better at business, politics, manipulation, deception, science, weapons development, everything, and you don't control those things, which we currently do not know how to do, well, why would you expect that to go well?

Imran Garda: It reminds me a little bit of the debate about whether we should be looking for life in the universe beyond our solar system. Stephen Hawking said: be careful, look at the history of the world; any time you invite in a stronger, more competent power, they might come and destroy you. But the counter to that is that you're mapping human behavior, human desires, passions, needs, wants, onto this thing. Is that natural and fair to do, given that humans created it, humans created the parameters for it?
[00:03:57] Connor Leahy: It's actually worse than that. It's really important to understand that when we talk about AI, it's easy to imagine it as software, and the way software generally works is that it is written by a programmer: they write code, which tells the computer what to do, step by step. This is not how AI works. AI is more organic; it is grown. You use these big supercomputers to take a bunch of data and grow a program that can solve the problems in the data. Now, this program does not look like something written by humans. It's not code, it's not lines of instructions; it's more like a huge pile of billions and billions of numbers. And we know that if we execute these numbers, they can do really amazing things, but no one knows why. So it's far more like dealing with a biological thing. If you look at a bacterium, the bacterium can do some crazy things, and we don't really know why, and this is roughly how our AIs are. So the question is less "will humans impart emotions into these systems?", because we don't know how to do that. It's more: if you grow systems, if you grow bacteria, that are designed to solve problems, to solve games, to make money or whatever, what kind of things will you grow? By default, you're going to grow things that are good at solving problems, at gaining power, at tricking people, at building things, and so on, because this is what we want.
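Leahy's "pile of billions of numbers" can be made concrete with a toy example. The following minimal sketch is an illustration only, not drawn from the interview or from Conjecture's work: it "grows" a tiny neural network until it solves XOR, and the finished program works, yet it is nothing but arrays of floating-point numbers with no human-readable logic inside.

```python
# A minimal sketch, not Leahy's work and not Conjecture code: it only
# illustrates the point that a modern AI is "grown", not written.
# We never write down the rule for XOR; gradient descent grows a pile
# of numbers that happens to implement it.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# The entire "program" is these arrays of numbers, nothing else.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)        # forward pass: "run the numbers"
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                 # gradient of the cross-entropy loss
    d_h = (d_out @ W2.T) * (1 - h**2)
    W2 -= 0.5 * (h.T @ d_out)       # nudge every number downhill
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approximately [0, 1, 1, 0]: XOR solved
print(W1)                    # ...but the "solution" is just these floats;
                             # nothing in them reads as "XOR" to a human
```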
[00:05:38] Imran Garda: You reverse-engineered GPT-2 at the age of 24, which was a few years ago. That's part of the legend; it's part of your credentialing. Before they say "this guy is saying we're in big trouble", they say, "by the way, he knows what he's talking about, because technically he knows what he's doing." Tell me about the pivot point between being a believer, enthusiastic about this, and becoming a warner. What happened?

Connor Leahy: The story goes back even further than that. "Reverse-engineering" is a bit generous; it's more like I built a system, found out that no one can reverse-engineer it, and realized that this is a big problem. But it started even before then. I've been very into AI since I was a teenager, because I want to make the world a better place, and I think that a lot of people who believe in AI, a lot of the tech people who are doing the things I think are dangerous, most of them are probably good people. They're trying to build technology to make the world a better place. When I grew up, technology was great: the internet was making people more connected, we were getting access to better medicines, solar power was improving. There were all these great things that science was doing, so I was very excited about more science and more technology. And what better technology is there than intelligence? If we just had intelligence, wow, we could solve all the problems, we could do all the science, we could invent all the cancer medicines, we could develop all the cool stuff. That's what I was thinking as a teenager, and I think this is a common trajectory: when people are first exposed to these techno-utopian AGI dreams, it sounds great, it sounds like such a great solution. But as you think about the problem more, you realize that the problem with AGI is not really how to build it; it's how to control it, and that's much harder. Just because you can make something that is smart, or that solves a problem, does not mean you can make something that will listen to you, that will do what you truly want. This is much, much harder. And as I started looking into this problem in my early twenties, I started realizing: wow, we are really, really not making progress on this problem.
[00:07:57] Imran Garda: So in that worst-case scenario, whether we have an apocalyptic ending where we all get destroyed, or we become enslaved in The Matrix, or whatever it might be: tell me how it actually happens, in your mind. How does this AGI assume control? There are these famous moments in Terminator and elsewhere; one of the Terminator films has that final scene where the nuclear bombs are going off everywhere. There are lots of different ways people have imagined this. The way you see it, tell me how it happens, and, if things continue to go in the direction that you fear, how long it will take to get there.

Connor Leahy: Of course, I don't personally know how exactly things will play out; I can't see the future. But I can give you a feeling of how I expect it to feel when it happens. The way I expect it to feel is kind of like playing chess against a grandmaster. Now, I'm really bad at chess, I'm not good at chess at all, but I can play a little bit of an amateur game. When you play against a grandmaster, or someone who's much, much better than you, it does not feel like you're having a heroic battle against the Terminator, this incredible back and forth, and then you lose. No, it feels more like: you think you're playing well, you think everything is okay, and then suddenly you lose in one move, and you don't know why. This is what it feels like to play chess against a grandmaster, and this is what it's going to feel like for humanity to play against AGI. What's going to happen is not some dramatic battle where the Terminators rise up and try to destroy humanity. No, it will be: things get more and more confusing. More and more jobs get automated, faster and faster. More and more technology gets built that no one quite understands. There will be mass media movements that don't really make any sense. Do we really know the truth of what's going on in the world right now, even now, with social media? Do you or I really know what's going on? How much of it is fake? How much of it is generated with AI or other methods? We don't know, and this will get much worse. Imagine extremely intelligent systems, much smarter than humans, that can generate any image, any video, anything, trying to manipulate you, and able to develop new technologies, to interfere with politics. The way I expect it will go is that things will seem mostly normal, just weird, just getting weirder and weirder, and then one day we will simply not be in control anymore. It won't be dramatic. There won't be a fight, there won't be a war. It will just be that one day the machines are in control, and not us.

Imran Garda: And even if there is a fight or a war, they've handed us the gun and the bullets and we've done it. It's us that might do all of this, precipitated by being controlled in some way.

Connor Leahy: Absolutely possible. I don't think an AI would need to use humans for that, because it could develop extremely advanced technology, but it's totally possible. Humans are not secure; it is absolutely possible to manipulate humans. Everyone knows this: humans are not immune to propaganda, not immune to mass movements. Imagine an AGI gives Kim Jong Un a call and says, "Hey, I'm going to make your country run extremely well and tell you how to build super-weapons; in return, do me this favor." Kim Jong Un is going to think that's great. And it's very easy to gain power if you're extremely intelligent, if you're capable of manipulating people, of developing new technologies and weapons, of trading on the stock market to make tons of money. Well, yeah, you can do whatever you want.
[00:11:38] Imran Garda: So you're sounding the alarm. Geoffrey Hinton, seen as the founder, or father, or godfather of AI, is sounding the alarm and has distanced himself from a lot of his previous statements. Others in the mainstream are coming out, heavily credentialed people who are the real deal when it comes to AI, saying we need guardrails, we need regulation, we need to be careful, maybe we should stop everything. Yet OpenAI, Microsoft, DeepMind, these are companies, but then you have governments investing in this too, and everybody's still rushing forward, hurtling toward a possible doom. Why are they still doing it, despite these very legitimate and strong warnings? Is it only about the bottom line, money and competition, or is there more to it?

Connor Leahy: This is a great question, and I really like how you phrased it. You said they are rushing toward it, and this is really the correct way of looking at this. It's not that it is impossible to do this well. It is not impossible to build safe AI; I think it is possible. It's just really hard, and it takes time. It's the same way that it's much easier to build a nuclear reactor that melts down than to build a nuclear reactor that is stable. This is just hard, so you need time and you need resources to do it. But unfortunately, we're currently in a situation, at least here in the UK, where there is more regulation on selling a sandwich to the public than on developing potentially lethal technology that could kill every human on Earth. This is true; this is the current state of affairs. A lot of this is because of slowness, because governments are slow, and because of vested interests: you make a lot of money by pushing AI. Pushing AI further makes you a lot of money, and it gets you famous on Twitter. These people are rock stars. Someone like Sam Altman is a rock star on Twitter. People love these people. They think, "They're bringing the future, they're making big money, so they must be good." But it's just not that simple. Unfortunately, we're in a territory where we all agree that somewhere in the future there's a precipice, which we will fall off if we continue. We don't know where it is. Maybe it's far away, maybe it's very close. My opinion is: if you don't know where it is, you should stop. Other people gain money, power, or just ideological points. A lot of these people, and this is very important to understand, do this because they truly believe, like a religion. They believe in transhumanism, in the glorious future where AI will love us, and so on. So there are many reasons. But the cynical take is just: I could be making a lot more money right now if I were pushing AI. I could have a lot more money than I have right now.
[00:14:37] Imran Garda: How do we do anything about this, without just deciding to cut the undersea internet cables, blow up the satellites in space, and start again? How do you actually begin, because this is a technical problem, and it's also a moral and ethical problem? So where do you even begin right now? Or is it too late?

Connor Leahy: The weirdest thing about the world to me right now, as someone who's deep into this, is that things are going very, very badly. We have corporations with zero oversight plowing billions of dollars into going as fast as possible, with no accountability, which is about as bad as it could be. But somehow we haven't yet lost. It's not yet over. It could have been over; there are many ways it could be over tomorrow. But it's not yet, and there is still hope. I don't know if there's going to be hope in a couple of years, or even in one year, but there currently still is hope.
[00:15:40] Imran Garda: Wait, hold on. One year? Come on, man. We're probably going to put out this interview a couple of weeks after we record it; a few months will pass. We could all be dead by the time this gets 10,000 views? Explain this timeline. Why one year? Why is it going so fast that even one year would be too far ahead?

Connor Leahy: I'm not saying one year is guaranteed by any means. I think it's unlikely, but it's not impossible, and this is important to understand: AI and computer technology are on an exponential. It's like Covid. This is like saying in February, "A million Covid infections? That's impossible, that can't happen in six months," and it absolutely did. This is how AI is as well. Exponentials look slow at first. You go from one infected, to two infected, to four infected, and that's not so bad. But then you have 10,000, 20,000, 40,000, 100,000, within a single week. And this is how this technology works as well. There's something called Moore's law, which is not really a law, more like an observation, that every two years our computers get, with some caveats, about twice as powerful. That's an exponential. And it's not just that our computers are getting more powerful: our software is getting better, our AIs are getting better, our data is getting better, and more money is coming into this field. We are on an exponential; this is why things can go so fast. So while it would be weird if we were all dead in one year, it is physically possible. You can't rule it out if we continue on this path.
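The arithmetic behind that point is easy to verify. A minimal sketch follows, with the two-year doubling taken from Moore's observation above and a three-day epidemic doubling assumed purely for illustration:

```python
# Illustrative arithmetic only: the same doubling rule looks flat at the
# start and explosive later. The two-year doubling period is Moore's
# observation cited above; the three-day Covid doubling is an assumption.

def grow(start: float, doubling_period: float, elapsed: float) -> float:
    """Value after `elapsed` time units when it doubles every `doubling_period`."""
    return start * 2 ** (elapsed / doubling_period)

# Moore's-law-style growth: compute power doubling every 2 years.
for years in (2, 10, 20):
    print(f"after {years:2d} years: {grow(1, 2, years):,.0f}x compute")
# -> 2x, 32x, 1,024x

# Epidemic-style growth: one case, doubling every 3 days.
for days in (3, 30, 90):
    print(f"day {days:2d}: {grow(1, 3, days):,.0f} cases")
# -> 2; 1,024; 1,073,741,824
```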
[00:17:24] Imran Garda: The powerful people who can do something about this, especially when it comes to regulation: when you saw those congressmen speaking to Sam Altman, they didn't seem to know what the hell they were talking about. How frustrating is it for you that the people who can make a difference have zero clue about what's really going on? And, more important than that, they didn't seem to want to actually know. They had weird questions that made no sense, and so you're thinking: okay, these guys are in charge; no wonder the AI is going to come and wipe us all out. Maybe we deserve it.

Connor Leahy: Well, I wouldn't go that far, but this used to annoy me a lot; it used to be extremely frustrating. I've come to peace with it to a large degree, though, because the thing I've really found is that understanding the world is hard. Understanding complex topics and technology is hard, not just because they're complicated, but also because people have lives, and this is okay, this is normal. People have families, they have responsibilities, there are a lot of things people have to deal with, and I don't shame people for this. When I have turkey with my family over Thanksgiving, my aunts and uncles have their own lives going on; they maybe don't really have time to listen to me rant about this. So I have a lot of love and a lot of compassion for that; things are hard. This of course doesn't solve the problem, but I'm just trying to say that it is, of course, frustrating to some degree that there are no adults in the room. This is how I would see it: there is sometimes a belief that somewhere there is someone who knows what's going on, an adult who's got this under control, someone in the government who's got this under control. And as someone who has tried to find that person, I can tell you this person does not exist. The truth is that the fact that anything works at all in the world is kind of a miracle; it's kind of amazing that anything works at all, with how chaotic everything is. But the truth is that there are quite a lot of people who want the world to be good. They might not have the right information, they might be confused, they might be getting lobbied by various people with bad intentions, but most people want their families to live and have a good life. Most people don't want bad things to happen; most people want other people to be happy and safe. And luckily for us, most normal people, so not elites, not necessarily politicians or technologists, most normal people do have the right intuition about AI: they see it and think, "Wow, that seems really scary; let's be careful with this." This is what gives me hope. So when I think about politicians not being on top of this, I think it is now our responsibility as citizens of the world to take this into our own hands. We can't wait for people to save us; we have to make them save us. We have to make these things happen. We have to make our voices heard. We have to say, "Hey, how the hell are you letting this happen?" One of the beautiful things is that, to a large degree, politicians can be moved. They can be reasoned with, and they can be moved by the voters: you can vote them out of office.

Imran Garda: That's a good argument for democracy.

Connor Leahy: That's a great argument for democracy. That's wonderful. Democracy is the worst system, except for all the other ones.
[00:20:49] Imran Garda: So, to the point of people's feelings, and I asked about this at the very beginning, that intuitive sense that something's up here, that there's something ominous there: there did seem to be a bit of a plateau with something like ChatGPT. Initially, people were very anxious, very surprised, very wowed by what this thing could do. It could write your university thesis; it could do all these fancy gimmicks that seemed like magic tricks. But then, once the hype died down a little, people began to input new things, to ask maybe better questions, and you could see some of the limitations of something like ChatGPT and its forerunners. That led a lot of people to say: well, okay, sometimes this thing just sounds like a PR department or an HR department in a company, and sometimes it feels like a plagiarized college paper. Which led, and this is anecdotal, a lot of friends of mine to go: ah, maybe we're okay for a while, because this thing has severe limitations. Address that for me, because a lot of people are still thinking, "I know there was the hype, but now I'm not so sure."

Connor Leahy: There is a story, and I'm not sure whether the story is actually true, but it's a good metaphor. If you take a frog and put it into a cold pot of water, the frog will sit there happily. If you turn up the heat on your pot, the frog will sit there, no problem, and if you do it very slowly, if you very, very slowly increase the temperature, the frog will get used to the temperature and won't jump out, until the water boils and the frog dies. I think this is what is happening with people: people are extremely good at making crazy things feel normal. If it's a normal thing, if it's a thing all your friends do, then it just becomes normal. This is why, during a war, people can slaughter other people: if all your friends are doing it, well, it's normal; killing people is fine. This is how that can happen, and the same thing applies here. Okay, you can talk to your computer now. Sure, we can argue about whether ChatGPT is all that smart, but you can talk to your computer. Slow down! If this were a sci-fi movie from 20 years ago, everyone would be yelling at the screen: "What the hell are you doing? This thing is obviously crazy; what the hell is going on?" But because it's available now, cheaply, online, it doesn't feel special. So the way to address this, I think, is a coordinated campaigning effort, which is currently lacking. What I mean by this is: when we think about our civilization, not just individual people, how does our civilization deal with problems? How does it decide which problems to address? Because there are always so many problems you could be putting your effort into; how does it decide which one to pay attention to? This is actually very complicated. It can be because of a natural catastrophe, or a war, or whatever. It can be because of some stupid fashion hype, some viral video on TikTok that makes everyone freak out. Sometimes, yes. But usually, if you actually want your civilization to address a big problem, it takes long, hard, grinding effort from people trying to raise it to salience, to raise it to attention. Because, again, people have lives. Most people don't have time to go online and read huge books about AI safety, about how we integrate ChatGPT or how we deal with the safety questions. They don't have time for that; of course they don't, and I'm not trying to judge these people. I understand; it's not their job. In a good world, there would be a group of people who deals with this. The problem is, they don't really exist.
[00:24:52] Imran Garda: Before we go, I'm glad you mentioned that people don't know where to look. If there were one resource that you could point people toward, so that they can educate themselves about the reality of the situation and bring themselves up to speed, that would be what?

Connor Leahy: There's not one that I think covers the whole thing, which is a big problem. Someone should make that resource; if someone makes that resource, please let me know. But what I would point people toward is Control AI, a group of people, whom I'm also involved with, who are campaigning for exactly these issues, who are trying to bring humanity together to solve these problems. Because this is a problem that neither you nor I can solve. No single human can solve the problems we're dealing with right now. This is a problem that humanity has to solve, that our civilization needs to solve, and I think our civilization can do this, but it won't happen without us working together. So if there's one thing: go on Twitter, or Google, or wherever, go to Control AI, support them, and listen to what they have to say. This is the campaign I'm behind, and I support them.

Imran Garda: Okay, we'll put the link in the YouTube description if anybody wants to check it out. Connor, you have a brilliant mind, and I'm really grateful that we got to talk. Thank you very much for joining us on The InnerView.

Connor Leahy: Thank you so much. Take care.

Description:

Lauded for his groundbreaking work in reverse-engineering OpenAI's large language model GPT-2, AI expert Connor Leahy tells Imran Garda why he is now sounding the alarm. Leahy is a hacker, researcher and expert on Artificial General Intelligence. He is the CEO of Conjecture, a company he co-founded to focus on making AGI safe. He shares the view of many leading thinkers in the industry, including the godfather of AI, Geoffrey Hinton, who fear what they have built. They argue that recent rapid, unregulated research, combined with the exponential growth of AGI, will soon lead to catastrophe for humanity, unless an urgent intervention is made.

00:00 AI guru
02:11 What is AGI, and what are the risks it could bring?
03:51 "Nobody knows why AI does what it does"
05:41 From an AI enthusiast to advocating for a more cautious approach to AI development
07:58 What does Connor expect to happen when we lose control to AGI?
11:40 "People like Sam Altman are rockstars"
14:38 Connor's vision of a safe AI future
15:24 Imran: "One year left?"
17:26 "Normal people do have the right intuition about AI"
20:58 ChatGPT, limitations of AI, and a story about a frog
24:53 Control AI

Connor recommends we all visit Control AI: https://controlai.com/deepfakes
Control AI / Twitter: @ai_ctrl
Control AI / TikTok: @ctrl.ai
Control AI / Instagram: @ai_ctrl

Subscribe: https://www.trtworld.com//subscribe
Livestream: http://trt.world/ytlive
Facebook: http://trt.world/facebook
Twitter: http://trt.world/twitter
Instagram: http://trt.world/instagram
Visit our website: http://trt.world
