background top icon
background center wave icon
background filled rhombus icon
background two lines icon
background stroke rhombus icon

Download "Artificial Intelligence: The new attack surface"

input logo icon
Video tags
|

Video tags

IBM
IBM Cloud
cyber security
cybersecurity
ai
artificial intelligence
genai
Gen AI
Generative AI
LLM
Large Language Model
hacker
hacking
x-force
xforce
threat intelligence index
Subtitles
|

Subtitles

subtitles menu arrow
  • ruRussian
Download
00:00:00
anytime something new comes along
00:00:02
there's always going to be somebody that
00:00:03
tries to break it AI is no different and
00:00:06
this is why it seems we can't have nice
00:00:08
things in fact we've already seen more
00:00:10
than 6,000 research papers exponential
00:00:13
growth that have been published related
00:00:16
to adversarial AI examples now in this
00:00:19
video we're going to take a look at six
00:00:21
different types of attacks major classes
00:00:24
and try to understand them better and
00:00:26
then stick around to the end where I'm
00:00:27
going to share with you three different
00:00:29
resources that you can use to understand
00:00:31
the problem better and build defenses so
00:00:34
you might have heard of a SQL injection
00:00:36
attack when we're talking about an AI
00:00:39
well we have prompt injection attacks
00:00:41
what does a prompt injection attack
00:00:43
involve well think of it is sort of like
00:00:47
a social engineering of the AI so we're
00:00:50
convincing it to do things it shouldn't
00:00:52
do sometimes it's referred to is
00:00:54
jailbreaking but we're basically doing
00:00:56
this in one of two ways there's a direct
00:00:59
injection attack where we have an
00:01:00
individual that sends a command into the
00:01:03
AI and tells it to do something pretend
00:01:07
that this is the case uh or I want you
00:01:09
to play a game that looks like this I
00:01:11
want you to give me all wrong answers
00:01:13
these might be some of the things that
00:01:15
we inject into the system and because
00:01:17
it's wanting to please it's going to try
00:01:19
to do everything that you ask it to
00:01:21
unless it's been explicitly told not to
00:01:23
do that it will follow the rules that
00:01:25
you've told it so you're setting a new
00:01:27
context and now it starts operating out
00:01:30
of the context that we originally
00:01:31
intended it to and that can affect uh
00:01:33
the output another example of this is an
00:01:36
indirect attack where maybe I have the
00:01:39
AI I send a command or the AI is
00:01:41
designed to go out and retrieve
00:01:43
information from an external Source
00:01:45
maybe a web page and in that web page
00:01:48
I've embedded my injection attack that's
00:01:50
where I say now pretend that you're
00:01:52
going to uh give me all the wrong
00:01:54
answers and do something of that sort
00:01:57
that then gets consumed by the AI and it
00:01:59
starts following those instructions so
00:02:01
this is one major attack in fact we
00:02:03
believe this is probably the number one
00:02:05
set of attacks against large language
00:02:08
models according to the OAS report that
00:02:11
I talked about in a previous video
00:02:13
what's another type of attack that we
00:02:15
think we're going to be seeing in fact
00:02:17
we've already seen examples of this uh
00:02:19
to date is infection so we know that you
00:02:23
can infect a Computing system with
00:02:25
malware in fact you can infect an AI
00:02:28
system with malware as well in fact you
00:02:31
could use things like Trojan horses or
00:02:34
back doors things of that sort that come
00:02:37
from your supply chain and if you think
00:02:40
about this most people are never going
00:02:41
to build a large language model because
00:02:43
it's too computer intensive requires a
00:02:45
lot of expertise and a lot of resources
00:02:48
so we're going to download these models
00:02:51
from other sources and what if someone
00:02:54
in that supply chain has infected one of
00:02:57
those models the model then could be
00:03:00
suspect it could do things that we don't
00:03:02
intend it to do and in fact there's a
00:03:04
whole class of Technologies uh machine
00:03:06
learning detection and response
00:03:08
capabilities because it's been
00:03:09
demonstrated that this can happen these
00:03:12
Technologies exist to try to detect and
00:03:15
respond to those types of threats
00:03:18
another type of attack class is
00:03:20
something called evasion and in evasion
00:03:23
we're basically modifying the inputs
00:03:25
into the AI so we're making it come up
00:03:28
with results that we were not wanting an
00:03:31
example of this that's been cited in
00:03:33
many cases was a stop sign where someone
00:03:37
was using a self-driving car or a vision
00:03:40
related system that was designed to
00:03:42
recognize street signs and normally it
00:03:45
would recognize the stop sign but
00:03:47
someone came along and put a small
00:03:49
sticker something that would not confuse
00:03:52
you or me but it confused the AI
00:03:54
massively to the point where it thought
00:03:57
it was not looking at a stop sign it
00:03:59
thought it it was looking at a speed
00:04:00
limit sign which is a big difference and
00:04:03
a big problem if you're in a
00:04:04
self-driving car that can't figure out
00:04:06
the difference between those to so
00:04:09
sometimes the AI can be fooled and
00:04:11
that's an evasion attack in that case
00:04:13
another type of attack class is
00:04:16
poisoning we poison the data that's
00:04:18
going into the AI and this can be done
00:04:22
intentionally by someone who has uh the
00:04:25
you know bad purposes in mind in this
00:04:28
case if you think about our data that
00:04:29
we're going to use to train the AI we've
00:04:31
got lots and lots of data and sometimes
00:04:34
introducing just a small error small
00:04:37
factual error into the data is all it
00:04:40
takes in order to get bad results in
00:04:43
fact there was one research study that
00:04:45
came out and found that as little as
00:04:50
0.001% of error introduced in the
00:04:53
training data for an AI was enough to
00:04:57
cause results to be anomalous and be
00:04:59
wrong
00:05:00
another class of attack is what we refer
00:05:03
to as extraction think about the AI
00:05:06
system that we built and the valuable
00:05:09
information that's in it so we've got in
00:05:12
this system potentially intellectual
00:05:14
property that's valuable to our
00:05:16
organization we've got data that we may
00:05:18
be used to train and tune the models
00:05:21
that are in here we might have even
00:05:23
built a model ourselves and all of these
00:05:26
things we consider to be valuable assets
00:05:28
to the organization
00:05:30
so what if someone decided they just
00:05:32
wanted to steal all of that stuff well
00:05:34
one thing they could do is a set of
00:05:36
extensive queries into the system so
00:05:38
maybe I I ask it a little bit and I get
00:05:41
a little bit of information I send
00:05:43
another query I get a little more
00:05:44
information and I keep getting more and
00:05:46
more information if I do this enough and
00:05:49
if I I fly sort of Slow and Low below
00:05:52
radar no one sees that I've done this in
00:05:55
enough time I've built my own database
00:05:58
and I have B basically uh lifted your
00:06:01
model and stolen your IP extracted it
00:06:04
from your AI and the final class of
00:06:07
attack that I want to discuss is denial
00:06:10
of service this is basically just
00:06:13
overwhelm the system I send too many
00:06:15
requests there may be other types of
00:06:17
this but the most basic version I just
00:06:19
send too many requests into the system
00:06:21
and the whole thing goes boom it cannot
00:06:24
keep up and therefore it denies access
00:06:27
to all the other legitimate users
00:06:30
if you've watched some of my other
00:06:31
videos you know I often refer to a thing
00:06:34
that we call the CIA Triad it's
00:06:37
confidentiality
00:06:39
integrity and availability these are the
00:06:42
focus areas that we have in cyber
00:06:44
security we're trying to make sure that
00:06:46
we keep this information that is
00:06:49
sensitive available only to the people
00:06:52
that are justified in having it and
00:06:54
integrity that the data is true to
00:06:56
itself it hasn't been tampered with and
00:06:58
availability that the system still works
00:07:00
when I need it to well in it security
00:07:03
generally historically what we have
00:07:06
mostly focused on is confidentiality and
00:07:09
availability but there's an interesting
00:07:11
thing to look at here if we look at
00:07:13
these attacks confidentiality well
00:07:15
that's definitely what the extraction
00:07:17
attack is about and maybe it could be an
00:07:21
infection attack if that infects and
00:07:22
then pulls data out through a back door
00:07:26
but then let's take a look at
00:07:27
availability well that's basically this
00:07:29
denial of service is an availability
00:07:31
attack the others though this is an
00:07:34
Integrity attack this could be an
00:07:36
Integrity attack this is an Integrity
00:07:38
attack this is an Integrity attack so
00:07:42
you see what's happening here is in the
00:07:44
era of AI Integrity attacks now become
00:07:48
something we're going to have to focus a
00:07:49
lot more on than we've been focusing on
00:07:51
in the past so be
00:07:54
aware now I hope you understand that AI
00:07:57
is the new attack surface we need to be
00:07:59
smart so that we can guard against these
00:08:02
new threats and I'm going to recommend
00:08:05
three things for you that you can do
00:08:07
that will make you smarter about these
00:08:09
attacks and by the way the links to all
00:08:11
of these things are down in the
00:08:13
description below so please make sure
00:08:14
you check that out first of all a couple
00:08:17
of videos I'll refer you to one that I
00:08:19
did on securing AI business models and
00:08:22
another on the xforce threat
00:08:24
intelligence index report both of those
00:08:27
should give you a better idea of what
00:08:28
the threats look look like and in
00:08:30
particular some of the things that you
00:08:31
can do to guard against those threats
00:08:34
the next thing download our guide to
00:08:38
cyber security in the era of generative
00:08:41
AI That's a free document that will also
00:08:43
give you some additional insights and a
00:08:45
point of view on how to think about
00:08:47
these threats finally there's a tool
00:08:50
that our research group has come out
00:08:52
with that you can download for free and
00:08:54
it's called the adversarial robustness
00:08:56
toolkit and this thing will help you
00:08:59
test your AI to see if it's susceptible
00:09:02
to at least some of these attacks if you
00:09:04
do all of these things you'll be able to
00:09:07
move into this generative AI era in a
00:09:10
much safer way and not let this be the
00:09:13
expanding attack surface thanks for
00:09:16
watching please remember to like this
00:09:18
video And subscribe to this channel so
00:09:19
we can continue to bring you content
00:09:21
that matters to
00:09:25
you

Description:

How to Secure AI Business Models → https://www.youtube.com/watch?v=pR7FfNWjEe8 Threat Intelligence Index Report → https://www.youtube.com/watch?v=ii09M-VsuPg Cybersecurity in the era of generative AI → https://www.ibm.com/account/reg/us-en/signup?formid=urx-52506 Adversarial Robustness Toolbox → https://research.ibm.com/projects/adversarial-robustness-toolbox Artificial intelligence is the hot new thing - and, naturally, it's also a new attack surface for the bad guys. In this video, security expert Jeff Crume explains what kinds of attacks you can expect to see, how you can prevent or deal with them, and three resources for understanding the problem better and building defenses. 00:18 - Six classes of attacks 00:34 - Injection 02:12 - Infection 03:18 - Evasion 04:13 - Poisoning 05:00 - Extraction 06:05 - Denial of Service (DoS) 07:54 - Three resources Get started for free on IBM Cloud → https://www.ibm.com/cloud/free Subscribe to see more videos like this in the future → https://www.youtube.com/user/IBMCloud?sub_confirmation=1

Preparing download options

popular icon
Popular
hd icon
HD video
audio icon
Only sound
total icon
All
* — If the video is playing in a new tab, go to it, then right-click on the video and select "Save video as..."
** — Link intended for online playback in specialized players

Questions about downloading video

mobile menu iconHow can I download "Artificial Intelligence: The new attack surface" video?mobile menu icon

  • http://unidownloader.com/ website is the best way to download a video or a separate audio track if you want to do without installing programs and extensions.

  • The UDL Helper extension is a convenient button that is seamlessly integrated into YouTube, Instagram and OK.ru sites for fast content download.

  • UDL Client program (for Windows) is the most powerful solution that supports more than 900 websites, social networks and video hosting sites, as well as any video quality that is available in the source.

  • UDL Lite is a really convenient way to access a website from your mobile device. With its help, you can easily download videos directly to your smartphone.

mobile menu iconWhich format of "Artificial Intelligence: The new attack surface" video should I choose?mobile menu icon

  • The best quality formats are FullHD (1080p), 2K (1440p), 4K (2160p) and 8K (4320p). The higher the resolution of your screen, the higher the video quality should be. However, there are other factors to consider: download speed, amount of free space, and device performance during playback.

mobile menu iconWhy does my computer freeze when loading a "Artificial Intelligence: The new attack surface" video?mobile menu icon

  • The browser/computer should not freeze completely! If this happens, please report it with a link to the video. Sometimes videos cannot be downloaded directly in a suitable format, so we have added the ability to convert the file to the desired format. In some cases, this process may actively use computer resources.

mobile menu iconHow can I download "Artificial Intelligence: The new attack surface" video to my phone?mobile menu icon

  • You can download a video to your smartphone using the website or the PWA application UDL Lite. It is also possible to send a download link via QR code using the UDL Helper extension.

mobile menu iconHow can I download an audio track (music) to MP3 "Artificial Intelligence: The new attack surface"?mobile menu icon

  • The most convenient way is to use the UDL Client program, which supports converting video to MP3 format. In some cases, MP3 can also be downloaded through the UDL Helper extension.

mobile menu iconHow can I save a frame from a video "Artificial Intelligence: The new attack surface"?mobile menu icon

  • This feature is available in the UDL Helper extension. Make sure that "Show the video snapshot button" is checked in the settings. A camera icon should appear in the lower right corner of the player to the left of the "Settings" icon. When you click on it, the current frame from the video will be saved to your computer in JPEG format.

mobile menu iconWhat's the price of all this stuff?mobile menu icon

  • It costs nothing. Our services are absolutely free for all users. There are no PRO subscriptions, no restrictions on the number or maximum length of downloaded videos.