
Download "I Analyzed Comments with a NEURAL NETWORK. And Here's What I Learned"

Video tags
neural network
neural networks
gpt chat
chatgpt
embeddings
machine learning
onigiri
data analysis
big data
analyzed comments
artificial intelligence
AI
neural network visualization
openai
Subtitles (translated from Russian)
[00:00:02] I studied the statistics of YouTube comments. For this I downloaded 10 million comments from YouTube and analyzed them with neural networks and a number of other methods. In short, the best chance of collecting likes goes to a comment about 100 characters long, written roughly 8 minutes after the video is released. So if you have notifications on and saw this video right away, you have a whole 8 minutes, and you know what to do. I also used neural networks to build these clouds of comments, where comments that are close in meaning sit next to each other, and they can be divided into groups of comments with roughly the same meaning.
[00:00:37] At first I thought that for my statistics I would download somewhere around 100,000 comments, or maybe even a million. Then I looked at how you can officially download comments from YouTube, and it says you can make 10,000 requests per day. So if I collected only 10,000 comments per day, 100,000 comments would take 10 days, and a million would take more than three months of downloading every single day. But then it turned out that one request returns roughly 100 comments, and if you multiply 100 comments by 10,000 requests per day, we can receive a million comments per day, even a little more, because on average a request returns slightly more than 100 comments.

[00:01:15] I downloaded all the comments from almost a hundred channels. Many of these channels are similar in topic to mine, but I also took some channels straight from YouTube recommendations, so that the topics would sometimes differ and there would be a greater variety of comments.
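The quota arithmetic above can be written out in a few lines. This is a sketch using only the figures quoted in the narration; the real YouTube Data API measures quota in units rather than raw requests, so treat 10,000 requests per day as the video's simplification:

```python
# Rough throughput estimate for downloading YouTube comments,
# using the figures quoted in the video.
requests_per_day = 10_000
comments_per_request = 100   # one commentThreads page returns up to ~100 comments

comments_per_day = requests_per_day * comments_per_request
print(comments_per_day)      # → 1000000

# The original (wrong) plan assumed one comment per request:
days_for_100k_slow = 100_000 // 10_000     # 10 days
days_for_1m_slow = 1_000_000 // 10_000     # 100 days, i.e. more than three months

# At a million comments per day, 10 million comments take about 10 days:
days_for_10m = 10_000_000 // comments_per_day
```
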
[00:01:29] Let's see in what form YouTube returns comments. Here is a table with all the properties of a comment. First there is the type of YouTube object, which in this case will always just be "comment", and then some other information I won't need here; I marked all of that in red, meaning I don't save that data. What I do save is, for example, the ID of the comment itself, that is, its unique number, so I can tell comments apart, and then the ID of the video under which the comment was made. Then comes more information I don't really need. What I do need is the text of the comment itself, the display name of the comment's author, the channel ID of the author, the number of likes, and when the comment was published.

[00:02:09] There is, for example, a field for whether the comment can be rated, and it is True everywhere; I honestly don't know whether there can be a situation where you can't rate comments, that is, can't give them a like. There is also a field storing the comment's rating; in practice it just means whether we liked it or not. The number of replies to a comment is stored too, and the replies themselves, which are essentially comments as well, go into an identical table.
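As a sketch, the kept subset of fields might look like this in code. The field names follow the YouTube Data API's commentThreads response (`textOriginal`, `authorDisplayName`, `likeCount`, and so on); the `CommentRecord` class and `from_api_item` helper are my own illustration, not the author's code:

```python
from dataclasses import dataclass

@dataclass
class CommentRecord:
    comment_id: str          # the comment's unique ID
    video_id: str            # the video the comment was left under
    text: str                # "textOriginal" in the API response
    author_name: str         # "authorDisplayName"
    author_channel_id: str   # "authorChannelId"
    like_count: int          # "likeCount"
    published_at: str        # "publishedAt", an ISO 8601 timestamp
    reply_count: int = 0     # "totalReplyCount"; replies go into an identical table

def from_api_item(item: dict) -> CommentRecord:
    """Flatten one commentThreads.list item into a CommentRecord."""
    top = item["snippet"]["topLevelComment"]
    s = top["snippet"]
    return CommentRecord(
        comment_id=top["id"],
        video_id=s["videoId"],
        text=s["textOriginal"],
        author_name=s["authorDisplayName"],
        author_channel_id=s["authorChannelId"]["value"],
        like_count=s["likeCount"],
        published_at=s["publishedAt"],
        reply_count=item["snippet"]["totalReplyCount"],
    )
```
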
[00:02:30] In that table I mentioned the display name of the comment's author. When the name is downloaded, it comes in the normal form in which I'm used to seeing names in comments. You may have noticed that instead of the usual names, comments now show these handles, and many people didn't set one, so it just says "User" followed by a lot of letters and numbers. At first I thought it was a bug, but maybe it's a feature; either way, in the downloaded comments the names are still normal.

[00:02:51] In this video I will show different algorithms and approaches from data science, but so far I'm only just studying all of this.
[00:02:57] So I'm not an expert in this area, but I know for sure that demand for data scientists is frantic right now: they earn 200 thousand rubles a month on average, and there is no ceiling on income. So if you also want to work from anywhere in the world, earn good money, and be in great demand among large companies, I recommend studying at SkillFactory on the "Data Science from Zero to Pro" course, developed jointly with a professor from Moscow State University. In the course you will learn the very basics from scratch, including brushing up on school mathematics, and master the advanced level needed to work as a data scientist. SkillFactory is an online school that gives you not just training but guides you by the hand into the world of IT. Training on real tasks from real customers leaves you with a portfolio at the end of the course that makes you stand out and moves you up the list of candidates. You choose the time and pace of study yourself, and if necessary you can even freeze the course and resume it after a while, so you don't abandon it halfway. A mentor will be there to answer all your questions, and the career center will help with employment. And if you don't find a job after the training, the school will refund your money; this is stated in the contract. So sign up for the course using the QR code on the screen or the link in the description and start a career in data science now. With the promo code "onigiri" you get a 45 percent discount on the training. SkillFactory: teaching those who work.
[00:04:15] I decided to start exploring comment statistics with something simple. Let's see, for example, how long comments under videos usually are. If I take my channel and plot the distribution of comment lengths, I get a graph like this: on the X axis are the different comment lengths, and on the Y axis the number of comments of that length. There are rather few comments shorter than 10 characters, but the more characters, the more the number of comments grows, until the graph peaks at lengths of roughly 40 to 60 characters; that's where the most comments are. Then the number of comments gradually decreases: there are far fewer comments of length 100, fewer still at 200, fewer again at 300, at 400, and at 500, which I put at the end of the graph. In principle the maximum length of a YouTube comment is 10 thousand characters, but from there the graph would only keep slowly decreasing, so I didn't draw it any further.
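The distribution described here can be computed with a simple counter. A minimal sketch; the `length_distribution` function and the 500-character cap are my own choices for illustration:

```python
from collections import Counter

def length_distribution(comments, max_len=500):
    """Count how many comments have each character length. Lengths are
    capped at max_len: the tail up to YouTube's 10,000-character limit
    just keeps decaying, so the plot is cut off early."""
    counts = Counter(min(len(c), max_len) for c in comments)
    # dense list: index = comment length, value = number of comments
    return [counts.get(n, 0) for n in range(max_len + 1)]

comments = ["wow", "great video, thanks!", "a" * 55, "b" * 55, "c" * 9999]
dist = length_distribution(comments)
assert dist[55] == 2     # two comments of length 55
assert dist[500] == 1    # the 9,999-character comment lands in the cap bucket
```
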
[00:05:06] What's interesting is that every channel's graph is slightly different. For example, the same graph for Makar Light starts out very similar; they are almost identical, but then his runs a little higher, which means he gets slightly more long comments. On the graph for Dagon you can see that at the beginning he has a lot more comments, but then the graph levels off to roughly match my channel, which means he gets more short comments and slightly fewer long ones. And since some channels have shorter comments on average and some longer, we can sort channels by this parameter. Here, for example, I took the popular-science channels from my sample, plus channels close to them in topic, and sorted them by the median length of their comments. In the first places are Anthropogenesis, Reviews, and Philologist of All Russia, and in the last are Topless, Audi Hot, and Utopia Show. Comment length most likely depends on many factors; for example, it seems to me it depends strongly on the age of the audience, on whether there are holy-war topics, and on whether it's even possible to write a wall of text in response to a phrase from the video.
[00:06:08] Well, I showed the distribution of comment lengths on my channel and on the Makar Light and Dagon channels. But what if we look at the overall distribution across all channels? Here the graph turns out much smoother thanks to the large number of comments, but you immediately notice one strange detail: from somewhere, a huge number of comments exactly 142 characters long appeared. I started digging into what this was, and it turned out to be several tens of thousands of comments that all look roughly the same: "Hello, please watch my videos, I'm not asking you to subscribe, I just need an outside opinion", and so on. What's interesting is that I found more than a thousand accounts writing these messages, and that's only what I spotted in my statistics; I may have missed something, and some of it may not have made it into my statistics at all. That is, someone somehow registered, or somehow obtained, more than a thousand YouTube accounts, and all of these accounts carry the same request to visit another account. In other words, all of these 1000+ accounts lead to one single account, which, by the way, has been blocked for violating YouTube's rules. So whether this was some kind of advertising or some kind of experiment, we'll most likely never know.
[00:07:15] But let's move on. I decided to look at what the most popular comments under videos have in common. The largest number of likes in my statistics went to the comment "I want an episode about sleep" under a video on the Topless channel. In general, the most popular comments are usually either supportive or quote something from the video, and almost always a channel's most popular comment is quite ordinary. Except on my channel, where it's this: "At a certain point I began to realize that this video may never end." And yes, those are the comments under the video with the biggest numbers: first, the video itself has the biggest numbers, with the most views on my channel, and second, the comments under it collected the largest number of likes. I also had a video about building a neural network from scratch, where I showed how a neural network can recognize handwritten digits, and the comment under it reads: "Alarmists are afraid that neural networks will get out of control and enslave humanity. The neural network thinks that a black screen is a 5." It's funny that this comment about the number 5 has the fifth-largest number of likes on my channel.
[00:08:17] Then I thought: I've made a distribution of comment lengths, but what if comments of a certain length get more likes? For example, if a comment consists of one letter, it's unlikely anyone will like it, and if it's a wall of text, that's also unlikely to get liked. But if a comment is easy to read, it probably will get likes, and such a comment will probably be neither too long nor too short. If you build the same distribution as before but for likes, then of course the lengths with more comments will also have more likes in total, so you need to account for that and divide the number of likes by the number of comments at each comment length. I tried to build this graph, and here is what came out: overall it looks like just random numbers, as if there were no dependence here at all. But when I made this graph, my statistics held only 100 thousand comments, and now I have far more, so the graph can be made much smoother. I built a new graph over 10 million comments, and this is how it looks. Now it is clearly visible that if a comment is too short it gets few likes; longer comments get more; and then the numbers gradually taper off. Yes, there are still very strong fluctuations, and to remove them and make the graph truly smooth you would probably need 100 million comments in the statistics, or even a billion. But I can imagine that if I had collected more comments, the graph would look something like this: the most likes go to comments of roughly 90 to 105 characters, and then the number of likes gradually decreases.
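The normalization described here, dividing the total likes at each length by the number of comments of that length, can be sketched like this (`mean_likes_by_length` is a hypothetical helper, not the author's code):

```python
from collections import defaultdict

def mean_likes_by_length(comments):
    """comments: iterable of (text, like_count) pairs.
    Returns {length: average likes per comment of that length}.
    Raw like totals would just mirror the comment-count distribution,
    so each length's total is divided by its comment count."""
    likes = defaultdict(int)
    count = defaultdict(int)
    for text, like_count in comments:
        n = len(text)
        likes[n] += like_count
        count[n] += 1
    return {n: likes[n] / count[n] for n in count}

data = [("hi", 0), ("ok", 2), ("x" * 100, 30), ("x" * 100, 10)]
avg = mean_likes_by_length(data)
assert avg[2] == 1.0      # (0 + 2) / 2
assert avg[100] == 20.0   # (30 + 10) / 2
```
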
[00:09:40] Here you can also notice a sharp dip in likes, and there is even a reason for it, which I'll get to in a moment. But first let's look at two graphs. The blue graph is the same one: the number of likes adjusted for the number of comments, as a function of comment length. The red graph is simply the number of comments of each length, which I showed a few minutes ago. First, you can see here that most people write rather short comments, and comments of that length get few likes. That is, the typical comment under a YouTube video is somewhere from 20 to 35 characters, but the typical comment that gets a lot of likes is somewhere from 90 to 105 characters. As a comment grows longer, the number of likes under it gradually decreases, but not too steeply; likes are lowest when a comment has very few characters, even though there are a lot of such comments. And here is that same dip in likes I was talking about: it coincides with a sharp spike in the number of comments. Remember that spammer I mentioned earlier who left tens of thousands of comments? These sharp spikes and dips in the graph were caused by him: he wrote a lot of comments, but they got few likes.
[00:10:47] But let's step away from the graphs for now and look at something more interesting, for example, how comments can be grouped. For this you need a special algorithm that will somehow break the comments into main groups, that is, a clustering algorithm, and there are very, very many clustering algorithms; this table shows the most basic ones. Probably one of the simplest and most effective is DBSCAN, which is what I will use. It sounds like "database", but the "DB" actually stands for density-based: using density, it finds the main groups of points that are packed closely together, and the points that don't fall into any group are shown here in gray. Since comments can also be represented as points in space, we can apply this clustering algorithm to them too. Wait, stop: how can a comment be represented as a point in space? I'll get to that as well, but let's take everything in order, starting with clustering.
[00:11:39] I decided to write a program that runs DBSCAN with a visualization, to better understand how it works. Say we have a set of points and we want to split it into clusters. Take any point and check whether it has at least three other points within some radius, for example 100 pixels. If it does, mark it as a core point; in my visualization, say, it becomes darker. The number of neighboring points required for a point to become a core point, and the radius in which we look for those neighbors, are the two main parameters of DBSCAN, and accordingly we can tune them. But let's go on: there may be more than one core point; every point with enough neighbors qualifies. So we look at all the points and mark each one that has the required number of neighbors within the radius as a core point. After checking every point, we move on to the second stage. Again we take any point, in my case the central one, and assign it to the first cluster; say I mark it in red. Now the cluster spreads: the other core points within its radius spread it further, as if they were being infected with this cluster. When there are no points left within reach of the cluster, we take a new random point and mark it with the next cluster, for example green. When that one also runs out of points, we start a blue cluster, and so on. Once all core points have been assigned to clusters, we assign the non-core points to them, after which the algorithm ends. That is, the core points spread the cluster, and the non-core points simply fall into it.

[00:13:02] In this picture you can see that the green cluster sits somewhat strangely: it could belong with the red cluster. And indeed, we can change the parameters of the algorithm and, for example, make the radius larger. Then more points fall into each cluster, and in this situation it looks much better: only one point didn't land in any cluster, although it too could arguably be assigned to the red one. But if we increase the radius even a little more, the red cluster merges with the green one. Here, in this specific situation, the clusters separate easily, as you can even see visually, but that is not always the case, so sometimes, as you start increasing the radius around a point, the clusters simply stick together into one large cluster.
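The procedure just described is standard DBSCAN, and a minimal version fits in a few dozen lines. This is a from-scratch sketch for 2-D points, not the author's visualization program; `eps` is the radius and `min_pts` the neighbor threshold:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN, mirroring the steps in the video: mark core points
    (enough neighbours within radius eps), then let each cluster 'spread'
    from core point to core point. Points reached by no cluster stay
    labelled -1 (noise -- the grey points)."""
    def neighbors(i):
        return [j for j in range(len(points))
                if j != i and math.dist(points[i], points[j]) <= eps]

    core = [len(neighbors(i)) >= min_pts for i in range(len(points))]
    labels = [-1] * len(points)
    cluster = 0
    for i in range(len(points)):
        if not core[i] or labels[i] != -1:
            continue
        # start a new cluster and spread it through connected core points
        labels[i] = cluster
        frontier = [i]
        while frontier:
            p = frontier.pop()
            for q in neighbors(p):
                if labels[q] == -1:
                    labels[q] = cluster      # border or core point joins
                    if core[q]:
                        frontier.append(q)   # only core points keep spreading
        cluster += 1
    return labels

# two well-separated blobs plus one far-away noise point
pts = [(0, 0), (1, 0), (0, 1), (1, 1),
       (10, 10), (11, 10), (10, 11), (11, 11),
       (50, 50)]
labels = dbscan(pts, eps=2.0, min_pts=3)
assert labels[0] == labels[3]   # first blob forms one cluster
assert labels[4] == labels[7]   # second blob forms another
assert labels[8] == -1          # the lone point stays noise
```
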
[00:13:40] Another problem with this clustering is that it has quadratic complexity, and sometimes it takes a very long time to compute, especially here, because I need to cluster tens and sometimes hundreds of thousands of objects, and to run far more than one clustering at that; and beyond clustering I need other time-consuming methods too. But there are implementations that can speed all this up using a video card, so I decided to move these computations to a remote server. I'll use Selectel here, because I already used it in previous videos, it's familiar, and you can create a server very quickly. I'm creating a new server on Windows, since that's more familiar to me, and choosing a video card. You can rent a server with a dedicated GPU for training neural networks for as little as one hour, and it costs about 50 rubles. I click "create server" and it starts; I connected to it over remote desktop, and now my Mac turns into Windows: two computers in one. As a test I tried generating clusters for several videos, and overall it turned out quite well.
[00:14:36] For example, here is the first video that came up, with 280 comments, where basically all the comments say thank you; this cluster could probably be named "Thanks" or "Gratitude". Then I thought the names of the clusters could be requested from GPT, just like in my previous video where neural networks played Mafia: the program could connect to GPT over the API and receive a response automatically. I added that, and it even worked well, but in some cases it broke. For example, here I give GPT the first three messages from a cluster: "who's here from 2023", "funny", and "why am I watching this at two in the morning, and not for the first time". It's not clear what unites these three messages or why they even ended up in one cluster. The thing is, I decided to take the first few comments I came across from each cluster and build a title from them. But if a cluster consists of, say, thousands of messages, I'm not going to send all of them to GPT, and it may turn out that the comments I do send sit at opposite ends of the cluster, and comments at different ends of a cluster can differ from each other quite a bit. So instead you just need to find the center of the cluster and take the comments closest to that center; they reflect the meaning of the cluster's comments best.
[00:15:48] But how do you work out where a cluster's center is, or where the cluster sits at all? I promised to tell you how comments can be turned into points in space, and now is probably the time. Comments need to be turned into embeddings. An embedding is, roughly, a numerical representation of the meaning of something, and all modern generative models that create text, pictures, video, or sound use embeddings. There are special models, essentially neural networks, that generate these embeddings: as input we give one, for example, a text, and as output it gives us a set of numbers, and these numbers encode the meaning of the picture, phrase, or whatever else. If you treat these numbers as coordinates, you get a point in space with those coordinates. But this point won't live on a two-dimensional plane or even in three-dimensional space, because there are more than two or three numbers. Here, for example, I use a model from OpenAI, and it gives us 1536 numbers, so the point lives in a 1536-dimensional space. In fact, this embedding model isn't even the best: there is a leaderboard of such models, and it sits only in eighth place there. But that is the overall rating, and I'm using it for a specific task, clustering, and in the clustering category it is in first place. Moreover, our comments are in Russian, and almost all of these models work only with English. So I get all these embeddings, and since they are, in a sense, points in a multidimensional space, the same DBSCAN can be applied to them: divide everything into clusters, get the center of each cluster, find the points closest to each center, and send those to GPT, and that way every cluster gets a name.
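Assuming you already have one embedding vector per comment and a cluster label for each (for example, from DBSCAN), picking the comments nearest to a cluster's center takes only a few lines. `representative_comments` is a hypothetical helper sketched for illustration:

```python
import math

def representative_comments(texts, embeddings, labels, cluster_id, k=3):
    """Return the k comments whose embedding vectors lie closest to the
    centre of one cluster. Sending these to a language model gives a far
    better cluster title than arbitrary members, which may sit at
    opposite ends of the cluster."""
    members = [i for i, lab in enumerate(labels) if lab == cluster_id]
    dims = range(len(embeddings[0]))
    # centroid: per-dimension mean over the cluster's members
    center = [sum(embeddings[i][d] for i in members) / len(members) for d in dims]
    members.sort(key=lambda i: math.dist(embeddings[i], center))
    return [texts[i] for i in members[:k]]

texts = ["thanks!", "thank you", "thanks a lot", "what is 5D?"]
embs = [[0.0, 1.0], [0.1, 1.0], [0.0, 0.9], [5.0, 5.0]]  # toy 2-D "embeddings"
labels = [0, 0, 0, 1]                                     # e.g. from DBSCAN
assert representative_comments(texts, embs, labels, 0, k=1) == ["thanks!"]
```
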
[00:17:25] I'll try to find clusters of comments under my video about geometric shapes in four-dimensional space. The first cluster it gives me has only four comments: "dude, you're cool", "dude, you've confused everyone", "the dude's a prodigy", "cool, keep going", and the neural network titled it "support and achievement". The next cluster consists of the comments "And what do the figures look like in 5D", "next video: 5D figures", "And what does 5D look like", "Or maybe 5D", and the neural network correctly titled this cluster "5D geometry"; I hadn't even noticed there were so many comments about 5D under this video. The next cluster is small and consists of the comments "Hi everyone, nonsense", "nonsense", "more nonsense", and the neural network titled it "problems in statements". There is also a cluster called "overcoming difficulties", which consists of the comments "difficult", "Difficult", "complex", "Difficult". There is a cluster called "evaluation and praise", with comments like "well, you're smart", "young and smart", "you're smart, so young and already so stupid". And there's a cluster of negative reactions: "What kind of nonsense", "What nonsense", "God, what nonsense", "some kind of nonsense"; interestingly, this cluster is separate from the other nonsense cluster. There's an emoji cluster that the neural network named "positive reviews and emotions", a cluster that collected comments like "comment to promote the channel", "Thank you, comment to promote the video", and so on, which the neural network for some reason named "topic and marketing of the video", and also a cluster where people discuss the name of the four-dimensional figure. It's interesting that the neural network understood this and called the cluster "names of geometric shapes".
[00:18:53] Everywhere here I used DBSCAN for clustering, but it is far from the only clustering method, as you could see from the table earlier. For example, there is also a fairly common method called k-means. From the pictures it may seem that it doesn't work very well, but in most cases your clusters will look something like picture 3 or picture 5, and it copes with those cases well; besides, it works very fast. Its drawback, which in some situations can even be a plus, is that you must specify the number of clusters in advance. Say I want to split the comments into exactly 10 clusters: then I run k-means with a split into 10 clusters; if I want 30 clusters, I specify 30. It works quite simply: it randomly scatters the cluster centers in space and then tries to fit them to the points as well as it can. In truth, I didn't always get a good result with this method, but if you pick a good number of clusters, it works very well, and very fast at that. For example, here is how it split up my comments.
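The "scatter centers randomly, then fit them to the points" loop described here is Lloyd's algorithm, the usual way k-means is implemented. A from-scratch sketch, not the author's code:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: pick k of the points as initial centres,
    then repeatedly assign each point to its nearest centre and move each
    centre to the mean of its points. Unlike DBSCAN, the number of
    clusters k must be chosen in advance."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[nearest].append(p)
        new_centers = []
        for c, grp in enumerate(groups):
            if grp:
                new_centers.append(tuple(sum(x) / len(grp) for x in zip(*grp)))
            else:
                new_centers.append(centers[c])   # empty cluster: keep old centre
        if new_centers == centers:
            break                                # converged
        centers = new_centers
    labels = [min(range(k), key=lambda c: math.dist(p, centers[c]))
              for p in points]
    return labels, centers

pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
labels, centers = kmeans(pts, k=2)
assert labels[0] == labels[1] == labels[2]   # one blob per cluster
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```
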
[00:19:45] What's interesting is that one of the commenters writes that none of this four-dimensional craziness is used anywhere, "or else tell me where", and then he has some complaints about the theory of relativity, which wasn't even discussed in this video; but he writes that four-dimensional space isn't used, and here I am now reading his comment thanks to having found it among a group of points in 1536-dimensional space. It would be nice to visualize all these points in multidimensional space. I've already made videos about visualizing multidimensional objects, but I mostly showed 4D, with a little 5D and 6D, and those were mostly three-dimensional slices of the figures. Here you need a projection, and a projection of a 1536-dimensional space at that.
[00:20:25] In fact, this is not as difficult as it seems. If we had more complex figures, such as cubes or balls, they would occlude each other; points can be made arbitrarily small, so they occlude each other less, but since we still have to draw them, they will have some size and some overlap remains. So there are special algorithms that, roughly speaking, move the points around so that they cover each other less. Probably the most popular of them is the algorithm with the semi-acronym name pronounced "t-SNE": t-distributed stochastic neighbor embedding. It simply projects the points and then moves them around to better preserve the distances between them.
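In practice you rarely implement t-SNE yourself; a common route is scikit-learn's `TSNE`. Sketched here on random stand-in vectors instead of real 1536-dimensional embeddings; note that `perplexity` must stay below the number of samples:

```python
import numpy as np
from sklearn.manifold import TSNE

# Project high-dimensional "comment embeddings" down to 2-D for plotting.
# The vectors here are random stand-ins for real embedding output.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 64)).astype(np.float32)  # 50 fake comments, 64-D

projected = TSNE(n_components=2, perplexity=10,
                 random_state=0).fit_transform(embeddings)

assert projected.shape == (50, 2)   # one 2-D point per comment, ready to scatter-plot
```
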
[00:21:02] I experimented with DBSCAN and t-SNE, and it turned out they run fine on the processor but not on the video card, all because I had installed Windows when I should have installed Linux. All these algorithms have already been implemented by NVIDIA for their video cards, and for some reason only for Linux. I tried looking for alternative libraries, but it turns out they all use that same implementation from NVIDIA. And yes, I associate this company with gaming video cards, and games are usually for Windows, so I assumed these algorithms would be for Windows too, but it turned out otherwise. Fortunately, you can simply delete all the files of that Windows server and create a new one on Linux in a minute. And why didn't I notice right away that there is an image here literally called "Machine Learning"? Almost everything you need for machine learning is already installed on it, including Python and its libraries.
00:21:47
I tried to project points onto a
00:21:49
two-dimensional plane and it turned out like this
00:21:51
picture here I used my
00:21:53
latest video of a neural network playing mafia
00:21:55
But there are not many comments in it so the
00:21:57
clusters are not so clearly visible Let's
00:21:59
better take my video in which the
00:22:01
most comments. It's this video, the one
00:22:04
about infinities. So here
00:22:06
the points are clearly not randomly scattered; there
00:22:09
are some patterns in them;
00:22:10
the projection is calculated using UMAP, and
00:22:13
the clustering using DBSCAN, and
00:22:15
different clusters are marked with different colors,
00:22:17
but here it seems that all the points are the same
00:22:20
color, that is, they seem to have all stuck together
00:22:22
into one cluster. The first problem here
00:22:24
is that the first cluster is the
00:22:26
largest, and the points that did not fall into
00:22:28
any cluster are approximately the same color,
00:22:30
but I can simply say that
00:22:32
approximately 2000 points did not fall into any cluster,
00:22:34
while 4600 points ended up in the first cluster.
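Those leftover points are built into DBSCAN itself: anything not density-reachable from a core point is labeled noise rather than forced into a cluster. A minimal pure-Python sketch of the algorithm (illustrative only; in practice one would use a library implementation such as scikit-learn's or NVIDIA's cuML):

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN. Returns one label per point; -1 means noise,
    i.e. the point ended up in no cluster at all."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:        # not a core point: provisionally noise
            labels[i] = -1
            continue
        cluster += 1                    # start a new cluster from this core point
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:         # border point: attach it to the cluster
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:    # j is also a core point: keep expanding
                queue.extend(more)
    return labels

# Two dense blobs and one far-away point that should end up as noise (-1).
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (20, 20)]
print(dbscan(pts, eps=0.5, min_pts=3))  # [0, 0, 0, 1, 1, 1, -1]
```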
00:22:37
sudden connection before the release
00:22:39
video, I finally found how
00:22:41
to put better colors on the dots and now you can
00:22:43
clearly see that this blue
00:22:45
cluster is the same main cluster into
00:22:47
which most of the dots stuck together. But
00:22:49
these orange dots are those that did
00:22:51
not fall into any cluster.
00:22:53
It may still seem strange that the main
00:22:54
cluster is kind of scattered throughout this
00:22:56
picture, there is some part here there is a
00:22:58
part here here it is in different places.
00:23:01
But in fact this happens
00:23:03
because initially it is multidimensional, namely
00:23:05
1536-dimensional and it is projected onto
00:23:08
two-dimensional space accordingly
00:23:09
there are some inaccuracies and the second
00:23:11
problem with this sample, no matter how I
00:23:13
set up the clustering parameters,
00:23:15
either most of the points do not fall
00:23:16
into any cluster, or vice versa, they all
00:23:18
stick together into one large cluster, but
00:23:21
nevertheless some clusters
00:23:22
stand out here, I’ll get closer, for example, to
00:23:24
this cluster and there are relatively
00:23:25
large clusters with blue dots. Well, they are
00:23:28
large compared to other
00:23:29
small clusters. Because there is
00:23:31
generally one supergiant cluster
00:23:32
that occupies most of the dots and a
00:23:34
comment in this cluster is like: why the hell did
00:23:36
I enter this video at 2:00 am another
00:23:38
comment 2 o'clock in the morning so that I can still
00:23:40
watch Why am I watching this video at
00:23:42
2:00 am why am I watching this at night why am
00:23:45
I watching this at nine o'clock why am I
00:23:48
watching this and why are you watching It can be seen that with
00:23:50
distance the comments in the cluster
00:23:52
change a little, there are also
00:23:54
comments nearby ideal videos to
00:23:55
fall asleep ideal video before going to bed
00:23:58
I advise you to watch it before going to bed and don’t even
00:24:00
think about watching this video before going to bed
00:24:02
sounds like a clickbait title for the
00:24:04
video next to it there are also comments about the fact that
00:24:06
this video is watched at 5 o’clock in the morning and at
00:24:08
00:57, that is, it is clear that all the comments
00:24:10
around they are approximately about sleep or about the night,
00:24:13
but at the same time they did not unite into one
00:24:15
cluster, if you move away and look at the
00:24:17
clusters as they were projected
00:24:18
using UMAP, and not how they were
00:24:20
clustered using DBSCAN, then
00:24:22
this
00:24:24
large round cluster is visually visible here all
00:24:25
the comments are about the fact that no one
00:24:27
understood anything, but it’s very interesting. It’s interesting that there
00:24:29
is also one small
00:24:31
cluster nearby, they write in it: Scary, very
00:24:34
scary, we don’t know what it is and
00:24:36
these two meme clusters are nearby, and
00:24:38
these points can also be projected onto a
00:24:40
three-dimensional space, but it seems to me that
00:24:42
this only makes it more difficult, and then
00:24:45
I thought that my DBSCAN does
00:24:46
n't seem to cluster comments very effectively,
00:24:48
but at the same time UMAP projects
00:24:50
them so that they immediately
00:24:51
form visually
00:24:53
distinct clusters. So what if
00:24:55
I project the comments onto a 2D
00:24:57
plane first and only then cluster them
00:24:59
using DBSCAN? And it seems that it
00:25:01
turned out much better.
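The order of operations described here can be sketched as follows. This is a stand-in, not the author's code: it uses PCA from scikit-learn for the 2D projection instead of UMAP, and synthetic data, but the pipeline shape, projecting to 2D first and then running DBSCAN on the 2D points, is the same:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Synthetic stand-in for comment embeddings: three well-separated groups.
centers = rng.normal(size=(3, 64)) * 8
X = np.vstack([c + rng.normal(size=(30, 64)) for c in centers])

# Step 1: project to 2D first...
xy = PCA(n_components=2).fit_transform(X)
# Step 2: ...then cluster the 2D points, not the 64-dimensional ones.
labels = DBSCAN(eps=8, min_samples=3).fit_predict(xy)
print(sorted(set(labels)))
```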
00:25:03
Let's look at some cluster,
00:25:04
for example this green one. Here they
00:25:06
basically write that an infinity of infinities is greater than infinity,
00:25:10
and next to it, in this
00:25:12
pink cluster, they write that
00:25:15
two infinities are greater than infinity. And
00:25:16
in this cluster they write that the letter aleph,
00:25:18
which is used to denote an
00:25:20
infinite set, is the first letter
00:25:22
of the Hebrew alphabet. In this pink cluster
00:25:23
the question is why all these
00:25:25
infinities are needed. And here they write that this
00:25:27
video blew their mind, and of course I
00:25:29
googled whether it is possible to use
00:25:31
clustering after projection, and
00:25:31
it turns out that many have already asked this
00:25:33
question and usually everyone answers that it’s better
00:25:35
not to because after projection the
00:25:37
distances are not fully
00:25:39
preserved and clusters will then be
00:25:41
searched for at distorted distances. But
00:25:43
it seems that in my case it worked
00:25:45
very well. Well, either that, or I did
00:25:47
something wrong. But now I want to show a
00:25:49
video with one of the largest numbers
00:25:52
of comments from those that were included in my
00:25:54
statistics, this video about sleep paralysis
00:25:56
on the Utopia Show channel: 49 thousand
00:25:59
comments and quite interesting
00:26:01
clusters. And this is how it looks.
00:26:03
Interestingly, it was the first video with a
00:26:05
huge number of comments
00:26:06
to come up in my statistics, and it was on
00:26:08
it that the program crashed. At first I didn’t even
00:26:10
understand why, but then it turned out that
00:26:13
this was due to the fact that I
00:26:14
first converted the array of all embeddings into a string and
00:26:16
then saved this string to a file, and
00:26:18
it turns out that Node.js simply cannot
00:26:20
create such a giant string. Instead
00:26:23
I just decided to append each
00:26:24
embedding to the end of the file.
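The fix is the classic streaming pattern: serialize one record per line and append, so no giant in-memory string is ever built. The author did this in Node.js; here is the same idea sketched in Python with a JSON Lines file (the file name and data are made up):

```python
import json, tempfile, os

def append_embedding(path, embedding):
    # One JSON document per line: the file grows incrementally,
    # and no giant in-memory string is ever constructed.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(embedding) + "\n")

def read_embeddings(path):
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

path = os.path.join(tempfile.mkdtemp(), "embeddings.jsonl")
for vec in ([0.1, 0.2], [0.3, 0.4], [0.5, 0.6]):
    append_embedding(path, vec)
print(read_embeddings(path))  # [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
```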
00:26:26
And to store all these huge embeddings on the server
00:26:28
we need more memory. Initially there was
00:26:31
40 GB, I increased it to 64, and it is
00:26:34
very easy to do: just go to the
00:26:36
Selectel server settings and press
00:26:37
the plus sign. Let's start with the cluster on the
00:26:39
left side, here they talk about the dance at
00:26:41
the end, here about the splash screen And here they write
00:26:44
about the fact that they liked But these were
00:26:46
quite ordinary clusters Let's
00:26:48
look at more interesting ones here,
00:26:49
for example they write about coronavirus And here
00:26:52
they ask to make a video about lucid
00:26:53
dreams, here they write about the fact that they
00:26:56
had sleep paralysis, and in the largest
00:26:59
cluster they share stories of sleep
00:27:01
paralysis, usually they are quite
00:27:02
long, here they write that they have never
00:27:04
had sleep paralysis, and here they write that they
00:27:07
really want to sleep, and here are
00:27:09
already classic comments why am I
00:27:11
watching this at two in the morning And in this
00:27:13
cluster they write about who sleeps on their back, on their
00:27:15
side or on their stomach here they write about
00:27:17
demons that come during
00:27:18
sleep paralysis And here they write about a
00:27:21
succubus here they discuss what they
00:27:23
call this demon in different cultures. But
00:27:25
this is a rather interesting cluster Let's get
00:27:27
closer to it, here they write no one
00:27:28
absolutely no one nearby they write about an episode from
00:27:31
Dr. House and on the other hand they write
00:27:34
about [ __ ] sofa pillar And next to the
00:27:37
cluster where they write no one absolutely
00:27:38
no one nearby there is such a small
00:27:39
cluster where they write no one absolutely no one
00:27:41
and then they write about the Witcher from the
00:27:44
center of the cluster where no one is absolutely
00:27:45
no one this cluster about the Witcher, it
00:27:47
seems to be pointing somewhere in the direction if
00:27:49
we look in this direction and go here then
00:27:51
here it will be Just the cluster where
00:27:53
they discuss the Witcher here they write timecodes
00:27:56
here are different plus symbols other
00:27:58
question marks And here are emojis And I also
00:28:02
decided to make statistics not only on
00:28:03
the comments but also on the videos themselves and this is what it
00:28:06
looks like here I didn’t use
00:28:08
clustering because in essence
00:28:10
we already know the clusters, these are the channels themselves
00:28:12
Some videos clearly form separate
00:28:14
clusters But for example, my videos are
00:28:16
scattered among these statistics, but it’s
00:28:18
not surprising because mainly for
00:28:20
these statistics I was looking for channels similar to mine, and
00:28:22
at least 80 percent of the channels here are like that.
00:28:24
Let’s look at those
00:28:27
videos that are clearly separated into
00:28:28
separate clusters for example, this chemistry is
00:28:30
simple and quite easy to understand why it
00:28:32
ended up in such a separate cluster. The fact is
00:28:34
that the neural network looks at the title of the
00:28:36
video, and every title
00:28:39
has "Chemistry is Simple" appended at the end, so it is
00:28:41
easy to understand which channel
00:28:43
these videos belong to, there’s a cluster nearby
00:28:45
Antropogenez and Vert Dider. It is interesting that
00:28:48
Vert Dider is scattered across the statistics: it
00:28:51
is, for example, here, it is also
00:28:53
here, and here it forms a very compact
00:28:55
cluster. The fact is that on the
00:28:57
Vert Dider channel there are mainly English-language
00:28:59
videos, and at the end of each title the
00:29:02
channel from which the video was translated is appended. There
00:29:03
is also a cluster of Makar Light with a
00:29:06
section and science show, it is located
00:29:08
quite densely But the rest of the videos of
00:29:09
Makar Light are scattered according to statistics, there
00:29:11
is a huge cluster of Boris
00:29:14
Trushin and among his videos there is a video of
00:29:16
Dagon Impossible task binary code in
00:29:19
action, which generally fits this
00:29:21
cluster. There is a super compact cluster of
00:29:23
one of the Utopia Show's segments. Here the
00:29:26
Space Simple cluster is, and next to it there is a DS Astra video, and
00:29:29
around them are scattered videos from the
00:29:31
Zemlyakov channel, there is also Shklovsky street nearby
00:29:33
And below there are game
00:29:35
channels here many channels make videos
00:29:37
using Unity and some does Let's Play or
00:29:40
reviews of games, here for example there is Trishka boom
00:29:42
and a cluster with Undertale. Below there is a
00:29:45
cluster with Detroit: Become Human, and then
00:29:48
suddenly there is one video from another
00:29:49
channel, and this video is also about Detroit:
00:29:52
Become Human. There is another cluster here, and
00:29:55
if you get closer to it, there
00:29:57
is also one such video. In the center there is a fairly
00:29:59
compact cluster with videos from the
00:30:01
topless channel. And here at the top there is a very
00:30:03
separate cluster with the Domioz channel, it is
00:30:05
so separate because in each
00:30:06
video title there is a phrase 10 interesting
00:30:09
facts, but another interesting fact is that
00:30:11
two more videos from other channels are included here.
00:30:13
Let's get closer to this cluster, and this is
00:30:15
suddenly a video from the DS Astra channel 10
00:30:18
most beautiful lunar craters, and a video from the
00:30:20
Space Simple channel: 10 fakes about the Moon.
00:30:23
I also tried to find the users
00:30:25
who leave the most
00:30:26
comments under the video and in first place
00:30:29
was someone who in my statistics
00:30:31
had
00:30:32
28,000 comments, and it turned out to be Chemistry is Simple:
00:30:35
he simply responded to comments
00:30:37
under his own videos and thus scored
00:30:40
28,000 comments. So first place goes to
00:30:42
Chemistry is Simple, and second place to Space Simple, which
00:30:46
got 20,000 comments in the same way
00:30:48
Then come several more channels, including
00:30:50
the Stubborn Paleontologist, DS Astra, and Boris
00:30:52
Trushin. It turns out the seven first places in
00:30:55
comments are taken by the bloggers themselves, and
00:30:57
only then comes the subscriber who
00:30:59
wrote more than 4000 comments to a
00:31:01
variety of channels. Now let’s
00:31:03
see when comments are written after
00:31:05
the video is released, here you can see that
00:31:07
most comments are written at the very
00:31:08
beginning and then their number
00:31:10
drops sharply, but then it seems to rise a little
00:31:12
and then drops again,
00:31:14
rises as if there are some
00:31:16
waves here, and if you get closer to this graph, the
00:31:19
waves are visible even better and as you can
00:31:21
guess, the length of this wave is equal to a day
00:31:23
and most comments are
00:31:25
probably written during the day, but Wait, this is not an
00:31:27
absolute time, this is a time relative to the
00:31:29
release of the video, and since everyone uploads
00:31:32
videos at different times, these waves
00:31:34
could be smoothed out, but as you can see, they are there,
00:31:36
probably because the majority
00:31:37
upload videos at similar times. That
00:31:39
is, somewhere in the evening. And since all the channels
00:31:41
in this sample are Russian-language, the evening
00:31:43
falls in approximately the
00:31:45
same time zones. Let's also
00:31:47
try to look at the number of likes
00:31:49
over time. Of course, it needs to be
00:31:51
divided by the number of comments,
00:31:52
so that bins with more
00:31:54
comments don't automatically show more likes.
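The normalization described here, dividing the total likes in each time bin by the number of comments in that bin, can be sketched like this (the data layout and numbers are hypothetical):

```python
from collections import defaultdict

def likes_per_comment_by_minute(comments, video_published):
    """comments: list of (published_ts, likes) pairs; timestamps in seconds.
    Returns {minute_since_release: average likes per comment}."""
    counts = defaultdict(int)
    likes = defaultdict(int)
    for ts, n_likes in comments:
        minute = int((ts - video_published) // 60)
        counts[minute] += 1
        likes[minute] += n_likes
    return {m: likes[m] / counts[m] for m in counts}

video_published = 1_000_000
comments = [
    (video_published + 30, 5),       # minute 0
    (video_published + 70, 1),       # minute 1
    (video_published + 90, 3),       # minute 1
    (video_published + 8 * 60, 40),  # minute 8
]
print(likes_per_comment_by_minute(comments, video_published))
# {0: 5.0, 1: 2.0, 8: 40.0}
```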
00:31:56
I thought that the comments
00:31:58
written immediately after the release of the
00:31:59
video get more likes, and it seems that is the case. Further out, the
00:32:01
statistics become much
00:32:03
noisier and nothing is clear here, but
00:32:05
at the beginning it is much smoother because
00:32:07
there are simply more comments. That
00:32:09
was the number of comments by hour; and
00:32:11
now let's go by minute. It is clear that in the
00:32:13
first minutes after the release of the video a
00:32:15
very large number of
00:32:16
comments appears, and it subsides by approximately the
00:32:19
seventh minute after the release of the video this
00:32:21
graph shows exactly the first day
00:32:23
after the release of the video if you remove the first 7
00:32:25
minutes then the graph will go like this and
00:32:27
now let's see in terms of the number of
00:32:28
likes by minute, here it is of course
00:32:31
much more noisy and, again,
00:32:33
most likes go to comments
00:32:34
that are written at the very beginning. Or do they?
00:32:37
Look, there is such a
00:32:39
small stripe; because of the noise it's
00:32:41
not very visible, but in fact
00:32:42
the graph is not at its highest at first: it
00:32:44
goes up and reaches its peak
00:32:46
at approximately the eighth minute.
00:32:48
Remember, I showed before that a
00:32:49
huge number of comments are written
00:32:51
in the first 7 minutes, but at the same time They do
00:32:53
n’t get that many likes. It happens at the
00:32:56
eighth minute and it is the
00:32:59
comments written then that get
00:33:00
the majority of likes, that is the comment that is most
00:33:03
likely to receive more likes is the one
00:33:04
written eight
00:33:06
minutes after the video is released, and
00:33:08
remember that its length should be from 90
00:33:10
to 105 characters. I also tried to find the
00:33:13
most popular words in the comments on
00:33:15
different channels. And in fact, they are
00:33:17
the same on all channels: "i", "v", "ne" ("and", "in", "not"). Well,
00:33:19
simply because these are the most
00:33:21
common words in the Russian language,
00:33:23
so I tried to find not just the
00:33:25
words that are frequent overall, but the words that occur
00:33:27
more often on a specific channel than in the overall sample.
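One simple way to score channel-specific words, as described, is to compare a word's per-comment frequency on the channel with its frequency in the overall sample. A sketch with made-up comments (the author's exact scoring formula isn't given in the video):

```python
from collections import Counter

def distinctive_words(channel_comments, all_comments, min_count=2):
    """Words over-represented in one channel's comments relative to the
    overall sample, ranked by frequency ratio (with add-one smoothing)."""
    channel = Counter(w for text in channel_comments for w in text.lower().split())
    overall = Counter(w for text in all_comments for w in text.lower().split())
    scores = {
        w: (n / len(channel_comments)) / ((overall[w] + 1) / len(all_comments))
        for w, n in channel.items() if n >= min_count
    }
    return sorted(scores, key=scores.get, reverse=True)

channel = ["rayo wins", "rayo number is huge", "graham vs rayo"]
other = ["nice video", "very nice", "nice graham video", "cool"]
print(distinctive_words(channel, channel + other)[0])  # rayo
```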
00:33:29
And then on my channel, for example, the
00:33:31
top 10 words turn out to be variants of "rayo":
00:33:33
with a colon, with an @-sign, "onigirirayo" (this is connected
00:33:37
with Rayo's number from the video about the largest
00:33:39
numbers), "rayo" with a comma and without a
00:33:41
comma, "rayo" in English letters with a
00:33:44
comma and without. Also
00:33:46
there is "3-3" from the video about the largest
00:33:48
numbers, and "sierpinski", as in the
00:33:50
Sierpinski triangle, apparently because the word
00:33:52
triangle is often found on
00:33:54
other channels, but not the word Sierpinski, and
00:33:56
googolplex, also from the video about the largest
00:33:58
numbers. The most interesting to me is the top
00:34:01
10 words on Chemistry is Simple: the first one is
00:34:03
"bulbum" with a comma, then a link to a
00:34:06
deleted YouTube video, then "@chemistry",
00:34:08
"bulbum" with a comma, "broma", "bulm",
00:34:12
"bulbum" with a colon, "bul bul" with a
00:34:14
question mark, and "spirit lamp". Well,
00:34:16
here you can see that a lot of words
00:34:18
are repeated It’s just that they are followed by a
00:34:20
comma dot Because I
00:34:21
separate words simply by spaces You can
00:34:23
try not to take into account any commas,
00:34:25
dots and other symbols and then the top 10
00:34:28
words look like this, for example: here
00:34:30
first comes the word's place in the ranking,
00:34:31
then the number of its mentions, and then the
00:34:33
word itself. The ranking
00:34:35
hasn't changed much; the word "arrow" was added,
00:34:37
which is interesting, this is not even the game about
00:34:39
arrows that I make, but arrows from a
00:34:42
video about the largest numbers and
00:34:43
a rating was also added. For example, chemistry is
00:34:46
simply found more than a thousand times as a
00:34:48
word and more than a thousand times.
00:34:50
It’s interesting that it occurs only
00:34:53
twice more than just the boom of a stubborn
00:34:56
paleontologist's top words are basilosaurus,
00:34:59
mosasaurs, ankylosaurus, and so on. And on the
00:35:03
DS Astra channel it is in fourth place in
00:35:05
popularity the word goes milking And I also
00:35:07
made this site with voting for the
00:35:09
best thing, two completely
00:35:11
random things are given to choose from and we choose what
00:35:14
we like best. I already posted this
00:35:16
on my Telegram Channel, by the way,
00:35:17
subscribe to it and there were
00:35:19
more than 100,000 votes but after this
00:35:22
video you can make sure that this is
00:35:23
not enough for good statistics And
00:35:26
this test will be indirectly related to the
00:35:27
next video and a link to it will be in the
00:35:29
description So go vote and
00:35:32
see what a beautiful animation of
00:35:33
cards I made there And if you want to
00:35:35
support the channel then Subscribe and
00:35:38
bye everyone
00:35:43
[music]

Description:

Start a career in Data Science: https://go.skillfactory.ru/jperFg Money back for the course if you don't find a job after training + 45% discount with promo code ONIGIRI until 31.07.2023. GPU server rental: https://selectel.ru/services/gpu/?section=cloud Telegram: https://t.me/onigiriScience Boosty: https://boosty.to/onigiriscience Voting site: https://www.bestthingever.fun/ In this video I analyze comments from different YouTube channels, apply clustering methods, and visualize embeddings. Advertising: OOO "Selectel", INN: 7842393933, token: 2VtzqupnNU4; OOO "SKILFACTORY", INN 9702009530, token: 2VtzquWXGBn
