
Download "Нейробиологи дешифровали мысли в текст | ПУШКА #57"

Table of contents

0:00
how they learned to "read minds"
9:36
a brain mega-project, plus an ad for a course you pay for only if you actually get a job afterwards
11:47
they have learned to read "visions"
15:50
and are making their way into dreams
20:01
acknowledgements
Video tags

news
riddles
mysteries
discoveries
science
inventions
technologies
engineering
future
experiments
tests
progress
brain
intelligence
neurons
neuroscience
neurobiology
neural networks
AI
artificial intelligence
machine learning
neural interfaces
mind reading
Neuralink
prosthetics
Пушка
Влад Гончарук
scione
сайуан
сцыван
скайван
скайуан
скиван
Subtitles

  • Russian

00:00:01
When knowledge and technology reach such a density that they begin to work for each other, we get something like a thermonuclear reaction: new ideas and technologies are synthesized, and we get a chance to reach a new level of development. Just such a special time is now arriving in research on our brain, the source of our most valuable abilities and achievements throughout history. This fall, the news passed almost unnoticed that a "thought decoder" had been invented at the University of Texas. That is the journalists' wording, not the authors', but they really did do something that can be called that: a person silently thought something, and at the output the researchers received his thoughts in the form of text.

00:00:41
Of course, the authors did not originally set themselves quite that task, to decrypt the inner voice. Decoding individual words or even phrases into text purely from data on brain activity had been done before, and there was nothing surprising left in it; now the task was to decode the entire stream. You start thinking, and the machine, like a shameless typist, types out in plain text what is in your head. But most existing methods of getting access to your thoughts require brain surgery and implanted electrodes, so they can only suit those who are already undergoing such procedures, for example paralyzed people, or those who are facing a difficult operation anyway and do not mind helping science along the way; for everyone else the method is too risky. Usually, through electrodes implanted in the brain, neuroscientists monitor the electrical activity of the motor cortex, the area mainly responsible for movement, including movement of the lips, and from the signals that the brain voluntarily or involuntarily sends to the lip muscles they try to decipher what a person is saying in his head.

00:01:48
The Texas neuroscientists decided to test a fundamentally different idea: is it possible, without surgery, which radically limits the use of the technology, and without a direct connection to the brain, to take some data about the processes inside it and decipher a person's inner speech, the whole flow of it rather than individual words and phrases?

00:02:10
In the future, if it works out, with modifications of course, this would make it possible not only to restore the ability to communicate with the outside world to completely paralyzed people and people with speech impairments, but also to expand the capabilities of healthy people, for example to help those who are not very good at putting their thoughts into words: connect such a decoder to a text generator like ChatGPT and it would already produce coherent speech, even in the desired style, simply based on your thoughts. Or you could find out what you really think but do not notice about yourself, or what you do not even want to admit, and reveal your true thoughts. Can you imagine how this could help psychotherapy? Or marketers. Or, hypothetically, finding out what a person is trying to hide when applying for a job or under interrogation, which does not always happen, or, more likely, checking at a completely different level, for example in court, whether a person is telling the truth or lying.

00:03:08
And if we consider that there have also been serious advances in decoding the pictures our brain sees, then we can go even deeper and come seriously closer to the possibility of live-broadcasting dreams, hallucinations, or simply the imagination. But more about that a little later in this video.
00:03:27
So, back in the fall, the researchers from Texas managed to show that the stream of inner speech can be deciphered without surgery. It came out very, very crude, so much so that they themselves said it could not yet be used even for communication with paralyzed people. But just the other day they released a new paper, and there the breakthrough is already here: this time the neuroscientists crossed their technology with the large language model GPT, so now the prospects are completely different. It looks like science fiction: a person is placed in a tomograph, the person thinks, and the output is a transcript of their thoughts.

00:04:01
But to make it that simple, without getting directly into the convolutions, they had to accept a serious compromise. Instead of more or less direct data, for example the electrical activity that researchers can obtain only straight from the neurons, they take an MRI scan from the outside; that is, what the scan shows is very indirect information about the processes in the brain. It does show, in great detail, how the oxygen saturation of the blood in the brain changes, and that really does change depending on what your head is busy with here and now. But it is still like trying to understand what is happening in the city today by watching car traffic on its roads: you can say, for example, that it is the weekend if everyone is driving to shopping centers and theaters, but from traffic alone you are unlikely to be able to tell reliably whether it is some kind of holiday, and if so which one exactly: March 8th, Valentine's Day, or something else.

00:04:57
And this is where artificial intelligence technologies made it possible to compensate for the weakness of the method and take it to a new level. The researchers knew that systems like GPT use something like maps of meaning, which let them produce texts that make sense to humans. Such systems simply do not know in advance what will come at the end of the text; they do not work at all the way we do, thinking of something and then choosing the words for it. No: they produce text as if typing it here and now, predicting step by step which next piece of text fits best. These pieces are called tokens; on the website of GPT's creators, OpenAI, you can enter any text and see how it looks to their system as a set of pieces. Mostly these are syllables and words, but there are also individual letters.
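To make the idea of tokens concrete, here is a minimal sketch using OpenAI's open-source tiktoken library. This is my own illustration: the video only points to the web demo, and the encoding name below is an illustrative choice, not something named in it.

```python
# Minimal tokenization sketch with OpenAI's tiktoken library.
# Assumes `pip install tiktoken`; the encoding name is the one used by
# GPT-3.5/GPT-4-era models and is an illustrative assumption.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Neuroscientists decoded thoughts into text."
token_ids = enc.encode(text)                        # a list of integer token IDs
pieces = [enc.decode([tid]) for tid in token_ids]   # the text chunk behind each ID

print(token_ids)
print(pieces)   # a mix of whole words, word fragments and punctuation
```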
00:05:44
pieces to collect words and phrases are,
00:05:46
of course, good, but how about
00:05:48
they all come together into something
00:05:50
connected? With such a blind
00:05:52
perspective, you start saying one thing and
00:05:54
go somewhere unknown, you need
00:05:56
something like maps by which the machine
00:05:59
would navigate how words are related to
00:06:01
each other, for example, by meaning and
00:06:04
These are the kind of maps you can create, they
00:06:06
are called embeddings, you can read in more detail
00:06:08
as always. From the links under the video,
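As an illustration of what such a "map of meaning" gives you, here is a toy sketch of my own, not taken from the paper: each phrase is represented by a vector, and semantically close phrases land close together, which we can measure with cosine similarity. Real systems get these vectors from a trained model; the three-dimensional vectors below are made up purely to show the mechanics.

```python
# Toy embedding sketch: phrases as vectors, closeness measured by cosine similarity.
# The vectors are hand-made for illustration; real embeddings come from a trained
# model and have hundreds or thousands of dimensions.
import numpy as np

embeddings = {
    "I don't have a driver's license yet": np.array([0.9, 0.1, 0.2]),
    "she has not started learning to drive": np.array([0.8, 0.2, 0.3]),
    "the weather is nice today":            np.array([0.1, 0.9, 0.4]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = "I don't have a driver's license yet"
for phrase, vec in embeddings.items():
    print(f"{cosine(embeddings[query], vec):.2f}  {phrase}")
# The two driving-related phrases score much closer to each other than to the
# unrelated one; that is the kind of "map" a decoder can navigate by meaning.
```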
00:06:10
And this is the solution: with the help of such maps you can associate different changes in brain activity with meanings rather than with individual dictionary entries. Since the data we have about brain processes is so indirect, we need to capture the general sense of what a person is thinking right now, not try to translate it into text literally. To build these maps, each volunteer had to spend 16 hours in an MRI scanner listening to podcasts, literally day after day, so that the algorithm could compare: this is the text the person is hearing, and this is how his brain behaves at that very second. The result is a map of meanings.
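A rough sketch of how such a map could be fitted, under simplifying assumptions of my own (the published pipeline uses far more elaborate fMRI preprocessing and feature extraction): take the embedding of what the listener heard at each time point and learn a mapping that predicts the recorded brain response from it.

```python
# Sketch of fitting a "meaning map": predict brain responses from text embeddings.
# Shapes and data are synthetic stand-ins; real experiments align hours of fMRI
# volumes with the transcript of the podcasts the volunteer was listening to.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints, embed_dim, n_voxels = 2000, 512, 5000
text_embeddings = rng.standard_normal((n_timepoints, embed_dim))   # what was heard
brain_activity  = rng.standard_normal((n_timepoints, n_voxels))    # fMRI response

# Linear "encoding model": embedding of the heard text -> expected voxel activity.
encoder = Ridge(alpha=10.0)
encoder.fit(text_embeddings, brain_activity)

# Later, for a candidate sentence, we can ask: how well does the activity it
# predicts match the activity that was actually recorded?
candidate_embedding = rng.standard_normal((1, embed_dim))
predicted_activity = encoder.predict(candidate_embedding)
```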
00:06:52
And now you can go in the opposite direction: give the algorithm a recording of brain activity and receive from it, in response, a text of what the person is thinking, as a retelling, not literally, not word for word.
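How that reverse direction can work is sketched below, again under assumptions of my own about the details: a language model proposes several possible continuations of the decoded text, the encoding model from the previous sketch predicts what brain activity each continuation should produce, and the continuation whose prediction best matches the recorded scan is kept. The names `embed`, `propose_continuations`, and `encoder` are hypothetical stand-ins for that machinery.

```python
# Sketch of decoding by candidate scoring: a language model proposes continuations,
# an encoding model scores them against the recorded brain activity.
# `embed`, `propose_continuations` and `encoder` are hypothetical placeholders.
import numpy as np

def decode_step(text_so_far: str,
                recorded_activity: np.ndarray,
                propose_continuations,
                embed,
                encoder,
                n_candidates: int = 8) -> str:
    candidates = propose_continuations(text_so_far, n=n_candidates)
    best_text, best_error = text_so_far, float("inf")
    for continuation in candidates:
        candidate = text_so_far + continuation
        predicted = encoder.predict(embed(candidate)[None, :])  # expected activity
        error = float(np.linalg.norm(predicted - recorded_activity))
        if error < best_error:
            best_text, best_error = candidate, error
    return best_text  # the wording whose predicted activity fits the scan best
```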
00:07:04
Then the volunteers were put into the tomograph again, this time to watch videos without sound, and here are the thoughts the algorithm decoded as they watched: "I look around and notice two guys in front of me and immediately start screaming. They fly past. One of them grabs my hand and pulls me forward toward him, he moves toward my face. I can't avoid the blow, he knocks me off my feet and I fall to the ground." And in the same vein: "It was a very slow movement that took minutes, in the end the guy was exhausted," and so on.

00:07:35
The volunteers and the decoder were also tested in another way: people had to listen in the tomograph to a recorded story, or imagine a story in their own mind, that is, fantasize about something more or less specified in advance. And here it became especially clear how the thing works and where the problems still lie. For example, if the podcast said "I don't have a driver's license yet," the decoder produced "she hasn't even started learning to drive yet." A good result. But as soon as the person thought about something else while listening, the decoder produced nonsense that was not even related to what the subjects were hearing at that moment. In other words, the person in the tomograph has to stay focused on their thoughts, and for now those thoughts cannot be completely free: they have to be guided by some kind of coherent material such as films, conversations, a podcast, or an audiobook.

00:08:25
These are only the first steps, so I would not rush to predict the technology's fate. For now the decoding works only personally: you cannot simply be taken, scanned, and have your thoughts read; the algorithm must first be trained on your scans, and that means many long hours in a tomograph. But the fundamental ability to read minds, to listen in on the inner voice, no longer seems to be wild fantasy and science fiction. Accordingly, in the authoritative scientific journal Nature, literally a few days later, an article appeared that began raising questions about the risks and ethics of using such technologies. However, if you compare the pace at which we answer ethical questions and others like them, and, most importantly, whether or not we carry those answers into practice, with the pace at which the technology is now developing, I am not sure we can keep up. Though we must try, of course we must. Meanwhile, we are already making our way deep into the brain, to our dreams and imagination, which I will tell you about after a short chapter on a mega-project in neuroscience and an ad for a course you pay for only if you managed to get a job after it.
00:09:39
This year the world's largest brain research project, the Human Brain Project, is coming to an end. Ten years ago it united teams of neuroscientists from all over Europe and beyond, and the researchers were given the most powerful supercomputers to work with, all in order to create an advanced scientific and technological infrastructure for studying the brain for the decade ahead. What is interesting, given the scope and the goals, is that it cost only 607 million euros. One of the main tasks the participants were solving was building new tools for research and collaboration, and no wonder: scientists from 123 institutes and universities were involved. So if you go to the page where they publish the source code created as part of the project, you can see which languages the scientists use: Java, as a rule, for managing servers and for working with databases, the things that matter most for collaboration. That is exactly what Java is valued for, and at Kata Academy they are so confident in it that they offer a training format unique for the market, with payment after employment.

00:10:41
In the agreement with the student they guarantee a minimum salary in the specialty after graduation of 100 thousand rubles per month; according to their statistics, already this year more than 80 percent of graduates earn 150 thousand rubles and above. No installment plans, loans, or fees: you pass the entrance tests and start studying from the very beginning. They assess motivation and readiness to complete the program, and after each block of material they look at your performance. Not working out? No problem, you say goodbye and pay nothing. After mastering the theory comes practice: a large team project where you work in conditions close to real ones, and throughout your studies you are supervised by experienced mentors. At Kata Academy they are very interested in you reaching the end, graduating, and joining a company; if for some reason you cannot find a job in your specialty, you also do not pay for the training. In other words, the only thing you need to invest to start the course and go all the way is effort, time, and perseverance, and you pay when it starts paying off for you. The link to the Java development course at Kata Academy is under the video. And now, back on track.
00:11:49
Back in 2011, scientists learned to reconstruct from signals in the human brain what it sees: on the left are the clips that were shown to a person, and on the right is what the algorithm managed to recreate. The result, as you can see, was very, very rough, although important. And here is what researchers have achieved now, in work published just the other day: at the top are the original frames of a black-and-white film, and at the bottom is the picture created by the algorithm from the signals in the brain. You will agree it is very hard to tell apart from the original. That is the progress made in 11 years, but there are nuances: in the old work they used MRI and the subjects were people; in the new work the subjects are mice, and the data come from electrodes implanted not just in the brain but specifically in the visual cortex, where the signals arriving from the retinas of the eyes are processed. Otherwise everything is already familiar to you: they used something like a map of meanings, the same embeddings. The algorithm was first trained on a mouse with electrodes while it was shown a particular excerpt from the film; the program identified patterns in the changes of electrical activity, and then, when the algorithm was given only the signals and had to recreate what the mouse was seeing, it generated the film frame by frame with an accuracy of 95 percent.
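One way to picture that experiment computationally, under heavy simplification of my own: the video says the real pipeline relies on embeddings of the recordings, while this toy version swaps in a plain classifier just to show the decode step. Treat each moment of visual-cortex activity as a feature vector and learn which frame of the film it corresponds to; decoding then means predicting the frame index for held-out activity.

```python
# Simplified sketch of frame-by-frame decoding from visual-cortex recordings.
# Data are synthetic; the real pipeline embeds the neural activity first and is
# far more sophisticated, but the decode step has this general shape.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

n_samples, n_neurons, n_frames = 3000, 200, 50
activity = rng.standard_normal((n_samples, n_neurons))      # spiking features
frame_id = rng.integers(0, n_frames, size=n_samples)        # which frame was shown

X_train, X_test, y_train, y_test = train_test_split(activity, frame_id, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out frame accuracy:", clf.score(X_test, y_test))
# With random data this is near chance; the video reports ~95% on real recordings.
```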
00:13:06
Researchers are trying more and more actively to combine several developments from different fields at once, especially from the field of artificial intelligence. So they have already learned to reconstruct from MRI scans, without any surgery, what our human brain sees, albeit not in as much detail as in the mice; the research here has also been coming thick and fast. At the top are the pictures that were shown to the subjects, and at the bottom are images reconstructed from brain activity alone. What is especially interesting is that in this work a personal approach is no longer required: you do not have to train the algorithm on data from a specific person to get a tolerable result, you do not have to be stuffed into a tomograph and kept there for hours. The point is that there are certain general patterns of activity shared by any brain: in one place, for example, everyone is more or less active when we see a face, and for a landscape it is other areas, again more or less the same for everyone. The details of one brain's activity will of course differ from another's, so with this approach we are dealing only with something very approximate, very conditional.

00:14:11
How, then, with that degree of abstraction, can we assemble a specific face, landscape, or animal and restore the picture of what the brain is seeing right now, in a dream or in reality? Here neuroscientists from Japan have already figured out how to use image generators like Stable Diffusion or Midjourney to solve the problem. You describe a picture in words, and the system uses those words to create it. So if we establish the connections (this pattern of brain activity corresponds to such-and-such a concept; these MRI scans occur more often when a person sees a bear, those when it is an airplane), then we can turn brain activity data not straight into a picture but into a set of words, a text description. And those text descriptions are then fed to the image generator.
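The last step of that pipeline, turning a predicted text description into a picture, can be sketched with the open-source diffusers library. Everything specific here (the model checkpoint, the caption, the use of a GPU) is my own assumption for illustration, not what the Japanese group actually used.

```python
# Sketch of the final step: feed a caption decoded from brain activity into an
# off-the-shelf image generator. Assumes `pip install diffusers torch` and a GPU;
# the checkpoint name and the caption are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# In the full pipeline this string would come from the brain-to-text decoder.
decoded_caption = "a brown bear standing in a river"

image = pipe(decoded_caption).images[0]
image.save("reconstructed_view.png")
```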
00:15:04
That, to simplify greatly, is how neuroscientists have achieved quite high quality in decoding, as it were, the visual images, the pictures that were in a person's head; accuracy reaches 80 percent. The next logical step would be to try this approach for recording pictures from the imagination, or even dreams: after all, it does not matter to the brain what exactly activates its visual cortex, signals from the eyes or signals from other regions of the brain when we remember, dream, or simply fantasize. If at that moment we seem to see pictures inside ourselves, it means the visual cortex is at work. So perhaps we are not so far from giving an outlet to what is actually hidden in our heads, not only through words but also through digitized visions.
00:15:53
Here it is worth recalling another very interesting technology that may be combined with the ones already reinforcing each other: dream incubation. I talked about it in this video here, and I'll remind you briefly. Over the last few years, neuroscientists have begun building scientific instruments that should help implant something into a dream. Not quite like in the movie Inception, where the victim was given false memories or an idea, for example to hand the company over to someone else, and he woke up believing he had come to think that way himself; for now we are talking about far more modest things, individual images: here is a square, dream about it, here is a circle. This is needed for studying the nature and functions of dreams, a purely scientific task, a purely scientific interest; it only seems that we already understand a lot about this phenomenon of the brain, about dreams. So researchers use sensors to determine when the brain of a sleeping person is most receptive to external stimuli, and at that moment they expose volunteers to smells, sounds, flashes of light, or even speech in order to influence the content of their dreams.
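The cueing protocol just described can be summarized as a simple control loop. The helper functions below (`read_sleep_stage`, `play_audio_cue`, `log`) are hypothetical placeholders for the sensors and stimulus hardware, not a real API, and the chosen stages and timing are assumptions for illustration.

```python
# Sketch of a dream-incubation cueing loop: wait for a receptive sleep stage,
# then present a chosen stimulus. All helpers are hypothetical placeholders.
import time

TARGET_STAGES = {"N1", "REM"}       # stages assumed here to be receptive to cues
CUE = "circle.wav"                  # the image/word the experimenters want to seed

def run_cueing_session(read_sleep_stage, play_audio_cue, log,
                       duration_s: int = 8 * 3600, poll_s: float = 5.0) -> None:
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        stage = read_sleep_stage()          # e.g. from EEG/eye-movement sensors
        if stage in TARGET_STAGES:
            play_audio_cue(CUE)             # quiet sound, smell or light flash
            log(f"cue delivered during {stage}")
            time.sleep(60)                  # avoid waking the sleeper with repeats
        else:
            time.sleep(poll_s)
```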
00:17:02
Success so far is very modest: at most they can nudge a dream in a direction, but not ask it to create images that were not there. Yet the attempts themselves were enough to raise alarm among some scientists. More than 40 researchers from around the world, from leading research centers in Harvard, Cambridge, Lund, Paris, Berlin, and Geneva, warned colleagues in an open letter that corporations are already flirting with the technology without understanding what a responsibility it is, and that this must not be allowed. The occasion was an advertising campaign for a beer brand in which neuroscientists, commissioned by the company, supposedly incubated the dreams of test subjects and implanted in them images from an advertisement for the drink. There was even a scientific publication afterwards; it was not pure marketing, they used real scientists, real experiments, real methods. And that was 4 years ago; now the situation is completely different. Back then the neuroscientists called for intervention, for the state to step in and protect people from such unscrupulous games with the brain, but everyone happily forgot the letter and very quickly forgot the problem.

00:18:08
Meanwhile, new opportunities to penetrate the brain and play with it keep appearing. They can bring both good and evil, and no one can predict which will outweigh. What is more, new possibilities arrive faster and more often, so doing nothing will not work, and banning things just in case is pointless: others will take advantage of an opponent's stupor, since development goes on in parallel in different countries. Say a portable thought decoder appears and we introduce licenses for it, as for a weapon. On what grounds should those licenses be restricted? For use by a psychotherapist it is allowed, but for private interrogations it is not, yet for the police and intelligence services it is? And who said that the illegal, unethical, and dangerous use of something requires a license, or anyone's permission, at all?

00:18:57
No, no, I do not want to scare you with yet another video. I am just trying to make sense of what I am observing and to share questions, and I am far from the only one who has them. Here is what you can be sure of: more and more interesting hybrid technologies will appear in brain research. That is especially important because it can give the impetus to development that has been so lacking in recent years; there have been no fundamental discoveries, no great achievements, for a very long time. I am not saying they have to appear on schedule like new clothing collections every season, but still, it is these breakthroughs, these achievements, that we are waiting for, because only big breakthroughs of that kind let us hope for solutions to the most important mysteries of the brain: for example, what is the neural nature of many mental disorders such as schizophrenia, or of destructive brain conditions such as Alzheimer's disease. So of course we need new tools, new capabilities and technologies, for discoveries that will forever change the lives of millions and millions of people all over the world for the better.
00:20:01
Friends, thank you for your support: via this button, on Patreon for foreign cards, on Boosty for Russian ones, and on Friendly, the service for supporting creators, where my commission is minimal. Special thanks to Dmitry Orlov, Pavel Novikov, Petr Kondaurov, Alexandra Korystkina, Andrey Polevoy, iDebugger, Pavel Dunaev, Anton Palgunov, Pavel Khabchinsky, Alena Vysokova, Oleg, Pavel Valentov, Rinat Balbekov, Maxim Mendeleev, Vasilisa Versus, Evgeny Balakhnin, Dmitry Abramov, Konorlevich, Pavel Valentov, 137_число_вселенной, the Open Longevity community, Vladimir Podgorniy, Alexey Shevelev, Seryoga, Alexander AHO, and Roman Borodulya. Thanks for the support. This was Vlad; leave your likes and comments. Peaceful skies to us, and see you soon.

00:20:47
[music]

Description:

Тот самый курс по Java-разработке с оплатой после трудоустройства и гарантированной зарплатой от 100 тысяч рублей: https://kata.academy/java/postpayment Что вас ждёт в ролике: 00:00 - как научились «читать мысли» 09:36 - мегапроект по мозгу + реклама курса, за который платишь, если только устроился потом на работу 11:47 - научились считывать «видения» 15:50 - и проникают во сны 20:01 - благодарности Ссылки: https://docs.google.com/document/d/1vXak3q3Wd7RfF4MPOt-xAj295S5JIHpD-Cz0OHWbics/edit Спасибо за поддержку! Российские карты: https://friendly2.me/support/scione/ https://boosty.to/scione Зарубежные карты https://www.patreon.com/SciOne BTC bc1qs4cnnk2h2pw78x74f58fd3zzxv7yvsgeztlvcg ETH 0x98b846A01397F32d67Ef57615a00f5bD654E701f Если кого-то забыли, напишите мне! OkOdyssey [собака] proton.me Хранители SciOne Дмитрий Орлов, Павел Новиков, Пётр Кондауров, Александра Корысткина, Андрей Полевой Покровители SciOne iDebugger, Павел Дунаев, Антон Пальгунов, Павел Борский, Xabchinsk, Алёна Мыцыкова, Олег Жин, Павел Валентов, Ринат Бальбеков, Максим Менделев, Владимир Ямщиков, Василиса Версус, Евгений Балахнин, Дмитрий Абрамов, Konorlevich, Павел Валентов, 137_число_вселенной, Сообщество Оупен Лонгивити, Владимир Подгорный, Алексей Шевелёв, Серёга, Павел Петриковский, Александр АХО, RateKing, Станислав Чернин, Роман Бородуля Вносят вклад в развитие SciOne Sergo Oganov, Mathic Society, Daniel, Sergey Belov-Fishilevich, Andrew Yarmola, Konstantin Zhernosenko, Chokotto, Андрей Гордиевский, Vadanta, blanc, Victor Bolshakov, John Kramer, Evgeni SpirTanol, Кирилл Высотин, AlexGrimm, Никита Чемерис, Pavel Marchenko, Виктор Павлов, Roman Gelingen, Александр Тайгар, pervprog, Aleksey Goglov, Dmitry Luzanov, Олег Трофим, Алексей Ефимов, Виталий Савельев, Александр Шнитко, Lexx, Natalia Ivannikova, Slafffka Æ, D1ana Drozhzh1na, Максим Фалалеев, overlelik, smaximov, Александр Каторгин, Надежда Мещерякова, Егор Богданов, Женя Воронин, Артемедий Макаров, Вячеслав Карташев, ТяниКрип, Sanchokihana, Александр Денисов, Оксана Мироненко, Phil, Igor Egorov, Yuri Grachevski, Оксана Мироненко, Александр Денисов, Konstantin Bozhikov, deeflash, bolknote.ru, leonhetch, Андрей Зарембо, Александра Бордер, Sanchokihana, Pavel Shtanipopravel, Всеволод Осипович, Dmitry Araslanov, О нас заботятся Белозьоров Владимир, Zaur Aslanov, Eugen Zinchenko, Anton Bolotov, Konstantin Bredyuk, Evgenii Beschastnov, Nataliia Tomilova, Eugene Trufanov, Александр Ляшенко, Mathic Society, Elena Aitova, Alexei Popovici, Dmitry, yauheni kanavalik, Artem Gnatenko, Glebiys, Natallia Barysevich, Ivan Emanov, Ivan Bondarenko, Igor Komarov, Anastasiya Matusyak, Daniel, Olga Koumrian, r00t3g, Пахан и Танюха, Anton Morya, Pmdsoon, Shanren, doomwood, Павел Глазков, rizn.org Виталий Рябенко, Programmable Artificial Life, Egor, Vally Pepyako Anton Vasiljev, Вячеслав Шаблинский, lutersergei, Natallia Barysevich,, Александр Вивтоненко, Сергей Паскаль, Никита Друба, Александр Петровский, Wolkow, Эд-Эд, ATVANT, Сергей Крестов, Varvara Spirina, Михаил Х., Pavel Agurov, Konstantin Bredyuk, Roman, Konstantin Yanko, Mark Menshow, Artem Gnatenko, Glebiys, Natallia Barysevich, Ivan Emanov, Ivan Bondarenko, Дмитрий Черкасов, Irina Davletchina, Sergey Vorontsov, Даниил Иваник, User100, Павел Иванов, Светлана, lutersergei, Андрей Родионов, Павел Глумов, Kiryl Lutsyk, Artemee Lemann, Alexander Mordvintsev, Майор Айсберг, Артём Новиков, Светлана, Александр Гор, Alexander Klek, Сергей Крестов, hArtKor, Иван Смирнов, Светлана, Daniel All, Эльвира Хисматуллина, GrigZZ, 
Pmdsoon, Vasilii Pankratov, Ирина Ведерникова, Андрей Родионов, Вячеслав из Altterra Нам помогают Георгий Журавлев, Dmitry Salnikov, Pumba abmup, Alex Abdugafarov, Hanna Kalesnikava, Jimi Jimi, Chumva, Arseny, Robert Grislis, Sergey Mertuta, максим крупенко, Boris Dus, Igor Khlebalin, Eugene Vyborov, Ugnius Bareikis, David Malko, Nick Starichenko, Olha, Ivan Liakhovenko, Igor Petetskih, Ahmet, Костянтин Хорозов, Shockster, Anton Mick, Aleksey Serebryakov, Ihar Kryvanos, Stanislav Vain, Vasya Pupkin, Dmitrii komarevtsev, Сергей Белорусец, Dmitry Dikun, Lidia Shkorinenko, Olga Bykov, Yurii Ryzhykh, Viktoria Bril, Roman Tsyupryk, Alexander Novikov, Serafim Nenarokov, Vadim Shender, DroidCartographer Сергей Дробов, Stanislav K, Андрей Козелецкий, Александр Семилетов, Константин Попов, Светлана, Дмитрий Смирнов, Михаил С, Зураб Мгеладзе, Pavel Koryagin, Виталий Хамин, ont rif, Наташа Подунова, Mr. B. Goode, Olya Mikheeva, Дмитрий Викторов, kRen0, Duory, Денис Петрик, Данил Закальский, Eu
