Biofeedback is a technique for learning to influence bodily functions such as heart rate or muscle tension by observing them in real time.
In its simplest form, a person is wired to an EKG that shows their pulse, and they then try to influence their heart rate with their own thoughts.
For a few years now it has been possible to read out a person's imagined pictures via advanced algorithms. We can look into a mind with ever greater precision, which means we can treat this readout as a form of interface.
With such an interface we have a complete setup for testing many scenarios and even for manipulating our own brains.
Now our whole setup looks as follows:
Input (sounds, words, pictures etc.) -> test subject (human/animal) -> brain monitor aka output/debug interface (complex algorithm + EEG).
To understand how we can harness this setup, we need to look at how such systems work in current experiments:
A test subject is hooked up to a highly advanced EEG that shows which regions of the brain are firing.
The test subject then looks at certain pictures. In the next step the person is asked to imagine each picture as vividly as he or she can.
The EEG now shows the activated pattern when the person recalls, for example, the picture of a red hammer.
This signature is saved to a file, and the whole process is repeated with as many different pictures as possible.
As a result, there is now a large set of brain activity recordings, each linked to the picture that was presented to that person.
So the red hammer can be linked to a brain activity pattern encoded as RxH, while the red button is encoded as RxB, for example.
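A minimal sketch of how such a recording session might be stored, assuming each EEG trial arrives as a NumPy array; record_eeg_trial is a hypothetical placeholder for the actual acquisition code:

```python
import numpy as np

# Hypothetical acquisition function: returns one EEG trial as a
# (channels x samples) array while the subject imagines the stimulus.
def record_eeg_trial(n_channels=64, n_samples=512):
    # Placeholder: real code would read from the EEG hardware driver here.
    return np.random.randn(n_channels, n_samples)

stimuli = ["red_hammer", "red_button", "blue_cup"]  # labels of the shown pictures
dataset = {}

for label in stimuli:
    # Show the picture, ask the subject to imagine it, record the response.
    dataset[label] = record_eeg_trial()   # e.g. "red_hammer" -> RxH pattern

# Save all signatures to a single file for later training.
np.savez("brain_signatures.npz", **dataset)
```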
If this data set is fed into a special self-learning algorithm, the program tries to reconstruct pictures from the brain data it is shown.
This means that when the person imagines a red hammer, the AI is able to draw a picture of it from the brain signal alone.
Usually these pictures are a bit blurry, but it is now possible to roughly see what a person imagines. Even when a picture is not in the program's data set yet, it tries to create an image on its own from the neuronal pattern.
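A minimal sketch of what this decoding step might look like, with simulated data: a simple Ridge regression stands in here for the far more sophisticated self-learning algorithm, mapping flattened EEG patterns to the pixels of the presented pictures. All dimensions and data are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Simulated training data: one row per trial.
n_trials, n_eeg_features, img_size = 200, 2048, 32 * 32
eeg_data = rng.standard_normal((n_trials, n_eeg_features))  # recorded brain patterns
images = rng.random((n_trials, img_size))                   # pixels of the shown pictures

# Learn the mapping brain pattern -> picture.
decoder = Ridge(alpha=1.0)
decoder.fit(eeg_data, images)

# Reconstruct an image from a new, unseen brain pattern.
new_pattern = rng.standard_normal((1, n_eeg_features))
reconstruction = decoder.predict(new_pattern).reshape(32, 32)
print(reconstruction.shape)  # (32, 32) -- typically a blurry approximation
```

The reconstructions from such a simple linear model would be very blurry; the point is only to illustrate the mapping from brain pattern to picture.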
So far so good, but what else can be done besides looking into one's brain?
Well, building a fully automatic feedback loop.
One such system could look like this:
Input (sounds, words, pictures etc.) -> test subject (human/animal) -> brain monitor aka output/debug interface (complex algorithm + EEG) -> computer input -> human interface aka monitor, speakers etc.
This means the brain activity is fed back to the computer.
The computer, for example, is given the goal of evoking the brain pattern of pleasure (like the test subject's reaction to a good joke).
It then creates output, via pictures on a screen or a composed song, that stimulates the person's mind until that target brain pattern ultimately occurs.
This could be achieved with existing stock material or, even more interesting, with sophisticated learning algorithms that generate the output themselves.
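A minimal sketch of such a feedback loop, under the assumption that a target pattern has already been recorded and the stimulus can be described by a parameter vector; the subject's response is simulated here, and names like measure_brain_response are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
PATTERN_DIM = 256

# Target: a previously recorded "pleasure" signature (here just random data).
target_pattern = rng.standard_normal(PATTERN_DIM)

# Simulated subject: a fixed linear response to any stimulus vector.
response_matrix = rng.standard_normal((PATTERN_DIM, PATTERN_DIM)) / np.sqrt(PATTERN_DIM)

def measure_brain_response(stimulus):
    # Placeholder for an EEG measurement of the subject's reaction.
    return response_matrix @ stimulus

def similarity(a, b):
    # Cosine similarity between measured and target brain patterns.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Simple hill-climbing loop: mutate the current stimulus, keep changes
# that push the measured brain pattern closer to the target.
stimulus = rng.standard_normal(PATTERN_DIM)  # e.g. parameters of an image/sound generator
best_score = similarity(measure_brain_response(stimulus), target_pattern)

for step in range(500):
    candidate = stimulus + 0.1 * rng.standard_normal(PATTERN_DIM)
    score = similarity(measure_brain_response(candidate), target_pattern)
    if score > best_score:
        stimulus, best_score = candidate, score

print(f"best similarity to target pattern: {best_score:.3f}")
```

A real system would of course measure the response from a live EEG and drive an actual image or sound generator, but the keep-whatever-moves-you-closer logic stays the same.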
In such an experiment the AI might be able to evoke certain feelings on demand, and we might find a way to design our first sophisticated brain interfaces, install knowledge into our brains, or at the very least get a more precise idea of how our minds work.