An efferent (moving away from the display center) visual motion onset BCI paradigm was developed by Jair Pereira Junior, allowing for single-trial operation of an eight-command application (here a simple 1-8 digit speller).
An afferent (moving toward the display center) visual motion onset BCI paradigm was developed by Caio Teixeira, allowing for single-trial operation of a six-command application (here a simple 1-6 digit speller).
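Single-trial operation means each command decision is made from one stimulation round, with no averaging. Below is a minimal sketch of how such a decision could look; the data shapes, the LDA classifier, and all names are illustrative assumptions, not the actual pipeline behind the spellers above.

```python
# Minimal single-trial command-selection sketch (illustrative only; data,
# shapes, and the LDA choice are assumptions, not the lab's pipeline).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 64            # hypothetical epoch dimensions

# Hypothetical calibration data: flattened epochs labeled target vs. non-target.
X_train = rng.standard_normal((200, n_channels * n_samples))
y_train = rng.integers(0, 2, size=200)

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

# Single-trial operation: one motion-onset stimulus per command, one epoch
# each; the digit whose epoch scores most "target-like" wins.
round_epochs = rng.standard_normal((8, n_channels * n_samples))
digit = int(np.argmax(clf.decision_function(round_epochs))) + 1   # digits 1-8
print("selected digit:", digit)
```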
Videos are available in China on Bilibili and Youku.
These videos present our first attempt to control the NAO humanoid robot using the tactile-pressure BCI developed by Kensuke SHIMIZU & Tomek. The paradigm still relies on averaging over seven oddball sequences, so it remains slow for now. The click noises are a "side effect" of the tactile pressure stimulus generator (this is also to be improved soon). The project is a collaboration with the Cichocki Lab at RIKEN BSI. More to come soon!!! Stay tuned ;)
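For readers wondering why sequence averaging slows the interface down: averaging N repetitions attenuates uncorrelated EEG noise by roughly 1/sqrt(N), so reliability is bought with time. A toy numpy illustration (all signals and shapes are synthetic):

```python
# Toy illustration of oddball sequence averaging (synthetic data only).
import numpy as np

rng = np.random.default_rng(1)
n_sequences, n_channels, n_samples = 7, 8, 128

erp = np.sin(np.linspace(0, np.pi, n_samples))       # idealized target response
noise = 3.0 * rng.standard_normal((n_sequences, n_channels, n_samples))
epochs = erp + noise                                 # 7 noisy repetitions

avg = epochs.mean(axis=0)                            # average over sequences
print(f"noise std, single sequence:    {(epochs[0] - erp).std():.2f}")
print(f"noise std, 7-sequence average: {(avg - erp).std():.2f}  # ~1/sqrt(7) smaller")
```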
This video is about chromatic SSVEP BCI-based NAO robot control. It is the result of a collaboration between the BCI-lab-group at University of Tsukuba (Daiki AMINAKA, Kensuke SHIMIZU & Dr. Tomasz "Tomek" M. RUTKOWSKI) and the Cichocki Lab at RIKEN BSI (Peter JURICA and Dr. Andrzej Cichocki). This time we managed to control the NAO robot online using the chromatic SSVEP BCI developed by Daiki AMINAKA and Tomasz M. RUTKOWSKI.
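In an SSVEP BCI, each command flickers at a distinct frequency, and decoding amounts to finding which frequency dominates the EEG. A common generic approach is canonical correlation analysis against sinusoidal references, sketched below with made-up frequencies and synthetic data; this is not necessarily the method used in the demo.

```python
# Generic CCA-based SSVEP frequency detection (synthetic data; frequencies
# and method are illustrative, not the demo's actual setup).
import numpy as np
from sklearn.cross_decomposition import CCA

fs = 256.0                              # assumed sampling rate
t = np.arange(int(2 * fs)) / fs         # 2-second analysis window
freqs = [6.0, 7.5, 8.57, 10.0]          # hypothetical flicker frequencies

rng = np.random.default_rng(2)
# Synthetic 8-channel EEG segment containing a 7.5 Hz SSVEP.
eeg = 0.5 * np.sin(2 * np.pi * 7.5 * t)[:, None] + rng.standard_normal((t.size, 8))

def cca_score(f):
    # Reference set: sine/cosine at the fundamental and 2nd harmonic.
    ref = np.column_stack([g(2 * np.pi * h * f * t)
                           for h in (1, 2) for g in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

detected = max(freqs, key=cca_score)
print(f"detected flicker frequency: {detected} Hz")
```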
This video is about tactile-body BCI-based NAO robot control. It is the result of a collaboration between the BCI-lab-group at University of Tsukuba (Takumi KODAMA, Kensuke SHIMIZU & Dr. Tomasz "Tomek" M. RUTKOWSKI) and the Cichocki Lab at RIKEN BSI (Peter JURICA and Dr. Andrzej Cichocki). This time we managed to control the NAO robot online using the tactile-body BCI developed by Takumi KODAMA, under the supervision and advice of Dr. Tomasz M. RUTKOWSKI, averaging only four trials (sequences), which matched the robot's command execution.
This video is about the spatial auditory BCI speller developed by Moonjeong CHANG as her graduation project. We present a proof of concept of a Japanese kana (45-character) speller built as a two-step-input spatial auditory BCI.
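The point of the two-step input is to keep each selection within the small set of sound directions the paradigm can discriminate: first a kana group is chosen, then a character within that group. A minimal sketch follows, using a standard gojūon-style grouping for illustration; the speller's actual 45-character layout and selection mechanics may differ.

```python
# Two-step speller sketch (the grouping below is a gojūon layout used for
# illustration; the speller's actual 45-character layout may differ).
GROUPS = [
    "あいうえお", "かきくけこ", "さしすせそ", "たちつてと", "なにぬねの",
    "はひふへほ", "まみむめも", "やゆよ", "らりるれろ", "わをん",
]

def spell_one(select):
    """Spell one character via two BCI decisions.

    `select(options)` stands in for a single spatial-auditory BCI selection,
    where each option would be cued from a distinct virtual sound direction.
    """
    group = select(GROUPS)         # step 1: pick a kana row
    return select(list(group))     # step 2: pick a character within the row

# Usage example with a scripted "user" choosing き (group 1, item 1):
choices = iter([1, 1])
print(spell_one(lambda options: options[next(choices)]))   # -> き
```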
This video presents the autdBCI controlling a small robot. We present results of a study in which contactless, airborne ultrasonic tactile display (AUTD) stimuli delivered to the palms of a user serve as the platform for a brain-computer interface (autdBCI) paradigm. Six palm positions are used to evoke somatosensory brain responses, defining a novel contactless tactile autdBCI. The autdBCI won the BCI Research Award 2014 (http://www.bci-award.com/).
This demo was made in collaboration with the Cichocki Lab at RIKEN Brain Science Institute, Japan.
In this video we show a preliminary trial of a research project in progress, in which chromatic and higher-frequency SSVEP stimuli are used.
"Spatial Tactile Brain-Computer Interface Paradigm Applying Vibration Stimuli to Large Areas of User's Back" - demo video accompanying a BCI Conference 2014 paper available at http://arxiv.org/abs/1404.4226. The video demonstrates the online tbBCI application with healthy users.
The second collaborative output from our research group: Waldir's virtual reality (neurogaming) app combined with Hiromu's tbaBCI and Kensuke's tfBCI paradigms, under Tomek's project leadership.
A perfect example of collaboration: Waldir's virtual reality (neurogaming) app combined with Hiromu's tbaBCI paradigm, developed in the BCI-lab-group at University of Tsukuba, Japan, under Tomek's project leadership.
Tactile-force BCI demo with a small vehicular robot by Shota & Tomek (please note that it is the moving joystick that generates the tactile-force stimulus, not the hand, as can be seen at the end of the demo movie).
Chest Tactile BCI (tBCI) for Vehicle Robot Navigation by Hiromu & Tomek (please note the vibrotactile transducers attached to the user's chest, which also generate acoustic noise as a side effect).
The "Brain dreams Music" project conducts the development of the new musical instrument which can be played by brain waves, the musical performance using this instrument, and the other related research which bridges music and neuroscience. Our instrument uses the most advanced Brain-Computer Interface (BCI) technology, that directly transforms the imagination of music into the realization of the imagined music. The audiovisual representation of the brain wave classification analysis facilitates the communication among the brain player, musicians, and audiences. This project is a joint research project between JST, ERATO, Okanoya Emotional Information Project, Tokyo University of the Arts, and BCI-lab-group at University of Tsukuba.