
The IEEE Brain AI for Neurotechnology Workshop

The Institute for the Augmented Human (IAH) is proud to announce the IEEE Brain AI for Neurotechnology Workshop at the World Congress on Computational Intelligence 2024 (WCCI 2024) in Yokohama, Japan.

IEEE Brain Workshop on AI for Neurotechnology




The IEEE Brain Workshop on AI for Neurotechnology will be held at the World Congress on Computational Intelligence 2024 (WCCI 2024) in Yokohama, Japan, on 30 June 2024.

It is held in association with the International Joint Conference on Neural Networks (IJCNN).

Keynote speaker

Prof Mitsuo Kawato, Advanced Telecommunications Research Institute International, Japan.

Invited speakers

  • Dongrui Wu, Huazhong University of Science and Technology, China.
  • Alexandre Gramfort, Meta Reality Labs, Paris, France.
  • Gerwin Schalk, Fudan University, Shanghai, China.
  • CT Lin, University of Technology Sydney, Australia.
  • TP Jung, University of California San Diego, United States of America.

The workshop is organised by:

Corresponding email: dhc30@bath.ac.uk


Brief Overview of the Event

Further information about the event can be found below.


Neural networks and computational intelligence researchers have much to offer in addressing the challenges of creating robust and trustworthy AI for neurotechnology. The IEEE Brain AI for Neurotechnology workshop aims to bring together researchers specifically focused on neurotechnology with experts in AI to present and learn about:

  • The most recent advances in AI for neurotechnology
  • Data gathering and data sharing initiatives
  • Federated learning for privacy-preserving model training (a brief illustrative sketch follows this list)
  • Building towards foundation-model approaches
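
To make the federated learning topic above concrete, the following is a minimal sketch, in Python with NumPy, of federated averaging (FedAvg): each client trains a model on its own data locally and only model parameters are sent to the server for aggregation, so raw brain data never leaves the device. The synthetic data, the logistic-regression model and all hyperparameters below are illustrative assumptions, not material from the workshop itself.

```python
# Minimal federated averaging (FedAvg) sketch for privacy-preserving training.
# All data and settings below are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n_trials, n_features=8):
    """Synthetic binary-labelled features standing in for one user's brain data."""
    X = rng.normal(size=(n_trials, n_features))
    y = (X @ np.ones(n_features) > 0).astype(float)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression gradient descent on one client's data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probability of class 1
        w = w - lr * X.T @ (p - y) / len(y)      # gradient step on local data only
    return w

clients = [make_client_data(n) for n in (120, 80, 200)]   # three hypothetical users
w_global = np.zeros(8)

for _ in range(20):                                        # communication rounds
    # Each client starts from the current global model and trains locally.
    local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
    # The server averages parameters, weighted by dataset size;
    # only parameters, never raw recordings, cross the network.
    sizes = [len(y) for _, y in clients]
    w_global = np.average(local_models, axis=0, weights=sizes)

print("Aggregated model coefficients:", np.round(w_global, 2))
```

In a real neurotechnology setting the local update would be a full decoder training step (for example, a deep network trained on EEG epochs), and deployments would typically add secure aggregation or differential privacy on top; the weighting by client dataset size shown here is the standard FedAvg choice.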

The workshop, associated with the Institute for the Augmented Human at the University of Bath, will provide opportunities for AI and neural networks researchers to contribute to and benefit from improving neurotechnology with the latest advances in AI.

The workshop will include a keynote talk, invited speakers, and a panel session. Invited speakers include those working at the cutting edge of applying deep neural network technologies to process brain data for neurotechnology applications.

According to IEEE Brain, from whom we have sought sponsorship for this workshop, neurotechnologies represent the next technology frontier. The workshop is run by the IEEE Brain Technical Community and supported by the IEEE Computational Intelligence Society, the IEEE Signal Processing Society and the Bath Institute for the Augmented Human.


Tentative Schedule

The current schedule for the workshop on 30 June 2024 is as follows.


  • 8.00 am - 8.30 am: Arrival and registration
  • 8.30 am - 8.40 am: Welcome and opening remarks
  • 8.40 am - 9.35 am: Invited talk 1 - Brain-Computer Interface in Augmented Reality and Metaverse, CT Lin, University of Technology Sydney, Australia
  • 9.35 am - 10.30 am: Invited talk 2 - Privacy-Preserving Brain-Computer Interfaces, Dongrui Wu, Huazhong University of Science and Technology, China
  • 10.30 am - 10.45 am: Break
  • 10.45 am - 11.40 am: Invited talk 3 - BCI in Action: Translating Research into Real-World Solutions, TP Jung, University of California San Diego, United States of America
  • 11.40 am - 12.40 pm: Keynote address - AI-based precision psychiatry, Prof Mitsuo Kawato, Advanced Telecommunications Research Institute International, Japan
  • 12.40 pm - 1.40 pm: Lunch break
  • 1.40 pm - 2.35 pm: Invited talk 4 - Translation of Neurotechnologies, Gerwin Schalk, Fudan University, Shanghai, China
  • 2.35 pm - 3.30 pm: Invited talk 5 - Neuromotor interfaces for human-computer interaction at Meta, Alexandre Gramfort, Meta Reality Labs, Paris, France
  • 3.30 pm - 4.00 pm: Panel discussion, all speakers
  • 4.00 pm - 4.10 pm: Closing remarks

Registration Information

If you are registered for WCCI 2024, attendance at the workshop is included at no additional cost. If you wish to attend the workshop only, one-day registration (for 30 June) is $88.00.

Please email Damien Coyle to confirm you have registered.


Additional background

Further information on the background of the research and the workshop can be found below.


Implantable and wearable neurotechnology utilization is expected to increase dramatically in the coming years, with applications including movement-independent control and communication, rehabilitation, treating disease, improving health, recreation and sport, among others. There are multiple driving forces: continued advances in the underlying science and technology; increasing demand for solutions to repair the nervous system; a growing aging population worldwide, producing a need for solutions to age-related neurodegenerative disorders and for “assistive” brain-computer interface (BCI) technologies; and commercial demand for nonmedical BCIs.

In recent years a range of new neurotechnology innovations have emerged to enable neuroimaging, neural recording and brain-computer interfacing at multiple scales and in a variety of settings: in the lab, in the clinic, and in the wild. These include low-cost wearable electroencephalography (EEG), ultra-high-density EEG, stereoelectroencephalography (sEEG), advanced functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), functional ultrasound imaging (fUSi), optically pumped magnetometer magnetoencephalography (OPM-MEG), neural lace, neural dust, stent-electrode recording arrays (stentrodes) and endovascular recording techniques, multielectrode arrays, electrocorticographic (ECoG) arrays, and optogenetics, among others. The data acquired from these technologies are growing in size and complexity, and continue to drive and demand new innovations in signal processing and artificial intelligence.

For example, wearable EEG-based brain-computer interfaces present an excellent challenge for artificial intelligence. First, EEG has a low signal-to-noise ratio (SNR): the brain activity measured is often buried under multiple sources of environmental, physiological and activity-specific noise of similar or greater amplitude, often referred to as 'artifacts'. Second, EEG is a non-stationary signal (i.e., its statistics vary across time); as a result, a classifier trained on a temporally limited amount of user data might generalize poorly to data recorded at a different time from the same individual. Third, there is high inter-subject (user) variability arising from physiological differences between individuals, which can severely affect the performance of models that are meant to generalize across users. AI must therefore be developed to cope with the non-stationarity of biological signals (drifting dynamics) and to support online, real-time adaptation for continually evolving systems.
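
To make the inter-subject variability point concrete, the following is a minimal sketch using simulated features rather than real EEG, with scikit-learn's linear discriminant analysis standing in for a typical linear BCI decoder; the subject-specific offsets and every parameter are invented for illustration. It contrasts within-subject cross-validation with leave-one-subject-out evaluation, and the gap between the two scores is a crude proxy for how poorly a model trained on other users transfers to a new one.

```python
# Simulated illustration of inter-subject variability in BCI decoding.
# All data are synthetic; LDA stands in for a typical linear EEG decoder.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold, cross_val_score

rng = np.random.default_rng(42)
n_subjects, n_trials, n_features = 6, 100, 10

X_parts, y_parts, group_parts = [], [], []
for subject in range(n_subjects):
    # Shared class effect (+1 on every feature for class 1), plus a
    # subject-specific offset standing in for physiological differences.
    offset = rng.normal(scale=3.0, size=n_features)
    labels = rng.integers(0, 2, size=n_trials)
    features = rng.normal(size=(n_trials, n_features)) + labels[:, None] + offset
    X_parts.append(features)
    y_parts.append(labels)
    group_parts.append(np.full(n_trials, subject))

X, y, groups = np.vstack(X_parts), np.concatenate(y_parts), np.concatenate(group_parts)
clf = LinearDiscriminantAnalysis()

# Within-subject evaluation: training and test folds come from the same user.
within = [
    cross_val_score(clf, X[groups == s], y[groups == s],
                    cv=StratifiedKFold(5, shuffle=True, random_state=0)).mean()
    for s in range(n_subjects)
]

# Cross-subject evaluation: train on five users, test on the held-out user.
cross = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())

print(f"Mean within-subject accuracy: {np.mean(within):.2f}")
print(f"Mean cross-subject accuracy:  {cross.mean():.2f}")
```

The same evaluation pattern applies to non-stationarity across recording sessions: replacing the subject identifiers with session identifiers gives a session-wise split, and the drop from within-session to cross-session accuracy is what online, real-time adaptation methods aim to recover.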

This workshop will focus on the state of the art in AI for non-invasive brain-computer interfaces.


Sponsors and exhibiting opportunities

Find out more about our event sponsors.