The IEEE Brain AI for Neurotechnology Workshop

The IAH is proud to announce the IEEE Brain AI for Neurotechnology Workshop at the World Congress on Computational Intelligence (WCCI) 2024 in Yokohama, Japan.

IEEE Brain Workshop on AI for Neurotechnology

Brief Overview of the Event

Further information about the event can be found below.

Neural networks and computational intelligence researchers have much to offer in addressing the challenges of creating robust and trustworthy AI for neurotechnology. The IEEE Brain AI for Neurotechnology workshop aims to bring together researchers focused specifically on neurotechnology with experts in AI to present and learn about:

  • the most recent advances in AI for neurotechnology
  • data gathering and data sharing initiatives
  • federated learning for privacy preserving model training
  • building towards foundation model approaches
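
One of the listed topics, federated learning, can be sketched in a few lines (an illustrative sketch only: the sites, data, and linear model below are invented assumptions, not anything presented at the workshop). Each site trains on its own private data; only model weights, never raw recordings, are shared and averaged:

```python
# Minimal federated averaging (FedAvg) sketch with NumPy.
# Hypothetical setup: three "sites" hold private data from the same
# underlying linear model; raw data never leaves a site.
import numpy as np

rng = np.random.default_rng(42)

def local_step(w, X, y, lr=0.1, epochs=20):
    """A few epochs of local gradient descent on linear regression."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Each site's private dataset, drawn from the same true model.
true_w = np.array([1.0, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.05 * rng.normal(size=50)
    sites.append((X, y))

# FedAvg rounds: broadcast global weights, train locally, average results.
w_global = np.zeros(2)
for _ in range(10):
    local_ws = [local_step(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)  # equal weighting (equal data sizes)

print("recovered weights:", np.round(w_global, 2))
```

In a realistic deployment the averaging would be weighted by each site's data volume and combined with secure aggregation, but the privacy principle is the same: only parameters cross site boundaries.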

The workshop, associated with the Institute for the Augmented Human at the University of Bath, will provide opportunities for AI and neural networks researchers to contribute to and benefit from improving neurotechnology with the latest advances in AI.

The workshop will include a keynote talk, invited speakers, a panel session, and a poster session for papers submitted by delegates. Invited speakers will include those working at the cutting edge of applying deep neural network technologies to process brain data for neurotechnology applications.

Prizes will be awarded for the best papers and posters.

According to IEEE Brain, from whom we have sought sponsorship for this workshop, neurotechnologies represent the next technology frontier. The workshop is supported by the IEEE Brain Technical Community and IEEE CIS.

Tentative Schedule

The current schedule for the workshop is as follows.

Date TBD

  • 9:00 Welcome and Opening Remarks
  • 9:10 Keynote
  • 10:00 Invited Talk 1
  • 10:30 Invited Talk 2
  • 11:00 Break
  • 11:15 Invited Talk 3
  • 11:45 Invited Talk 4
  • 12:15 Invited Talk 5
  • 12:45 Lunch Break with poster session
  • 14:00 Panel session
  • 15:00 Best Poster Award announcement and final remarks


Further background on the research and the workshop can be found below.

Implantable and wearable neurotechnology utilization is expected to increase dramatically in the coming years, with applications including movement-independent control and communication, rehabilitation, treating disease, improving health, and recreation and sport. There are multiple driving forces: continued advances in the underlying science and technology; increasing demand for solutions to repair the nervous system; an increase in the aging population worldwide, producing a need for solutions to age-related neurodegenerative disorders and "assistive" brain-computer interface (BCI) technologies; and commercial demand for nonmedical BCIs.

In recent years a range of new neurotechnology innovations has emerged to enable neuroimaging, neural recording, and brain-computer interfacing at multiple scales and in a variety of settings (in the lab, in the clinic, and in the wild). These include low-cost wearable electroencephalography (EEG), ultra-high-density EEG, stereoelectroencephalography (sEEG), advanced functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), functional ultrasound imaging (fUSi), optically pumped magnetometer magnetoencephalography (OPM-MEG), neural lace, neural dust, stent-electrode recording arrays (stentrodes) and endovascular recording techniques, multielectrode arrays, electrocorticographic (ECoG) arrays, and optogenetics, among others. The data acquired from these technologies are growing in size and complexity, and they continue to drive and demand new innovations in signal processing and artificial intelligence.

For example, wearable EEG-based brain-computer interfaces present an excellent challenge for artificial intelligence. First, EEG has a low signal-to-noise ratio (SNR): the brain activity measured is often buried under multiple sources of environmental, physiological and activity-specific noise of similar or greater amplitude, often referred to as "artifacts". Secondly, EEG is a non-stationary signal (i.e., its statistics vary across time). As a result, a classifier trained on a temporally limited amount of user data may generalize poorly to data recorded at a different time from the same individual. Thirdly, there is high inter-subject (user) variability, arising from physiological differences between individuals, which can severely affect the performance of models meant to generalize across users. AI must be developed to cope with the non-stationarity of biological signals (drifting dynamics) and to support online, real-time adaptation in continually evolving systems.
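
The non-stationarity problem described above can be illustrated with a small simulation (a hedged sketch, not workshop code: the synthetic "EEG features", the drift model, and the nearest-class-mean classifier are all invented for illustration). A classifier fitted on one session degrades when the signal statistics shift in a later session:

```python
# Toy demonstration of session-to-session non-stationarity in NumPy.
# Two-class synthetic "EEG features"; mean_shift models drift between
# recording sessions (electrode placement, impedance, user state, etc.).
import numpy as np

rng = np.random.default_rng(0)

def make_session(n, mean_shift=0.0):
    """Generate n samples per class; mean_shift models session drift."""
    x0 = rng.normal(loc=0.0 + mean_shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=2.0 + mean_shift, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

# "Train" a minimal nearest-class-mean classifier on session 1.
X_tr, y_tr = make_session(200, mean_shift=0.0)
means = np.stack([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Compare same-session accuracy against a drifted later session.
X_same, y_same = make_session(200, mean_shift=0.0)
X_drift, y_drift = make_session(200, mean_shift=1.5)

acc_same = (predict(X_same) == y_same).mean()
acc_drift = (predict(X_drift) == y_drift).mean()
print(f"same-session accuracy:    {acc_same:.2f}")
print(f"drifted-session accuracy: {acc_drift:.2f}")
```

Even this crude drift model collapses the classifier's accuracy, which is why adaptive and transfer-learning approaches feature so prominently in BCI research.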

This workshop will focus on the state of the art in AI for non-invasive brain-computer interfaces.