Enhancing the Presentation Experience with Brainwaves

Anush Mutyala
Mar 10, 2021 · 9 min read


Tim Urban giving his famous procrastination TED Talk, but with a photoshopped EEG head cap

“Why is one of the most important 21st-century skills still so primitive?”, I thought to myself as I searched for an interesting application for my new Muse EEG headband. No, I’m not talking about teamwork, nor content creation. I’m talking about something that we have been doing since the start of our school years: presentations!

Think about it. Has the way we present and speak in public changed in your lifetime? Maybe it has, through the introduction of projectors, or proxy devices like the Apple TV that allow you to present slideshows from your phone. All these methods introduced over the 20th and 21st centuries have definitely made presenting a much easier experience for both the presenter and the audience. But has the basic concept changed? We still present with some type of extension of control to manipulate a slideshow, such as a remote or a phone.

Some may argue that there isn’t a need for innovation in this market, that a remote will suffice; I disagree. There hasn’t been a symbiotic and truly natural way to control a slideshow while presenting. Now let me set the scene for a solution.

Steve Jobs unveiling the iPhone

Imagine. Think back to January 7, 2007, when Steve Jobs unveiled the first iPhone. What if, instead of controlling the legendary presentation deck with a remote, Steve had been controlling the screen with his mind? Just imagine how much the dynamic and hype of the presentation would’ve changed.

The crazy thing is, the technology to do so is already here!

A Computer in your Brain??

Meet the field of brain-computer interfaces (BCIs), where innovators are bridging the gap between the bundle of tissue in your brain and a digital device. If this doesn’t sound real, maybe the science will make more sense.

Our brains are composed of billions of neurons, and they form complex pathways that represent a plethora of different functions, such as movement or emotions.

Here’s a basic diagram of a neuron, the kind you’ve probably seen throughout high school. Visualize hundreds of thousands of these condensed in a cubic centimetre of your brain. In this cube of your brain, these neurons either activate or inhibit each other through action potentials. If you think of a neuron as a circuit, an action potential refers to the voltage difference from one end of the axon to the other. Using a method called electroencephalography (EEG), researchers can measure the resulting electrical activity, specifically the summed post-synaptic potentials, of large groups of neurons.

Neurable’s VR x Brain-Computer Interface

For the purpose of BCIs, EEG signals are decoded to control different devices, such as prosthetics, word spellers, and even VR controllers. In the past decade, BCIs have awoken from their clinical slumber and have finally reached the consumer market. With EEG devices popping up from companies like Muse, Neurosity, Emotiv, and more, at-home users are constantly building new projects that have a major influence on the industry.

Coming back to the topic of presentation technology, I decided that a cool project would be to use my Muse headset to control a slideshow. The easiest way to do this would be via artefacts, specifically eyeblinks. EEG data can be plotted and interpreted as a complex waveform, hence EEG is sometimes referred to as brain waves. Artefacts introduce noise into the waveform, and in the case of eyeblinks, they show up as large abnormal spikes when plotted.

Signal of the 4 electrodes of the Muse after a blink event.

Eyeblinks, or any significant eye movements, produce large voltage potentials because your eyeballs are actually electric dipoles, or in simpler terms, batteries. The positively charged portion of your eye is the cornea, and the negatively charged side is the retina.

When you blink, your eyeballs shift along an axis, resulting in an electrical field several times stronger than that produced by your neurons. Because of how pronounced the electrical field is, we can easily differentiate it from the rest of the signal, making eyeblinks the perfect brain input for the project.
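To make that concrete, here's a minimal sketch (not part of the final project code) of how a naive amplitude threshold could flag a blink in a one-second window of raw EEG. The 100 µV threshold and the simulated data are purely illustrative assumptions.

import numpy as np

# Hypothetical one-second window of raw EEG from a forehead electrode, in microvolts
raw_window = np.random.normal(0, 10, 256)   # background brain activity
raw_window[120:140] += 150                  # simulated blink spike

BLINK_THRESHOLD_UV = 100  # illustrative threshold, not a calibrated value

def contains_blink(samples, threshold=BLINK_THRESHOLD_UV):
    # Flag a blink if any sample deviates from the window mean by more than the threshold
    return bool(np.any(np.abs(samples - samples.mean()) > threshold))

print(contains_blink(raw_window))  # True for this simulated window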

The Game Plan

The main electrodes used for the project.

If you’ve never seen the Muse headband before, here’s what it looks like. Personally, I find it the least intimidating-looking of all the BCI headsets on the market. For this project, I specifically used the AF7 and AF8 electrodes, since they are situated right above both eyes.

Now that we have all the science and hardware out of the way, let’s dive into the code! Python will be the language of choice.

The intention behind the code can be explained in a few steps:

  • First, we collect the EEG signals
  • Next, we process the data so that it represents the info we need
  • After that, we set thresholds for a button click
  • Lastly, we map each eye’s data to the left and right arrow key respectively

Dependencies

import pyautogui
from time import sleep
from pynput.keyboard import Key, Controller
from os import system as sys
from datetime import datetime
import numpy as np
from pylsl import StreamInlet, resolve_byprop  # for streaming EEG data
import utils

Start by importing all the libraries. utils.py has been adapted from here. If you want to dive deeper into the signal processing, take a look at the functions in the linked file.

Initializing the Stream

To stream the data from my Muse headband, I used software called BlueMuse. It makes setting up a connection with the Muse 10x faster with its easy-to-use GUI. Once you click start streaming, BlueMuse will establish a Lab Streaming Layer (LSL) connection with your computer. LSL is an industry standard for EEG and other biometric streaming thanks to its built-in time synchronization, which basically means the data is much easier to process after recording.

Screenshot of the BlueMuse GUI.

In terms of the code, the pylsl Python library can be used to search for and access active LSL streams. StreamInlet creates an inlet object that gives us access to the stream’s data.

streams = resolve_byprop('type', 'EEG', timeout=2)
if len(streams) == 0:
    raise RuntimeError('Can\'t find EEG stream.')
inlet = StreamInlet(streams[0], max_chunklen=12)

Setting the Parameters

# all lengths are in seconds
BUFFER_LENGTH = 5
EPOCH_LENGTH = 1
OVERLAP_LENGTH = 0.5
SHIFT_LENGTH = EPOCH_LENGTH - OVERLAP_LENGTH

# 0 = left ear, 1 = left forehead, 2 = right forehead, 3 = right ear
INDEX_CHANNEL_LEFT = [1]
INDEX_CHANNEL_RIGHT = [2]
INDEX_CHANNELS = [INDEX_CHANNEL_LEFT, INDEX_CHANNEL_RIGHT]

# sampling rate, read from the LSL stream metadata
info = inlet.info()
fs = int(info.nominal_srate())

To process the EEG data, we need to cut the stream into multiple time frames, or epochs. These epochs are then placed in buffer arrays that are used to calculate a certain metric, which will be discussed below. The “fs” variable holds the sampling rate, which basically represents the number of data points taken from the EEG waveform per second. This number will be used later on when creating arrays.
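As a quick sanity check, here's the arithmetic those parameters imply, assuming the Muse's nominal 256 Hz sampling rate (in the actual script, fs is read from the stream via info.nominal_srate()):

fs = 256  # assumed Muse sampling rate, for illustration only

BUFFER_LENGTH = 5    # seconds of raw EEG kept in memory
EPOCH_LENGTH = 1     # seconds per analysis window
OVERLAP_LENGTH = 0.5
SHIFT_LENGTH = EPOCH_LENGTH - OVERLAP_LENGTH  # 0.5 s of new data per epoch

buffer_samples = int(fs * BUFFER_LENGTH)  # 1280 raw samples in the buffer
epoch_samples = int(fs * EPOCH_LENGTH)    # 256 samples per epoch
n_epochs = int((BUFFER_LENGTH - EPOCH_LENGTH) / SHIFT_LENGTH + 1)  # 9 overlapping epochs fit in the buffer

print(buffer_samples, epoch_samples, n_epochs)  # 1280 256 9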

Building the Arrays

# raw EEG buffer
eeg_buffer = np.zeros((int(fs * BUFFER_LENGTH), 1))

# compute the number of epochs in "buffer_length"
n_win_test = int(np.floor((BUFFER_LENGTH - EPOCH_LENGTH) / SHIFT_LENGTH + 1))

# bands will be ordered: [delta, theta, alpha, beta]
band_buffer = np.zeros((n_win_test, 4))

# list of buffers for iteration
buffers = [[eeg_buffer, eeg_buffer], [band_buffer, band_buffer]]

This section of the code takes the declared variables from the previous sections and creates two zero-filled NumPy arrays, one for raw EEG and one for processed EEG. The “buffers” list will be indexed when looping over the two forehead electrodes; there is one instance of each buffer for each electrode.

Data Processing

for index in range(len(INDEX_CHANNELS)):
    # obtain EEG data from the LSL stream
    eeg_data, timestamp = inlet.pull_chunk(
        timeout=1, max_samples=int(SHIFT_LENGTH * fs))
    ch_data = np.array(eeg_data)[:, INDEX_CHANNELS[index]]

    # append new data stream to eeg_buffer
    buffers[0][index], filter_state = utils.update_buffer(
        buffers[0][index], ch_data, notch=True, filter_state=None)

    # get new epochs from eeg_buffer
    data_epoch = utils.get_last_data(buffers[0][index], EPOCH_LENGTH * fs)

    # calculate band powers and append data to band_buffer
    band_powers = utils.compute_feature_vector(data_epoch, fs)
    buffers[1][index], _ = utils.update_buffer(
        buffers[1][index], np.asarray([band_powers]))

The code above fills up the buffers that we created before and processes the raw EEG. “You’ve been saying ‘process the data’ so much, but what does that even mean???”, you may ask. In its original state, the EEG data isn’t of much use for our project, because it only represents voltage over time.

As I stated earlier in the article, EEG data can be plotted as a complex waveform, and this waveform is composed of a variety of frequencies. The most common EEG frequency intervals, or bands, used for classification are Delta (0.5–4 Hz), Theta (4–8 Hz), Alpha (8–13 Hz), Beta (13–30 Hz), and sometimes also Gamma (>30 Hz).
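The thresholding code later on indexes the band buffer with Band.Delta, which comes from the borrowed utils module; a minimal stand-in for that helper, matching the [delta, theta, alpha, beta] ordering used below, would look something like this:

class Band:
    # Indices into the band power vector, in the order [delta, theta, alpha, beta]
    Delta = 0
    Theta = 1
    Alpha = 2
    Beta = 3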

The image above shows how raw EEG is the sum of multiple frequency bands.

The raw EEG tells us about the voltage of the complex waveform over a timeframe, but what we really need is the power of the frequency bands at each epoch. The reasoning is that a blink affects each frequency band’s power differently, so it’s hard to set a single reliable threshold on the raw signal. When we separate out a certain band, we are able to set consistent thresholds for its power, and the band you choose to use is largely arbitrary.

To transform the raw EEG into power over frequency, we calculate the power spectral density (PSD). This can be done by applying a Fast Fourier Transform (FFT) to each time frame, or window, of the EEG waveform. An in-depth explanation of PSD and the FFT is beyond the scope of this article, but the key takeaway is that the PSD gives you the power of each frequency band, with time no longer a variable. In the code, the “compute_feature_vector” function is what transforms the newest epochs of the “eeg_buffer” array into a PSD.
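The real compute_feature_vector lives in the borrowed utils.py, but the core idea can be sketched in a few lines of NumPy: taper the epoch with a window, take the FFT, turn it into a power spectrum, and average the power inside each band's frequency range. This is a simplified illustration of the approach, not the exact implementation from the linked file.

import numpy as np

def band_powers_sketch(epoch, fs):
    # Rough PSD-based band powers for a single-channel EEG epoch
    epoch = np.asarray(epoch).flatten()
    windowed = epoch * np.hamming(len(epoch))              # taper the edges to reduce spectral leakage
    psd = np.abs(np.fft.rfft(windowed)) ** 2 / len(epoch)  # power spectrum
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)

    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    return [np.mean(psd[(freqs >= lo) & (freqs < hi)]) for lo, hi in bands.values()]

# Example: band powers for one second of simulated data at 256 Hz
print(band_powers_sketch(np.random.randn(256), fs=256))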

Setting Thresholds and Mapping to Keys

if buffers[1][1][-1][Band.Delta] > 2.2:
    print("right")
    pyautogui.press('right')
    sleep(2)
elif buffers[1][0][-1][Band.Delta] > 2.2:
    print("left")
    pyautogui.press('left')
    sleep(2)

Finally, this last section of the code indexes the band_buffer arrays by frequency band. I decided to use the delta band, and after a few rounds of experimentation and debugging, I found that the optimal threshold is 2.2 microvolts²/Hz. Essentially, once a certain buffer’s delta power reaches the set threshold, the program executes a left or right keypress based on the left or right eye. I also added a delay of 2 seconds because the effects of a blink linger for 2–3 buffers, and we don’t want unnecessary double presses.
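One design note: sleep(2) pauses the whole loop, which also pauses pulling fresh chunks from the LSL inlet. A gentler alternative, sketched below with a hypothetical try_press helper, is to remember when the last key press happened and simply ignore new triggers inside a two-second refractory window:

from datetime import datetime, timedelta
import pyautogui

REFRACTORY = timedelta(seconds=2)  # ignore new triggers for 2 s after a press
last_press = datetime.min

def try_press(key):
    # Hypothetical helper: press the key only if we're outside the refractory window
    global last_press
    now = datetime.now()
    if now - last_press >= REFRACTORY:
        pyautogui.press(key)
        last_press = now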

The Brainwave Presentation

In the video, we can see that eyeblinks trigger left and right key presses, moving us along the slideshow. There were transitions where I had to blink multiple times, but this is to be expected, mainly because of how high the thresholds are, as well as the epoch rate. If the thresholds were lower, noise from jaw movements while talking would also surpass them. Feel free to mess around with the parameters for the epochs and find the optimal settings.

Key Takeaways

  • Brain-computer interfaces use brain signals as inputs for digital outputs
  • Electroencephalography measures the electrical activity in the brain
  • Artefacts such as eyeblinks are easily detectable and act as a useful brain input
  • You can measure the power of a brainwave, and set thresholds for when a certain event occurs

This project is only the tip of the iceberg for this field. Cognitive brain signals and event-related potentials exhibit the true potential of EEG; that’s where the real fun is. Stay tuned for more EEG projects built on these concepts!
