Why we need the Internet X BCI Collab

Anush Mutyala
14 min read · Jun 10, 2021

As of June 2021, I’m 15 years old, and from as far back as I can remember the internet has played an important role in my life.

My parents immigrated from India to the States back in 2004, an opportunity that was enabled by the internet. Post-2000s, the internet was quickly becoming a behemoth, and many corporations were outsourcing more and more of their IT work to developing countries. From those emerging opportunities, my dad was able to get a work permit for the States, and since then, we've stayed in North America. This was a major jump in quality of life for my family.

Not only did the internet bring my family into a better environment, but it also superpowered my own childhood. Online video games were etched into my life as early as age 6. I remember coming back from school and hopping on Adobe Flash games or MMOs with my friends, spurring some of the most memorable experiences of my life. Living in Canada, the internet also provided the means for social interaction after school during the heavy winter months. Most importantly, it provided a way for me to learn.

Bringing it back to the present, I could never imagine how the pandemic would have gone without the internet. The internet enabled virtual learning, virtual work, virtual meetups, and everything virtual in between. Hypothetically, if the internet had shut down during the pandemic, COVID-19 would probably have been far deadlier.

A World without the Internet

Now imagine if you weren't able to use the internet at all; you couldn't go on social media, watch cat videos on YouTube, learn about black holes, or talk to family overseas. For people with severe motor disabilities, this is their day-to-day reality.

People with physical disabilities or conditions that affect hand dexterity, like quadriplegia resulting from spinal cord injury, multiple sclerosis, muscular dystrophy, cerebral palsy, or stroke, are unable to interact with touch screens, mice, or keyboards without assistive technology to bridge the gap. These assistive technologies can cost upwards of $5,000, with implants and prosthetics costing even more.

Globally, 250,000 to 500,000 people suffer a spinal cord injury every year; 2.3 million people have multiple sclerosis; 17 million people have cerebral palsy; and of the 80 million people who have experienced a stroke in their lifetimes, 30% have also experienced muscle spasticity. These are just a few of the many conditions in which one may face impairments in muscle movement.

Just because you, your friends, and your family haven't experienced any type of physical disability today doesn't mean you won't in the future. As healthcare continues to improve and more people live longer, disabilities are becoming more common, which means more people are being locked out of the internet because of something they can't control. More and more people will lack access to something that has made my life more colourful, so I decided to do something about it. I built my own brain-computer interface for the internet.

Why BCIs?

I think I've been on this journey long enough to not have to explain brain-computer interfaces again, but if you want a quick rundown of BCIs, check out my article on brain-controlled presentations here. TL;DR: we have technology that can read from and write to your mind, both invasively and non-invasively, the latter being relatively inexpensive.

The reason BCIs are the perfect solution to this problem is that there is so much amazing, cost-effective, non-invasive BCI technology on the market. For under $500, you can get a decent neuroimaging headset that provides the modalities necessary to interface with the internet.

Chrome with your Brain

For this project, I used the Muse 2 headband, which non-invasively records electroencephalography (EEG) signals. Sadly, I couldn't get a Neuralink chip implanted into my head; my parents wouldn't have let me anyway, haha.

Anyways, EEG is a cool neuroimaging technique that lets us listen in on the complicated firing of neurons in the brain through voltage signals! EEG doesn't provide enough resolution to read individual neural pathways; think of placing a microphone outside of a stadium and trying to piece together what the people inside are generally talking about. EEG gives us a generalized brain signal, but even with this rather noisy data, we can do a ton of cool sh*t.

With the Muse headset, I extracted eyeblinks and a focus metric from EEG data to use as input in a web browser. As web browsers are the main place we interface with the internet, it only made sense to apply this technology there. Essentially, once I focus hard enough, the screen should scroll, and when I blink my left or right eye, the browser should switch tabs left or right.

Getting a Gauge on Focus and Eyeblinks

Focus is actually quite quantifiable through EEG, and many companies are currently trying to make waves in the productivity scene with it. Through EEG, our brain waves can be separated into multiple frequency bands, each with its own significance:

  • Alpha: Conscious relaxation
  • Beta: Conscious focus, memory, problem-solving
  • Theta: Creativity, emotional connection, intuition, relaxation
  • Delta: Immune system, natural healing, restorative / deep sleep
  • Gamma: Binding senses, cognition, information processing, learning, perception, REM sleep

Those are just a few of the correlations found for these EEG frequency bands. For measuring focus, the Alpha band interests us the most. Alpha is correlated with attention and concentration, as well as relaxation: your Alpha power increases as you become more relaxed and decreases as you get more focused. Some papers have even found detectable changes in the Alpha band at the onset of focus and concentration, which makes it a crucial frequency range to consider.
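
For reference, these are the approximate frequency ranges I work with later in the code (exact cutoffs vary a bit from source to source):

# Approximate EEG band ranges in Hz (exact cutoffs vary between sources)
EEG_BANDS = {
    'Delta': (1, 4),
    'Theta': (4, 8),
    'Alpha': (8, 12),
    'Beta': (12, 30),
    'Gamma': (30, 40),  # capped at 40 Hz here because of the bandpass filter used later
}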

Although our brain signals are made up of these frequency bands, we cannot pick the bands apart just by looking at raw EEG. We must decompose the EEG signal into its individual band components, and we can do this by estimating the power spectral density (PSD).

PSD lets us measure how much each frequency contributes to the overall signal in the form of power measurements.

The image above shows a plot of the output of a PSD calculation; you get multiple frequency bins, each falling under one of the frequency bands. Each bin tells us how much that spectral line contributes to the overall signal, and by averaging the bins within each frequency band, we get a good measure of the average contribution of each band.

In my application, I used the mean of the Alpha band over 3 seconds as my focus measure. Once this Alpha metric drops below a certain threshold (i.e., I'm focusing), the screen scrolls.
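
To make that concrete, here's a minimal sketch of the focus-to-scroll logic (simplified from the real code shown later; alpha_buffer and FOCUS_THRESHOLD are illustrative names):

import numpy as np
import pyautogui

FOCUS_THRESHOLD = 0.6  # illustrative value; in practice this needs per-user tuning

def scroll_if_focused(alpha_buffer, scroll_up=True):
    # alpha_buffer: mean Alpha power of the last few 1-second windows (~3 s total)
    mean_alpha = np.mean(alpha_buffer)
    # Low Alpha power is treated as "focused", which triggers a scroll
    if mean_alpha < FOCUS_THRESHOLD:
        pyautogui.scroll(200 if scroll_up else -200)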

In terms of the eyeblinks, these are SUPER easy to detect in EEG. The huge deflection in the graph you see above is an example of what a blink looks like in EEG data.

You may be wondering why an eyeblink produces such a profound spike in brain data, since blinking obviously isn't a huge event for the brain. The reason for the spike actually has nothing to do with your brain, but rather with physics.

Think of your eyeballs as two magnets: when you blink, your eyeballs shift along an axis, producing an electric field several times stronger than the one generated by your neurons. Because EEG measures electrical activity, we can easily pick up on the event and differentiate it from natural brain activity.

In the project, I used the Delta band as a gauge for detecting eyeblinks, as the literature has shown a strong energy correlation between this artefact and that frequency band.
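
Here's a minimal sketch of that idea (simplified from the thresholding code shown later; delta_power and BLINK_THRESHOLD are illustrative names):

import pyautogui

BLINK_THRESHOLD = 2.0  # illustrative value, found by trial and error

def switch_tab_on_blink(delta_power, right_eye=True):
    # A blink shows up as a burst of Delta-band power on the frontal electrodes
    if delta_power > BLINK_THRESHOLD:
        if right_eye:
            pyautogui.hotkey('ctrl', 'tab')           # next tab
        else:
            pyautogui.hotkey('ctrl', 'shift', 'tab')  # previous tab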

The Breakdown

The Muse provides 4 channels: 2 situated over the frontal lobe and another 2 over the temporal lobes. I use the AF7 and AF8 electrodes to detect left and right eyeblinks, as they rest right above the eyes.

Since the literature notes that the Alpha rhythm originates from the parietal and occipital lobes, I decided to use the TP9 electrode to measure focus, as it is the closest of the Muse electrodes to those regions.

Now that we understand the basics of how we're going to control the web with EEG, here's the breakdown of the code:

  • Stream in raw EEG data using LSL
  • Filter the data and calculate PSD values
  • Set thresholds for eyeblinks and focus metric
  • Map thresholds to browser inputs using pyautogui
  • Browse the web with your brain 🧠

Dependencies

import pyautogui  # package to interface with keyboard and mouse
import numpy as np
from pylsl import StreamInlet, resolve_byprop  # module to receive EEG data
import playsound
import utils  # our own utility functions

The utils file can be found in the muselsl GitHub repository here.

Streaming and Buffering

I used BlueMuse to open a stream of EEG data from my Muse and pylsl to find that stream from Python. The specifics can be found in my previous article here. Essentially, LSL (Lab Streaming Layer) provides a research-standard streaming layer for synchronized time series, which is perfect for EEG. pylsl is a useful library that lets us search for open streams and pull their data into Python.
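
For reference, here's a minimal sketch of how the stream can be resolved and read with pylsl once BlueMuse is streaming (buffer and chunk sizes are illustrative):

import numpy as np
from pylsl import StreamInlet, resolve_byprop

# Look for an EEG stream on the network (BlueMuse exposes the Muse this way)
streams = resolve_byprop('type', 'EEG', timeout=5)
inlet = StreamInlet(streams[0], max_chunklen=12)

fs = int(inlet.info().nominal_srate())  # the Muse 2 samples at 256 Hz

# Pull roughly one second of samples; each row is one sample across the channels
eeg_chunk, timestamps = inlet.pull_chunk(timeout=1, max_samples=fs)
eeg_chunk = np.array(eeg_chunk)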

Preprocessing

# Obtain EEG data from the LSL stream
eeg_data, timestamp = inlet.pull_chunk(
    timeout=1, max_samples=int(SHIFT_LENGTH * fs))

# Only keep the channel we're interested in
ch_data = np.array(eeg_data)[:, INDEX_CHANNELS[index]]

# Update EEG buffer with the new data
buffers[0][index] = utils_new.update_buffer(
    buffers[0][index], ch_data)

""" 3.2 COMPUTE BAND POWERS """
# Get newest samples from the buffer
data_epoch = utils_new.get_last_data(buffers[0][int(index)],
                                     EPOCH_LENGTH * fs)

# Compute band powers
band_powers = vectorize(data_epoch.reshape(-1), fs, filtering=True)
buffers[1][index] = utils_new.update_buffer(buffers[1][index],
                                            np.asarray([band_powers]))

Here, we fill the buffers we've set up previously for each electrode we stream. The update_buffer function concatenates a fresh chunk of EEG onto the buffer for each electrode, and the get_last_data function pulls the most recent second of data out of that buffer.
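
Conceptually, update_buffer and get_last_data boil down to something like this rolling-window sketch (simplified; not the exact implementation from the repo):

import numpy as np

def update_buffer(data_buffer, new_data):
    # Append the new chunk and drop the oldest samples so the buffer
    # keeps a fixed length (a simple rolling window)
    new_buffer = np.concatenate((data_buffer, np.asarray(new_data)), axis=0)
    return new_buffer[len(new_data):]

def get_last_data(data_buffer, newest_samples):
    # Return only the newest N samples from the rolling buffer
    return data_buffer[-int(newest_samples):]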

Since the Muse samples at 256 Hz, the data_epoch variable holds an array of 256 data points (one second of EEG). This variable is then used to calculate the PSD via the vectorize function.

from brainflow.data_filter import DataFilter, FilterTypes, NoiseTypes
from scipy import signal  # used for the PSD calculation below

def vectorize(df, fs, filtering=False):
    index = len(df)
    feature_vectors = []
    if filtering:
        # Bandpass around a 22 Hz center frequency with an 18 Hz bandwidth
        DataFilter.perform_bandpass(df[:], fs, 22.0, 18.0, 4,
                                    FilterTypes.BESSEL.value, 0)
        # Notch out 60 Hz powerline noise
        DataFilter.remove_environmental_noise(df[:], fs,
                                              NoiseTypes.SIXTY.value)

The function starts off by filtering the input data with a bandpass filter and notch filter using the brainflow library.

The bandpass filter, as its name implies, passes a certain range of frequencies and cuts off, or attenuates, the rest. I specifically used a 22 Hz center frequency and an 18 Hz bandwidth. This means that frequencies in roughly the 4-40 Hz range of the EEG data pass through unattenuated, while everything outside that range is suppressed.

In contrast, the notch filter is kind of like an anti-bandpass filter: instead of passing a certain range of frequencies, it attenuates a narrow range. Powerline noise commonly leaks into EEG data and can skew the PSD calculation, so it's important to use a notch filter around 60 Hz to cut out that noise.

    # (continuing inside vectorize)
    for y in range(0, index, fs):
        f, Pxx_den = signal.welch(df[y:y + fs], fs=fs, nfft=256)
        # Delta 1-4 Hz
        ind_delta, = np.where(f < 4)
        meanDelta = np.mean(Pxx_den[ind_delta], axis=0)
        # Theta 4-8 Hz
        ind_theta, = np.where((f >= 4) & (f <= 8))
        meanTheta = np.mean(Pxx_den[ind_theta], axis=0)
        # Alpha 8-12 Hz
        ind_alpha, = np.where((f >= 8) & (f <= 12))
        meanAlpha = np.mean(Pxx_den[ind_alpha], axis=0)
        # Beta 12-30 Hz
        ind_beta, = np.where((f >= 12) & (f < 30))
        meanBeta = np.mean(Pxx_den[ind_beta], axis=0)
        # Gamma 30-40 Hz (capped by the bandpass filter)
        ind_Gamma, = np.where((f >= 30) & (f < 40))
        meanGamma = np.mean(Pxx_den[ind_Gamma], axis=0)
        feature_vectors.insert(y, [meanDelta, meanTheta, meanAlpha,
                                   meanBeta, meanGamma])
    powers = np.log10(np.asarray(feature_vectors))
    powers = powers.reshape(5)
    return powers

In terms of calculating the band power values, I used SciPy's welch function to compute the power spectral density of one second's worth of data. Welch's method estimates the PSD by calculating fast Fourier transforms (FFTs) over small windows of the signal (in our case without overlap) and averaging the resulting periodograms to reduce variance, which helps since EEG is non-stationary. With fs=256 and nfft=256, the resulting frequency bins are spaced 1 Hz apart.

Once the data is passed through the welch function, Pxx_den holds the array of power values, and f holds the corresponding sample frequencies used to index that power data. I used frequency ranges on f to index Pxx_den, extracting each of the frequency bands we're interested in. Finally, we average the power over each range of indices to get a mean band power metric that feeds into the thresholds coming up next.

Setting Thresholds and Mapping Inputs

""" 3.3 COMPARE METRICS """# Switching scroll directionif buffers[1][1][-1][Band.Delta]+buffers[1][0][-1][Band.Delta] >= 3.9:   if Up == True:      Up = False      playsound.playsound('Vine Boom.mp3', True)   elif Up == False:      Up = True      playsound.playsound('Bruh.mp3', True)    print('switching scroll direction: Up is set to   {}'.format(str(Up)))
buffers[1][1][-1][Band.Delta] = 0
buffers[1][0][-1][Band.Delta] = 0
# Blink thresholdselif buffers[1][1][-2][Band.Delta] > 2.1: print(""" right """) pyautogui.hotkey('ctrl', 'tab') buffers[1][1][-1][Band.Delta] = 0 buffers[1][0][-1][Band.Delta] = 0elif buffers[1][0][-2][Band.Delta] > 2: print(""" left """) pyautogui.hotkey('ctrl', 'shift', 'tab') buffers[1][0][-1][Band.Delta] = 0

Now that we've extracted the necessary band power information from the raw EEG stream, we can write conditionals that threshold these metrics. After a ton of trial and error, I landed on acceptably accurate thresholds for detecting eyeblinks.

One issue I ran into early on was that the spike in Delta band power created by an eyeblink lasts longer than 1 second, so every time I blinked, the browser would jump more than one tab. To fix this, I simply zero out the buffer values whenever a threshold is met.

I also added a conditional that flips the scroll direction whenever you blink twice, along with a few funny sound effects that indicate the change in direction.

# Concentration
if i > 3:
    if Up == False:
        if np.mean(buffers[1][2][:, Band.Alpha]) < 0.6:
            print('he do be concentratin')
            pyautogui.scroll(-200)
    else:
        if np.mean(buffers[1][2][:, Band.Alpha]) < 0.6:
            print('he do be concentratin')
            pyautogui.scroll(200)

Lastly, when initializing the threshold for focus, I noticed that 0.6 µV²/Hz was the best crossover point between concentrated and not concentrated for me. Depending on the state of the Up variable, pyautogui scrolls either up or down.

The buffer I used holds the Alpha metric over a 3-second span, and averaging over this buffer gives a much smoother representation of the Alpha band, one that is less influenced by spontaneous changes in the EEG like eyeblinks.

The Web Browser BCI

Here it is! The first iteration of my Web Browser BCI. I think I nailed the blink detection part, as my program detects blinks about 90% of the time. On the other hand, the program wasn't too proficient at gauging focus. There is definitely a ton of room for improvement, and in the next section I'll provide some insight into the iterations I'm working on.

Future Iterations

Thresholding a single metric is definitely not the best way to gauge concentration. After hours of using my device, I realized it is pretty annoying to manually tune the thresholds every sitting. Luckily, we have AI!

Many of the latest studies on focus, attention, and concentration use machine learning and deep learning to "learn" the thresholds statistically. Using a single metric like I did ignores many of the nuances of EEG; with ML, we can feed in a wide variety of band powers, and ratios of band powers, as input features.

SVM with a Polynomial Kernel

SVMs and LDA are popular ML methods for classifying focus, and deep learning methods like convolutional neural networks (CNNs) are also gaining wide popularity in EEG analysis.

I actually made an iteration of this project using an SVM classifier with a Gaussian kernel, and I was able to get accuracies upwards of 85% with a normally distributed dataset. Although the scores are promising, more data is necessary for a more generalizable model, as live predictions were not as accurate.
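
This is not my exact training code, but a minimal sketch of the approach with scikit-learn, assuming a feature matrix X of band powers and band-power ratios (one row per labeled window) and binary labels y (focused vs. not focused):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_windows, n_features) band powers / ratios, y: (n_windows,) labels
# Placeholder random data so the sketch runs end to end
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)

# RBF (Gaussian) kernel SVM with feature scaling
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0, gamma='scale'))
scores = cross_val_score(clf, X, y, cv=5)
print('cross-validated accuracy: {:.2f} +/- {:.2f}'.format(scores.mean(), scores.std()))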

An example of a potential P300-speller matrix that can be used to interface with the internet.

Outside of improving accuracy, a key missing feature is access to some form of textual interface. P300-based spellers have been tested for this specific use case with high efficiency, and have also been used in web browser environments.

P300 spellers use visual stimuli to evoke spikes in EEG called event-related potentials (ERPs). We can detect these spikes just as we detect eyeblinks and map them to a character or function on the speller matrix. This opens the door to accessing all the features of a web browser, not just scrolling and switching tabs.
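
As a rough illustration of the signal processing behind this (not a working speller), here's how one might average EEG epochs around stimulus onsets to make a P300 visible, assuming a continuous eeg array for one channel, its sampling rate, and a list of stimulus sample indices:

import numpy as np

def average_erp(eeg, stim_samples, fs, pre=0.1, post=0.6):
    # eeg: 1-D array of one channel's samples
    # stim_samples: sample indices where a target stimulus flashed
    pre_n, post_n = int(pre * fs), int(post * fs)
    epochs = []
    for s in stim_samples:
        if s - pre_n >= 0 and s + post_n <= len(eeg):
            epoch = eeg[s - pre_n:s + post_n]
            # Baseline-correct using the pre-stimulus interval
            epochs.append(epoch - epoch[:pre_n].mean())
    # Averaging suppresses background EEG and leaves the event-related potential
    return np.mean(epochs, axis=0)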

Key Takeaways

  • Millions of people globally lack access to the internet due to severe motor disabilities
  • Brain-Computer interfaces can provide a cost-effective solution to this issue
  • I built a Web Browser BCI that uses eye blinks and focus measures to control Chrome
  • We can decompose our brainwaves into different frequency bands, representative of different mental states
  • ML and deep learning can be used to further improve the accuracy of the application

Final Thoughts

This project shows the potential of a BCI-based web browser, but I believe that the way previous research has approached a solution is not enough. Right now, most literature on this topic aims to map brain input onto the conventional ways of interfacing with the internet, namely the keyboard and mouse. But for a true acceleration of this technology, we need to completely rethink how one would interface with the internet using their brain.

We created the keyboard and mouse because they provide an easy and efficient means of interacting with a computer with our hands. Now we need to find an equally easy and efficient means of interacting with a computer with our brains, whether that be via decoding motor imagery or decoding imagined speech.

Once we reach such a breakthrough, this technology will not only be a solution for the physically disabled, but may completely change the paradigm of the internet for the average user. This technology will be the next revolution of the internet, and I can't wait for it to actualize.

Thanks for reading! You can find me on LinkedIn here, and GitHub here.
