Seeing with your Tongue and the Everchanging Mush inside your Head

Anush Mutyala
9 min read · Dec 16, 2020


Above is a scene taken from Ratatouille, where the main protagonist, Remy, is seen controlling a human by pulling their hair.

Ever watched Ratatouille? If you have, you would have noticed how absurd the main concept of the movie was. I mean come on, a rat. Hidden in a chef's cap. Controlling a person like a puppet by the pulls of his hair. Like that would happen in a million years, right? Yeah, maybe you should reconsider its likelihood after you read this article.

The Science Behind The Magic

Quick Note: I will be using the terms “sense” and “sensory modality” interchangeably throughout the article. Strictly speaking, though, a sense relates to one of the 5 peripheral sense organs, whereas a sensory modality describes one of the distinct feelings that can be perceived through a peripheral. For example, touch consists of many different feelings; temperature, pressure, and texture are all different sensory modalities relating to the sense of touch.

Sensory substitution almost seems like it shouldn’t be possible, but our brains never fail to amaze us. This technology relies on neuroplasticity, the concept that the brain is malleable and can change and create new neural pathways throughout your life.

You heard me, throughout your life. Researchers have shown that older adults are slower on psychometric tests because they rely on information-processing habits and knowledge accumulated over the years, rather than simply digesting everything thrown at them with no filter (Ramscar et al., 2014). It has nothing to do with cognitive aptitude or capacity for growth. The idea that your brain can’t develop further after childhood is just a common misconception. No more excuses for not starting the piano, or not learning French. You are more than capable of doing what you want, regardless of age.

Personally, stepping on lego is a big pet peeve.

Now that we’re done with that little pep talk, let’s continue with SSDs (sensory substitution devices). To understand how they can exist, we must also recognize that sensation and perception are independent of each other. Sensation is what occurs when the receptors of a sensory system are stimulated. For example, when I step on the Lego, the skin on my foot registering its touch is sensation. Me shouting in pain is the outcome of perception. I perceive when my brain connects the dots and realizes that indeed there is a sharp object under my foot and it is damaging the tissue in said area. TL;DR → each feeling is only biologically routed through a particular peripheral; the sense of vision is not created by your eyes but rather by your brain.

The Magic Mix

Continuing from our points on perception and plasticity, SSDs essentially remap the neural pathways of one sense onto another through learning, leveraging the concepts above.

They are generally made up of 3 parts:

  • the external sensor
  • the coupling interface
  • and the stimulator

The external sensor differs based on what sense you’re trying to substitute. If you want to emulate sight, you use a camera. If you want to emulate sound, you use a microphone, etc. There are some exceptions, though, which I will get into later in the article.

Data from the sensor is fed into the coupling interface, which is generally just a microchip. It takes care of the signal processing required to translate the information from the external sensor into a format the stimulator can use. So basically, the microchip turns the input information into 1s and 0s that the output device can understand. For example, when substituting vision with sound, the video feed from a camera would be translated into sequential patterns using a pixel-to-frequency relationship (signal processing functions) to create an audio signal that reflects different surfaces and edges.

Lastly, the processed signal is outputted to the user through a stimulator. The most popular peripheral systems that these stimulators interact with are touch and sound. Tactile feedback has been the prevailing choice of stimulation due to its vast unused potential. When are you actually using the touch receptors on your torso or legs to feel? It’s also popular because of the variety of stimuli that our touch peripherals can distinguish (temperature, vibration, electricity, etc.), which allows higher dimensionality in the information being transferred. Higher dimensionality just means that you can distribute more details about the environment across different feelings.

Together, these three parts create innovative and inexpensive devices that help hundreds of people every day.
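To make the pipeline concrete, here is a minimal sketch of how those three parts might fit together in code. The class and method names are my own illustration, not any real device’s API.

```python
# A minimal sketch of the sensor -> coupling interface -> stimulator pipeline.
# All names here are illustrative assumptions, not a real device SDK.

from abc import ABC, abstractmethod
import numpy as np


class ExternalSensor(ABC):
    """Captures raw data from the environment (e.g. a camera or microphone)."""
    @abstractmethod
    def read(self) -> np.ndarray: ...


class CouplingInterface(ABC):
    """Translates raw sensor data into a signal the stimulator can output."""
    @abstractmethod
    def encode(self, raw: np.ndarray) -> np.ndarray: ...


class Stimulator(ABC):
    """Delivers the encoded signal to the user (e.g. electrodes, vibration motors)."""
    @abstractmethod
    def emit(self, signal: np.ndarray) -> None: ...


def run_one_frame(sensor: ExternalSensor,
                  coupler: CouplingInterface,
                  stimulator: Stimulator) -> None:
    """One pass through the loop an SSD runs continuously."""
    raw = sensor.read()            # e.g. a grayscale camera frame
    signal = coupler.encode(raw)   # e.g. per-electrode pulse intensities
    stimulator.emit(signal)        # e.g. drive a tongue display or vibration array
```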

What's on the Market?

Hopefully, sensory substitution seems less like magic now. “So what's on the market?” you may ask. Unless you’re part of the millions of people with some degree of sensory disability, unfortunately, there isn’t much fun stuff you can buy. However, whatever is available right now is still frickin’ amazing. The most popular are the Brainport Vision, which uses touch to convey sight, and the vOICE, which uses sound to convey sight.

The Tongue is your New Eyes

Above is the newest version of the Brainport Vision, which is available for purchase on the Wicab website.

The Brainport is on par with Ratatouille in terms of craziness. It uses electrotactile stimuli on your TONGUE to emulate sight. Why the tongue? It is very sensitive to touch because its receptors sit close to the surface, and saliva acts as an electrolytic solution that ensures good electrical contact (Bach-y-Rita et al., 2003). These properties mean that far less voltage and current are needed for perception on the tongue than on other areas such as the fingertip. You definitely want to avoid getting shocked every millisecond.

Other than the tongue display unit (TDU), the Brainport has a pretty standard setup. It uses a camera for video input and encodes spatial information from pixel data through variations in pulse current or voltage, pulse duration, and the intervals between pulses. The TDU itself is an array of 394 electrodes and provides enough spatiotemporal resolution (information across space and time) that you can react to a ball rolling towards you.
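As a rough sketch of that kind of encoding (and only a sketch: the 20x20 grid shape, the normalized intensity range, and the brightness-to-amplitude mapping are my own assumptions, not Wicab’s specifications), you could imagine downsampling each camera frame onto the electrode grid like this:

```python
# Hedged sketch: downsample a grayscale frame onto an electrode grid and turn
# pixel brightness into a per-electrode stimulation intensity.
import numpy as np

GRID_ROWS, GRID_COLS = 20, 20    # assumed layout approximating a few hundred electrodes
MAX_PULSE_AMPLITUDE = 1.0        # normalized; real hardware uses tiny currents


def frame_to_electrode_intensities(frame: np.ndarray) -> np.ndarray:
    """Map a grayscale frame (H x W, values 0-255) onto the electrode grid."""
    h, w = frame.shape
    row_edges = np.linspace(0, h, GRID_ROWS + 1, dtype=int)
    col_edges = np.linspace(0, w, GRID_COLS + 1, dtype=int)

    intensities = np.zeros((GRID_ROWS, GRID_COLS))
    for i in range(GRID_ROWS):
        for j in range(GRID_COLS):
            block = frame[row_edges[i]:row_edges[i + 1], col_edges[j]:col_edges[j + 1]]
            # Brighter regions of the scene -> stronger electrotactile pulses.
            intensities[i, j] = block.mean() / 255.0 * MAX_PULSE_AMPLITUDE
    return intensities


# Example: a synthetic 240x320 frame with a bright object on the right side.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[:, 200:] = 220
print(frame_to_electrode_intensities(frame).round(2))
```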

Aside from the medical applications, the Brainport is also being used in various other fields. Wicab, the company that made the Brainport, is also conducting research on military applications of the technology under DARPA. One thing the company is looking into right now is providing orientational and navigational feedback to Navy SEALs underwater through sonar. Another potential use case is robotic surgery: surgeons could wear gloves that provide electrotactile feedback when operating remotely through a robot.

Jammin’ with Vision

On the left is a picture of the vOICE; on the right, a diagram of how the vOICE encodes pixels into sounds.

The other big SSD is the vOICE, which turns visuals into “soundscapes”. The vOICE consists of everyday technology: a camera, a computer running the signal-processing software, and a headset. The conversion software samples each image down to a 176x64 array of pixels, which is then heard from left to right, column by column. Brighter pixels sound louder, and pixels that are higher in the view sound higher in pitch.

Instead of reflecting spatial information in which electrodes are activated, as the Brainport does, the vOICE uses the pitch of sound, which places fewer restrictions on resolution. Where the Brainport is limited by the number of electrodes you can fit onto the TDU, the vOICE can use the full spectrum of sound to portray spatial information. The trade-off is that a little more interpretation is required from the user, since sound arrives sequentially, whereas an array of electrodes can create a spatially homologous feeling with a single pulse. Think of it as being handed smaller worksheets that combine into your final, instead of being dumped with one large assignment.
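For intuition, here is a hedged sketch of a vOICE-style column scan. Only the 176x64 resolution, the left-to-right scan, and the brightness-to-loudness / height-to-pitch mappings come from the description above; the sample rate, scan time, and frequency range are illustrative assumptions.

```python
# Hedged sketch of a vOICE-style image-to-sound encoding.
import numpy as np

SAMPLE_RATE = 22050              # Hz, assumed
SCAN_SECONDS = 1.0               # time to sweep one frame, assumed
F_LOW, F_HIGH = 500.0, 5000.0    # pitch range, assumed
COLS, ROWS = 176, 64             # resolution from the description above


def image_to_soundscape(image: np.ndarray) -> np.ndarray:
    """Turn a 64x176 grayscale image (0-255) into a mono audio waveform."""
    assert image.shape == (ROWS, COLS)
    samples_per_col = int(SAMPLE_RATE * SCAN_SECONDS / COLS)
    t = np.arange(samples_per_col) / SAMPLE_RATE
    # Row 0 is the top of the image, so it gets the highest frequency.
    freqs = np.linspace(F_HIGH, F_LOW, ROWS)

    columns = []
    for c in range(COLS):                          # scan left to right
        amps = image[:, c] / 255.0                 # brighter pixel -> louder tone
        tones = amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        columns.append(tones.sum(axis=0) / ROWS)   # mix the column's tones
    return np.concatenate(columns)


# Example: a bright diagonal line sweeps upward in pitch as it plays left to right.
img = np.zeros((ROWS, COLS))
for c in range(COLS):
    img[ROWS - 1 - (c * ROWS // COLS), c] = 255
waveform = image_to_soundscape(img)
print(waveform.shape, waveform.min(), waveform.max())
```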

Is the Feeling real though?

Many studies have been conducted with both the vOICE and the Brainport to see whether the user actually “sees”. The main indicator that users are experiencing vision is activity observed in the occipital, or visual, cortex (Ward et al., 2010). This means that, biologically, perception is occurring in the part of the brain related to vision. The descriptions users give further support the hypothesis that vision is occurring.

Location of the visual cortex in the brain.

People who have never seen with their eyes in their lives describe their experiences with words like ‘seeing’, ‘watching’, and ‘field of view’, despite having no history of “actual” vision. Over time, users are even able to feel depth and colour, even though neither quality is deliberately encoded from the image. What’s amazing is that all of this occurs involuntarily; the users do not intentionally decode what they feel, the brain does all the work for them.

Expanding our Umwelt and the Future

Sure, right now, sensory substitution doesn’t seem like it has a huge impact on greater humanity. When we get into expanding beyond the 5 senses we already have, though, that’s where we enter Cyberpunk 2077 territory.

Our umwelt is what we perceive as reality, and it is governed by our senses. We can feel temperature, sound, and visible light, but we aren’t able to experience ultraviolet like the honeybees, or sense magnetic fields like birds. Our whole reality has been constrained by what we can feel, meaning that with the addition of new sensory modalities we could experience drastically different umwelts. Sensory substitution technology is the key to unlocking the human umwelt.

David Eagleman is seen wearing a functional prototype of the VEST at his TED talk in 2015.

David Eagleman, a neuroscientist at Stanford, is one of the few individuals spearheading innovation in sensory “addition”. The VEST SSD, which was originally created by his team to emulate sound through vibrations, is now being used to test other forms of information. For example, data from the stock market was streamed to the VEST, and his team experimented with whether people could make sensible buy or sell decisions based on the information they “feel”.
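Purely as an illustration of what “streaming data to a vest” could look like in code (the motor count and the sigmoid mapping are my own assumptions, not Eagleman’s actual encoding):

```python
# Illustrative sketch: map one tick of abstract data onto vibration motor intensities.
import numpy as np

NUM_MOTORS = 32  # assumed number of vibration motors on the vest


def features_to_motor_intensities(features: np.ndarray) -> np.ndarray:
    """Squash arbitrary feature values into 0-1 vibration intensities, one per motor."""
    assert features.shape == (NUM_MOTORS,)
    return 1.0 / (1.0 + np.exp(-features))   # sigmoid keeps intensities in (0, 1)


# Example: fake per-tick percentage changes for 32 stocks.
tick = np.random.randn(NUM_MOTORS) * 0.5
print(features_to_motor_intensities(tick).round(2))
```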

“We live in a world of information now, and there is a difference between accessing big data and experiencing it” -David Eagleman

Imagine being able to feel the morning news, or the weather forecast, through vibrations from the VEST. This opens up a whole new world of data processing on a scale never seen before.

Hair Wear team member demoing the unique device on one of the judges at the hackathon.

Remember the Ratatouille reference from the beginning of the article? What if I told you that there really is an SSD that uses your hair? Meet Hair Wear, vibratory clips that attach to your hair to provide directional information. I would say that's close enough to the movie. Hair Wear was one of the prototypes created during the last Hack The Senses, a hackathon in London where 40 participants go all out and build some pretty insane SSDs. These are the kinds of events that will push this technology into the mainstream, and we should be hyped when it gets there.

Other than cooler applications of SSDs, I think that in the future we will see higher-bandwidth devices and more emphasis on continuity of information, which will make for a more natural experience. When we feel, we do not feel in snapshots; we perceive a continuous stream from all 5 of our senses.

Another innovation we may see in the future is multisensory substitution. This type of SSD would use 2 or more sensory modalities to emulate one sense, which could provide a more defined percept.

In my opinion, the last area that requires more research is acquired synaesthesia. A problem that arises with prolonged use of SSDs is that any stimulus to the substituting sense can trigger unwanted perceptions in the substituted sense. For example, even when a user is not wearing the vOICE device, ordinary sounds may cause visual impressions to form.

Takeaways:

  • Your brain is malleable throughout your life, in terms of the neural connections it makes
  • Perception is the feeling, sensation is the detection
  • Sensory substitution devices (SSDs) use a functional sensory modality to provide information from another sensory modality that may be diminished due to a defective peripheral
  • SSDs are capable of also adding new senses, outside of the 5 we biologically have
