From Linear to Dimensional: Reimagining Sensory Perception in Machines

Shamit Shrivastava
3 min read · Jul 24, 2023
Biology evolved sensory perception that interacts with the physical world collectively and nonlinearly; our machines lack such sensory perception. Courtesy of MidJourney.

As humans, our understanding of the world is often confined by the lenses of our predetermined perspectives and definitions. We favor simplicity, preferring to interpret our surroundings through a linear correlation between cause and effect, or in the case of data, between two variables. We extend this predilection to the devices we design, creating sensors that reflect our linear and unidimensional views. When purchasing a thermometer, for example, we prefer an instrument sensitive only to temperature, not considering how intertwined factors such as pressure might enhance our understanding of the environment.

Our linear sensors are convenient tools for simple data collection, but when we aspire to grasp complex realities, their shortcomings become increasingly apparent. When a phenomenon requires multi-faceted examination, we try to merge data from various unidimensional sensors, inadvertently entering the challenging realm of sensor fusion.

Sensor fusion refers to the process of merging data from different sensory channels into a cohesive, comprehensive view. The key challenge here lies in the delicate balance between information enhancement and distortion. As sensors capture different aspects of the same phenomenon, discrepancies may arise due to variations in sensor accuracy, environmental factors, or timing issues. This can lead to information corruption, resulting in a less accurate depiction of reality.
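To make the fusion challenge concrete, here is a minimal sketch of the simplest classical recipe, inverse-variance weighting of two readings of the same quantity. The noise levels, the 0.5 °C offset standing in for a timing or calibration discrepancy, and the variable names are illustrative assumptions, not anything specified in this post.

```python
import numpy as np

# Two hypothetical sensors observe the same temperature (true value 20.0 C)
# with different noise levels; the second one also carries a small drift,
# standing in for a timing or calibration mismatch.
rng = np.random.default_rng(0)
true_temp = 20.0

reading_a = true_temp + rng.normal(0.0, 0.2)          # accurate sensor
reading_b = (true_temp + 0.5) + rng.normal(0.0, 1.0)  # noisier, slightly drifted sensor

var_a, var_b = 0.2**2, 1.0**2

# Inverse-variance weighting: each reading is weighted by how much we trust it.
# If the stated variances are wrong, or the drift is ignored (as it is here),
# the fused value is biased -- the information corruption described above.
w_a, w_b = 1.0 / var_a, 1.0 / var_b
fused = (w_a * reading_a + w_b * reading_b) / (w_a + w_b)

print(f"sensor A: {reading_a:.2f}, sensor B: {reading_b:.2f}, fused: {fused:.2f}")
```

Even this toy example exposes the balance the paragraph describes: the weights encode how much we trust each channel, and any mismatch between the assumed and actual errors shows up directly as bias in the fused estimate.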

To help visualize the transition of data from the tangible world into machine-understandable representations, imagine an abstract autoencoding neural network woven into the fabric of reality. This network extends from the physical realm into our machines. The narrowest part, the bottleneck, sits at the interface between the machines and reality. Ideally, our sensors serve as nodes within this bottleneck, converting real-world information into machine-readable form.

However, information becomes corrupted when the choice of sensors is not optimized to autoencode reality effectively. The selection and alignment of sensors, crucial to a faithful rendition of reality, are not automatic processes; they require deliberate thought and, when handled poorly, lead to significant data distortion.
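The analogy can be made tangible with a toy linear autoencoder. In the sketch below, principal-component projection plays the role of the bottleneck, the number of retained components stands in for the chosen sensor nodes, and reconstruction error stands in for information corruption. The synthetic "world" data and its dimensions are fabricated purely for illustration.

```python
import numpy as np

# A toy stand-in for "reality": 200 samples of a 3-D physical state driven by
# two hidden causes (think of correlated temperature- and pressure-like factors),
# plus a little measurement noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))            # two underlying causes
mixing = np.array([[1.0, 0.8, 0.1],           # how cause 1 shows up in 3 channels
                   [0.2, 1.0, 0.9]])          # how cause 2 shows up in 3 channels
world = latent @ mixing + rng.normal(0.0, 0.05, size=(200, 3))

def reconstruction_error(data, k):
    """Linear 'autoencoder': project onto the top-k principal directions
    (the bottleneck, i.e. the sensor nodes we chose) and measure what is lost."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    encoder = vt[:k]                          # bottleneck of width k
    decoded = centered @ encoder.T @ encoder  # decode back toward "reality"
    return float(np.mean((centered - decoded) ** 2))

# A single, thermometer-style node versus a two-node bottleneck.
print("error with 1 sensor node:", reconstruction_error(world, 1))
print("error with 2 sensor nodes:", reconstruction_error(world, 2))
```

Read this way, "optimizing the choice of sensors" means picking a bottleneck that keeps reconstruction error low for the phenomena we actually care about.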

This leads us to a critical juncture. In the age of artificial intelligence and machine learning, where models thrive on nonlinearity and high dimensionality, is our clinging to linear and unidimensional sensors justified? We can argue it is not. The transition from simple, linear sensors to complex, multidimensional ones may indeed herald a shift similar to Copernicus’s revelation in the world of metrological sciences.

By abandoning our human-centric perspective, rooted as it is in our own experiences, we can move towards an observation-centric approach. Designing sensors from an AI-first perspective enhances our capacity to capture the complex, multidimensional nature of the physical world, and thus reduces information corruption.

The lifeblood of artificial intelligence, particularly the burgeoning class of large, powerful models, is data. These models possess an insatiable appetite for information, relying on vast amounts of varied and rich data to learn, generalize, and predict. The shift towards multidimensional, nonlinear sensors designed from an AI-first perspective can satiate this hunger more effectively. These advanced sensors, capable of capturing a broader spectrum of the physical world’s nuances, can furnish large AI models with more detailed and comprehensive datasets, enhancing their learning potential and predictive accuracy.

By advancing sensor technology, therefore, we also indirectly advance the potential of artificial intelligence, allowing our models to delve deeper into the complexities of the world and, in turn, derive more accurate, insightful, and powerful predictions and analyses. This fusion of AI and sensory technology can open new frontiers in technological advancement and foster a deeper understanding of the physical world and its complex phenomena.

We will soon see a generation of sensors developed with AI in mind, designed to capture not merely a linear slice of reality but its inherent complexity and nuance. This sensory revolution, akin to the heliocentric shift in cosmology, could redefine how we understand, interact with, and model the world around us. Ultimately, it will lead us towards an era of minimized information corruption and maximized understanding of the intricate and interwoven fabric of reality.


Shamit Shrivastava

Biophysics of sound in membranes and its applications. Postdoctoral Researcher, Engineering Sciences, University of Oxford, UK. www.shamits.org