BLUF – Our bodies ingest volumes of sensory data daily to assess relative threat, with internal shifts in how we feel raising alerts to our attention. Despite the volume of threats and their growing impact, our bodies do not register danger during the time we spend in digital space. Increasing the variety of sensory data individuals receive opens new opportunities for threat indicators to invoke a physiological response – sensing digital danger just as we do physical danger.
For a brief moment something catches your attention. Your neck hair rises, your ears perk, your eyes scan everything quickly, your heart races, time slows. You're not sure why, but something demands your attention.
Something is not right.
Not only is it not right, but your body has interpreted the 'not right' as 'threat,' pushing your mind from conscious processing into the limbic system to assess the environment quickly. Fight, flight, or freeze: something pushed you out of a relatively relaxed state into hyper-vigilance.
Perhaps it was a sharp noise, a threatening motion, a look you caught, a toxic smell, or the combination. Multiple indicators you hadn’t time to process on a cognitive level raised awareness internally. This is survival instinct, evolutionarily intact, often triggered unnecessarily in the current physical environment we’ve created.
But we've created new environments in digital spaces – non-physical places we practically live in – that lack the sensory data needed to understand threat. Our biology never prepared us for the threats we now face; there were no generations cut short by deadly misinterpretation, because the harm invoked digitally doesn't hasten our expiration date. Where a physical environment engages all our senses, digital space engages two or three at most, often suspending the rest for the duration.
To treat threats properly, we need to recognise the potential for harm not only at cognitive levels, but at limbic/physical levels as well. This requires developing both the sensory inputs and the triggers for biological notification, and it could also bolster inclusion efforts for those who rely on assistive tech.
Sight and sound are already embedded in our digital media, along with haptics (touch) for gaming controls and certain functions on mobile. I don't anticipate taste or smell being easily adapted, although AR/VR can potentially incorporate vestibular (movement) and proprioceptive (body positioning) senses into our digital worlds. Because VR is immersive, it offers many possibilities for sensory input and manipulation, but until we conduct our business à la William Gibson, those platforms will not have sufficient utility to support the potential security challenge. AR has more day-to-day utility, but requires better interpretation of how to identify and manifest warning indicators. Think of standing at arm's length from the edge of a high precipice: you're not in danger, but your body recognises the possibility. That is the experience to emulate.
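To make the haptics point concrete, here is a minimal sketch using the web Vibration API (`navigator.vibrate`), which already exists on many vibration-capable mobile browsers. The severity names and vibration patterns are my own illustrative assumptions, not an established standard.

```javascript
// Map a threat severity to a vibration pattern: alternating
// milliseconds of vibration and pause, as the Vibration API expects.
// Severity levels and patterns here are illustrative assumptions.
const THREAT_PATTERNS = {
  low:      [100],                      // single short pulse
  elevated: [150, 100, 150],            // two pulses
  severe:   [300, 100, 300, 100, 300],  // insistent triple pulse
};

function hapticPattern(severity) {
  const pattern = THREAT_PATTERNS[severity];
  if (!pattern) {
    throw new Error("Unknown severity: " + severity);
  }
  return pattern;
}

function warnUser(severity) {
  const pattern = hapticPattern(severity);
  // navigator.vibrate is only available in some browsers on
  // vibration-capable hardware; guard so the sketch degrades gracefully.
  if (typeof navigator !== "undefined" && navigator.vibrate) {
    navigator.vibrate(pattern);
  }
  return pattern;
}
```

The design choice worth noting is that severity maps to a *felt* difference in urgency, mirroring how a louder noise or sharper motion carries more physical threat information than a quiet one.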
Security establishes different threat factors for manual and automated interpretation, but with little consideration for how that information is relayed. Moving away from fear/uncertainty/doubt, awareness content creators are experimenting with a variety of attention-grabbing methods (e.g. rude humour) for general awareness. Creativity is overdue, and the need will continue until the culture adapts to embed these as fundamental practices. Give it a few generations.
As we create new ways to incorporate sensory warnings into our daily operations, a word of caution: desensitisation will kill any value, and it is common in digital space. Think of how users respond when Chrome says the requested site is unsafe, and how quickly they relax their guard when it looks just like … any other site.
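The habituation risk can be expressed as a toy model (my own simplification, not a validated psychological model): repeating the identical warning decays the response, while a novel presentation restores attention.

```javascript
// Toy habituation model: response strength starts at 1.0 and decays
// each time the *same* warning format is repeated; a novel format
// resets attention. The decay rate is an illustrative assumption.
class WarningResponse {
  constructor(decay = 0.5) {
    this.decay = decay;
    this.strength = 1.0;
    this.lastFormat = null;
  }

  show(format) {
    if (format === this.lastFormat) {
      this.strength *= this.decay;   // same stimulus: habituate
    } else {
      this.strength = 1.0;           // novel stimulus: attention restored
      this.lastFormat = format;
    }
    return this.strength;
  }
}
```

In this model, three identical interstitial warnings in a row drop the response to a quarter of its original strength; switching the warning to a different sensory channel brings it back – which is the argument for varying how warnings manifest.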
Nothing feels or looks dangerous, so we automatically default to trust.
We have sufficient stress responses in other parts of our lives; digital space is our escape. We want to trust what's there.
Was it curiosity or complacency that killed the cat? Either way, the cat should have paid attention.