
Talking About New Privacy Perils

March 15, 2013 • Innovation

We need systems that watch us.  Maybe “need” is too strong a word, but the leap in what applications could do on a device capable of monitoring and anticipating the user’s instructions, and even reacting to the user’s emotions, is enormous.  So enormous it’s going to make a lot of people, myself included, a little nervous.  But it’s hard to imagine turning back at this point.  Let’s accept that it’s coming, and your next tablet will probably scroll and select based on your eye movements.  You’ll be able to click with a blink, or delete by sticking out your tongue (okay, no one has promised that last one yet).

On one level, it’s not such a big innovation.  Your computer already listens for your actions.  It’s just that the range of actions it’s prepared to react to is very narrow.  Keystrokes and mouse movements are about it for PCs, with taps and expand/contract gestures added for touchscreens.  Your phone knows what you want because you touched a link.  All that’s changing is that the phone is going to anticipate your touch by following your eyes; your conscious instructions will still be driving the behavior.

But it doesn’t end there.  The things we click, or ignore, constitute useful information to someone.  How we move around the screen tells those who design and populate those interfaces a great deal about what people think.  They’ll be gathering data about more than your specific volitional instructions.  Indeed, we can convey more information than we intend, or wish inferred, by our private behavior when communing with our own devices, revealing all sorts of conscious and unconscious traits, quirks, foibles and preferences that are nobody else’s business.  What kinds of ads draw our attention, whether or not we click on them, would tell advertisers more about which campaigns are working than simple click-through rates.  Click-throughs usually run in fractions of a percent; suppose you could gauge the reaction of 100% of the users who saw the ad?  But as consumers, we know that what we see on our screens will rapidly be adjusted based on what advertisers know about our reactions.  Will these systems be able to distinguish a covetous glance from one of disgust?  And regardless of our conscious or subconscious interest, maybe we don’t want all our banners and sidebars showing underwear models (let’s hope the good people at adblock are paying attention to all of this).

More insidious still are the affective interfaces under development, which will read the user’s emotional state.  One can certainly see how information about a user’s state of mind would be useful in any number of applications, from online learning to clinical analysis.  But how will you feel when your phone starts implicitly telling you to calm down?  Perhaps fortunately, this technology still has a ways to go before that happens.

Software engineers can get swept up in new features, shrugging off a dark forest while mesmerized by the coolness of the trees.  Lawmakers and regulators will not be so enthralled.  Will consumers accept such features the same way they seem to be accepting GPS in everything?  How are we going to engage with consumers and the watchdogs as systems become more richly user aware?  It will be a new paragraph in the EULA, just for starters.  The point is, those involved in the development, integration and marketing of these tools need to be able to understand the new capabilities, their power as well as their boundaries and limitations.

There doesn’t seem to be any framework for talking about the degree of intrusiveness of a computer system (if you know of something in this area, please let me know in the comments).  The more common language we have in the way of memes and vocabulary, the easier it will be to have these discussions and make appropriate decisions.  As in anything, uncertainty equals risk.  It would be helpful if we had some kind of structure, a level set of common concepts and standards.

Let’s start by talking about a device’s level of user awareness and how it manages the data it gleans.  While the distinctions I’m suggesting imply a stepped escalation when the reality is more of a continuum, I think it would be useful if we could look at systems as falling into one of these categories:

  • User Awareness (UA) level 0 – No user awareness other than through monitoring basic controls (keyboard, mouse, touchscreen, touchpad).  Any attached cameras are used for image capture only.
  • UA 1 – user behavior is monitored for control only, such as scrolling and selecting.  No user data is stored outside of ephemeral memory.
  • UA 2 – local processing and storage of user activity data – for applications that make significant use of user activity inferences but don’t share it with any other user or application.  Imagine a video recorder that saves footage based on the activity it is recording, such as a security camera application that only stores data when someone other than the user is in the image.
  • UA 3 – applications that interact with other systems or devices, but don’t share user activity data with a third party.  Any web site that makes use of your device’s user monitoring data would be at this level or above.
  • UA 4 – applications that deliver anonymized user activity data to a third party such as the software vendor or ad server.  Just what anonymized means and how it should be managed and audited may well be the subject of standards or regulatory oversight.
  • UA 5 – user activity is exchanged with third parties who may interact with the individual user on the basis of their specific behavior.  UA 5 in the browser means that if you keep glancing at those underwear ads, pretty soon coupons from FreshPair.com or La Perla will start showing up in your email.

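Purely as an illustration (none of this is a standard, and the names are my own invention), the levels lend themselves to an ordered enumeration, so a build script or audit tool could compare an application’s declared level against a policy ceiling.  A minimal Python sketch:

    from enum import IntEnum

    class UserAwareness(IntEnum):
        """Illustrative encoding of the UA levels sketched above."""
        UA0_BASIC_CONTROLS   = 0  # keyboard, mouse, touch; cameras for image capture only
        UA1_CONTROL_ONLY     = 1  # gaze/gesture drives control; nothing kept beyond ephemeral memory
        UA2_LOCAL_ONLY       = 2  # activity data processed and stored locally, never shared
        UA3_NO_THIRD_PARTY   = 3  # talks to other systems, but activity data stays with the user
        UA4_ANONYMIZED_SHARE = 4  # anonymized activity data delivered to a vendor or ad server
        UA5_IDENTIFIED_SHARE = 5  # third parties act on an individual user's specific behavior

    # Because IntEnum preserves ordering, "at this level or above" is a simple comparison:
    assert UserAwareness.UA4_ANONYMIZED_SHARE > UserAwareness.UA2_LOCAL_ONLY
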
This formulation doesn’t presume a technology, just a privacy-invading capability.  Location data, for instance, is one way a device can be user aware, so this structure can be applied to discussions of GPS-dependent features.

Notwithstanding some trepidation, I’m excited about these new capabilities.  I don’t presume that they are dangerous any more than I assume their builders have malicious intent.  At the same time, it’s necessary to be mindful of how the power could be misused.  To the IT professional, the take-away is that we must ensure systems have the UA level needed to do the job and no higher, and be open and clear about the implications to our users.
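
A minimal sketch of that take-away, assuming the UA levels are encoded as integers 0 through 5 as in the enumeration above (the feature names and required levels here are invented for illustration):

    # Hypothetical policy table: the lowest UA level each feature actually needs.
    FEATURE_REQUIREMENTS = {
        "gaze_scrolling": 1,     # control only; nothing stored
        "security_camera": 2,    # local storage of activity data
        "personalized_ads": 5,   # identified sharing with third parties
    }

    def audit(feature: str, declared_level: int) -> str:
        """Flag applications that declare more user awareness than the feature requires."""
        required = FEATURE_REQUIREMENTS[feature]
        if declared_level > required:
            return f"{feature}: declared UA {declared_level} exceeds required UA {required} -- review"
        return f"{feature}: declared UA {declared_level} is within bounds (requires UA {required})"

    print(audit("gaze_scrolling", declared_level=4))   # over-collection, gets flagged
    print(audit("security_camera", declared_level=2))  # fine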
