In a way, you might say that this information is priceless. It would utterly transform the way that we interact with one another.
In 1996, author James Halperin wrote a wonderful book called The Truth Machine about precisely this possibility. Now, thanks to developments in video processing algorithms at M.I.T. over the last few years, that vision might be a lot closer than you imagine.
Zooming in on Change
The M.I.T. invention is called “Eulerian Video Magnification” and essentially it’s an algorithm for transforming video signals so that small differences become big differences:
A faint change in temperature, something you wouldn’t normally be able to detect with your eye, is now exaggerated to become obvious. A micro-movement of an eye, a blink or an ever-so-slight glance to the left or down, becomes impossible to ignore. The same goes for the countless micro-expressions we make throughout the day – those little micro-blasts of emotion, so fast (lasting just 1/25th to 1/15th of a second) that they’re hard for most of us to accurately detect and assess.
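The core idea is simpler than it sounds: isolate the tiny temporal variations in a video and amplify them. Here’s a minimal sketch of that principle in Python with NumPy – a temporal band-pass filter plus amplification. Note this is a simplified illustration, not the actual M.I.T. implementation, which also decomposes each frame into a spatial pyramid before filtering; the function name, parameters, and passband values are illustrative assumptions.

```python
import numpy as np

def magnify_temporal(frames, alpha=20.0, low=0.5, high=3.0, fps=30.0):
    """Amplify subtle temporal variation in a stack of video frames.

    This is a simplified sketch of the Eulerian idea: band-pass filter
    each pixel's time series, then add an amplified copy of the filtered
    signal back onto the original video.

    frames: array of shape (T, H, W) -- grayscale video as floats
    alpha:  amplification factor for the band-passed signal
    low, high: temporal passband in Hz (e.g. ~1 Hz for a human pulse)
    fps:    frames per second of the source video
    """
    # FFT along the time axis gives each pixel's temporal spectrum
    freqs = np.fft.fftfreq(frames.shape[0], d=1.0 / fps)
    spectrum = np.fft.fft(frames, axis=0)

    # Zero out everything outside the band of interest
    band = (np.abs(freqs) >= low) & (np.abs(freqs) <= high)
    spectrum[~band] = 0

    # The band-passed signal holds the subtle variation; amplify and add back
    filtered = np.fft.ifft(spectrum, axis=0).real
    return frames + alpha * filtered
```

Fed a video where each pixel oscillates imperceptibly – say, skin tone flushing slightly with each heartbeat – this turns a variation of a fraction of a percent into one you can plainly see.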
All of these subtle cues will now get much more visibility, thanks to the Eulerian Video Magnification algorithms and whatever they evolve into over time. Now imagine applying this new ability to a very specific application: detecting a lie.
The best poker players know how to spot the “tell” – those subtle signs that an opponent is bluffing. The signs of a lie are always there – they’re just hard to see sometimes.
There are lots of approaches to lie detection, such as measuring respiration rate, blood pressure, capillary dilation, vocal signals and muscular movement. And most of these measures could be addressed by tweaks to the Eulerian Video Magnification process.
To be clear, today’s lie detector tests – particularly the polygraph – are notoriously subject to significant inaccuracies; so much so that they’re not usually admitted as evidence in US courts. So, this approach to lie detection is still far from foolproof.
Where It Will Go
We’ll likely pursue it anyway, though, for two reasons. The first is that Eulerian Video Magnification doesn’t require strapping someone up with wires. You simply capture them on video and run the algorithm – you can even do it in real time. That’s why this approach to lie detection will likely start as a quick, dirty and inexpensive option.
That will change over time, however, which leads to the second reason this approach will likely win out: it’s going to get much better. We will eventually overlay blood flow analysis on voice stress and map that to eye movements and all kinds of other measures. We’ll do that because it will be cheap – just throw a little more processing power at the problem. What’s more, as this stuff goes mainstream and is connected to cloud services, there will be countless points of feedback from millions of users, all of which will contribute to improving the algorithms over time.
In case it’s not obvious, the delivery mechanism for this technology will be something like Google Glass. Imagine running it with realtime video processing, so that just by looking at the person before you, you’d get some probabilistic assessment of how likely it is that they are telling the truth.
What would it feel like to wield that kind of power? Well, probably quite awkward at first. Some have noted that Google Glass can feel a little socially awkward already today, but think what happens when it gets a rudimentary lie detector.
How would people react to your instantly seeing signs that they might be lying? Think about the perceived invasion of privacy and the sense of nakedness those around you would likely feel. Wow. Talk about barriers to adoption.
The first few years of our new “truth machines” – which I’m guessing will arrive sometime in the next 3-5 years – will undoubtedly be rocky.
The Emerging Truth
Down the road though, as these tools become more powerful and more accurate, my guess is that they will be widely adopted as a way to help us discern the truth.
As hard as getting there will be, in the end, I think these tools will make us better human beings. It’s been a while since I read Halperin’s The Truth Machine, but I remember it getting into very important questions about a society where lies cease to exist.
How would business meetings change? How would domestic politics and international diplomacy change? What happens with certain professions, such as the legal field – might they disappear, or morph into something radically different? These are all fascinating and critically important questions, and they are just an inkling of the kind of change that something like this might bring.
At the time I read Halperin’s book, I thought – yeah, someday we really will face these questions. Now, seeing these technology developments at M.I.T., I think that day could actually be very, very close.
And I wonder whether we are really ready for it.