Saturday, April 24, 2004

Truth Machines

I've been thinking about James L. Halperin's book, The Truth Machine. It portrays a near future transformed by the invention of a 100% accurate lie-detector. Really interesting stuff. Halperin's vision of the consequences is very optimistic - 'utopian', even - but he manages to describe it all in a very compelling and plausible way.

Apart from all the usual pondering about whether such an invention would be a "good thing" or not, last night I began to wonder how such a machine would actually work. Presumably the idea is that our brain/nervous system behaves in some predictably different way when engaging in deceitful, as opposed to honest, behaviour (particularly the vocal expression of believed vs disbelieved propositions). But we cannot expect the 'symptoms of deceit' to be easily detectable (modern polygraphs seem mostly to detect anxiety etc., which isn't quite good enough!). Responses may also vary from individual to individual.

Anyway, what's required here seems to be some sort of pattern recognition - a job ideally suited to parallel processing, or artificial connectionist networks. As I understand it, connectionist algorithms are not 'programmed', but rather, are trained. You start off with a basic network, with all the inputs connected to the output(s) at some base strength. Then to "train" it, you simply run through a large sample of known data as input, tweaking all the connection strengths so that the correct output results. Do this enough times, and it will eventually 'evolve' the ability to recognise the pattern, whatever it may be (possibly unknown even to the trainers!)... thus giving reliable results even when faced with entirely new input data.
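Here's a rough sketch (in Python, using numpy) of what I mean - the simplest possible connectionist learner, where a single layer of connection strengths gets nudged toward the right answer for each known example. The toy data and numbers are purely illustrative, of course:

```python
import numpy as np

# A toy "connectionist" learner: one layer of connection strengths (weights),
# each tweaked toward the correct output for every known training example.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(inputs, targets, epochs=200, lr=0.5):
    """Adjust weights so that sigmoid(x @ w + b) approaches each known target."""
    w = rng.normal(scale=0.1, size=inputs.shape[1])  # base connection strengths
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = sigmoid(x @ w + b)      # the network's current output
            w += lr * (t - y) * x       # nudge each strength toward the answer
            b += lr * (t - y)
    return w, b

# A small sample of "known data": input patterns with known correct outputs.
X = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.],
              [0., 0., 1.]])
t = np.array([1., 1., 0., 0.])

w, b = train(X, t)
print(sigmoid(X @ w + b))  # outputs should now be close to [1, 1, 0, 0]
```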

I think that this sort of approach might have a lot of potential in psychological analysis, and particularly lie detection. It would certainly be interesting to try, anyway... Start off with a basic network, which takes a person's neural activity as inputs, and a single "lie detector" output. Train it by having the person assert a large sample of propositions - known to be either true or false - thus 'training' the network to recognise whatever patterns there may be (if there are any), even if we are unaware of them. There's the added advantage that networks could be individually trained to respond to the patterns of a particular individual.
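And here's how that per-person training protocol might look in code, purely as a sketch - record_neural_activity is a made-up stand-in for whatever measurement device would supply the inputs (simulated with random noise here just so the example runs), and the network is an off-the-shelf multi-layer classifier from scikit-learn:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def record_neural_activity(n_features=64):
    """Hypothetical stand-in: one feature vector of neural activity per
    spoken proposition. Simulated with random noise so the example runs."""
    return rng.normal(size=n_features)

# Training session: the subject asserts propositions whose truth-value is
# already known, giving labelled examples (1 = lying, 0 = truthful).
X = np.array([record_neural_activity() for _ in range(200)])
y = rng.integers(0, 2, size=200)  # labels would come from the protocol, not randomness

# A small multi-layer network, trained for this particular individual.
detector = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
detector.fit(X, y)

# Later: score a new, unlabelled assertion by the same person.
new_reading = record_neural_activity().reshape(1, -1)
print("estimated probability of lying:", detector.predict_proba(new_reading)[0, 1])
```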

If there are any (patterns). That's the key, I think. If there are, then I think this approach should (at least in theory) be capable of picking up on them. If not, then reliable lie detection is presumably impossible.

So that's my key idea. If reliable lie detection is possible, then 'connectionist' algorithms are one way of achieving it.

Question time (please feel free to respond in the comments to any or all of them!):
1) Do you know whether anyone else has thought of this before?
2) Do you think it would work?

and the fun questions:
3) Do you think the invention of a 100% (or close) reliable lie-detector would be a good thing?
4) Would it be (practically) possible to restrict its use (e.g. to approved law enforcement/security purposes only) for privacy reasons?
5) Would such a restriction be a good thing?

Or anything else you can think of... Fascinating topic anyway, I reckon.

Update: The Economist has an interesting article suggesting brain scans (rather than merely examining physiological symptoms) may indeed have a lot of potential for detecting lies. The approach described uses more traditional computational techniques, rather than connectionist ones. I've still never heard of anyone investigating the use of the latter for this purpose. But then, I haven't looked very hard.

1 comment:

  1. [Copied from old comments thread]

    3) It would be interesting... but could people get around it? I'm kinda into fantasy novels, and one series I'm reading at the moment is Robert Jordan's The Wheel of Time. In it there is a class known as Aes Sedai (they are the magicians of the society) who swear an oath never to lie - using magic to make the oath hold fast - they cannot break it. Now, the interesting thing is that even though everyone knows that Aes Sedai can't lie, they can still deceive, by leading people into believing that they are saying things they are technically not saying. In this fantasy world, Aes Sedai have a terrible reputation as manipulators!

    So the people with the power could get around it. I'm not so sure that those without power would have the same experience, though. Those with power can choose to say nothing, but what about the downtrodden masses?
    Patrick Kerr | 11th May 04 - 6:52 pm

    --------------------------------------------------------------------------------

    The Wheel of Time is a great series isn't it!

    As for the 'misleading but true' problem, I've got two main lines of response:

    1) As we discussed at uni today, I think the machine would be more likely to catch a general deceit-signal than a specific "literal lie" one. But I'm not sure about that; it would be interesting research to reveal which (if either) response produces detectable neural patterns.

    2) Even if you can only catch literal lies, this poses no problem so long as you phrase your questions carefully (this is discussed briefly in Halperin's book). It's kind of hard to be misleading when only given a choice between answering "yes" or "no".
    Richard Chappell | 12th May 04 - 8:36 pm

    --------------------------------------------------------------------------------

    heh. you're right. I'm wracking my brains for a morally problematic instance where people really should be able to lie...
    Patrick Kerr | 13th May 04 - 8:04 pm

    --------------------------------------------------------------------------------

    4 & 5) Obviously you could make it illegal. But practically? Not sure. Actually, I'm not sure it would be too much of a problem in most situations anyway. People would consider it the gravest insult to be tested with a lie machine, especially when being asked questions that the questioner shouldn't be asking - ones that breach privacy. They would probably refuse to answer, I think. The problem, however, would be when people are somehow being FORCED to answer... that would be a lot of power - possibly OK in the hands of a democratic government, especially if it was actually used ON the politicians as well - but in the hands of criminals? Could be dangerous.

    Psychopath asks: "Did you sleep with my wife?"
    Patrick Kerr | 13th May 04 - 8:11 pm

    --------------------------------------------------------------------------------

    Yeah, that could be a serious danger. But it would be overcome if truth machines became widespread.

    Why? Well, imagine if certain assertions became ritualized (if everyone has to say it, then no one should be offended by it).

    e.g. "I am aware of no illegal acts which have gone unpunished".
    and "I have no priveledged information about crimes yet to be committed".

    That should catch anyone committing (or even contemplating) crimes pretty quickly!

    Halperin's vision includes a mental health test which would identify all psychopaths (etc) before such events could take place anyway. (I did say he was optimistic!)
    Richard | 13th May 04 - 9:58 pm

    --------------------------------------------------------------------------------

    Also, I should mention that Halperin thought the technology would eventually progress to such a level that the lie-detectors would be incorporated into watches (or something like that)... and they'd just be working non-stop.

    If kids were raised in a society like that, I guess they'd never learn to be outraged about it.

    One result would be that people would become a LOT more careful what they ask others! You might not like the answer - or you might not like what the other person asks YOU in return!

    So I wonder if maybe it would be best to restrict their use to law courts & the ritualized oaths?
    Richard | 13th May 04 - 10:05 pm

    --------------------------------------------------------------------------------

    Yes. Possibly restrict it to law courts/parliament and those oaths.
    Patrick Kerr | 16th May 04 - 7:28 am

    --------------------------------------------------------------------------------

    Right, glad that's all sorted then... now all we gotta do is invent the damned thing!
    Richard Chappell | 16th May 04 - 10:27 am

