I've been thinking about James L. Halperin's book, The Truth Machine. It portrays a near-future transformed by the invention of a 100% accurate lie-detector. Really interesting stuff. Halperin's vision of the consequences is very optimistic - 'utopian' even - but he manages to describe it all in a very compelling and plausible way.
Apart from all the usual ponderings about whether such an invention would be a "good thing" or not, last night I began to wonder how such a machine might actually work. Presumably the idea is that our brain/nervous system behaves in some predictably different way when engaging in deceitful, as opposed to honest, behaviour (particularly when voicing propositions we believe versus ones we don't). But we can't expect the 'symptoms of deceit' to be easily detectable (modern polygraphs seem mostly to detect anxiety and the like, which isn't quite good enough!), and responses may also vary from individual to individual.
Anyway, what seems to be required here is some sort of pattern recognition - a job ideally suited to parallel processing, or artificial connectionist networks. As I understand it, connectionist algorithms are not 'programmed' so much as trained. You start with a basic network in which all the inputs are connected to the output(s) with some initial (often arbitrary) strength. To "train" it, you run a large sample of known data through it as input, nudging the connection strengths after each example so that the network's output moves closer to the correct answer. Do this enough times, and it will eventually 'evolve' the ability to recognise the pattern, whatever it may be (possibly unknown even to the trainers!)... thus giving reliable results even when faced with entirely new input data.
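To make the training loop concrete, here's a minimal sketch of the simplest connectionist unit - a single-layer perceptron trained by the classic delta rule. The toy "pattern" (logical AND) is just a stand-in; the point is the mechanism: compare output to the known answer, then nudge each connection strength in the direction that would have made the output correct.

```python
import random

def train(samples, epochs=100, lr=0.1):
    """samples: list of (inputs, target) pairs; target is 0 or 1."""
    n = len(samples[0][0])
    random.seed(0)
    # initial connection strengths: small arbitrary values
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # forward pass: weighted sum of inputs, thresholded to 0/1
            out = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            # delta rule: nudge each weight toward the correct output
            err = target - out
            weights = [w + lr * err * x for w, x in zip(weights, inputs)]
            bias += lr * err
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# toy "pattern": output 1 iff both inputs are active (logical AND)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

A real network would of course have many more units and layers (and use something like backpropagation rather than the delta rule), but the training principle is the same.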
I think this sort of approach might have a lot of potential in psychological analysis, and particularly lie detection. It would certainly be interesting to try, anyway... Start off with a basic network that takes a person's neural activity as inputs and has a single "lie detector" output. Train it by getting the person to assert a large sample of propositions - known to be either true or false - thus 'training' the network to recognise whatever patterns there may be (if there are any), even if we are unaware of them. There's the added advantage that networks could be individually trained to respond to the patterns of a particular individual.
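The workflow above can be sketched end to end. Everything here is synthetic and hypothetical: `record_utterance` stands in for whatever measurements of neural activity a real device would take, and lies are simply drawn from a slightly shifted distribution - representing the *assumed* (not established!) pattern that deceit produces. The classifier is a single logistic unit trained by gradient descent, one "network" per individual.

```python
import math
import random

random.seed(1)

# Hypothetical stand-in for a neural-activity measurement: one feature
# vector per utterance. Lies come from a slightly shifted distribution,
# an assumption standing in for whatever real pattern deceit might leave.
def record_utterance(is_lie, n_features=8):
    shift = 0.6 if is_lie else 0.0
    return [random.gauss(shift, 1.0) for _ in range(n_features)]

def output(w, b, x):
    # the network's single output: a 0..1 "probability of lying"
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Labelled training sample for one individual: propositions whose truth
# value is known in advance (0 = truthful utterance, 1 = lie).
train_set = [(record_utterance(lie), lie) for lie in [0, 1] * 100]

# One weight per input feature, adjusted by gradient descent so the
# output moves towards the known label - the 'training' described above.
w, b, lr = [0.0] * 8, 0.0, 0.05
for _ in range(200):
    for x, y in train_set:
        err = y - output(w, b, x)
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

# Evaluate on fresh utterances the detector has never seen.
test_set = [(record_utterance(lie), lie) for lie in [0, 1] * 50]
accuracy = sum((output(w, b, x) > 0.5) == bool(y)
               for x, y in test_set) / len(test_set)
print(round(accuracy, 2))
```

On this synthetic data the detector does much better than chance - but only because a pattern was built into the simulation. Whether real neural activity contains any such pattern is exactly the open question.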
If there are any (patterns). That's the key, I think. If there are, then I think this approach should (at least in theory) be capable of picking up on them. If not, then reliable lie detection is presumably impossible.
So that's my key idea. If reliable lie detection is possible, then 'connectionist' algorithms are one way of achieving it.
Question time (please feel free to respond in the comments to any or all of them!):
1) Do you know whether anyone else has thought of this before?
2) Do you think it would work?
and the fun questions:
3) Do you think the invention of a 100% (or close) reliable lie-detector would be a good thing?
4) Would it be (practically) possible to restrict its use (e.g. to approved law enforcement/security purposes only) for privacy reasons?
5) Would such a restriction be a good thing?
Or anything else you can think of... Fascinating topic anyway, I reckon.
Update: The Economist has an interesting article suggesting brain scans (rather than merely examining physiological symptoms) may indeed have a lot of potential for detecting lies. The approach described uses more traditional computational techniques, rather than connectionist ones. I've still never heard of anyone investigating the use of the latter for this purpose. But then, I haven't looked very hard.