I did a posting yesterday on Transport Blog about how they’re now using lie detection software to monitor phone conversations from insurance claimants, to flag up potential liars, and then “give them the opportunity to change their story”. The result is a fall in insurance claims, and hence, presumably, potentially cheaper car insurance.
I have the overwhelming feeling that this procedure will bring bad news as well as good, in a White Rose Relevant way, especially when governments start using stuff like this, as I dare say many already have. But what form will this bad news take? I can’t think of any obvious badnesses, but I feel sure there are some. Comments please.
One suggestion. The insurance companies mentioned in this story all say at the start of their conversations that “this call is being monitored”, although I don’t believe they say straight out that this means a lie detection machine. Clearly others will not be so scrupulous, and will simply monitor all conversations and flag up what the machine says are lies, all the time. What are the White Rose Relevant implications of that?
On the face of it, I think I have the right to buy a machine that helps me decide whether I trust someone at the far end of a phone line. I could simply say “Is this a junk phone call?” every time I suspect it is, and if they say no but my machine goes “ping”, then down goes the phone. At present the danger is that, with the more fallible bullshit detection software we all have in our brains, we do this to “real” phone callers who are merely a bit clumsy in identifying themselves, or whom we are a bit clumsy in identifying.
Presumably what makes this so much more usable now is that the kit has got a lot cheaper, and it supplies answers straight away, while the conversation is still going on.
Techno-food for thought here, I think.