Trust has been cropping up as a topic for me recently. Benjamin Mitchell has just reviewed “The Art of Deception”, Kevin Mitnick’s book on social engineering; we talked about trust and identity at XTC last night; and Dale Emery recently posed it as a question on the Resistance as a Resource mailing list.
Nat Pryce gave an example of a neat spammers’ trick for getting around captchas on people’s email accounts: embed the captcha (usually an image) in a web page that tricks people into decoding the image for a different reason (Nat’s example was accessing a porn site). When the spammers get blocked by a captcha, they can exploit trust to get someone to decode it for them, in return for something that person wants. It’s low risk for the person tricked into decoding the captcha, and that low risk lets them put enough trust into the process for the trick to work.
There’s another element of trust here that ties into the next thing we talked about: trust of identity. In the spammers’ avoidance of captchas, they exploit an identity trust: the person accessing the porn site trusts that the captcha has something to do with that site, when in reality the image they see and the data they send back to a server have nothing whatsoever to do with it. (On a security note, checking the HTTP Referer header when serving the captcha image might inhibit this.)
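That Referer idea could be sketched like this. This is a hypothetical helper, not code from any real captcha service, and the host name is made up; note too that the Referer header can be spoofed or simply absent, so this only raises the bar for the relay attack rather than defeating it:

```python
from urllib.parse import urlparse

def should_serve_captcha(headers, expected_host="example.com"):
    """Serve the captcha image only if the request claims to come from a
    page on our own site. A relay page on another host would (absent
    spoofing) send its own hostname in the Referer header."""
    referer = headers.get("Referer", "")
    if not referer:
        # Many browsers omit Referer entirely; a strict policy rejects these.
        return False
    return urlparse(referer).hostname == expected_host

# A request relayed through a third-party page is refused:
should_serve_captcha({"Referer": "https://relay-site.test/free-stuff"})  # False
```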
And then we got to talking about the distributed trust metric on Advogato. Some of us had interesting issues with Advogato’s metric. I recalled that when I joined I wasn’t at all well-known in the Open Source community and consequently wasn’t allowed to post replies or articles. But after some time, some people began to rate me at around Journeyer level (the middle level). One of the most influential ratings seems to have been based on a case of mistaken identity: the person who rated me so highly had no idea which “duncan” I really was. It could go further. I could claim to be someone who exists, link to all their projects, and gather up all the trust people invested in that identity in order to certify others.
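Advogato’s actual metric is a capacity-limited network flow computation from a set of seed accounts, which is more sophisticated than this; but a toy reachability version is enough to show the weakness above. Everything here (the account names, the depth limit) is made up for illustration; the point is that one mistaken certification of an impostor pulls them, and anyone they go on to certify, inside the trusted set:

```python
from collections import deque

def trusted_set(certs, seeds, max_depth=3):
    """Toy trust propagation: a user counts as trusted if reachable
    from a seed account through certification edges within max_depth
    hops. `certs` maps each user to the users they certify."""
    trusted = set(seeds)
    frontier = deque((user, 0) for user in seeds)
    while frontier:
        user, depth = frontier.popleft()
        if depth == max_depth:
            continue  # trust doesn't propagate beyond this hop limit
        for certified in certs.get(user, []):
            if certified not in trusted:
                trusted.add(certified)
                frontier.append((certified, depth + 1))
    return trusted

# One mistaken-identity certification is enough to let "impostor" in,
# and "impostor" can then certify others within the hop limit:
certs = {"seed": ["alice"], "alice": ["impostor"], "impostor": ["crony"]}
trusted_set(certs, {"seed"})  # includes both "impostor" and "crony"
```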
We then talked briefly about PGP, which is afflicted by the same problem (hence the talk of “key-signing parties” where people physically get together to prove their identities to each other).
The kinds of trust we talked about on Resistance as a Resource were quite different. We distinguished two kinds of trust there: value-based trust, where you can predict someone’s responses because you understand their value system, and predictability-based trust, a “blanket” trust that comes from having watched the person act consistently. Without the predictability-trust you can’t build a model of their value system, and it is the value system that then lets you model their behaviour. Dale Emery summarized it like this:
I like that distinction. One is about not being confident that the person will act in accordance with your interests, and the other is about being confident that they will not. (The position of that “not” makes a big difference!)
I’m not sure either of these things is related to the identity trust I was talking about earlier though.
My final observation is that when we looked at trust’s characteristics on Resistance as a Resource, I noticed a similarity to the Prisoner’s Dilemma, and went off to do some reading. The best strategy in the iterated game seems to be Tit for Tat, in which (essentially) you cooperate at the start (a show of good faith), then do to the other player whatever they last did to you. That seems very close to what the respondents on the mailing list feel happens to them. And it shouldn’t be a surprise that we operate this way. Do we have hard-wired trust-measuring parts of our brains? I could see how that could evolve.
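Tit for Tat is simple enough to simulate in a few lines. This is a generic sketch of the iterated Prisoner’s Dilemma with the standard payoff matrix (mutual cooperation 3 each, mutual defection 1 each, lone defector 5 against the sucker’s 0), not taken from any particular source on the strategy:

```python
# Standard Prisoner's Dilemma payoffs: (my score, their score)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    """Play the iterated game; each strategy sees only the
    opponent's history and returns 'C' (cooperate) or 'D' (defect)."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    """Open with trust, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"
```

Against a defector, Tit for Tat loses only the trusting opening round and then punishes every defection; against itself, both sides cooperate throughout, which is where the strategy’s strength comes from.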