What was it that Yoda said? "Fear leads to anger. Anger leads to hate. Hate leads to suffering." This week Dave asked us to discuss the relationships between cooperation, incentives, reputations, and trust. I think it would be fair to say that reputations lead to trust, trust leads to cooperation, and you must have incentives to make it all work.
I used to teach a class at Weber State University called The Wired Society. One of the topics was human nature and technology. I posed the question: if technology helps us to do what humans do 'better', then it might be important to determine whether humans are inherently good or inherently evil. If technology is a magnification of human nature and by nature we are 'bad', then isn't technology a bad thing as well?
I ran a series of 'experiments' where the students had to make choices in an online environment. The results of these challenges were supposed to give us an insight into human nature. A key theme of this experiment was the question of anonymity (which, it could be argued, is the antithesis of reputation) and trust. The results were interesting. One of the exercises was the prisoner's dilemma. I found that when we did the exercise in class, people played quite nicely. Online was a completely different story. Here and here are a few of the links that demonstrate what happened. The long and the short of it was that when the class got online, in a state of anonymity, with no worry about reputation or future retaliation, things got messy. Quite messy.
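For anyone who hasn't run into it, the payoff structure of the prisoner's dilemma is simple enough to sketch in a few lines of Python. These are the textbook values, not the exact points we used in class, but they show why, in a one-shot anonymous game with no reputation on the line, defecting always looks like the 'rational' move:

```python
# Standard prisoner's dilemma payoffs (temptation > reward >
# punishment > sucker); textbook values, not the points from class.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),   # mutual cooperation
    ("cooperate", "defect"):    (0, 5),   # sucker vs. temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),   # mutual defection
}

def best_response(opponent_move):
    """In a one-shot, anonymous game there is no reputation or
    retaliation to worry about, so just maximize this round."""
    return max(("cooperate", "defect"),
               key=lambda me: PAYOFFS[(me, opponent_move)][0])

for move in ("cooperate", "defect"):
    print(f"if the opponent plays {move}, play {best_response(move)}")
# if the opponent plays cooperate, play defect
# if the opponent plays defect, play defect
```

Of course, mutual defection leaves both players worse off than mutual cooperation (1 point each instead of 3), which is exactly the mess the class recreated once it went online.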
Compare this to real life where what I do is noted by those around me, and I build a 'reputation'. "Oh yeah, that's Marion. Isn't he the one that painted the local Wal-Mart a nice shade of salmon while wearing a Tick costume?" What I do becomes my 'reputation'. Online we have the option to be anonymous, but if we frequent the same spots, with the same people, we again begin to build a reputation. I argue reputation is a good thing. It keeps us from acting 'human'.
Because my class acted in an anonymous, temporary environment, no reputations were built. The end result was that there was no trust. And because there was no trust, there was ultimately no cooperation. Another of the challenges I gave my class was an exercise in cooperation. If the whole class cooperated, they all came out ahead. If, however, one person decided to go against the group, everybody lost a few points, but the person who defected got a lot of points. It was a non-zero-sum game. Since nobody was concerned about reputation, can you guess the results? More than half the class defected, even though during the discussion portion of the challenge almost everybody promised to cooperate.
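To make the structure concrete, here's a rough sketch of that cooperation challenge. The point values below are invented for illustration, not the ones from my class, but the shape of the incentives is the same:

```python
# Invented point values; the structure matches the class exercise:
# full cooperation pays everyone, one defector taxes the group
# while pocketing a big personal reward.
COOPERATE_BONUS = 5   # everyone gains this if nobody defects
DEFECT_PENALTY = 2    # everyone loses this per defector
DEFECT_REWARD = 10    # each defector gains this

def payoffs(choices):
    """choices is a list of 'cooperate' / 'defect', one per student."""
    defectors = choices.count("defect")
    if defectors == 0:
        return [COOPERATE_BONUS] * len(choices)
    return [(DEFECT_REWARD if c == "defect" else 0)
            - DEFECT_PENALTY * defectors
            for c in choices]

print(payoffs(["cooperate"] * 4))               # [5, 5, 5, 5]
print(payoffs(["defect"] + ["cooperate"] * 3))  # [8, -2, -2, -2]
```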
A good case study is eBay, discussed in two of our readings. It's a great system, partly because it is so incredibly simple (lower cost for those following the rational choice theory threads), but the end result is that the system gives us a reputation, easily and readily accessible by all, thus forcing trust (or distrust, as the case may be). With a quick click, I can know how other people have acted in past transactions. If somebody tends to be dishonest, then I can avoid working with them. If the trust is built, then I may enter into a cooperative act with them: I will give them money in exchange for an item.
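That core mechanic is simple enough to sketch in a few lines. This is only a toy version (eBay's real system is richer, and the names and threshold below are made up), but it captures the idea: net the feedback, then decide whether to trust:

```python
# Toy feedback history; seller names and the trust threshold are
# invented for the example.
feedback = [
    ("honest_abe", +1), ("honest_abe", +1), ("honest_abe", +1),
    ("shady_sam", +1), ("shady_sam", -1), ("shady_sam", -1),
]

def reputation(seller):
    """Net the positive feedback against the negative."""
    return sum(score for name, score in feedback if name == seller)

def willing_to_trade(seller, threshold=0):
    """The 'quick click': check the history before handing over money."""
    return reputation(seller) > threshold

print(willing_to_trade("honest_abe"))  # True  (net score +3)
print(willing_to_trade("shady_sam"))   # False (net score -1)
```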
That trust also plays back into what Kollock called a social dilemma. eBay's system allows us to engage in a 'tit-for-tat' system of feedback. Tit-for-tat means that if somebody harms you, you harm them back; if the other player cooperates, then you in turn cooperate. If somebody sends you a solar-powered clothes dryer and you are not happy with it, then you can retaliate. This won't keep them from trying to sell to others, but it may warn others that this person is dishonest, and ultimately they may lose business. Going back to rational choice theory, it becomes too costly to act in a dishonest manner. It is to a seller's advantage to be fair because there is a cost (a negative incentive) to cheating.
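The tit-for-tat strategy itself is almost trivially simple; here's a minimal sketch:

```python
# Tit-for-tat: cooperate on the first move, then mirror whatever
# the other player did last round.
def tit_for_tat(opponent_history):
    if not opponent_history:       # first round: extend trust
        return "cooperate"
    return opponent_history[-1]    # thereafter, an eye for an eye

# The opponent defects once, then returns to cooperating.
opponent = ["cooperate", "defect", "cooperate", "cooperate"]
my_moves = [tit_for_tat(opponent[:i]) for i in range(len(opponent))]
print(my_moves)  # ['cooperate', 'cooperate', 'defect', 'cooperate']
```

Notice the punishment is proportional and the forgiveness is immediate: one defection earns exactly one defection back, and then cooperation resumes.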
I haven't left incentives out on purpose; rather, I feel I've already shared my thoughts on how they play a very important and holistic role in online communities and in human nature in general, and I certainly wouldn't want to bore anybody with further ramblings. :)
I just want to say in hindsight that I feel like I've only scratched the surface of this topic. I think there is a lot there, and my thoughts are quite convoluted and nomadic in nature. I only had a week to work, read, and think on this. I feel like there is more that I haven't considered or discussed.
Ok, on to Zork! I beat Zork 'back in the day' (by day I mean before you could hop on the internet and get a walkthrough), but who can pass up the opportunity to play video games when you've got such a great excuse? "Honey, look right here, it says I have to play this game for hours and hours this week. Yeah, my professor is a slave driver, but you gotta do what the good doctor says."
3 comments:
I'm not sure how easy it would be to manipulate the rating system. In order for a person to manipulate it, they would need to actually purchase items from an individual and then leave negative feedback. If that feedback is unwarranted, then the seller could simply choose not to sell to that person again. I guess the manipulator could have multiple logins, but I wouldn't be surprised to find out that eBay has a way to track IP addresses or whatnot.
I also think that the incentive to manipulate isn't high enough to cover the cost. If there were only two people selling on eBay, you might see some attempts to make the other guy look bad, but you're competing against thousands of other sellers. Your efforts are better spent making yourself look good, rather than trying to make 1,348 other people look bad.
I don't think eBay's system of 'tit for tat' is really essential. For new sellers & buyers, maybe. But when you reach critical mass, would you care what one or two negative feedbacks weigh in comparison to the thousands of other positive feedback you received? What's to stop these sellers then from occasionally putting out a series of lemons, then drowning out the evidence in a sea of positive comments?
I liked your post on your experience with your class, where there was no cooperation among students and they ultimately did not obtain the highest payoffs possible. It got me thinking of the following scenario:
Think about a situation in a classroom (face2face or online, it does not matter) where students are asked to peer-review each other. Those who know the prisoner's dilemma or game theory (call them Bonnie & Clyde) will rate everyone in the class low, on the assumption that everyone else will rate each other fairly. This will result in Bonnie & Clyde scoring significantly higher than everyone else. When high assessment stakes are on the line, what's trust got to do with anything?
cheers,
BH
Dave, thank you very much for the links... I'm printing them as I type...
Bing, you are correct when you say, "would you care what one or two negative feedbacks weigh in comparison to the thousands of other positive feedback you received?" The fact of the matter is I wouldn't care, just as I wouldn't care that my neighbor down the street had a bad experience with Dell, because my other research shows me that Dell is almost always superb in their customer relations. I've been in customer-oriented businesses, and I know that some people just won't be happy with their purchases, but remember that this system is designed to be fair to both buyers and sellers. If my one negative comment barred a user who had 5,000 successful sales, then the system wouldn't work.
And as for "What's to stop these sellers then from occasionally putting out a series of lemons, then drowning out the evidence in a sea of positive comments?", well, it seems like a huge cost for a little payoff. A few bad reviews will probably affect their selling, even if only a few people decide not to purchase from them. I believe a crucial element of tit-for-tat states that you 'tit' to the same degree with which you were 'tated'. In other words, when Khrushchev bangs his shoe on the podium, you don't respond with a tactical nuclear attack. You send over your president to proclaim he is a jelly doughnut (http://en.wikipedia.org/wiki/Ich_bin_ein_Berliner).
Or something like that.
Anyway, thank you for your thoughts; the scenario you put forward is an interesting one. Does the Machiavellian always win?
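For fun, here's your Bonnie & Clyde scenario sketched out with some invented numbers. Everyone 'deserves' the same score, but the low-ballers come out ahead on relative standing:

```python
# Invented numbers: everyone 'deserves' an 8, honest students rate
# accordingly, but Bonnie & Clyde rate every outsider a 2.
students = ["Bonnie", "Clyde", "Honest1", "Honest2"]
lowballers = {"Bonnie", "Clyde"}

def rating(rater, ratee):
    if rater in lowballers and ratee not in lowballers:
        return 2   # strategic low-balling of the competition
    return 8       # a fair rating

for s in students:
    peers = [r for r in students if r != s]
    avg = sum(rating(r, s) for r in peers) / len(peers)
    print(f"{s}: {avg:.1f}")
# Bonnie: 8.0, Clyde: 8.0, Honest1: 4.0, Honest2: 4.0
```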