Edge.org
To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.

Conversation: CULTURE

What is Reputation?

A Conversation With Gloria Origgi [11.5.15]

NEW — A Reality Club discussion with responses from: Abbas Raza, William Poundstone, Hugo Mercier, Quentin Hardy, Martin Nowak and Roger Highfield, Bruce Schneier, Kai Krause, Sumit Paul-Choudhury, Margaret Levi.

That is basically what interests me—the double question of understanding our own biases, but also understanding the potential of using this indirect information and these indirect cues of quality of reputation in order to navigate this enormous amount of knowledge. What is interesting about the Internet, and especially about the Web, is that the Internet is not only an enormous reservoir of information, it is a reputational device. It means that it accumulates tons of evaluations of other people, so the information you get is pre-evaluated. This makes you go much faster. This is an evolutionary heuristic that we have had, probably since the birth of the human mind.

Follow the people who know how to treat information. Don't go yourself for the solution. Follow those who have the solution. This is a super strong drive—to learn faster. Children know this drive very well. And of course it can bring you to conformism and have very negative side effects, but it can also make you know faster. We know faster, not because there is a lot of information around, but because the information that is around is evaluated; it has a reputational label on it.

Introduction

This Edge feature is our second foray into the idea of "reputation" in the age of the Internet. The first, in December 2004—"Indirect Reciprocity, Assessment Hardwiring, And Reputation: A Conversation with Karl Sigmund"—occurred in another era (or was it another planet?): no iPhones, no Facebook, no Twitter. We were sending faxes through our PCs and Macs attached to modems, and short messages through our pagers.

At that time Sigmund said, "In the early 70s, I read a famous paper by Robert Trivers, one of five he wrote as a graduate student at Harvard, in which the idea of indirect reciprocity was mentioned obliquely. He spoke of generalized altruism, where you are giving back something not to the person you owed it to but to somebody else in society. This sentence suggested the possibility that generosity may be a consideration of how altruism works in evolutionary biology."

"I am often thinking about the different ways of cooperating," he added, "and nowadays I'm mostly thinking about the strange aspects of indirect reciprocity. Right now it turns out that economists are excited about this idea in the context of e-trading and e-commerce. In this case you also have a lot of anonymous interactions, not between the same two people but within a hugely mixed group where you are unlikely ever to meet the same person again. Here the question of trusting the other, the idea of reputation, is particularly important. Google Page Rankings, the reputation of eBay buyers and sellers, and Amazon reader reviews are all based on trust, and there is a lot of moral hazard inherent in these interactions."

Gloria Origgi, whose previous Edge feature, "Who's Afraid Of The Third Culture?" appeared in these pages in 2006, is an exemplar of the Third Culture in Europe. According to philosopher Daniel C. Dennett, she has completely mastered everything from philosophy to neuroscience, moving gracefully through the conceptual jungles of everything from neuroscience to cognitive science to anthropology. A Parisian, she is an antidote to that European genre of French thought that creates the illusion of depth and profundity that Dennett calls "Eumerdification"*.

In "What Is Reputation?" Origgi talks about "the double question of understanding our own biases, but also understanding the potential of using this indirect information and these indirect cues of quality of reputation in order to navigate this enormous amount of knowledge.

"What is interesting about Internet, and especially about the Web, is that Internet is not only an enormous reservoir of information, it is a reputational device. It means that it accumulates tons of evaluations of other people, so the information you get is pre-evaluated. This makes you go much faster. This is an evolutionary heuristic that we have, probably since the birth of the human mind.

"Follow the people who know how to treat information. Don't go yourself for the solution. Follow those who have the solution. This is a super strong drive—to learn faster. Children know very well this drive. And of course it can bring you to conformism and have very negative side effects, but also can make you know faster. We know faster, not because there is a lot of information around, but because the information that is around is evaluated; it has a reputational label on it."

GLORIA ORIGGI is a researcher at the Centre National de la Recherche Scientifique in Paris and a journalist. She is a best-selling novelist in the Italian language, a respected philosopher in French, a cognitive scientist in English, and the person you want to sit next to at a dinner party. Her latest book, La Reputation, was recently published in France. Gloria Origgi's Edge Bio Page.

[* Dennett writes in his book Breaking The Spell: "John Searle once told me about a conversation he had with the late Michel Foucault: 'Michel, you're so clear in conversation; why is your written work so obscure?' To which Foucault replied, 'That's because, in order to be taken seriously by French philosophers, twenty-five percent of what you write has to be impenetrable nonsense.' I have coined a term for this tactic, in honour of Foucault's candor: eumerdification."]

—John Brockman


WHAT IS REPUTATION?

I'm a philosopher and I do some social sciences, but basically I stick to philosophy in my method, in my way of tackling questions. I was interested in epistemology, in questions about knowledge. At a certain point in the early 2000s, the Internet became such a major phenomenon that I started to be interested in the transformations of the ways in which we organize, access, produce, and distribute knowledge that were dependent on the introduction of the Internet into our lives.

I was interested in the question of trust. It seems like a paradox. The traditional view of knowledge in philosophy and epistemology is that you should not trust, and you should be an autonomous thinker. You should have in your own mind the means to filter information, and to infer new knowledge from what you already know without taking into account the opinion of others. The opinion of others is doxa, and episteme—the true knowledge—is the opposite, being an autonomous knower. With the Internet and this hyperconnectivity in which knowledge started to spin around faster than light, I had the feeling that trust was becoming a very important aspect of the way in which we acquire knowledge.


We need to trust other people. In an information dense society in which you have so much information, you cannot just count on your own means. You need to trust other people, and what does it mean to trust other people? Does it mean to become gullible? Does it mean to become credulous? I started to work in some specific domains to try and understand what it means to trust other people in order to acquire knowledge, to acquire some reliable information. What do we do? Are we entitled to do this? Is this an appropriate way of using our mind or of doing inference?

I ended up with some interesting empirical research, and also with some interesting normative claims about when we are entitled to trust. When are we rational in using our trust, or when are we gullible? These are cases where there are heuristics, there are biases, there are different circumstances in which we can be more or less gullible.

My research mixes up a French way of thinking—social sciences and philosophy—and a more analytical and Anglo-Saxon approach, so, in a sense, I'm a foreigner everywhere. I have the privilege of being able to mix up all the possible traditions without feeling that I'm contaminating myself. This is a big freedom, not to belong to any place. I'm an Italian. I have studied analytical philosophy and cognitive science in France, and I was exposed, living in Paris, to the French culture. I feel quite free to use Foucault and Darwin in the same article without feeling intimidated by disciplinary or ideological boundaries. It was more difficult at times to be heard, but it was a big freedom for me to be able to use any possible corpus of knowledge with a lot of personal freedom, which is an important value for me.

I started to mix different literatures and tried to understand in some domains how we make sense of our trust in a domain. For example, in the academic domain. That was perfect because it is my field. I work in the academy; I am a researcher at the Centre National de la Recherche Scientifique in Paris. There is a tradition in sociology in France of taking the academy as an example, of being self-reflexive. Most notably, Pierre Bourdieu, who has been a very influential sociologist in France, worked on how the credibility of knowledge in the academy is created.

I started to work on this, and I discovered many interesting things. In general, we have two kinds of constraints. In my work I'm interested in explaining and interpreting these two constraints on how we make sense of a certain corpus of knowledge. One kind is structural constraints on that corpus of knowledge. We know, for example, that academic publications form a network of citations. Citation networks have certain structural properties. Citation networks have a tendency to be aristocratic, which means that the rich get richer. The more citations you receive, the more you will receive in the future. That is an effect of the network. It doesn't have anything to do with your will, with your cognitive system. It is just how the network is organized.

This is one kind of constraint that I'm interested in analyzing. It was the famous American sociologist Robert Merton, who was teaching at Columbia University, who called this effect the Matthew effect. In the New Testament you have the four texts from John, Matthew—I don't remember the others—and Matthew says, "The rich will have more, and the poor will have less." Merton called this the Matthew effect in order to describe citation networks and academic prestige. If you're prestigious in academia, you will have more prestige, and if you're just marginal, the tendency will be that you will become more and more marginal.

These kinds of effects are structural effects that depend on the shape of the network. Another network that works in this way is PageRank. PageRank is an aristocratic network, so it has the tendency to prize the people who are at the top. You have other networks that are, sadly, very democratic, like illness and epidemics. Contagion is very democratic; it's not aristocratic. Everybody can be contagious. The shape of the network of a phenomenon is important for understanding the structural constraints of that phenomenon.
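
A minimal sketch of this rich-get-richer dynamic—a standard preferential-attachment simulation, with arbitrary parameters and purely for illustration—shows how an aristocratic citation network emerges:

```python
import random

# Preferential attachment: each new paper cites an existing paper with
# probability proportional to the citations that paper already has
# ("the more citations you receive, the more you will receive").
def simulate_citations(n_papers=10_000, seed=0):
    rng = random.Random(seed)
    citations = [1, 1]        # two seed papers, one citation each
    targets = [0, 1]          # multiset of cited papers, for weighted sampling
    for new_paper in range(2, n_papers):
        cited = rng.choice(targets)      # pick in proportion to past citations
        citations[cited] += 1
        targets.append(cited)
        citations.append(1)              # the new paper enters with one citation
        targets.append(new_paper)
    return citations

counts = sorted(simulate_citations(), reverse=True)
top_share = sum(counts[: len(counts) // 100]) / sum(counts)
print(f"Share of all citations held by the top 1% of papers: {top_share:.0%}")
```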

PageRank, Facebook, or eBay—all these networks are a little different. Their structural constraints are different, so you can have predictions of who goes up and down in these different networks by taking into account the constraints. On the other hand, you have our minds, and you have all the constraints on the way in which we deal with a corpus of knowledge that have to do with our biases, our heuristics, also with our previous knowledge and the fact that we are going to privilege some information. We have an enormous literature today on these cognitive biases. Daniel Kahneman, Gerd Gigerenzer, and so many other people in behavioral economics, social psychology, and cognitive psychology have made us aware of the enormous number of biases that we have when we deal with certain phenomena. They can distort the way in which we perceive those phenomena.

My idea is that if you want to understand what knowledge is and what we can extract in a reliable way from this complex bundle of information that invades us, we have to deal with these two aspects: the cognitive constraints on the way in which we perceive a certain corpus of knowledge—the heuristics and the mistakes that we make—and on the other hand, the structural constraints that exist on the corpus.

I don't end up being constructivist or relativistic about knowledge. I don't end up saying, "Well, knowledge is constructed by our psychological biases and by the structural constraints of a certain network or a certain organization of knowledge." I'm not a skeptic about knowledge, but the way in which we construct our knowledge institutions matters a lot in order to understand what will be filtered as knowledge in a certain time span, in a certain era, for a certain society. It is important to know the constraints.

Also, there is a normative part of my work in which people should be aware of some of the biases they have in such an informationally dense environment like the Internet. This is something that could be good for education, etc. We tend to trust things and chunks of information sometimes on the basis of very poor heuristics. This is something that can be easily corrected, for example, just by teaching children to look at the URL, how the URL is written. It contains a lot of information about the reliability of the site. It is something that you should be able to do automatically.
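
A trivial sketch of the kind of URL reading she has in mind—the cues pulled out here are illustrative, not an exhaustive or endorsed checklist:

```python
from urllib.parse import urlparse

# Surface cues a reader can check in a URL before trusting a page:
# the scheme, the host, and the path. Purely illustrative.
def url_cues(url: str) -> dict:
    parts = urlparse(url)
    host = parts.hostname or ""
    return {
        "uses_https": parts.scheme == "https",
        "host": host,
        "top_level_domain": host.rsplit(".", 1)[-1] if "." in host else "",
        "path": parts.path,
    }

print(url_cues("https://example.org/articles/what-is-reputation"))
```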

I like fieldwork in social science, but I mix up epistemology and social science. My domain, in academic terms, is called social epistemology—how social constraints have an influence on our way of processing knowledge. In the last six or seven years, influenced by the huge impact of the advent of societal information, my own method in social epistemology has been to try to develop a second-order epistemology. What do I mean by second-order epistemology?

First-order epistemology tells you that in order to distinguish knowledge, to pry apart knowledge from belief, you need to check some constraints on a chunk of information. Is it logically structured in the appropriate way? What are the inferential consequences that can be drawn from that chunk of information? Are these consequences, for example, contradicting previous knowledge? If they are, be careful. This is first-order epistemology. You try to check the reliability of a chunk of information by some methods. There have been many theories developed in the history of thought in order to check the reliability of information—the scientific method, for example.

Second-order epistemology is something that we need today, given that you cannot check these details on the chunk of information itself. You can check the indirect reliability of the information: its authority, its reputation, where it comes from, who said it, the strength or the weight that the person who said it gives to that chunk of information. All this is something new. We have a spontaneous reaction to ways of attributing authority to other people: "This must be an authoritative guy, because he's on the Edge site." We have a lot of intuition about how to access this indirect index of reliability. In my work I try to make this intuition a little more controllable. I try to see how these intuitions are sometimes just wrong, and hopeless. Sometimes they can be used as heuristics in order to get reliable information.

Take, for example, the reputation of doctors. This is one of the most interesting examples that I like to cite. I don't know if it's the same in the United States, but it is surely a fact in France and in Italy that if you ask people about their doctor, they will reply that theirs is the best doctor in town. Everybody has the best doctor, which is clearly paradoxical because we can't all have the best doctor. The way in which we select doctors is very mysterious, because you don't have explicit ratings of doctors. You have websites now that rate doctors, but health is a very sensitive issue, and you give trust to someone for many, many different reasons. But in the end, everybody ends up being convinced they have the best doctor.

I try to understand why. What are the good things? What are the heuristics? What are the biases that make us react in this way? When you are in a weak position, you attribute a higher weight to authority, when it should be the opposite. If you're in a weak position—like when it's your health—you should be more careful. The routine that we use is exactly the opposite, and there are many of these biases that we use every day in order to allocate authority to some sources of information.

The proximity bias is something that I have studied. Just because someone is next to someone more important, the reputation of the more important person leaks onto, illuminates, and enlightens the other person. There is a halo effect of transmission of authority from one person to another. Of course you can justify in some ways why two people who are next to each other must share authority, but in many cases this can bring us to negative conclusions.

That is basically what interests me—the double question of understanding our own biases, but also understanding the potential of using this indirect information and these indirect cues of quality of reputation in order to navigate this enormous amount of knowledge. What is interesting about the Internet, and especially about the Web, is that the Internet is not only an enormous reservoir of information, it is a reputational device. It means that it accumulates tons of evaluations of other people, so the information you get is pre-evaluated. This makes you go much faster. This is an evolutionary heuristic that we have had, probably since the birth of the human mind.

Follow the people who know how to treat information. Don't go yourself for the solution. Follow those who have the solution. This is a super strong drive—to learn faster. Children know this drive very well. And of course it can bring you to conformism and have very negative side effects, but it can also make you know faster. We know faster, not because there is a lot of information around, but because the information that is around is evaluated; it has a reputational label on it.

My interest in social epistemology is related to my previous interest in cognitive science, philosophy of cognitive science, and cognitive epistemology. It is just a transition, because the collective dimension of our knowledge is so huge that you need to know how social structures are organized. Our mind is a piece of a puzzle in which there are many minds that are connected in many different ways. The way in which we are connected is not like in a brain—neuron connected to neuron—we are connected through social networks. That was just empirical evidence.

When the Internet started in the '90s there were a lot of metaphors around what the Internet is. What is this bundle of links? What is the connectivity? One was the brain. Maybe we can use the connectionist models of the brain in order to understand how the Internet works, and how it develops, because it was growing, growing, growing. That was a possibility we were all very excited about, because we were working in cognitive science and neuroscience, philosophy of neuroscience. In 2000 the mathematician Jon Kleinberg had a result, which was published in Science. He showed that the Internet is a social network like the social network of people we invite to cocktail parties, or the social network in our work environment.

We live in many different social networks, and he showed—in 1999 actually; the article was out in 2000—that the Internet is a social network. In those years, Brin and Page were students at Stanford, and they took this idea to design PageRank, which was designed exactly as a social network. You have three levels: you have all the possible nodes of the network—let's say, the websites—then you have authorities, which are a middle layer of nodes that are more authoritative, which doesn't mean, in terms of the social network, that they receive a lot of interest. These authorities point to some of the websites, and they make them go up.

What is special about this seems very banal: the structure is the structure of a social network. What was important for them was to explain that there is a huge difference in terms of authority and ranking between a link from the webpage of Gloria Origgi to the webpage of Harvard University and vice versa. If Harvard, which is an authority, points toward me, it makes me go up a lot. If I point to Harvard, my weight is not enough to make such a difference. This asymmetry of the network doesn't exist in the brain exactly. That's why I decided in the early 2000s to move toward the social sciences and try to take them seriously. When people discuss trust, even in philosophy, I wanted to understand how social scientists treat this notion and the research around it.
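
A minimal power-iteration sketch of the PageRank idea she describes—the toy graph, node names, and damping factor are arbitrary choices for illustration—makes the asymmetry concrete: an inbound link from a heavily linked node lifts a small page far more than the reverse link lifts the big one.

```python
# Toy PageRank by power iteration. "harvard" is pointed to by many pages;
# "gloria" links to "harvard" and receives one link back. The inbound link
# from the authority lifts "gloria" far more than her link lifts "harvard".
def pagerank(links, damping=0.85, iters=100):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if not outs:                      # dangling node: spread its rank evenly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
            else:
                for m in outs:
                    new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank

links = {
    "harvard": ["gloria"],
    "gloria": ["harvard"],
    **{f"page{i}": ["harvard"] for i in range(20)},
}
for name, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1])[:3]:
    print(f"{name:8} {score:.3f}")
```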

I think I was right. We are facing this phenomenon, a big change in business life in which trust and reputation become commodities that we can exchange and trade, on tech sites, websites like eBay or Airbnb. It's sort of reputational. You have to have a good reputation in order to take part in this exchange of houses, apartments, etc., so this is important. But what does it mean?

What does it mean to have a good reputation? What does it mean in general to have a reputation? What do I lose? It's not like having a toothache. What do I lose when I lose a reputation? What did Volkswagen lose when it apparently lost its reputation a month ago because of a scandal over devices constructed to trick emissions controls? What did Volkswagen lose? What is reputation? Is it only social information, as it seems to be on websites like eBay?

On eBay reputation is purely social information. You gather evaluations from the social group and it changes your position in this social dynamic—makes you go up and down. But is it the same with the "likes," for example? What does it mean to "like" something on Facebook? How can you count the likes? What are you doing when you're trying to retouch your pictures on Facebook, or trying to have a better reputation online than your actual one? How important is it today to have an e-reputation, to have a good reputation? Very few people just try to understand what's there. Reputation doesn't exist. It's not like money. It's not like headaches. It just doesn't exist. It exists because someone attributes a reputation to you. It is a social property. Philosophers talk about properties that can change things in your life even though they don't exist. You can also be unaware of having or not having a reputation. We are becoming super aware of our own reputation, and we are excited by the fact that we have so many devices today that give us freedom to manipulate our reputation.

We can manipulate our own reputation by what? By using in a strategic way, let's say, the social web. On the other hand, the more freedom we have in dealing with our own reputation, the more freedom other people have to manipulate our reputation, to do things with what we have done.

There is very little literature in the social sciences about reputation, and I have tried to go through it and understand where reputation was. You have the notion of social capital, which is a little related to it, but it is not exactly the same. The problem with reputation is that it is not only the opinion of others, it is what you think the opinion of others is. It's something that is a little more complex.

Many phenomena that are interesting today deal with how our own minds deal with the social world, so you have to be aware of how our minds are structured, but also how the social world is structured. What is special about reputation in this era? In this particular social configuration of societies that are invaded by social networks and technology of communication? It is that reputation is a communicative phenomenon, it is something that we transmit.

We find an article that we feel might be interesting, and instead of reading it we just forward it to someone else, or put it on Facebook. The willingness to communicate, to transmit information, is stronger than the willingness to acquire information, which is also a special phenomenon. You should say: I am a self-interested guy, or girl—sort of an instance of homo economicus—so if I find a piece of interesting information, well, I keep it for myself! Why should I share? We all know the common behavior today is that if you find something interesting you share it before reading it.

What does it mean in terms of cognitive science and social science? It means that some social configurations that are around today, made by social institutions, technologies, etc., may highlight some dispositions of our brain in a special way. Probably we have this disposition to share information immediately, which was less exploited by other social configurations—the way in which knowledge and information used to circulate fifty years ago—and which is highlighted and made more explicit by the current social configuration.

The things are there in our minds; probably all the competences we have for dealing with the information around us are there. Social configurations are complex devices that put together politics, economics, the way in which social institutions stabilize, cultural phenomena, technology, and the development of technology.

These complex devices change from one generation to another. Some are more stable, some change very rapidly. In a sense they can foster some of our cognitive competences, and conceal some others for a while and then change. That's why I'm so interested in the interplay between these two dimensions.

The communication between the cognitive sciences and the social sciences is a little easier today because of the adoption by a group of social scientists of experimental methods, which is something that was not around in some other traditions of social science. But still, I came from philosophy studies and from humanities studies. With all the possible cultural traditions, we have an enormous privilege in these times. Everything is available and everything can be remixed—every part of the culture, from literature to mathematics, astrophysics, and pop music. It can be repackaged in a new way in order to see phenomena that you cannot see within only one discipline.

I'm attached to a multi-pluralistic method of work, and this is the way in which I work. I also write literature in Italy. I'm attached to this creative dimension of mixing up traditions and things. Today, the way in which a part of the social sciences tries to tackle some phenomena is more compatible with the experimental tradition, so it is easy to communicate.

There are traditions in social sciences like economics, and part of sociology, which are highly formalized and that use models, so this is not just poetry. Models can predict something. But what I find interesting today is that you can talk to people who use models and try to say, "Well, this is an interesting prediction, can we just test it experimentally?" This is something that is also creative; it mixes up so many different competences. Take what I'm working on these days. I have the feeling—this is an intuition, and of course I have written about it—that we tend to evaluate better, to give a better reputation to people who reciprocate a little bit, to people who like us, at least a little bit.

Pure reputational free riders who want to go up in the hierarchies and never reciprocate—we are suspicious of them. We can adore them, but at a certain point we let them down. There are models in the social sciences that formalize the way in which hierarchies are created. Why does a person go up in a hierarchy? Because many people defer to this person. The idea is that you have many people below, and one person up. If you're an important person on Twitter you have many followers, but you don't follow so many people. The idea that is around in some of the literature, and that I want to test, is that you prefer those who reciprocate a little bit: you defer to them, but they also defer a little bit. This is an adjustment that we do.

For example, when I submit a paper to the highest-ranked academic journal in philosophy and I am rejected, so I submit a second time, and my paper is rejected again, at a certain point what do I do? As we all do in the academic business, you try a less prestigious journal. At a certain point down in the hierarchy you find a journal that publishes you. At this point sometimes people start to adjust and say, "Well, this journal is not as bad as people say. In a sense, it's much more accessible. The standards of those guys up there are becoming a little obsolete." That's a way in which hierarchies are adjusted. We adjust a lot in order to prefer those who we think are going to prefer us.

This is something that exists in some formal models in the social sciences. It exists in the formal study of hierarchies. It hasn't been tested. I'm just testing it with a student. There is now a small tradition of experimental philosophers, and in my lab in Paris there are many experimental philosophers—people who take philosophical questions and try to test them experimentally. We have a lot of fun. What is important to say, and this is the spirit of the experimental work, is that sometimes you're wrong. Sometimes your intuitions are completely wrong. Maybe I'm wrong. Maybe now that we are testing it, we will discover that people just don't care about being reciprocated. And that is important, because we also have this self-evident authority of scientists. For one positive result that you find published in an academic journal you have tons of negative results in the graveyards of our labs that are very interesting too.

My approach is, don't think that the truth is only in a special place. The more modest you are and the more pluralistic you are, the better it is for research. That's the way I avoid any possible prejudice. That is also because of my background. I have a very mixed background. I have studied formal logic and Nietzsche and Husserl in my undergraduate studies. Then I studied cognitive science in Paris. It's very important to keep your mind open.

I write literature and I write a lot for newspapers in Italian, because it is my way of keeping contact with my own mother tongue. Europe is a very different situation from the United States. We are twenty-seven countries with twenty-seven languages. I work sometimes in Brussels as an expert on advisory boards. In order to start a conversation you have to wait for the greetings in twenty-seven different languages. Europe is a very different scene, in which trying to keep your difference is something that is important for us. It's like keeping your tradition. We pay an enormous price in Europe for protecting our folklore, in a sense.

I studied humanities and formal logic in my youth, when I was at university in Milano, which was a very good place. We didn't have this idea that there was science on one side and cultural studies on the other side. We thought we would be able to navigate the culture and navigate science in the same boat. It happened to be very difficult, even in Europe, where cultural studies have been a bigger disappointment—in a sense, a vulgarization of the values of the humanities, which I still think are important in education. On the other hand, scientific methods were perceived as a tool of domination. A tool of dominating the discourse—the public, the scientific, the discourse about truth.

There is a famous piece on this, the inaugural lecture of Michel Foucault at the Collège de France, called "The Order of Discourse," in which he tries to argue that you should be careful when you use a certain discourse—it can be the scientific discourse, the literary discourse, the political discourse—and ask who is controlling it, in a sense. The scientific discourse was seen not as a tool for thought, but a tool for domination. That was also because the relationship between power and science has changed a lot in the last twenty years. We should be frank about this. This is in Europe—I don't know what the scene is in the United States—but we have a political dimension of research. What we researchers are doing has become important. States display their reputation through their scientific research. They want to be present in the international rankings. They compete for prestige and for innovation.

Twenty or thirty years ago we were a community in an ivory tower, in which you did your research, and using the scientific method was something very independent of the real world. Now we are pushed to produce research that is useful, and that has changed a little bit the positive vision of science as an activity of freedom that researchers used to have twenty years ago. The political dimension, at least in Europe—the way in which the political discourse has entered the scientific discourse—has changed the atmosphere a little bit.

But even if serious people hate post-modernism, because it was laissez-faire—and I don't want to defend post-modernism here—I just want to say something provocative about this, that we are in a post-modern era. Our knowledge is structurally post-modern, which means that there are no more clear filters. There are no more clear canons that establish what is high and what is low. This gives us an enormous cacophony of possible ways of dealing with culture and knowledge but also an enormous freedom, freedom of methods. You can find things, you can mix literature with big data. We have a freedom that we didn't have twenty or thirty years ago.

In a sense we are more uncertain about the results of research, but, at least in the intellectual debate, we should avoid polarization. What does that mean? The political debate is so polarized. In intellectual debate we should be pluralistic. Let's be pluralistic. We can access knowledge in so many different ways, we don't know which is going to be the one that is going to change the world.


Reality Club Discussion

S. Abbas Raza
Founding Editor, 3QuarksDaily.com

We Should Teach Everyone Second-Order Epistemology

Over the last many years I have frequently thought about some of the important epistemological issues Gloria Origgi raises in this excellent and thought-provoking piece, but not nearly as systematically and rigorously as she has done. The trajectory of my own thinking about these things started with a cringing frustration with the fact that far-too-large numbers of my fellow Americans hold beliefs which are incompatible with a scientific worldview and which are indefensible by any reasonable evidentiary standards. From there, I started to think about how all of us, even those in science, must take much if not most information on faith as we have neither the time and resources nor the expertise required to empirically evaluate what is reported to us, and this, of course, brought me straight to the question of the reputation of those doing the reporting.

Of all the interesting points that Origgi makes, I find two to be of special importance: first, that we actually already have an impressive corpus of literature on cognitive biases; and second, her notion of second-order epistemology or the study of the ways in which we assign authority or reputation to sources of information. It seems to me that since there is no way to get around the problem of an individual's inability to test all truth claims for herself (which would be a form of what Origgi characterizes as first-order epistemology), we are stuck with trying to make our second-order epistemology (how to accurately assign authority and reputation to sources of information) as good as possible for as large a number of people as possible. And that is precisely where I believe that a wide dissemination of what we have discovered about cognitive biases over the last five decades can play a very salutary role. I do not think that the various new forms and methods of information exchange that the internet provides can or will help in this in any particularly significant way. The problems of assigning reputation in the real world have just been transferred to cyberspace, where there is more information and more misinformation available than ever before.

Along with America's increasing wealth and income inequality has come an inequality in not just access to knowledge, but even access to second-order knowledge: knowing whom to trust for true and useful information. There has always been a populist, anti-intellectual streak in American society but I have little doubt that this unfortunate tendency has been on the rise in recent decades. And I believe that there is some urgency to develop ways of combating this trend lest we become an even less democratic society in which a small elite is able to manipulate the beliefs of a large proportion of the population through the clever use of media and exploitation of the very cognitive biases of which most people remain sadly ignorant.

We need to develop a practical second-order epistemology curriculum and teach it in high school.

William Poundstone
Journalist; Author, The Doomsday Calculation; Nominated twice for the Pulitzer Prize

Why Aren’t Online Ratings More Useful?

Bâtard, named the James Beard Best New Restaurant for 2015, is rated four stars on Yelp. It shares that four-star rating with New York outlets of Chick-fil-A, Shake Shack, and Chipotle Mexican Grill. It is difficult for any widely reviewed restaurant to top four and a half stars on Yelp. There is almost always an irate minority posting one-star reviews for reasons that often seem to have more to do with the reviewer’s personality issues than with the restaurant. Yelp is helpful in distinguishing the truly awful restaurants from the very best, but it’s not of much use in prioritizing a foodie bucket list.

It’s worth asking why. Ten years ago, many tech optimists (myself included) imagined that aggregated online reputations like Yelp’s would be a valuable resource for everyone. It hasn’t quite worked out that way.

There are several distinct problems with online ratings as they exist today. The late Harriet Klausner, famed as Amazon’s most prolific book reviewer, was a speed reader who had churned out 31,000 reviews. Klausner exemplified Pareto’s law. Twenty percent of the reviewers produce about eighty percent of the reviews. This becomes a problem if the super-prolific reviewers are unrepresentative of their audience. That may have been the case with Klausner, for it’s said that virtually none of her reviews were negative. This fact was not lost on publishers, who sent her truckloads of books a year, many of which were resold on the Internet via an account in her son’s name. (Amazon tweaked its rating system in 2012, perhaps in reaction to prolific reviewers like Klausner.)

There are less-examined problems. The highest-rated movie on Rotten Tomatoes is The Wizard of Oz. What does that mean? I think it means that anyone who watches that film already knows what to expect. They don’t watch it unless they expect to like it, and the movie doesn’t disappoint. That makes it hard to compare reviews of iconic movies with indie films that some stumble upon and find not to their liking.

My initial reaction was that all these problems are fixable. We don't have to take a simple average of all submitted reviews. It can be a weighted average. Perhaps a rare one-star review of a five-star restaurant should be given less weight. A rating from someone who submits five book reviews a day could have less weight than one from someone who reviews five books a year. There should be a way of disentangling tastes and expectations. The Olive Garden crowd is different from the Zagat crowd. Average ratings could be weighted according to the tastes of each user.
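
A minimal sketch of that kind of reweighting—the thresholds and weights below are arbitrary placeholders, not a calibrated scheme:

```python
from statistics import mean, median

# Illustrative reweighting of reviews: down-weight hyper-prolific reviewers
# and lone extreme outliers instead of taking a plain mean.
def weighted_rating(reviews):
    """reviews: list of (stars, reviews_per_year_by_this_reviewer) pairs."""
    consensus = median(s for s, _ in reviews)
    weighted_sum, total_weight = 0.0, 0.0
    for stars, per_year in reviews:
        weight = 1.0
        if per_year > 300:                  # hyper-prolific reviewer: trust less
            weight *= 0.3
        if abs(stars - consensus) >= 3:     # lone outlier far from the consensus
            weight *= 0.5
        weighted_sum += weight * stars
        total_weight += weight
    return weighted_sum / total_weight

reviews = [(5, 4), (5, 12), (4, 7), (1, 2), (5, 1800)]   # (stars, reviews/year)
print(f"plain mean:    {mean(s for s, _ in reviews):.2f}")
print(f"weighted mean: {weighted_rating(reviews):.2f}")
```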

But if fixing online ratings is so simple, why hasn’t someone done it? I suspect the answer is that averages of volunteer reviews are not so informative after all. No one wants to admit this particular emperor has no clothes, as many business plans depend on it. But it might be that we won’t realize the value of reputations until we invent smarter ways of aggregating them.

Hugo Mercier
Cognitive Scientist, French National Center for Scientific Research; Co-author (with Dan Sperber) of The Enigma of Reason

Reputation and Argumentation

Gloria Origgi rightfully points out the importance of ‘second-order epistemology’: knowing who to trust, who is a reliable source for such and such topic. I wholeheartedly agree. If we can’t do this well, we’ll end up either accepting many false beliefs, or rejecting many sound ones. This is not only critical for individuals who are trying to figure things out on their own, but also for argumentation.

To some extent, argumentation can take the place of trust. If a colleague insists you got a math problem wrong, you might not believe them. But if they give good arguments, you’ll change your mind. Here argumentation makes trust unnecessary. (Indeed, you might even pay more attention to the arguments, and understand them better, if you don’t trust your colleague on such matters than if you do.)

In most cases, however, argumentation and trust work hand in hand. Many, maybe most, arguments have premises that rest on trust. This is true of everyday conversations—“We should go to this restaurant because I had a great meal there the other day”—but also of science—“My theory is right because my experiment yielded such and such result.” In both cases, the audience can evaluate the strength of the argument—Is having had a good meal somewhere a good enough reason to go there now? Do the results of the experiment really support the theory?—but they accept the premise on trust. If we couldn’t do that, we’d get nowhere.

The interaction of argumentation and trust helps explain, I think, why people don’t rely more on hard evidence when discussing, say, policy issues.

The Internet has solved one of the two main reasons why we don’t use more evidence: availability. When chatting with your friends, you can’t easily ask them to pause the conversation while you go to a library, read the relevant section of the Encyclopedia Britannica, just so you can prove them all wrong. By contrast, looking up something on Wikipedia, thanks to your smartphone, is easily done. Nowadays most of the evidence available on any topic is an Internet search away.

But for this evidence to carry any weight, people have to trust its source. It’s well and good to cite Wikipedia in your arguments, but what if people don’t trust Wikipedia? Then the arguments are worthless. For argumentation to make the best of the evidence available, people have to agree on their second-order epistemologies: they have to agree on what is a reliable source for such and such topic. By providing a better understanding of the dynamic of reputation, Gloria Origgi’s work can help us solve this problem.

Quentin Hardy
Deputy Technology Editor, The New York Times; Former Lecturer, U.C. Berkeley's School of Information

Reputation has always occupied one of the most acute parts of consciousness, the border of self and the world. One’s inner-directed actions are judged by others, and behavior is styled to meet approval, not condemnation. We do things for our own sake and are judged, and we do things simply for the sake of the crowd’s judgment. “Behavior that is admired is the path to power everywhere,” says the opening of Beowulf, and it was true long before.

Now, however, as the Internet resets so many other ideas—about the self and the crowd, about knowledge and certainty, about media power and authenticity—reputation is in novel flux.

Reputation is vaunted, in the form of scores for good or bad behavior from people who have interacted on eBay, Uber, Airbnb, or other economic networks of generally glancing duration. Buyer and seller, driver and rider, or host and client can rate each other, with profound consequences for the other. 

As a result, reputation is gamed: Airbnb consumers rate their hosts not just based on their experience, but how they might be rated in turn. If you are thought of as a bad Uber rider, perhaps for giving harsh reviews, you are less likely to get a ride. Thus, everyone biases to the positive. As with all inflation, the currency is debased.

Reputation is universalized, so that all may see. This might seem like a boon: Adam Smith felt the invisible hand of the marketplace would also be steered by people’s fears of their neighbor’s opinions. Greed for gain would thus be matched by greed for esteem; in today’s world of so many eyes, this governor should be stronger than ever.

And yet, we have the outrageous and irresponsible greed that led to the 2008 collapse of the world financial system. Bad actors would easily disappear into the anonymity of global networks, so hidden in complexity that they feared no judgment of reputation, other than the smile of a Maybach salesman.

Our primary source of information, the Internet, is driven as never before by the quick-changing views of others. Reputation is at the heart of Google rankings, but also of a celebrity culture which has taken on a new monstrosity, in an online economy that primarily values clicks and eyeballs. He who has the most Twitter followers can most readily stir the crowd, sending heavenward or destroying the reputation of others with little forethought. 

Likewise, reputation of authority is often built through that clumsy word, curation: The passing on of information, whether or not the messenger has thought at all about the message. The reputation is thus gained by feeding the flow of information, not possessing judgment or insight.

That simply adding to the flow now builds reputation says much about the age. We have become creatures of a process: Information not for material gain, nor political power, nor—dare we even think of this now?—the growth of our souls, but information for the sake of additional information.

Martin Nowak
Professor of Biology and Mathematics, Harvard University; Co-author, SuperCooperators
Roger Highfield
Director, External Affairs, Science Museum Group; Co-author (with Martin Nowak), SuperCooperators

Gloria Origgi asks ‘what is reputation?’ and points out that there is little relevant literature in the social sciences. However, the consequences of reputation have been analyzed in great detail when it comes to what is known as indirect reciprocity.

Indirect reciprocity, along with direct reciprocity, are mechanisms for the evolution of cooperation. Direct reciprocity means my behavior towards you depends on what you have done to me. Indirect reciprocity means my behavior towards you depends—in addition—on what you have done to others. Indirect reciprocity works via reputation.

Both theory and experiment suggest that natural selection favours discriminating strategies that pay attention to the reputations of others. If good reputations spread quickly enough, they can increase the chances of trust and thus cooperation taking hold in a society. As one would expect, Bad Samaritans with a poor reputation get less help. The first experiments supporting these predictions of basic theory were performed by Claus Wedekind and Manfred Milinski in 2000.
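
A minimal image-scoring sketch of this kind of model—the payoff values, population mix, and scoring rule are arbitrary illustrations, not the parameters of any particular study—shows how discriminators channel help toward good reputations and away from Bad Samaritans:

```python
import random

# Donation game with public image scores: discriminators help only recipients
# whose score is non-negative, so reputation steers cooperation.
rng = random.Random(1)
BENEFIT, COST = 3.0, 1.0
strategies = ["discriminator"] * 60 + ["cooperator"] * 20 + ["defector"] * 20
score = [0] * len(strategies)          # public reputation of each player
payoff = [0.0] * len(strategies)

for _ in range(50_000):
    donor, recipient = rng.sample(range(len(strategies)), 2)
    kind = strategies[donor]
    helps = kind == "cooperator" or (kind == "discriminator" and score[recipient] >= 0)
    if helps:
        payoff[donor] -= COST
        payoff[recipient] += BENEFIT
        score[donor] += 1              # helping raises the donor's reputation
    else:
        score[donor] -= 1              # refusing help lowers it

for kind in ("discriminator", "cooperator", "defector"):
    members = [i for i, k in enumerate(strategies) if k == kind]
    avg = sum(payoff[i] for i in members) / len(members)
    print(f"{kind:13} average payoff {avg:7.1f}")
```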

Because reputation is so important, our behavior is endlessly molded by the possibility that somebody else might find out what we have done: we behave differently when we know we might be observed.

We can make the models more realistic and allow for mistakes – or biases that make us more or less gullible, as Origgi says. As a result, we see cooperation wax and wane, as those with a good reputation are undermined by indiscriminate altruists who help anyone, no matter how the latter have behaved in the past. Then, free riders invade until discriminating cooperators cycle back in.

Origgi talks of the influence of the internet. Leaving aside the subtle role of its structure (which can shape the way individuals cooperate by another mechanism, spatial selection), the web enables reputations to be made and spread in minutes. The web abounds with ways to score the trustworthiness of people (when buying a camera online, you consider the seller’s reputation as closely as the price, for example). Online gossip, chat, and banter allow us to gauge the reputation of other people, sizing them up, or marking them down, to decide how to deal with them. Thus indirect reciprocity plays a powerful role in cooperation.

David Haig at Harvard has made the elegant remark that: “For direct reciprocity you need a face. For indirect reciprocity you need a name.” As a result, the benefit for social cooperation via indirect reciprocity has, more than anything else, propelled the evolution of human language.

And, of course, to possess a faculty as complex as human language, you need a big brain. Thus indirect reciprocity has also played a role in the development of our ability to lay down memories, and develop moral codes. Mediated by reputation, indirect reciprocity lies at the heart of what it means to be human. Like Origgi, we believe that our capacity to harness this mechanism of cooperation dates back to the birth of the human mind.

Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School; Author, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World

The Automation of Reputation

Reputation is a social mechanism by which we come to trust one another, in all aspects of our society. I see it as a security mechanism. The promise and threat of a change in reputation entices us all to be trustworthy, which in turn enables others to trust us. In a very real sense, reputation enables friendships, commerce, and everything else we do in society. It's old, older than our species, and we are finely tuned to both perceive and remember reputation information, and broadcast it to others.

The nature of how we manage reputation has changed in the past couple of decades, and Gloria Origgi alludes to the change in her remarks. Reputation now involves technology. Feedback and review systems, whether they be eBay rankings, Amazon reviews, or Uber ratings, are reputational systems. So is Google PageRank. Our reputations are, at least in part, based on what we say on social networking sites like Facebook and Twitter. Basically, what were wholly social systems have become socio-technical systems.

This change is important, for both the good and the bad of what it allows.

An example might make this clearer. In a small town, everyone knows each other, and lenders can make decisions about whom to loan money to, based on reputation (like in the movie It's a Wonderful Life). The system isn't perfect; it is prone to "old-boy network" preferences and discrimination against outsiders. The real problem, though, is that the system doesn't scale. To enable lending on a larger scale, we replaced personal reputation with a technological system: credit reports and scores. They work well, and allow us to borrow money from strangers halfway across the country—and lending has exploded in our society, in part because of it. But the new system can be attacked technologically. Someone could hack the credit bureau's database and enhance her reputation by boosting her credit score. Or she could steal someone else's reputation. All sorts of attacks that just weren't possible with a wholly personal reputation system become possible against a system that works as a technological reputation system.

We like socio-technical systems of reputation because they empower us in so many ways. People can achieve a level of fame and notoriety much more easily on the Internet. Totally new ways of making a living—think of Uber and Airbnb, or popular bloggers and YouTubers—become possible. But the downsides are considerable. The hacker tactic of social engineering involves fooling someone by hijacking the reputation of someone else. Most social media companies make their money leeching off our activities on their sites. And because we trust the reputational information from these socio-technical systems, anyone who can figure out how to game those systems can artificially boost their reputation. Amazon, eBay, Yelp, and others have been trying to deal with fake reviews for years. And you can buy Twitter followers and Facebook likes cheap.

Reputation has always been gamed. It's been an eternal arms race between those trying to artificially enhance their reputation and those trying to detect those enhancements. In that respect, nothing is new here. But technology changes the mechanisms of both enhancement and enhancement detection. There's power to be had on either side of that arms race, and it'll be interesting to watch each side jockeying for the upper hand.

Kai Krause
Software Pioneer; Philosopher; Author, A Realtime Literature Explorer

Truth, Cognition and Einstein's Sheep 

In order to be a perfect member
of a flock of sheep
one must, above all,
be... a sheep

In 1953 Einstein sent this along with eight other aphorisms to be included in a memorial document for his friend, Rabbi Leo Baeck. As a young teen I found Albert's writings—especially this quotation—a deep inspiration.

There was something soothing there—as it reflected my own uneasy dissatisfaction with the notion of being counted as "merely a member of a big flock of sheep." To this day I must be missing that "herd gene" as I just cannot find myself represented in so much of what is considered "popular opinion" or even basic "world culture". And over the next 5 decades it became ever more clear to me that my own credo is the diametrical opposite concept—being highly eclectic.

Interestingly, this is an example in both directions: the disconnect from the prevailing opinions at large, and yet at the same time also finding an author with whom one can feel a kind of kinship, and repeatedly building that trust—"if HE says X, then he must do so for a reason"—giving him the benefit of the doubt that even in counter-intuitive cases there is a logic to that opinion. Albert rarely disappointed me there.

In other words—he built a reputation with me. Trust. And so it connects with this discussion here, from K. Sigmund to Origgi—which I find very worthwhile and needed. Unfortunately, these deeply "loaded terms" are rarely dealt with outside the narrow confines of behavioral science.

And yet the entire web, our search engines, the services, tools and products we all depend on in our daily lives are all based on that glue—"click counts", "page views" and "unique visitors". The value of all those billion dollar unicorns dances around that ethereal value of "impressions" and "engagement"—which in turn are just personal opinions and taste preferences cast in the form of daily and monthly "active users".

The fickle taste of the public at large, measured in realtime, in surreal ways, and sadly in the end, all for the sake of pathetic advertising and corporate earnings.

It is a bit sad that merely including the proper terms like "heuristics" or "epistemology" will condemn a piece like this to be read by the small inner circles, the academic choir.

Those who could really benefit from the thinking and question their own gullibility will clearly not ever see it. But it is a start; I commend Edge for giving Origgi the space, and her for eloquently examining the aspects flowing together.

Sumit Paul-Choudhury
Editor, New Scientist

What is reputation? Gloria Origgi is right to question its existence as a "thing". It is meaningless simply to say that someone has a reputation, or for that matter any quantity of reputation: one must say they have a reputation for something: for telling the truth, for being a good host, for fair dealing on eBay. And of all the things reputation is not, it is not currency. It cannot be traded: neither bought nor sold, begged, borrowed or stolen. Setting aside such quirks as the proximity effect, it can only be earned or lost through our own words and deeds. Perhaps that is why organized crime reputedly puts such store by it.

Of course, there is much talk of how things are different online, where it is beginning to seem that reputation can indeed be transacted; there is even talk that it will in some way become currency. I believe this is at best some way off.

Certainly public tokens of esteem abound, attached to status updates, transactions and reviews. These can be spoofed, stolen or purchased, as Bruce Schneier observes. They can indeed be used as currency in the attention (aka advertising) economy. But I do not think they are good proxies for reputation, for the reasons given by William Poundstone: to be useful, reputational systems must acknowledge and incorporate who tells whom about what, and for what reason.

Contrast this with online ratings systems' tendency to treat reputation as the linear aggregation of scalar tokens. Their functionality seems designed to allow third parties to insert themselves into the reputational conversation, mostly in order to sell us stuff. If the internet of 2000 was a social network like that of a cocktail party, social media seems more like a Tupperware party: a simulacrum of a social function, attended by friends and family, but whose host is ultimately motivated by penny-ante commerce.
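To make that contrast concrete, here is a minimal sketch in Python, with invented raters, targets and numbers (not any platform's actual algorithm), of the difference between a flat aggregation of scalar tokens and an aggregation that weights each rating by the standing of whoever gave it, a first step toward accounting for who tells whom:

# A toy contrast between two ways of scoring reputation online.
# All names and numbers are invented for illustration only.

# Each rating records who gave it, who received it, and a score in [0, 1].
ratings = [
    ("alice", "shop_x", 1.0),
    ("bob",   "shop_x", 1.0),
    ("carol", "shop_x", 0.2),
]

# Standing of each rater (assumed given here, e.g. derived from their own history).
rater_standing = {"alice": 0.9, "bob": 0.1, "carol": 0.8}

def naive_score(target):
    """Linear aggregation of scalar tokens: every rating counts the same."""
    scores = [score for _, tgt, score in ratings if tgt == target]
    return sum(scores) / len(scores)

def weighted_score(target):
    """Weight each rating by the standing of the rater: who says it matters."""
    pairs = [(rater_standing[rater], score)
             for rater, tgt, score in ratings if tgt == target]
    total_weight = sum(w for w, _ in pairs)
    return sum(w * s for w, s in pairs) / total_weight

print(round(naive_score("shop_x"), 2))     # 0.73: all voices count equally
print(round(weighted_score("shop_x"), 2))  # 0.64: bob's low standing counts for less

Even the weighted version addresses only the "who tells whom" part of the problem; the "about what, and for what reason" still lies beyond any single scalar.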

The most egregious example is Facebook's obstinate refusal to add a "dislike" button. I'm sure this makes sense for Facebook. I am not sure it makes sense for the rest of us. Having failed so far to hone its klutzy newsfeed algorithm to the utility of, say, PageRank, Facebook is now hoping AI will do the job: that a machine can learn how humans relate to each other. So perhaps it doesn't work for Facebook either.

Popular as they are, I think we are realizing that current token-based systems are poor solutions to the challenges of Origgi's second-order epistemology. While we are happy to bask as the hearts pile up on an Instagram picture, we know that the statement "retweets are not endorsements" fails to scale, we are sensitive to the difference between stars and hearts, and we worry that notoriety seems fated to win out over authority. We deride Klout scores as hopelessly naive, even as we fear that "social credit" will shepherd us all into the celebrity economy, where a carefully burnished public image can win us preferential access to goods and services, but increasingly inescapable scrutiny of our formerly private lives can tarnish it in an instant.

Reputation is far from becoming currency. Things might be different in a post-scarcity world, but we do not live in one and I am skeptical that we ever will. Even attention is a scarce resource: hence the profusion of ad-clicking bots. But reputation does have a financial aspect, and that might lead us to better models for the intermediaries of information exchange.

One manifestation of reputation in finance is creditworthiness. The financial system has developed several ways to facilitate huge volumes of transactions between hitherto unacquainted counterparties: they all depend on intermediaries. A borrower's ability and willingness to pay can be rated by credit analysts, using both quantitative and qualitative factors. A custodian can hold collateral, eliminating the question of reputation altogether. A clearing house stands as the middleman between two parties who never engage with each other directly.

In the latter two cases, it is the reputation of the intermediary that is important, not that of the counterparties. To be effective, they must be viewed as monolithic, objective and disinterested in the outcome of the exchange. It is striking that the most celebrated financial innovation of recent years, the blockchain, epitomizes these values. It is also striking that those online marketplaces that deal in physical commodities have adopted similar mechanisms. And finally, it is striking that these values are secured by one simple mechanism: they do not offer their services for free.

 

Margaret Levi
Sara Miller McCune Director, Center for Advanced Study in the Behavioral Sciences, and Professor, Stanford University; Jere L. Bacharach Professor Emerita of International Studies, University of Washington

How do we come to have confidence in the information available to us? What role does the reputation of the source play? How is the credibility of a person or piece of work established? Gloria Origgi poses significant questions. Her domain is the academy although the implications of what she says extend well beyond universities and research centers. Her answers, while promising, are—by her own admission—far from complete, requiring further empirical exploration.

She argues compellingly that structural constraints combine with cognitive biases to give certain authorities weight (sometimes unmerited) over others. Aristocratic networks undoubtedly create a Matthew effect in which the rich get richer, but she misses the king-of-the-hill effect: scholars have motivations, both base and scientific, to find fault with those with the most prestige, sometimes generating exciting new knowledge and debunking old interpretations. And in her democratic networks, when it comes to information as opposed to contagion, populism prevails over expertise.

As several have noted, Origgi’s second-order epistemology offers an intriguing opening for deeper and better consideration of how we assess and assign authority. She recognizes the problem of conformity and what many call echo chambers, the tendency for people to search for information that confirms what they already believe or want to believe. Yet her only suggestion for countering this bias is education; she argues that bad heuristics can “be easily corrected” with fixes such as teaching children how to read URLs. This is in contradistinction to the considerable evidence of how hard it is to transform value predilections. There are interventions that can change willingness to learn and tolerance of unfamiliar and uncomfortable ideas, but it takes far more than comprehension of URLs.

Even when we do establish the trustworthiness of certain authorities using a combination of first-order epistemology (supporting evidence and testing) and second-order epistemology (a trusted researcher), we face two problems. The first is scientific fraud: it is not just the loss of the researcher’s reputation that is at stake, but a more general undermining of public confidence in the reliability of science if fraud is perceived to be widespread. The second is the constant flip-flops on findings, a particular problem in the health field, where from day to day we learn something different and presumably authoritative about the pluses and minuses of coffee, mammography, whatever.

More kudos to her for becoming an experimental philosopher and, if I understand her correctly, being willing to report both negative and positive findings. And she is investigating an interesting claim: “…that we tend to evaluate better, to give a better reputation to people who reciprocate a little bit, to people who like us, at least a little bit.” If her experiment is in fact domain specific to academia, then she still must figure out how to generalize her findings to other domains. This will require exactly the “multi-pluralistic method of work” she advocates.  

Andrés Roemer
Co-creator, Ideas City; Author, Move UP: Why Some Cultures Advance While Others Don't

Is It Really Possible to Avoid Any Possible Prejudice?

I agree with 99% of Gloria Origgi’s thought-provoking piece. I embrace her call for a more pluralistic approach to knowledge. But we don’t agree on every issue, and that 1% is an important disagreement. I don’t believe that it’s possible to avoid any possible prejudice “the more modest … and pluralistic you are.” While I agree that it’s essential to keep an open mind, I disagree that, because of a plural background or an education in formal logic, one can eliminate any possible prejudice.

My view is that knowledge isn’t sufficient to dislodge the biases that Origgi mentions. Given the depth of cognitive and structural constraints, sound heuristics are wanting. To the extent that knowledge is embedded in human culture, prejudices are unavoidable, no matter how sound your heuristics. We’re embodied prejudice contained in DNA, and no level of epistemology or open-mindedness can counter the fact that we’re all serially biased.

Those committed to a view not only become invested in it, but also less capable of critically assessing it. Religious and political alliances are formed before one surveys other options; allegiances are first established and only critically examined later. This is an evolutionarily significant natural process: we learn quickly and gain more or less reliable information when we learn to trust and trust to learn from others, and henceforth cling to those truths come what may!

For example, in the past few years I’ve led La Ciudad de las Ideas, a cultural festival in Mexico consisting of an open debate about current hot-button issues (God, the role of drugs or the Internet, etc.). I always start and end by asking the same question of the 6,000 attendees: are you for or against x? Never in the last eight years has one person reconsidered. This isn’t a lab-controlled sample, but it’s an indicator of something: our brain is not designed to search for the truth but to reinforce acquired prejudices. These deep-seated commitments are formed with no critical judgment. We systematically weigh in favor of our own view when asked to evaluate others. The sociopolitical conflict between worldviews is a testament to how entrenched these distortions are. Rival conceptions will be jettisoned merely on the basis that they aren’t compatible with one’s own view. Each can remain a dogmatist and only condemn the dogmatism of others.

The problem is that one can never secure an optimal pluralistic viewpoint; one is always becoming a critic: one begins as an enthusiast, becomes a dogmatist, and ends less assured. As Fitzgerald put it: “we beat on, boats against the current, borne back ceaselessly into the past.”

But if first-order epistemology is polluted by biases, what prevents them from contaminating our second-order epistemology as well? I see no reason to think that the biases present at the first level aren’t going to affect our second level too. Furthermore, there’s the nagging question of the costs and benefits entailed by the personal investments (say, time) needed to achieve a second-order epistemology: what’s the cost of an additional source vis-à-vis the marginal benefit?

And these questions are even more unsettling in a world driven by reputation where the hardest question remains: whom to trust and why? 

  • John Brockman, Editor and Publisher
  • Russell Weinberger, Associate Publisher
  • Nina Stegeman, Associate Editor
 
  • Contact Info:[email protected]
 
Edge.org is a nonprofit private operating foundation under Section 501(c)(3) of the Internal Revenue Code.
Copyright © 2019 by Edge Foundation, Inc. All Rights Reserved.

 

