
JB: Take the example of the Zagat restaurant guides. You assume that the people rating the restaurants are hip foodies who know at least as much as you do about restaurants. If you didn't have that orientation, you wouldn't trust the book and you wouldn't buy and use it.

MAES: This is a great example, because in choosing a restaurant you don't want a recommendation based on the average of what other people do; you want recommendations from people like you. The collaborative filtering software, which we developed at MIT and which Firefly commercializes, does exactly that. We have a restaurant site on the Web called Boston Eats. You can go there and tell the system which restaurants in Boston you like, whether you have very expensive taste or cost isn't an issue for you, and so on. A student may tell the system they prefer cheap restaurants because they're on a budget. You may not want recommendations based on their opinions, and they may not be very interested in your recommendations for pricier restaurants. In short, you want recommendations from people whose tastes are similar to yours. I'm from Europe, and I love eating a lot of food that some Americans would think is disgusting, like brains, kidney, etc. I love getting recommendations for the kind of restaurants where I can find liver and rabbit, and so on. I want recommendations from other people whose tastes are similar to mine.

This is exactly what these software agents do. If you tell the system which restaurants you like and dislike, and everybody else does the same thing, the system can identify your taste-mates: the people whose taste is most similar to yours, who like and dislike the same kinds of restaurants. The system then looks only at their opinions about restaurants you don't know to give you recommendations, so the recommendations come from people who like the same kinds of restaurants you like. The agents themselves don't know anything about restaurants. What they do know, what they can analyze, is which people are similar to which other people: which people you should listen to, which people should give you recommendations, whose problem-solving and opinions you should rely on.
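A minimal sketch may help make this concrete. The following is an illustration of the nearest-neighbor collaborative filtering idea Maes describes, not Firefly's actual implementation; the restaurant names, ratings, and similarity measure are all made up for the example.

```python
# Each user's ratings: restaurant -> score from 1 (dislike) to 5 (like).
ratings = {
    "alice": {"Chez Henri": 5, "Burger Barn": 1, "La Triperia": 4},
    "bob":   {"Chez Henri": 4, "Burger Barn": 2, "Noodle Hut": 5},
    "carol": {"Burger Barn": 5, "Noodle Hut": 4, "La Triperia": 1},
}

def similarity(a, b):
    """Agreement on restaurants both users have rated (1.0 = identical taste)."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    diff = sum(abs(ratings[a][r] - ratings[b][r]) for r in common) / len(common)
    return 1.0 - diff / 4.0  # scores span 1..5, so the max difference is 4

def recommend(user, k=2):
    """Recommend restaurants the user hasn't rated, weighted by taste-mates."""
    # Find the k most similar users -- the "taste-mates".
    mates = sorted((u for u in ratings if u != user),
                   key=lambda u: similarity(user, u), reverse=True)[:k]
    scores = {}
    for mate in mates:
        w = similarity(user, mate)
        for restaurant, score in ratings[mate].items():
            if restaurant not in ratings[user]:
                scores.setdefault(restaurant, []).append(w * score)
    # Average the weighted opinions; note the agent knows nothing about
    # restaurants themselves, only about which people resemble which.
    return sorted(((r, sum(s) / len(s)) for r, s in scores.items()),
                  key=lambda rs: rs[1], reverse=True)

print(recommend("alice"))  # [('Noodle Hut', 2.125)]
```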

JB: Could it be that one of the reasons you seem to attract a lot of flak is that by calling these algorithms "agents" they become personified? Some critics would claim that these so-called agents make us less human, not more human.

MAES: The reason we use the word agent is to emphasize that you are delegating something. Whenever you delegate something there is a certain risk that whoever or whatever you delegate to may not do the task exactly the way you would have done it. In that sense I think it is appropriate to use the word agent, so that people keep in mind that there is an entity acting on your behalf, doing things on your behalf, and so things may not get done exactly the way you would do them yourself. It's an agent in the sense that a travel agent or a real estate agent is an agent: they work for you, and they know something about your preferences and interests with respect to the problem, but still, if you had enough time to do the job yourself, you might do a better job of it. Another reason we use the word agent is that we are changing the traditional notion of software. So far people have mostly used the metaphor of a tool to describe and build software.

Usually we think of software as passive. You have to turn it on and instruct it to do something, and then it will do it. The agents approach to software is different in the sense that agents are continuously running. You don't want to have to start up the agent in your fridge that's watching the milk; it should continuously be taking care of that particular task for you. It's long-lived software that is continuously running. That is very different from the kind of software we've been using in the past, and that's another reason why a different term is appropriate - you have a different kind of relationship with this software. To rephrase McLuhan, every extension of ourselves is an amputation, and that's very much true for every technology we invent that automates something on our behalf. Take the pocket calculator. People today don't want to live without it any more, and most of us have one on our computer or on our desk. We've delegated the task of doing calculations to the pocket calculator, and this extension of ourselves has also meant an amputation, because 20 or 30 years ago people used to be able to do all these very complicated calculations in their head. They had all these tricks, these heuristics, that we don't even know anymore. We've lost these as a population. The pocket calculator, a technology from which we derive benefit, is also an amputation that has made us less good at performing that particular function.
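The tool-versus-agent distinction Maes draws can be sketched in a few lines. This is purely illustrative; check_milk_level() stands in for whatever sensor or data source a real agent would watch, and the threshold and polling interval are invented for the example.

```python
import time

def check_milk_level():
    """Hypothetical sensor read; returns litres of milk remaining."""
    return 0.2

def milk_agent(threshold=0.5, poll_seconds=3600):
    # Unlike a tool you invoke once and then put away, the agent loops
    # indefinitely, acting on the user's behalf without being asked.
    while True:
        if check_milk_level() < threshold:
            print("Adding milk to the shopping list")
        time.sleep(poll_seconds)
```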

It's important to keep in mind that if agents automate a certain task for you, you may not be very good at that task anymore, because you rely on the agent that is automating it for you, and after a while you no longer know how to do it yourself. I don't care if I don't know how to do a lot of tasks anymore. I don't need to be good at checking whether there is milk in the fridge; I'm perfectly happy delegating this to some technology and being less good at that. For other tasks, in other domains, you want to be careful, either because you may not want to lose the ability to perform the task yourself, or because the agent is not good enough for you to delegate the whole task to it with satisfactory results.

Examples include finding new music or deciding what news to read in a newspaper. You don't want an agent telling you exactly which articles you should be reading - you always want to do some browsing yourself, because otherwise there's a risk you'll get tunnel vision. The agent gives you more of the kinds of articles that you like, and over time you get a narrower and narrower selection of news. In the end you read just one type of story. This can be dangerous. It's important in that case to design the whole system so that the agent is only used as assistive technology. This is a problem that can be solved in the design of the interface with the agent.

To illustrate this point, we have built a software agent that makes a personalized newspaper for a user in two different ways. In the first, the agent takes all the news articles, picks the ones it thinks you'll be interested in given what it knows about what you've been reading in the past, and gives you a personalized selection. This approach involves a risk that you are never going to do any browsing yourself; you're just going to read what the agent has presented to you, and then you get that tunnel-vision problem. However, you can build the same agent so that it just highlights, in the newspaper, the articles it thinks you will be interested in. It doesn't change the newspaper. You will still see all the articles in the newspaper and keep that element of serendipity, but the agent assists you, because it has highlighted certain articles. Even if an article is in very small print or on a page somewhere deep in the newspaper, you won't miss it, because you can just go through the paper, see where all the highlights are, and make sure you've definitely read the material you have a long-term interest in. It's important for us as designers of agents to keep these issues in mind, and to come up with interfaces like the highlighting interface that avoid the problem in which the extension becomes an amputation.
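The contrast between the two designs can be sketched as follows. This is an illustration of the design choice, not the actual agent; interest_score() is a hypothetical stand-in for a relevance model trained on the user's reading history, and the data structures are invented for the example.

```python
def interest_score(article, user):
    """Hypothetical relevance model: overlap with the user's interests."""
    return len(set(article["text"].split()) & user["interests"]) / 10.0

def filtered_paper(articles, user, threshold=0.2):
    """First design: the agent selects, everything else is hidden.
    Risk: the reader never browses, and the selection narrows over time."""
    return [a for a in articles if interest_score(a, user) >= threshold]

def highlighted_paper(articles, user, threshold=0.2):
    """Second design: the full paper is kept; the agent only marks likely
    matches, preserving serendipitous browsing."""
    return [dict(a, highlighted=interest_score(a, user) >= threshold)
            for a in articles]

articles = [{"title": "Brains on the menu", "text": "offal restaurants boston"},
            {"title": "Sports roundup", "text": "game score playoffs"}]
user = {"interests": {"restaurants", "offal", "boston", "food"}}
print(highlighted_paper(articles, user))  # both articles, one highlighted
```

The only difference between the two functions is whether unselected articles survive; that single design decision is what keeps the agent assistive rather than prescriptive.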

