In multiagent systems, an agent does not usually have complete information about the preferences and decision-making processes of other agents. This can prevent agents from making coordinated choices, purely because they do not know what the others want. This paper describes the integration of a learning module into a communication-intensive negotiating-agent architecture. The learning module gives the agents the ability to learn about other agents' preferences from past interactions. Over time, the agents can incrementally update their models of other agents' preferences and use them to make better-coordinated decisions. Combining communication and learning, as two complementary knowledge-acquisition methods, helps to reduce the amount of communication needed on average, and is justified in situations where communication is computationally costly or simply undesirable (e.g., to preserve individual privacy).
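As an illustration only (not the mechanism described in the paper), a negotiating agent might maintain a simple frequency-based model of another agent's preferences and update it incrementally after each interaction, querying the model instead of communicating once it has seen enough evidence. The class name, update rule, and example values below are assumptions made for this sketch.

    from collections import defaultdict

    class PreferenceModel:
        """Illustrative incremental model of another agent's preferences.

        Counts how often the other agent has accepted proposals with a given
        attribute value, and turns the counts into preference estimates.
        (Hypothetical sketch; not the representation used in the paper.)
        """

        def __init__(self):
            self.counts = defaultdict(int)   # attribute value -> times accepted
            self.total = 0

        def update(self, accepted_value):
            """Record one observed acceptance from a past interaction."""
            self.counts[accepted_value] += 1
            self.total += 1

        def preference(self, value):
            """Estimated probability that the other agent prefers this value."""
            if self.total == 0:
                return 0.0                   # no interactions observed yet
            return self.counts[value] / self.total

        def best_guess(self, candidate_values):
            """Pick the candidate the other agent is most likely to accept,
            avoiding a round of communication when the model is confident."""
            return max(candidate_values, key=self.preference)

    # Example: after a few negotiations the agent proposes the option the
    # other agent is most likely to accept, instead of asking first.
    model = PreferenceModel()
    for observed in ["morning", "morning", "afternoon"]:
        model.update(observed)
    print(model.best_guess(["morning", "afternoon", "evening"]))  # -> "morning"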
History
Event: National Conference on Artificial Intelligence (13th : 1996 : Portland, Or.)
Pagination: 114-119
Publisher: AAAI
Location: Portland, Or.
Place of publication: Menlo Park, Calif.
Start date: 1996-08-04
End date: 1996-08-08
ISBN-13: 9780262510912
ISBN-10: 026251091X
Language: eng
Publication classification: E1.1 Full written paper - refereed
Copyright notice: 1996, AAAI
Title of proceedings: AAAI-96 : Proceedings of the 13th National Conference on Artificial Intelligence