PENED 2003

Generalized Framework and Applications for Behavior and Intelligence Evaluation of Software Agents in Multi-agent Systems

Acronym PENED
Title Generalized Framework and Applications for Behavior and Intelligence Evaluation of Software Agents in Multi-agent Systems
Funding Source General Secretariat for Research and Technology
Dates 12/05-12/08
Amount €184,000
Role PI

Description

PENED 2003 is a funding program for young researchers, supported by the General Secretariat for Research and Technology, Greece, under the operational program “Competitiveness”. PENED 2003 focuses on training young researchers through the elaboration of PhD theses over a three-year period.

ISSEL’s proposal for PENED 2003 was approved under the title “Generalized Framework and Applications for Behavior and Intelligence Evaluation of Software Agents in Multi-agent Systems”. The duration of the project is 36 months, starting in December 2005. The total allocated budget for this project is €184,000. The participating organizations and companies are the following:

  • Aristotle University of Thessaloniki
  • ALTEC
  • Control System Ltd.
  • Public Power Corporation S.A.
  • University College Dublin
  • University of Geneva

Scope

The project consists of four individual but closely related PhD theses:

  • Theories and Tools for Evaluating Intelligent Agents
  • Design, Development and Evaluation of Multi-agent Systems in Environmental Applications
  • Design, Development and Evaluation of Multi-agent Systems in Supply Chain Management Applications
  • Design, Development and Evaluation of Multi-agent Systems in Power Supply Management Applications

Motivation

Every newly proposed scientific field lacks generalized and well-established methodologies until it reaches a certain level of maturity. Scientists usually explore a new field by first identifying the shortcomings of existing methods and then developing ad hoc solutions to domain-specific problems. Once the field reaches a certain degree of maturity, these solutions are formally generalized and established for later use.

This is currently the case for agent technology. In the early 1990s, agents were re-introduced as a modelling concept and implementation tool for autonomous intelligent systems, after their initial, overestimated use in the early decades of artificial intelligence (1960-1990). Today, many researchers and companies worldwide adopt autonomous agents to address complex distributed problems, exploiting the wide spectrum of properties that agents exhibit: autonomy, reactiveness, proactiveness, and communication, among others. However, there are no widely agreed-upon methodologies covering the complete lifecycle of agent systems.

The current project aims to provide a generalized methodology for evaluating agents and multi-agent systems (MAS), and subsequently to apply this methodology in three real-world test cases: Environmental Informatics, Supply Chain Management, and Power Supply Management. Evaluation is a vital part of any research methodology because it verifies and validates the results of a proposed scientific method. While many methods and tools for agent development exist (such as theories, tools, and languages for designing and developing agents), little effort has been made in the direction of evaluating agent systems. A generalized and reusable evaluation methodology is necessary in order to:

  • Qualitatively and quantitatively compare agents with other modelling and programming methods
  • Understand in depth the properties of agent systems and use each one more efficiently in future implementations
  • Assist researchers in deciding under which circumstances agents are appropriate

The motivation for building such a methodology derives from the complexity and uncertain nature of most current MAS. If these MAS were simple and closed, and all parameters were known a priori, evaluation would be a typical distributed software engineering task. However, MAS are by definition ideal for implementing or simulating complex, open environments with a large degree of uncertainty. Therefore, a non-trivial evaluation methodology is needed, one that takes into account unpredictable intelligent behavior by the participating entities.

Methodology

The proposed approach for constructing a robust evaluation framework consists of the following steps:

  1. Review of the state of the art in agent development and intelligent systems evaluation
  2. Devise and propose metrics and criteria for intelligence, behavior and performance evaluation
  3. Design methodologies: Adapt the work plan to each of the three test cases
  4. Develop methodologies: Develop the corresponding MAS
  5. Develop a set of benchmarks and other appropriate testing tools
  6. Test each developed MAS thoroughly
  7. Analyze and assess the results
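
As a purely illustrative sketch of the kind of artifact steps 2 and 5 could produce, the following Python snippet benchmarks agents on a toy task and aggregates two hypothetical performance metrics (success rate and mean response time). The agent interface, the task, the success criterion, and the metric names are all assumptions made for illustration; they are not part of the project's actual framework.

    import random
    import statistics
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class TrialResult:
        success: bool
        response_time: float  # seconds (simulated)

    def run_benchmark(agent: Callable[[float], float],
                      trials: int = 100,
                      seed: int = 42) -> Dict[str, float]:
        """Run an agent on random trials and aggregate simple performance metrics."""
        rng = random.Random(seed)
        results: List[TrialResult] = []
        for _ in range(trials):
            observation = rng.uniform(-1.0, 1.0)
            action = agent(observation)
            # Toy success criterion: the agent should (approximately)
            # negate its observation.
            success = abs(action + observation) < 0.1
            # Simulated response time; a real benchmark would measure wall time.
            response_time = max(rng.gauss(0.05, 0.01), 0.0)
            results.append(TrialResult(success, response_time))
        return {
            "success_rate": sum(r.success for r in results) / trials,
            "mean_response_time": statistics.mean(r.response_time for r in results),
        }

    if __name__ == "__main__":
        reflex_agent = lambda obs: -obs                       # always correct
        random_agent = lambda obs: random.uniform(-1.0, 1.0)  # naive baseline
        print("reflex:", run_benchmark(reflex_agent))
        print("random:", run_benchmark(random_agent))

In the actual project, the metrics and benchmarks produced by steps 2 and 5 would be tailored to the three application domains rather than a toy task, but the same pattern of running each MAS against a common harness and comparing aggregated scores applies.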