Andreas L. Symeonidis

Associate Professor
Softeng and R4A team lead
Aristotle University of Thessaloniki
Department of Electrical and Computer Engineering
54124 Thessaloniki – GREECE
Tel: +30 2310 99 4344
Fax: +30 2310 99 4344
Email: asymeon (at) eng [dot] auth [dot] gr, asymeon (at) iti [dot] gr
Skype name: asymeon
Google Scholar | LinkedIn | Twitter | Full CV (Fall 2017)

 

Education


2005-2006 Postdoctoral research in Electrical and Computer Engineering, Aristotle University of Thessaloniki (AUTH), Thessaloniki, Greece.
Postdoctoral research topic: “Development and validation of a generic modeling framework for evaluating the performance of intelligent software agents”.
2000-2004 PhD in Electrical and Computer Engineering, AUTH, Thessaloniki, Greece.
Dissertation Title: “Data Mining Techniques for the Dynamic Infusion of Knowledge to Multi-Agent Systems”.
1994-1999 Diploma in Electrical and Computer Engineering, AUTH, Thessaloniki, Greece.
Thesis Title: “A position/force control for a soft tip robot finger under kinematic uncertainties”.

Research Interests


  • Software Quality
  • Automated Software Engineering
  • Requirements Engineering
  • Data Mining for automated knowledge discovery
  • Applied Data Mining
  • Software/Middleware engineering for Robotics
  • Service-oriented (and Agent-oriented) Software Engineering
  • Intelligent Systems

Current Activities (apart from teaching)


Previous highlights


Teaching Experience


– Faculty member with the Electrical & Computer Engineering Dept., Aristotle University of Thessaloniki, Greece (2008 – present)

  • Undergraduate Courses
    1. Software Engineering
    2. Pattern Recognition
    3. Algorithm Analysis and Design
    4. Advanced Programming Techniques (2008-2015)
    5. Operating Systems (2008-2010)
  • Postgraduate Courses
    1. Software Engineering Techniques, MSc in Advanced Computing and Communications
    2. Databases and Data Mining, MSc in Advanced Computing and Communications
    3. Data Mining, MSc in Medical Research Methodology

– Visiting Lecturer with the Department of Informatics, University Carlos III of Madrid, Spain (Fall 2007 – Spring 2008)

  • Undergraduate Courses
    1. Software Engineering I
    2. Software Engineering II
– Adjunct Lecturer with the Electrical & Computer Engineering Dept., Aristotle University of Thessaloniki, Greece (Fall 2005 – Spring 2007)
  • Undergraduate Courses
    1. Structured Programming
    2. Advanced Programming Techniques
  • Postgraduate Courses
    1. Software Engineering Techniques, MSc in Advanced Computing and Communications
    2. Databases and Data Mining, MSc in Advanced Computing and Communications
    3. Databases and Knowledge Mining, MSc in Network and E-Business Centered Computing

R&D Experience


European-funded Research Projects

02/2015–today SEAF – Standardization and Communication of Sustainable Energy Asset Evaluation Framework (H2020 – 696023)
Description: Developing a framework for strengthening the evaluation and funding of sustainable energy projects.
Role: Technical Coordination.
Funding: H2020 – EC, Budget: 1.71M€
02/2015–today Mobile Age (H2020 – 693319)
Description: Co-creating personalised mobile access to public services for senior citizens.
Role: Technical partner, PaaS provider.
Funding: H2020 – EC, Budget: 2.92M€
09/2014–03/2016 RAPP – Robotic Applications for Delivering Smart User Empowering Applications (FP7-ICT-610947)
Description: Developing robotics applications for elderly inclusion.
Role: Project Coordination.
Funding: FP7 – EC, Budget: 1.95M€
11/2013–today S-CASE – Scaffolding Scalable Software Services (FP7-ICT-610717)
Description: Developing modular software on the Cloud.
Role: Project Coordination.
Funding: FP7 – EC, Budget: 3.49M€
11/2011–07/2014 CASSANDRA – A multivariate platform for assessing the impact of strategic decisions in electrical power systems (FP7-ICT-288429)
Description: Modeling electricity networks using web services and software agents.
Role: Technical Coordination.
Funding: FP7 – EC, Budget: 3.64M€
07/2007–10/2008 VITALAS – Video and Image Indexing and Retrieval in the Large Scale (IST-2006-045389)
Description: Design and development of an intelligent indexing system for the search and retrieval of semantically-aware multimedia content.
Role: Research Affiliate.
Funding: FP6 – EC, Budget: 8.17M€
02/2006–06/2007 ASSIST – ASsociation Studies assisted by Inference and Semantic Technologies (IST-2004-027510)
Description: Applying data mining techniques and statistical analysis on semantically integrated medical databases, for the efficient treatment of cervical cancer.
Role: Research Affiliate.
Funding: FP6 – EC, Budget: 4.17M€
05/2004–07/2004 INTELCITIES – Intelligent Cities (IST-2002-507860)
Description: Knowledge extraction and experience modeling in e-government processes, aiming to motivate citizen participation.
Role: Software Developer.
Funding: FP6 – EC, Budget: 11.7M€
11/2001–04/2004 Agent Academy – A data mining framework for training intelligent agents (IST-2000-31050)
Description: Design and development of an integrated software framework for the implementation of data mining enhanced multi-agent applications.
Role: Research Associate.
Funding: FP5 – EC, Budget: 3.1M€
07/2000–08/2001 ASPIS – An authentication and Protection Innovative Software System for DVD Rom and Internet (IST-1999-12554)
Description: Developing infrastructures for the security of copyrighted material.
Role: Software Developer.
Funding: FP5 – EC, Budget: 2.34M€

National-funded Research Projects

06/2011–03/2012 WISE – A portal for the application of fault diagnosis models
Description: Design and development of a tool for identifying faults and diverging behaviors of sensors and building management systems.
Role: Project Coordinator.
Funding: Private Sector, Budget: 8K€
07/2011–04/2012 Paragadi II – Data mining for Product Clustering
Description: Analysis and data mining application on retailer data, in order to identify product clusters and product absorption profiles.
Role: Project Coordinator.
Funding: Private Sector, Budget: 15K€
03/2011–08/2011 AMNOS – An integrated software framework for managing Veterinary Units
Description: Development of an interactive web-based system for the recording of livestock genetic information.
Role: Technical Coordinator.
Funding: Ministry of Agriculture, Greece, Budget: 160K€
01/2011–05/2011 Paragadi I – Data mining for Customer Clustering and Classification
Description: Analysis and data mining application on retailer data, in order to identify Customer clusters and create Customer classification models.
Role: Project Coordinator.
Funding: Private Sector, Budget: 10K€
11/2008–12/2008 Applied Telematics on Health – Florina Prefecture
Description: Applying good practices in telematics in an inter-country hospital cooperation.
Role: Research Affiliate.
Funding: INTERREG III, GSRT, Greece, Budget: 80K€
11/2004–12/2004 EPEAEK II – Operational Program for Education and Initial Vocational Training
Description: Development of electronic course material on a wide range of topics.
Role: Software Developer.
Funding: Ministry of Education, Greece, Budget: 1.2M€
06/2004–08/2004 Heraclitus – Advanced Data Mining and Knowledge Extraction Techniques
Description: Developing data mining algorithms for bioinformatics applications.
Role: Software Developer.
Funding: Ministry of Education, Greece, Budget: 800K€

Main Distinctions


 

2017 – Excellence award for the best AUTH student team (Team PANDORA)
2015 – 2nd place winner at the RoboCup Rescue Autonomous vehicle category (Robot PANDORA)
2013 – 2nd place winner at the RoboCup Rescue Autonomous vehicle category (Robot PANDORA)
2012 – 1st / 3rd place winner at the TAC – Ad Auctions/CAT competitions, respectively.
2011 – 3rd place winner at the Trading Agent Market Design Competition (TAC – CAT)
2010 – 1st / 3rd place winner at the TAC – CAT/Ad Auctions competitions, respectively.
2010 – Member of the Young Advisor Group for EU Commissioner Kroes
2005 – 3rd place winner at the Trading Agent Supply Chain Management Competition (TAC – SCM)

Foreign Languages


– English: Excellent (Cambridge Proficiency, Michigan Proficiency)
– German: Satisfactory (Goethe Institut Grundstufe)
– Spanish: Elementary

Publications


2018

Journal Articles

Christoforos Zolotas, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"RESTsec: a low-code platform for generating secure by design enterprise services"
Enterprise Information Systems, pp. 1-27, 2018 Mar

In the modern business world it is increasingly often that Enterprises opt to bring their business model online, in their effort to reach out to more end users and increase their customer base. While transitioning to the new model, enterprises consider securing their data of pivotal importance. In fact, many efforts have been introduced to automate this ‘webification’ process; however, they all fall short in some aspect: a) they either generate only the security infrastructure, assigning implementation to the developers, b) they embed mainstream, less powerful authorisation schemes, or c) they disregard the merits of the dominating REST architecture and adopt less suitable approaches. In this paper we present RESTsec, a Low-Code platform that supports rapid security requirements modelling for Enterprise Services, abiding by the state of the art ABAC authorisation scheme. RESTsec enables the developer to seamlessly embed the desired access control policy and generate the service, the security infrastructure and the code. Evaluation shows that our approach is valid and can help developers deliver secure by design enterprise services in a rapid and automated manner.

@article{2018Zolotas,
author={Christoforos Zolotas and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={RESTsec: a low-code platform for generating secure by design enterprise services},
journal={Enterprise Information Systems},
pages={1-27},
year={2018},
month={03},
date={2018-03-09},
doi={10.1080/17517575.2018.1462403},
url={https://www.tandfonline.com/doi/full/10.1080/17517575.2018.1462403},
abstract={In the modern business world it is increasingly often that Enterprises opt to bring their business model online, in their effort to reach out to more end users and increase their customer base. While transitioning to the new model, enterprises consider securing their data of pivotal importance. In fact, many efforts have been introduced to automate this ‘webification’ process; however, they all fall short in some aspect: a) they either generate only the security infrastructure, assigning implementation to the developers, b) they embed mainstream, less powerful authorisation schemes, or c) they disregard the merits of the dominating REST architecture and adopt less suitable approaches. In this paper we present RESTsec, a Low-Code platform that supports rapid security requirements modelling for Enterprise Services, abiding by the state of the art ABAC authorisation scheme. RESTsec enables the developer to seamlessly embed the desired access control policy and generate the service, the security infrastructure and the code. Evaluation shows that our approach is valid and can help developers deliver secure by design enterprise services in a rapid and automated manner.}
}

George Mamalakis, Christos Diou, Andreas L. Symeonidis and Leonidas Georgiadis
"Of daemons and men: reducing false positive rate in intrusion detection systems with file system footprint analysis"
Neural Computing and Applications, 2018 May

In this work, we propose a methodology for reducing false alarms in file system intrusion detection systems, by taking into account the daemon’s file system footprint. More specifically, we experimentally show that sequences of outliers can serve as a distinguishing characteristic between true and false positives, and we show how analysing sequences of outliers can lead to lower false positive rates, while maintaining high detection rates. Based on this analysis, we developed an anomaly detection filter that learns outlier sequences using k-nearest neighbours with normalised longest common subsequence. Outlier sequences are then used as a filter to reduce false positives on the FI2DS file system intrusion detection system. This filter is evaluated on both overlapping and non-overlapping sequences of outliers. In both cases, experiments performed on three real-world web servers and a honeynet show that our approach achieves significant false positive reduction rates (up to 50 times), without any degradation of the corresponding true positive detection rates.

@article{Mamalakis2018,
author={George Mamalakis and Christos Diou and Andreas L. Symeonidis and Leonidas Georgiadis},
title={Of daemons and men: reducing false positive rate in intrusion detection systems with file system footprint analysis},
journal={Neural Computing and Applications},
year={2018},
month={05},
date={2018-05-12},
doi={10.1007/s00521-018-3550-x},
issn={1433-3058},
url={https://rdcu.be/2vUc},
keywords={Intrusion detection systems;Anomaly detection;Sequences of outliers},
abstract={In this work, we propose a methodology for reducing false alarms in file system intrusion detection systems, by taking into account the daemon’s file system footprint. More specifically, we experimentally show that sequences of outliers can serve as a distinguishing characteristic between true and false positives, and we show how analysing sequences of outliers can lead to lower false positive rates, while maintaining high detection rates. Based on this analysis, we developed an anomaly detection filter that learns outlier sequences using k-nearest neighbours with normalised longest common subsequence. Outlier sequences are then used as a filter to reduce false positives on the FI2DS file system intrusion detection system. This filter is evaluated on both overlapping and non-overlapping sequences of outliers. In both cases, experiments performed on three real-world web servers and a honeynet show that our approach achieves significant false positive reduction rates (up to 50 times), without any degradation of the corresponding true positive detection rates.}
}

2018

Conference Papers

Panagiotis G. Mousouliotis, Konstantinos L. Panayiotou, Emmanouil G. Tsardoulias, Loukas P. Petrou and Andreas L. Symeonidis
"Expanding a robot's life: Low power object recognition via FPGA-based DCNN deployment"
MOCAST, 2018 Mar

FPGAs are commonly used to accelerate domain-specific algorithmic implementations, as they can achieve impressive performance boosts, are reprogrammable and exhibit minimal power consumption. In this work, the SqueezeNet DCNN is accelerated using an SoC FPGA in order for the offered object recognition resource to be employed in a robotic application. Experiments are conducted to investigate the performance and power consumption of the implementation in comparison to deployment on other widely-used computational systems.

@conference{Mousouliotis2018,
author={Panagiotis G. Mousouliotis and Konstantinos L. Panayiotou and Emmanouil G. Tsardoulias and Loukas P. Petrou and Andreas L. Symeonidis},
title={Expanding a robot's life: Low power object recognition via FPGA-based DCNN deployment},
booktitle={MOCAST},
year={2018},
note={Accepted in MOCAST 2018},
month={03},
date={2018-03-01},
url={https://arxiv.org/abs/1804.00512},
abstract={FPGAs are commonly used to accelerate domain-specific algorithmic implementations, as they can achieve impressive performance boosts, are reprogrammable and exhibit minimal power consumption. In this work, the SqueezeNet DCNN is accelerated using an SoC FPGA in order for the offered object recognition resource to be employed in a robotic application. Experiments are conducted to investigate the performance and power consumption of the implementation in comparison to deployment on other widely-used computational systems.}
}

2018

Inproceedings Papers

Kyriakos C. Chatzidimitriou, Michail Papamichail, Themistoklis Diamantopoulos, Michail Tsapanos and Andreas L. Symeonidis
"npm-miner: An Infrastructure for Measuring the Quality of the npm Registry"
MSR ’18: 15th International Conference on Mining Software Repositories, pp. 4, ACM, Gothenburg, Sweden, 2018 May

As the popularity of the JavaScript language is constantly increasing, one of the most important challenges today is to assess the quality of JavaScript packages. Developers often employ tools for code linting and for the extraction of static analysis metrics in order to assess and/or improve their code. In this context, we have developed npm-miner, a platform that crawls the npm registry and analyzes the packages using static analysis tools in order to extract detailed quality metrics as well as high-level quality attributes, such as maintainability and security. Our infrastructure includes an index that is accessible through a web interface, while we have also constructed a dataset with the results of a detailed analysis for 2000 popular npm packages.

@inproceedings{Chatzidimitriou2018MSR,
author={Kyriakos C. Chatzidimitriou and Michail Papamichail and Themistoklis Diamantopoulos and Michail Tsapanos and Andreas L. Symeonidis},
title={npm-miner: An Infrastructure for Measuring the Quality of the npm Registry},
booktitle={MSR ’18: 15th International Conference on Mining Software Repositories},
pages={4},
publisher={ACM},
address={Gothenburg, Sweden},
year={2018},
month={05},
date={2018-05-28},
url={http://issel.ee.auth.gr/wp-content/uploads/2018/03/msr2018.pdf},
doi={10.1145/3196398.3196465},
abstract={As the popularity of the JavaScript language is constantly increasing, one of the most important challenges today is to assess the quality of JavaScript packages. Developers often employ tools for code linting and for the extraction of static analysis metrics in order to assess and/or improve their code. In this context, we have developed npm-miner, a platform that crawls the npm registry and analyzes the packages using static analysis tools in order to extract detailed quality metrics as well as high-level quality attributes, such as maintainability and security. Our infrastructure includes an index that is accessible through a web interface, while we have also constructed a dataset with the results of a detailed analysis for 2000 popular npm packages.}
}

Anastasios Dimanidis, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"A Natural Language Driven Approach for Automated Web API Development: Gherkin2OAS"
WWW ’18 Companion: The 2018 Web Conference Companion, pp. 6, Lyon, France, 2018 Apr

Speeding up the development process of Web Services, while adhering to high quality software standards is a typical requirement in the software industry. This is why industry specialists usually suggest "driven by" development approaches to tackle this problem. In this paper, we propose such a methodology that employs Specification Driven Development and Behavior Driven Development in order to facilitate the phases of Web Service requirements elicitation and specification. Furthermore, we introduce gherkin2OAS, a software tool that aspires to bridge the aforementioned development approaches. Through the suggested methodology and tool, one may design and build RESTful services fast, while ensuring proper functionality.

@inproceedings{Dimanidis2018,
author={Anastasios Dimanidis and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={A Natural Language Driven Approach for Automated Web API Development: Gherkin2OAS},
booktitle={WWW ’18 Companion: The 2018 Web Conference Companion},
pages={6},
address={Lyon, France},
year={2018},
month={04},
date={2018-04-23},
url={https://issel.ee.auth.gr/wp-content/uploads/2018/03/gherkin2oas.pdf},
doi={10.1145/3184558.3191654},
abstract={Speeding up the development process of Web Services, while adhering to high quality software standards is a typical requirement in the software industry. This is why industry specialists usually suggest "driven by" development approaches to tackle this problem. In this paper, we propose such a methodology that employs Specification Driven Development and Behavior Driven Development in order to facilitate the phases of Web Service requirements elicitation and specification. Furthermore, we introduce gherkin2OAS, a software tool that aspires to bridge the aforementioned development approaches. Through the suggested methodology and tool, one may design and build RESTful services fast, while ensuring proper functionality.}
}

Michail Papamichail, Themistoklis Diamantopoulos, Ilias Chrysovergis, Philippos Samlidis and Andreas Symeonidis
"User-Perceived Reusability Estimation based on Analysis of Software Repositories"
Proceedings of the 2018 Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE), 2018 Mar

The popularity of open-source software repositories has led to a new reuse paradigm, where online resources can be thoroughly analyzed to identify reusable software components. Obviously, assessing the quality and specifically the reusability potential of source code residing in open software repositories poses a major challenge for the research community. Although several systems have been designed towards this direction, most of them do not focus on reusability. In this paper, we define and formulate a reusability score by employing information from GitHub stars and forks, which indicate the extent to which software components are adopted/accepted by developers. Our methodology involves applying and assessing different state-of-the-practice machine learning algorithms, in order to construct models for reusability estimation at both class and package levels. Preliminary evaluation of our methodology indicates that our approach can successfully assess reusability, as perceived by developers.

@inproceedings{Papamichail2018MaLTeSQuE,
author={Michail Papamichail and Themistoklis Diamantopoulos and Ilias Chrysovergis and Philippos Samlidis and Andreas Symeonidis},
title={User-Perceived Reusability Estimation based on Analysis of Software Repositories},
booktitle={Proceedings of the 2018 Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE)},
year={2018},
month={03},
date={2018-03-20},
abstract={The popularity of open-source software repositories has led to a new reuse paradigm, where online resources can be thoroughly analyzed to identify reusable software components. Obviously, assessing the quality and specifically the reusability potential of source code residing in open software repositories poses a major challenge for the research community. Although several systems have been designed towards this direction, most of them do not focus on reusability. In this paper, we define and formulate a reusability score by employing information from GitHub stars and forks, which indicate the extent to which software components are adopted/accepted by developers. Our methodology involves applying and assessing different state-of-the-practice machine learning algorithms, in order to construct models for reusability estimation at both class and package levels. Preliminary evaluation of our methodology indicates that our approach can successfully assess reusability, as perceived by developers.}
}

2018

Inbooks

Valasia Dimaridou, Alexandros-Charalampos Kyprianidis, Michail Papamichail, Themistoklis Diamantopoulos and Andreas Symeonidis
"Assessing the User-Perceived Quality of Source Code Components using Static Analysis Metrics"
Chapter 1, pp. 25, Springer, 2018 Jan

Nowadays, developers tend to adopt a component-based software engineering approach, reusing own implementations and/or resorting to third-party source code. This practice is in principle cost-effective, however it may also lead to low quality software products, if the components to be reused exhibit low quality. Thus, several approaches have been developed to measure the quality of software components. Most of them, however, rely on the aid of experts for defining target quality scores and deriving metric thresholds, leading to results that are context-dependent and subjective. In this work, we build a mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories’ stars and forks, which indicate their adoption/acceptance by developers. Upon removing outliers with a one-class classifier, we employ Principal Feature Analysis and examine the semantics among metrics to provide an analysis on five axes for source code components (classes or packages): complexity, coupling, size, degree of inheritance, and quality of documentation. Neural networks are thus applied to estimate the final quality score given metrics from these axes. Preliminary evaluation indicates that our approach effectively estimates software quality at both class and package levels.

@inbook{Dimaridou2018,
author={Valasia Dimaridou and Alexandros-Charalampos Kyprianidis and Michail Papamichail and Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Assessing the User-Perceived Quality of Source Code Components using Static Analysis Metrics},
chapter={1},
pages={25},
publisher={Springer},
year={2018},
month={01},
date={2018-01-01},
abstract={Nowadays, developers tend to adopt a component-based software engineering approach, reusing own implementations and/or resorting to third-party source code. This practice is in principle cost-effective, however it may also lead to low quality software products, if the components to be reused exhibit low quality. Thus, several approaches have been developed to measure the quality of software components. Most of them, however, rely on the aid of experts for defining target quality scores and deriving metric thresholds, leading to results that are context-dependent and subjective. In this work, we build a mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories’ stars and forks, which indicate their adoption/acceptance by developers. Upon removing outliers with a one-class classifier, we employ Principal Feature Analysis and examine the semantics among metrics to provide an analysis on five axes for source code components (classes or packages): complexity, coupling, size, degree of inheritance, and quality of documentation. Neural networks are thus applied to estimate the final quality score given metrics from these axes. Preliminary evaluation indicates that our approach effectively estimates software quality at both class and package levels.}
}

2017

Journal Articles

Themistoklis Diamantopoulos, Michael Roth, Andreas Symeonidis and Ewan Klein
"Software requirements as an application domain for natural language processing"
Language Resources and Evaluation, pp. 1-30, 2017 Feb

Mapping functional requirements first to specifications and then to code is one of the most challenging tasks in software development. Since requirements are commonly written in natural language, they can be prone to ambiguity, incompleteness and inconsistency. Structured semantic representations allow requirements to be translated to formal models, which can be used to detect problems at an early stage of the development process through validation. Storing and querying such models can also facilitate software reuse. Several approaches constrain the input format of requirements to produce specifications, however they usually require considerable human effort in order to adopt domain-specific heuristics and/or controlled languages. We propose a mechanism that automates the mapping of requirements to formal representations using semantic role labeling. We describe the first publicly available dataset for this task, employ a hierarchical framework that allows requirements concepts to be annotated, and discuss how semantic role labeling can be adapted for parsing software requirements.

@article{Diamantopoulos2017,
author={Themistoklis Diamantopoulos and Michael Roth and Andreas Symeonidis and Ewan Klein},
title={Software requirements as an application domain for natural language processing},
journal={Language Resources and Evaluation},
pages={1-30},
year={2017},
month={02},
date={2017-02-27},
url={http://rdcu.be/tpxd},
doi={10.1007/s10579-017-9381-z},
abstract={Mapping functional requirements first to specifications and then to code is one of the most challenging tasks in software development. Since requirements are commonly written in natural language, they can be prone to ambiguity, incompleteness and inconsistency. Structured semantic representations allow requirements to be translated to formal models, which can be used to detect problems at an early stage of the development process through validation. Storing and querying such models can also facilitate software reuse. Several approaches constrain the input format of requirements to produce specifications, however they usually require considerable human effort in order to adopt domain-specific heuristics and/or controlled languages. We propose a mechanism that automates the mapping of requirements to formal representations using semantic role labeling. We describe the first publicly available dataset for this task, employ a hierarchical framework that allows requirements concepts to be annotated, and discuss how semantic role labeling can be adapted for parsing software requirements.}
}

Themistoklis Diamantopoulos and Andreas Symeonidis
"Enhancing requirements reusability through semantic modeling and data mining techniques"
Enterprise Information Systems, pp. 1-22, 2017 Dec

Enhancing the requirements elicitation process has always been of added value to software engineers, since it expedites the software lifecycle and reduces errors in the conceptualization phase of software products. The challenge posed to the research community is to construct formal models that are capable of storing requirements from multimodal formats (text and UML diagrams) and promote easy requirements reuse, while at the same time being traceable to allow full control of the system design, as well as comprehensible to software engineers and end users. In this work, we present an approach that enhances requirements reuse while capturing the static (functional requirements, use case diagrams) and dynamic (activity diagrams) view of software projects. Our ontology-based approach allows for reasoning over the stored requirements, while the mining methodologies employed detect incomplete or missing software requirements, this way reducing the effort required for requirements elicitation at an early stage of the project lifecycle.

@article{Diamantopoulos2017EIS,
author={Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Enhancing requirements reusability through semantic modeling and data mining techniques},
journal={Enterprise Information Systems},
pages={1-22},
year={2017},
month={12},
date={2017-12-17},
url={https://doi.org/10.1080/17517575.2017.1416177},
doi={10.1080/17517575.2017.1416177},
abstract={Enhancing the requirements elicitation process has always been of added value to software engineers, since it expedites the software lifecycle and reduces errors in the conceptualization phase of software products. The challenge posed to the research community is to construct formal models that are capable of storing requirements from multimodal formats (text and UML diagrams) and promote easy requirements reuse, while at the same time being traceable to allow full control of the system design, as well as comprehensible to software engineers and end users. In this work, we present an approach that enhances requirements reuse while capturing the static (functional requirements, use case diagrams) and dynamic (activity diagrams) view of software projects. Our ontology-based approach allows for reasoning over the stored requirements, while the mining methodologies employed detect incomplete or missing software requirements, this way reducing the effort required for requirements elicitation at an early stage of the project lifecycle.}
}

Miltiadis G. Siavvas, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"QATCH - An adaptive framework for software product quality assessment"
Expert Systems with Applications, 2017 May

The subjectivity that underlies the notion of quality does not allow the design and development of a universally accepted mechanism for software quality assessment. This is why contemporary research is now focused on seeking mechanisms able to produce software quality models that can be easily adjusted to custom user needs. In this context, we introduce QATCH, an integrated framework that applies static analysis to benchmark repositories in order to generate software quality models tailored to stakeholder specifications. Fuzzy multi-criteria decision-making is employed in order to model the uncertainty imposed by experts’ judgments. These judgments can be expressed into linguistic values, which makes the process more intuitive. Furthermore, a robust software quality model, the base model, is generated by the system, which is used in the experiments for QATCH system verification. The paper provides an extensive analysis of QATCH and thoroughly discusses its validity and added value in the field of software quality through a number of individual experiments.

@article{Siavvas2017,
author={Miltiadis G. Siavvas and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={QATCH - An adaptive framework for software product quality assessment},
journal={Expert Systems with Applications},
year={2017},
month={05},
date={2017-05-25},
url={http://www.sciencedirect.com/science/article/pii/S0957417417303883},
doi={10.1016/j.eswa.2017.05.060},
keywords={Software quality assessment;Software engineering;Multi-criteria decision making;Fuzzy analytic hierarchy process;Software static analysis;Quality metrics},
abstract={The subjectivity that underlies the notion of quality does not allow the design and development of a universally accepted mechanism for software quality assessment. This is why contemporary research is now focused on seeking mechanisms able to produce software quality models that can be easily adjusted to custom user needs. In this context, we introduce QATCH, an integrated framework that applies static analysis to benchmark repositories in order to generate software quality models tailored to stakeholder specifications. Fuzzy multi-criteria decision-making is employed in order to model the uncertainty imposed by experts’ judgments. These judgments can be expressed into linguistic values, which makes the process more intuitive. Furthermore, a robust software quality model, the base model, is generated by the system, which is used in the experiments for QATCH system verification. The paper provides an extensive analysis of QATCH and thoroughly discusses its validity and added value in the field of software quality through a number of individual experiments.}
}

Athanassios M. Kintsakis, Fotis E. Psomopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments"
SoftwareX, 6, pp. 217-224, 2017 Sep

Hermes introduces a new “describe once, run anywhere” paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.

@article{SOFTX89,
author={Athanassios M. Kintsakis and Fotis E. Psomopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments},
journal={SoftwareX},
volume={6},
pages={217-224},
year={2017},
month={09},
date={2017-09-19},
url={http://www.sciencedirect.com/science/article/pii/S2352711017300304},
doi={10.1016/j.softx.2017.07.007},
keywords={Bioinformatics;hybrid cloud;scientific workflows;distributed computing},
abstract={Hermes introduces a new “describe once, run anywhere” paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.}
}

Cezary Zielinski, Maciej Stefanczyk, Tomasz Kornuta, Maksym Figat, Wojciech Dudek, Wojciech Szynkiewicz, Wlodzimierz Kasprzak, Jan Figat, Marcin Szlenk, Tomasz Winiarski, Konrad Banachowicz, Teresa Zielinska, Emmanouil G. Tsardoulias, Andreas L. Symeonidis, Fotis E. Psomopoulos, Athanassios M. Kintsakis, Pericles A. Mitkas, Aristeidis Thallas, Sofia E. Reppou, George T. Karagiannis, Konstantinos Panayiotou, Vincent Prunet, Manuel Serrano, Jean-Pierre Merlet, Stratos Arampatzis, Alexandros Giokas, Lazaros Penteridis, Ilias Trochidis, David Daney and Miren Iturburu
"Variable structure robot control systems: The RAPP approach"
Robotics and Autonomous Systems, 94, pp. 226-244, 2017 May

This paper presents a method of designing variable structure control systems for robots. As the on-board robot computational resources are limited, but in some cases the demands imposed on the robot by the user are virtually limitless, the solution is to produce a variable structure system. The task dependent part has to be exchanged, however the task governs the activities of the robot. Thus not only exchange of some task-dependent modules is required, but also supervisory responsibilities have to be switched. Such control systems are necessary in the case of robot companions, where the owner of the robot may demand from it to provide many services.

@article{Zielnski2017,
author={Cezary Zielinski and Maciej Stefanczyk and Tomasz Kornuta and Maksym Figat and Wojciech Dudek and Wojciech Szynkiewicz and Wlodzimierz Kasprzak and Jan Figat and Marcin Szlenk and Tomasz Winiarski and Konrad Banachowicz and Teresa Zielinska and Emmanouil G. Tsardoulias and Andreas L. Symeonidis and Fotis E. Psomopoulos and Athanassios M. Kintsakis and Pericles A. Mitkas and Aristeidis Thallas and Sofia E. Reppou and George T. Karagiannis and Konstantinos Panayiotou and Vincent Prunet and Manuel Serrano and Jean-Pierre Merlet and Stratos Arampatzis and Alexandros Giokas and Lazaros Penteridis and Ilias Trochidis and David Daney and Miren Iturburu},
title={Variable structure robot control systems: The RAPP approach},
journal={Robotics and Autonomous Systems},
volume={94},
pages={226-244},
year={2017},
month={05},
date={2017-05-05},
url={http://www.sciencedirect.com/science/article/pii/S0921889016306248},
doi={10.1016/j.robot.2017.05.002},
keywords={robot controllers;variable structure controllers;cloud robotics;RAPP},
abstract={This paper presents a method of designing variable structure control systems for robots. As the on-board robot computational resources are limited, but in some cases the demands imposed on the robot by the user are virtually limitless, the solution is to produce a variable structure system. The task dependent part has to be exchanged, however the task governs the activities of the robot. Thus not only exchange of some task-dependent modules is required, but also supervisory responsibilities have to be switched. Such control systems are necessary in the case of robot companions, where the owner of the robot may demand from it to provide many services.}
}

2017

Inproceedings Papers

Panagiotis Doxopoulos, Konstantinos Panayiotou, Emmanouil Tsardoulias and Andreas L. Symeonidis
"Creating an extrovert robotic assistant via IoT networking devices"
International Conference on Cloud and Robotics, Saint Quentin, France, 2017 Nov

The communication and collaboration of Cyber-Physical Systems, including machines and robots, among themselves and with humans, is expected to attract researchers' interest for the years to come. A key element of the new revolution is the Internet of Things (IoT). IoT infrastructures enable communication between different connected devices using internet protocols. The integration of robots in an IoT platform can improve robot capabilities by providing access to other devices and resources. In this paper we present an IoT-enabled application including a NAO robot which can communicate through an IoT platform with a reflex measurement system and a hardware node that provides robotics-oriented services in the form of RESTful web services. An activity reminder application is also included, illustrating the extension capabilities of the system.

@inproceedings{Doxopoulos2017,
author={Panagiotis Doxopoulos and Konstantinos Panayiotou and Emmanouil Tsardoulias and Andreas L. Symeonidis},
title={Creating an extrovert robotic assistant via IoT networking devices},
booktitle={International Conference on Cloud and Robotics},
address={Saint Quentin, France},
year={2017},
month={11},
date={2017-11-27},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/11/2017-Creating-an-extrovert-robotic-assistant-via-IoT-networking-devices-ICCR17.pdf},
keywords={Web Services;robotics;Internet of Things;IoT platform;Swagger;REST;WAMP},
abstract={The communication and collaboration of Cyber-Physical Systems, including machines and robots, among themselves and with humans, is expected to attract researchers' interest for the years to come. A key element of the new revolution is the Internet of Things (IoT). IoT infrastructures enable communication between different connected devices using internet protocols. The integration of robots in an IoT platform can improve robot capabilities by providing access to other devices and resources. In this paper we present an IoT-enabled application including a NAO robot which can communicate through an IoT platform with a reflex measurement system and a hardware node that provides robotics-oriented services in the form of RESTful web services. An activity reminder application is also included, illustrating the extension capabilities of the system.}
}

Valasia Dimaridou, Alexandros-Charalampos Kyprianidis, Michail Papamichail, Themistoklis Diamantopoulos and Andreas Symeonidis
"Towards Modeling the User-perceived Quality of Source Code using Static Analysis Metrics"
Proceedings of the 12th International Conference on Software Technologies - Volume 1: ICSOFT, pp. 73-84, SciTePress, 2017 Jul

Nowadays, software has to be designed and developed as fast as possible, while maintaining quality standards. In this context, developers tend to adopt a component-based software engineering approach, reusing own implementations and/or resorting to third-party source code. This practice is in principle cost-effective, however it may lead to low quality software products. Thus, measuring the quality of software components is of vital importance. Several approaches that use code metrics rely on the aid of experts for defining target quality scores and deriving metric thresholds, leading to results that are highly context-dependent and subjective. In this work, we build a mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories’ stars and forks, which indicate their adoption/acceptance by the developers’ community. Upon removing outliers with a one-class classifier, we employ Principal Feature Analysis and examine the semantics among metrics to provide an analysis on five axes for a source code component: complexity, coupling, size, degree of inheritance, and quality of documentation. Neural networks are used to estimate the final quality score given metrics from all of these axes. Preliminary evaluation indicates that our approach can effectively estimate software quality.

@inproceedings{icsoft17,
author={Valasia Dimaridou and Alexandros-Charalampos Kyprianidis and Michail Papamichail and Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Towards Modeling the User-perceived Quality of Source Code using Static Analysis Metrics},
booktitle={Proceedings of the 12th International Conference on Software Technologies - Volume 1: ICSOFT},
pages={73-84},
publisher={SciTePress},
year={2017},
month={07},
date={2017-07-26},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/08/ICSOFT.pdf},
doi={10.5220/0006420000730084},
slideshare={https://www.slideshare.net/isselgroup/towards-modeling-the-userperceived-quality-of-source-code-using-static-analysis-metrics},
abstract={Nowadays, software has to be designed and developed as fast as possible, while maintaining quality standards. In this context, developers tend to adopt a component-based software engineering approach, reusing own implementations and/or resorting to third-party source code. This practice is in principle cost-effective, however it may lead to low quality software products. Thus, measuring the quality of software components is of vital importance. Several approaches that use code metrics rely on the aid of experts for defining target quality scores and deriving metric thresholds, leading to results that are highly context-dependent and subjective. In this work, we build a mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories’ stars and forks, which indicate their adoption/acceptance by the developers’ community. Upon removing outliers with a one-class classifier, we employ Principal Feature Analysis and examine the semantics among metrics to provide an analysis on five axes for a source code component: complexity, coupling, size, degree of inheritance, and quality of documentation. Neural networks are used to estimate the final quality score given metrics from all of these axes. Preliminary evaluation indicates that our approach can effectively estimate software quality.}
}

Konstantinos Panayiotou, Sofia E. Reppou, George Karagiannis, Emmanouil Tsardoulias, Aristeidis G. Thallas and Andreas L. Symeonidis
"Robotic applications towards an interactive alerting system for medical purposes"
30th IEEE International Symposium on Computer-Based Medical Systems (IEEE CBMS), Thessaloniki, 2017 Jan

Social consumer robots are slowly but strongly invading our everyday lives as their prices are becoming lower and lower, making them affordable for a wide range of civilians. There has been a lot of research concerning the potential applications of social robots, some of which may implement companionship or proxying technology-related tasks and assisting in everyday household endeavors, among others. In the current work, the RAPP framework is being used towards easily creating robotic applications suitable for utilization as a socially interactive alerting system with the employment of the NAO robot. The developed application stores events in an on-line calendar, directly via the robot or indirectly via a web environment, and asynchronously informs an end-user of imminent events.

@inproceedings{Panayiotou2017,
author={Konstantinos Panayiotou and Sofia E. Reppou and George Karagiannis and Emmanouil Tsardoulias and Aristeidis G. Thallas and Andreas L. Symeonidis},
title={Robotic applications towards an interactive alerting system for medical purposes},
booktitle={30th IEEE International Symposium on Computer-Based Medical Systems (IEEE CBMS)},
address={Thessaloniki},
year={2017},
month={01},
date={2017-01-01},
keywords={cloud robotics;robotic applications;social robotics;assistive robotics;mild cognitive impairment},
abstract={Social consumer robots are slowly but strongly invading our everyday lives as their prices are becoming lower and lower, making them affordable for a wide range of civilians. There has been a lot of research concerning the potential applications of social robots, some of which may implement companionship or proxying technology-related tasks and assisting in everyday household endeavors, among others. In the current work, the RAPP framework is being used towards easily creating robotic applications suitable for utilization as a socially interactive alerting system with the employment of the NAO robot. The developed application stores events in an on-line calendar, directly via the robot or indirectly via a web environment, and asynchronously informs an end-user of imminent events.}
}

Vasilis N. Remmas, Konstantinos Panayiotou, Emmanouil Tsardoulias and Andreas L. Symeonidis
"SRCA - The Scalable Robotic Cloud Agents Architecture"
International Conference on Cloud and Robotics, Saint Quentin, France, 2017 Nov

In an effort to penetrate the market at an affordable cost, consumer robots tend to provide limited processing capabilities, just enough to serve the purpose they have been designed for. However, a robot, in principle, should be able to interact and process the constantly increasing information streams generated from sensors or other devices. This would require the implementation of algorithms and mathematical models for the accurate processing of data volumes and significant computational resources. It is clear that as the data deluge continues to grow exponentially, deploying such algorithms on consumer robots will not be easy. Current work presents a cloud-based architecture that aims to offload computational resources from robots to a remote infrastructure, by utilizing and implementing cloud technologies. This way robots are allowed to enjoy functionality offered by complex algorithms that are executed on the cloud. The proposed system architecture allows developers and engineers not specialised in robotic implementation environments to utilize generic robotic algorithms and services off-the-shelf.

@inproceedings{Remmas2017,
author={Vasilis N. Remmas and Konstantinos Panayiotou and Emmanouil Tsardoulias and Andreas L. Symeonidis},
title={SRCA - The Scalable Robotic Cloud Agents Architecture},
booktitle={International Conference on Cloud and Robotics},
address={Saint Quentin, France},
year={2017},
month={11},
date={2017-11-27},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/11/2017-SRCA-The-Scalable-Robotic-Cloud-Agents-Architecture-ICCR17.pdf},
keywords={cloud robotics;robotics;robotic applications;cloud architectures},
abstract={In an effort to penetrate the market at an affordable cost, consumer robots tend to provide limited processing capabilities, just enough to serve the purpose they have been designed for. However, a robot, in principle, should be able to interact and process the constantly increasing information streams generated from sensors or other devices. This would require the implementation of algorithms and mathematical models for the accurate processing of data volumes and significant computational resources. It is clear that as the data deluge continues to grow exponentially, deploying such algorithms on consumer robots will not be easy. Current work presents a cloud-based architecture that aims to offload computational resources from robots to a remote infrastructure, by utilizing and implementing cloud technologies. This way robots are allowed to enjoy functionality offered by complex algorithms that are executed on the cloud. The proposed system architecture allows developers and engineers not specialised in robotic implementation environments to utilize generic robotic algorithms and services off-the-shelf.}
}

2016

Journal Articles

Antonios Chrysopoulos, Christos Diou, Andreas Symeonidis and Pericles A. Mitkas
"Response modeling of small-scale energy consumers for effective demand response applications"
Electric Power Systems Research, 132, pp. 78-93, 2016 Mar

The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.

@article{2015ChrysopoulosEPSR,
author={Antonios Chrysopoulos and Christos Diou and Andreas Symeonidis and Pericles A. Mitkas},
title={Response modeling of small-scale energy consumers for effective demand response applications},
journal={Electric Power Systems Research},
volume={132},
pages={78-93},
year={2016},
month={03},
date={2016-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Response-modeling-of-small-scale-energy-consumers-for-effective-demand-response-applications.pdf},
abstract={The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.}
}

Pantelis Angelidis, Leslie Berman, Maria de la Luz Casas-Perez, Leo Anthony Celi, George E. Dafoulas, Alon Dagan, Braiam Escobar, Diego M. Lopez, Julieta Noguez, Juan Sebastian Osorio-Valencia, Charles Otine, Kenneth Paik, Luis Rojas-Potosi, Andreas Symeonidis and Eric Winkler
"The hackathon model to spur innovation around global mHealth"
Journal of Medical Engineering & Technology, pp. 1-8, 2016 Sep

The challenge of providing quality healthcare to underserved populations in low- and middle-income countries (LMICs) has attracted increasing attention from information and communication technology (ICT) professionals interested in providing societal impact through their work. Sana is an organisation hosted at the Institute for Medical Engineering and Science at the Massachusetts Institute of Technology that was established out of this interest. Over the past several years, Sana has developed a model of organising mobile health bootcamp and hackathon events in LMICs with the goal of encouraging increased collaboration between ICT and medical professionals and leveraging the growing prevalence of cellphones to provide health solutions in resource limited settings. Most recently, these events have been based in Colombia, Uganda, Greece and Mexico. The lessons learned from these events can provide a framework for others working to create sustainable health solutions in the developing world.

@article{2016AngelidisJMET,
author={Pantelis Angelidis and Leslie Berman and Maria de la Luz Casas-Perez and Leo Anthony Celi and George E. Dafoulas and Alon Dagan and Braiam Escobar and Diego M. Lopez and Julieta Noguez and Juan Sebastian Osorio-Valencia and Charles Otine and Kenneth Paik and Luis Rojas-Potosi and Andreas Symeonidis and Eric Winkler},
title={The hackathon model to spur innovation around global mHealth},
journal={Journal of Medical Engineering & Technology},
pages={1-8},
year={2016},
month={09},
date={2016-09-06},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/The-hackathon-model-to-spur-innovation-around-global-mHealth.pdf},
abstract={The challenge of providing quality healthcare to underserved populations in low- and middle-income countries (LMICs) has attracted increasing attention from information and communication technology (ICT) professionals interested in providing societal impact through their work. Sana is an organisation hosted at the Institute for Medical Engineering and Science at the Massachusetts Institute of Technology that was established out of this interest. Over the past several years, Sana has developed a model of organising mobile health bootcamp and hackathon events in LMICs with the goal of encouraging increased collaboration between ICT and medical professionals and leveraging the growing prevalence of cellphones to provide health solutions in resource limited settings. Most recently, these events have been based in Colombia, Uganda, Greece and Mexico. The lessons learned from these events can provide a framework for others working to create sustainable health solutions in the developing world.}
}

Sofia E. Reppou, Emmanouil G. Tsardoulias, Athanassios M. Kintsakis, Andreas Symeonidis, Pericles A. Mitkas, Fotis E. Psomopoulos, George T. Karagiannis, Cezary Zielinski, Vincent Prunet, Jean-Pierre Merlet, Miren Iturburu and Alexandros Gkiokas
"RAPP: A robotic-oriented ecosystem for delivering smart user empowering applications for older people"
International Journal of Social Robotics, pp. 15, 2016 Jun

It is a general truth that increase of age is associated with a level of mental and physical decline but unfortunately the former are often accompanied by social exclusion leading to marginalization and eventually further acceleration of the aging process. A new approach in alleviating the social exclusion of older people involves the use of assistive robots. As robots rapidly invade everyday life, the need of new software paradigms in order to address the user’s unique needs becomes critical. In this paper we present a novel architectural design, the RAPP [a software platform to deliver smart, user empowering robotic applications (RApps)] framework that attempts to address this issue. The proposed framework has been designed in a cloud-based approach, integrating robotic devices and their respective applications. We aim to facilitate seamless development of RApps compatible with a wide range of supported robots and available to the public through a unified online store.

@article{2016ReppouJSR,
author={Sofia E. Reppou and Emmanouil G. Tsardoulias and Athanassios M. Kintsakis and Andreas Symeonidis and Pericles A. Mitkas and Fotis E. Psomopoulos and George T. Karagiannis and Cezary Zielinski and Vincent Prunet and Jean-Pierre Merlet and Miren Iturburu and Alexandros Gkiokas},
title={RAPP: A robotic-oriented ecosystem for delivering smart user empowering applications for older people},
journal={International Journal of Social Robotics},
pages={15},
year={2016},
month={06},
date={2016-06-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/RAPP-A-Robotic-Oriented-Ecosystem-for-Delivering-Smart-User-Empowering-Applications-for-Older-People.pdf},
abstract={It is a general truth that increase of age is associated with a level of mental and physical decline but unfortunately the former are often accompanied by social exclusion leading to marginalization and eventually further acceleration of the aging process. A new approach in alleviating the social exclusion of older people involves the use of assistive robots. As robots rapidly invade everyday life, the need of new software paradigms in order to address the user’s unique needs becomes critical. In this paper we present a novel architectural design, the RAPP [a software platform to deliver smart, user empowering robotic applications (RApps)] framework that attempts to address this issue. The proposed framework has been designed in a cloud-based approach, integrating robotic devices and their respective applications. We aim to facilitate seamless development of RApps compatible with a wide range of supported robots and available to the public through a unified online store.}
}

Emmanouil Tsardoulias, Aris Thallas, Andreas L. Symeonidis and Pericles A. Mitkas
"Improving multilingual interaction for consumer robots through signal enhancement in multichannel speech"
Audio Engineering Society, 2016

@article{2016TsardouliasAES,
author={Emmanouil Tsardoulias and Aris Thallas and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Improving multilingual interaction for consumer robots through signal enhancement in multichannel speech},
journal={Audio Engineering Society},
year={2016},
month={00},
date={2016-00-00},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Improving-multilingual-interaction-for-consumer-robots-through-signal-enhancement-in-multichannel-speech.pdf},
}

Emmanouil Tsardoulias, Athanassios Kintsakis, Konstantinos Panayiotou, Aristeidis Thallas, Sofia Reppou, George Karagiannis, Miren Iturburu, Stratos Arampatzis, Cezary Zielinski, Vincent Prunet, Fotis Psomopoulos, Andreas Symeonidis and Pericles Mitkas
"Towards an integrated robotics architecture for social inclusion – The RAPP paradigm"
Cognitive Systems Research, pp. 1-8, 2016 Sep

Scientific breakthroughs have led to an increase in life expectancy, to the point where senior citizens comprise an ever increasing percentage of the general population. In this direction, the EU funded RAPP project “Robotic Applications for Delivering Smart User Empowering Applications” introduces socially interactive robots that will not only physically assist, but also serve as a companion to senior citizens. The proposed RAPP framework has been designed aiming towards a cloud-based integrated approach that enables robotic devices to seamlessly deploy robotic applications, relieving the actual robots from computational burdens. The Robotic Applications (RApps) developed according to the RAPP paradigm will empower consumer social robots, allowing them to adapt to versatile situations and materialize complex behaviors and scenarios. The RAPP pilot cases involve the development of RApps for the NAO humanoid robot and the ANG-MED rollator targeting senior citizens that (a) are technology illiterate, (b) have been diagnosed with mild cognitive impairment or (c) are in the process of hip fracture rehabilitation. Initial results establish the robustness of RAPP in addressing the needs of end users and developers, as well as its contribution in significantly increasing the quality of life of senior citizens.

@article{2016TsardouliasCSR,
author={Emmanouil Tsardoulias and Athanassios Kintsakis and Konstantinos Panayiotou and Aristeidis Thallas and Sofia Reppou and George Karagiannis and Miren Iturburu and Stratos Arampatzis and Cezary Zielinski and Vincent Prunet and Fotis Psomopoulos and Andreas Symeonidis and Pericles Mitkas},
title={Towards an integrated robotics architecture for social inclusion – The RAPP paradigm},
journal={Cognitive Systems Research},
pages={1-8},
year={2016},
month={09},
date={2016-09-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/COGSYS_2016_R1.pdf},
abstract={Scientific breakthroughs have led to an increase in life expectancy, to the point where senior citizens comprise an ever increasing percentage of the general population. In this direction, the EU funded RAPP project “Robotic Applications for Delivering Smart User Empowering Applications” introduces socially interactive robots that will not only physically assist, but also serve as a companion to senior citizens. The proposed RAPP framework has been designed aiming towards a cloud-based integrated approach that enables robotic devices to seamlessly deploy robotic applications, relieving the actual robots from computational burdens. The Robotic Applications (RApps) developed according to the RAPP paradigm will empower consumer social robots, allowing them to adapt to versatile situations and materialize complex behaviors and scenarios. The RAPP pilot cases involve the development of RApps for the NAO humanoid robot and the ANG-MED rollator targeting senior citizens that (a) are technology illiterate, (b) have been diagnosed with mild cognitive impairment or (c) are in the process of hip fracture rehabilitation. Initial results establish the robustness of RAPP in addressing the needs of end users and developers, as well as its contribution in significantly increasing the quality of life of senior citizens.}
}

Christoforos Zolotas, Themistoklis Diamantopoulos, Kyriakos Chatzidimitriou and Andreas Symeonidis
"From requirements to source code: a Model-Driven Engineering approach for RESTful web services"
Automated Software Engineering, pp. 1-48, 2016 Sep

During the last few years, the REST architectural style has drastically changed the way web services are developed. Due to its transparent resource-oriented model, the RESTful paradigm has been incorporated into several development frameworks that allow rapid development and aspire to automate parts of the development process. However, most of the frameworks lack automation of essential web service functionality, such as authentication or database searching, while the end product is usually not fully compliant to REST. Furthermore, most frameworks rely heavily on domain specific modeling and require developers to be familiar with the employed modeling technologies. In this paper, we present a Model-Driven Engineering (MDE) engine that supports fast design and implementation of web services with advanced functionality. Our engine provides a front-end interface that allows developers to design their envisioned system through software requirements in multimodal formats. Input in the form of textual requirements and graphical storyboards is analyzed using natural language processing techniques and semantics, to semi-automatically construct the input model for the MDE engine. The engine subsequently applies model-to-model transformations to produce a RESTful, ready-to-deploy web service. The procedure is traceable, ensuring that changes in software requirements propagate to the underlying software artefacts and models. Upon assessing our methodology through a case study and measuring the effort reduction of using our tools, we conclude that our system can be effective for the fast design and implementation of web services, while it allows easy wrapping of services that have been engineered with traditional methods to the MDE realm.

@article{2016ZolotasASE,
author={Christoforos Zolotas and Themistoklis Diamantopoulos and Kyriakos Chatzidimitriou and Andreas Symeonidis},
title={From requirements to source code: a Model-Driven Engineering approach for RESTful web services},
journal={Automated Software Engineering},
pages={1-48},
year={2016},
month={09},
date={2016-09-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/ReqsToCodeMDE.pdf},
doi={https://doi.org/10.1007/s10515-016-0206-x},
abstract={During the last few years, the REST architectural style has drastically changed the way web services are developed. Due to its transparent resource-oriented model, the RESTful paradigm has been incorporated into several development frameworks that allow rapid development and aspire to automate parts of the development process. However, most of the frameworks lack automation of essential web service functionality, such as authentication or database searching, while the end product is usually not fully compliant to REST. Furthermore, most frameworks rely heavily on domain specific modeling and require developers to be familiar with the employed modeling technologies. In this paper, we present a Model-Driven Engineering (MDE) engine that supports fast design and implementation of web services with advanced functionality. Our engine provides a front-end interface that allows developers to design their envisioned system through software requirements in multimodal formats. Input in the form of textual requirements and graphical storyboards is analyzed using natural language processing techniques and semantics, to semi-automatically construct the input model for the MDE engine. The engine subsequently applies model-to-model transformations to produce a RESTful, ready-to-deploy web service. The procedure is traceable, ensuring that changes in software requirements propagate to the underlying software artefacts and models. Upon assessing our methodology through a case study and measuring the effort reduction of using our tools, we conclude that our system can be effective for the fast design and implementation of web services, while it allows easy wrapping of services that have been engineered with traditional methods to the MDE realm.}
}
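
One way to picture the final model-to-text step of such an engine is a toy generator that scaffolds CRUD endpoints from a resource model. The sketch below is a deliberately simplified stand-in: the input "model" dict, the naming scheme and the routes are invented for illustration, whereas the actual engine works on far richer models and model-to-model transformations.

# Hypothetical sketch: turn a toy resource model (which a requirements
# analysis step might have produced) into REST endpoint stubs.

model = {  # illustrative input model, not the engine's real metamodel
    "book": ["title", "author"],
    "order": ["book_id", "quantity"],
}

def generate_routes(model):
    """Emit the CRUD endpoints a code generator might scaffold."""
    lines = []
    for resource, fields in model.items():
        plural = resource + "s"
        lines.append(f"GET    /{plural}          # list {plural}")
        lines.append(f"POST   /{plural}          # create ({', '.join(fields)})")
        lines.append(f"GET    /{plural}/<id>     # retrieve one {resource}")
        lines.append(f"PUT    /{plural}/<id>     # update one {resource}")
        lines.append(f"DELETE /{plural}/<id>     # delete one {resource}")
    return "\n".join(lines)

print(generate_routes(model))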

2016

Conference Papers

Kyriakos Chatzidimitriou, Konstantinos Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Towards defining the structural properties of efficient consumer social networks on the electricity grid"
AI4SG SETN Workshop on AI for the Smart Grid, 2016 May

Energy markets have undergone important changes at the conceptual level over the last years. Decentralized supply, small-scale production, smart grid optimization and control are the new building blocks. These changes offer substantial opportunities for all energy market stakeholders, some of which, however, remain largely unexploited. Small-scale consumers as a whole account for a significant amount of energy in current markets (up to 40%). As individuals though, their consumption is trivial, and their market power practically non-existent. Thus, it is necessary to assist small-scale energy market stakeholders in combining their market power. Within the context of this work, we propose Consumer Social Networks (CSNs) as a means to achieve the objective. We model consumers and present a simulation environment for the creation of CSNs and provide a proof of concept on how CSNs can be formulated based on various criteria. We also provide an indication on how demand response programs designed based on targeted incentives may lead to energy peak reductions.

@conference{2016ChatzidimitriouSETN,
author={Kyriakos Chatzidimitriou and Konstantinos Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Towards defining the structural properties of efficient consumer social networks on the electricity grid},
booktitle={AI4SG SETN Workshop on AI for the Smart Grid},
year={2016},
month={05},
date={2016-05-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/06/Cassandra_AI4SG_CameraReady.pdf},
abstract={Energy markets have undergone important changes at the conceptual level over the last years. Decentralized supply, small-scale production, smart grid optimization and control are the new building blocks. These changes offer substantial opportunities for all energy market stakeholders, some of which, however, remain largely unexploited. Small-scale consumers as a whole account for a significant amount of energy in current markets (up to 40%). As individuals though, their consumption is trivial, and their market power practically non-existent. Thus, it is necessary to assist small-scale energy market stakeholders in combining their market power. Within the context of this work, we propose Consumer Social Networks (CSNs) as a means to achieve the objective. We model consumers and present a simulation environment for the creation of CSNs and provide a proof of concept on how CSNs can be formulated based on various criteria. We also provide an indication on how demand response programs designed based on targeted incentives may lead to energy peak reductions.}
}
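
A minimal sketch of the clustering idea behind CSN formation, assuming synthetic consumers and scikit-learn's KMeans; the paper's simulation environment and formation criteria are considerably richer than grouping by daily profile alone.

# Hypothetical sketch: group consumers into candidate Consumer Social
# Networks by clustering their daily consumption profiles.
from sklearn.cluster import KMeans
import numpy as np

rng = np.random.default_rng(0)
# 20 synthetic consumers x 24 hourly readings: half "morning", half "evening".
morning = rng.normal(1.0, 0.1, (10, 24)); morning[:, 6:9] += 2.0
evening = rng.normal(1.0, 0.1, (10, 24)); evening[:, 18:22] += 2.0
profiles = np.vstack([morning, evening])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(labels)  # consumers with similar peaks end up in the same candidate CSN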

Themistoklis Diamantopoulos, Klearchos Thomopoulos and Andreas L. Symeonidis
"QualBoa: Reusability-aware Recommendations of Source Code Components"
IEEE/ACM 13th Working Conference on Mining Software Repositories, 2016 May

Contemporary software development processes involve finding reusable software components from online repositories and integrating them to the source code, both to reduce development time and to ensure that the final software project is of high quality. Although several systems have been designed to automate this procedure by recommending components that cover the desired functionality, the reusability of these components is usually not assessed by these systems. In this work, we present QualBoa, a recommendation system for source code components that covers both the functional and the quality aspects of software component reuse. Upon retrieving components, QualBoa provides a ranking that involves not only functional matching to the query, but also a reusability score based on configurable thresholds of source code metrics. The evaluation of QualBoa indicates that it can be effective for recommending reusable source code.

@conference{2016DiamantopoulosMSR,
author={Themistoklis Diamantopoulos and Klearchos Thomopoulos and Andreas L. Symeonidis},
title={QualBoa: Reusability-aware Recommendations of Source Code Components},
booktitle={IEEE/ACM 13th Working Conference on Mining Software Repositories},
year={2016},
month={05},
date={2016-05-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/06/QualBoa-Reusability-aware-Recommendations-of-Source-Code-Components.pdf},
abstract={Contemporary software development processes involve finding reusable software components from online repositories and integrating them to the source code, both to reduce development time and to ensure that the final software project is of high quality. Although several systems have been designed to automate this procedure by recommending components that cover the desired functionality, the reusability of these components is usually not assessed by these systems. In this work, we present QualBoa, a recommendation system for source code components that covers both the functional and the quality aspects of software component reuse. Upon retrieving components, QualBoa provides a ranking that involves not only functional matching to the query, but also a reusability score based on configurable thresholds of source code metrics. The evaluation of QualBoa indicates that it can be effective for recommending reusable source code.}
}
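
The ranking idea is easy to sketch: blend a functional match score with a reusability score computed from configurable metric thresholds. Everything below (the metric names, threshold values and the 50/50 weighting) is an illustrative assumption, not QualBoa's actual configuration.

# Hypothetical sketch of threshold-based reusability scoring plus ranking.
THRESHOLDS = {"cyclomatic_complexity": 10, "lines_of_code": 200, "comment_ratio": 0.1}

def reusability(metrics):
    """Fraction of metric thresholds the component satisfies."""
    ok = 0
    ok += metrics["cyclomatic_complexity"] <= THRESHOLDS["cyclomatic_complexity"]
    ok += metrics["lines_of_code"] <= THRESHOLDS["lines_of_code"]
    ok += metrics["comment_ratio"] >= THRESHOLDS["comment_ratio"]
    return ok / len(THRESHOLDS)

def rank(candidates, w=0.5):
    """Order retrieved components by weighted functional + reusability score."""
    score = lambda c: w * c["match"] + (1 - w) * reusability(c["metrics"])
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"name": "Stack.java", "match": 0.9,
     "metrics": {"cyclomatic_complexity": 25, "lines_of_code": 600, "comment_ratio": 0.02}},
    {"name": "Deque.java", "match": 0.8,
     "metrics": {"cyclomatic_complexity": 6, "lines_of_code": 150, "comment_ratio": 0.2}},
]
print([c["name"] for c in rank(candidates)])  # the reusable component wins

Note how the second component outranks the first despite a lower functional match, which is exactly the effect of folding reusability into the ranking.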

Themistoklis Diamantopoulos, Antonis Noutsos and Andreas L. Symeonidis
"DP-CORE: A Design Pattern Detection Tool for Code Reuse"
6th International Symposium on Business Modeling and Software Design (BMSD), Rhodes, Greece, 2016

In order to maintain, extend or reuse software projects one has to primarily understand what a system does and how well it does it. And, while in some cases information on system functionality exists, information covering the non-functional aspects is usually unavailable. Thus, one has to infer such knowledge by extracting design patterns directly from the source code. Several tools have been developed to identify design patterns, however most of them are limited to compilable and in most cases executable code, they rely on complex representations, and do not offer the developer any control over the detected patterns. In this paper we present DP-CORE, a design pattern detection tool that defines a highly descriptive representation to detect known and define custom patterns. DP-CORE is flexible, identifying exact and approximate pattern versions even in non-compilable code. Our analysis indicates that DP-CORE provides an efficient alternative to existing design pattern detection tools.

@conference{2016DiamantopoulosSBMSD,
author={Themistoklis Diamantopoulos and Antonis Noutsos and Andreas L. Symeonidis},
title={DP-CORE: A Design Pattern Detection Tool for Code Reuse},
booktitle={6th International Symposium on Business Modeling and Software Design (BMSD)},
address={Rhodes, Greece},
year={2016},
month={00},
date={2016-00-00},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/DP-CORE.pdf},
abstract={In order to maintain, extend or reuse software projects one has to primarily understand what a system does and how well it does it. And, while in some cases information on system functionality exists, information covering the non-functional aspects is usually unavailable. Thus, one has to infer such knowledge by extracting design patterns directly from the source code. Several tools have been developed to identify design patterns, however most of them are limited to compilable and in most cases executable code, they rely on complex representations, and do not offer the developer any control over the detected patterns. In this paper we present DP-CORE, a design pattern detection tool that defines a highly descriptive representation to detect known and define custom patterns. DP-CORE is flexible, identifying exact and approximate pattern versions even in non-compilable code. Our analysis indicates that DP-CORE provides an efficient alternative to existing design pattern detection tools.}
}
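
To give a flavour of descriptive, compiler-free pattern detection, here is a hypothetical miniature in Python: a pattern is a set of required structural properties, and a class matches when it exhibits all of them. DP-CORE's actual representation and matching are richer than this sketch.

# Hypothetical sketch: match a declarative pattern description against a
# lightweight representation of parsed (possibly non-compilable) classes.

SINGLETON = {  # illustrative pattern description, not DP-CORE's syntax
    "private_constructor": True,
    "static_self_field": True,
    "static_accessor": True,
}

def matches(candidate, pattern):
    """A class matches if it has every property the pattern requires."""
    return all(candidate.get(prop, False) for prop, req in pattern.items() if req)

classes = [
    {"name": "Config", "private_constructor": True,
     "static_self_field": True, "static_accessor": True},
    {"name": "Parser", "private_constructor": False,
     "static_self_field": False, "static_accessor": False},
]
print([c["name"] for c in classes if matches(c, SINGLETON)])  # ['Config']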

2016

Inproceedings Papers

Michail Papamichail, Themistoklis Diamantopoulos and Andreas L. Symeonidis
"User-Perceived Source Code Quality Estimation based on Static Analysis Metrics"
2016 IEEE International Conference on Software Quality, Reliability and Security (QRS), Vienna, Austria, 2016 Aug

The popularity of open source software repositories and the highly adopted paradigm of software reuse have led to the development of several tools that aspire to assess the quality of source code. However, most software quality estimation tools, even the ones using adaptable models, depend on fixed metric thresholds for defining the ground truth. In this work we argue that the popularity of software components, as perceived by developers, can be considered as an indicator of software quality. We present a generic methodology that relates quality with source code metrics and estimates the quality of software components residing in popular GitHub repositories. Our methodology employs two models: a one-class classifier, used to rule out low quality code, and a neural network, that computes a quality score for each software component. Preliminary evaluation indicates that our approach can be effective for identifying high quality software components in the context of reuse.

@inproceedings{2016PapamichailIEEE,
author={Michail Papamichail and Themistoklis Diamantopoulos and Andreas L. Symeonidis},
title={User-Perceived Source Code Quality Estimation based on Static Analysis Metrics},
booktitle={2016 IEEE International Conference on Software Quality, Reliability and Security (QRS)},
address={Vienna, Austria},
year={2016},
month={08},
date={2016-08-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/User-Perceived-Source-Code-Quality-Estimation-based-on-Static-Analysis-Metrics.pdf},
slideshare={http://www.slideshare.net/isselgroup/userperceived-source-code-quality-estimation-based-on-static-analysis-metrics},
abstract={The popularity of open source software repositories and the highly adopted paradigm of software reuse have led to the development of several tools that aspire to assess the quality of source code. However, most software quality estimation tools, even the ones using adaptable models, depend on fixed metric thresholds for defining the ground truth. In this work we argue that the popularity of software components, as perceived by developers, can be considered as an indicator of software quality. We present a generic methodology that relates quality with source code metrics and estimates the quality of software components residing in popular GitHub repositories. Our methodology employs two models: a one-class classifier, used to rule out low quality code, and a neural network, that computes a quality score for each software component. Preliminary evaluation indicates that our approach can be effective for identifying high quality software components in the context of reuse.}
}
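
The two-model pipeline can be mimicked in a few lines of scikit-learn, assuming synthetic metric vectors and a made-up quality target; this is a shape-of-the-idea sketch, not the authors' configuration.

# Hypothetical sketch: a one-class classifier screens out low-quality
# components, then a neural network scores the survivors.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_good = rng.normal(0, 1, (200, 5))        # metrics of "popular" components
stars = X_good @ rng.normal(0, 1, 5) + 5   # made-up quality proxy target

screen = OneClassSVM(nu=0.1).fit(X_good)
scorer = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X_good, stars)

X_new = rng.normal(0, 1, (3, 5))
kept = X_new[screen.predict(X_new) == 1]   # drop outliers (low quality)
print(scorer.predict(kept))                # quality scores for the rest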

2015

Journal Articles

Charalampos Dimoulas and Andreas Symeonidis
"Enhancing social multimedia matching and management through audio-adaptive audiovisual bimodal segmentation"
IEEE Multimedia, PP, (99), 2015 May

@article{2015DimoulasIEEEM,
author={Charalampos Dimoulas and Andreas Symeonidis},
title={Enhancing social multimedia matching and management through audio-adaptive audiovisual bimodal segmentation},
journal={IEEE Multimedia},
volume={PP},
number={99},
year={2015},
month={05},
date={2015-05-13},
doi={https://doi.org/10.1109/MMUL.2015.33},
}

Themistoklis Mavridis and Andreas Symeonidis
"Identifying valid search engine ranking factors in a Web 2.0 and Web 3.0 context for building efficient SEO mechanisms"
Engineering Applications of Artificial Intelligence (EAAI), 41, pp. 75–91, 2015 Mar

It is common knowledge that the web has been continuously evolving, from a read medium to a read/write scheme and, lately, to a read/write/infer corpus. To follow the evolution, Search Engines have been undergoing continuous updates in order to provide the user with a well-targeted, personalized and improved experience of the web. Along with this focus on content quality and user preferences, search engines have also been striving to integrate Semantic Web primitives, in order to enhance their intelligence. Current work discusses the evolution of search engine ranking factors in a Web 2.0 and Web 3.0 context. A benchmark crawler, LSHrank, has been developed, which employs known search engine APIs and evaluates results against various already established metrics, in different domains and types of web content. The ultimate LSHrank objective is the development of a Search Engine Optimization (SEO) mechanism that will enrich and alter the content of a website in order to achieve its optimal ranking in search engine result pages (SERPs).

@article{2015mavridisEAAI,
author={Themistoklis Mavridis and Andreas Symeonidis},
title={Identifying valid search engine ranking factors in a Web 2.0 and Web 3.0 context for building efficient SEO mechanisms},
journal={Engineering Applications of Artificial Intelligence (EAAI)},
volume={41},
pages={75–91},
year={2015},
month={03},
date={2015-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Identifying-valid-search-engine-ranking-factors-in-a-Web-2.0-and-Web-3.0-context-for-building-efficient-SEO-mechanisms.pdf},
doi={https://doi.org/10.1016/j.engappai.2015.02.002},
keywords={semantic web;search engine optimization;Search engine ranking factors analysis;Content quality;Social web},
abstract={It is common knowledge that the web has been continuously evolving, from a read medium to a read/write scheme and, lately, to a read/write/infer corpus. To follow the evolution, Search Engines have been undergoing continuous updates in order to provide the user with a well-targeted, personalized and improved experience of the web. Along with this focus on content quality and user preferences, search engines have also been striving to integrate Semantic Web primitives, in order to enhance their intelligence. Current work discusses the evolution of search engine ranking factors in a Web 2.0 and Web 3.0 context. A benchmark crawler, LSHrank, has been developed, which employs known search engine APIs and evaluates results against various already established metrics, in different domains and types of web content. The ultimate LSHrank objective is the development of a Search Engine Optimization (SEO) mechanism that will enrich and alter the content of a website in order to achieve its optimal ranking in search engine result pages (SERPs).}
}
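
The per-factor evaluation such a benchmark performs boils down to checking how strongly a candidate ranking factor co-varies with SERP position. Below is a hedged, self-contained example with fabricated numbers; Spearman correlation is one reasonable choice of test, while the paper evaluates against several established metrics.

# Hypothetical sketch: does a candidate factor track observed SERP positions?
from scipy.stats import spearmanr

serp_position = [1, 2, 3, 4, 5, 6, 7, 8]          # 1 = top result
backlink_count = [900, 750, 800, 400, 350, 90, 120, 30]  # fabricated factor values

rho, p = spearmanr(serp_position, backlink_count)
print(f"rho={rho:.2f}, p={p:.3f}")  # a strongly negative rho suggests the
                                    # factor grows as position improves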

2015

Books

Emmanouil G. Tsardoulias, Cezary Zielinski, Wlodzimierz Kasprzak, Sofia Reppou, Andreas L. Symeonidis, Pericles A. Mitkas and George Karagiannis
"Merging Robotics and AAL Ontologies: The RAPP Methodology"
Springer International Publishing, 2015 Mar

The recent advent of Cloud Computing, inevitably gave rise to Cloud Robotics. Whilst the field is arguably still in its infancy, great promise is shown regarding the problem of limited computational power in Robotics. This is the most evident advantage of Cloud Robotics, but, other much more significant yet subtle advantages can now be identified. Moving away from traditional Robotics, and approaching Cloud Robotics through the prism of distributed systems or Swarm Intelligence offers quite an interesting composure; physical robots deployed across different areas, may delegate tasks to higher intelligence agents residing in the cloud. This design has certain distinct attributes, similar with the organisation of a Hive or bee colony. Such a parallelism is crucial for the foundations set hereinafter, as they express through the hive design, a new scheme of distributed robotic architectures. Delegation of agent intelligence, from the physical robot swarms to the cloud controllers, creates a unique type of Hive Intelligence, where the controllers residing in the cloud, may act as the brain of a ubiquitous group of robots, whilst the robots themselves act as proxies for the Hive Intelligence. The sensors of the hive system providing the input and output are the robots, yet the information processing may take place collectively, individually or on a central hub, thus offering the advantages of a hybrid swarm and cloud controller. The realisation that radical robotic architectures can be created and implemented with current Artificial Intelligence models, raises interesting questions, such as if robots belonging to a hive, can perform tasks and procedures better or faster, and if can they learn through their interactions, and hence become more adaptive and intelligent.

@book{2015TsardouliasSIP,
author={Emmanouil G. Tsardoulias and Cezary Zielinski and Wlodzimierz Kasprzak and Sofia Reppou and Andreas L. Symeonidis and Pericles A. Mitkas and George Karagiannis},
title={Merging Robotics and AAL Ontologies: The RAPP Methodology},
publisher={Springer International Publishing},
year={2015},
month={03},
date={2015-03-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Merging-Robotics-and-AAL-Ontologies-The-RAPP-Methodology.pdf},
abstract={The recent advent of Cloud Computing, inevitably gave rise to Cloud Robotics. Whilst the field is arguably still in its infancy, great promise is shown regarding the problem of limited computational power in Robotics. This is the most evident advantage of Cloud Robotics, but, other much more significant yet subtle advantages can now be identified. Moving away from traditional Robotics, and approaching Cloud Robotics through the prism of distributed systems or Swarm Intelligence offers quite an interesting composure; physical robots deployed across different areas, may delegate tasks to higher intelligence agents residing in the cloud. This design has certain distinct attributes, similar with the organisation of a Hive or bee colony. Such a parallelism is crucial for the foundations set hereinafter, as they express through the hive design, a new scheme of distributed robotic architectures. Delegation of agent intelligence, from the physical robot swarms to the cloud controllers, creates a unique type of Hive Intelligence, where the controllers residing in the cloud, may act as the brain of a ubiquitous group of robots, whilst the robots themselves act as proxies for the Hive Intelligence. The sensors of the hive system providing the input and output are the robots, yet the information processing may take place collectively, individually or on a central hub, thus offering the advantages of a hybrid swarm and cloud controller. The realisation that radical robotic architectures can be created and implemented with current Artificial Intelligence models, raises interesting questions, such as if robots belonging to a hive, can perform tasks and procedures better or faster, and if can they learn through their interactions, and hence become more adaptive and intelligent.}
}

2015

Conference Papers

Themistoklis Diamantopoulos and Andreas Symeonidis
"Employing Source Code Information to Improve Question-Answering in Stack Overflow"
The 12th Working Conference on Mining Software Repositories (MSR 2015), pp. 454-457, Florence, Italy, 2015 May

Nowadays, software development has been greatly influenced by question-answering communities, such as Stack Overflow. A new problem-solving paradigm has emerged, as developers post problems they encounter that are then answered by the community. In this paper, we propose a methodology that allows searching for solutions in Stack Overflow, using the main elements of a question post, including not only its title, tags, and body, but also its source code snippets. We describe a similarity scheme for these elements and demonstrate how structural information can be extracted from source code snippets and compared to further improve the retrieval of questions. The results of our evaluation indicate that our methodology is effective on recommending similar question posts, allowing community members to search without fully forming a question.

@conference{2015DiamantopoulosMSR,
author={Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Employing Source Code Information to Improve Question-Answering in Stack Overflow},
booktitle={The 12th Working Conference on Mining Software Repositories (MSR 2015)},
pages={454-457},
address={Florence, Italy},
year={2015},
month={05},
date={2015-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/MSR2015.pdf},
abstract={Nowadays, software development has been greatly influenced by question-answering communities, such as Stack Overflow. A new problem-solving paradigm has emerged, as developers post problems they encounter that are then answered by the community. In this paper, we propose a methodology that allows searching for solutions in Stack Overflow, using the main elements of a question post, including not only its title, tags, and body, but also its source code snippets. We describe a similarity scheme for these elements and demonstrate how structural information can be extracted from source code snippets and compared to further improve the retrieval of questions. The results of our evaluation indicate that our methodology is effective on recommending similar question posts, allowing community members to search without fully forming a question.}
}
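
The element-wise similarity scheme can be sketched as a weighted combination of per-element scores. In the toy version below the code snippet is compared by plain token overlap, a crude stand-in for the structural comparison the paper describes; the weights and example posts are invented.

# Hypothetical sketch: combine title/tags/body/snippet similarities.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def post_similarity(q1, q2, w=(0.3, 0.2, 0.2, 0.3)):
    return (w[0] * jaccard(q1["title"].split(), q2["title"].split())
          + w[1] * jaccard(q1["tags"], q2["tags"])
          + w[2] * jaccard(q1["body"].split(), q2["body"].split())
          + w[3] * jaccard(q1["snippet"].split(), q2["snippet"].split()))

q1 = {"title": "sort a dict by value", "tags": {"python", "sorting"},
      "body": "how do I sort a dict by its values",
      "snippet": "sorted(d.items(), key=lambda kv: kv[1])"}
q2 = {"title": "order dictionary by value", "tags": {"python"},
      "body": "need dict ordered by values",
      "snippet": "sorted(d.items(), key=lambda kv: kv[1])"}
print(round(post_similarity(q1, q2), 3))  # identical snippets lift the score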

Emmanouil G. Tsardoulias, Cezary Zielinski, Wlodzimierz Kasprzak, Sofia Reppou, Andreas L. Symeonidis, Pericles A. Mitkas and George Karagiannis
"Merging Robotics and AAL ontologies: The RAPP methodology"
Automation Conference, 2015 Mar

Cloud robotics is becoming a trend in the modern robotics field, as it became evident that true artificial intelligence can be achieved only by sharing collective knowledge. In the ICT area, the most common way to formulate knowledge is via the ontology form, where different meanings connect semantically. Additionally, there is a considerable effort to merge robotics with assisted living concepts, as the modern societies suffer from lack of caregivers for the persons in need. In the current work, an attempt is performed to merge a robotic and an AAL ontology, as well as utilize it in the RAPP Project (EU-FP7).

@conference{2015TsardouliasMRALL,
author={Emmanouil G. Tsardoulias and Cezary Zielinski and Wlodzimierz Kasprzak and Sofia Reppou and Andreas L. Symeonidis and Pericles A. Mitkas and George Karagiannis},
title={Merging Robotics and AAL ontologies: The RAPP methodology},
booktitle={Automation Conference},
year={2015},
month={03},
date={2015-03-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Merging_Robotics_and_AAL_ontologies_-_The_RAPP_methodology.pdf},
abstract={Cloud robotics is becoming a trend in the modern robotics field, as it became evident that true artificial intelligence can be achieved only by sharing collective knowledge. In the ICT area, the most common way to formulate knowledge is via the ontology form, where different meanings connect semantically. Additionally, there is a considerable effort to merge robotics with assisted living concepts, as the modern societies suffer from lack of caregivers for the persons in need. In the current work, an attempt is performed to merge a robotic and an AAL ontology, as well as utilize it in the RAPP Project (EU-FP7).}
}

Emmanouil G. Tsardoulias, Andreas Symeonidis and Pericles A. Mitkas
"An automatic speech detection architecture for social robot oral interaction"
Proceedings of the Audio Mostly 2015 on Interaction With Sound, p. 33, ACM, Island of Rhodes, 2015 Oct

Social robotics have become a trend in contemporary robotics research, since they can be successfully used in a wide range of applications. One of the most fundamental communication skills a robot must have is the oral interaction with a human, in order to provide feedback or accept commands. And, although text-to-speech is an almost solved problem, this isn

@conference{2015TsardouliasPAMIWS,
author={Emmanouil G. Tsardoulias and Andreas Symeonidis and Pericles A. Mitkas},
title={An automatic speech detection architecture for social robot oral interaction},
booktitle={Proceedings of the Audio Mostly 2015 on Interaction With Sound},
pages={33},
publisher={ACM},
address={Island of Rhodes},
year={2015},
month={10},
date={2015-10-07},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/An-automatic-speech-detection-architecture-for-social-robot-oral-interaction.pdf},
abstract={Social robotics have become a trend in contemporary robotics research, since they can be successfully used in a wide range of applications. One of the most fundamental communication skills a robot must have is the oral interaction with a human, in order to provide feedback or accept commands. And, although text-to-speech is an almost solved problem, this isn}
}

Konstantinos Vavliakis, Anthony Chrysopoulos, Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis and Pericles A. Mitkas
"CASSANDRA: a simulation-based, decision-support tool for energy market stakeholders"
SimuTools, 2015

Energy gives personal comfort to people, and is essential for the generation of commercial and societal wealth. Nevertheless, energy production and consumption place considerable pressures on the environment, such as the emission of greenhouse gases and air pollutants. They contribute to climate change, damage natural ecosystems and the man-made environment, and cause adverse effects to human health. Lately, novel market schemes emerge, such as the formation and operation of customer coalitions aiming to improve their market power through the pursuit of common benefits. In this paper we present CASSANDRA, an open source, expandable software platform for modelling the demand side of power systems, focusing on small scale consumers. The structural elements of the platform are a) the electrical installations (i.e. households, commercial stores, small industries etc.), b) the respective appliances installed, and c) the electrical consumption-related activities of the people residing in the installations. CASSANDRA serves as a tool for simulation of real demand-side environments providing decision support for energy market stakeholders. The ultimate goal of the CASSANDRA simulation functionality is the identification of good practices that lead to energy efficiency, clustering electric energy consumers according to their consumption patterns, and studying consumer behaviour change when presented with various demand response programs.

@conference{2015VavliakisSimuTools,
author={Konstantinos Vavliakis and Anthony Chrysopoulos and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={CASSANDRA: a simulation-based, decision-support tool for energy market stakeholders},
booktitle={SimuTools},
year={2015},
month={00},
date={2015-00-00},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/CASSANDRA_SimuTools.pdf},
abstract={Energy gives personal comfort to people, and is essential for the generation of commercial and societal wealth. Nevertheless, energy production and consumption place considerable pressures on the environment, such as the emission of greenhouse gases and air pollutants. They contribute to climate change, damage natural ecosystems and the man-made environment, and cause adverse effects to human health. Lately, novel market schemes emerge, such as the formation and operation of customer coalitions aiming to improve their market power through the pursuit of common benefits. In this paper we present CASSANDRA, an open source, expandable software platform for modelling the demand side of power systems, focusing on small scale consumers. The structural elements of the platform are a) the electrical installations (i.e. households, commercial stores, small industries etc.), b) the respective appliances installed, and c) the electrical consumption-related activities of the people residing in the installations. CASSANDRA serves as a tool for simulation of real demand-side environments providing decision support for energy market stakeholders. The ultimate goal of the CASSANDRA simulation functionality is the identification of good practices that lead to energy efficiency, clustering electric energy consumers according to their consumption patterns, and studying consumer behaviour change when presented with various demand response programs.}
}
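
CASSANDRA's structural decomposition (installations, appliances, activities) suggests a simple bottom-up load computation. The sketch below uses made-up appliance ratings and schedules purely to illustrate the aggregation; the platform's models are statistical, not fixed schedules.

# Hypothetical sketch: installation demand as the bottom-up sum of the
# appliance loads triggered by consumption activities.
APPLIANCES = {"washing_machine": 2.0, "oven": 3.0, "lighting": 0.3}  # kW

def installation_load(activities):
    """Sum hourly demand over the activities running in one installation.

    activities -- list of (appliance, start_hour, duration_hours)
    """
    hourly = [0.0] * 24
    for appliance, start, duration in activities:
        for h in range(start, start + duration):
            hourly[h % 24] += APPLIANCES[appliance]
    return hourly

household = [("washing_machine", 19, 2), ("oven", 13, 1), ("lighting", 18, 5)]
load = installation_load(household)
print(max(load), load.index(max(load)))  # peak kW and the hour it occurs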

Christoforos Zolotas and Andreas Symeonidis
"Towards an MDA Mechanism for RESTful Services Development"
The 18th International Conference on Model Driven Engineering Languages and Systems, Ottawa, Canada, 2015 Oct

Automated software engineering research aspires to lead to more consistent software, faster delivery and lower production costs. Meanwhile, RESTful design is rapidly gaining momentum towards becoming the primal software engineering paradigm for the web, due to its simplicity and reusability. This paper attempts to couple the two perspectives and take the first step towards applying the MDE paradigm to RESTful service development at the PIM zone. A UML profile is introduced, which performs PIM meta-modeling of RESTful web services abiding by the third level of Richardson’s maturity model. The profile embeds a slight variation of the MVC design pattern to capture the core REST qualities of a resource. The proposed profile is followed by an indicative example that demonstrates how to apply the concepts presented, in order to automate PIM production of a system according to the MOF stack. Next steps include the introduction of the corresponding CIM, PSM and code production.

@conference{2015ZolotasICMDELS,
author={Christoforos Zolotas and Andreas Symeonidis},
title={Towards an MDA Mechanism for RESTful Services Development},
booktitle={The 18th International Conference on Model Driven Engineering Languages and Systems},
address={Ottawa, Canada},
year={2015},
month={10},
date={2015-10-02},
url={http://ceur-ws.org/Vol-1563/paper6.pdf},
slideshare={http://www.slideshare.net/isselgroup/towards-an-mda-mechanism-for-restful-services-development},
keywords={Model Driven Engineering;RESTful services;UML Profiles;Meta-modeling;Automated Software Engineering},
abstract={Automated software engineering research aspires to lead to more consistent software, faster delivery and lower production costs. Meanwhile, RESTful design is rapidly gaining momentum towards becoming the primal software engineering paradigm for the web, due to its simplicity and reusability. This paper attempts to couple the two perspectives and take the first step towards applying the MDE paradigm to RESTful service development at the PIM zone. A UML profile is introduced, which performs PIM meta-modeling of RESTful web services abiding by the third level of Richardson’s maturity model. The profile embeds a slight variation of the MVC design pattern to capture the core REST qualities of a resource. The proposed profile is followed by an indicative example that demonstrates how to apply the concepts presented, in order to automate PIM production of a system according to the MOF stack. Next steps include the introduction of the corresponding CIM, PSM and code production.}
}
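
The PIM-level information such a profile captures can be approximated, very loosely, by a small data model. The dataclass below is a hypothetical stand-in for illustration only; the paper expresses this as a UML profile over the MOF stack, not as code.

# Hypothetical sketch: what a platform-independent description of a REST
# resource might record before any platform-specific detail is added.
from dataclasses import dataclass, field

@dataclass
class Resource:                 # PIM element: a REST resource
    name: str
    properties: dict            # attribute name -> type
    methods: list = field(default_factory=lambda: ["GET", "POST", "PUT", "DELETE"])
    subresources: list = field(default_factory=list)

book = Resource("book", {"title": "str", "author": "str"})
library = Resource("library", {"address": "str"}, subresources=[book])
print(library)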

2015

Inproceedings Papers

Themistoklis Diamantopoulos and Andreas Symeonidis
"Towards Interpretable Defect-Prone Component Analysis using Genetic Fuzzy Systems"
IEEE/ACM 4th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE), pp. 32-38, Florence, Italy, 2015 May

The problem of Software Reliability Prediction is attracting the attention of several researchers during the last few years. Various classification techniques are proposed in current literature which involve the use of metrics drawn from version control systems in order to classify software components as defect-prone or defect-free. In this paper, we create a novel genetic fuzzy rule-based system to efficiently model the defect-proneness of each component. The system uses a Mamdani-Assilian inference engine and models the problem as a one-class classification task. System rules are constructed using a genetic algorithm, where each chromosome represents a rule base (Pittsburgh approach). The parameters of our fuzzy system and the operators of the genetic algorithm are designed with regard to producing interpretable output. Thus, the output offers not only effective classification, but also a comprehensive set of rules that can be easily visualized to extract useful conclusions about the metrics of the software.

@inproceedings{2015DiamantopoulosRAISE,
author={Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Towards Interpretable Defect-Prone Component Analysis using Genetic Fuzzy Systems},
booktitle={IEEE/ACM 4th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE)},
pages={32-38},
address={Florence, Italy},
year={2015},
month={05},
date={2015-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Towards-Interpretable-Defect-Prone-Component-Analysis-using-Genetic-Fuzzy-Systems-.pdf},
abstract={The problem of Software Reliability Prediction is attracting the attention of several researchers during the last few years. Various classification techniques are proposed in current literature which involve the use of metrics drawn from version control systems in order to classify software components as defect-prone or defect-free. In this paper, we create a novel genetic fuzzy rule-based system to efficiently model the defect-proneness of each component. The system uses a Mamdani-Assilian inference engine and models the problem as a one-class classification task. System rules are constructed using a genetic algorithm, where each chromosome represents a rule base (Pittsburgh approach). The parameters of our fuzzy system and the operators of the genetic algorithm are designed with regard to producing interpretable output. Thus, the output offers not only effective classification, but also a comprehensive set of rules that can be easily visualized to extract useful conclusions about the metrics of the software.}
}
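
The Pittsburgh encoding (one chromosome = one whole rule base) is the part that is easiest to demonstrate compactly. The toy GA below evolves crisp threshold rules on a single synthetic metric, whereas the paper evolves Mamdani fuzzy rule bases over real version-control metrics; treat it as a sketch of the encoding, not of the system.

# Hypothetical sketch: a chromosome is a rule base (here, a list of
# thresholds); fitness is classification accuracy of the whole base.
import random

random.seed(0)
data = [(random.gauss(30 if buggy else 10, 5), buggy)
        for buggy in [0, 1] * 50]            # (metric value, defect-prone?)

def fitness(rulebase):
    """Predict 'defect-prone' if any rule's threshold is exceeded."""
    hits = sum((any(x > t for t in rulebase)) == y for x, y in data)
    return hits / len(data)

pop = [[random.uniform(0, 40) for _ in range(3)] for _ in range(20)]
for _ in range(30):                           # evolve: keep best, mutate copies
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [[t + random.gauss(0, 2) for t in rb] for rb in pop[:10]]
print(round(fitness(pop[0]), 2), [round(t, 1) for t in pop[0]])

The surviving thresholds remain directly readable, which is a (very rough) analogue of the interpretability argument the paper makes for its fuzzy rule bases.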

2014

Journal Articles

Anna A. Adamopoulou and Andreas Symeonidis
"A simulation testbed for analyzing trust and reputation mechanisms in unreliable online markets"
Electronic Commerce Research and Applications, 35, pp. 114-130, 2014 Oct

Modern online markets are becoming extremely dynamic, indirectly dictating the need for (semi-) autonomous approaches for constant monitoring and immediate action in order to satisfy one’s needs/preferences. In such open and versatile environments, software agents may be considered as a suitable metaphor for dealing with the increasing complexity of the problem. Additionally, trust and reputation have been recognized as key issues in online markets and many researchers have, in different perspectives, surveyed the related notions, mechanisms and models. Within the context of this work we present an adaptable, multivariate agent testbed for the simulation of open online markets and the study of various factors affecting the quality of the service consumed. This testbed, which we call Euphemus, is highly parameterized and can be easily customized to suit a particular application domain. It allows for building various market scenarios and analyzing interesting properties of e-commerce environments from a trust perspective. The architecture of Euphemus is presented and a number of well-known trust and reputation models are built with Euphemus, in order to show how the testbed can be used to apply and adapt models. Extensive experimentation has been performed in order to show how models behave in unreliable online markets, results are discussed and interesting conclusions are drawn.

@article{2014AdamopoulouECRA,
author={Anna A. Adamopoulou and Andreas Symeonidis},
title={A simulation testbed for analyzing trust and reputation mechanisms in unreliable online markets},
journal={Electronic Commerce Research and Applications},
volume={35},
pages={114-130},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/1-s2.0-S1567422314000465-main.pdf},
doi={https://doi.org/10.1016/j.elerap.2014.07.001},
abstract={Modern online markets are becoming extremely dynamic, indirectly dictating the need for (semi-) autonomous approaches for constant monitoring and immediate action in order to satisfy one’s needs/preferences. In such open and versatile environments, software agents may be considered as a suitable metaphor for dealing with the increasing complexity of the problem. Additionally, trust and reputation have been recognized as key issues in online markets and many researchers have, in different perspectives, surveyed the related notions, mechanisms and models. Within the context of this work we present an adaptable, multivariate agent testbed for the simulation of open online markets and the study of various factors affecting the quality of the service consumed. This testbed, which we call Euphemus, is highly parameterized and can be easily customized to suit a particular application domain. It allows for building various market scenarios and analyzing interesting properties of e-commerce environments from a trust perspective. The architecture of Euphemus is presented and a number of well-known trust and reputation models are built with Euphemus, in order to show how the testbed can be used to apply and adapt models. Extensive experimentation has been performed in order to show how models behave in unreliable online markets, results are discussed and interesting conclusions are drawn.}
}
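
The kind of experiment such a testbed automates can be hinted at with a tiny beta-reputation simulation, using invented seller reliabilities and a greedy buyer policy; Euphemus itself supports far more elaborate market scenarios and trust models.

# Hypothetical sketch: buyers keep a beta-reputation estimate per seller and
# always pick the current leader; an unreliable seller's reputation sinks.
import random

random.seed(42)
honesty = {"seller_a": 0.9, "seller_b": 0.4}           # ground-truth reliability
stats = {s: [1, 1] for s in honesty}                   # [successes+1, failures+1]

def reputation(s):
    good, bad = stats[s]
    return good / (good + bad)                         # mean of the Beta estimate

for _ in range(200):
    seller = max(stats, key=reputation)                # buyer trusts the leader
    outcome = random.random() < honesty[seller]        # quality actually received
    stats[seller][0 if outcome else 1] += 1

print({s: round(reputation(s), 2) for s in stats})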

Antonios Chrysopoulos, Christos Diou, Andreas L. Symeonidis and Pericles A. Mitkas
"Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications"
Engineering Applications of Artificial Intelligence (EAAI), 35, pp. 299-315, 2014 Oct

In contemporary power systems, small-scale consumers account for up to 50% of a country's total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.

@article{2014chrysopoulosEAAI,
author={Antonios Chrysopoulos and Christos Diou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications},
journal={Engineering Applications of Artificial Intelligence (EAAI)},
volume={35},
pages={299-315},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Bottom-up-modeling-of-small-scale-energy-consumers-for-effective-Demand-Response-Applications.pdf},
doi={http://dx.doi.org/10.1016/j.engappai.2014.06.015},
keywords={Small-scale consumer models;Demand simulation;Demand Response Applications},
abstract={In contemporary power systems, small-scale consumers account for up to 50% of a country's total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.}
}
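
To make the bottom-up modeling idea concrete, here is a minimal sketch in Python: per-appliance load curves are summed into an installation-level profile, and one appliance run is shifted in time to flatten the peak. All appliance names, wattages and schedules below are illustrative stand-ins, not parameters from the paper.

import numpy as np

HOURS = 24

def appliance_profile(start_hour, duration_h, watts):
    """Return a 24-hour load vector (in W) for one appliance run."""
    profile = np.zeros(HOURS)
    for h in range(start_hour, start_hour + duration_h):
        profile[h % HOURS] += watts
    return profile

# Illustrative appliance schedule for one household.
appliances = {
    "washing_machine": appliance_profile(start_hour=19, duration_h=2, watts=800),
    "oven": appliance_profile(start_hour=19, duration_h=1, watts=2000),
    "fridge": sum(appliance_profile(h, 1, 150) for h in range(0, 24, 3)),
}

# Bottom-up aggregation: the installation load is the sum of appliance loads.
total = sum(appliances.values())
print("peak before shift: %.0f W at hour %d" % (total.max(), total.argmax()))

# A small demand-response measure: run the washing machine two hours later.
shifted = dict(appliances, washing_machine=appliance_profile(21, 2, 800))
total_shifted = sum(shifted.values())
print("peak after shift: %.0f W at hour %d" % (total_shifted.max(), total_shifted.argmax()))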

Themistoklis Diamantopoulos and Andreas Symeonidis
"Localizing Software Bugs using the Edit Distance of Call Traces"
International Journal On Advances in Software, 7, (1), pp. 277-288, 2014 Oct

Automating the localization of software bugs that do not lead to crashes is a difficult task that has drawn the attention of several researchers. Several popular methods follow the same approach; function call traces are collected and represented as graphs, which are subsequently mined using subgraph mining algorithms in order to provide a ranking of potentially buggy functions-nodes. Recent work has indicated that the scalability of state-of-the-art methods can be improved by reducing the graph dataset using tree edit distance algorithms. The call traces that are closer to each other, but belong to different sets, are the ones that are most significant in localizing bugs. In this work, we further explore the task of selecting the most significant traces, by proposing different call trace selection techniques, based on the Stable Marriage problem, and testing their effectiveness against current solutions. Upon evaluating our methods on a real-world dataset, we prove that our methodology is scalable and effective enough to be applied on dynamic bug detection scenarios.

@article{2014DiamantopoulosIJAS,
author={Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Localizing Software Bugs using the Edit Distance of Call Traces},
journal={International Journal On Advances in Software},
volume={7},
number={1},
pages={277-288},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Localizing-Software-Bugs-using-the-Edit-Distance-of-Call-Traces.pdf},
abstract={Automating the localization of software bugs that do not lead to crashes is a difficult task that has drawn the attention of several researchers. Several popular methods follow the same approach; function call traces are collected and represented as graphs, which are subsequently mined using subgraph mining algorithms in order to provide a ranking of potentially buggy functions-nodes. Recent work has indicated that the scalability of state-of-the-art methods can be improved by reducing the graph dataset using tree edit distance algorithms. The call traces that are closer to each other, but belong to different sets, are the ones that are most significant in localizing bugs. In this work, we further explore the task of selecting the most significant traces, by proposing different call trace selection techniques, based on the Stable Marriage problem, and testing their effectiveness against current solutions. Upon evaluating our methods on a real-world dataset, we prove that our methodology is scalable and effective enough to be applied on dynamic bug detection scenarios.}
}
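
A minimal sketch of the core mechanics in Python, assuming call traces are flattened into sequences of function names and compared with plain Levenshtein distance; the traces below and the exhaustive cross-set ranking are illustrative, not the paper's exact Stable Marriage formulation.

from itertools import product

def edit_distance(a, b):
    """Levenshtein distance between two call traces (sequences of function names)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            # dp[j]: deletion, dp[j-1]: insertion, prev: substitution/match
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

passing = [["main", "parse", "eval", "emit"], ["main", "parse", "report", "emit"]]
failing = [["main", "parse", "eval", "log", "emit"], ["main", "init", "emit"]]

# Rank all cross-set pairs by distance: the passing/failing traces that are
# closest to each other are the most informative ones for localizing the bug.
pairs = sorted((edit_distance(p, f), pi, fi)
               for (pi, p), (fi, f) in product(enumerate(passing), enumerate(failing)))
for d, pi, fi in pairs:
    print("passing[%d] vs failing[%d]: distance %d" % (pi, fi, d))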

G. Mamalakis, C. Diou, A.L. Symeonidis and L. Georgiadis
"Of daemons and men: A file system approach towards intrusion detection"
Applied Soft Computing, 25, pp. 1-14, 2014 Oct

We present FI^2DS, a file-system, host-based anomaly detection system that monitors Basic Security Module (BSM) audit records and determines whether a web server has been compromised by comparing monitored activity generated from the web server to a normal usage profile. Additionally, we propose a set of features extracted from file system specific BSM audit records, as well as an IDS that identifies attacks based on a decision engine that employs one-class classification using a moving window on incoming data. We have used two different machine learning algorithms, Support Vector Machines (SVMs) and Gaussian Mixture Models (GMMs), and our evaluation is performed on real-world datasets collected from three web servers and a honeynet. Results are very promising, since FI^2DS detection rates range between 91% and 95.9%, with corresponding false positive rates ranging between 8.1×10^-2% and 9.3×10^-4%. Comparison of FI^2DS to another state-of-the-art filesystem-based IDS, FWRAP, indicates higher effectiveness of the proposed IDS in all three datasets. Within the context of this paper FI^2DS is evaluated for the web daemon user; nevertheless, it can be directly extended to model any daemon-user for both intrusion detection and postmortem analysis.

@article{2014MamalakisASC,
author={G. Mamalakis and C. Diou and A.L. Symeonidis and L. Georgiadis},
title={Of daemons and men: A file system approach towards intrusion detection},
journal={Applied Soft Computing},
volume={25},
pages={1--14},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Of-daemons-and-men-A-file-system-approach-towards-intrusion-detection.pdf},
doi={http://dx.doi.org/10.1016/j.asoc.2014.07.026},
keywords={Intrusion detection systems;Information security;File system;Anomaly detection},
abstract={We present FI^2DS, a file-system, host-based anomaly detection system that monitors Basic Security Module (BSM) audit records and determines whether a web server has been compromised by comparing monitored activity generated from the web server to a normal usage profile. Additionally, we propose a set of features extracted from file system specific BSM audit records, as well as an IDS that identifies attacks based on a decision engine that employs one-class classification using a moving window on incoming data. We have used two different machine learning algorithms, Support Vector Machines (SVMs) and Gaussian Mixture Models (GMMs), and our evaluation is performed on real-world datasets collected from three web servers and a honeynet. Results are very promising, since FI^2DS detection rates range between 91% and 95.9%, with corresponding false positive rates ranging between 8.1×10^-2% and 9.3×10^-4%. Comparison of FI^2DS to another state-of-the-art filesystem-based IDS, FWRAP, indicates higher effectiveness of the proposed IDS in all three datasets. Within the context of this paper FI^2DS is evaluated for the web daemon user; nevertheless, it can be directly extended to model any daemon-user for both intrusion detection and postmortem analysis.}
}
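
The one-class classification core of such an IDS can be sketched in a few lines with scikit-learn; the feature vectors below are synthetic stand-ins for the file-system BSM features described above, and the window size and alert threshold are arbitrary.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Synthetic stand-ins for feature vectors derived from BSM audit records.
normal_activity = rng.normal(0.0, 1.0, size=(500, 4))
monitored_stream = np.vstack([rng.normal(0.0, 1.0, size=(50, 4)),
                              rng.normal(4.0, 1.0, size=(5, 4))])  # injected anomaly

# Train the one-class model on the normal usage profile only.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal_activity)

# Slide a window over incoming records and flag windows dominated by outliers.
WINDOW = 10
for start in range(len(monitored_stream) - WINDOW + 1):
    window = monitored_stream[start:start + WINDOW]
    if (model.predict(window) == -1).mean() > 0.3:
        print("alert: window starting at record %d looks anomalous" % start)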

Themistoklis Mavridis and Andreas Symeonidis
"Semantic analysis of web documents for the generation of optimal content"
Engineering Applications of Artificial Intelligence, 35, pp. 114-130, 2014 Oct

The Web has been under major evolution over the last decade and search engines have been trying to incorporate the changes of the web and provide the user with better results in terms of content. In order to evaluate the quality of a document there has been a plethora of attempts, some of which have considered the use of semantic analysis for extracting conclusions upon documents around the web. In turn, Search Engine Optimization (SEO) has been under development in order to cope with the changes of search engines and the web. SEO's aim has been the creation of effective strategies for optimal ranking of websites and webpages in search engines. Current work probes on semantic analysis of web content. We further elaborate on LDArank, a mechanism that employs Latent Dirichlet Allocation (LDA) for the semantic analysis of web content and the generation of optimal content for given queries. We apply the new proposed mechanism, LSHrank, and explore the effect of generating web content against various SEO factors. We demonstrate LSHrank's robustness in producing semantically prominent content in comparison to different semantic analysis based SEO approaches.

@article{2014MavridisEAAI,
author={Themistoklis Mavridis and Andreas Symeonidis},
title={Semantic analysis of web documents for the generation of optimal content},
journal={Engineering Applications of Artificial Intelligence},
volume={35},
pages={114-130},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/1-s2.0-S0952197614001304-main.pdf},
doi={http://dx.doi.org/10.1016/j.engappai.2014.06.008},
abstract={The Web has been under major evolution over the last decade and search engines have been trying to incorporate the changes of the web and provide the user with better results in terms of content. In order to evaluate the quality of a document there has been a plethora of attempts, some of which have considered the use of semantic analysis for extracting conclusions upon documents around the web. In turn, Search Engine Optimization (SEO) has been under development in order to cope with the changes of search engines and the web. SEO's aim has been the creation of effective strategies for optimal ranking of websites and webpages in search engines. Current work probes on semantic analysis of web content. We further elaborate on LDArank, a mechanism that employs Latent Dirichlet Allocation (LDA) for the semantic analysis of web content and the generation of optimal content for given queries. We apply the new proposed mechanism, LSHrank, and explore the effect of generating web content against various SEO factors. We demonstrate LSHrank's robustness in producing semantically prominent content in comparison to different semantic analysis based SEO approaches.}
}
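
The LDA step at the heart of such a mechanism can be sketched with scikit-learn; the toy documents below stand in for the textual content of pages returned for a query, and the topic count is arbitrary. Note that scikit-learn fits LDA with variational inference rather than the Gibbs sampling used by LDArank, but the extracted topics play the same role.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "search engine optimization improves page ranking",
    "ranking factors include links keywords and content quality",
    "latent topic models summarize document content",
    "dirichlet allocation infers topics from word counts",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Inspect the most probable terms per topic; optimized content would be
# built around the terms that dominate the topics of top-ranked pages.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[-4:][::-1]
    print("topic %d:" % k, ", ".join(terms[i] for i in top))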

2014

Inproceedings Papers

Christos Dimou, Fani Tzima, Andreas L. Symeonidis and Pericles A. Mitkas
"Performance Evaluation of Agents and Multi-agent Systems using Formal Specifications in Z Notation"
Lecture Notes on Agents and Data Mining Interaction, pp. 50-54, Springer, Baltimore, Maryland, USA, 2014 May

@inproceedings{2014Dimou,
author={Christos Dimou and Fani Tzima and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Performance Evaluation of Agents and Multi-agent Systems using Formal Specifications in Z Notation},
booktitle={Lecture Notes on Agents and Data Mining Interaction},
pages={50-54},
publisher={Springer},
address={Baltimore, Maryland, USA},
year={2014},
month={05},
date={2014-05-05},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Performance-Evaluation-of-Agents-and-Multi-agent-Systems-using-Formal-Specifications-in-Z-Notation.pdf},
}

Rafaila Grigoriou and Andreas L. Symeonidis
"Towards the Design of User Friendly Search Engines for Software Projects"
Lecture Notes on Natural Language Processing and Information Systems, pp. 164-167, Springer International Publishing, Chicago, Illinois, 2014 Jun

@inproceedings{2014GrigoriouTDUFSESP,
author={Rafaila Grigoriou and Andreas L. Symeonidis},
title={Towards the Design of User Friendly Search Engines for Software Projects},
booktitle={Lecture Notes on Natural Language Processing and Information Systems},
pages={164-167},
publisher={Springer International Publishing},
address={Chicago, Illinois},
year={2014},
month={06},
date={2014-06-07},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Towards-the-Design-of-User-Friendly-Search-Engines-for-Software-Projects.pdf},
keywords={Search engine ranking factors analysis},
}

Michael Roth, Themistoklis Diamantopoulos, Ewan Klein and Andreas L. Symeonidis
"Software Requirements: A new Domain for Semantic Parsers"
Proceedings of the ACL 2014 Workshop on Semantic Parsing (SP14), pp. 50-54, Baltimore, Maryland, USA, 2014 Jun

Software requirements are commonly written in natural language, making them prone to ambiguity, incompleteness and inconsistency. By converting requirements to formal semantic representations, emerging problems can be detected at an early stage of the development process, thus reducing the number of ensuing errors and the development costs. In this paper, we treat the mapping from requirements to formal representations as a semantic parsing task. We describe a novel data set for this task that involves two contributions: first, we establish an ontology for formally representing requirements; and second, we introduce an iterative annotation scheme, in which formal representations are derived through step-wise refinements.

@inproceedings{roth2014software,
author={Michael Roth and Themistoklis Diamantopoulos and Ewan Klein and Andreas L. Symeonidis},
title={Software Requirements: A new Domain for Semantic Parsers},
booktitle={Proceedings of the ACL 2014 Workshop on Semantic Parsing (SP14)},
pages={50-54},
address={Baltimore, Maryland, USA},
year={2014},
month={06},
date={2014-06-01},
url={http://www.aclweb.org/anthology/W/W14/W14-24.pdf#page=62},
abstract={Software requirements are commonly written in natural language, making them prone to ambiguity, incompleteness and inconsistency. By converting requirements to formal semantic representations, emerging problems can be detected at an early stage of the development process, thus reducing the number of ensuing errors and the development costs. In this paper, we treat the mapping from requirements to formal representations as a semantic parsing task. We describe a novel data set for this task that involves two contributions: first, we establish an ontology for formally representing requirements; and second, we introduce an iterative annotation scheme, in which formal representations are derived through step-wise refinements.}
}

2013

Journal Articles

Konstantinos N. Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Event identification in web social media through named entity recognition and topic modeling"
Data & Knowledge Engineering, 88, pp. 1-24, 2013 Jan

@article{2013VavliakisDKE,
author={Konstantinos N. Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Event identification in web social media through named entity recognition and topic modeling},
journal={Data & Knowledge Engineering},
volume={88},
pages={1-24},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Event-identification-in-web-social-media-through-named-entity-recognition-and-topic-modeling.pdf},
keywords={event identification;social media analysis;topic maps;peak detection;topic clustering}
}

2013

Incollection

Themistoklis Diamantopoulos, Andreas Symeonidis and Anthonios Chrysopoulos
"Designing robust strategies for continuous trading in contemporary power markets"
Agent-Mediated Electronic Commerce. Designing Trading Strategies and Mechanisms for Electronic Markets, pp. 30-44, Springer Berlin Heidelberg, 2013 Jan

In contemporary energy markets participants interact with each other via brokers that are responsible for the proper energy flow to and from their clients (usually in the form of long-term or short-term contracts). Power TAC is a realistic simulation of a real-life energy market, aiming towards providing a better understanding and modeling of modern energy markets, while boosting research on innovative trading strategies. Power TAC models brokers as software agents, competing against each other in Double Auction environments, in order to increase their client base and market share. Current work discusses such a broker agent architecture, striving to maximize its own profit. Within the context of our analysis, Double Auction markets are treated as microeconomic systems and, based on state-of-the-art price formation strategies, the following policies are designed: an adaptive price formation policy, a policy for forecasting energy consumption that employs Time Series Analysis primitives, and two shout update policies, a rule-based policy that acts rather hastily, and one based on Fuzzy Logic. The results are quite encouraging and will certainly call for future research.

@incollection{2013DiamantopoulosAMEC-DTSMEM,
author={Themistoklis Diamantopoulos and Andreas Symeonidis and Anthonios Chrysopoulos},
title={Designing robust strategies for continuous trading in contemporary power markets},
booktitle={Agent-Mediated Electronic Commerce. Designing Trading Strategies and Mechanisms for Electronic Markets},
pages={30-44},
publisher={Springer Berlin Heidelberg},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Designing-Robust-Strategies-for-Continuous-Trading-in-Contemporary-Power-Markets.pdf},
doi={http://dx.doi.org/10.1007/978-3-642-40864-9_3},
abstract={In contemporary energy markets participants interact with each other via brokers that are responsible for the proper energy flow to and from their clients (usually in the form of long-term or short-term contracts). Power TAC is a realistic simulation of a real-life energy market, aiming towards providing a better understanding and modeling of modern energy markets, while boosting research on innovative trading strategies. Power TAC models brokers as software agents, competing against each other in Double Auction environments, in order to increase their client base and market share. Current work discusses such a broker agent architecture, striving to maximize its own profit. Within the context of our analysis, Double Auction markets are treated as microeconomic systems and, based on state-of-the-art price formation strategies, the following policies are designed: an adaptive price formation policy, a policy for forecasting energy consumption that employs Time Series Analysis primitives, and two shout update policies, a rule-based policy that acts rather hastily, and one based on Fuzzy Logic. The results are quite encouraging and will certainly call for future research.}
}
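
The adaptive price-formation idea can be illustrated with a toy feedback rule: nudge the offered price up after a won auction and undercut after a lost one. The step size and bounds below are illustrative, not the broker's actual policy.

class AdaptivePricePolicy:
    """Toy adaptive price formation for a broker bidding in repeated auctions."""

    def __init__(self, price=50.0, step=0.05, floor=10.0, cap=200.0):
        self.price, self.step, self.floor, self.cap = price, step, floor, cap

    def offer(self):
        return self.price

    def update(self, won):
        # Won: try a slightly higher margin next time. Lost: undercut a little.
        factor = (1 + self.step) if won else (1 - self.step)
        self.price = min(self.cap, max(self.floor, self.price * factor))

policy = AdaptivePricePolicy()
for outcome in [True, True, False, True, False, False]:
    print("offered %.2f -> %s" % (policy.offer(), "won" if outcome else "lost"))
    policy.update(outcome)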

2013

Inproceedings Papers

Kyriakos C. Chatzidimitriou, Konstantinos N. Vavliakis, Andreas Symeonidis and Pericles Mitkas
"Redefining the market power of small-scale electricity consumers through consumer social networks"
10th IEEE International Conference on e-Business Engineering (ICEBE 2013), pp. 30-44, Springer Berlin Heidelberg, 2013 Jan

@inproceedings{2013ChatzidimitriouICEBE,
author={Kyriakos C. Chatzidimitriou and Konstantinos N. Vavliakis and Andreas Symeonidis and Pericles Mitkas},
title={Redefining the market power of small-scale electricity consumers through consumer social networks},
booktitle={10th IEEE International Conference on e-Business Engineering (ICEBE 2013)},
pages={30-44},
publisher={Springer Berlin Heidelberg},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Redefining-the-market-power-of-small-scale-electricity-consumers-through-Consumer-Social-Networks.pdf},
}

Antonios Chrysopoulos, Christos Diou, Andreas L. Symeonidis and Pericles Mitkas
"Agent-based small-scale energy consumer models for energy portfolio management"
Proceedings of the 2013 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2013), pp. 45-50, Atlanta, GA, USA, 2013 Jan

@inproceedings{2013ChrysopoulosIAT,
author={Antonios Chrysopoulos and Christos Diou and Andreas L. Symeonidis and Pericles Mitkas},
title={Agent-based small-scale energy consumer models for energy portfolio management},
booktitle={Proceedings of the 2013 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2013)},
pages={45-50},
address={Atlanta, GA, USA},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Agent-based-small-scale-energy-consumer-models-for-energy-portfolio-management.pdf},
keywords={Load Forecasting},
}

Themistoklis Diamantopoulos and Andreas L. Symeonidis
"Towards Scalable Bug Localization using the Edit Distance of Call Traces"
The Eighth International Conference on Software Engineering Advances (ICSEA 2013), pp. 45-50, Venice, Italy, 2013 Oct

Locating software bugs is a difficult task, especially if they do not lead to crashes. Current research on automating non-crashing bug detection dictates collecting function call traces and representing them as graphs, and reducing the graphs before applying a subgraph mining algorithm. A ranking of potentially buggy functions is derived using frequency statistics for each node (function) in the correct and incorrect set of traces. Although most existing techniques are effective, they do not achieve scalability. To address this issue, this paper suggests reducing the graph dataset in order to isolate the graphs that are significant in localizing bugs. To this end, we propose the use of tree edit distance algorithms to identify the traces that are closer to each other, while belonging to different sets. The scalability of two proposed algorithms, an exact and a faster approximate one, is evaluated using a dataset derived from a real-world application. Finally, although the main scope of this work lies in scalability, the results indicate that there is no compromise in effectiveness.

@inproceedings{2013DiamantopoulosICSEA,
author={Themistoklis Diamantopoulos and Andreas L. Symeonidis},
title={Towards Scalable Bug Localization using the Edit Distance of Call Traces},
booktitle={The Eighth International Conference on Software Engineering Advances (ICSEA 2013)},
pages={45-50},
address={Venice, Italy},
year={2013},
month={10},
date={2013-10-27},
url={https://www.thinkmind.org/download.php?articleid=icsea_2013_2_30_10250},
abstract={Locating software bugs is a difficult task, especially if they do not lead to crashes. Current research on automating non-crashing bug detection dictates collecting function call traces and representing them as graphs, and reducing the graphs before applying a subgraph mining algorithm. A ranking of potentially buggy functions is derived using frequency statistics for each node (function) in the correct and incorrect set of traces. Although most existing techniques are effective, they do not achieve scalability. To address this issue, this paper suggests reducing the graph dataset in order to isolate the graphs that are significant in localizing bugs. To this end, we propose the use of tree edit distance algorithms to identify the traces that are closer to each other, while belonging to different sets. The scalability of two proposed algorithms, an exact and a faster approximate one, is evaluated using a dataset derived from a real-world application. Finally, although the main scope of this work lies in scalability, the results indicate that there is no compromise in effectiveness.}
}

2012

Journal Articles

Wolfgang Ketter and Andreas L. Symeonidis
"Competitive Benchmarking: Lessons learned from the Trading Agent Competition"
AI Magazine, 33, (2), pp. 198-209, 2012 Sep

Over the years, competitions have been important catalysts for progress in artificial intelligence. We describe the goal of the overall Trading Agent Competition and highlight particular competitions. We discuss its significance in the context of today’s global market economy as well as AI research, the ways in which it breaks away from limiting assumptions made in prior work, and some of the advances it has engendered over the past ten years. Since its introduction in 2000, TAC has attracted more than 350 entries and brought together researchers from AI and beyond.

@article{2012KetterAIM,
author={Wolfgang Ketter and Andreas L. Symeonidis},
title={Competitive Benchmarking: Lessons learned from the Trading Agent Competition},
journal={AI Magazine},
volume={33},
number={2},
pages={198-209},
year={2012},
month={09},
date={2012-09-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Competitive-Benchmarking-Lessons-learned-from-the-Trading-Agent-Competition.pdf},
abstract={Over the years, competitions have been important catalysts for progress in artificial intelligence. We describe the goal of the overall Trading Agent Competition and highlight particular competitions. We discuss its significance in the context of today’s global market economy as well as AI research, the ways in which it breaks away from limiting assumptions made in prior work, and some of the advances it has engendered over the past ten years. Since its introduction in 2000, TAC has attracted more than 350 entries and brought together researchers from AI and beyond.}
}

2012

Inproceedings Papers

Georgios T. Andreou, Andreas L. Symeonidis, Christos Diou, Pericles A. Mitkas and Dimitrios P. Labridis
"A framework for the implementation of large scale Demand Response"
Smart Grid Technology, Economics and Policies (SG-TEP), 2012 International Conference on, Nuremberg, Germany, 2012 Jan

@inproceedings{2012andreouSGTEP2012,
author={Georgios T. Andreou and Andreas L. Symeonidis and Christos Diou and Pericles A. Mitkas and Dimitrios P. Labridis},
title={A framework for the implementation of large scale Demand Response},
booktitle={Smart Grid Technology, Economics and Policies (SG-TEP), 2012 International Conference on},
address={Nuremberg, Germany},
year={2012},
month={01},
date={2012-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/publications/tada2012.pdf},
}

Kyriakos C. Chatzidimitriou, Konstantinos Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Policy Search through Adaptive Function Approximation for Bidding in TAC SCM"
Joint Workshop on Trading Agents Design and Analysis and Agent Mediated Electronic Commerce, 2012 May

Agent autonomy is strongly related to learning and adaptation. Machine learning models generated, either by off-line or on-line adaptation, through the use of historical data or current environmental signals, provide agents with the necessary decision-making and generalization capabilities in competitive, dynamic, partially observable and stochastic environments. In this work, we discuss learning and adaptation in the context of the TAC SCM game. We apply a variety of machine learning and computational intelligence methods for generating the most efficient sales component of the agent, dealing with customer orders and production throughput. Along with utility maximization and bid acceptance probability estimation methods, we evaluate regression trees, particle swarm optimization, heuristic control and policy search via adaptive function approximation in order to build an efficient, near-real time, bidding mechanism. Results indicate that a suitable reinforcement learning setup coupled with the power of adaptive function approximation techniques adjusted to the problem at hand, is a good candidate for enabling high performance strategies.

@inproceedings{2012ChatzidimitriouAMEC,
author={Kyriakos C. Chatzidimitriou and Konstantinos Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Policy Search through Adaptive Function Approximation for Bidding in TAC SCM},
booktitle={Joint Workshop on Trading Agents Design and Analysis and Agent Mediated Electronic Commerce},
year={2012},
month={05},
date={2012-05-05},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Policy-Search-through-Adaptive-Function-Approximation-for-Bidding-in-TAC-SCM.pdf},
abstract={Agent autonomy is strongly related to learning and adaptation. Machine learning models generated, either by off-line or on-line adaptation, through the use of historical data or current environmental signals, provide agents with the necessary decision-making and generalization capabilities in competitive, dynamic, partially observable and stochastic environments. In this work, we discuss learning and adaptation in the context of the TAC SCM game. We apply a variety of machine learning and computational intelligence methods for generating the most efficient sales component of the agent, dealing with customer orders and production throughput. Along with utility maximization and bid acceptance probability estimation methods, we evaluate regression trees, particle swarm optimization, heuristic control and policy search via adaptive function approximation in order to build an efficient, near-real time, bidding mechanism. Results indicate that a suitable reinforcement learning setup coupled with the power of adaptive function approximation techniques adjusted to the problem at hand, is a good candidate for enabling high performance strategies.}
}
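
The bid-acceptance-probability component mentioned above can be sketched as a logistic model: estimate P(accept | price) from past offers and pick the price that maximizes expected profit. The synthetic offer history and cost figure below are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic history of offers and customer accept/reject decisions.
prices = rng.uniform(1500, 2500, size=300)
accepted = (rng.uniform(size=300) < 1 / (1 + np.exp((prices - 2000) / 100))).astype(int)

model = LogisticRegression().fit(prices.reshape(-1, 1), accepted)

# Choose the price maximizing expected profit = P(accept) * (price - cost).
cost = 1400.0
grid = np.linspace(1500, 2500, 101)
p_accept = model.predict_proba(grid.reshape(-1, 1))[:, 1]
print("price maximizing expected profit: %.0f" % grid[np.argmax(p_accept * (grid - cost))])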

Themistoklis Mavridis and Andreas L. Symeonidis
"Identifying webpage Semantics for Search Engine Optimization"
Paper presented at the 8th International Conference on Web Information Systems and Technologies (WEBIST), pp. 18-21, Porto, Portugal, 2012 Jun

The added value of search engines is undoubted. Their rapid evolution over the last decade has transformed them into the most important source of information and knowledge. From the end user's side, search engine success implies correct results delivered in a fast and accurate manner, while the ranking of search results for a given query has to be directly correlated to the user's anticipated response. From the content providers' side (i.e. websites), better ranking in a search engine result set implies numerous advantages like visibility, visitability, and profit. This is the main reason for the flourishing of Search Engine Optimization (SEO) techniques, which aim towards restructuring or enriching website content, so that optimal ranking of websites in relation to search engine results is feasible. SEO techniques are becoming more and more sophisticated. Given that internet marketing is extensively applied, prior quality factors prove insufficient, by themselves, to boost ranking, and the improvement of the quality of website content is also introduced. The current paper discusses such an SEO mechanism. Having identified that semantic analysis has not been widely applied in the field of SEO, a semantic approach is adopted, which employs Latent Dirichlet Allocation techniques coupled with Gibbs Sampling in order to analyze the results of search engines based on given keywords. Within the context of the paper, the developed SEO mechanism LDArank is presented, which evaluates query results through state-of-the-art SEO metrics, analyzes results content and extracts new, optimized content.

@inproceedings{2012MavridisWEBIST,
author={Themistoklis Mavridis and Andreas L. Symeonidis},
title={Identifying webpage Semantics for Search Engine Optimization},
booktitle={Paper presented at the 8th International Conference on Web Information Systems and Technologies (WEBIST)},
pages={18-21},
address={Porto, Portugal},
year={2012},
month={06},
date={2012-06-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/IDENTIFYING-WEBPAGE-SEMANTICS-FOR-SEARCH-ENGINE-OPTIMIZATION.pdf},
keywords={search engine optimization;LDArank;semantic analysis;latent dirichlet allocation;LDA Gibbs sampling;LDArank java application;webpage semantics;semantic analysis SEO},
abstract={The added value of search engines is undoubted. Their rapid evolution over the last decade has transformed them into the most important source of information and knowledge. From the end user's side, search engine success implies correct results delivered in a fast and accurate manner, while the ranking of search results for a given query has to be directly correlated to the user's anticipated response. From the content providers' side (i.e. websites), better ranking in a search engine result set implies numerous advantages like visibility, visitability, and profit. This is the main reason for the flourishing of Search Engine Optimization (SEO) techniques, which aim towards restructuring or enriching website content, so that optimal ranking of websites in relation to search engine results is feasible. SEO techniques are becoming more and more sophisticated. Given that internet marketing is extensively applied, prior quality factors prove insufficient, by themselves, to boost ranking, and the improvement of the quality of website content is also introduced. The current paper discusses such an SEO mechanism. Having identified that semantic analysis has not been widely applied in the field of SEO, a semantic approach is adopted, which employs Latent Dirichlet Allocation techniques coupled with Gibbs Sampling in order to analyze the results of search engines based on given keywords. Within the context of the paper, the developed SEO mechanism LDArank is presented, which evaluates query results through state-of-the-art SEO metrics, analyzes results content and extracts new, optimized content.}
}

Andreas Symeonidis, Panagiotis Toulis and Pericles A. Mitkas
"Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development"
Agents and Data Mining Interaction workshop (ADMI 2012), at the 2012 Conference on Autonomous Agents and Multiagent Systems (AAMAS), Valencia, Spain, 2012 Jun

The emergence of Multi-Agent systems as a software paradigm that most suitably fits all types of problems and architectures is already experiencing significant revisions. A more consistent approach on agent programming, and the adoption of Software Engineering standards, has indicated the pros and cons of Agent Technology and has limited the scope of the, once considered, programming ‘panacea’. Nowadays, the most active area of agent development is by far that of intelligent agent systems, where learning, adaptation, and knowledge extraction are at the core of the related research effort. Discussing knowledge extraction, data mining, once infamous for its application on bank processing and intelligence agencies, has become an unmatched enabling technology for intelligent systems. Naturally enough, a fruitful synergy of the aforementioned technologies has already been proposed that would combine the benefits of both worlds and would offer computer scientists new tools in their effort to build more sophisticated software systems. Current work discusses Agent Academy, an agent toolkit that supports: a) rapid agent application development and, b) dynamic incorporation of knowledge extracted by the use of data mining techniques into agent behaviors in as untroubled a manner as possible.

@inproceedings{2012SymeonidisADMI,
author={Andreas Symeonidis and Panagiotis Toulis and Pericles A. Mitkas},
title={Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development},
booktitle={Agents and Data Mining Interaction workshop (ADMI 2012), at the 2012 Conference on Autonomous Agents and Multiagent Systems (AAMAS)},
address={Valencia, Spain},
year={2012},
month={06},
date={2012-06-05},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Supporting-Agent-Oriented-Software-Engineering-for-Data-Mining-Enhanced-Agent-Development.pdf},
abstract={The emergence of Multi-Agent systems as a software paradigm that most suitably fits all types of problems and architectures is already experiencing significant revisions. A more consistent approach on agent programming, and the adoption of Software Engineering standards, has indicated the pros and cons of Agent Technology and has limited the scope of the, once considered, programming ‘panacea’. Nowadays, the most active area of agent development is by far that of intelligent agent systems, where learning, adaptation, and knowledge extraction are at the core of the related research effort. Discussing knowledge extraction, data mining, once infamous for its application on bank processing and intelligence agencies, has become an unmatched enabling technology for intelligent systems. Naturally enough, a fruitful synergy of the aforementioned technologies has already been proposed that would combine the benefits of both worlds and would offer computer scientists new tools in their effort to build more sophisticated software systems. Current work discusses Agent Academy, an agent toolkit that supports: a) rapid agent application development and, b) dynamic incorporation of knowledge extracted by the use of data mining techniques into agent behaviors in as untroubled a manner as possible.}
}

2012

Inbooks

Andreas L. Symeonidis, Panagiotis Toulis and Pericles A. Mitkas
"Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development"
Chapter 1, Lecture Notes in Computer Science vol. 7607, pp. 7-21, Springer Berlin Heidelberg, 2012 Jun

@inbook{2012SymeonidisLNCS,
author={Andreas L. Symeonidis and Panagiotis Toulis and Pericles A. Mitkas},
title={Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development},
chapter={1},
volume={7607},
pages={7-21},
publisher={Springer Berlin Heidelberg},
year={2012},
month={06},
date={2012-06-04},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Supporting-Agent-Oriented-Software-Engineering-for-Data-Mining-Enhanced-Agent-Development-1.pdf},
series={Lecture Notes in Computer Science}
}

2011

Journal Articles

Konstantinos N. Vavliakis, Andreas L. Symeonidis, Georgios T. Karagiannis and Pericles A. Mitkas
"An integrated framework for enhancing the semantic transformation, editing and querying of relational databases"
Expert Systems with Applications, 38, (4), pp. 3844-3856, 2011 Apr

The transition from the traditional to the Semantic Web has proven much more difficult than initially expected. The volume, complexity and versatility of data of various domains, the computational limitations imposed on semantic querying and inferencing have drastically reduced the thrust semantic technologies had when initially introduced. In order for the semantic web to actually

@article{2011VavliakisESWA,
author={Konstantinos N. Vavliakis and Andreas L. Symeonidis and Georgios T. Karagiannis and Pericles A. Mitkas},
title={An integrated framework for enhancing the semantic transformation, editing and querying of relational databases},
journal={Expert Systems with Applications},
volume={38},
number={4},
pages={3844-3856},
year={2011},
month={04},
date={2011-04-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-integrated-framework-for-enhancing-the-semantic-transformation-editing-and-querying-of-relational-databases.pdf},
keywords={Ontology editor;OWL-DL restriction creation;Relational database to ontology transformation;SPARQL query builder},
abstract={The transition from the traditional to the Semantic Web has proven much more difficult than initially expected. The volume, complexity and versatility of data of various domains, the computational limitations imposed on semantic querying and inferencing have drastically reduced the thrust semantic technologies had when initially introduced. In order for the semantic web to actually}
}
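
The querying side of such a framework can be illustrated with rdflib; the tiny graph below stands in for an ontology generated from a relational database (each row of a person table becoming an individual of a Person class), and the query is the kind a visual SPARQL builder would emit. The example.org namespace and all names are hypothetical.

from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.alice, RDF.type, EX.Person))   # row 'alice' of a person table
g.add((EX.alice, EX.worksFor, EX.acme))  # a foreign key became a property
g.add((EX.bob, RDF.type, EX.Person))

query = """
PREFIX ex: <http://example.org/>
SELECT ?person WHERE { ?person a ex:Person . }
"""
for row in g.query(query):
    print(row.person)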

2011

Inproceedings Papers

Andreas L. Symeonidis, Vasileios P. Gountis and Georgios T. Andreou
"A Software Agent Framework for exploiting Demand-side Consumer Social Networks in Power Systems"
Paper presented at the IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, pp. 30-33, Lyon, France, 2011 Aug

This work introduces Energy City, a multi-agent framework designed and developed in order to simulate the power system and explore the potential of Consumer Social Networks (CSNs) as a means to promote demand-side response and raise social awareness towards energy consumption. The power system with all its involved actors (Consumers, Producers, Electricity Suppliers, Transmission and Distribution Operators) and their requirements are modeled. The semantic infrastructure for the formation and analysis of electricity CSNs is discussed, and the basic consumer attributes and CSN functionality are identified. Authors argue that the formation of such CSNs is expected to increase the electricity consumer market power by enabling them to act in a collective way.

@inproceedings{2011SymeonidisICWEBIIAT,
author={Andreas L. Symeonidis and Vasileios P. Gountis and Georgios T. Andreou},
title={A Software Agent Framework for exploiting Demand-side Consumer Social Networks in Power Systems},
booktitle={Paper presented at the IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology},
pages={30--33},
address={Lyon, France},
year={2011},
month={08},
date={2011-08-22},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A-Software-Agent-Framework-for-exploiting-Demand-side-Consumer-Social-Networks-in-Power-Systems.pdf},
keywords={agent communication},
abstract={This work introduces Energy City, a multi-agent framework designed and developed in order to simulate the power system and explore the potential of Consumer Social Networks (CSNs) as a means to promote demand-side response and raise social awareness towards energy consumption. The power system with all its involved actors (Consumers, Producers, Electricity Suppliers, Transmission and Distribution Operators) and their requirements are modeled. The semantic infrastructure for the formation and analysis of electricity CSNs is discussed, and the basic consumer attributes and CSN functionality are identified. Authors argue that the formation of such CSNs is expected to increase the electricity consumer market power by enabling them to act in a collective way.}
}

Iraklis Tsekourakis and Andreas L. Symeonidis
"Dealing with Trust and Reputation in unreliable Multi-agent Trading Environments"
Paper presented at the 2011 Workshop on Trading Agent Design and Analysis (IJCAI 2011), pp. 21-28, Barcelona, Spain, 2011 Aug

In shared competitive environments, where information comes from various sources, agents may interact with each other in a competitive manner in order to achieve their individual goals. Numerous research efforts exist, attempting to define protocols, rules and interfaces for agents to abide by and ensure trustworthy exchange of information. Auction environments and e-commerce platforms are such paradigms, where trust and reputation are vital factors determining agent strategy. And though the process is always secured with a number of safeguards, there is always the issue of unreliability. In this context, the Agent Reputation and Trust (ART) testbed has provided researchers with the ability to test different trust and reputation strategies, in various types of trust/reputation environments. Current work attempts to identify the most viable trust and reputation models stated in the literature, while it further elaborates on the issue by proposing a robust trust and reputation mechanism. This mechanism is incorporated in our agent, HerculAgent, and tested in a variety of environments against the top performing agents of the ART competition. The paper provides a thorough analysis of ART, presents HerculAgent's architecture and discusses its performance.

@inproceedings{2011TsekourakisIJCAI,
author={Iraklis Tsekourakis and Andreas L. Symeonidis},
title={Dealing with Trust and Reputation in unreliable Multi-agent Trading Environments},
booktitle={Paper presented at the 2011 Workshop on Trading Agent Design and Analysis (IJCAI 2011)},
pages={21-28},
address={Barcelona, Spain},
year={2011},
month={08},
date={2011-08-17},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Dealing-with-Trust-and-Reputation-in-Unreliable-Multi-agent-Trading-Environments.pdf},
abstract={In shared competitive environments, where information comes from various sources, agents may interact with each other in a competitive manner in order to achieve their individual goals. Numerous research efforts exist, attempting to define protocols, rules and interfaces for agents to abide by and ensure trustworthy exchange of information. Auction environments and e-commerce platforms are such paradigms, where trust and reputation are vital factors determining agent strategy. And though the process is always secured with a number of safeguards, there is always the issue of unreliability. In this context, the Agent Reputation and Trust (ART) testbed has provided researchers with the ability to test different trust and reputation strategies, in various types of trust/reputation environments. Current work attempts to identify the most viable trust and reputation models stated in the literature, while it further elaborates on the issue by proposing a robust trust and reputation mechanism. This mechanism is incorporated in our agent, HerculAgent, and tested in a variety of environments against the top performing agents of the ART competition. The paper provides a thorough analysis of ART, presents HerculAgent's architecture and discusses its performance.}
}
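
A standard way to maintain the kind of trust score discussed here is the beta reputation model: count positive and negative interactions with a partner and use the mean of the resulting Beta posterior as its trust value. This is a textbook model shown for illustration, not necessarily the mechanism inside HerculAgent.

class BetaReputation:
    """Textbook beta reputation: trust = E[Beta(pos, neg)] = pos / (pos + neg)."""

    def __init__(self):
        self.pos, self.neg = 1.0, 1.0  # uniform Beta(1, 1) prior

    def record(self, satisfied):
        if satisfied:
            self.pos += 1.0
        else:
            self.neg += 1.0

    def trust(self):
        return self.pos / (self.pos + self.neg)

partner = BetaReputation()
for outcome in [True, True, False, True, True]:
    partner.record(outcome)
print("trust after 5 interactions: %.2f" % partner.trust())  # 5/7 = 0.71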

Kyriakos C. Chatzidimitriou, Antonios C. Chrysopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Enhancing Agent Intelligence through Evolving Reservoir Networks for Prediction in Power Stock Markets"
Agent and Data Mining Interaction 2011 Workshop held in conjunction with the conference on Autonomous Agents and Multi-Agent Systems (AAMAS) 2011, pp. 228-247, 2011 Apr

In recent years, Time Series Prediction and clustering have been employed in hyperactive and evolving environments -where temporal data play an important role- as a result of the need for reliable methods to estimate and predict the pattern or behavior of events and systems. Power Stock Markets are such highly dynamic and competitive auction environments, additionally perplexed by constrained power laws in the various stages, from production to transmission and consumption. As with all real-time auctioning environments, the limited time available for decision making provides an ideal testbed for autonomous agents to develop bidding strategies that exploit time series prediction. Within the context of this paper, we present Cassandra, a dynamic platform that fosters the development of Data-Mining enhanced Multi-agent systems. Special attention was given to the efficiency and reusability of Cassandra, which provides Plug-n-Play capabilities, so that users may adapt their solution to the problem at hand. Cassandra’s functionality is demonstrated through a pilot case, where autonomously adaptive Recurrent Neural Networks in the form of Echo State Networks are encapsulated into Cassandra agents, in order to generate power load and settlement price prediction models in typical Day-ahead Power Markets. The system has been tested in a real-world scenario, that of the Greek Energy Stock Market.

@inproceedings{2012ChatzidimitriouAAMAS,
author={Kyriakos C. Chatzidimitriou and Antonios C. Chrysopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Enhancing Agent Intelligence through Evolving Reservoir Networks for Prediction in Power Stock Markets},
booktitle={Agent and Data Mining Interaction 2011 Workshop held in conjunction with the conference on Autonomous Agents and Multi-Agent Systems (AAMAS) 2011},
pages={228-247},
year={2011},
month={04},
date={2011-04-19},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Enhancing-Agent-Intelligence-through-Evolving-Reservoir-Networks-for-Predictions-in-Power-Stock-Markets.pdf},
keywords={Neuroevolution;Power Stock Markets;Reservoir Computing},
abstract={In recent years, Time Series Prediction and clustering have been employed in hyperactive and evolving environments -where temporal data play an important role- as a result of the need for reliable methods to estimate and predict the pattern or behavior of events and systems. Power Stock Markets are such highly dynamic and competitive auction environments, additionally perplexed by constrained power laws in the various stages, from production to transmission and consumption. As with all real-time auctioning environments, the limited time available for decision making provides an ideal testbed for autonomous agents to develop bidding strategies that exploit time series prediction. Within the context of this paper, we present Cassandra, a dynamic platform that fosters the development of Data-Mining enhanced Multi-agent systems. Special attention was given to the efficiency and reusability of Cassandra, which provides Plug-n-Play capabilities, so that users may adapt their solution to the problem at hand. Cassandra’s functionality is demonstrated through a pilot case, where autonomously adaptive Recurrent Neural Networks in the form of Echo State Networks are encapsulated into Cassandra agents, in order to generate power load and settlement price prediction models in typical Day-ahead Power Markets. The system has been tested in a real-world scenario, that of the Greek Energy Stock Market.}
}
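
For readers unfamiliar with reservoir computing, the following is a minimal, illustrative Python sketch of an Echo State Network with a fixed random reservoir and a ridge-regression readout, the model family named in the abstract above. All class and parameter names are assumptions for demonstration; this is not the Cassandra implementation.

import numpy as np

class MinimalESN:
    """Echo State Network: fixed random reservoir, trained linear readout."""

    def __init__(self, n_inputs, n_reservoir=200, spectral_radius=0.9, seed=42):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        # Rescale so the spectral radius is below 1 (echo state property).
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        self.n_reservoir = n_reservoir

    def _states(self, U):
        # Drive the reservoir with the input sequence U of shape (T, n_inputs).
        x = np.zeros(self.n_reservoir)
        states = []
        for u in U:
            x = np.tanh(self.W_in @ u + self.W @ x)
            states.append(x.copy())
        return np.array(states)

    def fit(self, U, y, ridge=1e-6):
        # Ridge regression: only the readout weights are learned.
        X = self._states(U)
        self.W_out = np.linalg.solve(
            X.T @ X + ridge * np.eye(self.n_reservoir), X.T @ y)

    def predict(self, U):
        return self._states(U) @ self.W_out

In a Day-ahead setting, each row of U might hold a window of past hourly loads and y the next hour's load; both are assumed to be available from market data.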

Kyriakos C. Chatzidimitriou, Lampros C. Stavrogiannis, Andreas Symeonidis and Pericles A. Mitkas
"An Adaptive Proportional Value-per-Click Agent for Bidding in Ad Auctions"
Trading Agent Design and Analysis (TADA) 2011 Workshop held in conjunction with the International Joint Conference on Artificial Intelligence (IJCAI) 2011, pp. 21-28, Barcelona, Spain, 2011 Jul

Sponsored search auctions constitute the most important source of revenue for search engine companies, offering new opportunities for advertisers. The Trading Agent Competition (TAC) Ad Auctions tournament is one of the first attempts to study the competition among advertisers for their placement in sponsored positions along with organic search engine results. In this paper, we describe agent Mertacor, a simulation-based game-theoretic agent, coupled with on-line learning techniques to optimize its behavior, that successfully competed in the 2010 tournament. In addition, we evaluate different facets of our agent to draw conclusions about certain aspects of its strategy.

@inproceedings{Chatzidimitriou2011,
author={Kyriakos C. Chatzidimitriou and Lampros C. Stavrogiannis and Andreas Symeonidis and Pericles A. Mitkas},
title={An Adaptive Proportional Value-per-Click Agent for Bidding in Ad Auctions},
booktitle={Trading Agent Design and Analysis (TADA) 2011 Workshop held in conjunction with the International Joint Conference on Artificial Intelligence (IJCAI) 2011},
pages={21-28},
address={Barcelona, Spain},
year={2011},
month={07},
date={2011-07-17},
url={http://link.springer.com/content/pdf/10.1007%2F978-3-642-34889-1_2.pdf},
keywords={advertisement auction;game theory;sponsored search;trading agent},
abstract={Sponsored search auctions constitute the most important source of revenue for search engine companies, offering new opportunities for advertisers. The Trading Agent Competition (TAC) Ad Auctions tournament is one of the first attempts to study the competition among advertisers for their placement in sponsored positions along with organic search engine results. In this paper, we describe agent Mertacor, a simulation-based game-theoretic agent, coupled with on-line learning techniques to optimize its behavior, that successfully competed in the 2010 tournament. In addition, we evaluate different facets of our agent to draw conclusions about certain aspects of its strategy.}
}
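
As an illustration of the strategy named in the title, here is a hedged Python sketch of a proportional value-per-click bidder that adapts its shading factor online from observed profit. The class, its names, and the hill-climbing update rule are assumptions for demonstration, not agent Mertacor's actual mechanism.

class ProportionalVPCBidder:
    """Bid a fraction (the 'shading' factor) of the estimated value per click."""

    def __init__(self, shading=0.5, step=0.05):
        self.shading = shading  # fraction of the value per click to bid
        self.step = step        # size of each online adjustment

    def bid(self, value_per_click):
        return self.shading * value_per_click

    def update(self, profit, previous_profit):
        # Hill-climbing adaptation: raise the factor while profit improves,
        # lower it otherwise; keep it inside a sensible range.
        if profit >= previous_profit:
            self.shading = min(1.0, self.shading + self.step)
        else:
            self.shading = max(0.1, self.shading - self.step)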

2010

Inproceedings Papers

Nausheen S. Khuram, Andreas L. Symeonidis and Awais Majeed
"Wage – A Web Service- and Agent-based Generic Auctioning Environment"
Paper presented at the 2010 IADIS International Conference on Intelligent Systems and Agents, Freiburg, Germany, 2010 Jul

@inproceedings{2010KhuramISA,
author={Nausheen S. Khuram and Andreas L. Symeonidis and Awais Majeed},
title={Wage – A Web Service- and Agent-based Generic Auctioning Environment},
booktitle={Paper presented at the 2010 IADIS International Conference on Intelligent Systems and Agents},
address={Freiburg, Germany},
year={2010},
month={07},
date={2010-07-29},
keywords={Biomedical framework}
}

Andreas L. Symeonidis and Pericles A. Mitkas
"Monitoring Agent Communication in Soft Real-Time Environments"
Paper presented at the IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, pp. 265--268, Los Alamitos, CA, USA, 2010 Jan

Real-time systems can be defined as systems operating under specific timing constraints, either hard or soft ones. In principle, agent systems are considered inappropriate for such kinds of systems, due to the asynchronous nature of their communication protocols, which directly influences their temporal behavior. Nevertheless, multi-agent systems could be successfully employed for solving problems where failure to meet a deadline does not have serious consequences, given the existence of a fail-safe system mechanism. Current work focuses on the analysis of multi-agent systems behavior under such soft real-time constraints. To this end, ERMIS has been developed: an integrated framework that provides the agent developer with the ability to benchmark his/her own architecture and identify its limitations and its optimal timing behavior, under specific hardware/software constraints. A variety of MAS configurations have been tested and indicative results are discussed within the context of this paper.

@inproceedings{2010SymeonidisWIIAT,
author={Andreas L. Symeonidis and Pericles A. Mitkas},
title={Monitoring Agent Communication in Soft Real-Time Environments},
booktitle={Paper presented at the IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology},
pages={265--268},
address={Los Alamitos, CA, USA},
year={2010},
month={01},
date={2010-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Monitoring_Agent_Communication_in_Soft_Real-Time_E.pdf},
keywords={soft real-time systems;synchronization},
abstract={Real-time systems can be defined as systems operating under specific timing constraints, either hard or soft ones. In principle, agent systems are considered inappropriate for such kinds of systems, due to the asynchronous nature of their communication protocols, which directly influences their temporal behavior. Nevertheless, multi-agent systems could be successfully employed for solving problems where failure to meet a deadline does not have serious consequences, given the existence of a fail-safe system mechanism. Current work focuses on the analysis of multi-agent systems behavior under such soft real-time constraints. To this end, ERMIS has been developed: an integrated framework that provides the agent developer with the ability to benchmark his/her own architecture and identify its limitations and its optimal timing behavior, under specific hardware/software constraints. A variety of MAS configurations have been tested and indicative results are discussed within the context of this paper.}
}
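
As a rough illustration of the kind of measurement such a benchmark performs, the Python sketch below times a blocking agent request against a soft deadline and substitutes a fail-safe default on a miss. The function names and the 50 ms deadline are assumptions; ERMIS itself is not reproduced here.

import time

SOFT_DEADLINE_S = 0.050  # assumed 50 ms soft deadline

def timed_request(send_and_wait, payload, fallback):
    """Call a blocking agent request; on a deadline miss, return the fail-safe."""
    start = time.perf_counter()
    reply = send_and_wait(payload)
    elapsed = time.perf_counter() - start
    if elapsed > SOFT_DEADLINE_S:
        # Soft constraint: the miss is logged, not fatal, and a fallback is used.
        print(f"soft deadline missed ({elapsed * 1000:.1f} ms)")
        return fallback, elapsed
    return reply, elapsed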

Kyriakos C. Chatzidimitriou, Konstantinos N. Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Towards Understanding How Personality, Motivation, and Events Trigger Web User Activity"
Web Intelligence and Intelligent Agent Technology, IEEE/WIC/ACM International Conference, pp. 615-618, IEEE Computer Society, Los Alamitos, CA, USA, 2010 Jan

Web 2.0 provided internet users with a dynamic medium, where information is updated continuously and anyone can participate. Though preliminary analysis exists, there is still little understanding of what exactly stimulates users to actively participate, create and share content in online communities. In this paper we present a methodology that aspires to identify and analyze those events that trigger web user activity, content creation and sharing in Web 2.0. Our approach is based on user personality and motivation, and on the occurrence of events with a personal or global impact. The proposed methodology was applied to data collected from Flickr and analysis was performed through the use of statistics and data mining techniques.

@inproceedings{2010VavliakisWI,
author={Kyriakos C. Chatzidimitriou and Konstantinos N. Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Towards Understanding How Personality, Motivation, and Events Trigger Web User Activity},
booktitle={Web Intelligence and Intelligent Agent Technology, IEEE/WIC/ACM International Conference},
pages={615-618},
publisher={IEEE Computer Society},
address={Los Alamitos, CA, USA},
year={2010},
month={01},
date={2010-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Towards-Understanding-How-Personality-Motivation-and-Events-Trigger-Web-User-Activity.pdf},
keywords={Crowdsourcing;Flickr;Sharing},
abstract={Web 2.0 provided internet users with a dynamic medium, where information is updated continuously and anyone can participate. Though preliminary analysis exists, there is still little understanding of what exactly stimulates users to actively participate, create and share content in online communities. In this paper we present a methodology that aspires to identify and analyze those events that trigger web user activity, content creation and sharing in Web 2.0. Our approach is based on user personality and motivation, and on the occurrence of events with a personal or global impact. The proposed methodology was applied to data collected from Flickr and analysis was performed through the use of statistics and data mining techniques.}
}

2009

Journal Articles

Kyriakos C. Chatzidimitriou, Konstantinos N. Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Data-Mining-Enhanced Agents in Dynamic Supply-Chain-Management Environments"
Intelligent Systems, 24, (3), pp. 54-63, 2009 Jan

Special issue on Agents and Data Mining

@article{2009ChatzidimitriouIS,
author={Kyriakos C. Chatzidimitriou and Konstantinos N. Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Data-Mining-Enhanced Agents in Dynamic Supply-Chain-Management Environments},
journal={Intelligent Systems},
volume={24},
number={3},
pages={54-63},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Data-Mining-Enhanced_Agents_in_Dynamic_Supply-Chai.pdf},
abstract={Special issue on Agents and Data Mining}
}

2009

Inproceedings Papers

Antonios C. Chrysopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Improving agent bidding in Power Stock Markets through a data mining enhanced agent platform"
Agents and Data Mining Interaction workshop AAMAS 2009, pp. 111-125, Springer-Verlag, Budapest, Hungary, 2009 May

Like in any other auctioning environment, entities participating in Power Stock Markets have to compete against each other in order to maximize their own revenue. Towards the satisfaction of their goal, these entities (agents - human or software ones) may adopt different types of strategies - from naive to extremely complex ones - in order to identify the most profitable goods compilation, the appropriate price to buy or sell, etc., always under time pressure and auction environment constraints. Decisions become even more difficult to make in case one takes the vast volumes of historical data available into account: goods' prices, market fluctuations, bidding habits and buying opportunities. Within the context of this paper we present Cassandra, a multi-agent platform that exploits data mining, in order to extract efficient models for predicting Power Settlement prices and Power Load values in typical Day-ahead Power markets. The functionality of Cassandra is discussed, while focus is given to the bidding mechanism of Cassandra's agents, and the way data mining analysis is performed in order to generate the optimal forecasting models. Cassandra has been tested in a real-world scenario, with data derived from the Greek Energy Stock market.

@inproceedings{2009ChrysopoulosADMI,
author={Antonios C. Chrysopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Improving agent bidding in Power Stock Markets through a data mining enhanced agent platform},
booktitle={Agents and Data Mining Interaction workshop AAMAS 2009},
pages={111-125},
publisher={Springer-Verlag},
address={Budapest, Hungary},
year={2009},
month={05},
date={2009-05-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Improving-agent-bidding-in-Power-Stock-Markets-through-a-data-mining-enhanced-agent-platform.pdf},
keywords={exploit data mining;multi-agent platform;predict Power Load;predict Power Settlement},
abstract={Like in any other auctioning environment, entities participating in Power Stock Markets have to compete against each other in order to maximize their own revenue. Towards the satisfaction of their goal, these entities (agents - human or software ones) may adopt different types of strategies - from naive to extremely complex ones - in order to identify the most profitable goods compilation, the appropriate price to buy or sell, etc., always under time pressure and auction environment constraints. Decisions become even more difficult to make in case one takes the vast volumes of historical data available into account: goods' prices, market fluctuations, bidding habits and buying opportunities. Within the context of this paper we present Cassandra, a multi-agent platform that exploits data mining, in order to extract efficient models for predicting Power Settlement prices and Power Load values in typical Day-ahead Power markets. The functionality of Cassandra is discussed, while focus is given to the bidding mechanism of Cassandra's agents, and the way data mining analysis is performed in order to generate the optimal forecasting models. Cassandra has been tested in a real-world scenario, with data derived from the Greek Energy Stock market.}
}

Antonios C. Chrysopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain"
Third Electrical and Computer Engineering Department Student Conference, pp. 245-267, IGI Global, Thessaloniki, Greece, 2009 Apr

The scope of this chapter is the presentation of Data Mining techniques for knowledge extraction in proteomics, taking into account both the particular features of most proteomics issues (such as data retrieval and system complexity), and the opportunities and constraints found in a Grid environment. The chapter discusses the way new and potentially useful knowledge can be extracted from proteomics data, utilizing Grid resources in a transparent way. Protein classification is introduced as a current research issue in proteomics, which also demonstrates most of the domain-specific traits. An overview of common and custom-made Data Mining algorithms is provided, with emphasis on the specific needs of protein classification problems. A unified methodology is presented for complex Data Mining processes on the Grid, highlighting the different application types and the benefits and drawbacks in each case. Finally, the methodology is validated through real-world case studies, deployed over the EGEE grid environment.

@inproceedings{2009ChrysopoulosECEDSC,
author={Antonios C. Chrysopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain},
booktitle={Third Electrical and Computer Engineering Department Student Conference},
pages={245-267},
publisher={IGI Global},
address={Thessaloniki, Greece},
year={2009},
month={04},
date={2009-04-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Creating-and-Reusing-Metric-Graphs-for-Evaluating-Agent-Performance-in-the-Supply-Chain-Management-Domain.pdf},
keywords={Evaluating Agent Performance},
abstract={The scope of this chapter is the presentation of Data Mining techniques for knowledge extraction in proteomics, taking into account both the particular features of most proteomics issues (such as data retrieval and system complexity), and the opportunities and constraints found in a Grid environment. The chapter discusses the way new and potentially useful knowledge can be extracted from proteomics data, utilizing Grid resources in a transparent way. Protein classification is introduced as a current research issue in proteomics, which also demonstrates most of the domain-specific traits. An overview of common and custom-made Data Mining algorithms is provided, with emphasis on the specific needs of protein classification problems. A unified methodology is presented for complex Data Mining processes on the Grid, highlighting the different application types and the benefits and drawbacks in each case. Finally, the methodology is validated through real-world case studies, deployed over the EGEE grid environment.}
}

Christos Dimou, Fani A. Tzima, Andreas Symeonidis and Pericles Mitkas
"Specifying and Validating the Agent Performance Evaluation Methodology: The Symbiosis Use Case"
IADIS International Conference on Intelligent Systems and Agents, Algarve, Portugal, 2009 Jun

@inproceedings{2009DimouIADIS,
author={Christos Dimou and Fani A. Tzima and Andreas Symeonidis and Pericles Mitkas},
title={Specifying and Validating the Agent Performance Evaluation Methodology: The Symbiosis Use Case},
booktitle={IADIS International Conference on Intelligent Systems and Agents},
address={Algarve, Portugal},
year={2009},
month={06},
date={2009-06-17},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Specifying-and-Validating-the-Agent-Performance-Evaluation-Methodology.pdf},
keywords={evaluation methodology;formal specification;metrics representation;Z notation}
}

2008

Journal Articles

Pericles A. Mitkas, Vassilis Koutkias, Andreas L. Symeonidis, Manolis Falelakis, Christos Diou, Irini Lekka, Anastasios T. Delopoulos, Theodoros Agorastos and Nicos Maglaveras
"Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologes: The ASSIST Approach"
Studies in Health Technology and Informatic, 136, pp. 241-246, 2008 Jan

Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.

@article{2007MitkasSHTI,
author={Pericles A. Mitkas and Vassilis Koutkias and Andreas L. Symeonidis and Manolis Falelakis and Christos Diou and Irini Lekka and Anastasios T. Delopoulos and Theodoros Agorastos and Nicos Maglaveras},
title={Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologies: The ASSIST Approach},
journal={Studies in Health Technology and Informatics},
volume={136},
pages={241-246},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Association-Studies-on-Cervical-Cancer-Facilitated-by-Inference-and-Semantic-Technologies-The-ASSIST-Approach-.pdf},
abstract={Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.}
}

Alexandros Batzios, Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"BioCrawler: An intelligent crawler for the semantic web"
Expert Systems with Applications, 36, (35), 2008 Jul

Web crawling has become an important aspect of web search, as the WWW keeps getting bigger and search engines strive to index the most important and up to date content. Many experimental approaches exist, but few actually try to model the current behaviour of search engines, which is to crawl and refresh the sites they deem as important, much more frequently than others. BioCrawler mirrors this behaviour on the semantic web, by applying the learning strategies adopted in previous work on ecosystem simulation, called BioTope. BioCrawler employs the principles of BioTope

@article{2008BatziosESwA,
author={Alexandros Batzios and Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={BioCrawler: An intelligent crawler for the semantic web},
journal={Expert Systems with Applications},
volume={36},
number={35},
year={2008},
month={07},
date={2008-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/BioCrawler-An-intelligent-crawler-for-the-semantic-web.pdf},
keywords={semantic web;Multi-Agent System;focused crawling;web crawling},
abstract={Web crawling has become an important aspect of web search, as the WWW keeps getting bigger and search engines strive to index the most important and up to date content. Many experimental approaches exist, but few actually try to model the current behaviour of search engines, which is to crawl and refresh the sites they deem as important, much more frequently than others. BioCrawler mirrors this behaviour on the semantic web, by applying the learning strategies adopted in previous work on ecosystem simulation, called BioTope. BioCrawler employs the principles of BioTope}
}
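
A hedged Python sketch of the revisit policy the abstract alludes to (re-crawl pages deemed important more frequently) is given below: a priority queue that orders URLs by learned importance weighted by staleness. The names and the priority formula are illustrative assumptions, not BioCrawler's code.

import heapq
import time

class RevisitScheduler:
    """Order URLs so that important and long-unvisited pages are crawled first."""

    def __init__(self):
        self._heap = []  # entries: (-priority, url)

    def schedule(self, url, importance, last_visit):
        staleness = time.time() - last_visit
        priority = importance * staleness  # important and stale => crawl sooner
        heapq.heappush(self._heap, (-priority, url))

    def next_url(self):
        return heapq.heappop(self._heap)[1] if self._heap else None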

Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis, Ioannis Kontogounis and Pericles A. Mitkas
"Agent Mertacor: A robust design for dealing with uncertainty and variation in SCM environments"
Expert Systems with Applications, 35, (3), pp. 591-603, 2008 Jan

Supply Chain Management (SCM) has recently entered a new era, where the old-fashioned static, long-term relationships between involved actors are being replaced by new, dynamic negotiating schemas, established over virtual organizations and trading marketplaces. SCM environments now operate under strict policies that all interested parties (suppliers, manufacturers, customers) have to abide by, in order to participate. And, though such dynamic markets provide greater profit potential, they also conceal greater risks, since competition is tougher and request and demand may vary significantly in the quest for maximum benefit. The need for efficient SCM actors is thus implied, actors that may handle the deluge of (either complete or incomplete) information generated, perceive variations and exploit the full potential of the environments they inhabit. In this context, we introduce Mertacor, an agent that employs robust mechanisms for dealing with all SCM facets and for trading within dynamic and competitive SCM environments. Its efficiency has been extensively tested in one of the most challenging SCM environments, the Trading Agent Competition (TAC) SCM game. This paper provides an extensive analysis of Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and thoroughly discusses Mertacor

@article{2008ChatzidimitriouESwA,
author={Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Ioannis Kontogounis and Pericles A. Mitkas},
title={Agent Mertacor: A robust design for dealing with uncertainty and variation in SCM environments},
journal={Expert Systems with Applications},
volume={35},
number={3},
pages={591-603},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Agent-Mertacor-A-robust-design-for-dealing-with-uncertaintyand-variation-in-SCM-environments.pdf},
keywords={machine learning;Agent intelligence;Autonomous trading agents;Electronic commerce},
abstract={Supply Chain Management (SCM) has recently entered a new era, where the old-fashioned static, long-term relationships between involved actors are being replaced by new, dynamic negotiating schemas, established over virtual organizations and trading marketplaces. SCM environments now operate under strict policies that all interested parties (suppliers, manufacturers, customers) have to abide by, in order to participate. And, though such dynamic markets provide greater profit potential, they also conceal greater risks, since competition is tougher and request and demand may vary significantly in the quest for maximum benefit. The need for efficient SCM actors is thus implied, actors that may handle the deluge of (either complete or incomplete) information generated, perceive variations and exploit the full potential of the environments they inhabit. In this context, we introduce Mertacor, an agent that employs robust mechanisms for dealing with all SCM facets and for trading within dynamic and competitive SCM environments. Its efficiency has been extensively tested in one of the most challenging SCM environments, the Trading Agent Competition (TAC) SCM game. This paper provides an extensive analysis of Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and thoroughly discusses Mertacor}
}

Alexandros Batzios, Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"An Integrated Infrastructure for Monitoring and Evaluating Agent-based Systems"
Expert Systems with Applications, 36, (4), 2008 Sep

Driven by the urging need to thoroughly identify and accentuate the merits of agent technology, we present in this paper, MEANDER, an integrated framework for evaluating the performance of agent-based systems. The proposed framework is based on the Agent Performance Evaluation (APE) methodology, which provides guidelines and representation tools for performance metrics, measurement collection and aggregation of measurements. MEANDER comprises a series of integrated software components that implement and automate various parts of the methodology and assist evaluators in their tasks. The main objective of MEANDER is to integrate performance evaluation processes into the entire development lifecycle, while clearly separating any evaluation-specific code from the application code at hand. In this paper, we describe in detail the architecture and functionality of the MEANDER components and test its applicability to an existing multi-agent system.

@article{2008DimouESwA,
author={Alexandros Batzios and Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={An Integrated Infrastructure for Monitoring and Evaluating Agent-based Systems},
journal={Expert Systems with Applications},
volume={36},
number={4},
year={2008},
month={09},
date={2008-09-09},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-integrated-infrastructure-for-monitoring-and-evaluating-agent-based-systems.pdf},
keywords={performance evaluation;automated software engineering;fuzzy measurement aggregation;software agents},
abstract={Driven by the urging need to thoroughly identify and accentuate the merits of agent technology, we present in this paper, MEANDER, an integrated framework for evaluating the performance of agent-based systems. The proposed framework is based on the Agent Performance Evaluation (APE) methodology, which provides guidelines and representation tools for performance metrics, measurement collection and aggregation of measurements. MEANDER comprises a series of integrated software components that implement and automate various parts of the methodology and assist evaluators in their tasks. The main objective of MEANDER is to integrate performance evaluation processes into the entire development lifecycle, while clearly separating any evaluation-specific code from the application code at hand. In this paper, we describe in detail the architecture and functionality of the MEANDER components and test its applicability to an existing multi-agent system.}
}

Andreas L. Symeonidis, Vivia Nikolaidou and Pericles A. Mitkas
"Sketching a methodology for efficient supply chain management agents enhanced through data mining"
International Journal of Intelligent Information and Database Systems (IJIIDS), 2, (1), 2008 Feb

Supply Chain Management (SCM) environments demand intelligent solutions, which can perceive variations and achieve maximum revenue. This highlights the importance of a commonly accepted design methodology, since most current implementations are application-specific. In this work, we present a methodology for building an intelligent trading agent and evaluating its performance at the Trading Agent Competition (TAC) SCM game. We justify architectural choices made, ranging from the adoption of specific Data Mining (DM) techniques, to the selection of the appropriate metrics for agent performance evaluation. Results indicate that our agent has proven capable of providing advanced SCM solutions in demanding SCM environments.

@article{2008SymeoniidsIJIIDS,
author={Andreas L. Symeonidis and Vivia Nikolaidou and Pericles A. Mitkas},
title={Sketching a methodology for efficient supply chain management agents enhanced through data mining},
journal={International Journal of Intelligent Information and Database Systems (IJIIDS)},
volume={2},
number={1},
year={2008},
month={02},
date={2008-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Sketching-a-methodology-for-efficient-supply-chain-management-agents-enhanced-through-data-mining.pdf},
keywords={performance evaluation;Intelligent agents;agent-based systems;multi-agent systems;MAS;trading agent competition;agent-oriented methodology;bidding;forecasting;SCM},
abstract={Supply Chain Management (SCM) environments demand intelligent solutions, which can perceive variations and achieve maximum revenue. This highlights the importance of a commonly accepted design methodology, since most current implementations are application-specific. In this work, we present a methodology for building an intelligent trading agent and evaluating its performance at the Trading Agent Competition (TAC) SCM game. We justify architectural choices made, ranging from the adoption of specific Data Mining (DM) techniques, to the selection of the appropriate metrics for agent performance evaluation. Results indicate that our agent has proven capable of providing advanced SCM solutions in demanding SCM environments.}
}

2008

Conference Papers

Pericles A. Mitkas, Vassilis Koutkias, Andreas Symeonidis, Manolis Falelakis, Christos Diou, Irini Lekka, Anastasios T. Delopoulos, Theodoros Agorastos and Nicos Maglaveras
"Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologes: The ASSIST Approach"
MIE, Goteborg, Sweden, 2008 May

Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.

@conference{2008MitkasMIE,
author={Pericles A. Mitkas and Vassilis Koutkias and Andreas Symeonidis and Manolis Falelakis and Christos Diou and Irini Lekka and Anastasios T. Delopoulos and Theodoros Agorastos and Nicos Maglaveras},
title={Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologies: The ASSIST Approach},
booktitle={MIE},
address={Goteborg, Sweden},
year={2008},
month={05},
date={2008-05-25},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Association-Studies-on-Cervical-Cancer-Facilitated-by-Inference-and-Semantic-Technologies-The-ASSIST-Approach-.pdf},
keywords={agent performance evaluation;Supply Chain Management systems},
abstract={Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.}
}

2008

Inproceedings Papers

Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis and Pericles A. Mitkas
"Data Mining-Driven Analysis and Decomposition in Agent Supply Chain Management Networks"
IEEE/WIC/ACM Workshop on Agents and Data Mining Interaction, pp. 558-561, IEEE Computer Society, Sydney, Australia, 2008 Dec

In complex and dynamic environments where interdependencies cannot monotonously determine causality, data mining techniques may be employed in order to analyze the problem, extract key features and identify pivotal factors. Typical cases of such complexity and dynamicity are supply chain networks, where a number of involved stakeholders struggle towards their own benefit. These stakeholders may be agents with varying degrees of autonomy and intelligence, in a constant effort to establish beneficiary contracts and maximize own revenue. In this paper, we illustrate the benefits of data mining analysis on a well-established agent supply chain management network. We apply data mining techniques, both at a macro and micro level, analyze the results and discuss them in the context of agent performance improvement.

@inproceedings{2008ChatzidimitriouADMI,
author={Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Data Mining-Driven Analysis and Decomposition in Agent Supply Chain Management Networks},
booktitle={IEEE/WIC/ACM Workshop on Agents and Data Mining Interaction},
pages={558-561},
publisher={IEEE Computer Society},
address={Sydney, Australia},
year={2008},
month={12},
date={2008-12-08},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Data_Mining-Driven_Analysis_and_Decomposition_in_A.pdf},
keywords={fuzzy logic},
abstract={In complex and dynamic environments where interdependencies cannot monotonously determine causality, data mining techniques may be employed in order to analyze the problem, extract key features and identify pivotal factors. Typical cases of such complexity and dynamicity are supply chain networks, where a number of involved stakeholders struggle towards their own benefit. These stakeholders may be agents with varying degrees of autonomy and intelligence, in a constant effort to establish beneficiary contracts and maximize own revenue. In this paper, we illustrate the benefits of data mining analysis on a well-established agent supply chain management network. We apply data mining techniques, both at a macro and micro level, analyze the results and discuss them in the context of agent performance improvement.}
}

Christos Dimou, Manolis Falelakis, Andreas Symeonidis, Anastasios Delopoulos and Pericles A. Mitkas
"Constructing Optimal Fuzzy Metric Trees for Agent Performance Evaluation"
IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '08), pp. 336--339, IEEE Computer Society, Sydney, Australia, 2008 Dec

The field of multi-agent systems has reached a significant degree of maturity with respect to frameworks, standards and infrastructures. Focus is now shifted to performance evaluation of real-world applications, in order to quantify the practical benefits and drawbacks of agent systems. Our approach extends current work on generic evaluation methodologies for agents by employing fuzzy weighted trees for organizing evaluation-specific concepts/metrics and linguistic terms to intuitively represent and aggregate measurement information. Furthermore, we introduce meta-metrics that measure the validity and complexity of the contribution of each metric in the overall performance evaluation. These are all incorporated for selecting optimal subsets of metrics and designing the evaluation process in compliance with the demands/restrictions of various evaluation setups, thus minimizing intervention by domain experts. The applicability of the proposed methodology is demonstrated through the evaluation of a real-world test case.

@inproceedings{2008DimouIAT,
author={Christos Dimou and Manolis Falelakis and Andreas Symeonidis and Anastasios Delopoulos and Pericles A. Mitkas},
title={Constructing Optimal Fuzzy Metric Trees for Agent Performance Evaluation},
booktitle={IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '08)},
pages={336--339},
publisher={IEEE Computer Society},
address={Sydney, Australia},
year={2008},
month={12},
date={2008-12-09},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Constructing-Optimal-Fuzzy-Metric-Trees-for-Agent-Performance-Evaluation.pdf},
keywords={fuzzy logic},
abstract={The field of multi-agent systems has reached a significant degree of maturity with respect to frameworks, standards and infrastructures. Focus is now shifted to performance evaluation of real-world applications, in order to quantify the practical benefits and drawbacks of agent systems. Our approach extends current work on generic evaluation methodologies for agents by employing fuzzy weighted trees for organizing evaluation-specific concepts/metrics and linguistic terms to intuitively represent and aggregate measurement information. Furthermore, we introduce meta-metrics that measure the validity and complexity of the contribution of each metric in the overall performance evaluation. These are all incorporated for selecting optimal subsets of metrics and designing the evaluation process in compliance with the demands/restrictions of various evaluation setups, thus minimizing intervention by domain experts. The applicability of the proposed methodology is demonstrated through the evaluation of a real-world test case.}
}
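
For intuition, a toy Python sketch of bottom-up aggregation over a weighted metric tree follows; the actual method additionally employs fuzzy linguistic terms and the validity/complexity meta-metrics described above, which this plain weighted average omits. The data structure and example weights are assumptions.

def aggregate(node):
    """Leaf: {'score': s}; internal node: {'children': [(weight, child), ...]}."""
    if 'score' in node:
        return node['score']
    total = sum(w for w, _ in node['children'])
    return sum(w * aggregate(child) for w, child in node['children']) / total

# Hypothetical two-metric example: overall score = 0.7*0.9 + 0.3*0.6 = 0.81
tree = {'children': [(0.7, {'score': 0.9}),   # e.g. normalized response time
                     (0.3, {'score': 0.6})]}  # e.g. normalized accuracy
print(aggregate(tree))  # 0.81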

Christos Dimou, Kyriakos C. Chatzidimitriou, Andreas Symeonidis and Pericles A. Mitkas
"Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain"
First Workshop on Knowledge Reuse (KREUSE), Beijing, China, 2008 May

The overwhelming demand for efficient agent performance in Supply Chain Management systems, as exemplified by numerous international competitions, raises the issue of defining and using generalized methods for performance evaluation. Up until now, most researchers test their findings in an ad-hoc manner, often having to re-invent existing evaluation-specific knowledge. In this position paper, we tackle the key issue of defining and using metrics within the context of evaluating agent performance in the SCM domain. We propose the Metrics Representation Graph, a structure that organizes performance metrics in hierarchical manner, and perform a preliminary assessment by instantiating an MRG for the TAC SCM competition, one of the most demanding SCM competitions currently established. We envision the automated generation of the MRG, as well as appropriate contribution from the TAC community towards the finalization of the MRG, so that it will be readily available for future performance evaluations.

@inproceedings{2008DimouKREUSE,
author={Christos Dimou and Kyriakos C. Chatzidimitriou and Andreas Symeonidis and Pericles A. Mitkas},
title={Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain},
booktitle={First Workshop on Knowledge Reuse (KREUSE)},
address={Beijing, China},
year={2008},
month={05},
date={2008-05-25},
url={http://issel.ee.auth.gr/wp-content/uploads/Dimou-KREUSE-08.pdf},
keywords={agent performance evaluation;Supply Chain Management systems},
abstract={The overwhelming demand for efficient agent performance in Supply Chain Management systems, as exemplified by numerous international competitions, raises the issue of defining and using generalized methods for performance evaluation. Up until now, most researchers test their findings in an ad-hoc manner, often having to re-invent existing evaluation-specific knowledge. In this position paper, we tackle the key issue of defining and using metrics within the context of evaluating agent performance in the SCM domain. We propose the Metrics Representation Graph, a structure that organizes performance metrics in hierarchical manner, and perform a preliminary assessment by instantiating an MRG for the TAC SCM competition, one of the most demanding SCM competitions currently established. We envision the automated generation of the MRG, as well as appropriate contribution from the TAC community towards the finalization of the MRG, so that it will be readily available for future performance evaluations.}
}

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"Data Mining and Agent Technology: a fruitful symbiosis"
Soft Computing for Knowledge Discovery and Data Mining, pp. 327-362, Springer US, Clermont-Ferrand, France, 2008 Jan

Multi-agent systems (MAS) have grown quite popular in a wide spectrum of applications where argumentation, communication, scaling and adaptability are requested. And though the need for well-established engineering approaches for building and evaluating such intelligent systems has emerged, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability. Even existing well-tested evaluation methodologies applied in traditional software engineering prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. The following chapter aims to present such a unified and integrated methodology for a specific category of MAS. It takes all constraints and issues into account and denotes the way knowledge extracted with the use of Data mining (DM) techniques can be used for the formulation initially, and the improvement, in the long run, of agent reasoning and MAS performance. The coupling of DM and Agent Technology (AT) principles, proposed within the context of this chapter, is therefore expected to provide to the reader an efficient gateway for developing and evaluating highly reconfigurable software approaches that incorporate domain knowledge and provide sophisticated Decision Making capabilities. The main objectives of this chapter could be summarized into the following: a) introduce Agent Technology (AT) as a successful paradigm for building Data Mining (DM)-enriched applications, b) provide a methodology for (re)evaluating the performance of such DM-enriched Multi-Agent Systems (MAS), c) introduce Agent Academy II, an Agent-Oriented Software Engineering framework for building MAS that incorporate knowledge models extracted by the use of (classical and novel) DM techniques and d) denote the benefits of the proposed approach through a real-world demonstrator. This chapter provides a link between DM and AT and explains how these technologies can efficiently cooperate with each other. The exploitation of useful knowledge extracted by the use of DM may considerably improve agent infrastructures, while also increasing reusability and minimizing customization costs. The synergy between DM and AT is ultimately expected to provide MAS with higher levels of autonomy, adaptability and accuracy and, hence, intelligence.

@inproceedings{2008DimouSCKDDM,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Data Mining and Agent Technology: a fruitful symbiosis},
booktitle={Soft Computing for Knowledge Discovery and Data Mining},
pages={327-362},
publisher={Springer US},
address={Clermont-Ferrand, France},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Data-Mining-and-Agent-Technology-a-fruitful-symbiosis.pdf},
keywords={Gene Ontology;Parallel Algorithms;Protein Classification},
abstract={Multi-agent systems (MAS) have grown quite popular in a wide spectrum of applications where argumentation, communication, scaling and adaptability are requested. And though the need for well-established engineering approaches for building and evaluating such intelligent systems has emerged, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability. Even existing well-tested evaluation methodologies applied in traditional software engineering prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. The following chapter aims to present such a unified and integrated methodology for a specific category of MAS. It takes all constraints and issues into account and denotes the way knowledge extracted with the use of Data mining (DM) techniques can be used for the formulation initially, and the improvement, in the long run, of agent reasoning and MAS performance. The coupling of DM and Agent Technology (AT) principles, proposed within the context of this chapter, is therefore expected to provide to the reader an efficient gateway for developing and evaluating highly reconfigurable software approaches that incorporate domain knowledge and provide sophisticated Decision Making capabilities. The main objectives of this chapter could be summarized into the following: a) introduce Agent Technology (AT) as a successful paradigm for building Data Mining (DM)-enriched applications, b) provide a methodology for (re)evaluating the performance of such DM-enriched Multi-Agent Systems (MAS), c) introduce Agent Academy II, an Agent-Oriented Software Engineering framework for building MAS that incorporate knowledge models extracted by the use of (classical and novel) DM techniques and d) denote the benefits of the proposed approach through a real-world demonstrator. This chapter provides a link between DM and AT and explains how these technologies can efficiently cooperate with each other. The exploitation of useful knowledge extracted by the use of DM may considerably improve agent infrastructures, while also increasing reusability and minimizing customization costs. The synergy between DM and AT is ultimately expected to provide MAS with higher levels of autonomy, adaptability and accuracy and, hence, intelligence.}
}

Pericles A. Mitkas, Christos Maramis, Anastasios N. Delopoulos, Andreas Symeonidis, Sotiris Diplaris, Manolis Falelakis, Fotis E. Psomopoulos, Alexandros Batzios, Nikolaos Maglaveras, Irini Lekka, Vasilis Koutkias, Theodoros Agorastos, T. Mikos and A. Tatsis
"ASSIST: Employing Inference and Semantic Technologies to Facilitate Association Studies on Cervical Cancer"
6th European Symposium on Biomedical Engineering, Chania, Greece, 2008 Jun

Despite the proved close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight on the origin of complex diseases. Nevertheless, association studies are most of the times inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer providing larger high quality datasets, via a software system that virtually unifies multiple heterogeneous medical records, located in various sites. Furthermore, the system is being designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for the semantic modelling and fuzzy inferencing on medical knowledge aiming at meaningful data unification: (i) The ASSIST core ontology (being the first ontology ever modelling cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypotheses testing and association studies, and at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to corresponding entities as defined in the knowledge model of ASSIST. These patient data are generated by an advanced anonymisation tool also developed within the context of the project. (2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST by offering query expression as well as patient data and statistical results visualisation to the ASSIST end-users. We also have to point out that the system is easily extendable virtually to any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover he can define cases and controls, select records adjusting their validity and use the most popular statistical tools for drawing conclusions. The logical unification of medical records of participating sites, including clinical and genetic data, to a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.

@inproceedings{2008MitkasEsbmeAssist,
author={Pericles A. Mitkas and Christos Maramis and Anastasios N. Delopoulos and Andreas Symeonidis and Sotiris Diplaris and Manolis Falelakis and Fotis E. Psomopoulos and Alexandros Batzios and Nikolaos Maglaveras and Irini Lekka and Vasilis Koutkias and Theodoros Agorastos and T. Mikos and A. Tatsis},
title={ASSIST: Employing Inference and Semantic Technologies to Facilitate Association Studies on Cervical Cancer},
booktitle={6th European Symposium on Biomedical Engineering},
address={Chania, Greece},
year={2008},
month={06},
date={2008-06-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/ASSIST-EMPLOYING-INFERENCE-AND-SEMANTIC-TECHNOLOGIES-TO-FACILITATE-ASSOCIATION-STUDIES-ON-CERVICAL-CANCER-.pdf},
keywords={cervical cancer},
abstract={Despite the proved close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight on the origin of complex diseases. Nevertheless, association studies are most of the times inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer providing larger high quality datasets, via a software system that virtually unifies multiple heterogeneous medical records, located in various sites. Furthermore, the system is being designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for the semantic modelling and fuzzy inferencing on medical knowledge aiming at meaningful data unification: (i) The ASSIST core ontology (being the first ontology ever modelling cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypotheses testing and association studies, and at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to corresponding entities as defined in the knowledge model of ASSIST. These patient data are generated by an advanced anonymisation tool also developed within the context of the project. (2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST by offering query expression as well as patient data and statistical results visualisation to the ASSIST end-users. We also have to point out that the system is easily extendable virtually to any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover he can define cases and controls, select records adjusting their validity and use the most popular statistical tools for drawing conclusions. The logical unification of medical records of participating sites, including clinical and genetic data, to a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.}
}

Theodoros Agorastos, Pericles A. Mitkas, Manolis Falelakis, Fotis E. Psomopoulos, Anastasios N. Delopoulos, Andreas Symeonidis, Sotiris Diplaris, Christos Maramis, Alexandros Batzios, Irini Lekka, Vasilis Koutkias, Themistoklis Mikos, A. Tatsis and Nikolaos Maglaveras
"Large Scale Association Studies Using Unified Data for Cervical Cancer and beyond: The ASSIST Project"
World Cancer Congress, Geneva, Switzerland, 2008 Aug

Despite the proved close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight on the origin of complex diseases. Nevertheless, association studies are most of the times inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer providing larger high quality datasets, via a software system that virtually unifies multiple heterogeneous medical records, located in various sites. Furthermore, the system is being designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for the semantic modelling and fuzzy inferencing on medical knowledge aiming at meaningful data unification: (i) The ASSIST core ontology (being the first ontology ever modelling cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypotheses testing and association studies, and at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to corresponding entities as defined in the knowledge model of ASSIST. These patient data are generated by an advanced anonymisation tool also developed within the context of the project. (2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST by offering query expression as well as patient data and statistical results visualisation to the ASSIST end-users. We also have to point out that the system is easily extendable virtually to any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover he can define cases and controls, select records adjusting their validity and use the most popular statistical tools for drawing conclusions. The logical unification of medical records of participating sites, including clinical and genetic data, to a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.

@inproceedings{WCCAssist,
author={Theodoros Agorastos and Pericles A. Mitkas and Manolis Falelakis and Fotis E. Psomopoulos and Anastasios N. Delopoulos and Andreas Symeonidis and Sotiris Diplaris and Christos Maramis and Alexandros Batzios and Irini Lekka and Vasilis Koutkias and Themistoklis Mikos and A. Tatsis and Nikolaos Maglaveras},
title={Large Scale Association Studies Using Unified Data for Cervical Cancer and beyond: The ASSIST Project},
booktitle={World Cancer Congress},
address={Geneva, Switzerland},
year={2008},
month={08},
date={2008-08-01},
url={http://issel.ee.auth.gr/wp-content/uploads/wcc2008.pdf},
keywords={Unified Data for Cervical Cancer},
abstract={Despite the proven close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight into the origin of complex diseases. Nevertheless, association studies are most often inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer by providing larger, high-quality datasets, via a software system that virtually unifies multiple heterogeneous medical records located at various sites. Furthermore, the system is designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for semantic modelling and fuzzy inferencing on medical knowledge, aiming at meaningful data unification: (i) The ASSIST core ontology (the first ontology ever to model cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypothesis testing and association studies, while at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to the corresponding entities defined in the knowledge model of ASSIST; these patient data are generated by an advanced anonymisation tool also developed within the project. (2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST, offering query expression as well as visualisation of patient data and statistical results to the ASSIST end-users. Notably, the system is easily extendable to virtually any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware, i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to the medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover, he can define cases and controls, select records adjusting their validity, and use the most popular statistical tools for drawing conclusions. The logical unification of the medical records of participating sites, including clinical and genetic data, into a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer, as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.}
}

2007

Journal Articles

Pericles A. Mitkas, Andreas L. Symeonidis, Dionisis Kehagias and Ioannis N. Athanasiadis
"Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering"
International Journal of Product Lifecycle Management, 2, (2), pp. 1097-1111, 2007 Jan

Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents

@article{2007MitkasIJPLM,
author={Pericles A. Mitkas and Andreas L. Symeonidis and Dionisis Kehagias and Ioannis N. Athanasiadis},
title={Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering},
journal={International Journal of Product Lifecycle Management},
volume={2},
number={2},
pages={1097-1111},
year={2007},
month={01},
date={2007-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Application-of-Data-Mining-and-Intelligent-Agent-Technologies-to-Concurrent-Engineering.pdf},
keywords={multi-agent systems;MAS},
abstract={Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents}
}

Andreas L. Symeonidis, Kyriakos C. Chatzidimitriou, Ioannis N. Athanasiadis and Pericles A. Mitkas
"Data mining for agent reasoning: A synergy for training intelligent agents"
Engineering Applications of Artificial Intelligence, 20, (8), pp. 1097-1111, 2007 Dec

The task-oriented nature of data mining (DM) has already been dealt with successfully through the employment of intelligent agent systems that distribute tasks, collaborate and synchronize in order to reach their ultimate goal, the extraction of knowledge. A number of sophisticated multi-agent systems (MAS) that perform DM have been developed, proving that agent technology can indeed be used to solve DM problems. Looking in the opposite direction, though, knowledge extracted through DM has not yet been exploited by MASs. The inductive nature of DM imposes logic limitations and hinders the application of the extracted knowledge to such deductive systems. This problem can be overcome, however, when certain conditions are satisfied a priori. In this paper, we present an approach that takes the relevant limitations and considerations into account and provides a gateway for employing DM techniques to augment agent intelligence. This work demonstrates how the extracted knowledge can be used for the formulation initially, and the improvement, in the long run, of agent reasoning.

@article{2007SymeonidisEAAI,
author={Andreas L. Symeonidis and Kyriakos C. Chatzidimitriou and Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={Data mining for agent reasoning: A synergy for training intelligent agents},
journal={Engineering Applications of Artificial Intelligence},
volume={20},
number={8},
pages={1097-1111},
year={2007},
month={12},
date={2007-12-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Data-mining-for-agent-reasoning-A-synergy-fortraining-intelligent-agents.pdf},
keywords={Agent Technology;Agent reasoning;Agent training;Knowledge model},
abstract={The task-oriented nature of data mining (DM) has already been dealt with successfully through the employment of intelligent agent systems that distribute tasks, collaborate and synchronize in order to reach their ultimate goal, the extraction of knowledge. A number of sophisticated multi-agent systems (MAS) that perform DM have been developed, proving that agent technology can indeed be used to solve DM problems. Looking in the opposite direction, though, knowledge extracted through DM has not yet been exploited by MASs. The inductive nature of DM imposes logic limitations and hinders the application of the extracted knowledge to such deductive systems. This problem can be overcome, however, when certain conditions are satisfied a priori. In this paper, we present an approach that takes the relevant limitations and considerations into account and provides a gateway for employing DM techniques to augment agent intelligence. This work demonstrates how the extracted knowledge can be used for the formulation initially, and the improvement, in the long run, of agent reasoning.}
}

Andreas L. Symeonidis, Ioannis N. Athanasiadis and Pericles A. Mitkas
"A retraining methodology for enhancing agent intelligence"
Knowledge-Based Systems, 20, (4), pp. 388-396, 2007 Jan

Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures, as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both the design and control of multi-agent systems (MAS), as well as agent training. We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of data mining extracted knowledge. In this paper, we present the methodology and tools for agent retraining. Through experimental results obtained with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead to the improvement of agent intelligence in the long run.
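
To make the retraining loop concrete, the following minimal sketch (hypothetical Python; the class and method names are invented and this is not the Agent Academy implementation) shows a decision-tree knowledge model being periodically re-induced from an agent's accumulated observations and swapped into its decision logic.

# Minimal sketch of agent (re)training; assumes scikit-learn is installed.
from sklearn.tree import DecisionTreeClassifier

class TrainableAgent:
    """An agent whose decision logic is a data-mining-extracted model."""

    def __init__(self):
        self.model = None      # knowledge model induced via data mining
        self.history = []      # (observation, outcome) pairs seen so far

    def observe(self, features, outcome):
        self.history.append((features, outcome))

    def retrain(self):
        # "Retraining": re-induce the logic structure from all data
        # gathered so far and swap it into the running agent.
        X = [f for f, _ in self.history]
        y = [o for _, o in self.history]
        self.model = DecisionTreeClassifier(max_depth=4).fit(X, y)

    def act(self, features):
        if self.model is None:
            return "default-action"
        return self.model.predict([features])[0]

agent = TrainableAgent()
for obs, outcome in [([1, 0], "buy"), ([0, 1], "hold"), ([1, 1], "buy")]:
    agent.observe(obs, outcome)
agent.retrain()
print(agent.act([1, 0]))   # -> "buy"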

@article{2007SymeonidisKBS,
author={Andreas L. Symeonidis and Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={A retraining methodology for enhancing agent intelligence},
journal={Knowledge-Based Systems},
volume={20},
number={4},
pages={388-396},
year={2007},
month={01},
date={2007-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-retraining-methodology-for-enhancing-agent-intelligence.pdf},
keywords={business data processing;logic programming},
abstract={Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures, as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both the design and control of multi-agent systems (MAS), as well as agent training. We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of data mining extracted knowledge. In this paper, we present the methodology and tools for agent retraining. Through experimental results obtained with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead to the improvement of agent intelligence in the long run.}
}

Andreas L. Symeonidis, Kyriakos C. Chatzidimitriou, Dionysios Kehagias and Pericles A. Mitkas
"A Multi-agent Infrastructure for Enhancing ERP system Intelligence"
Scalable Computing: Practice and Experience, 8, (1), pp. 101-114, 2007 Jan

Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. In this paper we present an alternative approach for incorporating adaptive business intelligence into the company

@article{2007SymeonidisSCPE,
author={Andreas L. Symeonidis and Kyriakos C. Chatzidimitriou and Dionysios Kehagias and Pericles A. Mitkas},
title={A Multi-agent Infrastructure for Enhancing ERP system Intelligence},
journal={Scalable Computing: Practice and Experience},
volume={8},
number={1},
pages={101-114},
year={2007},
month={01},
date={2007-01-01},
url={http://www.scpe.org/index.php/scpe/article/viewFile/401/75},
keywords={Adaptive Decision Making;ERP systems;Mutli-Agent Systems;Soft computing},
abstract={Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. In this paper we present an alternative approach for incorporating adaptive business intelligence into the company}
}

2007

Inproceedings Papers

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"Evaluating Knowledge Intensive Multi-Agent Systems"
Autonomous Intelligent Systems: Multi-Agents and Data Mining (AIS-ADM 2007), pp. 74-87, Springer Berlin / Heidelberg, St. Petersburg, Russia, 2007 Jun

As modern applications tend to stretch between large, ever-growing datasets and an increasing demand for meaningful content at the user end, more elaborate and sophisticated knowledge extraction technologies are needed. Towards this direction, the inherently contradicting technologies of deductive software agents and inductive data mining have been integrated, in order to address knowledge-intensive problems. However, there exists no generalized evaluation methodology for assessing the efficiency of such applications. On the one hand, existing data mining evaluation methods focus only on algorithmic precision, ignoring overall system performance issues. On the other hand, existing systems evaluation techniques are insufficient, as the emergent intelligent behavior of agents introduces unpredictable factors of performance. In this paper, we present a generalized methodology for the performance evaluation of intelligent agents that employ knowledge models produced through data mining. The proposed methodology consists of concise steps for selecting appropriate metrics, defining measurement methodologies and aggregating the measured performance indicators into thorough system characterizations. The paper concludes with a demonstration of the proposed methodology on a real-world application in the Supply Chain Management domain.

@inproceedings{2007DimouAIS,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Evaluating Knowledge Intensive Multi-Agent Systems},
booktitle={Autonomous Intelligent Systems: Multi-Agents and Data Mining (AIS-ADM 2007)},
pages={74-87},
publisher={Springer Berlin / Heidelberg},
address={St. Petersburg, Russia},
year={2007},
month={06},
date={2007-06-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Evaluating-Knowledge-Intensive-Multi-agent-Systems.pdf},
keywords={air pollution;decision making;environmental science computing},
abstract={As modern applications tend to stretch between large, ever-growing datasets and an increasing demand for meaningful content at the user end, more elaborate and sophisticated knowledge extraction technologies are needed. Towards this direction, the inherently contradicting technologies of deductive software agents and inductive data mining have been integrated, in order to address knowledge-intensive problems. However, there exists no generalized evaluation methodology for assessing the efficiency of such applications. On the one hand, existing data mining evaluation methods focus only on algorithmic precision, ignoring overall system performance issues. On the other hand, existing systems evaluation techniques are insufficient, as the emergent intelligent behavior of agents introduces unpredictable factors of performance. In this paper, we present a generalized methodology for the performance evaluation of intelligent agents that employ knowledge models produced through data mining. The proposed methodology consists of concise steps for selecting appropriate metrics, defining measurement methodologies and aggregating the measured performance indicators into thorough system characterizations. The paper concludes with a demonstration of the proposed methodology on a real-world application in the Supply Chain Management domain.}
}

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"Towards a Generic Methodology for Evaluating MAS Performance"
IEEE International Conference on Integration of Knowledge Intensive Multi-Agent Systems - KIMAS '07, pp. 174-179, Springer Berlin / Heidelberg, Waltham, MA, USA, 2007 Apr

As Agent Technology (AT) becomes a well-established engineering field of computing, the need for generalized, standardized methodologies for agent evaluation is imperative. Despite the plethora of available development tools and theories that researchers in agent computing have access to, there is a remarkable lack of general metrics, tools, benchmarks and experimental methods for formal validation and comparison of existing or newly developed systems. It is argued that AT has reached a certain degree of maturity, and it is therefore feasible to move from ad-hoc, domain-specific evaluation methods to standardized, repeatable and easily verifiable procedures. In this paper, we outline a first attempt towards a generic evaluation methodology for MAS performance. Instead of following the research path towards defining more powerful mathematical description tools for determining intelligence and performance metrics, we adopt an engineering point of view on the problem of deploying a methodology that is both implementation- and domain-independent. The proposed methodology consists of a concise set of steps, novel theoretical representation tools and appropriate software tools that assist evaluators in selecting the appropriate metrics, and in undertaking measurement and aggregation techniques for the system at hand.

@inproceedings{2007DimouKIMAS,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Towards a Generic Methodology for Evaluating MAS Performance},
booktitle={IEEE International Conference on Integration of Knowledge Intensive Multi-Agent Systems - KIMAS '07},
pages={174--179},
publisher={Springer Berlin / Heidelberg},
address={Waltham, MA, USA},
year={2007},
month={04},
date={2007-04-29},
url={http://issel.ee.auth.gr/wp-content/uploads/Dimou-KIMAS-07.pdf},
keywords={agent evaluation},
abstract={As Agent Technology (AT) becomes a well-established engineering field of computing, the need for generalized, standardized methodologies for agent evaluation is imperative. Despite the plethora of available development tools and theories that researchers in agent computing have access to, there is a remarkable lack of general metrics, tools, benchmarks and experimental methods for formal validation and comparison of existing or newly developed systems. It is argued that AT has reached a certain degree of maturity, and it is therefore feasible to move from ad-hoc, domain-specific evaluation methods to standardized, repeatable and easily verifiable procedures. In this paper, we outline a first attempt towards a generic evaluation methodology for MAS performance. Instead of following the research path towards defining more powerful mathematical description tools for determining intelligence and performance metrics, we adopt an engineering point of view on the problem of deploying a methodology that is both implementation- and domain-independent. The proposed methodology consists of a concise set of steps, novel theoretical representation tools and appropriate software tools that assist evaluators in selecting the appropriate metrics, and in undertaking measurement and aggregation techniques for the system at hand.}
}

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"An agent structure for evaluating micro-level MAS performance"
7th Workshop on Performance Metrics for Intelligent Systems - PerMIS-07, pp. 243-250, IEEE Computer Society, Gaithersburg, MD, 2007 Aug

Although the need for well-established engineering approaches to Intelligent Systems (IS) performance evaluation is pressing, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability, multi-disciplinary issues and the immaturity of the field of IS. Even existing well-tested evaluation methodologies applied in other domains, such as (traditional) software engineering, prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. In this paper, we present a generic methodology and associated tools for evaluating the performance of IS, by exploiting the software agent paradigm as a representative modeling concept for intelligent systems. Based on the assessment of the observable behavior of agents or multi-agent systems, the proposed methodology provides a concise set of guidelines and representation tools for evaluators to use. The methodology comprises three main tasks, namely metrics selection, monitoring agent activities for appropriate measurements, and aggregation of the conducted measurements. Coupled to this methodology is the Evaluator Agent Framework, which aims at the automation of most of the steps of the methodology, by providing Graphical User Interfaces for metrics organization and results presentation, as well as a code-generating module that produces a skeleton of a monitoring agent. Once this agent is completed with domain-specific code, it is appended to the runtime of a multi-agent system and collects information from observable events and messages. Both the evaluation methodology and the automation framework are tested and demonstrated in Symbiosis, a MAS simulation environment for competing groups of autonomous entities.
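
The metrics-selection / measurement / aggregation pipeline that these evaluation papers describe can be illustrated with a minimal sketch (hypothetical Python; the metric names, ranges, and weights are invented, and this is not the Evaluator Agent Framework API):

# Sketch: normalize raw measurements and aggregate them into one score.
def normalize(value, lo, hi):
    """Map a raw measurement onto [0, 1] given its expected range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def aggregate(scores, weights):
    """Weighted average of normalized metrics -> overall performance index."""
    total = sum(weights[name] for name in scores)
    return sum(weights[name] * score for name, score in scores.items()) / total

# Hypothetical measurements collected by a monitoring agent.
scores = {
    "response_time": 1.0 - normalize(120.0, lo=0.0, hi=500.0),  # lower raw value is better
    "task_success":  normalize(0.92, lo=0.0, hi=1.0),
}
weights = {"response_time": 1.0, "task_success": 2.0}
print(aggregate(scores, weights))   # ~0.87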

@inproceedings{2007DimouPERMIS,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={An agent structure for evaluating micro-level MAS performance},
booktitle={7th Workshop on Performance Metrics for Intelligent Systems - PerMIS-07},
pages={243--250},
publisher={IEEE Computer Society},
address={Gaithersburg, MD},
year={2007},
month={08},
date={2007-08-28},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-agent-structure-for-evaluating-micro-level-MAS-performance.pdf},
keywords={automated evaluation;autonomous agents;performance evaluation methodology},
abstract={Although the need for well-established engineering approaches to Intelligent Systems (IS) performance evaluation is pressing, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability, multi-disciplinary issues and the immaturity of the field of IS. Even existing well-tested evaluation methodologies applied in other domains, such as (traditional) software engineering, prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. In this paper, we present a generic methodology and associated tools for evaluating the performance of IS, by exploiting the software agent paradigm as a representative modeling concept for intelligent systems. Based on the assessment of the observable behavior of agents or multi-agent systems, the proposed methodology provides a concise set of guidelines and representation tools for evaluators to use. The methodology comprises three main tasks, namely metrics selection, monitoring agent activities for appropriate measurements, and aggregation of the conducted measurements. Coupled to this methodology is the Evaluator Agent Framework, which aims at the automation of most of the steps of the methodology, by providing Graphical User Interfaces for metrics organization and results presentation, as well as a code-generating module that produces a skeleton of a monitoring agent. Once this agent is completed with domain-specific code, it is appended to the runtime of a multi-agent system and collects information from observable events and messages. Both the evaluation methodology and the automation framework are tested and demonstrated in Symbiosis, a MAS simulation environment for competing groups of autonomous entities.}
}

Fani A. Tzima, Andreas L. Symeonidis and Pericles A. Mitkas
"Symbiosis: using predator-prey games as a test bed for studying competitive coevolution"
IEEE KIMAS conference, pp. 115-120, Springer Berlin / Heidelberg, Waltham, Massachusetts, 2007 Apr

The animat approach constitutes an intriguing attempt to study and comprehend the behavior of adaptive, learning entities in complex environments. Further inspired by the notions of co-evolution and evolutionary arms races, we have developed Symbiosis, a virtual ecosystem that hosts two self-organizing, combating species – prey and predators. All animats live and evolve in this shared environment; they are self-maintaining and engage in a series of vital activities (nutrition, growth, communication) with the ultimate goals of survival and reproduction. The main objective of Symbiosis is to study the behavior of ecosystem members, especially in terms of the emergent learning mechanisms and the effect of co-evolution on the evolved behavioral strategies. In this direction, several indicators are used to assess individual behavior, with the overall effectiveness metric depending strongly on the animats' net energy gain and reproduction rate. Several experiments have been conducted with the developed simulator under various environmental conditions. Overall, the experimental results support our original hypothesis that co-evolution is a driving factor in the animat learning procedure.

@inproceedings{2007TzimaKIMAS,
author={Fani A. Tzima and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Symbiosis: using predator-prey games as a test bed for studying competitive coevolution},
booktitle={IEEE KIMAS conference},
pages={115-120},
publisher={Springer Berlin / Heidelberg},
address={Waltham, Massachusetts},
year={2007},
month={04},
date={2007-04-29},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Symbiosis-using-predator-prey-games-as-a-test-bed-for-studying-competitive-coevolution.pdf},
keywords={artificial life;learning (artificial intelligence);predator-prey systems},
abstract={The animat approach constitutes an intriguing attempt to study and comprehend the behavior of adaptive, learning entities in complex environments. Further inspired by the notions of co-evolution and evolutionary arms races, we have developed Symbiosis, a virtual ecosystem that hosts two self-organizing, combating species – prey and predators. All animats live and evolve in this shared environment; they are self-maintaining and engage in a series of vital activities (nutrition, growth, communication) with the ultimate goals of survival and reproduction. The main objective of Symbiosis is to study the behavior of ecosystem members, especially in terms of the emergent learning mechanisms and the effect of co-evolution on the evolved behavioral strategies. In this direction, several indicators are used to assess individual behavior, with the overall effectiveness metric depending strongly on the animats' net energy gain and reproduction rate. Several experiments have been conducted with the developed simulator under various environmental conditions. Overall, the experimental results support our original hypothesis that co-evolution is a driving factor in the animat learning procedure.}
}

Konstantinos N. Vavliakis, Andreas L. Symeonidis, Georgios T. Karagiannis and Pericles A. Mitkas
"Eikonomia-An Integrated Semantically Aware Tool for Description and Retrieval of Byzantine Art Information"
ICTAI, pp. 279-282, IEEE Computer Society, Washington, DC, USA, 2007 Oct

Semantic annotation and querying is currently applied in a number of diverse disciplines, proving the added value of such an approach and, consequently, the need for more elaborate – either case-specific or generic – tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, have been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, query results projection and restrictions creation.

@inproceedings{2007VavliakisICTAI,
author={Konstantinos N. Vavliakis and Andreas L. Symeonidis and Georgios T. Karagiannis and Pericles A. Mitkas},
title={Eikonomia-An Integrated Semantically Aware Tool for Description and Retrieval of Byzantine Art Information},
booktitle={ICTAI},
pages={279--282},
publisher={IEEE Computer Society},
address={Washington, DC, USA},
year={2007},
month={10},
date={2007-10-29},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Eikonomia-–-An-Integrated-Semantically-Aware-Tool-for-Description-and-Retrieval-of-Byzantine-Art-Information-.pdf},
keywords={art;inference mechanisms;ontologies (artificial intelligence);query processing},
abstract={Semantic annotation and querying is currently applied in a number of diverse disciplines, proving the added value of such an approach and, consequently, the need for more elaborate – either case-specific or generic – tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, have been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, query results projection and restrictions creation.}
}

2006

Journal Articles

Sotiris Diplaris, Andreas L. Symeonidis, Pericles A. Mitkas, Georgios Banos and Z. Abas
"A decision-tree-based alarming system for the validation of national genetic evaluations"
Computers and Electronics in Agriculture, 52, (1-2), pp. 21-35, 2006 Jun

The aim of this work was to explore possibilities to build an alarming system based on the results of the application of data mining (DM) techniques in genetic evaluations of dairy cattle, in order to assess and assure data quality. The technique used combined data mining using classification and decision-tree algorithms, Gaussian binned fitting functions, and hypothesis tests. Data were quarterly national genetic evaluations, computed between February 1999 and February 2003 in nine countries. Each evaluation run included 73,000-90,000 bull records complete with their genetic values and evaluation information. Milk production traits were considered. Data mining algorithms were applied separately for each country and evaluation run to search for associations across several dimensions, including bull origin, type of proof, age of bull, and number of daughters. Then, data in each node were fitted to the Gaussian function and the quality of the fit was measured, thus providing a measure of the quality of data. In order to evaluate and ultimately predict decision-tree models, the implemented architecture can compare the node probabilities between two models and decide on their similarity, using hypothesis tests for the standard deviation of their distribution. The key utility of this technique lies in its capacity to identify the exact node where anomalies occur, and to fire a focused alarm pointing to erroneous data.
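
A simplified sketch of the node-level check follows (hypothetical Python; the paper fits Gaussians to binned node data and tests the standard deviation of node distributions, whereas this sketch substitutes Kolmogorov-Smirnov tests from SciPy, so it illustrates the idea rather than the exact method):

# Sketch: assess the Gaussian fit inside one decision-tree node and compare
# the same node across two evaluation runs, firing an alarm on divergence.
import numpy as np
from scipy import stats

def node_fit_quality(values):
    """Goodness-of-fit p-value of the node data against a fitted Gaussian."""
    mu, sigma = np.mean(values), np.std(values, ddof=1)
    return stats.kstest(values, "norm", args=(mu, sigma)).pvalue

def alarm_on_node(run_a, run_b, alpha=0.01):
    """True when the node's distribution changed between two runs."""
    return stats.ks_2samp(run_a, run_b).pvalue < alpha

rng = np.random.default_rng(0)
feb = rng.normal(100.0, 12.0, size=5000)   # genetic values in one node
may = rng.normal(104.0, 12.0, size=5000)   # same node, next evaluation run
print(node_fit_quality(feb), alarm_on_node(feb, may))  # good fit, alarm fires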

@article{2006DiplarisCEA,
author={Sotiris Diplaris and Andreas L. Symeonidis and Pericles A. Mitkas and Georgios Banos and Z. Abas},
title={A decision-tree-based alarming system for the validation of national genetic evaluations},
journal={Computers and Electronics in Agriculture},
volume={52},
number={1--2},
pages={21--35},
year={2006},
month={06},
date={2006-06-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-decision-tree-based-alarming-system-for-the-validation-of-national-genetic-evaluations.pdf},
keywords={Dairy cattle evaluations;Alarming technique;Genetic evaluations;Quality control},
abstract={The aim of this work was to explore possibilities to build an alarming system based on the results of the application of data mining (DM) techniques in genetic evaluations of dairy cattle, in order to assess and assure data quality. The technique used combined data mining using classification and decision-tree algorithms, Gaussian binned fitting functions, and hypothesis tests. Data were quarterly national genetic evaluations, computed between February 1999 and February 2003 in nine countries. Each evaluation run included 73,000-90,000 bull records complete with their genetic values and evaluation information. Milk production traits were considered. Data mining algorithms were applied separately for each country and evaluation run to search for associations across several dimensions, including bull origin, type of proof, age of bull, and number of daughters. Then, data in each node were fitted to the Gaussian function and the quality of the fit was measured, thus providing a measure of the quality of data. In order to evaluate and ultimately predict decision-tree models, the implemented architecture can compare the node probabilities between two models and decide on their similarity, using hypothesis tests for the standard deviation of their distribution. The key utility of this technique lies in its capacity to identify the exact node where anomalies occur, and to fire a focused alarm pointing to erroneous data.}
}

Andreas L. Symeonidis, Dionisis D. Kehagias, Pericles A. Mitkas and Adamantios Koumpis
"Open Source Supply Chains"
International Journal of Advanced Manufacturing Systems (IJAMS), 9, (1), pp. 33-42, 2006 Jan

Enterprise Resource Planning (ERP) systems tend to deploy Supply Chains, in order to successfully integrate customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs, while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated. Furthermore, the development of such systems is subject to strict licensing, since the exploitation of such software is usually proprietary. This leads to a monolithic approach and to sub-utilization of efforts from all sides. Introducing a completely new paradigm of how primitive Supply Chain Management (SCM) rules apply on ERP systems, we have developed a framework as an Open Source Multi-Agent System that introduces adaptive intelligence as a powerful add-on for ERP software customization. In this paper the SCM system developed is described, and the expected benefits of the open source initiative employed are illustrated.

@article{2006SymeonidisIJAMS,
author={Andreas L. Symeonidis and Dionisis D. Kehagias and Pericles A. Mitkas and Adamantios Koumpis},
title={Open Source Supply Chains},
journal={International Journal of Advanced Manufacturing Systems (IJAMS)},
volume={9},
number={1},
pages={33-42},
year={2006},
month={01},
date={2006-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Open-Source-Supply-Chains.pdf},
keywords={agent-based social simulation},
abstract={Enterprise Resource Planning (ERP) systems tend to deploy Supply Chains, in order to successfully integrate customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs, while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated. Furthermore, the development of such systems is subject to strict licensing, since the exploitation of such software is usually proprietary. This leads to a monolithic approach and to sub-utilization of efforts from all sides. Introducing a completely new paradigm of how primitive Supply Chain Management (SCM) rules apply on ERP systems, we have developed a framework as an Open Source Multi-Agent System that introduces adaptive intelligence as a powerful add-on for ERP software customization. In this paper the SCM system developed is described, and the expected benefits of the open source initiative employed are illustrated.}
}

2006

Inproceedings Papers

Andreas L. Symeonidis, Dionisis Kehagias, Adamantios Koumpis and Apostolos Vontas
"Open source supply chains"
10th ISPE International Conference on Concurrent Engineering (ISPE CE 2003), pp. 31-36, A. A. Balkema Publishers, Dubai/Sharjah, UAE, 2006 Jan

Enterprise Resource Planning (ERP) systems tend to deploy Supply Chains (SC), in order to successfully integrate customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated. Furthermore, the development of such systems is subject to strict licensing, since the exploitation of such software is usually proprietary. This leads to a monolithic approach and to sub-utilization of efforts from all sides. Introducing a completely new paradigm of how primitive Supply Chain Management (SCM) rules apply on ERP systems, the developed framework is an Open Source MAS that introduces adaptive intelligence as a powerful add-on for ERP software customization. In this paper the SCM system developed is described, and the expected benefits of the open source initiative employed are illustrated.

@inproceedings{2003SymeonidisISPE,
author={Andreas L. Symeonidis and Dionisis Kehagias and Adamantios Koumpis and Apostolos Vontas},
title={Open source supply chains},
booktitle={10th ISPE International Conference on Concurrent Engineering (ISPE CE 2003)},
pages={31--36},
publisher={A. A. Balkema Publishers},
address={Dubai/Sharjah, UAE},
year={2006},
month={01},
date={2006-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Open-Source-Supply-Chains.pdf},
keywords={Agent-mediated E-commerce;Auctions},
abstract={Enterprise Resource Planning (ERP) systems tend to deploy Supply Chains (SC), in order to successfully integrate customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated. Furthermore, the development of such systems is subject to strict licensing, since the exploitation of such software is usually proprietary. This leads to a monolithic approach and to sub-utilization of efforts from all sides. Introducing a completely new paradigm of how primitive Supply Chain Management (SCM) rules apply on ERP systems, the developed framework is an Open Source MAS that introduces adaptive intelligence as a powerful add-on for ERP software customization. In this paper the SCM system developed is described, and the expected benefits of the open source initiative employed are illustrated.}
}

Z. Abas, Andreas L. Symeonidis, Alexandros Batzios, Zoi Basdagianni, Georgios Banos, Pericles A. Mitkas, E. Sinapis and A. Pampoukidou
"AMNOS-mobile: Exploiting handheld computers in efficient sheep recording"
35th ICAR, pp. 99-104, IEEE Computer Society, Kuopio, Finland, 2006 Jun

This paper focuses on AMNOS-mobile, a PDA application developed to support the tasks undertaken by sheep inspectors when visiting farms. It works in close cooperation with AMNOS, an integrated web-based platform developed to record, monitor, evaluate and manage the dairy sheep population of the Chios and Serres breeds in Greece. Within the context of this paper, the design features of AMNOS-mobile are presented and the problems tackled by the use of handheld devices are discussed, illustrating how our application can enhance recording, improve the data collection process, and help farmers manage their flocks more efficiently.

@inproceedings{2006AbasICAR,
author={Z. Abas and Andreas L. Symeonidis and Alexandros Batzios and Zoi Basdagianni and Georgios Banos and Pericles A. Mitkas and E. Sinapis and A. Pampoukidou},
title={AMNOS-mobile: Exploiting handheld computers in efficient sheep recording},
booktitle={35th ICAR},
pages={99--104},
publisher={IEEE Computer Society},
address={Kuopio, Finland},
year={2006},
month={06},
date={2006-06-06},
url={http://books.google.gr/books?id},
keywords={milk recording;data collection;handheld computers;transparent synchronization},
abstract={This paper focuses on AMNOS-mobile, a PDA application developed to support the tasks undertaken by sheep inspectors when visiting farms. It works in close cooperation with AMNOS, an integrated web-based platform developed to record, monitor, evaluate and manage the dairy sheep population of the Chios and Serres breeds in Greece. Within the context of this paper, the design features of AMNOS-mobile are presented and the problems tackled by the use of handheld devices are discussed, illustrating how our application can enhance recording, improve the data collection process, and help farmers manage their flocks more efficiently.}
}

Christos Dimou, Alexandros Batzios, Andreas L. Symeonidis and Pericles A. Mitkas
"A Multi-Agent Simulation Framework for Spiders Traversing the Semantic Web"
IEEE/WIC/ACM International Conference on Web Intelligence - WI 2006, pp. 736-739, Springer Berlin / Heidelberg, Hong Kong, China, 2006 Dec

Although search engines traditionally use spiders for traversing and indexing the web, there has not yet been any methodological attempt to model, deploy and test learning spiders. Moreover, the flourishing of the Semantic Web provides understandable information that may enhance search engines in providing more accurate results. Considering the above, we introduce BioSpider, an agent-based simulation framework for developing and testing autonomous, intelligent, semantically-focused web spiders. BioSpider assumes a direct analogy of the problem at hand with a multi-variate ecosystem, where each member is self-maintaining. The population of the ecosystem comprises cooperative spiders incorporating communication, mobility and learning skills, striving to improve efficiency. Genetic algorithms and classifier rules have been employed for spider adaptation and learning. A set of experiments has been set up in order to qualitatively test the efficacy and applicability of the proposed approach.

@inproceedings{2006DimouWI,
author={Christos Dimou and Alexandros Batzios and Andreas L. Symeonidis and Pericles A. Mitkas},
title={A Multi-Agent Simulation Framework for Spiders Traversing the Semantic Web},
booktitle={IEEE/WIC/ACM International Conference on Web Intelligence - WI 2006},
pages={736--739},
publisher={Springer Berlin / Heidelberg},
address={Hong Kong, China},
year={2006},
month={12},
date={2006-12-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A-Multi-Agent-Simulation-Framework-for-Spiders-Traversing-the-Semantic-Web.pdf},
keywords={artificial life;learning (artificial intelligence);predator-prey systems},
abstract={Although search engines traditionally use spiders for traversing and indexing the web, there has not yet been any methodological attempt to model, deploy and test learning spiders. Moreover, the flourishing of the Semantic Web provides understandable information that may enhance search engines in providing more accurate results. Considering the above, we introduce BioSpider, an agent-based simulation framework for developing and testing autonomous, intelligent, semantically-focused web spiders. BioSpider assumes a direct analogy of the problem at hand with a multi-variate ecosystem, where each member is self-maintaining. The population of the ecosystem comprises cooperative spiders incorporating communication, mobility and learning skills, striving to improve efficiency. Genetic algorithms and classifier rules have been employed for spider adaptation and learning. A set of experiments has been set up in order to qualitatively test the efficacy and applicability of the proposed approach.}
}

Demetrios G. Eliades, Andreas L. Symeonidis and Pericles A. Mitkas
"GeneCity: A multi-agent simulation environment for hereditary diseases"
4th ACS/IEEE International Conference on Computer Systems and Applications - AICCSA 06, pp. 529-536, Springer-Verlag, Dubai/Sharjah, UAE, 2006 Mar

Simulating the psycho-societal aspects of a human community is an issue always intriguing and challenging, aspiring to help us better understand, macroscopically, the way(s) humans behave. The mathematical models that have extensively been used for the analytical study of the various related phenomena prove inefficient, since they cannot conceive the notion of population heterogeneity, a parameter highly critical when it comes to community interactions. Following the more successful paradigm of artificial societies, coupled with multi-agent systems and other Artificial Intelligence primitives, and extending previous epidemiological research work, we have developed GeneCity: an extended agent community, where agents live and interact under the veil of a hereditary epidemic. The members of the community, who can be either healthy, carriers of a trait, or patients, exhibit a number of human-like social (and medical) characteristics: wealth, acceptance and influence, fear and knowledge, phenotype and reproduction ability. GeneCity provides a highly-configurable interface for simulating social environments and the way they are affected by the appearance of a hereditary disease, either autosomal or X-linked. This paper presents an analytical overview of the work conducted and examines a test-hypothesis based on the spreading of Thalassaemia major.

@inproceedings{2006EliadesAICCSA,
author={Demetrios G. Eliades and Andreas L. Symeonidis and Pericles A. Mitkas},
title={GeneCity: A multi-agent simulation environment for hereditary diseases},
booktitle={4th ACS/IEEE International Conference on Computer Systems and Applications - AICCSA 06},
pages={529--536},
publisher={Springer-Verlag},
address={Dubai/Sharjah, UAE},
year={2006},
month={03},
date={2006-03-08},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/GeneCity-A-Multi-Agent-Simulation-Environment-for-Hereditary-Diseases.pdf},
keywords={Agent-mediated E-commerce;Auctions},
abstract={Simulating the psycho-societal aspects of a human community is an issue always intriguing and challenging, aspiring to help us better understand, macroscopically, the way(s) humans behave. The mathematical models that have extensively been used for the analytical study of the various related phenomena prove inefficient, since they cannot conceive the notion of population heterogeneity, a parameter highly critical when it comes to community interactions. Following the more successful paradigm of artificial societies, coupled with multi-agent systems and other Artificial Intelligence primitives, and extending previous epidemiological research work, we have developed GeneCity: an extended agent community, where agents live and interact under the veil of a hereditary epidemic. The members of the community, who can be either healthy, carriers of a trait, or patients, exhibit a number of human-like social (and medical) characteristics: wealth, acceptance and influence, fear and knowledge, phenotype and reproduction ability. GeneCity provides a highly-configurable interface for simulating social environments and the way they are affected by the appearance of a hereditary disease, either autosomal or X-linked. This paper presents an analytical overview of the work conducted and examines a test-hypothesis based on the spreading of Thalassaemia major.}
}

Ioannis Kontogounnis, Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis and Pericles A. Mitkas
"A Robust Agent Design for Dynamic SCM environments"
4th Hellenic Conference on Artificial Intelligence (SETN 06), pp. 127-136, Springer-Verlag, Heraklion, Crete, Greece, 2006 May

The leap from decision support to autonomous systems has often raised a number of issues, namely system safety, soundness and security. Depending on the field of application, these issues can either be easily overcome or even hinder progress. In the case of Supply Chain Management (SCM), where system performance implies loss or profit, these issues are of high importance. SCM environments are often dynamic markets providing incomplete information, therefore demanding intelligent solutions which can adhere to environment rules, perceive variations, and act in order to achieve maximum revenue. Advancing the way such autonomous solutions deal with the SCM process, we have built a robust, highly-adaptable and easily-configurable mechanism for efficiently dealing with all SCM facets, from material procurement and inventory management to goods production and shipment. Our agent has been crash-tested in one of the most challenging SCM environments, the Trading Agent Competition (TAC) SCM game, and has proven capable of providing advanced SCM solutions on behalf of its owner. This paper introduces Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and discusses Mertacor's performance.

@inproceedings{2006KontogounnisSETN,
author={Ioannis Kontogounnis and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={A Robust Agent Design for Dynamic SCM environments},
booktitle={4th Hellenic Conference on Artificial Intelligence (SETN 06)},
pages={127--136},
publisher={Springer-Verlag},
address={Heraklion, Crete, Greece},
year={2006},
month={05},
date={2006-05-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-Robust-Agent-Design-for-Dynamic-SCM-Environments.pdf},
keywords={milk recording;data collection;handheld computers;transparent synchronization},
abstract={The leap from decision support to autonomous systems has often raised a number of issues, namely system safety, soundness and security. Depending on the field of application, these issues can either be easily overcome or even hinder progress. In the case of Supply Chain Management (SCM), where system performance implies loss or profit, these issues are of high importance. SCM environments are often dynamic markets providing incomplete information, therefore demanding intelligent solutions which can adhere to environment rules, perceive variations, and act in order to achieve maximum revenue. Advancing the way such autonomous solutions deal with the SCM process, we have built a robust, highly-adaptable and easily-configurable mechanism for efficiently dealing with all SCM facets, from material procurement and inventory management to goods production and shipment. Our agent has been crash-tested in one of the most challenging SCM environments, the Trading Agent Competition (TAC) SCM game, and has proven capable of providing advanced SCM solutions on behalf of its owner. This paper introduces Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and discusses Mertacor's performance.}
}

Pericles A. Mitkas, Anastasios N. Delopoulos, Andreas L. Symeonidis and Fotis E. Psomopoulos
"A Framework for Semantic Data Integration and Inferencing on Cervical Cancer"
Hellenic Bioinformatics and Medical Informatics Meeting, pp. 23-26, IEEE Computer Society, Biomedical Research Foundation, Academy of Athens, Greece, 2006 Oct

Advances in the area of biomedicine and bioengineering have allowed for more accurate and detailed data acquisition in health care. Examinations that were once time- and cost-prohibitive are now available to the public, providing physicians and clinicians with more patient data for diagnosis and successful treatment. These data are also used by medical researchers to perform association studies among environmental agents, virus characteristics and genetic attributes, extracting new and interesting risk markers which can be used to enhance early diagnosis and prognosis. Nevertheless, scientific progress is hindered by the fact that each medical center operates in relative isolation, regarding datasets and medical effort, since there is no universally accepted archetype/ontology for medical data acquisition, storage and labeling. This is exactly the major goal of ASSIST: to virtually unify multiple patient record repositories, physically located at different laboratories, clinics and/or hospitals. ASSIST focuses on cervical cancer and implements a semantically-aware integration layer that unifies data in a seamless manner. Data privacy and security are ensured by techniques for data anonymization, secure data access and storage. Both the clinician and the medical researcher will have access to a knowledge base on cervical cancer and will be able to perform more complex and elaborate association studies on larger groups.

@inproceedings{2006MitkasASSISTBioacademy,
author={Pericles A. Mitkas and Anastasios N. Delopoulos and Andreas L. Symeonidis and Fotis E. Psomopoulos},
title={A Framework for Semantic Data Integration and Inferencing on Cervical Cancer},
booktitle={Hellenic Bioinformatics and Medical Informatics Meeting},
pages={23-26},
publisher={IEEE Computer Society},
address={Biomedical Research Foundation, Academy of Athens, Greece},
year={2006},
month={10},
date={2006-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A-Framework-for-Semantic-Data-Integration-and-Inferencing-on-Cervical-Cancer.pdf},
keywords={bioinformatics databases},
abstract={Advances in the area of biomedicine and bioengineering have allowed for more accurate and detailed data acquisition in health care. Examinations that were once time- and cost-prohibitive are now available to the public, providing physicians and clinicians with more patient data for diagnosis and successful treatment. These data are also used by medical researchers to perform association studies among environmental agents, virus characteristics and genetic attributes, extracting new and interesting risk markers which can be used to enhance early diagnosis and prognosis. Nevertheless, scientific progress is hindered by the fact that each medical center operates in relative isolation, regarding datasets and medical effort, since there is no universally accepted archetype/ontology for medical data acquisition, storage and labeling. This is exactly the major goal of ASSIST: to virtually unify multiple patient record repositories, physically located at different laboratories, clinics and/or hospitals. ASSIST focuses on cervical cancer and implements a semantically-aware integration layer that unifies data in a seamless manner. Data privacy and security are ensured by techniques for data anonymization, secure data access and storage. Both the clinician and the medical researcher will have access to a knowledge base on cervical cancer and will be able to perform more complex and elaborate association studies on larger groups.}
}

Andreas L. Symeonidis, Vivia Nikolaidou and Pericles A. Mitkas
"Exploiting Data Mining Techniques for Improving the Efficiency of a Supply Chain Management Agent"
WI-IATW 06: Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology, pp. 23-26, IEEE Computer Society, Hong Kong, China, 2006 Dec

Supply Chain Management (SCM) environments are often dynamic markets providing a plethora of information, either complete or incomplete. It is, therefore, evident that such environments demand intelligent solutions, which can perceive variations and act in order to achieve maximum revenue. To do so, they must also provide some sophisticated mechanism for exploiting the full potential of the environments they inhabit. Advancing the way autonomous solutions usually deal with the SCM process, we have built a robust and highly-adaptable mechanism for efficiently dealing with all SCM facets, while at the same time incorporating a module that exploits data mining technology in order to forecast the price of the winning bid in a given order and, thus, adjust its bidding strategy. The paper presents our agent, Mertacor, and focuses on the forecasting mechanism it incorporates, aiming at optimal agent efficiency.

@inproceedings{2006SymeonidisIADM,
author={Andreas L. Symeonidis and Vivia Nikolaidou and Pericles A. Mitkas},
title={Exploiting Data Mining Techniques for Improving the Efficiency of a Supply Chain Management Agent},
booktitle={WI-IATW 06: Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology},
pages={23-26},
publisher={IEEE Computer Society},
address={Hong Kong, China},
year={2006},
month={12},
date={2006-12-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Exploiting-Data-Mining-Techniques-for-Improving-the-Efficiency-of-a-Supply-Chain-Management-Agen.pdf},
keywords={artificial life;learning (artificial intelligence);predator-prey systems},
abstract={Supply Chain Management (SCM) environments are often dynamic markets providing a plethora of information, either complete or incomplete. It is, therefore, evident that such environments demand intelligent solutions, which can perceive variations and act in order to achieve maximum revenue. To do so, they must also provide some sophisticated mechanism for exploiting the full potential of the environments they inhabit. Advancing on the way autonomous solutions usually deal with the SCM process, we have built a robust and highly-adaptable mechanism for efficiently dealing with all SCM facets, while at the same time incorporating a module that exploits data mining technology in order to forecast the price of the winning bid in a given order and, thus, adjust its bidding strategy. The paper presents our agent, Mertacor, and focuses on the forecasting mechanism it incorporates, aiming to optimal agent efficiency.}
}
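
As a rough illustration of the forecasting module described above, the toy sketch below fits a least-squares line to past (order quantity, winning price) pairs and predicts the winning bid price of a new order. Mertacor's actual data mining model is more elaborate; the fit_line helper and the numbers here are invented.

# Toy winning-bid price forecast: simple least-squares fit over
# hypothetical past (order quantity, winning price) observations.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # (slope, intercept)

quantities = [10, 20, 40, 80]          # hypothetical past order sizes
win_prices = [1450, 1400, 1330, 1200]  # hypothetical winning bid prices

a, b = fit_line(quantities, win_prices)
print(a * 50 + b)  # forecast winning price for a 50-unit order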

2005

Journal Articles

Dionisis Kehagias, Andreas L. Symeonidis and Pericles A. Mitkas
"Designing Pricing Mechanisms for Autonomous Agents Based on Bid-Forecasting"
Electronic Markets, 15, (1), pp. 53--62, 2005 Jan

Autonomous agents that participate in the electronic market environment introduce an advanced paradigm for realizing automated deliberations over the offered prices of auctioned goods. These agents represent humans and their assets; it is therefore critical for them to act not only rationally but also efficiently. By enabling agents to deploy bidding strategies and to compete with each other in a marketplace, a valuable amount of historical data is produced. An optimal dynamic forecast of the maximum offered bid would enable more gainful agent behaviours. In this respect, this paper presents a methodology that takes advantage of price offers generated in e-auctions to provide an adequate short-term forecasting schema based on time-series analysis. The forecast is incorporated into the reasoning mechanism of a group of autonomous e-auction agents to improve their bidding behaviour. In order to test the improvement introduced by the proposed method, we set up a test-bed on which a slight variant of the first-price ascending auction is simulated, with many buyers and one seller trading over one item. The results of the proposed methodology are discussed, and several possible extensions and improvements are advocated to ensure wide acceptance of the bid-forecasting reasoning mechanism.

@article{2005KehagiasEM,
author={Dionisis Kehagias and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Designing Pricing Mechanisms for Autonomous Agents Based on Bid-Forecasting},
journal={Electronic Markets},
volume={15},
number={1},
pages={53--62},
year={2005},
month={01},
date={2005-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Designing-Pricing-Mechanisms-for-Autonomous-Agents-Based-on-Bid-Forecasting.pdf},
abstract={Autonomous agents that participate in the electronic market environment introduce an advanced paradigm for realizing automated deliberations over offered prices of auctioned goods. These agents represent humans and their assets, therefore it is critical for them not only to act rationally but also efficiently. By enabling agents to deploy bidding strategies and to compete with each other in a marketplace, a valuable amount of historical data is produced. An optimal dynamic forecasting of the maximum offered bid would enable more gainful behaviours by agents. In this respect, this paper presents a methodology that takes advantage of price offers generated in e-auctions, in order to provide an adequate short-term forecasting schema based on time-series analysis. The forecast is incorporated into the reasoning mechanism of a group of autonomous e-auction agents to improve their bidding behaviour. In order to test the improvement introduced by the proposed method, we set up a test-bed, on which a slightly variant version of the first-price ascending auction is simulated with many buyers and one seller, trading with each other over one item. The results of the proposed methodology are discussed and many possible extensions and improvements are advocated to ensure wide acceptance of the bid-forecasting reasoning mechanism.}
}
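
For flavour only, the sketch below performs one-step-ahead forecasting of the maximum offered bid with simple exponential smoothing. The paper's actual time-series scheme and auction data are its own; forecast_next_bid and the bid history below are invented.

# One-step-ahead forecast of the highest offered bid via simple
# exponential smoothing; a stand-in for the paper's time-series model.
def forecast_next_bid(bids, alpha=0.5):
    """Return a one-step-ahead forecast of the maximum offered bid."""
    level = bids[0]
    for b in bids[1:]:
        level = alpha * b + (1 - alpha) * level  # smooth observed highest bids
    return level

# Hypothetical sequence of highest offered bids in an ascending auction
history = [100, 104, 109, 112, 118]
print(forecast_next_bid(history))  # an agent can cap its next offer near this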

Andreas L. Symeonidis, Evangelos Valtos, Serafeim Seroglou and Pericles A. Mitkas
"Biotope: an integrated framework for simulating distributed multiagent computational systems"
IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 35, (3), pp. 420-432, 2005 May

The study of distributed computational systems issues, such as heterogeneity, concurrency, control, and coordination, has yielded a number of models and architectures, which aspire to provide satisfying solutions to each of the above problems. One of the most intriguing and complex classes of distributed systems is computational ecosystems, which add an "ecological" perspective to these issues and introduce the characteristic of self-organization. Extending previous research work on self-organizing communities, we have developed Biotope, an agent simulation framework in which each member is dynamic and self-maintaining. The system provides a highly configurable interface for modeling various environments as well as the "living" or computational entities that reside in them, while it introduces a series of tools for monitoring system evolution. Classifier systems and genetic algorithms have been employed for agent learning, while dispersal distance theory has been adopted for agent replication. The framework has been used for the development of a characteristic demonstrator, where Biotope agents are engaged in well-known vital activities (nutrition, communication, growth, death) directed toward their own self-replication, just as in natural environments. This paper presents an analytical overview of the work conducted and concludes with a methodology for simulating distributed multiagent computational systems.

@article{2005SymeonidisIEEETSMC,
author={Andreas L. Symeonidis and Evangelos Valtos and Serafeim Seroglou and Pericles A. Mitkas},
title={Biotope: an integrated framework for simulating distributed multiagent computational systems},
journal={IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans},
volume={35},
number={3},
pages={420-432},
year={2005},
month={05},
date={2005-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/publications/confaisadmMitkas05.pdf},
doi={http://dx.doi.org/10.1007/11492870_2},
keywords={agent-based systems},
abstract={The study of distributed computational systems issues, such as heterogeneity, concurrency, control, and coordination, has yielded a number of models and architectures, which aspire to provide satisfying solutions to each of the above problems. One of the most intriguing and complex classes of distributed systems are computational ecosystems, which add an "ecological" perspective to these issues and introduce the characteristic of self-organization. Extending previous research work on self-organizing communities, we have developed Biotope, which is an agent simulation framework, where each one of its members is dynamic and self-maintaining. The system provides a highly configurable interface for modeling various environments as well as the "living" or computational entities that reside in them, while it introduces a series of tools for monitoring system evolution. Classifier systems and genetic algorithms have been employed for agent learning, while the dispersal distance theory has been adopted for agent replication. The framework has been used for the development of a characteristic demonstrator, where Biotope agents are engaged in well-known vital activities-nutrition, communication, growth, death-directed toward their own self-replication, just like in natural environments. This paper presents an analytical overview of the work conducted and concludes with a methodology for simulating distributed multiagent computational systems.}
}
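
The genetic-algorithm side of agent learning in a Biotope-like ecosystem can be illustrated with the compact sketch below, where bitstring "behaviours" are selected by fitness, recombined and mutated each generation. The encoding and fitness function are invented; the paper's classifier systems are not reproduced here.

# Minimal genetic algorithm over bitstring "behaviours": truncation
# selection, one-point crossover and bit-flip mutation per generation.
import random

random.seed(0)
GENES = 8

def fitness(genome):
    return sum(genome)  # toy objective: prefer genomes with more 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # keep the fittest half
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(10)
    ]
print(max(map(fitness, population)))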

2005

Books

Andreas Symeonidis and Pericles A. Mitkas
"Agent Intelligence Through Data Mining (Multiagent Systems, Artificial Societies, and Simulated Organizations)"
Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2005 Jul

@book{2005Symeonidis,
author={Andreas Symeonidis and Pericles A. Mitkas},
title={Agent Intelligence Through Data Mining (Multiagent Systems, Artificial Societies, and Simulated Organizations)},
publisher={Springer-Verlag New York, Inc.},
address={Secaucus, NJ, USA},
year={2005},
month={07},
date={2005-07-15}
}

2005

Inproceedings Papers

Pericles A. Mitkas, Andreas L. Symeonidis and Ioannis N. Athanasiadis
"A Retraining Methodology for Enhancing Agent Intelligence"
IEEE Intl Conference on Integration of Knowledge Intensive Multi-Agent Systems - KIMAS 05, pp. 422--428, Springer Berlin / Heidelberg, Waltham, MA, USA, 2005 Apr

Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both the design and control of multi-agent systems (MAS) and "agent training". We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of knowledge extracted through data mining. In this paper, we present the methodology and tools for agent retraining. Through experimental results with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead, in the long run, to the improvement of agent intelligence.

@inproceedings{2005MitkasKIMAS,
author={Pericles A. Mitkas and Andreas L. Symeonidis and Ioannis N. Athanasiadis},
title={A Retraining Methodology for Enhancing Agent Intelligence},
booktitle={IEEE Intl Conference on Integration of Knowledge Intensive Multi-Agent Systems - KIMAS 05},
pages={422--428},
publisher={Springer Berlin / Heidelberg},
address={Waltham, MA, USA},
year={2005},
month={04},
date={2005-04-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A_retraining_methodology_for_enhancing_agent_intel.pdf},
keywords={retraining},
abstract={Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures, as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both design and control of multi-agent systems (MAS), as well as ‘‘agent training’’. We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of data mining extracted knowledge. In this paper, we present the methodology and tools for agent retraining. Through experimented results with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead to the improvement – in the long run – of agent intelligence.}
}
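
A minimal sketch of the retraining idea, under the assumption that an agent's decision structure is a decision tree derived from logged observations: the agent periodically re-runs data mining on its accumulated logs and swaps the new model in. The MiningAgent class, the features and the data are hypothetical; Agent Academy's own pipeline and formats are not reproduced here.

# Retraining sketch: re-derive an agent's embedded decision structure
# from accumulated logs and replace the old one.
from sklearn.tree import DecisionTreeClassifier

class MiningAgent:
    def __init__(self):
        self.model = None  # the embedded decision structure

    def retrain(self, observations, actions):
        """Re-run data mining on accumulated logs and replace the model."""
        self.model = DecisionTreeClassifier(max_depth=3).fit(observations, actions)

    def decide(self, observation):
        return self.model.predict([observation])[0]

agent = MiningAgent()
# hypothetical logs: (stock level, demand) -> action (0 = hold, 1 = reorder)
agent.retrain([[90, 10], [20, 50], [70, 30], [10, 80]], [0, 1, 0, 1])
print(agent.decide([15, 60]))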

Andreas L. Symeonidis, Kyriakos C. Chatzidimitriou, Dionisis Kehagias and Pericles A. Mitkas
"An Intelligent Recommendation Framework for ERP Systems"
AIA 2005: Artificial Intelligence and Applications, pp. 422--428, ACTA Press, Innsbruck, Austria, 2005 Feb

Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. We present an alternative approach for incorporating adaptive business intelligence into the company

@inproceedings{2005SymeonidisAIA,
author={Andreas L. Symeonidis and Kyriakos C. Chatzidimitriou and Dionisis Kehagias and Pericles A. Mitkas},
title={An Intelligent Recommendation Framework for ERP Systems},
booktitle={AIA 2005: Artificial Intelligence and Applications},
pages={422--428},
publisher={ACTA Press},
address={Innsbruck, Austria},
year={2005},
month={02},
date={2005-02-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-Intelligent-Recommendation-Framework-for-ERP-Systems.pdf},
keywords={retraining},
abstract={Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. We present an alternative approach for incorporating adaptive business intelligence into the company}
}

Andreas L. Symeonidis and Pericles A. Mitkas
"A Methodology for Predicting Agent Behavior by the Use of Data Mining Techniques"
Autonomous Intelligent Systems: Agents and Data Mining, pp. 161--174, Springer Berlin / Heidelberg, St. Petersburg, Russia, 2005 Jun

One of the most interesting issues in agent technology has always been the modeling and enhancement of agent behavior. Numerous approaches exist, attempting to optimally reflect both the inner states and the perceived environment of an agent, in order to provide it with either reactivity or proactivity. Within the context of this paper, an alternative methodology for enhancing agent behavior is presented. The core feature of this methodology is that it exploits knowledge extracted through data mining techniques on historical data describing the actions of agents within the MAS in which they reside. The main issues related to the design, development, and evaluation of such a methodology for predicting agent actions are discussed, while the basic concessions made to enable agent cooperation are outlined. We also present k-Profile, a new data mining mechanism for discovering action profiles and for providing recommendations on agent actions. Finally, indicative experimental results are presented and discussed.

@inproceedings{2005SymeonidisAISADM,
author={Andreas L. Symeonidis and Pericles A. Mitkas},
title={A Methodology for Predicting Agent Behavior by the Use of Data Mining Techniques},
booktitle={Autonomous Intelligent Systems: Agents and Data Mining},
pages={161--174},
publisher={Springer Berlin / Heidelberg},
address={St. Petersburg, Russia},
year={2005},
month={06},
date={2005-06-06},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A-Methodology-for-Predicting-Agent-Behavior-by-the-Use-of-Data-Mining-Techniques.pdf},
abstract={One of the most interesting issues in agent technology has always been the modeling and enhancement of agent behavior. Numerous approaches exist, attempting to optimally reflect both the inner states, as well as the perceived environment of an agent, in order to provide it either with reactivity or proactivity. Within the context of this paper, an alternative methodology for enhancing agent behavior is presented. The core feature of this methodology is that it exploits knowledge extracted by the use of data mining techniques on historical data, data that describe the actions of agents within the MAS they reside. The main issues related to the design, development, and evaluation of such a methodology for predicting agent actions are discussed, while the basic concessions made to enable agent cooperation are outlined. We also present k-Profile, a new data mining mechanism for discovering action profiles and for providing recommendations on agent actions. Finally, indicative experimental results are apposed and discussed.}
}
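
k-Profile itself is defined in the paper; purely as a generic flavour of profile-based recommendation, the sketch below clusters agents' action-frequency vectors and recommends the dominant action of the matching cluster's centroid. The action set and the data are invented.

# Generic profile-based recommendation: cluster action-frequency vectors,
# then recommend the dominant action of the predicted cluster.
import numpy as np
from sklearn.cluster import KMeans

# hypothetical rows: per-agent frequencies of actions [bid, hold, withdraw]
histories = np.array([
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.7, 0.1],
])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(histories)

new_agent = np.array([[0.75, 0.15, 0.10]])
profile = km.predict(new_agent)[0]
print(int(np.argmax(km.cluster_centers_[profile])))  # recommended action index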

2004

Inproceedings Papers

Sotiris Diplaris, Andreas Symeonidis, Pericles A. Mitkas, Georgios Banos and Z. Abas
"An Alarm Firing System for National Genetic Evaluation Quality Control"
Interbull Annual Meeting, pp. 146--150, Tunis, Tunisia, 2004 May

In this paper an empirical approach for supporting the decision-making process involved in an Environmental Management System (EMS) that monitors air quality and triggers air quality alerts is presented. Data uncertainty problems associated with an air quality monitoring network, such as measurement validation and the estimation of missing or erroneous values, are addressed through the exploitation of data mining techniques. Exhaustive experiments with real-world data have produced trustworthy predictive models, capable of supporting the decision-making process. The outstanding performance of the induced predictive models indicates the added value of this approach for supporting the decision-making process in an EMS.

@inproceedings{2004DiplarisIAM,
author={Sotiris Diplaris and Andreas Symeonidis and Pericles A. Mitkas and Georgios Banos and Z. Abas},
title={An Alarm Firing System for National Genetic Evaluation Quality Control},
booktitle={Interbull Annual Meeting},
pages={146--150},
address={Tunis, Tunisia},
year={2004},
month={05},
date={2004-05-30},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-Alarm-Firing-System-for-National-Genetic-Evaluation-Quality-Control.pdf},
abstract={In this paper an empirical approach for supporting the decision making process involved in an Environmental Management System (EMS) that monitors air quality and triggers air quality alerts is presented. Data uncertainty problems associated with an air quality monitoring network, such as measurement validation and estimation of missing or erroneous values, are addressed through the exploitation of data mining techniques. Exhaustive experiments with real world data have produced trustworthy predictive models, capable of supporting the decision-making process. The outstanding performance of the induced predictive models indicate the added value of this approach for supporting the decision making process in an EMS.}
}

D. Kehagias, Kyriakos C. Chatzidimitriou, Andreas Symeonidis and Pericles A. Mitkas
"Information Agents Cooperating with Heterogeneous Data Sources for Customer-Order Management"
19th Annual ACM Symposium on Applied Computing (SAC 2004), pp. 52--57, Nicosia, Cyprus, 2004 Mar

As multi-agent systems and information agents gain increasing acceptance by application developers, existing legacy Enterprise Resource Planning (ERP) systems still provide the main source of data used in customer, supplier and inventory resource management. In this paper we present a multi-agent system, comprised of information agents, which cooperates with a legacy ERP in order to carry out orders posted by customers in an enterprise environment. Our system is enriched with the capability of producing recommendations to the interested customer through agent cooperation. First, we address the problem of information workload in an enterprise environment and explore the opportunity for a plausible solution. Second, we present the architecture of our system and the types of agents involved in it. Finally, we show how it manipulates retrieved information for efficient and facile customer-order management and illustrate results derived from real data.

@inproceedings{2004KehagiasSAC,
author={D. Kehagias and Kyriakos C. Chatzidimitriou and Andreas Symeonidis and Pericles A. Mitkas},
title={Information Agents Cooperating with Heterogeneous Data Sources for Customer-Order Management},
booktitle={Paper presented at the 19th Annual ACM Symposium on Applied Computing (SAC 2004)},
pages={52--57},
address={Nicosia, Cyprus},
year={2004},
month={03},
date={2004-03-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Information-Agents-Cooperating-with-Heterogeneous-Data-Sources-for-Customer-Order-Management.pdf},
keywords={information agents;enterprise resource planning;customer-order management},
abstract={As multi-agent systems and information agents obtain an increasing acceptance by application developers, existing legacy Enterprise Resource Planning (ERP) systems still provide the main source of data used in customer, supplier and inventory resource management. In this paper we present a multi-agent system, comprised of information agents, which cooperates with a legacy ERP in order to carry out orders posted by customers in an enterprise environment. Our system is enriched by the capability of producing recommendations to the interested customer through agent cooperation. At first, we address the problem of information workload in an enterprise environment and explore the opportunity of a plausible solution. Secondly we present the architecture of our system and the types of agents involved in it. Finally, we show how it manipulates retrieved information for efficient and facile customer-order management and illustrate results derived from real-data.}
}

2003

Journal Articles

Andreas L. Symeonidis, Dionisis Kehagias and Pericles A. Mitkas
"Intelligent Policy Recommendations on Enterprise Resource Planning by the use of agent technology and data mining techniques"
Expert Systems with Applications, 25, (4), pp. 589-602, 2003 Jan

Enterprise Resource Planning systems tend to deploy Supply Chain Management and/or Customer Relationship Management techniques in order to successfully fuse information to customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs while satisfying service-level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated with existing knowledge. Advancing on the way the above-mentioned techniques apply to ERP systems, we have developed a multi-agent system that introduces adaptive intelligence as a powerful add-on for ERP software customization. The system can be thought of as a recommendation engine, which takes advantage of knowledge gained through the use of data mining techniques and incorporates it into the resulting company selling policy. The intelligent agents of the system can be periodically retrained as new information is added to the ERP. In this paper, we present the architecture and development details of the system, and demonstrate its application on a real test case.

@article{2003SymeonidisESWA,
author={Andreas L. Symeonidis and Dionisis Kehagias and Pericles A. Mitkas},
title={Intelligent Policy Recommendations on Enterprise Resource Planning by the use of agent technology and data mining techniques},
journal={Expert Systems with Applications},
volume={25},
number={4},
pages={589-602},
year={2003},
month={01},
date={2003-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Intelligent-policy-recommendations-on-enterprise-resource-planningby-the-use-of-agent-technology-and-data-mining-techniques.pdf},
keywords={agents},
abstract={Enterprise Resource Planning systems tend to deploy Supply Chain Management and/or Customer Relationship Management techniques, in order to successfully fuse information to customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated with existing knowledge. Advancing on the way the above mentioned techniques apply on ERP systems, we have developed a multi-agent system that introduces adaptive intelligence as a powerful add-on for ERP software customization. The system can be thought of as a recommendation engine, which takes advantage of knowledge gained through the use of data mining techniques, and incorporates it into the resulting company selling policy. The intelligent agents of the system can be periodically retrained as new information is added to the ERP. In this paper, we present the architecture and development details of the system, and demonstrate its application on a real test case.}
}

2003

Inproceedings Papers

Sotiris Diplaris, Andreas L. Symeonidis, Pericles A. Mitkas, Georgios Banos and Z. Abas
"Quality Control of National Genetic Eva luation Results Using Data-Mining Techniques; A Progress Report"
Interbull Annual Meeting, pp. 8--15, Rome, Italy, 2003 Aug

The continuous expansion of the Internet has enabled the development of a wide range of advanced digital services. Real-time data diffusion has eliminated processing bottlenecks and has led to fast, easy and no-cost communications. This primitive has been widely exploited in the process of job searching. Numerous systems have been developed, offering job candidates the opportunity to browse for vacancies, submit resumes, and even contact the most appealing of the employers. Although effective, most of these systems are characterized by their simplicity, acting more like an enhanced bulletin board than an integrated, fully functional system. Even for the more advanced of these systems, user interaction is obligatory in order to couple job seekers with job providers, so continuous supervision of the process is unavoidable. Advancing on the way primitive job recruitment techniques apply to Internet-based systems, and dealing with their lack of efficiency and interactivity, we have developed a robust software system that employs intelligent techniques for coupling candidates and jobs, according to the former's skills and the latter's requirements. A thorough analysis of the system specifications has been conducted, and all issues concerning information retrieval and data filtering, coupling intelligence, storage, security, user interaction and ease-of-use have been integrated into one web-based job portal.

@inproceedings{2003BanosIAM,
author={Sotiris Diplaris and Andreas L. Symeonidis and Pericles A. Mitkas and Georgios Banos and Z. Abas},
title={Quality Control of National Genetic Evaluation Results Using Data-Mining Techniques; A Progress Report},
booktitle={Interbull Annual Meeting},
pages={8--15},
address={Rome, Italy},
year={2003},
month={08},
date={2003-08-25},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Quality-Control-of-National-Genetic-Eva-luation-Results-Using-Data-Mining-Techniques-A-Progress-Report.pdf},
keywords={agent academy},
abstract={The continuous expansion of Internet has enabled the development of a wide range of advanced digital services. Real-time data diffusion has eliminated processing bottlenecks and has led to fast, easy and no-cost communications. This primitive has been widely exploited in the process of job searching. Numerous systems have been developed offering job candidates with the opportunity to browse for vacancies, submit resumes, and even contact the most appealing of the employers. Although effective, most of these systems are characterized by their simplicity, acting more like an enhanced bulletin board, rather than an integrated, fully functional system. Even for the more advanced of these systems user interaction is obligatory, in order to couple job seekers with job providers, thus continuous supervising of the process is unavoidable. Advancing on the way primitive job recruitment techniques apply on Internet-based systems, and dealing with their lack of efficiency and interactivity, we have developed a robust software system that employs intelligent techniques for coupling candidates and jobs, according to the formers’ skills and the latter’s requirements. A thorough analysis of the system specifications has been conducted, and all issues concerning information retrieval and data filtering, coupling intelligence, storage, security, user interaction and ease-of-use have been integrated into one web-based job portal}
}

G. Milis, Andreas L. Symeonidis and Pericles A. Mitkas
"Ergasiognomon: A Model System of Advanced Digital Services Designed and Developed to Support the Job Marketplace"
9th Panhellenic Conference in Informatics, pp. 346--360, Thessaloniki, Greece, 2003 Nov

The continuous expansion of the Internet has enabled the development of a wide range of advanced digital services. Real-time data diffusion has eliminated processing bottlenecks and has led to fast, easy and no-cost communications. This primitive has been widely exploited in the process of job searching. Numerous systems have been developed, offering job candidates the opportunity to browse for vacancies, submit resumes, and even contact the most appealing of the employers. Although effective, most of these systems are characterized by their simplicity, acting more like an enhanced bulletin board than an integrated, fully functional system. Even for the more advanced of these systems, user interaction is obligatory in order to couple job seekers with job providers, so continuous supervision of the process is unavoidable. Advancing on the way primitive job recruitment techniques apply to Internet-based systems, and dealing with their lack of efficiency and interactivity, we have developed a robust software system that employs intelligent techniques for coupling candidates and jobs, according to the former's skills and the latter's requirements. A thorough analysis of the system specifications has been conducted, and all issues concerning information retrieval and data filtering, coupling intelligence, storage, security, user interaction and ease-of-use have been integrated into one web-based job portal.

@inproceedings{2003MilisPCI,
author={G. Milis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Ergasiognomon: A Model System of Advanced Digital Services Designed and Developed to Support the Job Marketplace},
booktitle={9th Panhellenic Conference in Informatics},
pages={346--360},
address={Thessaloniki, Greece},
year={2003},
month={11},
date={2003-11-21},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Ergasiognomon-A-Model-System-of-Advanced-Digital-Services-Designed-and-Developed-to-Support-the-Job-Marketplace.pdf},
keywords={agent academy},
abstract={The continuous expansion of Internet has enabled the development of a wide range of advanced digital services. Real-time data diffusion has eliminated processing bottlenecks and has led to fast, easy and no-cost communications. This primitive has been widely exploited in the process of job searching. Numerous systems have been developed offering job candidates with the opportunity to browse for vacancies, submit resumes, and even contact the most appealing of the employers. Although effective, most of these systems are characterized by their simplicity, acting more like an enhanced bulletin board, rather than an integrated, fully functional system. Even for the more advanced of these systems user interaction is obligatory, in order to couple job seekers with job providers, thus continuous supervising of the process is unavoidable. Advancing on the way primitive job recruitment techniques apply on Internet-based systems, and dealing with their lack of efficiency and interactivity, we have developed a robust software system that employs intelligent techniques for coupling candidates and jobs, according to the formers’ skills and the latter’s requirements. A thorough analysis of the system specifications has been conducted, and all issues concerning information retrieval and data filtering, coupling intelligence, storage, security, user interaction and ease-of-use have been integrated into one web-based job portal}
}

Pericles A. Mitkas, Dionisis Kehagias, Andreas L. Symeonidis and I. N. Athanasiadis
"A Framework for Constructing Multi-Agent Applications and Training Intelligent Agents"
4th International Workshop on Agent-Oriented Software Engineering (AOSE-2003), Autonomous Agents & Multi-Agent Systems (AAMAS 2003), pp. 96--109, Melbourne, Australia, 2003 Jun

As the agent-oriented paradigm reaches a significant level of acceptance among software developers, there is still a lack of integrated high-level abstraction tools for the design and development of agent-based applications. In an effort to mitigate this deficiency, we introduce Agent Academy, an integrated development framework, implemented itself as a multi-agent system, that supports, in a single tool, the design of agent behaviours and reusable agent types, the definition of ontologies, and the instantiation of single agents or multi-agent communities. In addition to these characteristics, our framework goes deeper into agents by implementing a mechanism for embedding rule-based reasoning into them. We call this procedure «agent training»; it is realized by the application of AI techniques for knowledge discovery on application-specific data, which may be available to the agent developer. In this respect, Agent Academy provides an easy-to-use facility that encourages the substitution of existing, traditionally developed applications with new ones that follow the agent-orientation paradigm.

@inproceedings{2003MitkasAOSE,
author={Pericles A. Mitkas and Dionisis Kehagias and Andreas L. Symeonidis and I. N. Athanasiadis},
title={A Framework for Constructing Multi-Agent Applications and Training Intelligent Agents},
booktitle={4th International Workshop on Agent-Oriented Software Engineering (AOSE-2003), Autonomous Agents \& Multi-Agent Systems (AAMAS 2003)},
pages={96--109},
address={Melbourne, Australia},
year={2003},
month={06},
date={2003-06-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-Framework-for-Constructing-Multi-Agent-Applications-and-Training-Intelligent-Agents.pdf},
keywords={concurrent engineering;intelligent agents.},
abstract={As agent-oriented paradigm is reaching a significant level of acceptance by software developers, there is a lack of integrated high-level abstraction tools for the design and development of agent-based applications. In an effort to mitigate this deficiency, we introduce Agent Academy, an integrated development framework, implemented itself as a multi-agent system, that supports, in a single tool, the design of agent behaviours and reusable agent types, the definition of ontologies, and the instantiation of single agents or multi-agent communities. In addition to these characteristics, our framework goes deeper into agents, by implementing a mechanism for embedding rule-based reasoning into them. We call this procedure «agent training» and it is realized by the application of AI techniques for knowledge discovery on application-specific data, which may be available to the agent developer. In this respect, Agent Academy provides an easy-to-use facility that encourages the substitution of existing, traditionally developed applications by new ones, which follow the agent-orientation paradigm.}
}

Pericles A. Mitkas, Dionisis Kehagias, Andreas L. Symeonidis and Ioannis N. Athanasiadis
"Agent Academy: An integrated tool for developing multi-agent systems and embedding decision structures into agents"
First European Workshop on Multi-Agent Systems (EUMAS 2003), Oxford, UK, 2003 Dec

In this paper we present Agent Academy, a framework that enables software developers to quickly develop multi-agent applications when prior historical data relevant to a desired rule-based behaviour are available. Agent Academy is implemented itself as a multi-agent system and supports, in a single tool, the design of agent behaviours and reusable agent types, the definition of ontologies, and the instantiation of single agents or multi-agent communities. Once an agent has been designed within the framework, the agent developer can create a specific ontology that describes the historical data. In this way, agents acquire embedded rule-based reasoning. We call this procedure «agent training»; it is realized by the application of data mining and knowledge discovery techniques on the application-specific historical data. From this point of view, Agent Academy provides a tool both for creating multi-agent systems and for embedding rule-based decision structures into one or more of the participating agents.

@inproceedings{2003MitkasEUMAS,
author={Pericles A. Mitkas and Dionisis Kehagias and Andreas L. Symeonidis and Ioannis N. Athanasiadis},
title={Agent Academy: An integrated tool for developing multi-agent systems and embedding decision structures into agents},
booktitle={First European Workshop on Multi-Agent Systems (EUMAS 2003)},
address={Oxford, UK},
year={2003},
month={12},
date={2003-12-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Agent-Academy-An-integrated-tool-for-developing-multi-agent-systems-and-embedding-decision-structures-into-agents.pdf},
keywords={agent academy},
abstract={In this paper we present Agent Academy, a framework that enables software developers to quickly develop multi-agent applications, when prior historical data relevant to a desired rule-based behaviour are available. Agent Academy is implemented itself as a multi-agent system, that supports, in a single tool, the design of agent behaviours and reusable agent types, the definition of ontologies, and the instantiation of single agents or multi-agent communities. Once an agent has been designed within the framework, the agent developer can create a specific ontology that describes the historical data. In this way, agents become capable of having embedded rule-based reasoning. We call this procedure «agent training» and it is realized by the application of data mining and knowledge discovery techniques on the application-specific historical data. From this point of view, Agent Academy provides a tool for both creating multi-agent systems and embedding rule-based decision structures into one or more of the participating agents.}
}

Pericles A. Mitkas, Andreas Symeonidis, Dionisis Kehagias and Ioannis N. Athanasiadis
"Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering"
10th ISPE International Conference on Concurrent Engineering: Research and Applications, pp. 11--18, Madeira, Portugal, 2003 Jul

Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents' intelligence can range from rudimentary sensor monitoring and data reporting to more advanced forms of decision-making and autonomous behaviour. The behaviour and intelligence of each agent in the community can be obtained by performing Data Mining on available application data and the respective knowledge domain. We have developed Agent Academy (AA), a software platform for the design, creation, and deployment of MAS, which combines the power of knowledge discovery algorithms with the versatility of agents. Using this platform, we illustrate how agents, equipped with a data-driven inference engine, can be dynamically and continuously trained. We also discuss three prototype MAS developed with AA.

@inproceedings{2003MitkasISPE,
author={Pericles A. Mitkas and Andreas Symeonidis and Dionisis Kehagias and Ioannis N. Athanasiadis},
title={Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering},
booktitle={10th ISPE International Conference on Concurrent Engineering: Research and Applications},
pages={11--18},
address={Madeira, Portugal},
year={2003},
month={07},
date={2003-07-26},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Application-of-Data-Mining-and-Intelligent-Agent-Technologies-to-Concurrent-Engineering.pdf},
keywords={concurrent engineering;intelligent agents.},
abstract={Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents intelligence can range from rudimentary sensor monitoring and data reporting, to more advanced forms of decision-making and autonomous behaviour. The behaviour and intelligence of each agent in the community can be obtained by performing Data Mining on available application data and the respected knowledge domain. We have developed Agent Academy (AA), a software platform for the design, creation, and deployment of MAS, which combines the power of knowledge discovery algorithms with the versatility of agents. Using this platform, we illustrate how agents, equipped with a data-driven inference engine, can be dynamically and continuously trained. We also discuss three prototype MAS developed with AA.}
}

2002

Inproceedings Papers

Dionisis Kehagias, Andreas L. Symeonidis, Pericles A. Mitkas and M. Alborg
"Towards improving Multi-Agent Simulation in safety management and hazard control environments"
Simulation and Planning in High Autonomy Systems AIS 2002, pp. 757--764, Lisbon, Portugal, 2002 Apr

This paper introduces the capabilities of Agent Academy in the area of Safety Management and Hazard Control Systems. Agent Academy is a framework under development which uses data mining techniques for training intelligent agents. This framework generates software agents with an initial degree of intelligence and trains them to manipulate complex tasks. The agents are further integrated into a simulation multi-agent environment capable of managing issues in a hazardous environment, as well as regulating the parameters of the safety management strategy to be deployed in order to control the hazards. The initially created agents take part in long agent-to-agent transactions and their activities are formed into behavioural data, which are stored in a database. As soon as the amount of collected data increases sufficiently, a data mining process is initiated in order to extract specific trends adopted by agents and improve their intelligence. The overall procedure aims to improve the simulation environment of safety management. The communication of agents, as well as the architectural characteristics of the simulation environment, adheres to the set of specifications imposed by the Foundation for Intelligent Physical Agents (FIPA).

@inproceedings{2002KehagiasAIS,
author={Dionisis Kehagias and Andreas L. Symeonidis and Pericles A. Mitkas and M. Alborg},
title={Towards improving Multi-Agent Simulation in safety management and hazard control environments},
booktitle={Simulation and Planning in High Autonomy Systems AIS 2002},
pages={757--764},
address={Lisbon, Portugal},
year={2002},
month={04},
date={2002-04-07},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/TOWARDS-IMPROVING-MULTI-AGENT-SIMULATION-IN-SAFETY-MANAGEMENT-AND-HAZARD-CONTROL-ENVIRONMENTS.pdf},
keywords={hazard control},
abstract={This paper introduces the capabilities of Agent Academy in the area of Safety Management and Hazard Control Systems. Agent Academy is a framework under development, which uses data mining techniques for training intelligent agents. This framework generates software agents with an initial degree of intelligence and trains them to manipulate complex tasks. The agents, are further integrated into a simulation multi-agent environment capable of managing issues in a hazardous environment, as well as regulating the parameters of the safety management strategy to be deployed in order to control the hazards. The initially created agents take part in long agent-to-agent transactions and their activities are formed into behavioural data, which are stored in a database. As soon as the amount of collected data increases sufficiently, a data mining process is initiated, in order to extract specific trends adapted by agents and improve their intelligence. The result of the overall procedure aims to improve the simulation environment of safety management. The communication of agents as well as the architectural characteristics of the simulation environment adheres to the set of specifications imposed by the Foundation for Intelligent Physical Agents (FIPA).}
}

Pericles A. Mitkas, Andreas L. Symeonidis, Dionisis Kehagias, Ioannis N. Athanasiadis, G. Laleci, G. Kurt, Y. Kabak, A. Acar and A. Dogac
"An Agent Framework for Dynamic Agent Retraining: Agent Academy"
eBusiness and eWork 2002 (e2002) 12th annual conference and exhibition, pp. 757--764, Prague, Czech Republic, 2002 Oct

Agent Academy (AA) aims to develop a multi-agent society that can train new agents for specific or general tasks, while constantly retraining existing agents in a recursive mode. The system collects information both from the environment and from the behaviors of the acting agents and their related successes/failures, generating a body of data, stored in the Agent Use Repository, which is mined by the Data Miner module in order to generate useful knowledge about the application domain. Knowledge extracted by the Data Miner is used by the Agent Training Module to train new agents or to enhance the behavior of agents already running. In this paper the Agent Academy framework is introduced, and its overall architecture and functionality are presented. Training issues as well as agent ontologies are discussed. Finally, a scenario which aims to provide environmental alerts to both individuals and public authorities is described as an AA-based use case.

@inproceedings{2002MitkaseBusiness,
author={Pericles A. Mitkas and Andreas L. Symeonidis and Dionisis Kechagias and Ioannis N. Athanasiadis and G. Laleci and G. Kurt and Y. Kabak and A. Acar and A. Dogac},
title={An Agent Framework for Dynamic Agent Retraining: Agent Academy},
booktitle={eBusiness and eWork 2002 (e2002) 12th annual conference and exhibition},
pages={757--764},
address={Prague, Czech Republic},
year={2002},
month={10},
date={2002-10-16},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-Agent-Framework-for-Dynamic-Agent-Retraining-Agent-Academy.pdf},
abstract={Agent Academy (AA) aims to develop a multi-agent society that can train new agents for specific or general tasks, while constantly retraining existing agents in a recursive mode. The system is based on collecting information both from the environment and the behaviors of the acting agents and their related successes/failures to generate a body of data, stored in the Agent Use Repository, which is mined by the Data Miner module, in order to generate useful knowledge about the application domain. Knowledge extracted by the Data Miner is used by the Agent Training Module as to train new agents or to enhance the behavior of agents already running. In this paper the Agent Academy framework is introduced, and its overall architecture and functionality are presented. Training issues as well as agent ontologies are discussed. Finally, a scenario, which aims to provide environmental alerts to both individuals and public authorities, is described an AA-based use case.}
}

Andreas Symeonidis, Pericles A. Mitkas and Dionisis Kehagias
"Mining Patterns and Rules for Improving Agent Intelligence Through an Integrated Multi-Agent Platform"
6th IASTED International Conference on Artificial Intelligence and Soft Computing (ASC 2002), pp. 757--764, Banff, Alberta, Canada, 2002 Jan

This paper introduces the capabilities of Agent Academy in the area of Safety Management and Hazard Control Systems. Agent Academy is a framework under development which uses data mining techniques for training intelligent agents. This framework generates software agents with an initial degree of intelligence and trains them to manipulate complex tasks. The agents are further integrated into a simulation multi-agent environment capable of managing issues in a hazardous environment, as well as regulating the parameters of the safety management strategy to be deployed in order to control the hazards. The initially created agents take part in long agent-to-agent transactions and their activities are formed into behavioural data, which are stored in a database. As soon as the amount of collected data increases sufficiently, a data mining process is initiated in order to extract specific trends adopted by agents and improve their intelligence. The overall procedure aims to improve the simulation environment of safety management. The communication of agents, as well as the architectural characteristics of the simulation environment, adheres to the set of specifications imposed by the Foundation for Intelligent Physical Agents (FIPA).

@inproceedings{2002SymeonidisASC,
author={Andreas Symeonidis and Pericles A. Mitkas and Dionisis Kehagias},
title={Mining Patterns and Rules for Improving Agent Intelligence Through an Integrated Multi-Agent Platform},
booktitle={6th IASTED International Conference on Artificial Intelligence and Soft Computing (ASC 2002)},
pages={757--764},
address={Banff, Alberta, Canada},
year={2002},
month={01},
date={2002-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/MINING-PATTERNS-AND-RULES-FOR-IMPROVING-AGENT-INTELLIGENCE-THROUGH-AN-INTEGRATED-MULTI-AGENT-PLATFORM.pdf},
keywords={hazard control},
abstract={This paper introduces the capabilities of Agent Academy in the area of Safety Management and Hazard Control Systems. Agent Academy is a framework under development, which uses data mining techniques for training intelligent agents. This framework generates software agents with an initial degree of intelligence and trains them to manipulate complex tasks. The agents, are further integrated into a simulation multi-agent environment capable of managing issues in a hazardous environment, as well as regulating the parameters of the safety management strategy to be deployed in order to control the hazards. The initially created agents take part in long agent-to-agent transactions and their activities are formed into behavioural data, which are stored in a database. As soon as the amount of collected data increases sufficiently, a data mining process is initiated, in order to extract specific trends adapted by agents and improve their intelligence. The result of the overall procedure aims to improve the simulation environment of safety management. The communication of agents as well as the architectural characteristics of the simulation environment adheres to the set of specifications imposed by the Foundation for Intelligent Physical Agents (FIPA).}
}