Publications



2019

Journal Articles

Emmanouil Krasanakis, Emmanouil Schinas, Symeon Papadopoulos, Yiannis Kompatsiaris and Andreas Symeonidis
"Boosted seed oversampling for local community ranking"
Information Processing & Management, pp. 102053, 2019 Jun

Local community detection is an emerging topic in network analysis that aims to detect well-connected communities encompassing sets of priorly known seed nodes. In this work, we explore the similar problem of ranking network nodes based on their relevance to the communities characterized by seed nodes. However, seed nodes may not be central enough or sufficiently many to produce high quality ranks. To solve this problem, we introduce a methodology we call seed oversampling, which first runs a node ranking algorithm to discover more nodes that belong to the community and then reruns the same ranking algorithm for the new seed nodes. We formally discuss why this process improves the quality of calculated community ranks if the original set of seed nodes is small and introduce a boosting scheme that iteratively repeats seed oversampling to further improve rank quality when certain ranking algorithm properties are met. Finally, we demonstrate the effectiveness of our methods in improving community relevance ranks given only a few random seed nodes of real-world network communities. In our experiments, boosted and simple seed oversampling yielded better rank quality than the previous neighborhood inflation heuristic, which adds the neighborhoods of original seed nodes to seeds.

@article{KRASANAKIS2019102053,
author={Emmanouil Krasanakis and Emmanouil Schinas and Symeon Papadopoulos and Yiannis Kompatsiaris and Andreas Symeonidis},
title={Boosted seed oversampling for local community ranking},
journal={Information Processing \& Management},
pages={102053},
year={2019},
month={06},
date={2019-06-19},
doi={10.1016/j.ipm.2019.06.002},
issn={0306-4573},
url={http://www.sciencedirect.com/science/article/pii/S0306457318308574},
abstract={Local community detection is an emerging topic in network analysis that aims to detect well-connected communities encompassing sets of priorly known seed nodes. In this work, we explore the similar problem of ranking network nodes based on their relevance to the communities characterized by seed nodes. However, seed nodes may not be central enough or sufficiently many to produce high quality ranks. To solve this problem, we introduce a methodology we call seed oversampling, which first runs a node ranking algorithm to discover more nodes that belong to the community and then reruns the same ranking algorithm for the new seed nodes. We formally discuss why this process improves the quality of calculated community ranks if the original set of seed nodes is small and introduce a boosting scheme that iteratively repeats seed oversampling to further improve rank quality when certain ranking algorithm properties are met. Finally, we demonstrate the effectiveness of our methods in improving community relevance ranks given only a few random seed nodes of real-world network communities. In our experiments, boosted and simple seed oversampling yielded better rank quality than the previous neighborhood inflation heuristic, which adds the neighborhoods of original seed nodes to seeds.}
}
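The seed oversampling procedure described in the abstract can be sketched in a few lines: run a base ranking algorithm from the original seeds, promote the top-ranked non-seed nodes to seeds, and rerun the ranker. The base ranker below is a plain power-iteration personalized PageRank chosen for illustration; the paper's actual ranking algorithms, parameters and boosting scheme are not reproduced here.

```python
# Sketch of seed oversampling: rank nodes relative to the seeds, promote the
# top-ranked non-seed nodes to seeds, then rank again from the enlarged set.

def personalized_pagerank(graph, seeds, alpha=0.85, iters=100):
    """graph: dict node -> list of neighbours; seeds: set of seed nodes."""
    nodes = list(graph)
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    ranks = dict(restart)
    for _ in range(iters):
        new = {n: (1 - alpha) * restart[n] for n in nodes}
        for n in nodes:
            deg = len(graph[n])
            if deg:
                share = alpha * ranks[n] / deg
                for m in graph[n]:
                    new[m] += share
        ranks = new
    return ranks

def seed_oversampling(graph, seeds, extra=2):
    """Run the ranker, add the top non-seed nodes to the seeds, rerun."""
    first = personalized_pagerank(graph, seeds)
    candidates = sorted((n for n in graph if n not in seeds),
                        key=first.get, reverse=True)
    oversampled = set(seeds) | set(candidates[:extra])
    return personalized_pagerank(graph, oversampled)

# Toy graph: two triangles joined by the single edge 2-3.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
ranks = seed_oversampling(g, {0})
```

On this toy graph, oversampling from the single seed node 0 promotes its triangle neighbours to seeds, so the whole left community outranks the right one.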

Michail Papamichail, Kyriakos Chatzidimitriou, Thomas Karanikiotis, Napoleon-Christos Oikonomou, Andreas Symeonidis and Sashi Saripalle
"BrainRun: A Behavioral Biometrics Dataset towards Continuous Implicit Authentication"
Data, 4, (2), 2019 May

The widespread use of smartphones has dictated a new paradigm, where mobile applications are the primary channel for dealing with day-to-day tasks. This paradigm is full of sensitive information, making security of utmost importance. To that end, and given the traditional authentication techniques (passwords and/or unlock patterns) which have become ineffective, several research efforts are targeted towards biometrics security, while more advanced techniques are considering continuous implicit authentication on the basis of behavioral biometrics. However, most studies in this direction are performed “in vitro” resulting in small-scale experimentation. In this context, and in an effort to create a solid information basis upon which continuous authentication models can be built, we employ the real-world application “BrainRun”, a brain-training game aiming at boosting cognitive skills of individuals. BrainRun embeds a gestures capturing tool, so that the different types of gestures that describe the swiping behavior of users are recorded and thus can be modeled. Upon releasing the application at both the “Google Play Store” and “Apple App Store”, we construct a dataset containing gestures and sensors data for more than 2000 different users and devices. The dataset is distributed under the CC0 license and can be found at the EU Zenodo repository.

@article{Papamichail2019,
author={Michail Papamichail and Kyriakos Chatzidimitriou and Thomas Karanikiotis and Napoleon-Christos Oikonomou and Andreas Symeonidis and Sashi Saripalle},
title={BrainRun: A Behavioral Biometrics Dataset towards Continuous Implicit Authentication},
journal={Data},
volume={4},
number={2},
year={2019},
month={05},
date={2019-05-03},
url={https://res.mdpi.com/data/data-04-00060/article_deploy/data-04-00060.pdf?filename=&attachment=1},
doi={10.3390/data4020060},
issn={2306-5729},
abstract={The widespread use of smartphones has dictated a new paradigm, where mobile applications are the primary channel for dealing with day-to-day tasks. This paradigm is full of sensitive information, making security of utmost importance. To that end, and given the traditional authentication techniques (passwords and/or unlock patterns) which have become ineffective, several research efforts are targeted towards biometrics security, while more advanced techniques are considering continuous implicit authentication on the basis of behavioral biometrics. However, most studies in this direction are performed “in vitro” resulting in small-scale experimentation. In this context, and in an effort to create a solid information basis upon which continuous authentication models can be built, we employ the real-world application “BrainRun”, a brain-training game aiming at boosting cognitive skills of individuals. BrainRun embeds a gestures capturing tool, so that the different types of gestures that describe the swiping behavior of users are recorded and thus can be modeled. Upon releasing the application at both the “Google Play Store” and “Apple App Store”, we construct a dataset containing gestures and sensors data for more than 2000 different users and devices. The dataset is distributed under the CC0 license and can be found at the EU Zenodo repository.}
}

Michail D. Papamichail, Themistoklis Diamantopoulos and Andreas L. Symeonidis
"Software Reusability Dataset based on Static Analysis Metrics and Reuse Rate Information"
Data in Brief, 2019 Dec

The widely adopted component-based development paradigm considers the reuse of proper software components as a primary criterion for successful software development. As a result, various research efforts are directed towards evaluating the extent to which a software component is reusable. Prior efforts follow expert-based approaches, however the continuously increasing open-source software initiative allows the introduction of data-driven alternatives. In this context we have generated a dataset that harnesses information residing in online code hosting facilities and introduces the actual reuse rate of software components as a measure of their reusability. To do so, we have analyzed the most popular projects included in the maven registry and have computed a large number of static analysis metrics at both class and package levels using SourceMeter tool [2] that quantify six major source code properties: complexity, cohesion, coupling, inheritance, documentation and size. For these projects we additionally computed their reuse rate using our self-developed code search engine, AGORA [5]. The generated dataset contains analysis information regarding more than 24,000 classes and 2,000 packages, and can, thus, be used as the information basis towards the design and development of data-driven reusability evaluation methodologies. The dataset is related to the research article entitled "Measuring the Reusability of Software Components using Static Analysis Metrics and Reuse Rate Information".

@article{PAPAMICHAIL2019104687,
author={Michail D. Papamichail and Themistoklis Diamantopoulos and Andreas L. Symeonidis},
title={Software Reusability Dataset based on Static Analysis Metrics and Reuse Rate Information},
journal={Data in Brief},
year={2019},
month={12},
date={2019-12-31},
url={https://reader.elsevier.com/reader/sd/pii/S235234091931042X?token=9CDEB13940390201A35D26027D763CACB6EE4D49BFA9B920C4D32B348809F1F6A7DE309AA1737161C7E5BF1963BBD952},
doi={10.1016/j.dib.2019.104687},
keywords={developer-perceived reusability;code reuse;static analysis metrics;Reusability assessment},
abstract={The widely adopted component-based development paradigm considers the reuse of proper software components as a primary criterion for successful software development. As a result, various research efforts are directed towards evaluating the extent to which a software component is reusable. Prior efforts follow expert-based approaches, however the continuously increasing open-source software initiative allows the introduction of data-driven alternatives. In this context we have generated a dataset that harnesses information residing in online code hosting facilities and introduces the actual reuse rate of software components as a measure of their reusability. To do so, we have analyzed the most popular projects included in the maven registry and have computed a large number of static analysis metrics at both class and package levels using SourceMeter tool [2] that quantify six major source code properties: complexity, cohesion, coupling, inheritance, documentation and size. For these projects we additionally computed their reuse rate using our self-developed code search engine, AGORA [5]. The generated dataset contains analysis information regarding more than 24,000 classes and 2,000 packages, and can, thus, be used as the information basis towards the design and development of data-driven reusability evaluation methodologies. The dataset is related to the research article entitled \"Measuring the Reusability of Software Components using Static Analysis Metrics and Reuse Rate Information\".}
}

Michail D. Papamichail, Themistoklis Diamantopoulos and Andreas L. Symeonidis
"Measuring the Reusability of Software Components using Static Analysis Metrics and Reuse Rate Information"
Journal of Systems and Software, pp. 110423, 2019 Sep

Nowadays, the continuously evolving open-source community and the increasing demands of end users are forming a new software development paradigm; developers rely more on reusing components from online sources to minimize the time and cost of software development. An important challenge in this context is to evaluate the degree to which a software component is suitable for reuse, i.e. its reusability. Contemporary approaches assess reusability using static analysis metrics by relying on the help of experts, who usually set metric thresholds or provide ground truth values so that estimation models are built. However, even when expert help is available, it may still be subjective or case-specific. In this work, we refrain from expert-based solutions and employ the actual reuse rate of source code components as ground truth for building a reusability estimation model. We initially build a benchmark dataset, harnessing the power of online repositories to determine the number of reuse occurrences for each component in the dataset. Subsequently, we build a model based on static analysis metrics to assess reusability from six different properties: complexity, cohesion, coupling, inheritance, documentation and size. The evaluation of our methodology indicates that our system can effectively assess reusability as perceived by developers.

@article{PAPAMICHAIL2019110423,
author={Michail D. Papamichail and Themistoklis Diamantopoulos and Andreas L. Symeonidis},
title={Measuring the Reusability of Software Components using Static Analysis Metrics and Reuse Rate Information},
journal={Journal of Systems and Software},
pages={110423},
year={2019},
month={09},
date={2019-09-17},
url={https://issel.ee.auth.gr/wp-content/uploads/2019/09/2019mpapamicJSS.pdf},
doi={10.1016/j.jss.2019.110423},
issn={0164-1212},
publisher_url={https://www.sciencedirect.com/science/article/pii/S0164121219301979},
keywords={developer-perceived reusability;code reuse;static analysis metrics;reusability estimation},
abstract={Nowadays, the continuously evolving open-source community and the increasing demands of end users are forming a new software development paradigm; developers rely more on reusing components from online sources to minimize the time and cost of software development. An important challenge in this context is to evaluate the degree to which a software component is suitable for reuse, i.e. its reusability. Contemporary approaches assess reusability using static analysis metrics by relying on the help of experts, who usually set metric thresholds or provide ground truth values so that estimation models are built. However, even when expert help is available, it may still be subjective or case-specific. In this work, we refrain from expert-based solutions and employ the actual reuse rate of source code components as ground truth for building a reusability estimation model. We initially build a benchmark dataset, harnessing the power of online repositories to determine the number of reuse occurrences for each component in the dataset. Subsequently, we build a model based on static analysis metrics to assess reusability from six different properties: complexity, cohesion, coupling, inheritance, documentation and size. The evaluation of our methodology indicates that our system can effectively assess reusability as perceived by developers.}
}
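The data-driven idea of the paper, using observed reuse rates as ground truth for a model over static analysis metrics, can be illustrated with a toy fit. The metric values and the simple gradient-descent linear model below are fabricated stand-ins for the paper's actual dataset and modeling pipeline.

```python
# Toy sketch: fit a linear model mapping static analysis metrics to a
# (scaled) reuse rate target by stochastic gradient descent.

def fit_linear(X, y, lr=0.1, epochs=2000):
    """Gradient descent for y ~ w.x + b on small, pre-scaled data."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        for i, x in enumerate(X):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y[i]
            w = [wi - lr * err * xi / n for wi, xi in zip(w, x)]
            b -= lr * err / n
    return w, b

# Fabricated rows: (cohesion, complexity) scaled to [0, 1]; target: reuse rate.
X = [[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.2, 0.9]]
y = [0.85, 0.75, 0.30, 0.15]
w, b = fit_linear(X, y)
pred = sum(wi * xi for wi, xi in zip(w, [0.85, 0.15])) + b
```

After fitting, the model predicts higher reusability for the high-cohesion, low-complexity components, mirroring the qualitative relationship the dataset is meant to capture.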

Emmanouil G. Tsardoulias, M. Protopapas, Andreas L. Symeonidis and Loukas Petrou
"A Comparative Analysis of Pattern Matching Techniques Towards OGM Evaluation"
Journal of Intelligent & Robotic Systems, 2019 Jul

The alignment of two occupancy grid maps generated by SLAM algorithms is a quite researched problem, being an obligatory step either for unsupervised map merging techniques or for evaluation of OGMs (Occupancy Grid Maps) against a blueprint of the environment. This paper provides an overview of the existing automatic alignment techniques of two occupancy grid maps that employ pattern matching. Additionally, an alignment pipeline using local features and image descriptors is implemented, as well as a method to eliminate erroneous correspondences, aiming at producing the correct transformation between the two maps. Finally, map quality metrics are proposed and utilized, in order to quantify the produced map’s correctness. A comparative analysis was performed over a number of image processing and OGM-oriented detectors and descriptors, in order to identify the best combinations for the map evaluation problem, performed between two OGMs or between an OGM and a Blueprint map.

@article{Tsardoulias2019,
author={Emmanouil G. Tsardoulias and M. Protopapas and Andreas L. Symeonidis and Loukas Petrou},
title={A Comparative Analysis of Pattern Matching Techniques Towards OGM Evaluation},
journal={Journal of Intelligent \& Robotic Systems},
year={2019},
month={07},
date={2019-07-11},
url={https://link.springer.com/content/pdf/10.1007%2Fs10846-019-01053-7.pdf},
doi={10.1007/s10846-019-01053-7},
issn={1573-0409},
abstract={The alignment of two occupancy grid maps generated by SLAM algorithms is a quite researched problem, being an obligatory step either for unsupervised map merging techniques or for evaluation of OGMs (Occupancy Grid Maps) against a blueprint of the environment. This paper provides an overview of the existing automatic alignment techniques of two occupancy grid maps that employ pattern matching. Additionally, an alignment pipeline using local features and image descriptors is implemented, as well as a method to eliminate erroneous correspondences, aiming at producing the correct transformation between the two maps. Finally, map quality metrics are proposed and utilized, in order to quantify the produced map’s correctness. A comparative analysis was performed over a number of image processing and OGM-oriented detectors and descriptors, in order to identify the best combinations for the map evaluation problem, performed between two OGMs or between an OGM and a Blueprint map.}
}
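The final step of such an alignment pipeline, estimating the transform between two maps from matched keypoints, admits a closed-form sketch in 2D. The least-squares rigid transform below assumes correspondences have already been produced by feature detection, description and outlier elimination; it is illustrative, not the paper's implementation.

```python
# Least-squares 2D rigid transform (rotation + translation) from matched
# point pairs: center both sets, recover the angle from the cross/dot
# correlations, then solve for the translation.
import math

def rigid_transform_2d(src, dst):
    """Rotation angle and translation mapping src points onto dst points."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s_cross = s_dot = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

# Keypoints of one map rotated by 90 degrees and shifted by (2, 3):
src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 3), (2, 4), (1, 3)]
theta, (tx, ty) = rigid_transform_2d(src, dst)
```

The recovered angle and translation reproduce the transform exactly here; with noisy real correspondences the same formula gives the least-squares estimate.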

2018

Journal Articles

Christoforos Zolotas, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"RESTsec: a low-code platform for generating secure by design enterprise services"
Enterprise Information Systems, pp. 1-27, 2018 Mar

In the modern business world it is increasingly often that Enterprises opt to bring their business model online, in their effort to reach out to more end users and increase their customer base. While transitioning to the new model, enterprises consider securing their data of pivotal importance. In fact, many efforts have been introduced to automate this ‘webification’ process; however, they all fall short in some aspect: a) they either generate only the security infrastructure, assigning implementation to the developers, b) they embed mainstream, less powerful authorisation schemes, or c) they disregard the merits of the dominating REST architecture and adopt less suitable approaches. In this paper we present RESTsec, a Low-Code platform that supports rapid security requirements modelling for Enterprise Services, abiding by the state of the art ABAC authorisation scheme. RESTsec enables the developer to seamlessly embed the desired access control policy and generate the service, the security infrastructure and the code. Evaluation shows that our approach is valid and can help developers deliver secure by design enterprise services in a rapid and automated manner.

@article{2018Zolotas,
author={Christoforos Zolotas and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={RESTsec: a low-code platform for generating secure by design enterprise services},
journal={Enterprise Information Systems},
pages={1-27},
year={2018},
month={03},
date={2018-03-09},
doi={10.1080/17517575.2018.1462403},
abstract={In the modern business world it is increasingly often that Enterprises opt to bring their business model online, in their effort to reach out to more end users and increase their customer base. While transitioning to the new model, enterprises consider securing their data of pivotal importance. In fact, many efforts have been introduced to automate this ‘webification’ process; however, they all fall short in some aspect: a) they either generate only the security infrastructure, assigning implementation to the developers, b) they embed mainstream, less powerful authorisation schemes, or c) they disregard the merits of the dominating REST architecture and adopt less suitable approaches. In this paper we present RESTsec, a Low-Code platform that supports rapid security requirements modelling for Enterprise Services, abiding by the state of the art ABAC authorisation scheme. RESTsec enables the developer to seamlessly embed the desired access control policy and generate the service, the security infrastructure and the code. Evaluation shows that our approach is valid and can help developers deliver secure by design enterprise services in a rapid and automated manner.}
}
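The ABAC scheme that RESTsec embeds decides access from attributes of the subject, resource and action rather than from roles alone. The rule format below is invented for illustration and is not RESTsec's policy language or generated infrastructure:

```python
# Toy attribute-based access control (ABAC): a request is permitted if some
# rule's attribute conditions all hold and the requested action is listed.

def matches(conditions, attributes):
    """True if every required attribute is present with the required value."""
    return all(attributes.get(k) == v for k, v in conditions.items())

def abac_allow(policy, subject, resource, action):
    """Permit if any rule matches both attribute sets and the action."""
    return any(
        action in rule["actions"]
        and matches(rule["subject"], subject)
        and matches(rule["resource"], resource)
        for rule in policy
    )

# Hypothetical policy: sales staff may read or update open invoices.
policy = [
    {"subject": {"department": "sales"},
     "resource": {"type": "invoice", "status": "open"},
     "actions": {"read", "update"}},
]
ok = abac_allow(policy, {"department": "sales", "role": "agent"},
                {"type": "invoice", "status": "open"}, "read")
denied = abac_allow(policy, {"department": "hr"},
                    {"type": "invoice", "status": "open"}, "read")
```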

George Mamalakis, Christos Diou, Andreas L. Symeonidis and Leonidas Georgiadis
"Of daemons and men: reducing false positive rate in intrusion detection systems with file system footprint analysis"
Neural Computing and Applications, 2018 May

In this work, we propose a methodology for reducing false alarms in file system intrusion detection systems, by taking into account the daemon’s file system footprint. More specifically, we experimentally show that sequences of outliers can serve as a distinguishing characteristic between true and false positives, and we show how analysing sequences of outliers can lead to lower false positive rates, while maintaining high detection rates. Based on this analysis, we developed an anomaly detection filter that learns outlier sequences using k-nearest neighbours with normalised longest common subsequence. Outlier sequences are then used as a filter to reduce false positives on the FI2DS file system intrusion detection system. This filter is evaluated on both overlapping and non-overlapping sequences of outliers. In both cases, experiments performed on three real-world web servers and a honeynet show that our approach achieves significant false positive reduction rates (up to 50 times), without any degradation of the corresponding true positive detection rates.

@article{Mamalakis2018,
author={George Mamalakis and Christos Diou and Andreas L. Symeonidis and Leonidas Georgiadis},
title={Of daemons and men: reducing false positive rate in intrusion detection systems with file system footprint analysis},
journal={Neural Computing and Applications},
year={2018},
month={05},
date={2018-05-12},
doi={10.1007/s00521-018-3550-x},
issn={1433-3058},
keywords={Intrusion detection systems;Anomaly detection;Sequences of outliers},
abstract={In this work, we propose a methodology for reducing false alarms in file system intrusion detection systems, by taking into account the daemon’s file system footprint. More specifically, we experimentally show that sequences of outliers can serve as a distinguishing characteristic between true and false positives, and we show how analysing sequences of outliers can lead to lower false positive rates, while maintaining high detection rates. Based on this analysis, we developed an anomaly detection filter that learns outlier sequences using k-nearest neighbours with normalised longest common subsequence. Outlier sequences are then used as a filter to reduce false positives on the FI2DS file system intrusion detection system. This filter is evaluated on both overlapping and non-overlapping sequences of outliers. In both cases, experiments performed on three real-world web servers and a honeynet show that our approach achieves significant false positive reduction rates (up to 50 times), without any degradation of the corresponding true positive detection rates.}
}
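The core similarity measure of the filter, k-nearest neighbours over normalised longest common subsequence (LCS), can be sketched directly. The training sequences, parameter values and decision rule below are hypothetical placeholders for the paper's actual setup:

```python
# LCS length normalised to [0, 1], used as the similarity for a k-NN filter
# that flags outlier sequences resembling known false positives.

def lcs_length(a, b):
    """Classic dynamic-programming longest-common-subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def nlcs(a, b):
    """LCS length normalised by the longer sequence."""
    return lcs_length(a, b) / max(len(a), len(b), 1)

def knn_is_false_positive(candidate, known_fp_sequences, k=3, threshold=0.5):
    """Flag a candidate outlier sequence as a likely false positive when its
    mean similarity to the k most similar known sequences is high enough."""
    sims = sorted((nlcs(candidate, s) for s in known_fp_sequences), reverse=True)
    top = sims[:k]
    return sum(top) / len(top) >= threshold

# Hypothetical outlier sequences recorded from benign daemon activity.
known = [list("abcde"), list("abxde"), list("zzzzz")]
verdict = knn_is_false_positive(list("abcdz"), known)
```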

2017

Journal Articles

Themistoklis Diamantopoulos, Michael Roth, Andreas Symeonidis and Ewan Klein
"Software requirements as an application domain for natural language processing"
Language Resources and Evaluation, pp. 1-30, 2017 Feb

Mapping functional requirements first to specifications and then to code is one of the most challenging tasks in software development. Since requirements are commonly written in natural language, they can be prone to ambiguity, incompleteness and inconsistency. Structured semantic representations allow requirements to be translated to formal models, which can be used to detect problems at an early stage of the development process through validation. Storing and querying such models can also facilitate software reuse. Several approaches constrain the input format of requirements to produce specifications, however they usually require considerable human effort in order to adopt domain-specific heuristics and/or controlled languages. We propose a mechanism that automates the mapping of requirements to formal representations using semantic role labeling. We describe the first publicly available dataset for this task, employ a hierarchical framework that allows requirements concepts to be annotated, and discuss how semantic role labeling can be adapted for parsing software requirements.

@article{Diamantopoulos2017,
author={Themistoklis Diamantopoulos and Michael Roth and Andreas Symeonidis and Ewan Klein},
title={Software requirements as an application domain for natural language processing},
journal={Language Resources and Evaluation},
pages={1-30},
year={2017},
month={02},
date={2017-02-27},
url={http://rdcu.be/tpxd},
doi={10.1007/s10579-017-9381-z},
abstract={Mapping functional requirements first to specifications and then to code is one of the most challenging tasks in software development. Since requirements are commonly written in natural language, they can be prone to ambiguity, incompleteness and inconsistency. Structured semantic representations allow requirements to be translated to formal models, which can be used to detect problems at an early stage of the development process through validation. Storing and querying such models can also facilitate software reuse. Several approaches constrain the input format of requirements to produce specifications, however they usually require considerable human effort in order to adopt domain-specific heuristics and/or controlled languages. We propose a mechanism that automates the mapping of requirements to formal representations using semantic role labeling. We describe the first publicly available dataset for this task, employ a hierarchical framework that allows requirements concepts to be annotated, and discuss how semantic role labeling can be adapted for parsing software requirements.}
}
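As an illustration of the target output of such a mapping, a requirement sentence can be reduced to an agent/action/theme frame. The shallow regular expression below is a deliberate simplification; the paper employs proper semantic role labeling over an annotated dataset:

```python
# Naive extraction of a structured frame from a boilerplate requirement
# sentence; only conveys the shape of the output, not the SRL method.
import re

PATTERN = re.compile(
    r"^The (?P<agent>[\w ]+?) (?:must|shall|should) be able to "
    r"(?P<action>\w+) (?:an? |the )?(?P<theme>[\w ]+)\.?$",
    re.IGNORECASE,
)

def parse_requirement(sentence):
    """Return an agent/action/theme dict, or None if the pattern misses."""
    m = PATTERN.match(sentence.strip())
    return m.groupdict() if m else None

frame = parse_requirement("The user must be able to create an account.")
```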

Themistoklis Diamantopoulos and Andreas Symeonidis
"Enhancing requirements reusability through semantic modeling and data mining techniques"
Enterprise Information Systems, pp. 1-22, 2017 Dec

Enhancing the requirements elicitation process has always been of added value to software engineers, since it expedites the software lifecycle and reduces errors in the conceptualization phase of software products. The challenge posed to the research community is to construct formal models that are capable of storing requirements from multimodal formats (text and UML diagrams) and promote easy requirements reuse, while at the same time being traceable to allow full control of the system design, as well as comprehensible to software engineers and end users. In this work, we present an approach that enhances requirements reuse while capturing the static (functional requirements, use case diagrams) and dynamic (activity diagrams) view of software projects. Our ontology-based approach allows for reasoning over the stored requirements, while the mining methodologies employed detect incomplete or missing software requirements, this way reducing the effort required for requirements elicitation at an early stage of the project lifecycle.

@article{Diamantopoulos2017EIS,
author={Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Enhancing requirements reusability through semantic modeling and data mining techniques},
journal={Enterprise Information Systems},
pages={1-22},
year={2017},
month={12},
date={2017-12-17},
url={https://issel.ee.auth.gr/wp-content/uploads/2019/08/EIS2017.pdf},
doi={10.1080/17517575.2017.1416177},
publisher_url={https://doi.org/10.1080/17517575.2017.1416177},
abstract={Enhancing the requirements elicitation process has always been of added value to software engineers, since it expedites the software lifecycle and reduces errors in the conceptualization phase of software products. The challenge posed to the research community is to construct formal models that are capable of storing requirements from multimodal formats (text and UML diagrams) and promote easy requirements reuse, while at the same time being traceable to allow full control of the system design, as well as comprehensible to software engineers and end users. In this work, we present an approach that enhances requirements reuse while capturing the static (functional requirements, use case diagrams) and dynamic (activity diagrams) view of software projects. Our ontology-based approach allows for reasoning over the stored requirements, while the mining methodologies employed detect incomplete or missing software requirements, this way reducing the effort required for requirements elicitation at an early stage of the project lifecycle.}
}

Miltiadis G. Siavvas, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"QATCH - An adaptive framework for software product quality assessment"
Expert Systems with Applications, 2017 May

The subjectivity that underlies the notion of quality does not allow the design and development of a universally accepted mechanism for software quality assessment. This is why contemporary research is now focused on seeking mechanisms able to produce software quality models that can be easily adjusted to custom user needs. In this context, we introduce QATCH, an integrated framework that applies static analysis to benchmark repositories in order to generate software quality models tailored to stakeholder specifications. Fuzzy multi-criteria decision-making is employed in order to model the uncertainty imposed by experts’ judgments. These judgments can be expressed into linguistic values, which makes the process more intuitive. Furthermore, a robust software quality model, the base model, is generated by the system, which is used in the experiments for QATCH system verification. The paper provides an extensive analysis of QATCH and thoroughly discusses its validity and added value in the field of software quality through a number of individual experiments.

@article{Siavvas2017,
author={Miltiadis G. Siavvas and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={QATCH - An adaptive framework for software product quality assessment},
journal={Expert Systems with Applications},
year={2017},
month={05},
date={2017-05-25},
url={http://www.sciencedirect.com/science/article/pii/S0957417417303883},
doi={10.1016/j.eswa.2017.05.060},
keywords={Software quality assessment;Software engineering;Multi-criteria decision making;Fuzzy analytic hierarchy process;Software static analysis;Quality metrics},
abstract={The subjectivity that underlies the notion of quality does not allow the design and development of a universally accepted mechanism for software quality assessment. This is why contemporary research is now focused on seeking mechanisms able to produce software quality models that can be easily adjusted to custom user needs. In this context, we introduce QATCH, an integrated framework that applies static analysis to benchmark repositories in order to generate software quality models tailored to stakeholder specifications. Fuzzy multi-criteria decision-making is employed in order to model the uncertainty imposed by experts’ judgments. These judgments can be expressed into linguistic values, which makes the process more intuitive. Furthermore, a robust software quality model, the base model, is generated by the system, which is used in the experiments for QATCH system verification. The paper provides an extensive analysis of QATCH and thoroughly discusses its validity and added value in the field of software quality through a number of individual experiments.}
}
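The AHP-style aggregation underlying such quality models can be sketched as follows: reduce a reciprocal pairwise-comparison matrix over quality characteristics to weights, then combine per-characteristic scores into one quality score. QATCH additionally fuzzifies expert judgments with linguistic values, a step this sketch omits; the comparison values below are hypothetical:

```python
# AHP row-geometric-mean weighting plus weighted aggregation of scores.
import math

def ahp_weights(pairwise):
    """Weights from a reciprocal pairwise-comparison matrix via the row
    geometric mean, normalised to sum to 1."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def quality_score(scores, weights):
    """Weighted aggregation of per-characteristic scores in [0, 1]."""
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical judgments over maintainability, reliability, security:
# entry [i][j] states how much characteristic i matters relative to j.
pairwise = [
    [1.0, 2.0, 0.5],
    [0.5, 1.0, 0.5],
    [2.0, 2.0, 1.0],
]
w = ahp_weights(pairwise)
score = quality_score([0.7, 0.9, 0.6], w)
```

Here security (row 3) dominates the comparisons, so it receives the largest weight in the aggregated score.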

Athanassios M. Kintsakis, Fotis E. Psomopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments"
SoftwareX, 6, pp. 217-224, 2017 Sep

Hermes introduces a new “describe once, run anywhere” paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.

@article{SOFTX89,
author={Athanassios M. Kintsakis and Fotis E. Psomopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments},
journal={SoftwareX},
volume={6},
pages={217-224},
year={2017},
month={09},
date={2017-09-19},
url={http://www.sciencedirect.com/science/article/pii/S2352711017300304},
doi={https://doi.org/10.1016/j.softx.2017.07.007},
keywords={Bioinformatics;hybrid cloud;scientific workflows;distributed computing},
abstract={Hermes introduces a new “describe once, run anywhere” paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.}
}

Cezary Zielinski, Maciej Stefanczyk, Tomasz Kornuta, Maksym Figat, Wojciech Dudek, Wojciech Szynkiewicz, Wlodzimierz Kasprzak, Jan Figat, Marcin Szlenk, Tomasz Winiarski, Konrad Banachowicz, Teresa Zielinska, Emmanouil G. Tsardoulias, Andreas L. Symeonidis, Fotis E. Psomopoulos, Athanassios M. Kintsakis, Pericles A. Mitkas, Aristeidis Thallas, Sofia E. Reppou, George T. Karagiannis, Konstantinos Panayiotou, Vincent Prunet, Manuel Serrano, Jean-Pierre Merlet, Stratos Arampatzis, Alexandros Giokas, Lazaros Penteridis, Ilias Trochidis, David Daney and Miren Iturburu
"Variable structure robot control systems: The RAPP approach"
Robotics and Autonomous Systems, 94, pp. 226-244, 2017 May

This paper presents a method of designing variable structure control systems for robots. As the on-board robot computational resources are limited, but in some cases the demands imposed on the robot by the user are virtually limitless, the solution is to produce a variable structure system. The task dependent part has to be exchanged, however the task governs the activities of the robot. Thus not only exchange of some task-dependent modules is required, but also supervisory responsibilities have to be switched. Such control systems are necessary in the case of robot companions, where the owner of the robot may demand from it to provide many services.

@article{Zielnski2017,
author={Cezary Zielinski and Maciej Stefanczyk and Tomasz Kornuta and Maksym Figat and Wojciech Dudek and Wojciech Szynkiewicz and Wlodzimierz Kasprzak and Jan Figat and Marcin Szlenk and Tomasz Winiarski and Konrad Banachowicz and Teresa Zielinska and Emmanouil G. Tsardoulias and Andreas L. Symeonidis and Fotis E. Psomopoulos and Athanassios M. Kintsakis and Pericles A. Mitkas and Aristeidis Thallas and Sofia E. Reppou and George T. Karagiannis and Konstantinos Panayiotou and Vincent Prunet and Manuel Serrano and Jean-Pierre Merlet and Stratos Arampatzis and Alexandros Giokas and Lazaros Penteridis and Ilias Trochidis and David Daney and Miren Iturburu},
title={Variable structure robot control systems: The RAPP approach},
journal={Robotics and Autonomous Systems},
volume={94},
pages={226-244},
year={2017},
month={05},
date={2017-05-05},
url={http://www.sciencedirect.com/science/article/pii/S0921889016306248},
doi={https://doi.org/10.1016/j.robot.2017.05.002},
keywords={robot controllers;variable structure controllers;cloud robotics;RAPP},
abstract={This paper presents a method of designing variable structure control systems for robots. As the on-board robot computational resources are limited, but in some cases the demands imposed on the robot by the user are virtually limitless, the solution is to produce a variable structure system. The task dependent part has to be exchanged, however the task governs the activities of the robot. Thus not only exchange of some task-dependent modules is required, but also supervisory responsibilities have to be switched. Such control systems are necessary in the case of robot companions, where the owner of the robot may demand from it to provide many services.}
}

2016

Journal Articles

Antonios Chrysopoulos, Christos Diou, Andreas Symeonidis and Pericles A. Mitkas
"Response modeling of small-scale energy consumers for effective demand response applications"
Electric Power Systems Research, 132, pp. 78-93, 2016 Mar

The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.

@article{2015ChrysopoulosEPSR,
author={Antonios Chrysopoulos and Christos Diou and Andreas Symeonidis and Pericles A. Mitkas},
title={Response modeling of small-scale energy consumers for effective demand response applications},
journal={Electric Power Systems Research},
volume={132},
pages={78-93},
year={2016},
month={03},
date={2016-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Response-modeling-of-small-scale-energy-consumers-for-effective-demand-response-applications.pdf},
abstract={The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.}
}

Pantelis Angelidis, Leslie Berman, Maria de la Luz Casas-Perez, Leo Anthony Celi, George E. Dafoulas, Alon Dagan, Braiam Escobar, Diego M. Lopez, Julieta Noguez, Juan Sebastian Osorio-Valencia, Charles Otine, Kenneth Paik, Luis Rojas-Potosi, Andreas Symeonidis and Eric Winkler
"The hackathon model to spur innovation around global mHealth"
Journal of Medical Engineering & Technology, pp. 1-8, 2016 Sep

The challenge of providing quality healthcare to underserved populations in low- and middle-income countries (LMICs) has attracted increasing attention from information and communication technology (ICT) professionals interested in providing societal impact through their work. Sana is an organisation hosted at the Institute for Medical Engineering and Science at the Massachusetts Institute of Technology that was established out of this interest. Over the past several years, Sana has developed a model of organising mobile health bootcamp and hackathon events in LMICs with the goal of encouraging increased collaboration between ICT and medical professionals and leveraging the growing prevalence of cellphones to provide health solutions in resource limited settings. Most recently, these events have been based in Colombia, Uganda, Greece and Mexico. The lessons learned from these events can provide a framework for others working to create sustainable health solutions in the developing world.

@article{2016AngelidisJMET,
author={Pantelis Angelidis and Leslie Berman and Maria de la Luz Casas-Perez and Leo Anthony Celi and George E. Dafoulas and Alon Dagan and Braiam Escobar and Diego M. Lopez and Julieta Noguez and Juan Sebastian Osorio-Valencia and Charles Otine and Kenneth Paik and Luis Rojas-Potosi and Andreas Symeonidis and Eric Winkler},
title={The hackathon model to spur innovation around global mHealth},
journal={Journal of Medical Engineering & Technology},
pages={1-8},
year={2016},
month={09},
date={2016-09-06},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/The-hackathon-model-to-spur-innovation-around-global-mHealth.pdf},
abstract={The challenge of providing quality healthcare to underserved populations in low- and middle-income countries (LMICs) has attracted increasing attention from information and communication technology (ICT) professionals interested in providing societal impact through their work. Sana is an organisation hosted at the Institute for Medical Engineering and Science at the Massachusetts Institute of Technology that was established out of this interest. Over the past several years, Sana has developed a model of organising mobile health bootcamp and hackathon events in LMICs with the goal of encouraging increased collaboration between ICT and medical professionals and leveraging the growing prevalence of cellphones to provide health solutions in resource limited settings. Most recently, these events have been based in Colombia, Uganda, Greece and Mexico. The lessons learned from these events can provide a framework for others working to create sustainable health solutions in the developing world.}
}

Michael Chatzidimopoulos, Fotis Psomopoulos, Emmanouil Malandrakis, Ioannis Ganopoulos, Panagiotis Madesis, Evangelos Vellios and Pavlidis Drogoudi
"Comparative Genomics of Botrytis cinerea Strains with Differential Multi-Drug Resistance"
Frontiers in Plant Science, 2016 Apr

Botrytis cinerea is a ubiquitous fungus difficult to control because it possesses a variety of attack modes, diverse hosts as inoculum sources, and it can survive as mycelia and/or conidia or for extended periods as sclerotia in crop debris. For these reasons the use of any single control measure is unlikely to succeed and a combination of cultural practices with the application of site-specific synthetic compounds provides the best protection for the crops (Williamson et al., 2007). However, chemical control has been adversely affected by the development of fungicide resistance. The selection of resistant individuals in a fungal population subjected to selective pressure due to fungicides is an evolutionary mechanism that promotes advantageous genotypes (Walker et al., 2013). High levels of resistance to site-specific fungicides are commonly associated with point mutations. For example, the mutations G143A, H272R, and F412S leading to changes in the target proteins CytB, SdhB, and Erg27 are conferring resistance of the pathogen to the chemical classes of QoIs, SDHIs, and hydroxyanilides, respectively (Leroux, 2007). Multidrug resistance is another mechanism associated with resistance in B. cinerea which involves mutations leading to overexpression of individual transporters such as ABC and MFS (Kretschmer et al., 2009). This mechanism is associated with low levels of resistance to multiple fungicides including the anilinopyrimidines and phenylpyrroles. However, a subdivision of gray mold populations was found to be more tolerant to these two classes of fungicides (Leroch et al., 2013). Previous reports have clearly demonstrated that the resistance to anilinopyrimidines has a qualitative, disruptive pattern, and is monogenically controlled (Chapeland et al., 1999).
In order to elucidate the mechanism of the resistance, the whole genome of three different samples (gene pools) was sequenced, each containing DNA of 10 selected strains of the same genotype regarding resistance to seven different classes of fungicides including anilinopyrimidines. This report presents the publicly available genomic data.

@article{2016ChatzidimopoulosFPS,
author={Michael Chatzidimopoulos and Fotis Psomopoulos and Emmanouil Malandrakis and Ioannis Ganopoulos and Panagiotis Madesis and Evangelos Vellios and Pavlidis Drogoudi},
title={Comparative Genomics of Botrytis cinerea Strains with Differential Multi-Drug Resistance},
journal={Frontiers in Plant Science},
year={2016},
month={04},
date={2016-04-28},
url={http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4849417/pdf/fpls-07-00554.pdf},
abstract={Botrytis cinerea is a ubiquitous fungus difficult to control because it possesses a variety of attack modes, diverse hosts as inoculum sources, and it can survive as mycelia and/or conidia or for extended periods as sclerotia in crop debris. For these reasons the use of any single control measure is unlikely to succeed and a combination of cultural practices with the application of site-specific synthetic compounds provides the best protection for the crops (Williamson et al., 2007). However, chemical control has been adversely affected by the development of fungicide resistance. The selection of resistant individuals in a fungal population subjected to selective pressure due to fungicides is an evolutionary mechanism that promotes advantageous genotypes (Walker et al., 2013). High levels of resistance to site-specific fungicides are commonly associated with point mutations. For example, the mutations G143A, H272R, and F412S leading to changes in the target proteins CytB, SdhB, and Erg27 are conferring resistance of the pathogen to the chemical classes of QoIs, SDHIs, and hydroxyanilides, respectively (Leroux, 2007). Multidrug resistance is another mechanism associated with resistance in B. cinerea which involves mutations leading to overexpression of individual transporters such as ABC and MFS (Kretschmer et al., 2009). This mechanism is associated with low levels of resistance to multiple fungicides including the anilinopyrimidines and phenylpyrroles. However, a subdivision of gray mold populations was found to be more tolerant to these two classes of fungicides (Leroch et al., 2013). Previous reports have clearly demonstrated that the resistance to anilinopyrimidines has a qualitative, disruptive pattern, and is monogenically controlled (Chapeland et al., 1999).
In order to elucidate the mechanism of the resistance, the whole genome of three different samples (gene pools) was sequenced, each containing DNA of 10 selected strains of the same genotype regarding resistance to seven different classes of fungicides including anilinopyrimidines. This report presents the publicly available genomic data.}
}

Sofia E. Reppou, Emmanouil G. Tsardoulias, Athanassios M. Kintsakis, Andreas Symeonidis, Pericles A. Mitkas, Fotis E. Psomopoulos, George T. Karagiannis, Cezary Zielinski, Vincent Prunet, Jean-Pierre Merlet, Miren Iturburu and Alexandros Gkiokas
"RAPP: A robotic-oriented ecosystem for delivering smart user empowering applications for older people"
International Journal of Social Robotics, pp. 15, 2016 Jun

It is a general truth that increase of age is associated with a level of mental and physical decline but unfortunately the former are often accompanied by social exclusion leading to marginalization and eventually further acceleration of the aging process. A new approach in alleviating the social exclusion of older people involves the use of assistive robots. As robots rapidly invade everyday life, the need of new software paradigms in order to address the user’s unique needs becomes critical. In this paper we present a novel architectural design, the RAPP [a software platform to deliver smart, user empowering robotic applications (RApps)] framework that attempts to address this issue. The proposed framework has been designed in a cloud-based approach, integrating robotic devices and their respective applications. We aim to facilitate seamless development of RApps compatible with a wide range of supported robots and available to the public through a unified online store.

@article{2016ReppouJSR,
author={Sofia E. Reppou and Emmanouil G. Tsardoulias and Athanassios M. Kintsakis and Andreas Symeonidis and Pericles A. Mitkas and Fotis E. Psomopoulos and George T. Karagiannis and Cezary Zielinski and Vincent Prunet and Jean-Pierre Merlet and Miren Iturburu and Alexandros Gkiokas},
title={RAPP: A robotic-oriented ecosystem for delivering smart user empowering applications for older people},
journal={International Journal of Social Robotics},
pages={15},
year={2016},
month={06},
date={2016-06-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/RAPP-A-Robotic-Oriented-Ecosystem-for-Delivering-Smart-User-Empowering-Applications-for-Older-People.pdf},
abstract={It is a general truth that increase of age is associated with a level of mental and physical decline but unfortunately the former are often accompanied by social exclusion leading to marginalization and eventually further acceleration of the aging process. A new approach in alleviating the social exclusion of older people involves the use of assistive robots. As robots rapidly invade everyday life, the need of new software paradigms in order to address the user’s unique needs becomes critical. In this paper we present a novel architectural design, the RAPP [a software platform to deliver smart, user empowering robotic applications (RApps)] framework that attempts to address this issue. The proposed framework has been designed in a cloud-based approach, integrating robotic devices and their respective applications. We aim to facilitate seamless development of RApps compatible with a wide range of supported robots and available to the public through a unified online store.}
}

Emmanouil Tsardoulias, Aris Thallas, Andreas L. Symeonidis and Pericles A. Mitkas
"Improving multilingual interaction for consumer robots through signal enhancement in multichannel speech"
Audio Engineering Society, 2016 Dec


@article{2016TsardouliasAES,
author={Emmanouil Tsardoulias and Aris Thallas and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Improving multilingual interaction for consumer robots through signal enhancement in multichannel speech},
journal={Audio Engineering Society},
year={2016},
month={12},
date={2016-12},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Improving-multilingual-interaction-for-consumer-robots-through-signal-enhancement-in-multichannel-speech.pdf},
}

Emmanouil Tsardoulias, Athanassios Kintsakis, Konstantinos Panayiotou, Aristeidis Thallas, Sofia Reppou, George Karagiannis, Miren Iturburu, Stratos Arampatzis, Cezary Zielinski, Vincent Prunet, Fotis Psomopoulos, Andreas Symeonidis and Pericles Mitkas
"Towards an integrated robotics architecture for social inclusion – The RAPP paradigm"
Cognitive Systems Research, pp. 1-8, 2016 Sep

Scientific breakthroughs have led to an increase in life expectancy, to the point where senior citizens comprise an ever increasing percentage of the general population. In this direction, the EU funded RAPP project “Robotic Applications for Delivering Smart User Empowering Applications” introduces socially interactive robots that will not only physically assist, but also serve as a companion to senior citizens. The proposed RAPP framework has been designed aiming towards a cloud-based integrated approach that enables robotic devices to seamlessly deploy robotic applications, relieving the actual robots from computational burdens. The Robotic Applications (RApps) developed according to the RAPP paradigm will empower consumer social robots, allowing them to adapt to versatile situations and materialize complex behaviors and scenarios. The RAPP pilot cases involve the development of RApps for the NAO humanoid robot and the ANG-MED rollator targeting senior citizens that (a) are technology illiterate, (b) have been diagnosed with mild cognitive impairment or (c) are in the process of hip fracture rehabilitation. Initial results establish the robustness of RAPP in addressing the needs of end users and developers, as well as its contribution in significantly increasing the quality of life of senior citizens.

@article{2016TsardouliasCSR,
author={Emmanouil Tsardoulias and Athanassios Kintsakis and Konstantinos Panayiotou and Aristeidis Thallas and Sofia Reppou and George Karagiannis and Miren Iturburu and Stratos Arampatzis and Cezary Zielinski and Vincent Prunet and Fotis Psomopoulos and Andreas Symeonidis and Pericles Mitkas},
title={Towards an integrated robotics architecture for social inclusion – The RAPP paradigm},
journal={Cognitive Systems Research},
pages={1-8},
year={2016},
month={09},
date={2016-09-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/COGSYS_2016_R1.pdf},
abstract={Scientific breakthroughs have led to an increase in life expectancy, to the point where senior citizens comprise an ever increasing percentage of the general population. In this direction, the EU funded RAPP project “Robotic Applications for Delivering Smart User Empowering Applications” introduces socially interactive robots that will not only physically assist, but also serve as a companion to senior citizens. The proposed RAPP framework has been designed aiming towards a cloud-based integrated approach that enables robotic devices to seamlessly deploy robotic applications, relieving the actual robots from computational burdens. The Robotic Applications (RApps) developed according to the RAPP paradigm will empower consumer social robots, allowing them to adapt to versatile situations and materialize complex behaviors and scenarios. The RAPP pilot cases involve the development of RApps for the NAO humanoid robot and the ANG-MED rollator targeting senior citizens that (a) are technology illiterate, (b) have been diagnosed with mild cognitive impairment or (c) are in the process of hip fracture rehabilitation. Initial results establish the robustness of RAPP in addressing the needs of end users and developers, as well as its contribution in significantly increasing the quality of life of senior citizens.}
}

Aliki Xanthopoulou, Fotis Psomopoulos, Ioannis Ganopoulos, Maria Manioudaki, Athanasios Tsaftaris, Irini Nianiou-Obeidat and Panagiotis Madesis
"De novo transcriptome assembly of two contrasting pumpkin cultivars"
Genomics Data, pp. 200-201, 2016 Jan

Cucurbita pepo (squash, pumpkin, gourd), a worldwide-cultivated vegetable of American origin, is extremely variable in fruit characteristics. However, the information associated with genes and genetic markers for pumpkin is very limited. In order to identify new genes and to develop genetic markers, we performed a transcriptome analysis (RNA-Seq) of two contrasting pumpkin cultivars. Leaves and female flowers of cultivars,

@article{2016XanthopoulouGD,
author={Aliki Xanthopoulou and Fotis Psomopoulos and Ioannis Ganopoulos and Maria Manioudaki and Athanasios Tsaftaris and Irini Nianiou-Obeidat and Panagiotis Madesis},
title={De novo transcriptome assembly of two contrasting pumpkin cultivars},
journal={Genomics Data},
pages={200-201},
year={2016},
month={01},
date={2016-01-15},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/De-novo-transcriptome-assembly-of-two-contrasting-pumpkin-cultivars.pdf},
abstract={Cucurbita pepo (squash, pumpkin, gourd), a worldwide-cultivated vegetable of American origin, is extremely variable in fruit characteristics. However, the information associated with genes and genetic markers for pumpkin is very limited. In order to identify new genes and to develop genetic markers, we performed a transcriptome analysis (RNA-Seq) of two contrasting pumpkin cultivars. Leaves and female flowers of cultivars,}
}

Christoforos Zolotas, Themistoklis Diamantopoulos, Kyriakos Chatzidimitriou and Andreas Symeonidis
"From requirements to source code: a Model-Driven Engineering approach for RESTful web services"
Automated Software Engineering, pp. 1-48, 2016 Sep

During the last few years, the REST architectural style has drastically changed the way web services are developed. Due to its transparent resource-oriented model, the RESTful paradigm has been incorporated into several development frameworks that allow rapid development and aspire to automate parts of the development process. However, most of the frameworks lack automation of essential web service functionality, such as authentication or database searching, while the end product is usually not fully compliant to REST. Furthermore, most frameworks rely heavily on domain specific modeling and require developers to be familiar with the employed modeling technologies. In this paper, we present a Model-Driven Engineering (MDE) engine that supports fast design and implementation of web services with advanced functionality. Our engine provides a front-end interface that allows developers to design their envisioned system through software requirements in multimodal formats. Input in the form of textual requirements and graphical storyboards is analyzed using natural language processing techniques and semantics, to semi-automatically construct the input model for the MDE engine. The engine subsequently applies model-to-model transformations to produce a RESTful, ready-to-deploy web service. The procedure is traceable, ensuring that changes in software requirements propagate to the underlying software artefacts and models. Upon assessing our methodology through a case study and measuring the effort reduction of using our tools, we conclude that our system can be effective for the fast design and implementation of web services, while it allows easy wrapping of services that have been engineered with traditional methods to the MDE realm.

@article{2016ZolotasASE,
author={Christoforos Zolotas and Themistoklis Diamantopoulos and Kyriakos Chatzidimitriou and Andreas Symeonidis},
title={From requirements to source code: a Model-Driven Engineering approach for RESTful web services},
journal={Automated Software Engineering},
pages={1-48},
year={2016},
month={09},
date={2016-09-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/ReqsToCodeMDE.pdf},
doi={https://doi.org/10.1007/s10515-016-0206-x},
abstract={During the last few years, the REST architectural style has drastically changed the way web services are developed. Due to its transparent resource-oriented model, the RESTful paradigm has been incorporated into several development frameworks that allow rapid development and aspire to automate parts of the development process. However, most of the frameworks lack automation of essential web service functionality, such as authentication or database searching, while the end product is usually not fully compliant to REST. Furthermore, most frameworks rely heavily on domain specific modeling and require developers to be familiar with the employed modeling technologies. In this paper, we present a Model-Driven Engineering (MDE) engine that supports fast design and implementation of web services with advanced functionality. Our engine provides a front-end interface that allows developers to design their envisioned system through software requirements in multimodal formats. Input in the form of textual requirements and graphical storyboards is analyzed using natural language processing techniques and semantics, to semi-automatically construct the input model for the MDE engine. The engine subsequently applies model-to-model transformations to produce a RESTful, ready-to-deploy web service. The procedure is traceable, ensuring that changes in software requirements propagate to the underlying software artefacts and models. Upon assessing our methodology through a case study and measuring the effort reduction of using our tools, we conclude that our system can be effective for the fast design and implementation of web services, while it allows easy wrapping of services that have been engineered with traditional methods to the MDE realm.}
}

2015

Journal Articles

Charalampos Dimoulas and Andreas Symeonidis
"Enhancing social multimedia matching and management through audio-adaptive audiovisual bimodal segmentation"
IEEE Multimedia, PP, (99), 2015 May


@article{2015DimoulasIEEEM,
author={Charalampos Dimoulas and Andreas Symeonidis},
title={Enhancing social multimedia matching and management through audio-adaptive audiovisual bimodal segmentation},
journal={IEEE Multimedia},
volume={PP},
number={99},
year={2015},
month={05},
date={2015-05-13},
doi={http://dx.doi.org/10.1109/MMUL.2015.33}
}

Alfonso M Duarte, Fotis Psomopoulos, Christophe Blanchet, Alexandre M Bonvin, Manuel Corpas, Alain Franc, Rafael C Jimenez, Jesus M de Lucas, Tommi Nyrönen, Gergely Sipos and Stephanie B Suhr
"Future opportunities and trends for e-infrastructures and life sciences: going beyond the grid to enable life science data analysis"
Frontiers in Genetics, 6, (197), 2015 Jun

With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community.

@article{2015DuarteFG,
author={Alfonso M Duarte and Fotis Psomopoulos and Christophe Blanchet and Alexandre M Bonvin and Manuel Corpas and Alain Franc and Rafael C Jimenez and Jesus M de Lucas and Tommi Nyrönen and Gergely Sipos and Stephanie B Suhr},
title={Future opportunities and trends for e-infrastructures and life sciences: going beyond the grid to enable life science data analysis},
journal={Frontiers in Genetics},
volume={6},
number={197},
year={2015},
month={06},
date={2015-06-23},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Future-opportunities-and-trends-for-e-infrastructures-and-life-sciences-going-beyond-the-grid-to-enable-life-science-data-analysis.pdf},
abstract={With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community.}
}

Themistoklis Mavridis and Andreas Symeonidis
"Identifying valid search engine ranking factors in a Web 2.0 and Web 3.0 context for building efficient SEO mechanisms"
Engineering Applications of Artificial Intelligence (EAAI), 41, pp. 75–91, 2015 Mar

It is common knowledge that the web has been continuously evolving, from a read medium to a read/write scheme and, lately, to a read/write/infer corpus. To follow the evolution, Search Engines have been undergoing continuous updates in order to provide the user with a well-targeted, personalized and improved experience of the web. Along with this focus on content quality and user preferences, search engines have also been striving to integrate Semantic Web primitives, in order to enhance their intelligence. Current work discusses the evolution of search engine ranking factors in a Web 2.0 and Web 3.0 context. A benchmark crawler, LSHrank, has been developed, which employs known search engine APIs and evaluates results against various already established metrics, in different domains and types of web content. The ultimate LSHrank objective is the development of a Search Engine Optimization (SEO) mechanism that will enrich and alter the content of a website in order to achieve its optimal ranking in search engine result pages (SERPs).

@article{2015mavridisEAAI,
author={Themistoklis Mavridis and Andreas Symeonidis},
title={Identifying valid search engine ranking factors in a Web 2.0 and Web 3.0 context for building efficient SEO mechanisms},
journal={Engineering Applications of Artificial Intelligence (EAAI)},
volume={41},
pages={75–91},
year={2015},
month={03},
date={2015-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Identifying-valid-search-engine-ranking-factors-in-a-Web-2.0-and-Web-3.0-context-for-building-efficient-SEO-mechanisms.pdf},
doi={http://dx.doi.org/10.1016/j.engappai.2015.02.002},
keywords={semantic web;search engine optimization;Search engine ranking factors analysis;Content quality;Social web},
abstract={It is common knowledge that the web has been continuously evolving, from a read medium to a read/write scheme and, lately, to a read/write/infer corpus. To follow the evolution, Search Engines have been undergoing continuous updates in order to provide the user with a well-targeted, personalized and improved experience of the web. Along with this focus on content quality and user preferences, search engines have also been striving to integrate Semantic Web primitives, in order to enhance their intelligence. Current work discusses the evolution of search engine ranking factors in a Web 2.0 and Web 3.0 context. A benchmark crawler, LSHrank, has been developed, which employs known search engine APIs and evaluates results against various already established metrics, in different domains and types of web content. The ultimate LSHrank objective is the development of a Search Engine Optimization (SEO) mechanism that will enrich and alter the content of a website in order to achieve its optimal ranking in search engine result pages (SERPs).}
}

Dimitrios Vitsios, Fotis Psomopoulos, Pericles Mitkas and Christos Ouzounis
"Inference of pathway decomposition across multiple species through gene clustering"
International Journal on Artificial Intelligence Tools, 24, pp. 25, 2015 Feb

In the wake of gene-oriented data analysis in large-scale bioinformatics studies, focus in research is currently shifting towards the analysis of the functional association of genes, namely the metabolic pathways in which genes participate. The goal of this paper is to attempt to identify the core genes in a specific pathway, based on a user-defined selection of genomes. To this end, a novel algorithm has been developed that uses data from the KEGG database, and through the application of the MCL clustering algorithm, identifies clusters that correspond to different “layers” of genes, either on a phylogenetic or a functional level. The algorithm

@article{2015vitsiosIJAIT,
author={Dimitrios Vitsios and Fotis Psomopoulos and Pericles Mitkas and Christos Ouzounis},
title={Inference of pathway decomposition across multiple species through gene clustering},
journal={International Journal on Artificial Intelligence Tools},
volume={24},
pages={25},
year={2015},
month={02},
date={2015-02-23},
url={http://www.worldscientific.com/doi/pdf/10.1142/S0218213015400035},
abstract={In the wake of gene-oriented data analysis in large-scale bioinformatics studies, focus in research is currently shifting towards the analysis of the functional association of genes, namely the metabolic pathways in which genes participate. The goal of this paper is to attempt to identify the core genes in a specific pathway, based on a user-defined selection of genomes. To this end, a novel algorithm has been developed that uses data from the KEGG database, and through the application of the MCL clustering algorithm, identifies clusters that correspond to different “layers” of genes, either on a phylogenetic or a functional level. The algorithm}
}

2014

Journal Articles

Anna A. Adamopoulou and Andreas Symeonidis
"A simulation testbed for analyzing trust and reputation mechanisms in unreliable online markets"
Electronic Commerce Research and Applications, 35, pp. 114-130, 2014 Oct

Modern online markets are becoming extremely dynamic, indirectly dictating the need for (semi-) autonomous approaches for constant monitoring and immediate action in order to satisfy one’s needs/preferences. In such open and versatile environments, software agents may be considered as a suitable metaphor for dealing with the increasing complexity of the problem. Additionally, trust and reputation have been recognized as key issues in online markets and many researchers have, in different perspectives, surveyed the related notions, mechanisms and models. Within the context of this work we present an adaptable, multivariate agent testbed for the simulation of open online markets and the study of various factors affecting the quality of the service consumed. This testbed, which we call Euphemus, is highly parameterized and can be easily customized to suit a particular application domain. It allows for building various market scenarios and analyzing interesting properties of e-commerce environments from a trust perspective. The architecture of Euphemus is presented and a number of well-known trust and reputation models are built with Euphemus, in order to show how the testbed can be used to apply and adapt models. Extensive experimentation has been performed in order to show how models behave in unreliable online markets, results are discussed and interesting conclusions are drawn.

@article{2014AdamopoulouECRA,
author={Anna A. Adamopoulou and Andreas Symeonidis},
title={A simulation testbed for analyzing trust and reputation mechanisms in unreliable online markets},
journal={Electronic Commerce Research and Applications},
volume={35},
pages={114-130},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/1-s2.0-S1567422314000465-main.pdf},
doi={http://dx.doi.org/10.1016/j.elerap.2014.07.001},
abstract={Modern online markets are becoming extremely dynamic, indirectly dictating the need for (semi-) autonomous approaches for constant monitoring and immediate action in order to satisfy one’s needs/preferences. In such open and versatile environments, software agents may be considered as a suitable metaphor for dealing with the increasing complexity of the problem. Additionally, trust and reputation have been recognized as key issues in online markets and many researchers have, in different perspectives, surveyed the related notions, mechanisms and models. Within the context of this work we present an adaptable, multivariate agent testbed for the simulation of open online markets and the study of various factors affecting the quality of the service consumed. This testbed, which we call Euphemus, is highly parameterized and can be easily customized to suit a particular application domain. It allows for building various market scenarios and analyzing interesting properties of e-commerce environments from a trust perspective. The architecture of Euphemus is presented and a number of well-known trust and reputation models are built with Euphemus, in order to show how the testbed can be used to apply and adapt models. Extensive experimentation has been performed in order to show how models behave in unreliable online markets, results are discussed and interesting conclusions are drawn.}
}

Antonios Chrysopoulos, Christos Diou, A.L. Symeonidis and Pericles A. Mitkas
"Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications"
EAAI, 35, pp. 299-315, 2014 Oct

In contemporary power systems, small-scale consumers account for up to 50% of a country's total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.
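The bottom-up aggregation and usage-shifting ideas described in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' agent architecture; the appliance names, load profiles and shift amounts are hypothetical:

```python
def total_load(appliance_profiles):
    # Bottom-up aggregation: sum per-appliance hourly loads (kW)
    # into the overall installation load curve.
    hours = len(next(iter(appliance_profiles.values())))
    return [sum(p[h] for p in appliance_profiles.values()) for h in range(hours)]

def shift_usage(profile, hours):
    # Move an appliance's usage later by `hours` (cyclic over the day),
    # the kind of small shift the paper finds sufficient for peak reduction.
    return profile[-hours:] + profile[:-hours]

appliances = {"oven": [0, 3, 0, 0], "washer": [0, 2, 0, 0]}
peak_before = max(total_load(appliances))            # both appliances overlap
appliances["washer"] = shift_usage(appliances["washer"], 1)
peak_after = max(total_load(appliances))             # overlap removed
```

Shifting the (hypothetical) washer's usage one slot later reduces the aggregate peak from 5 kW to 3 kW.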

@article{2014chrysopoulosEAAI,
author={Antonios Chrysopoulos and Christos Diou and A.L. Symeonidis and Pericles A. Mitkas},
title={Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications},
journal={EAAI},
volume={35},
pages={299-315},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Bottom-up-modeling-of-small-scale-energy-consumers-for-effective-Demand-Response-Applications.pdf},
doi={http://dx.doi.org/10.1016/j.engappai.2014.06.015},
keywords={Small-scale consumer models;Demand simulation;Demand Response Applications},
abstract={In contemporary power systems, small-scale consumers account for up to 50% of a country's total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.}
}

Themistoklis Diamantopoulos and Andreas Symeonidis
"Localizing Software Bugs using the Edit Distance of Call Traces"
International Journal On Advances in Software, 7, (1), pp. 277-288, 2014 Oct

Automating the localization of software bugs that do not lead to crashes is a difficult task that has drawn the attention of several researchers. Several popular methods follow the same approach; function call traces are collected and represented as graphs, which are subsequently mined using subgraph mining algorithms in order to provide a ranking of potentially buggy functions-nodes. Recent work has indicated that the scalability of state-of-the-art methods can be improved by reducing the graph dataset using tree edit distance algorithms. The call traces that are closer to each other, but belong to different sets, are the ones that are most significant in localizing bugs. In this work, we further explore the task of selecting the most significant traces, by proposing different call trace selection techniques, based on the Stable Marriage problem, and testing their effectiveness against current solutions. Upon evaluating our methods on a real-world dataset, we prove that our methodology is scalable and effective enough to be applied on dynamic bug detection scenarios.
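As a rough illustration of the Stable Marriage-based selection mentioned above, failing traces can "propose" to passing traces in order of increasing distance. The distance matrix here is a hypothetical stand-in; the paper derives it from tree edit distances between call traces:

```python
def stable_trace_pairs(dist):
    # dist[i][j]: distance between failing trace i and passing trace j.
    # Failing traces propose to passing traces in order of increasing
    # distance; a passing trace keeps its closest proposer so far
    # (the classic Gale-Shapley stable-marriage procedure).
    n = len(dist)
    prefs = [sorted(range(n), key=lambda j: dist[i][j]) for i in range(n)]
    nxt = [0] * n          # next candidate each failing trace proposes to
    holder = [None] * n    # holder[j]: failing trace currently matched to j
    free = list(range(n))
    while free:
        i = free.pop()
        j = prefs[i][nxt[i]]
        nxt[i] += 1
        if holder[j] is None:
            holder[j] = i
        elif dist[i][j] < dist[holder[j]][j]:
            free.append(holder[j])
            holder[j] = i
        else:
            free.append(i)
    return sorted((holder[j], j) for j in range(n))

# Toy 2x2 distance matrix: failing trace 0 is closest to passing trace 0, etc.
pairs = stable_trace_pairs([[0.1, 0.9],
                            [0.8, 0.2]])
```

The resulting closest cross-set pairs are the traces treated as most informative for localizing the bug.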

@article{2014DiamantopoulosIJAS,
author={Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Localizing Software Bugs using the Edit Distance of Call Traces},
journal={International Journal On Advances in Software},
volume={7},
number={1},
pages={277-288},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Localizing-Software-Bugs-using-the-Edit-Distance-of-Call-Traces.pdf},
abstract={Automating the localization of software bugs that do not lead to crashes is a difficult task that has drawn the attention of several researchers. Several popular methods follow the same approach; function call traces are collected and represented as graphs, which are subsequently mined using subgraph mining algorithms in order to provide a ranking of potentially buggy functions-nodes. Recent work has indicated that the scalability of state-of-the-art methods can be improved by reducing the graph dataset using tree edit distance algorithms. The call traces that are closer to each other, but belong to different sets, are the ones that are most significant in localizing bugs. In this work, we further explore the task of selecting the most significant traces, by proposing different call trace selection techniques, based on the Stable Marriage problem, and testing their effectiveness against current solutions. Upon evaluating our methods on a real-world dataset, we prove that our methodology is scalable and effective enough to be applied on dynamic bug detection scenarios.}
}

G. Mamalakis, C. Diou, A.L. Symeonidis and L. Georgiadis
"Of daemons and men: A file system approach towards intrusion detection"
Applied Soft Computing, 25, pp. 1--14, 2014 Oct

We present FI^2DS, a file system, host-based anomaly detection system that monitors Basic Security Module (BSM) audit records and determines whether a web server has been compromised by comparing monitored activity generated from the web server to a normal usage profile. Additionally, we propose a set of features extracted from file system specific BSM audit records, as well as an IDS that identifies attacks based on a decision engine that employs one-class classification using a moving window on incoming data. We have used two different machine learning algorithms, Support Vector Machines (SVMs) and Gaussian Mixture Models (GMMs), and our evaluation is performed on real-world datasets collected from three web servers and a honeynet. Results are very promising, since FI^2DS detection rates range between 91% and 95.9%, with corresponding false positive rates ranging between 8.1 × 10^-2 % and 9.3 × 10^-4 %. Comparison of FI^2DS to another state-of-the-art filesystem-based IDS, FWRAP, indicates higher effectiveness of the proposed IDS in all three datasets. Within the context of this paper FI^2DS is evaluated for the web daemon user; nevertheless, it can be directly extended to model any daemon-user for both intrusion detection and postmortem analysis.
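The moving-window one-class idea can be sketched with a deliberately simple Gaussian profile. This toy stands in for the paper's SVM/GMM decision engines; the feature, window and threshold `k` are all hypothetical:

```python
import statistics

def flag_anomalies(normal_window, incoming, k=3.0):
    # One-class decision sketch: profile "normal" feature values from a
    # moving window of past observations, then flag incoming records
    # whose z-score against that profile exceeds k.
    mu = statistics.fmean(normal_window)
    sigma = statistics.stdev(normal_window) or 1.0
    return [abs(x - mu) / sigma > k for x in incoming]

# Hypothetical feature: file-system accesses per audit interval.
flags = flag_anomalies([10, 11, 9, 10, 10], [10, 50])
```

Only the second record deviates enough from the learned normal profile to be flagged.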

@article{2014MamalakisASC,
author={G. Mamalakis and C. Diou and A.L. Symeonidis and L. Georgiadis},
title={Of daemons and men: A file system approach towards intrusion detection},
journal={Applied Soft Computing},
volume={25},
pages={1--14},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Of-daemons-and-men-A-file-system-approach-towards-intrusion-detection.pdf},
doi={http://dx.doi.org/10.1016/j.asoc.2014.07.026},
keywords={Intrusion detection systems;Anomaly detection;Information security;File system},
abstract={We present FI^2DS, a file system, host-based anomaly detection system that monitors Basic Security Module (BSM) audit records and determines whether a web server has been compromised by comparing monitored activity generated from the web server to a normal usage profile. Additionally, we propose a set of features extracted from file system specific BSM audit records, as well as an IDS that identifies attacks based on a decision engine that employs one-class classification using a moving window on incoming data. We have used two different machine learning algorithms, Support Vector Machines (SVMs) and Gaussian Mixture Models (GMMs), and our evaluation is performed on real-world datasets collected from three web servers and a honeynet. Results are very promising, since FI^2DS detection rates range between 91% and 95.9%, with corresponding false positive rates ranging between 8.1 × 10^-2 % and 9.3 × 10^-4 %. Comparison of FI^2DS to another state-of-the-art filesystem-based IDS, FWRAP, indicates higher effectiveness of the proposed IDS in all three datasets. Within the context of this paper FI^2DS is evaluated for the web daemon user; nevertheless, it can be directly extended to model any daemon-user for both intrusion detection and postmortem analysis.}
}

Themistoklis Mavridis and Andreas Symeonidis
"Semantic analysis of web documents for the generation of optimal content"
Engineering Applications of Artificial Intelligence, 35, pp. 114-130, 2014 Oct

The Web has been under major evolution over the last decade and search engines have been trying to incorporate the changes of the web and provide the user with improved content. In order to evaluate the quality of a document there has been a plethora of attempts, some of which have considered the use of semantic analysis for extracting conclusions upon documents around the web. In turn, Search Engine Optimization (SEO) has been under development in order to cope with the changes of search engines and the web. SEO's aim has been the creation of effective strategies for optimal ranking of websites and webpages in search engines. Current work focuses on semantic analysis of web content. We further elaborate on LDArank, a mechanism that employs Latent Dirichlet Allocation (LDA) for the semantic analysis of web content and the generation of optimal content for given queries. We apply the new proposed mechanism, LSHrank, and explore the effect of generating web content against various SEO factors. We demonstrate LSHrank's robustness in producing semantically prominent content in comparison to different semantic analysis based SEO approaches.

@article{2014MavridisEAAI,
author={Themistoklis Mavridis and Andreas Symeonidis},
title={Semantic analysis of web documents for the generation of optimal content},
journal={Engineering Applications of Artificial Intelligence},
volume={35},
pages={114-130},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/1-s2.0-S0952197614001304-main.pdf},
doi={http://dx.doi.org/10.1016/j.engappai.2014.06.008},
abstract={The Web has been under major evolution over the last decade and search engines have been trying to incorporate the changes of the web and provide the user with improved content. In order to evaluate the quality of a document there has been a plethora of attempts, some of which have considered the use of semantic analysis for extracting conclusions upon documents around the web. In turn, Search Engine Optimization (SEO) has been under development in order to cope with the changes of search engines and the web. SEO's aim has been the creation of effective strategies for optimal ranking of websites and webpages in search engines. Current work focuses on semantic analysis of web content. We further elaborate on LDArank, a mechanism that employs Latent Dirichlet Allocation (LDA) for the semantic analysis of web content and the generation of optimal content for given queries. We apply the new proposed mechanism, LSHrank, and explore the effect of generating web content against various SEO factors. We demonstrate LSHrank's robustness in producing semantically prominent content in comparison to different semantic analysis based SEO approaches.}
}

2013

Journal Articles

Kyriakos C. Chatzidimitriou and Pericles A. Mitkas
"Adaptive reservoir computing through evolution and learning"
Neurocomputing, 103, pp. 198-209, 2013 Jan

The development of real-world, fully autonomous agents would require mechanisms that would offer generalization capabilities from experience, suitable for a large range of machine learning tasks, like those from the areas of supervised and reinforcement learning. Such capacities could be offered by parametric function approximators that could either model the environment or the agent's policy. To promote autonomy, these structures should be adapted to the problem at hand with no or little human expert input. Towards this goal, we propose an adaptive function approximator method for developing appropriate neural networks in the form of reservoir computing systems through evolution and learning. Our neuro-evolution of augmenting reservoirs approach comprises several ideas, successful on their own, in an effort to develop an algorithm that could handle a large range of problems more efficiently. In particular, we use the neuro-evolution of augmented topologies algorithm as a meta-search method for the adaptation of echo state networks for handling problems to be encountered by autonomous entities. We test our approach on several test-beds from the realms of time series prediction and reinforcement learning. We compare our methodology against similar state-of-the-art algorithms with promising results.

@article{2013ChatzidimitriouN,
author={Kyriakos C. Chatzidimitriou and Pericles A. Mitkas},
title={Adaptive reservoir computing through evolution and learning},
journal={Neurocomputing},
volume={103},
pages={198-209},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Adaptive-reservoir-computing-through-evolution-and-learning.pdf},
abstract={The development of real-world, fully autonomous agents would require mechanisms that would offer generalization capabilities from experience, suitable for a large range of machine learning tasks, like those from the areas of supervised and reinforcement learning. Such capacities could be offered by parametric function approximators that could either model the environment or the agent's policy. To promote autonomy, these structures should be adapted to the problem at hand with no or little human expert input. Towards this goal, we propose an adaptive function approximator method for developing appropriate neural networks in the form of reservoir computing systems through evolution and learning. Our neuro-evolution of augmenting reservoirs approach comprises several ideas, successful on their own, in an effort to develop an algorithm that could handle a large range of problems more efficiently. In particular, we use the neuro-evolution of augmented topologies algorithm as a meta-search method for the adaptation of echo state networks for handling problems to be encountered by autonomous entities. We test our approach on several test-beds from the realms of time series prediction and reinforcement learning. We compare our methodology against similar state-of-the-art algorithms with promising results.}
}

Christos Maramis, Manolis Falelakis, Irini Lekka, Christos Diou, Pericles A. Mitkas and Anastasios Delopoulos
"Applying semantic technologies in cervical cancer research"
Data Knowl. Eng., 86, pp. 160-178, 2013 Jan

In this paper we present a research system that follows a semantic approach to facilitate medical association studies in the area of cervical cancer. Our system, named ASSIST and developed as an EU research project, assists in cervical cancer research by unifying multiple patient record repositories, physically located in different medical centers or hospitals. Semantic modeling of medical data and rules for inferring domain-specific information allow the system to (i) homogenize the information contained in the isolated repositories by translating it into the terms of a unified semantic representation, (ii) extract diagnostic information not explicitly stored in the individual repositories, and (iii) automate the process of evaluating medical hypotheses by performing case control association studies, which is the ultimate goal of the system.

@article{2013MaramisDKE,
author={Christos Maramis and Manolis Falelakis and Irini Lekka and Christos Diou and Pericles A. Mitkas and Anastasios Delopoulos},
title={Applying semantic technologies in cervical cancer research},
journal={Data Knowl. Eng.},
volume={86},
pages={160-178},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Applying-semantic-technologies-in-cervical-cancer-research.pdf},
abstract={In this paper we present a research system that follows a semantic approach to facilitate medical association studies in the area of cervical cancer. Our system, named ASSIST and developed as an EU research project, assists in cervical cancer research by unifying multiple patient record repositories, physically located in different medical centers or hospitals. Semantic modeling of medical data and rules for inferring domain-specific information allow the system to (i) homogenize the information contained in the isolated repositories by translating it into the terms of a unified semantic representation, (ii) extract diagnostic information not explicitly stored in the individual repositories, and (iii) automate the process of evaluating medical hypotheses by performing case control association studies, which is the ultimate goal of the system.}
}

Fotis E. Psomopoulos, Pericles A. Mitkas and Christos A. Ouzounis
"Detection of Genomic Idiosyncrasies Using Fuzzy Phylogenetic Profile"
PLoS ONE, 2013 Jan

Phylogenetic profiles express the presence or absence of genes and their homologs across a number of reference genomes. They have emerged as an elegant representation framework for comparative genomics and have been used for the genome-wide inference and discovery of functionally linked genes or metabolic pathways. As the number of reference genomes grows, there is an acute need for faster and more accurate methods for phylogenetic profile analysis with increased performance in speed and quality. We propose a novel, efficient method for the detection of genomic idiosyncrasies, i.e. sets of genes found in a specific genome with peculiar phylogenetic properties, such as intra-genome correlations or inter-genome relationships. Our algorithm is a four-step process where genome profiles are first defined as fuzzy vectors, then discretized to binary vectors, followed by a de-noising step, and finally a comparison step to generate intra- and inter-genome distances for each gene profile. The method is validated with a carefully selected benchmark set of five reference genomes, using a range of approaches regarding similarity metrics and pre-processing stages for noise reduction. We demonstrate that the fuzzy profile method consistently identifies the actual phylogenetic relationship and origin of the genes under consideration for the majority of the cases, while the detected outliers are found to be particular genes with peculiar phylogenetic patterns. The proposed method provides a time-efficient and highly scalable approach for phylogenetic stratification, with the detected groups of genes being either similar to their own genome profile or different from it, thus revealing atypical evolutionary histories.
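The four-step pipeline described here (fuzzy vectors, discretization, de-noising, profile comparison) can be sketched in a few lines; note that the input format, the 0.5 threshold, the de-noising rule and the Hamming-style distance below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def idiosyncrasy_scores(profiles, threshold=0.5):
    """Sketch of the four-step pipeline from the abstract.

    `profiles` is assumed to be an (n_genes, n_reference_genomes) array of
    fuzzy membership values in [0, 1] (hypothetical input format).
    """
    # Step 1: genome profiles as fuzzy vectors (given as input here).
    fuzzy = np.asarray(profiles, dtype=float)

    # Step 2: discretize the fuzzy vectors to binary presence/absence vectors.
    binary = (fuzzy >= threshold).astype(int)

    # Step 3: de-noise by dropping uninformative (all-zero or all-one) columns.
    sums = binary.sum(axis=0)
    denoised = binary[:, (sums > 0) & (sums < binary.shape[0])]

    # Step 4: compare each gene profile against the genome-wide consensus
    # profile (normalized Hamming distance as a stand-in similarity metric).
    consensus = (denoised.mean(axis=0) >= 0.5).astype(int)
    return (denoised != consensus).sum(axis=1) / denoised.shape[1]
```

A gene whose profile diverges strongly from its own genome's consensus receives a high score, matching the paper's notion of an idiosyncratic (atypical) gene.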

@article{2013PsomopoulosPlosOne,
author={Fotis E. Psomopoulos and Pericles A. Mitkas and Christos A. Ouzounis},
title={Detection of Genomic Idiosyncrasies Using Fuzzy Phylogenetic Profiles},
journal={PLoS ONE},
year={2013},
month={01},
date={2013-01-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/journal.pone_.0052854.pdf},
abstract={Phylogenetic profiles express the presence or absence of genes and their homologs across a number of reference genomes. They have emerged as an elegant representation framework for comparative genomics and have been used for the genome-wide inference and discovery of functionally linked genes or metabolic pathways. As the number of reference genomes grows, there is an acute need for faster and more accurate methods for phylogenetic profile analysis with increased performance in speed and quality. We propose a novel, efficient method for the detection of genomic idiosyncrasies, i.e. sets of genes found in a specific genome with peculiar phylogenetic properties, such as intra-genome correlations or inter-genome relationships. Our algorithm is a four-step process where genome profiles are first defined as fuzzy vectors, then discretized to binary vectors, followed by a de-noising step, and finally a comparison step to generate intra- and inter-genome distances for each gene profile. The method is validated with a carefully selected benchmark set of five reference genomes, using a range of approaches regarding similarity metrics and pre-processing stages for noise reduction. We demonstrate that the fuzzy profile method consistently identifies the actual phylogenetic relationship and origin of the genes under consideration for the majority of the cases, while the detected outliers are found to be particular genes with peculiar phylogenetic patterns. The proposed method provides a time-efficient and highly scalable approach for phylogenetic stratification, with the detected groups of genes being either similar to their own genome profile or different from it, thus revealing atypical evolutionary histories.}
}

Konstantinos N. Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Event identification in web social media through named entity recognition and topic modeling"
Data & Knowledge Engineering, 88, pp. 1-24, 2013 Jan

@article{2013VavliakisDKE,
author={Konstantinos N. Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Event identification in web social media through named entity recognition and topic modeling},
journal={Data & Knowledge Engineering},
volume={88},
pages={1-24},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Event-identification-in-web-social-media-through-named-entity-recognition-and-topic-modeling.pdf},
keywords={event identification;social media analysis;topic maps;peak detection;topic clustering}
}

2012

Journal Articles

Wolfgang Ketter and Andreas L. Symeonidis
"Competitive Benchmarking: Lessons learned from the Trading Agent Competition"
AI Magazine, 33, (2), pp. 198-209, 2012 Sep

Over the years, competitions have been important catalysts for progress in artificial intelligence. We describe the goal of the overall Trading Agent Competition and highlight particular competitions. We discuss its significance in the context of today’s global market economy as well as AI research, the ways in which it breaks away from limiting assumptions made in prior work, and some of the advances it has engendered over the past ten years. Since its introduction in 2000, TAC has attracted more than 350 entries and brought together researchers from AI and beyond.

@article{2012KetterAIM,
author={Wolfgang Ketter and Andreas L. Symeonidis},
title={Competitive Benchmarking: Lessons learned from the Trading Agent Competition},
journal={AI Magazine},
volume={33},
number={2},
pages={198-209},
year={2012},
month={09},
date={2012-09-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Competitive-Benchmarking-Lessons-learned-from-the-Trading-Agent-Competition.pdf},
keywords={Load Forecasting},
abstract={Over the years, competitions have been important catalysts for progress in artificial intelligence. We describe the goal of the overall Trading Agent Competition and highlight particular competitions. We discuss its significance in the context of today’s global market economy as well as AI research, the ways in which it breaks away from limiting assumptions made in prior work, and some of the advances it has engendered over the past ten years. Since its introduction in 2000, TAC has attracted more than 350 entries and brought together researchers from AI and beyond.}
}

Fotis Psomopoulos, Victoria Siarkou, Nikolas Papanikolaou, Ioannis Iliopoulos, Athanasios Tsaftaris, Vasilis Promponas and Christos Ouzounis
"The Chlamydiales Pangenome Revisited: Structural Stability and Functional Coherence"
Genes, 3, (2), pp. 291-319, 2012 May

The entire publicly available set of 37 genome sequences from the bacterial order Chlamydiales has been subjected to comparative analysis in order to reveal the salient features of this pangenome and its evolutionary history. Over 2,000 protein families are detected across multiple species, with a distribution consistent to other studied pangenomes. Of these, there are 180 protein families with multiple members, 312 families with exactly 37 members corresponding to core genes, 428 families with peripheral genes with varying taxonomic distribution and finally 1,125 smaller families. The fact that, even for smaller genomes of Chlamydiales, core genes represent over a quarter of the average protein complement, signifies a certain degree of structural stability, given the wide range of phylogenetic relationships within the group. In addition, the propagation of a corpus of manually curated annotations within the discovered core families reveals key functional properties, reflecting a coherent repertoire of cellular capabilities for Chlamydiales. We further investigate over 2,000 genes without homologs in the pangenome and discover two new protein sequence domains. Our results, supported by the genome-based phylogeny for this group, are fully consistent with previous analyses and current knowledge, and point to future research directions towards a better understanding of the structural and functional properties of Chlamydiales.

@article{2012PsomopoulosGenes,
author={Fotis Psomopoulos and Victoria Siarkou and Nikolas Papanikolaou and Ioannis Iliopoulos and Athanasios Tsaftaris and Vasilis Promponas and Christos Ouzounis},
title={The Chlamydiales Pangenome Revisited: Structural Stability and Functional Coherence},
journal={Genes},
volume={3},
number={2},
pages={291-319},
year={2012},
month={05},
date={2012-05-16},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/The-Chlamydiales-Pangenome-Revisited-Structural-Stability-and-Functional-Coherence.pdf},
doi={https://doi.org/10.3390/genes3020291},
abstract={The entire publicly available set of 37 genome sequences from the bacterial order Chlamydiales has been subjected to comparative analysis in order to reveal the salient features of this pangenome and its evolutionary history. Over 2,000 protein families are detected across multiple species, with a distribution consistent to other studied pangenomes. Of these, there are 180 protein families with multiple members, 312 families with exactly 37 members corresponding to core genes, 428 families with peripheral genes with varying taxonomic distribution and finally 1,125 smaller families. The fact that, even for smaller genomes of Chlamydiales, core genes represent over a quarter of the average protein complement, signifies a certain degree of structural stability, given the wide range of phylogenetic relationships within the group. In addition, the propagation of a corpus of manually curated annotations within the discovered core families reveals key functional properties, reflecting a coherent repertoire of cellular capabilities for Chlamydiales. We further investigate over 2,000 genes without homologs in the pangenome and discover two new protein sequence domains. Our results, supported by the genome-based phylogeny for this group, are fully consistent with previous analyses and current knowledge, and point to future research directions towards a better understanding of the structural and functional properties of Chlamydiales.}
}

Fani A. Tzima, John B. Theocharis and Pericles A. Mitkas
"Clustering-based initialization of Learning Classifier Systems. Effects on model performance, readability and induction time."
To appear in Soft Computing, 16, 2012 Jul

The present paper investigates whether an “informed” initialization process can help supervised LCS algorithms evolve rulesets with better characteristics, including greater predictive accuracy, shorter training times, and/or more compact knowledge representations. Inspired by previous research suggesting that the initialization phase of evolutionary algorithms may have a considerable impact on their convergence speed and the quality of the achieved solutions, we present an initialization method for the class of supervised Learning Classifier Systems (LCS) that extracts information about the structure of studied problems through a pre-training clustering phase and exploits this information by transforming it into rules suitable for the initialization of the learning process. The effectiveness of our approach is evaluated through an extensive experimental phase, involving a variety of real-world classification tasks. Obtained results suggest that clustering-based initialization can indeed improve the predictive accuracy, as well as the interpretability of the induced knowledge representations, and paves the way for further investigations of the potential of better-than-random initialization methods for LCS algorithms.
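The core idea of seeding an LCS population from a pre-training clustering phase can be illustrated with a minimal sketch; the interval-rule encoding, the one-cluster-per-class simplification and the first-match prediction below are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def rules_from_clusters(X, y, widen=1.0):
    """Turn per-class cluster statistics into interval rules suitable for
    seeding a supervised LCS population (a sketch of the idea only).

    Each rule is (lower_bounds, upper_bounds, predicted_class): an example
    matches the rule if every feature lies inside the interval.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    rules = []
    for label in np.unique(y):
        pts = X[y == label]
        center = pts.mean(axis=0)          # cluster centroid (one per class here)
        spread = pts.std(axis=0) * widen   # interval half-width from the cluster
        rules.append((center - spread, center + spread, label))
    return rules

def predict(rules, x):
    """Return the class of the first rule whose intervals cover x, else None."""
    x = np.asarray(x, float)
    for lo, hi, label in rules:
        if np.all((x >= lo) & (x <= hi)):
            return label
    return None
```

Rules built this way already cover dense regions of each class, which is the "informed" starting point the paper contrasts with random initialization.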

@article{2012TzimaTASC,
author={Fani A. Tzima and John B. Theocharis and Pericles A. Mitkas},
title={Clustering-based initialization of Learning Classifier Systems: Effects on model performance, readability and induction time},
journal={Soft Computing},
volume={16},
year={2012},
month={07},
date={2012-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Clustering-based-initialization-of-Learning-Classifier-Systems.pdf},
keywords={Classification;Initialization;Learning Classifier Systems (LCS);Supervised Learning},
abstract={The present paper investigates whether an “informed” initialization process can help supervised LCS algorithms evolve rulesets with better characteristics, including greater predictive accuracy, shorter training times, and/or more compact knowledge representations. Inspired by previous research suggesting that the initialization phase of evolutionary algorithms may have a considerable impact on their convergence speed and the quality of the achieved solutions, we present an initialization method for the class of supervised Learning Classifier Systems (LCS) that extracts information about the structure of studied problems through a pre-training clustering phase and exploits this information by transforming it into rules suitable for the initialization of the learning process. The effectiveness of our approach is evaluated through an extensive experimental phase, involving a variety of real-world classification tasks. Obtained results suggest that clustering-based initialization can indeed improve the predictive accuracy, as well as the interpretability of the induced knowledge representations, and paves the way for further investigations of the potential of better-than-random initialization methods for LCS algorithms.}
}

2011

Journal Articles

Fani A. Tzima, Pericles A. Mitkas, Dimitris Voukantsis and Kostas Karatzas
"Sparse episode identification in environmental datasets: the case of air quality assessment"
Expert Systems with Applications, 38, 2011 May

Sparse episode identification in environmental datasets is not only a multi-faceted and computationally challenging problem for machine learning algorithms, but also a difficult task for human decision makers: the strict regulatory framework, in combination with the public demand for better information services, poses the need for robust, efficient and, more importantly, understandable forecasting models. Additionally, these models need to provide decision-makers with “summarized” and valuable knowledge, that has to be subjected to a thorough evaluation procedure, easily translated to services and/or actions in actual decision making situations, and integrable with existing Environmental Management Systems (EMSs). On this basis, our current study investigates the potential of various machine learning algorithms as tools for air quality (AQ) episode forecasting and assesses them – given the corresponding domain-specific requirements – using an evaluation procedure, tailored to the task at hand. Among the algorithms employed in the experimental phase, our main focus is on ZCS-DM, an evolutionary rule-induction algorithm specifically designed to tackle this class of problems – that is classification problems with skewed class distributions, where cost-sensitive model building is required. Overall, we consider this investigation successful, in terms of its aforementioned goals and constraints: obtained experimental results reveal the potential of rule-based algorithms for urban AQ forecasting, and point towards ZCS-DM as the most suitable algorithm for the target domain, providing the best trade-off between model performance and understandability.

@article{2011TzimaESWA,
author={Fani A. Tzima and Pericles A. Mitkas and Dimitris Voukantsis and Kostas Karatzas},
title={Sparse episode identification in environmental datasets: the case of air quality assessment},
journal={Expert Systems with Applications},
volume={38},
year={2011},
month={05},
date={2011-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/1-s2.0-S095741741001105X-main.pdf},
keywords={Air quality (AQ);Domain-driven data mining;Model evaluation;Sparse episode identification},
abstract={Sparse episode identification in environmental datasets is not only a multi-faceted and computationally challenging problem for machine learning algorithms, but also a difficult task for human decision makers: the strict regulatory framework, in combination with the public demand for better information services, poses the need for robust, efficient and, more importantly, understandable forecasting models. Additionally, these models need to provide decision-makers with “summarized” and valuable knowledge, that has to be subjected to a thorough evaluation procedure, easily translated to services and/or actions in actual decision making situations, and integrable with existing Environmental Management Systems (EMSs). On this basis, our current study investigates the potential of various machine learning algorithms as tools for air quality (AQ) episode forecasting and assesses them – given the corresponding domain-specific requirements – using an evaluation procedure, tailored to the task at hand. Among the algorithms employed in the experimental phase, our main focus is on ZCS-DM, an evolutionary rule-induction algorithm specifically designed to tackle this class of problems – that is classification problems with skewed class distributions, where cost-sensitive model building is required. Overall, we consider this investigation successful, in terms of its aforementioned goals and constraints: obtained experimental results reveal the potential of rule-based algorithms for urban AQ forecasting, and point towards ZCS-DM as the most suitable algorithm for the target domain, providing the best trade-off between model performance and understandability.}
}

Konstantinos N. Vavliakis, Andreas L. Symeonidis, Georgios T. Karagiannis and Pericles A. Mitkas
"An integrated framework for enhancing the semantic transformation, editing and querying of relational databases"
Expert Systems with Applications, 38, (4), pp. 3844-3856, 2011 Apr

The transition from the traditional to the Semantic Web has proven much more difficult than initially expected. The volume, complexity and versatility of data of various domains, the computational limitations imposed on semantic querying and inferencing have drastically reduced the thrust semantic technologies had when initially introduced. In order for the semantic web to actually

@article{2011VavliakisESWA,
author={Konstantinos N. Vavliakis and Andreas L. Symeonidis and Georgios T. Karagiannis and Pericles A. Mitkas},
title={An integrated framework for enhancing the semantic transformation, editing and querying of relational databases},
journal={Expert Systems with Applications},
volume={38},
number={4},
pages={3844-3856},
year={2011},
month={04},
date={2011-04-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-integrated-framework-for-enhancing-the-semantic-transformation-editing-and-querying-of-relational-databases.pdf},
keywords={Ontology editor;OWL-DL restriction creation;Relational database to ontology transformation;SPARQL query builder},
abstract={The transition from the traditional to the Semantic Web has proven much more difficult than initially expected. The volume, complexity and versatility of data of various domains, the computational limitations imposed on semantic querying and inferencing have drastically reduced the thrust semantic technologies had when initially introduced. In order for the semantic web to actually}
}

2010

Journal Articles

Giorgos Papachristoudis, Sotiris Diplaris and Pericles A. Mitkas
"SoFoCles: Feature filtering for microarray classification based on Gene Ontology"
Journal of Biomedical Informatics, 43, (1), 2010 Feb

Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the "curse of dimensionality" by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.

@article{2010Papachristoudis-JBI,
author={Giorgos Papachristoudis and Sotiris Diplaris and Pericles A. Mitkas},
title={SoFoCles: Feature filtering for microarray classification based on Gene Ontology},
journal={Journal of Biomedical Informatics},
volume={43},
number={1},
year={2010},
month={02},
date={2010-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/SoFoCles-Feature-filtering-for-microarray-classification-based-on-Gene-Ontology.pdf},
keywords={Data Mining;Feature filtering;Microarray classification;Ontologies;Semantic similarity},
abstract={Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the "curse of dimensionality" by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.}
}

Fotis E. Psomopoulos and Pericles A. Mitkas
"Bioinformatics algorithm development for Grid environments"
Journal of Systems and Software, 83, (7), 2010 Jul

A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher throughput computing by taking advantage of many computers geographically dispersed and connected by a network. Bioinformatics applications stand to gain in such a distributed environment in terms of increased availability, reliability and efficiency of computational resources. There is already considerable research in progress toward applying parallel computing techniques on bioinformatics methods, such as multiple sequence alignment, gene expression analysis and phylogenetic studies. In order to cope with the dimensionality issue, most machine learning methods either focus on specific groups of proteins or reduce the size of the original data set and/or the number of attributes involved. Grid computing could potentially provide an alternative solution to this problem, by combining multiple approaches in a seamless way. In this paper we introduce a unifying methodology coupling the strengths of the Grid with the specific needs and constraints of the major bioinformatics approaches. We also present a tool that implements this process and allows researchers to assess the computational needs for a specific task and optimize the allocation of available resources for its efficient completion.

@article{2010PsomopoulosJOSAS,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Bioinformatics algorithm development for Grid environments},
journal={Journal of Systems and Software},
volume={83},
number={7},
year={2010},
month={07},
date={2010-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Bioinformatics-algorithm-development-for-Grid-environments.pdf},
keywords={Bioinformatics;Data analysis;Grid computing;Protein classification;Semi-automated tool;Workflow design},
abstract={A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher throughput computing by taking advantage of many computers geographically dispersed and connected by a network. Bioinformatics applications stand to gain in such a distributed environment in terms of increased availability, reliability and efficiency of computational resources. There is already considerable research in progress toward applying parallel computing techniques on bioinformatics methods, such as multiple sequence alignment, gene expression analysis and phylogenetic studies. In order to cope with the dimensionality issue, most machine learning methods either focus on specific groups of proteins or reduce the size of the original data set and/or the number of attributes involved. Grid computing could potentially provide an alternative solution to this problem, by combining multiple approaches in a seamless way. In this paper we introduce a unifying methodology coupling the strengths of the Grid with the specific needs and constraints of the major bioinformatics approaches. We also present a tool that implements this process and allows researchers to assess the computational needs for a specific task and optimize the allocation of available resources for its efficient completion.}
}

2009

Journal Articles

Theodoros Agorastos, Vassilis Koutkias, Manolis Falelakis, Irini Lekka, T. Mikos, Anastasios Delopoulos, Pericles A. Mitkas, A. Tantsis, S. Weyers, P. Coorevits, A. M. Kaufmann, R. Kurzeja and Nicos Maglaveras
"Semantic Integration of Cervical Cancer DAta Repositories to Facilitata Multicenter Associtation Studies: Tha ASSIST Approach"
Cancer Informatics Journal, Special Issue on Semantic Technologies, 8, (9), pp. 31-44, 2009 Feb

@article{2009AgorastosCIJSIOST,
author={Theodoros Agorastos and Vassilis Koutkias and Manolis Falelakis and Irini Lekka and T. Mikos and Anastasios Delopoulos and Pericles A. Mitkas and A. Tantsis and S. Weyers and P. Coorevits and A. M. Kaufmann and R. Kurzeja and Nicos Maglaveras},
title={Semantic Integration of Cervical Cancer Data Repositories to Facilitate Multicenter Association Studies: The ASSIST Approach},
journal={Cancer Informatics Journal, Special Issue on Semantic Technologies},
volume={8},
number={9},
pages={31-44},
year={2009},
month={02},
date={2009-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Semantic-Integration-of-Cervical-Cancer-Data-Repositories-to-Facilitate-Multicenter-Association-Studies-The-ASSIST-Approach.pdf},
}

Kyriakos C. Chatzidimitriou, Konstantinos N. Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Data-Mining-Enhanced Agents in Dynamic Supply-Chain-Management Environments"
IEEE Intelligent Systems, 24, (3), pp. 54-63, 2009 Jan

Special issue on Agents and Data Mining

@article{2009ChatzidimitriouIS,
author={Kyriakos C. Chatzidimitriou and Konstantinos N. Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Data-Mining-Enhanced Agents in Dynamic Supply-Chain-Management Environments},
journal={IEEE Intelligent Systems},
volume={24},
number={3},
pages={54-63},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Data-Mining-Enhanced_Agents_in_Dynamic_Supply-Chai.pdf},
abstract={Special issue on Agents and Data Mining}
}

Georgios Karagiannis, Konstantinos Vavliakis, Sophia Sotiropoulou, Argirios Damtsios, Dimitrios Alexiadis and Christos Salpistis
"Using Signal Processing and Semantic Web Technologies to Analyze Byzantine Iconography"
IEEE Intelligent Systems, 24, (3), pp. 54-63, 2009 Jan

A bottom-up approach for documenting art objects processes data from innovative nondestructive analysis with signal processing and neural network techniques to provide a good estimation of the paint layer profile and pigments of artwork. The approach also uses Semantic Web technologies and maps concepts relevant to the analysis of paintings and Byzantine iconography to the Conceptual Reference Model of the International Committee for Documentation (CIDOC-CRM). This approach has introduced three main contributions: the development of an integrated nondestructive technique system combining spectroscopy and acoustic microscopy, supported by intelligent algorithms, for estimating the artworks

@article{2009KaragiannisIS,
author={Georgios Karagiannis and Konstantinos Vavliakis and Sophia Sotiropoulou and Argirios Damtsios and Dimitrios Alexiadis and Christos Salpistis},
title={Using Signal Processing and Semantic Web Technologies to Analyze Byzantine Iconography},
journal={IEEE Intelligent Systems},
volume={24},
number={3},
pages={54-63},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Using-Signal-Processing-and-Semantic-Web-Technologies-to-Analyze-Byzantine-Iconography.pdf},
keywords={Acoustic Microscopy;CIDOC - CRM;Multispectral Imaging;Non - Destructive Identification;Reasoning;Spectroscopy},
abstract={A bottom-up approach for documenting art objects processes data from innovative nondestructive analysis with signal processing and neural network techniques to provide a good estimation of the paint layer profile and pigments of artwork. The approach also uses Semantic Web technologies and maps concepts relevant to the analysis of paintings and Byzantine iconography to the Conceptual Reference Model of the International Committee for Documentation (CIDOC-CRM). This approach has introduced three main contributions: the development of an integrated nondestructive technique system combining spectroscopy and acoustic microscopy, supported by intelligent algorithms, for estimating the artworks}
}

John M. Konstantinides, Athanasios Mademlis, Petros Daras, Pericles A. Mitkas and Michael G. Strintzis
"Blind Robust 3D-Mesh Watermarking Based on Oblate Spheroidal Harmonics"
IEEE Transactions on Multimedia, 11, (1), pp. 23-38, 2009 Jan

In this paper, a novel transform-based, blind and robust 3D mesh watermarking scheme is presented. The 3D surface of the mesh is firstly divided into a number of discrete continuous regions, each of which is successively sampled and mapped onto oblate spheroids, using a novel surface parameterization scheme. The embedding is performed in the spheroidal harmonic coefficients of the spheroids, using a novel embedding scheme. Changes made to the transform domain are then reversed back to the spatial domain, thus forming the watermarked 3D mesh. The embedding scheme presented herein resembles, in principle, the ones using the multiplicative embedding rule (inherently providing high imperceptibility). The watermark detection is blind and by far more powerful than the various correlators typically incorporated by multiplicative schemes. Experimental results have shown that the proposed blind watermarking scheme is competitively robust against similarity transformations, connectivity attacks, mesh simplification and refinement, unbalanced re-sampling, smoothing and noise addition, even when juxtaposed to the informed ones.

@article{2009KonstantinidesIEEEToM,
author={John M. Konstantinides and Athanasios Mademlis and Petros Daras and Pericles A. Mitkas and Michael G. Strintzis},
title={Blind Robust 3D-Mesh Watermarking Based on Oblate Spheroidal Harmonics},
journal={IEEE Transactions on Multimedia},
volume={11},
number={1},
pages={23-38},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Blind-Robust-3D-Mesh-Watermarking-Based-onOblate-Spheroidal-Harmonics.pdf},
abstract={In this paper, a novel transform-based, blind and robust 3D mesh watermarking scheme is presented. The 3D surface of the mesh is firstly divided into a number of discrete continuous regions, each of which is successively sampled and mapped onto oblate spheroids, using a novel surface parameterization scheme. The embedding is performed in the spheroidal harmonic coefficients of the spheroids, using a novel embedding scheme. Changes made to the transform domain are then reversed back to the spatial domain, thus forming the watermarked 3D mesh. The embedding scheme presented herein resembles, in principle, the ones using the multiplicative embedding rule (inherently providing high imperceptibility). The watermark detection is blind and by far more powerful than the various correlators typically incorporated by multiplicative schemes. Experimental results have shown that the proposed blind watermarking scheme is competitively robust against similarity transformations, connectivity attacks, mesh simplification and refinement, unbalanced re-sampling, smoothing and noise addition, even when juxtaposed to the informed ones.}
}

Fotis E. Psomopoulos, Pericles A. Mitkas, Christos S. Krinas and Ioannis N. Demetropoulos
"A grid-enabled algorithm yields figure-eight molecular knot"
Molecular Simulation, 35, (9), pp. 725-736, 2009 Jun

The recently proposed general molecular knotting algorithm and its associated package, MolKnot, introduce programming into certain sections of stereochemistry. This work reports the G-MolKnot procedure that was deployed over the grid infrastructure; it applies a divide-and-conquer approach to the problem by splitting the initial search space into multiple independent processes and, combining the results at the end, yields significant improvements with regards to the overall efficiency. The algorithm successfully detected the smallest ever reported alkane configured to an open-knotted shape with four crossings.

@article{2009PsomopoulosMS,
author={Fotis E. Psomopoulos and Pericles A. Mitkas and Christos S. Krinas and Ioannis N. Demetropoulos},
title={A grid-enabled algorithm yields figure-eight molecular knot},
journal={Molecular Simulation},
volume={35},
number={9},
pages={725-736},
year={2009},
month={06},
date={2009-06-17},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-grid-enabled-algorithm-yields-Figure-Eight-molecular-knot.pdf},
keywords={data decomposition;figure-eight molecular knot;knot theory;stereochemistry},
abstract={The recently proposed general molecular knotting algorithm and its associated package, MolKnot, introduce programming into certain sections of stereochemistry. This work reports the G-MolKnot procedure that was deployed over the grid infrastructure; it applies a divide-and-conquer approach to the problem by splitting the initial search space into multiple independent processes and, combining the results at the end, yields significant improvements with regards to the overall efficiency. The algorithm successfully detected the smallest ever reported alkane configured to an open-knotted shape with four crossings.}
}

2008

Journal Articles

Pericles A. Mitkas, Vassilis Koutkias, Andreas L. Symeonidis, Manolis Falelakis, Christos Diou, Irini Lekka, Anastasios T. Delopoulos, Theodoros Agorastos and Nicos Maglaveras
"Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologies: The ASSIST Approach"
Studies in Health Technology and Informatics, 136, pp. 241-246, 2008 Jan

Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.

@article{2007MitkasSHTI,
author={Pericles A. Mitkas and Vassilis Koutkias and Andreas L. Symeonidis and Manolis Falelakis and Christos Diou and Irini Lekka and Anastasios T. Delopoulos and Theodoros Agorastos and Nicos Maglaveras},
title={Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologies: The ASSIST Approach},
journal={Studies in Health Technology and Informatics},
volume={136},
pages={241-246},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Association-Studies-on-Cervical-Cancer-Facilitated-by-Inference-and-Semantic-Technologies-The-ASSIST-Approach-.pdf},
abstract={Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.}
}

Alexandros Batzios, Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"BioCrawler: An intelligent crawler for the semantic web"
Expert Systems with Applications, 36, (35), 2008 Jul

Web crawling has become an important aspect of web search, as the WWW keeps getting bigger and search engines strive to index the most important and up to date content. Many experimental approaches exist, but few actually try to model the current behaviour of search engines, which is to crawl and refresh the sites they deem as important, much more frequently than others. BioCrawler mirrors this behaviour on the semantic web, by applying the learning strategies adopted in previous work on ecosystem simulation, called BioTope. BioCrawler employs the principles of BioTope

@article{2008BatziosESwA,
author={Alexandros Batzios and Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={BioCrawler: An intelligent crawler for the semantic web},
journal={Expert Systems with Applications},
volume={36},
number={35},
year={2008},
month={07},
date={2008-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/BioCrawler-An-intelligent-crawler-for-the-semantic-web.pdf},
keywords={semantic web;Multi-Agent System;focused crawling;web crawling},
abstract={Web crawling has become an important aspect of web search, as the WWW keeps getting bigger and search engines strive to index the most important and up to date content. Many experimental approaches exist, but few actually try to model the current behaviour of search engines, which is to crawl and refresh the sites they deem as important, much more frequently than others. BioCrawler mirrors this behaviour on the semantic web, by applying the learning strategies adopted in previous work on ecosystem simulation, called BioTope. BioCrawler employs the principles of BioTope}
}

Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis, Ioannis Kontogounis and Pericles A. Mitkas
"Agent Mertacor: A robust design for dealing with uncertainty and variation in SCM environments"
Expert Systems with Applications, 35, (3), pp. 591-603, 2008 Jan

Supply Chain Management (SCM) has recently entered a new era, where the old-fashioned static, long-term relationships between involved actors are being replaced by new, dynamic negotiating schemas, established over virtual organizations and trading marketplaces. SCM environments now operate under strict policies that all interested parties (suppliers, manufacturers, customers) have to abide by, in order to participate. And, though such dynamic markets provide greater profit potential, they also conceal greater risks, since competition is tougher and request and demand may vary significantly in the quest for maximum benefit. The need for efficient SCM actors is thus implied, actors that may handle the deluge of (either complete or incomplete) information generated, perceive variations and exploit the full potential of the environments they inhabit. In this context, we introduce Mertacor, an agent that employs robust mechanisms for dealing with all SCM facets and for trading within dynamic and competitive SCM environments. Its efficiency has been extensively tested in one of the most challenging SCM environments, the Trading Agent Competition (TAC) SCM game. This paper provides an extensive analysis of Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and thoroughly discusses Mertacor

@article{2008ChatzidimitriouESwA,
author={Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Ioannis Kontogounis and Pericles A. Mitkas},
title={Agent Mertacor: A robust design for dealing with uncertainty and variation in SCM environments},
journal={Expert Systems with Applications},
volume={35},
number={3},
pages={591-603},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Agent-Mertacor-A-robust-design-for-dealing-with-uncertaintyand-variation-in-SCM-environments.pdf},
keywords={machine learning;Agent intelligence;Autonomous trading agents;Electronic commerce},
abstract={Supply Chain Management (SCM) has recently entered a new era, where the old-fashioned static, long-term relationships between involved actors are being replaced by new, dynamic negotiating schemas, established over virtual organizations and trading marketplaces. SCM environments now operate under strict policies that all interested parties (suppliers, manufacturers, customers) have to abide by, in order to participate. And, though such dynamic markets provide greater profit potential, they also conceal greater risks, since competition is tougher and request and demand may vary significantly in the quest for maximum benefit. The need for efficient SCM actors is thus implied, actors that may handle the deluge of (either complete or incomplete) information generated, perceive variations and exploit the full potential of the environments they inhabit. In this context, we introduce Mertacor, an agent that employs robust mechanisms for dealing with all SCM facets and for trading within dynamic and competitive SCM environments. Its efficiency has been extensively tested in one of the most challenging SCM environments, the Trading Agent Competition (TAC) SCM game. This paper provides an extensive analysis of Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and thoroughly discusses Mertacor}
}

Alexandros Batzios, Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"An Integrated Infrastructure for Monitoring and Evaluating Agent-based Systems"
Expert Systems with Applications, 36, (4), 2008 Sep

Driven by the urging need to thoroughly identify and accentuate the merits of agent technology, we present in this paper, MEANDER, an integrated framework for evaluating the performance of agent-based systems. The proposed framework is based on the Agent Performance Evaluation (APE) methodology, which provides guidelines and representation tools for performance metrics, measurement collection and aggregation of measurements. MEANDER comprises a series of integrated software components that implement and automate various parts of the methodology and assist evaluators in their tasks. The main objective of MEANDER is to integrate performance evaluation processes into the entire development lifecycle, while clearly separating any evaluation-specific code from the application code at hand. In this paper, we describe in detail the architecture and functionality of the MEANDER components and test its applicability to an existing multi-agent system.

@article{2008DimouESwA,
author={Alexandros Batzios and Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={An Integrated Infrastructure for Monitoring and Evaluating Agent-based Systems},
journal={Expert Systems with Applications},
volume={36},
number={4},
year={2008},
month={09},
date={2008-09-09},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-integrated-infrastructure-for-monitoring-and-evaluating-agent-based-systems.pdf},
keywords={performance evaluation;automated software engineering;fuzzy measurement aggregation;software agents},
abstract={Driven by the urging need to thoroughly identify and accentuate the merits of agent technology, we present in this paper, MEANDER, an integrated framework for evaluating the performance of agent-based systems. The proposed framework is based on the Agent Performance Evaluation (APE) methodology, which provides guidelines and representation tools for performance metrics, measurement collection and aggregation of measurements. MEANDER comprises a series of integrated software components that implement and automate various parts of the methodology and assist evaluators in their tasks. The main objective of MEANDER is to integrate performance evaluation processes into the entire development lifecycle, while clearly separating any evaluation-specific code from the application code at hand. In this paper, we describe in detail the architecture and functionality of the MEANDER components and test its applicability to an existing multi-agent system.}
}

Andreas L. Symeonidis, Vivia Nikolaidou and Pericles A. Mitkas
"Sketching a methodology for efficient supply chain management agents enhanced through data mining"
International Journal of Intelligent Information and Database Systems (IJIIDS), 2, (1), 2008 Feb

Supply Chain Management (SCM) environments demand intelligent solutions, which can perceive variations and achieve maximum revenue. This highlights the importance of a commonly accepted design methodology, since most current implementations are application-specific. In this work, we present a methodology for building an intelligent trading agent and evaluating its performance at the Trading Agent Competition (TAC) SCM game. We justify architectural choices made, ranging from the adoption of specific Data Mining (DM) techniques, to the selection of the appropriate metrics for agent performance evaluation. Results indicate that our agent has proven capable of providing advanced SCM solutions in demanding SCM environments.

@article{2008SymeoniidsIJIIDS,
author={Andreas L. Symeonidis and Vivia Nikolaidou and Pericles A. Mitkas},
title={Sketching a methodology for efficient supply chain management agents enhanced through data mining},
journal={International Journal of Intelligent Information and Database Systems (IJIIDS)},
volume={2},
number={1},
year={2008},
month={02},
date={2008-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Sketching-a-methodology-for-efficient-supply-chain-management-agents-enhanced-through-data-mining.pdf},
keywords={performance evaluation;Intelligent agents;agent-based systems;multi-agent systems;MAS;trading agent competition;agent-oriented methodology;bidding;forecasting;SCM},
abstract={Supply Chain Management (SCM) environments demand intelligent solutions, which can perceive variations and achieve maximum revenue. This highlights the importance of a commonly accepted design methodology, since most current implementations are application-specific. In this work, we present a methodology for building an intelligent trading agent and evaluating its performance at the Trading Agent Competition (TAC) SCM game. We justify architectural choices made, ranging from the adoption of specific Data Mining (DM) techniques, to the selection of the appropriate metrics for agent performance evaluation. Results indicate that our agent has proven capable of providing advanced SCM solutions in demanding SCM environments.}
}

2007

Journal Articles

Pericles A. Mitkas, Andreas L. Symeonidis, Dionisis Kehagias and Ioannis N. Athanasiadis
"Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering"
International Journal of Product Lifecycle Management, 2, (2), pp. 1097-1111, 2007 Jan

Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents

@article{2007MitkasIJPLM,
author={Pericles A. Mitkas and Andreas L. Symeonidis and Dionisis Kehagias and Ioannis N. Athanasiadis},
title={Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering},
journal={International Journal of Product Lifecycle Management},
volume={2},
number={2},
pages={1097-1111},
year={2007},
month={01},
date={2007-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Application-of-Data-Mining-and-Intelligent-Agent-Technologies-to-Concurrent-Engineering.pdf},
keywords={multi-agent systems;MAS},
abstract={Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents}
}

Andreas L. Symeonidis, Kyriakos C. Chatzidimitriou, Ioannis N. Athanasiadis and Pericles A. Mitkas
"Data mining for agent reasoning: A synergy for training intelligent agents"
Engineering Applications of Artificial Intelligence, 20, (8), pp. 1097-1111, 2007 Dec

The task-oriented nature of data mining (DM) has already been dealt successfully with the employment of intelligent agent systems that distribute tasks, collaborate and synchronize in order to reach their ultimate goal, the extraction of knowledge. A number of sophisticated multi-agent systems (MAS) that perform DM have been developed, proving that agent technology can indeed be used in order to solve DM problems. Looking into the opposite direction though, knowledge extracted through DM has not yet been exploited on MASs. The inductive nature of DM imposes logic limitations and hinders the application of the extracted knowledge on such kind of deductive systems. This problem can be overcome, however, when certain conditions are satisfied a priori. In this paper, we present an approach that takes the relevant limitations and considerations into account and provides a gateway on the way DM techniques can be employed in order to augment agent intelligence. This work demonstrates how the extracted knowledge can be used for the formulation initially, and the improvement, in the long run, of agent reasoning.

@article{2007SymeonidisEAAI,
author={Andreas L. Symeonidis and Kyriakos C. Chatzidimitriou and Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={Data mining for agent reasoning: A synergy for training intelligent agents},
journal={Engineering Applications of Artificial Intelligence},
volume={20},
number={8},
pages={1097-1111},
year={2007},
month={12},
date={2007-12-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Data-mining-for-agent-reasoning-A-synergy-fortraining-intelligent-agents.pdf},
keywords={Agent Technology;Agent reasoning;Agent training;Knowledge model},
abstract={The task-oriented nature of data mining (DM) has already been dealt successfully with the employment of intelligent agent systems that distribute tasks, collaborate and synchronize in order to reach their ultimate goal, the extraction of knowledge. A number of sophisticated multi-agent systems (MAS) that perform DM have been developed, proving that agent technology can indeed be used in order to solve DM problems. Looking into the opposite direction though, knowledge extracted through DM has not yet been exploited on MASs. The inductive nature of DM imposes logic limitations and hinders the application of the extracted knowledge on such kind of deductive systems. This problem can be overcome, however, when certain conditions are satisfied a priori. In this paper, we present an approach that takes the relevant limitations and considerations into account and provides a gateway on the way DM techniques can be employed in order to augment agent intelligence. This work demonstrates how the extracted knowledge can be used for the formulation initially, and the improvement, in the long run, of agent reasoning.}
}

Andreas L. Symeonidis, Ioannis N. Athanasiadis and Pericles A. Mitkas
"A retraining methodology for enhancing agent intelligence"
Knowledge-Based Systems, 20, (4), pp. 388-396, 2007 Jan

Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures, as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both design and control of multi-agent systems (MAS), as well as agent training. We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of data mining extracted knowledge. In this paper, we present the methodology and tools for agent retraining. Through experimented results with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead to the improvement in the long run of agent intelligence.

@article{2007SymeonidisKBS,
author={Andreas L. Symeonidis and Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={A retraining methodology for enhancing agent intelligence},
journal={Knowledge-Based Systems},
volume={20},
number={4},
pages={388-396},
year={2007},
month={01},
date={2007-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-retraining-methodology-for-enhancing-agent-intelligence.pdf},
keywords={business data processing;logic programming},
abstract={Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures, as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both design and control of multi-agent systems (MAS), as well as agent training. We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of data mining extracted knowledge. In this paper, we present the methodology and tools for agent retraining. Through experimented results with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead to the improvement in the long run of agent intelligence.}
}

Andreas L. Symeonidis, Kyriakos C. Chatzidimitriou, Dionysios Kehagias and Pericles A. Mitkas
"A Multi-agent Infrastructure for Enhancing ERP system Intelligence"
Scalable Computing: Practice and Experience, 8, (1), pp. 101-114, 2007 Jan

Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. In this paper we present an alternative approach for incorporating adaptive business intelligence into the company

@article{2007SymeonidisSCPE,
author={Andreas L. Symeonidis and Kyriakos C. Chatzidimitriou and Dionysios Kehagias and Pericles A. Mitkas},
title={A Multi-agent Infrastructure for Enhancing ERP system Intelligence},
journal={Scalable Computing: Practice and Experience},
volume={8},
number={1},
pages={101-114},
year={2007},
month={01},
date={2007-01-01},
url={http://www.scpe.org/index.php/scpe/article/viewFile/401/75},
keywords={Adaptive Decision Making;ERP systems;Mutli-Agent Systems;Soft computing},
abstract={Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. In this paper we present an alternative approach for incorporating adaptive business intelligence into the company}
}

2006

Journal Articles

Sotiris Diplaris, Andreas L. Symeonidis, Pericles A. Mitkas, Georgios Banos and Z Abas
"A decision-tree-based alarming system for the validation of national genetic evaluations"
Computers and Electronics in Agriculture, 52, (1-2), pp. 21-35, 2006 Jun

The aim of this work was to explore possibilities to build an alarming system based on the results of the application of data mining (DM) techniques in genetic evaluations of dairy cattle, in order to assess and assure data quality. The technique used combined data mining using classification and decision-tree algorithms, Gaussian binned fitting functions, and hypothesis tests. Data were quarterly national genetic evaluations, computed between February 1999 and February 2003 in nine countries. Each evaluation run included 73,000-90,000 bull records complete with their genetic values and evaluation information. Milk production traits were considered. Data mining algorithms were applied separately for each country and evaluation run to search for associations across several dimensions, including bull origin, type of proof, age of bull, and number of daughters. Then, data in each node were fitted to the Gaussian function and the quality of the fit was measured, thus providing a measure of the quality of data. In order to evaluate and ultimately predict decision-tree models, the implemented architecture can compare the node probabilities between two models and decide on their similarity, using hypothesis tests for the standard deviation of their distribution. The key utility of this technique lies in its capacity to identify the exact node where anomalies occur, and to fire a focused alarm pointing to erroneous data.

@article{2006DiplarisCEA,
author={Sotiris Diplaris and Andreas L. Symeonidis and Pericles A. Mitkas and Georgios Banos and Z Abas},
title={A decision-tree-based alarming system for the validation of national genetic evaluations},
journal={Computers and Electronics in Agriculture},
volume={52},
number={1--2},
pages={21--35},
year={2006},
month={06},
date={2006-06-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-decision-tree-based-alarming-system-for-the-validation-of-national-genetic-evaluations.pdf},
keywords={Dairy cattle evaluations;Alarming technique;Genetic evaluations;Quality control},
abstract={The aim of this work was to explore possibilities to build an alarming system based on the results of the application of data mining (DM) techniques in genetic evaluations of dairy cattle, in order to assess and assure data quality. The technique used combined data mining using classification and decision-tree algorithms, Gaussian binned fitting functions, and hypothesis tests. Data were quarterly national genetic evaluations, computed between February 1999 and February 2003 in nine countries. Each evaluation run included 73,000-90,000 bull records complete with their genetic values and evaluation information. Milk production traits were considered. Data mining algorithms were applied separately for each country and evaluation run to search for associations across several dimensions, including bull origin, type of proof, age of bull, and number of daughters. Then, data in each node were fitted to the Gaussian function and the quality of the fit was measured, thus providing a measure of the quality of data. In order to evaluate and ultimately predict decision-tree models, the implemented architecture can compare the node probabilities between two models and decide on their similarity, using hypothesis tests for the standard deviation of their distribution. The key utility of this technique lies in its capacity to identify the exact node where anomalies occur, and to fire a focused alarm pointing to erroneous data.}
}

Andreas L. Symeonidis, Dionisis D. Kehagias, Pericles A. Mitkas and Adamantios Koumpis
"Open Source Supply Chains"
International Journal of Advanced Manufacturing Systems (IJAMS), 9, (1), pp. 33-42, 2006 Jan

Enterprise Resource Planning (ERP) systems tend to deploy Supply Chains, in order to successfully integrate customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs, while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated. Furthermore, the development of such systems is subject to strict licensing, since the exploitation of such kind of software is usually proprietary. This leads to a monolithic approach and to sub-utilization of efforts from all sides. Introducing a completely new paradigm of how primitive Supply Chain Management (SCM) rules apply to ERP systems, we have developed a framework as an Open Source Multi-Agent System that introduces adaptive intelligence as a powerful add-on for ERP software customization. In this paper, the SCM system developed is described, and the expected benefits of the open source initiative employed are illustrated.

@article{2006SymeonidisIJAMS,
author={Andreas L. Symeonidis and Dionisis D. Kehagias and Pericles A. Mitkas and Adamantios Koumpis},
title={Open Source Supply Chains},
journal={International Journal of Advanced Manufacturing Systems (IJAMS)},
volume={9},
number={1},
pages={33-42},
year={2006},
month={01},
date={2006-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Open-Source-Supply-Chains.pdf},
keywords={agent-based social simulation},
abstract={Enterprise Resource Planning (ERP) systems tend to deploy Supply Chains, in order to successfully integrate customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs, while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated. Furthermore, the development of such systems is subject to strict licensing, since the exploitation of such kind of software is usually proprietary. This leads to a monolithic approach and to sub-utilization of efforts from all sides. Introducing a completely new paradigm of how primitive Supply Chain Management (SCM) rules apply to ERP systems, we have developed a framework as an Open Source Multi-Agent System that introduces adaptive intelligence as a powerful add-on for ERP software customization. In this paper, the SCM system developed is described, and the expected benefits of the open source initiative employed are illustrated.}
}

2005

Journal Articles

Ioannis N. Athanasiadis, Alexandros K. Mentes, Pericles Alexandros Mitkas and Yiannis A. Mylopoulos
"A hybrid agent-based model for estimating residential water demand"
Simulation: Transactions of The Society for Modeling and Simulation International, 81, (3), pp. 175--187, 2005 Mar

The global effort toward sustainable development has initiated a transition in water management. Water utility companies use water-pricing policies as an instrument for controlling residential water demand. To support policy makers in their decisions, the authors have developed DAWN, a hybrid model for evaluating water-pricing policies. DAWN integrates an agent-based social model for the consumer with conventional econometric models and simulates the residential water demand-supply chain, enabling the evaluation of different scenarios for policy making. An agent community is assigned to behave as water consumers, while econometric and social models are incorporated into them for estimating water consumption. DAWN's main advantage is that it supports social interaction between consumers, through an influence diffusion mechanism, implemented via inter-agent communication. Parameters affecting water consumption and associated with consumers' social behavior can be simulated with DAWN. Real-world results of DAWN's application for the evaluation of five water-pricing policies in Thessaloniki, Greece, are presented.

@article{2005Athanasiadis-SIMULATION,
author={Ioannis N. Athanasiadis and Alexandros K. Mentes and Pericles Alexandros Mitkas and Yiannis A. Mylopoulos},
title={A hybrid agent-based model for estimating residential water demand},
journal={Simulation: Transactions of The Society for Modeling and Simulation International},
volume={81},
number={3},
pages={175--187},
year={2005},
month={03},
date={2005-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-Hybrid-Agent-Based-Model-for-EstimatingResidential-Water-Demand.pdf},
keywords={residential water demand;multiagent systems;social influence;pricing policies},
abstract={The global effort toward sustainable development has initiated a transition in water management. Water utility companies use water-pricing policies as an instrument for controlling residential water demand. To support policy makers in their decisions, the authors have developed DAWN, a hybrid model for evaluating water-pricing policies. DAWN integrates an agent-based social model for the consumer with conventional econometric models and simulates the residential water demand-supply chain, enabling the evaluation of different scenarios for policy making. An agent community is assigned to behave as water consumers, while econometric and social models are incorporated into them for estimating water consumption. DAWN's main advantage is that it supports social interaction between consumers, through an influence diffusion mechanism, implemented via inter-agent communication. Parameters affecting water consumption and associated with consumers' social behavior can be simulated with DAWN. Real-world results of DAWN's application for the evaluation of five water-pricing policies in Thessaloniki, Greece, are presented.}
}

Ioannis N. Athanasiadis and Pericles A. Mitkas
"Social influence and water conservation: An agent-based approach"
IEEE Computing in Science and Engineering, 7, (1), pp. 175--187, 2005 Jan

Every day, consumers are exposed to advertising campaigns that attempt to influence their decisions and affect their behavior. Word-of-mouth communication, the informal channels of daily interactions among friends, relatives, coworkers, neighbors, and acquaintances, plays a much more significant role in how consumer behavior is shaped, fashion is introduced, and product reputation is built. Macrolevel simulations that include this kind of social parameter are usually limited to generalized, often simplistic assumptions. In an effort to represent the phenomenon in a semantically coherent way and model it more realistically, we developed an influence-diffusion mechanism that follows agent-based social simulation primitives. The model is realized as a multiagent software platform, which we call Dawn (for distributed agents for water simulation).

@article{2005AthanasiadisIEEECSE,
author={Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={Social influence and water conservation: An agent-based approach},
journal={IEEE Computing in Science and Engineering},
volume={7},
number={1},
pages={175--187},
year={2005},
month={01},
date={2005-01-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Social-influence-and-water-conservation-An-agent-based-approach.pdf},
keywords={training},
abstract={Every day, consumers are exposed to advertising campaigns that attempt to influence their decisions and affect their behavior. Word-of-mouth communication, the informal channels of daily interactions among friends, relatives, coworkers, neighbors, and acquaintances, plays a much more significant role in how consumer behavior is shaped, fashion is introduced, and product reputation is built. Macrolevel simulations that include this kind of social parameter are usually limited to generalized, often simplistic assumptions. In an effort to represent the phenomenon in a semantically coherent way and model it more realistically, we developed an influence-diffusion mechanism that follows agent-based social simulation primitives. The model is realized as a multiagent software platform, which we call Dawn (for distributed agents for water simulation).}
}

Dionisis Kehagias, Andreas L. Symeonidis and Pericles A. Mitkas
"Designing Pricing Mechanisms for Autonomous Agents Based on Bid-Forecasting"
Electronic Markets, 15, (1), pp. 53--62, 2005 Jan

Autonomous agents that participate in the electronic market environment introduce an advanced paradigm for realizing automated deliberations over offered prices of auctioned goods. These agents represent humans and their assets, therefore it is critical for them not only to act rationally but also efficiently. By enabling agents to deploy bidding strategies and to compete with each other in a marketplace, a valuable amount of historical data is produced. An optimal dynamic forecasting of the maximum offered bid would enable more gainful behaviours by agents. In this respect, this paper presents a methodology that takes advantage of price offers generated in e-auctions, in order to provide an adequate short-term forecasting schema based on time-series analysis. The forecast is incorporated into the reasoning mechanism of a group of autonomous e-auction agents to improve their bidding behaviour. In order to test the improvement introduced by the proposed method, we set up a test-bed, on which a slightly variant version of the first-price ascending auction is simulated with many buyers and one seller, trading with each other over one item. The results of the proposed methodology are discussed and many possible extensions and improvements are advocated to ensure wide acceptance of the bid-forecasting reasoning mechanism.

@article{2005KehagiasEM,
author={Dionisis Kehagias and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Designing Pricing Mechanisms for Autonomous Agents Based on Bid-Forecasting},
journal={Electronic Markets},
volume={15},
number={1},
pages={53--62},
year={2005},
month={01},
date={2005-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Designing-Pricing-Mechanisms-for-Autonomous-Agents-Based-on-Bid-Forecasting.pdf},
abstract={Autonomous agents that participate in the electronic market environment introduce an advanced paradigm for realizing automated deliberations over offered prices of auctioned goods. These agents represent humans and their assets, therefore it is critical for them not only to act rationally but also efficiently. By enabling agents to deploy bidding strategies and to compete with each other in a marketplace, a valuable amount of historical data is produced. An optimal dynamic forecasting of the maximum offered bid would enable more gainful behaviours by agents. In this respect, this paper presents a methodology that takes advantage of price offers generated in e-auctions, in order to provide an adequate short-term forecasting schema based on time-series analysis. The forecast is incorporated into the reasoning mechanism of a group of autonomous e-auction agents to improve their bidding behaviour. In order to test the improvement introduced by the proposed method, we set up a test-bed, on which a slightly variant version of the first-price ascending auction is simulated with many buyers and one seller, trading with each other over one item. The results of the proposed methodology are discussed and many possible extensions and improvements are advocated to ensure wide acceptance of the bid-forecasting reasoning mechanism.}
}

Pericles A. Mitkas
"Knowledge Discovery for Training Intelligent Agents: Methodology, Tools and Applications"
Lecture Notes in Artificial Intelligence, 3505, pp. 2-18, 2005 May

In this paper we address a relatively young but important area of research: the intersection of agent technology and data mining. This intersection can take two forms: a) the more mundane use of intelligent agents for improved knowledge discovery and b) the use of data mining techniques for producing smarter, more efficient agents. The paper focuses on the second approach. Knowledge, hidden in voluminous data repositories routinely created and maintained by today's applications, can be extracted by data mining. The next step is to transform this knowledge into the inference mechanisms or simply the behavior of agents in multi-agent systems. We call this procedure “agent training.” We define different levels of agent training and we present a software engineering methodology that combines the application of deductive logic for generating intelligence from data with a process for transferring this knowledge into agents. We introduce Agent Academy, an integrated open-source framework, which supports data mining techniques and agent development tools. We also provide several examples of multi-agent systems developed with this approach.

@article{2005MitkasLNAI,
author={Pericles A. Mitkas},
title={Knowledge Discovery for Training Intelligent Agents: Methodology, Tools and Applications},
journal={Lecture Notes in Artificial Intelligence},
volume={3505},
pages={2-18},
year={2005},
month={05},
date={2005-05-30},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Knowledge-Discovery-for-Training-Intelligent-Agents-Methodology-Tools-and-Applications.pdf},
doi={http://dx.doi.org/10.1007/11492870_2},
abstract={In this paper we address a relatively young but important area of research: the intersection of agent technology and data mining. This intersection can take two forms: a) the more mundane use of intelligent agents for improved knowledge discovery and b) the use of data mining techniques for producing smarter, more efficient agents. The paper focuses on the second approach. Knowledge, hidden in voluminous data repositories routinely created and maintained by today's applications, can be extracted by data mining. The next step is to transform this knowledge into the inference mechanisms or simply the behavior of agents in multi-agent systems. We call this procedure “agent training.” We define different levels of agent training and we present a software engineering methodology that combines the application of deductive logic for generating intelligence from data with a process for transferring this knowledge into agents. We introduce Agent Academy, an integrated open-source framework, which supports data mining techniques and agent development tools. We also provide several examples of multi-agent systems developed with this approach.}
}

Andreas L. Symeonidis, Evangelos Valtos, Serafeim Seroglou and Pericles A. Mitkas
"Biotope: an integrated framework for simulating distributed multiagent computational systems"
IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 35, (3), pp. 420-432, 2005 May

The study of distributed computational systems issues, such as heterogeneity, concurrency, control, and coordination, has yielded a number of models and architectures, which aspire to provide satisfying solutions to each of the above problems. One of the most intriguing and complex classes of distributed systems are computational ecosystems, which add an "ecological" perspective to these issues and introduce the characteristic of self-organization. Extending previous research work on self-organizing communities, we have developed Biotope, which is an agent simulation framework, where each one of its members is dynamic and self-maintaining. The system provides a highly configurable interface for modeling various environments as well as the "living" or computational entities that reside in them, while it introduces a series of tools for monitoring system evolution. Classifier systems and genetic algorithms have been employed for agent learning, while the dispersal distance theory has been adopted for agent replication. The framework has been used for the development of a characteristic demonstrator, where Biotope agents are engaged in well-known vital activities (nutrition, communication, growth, death) directed toward their own self-replication, just like in natural environments. This paper presents an analytical overview of the work conducted and concludes with a methodology for simulating distributed multiagent computational systems.

@article{2005SymeonidisIEEETSMC,
author={Andreas L. Symeonidis and Evangelos Valtos and Serafeim Seroglou and Pericles A. Mitkas},
title={Biotope: an integrated framework for simulating distributed multiagent computational systems},
journal={IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans},
volume={35},
number={3},
pages={420-432},
year={2005},
month={05},
date={2005-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/publications/confaisadmMitkas05.pdf},
keywords={agent-based systems},
abstract={The study of distributed computational systems issues, such as heterogeneity, concurrency, control, and coordination, has yielded a number of models and architectures, which aspire to provide satisfying solutions to each of the above problems. One of the most intriguing and complex classes of distributed systems are computational ecosystems, which add an "ecological" perspective to these issues and introduce the characteristic of self-organization. Extending previous research work on self-organizing communities, we have developed Biotope, which is an agent simulation framework, where each one of its members is dynamic and self-maintaining. The system provides a highly configurable interface for modeling various environments as well as the "living" or computational entities that reside in them, while it introduces a series of tools for monitoring system evolution. Classifier systems and genetic algorithms have been employed for agent learning, while the dispersal distance theory has been adopted for agent replication. The framework has been used for the development of a characteristic demonstrator, where Biotope agents are engaged in well-known vital activities (nutrition, communication, growth, death) directed toward their own self-replication, just like in natural environments. This paper presents an analytical overview of the work conducted and concludes with a methodology for simulating distributed multiagent computational systems.}
}

2003

Journal Articles

Andreas L. Symeonidis, Dionisis Kehagias and Pericles A. Mitkas
"Intelligent Policy Recommendations on Enterprise Resource Planning by the use of agent technology and data mining techniques"
Expert Systems with Applications, 25, (4), pp. 589-602, 2003 Jan

Enterprise Resource Planning systems tend to deploy Supply Chain Management and/or Customer Relationship Management techniques, in order to successfully fuse information to customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated with existing knowledge. Advancing on the way the above mentioned techniques apply on ERP systems, we have developed a multi-agent system that introduces adaptive intelligence as a powerful add-on for ERP software customization. The system can be thought of as a recommendation engine, which takes advantage of knowledge gained through the use of data mining techniques, and incorporates it into the resulting company selling policy. The intelligent agents of the system can be periodically retrained as new information is added to the ERP. In this paper, we present the architecture and development details of the system, and demonstrate its application on a real test case.

@article{2003SymeonidisESWA,
author={Andreas L. Symeonidis and Dionisis Kehagias and Pericles A. Mitkas},
title={Intelligent Policy Recommendations on Enterprise Resource Planning by the use of agent technology and data mining techniques},
journal={Expert Systems with Applications},
volume={25},
number={4},
pages={589-602},
year={2003},
month={01},
date={2003-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Intelligent-policy-recommendations-on-enterprise-resource-planningby-the-use-of-agent-technology-and-data-mining-techniques.pdf},
keywords={agents},
abstract={Enterprise Resource Planning systems tend to deploy Supply Chain Management and/or Customer Relationship Management techniques, in order to successfully fuse information to customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated with existing knowledge. Advancing on the way the above mentioned techniques apply on ERP systems, we have developed a multi-agent system that introduces adaptive intelligence as a powerful add-on for ERP software customization. The system can be thought of as a recommendation engine, which takes advantage of knowledge gained through the use of data mining techniques, and incorporates it into the resulting company selling policy. The intelligent agents of the system can be periodically retrained as new information is added to the ERP. In this paper, we present the architecture and development details of the system, and demonstrate its application on a real test case.}
}