Publications



2023

Journal Articles

Rafail Brouzos, Konstantinos Panayiotou, Emmanouil Tsardoulias and Andreas Symeonidis
"A Low-Code Approach for Connected Robots"
Journal of Intelligent & Robotic Systems, 108, 2023 Jun

Advanced robotic systems are finally becoming a reality; following the increased attention that robotics have attracted during the last few decades, new types of robotic applications are launched, from robotic space vessels and fully autonomous cars to robotic dancers and robot companions. Even more, following the advancements in the Internet of Things (IoT) domain, robots can now participate in more complex systems, namely Cyber-physical systems (CPS). In such systems, robots, software, sensors and/or “things” cooperate seamlessly in order to exhibit the desired outcome. However, the high heterogeneity of the components comprising CPS systems requires expertise in various scientific domains, a fact that makes development of CPS applications a resource- and time-consuming process. In order to alleviate this pain, model-driven (or model-based) approaches have been introduced. They employ a low code software engineering approach and hide the domain-specific knowledge needed, by providing an abstract representation that can be more easily understood. Following the low-code paradigm, current work focuses on the development of Domain-specific Languages (DSL) for ROS2 (Robot Operating System 2) systems in order to hide low-level middleware-specific setup and configuration details and enable access to robot development by non ROS experts. Furthermore, in order to enable the integration of ROS2 robots in CPS, a second DSL was developed. The first language, GeneROS, is used for the development and configuration of the core functionalities of the robot (such as hardware drivers and algorithms), while the second language, ROSbridge-DSL, implements the interfaces for connecting robots to the Edge and the Cloud, enabling this way remote monitoring and control in the context of IoT and CPS.
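
For a sense of what such a bridge does in practice, the sketch below hand-codes a minimal ROS2-to-MQTT relay with rclpy and paho-mqtt. It is not output of GeneROS or ROSbridge-DSL; the topic names and broker address are assumptions for the example.

import json

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
import paho.mqtt.client as mqtt

class ScanBridge(Node):
    """Forwards a robot-side ROS2 topic to an edge MQTT broker."""

    def __init__(self):
        super().__init__('scan_bridge')
        self.mqtt = mqtt.Client()
        self.mqtt.connect('localhost', 1883)   # assumed edge broker
        self.mqtt.loop_start()
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, msg):
        # Serialise the measurement and push it towards the Edge/Cloud.
        self.mqtt.publish('robot.sensors.scan',
                          json.dumps({'ranges': list(msg.ranges)}))

def main():
    rclpy.init()
    rclpy.spin(ScanBridge())

if __name__ == '__main__':
    main()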

@article{Brouzos2023,
author={Rafail Brouzos and Konstantinos Panayiotou and Emmanouil Tsardoulias and Andreas Symeonidis},
title={A Low-Code Approach for Connected Robots},
journal={Journal of Intelligent & Robotic Systems},
volume={108},
year={2023},
month={06},
date={2023-06-19},
url={https://link.springer.com/article/10.1007/s10846-023-01861-y},
doi={https://doi.org/10.1007/s10846-023-01861-y},
keywords={robotics;Internet of Things;cyber-physical systems;Low-code development;Model-driven engineering;Domain-specific languages;Robot operating system 2},
abstract={Advanced robotic systems are finally becoming a reality; following the increased attention that robotics have attracted during the last few decades, new types of robotic applications are launched, from robotic space vessels and fully autonomous cars to robotic dancers and robot companions. Even more, following the advancements in the Internet of Things (IoT) domain, robots can now participate in more complex systems, namely Cyber-physical systems (CPS). In such systems, robots, software, sensors and/or “things” cooperate seamlessly in order to exhibit the desired outcome. However, the high heterogeneity of the components comprising CPS systems requires expertise in various scientific domains, a fact that makes development of CPS applications a resource- and time-consuming process. In order to alleviate this pain, model-driven (or model-based) approaches have been introduced. They employ a low code software engineering approach and hide the domain-specific knowledge needed, by providing an abstract representation that can be more easily understood. Following the low-code paradigm, current work focuses on the development of Domain-specific Languages (DSL) for ROS2 (Robot Operating System 2) systems in order to hide low-level middleware-specific setup and configuration details and enable access to robot development by non ROS experts. Furthermore, in order to enable the integration of ROS2 robots in CPS, a second DSL was developed. The first language, GeneROS, is used for the development and configuration of the core functionalities of the robot (such as hardware drivers and algorithms), while the second language, ROSbridge-DSL, implements the interfaces for connecting robots to the Edge and the Cloud, enabling this way remote monitoring and control in the context of IoT and CPS.}
}

Thomas Karanikiotis, Themistoklis Diamantopoulos and Andreas L. Symeonidis
"Employing Source Code Quality Analytics for Enriching Code Snippets Data"
Data, 8, (9), 2023 Aug

The availability of code snippets in online repositories like GitHub has led to an uptick in code reuse, this way further supporting an open-source component-based development paradigm. The likelihood of code reuse rises when the code components or snippets are of high quality, especially in terms of readability, making their integration and upkeep simpler. Toward this direction, we have developed a dataset of code snippets that takes into account both the functional and the quality characteristics of the snippets. The dataset is based on the CodeSearchNet corpus and comprises additional information, including static analysis metrics, code violations, readability assessments, and source code similarity metrics. Thus, using this dataset, both software researchers and practitioners can conveniently find and employ code snippets that satisfy diverse functional needs while also demonstrating excellent readability and maintainability.
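
As a hint of how such a dataset could be consumed, here is a hypothetical pandas query; the file name and column names are assumptions for illustration, not the dataset's actual schema.

import pandas as pd

# Hypothetical export of the dataset; column names are assumed.
snippets = pd.read_csv('snippets.csv')

# Keep snippets that are readable and free of violations...
reusable = snippets[(snippets['readability'] >= 0.8) &
                    (snippets['violations'] == 0)]

# ...and surface the simplest candidates first.
print(reusable.sort_values('cyclomatic_complexity').head(10))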

@article{data8090140,
author={Thomas Karanikiotis and Themistoklis Diamantopoulos and Andreas L. Symeonidis},
title={Employing Source Code Quality Analytics for Enriching Code Snippets Data},
journal={Data},
volume={8},
number={9},
year={2023},
month={08},
date={2023-08-31},
url={https://www.mdpi.com/2306-5729/8/9/140},
doi={https://doi.org/10.3390/data8090140},
issn={2306-5729},
keywords={static analysis metrics;mining software repositories;source code mining;readability;code snippets},
abstract={The availability of code snippets in online repositories like GitHub has led to an uptick in code reuse, this way further supporting an open-source component-based development paradigm. The likelihood of code reuse rises when the code components or snippets are of high quality, especially in terms of readability, making their integration and upkeep simpler. Toward this direction, we have developed a dataset of code snippets that takes into account both the functional and the quality characteristics of the snippets. The dataset is based on the CodeSearchNet corpus and comprises additional information, including static analysis metrics, code violations, readability assessments, and source code similarity metrics. Thus, using this dataset, both software researchers and practitioners can conveniently find and employ code snippets that satisfy diverse functional needs while also demonstrating excellent readability and maintainability.}
}

Alexandros Delitzas, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"Calista: A deep learning-based system for understanding and evaluating website aesthetics"
International Journal of Human-Computer Studies, 175, pp. 103019, 2023 Jan

Website aesthetics play an important role in attracting users and customers, as well as in enhancing user experience. In this work, we propose a tool that performs automatic evaluation of website aesthetics using deep learning models that display high correlation to human perception. These models were developed using two different datasets. The first dataset was created by employing a rating-based ranking approach and contains user judgments on websites in the form of an explicit numerical value on a scale. Using the first dataset, we developed models following three different approaches and managed to outperform previous works. In addition, we created a new dataset by employing a comparison-based ranking approach, which is a more reliable dataset in the sense that it follows a more “natural” data collection method. In this case, users were asked to compare two websites at a time and choose which is more attractive. Data collection was performed via a web application especially designed and developed for this purpose. In the experiments conducted, we evaluated each model and compared the two data collection methods. This work aims to illustrate the effectiveness of deep learning as a solution to the problem as well as to highlight the importance of comparison-based ranking in order to achieve reliable results. In order to further promote our work, we also developed a tool that scores the aesthetics of a website, simply by providing the website URL. We argue that such a tool will serve as a reliable guide in the hands of designers and developers during the design process.
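
A minimal sketch of the comparison-based idea, assuming a linear scorer and synthetic data in place of the paper's deep models: learn a score so that the probability one site beats another follows the score difference (Bradley-Terry/RankNet style).

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))              # 100 websites, 16 visual features
w_true = rng.normal(size=16)                # hidden "ground-truth" taste
pairs = rng.integers(0, 100, size=(500, 2))
pref = (X[pairs[:, 0]] @ w_true > X[pairs[:, 1]] @ w_true).astype(float)

w = np.zeros(16)                            # linear scorer instead of a CNN
for _ in range(500):                        # plain gradient descent on BCE
    a, b = X[pairs[:, 0]], X[pairs[:, 1]]
    p = 1.0 / (1.0 + np.exp(-(a - b) @ w))  # P(first site wins comparison)
    w -= 0.1 * ((p - pref) @ (a - b)) / len(pairs)

print('aesthetics score of site 0:', X[0] @ w)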

@article{DELITZAS2023103019,
author={Alexandros Delitzas and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={Calista: A deep learning-based system for understanding and evaluating website aesthetics},
journal={International Journal of Human-Computer Studies},
volume={175},
pages={103019},
year={2023},
month={01},
date={2023-01-01},
url={https://www.sciencedirect.com/science/article/pii/S1071581923000253},
doi={https://doi.org/10.1016/j.ijhcs.2023.103019},
issn={1071-5819},
keywords={Crowdsourcing;User experience;Website aesthetics;Deep learning;Rating-based evaluation;Comparison-based evaluation},
abstract={Website aesthetics play an important role in attracting users and customers, as well as in enhancing user experience. In this work, we propose a tool that performs automatic evaluation of website aesthetics using deep learning models that display high correlation to human perception. These models were developed using two different datasets. The first dataset was created by employing a rating-based ranking approach and contains user judgments on websites in the form of an explicit numerical value on a scale. Using the first dataset, we developed models following three different approaches and managed to outperform previous works. In addition, we created a new dataset by employing a comparison-based ranking approach, which is a more reliable dataset in the sense that it follows a more “natural” data collection method. In this case, users were asked to compare two websites at a time and choose which is more attractive. Data collection was performed via a web application especially designed and developed for this purpose. In the experiments conducted, we evaluated each model and compared the two data collection methods. This work aims to illustrate the effectiveness of deep learning as a solution to the problem as well as to highlight the importance of comparison-based ranking in order to achieve reliable results. In order to further promote our work, we also developed a tool that scores the aesthetics of a website, simply by providing the website URL. We argue that such a tool will serve as a reliable guide in the hands of designers and developers during the design process.}
}

Themistoklis Diamantopoulos, Nikolaos Saoulidis and Andreas Symeonidis
"Automated Issue Assignment using Topic Modeling on Jira Issue Tracking Data"
IET Software, 17, (3), pp. 333-344, 2023 May

As more and more software teams use online issue tracking systems to collaborate on software projects, the accurate assignment of new issues to the most suitable contributors may have significant impact on the success of the project. As a result, several research efforts have been directed towards automating this process to save considerable time and effort. However, most approaches focus mainly on software bugs and employ models that do not sufficiently take into account the semantics and the non-textual metadata of issues and/or produce models that may require manual tuning. A methodology that extracts both textual and non-textual features from different types of issues is designed, providing a Jira dataset that involves not only bugs but also new features, issues related to documentation, patches, etc. Moreover, the semantics of issue text are effectively captured by employing a topic modelling technique that is optimised using the assignment result. Finally, this methodology aggregates probabilities from a set of individual models to provide the final assignment. Upon evaluating this approach in an automated issue assignment setting using a dataset of Jira issues, the authors conclude that it can be effective for automated issue assignment.
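
The aggregation idea can be sketched as follows, with synthetic data standing in for the Jira features and the optimised topic model: one classifier per feature view, with averaged class probabilities deciding the assignee.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
topics = rng.random((300, 20))              # per-issue topic distributions
meta = rng.random((300, 5))                 # non-textual metadata features
assignee = rng.integers(0, 4, size=300)     # 4 contributors

text_model = LogisticRegression(max_iter=500).fit(topics, assignee)
meta_model = LogisticRegression(max_iter=500).fit(meta, assignee)

# Final assignment: argmax of the averaged probability vectors.
proba = (text_model.predict_proba(topics) + meta_model.predict_proba(meta)) / 2
print('recommended assignee for issue 0:', proba[0].argmax())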

@article{IETSoftware2023,
author={Themistoklis Diamantopoulos and Nikolaos Saoulidis and Andreas Symeonidis},
title={Automated Issue Assignment using Topic Modeling on Jira Issue Tracking Data},
journal={IET Software},
volume={17},
number={3},
pages={333-344},
year={2023},
month={05},
date={2023-05-30},
url={https://issel.ee.auth.gr/wp-content/uploads/2023/05/IETSoftware2023.pdf},
doi={https://doi.org/10.1049/sfw2.12129},
keywords={Software engineering;software development management;software maintenance;software management},
abstract={As more and more software teams use online issue tracking systems to collaborate on software projects, the accurate assignment of new issues to the most suitable contributors may have significant impact on the success of the project. As a result, several research efforts have been directed towards automating this process to save considerable time and effort. However, most approaches focus mainly on software bugs and employ models that do not sufficiently take into account the semantics and the non-textual metadata of issues and/or produce models that may require manual tuning. A methodology that extracts both textual and non-textual features from different types of issues is designed, providing a Jira dataset that involves not only bugs but also new features, issues related to documentation, patches, etc. Moreover, the semantics of issue text are effectively captured by employing a topic modelling technique that is optimised using the assignment result. Finally, this methodology aggregates probabilities from a set of individual models to provide the final assignment. Upon evaluating this approach in an automated issue assignment setting using a dataset of Jira issues, the authors conclude that it can be effective for automated issue assignment.}
}

Eleni Poptsi, Despina Moraitou, Emmanouil Tsardoulias, Andreas L. Symeonidis, Vasileios Papaliagkas and Magdalini Tsolaki
"R4Alz-Revised: A Tool Able to Strongly Discriminate ‘Subjective Cognitive Decline’ from Healthy Cognition and ‘Minor Neurocognitive Disorder’"
Diagnostics, 13, (3), pp. 338, 2023 Jan

Background: The diagnosis of the minor neurocognitive diseases in the clinical course of dementia before the clinical symptoms’ appearance is the holy grail of neuropsychological research. The R4Alz battery is a novel and valid tool that was designed to assess cognitive control in people with minor cognitive disorders. The aim of the current study is the R4Alz battery’s extension (namely R4Alz-R), enhanced by the design and administration of extra episodic memory tasks, as well as extra cognitive control tasks, towards improving the overall R4Alz discriminant validity. Methods: The study comprised 80 people: (a) 20 Healthy adults (HC), (b) 29 people with Subjective Cognitive Decline (SCD), and (c) 31 people with Mild Cognitive Impairment (MCI). The groups differed in age and educational level. Results: Updating, inhibition, attention switching, and cognitive flexibility tasks discriminated SCD from HC (p ≤ 0.003). Updating, switching, cognitive flexibility, and episodic memory tasks discriminated SCD from MCI (p ≤ 0.001). All the R4Alz-R’s tasks discriminated HC from MCI (p ≤ 0.001). The R4Alz-R was free of age and educational level effects. The battery discriminated perfectly SCD from HC and HC from MCI (100% sensitivity—95% specificity and 100% sensitivity—90% specificity, respectively), whilst it discriminated excellently SCD from MCI (90.3% sensitivity—82.8% specificity). Conclusion: SCD seems to be a stage of neurodegeneration since it can be objectively evaluated via the R4Alz-R battery, which seems to be a useful tool for early diagnosis.

@article{r4alzR,
author={Eleni Poptsi and Despina Moraitou and Emmanouil Tsardoulias and Andreas L. Symeonidis and Vasileios Papaliagkas and Magdalini Tsolaki},
title={R4Alz-Revised: A Tool Able to Strongly Discriminate ‘Subjective Cognitive Decline’ from Healthy Cognition and ‘Minor Neurocognitive Disorder’},
journal={Diagnostics},
volume={13},
number={3},
pages={338},
year={2023},
month={01},
date={2023-01-17},
url={https://www.mdpi.com/2075-4418/13/3/338},
doi={https://doi.org/10.3390/diagnostics13030338},
keywords={subjective cognitive decline;early diagnosis;neurodegeneration;R4Alz-R battery},
abstract={Background: The diagnosis of the minor neurocognitive diseases in the clinical course of dementia before the clinical symptoms’ appearance is the holy grail of neuropsychological research. The R4Alz battery is a novel and valid tool that was designed to assess cognitive control in people with minor cognitive disorders. The aim of the current study is the R4Alz battery’s extension (namely R4Alz-R), enhanced by the design and administration of extra episodic memory tasks, as well as extra cognitive control tasks, towards improving the overall R4Alz discriminant validity. Methods: The study comprised 80 people: (a) 20 Healthy adults (HC), (b) 29 people with Subjective Cognitive Decline (SCD), and (c) 31 people with Mild Cognitive Impairment (MCI). The groups differed in age and educational level. Results: Updating, inhibition, attention switching, and cognitive flexibility tasks discriminated SCD from HC (p ≤ 0.003). Updating, switching, cognitive flexibility, and episodic memory tasks discriminated SCD from MCI (p ≤ 0.001). All the R4Alz-R’s tasks discriminated HC from MCI (p ≤ 0.001). The R4Alz-R was free of age and educational level effects. The battery discriminated perfectly SCD from HC and HC from MCI (100% sensitivity—95% specificity and 100% sensitivity—90% specificity, respectively), whilst it discriminated excellently SCD from MCI (90.3% sensitivity—82.8% specificity). Conclusion: SCD seems to be a stage of neurodegeneration since it can be objectively evaluated via the R4Alz-R battery, which seems to be a useful tool for early diagnosis.}
}

2023

Conference Papers

Νικόλαος Αλτάνης, Ελένη Πόπτση, Μάγδα Τσολάκη, Ανδρέας Συμεωνίδης and Εμμανουήλ Τσαρδούλιας
"Σχεδιασμός οικολογικού συστήματος εικονικής πραγματικότητας (3D) προς εκτίμηση σφαιρικών νοητικών ικανοτήτων σε άτομα με ήπια νοητική έκπτωση"
13th Panhellenic Conference on Alzheimer's Disease & 4th Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND, Thessaloniki, Greece, 2023 Feb

@conference{altanisAlz2023,
author={Νικόλαος Αλτάνης and Ελένη Πόπτση and Μάγδα Τσολάκη and Ανδρέας Συμεωνίδης and Εμμανουήλ Τσαρδούλιας},
title={Σχεδιασμός οικολογικού συστήματος εικονικής πραγματικότητας (3D) προς εκτίμηση σφαιρικών νοητικών ικανοτήτων σε άτομα με ήπια νοητική έκπτωση},
booktitle={13th Panhellenic Conference on Alzheimer's Disease & 4th Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND, Thessaloniki, Greece},
year={2023},
month={02},
date={2023-02-09}
}

Dimitrios-Nikitas Nastos, Themistoklis Diamantopoulos and Andreas Symeonidis
"Towards Interpretable Monitoring and Assignment of Jira Issues"
Proceedings of the 18th International Conference on Software Technologies (ICSOFT 2023), pp. 696-703, 2023 Jul

Lately, online issue tracking systems like Jira are used extensively for monitoring open-source software projects. Using these systems, different contributors can collaborate towards planning features and resolving issues that may arise during the software development process. In this context, several approaches have been proposed to extract knowledge from these systems in order to automate issue assignment. Though effective under certain scenarios, these approaches also have limitations; most of them are based mainly on textual features and they may use techniques that do not extract the underlying semantics and/or the expertise of the different contributors. Furthermore, they typically provide black-box recommendations, thus not helping the developers to interpret the issue assignments. In this work, we present an issue mining system that extracts semantic topics from issues and provides interpretable recommendations for issue assignments. Our system employs a dataset of Jira issues and extracts information not only from the textual features of issues but also from their components and their labels. These features, along with the extracted semantic topics, produce an aggregated model that outputs interpretable recommendations and useful statistics to support issue assignment. The results of our evaluation indicate that our system can be effective, leaving room for future research.
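
A toy illustration of the interpretability angle, with a stand-in corpus and model: the top words of the topics behind a recommendation can be surfaced to the developer as evidence.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

issues = ['crash when parsing yaml config',
          'add docs for the REST endpoints',
          'parser fails on empty config file',
          'document the authentication flow']

vec = CountVectorizer()
X = vec.fit_transform(issues)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

words = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-3:]]
    print(f'topic {k}: {top}')   # human-readable evidence for a recommendation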

@conference{ICSOFT2023Issues,
author={Dimitrios-Nikitas Nastos and Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Towards Interpretable Monitoring and Assignment of Jira Issues},
booktitle={Proceedings of the 18th International Conference on Software Technologies (ICSOFT 2023)},
pages={696-703},
year={2023},
month={07},
date={2023-07-10},
url={https://issel.ee.auth.gr/wp-content/uploads/2023/07/ICSOFT2023Issues.pdf},
doi={https://doi.org/10.5220/0012146400003538},
keywords={Task Management;Jira Issues;Topic Modeling;Project Management},
abstract={Lately, online issue tracking systems like Jira are used extensively for monitoring open-source software projects. Using these systems, different contributors can collaborate towards planning features and resolving issues that may arise during the software development process. In this context, several approaches have been proposed to extract knowledge from these systems in order to automate issue assignment. Though effective under certain scenarios, these approaches also have limitations; most of them are based mainly on textual features and they may use techniques that do not extract the underlying semantics and/or the expertise of the different contributors. Furthermore, they typically provide black-box recommendations, thus not helping the developers to interpret the issue assignments. In this work, we present an issue mining system that extracts semantic topics from issues and provides interpretable recommendations for issue assignments. Our system employs a dataset of Jira issues and extracts information not only from the textual features of issues but also from their components and their labels. These features, along with the extracted semantic topics, produce an aggregated model that outputs interpretable recommendations and useful statistics to support issue assignment. The results of our evaluation indicate that our system can be effective, leaving room for future research.}
}

Athanasios Michailoudis, Themistoklis Diamantopoulos and Andreas Symeonidis
"Towards Readability-aware Recommendations of Source Code Snippets"
Proceedings of the 18th International Conference on Software Technologies (ICSOFT 2023), pp. 688-695, 2023 Jul

Nowadays developers search online for reusable solutions to their problems in the form of source code snippets. As this paradigm can greatly reduce the time and effort required for software development, several systems have been proposed to automate the process of finding reusable snippets. However, contemporary systems also have certain limitations; several of them do not support queries in natural language and/or they only output API calls, thus limiting their ease of use. Moreover, the retrieved snippets are often not grouped according to the APIs/libraries used, while they are only assessed for their functionality, disregarding their readability. In this work, we design a snippet mining methodology that receives queries in natural language and retrieves snippets, which are assessed not only for their functionality but also for their readability. The snippets are grouped according to their used API calls (libraries), thus enabling the developer to determine which solution is best fitted for his/her own source code, and making sure that it will be easily integrated and maintained. Upon providing a preliminary evaluation of our methodology on a set of different programming queries, we conclude that it can be effective in providing reusable and readable source code snippets.
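
A rough sketch of the grouping-and-ranking step, with hard-coded snippets and readability scores standing in for the paper's retrieval and readability models.

from collections import defaultdict

# Retrieved snippets with (stand-in) readability scores already attached.
snippets = [
    {'code': 'Files.readAllLines(path)', 'api': 'java.nio', 'readability': 0.9},
    {'code': 'new BufferedReader(new FileReader(f))', 'api': 'java.io', 'readability': 0.6},
    {'code': 'Files.lines(path).count()', 'api': 'java.nio', 'readability': 0.8},
]

groups = defaultdict(list)
for s in snippets:
    groups[s['api']].append(s)      # one group per API/library

for api, group in groups.items():
    best = max(group, key=lambda s: s['readability'])
    print(api, '->', best['code'])  # most readable solution per group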

@conference{ICSOFT2023Snippets,
author={Athanasios Michailoudis and Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Towards Readability-aware Recommendations of Source Code Snippets},
booktitle={Proceedings of the 18th International Conference on Software Technologies (ICSOFT 2023)},
pages={688-695},
year={2023},
month={07},
date={2023-07-10},
url={https://issel.ee.auth.gr/wp-content/uploads/2023/07/ICSOFT2023Snippets.pdf},
doi={https://doi.org/10.5220/0012145500003538},
keywords={Snippet Mining;API Usage Mining;Code Readability},
abstract={Nowadays developers search online for reusable solutions to their problems in the form of source code snippets. As this paradigm can greatly reduce the time and effort required for software development, several systems have been proposed to automate the process of finding reusable snippets. However, contemporary systems also have certain limitations; several of them do not support queries in natural language and/or they only output API calls, thus limiting their ease of use. Moreover, the retrieved snippets are often not grouped according to the APIs/libraries used, while they are only assessed for their functionality, disregarding their readability. In this work, we design a snippet mining methodology that receives queries in natural language and retrieves snippets, which are assessed not only for their functionality but also for their readability. The snippets are grouped according to their used API calls (libraries), thus enabling the developer to determine which solution is best fitted for his/her own source code, and making sure that it will be easily integrated and maintained. Upon providing a preliminary evaluation of our methodology on a set of different programming queries, we conclude that it can be effective in providing reusable and readable source code snippets.}
}

Δημήτριος Φ. Καβελίδης, Εμμανουήλ Τσαρδούλιας, Ελένη Πόπτση, Θωμάς Καρανικιώτης, Μάγδα Τσολάκη and Ανδρέας Συμεωνίδης
"Αναγνώριση Κατηγορίας Νοητικής Έκπτωσης μέσω Χαρακτηριστικών Ομιλίας"
13th Panhellenic Conference on Alzheimer's Disease & 4th Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND, Thessaloniki, Greece, 2023 Feb

@conference{kaveAlz2023,
author={Δημήτριος Φ. Καβελίδης and Εμμανουήλ Τσαρδούλιας and Ελένη Πόπτση and Θωμάς Καρανικιώτης and Μάγδα Τσολάκη and Ανδρέας Συμεωνίδης},
title={Αναγνώριση Κατηγορίας Νοητικής Έκπτωσης μέσω Χαρακτηριστικών Ομιλίας},
booktitle={13th Panhellenic Conference on Alzheimer's Disease & 4th Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND, Thessaloniki, Greece},
year={2023},
month={02},
date={2023-02-09}
}

Themistoklis Diamantopoulos, Dimitrios-Nikitas Nastos and Andreas Symeonidis
"Semantically-enriched Jira Issue Tracking Data"
20th International Conference on Mining Software Repositories (MSR 2023), pp. 218-222, ACM, 2023 May

Current state of practice dictates that software developers host their projects online and employ project management systems to monitor the development of product features, keep track of bugs, and prioritize task assignments. The data stored in these systems, if their semantics are extracted effectively, can be used to answer several interesting questions, such as finding who is the most suitable developer for a task, what the priority of a task should be, or even what is the actual workload of the software team. To support researchers and practitioners that work towards these directions, we have built a system that crawls data from the Jira management system, performs topic modeling on the data to extract useful semantics and stores them in a practical database schema. We have used our system to retrieve and analyze 656 projects of the Apache Software Foundation, comprising data from more than a million Jira issues.
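
A minimal sketch of such a schema in SQLite, pairing issues with their extracted topic distributions; the table and column names here are assumptions, not the system's actual schema.

import sqlite3

db = sqlite3.connect('jira.db')
db.executescript('''
CREATE TABLE IF NOT EXISTS issues (
    id TEXT PRIMARY KEY, project TEXT, title TEXT, assignee TEXT);
CREATE TABLE IF NOT EXISTS issue_topics (
    issue_id TEXT REFERENCES issues(id), topic INTEGER, weight REAL);
''')
db.execute("INSERT OR REPLACE INTO issues VALUES "
           "('HADOOP-1', 'HADOOP', 'NPE in scheduler', 'alice')")
db.execute("INSERT INTO issue_topics VALUES ('HADOOP-1', 3, 0.72)")
db.commit()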

@conference{MSR2023,
author={Themistoklis Diamantopoulos and Dimitrios-Nikitas Nastos and Andreas Symeonidis},
title={Semantically-enriched Jira Issue Tracking Data},
booktitle={20th International Conference on Mining Software Repositories (MSR 2023)},
pages={218-222},
publisher={ACM},
year={2023},
key={MSR2023},
month={05},
date={2023-05-15},
url={https://issel.ee.auth.gr/wp-content/uploads/2023/04/MSR2023JiraIssuesDataset.pdf},
doi={https://doi.org/10.1109/MSR59073.2023.00039},
keywords={mining software repositories;Task Management;Jira Issues;Topic Modeling;BERT},
abstract={Current state of practice dictates that software developers host their projects online and employ project management systems to monitor the development of product features, keep track of bugs, and prioritize task assignments. The data stored in these systems, if their semantics are extracted effectively, can be used to answer several interesting questions, such as finding who is the most suitable developer for a task, what the priority of a task should be, or even what is the actual workload of the software team. To support researchers and practitioners that work towards these directions, we have built a system that crawls data from the Jira management system, performs topic modeling on the data to extract useful semantics and stores them in a practical database schema. We have used our system to retrieve and analyze 656 projects of the Apache Software Foundation, comprising data from more than a million Jira issues.}
}

Ελένη Πόπτση, Δέσποινα Μωραΐτου, Εμμανουήλ Τσαρδούλιας, Ανδρέας Συμεωνίδης and Μάγδα Τσολάκη
"Υποκειμενική νοητική εξασθένιση: Φυσικό επακόλουθο του νοητικά υγιούς γήρατος ή προ-στάδιο της ήπιας νοητικής διαταραχής; Διερεύνηση της διακριτικής ικανότητας της αναθεωρημένης έκδοσης της συστοιχίας R4Alz-R"
13th Panhellenic Conference on Alzheimer's Disease & 4th Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND, Thessaloniki, Greece, 2023 Feb

@conference{poptsiAlz2023,
author={Ελένη Πόπτση and Δέσποινα Μωραΐτου and Εμμανουήλ Τσαρδούλιας and Ανδρέας Συμεωνίδης and Μάγδα Τσολάκη},
title={Υποκειμενική νοητική εξασθένιση: Φυσικό επακόλουθο του νοητικά υγιούς γήρατος ή προ-στάδιο της ήπιας νοητικής διαταραχής; Διερεύνηση της διακριτικής ικανότητας της αναθεωρημένης έκδοσης της συστοιχίας R4Alz-R},
booktitle={13th Panhellenic Conference on Alzheimer's Disease & 4th Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND, Thessaloniki, Greece},
year={2023},
month={02},
date={2023-02-09}
}

2023

Book Chapters

Thomas Karanikiotis and Andreas L. Symeonidis
"Towards Extracting Reusable and Maintainable Code Snippets"
In: Fill, Hans-Georg, van Sinderen, Marten and Maciaszek, Leszek A. (eds), Communications in Computer and Information Science, vol. 1859, pp. 187-206, Springer International Publishing, Cham, 2023 Jul

Given the wide adoption of the agile software development paradigm, where efficient collaboration as well as effective maintenance are of utmost importance, and the (re)use of software residing in code hosting platforms, the need to produce qualitative code is evident. A condition for acceptable software reusability and maintainability is the use of idiomatic code, based on syntactic fragments that recur frequently across software projects and are characterized by high quality. In this work, we propose a methodology that can harness data from the most popular GitHub repositories in order to automatically identify reusable and maintainable code idioms, by grouping code blocks that have similar structural and semantic information. We also apply the same methodology on a single-project level, in an attempt to identify frequently recurring blocks of code across the files of a team. Preliminary evaluation of our methodology indicates that our approach can identify commonly used, reusable and maintainable code idioms and code blocks that can be effectively given as actionable recommendations to the developers.
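
The grouping step can be sketched as follows, with a token-level vectoriser and k-means standing in for the paper's structural and semantic representations.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

blocks = ['for (int i = 0; i < n; i++) { sum += a[i]; }',
          'for (int j = 0; j < m; j++) { total += b[j]; }',
          'try { fis.close(); } catch (IOException e) { }',
          'try { out.close(); } catch (IOException e) { }']

X = TfidfVectorizer(token_pattern=r'\w+').fit_transform(blocks)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # blocks sharing a label form a candidate idiom group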

@inbook{icsoft2022karanikiotisbook,
author={Thomas Karanikiotis and Andreas L. Symeonidis},
title={Towards Extracting Reusable and Maintainable Code Snippets},
editor={Fill, Hans-Georg and van Sinderen, Marten and Maciaszek, Leszek A.},
volume={1859},
pages={187-206},
publisher={Springer International Publishing},
series={Communications in Computer and Information Science},
address={Cham},
year={2023},
month={07},
date={2023-07-19},
url={https://doi.org/10.1007/978-3-031-37231-5_9},
doi={https://doi.org/10.1007/978-3-031-37231-5_9},
isbn={978-3-031-37231-5},
keywords={Software engineering;Code Idioms;Syntactic Fragment;Software Reusability;Software Maintainability;Software repositories},
abstract={Given the wide adoption of the agile software development paradigm, where efficient collaboration as well as effective maintenance are of utmost importance, and the (re)use of software residing in code hosting platforms, the need to produce qualitative code is evident. A condition for acceptable software reusability and maintainability is the use of idiomatic code, based on syntactic fragments that recur frequently across software projects and are characterized by high quality. In this work, we propose a methodology that can harness data from the most popular GitHub repositories in order to automatically identify reusable and maintainable code idioms, by grouping code blocks that have similar structural and semantic information. We also apply the same methodology on a single-project level, in an attempt to identify frequently recurring blocks of code across the files of a team. Preliminary evaluation of our methodology indicates that our approach can identify commonly used, reusable and maintainable code idioms and code blocks that can be effectively given as actionable recommendations to the developers.}
}

2022

Journal Articles

Konstantinos Panayiotou, Emmanouil Tsardoulias and Andreas L. Symeonidis
"Commlib: An easy-to-use communication library for Cyber–Physical Systems"
SoftwareX, 2022 Jul

Communication and data exchange between objects is a fundamental aspect of Cyber–Physical Systems. Due to the highly distributed nature of the domain, physical and virtual objects rely on the Sense-Think-Act-Communicate model in order to provide remote interfaces for sending and receiving sensor data and actuation commands and for interconnecting processing artifacts with sensing and actuation endpoints. In order to build such interfaces, thing/object specifications must be taken into account in order to write/adapt/use their drivers and ensure appropriate connectivity; this approach is, at least cumbersome and requires hardware engineering expertise. In this paper we present Commlib, a Python library that abstracts low-level protocol-specific properties and specifications and provides a high-level API for creating and managing communication interfaces of distributed nodes over asynchronous message-driven and event-driven communication middleware, such as MQTT, AMQP, Kafka and Redis brokers. Our approach follows the Component-Port-Connector paradigm to model interconnection and intercommunication of distributed nodes via input and output ports from where messages are transferred over open connections between nodes. Commlib is easy-to-use and enables rapid development of Cyber–Physical Systems and applications, allowing users to focus on the envisioned functionality rather than the obvious connectivity/compatibility issues residing in such systems.
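
To ground the point, the sketch below shows the kind of broker-specific boilerplate (raw paho-mqtt against an assumed local broker) that such a library abstracts behind a common Component-Port-Connector API; for Commlib's own calls, consult the library's documentation.

import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect('localhost', 1883)   # assumed broker on the edge device
client.loop_start()

for i in range(3):
    # Publish a sensor reading on an output "port" (an MQTT topic here).
    reading = {'range_m': 0.42 + 0.01 * i, 'ts': time.time()}
    client.publish('robot.sensors.sonar_front', json.dumps(reading))
    time.sleep(1)

client.loop_stop()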

@article{commlib,
author={Konstantinos Panayiotou and Emmanouil Tsardoulias and Andreas L. Symeonidis},
title={Commlib: An easy-to-use communication library for Cyber–Physical Systems},
journal={SoftwareX},
year={2022},
month={07},
date={2022-07-01},
url={https://www.sciencedirect.com/science/article/pii/S2352711022001091},
doi={https://doi.org/10.1016/j.softx.2022.101180},
keywords={Cyber–Physical Systems;Internet-of-things;Distributed systems;Python;Rapid development;Communication middleware},
abstract={Communication and data exchange between objects is a fundamental aspect of Cyber–Physical Systems. Due to the highly distributed nature of the domain, physical and virtual objects rely on the Sense-Think-Act-Communicate model in order to provide remote interfaces for sending and receiving sensor data and actuation commands and for interconnecting processing artifacts with sensing and actuation endpoints. In order to build such interfaces, thing/object specifications must be taken into account in order to write/adapt/use their drivers and ensure appropriate connectivity; this approach is, at least cumbersome and requires hardware engineering expertise. In this paper we present Commlib, a Python library that abstracts low-level protocol-specific properties and specifications and provides a high-level API for creating and managing communication interfaces of distributed nodes over asynchronous message-driven and event-driven communication middleware, such as MQTT, AMQP, Kafka and Redis brokers. Our approach follows the Component-Port-Connector paradigm to model interconnection and intercommunication of distributed nodes via input and output ports from where messages are transferred over open connections between nodes. Commlib is easy-to-use and enables rapid development of Cyber–Physical Systems and applications, allowing users to focus on the envisioned functionality rather than the obvious connectivity/compatibility issues residing in such systems.}
}

Alexandros Filotheou, Anastasios Tzitzis, Emmanouil Tsardoulias, Antonis Dimitriou, Andreas Symeonidis, George Sergiadis and Loukas Petrou
"Passive Global Localisation of Mobile Robot via 2D Fourier-Mellin Invariant Matching"
Journal of Intelligent & Robotic Systems, 26, 2022 Jan

Passive global localisation is defined as locating a robot on a map, under global pose uncertainty, without prescribing motion controls. The majority of current solutions either assume structured environments or require tuning of parameters relevant to establishing correspondences between sensor measurements and segments of the map. This article advocates for a solution that dispenses with both in order to achieve greater portability and universality across disparate static environments. A single 2D panoramic LIght Detection And Ranging (LIDAR) sensor is used as the measurement device, this way reducing computational and investment costs. The proposed method disperses pose hypotheses on the map of the robot’s environment and then captures virtual scans from each of them. Subsequently, each virtual scan is matched against the one derived from the physical sensor. Angular alignment is performed via 2D Fourier-Mellin Invariant (FMI) matching; positional alignment is performed via feedback of the position estimation error. In order to deduce the robot’s pose the method sifts through hypotheses by using measures extracted from FMI. Simulations and experiments illustrate the efficacy of the proposed global localisation solution in realistic surroundings and scenarios. In addition, the proposed method is pitted against the most effective Iterative Closest Point (ICP) variant under the same task, and three conclusions are drawn. The first is that the proposed method is effective in both structured and unstructured environments. The second is that it concludes to fewer false positives. The third is that the two methods are largely equivalent in terms of pose error.
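
The angular-alignment step can be sketched with off-the-shelf tools: a rotation becomes a shift along the angle axis of the log-polar Fourier magnitude, recoverable by phase correlation. The synthetic image and library choices are stand-ins, not the authors' implementation.

import numpy as np
from scipy.ndimage import rotate
from skimage.transform import warp_polar

# Synthetic structured "map" and a copy rotated by 30 degrees.
img = np.zeros((128, 128))
img[40:60, 30:90] = 1.0
img[70:110, 50:60] = 1.0
rot = rotate(img, angle=30, reshape=False)

def polar_spectrum(image):
    mag = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    return warp_polar(mag, radius=60)        # rows are angle bins (360 total)

a, b = polar_spectrum(img), polar_spectrum(rot)

# Phase correlation: the peak row is the angular shift between the spectra.
corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
row = np.unravel_index(np.abs(corr).argmax(), corr.shape)[0]
angle = row if row <= 180 else row - 360
print('estimated rotation:', angle, 'deg (up to sign and 180-deg symmetry)')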

@article{filotheou2022Fourier,
author={Alexandros Filotheou and Anastasios Tzitzis and Emmanouil Tsardoulias and Antonis Dimitriou and Andreas Symeonidis and George Sergiadis and Loukas Petrou},
title={Passive Global Localisation of Mobile Robot via 2D Fourier-Mellin Invariant Matching},
journal={Journal of Intelligent & Robotic Systems},
volume={26},
year={2022},
month={01},
date={2022-01-22},
url={https://link.springer.com/article/10.1007/s10846-021-01535-7},
doi={https://doi.org/10.1007/s10846-021-01535-7},
abstract={Passive global localisation is defined as locating a robot on a map, under global pose uncertainty, without prescribing motion controls. The majority of current solutions either assume structured environments or require tuning of parameters relevant to establishing correspondences between sensor measurements and segments of the map. This article advocates for a solution that dispenses with both in order to achieve greater portability and universality across disparate static environments. A single 2D panoramic LIght Detection And Ranging (LIDAR) sensor is used as the measurement device, this way reducing computational and investment costs. The proposed method disperses pose hypotheses on the map of the robot’s environment and then captures virtual scans from each of them. Subsequently, each virtual scan is matched against the one derived from the physical sensor. Angular alignment is performed via 2D Fourier-Mellin Invariant (FMI) matching; positional alignment is performed via feedback of the position estimation error. In order to deduce the robot’s pose the method sifts through hypotheses by using measures extracted from FMI. Simulations and experiments illustrate the efficacy of the proposed global localisation solution in realistic surroundings and scenarios. In addition, the proposed method is pitted against the most effective Iterative Closest Point (ICP) variant under the same task, and three conclusions are drawn. The first is that the proposed method is effective in both structured and unstructured environments. The second is that it concludes to fewer false positives. The third is that the two methods are largely equivalent in terms of pose error.}
}

Panagiotis Antoniadis, Emmanouil Tsardoulias and Andreas Symeonidis
"A mechanism for personalized Automatic Speech Recognition for less frequently spoken languages: the Greek case"
Multimedia Tools and Applications, 2022 May

Automatic Speech Recognition (ASR) has become increasingly popular since it significantly simplifies human-computer interaction, providing a more intuitive way of communication. Building an accurate, general-purpose ASR system is a challenging task that requires a lot of data and computing power. Especially for languages not widely spoken, such as Greek, the lack of adequately large speech datasets leads to the development of ASR systems adapted to a restricted corpus and/or for specific topics. When used in specific domains, these systems can be both accurate and fast, without the need for large datasets and extended training. An interesting application domain of such narrow-scope ASR systems is the development of personalized models that can be used for dictation. In the current work we present three personalization-via-adaptation modules, that can be integrated into any ASR/dictation system and increase its accuracy. The adaptation can be applied both on the language model (based on past text samples of the user) as well as on the acoustic model (using a set of user’s narrations). To provide more precise recommendations, clustering algorithms are applied and topic-specific language models are created. Also, heterogeneous adaptation methods are combined to provide recommendations to the user. Evaluation performed on a self-created database containing 746 corpora included in messaging applications and e-mails from the same user, demonstrates that the proposed approach can achieve better results than the vanilla existing Greek models.
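
The language-model side of such adaptation can be illustrated with a toy unigram interpolation; real systems mix full n-gram or neural models, but the principle is the same, and the weight and texts below are invented.

from collections import Counter

def unigram(text):
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

general = unigram('the meeting is at noon the report is ready')
user = unigram('the robot demo is at the lab')     # user's past texts

lam = 0.3   # adaptation weight, tuned on held-out user data
vocab = set(general) | set(user)
adapted = {w: (1 - lam) * general.get(w, 0.0) + lam * user.get(w, 0.0)
           for w in vocab}
print(adapted['robot'])   # non-zero only thanks to the user model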

@article{gsoc19antoniadis,
author={Panagiotis Antoniadis and Emmanouil Tsardoulias and Andreas Symeonidis},
title={A mechanism for personalized Automatic Speech Recognition for less frequently spoken languages: the Greek case},
journal={Multimedia Tools and Applications},
year={2022},
month={05},
date={2022-05-12},
doi={https://doi.org/10.1007/s11042-022-12953-6},
keywords={Clustering;personalization;Automatic Speech recognition;Dictation},
abstract={Automatic Speech Recognition (ASR) has become increasingly popular since it significantly simplifies human-computer interaction, providing a more intuitive way of communication. Building an accurate, general-purpose ASR system is a challenging task that requires a lot of data and computing power. Especially for languages not widely spoken, such as Greek, the lack of adequately large speech datasets leads to the development of ASR systems adapted to a restricted corpus and/or for specific topics. When used in specific domains, these systems can be both accurate and fast, without the need for large datasets and extended training. An interesting application domain of such narrow-scope ASR systems is the development of personalized models that can be used for dictation. In the current work we present three personalization-via-adaptation modules, that can be integrated into any ASR/dictation system and increase its accuracy. The adaptation can be applied both on the language model (based on past text samples of the user) as well as on the acoustic model (using a set of user’s narrations). To provide more precise recommendations, clustering algorithms are applied and topic-specific language models are created. Also, heterogeneous adaptation methods are combined to provide recommendations to the user. Evaluation performed on a self-created database containing 746 corpora included in messaging applications and e-mails from the same user, demonstrates that the proposed approach can achieve better results than the vanilla existing Greek models.}
}

Nikolaos Malamas, Konstantinos Papangelou and Andreas L. Symeonidis
"Upon Improving the Performance of Localized Healthcare Virtual Assistants"
Healthcare, 10, (1), 2022 Jan

Virtual assistants are becoming popular in a variety of domains, responsible for automating repetitive tasks or allowing users to seamlessly access useful information. With the advances in Machine Learning and Natural Language Processing, there has been an increasing interest in applying such assistants in new areas and with new capabilities. In particular, their application in e-healthcare is becoming attractive and is driven by the need to access medically-related knowledge, as well as providing first-level assistance in an efficient manner. In such types of virtual assistants, localization is of utmost importance, since the general population (especially the aging population) is not familiar with the needed “healthcare vocabulary” to communicate facts properly; and state-of-practice proves relatively poor in performance when it comes to specialized virtual assistants for less frequently spoken languages. In this context, we present a Greek ML-based virtual assistant specifically designed to address some commonly occurring tasks in the healthcare domain, such as doctor’s appointments or distress (panic situations) management. We build on top of an existing open-source framework, discuss the necessary modifications needed to address the language-specific characteristics and evaluate various combinations of word embeddings and machine learning models to enhance the assistant’s behaviour. Results show that we are able to build an efficient Greek-speaking virtual assistant to support e-healthcare, while the NLP pipeline proposed can be applied in other (less frequently spoken) languages, without loss of generality.
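
The evaluation of embedding/classifier combinations can be sketched as a small grid search; the vectorisers and six utterances below are stand-ins for the paper's Greek word embeddings and dataset.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ['book a doctor appointment', 'cancel my visit', 'reschedule my checkup',
         'i feel panic', 'help me calm down', 'i am having an anxiety attack']
intents = ['appointment'] * 3 + ['distress'] * 3

for vec in (CountVectorizer(), TfidfVectorizer()):
    for clf in (LogisticRegression(max_iter=500), LinearSVC()):
        score = cross_val_score(make_pipeline(vec, clf), texts, intents,
                                cv=3).mean()
        print(type(vec).__name__, type(clf).__name__, round(score, 2))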

@article{malamas-healthcare,
author={Nikolaos Malamas and Konstantinos Papangelou and Andreas L. Symeonidis},
title={Upon Improving the Performance of Localized Healthcare Virtual Assistants},
journal={Healthcare},
volume={10},
number={1},
year={2022},
month={01},
date={2022-01-04},
url={https://www.mdpi.com/2227-9032/10/1/99},
doi={https://doi.org/10.3390/healthcare10010099},
issn={2227-9032},
keywords={chatbot; virtual assistant; Rasa; ehealthcare},
abstract={Virtual assistants are becoming popular in a variety of domains, responsible for automating repetitive tasks or allowing users to seamlessly access useful information. With the advances in Machine Learning and Natural Language Processing, there has been an increasing interest in applying such assistants in new areas and with new capabilities. In particular, their application in e-healthcare is becoming attractive and is driven by the need to access medically-related knowledge, as well as providing first-level assistance in an efficient manner. In such types of virtual assistants, localization is of utmost importance, since the general population (especially the aging population) is not familiar with the needed “healthcare vocabulary” to communicate facts properly; and state-of-practice proves relatively poor in performance when it comes to specialized virtual assistants for less frequently spoken languages. In this context, we present a Greek ML-based virtual assistant specifically designed to address some commonly occurring tasks in the healthcare domain, such as doctor’s appointments or distress (panic situations) management. We build on top of an existing open-source framework, discuss the necessary modifications needed to address the language-specific characteristics and evaluate various combinations of word embeddings and machine learning models to enhance the assistant’s behaviour. Results show that we are able to build an efficient Greek-speaking virtual assistant to support e-healthcare, while the NLP pipeline proposed can be applied in other (less frequently spoken) languages, without loss of generality.}
}

Konstantinos Panayiotou, Emmanouil Tsardoulias, Christoforos Zolotas, Andreas L. Symeonidis and Loukas Petrou
"A Framework for Rapid Robotic Application Development for Citizen Developers"
Software, 1, (1), pp. 53-79, 2022 Mar

It is common knowledge among computer scientists and software engineers that ”building robotics systems is hard”: it includes applied and specialized knowledge from various scientific fields, such as mechanical, electrical and computer engineering, computer science and physics, among others. To expedite the development of robots, a significant number of robotics-oriented middleware solutions and frameworks exist that provide high-level functionality for the implementation of the in-robot software stack, such as ready-to-use algorithms and sensor/actuator drivers. While the aforementioned focus is on the implementation of the core functionalities and control layer of robots, these specialized tools still require extensive training, while not providing the envisaged freedom in design choices. In this paper, we discuss most of the robotics software development methodologies and frameworks, analyze the way robotics applications are built and propose a new resource-oriented architecture towards the rapid development of robot-agnostic applications. The contribution of our work is a methodology and a model-based middleware that can be used to provide remote robot-agnostic interfaces. Such interfaces may support robotics application development from citizen developers by reducing hand-coding and technical knowledge requirements. This way, non-robotics experts will be able to integrate and use robotics in a wide range of application domains, such as healthcare, home assistance, home automation and cyber–physical systems in general.

@article{r4aarchitecture,
author={Konstantinos Panayiotou and Emmanouil Tsardoulias and Christoforos Zolotas and Andreas L. Symeonidis and Loukas Petrou},
title={A Framework for Rapid Robotic Application Development for Citizen Developers},
journal={Software},
volume={1},
number={1},
pages={53-79},
year={2022},
month={03},
date={2022-03-03},
url={https://www.mdpi.com/2674-113X/1/1/4/htm},
doi={https://doi.org/10.3390/software1010004},
abstract={It is common knowledge among computer scientists and software engineers that ”building robotics systems is hard”: it includes applied and specialized knowledge from various scientific fields, such as mechanical, electrical and computer engineering, computer science and physics, among others. To expedite the development of robots, a significant number of robotics-oriented middleware solutions and frameworks exist that provide high-level functionality for the implementation of the in-robot software stack, such as ready-to-use algorithms and sensor/actuator drivers. While the aforementioned focus is on the implementation of the core functionalities and control layer of robots, these specialized tools still require extensive training, while not providing the envisaged freedom in design choices. In this paper, we discuss most of the robotics software development methodologies and frameworks, analyze the way robotics applications are built and propose a new resource-oriented architecture towards the rapid development of robot-agnostic applications. The contribution of our work is a methodology and a model-based middleware that can be used to provide remote robot-agnostic interfaces. Such interfaces may support robotics application development from citizen developers by reducing hand-coding and technical knowledge requirements. This way, non-robotics experts will be able to integrate and use robotics in a wide range of application domains, such as healthcare, home assistance, home automation and cyber–physical systems in general.}
}

Konstantinos Strantzalis, Fotios Gioulekas, Panagiotis Katsaros and Andreas L. Symeonidis
"Operational State Recognition of a DC Motor Using Edge Artificial Intelligence"
Sensors, 22, (24), 2022 Dec

@article{s22249658,
author={Konstantinos Strantzalis and Fotios Gioulekas and Panagiotis Katsaros and Andreas L. Symeonidis},
title={Operational State Recognition of a DC Motor Using Edge Artificial Intelligence},
journal={Sensors},
volume={22},
number={24},
year={2022},
month={12},
date={2022-12-09},
url={https://www.mdpi.com/1424-8220/22/24/9658},
doi={https://doi.org/10.3390/s22249658}
}

2022

Conference Papers

Dimitrios Kavelidis Frantzis, Emmanouil Tsardoulias, Thomas Karanikiotis, Eleni Poptsi, Magda Tsolaki and Andreas Symeonidis
"Αναγνώριση Κατηγορίας Νοητικής Έκπτωσης μέσω Χαρακτηριστικών Ομιλίας"
National Conference ACOUSTICS 2022, 2022 Oct

In this study, the validity of a Machine Learning multiclass classification process is examined, as to classify a speaker in a cognitive decline stage, aiming to develop a simple screening test. The target classes comprise Cognitively Healthy controls, Subjective Cognitive Decline and Early & Late Mild Cognitive Impairment. Speech data was collected from structured interviews on 84 people, split in stages of increasing required levels of cognitive difficulty. Audio features were extracted based on Silence, Prosody and Zero-Crossings, as well as on the feature vectors’ differences between stages, and were evaluated with the Random Forest, Extra-Trees and Support Vector Machines classifiers. The best classification was achieved using models trained with stage differences features (on SVM), resulting in a mean accuracy of 80.99±3.29%.
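
One of the described feature families can be sketched directly: zero-crossing rate per frame feeding an SVM. The audio and labels below are synthetic stand-ins; the study also uses silence and prosody features and per-stage differences.

import numpy as np
from sklearn.svm import SVC

def zcr(frame):
    # Fraction of sign changes per sample within the frame.
    return np.mean(np.abs(np.diff(np.sign(frame)))) / 2

def features(signal, frame_len=400):
    usable = len(signal) // frame_len * frame_len
    rates = np.array([zcr(f) for f in signal[:usable].reshape(-1, frame_len)])
    return [rates.mean(), rates.std()]

rng = np.random.default_rng(0)
X = [features(rng.normal(scale=s, size=16000)) for s in rng.uniform(0.5, 2, 40)]
y = rng.integers(0, 4, size=40)   # 4 decline stages (random stand-ins)

clf = SVC().fit(X, y)
print(clf.predict([X[0]]))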

@conference{2022kavAlzSpeech,
author={Dimitrios Kavelidis Frantzis and Emmanouil Tsardoulias and Thomas Karanikiotis and Eleni Poptsi and Magda Tsolaki and Andreas Symeonidis},
title={Αναγνώριση Κατηγορίας Νοητικής Έκπτωσης μέσω Χαρακτηριστικών Ομιλίας},
booktitle={National Conference ACOUSTICS 2022},
year={2022},
month={10},
date={2022-10-14},
url={https://conferences.helina.gr/2022/en/},
abstract={In this study, the validity of a Machine Learning multiclass classification process is examined, as to classify a speaker in a cognitive decline stage, aiming to develop a simple screening test. The target classes comprise Cognitively Healthy controls, Subjective Cognitive Decline and Early & Late Mild Cognitive Impairment. Speech data was collected from structured interviews on 84 people, split in stages of increasing required levels of cognitive difficulty. Audio features were extracted based on Silence, Prosody and Zero-Crossings, as well as on the feature vectors’ differences between stages, and were evaluated with the Random Forest, Extra-Trees and Support Vector Machines classifiers. The best classification was achieved using models trained with stage differences features (on SVM), resulting in a mean accuracy of 80.99±3.29%.}
}

Eleni Poptsi, Despoina Moraitou, Emmanouil Tsardoulias, Andreas Symeonidis and Magda Tsolaki
"Υποκειμενική νοητική εξασθένιση: Κομμάτι της υγιούς γήρανσης ή έναρξη νευροεκφύλισης; Νεότερα δεδομένα της συστοιχίας R4Alz"
8ο Παγκρήτιο Διεπιστημονικό Συνέδριο Νόσου Alzheimer και Συναφών Διαταραχών και 4ο Πανελλήνιο Συνέδριο στην ενεργό και υγιή γήρανση, Σεπτεμβρίος 2022, Εμπορικό και Βιομηχανικό Επιμελητήριο Ηρακλείου, 2022 Sep

@conference{2022Kretepub1,
author={Eleni Poptsi and Despoina Moraitou and Emmanouil Tsardoulias and Andreas Symeonidis and Magda Tsolaki},
title={Υποκειμενική νοητική εξασθένιση: Κομμάτι της υγιούς γήρανσης ή έναρξη νευροεκφύλισης; Νεότερα δεδομένα της συστοιχίας R4Alz},
booktitle={8ο Παγκρήτιο Διεπιστημονικό Συνέδριο Νόσου Alzheimer και Συναφών Διαταραχών και 4ο Πανελλήνιο Συνέδριο στην ενεργό και υγιή γήρανση, Σεπτεμβρίος 2022, Εμπορικό και Βιομηχανικό Επιμελητήριο Ηρακλείου},
year={2022},
month={09},
date={2022-09-11}
}

Evangelos Papathomas, Themistoklis Diamantopoulos and Andreas Symeonidis
"Semantic Code Search in Software Repositories using Neural Machine Translation"
Fundamental Approaches to Software Engineering, pp. 225-244, Springer International Publishing, Cham, 2022 Apr

Nowadays, software development is accelerated through the reuse of code snippets found online in question-answering platforms and software repositories. In order to be efficient, this process requires forming an appropriate query and identifying the most suitable code snippet, which can sometimes be challenging and particularly time-consuming. Over the last years, several code recommendation systems have been developed to offer a solution to this problem. Nevertheless, most of them recommend API calls or sequences instead of reusable code snippets. Furthermore, they do not employ architectures advanced enough to exploit the semantics of natural language and code in order to form the optimal query from the question posed. To overcome these issues, we propose CodeTransformer, a code recommendation system that provides useful, reusable code snippets extracted from open-source GitHub repositories. By employing a neural network architecture that comprises advanced attention mechanisms, our system effectively understands and models natural language queries and code snippets in a joint vector space. Upon evaluating CodeTransformer quantitatively against a similar system and qualitatively using a dataset from Stack Overflow, we conclude that our approach can recommend useful and reusable snippets to developers.

@conference{FASE2022,
author={Evangelos Papathomas and Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Semantic Code Search in Software Repositories using Neural Machine Translation},
booktitle={Fundamental Approaches to Software Engineering},
pages={225-244},
publisher={Springer International Publishing},
address={Cham},
year={2022},
month={04},
date={2022-04-04},
url={https://link.springer.com/content/pdf/10.1007/978-3-030-99429-7_13.pdf},
doi={https://doi.org/10.1007/978-3-030-99429-7_13},
isbn={978-3-030-99428-0},
keywords={semantic analysis;code reuse;neural transformers},
abstract={Nowadays, software development is accelerated through the reuse of code snippets found online in question-answering platforms and software repositories. In order to be efficient, this process requires forming an appropriate query and identifying the most suitable code snippet, which can sometimes be challenging and particularly time-consuming. Over the last years, several code recommendation systems have been developed to offer a solution to this problem. Nevertheless, most of them recommend API calls or sequences instead of reusable code snippets. Furthermore, they do not employ architectures advanced enough to exploit the semantics of natural language and code in order to form the optimal query from the question posed. To overcome these issues, we propose CodeTransformer, a code recommendation system that provides useful, reusable code snippets extracted from open-source GitHub repositories. By employing a neural network architecture that comprises advanced attention mechanisms, our system effectively understands and models natural language queries and code snippets in a joint vector space. Upon evaluating CodeTransformer quantitatively against a similar system and qualitatively using a dataset from Stack Overflow, we conclude that our approach can recommend useful and reusable snippets to developers.}
}
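
A rough illustration of the retrieval step in such a joint vector space, assuming query and snippet embeddings have already been produced by some encoder; random vectors stand in for the paper's transformer outputs, and the ranking is plain cosine similarity.

# Sketch: rank code snippets against a query by cosine similarity in a
# shared embedding space; embeddings here are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
snippet_vecs = rng.normal(size=(1000, 128))  # placeholder snippet embeddings
query_vec = rng.normal(size=128)             # placeholder query embedding

def cosine_rank(query, corpus, top_k=5):
    corpus_n = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    scores = corpus_n @ query_n
    top = np.argsort(scores)[::-1][:top_k]
    return list(zip(top.tolist(), scores[top].tolist()))

for idx, score in cosine_rank(query_vec, snippet_vecs):
    print(f"snippet {idx}: similarity {score:.3f}")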

Andreas Goulas, Nikolaos Malamas and Andreas L. Symeonidis
"A Methodology for Enabling NLP Capabilities on Edge and Low-Resource Devices"
Natural Language Processing and Information Systems, pp. 197--208, Springer International Publishing, Cham, 2022 Jun

Conversational assistants with increasing NLP capabilities are becoming commodity functionality for most new devices. However, the underlying language models responsible for language-related intelligence are typically characterized by a large number of parameters and high demand for memory and resources. This makes them a no-go for edge and low-resource devices, forcing them to be cloud-hosted, hence experiencing delays. To this end, we design a systematic language-agnostic methodology to develop powerful lightweight NLP models using knowledge distillation techniques, this way building models suitable for such low resource devices. We follow the steps of the proposed approach for the Greek language and build the first - to the best of our knowledge - lightweight Greek language model, which we make publicly available. We train and evaluate GloVe word embeddings in Greek and efficiently distill Greek-BERT into various BiLSTM models, without considerable loss in performance. Experiments indicate that knowledge distillation and data augmentation can improve the performance of simple BiLSTM models for two NLP tasks in Modern Greek, i.e., Topic Classification and Natural Language Inference, making them suitable candidates for low-resource devices.

@inproceedings{goulas-et-al,
author={Andreas Goulas and Nikolaos Malamas and Andreas L. Symeonidis},
title={A Methodology for Enabling NLP Capabilities on Edge and Low-Resource Devices},
booktitle={Natural Language Processing and Information Systems},
pages={197--208},
publisher={Springer International Publishing},
address={Cham},
year={2022},
month={06},
date={2022-06-13},
url={https://link.springer.com/chapter/10.1007/978-3-031-08473-7_18},
doi={https://doi.org/10.1007/978-3-031-08473-7_18},
isbn={978-3-031-08473-7},
keywords={Natural language processing;Knowledge distillation;Word embeddings;Lightweight models},
abstract={Conversational assistants with increasing NLP capabilities are becoming commodity functionality for most new devices. However, the underlying language models responsible for language-related intelligence are typically characterized by a large number of parameters and high demand for memory and resources. This makes them a no-go for edge and low-resource devices, forcing them to be cloud-hosted, hence experiencing delays. To this end, we design a systematic language-agnostic methodology to develop powerful lightweight NLP models using knowledge distillation techniques, this way building models suitable for such low resource devices. We follow the steps of the proposed approach for the Greek language and build the first - to the best of our knowledge - lightweight Greek language model, which we make publicly available. We train and evaluate GloVe word embeddings in Greek and efficiently distill Greek-BERT into various BiLSTM models, without considerable loss in performance. Experiments indicate that knowledge distillation and data augmentation can improve the performance of simple BiLSTM models for two NLP tasks in Modern Greek, i.e., Topic Classification and Natural Language Inference, making them suitable candidates for low-resource devices.}
}
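
The distillation objective behind such a methodology is typically the standard soft-label loss, sketched below in PyTorch. This is a generic Hinton-style formulation rather than the authors' training code; the random tensors stand in for BiLSTM student and BERT teacher logits, and the temperature and mixing weight are illustrative.

# Sketch: knowledge distillation loss combining hard-label cross-entropy
# with a temperature-softened KL term against the teacher's logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

student = torch.randn(8, 3, requires_grad=True)  # placeholder student logits
teacher = torch.randn(8, 3)                      # placeholder teacher logits
labels = torch.randint(0, 3, (8,))
print(distillation_loss(student, teacher, labels).item())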

Argyrios Papoudakis, Thomas Karanikiotis and Andreas Symeonidis
"A Mechanism for Automatically Extracting Reusable and Maintainable Code Idioms from Software Repositories"
Proceedings of the 17th International Conference on Software Technologies - ICSOFT, pp. 79-90, SciTePress, 2022 Jul

The importance of correct, qualitative and evolvable code is non-negotiable when considering the maintainability potential of software. At the same time, the deluge of software residing in code hosting platforms has led to a new component-based software development paradigm, where reuse of suitable software components/snippets is important for software projects to be implemented as fast as possible. However, ensuring acceptable quality that will guarantee basic maintainability is also required. A condition for acceptable software reusability and maintainability is the use of idiomatic code, based on syntactic fragments that recur frequently across software projects and are characterized by high quality. In this work, we present a mechanism that employs the top repositories from GitHub in order to automatically identify reusable and maintainable code idioms. By extracting the Abstract Syntax Tree representation of each project we group code snippets that appear to have similar structural and semantic information. Preliminary evaluation of our methodology indicates that our approach can identify commonly used, reusable and maintainable code idioms that can be effectively given as actionable recommendations to the developers.

@conference{icsoft22karanikiotis,
author={Argyrios Papoudakis and Thomas Karanikiotis and Andreas Symeonidis},
title={A Mechanism for Automatically Extracting Reusable and Maintainable Code Idioms from Software Repositories},
booktitle={Proceedings of the 17th International Conference on Software Technologies - ICSOFT},
pages={79-90},
publisher={SciTePress},
organization={INSTICC},
year={2022},
month={07},
date={2022-07-13},
url={https://www.researchgate.net/publication/362010246_A_Mechanism_for_Automatically_Extracting_Reusable_and_Maintainable_Code_Idioms_from_Software_Repositories},
doi={https://doi.org/10.5220/0011279300003266},
issn={2184-2833},
isbn={978-989-758-588-3},
keywords={Software engineering;Code Idioms;Syntactic Fragment;Software Reusability;Software Maintainability},
abstract={The importance of correct, qualitative and evolvable code is non-negotiable when considering the maintainability potential of software. At the same time, the deluge of software residing in code hosting platforms has led to a new component-based software development paradigm, where reuse of suitable software components/snippets is important for software projects to be implemented as fast as possible. However, ensuring acceptable quality that will guarantee basic maintainability is also required. A condition for acceptable software reusability and maintainability is the use of idiomatic code, based on syntactic fragments that recur frequently across software projects and are characterized by high quality. In this work, we present a mechanism that employs the top repositories from GitHub in order to automatically identify reusable and maintainable code idioms. By extracting the Abstract Syntax Tree representation of each project we group code snippets that appear to have similar structural and semantic information. Preliminary evaluation of our methodology indicates that our approach can identify commonly used, reusable and maintainable code idioms that can be effectively given as actionable recommendations to the developers.}
}
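
The grouping idea can be sketched with Python's ast module, under a strong simplification: snippets are grouped only when their AST node-type sequences match exactly, whereas the paper clusters structurally and semantically similar fragments; the sample snippets are invented.

# Sketch: group function snippets by the node-type sequence of their
# ASTs; exact structural matches stand in for similarity clustering.
import ast
from collections import defaultdict

def structure_key(src):
    # Serialize the AST as a tuple of node-type names (identifiers ignored).
    return tuple(type(node).__name__ for node in ast.walk(ast.parse(src)))

snippets = [
    "def add(a, b):\n    return a + b",
    "def total(x, y):\n    return x + y",
    "def greet(name):\n    print('hi', name)",
]

groups = defaultdict(list)
for code in snippets:
    groups[structure_key(code)].append(code)

for members in groups.values():
    if len(members) > 1:  # recurring structure -> code idiom candidate
        print(f"{len(members)} snippets share a structure, e.g.:\n{members[0]}")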

Georgios Kalantzis, Gerasimos Papakostas, Thomas Karanikiotis, Michail Papamichail and Andreas Symeonidis
"A Heuristic Approach towards Continuous Implicit Authentication"
2022 IEEE International Joint Conference on Biometrics (IJCB), pp. 1-7, IEEE, 2022 Oct

Smartphones nowadays handle large amounts of sensitive user information, since users exchange undisclosed information on an everyday basis. This generates the need for more effective authentication mechanisms, deviating from the traditional ones. In this direction, many research approaches are targeted towards continuous implicit authentication, on the basis of modelling the constant interaction of the user with the device. These approaches yield promising results; however, certain improvements can be made by exploiting the sequential order of the predictions and the known performance metrics. In this work, we propose a heuristic algorithm, which, given a series of predictions from any continuous implicit authentication model, can exploit the sequential order in order to fix any false predictions and improve the accuracy of the smartphone security system. Preliminary evaluation on several axes indicates that our approach can effectively improve any CIA model and achieve significantly better results.

@conference{ijcb2022karanikiotis,
author={Georgios Kalantzis and Gerasimos Papakostas and Thomas Karanikiotis and Michail Papamichail and Andreas Symeonidis},
title={A Heuristic Approach towards Continuous Implicit Authentication},
booktitle={2022 IEEE International Joint Conference on Biometrics (IJCB)},
pages={1-7},
publisher={IEEE},
year={2022},
month={10},
date={2022-10-01},
url={https://ieeexplore.ieee.org/abstract/document/10007940},
doi={https://doi.org/10.1109/IJCB54206.2022.10007940},
issn={2474-9699},
isbn={978-1-6654-6394-2},
abstract={Smartphones nowadays handle large amounts of sensitive user information, since users exchange undisclosed information on an everyday basis. This generates the need for more effective authentication mechanisms, deviating from the traditional ones. In this direction, many research approaches are targeted towards continuous implicit authentication, on the basis of modelling the constant interaction of the user with the device. These approaches yield promising results; however, certain improvements can be made by exploiting the sequential order of the predictions and the known performance metrics. In this work, we propose a heuristic algorithm, which, given a series of predictions from any continuous implicit authentication model, can exploit the sequential order in order to fix any false predictions and improve the accuracy of the smartphone security system. Preliminary evaluation on several axes indicates that our approach can effectively improve any CIA model and achieve significantly better results.}
}
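
As a hedged illustration of such a heuristic, the sketch below applies a sliding majority vote to a stream of per-window authentication decisions, so isolated misclassifications are overturned by their neighbours; this is one plausible smoothing rule, not the paper's exact algorithm.

# Sketch: smooth sequential authentication decisions (1 = legitimate
# user, 0 = intruder) with a sliding majority vote.
def smooth(predictions, window=5):
    half = window // 2
    out = []
    for i in range(len(predictions)):
        lo, hi = max(0, i - half), min(len(predictions), i + half + 1)
        votes = predictions[lo:hi]
        out.append(1 if 2 * sum(votes) >= len(votes) else 0)
    return out

raw = [1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0]
print(smooth(raw))  # isolated disagreements are flipped toward the majority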

Eleni Poptsi, Despoina Moraitou, Emmanouil Tsardoulias, Andreas Symeonidis and Magda Tsolaki
"Νευροψυχολογική συστοιχία REMEDES for Alzheimer (R4Alz): Παρουσίαση ενός εργαλείου πρώιμης διάγνωσης των νευροεκφυλιστικών νοσημάτων"
8th Pancretan Interdisciplinary Conference on Alzheimer's Disease and Related Disorders and 4th Panhellenic Conference on Active and Healthy Aging, September 2022, Heraklion Chamber of Commerce and Industry, 2022 Sep

@conference{Kreteconf2_2022,
author={Eleni Poptsi and Despoina Moraitou and Emmanouil Tsardoulias and Andreas Symeonidis and Magda Tsolaki},
title={Νευροψυχολογική συστοιχία REMEDES for Alzheimer (R4Alz): Παρουσίαση ενός εργαλείου πρώιμης διάγνωσης των νευροεκφυλιστικών νοσημάτων},
booktitle={8ο Παγκρήτιο Διεπιστημονικό Συνέδριο Νόσου Alzheimer και Συναφών Διαταραχών και 4ο Πανελλήνιο Συνέδριο στην ενεργό και υγιή γήρανση, Σεπτεμβρίος 2022, Εμπορικό και Βιομηχανικό Επιμελητήριο Ηρακλείου},
year={2022},
month={09},
date={2022-09-11}
}

Emmanouil Tsardoulias, Eleni Poptsi, Dimitrios F. Kavelidis, Thomas Karanikiotis, Magda Tsolaki, Despoina Moraitou and Andreas L. Symeonidis
"Early detection of neurocognitive decline using Cyber Physical Systems and Artificial Intelligence"
9th Technology Forum, Thessaloniki, 2022 Sep

@conference{tf20221,
author={Emmanouil Tsardoulias and Eleni Poptsi and Dimitrios F. Kavelidis and Thomas Karanikiotis and Magda Tsolaki and Despoina Moraitou and Andreas L. Symeonidis},
title={Early detection of neurocognitive decline using Cyber Physical Systems and Artificial Intelligence},
booktitle={9th Technology Forum, Thessaloniki},
year={2022},
month={09},
date={2022-09-22},
url={https://www.dropbox.com/s/emknhbo7cf9xiac/2022-09%20-%20TF_Poster_Alzheimers-Tsardoulias.pdf?dl=0}
}

Theodoros Papafotiou, Efthymia Amarantidou, Efseveia Nestoropoulou and Emmanouil Tsardoulias
"Autonomous Driving Vehicle in 1:10 scaled environment"
9th Technology Forum, Thessaloniki, 2022 Sep

@conference{tf20222,
author={Theodoros Papafotiou and Efthymia Amarantidou and Efseveia Nestoropoulou and Emmanouil Tsardoulias},
title={Autonomous Driving Vehicle in 1:10 scaled environment},
booktitle={9th Technology Forum, Thessaloniki},
year={2022},
month={09},
date={2022-09-11},
url={https://www.dropbox.com/s/d599oix2o41dnnq/2022-09%20-%20TF_Poster_VROOM.pdf?dl=0}
}

Konstantinos Panayiotou, Emmanouil Tsardoulias and Andreas Symeonidis
"Low-code development & verification of Cyber-Physical Systems"
9th Technology Forum, Thessaloniki, 2022 Sep

@conference{tf20223,
author={Konstantinos Panayiotou and Emmanouil Tsardoulias and Andreas Symeonidis},
title={Low-code development & verification of Cyber-Physical Systems},
booktitle={9th Technology Forum, Thessaloniki},
year={2022},
month={09},
date={2022-09-11},
url={https://www.dropbox.com/s/ftqkdjxyyuapffx/2022-09%20-%20TF_Poster_CPS-Panayotou.pdf?dl=0}
}

2022

Inbooks

Thomas Karanikiotis, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"A Personalized Code Formatter: Detection and Fixing"
In: Communications in Computer and Information Science, vol. 1622, Fill, Hans-Georg, van Sinderen, Marten and Maciaszek, Leszek A. (eds.), pp. 169-192, Springer International Publishing, Cham, 2022 Jul

The wide adoption of component-based software development and the (re)use of software residing in code hosting platforms have led to an increased interest shown towards source code readability and comprehensibility. One factor that can undeniably improve readability is the consistent code styling and formatting used across a project. To that end, many code formatting approaches usually define a set of rules, in order to model a commonly accepted formatting. However, this approach is mostly based on the experts’ expertise, is time-consuming and ignores the specific styling and formatting a team selects to use. Thus, it becomes too intrusive and may not be adopted. In this work, we present an automated mechanism that can be trained to identify deviations from the selected formatting style of a given project, given a set of source code files, and provide recommendations towards maintaining a common styling across all files of the project. At first, source code is transformed into small meaningful pieces, called tokens, which are used to train the models of our mechanism, in order to predict the probability of a token being wrongly positioned. Then, a number of possible fixes are examined as replacements of the wrongly positioned token and, based on a scoring function, the most suitable fixes are given as recommendations to the developer. Preliminary evaluation on various axes indicates that our approach can effectively detect formatting deviations from the project’s code styling and provide actionable recommendations to the developer.

@inbook{icsoft2021karanikiotisbook,
author={Thomas Karanikiotis and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={A Personalized Code Formatter: Detection and Fixing},
editor={Fill, Hans-Georg and van Sinderen, Marten and Maciaszek, Leszek A.},
volume={1622},
pages={169-192},
publisher={Springer International Publishing},
series={Communications in Computer and Information Science},
address={Cham},
year={2022},
month={07},
date={2022-07-18},
url={https://doi.org/10.1007/978-3-031-11513-4_8},
doi={https://doi.org/10.1007/978-3-031-11513-4_8},
isbn={978-3-031-11513-4},
keywords={Source Code Formatting;Source Code Readability;LSTM;SVM One-Class;Code styling},
abstract={The wide adoption of component-based software development and the (re)use of software residing in code hosting platforms have led to an increased interest shown towards source code readability and comprehensibility. One factor that can undeniably improve readability is the consistent code styling and formatting used across a project. To that end, many code formatting approaches usually define a set of rules, in order to model a commonly accepted formatting. However, this approach is mostly based on the experts’ expertise, is time-consuming and ignores the specific styling and formatting a team selects to use. Thus, it becomes too intrusive and may not be adopted. In this work, we present an automated mechanism that can be trained to identify deviations from the selected formatting style of a given project, given a set of source code files, and provide recommendations towards maintaining a common styling across all files of the project. At first, source code is transformed into small meaningful pieces, called tokens, which are used to train the models of our mechanism, in order to predict the probability of a token being wrongly positioned. Then, a number of possible fixes are examined as replacements of the wrongly positioned token and, based on a scoring function, the most suitable fixes are given as recommendations to the developer. Preliminary evaluation on various axes indicates that our approach can effectively detect formatting deviations from the project’s code styling and provide actionable recommendations to the developer.}
}
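
A toy version of the detection step can be written with a token-adjacency model: learn which token pairs, and which whitespace gaps between them, occur in a project, and flag unseen combinations as candidate formatting deviations. This stands in, very loosely, for the LSTM and one-class SVM models the chapter actually uses; the sample sources are invented.

# Sketch: flag token adjacencies whose spacing was never seen in the
# project, using a (token, gap, token) frequency model.
import io
import tokenize
from collections import Counter

def spaced_bigrams(src):
    toks = [t for t in tokenize.generate_tokens(io.StringIO(src).readline)
            if t.string.strip()]
    out = []
    for a, b in zip(toks, toks[1:]):
        # Whitespace gap between adjacent tokens on the same line.
        gap = b.start[1] - a.end[1] if a.end[0] == b.start[0] else -1
        out.append((a.string, gap, b.string))
    return out

def train(sources):
    counts = Counter()
    for src in sources:
        counts.update(spaced_bigrams(src))
    return counts

def deviations(src, counts):
    return [bg for bg in spaced_bigrams(src) if counts[bg] == 0]

project = ["x = 1\n", "y = 2\n", "z = x + y\n"]
counts = train(project)
print(deviations("x=1\n", counts))  # missing spaces around '=' get flagged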

2021

Journal Articles

Maria Th. Kotouza, Alexandros-Charalampos Kyprianidis, Sotirios-Filippos Tsarouchis, Antonios C. Chrysopoulos and Pericles A. Mitkas
"Science4Fashion: an end-to-end decision support system for fashion designers"
Evolving Systems, 2021 Mar

Nowadays, the fashion clothing industry is moving towards “fast” fashion, offering a wide variety of products based on different patterns and styles, usually characterized by lower costs and ambiguous quality. The retail markets are trying to regularly present new fashion collections, while at the same time following the latest fashion trends. The main reason is to remain competitive and keep up with ever-changing customer demands. Fashion designers draw inspiration from social media, e-shops, and fashion shows that set the new fashion trends. In this direction, we propose Science4Fashion, an AI end-to-end system that facilitates fashion designers by collecting and analyzing data from many different sources and suggesting products according to their needs. An overview of the system’s modules is presented, emphasizing data collection, data annotation using deep learning models, and product recommendation and user feedback processes. The experiments presented in this paper are twofold: (a) experiments regarding the evaluation of clothing attribute classification, and (b) experiments regarding product recommendation using the baseline kNN enriched by the frequency-based clustering algorithm (FBC), achieving promising results.

@article{Kotouza2021,
author={Maria Th. Kotouza and Alexandros-Charalampos Kyprianidis and Sotirios-Filippos Tsarouchis and Antonios C. Chrysopoulos and Pericles A. Mitkas},
title={Science4Fashion: an end-to-end decision support system for fashion designers},
journal={Evolving Systems},
year={2021},
month={03},
date={2021-03-12},
url={https://link.springer.com/article/10.1007/s12530-021-09372-7},
doi={https://doi.org/10.1007/s12530-021-09372-7},
issn={1868-6486},
abstract={Nowadays, the fashion clothing industry is moving towards “fast” fashion, offering a wide variety of products based on different patterns and styles, usually characterized by lower costs and ambiguous quality. The retail markets are trying to regularly present new fashion collections, while at the same time following the latest fashion trends. The main reason is to remain competitive and keep up with ever-changing customer demands. Fashion designers draw inspiration from social media, e-shops, and fashion shows that set the new fashion trends. In this direction, we propose Science4Fashion, an AI end-to-end system that facilitates fashion designers by collecting and analyzing data from many different sources and suggesting products according to their needs. An overview of the system’s modules is presented, emphasizing data collection, data annotation using deep learning models, and product recommendation and user feedback processes. The experiments presented in this paper are twofold: (a) experiments regarding the evaluation of clothing attribute classification, and (b) experiments regarding product recommendation using the baseline kNN enriched by the frequency-based clustering algorithm (FBC), achieving promising results.}
}
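
The baseline kNN recommendation step can be illustrated as follows; products are assumed to be already encoded as feature vectors (random stand-ins here), and the frequency-based clustering (FBC) enrichment and deep-learning annotation modules are omitted.

# Sketch: recommend products similar to one the designer marked as
# relevant, via cosine-distance kNN over product feature vectors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
products = rng.normal(size=(500, 16))  # placeholder product features
liked = products[42]                   # a product the designer liked

knn = NearestNeighbors(n_neighbors=6, metric="cosine").fit(products)
_, idx = knn.kneighbors(liked.reshape(1, -1))
print("recommended product ids:", idx[0][1:].tolist())  # skip the query itself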

Emmanouil Krasanakis and Andreas L. Symeonidis
"Defining behaviorizeable relations to enable inference in semi-automatic program synthesis"
Journal of Logical and Algebraic Methods in Programming, 123, pp. 100714, 2021 Nov

@article{krasanakis2021defining,
author={Emmanouil Krasanakis and Andreas L. Symeonidis},
title={Defining behaviorizeable relations to enable inference in semi-automatic program synthesis},
journal={Journal of Logical and Algebraic Methods in Programming},
volume={123},
pages={100714},
year={2021},
month={11},
date={2021-11-01},
url={https://www.sciencedirect.com/science/article/pii/S2352220821000778},
doi={https://doi.org/10.1016/j.jlamp.2021.100714}
}

Nikolaos Malamas and Andreas Symeonidis
"Embedding Rasa in edge Devices: Capabilities and Limitations"
Procedia Computer Science, 192, pp. 109-118, 2021 Jan

Over the past few years, there has been a boost in the use of commercial virtual assistants. Obviously, these proprietary tools are well-performing; however, the functionality they offer is limited, users are “vendor-locked”, and possible user privacy issues arise. In this paper we argue that low-cost, open hardware solutions may also perform well, given the proper setup. Specifically, we perform an initial assessment of a low-cost virtual agent employing the Rasa framework integrated into a Raspberry Pi 4. We set up three different architectures, discuss their capabilities and limitations and evaluate the dialogue system against three axes: assistant comprehension, task success and assistant usability. Our experiments show that our low-cost virtual assistant performs in a satisfactory manner, even when a small-sized training dataset is used.

@article{malamas2021-rasa,
author={Nikolaos Malamas and Andreas Symeonidis},
title={Embedding Rasa in edge Devices: Capabilities and Limitations},
journal={Procedia Computer Science},
volume={192},
pages={109-118},
year={2021},
month={01},
date={2021-01-01},
url={https://www.sciencedirect.com/science/article/pii/S187705092101499X},
doi={https://doi.org/10.1016/j.procs.2021.08.012},
issn={1877-0509},
keywords={Spoken Dialogue Systems;NLU;Rasa;Chatbots},
abstract={Over the past few years, there has been a boost in the use of commercial virtual assistants. Obviously, these proprietary tools are well-performing; however, the functionality they offer is limited, users are “vendor-locked”, and possible user privacy issues arise. In this paper we argue that low-cost, open hardware solutions may also perform well, given the proper setup. Specifically, we perform an initial assessment of a low-cost virtual agent employing the Rasa framework integrated into a Raspberry Pi 4. We set up three different architectures, discuss their capabilities and limitations and evaluate the dialogue system against three axes: assistant comprehension, task success and assistant usability. Our experiments show that our low-cost virtual assistant performs in a satisfactory manner, even when a small-sized training dataset is used.}
}
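
Interacting with such an embedded assistant typically goes through Rasa's REST API. The sketch below assumes a trained model is served on the device with `rasa run --enable-api`; the endpoint path and port are Rasa's defaults, while the host name is hypothetical.

# Sketch: send an utterance to a Rasa server running on an edge device
# and read back the recognized intent (assumes the server is up).
import requests

RASA_URL = "http://raspberrypi.local:5005/model/parse"  # hypothetical host

def parse(text):
    resp = requests.post(RASA_URL, json={"text": text}, timeout=10)
    resp.raise_for_status()
    return resp.json()

result = parse("turn on the living room lights")
print(result["intent"]["name"], result["intent"]["confidence"])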

Michail D. Papamichail and Andreas L. Symeonidis
"Data-Driven Analytics towards Software Sustainability: The Case of Open-Source Multimedia Tools on Cultural Storytelling"
Sustainability, 13, (3), pp. 1079, 2021 Jan

@article{papamichail2021data,
author={Michail D. Papamichail and Andreas L. Symeonidis},
title={Data-Driven Analytics towards Software Sustainability: The Case of Open-Source Multimedia Tools on Cultural Storytelling},
journal={Sustainability},
volume={13},
number={3},
pages={1079},
year={2021},
month={01},
date={2021-01-21},
url={https://www.mdpi.com/2071-1050/13/3/1079},
doi={https://doi.org/10.3390/su13031079}
}

Thomas Karanikiotis, Michail D. Papamichail and Andreas L. Symeonidis
"Analyzing Static Analysis Metric Trends towards Early Identification of Non-Maintainable Software Components"
Sustainability, 13, (22), 2021 Nov

Nowadays, agile software development is considered a mainstream approach for software with fast release cycles and frequent changes in requirements. Most of the time, high velocity in software development implies poor software quality, especially when it comes to maintainability. In this work, we argue that ensuring the maintainability of a software component is not the result of a one-time only (or few-times only) set of fixes that eliminate technical debt, but the result of a continuous process across the software’s life cycle. We propose a maintainability evaluation methodology, where data residing in code hosting platforms are being used in order to identify non-maintainable software classes. Upon detecting classes that have been dropped from their project, we examine the progressing behavior of their static analysis metrics and evaluate maintainability upon the four primary source code properties: complexity, cohesion, inheritance and coupling. The evaluation of our methodology upon various axes, both qualitative and quantitative, indicates that our approach can provide actionable and interpretable maintainability evaluation at class level and identify non-maintainable components around 50% ahead of the software life cycle. Based on these results, we argue that the progressing behavior of static analysis metrics at a class level can provide valuable information about the maintainability degree of the component in time.

@article{su132212848,
author={Thomas Karanikiotis and Michail D. Papamichail and Andreas L. Symeonidis},
title={Analyzing Static Analysis Metric Trends towards Early Identification of Non-Maintainable Software Components},
journal={Sustainability},
volume={13},
number={22},
year={2021},
month={11},
date={2021-11-20},
url={https://www.mdpi.com/2071-1050/13/22/12848},
doi={https://doi.org/10.3390/su132212848},
issn={2071-1050},
abstract={Nowadays, agile software development is considered a mainstream approach for software with fast release cycles and frequent changes in requirements. Most of the time, high velocity in software development implies poor software quality, especially when it comes to maintainability. In this work, we argue that ensuring the maintainability of a software component is not the result of a one-time only (or few-times only) set of fixes that eliminate technical debt, but the result of a continuous process across the software’s life cycle. We propose a maintainability evaluation methodology, where data residing in code hosting platforms are being used in order to identify non-maintainable software classes. Upon detecting classes that have been dropped from their project, we examine the progressing behavior of their static analysis metrics and evaluate maintainability upon the four primary source code properties: complexity, cohesion, inheritance and coupling. The evaluation of our methodology upon various axes, both qualitative and quantitative, indicates that our approach can provide actionable and interpretable maintainability evaluation at class level and identify non-maintainable components around 50% ahead of the software life cycle. Based on these results, we argue that the progressing behavior of static analysis metrics at a class level can provide valuable information about the maintainability degree of the component in time.}
}
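
The trend idea is straightforward to illustrate: fit a slope to a class's metric series across commits and flag components that degrade steadily. The metric histories and the slope threshold below are invented for illustration, not values from the paper.

# Sketch: flag classes whose complexity metric trends upward across
# their commit history (threshold is illustrative only).
import numpy as np

def metric_slope(values):
    # Least-squares slope of the metric over the commit index.
    x = np.arange(len(values))
    slope, _ = np.polyfit(x, np.asarray(values, dtype=float), 1)
    return slope

history = {
    "OrderService": [12, 13, 15, 18, 22, 27],  # steadily growing complexity
    "MathUtils": [7, 7, 8, 7, 7, 8],
}
for cls, series in history.items():
    s = metric_slope(series)
    print(f"{cls}: slope={s:.2f} -> {'at risk' if s > 1.0 else 'stable'}")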

Nikolaos L. Tsakiridis, John B. Theocharis, Andreas L. Symeonidis and G.C. Zalidis
"Improving the predictions of soil properties from VNIR–SWIR spectra in an unlabeled region using semi-supervised and active learning"
Geoderma, 2021 Apr

Monitoring the status of the soil ecosystem to identify the spatio-temporal extent of the pressures exerted and mitigate the effects of climate change and land degradation necessitates reliable and cost-effective solutions. To address this need, soil spectroscopy in the visible, near- and shortwave-infrared (VNIR–SWIR) has emerged as a viable alternative to traditional analytical approaches. To this end, large-scale soil spectral libraries coupled with advanced machine learning tools have been developed to infer the soil properties from the hyperspectral signatures. However, models developed from one region may exhibit diminished performance when applied to a new region, unseen by the model, due to the large and inherent soil variability (e.g. pedogenetical differences, diverse soil types etc.). Given an existing spectral library with labeled data and a new unlabeled region (i.e. where no soil samples are analytically measured), the question then becomes how to best develop a model which can more accurately predict the soil properties of the unlabeled region. In this paper, a machine learning technique is proposed that leverages the capabilities of semi-supervised learning, which exploits the predictors’ distribution of the unlabeled dataset, and of active learning, which expertly selects a small set of data from the unlabeled dataset as a spiking subset in order to develop a more robust model. The semi-supervised learning approach is the Laplacian Support Vector Regression following the manifold regularization framework. As far as the active learning component is concerned, the pool-based approach is utilized, as it best matches the aforementioned use-case scenario, iteratively selecting a subset of data from the unlabeled region to spike the calibration set. As a query strategy, a novel machine learning–based strategy is proposed herein to best identify the spiking subset at each iteration. The experimental analysis was conducted using data from the Land Use and Coverage Area Frame Survey of 2009, which covered most of the then member-states of the European Union, focusing in particular on the mineral cropland soil samples from 5 different countries. The statistical analysis conducted ascertained the efficacy of our approach when compared to the current state-of-the-art in soil spectroscopy.

@article{TSAKIRIDIS2021114830,
author={Nikolaos L. Tsakiridis and John B. Theocharis and Andreas L. Symeonidis and G.C. Zalidis},
title={Improving the predictions of soil properties from VNIR–SWIR spectra in an unlabeled region using semi-supervised and active learning},
journal={Geoderma},
year={2021},
month={04},
date={2021-04-01},
url={https://www.sciencedirect.com/science/article/pii/S0016706120325854},
doi={https://doi.org/10.1016/j.geoderma.2020.114830},
keywords={Soil spectroscopy;Spiking;Active learning;Semi-supervised learning;vis-NIR},
abstract={Monitoring the status of the soil ecosystem to identify the spatio-temporal extent of the pressures exerted and mitigate the effects of climate change and land degradation necessitates reliable and cost-effective solutions. To address this need, soil spectroscopy in the visible, near- and shortwave-infrared (VNIR–SWIR) has emerged as a viable alternative to traditional analytical approaches. To this end, large-scale soil spectral libraries coupled with advanced machine learning tools have been developed to infer the soil properties from the hyperspectral signatures. However, models developed from one region may exhibit diminished performance when applied to a new region, unseen by the model, due to the large and inherent soil variability (e.g. pedogenetical differences, diverse soil types etc.). Given an existing spectral library with labeled data and a new unlabeled region (i.e. where no soil samples are analytically measured), the question then becomes how to best develop a model which can more accurately predict the soil properties of the unlabeled region. In this paper, a machine learning technique is proposed that leverages the capabilities of semi-supervised learning, which exploits the predictors’ distribution of the unlabeled dataset, and of active learning, which expertly selects a small set of data from the unlabeled dataset as a spiking subset in order to develop a more robust model. The semi-supervised learning approach is the Laplacian Support Vector Regression following the manifold regularization framework. As far as the active learning component is concerned, the pool-based approach is utilized, as it best matches the aforementioned use-case scenario, iteratively selecting a subset of data from the unlabeled region to spike the calibration set. As a query strategy, a novel machine learning–based strategy is proposed herein to best identify the spiking subset at each iteration. The experimental analysis was conducted using data from the Land Use and Coverage Area Frame Survey of 2009, which covered most of the then member-states of the European Union, focusing in particular on the mineral cropland soil samples from 5 different countries. The statistical analysis conducted ascertained the efficacy of our approach when compared to the current state-of-the-art in soil spectroscopy.}
}
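
A pool-based active-learning skeleton for this use case might look like the following. Two deliberate simplifications: plain SVR replaces the paper's Laplacian SVR, and a farthest-point diversity criterion replaces the proposed machine-learning-based query strategy; all data and the query budget are placeholders.

# Sketch: iteratively spike the calibration set with samples queried
# from the unlabeled region, then recalibrate the regression model.
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X_lab = rng.normal(size=(100, 50))   # labeled source spectral library
y_lab = rng.normal(size=100)         # measured soil property values
X_pool = rng.normal(size=(200, 50))  # unlabeled target-region spectra

def oracle(x):
    # Stands in for analytically measuring the queried sample in the lab.
    return float(rng.normal())

for _ in range(10):                  # spiking budget of 10 queries
    d = pairwise_distances(X_pool, X_lab).min(axis=1)
    pick = int(np.argmax(d))         # farthest-point (diversity) query
    X_lab = np.vstack([X_lab, X_pool[pick]])
    y_lab = np.append(y_lab, oracle(X_pool[pick]))
    X_pool = np.delete(X_pool, pick, axis=0)

model = SVR().fit(X_lab, y_lab)      # recalibrated model for the new region
print("calibration set size after spiking:", len(y_lab))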

2021

Conference Papers

Eleni Poptsi, Despina Moraitou, Emmanouil Tsardoulias, Andreas L. Symeonidis and Magda Tsolaki
"Είναι εφικτός ο διαχωρισμός του Υγιούς Νοητικά Γήρατος από την Υποκειμενική Νοητική Εξασθένιση; Πιλοτικά αποτελέσματα της καινοτόμου συστοιχίας REMEDES for Alzheimer (R4Alz)"
12th Panhellenic Conference on Alzheimer's Disease & 3rd Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND, Thessaloniki, Greece, 2021 Feb

@conference{elena2021alzconf,
author={Eleni Poptsi and Despina Moraitou and Emmanouil Tsardoulias and Andreas L. Symeonidis and Magda Tsolaki},
title={Είναι εφικτός ο διαχωρισμός του Υγιούς Νοητικά Γήρατος από την Υποκειμενική Νοητική Εξασθένιση; Πιλοτικά αποτελέσματα της καινοτόμου συστοιχίας REMEDES for Alzheimer (R4Alz)},
booktitle={12th Panhellenic Conference on Alzheimer's Disease & 3rd Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND, Thessaloniki, Greece},
year={2021},
month={02},
date={2021-02-18}
}

E. Tsardoulias, C. Zolotas, S. Siouli, P. Antoniou, S. Amanatiadis, T. Karanikiotis, E. Chondromatidis, P. Bamidis, G. Karagiannis and A. Symeonidis
"Science and mathematics education via remote robotics deployment - The TekTrain paradigm"
14th annual International Conference of Education, Research and Innovation - 2021, 2021 Nov

@conference{ets2021iceriTektrain,
author={E. Tsardoulias and C. Zolotas and S. Siouli and P. Antoniou and S. Amanatiadis and T. Karanikiotis and E. Chondromatidis and P. Bamidis and G. Karagiannis and A. Symeonidis},
title={Science and mathematics education via remote robotics deployment - The TekTrain paradigm},
booktitle={14th annual International Conference of Education, Research and Innovation - 2021},
year={2021},
month={11},
date={2021-11-08},
url={https://iated.org/concrete3/paper_detail.php?paper_id=92520}
}

Themistoklis Diamantopoulos, Christiana Galegalidou and Andreas L. Symeonidis
"Software Task Importance Prediction based on Project Management Data"
Proceedings of the 16th International Conference on Software Technologies (ICSOFT 2021), pp. 269-276, 2021 Jul

With the help of project management tools and code hosting facilities, software development has been transformed into an easy-to-decentralize business. However, determining the importance of tasks within a software engineering process in order to better prioritize and act on them has always been an interesting challenge. Although several approaches on bug severity/priority prediction exist, the challenge of task importance prediction has not been sufficiently addressed in current research. Most approaches do not consider the meta-data and the temporal characteristics of the data, while they also do not take into account the ordinal characteristics of the importance/severity variable. In this work, we analyze the challenge of task importance prediction and propose a prototype methodology that extracts both textual (titles, descriptions) and meta-data (type, assignee) characteristics from tasks and employs a sliding window technique to model their time frame. After that, we evaluate three different prediction methods, a multi-class classifier, a regression algorithm, and an ordinal classification technique, in order to assess which model is the most effective for encompassing the relative ordering between different importance values. The results of our evaluation are promising, leaving room for future research.

@conference{ICSOFT2021,
author={Themistoklis Diamantopoulos and Christiana Galegalidou and Andreas L. Symeonidis},
title={Software Task Importance Prediction based on Project Management Data},
booktitle={Proceedings of the 16th International Conference on Software Technologies (ICSOFT 2021)},
pages={269-276},
year={2021},
month={07},
date={2021-07-06},
url={https://issel.ee.auth.gr/wp-content/uploads/2021/07/ICSOFT2021TaskImportance.pdf},
doi={https://doi.org/10.5220/0010578302690276},
keywords={Task Management;Task Importance;Bug Severity;Ordinal Classification;Project Management},
abstract={With the help of project management tools and code hosting facilities, software development has been transformed into an easy-to-decentralize business. However, determining the importance of tasks within a software engineering process in order to better prioritize and act on them has always been an interesting challenge. Although several approaches on bug severity/priority prediction exist, the challenge of task importance prediction has not been sufficiently addressed in current research. Most approaches do not consider the meta-data and the temporal characteristics of the data, while they also do not take into account the ordinal characteristics of the importance/severity variable. In this work, we analyze the challenge of task importance prediction and propose a prototype methodology that extracts both textual (titles, descriptions) and meta-data (type, assignee) characteristics from tasks and employs a sliding window technique to model their time frame. After that, we evaluate three different prediction methods, a multi-class classifier, a regression algorithm, and an ordinal classification technique, in order to assess which model is the most effective for encompassing the relative ordering between different importance values. The results of our evaluation are promising, leaving room for future research.}
}
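
The ordinal component can be sketched with the classic threshold decomposition (in the style of Frank and Hall): K-1 binary classifiers estimate P(importance > k) and their outputs are recombined into class probabilities. This is a generic formulation over invented data, not the paper's exact models or features.

# Sketch: ordinal classification of task importance via binary
# threshold decomposition with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 20))    # placeholder task features
y = rng.integers(0, 4, size=300)  # importance levels 0..3

def fit_ordinal(X, y, n_classes=4):
    return [LogisticRegression(max_iter=1000).fit(X, (y > k).astype(int))
            for k in range(n_classes - 1)]

def predict_ordinal(models, X, n_classes=4):
    p_gt = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
    probs = np.empty((len(X), n_classes))
    probs[:, 0] = 1.0 - p_gt[:, 0]
    for k in range(1, n_classes - 1):
        probs[:, k] = p_gt[:, k - 1] - p_gt[:, k]
    probs[:, -1] = p_gt[:, -1]
    return probs.argmax(axis=1)

models = fit_ordinal(X, y)
print(predict_ordinal(models, X[:5]))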

Thomas Karanikiotis, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"Towards Automatically Generating a Personalized Code Formatting Mechanism"
Proceedings of the 16th International Conference on Software Technologies - ICSOFT, pp. 90-101, SciTePress, 2021 Jul

Source code readability and comprehensibility are continuously gaining interest, due to the wide adoption of component-based software development and the (re)use of software residing in code hosting platforms. Consistent code styling and formatting across a project tend to improve readability, while most code formatting approaches rely on a set of rules defined by experts, that aspire to model a commonly accepted formatting. This approach is usually based on the experts’ expertise, is time-consuming and does not take into account the way a team develops software. Thus, it becomes too intrusive and, in many cases, is not adopted. In this work we present an automated mechanism that, given a set of source code files, can be trained to recognize the formatting style used across a project and identify deviations, in a completely unsupervised manner. At first, source code is transformed into small meaningful pieces, called tokens, which are used to train the models of our mechanism, in order to predict the probability of a token being wrongly positioned. Preliminary evaluation on various axes indicates that our approach can effectively detect formatting deviations from the project’s code styling and provide actionable recommendations to the developer.

@conference{icsoft2021Codrep,
author={Thomas Karanikiotis and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={Towards Automatically Generating a Personalized Code Formatting Mechanism},
booktitle={Proceedings of the 16th International Conference on Software Technologies - ICSOFT},
pages={90-101},
publisher={SciTePress},
editor={Hans-Georg Fill and Marten van Sinderen and Leszek A. Maciaszek},
organization={INSTICC},
year={2021},
month={07},
date={2021-07-28},
url={https://doi.org/10.5220/0010579900900101},
doi={https://doi.org/10.5220/0010579900900101},
issn={2184-2833},
isbn={978-989-758-523-4},
keywords={Source Code Formatting;Code Style;Source Code Readability;LSTM;SVM One-Class},
abstract={Source code readability and comprehensibility are continuously gaining interest, due to the wide adoption of component-based software development and the (re)use of software residing in code hosting platforms. Consistent code styling and formatting across a project tend to improve readability, while most code formatting approaches rely on a set of rules defined by experts, that aspire to model a commonly accepted formatting. This approach is usually based on the experts’ expertise, is time-consuming and does not take into account the way a team develops software. Thus, it becomes too intrusive and, in many cases, is not adopted. In this work we present an automated mechanism that, given a set of source code files, can be trained to recognize the formatting style used across a project and identify deviations, in a completely unsupervised manner. At first, source code is transformed into small meaningful pieces, called tokens, which are used to train the models of our mechanism, in order to predict the probability of a token being wrongly positioned. Preliminary evaluation on various axes indicates that our approach can effectively detect formatting deviations from the project’s code styling and provide actionable recommendations to the developer.}
}

Antonis Dimitriou, Anastasios Tzitzis, Alexandros Filotheou, Spyros Megalou, Stavroula Siachalou, Aristidis R. Chatzistefanou, Andreana Malama, Emmanouil Tsardoulias, Konstantinos Panayiotou, Evaggelos Giannelos, Thodoris Vasiliadis, Ioannis Mouroutsos, Ioannis Karanikas, Loukas Petrou, Andreas Symeonidis, John Sahalos, Traianos Yioultsis and Aggelos Bletsas
"Autonomous Robots, Drones and Repeaters for Fast, Reliable, Low-Cost RFID Inventorying and Localization"
2021 6th International Conference on Smart and Sustainable Technologies (SpliTech), 2021 Sep

@conference{tsa2021rfidSplitech,
author={Antonis Dimitriou and Anastasios Tzitzis and Alexandros Filotheou and Spyros Megalou and Stavroula Siachalou and Aristidis R. Chatzistefanou and Andreana Malama and Emmanouil Tsardoulias and Konstantinos Panayiotou and Evaggelos Giannelos and Thodoris Vasiliadis and Ioannis Mouroutsos and Ioannis Karanikas and Loukas Petrou and Andreas Symeonidis and John Sahalos and Traianos Yioultsis and Aggelos Bletsas},
title={Autonomous Robots, Drones and Repeaters for Fast, Reliable, Low-Cost RFID Inventorying and Localization},
booktitle={2021 6th International Conference on Smart and Sustainable Technologies (SpliTech)},
year={2021},
month={09},
date={2021-09-11},
url={https://ieeexplore.ieee.org/document/9566425},
doi={https://doi.org/10.23919/SpliTech52315.2021.9566425}
}

2021

Inbooks

Thomas Karanikiotis, Michail D. Papamichail and Andreas L. Symeonidis
"Multilevel Readability Interpretation Against Software Properties: A Data-Centric Approach"
In: Communications in Computer and Information Science, vol. 1447, van Sinderen, Marten, Maciaszek, Leszek A. and Fill, Hans-Georg (eds.), pp. 203-226, Springer International Publishing, Cham, 2021 Jul

Given the wide adoption of the agile software development paradigm, where efficient collaboration as well as effective maintenance are of utmost importance, the need to produce readable source code is evident. To that end, several research efforts aspire to assess the extent to which a software component is readable. Several metrics and evaluation criteria have been proposed; however, they are mostly empirical or rely on experts who are responsible for determining the ground truth and/or set custom thresholds, leading to results that are context-dependent and subjective. In this work, we employ a large set of static analysis metrics along with various coding violations towards interpreting readability as perceived by developers. Unlike already existing approaches, we refrain from using experts and we provide a fully automated and extendible methodology built upon data residing in online code hosting facilities. We perform static analysis at two levels (method and class) and construct a benchmark dataset that includes more than one million methods and classes covering diverse development scenarios. After performing clustering based on source code size, we employ Support Vector Regression in order to interpret the extent to which a software component is readable against the source code properties: cohesion, inheritance, complexity, coupling, and documentation. The evaluation of our methodology indicates that our models effectively interpret readability as perceived by developers against the above mentioned source code properties.

@inbook{icsoft2020BookChapter,
author={Thomas Karanikiotis and Michail D. Papamichail and Andreas L. Symeonidis},
title={Multilevel Readability Interpretation Against Software Properties: A Data-Centric Approach},
editor={van Sinderen, Marten and Maciaszek, Leszek A. and Fill, Hans-Georg},
volume={1447},
pages={203-226},
publisher={Springer International Publishing},
series={Communications in Computer and Information Science},
address={Cham},
year={2021},
month={07},
date={2021-07-21},
url={https://doi.org/10.1007/978-3-030-83007-6_10},
doi={https://doi.org/10.1007/978-3-030-83007-6_10},
isbn={978-3-030-83007-6},
abstract={Given the wide adoption of the agile software development paradigm, where efficient collaboration as well as effective maintenance are of utmost importance, the need to produce readable source code is evident. To that end, several research efforts aspire to assess the extent to which a software component is readable. Several metrics and evaluation criteria have been proposed; however, they are mostly empirical or rely on experts who are responsible for determining the ground truth and/or set custom thresholds, leading to results that are context-dependent and subjective. In this work, we employ a large set of static analysis metrics along with various coding violations towards interpreting readability as perceived by developers. Unlike already existing approaches, we refrain from using experts and we provide a fully automated and extendible methodology built upon data residing in online code hosting facilities. We perform static analysis at two levels (method and class) and construct a benchmark dataset that includes more than one million methods and classes covering diverse development scenarios. After performing clustering based on source code size, we employ Support Vector Regression in order to interpret the extent to which a software component is readable against the source code properties: cohesion, inheritance, complexity, coupling, and documentation. The evaluation of our methodology indicates that our models effectively interpret readability as perceived by developers against the above mentioned source code properties.}
}
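
The two-step methodology (size-based clustering, then per-cluster regression against metric groups) can be outlined as below; random arrays stand in for the benchmark dataset's static-analysis metrics and readability labels, and the cluster count is illustrative.

# Sketch: cluster components by size, then train one SVR per cluster to
# interpret readability against static-analysis metrics.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(5)
loc = rng.integers(5, 500, size=(400, 1))  # lines of code per component
metrics = rng.normal(size=(400, 10))       # complexity/cohesion/... metrics
readability = rng.random(400)              # readability scores in [0, 1]

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(loc)
models = {}
for c in np.unique(clusters):
    mask = clusters == c
    models[c] = SVR().fit(metrics[mask], readability[mask])
    print(f"cluster {c}: trained on {mask.sum()} components")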

2020

Journal Articles

Alexandros Filotheou, Emmanouil Tsardoulias, Antonis Dimitriou, Andreas Symeonidis and Loukas Petrou
"Pose Selection and Feedback Methods in Tandem Combinations of Particle Filters with Scan-Matching for 2D Mobile Robot Localisation"
Journal of Intelligent & Robotic Systems, 100, pp. 925-944, 2020 Sep

Robot localisation is predominantly resolved via parametric or non-parametric probabilistic methods. The particle filter, the most common non-parametric approach, is a Monte Carlo Localisation (MCL) method that is extensively used in robot localisation, as it can represent arbitrary probabilistic distributions, in contrast to Kalman filters, which is the standard parametric representation. In particle filters, a weight is internally assigned to each particle, and this weight serves as an indicator of a particle’s estimation certainty. Their output, the tracked object’s pose estimate, is implicitly assumed to be the weighted average pose of all particles; however, we argue that disregarding low-weight particles from this averaging process may yield an increase in accuracy. Furthermore, we argue that scan-matching, treated as a prosthesis of (or, put differently, fit in tandem with) a particle filter, can also lead to better accuracy. Moreover, we study the effect of feeding back this improved estimate to MCL, and introduce a feedback method that outperforms current state-of-the-art feedback approaches in accuracy and robustness, while alleviating their drawbacks. In the process of formulating these hypotheses we construct a localisation pipeline that admits configurations that are a superset of state-of-the-art configurations of tandem combinations of particle filters with scan-matching. The above hypotheses are tested in two simulated environments and results support our argumentation.

@article{alexPoseSelection2020,
author={Alexandros Filotheou and Emmanouil Tsardoulias and Antonis Dimitriou and Andreas Symeonidis and Loukas Petrou},
title={Pose Selection and Feedback Methods in Tandem Combinations of Particle Filters with Scan-Matching for 2D Mobile Robot Localisation},
journal={Journal of Intelligent & Robotic Systems},
volume={100},
pages={925-944},
year={2020},
month={09},
date={2020-09-15},
url={https://link.springer.com/article/10.1007/s10846-020-01253-6},
doi={https://doi.org/10.1007/s10846-020-01253-6},
abstract={Robot localisation is predominantly resolved via parametric or non-parametric probabilistic methods. The particle filter, the most common non-parametric approach, is a Monte Carlo Localisation (MCL) method that is extensively used in robot localisation, as it can represent arbitrary probabilistic distributions, in contrast to Kalman filters, which is the standard parametric representation. In particle filters, a weight is internally assigned to each particle, and this weight serves as an indicator of a particle’s estimation certainty. Their output, the tracked object’s pose estimate, is implicitly assumed to be the weighted average pose of all particles; however, we argue that disregarding low-weight particles from this averaging process may yield an increase in accuracy. Furthermore, we argue that scan-matching, treated as a prosthesis of (or, put differently, fit in tandem with) a particle filter, can also lead to better accuracy. Moreover, we study the effect of feeding back this improved estimate to MCL, and introduce a feedback method that outperforms current state-of-the-art feedback approaches in accuracy and robustness, while alleviating their drawbacks. In the process of formulating these hypotheses we construct a localisation pipeline that admits configurations that are a superset of state-of-the-art configurations of tandem combinations of particle filters with scan-matching. The above hypotheses are tested in two simulated environments and results support our argumentation.}
}
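
The paper's first hypothesis, discarding low-weight particles before averaging, can be sketched directly; the retention fraction below is illustrative, and a circular mean is used for the heading so that angles near ±π average correctly.

# Sketch: estimate a 2D pose from a particle set using only the
# highest-weight particles, with circular averaging for the heading.
import numpy as np

def pose_estimate(poses, weights, keep=0.5):
    # poses: (N, 3) array of (x, y, theta); weights: (N,).
    order = np.argsort(weights)[::-1]
    top = order[: max(1, int(len(order) * keep))]  # drop low-weight particles
    w = weights[top] / weights[top].sum()
    x = np.dot(w, poses[top, 0])
    y = np.dot(w, poses[top, 1])
    theta = np.arctan2(np.dot(w, np.sin(poses[top, 2])),
                       np.dot(w, np.cos(poses[top, 2])))
    return x, y, theta

rng = np.random.default_rng(6)
particles = rng.normal([2.0, 1.0, 0.3], [0.1, 0.1, 0.05], size=(500, 3))
weights = rng.random(500)
print(pose_estimate(particles, weights))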

D. Geromichalos, M. Azkarate, E. Tsardoulias, L. Gerdes, L. Petrou and C. Perez Del Pulgar
"SLAM for Autonomous Planetary Rovers with Global Localization"
Journal of Field Robotics, pp. 1-18, 2020 Feb

This paper describes a novel approach to simultaneous localization and mapping (SLAM) techniques applied to the autonomous planetary rover exploration scenario to reduce both the relative and absolute localization errors, using two well‐proven techniques: particle filters and scan matching. Continuous relative localization is improved by matching high‐resolution sensor scans to the online created local map. Additionally, to avoid issues with drifting localization, absolute localization is globally corrected at discrete times, according to predefined event criteria, by matching the current local map to the orbiter's global map. The resolutions of local and global maps can be appropriately chosen for computation and accuracy purposes. Further, the online generated local map, of the form of a structured elevation grid map, can also be used to evaluate the traversability of the surrounding environment and allow for continuous navigation. The objective of this study is to support long‐range low‐supervision planetary exploration. The implemented SLAM technique has been validated with a data set acquired during a field test campaign performed at the Teide Volcano on the island of Tenerife, representative of a Mars/Moon exploration scenario.

@article{etsardouEsa2020,
author={D. Geromichalos and M. Azkarate and E. Tsardoulias and L. Gerdes and L. Petrou and C. Perez Del Pulgar},
title={SLAM for Autonomous Planetary Rovers with Global Localization},
journal={Journal of Field Robotics},
pages={1-18},
year={2020},
month={02},
date={2020-02-28},
url={https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.21943},
doi={https://doi.org/10.1002/rob.21943},
publisher's url={https://onlinelibrary.wiley.com/journal/15564967},
keywords={autonomous navigation;long term localization;planetary rovers;SLAM},
abstract={This paper describes a novel approach to simultaneous localization and mapping (SLAM) techniques applied to the autonomous planetary rover exploration scenario to reduce both the relative and absolute localization errors, using two well‐proven techniques: particle filters and scan matching. Continuous relative localization is improved by matching high‐resolution sensor scans to the online created local map. Additionally, to avoid issues with drifting localization, absolute localization is globally corrected at discrete times, according to predefined event criteria, by matching the current local map to the orbiter\'s global map. The resolutions of local and global maps can be appropriately chosen for computation and accuracy purposes. Further, the online generated local map, of the form of a structured elevation grid map, can also be used to evaluate the traversability of the surrounding environment and allow for continuous navigation. The objective of this study is to support long‐range low‐supervision planetary exploration. The implemented SLAM technique has been validated with a data set acquired during a field test campaign performed at the Teide Volcano on the island of Tenerife, representative of a Mars/Moon exploration scenario.}
}

A. Tzitzis, S. Megalou, S. Siachalou, E. Tsardoulias, A. Filotheou, T. Yioultsis, and A. G. Dimitriou
"Trajectory Planning of a Moving Robot Empowers 3D Localization of RFID Tags with a Single Antenna"
IEEE Journal of Radio Frequency Identification, 2020 Jun

In this work, we present a method for 3D localization of RFID tags by a reader-equipped robot with a single antenna. The robot carries a set of sensors, which enable it to create a map of the environment and locate itself in it (Simultaneous Localization and Mapping -SLAM). Then we exploit the collected phase measurements to localize large tag populations in real-time. We show that by forcing the robot to move along non-straight trajectories, thus creating non-linear synthetic apertures, the circular ambiguity of the possible tag’s locations is eliminated and 3D localization is accomplished. A reliability metric is introduced, suitable for real-time assessment of the localization error. We investigate how the curvature of the robot’s trajectory affects the accuracy under varying multipath conditions. It is found that increasing the trajectory’s slope and number of turns improves the accuracy of the method. We introduce a phase model that accounts for the effects of multipath and derive the closed form expression of the resultant’s phase probability density function. Finally, the proposed method is extended when multiple antennas are available. Experimental results in a "multipath-rich" indoor environment demonstrate a mean 3D error of 35cm, achieved in a few seconds.
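
At its core, the localization described above is a non-linear least-squares fit of a tag position to phases collected along the robot's non-straight trajectory (the synthetic aperture). A sketch using the usual 4*pi*d/lambda backscatter phase model is given below; the carrier frequency, the unknown constant phase offset and the solver are illustrative assumptions, not the authors' exact method.

import numpy as np
from scipy.optimize import least_squares

C, F = 3e8, 866.5e6      # speed of light, an EU UHF RFID carrier (assumed)
LAM = C / F

def wrap(a):
    """Wrap angles to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def locate_tag(antenna_xyz, phases, x0):
    """Fit a tag position to phases measured along the trajectory.

    antenna_xyz: (M, 3) antenna positions reported by SLAM
    phases:      (M,) measured phases in radians
    x0:          initial guess (x, y, z, phase_offset)
    """
    def residuals(p):
        d = np.linalg.norm(antenna_xyz - p[:3], axis=1)
        model = 4 * np.pi * d / LAM + p[3]     # round-trip phase model
        return wrap(model - phases)            # compare on the circle
    return least_squares(residuals, x0).x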

@article{etsardouRfid22020,
author={A. Tzitzis and S. Megalou and S. Siachalou and E. Tsardoulias and A. Filotheou and T. Yioultsis and A. G. Dimitriou},
title={Trajectory Planning of a Moving Robot Empowers 3D Localization of RFID Tags with a Single Antenna},
journal={IEEE Journal of Radio Frequency Identification},
year={2020},
month={06},
date={2020-06-05},
url={https://ieeexplore.ieee.org/document/9109328},
doi={https://doi.org/10.1109/JRFID.2020.3000332},
publisherurl={https://doi.org/10.1109/JRFID.2020.3000332},
keywords={robotics;SLAM;RFID;3D Localization;Non Linear Optimization;Phase Unwrapping;Trajectory Evaluation},
abstract={In this work, we present a method for 3D localization of RFID tags by a reader-equipped robot with a single antenna. The robot carries a set of sensors, which enable it to create a map of the environment and locate itself in it (Simultaneous Localization and Mapping -SLAM). Then we exploit the collected phase measurements to localize large tag populations in real-time. We show that by forcing the robot to move along non-straight trajectories, thus creating non-linear synthetic apertures, the circular ambiguity of the possible tag’s locations is eliminated and 3D localization is accomplished. A reliability metric is introduced, suitable for real-time assessment of the localization error. We investigate how the curvature of the robot’s trajectory affects the accuracy under varying multipath conditions. It is found that increasing the trajectory’s slope and number of turns improves the accuracy of the method. We introduce a phase model that accounts for the effects of multipath and derive the closed form expression of the resultant’s phase probability density function. Finally, the proposed method is extended when multiple antennas are available. Experimental results in a \"multipath-rich\" indoor environment demonstrate a mean 3D error of 35cm, achieved in a few seconds.}
}

Michail D. Papamichail and Andreas L. Symeonidis
"A Generic Methodology for Early Identification of Non-Maintainable Source Code Components through Analysis of Software Releases"
Information and Software Technology, 118, pp. 106218, 2020 Feb

Contemporary development approaches consider that time-to-market is of utmost importance and assume that software projects are constantly evolving, driven by the continuously changing requirements of end-users. This practically requires an iterative process where software is changing by introducing new or updating existing software/user features, while at the same time continuing to support the stable ones. In order to ensure efficient software evolution, the need to produce maintainable software is evident. In this work, we argue that non-maintainable software is not the outcome of a single change, but the consequence of a series of changes throughout the development lifecycle. To that end, we define a maintainability evaluation methodology across releases and employ various information residing in software repositories, so as to decide on the maintainability of software. Upon using the dropping of packages as a non-maintainability indicator (accompanied by a series of quality-related criteria), the proposed methodology involves using one-class-classification techniques for evaluating maintainability at a package level, on four different axes each targeting a primary source code property: complexity, cohesion, coupling, and inheritance. Given the qualitative and quantitative evaluation of our methodology, we argue that apart from providing accurate and interpretable maintainability evaluation at package level, we can also identify non-maintainable components at an early stage. This early stage is in many cases around 50% of the software package lifecycle. Based on our findings, we conclude that modeling the trending behavior of certain static analysis metrics enables the effective identification of non-maintainable software components and thus can be a valuable tool for software engineers.
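
Since the methodology learns only from packages deemed maintainable and flags deviations, one-class classification is the natural fit. A minimal scikit-learn sketch follows; the metric layout and all values are fabricated placeholders, and the paper's per-axis feature sets differ.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Rows: packages considered maintainable; columns: static-analysis
# metrics for one axis (e.g. complexity). Fabricated for illustration.
maintainable = np.random.default_rng(0).normal(size=(200, 4))

model = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05))
model.fit(maintainable)          # learn the "maintainable" region only

candidate = np.array([[3.1, 2.7, 4.0, 3.5]])   # an outlying package
print(model.predict(candidate))  # -1 flags a potentially non-maintainable package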

@article{ISTmaintainabilityPaper,
author={Michail D. Papamichail and Andreas L. Symeonidis},
title={A Generic Methodology for Early Identification of Non-Maintainable Source Code Components through Analysis of Software Releases},
journal={Information and Software Technology},
volume={118},
pages={106218},
year={2020},
month={02},
date={2020-02-01},
url={https://issel.ee.auth.gr/wp-content/uploads/2020/06/ISTmaintainabilityPaper.pdf},
doi={https://doi.org/10.1016/j.infsof.2019.106218},
issn={0950-5849},
keywords={static analysis metrics;Software releases;maintainability evaluation;software quality;trend analysis},
abstract={Contemporary development approaches consider that time-to-market is of utmost importance and assume that software projects are constantly evolving, driven by the continuously changing requirements of end-users. This practically requires an iterative process where software is changing by introducing new or updating existing software/user features, while at the same time continuing to support the stable ones. In order to ensure efficient software evolution, the need to produce maintainable software is evident. In this work, we argue that non-maintainable software is not the outcome of a single change, but the consequence of a series of changes throughout the development lifecycle. To that end, we define a maintainability evaluation methodology across releases and employ various information residing in software repositories, so as to decide on the maintainability of software. Upon using the dropping of packages as a non-maintainability indicator (accompanied by a series of quality-related criteria), the proposed methodology involves using one-class-classification techniques for evaluating maintainability at a package level, on four different axes each targeting a primary source code property: complexity, cohesion, coupling, and inheritance. Given the qualitative and quantitative evaluation of our methodology, we argue that apart from providing accurate and interpretable maintainability evaluation at package level, we can also identify non-maintainable components at an early stage. This early stage is in many cases around 50% of the software package lifecycle. Based on our findings, we conclude that modeling the trending behavior of certain static analysis metrics enables the effective identification of non-maintainable software components and thus can be a valuable tool for software engineers.}
}

Evridiki Papachristou, Antonios Chrysopoulos and Nikolaos Bilalis
"Machine learning for clothing manufacture as a mean to respond quicker and better to the demands of clothing brands: a Greek case study"
The International Journal of Advanced Manufacturing Technology, 2020 Oct

In the clothing industry, design, development and procurement teams have been affected more than any other industry and are constantly being under pressure to present more products with fewer resources in a shorter time. The diversity of garment designs created as new products is not found in any other industry and is almost independent of the size of the business. The proposed research is being applied to a Greek clothing manufacturing company with operations in two different countries and a portfolio of diverse brands and moves in two dimensions: The first dimension concerns the perfect transformation of the product design field into a field of action planning that can be supported by artificial intelligence, providing timely and valid information to the designer drawing information from a wider range of sources than today’s method. The second dimension of the research concerns the design and implementation of an intelligent and semi-autonomous decision support system for everyone involved in the sample room. This system utilizes various machine learning techniques in order to become a versatile, robust and useful “assistant”: multiple clustering and classification models are utilized for grouping and combining similar/relevant products, Computer Vision state-of-the-art algorithms are extracting meaningful attributes from images and, finally, a reinforcement learning system is used to evolve the existing models based on user’s preferences.

@article{Papachristou2020,
author={Evridiki Papachristou and Antonios Chrysopoulos and Nikolaos Bilalis},
title={Machine learning for clothing manufacture as a mean to respond quicker and better to the demands of clothing brands: a Greek case study},
journal={The International Journal of Advanced Manufacturing Technology},
year={2020},
month={10},
date={2020-10-06},
url={https://link.springer.com/article/10.1007/s00170-020-06157-1},
doi={https://doi.org/10.1007/s00170-020-06157-1},
issn={1433-3015},
abstract={In the clothing industry, design, development and procurement teams have been affected more than any other industry and are constantly being under pressure to present more products with fewer resources in a shorter time. The diversity of garment designs created as new products is not found in any other industry and is almost independent of the size of the business. The proposed research is being applied to a Greek clothing manufacturing company with operations in two different countries and a portfolio of diverse brands and moves in two dimensions: The first dimension concerns the perfect transformation of the product design field into a field of action planning that can be supported by artificial intelligence, providing timely and valid information to the designer drawing information from a wider range of sources than today’s method. The second dimension of the research concerns the design and implementation of an intelligent and semi-autonomous decision support system for everyone involved in the sample room. This system utilizes various machine learning techniques in order to become a versatile, robust and useful “assistant”: multiple clustering and classification models are utilized for grouping and combining similar/relevant products, Computer Vision state-of-the-art algorithms are extracting meaningful attributes from images and, finally, a reinforcement learning system is used to evolve the existing models based on user’s preferences.}
}

Eleni Poptsi, Despina Moraitou, Emmanouil Tsardoulias, Andreas L. Symeonidis and Magda Tsolaki
"Is the Discrimination of Subjective Cognitive Decline from Cognitively Healthy Adulthood and Mild Cognitive Impairment Possible? A Pilot Study Utilizing the R4Alz Battery"
Journal of Alzheimers Disease, 77, pp. 715-732, 2020 Sep

Background: The early diagnosis of neurocognitive disorders before the symptoms' onset is the ultimate goal of the scientific community. REMEDES for Alzheimer (R4Alz) is a battery, designed for assessing cognitive control abilities in people with minor and major neurocognitive disorders. Objective: To investigate whether the R4Alz battery's tasks differentiate subjective cognitive decline (SCD) from cognitively healthy adults (CHA) and mild cognitive impairment (MCI). Methods: The R4Alz battery was administered to 175 Greek adults, categorized into five groups: a) healthy young adults (HYA; n = 42), b) healthy middle-aged adults (HMaA; n = 33), c) healthy older adults (HOA; n = 14), d) community-dwelling older adults with SCD (n = 34), and e) people with MCI (n = 52). Results: Among the seven R4Alz subtasks, four showcased the best results for differentiating HOA from SCD: the working memory updating (WMCUT-S3), the inhibition and switching subtask (ICT/RST-S1&S2), the failure sets (FS) of the ICT/RST-S1&S2, and the cognitive flexibility subtask (ICT/RST-S3). The total score of the four R4Alz subtasks (R4AlzTot4) leads to an excellent discrimination between SCD and healthy adulthood, and to fair discrimination between SCD and MCI. Conclusion: The R4Alz battery is a novel approach regarding the neuropsychological assessment of people with SCD, since it can very well assist toward discriminating SCD from HOA. The R4Alz is able to measure decline of specific cognitive control abilities - namely of working memory updating, and complex executive functions - which seem to be the neuropsychological substrate of cognitive complaints in community dwelling adults of advancing age.
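
The discrimination ability of a composite score such as R4AlzTot4 is conventionally summarised by a ROC analysis. The sketch below computes an AUC for two groups with invented scores (group sizes borrowed from the abstract); it illustrates the kind of evaluation reported, not the study's data or exact analysis.

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical composite scores per group; values are invented.
scores_hoa = np.random.default_rng(1).normal(10, 2, 14)  # healthy older adults
scores_scd = np.random.default_rng(2).normal(14, 2, 34)  # SCD group

y = np.r_[np.zeros(len(scores_hoa)), np.ones(len(scores_scd))]
auc = roc_auc_score(y, np.r_[scores_hoa, scores_scd])
print(f"AUC = {auc:.2f}")  # ~0.5 is chance level, ~1.0 excellent discrimination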

@article{poptsiJad2020,
author={Eleni Poptsi and Despina Moraitou and Emmanouil Tsardoulias and Andreas L. Symeonidis and Magda Tsolaki},
title={Is the Discrimination of Subjective Cognitive Decline from Cognitively Healthy Adulthood and Mild Cognitive Impairment Possible? A Pilot Study Utilizing the R4Alz Battery},
journal={Journal of Alzheimers Disease},
volume={77},
pages={715-732},
year={2020},
month={09},
date={2020-09-01},
url={https://pubmed.ncbi.nlm.nih.gov/32741834/},
doi={https://doi.org/10.3233/jad-200562},
keywords={mild cognitive impairment;Cognitive control assessment battery;cognitively healthy adults;normative data;subjective cognitive decline},
abstract={Background: The early diagnosis of neurocognitive disorders before the symptoms\' onset is the ultimate goal of the scientific community. REMEDES for Alzheimer (R4Alz) is a battery, designed for assessing cognitive control abilities in people with minor and major neurocognitive disorders. Objective: To investigate whether the R4Alz battery\'s tasks differentiate subjective cognitive decline (SCD) from cognitively healthy adults (CHA) and mild cognitive impairment (MCI). Methods: The R4Alz battery was administered to 175 Greek adults, categorized into five groups: a) healthy young adults (HYA; n = 42), b) healthy middle-aged adults (HMaA; n = 33), c) healthy older adults (HOA; n = 14), d) community-dwelling older adults with SCD (n = 34), and e) people with MCI (n = 52). Results: Among the seven R4Alz subtasks, four showcased the best results for differentiating HOA from SCD: the working memory updating (WMCUT-S3), the inhibition and switching subtask (ICT/RST-S1&S2), the failure sets (FS) of the ICT/RST-S1&S2, and the cognitive flexibility subtask (ICT/RST-S3). The total score of the four R4Alz subtasks (R4AlzTot4) leads to an excellent discrimination between SCD and healthy adulthood, and to fair discrimination between SCD and MCI. Conclusion: The R4Alz battery is a novel approach regarding the neuropsychological assessment of people with SCD, since it can very well assist toward discriminating SCD from HOA. The R4Alz is able to measure decline of specific cognitive control abilities - namely of working memory updating, and complex executive functions - which seem to be the neuropsychological substrate of cognitive complaints in community dwelling adults of advancing age.}
}

2020

Conference Papers

Nikolaos L. Tsakiridis, Themistoklis Diamantopoulos, Andreas L. Symeonidis, John B. Theocharis, Athanasios Iossifides, Periklis Chatzimisios, George Pratos and Dimitris Kouvas
"Versatile Internet of Things for Agriculture: An eXplainable AI Approach"
International Conference on Artificial Intelligence Applications and Innovations, 2020 Jun

The increase of the adoption of IoT devices and the contemporary problem of food production have given rise to numerous applications of IoT in agriculture. These applications typically comprise a set of sensors that are installed in open fields and measure metrics, such as temperature or humidity, which are used for irrigation control systems. Though useful, most contemporary systems have high installation and maintenance costs, and they do not offer automated control or, if they do, they are usually not interpretable, and thus cannot be trusted for such critical applications. In this work, we design Vital, a system that incorporates a set of low-cost sensors, a robust data store, and most importantly an explainable AI decision support system. Our system outputs a fuzzy rule-base, which is interpretable and allows fully automating the irrigation of the fields. Upon evaluating Vital in two pilot cases, we conclude that it can be effective for monitoring open-field installations.
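
The interpretable output of Vital is a fuzzy rule-base. To give a flavour of how such a rule-base drives irrigation decisions, here is a two-rule Mamdani-style sketch with triangular memberships; the variables, breakpoints and rules are invented for illustration and are not Vital's actual rule-base.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def irrigation_duty(soil_moisture, temperature):
    """IF moisture LOW AND temp HIGH -> irrigate LONG (1.0)
       IF moisture HIGH              -> irrigate NONE (0.0)"""
    r1 = min(tri(soil_moisture, 0, 10, 25), tri(temperature, 25, 35, 45))
    r2 = tri(soil_moisture, 20, 40, 60)
    # weighted-average defuzzification over the rule consequents
    return (r1 * 1.0 + r2 * 0.0) / (r1 + r2) if (r1 + r2) else 0.0

print(irrigation_duty(soil_moisture=8, temperature=38))  # close to 1.0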

@conference{AIAI2020,
author={Nikolaos L. Tsakiridis and Themistoklis Diamantopoulos and Andreas L. Symeonidis and John B. Theocharis and Athanasios Iossifides and Periklis Chatzimisios and George Pratos and Dimitris Kouvas},
title={Versatile Internet of Things for Agriculture: An eXplainable AI Approach},
booktitle={International Conference on Artificial Intelligence Applications and Innovations},
year={2020},
month={06},
date={2020-06-06},
url={https://issel.ee.auth.gr/wp-content/uploads/2020/05/AIAI2020.pdf},
keywords={Internet of Things;Precision Irrigation;eXplainable AI},
abstract={The increase of the adoption of IoT devices and the contemporary problem of food production have given rise to numerous applications of IoT in agriculture. These applications typically comprise a set of sensors that are installed in open fields and measure metrics, such as temperature or humidity, which are used for irrigation control systems. Though useful, most contemporary systems have high installation and maintenance costs, and they do not offer automated control or, if they do, they are usually not interpretable, and thus cannot be trusted for such critical applications. In this work, we design Vital, a system that incorporates a set of low-cost sensors, a robust data store, and most importantly an explainable AI decision support system. Our system outputs a fuzzy rule-base, which is interpretable and allows fully automating the irrigation of the fields. Upon evaluating Vital in two pilot cases, we conclude that it can be effective for monitoring open-field installations.}
}

Thomas Karanikiotis, Michail D. Papamichail, Kyriakos C. Chatzidimitriou, Napoleon-Christos I. Oikonomou, Andreas L. Symeonidis, and Sashi K. Saripalle
"Continuous Implicit Authentication through Touch Traces Modelling"
20th International Conference on Software Quality, Reliability and Security (QRS), pp. 111-120, 2020 Nov

Nowadays, the continuously increasing use of smartphones as the primary way of dealing with day-to-day tasks raises several concerns mainly focusing on privacy and security. In this context and given the known limitations and deficiencies of traditional authentication mechanisms, a lot of research efforts are targeted towards continuous implicit authentication on the basis of behavioral biometrics. In this work, we propose a methodology towards continuous implicit authentication that refrains from the limitations imposed by small-scale and/or controlled environment experiments by employing a real-world application used widely by a large number of individuals. Upon constructing our models using Support Vector Machines, we introduce a confidence-based methodology, in order to strengthen the effectiveness and the efficiency of our approach. The evaluation of our methodology on a set of diverse scenarios indicates that our approach achieves good results both in terms of efficiency and usability.
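
The confidence-based element can be pictured as thresholding the classifier's margin: low-confidence swipes are deferred instead of being forced into an accept/reject decision. A sketch under that assumption follows, with fabricated swipe features and an arbitrary threshold; it is not the paper's trained model.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Fabricated swipe features (e.g. speed, length, pressure) for the
# device owner (label 1) and for other users (label 0).
X = np.r_[rng.normal(0, 1, (100, 3)), rng.normal(2, 1, (100, 3))]
y = np.r_[np.ones(100), np.zeros(100)]
clf = SVC(kernel="rbf").fit(X, y)

def authenticate(trace, threshold=0.8):
    """Decide only when the margin is confident; defer otherwise."""
    margin = clf.decision_function(trace.reshape(1, -1))[0]
    if abs(margin) < threshold:
        return "defer"           # keep the session, wait for more traces
    return "owner" if margin > 0 else "impostor"

print(authenticate(rng.normal(0, 1, 3)))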

@inproceedings{ciaQRS2020,
author={Thomas Karanikiotis and Michail D. Papamichail and Kyriakos C. Chatzidimitriou and Napoleon-Christos I. Oikonomou and Andreas L. Symeonidis and Sashi K. Saripalle},
title={Continuous Implicit Authentication through Touch Traces Modelling},
booktitle={20th International Conference on Software Quality, Reliability and Security (QRS)},
pages={111-120},
year={2020},
month={11},
date={2020-11-04},
url={https://cassiopia.ee.auth.gr/index.php/s/suNwCr8hXVdmJFp/download},
keywords={Implicit Authentication;Smartphone Security;Touch Traces Modelling;Support Vector Machines},
abstract={Nowadays, the continuously increasing use of smartphones as the primary way of dealing with day-to-day tasks raises several concerns mainly focusing on privacy and security. In this context and given the known limitations and deficiencies of traditional authentication mechanisms, a lot of research efforts are targeted towards continuous implicit authentication on the basis of behavioral biometrics. In this work, we propose a methodology towards continuous implicit authentication that refrains from the limitations imposed by small-scale and/or controlled environment experiments by employing a real-world application used widely by a large number of individuals. Upon constructing our models using Support Vector Machines, we introduce a confidence-based methodology, in order to strengthen the effectiveness and the efficiency of our approach. The evaluation of our methodology on a set of diverse scenarios indicates that our approach achieves good results both in terms of efficiency and usability.}
}

Themistoklis Diamantopoulos, Nikolaos Oikonomou and Andreas Symeonidis
"Extracting Semantics from Question-Answering Services for Snippet Reuse"
Fundamental Approaches to Software Engineering, pp. 119-139, Springer International Publishing, Cham, 2020 Apr

Nowadays, software developers typically search online for reusable solutions to common programming problems. However, forming the question appropriately, and locating and integrating the best solution back to the code can be tricky and time consuming. As a result, several mining systems have been proposed to aid developers in the task of locating reusable snippets and integrating them into their source code. Most of these systems, however, do not model the semantics of the snippets in the context of source code provided. In this work, we propose a snippet mining system, named StackSearch, that extracts semantic information from Stack Overflow posts and recommends useful and in-context snippets to the developer. Using a hybrid language model that combines Tf-Idf and fastText, our system effectively understands the meaning of the given query and retrieves semantically similar posts. Moreover, the results are accompanied with useful metadata using a named entity recognition technique. Upon evaluating our system in a set of common programming queries, in a dataset based on post links, and against a similar tool, we argue that our approach can be useful for recommending ready-to-use snippets to the developer.
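
The hybrid language model pairs a lexical view (Tf-Idf) with a semantic one (fastText). The sketch below keeps the Tf-Idf half and substitutes character n-grams as a rough stand-in for fastText's subword vectors, so it illustrates the combination scheme rather than reproducing StackSearch; the equal 0.5 weights are also an assumption.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = ["how to read a file line by line in java",
         "parse a json string with python",
         "iterate over lines of a text file in python"]
query = ["read file lines python"]

word = TfidfVectorizer()                                        # lexical view
char = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))  # subword stand-in

sims = 0.5 * cosine_similarity(word.fit_transform(posts), word.transform(query)) \
     + 0.5 * cosine_similarity(char.fit_transform(posts), char.transform(query))
print(posts[int(np.argmax(sims))])   # best-matching post for the query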

@conference{FASE2020,
author={Themistoklis Diamantopoulos and Nikolaos Oikonomou and Andreas Symeonidis},
title={Extracting Semantics from Question-Answering Services for Snippet Reuse},
booktitle={Fundamental Approaches to Software Engineering},
pages={119-139},
publisher={Springer International Publishing},
address={Cham},
year={2020},
month={04},
date={2020-04-17},
url={https://link.springer.com/content/pdf/10.1007/978-3-030-45234-6_6.pdf},
doi={https://doi.org/10.1007/978-3-030-45234-6_6},
isbn={978-3-030-45234-6},
keywords={Code Search;Snippet Mining;Code Semantic Analysis;Question-Answering Systems},
abstract={Nowadays, software developers typically search online for reusable solutions to common programming problems. However, forming the question appropriately, and locating and integrating the best solution back to the code can be tricky and time consuming. As a result, several mining systems have been proposed to aid developers in the task of locating reusable snippets and integrating them into their source code. Most of these systems, however, do not model the semantics of the snippets in the context of source code provided. In this work, we propose a snippet mining system, named StackSearch, that extracts semantic information from Stack Overflow posts and recommends useful and in-context snippets to the developer. Using a hybrid language model that combines Tf-Idf and fastText, our system effectively understands the meaning of the given query and retrieves semantically similar posts. Moreover, the results are accompanied with useful metadata using a named entity recognition technique. Upon evaluating our system in a set of common programming queries, in a dataset based on post links, and against a similar tool, we argue that our approach can be useful for recommending ready-to-use snippets to the developer.}
}

Thomas Karanikiotis, Michail D. Papamichail, Giannis Gonidelis, Dimitra Karatza and Andreas L. Symeonidis
"A Data-driven Methodology towards Interpreting Readability against Software Properties"
Proceedings of the 15th International Conference on Software Technologies - ICSOFT, pp. 61-72, SciTePress, 2020 Jan

In the context of collaborative, agile software development, where effective and efficient software maintenance is of utmost importance, the need to produce readable source code is evident. Towards this direction, several approaches aspire to assess the extent to which a software component is readable. Most of them rely on experts who are responsible for determining the ground truth and/or set custom evaluation criteria, leading to results that are context-dependent and subjective. In this work, we employ a large set of static analysis metrics along with various coding violations towards interpreting readability as perceived by developers. In an effort to provide a fully automated and extendible methodology, we refrain from using experts; rather we harness data residing in online code hosting facilities towards constructing a dataset that includes more than one million methods that cover diverse development scenarios. After performing clustering based on source code size, we employ Support Vector Regression in order to interpret the extent to which a software component is readable on three axes: complexity, coupling, and documentation. Preliminary evaluation on several axes indicates that our approach effectively interprets readability as perceived by developers against the aforementioned three primary source code properties.
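
The pipeline, clustering methods by size and then fitting one regressor per cluster and property axis, takes only a few lines of scikit-learn. All arrays below are fabricated stand-ins for the million-method dataset, and three clusters is an arbitrary illustrative choice.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
loc = rng.integers(5, 500, 1000).reshape(-1, 1)   # method sizes (LOC)
metrics = rng.normal(size=(1000, 3))              # metrics for one axis
readability = rng.uniform(0, 1, 1000)             # target scores

# 1) group methods by size so models compare like with like
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(loc)

# 2) one Support Vector Regression model per size cluster
models = {c: SVR().fit(metrics[clusters == c], readability[clusters == c])
          for c in np.unique(clusters)}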

@inproceedings{karanikiotisICSOFT2020,
author={Thomas Karanikiotis and Michail D. Papamichail and Giannis Gonidelis and Dimitra Karatza and Andreas L. Symeonidis},
title={A Data-driven Methodology towards Interpreting Readability against Software Properties},
booktitle={Proceedings of the 15th International Conference on Software Technologies - ICSOFT},
pages={61-72},
publisher={SciTePress},
organization={INSTICC},
year={2020},
month={01},
date={2020-01-20},
url={https://doi.org/10.5220/0009891000610072},
doi={https://doi.org/10.5220/0009891000610072},
issn={2184-2833},
isbn={978-989-758-443-5},
keywords={Developer-perceived Readability;Readability Interpretation;Size-based Clustering;Support Vector Regression.},
abstract={In the context of collaborative, agile software development, where effective and efficient software maintenance is of utmost importance, the need to produce readable source code is evident. Towards this direction, several approaches aspire to assess the extent to which a software component is readable. Most of them rely on experts who are responsible for determining the ground truth and/or set custom evaluation criteria, leading to results that are context-dependent and subjective. In this work, we employ a large set of static analysis metrics along with various coding violations towards interpreting readability as perceived by developers. In an effort to provide a fully automated and extendible methodology, we refrain from using experts; rather we harness data residing in online code hosting facilities towards constructing a dataset that includes more than one million methods that cover diverse development scenarios. After performing clustering based on source code size, we employ Support Vector Regression in order to interpret the extent to which a software component is readable on three axes: complexity, coupling, and documentation. Preliminary evaluation on several axes indicates that our approach effectively interprets readability as perceived by developers against the aforementioned three primary source code properties.}
}

Themistoklis Diamantopoulos, Michail D. Papamichail, Thomas Karanikiotis, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"Employing Contribution and Quality Metrics for Quantifying the Software Development Process"
The 17th International Conference on Mining Software Repositories (MSR 2020), 2020 Jun

The full integration of online repositories in the contemporary software development process promotes remote work and remote collaboration. Apart from the apparent benefits, online repositories offer a deluge of data that can be utilized to monitor and improve the software development process. Towards this direction, we have designed and implemented a platform that analyzes data from GitHub in order to compute a series of metrics that quantify the contributions of project collaborators, both from a development as well as an operations (communication) perspective. We analyze contributions in an evolutionary manner throughout the projects' lifecycle and track the number of coding violations generated, this way aspiring to identify cases of software development that need closer monitoring and (possibly) further actions to be taken. In this context, we have analyzed the 3000 most popular Java GitHub projects and provide the data to the community.
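
At its simplest, quantifying contributions means aggregating per-collaborator event counts along the development and operations axes. The sketch below does so over a fabricated event log; a real pipeline would first pull commits, comments and issue events from GitHub, and the two event taxonomies here are illustrative.

from collections import Counter

# Fabricated event log of (user, event type) pairs.
events = [("alice", "commit"), ("alice", "issue_comment"),
          ("bob", "commit"), ("alice", "commit"),
          ("bob", "issue_closed"), ("carol", "issue_comment")]

dev = Counter(u for u, e in events if e in {"commit"})
ops = Counter(u for u, e in events if e in {"issue_comment", "issue_closed"})

for user in sorted({u for u, _ in events}):
    total = dev[user] + ops[user]
    print(f"{user}: development={dev[user]}, operations={ops[user]}, "
          f"dev share={dev[user] / total:.0%}")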

@conference{MSR2020,
author={Themistoklis Diamantopoulos and Michail D. Papamichail and Thomas Karanikiotis and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={Employing Contribution and Quality Metrics for Quantifying the Software Development Process},
booktitle={The 17th International Conference on Mining Software Repositories (MSR 2020)},
year={2020},
month={06},
date={2020-06-29},
url={https://issel.ee.auth.gr/wp-content/uploads/2020/05/MSR2020.pdf},
keywords={mining software repositories;contribution analysis;DevOps;GitHub issues;code violations},
abstract={The full integration of online repositories in the contemporary software development process promotes remote work and remote collaboration. Apart from the apparent benefits, online repositories offer a deluge of data that can be utilized to monitor and improve the software development process. Towards this direction, we have designed and implemented a platform that analyzes data from GitHub in order to compute a series of metrics that quantify the contributions of project collaborators, both from a development as well as an operations (communication) perspective. We analyze contributions in an evolutionary manner throughout the projects\' lifecycle and track the number of coding violations generated, this way aspiring to identify cases of software development that need closer monitoring and (possibly) further actions to be taken. In this context, we have analyzed the 3000 most popular Java GitHub projects and provide the data to the community.}
}

Eleni Poptsi, Despina Moraitou, Emmanouil Tsardoulias, Andreas L. Symeonidis and Magda Tsolaki
"Towards novel tools for discriminating healthy adults from people with neurocognitivedisorders: A pilot study utilizing the REMEDES for Alzheimer (R4Alz) battery"
2020 Alzheimer's Disease International Conference, 2020 Dec

Background: The early diagnosis of neurocognitive disorders before the onset of the symptoms of the clinical diagnosis is the ultimate goal of the scientific community. REMEDES for Alzheimer (R4Alz) is a battery, designed for assessing cognitive control abilities in people with minor and major neurocognitive disorders. The battery utilizes the “REMEDES” system, capable of measuring reflexes using visual and auditory triggers. The battery comprises three (3) tasks for assessing working memory capacity, attention control and inhibitory control, plus cognitive flexibility. Objectives: To investigate (a) whether the R4Alz battery’s tasks differentiate healthy adult controls (HAc) aged 20-85 years old from people with Subjective Cognitive Decline (SCD) and Mild Cognitive Impairment (MCI), (b) whether the battery is free of age, gender and educational level effects, and (c) the criterion-related validity of the R4Alz in all groups. Methods: The R4Alz battery was administered to 100 Greek adults, categorized into healthy adult controls (HAc) (n = 39), community-dwelling older adults with SCD (n = 25) and patients with MCI (n = 36). Statistical analysis comprised Analysis of Variance (ANOVA) and Multivariate Analysis of Covariance (MANCOVA) with age and demographics as covariates where necessary. The Scheffé post hoc test was applied to the battery’s tasks as well. Pearson’s Correlation was also used for the investigation of the criterion-related validity. Results: The updating of working memory task discriminates the three groups and is free of gender (p = 0.184), age (p = 0.280) and education (p = 0.367) effects. The attention control task also discriminates the three diagnostic groups, while being independent of gender (p = 0.465) and education (p = 0.061). The inhibition control task is also gender (p = 0.697), age (p = 0.604) and education (p = 0.111) independent and can discriminate HAc from MCI and SCD from MCI. Criterion-related validity in all groups was supported by significant correlations. The updating of working memory task was correlated with the n-back test, while the attention control task was correlated with the Paper and pencil Dual test and the Test of Everyday Attention (TEA). Finally, the inhibition control task of the R4Alz battery was correlated with the Color-Word Interference Test of D-KEFS. Conclusion: The preliminary data of this study indicate that the R4Alz battery is a novel technological approach regarding the psychometric assessment of people with minor and major cognitive deficits, since it is free of demographic effects and it can help with discriminating HAc from SCD and MCI, and SCD from MCI.
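
The group comparisons reported above rest on analysis of variance. Below is a minimal one-way ANOVA sketch across the three diagnostic groups, with invented scores and the abstract's group sizes; it illustrates the test only, not the study's MANCOVA with covariates.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Invented working-memory-updating scores per group (not the study's data).
hac = rng.normal(50, 8, 39)   # healthy adult controls
scd = rng.normal(44, 8, 25)   # subjective cognitive decline
mci = rng.normal(36, 8, 36)   # mild cognitive impairment

F, p = f_oneway(hac, scd, mci)
print(f"F = {F:.2f}, p = {p:.4f}")   # small p suggests group differences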

@conference{poptsiadi2020,
author={Eleni Poptsi and Despina Moraitou and Emmanouil Tsardoulias and Andreas L. Symeonidis and Magda Tsolaki},
title={Towards novel tools for discriminating healthy adults from people with neurocognitive disorders: A pilot study utilizing the REMEDES for Alzheimer (R4Alz) battery},
booktitle={2020 Alzheimer's Disease International Conference},
year={2020},
month={12},
date={2020-12-18},
url={https://adi2020.org/},
abstract={Background: The early diagnosis of neurocognitive disorders before the onset of the symptoms of the clinical diagnosis is the ultimate goal of the scientific community. REMEDES for Alzheimer (R4Alz) is a battery, designed for assessing cognitive control abilities in people with minor and major neurocognitive disorders. The battery utilizes the “REMEDES” system, capable of measuring reflexes using visual and auditory triggers. The battery comprises three (3) tasks for assessing working memory capacity, attention control and inhibitory control, plus cognitive flexibility. Objectives: To investigate (a) whether the R4Alz battery’s tasks differentiate healthy adult controls (HAc) aged 20-85 years old from people with Subjective Cognitive Decline (SCD) and Mild Cognitive Impairment (MCI), (b) whether the battery is free of age, gender and educational level effects, and (c) the criterion-related validity of the R4Alz in all groups. Methods: The R4Alz battery was administered to 100 Greek adults, categorized into healthy adult controls (HAc) (n = 39), community-dwelling older adults with SCD (n = 25) and patients with MCI (n = 36). Statistical analysis comprised Analysis of Variance (ANOVA) and Multivariate Analysis of Covariance (MANCOVA) with age and demographics as covariates where necessary. The Scheffé post hoc test was applied to the battery’s tasks as well. Pearson’s Correlation was also used for the investigation of the criterion-related validity. Results: The updating of working memory task discriminates the three groups and is free of gender (p = 0.184), age (p = 0.280) and education (p = 0.367) effects. The attention control task also discriminates the three diagnostic groups, while being independent of gender (p = 0.465) and education (p = 0.061). The inhibition control task is also gender (p = 0.697), age (p = 0.604) and education (p = 0.111) independent and can discriminate HAc from MCI and SCD from MCI. Criterion-related validity in all groups was supported by significant correlations. The updating of working memory task was correlated with the n-back test, while the attention control task was correlated with the Paper and pencil Dual test and the Test of Everyday Attention (TEA). Finally, the inhibition control task of the R4Alz battery was correlated with the Color-Word Interference Test of D-KEFS. Conclusion: The preliminary data of this study indicate that the R4Alz battery is a novel technological approach regarding the psychometric assessment of people with minor and major cognitive deficits, since it is free of demographic effects and it can help with discriminating HAc from SCD and MCI, and SCD from MCI.}
}

Vasileios Matsoukas, Themistoklis Diamantopoulos, Michail D. Papamichail and Andreas L. Symeonidis
"Towards Analyzing Contributions from Software Repositories to Optimize Issue Assignment"
Proceedings of the 2020 IEEE International Conference on Software Quality, Reliability and Security (QRS), IEEE, Vilnius, Lithuania, 2020 Jul

Most software teams nowadays host their projects online and monitor software development in the form of issues/tasks. This process entails communicating through comments and reporting progress through commits and closing issues. In this context, assigning new issues, tasks or bugs to the most suitable contributor largely improves efficiency. Thus, several automated issue assignment approaches have been proposed, which however have major limitations. Most systems focus only on assigning bugs using textual data, are limited to projects explicitly using bug tracking systems, and may require manually tuning parameters per project. In this work, we build an automated issue assignment system for GitHub, taking into account the commits and issues of the repository under analysis. Our system aggregates feature probabilities using a neural network that adapts to each project, thus not requiring manual parameter tuning. Upon evaluating our methodology, we conclude that it can be efficient for automated issue assignment.
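
The central design choice, aggregating per-feature suitability probabilities with a small neural network instead of hand-tuned weights, can be sketched as follows. The three-feature layout, the synthetic labels and the network size are assumptions made for illustration only.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Each row: per-feature probabilities that a contributor suits an issue
# (e.g. text similarity, commit history, past assignments); fabricated.
X = rng.uniform(0, 1, (500, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + rng.normal(0, 0.1, 500) > 0.5).astype(int)

# A small network learns the per-project weighting of the features,
# replacing manually tuned aggregation parameters.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000,
                    random_state=0).fit(X, y)
print(net.predict_proba([[0.9, 0.2, 0.4]]))  # suitability of one contributor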

@conference{QRS2020IssueAssignment,
author={Vasileios Matsoukas and Themistoklis Diamantopoulos and Michail D. Papamichail and Andreas L. Symeonidis},
title={Towards Analyzing Contributions from Software Repositories to Optimize Issue Assignment},
booktitle={Proceedings of the 2020 IEEE International Conference on Software Quality, Reliability and Security (QRS)},
publisher={IEEE},
address={Vilnius, Lithuania},
year={2020},
month={07},
date={2020-07-31},
url={https://issel.ee.auth.gr/wp-content/uploads/2020/07/QRS2020IssueAssignment.pdf},
keywords={GitHub issues;automated issue assignment;issue triaging},
abstract={Most software teams nowadays host their projects online and monitor software development in the form of issues/tasks. This process entails communicating through comments and reporting progress through commits and closing issues. In this context, assigning new issues, tasks or bugs to the most suitable contributor largely improves efficiency. Thus, several automated issue assignment approaches have been proposed, which however have major limitations. Most systems focus only on assigning bugs using textual data, are limited to projects explicitly using bug tracking systems, and may require manually tuning parameters per project. In this work, we build an automated issue assignment system for GitHub, taking into account the commits and issues of the repository under analysis. Our system aggregates feature probabilities using a neural network that adapts to each project, thus not requiring manual parameter tuning. Upon evaluating our methodology, we conclude that it can be efficient for automated issue assignment.}
}

Anastasios Tzitzis, Alexandros Filotheou, Stavroula Siachalou, Emmanouil Tsardoulias, Spyros Megalou, Aggelos Bletsas, Konstantinos Panayiotou, Andreas Symeonidis, Traianos Yioultsis and Antonis G. Dimitriou
"Real-time 3D localization of RFID-tagged products by ground robots and drones with commercial off-the-shelf RFID equipment: Challenges and Solutions"
2020 IEEE International Conference on RFID (RFID), 2020 Oct

In this paper we investigate the problem of localizing passive RFID tags by ground robots and drones. We focus on autonomous robots, capable of entering a previously unknown environment, creating a 3D map of it, navigating safely in it, localizing themselves while moving, then localizing all RFID tagged objects and pinpointing their locations in the 3D map with cm accuracy. To the best of our knowledge, this is the first paper that presents the complex joint problem, including challenges from the field of robotics - i) sensors utilization, ii) local and global path planners, iii) navigation, iv) simultaneous localization of the robot and mapping - and from the field of RFIDs - v) localization of the tags. We restrict our analysis to solutions, involving commercial UHF EPC Gen2 RFID tags, commercial off-the-shelf RFID readers and 3D real-time-only methods for tag-localization. We briefly present a new method, suitable for real-time 3D inventorying, and compare it with our two recent methods. Comparison is carried out on a new set of experiments, conducted in a multipath-rich indoor environment, where the actual problem is treated; i.e. our prototype robot constructs a 3D map, navigates in the environment, continuously estimates its poses as well as the locations of the surrounding tags. Localization results are given in a few seconds for 100 tags, parsing approximately 100000 measured samples from 4 antennas, collected within 4 minutes and achieving a mean 3D error of 25cm, which includes the error propagating from robotics and the uncertainty related to the "ground truth" of the tags' placement.

@conference{tzitzis2020realtime,
author={Anastasios Tzitzis and Alexandros Filotheou and Stavroula Siachalou and Emmanouil Tsardoulias and Spyros Megalou and Aggelos Bletsas and Konstantinos Panayiotou and Andreas Symeonidis and Traianos Yioultsis and Antonis G. Dimitriou},
title={Real-time 3D localization of RFID-tagged products by ground robots and drones with commercial off-the-shelf RFID equipment: Challenges and Solutions},
booktitle={2020 IEEE International Conference on RFID (RFID)},
year={2020},
month={10},
date={2020-10-28},
url={https://ieeexplore.ieee.org/document/9244904},
doi={https://doi.org/10.1109/RFID49298.2020.9244904},
abstract={In this paper we investigate the problem of localizing passive RFID tags by ground robots and drones. We focus on autonomous robots, capable of entering a previously unknown environment, creating a 3D map of it, navigating safely in it, localizing themselves while moving, then localizing all RFID tagged objects and pinpointing their locations in the 3D map with cm accuracy. To the best of our knowledge, this is the first paper that presents the complex joint problem, including challenges from the field of robotics - i) sensors utilization, ii) local and global path planners, iii) navigation, iv) simultaneous localization of the robot and mapping - and from the field of RFIDs - v) localization of the tags. We restrict our analysis to solutions, involving commercial UHF EPC Gen2 RFID tags, commercial off-the-shelf RFID readers and 3D real-time-only methods for tag-localization. We briefly present a new method, suitable for real-time 3D inventorying, and compare it with our two recent methods. Comparison is carried out on a new set of experiments, conducted in a multipath-rich indoor environment, where the actual problem is treated; i.e. our prototype robot constructs a 3D map, navigates in the environment, continuously estimates its poses as well as the locations of the surrounding tags. Localization results are given in a few seconds for 100 tags, parsing approximately 100000 measured samples from 4 antennas, collected within 4 minutes and achieving a mean 3D error of 25cm, which includes the error propagating from robotics and the uncertainty related to the \"ground truth\" of the tags\' placement.}
}

2020

Inbooks

Antonis G. Dimitriou, Stavroula Siachalou, Emmanouil Tsardoulias and Loukas Petrou
"Robotics Meets RFID for Simultaneous Localization (of Robots and Objects) and Mapping (SLAM) – A Joined Problem"
Chapter 7, John Wiley & Sons, Inc., 2020 Feb

Localization of wirelessly powered devices is essential for many applications related to the Internet of Things and Ubiquitous Computing. The chapter is focused on deploying a moving robotic platform, i.e. a robot, which hosts radio frequency identification (RFID) equipment and aims to locate passive RFID tags attached on objects in the surrounding area. The robot hosts additional sensors, namely lidar and depth cameras, enabling it to perform SLAM – simultaneous localization (of its own location) and mapping of any (including previously unknown) area. Furthermore, it can avoid obstacles (including people) and perform and update path planning. Thanks to its movement, the robot collects a huge amount of data related to received signal strength information (RSSI) and phase information of each tag, realizing the concept of a “virtual antenna array”; i.e. a moving antenna at multiple locations. The antenna‐equipped robot behaves similarly to a synthetic‐aperture radar. The main application is continuous inventorying and localization; focusing on warehouse management, large retail stores, libraries, etc. The main advantage of the robotic approach versus static‐reader‐antenna deployments arises from the equivalent cost‐reduction per square meter of target area, since a single robot can circulate continuously around any area, whereas a fixed RFID‐network would necessitate infrastructure costs analogous to the size of the area. Another advantage is the huge amount of data from different locations (of the robot) available to be exploited for more accurate RFID localization. Compared to a fixed installation, the disadvantage is that the robot does not cover the entire area simultaneously. Depending on the size of the target area and the desired inventorying update rate, additional robots could be deployed. In this chapter, the localization problem is presented and linked to practical applications. Representative prior‐art is analyzed and discussed. The SLAM problem is also discussed, while related state‐of‐the‐art is presented. Moreover, experimental results by an actual robot are demonstrated. A robot collects phase and RSSI measurements by RFID tags. It is shown that positioning accuracy is affected by both robotics' SLAM accuracy as well as the disruption of tags' backscattered signal due to fading. Finally, techniques to improve the system are discussed.

@inbook{etsardouRfid2020,
author={Antonis G. Dimitriou and Stavroula Siachalou and Emmanouil Tsardoulias and Loukas Petrou},
title={Robotics Meets RFID for Simultaneous Localization (of Robots and Objects) and Mapping (SLAM) – A Joined Problem},
chapter={7},
publisher={John Wiley & Sons, Inc.},
year={2020},
month={02},
date={2020-02-04},
url={https://onlinelibrary.wiley.com/doi/abs/10.1002/9781119578598.ch7},
doi={https://doi.org/10.1002/9781119578598.ch7},
publisherurl={https://doi.org/10.1002/9781119578598.ch7},
abstract={Localization of wirelessly powered devices is essential for many applications related to the Internet of Things and Ubiquitous Computing. The chapter is focused on deploying a moving robotic platform, i.e. a robot, which hosts radio frequency identification (RFID) equipment and aims to locate passive RFID tags attached on objects in the surrounding area. The robot hosts additional sensors, namely lidar and depth cameras, enabling it to perform SLAM – simultaneous localization (of its own location) and mapping of any (including previously unknown) area. Furthermore, it can avoid obstacles (including people) and perform and update path planning. Thanks to its movement, the robot collects a huge amount of data related to received signal strength information (RSSI) and phase information of each tag, realizing the concept of a “virtual antenna array”; i.e. a moving antenna at multiple locations. The antenna‐equipped robot behaves similarly to a synthetic‐aperture radar. The main application is continuous inventorying and localization; focusing on warehouse management, large retail stores, libraries, etc. The main advantage of the robotic approach versus static‐reader‐antenna deployments arises from the equivalent cost‐reduction per square meter of target area, since a single robot can circulate continuously around any area, whereas a fixed RFID‐network would necessitate infrastructure costs analogous to the size of the area. Another advantage is the huge amount of data from different locations (of the robot) available to be exploited for more accurate RFID localization. Compared to a fixed installation, the disadvantage is that the robot does not cover the entire area simultaneously. Depending on the size of the target area and the desired inventorying update rate, additional robots could be deployed. In this chapter, the localization problem is presented and linked to practical applications. Representative prior‐art is analyzed and discussed. The SLAM problem is also discussed, while related state‐of‐the‐art is presented. Moreover, experimental results by an actual robot are demonstrated. A robot collects phase and RSSI measurements by RFID tags. It is shown that positioning accuracy is affected by both robotics\' SLAM accuracy as well as the disruption of tags\' backscattered signal due to fading. Finally, techniques to improve the system are discussed.}
}

2019

Journal Articles

Alexandros Filotheou, Emmanouil Tsardoulias, Antonis Dimitriou, Andreas Symeonidis and Loukas Petrou
"Quantitative and Qualitative Evaluation of ROS-Enabled Local and Global Planners in 2D Static Environments"
Journal of Intelligent & Robotic Systems, 2019 Oct

Apart from perception, one of the most fundamental aspects of an autonomous mobile robot is the ability to adequately and safely traverse the environment it operates in. This ability is called Navigation and is performed in a two- or three-dimensional fashion, except for cases where the robot is neither a ground vehicle nor articulated (e.g. robotics arms). The planning part of navigation comprises a global planner, suitable for generating a path from an initial to a target pose, and a local planner tasked with traversing the aforementioned path while dealing with environmental, sensorial and motion uncertainties. However, the task of selecting the optimal global and/or local planner combination is quite hard since no research provides insight on which is best regarding the domain and planner limitations. In this context, current work performs a comparative analysis on qualitative and quantitative aspects of the most common ROS-enabled global and local planners for robots operating in two-dimensional static environments, on the basis of mission-centered and planner-related metrics, optimality and traversability aspects, as well as non-measurable aspects, such as documentation quality, parameterisability, ease of use, etc.
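
Two of the mission-centered metrics such a comparison rests on, path length and path smoothness, are easy to make concrete. The sketch below computes both from a waypoint path; these are common conventions and not necessarily the exact formulas used in the paper.

import numpy as np

def path_length(path):
    """Total length of a 2D path given as an (N, 2) array of waypoints."""
    return float(np.linalg.norm(np.diff(path, axis=0), axis=1).sum())

def smoothness(path):
    """Sum of absolute heading changes; lower means a smoother path."""
    dx, dy = np.diff(path, axis=0).T
    turns = np.diff(np.arctan2(dy, dx))
    turns = (turns + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return float(np.abs(turns).sum())

path = np.array([[0, 0], [1, 0], [2, 1], [3, 1]], float)
print(path_length(path), smoothness(path))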

@article{Filotheou2019,
author={Alexandros Filotheou and Emmanouil Tsardoulias and Antonis Dimitriou and Andreas Symeonidis and Loukas Petrou},
title={Quantitative and Qualitative Evaluation of ROS-Enabled Local and Global Planners in 2D Static Environments},
journal={Journal of Intelligent & Robotic Systems},
year={2019},
month={10},
date={2019-10-21},
url={https://bit.ly/2yylSu4},
doi={https://doi.org/10.1007/s10846-019-01086-y},
issn={1573-0409},
abstract={Apart from perception, one of the most fundamental aspects of an autonomous mobile robot is the ability to adequately and safely traverse the environment it operates in. This ability is called Navigation and is performed in a two- or three-dimensional fashion, except for cases where the robot is neither a ground vehicle nor articulated (e.g. robotics arms). The planning part of navigation comprises a global planner, suitable for generating a path from an initial to a target pose, and a local planner tasked with traversing the aforementioned path while dealing with environmental, sensorial and motion uncertainties. However, the task of selecting the optimal global and/or local planner combination is quite hard since no research provides insight on which is best regarding the domain and planner limitations. In this context, current work performs a comparative analysis on qualitative and quantitative aspects of the most common ROS-enabled global and local planners for robots operating in two-dimensional static environments, on the basis of mission-centered and planner-related metrics, optimality and traversability aspects, as well as non-measurable aspects, such as documentation quality, parameterisability, ease of use, etc.}
}

Emmanouil Krasanakis, Emmanouil Schinas, Symeon Papadopoulos, Yiannis Kompatsiaris and Andreas Symeonidis
"Boosted seed oversampling for local community ranking"
Information Processing & Management, pp. 102053, 2019 Jun

Local community detection is an emerging topic in network analysis that aims to detect well-connected communities encompassing sets of priorly known seed nodes. In this work, we explore the similar problem of ranking network nodes based on their relevance to the communities characterized by seed nodes. However, seed nodes may not be central enough or sufficiently many to produce high quality ranks. To solve this problem, we introduce a methodology we call seed oversampling, which first runs a node ranking algorithm to discover more nodes that belong to the community and then reruns the same ranking algorithm for the new seed nodes. We formally discuss why this process improves the quality of calculated community ranks if the original set of seed nodes is small and introduce a boosting scheme that iteratively repeats seed oversampling to further improve rank quality when certain ranking algorithm properties are met. Finally, we demonstrate the effectiveness of our methods in improving community relevance ranks given only a few random seed nodes of real-world network communities. In our experiments, boosted and simple seed oversampling yielded better rank quality than the previous neighborhood inflation heuristic, which adds the neighborhoods of original seed nodes to seeds.
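
The seed oversampling loop, rank from the seeds, promote the top-ranked nodes to seeds, then rerank, can be sketched with personalized PageRank as the base ranking algorithm. PageRank and the top-k value are illustrative choices; the paper frames the method for any ranking algorithm meeting certain properties.

import networkx as nx

def seed_oversampling(G, seeds, top_k=5):
    """Rank, promote the top-k ranked nodes to seeds, then rerank."""
    base = nx.pagerank(G, personalization={s: 1.0 for s in seeds})
    top = sorted(base, key=base.get, reverse=True)[:top_k]
    new_seeds = set(seeds) | set(top)
    return nx.pagerank(G, personalization={s: 1.0 for s in new_seeds})

G = nx.karate_club_graph()
ranks = seed_oversampling(G, seeds=[0, 1])
print(sorted(ranks, key=ranks.get, reverse=True)[:5])  # top community nodes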

@article{KRASANAKIS2019102053,
author={Emmanouil Krasanakis and Emmanouil Schinas and Symeon Papadopoulos and Yiannis Kompatsiaris and Andreas Symeonidis},
title={Boosted seed oversampling for local community ranking},
journal={Information Processing & Management},
pages={102053},
year={2019},
month={06},
date={2019-06-19},
doi={https://doi.org/10.1016/j.ipm.2019.06.002},
issn={0306-4573},
publisher_url={http://www.sciencedirect.com/science/article/pii/S0306457318308574},
abstract={Local community detection is an emerging topic in network analysis that aims to detect well-connected communities encompassing sets of priorly known seed nodes. In this work, we explore the similar problem of ranking network nodes based on their relevance to the communities characterized by seed nodes. However, seed nodes may not be central enough or sufficiently many to produce high quality ranks. To solve this problem, we introduce a methodology we call seed oversampling, which first runs a node ranking algorithm to discover more nodes that belong to the community and then reruns the same ranking algorithm for the new seed nodes. We formally discuss why this process improves the quality of calculated community ranks if the original set of seed nodes is small and introduce a boosting scheme that iteratively repeats seed oversampling to further improve rank quality when certain ranking algorithm properties are met. Finally, we demonstrate the effectiveness of our methods in improving community relevance ranks given only a few random seed nodes of real-world network communities. In our experiments, boosted and simple seed oversampling yielded better rank quality than the previous neighborhood inflation heuristic, which adds the neighborhoods of original seed nodes to seeds.}
}
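
The oversampling loop outlined in the abstract is compact enough to sketch directly: rank nodes from the original seeds, promote the top-ranked non-seed nodes to seeds, and rerun the same ranking algorithm. The sketch below uses networkx personalized PageRank as the base ranker on a toy graph; the ranker choice, the graph and the number of promoted nodes are assumptions of this illustration, not the paper's exact setup. The boosted variant described above would repeat this promote-and-rerank loop.

import networkx as nx

def seed_oversampling(G, seeds, ranker, extra=5):
    """Rank once from the seeds, add the top non-seed nodes, then rerank."""
    ranks = ranker(G, seeds)
    candidates = sorted((n for n in ranks if n not in seeds),
                        key=ranks.get, reverse=True)
    return ranker(G, set(seeds) | set(candidates[:extra]))

def pagerank_ranker(G, seeds):
    # Personalized PageRank restarted on the seed set.
    personalization = {n: 1.0 if n in seeds else 0.0 for n in G}
    return nx.pagerank(G, personalization=personalization)

G = nx.karate_club_graph()
ranks = seed_oversampling(G, {0, 1}, pagerank_ranker)
print(sorted(ranks, key=ranks.get, reverse=True)[:5])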

Michail Papamichail, Kyriakos Chatzidimitriou, Thomas Karanikiotis, Napoleon-Christos Oikonomou, Andreas Symeonidis and Sashi Saripalle
"BrainRun: A Behavioral Biometrics Dataset towards Continuous Implicit Authentication"
Data, 4, (2), 2019 May

The widespread use of smartphones has dictated a new paradigm, where mobile applications are the primary channel for dealing with day-to-day tasks. This paradigm is full of sensitive information, making security of utmost importance. To that end, and given the traditional authentication techniques (passwords and/or unlock patterns) which have become ineffective, several research efforts are targeted towards biometrics security, while more advanced techniques are considering continuous implicit authentication on the basis of behavioral biometrics. However, most studies in this direction are performed “in vitro” resulting in small-scale experimentation. In this context, and in an effort to create a solid information basis upon which continuous authentication models can be built, we employ the real-world application “BrainRun”, a brain-training game aiming at boosting cognitive skills of individuals. BrainRun embeds a gestures capturing tool, so that the different types of gestures that describe the swiping behavior of users are recorded and thus can be modeled. Upon releasing the application at both the “Google Play Store” and “Apple App Store”, we construct a dataset containing gestures and sensors data for more than 2000 different users and devices. The dataset is distributed under the CC0 license and can be found at the EU Zenodo repository.

@article{Papamichail2019,
author={Michail Papamichail and Kyriakos Chatzidimitriou and Thomas Karanikiotis and Napoleon-Christos Oikonomou and Andreas Symeonidis and Sashi Saripalle},
title={BrainRun: A Behavioral Biometrics Dataset towards Continuous Implicit Authentication},
journal={Data},
volume={4},
number={2},
year={2019},
month={05},
date={2019-05-03},
url={https://res.mdpi.com/data/data-04-00060/article_deploy/data-04-00060.pdf?filename=&attachment=1},
doi={https://doi.org/10.3390/data4020060},
issn={2306-5729},
abstract={The widespread use of smartphones has dictated a new paradigm, where mobile applications are the primary channel for dealing with day-to-day tasks. This paradigm is full of sensitive information, making security of utmost importance. To that end, and given the traditional authentication techniques (passwords and/or unlock patterns) which have become ineffective, several research efforts are targeted towards biometrics security, while more advanced techniques are considering continuous implicit authentication on the basis of behavioral biometrics. However, most studies in this direction are performed “in vitro” resulting in small-scale experimentation. In this context, and in an effort to create a solid information basis upon which continuous authentication models can be built, we employ the real-world application “BrainRun”, a brain-training game aiming at boosting cognitive skills of individuals. BrainRun embeds a gestures capturing tool, so that the different types of gestures that describe the swiping behavior of users are recorded and thus can be modeled. Upon releasing the application at both the “Google Play Store” and “Apple App Store”, we construct a dataset containing gestures and sensors data for more than 2000 different users and devices. The dataset is distributed under the CC0 license and can be found at the EU Zenodo repository.}
}
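
Datasets of this kind typically store each swipe as a sequence of timestamped touch points, from which behavioral features can be derived. The following sketch computes three simple swipe features; the record layout and field names are hypothetical stand-ins, not the published BrainRun schema.

import math

# Hypothetical gesture record: timestamped touch points of one swipe.
gesture = {
    "points": [
        {"t": 0.000, "x": 120.0, "y": 640.0},
        {"t": 0.040, "x": 160.0, "y": 610.0},
        {"t": 0.085, "x": 210.0, "y": 585.0},
    ]
}

def swipe_features(g):
    """Duration, travelled length and mean speed of a single swipe."""
    pts = g["points"]
    duration = pts[-1]["t"] - pts[0]["t"]
    length = sum(math.hypot(b["x"] - a["x"], b["y"] - a["y"])
                 for a, b in zip(pts, pts[1:]))
    return {"duration_s": duration,
            "length_px": length,
            "mean_speed_px_s": length / duration if duration else 0.0}

print(swipe_features(gesture))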

Michail D. Papamichail, Themistoklis Diamantopoulos and Andreas L. Symeonidis
"Software Reusability Dataset based on Static Analysis Metrics and Reuse Rate Information"
Data in Brief, 2019 Dec

The widely adopted component-based development paradigm considers the reuse of proper software components as a primary criterion for successful software development. As a result, various research efforts are directed towards evaluating the extent to which a software component is reusable. Prior efforts follow expert-based approaches; however, the continuously increasing open-source software initiative allows the introduction of data-driven alternatives. In this context we have generated a dataset that harnesses information residing in online code hosting facilities and introduces the actual reuse rate of software components as a measure of their reusability. To do so, we have analyzed the most popular projects included in the Maven registry and have computed a large number of static analysis metrics at both class and package levels using the SourceMeter tool [2] that quantify six major source code properties: complexity, cohesion, coupling, inheritance, documentation and size. For these projects we additionally computed their reuse rate using our self-developed code search engine, AGORA [5]. The generated dataset contains analysis information regarding more than 24,000 classes and 2,000 packages, and can, thus, be used as the information basis towards the design and development of data-driven reusability evaluation methodologies. The dataset is related to the research article entitled "Measuring the Reusability of Software Components using Static Analysis Metrics and Reuse Rate Information".

@article{PAPAMICHAIL2019104687,
author={Michail D. Papamichail and Themistoklis Diamantopoulos and Andreas L. Symeonidis},
title={Software Reusability Dataset based on Static Analysis Metrics and Reuse Rate Information},
journal={Data in Brief},
year={2019},
month={12},
date={2019-12-31},
url={https://reader.elsevier.com/reader/sd/pii/S235234091931042X?token=9CDEB13940390201A35D26027D763CACB6EE4D49BFA9B920C4D32B348809F1F6A7DE309AA1737161C7E5BF1963BBD952},
doi={https://doi.org/10.1016/j.dib.2019.104687},
keywords={developer-perceived reusability;code reuse;static analysis metrics;Reusability assessment},
abstract={The widely adopted component-based development paradigm considers the reuse of proper software components as a primary criterion for successful software development. As a result, various research efforts are directed towards evaluating the extent to which a software component is reusable. Prior efforts follow expert-based approaches; however, the continuously increasing open-source software initiative allows the introduction of data-driven alternatives. In this context we have generated a dataset that harnesses information residing in online code hosting facilities and introduces the actual reuse rate of software components as a measure of their reusability. To do so, we have analyzed the most popular projects included in the Maven registry and have computed a large number of static analysis metrics at both class and package levels using the SourceMeter tool [2] that quantify six major source code properties: complexity, cohesion, coupling, inheritance, documentation and size. For these projects we additionally computed their reuse rate using our self-developed code search engine, AGORA [5]. The generated dataset contains analysis information regarding more than 24,000 classes and 2,000 packages, and can, thus, be used as the information basis towards the design and development of data-driven reusability evaluation methodologies. The dataset is related to the research article entitled \"Measuring the Reusability of Software Components using Static Analysis Metrics and Reuse Rate Information\".}
}

Michail D. Papamichail, Themistoklis Diamantopoulos and Andreas L. Symeonidis
"Measuring the Reusability of Software Components using Static Analysis Metrics and Reuse Rate Information"
Journal of Systems and Software, pp. 110423, 2019 Sep

Nowadays, the continuously evolving open-source community and the increasing demands of end users are forming a new software development paradigm; developers rely more on reusing components from online sources to minimize the time and cost of software development. An important challenge in this context is to evaluate the degree to which a software component is suitable for reuse, i.e. its reusability. Contemporary approaches assess reusability using static analysis metrics by relying on the help of experts, who usually set metric thresholds or provide ground truth values so that estimation models are built. However, even when expert help is available, it may still be subjective or case-specific. In this work, we refrain from expert-based solutions and employ the actual reuse rate of source code components as ground truth for building a reusability estimation model. We initially build a benchmark dataset, harnessing the power of online repositories to determine the number of reuse occurrences for each component in the dataset. Subsequently, we build a model based on static analysis metrics to assess reusability from five different properties: complexity, cohesion, coupling, inheritance, documentation and size. The evaluation of our methodology indicates that our system can effectively assess reusability as perceived by developers.

@article{PAPAMICHAIL2019110423,
author={Michail D. Papamichail and Themistoklis Diamantopoulos and Andreas L. Symeonidis},
title={Measuring the Reusability of Software Components using Static Analysis Metrics and Reuse Rate Information},
journal={Journal of Systems and Software},
pages={110423},
year={2019},
month={09},
date={2019-09-17},
url={https://issel.ee.auth.gr/wp-content/uploads/2019/09/2019mpapamicJSS.pdf},
doi={https://doi.org/10.1016/j.jss.2019.110423},
issn={0164-1212},
publisher_url={https://www.sciencedirect.com/science/article/pii/S0164121219301979},
keywords={developer-perceived reusability;code reuse;static analysis metrics;reusability estimation},
abstract={Nowadays, the continuously evolving open-source community and the increasing demands of end users are forming a new software development paradigm; developers rely more on reusing components from online sources to minimize the time and cost of software development. An important challenge in this context is to evaluate the degree to which a software component is suitable for reuse, i.e. its reusability. Contemporary approaches assess reusability using static analysis metrics by relying on the help of experts, who usually set metric thresholds or provide ground truth values so that estimation models are built. However, even when expert help is available, it may still be subjective or case-specific. In this work, we refrain from expert-based solutions and employ the actual reuse rate of source code components as ground truth for building a reusability estimation model. We initially build a benchmark dataset, harnessing the power of online repositories to determine the number of reuse occurrences for each component in the dataset. Subsequently, we build a model based on static analysis metrics to assess reusability from five different properties: complexity, cohesion, coupling, inheritance, documentation and size. The evaluation of our methodology indicates that our system can effectively assess reusability as perceived by developers.}
}
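
The modeling step pairs static analysis metric vectors with mined reuse rates serving as the regression target. A minimal sketch with scikit-learn on synthetic data is shown below; the feature layout, the log-scaled target and the random forest model are assumptions of the illustration, not the system described in the paper.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for the dataset: one row per component, columns are static
# analysis metrics (complexity, cohesion, coupling, inheritance,
# documentation, size); the target is a log-scaled reuse count as would
# be mined by a code search engine.
rng = np.random.default_rng(0)
X = rng.random((200, 6))                  # 6 metric properties per component
y = np.log1p(rng.poisson(10 * X[:, 0]))   # synthetic reuse counts

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("predicted log reuse:", model.predict(X[:3]))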

Eleni Poptsi, Emmanouil Tsardoulias, Despina Moraitou, Andreas Symeonidis and Magda Tsolaki
"REMEDES for Alzheimer-R4Alz Battery: Design and Development of a New Tool of Cognitive Control Assessment for the Diagnosis of Minor and Major Neurocognitive Disorders"
Journal of Alzheimer's Disease, pp. 1-19, 2019 Oct

Background: Subjective cognitive decline (SCD) and mild cognitive impairment (MCI) are acknowledged stages of the clinical spectrum of Alzheimer’s disease (AD), and cognitive control seems to be among the first neuropsychological predictors of cognitive decline. Existing tests are usually affected by educational level, linguistic abilities, cultural differences, and social status, rendering them error-prone when differentiating between the aforementioned stages. Creating robust neuropsychological tests is therefore a prominent need. Objective: The design of a novel psychometric battery for cognitive control and attention assessment, free of demographic effects, capable of discriminating cognitively healthy aging, SCD, MCI, and mild Dementia (mD). Methods: The battery’s initial hypothesis was tuned using iterations of administration to randomly sampled healthy older adults and people with SCD, MCI, and mD from the area of Thessaloniki, Greece. This resulted in the first release of the REflexes MEasurement DEviceS for Alzheimer battery (REMEDES for Alzheimer-R4Alz). Results: The first release lasts for almost an hour. The battery was designed to assess working memory (WM), including WM storage, processing, and updating, enriched by episodic buffer recruitment. It was also designed to assess attention control abilities comprising selective, sustained, and divided attention subtasks. Finally, it comprises an inhibitory control, a task/rule switching or set-shifting, and a cognitive flexibility subtask as a combination of inhibition and task/rule switching abilities. Conclusion: The R4Alz battery is an easy-to-use psychometric battery with increasing difficulty levels and presumably ecological validity, being entertaining for older adults, potentially free of demographic effects, and promising as a more accurate and early diagnosis tool of neurodegeneration.

@article{poptsi2019remedes,
author={Eleni Poptsi and Emmanouil Tsardoulias and Despina Moraitou and Andreas Symeonidis and Magda Tsolaki},
title={REMEDES for Alzheimer-R4Alz Battery: Design and Development of a New Tool of Cognitive Control Assessment for the Diagnosis of Minor and Major Neurocognitive Disorders},
journal={Journal of Alzheimer's Disease},
pages={1-19},
year={2019},
month={10},
date={2019-10-18},
doi={https://doi.org/10.3233/JAD-190798},
publisher_url={https://content.iospress.com/},
abstract={Background: Subjective cognitive decline (SCD) and mild cognitive impairment (MCI) are acknowledged stages of the clinical spectrum of Alzheimer’s disease (AD), and cognitive control seems to be among the first neuropsychological predictors of cognitive decline. Existing tests are usually affected by educational level, linguistic abilities, cultural differences, and social status, rendering them error-prone when differentiating between the aforementioned stages. Creating robust neuropsychological tests is therefore a prominent need. Objective: The design of a novel psychometric battery for cognitive control and attention assessment, free of demographic effects, capable of discriminating cognitively healthy aging, SCD, MCI, and mild Dementia (mD). Methods: The battery’s initial hypothesis was tuned using iterations of administration to randomly sampled healthy older adults and people with SCD, MCI, and mD from the area of Thessaloniki, Greece. This resulted in the first release of the REflexes MEasurement DEviceS for Alzheimer battery (REMEDES for Alzheimer-R4Alz). Results: The first release lasts for almost an hour. The battery was designed to assess working memory (WM), including WM storage, processing, and updating, enriched by episodic buffer recruitment. It was also designed to assess attention control abilities comprising selective, sustained, and divided attention subtasks. Finally, it comprises an inhibitory control, a task/rule switching or set-shifting, and a cognitive flexibility subtask as a combination of inhibition and task/rule switching abilities. Conclusion: The R4Alz battery is an easy-to-use psychometric battery with increasing difficulty levels and presumably ecological validity, being entertaining for older adults, potentially free of demographic effects, and promising as a more accurate and early diagnosis tool of neurodegeneration.}
}

Emmanouil G. Tsardoulias, M. Protopapas, Andreas L. Symeonidis and Loukas Petrou
"A Comparative Analysis of Pattern Matching Techniques Towards OGM Evaluation"
Journal of Intelligent & Robotic Systems, 2019 Jul

The alignment of two occupancy grid maps generated by SLAM algorithms is a quite researched problem, being an obligatory step either for unsupervised map merging techniques or for evaluation of OGMs (Occupancy Grid Maps) against a blueprint of the environment. This paper provides an overview of the existing automatic alignment techniques of two occupancy grid maps that employ pattern matching. Additionally, an alignment pipeline using local features and image descriptors is implemented, as well as a method to eliminate erroneous correspondences, aiming at producing the correct transformation between the two maps. Finally, map quality metrics are proposed and utilized, in order to quantify the produced map’s correctness. A comparative analysis was performed over a number of image processing and OGM-oriented detectors and descriptors, in order to identify the best combinations for the map evaluation problem, performed between two OGMs or between an OGM and a Blueprint map.

@article{Tsardoulias2019,
author={Emmanouil G. Tsardoulias and M. Protopapas and Andreas L. Symeonidis and Loukas Petrou},
title={A Comparative Analysis of Pattern Matching Techniques Towards OGM Evaluation},
journal={Journal of Intelligent & Robotic Systems},
year={2019},
month={07},
date={2019-07-11},
url={https://link.springer.com/content/pdf/10.1007%2Fs10846-019-01053-7.pdf},
doi={https://doi.org/10.1007/s10846-019-01053-7},
issn={1573-0409},
publisher_url={https://link.springer.com/content/pdf/10.1007%2Fs10846-019-01053-7.pdf},
abstract={The alignment of two occupancy grid maps generated by SLAM algorithms is a quite researched problem, being an obligatory step either for unsupervised map merging techniques or for evaluation of OGMs (Occupancy Grid Maps) against a blueprint of the environment. This paper provides an overview of the existing automatic alignment techniques of two occupancy grid maps that employ pattern matching. Additionally, an alignment pipeline using local features and image descriptors is implemented, as well as a method to eliminate erroneous correspondences, aiming at producing the correct transformation between the two maps. Finally, map quality metrics are proposed and utilized, in order to quantify the produced map’s correctness. A comparative analysis was performed over a number of image processing and OGM-oriented detectors and descriptors, in order to identify the best combinations for the map evaluation problem, performed between two OGMs or between an OGM and a Blueprint map.}
}

Anastasios Tzitzis, Spyros Megalou, Stavroula Siachalou, Emmanouil Tsardoulias, Athanasios Kehagias, Traianos Yioultsis and Antonis Dimitriou
"Localization of RFID Tags by a Moving Robot, via Phase Unwrapping and Non-Linear Optimization"
IEEE Journal of Radio Frequency Identification, 3, (4), pp. 216 - 226, 2019 Aug

In this paper, we propose a new method for the localization of RFID tags, by deploying off-the-shelf RFID equipment on a robotic platform. The constructed robot is capable of performing Simultaneous Localization (of its own position) and Mapping (SLAM) of the environment and then locating the RFID tags around its path. The proposed method is based on properly treating the measured phase of the backscattered signal by each tag at the reader’s antenna, located on top of the robot. More specifically, the measured phase samples are reconstructed, such that the 2π discontinuities are eliminated (phase-unwrapping). This allows for the formation of an optimization problem, which can be solved rapidly by standard methods. The proposed method is experimentally compared against the SAR/imaging methods, which represent the accuracy benchmark in the prior art, deploying off-the-shelf equipment. It is shown that the proposed method solves exactly the same problem as holographic-imaging methods, overcoming the grid-density constraints of the latter. Furthermore, the problem, being calculations-grid-independent, is solved orders of magnitude faster, allowing for the applicability of the method in real-time inventorying and localization. It is also shown that the state-of-the-art SLAM method, which is used for the estimation of the trace of the robot, also suffers from errors, which directly affect the accuracy of the RFID localization method. Deployment of reference RFID tags at known positions seems to significantly reduce such errors.

@article{tzitzis2019localization,
author={Anastasios Tzitzis and Spyros Megalou and Stavroula Siachalou and Emmanouil Tsardoulias and Athanasios Kehagias and Traianos Yioultsis and Antonis Dimitriou},
title={Localization of RFID Tags by a Moving Robot, via Phase Unwrapping and Non-Linear Optimization},
journal={IEEE Journal of Radio Frequency Identification},
volume={3},
number={4},
pages={216 - 226},
year={2019},
month={08},
date={2019-08-26},
url={https://bit.ly/2KYVgbq},
doi={https://doi.org/10.1109/JRFID.2019.2936969},
abstract={In this paper, we propose a new method for the localization of RFID tags, by deploying off-the-shelf RFID equipment on a robotic platform. The constructed robot is capable of performing Simultaneous Localization (of its own position) and Mapping (SLAM) of the environment and then locating the RFID tags around its path. The proposed method is based on properly treating the measured phase of the backscattered signal by each tag at the reader’s antenna, located on top of the robot. More specifically, the measured phase samples are reconstructed, such that the 2π discontinuities are eliminated (phase-unwrapping). This allows for the formation of an optimization problem, which can be solved rapidly by standard methods. The proposed method is experimentally compared against the SAR/imaging methods, which represent the accuracy benchmark in the prior art, deploying off-the-shelf equipment. It is shown that the proposed method solves exactly the same problem as holographic-imaging methods, overcoming the grid-density constraints of the latter. Furthermore, the problem, being calculations-grid-independent, is solved orders of magnitude faster, allowing for the applicability of the method in real-time inventorying and localization. It is also shown that the state-of-the-art SLAM method, which is used for the estimation of the trace of the robot, also suffers from errors, which directly affect the accuracy of the RFID localization method. Deployment of reference RFID tags at known positions seems to significantly reduce such errors.}
}
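
The core of the method, unwrapping the measured phases and then fitting the tag position by nonlinear least squares, can be sketched in a few lines. The backscatter model below (phase proportional to twice the antenna-tag distance plus a constant offset) is the standard round-trip formulation; the wavelength, the trajectory and the SciPy solver choice are assumptions of this sketch, not the paper's exact implementation. For a perfectly straight trajectory the mirror ambiguity discussed in the related "Phase ReLock" paper below applies, which is why the initial guess is placed off the path.

import numpy as np
from scipy.optimize import least_squares

WAVELENGTH = 0.3456  # metres, assuming an EU-band 868 MHz carrier

def locate_tag(antenna_xy, unwrapped_phase):
    # Model: phi_i = 4*pi*d_i/lambda + c, with d_i the antenna-tag
    # distance at the i-th robot pose; fit (x, y) and the offset c.
    def residuals(p):
        x, y, c = p
        d = np.hypot(antenna_xy[:, 0] - x, antenna_xy[:, 1] - y)
        return 4 * np.pi * d / WAVELENGTH + c - unwrapped_phase

    # Start off the robot path: a straight trajectory cannot tell the
    # tag's side apart, so the initial guess selects one mirror solution.
    x0 = np.array([antenna_xy[:, 0].mean(), 0.5, 0.0])
    return least_squares(residuals, x0).x

# Synthetic pass by a tag at (2.0, 1.0): wrap the model phases, then
# np.unwrap removes the 2*pi discontinuities before the fit.
poses = np.column_stack([np.linspace(0.0, 4.0, 80), np.zeros(80)])
d_true = np.hypot(poses[:, 0] - 2.0, poses[:, 1] - 1.0)
wrapped = np.mod(4 * np.pi * d_true / WAVELENGTH + 0.7, 2 * np.pi)
# Position is recovered as ~(2.0, 1.0); c absorbs an unknown 2*pi multiple.
print(locate_tag(poses, np.unwrap(wrapped)))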

2019

Conference Papers

Kyriakos C Chatzidimitriou, Michail D Papamichail, Napoleon-Christos I Oikonomou, Dimitrios Lampoudis and Andreas L Symeonidis
"Cenote: A Big Data Management and Analytics Infrastructure for the Web of Things"
IEEE/WIC/ACM International Conference on Web Intelligence, pp. 282-285, ACM, 2019 Oct

In the era of Big Data, Cloud Computing and Internet of Things, most of the existing, integrated solutions that attempt to solve their challenges are either proprietary, limit functionality to a predefined set of requirements, or hide the way data are stored and accessed. In this work we propose Cenote, an open source Big Data management and analytics infrastructure for the Web of Things that overcomes the above limitations. Cenote is built on component-based software engineering principles and provides an all-inclusive solution based on components that work well individually.

@inproceedings{Chatzidimitriou:2019:CBD:3350546.3352531,
author={Kyriakos C Chatzidimitriou and Michail D Papamichail and Napoleon-Christos I Oikonomou and Dimitrios Lampoudis and Andreas L Symeonidis},
title={Cenote: A Big Data Management and Analytics Infrastructure for the Web of Things},
booktitle={IEEE/WIC/ACM International Conference on Web Intelligence},
pages={282-285},
publisher={ACM},
year={2019},
month={10},
date={2019-10-17},
url={http://doi.acm.org/10.1145/3350546.3352531},
doi={https://doi.org/10.1145/3350546.3352531},
keywords={Internet of Things;analytics;apache kafka;apache storm;cockroachdb;infrastructure;restful api;web of things},
abstract={In the era of Big Data, Cloud Computing and Internet of Things, most of the existing, integrated solutions that attempt to solve their challenges are either proprietary, limit functionality to a predefined set of requirements, or hide the way data are stored and accessed. In this work we propose Cenote, an open source Big Data management and analytics infrastructure for the Web of Things that overcomes the above limitations. Cenote is built on component-based software engineering principles and provides an all-inclusive solution based on components that work well individually.}
}
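
Cenote's keywords name its component choices (Apache Kafka for ingestion, Apache Storm for processing, CockroachDB for storage). As a hedged illustration of the ingestion edge only, the sketch below publishes a Web-of-Things event to a Kafka topic with the kafka-python client; the topic name, event schema and broker address are assumptions for the example, not Cenote's actual interface.

import json
from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical ingest edge: each Web-of-Things event is serialized as
# JSON and published to a Kafka topic for downstream (e.g. Storm) workers.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("cenote-events", {"sensor": "thermo-1", "value": 21.5})
producer.flush()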

Themistoklis Diamantopoulos, Maria-Ioanna Sifaki and Andreas L. Symeonidis
"Towards Mining Answer Edits to Extract Evolution Patterns in Stack Overflow"
16th International Conference on Mining Software Repositories, 2019 Mar

The current state of practice dictates that, in order to solve a problem encountered when building software, developers ask for help in online platforms, such as Stack Overflow. In this context of collaboration, answers to question posts often undergo several edits to provide the best solution to the problem stated. In this work, we explore the potential of mining Stack Overflow answer edits to extract common patterns when answering a post. In particular, we design a similarity scheme that takes into account the text and code of answer edits and cluster edits according to their semantics. Upon applying our methodology, we provide frequent edit patterns and indicate how they could be used to answer future research questions. Our evaluation indicates that our approach can be effective for identifying commonly applied edits, thus illustrating the transformation path from the initial answer to the optimal solution.

@conference{Diamantopoulos2019,
author={Themistoklis Diamantopoulos and Maria-Ioanna Sifaki and Andreas L. Symeonidis},
title={Towards Mining Answer Edits to Extract Evolution Patterns in Stack Overflow},
booktitle={16th International Conference on Mining Software Repositories},
year={2019},
month={03},
date={2019-03-15},
url={https://issel.ee.auth.gr/wp-content/uploads/2019/03/MSR2019.pdf},
abstract={The current state of practice dictates that, in order to solve a problem encountered when building software, developers ask for help in online platforms, such as Stack Overflow. In this context of collaboration, answers to question posts often undergo several edits to provide the best solution to the problem stated. In this work, we explore the potential of mining Stack Overflow answer edits to extract common patterns when answering a post. In particular, we design a similarity scheme that takes into account the text and code of answer edits and cluster edits according to their semantics. Upon applying our methodology, we provide frequent edit patterns and indicate how they could be used to answer future research questions. Our evaluation indicates that our approach can be effective for identifying commonly applied edits, thus illustrating the transformation path from the initial answer to the optimal solution.}
}
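
A similarity scheme over answer edits needs to treat prose and code differently before combining them. The sketch below separates fenced code blocks from the surrounding text and averages per-part difflib similarities; the equal weighting and the fence-based splitting are assumptions of this illustration, not the scheme designed in the paper.

import re
from difflib import SequenceMatcher

def split_answer(body):
    """Separate fenced code blocks from the surrounding prose."""
    code = "\n".join(re.findall(r"```(.*?)```", body, flags=re.S))
    text = re.sub(r"```.*?```", " ", body, flags=re.S)
    return text, code

def edit_similarity(before, after, w_text=0.5, w_code=0.5):
    """Weighted similarity between two versions of the same answer."""
    t1, c1 = split_answer(before)
    t2, c2 = split_answer(after)
    sim = lambda a, b: SequenceMatcher(None, a, b).ratio()
    return w_text * sim(t1, t2) + w_code * sim(c1, c2)

v1 = "Use a loop:\n```for x in xs: print(x)```"
v2 = "Use a comprehension:\n```[print(x) for x in xs]```"
print(edit_similarity(v1, v2))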

Tsardoulias Emmanouil, Panayiotou Konstantinos, Symeonidis Andreas and Petrou Loukas
"REMEDES: Τεχνικά χαρακτηριστικά και προδιαγραφές συστήματος αποτίμησης κιναισθησίας προς διάγνωση της νόσου Alzheimer"
11th Panhellenic Conference on Alzheimer's Disease & 3rd Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND, Thessaloniki, Greece, 2019 Feb

REMEDES is a system oriented towards measuring and recording reflexes and reaction with high accuracy, using visual and/or auditory stimuli. The system is suitable for quantifying proprioception/kinesthesia, as it relies on the fundamental domain of human action/reaction, with vision or hearing as input and the musculoskeletal system as output. As a system, REMEDES consists of a number of wireless portable devices (Pads), which can be placed around the room and “programmed” accordingly, thus implementing various types of exercises. Through the appropriate software, results are analyzed for each exercise, while user performance data are provided. The system also allows comparing performance across users or groups of users. Each REMEDES Pad is activated by producing light of a specific color/brightness or sound of a specific volume/frequency. The user is then asked to “deactivate” it by passing a hand (or another body part, depending on the exercise) in front of the device, at which point the time elapsed between the activation and deactivation of the Pad is accurately recorded. Each exercise consists of a number of such activations/deactivations. Consequently, by combining different topologies and different stimuli (colors, brightness levels, sound), a wide range of exercises of varying complexity and difficulty can be created. The system counts valid, invalid and erroneous deactivations, as well as all response times, and presents the results in a graphical, editable form. One of the competitive advantages of the REMEDES system over other, similar systems is that, through its web-based graphical interface, it supports the creation and execution of random-activation exercises (where the system decides which devices will be activated according to input parameters), predefined-step exercises, as well as memory-testing exercises. This talk will present the system's mode of operation, the interface screens where the results are displayed, and a short demonstration of indicative exercises.

@conference{EmmanouilPICAD2019,
author={Tsardoulias Emmanouil and Panayiotou Konstantinos and Symeonidis Andreas and Petrou Loukas},
title={REMEDES: Τεχνικά χαρακτηριστικά και προδιαγραφές συστήματος αποτίμησης κιναισθησίας προς διάγνωση της νόσου Alzheimer},
booktitle={11th Panhellenic Conference on Alzheimer's Disease & 3rd Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND},
address={Thessaloniki, Greece},
year={2019},
month={02},
date={2019-02-14},
abstract={REMEDES is a system oriented towards measuring and recording reflexes and reaction with high accuracy, using visual and/or auditory stimuli. The system is suitable for quantifying proprioception/kinesthesia, as it relies on the fundamental domain of human action/reaction, with vision or hearing as input and the musculoskeletal system as output. As a system, REMEDES consists of a number of wireless portable devices (Pads), which can be placed around the room and “programmed” accordingly, thus implementing various types of exercises. Through the appropriate software, results are analyzed for each exercise, while user performance data are provided. The system also allows comparing performance across users or groups of users. Each REMEDES Pad is activated by producing light of a specific color/brightness or sound of a specific volume/frequency. The user is then asked to “deactivate” it by passing a hand (or another body part, depending on the exercise) in front of the device, at which point the time elapsed between the activation and deactivation of the Pad is accurately recorded. Each exercise consists of a number of such activations/deactivations. Consequently, by combining different topologies and different stimuli (colors, brightness levels, sound), a wide range of exercises of varying complexity and difficulty can be created. The system counts valid, invalid and erroneous deactivations, as well as all response times, and presents the results in a graphical, editable form. One of the competitive advantages of the REMEDES system over other, similar systems is that, through its web-based graphical interface, it supports the creation and execution of random-activation exercises (where the system decides which devices will be activated according to input parameters), predefined-step exercises, as well as memory-testing exercises. This talk will present the system's mode of operation, the interface screens where the results are displayed, and a short demonstration of indicative exercises.}
}

Konstantinos Panayiotou, Emmanouil Tsardoulias, Christopher Zolotas, Iason Paraskevopoulos, Alexandra Chatzicharistou, Alexandros Sahinis, Stathis Dimitriadis, Dimitra Ntzioni, Christopher Mpekos, Giannis Manousaridis, Aris Georgoulas and Andreas Symeonidis
"Ms Pacman and the Robotic Ghost: A Modern Cyber-Physical Remake of the Famous Pacman Game"
2019 Sixth International Conference on Internet of Things: Systems, Management and Security (IOTSMS), pp. 147-154, 2019 Oct

Robotics and Internet of Things (IoT) are two of the most blooming scientific areas during the last years. Robotics has gained a lot of attention in the last decades and includes several disciplines (mapping, localization, planning, control etc.), while IoT is a quite new and exciting area, where seamless data aggregation and resource utilization from heterogeneous physical objects (e.g. devices, sensor networks and robots) is defined via multi-layer architectures. Moreover, Cyber-Physical systems (CPS) share similar concepts and principles with the IoT, focused on interconnecting physical and computational resources via multi-layer architectures. The current paper joins the Robotics and CPS disciplines via an architecture where heterogeneous physical and computational elements exist (robots, web app, message broker etc.), so as to implement a cyber-physical port of the famous Pacman game, called RoboPacman.

@conference{etsardouPacman2019,
author={Konstantinos Panayiotou and Emmanouil Tsardoulias and Christopher Zolotas and Iason Paraskevopoulos and Alexandra Chatzicharistou and Alexandros Sahinis and Stathis Dimitriadis and Dimitra Ntzioni and Christopher Mpekos and Giannis Manousaridis and Aris Georgoulas and Andreas Symeonidis},
title={Ms Pacman and the Robotic Ghost: A Modern Cyber-Physical Remake of the Famous Pacman Game},
booktitle={2019 Sixth International Conference on Internet of Things: Systems, Management and Security (IOTSMS)},
pages={147-154},
year={2019},
month={10},
date={2019-10-22},
url={https://ieeexplore.ieee.org/document/8939255},
doi={https://doi.org/10.1109/IOTSMS48152.2019.8939255},
keywords={Internet of Things;Robots;computer games;cyber-physical systems},
abstract={Robotics and Internet of Things (IoT) are two of the most blooming scientific areas during the last years. Robotics has gained a lot of attention in the last decades and includes several disciplines (mapping, localization, planning, control etc.), while IoT is a quite new and exciting area, where seamless data aggregation and resource utilization from heterogeneous physical objects (e.g. devices, sensor networks and robots) is defined via multi-layer architectures. Moreover, Cyber-Physical systems (CPS) share similar concepts and principles with the IoT, focused on interconnecting physical and computational resources via multi-layer architectures. The current paper joins the Robotics and CPS disciplines via an architecture where heterogeneous physical and computational elements exist (robots, web app, message broker etc.), so as to implement a cyber-physical port of the famous Pacman game, called RoboPacman.}
}

Anastasios Tzitzis, Spyros Megalou, Stavroula Siachalou, Emmanouil Tsardoulias, Traianos Yioultsis and Antonis G. Dimitriou
"3D Localization of RFID Tags with a Single Antenna by a Moving Robot and ”Phase ReLock”"
2019 IEEE International Conference on RFID Technology and Applications (RFID-TA), 2019 Sep

In this paper, we propose a novel method for the three dimensional (3D) localization of RFID tags, by deploying a single RFID antenna on a robotic platform. The constructed robot is capable of performing Simultaneous Localization (of its own position) and Mapping (SLAM) of the environment and then locating the tags around its path. The proposed method exploits the unwrapped measured phase of the backscattered signal, in such a manner that the localization problem can be solved rapidly by standard optimization methods. The three dimensional solution is accomplished with a single antenna on top of the robot, by forcing the robot to traverse non-straight paths (e.g. s-shaped) along the environment. It is proven theoretically and experimentally that any non-straight path reduces the locus of possible solutions to only two points in 3D space, instead of the circle that represents the corresponding locus for typical straight robot trajectories. As a consequence, by applying our proposed method “Phase ReLock” along the known half-plane of the search-space, the unique solution is rapidly found. We experimentally compare our method against the “holographic” method, which represents the accuracy benchmark in the prior art, deploying commercial off-the-shelf (COTS) equipment. Both algorithms find the unique solution, as expected. Furthermore, “Phase ReLock” overcomes the calculations-grid constraints of the latter. Thus, better accuracy is achieved, while, more importantly, “Phase ReLock” is orders of magnitude faster, allowing for the applicability of the method in real-time inventorying and localization.

@conference{etsardouRfid12019,
author={Anastasios Tzitzis and Spyros Megalou and Stavroula Siachalou and Emmanouil Tsardoulias and Traianos Yioultsis and Antonis G. Dimitriou},
title={3D Localization of RFID Tags with a Single Antenna by a Moving Robot and “Phase ReLock”},
booktitle={2019 IEEE International Conference on RFID Technology and Applications (RFID-TA)},
year={2019},
month={09},
date={2019-09-25},
url={https://ieeexplore.ieee.org/document/8892256},
doi={https://doi.org/10.1109/RFID-TA.2019.8892256},
keywords={Robots;Three-dimensional displays;Antenna measurements;Phase measurement;Antenna arrays;Radiofrequency identification},
abstract={In this paper, we propose a novel method for the three dimensional (3D) localization of RFID tags, by deploying a single RFID antenna on a robotic platform. The constructed robot is capable of performing Simultaneous Localization (of its own position) and Mapping (SLAM) of the environment and then locating the tags around its path. The proposed method exploits the unwrapped measured phase of the backscattered signal, in such a manner that the localization problem can be solved rapidly by standard optimization methods. The three dimensional solution is accomplished with a single antenna on top of the robot, by forcing the robot to traverse non-straight paths (e.g. s-shaped) along the environment. It is proven theoretically and experimentally that any non-straight path reduces the locus of possible solutions to only two points in 3D space, instead of the circle that represents the corresponding locus for typical straight robot trajectories. As a consequence, by applying our proposed method “Phase ReLock” along the known half-plane of the search-space, the unique solution is rapidly found. We experimentally compare our method against the “holographic” method, which represents the accuracy benchmark in the prior art, deploying commercial off-the-shelf (COTS) equipment. Both algorithms find the unique solution, as expected. Furthermore, “Phase ReLock” overcomes the calculations-grid constraints of the latter. Thus, better accuracy is achieved, while, more importantly, “Phase ReLock” is orders of magnitude faster, allowing for the applicability of the method in real-time inventorying and localization.}
}

Stavroula Siachalou, Spyros Megalou, Anastasios Tzitzis, Emmanouil Tsardoulias, John Sahalos, Traianos Yioultsis and Antonis G. Dimitriou
"Robotic Inventorying and Localization of RFID Tags, Exploiting Phase-Fingerprinting"
2019 IEEE International Conference on RFID Technology and Applications (RFID-TA), 2019 Sep

In this paper we investigate the performance of phase-based fingerprinting for the localization of RFID-tagged items in warehouses and large retail stores, by deploying ground and aerial RFID-equipped robots. The measured phases of the target RFID tags, collected along a given robot's trajectory, are compared to the corresponding phase measurements of reference RFID tags, i.e. tags placed at known locations. The advantage of the method is that it doesn't need to estimate the robot's trajectory, since estimation is carried out by comparing phase measurements collected at neighboring time-intervals. This is of paramount importance for an RFID-equipped drone, destined to fly indoors, since its weight should be kept as low as possible, in order to keep its diameter correspondingly small. The phase measurements are initially unwrapped and then fingerprinting is applied. We compare the phase-fingerprinting with RSSI-based fingerprinting. Phase-fingerprinting is significantly more accurate, because of the shape of the phase function, which is typically U-shaped, with its minimum measured at the point of the trajectory where the robot-tag distance is minimised. Experimental accuracy of 15 cm is typically achieved, depending on the density of the reference tags' grid.

@conference{etsardouRfid22019,
author={Stavroula Siachalou and Spyros Megalou and Anastasios Tzitzis and Emmanouil Tsardoulias and John Sahalos and Traianos Yioultsis and Antonis G. Dimitriou},
title={Robotic Inventorying and Localization of RFID Tags, Exploiting Phase-Fingerprinting},
booktitle={2019 IEEE International Conference on RFID Technology and Applications (RFID-TA)},
year={2019},
month={09},
date={2019-09-25},
url={https://ieeexplore.ieee.org/document/8892183},
doi={https://doi.org/10.1109/RFID-TA.2019.8892183},
keywords={Antenna measurements;Phase measurement;Drones;Robot sensing systems;RFID tags},
abstract={In this paper we investigate the performance of phase-based fingerprinting for the localization of RFID-tagged items in warehouses and large retail stores, by deploying ground and aerial RFID-equipped robots. The measured phases of the target RFID tags, collected along a given robot's trajectory, are compared to the corresponding phase measurements of reference RFID tags, i.e. tags placed at known locations. The advantage of the method is that it doesn't need to estimate the robot's trajectory, since estimation is carried out by comparing phase measurements collected at neighboring time-intervals. This is of paramount importance for an RFID-equipped drone, destined to fly indoors, since its weight should be kept as low as possible, in order to keep its diameter correspondingly small. The phase measurements are initially unwrapped and then fingerprinting is applied. We compare the phase-fingerprinting with RSSI-based fingerprinting. Phase-fingerprinting is significantly more accurate, because of the shape of the phase function, which is typically U-shaped, with its minimum measured at the point of the trajectory where the robot-tag distance is minimised. Experimental accuracy of 15 cm is typically achieved, depending on the density of the reference tags' grid.}
}
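
Phase fingerprinting sidesteps trajectory estimation by matching the target tag's unwrapped phase curve against those of reference tags at known positions. A minimal sketch follows; subtracting each curve's mean to cancel per-tag constant offsets, the aligned measurement windows and the synthetic U-shaped curves are all assumptions of this illustration.

import numpy as np

def fingerprint_match(target_phase, reference_phases, reference_xy):
    """Return the known position of the best-matching reference tag.

    Each sequence holds unwrapped phases measured along the same robot
    trajectory; subtracting the mean removes per-tag constant offsets.
    """
    t = target_phase - target_phase.mean()
    dists = [np.linalg.norm(t - (r - r.mean())) for r in reference_phases]
    return reference_xy[int(np.argmin(dists))]

# Synthetic U-shaped phase curves for three reference tags and a target
# lying closest to the second reference.
x = np.linspace(-1, 1, 50)
refs = [40 * np.hypot(x - c, 0.5) for c in (-0.5, 0.0, 0.5)]
target = 40 * np.hypot(x - 0.05, 0.5) + 1.3  # offset mimics tag hardware
print(fingerprint_match(target, refs, [(-0.5, 0.0), (0.0, 0.0), (0.5, 0.0)]))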

Michail D. Papamichail, Themistoklis Diamantopoulos, Vasileios Matsoukas, Christos Athanasiadis and Andreas L. Symeonidis
"Towards Extracting the Role and Behavior of Contributors in Open-source Projects"
Proceedings of the 14th International Conference on Software Technologies - Volume 1: ICSOFT, pp. 536-543, SciTePress, 2019 Jul

Lately, the popular open source paradigm and the adoption of agile methodologies have changed the way software is developed. Effective collaboration within software teams has become crucial for building successful products. In this context, harnessing the data available in online code hosting facilities can help towards understanding how teams work and optimizing the development process. Although there are several approaches that mine contributions’ data, they usually view contributors as a uniform body of engineers, and focus mainly on the aspect of productivity while neglecting the quality of the work performed. In this work, we design a methodology for identifying engineer roles in development teams and determine the behaviors that prevail for each role. Using a dataset of GitHub projects, we perform clustering against the DevOps axis, thus identifying three roles: developers that are mainly preoccupied with code commits, operations engineers that focus on task assignment and acceptance testing, and the lately popular role of DevOps engineers that are a mix of both. Our analysis further extracts behavioral patterns for each role, this way assisting team leaders in knowing their team and effectively directing responsibilities to achieve optimal workload balancing and task allocation.

@inproceedings{icsoft19devops,
author={Michail D. Papamichail and Themistoklis Diamantopoulos and Vasileios Matsoukas and Christos Athanasiadis and Andreas L. Symeonidis},
title={Towards Extracting the Role and Behavior of Contributors in Open-source Projects},
booktitle={Proceedings of the 14th International Conference on Software Technologies - Volume 1: ICSOFT},
pages={536-543},
publisher={SciTePress},
organization={INSTICC},
year={2019},
month={07},
date={2019-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2019/08/ICSOFT_DevOps.pdf},
doi={https://doi.org/10.5220/0007966505360543},
isbn={978-989-758-379-7},
abstract={Lately, the popular open source paradigm and the adoption of agile methodologies have changed the way software is developed. Effective collaboration within software teams has become crucial for building successful products. In this context, harnessing the data available in online code hosting facilities can help towards understanding how teams work and optimizing the development process. Although there are several approaches that mine contributions’ data, they usually view contributors as a uniform body of engineers, and focus mainly on the aspect of productivity while neglecting the quality of the work performed. In this work, we design a methodology for identifying engineer roles in development teams and determine the behaviors that prevail for each role. Using a dataset of GitHub projects, we perform clustering against the DevOps axis, thus identifying three roles: developers that are mainly preoccupied with code commits, operations engineers that focus on task assignment and acceptance testing, and the lately popular role of DevOps engineers that are a mix of both. Our analysis further extracts behavioral patterns for each role, this way assisting team leaders in knowing their team and effectively directing responsibilities to achieve optimal workload balancing and task allocation.}
}
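
The clustering step itself is standard once each contributor is represented by activity features along the DevOps axis. The sketch below runs k-means with three clusters on toy two-feature profiles; the feature definitions (commit share versus operations share) are assumptions for the example, not the exact feature set mined from GitHub in the paper.

import numpy as np
from sklearn.cluster import KMeans

# Toy contributor profiles: fraction of a contributor's events that are
# code commits vs. operations work (task assignment, acceptance testing).
rng = np.random.default_rng(1)
devs = rng.normal([0.9, 0.1], 0.05, (30, 2))
ops = rng.normal([0.1, 0.9], 0.05, (30, 2))
devops = rng.normal([0.5, 0.5], 0.05, (30, 2))
X = np.vstack([devs, ops, devops])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for role in range(3):
    # Cluster centroids recover the three behavioral profiles.
    print(role, X[labels == role].mean(axis=0).round(2))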

Kyriakos C. Chatzidimitriou, Michail D. Papamichail, Themistoklis Diamantopoulos, Napoleon-Christos Oikonomou and Andreas L. Symeonidis
"npm Packages as Ingredients: A Recipe-based Approach - Volume 1: ICSOFT"
Proceedings of the 14th International Conference on Software Technologies, pp. 544-551, SciTePress, 2019 Jul

The sharing and growth of open source software packages in the npm JavaScript (JS) ecosystem has been exponential, not only in numbers but also in terms of interconnectivity, to the extent that often the size of dependencies has become more than the size of the written code. This reuse-oriented paradigm, often attributed to the lack of a standard library in node and/or to the micropackaging culture of the ecosystem, yields interesting insights on the way developers build their packages. In this work we view the dependency network of the npm ecosystem from a “culinary” perspective. We assume that dependencies are the ingredients in a recipe, which corresponds to the produced software package. We employ network analysis and information retrieval techniques in order to capture the dependencies that tend to co-occur in the development of npm packages and identify the communities that have evolved as the main drivers for npm’s exponential growth.

@inproceedings{icsoft19npm,
author={Kyriakos C. Chatzidimitriou and Michail D. Papamichail and Themistoklis Diamantopoulos and Napoleon-Christos Oikonomou and Andreas L. Symeonidis},
title={npm Packages as Ingredients: A Recipe-based Approach},
booktitle={Proceedings of the 14th International Conference on Software Technologies - Volume 1: ICSOFT},
pages={544-551},
publisher={SciTePress},
organization={INSTICC},
year={2019},
month={07},
date={2019-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2019/08/ICSOFT_NPMRecipes.pdf},
doi={https://doi.org/10.5220/0007966805440551},
isbn={978-989-758-379-7},
abstract={The sharing and growth of open source software packages in the npm JavaScript (JS) ecosystem has been exponential, not only in numbers but also in terms of interconnectivity, to the extent that often the size of dependencies has become more than the size of the written code. This reuse-oriented paradigm, often attributed to the lack of a standard library in node and/or to the micropackaging culture of the ecosystem, yields interesting insights on the way developers build their packages. In this work we view the dependency network of the npm ecosystem from a “culinary” perspective. We assume that dependencies are the ingredients in a recipe, which corresponds to the produced software package. We employ network analysis and information retrieval techniques in order to capture the dependencies that tend to co-occur in the development of npm packages and identify the communities that have evolved as the main drivers for npm’s exponential growth.}
}
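
The "recipe" view boils down to mining dependency co-occurrence. The sketch below counts dependency pairs across package manifests and ranks them by Jaccard similarity; the toy manifests and the choice of Jaccard as the association measure are assumptions of this illustration, not the paper's exact technique.

from collections import Counter
from itertools import combinations

# Toy "recipes": each package's dependency list plays the ingredient role.
recipes = [
    {"express", "body-parser", "lodash"},
    {"express", "body-parser", "morgan"},
    {"react", "react-dom", "lodash"},
    {"react", "react-dom", "redux"},
]

pair_counts = Counter(frozenset(p) for deps in recipes
                      for p in combinations(sorted(deps), 2))
dep_counts = Counter(d for deps in recipes for d in deps)

def jaccard(pair):
    """Co-occurrence strength of two dependencies across all recipes."""
    a, b = tuple(pair)
    return pair_counts[pair] / (dep_counts[a] + dep_counts[b] - pair_counts[pair])

for p in sorted(pair_counts, key=jaccard, reverse=True)[:3]:
    print(sorted(p), round(jaccard(p), 2))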

Maria Kotouza, Fotis Psomopoulos and Periklis A. Mitkas
"A Dockerized String Analysis Workflow for Big Data"
New Trends in Databases and Information Systems, pp. 564-569, Springer International Publishing, Cham, 2019 Sep

Nowadays, a wide range of sciences are moving towards the Big Data era, producing large volumes of data that require processing for new knowledge extraction. Scientific workflows are often the key tools for solving problems characterized by computational complexity and data diversity, whereas cloud computing can effectively facilitate their efficient execution. In this paper, we present a generative big data analysis workflow that can provide analytics, clustering, prediction and visualization services to datasets coming from various scientific fields, by transforming input data into strings. The workflow consists of novel algorithms for data processing and relationship discovery, that are scalable and suitable for cloud infrastructures. Domain experts can interact with the workflow components, set their parameters, run personalized pipelines and have support for decision-making processes. As case studies in this paper, two datasets consisting of (i) Documents and (ii) Gene sequence data are used, showing promising results in terms of efficiency and performance.

@inproceedings{Kotouza19NTDIS,
author={Maria Kotouza and Fotis Psomopoulos and Periklis A. Mitkas},
title={A Dockerized String Analysis Workflow for Big Data},
booktitle={New Trends in Databases and Information Systems},
pages={564-569},
publisher={Springer International Publishing},
address={Cham},
year={2019},
month={09},
date={2019-09-01},
doi={https://doi.org/10.1007/978-3-030-30278-8_55},
isbn={978-3-030-30278-8},
publisher_url={https://link.springer.com/chapter/10.1007%2F978-3-030-30278-8_55},
abstract={Nowadays, a wide range of sciences are moving towards the Big Data era, producing large volumes of data that require processing for new knowledge extraction. Scientific workflows are often the key tools for solving problems characterized by computational complexity and data diversity, whereas cloud computing can effectively facilitate their efficient execution. In this paper, we present a generative big data analysis workflow that can provide analytics, clustering, prediction and visualization services to datasets coming from various scientific fields, by transforming input data into strings. The workflow consists of novel algorithms for data processing and relationship discovery, that are scalable and suitable for cloud infrastructures. Domain experts can interact with the workflow components, set their parameters, run personalized pipelines and have support for decision-making processes. As case studies in this paper, two datasets consisting of (i) Documents and (ii) Gene sequence data are used, showing promising results in terms of efficiency and performance.}
}

Ioannis Maniadis, Konstantinos N. Vavliakis and Andreas L. Symeonidis
"Banner Personalization for e-Commerce"
AIAI 2019: Artificial Intelligence Applications and Innovations, pp. 635-646, 2019 May

@inproceedings{kvavAIAI2019,
author={Ioannis Maniadis and Konstantinos N. Vavliakis and Andreas L. Symeonidis},
title={Banner Personalization for e-Commerce},
booktitle={AIAI 2019: Artificial Intelligence Applications and Innovations},
pages={635-646},
publisher={Springer},
year={2019},
month={05},
date={2019-05-12},
doi={https://doi.org/10.1007/978-3-030-19823-7_53}
}

Spyros Megalou, Anastasios Tzitzis, Stavroula Siachalou, Traianos Yioultsis, John Sahalos, Emmanouil Tsardoulias, Alexandros Filotheou, Andreas Symeonidis, Loukas Petrou and Antonis G. Dimitriou
"Fingerprinting Localization of RFID tags with Real-Time Performance-Assessment, using a Moving Robot"
13th European Conference on Antennas and Propagation, Krakow, Poland, 2019 Jan

@conference{Megalou2019,
author={Spyros Megalou and Anastasios Tzitzis and Stavroula Siachalou and Traianos Yioultsis and John Sahalos and Emmanouil Tsardoulias and Alexandros Filotheou and Andreas Symeonidis and Loukas Petrou and Antonis G. Dimitriou},
title={Fingerprinting Localization of RFID tags with Real-Time Performance-Assessment, using a Moving Robot},
booktitle={13th European Conference on Antennas and Propagation},
address={Krakow, Poland},
year={2019},
month={01},
date={2019-01-01}
}

Eleni Poptsi, Despoina Moraitou, Tsardoulias Emmanouil, Panayiotou Konstantinos, Symeonidis Andreas, Petrou Loukas and Magda Tsolaki
"Συστοιχία REMEDES: Ένα νέο ηλεκτρονικό εργαλείο αξιολόγησης ικανοτήτων νοητικού ελέγχου στη γήρανση"
11th Panhellenic Conference on Alzheimer's Disease & 3rd Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND, Thessaloniki, Greece, 2019 Feb

Nowadays, several neuropsychological tools have been used to differentiate cognitively healthy individuals over 65 years of age from people with Subjective Cognitive Decline (SCD), Mild Cognitive Impairment (MCI) and dementia. According to the existing literature, cognitive control abilities such as inhibition and working memory have been associated with cognitive decline and dementia. However, the tests used to date tend to be affected either by the educational level of the examinee or by corresponding linguistic disadvantages. For this reason, the existing tools do not appear to be particularly sensitive for the differential diagnosis between the above groups. Consequently, the design of suitable batteries/tools that can assess cognitive control abilities without requiring linguistic skills (thus reducing the effect of the participants’ educational level) remains a highly topical issue. To this end, a cognitive control assessment battery was created by adapting the “REMEDES1” system, a reflex/reaction measurement system. The battery focuses on three different aspects of cognitive control (working memory, attention and executive function). The first task examines working memory abilities, while the next assesses supervisory attentional system abilities. The final task investigates inhibitory control and rule/task switching. The REMEDES4Alzheimer battery will be administered to 150 participants (n=150), divided into four groups: a) healthy older adults, b) older adults with Subjective Cognitive Decline (SCD), c) people diagnosed with Mild Cognitive Impairment (MCI) and d) people diagnosed with mild dementia. This talk will present the philosophy and structure of the battery, its advantages over the other existing cognitive control batteries, as well as the first results from the pilot stage of the study.

@conference{PoptsiMeCoND2019,
author={Eleni Poptsi and Despoina Moraitou and Emmanouil Tsardoulias and Konstantinos Panayiotou and Andreas Symeonidis and Loukas Petrou and Magda Tsolaki},
title={The REMEDES Battery: A new electronic tool for the assessment of cognitive control abilities in aging},
booktitle={11th Panhellenic Conference on Alzheimer's Disease & 3rd Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND},
address={Thessaloniki, Greece},
year={2019},
month={02},
date={2019-02-14},
abstract={Nowadays, there are several neuropsychological tools that have been used to distinguish cognitively healthy individuals over the age of 65 from individuals with Subjective Cognitive Decline (SCD), Mild Cognitive Impairment (MCI) and dementia. Based on the existing literature, cognitive control abilities such as inhibition and working memory have been associated with cognitive decline and dementia. However, the tests used to date tend to be influenced either by the educational level of the examinee or by corresponding language deficits. For this reason, the existing tools do not appear to be particularly sensitive in the differential diagnosis between the above groups. Consequently, the design of suitable batteries/tools that can assess cognitive control abilities without requiring language skills (thus reducing the effect of the participants' educational level) remains a highly topical issue. To this end, a cognitive control assessment battery was created by adapting the "REMEDES" system, a reflex/reaction-time measurement system. The battery focuses on three different aspects of cognitive control (working memory, attention and executive function). The first test examines working memory abilities, while the next one assesses supervisory attentional system abilities. The last test investigates inhibitory control and rule/task switching. The REMEDES4Alzheimer test battery will be administered to 150 participants (n=150), divided into four groups: a) healthy older adults, b) older adults with Subjective Cognitive Decline (SCD), c) individuals diagnosed with Mild Cognitive Impairment (MCI), and d) individuals diagnosed with mild dementia. This talk will present the philosophy and structure of the battery, its advantages over the other available cognitive control batteries, as well as the first results from the pilot stage of the study.}
}

Eleni Poptsi, Despoina Moraitou, Emmanouil Tsardoulias, Konstantinos Panayiotou, Andreas Symeonidis, Loukas Petrou and Magda Tsolaki
"Assessment of cognitive control in aging using electronic tools via the REMEDES4Alzheimer reflex/reaction system"
11th Panhellenic Conference on Alzheimer's Disease & 3rd Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND, Thessaloniki, Greece, 2019 Feb

The REMEDES4Alzheimer battery is a new electronic tool aimed at assessing cognitive control abilities, addressed to older adults with cognitive deficits. The battery is an adaptation of the existing REMEDES reflex/reaction-time measurement system. Its goal is the differential diagnosis of mild and major neurocognitive disorders from normal aging, as well as from normal aging with mild cognitive complaints. The system consists of 7 portable devices (REMEDES pads), which are programmed to activate, i.e. to produce color and/or sound, according to the requirements of each subtest. For the purposes of assessing cognitive control in aging, graphical representations of animals have been attached to the REMEDES pads and are combined with the corresponding sounds. The examinee is asked to deactivate the REMEDES pads by passing a hand over each one, following the instructions of each subtest. During the execution of the test battery, the task instructions are given both verbally and non-verbally (through pictorial representations/sketches). The battery includes tests that assess three basic aspects of cognitive control abilities. The first test assesses working memory abilities, specifically the storage, processing and updating of working memory. The second test assesses the supervisory attentional system, specifically visual and auditory selective attention, as well as sustained and divided attention. The third and final test assesses executive abilities, specifically inhibitory control, rule/task switching and mental flexibility. This talk will present the structure and content of each test, the scoring of the battery, as well as the capabilities offered by the system's graphical interface.

@conference{PoptsiPICAD2019,
author={Eleni Poptsi and Despoina Moraitou and Emmanouil Tsardoulias and Konstantinos Panayiotou and Andreas Symeonidis and Loukas Petrou and Magda Tsolaki},
title={Assessment of cognitive control in aging using electronic tools via the REMEDES4Alzheimer reflex/reaction system},
booktitle={11th Panhellenic Conference on Alzheimer's Disease & 3rd Mediterranean Conference on Neurodegenerative Diseases PICAD & MeCoND},
address={Thessaloniki, Greece},
year={2019},
month={02},
date={2019-02-14},
abstract={The REMEDES4Alzheimer battery is a new electronic tool aimed at assessing cognitive control abilities, addressed to older adults with cognitive deficits. The battery is an adaptation of the existing REMEDES reflex/reaction-time measurement system. Its goal is the differential diagnosis of mild and major neurocognitive disorders from normal aging, as well as from normal aging with mild cognitive complaints. The system consists of 7 portable devices (REMEDES pads), which are programmed to activate, i.e. to produce color and/or sound, according to the requirements of each subtest. For the purposes of assessing cognitive control in aging, graphical representations of animals have been attached to the REMEDES pads and are combined with the corresponding sounds. The examinee is asked to deactivate the REMEDES pads by passing a hand over each one, following the instructions of each subtest. During the execution of the test battery, the task instructions are given both verbally and non-verbally (through pictorial representations/sketches). The battery includes tests that assess three basic aspects of cognitive control abilities. The first test assesses working memory abilities, specifically the storage, processing and updating of working memory. The second test assesses the supervisory attentional system, specifically visual and auditory selective attention, as well as sustained and divided attention. The third and final test assesses executive abilities, specifically inhibitory control, rule/task switching and mental flexibility. This talk will present the structure and content of each test, the scoring of the battery, as well as the capabilities offered by the system's graphical interface.}
}

Christos Psarras, Themistoklis Diamantopoulos and Andreas Symeonidis
"A Mechanism for Automatically Summarizing Software Functionality from Source Code"
Proceedings of the 2019 IEEE International Conference on Software Quality, Reliability and Security (QRS), pp. 121-130, IEEE, Sofia, Bulgaria, 2019 Jul

When developers search online to find software components to reuse, they usually first need to understand the container projects/libraries, and subsequently identify the required functionality. Several approaches identify and summarize the offerings of projects from their source code, however they often require that the developer has knowledge of the underlying topic modeling techniques; they do not provide a mechanism for tuning the number of topics, and they offer no control over the top terms for each topic. In this work, we use a vectorizer to extract information from variable/method names and comments, and apply Latent Dirichlet Allocation to cluster the source code files of a project into different semantic topics. The number of topics is optimized based on their purity with respect to project packages, while topic categories are constructed to provide further intuition and Stack Exchange tags are used to express the topics in more abstract terms.
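
A minimal Python/scikit-learn sketch of the pipeline described above: short strings stand in for the identifier/comment text extracted from each source file, and LDA assigns every file a topic distribution. The toy corpus, topic count and tokenization are illustrative; the paper's purity-based tuning of the topic count and the Stack Exchange tag mapping are not reproduced here.
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# stand-in for tokenized variable/method names and comments per source file
source_files = {
    "JsonParser.java": "parse json token stream object field value reader",
    "HttpClient.java": "http request response header url connection get post",
    "MatrixOps.java":  "matrix multiply transpose row column vector scale",
}

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(source_files.values())

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)          # one topic distribution per file

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")             # top terms characterising each topic
```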

@inproceedings{QRS2019,
author={Christos Psarras and Themistoklis Diamantopoulos and Andreas Symeonidis},
title={A Mechanism for Automatically Summarizing Software Functionality from Source Code},
booktitle={Proceedings of the 2019 IEEE International Conference on Software Quality, Reliability and Security (QRS)},
pages={121-130},
publisher={IEEE},
address={Sofia, Bulgaria},
year={2019},
month={07},
date={2019-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2019/08/QRS2019.pdf},
abstract={When developers search online to find software components to reuse, they usually first need to understand the container projects/libraries, and subsequently identify the required functionality. Several approaches identify and summarize the offerings of projects from their source code, however they often require that the developer has knowledge of the underlying topic modeling techniques; they do not provide a mechanism for tuning the number of topics, and they offer no control over the top terms for each topic. In this work, we use a vectorizer to extract information from variable/method names and comments, and apply Latent Dirichlet Allocation to cluster the source code files of a project into different semantic topics. The number of topics is optimized based on their purity with respect to project packages, while topic categories are constructed to provide further intuition and Stack Exchange tags are used to express the topics in more abstract terms.}
}

Stavroula Siachalou, Spyros Megalou, Anastasios Tzitzis, Emmanouil Tsardoulias, John Sahalos, Traianos Yioultsis and Antonis Dimitriou
"Robotic Inventorying and Localization of RFID Tags"
2019 IEEE International Conference on RFID Technology and Applications (RFID-TA), pp. 362-367, IEEE, 2019 Sep

In this paper we investigate the performance of phase-based fingerprinting for the localization of RFID-tagged items in warehouses and large retail stores, by deploying ground and aerial RFID-equipped robots. The measured phases of the target RFID tags, collected along a given robot’s trajectory, are compared to the corresponding phase-measurements of reference RFID tags; i.e. tags placed at known locations. The advantage of the method is that it doesn’t need to estimate the robot’s trajectory, since estimation is carried out by comparing phase measurements collected at neighboring time-intervals. This is of paramount importance for an RFID equipped drone, destined to fly indoors, since its weight should be kept as low as possible, in order to constrain its diameter correspondingly small. The phase measurements are initially unwrapped and then fingerprinting is applied. We compare the phase-fingerprinting with RSSI based fingerprinting. Phase-fingerprinting is significantly more accurate, because of the shape of the phase-function, which is typically U-shaped, with its minimum, measured at the point of the trajectory, when the robot-tag distance is minimised. Experimental accuracy of 15cm is typically achieved, depending on the density of the reference tags’ grid.
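
As a rough illustration of the comparison step, the sketch below unwraps the target tag's phase series and adopts the location of the best-matching reference tag. The synthetic data, the assumed wavelength and the `fingerprint_location` helper are placeholders, not the authors' implementation.
```python
import numpy as np

WAVELENGTH = 0.345  # metres, roughly the UHF RFID band (assumed value)

def fingerprint_location(target_phase, references):
    """references maps a known position to the wrapped phase series its
    reference tag produced along the same robot trajectory."""
    target = np.unwrap(target_phase)
    best_pos, best_cost = None, np.inf
    for pos, ref_phase in references.items():
        diff = target - np.unwrap(ref_phase)
        diff -= diff.mean()               # absolute phase offset is uninformative
        cost = np.mean(diff ** 2)         # compare the shapes of the two U-curves
        if cost < best_cost:
            best_pos, best_cost = pos, cost
    return best_pos

# synthetic demo: reference tags at three lateral distances from the path
x = np.linspace(-1, 1, 100)
refs = {(0.0, d): (4 * np.pi / WAVELENGTH * np.sqrt(x**2 + d**2)) % (2 * np.pi)
        for d in (0.5, 1.0, 1.5)}
target = (4 * np.pi / WAVELENGTH * np.sqrt(x**2 + 1.0**2)) % (2 * np.pi)
print(fingerprint_location(target, refs))  # -> (0.0, 1.0)
```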

@inproceedings{siachalou2019robotic,
author={Stavroula Siachalou and Spyros Megalou and Anastasios Tzitzis and Emmanouil Tsardoulias and John Sahalos and Traianos Yioultsis and Antonis Dimitriou},
title={Robotic Inventorying and Localization of RFID Tags},
booktitle={2019 IEEE International Conference on RFID Technology and Applications (RFID-TA)},
pages={362-367},
publisher={IEEE},
year={2019},
month={09},
date={2019-09-25},
url={https://bit.ly/2KcgMKy},
doi={https://doi.org/10.1109/RFID-TA.2019.8892183},
abstract={In this paper we investigate the performance of phase-based fingerprinting for the localization of RFID-tagged items in warehouses and large retail stores, by deploying ground and aerial RFID-equipped robots. The measured phases of the target RFID tags, collected along a given robot’s trajectory, are compared to the corresponding phase-measurements of reference RFID tags; i.e. tags placed at known locations. The advantage of the method is that it doesn’t need to estimate the robot’s trajectory, since estimation is carried out by comparing phase measurements collected at neighboring time-intervals. This is of paramount importance for an RFID equipped drone, destined to fly indoors, since its weight should be kept as low as possible, in order to constrain its diameter correspondingly small. The phase measurements are initially unwrapped and then fingerprinting is applied. We compare the phase-fingerprinting with RSSI based fingerprinting. Phase-fingerprinting is significantly more accurate, because of the shape of the phase-function, which is typically U-shaped, with its minimum, measured at the point of the trajectory, when the robot-tag distance is minimised. Experimental accuracy of 15cm is typically achieved, depending on the density of the reference tags’ grid.}
}

Anastasios Tzitzis, Spyros Megalou, Stavroula Siachalou, Traianos Yioultsis, John Sahalos, Emmanouil Tsardoulias, Alexandros Filotheou, Andreas Symeonidis, Loukas Petrou and Antonis G. Dimitriou
"Phase ReLock - Localization of RFID Tags by a Moving Robot"
13th European Conference on Antennas and Propagation, Krakow, Poland, 2019 Jan

@conference{Tzitzis2019,
author={Anastasios Tzitzis and Spyros Megalou and Stavroula Siachalou and Traianos Yioultsis and John Sahalos and Emmanouil Tsardoulias and Alexandros Filotheou and Andreas Symeonidis and Loukas Petrou and Antonis G. Dimitriou},
title={Phase ReLock - Localization of RFID Tags by a Moving Robot},
booktitle={13th European Conference on Antennas and Propagation},
address={Krakow, Poland},
year={2019},
month={01},
date={2019-01-01}
}

Anastasios Tzitzis, Spyros Megalou, Stavroula Siachalou, Emmanouil Tsardoulias, Traianos Yioultsis and Antonis Dimitriou
"3D Localization of RFID Tags with a Single Antenna by a Moving Robot and” Phase ReLock”"
2019 IEEE International Conference on RFID Technology and Applications (RFID-TA), pp. 273-278, IEEE, 2019 Sep

In this paper, we propose a novel method for the three dimensional (3D) localization of RFID tags, by deploying a single RFID antenna on a robotic platform. The constructed robot is capable of performing Simultaneous Localization (of its own position) and Mapping (SLAM) of the environment and then locating the tags around its path. The proposed method exploits the unwrapped measured phase of the backscattered signal, in such a manner that the localization problem can be solved rapidly by standard optimization methods. A three dimensional solution is accomplished with a single antenna on top of the robot, by forcing the robot to traverse non-straight paths (e.g. s-shaped) along the environment. It is proven theoretically and experimentally that any non-straight path reduces the locus of possible solutions to only two points in 3D space, instead of the circle that represents the corresponding locus for typical straight robot trajectories. As a consequence, by applying our proposed method “Phase ReLock” along the known half-plane of the search-space, the unique solution is rapidly found. We experimentally compare our method against the “holographic” method, which represents the accuracy benchmark in prior art, deploying commercial off-the-shelf (COTS) equipment. Both algorithms find the unique solution, as expected. Furthermore, “Phase ReLock” overcomes the calculations-grid constraints of the latter. Thus, better accuracy is achieved, while, more importantly, “Phase ReLock” is orders of magnitude faster, allowing for the applicability of the method in real-time inventorying and localization.
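
A minimal sketch of the optimization step, under the usual round-trip model phase = (4π/λ)·distance + offset: given unwrapped phase samples along a known s-shaped trajectory, the tag position is fitted by nonlinear least squares. The trajectory, wavelength and noise level are illustrative, not the paper's setup.
```python
import numpy as np
from scipy.optimize import least_squares

WAVELENGTH = 0.345                          # metres (assumed UHF value)
rng = np.random.default_rng(0)

# s-shaped robot trajectory in the z=0 plane, tag somewhere above it
t = np.linspace(0, 1, 200)
traj = np.stack([2 * t, np.sin(4 * np.pi * t), np.zeros_like(t)], axis=1)
tag_true = np.array([1.0, 0.8, 0.5])

dist = np.linalg.norm(traj - tag_true, axis=1)
phase = 4 * np.pi / WAVELENGTH * dist + 0.7          # unknown constant offset
phase += rng.normal(0, 0.05, phase.shape)            # measurement noise

def residuals(p):
    tag, offset = p[:3], p[3]
    model = 4 * np.pi / WAVELENGTH * np.linalg.norm(traj - tag, axis=1) + offset
    return model - phase

# the mirror image of the tag across the trajectory plane fits equally well
# (the two-point locus mentioned above); starting in the known half-space
# z > 0 selects the physical solution
sol = least_squares(residuals, x0=[0.0, 0.0, 0.1, 0.0])
print("estimated tag position:", sol.x[:3])
```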

@inproceedings{tzitzis20193d,
author={Anastasios Tzitzis and Spyros Megalou and Stavroula Siachalou and Emmanouil Tsardoulias and Traianos Yioultsis and Antonis Dimitriou},
title={3D Localization of RFID Tags with a Single Antenna by a Moving Robot and “Phase ReLock”},
booktitle={2019 IEEE International Conference on RFID Technology and Applications (RFID-TA)},
pages={273-278},
publisher={IEEE},
year={2019},
month={09},
date={2019-09-25},
url={https://bit.ly/2KfiuLt},
doi={https://doi.org/10.1109/RFID-TA.2019.8892256},
abstract={In this paper, we propose a novel method for the three dimensional (3D) localization of RFID tags, by deploying a single RFID antenna on a robotic platform. The constructed robot is capable of performing Simultaneous Localization (of its own position) and Mapping (SLAM) of the environment and then locating the tags around its path. The proposed method exploits the unwrapped measured phase of the backscattered signal, in such a manner that the localization problem can be solved rapidly by standard optimization methods. A three dimensional solution is accomplished with a single antenna on top of the robot, by forcing the robot to traverse non-straight paths (e.g. s-shaped) along the environment. It is proven theoretically and experimentally that any non-straight path reduces the locus of possible solutions to only two points in 3D space, instead of the circle that represents the corresponding locus for typical straight robot trajectories. As a consequence, by applying our proposed method “Phase ReLock” along the known half-plane of the search-space, the unique solution is rapidly found. We experimentally compare our method against the “holographic” method, which represents the accuracy benchmark in prior art, deploying commercial off-the-shelf (COTS) equipment. Both algorithms find the unique solution, as expected. Furthermore, “Phase ReLock” overcomes the calculations-grid constraints of the latter. Thus, better accuracy is achieved, while, more importantly, “Phase ReLock” is orders of magnitude faster, allowing for the applicability of the method in real-time inventorying and localization.}
}

Konstantinos N. Vavliakis, George Katsikopoulos and Andreas L. Symeonidis
"E-commerce Personalization with Elasticsearch"
International Workshop on Web Search and Data Mining in conjunction with The 10th International Conference on Ambient Systems, Networks and Technologies (ANT 2019), Leuven, Belgium, 2019 Apr

Personalization techniques are constantly gaining traction among e-commerce retailers, since major advancements have been made at research level and the benefits are clear and pertinent. However, effectively applying personalization in real life is a challenging task, since the proper mixture of technology, data and content is complex and differs between organizations. In fact, personalization applications such as personalized search remain largely unfulfilled, especially by small and medium sized retailers, due to time and space limitations. In this paper we propose a novel approach for near real-time personalized e-commerce search that provides improved personalized results within the limited accepted time frames required for online browsing. We propose combining features such as product popularity, user interests, and query-product relevance with collaborative filtering, and implement our solution in Elasticsearch in order to achieve acceptable execution timings. We evaluate our approach against a publicly available dataset, as well as a running e-commerce store.
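
One plausible way to express such a blend in Elasticsearch is a single function_score query that combines textual relevance with popularity and user-interest boosts, sketched below. The index schema (popularity and categories fields), the weights and the 8.x-style client call are assumptions, not the paper's actual implementation.
```python
from elasticsearch import Elasticsearch  # 8.x Python client assumed

es = Elasticsearch("http://localhost:9200")
user_interests = ["running", "outdoor"]   # e.g. derived from past behaviour

query = {
    "function_score": {
        "query": {"match": {"title": "trail shoes"}},      # query-product relevance
        "functions": [
            # product popularity as a dampened boost
            {"field_value_factor": {"field": "popularity",
                                    "modifier": "log1p", "factor": 1.0}},
            # extra weight for products in the user's interest categories
            {"filter": {"terms": {"categories": user_interests}}, "weight": 2.0},
        ],
        "score_mode": "sum",
        "boost_mode": "sum",
    }
}

hits = es.search(index="products", query=query)["hits"]["hits"]
```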

@inproceedings{VavliakisWSDM2018,
author={Konstantinos N. Vavliakis and George Katsikopoulos and Andreas L. Symeonidis},
title={E-commerce Personalization with Elasticsearch},
booktitle={International Workshop on Web Search and Data Mining in conjunction with The 10th International Conference on Ambient Systems, Networks and Technologies (ANT 2019)},
address={Leuven, Belgium},
year={2019},
month={04},
date={2019-04-29},
url={https://issel.ee.auth.gr/wp-content/uploads/2019/02/WSDM_6_6382.pdf},
abstract={Personalization techniques are constantly gaining traction among e-commerce retailers, since major advancements have been made at research level and the benefits are clear and pertinent. However, effectively applying personalization in real life is a challenging task, since the proper mixture of technology, data and content is complex and differs between organizations. In fact, personalization applications such as personalized search remain largely unfulfilled, especially by small and medium sized retailers, due to time and space limitations. In this paper we propose a novel approach for near real-time personalized e-commerce search that provides improved personalized results within the limited accepted time frames required for online browsing. We propose combining features such as product popularity, user interests, and query-product relevance with collaborative filtering, and implement our solution in Elasticsearch in order to achieve acceptable execution timings. We evaluate our approach against a publicly available dataset, as well as a running e-commerce store.}
}

2018

Journal Articles

Christoforos Zolotas, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"RESTsec: a low-code platform for generating secure by design enterprise services"
Enterprise Information Systems, pp. 1-27, 2018 Mar

In the modern business world it is increasingly often that Enterprises opt to bring their business model online, in their effort to reach out to more end users and increase their customer base. While transitioning to the new model, enterprises consider securing their data of pivotal importance. In fact, many efforts have been introduced to automate this ‘webification’ process; however, they all fall short in some aspect: a) they either generate only the security infrastructure, assigning implementation to the developers, b) they embed mainstream, less powerful authorisation schemes, or c) they disregard the merits of the dominating REST architecture and adopt less suitable approaches. In this paper we present RESTsec, a Low-Code platform that supports rapid security requirements modelling for Enterprise Services, abiding by the state of the art ABAC authorisation scheme. RESTsec enables the developer to seamlessly embed the desired access control policy and generate the service, the security infrastructure and the code. Evaluation shows that our approach is valid and can help developers deliver secure by design enterprise services in a rapid and automated manner.
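
As background on the authorisation model, the sketch below shows the bare shape of an ABAC check: a request is allowed when the attributes of subject, action and resource satisfy some policy. The policy format and attribute names are hypothetical, not RESTsec's generated code.
```python
# hypothetical policy table: each rule names an action, a resource type and
# a condition over subject/resource attributes
POLICIES = [
    {"action": "GET", "resource": "orders",
     "condition": lambda sub, res: sub["role"] == "customer" and res["owner"] == sub["id"]},
    {"action": "DELETE", "resource": "orders",
     "condition": lambda sub, res: sub["role"] == "admin"},
]

def is_permitted(subject, action, resource_type, resource):
    return any(p["action"] == action and p["resource"] == resource_type
               and p["condition"](subject, resource) for p in POLICIES)

print(is_permitted({"id": 7, "role": "customer"}, "GET", "orders", {"owner": 7}))     # True
print(is_permitted({"id": 7, "role": "customer"}, "DELETE", "orders", {"owner": 7}))  # False
```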

@article{2018Zolotas,
author={Christoforos Zolotas and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={RESTsec: a low-code platform for generating secure by design enterprise services},
journal={Enterprise Information Systems},
pages={1-27},
year={2018},
month={03},
date={2018-03-09},
doi={https://doi.org/10.1080/17517575.2018.1462403},
abstract={In the modern business world it is increasingly often that Enterprises opt to bring their business model online, in their effort to reach out to more end users and increase their customer base. While transitioning to the new model, enterprises consider securing their data of pivotal importance. In fact, many efforts have been introduced to automate this ‘webification’ process; however, they all fall short in some aspect: a) they either generate only the security infrastructure, assigning implementation to the developers, b) they embed mainstream, less powerful authorisation schemes, or c) they disregard the merits of the dominating REST architecture and adopt less suitable approaches. In this paper we present RESTsec, a Low-Code platform that supports rapid security requirements modelling for Enterprise Services, abiding by the state of the art ABAC authorisation scheme. RESTsec enables the developer to seamlessly embed the desired access control policy and generate the service, the security infrastructure and the code. Evaluation shows that our approach is valid and can help developers deliver secure by design enterprise services in a rapid and automated manner.}
}

George Mamalakis, Christos Diou, Andreas L. Symeonidis and Leonidas Georgiadis
"Of daemons and men: reducing false positive rate in intrusion detection systems with file system footprint analysis"
Neural Computing and Applications, 2018 May

In this work, we propose a methodology for reducing false alarms in file system intrusion detection systems, by taking into account the daemon’s file system footprint. More specifically, we experimentally show that sequences of outliers can serve as a distinguishing characteristic between true and false positives, and we show how analysing sequences of outliers can lead to lower false positive rates, while maintaining high detection rates. Based on this analysis, we developed an anomaly detection filter that learns outlier sequences using k-nearest neighbours with normalised longest common subsequence. Outlier sequences are then used as a filter to reduce false positives on the FI2DS file system intrusion detection system. This filter is evaluated on both overlapping and non-overlapping sequences of outliers. In both cases, experiments performed on three real-world web servers and a honeynet show that our approach achieves significant false positive reduction rates (up to 50 times), without any degradation of the corresponding true positive detection rates.
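
The distance at the core of the filter is easy to sketch: normalised longest common subsequence between two outlier sequences, followed by a k-nearest-neighbour vote against known false-positive sequences. The toy event names and the threshold below are illustrative.
```python
def lcs_len(a, b):
    # classic O(len(a) * len(b)) dynamic programme
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def nlcs_dist(a, b):
    return 1.0 - lcs_len(a, b) / max(len(a), len(b))

def looks_benign(seq, known_benign, k=3, threshold=0.5):
    """k-NN vote: is this outlier sequence close to known false positives?"""
    dists = sorted(nlcs_dist(seq, b) for b in known_benign)
    k = min(k, len(dists))
    return sum(dists[:k]) / k < threshold

benign = [["open", "read", "close"], ["open", "read", "read", "close"]]
print(looks_benign(["open", "read", "close", "close"], benign, k=2))  # True
```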

@article{Mamalakis2018,
author={George Mamalakis and Christos Diou and Andreas L. Symeonidis and Leonidas Georgiadis},
title={Of daemons and men: reducing false positive rate in intrusion detection systems with file system footprint analysis},
journal={Neural Computing and Applications},
year={2018},
month={05},
date={2018-05-12},
doi={https://doi.org/10.1007/s00521-018-3550-x},
issn={1433-3058},
keywords={Intrusion detection systems;Anomaly detection;Sequences of outliers},
abstract={In this work, we propose a methodology for reducing false alarms in file system intrusion detection systems, by taking into account the daemon’s file system footprint. More specifically, we experimentally show that sequences of outliers can serve as a distinguishing characteristic between true and false positives, and we show how analysing sequences of outliers can lead to lower false positive rates, while maintaining high detection rates. Based on this analysis, we developed an anomaly detection filter that learns outlier sequences using k-nearest neighbours with normalised longest common subsequence. Outlier sequences are then used as a filter to reduce false positives on the FI2DS file system intrusion detection system. This filter is evaluated on both overlapping and non-overlapping sequences of outliers. In both cases, experiments performed on three real-world web servers and a honeynet show that our approach achieves significant false positive reduction rates (up to 50 times), without any degradation of the corresponding true positive detection rates.}
}

2018

Conference Papers

Eleni Nisioti, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"ICML 2018 AutoML WorkshopPredicting hyperparameters from meta-features in binary classification problems"
ICML 2018 AutoML Workshop, Stockholm, Sweden, 2018 Jul

The presence of computationally demanding problems and the current inability to automatically transfer experience from the application of past experiments to new ones delays the evolution of knowledge itself. In this paper we present the Automated Data Scientist, a system that employs meta-learning for hyperparameter selection and builds a rich ensemble of models through forward model selection in order to automate binary classification tasks. Preliminary evaluation shows that the system is capable of coping with classification problems of medium complexity.
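
A minimal sketch of the forward model selection step only (the meta-learning part is omitted), assuming fitted scikit-learn classifiers: greedily add, possibly with repetition, whichever model most improves the validation accuracy of the averaged ensemble prediction. The data and model pool are toy placeholders.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
models = [LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
          DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)]

def forward_select(models, X_val, y_val, rounds=10):
    preds = [m.predict_proba(X_val)[:, 1] for m in models]
    chosen, ensemble = [], np.zeros(len(y_val))
    for _ in range(rounds):
        # pick the model whose addition maximises validation accuracy
        scores = [np.mean(((ensemble + p) / (len(chosen) + 1) > 0.5) == y_val)
                  for p in preds]
        best = int(np.argmax(scores))
        chosen.append(best)
        ensemble += preds[best]
    return chosen            # indices into `models`, possibly repeated

print(forward_select(models, X_val, y_val))
```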

@conference{2018Nisioti,
author={Eleni Nisioti and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={Predicting hyperparameters from meta-features in binary classification problems},
booktitle={ICML 2018 AutoML Workshop},
url={http://assets.ctfassets.net/c5lel8y1n83c/5uAPDjSvcseoko2cCcQcEi/8bd1d8e3630e246946feac86271fe03b/PPC17-automl2018.pdf},
address={Stockholm, Sweden},
year={2018},
month={07},
date={2018-07-14},
keywords={meta-features;hyperparameter selection;automl;binary classification},
abstract={The presence of computationally demanding problems and the current inability to automatically transfer experience from the application of past experiments to new ones delays the evolution of knowledge itself. In this paper we present the Automated Data Scientist, a system that employs meta-learning for hyperparameter selection and builds a rich ensemble of models through forward model selection in order to automate binary classification tasks. Preliminary evaluation shows that the system is capable of coping with classification problems of medium complexity.}
}

Sotirios-Filippos Tsarouchis, Maria Th. Kotouza, Fotis E. Psomopoulos and Pericles A. Mitkas
"A Multi-metric Algorithm for Hierarchical Clustering of Same-Length Protein Sequences"
IFIP International Conference on Artificial Intelligence Applications and Innovations, pp. 189-199, Springer, Cham, 2018 May

The identification of meaningful groups of proteins has always been a major area of interest for structural and functional genomics. Successful protein clustering can lead to significant insight, assisting in both tracing the evolutionary history of the respective molecules as well as in identifying potential functions and interactions of novel sequences. Here we propose a clustering algorithm for same-length sequences, which allows the construction of subset hierarchy and facilitates the identification of the underlying patterns for any given subset. The proposed method utilizes the metrics of sequence identity and amino-acid similarity simultaneously as direct measures. The algorithm was applied on a real-world dataset consisting of clonotypic immunoglobulin (IG) sequences from Chronic lymphocytic leukemia (CLL) patients, showing promising results.
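
The two direct measures named above are straightforward for same-length sequences; the sketch below computes position-wise identity and a substitution-based similarity, with a toy two-entry table standing in for a real amino-acid matrix such as BLOSUM62.
```python
def identity(a, b):
    """fraction of positions where the two same-length sequences agree"""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# toy pairwise amino-acid similarities; a real substitution matrix would go here
SIM = {("L", "I"): 0.8, ("K", "R"): 0.7}

def similarity(a, b):
    def s(x, y):
        return 1.0 if x == y else SIM.get((x, y), SIM.get((y, x), 0.0))
    return sum(s(x, y) for x, y in zip(a, b)) / len(a)

print(identity("ARKL", "ARRI"), similarity("ARKL", "ARRI"))  # 0.5 0.875
```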

@inproceedings{2018Tsarouchis,
author={Sotirios-Filippos Tsarouchis and Maria Th. Kotouza and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={A Multi-metric Algorithm for Hierarchical Clustering of Same-Length Protein Sequences},
booktitle={IFIP International Conference on Artificial Intelligence Applications and Innovations},
pages={189-199},
publisher={Springer},
address={Cham},
year={2018},
month={05},
date={2018-05-22},
doi={https://doi.org/10.1007/978-3-319-92016-0_18},
isbn={978-3-319-92016-0},
abstract={The identification of meaningful groups of proteins has always been a major area of interest for structural and functional genomics. Successful protein clustering can lead to significant insight, assisting in both tracing the evolutionary history of the respective molecules as well as in identifying potential functions and interactions of novel sequences. Here we propose a clustering algorithm for same-length sequences, which allows the construction of subset hierarchy and facilitates the identification of the underlying patterns for any given subset. The proposed method utilizes the metrics of sequence identity and amino-acid similarity simultaneously as direct measures. The algorithm was applied on a real-world dataset consisting of clonotypic immunoglobulin (IG) sequences from Chronic lymphocytic leukemia (CLL) patients, showing promising results.}
}

Kyriakos C. Chatzidimitriou, Michail Papamichail, Themistoklis Diamantopoulos, Michail Tsapanos and Andreas L. Symeonidis
"npm-miner: An Infrastructure for Measuring the Quality of the npm Registry"
MSR ’18: 15th International Conference on Mining Software Repositories, pp. 4, ACM, Gothenburg, Sweden, 2018 May

As the popularity of the JavaScript language is constantly increasing, one of the most important challenges today is to assess the quality of JavaScript packages. Developers often employ tools for code linting and for the extraction of static analysis metrics in order to assess and/or improve their code. In this context, we have developed npm-miner, a platform that crawls the npm registry and analyzes the packages using static analysis tools in order to extract detailed quality metrics as well as high-level quality attributes, such as maintainability and security. Our infrastructure includes an index that is accessible through a web interface, while we have also constructed a dataset with the results of a detailed analysis for 2000 popular npm packages.
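
On the crawling side, the public npm registry serves package metadata as JSON from https://registry.npmjs.org/<name>, which a miner can fetch before handing the tarball to linters and static analyzers. A minimal sketch (performs a live network call; error handling and the analysis stage are omitted):
```python
import requests

def fetch_package_metadata(name):
    resp = requests.get(f"https://registry.npmjs.org/{name}", timeout=10)
    resp.raise_for_status()
    meta = resp.json()
    latest = meta["dist-tags"]["latest"]
    return {
        "name": name,
        "version": latest,
        # tarball URL that an analysis worker would download and lint
        "tarball": meta["versions"][latest]["dist"]["tarball"],
    }

print(fetch_package_metadata("express"))
```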

@inproceedings{Chatzidimitriou2018MSR,
author={Kyriakos C. Chatzidimitriou and Michail Papamichail and Themistoklis Diamantopoulos and Michail Tsapanos and Andreas L. Symeonidis},
title={npm-miner: An Infrastructure for Measuring the Quality of the npm Registry},
booktitle={MSR ’18: 15th International Conference on Mining Software Repositories},
pages={4},
publisher={ACM},
address={Gothenburg, Sweden},
year={2018},
month={05},
date={2018-05-28},
url={http://issel.ee.auth.gr/wp-content/uploads/2018/03/msr2018.pdf},
doi={https://doi.org/10.1145/3196398.3196465},
abstract={As the popularity of the JavaScript language is constantly increasing, one of the most important challenges today is to assess the quality of JavaScript packages. Developers often employ tools for code linting and for the extraction of static analysis metrics in order to assess and/or improve their code. In this context, we have developed npm-miner, a platform that crawls the npm registry and analyzes the packages using static analysis tools in order to extract detailed quality metrics as well as high-level quality attributes, such as maintainability and security. Our infrastructure includes an index that is accessible through a web interface, while we have also constructed a dataset with the results of a detailed analysis for 2000 popular npm packages.}
}

Themistoklis Diamantopoulos, Georgios Karagiannopoulos and Andreas Symeonidis
"CodeCatch: Extracting Source Code Snippets from Online Sources"
IEEE/ACM 6th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE), pp. 21-27, 2018 May

@inproceedings{Diamantopoulos2018,
author={Themistoklis Diamantopoulos and Georgios Karagiannopoulos and Andreas Symeonidis},
title={CodeCatch: Extracting Source Code Snippets from Online Sources},
booktitle={IEEE/ACM 6th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE)},
pages={21-27},
year={2018},
month={05},
date={2018-05-01},
url={https://issel.ee.auth.gr/wp-content/uploads/2018/11/RAISE2018.pdf},
doi={https://doi.org/10.1145/3194104.3194107}
}

Anastasios Dimanidis, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"A Natural Language Driven Approach for Automated Web API Development: Gherkin2OAS"
WWW ’18 Companion: The 2018 Web Conference Companion, pp. 6, Lyon, France, 2018 Apr

Speeding up the development process of Web Services, while adhering to high quality software standards is a typical requirement in the software industry. This is why industry specialists usually suggest "driven by" development approaches to tackle this problem. In this paper, we propose such a methodology that employs Specification Driven Development and Behavior Driven Development in order to facilitate the phases of Web Service requirements elicitation and specification. Furthermore, we introduce gherkin2OAS, a software tool that aspires to bridge the aforementioned development approaches. Through the suggested methodology and tool, one may design and build RESTful services fast, while ensuring proper functionality.
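
As a toy illustration of the direction of the transformation, the snippet below maps a single "When/Then" step onto an OpenAPI path stub via a regex. The real gherkin2OAS relies on natural language analysis, so this stand-in only shows the shape of the output.
```python
import re

def step_to_oas(when_step, then_status):
    """map e.g. 'I GET /books' plus an expected status onto an OAS path stub"""
    m = re.match(r"I (GET|POST|PUT|DELETE) (/\S+)", when_step)
    method, path = m.group(1).lower(), m.group(2)
    return {path: {method: {"responses": {
        str(then_status): {"description": "expected outcome from the Then step"}}}}}

print(step_to_oas("I GET /books", 200))
# {'/books': {'get': {'responses': {'200': {...}}}}}
```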

@inproceedings{Dimanidis2018,
author={Anastasios Dimanidis and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={A Natural Language Driven Approach for Automated Web API Development: Gherkin2OAS},
booktitle={WWW ’18 Companion: The 2018 Web Conference Companion},
pages={6},
address={Lyon, France},
year={2018},
month={04},
date={2018-04-23},
url={https://issel.ee.auth.gr/wp-content/uploads/2018/03/gherkin2oas.pdf},
doi={https://doi.org/10.1145/3184558.3191654},
abstract={Speeding up the development process of Web Services, while adhering to high quality software standards is a typical requirement in the software industry. This is why industry specialists usually suggest "driven by" development approaches to tackle this problem. In this paper, we propose such a methodology that employs Specification Driven Development and Behavior Driven Development in order to facilitate the phases of Web Service requirements elicitation and specification. Furthermore, we introduce gherkin2OAS, a software tool that aspires to bridge the aforementioned development approaches. Through the suggested methodology and tool, one may design and build RESTful services fast, while ensuring proper functionality.}
}

Maria Th. Kotouza, Konstantinos N. Vavliakis, Fotis E. Psomopoulos and Pericles A. Mitkas
"A Hierarchical Multi-Metric Framework for Item Clustering"
5th International Conference on Big Data Computing Applications and Technologies, pp. 191-197, IEEE/ACM, Zurich, Switzerland, 2018 Dec

Item clustering is commonly used for dimensionality reduction, uncovering item similarities and connections, gaining insights of the market structure and recommendations. Hierarchical clustering methods produce a hierarchy structure along with the clusters that can be useful for managing item categories and sub-categories, dealing with indirect competition and new item categorization as well. Nevertheless, baseline hierarchical clustering algorithms have high computational cost and memory usage. In this paper we propose an innovative scalable hierarchical clustering framework, which overcomes these limitations. Our work consists of a binary tree construction algorithm that creates a hierarchy of the items using three metrics, a) Identity, b) Similarity and c) Entropy, as well as a branch breaking algorithm which composes the final clusters by applying thresholds to each branch of the tree. The proposed framework is evaluated on the popular MovieLens 20M dataset achieving significant reduction in both memory consumption and computational time over a baseline hierarchical clustering algorithm.
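
The branch-breaking idea, cutting a hierarchy into final clusters by thresholding its branches, can be shown compactly with SciPy's agglomerative linkage standing in for the paper's custom binary tree and multi-metric distances; the item vectors and threshold are toy values.
```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

items = np.random.default_rng(1).normal(size=(8, 4))  # toy item feature vectors
tree = linkage(items, method="average")               # hierarchy over the items
labels = fcluster(tree, t=1.5, criterion="distance")  # break branches at a threshold
print(labels)                                         # final cluster per item
```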

@inproceedings{KotouzaVPM18,
author={Maria Th. Kotouza and Konstantinos N. Vavliakis and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={A Hierarchical Multi-Metric Framework for Item Clustering},
booktitle={5th International Conference on Big Data Computing Applications and Technologies},
pages={191-197},
publisher={IEEE/ACM},
address={Zurich, Switzerland},
year={2018},
month={12},
date={2018-12-17},
url={http://issel.ee.auth.gr/wp-content/uploads/2019/02/BDCAT_2018_paper_24_Proceedings.pdf},
doi={https://doi.org/10.1109/BDCAT.2018.00031},
abstract={Item clustering is commonly used for dimensionality reduction, uncovering item similarities and connections, gaining insights of the market structure and recommendations. Hierarchical clustering methods produce a hierarchy structure along with the clusters that can be useful for managing item categories and sub-categories, dealing with indirect competition and new item categorization as well. Nevertheless, baseline hierarchical clustering algorithms have high computational cost and memory usage. In this paper we propose an innovative scalable hierarchical clustering framework, which overcomes these limitations. Our work consists of a binary tree construction algorithm that creates a hierarchy of the items using three metrics, a) Identity, b) Similarity and c) Entropy, as well as a branch breaking algorithm which composes the final clusters by applying thresholds to each branch of the tree. The proposed framework is evaluated on the popular MovieLens 20M dataset achieving significant reduction in both memory consumption and computational time over a baseline hierarchical clustering algorithm.}
}

Panagiotis G. Mousouliotis, Konstantinos L. Panayiotou, Emmanouil G. Tsardoulias, Loukas P. Petrou and Andreas L. Symeonidis
"Expanding a robots life: Low power object recognition via FPGA-based DCNN deployment"
MOCAST, 2018 Mar

FPGAs are commonly used to accelerate domain-specific algorithmic implementations, as they can achieve impressive performance boosts, are reprogrammable and exhibit minimal power consumption. In this work, the SqueezeNet DCNN is accelerated using an SoC FPGA in order for the offered object recognition resource to be employed in a robotic application. Experiments are conducted to investigate the performance and power consumption of the implementation in comparison to deployment on other widely-used computational systems.

@conference{Mousouliotis2018,
author={Panagiotis G. Mousouliotis and Konstantinos L. Panayiotou and Emmanouil G. Tsardoulias and Loukas P. Petrou and Andreas L. Symeonidis},
title={Expanding a robot's life: Low power object recognition via FPGA-based DCNN deployment},
booktitle={MOCAST},
url={https://arxiv.org/abs/1804.00512},
year={2018},
month={03},
date={2018-03-01},
abstract={FPGAs are commonly used to accelerate domain-specific algorithmic implementations, as they can achieve impressive performance boosts, are reprogrammable and exhibit minimal power consumption. In this work, the SqueezeNet DCNN is accelerated using an SoC FPGA in order for the offered object recognition resource to be employed in a robotic application. Experiments are conducted to investigate the performance and power consumption of the implementation in comparison to deployment on other widely-used computational systems.}
}

Michail Papamichail, Themistoklis Diamantopoulos, Ilias Chrysovergis, Philippos Samlidis and Andreas Symeonidis
"User-Perceived Reusability Estimation based on Analysis of Software Repositories"
Proceedings of the 2018 Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE), 2018 Mar

The popularity of open-source software repositories has led to a new reuse paradigm, where online resources can be thoroughly analyzed to identify reusable software components. Obviously, assessing the quality and specifically the reusability potential of source code residing in open software repositories poses a major challenge for the research community. Although several systems have been designed towards this direction, most of them do not focus on reusability. In this paper, we define and formulate a reusability score by employing information from GitHub stars and forks, which indicate the extent to which software components are adopted/accepted by developers. Our methodology involves applying and assessing different state-of-the-practice machine learning algorithms, in order to construct models for reusability estimation at both class and package levels. Preliminary evaluation of our methodology indicates that our approach can successfully assess reusability, as perceived by developers.
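
The paper's exact score formulation is not reproduced here; as a loudly labeled stand-in, a log-scaled combination of stars and forks conveys the idea of a popularity-derived reusability target.
```python
import math

def reusability_score(stars, forks, w_stars=0.5, w_forks=0.5):
    # log1p dampens the huge dynamic range of star/fork counts (assumed scaling,
    # not the paper's formula)
    return w_stars * math.log1p(stars) + w_forks * math.log1p(forks)

print(reusability_score(stars=3200, forks=450))
```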

@inproceedings{Papamichail2018MaLTeSQuE,
author={Michail Papamichail and Themistoklis Diamantopoulos and Ilias Chrysovergis and Philippos Samlidis and Andreas Symeonidis},
title={User-Perceived Reusability Estimation based on Analysis of Software Repositories},
booktitle={Proceedings of the 2018 Workshop on Machine Learning Techniques for Software Quality Evaluation (MaLTeSQuE)},
year={2018},
month={03},
date={2018-03-20},
url={https://issel.ee.auth.gr/wp-content/uploads/2019/08/maLTeSQuE.pdf},
publisher's url={https://www.researchgate.net/publication/324106989_User-Perceived_Reusability_Estimation_based_on_Analysis_of_Software_Repositories},
abstract={The popularity of open-source software repositories has led to a new reuse paradigm, where online resources can be thoroughly analyzed to identify reusable software components. Obviously, assessing the quality and specifically the reusability potential of source code residing in open software repositories poses a major challenge for the research community. Although several systems have been designed towards this direction, most of them do not focus on reusability. In this paper, we define and formulate a reusability score by employing information from GitHub stars and forks, which indicate the extent to which software components are adopted/accepted by developers. Our methodology involves applying and assessing different state-of-the-practice machine learning algorithms, in order to construct models for reusability estimation at both class and package levels. Preliminary evaluation of our methodology indicates that our approach can successfully assess reusability, as perceived by developers.}
}

Emmanouil G. Tsardoulias, Konstantinos L. Panayiotou, Christoforos Zolotas, Alexandros Philotheou, Andreas L. Symeonidis and Loukas Petrou
"From classical to cloud robotics: Challenges and potential"
3rd International Workshop on Microsystems, Sindos Campus, ATEI Thessaloniki, Greece, 2018 Dec

Nowadays, a rapid transition from the classical robotic systems to more modern concepts like Cloud or IoT robotics is being experienced. The current paper briefly overviews the benefits robots can have, as parts of the increasingly interconnected world.

@conference{TsardouliasMicrosystems2018,
author={Emmanouil G. Tsardoulias and Konstantinos L. Panayiotou and Christoforos Zolotas and Alexandros Philotheou and Andreas L. Symeonidis and Loukas Petrou},
title={From classical to cloud robotics: Challenges and potential},
booktitle={3rd International Workshop on Microsystems},
address={Sindos Campus, ATEI Thessaloniki, Greece},
year={2018},
month={12},
date={2018-12-01},
url={https://issel.ee.auth.gr/wp-content/uploads/2019/02/From-classical-to-cloud-robotics-Challenges-and-potential.pdf},
abstract={Nowadays, a rapid transition from the classical robotic systems to more modern concepts like Cloud or IoT robotics is being experienced. The current paper briefly overviews the benefits robots can have, as parts of the increasingly interconnected world.}
}

Konstantinos N. Vavliakis, Maria Th. Kotouza, Andreas L. Symeonidis and Pericles A. Mitkas
"Recommendation Systems in a Conversational Web"
Proceedings of the 14th International Conference on Web Information Systems and Technologies - Volume 1: WEBIST, pp. 68-77, SciTePress, 2018 Jan

In this paper we redefine the concept of Conversation Web in the context of hyper-personalization. We argue that hyper-personalization in the WWW is only possible within a conversational web where websites and users continuously “discuss” (interact in any way). We present a modular system architecture for the conversational WWW, given that adapting to various user profiles and multivariate websites in terms of size and user traffic is necessary, especially in e-commerce. Obviously there cannot be a unique fit-to-all algorithm, but numerous complementary personalization algorithms and techniques are needed. In this context, we propose PRCW, a novel hybrid approach combining offline and online recommendations using RFMG, an extension of RFM modeling. We evaluate our approach against the results of a deep neural network in two datasets coming from different online retailers. Our evaluation indicates that a) the proposed approach outperforms current state-of-art methods in small-medium datasets and can improve performance in large datasets when combined with other methods, b) results can greatly vary in different datasets, depending on size and characteristics, thus locating the proper method for each dataset can be a rather complex task, and c) offline algorithms should be combined with online methods in order to get optimal results since offline algorithms tend to offer better performance but online algorithms are necessary for exploiting new users and trends that turn up.
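
For context on the modeling basis: classic RFM assigns each customer quantile scores for recency, frequency and monetary value, which RFMG extends (the extension itself is not reproduced here). A pandas sketch on toy orders, with rank-based tertiles standing in for the usual quintiles:
```python
import pandas as pd

orders = pd.DataFrame({
    "customer": ["a", "a", "b", "c", "c", "c"],
    "days_ago": [3, 40, 10, 1, 2, 90],
    "amount":   [20, 35, 15, 50, 60, 10],
})

rfm = orders.groupby("customer").agg(
    recency=("days_ago", "min"),     # days since last order (lower is better)
    frequency=("amount", "size"),    # number of orders
    monetary=("amount", "sum"),      # total spend
)
# rank-based 1..3 scores on toy data; recency is reversed so recent buyers score high
rfm["R"] = pd.qcut(rfm["recency"].rank(method="first"), 3, labels=[3, 2, 1])
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 3, labels=[1, 2, 3])
rfm["M"] = pd.qcut(rfm["monetary"].rank(method="first"), 3, labels=[1, 2, 3])
print(rfm)
```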

@conference{webist18,
author={Konstantinos N. Vavliakis and Maria Th. Kotouza and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Recommendation Systems in a Conversational Web},
booktitle={Proceedings of the 14th International Conference on Web Information Systems and Technologies - Volume 1: WEBIST},
pages={68-77},
publisher={SciTePress},
year={2018},
month={01},
date={2018-01-01},
url={https://issel.ee.auth.gr/wp-content/uploads/2019/02/WEBIST_2018_29.pdf},
doi={https://doi.org/10.5220/0006935300680077},
isbn={978-989-758-324-7},
abstract={In this paper we redefine the concept of Conversation Web in the context of hyper-personalization. We argue that hyper-personalization in the WWW is only possible within a conversational web where websites and users continuously “discuss” (interact in any way). We present a modular system architecture for the conversational WWW, given that adapting to various user profiles and multivariate websites in terms of size and user traffic is necessary, especially in e-commerce. Obviously there cannot be a unique fit-to-all algorithm, but numerous complementary personalization algorithms and techniques are needed. In this context, we propose PRCW, a novel hybrid approach combining offline and online recommendations using RFMG, an extension of RFM modeling. We evaluate our approach against the results of a deep neural network in two datasets coming from different online retailers. Our evaluation indicates that a) the proposed approach outperforms current state-of-art methods in small-medium datasets and can improve performance in large datasets when combined with other methods, b) results can greatly vary in different datasets, depending on size and characteristics, thus locating the proper method for each dataset can be a rather complex task, and c) offline algorithms should be combined with online methods in order to get optimal results since offline algorithms tend to offer better performance but online algorithms are necessary for exploiting new users and trends that turn up.}
}

2018

Inbooks

Valasia Dimaridou, Alexandros-Charalampos Kyprianidis, Michail Papamichail, Themistoklis Diamantopoulos and Andreas Symeonidis
"Assessing the User-Perceived Quality of Source Code Components using Static Analysis Metrics"
Chapter 1, pp. 25, Springer, 2018 Jan

Nowadays, developers tend to adopt a component-based software engineering approach, reusing own implementations and/or resorting to third-party source code. This practice is in principle cost-effective, however it may also lead to low quality software products, if the components to be reused exhibit low quality. Thus, several approaches have been developed to measure the quality of software components. Most of them, however, rely on the aid of experts for defining target quality scores and deriving metric thresholds, leading to results that are context-dependent and subjective. In this work, we build a mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories’ stars and forks, which indicate their adoption/acceptance by developers. Upon removing outliers with a one-class classifier, we employ Principal Feature Analysis and examine the semantics among metrics to provide an analysis on five axes for source code components (classes or packages): complexity, coupling, size, degree of inheritance, and quality of documentation. Neural networks are thus applied to estimate the final quality score given metrics from these axes. Preliminary evaluation indicates that our approach effectively estimates software quality at both class and package levels.
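
A compact sketch of the estimation pipeline on synthetic data: a one-class classifier drops metric outliers, then a neural network regresses the popularity-derived quality target on static analysis metrics. Principal Feature Analysis and the paper's five metric axes are omitted; all data below is fabricated for illustration.
```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                # static analysis metrics per component
y = 0.5 * X[:, 0] - 0.2 * X[:, 3] + rng.normal(0, 0.1, 300)  # proxy quality target

keep = OneClassSVM(nu=0.05).fit_predict(X) == 1      # drop metric outliers
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[keep], y[keep])
print(model.predict(X[:3]))                          # estimated quality scores
```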

@inbook{Dimaridou2018,
author={Valasia Dimaridou and Alexandros-Charalampos Kyprianidis and Michail Papamichail and Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Assessing the User-Perceived Quality of Source Code Components using Static Analysis Metrics},
chapter={1},
pages={25},
publisher={Springer},
year={2018},
month={01},
date={2018-01-01},
url={https://issel.ee.auth.gr/wp-content/uploads/2019/08/ccis_book_chapter.pdf},
publisher's url={https://www.researchgate.net/publication/325627162_Assessing_the_User-Perceived_Quality_of_Source_Code_Components_Using_Static_Analysis_Metrics},
abstract={Nowadays, developers tend to adopt a component-based software engineering approach, reusing own implementations and/or resorting to third-party source code. This practice is in principle cost-effective, however it may also lead to low quality software products, if the components to be reused exhibit low quality. Thus, several approaches have been developed to measure the quality of software components. Most of them, however, rely on the aid of experts for defining target quality scores and deriving metric thresholds, leading to results that are context-dependent and subjective. In this work, we build a mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories’ stars and forks, which indicate their adoption/acceptance by developers. Upon removing outliers with a one-class classifier, we employ Principal Feature Analysis and examine the semantics among metrics to provide an analysis on five axes for source code components (classes or packages): complexity, coupling, size, degree of inheritance, and quality of documentation. Neural networks are thus applied to estimate the final quality score given metrics from these axes. Preliminary evaluation indicates that our approach effectively estimates software quality at both class and package levels.}
}

2017

Journal Articles

Themistoklis Diamantopoulos, Michael Roth, Andreas Symeonidis and Ewan Klein
"Software requirements as an application domain for natural language processing"
Language Resources and Evaluation, pp. 1-30, 2017 Feb

Mapping functional requirements first to specifications and then to code is one of the most challenging tasks in software development. Since requirements are commonly written in natural language, they can be prone to ambiguity, incompleteness and inconsistency. Structured semantic representations allow requirements to be translated to formal models, which can be used to detect problems at an early stage of the development process through validation. Storing and querying such models can also facilitate software reuse. Several approaches constrain the input format of requirements to produce specifications, however they usually require considerable human effort in order to adopt domain-specific heuristics and/or controlled languages. We propose a mechanism that automates the mapping of requirements to formal representations using semantic role labeling. We describe the first publicly available dataset for this task, employ a hierarchical framework that allows requirements concepts to be annotated, and discuss how semantic role labeling can be adapted for parsing software requirements.

@article{Diamantopoulos2017,
author={Themistoklis Diamantopoulos and Michael Roth and Andreas Symeonidis and Ewan Klein},
title={Software requirements as an application domain for natural language processing},
journal={Language Resources and Evaluation},
pages={1-30},
year={2017},
month={02},
date={2017-02-27},
url={http://rdcu.be/tpxd},
doi={https://doi.org/10.1007/s10579-017-9381-z},
abstract={Mapping functional requirements first to specifications and then to code is one of the most challenging tasks in software development. Since requirements are commonly written in natural language, they can be prone to ambiguity, incompleteness and inconsistency. Structured semantic representations allow requirements to be translated to formal models, which can be used to detect problems at an early stage of the development process through validation. Storing and querying such models can also facilitate software reuse. Several approaches constrain the input format of requirements to produce specifications, however they usually require considerable human effort in order to adopt domain-specific heuristics and/or controlled languages. We propose a mechanism that automates the mapping of requirements to formal representations using semantic role labeling. We describe the first publicly available dataset for this task, employ a hierarchical framework that allows requirements concepts to be annotated, and discuss how semantic role labeling can be adapted for parsing software requirements.}
}

Themistoklis Diamantopoulos and Andreas Symeonidis
"Enhancing requirements reusability through semantic modeling and data mining techniques"
Enterprise Information Systems, pp. 1-22, 2017 Dec

Enhancing the requirements elicitation process has always been of added value to software engineers, since it expedites the software lifecycle and reduces errors in the conceptualization phase of software products. The challenge posed to the research community is to construct formal models that are capable of storing requirements from multimodal formats (text and UML diagrams) and promote easy requirements reuse, while at the same time being traceable to allow full control of the system design, as well as comprehensible to software engineers and end users. In this work, we present an approach that enhances requirements reuse while capturing the static (functional requirements, use case diagrams) and dynamic (activity diagrams) view of software projects. Our ontology-based approach allows for reasoning over the stored requirements, while the mining methodologies employed detect incomplete or missing software requirements, this way reducing the effort required for requirements elicitation at an early stage of the project lifecycle.

@article{Diamantopoulos2017EIS,
author={Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Enhancing requirements reusability through semantic modeling and data mining techniques},
journal={Enterprise Information Systems},
pages={1-22},
year={2017},
month={12},
date={2017-12-17},
url={https://issel.ee.auth.gr/wp-content/uploads/2019/08/EIS2017.pdf},
doi={https://doi.org/10.1080/17517575.2017.1416177},
abstract={Enhancing the requirements elicitation process has always been of added value to software engineers, since it expedites the software lifecycle and reduces errors in the conceptualization phase of software products. The challenge posed to the research community is to construct formal models that are capable of storing requirements from multimodal formats (text and UML diagrams) and promote easy requirements reuse, while at the same time being traceable to allow full control of the system design, as well as comprehensible to software engineers and end users. In this work, we present an approach that enhances requirements reuse while capturing the static (functional requirements, use case diagrams) and dynamic (activity diagrams) view of software projects. Our ontology-based approach allows for reasoning over the stored requirements, while the mining methodologies employed detect incomplete or missing software requirements, this way reducing the effort required for requirements elicitation at an early stage of the project lifecycle.}
}

A. Thallas, E.G. Tsardoulias and L. Petrou
"Topological Based Scan Matching – Odometry Posterior Sampling in RBPF Under Kinematic Model Failures"
Journal of Intelligent & Robotic Systems, 91, pp. 543-568, 2017 Nov

Rao-Blackwellized Particle Filters (RBPF) have been utilized to provide a solution to the SLAM problem. One of the main factors that cause RBPF failure is the potential particle impoverishment. Another popular approach to the SLAM problem are Scan Matching methods, whose good results require environments with lots of information, however fail in the lack thereof. To face these issues, in the current work techniques are presented to combine Rao-Blackwellized particle filters with a scan matching algorithm (CRSM SLAM). The particle filter maintains the correct hypothesis in environments lacking features and CRSM is employed in feature-rich environments while simultaneously reduces the particle filter dispersion. Since CRSM’s good performance is based on its high iteration frequency, a multi-threaded combination is presented which allows CRSM to operate while RBPF updates its particles. Additionally, a novel method utilizing topological information is proposed, in order to reduce the number of particle filter resamplings. Finally, we present methods to address anomalous situations where scan matching can not be performed and the vehicle displays behaviors not modeled by the kinematic model, causing the whole method to collapse. Numerous experiments are conducted to support the aforementioned methods’ advantages.

@article{etsardouRbpf2017,
author={A. Thallas and E.G. Tsardoulias and L. Petrou},
title={Topological Based Scan Matching – Odometry Posterior Sampling in RBPF Under Kinematic Model Failures},
journal={Journal of Intelligent & Robotic Systems},
volume={91},
pages={543-568},
year={2017},
month={11},
date={2017-11-15},
url={https://link.springer.com/article/10.1007/s10846-017-0730-3},
doi={https://doi.org/10.1007/s10846-017-0730-3},
keywords={SLAM;Scan matching;Occupancy grid map;Autonomous robots;Rao-blackwellized particle filter;CRSM},
abstract={Rao-Blackwellized Particle Filters (RBPF) have been utilized to provide a solution to the SLAM problem. One of the main factors that cause RBPF failure is the potential particle impoverishment. Another popular approach to the SLAM problem are Scan Matching methods, whose good results require environments with lots of information, however fail in the lack thereof. To face these issues, in the current work techniques are presented to combine Rao-Blackwellized particle filters with a scan matching algorithm (CRSM SLAM). The particle filter maintains the correct hypothesis in environments lacking features and CRSM is employed in feature-rich environments while simultaneously reduces the particle filter dispersion. Since CRSM’s good performance is based on its high iteration frequency, a multi-threaded combination is presented which allows CRSM to operate while RBPF updates its particles. Additionally, a novel method utilizing topological information is proposed, in order to reduce the number of particle filter resamplings. Finally, we present methods to address anomalous situations where scan matching can not be performed and the vehicle displays behaviors not modeled by the kinematic model, causing the whole method to collapse. Numerous experiments are conducted to support the aforementioned methods’ advantages.}
}

Miltiadis G. Siavvas, Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis
"QATCH - An adaptive framework for software product quality assessment"
Expert Systems with Applications, 2017 May

The subjectivity that underlies the notion of quality does not allow the design and development of a universally accepted mechanism for software quality assessment. This is why contemporary research is now focused on seeking mechanisms able to produce software quality models that can be easily adjusted to custom user needs. In this context, we introduce QATCH, an integrated framework that applies static analysis to benchmark repositories in order to generate software quality models tailored to stakeholder specifications. Fuzzy multi-criteria decision-making is employed in order to model the uncertainty imposed by experts’ judgments. These judgments can be expressed into linguistic values, which makes the process more intuitive. Furthermore, a robust software quality model, the base model, is generated by the system, which is used in the experiments for QATCH system verification. The paper provides an extensive analysis of QATCH and thoroughly discusses its validity and added value in the field of software quality through a number of individual experiments.

@article{Siavvas2017,
author={Miltiadis G. Siavvas and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis},
title={QATCH - An adaptive framework for software product quality assessment},
journal={Expert Systems with Applications},
year={2017},
month={05},
date={2017-05-25},
url={http://www.sciencedirect.com/science/article/pii/S0957417417303883},
doi={https://doi.org/10.1016/j.eswa.2017.05.060},
keywords={Software quality assessment;Software engineering;Multi-criteria decision making;Fuzzy analytic hierarchy process;Software static analysis;Quality metrics},
abstract={The subjectivity that underlies the notion of quality does not allow the design and development of a universally accepted mechanism for software quality assessment. This is why contemporary research is now focused on seeking mechanisms able to produce software quality models that can be easily adjusted to custom user needs. In this context, we introduce QATCH, an integrated framework that applies static analysis to benchmark repositories in order to generate software quality models tailored to stakeholder specifications. Fuzzy multi-criteria decision-making is employed in order to model the uncertainty imposed by experts’ judgments. These judgments can be expressed into linguistic values, which makes the process more intuitive. Furthermore, a robust software quality model, the base model, is generated by the system, which is used in the experiments for QATCH system verification. The paper provides an extensive analysis of QATCH and thoroughly discusses its validity and added value in the field of software quality through a number of individual experiments.}
}

Athanassios M. Kintsakis, Fotis E. Psomopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments"
SoftwareX, 6, pp. 217-224, 2017 Sep

Hermes introduces a new “describe once, run anywhere” paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.

@article{SOFTX89,
author={Athanassios M. Kintsakis and Fotis E. Psomopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments},
journal={SoftwareX},
volume={6},
pages={217-224},
year={2017},
month={09},
date={2017-09-19},
url={http://www.sciencedirect.com/science/article/pii/S2352711017300304},
doi={https://doi.org/10.1016/j.softx.2017.07.007},
keywords={Bioinformatics;hybrid cloud;scientific workflows;distributed computing},
abstract={Hermes introduces a new “describe once, run anywhere” paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.}
}

Cezary Zielinski, Maciej Stefanczyk, Tomasz Kornuta, Maksym Figat, Wojciech Dudek, Wojciech Szynkiewicz, Wlodzimierz Kasprzak, Jan Figat, Marcin Szlenk, Tomasz Winiarski, Konrad Banachowicz, Teresa Zielinska, Emmanouil G. Tsardoulias, Andreas L. Symeonidis, Fotis E. Psomopoulos, Athanassios M. Kintsakis, Pericles A. Mitkas, Aristeidis Thallas, Sofia E. Reppou, George T. Karagiannis, Konstantinos Panayiotou, Vincent Prunet, Manuel Serrano, Jean-Pierre Merlet, Stratos Arampatzis, Alexandros Giokas, Lazaros Penteridis, Ilias Trochidis, David Daney and Miren Iturburu
"Variable structure robot control systems: The RAPP approach"
Robotics and Autonomous Systems, 94, pp. 226-244, 2017 May

This paper presents a method of designing variable structure control systems for robots. As the on-board robot computational resources are limited, but in some cases the demands imposed on the robot by the user are virtually limitless, the solution is to produce a variable structure system. The task dependent part has to be exchanged, however the task governs the activities of the robot. Thus not only exchange of some task-dependent modules is required, but also supervisory responsibilities have to be switched. Such control systems are necessary in the case of robot companions, where the owner of the robot may demand from it to provide many services.

@article{Zielnski2017,
author={Cezary Zielinski and Maciej Stefanczyk and Tomasz Kornuta and Maksym Figat and Wojciech Dudek and Wojciech Szynkiewicz and Wlodzimierz Kasprzak and Jan Figat and Marcin Szlenk and Tomasz Winiarski and Konrad Banachowicz and Teresa Zielinska and Emmanouil G. Tsardoulias and Andreas L. Symeonidis and Fotis E. Psomopoulos and Athanassios M. Kintsakis and Pericles A. Mitkas and Aristeidis Thallas and Sofia E. Reppou and George T. Karagiannis and Konstantinos Panayiotou and Vincent Prunet and Manuel Serrano and Jean-Pierre Merlet and Stratos Arampatzis and Alexandros Giokas and Lazaros Penteridis and Ilias Trochidis and David Daney and Miren Iturburu},
title={Variable structure robot control systems: The RAPP approach},
journal={Robotics and Autonomous Systems},
volume={94},
pages={226-244},
year={2017},
month={05},
date={2017-05-05},
url={http://www.sciencedirect.com/science/article/pii/S0921889016306248},
doi={https://doi.org/10.1016/j.robot.2017.05.002},
keywords={robot controllers;variable structure controllers;cloud robotics;RAPP},
abstract={This paper presents a method of designing variable structure control systems for robots. As the on-board robot computational resources are limited, but in some cases the demands imposed on the robot by the user are virtually limitless, the solution is to produce a variable structure system. The task dependent part has to be exchanged, however the task governs the activities of the robot. Thus not only exchange of some task-dependent modules is required, but also supervisory responsibilities have to be switched. Such control systems are necessary in the case of robot companions, where the owner of the robot may demand from it to provide many services.}
}

2017

Conference Papers

Maria Th. Kotouza, Antonios C. Chrysopoulos and Pericles A. Mitkas
"Segmentation of Low Voltage Consumers for Designing Individualized Pricing Policies"
European Energy Market (EEM), 2017 14th International Conference, pp. 1-6, IEEE, Dresden, Germany, 2017 Jun

In recent years, the Smart Grid paradigm has opened a vast set of opportunities for all participating parties in the Energy Markets (i.e. producers, Distribution and Transmission System Operators, retailers, consumers), providing two-way data communication, increased security and grid stability. Furthermore, the liberation of distribution and energy services has led towards competitive Energy Market environments [4]. In order to maintain their existing customers' satisfaction level high, as well as reaching out to new ones, suppliers must provide better and more reliable energy services, that are specifically tailored to each customer or to a group of customers with similar needs. Thus, it is necessary to identify segments of customers that have common energy characteristics via a process called Consumer Load Profiling (CLP) [16].

@inproceedings{2017Kotouza,
author={Maria Th. Kotouza and Antonios C. Chrysopoulos and Pericles A. Mitkas},
title={Segmentation of Low Voltage Consumers for Designing Individualized Pricing Policies},
booktitle={European Energy Market (EEM), 2017 14th International Conference},
pages={1-6},
publisher={IEEE},
address={Dresden, Germany},
year={2017},
month={06},
date={2017-06-06},
doi={https://doi.org/10.1109/EEM.2017.7981862},
issn={2165-4093},
isbn={978-1-5090-5499-2},
abstract={In recent years, the Smart Grid paradigm has opened a vast set of opportunities for all participating parties in the Energy Markets (i.e. producers, Distribution and Transmission System Operators, retailers, consumers), providing two-way data communication, increased security and grid stability. Furthermore, the liberation of distribution and energy services has led towards competitive Energy Market environments [4]. In order to maintain their existing customers' satisfaction level high, as well as reaching out to new ones, suppliers must provide better and more reliable energy services, that are specifically tailored to each customer or to a group of customers with similar needs. Thus, it is necessary to identify segments of customers that have common energy characteristics via a process called Consumer Load Profiling (CLP) [16].}
}

Panagiotis Doxopoulos, Konstantinos Panayiotou, Emmanouil Tsardoulias and Andreas L. Symeonidis
"Creating an extrovert robotic assistant via IoT networking devices"
International Conference on Cloud and Robotics, Saint Quentin, France, 2017 Nov

The communication and collaboration of Cyber-Physical Systems, including machines and robots, among themselves and with humans, is expected to attract researchers' interest for the years to come. A key element of the new revolution is the Internet of Things (IoT). IoT infrastructures enable communication between different connected devices using internet protocols. The integration of robots in an IoT platform can improve robot capabilities by providing access to other devices and resources. In this paper we present an IoT-enabled application including a NAO robot which can communicate through an IoT platform with a reflex measurement system and a hardware node that provides robotics-oriented services in the form of RESTful web services. An activity reminder application is also included, illustrating the extension capabilities of the system.

@inproceedings{Doxopoulos2017,
author={Panagiotis Doxopoulos and Konstantinos Panayiotou and Emmanouil Tsardoulias and Andreas L. Symeonidis},
title={Creating an extrovert robotic assistant via IoT networking devices},
booktitle={International Conference on Cloud and Robotics},
address={Saint Quentin, France},
year={2017},
month={11},
date={2017-11-27},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/11/2017-Creating-an-extrovert-robotic-assistant-via-IoT-networking-devices-ICCR17.pdf},
keywords={Web Services;robotics;Internet of Things;IoT platform;Swagger;REST;WAMP},
abstract={The communication and collaboration of Cyber-Physical Systems, including machines and robots, among themselves and with humans, is expected to attract researchers' interest for the years to come. A key element of the new revolution is the Internet of Things (IoT). IoT infrastructures enable communication between different connected devices using internet protocols. The integration of robots in an IoT platform can improve robot capabilities by providing access to other devices and resources. In this paper we present an IoT-enabled application including a NAO robot which can communicate through an IoT platform with a reflex measurement system and a hardware node that provides robotics-oriented services in the form of RESTful web services. An activity reminder application is also included, illustrating the extension capabilities of the system.}
}

Valasia Dimaridou, Alexandros-Charalampos Kyprianidis, Michail Papamichail, Themistoklis Diamantopoulos and Andreas Symeonidis
"Towards Modeling the User-perceived Quality of Source Code using Static Analysis Metrics"
Proceedings of the 12th International Conference on Software Technologies - Volume 1: ICSOFT, pp. 73-84, SciTePress, 2017 Jul

Nowadays, software has to be designed and developed as fast as possible, while maintaining quality standards. In this context, developers tend to adopt a component-based software engineering approach, reusing own implementations and/or resorting to third-party source code. This practice is in principle cost-effective, however it may lead to low quality software products. Thus, measuring the quality of software components is of vital importance. Several approaches that use code metrics rely on the aid of experts for defining target quality scores and deriving metric thresholds, leading to results that are highly context-dependent and subjective. In this work, we build a mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories’ stars and forks, which indicate their adoption/acceptance by the developers’ community. Upon removing outliers with a one-class classifier, we employ Principal Feature Analysis and examine the semantics among metrics to provide an analysis on five axes for a source code component: complexity, coupling, size, degree of inheritance, and quality of documentation. Neural networks are used to estimate the final quality score given metrics from all of these axes. Preliminary evaluation indicates that our approach can effectively estimate software quality.

@inproceedings{icsoft17,
author={Valasia Dimaridou and Alexandros-Charalampos Kyprianidis and Michail Papamichail and Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Towards Modeling the User-perceived Quality of Source Code using Static Analysis Metrics},
booktitle={Proceedings of the 12th International Conference on Software Technologies - Volume 1: ICSOFT},
pages={73-84},
publisher={SciTePress},
year={2017},
month={07},
date={2017-07-26},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/08/ICSOFT.pdf},
doi={https://doi.org/10.5220/0006420000730084},
slideshare={https://www.slideshare.net/isselgroup/towards-modeling-the-userperceived-quality-of-source-code-using-static-analysis-metrics},
abstract={Nowadays, software has to be designed and developed as fast as possible, while maintaining quality standards. In this context, developers tend to adopt a component-based software engineering approach, reusing own implementations and/or resorting to third-party source code. This practice is in principle cost-effective, however it may lead to low quality software products. Thus, measuring the quality of software components is of vital importance. Several approaches that use code metrics rely on the aid of experts for defining target quality scores and deriving metric thresholds, leading to results that are highly context-dependent and subjective. In this work, we build a mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories’ stars and forks, which indicate their adoption/acceptance by the developers’ community. Upon removing outliers with a one-class classifier, we employ Principal Feature Analysis and examine the semantics among metrics to provide an analysis on five axes for a source code component: complexity, coupling, size, degree of inheritance, and quality of documentation. Neural networks are used to estimate the final quality score given metrics from all of these axes. Preliminary evaluation indicates that our approach can effectively estimate software quality.}
}

Emmanouil Krasanakis, Eleftherios Spyromitros-Xioufis, Symeon Papadopoulos and Yiannis Kompatsiaris
"Tunable Plug-In Rules with Reduced Posterior Certainty Loss in Imbalanced Datasets"
Proceedings of the First International Workshop on Learning with Imbalanced Domains: Theory and Applications, pp. 116-128, PMLR, ECML-PKDD, Skopje, Macedonia, 2017 Sep

Classifiers have difficulty recognizing under-represented minorities in imbalanced datasets, due to their focus on minimizing the overall misclassification error. This introduces predictive biases against minority classes. Post-processing plug-in rules are popular for tackling class imbalance, but they often affect the certainty of base classifier posteriors, when the latter already perform correct classification. This shortcoming makes them ill-suited to scoring tasks, where informative posterior scores are required for human interpretation. To this end, we propose the ILoss metric to measure the impact of imbalance-aware classifiers on the certainty of posterior distributions. We then generalize post-processing plug-in rules in an easily tunable framework and theoretically show that this framework tends to improve performance balance. Finally, we experimentally assert that appropriate usage of our framework can reduce ILoss while yielding similar performance, with respect to common imbalance-aware measures, to existing plug-in rules for binary problems.

@inproceedings{Krasanakis2017,
author={Emmanouil Krasanakis and Eleftherios Spyromitros-Xioufis and Symeon Papadopoulos and Yiannis Kompatsiaris},
title={Tunable Plug-In Rules with Reduced Posterior Certainty Loss in Imbalanced Datasets},
booktitle={Proceedings of the First International Workshop on Learning with Imbalanced Domains: Theory and Applications},
pages={116-128},
publisher={PMLR},
editor={Luís Torgo and Bartosz Krawczyk and Paula Branco and Nuno Moniz},
address={ECML-PKDD, Skopje, Macedonia},
year={2017},
month={09},
date={2017-09-22},
url={http://proceedings.mlr.press/v74/krasanakis17a/krasanakis17a.pdf},
abstract={Classifiers have difficulty recognizing under-represented minorities in imbalanced datasets, due to their focus on minimizing the overall misclassification error. This introduces predictive biases against minority classes. Post-processing plug-in rules are popular for tackling class imbalance, but they often affect the certainty of base classifier posteriors, when the latter already perform correct classification. This shortcoming makes them ill-suited to scoring tasks, where informative posterior scores are required for human interpretation. To this end, we propose the ILoss metric to measure the impact of imbalance-aware classifiers on the certainty of posterior distributions. We then generalize post-processing plug-in rules in an easily tunable framework and theoretically show that this framework tends to improve performance balance. Finally, we experimentally assert that appropriate usage of our framework can reduce ILoss while yielding similar performance, with respect to common imbalance-aware measures, to existing plug-in rules for binary problems.}
}

Konstantinos Panayiotou, Sofia E. Reppou, George Karagiannis, Emmanouil Tsardoulias, Aristeidis G. Thallas and Andreas L. Symeonidis
"Robotic applications towards an interactive alerting system for medical purposes"
30th IEEE International Symposium on Computer-Based Medical Systems (IEEE CBMS), Thessaloniki, 2017 Jan

Social consumer robots are slowly but strongly invading our everyday lives as their prices are becoming lower and lower, rendering them affordable for a wide range of civilians. There has been a lot of research concerning the potential applications of social robots, some of which may implement companionship or proxying technology-related tasks and assisting in everyday household endeavors, among others. In the current work, the RAPP framework is being used towards easily creating robotic applications suitable for utilization as a socially interactive alerting system with the employment of the NAO robot. The developed application stores events in an on-line calendar, directly via the robot or indirectly via a web environment, and asynchronously informs an end-user of imminent events.

@inproceedings{Panayiotou2017,
author={Konstantinos Panayiotou and Sofia E. Reppou and George Karagiannis and Emmanouil Tsardoulias and Aristeidis G. Thallas and Andreas L. Symeonidis},
title={Robotic applications towards an interactive alerting system for medical purposes},
booktitle={30th IEEE International Symposium on Computer-Based Medical Systems (IEEE CBMS)},
address={Thessaloniki},
year={2017},
month={01},
date={2017-01-01},
keywords={cloud robotics;robotic applications;social robotics;assistive robotics;mild cognitive impairment},
abstract={Social consumer robots are slowly but strongly invading our everyday lives as their prices are becoming lower and lower, rendering them affordable for a wide range of civilians. There has been a lot of research concerning the potential applications of social robots, some of which may implement companionship or proxying technology-related tasks and assisting in everyday household endeavors, among others. In the current work, the RAPP framework is being used towards easily creating robotic applications suitable for utilization as a socially interactive alerting system with the employment of the NAO robot. The developed application stores events in an on-line calendar, directly via the robot or indirectly via a web environment, and asynchronously informs an end-user of imminent events.}
}

Vasilis N. Remmas, Konstantinos Panayiotou, Emmanouil Tsardoulias and Andreas L. Symeonidis
"SRCA - The Scalable Robotic Cloud Agents Architecture"
International Conference on Cloud and Robotics, Saint Quentin, France, 2017 Nov

In an effort to penetrate the market at an affordable cost, consumer robots tend to provide limited processing capabilities, just enough to serve the purpose they have been designed for. However, a robot, in principle, should be able to interact and process the constantly increasing information streams generated from sensors or other devices. This would require the implementation of algorithms and mathematical models for the accurate processing of data volumes and significant computational resources. It is clear that as the data deluge continues to grow exponentially, deploying such algorithms on consumer robots will not be easy. Current work presents a cloud-based architecture that aims to offload computational resources from robots to a remote infrastructure, by utilizing and implementing cloud technologies. This way robots are allowed to enjoy functionality offered by complex algorithms that are executed on the cloud. The proposed system architecture allows developers and engineers not specialised in robotic implementation environments to utilize generic robotic algorithms and services off-the-shelf.

@inproceedings{Remmas2017,
author={Vasilis N. Remmas and Konstantinos Panayiotou and Emmanouil Tsardoulias and Andreas L. Symeonidis},
title={SRCA - The Scalable Robotic Cloud Agents Architecture},
booktitle={International Conference on Cloud and Robotics},
address={Saint Quentin, France},
year={2017},
month={11},
date={2017-11-27},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/11/2017-SRCA-The-Scalable-Robotic-Cloud-Agents-Architecture-ICCR17.pdf},
keywords={cloud robotics;robotics;robotic applications;cloud architectures},
abstract={In an effort to penetrate the market at an affordable cost, consumer robots tend to provide limited processing capabilities, just enough to serve the purpose they have been designed for. However, a robot, in principle, should be able to interact and process the constantly increasing information streams generated from sensors or other devices. This would require the implementation of algorithms and mathematical models for the accurate processing of data volumes and significant computational resources. It is clear that as the data deluge continues to grow exponentially, deploying such algorithms on consumer robots will not be easy. Current work presents a cloud-based architecture that aims to offload computational resources from robots to a remote infrastructure, by utilizing and implementing cloud technologies. This way robots are allowed to enjoy functionality offered by complex algorithms that are executed on the cloud. The proposed system architecture allows developers and engineers not specialised in robotic implementation environments to utilize generic robotic algorithms and services off-the-shelf.}
}

2016

Journal Articles

Antonios Chrysopoulos, Christos Diou, Andreas Symeonidis and Pericles A. Mitkas
"Response modeling of small-scale energy consumers for effective demand response applications"
Electric Power Systems Research, 132, pp. 78-93, 2016 Mar

The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.

@article{2015ChrysopoulosEPSR,
author={Antonios Chrysopoulos and Christos Diou and Andreas Symeonidis and Pericles A. Mitkas},
title={Response modeling of small-scale energy consumers for effective demand response applications},
journal={Electric Power Systems Research},
volume={132},
pages={78-93},
year={2016},
month={03},
date={2016-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Response-modeling-of-small-scale-energy-consumers-for-effective-demand-response-applications.pdf},
abstract={The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.}
}

Pantelis Angelidis, Leslie Berman, Maria de la Luz Casas-Perez, Leo Anthony Celi, George E. Dafoulas, Alon Dagan, Braiam Escobar, Diego M. Lopez, Julieta Noguez, Juan Sebastian Osorio-Valencia, Charles Otine, Kenneth Paik, Luis Rojas-Potosi, Andreas Symeonidis and Eric Winkler
"The hackathon model to spur innovation around global mHealth"
Journal of Medical Engineering & Technology, pp. 1-8, 2016 Sep

The challenge of providing quality healthcare to underserved populations in low- and middle-income countries (LMICs) has attracted increasing attention from information and communication technology (ICT) professionals interested in providing societal impact through their work. Sana is an organisation hosted at the Institute for Medical Engineering and Science at the Massachusetts Institute of Technology that was established out of this interest. Over the past several years, Sana has developed a model of organising mobile health bootcamp and hackathon events in LMICs with the goal of encouraging increased collaboration between ICT and medical professionals and leveraging the growing prevalence of cellphones to provide health solutions in resource limited settings. Most recently, these events have been based in Colombia, Uganda, Greece and Mexico. The lessons learned from these events can provide a framework for others working to create sustainable health solutions in the developing world.

@article{2016AngelidisJMET,
author={Pantelis Angelidis and Leslie Berman and Maria de la Luz Casas-Perez and Leo Anthony Celi and George E. Dafoulas and Alon Dagan and Braiam Escobar and Diego M. Lopez and Julieta Noguez and Juan Sebastian Osorio-Valencia and Charles Otine and Kenneth Paik and Luis Rojas-Potosi and Andreas Symeonidis and Eric Winkler},
title={The hackathon model to spur innovation around global mHealth},
journal={Journal of Medical Engineering & Technology},
pages={1-8},
year={2016},
month={09},
date={2016-09-06},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/The-hackathon-model-to-spur-innovation-around-global-mHealth.pdf},
abstract={The challenge of providing quality healthcare to underserved populations in low- and middle-income countries (LMICs) has attracted increasing attention from information and communication technology (ICT) professionals interested in providing societal impact through their work. Sana is an organisation hosted at the Institute for Medical Engineering and Science at the Massachusetts Institute of Technology that was established out of this interest. Over the past several years, Sana has developed a model of organising mobile health bootcamp and hackathon events in LMICs with the goal of encouraging increased collaboration between ICT and medical professionals and leveraging the growing prevalence of cellphones to provide health solutions in resource limited settings. Most recently, these events have been based in Colombia, Uganda, Greece and Mexico. The lessons learned from these events can provide a framework for others working to create sustainable health solutions in the developing world.}
}

Michael Chatzidimopoulos, Fotis Psomopoulos, Emmanouil Malandrakis, Ioannis Ganopoulos, Panagiotis Madesis, Evangelos Vellios and Pavlidis Drogoudi
"Comparative Genomics of Botrytis cinerea Strains with Differential Multi-Drug Resistance"
Frontiers in Plant Science, 2016 Apr

Botrytis cinerea is a ubiquitous fungus difficult to control because it possesses a variety of attack modes, diverse hosts as inoculum sources, and it can survive as mycelia and/or conidia or for extended periods as sclerotia in crop debris. For these reasons the use of any single control measure is unlikely to succeed and a combination of cultural practices with the application of site-specific synthetic compounds provide the best protection for the crops (Williamson et al., 2007). However, the chemical control has been adversely affected by the development of fungicide resistance. The selection of resistant individuals in a fungal population subjected to selective pressure due to fungicides is an evolutionary mechanism that promotes advantageous genotypes (Walker et al., 2013). High levels of resistance to site-specific fungicides are commonly associated with point mutations. For example the mutations G143A, H272R, and F412S leading to changes in the target proteins CytB, SdhB, and Erg27 are conferring resistance of the pathogen to the chemical classes of QoIs, SDHIs, and hydroxyanilides, respectively (Leroux, 2007). Multidrug resistance is another mechanism associated with resistance in B. cinerea which involves mutations leading to overexpression of individual transporters such as ABC and MFS (Kretschmer et al., 2009). This mechanism is associated with low levels of resistance to multiple fungicides including the anilinopyrimidines and phenylpyrroles. However, a subdivision of gray mold populations was found to be more tolerant to these two classes of fungicides (Leroch et al., 2013). Previous reports have clearly demonstrated that the resistance to anilinopyrimidines has a qualitative, disruptive pattern, and is monogenically controlled (Chapeland et al., 1999). In order to elucidate the mechanism of the resistance, the whole genome of three different samples (gene pools) was sequenced, each containing DNA of 10 selected strains of the same genotype regarding resistance to seven different classes of fungicides including anilinopyrimidines. This report presents the publicly available genomic data.

@article{2016ChatzidimopoulosFPS,
author={Michael Chatzidimopoulos and Fotis Psomopoulos and Emmanouil Malandrakis and Ioannis Ganopoulos and Panagiotis Madesis and Evangelos Vellios and Pavlidis Drogoudi},
title={Comparative Genomics of Botrytis cinerea Strains with Differential Multi-Drug Resistance},
journal={Frontiers in Plant Science},
year={2016},
month={04},
date={2016-04-28},
url={http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4849417/pdf/fpls-07-00554.pdf},
abstract={Botrytis cinerea is a ubiquitous fungus difficult to control because it possesses a variety of attack modes, diverse hosts as inoculum sources, and it can survive as mycelia and/or conidia or for extended periods as sclerotia in crop debris. For these reasons the use of any single control measure is unlikely to succeed and a combination of cultural practices with the application of site-specific synthetic compounds provide the best protection for the crops (Williamson et al., 2007). However, the chemical control has been adversely affected by the development of fungicide resistance. The selection of resistant individuals in a fungal population subjected to selective pressure due to fungicides is an evolutionary mechanism that promotes advantageous genotypes (Walker et al., 2013). High levels of resistance to site-specific fungicides are commonly associated with point mutations. For example the mutations G143A, H272R, and F412S leading to changes in the target proteins CytB, SdhB, and Erg27 are conferring resistance of the pathogen to the chemical classes of QoIs, SDHIs, and hydroxyanilides, respectively (Leroux, 2007). Multidrug resistance is another mechanism associated with resistance in B. cinerea which involves mutations leading to overexpression of individual transporters such as ABC and MFS (Kretschmer et al., 2009). This mechanism is associated with low levels of resistance to multiple fungicides including the anilinopyrimidines and phenylpyrroles. However, a subdivision of gray mold populations was found to be more tolerant to these two classes of fungicides (Leroch et al., 2013). Previous reports have clearly demonstrated that the resistance to anilinopyrimidines has a qualitative, disruptive pattern, and is monogenically controlled (Chapeland et al., 1999). In order to elucidate the mechanism of the resistance, the whole genome of three different samples (gene pools) was sequenced, each containing DNA of 10 selected strains of the same genotype regarding resistance to seven different classes of fungicides including anilinopyrimidines. This report presents the publicly available genomic data.}
}

Sofia E. Reppou, Emmanouil G. Tsardoulias, Athanassios M. Kintsakis, Andreas Symeonidis, Pericles A. Mitkas, Fotis E. Psomopoulos, George T. Karagiannis, Cezary Zielinski, Vincent Prunet, Jean-Pierre Merlet, Miren Iturburu and Alexandros Gkiokas
"RAPP: A robotic-oriented ecosystem for delivering smart user empowering applications for older people"
International Journal of Social Robotics, pp. 15, 2016 Jun

It is a general truth that increase of age is associated with a level of mental and physical decline but unfortunately the former are often accompanied by social exclusion leading to marginalization and eventually further acceleration of the aging process. A new approach in alleviating the social exclusion of older people involves the use of assistive robots. As robots rapidly invade everyday life, the need of new software paradigms in order to address the user’s unique needs becomes critical. In this paper we present a novel architectural design, the RAPP [a software platform to deliver smart, user empowering robotic applications (RApps)] framework that attempts to address this issue. The proposed framework has been designed in a cloud-based approach, integrating robotic devices and their respective applications. We aim to facilitate seamless development of RApps compatible with a wide range of supported robots and available to the public through a unified online store.

@article{2016ReppouJSR,
author={Sofia E. Reppou and Emmanouil G. Tsardoulias and Athanassios M. Kintsakis and Andreas Symeonidis and Pericles A. Mitkas and Fotis E. Psomopoulos and George T. Karagiannis and Cezary Zielinski and Vincent Prunet and Jean-Pierre Merlet and Miren Iturburu and Alexandros Gkiokas},
title={RAPP: A robotic-oriented ecosystem for delivering smart user empowering applications for older people},
journal={International Journal of Social Robotics},
pages={15},
year={2016},
month={06},
date={2016-06-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/RAPP-A-Robotic-Oriented-Ecosystem-for-Delivering-Smart-User-Empowering-Applications-for-Older-People.pdf},
abstract={It is a general truth that increase of age is associated with a level of mental and physical decline but unfortunately the former are often accompanied by social exclusion leading to marginalization and eventually further acceleration of the aging process. A new approach in alleviating the social exclusion of older people involves the use of assistive robots. As robots rapidly invade everyday life, the need of new software paradigms in order to address the user’s unique needs becomes critical. In this paper we present a novel architectural design, the RAPP [a software platform to deliver smart, user empowering robotic applications (RApps)] framework that attempts to address this issue. The proposed framework has been designed in a cloud-based approach, integrating robotic devices and their respective applications. We aim to facilitate seamless development of RApps compatible with a wide range of supported robots and available to the public through a unified online store.}
}

Emmanouil Tsardoulias, Aris Thallas, Andreas L. Symeonidis and Pericles A. Mitkas
"Improving multilingual interaction for consumer robots through signal enhancement in multichannel speech"
Audio Engineering Society, 2016 Dec

@article{2016TsardouliasAES,
author={Emmanouil Tsardoulias and Aris Thallas and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Improving multilingual interaction for consumer robots through signal enhancement in multichannel speech},
journal={Audio Engineering Society},
year={2016},
month={12},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Improving-multilingual-interaction-for-consumer-robots-through-signal-enhancement-in-multichannel-speech.pdf},
}

Emmanouil Tsardoulias, Athanassios Kintsakis, Konstantinos Panayiotou, Aristeidis Thallas, Sofia Reppou, George Karagiannis, Miren Iturburu, Stratos Arampatzis, Cezary Zielinskic, Vincent Prunetg, Fotis Psomopoulos, Andreas Symeonidis and Pericles Mitkas
"Towards an integrated robotics architecture for social inclusion – The RAPP paradigm"
Cognitive Systems Research, pp. 1-8, 2016 Sep

Scientific breakthroughs have led to an increase in life expectancy, to the point where senior citizens comprise an ever increasing percentage of the general population. In this direction, the EU funded RAPP project “Robotic Applications for Delivering Smart User Empowering Applications” introduces socially interactive robots that will not only physically assist, but also serve as a companion to senior citizens. The proposed RAPP framework has been designed aiming towards a cloud-based integrated approach that enables robotic devices to seamlessly deploy robotic applications, relieving the actual robots from computational burdens. The Robotic Applications (RApps) developed according to the RAPP paradigm will empower consumer social robots, allowing them to adapt to versatile situations and materialize complex behaviors and scenarios. The RAPP pilot cases involve the development of RApps for the NAO humanoid robot and the ANG-MED rollator targeting senior citizens that (a) are technology illiterate, (b) have been diagnosed with mild cognitive impairment or (c) are in the process of hip fracture rehabilitation. Initial results establish the robustness of RAPP in addressing the needs of end users and developers, as well as its contribution in significantly increasing the quality of life of senior citizens.

@article{2016TsardouliasCSR,
author={Emmanouil Tsardoulias and Athanassios Kintsakis and Konstantinos Panayiotou and Aristeidis Thallas and Sofia Reppou and George Karagiannis and Miren Iturburu and Stratos Arampatzis and Cezary Zielinskic and Vincent Prunetg and Fotis Psomopoulos and Andreas Symeonidis and Pericles Mitkas},
title={Towards an integrated robotics architecture for social inclusion – The RAPP paradigm},
journal={Cognitive Systems Research},
pages={1-8},
year={2016},
month={09},
date={2016-09-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/COGSYS_2016_R1.pdf},
abstract={Scientific breakthroughs have led to an increase in life expectancy, to the point where senior citizens comprise an ever increasing percentage of the general population. In this direction, the EU funded RAPP project “Robotic Applications for Delivering Smart User Empowering Applications” introduces socially interactive robots that will not only physically assist, but also serve as a companion to senior citizens. The proposed RAPP framework has been designed aiming towards a cloud-based integrated approach that enables robotic devices to seamlessly deploy robotic applications, relieving the actual robots from computational burdens. The Robotic Applications (RApps) developed according to the RAPP paradigm will empower consumer social robots, allowing them to adapt to versatile situations and materialize complex behaviors and scenarios. The RAPP pilot cases involve the development of RApps for the NAO humanoid robot and the ANG-MED rollator targeting senior citizens that (a) are technology illiterate, (b) have been diagnosed with mild cognitive impairment or (c) are in the process of hip fracture rehabilitation. Initial results establish the robustness of RAPP in addressing the needs of end users and developers, as well as its contribution in significantly increasing the quality of life of senior citizens.}
}

Aliki Xanthopoulou, Fotis Psomopoulos, Ioannis Ganopoulos, Maria Manioudaki, Athanasios Tsaftaris, Irini Nianiou-Obeidat and Panagiotis Madesis
"De novo transcriptome assembly of two contrasting pumpkin cultivars"
Genomics Data, pp. 200-201, 2016 Jan

Cucurbita pepo (squash, pumpkin, gourd), a worldwide-cultivated vegetable of American origin, is extremely variable in fruit characteristics. However, the information associated with genes and genetic markers for pumpkin is very limited. In order to identify new genes and to develop genetic markers, we performed a transcriptome analysis (RNA-Seq) of two contrasting pumpkin cultivars. Leaves and female flowers of cultivars,

@article{2016XanthopoulouGD,
author={Aliki Xanthopoulou and Fotis Psomopoulos and Ioannis Ganopoulos and Maria Manioudaki and Athanasios Tsaftaris and Irini Nianiou-Obeidat and Panagiotis Madesis},
title={De novo transcriptome assembly of two contrasting pumpkin cultivars},
journal={Genomics Data},
pages={200-201},
year={2016},
month={01},
date={2016-01-15},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/De-novo-transcriptome-assembly-of-two-contrasting-pumpkin-cultivars.pdf},
abstract={Cucurbita pepo (squash, pumpkin, gourd), a worldwide-cultivated vegetable of American origin, is extremely variable in fruit characteristics. However, the information associated with genes and genetic markers for pumpkin is very limited. In order to identify new genes and to develop genetic markers, we performed a transcriptome analysis (RNA-Seq) of two contrasting pumpkin cultivars. Leaves and female flowers of cultivars,}
}

Christoforos Zolotas, Themistoklis Diamantopoulos, Kyriakos Chatzidimitriou and Andreas Symeonidis
"From requirements to source code: a Model-Driven Engineering approach for RESTful web services"
Automated Software Engineering, pp. 1-48, 2016 Sep

During the last few years, the REST architectural style has drastically changed the way web services are developed. Due to its transparent resource-oriented model, the RESTful paradigm has been incorporated into several development frameworks that allow rapid development and aspire to automate parts of the development process. However, most of the frameworks lack automation of essential web service functionality, such as authentication or database searching, while the end product is usually not fully compliant to REST. Furthermore, most frameworks rely heavily on domain specific modeling and require developers to be familiar with the employed modeling technologies. In this paper, we present a Model-Driven Engineering (MDE) engine that supports fast design and implementation of web services with advanced functionality. Our engine provides a front-end interface that allows developers to design their envisioned system through software requirements in multimodal formats. Input in the form of textual requirements and graphical storyboards is analyzed using natural language processing techniques and semantics, to semi-automatically construct the input model for the MDE engine. The engine subsequently applies model-to-model transformations to produce a RESTful, ready-to-deploy web service. The procedure is traceable, ensuring that changes in software requirements propagate to the underlying software artefacts and models. Upon assessing our methodology through a case study and measuring the effort reduction of using our tools, we conclude that our system can be effective for the fast design and implementation of web services, while it allows easy wrapping of services that have been engineered with traditional methods to the MDE realm.

@article{2016ZolotasASE,
author={Christoforos Zolotas and Themistoklis Diamantopoulos and Kyriakos Chatzidimitriou and Andreas Symeonidis},
title={From requirements to source code: a Model-Driven Engineering approach for RESTful web services},
journal={Automated Software Engineering},
pages={1-48},
year={2016},
month={09},
date={2016-09-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/ReqsToCodeMDE.pdf},
doi={https://doi.org/10.1007/s10515-016-0206-x},
abstract={During the last few years, the REST architectural style has drastically changed the way web services are developed. Due to its transparent resource-oriented model, the RESTful paradigm has been incorporated into several development frameworks that allow rapid development and aspire to automate parts of the development process. However, most of the frameworks lack automation of essential web service functionality, such as authentication or database searching, while the end product is usually not fully compliant to REST. Furthermore, most frameworks rely heavily on domain specific modeling and require developers to be familiar with the employed modeling technologies. In this paper, we present a Model-Driven Engineering (MDE) engine that supports fast design and implementation of web services with advanced functionality. Our engine provides a front-end interface that allows developers to design their envisioned system through software requirements in multimodal formats. Input in the form of textual requirements and graphical storyboards is analyzed using natural language processing techniques and semantics, to semi-automatically construct the input model for the MDE engine. The engine subsequently applies model-to-model transformations to produce a RESTful, ready-to-deploy web service. The procedure is traceable, ensuring that changes in software requirements propagate to the underlying software artefacts and models. Upon assessing our methodology through a case study and measuring the effort reduction of using our tools, we conclude that our system can be effective for the fast design and implementation of web services, while it allows easy wrapping of services that have been engineered with traditional methods to the MDE realm.}
}
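
As a rough illustration of the model-to-text step described in the abstract above, the following Python sketch turns a toy resource model into RESTful endpoint scaffolding. The model shape, the template, and the generated framework code are illustrative assumptions, not the paper's actual MDE engine.

# Hypothetical model-to-text transformation: resource model in, REST
# endpoint scaffolding out. A sketch, not the paper's engine.

RESOURCE_TEMPLATE = """\
@app.route('/{name}', methods=['GET', 'POST'])
def handle_{name}():
    # GET lists {name}; POST creates a new entry with fields: {fields}
    ...
"""

def generate_endpoint(resource: dict) -> str:
    """Render a single resource model into endpoint source code."""
    return RESOURCE_TEMPLATE.format(
        name=resource["name"],
        fields=", ".join(resource["fields"]),
    )

if __name__ == "__main__":
    model = {"name": "orders", "fields": ["id", "customer", "total"]}
    print(generate_endpoint(model))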

E.G. Tsardoulias, A. Iliakopoulou, A. Kargakos and L. Petrou
"Cost-Based Target Selection Techniques Towards Full Space Exploration and Coverage for USAR Applications in a Priori Unknown Environments"
Journal of Intelligent & Robotic Systems, 87, pp. 313-340, 2016 Oct

Full coverage and exploration of an environment is essential in robot rescue operations where victim identification is required. Three methods of target selection towards full exploration and coverage of an unknown space oriented for Urban Search and Rescue (USAR) applications have been developed. These are the Selection of the closest topological node, the Selection of the minimum cost topological node and the Selection of the minimum cost sub-graph. All methods employ a topological graph extracted from the Generalized Voronoi Diagram (GVD), in order to select the next best target during exploration. The first method utilizes a distance metric for determining the next best target whereas the Selection of the minimum cost topological node method assigns four different weights on the graph’s nodes, based on certain environmental attributes. The Selection of the minimum cost sub-graph uses a similar technique, but instead of single nodes, sets of graph nodes are examined. In addition, a modification of A* algorithm for biased path creation towards uncovered areas, aiming at a faster spatial coverage, is introduced. The proposed methods’ performance is verified by experiments conducted in two heterogeneous simulated environments. Finally, the results are compared with two common exploration methods.

@article{etsardouCost2016,
author={E.G. Tsardoulias and A. Iliakopoulou and A. Kargakos and L. Petrou},
title={Cost-Based Target Selection Techniques Towards Full Space Exploration and Coverage for USAR Applications in a Priori Unknown Environments},
journal={Journal of Intelligent & Robotic Systems},
volume={87},
pages={313-340},
year={2016},
month={10},
date={2016-10-19},
url={https://link.springer.com/article/10.1007/s10846-016-0434-0},
doi={https://doi.org/10.1007/s10846-016-0434-0},
keywords={Topological graph;Autonomous robot;Exploration;Full coverage;Costs;A* algorithm},
abstract={Full coverage and exploration of an environment is essential in robot rescue operations where victim identification is required. Three methods of target selection towards full exploration and coverage of an unknown space oriented for Urban Search and Rescue (USAR) applications have been developed. These are the Selection of the closest topological node, the Selection of the minimum cost topological node and the Selection of the minimum cost sub-graph. All methods employ a topological graph extracted from the Generalized Voronoi Diagram (GVD), in order to select the next best target during exploration. The first method utilizes a distance metric for determining the next best target whereas the Selection of the minimum cost topological node method assigns four different weights on the graph’s nodes, based on certain environmental attributes. The Selection of the minimum cost sub-graph uses a similar technique, but instead of single nodes, sets of graph nodes are examined. In addition, a modification of A* algorithm for biased path creation towards uncovered areas, aiming at a faster spatial coverage, is introduced. The proposed methods’ performance is verified by experiments conducted in two heterogeneous simulated environments. Finally, the results are compared with two common exploration methods.}
}
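
The following Python sketch conveys the flavor of the "minimum cost topological node" selection described above: each candidate node on the topological graph receives a weighted cost and the cheapest one becomes the next exploration target. The four cost terms and their weights are assumptions for illustration, not the paper's exact formulation.

# Sketch of cost-based next-target selection on a topological graph.
# The attributes and weights below are illustrative assumptions.
import math

def node_cost(node, robot_pos, weights=(1.0, 1.0, 1.0, 1.0)):
    w_dist, w_cov, w_turn, w_topo = weights
    distance = math.dist(robot_pos, node["pos"])   # path-length proxy
    coverage = node["covered_ratio"]               # already-seen area near node
    rotation = node["heading_change"]              # turning effort to face node
    branching = 1.0 / (1 + node["degree"])         # prefer well-connected nodes
    return w_dist * distance + w_cov * coverage + w_turn * rotation + w_topo * branching

def next_target(nodes, robot_pos):
    return min(nodes, key=lambda n: node_cost(n, robot_pos))

nodes = [
    {"pos": (2, 1), "covered_ratio": 0.8, "heading_change": 0.3, "degree": 3},
    {"pos": (5, 4), "covered_ratio": 0.1, "heading_change": 1.2, "degree": 2},
]
print(next_target(nodes, robot_pos=(0, 0)))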

E. G. Tsardoulias, A. Iliakopoulou, A. Kargakos and L. Petrou
"A Review of Global Path Planning Methods for Occupancy Grid Maps Regardless of Obstacle Density"
Journal of Intelligent & Robotic Systems, 84, pp. 829-858, 2016 May

Path planning constitutes one of the most crucial abilities an autonomous robot should possess, apart from Simultaneous Localization and Mapping algorithms (SLAM) and navigation modules. Path planning is the capability to construct safe and collision free paths from a point of interest to another. Many different approaches exist, which are tightly dependent on the map representation method (metric or feature-based). In this work four path planning algorithmic families are described, that can be applied on metric Occupancy Grid Maps (OGMs): Probabilistic RoadMaps (PRMs), Visibility Graphs (VGs), Rapidly exploring Random Trees (RRTs) and Space Skeletonization. The contribution of this work includes the definition of metrics for path planning benchmarks, actual benchmarks of the most common global path planning algorithms and an educated algorithm parameterization based on a global obstacle density coefficient.

@article{etsardouPp2016,
author={E. G. Tsardoulias and A. Iliakopoulou and A. Kargakos and L. Petrou},
title={A Review of Global Path Planning Methods for Occupancy Grid Maps Regardless of Obstacle Density},
journal={Journal of Intelligent & Robotic Systems},
volume={84},
pages={829-858},
year={2016},
month={05},
date={2016-05-23},
url={https://link.springer.com/article/10.1007/s10846-016-0362-z},
doi={https://doi.org/10.1007/s10846-016-0362-z},
keywords={Path planning;Probabilistic RoadMaps;Visibility graph;Generalized Voronoi graph;Rapidly exploring random trees;Occupancy grid maps},
abstract={Path planning constitutes one of the most crucial abilities an autonomous robot should possess, apart from Simultaneous Localization and Mapping algorithms (SLAM) and navigation modules. Path planning is the capability to construct safe and collision free paths from a point of interest to another. Many different approaches exist, which are tightly dependent on the map representation method (metric or feature-based). In this work four path planning algorithmic families are described, that can be applied on metric Occupancy Grid Maps (OGMs): Probabilistic RoadMaps (PRMs), Visibility Graphs (VGs), Rapidly exploring Random Trees (RRTs) and Space Skeletonization. The contribution of this work includes the definition of metrics for path planning benchmarks, actual benchmarks of the most common global path planning algorithms and an educated algorithm parameterization based on a global obstacle density coefficient.}
}
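
A minimal sketch of a global obstacle density coefficient over an occupancy grid map, in the spirit of the parameterization discussed above; the exact definition used in the paper may differ.

# Assumed definition: fraction of known cells that are occupied.
import numpy as np

def obstacle_density(ogm: np.ndarray, occupied_threshold: int = 50) -> float:
    """ogm: 2-D array with occupancy values 0..100 and -1 for unknown cells."""
    known = ogm >= 0
    occupied = ogm >= occupied_threshold
    return occupied.sum() / max(known.sum(), 1)

grid = np.array([[0, 100, -1], [0, 0, 100], [100, 0, -1]])
print(f"density = {obstacle_density(grid):.2f}")  # 3 occupied / 7 known cells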

2016

Conference Papers

Kyriakos Chatzidimitriou, Konstantinos Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Towards defining the structural properties of efficient consumer social networks on the electricity grid"
AI4SG SETN Workshop on AI for the Smart Grid, 2016 May

Energy markets have undergone important changes at the conceptual level over the last years. Decentralized supply, small-scale production, smart grid optimization and control are the new building blocks. These changes offer substantial opportunities for all energy market stakeholders, some of which, however, remain largely unexploited. Small-scale consumers as a whole account for a significant amount of energy in current markets (up to 40%). As individuals, though, their consumption is trivial and their market power practically non-existent. Thus, it is necessary to assist small-scale energy market stakeholders to combine their market power. Within the context of this work, we propose Consumer Social Networks (CSNs) as a means to achieve this objective. We model consumers and present a simulation environment for the creation of CSNs, and provide a proof of concept on how CSNs can be formulated based on various criteria. We also provide an indication of how demand response programs designed based on targeted incentives may lead to energy peak reductions.

@conference{2016ChatzidimitriouSETN,
author={Kyriakos Chatzidimitriou and Konstantinos Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Towards defining the structural properties of efficient consumer social networks on the electricity grid},
booktitle={AI4SG SETN Workshop on AI for the Smart Grid},
year={2016},
month={05},
date={2016-05-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/06/Cassandra_AI4SG_CameraReady.pdf},
abstract={Energy markets have undergone important changes at the conceptual level over the last years. Decentralized supply, small-scale production, smart grid optimization and control are the new building blocks. These changes offer substantial opportunities for all energy market stakeholders, some of which, however, remain largely unexploited. Small-scale consumers as a whole account for a significant amount of energy in current markets (up to 40%). As individuals, though, their consumption is trivial and their market power practically non-existent. Thus, it is necessary to assist small-scale energy market stakeholders to combine their market power. Within the context of this work, we propose Consumer Social Networks (CSNs) as a means to achieve this objective. We model consumers and present a simulation environment for the creation of CSNs, and provide a proof of concept on how CSNs can be formulated based on various criteria. We also provide an indication of how demand response programs designed based on targeted incentives may lead to energy peak reductions.}
}
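
To make the CSN formation idea concrete, here is a hypothetical sketch that links consumers whose consumption profiles are strongly correlated; the similarity measure and the threshold are assumptions, not the paper's model.

# Assumed similarity: Pearson correlation of hourly load profiles.
import numpy as np

def build_csn(profiles: np.ndarray, threshold: float = 0.9):
    """profiles: consumers x hourly consumption. Returns an edge list (i, j)."""
    corr = np.corrcoef(profiles)
    n = len(profiles)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if corr[i, j] >= threshold]

profiles = np.array([
    [1.0, 2.0, 3.0, 2.0],
    [1.1, 2.1, 2.9, 2.2],
    [3.0, 1.0, 0.5, 2.5],
])
print(build_csn(profiles))  # consumers 0 and 1 end up linked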

Themistoklis Diamantopoulos, Klearchos Thomopoulos and Andreas L. Symeonidis
"QualBoa: Reusability-aware Recommendations of Source Code Components"
IEEE/ACM 13th Working Conference on Mining Software Repositories, 2016 May

Contemporary software development processes involve finding reusable software components from online repositories and integrating them to the source code, both to reduce development time and to ensure that the final software project is of high quality. Although several systems have been designed to automate this procedure by recommending components that cover the desired functionality, the reusability of these components is usually not assessed by these systems. In this work, we present QualBoa, a recommendation system for source code components that covers both the functional and the quality aspects of software component reuse. Upon retrieving components, QualBoa provides a ranking that involves not only functional matching to the query, but also a reusability score based on configurable thresholds of source code metrics. The evaluation of QualBoa indicates that it can be effective for recommending reusable source code.

@conference{2016DiamantopoulosIEEE/ACM,
author={Themistoklis Diamantopoulos and Klearchos Thomopoulos and Andreas L. Symeonidis},
title={QualBoa: Reusability-aware Recommendations of Source Code Components},
booktitle={IEEE/ACM 13th Working Conference on Mining Software Repositories},
year={2016},
month={05},
date={2016-05-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/06/QualBoa-Reusability-aware-Recommendations-of-Source-Code-Components.pdf},
abstract={Contemporary software development processes involve finding reusable software components from online repositories and integrating them to the source code, both to reduce development time and to ensure that the final software project is of high quality. Although several systems have been designed to automate this procedure by recommending components that cover the desired functionality, the reusability of these components is usually not assessed by these systems. In this work, we present QualBoa, a recommendation system for source code components that covers both the functional and the quality aspects of software component reuse. Upon retrieving components, QualBoa provides a ranking that involves not only functional matching to the query, but also a reusability score based on configurable thresholds of source code metrics. The evaluation of QualBoa indicates that it can be effective for recommending reusable source code.}
}
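
A toy version of the QualBoa-style ranking described above: a functional match score is combined with a reusability score derived from configurable metric thresholds. The metric names, thresholds and weights below are invented for illustration.

# Assumed thresholds; QualBoa's actual configuration will differ.
THRESHOLDS = {"cyclomatic_complexity": 10, "lines_per_method": 30, "coupling": 8}

def reusability(metrics: dict) -> float:
    """Fraction of metrics that stay within their configured thresholds."""
    ok = sum(1 for m, limit in THRESHOLDS.items() if metrics.get(m, limit) <= limit)
    return ok / len(THRESHOLDS)

def rank(components, w_func=0.7, w_reuse=0.3):
    def score(c):
        return w_func * c["match"] + w_reuse * reusability(c["metrics"])
    return sorted(components, key=score, reverse=True)

components = [
    {"name": "StackImpl", "match": 0.9, "metrics": {"cyclomatic_complexity": 25, "coupling": 12}},
    {"name": "DequeImpl", "match": 0.8, "metrics": {"cyclomatic_complexity": 6, "coupling": 3}},
]
print([c["name"] for c in rank(components)])  # the cleaner component outranks the closer match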

Themistoklis Diamantopoulos, Antonis Noutsos and Andreas L. Symeonidis
"DP-CORE: A Design Pattern Detection Tool for Code Reuse"
6th International Symposium on Business Modeling and Software Design (BMSD), Rhodes, Greece, 2016

In order to maintain, extend or reuse software projects one has to primarily understand what a system does and how well it does it. And, while in some cases information on system functionality exists, information covering the non-functional aspects is usually unavailable. Thus, one has to infer such knowledge by extracting design patterns directly from the source code. Several tools have been developed to identify design patterns, however most of them are limited to compilable and in most cases executable code, they rely on complex representations, and do not offer the developer any control over the detected patterns. In this paper we present DP-CORE, a design pattern detection tool that defines a highly descriptive representation to detect known and define custom patterns. DP-CORE is flexible, identifying exact and approximate pattern versions even in non-compilable code. Our analysis indicates that DP-CORE provides an efficient alternative to existing design pattern detection tools.

@conference{2016DiamantopoulosSBMSD,
author={Themistoklis Diamantopoulos and Antonis Noutsos and Andreas L. Symeonidis},
title={DP-CORE: A Design Pattern Detection Tool for Code Reuse},
booktitle={6th International Symposium on Business Modeling and Software Design (BMSD)},
address={Rhodes, Greece},
year={2016},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/DP-CORE.pdf},
abstract={In order to maintain, extend or reuse software projects one has to primarily understand what a system does and how well it does it. And, while in some cases information on system functionality exists, information covering the non-functional aspects is usually unavailable. Thus, one has to infer such knowledge by extracting design patterns directly from the source code. Several tools have been developed to identify design patterns, however most of them are limited to compilable and in most cases executable code, they rely on complex representations, and do not offer the developer any control over the detected patterns. In this paper we present DP-CORE, a design pattern detection tool that defines a highly descriptive representation to detect known and define custom patterns. DP-CORE is flexible, identifying exact and approximate pattern versions even in non-compilable code. Our analysis indicates that DP-CORE provides an efficient alternative to existing design pattern detection tools.}
}
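
The following toy sketch mimics the descriptive representation idea behind DP-CORE: a pattern is declared as roles plus required relations, and a candidate binding of classes to roles is checked against relations extracted from source code. The representation is an assumption, not the tool's actual format.

# Hypothetical pattern representation: roles plus required relations.
OBSERVER_PATTERN = {
    "roles": ("subject", "observer"),
    "relations": [("subject", "holds_many", "observer"),
                  ("subject", "calls", "observer")],
}

def matches(pattern, binding, relations):
    """binding: role -> class name; relations: set of (src, kind, dst) facts
    extracted from the code (compilable or not)."""
    return all((binding[a], kind, binding[b]) in relations
               for a, kind, b in pattern["relations"])

facts = {("EventBus", "holds_many", "Listener"), ("EventBus", "calls", "Listener")}
print(matches(OBSERVER_PATTERN, {"subject": "EventBus", "observer": "Listener"}, facts))  # True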

Michail Papamichail, Themistoklis Diamantopoulos and Andreas L. Symeonidis
"User-Perceived Source Code Quality Estimation based on Static Analysis Metrics"
2016 IEEE International Conference on Software Quality, Reliability and Security (QRS), Vienna, Austria, 2016 Aug

The popularity of open source software repositories and the highly adopted paradigm of software reuse have led to the development of several tools that aspire to assess the quality of source code. However, most software quality estimation tools, even the ones using adaptable models, depend on fixed metric thresholds for defining the ground truth. In this work we argue that the popularity of software components, as perceived by developers, can be considered as an indicator of software quality. We present a generic methodology that relates quality with source code metrics and estimates the quality of software components residing in popular GitHub repositories. Our methodology employs two models: a one-class classifier, used to rule out low quality code, and a neural network, that computes a quality score for each software component. Preliminary evaluation indicates that our approach can be effective for identifying high quality software components in the context of reuse.

@inproceedings{2016PapamichailIEEE,
author={Michail Papamichail and Themistoklis Diamantopoulos and Andreas L. Symeonidis},
title={User-Perceived Source Code Quality Estimation based on Static Analysis Metrics},
booktitle={2016 IEEE International Conference on Software Quality, Reliability and Security (QRS)},
address={Vienna, Austria},
year={2016},
month={08},
date={2016-08-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/User-Perceived-Source-Code-Quality-Estimation-based-on-Static-Analysis-Metrics.pdf},
slideshare={http://www.slideshare.net/isselgroup/userperceived-source-code-quality-estimation-based-on-static-analysis-metrics},
abstract={The popularity of open source software repositories and the highly adopted paradigm of software reuse have led to the development of several tools that aspire to assess the quality of source code. However, most software quality estimation tools, even the ones using adaptable models, depend on fixed metric thresholds for defining the ground truth. In this work we argue that the popularity of software components, as perceived by developers, can be considered as an indicator of software quality. We present a generic methodology that relates quality with source code metrics and estimates the quality of software components residing in popular GitHub repositories. Our methodology employs two models: a one-class classifier, used to rule out low quality code, and a neural network, that computes a quality score for each software component. Preliminary evaluation indicates that our approach can be effective for identifying high quality software components in the context of reuse.}
}
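
A minimal sketch of the two-model pipeline described above, using scikit-learn: a one-class classifier gates out low-quality components and a neural network scores the remainder. Features, targets and hyperparameters are placeholders, not the paper's setup.

# Synthetic stand-ins: metric vectors as features, a popularity proxy as target.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))           # static-analysis metric vectors
stars = X_train @ rng.normal(size=5)          # popularity proxy as ground truth

gate = OneClassSVM(nu=0.1).fit(X_train)       # rules out outlier (low-quality) code
scorer = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X_train, stars)

for x in rng.normal(size=(3, 5)):
    if gate.predict([x])[0] == 1:
        print("quality score:", scorer.predict([x])[0])
    else:
        print("rejected as low quality")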

Fotis Psomopoulos, Athanassios Kintsakis and Pericles Mitkas
"A pan-genome approach and application to species with photosynthetic capabilities"
15th European Conference on Computational Biology, The Hague, Netherlands, 2016 Sep

The abundance of genome data being produced by the new sequencing techniques is providing the opportunity to investigate gene diversity at a new level. A pan-genome analysis can provide the framework for estimating the genomic diversity of the data set at hand and give insights towards the understanding of its observed characteristics. Currently, there exist several tools for pan-genome studies, mostly focused on prokaryote genomes and their respective attributes. Here we provide a systematic approach for constructing the groups inherently associated with a pan-genome analysis, using the complete proteome data of photosynthetic genomes as the driving case study. As opposed to similar studies, the presented method requires a complete information system (i.e. complete genomes) in order to produce meaningful results. The method was applied to 95 genomes with photosynthetic capabilities, including cyanobacteria and green plants, as retrieved from UniProt and Plaza. Due to the significant computational requirements of the analysis, we utilized the Federated Cloud computing resources provided by the EGI infrastructure. The analysis ultimately produced 37,680 protein families, with a core genome comprising of 102 families. An investigation of the families’ distribution revealed two underlying but expected subsets, roughly corresponding to bacteria and eukaryotes. Finally, an automated functional annotation of the produced clusters, through assignment of PFAM domains to the participating protein sequences, allowed the identification of the key characteristics present in the core genome, as well as of selected multi-member families.

@inproceedings{2016PsomopoulosECCB,
author={Fotis Psomopoulos and Athanassios Kintsakis and Pericles Mitkas},
title={A pan-genome approach and application to species with photosynthetic capabilities},
booktitle={15th European Conference on Computational Biology},
address={The Hague, Netherlands},
year={2016},
month={09},
date={2016-09-01},
abstract={The abundance of genome data being produced by the new sequencing techniques is providing the opportunity to investigate gene diversity at a new level. A pan-genome analysis can provide the framework for estimating the genomic diversity of the data set at hand and give insights towards the understanding of its observed characteristics. Currently, there exist several tools for pan-genome studies, mostly focused on prokaryote genomes and their respective attributes. Here we provide a systematic approach for constructing the groups inherently associated with a pan-genome analysis, using the complete proteome data of photosynthetic genomes as the driving case study. As opposed to similar studies, the presented method requires a complete information system (i.e. complete genomes) in order to produce meaningful results. The method was applied to 95 genomes with photosynthetic capabilities, including cyanobacteria and green plants, as retrieved from UniProt and Plaza. Due to the significant computational requirements of the analysis, we utilized the Federated Cloud computing resources provided by the EGI infrastructure. The analysis ultimately produced 37,680 protein families, with a core genome comprising of 102 families. An investigation of the families’ distribution revealed two underlying but expected subsets, roughly corresponding to bacteria and eukaryotes. Finally, an automated functional annotation of the produced clusters, through assignment of PFAM domains to the participating protein sequences, allowed the identification of the key characteristics present in the core genome, as well as of selected multi-member families.}
}
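
The core-genome notion used above can be illustrated in a few lines: a protein family belongs to the core if it has members in every genome of the data set. The data shapes below are illustrative.

# Toy presence data: family -> genomes in which it has members.
families = {
    "fam1": {"genomeA", "genomeB", "genomeC"},
    "fam2": {"genomeA", "genomeC"},
    "fam3": {"genomeA", "genomeB", "genomeC"},
}
genomes = {"genomeA", "genomeB", "genomeC"}

core = [f for f, present_in in families.items() if present_in == genomes]
print(core)  # fam1 and fam3 form the core genome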

Emmanouil Stergiadis, Athanassios Kintsakis, Fotis Psomopoulos and Pericles A. Mitkas
"A scalable Grid Computing framework for extensible phylogenetic profile construction"
12th International Conference on Artificial Intelligence Applications and Innovations, pp. 455-462, Thessaloniki, Greece, 2016 Sep

Current research in Life Sciences without doubt has been established as a Big Data discipline. Beyond the expected domain-specific requirements, this perspective has put scalability as one of the most crucial aspects of any state-of-the-art bioinformatics framework. Sequence alignment and construction of phylogenetic profiles are common tasks evident in a wide range of life science analyses as, given an arbitrarily big volume of genomes, they can provide useful insights on the functionality and relationships of the involved entities. This process is often a computational bottleneck in existing solutions, due to its inherent complexity. Our proposed distributed framework manages to perform both tasks with significant speed-up by employing Grid Computing resources provided by EGI in an efficient and optimal manner. The overall workflow is both fully automated, thus making it user friendly, and fully detached from the end-user's terminal, since all computations take place on Grid worker nodes.

@inproceedings{2016Stergiadis,
author={Emmanouil Stergiadis and Athanassios Kintsakis and Fotis Psomopoulos and Pericles A. Mitkas},
title={A scalable Grid Computing framework for extensible phylogenetic profile construction},
booktitle={12th International Conference on Artificial Intelligence Applications and Innovations},
pages={455-462},
address={Thessaloniki, Greece},
year={2016},
month={09},
date={2016-09-02},
abstract={Current research in Life Sciences without doubt has been established as a Big Data discipline. Beyond the expected domain-specific requirements, this perspective has put scalability as one of the most crucial aspects of any state-of-the-art bioinformatics framework. Sequence alignment and construction of phylogenetic profiles are common tasks evident in a wide range of life science analyses as, given an arbitrarily big volume of genomes, they can provide useful insights on the functionality and relationships of the involved entities. This process is often a computational bottleneck in existing solutions, due to its inherent complexity. Our proposed distributed framework manages to perform both tasks with significant speed-up by employing Grid Computing resources provided by EGI in an efficient and optimal manner. The overall workflow is both fully automated, thus making it user friendly, and fully detached from the end-user's terminal, since all computations take place on Grid worker nodes.}
}
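
A minimal sketch of phylogenetic profile construction as described above: each gene is mapped to a binary presence/absence vector across genomes. The hit criterion (e.g. an alignment significance cutoff) is an assumption.

# hits: gene -> set of genomes with a significant alignment (criterion assumed).
import numpy as np

def profiles(hits: dict, genomes: list) -> np.ndarray:
    return np.array([[1 if g in hits[gene] else 0 for g in genomes]
                     for gene in sorted(hits)])

hits = {"geneX": {"g1", "g3"}, "geneY": {"g1", "g2", "g3"}}
print(profiles(hits, ["g1", "g2", "g3"]))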

Aristeidis Thallas, Emmanouil Tsardoulias and Loukas Petrou
"Particle Filter - Scan Matching SLAM Recovery Under Kinematic Model Failures"
2016 24th Mediterranean Conference on Control and Automation (MED), 2016 Jun

Two of the most predominant approaches regarding the SLAM problem are the Rao-Blackwellized particle filters and the Scan Matching algorithms, each approach presenting its own deficiencies. In particular, particle filters suffer from potential particle impoverishment, whereas lack of environmental features can cause scan matching methods to collapse. In the current paper a multi-threaded combination of Rao-Blackwellized particle filters with a scan matching algorithm (CRSM SLAM) aiming to overcome those defects, whilst exploiting each method's advantages is presented. CRSM is employed in feature-rich environments while concurrently reducing the particle filter dispersion, whilst the particle filter allows the maintenance of the correct hypothesis in environments with scarcity of information. Finally, a method to reduce the number of particle filter resamplings, employing topological information is proposed.

@conference{etsardouMed12016,
author={Aristeidis Thallas and Emmanouil Tsardoulias and Loukas Petrou},
title={Particle Filter - Scan Matching SLAM Recovery Under Kinematic Model Failures},
booktitle={2016 24th Mediterranean Conference on Control and Automation (MED)},
year={2016},
month={06},
date={2016-06-21},
url={https://ieeexplore.ieee.org/document/7535844},
doi={https://doi.org/10.1109/MED.2016.7535844},
keywords={Simultaneous localization and mapping;Particle filters;Trajectory},
abstract={Two of the most predominant approaches regarding the SLAM problem are the Rao-Blackwellized particle filters and the Scan Matching algorithms, each approach presenting its own deficiencies. In particular, particle filters suffer from potential particle impoverishment, whereas lack of environmental features can cause scan matching methods to collapse. In the current paper a multi-threaded combination of Rao-Blackwellized particle filters with a scan matching algorithm (CRSM SLAM) aiming to overcome those defects, whilst exploiting each method's advantages is presented. CRSM is employed in feature-rich environments while concurrently reducing the particle filter dispersion, whilst the particle filter allows the maintenance of the correct hypothesis in environments with scarcity of information. Finally, a method to reduce the number of particle filter resamplings, employing topological information is proposed.}
}
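
One standard ingredient related to the paper's resampling-reduction idea is the effective-sample-size test, sketched below: resample only when the effective number of particles drops below a threshold. The topological criterion proposed in the paper itself is not reproduced here.

# Standard particle-filter check: N_eff = 1 / sum(w_i^2) for normalized weights.
import numpy as np

def needs_resampling(weights: np.ndarray, ratio: float = 0.5) -> bool:
    w = weights / weights.sum()
    n_eff = 1.0 / np.sum(w ** 2)        # effective number of particles
    return n_eff < ratio * len(w)

print(needs_resampling(np.array([0.9, 0.05, 0.05])))  # True: weights degenerate
print(needs_resampling(np.ones(100)))                 # False: uniform weights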

Aristeidis Thallas, Emmanouil Tsardoulias and Loukas Petrou
"Particle Filter - Scan Matching Hybrid SLAM Employing Topological Information"
2016 24th Mediterranean Conference on Control and Automation (MED), 2016 Jun

Two of the most predominant approaches regarding the SLAM problem are the Rao-Blackwellized particle filters and the Scan Matching algorithms, each approach presenting its own deficiencies. In particular, particle filters suffer from potential particle impoverishment, whereas lack of environmental features can cause scan matching methods to collapse. In the current paper a multi-threaded combination of Rao-Blackwellized particle filters with a scan matching algorithm (CRSM SLAM) aiming to overcome those defects, whilst exploiting each method's advantages is presented. CRSM is employed in feature-rich environments while concurrently reducing the particle filter dispersion, whilst the particle filter allows the maintenance of the correct hypothesis in environments with scarcity of information. Finally, a method to reduce the number of particle filter resamplings, employing topological information is proposed.

@conference{etsardouMed22016,
author={Aristeidis Thallas and Emmanouil Tsardoulias and Loukas Petrou},
title={Particle Filter - Scan Matching Hybrid SLAM Employing Topological Information},
booktitle={2016 24th Mediterranean Conference on Control and Automation (MED)},
year={2016},
month={06},
date={2016-06-21},
url={https://ieeexplore.ieee.org/document/7535844},
doi={https://doi.org/10.1109/MED.2016.7535844},
keywords={Simultaneous localization and mapping;Particle filters;Trajectory},
abstract={Two of the most predominant approaches regarding the SLAM problem are the Rao-Blackwellized particle filters and the Scan Matching algorithms, each approach presenting its own deficiencies. In particular, particle filters suffer from potential particle impoverishment, whereas lack of environmental features can cause scan matching methods to collapse. In the current paper a multi-threaded combination of Rao-Blackwellized particle filters with a scan matching algorithm (CRSM SLAM) aiming to overcome those defects, whilst exploiting each method's advantages is presented. CRSM is employed in feature-rich environments while concurrently reducing the particle filter dispersion, whilst the particle filter allows the maintenance of the correct hypothesis in environments with scarcity of information. Finally, a method to reduce the number of particle filter resamplings, employing topological information is proposed.}
}

Aristeidis G. Thallas, Konstantinos Panayiotou, Emmanouil Tsardoulias, Andreas L. Symeonidis, Pericles A. Mitkas and George G. Karagiannis
"Relieving robots from their burdens: The Cloud Agent concept"
2016 5th IEEE International Conference on Cloud Networking (Cloudnet), 2016 Oct

The consumer robotics concept has already invaded our everyday lives, however two major drawbacks have become apparent both for the roboticists and the consumers. The first is that these robots are pre-programmed to perform specific tasks and usually their software is proprietary, thus not open to "interventions". The second is that even if their software is open source, low-cost robots usually lack sufficient resources such as CPU power or memory capabilities, thus forbidding advanced algorithms to be executed in-robot. Within the context of RAPP (Robotic Applications for Delivering Smart User Empowering Applications) we treat robots as platforms, where applications can be downloaded and automatically deployed. Furthermore, we propose and implement a novel multi-agent architecture, empowering robots to offload computations in entities denoted as Cloud Agents. This paper discusses the respective architecture in detail.

@conference{etsardouRobotBurden2016,
author={Aristeidis G. Thallas and Konstantinos Panayiotou and Emmanouil Tsardoulias and Andreas L. Symeonidis and Pericles A. Mitkas and George G. Karagiannis},
title={Relieving robots from their burdens: The Cloud Agent concept},
booktitle={2016 5th IEEE International Conference on Cloud Networking (Cloudnet)},
year={2016},
month={10},
date={2016-10-05},
url={https://ieeexplore.ieee.org/document/7776599/authors#authors},
doi={https://doi.org/10.1109/CloudNet.2016.38},
keywords={Robots;Containers;Cloud computing;Computer architecture;Web servers;Sockets},
abstract={The consumer robotics concept has already invaded our everyday lives, however two major drawbacks have become apparent both for the roboticists and the consumers. The first is that these robots are pre-programmed to perform specific tasks and usually their software is proprietary, thus not open to "interventions". The second is that even if their software is open source, low-cost robots usually lack sufficient resources such as CPU power or memory capabilities, thus forbidding advanced algorithms to be executed in-robot. Within the context of RAPP (Robotic Applications for Delivering Smart User Empowering Applications) we treat robots as platforms, where applications can be downloaded and automatically deployed. Furthermore, we propose and implement a novel multi-agent architecture, empowering robots to offload computations in entities denoted as Cloud Agents. This paper discusses the respective architecture in detail.}
}
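
A toy sketch of the computation-offloading idea behind the Cloud Agent concept: light tasks run in-robot, heavy ones are shipped to a cloud-side agent. The endpoint, the payload shape and the CPU-budget heuristic are all hypothetical.

# Hypothetical offloading decision; the URL and protocol are invented.
import json
from urllib import request

CLOUD_AGENT_URL = "http://cloud-agent.example/api/execute"  # hypothetical endpoint

def run_task(task: dict, cpu_budget_ms: float = 50.0):
    if task["estimated_cpu_ms"] <= cpu_budget_ms:
        return task["local_fn"](*task["args"])          # cheap: run on the robot
    payload = json.dumps({"name": task["name"], "args": task["args"]}).encode()
    req = request.Request(CLOUD_AGENT_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:                  # heavy: offload to the cloud agent
        return json.load(resp)

print(run_task({"name": "add", "estimated_cpu_ms": 1,
                "local_fn": lambda a, b: a + b, "args": (2, 3)}))  # runs locally -> 5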

2015

Journal Articles

Charalampos Dimoulas and Andreas Symeonidis
"Enhancing social multimedia matching and management through audio-adaptive audiovisual bimodal segmentation"
IEEE Multimedia, PP, (99), 2015 May

@article{2015DimoulasIEEEM,
author={Charalampos Dimoulas and Andreas Symeonidis},
title={Enhancing social multimedia matching and management through audio-adaptive audiovisual bimodal segmentation},
journal={IEEE Multimedia},
volume={PP},
number={99},
year={2015},
month={05},
date={2015-05-13},
doi={http://dx.doi.org/10.1109/MMUL.2015.33},
}

Alfonso M Duarte, Fotis Psomopoulos, Christophe Blanchet, Alexandre M Bonvin, Manuel Corpas, Alain Franc, Rafael C Jimenez, Jesus M de Lucas, Tommi Nyrönen, Gergely Sipos and Stephanie B Suhr
"Future opportunities and trends for e-infrastructures and life sciences: going beyond the grid to enable life science data analysis"
Frontiers in Genetics, 6, (197), 2015 Jun

With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community.

@article{2015DuarteFG,
author={Alfonso M Duarte and Fotis Psomopoulos and Christophe Blanchet and Alexandre M Bonvin and Manuel Corpas and Alain Franc and Rafael C Jimenez and Jesus M de Lucas and Tommi Nyrönen and Gergely Sipos and Stephanie B Suhr},
title={Future opportunities and trends for e-infrastructures and life sciences: going beyond the grid to enable life science data analysis},
journal={Frontiers in Genetics},
volume={6},
number={197},
year={2015},
month={06},
date={2015-06-23},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Future-opportunities-and-trends-for-e-infrastructures-and-life-sciences-going-beyond-the-grid-to-enable-life-science-data-analysis.pdf},
abstract={With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community.}
}

Themistoklis Mavridis and Andreas Symeonidis
"Identifying valid search engine ranking factors in a Web 2.0 and Web 3.0 context for building efficient SEO mechanisms"
Engineering Applications of Artificial Intelligence (EAAI), 41, pp. 75–91, 2015 Mar

It is common knowledge that the web has been continuously evolving, from a read medium to a read/write scheme and, lately, to a read/write/infer corpus. To follow the evolution, Search Engines have been undergoing continuous updates in order to provide the user with a well-targeted, personalized and improved experience of the web. Along with this focus on content quality and user preferences, search engines have also been striving to integrate Semantic Web primitives, in order to enhance their intelligence. Current work discusses the evolution of search engine ranking factors in a Web 2.0 and Web 3.0 context. A benchmark crawler, LSHrank, has been developed, which employs known search engine APIs and evaluates results against various already established metrics, in different domains and types of web content. The ultimate LSHrank objective is the development of a Search Engine Optimization (SEO) mechanism that will enrich and alter the content of a website in order to achieve its optimal ranking in search engine result pages (SERPs).

@article{2015mavridisEAAI,
author={Themistoklis Mavridis and Andreas Symeonidis},
title={Identifying valid search engine ranking factors in a Web 2.0 and Web 3.0 context for building efficient SEO mechanisms},
journal={Engineering Applications of Artificial Intelligence (EAAI)},
volume={41},
pages={75–91},
year={2015},
month={03},
date={2015-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Identifying-valid-search-engine-ranking-factors-in-a-Web-2.0-and-Web-3.0-context-for-building-efficient-SEO-mechanisms.pdf},
doi={https://doi.org/10.1016/j.engappai.2015.02.002},
keywords={semantic web;search engine optimization;Search engine ranking factors analysis;Content quality;Social web},
abstract={It is common knowledge that the web has been continuously evolving, from a read medium to a read/write scheme and, lately, to a read/write/infer corpus. To follow the evolution, Search Engines have been undergoing continuous updates in order to provide the user with a well-targeted, personalized and improved experience of the web. Along with this focus on content quality and user preferences, search engines have also been striving to integrate Semantic Web primitives, in order to enhance their intelligence. Current work discusses the evolution of search engine ranking factors in a Web 2.0 and Web 3.0 context. A benchmark crawler, LSHrank, has been developed, which employs known search engine APIs and evaluates results against various already established metrics, in different domains and types of web content. The ultimate LSHrank objective is the development of a Search Engine Optimization (SEO) mechanism that will enrich and alter the content of a website in order to achieve its optimal ranking in search engine result pages (SERPs).}
}
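
The kind of factor analysis LSHrank performs can be sketched as correlating a candidate ranking factor with observed SERP positions; using Spearman rank correlation here is an assumption about the metric, and the numbers are invented.

# Correlate one candidate ranking factor with observed result positions.
from scipy.stats import spearmanr

serp_positions = [1, 2, 3, 4, 5, 6]                   # observed ranking
factor_values = [0.95, 0.90, 0.70, 0.75, 0.40, 0.20]  # e.g. a content-quality score

rho, p = spearmanr(serp_positions, factor_values)
print(f"rho={rho:.2f}, p={p:.3f}")  # strongly negative: higher score, better rank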

Dimitrios Vitsios, Fotis Psomopoulos, Pericles Mitkas and Christos Ouzounis
"Inference of pathway decomposition across multiple species through gene clustering"
International Journal on Artificial Intelligence Tools, 24, pp. 25, 2015 Feb

In the wake of gene-oriented data analysis in large-scale bioinformatics studies, focus in research is currently shifting towards the analysis of the functional association of genes, namely the metabolic pathways in which genes participate. The goal of this paper is to attempt to identify the core genes in a specific pathway, based on a user-defined selection of genomes. To this end, a novel algorithm has been developed that uses data from the KEGG database, and through the application of the MCL clustering algorithm, identifies clusters that correspond to different “layers” of genes, either on a phylogenetic or a functional level. The algorithm

@article{2015vitsiosIJAIT,
author={Dimitrios Vitsios and Fotis Psomopoulos and Pericles Mitkas and Christos Ouzounis},
title={Inference of pathway decomposition across multiple species through gene clustering},
journal={International Journal on Artificial Intelligence Tools},
volume={24},
pages={25},
year={2015},
month={02},
date={2015-02-23},
url={http://www.worldscientific.com/doi/pdf/10.1142/S0218213015400035},
abstract={In the wake of gene-oriented data analysis in large-scale bioinformatics studies, focus in research is currently shifting towards the analysis of the functional association of genes, namely the metabolic pathways in which genes participate. The goal of this paper is to attempt to identify the core genes in a specific pathway, based on a user-defined selection of genomes. To this end, a novel algorithm has been developed that uses data from the KEGG database, and through the application of the MCL clustering algorithm, identifies clusters that correspond to different “layers” of genes, either on a phylogenetic or a functional level. The algorithm}
}
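
A very small MCL-style clustering sketch (alternating expansion and inflation on a column-stochastic matrix), in the spirit of the gene-clustering step described above; the inflation parameter and convergence handling are simplified assumptions.

# Tiny MCL variant: expansion (matrix square) + inflation (elementwise power).
import numpy as np

def mcl(adj: np.ndarray, inflation: float = 2.0, iters: int = 50) -> np.ndarray:
    M = adj + np.eye(len(adj))            # add self-loops
    M = M / M.sum(axis=0)                 # column-normalize
    for _ in range(iters):
        M = M @ M                         # expansion
        M = M ** inflation                # inflation
        M = M / M.sum(axis=0)
    return M                              # surviving rows mark cluster attractors

adj = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 0]], float)
print(np.round(mcl(adj), 2))              # the triangle clusters; node 3 stays alone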

2015

Books

Alexandros Gkiokas, Emmanouil G. Tsardoulias and Pericles A. Mitkas
"Hive Collective Intelligence for Cloud Robotics: A Hybrid Distributed Robotic Controller Design for Learning and Adaptation."
Springer International Publishing, 2015 Mar

The recent advent of Cloud Computing inevitably gave rise to Cloud Robotics. Whilst the field is arguably still in its infancy, great promise is shown regarding the problem of limited computational power in Robotics. This is the most evident advantage of Cloud Robotics, but other, much more significant yet subtle advantages can now be identified. Moving away from traditional Robotics, and approaching Cloud Robotics through the prism of distributed systems or Swarm Intelligence offers quite an interesting composure; physical robots deployed across different areas may delegate tasks to higher intelligence agents residing in the cloud. This design has certain distinct attributes, similar to the organisation of a hive or bee colony. Such a parallelism is crucial for the foundations set hereinafter, as they express through the hive design a new scheme of distributed robotic architectures. Delegation of agent intelligence, from the physical robot swarms to the cloud controllers, creates a unique type of Hive Intelligence, where the controllers residing in the cloud may act as the brain of a ubiquitous group of robots, whilst the robots themselves act as proxies for the Hive Intelligence. The sensors of the hive system providing the input and output are the robots, yet the information processing may take place collectively, individually or on a central hub, thus offering the advantages of a hybrid swarm and cloud controller. The realisation that radical robotic architectures can be created and implemented with current Artificial Intelligence models raises interesting questions, such as whether robots belonging to a hive can perform tasks and procedures better or faster, and whether they can learn through their interactions, and hence become more adaptive and intelligent.

@book{2015GkiokasSIP,
author={Alexandros Gkiokas and Emmanouil G. Tsardoulias and Pericles A. Mitkas},
title={Hive Collective Intelligence for Cloud Robotics: A Hybrid Distributed Robotic Controller Design for Learning and Adaptation.},
publisher={Springer International Publishing},
year={2015},
month={03},
date={2015-03-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Hive-Collective-Intelligence-for-Cloud-Robotics-A-Hybrid-Distributed-Robotic-Controller-Design-for-Learning-and-Adaptation.pdf},
abstract={The recent advent of Cloud Computing inevitably gave rise to Cloud Robotics. Whilst the field is arguably still in its infancy, great promise is shown regarding the problem of limited computational power in Robotics. This is the most evident advantage of Cloud Robotics, but other, much more significant yet subtle advantages can now be identified. Moving away from traditional Robotics, and approaching Cloud Robotics through the prism of distributed systems or Swarm Intelligence offers quite an interesting composure; physical robots deployed across different areas may delegate tasks to higher intelligence agents residing in the cloud. This design has certain distinct attributes, similar to the organisation of a hive or bee colony. Such a parallelism is crucial for the foundations set hereinafter, as they express through the hive design a new scheme of distributed robotic architectures. Delegation of agent intelligence, from the physical robot swarms to the cloud controllers, creates a unique type of Hive Intelligence, where the controllers residing in the cloud may act as the brain of a ubiquitous group of robots, whilst the robots themselves act as proxies for the Hive Intelligence. The sensors of the hive system providing the input and output are the robots, yet the information processing may take place collectively, individually or on a central hub, thus offering the advantages of a hybrid swarm and cloud controller. The realisation that radical robotic architectures can be created and implemented with current Artificial Intelligence models raises interesting questions, such as whether robots belonging to a hive can perform tasks and procedures better or faster, and whether they can learn through their interactions, and hence become more adaptive and intelligent.}
}

Pericles A. Mitkas
"Assistive Robots as Future Caregivers: The RAPP Approach."
Springer International Publishing, 2015 Mar

As our societies are affected by a dramatic demographic change, the percentage of elderly and people requiring support in their daily life is expected to increase in the near future and caregivers will not be enough to assist and support them. Socially interactive robots can help confront this situation not only by physically assisting people but also by functioning as a companion. The rising sales figures of robots point towards a trend break concerning robotics. To lower the cost for developers and to increase their interest in developing robotic applications, the RAPP approach introduces the idea of robots as platforms. RAPP (A Software Platform for Delivering Smart User Empowering Robotic Applications) aims to provide a software platform in order to support the creation and delivery of robotic applications (RApps) targeting people at risk of exclusion, especially older people. The open-source software platform will provide an API with the required functionality for the implementation of RApps. It will also provide access to the robots’ sensors and actuators employing higher level commands, by adding a middleware stack with functionalities suitable for different kinds of robots. RAPP will expand the robots’ computational and storage capabilities and enable machine learning operations, distributed data collection and processing. Through a special repository for RApps, the platform will support knowledge sharing among robots in order to provide personalized applications based on adaptation to individuals. The use of a common API will facilitate the development of improved applications deployable for a variety of robots. These applications target people with different needs, capabilities and expectations, while at the same time respect their privacy and autonomy. The RAPP approach can lower the cost of robotic applications development and it is expected to have a profound effect in the robotics market.

@book{2015MitkasSIP,
author={Pericles A. Mitkas},
title={Assistive Robots as Future Caregivers: The RAPP Approach.},
publisher={Springer International Publishing},
year={2015},
month={03},
date={2015-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Assistive-Robots-as-Future-Caregivers-The-RAPP-Approach.pdf},
abstract={As our societies are affected by a dramatic demographic change, the percentage of elderly and people requiring support in their daily life is expected to increase in the near future and caregivers will not be enough to assist and support them. Socially interactive robots can help confront this situation not only by physically assisting people but also by functioning as a companion. The rising sales figures of robots point towards a trend break concerning robotics. To lower the cost for developers and to increase their interest in developing robotic applications, the RAPP approach introduces the idea of robots as platforms. RAPP (A Software Platform for Delivering Smart User Empowering Robotic Applications) aims to provide a software platform in order to support the creation and delivery of robotic applications (RApps) targeting people at risk of exclusion, especially older people. The open-source software platform will provide an API with the required functionality for the implementation of RApps. It will also provide access to the robots’ sensors and actuators employing higher level commands, by adding a middleware stack with functionalities suitable for different kinds of robots. RAPP will expand the robots’ computational and storage capabilities and enable machine learning operations, distributed data collection and processing. Through a special repository for RApps, the platform will support knowledge sharing among robots in order to provide personalized applications based on adaptation to individuals. The use of a common API will facilitate the development of improved applications deployable for a variety of robots. These applications target people with different needs, capabilities and expectations, while at the same time respect their privacy and autonomy. The RAPP approach can lower the cost of robotic applications development and it is expected to have a profound effect in the robotics market.}
}

Emmanouil G. Tsardoulias, Cezary Zielinski, Wlodzimierz Kasprzak, Sofia Reppou, Andreas L. Symeonidis, Pericles A. Mitkas and George Karagiannis
"Merging Robotics and AAL Ontologies: The RAPP Methodology"
Springer International Publishing, 2015 Mar

@book{2015TsardouliasSIP,
author={Emmanouil G. Tsardoulias and Cezary Zielinski and Wlodzimierz Kasprzak and Sofia Reppou and Andreas L. Symeonidis and Pericles A. Mitkas and George Karagiannis},
title={Merging Robotics and AAL Ontologies: The RAPP Methodology},
publisher={Springer International Publishing},
year={2015},
month={03},
date={2015-03-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Merging-Robotics-and-AAL-Ontologies-The-RAPP-Methodology.pdf},
}

2015

Conference Papers

Themistoklis Diamantopoulos and Andreas Symeonidis
"Employing Source Code Information to Improve Question-Answering in Stack Overflow"
The 12th Working Conference on Mining Software Repositories (MSR 2015), pp. 454-457, Florence, Italy, 2015 May

Nowadays, software development has been greatly influenced by question-answering communities, such as Stack Overflow. A new problem-solving paradigm has emerged, as developers post problems they encounter that are then answered by the community. In this paper, we propose a methodology that allows searching for solutions in Stack Overflow, using the main elements of a question post, including not only its title, tags, and body, but also its source code snippets. We describe a similarity scheme for these elements and demonstrate how structural information can be extracted from source code snippets and compared to further improve the retrieval of questions. The results of our evaluation indicate that our methodology is effective in recommending similar question posts, allowing community members to search without fully forming a question.
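
As an illustration of the retrieval idea described above, the minimal sketch below combines per-element similarities of two question posts into a single score. The token-based cosine measure and the weights are illustrative assumptions, not the paper's actual similarity scheme (which also compares structural information extracted from the snippets).

import math
from collections import Counter

def cosine(tokens_a, tokens_b):
    # Cosine similarity between two bags of tokens.
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def question_similarity(q1, q2, weights=(0.3, 0.2, 0.3, 0.2)):
    # Each question is a dict with tokenized "title", "tags", "body", "code";
    # the weights over the four elements are purely illustrative.
    elements = ("title", "tags", "body", "code")
    return sum(w * cosine(q1[e], q2[e]) for w, e in zip(weights, elements))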

@conference{2015DiamantopoulosMSR,
author={Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Employing Source Code Information to Improve Question-Answering in Stack Overflow},
booktitle={The 12th Working Conference on Mining Software Repositories (MSR 2015)},
pages={454-457},
address={Florence, Italy},
year={2015},
month={05},
date={2015-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/MSR2015.pdf},
abstract={Nowadays, software development has been greatly influenced by question-answering communities, such as Stack Overflow. A new problem-solving paradigm has emerged, as developers post problems they encounter that are then answered by the community. In this paper, we propose a methodology that allows searching for solutions in Stack Overflow, using the main elements of a question post, including not only its title, tags, and body, but also its source code snippets. We describe a similarity scheme for these elements and demonstrate how structural information can be extracted from source code snippets and compared to further improve the retrieval of questions. The results of our evaluation indicate that our methodology is effective in recommending similar question posts, allowing community members to search without fully forming a question.}
}

Themistoklis Diamantopoulos and Andreas Symeonidis
"Towards Interpretable Defect-Prone Component Analysis using Genetic Fuzzy Systems"
IEEE/ACM 4th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE 2015), pp. 32-38, Florence, Italy, 2015 May

The problem of Software Reliability Prediction has been attracting the attention of several researchers during the last few years. Various classification techniques have been proposed in the current literature that involve the use of metrics drawn from version control systems in order to classify software components as defect-prone or defect-free. In this paper, we create a novel genetic fuzzy rule-based system to efficiently model the defect-proneness of each component. The system uses a Mamdani-Assilian inference engine and models the problem as a one-class classification task. System rules are constructed using a genetic algorithm, where each chromosome represents a rule base (Pittsburgh approach). The parameters of our fuzzy system and the operators of the genetic algorithm are designed with regard to producing interpretable output. Thus, the output offers not only effective classification, but also a comprehensive set of rules that can be easily visualized to extract useful conclusions about the metrics of the software.
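
A compact sketch of the Pittsburgh-style setup the abstract outlines, where each chromosome is an entire fuzzy rule base and rules fire through a Mamdani-style min/max scheme. The triangular memberships, the mutation operator and the size penalty that nudges the search towards small, interpretable rule bases are all illustrative assumptions, not the paper's actual design:

import random

def tri(x, c, w=0.25):
    # Triangular membership centred at c with half-width w (metrics in [0, 1]).
    return max(0.0, 1.0 - abs(x - c) / w)

def fire(rule, sample):
    # Mamdani-style AND: minimum membership over the metric conditions.
    return min(tri(x, c) for x, c in zip(sample, rule))

def membership(rule_base, sample):
    # OR over rules: the maximum activation gives the defect-proneness degree.
    return max(fire(r, sample) for r in rule_base)

def fitness(rule_base, defective):
    # One-class view: cover the known defect-prone samples, but penalise
    # large rule bases so the evolved output stays interpretable.
    cover = sum(membership(rule_base, s) for s in defective) / len(defective)
    return cover - 0.01 * len(rule_base)

def evolve(defective, n_metrics, pop=30, gens=50, seed=0):
    rnd = random.Random(seed)
    def new_rule_base():
        return [[rnd.random() for _ in range(n_metrics)]
                for _ in range(rnd.randint(2, 6))]
    population = [new_rule_base() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda rb: fitness(rb, defective), reverse=True)
        elite = population[: pop // 2]
        children = []
        for rb in elite:                      # mutate one condition per child
            child = [r[:] for r in rb]
            rule = rnd.choice(child)
            rule[rnd.randrange(n_metrics)] = rnd.random()
            children.append(child)
        population = elite + children
    return max(population, key=lambda rb: fitness(rb, defective))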

@inproceedings{2015DiamantopoulosRAISE,
author={Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Towards Interpretable Defect-Prone Component Analysis using Genetic Fuzzy Systems},
booktitle={IEEE/ACM 4th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE 2015)},
pages={32-38},
address={Florence, Italy},
year={2015},
month={05},
date={2015-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Towards-Interpretable-Defect-Prone-Component-Analysis-using-Genetic-Fuzzy-Systems-.pdf},
abstract={The problem of Software Reliability Prediction has been attracting the attention of several researchers during the last few years. Various classification techniques have been proposed in the current literature that involve the use of metrics drawn from version control systems in order to classify software components as defect-prone or defect-free. In this paper, we create a novel genetic fuzzy rule-based system to efficiently model the defect-proneness of each component. The system uses a Mamdani-Assilian inference engine and models the problem as a one-class classification task. System rules are constructed using a genetic algorithm, where each chromosome represents a rule base (Pittsburgh approach). The parameters of our fuzzy system and the operators of the genetic algorithm are designed with regard to producing interpretable output. Thus, the output offers not only effective classification, but also a comprehensive set of rules that can be easily visualized to extract useful conclusions about the metrics of the software.}
}

Athanassios M. Kintsakis, Antonios Chrysopoulos and Pericles Mitkas
"Agent-based short-term load and price forecasting using a parallel implementation of an adaptive PSO-trained local linear wavelet neural network"
European Energy Market (EEM), pp. 1-5, 2015 May

Short-Term Load and Price forecasting are crucial to the stability of electricity markets and to the profitability of the involved parties. The work presented here makes use of a Local Linear Wavelet Neural Network (LLWNN), trained by a special adaptive version of the Particle Swarm Optimization algorithm and implemented as a parallel process in CUDA. Experiments for short-term load and price forecasting, up to 24 hours ahead, were conducted for energy market datasets from Greece and the USA. In addition, the fast response time of the system enabled its encapsulation in a PowerTAC agent, competing in a real-time environment. The system displayed robust all-around performance in a plethora of real and simulated energy markets, each characterized by unique patterns and deviations. The low forecasting error, real-time performance and the significant increase in the profitability of an energy market agent show that our approach is a powerful prediction tool, with multiple expansion possibilities.
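
The two building blocks the abstract names can be sketched in a few lines: a local linear wavelet unit, whose local linear model is gated by a radial wavelet of the scaled input, and the canonical PSO velocity/position update that would tune the flattened network parameters. The Mexican-hat wavelet and the PSO coefficients are standard textbook choices, not the paper's adaptive variant or its CUDA implementation:

import numpy as np

def llwnn_output(x, centers, scales, weights, biases):
    # Each hidden unit contributes a local linear model (b + w.x)
    # gated by a radial Mexican-hat wavelet of the scaled input.
    out = 0.0
    for c, a, w, b in zip(centers, scales, weights, biases):
        u = (x - c) / a
        r2 = float(u @ u)
        psi = (1.0 - r2) * np.exp(-r2 / 2.0)
        out += (b + float(w @ x)) * psi
    return out

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # Canonical PSO update over a population of flattened parameter vectors.
    r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel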

@conference{2015KintsakisEEM,
author={Athanassios M. Kintsakis and Antonios Chrysopoulos and Pericles Mitkas},
title={Agent-based short-term load and price forecasting using a parallel implementation of an adaptive PSO-trained local linear wavelet neural network},
booktitle={European Energy Market (EEM)},
pages={1-5},
year={2015},
month={05},
date={2015-05-19},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Agent-based-Short-Term-Load-and-Price-Forecasting-Using-a-Parallel-Implementation-of-an-Adaptive-PSO-Trained-Local-Linear-Wavelet-Neural-Network.pdf},
doi={https://doi.org/10.1109/EEM.2015.7216611},
keywords={Load Forecasting;Neural Networks;Parallel architectures Particle swarm optimization;Price Forecasting;Wavelet Neural Networks},
abstract={Short-Term Load and Price forecasting are crucial to the stability of electricity markets and to the profitability of the involved parties. The work presented here makes use of a Local Linear Wavelet Neural Network (LLWNN), trained by a special adaptive version of the Particle Swarm Optimization algorithm and implemented as a parallel process in CUDA. Experiments for short-term load and price forecasting, up to 24 hours ahead, were conducted for energy market datasets from Greece and the USA. In addition, the fast response time of the system enabled its encapsulation in a PowerTAC agent, competing in a real-time environment. The system displayed robust all-around performance in a plethora of real and simulated energy markets, each characterized by unique patterns and deviations. The low forecasting error, real-time performance and the significant increase in the profitability of an energy market agent show that our approach is a powerful prediction tool, with multiple expansion possibilities.}
}

Pericles A. Mitkas
"Assistive Robots as Future Caregivers: The RAPP Approach"
Automation Conference, 2015 Mar

As our societies are affected by a dramatic demographic change, the percentage of elderly and people requiring support in their daily life is expected to increase in the near future, and caregivers will not be enough to assist and support them. Socially interactive robots can help confront this situation not only by physically assisting people but also by functioning as companions. The rising sales figures of robots point towards a trend break concerning robotics. To lower the cost for developers and to increase their interest in developing robotic applications, the RAPP approach introduces the idea of robots as platforms. RAPP (A Software Platform for Delivering Smart User Empowering Robotic Applications) aims to provide a software platform in order to support the creation and delivery of robotic applications (RApps) targeting people at risk of exclusion, especially older people. The open-source software platform will provide an API with the required functionality for the implementation of RApps. It will also provide access to the robots’ sensors and actuators employing higher level commands, by adding a middleware stack with functionalities suitable for different kinds of robots. RAPP will expand the robots’ computational and storage capabilities and enable machine learning operations, distributed data collection and processing. Through a special repository for RApps, the platform will support knowledge sharing among robots in order to provide personalized applications based on adaptation to individuals. The use of a common API will facilitate the development of improved applications deployable for a variety of robots. These applications target people with different needs, capabilities and expectations, while at the same time respecting their privacy and autonomy. The RAPP approach can lower the cost of robotic application development and is expected to have a profound effect on the robotics market.

@conference{2015MitkasACRAPP,
author={Pericles A. Mitkas},
title={Assistive Robots as Future Caregivers: The RAPP Approach},
booktitle={Automation Conference},
year={2015},
month={03},
date={2015-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Assistive-Robots-as-Future-Caregivers-The-RAPP-Approach.pdf},
abstract={As our societies are affected by a dramatic demographic change, the percentage of elderly and people requiring support in their daily life is expected to increase in the near future, and caregivers will not be enough to assist and support them. Socially interactive robots can help confront this situation not only by physically assisting people but also by functioning as companions. The rising sales figures of robots point towards a trend break concerning robotics. To lower the cost for developers and to increase their interest in developing robotic applications, the RAPP approach introduces the idea of robots as platforms. RAPP (A Software Platform for Delivering Smart User Empowering Robotic Applications) aims to provide a software platform in order to support the creation and delivery of robotic applications (RApps) targeting people at risk of exclusion, especially older people. The open-source software platform will provide an API with the required functionality for the implementation of RApps. It will also provide access to the robots’ sensors and actuators employing higher level commands, by adding a middleware stack with functionalities suitable for different kinds of robots. RAPP will expand the robots’ computational and storage capabilities and enable machine learning operations, distributed data collection and processing. Through a special repository for RApps, the platform will support knowledge sharing among robots in order to provide personalized applications based on adaptation to individuals. The use of a common API will facilitate the development of improved applications deployable for a variety of robots. These applications target people with different needs, capabilities and expectations, while at the same time respecting their privacy and autonomy. The RAPP approach can lower the cost of robotic application development and is expected to have a profound effect on the robotics market.}
}

Fotis Psomopoulos, Olga Vrousgou and Pericles A. Mitkas
"Large-scale modular comparative genomics: the Grid approach"
23rd Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) / 14th European Conference on Computational Biology (ECCB), 2015 Jul

@conference{2015PsomopoulosAICISMB,
author={Fotis Psomopoulos and Olga Vrousgou and Pericles A. Mitkas},
title={Large-scale modular comparative genomics: the Grid approach},
booktitle={23rd Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) / 14th European Conference on Computational Biology (ECCB)},
year={2015},
month={07},
date={2015-07-26},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Large-scale-modular-comparative-genomics-the-Grid-approach.pdf}
}

Alexandros Gkiokas, Emmanouil G. Tsardoulias and Pericles A. Mitkas
"Hive Collective Intelligence for Cloud Robotics A Hybrid Distributed Robotic Controller Design for Learning and Adaptation"
Automation Conference, 2015 Mar

The recent advent of Cloud Computing inevitably gave rise to Cloud Robotics. Whilst the field is arguably still in its infancy, it shows great promise regarding the problem of limited computational power in Robotics. This is the most evident advantage of Cloud Robotics, but other, much more significant yet subtle advantages can now be identified. Moving away from traditional Robotics and approaching Cloud Robotics through the prism of distributed systems or Swarm Intelligence offers quite an interesting composition: physical robots deployed across different areas may delegate tasks to higher-intelligence agents residing in the cloud. This design has certain distinct attributes, similar to the organisation of a hive or bee colony. Such a parallelism is crucial for the foundations set hereinafter, as they express, through the hive design, a new scheme of distributed robotic architectures. Delegation of agent intelligence from the physical robot swarms to the cloud controllers creates a unique type of Hive Intelligence, where the controllers residing in the cloud may act as the brain of a ubiquitous group of robots, whilst the robots themselves act as proxies for the Hive Intelligence. The robots are the sensors of the hive system, providing its input and output, yet the information processing may take place collectively, individually or on a central hub, thus offering the advantages of a hybrid swarm and cloud controller. The realisation that radical robotic architectures can be created and implemented with current Artificial Intelligence models raises interesting questions, such as whether robots belonging to a hive can perform tasks and procedures better or faster, and whether they can learn through their interactions and hence become more adaptive and intelligent.

@conference{2015TsardouliasHCIAC,
author={Alexandros Gkiokas and Emmanouil G. Tsardoulias and Pericles A. Mitkas},
title={Hive Collective Intelligence for Cloud Robotics: A Hybrid Distributed Robotic Controller Design for Learning and Adaptation},
booktitle={Automation Conference},
year={2015},
month={03},
date={2015-03-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Hive-Collective-Intelligence-for-Cloud-Robotics-A-Hybrid-Distributed-Robotic-Controller-Design-for-Learning-and-Adaptation.pdf},
abstract={The recent advent of Cloud Computing inevitably gave rise to Cloud Robotics. Whilst the field is arguably still in its infancy, it shows great promise regarding the problem of limited computational power in Robotics. This is the most evident advantage of Cloud Robotics, but other, much more significant yet subtle advantages can now be identified. Moving away from traditional Robotics and approaching Cloud Robotics through the prism of distributed systems or Swarm Intelligence offers quite an interesting composition: physical robots deployed across different areas may delegate tasks to higher-intelligence agents residing in the cloud. This design has certain distinct attributes, similar to the organisation of a hive or bee colony. Such a parallelism is crucial for the foundations set hereinafter, as they express, through the hive design, a new scheme of distributed robotic architectures. Delegation of agent intelligence from the physical robot swarms to the cloud controllers creates a unique type of Hive Intelligence, where the controllers residing in the cloud may act as the brain of a ubiquitous group of robots, whilst the robots themselves act as proxies for the Hive Intelligence. The robots are the sensors of the hive system, providing its input and output, yet the information processing may take place collectively, individually or on a central hub, thus offering the advantages of a hybrid swarm and cloud controller. The realisation that radical robotic architectures can be created and implemented with current Artificial Intelligence models raises interesting questions, such as whether robots belonging to a hive can perform tasks and procedures better or faster, and whether they can learn through their interactions and hence become more adaptive and intelligent.}
}

Emmanouil G. Tsardoulias, Cezary Zielinski, Wlodzimierz Kasprzak, Sofia Reppou, Andreas L. Symeonidis, Pericles A. Mitkas and George Karagiannis
"Merging Robotics and AAL ontologies: The RAPP methodology"
Automation Conference, 2015 Mar

Cloud robotics is becoming a trend in the modern robotics field, as it has become evident that true artificial intelligence can be achieved only by sharing collective knowledge. In the ICT area, the most common way to formulate knowledge is the ontology form, where different meanings connect semantically. Additionally, there is a considerable effort to merge robotics with assisted living concepts, as modern societies suffer from a lack of caregivers for persons in need. In the current work, an attempt is made to merge a robotic and an AAL ontology, as well as to utilize it in the RAPP Project (EU-FP7).

@conference{2015TsardouliasMRALL,
author={Emmanouil G. Tsardoulias and Cezary Zielinski and Wlodzimierz Kasprzak and Sofia Reppou and Andreas L. Symeonidis and Pericles A. Mitkas and George Karagiannis},
title={Merging Robotics and AAL ontologies: The RAPP methodology},
booktitle={Automation Conference},
year={2015},
month={03},
date={2015-03-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Merging_Robotics_and_AAL_ontologies_-_The_RAPP_methodology.pdf},
abstract={Cloud robotics is becoming a trend in the modern robotics field, as it has become evident that true artificial intelligence can be achieved only by sharing collective knowledge. In the ICT area, the most common way to formulate knowledge is the ontology form, where different meanings connect semantically. Additionally, there is a considerable effort to merge robotics with assisted living concepts, as modern societies suffer from a lack of caregivers for persons in need. In the current work, an attempt is made to merge a robotic and an AAL ontology, as well as to utilize it in the RAPP Project (EU-FP7).}
}

Emmanouil G. Tsardoulias, Andreas Symeonidis and Pericles A. Mitkas
"An automatic speech detection architecture for social robot oral interaction"
Proceedings of the Audio Mostly 2015 on Interaction With Sound, p. 33, ACM, Island of Rhodes, 2015 Oct

Social robotics have become a trend in contemporary robotics research, since they can be successfully used in a wide range of applications. One of the most fundamental communication skills a robot must have is the oral interaction with a human, in order to provide feedback or accept commands. And, although text-to-speech is an almost solved problem, this isn

@conference{2015TsardouliasPAMIWS,
author={Emmanouil G. Tsardoulias and Andreas Symeonidis and Pericles A. Mitkas},
title={An automatic speech detection architecture for social robot oral interaction},
booktitle={Proceedings of the Audio Mostly 2015 on Interaction With Sound},
pages={33},
publisher={ACM},
address={Island of Rhodes},
year={2015},
month={10},
date={2015-10-07},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/An-automatic-speech-detection-architecture-for-social-robot-oral-interaction.pdf},
abstract={Social robotics have become a trend in contemporary robotics research, since they can be successfully used in a wide range of applications. One of the most fundamental communication skills a robot must have is the oral interaction with a human, in order to provide feedback or accept commands. And, although text-to-speech is an almost solved problem, this isn}
}

Konstantinos Vavliakis, Anthony Chrysopoulos, Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis and Pericles A. Mitkas
"CASSANDRA: a simulation-based, decision-support tool for energy market stakeholders"
SimuTools, 2015 Dec

Energy gives personal comfort to people, and is essential for the generation of commercial and societal wealth. Nevertheless, energy production and consumption place considerable pressures on the environment, such as the emission of greenhouse gases and air pollutants. They contribute to climate change, damage natural ecosystems and the man-made environment, and cause adverse effects to human health. Lately, novel market schemes emerge, such as the formation and operation of customer coalitions aiming to improve their market power through the pursuit of common benefits. In this paper we present CASSANDRA, an open source, expandable software platform for modelling the demand side of power systems, focusing on small scale consumers. The structural elements of the platform are a) the electrical installations (i.e. households, commercial stores, small industries etc.), b) the respective appliances installed, and c) the electrical consumption-related activities of the people residing in the installations. CASSANDRA serves as a tool for simulation of real demand-side environments, providing decision support for energy market stakeholders. The ultimate goal of the CASSANDRA simulation functionality is the identification of good practices that lead to energy efficiency, the clustering of electric energy consumers according to their consumption patterns, and the study of consumer behaviour change when presented with various demand response programs.
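
The bottom-up structure described above (installations, appliances, activities) lends itself to a very small sketch: hourly appliance loads, driven by activity schedules, are summed into the installation's demand curve. The appliances and their schedules below are invented for illustration and are not CASSANDRA's actual models:

def installation_load(appliances, horizon=24):
    # Bottom-up aggregation: sum each appliance's hourly load into the
    # installation's total demand curve.
    total = [0.0] * horizon
    for power_kw, on_hours in appliances:
        for h in on_hours:
            total[h % horizon] += power_kw
    return total

# Illustrative household: (rated power in kW, hours the activity runs it).
household = [
    (0.15, range(24)),        # refrigerator, always on
    (2.00, [19, 20]),         # oven, evening cooking activity
    (0.10, [21, 22, 23]),     # television, evening leisure activity
]
print(installation_load(household))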

@conference{2015VavliakisSimuTools,
author={Konstantinos Vavliakis and Anthony Chrysopoulos and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={CASSANDRA: a simulation-based, decision-support tool for energy market stakeholders},
booktitle={SimuTools},
year={2015},
month={12},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/CASSANDRA_SimuTools.pdf},
abstract={Energy gives personal comfort to people, and is essential for the generation of commercial and societal wealth. Nevertheless, energy production and consumption place considerable pressures on the environment, such as the emission of greenhouse gases and air pollutants. They contribute to climate change, damage natural ecosystems and the man-made environment, and cause adverse effects to human health. Lately, novel market schemes emerge, such as the formation and operation of customer coalitions aiming to improve their market power through the pursuit of common benefits. In this paper we present CASSANDRA, an open source, expandable software platform for modelling the demand side of power systems, focusing on small scale consumers. The structural elements of the platform are a) the electrical installations (i.e. households, commercial stores, small industries etc.), b) the respective appliances installed, and c) the electrical consumption-related activities of the people residing in the installations. CASSANDRA serves as a tool for simulation of real demand-side environments, providing decision support for energy market stakeholders. The ultimate goal of the CASSANDRA simulation functionality is the identification of good practices that lead to energy efficiency, the clustering of electric energy consumers according to their consumption patterns, and the study of consumer behaviour change when presented with various demand response programs.}
}

Olga Vrousgou, Fotis Psomopoulos and Pericles Mitkas
"A grid-enabled modular framework for efficient sequence analysis workflows"
16th International Conference on Engineering Applications of Neural Networks, Island of Rhodes, 2015 Oct

In the era of Big Data in Life Sciences, efficient processing and analysis of vast amounts of sequence data is becoming an ever more daunting challenge. Among such analyses, sequence alignment is one of the most commonly used procedures, as it provides useful insights on the functionality and relationship of the involved entities. Sequence alignment is also one of the most common computational bottlenecks in several bioinformatics workflows. We have designed and implemented a time-efficient distributed modular application for sequence alignment, phylogenetic profiling and clustering of protein sequences, by utilizing the European Grid Infrastructure. The optimal utilization of the Grid with regard to the respective modules allowed us to achieve significant speedups of the order of 1400%.
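
The modular, embarrassingly parallel shape of such a workflow can be sketched by farming the all-vs-all comparison out to a worker pool; on the European Grid Infrastructure each pair (or chunk of pairs) would become an independent job. The identity-based align_score below is a trivial stand-in for a real aligner, used only to keep the sketch self-contained:

from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def align_score(pair):
    # Stand-in for a real pairwise alignment run on a Grid worker node.
    a, b = pair
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n if n else 0.0

def all_vs_all(sequences, workers=4):
    # Every pair is an independent job, so the comparison matrix can be
    # split across as many workers (or Grid nodes) as are available.
    pairs = list(combinations(range(len(sequences)), 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = pool.map(align_score,
                          [(sequences[i], sequences[j]) for i, j in pairs])
    return dict(zip(pairs, scores))

print(all_vs_all(["MKLV", "MKIV", "MALV"]))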

@conference{2015VrousgouICEANN,
author={Olga Vrousgou and Fotis Psomopoulos and Pericles Mitkas},
title={A grid-enabled modular framework for efficient sequence analysis workflows},
booktitle={16th International Conference on Engineering Applications of Neural Networks},
address={Island of Rhodes},
year={2015},
month={10},
date={2015-10-22},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A-Grid-Enabled-Modular-Framework-for-Efficient-Sequence-Analysis-Workflows.pdf},
abstract={In the era of Big Data in Life Sciences, efficient processing and analysis of vast amounts of sequence data is becoming an ever more daunting challenge. Among such analyses, sequence alignment is one of the most commonly used procedures, as it provides useful insights on the functionality and relationship of the involved entities. Sequence alignment is also one of the most common computational bottlenecks in several bioinformatics workflows. We have designed and implemented a time-efficient distributed modular application for sequence alignment, phylogenetic profiling and clustering of protein sequences, by utilizing the European Grid Infrastructure. The optimal utilization of the Grid with regard to the respective modules allowed us to achieve significant speedups of the order of 1400%.}
}

Christoforos Zolotas and Andreas Symeonidis
"Towards an MDA Mechanism for RESTful Services Development"
The 18th International Conference on Model Driven Engineering Languages and Systems, Ottawa, Canada, 2015 Oct

Automated software engineering research aspires to lead to more consistent software, faster delivery and lower production costs. Meanwhile, RESTful design is rapidly gaining momentum towards becoming the primal software engineering paradigm for the web, due to its simplicity and reusability. This paper attempts to couple the two perspectives and take the first step towards applying the MDE paradigm to RESTful service development at the PIM zone. A UML profile is introduced, which performs PIM meta-modeling of RESTful web services abiding by the third level of Richardson’s maturity model. The profile embeds a slight variation of the MVC design pattern to capture the core REST qualities of a resource. The proposed profile is followed by an indicative example that demonstrates how to apply the concepts presented, in order to automate PIM production of a system according to the MOF stack. Next steps include the introduction of the corresponding CIM, PSM and code production.

@conference{2015ZolotasICMDELS,
author={Christoforos Zolotas and Andreas Symeonidis},
title={Towards an MDA Mechanism for RESTful Services Development},
booktitle={The 18th International Conference on Model Driven Engineering Languages and Systems},
address={Ottawa, Canada},
year={2015},
month={10},
date={2015-10-02},
url={http://ceur-ws.org/Vol-1563/paper6.pdf},
slideshare={http://www.slideshare.net/isselgroup/towards-an-mda-mechanism-for-restful-services-development},
abstract={Automated software engineering research aspires to lead to more consistent software, faster delivery and lower production costs. Meanwhile, RESTful design is rapidly gaining momentum towards becoming the primal software engineering paradigm for the web, due to its simplicity and reusability. This paper attempts to couple the two perspectives and take the first step towards applying the MDE paradigm to RESTful service development at the PIM zone. A UML profile is introduced, which performs PIM meta-modeling of RESTful web services abiding by the third level of Richardson’s maturity model. The profile embeds a slight variation of the MVC design pattern to capture the core REST qualities of a resource. The proposed profile is followed by an indicative example that demonstrates how to apply the concepts presented, in order to automate PIM production of a system according to the MOF stack. Next steps include the introduction of the corresponding CIM, PSM and code production.}
}

2014

Journal Articles

Anna A. Adamopoulou and Andreas Symeonidis
"A simulation testbed for analyzing trust and reputation mechanisms in unreliable online markets"
Electronic Commerce Research and Applications, 35, pp. 114-130, 2014 Oct

Modern online markets are becoming extremely dynamic, indirectly dictating the need for (semi-) autonomous approaches for constant monitoring and immediate action in order to satisfy one’s needs/preferences. In such open and versatile environments, software agents may be considered as a suitable metaphor for dealing with the increasing complexity of the problem. Additionally, trust and reputation have been recognized as key issues in online markets, and many researchers have, from different perspectives, surveyed the related notions, mechanisms and models. Within the context of this work we present an adaptable, multivariate agent testbed for the simulation of open online markets and the study of various factors affecting the quality of the service consumed. This testbed, which we call Euphemus, is highly parameterized and can be easily customized to suit a particular application domain. It allows for building various market scenarios and analyzing interesting properties of e-commerce environments from a trust perspective. The architecture of Euphemus is presented and a number of well-known trust and reputation models are built with Euphemus, in order to show how the testbed can be used to apply and adapt models. Extensive experimentation has been performed in order to show how models behave in unreliable online markets; results are discussed and interesting conclusions are drawn.
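
For a taste of the kind of model such a testbed hosts, here is the core of the well-known beta reputation update, one of several trust models in the literature; the forgetting factor is an illustrative parameter and this is not Euphemus code:

def update_evidence(pos, neg, satisfied, forgetting=0.95):
    # Discount old evidence, then record the new transaction outcome.
    pos, neg = pos * forgetting, neg * forgetting
    if satisfied:
        pos += 1.0
    else:
        neg += 1.0
    return pos, neg

def reputation(pos, neg):
    # Expected value of the Beta(pos + 1, neg + 1) distribution.
    return (pos + 1.0) / (pos + neg + 2.0)

# A provider with 8 good and 2 bad past interactions, then one more failure:
p, n = update_evidence(8.0, 2.0, satisfied=False)
print(round(reputation(p, n), 3))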

@article{2014AdamopoulouECRA,
author={Anna A. Adamopoulou and Andreas Symeonidis},
title={A simulation testbed for analyzing trust and reputation mechanisms in unreliable online markets},
journal={Electronic Commerce Research and Applications},
volume={35},
pages={114-130},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/1-s2.0-S1567422314000465-main.pdf},
doi={https://doi.org/10.1016/j.elerap.2014.07.001},
abstract={Modern online markets are becoming extremely dynamic, indirectly dictating the need for (semi-) autonomous approaches for constant monitoring and immediate action in order to satisfy one’s needs/preferences. In such open and versatile environments, software agents may be considered as a suitable metaphor for dealing with the increasing complexity of the problem. Additionally, trust and reputation have been recognized as key issues in online markets, and many researchers have, from different perspectives, surveyed the related notions, mechanisms and models. Within the context of this work we present an adaptable, multivariate agent testbed for the simulation of open online markets and the study of various factors affecting the quality of the service consumed. This testbed, which we call Euphemus, is highly parameterized and can be easily customized to suit a particular application domain. It allows for building various market scenarios and analyzing interesting properties of e-commerce environments from a trust perspective. The architecture of Euphemus is presented and a number of well-known trust and reputation models are built with Euphemus, in order to show how the testbed can be used to apply and adapt models. Extensive experimentation has been performed in order to show how models behave in unreliable online markets; results are discussed and interesting conclusions are drawn.}
}

Antonios Chrysopoulos, Christos Diou, A.L. Symeonidis and Pericles A. Mitkas
"Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications"
Engineering Applications of Artificial Intelligence, 35, pp. 299-315, 2014 Oct

In contemporary power systems, small-scale consumers account for up to 50% of a country’s total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.
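
The last finding, that small shifts in appliance usage times can cut the daily peak, is easy to make concrete. Below is a toy rescheduler, assuming a 24-hour load profile in kW and a single shiftable appliance run; all values and the exhaustive search are illustrative, not the paper's method:

def shift_to_offpeak(profile, load_kw, start, duration):
    # Remove the appliance's original run from the 24-hour profile...
    base = profile[:]
    for h in range(start, start + duration):
        base[h % 24] -= load_kw
    # ...then place it at the start hour that minimises the daily peak.
    best_start, best_peak = start, float("inf")
    for s in range(24):
        trial = base[:]
        for h in range(s, s + duration):
            trial[h % 24] += load_kw
        if max(trial) < best_peak:
            best_start, best_peak = s, max(trial)
    return best_start, best_peak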

@article{2014chrysopoulosEAAI,
author={Antonios Chrysopoulos and Christos Diou and A.L. Symeonidis and Pericles A. Mitkas},
title={Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications},
journal={Engineering Applications of Artificial Intelligence},
volume={35},
pages={299-315},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Bottom-up-modeling-of-small-scale-energy-consumers-for-effective-Demand-Response-Applications.pdf},
doi={https://doi.org/10.1016/j.engappai.2014.06.015},
keywords={Small-scale consumer models;Demand simulation;Demand Response Applications},
abstract={In contemporary power systems, small-scale consumers account for up to 50% of a country’s total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.}
}

Themistoklis Diamantopoulos and Andreas Symeonidis
"Localizing Software Bugs using the Edit Distance of Call Traces"
International Journal On Advances in Software, 7(1), pp. 277-288, 2014 Oct

Automating the localization of software bugs that do not lead to crashes is a difficult task that has drawn the attention of several researchers. Several popular methods follow the same approach; function call traces are collected and represented as graphs, which are subsequently mined using subgraph mining algorithms in order to provide a ranking of potentially buggy functions-nodes. Recent work has indicated that the scalability of state-of-the-art methods can be improved by reducing the graph dataset using tree edit distance algorithms. The call traces that are closer to each other, but belong to different sets, are the ones that are most significant in localizing bugs. In this work, we further explore the task of selecting the most significant traces, by proposing different call trace selection techniques, based on the Stable Marriage problem, and testing their effectiveness against current solutions. Upon evaluating our methods on a real-world dataset, we prove that our methodology is scalable and effective enough to be applied on dynamic bug detection scenarios.
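
The core machinery the abstract refers to, edit distance between call traces and the selection of the closest cross-set (passing/failing) traces, can be sketched directly; the greedy top-k selection below is a simplification standing in for the paper's Stable-Marriage-based techniques:

def edit_distance(a, b):
    # Levenshtein distance between two call traces (lists of function names).
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[-1] + 1,              # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def closest_cross_pairs(passing, failing, k=3):
    # The nearest (passing, failing) pairs differ the least overall, so
    # their differences point most sharply at potentially buggy functions.
    ranked = sorted((edit_distance(p, f), i, j)
                    for i, p in enumerate(passing)
                    for j, f in enumerate(failing))
    return ranked[:k]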

@article{2014DiamantopoulosIJAS,
author={Themistoklis Diamantopoulos and Andreas Symeonidis},
title={Localizing Software Bugs using the Edit Distance of Call Traces},
journal={International Journal On Advances in Software},
volume={7},
number={1},
pages={277-288},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Localizing-Software-Bugs-using-the-Edit-Distance-of-Call-Traces.pdf},
abstract={Automating the localization of software bugs that do not lead to crashes is a difficult task that has drawn the attention of several researchers. Several popular methods follow the same approach; function call traces are collected and represented as graphs, which are subsequently mined using subgraph mining algorithms in order to provide a ranking of potentially buggy functions-nodes. Recent work has indicated that the scalability of state-of-the-art methods can be improved by reducing the graph dataset using tree edit distance algorithms. The call traces that are closer to each other, but belong to different sets, are the ones that are most significant in localizing bugs. In this work, we further explore the task of selecting the most significant traces, by proposing different call trace selection techniques, based on the Stable Marriage problem, and testing their effectiveness against current solutions. Upon evaluating our methods on a real-world dataset, we prove that our methodology is scalable and effective enough to be applied on dynamic bug detection scenarios.}
}

G. Mamalakis, C. Diou, A.L. Symeonidis and L. Georgiadis
"Of daemons and men: A file system approach towards intrusion detection"
Applied Soft Computing, 25, pp. 1--14, 2014 Oct

We present FI^2DS, a file-system, host-based anomaly detection system that monitors Basic Security Module (BSM) audit records and determines whether a web server has been compromised by comparing monitored activity generated from the web server to a normal usage profile. Additionally, we propose a set of features extracted from file system specific BSM audit records, as well as an IDS that identifies attacks based on a decision engine that employs one-class classification using a moving window on incoming data. We have used two different machine learning algorithms, Support Vector Machines (SVMs) and Gaussian Mixture Models (GMMs), and our evaluation is performed on real-world datasets collected from three web servers and a honeynet. Results are very promising, since FI^2DS detection rates range between 91% and 95.9%, with corresponding false positive rates ranging between 8.1×10^-2% and 9.3×10^-4%. Comparison of FI^2DS to another state-of-the-art filesystem-based IDS, FWRAP, indicates higher effectiveness of the proposed IDS in all three datasets. Within the context of this paper FI^2DS is evaluated for the web daemon user; nevertheless, it can be directly extended to model any daemon-user for both intrusion detection and postmortem analysis.
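
A minimal sketch of the decision-engine idea, one-class classification over a moving window of feature vectors, using scikit-learn's OneClassSVM as the learner. Feature extraction from BSM records is omitted, and the window size, nu and alert threshold are illustrative assumptions rather than FI^2DS's configuration:

import numpy as np
from sklearn.svm import OneClassSVM

def train_profile(normal_features):
    # Fit the normal-usage profile on features from clean server activity.
    return OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(
        np.asarray(normal_features))

def scan(model, stream, window=16):
    # Slide a window over incoming feature vectors; predict() returns -1
    # for outliers, and a mostly-anomalous window raises an alert.
    alerts = []
    for start in range(len(stream) - window + 1):
        preds = model.predict(np.asarray(stream[start:start + window]))
        if (preds == -1).mean() > 0.5:
            alerts.append(start)
    return alerts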

@article{2014MamalakisASC,
author={G. Mamalakis and C. Diou and A.L. Symeonidis and L. Georgiadis},
title={Of daemons and men: A file system approach towards intrusion detection},
journal={Applied Soft Computing},
volume={25},
pages={1--14},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Of-daemons-and-men-A-file-system-approach-towards-intrusion-detection.pdf},
doi={https://doi.org/10.1016/j.asoc.2014.07.026},
keywords={Intrusion detection systems;Anomaly detection;Information security;File system},
abstract={We present FI^2DS, a file-system, host-based anomaly detection system that monitors Basic Security Module (BSM) audit records and determines whether a web server has been compromised by comparing monitored activity generated from the web server to a normal usage profile. Additionally, we propose a set of features extracted from file system specific BSM audit records, as well as an IDS that identifies attacks based on a decision engine that employs one-class classification using a moving window on incoming data. We have used two different machine learning algorithms, Support Vector Machines (SVMs) and Gaussian Mixture Models (GMMs), and our evaluation is performed on real-world datasets collected from three web servers and a honeynet. Results are very promising, since FI^2DS detection rates range between 91% and 95.9%, with corresponding false positive rates ranging between 8.1×10^-2% and 9.3×10^-4%. Comparison of FI^2DS to another state-of-the-art filesystem-based IDS, FWRAP, indicates higher effectiveness of the proposed IDS in all three datasets. Within the context of this paper FI^2DS is evaluated for the web daemon user; nevertheless, it can be directly extended to model any daemon-user for both intrusion detection and postmortem analysis.}
}

Themistoklis Mavridis and Andreas Symeonidis
"Semantic analysis of web documents for the generation of optimal content"
Engineering Applications of Artificial Intelligence, 35, pp. 114-130, 2014 Oct

The Web has been under major evolution over the last decade, and search engines have been trying to incorporate the changes of the web and provide the user with the most relevant results in terms of content. There has been a plethora of attempts to evaluate the quality of a document, some of which have considered the use of semantic analysis for extracting conclusions upon documents around the web. In turn, Search Engine Optimization (SEO) has been under development in order to cope with the changes of search engines and the web. SEO’s aim has been the creation of effective strategies for optimal ranking of websites and webpages in search engines. Current work probes into the semantic analysis of web content. We further elaborate on LDArank, a mechanism that employs Latent Dirichlet Allocation (LDA) for the semantic analysis of web content and the generation of optimal content for given queries. We apply the newly proposed mechanism, LSHrank, and explore the effect of generating web content against various SEO factors. We demonstrate LSHrank’s robustness in producing semantically prominent content in comparison to different semantic analysis based SEO approaches.
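
For a feel of the LDA step at the heart of such an analysis, the sketch below extracts the dominant terms per topic from a set of web documents with scikit-learn; LDArank/LSHrank's actual scoring and content-generation pipeline is the paper's own and is not reproduced here:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_terms(documents, n_topics=5, top_n=8):
    # Fit LDA on the term counts and list the heaviest terms per topic,
    # the raw material for judging how topically prominent content is.
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(documents)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    terms = vec.get_feature_names_out()
    return [[terms[i] for i in topic.argsort()[-top_n:][::-1]]
            for topic in lda.components_]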

@article{2014MavridisEAAI,
author={Themistoklis Mavridis and Andreas Symeonidis},
title={Semantic analysis of web documents for the generation of optimal content},
journal={Engineering Applications of Artificial Intelligence},
volume={35},
pages={114-130},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/1-s2.0-S0952197614001304-main.pdf},
doi={https://doi.org/10.1016/j.engappai.2014.06.008},
abstract={The Web has been under major evolution over the last decade, and search engines have been trying to incorporate the changes of the web and provide the user with the most relevant results in terms of content. There has been a plethora of attempts to evaluate the quality of a document, some of which have considered the use of semantic analysis for extracting conclusions upon documents around the web. In turn, Search Engine Optimization (SEO) has been under development in order to cope with the changes of search engines and the web. SEO’s aim has been the creation of effective strategies for optimal ranking of websites and webpages in search engines. Current work probes into the semantic analysis of web content. We further elaborate on LDArank, a mechanism that employs Latent Dirichlet Allocation (LDA) for the semantic analysis of web content and the generation of optimal content for given queries. We apply the newly proposed mechanism, LSHrank, and explore the effect of generating web content against various SEO factors. We demonstrate LSHrank’s robustness in producing semantically prominent content in comparison to different semantic analysis based SEO approaches.}
}

2014

Conference Papers

Christos Dimou, Fani Tzima, Andreas L. Symeonidis and Pericles A. Mitkas
"Performance Evaluation of Agents and Multi-agent Systems using Formal Specifications in Z Notation"
Lecture Notes on Agents and Data Mining Interaction, pp. 50-54, Springer, Baltimore, Maryland, USA, 2014 May

Software requirements are commonly written in natural language, making them prone to ambiguity, incompleteness and inconsistency. By converting requirements to formal semantic representations, emerging problems can be detected at an early stage of the development process, thus reducing the number of ensuing errors and the development costs. In this paper, we treat the mapping from requirements to formal representations as a semantic parsing task. We describe a novel data set for this task that involves two contributions: first, we establish an ontology for formally representing requirements; and second, we introduce an iterative annotation scheme, in which formal representations are derived through step-wise refinements.

@inproceedings{2014Dimou,
author={Christos Dimou and Fani Tzima and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Performance Evaluation of Agents and Multi-agent Systems using Formal Specifications in Z Notation},
booktitle={Lecture Notes on Agents and Data Mining Interaction},
pages={50-54},
publisher={Springer},
address={Baltimore, Maryland, USA},
year={2014},
month={05},
date={2014-05-05},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Performance-Evaluation-of-Agents-and-Multi-agent-Systems-using-Formal-Specifications-in-Z-Notation.pdf},
abstract={Software requirements are commonly written in natural language, making them prone to ambiguity, incompleteness and inconsistency. By converting requirements to formal semantic representations, emerging problems can be detected at an early stage of the development process, thus reducing the number of ensuing errors and the development costs. In this paper, we treat the mapping from requirements to formal representations as a semantic parsing task. We describe a novel data set for this task that involves two contributions: first, we establish an ontology for formally representing requirements; and second, we introduce an iterative annotation scheme, in which formal representations are derived through step-wise refinements.}
}

Rafaila Grigoriou and Andreas L. Symeonidis
"Towards the Design of User Friendly Search Engines for Software Projects"
Lecture Notes on Natural Language Processing and Information Systems, pp. 164-167, Springer International Publishing, Chicago, Illinois, 2014 Jun

Robots are fast becoming a part of everyday life. This rise can be evidenced both through the public news and announcements, as well as in recent literature in the robotics scientific communities. This expanding development requires new paradigms in producing the necessary software to allow for the users

@inproceedings{2014GrigoriouTDUFSESP,
author={Rafaila Grigoriou and Andreas L. Symeonidis},
title={Towards the Design of User Friendly Search Engines for Software Projects},
booktitle={Lecture Notes on Natural Language Processing and Information Systems},
pages={164-167},
publisher={Springer International Publishing},
address={Chicago, Illinois},
year={2014},
month={06},
date={2014-06-07},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Towards-the-Design-of-User-Friendly-Search-Engines-for-Software-Projects.pdf},
keywords={Search engine ranking factors analysis},
abstract={Robots are fast becoming a part of everyday life. This rise can be evidenced both through the public news and announcements, as well as in recent literature in the robotics scientific communities. This expanding development requires new paradigms in producing the necessary software to allow for the users}
}

Fotis Psomopoulos, Emmanouil Tsardoulias, Alexandros Giokas, Cezary Zielinski, Vincent Prunet, Ilias Trochidis, David Daney, Manuel Serrano, Ludovic Courtes, Stratos Arampatzis and Pericles A. Mitkas
"RAPP System Architecture, Assistance and Service Robotics in a Human Environment"
International Conference on Intelligent Robots and Systems (IEEE/RSJ), Chicago, Illinois, 2014 Sep

Robots are fast becoming a part of everyday life. This rise can be evidenced both through the public news and announcements, as well as in recent literature in the robotics scientific communities. This expanding development requires new paradigms in producing the necessary software to allow for the users

@conference{2014PsomopoulosIEEE/RSJ,
author={Fotis Psomopoulos and Emmanouil Tsardoulias and Alexandros Giokas and Cezary Zielinski and Vincent Prunet and Ilias Trochidis and David Daney and Manuel Serrano and Ludovic Courtes and Stratos Arampatzis and Pericles A. Mitkas},
title={RAPP System Architecture, Assistance and Service Robotics in a Human Environment},
booktitle={International Conference on Intelligent Robots and Systems (IEEE/RSJ)},
address={Chicago, Illinois},
year={2014},
month={09},
date={2014-09-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/RAPP-System-Architecture-Assistance-and-Service-Robotics-in-a-Human-Environment.pdf},
abstract={Robots are fast becoming a part of everyday life. This rise can be evidenced both through the public news and announcements, as well as in recent literature in the robotics scientific communities. This expanding development requires new paradigms in producing the necessary software to allow for the users}
}

Michael Roth, Themistoklis Diamantopoulos, Ewan Klein and Andreas L. Symeonidis
"Software Requirements: A new Domain for Semantic Parsers"
Proceedings of the ACL 2014 Workshop on Semantic Parsing (SP14), pp. 50-54, Baltimore, Maryland, USA, 2014 Jun

Software requirements are commonly written in natural language, making them prone to ambiguity, incompleteness and inconsistency. By converting requirements to formal semantic representations, emerging problems can be detected at an early stage of the development process, thus reducing the number of ensuing errors and the development costs. In this paper, we treat the mapping from requirements to formal representations as a semantic parsing task. We describe a novel data set for this task that involves two contributions: first, we establish an ontology for formally representing requirements; and second, we introduce an iterative annotation scheme, in which formal representations are derived through step-wise refinements.

@inproceedings{roth2014software,
author={Michael Roth and Themistoklis Diamantopoulos and Ewan Klein and Andreas L. Symeonidis},
title={Software Requirements: A new Domain for Semantic Parsers},
booktitle={Proceedings of the ACL 2014 Workshop on Semantic Parsing (SP14)},
pages={50-54},
address={Baltimore, Maryland, USA},
year={2014},
month={06},
date={2014-06-01},
url={http://www.aclweb.org/anthology/W/W14/W14-24.pdf#page=62},
abstract={Software requirements are commonly written in natural language, making them prone to ambiguity, incompleteness and inconsistency. By converting requirements to formal semantic representations, emerging problems can be detected at an early stage of the development process, thus reducing the number of ensuing errors and the development costs. In this paper, we treat the mapping from requirements to formal representations as a semantic parsing task. We describe a novel data set for this task that involves two contributions: first, we establish an ontology for formally representing requirements; and second, we introduce an iterative annotation scheme, in which formal representations are derived through step-wise refinements.}
}

2013

Journal Articles

Kyriakos C. Chatzidimitriou and Pericles A. Mitkas
"Adaptive reservoir computing through evolution and learning"
Neurocomputing, 103, pp. 198-209, 2013 Jan

The development of real-world, fully autonomous agents would require mechanisms that offer generalization capabilities from experience, suitable for a large range of machine learning tasks, like those from the areas of supervised and reinforcement learning. Such capacities could be offered by parametric function approximators that could either model the environment or the agent's policy. To promote autonomy, these structures should be adapted to the problem at hand with no or little human expert input. Towards this goal, we propose an adaptive function approximator method for developing appropriate neural networks in the form of reservoir computing systems through evolution and learning. Our neuro-evolution of augmenting reservoirs approach comprises several ideas, each successful on its own, in an effort to develop an algorithm that can handle a large range of problems more efficiently. In particular, we use the neuro-evolution of augmented topologies algorithm as a meta-search method for the adaptation of echo state networks for handling problems to be encountered by autonomous entities. We test our approach on several test-beds from the realms of time series prediction and reinforcement learning. We compare our methodology against similar state-of-the-art algorithms with promising results.
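
Since the function approximators in question are echo state networks, the reservoir update at their core is worth a short sketch; the neuro-evolution layer described in the abstract would then adapt quantities such as the reservoir topology and spectral radius. The leak rate and the 0.9 spectral radius below are conventional illustrative choices, not the paper's evolved values:

import numpy as np

def esn_states(inputs, w_in, w_res, leak=0.3):
    # Drive the fixed random reservoir with the input sequence and collect
    # its states; only a linear readout over the states is trained later.
    x = np.zeros(w_res.shape[0])
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(w_in @ u + w_res @ x)
        states.append(x.copy())
    return np.array(states)

rng = np.random.default_rng(0)
W = rng.standard_normal((100, 100))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1: echo state property
states = esn_states(rng.standard_normal((50, 1)), rng.standard_normal((100, 1)), W)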

@article{2013ChatzidimitriouN,
author={Kyriakos C. Chatzidimitriou and Pericles A. Mitkas},
title={Adaptive reservoir computing through evolution and learning},
journal={Neurocomputing},
volume={103},
pages={198-209},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Adaptive-reservoir-computing-through-evolution-and-learning.pdf},
abstract={The development of real-world, fully autonomous agents would require mechanisms that offer generalization capabilities from experience, suitable for a large range of machine learning tasks, like those from the areas of supervised and reinforcement learning. Such capacities could be offered by parametric function approximators that could either model the environment or the agent's policy. To promote autonomy, these structures should be adapted to the problem at hand with no or little human expert input. Towards this goal, we propose an adaptive function approximator method for developing appropriate neural networks in the form of reservoir computing systems through evolution and learning. Our neuro-evolution of augmenting reservoirs approach comprises several ideas, each successful on its own, in an effort to develop an algorithm that can handle a large range of problems more efficiently. In particular, we use the neuro-evolution of augmented topologies algorithm as a meta-search method for the adaptation of echo state networks for handling problems to be encountered by autonomous entities. We test our approach on several test-beds from the realms of time series prediction and reinforcement learning. We compare our methodology against similar state-of-the-art algorithms with promising results.}
}
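
To make the reservoir-computing idea above concrete, here is a minimal echo state network sketch in Python: a fixed random reservoir with only a ridge-regression readout trained on a toy prediction task. The sizes, spectral radius and task are illustrative assumptions; this is not the paper's neuro-evolution algorithm, which additionally adapts the reservoir with NEAT-style search.

import numpy as np

# Minimal echo state network sketch (all sizes and constants are
# illustrative assumptions, not taken from the paper).
rng = np.random.default_rng(0)
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # keep spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(0, 30, 0.1)
u = np.sin(t).reshape(-1, 1)
X, y = run_reservoir(u[:-1]), u[1:]

# Ridge-regression readout: the only trained part of an echo state network.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))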

Christos Maramis, Manolis Falelakis, Irini Lekka, Christos Diou, Pericles A. Mitkas and Anastasios Delopoulos
"Applying semantic technologies in cervical cancer research"
Data Knowl. Eng., 86, pp. 160-178, 2013 Jan

In this paper we present a research system that follows a semantic approach to facilitate medical association studies in the area of cervical cancer. Our system, named ASSIST and developed as an EU research project, assists in cervical cancer research by unifying multiple patient record repositories, physically located in different medical centers or hospitals. Semantic modeling of medical data and rules for inferring domain-specific information allow the system to (i) homogenize the information contained in the isolated repositories by translating it into the terms of a unified semantic representation, (ii) extract diagnostic information not explicitly stored in the individual repositories, and (iii) automate the process of evaluating medical hypotheses by performing case control association studies, which is the ultimate goal of the system.

@article{2013MaramisDKE,
author={Christos Maramis and Manolis Falelakis and Irini Lekka and Christos Diou and Pericles A. Mitkas and Anastasios Delopoulos},
title={Applying semantic technologies in cervical cancer research},
journal={Data Knowl. Eng.},
volume={86},
pages={160-178},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Applying-semantic-technologies-in-cervical-cancer-research.pdf},
abstract={In this paper we present a research system that follows a semantic approach to facilitate medical association studies in the area of cervical cancer. Our system, named ASSIST and developed as an EU research project, assists in cervical cancer research by unifying multiple patient record repositories, physically located in different medical centers or hospitals. Semantic modeling of medical data and rules for inferring domain-specific information allow the system to (i) homogenize the information contained in the isolated repositories by translating it into the terms of a unified semantic representation, (ii) extract diagnostic information not explicitly stored in the individual repositories, and (iii) automate the process of evaluating medical hypotheses by performing case control association studies, which is the ultimate goal of the system.}
}

Fotis E. Psomopoulos, Pericles A. Mitkas and Christos A. Ouzounis
"Detection of Genomic Idiosyncrasies Using Fuzzy Phylogenetic Profile"
PLoS ONE, 2013 Jan

Phylogenetic profiles express the presence or absence of genes and their homologs across a number of reference genomes. They have emerged as an elegant representation framework for comparative genomics and have been used for the genome-wide inference and discovery of functionally linked genes or metabolic pathways. As the number of reference genomes grows, there is an acute need for faster and more accurate methods for phylogenetic profile analysis with increased performance in speed and quality. We propose a novel, efficient method for the detection of genomic idiosyncrasies, i.e. sets of genes found in a specific genome with peculiar phylogenetic properties, such as intra-genome correlations or inter-genome relationships. Our algorithm is a four-step process where genome profiles are first defined as fuzzy vectors, then discretized to binary vectors, followed by a de-noising step, and finally a comparison step to generate intra- and inter-genome distances for each gene profile. The method is validated with a carefully selected benchmark set of five reference genomes, using a range of approaches regarding similarity metrics and pre-processing stages for noise reduction. We demonstrate that the fuzzy profile method consistently identifies the actual phylogenetic relationship and origin of the genes under consideration for the majority of the cases, while the detected outliers are found to be particular genes with peculiar phylogenetic patterns. The proposed method provides a time-efficient and highly scalable approach for phylogenetic stratification, with the detected groups of genes being either similar to their own genome profile or different from it, thus revealing atypical evolutionary histories.

@article{2013PsomopoulosPlosOne,
author={Fotis E. Psomopoulos and Pericles A. Mitkas and Christos A. Ouzounis},
title={Detection of Genomic Idiosyncrasies Using Fuzzy Phylogenetic Profile},
journal={PLoS ONE},
year={2013},
month={01},
date={2013-01-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/journal.pone_.0052854.pdf},
abstract={Phylogenetic profiles express the presence or absence of genes and their homologs across a number of reference genomes. They have emerged as an elegant representation framework for comparative genomics and have been used for the genome-wide inference and discovery of functionally linked genes or metabolic pathways. As the number of reference genomes grows, there is an acute need for faster and more accurate methods for phylogenetic profile analysis with increased performance in speed and quality. We propose a novel, efficient method for the detection of genomic idiosyncrasies, i.e. sets of genes found in a specific genome with peculiar phylogenetic properties, such as intra-genome correlations or inter-genome relationships. Our algorithm is a four-step process where genome profiles are first defined as fuzzy vectors, then discretized to binary vectors, followed by a de-noising step, and finally a comparison step to generate intra- and inter-genome distances for each gene profile. The method is validated with a carefully selected benchmark set of five reference genomes, using a range of approaches regarding similarity metrics and pre-processing stages for noise reduction. We demonstrate that the fuzzy profile method consistently identifies the actual phylogenetic relationship and origin of the genes under consideration for the majority of the cases, while the detected outliers are found to be particular genes with peculiar phylogenetic patterns. The proposed method provides a time-efficient and highly scalable approach for phylogenetic stratification, with the detected groups of genes being either similar to their own genome profile or different from it, thus revealing atypical evolutionary histories.}
}
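
The four-step pipeline described above (fuzzy profiles, discretization, de-noising, distance computation) can be sketched in a few lines of Python; the random toy scores, the 0.5 threshold and the outlier cut-off are assumptions for illustration, not values from the paper.

import numpy as np

# Sketch of the four-step profile pipeline: fuzzy scores -> binary
# profiles -> de-noising -> distances. Toy data and thresholds throughout.
rng = np.random.default_rng(1)
n_genes, n_refs = 200, 5
fuzzy = rng.random((n_genes, n_refs))      # step 1: fuzzy similarity scores

binary = (fuzzy >= 0.5).astype(int)        # step 2: discretize to 0/1

# Step 3: crude de-noising: drop all-zero / all-one (uninformative) rows.
informative = (binary.sum(axis=1) > 0) & (binary.sum(axis=1) < n_refs)
profiles = binary[informative]

# Step 4: Hamming distance of each gene to its genome's consensus profile.
genome_profile = profiles.mean(axis=0) >= 0.5
intra = (profiles != genome_profile).sum(axis=1)
outliers = np.where(intra >= 3)[0]         # genes with atypical histories
print(f"{len(outliers)} candidate idiosyncratic genes")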

Konstantinos N. Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Event identification in web social media through named entity recognition and topic modeling"
Data & Knowledge Engineering, 88, pp. 1-24, 2013 Jan

@article{2013VavliakisDKE,
author={Konstantinos N. Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Event identification in web social media through named entity recognition and topic modeling},
journal={Data & Knowledge Engineering},
volume={88},
pages={1-24},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Event-identification-in-web-social-media-through-named-entity-recognition-and-topic-modeling.pdf},
keywords={event identification;social media analysis;topic maps;peak detection;topic clustering}
}

Emmanouil Tsardoulias and Loukas Petrou
"Critical Rays Scan Match SLAM"
Journal of Intelligent & Robotic Systems, 72, pp. 441-462, 2013 Feb

Scan matching is one of the oldest and simplest methods for occupancy grid based SLAM. The general idea is to find the pose of a robot and update its map simply by calculating the 2-D transformation between a laser scan and its predecessor. Due to its simplicity, many solutions have been proposed and used in various systems, the vast majority of which are iterative. The fact is that although scan matching is simple to implement, it suffers from accumulated noise. Of course, there is certainly a trade-off between the quality of results and the execution time required. Many algorithms have been introduced, in order to achieve good quality maps in a small iteration time, so that on-line execution would be achievable. The proposed SLAM scheme performs scan matching by implementing a ray-selection method. The main idea is to reduce the complexity and time needed for matching by pre-processing the scan and selecting rays that are critical for the matching process. In this paper, several different methods of ray-selection are compared. In addition, matching is performed between the current scan and the global robot map, in order to minimize the accumulated errors. RRHC (Random Restart Hill Climbing) is employed for matching the scan to the map; it is a local search optimization procedure that can be easily parameterized and is much faster than a traditional genetic algorithm (GA), largely because of the low complexity of the problem. The general idea is to construct a parameterizable SLAM that can be used in an on-line system that requires low computational cost. The proposed algorithm assumes a structured civil environment, is oriented for use in the RoboCup - RoboRescue competition, and its main purpose is to construct high quality maps.

@article{etsardouCritical2013,
author={Emmanouil Tsardoulias and Loukas Petrou},
title={Critical Rays Scan Match SLAM},
journal={Journal of Intelligent & Robotic Systems},
volume={72},
pages={441-462},
year={2013},
month={02},
date={2013-02-09},
url={https://link.springer.com/article/10.1007/s10846-012-9811-5},
doi={https://doi.org/10.1007/s10846-012-9811-5},
keywords={SLAM;Scan matching;Random restart hill climbing;Critical rays;Occupancy grid map},
abstract={Scan matching is one of the oldest and simplest methods for occupancy grid based SLAM. The general idea is to find the pose of a robot and update its map simply by calculating the 2-D transformation between a laser scan and its predecessor. Due to its simplicity, many solutions have been proposed and used in various systems, the vast majority of which are iterative. The fact is that although scan matching is simple to implement, it suffers from accumulated noise. Of course, there is certainly a trade-off between the quality of results and the execution time required. Many algorithms have been introduced, in order to achieve good quality maps in a small iteration time, so that on-line execution would be achievable. The proposed SLAM scheme performs scan matching by implementing a ray-selection method. The main idea is to reduce the complexity and time needed for matching by pre-processing the scan and selecting rays that are critical for the matching process. In this paper, several different methods of ray-selection are compared. In addition, matching is performed between the current scan and the global robot map, in order to minimize the accumulated errors. RRHC (Random Restart Hill Climbing) is employed for matching the scan to the map; it is a local search optimization procedure that can be easily parameterized and is much faster than a traditional genetic algorithm (GA), largely because of the low complexity of the problem. The general idea is to construct a parameterizable SLAM that can be used in an on-line system that requires low computational cost. The proposed algorithm assumes a structured civil environment, is oriented for use in the RoboCup - RoboRescue competition, and its main purpose is to construct high quality maps.}
}
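
As an illustration of the matching step, the sketch below runs random restart hill climbing (RRHC) over a 2-D transform (dx, dy, theta) to align a toy laser scan with an occupancy grid. The map, scoring function and step sizes are invented, and the paper's critical-ray selection is omitted.

import numpy as np

# Illustrative RRHC scan-to-map matching on a toy map (not the paper's code).
rng = np.random.default_rng(2)
grid = np.zeros((100, 100)); grid[50, 20:80] = 1.0   # a single wall at y = 50
scan = np.column_stack([np.linspace(25, 75, 30), np.full(30, 48.0)])

def score(pose):
    """Count scan points that land on occupied cells after transforming."""
    dx, dy, th = pose
    c, s = np.cos(th), np.sin(th)
    pts = scan @ np.array([[c, -s], [s, c]]).T + [dx, dy]
    ij = np.round(pts).astype(int)
    ok = (ij[:, 0] >= 0) & (ij[:, 0] < 100) & (ij[:, 1] >= 0) & (ij[:, 1] < 100)
    return grid[ij[ok, 1], ij[ok, 0]].sum()

best = (np.zeros(3), -np.inf)
for _ in range(20):                            # random restarts
    pose = rng.normal(0, [2.0, 2.0, 0.1])      # start near the identity
    for _ in range(200):                       # greedy hill climbing
        cand = pose + rng.normal(0, [0.5, 0.5, 0.02])
        if score(cand) > score(pose):
            pose = cand
    if score(pose) > best[1]:
        best = (pose, score(pose))
print("estimated (dx, dy, theta):", np.round(best[0], 2))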

E. G. Tsardoulias, A. T. Serafi, M. N. Panourgia, A. Papazoglou and L. Petrou
"Construction of Minimized Topological Graphs on Occupancy Grid Maps Based on GVD and Sensor Coverage Information"
Journal of Intelligent & Robotic Systems, 75, pp. 457-474, 2013 Dec

One of the tasks to be carried out during robot exploration of an unknown environment is the construction of a complete map of the environment within a bounded time interval. In order for the exploration to be efficient, a smart planning method must be implemented so that the robot can cover the space as fast as possible. One of the most important pieces of information an intelligent agent can have is a representation of the environment, not necessarily in the form of a map, but of a topological graph of the plane, which can be used to perform efficient planning. This work proposes a method to produce a topological graph of an Occupancy Grid Map (OGM) by using a Manhattan distance function to create the Approximate Generalized Voronoi Diagram (AGVD). Several improvements in the AGVD are made, in order to produce a crisp representation of the space's skeleton, but at the same time to avoid the complex results of other methods. To smooth the final AGVD, morphological operations are performed. A topological graph is constructed from the AGVD, which is minimized by using sensor coverage information, aiming at planning complexity reduction.

@article{etsardouTopo2013,
author={E. G. Tsardoulias and A. T. Serafi and M. N. Panourgia and A. Papazoglou and L. Petrou},
title={Construction of Minimized Topological Graphs on Occupancy Grid Maps Based on GVD and Sensor Coverage Information},
journal={Journal of Intelligent & Robotic Systems},
volume={75},
pages={457-474},
year={2013},
month={12},
date={2013-12-21},
url={https://link.springer.com/article/10.1007/s10846-013-9995-3},
doi={https://doi.org/10.1007/s10846-013-9995-3},
keywords={Approximate Generalized Voronoi Diagram (AGVD);Coverage;Planning;Rescue robot;Topological graph},
abstract={One of the tasks to be carried out during robot exploration of an unknown environment is the construction of a complete map of the environment within a bounded time interval. In order for the exploration to be efficient, a smart planning method must be implemented so that the robot can cover the space as fast as possible. One of the most important pieces of information an intelligent agent can have is a representation of the environment, not necessarily in the form of a map, but of a topological graph of the plane, which can be used to perform efficient planning. This work proposes a method to produce a topological graph of an Occupancy Grid Map (OGM) by using a Manhattan distance function to create the Approximate Generalized Voronoi Diagram (AGVD). Several improvements in the AGVD are made, in order to produce a crisp representation of the space's skeleton, but at the same time to avoid the complex results of other methods. To smooth the final AGVD, morphological operations are performed. A topological graph is constructed from the AGVD, which is minimized by using sensor coverage information, aiming at planning complexity reduction.}
}
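
The brushfire computation at the heart of the method, a Manhattan distance transform over the occupancy grid from which Voronoi-like ridge cells can be extracted, might look like the toy sketch below; the map and the local-maximum ridge test are illustrative assumptions, and the smoothing and graph-minimization stages are omitted.

from collections import deque
import numpy as np

# Manhattan (brushfire) distance transform on a toy occupancy grid.
grid = np.zeros((40, 60), dtype=int)           # 0 = free, 1 = occupied
grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = 1
grid[20, 10:50] = 1                            # an interior wall

dist = np.full(grid.shape, -1)
q = deque((r, c) for r, c in zip(*np.where(grid == 1)))
for r, c in q:
    dist[r, c] = 0
while q:                                       # BFS from all obstacles at once
    r, c = q.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < 40 and 0 <= cc < 60 and dist[rr, cc] < 0:
            dist[rr, cc] = dist[r, c] + 1
            q.append((rr, cc))

# Ridge cells: free cells at least as far from obstacles as all 4 neighbours.
ridge = [(r, c) for r in range(1, 39) for c in range(1, 59)
         if grid[r, c] == 0 and all(dist[r, c] >= dist[r + dr, c + dc]
                                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
print(f"{len(ridge)} skeleton candidate cells")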

2013

Conference Papers

Kyriakos C. Chatzidimitriou, Konstantinos N. Vavliakis, Andreas Symeonidis and Pericles Mitkas
"Redefining the market power of small-scale electricity consumers through consumer social networks"
10th IEEE International Conference on e-Business Engineering (ICEBE 2013), pp. 30-44, Springer Berlin Heidelberg, 2013 Jan

@inproceedings{2013ChatzidimitriouICEBE,
author={Kyriakos C. Chatzidimitriou and Konstantinos N. Vavliakis and Andreas Symeonidis and Pericles Mitkas},
title={Redefining the market power of small-scale electricity consumers through consumer social networks},
booktitle={10th IEEE International Conference on e-Business Engineering (ICEBE 2013)},
pages={30-44},
publisher={Springer Berlin Heidelberg},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Redefining-the-market-power-of-small-scale-electricity-consumers-through-Consumer-Social-Networks.pdf},
doi={http://link.springer.com/chapter/10.1007/978-3-642-40864-9_3#page-1},
keywords={Load Forecasting}
}

Antonios Chrysopoulos, Christos Diou, Andreas L. Symeonidis and Pericles Mitkas
"Agent-based small-scale energy consumer models for energy portfolio management"
Proceedings of the 2013 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2013), pp. 45-50, Atlanta, GA, USA, 2013 Jan

@inproceedings{2013ChrysopoulosIAT,
author={Antonios Chrysopoulos and Christos Diou and Andreas L. Symeonidis and Pericles Mitkas},
title={Agent-based small-scale energy consumer models for energy portfolio management},
booktitle={Proceedings of the 2013 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2013)},
pages={45-50},
address={Atlanta, GA, USA},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Agent-based-small-scale-energy-consumer-models-for-energy-portfolio-management.pdf},
keywords={Load Forecasting}
}

Themistoklis Diamantopoulos and Andreas L. Symeonidis
"Towards Scalable Bug Localization using the Edit Distance of Call Traces"
The Eighth International Conference on Software Engineering Advances (ICSEA 2013), pp. 45-50, Venice, Italy, 2013 Oct

Locating software bugs is a difficult task, especially if they do not lead to crashes. Current research on automating non-crashing bug detection dictates collecting function call traces and representing them as graphs, and reducing the graphs before applying a subgraph mining algorithm. A ranking of potentially buggy functions is derived using frequency statistics for each node (function) in the correct and incorrect set of traces. Although most existing techniques are effective, they do not achieve scalability. To address this issue, this paper suggests reducing the graph dataset in order to isolate the graphs that are significant in localizing bugs. To this end, we propose the use of tree edit distance algorithms to identify the traces that are closer to each other, while belonging to different sets. The scalability of two proposed algorithms, an exact and a faster approximate one, is evaluated using a dataset derived from a real-world application. Finally, although the main scope of this work lies in scalability, the results indicate that there is no compromise in effectiveness.

@inproceedings{2013DiamantopoulosICSEA,
author={Themistoklis Diamantopoulos and Andreas L. Symeonidis},
title={Towards Scalable Bug Localization using the Edit Distance of Call Traces},
booktitle={The Eighth International Conference on Software Engineering Advances (ICSEA 2013)},
pages={45-50},
address={Venice, Italy},
year={2013},
month={10},
date={2013-10-27},
url={https://www.thinkmind.org/download.php?articleid=icsea_2013_2_30_10250},
abstract={Locating software bugs is a difficult task, especially if they do not lead to crashes. Current research on automating non-crashing bug detection dictates collecting function call traces and representing them as graphs, and reducing the graphs before applying a subgraph mining algorithm. A ranking of potentially buggy functions is derived using frequency statistics for each node (function) in the correct and incorrect set of traces. Although most existing techniques are effective, they do not achieve scalability. To address this issue, this paper suggests reducing the graph dataset in order to isolate the graphs that are significant in localizing bugs. To this end, we propose the use of tree edit distance algorithms to identify the traces that are closer to each other, while belonging to different sets. The scalability of two proposed algorithms, an exact and a faster approximate one, is evaluated using a dataset derived from a real-world application. Finally, although the main scope of this work lies in scalability, the results indicate that there is no compromise in effectiveness.}
}
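
A toy rendering of the dataset-reduction idea above: rank failing traces by closeness to the nearest passing trace and keep the closest ones. For brevity, a cheap sequence-similarity ratio over serialized call traces stands in for the paper's tree edit distance, and the traces themselves are invented.

from difflib import SequenceMatcher

# Invented call traces; in the paper these would be trees of function calls.
passing = [["main", "parse", "eval", "print"],
           ["main", "parse", "eval", "eval", "print"]]
failing = [["main", "parse", "eval", "buggy_helper", "print"],
           ["main", "load", "train", "save"]]

def similarity(a, b):
    """Cheap stand-in for tree edit distance over serialized traces."""
    return SequenceMatcher(None, a, b).ratio()

# Failing traces nearest to a passing trace localize the bug best;
# distant ones can be pruned from the mining step.
ranked = sorted(failing,
                key=lambda t: max(similarity(t, p) for p in passing),
                reverse=True)
for trace in ranked:
    print(round(max(similarity(trace, p) for p in passing), 2), trace)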

E. G. Tsardoulias, A. Iliakopoulou, A. Kargakos and L. Petrou
"On Global Path Planning for Occupancy Grid Maps"
22nd International Workshop on Robotics in Alpe-Adria-Danube Region, 2013 Sep

This paper considers the problem of robot path planning in indoor environments. Several approaches to tackle this problem have been proposed, which employ structures such as graphs or trees to direct the robot's movement throughout space. The current document constitutes a survey of eight well-known path planning methods, aiming at comparing and evaluating their performance in various environments of different characteristics.

@conference{etsardouRaad2013,
author={E. G. Tsardoulias and A. Iliakopoulou and A. Kargakos and L. Petrou},
title={On Global Path Planning for Occupancy Grid Maps},
booktitle={22nd International Workshop on Robotics in Alpe-Adria-Danube Region},
year={2013},
month={09},
date={2013-09-11},
url={https://bit.ly/2ZunqRJ},
keywords={Robot Path Planning;Visibility Graphs;RRTs;PRMs;Dijkstra’s algorithm},
abstract={This paper considers the problem of robot path planning in indoor environments. Several approaches to tackle this problem have been proposed, which employ structures such as graphs or trees to direct the robot's movement throughout space. The current document constitutes a survey of eight well-known path planning methods, aiming at comparing and evaluating their performance in various environments of different characteristics.}
}
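
For reference, one representative of the surveyed family is a plain A* planner over an occupancy grid with 4-connectivity and a Manhattan heuristic, sketched below; the map is a toy and the implementation is not taken from the paper.

import heapq
import itertools
import numpy as np

grid = np.zeros((20, 20), dtype=int)
grid[10, 2:18] = 1                                    # a wall, open at both ends
start, goal = (2, 2), (18, 18)

def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()              # tie-breaker so the heap never compares nodes
    open_set = [(h(start), 0, next(tie), start, None)]
    parents, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, node, parent = heapq.heappop(open_set)
        if node in parents:
            continue                                  # already expanded cheaper
        parents[node] = parent
        if node == goal:                              # rebuild path via parents
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and grid[nxt] == 0 and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, next(tie), nxt, node))
    return None                                       # no path exists

print(len(astar(grid, start, goal)) - 1, "steps from start to goal")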

2013

Incollection

Themistoklis Diamantopoulos, Andreas Symeonidis and Antonios Chrysopoulos
"Designing robust strategies for continuous trading in contemporary power markets"
Agent-Mediated Electronic Commerce. Designing Trading Strategies and Mechanisms for Electronic Markets, pp. 30-44, Springer Berlin Heidelberg, 2013 Jan

In contemporary energy markets participants interact with each other via brokers that are responsible for the proper energy flow to and from their clients (usually in the form of long-term or short-term contracts). Power TAC is a realistic simulation of a real-life energy market, aiming towards providing a better understanding and modeling of modern energy markets, while boosting research on innovative trading strategies. Power TAC models brokers as software agents, competing against each other in Double Auction environments, in order to increase their client base and market share. Current work discusses such a broker agent architecture, striving to maximize its own profit. Within the context of our analysis, Double Auction markets are treated as microeconomic systems and, based on state-of-the-art price formation strategies, the following policies are designed: an adaptive price formation policy, a policy for forecasting energy consumption that employs Time Series Analysis primitives, and two shout update policies, a rule-based policy that acts rather hastily, and one based on Fuzzy Logic. The results are quite encouraging and will certainly call for future research.

@incollection{2013DiamantopoulosAMEC-DTSMEM,
author={Themistoklis Diamantopoulos and Andreas Symeonidis and Antonios Chrysopoulos},
title={Designing robust strategies for continuous trading in contemporary power markets},
booktitle={Agent-Mediated Electronic Commerce. Designing Trading Strategies and Mechanisms for Electronic Markets},
pages={30-44},
publisher={Springer Berlin Heidelberg},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Designing-Robust-Strategies-for-Continuous-Trading-in-Contemporary-Power-Markets.pdf},
doi={http://link.springer.com/chapter/10.1007/978-3-642-40864-9_3#page-1},
abstract={In contemporary energy markets participants interact with each other via brokers that are responsible for the proper energy flow to and from their clients (usually in the form of long-term or short-term contracts). Power TAC is a realistic simulation of a real-life energy market, aiming towards providing a better understanding and modeling of modern energy markets, while boosting research on innovative trading strategies. Power TAC models brokers as software agents, competing against each other in Double Auction environments, in order to increase their client base and market share. Current work discusses such a broker agent architecture, striving to maximize its own profit. Within the context of our analysis, Double Auction markets are treated as microeconomic systems and, based on state-of-the-art price formation strategies, the following policies are designed: an adaptive price formation policy, a policy for forecasting energy consumption that employs Time Series Analysis primitives, and two shout update policies, a rule-based policy that acts rather hastily, and one based on Fuzzy Logic. The results are quite encouraging and will certainly call for future research.}
}
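
Two of the policies named in the abstract can be caricatured in a few lines: a double-exponential-smoothing (Holt) forecast of client consumption and a rule-based price update that reacts to market share. All numbers, thresholds and the target-share rule are invented for illustration; this is not the paper's exact policy set.

# Hypothetical per-slot consumption history in kWh.
history = [310, 305, 330, 342, 338, 355, 361]

def holt_forecast(xs, alpha=0.5, beta=0.3):
    """Double exponential smoothing: returns the next-slot forecast."""
    level, trend = xs[0], xs[1] - xs[0]
    for x in xs[1:]:
        new_level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return level + trend

price, target_share, step = 0.12, 0.15, 0.02

def update_price(price, observed_share):
    """Rule-based shout update: cut the tariff when under the target share."""
    if observed_share < target_share:
        return price * (1 - step)
    return price * (1 + step / 2)

print("forecast kWh:", round(holt_forecast(history), 1))
print("new tariff:", round(update_price(price, 0.11), 4))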

2012

Journal Articles

Wolfgang Ketter and Andreas L. Symeonidis
"Competitive Benchmarking: Lessons learned from the Trading Agent Competition"
AI Magazine, 33, (2), pp. 198-209, 2012 Sep

Over the years, competitions have been important catalysts for progress in artificial intelligence. We describe the goal of the overall Trading Agent Competition and highlight particular competitions. We discuss its significance in the context of today’s global market economy as well as AI research, the ways in which it breaks away from limiting assumptions made in prior work, and some of the advances it has engendered over the past ten years. Since its introduction in 2000, TAC has attracted more than 350 entries and brought together researchers from AI and beyond.

@article{2012KetterAIM,
author={Wolfgang Ketter and Andreas L. Symeonidis},
title={Competitive Benchmarking: Lessons learned from the Trading Agent Competition},
journal={AI Magazine},
volume={33},
number={2},
pages={198-209},
year={2012},
month={09},
date={2012-09-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Competitive-Benchmarking-Lessons-learned-from-the-Trading-Agent-Competition.pdf},
abstract={Over the years, competitions have been important catalysts for progress in artificial intelligence. We describe the goal of the overall Trading Agent Competition and highlight particular competitions. We discuss its significance in the context of today’s global market economy as well as AI research, the ways in which it breaks away from limiting assumptions made in prior work, and some of the advances it has engendered over the past ten years. Since its introduction in 2000, TAC has attracted more than 350 entries and brought together researchers from AI and beyond.}
}

Fotis Psomopoulos, Victoria Siarkou, Nikolas Papanikolaou, Ioannis Iliopoulos, Athanasios Tsaftaris, Vasilis Promponas and Christos Ouzounis
"The Chlamydiales Pangenome Revisited: Structural Stability and Functional Coherence"
Genes, 3, (2), pp. 291-319, 2012 May

The entire publicly available set of 37 genome sequences from the bacterial order Chlamydiales has been subjected to comparative analysis in order to reveal the salient features of this pangenome and its evolutionary history. Over 2,000 protein families are detected across multiple species, with a distribution consistent with other studied pangenomes. Of these, there are 180 protein families with multiple members, 312 families with exactly 37 members corresponding to core genes, 428 families with peripheral genes with varying taxonomic distribution and finally 1,125 smaller families. The fact that, even for smaller genomes of Chlamydiales, core genes represent over a quarter of the average protein complement signifies a certain degree of structural stability, given the wide range of phylogenetic relationships within the group. In addition, the propagation of a corpus of manually curated annotations within the discovered core families reveals key functional properties, reflecting a coherent repertoire of cellular capabilities for Chlamydiales. We further investigate over 2,000 genes without homologs in the pangenome and discover two new protein sequence domains. Our results, supported by the genome-based phylogeny for this group, are fully consistent with previous analyses and current knowledge, and point to future research directions towards a better understanding of the structural and functional properties of Chlamydiales.

@article{2012PsomopoulosGenes,
author={Fotis Psomopoulos and Victoria Siarkou and Nikolas Papanikolaou and Ioannis Iliopoulos and Athanasios Tsaftaris and Vasilis Promponas and Christos Ouzounis},
title={The Chlamydiales Pangenome Revisited: Structural Stability and Functional Coherence},
journal={Genes},
volume={3},
number={2},
pages={291-319},
year={2012},
month={05},
date={2012-05-16},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/The-Chlamydiales-Pangenome-Revisited-Structural-Stability-and-Functional-Coherence.pdf},
doi={https://doi.org/10.3390/genes3020291},
abstract={The entire publicly available set of 37 genome sequences from the bacterial order Chlamydiales has been subjected to comparative analysis in order to reveal the salient features of this pangenome and its evolutionary history. Over 2,000 protein families are detected across multiple species, with a distribution consistent with other studied pangenomes. Of these, there are 180 protein families with multiple members, 312 families with exactly 37 members corresponding to core genes, 428 families with peripheral genes with varying taxonomic distribution and finally 1,125 smaller families. The fact that, even for smaller genomes of Chlamydiales, core genes represent over a quarter of the average protein complement signifies a certain degree of structural stability, given the wide range of phylogenetic relationships within the group. In addition, the propagation of a corpus of manually curated annotations within the discovered core families reveals key functional properties, reflecting a coherent repertoire of cellular capabilities for Chlamydiales. We further investigate over 2,000 genes without homologs in the pangenome and discover two new protein sequence domains. Our results, supported by the genome-based phylogeny for this group, are fully consistent with previous analyses and current knowledge, and point to future research directions towards a better understanding of the structural and functional properties of Chlamydiales.}
}

Fani A. Tzima, John B. Theocharis and Pericles A. Mitkas
"Clustering-based initialization of Learning Classifier Systems. Effects on model performance, readability and induction time."
Soft Computing, 16, 2012 Jul

The present paper investigates whether an “informed” initialization process can help supervised LCS algorithms evolve rulesets with better characteristics, including greater predictive accuracy, shorter training times, and/or more compact knowledge representations. Inspired by previous research suggesting that the initialization phase of evolutionary algorithms may have a considerable impact on their convergence speed and the quality of the achieved solutions, we present an initialization method for the class of supervised Learning Classifier Systems (LCS) that extracts information about the structure of studied problems through a pre-training clustering phase and exploits this information by transforming it into rules suitable for the initialization of the learning process. The effectiveness of our approach is evaluated through an extensive experimental phase, involving a variety of real-world classification tasks. Obtained results suggest that clustering-based initialization can indeed improve the predictive accuracy, as well as the interpretability of the induced knowledge representations, and pave the way for further investigations of the potential of better-than-random initialization methods for LCS algorithms.

@article{2012TzimaTASC,
author={Fani A. Tzima and John B. Theocharis and Pericles A. Mitkas},
title={Clustering-based initialization of Learning Classifier Systems: Effects on model performance, readability and induction time},
journal={Soft Computing},
volume={16},
year={2012},
month={07},
date={2012-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Clustering-based-initialization-of-Learning-Classifier-Systems.pdf},
keywords={Classification;Initialization;Learning Classifier Systems (LCS);Supervised Learning},
abstract={The present paper investigates whether an “informed” initialization process can help supervised LCS algorithms evolve rulesets with better characteristics, including greater predictive accuracy, shorter training times, and/or more compact knowledge representations. Inspired by previous research suggesting that the initialization phase of evolutionary algorithms may have a considerable impact on their convergence speed and the quality of the achieved solutions, we present an initialization method for the class of supervised Learning Classifier Systems (LCS) that extracts information about the structure of studied problems through a pre-training clustering phase and exploits this information by transforming it into rules suitable for the initialization of the learning process. The effectiveness of our approach is evaluated through an extensive experimental phase, involving a variety of real-world classification tasks. Obtained results suggest that clustering-based initialization can indeed improve the predictive accuracy, as well as the interpretability of the induced knowledge representations, and pave the way for further investigations of the potential of better-than-random initialization methods for LCS algorithms.}
}
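
The initialization idea translates roughly into the sketch below: cluster the training data, then seed the rule population with one interval rule per cluster, labelled by the cluster's majority class. The toy data, the choice of k-means and the one-standard-deviation interval width are assumptions, not the paper's exact procedure.

import numpy as np
from sklearn.cluster import KMeans

# Toy two-class, two-feature data set.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
rules = []
for k in range(4):
    members = km.labels_ == k
    lo = X[members].mean(0) - X[members].std(0)   # interval lower bounds
    hi = X[members].mean(0) + X[members].std(0)   # interval upper bounds
    label = np.bincount(y[members]).argmax()      # majority class of the cluster
    rules.append((lo.round(2), hi.round(2), label))

for lo, hi, label in rules:                       # candidate initial population
    print(f"IF x in [{lo}, {hi}] THEN class {label}")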

Grigorios Mingas, Emmanouil Tsardoulias and Loukas Petrou
"An FPGA implementation of the SMG-SLAM algorithm"
Microprocessors and Microsystems, 36, (3), pp. 190-204, 2012 May

One of the main tasks of a mobile robot in an unknown environment is to build and update a map of the environment and simultaneously determine its location within this map. This problem is referred to as the simultaneous localization and mapping (SLAM) problem. The article introduces scan-matching genetic SLAM (SMG-SLAM), a novel SLAM algorithm. It is based on a genetic algorithm that uses scan-matching for gene fitness evaluation. The main scope of the article is to present a hardware implementation of SMG-SLAM using a field-programmable gate array (FPGA). The architecture of the system is described and it is shown that it is up to 14.83 times faster compared to the software algorithm without significant loss in accuracy. The proposed implementation can be used as part of a larger system, providing efficient SLAM for autonomous robotic applications.

@article{etsardouFpga2012,
author={Grigorios Mingas and Emmanouil Tsardoulias and Loukas Petrou},
title={An FPGA implementation of the SMG-SLAM algorithm},
journal={Microprocessors and Microsystems},
volume={36},
number={3},
pages={190-204},
year={2012},
month={05},
date={2012-05-01},
url={https://www.sciencedirect.com/science/article/abs/pii/S0141933111001244},
doi={https://doi.org/10.1016/j.micpro.2011.12.002},
abstract={One of the main tasks of a mobile robot in an unknown environment is to build and update a map of the environment and simultaneously determine its location within this map. This problem is referred to as the simultaneous localization and mapping (SLAM) problem. The article introduces scan-matching genetic SLAM (SMG-SLAM), a novel SLAM algorithm. It is based on a genetic algorithm that uses scan-matching for gene fitness evaluation. The main scope of the article is to present a hardware implementation of SMG-SLAM using a field-programmable gate array (FPGA). The architecture of the system is described and it is shown that it is up to 14.83 times faster compared to the software algorithm without significant loss in accuracy. The proposed implementation can be used as part of a larger system, providing efficient SLAM for autonomous robotic applications.}
}

2012

Conference Papers

Georgios T. Andreou, Andreas L. Symeonidis, Christos Diou, Pericles A. Mitkas and Dimitrios P. Labridis
"A framework for the implementation of large scale Demand Response"
Smart Grid Technology, Economics and Policies (SG-TEP), 2012 International Conference on, Nuremberg, Germany, 2012 Jan

@inproceedings{2012andreouSGTEP2012,
author={Georgios T. Andreou and Andreas L. Symeonidis and Christos Diou and Pericles A. Mitkas and Dimitrios P. Labridis},
title={A framework for the implementation of large scale Demand Response},
booktitle={Smart Grid Technology, Economics and Policies (SG-TEP), 2012 International Conference on},
address={Nuremberg, Germany},
year={2012},
month={01},
date={2012-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/publications/tada2012.pdf}
}

Kyriakos C. Chatzidimitriou, Konstantinos Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Policy Search through Adaptive Function Approximation for Bidding in TAC SCM"
Joint Workshop on Trading Agents Design and Analysis and Agent Mediated Electronic Commerce, 2012 May

Agent autonomy is strongly related to learning and adaptation. Machine learning models generated, either by off-line or on-line adaptation, through the use of historical data or current environmental signals, provide agents with the necessary decision-making and generalization capabilities in competitive, dynamic, partially observable and stochastic environments. In this work, we discuss learning and adaptation in the context of the TAC SCM game. We apply a variety of machine learning and computational intelligence methods for generating the most efficient sales component of the agent, dealing with customer orders and production throughput. Along with utility maximization and bid acceptance probability estimation methods, we evaluate regression trees, particle swarm optimization, heuristic control and policy search via adaptive function approximation in order to build an efficient, near-real time, bidding mechanism. Results indicate that a suitable reinforcement learning setup coupled with the power of adaptive function approximation techniques adjusted to the problem at hand, is a good candidate for enabling high performance strategies.

@inproceedings{2012ChatzidimitriouAMEC,
author={Kyriakos C. Chatzidimitriou and Konstantinos Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Policy Search through Adaptive Function Approximation for Bidding in TAC SCM},
booktitle={Joint Workshop on Trading Agents Design and Analysis and Agent Mediated Electronic Commerce},
year={2012},
month={05},
date={2012-05-05},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Policy-Search-through-Adaptive-Function-Approximation-for-Bidding-in-TAC-SCM.pdf},
abstract={Agent autonomy is strongly related to learning and adaptation. Machine learning models generated, either by off-line or on-line adaptation, through the use of historical data or current environmental signals, provide agents with the necessary decision-making and generalization capabilities in competitive, dynamic, partially observable and stochastic environments. In this work, we discuss learning and adaptation in the context of the TAC SCM game. We apply a variety of machine learning and computational intelligence methods for generating the most efficient sales component of the agent, dealing with customer orders and production throughput. Along with utility maximization and bid acceptance probability estimation methods, we evaluate regression trees, particle swarm optimization, heuristic control and policy search via adaptive function approximation in order to build an efficient, near-real time, bidding mechanism. Results indicate that a suitable reinforcement learning setup coupled with the power of adaptive function approximation techniques adjusted to the problem at hand, is a good candidate for enabling high performance strategies.}
}
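
As a toy version of the bid-acceptance estimation mentioned above, the sketch below bins historical (price, won) observations into an empirical acceptance curve and picks the offer price that maximizes expected profit. The synthetic data, the logistic acceptance model behind it and the binning scheme are assumptions for illustration only.

import numpy as np

# Synthetic bidding history: acceptance probability falls with price.
rng = np.random.default_rng(4)
prices = rng.uniform(1500, 2500, 500)
won = (rng.random(500) < 1 / (1 + np.exp((prices - 2000) / 120))).astype(int)

# Empirical acceptance probability in price bins (no model fitting needed).
bins = np.linspace(1500, 2500, 21)
idx = np.digitize(prices, bins) - 1
p_accept = np.array([won[idx == b].mean() if (idx == b).any() else 0.0
                     for b in range(20)])

# Pick the price maximizing expected profit = (price - cost) * P(accept).
cost = 1600.0
mid = (bins[:-1] + bins[1:]) / 2
expected_profit = (mid - cost) * p_accept
print("best offer price:", mid[expected_profit.argmax()])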

Themistoklis Mavridis and Andreas L. Symeonidis
"Identifying webpage Semantics for Search Engine Optimization"
Paper presented at the 8th International Conference on Web Information Systems and Technologies (WEBIST), pp. 18-21, Porto, Portugal, 2012 Jun

The added value of search engines is, apparently, undoubted. Their rapid evolution over the last decade has transformed them into the most important source of information and knowledge. From the end user's side, search engine success implies correct results delivered in a fast and accurate manner, while the ranking of search results for a given query has to be directly correlated to the user's anticipated response. From the content providers' side (i.e. websites), better ranking in a search engine result set implies numerous advantages, like visibility, visitability, and profit. This is the main reason for the flourishing of Search Engine Optimization (SEO) techniques, which aim towards restructuring or enriching website content, so that optimal ranking of websites in relation to search engine results is feasible. SEO techniques are becoming more and more sophisticated. Given that internet marketing is extensively applied, prior quality factors prove insufficient by themselves to boost ranking, and improving the quality of website content is also introduced. The current paper discusses such an SEO mechanism. Having identified that semantic analysis has not been widely applied in the field of SEO, a semantic approach is adopted, which employs Latent Dirichlet Allocation techniques coupled with Gibbs Sampling in order to analyze the results of search engines based on given keywords. Within the context of the paper, the developed SEO mechanism, LDArank, is presented, which evaluates query results through state-of-the-art SEO metrics, analyzes results content and extracts new, optimized content.

@inproceedings{2012MavridisWEBIST,
author={Themistoklis Mavridis and Andreas L. Symeonidis},
title={Identifying webpage Semantics for Search Engine Optimization},
booktitle={Paper presented at the 8th International Conference on Web Information Systems and Technologies (WEBIST)},
pages={18-21},
address={Porto, Portugal},
year={2012},
month={06},
date={2012-06-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/IDENTIFYING-WEBPAGE-SEMANTICS-FOR-SEARCH-ENGINE-OPTIMIZATION.pdf},
keywords={search engine optimization;LDArank;semantic analysis;latent dirichlet allocation;LDA Gibbs sampling;LDArank java application;webpage semantics;semantic analysis SEO},
abstract={The added value of search engines is, apparently, undoubted. Their rapid evolution over the last decade has transformed them into the most important source of information and knowledge. From the end user's side, search engine success implies correct results delivered in a fast and accurate manner, while the ranking of search results for a given query has to be directly correlated to the user's anticipated response. From the content providers' side (i.e. websites), better ranking in a search engine result set implies numerous advantages, like visibility, visitability, and profit. This is the main reason for the flourishing of Search Engine Optimization (SEO) techniques, which aim towards restructuring or enriching website content, so that optimal ranking of websites in relation to search engine results is feasible. SEO techniques are becoming more and more sophisticated. Given that internet marketing is extensively applied, prior quality factors prove insufficient by themselves to boost ranking, and improving the quality of website content is also introduced. The current paper discusses such an SEO mechanism. Having identified that semantic analysis has not been widely applied in the field of SEO, a semantic approach is adopted, which employs Latent Dirichlet Allocation techniques coupled with Gibbs Sampling in order to analyze the results of search engines based on given keywords. Within the context of the paper, the developed SEO mechanism, LDArank, is presented, which evaluates query results through state-of-the-art SEO metrics, analyzes results content and extracts new, optimized content.}
}
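
The topic-analysis step can be approximated with off-the-shelf tools as below. Note that scikit-learn's LDA uses online variational inference rather than the Gibbs sampling employed by LDArank, and the example snippets are invented, so this only illustrates the general idea of extracting topics from search-result text.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented stand-ins for search-result snippets.
docs = [
    "robot navigation with occupancy grid maps and path planning",
    "search engine optimization improves website ranking and visibility",
    "keyword analysis and page content drive search engine ranking",
    "grid maps support robot localization and motion planning",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
vocab = vec.get_feature_names_out()

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
for t, comp in enumerate(lda.components_):
    top = comp.argsort()[-4:][::-1]                # four strongest words per topic
    print(f"topic {t}:", ", ".join(vocab[i] for i in top))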

Athanasios Papadopoulos, Konstantinos Toumpas, Antonios Chrysopoulos and Pericles A. Mitkas
"Exploring Optimization Strategies in Board Game Abalone for Alpha-Beta Seach"
IEEE Conference on Computational Intelligence and Games (CIG), pp. 63-70, Granada, Spain, 2012 Sep

This paper discusses the design and implementation of a highly efficient MiniMax algorithm for the game Abalone. For perfect information games with a relatively low branching factor for their decision tree (such as Chess, Checkers etc.) and a highly accurate evaluation function, Alpha-Beta search proved to be far more efficient than Monte Carlo Tree Search. In recent years many new techniques have been developed to improve the efficiency of the Alpha-Beta tree, applied to a variety of scientific fields. This paper explores several techniques for increasing the efficiency of Alpha-Beta Search on the board game of Abalone while introducing some new innovative techniques that proved to be very effective. The main idea behind them is the incorporation of probabilistic features to the otherwise deterministic Alpha-Beta search.

@inproceedings{2012PapadopoulosCIG,
author={Athanasios Papadopoulos and Konstantinos Toumpas and Antonios Chrysopoulos and Pericles A. Mitkas},
title={Exploring Optimization Strategies in Board Game Abalone for Alpha-Beta Search},
booktitle={IEEE Conference on Computational Intelligence and Games (CIG)},
pages={63-70},
address={Granada, Spain},
year={2012},
month={09},
date={2012-09-11},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Exploring-Optimization-Strategies-in-Board-Game-Abalone-for-Alpha-Beta-Search.pdf},
abstract={This paper discusses the design and implementation of a highly efficient MiniMax algorithm for the game Abalone. For perfect information games with a relatively low branching factor for their decision tree (such as Chess, Checkers etc.) and a highly accurate evaluation function, Alpha-Beta search proved to be far more efficient than Monte Carlo Tree Search. In recent years many new techniques have been developed to improve the efficiency of the Alpha-Beta tree, applied to a variety of scientific fields. This paper explores several techniques for increasing the efficiency of Alpha-Beta Search on the board game of Abalone while introducing some new innovative techniques that proved to be very effective. The main idea behind them is the incorporation of probabilistic features to the otherwise deterministic Alpha-Beta search.}
}
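
For context, the baseline the paper builds on is fixed-depth alpha-beta pruning, sketched generically below. The Abalone move generator, evaluation function and the paper's probabilistic enhancements are all omitted; moves, apply and evaluate are placeholder assumptions, exercised here on a trivial number-picking game.

import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply, evaluate):
    """Generic fixed-depth alpha-beta over a pluggable game interface."""
    ms = moves(state, maximizing)
    if depth == 0 or not ms:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for m in ms:
            value = max(value, alphabeta(apply(state, m), depth - 1,
                                         alpha, beta, False, moves, apply, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cut-off: opponent avoids this line
                break
        return value
    value = math.inf
    for m in ms:
        value = min(value, alphabeta(apply(state, m), depth - 1,
                                     alpha, beta, True, moves, apply, evaluate))
        beta = min(beta, value)
        if alpha >= beta:              # alpha cut-off
            break
    return value

# Smoke test on a toy game: the state is a running total, capped at 10.
moves = lambda s, _player: [1, 2, 3] if s < 10 else []
apply_ = lambda s, m: s + m
evaluate = lambda s: s
print(alphabeta(0, 4, -math.inf, math.inf, True, moves, apply_, evaluate))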

Andreas Symeonidis, Panagiotis Toulis and Pericles A. Mitkas
"Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development"
Agents and Data Mining Interaction workshop (ADMI 2012), at the 2012 Conference on Autonomous Agents and Multiagent Systems (AAMAS), Valencia, Spain, 2012 Jun

The emergence of Multi-Agent systems as a software paradigm that most suitably fits all types of problems and architectures is already experiencing significant revisions. A more consistent approach on agent programming, and the adoption of Software Engineering standards, has indicated the pros and cons of Agent Technology and has limited the scope of the, once considered, programming ‘panacea’. Nowadays, the most active area of agent development is by far that of intelligent agent systems, where learning, adaptation, and knowledge extraction are at the core of the related research effort. Discussing knowledge extraction, data mining, once infamous for its application to bank processing and intelligence agencies, has become an unmatched enabling technology for intelligent systems. Naturally enough, a fruitful synergy of the aforementioned technologies has already been proposed that would combine the benefits of both worlds and would offer computer scientists new tools in their effort to build more sophisticated software systems. Current work discusses Agent Academy, an agent toolkit that supports: a) rapid agent application development and, b) dynamic incorporation of knowledge extracted by the use of data mining techniques into agent behaviors in as untroubled a manner as possible.

@inproceedings{2012SymeonidisADMI,
author={Andreas Symeonidis and Panagiotis Toulis and Pericles A. Mitkas},
title={Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development},
booktitle={Agents and Data Mining Interaction workshop (ADMI 2012), at the 2012 Conference on Autonomous Agents and Multiagent Systems (AAMAS)},
address={Valencia, Spain},
year={2012},
month={06},
date={2012-06-05},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Supporting-Agent-Oriented-Software-Engineering-for-Data-Mining-Enhanced-Agent-Development.pdf},
abstract={The emergence of Multi-Agent systems as a software paradigm that most suitably fits all types of problems and architectures is already experiencing significant revisions. A more consistent approach on agent programming, and the adoption of Software Engineering standards, has indicated the pros and cons of Agent Technology and has limited the scope of the, once considered, programming ‘panacea’. Nowadays, the most active area of agent development is by far that of intelligent agent systems, where learning, adaptation, and knowledge extraction are at the core of the related research effort. Discussing knowledge extraction, data mining, once infamous for its application to bank processing and intelligence agencies, has become an unmatched enabling technology for intelligent systems. Naturally enough, a fruitful synergy of the aforementioned technologies has already been proposed that would combine the benefits of both worlds and would offer computer scientists new tools in their effort to build more sophisticated software systems. Current work discusses Agent Academy, an agent toolkit that supports: a) rapid agent application development and, b) dynamic incorporation of knowledge extracted by the use of data mining techniques into agent behaviors in as untroubled a manner as possible.}
}

Konstantinos N. Vavliakis, Georgios T. Karagiannis and Periklis A. Mitkas
"Semantic Web in Cultural Heritage After 2020"
What will the Semantic Web look like 10 Years From Now? Workshop held in conjunction with the 11th International Semantic Web Conference 2012 (ISWC 2012), Boston, USA, 2012 Nov

In this paper we present the current status of semantic data management in the cultural heritage field and we focus on the challenges imposed by the multidimensionality of the information in this domain. We identify current shortcomings, and thus needs, that should be addressed in the coming years to enable the integration and exploitation of the rich information deriving from the multidisciplinary analysis of cultural heritage objects, monuments and sites. Our goal is to disseminate the needs of the cultural heritage community and drive Semantic Web research towards these directions.

@inproceedings{2012VavliakisISWC,
author={Konstantinos N. Vavliakis and Georgios T. Karagiannis and Periklis A. Mitkas},
title={Semantic Web in Cultural Heritage After 2020},
booktitle={What will the Semantic Web look like 10 Years From Now? Workshop held in conjunction with the 11th International Semantic Web Conference 2012 (ISWC 2012)},
address={Boston, USA},
year={2012},
month={11},
date={2012-11-07},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Semantic-Web-in-Cultural-Heritage-After-2020.pdf},
keywords={Cultural Heritage},
abstract={In this paper we present the current status of semantic data management in the cultural heritage field and we focus on the challenges imposed by the multidimensionality of the information in this domain. We identify current shortcomings, and thus needs, that should be addressed in the coming years to enable the integration and exploitation of the rich information deriving from the multidisciplinary analysis of cultural heritage objects, monuments and sites. Our goal is to disseminate the needs of the cultural heritage community and drive Semantic Web research towards these directions.}
}

Konstantinos N. Vavliakis, Fani A. Tzima and Pericles A. Mitkas
"Event Detection via LDA for the MediaEval2012 SED Task"
Working Notes Proceedings of the MediaEval 2012, Santa Croce in Fossabanda, Pisa, Italy, 2012 Oct

In this paper we present our methodology for the Social Event Detection Task of the MediaEval 2012 Benchmarking Initiative. We adopt topic discovery using Latent Dirichlet Allocation (LDA), city classification using TF-IDF analysis, and other statistical and natural language processing methods. After describing the approach we employed, we present the corresponding results, and discuss the problems we faced, as well as the conclusions we drew.

@inproceedings{2012VavliakisLDA,
author={Konstantinos N. Vavliakis and Fani A. Tzima and Pericles A. Mitkas},
title={Event Detection via LDA for the MediaEval2012 SED Task},
booktitle={Working Notes Proceedings of the MediaEval 2012},
address={Santa Croce in Fossabanda, Pisa, Italy},
year={2012},
month={10},
date={2012-10-04},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Event-Detection-via-LDA-for-the-MediaEval2012-SED-Task.pdf},
keywords={Event Detection;Latent Dirichlet Allocation (LDA);Topic Identification;MediaEval},
abstract={In this paper we present our methodology for the Social Event Detection Task of the MediaEval 2012 Benchmarking Initiative. We adopt topic discovery using Latent Dirichlet Allocation (LDA), city classification using TF-IDF analysis, and other statistical and natural language processing methods. After describing the approach we employed, we present the corresponding results, and discuss the problems we faced, as well as the conclusions we drew.}
}
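
A minimal sketch of the two ingredients this abstract names, LDA topic discovery and TF-IDF text features, using scikit-learn. The toy captions, parameter values and variable names are illustrative assumptions, not the authors' MediaEval pipeline.

# Hypothetical mini-pipeline: LDA topic discovery over photo captions,
# plus a TF-IDF matrix of the kind used for city classification.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "rock concert stage lights pisa",
    "football match stadium goal",
    "concert crowd live music",
    "stadium football fans celebration",
]

counts = CountVectorizer()
X = counts.fit_transform(docs)                 # LDA expects raw term counts

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)              # per-document topic mixtures

terms = counts.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-3:][::-1]]
    print("topic", k, ":", top)

tfidf = TfidfVectorizer().fit_transform(docs)  # features for city/venue classification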

Dimitrios M. Vitsios, Fotis E. Psomopoulos, Pericles A. Mitkas and Christos A. Ouzounis
"Mutli-gemone Core Pathway Identification Through Gene Clustering"
1st Workshop on Algorithms for Data and Text Mining in Bioinformatics (WADTMB 2012) in conjunction with the 8th AIAI, Halkidiki, Greece, 2012 Sep

In the wake of gene-oriented data analysis in large-scale bioinformatics studies, focus in research is currently shifting towards the analysis of the functional association of genes, namely the metabolic pathways in which genes participate. The goal of this paper is to attempt to identify the core genes in a specific pathway, based on a user-defined selection of genomes. To this end, a novel methodology has been developed that uses data from the KEGG database, and through the application of the MCL clustering algorithm, identifies clusters that correspond to different “layers” of genes, either on a phylogenetic or a functional level. The algorithm’s complexity, evaluated experimentally, is presented and the results on a characteristic case study are discussed.

@inproceedings{2012VitsiosWADTMB,
author={Dimitrios M. Vitsios and Fotis E. Psomopoulos and Pericles A. Mitkas and Christos A. Ouzounis},
title={Multi-genome Core Pathway Identification Through Gene Clustering},
booktitle={1st Workshop on Algorithms for Data and Text Mining in Bioinformatics (WADTMB 2012) in conjunction with the 8th AIAI},
address={Halkidiki, Greece},
year={2012},
month={09},
date={2012-09-27},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Multi-genome-Core-Pathway-Identification-through-Gene-Clustering.pdf},
abstract={In the wake of gene-oriented data analysis in large-scale bioinformatics studies, focus in research is currently shifting towards the analysis of the functional association of genes, namely the metabolic pathways in which genes participate. The goal of this paper is to attempt to identify the core genes in a specific pathway, based on a user-defined selection of genomes. To this end, a novel methodology has been developed that uses data from the KEGG database, and through the application of the MCL clustering algorithm, identifies clusters that correspond to different “layers” of genes, either on a phylogenetic or a functional level. The algorithm’s complexity, evaluated experimentally, is presented and the results on a characteristic case study are discussed.}
}
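
Since the methodology centers on the MCL clustering algorithm, a toy sketch of the MCL iteration (expansion followed by inflation on a column-stochastic matrix) may be helpful; the gene graph and parameter values below are illustrative, not the paper's KEGG data.

# Toy Markov Cluster (MCL) iteration over a gene-gene similarity graph.
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=50, tol=1e-6):
    M = adj + np.eye(len(adj))                     # add self-loops
    M = M / M.sum(axis=0)                          # column-stochastic matrix
    for _ in range(iters):
        prev = M
        M = np.linalg.matrix_power(M, expansion)   # expansion: flow spreads
        M = M ** inflation                         # inflation: strong flow wins
        M = M / M.sum(axis=0)
        if np.abs(M - prev).max() < tol:
            break
    # attractor rows define the clusters; deduplicate identical ones
    return sorted({tuple(map(int, np.nonzero(row > 1e-4)[0]))
                   for row in M if row.max() > 1e-4})

genes = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 0, 0],
                  [0, 0, 0, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)
print(mcl(genes))   # -> [(0, 1, 2), (3, 4)]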

2012

Inbooks

Kiriakos C. Chatzidimitriou, Ioannis Partalas, Pericles A. Mitkas and Ioannis Vlahavas
"Transferring Evolved Reservoir Features in Reinforcement Learning Tasks"
Chapter 1, 7188, pp. 213-224, Springer Berlin Heidelberg, 2012 Jan

Lecture Notes in Artificial Intelligence (LNAI)

@inbook{2012ChatzidimitriouLNAI,
author={Kiriakos C. Chatzidimitriou and Ioannis Partalas and Pericles A. Mitkas and Ioannis Vlahavas},
title={Transferring Evolved Reservoir Features in Reinforcement Learning Tasks},
chapter={1},
volume={7188},
pages={213-224},
publisher={Springer Berlin Heidelberg},
year={2012},
month={01},
date={2012-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Transferring-Evolved-Reservoir-Features-in-Reinforcement-Learning-Tasks.pdf},
doi={https://doi.org/10.1007/978-3-642-29946-9_22},
abstract={Lecture Notes in Artificial Intelligence (LNAI)}
}

Andreas L. Symeonidis, Panagiotis Toulis and Pericles A. Mitkas
"Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development"
Chapter 1, 7607, pp. 7-21, Springer Berlin Heidelberg, 2012 Jun

Lecture Notes in Computer Science

@inbook{2012SymeonidisLNCS,
author={Andreas L. Symeonidis and Panagiotis Toulis and Pericles A. Mitkas},
title={Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development},
chapter={1},
volume={7607},
pages={7-21},
publisher={Springer Berlin Heidelberg},
year={2012},
month={06},
date={2012-06-04},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Supporting-Agent-Oriented-Software-Engineering-for-Data-Mining-Enhanced-Agent-Development-1.pdf},
abstract={Lecture Notes in Computer Science}
}

2011

Journal Articles

Fani A. Tzima, Pericles A. Mitkas, Dimitris Voukantsis and Kostas Karatzas
"Sparse episode identification in environmental datasets: the case of air quality assessment"
Expert Systems with Applications, 38, 2011 May

Sparse episode identification in environmental datasets is not only a multi-faceted and computationally challenging problem for machine learning algorithms, but also a difficult task for human decision-makers: the strict regulatory framework, in combination with the public demand for better information services, poses the need for robust, efficient and, more importantly, understandable forecasting models. Additionally, these models need to provide decision-makers with “summarized” and valuable knowledge that has to be subjected to a thorough evaluation procedure, easily translated to services and/or actions in actual decision making situations, and integratable with existing Environmental Management Systems (EMSs). On this basis, our current study investigates the potential of various machine learning algorithms as tools for air quality (AQ) episode forecasting and assesses them – given the corresponding domain-specific requirements – using an evaluation procedure, tailored to the task at hand. Among the algorithms employed in the experimental phase, our main focus is on ZCS-DM, an evolutionary rule-induction algorithm specifically designed to tackle this class of problems – that is, classification problems with skewed class distributions, where cost-sensitive model building is required. Overall, we consider this investigation successful, in terms of its aforementioned goals and constraints: obtained experimental results reveal the potential of rule-based algorithms for urban AQ forecasting, and point towards ZCS-DM as the most suitable algorithm for the target domain, providing the best trade-off between model performance and understandability.

@article{2011TzimaESWA,
author={Fani A. Tzima and Pericles A. Mitkas and Dimitris Voukantsis and Kostas Karatzas},
title={Sparse episode identification in environmental datasets: the case of air quality assessment},
journal={Expert Systems with Applications},
volume={38},
year={2011},
month={05},
date={2011-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/1-s2.0-S095741741001105X-main.pdf},
keywords={Air quality (AQ);Domain-driven data mining;Model evaluation;Sparse episode identification},
abstract={Sparse episode identification in environmental datasets is not only a multi-faceted and computationally challenging problem for machine learning algorithms, but also a difficult task for human decision-makers: the strict regulatory framework, in combination with the public demand for better information services, poses the need for robust, efficient and, more importantly, understandable forecasting models. Additionally, these models need to provide decision-makers with “summarized” and valuable knowledge that has to be subjected to a thorough evaluation procedure, easily translated to services and/or actions in actual decision making situations, and integratable with existing Environmental Management Systems (EMSs). On this basis, our current study investigates the potential of various machine learning algorithms as tools for air quality (AQ) episode forecasting and assesses them – given the corresponding domain-specific requirements – using an evaluation procedure, tailored to the task at hand. Among the algorithms employed in the experimental phase, our main focus is on ZCS-DM, an evolutionary rule-induction algorithm specifically designed to tackle this class of problems – that is, classification problems with skewed class distributions, where cost-sensitive model building is required. Overall, we consider this investigation successful, in terms of its aforementioned goals and constraints: obtained experimental results reveal the potential of rule-based algorithms for urban AQ forecasting, and point towards ZCS-DM as the most suitable algorithm for the target domain, providing the best trade-off between model performance and understandability.}
}
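
The problem class described here, skewed class distributions requiring cost-sensitive model building, can be made concrete with a short scikit-learn sketch. The decision tree below merely stands in for an interpretable rule-based model; it is not the ZCS-DM algorithm, and the 5% minority rate is an illustrative assumption.

# Skewed-class, cost-sensitive classification in miniature (not ZCS-DM).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# 5% "episode" class, mimicking sparse air-quality exceedances
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight penalizes missed episodes more heavily than false alarms
clf = DecisionTreeClassifier(max_depth=4, class_weight={0: 1, 1: 10}, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=2))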

Konstantinos N. Vavliakis, Andreas L. Symeonidis, Georgios T. Karagiannis and Pericles A. Mitkas
"An integrated framework for enhancing the semantic transformation, editing and querying of relational databases"
Expert Systems with Applications, 38, (4), pp. 3844-3856, 2011 Apr

The transition from the traditional to the Semantic Web has proven much more difficult than initially expected. The volume, complexity and versatility of data of various domains, the computational limitations imposed on semantic querying and inferencing have drastically reduced the thrust semantic technologies had when initially introduced. In order for the semantic web to actually

@article{2011VavliakisESWA,
author={Konstantinos N. Vavliakis and Andreas L. Symeonidis and Georgios T. Karagiannis and Pericles A. Mitkas},
title={An integrated framework for enhancing the semantic transformation, editing and querying of relational databases},
journal={Expert Systems with Applications},
volume={38},
number={4},
pages={3844-3856},
year={2011},
month={04},
date={2011-04-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-integrated-framework-for-enhancing-the-semantic-transformation-editing-and-querying-of-relational-databases.pdf},
keywords={Ontology editor;OWL-DL restriction creation;Relational database to ontology transformation;SPARQL query builder},
abstract={The transition from the traditional to the Semantic Web has proven much more difficult than initially expected. The volume, complexity and versatility of data of various domains, the computational limitations imposed on semantic querying and inferencing have drastically reduced the thrust semantic technologies had when initially introduced. In order for the semantic web to actually}
}

2011

Conference Papers

Zinovia Alepidou, Konstantinos N. Vavliakis and Pericles A. Mitkas
"A Semantic Tag Recommendation Framework for Collaborative Tagging Systems"
Proceedings of the Third IEEE International Conference on Social Computing, pp. 633-636, Cambridge, MA, USA, 2011 Oct

In this work we focus on folksonomies. Our goal is to develop techniques that coordinate information processing, by taking advantage of user preferences, in order to automatically produce semantic tag recommendations. To this end, we propose a generalized tag recommendation framework that conveys the semantics of resources according to different user profiles. We present the integration of various models that take into account content, historic values, user preferences and tagging behavior to produce accurate personalized tag recommendations. Based on this information we build several Bayesian models, we evaluate their performance, and we discuss differences in accuracy with respect to semantic matching criteria, and other approaches.

@inproceedings{2011AlepidouSocialCom,
author={Zinovia Alepidou and Konstantinos N. Vavliakis and Pericles A. Mitkas},
title={A Semantic Tag Recommendation Framework for Collaborative Tagging Systems},
booktitle={Proceedings of the Third IEEE International Conference on Social Computing},
pages={633-636},
address={Cambridge, MA, USA},
year={2011},
month={10},
date={2011-10-09},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A_Semantic_Tag_Recommendation_Framework_for_Collab.pdf},
keywords={folksonomy;personalization;recommendation;semantic evaluation;tagging},
abstract={In this work we focus on folksonomies. Our goal is to develop techniques that coordinate information processing, by taking advantage of user preferences, in order to automatically produce semantic tag recommendations. To this end, we propose a generalized tag recommendation framework that conveys the semantics of resources according to different user profiles. We present the integration of various models that take into account content, historic values, user preferences and tagging behavior to produce accurate personalized tag recommendations. Based on this information we build several Bayesian models, we evaluate their performance, and we discuss differences in accuracy with respect to semantic matching criteria, and other approaches.}
}
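
A hedged sketch of one Bayesian model of the kind the abstract describes: a candidate tag is scored by a tag prior learned from tagging history plus Laplace-smoothed word likelihoods. The toy history and the smoothing constant are illustrative assumptions, not the paper's models.

# Naive-Bayes-style tag scoring over past (resource words, tag) pairs.
from collections import Counter, defaultdict
import math

history = [
    (["sunset", "beach", "sea"], "nature"),
    (["code", "python", "script"], "programming"),
    (["beach", "holiday"], "nature"),
]

tag_counts = Counter(tag for _, tag in history)
word_counts = defaultdict(Counter)
for words, tag in history:
    word_counts[tag].update(words)

def score(words, tag, alpha=1.0):
    """log P(tag) + sum of log P(word|tag), with Laplace smoothing."""
    prior = math.log(tag_counts[tag] / sum(tag_counts.values()))
    total = sum(word_counts[tag].values())
    vocab = {w for c in word_counts.values() for w in c}
    ll = sum(math.log((word_counts[tag][w] + alpha) / (total + alpha * len(vocab)))
             for w in words)
    return prior + ll

new_resource = ["beach", "sea", "photo"]
print(max(tag_counts, key=lambda t: score(new_resource, t)))  # -> "nature"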

Kyriakos C. Chatzidimitriou, Ioannis Partalas, Pericles A. Mitkas and Ioannis Vlahavas
"Transferring Evolved Reservoir Features in Reinforcement Learning Tasks"
European Workshop on Reinforcement Learning, pp. 213-224, Springer Berlin Heidelberg, Athens, Greece, 2011 Sep

The major goal of transfer learning is to transfer knowledge acquired on a source task in order to facilitate learning on another, different, but usually related, target task. In this paper, we are using neuroevolution to evolve echo state networks on the source task and transfer the best performing reservoirs to be used as initial population on the target task. The idea is that any non-linear, temporal features, represented by the neurons of the reservoir and evolved on the source task, along with reservoir properties, will be a good starting point for a stochastic search on the target task. In a step towards full autonomy and by taking advantage of the random and fully connected nature of echo state networks, we examine a transfer method that renders any inter-task mappings of states and actions unnecessary. We tested our approach and that of inter-task mappings in two RL testbeds: the mountain car and the server job scheduling domains. Under various setups the results we obtained in both cases are promising.

@inproceedings{2011Chatzidimitriou,
author={Kyriakos C. Chatzidimitriou and Ioannis Partalas and Pericles A. Mitkas and Ioannis Vlahavas},
title={Transferring Evolved Reservoir Features in Reinforcement Learning Tasks},
booktitle={European Workshop on Reinforcement Learning},
pages={213-224},
publisher={Springer Berlin Heidelberg},
address={Athens, Greece},
year={2011},
month={09},
date={2011-09-09},
url={http://link.springer.com/content/pdf/10.1007%2F978-3-642-29946-9_22.pdf},
keywords={Transfer knowledge},
abstract={The major goal of transfer learning is to transfer knowledge acquired on a source task in order to facilitate learning on another, different, but usually related, target task. In this paper, we are using neuroevolution to evolve echo state networks on the source task and transfer the best performing reservoirs to be used as initial population on the target task. The idea is that any non-linear, temporal features, represented by the neurons of the reservoir and evolved on the source task, along with reservoir properties, will be a good starting point for a stochastic search on the target task. In a step towards full autonomy and by taking advantage of the random and fully connected nature of echo state networks, we examine a transfer method that renders any inter-task mappings of states and actions unnecessary. We tested our approach and that of inter-task mappings in two RL testbeds: the mountain car and the server job scheduling domains. Under various setups the results we obtained in both cases are promising.}
}
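
A minimal echo state network sketch of the transfer idea described above: the reservoir (input and recurrent weight matrices) obtained on a source task is carried over unchanged, and only the linear readout is retrained on the target task. Network sizes, scaling and the toy task are assumptions for illustration.

# Minimal ESN with a reusable (transferable) reservoir.
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))   # echo state property
    return W_in, W

def run_states(W_in, W, inputs):
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    # ridge-regression readout: W_out = Y S^T (S S^T + lambda I)^-1
    S = states.T
    return targets.T @ S.T @ np.linalg.inv(S @ S.T + ridge * np.eye(len(S)))

# "source task": build once; transfer (W_in, W) unchanged to a new target task
W_in, W = make_reservoir(n_in=1, n_res=50)
u = rng.uniform(-1, 1, (200, 1))
S = run_states(W_in, W, u)
W_out = train_readout(S, np.sin(3 * u))   # only this readout is retrained per task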

Andreas L. Symeonidis, Vasileios P. Gountis and Georgios T. Andreou
"A Software Agent Framework for exploiting Demand-side Consumer Social Networks in Power Systems"
Paper presented at the IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, pp. 30--33, Lyon, France, 2011 Aug

This work introduces Energy City, a multi-agent framework designed and developed in order to simulate the power system and explore the potential of Consumer Social Networks (CSNs) as a means to promote demand-side response and raise social awareness towards energy consumption. The power system with all its involved actors (Consumers, Producers, Electricity Suppliers, Transmission and Distribution Operators) and their requirements are modeled. The semantic infrastructure for the formation and analysis of electricity CSNs is discussed, and the basic consumer attributes and CSN functionality are identified. Authors argue that the formation of such CSNs is expected to increase the electricity consumer market power by enabling them to act in a collective way.

@inproceedings{2011SymeonidisICWEBIIAT,
author={Andreas L. Symeonidis and Vasileios P. Gountis and Georgios T. Andreou},
title={A Software Agent Framework for exploiting Demand-side Consumer Social Networks in Power Systems},
booktitle={Paper presented at the IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology},
pages={30--33},
address={Lyon, France},
year={2011},
month={08},
date={2011-08-22},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A-Software-Agent-Framework-for-exploiting-Demand-side-Consumer-Social-Networks-in-Power-Systems.pdf},
keywords={agent communication},
abstract={This work introduces Energy City, a multi-agent framework designed and developed in order to simulate the power system and explore the potential of Consumer Social Networks (CSNs) as a means to promote demand-side response and raise social awareness towards energy consumption. The power system with all its involved actors (Consumers, Producers, Electricity Suppliers, Transmission and Distribution Operators) and their requirements are modeled. The semantic infrastructure for the formation and analysis of electricity CSNs is discussed, and the basic consumer attributes and CSN functionality are identified. Authors argue that the formation of such CSNs is expected to increase the electricity consumer market power by enabling them to act in a collective way.}
}

Michael Tsapanos, Kiriakos C. Chatzidimitriou and Pericles A. Mitkas
"Combining Zeroth-Level Classifier System and Eligibility Traces for Real Time Strategy Games"
IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT'11), pp. 244-247, Lyon, France, 2011 Aug

@inproceedings{2011TsapanosIEEE,
author={Michael Tsapanos and Kiriakos C. Chatzidimitriou and Pericles A. Mitkas},
title={Combining Zeroth-Level Classifier System and Eligibility Traces for Real Time Strategy Games},
booktitle={IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT'11)},
pages={244-247},
address={Lyon, France},
year={2011},
month={08},
date={2011-08-22},
url={http://issel.ee.auth.gr/wp-content/uploads/4513b030.pdf},
keywords={agent communication}
}

Michalis Tsapanos, Kyriakos C. Chatzidimitriou and Pericles A. Mitkas
"A Zeroth-Level Classifier System for Real Time Strategy Games"
Web Intelligence and Intelligent Agent Technology (WI-IAT), 2011 IEEE/WIC/ACM International Conference, pp. 244-247, Springer Berlin Heidelberg, Lyon, France, 2011 Aug

Real Time Strategy games (RTS) provide an interesting test bed for agents that use Reinforcement Learning (RL) algorithms. From an agent

@conference{2011TsapanosWI-IAT,
author={Michalis Tsapanos and Kyriakos C. Chatzidimitriou and Pericles A. Mitkas},
title={A Zeroth-Level Classifier System for Real Time Strategy Games},
booktitle={Web Intelligence and Intelligent Agent Technology (WI-IAT), 2011 IEEE/WIC/ACM International Conference},
pages={244-247},
publisher={Springer Berlin Heidelberg},
address={Lyon, France},
year={2011},
month={08},
date={2011-08-22},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A_Zeroth-Level_Classifier_System_for_Real_Time_Str.pdf},
keywords={Learning Classifier Systems;Real Time Strategy Games},
abstract={Real Time Strategy games (RTS) provide an interesting test bed for agents that use Reinforcement Learning (RL) algorithms. From an agent}
}

Iraklis Tsekourakis and Andreas L. Symeonidis
"Dealing with Trust and Reputation in unreliable Multi-agent Trading Environments"
Paper presented at the 2011 Workshop on Trading Agent Design and Analysis (IJCAI 2011), pp. 21-28, Barcelona, Spain, 2011 Aug

In shared competitive environments, where information comes from various sources, agents may interact with each other in a competitive manner in order to achieve their individual goals. Numerous research efforts exist, attempting to define protocols, rules and interfaces for agents to abide by and ensure trustworthy exchange of information. Auction environments and e-commerce platforms are such paradigms, where trust and reputation are vital factors determining agent strategy. And though the process is always secured with a number of safeguards, there is always the issue of unreliability. In this context, the Agent Reputation and Trust (ART) testbed has provided researchers with the ability to test different trust and reputation strategies, in various types of trust/reputation environments. Current work attempts to identify the most viable trust and reputation models stated in the literature, while it further elaborates on the issue by proposing a robust trust and reputation mechanism. This mechanism is incorporated in our agent, HerculAgent, and tested in a variety of environments against the top performing agents of the ART competition. The paper provides a thorough analysis of ART, presents HerculAgent's architecture and discusses its performance.

@inproceedings{2011TsekourakisIJCAI,
author={Iraklis Tsekourakis and Andreas L. Symeonidis},
title={Dealing with Trust and Reputation in unreliable Multi-agent Trading Environments},
booktitle={Paper presented at the 2011 Workshop on Trading Agent Design and Analysis (IJCAI 2011)},
pages={21-28},
address={Barcelona, Spain},
year={2011},
month={08},
date={2011-08-17},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Dealing-with-Trust-and-Reputation-in-Unreliable-Multi-agent-Trading-Environments.pdf},
abstract={In shared competitive environments, where information comes from various sources, agents may interact with each other in a competitive manner in order to achieve their individual goals. Numerous research efforts exist, attempting to define protocols, rules and interfaces for agents to abide by and ensure trustworthy exchange of information. Auction environments and e-commerce platforms are such paradigms, where trust and reputation are vital factors determining agent strategy. And though the process is always secured with a number of safeguards, there is always the issue of unreliability. In this context, the Agent Reputation and Trust (ART) testbed has provided researchers with the ability to test different trust and reputation strategies, in various types of trust/reputation environments. Current work attempts to identify the most viable trust and reputation models stated in the literature, while it further elaborates on the issue by proposing a robust trust and reputation mechanism. This mechanism is incorporated in our agent, HerculAgent, and tested in a variety of environments against the top performing agents of the ART competition. The paper provides a thorough analysis of ART, presents HerculAgent's architecture and discusses its performance.}
}
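
A generic sketch of the kind of trust bookkeeping such agents maintain: direct experience updated as an exponential moving average and blended with witness reports weighted by witness credibility. The update rules and weights are illustrative; they are not HerculAgent's actual mechanism.

# Toy trust/reputation aggregation for a trading partner.
def update_trust(old_trust, outcome, rate=0.2):
    """Exponential moving average over direct interaction outcomes in [0, 1]."""
    return (1 - rate) * old_trust + rate * outcome

def combine(direct, witness_reports, witness_trust, w_direct=0.7):
    """Blend direct trust with witness reports weighted by witness credibility."""
    if witness_reports:
        weighted = sum(t * r for t, r in zip(witness_trust, witness_reports))
        witness = weighted / sum(witness_trust)
    else:
        witness = direct
    return w_direct * direct + (1 - w_direct) * witness

trust = 0.5
for outcome in (1.0, 0.8, 0.0):        # observed honesty of a trading partner
    trust = update_trust(trust, outcome)
print(combine(trust, [0.9, 0.2], [0.8, 0.3]))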

Kyriakos C. Chatzidimitriou, Antonios C. Chrysopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Enhancing Agent Intelligence through Evolving Reservoir Networks for Prediction in Power Stock Markets"
Agent and Data Mining Interaction 2011 Workshop held in conjunction with the conference on Autonomous Agents and Multi-Agent Systems (AAMAS) 2011, pp. 228-247, 2011 Apr

In recent years, Time Series Prediction and clustering have been employed in hyperactive and evolving environments -where temporal data play an important role- as a result of the need for reliable methods to estimate and predict the pattern or behavior of events and systems. Power Stock Markets are such highly dynamic and competitive auction environments, additionally perplexed by constrained power laws in the various stages, from production to transmission and consumption. As with all real-time auctioning environments, the limited time available for decision making provides an ideal testbed for autonomous agents to develop bidding strategies that exploit time series prediction. Within the context of this paper, we present Cassandra, a dynamic platform that fosters the development of Data-Mining enhanced Multi-agent systems. Special attention was given on the efficiency and reusability of Cassandra, which provides Plug-n-Play capabilities, so that users may adapt their solution to the problem at hand. Cassandra’s functionality is demonstrated through a pilot case, where autonomously adaptive Recurrent Neural Networks in the form of Echo State Networks are encapsulated into Cassandra agents, in order to generate power load and settlement price prediction models in typical Day-ahead Power Markets. The system has been tested in a real-world scenario, that of the Greek Energy Stock Market.

@inproceedings{2012ChatzidimitriouAAMAS,
author={Kyriakos C. Chatzidimitriou and Antonios C. Chrysopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Enhancing Agent Intelligence through Evolving Reservoir Networks for Prediction in Power Stock Markets},
booktitle={Agent and Data Mining Interaction 2011 Workshop held in conjunction with the conference on Autonomous Agents and Multi-Agent Systems (AAMAS) 2011},
pages={228-247},
year={2011},
month={04},
date={2011-04-19},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Enhancing-Agent-Intelligence-through-Evolving-Reservoir-Networks-for-Predictions-in-Power-Stock-Markets.pdf},
keywords={Neuroevolution;Power Stock Markets;Reservoir Computing},
abstract={In recent years, Time Series Prediction and clustering have been employed in hyperactive and evolving environments -where temporal data play an important role- as a result of the need for reliable methods to estimate and predict the pattern or behavior of events and systems. Power Stock Markets are such highly dynamic and competitive auction environments, additionally perplexed by constrained power laws in the various stages, from production to transmission and consumption. As with all real-time auctioning environments, the limited time available for decision making provides an ideal testbed for autonomous agents to develop bidding strategies that exploit time series prediction. Within the context of this paper, we present Cassandra, a dynamic platform that fosters the development of Data-Mining enhanced Multi-agent systems. Special attention was given on the efficiency and reusability of Cassandra, which provides Plug-n-Play capabilities, so that users may adapt their solution to the problem at hand. Cassandra’s functionality is demonstrated through a pilot case, where autonomously adaptive Recurrent Neural Networks in the form of Echo State Networks are encapsulated into Cassandra agents, in order to generate power load and settlement price prediction models in typical Day-ahead Power Markets. The system has been tested in a real-world scenario, that of the Greek Energy Stock Market.}
}

Kyriakos C. Chatzidimitriou, Lampros C. Stavrogiannis, Andreas Symeonidis and Pericles A. Mitkas
"An Adaptive Proportional Value-per-Click Agent for Bidding in Ad Auctions"
Trading Agent Design and Analysis (TADA) 2011 Workshop held in conjunction with the International Joint Conference on Artificial Intelligence (IJCAI) 2011, pp. 21-28, Barcelona, Spain, 2011 Jul

Sponsored search auctions constitute the most important source of revenue for search engine companies, offering new opportunities for advertisers. The Trading Agent Competition (TAC) Ad Auctions tournament is one of the first attempts to study the competition among advertisers for their placement in sponsored positions along with organic search engine results. In this paper, we describe agent Mertacor, a simulation-based game theoretic agent coupled with on-line learning techniques to optimize its behavior that successfully competed in the 2010 tournament. In addition, we evaluate different facets of our agent to draw conclusions about certain aspects of its strategy.

@inproceedings{Chatzidimitriou2011,
author={Kyriakos C. Chatzidimitriou and Lampros C. Stavrogiannis and Andreas Symeonidis and Pericles A. Mitkas},
title={An Adaptive Proportional Value-per-Click Agent for Bidding in Ad Auctions},
booktitle={Trading Agent Design and Analysis (TADA) 2011 Workshop held in conjunction with the International Joint Conference on Artificial Intelligence (IJCAI) 2011},
pages={21-28},
address={Barcelona, Spain},
year={2011},
month={07},
date={2011-07-17},
url={http://link.springer.com/content/pdf/10.1007%2F978-3-642-34889-1_2.pdf},
keywords={advertisement auction;game theory;sponsored search;trading agent},
abstract={Sponsored search auctions constitute the most important source of revenue for search engine companies, offering new opportunities for advertisers. The Trading Agent Competition (TAC) Ad Auctions tournament is one of the first attempts to study the competition among advertisers for their placement in sponsored positions along with organic search engine results. In this paper, we describe agent Mertacor, a simulation-based game theoretic agent coupled with on-line learning techniques to optimize its behavior that successfully competed in the 2010 tournament. In addition, we evaluate different facets of our agent to draw conclusions about certain aspects of its strategy.}
}
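
The agent's name indicates bids proportional to the estimated value per click; the sketch below shows one plausible such rule, with the proportionality factor adapted multiplicatively from observed return. It is an assumption for illustration, not agent Mertacor's actual strategy.

# Hypothetical adaptive proportional value-per-click bidder.
class ProportionalBidder:
    def __init__(self, k=0.5, step=0.05, target_roi=0.2):
        self.k, self.step, self.target_roi = k, step, target_roi

    def bid(self, value_per_click):
        return self.k * value_per_click

    def update(self, revenue, cost):
        # raise the factor when ROI beats the target, lower it otherwise
        roi = (revenue - cost) / cost if cost > 0 else float("inf")
        self.k *= (1 + self.step) if roi > self.target_roi else (1 - self.step)
        self.k = min(max(self.k, 0.0), 1.0)   # never bid above value per click

bidder = ProportionalBidder()
print(bidder.bid(value_per_click=2.0))    # 1.0
bidder.update(revenue=30.0, cost=28.0)    # ROI below target -> shade bids more
print(bidder.bid(value_per_click=2.0))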

Dimitrios Vitsios, Fotis E. Psomopoulos, Pericles A. Mitkas and Christos A. Ouzounis
"Detecting Species Evolution Through Metabolic Pathways"
6th Conference of the Hellenic Society for computational Biology & Bioinformatics (HSCBB11), pp. 16, Patra, Greece, 2011 Oct

The emergence and evolution of metabolic pathways represented a crucial step in molecular and cellular evolution. With the current advances in genomics and proteomics, it has become imperative to explore the impact of gene evolution as reflected in the metabolic signature of each genome (Zhang et al. (2006)). To this end a methodology is presented, which applies a clustering algorithm to genes from different species participating in the same pathway.

@inproceedings{PsomopoulosHSCBB11,
author={Dimitrios Vitsios and Fotis E. Psomopoulos and Pericles A. Mitkas and Christos A. Ouzounis},
title={Detecting Species Evolution Through Metabolic Pathways},
booktitle={6th Conference of the Hellenic Society for computational Biology & Bioinformatics (HSCBB11)},
pages={16},
address={Patra, Greece},
year={2011},
month={10},
date={2011-10-07},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Detecting-species-evolution-through-metabolic-pathways..pdf},
abstract={The emergence and evolution of metabolic pathways represented a crucial step in molecular and cellular evolution. With the current advances in genomics and proteomics, it has become imperative to explore the impact of gene evolution as reflected in the metabolic signature of each genome (Zhang et al. (2006)). To this end a methodology is presented, which applies a clustering algorithm to genes from different species participating in the same pathway.}
}

Konstantinos N. Vavliakis, Konstantina Gemenetzi and Pericles A. Mitkas
"A correlation analysis of web social media"
Proceedings of the International Conference on Web Intelligence, Mining and Semantics, pp. 54:1--54:5, ACM, Songdal, Norway, 2011 Jan

In this paper we analyze and compare three popular content creation and sharing websites, namely Panoramio, YouTube and Epinions. This analysis aims at advancing our understanding of Web Social Media and their impact, and may be useful in creating feedback mechanisms for increasing user participation and sharing. For each of the three websites, we select five fundamental factors appearing in all content centered Web Social Media and we use regression analysis to calculate their correlation. We present findings of statistically important correlations among these key factors and we rank the discovered correlations according to the degree of their influence. Furthermore, we perform analysis of variance in distinct subgroups of the collected data and we discuss differences found in the characteristics of these subgroups and how these differences may affect correlation results. Although we acknowledge that correlation does not imply causality, the discovered correlations may be a first step towards discovering causality laws behind content contribution, commenting and the formulation of friendship relations. These causality laws are useful for boosting user participation in social media.

@inproceedings{Vavliakis:2011:CAW:1988688.1988752,
author={Konstantinos N. Vavliakis and Konstantina Gemenetzi and Pericles A. Mitkas},
title={A correlation analysis of web social media},
booktitle={Proceedings of the International Conference on Web Intelligence, Mining and Semantics},
pages={54:1--54:5},
publisher={ACM},
address={Songdal, Norway},
year={2011},
month={01},
date={2011-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-Correlation-Analysis-of-Web-Social-Media.pdf},
keywords={ANOVA;correlation;regression analysis;social media},
abstract={In this paper we analyze and compare three popular content creation and sharing websites, namely Panoramio, YouTube and Epinions. This analysis aims at advancing our understanding of Web Social Media and their impact, and may be useful in creating feedback mechanisms for increasing user participation and sharing. For each of the three websites, we select five fundamental factors appearing in all content centered Web Social Media and we use regression analysis to calculate their correlation. We present findings of statistically important correlations among these key factors and we rank the discovered correlations according to the degree of their influence. Furthermore, we perform analysis of variance in distinct subgroups of the collected data and we discuss differences found in the characteristics of these subgroups and how these differences may affect correlation results. Although we acknowledge that correlation does not imply causality, the discovered correlations may be a first step towards discovering causality laws behind content contribution, commenting and the formulation of friendship relations. These causality laws are useful for boosting user participation in social media.}
}
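
The style of analysis described, pairwise correlation and simple regression between engagement factors, in a minimal SciPy sketch; the toy view and comment counts are illustrative, not the paper's Panoramio, YouTube or Epinions data.

# Pearson correlation and least-squares regression between two factors.
import numpy as np
from scipy import stats

views    = np.array([120, 340, 560, 80, 900, 430], dtype=float)
comments = np.array([  4,  11,  25,  2,  40,  14], dtype=float)

r, p = stats.pearsonr(views, comments)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# least-squares fit: comments ~ slope * views + intercept
res = stats.linregress(views, comments)
print(f"comments ~ {res.slope:.3f} * views + {res.intercept:.2f}")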

2010

Journal Articles

Giorgos Papachristoudis, Sotiris Diplaris and Pericles A. Mitkas
"SoFoCles: Feature filtering for microarray classification based on Gene Ontology"
Journal of Biomedical Informatics, 43, (1), 2010 Feb

Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the “curse of dimensionality” by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.

@article{2010Papachristoudis-JBI,
author={Giorgos Papachristoudis and Sotiris Diplaris and Pericles A. Mitkas},
title={SoFoCles: Feature filtering for microarray classification based on Gene Ontology},
journal={Journal of Biomedical Informatics},
volume={43},
number={1},
year={2010},
month={02},
date={2010-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/SoFoCles-Feature-filtering-for-microarray-classification-based-on-Gene-Ontology.pdf},
keywords={Data Mining;Feature filtering;Microarray classification;Ontologies;Semantic similarity},
abstract={Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the “curse of dimensionality” by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.}
}

Fotis E. Psomopoulos and Pericles A. Mitkas
"Bioinformatics algorithm development for Grid environments"
Journal of Systems and Software, 83, (7), 2010 Jul

A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher throughput computing by taking advantage of many computers geographically dispersed and connected by a network. Bioinformatics applications stand to gain in such a distributed environment in terms of increased availability, reliability and efficiency of computational resources. There is already considerable research in progress toward applying parallel computing techniques on bioinformatics methods, such as multiple sequence alignment, gene expression analysis and phylogenetic studies. In order to cope with the dimensionality issue, most machine learning methods either focus on specific groups of proteins or reduce the size of the original data set and/or the number of attributes involved. Grid computing could potentially provide an alternative solution to this problem, by combining multiple approaches in a seamless way. In this paper we introduce a unifying methodology coupling the strengths of the Grid with the specific needs and constraints of the major bioinformatics approaches. We also present a tool that implements this process and allows researchers to assess the computational needs for a specific task and optimize the allocation of available resources for its efficient completion.

@article{2010PsomopoulosJOSAS,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Bioinformatics algorithm development for Grid environments},
journal={Journal of Systems and Software},
volume={83},
number={7},
year={2010},
month={07},
date={2010-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Bioinformatics-algorithm-development-for-Grid-environments.pdf},
keywords={Bioinformatics;Data analysis;Grid computing;Protein classification;Semi-automated tool;Workflow design},
abstract={A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher throughput computing by taking advantage of many computers geographically dispersed and connected by a network. Bioinformatics applications stand to gain in such a distributed environment in terms of increased availability, reliability and efficiency of computational resources. There is already considerable research in progress toward applying parallel computing techniques on bioinformatics methods, such as multiple sequence alignment, gene expression analysis and phylogenetic studies. In order to cope with the dimensionality issue, most machine learning methods either focus on specific groups of proteins or reduce the size of the original data set and/or the number of attributes involved. Grid computing could potentially provide an alternative solution to this problem, by combining multiple approaches in a seamless way. In this paper we introduce a unifying methodology coupling the strengths of the Grid with the specific needs and constraints of the major bioinformatics approaches. We also present a tool that implements this process and allows researchers to assess the computational needs for a specific task and optimize the allocation of available resources for its efficient completion.}
}

2010

Conference Papers

Kyriakos C. Chatzidimitriou and Pericles A. Mitkas
"A NEAT Way for Evolving Echo State Networks"
European Conference on Artificial Intelligence, pp. 909-914, IOS Press, Alexandroupoli, Greece, 2010 Aug

The Reinforcement Learning (RL) paradigm is an appropriate formulation for agent, goal-directed, sequential decision making. In order though for RL methods to perform well in difficult, complex, real-world tasks, the choice and the architecture of an appropriate function approximator is of crucial importance. This work presents a method of automatically discovering such function approximators, based on a synergy of ideas and techniques that are proven to be working on their own. Using Echo State Networks (ESNs) as our function approximators of choice, we try to adapt them, by combining evolution and learning, for developing the appropriate ad-hoc architectures to solve the problem at hand. The choice of ESNs was made for their ability to handle both non-linear and non-Markovian tasks, while also being capable of learning online, through simple gradient descent temporal difference learning. For creating networks that enable efficient learning, a neuroevolution procedure was applied. Appropriate topologies and weights were acquired by applying the NeuroEvolution of Augmented Topologies (NEAT) method as a meta-search algorithm and by adapting ideas like historical markings, complexification and speciation, to the specifics of ESNs. Our methodology is tested on both supervised and reinforcement learning testbeds with promising results.

@inproceedings{2010ChatzidimitriouECAI,
author={Kyriakos C. Chatzidimitriou and Pericles A. Mitkas},
title={A NEAT Way for Evolving Echo State Networks},
booktitle={European Conference on Artificial Intelligence},
pages={909-914},
publisher={IOS Press},
address={Alexandroupoli, Greece},
year={2010},
month={08},
date={2010-08-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A_NEAT_way_for_evolving_Echo_State_Networks.pdf},
keywords={Echo State Networks;NeuroEvolution of Augmented Topologies;Reinforcement Learning},
abstract={The Reinforcement Learning (RL) paradigm is an appropriate formulation for agent, goal-directed, sequential decision making. In order though for RL methods to perform well in difficult, complex, real-world tasks, the choice and the architecture of an appropriate function approximator is of crucial importance. This work presents a method of automatically discovering such function approximators, based on a synergy of ideas and techniques that are proven to be working on their own. Using Echo State Networks (ESNs) as our function approximators of choice, we try to adapt them, by combining evolution and learning, for developing the appropriate ad-hoc architectures to solve the problem at hand. The choice of ESNs was made for their ability to handle both non-linear and non-Markovian tasks, while also being capable of learning online, through simple gradient descent temporal difference learning. For creating networks that enable efficient learning, a neuroevolution procedure was applied. Appropriate topologies and weights were acquired by applying the NeuroEvolution of Augmented Topologies (NEAT) method as a meta-search algorithm and by adapting ideas like historical markings, complexification and speciation, to the specifics of ESNs. Our methodology is tested on both supervised and reinforcement learning testbeds with promising results.}
}

Kyriakos C. Chatzidimitriou, Fotis E. Psomopoulos and Pericles A. Mitkas
"Grid-enabled parameter initialization for high performance machine learning tasks"
5th EGEE User Forum, pp. 113-114, 2010 Apr

In this work we use the NeuroEvolution of Augmented Topologies (NEAT) methodology for optimising Echo State Networks (ESNs), in order to achieve high performance in machine learning tasks. The large parameter space of NEAT, the many variations of ESNs and the stochastic nature of evolutionary computation, requiring many evaluations for statistically valid conclusions, promote the Grid as a viable solution for robustly evaluating the alternatives and deriving significant conclusions.

@inproceedings{2010ChatzidimitriouEGEEForum,
author={Kyriakos C. Chatzidimitriou and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Grid-enabled parameter initialization for high performance machine learning tasks},
booktitle={5th EGEE User Forum},
pages={113-114},
year={2010},
month={04},
date={2010-04-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Grid-enabled-parameter-initialization-for-high-performance-machine-learning-tasks.pdf},
keywords={Neuroevolution;Parameter optimisation},
abstract={In this work we use the NeuroEvolution of Augmented Topologies (NEAT) methodology for optimising Echo State Networks (ESNs), in order to achieve high performance in machine learning tasks. The large parameter space of NEAT, the many variations of ESNs and the stochastic nature of evolutionary computation, requiring many evaluations for statistically valid conclusions, promote the Grid as a viable solution for robustly evaluating the alternatives and deriving significant conclusions.}
}
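
The Grid usage described here is essentially a large parameter sweep with repetitions; a minimal sketch of the job-enumeration step follows. Parameter names and ranges are illustrative assumptions, and the actual submission of jobs to EGEE worker nodes is omitted.

# Enumerate NEAT/ESN parameter combinations, repeated for statistical validity.
import itertools

grid = {
    "spectral_radius": [0.8, 0.9, 0.99],
    "reservoir_size":  [50, 100, 200],
    "mutation_rate":   [0.05, 0.1],
}
REPETITIONS = 30   # stochastic evolution -> many independent runs per setting

jobs = [
    dict(zip(grid, values), seed=rep)
    for values in itertools.product(*grid.values())
    for rep in range(REPETITIONS)
]
print(len(jobs), "independent jobs to submit")   # 3 * 3 * 2 * 30 = 540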

Nausheen S. Khuram, Andreas L. Symeonidis and Awais Majeed
"Wage – A Web Service- and Agent-based Generic Auctioning Environment"
Paper presented at the 2010 IADIS International Conference on Intelligent Systems and Agents, Freiburg, Germany, 2010 Jul

@inproceedings{2010KhuramISA,
author={Nausheen S. Khuram and Andreas L. Symeonidis and Awais Majeed},
title={Wage – A Web Service- and Agent-based Generic Auctioning Environment},
booktitle={Paper presented at the 2010 IADIS International Conference on Intelligent Systems and Agents},
address={Freiburg, Germany},
year={2010},
month={07},
date={2010-07-29},
keywords={Biomedical framework}
}

Pericles A. Mitkas
"From Theory and the Research Lab to an Innovative Product for the Greek and the International Market: Agent Mertacor"
1st Private Equity Forum, Transforming the Crisis to Opportunities for Greece, Athens, Greece, 2010 Oct

@inproceedings{2010MitkasTCOG10,
author={Pericles A. Mitkas},
title={From Theory and the Research Lab to an Innovative Product for the Greek and the International Market: Agent Mertacor},
booktitle={1st Private Equity Forum, Transforming the Crisis to Opportunities for Greece},
address={Athens, Greece},
year={2010},
month={10},
date={2010-10-26}
}

Fotis E. Psomopoulos and Pericles A. Mitkas
"Multi Level Clustering of Phylogenetic Profiles"
BioInformatics and BioEngineering (BIBE), 2010 IEEE International Conference, pp. 308-309, Freiburg, Germany, 2010 May

The prediction of gene function from genome sequences is one of the main issues in Bioinformatics. Most computational approaches are based on the similarity between sequences to infer gene function. However, the availability of several fully sequenced genomes has enabled alternative approaches, such as phylogenetic profiles. Phylogenetic profiles are vectors which indicate the presence or absence of a gene in other genomes. The main concept of phylogenetic profiles is that proteins participating in a common structural complex or metabolic pathway are likely to evolve in a correlated fashion. In this paper, a multi level clustering algorithm of phylogenetic profiles is presented, which aims to detect inter- and intra-genome gene clusters.

@conference{2010PsomopoulosBIBE,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Multi Level Clustering of Phylogenetic Profiles},
booktitle={BioInformatics and BioEngineering (BIBE), 2010 IEEE International Conference},
pages={308-309},
address={Freiburg, Germany},
year={2010},
month={05},
date={2010-05-31},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Multi-Level-Clustering-of-Phylogenetic-Profiles.pdf},
keywords={Algorithm;Clustering;Phylogenetic profiles},
abstract={The prediction of gene function from genome sequences is one of the main issues in Bioinformatics. Most computational approaches are based on the similarity between sequences to infer gene function. However, the availability of several fully sequenced genomes has enabled alternative approaches, such as phylogenetic profiles. Phylogenetic profiles are vectors which indicate the presence or absence of a gene in other genomes. The main concept of phylogenetic profiles is that proteins participating in a common structural complex or metabolic pathway are likely to evolve in a correlated fashion. In this paper, a multi level clustering algorithm of phylogenetic profiles is presented, which aims to detect inter- and intra-genome gene clusters.}
}
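
Phylogenetic profiles, as defined in these papers, are binary presence/absence vectors (one row per gene, one column per genome), so profile clustering can be sketched with Hamming distances and hierarchical clustering. The toy profiles and the cut threshold are illustrative, not the paper's multi-level algorithm.

# Clustering binary phylogenetic profiles by Hamming distance.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

profiles = np.array([
    [1, 1, 0, 1, 0, 1],   # gene A
    [1, 1, 0, 1, 0, 0],   # gene B (similar to A -> likely functional link)
    [0, 0, 1, 0, 1, 1],   # gene C
    [0, 0, 1, 0, 1, 0],   # gene D (similar to C)
])

dists = pdist(profiles, metric="hamming")            # fraction of differing genomes
tree = linkage(dists, method="average")
print(fcluster(tree, t=0.4, criterion="distance"))   # e.g. [1 1 2 2]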

Fotis E. Psomopoulos, Pericles A. Mitkas and Christos A. Ouzounis
"Clustering of discrete and fuzzy phylogenetic profiles"
5th Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB, pp. 58, Alexandroupoli, Greece, 2010 Oct

Phylogenetic profiles have long been a focus of interest in computational genomics. Encoding the subset of organisms that contain a homolog of a gene or protein, phylogenetic profiles are originally defined as binary vectors of n entries, where n corresponds to the number of target genomes. It is widely accepted that similar profiles, especially those not connected by sequence similarity, correspond to a correlated pattern of functional linkage. To this end, our study presents two methods of phylogenetic profile data analysis, aiming at detecting genes with peculiar, unique characteristics. Genes with similar phylogenetic profiles are likely to have similar structure or function, such as participating to a common structural complex or to a common pathway. Our two methods aim at detecting those outlier profiles of “interesting” genes, or groups of genes, with different characteristics from their parent genome.

@inproceedings{2010PsomopoulosHSCBB,
author={Fotis E. Psomopoulos and Pericles A. Mitkas and Christos A. Ouzounis},
title={Clustering of discrete and fuzzy phylogenetic profiles},
booktitle={5th Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB},
pages={58},
address={Alexandroupoli, Greece},
year={2010},
month={10},
date={2010-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Clustering-of-discrete-and-fuzzy-phylogenetic-profiles.pdf},
keywords={Computational genomics},
abstract={Phylogenetic profiles have long been a focus of interest in computational genomics. Encoding the subset of organisms that contain a homolog of a gene or protein, phylogenetic profiles are originally defined as binary vectors of n entries, where n corresponds to the number of target genomes. It is widely accepted that similar profiles, especially those not connected by sequence similarity, correspond to a correlated pattern of functional linkage. To this end, our study presents two methods of phylogenetic profile data analysis, aiming at detecting genes with peculiar, unique characteristics. Genes with similar phylogenetic profiles are likely to have similar structure or function, such as participating to a common structural complex or to a common pathway. Our two methods aim at detecting those outlier profiles of “interesting” genes, or groups of genes, with different characteristics from their parent genome.}
}

Andreas L. Symeonidis and Pericles A. Mitkas
"Monitoring Agent Communication in Soft Real-Time Environments"
Paper presented at the IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, pp. 265--268, Los Alamitos, CA, USA, 2010 Jan

Real-time systems can be defined as systems operating under specific timing constraints, either hard or soft ones. In principle, agent systems are considered inappropriate for such kinds of systems, due to the asynchronous nature of their communication protocols, which directly influences their temporal behavior. Nevertheless, multi-agent systems could be successfully employed for solving problems where failure to meet a deadline does not have serious consequences, given the existence of a fail-safe system mechanism. Current work focuses on the analysis of multi-agent systems behavior under such soft real-time constraints. To this end, ERMIS has been developed: an integrated framework that provides the agent developer with the ability to benchmark his/her own architecture and identify its limitations and its optimal timing behavior, under specific hardware/software constraints. A variety of MAS configurations have been tested and indicative results are discussed within the context of this paper.

@inproceedings{2010SymeonidisWIIAT,
author={Andreas L. Symeonidis and Pericles A. Mitkas},
title={Monitoring Agent Communication in Soft Real-Time Environments},
booktitle={IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology},
pages={265--268},
address={Los Alamitos, CA, USA},
year={2010},
month={01},
date={2010-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Monitoring_Agent_Communication_in_Soft_Real-Time_E.pdf},
keywords={soft real-time systems;synchronization},
abstract={Real-time systems can be defined as systems operating under specific timing constraints, either hard or soft ones. In principle, agent systems are considered inappropriate for such kinds of systems, due to the asynchronous nature of their communication protocols, which directly influences their temporal behavior. Nevertheless, multi-agent systems could be successfully employed for solving problems where failure to meet a deadline does not have serious consequences, given the existence of a fail-safe system mechanism. Current work focuses on the analysis of multi-agent systems behavior under such soft real-time constraints. To this end, ERMIS has been developed: an integrated framework that provides the agent developer with the ability to benchmark his/her own architecture and identify its limitations and its optimal timing behavior, under specific hardware/software constraints. A variety of MAS configurations have been tested and indicative results are discussed within the context of this paper.}
}

Fani A. Tzima, Fotis E. Psomopoulos and Pericles A. Mitkas
"An investigation of the effect of clustering-based initialization on Learning Classifiers Systems"
5th EGEE User Forum, pp. 111-112, 2010 Apr

Strength-based Learning Classifier Systems (LCS) are machine learning systems designed to tackle both sequential and single-step decision tasks by coupling a gradually evolving population of rules with a reinforcement component. ZCS-DM, a Zeroth-level Classifier System for Data Mining, is a novel algorithm in this field, recently shown to be very effective in several benchmark classification problems. In this paper, we evaluate the effect of clustering-based initialization on the algorithm’s performance, utilizing the EGEE infrastructure as a robust framework for an efficient parameter sweep.
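
For readers unfamiliar with the idea, the sketch below shows one generic way clustering could seed an initial rule population; this is an assumption-laden illustration using scikit-learn's KMeans, not ZCS-DM's actual initialization or parameters:

import numpy as np
from sklearn.cluster import KMeans

X = np.random.default_rng(0).random((100, 3))  # toy training instances
centroids = KMeans(n_clusters=5, n_init=10).fit(X).cluster_centers_

# Each centroid seeds the condition part of one classifier rule, e.g. an
# interval of fixed spread around the centroid in every attribute.
rules = [{"lower": c - 0.1, "upper": c + 0.1} for c in centroids]
print(len(rules), "seeded rules")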

@inproceedings{2010TzimaEGEEForum,
author={Fani A. Tzima and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={An investigation of the effect of clustering-based initialization on Learning Classifiers Systems},
booktitle={5th EGEE User Forum},
pages={111-112},
year={2010},
month={04},
date={2010-04-01},
keywords={Algorithm Optimization;Parameter Sweep},
abstract={Strength-based Learning Classifier Systems (LCS) are machine learning systems designed to tackle both sequential and single-step decision tasks by coupling a gradually evolving population of rules with a reinforcement component. ZCS-DM, a Zeroth-level Classifier System for Data Mining, is a novel algorithm in this field, recently shown to be very effective in several benchmark classification problems. In this paper, we evaluate the effect of clustering-based initialization on the algorithm’s performance, utilizing the EGEE infrastructure as a robust framework for an efficient parameter sweep.}
}

Konstantinos N. Vavliakis, Theofanis K. Grollios and Pericles A. Mitkas
"RDOTE - Transforming Relational Databases into Semantic Web Data"
9th International Semantic Web Conference (ISWC2010), 2010 Nov

During the last decade, there has been intense research and development in creating methodologies and tools able to map Relational Databases with the Resource Description Framework. Although some systems have gained wider acceptance in the Semantic Web community, they either require users to learn a declarative language for encoding mappings, or have limited expressivity. Thereupon we present RDOTE, a framework for easily transporting data residing in Relational Databases into the Semantic Web. RDOTE is available under GNU/GPL license and provides friendly graphical interfaces, as well as enough expressivity for creating custom RDF dumps.
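
The general table-to-triples idea can be pictured in a few lines of Python using rdflib and sqlite3; this is a hand-rolled sketch under assumed names (the example.org namespace, the person table), and RDOTE's own mapping language and GUI are not reproduced here:

import sqlite3
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person (id INTEGER, name TEXT)")
db.execute("INSERT INTO person VALUES (1, 'Alice')")

g = Graph()
for row_id, name in db.execute("SELECT id, name FROM person"):
    subject = EX[f"person/{row_id}"]          # one RDF resource per row
    g.add((subject, RDF.type, EX.Person))     # table -> class
    g.add((subject, EX.name, Literal(name)))  # column -> property

print(g.serialize(format="turtle"))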

@inproceedings{2010Vavliakis-ISWC,
author={Konstantinos N. Vavliakis and Theofanis K. Grollios and Pericles A. Mitkas},
title={RDOTE - Transforming Relational Databases into Semantic Web Data},
booktitle={9th International Semantic Web Conference (ISWC2010)},
year={2010},
month={11},
date={2010-11-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/RDOTE-Transforming-Relational-Databases-into-Semantic-Web-Data.pdf},
keywords={Relational Databases to Ontology Transformation},
abstract={During the last decade, there has been intense research and development in creating methodologies and tools able to map Relational Databases with the Resource Description Framework. Although some systems have gained wider acceptance in the Semantic Web community, they either require users to learn a declarative language for encoding mappings, or have limited expressivity. Thereupon we present RDOTE, a framework for easily transporting data residing in Relational Databases into the Semantic Web. RDOTE is available under GNU/GPL license and provides friendly graphical interfaces, as well as enough expressivity for creating custom RDF dumps.}
}

Kyriakos C. Chatzidimitriou, Konstantinos N. Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Towards Understanding How Personality, Motivation, and Events Trigger Web User Activity"
Web Intelligence and Intelligent Agent Technology, IEEE/WIC/ACM International Conference, pp. 615-618, IEEE Computer Society, Los Alamitos, CA, USA, 2010 Jan

Web 2.0 provided internet users with a dynamic medium, where information is updated continuously and anyone can participate. Though preliminary analysis exists, there is still little understanding on what exactly stimulates users to actively participate, create and share content in online communities. In this paper we present a methodology that aspires to identify and analyze those events that trigger web user activity, content creation and sharing in Web 2.0. Our approach is based on user personality and motivation, and on the occurrence of events with a personal or global impact. The proposed methodology was applied on data collected from Flickr and analysis was performed through the use of statistics and data mining techniques.

@inproceedings{2010VavliakisWI,
author={Kyriakos C. Chatzidimitriou and Konstantinos N. Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Towards Understanding How Personality, Motivation, and Events Trigger Web User Activity},
booktitle={Web Intelligence and Intelligent Agent Technology, IEEE/WIC/ACM International Conference},
pages={615-618},
publisher={IEEE Computer Society},
address={Los Alamitos, CA, USA},
year={2010},
month={01},
date={2010-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Towards-Understanding-How-Personality-Motivation-and-Events-Trigger-Web-User-Activity.pdf},
keywords={Crowdsourcing;Flickr;Sharing},
abstract={Web 2.0 provided internet users with a dynamic medium, where information is updated continuously and anyone can participate. Though preliminary analysis exists, there is still little understanding on what exactly stimulates users to actively participate, create and share content in online communities. In this paper we present a methodology that aspires to identify and analyze those events that trigger web user activity, content creation and sharing in Web 2.0. Our approach is based on user personality and motivation, and on the occurrence of events with a personal or global impact. The proposed methodology was applied on data collected from Flickr and analysis was performed through the use of statistics and data mining techniques.}
}

2009

Journal Articles

Theodoros Agorastos, Vassilis Koutkias, Manolis Falelakis, Irini Lekka, T. Mikos, Anastasios Delopoulos, Pericles A. Mitkas, A. Tantsis, S. Weyers, P. Coorevits, A. M. Kaufmann, R. Kurzeja and Nicos Maglaveras
"Semantic Integration of Cervical Cancer DAta Repositories to Facilitata Multicenter Associtation Studies: Tha ASSIST Approach"
Cancer Informatics Journal, Special Issue on Semantic Technologies, 8, (9), pp. 31-44, 2009 Feb

@article{2009AgorastosCIJSIOST,
author={Theodoros Agorastos and Vassilis Koutkias and Manolis Falelakis and Irini Lekka and T. Mikos and Anastasios Delopoulos and Pericles A. Mitkas and A. Tantsis and S. Weyers and P. Coorevits and A. M. Kaufmann and R. Kurzeja and Nicos Maglaveras},
title={Semantic Integration of Cervical Cancer Data Repositories to Facilitate Multicenter Association Studies: The ASSIST Approach},
journal={Cancer Informatics Journal, Special Issue on Semantic Technologies},
volume={8},
number={9},
pages={31-44},
year={2009},
month={02},
date={2009-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Semantic-Integration-of-Cervical-Cancer-Data-Repositories-to-Facilitate-Multicenter-Association-Studies-The-ASSIST-Approach.pdf},
}

Kyriakos C. Chatzidimitriou, Konstantinos N. Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Data-Mining-Enhanced Agents in Dynamic Supply-Chain-Management Environments"
IEEE Intelligent Systems, 24, (3), pp. 54-63, 2009 Jan

Special issue on Agents and Data Mining

@article{2009ChatzidimitriouIS,
author={Kyriakos C. Chatzidimitriou and Konstantinos N. Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Data-Mining-Enhanced Agents in Dynamic Supply-Chain-Management Environments},
journal={IEEE Intelligent Systems},
volume={24},
number={3},
pages={54-63},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Data-Mining-Enhanced_Agents_in_Dynamic_Supply-Chai.pdf},
abstract={Special issue on Agents and Data Mining}
}

Georgios Karagiannis, Konstantinos Vavliakis, Sophia Sotiropoulou, Argirios Damtsios, Dimitrios Alexiadis and Christos Salpistis
"Using Signal Processing and Semantic Web Technologies to Analyze Byzantine Iconography"
IEEE Intelligent Systems, 24, (3), pp. 54-63, 2009 Jan

A bottom-up approach for documenting art objects processes data from innovative nondestructive analysis with signal processing and neural network techniques to provide a good estimation of the paint layer profile and pigments of artwork. The approach also uses Semantic Web technologies and maps concepts relevant to the analysis of paintings and Byzantine iconography to the Conceptual Reference Model of the International Committee for Documentation (CIDOC-CRM). This approach has introduced three main contributions: the development of an integrated nondestructive technique system combining spectroscopy and acoustic microscopy, supported by intelligent algorithms, for estimating the artworks

@article{2009KaragiannisIS,
author={Georgios Karagiannis and Konstantinos Vavliakis and Sophia Sotiropoulou and Argirios Damtsios and Dimitrios Alexiadis and Christos Salpistis},
title={Using Signal Processing and Semantic Web Technologies to Analyze Byzantine Iconography},
journal={IEEE Intelligent Systems},
volume={24},
number={3},
pages={54-63},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Using-Signal-Processing-and-Semantic-Web-Technologies-to-Analyze-Byzantine-Iconography.pdf},
keywords={Acoustic Microscopy;CIDOC - CRM;Multispectral Imaging;Non - Destructive Identification;Reasoning;Spectroscopy},
abstract={A bottom-up approach for documenting art objects processes data from innovative nondestructive analysis with signal processing and neural network techniques to provide a good estimation of the paint layer profile and pigments of artwork. The approach also uses Semantic Web technologies and maps concepts relevant to the analysis of paintings and Byzantine iconography to the Conceptual Reference Model of the International Committee for Documentation (CIDOC-CRM). This approach has introduced three main contributions: the development of an integrated nondestructive technique system combining spectroscopy and acoustic microscopy, supported by intelligent algorithms, for estimating the artworks}
}

John M. Konstantinides, Athanasios Mademlis, Petros Daras, Pericles A. Mitkas and Michael G. Strintzis
"Blind Robust 3D-Mesh Watermarking Based on Oblate Spheroidal Harmonics"
IEEE Transactions on Multimedia, 11, (1), pp. 23-38, 2009 Jan

In this paper, a novel transform-based, blind and robust 3D mesh watermarking scheme is presented. The 3D surface of the mesh is firstly divided into a number of discrete continuous regions, each of which is successively sampled and mapped onto oblate spheroids, using a novel surface parameterization scheme. The embedding is performed in the spheroidal harmonic coefficients of the spheroids, using a novel embedding scheme. Changes made to the transform domain are then reversed back to the spatial domain, thus forming the watermarked 3D mesh. The embedding scheme presented herein resembles, in principle, the ones using the multiplicative embedding rule (inherently providing high imperceptibility). The watermark detection is blind and by far more powerful than the various correlators typically incorporated by multiplicative schemes. Experimental results have shown that the proposed blind watermarking scheme is competitively robust against similarity transformations, connectivity attacks, mesh simplification and refinement, unbalanced re-sampling, smoothing and noise addition, even when juxtaposed to the informed ones.
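
The multiplicative rule mentioned in the abstract is a standard construction; the toy NumPy sketch below (our illustration, with made-up coefficients and strength alpha, and a deliberately non-blind check for clarity) shows its basic shape, c_i' = c_i (1 + alpha * w_i):

import numpy as np

rng = np.random.default_rng(42)
coeffs = rng.normal(size=32)         # stand-in for spheroidal harmonic coefficients
watermark = rng.choice([-1, 1], 32)  # bipolar watermark sequence
alpha = 0.05                         # strength: imperceptibility vs robustness

marked = coeffs * (1 + alpha * watermark)

# Non-blind sanity check only; the paper's detector is blind and stronger
# than a plain correlator.
response = np.corrcoef(marked / coeffs - 1, watermark)[0, 1]
print(f"detector response: {response:.2f}")  # ~1.0 when the mark is present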

@article{2009KonstantinidesIEEEToM,
author={John M. Konstantinides and Athanasios Mademlis and Petros Daras and Pericles A. Mitkas and Michael G. Strintzis},
title={Blind Robust 3D-Mesh Watermarking Based on Oblate Spheroidal Harmonics},
journal={IEEE Transactions on Multimedia},
volume={11},
number={1},
pages={23-38},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Blind-Robust-3D-Mesh-Watermarking-Based-onOblate-Spheroidal-Harmonics.pdf},
abstract={In this paper, a novel transform-based, blind and robust 3D mesh watermarking scheme is presented. The 3D surface of the mesh is firstly divided into a number of discrete continuous regions, each of which is successively sampled and mapped onto oblate spheroids, using a novel surface parameterization scheme. The embedding is performed in the spheroidal harmonic coefficients of the spheroids, using a novel embedding scheme. Changes made to the transform domain are then reversed back to the spatial domain, thus forming the watermarked 3D mesh. The embedding scheme presented herein resembles, in principle, the ones using the multiplicative embedding rule (inherently providing high imperceptibility). The watermark detection is blind and by far more powerful than the various correlators typically incorporated by multiplicative schemes. Experimental results have shown that the proposed blind watermarking scheme is competitively robust against similarity transformations, connectivity attacks, mesh simplification and refinement, unbalanced re-sampling, smoothing and noise addition, even when juxtaposed to the informed ones.}
}

Fotis E. Psomopoulos, Pericles A. Mitkas, Christos S. Krinas and Ioannis N. Demetropoulos
"A grid-enabled algorithm yields figure-eight molecular knot"
Molecular Simulation, 35, (9), pp. 725-736, 2009 Jun

The recently proposed general molecular knotting algorithm and its associated package, MolKnot, introduce programming into certain sections of stereochemistry. This work reports the G-MolKnot procedure that was deployed over the grid infrastructure; it applies a divide-and-conquer approach to the problem by splitting the initial search space into multiple independent processes and, combining the results at the end, yields significant improvements with regards to the overall efficiency. The algorithm successfully detected the smallest ever reported alkane configured to an open-knotted shape with four crossings.
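
The divide-and-conquer deployment pattern described here can be mimicked on a single machine; the following toy Python sketch (not G-MolKnot code; the chunking scheme and the stand-in predicate are invented) splits a search space into independent slices, evaluates them in parallel processes, and merges the partial results at the end:

from concurrent.futures import ProcessPoolExecutor

def search_chunk(bounds):
    lo, hi = bounds
    # Stand-in for exhaustively testing conformations in one slice.
    return [x for x in range(lo, hi) if x % 7 == 0]

if __name__ == "__main__":
    chunks = [(i, i + 250) for i in range(0, 1000, 250)]  # 4 independent slices
    with ProcessPoolExecutor() as pool:
        partial = pool.map(search_chunk, chunks)
    hits = [x for part in partial for x in part]  # combine at the end
    print(len(hits), "candidates found")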

@article{2009PsomopoulosMS,
author={Fotis E. Psomopoulos and Pericles A. Mitkas and Christos S. Krinas and Ioannis N. Demetropoulos},
title={A grid-enabled algorithm yields figure-eight molecular knot},
journal={Molecular Simulation},
volume={35},
number={9},
pages={725-736},
year={2009},
month={06},
date={2009-06-17},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-grid-enabled-algorithm-yields-Figure-Eight-molecular-knot.pdf},
keywords={data decomposition;figure-eight molecular knot;knot theory;stereochemistry},
abstract={The recently proposed general molecular knotting algorithm and its associated package, MolKnot, introduce programming into certain sections of stereochemistry. This work reports the G-MolKnot procedure that was deployed over the grid infrastructure; it applies a divide-and-conquer approach to the problem by splitting the initial search space into multiple independent processes and, combining the results at the end, yields significant improvements with regards to the overall efficiency. The algorithm successfully detected the smallest ever reported alkane configured to an open-knotted shape with four crossings.}
}

2009

Books

Fotis Psomopoulos and Pericles Mitkas
"Handbook of Research on Computational Grid Technologies for Life Sciences, Biomedicine, and Healthcare"
2, UK: IGI Global., Catanzaro, Italy, 2009 May

@book{2009PsomopoulosHRCGTLSBH,
author={Fotis Psomopoulos and Pericles Mitkas},
title={Handbook of Research on Computational Grid Technologies for Life Sciences, Biomedicine, and Healthcare},
volume={2},
publisher={UK: IGI Global.},
address={Catanzaro, Italy},
year={2009},
month={05},
date={2009-05-00}
}

2009

Conference Papers

Antonios C. Chrysopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Improving agent bidding in Power Stock Markets through a data mining enhanced agent platform"
Agents and Data Mining Interaction workshop AAMAS 2009, pp. 111-125, Springer-Verlag, Budapest, Hungary, 2009 May

Like in any other auctioning environment, entities participating in Power Stock Markets have to compete against each other in order to maximize their own revenue. Towards the satisfaction of their goal, these entities (agents - human or software ones) may adopt different types of strategies - from naive to extremely complex ones - in order to identify the most profitable goods compilation, the appropriate price to buy or sell etc., always under time pressure and auction environment constraints. Decisions become even more difficult to make in case one takes the vast volumes of historical data available into account: goods’ prices, market fluctuations, bidding habits and buying opportunities. Within the context of this paper we present Cassandra, a multi-agent platform that exploits data mining, in order to extract efficient models for predicting Power Settlement prices and Power Load values in typical Day-ahead Power markets. The functionality of Cassandra is discussed, while focus is given on the bidding mechanism of Cassandra’s agents, and the way data mining analysis is performed in order to generate the optimal forecasting models. Cassandra has been tested in a real-world scenario, with data derived from the Greek Energy Stock market.

@inproceedings{2009ChrysopoulosADMI,
author={Antonios C. Chrysopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Improving agent bidding in Power Stock Markets through a data mining enhanced agent platform},
booktitle={Agents and Data Mining Interaction workshop AAMAS 2009},
pages={111-125},
publisher={Springer-Verlag},
address={Budapest, Hungary},
year={2009},
month={05},
date={2009-05-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Improving-agent-bidding-in-Power-Stock-Markets-through-a-data-mining-enhanced-agent-platform.pdf},
keywords={exploit data mining;multi-agent platform;predict Power Load;predict Power Settlement},
abstract={Like in any other auctioning environment, entities participating in Power Stock Markets have to compete against each other in order to maximize their own revenue. Towards the satisfaction of their goal, these entities (agents - human or software ones) may adopt different types of strategies - from naive to extremely complex ones - in order to identify the most profitable goods compilation, the appropriate price to buy or sell etc., always under time pressure and auction environment constraints. Decisions become even more difficult to make in case one takes the vast volumes of historical data available into account: goods’ prices, market fluctuations, bidding habits and buying opportunities. Within the context of this paper we present Cassandra, a multi-agent platform that exploits data mining, in order to extract efficient models for predicting Power Settlement prices and Power Load values in typical Day-ahead Power markets. The functionality of Cassandra is discussed, while focus is given on the bidding mechanism of Cassandra’s agents, and the way data mining analysis is performed in order to generate the optimal forecasting models. Cassandra has been tested in a real-world scenario, with data derived from the Greek Energy Stock market.}
}

Antonios C. Chrysopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain"
Third Electrical and Computer Engineering Department Student Conference, Thessaloniki, Greece, 2009 Apr

@inproceedings{2009ChrysopoulosECEDSC,
author={Antonios C. Chrysopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain},
booktitle={Third Electrical and Computer Engineering Department Student Conference},
address={Thessaloniki, Greece},
year={2009},
month={04},
date={2009-04-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Creating-and-Reusing-Metric-Graphs-for-Evaluating-Agent-Performance-in-the-Supply-Chain-Management-Domain.pdf},
keywords={Evaluating Agent Performance},
}

Christos Dimou, Fani A. Tzima, Andreas Symeonidis and Pericles Mitkas
"Specifying and Validating the Agent Performance Evaluation Methodology: The Symbiosis Use Case"
IADIS International Conference on Intelligent Systems and Agents, Algarve, Portugal, 2009 Jun

@inproceedings{2009DimouIADIS,
author={Christos Dimou and Fani A. Tzima and Andreas Symeonidis and Pericles Mitkas},
title={Specifying and Validating the Agent Performance Evaluation Methodology: The Symbiosis Use Case},
booktitle={IADIS International Conference on Intelligent Systems and Agents},
address={Algarve, Portugal},
year={2009},
month={06},
date={2009-06-17},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Specifying-and-Validating-the-Agent-Performance-Evaluation-Methodology.pdf},
keywords={evaluation methodology;formal specification;metrics representation;Z notation}
}

Manolis Falelakis, Christos Maramis, Irini Lekka, Pericles Mitkas and Anastasios Delopoulos
"An Ontology for Supporting Clincal Research on Cervical Cancer"
International Conference on Knowledge Engineering and Ontology Development, pp. 103--108, Springer-Verlag, Madeira, Portugal, 2009 Jan

This work presents an ontology for cervical cancer that is positioned in the center of a research system for conducting association studies. The ontology aims at providing a unified “language” for various heterogeneous medical repositories. To this end, it contains both generic patient-management and domain-specific concepts, as well as proper unification rules. The inference scheme adopted is coupled with a procedural programming layer in order to comply with the design requirements.

@inproceedings{2009FalelakisICKEOD,
author={Manolis Falelakis and Christos Maramis and Irini Lekka and Pericles Mitkas and Anastasios Delopoulos},
title={An Ontology for Supporting Clinical Research on Cervical Cancer},
booktitle={International Conference on Knowledge Engineering and Ontology Development},
pages={103--108},
publisher={Springer-Verlag},
address={Madeira, Portugal},
year={2009},
month={01},
date={2009-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/keod2009v22.pdf},
keywords={Domain modelling;Medical ontology},
abstract={This work presents an ontology for cervical cancer that is positioned in the center of a research system for conducting association studies. The ontology aims at providing a unified “language” for various heterogeneous medical repositories. To this end, it contains both generic patient-management and domain-specific concepts, as well as proper unification rules. The inference scheme adopted is coupled with a procedural programming layer in order to comply with the design requirements.}
}

Konstantinos M. Karagiannis, Fotis E. Psomopoulos and Pericles A. Mitkas
"Multi Level Clustering of Phylogenetic Profiles"
4th Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB '09, Athens, Greece, 2009 Dec

The prediction of gene function from genome sequences is one of the main issues in Bioinformatics. Most computational approaches are based on the similarity between sequences to infer gene function. However, the availability of several fully sequenced genomes has enabled alternative approaches, such as phylogenetic profiles (Pellegrini et al. (1999)). Phylogenetic profiles (pp) are vectors which indicate the presence or absence of a gene in other genomes. The main concept of pp’s is that proteins participating in a common structural complex or metabolic pathway are likely to evolve in a correlated fashion. In this paper, a multi-level clustering algorithm of pp’s is presented, which aims to detect inter- and intra-genome gene clusters.

@inproceedings{2009KaragiannisHSCBB,
author={Konstantinos M. Karagiannis and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Multi Level Clustering of Phylogenetic Profiles},
booktitle={4th Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB '09},
address={Athens, Greece},
year={2009},
month={12},
date={2009-12-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Multi-Level-Clustering-of-Phylogenetic-Profiles.pdf},
keywords={infer gene function;prediction of gene},
abstract={The prediction of gene function from genome sequences is one of the main issues in Bioinformatics. Most computational approaches are based on the similarity between sequences to infer gene function. However, the availability of several fully sequenced genomes has enabled alternative approaches, such as phylogenetic profiles (Pellegrini et al. (1999)). Phylogenetic profiles (pp) are vectors which indicate the presence or absence of a gene in other genomes. The main concept of pp’s is that proteins participating in a common structural complex or metabolic pathway are likely to evolve in a correlated fashion. In this paper, a multi-level clustering algorithm of pp’s is presented, which aims to detect inter- and intra-genome gene clusters.}
}

Pericles A. Mitkas, Anastasios Delopoulos, Konstantinos N. Vavliakis, Christos Maramis, Manolis Falelakis, Sotiris Diplaris, Vasilis Koutkias, Irini Lekka, A. Tantsis, T. Mikos, Nikolaos Maglaveras and Theodoros Agorastos
"Pooling data from different sources towards cervical cancer prevention - The ASSIST Project"
8th Scientific Meeting, New Developments in Prevention and Confrontation of Gynecological Cancer, Thessaloniki, Greece, 2009 Jan

@inproceedings{2009MitkasNDPCGC,
author={Pericles A. Mitkas and Anastasios Delopoulos and Konstantinos N. Vavliakis and Christos Maramis and Manolis Falelakis and Sotiris Diplaris and Vasilis Koutkias and Irini Lekka and A. Tantsis and T. Mikos and Nikolaos Maglaveras and Theodoros Agorastos},
title={Pooling data from different sources towards cervical cancer prevention - The ASSIST Project},
booktitle={8th Scientific Meeting, New Developments in Prevention and Confrontation of Gynecological Cancer},
address={Thessaloniki, Greece},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Pooling-data-from-different-sources-towards-cervical-cancer-prevention-The-ASSIST-Project.pdf},
keywords={cervical cancer prevention}
}

Vivia Nikolaidou and Pericles A. Mitkas
"A Sequence Mining Method to Predict the Bidding Strategy of Trading Agents"
4th International Workshop on Agents and Data Mining Interaction (ADMI 2009), pp. 139-151, Springer-Verlag, Berlin, Heidelberg, 2009 Jan

In this work, we describe the process used in order to predict the bidding strategy of trading agents. This was done in the context of the Reverse TAC, or CAT, game of the Trading Agent Competition. In this game, a set of trading agents, buyers or sellers, are provided by the server and they trade their goods in one of the markets operated by the competing agents. Better knowledge of the strategy of the trading agents will allow a market maker to adapt its incentives and attract more agents to its own market. Our prediction was based on the time series of the traders’ past bids, taking into account the variation of each bid compared to its history. The results proved to be of satisfactory accuracy, both in the game’s context and when compared to other existing approaches.
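
The kind of features the abstract alludes to, each bid's variation against its own history, can be pictured with a short Python sketch (purely illustrative numbers; the paper's actual encoding and the CAT game data are not reproduced here):

bids = [10.0, 10.5, 10.2, 11.0, 11.4]  # one trader's past bids (made up)

deltas = [b - a for a, b in zip(bids, bids[1:])]  # step-to-step variation
running_mean = [sum(bids[:i + 1]) / (i + 1) for i in range(len(bids))]
rel_to_mean = [b - m for b, m in zip(bids, running_mean)]  # bid vs its history

# A sequence miner would then search these series for recurring,
# strategy-specific patterns.
print(deltas)
print([round(x, 2) for x in rel_to_mean])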

@inproceedings{2009NikolaidouADMI,
author={Vivia Nikolaidou and Pericles A. Mitkas},
title={A Sequence Mining Method to Predict the Bidding Strategy of Trading Agents},
booktitle={4th International Workshop on Agents and Data Mining Interaction (ADMI 2009)},
pages={139-151},
publisher={Springer-Verlag},
address={Berlin, Heidelberg},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A_Sequence_Mining_Method_to_Predict_the_Bidding_St.pdf},
keywords={bidding strategy;trading agents},
abstract={In this work, we describe the process used in order to predict the bidding strategy of trading agents. This was done in the context of the Reverse TAC, or CAT, game of the Trading Agent Competition. In this game, a set of trading agents, buyers or sellers, are provided by the server and they trade their goods in one of the markets operated by the competing agents. Better knowledge of the strategy of the trading agents will allow a market maker to adapt its incentives and attract more agents to its own market. Our prediction was based on the time series of the traders’ past bids, taking into account the variation of each bid compared to its history. The results proved to be of satisfactory accuracy, both in the game’s context and when compared to other existing approaches.}
}

John E. Psaroudakis, Fani A. Tzima and Pericles A. Mitkas
"EVADING: An Evolutionary Algorithm with Dynamic Niching for Data Classification"
2009 International Conference on Genetic and Evolutionary Methods (GEM), pp. 59--65, Las Vegas, Nevada, USA, 2009 Jul

Multimodal optimization problems (MMOPs) have been widely studied in many fields of machine learning, including pattern recognition and data classification. Formulating the process of rule induction for the latter task as a MMOP and inspired by corresponding findings in the field of function optimization, our current work proposes an evolutionary algorithm (EVADING) capable of discovering a set of accurate and diverse classification rules. The proposed algorithm uses a dynamic clustering technique as a parallel niching method to maintain rule population diversity and converge to the optimal rules for the attribute-space defined by the target dataset. To demonstrate its applicability and potential, EVADING is applied to a series of real-life classification problems and its prediction accuracy is compared to that of other popular non-evolutionary machine learning techniques. Results are encouraging, since EVADING manages to achieve the best overall average ranking and performs significantly better (at significance level a

@inproceedings{2009PsaroudakisGEM,
author={John E. Psaroudakis and Fani A. Tzima and Pericles A. Mitkas},
title={EVADING: An Evolutionary Algorithm with Dynamic Niching for Data Classification},
booktitle={2009 International Conference on Genetic and Evolutionary Methods (GEM)},
pages={59--65},
address={Las Vegas, Nevada, USA},
year={2009},
month={07},
date={2009-07-13},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/EVADING-An-Evolutionary-Algorithm-with-Dynamic-Niching-for-Data-Classification.pdf},
keywords={agent performance},
abstract={Multimodal optimization problems (MMOPs) have been widely studied in many fields of machine learning, including pattern recognition and data classification. Formulating the process of rule induction for the latter task as a MMOP and inspired by corresponding findings in the field of function optimization, our current work proposes an evolutionary algorithm (EVADING) capable of discovering a set of accurate and diverse classification rules. The proposed algorithm uses a dynamic clustering technique as a parallel niching method to maintain rule population diversity and converge to the optimal rules for the attribute-space defined by the target dataset. To demonstrate its applicability and potential, EVADING is applied to a series of real-life classification problems and its prediction accuracy is compared to that of other popular non-evolutionary machine learning techniques. Results are encouraging, since EVADING manages to achieve the best overall average ranking and performs significantly better (at significance level a}
}

Marina Riga, Fani A. Tzima, Kostas Karatzas and Pericles A. Mitkas
"Development and evaluation of data mining models for air quality prediction in Athens, Greece"
Information Technologies in Environmental Engineering, Proceedings of the 4th International ICSC Symposium, ITEE 2009, pp. 331--344, Springer Berlin Heidelberg, Thessaloniki, Greece, 2009 May

Air pollution is a major problem in the world today, causing undesirable effects on both the environment and human health and, at the same time, stressing the need for effective simulation and forecasting models of atmospheric quality. Targeting this adverse situation, our current work focuses on investigating the potential of data mining algorithms in air pollution modeling and short-term forecasting problems. In this direction, various data mining methods are adopted for the qualitative forecasting of concentration levels of air pollutants or the quantitative prediction of their values (through the development of different classification and regression models respectively) in five locations of the greater Athens area. An additional aim of this work is the systematic assessment of the quality of experimental results, in order to discover the best performing algorithm (or set of algorithms) that can be proved to be significantly different from its rivals. Obtained experimental results are deemed satisfactory in terms of the aforementioned goals of the investigation, as high percentages of correct classifications are achieved in the set of monitoring stations and clear conclusions are drawn, as far as the determination of significantly best performing algorithms is concerned, for the development of air quality (AQ) prediction models.
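
As a shape-of-the-task illustration (the toy features and labels below are invented; the paper's pollutants, monitoring stations and algorithm set differ), the qualitative forecasting variant reduces to ordinary classification, sketched here with scikit-learn:

from sklearn.tree import DecisionTreeClassifier

# Toy rows: [temperature_C, wind_speed_ms, yesterday_NO2_ugm3]
X = [[28, 1.2, 80], [18, 4.0, 30], [31, 0.8, 95], [20, 3.5, 40]]
y = ["high", "low", "high", "low"]  # concentration-level classes

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[29, 1.0, 85]]))  # -> likely "high"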

@inproceedings{2009TzimaITEE,
author={Marina Riga and Fani A. Tzima and Kostas Karatzas and Pericles A. Mitkas},
title={Development and evaluation of data mining models for air quality prediction in Athens, Greece},
booktitle={Information Technologies in Environmental Engineering, Proceedings of the 4th International ICSC Symposium, ITEE 2009},
pages={331--344},
publisher={Springer Berlin Heidelberg},
address={Thessaloniki, Greece},
year={2009},
month={05},
date={2009-05-28},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Development-and-evaluation-of-data-mining-models-for-air-quality-prediction-in-Athens-Greece.pdf},
keywords={air pollution model;air quality;data mining algorithms},
abstract={Air pollution is a major problem in the world today, causing undesirable effects on both the environment and human health and, at the same time, stressing the need for effective simulation and forecasting models of atmospheric quality. Targeting this adverse situation, our current work focuses on investigating the potential of data mining algorithms in air pollution modeling and short-term forecasting problems. In this direction, various data mining methods are adopted for the qualitative forecasting of concentration levels of air pollutants or the quantitative prediction of their values (through the development of different classification and regression models respectively) in five locations of the greater Athens area. An additional aim of this work is the systematic assessment of the quality of experimental results, in order to discover the best performing algorithm (or set of algorithms) that can be proved to be significantly different from its rivals. Obtained experimental results are deemed satisfactory in terms of the aforementioned goals of the investigation, as high percentages of correct classifications are achieved in the set of monitoring stations and clear conclusions are drawn, as far as the determination of significantly best performing algorithms is concerned, for the development of air quality (AQ) prediction models.}
}

2009

Incollection

Fotis E. Psomopoulos and Pericles A. Mitkas
"Data Mining in Proteomics using Grid Computing"
Handbook of Research on Computational Grid Technologies for LifeSciences, Biomedicine and Healthcare, pp. 245-267, IGI Global, UK, 2009 May

The scope of this chapter is the presentation of Data Mining techniques for knowledge extraction in proteomics, taking into account both the particular features of most proteomics issues (such as data retrieval and system complexity), and the opportunities and constraints found in a Grid environment. The chapter discusses the way new and potentially useful knowledge can be extracted from proteomics data, utilizing Grid resources in a transparent way. Protein classification is introduced as a current research issue in proteomics, which also demonstrates most of the domain – specific traits. An overview of common and custom-made Data Mining algorithms is provided, with emphasis on the specific needs of protein classification problems. A unified methodology is presented for complex Data Mining processes on the Grid, highlighting the different application types and the benefits and drawbacks in each case. Finally, the methodology is validated through real-world case studies, deployed over the EGEE grid environment.

@incollection{2009PsomopoulosHRCGT,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Data Mining in Proteomics using Grid Computing},
booktitle={Handbook of Research on Computational Grid Technologies for LifeSciences, Biomedicine and Healthcare},
pages={245-267},
publisher={IGI Global},
address={UK},
year={2009},
month={05},
date={2009-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Data-Mining-in-Proteomics-Using-Grid-Computing.pdf},
keywords={Data Mining techniques;knowledge extraction in proteomics},
abstract={The scope of this chapter is the presentation of Data Mining techniques for knowledge extraction in proteomics, taking into account both the particular features of most proteomics issues (such as data retrieval and system complexity), and the opportunities and constraints found in a Grid environment. The chapter discusses the way new and potentially useful knowledge can be extracted from proteomics data, utilizing Grid resources in a transparent way. Protein classification is introduced as a current research issue in proteomics, which also demonstrates most of the domain – specific traits. An overview of common and custom-made Data Mining algorithms is provided, with emphasis on the specific needs of protein classification problems. A unified methodology is presented for complex Data Mining processes on the Grid, highlighting the different application types and the benefits and drawbacks in each case. Finally, the methodology is validated through real-world case studies, deployed over the EGEE grid environment.}
}

Fotis E. Psomopoulos and Pericles A. Mitkas
"BADGE: Bioinformatics Algorithm Development for Grid Environments"
13th Panhellenic Conference on Informatics, pp. 93-107, Corfu, Greece, 2009 Sep

A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher throughput computing by taking advantage of many computers geographically dispersed and connected by a network. Bioinformatics applications stand to gain in such a distributed environment in terms of availability, reliability and efficiency of computational resources. There is already considerable research in progress toward applying parallel computing techniques on bioinformatics methods, such as multiple sequence alignment, gene expression analysis and phylogenetic studies. In order to cope with the dimensionality issue, most machine learning methods focus on specific groups of proteins or reduce either the size of the original data set or the number of attributes involved. Grid computing could potentially provide an alternative solution to this problem, by combining multiple approaches in a seamless way. In this paper we introduce a unifying methodology coupling the strengths of the Grid with the specific needs and constraints of the major bioinformatics approaches. We also present a tool that implements this process and allows researchers to assess the computational needs for a specific task and optimize the allocation of available resources for its efficient completion.

@incollection{2009PsomopoulosPCI,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={BADGE: Bioinformatics Algorithm Development for Grid Environments},
booktitle={13th Panhellenic Conference on Informatics},
pages={93-107},
address={Corfu, Greece},
year={2009},
month={09},
date={2009-09-01},
url={http://issel.ee.auth.gr/wp-content/uploads/fpsompci20091.pdf},
abstract={A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher throughput computing by taking advantage of many computers geographically dispersed and connected by a network. Bioinformatics applications stand to gain in such a distributed environment in terms of availability, reliability and efficiency of computational resources. There is already considerable research in progress toward applying parallel computing techniques on bioinformatics methods, such as multiple sequence alignment, gene expression analysis and phylogenetic studies. In order to cope with the dimensionality issue, most machine learning methods focus on specific groups of proteins or reduce either the size of the original data set or the number of attributes involved. Grid computing could potentially provide an alternative solution to this problem, by combining multiple approaches in a seamless way. In this paper we introduce a unifying methodology coupling the strengths of the Grid with the specific needs and constraints of the major bioinformatics approaches. We also present a tool that implements this process and allows researchers to assess the computational needs for a specific task and optimize the allocation of available resources for its efficient completion.}
}

2008

Journal Articles

Pericles A. Mitkas, Vassilis Koutkias, Andreas L. Symeonidis, Manolis Falelakis, Christos Diou, Irini Lekka, Anastasios T. Delopoulos, Theodoros Agorastos and Nicos Maglaveras
"Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologes: The ASSIST Approach"
Studies in Health Technology and Informatics, 136, pp. 241-246, 2008 Jan

Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.

@article{2007MitkasSHTI,
author={Pericles A. Mitkas and Vassilis Koutkias and Andreas L. Symeonidis and Manolis Falelakis and Christos Diou and Irini Lekka and Anastasios T. Delopoulos and Theodoros Agorastos and Nicos Maglaveras},
title={Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologies: The ASSIST Approach},
journal={Studies in Health Technology and Informatics},
volume={136},
pages={241-246},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Association-Studies-on-Cervical-Cancer-Facilitated-by-Inference-and-Semantic-Technologies-The-ASSIST-Approach-.pdf},
abstract={Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.}
}

Alexandros Batzios, Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"BioCrawler: An intelligent crawler for the semantic web"
Expert Systems with Applications, 36, (35), 2008 Jul

Web crawling has become an important aspect of web search, as the WWW keeps getting bigger and search engines strive to index the most important and up to date content. Many experimental approaches exist, but few actually try to model the current behaviour of search engines, which is to crawl and refresh the sites they deem as important, much more frequently than others. BioCrawler mirrors this behaviour on the semantic web, by applying the learning strategies adopted in previous work on ecosystem simulation, called BioTope. BioCrawler employs the principles of BioTope

@article{2008BatziosESwA,
author={Alexandros Batzios and Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={BioCrawler: An intelligent crawler for the semantic web},
journal={Expert Systems with Applications},
volume={36},
number={35},
year={2008},
month={07},
date={2008-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/BioCrawler-An-intelligent-crawler-for-the-semantic-web.pdf},
keywords={semantic web;Multi-Agent System;focused crawling;web crawling},
abstract={Web crawling has become an important aspect of web search, as the WWW keeps getting bigger and search engines strive to index the most important and up to date content. Many experimental approaches exist, but few actually try to model the current behaviour of search engines, which is to crawl and refresh the sites they deem as important, much more frequently than others. BioCrawler mirrors this behaviour on the semantic web, by applying the learning strategies adopted in previous work on ecosystem simulation, called BioTope. BioCrawler employs the principles of BioTope}
}

Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis, Ioannis Kontogounis and Pericles A. Mitkas
"Agent Mertacor: A robust design for dealing with uncertainty and variation in SCM environments"
Expert Systems with Applications, 35, (3), pp. 591-603, 2008 Jan

Supply Chain Management (SCM) has recently entered a new era, where the old-fashioned static, long-term relationships between involved actors are being replaced by new, dynamic negotiating schemas, established over virtual organizations and trading marketplaces. SCM environments now operate under strict policies that all interested parties (suppliers, manufacturers, customers) have to abide by, in order to participate. And, though such dynamic markets provide greater profit potential, they also conceal greater risks, since competition is tougher and request and demand may vary significantly in the quest for maximum benefit. The need for efficient SCM actors is thus implied, actors that may handle the deluge of (either complete or incomplete) information generated, perceive variations and exploit the full potential of the environments they inhabit. In this context, we introduce Mertacor, an agent that employs robust mechanisms for dealing with all SCM facets and for trading within dynamic and competitive SCM environments. Its efficiency has been extensively tested in one of the most challenging SCM environments, the Trading Agent Competition (TAC) SCM game. This paper provides an extensive analysis of Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and thoroughly discusses Mertacor

@article{2008ChatzidimitriouESwA,
author={Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Ioannis Kontogounis and Pericles A. Mitkas},
title={Agent Mertacor: A robust design for dealing with uncertainty and variation in SCM environments},
journal={Expert Systems with Applications},
volume={35},
number={3},
pages={591-603},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Agent-Mertacor-A-robust-design-for-dealing-with-uncertaintyand-variation-in-SCM-environments.pdf},
keywords={machine learning;Agent intelligence;Autonomous trading agents;Electronic commerce},
abstract={Supply Chain Management (SCM) has recently entered a new era, where the old-fashioned static, long-term relationships between involved actors are being replaced by new, dynamic negotiating schemas, established over virtual organizations and trading marketplaces. SCM environments now operate under strict policies that all interested parties (suppliers, manufacturers, customers) have to abide by, in order to participate. And, though such dynamic markets provide greater profit potential, they also conceal greater risks, since competition is tougher and request and demand may vary significantly in the quest for maximum benefit. The need for efficient SCM actors is thus implied, actors that may handle the deluge of (either complete or incomplete) information generated, perceive variations and exploit the full potential of the environments they inhabit. In this context, we introduce Mertacor, an agent that employs robust mechanisms for dealing with all SCM facets and for trading within dynamic and competitive SCM environments. Its efficiency has been extensively tested in one of the most challenging SCM environments, the Trading Agent Competition (TAC) SCM game. This paper provides an extensive analysis of Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and thoroughly discusses Mertacor}
}

Alexandros Batzios, Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"An Integrated Infrastructure for Monitoring and Evaluating Agent-based Systems"
Expert Systems with Applications, 36, (4), 2008 Sep

Driven by the urgent need to thoroughly identify and accentuate the merits of agent technology, we present in this paper MEANDER, an integrated framework for evaluating the performance of agent-based systems. The proposed framework is based on the Agent Performance Evaluation (APE) methodology, which provides guidelines and representation tools for performance metrics, measurement collection and aggregation of measurements. MEANDER comprises a series of integrated software components that implement and automate various parts of the methodology and assist evaluators in their tasks. The main objective of MEANDER is to integrate performance evaluation processes into the entire development lifecycle, while clearly separating any evaluation-specific code from the application code at hand. In this paper, we describe in detail the architecture and functionality of the MEANDER components and test its applicability to an existing multi-agent system.

@article{2008DimouESwA,
author={Alexandros Batzios and Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={An Integrated Infrastructure for Monitoring and Evaluating Agent-based Systems},
journal={Expert Systems with Applications},
volume={36},
number={4},
year={2008},
month={09},
date={2008-09-09},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-integrated-infrastructure-for-monitoring-and-evaluating-agent-based-systems.pdf},
keywords={performance evaluation;automated software engineering;fuzzy measurement aggregation;software agents},
abstract={Driven by the urgent need to thoroughly identify and accentuate the merits of agent technology, we present in this paper MEANDER, an integrated framework for evaluating the performance of agent-based systems. The proposed framework is based on the Agent Performance Evaluation (APE) methodology, which provides guidelines and representation tools for performance metrics, measurement collection and aggregation of measurements. MEANDER comprises a series of integrated software components that implement and automate various parts of the methodology and assist evaluators in their tasks. The main objective of MEANDER is to integrate performance evaluation processes into the entire development lifecycle, while clearly separating any evaluation-specific code from the application code at hand. In this paper, we describe in detail the architecture and functionality of the MEANDER components and test its applicability to an existing multi-agent system.}
}

Andreas L. Symeonidis, Vivia Nikolaidou and Pericles A. Mitkas
"Sketching a methodology for efficient supply chain management agents enhanced through data mining"
International Journal of Intelligent Information and Database Systems (IJIIDS), 2, (1), 2008 Feb

Supply Chain Management (SCM) environments demand intelligent solutions, which can perceive variations and achieve maximum revenue. This highlights the importance of a commonly accepted design methodology, since most current implementations are application-specific. In this work, we present a methodology for building an intelligent trading agent and evaluating its performance at the Trading Agent Competition (TAC) SCM game. We justify architectural choices made, ranging from the adoption of specific Data Mining (DM) techniques, to the selection of the appropriate metrics for agent performance evaluation. Results indicate that our agent has proven capable of providing advanced SCM solutions in demanding SCM environments.

@article{2008SymeoniidsIJIIDS,
author={Andreas L. Symeonidis and Vivia Nikolaidou and Pericles A. Mitkas},
title={Sketching a methodology for efficient supply chain management agents enhanced through data mining},
journal={International Journal of Intelligent Information and Database Systems (IJIIDS)},
volume={2},
number={1},
year={2008},
month={02},
date={2008-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Sketching-a-methodology-for-efficient-supply-chain-management-agents-enhanced-through-data-mining.pdf},
keywords={performance evaluation;Intelligent agents;agent-based systems;multi-agent systems;MAS;trading agent competition;agent-oriented methodology;bidding;forecasting;SCM},
abstract={Supply Chain Management (SCM) environments demand intelligent solutions, which can perceive variations and achieve maximum revenue. This highlights the importance of a commonly accepted design methodology, since most current implementations are application-specific. In this work, we present a methodology for building an intelligent trading agent and evaluating its performance at the Trading Agent Competition (TAC) SCM game. We justify architectural choices made, ranging from the adoption of specific Data Mining (DM) techniques, to the selection of the appropriate metrics for agent performance evaluation. Results indicate that our agent has proven capable of providing advanced SCM solutions in demanding SCM environments.}
}

2008

Conference Papers

Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis and Pericles A. Mitkas
"Data Mining-Driven Analysis and Decomposition in Agent Supply Chain Management Networks"
IEEE/WIC/ACM Workshop on Agents and Data Mining Interaction, pp. 558-561, IEEE Computer Society, Sydney, Australia, 2008 Dec

In complex and dynamic environments where interdependencies cannot monotonously determine causality, data mining techniques may be employed in order to analyze the problem, extract key features and identify pivotal factors. Typical cases of such complexity and dynamicity are supply chain networks, where a number of involved stakeholders struggle towards their own benefit. These stakeholders may be agents with varying degrees of autonomy and intelligence, in a constant effort to establish beneficial contracts and maximize their own revenue. In this paper, we illustrate the benefits of data mining analysis on a well-established agent supply chain management network. We apply data mining techniques, both at a macro and micro level, analyze the results and discuss them in the context of agent performance improvement.

@inproceedings{2008ChatzidimitriouADMI,
author={Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Data Mining-Driven Analysis and Decomposition in Agent Supply Chain Management Networks},
booktitle={IEEE/WIC/ACM Workshop on Agents and Data Mining Interaction},
pages={558-561},
publisher={IEEE Computer Society},
address={Sydney, Australia},
year={2008},
month={12},
date={2008-12-08},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Data_Mining-Driven_Analysis_and_Decomposition_in_A.pdf},
keywords={fuzzy logic},
abstract={In complex and dynamic environments where interdependencies cannot monotonously determine causality, data mining techniques may be employed in order to analyze the problem, extract key features and identify pivotal factors. Typical cases of such complexity and dynamicity are supply chain networks, where a number of involved stakeholders struggle towards their own benefit. These stakeholders may be agents with varying degrees of autonomy and intelligence, in a constant effort to establish beneficial contracts and maximize their own revenue. In this paper, we illustrate the benefits of data mining analysis on a well-established agent supply chain management network. We apply data mining techniques, both at a macro and micro level, analyze the results and discuss them in the context of agent performance improvement.}
}

Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas
"Exploiting parallel data mining processing for protein annotation"
Student EUREKA 2008: 2nd Panhellenic Scientific Student Conference, pp. 242-252, Samos, Greece, 2008 Aug

Proteins are large organic compounds consisting of amino acids arranged in a linear chain and joined together by peptide bonds. One of the most important challenges in modern Bioinformatics is the accurate prediction of the functional behavior of proteins. In this paper a novel parallel methodology for automatic protein function annotation is presented. Data mining techniques are employed in order to construct models based on data generated from already annotated protein sequences. The first step of the methodology is to obtain the motifs present in these sequences, which are then provided as input to the data mining algorithms in order to create a model for every term. Experiments conducted using the EGEE Grid environment as a source of multiple CPUs clearly indicate that the methodology is highly efficient and accurate, as the utilization of many processors substantially reduces the execution time.

@inproceedings{2008CkekasEURECA,
author={Christos N. Gkekas and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Exploiting parallel data mining processing for protein annotation},
booktitle={Student EUREKA 2008: 2nd Panhellenic Scientific Student Conference},
pages={242-252},
address={Samos, Greece},
year={2008},
month={08},
date={2008-08-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Exploiting-parallel-data-mining-processing-for-protein-annotation-.pdf},
keywords={Finite State Automata;Parallel Processing},
abstract={Proteins are large organic compounds consisting of amino acids arranged in a linear chain and joined together by peptide bonds. One of the most important challenges in modern Bioinformatics is the accurate prediction of the functional behavior of proteins. In this paper a novel parallel methodology for automatic protein function annotation is presented. Data mining techniques are employed in order to construct models based on data generated from already annotated protein sequences. The first step of the methodology is to obtain the motifs present in these sequences, which are then provided as input to the data mining algorithms in order to create a model for every term. Experiments conducted using the EGEE Grid environment as a source of multiple CPUs clearly indicate that the methodology is highly efficient and accurate, as the utilization of many processors substantially reduces the execution time.}
}

Christos Dimou, Manolis Falelakis, Andreas Symeonidis, Anastasios Delopoulos and Pericles A. Mitkas
"Constructing Optimal Fuzzy Metric Trees for Agent Performance Evaluation"
IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '08), pp. 336-339, IEEE Computer Society, Sydney, Australia, 2008 Dec

The field of multi-agent systems has reached a significant degree of maturity with respect to frameworks, standards and infrastructures. Focus is now shifted to performance evaluation of real-world applications, in order to quantify the practical benefits and drawbacks of agent systems. Our approach extends current work on generic evaluation methodologies for agents by employing fuzzy weighted trees for organizing evaluation-specific concepts/metrics and linguistic terms to intuitively represent and aggregate measurement information. Furthermore, we introduce meta-metrics that measure the validity and complexity of the contribution of each metric in the overall performance evaluation. These are all incorporated for selecting optimal subsets of metrics and designing the evaluation process in compliance with the demands/restrictions of various evaluation setups, thus minimizing intervention by domain experts. The applicability of the proposed methodology is demonstrated through the evaluation of a real-world test case.

@inproceedings{2008DimouIAT,
author={Christos Dimou and Manolis Falelakis and Andreas Symeonidis and Anastasios Delopoulos and Pericles A. Mitkas},
title={Constructing Optimal Fuzzy Metric Trees for Agent Performance Evaluation},
booktitle={IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '08)},
pages={336--339},
publisher={IEEE Computer Society},
address={Sydney, Australia},
year={2008},
month={12},
date={2008-12-09},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Constructing-Optimal-Fuzzy-Metric-Trees-for-Agent-Performance-Evaluation.pdf},
keywords={fuzzy logic},
abstract={The field of multi-agent systems has reached a significant degree of maturity with respect to frameworks, standards and infrastructures. Focus is now shifted to performance evaluation of real-world applications, in order to quantify the practical benefits and drawbacks of agent systems. Our approach extends current work on generic evaluation methodologies for agents by employing fuzzy weighted trees for organizing evaluation-specific concepts/metrics and linguistic terms to intuitively represent and aggregate measurement information. Furthermore, we introduce meta-metrics that measure the validity and complexity of the contribution of each metric in the overall performance evaluation. These are all incorporated for selecting optimal subsets of metrics and designing the evaluation process in compliance with the demands/restrictions of various evaluation setups, thus minimizing intervention by domain experts. The applicability of the proposed methodology is demonstrated through the evaluation of a real-world test case.}
}

Christos Dimou, Kyriakos C. Chatzidimitriou, Andreas Symeonidis and Pericles A. Mitkas
"Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain"
First Workshop on Knowledge Reuse (KREUSE), Beijing, China, 2008 May

The overwhelming demand for efficient agent performance in Supply Chain Management systems, as exemplified by numerous international competitions, raises the issue of defining and using generalized methods for performance evaluation. Up until now, most researchers test their findings in an ad-hoc manner, often having to re-invent existing evaluation-specific knowledge. In this position paper, we tackle the key issue of defining and using metrics within the context of evaluating agent performance in the SCM domain. We propose the Metrics Representation Graph (MRG), a structure that organizes performance metrics in a hierarchical manner, and perform a preliminary assessment by instantiating an MRG for the TAC SCM competition, one of the most demanding SCM competitions currently established. We envision the automated generation of the MRG, as well as appropriate contribution from the TAC community towards the finalization of the MRG, so that it will be readily available for future performance evaluations.

@inproceedings{2008DimouKREUSE,
author={Christos Dimou and Kyriakos C. Chatzidimitriou and Andreas Symeonidis and Pericles A. Mitkas},
title={Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain},
booktitle={First Workshop on Knowledge Reuse (KREUSE)},
address={Beijing, China},
year={2008},
month={05},
date={2008-05-25},
url={http://issel.ee.auth.gr/wp-content/uploads/Dimou-KREUSE-08.pdf},
keywords={agent performance evaluation;Supply Chain Management systems},
abstract={The overwhelming demand for efficient agent performance in Supply Chain Management systems, as exemplified by numerous international competitions, raises the issue of defining and using generalized methods for performance evaluation. Up until now, most researchers test their findings in an ad-hoc manner, often having to re-invent existing evaluation-specific knowledge. In this position paper, we tackle the key issue of defining and using metrics within the context of evaluating agent performance in the SCM domain. We propose the Metrics Representation Graph (MRG), a structure that organizes performance metrics in a hierarchical manner, and perform a preliminary assessment by instantiating an MRG for the TAC SCM competition, one of the most demanding SCM competitions currently established. We envision the automated generation of the MRG, as well as appropriate contribution from the TAC community towards the finalization of the MRG, so that it will be readily available for future performance evaluations.}
}

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"Data Mining and Agent Technology: a fruitful symbiosis"
Soft Computing for Knowledge Discovery and Data Mining, pp. 327-362, Springer US, 2008 Jan

Multi-agent systems (MAS) have grown quite popular in a wide spectrum of applications where argumentation, communication, scaling and adaptability are requested. And though the need for well-established engineering approaches for building and evaluating such intelligent systems has emerged, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability. Even existing well-tested evaluation methodologies applied in traditional software engineering prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. The following chapter aims to present such a unified and integrated methodology for a specific category of MAS. It takes all constraints and issues into account and denotes the way knowledge extracted with the use of Data Mining (DM) techniques can be used for the formulation initially, and the improvement, in the long run, of agent reasoning and MAS performance. The coupling of DM and Agent Technology (AT) principles, proposed within the context of this chapter, is therefore expected to provide to the reader an efficient gateway for developing and evaluating highly reconfigurable software approaches that incorporate domain knowledge and provide sophisticated Decision Making capabilities. The main objectives of this chapter could be summarized into the following: a) introduce Agent Technology (AT) as a successful paradigm for building Data Mining (DM)-enriched applications, b) provide a methodology for (re)evaluating the performance of such DM-enriched Multi-Agent Systems (MAS), c) introduce Agent Academy II, an Agent-Oriented Software Engineering framework for building MAS that incorporate knowledge models extracted by the use of (classical and novel) DM techniques and d) denote the benefits of the proposed approach through a real-world demonstrator. This chapter provides a link between DM and AT and explains how these technologies can efficiently cooperate with each other. The exploitation of useful knowledge extracted by the use of DM may considerably improve agent infrastructures, while also increasing reusability and minimizing customization costs. The synergy between DM and AT is ultimately expected to provide MAS with higher levels of autonomy, adaptability and accuracy and, hence, intelligence.

@inproceedings{2008DimouSCKDDM,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Data Mining and Agent Technology: a fruitful symbiosis},
booktitle={Soft Computing for Knowledge Discovery and Data Mining},
pages={327-362},
publisher={Springer US},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Data-Mining-and-Agent-Technology-a-fruitful-symbiosis.pdf},
keywords={Gene Ontology;Parallel Algorithms;Protein Classification},
abstract={Multi-agent systems (MAS) have grown quite popular in a wide spectrum of applications where argumentation, communication, scaling and adaptability are requested. And though the need for well-established engineering approaches for building and evaluating such intelligent systems has emerged, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability. Even existing well-tested evaluation methodologies applied in traditional software engineering prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. The following chapter aims to present such a unified and integrated methodology for a specific category of MAS. It takes all constraints and issues into account and denotes the way knowledge extracted with the use of Data Mining (DM) techniques can be used for the formulation initially, and the improvement, in the long run, of agent reasoning and MAS performance. The coupling of DM and Agent Technology (AT) principles, proposed within the context of this chapter, is therefore expected to provide to the reader an efficient gateway for developing and evaluating highly reconfigurable software approaches that incorporate domain knowledge and provide sophisticated Decision Making capabilities. The main objectives of this chapter could be summarized into the following: a) introduce Agent Technology (AT) as a successful paradigm for building Data Mining (DM)-enriched applications, b) provide a methodology for (re)evaluating the performance of such DM-enriched Multi-Agent Systems (MAS), c) introduce Agent Academy II, an Agent-Oriented Software Engineering framework for building MAS that incorporate knowledge models extracted by the use of (classical and novel) DM techniques and d) denote the benefits of the proposed approach through a real-world demonstrator. This chapter provides a link between DM and AT and explains how these technologies can efficiently cooperate with each other. The exploitation of useful knowledge extracted by the use of DM may considerably improve agent infrastructures, while also increasing reusability and minimizing customization costs. The synergy between DM and AT is ultimately expected to provide MAS with higher levels of autonomy, adaptability and accuracy and, hence, intelligence.}
}

Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas
"A Parallel Data Mining Application for Gene Ontology Term Prediction"
3rd EGEE User Forum, Clermont-Ferrand, France, 2008 Feb

One of the most important challenges in modern bioinformatics is the accurate prediction of the functional behaviour of proteins. The strong correlation that exists between the properties of a protein and its motif sequence makes such a prediction possible. In this paper a novel parallel methodology for protein function prediction will be presented. Data mining techniques are employed in order to construct a model for each Gene Ontology term, based on data generated from already annotated protein sequences. In order to predict the annotation of an unknown protein, its motif sequence is run through each GO term model, producing similarity scores for every term. Although it has been experimentally proven that this process is efficient, it unfortunately requires heavy processor resources. In order to address this issue, a parallel application has been implemented and tested using the EGEE Grid infrastructure.

@inproceedings{2008GkekasEGEEForum,
author={Christos N. Gkekas and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={A Parallel Data Mining Application for Gene Ontology Term Prediction},
booktitle={3rd EGEE User Forum},
address={Clermont-Ferrand, France},
year={2008},
month={02},
date={2008-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A_parallel_data_mining_application_for_Gene_Ontology_term_prediction_-_Contribution.pdf},
keywords={Gene Ontology;Parallel Algorithms;Protein Classification},
abstract={One of the most important challenges in modern bioinformatics is the accurate prediction of the functional behaviour of proteins. The strong correlation that exists between the properties of a protein and its motif sequence makes such a prediction possible. In this paper a novel parallel methodology for protein function prediction will be presented. Data mining techniques are employed in order to construct a model for each Gene Ontology term, based on data generated from already annotated protein sequences. In order to predict the annotation of an unknown protein, its motif sequence is run through each GO term model, producing similarity scores for every term. Although it has been experimentally proven that this process is efficient, it unfortunately requires heavy processor resources. In order to address this issue, a parallel application has been implemented and tested using the EGEE Grid infrastructure.}
}

Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas
"A Parallel Data Mining Methodology for Protein Function Prediction Utilizing Finite State Automata"
2nd Electrical and Computer Engineering Student Conference, Athens, Greece, 2008 Apr

One of the most important challenges in modern bioinformatics is the accurate prediction of the functional behaviour of proteins. The strong correlation that exists between the properties of a protein and its motif sequence makes such a prediction possible. In this paper a novel parallel methodology for protein function prediction will be presented. Data mining techniques are employed in order to construct a model for each Gene Ontology term, based on data generated from already annotated protein sequences. In order to predict the annotation of an unknown protein, its motif sequence is run through each GO term model, producing similarity scores for every term. Although it has been experimentally proven that this process is efficient, it unfortunately requires heavy processor resources. In order to address this issue, a parallel application has been implemented and tested using the EGEE Grid infrastructure.

@inproceedings{2008GkekasSFHMMY,
author={Christos N. Gkekas and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={A Parallel Data Mining Methodology for Protein Function Prediction Utilizing Finite State Automata},
booktitle={2nd Electrical and Computer Engineering Student Conference},
address={Athens, Greece},
year={2008},
month={04},
date={2008-04-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-Parallel-Data-Mining-Methodology-for-Protein-Function-Prediction-Utilizing-Finite-State-Automata.pdf},
keywords={Parallel Data Mining for Protein Function},
abstract={One of the most important challenges in modern bioinformatics is the accurate prediction of the functional behaviour of proteins. The strong correlation that exists between the properties of a protein and its motif sequence makes such a prediction possible. In this paper a novel parallel methodology for protein function prediction will be presented. Data mining techniques are employed in order to construct a model for each Gene Ontology term, based on data generated from already annotated protein sequences. In order to predict the annotation of an unknown protein, its motif sequence is run through each GO term model, producing similarity scores for every term. Although it has been experimentally proven that this process is efficient, it unfortunately requires heavy processor resources. In order to address this issue, a parallel application has been implemented and tested using the EGEE Grid infrastructure.}
}

Georgios Karagiannis, Konstantinos N. Vavliakis, Stella Markantonatou, Sister Daniilia, Sophia Sotiropoulou, Maria Alexopoulou, Olga Yanoutsou, Klimis Dalianis and Thodoros Kavalieros
"EIKONOGNOSIA An Integrated System for Advanced Retrieval of Scientific Data and Metadata of Byzantine Artworks Using Semantic Web Technologies"
Annual Conference of CIDOC, Athens, Greece, 2008 Sep

The documentation and analysis of Byzantine Art is an important component of the overall effort to maintain cultural heritage and contributes to learning and comprehending one's path through history. Efficient publishing of the multi-dimensional and multifaceted information that is necessary for the complete documentation of artworks should draw on a good organization of the data. Eikonognosia is a research project funded by the Greek General Secretariat of Research and Technology (GSRT) that aims to efficiently organize and publish detailed information about icons on the World Wide Web. Information derived from the analysis conducted in the Art Diagnosis Center of the Ormylia Foundation is taken as a case study. Eikonognosia provides the means for organising detailed and multidimensional information about Byzantine icons in a way that is compatible with international standards (CIDOC-CRM - ISO 21127:2006) and allows for easy retrieval of data with advanced semantic web technologies. The ultimate goal for Eikonognosia is to foster the cultural heritage community by providing an integrated framework that helps to facilitate organization, retrieval and presentation of data from the cultural heritage domain.

@inproceedings{2008KaragiannisCIDOC,
author={Georgios Karagiannis and Konstantinos N. Vavliakis and Stella Markantonatou and Sister Daniilia and Sophia Sotiropoulou and Maria Alexopoulou and Olga Yanoutsou and Klimis Dalianis and Thodoros Kavalieros},
title={EIKONOGNOSIA: An Integrated System for Advanced Retrieval of Scientific Data and Metadata of Byzantine Artworks Using Semantic Web Technologies},
booktitle={Annual Conference of CIDOC},
address={Athens, Greece},
year={2008},
month={09},
date={2008-09-15},
url={http://www.ilsp.gr/administrator/components/com_jresearch/files/publications/EIKONOGNOSIA.pdf},
keywords={Byzantine Iconography;CIDOC-CRM;Relational Database;Cultural Heritage;Web Presentation},
abstract={The documentation and analysis of Byzantine Art is an important component of the overall effort to maintain cultural heritage and contributes to learning and comprehending one's path through history. Efficient publishing of the multi-dimensional and multifaceted information that is necessary for the complete documentation of artworks should draw on a good organization of the data. Eikonognosia is a research project funded by the Greek General Secretariat of Research and Technology (GSRT) that aims to efficiently organize and publish detailed information about icons on the World Wide Web. Information derived from the analysis conducted in the Art Diagnosis Center of the Ormylia Foundation is taken as a case study. Eikonognosia provides the means for organising detailed and multidimensional information about Byzantine icons in a way that is compatible with international standards (CIDOC-CRM - ISO 21127:2006) and allows for easy retrieval of data with advanced semantic web technologies. The ultimate goal for Eikonognosia is to foster the cultural heritage community by providing an integrated framework that helps to facilitate organization, retrieval and presentation of data from the cultural heritage domain.}
}

Kostas Karatzas, Anastasios S. Bassoukos, Dimitris Voukantsis, Fani A. Tzima, Kostas Nikolaou and Stavros Karathanasis
"ICT technologies and computational intelligence methods for the creation of an early warning air pollution information system"
22nd Conference on Environmental Informatics and Industrial Ecology, 2008 Sep

Contemporary air quality (AQ) management calls for effective and timely dissemination of AQ information. Such dissemination requires communication that is not based solely on written or oral language forms, but also makes use of graphical, symbolic and multimedia communication schemes via the available communication channels. Previous experience and published research results indicate that the content of environmental information systems should include both real-time information and forecasts for key parameters of interest, such as the maximum concentration values of air pollutants. The latter are difficult to achieve, as air quality forecasting requires both domain expertise and modelling skills for the complicated phenomenon of atmospheric pollution. One way to address this need and to extract useful knowledge for better forecasting and understanding of air pollution problems is the application of Computational Intelligence (CI) methods and tools. The present paper discusses the creation of an environmental information portal for the dissemination of air quality information and warnings for the city of Thessaloniki, Greece. The system is developed with the aid of state-of-the-art, web-based technologies, including modular, on-the-fly software integration with operating applications, and implements CI methods for forecasting the parameters of interest. In addition, observation data are made accessible to the public via an internet-based graphics environment that deploys open-source geographic information services.

@inproceedings{2008KaratzasCEIIE,
author={Kostas Karatzas and Anastasios S. Bassoukos and Dimitris Voukantsis and Fani A. Tzima and Kostas Nikolaou and Stavros Karathanasis},
title={ICT technologies and computational intelligence methods for the creation of an early warning air pollution information system},
booktitle={22nd Conference on Environmental Informatics and Industrial Ecology},
year={2008},
month={09},
date={2008-09-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/ICT-technologies-and-computational-intelligence-methods-for-the-creation-of-an-early-warning-air-pollution-information-system.pdf},
keywords={Computational intelligence;early warning air pollution information system;ICT technologies},
abstract={Contemporary air quality (AQ) management calls for effective and timely dissemination of AQ information. Such dissemination requires communication that is not based solely on written or oral language forms, but also makes use of graphical, symbolic and multimedia communication schemes via the available communication channels. Previous experience and published research results indicate that the content of environmental information systems should include both real-time information and forecasts for key parameters of interest, such as the maximum concentration values of air pollutants. The latter are difficult to achieve, as air quality forecasting requires both domain expertise and modelling skills for the complicated phenomenon of atmospheric pollution. One way to address this need and to extract useful knowledge for better forecasting and understanding of air pollution problems is the application of Computational Intelligence (CI) methods and tools. The present paper discusses the creation of an environmental information portal for the dissemination of air quality information and warnings for the city of Thessaloniki, Greece. The system is developed with the aid of state-of-the-art, web-based technologies, including modular, on-the-fly software integration with operating applications, and implements CI methods for forecasting the parameters of interest. In addition, observation data are made accessible to the public via an internet-based graphics environment that deploys open-source geographic information services.}
}

Pericles A. Mitkas
"Training Intelligent Agents and Evaluating Their Performance"
International Workshop on Agents and Data Mining Interaction (ADMI), IEEE Computer Society, Sydney, Australia, 2008 Dec

@inproceedings{2008MitkasADMI,
author={Pericles A. Mitkas},
title={Training Intelligent Agents and Evaluating Their Performance},
booktitle={International Workshop on Agents and Data Mining Interaction (ADMI)},
publisher={IEEE Computer Society},
address={Sydney, Australia},
year={2008},
month={12},
date={2008-12-09},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Constructing-Optimal-Fuzzy-Metric-Trees-for-Agent-Performance-Evaluation.pdf},
keywords={fuzzy logic},
}

Pericles A. Mitkas, Christos Maramis, Anastasios N. Delopoulos, Andreas Symeonidis, Sotiris Diplaris, Manolis Falelakis, Fotis E. Psomopoulos, Alexandros Batzios, Nikolaos Maglaveras, Irini Lekka, Vasilis Koutkias, Theodoros Agorastos, T. Mikos and A. Tatsis
"ASSIST: Employing Inference and Semantic Technologies to Facilitate Association Studies on Cervical Cancer"
6th European Symposium on Biomedical Engineering, Chania, Greece, 2008 Jun

Despite the proven close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight on the origin of complex diseases. Nevertheless, association studies are most of the time inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer by providing larger, high-quality datasets, via a software system that virtually unifies multiple heterogeneous medical records located in various sites. Furthermore, the system is being designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for the semantic modelling and fuzzy inferencing on medical knowledge, aiming at meaningful data unification: (i) The ASSIST core ontology (the first ontology ever to model cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypothesis testing and association studies, and at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to corresponding entities as defined in the knowledge model of ASSIST. These patient data are generated by an advanced anonymisation tool also developed within the context of the project. (2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST by offering query expression as well as patient data and statistical results visualisation to the ASSIST end-users. We should also point out that the system is easily extendable to virtually any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware, i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to the medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover, he can define cases and controls, select records adjusting their validity, and use the most popular statistical tools for drawing conclusions. The logical unification of the medical records of participating sites, including clinical and genetic data, into a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer, as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.

@inproceedings{2008MitkasEsbmeAssist,
author={Pericles A. Mitkas and Christos Maramis and Anastasios N. Delopoulos and Andreas Symeonidis and Sotiris Diplaris and Manolis Falelakis and Fotis E. Psomopoulos and Alexandros Batzios and Nikolaos Maglaveras and Irini Lekka and Vasilis Koutkias and Theodoros Agorastos and T. Mikos and A. Tatsis},
title={ASSIST: Employing Inference and Semantic Technologies to Facilitate Association Studies on Cervical Cancer},
booktitle={6th European Symposium on Biomedical Engineering},
address={Chania, Greece},
year={2008},
month={06},
date={2008-06-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/ASSIST-EMPLOYING-INFERENCE-AND-SEMANTIC-TECHNOLOGIES-TO-FACILITATE-ASSOCIATION-STUDIES-ON-CERVICAL-CANCER-.pdf},
keywords={cervical cancer},
abstract={Despite the proven close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight on the origin of complex diseases. Nevertheless, association studies are most of the time inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer by providing larger, high-quality datasets, via a software system that virtually unifies multiple heterogeneous medical records located in various sites. Furthermore, the system is being designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for the semantic modelling and fuzzy inferencing on medical knowledge, aiming at meaningful data unification: (i) The ASSIST core ontology (the first ontology ever to model cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypothesis testing and association studies, and at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to corresponding entities as defined in the knowledge model of ASSIST. These patient data are generated by an advanced anonymisation tool also developed within the context of the project. (2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST by offering query expression as well as patient data and statistical results visualisation to the ASSIST end-users. We should also point out that the system is easily extendable to virtually any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware, i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to the medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover, he can define cases and controls, select records adjusting their validity, and use the most popular statistical tools for drawing conclusions. The logical unification of the medical records of participating sites, including clinical and genetic data, into a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer, as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.}
}

Pericles A. Mitkas, Vassilis Koutkias, Andreas Symeonidis, Manolis Falelakis, Christos Diou, Irini Lekka, Anastasios T. Delopoulos, Theodoros Agorastos and Nicos Maglaveras
"Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologes: The ASSIST Approach"
MIE, Goteborg, Sweden, 2008 May

Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.

@conference{2008MitkasMIE,
author={Pericles A. Mitkas and Vassilis Koutkias and Andreas Symeonidis and Manolis Falelakis and Christos Diou and Irini Lekka and Anastasios T. Delopoulos and Theodoros Agorastos and Nicos Maglaveras},
title={Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologies: The ASSIST Approach},
booktitle={MIE},
address={Goteborg, Sweden},
year={2008},
month={05},
date={2008-05-25},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Association-Studies-on-Cervical-Cancer-Facilitated-by-Inference-and-Semantic-Technologies-The-ASSIST-Approach-.pdf},
keywords={agent performance evaluation;Supply Chain Management systems},
abstract={Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.}
}

Ioanna K. Mprouza, Fotis E. Psomopoulos and Pericles A. Mitkas
"AMoS: Agent-based Molecular Simulations"
Student EUREKA 2008: 2nd Panhellenic Scientific Student Conference, pp. 175-186, Samos, Greece, 2008 Aug

Molecular dynamics (MD) is a form of computer simulation wherein atoms and molecules are allowed to interact for a period of time under known laws of physics, giving a view of the motion of the atoms. Usually the number of particles involved in a simulation is so large that the properties of the system in question are virtually impossible to compute analytically. MD circumvents this problem by employing numerical approaches. Utilizing theories and concepts from mathematics, physics and chemistry and employing algorithms from computer science and information theory, MD is a clear example of a multidisciplinary method. In this paper a new framework for MD simulations is presented, which utilizes software agents as particle representations and an empirical potential function as the means of interaction. The framework is applied on protein structural data (PDB files), using an implicit solvent environment and a time step of 5 femtoseconds (5×10⁻¹⁵ sec). The goal of the simulation is to provide another view to the study of emergent behaviours and trends in the movement of the agent-particles in the protein complex. This information can then be used to construct an abstract model of the rules that govern the motion of the particles.

@inproceedings{2008MprouzaEURECA,
author={Ioanna K. Mprouza and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={AMoS: Agent-based Molecular Simulations},
booktitle={Student EUREKA 2008: 2nd Panhellenic Scientific Student Conference},
pages={175-186},
address={Samos, Greece},
year={2008},
month={08},
date={2008-08-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/AMoS-Agent-based-Molecular-Simulations.pdf},
keywords={Force Field Equations;Molecular Dynamics;Protein Data Bank;Protein Prediction Structure;Simulation},
abstract={Molecular dynamics (MD) is a form of computer simulation wherein atoms and molecules are allowed to interact for a period of time under known laws of physics, giving a view of the motion of the atoms. Usually the number of particles involved in a simulation is so large that the properties of the system in question are virtually impossible to compute analytically. MD circumvents this problem by employing numerical approaches. Utilizing theories and concepts from mathematics, physics and chemistry and employing algorithms from computer science and information theory, MD is a clear example of a multidisciplinary method. In this paper a new framework for MD simulations is presented, which utilizes software agents as particle representations and an empirical potential function as the means of interaction. The framework is applied on protein structural data (PDB files), using an implicit solvent environment and a time step of 5 femtoseconds (5×10⁻¹⁵ sec). The goal of the simulation is to provide another view to the study of emergent behaviours and trends in the movement of the agent-particles in the protein complex. This information can then be used to construct an abstract model of the rules that govern the motion of the particles.}
}

Fotis E. Psomopoulos and Pericles A. Mitkas
"Sizing Up: Bioinformatics in a Grid Context"
3rd Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB, Thessaloniki, Greece, 2008 Oct

A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher-throughput computing by taking advantage of many computers geographically distributed and connected by a network. Bioinformatics applications stand to gain in such an environment, both in terms of the computational resources available and in reliability and efficiency. There are several approaches in the literature which present the use of Grid resources in bioinformatics. Nevertheless, scientific progress is hindered by the fact that each researcher operates in relative isolation, regarding datasets and efforts, since there is no universally accepted methodology for performing bioinformatics tasks on the Grid. Given the complexity of both the data and the algorithms involved in the majority of cases, a case study on protein classification utilizing the Grid infrastructure may be the first step in presenting a unifying methodology for bioinformatics in a Grid context.

@inproceedings{2008PsomopoulosHSCBB,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Sizing Up: Bioinformatics in a Grid Context},
booktitle={3rd Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB},
address={Thessaloniki, Greece},
year={2008},
month={10},
date={2008-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Sizing-Up-Bioinformatics-in-a-Grid-Context.pdf},
keywords={Bioinformatics in Grid Context},
abstract={A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher-throughput computing by taking advantage of many computers geographically distributed and connected by a network. Bioinformatics applications stand to gain in such an environment, both in terms of the computational resources available and in reliability and efficiency. There are several approaches in the literature which present the use of Grid resources in bioinformatics. Nevertheless, scientific progress is hindered by the fact that each researcher operates in relative isolation, regarding datasets and efforts, since there is no universally accepted methodology for performing bioinformatics tasks on the Grid. Given the complexity of both the data and the algorithms involved in the majority of cases, a case study on protein classification utilizing the Grid infrastructure may be the first step in presenting a unifying methodology for bioinformatics in a Grid context.}
}

Fotis E. Psomopoulos, Pericles A. Mitkas, Christos S. Krinas and Ioannis N. Demetropoulos
"G-MolKnot: A grid enabled systematic algorithm to produce open molecular knots"
1st HellasGrid User Forum, Athens, Greece, 2008 Jan

@inproceedings{2008PsomopoulosHUF,
author={Fotis E. Psomopoulos and Pericles A. Mitkas and Christos S. Krinas and Ioannis N. Demetropoulos},
title={G-MolKnot: A grid enabled systematic algorithm to produce open molecular knots},
booktitle={1st HellasGrid User Forum},
pages={327-362},
publisher={Springer US},
address={Athens, Greece},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/G-MolKnot-A-grid-enabled-systematic-algorithm-to-produce-open-molecular-knots-.pdf},
keywords={open molecular knots},
}

Fani A. Tzima and Pericles A. Mitkas
"ZCS Revisited: Zeroth-level Classifier Systems for Data Mining"
2008 IEEE International Conference on Data Mining Workshops, pp. 700--709, IEEE Computer Society, Washington, DC, 2008 Dec

Learning classifier systems (LCS) are machine learning systems designed to work for both multi-step and single-step decision tasks. The latter case presents an interesting, though not widely studied, challenge for such algorithms, especially when they are applied to real-world data mining problems. The present investigation departs from the popular approach of applying accuracy-based LCS to data mining problems and aims to uncover the potential of strength-based LCS in such tasks. In this direction, ZCS-DM, a Zeroth-level Classifier System for data mining, is applied to a series of real-world classification problems and its performance is compared to that of other state-of-the-art machine learning techniques (C4.5, HIDER and XCS). Results are encouraging, since with only a modest parameter exploration phase, ZCS-DM manages to outperform its rival algorithms in eleven out of the twelve benchmark datasets used in this study. We conclude this work by identifying future research directions.

@inproceedings{2008TzimaICDMW,
author={Fani A. Tzima and Pericles A. Mitkas},
title={ZCS Revisited: Zeroth-level Classifier Systems for Data Mining},
booktitle={2008 IEEE International Conference on Data Mining Workshops},
pages={700--709},
publisher={IEEE Computer Society},
address={Washington, DC},
year={2008},
month={12},
date={2008-12-15},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/ZCS-Revisited-Zeroth-level-Classifier-Systems-for-Data-Mining.pdf},
keywords={Learning Classifier System;Zeroth-level Classifier System (ZCS)},
abstract={Learning classifier systems (LCS) are machine learning systems designed to work for both multi-step and single-step decision tasks. The latter case presents an interesting, though not widely studied, challenge for such algorithms, especially when they are applied to real-world data mining problems. The present investigation departs from the popular approach of applying accuracy-based LCS to data mining problems and aims to uncover the potential of strength-based LCS in such tasks. In this direction, ZCS-DM, a Zeroth-level Classifier System for data mining, is applied to a series of real-world classification problems and its performance is compared to that of other state-of-the-art machine learning techniques (C4.5, HIDER and XCS). Results are encouraging, since with only a modest parameter exploration phase, ZCS-DM manages to outperform its rival algorithms in eleven out of the twelve benchmark datasets used in this study. We conclude this work by identifying future research directions.}
}
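
For readers unfamiliar with strength-based LCS, the sketch below illustrates the kind of update a zeroth-level system performs on a single-step task: rules carry a scalar strength, actions are selected in proportion to strength, rewarded rules share the payoff, and matching rules that advocated other actions are taxed. This is a minimal illustration under assumed parameter values and a toy reward function, not the authors' ZCS-DM implementation.

import random

BETA, TAU = 0.2, 0.1  # learning rate and tax on non-selected matching rules

def matches(condition, state):
    # '#' is the usual LCS wildcard symbol
    return all(c in ('#', s) for c, s in zip(condition, state))

def step(population, state, reward_fn):
    match_set = [r for r in population if matches(r['cond'], state)]
    if not match_set:
        return  # a complete system would trigger covering here
    # strength-proportionate (roulette-wheel) action selection
    total = sum(r['strength'] for r in match_set)
    pick, acc = random.uniform(0, total), 0.0
    for rule in match_set:
        acc += rule['strength']
        if acc >= pick:
            chosen = rule['action']
            break
    action_set = [r for r in match_set if r['action'] == chosen]
    reward = reward_fn(state, chosen)
    for rule in match_set:
        if rule['action'] == chosen:
            # single-step ZCS credit assignment: share the payoff in the action set
            rule['strength'] += BETA * (reward / len(action_set) - rule['strength'])
        else:
            rule['strength'] -= TAU * rule['strength']  # tax the losing advocates

# toy usage: rules should learn to echo the first input bit as the action
pop = [{'cond': c, 'action': a, 'strength': 10.0}
       for c in ('1#', '0#', '##') for a in ('0', '1')]
for _ in range(2000):
    s = random.choice(['00', '01', '10', '11'])
    step(pop, s, lambda st, a: 100.0 if a == st[0] else 0.0)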

Konstantinos N. Vavliakis, Georgios Th. Karagiannis and Sophia Sotiropoulou
"The AKMON Project: Semantic Web in Byzantine Iconography"
Paving the way to a semantic web for cultural heritage, Workshop held in conjunction with Vast 2008 Conference, pp. 327-362, Springer US, Braga, Portugal, 2008 Jan

@inproceedings{2008VavliakisVAST,
author={Konstantinos N. Vavliakis and Georgios Th. Karagiannis and Sophia Sotiropoulou},
title={The AKMON Project: Semantic Web in Byzantine Iconography},
booktitle={Paving the way to a semantic web for cultural heritage, Workshop held in conjunction with Vast 2008 Conference},
pages={327-362},
publisher={Springer US},
address={Braga, Portugal},
year={2008},
month={01},
date={2008-01-01},
keywords={Semantic Web in Byzantine Iconography},
}

Theodoros Agorastos, Pericles A. Mitkas, Manolis Falelakis, Fotis E. Psomopoulos, Anastasios N. Delopoulos, Andreas Symeonidis, Sotiris Diplaris, Christos Maramis, Alexandros Batzios, Irini Lekka, Vasilis Koutkias, Themistoklis Mikos, A. Tatsis and Nikolaos Maglaveras
"Large Scale Association Studies Using Unified Data for Cervical Cancer and beyond: The ASSIST Project"
World Cancer Congress, Geneva, Switzerland, 2008 Aug

Despite the proven close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight on the origin of complex diseases. Nevertheless, association studies are most of the time inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer by providing larger, high-quality datasets, via a software system that virtually unifies multiple heterogeneous medical records, located in various sites. Furthermore, the system is being designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for the semantic modelling and fuzzy inferencing on medical knowledge aiming at meaningful data unification: (i) The ASSIST core ontology (being the first ontology ever modelling cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypotheses testing and association studies, and at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to corresponding entities as defined in the knowledge model of ASSIST. These patient data are generated by an advanced anonymisation tool also developed within the context of the project. (2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST by offering query expression as well as patient data and statistical results visualisation to the ASSIST end-users. We also have to point out that the system is easily extendable virtually to any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware, i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover, he can define cases and controls, select records adjusting their validity and use the most popular statistical tools for drawing conclusions. The logical unification of medical records of participating sites, including clinical and genetic data, to a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.

@inproceedings{WCCAssist,
author={Theodoros Agorastos and Pericles A. Mitkas and Manolis Falelakis and Fotis E. Psomopoulos and Anastasios N. Delopoulos and Andreas Symeonidis and Sotiris Diplaris and Christos Maramis and Alexandros Batzios and Irini Lekka and Vasilis Koutkias and Themistoklis Mikos and A. Tatsis and Nikolaos Maglaveras},
title={Large Scale Association Studies Using Unified Data for Cervical Cancer and beyond: The ASSIST Project},
booktitle={World Cancer Congress},
address={Geneva, Switzerland},
year={2008},
month={08},
date={2008-08-01},
url={http://issel.ee.auth.gr/wp-content/uploads/wcc2008.pdf},
keywords={Unified Data for Cervical Cancer},
abstract={Despite the proven close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight on the origin of complex diseases. Nevertheless, association studies are most of the time inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer by providing larger, high-quality datasets, via a software system that virtually unifies multiple heterogeneous medical records, located in various sites. Furthermore, the system is being designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for the semantic modelling and fuzzy inferencing on medical knowledge aiming at meaningful data unification: (i) The ASSIST core ontology (being the first ontology ever modelling cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypotheses testing and association studies, and at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to corresponding entities as defined in the knowledge model of ASSIST. These patient data are generated by an advanced anonymisation tool also developed within the context of the project. (2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST by offering query expression as well as patient data and statistical results visualisation to the ASSIST end-users. We also have to point out that the system is easily extendable virtually to any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware, i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover, he can define cases and controls, select records adjusting their validity and use the most popular statistical tools for drawing conclusions. The logical unification of medical records of participating sites, including clinical and genetic data, to a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.}
}

2007

Journal Articles

Pericles A. Mitkas, Andreas L. Symeonidis, Dionisis Kehagias and Ioannis N. Athanasiadis
"Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering"
International Journal of Product Lifecycle Management, 2, (2), pp. 1097-1111, 2007 Jan

Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents

@article{2007MitkasIJPLM,
author={Pericles A. Mitkas and Andreas L. Symeonidis and Dionisis Kehagias and Ioannis N. Athanasiadis},
title={Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering},
journal={International Journal of Product Lifecycle Management},
volume={2},
number={2},
pages={1097-1111},
year={2007},
month={01},
date={2007-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Application-of-Data-Mining-and-Intelligent-Agent-Technologies-to-Concurrent-Engineering.pdf},
keywords={multi-agent systems;MAS},
abstract={Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents}
}

Andreas L. Symeonidis, Kyriakos C. Chatzidimitriou, Ioannis N. Athanasiadis and Pericles A. Mitkas
"Data mining for agent reasoning: A synergy for training intelligent agents"
Engineering Applications of Artificial Intelligence, 20, (8), pp. 1097-1111, 2007 Dec

The task-oriented nature of data mining (DM) has already been addressed successfully through the employment of intelligent agent systems that distribute tasks, collaborate and synchronize in order to reach their ultimate goal, the extraction of knowledge. A number of sophisticated multi-agent systems (MAS) that perform DM have been developed, proving that agent technology can indeed be used in order to solve DM problems. Looking in the opposite direction, though, knowledge extracted through DM has not yet been exploited in MASs. The inductive nature of DM imposes logic limitations and hinders the application of the extracted knowledge to such deductive systems. This problem can be overcome, however, when certain conditions are satisfied a priori. In this paper, we present an approach that takes the relevant limitations and considerations into account and provides a gateway for employing DM techniques in order to augment agent intelligence. This work demonstrates how the extracted knowledge can be used for the formulation initially, and the improvement, in the long run, of agent reasoning.

@article{2007SymeonidisEAAI,
author={Andreas L. Symeonidis and Kyriakos C. Chatzidimitriou and Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={Data mining for agent reasoning: A synergy for training intelligent agents},
journal={Engineering Applications of Artificial Intelligence},
volume={20},
number={8},
pages={1097-1111},
year={2007},
month={12},
date={2007-12-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Data-mining-for-agent-reasoning-A-synergy-fortraining-intelligent-agents.pdf},
keywords={Agent Technology;Agent reasoning;Agent training;Knowledge model},
abstract={The task-oriented nature of data mining (DM) has already been addressed successfully through the employment of intelligent agent systems that distribute tasks, collaborate and synchronize in order to reach their ultimate goal, the extraction of knowledge. A number of sophisticated multi-agent systems (MAS) that perform DM have been developed, proving that agent technology can indeed be used in order to solve DM problems. Looking in the opposite direction, though, knowledge extracted through DM has not yet been exploited in MASs. The inductive nature of DM imposes logic limitations and hinders the application of the extracted knowledge to such deductive systems. This problem can be overcome, however, when certain conditions are satisfied a priori. In this paper, we present an approach that takes the relevant limitations and considerations into account and provides a gateway for employing DM techniques in order to augment agent intelligence. This work demonstrates how the extracted knowledge can be used for the formulation initially, and the improvement, in the long run, of agent reasoning.}
}
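
The common thread of this article and the retraining article that follows — mine a knowledge model offline, plug it into an agent's reasoning, then retrain by swapping in a freshly mined model — can be made concrete with a short sketch. The snippet below is a hypothetical illustration using a scikit-learn decision tree on the Iris dataset; the Agent class and its method names are invented for this example and are not part of Agent Academy.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
print(export_text(model, feature_names=list(data.feature_names)))  # the mined rule set

class Agent:
    """Agent whose decide() delegates to the mined knowledge model."""
    def __init__(self, knowledge_model):
        self.knowledge_model = knowledge_model

    def decide(self, observation):
        return self.knowledge_model.predict([observation])[0]

    def retrain(self, X, y):
        # retraining = re-running data mining on fresh data and swapping the model in
        self.knowledge_model = DecisionTreeClassifier(max_depth=3).fit(X, y)

agent = Agent(model)
print(agent.decide(data.data[0]))  # predicted class for one observation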

Andreas L. Symeonidis, Ioannis N. Athanasiadis and Pericles A. Mitkas
"A retraining methodology for enhancing agent intelligence"
Knowledge-Based Systems, 20, (4), pp. 388-396, 2007 Jan

Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures, as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both design and control of multi-agent systems (MAS), as well as agent training. We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of data mining extracted knowledge. In this paper, we present the methodology and tools for agent retraining. Through experimental results with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead to the improvement, in the long run, of agent intelligence.

@article{2007SymeonidisKBS,
author={Andreas L. Symeonidis and Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={A retraining methodology for enhancing agent intelligence},
journal={Knowledge-Based Systems},
volume={20},
number={4},
pages={388-396},
year={2007},
month={01},
date={2007-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-retraining-methodology-for-enhancing-agent-intelligence.pdf},
keywords={business data processing;logic programming},
abstract={Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures, as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both design and control of multi-agent systems (MAS), as well as agent training. We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of data mining extracted knowledge. In this paper, we present the methodology and tools for agent retraining. Through experimental results with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead to the improvement, in the long run, of agent intelligence.}
}

Andreas L. Symeonidis, Kyriakos C. Chatzidimitriou, Dionysios Kehagias and Pericles A. Mitkas
"A Multi-agent Infrastructure for Enhancing ERP system Intelligence"
Scalable Computing: Practice and Experience, 8, (1), pp. 101-114, 2007 Jan

Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. In this paper we present an alternative approach for incorporating adaptive business intelligence into the company

@article{2007SymeonidisSCPE,
author={Andreas L. Symeonidis and Kyriakos C. Chatzidimitriou and Dionysios Kehagias and Pericles A. Mitkas},
title={A Multi-agent Infrastructure for Enhancing ERP system Intelligence},
journal={Scalable Computing: Practice and Experience},
volume={8},
number={1},
pages={101-114},
year={2007},
month={01},
date={2007-01-01},
url={http://www.scpe.org/index.php/scpe/article/viewFile/401/75},
keywords={Adaptive Decision Making;ERP systems;Mutli-Agent Systems;Soft computing},
abstract={Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. In this paper we present an alternative approach for incorporating adaptive business intelligence into the company}
}

2007

Conference Papers

Chrysa Collyda, Sotiris Diplaris, Pericles A. Mitkas, Nicos Maglaveras and Costas Pappas
"Profile Fuzzy Hidden Markov Models for Phylogenetic Analysis and Protein Classification"
5th Annual Rocky Mountain Bioinformatics Conference, pp. 327-362, Springer US, Aspen/Snowmass, CO, USA, 2007 Nov

@inproceedings{2007CollydaARMBC,
author={Chrysa Collyda and Sotiris Diplaris and Pericles A. Mitkas and Nicos Maglaveras and Costas Pappas},
title={Profile Fuzzy Hidden Markov Models for Phylogenetic Analysis and Protein Classification},
booktitle={5th Annual Rocky Mountain Bioinformatics Conference},
pages={327-362},
publisher={Springer US},
address={Aspen/Snowmass, CO, USA},
year={2007},
month={11},
date={2007-11-30},
keywords={Fuzzy Hidden Markov Models},
}

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"Evaluating Knowledge Intensive Multi-Agent Systems"
Autonomous Intelligent Systems: Multi-Agents and Data Mining (AIS-ADM 2007), pp. 74-87, Springer Berlin / Heidelberg, St. Petersburg, Russia, 2007 Jun

As modern applications tend to stretch between large, ever-growing datasets and increasing demand for meaningful content at the user end, more elaborate and sophisticated knowledge extraction technologies are needed. Towards this direction, the inherently contradicting technologies of deductive software agents and inductive data mining have been integrated, in order to address knowledge intensive problems. However, there exists no generalized evaluation methodology for assessing the efficiency of such applications. On the one hand, existing data mining evaluation methods focus only on algorithmic precision, ignoring overall system performance issues. On the other hand, existing systems evaluation techniques are insufficient, as the emergent intelligent behavior of agents introduces unpredictable factors of performance. In this paper, we present a generalized methodology for performance evaluation of intelligent agents that employ knowledge models produced through data mining. The proposed methodology consists of concise steps for selecting appropriate metrics, defining measurement methodologies and aggregating the measured performance indicators into thorough system characterizations. The paper concludes with a demonstration of the proposed methodology to a real-world application in the Supply Chain Management domain.

@inproceedings{2007DimouAIS,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Evaluating Knowledge Intensive Multi-Agent Systems},
booktitle={Autonomous Intelligent Systems: Multi-Agents and Data Mining (AIS-ADM 2007)},
pages={74-87},
publisher={Springer Berlin / Heidelberg},
address={St. Petersburg, Russia},
year={2007},
month={06},
date={2007-06-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Evaluating-Knowledge-Intensive-Multi-agent-Systems.pdf},
keywords={air pollution;decision making;environmental science computing},
abstract={As modern applications tend to stretch between large, ever-growing datasets and increasing demand for meaningful content at the user end, more elaborate and sophisticated knowledge extraction technologies are needed. Towards this direction, the inherently contradicting technologies of deductive software agents and inductive data mining have been integrated, in order to address knowledge intensive problems. However, there exists no generalized evaluation methodology for assessing the efficiency of such applications. On the one hand, existing data mining evaluation methods focus only on algorithmic precision, ignoring overall system performance issues. On the other hand, existing systems evaluation techniques are insufficient, as the emergent intelligent behavior of agents introduces unpredictable factors of performance. In this paper, we present a generalized methodology for performance evaluation of intelligent agents that employ knowledge models produced through data mining. The proposed methodology consists of concise steps for selecting appropriate metrics, defining measurement methodologies and aggregating the measured performance indicators into thorough system characterizations. The paper concludes with a demonstration of the proposed methodology to a real-world application in the Supply Chain Management domain.}
}

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"Towards a Generic Methodology for Evaluating MAS Performance"
IEEE International Conference on Integration of Knowledge Intensive Multi-Agent Systems - KIMAS'07, pp. 174--179, Springer Berlin / Heidelberg, Waltham, MA, USA, 2007 Apr

As Agent Technology (AT) becomes a well-established engineering field of computing, the need for generalized, standardized methodologies for agent evaluation is imperative. Despite the plethora of available development tools and theories that researchers in agent computing have access to, there is a remarkable lack of general metrics, tools, benchmarks and experimental methods for formal validation and comparison of existing or newly developed systems. It is argued that AT has reached a certain degree of maturity, and it is therefore feasible to move from ad-hoc, domain-specific evaluation methods to standardized, repeatable and easily verifiable procedures. In this paper, we outline a first attempt towards a generic evaluation methodology for MAS performance. Instead of following the research path towards defining more powerful mathematical description tools for determining intelligence and performance metrics, we adopt an engineering point of view to the problem of deploying a methodology that is both implementation and domain independent. The proposed methodology consists of a concise set of steps, novel theoretical representation tools and appropriate software tools that assist evaluators in selecting the appropriate metrics, undertaking measurement and aggregation techniques for the system at hand.

@inproceedings{2007DimouKIMAS,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Towards a Generic Methodology for Evaluating MAS Performance},
booktitle={IEEE International Conference on Integration of Knowledge Intensive Multi-Agent Systems - KIMAS'07},
pages={174--179},
publisher={Springer Berlin / Heidelberg},
address={Waltham, MA, USA},
year={2007},
month={04},
date={2007-04-29},
url={http://issel.ee.auth.gr/wp-content/uploads/Dimou-KIMAS-07.pdf},
keywords={agent evaluation},
abstract={As Agent Technology (AT) becomes a well-established engineering field of computing, the need for generalized, standardized methodologies for agent evaluation is imperative. Despite the plethora of available development tools and theories that researchers in agent computing have access to, there is a remarkable lack of general metrics, tools, benchmarks and experimental methods for formal validation and comparison of existing or newly developed systems. It is argued that AT has reached a certain degree of maturity, and it is therefore feasible to move from ad-hoc, domain-specific evaluation methods to standardized, repeatable and easily verifiable procedures. In this paper, we outline a first attempt towards a generic evaluation methodology for MAS performance. Instead of following the research path towards defining more powerful mathematical description tools for determining intelligence and performance metrics, we adopt an engineering point of view to the problem of deploying a methodology that is both implementation and domain independent. The proposed methodology consists of a concise set of steps, novel theoretical representation tools and appropriate software tools that assist evaluators in selecting the appropriate metrics, undertaking measurement and aggregation techniques for the system at hand.}
}

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"An agent structure for evaluating micro-level MAS performance"
7th Workshop on Performance Metrics for Intelligent Systems - PerMIS-07, pp. 243--250, IEEE Computer Society, Gaithersburg, MD, 2007 Aug

Although the need for well-established engineering approaches in Intelligent Systems (IS) performance evaluation is pressing, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability, multi-disciplinary issues and immaturity of the field of IS. Even existing well-tested evaluation methodologies applied in other domains, such as (traditional) software engineering, prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. In this paper, we present a generic methodology and associated tools for evaluating the performance of IS, by exploiting the software agent paradigm as a representative modeling concept for intelligent systems. Based on the assessment of observable behavior of agents or multi-agent systems, the proposed methodology provides a concise set of guidelines and representation tools for evaluators to use. The methodology comprises three main tasks, namely metrics selection, monitoring agent activities for appropriate measurements, and aggregation of the conducted measurements. Coupled to this methodology is the Evaluator Agent Framework, which aims at the automation of most of the provided steps of the methodology, by providing Graphical User Interfaces for metrics organization and results presentation, as well as a code generating module that produces a skeleton of a monitoring agent. Once this agent is completed with domain-specific code, it is appended to the runtime of a multi-agent system and collects information from observable events and messages. Both the evaluation methodology and the automation framework are tested and demonstrated in Symbiosis, a MAS simulation environment for competing groups of autonomous entities.

@inproceedings{2007DimouPERMIS,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={An agent structure for evaluating micro-level MAS performance},
booktitle={7th Workshop on Performance Metrics for Intelligent Systems - PerMIS-07},
pages={243--250},
publisher={IEEE Computer Society},
address={Gaithersburg, MD},
year={2007},
month={08},
date={2007-08-28},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-agent-structure-for-evaluating-micro-level-MAS-performance.pdf},
keywords={automated evaluation;autonomous agents;performance evaluation methodology},
abstract={Although the need for well-established engineering approaches in Intelligent Systems (IS) performance evaluation is pressing, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability, multi-disciplinary issues and immaturity of the field of IS. Even existing well-tested evaluation methodologies applied in other domains, such as (traditional) software engineering, prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. In this paper, we present a generic methodology and associated tools for evaluating the performance of IS, by exploiting the software agent paradigm as a representative modeling concept for intelligent systems. Based on the assessment of observable behavior of agents or multi-agent systems, the proposed methodology provides a concise set of guidelines and representation tools for evaluators to use. The methodology comprises three main tasks, namely metrics selection, monitoring agent activities for appropriate measurements, and aggregation of the conducted measurements. Coupled to this methodology is the Evaluator Agent Framework, which aims at the automation of most of the provided steps of the methodology, by providing Graphical User Interfaces for metrics organization and results presentation, as well as a code generating module that produces a skeleton of a monitoring agent. Once this agent is completed with domain-specific code, it is appended to the runtime of a multi-agent system and collects information from observable events and messages. Both the evaluation methodology and the automation framework are tested and demonstrated in Symbiosis, a MAS simulation environment for competing groups of autonomous entities.}
}
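
As a toy illustration of the methodology's final task — aggregating heterogeneous measurements into a single system characterization — the sketch below normalizes each metric to [0, 1] and combines the values with evaluator-chosen weights. The metric names, value ranges and weights are assumptions made for illustration; the paper's actual representation tools are richer than a weighted sum.

def normalize(value, lo, hi, higher_is_better=True):
    x = min(max((value - lo) / (hi - lo), 0.0), 1.0)  # clamp to [0, 1]
    return x if higher_is_better else 1.0 - x

# hypothetical measurements collected by a monitoring agent: (value, lo, hi, direction)
measurements = {
    "task_success_rate": (0.85, 0.0, 1.0, True),
    "mean_response_ms":  (420.0, 0.0, 1000.0, False),
    "messages_per_task": (37.0, 0.0, 100.0, False),
}
weights = {"task_success_rate": 0.5, "mean_response_ms": 0.3, "messages_per_task": 0.2}

score = sum(w * normalize(*measurements[m]) for m, w in weights.items())
print(f"composite performance score: {score:.3f}")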

Sotiris Diplaris, G. Papachristoudis and Pericles A. Mitkas
"SoFoCles: Feature Filtering for Microarray Classification Based on Gene Ontology"
Hellenic Bioinformatics and Medical Informatics Meeting, pp. 279--282, IEEE Computer Society, Athens, Greece, 2007 Oct

@inproceedings{2007DiplarisHBMIM,
author={Sotiris Diplaris and G. Papachristoudis and Pericles A. Mitkas},
title={SoFoCles: Feature Filtering for Microarray Classification Based on Gene Ontology},
booktitle={Hellenic Bioinformatics and Medical Informatics Meeting},
pages={279--282},
publisher={IEEE Computer Society},
address={Athens, Greece},
year={2007},
month={10},
date={2007-10-04},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/SoFoCles-Feature-filtering-for-microarray-classification-based-on-Gene-Ontology.pdf},
keywords={art;inference mechanisms;ontologies (artificial intelligence);query processing},
}

Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas
"Modeling Gene Ontology Terms using Finite State Automata"
Hellenic Bioinformatics and Medical Informatics Meeting, pp. 279--282, IEEE Computer Society, Biomedical Research Foundation, Academy of Athens, Greece, 2007 Oct

@inproceedings{2007GkekasBioacademy,
author={Christos N. Gkekas and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Modeling Gene Ontology Terms using Finite State Automata},
booktitle={Hellenic Bioinformatics and Medical Informatics Meeting},
pages={279--282},
publisher={IEEE Computer Society},
address={Biomedical Research Foundation, Academy of Athens, Greece},
year={2007},
month={10},
date={2007-10-01},
keywords={Modeling Gene Ontology},
}

Ioanna K. Mprouza, Fotis E. Psomopoulos and Pericles A. Mitkas
"Simulating molecular dynamics through intelligent software agents"
Hellenic Bioinformatics and Medical Informatics Meeting, pp. 279--282, IEEE Computer Society, Biomedical Research Foundation, Academy of Athens, Greece, 2007 Oct

@inproceedings{2007MprouzaBioacademy,
author={Ioanna K. Mprouza and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Simulating molecular dynamics through intelligent software agents},
booktitle={Hellenic Bioinformatics and Medical Informatics Meeting},
pages={279--282},
publisher={IEEE Computer Society},
address={Biomedical Research Foundation, Academy of Athens, Greece},
year={2007},
month={10},
date={2007-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Simulating-molecular-dynamics-through-intelligent-software-agents.pdf},
keywords={Modeling Gene Ontology},
}

P. Tsimpos, Sotiris Diplaris, Pericles A. Mitkas and Georgios Banos
"Mendelian Samples Mining and Cluster Monitoring for National Genetic Evaluations with AGELI"
Interbull Annual Meeting, pp. 73-77, Dublin, Ireland, 2007 Aug

We present an innovative approach for pre-processing, analysis, alarm issuing and presentation of national genetic evaluation data with AGELI using Mendelian sampling mining and clustering techniques. AGELI (Eleftherohorinou et al., 2005) is a software platform that integrates the whole data mining procedure in order to produce a qualitative description of national genetic evaluation results, concerning three milk yield traits. Quality assurance constitutes a critical issue in the range of services provided by Interbull. Although the standard method appears sufficiently functional (Klei et al., 2002), during the last years there has been progress concerning an alternative validation method of genetic evaluation results using data mining (Banos et al., 2003; Diplaris et al., 2004), potentially leading to inference on data quality. This methodology was incorporated in AGELI in order to assess and assure data quality. The whole idea was to exploit decision trees and apply a goodness of fit test to individual tree nodes and an F-test to corresponding nodes from consecutive evaluation runs, aiming at discovering possible abnormalities in bull proof distributions. In a previous report (Banos et al., 2003) predictions led to associations, which were qualitatively compared to actual proofs, and existing discrepancies were confirmed using a data set with known errors. In this report we present AGELI’s novel methods of performing data mining by using a series of decision tree and clustering algorithms. Different decision tree models can now be created in order to assess data quality by evaluating data with various criteria. To further assess data quality, a novel technique for cluster monitoring is implemented in AGELI. It is possible to form clusters of bulls and perform unsupervised monitoring on them over the entire period of genetic evaluation runs. Finally, analyses were conducted using bull Mendelian sampling over the whole dataset.

@inproceedings{2007TsimposIAM,
author={P. Tsimpos and Sotiris Diplaris and Pericles A. Mitkas and Georgios Banos},
title={Mendelian Samples Mining and Cluster Monitoring for National Genetic Evaluations with AGELI},
booktitle={Interbull Annual Meeting},
pages={73-77},
address={Dublin, Ireland},
year={2007},
month={08},
date={2007-08-23},
url={http://issel.ee.auth.gr/wp-content/uploads/Tsimpos.pdf},
keywords={AGELI;Cluster Monitoring;Mendelian Samples Mining},
abstract={We present an innovative approach for pre-processing, analysis, alarm issuing and presentation of national genetic evaluation data with AGELI using Mendelian sampling mining and clustering techniques. AGELI (Eleftherohorinou et al., 2005) is a software platform that integrates the whole data mining procedure in order to produce a qualitative description of national genetic evaluation results, concerning three milk yield traits. Quality assurance constitutes a critical issue in the range of services provided by Interbull. Although the standard method appears sufficiently functional (Klei et al., 2002), during the last years there has been progress concerning an alternative validation method of genetic evaluation results using data mining (Banos et al., 2003; Diplaris et al., 2004), potentially leading to inference on data quality. This methodology was incorporated in AGELI in order to assess and assure data quality. The whole idea was to exploit decision trees and apply a goodness of fit test to individual tree nodes and an F-test to corresponding nodes from consecutive evaluation runs, aiming at discovering possible abnormalities in bull proof distributions. In a previous report (Banos et al., 2003) predictions led to associations, which were qualitatively compared to actual proofs, and existing discrepancies were confirmed using a data set with known errors. In this report we present AGELI’s novel methods of performing data mining by using a series of decision tree and clustering algorithms. Different decision tree models can now be created in order to assess data quality by evaluating data with various criteria. To further assess data quality, a novel technique for cluster monitoring is implemented in AGELI. It is possible to form clusters of bulls and perform unsupervised monitoring on them over the entire period of genetic evaluation runs. Finally, analyses were conducted using bull Mendelian sampling over the whole dataset.}
}
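
The statistical checks named in the abstract can be made concrete with a short sketch. The Python fragment below (using NumPy and SciPy) shows one plausible form of the two tests: a goodness-of-fit test on the proof distribution within a single tree node, and an F-test comparing the variances of corresponding nodes from two consecutive evaluation runs. The function names, the normality assumption and the alarm threshold are illustrative assumptions, not AGELI's actual implementation.

# Illustrative sketch, not AGELI code: per-node quality alarms on bull proofs.
import numpy as np
from scipy import stats

def f_test_alarm(proofs_prev, proofs_curr, alpha=0.05):
    # Two-sided F-test on the variance ratio of the same tree node
    # across two consecutive evaluation runs.
    f = np.var(proofs_curr, ddof=1) / np.var(proofs_prev, ddof=1)
    dfn, dfd = len(proofs_curr) - 1, len(proofs_prev) - 1
    p = 2 * min(stats.f.cdf(f, dfn, dfd), stats.f.sf(f, dfn, dfd))
    return p < alpha  # True -> flag a possible abnormality in this node

def goodness_of_fit_alarm(proofs, alpha=0.05):
    # Kolmogorov-Smirnov goodness-of-fit check of a node's proofs against
    # a normal distribution fitted to the node itself (assumed model).
    mu, sigma = np.mean(proofs), np.std(proofs, ddof=1)
    _, p = stats.kstest(proofs, 'norm', args=(mu, sigma))
    return p < alpha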

Fani A. Tzima, Kostas D. Karatzas, Pericles A. Mitkas and Stavros Karathanasis
"Using data-mining techniques for PM10 forecasting in the metropolitan area of Thessaloniki, Greece"
IJCNN 2007 - International Joint Conference on Neural Networks, pp. 2752--2757, Orlando, Florida, 2007 Aug

Knowledge extraction and accurate forecasting are among the most challenging issues concerning the use of computational intelligence (CI) methods in real-world applications. Both aspects are essential in cases where decision making is required, especially in domains directly related to the quality of life, like the quality of the atmospheric environment. In the present paper we focus on short-term Air Quality (AQ) forecasting as a key constituent of every AQ management system, and we apply various CI methods and tools for assessing PM10 concentration values. We report our experimental strategy and preliminary results, which reveal interesting interrelations between AQ and various city operations while performing satisfactorily in predicting concentration values.

@inproceedings{2007TzimaIJCNN,
author={Fani A. Tzima and Kostas D. Karatzas and Pericles A. Mitkas and Stavros Karathanasis},
title={Using data-mining techniques for PM10 forecasting in the metropolitan area of Thessaloniki, Greece},
booktitle={IJCNN 2007 - International Joint Conference on Neural Networks},
pages={2752--2757},
address={Orlando, Florida},
year={2007},
month={08},
date={2007-08-12},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Using-data-mining-techniques-for-PM10-forecasting-in-the-metropolitan-area-of-Thessaloniki-Greece.pdf},
keywords={air pollution;decision making;environmental science computing},
abstract={Knowledge extraction and accurate forecasting are among the most challenging issues concerning the use of computational intelligence (CI) methods in real-world applications. Both aspects are essential in cases where decision making is required, especially in domains directly related to the quality of life, like the quality of the atmospheric environment. In the present paper we focus on short-term Air Quality (AQ) forecasting as a key constituent of every AQ management system, and we apply various CI methods and tools for assessing PM10 concentration values. We report our experimental strategy and preliminary results, which reveal interesting interrelations between AQ and various city operations while performing satisfactorily in predicting concentration values.}
}
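
As an illustration of framing short-term PM10 prediction as a supervised learning task, the sketch below trains a regressor on synthetic data whose features loosely mirror plausible predictors (previous-day PM10, meteorology, and a day-of-week proxy for city operations). The feature set, the model choice and all numbers are assumptions for the example, not the paper's experimental configuration.

# Illustrative sketch with synthetic data; not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(10, 120, n),  # PM10 on day t-1 (ug/m3)
    rng.uniform(-5, 35, n),   # mean temperature (deg C)
    rng.uniform(0, 10, n),    # wind speed (m/s)
    rng.integers(0, 7, n),    # day of week, a crude proxy for city operations
])
y = 0.6 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(0, 5, n)  # synthetic PM10 at day t

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (ug/m3):", mean_absolute_error(y_te, model.predict(X_te)))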

Fani A. Tzima, Andreas L. Symeonidis and Pericles A. Mitkas
"Symbiosis: using predator-prey games as a test bed for studying competitive coevolution"
IEEE KIMAS conference, pp. 115-120, Springer Berlin / Heidelberg, Waltham, Massachusetts, 2007 Apr

The animat approach constitutes an intriguing attempt to study and comprehend the behavior of adaptive, learning entities in complex environments. Further inspired by the notions of co-evolution and evolutionary arms races, we have developed Symbiosis, a virtual ecosystem that hosts two self-organizing, combating species: prey and predators. All animats live and evolve in this shared environment; they are self-maintaining and engage in a series of vital activities (nutrition, growth, communication) with the ultimate goals of survival and reproduction. The main objective of Symbiosis is to study the behavior of ecosystem members, especially in terms of the emergent learning mechanisms and the effect of co-evolution on the evolved behavioral strategies. In this direction, several indicators are used to assess individual behavior, with the overall effectiveness metric depending strongly on the animats' net energy gain and reproduction rate. Several experiments have been conducted with the developed simulator under various environmental conditions. Overall, experimental results support our original hypothesis that co-evolution is a driving factor in the animat learning procedure.

@inproceedings{2007TzimaKIMAS,
author={Fani A. Tzima and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Symbiosis: using predator-prey games as a test bed for studying competitive coevolution},
booktitle={IEEE KIMAS conference},
pages={115-120},
publisher={Springer Berlin / Heidelberg},
address={Waltham, Massachusetts},
year={2007},
month={04},
date={2007-04-29},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Symbiosis-using-predator-prey-games-as-a-test-bed-for-studying-competitive-coevolution.pdf},
keywords={artificial life;learning (artificial intelligence);predator-prey systems},
abstract={The animat approach constitutes an intriguing attempt to study and comprehend the behavior of adaptive, learning entities in complex environments. Further inspired by the notions of co-evolution and evolutionary arms races, we have developed Symbiosis, a virtual ecosystem that hosts two self-organizing, combating species: prey and predators. All animats live and evolve in this shared environment; they are self-maintaining and engage in a series of vital activities (nutrition, growth, communication) with the ultimate goals of survival and reproduction. The main objective of Symbiosis is to study the behavior of ecosystem members, especially in terms of the emergent learning mechanisms and the effect of co-evolution on the evolved behavioral strategies. In this direction, several indicators are used to assess individual behavior, with the overall effectiveness metric depending strongly on the animats' net energy gain and reproduction rate. Several experiments have been conducted with the developed simulator under various environmental conditions. Overall, experimental results support our original hypothesis that co-evolution is a driving factor in the animat learning procedure.}
}
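
The effectiveness metric the abstract mentions depends on each animat's net energy gain and reproduction rate. The minimal sketch below shows one plausible way such per-animat bookkeeping could look; the class, the constants and the effectiveness weighting are assumptions for illustration, not the Symbiosis implementation.

# Illustrative sketch of animat energy bookkeeping; all values are assumed.
from dataclasses import dataclass

@dataclass
class Animat:
    energy: float = 100.0
    offspring: int = 0

    def step(self, food_gain, move_cost=1.0,
             reproduce_at=150.0, reproduce_cost=60.0):
        # One simulation tick: feed, pay a metabolic cost, and reproduce
        # when the stored energy allows it.
        self.energy += food_gain - move_cost
        if self.energy >= reproduce_at:
            self.energy -= reproduce_cost
            self.offspring += 1
        return self.energy > 0  # False means the animat dies

prey = Animat()
prey.step(food_gain=3.0)
effectiveness = prey.energy + 50.0 * prey.offspring  # assumed weighting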

Fani A. Tzima, Ioannis N. Athanasiadis and Pericles A. Mitkas
"Agent-based modelling and simulation in the irrigation management sector: applications and potential"
Options Mediterraneennes, Series B: Studies and Research, Proceedings of the WASAMED International Conference, pp. 273--286, 2007 Feb

In the field of sustainable development, the management of common-pool resources is an issue of major importance. Several models that attempt to address the problem can be found in the literature, especially in the case of irrigation management. In fact, the latter task represents a great challenge for researchers and decision makers, as it has to cope with various water-related activities and conflicting user perspectives within a specified geographic area. Simulation models, and particularly Agent-Based Modelling and Simulation (ABMS), can facilitate overcoming these difficulties: their inherent ability to integrate ecological and socio-economic dimensions allows their effective use as tools for evaluating the possible effects of different management plans, as well as for communicating with stakeholders. This great potential has already been recognized in the irrigation management sector, where a great number of test cases have already adopted the modelling paradigm of multi-agent simulation. Our current study of agent-based models for irrigation management draws some interesting conclusions regarding the geographic and representation scale of the reviewed models, as well as the degree of stakeholder involvement in the various development phases. Overall, we argue that ABMS tools have great potential for representing dynamic processes in integrated assessment tools for irrigation management. Such tools, when effectively capturing social interactions and coupling them with environmental and economic models, can promote active involvement of interested parties and produce sustainable and acceptable solutions to irrigation management problems.

@inproceedings{2007TzimaWASAMED,
author={Fani A. Tzima and Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={Agent-based modelling and simulation in the irrigation management sector: applications and potential},
booktitle={Options Mediterraneennes, Series B: Studies and Research, Proceedings of the WASAMED International Conference},
pages={273--286},
year={2007},
month={02},
date={2007-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Agent-based-modelling-and-simulation-in-the-irrigation-management-sector.pdf},
keywords={agent;agent-based modeling;irrigation management;stakeholder participation},
abstract={In the field of sustainable development, the management of common-pool resources is an issue of major importance. Several models that attempt to address the problem can be found in the literature, especially in the case of irrigation management. In fact, the latter task represents a great challenge for researchers and decision makers, as it has to cope with various water-related activities and conflicting user perspectives within a specified geographic area. Simulation models, and particularly Agent-Based Modelling and Simulation (ABMS), can facilitate overcoming these difficulties: their inherent ability to integrate ecological and socio-economic dimensions allows their effective use as tools for evaluating the possible effects of different management plans, as well as for communicating with stakeholders. This great potential has already been recognized in the irrigation management sector, where a great number of test cases have already adopted the modelling paradigm of multi-agent simulation. Our current study of agent-based models for irrigation management draws some interesting conclusions regarding the geographic and representation scale of the reviewed models, as well as the degree of stakeholder involvement in the various development phases. Overall, we argue that ABMS tools have great potential for representing dynamic processes in integrated assessment tools for irrigation management. Such tools, when effectively capturing social interactions and coupling them with environmental and economic models, can promote active involvement of interested parties and produce sustainable and acceptable solutions to irrigation management problems.}
}

Konstantinos N. Vavliakis, Andreas L. Symeonidis, Georgios T. Karagiannis and Pericles A. Mitkas
"Eikonomia-An Integrated Semantically Aware Tool for Description and Retrieval of Byzantine Art Information"
ICTAI, pp. 279--282, IEEE Computer Society, Washington, DC, USA, 2007 Oct

Semantic annotation and querying are currently applied in a number of diverse disciplines, demonstrating the added value of such an approach and, consequently, the need for more elaborate (either case-specific or generic) tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, has been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, projection of query results and creation of restrictions.

@inproceedings{2007VavliakisICTAI,
author={Konstantinos N. Vavliakis and Andreas L. Symeonidis and Georgios T. Karagiannis and Pericles A. Mitkas},
title={Eikonomia - An Integrated Semantically Aware Tool for Description and Retrieval of Byzantine Art Information},
booktitle={ICTAI},
pages={279--282},
publisher={IEEE Computer Society},
address={Washington, DC, USA},
year={2007},
month={10},
date={2007-10-29},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Eikonomia-–-An-Integrated-Semantically-Aware-Tool-for-Description-and-Retrieval-of-Byzantine-Art-Information-.pdf},
keywords={art;inference mechanisms;ontologies (artificial intelligence);query processing},
abstract={Semantic annotation and querying are currently applied in a number of diverse disciplines, demonstrating the added value of such an approach and, consequently, the need for more elaborate (either case-specific or generic) tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, has been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, projection of query results and creation of restrictions.}
}
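
To make the automated creation of SPARQL queries mentioned above more concrete, the sketch below runs a simple query over a CIDOC-CRM-style RDF graph with the rdflib Python library. The file name and the specific CIDOC-CRM class and property identifiers are assumptions for illustration, not Eikonomia's actual schema or interface.

# Illustrative sketch; the graph file and the CRM identifiers are assumed.
from rdflib import Graph

g = Graph()
g.parse("byzantine_artworks.rdf")  # hypothetical RDF export of the ontology

query = """
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
SELECT ?object ?note WHERE {
    ?object a crm:E22_Man-Made_Object ;
            crm:P3_has_note ?note .
}
"""
for obj, note in g.query(query):
    print(obj, note)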

2007

Incollection

Pericles A. Mitkas and Paraskevi Nikolaidou
"Agents and Multi-Agent Systems in Supply Chain Management: An Overview"
Agents and Web Services in Virtual Enterprises, pp. 223-243, IGI Global, 2007 Jan

This chapter discuss