Pericles A. Mitkas

Aristotle University of Thessaloniki
Department of Electrical and Computer Engineering
54124, Thessaloniki

Tel: +30 2310 99 6390
Fax: +30 2310 99 6447
Email: mitkas (at) eng [dot] auth [dot] gr
LinkedIn | Curriculum Vitae (October 2019)

Personal Information

Born in 1962 in Florina. Married to Sofia Mardyri. Two children: Alexandros-Akylas and Danai-Zoi.

Education

1990  Ph.D. in Computer Engineering, Syracuse University, Syracuse, NY
1987  M.Sc. in Computer Engineering, Syracuse University, Syracuse, NY
1985  Diploma in Electrical Engineering, Aristotle University of Thessaloniki (AUTH)

Academic Experience

University of Pennsylvania, Philadelphia, USA

2019 – 2020  Visiting Professor, Department of Electrical and Systems Engineering
             Research collaboration on Machine Learning with the GRASP Laboratory
             and the PRECISE group of the Department

Aristotle University of Thessaloniki

2006 –       Professor
             Department of Electrical and Computer Engineering
1999 – 2005  Associate Professor
             Department of Electrical and Computer Engineering
Teaching:
  • Undergraduate courses
    1. Data Structures
    2. Software Engineering
    3. Databases
  • Graduate courses
    1. Advanced Topics in Databases
    2. Software Design and Development Techniques (with A. Symeonidis)
    3. Databases and Knowledge Discovery (with A. Symeonidis)
Research:
  • 2005 –  Director of the Information Processing and Computing Laboratory
Research Interests:
  • Machine learning and big data analytics
  • Parallel architectures for very large databases
  • Intelligent software agents
  • Data mining and knowledge discovery
  • Expert systems for environmental impact assessment
  • Bioinformatics
  • Semantic web
  • Optoelectronic computing systems for large databases
Administrative Duties:
  • 2014 – 2019  RECTOR
  • 2014 – 2019  Representative of the Conference of Rectors of Greek Universities
    to the European University Association
  • 2018 – 2019  President of the Balkan University Association
  • 2018 – 2020  President of the Black Sea University Network
  • 2011 – 2013  Chair of the Department of Electrical and Computer Engineering
  • 2010 – 2014  Deputy Chair of the University's Committee for Networks, Communications and Informatics
  • 2013 – 2020  Member of the Board of the Alexander Innovation Zone of Thessaloniki
    (AUTH representative)
  • 2005 – 2007  Director of the Electronics and Computers Division
  • 2001 – 2005  Deputy Chair of the Department of Electrical and Computer Engineering

Informatics and Telematics Institute (ITI)
Centre for Research and Technology Hellas (CERTH)

  • 2001 – 2014  Collaborating faculty member
  • 2002 – 2009  Deputy Director of ITI
  • 2005 – 2009  Member of the Scientific Council of ITI
  • Director of the Intelligent Systems & Software Engineering Laboratory

University of Central Greece

2009 – 2010  Member of the University's Governing Committee
2009 – 2010  Chair of the Department and of the Interim General Assembly
             Department of Computer Science and Biomedical Informatics
2006 – 2010  Member of the Interim General Assembly
             Department of Computer Science and Biomedical Informatics

Colorado State University, Colorado, U.S.A.

1996 – 2000  Associate Professor
             Department of Electrical and Computer Engineering
1990 – 1996  Assistant Professor
             Department of Electrical and Computer Engineering
Teaching:
  • Undergraduate courses
    1. EE101 Engineering Computing
    2. EE102 Digital Circuit Logic
    3. EE454 Database Computer Systems
    4. EE457 Optical Information Processing
  • Graduate courses
    1. EE554 Computer Architecture
    2. EE557 Digital Optical Computing
    3. EE580 Database Computers
Research:
  • 1992 – 2000  Founder and Director of the Optical Computing Laboratory
  • 1991 – 2000  Member of the NSF Engineering Research Center for Optoelectronic Computing Systems

Vrije Universiteit Brussel, Belgium

1996 – 1997  Visiting Researcher
             Department of Applied Physics
             Research on optical computing systems

Syracuse University, New York, U.S.A.

1985, 1989 – 1990  Teaching Assistant
                   Department of Electrical and Computer Engineering
                   Electric Circuits Lab and Expert Systems
1986 – 1988        Research Assistant
                   Department of Electrical and Computer Engineering
                   Studied problems associated with Very Large Data/Knowledge Bases

Honors and Awards

2012  1st Place in the International Trading Agent Competition (TAC2012), Ad Auctions Game
      (with K. Chatzidimitriou and A. Symeonidis)
2012  1st Place in the International Competition MediaEval 2012 (with K. Vavliakis)
2010  1st Place in the International Trading Agent Competition (TAC2010), Market Design Game
      (with L. Stavrogiannis)
2005  1st Place in the International Trading Agent Competition (TAC2005), Classic Game
      (with D. Kehagias and P. Toulis)
2010, 2013  3rd Place in the International Trading Agent Competition, Ad Auctions Game
      (with K. Chatzidimitriou)
2005  3rd Place in the International Trading Agent Competition (TAC2005), SCM Game
      (with A. Symeonidis, K. Chatzidimitriou and I. Kontogounis)
2008, 2009  5th Place in the International Trading Agent Competition, Market Design Game
      (with L. Stavrogiannis)
2001  Marquis Who's Who in America, 55th Ed.
2000 – 2001  Marquis Who's Who in America Science and Engineering, 5th Ed.
2000 – 2001  Strathmore's Who's Who
1995  Engineering Dean's Council Award for best performance
      in the Electrical and Computer Engineering Department, Colorado State Univ.
1992  Outstanding Faculty Counselor for an IEEE Student Branch, IEEE Region V

Fellowships and Scholarships

9/1988 – 5/1989  University Fellow, Syracuse University
9/1985 – 8/1988  Bodossaki Foundation Fellowship
1981 – 1985      Greek State Scholarships Foundation (IKY) Fellowship
1980 – 1985      Athens Club Scholarship

Membership in Professional Chambers and Scientific Societies

  • IEEE and the IEEE Computer Society
    • 1998 –       Senior Member
    • 1990 – 1998  Member
    • 1983 – 1990  Student Member
  • Optical Society of America (OSA)
  • Society of Photo-Optical Instrumentation Engineers (SPIE)
    SPIE Working Groups on Optical Processing & Computing and on Holography
  • Association for Computing Machinery (ACM)
  • Hellenic Society for Computational Biology and Bioinformatics
  • Technical Chamber of Greece (TEE) (since 1985)
    1. 2006 – 2013  Elected member of the Assembly of TEE/Central Macedonia
    2. 2010 – 2013  Chair of the Standing Committee on New Technologies, Research and Technological Development of TEE/Central Macedonia
    3. 2007 – 2010  Chair of the Standing Committee on Networks and Telematics of TEE/Central Macedonia
    4. 2007 – 2012  TEE representative to the Thessaloniki Innovation Zone
    5. 2004 – 2006  Member of the Standing Committee on Technical Education of TEE/Central Macedonia
    6. 2005 –       Chair of the Working Group on 'Review of university evaluation and engineer certification systems in the countries of the European Union'

Patents and Copyrights

  • P. A. Mitkas and J. Lurkins,
    "Tomographic Sorting Chip," patent disclosure with Symbios Logic, Inc., 1998.
  • I. Athanasiadis and P. A. Mitkas,
    "A System for Sensor Interconnection and Provision of Processed Information through Semantic Knowledge Diffusion,"
    Greek Patent #1004936, July 2005.
  • P. A. Mitkas, G. Banos and Z. Abas,
    "AMNOS: An integrated software system for the monitoring and management of sheep farms,"
    deposited with a notary to secure copyright, 2005.

Graduate Students

Doctoral (Ph.D.)

  • Supervisor of 3 doctoral dissertations at Colorado State University
  • Supervisor of 13 doctoral dissertations at AUTH; 5 doctoral dissertations in progress

Master of Science (with Thesis)

  • Supervisor of 11 M.Sc. theses at Colorado State University
  • Supervisor of 4 M.Sc. theses at AUTH (European inter-university ERASMUS-MUNDUS programme 'M.Sc. on Network and e-Business centered Computing')
  • Supervisor of 5 M.Sc. theses at AUTH (inter-university postgraduate programme 'Advanced Computer and Communication Systems')

Undergraduate Students

  • Supervisor of 7 diploma theses at Colorado State University;
    13 research fellows of the COS center
  • Supervisor of 132 diploma theses, by a total of 136 students, at AUTH; all involved software development

Funded Research Projects

AUTH

Funding Body | Role in Project | Amount | Duration | Project Title
ΕΣΠΑ Research-Create-Innovate | Principal Investigator | €201.200 | 8/2018-7/2021 | AI-CFPD: Conceptual Design of Fashion Products with the Aid of Artificial Intelligence
ΕΛΚΕ – ΑΠΘ | Principal Investigator | €50.000 | 2/2018-6/2020 | Activities of the Presidency of the Black Sea Universities Network (BSUN)
ΕΛΚΕ – ΑΠΘ | Principal Investigator | €60.000 | 9/2014-8/2019 | ΙΑΣΩΝ: Promotion of the Teaching of the Greek Language in the Black Sea Countries
European Commission, ICT Program CP | Principal Investigator and General Coordinator | €2.473.542 (our part ~€527.332) | 11/2013-10/2016 | RAPP: Robotic Applications for Delivering Smart User Empowering Applications
Ministry of Education | Institutional Coordinator | €1.617.520 | 5/2012-10/2014 | Open Courses: Development and provision of digital educational content for AUTH – Open academic courses
ΕΔΕΤ Α.Ε. (GRNET S.A.) | Principal Investigator | €170.000 | 3/2013-8/2014 | Support of services and assistance to first-level bodies
Ministry of Education | Principal Investigator | €108.679 | 4/2011-3/2012 | Support of the operation of the Greek School Network during 2011 in the area of the Directorates of Education of A' Thessaloniki, Serres and Pella
Agricultural and Livestock Cooperatives | Principal Investigator | €479.025 | 12/2009-4/2015 | ΑΜΝΟΣ: A software system for the management of sheep and goat farms and the keeping of the herd book
ΓΓΕΤ (GSRT) | Principal Investigator | €184.192 | 2006-2009 | ΠΕΝΕΔ'03: A generalized framework and applications for assessing the intelligence and improving the behavior of software agents
Kuwait Government | Consultant – Software provider | ~€100.000 (~€20.000) | 9/2005-12/2006 | Using advanced AI techniques to enhance the decision-making process towards the industrial environment problem in Kuwait: Case Study of Amgra Industrial Area
Genetic Improvement Centre and Cooperatives | Principal Investigator | €30.000 | 2003-2005 | ΑΜΝΟΣ: Development of a web-based information system to support the genetic improvement programme
Faculty of Engineering, AUTH | Principal Investigator | €13.000 | 2004-2005 | Development of an application for the joint management of teaching rooms in the Faculty of Engineering
ΕΠΕΑΕΚ ΙΙ (ESF and ERDF) | Institutional Coordinator and PI for the ECE Department | €2.620.085 (€1.394.110) | 3/2003-8/2008 | Strengthening of Informatics Studies in the Departments of a) Electrical and Computer Engineering and b) Informatics of AUTH
ΓΓΕΤ (GSRT) | Principal Investigator | €32.100 | 9/2003-12/2005 | ΗΡΑΚΛΕΙΤΟΣ: Advanced data and knowledge mining techniques in biological databases

CERTH

Funding Body | Role in Project | Amount | Duration | Project Title
European Commission, ICT Program STREP | Principal Investigator and General Coordinator | €3.607.654 (our part ~€668.600) | 10/2011-4/2014 | CASSANDRA: A multivariate platform for assessing the impact of strategic decisions in electrical power systems
European Commission, IST Program STREP | Principal Investigator and General Coordinator | €4.170.154 (our part ~€789.480) | 1/2006-4/2009 | ASSIST: Association Studies Assisted by Inference and Semantic Technologies. A software platform for enabling large scale analysis of genetic and phenotypic data for the study of Cervical Cancer
Prefecture of Florina | | €99.500 | 10/2008-4/2009 | Implementation study of a telemedicine system for the Prefecture of Florina
ΓΓΕΤ (GSRT)/CERTH | Principal Investigator | €35.000 | 1/2005-12/2008 | Maintenance, improvement and extension of the Agent Academy platform
European Commission, IST Program | Principal Investigator and General Coordinator | €3.100.467 (our part ~€880.000) | 11/2001-4/2004 | AGENT ACADEMY: A Data Mining Framework for Training Intelligent Agents
European Commission, INCO Program | Partner | €15.600 | 8/2004-7/2007 | NOSTRUM-DSS: DSS Tools for sustainable water resource management in the Mediterranean
European Commission, IST Program | Subcontractor | €16.638 | 1/2003-9/2003 | MUMMY: Mobile Knowledge Management

Participation in Networks of Excellence

Funding Body | Role in Project | Duration | Project Title
European Commission | Partner | 2005-2007 | KD-ubiq: A blueprint for ubiquitous knowledge discovery systems
European Commission | Partner | 2004-2006 | Agent Link III – Node 061
European Commission | Partner | 2002-2004 | KDnet: The knowledge discovery network
European Commission | Partner | 2002-2003 | Agent Cities
European Commission | Partner | 2003-2004 | Agent Link II

Colorado State University, U.S.A.

Funding Body | Role in Project | Amount | Duration | Project Title
National Science Foundation, USA | Co-Principal Investigator | $250.000 | 6/1999-5/2002 | VCSEL Based Free-Space Processing System for a Biomolecular Database Scanner
Air Force SBIR I | Consultant | $99.190 | 1/1998-9/1998 | Dynamic Data Mining using an Electro-optical Data Warehouse
DARPA | Principal Investigator (PI) | $98.168 | 9/1997-3/1999 | Holographic Search Engine for Multimedia Databases
AFOSR | Principal Investigator | $281.835 | 2/1994-6/1997 | Holographic Storage and Processing for Very Large Relational Databases
Rome Labs | Principal Investigator | $141.038 | 6/1995-9/1996 | Error Detection and Correction Codes for Optical Memories with 2D Output
National Science Foundation | Co-PI | $330.000 | 9/1994-8/1998 | Optoelectronic Parallel Processing with Logic Gate Arrays based on Surface Emitting Lasers
NATO Collaborative Research grant | Principal Investigator | $192.000 | 9/1996-11/1997 | Reference Beam Reconstruction during Associative Recall in Digital Holographic Memories
Symbios, Inc. | Principal Investigator | $50.000 (equipment and services) | 4/1997-10/1997 | Fabrication of a Smart Photodetector Array for Cluster Error Correction in 0.35 μm CMOS Technology
Symbios, Inc. | Principal Investigator | $50.000 (equipment and services) | 1/1997-6/1997 | Fabrication of a VLSI Chip in 0.5 μm CMOS Technology for Tomographic Sorting
Storage Technology Corp. and Colorado Advanced Technology Institute | Principal Investigator | $50.000 and $13.970 (equipment) | 7/1996-6/1998 | Volume Holographic Storage for Digital Data
Storage Technology Corp. and Colorado Advanced Technology Institute | Principal Investigator | $50.000 | 7/1994-6/1996 | Two-dimensional Parallel Read-out of Optical Tape
NSF Engineering Research Center on Optoelectronic Computing Systems | Participant (only own funds reported) | $71.350 | 5/1997-4/1998 | Holographic Search Engine for Multimedia Databases
NSF Engineering Research Center on Optoelectronic Computing Systems | Participant (only own funds reported) | $223.700 | 5/1994-4/1997 | OE 3D Non-numerical Processing: Algorithms and Architectures
NSF Engineering Research Center on Optoelectronic Computing Systems | Participant (only own funds reported) | $13.000 | 5/1996-4/1997 | Parallel Data Storage and Processing for Advanced Displays
NSF Engineering Research Center on Optoelectronic Computing Systems | Participant (only own funds reported) | $68.000 | 5/1993-4/1994 | Nonnumerical Algorithms for a 3-D Computer
NSF Engineering Research Center on Optoelectronic Computing Systems | Participant (only own funds reported) | $85.000 | 5/1991-4/1993 | Optical Storage and Processing for Very Large Relational Databases
Colorado Commission on Higher Education | Participant | $80.000, $23.000, $15.000 (equipment) | 7/1994-6/1997, 1/1997-6/1997 | Optoelectronics Center of Excellence Award
Colorado Commission on Higher Education | Participant | $30.000 | 5/1992-4/1994 | OE Center of Excellence Award
CASI Undergraduate Research Grants | Principal Investigator | $3.000 | 9/94-5/95 | A Graphical User Interface for an Optical Systems Simulator
CASI Undergraduate Research Grants | Principal Investigator | $3.000 | 9/93-5/94 | An Expert System for Environmental Impact Assessment Applications
Comlinear Corporation | Principal Investigator | $8.000 (equipment) | 6/1992-12/1993 | High Speed Text Pattern Matching based on a Signal Microprocessor

Publications and Presentations

2021

Journal Articles

Maria Th. Kotouza, Alexandros-Charalampos Kyprianidis, Sotirios-Filippos Tsarouchis, Antonios C. Chrysopoulos and Pericles A. Mitkas
"Science4Fashion: an end-to-end decision support system for fashion designers"
Evolving Systems, 2021 Mar

Nowadays, the fashion clothing industry is moving towards “fast” fashion, offering a wide variety of products based on different patterns and styles, usually characterized by lower costs and ambiguous quality. The retails markets are trying to present regularly new fashion collections while trying to follow the latest fashion trends at the same time. The main reason is to remain competitive and keep up with ever-changing customer demands. Fashion designers draw inspiration from social media, e-shops, and fashion shows that set the new fashion trends. In this direction, we propose Science4Fashion, an AI end-to-end system that facilitates fashion designers by collecting and analyzing data from many different sources and suggesting products according to their needs. An overview of the system’s modules is presented, emphasizing data collection, data annotation using deep learning models, and product recommendation and user feedback processes. The experiments presented in this paper are twofold: (a) experiments regarding the evaluation of clothing attribute classification, and (b) experiments regarding product recommendation using the baseline kNN enriched by the frequency-based clustering algorithm (FBC), achieving promising results.

@article{Kotouza2021,
author={Maria Th. Kotouza and Alexandros-Charalampos Kyprianidis and Sotirios-Filippos Tsarouchis and Antonios C. Chrysopoulos and Pericles A. Mitkas},
title={Science4Fashion: an end-to-end decision support system for fashion designers},
journal={Evolving Systems},
year={2021},
month={03},
date={2021-03-12},
url={https://link.springer.com/article/10.1007/s12530-021-09372-7},
doi={10.1007/s12530-021-09372-7},
issn={1868-6486},
abstract={Nowadays, the fashion clothing industry is moving towards “fast” fashion, offering a wide variety of products based on different patterns and styles, usually characterized by lower costs and ambiguous quality. The retails markets are trying to present regularly new fashion collections while trying to follow the latest fashion trends at the same time. The main reason is to remain competitive and keep up with ever-changing customer demands. Fashion designers draw inspiration from social media, e-shops, and fashion shows that set the new fashion trends. In this direction, we propose Science4Fashion, an AI end-to-end system that facilitates fashion designers by collecting and analyzing data from many different sources and suggesting products according to their needs. An overview of the system’s modules is presented, emphasizing data collection, data annotation using deep learning models, and product recommendation and user feedback processes. The experiments presented in this paper are twofold: (a) experiments regarding the evaluation of clothing attribute classification, and (b) experiments regarding product recommendation using the baseline kNN enriched by the frequency-based clustering algorithm (FBC), achieving promising results.}
}
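The Science4Fashion abstract above mentions product recommendation built on a baseline kNN over clothing attributes. The sketch below illustrates plain attribute-based kNN retrieval only; the item names, toy feature vectors, and cosine distance are illustrative assumptions, and the paper's frequency-based clustering (FBC) enrichment is not reproduced here.

```python
import math

# Hypothetical clothing items with toy attribute vectors
# (e.g., encoded colour, pattern, and style features).
items = {
    "floral_dress":  [0.9, 0.1, 0.8],
    "striped_shirt": [0.2, 0.9, 0.3],
    "plain_dress":   [0.8, 0.0, 0.7],
    "dot_blouse":    [0.3, 0.8, 0.4],
}

def cosine_distance(a, b):
    # 1 - cosine similarity: 0 for identical directions, up to 2 for opposite.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def knn_recommend(query_vec, k=2):
    """Return the k items whose attribute vectors are closest to the query."""
    ranked = sorted(items, key=lambda name: cosine_distance(items[name], query_vec))
    return ranked[:k]

# A query resembling a dress-like attribute profile.
print(knn_recommend([0.85, 0.05, 0.75]))
```

In the full system, such a neighbour search would be one component in a pipeline that also covers data collection, deep-learning-based annotation, and user feedback.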

2019

Inproceedings Papers

Maria Kotouza, Fotis Psomopoulos and Periklis A. Mitkas
"A Dockerized String Analysis Workflow for Big Data"
New Trends in Databases and Information Systems, pp. 564-569, Springer International Publishing, Cham, 2019 Sep

Nowadays, a wide range of sciences are moving towards the Big Data era, producing large volumes of data that require processing for new knowledge extraction. Scientific workflows are often the key tools for solving problems characterized by computational complexity and data diversity, whereas cloud computing can effectively facilitate their efficient execution. In this paper, we present a generative big data analysis workflow that can provide analytics, clustering, prediction and visualization services to datasets coming from various scientific fields, by transforming input data into strings. The workflow consists of novel algorithms for data processing and relationship discovery, that are scalable and suitable for cloud infrastructures. Domain experts can interact with the workflow components, set their parameters, run personalized pipelines and have support for decision-making processes. As case studies in this paper, two datasets consisting of (i) Documents and (ii) Gene sequence data are used, showing promising results in terms of efficiency and performance.

@inproceedings{Kotouza19NTDIS,
author={Maria Kotouza and Fotis Psomopoulos and Periklis A. Mitkas},
title={A Dockerized String Analysis Workflow for Big Data},
booktitle={New Trends in Databases and Information Systems},
pages={564-569},
publisher={Springer International Publishing},
address={Cham},
year={2019},
month={09},
date={2019-09-01},
doi={10.1007/978-3-030-30278-8_55},
isbn={978-3-030-30278-8},
url={https://link.springer.com/chapter/10.1007%2F978-3-030-30278-8_55},
abstract={Nowadays, a wide range of sciences are moving towards the Big Data era, producing large volumes of data that require processing for new knowledge extraction. Scientific workflows are often the key tools for solving problems characterized by computational complexity and data diversity, whereas cloud computing can effectively facilitate their efficient execution. In this paper, we present a generative big data analysis workflow that can provide analytics, clustering, prediction and visualization services to datasets coming from various scientific fields, by transforming input data into strings. The workflow consists of novel algorithms for data processing and relationship discovery, that are scalable and suitable for cloud infrastructures. Domain experts can interact with the workflow components, set their parameters, run personalized pipelines and have support for decision-making processes. As case studies in this paper, two datasets consisting of (i) Documents and (ii) Gene sequence data are used, showing promising results in terms of efficiency and performance.}
}

2018

Conference Papers

Konstantinos N. Vavliakis, Maria Th. Kotouza, Andreas L. Symeonidis and Pericles A. Mitkas
"Recommendation Systems in a Conversational Web"
Proceedings of the 14th International Conference on Web Information Systems and Technologies - Volume 1: WEBIST, pp. 68-77, SciTePress, 2018 Jan

In this paper we redefine the concept of Conversation Web in the context of hyper-personalization. We argue that hyper-personalization in the WWW is only possible within a conversational web where websites and users continuously “discuss” (interact in any way). We present a modular system architecture for the conversational WWW, given that adapting to various user profiles and multivariate websites in terms of size and user traffic is necessary, especially in e-commerce. Obviously there cannot be a unique fit-to-all algorithm, but numerous complementary personalization algorithms and techniques are needed. In this context, we propose PRCW, a novel hybrid approach combining offline and online recommendations using RFMG, an extension of RFM modeling. We evaluate our approach against the results of a deep neural network in two datasets coming from different online retailers. Our evaluation indicates that a) the proposed approach outperforms current state-of-art methods in small-medium datasets and can improve performance in large datasets when combined with other methods, b) results can greatly vary in different datasets, depending on size and characteristics, thus locating the proper method for each dataset can be a rather complex task, and c) offline algorithms should be combined with online methods in order to get optimal results since offline algorithms tend to offer better performance but online algorithms are necessary for exploiting new users and trends that turn up.

@conference{webist18,
author={Konstantinos N. Vavliakis and Maria Th. Kotouza and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Recommendation Systems in a Conversational Web},
booktitle={Proceedings of the 14th International Conference on Web Information Systems and Technologies - Volume 1: WEBIST},
pages={68-77},
publisher={SciTePress},
year={2018},
month={01},
date={2018-01-01},
url={https://issel.ee.auth.gr/wp-content/uploads/2019/02/WEBIST_2018_29.pdf},
doi={10.5220/0006935300680077},
isbn={978-989-758-324-7},
abstract={In this paper we redefine the concept of Conversation Web in the context of hyper-personalization. We argue that hyper-personalization in the WWW is only possible within a conversational web where websites and users continuously “discuss” (interact in any way). We present a modular system architecture for the conversational WWW, given that adapting to various user profiles and multivariate websites in terms of size and user traffic is necessary, especially in e-commerce. Obviously there cannot be a unique fit-to-all algorithm, but numerous complementary personalization algorithms and techniques are needed. In this context, we propose PRCW, a novel hybrid approach combining offline and online recommendations using RFMG, an extension of RFM modeling. We evaluate our approach against the results of a deep neural network in two datasets coming from different online retailers. Our evaluation indicates that a) the proposed approach outperforms current state-of-art methods in small-medium datasets and can improve performance in large datasets when combined with other methods, b) results can greatly vary in different datasets, depending on size and characteristics, thus locating the proper method for each dataset can be a rather complex task, and c) offline algorithms should be combined with online methods in order to get optimal results since offline algorithms tend to offer better performance but online algorithms are necessary for exploiting new users and trends that turn up.}
}

2018

Inproceedings Papers

Sotirios-Filippos Tsarouchis, Maria Th. Kotouza, Fotis E. Psomopoulos and Pericles A. Mitkas
"A Multi-metric Algorithm for Hierarchical Clustering of Same-Length Protein Sequences"
IFIP International Conference on Artificial Intelligence Applications and Innovations, pp. 189-199, Springer, Cham, 2018 May

The identification of meaningful groups of proteins has always been a major area of interest for structural and functional genomics. Successful protein clustering can lead to significant insight, assisting in both tracing the evolutionary history of the respective molecules as well as in identifying potential functions and interactions of novel sequences. Here we propose a clustering algorithm for same-length sequences, which allows the construction of subset hierarchy and facilitates the identification of the underlying patterns for any given subset. The proposed method utilizes the metrics of sequence identity and amino-acid similarity simultaneously as direct measures. The algorithm was applied on a real-world dataset consisting of clonotypic immunoglobulin (IG) sequences from Chronic lymphocytic leukemia (CLL) patients, showing promising results.

@inproceedings{2018Tsarouchis,
author={Sotirios-Filippos Tsarouchis and Maria Th. Kotouza and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={A Multi-metric Algorithm for Hierarchical Clustering of Same-Length Protein Sequences},
booktitle={IFIP International Conference on Artificial Intelligence Applications and Innovations},
pages={189-199},
publisher={Springer},
address={Cham},
year={2018},
month={05},
date={2018-05-22},
doi={10.1007/978-3-319-92016-0_18},
isbn={978-3-319-92016-0},
abstract={The identification of meaningful groups of proteins has always been a major area of interest for structural and functional genomics. Successful protein clustering can lead to significant insight, assisting in both tracing the evolutionary history of the respective molecules as well as in identifying potential functions and interactions of novel sequences. Here we propose a clustering algorithm for same-length sequences, which allows the construction of subset hierarchy and facilitates the identification of the underlying patterns for any given subset. The proposed method utilizes the metrics of sequence identity and amino-acid similarity simultaneously as direct measures. The algorithm was applied on a real-world dataset consisting of clonotypic immunoglobulin (IG) sequences from Chronic lymphocytic leukemia (CLL) patients, showing promising results.}
}
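The abstract above uses two direct measures simultaneously: sequence identity and amino-acid similarity over same-length sequences. The sketch below illustrates the general idea only; the physicochemical residue groups and the equal 50/50 weighting are illustrative assumptions, not the published algorithm.

```python
# Sketch: a combined distance for same-length amino-acid sequences using
# (a) identity = fraction of matching positions, and
# (b) a crude similarity = fraction of positions in the same residue group.
# The groups and the weighting below are illustrative assumptions.

GROUPS = {
    "hydrophobic": set("AVLIMFWY"),
    "polar":       set("STNQCG"),
    "positive":    set("KRH"),
    "negative":    set("DE"),
    "special":     set("P"),
}

def same_group(a, b):
    # True if both residues belong to at least one common group.
    return any(a in g and b in g for g in GROUPS.values())

def identity(s1, s2):
    assert len(s1) == len(s2), "sequences must have equal length"
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)

def similarity(s1, s2):
    return sum(same_group(a, b) for a, b in zip(s1, s2)) / len(s1)

def combined_distance(s1, s2, w=0.5):
    # Lower distance = more related; both measures contribute at once.
    return 1.0 - (w * identity(s1, s2) + (1 - w) * similarity(s1, s2))

# D and E differ but share the 'negative' group, so similarity stays high.
print(combined_distance("AKDLS", "AKELS"))
```

Using both measures at once lets a substitution between biochemically similar residues (here D to E) count as a smaller change than identity alone would suggest.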

Maria Th. Kotouza, Konstantinos N. Vavliakis, Fotis E. Psomopoulos and Pericles A. Mitkas
"A Hierarchical Multi-Metric Framework for Item Clustering"
5th International Conference on Big Data Computing Applications and Technologies, pp. 191-197, IEEE/ACM, Zurich, Switzerland, 2018 Dec

Item clustering is commonly used for dimensionality reduction, uncovering item similarities and connections, gaining insights of the market structure and recommendations. Hierarchical clustering methods produce a hierarchy structure along with the clusters that can be useful for managing item categories and sub-categories, dealing with indirect competition and new item categorization as well. Nevertheless, baseline hierarchical clustering algorithms have high computational cost and memory usage. In this paper we propose an innovative scalable hierarchical clustering framework, which overcomes these limitations. Our work consists of a binary tree construction algorithm that creates a hierarchy of the items using three metrics, a) Identity, b) Similarity and c) Entropy, as well as a branch breaking algorithm which composes the final clusters by applying thresholds to each branch of the tree. The proposed framework is evaluated on the popular MovieLens 20M dataset achieving significant reduction in both memory consumption and computational time over a baseline hierarchical clustering algorithm.

@inproceedings{KotouzaVPM18,
author={Maria Th. Kotouza and Konstantinos N. Vavliakis and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={A Hierarchical Multi-Metric Framework for Item Clustering},
booktitle={5th International Conference on Big Data Computing Applications and Technologies},
pages={191-197},
publisher={IEEE/ACM},
address={Zurich, Switzerland},
year={2018},
month={12},
date={2018-12-17},
url={http://issel.ee.auth.gr/wp-content/uploads/2019/02/BDCAT_2018_paper_24_Proceedings.pdf},
doi={10.1109/BDCAT.2018.00031},
abstract={Item clustering is commonly used for dimensionality reduction, uncovering item similarities and connections, gaining insights of the market structure and recommendations. Hierarchical clustering methods produce a hierarchy structure along with the clusters that can be useful for managing item categories and sub-categories, dealing with indirect competition and new item categorization as well. Nevertheless, baseline hierarchical clustering algorithms have high computational cost and memory usage. In this paper we propose an innovative scalable hierarchical clustering framework, which overcomes these limitations. Our work consists of a binary tree construction algorithm that creates a hierarchy of the items using three metrics, a) Identity, b) Similarity and c) Entropy, as well as a branch breaking algorithm which composes the final clusters by applying thresholds to each branch of the tree. The proposed framework is evaluated on the popular MovieLens 20M dataset achieving significant reduction in both memory consumption and computational time over a baseline hierarchical clustering algorithm.}
}
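The framework described in the abstract above has two stages: constructing a binary hierarchy of items and then composing final clusters by applying thresholds to the branches. The rough sketch below shows that two-stage shape; the 1-D toy points, the single-linkage distance, and the single threshold are simplifying assumptions standing in for the paper's Identity/Similarity/Entropy metrics.

```python
# Sketch: binary-tree construction plus threshold-based branch breaking.
# The distance and threshold here are illustrative assumptions.

def dist(a, b):
    # Single-linkage distance on 1-D toy points (assumption).
    return min(abs(x - y) for x in a["members"] for y in b["members"])

def build_tree(points):
    """Naive agglomerative clustering: repeatedly merge the two closest
    nodes into a parent, yielding a binary hierarchy of the items."""
    nodes = [{"members": [p], "height": 0.0} for p in points]
    while len(nodes) > 1:
        best = None
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                d = dist(nodes[i], nodes[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merged = {"members": nodes[i]["members"] + nodes[j]["members"],
                  "height": d, "left": nodes[i], "right": nodes[j]}
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [merged]
    return nodes[0]

def break_branches(node, threshold):
    """Cut the tree wherever the merge height exceeds the threshold,
    emitting the subtrees below each cut as the final clusters."""
    if node["height"] <= threshold:
        return [node["members"]]
    return (break_branches(node["left"], threshold)
            + break_branches(node["right"], threshold))

tree = build_tree([1.0, 1.2, 5.0, 5.3, 9.0])
print(break_branches(tree, threshold=1.0))
```

The naive pairwise search above is quadratic per merge; the scalability claims in the abstract come precisely from avoiding this kind of baseline cost.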

2017

Journal Articles

Athanassios M. Kintsakis, Fotis E. Psomopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments"
SoftwareX, 6, pp. 217-224, 2017 Sep

Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.

@article{SOFTX89,
author={Athanassios M. Kintsakis and Fotis E. Psomopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments},
journal={SoftwareX},
volume={6},
pages={217-224},
year={2017},
month={09},
date={2017-09-19},
url={http://www.sciencedirect.com/science/article/pii/S2352711017300304},
doi={https://doi.org/10.1016/j.softx.2017.07.007},
keywords={Bioinformatics;hybrid cloud;scientific workflows;distributed computing},
abstract={Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.}
}

Cezary Zielinski, Maciej Stefanczyk, Tomasz Kornuta, Maksym Figat, Wojciech Dudek, Wojciech Szynkiewicz, Wlodzimierz Kasprzak, Jan Figat, Marcin Szlenk, Tomasz Winiarski, Konrad Banachowicz, Teresa Zielinska, Emmanouil G. Tsardoulias, Andreas L. Symeonidis, Fotis E. Psomopoulos, Athanassios M. Kintsakis, Pericles A. Mitkas, Aristeidis Thallas, Sofia E. Reppou, George T. Karagiannis, Konstantinos Panayiotou, Vincent Prunet, Manuel Serrano, Jean-Pierre Merlet, Stratos Arampatzis, Alexandros Giokas, Lazaros Penteridis, Ilias Trochidis, David Daney and Miren Iturburu
"Variable structure robot control systems: The RAPP approach"
Robotics and Autonomous Systems, 94, pp. 226-244, 2017 May

This paper presents a method of designing variable structure control systems for robots. As the on-board robot computational resources are limited, but in some cases the demands imposed on the robot by the user are virtually limitless, the solution is to produce a variable structure system. The task dependent part has to be exchanged, however the task governs the activities of the robot. Thus not only exchange of some task-dependent modules is required, but also supervisory responsibilities have to be switched. Such control systems are necessary in the case of robot companions, where the owner of the robot may demand from it to provide many services.

@article{Zielnski2017,
author={Cezary Zielinski and Maciej Stefanczyk and Tomasz Kornuta and Maksym Figat and Wojciech Dudek and Wojciech Szynkiewicz and Wlodzimierz Kasprzak and Jan Figat and Marcin Szlenk and Tomasz Winiarski and Konrad Banachowicz and Teresa Zielinska and Emmanouil G. Tsardoulias and Andreas L. Symeonidis and Fotis E. Psomopoulos and Athanassios M. Kintsakis and Pericles A. Mitkas and Aristeidis Thallas and Sofia E. Reppou and George T. Karagiannis and Konstantinos Panayiotou and Vincent Prunet and Manuel Serrano and Jean-Pierre Merlet and Stratos Arampatzis and Alexandros Giokas and Lazaros Penteridis and Ilias Trochidis and David Daney and Miren Iturburu},
title={Variable structure robot control systems: The RAPP approach},
journal={Robotics and Autonomous Systems},
volume={94},
pages={226-244},
year={2017},
month={05},
date={2017-05-05},
url={http://www.sciencedirect.com/science/article/pii/S0921889016306248},
doi={https://doi.org/10.1016/j.robot.2017.05.002},
keywords={robot controllers;variable structure controllers;cloud robotics;RAPP},
abstract={This paper presents a method of designing variable structure control systems for robots. As the on-board robot computational resources are limited, but in some cases the demands imposed on the robot by the user are virtually limitless, the solution is to produce a variable structure system. The task dependent part has to be exchanged, however the task governs the activities of the robot. Thus not only exchange of some task-dependent modules is required, but also supervisory responsibilities have to be switched. Such control systems are necessary in the case of robot companions, where the owner of the robot may demand from it to provide many services.}
}

2017

Inproceedings Papers

Maria Th. Kotouza, Antonios C. Chrysopoulos and Pericles A. Mitkas
"Segmentation of Low Voltage Consumers for Designing Individualized Pricing Policies"
European Energy Market (EEM), 2017 14th International Conference, pp. 1-6, IEEE, Dresden, Germany, 2017 Jun

In recent years, the Smart Grid paradigm has opened a vast set of opportunities for all participating parties in the Energy Markets (i.e. producers, Distribution and Transmission System Operators, retailers, consumers), providing two-way data communication, increased security and grid stability. Furthermore, the liberalization of distribution and energy services has led towards competitive Energy Market environments [4]. In order to keep their existing customers' satisfaction high, as well as to reach out to new ones, suppliers must provide better and more reliable energy services that are specifically tailored to each customer or to a group of customers with similar needs. Thus, it is necessary to identify segments of customers that have common energy characteristics via a process called Consumer Load Profiling (CLP) [16].

@inproceedings{2017Kotouza,
author={Maria Th. Kotouza and Antonios C. Chrysopoulos and Pericles A. Mitkas},
title={Segmentation of Low Voltage Consumers for Designing Individualized Pricing Policies},
booktitle={European Energy Market (EEM), 2017 14th International Conference},
pages={1-6},
publisher={IEEE},
address={Dresden, Germany},
year={2017},
month={06},
date={2017-06-06},
doi={https://doi.org/10.1109/EEM.2017.7981862},
issn={2165-4093},
isbn={978-1-5090-5499-2},
abstract={In recent years, the Smart Grid paradigm has opened a vast set of opportunities for all participating parties in the Energy Markets (i.e. producers, Distribution and Transmission System Operators, retailers, consumers), providing two-way data communication, increased security and grid stability. Furthermore, the liberalization of distribution and energy services has led towards competitive Energy Market environments [4]. In order to keep their existing customers' satisfaction high, as well as to reach out to new ones, suppliers must provide better and more reliable energy services that are specifically tailored to each customer or to a group of customers with similar needs. Thus, it is necessary to identify segments of customers that have common energy characteristics via a process called Consumer Load Profiling (CLP) [16].}
}
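The segmentation idea above (grouping consumers with common energy characteristics) can be sketched with a plain k-means over daily load curves. The four-slot day, the toy households, and the use of vanilla k-means are illustrative assumptions, not the paper's actual CLP pipeline:

```python
# Sketch of consumer load profiling: cluster daily consumption curves
# (kWh per time slot) with a plain k-means. Toy data, not the paper's method.
import math

def dist(a, b):
    """Euclidean distance between two load curves."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(profiles, centroids, iters=10):
    """Lloyd's algorithm: assign each profile to the nearest centroid,
    then recompute each centroid as the mean of its group."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in profiles:
            i = min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))
            groups[i].append(p)
        centroids = [
            [sum(col) / len(g) for col in zip(*g)] if g else c
            for g, c in zip(groups, centroids)
        ]
    return centroids, groups

# Two obvious behavioural segments: "evening peak" vs "flat" consumers.
profiles = [
    [0.2, 0.3, 1.8, 2.0],   # evening-peak household
    [0.3, 0.2, 1.9, 2.1],
    [0.8, 0.9, 0.8, 0.9],   # flat-profile household
    [0.9, 0.8, 0.9, 0.8],
]
centroids, groups = kmeans(profiles, centroids=[profiles[0], profiles[2]])
# Each segment's centroid is a representative load profile that a
# supplier could price individually.
```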

2016

Journal Articles

Antonios Chrysopoulos, Christos Diou, Andreas Symeonidis and Pericles A. Mitkas
"Response modeling of small-scale energy consumers for effective demand response applications"
Electric Power Systems Research, 132, pp. 78-93, 2016 Mar

The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.

@article{2015ChrysopoulosEPSR,
author={Antonios Chrysopoulos and Christos Diou and Andreas Symeonidis and Pericles A. Mitkas},
title={Response modeling of small-scale energy consumers for effective demand response applications},
journal={Electric Power Systems Research},
volume={132},
pages={78-93},
year={2016},
month={03},
date={2016-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Response-modeling-of-small-scale-energy-consumers-for-effective-demand-response-applications.pdf},
abstract={The Smart Grid paradigm can be economically and socially sustainable by engaging potential consumers through understanding, trust and clear tangible benefits. Interested consumers may assume a more active role in the energy market by claiming new energy products/services on offer and changing their consumption behavior. To this end, suppliers, aggregators and Distribution System Operators can provide monetary incentives for customer behavioral change through demand response programs, which are variable pricing schemes aiming at consumption shifting and/or reduction. However, forecasting the effect of such programs on power demand requires accurate models that can efficiently describe and predict changes in consumer activities as a response to pricing alterations. Current work proposes such a detailed bottom-up response modeling methodology, as a first step towards understanding and formulating consumer response. We build upon previous work on small-scale consumer activity modeling and provide a novel approach for describing and predicting consumer response at the level of individual activities. The proposed models are used to predict shifting of demand as a result of modified pricing policies and they incorporate consumer preferences and comfort through sensitivity factors. Experiments indicate the effectiveness of the proposed method on real-life data collected from two different pilot sites: 32 apartments of a multi-residential building in Sweden, as well as 11 shops in a large commercial center in Italy.}
}
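A toy version of the sensitivity-driven shifting described above, assuming a simple linear response: a sensitivity-scaled fraction of each expensively-priced slot's load moves to the cheapest slot. The function, slot structure, and linear scaling are illustrative, not the paper's fitted activity models:

```python
# Sketch of price-driven demand shifting: move a sensitivity-scaled
# fraction of load out of expensive slots into the cheapest one.
# (Illustrative linear response, not the paper's bottom-up models.)

def shifted_load(load, prices, sensitivity):
    """Return the load vector after shifting: for every slot priced above
    the cheapest, sensitivity * load migrates to the cheapest slot."""
    cheapest = min(range(len(prices)), key=prices.__getitem__)
    out = list(load)
    for i, (l, p) in enumerate(zip(load, prices)):
        if p > prices[cheapest]:
            moved = sensitivity * l
            out[i] -= moved
            out[cheapest] += moved
    return out

# A consumer with sensitivity 0.5 shifts half of the peak-priced load:
result = shifted_load(load=[2.0, 1.0], prices=[0.30, 0.10], sensitivity=0.5)
# result == [1.0, 2.0]: half of slot 0's 2.0 kWh moved to cheaper slot 1.
```

The sensitivity factor plays the role of the comfort/preference weighting mentioned in the abstract: a value of 0 models a consumer who never shifts, 1 a consumer who shifts everything price allows.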

Sofia E. Reppou, Emmanouil G. Tsardoulias, Athanassios M. Kintsakis, Andreas Symeonidis, Pericles A. Mitkas, Fotis E. Psomopoulos, George T. Karagiannis, Cezary Zielinski, Vincent Prunet, Jean-Pierre Merlet, Miren Iturburu and Alexandros Gkiokas
"RAPP: A robotic-oriented ecosystem for delivering smart user empowering applications for older people"
Journal of Social Robotics, pp. 15, 2016 Jun

It is a general truth that an increase in age is associated with a level of mental and physical decline, but unfortunately these are often accompanied by social exclusion, leading to marginalization and eventually further acceleration of the aging process. A new approach to alleviating the social exclusion of older people involves the use of assistive robots. As robots rapidly invade everyday life, the need for new software paradigms to address each user's unique needs becomes critical. In this paper we present a novel architectural design, the RAPP [a software platform to deliver smart, user empowering robotic applications (RApps)] framework, that attempts to address this issue. The proposed framework has been designed in a cloud-based approach, integrating robotic devices and their respective applications. We aim to facilitate the seamless development of RApps compatible with a wide range of supported robots and available to the public through a unified online store.

@article{2016ReppouJSR,
author={Sofia E. Reppou and Emmanouil G. Tsardoulias and Athanassios M. Kintsakis and Andreas Symeonidis and Pericles A. Mitkas and Fotis E. Psomopoulos and George T. Karagiannis and Cezary Zielinski and Vincent Prunet and Jean-Pierre Merlet and Miren Iturburu and Alexandros Gkiokas},
title={RAPP: A robotic-oriented ecosystem for delivering smart user empowering applications for older people},
journal={Journal of Social Robotics},
pages={15},
year={2016},
month={06},
date={2016-06-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/RAPP-A-Robotic-Oriented-Ecosystem-for-Delivering-Smart-User-Empowering-Applications-for-Older-People.pdf},
doi={https://doi.org/10.1007/s10515-016-0206-x},
abstract={It is a general truth that an increase in age is associated with a level of mental and physical decline, but unfortunately these are often accompanied by social exclusion, leading to marginalization and eventually further acceleration of the aging process. A new approach to alleviating the social exclusion of older people involves the use of assistive robots. As robots rapidly invade everyday life, the need for new software paradigms to address each user's unique needs becomes critical. In this paper we present a novel architectural design, the RAPP [a software platform to deliver smart, user empowering robotic applications (RApps)] framework, that attempts to address this issue. The proposed framework has been designed in a cloud-based approach, integrating robotic devices and their respective applications. We aim to facilitate the seamless development of RApps compatible with a wide range of supported robots and available to the public through a unified online store.}
}

Emmanouil Tsardoulias, Aris Thallas, Andreas L. Symeonidis and Pericles A. Mitkas
"Improving multilingual interaction for consumer robots through signal enhancement in multichannel speech"
Audio Engineering Society, 2016 Dec


@article{2016TsardouliasAES,
author={Emmanouil Tsardoulias and Aris Thallas and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Improving multilingual interaction for consumer robots through signal enhancement in multichannel speech},
journal={Audio Engineering Society},
year={2016},
month={00},
date={2016-00-00},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Improving-multilingual-interaction-for-consumer-robots-through-signal-enhancement-in-multichannel-speech.pdf},
}

Emmanouil Tsardoulias, Athanassios Kintsakis, Konstantinos Panayiotou, Aristeidis Thallas, Sofia Reppou, George Karagiannis, Miren Iturburu, Stratos Arampatzis, Cezary Zielinskic, Vincent Prunetg, Fotis Psomopoulos, Andreas Symeonidis and Pericles Mitkas
"Towards an integrated robotics architecture for social inclusion – The RAPP paradigm"
Cognitive Systems Research, pp. 1-8, 2016 Sep

Scientific breakthroughs have led to an increase in life expectancy, to the point where senior citizens comprise an ever increasing percentage of the general population. In this direction, the EU funded RAPP project “Robotic Applications for Delivering Smart User Empowering Applications” introduces socially interactive robots that will not only physically assist, but also serve as a companion to senior citizens. The proposed RAPP framework has been designed aiming towards a cloud-based integrated approach that enables robotic devices to seamlessly deploy robotic applications, relieving the actual robots from computational burdens. The Robotic Applications (RApps) developed according to the RAPP paradigm will empower consumer social robots, allowing them to adapt to versatile situations and materialize complex behaviors and scenarios. The RAPP pilot cases involve the development of RApps for the NAO humanoid robot and the ANG-MED rollator targeting senior citizens that (a) are technology illiterate, (b) have been diagnosed with mild cognitive impairment or (c) are in the process of hip fracture rehabilitation. Initial results establish the robustness of RAPP in addressing the needs of end users and developers, as well as its contribution in significantly increasing the quality of life of senior citizens.

@article{2016TsardouliasCSR,
author={Emmanouil Tsardoulias and Athanassios Kintsakis and Konstantinos Panayiotou and Aristeidis Thallas and Sofia Reppou and George Karagiannis and Miren Iturburu and Stratos Arampatzis and Cezary Zielinskic and Vincent Prunetg and Fotis Psomopoulos and Andreas Symeonidis and Pericles Mitkas},
title={Towards an integrated robotics architecture for social inclusion – The RAPP paradigm},
journal={Cognitive Systems Research},
pages={1-8},
year={2016},
month={09},
date={2016-09-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/COGSYS_2016_R1.pdf},
abstract={Scientific breakthroughs have led to an increase in life expectancy, to the point where senior citizens comprise an ever increasing percentage of the general population. In this direction, the EU funded RAPP project “Robotic Applications for Delivering Smart User Empowering Applications” introduces socially interactive robots that will not only physically assist, but also serve as a companion to senior citizens. The proposed RAPP framework has been designed aiming towards a cloud-based integrated approach that enables robotic devices to seamlessly deploy robotic applications, relieving the actual robots from computational burdens. The Robotic Applications (RApps) developed according to the RAPP paradigm will empower consumer social robots, allowing them to adapt to versatile situations and materialize complex behaviors and scenarios. The RAPP pilot cases involve the development of RApps for the NAO humanoid robot and the ANG-MED rollator targeting senior citizens that (a) are technology illiterate, (b) have been diagnosed with mild cognitive impairment or (c) are in the process of hip fracture rehabilitation. Initial results establish the robustness of RAPP in addressing the needs of end users and developers, as well as its contribution in significantly increasing the quality of life of senior citizens.}
}

2016

Conference Papers

Kyriakos Chatzidimitriou, Konstantinos Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Towards defining the structural properties of efficient consumer social networks on the electricity grid"
AI4SG SETN Workshop on AI for the Smart Grid, 2016 May

Energy markets have undergone important changes at the conceptual level over the last years. Decentralized supply, small-scale production, smart grid optimization and control are the new building blocks. These changes offer substantial opportunities for all energy market stakeholders, some of which, however, remain largely unexploited. Small-scale consumers as a whole account for a significant amount of energy in current markets (up to 40%). As individuals though, their consumption is trivial, and their market power practically non-existent. Thus, it is necessary to assist small-scale energy market stakeholders to combine their market power. Within the context of this work, we propose Consumer Social Networks (CSNs) as a means to achieve the objective. We model consumers and present a simulation environment for the creation of CSNs and provide a proof of concept on how CSNs can be formulated based on various criteria. We also provide an indication on how demand response programs designed based on targeted incentives may lead to energy peak reductions.

@conference{2016ChatzidimitriouSETN,
author={Kyriakos Chatzidimitriou and Konstantinos Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Towards defining the structural properties of efficient consumer social networks on the electricity grid},
booktitle={AI4SG SETN Workshop on AI for the Smart Grid},
year={2016},
month={05},
date={2016-05-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/06/Cassandra_AI4SG_CameraReady.pdf},
abstract={Energy markets have undergone important changes at the conceptual level over the last years. Decentralized supply, small-scale production, smart grid optimization and control are the new building blocks. These changes offer substantial opportunities for all energy market stakeholders, some of which, however, remain largely unexploited. Small-scale consumers as a whole account for a significant amount of energy in current markets (up to 40%). As individuals though, their consumption is trivial, and their market power practically non-existent. Thus, it is necessary to assist small-scale energy market stakeholders to combine their market power. Within the context of this work, we propose Consumer Social Networks (CSNs) as a means to achieve the objective. We model consumers and present a simulation environment for the creation of CSNs and provide a proof of concept on how CSNs can be formulated based on various criteria. We also provide an indication on how demand response programs designed based on targeted incentives may lead to energy peak reductions.}
}

Aristeidis G. Thallas, Konstantinos Panayiotou, Emmanouil Tsardoulias, Andreas L. Symeonidis, Pericles A. Mitkas and George G. Karagiannis
"Relieving robots from their burdens: The Cloud Agent concept"
2016 5th IEEE International Conference on Cloud Networking (Cloudnet), 2016 Oct

The consumer robotics concept has already invaded our everyday lives, however two major drawbacks have become apparent both for the roboticists and the consumers. The first is that these robots are pre-programmed to perform specific tasks and usually their software is proprietary, thus not open to "interventions". The second is that even if their software is open source, low-cost robots usually lack sufficient resources such as CPU power or memory capabilities, thus forbidding advanced algorithms to be executed in-robot. Within the context of RAPP (Robotic Applications for Delivering Smart User Empowering Applications) we treat robots as platforms, where applications can be downloaded and automatically deployed. Furthermore, we propose and implement a novel multi-agent architecture, empowering robots to offload computations in entities denoted as Cloud Agents. This paper discusses the respective architecture in detail.

@conference{etsardouRobotBurden2016,
author={Aristeidis G. Thallas and Konstantinos Panayiotou and Emmanouil Tsardoulias and Andreas L. Symeonidis and Pericles A. Mitkas and George G. Karagiannis},
title={Relieving robots from their burdens: The Cloud Agent concept},
booktitle={2016 5th IEEE International Conference on Cloud Networking (Cloudnet)},
year={2016},
month={10},
date={2016-10-05},
url={https://ieeexplore.ieee.org/document/7776599/authors#authors},
doi={https://doi.org/10.1109/CloudNet.2016.38},
keywords={Robots;Containers;Cloud computing;Computer architecture;Web servers;Sockets},
abstract={The consumer robotics concept has already invaded our everyday lives, however two major drawbacks have become apparent both for the roboticists and the consumers. The first is that these robots are pre-programmed to perform specific tasks and usually their software is proprietary, thus not open to \"interventions\". The second is that even if their software is open source, low-cost robots usually lack sufficient resources such as CPU power or memory capabilities, thus forbidding advanced algorithms to be executed in-robot. Within the context of RAPP (Robotic Applications for Delivering Smart User Empowering Applications) we treat robots as platforms, where applications can be downloaded and automatically deployed. Furthermore, we propose and implement a novel multi-agent architecture, empowering robots to offload computations in entities denoted as Cloud Agents. This paper discusses the respective architecture in detail.}
}

2016

Inproceedings Papers

Fotis Psomopoulos, Athanassios Kintsakis and Pericles Mitkas
"A pan-genome approach and application to species with photosynthetic capabilities"
15th European Conference on Computational Biology, The Hague, Netherlands, 2016 Sep

The abundance of genome data being produced by the new sequencing techniques is providing the opportunity to investigate gene diversity at a new level. A pan-genome analysis can provide the framework for estimating the genomic diversity of the data set at hand and give insights towards the understanding of its observed characteristics. Currently, there exist several tools for pan-genome studies, mostly focused on prokaryote genomes and their respective attributes. Here we provide a systematic approach for constructing the groups inherently associated with a pan-genome analysis, using the complete proteome data of photosynthetic genomes as the driving case study. As opposed to similar studies, the presented method requires a complete information system (i.e. complete genomes) in order to produce meaningful results. The method was applied to 95 genomes with photosynthetic capabilities, including cyanobacteria and green plants, as retrieved from UniProt and Plaza. Due to the significant computational requirements of the analysis, we utilized the Federated Cloud computing resources provided by the EGI infrastructure. The analysis ultimately produced 37,680 protein families, with a core genome comprising of 102 families. An investigation of the families’ distribution revealed two underlying but expected subsets, roughly corresponding to bacteria and eukaryotes. Finally, an automated functional annotation of the produced clusters, through assignment of PFAM domains to the participating protein sequences, allowed the identification of the key characteristics present in the core genome, as well as of selected multi-member families.

@inproceedings{2016PsomopoulosECCB,
author={Fotis Psomopoulos and Athanassios Kintsakis and Pericles Mitkas},
title={A pan-genome approach and application to species with photosynthetic capabilities},
booktitle={15th European Conference on Computational Biology},
address={The Hague, Netherlands},
year={2016},
month={09},
date={2016-09-01},
abstract={The abundance of genome data being produced by the new sequencing techniques is providing the opportunity to investigate gene diversity at a new level. A pan-genome analysis can provide the framework for estimating the genomic diversity of the data set at hand and give insights towards the understanding of its observed characteristics. Currently, there exist several tools for pan-genome studies, mostly focused on prokaryote genomes and their respective attributes. Here we provide a systematic approach for constructing the groups inherently associated with a pan-genome analysis, using the complete proteome data of photosynthetic genomes as the driving case study. As opposed to similar studies, the presented method requires a complete information system (i.e. complete genomes) in order to produce meaningful results. The method was applied to 95 genomes with photosynthetic capabilities, including cyanobacteria and green plants, as retrieved from UniProt and Plaza. Due to the significant computational requirements of the analysis, we utilized the Federated Cloud computing resources provided by the EGI infrastructure. The analysis ultimately produced 37,680 protein families, with a core genome comprising of 102 families. An investigation of the families’ distribution revealed two underlying but expected subsets, roughly corresponding to bacteria and eukaryotes. Finally, an automated functional annotation of the produced clusters, through assignment of PFAM domains to the participating protein sequences, allowed the identification of the key characteristics present in the core genome, as well as of selected multi-member families.}
}
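The core/pan-genome bookkeeping behind such an analysis reduces to set operations over per-genome protein-family memberships. The following toy sketch uses invented family and genome names, not the study's actual 95-genome dataset:

```python
# Sketch of pan-genome bookkeeping: the pan-genome is the union of all
# protein families, the core genome their intersection across every
# genome. (Toy data; family and genome names are made up.)

genomes = {
    "cyanobacterium_A": {"photosystemII", "rubisco", "gas_vesicle"},
    "green_plant_B":    {"photosystemII", "rubisco", "cellulose_synthase"},
    "green_plant_C":    {"photosystemII", "rubisco", "cellulose_synthase"},
}

pan_genome = set().union(*genomes.values())         # every family seen anywhere
core_genome = set.intersection(*genomes.values())   # families in all genomes
accessory = pan_genome - core_genome                # lineage-specific families
```

In the study's terms, the 102 core families reported would correspond to `core_genome`, while the distribution of the remaining families over `accessory` is what revealed the bacteria/eukaryote split.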

Emmanouil Stergiadis, Athanassios Kintsakis, Fotis Psomopoulos and Pericles A. Mitkas
"A scalable Grid Computing framework for extensible phylogenetic profile construction"
12th International Conference on Artificial Intelligence Applications and Innovations, pp. 455-462, Thessaloniki, Greece, 2016 Sep

Current research in Life Sciences without doubt has been established as a Big Data discipline. Beyond the expected domain-specific requirements, this perspective has put scalability as one of the most crucial aspects of any state-of-the-art bioinformatics framework. Sequence alignment and construction of phylogenetic profiles are common tasks evident in a wide range of life science analyses as, given an arbitrary big volume of genomes, they can provide useful insights on the functionality and relationships of the involved entities. This process is often a computational bottleneck in existing solutions, due to its inherent complexity. Our proposed distributed framework manages to perform both tasks with significant speed-up by employing Grid Computing resources provided by EGI in an efficient and optimal manner. The overall workflow is both fully automated, thus making it user friendly, and fully detached from the end-users terminal, since all computations take place on Grid worker nodes.

@inproceedings{2016Stergiadis,
author={Emmanouil Stergiadis and Athanassios Kintsakis and Fotis Psomopoulos and Pericles A. Mitkas},
title={A scalable Grid Computing framework for extensible phylogenetic profile construction},
booktitle={12th International Conference on Artificial Intelligence Applications and Innovations},
pages={455-462},
address={Thessaloniki, Greece},
year={2016},
month={09},
date={2016-09-02},
abstract={Current research in Life Sciences without doubt has been established as a Big Data discipline. Beyond the expected domain-specific requirements, this perspective has put scalability as one of the most crucial aspects of any state-of-the-art bioinformatics framework. Sequence alignment and construction of phylogenetic profiles are common tasks evident in a wide range of life science analyses as, given an arbitrary big volume of genomes, they can provide useful insights on the functionality and relationships of the involved entities. This process is often a computational bottleneck in existing solutions, due to its inherent complexity. Our proposed distributed framework manages to perform both tasks with significant speed-up by employing Grid Computing resources provided by EGI in an efficient and optimal manner. The overall workflow is both fully automated, thus making it user friendly, and fully detached from the end-users terminal, since all computations take place on Grid worker nodes.}
}

2015

Journal Articles

Dimitrios Vitsios, Fotis Psomopoulos, Pericles Mitkas and Christos Ouzounis
"Inference of pathway decomposition across multiple species through gene clustering"
International Journal on Artificial Intelligence Tools, 24, pp. 25, 2015 Feb

In the wake of gene-oriented data analysis in large-scale bioinformatics studies, research focus is currently shifting towards the analysis of the functional association of genes, namely the metabolic pathways in which genes participate. The goal of this paper is to identify the core genes in a specific pathway, based on a user-defined selection of genomes. To this end, a novel algorithm has been developed that uses data from the KEGG database and, through the application of the MCL clustering algorithm, identifies clusters that correspond to different “layers” of genes, either on a phylogenetic or a functional level. The algorithm
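The MCL step the abstract refers to can be illustrated with a minimal sketch (toy graph, not KEGG data; a real analysis would use the reference MCL implementation): the algorithm alternates "expansion" (matrix squaring) and "inflation" (elementwise power plus column normalisation) on a column-stochastic similarity matrix, and clusters are read off the nonzero rows of the limit matrix.

```python
import numpy as np

def mcl(adj, inflation=2.0, iters=50):
    """Toy Markov Clustering: expansion + inflation until (near) convergence."""
    m = adj + np.eye(len(adj))       # add self-loops
    m = m / m.sum(axis=0)            # make columns stochastic
    for _ in range(iters):
        m = m @ m                    # expansion: let flow spread
        m = m ** inflation           # inflation: boost strong flows
        m = m / m.sum(axis=0)        # re-normalise columns
    clusters = []
    for row in m:                    # read clusters off nonzero rows
        nodes = set(np.nonzero(row > 1e-6)[0])
        if nodes and nodes not in clusters:
            clusters.append(nodes)
    return clusters

# Two disconnected pairs of nodes -> two clusters.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
clusters = mcl(adj)
```

The inflation parameter controls granularity: higher values produce more, smaller clusters, which matches the paper's notion of different "layers" of genes emerging at different resolutions.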

@article{2015vitsiosIJAIT,
author={Dimitrios Vitsios and Fotis Psomopoulos and Pericles Mitkas and Christos Ouzounis},
title={Inference of pathway decomposition across multiple species through gene clustering},
journal={International Journal on Artificial Intelligence Tools},
volume={24},
pages={25},
year={2015},
month={02},
date={2015-02-23},
url={http://www.worldscientific.com/doi/pdf/10.1142/S0218213015400035},
abstract={In the wake of gene-oriented data analysis in large-scale bioinformatics studies, focus in research is currently shifting towards the analysis of the functional association of genes, namely the metabolic pathways in which genes participate. The goal of this paper is to attempt to identify the core genes in a specific pathway, based on a user-defined selection of genomes. To this end, a novel algorithm has been developed that uses data from the KEGG database, and through the application of the MCL clustering algorithm, identifies clusters that correspond to different “layers” of genes, either on a phylogenetic or a functional level. The algorithm}
}

2015

Books

Alexandros Gkiokas, Emmanouil G. Tsardoulias and Pericles A. Mitkas
"Hive Collective Intelligence for Cloud Robotics: A Hybrid Distributed Robotic Controller Design for Learning and Adaptation."
Springer International Publishing, 2015 Mar

The recent advent of Cloud Computing, inevitably gave rise to Cloud Robotics. Whilst the field is arguably still in its infancy, great promise is shown regarding the problem of limited computational power in Robotics. This is the most evident advantage of Cloud Robotics, but, other much more significant yet subtle advantages can now be identified. Moving away from traditional Robotics, and approaching Cloud Robotics through the prism of distributed systems or Swarm Intelligence offers quite an interesting composure; physical robots deployed across different areas, may delegate tasks to higher intelligence agents residing in the cloud. This design has certain distinct attributes, similar with the organisation of a Hive or bee colony. Such a parallelism is crucial for the foundations set hereinafter, as they express through the hive design, a new scheme of distributed robotic architectures. Delegation of agent intelligence, from the physical robot swarms to the cloud controllers, creates a unique type of Hive Intelligence, where the controllers residing in the cloud, may act as the brain of a ubiquitous group of robots, whilst the robots themselves act as proxies for the Hive Intelligence. The sensors of the hive system providing the input and output are the robots, yet the information processing may take place collectively, individually or on a central hub, thus offering the advantages of a hybrid swarm and cloud controller. The realisation that radical robotic architectures can be created and implemented with current Artificial Intelligence models, raises interesting questions, such as if robots belonging to a hive, can perform tasks and procedures better or faster, and if can they learn through their interactions, and hence become more adaptive and intelligent.

@book{2015GkiokasSIP,
author={Alexandros Gkiokas and Emmanouil G. Tsardoulias and Pericles A. Mitkas},
title={Hive Collective Intelligence for Cloud Robotics: A Hybrid Distributed Robotic Controller Design for Learning and Adaptation.},
publisher={Springer International Publishing},
year={2015},
month={03},
date={2015-03-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Hive-Collective-Intelligence-for-Cloud-Robotics-A-Hybrid-Distributed-Robotic-Controller-Design-for-Learning-and-Adaptation.pdf},
abstract={The recent advent of Cloud Computing, inevitably gave rise to Cloud Robotics. Whilst the field is arguably still in its infancy, great promise is shown regarding the problem of limited computational power in Robotics. This is the most evident advantage of Cloud Robotics, but, other much more significant yet subtle advantages can now be identified. Moving away from traditional Robotics, and approaching Cloud Robotics through the prism of distributed systems or Swarm Intelligence offers quite an interesting composure; physical robots deployed across different areas, may delegate tasks to higher intelligence agents residing in the cloud. This design has certain distinct attributes, similar with the organisation of a Hive or bee colony. Such a parallelism is crucial for the foundations set hereinafter, as they express through the hive design, a new scheme of distributed robotic architectures. Delegation of agent intelligence, from the physical robot swarms to the cloud controllers, creates a unique type of Hive Intelligence, where the controllers residing in the cloud, may act as the brain of a ubiquitous group of robots, whilst the robots themselves act as proxies for the Hive Intelligence. The sensors of the hive system providing the input and output are the robots, yet the information processing may take place collectively, individually or on a central hub, thus offering the advantages of a hybrid swarm and cloud controller. The realisation that radical robotic architectures can be created and implemented with current Artificial Intelligence models, raises interesting questions, such as if robots belonging to a hive, can perform tasks and procedures better or faster, and if can they learn through their interactions, and hence become more adaptive and intelligent.}
}

Pericles A. Mitkas
"Assistive Robots as Future Caregivers: The RAPP Approach."
Springer International Publishing, 2015 Mar

As our societies are affected by a dramatic demographic change, the percentage of elderly and people requiring support in their daily life is expected to increase in the near future and caregivers will not be enough to assist and support them. Socially interactive robots can help confront this situation not only by physically assisting people but also by functioning as a companion. The rising sales figures of robots point towards a trend break concerning robotics. To lower the cost for developers and to increase their interest in developing robotic applications, the RAPP approach introduces the idea of robots as platforms. RAPP (A Software Platform for Delivering Smart User Empowering Robotic Applications) aims to provide a software platform in order to support the creation and delivery of robotic applications (RApps) targeting people at risk of exclusion, especially older people. The open-source software platform will provide an API with the required functionality for the implementation of RApps. It will also provide access to the robots’ sensors and actuators employing higher level commands, by adding a middleware stack with functionalities suitable for different kinds of robots. RAPP will expand the robots’ computational and storage capabilities and enable machine learning operations, distributed data collection and processing. Through a special repository for RApps, the platform will support knowledge sharing among robots in order to provide personalized applications based on adaptation to individuals. The use of a common API will facilitate the development of improved applications deployable for a variety of robots. These applications target people with different needs, capabilities and expectations, while at the same time respect their privacy and autonomy. The RAPP approach can lower the cost of robotic applications development and it is expected to have a profound effect in the robotics market.

@book{2015MitkasSIP,
author={Pericles A. Mitkas},
title={Assistive Robots as Future Caregivers: The RAPP Approach.},
publisher={Springer International Publishing},
year={2015},
month={03},
date={2015-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Assistive-Robots-as-Future-Caregivers-The-RAPP-Approach.pdf},
abstract={As our societies are affected by a dramatic demographic change, the percentage of elderly and people requiring support in their daily life is expected to increase in the near future and caregivers will not be enough to assist and support them. Socially interactive robots can help confront this situation not only by physically assisting people but also by functioning as a companion. The rising sales figures of robots point towards a trend break concerning robotics. To lower the cost for developers and to increase their interest in developing robotic applications, the RAPP approach introduces the idea of robots as platforms. RAPP (A Software Platform for Delivering Smart User Empowering Robotic Applications) aims to provide a software platform in order to support the creation and delivery of robotic applications (RApps) targeting people at risk of exclusion, especially older people. The open-source software platform will provide an API with the required functionality for the implementation of RApps. It will also provide access to the robots’ sensors and actuators employing higher level commands, by adding a middleware stack with functionalities suitable for different kinds of robots. RAPP will expand the robots’ computational and storage capabilities and enable machine learning operations, distributed data collection and processing. Through a special repository for RApps, the platform will support knowledge sharing among robots in order to provide personalized applications based on adaptation to individuals. The use of a common API will facilitate the development of improved applications deployable for a variety of robots. These applications target people with different needs, capabilities and expectations, while at the same time respect their privacy and autonomy. The RAPP approach can lower the cost of robotic applications development and it is expected to have a profound effect in the robotics market.}
}

Emmanouil G. Tsardoulias, Cezary Zielinski, Wlodzimierz Kasprzak, Sofia Reppou, Andreas L. Symeonidis, Pericles A. Mitkas and George Karagiannis
"Merging Robotics and AAL Ontologies: The RAPP Methodology"
Springer International Publishing, 2015 Mar

Cloud robotics is becoming a trend in the modern robotics field, as it became evident that true artificial intelligence can be achieved only by sharing collective knowledge. In the ICT area, the most common way to formulate knowledge is via the ontology form, where different meanings connect semantically. Additionally, there is a considerable effort to merge robotics with assisted living concepts, as the modern societies suffer from lack of caregivers for the persons in need. In the current work, an attempt is performed to merge a robotic and an AAL ontology, as well as utilize it in the RAPP Project (EU-FP7).

@book{2015TsardouliasSIP,
author={Emmanouil G. Tsardoulias and Cezary Zielinski and Wlodzimierz Kasprzak and Sofia Reppou and Andreas L. Symeonidis and Pericles A. Mitkas and George Karagiannis},
title={Merging Robotics and AAL Ontologies: The RAPP Methodology},
publisher={Springer International Publishing},
year={2015},
month={03},
date={2015-03-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Merging-Robotics-and-AAL-Ontologies-The-RAPP-Methodology.pdf},
abstract={Cloud robotics is becoming a trend in the modern robotics field, as it became evident that true artificial intelligence can be achieved only by sharing collective knowledge. In the ICT area, the most common way to formulate knowledge is via the ontology form, where different meanings connect semantically. Additionally, there is a considerable effort to merge robotics with assisted living concepts, as the modern societies suffer from lack of caregivers for the persons in need. In the current work, an attempt is performed to merge a robotic and an AAL ontology, as well as utilize it in the RAPP Project (EU-FP7).}
}

2015

Conference Papers

Athanassios M. Kintsakis, Antonios Chrysopoulos and Pericles A. Mitkas
"Agent-based short-term load and price forecasting using a parallel implementation of an adaptive PSO-trained local linear wavelet neural network"
European Energy Market (EEM), pp. 1 - 5, 2015 May

Short-Term Load and Price forecasting are crucial to the stability of electricity markets and to the profitability of the involved parties. The work presented here makes use of a Local Linear Wavelet Neural Network (LLWNN) trained by a special adaptive version of the Particle Swarm Optimization algorithm and implemented as a parallel process in CUDA. Experiments for short-term load and price forecasting, up to 24 hours ahead, were conducted on energy market datasets from Greece and the USA. In addition, the fast response time of the system enabled its encapsulation in a PowerTAC agent competing in a real-time environment. The system displayed robust all-around performance in a plethora of real and simulated energy markets, each characterized by unique patterns and deviations. The low forecasting error, real-time performance, and the significant increase in the profitability of an energy market agent show that our approach is a powerful prediction tool with multiple expansion possibilities.
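The PSO training loop behind the approach can be sketched in its basic global-best form (the paper uses a special adaptive variant implemented in CUDA; the sketch below shows only the standard velocity/position update on a toy 1-D objective, with all parameter values illustrative):

```python
import random

def pso(f, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over [lo, hi] with standard global-best PSO."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]   # particle positions
    vs = [0.0] * n                                 # particle velocities
    pbest = xs[:]                                  # personal bests
    gbest = min(xs, key=f)                         # global best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])   # cognitive pull
                     + c2 * r2 * (gbest - xs[i]))     # social pull
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # clamp to bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

In the paper's setting, each "position" would be the full LLWNN weight vector and `f` the forecasting error; because every particle's evaluation is independent, the sweep over particles maps naturally onto CUDA threads.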

@conference{2015KintsakisEEM,
author={Athanassios M. Kintsakis and Antonios Chrysopoulos and Pericles A. Mitkas},
title={Agent-based short-term load and price forecasting using a parallel implementation of an adaptive PSO-trained local linear wavelet neural network},
booktitle={European Energy Market (EEM)},
pages={1 - 5},
year={2015},
month={05},
date={2015-05-19},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Agent-based-Short-Term-Load-and-Price-Forecasting-Using-a-Parallel-Implementation-of-an-Adaptive-PSO-Trained-Local-Linear-Wavelet-Neural-Network.pdf},
doi={10.1109/EEM.2015.7216611},
keywords={Load Forecasting;Neural Networks;Parallel Architectures;Particle Swarm Optimization;Price Forecasting;Wavelet Neural Networks},
abstract={Short-Term Load and Price forecasting are crucial to the stability of electricity markets and to the profitability of the involved parties. The work presented here makes use of a Local Linear Wavelet Neural Network (LLWNN) trained by a special adaptive version of the Particle Swarm Optimization algorithm and implemented as parallel process in CUDA. Experiments for short term load and price forecasting, up to 24 hours ahead, were conducted for energy market datasets from Greece and the USA. In addition, the fast response time of the system enabled its encapsulation in a PowerTAC agent, competing in a real time environment. The system displayed robust all-around performance in a plethora of real and simulated energy markets, each characterized by unique patterns and deviations. The low forecasting error, real time performance and the significant increase in the profitability of an energy market agent show that our approach is a powerful prediction tool, with multiple expansion possibilities.}
}

Pericles A. Mitkas
"Assistive Robots as Future Caregivers: The RAPP Approach"
Automation Conference, 2015 Mar

As our societies are affected by a dramatic demographic change, the percentage of elderly and people requiring support in their daily life is expected to increase in the near future and caregivers will not be enough to assist and support them. Socially interactive robots can help confront this situation not only by physically assisting people but also by functioning as a companion. The rising sales figures of robots point towards a trend break concerning robotics. To lower the cost for developers and to increase their interest in developing robotic applications, the RAPP approach introduces the idea of robots as platforms. RAPP (A Software Platform for Delivering Smart User Empowering Robotic Applications) aims to provide a software platform in order to support the creation and delivery of robotic applications (RApps) targeting people at risk of exclusion, especially older people. The open-source software platform will provide an API with the required functionality for the implementation of RApps. It will also provide access to the robots’ sensors and actuators employing higher level commands, by adding a middleware stack with functionalities suitable for different kinds of robots. RAPP will expand the robots’ computational and storage capabilities and enable machine learning operations, distributed data collection and processing. Through a special repository for RApps, the platform will support knowledge sharing among robots in order to provide personalized applications based on adaptation to individuals. The use of a common API will facilitate the development of improved applications deployable for a variety of robots. These applications target people with different needs, capabilities and expectations, while at the same time respect their privacy and autonomy. The RAPP approach can lower the cost of robotic applications development and it is expected to have a profound effect in the robotics market.

@conference{2015MitkasACRAPP,
author={Pericles A. Mitkas},
title={Assistive Robots as Future Caregivers: The RAPP Approach},
booktitle={Automation Conference},
year={2015},
month={03},
date={2015-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Assistive-Robots-as-Future-Caregivers-The-RAPP-Approach.pdf},
abstract={As our societies are affected by a dramatic demographic change, the percentage of elderly and people requiring support in their daily life is expected to increase in the near future and caregivers will not be enough to assist and support them. Socially interactive robots can help confront this situation not only by physically assisting people but also by functioning as a companion. The rising sales figures of robots point towards a trend break concerning robotics. To lower the cost for developers and to increase their interest in developing robotic applications, the RAPP approach introduces the idea of robots as platforms. RAPP (A Software Platform for Delivering Smart User Empowering Robotic Applications) aims to provide a software platform in order to support the creation and delivery of robotic applications (RApps) targeting people at risk of exclusion, especially older people. The open-source software platform will provide an API with the required functionality for the implementation of RApps. It will also provide access to the robots’ sensors and actuators employing higher level commands, by adding a middleware stack with functionalities suitable for different kinds of robots. RAPP will expand the robots’ computational and storage capabilities and enable machine learning operations, distributed data collection and processing. Through a special repository for RApps, the platform will support knowledge sharing among robots in order to provide personalized applications based on adaptation to individuals. The use of a common API will facilitate the development of improved applications deployable for a variety of robots. These applications target people with different needs, capabilities and expectations, while at the same time respect their privacy and autonomy. The RAPP approach can lower the cost of robotic applications development and it is expected to have a profound effect in the robotics market.}
}

Fotis Psomopoulos, Olga Vrousgou and Pericles A. Mitkas
"Large-scale modular comparative genomics: the Grid approach"
23rd Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) / 14th European Conference on Computational Biology (ECCB), 2015 Jul

@conference{2015PsomopoulosAICISMB,
author={Fotis Psomopoulos and Olga Vrousgou and Pericles A. Mitkas},
title={Large-scale modular comparative genomics: the Grid approach},
booktitle={23rd Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) / 14th European Conference on Computational Biology (ECCB)},
year={2015},
month={07},
date={2015-07-26},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Large-scale-modular-comparative-genomics-the-Grid-approach.pdf}
}

Alexandros Gkiokas, Emmanouil G. Tsardoulias and Pericles A. Mitkas
"Hive Collective Intelligence for Cloud Robotics: A Hybrid Distributed Robotic Controller Design for Learning and Adaptation"
Automation Conference, 2015 Mar

The recent advent of Cloud Computing, inevitably gave rise to Cloud Robotics. Whilst the field is arguably still in its infancy, great promise is shown regarding the problem of limited computational power in Robotics. This is the most evident advantage of Cloud Robotics, but, other much more significant yet subtle advantages can now be identified. Moving away from traditional Robotics, and approaching Cloud Robotics through the prism of distributed systems or Swarm Intelligence offers quite an interesting composure; physical robots deployed across different areas, may delegate tasks to higher intelligence agents residing in the cloud. This design has certain distinct attributes, similar with the organisation of a Hive or bee colony. Such a parallelism is crucial for the foundations set hereinafter, as they express through the hive design, a new scheme of distributed robotic architectures. Delegation of agent intelligence, from the physical robot swarms to the cloud controllers, creates a unique type of Hive Intelligence, where the controllers residing in the cloud, may act as the brain of a ubiquitous group of robots, whilst the robots themselves act as proxies for the Hive Intelligence. The sensors of the hive system providing the input and output are the robots, yet the information processing may take place collectively, individually or on a central hub, thus offering the advantages of a hybrid swarm and cloud controller. The realisation that radical robotic architectures can be created and implemented with current Artificial Intelligence models, raises interesting questions, such as if robots belonging to a hive, can perform tasks and procedures better or faster, and if can they learn through their interactions, and hence become more adaptive and intelligent.

@conference{2015TsardouliasHCIAC,
author={Alexandros Gkiokas and Emmanouil G. Tsardoulias and Pericles A. Mitkas},
title={Hive Collective Intelligence for Cloud Robotics: A Hybrid Distributed Robotic Controller Design for Learning and Adaptation},
booktitle={Automation Conference},
year={2015},
month={03},
date={2015-03-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Hive-Collective-Intelligence-for-Cloud-Robotics-A-Hybrid-Distributed-Robotic-Controller-Design-for-Learning-and-Adaptation.pdf},
abstract={The recent advent of Cloud Computing, inevitably gave rise to Cloud Robotics. Whilst the field is arguably still in its infancy, great promise is shown regarding the problem of limited computational power in Robotics. This is the most evident advantage of Cloud Robotics, but, other much more significant yet subtle advantages can now be identified. Moving away from traditional Robotics, and approaching Cloud Robotics through the prism of distributed systems or Swarm Intelligence offers quite an interesting composure; physical robots deployed across different areas, may delegate tasks to higher intelligence agents residing in the cloud. This design has certain distinct attributes, similar with the organisation of a Hive or bee colony. Such a parallelism is crucial for the foundations set hereinafter, as they express through the hive design, a new scheme of distributed robotic architectures. Delegation of agent intelligence, from the physical robot swarms to the cloud controllers, creates a unique type of Hive Intelligence, where the controllers residing in the cloud, may act as the brain of a ubiquitous group of robots, whilst the robots themselves act as proxies for the Hive Intelligence. The sensors of the hive system providing the input and output are the robots, yet the information processing may take place collectively, individually or on a central hub, thus offering the advantages of a hybrid swarm and cloud controller. The realisation that radical robotic architectures can be created and implemented with current Artificial Intelligence models, raises interesting questions, such as if robots belonging to a hive, can perform tasks and procedures better or faster, and if can they learn through their interactions, and hence become more adaptive and intelligent.}
}

Emmanouil G. Tsardoulias, Cezary Zielinski, Wlodzimierz Kasprzak, Sofia Reppou, Andreas L. Symeonidis, Pericles A. Mitkas and George Karagiannis
"Merging Robotics and AAL ontologies: The RAPP methodology"
Automation Conference, 2015 Mar

Cloud robotics is becoming a trend in the modern robotics field, as it became evident that true artificial intelligence can be achieved only by sharing collective knowledge. In the ICT area, the most common way to formulate knowledge is via the ontology form, where different meanings connect semantically. Additionally, there is a considerable effort to merge robotics with assisted living concepts, as the modern societies suffer from lack of caregivers for the persons in need. In the current work, an attempt is performed to merge a robotic and an AAL ontology, as well as utilize it in the RAPP Project (EU-FP7).

@conference{2015TsardouliasMRALL,
author={Emmanouil G. Tsardoulias and Cezary Zielinski and Wlodzimierz Kasprzak and Sofia Reppou and Andreas L. Symeonidis and Pericles A. Mitkas and George Karagiannis},
title={Merging Robotics and AAL ontologies: The RAPP methodology},
booktitle={Automation Conference},
year={2015},
month={03},
date={2015-03-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Merging_Robotics_and_AAL_ontologies_-_The_RAPP_methodology.pdf},
abstract={Cloud robotics is becoming a trend in the modern robotics field, as it became evident that true artificial intelligence can be achieved only by sharing collective knowledge. In the ICT area, the most common way to formulate knowledge is via the ontology form, where different meanings connect semantically. Additionally, there is a considerable effort to merge robotics with assisted living concepts, as the modern societies suffer from lack of caregivers for the persons in need. In the current work, an attempt is performed to merge a robotic and an AAL ontology, as well as utilize it in the RAPP Project (EU-FP7).}
}

Emmanouil G. Tsardoulias, Andreas L. Symeonidis and Pericles A. Mitkas
"An automatic speech detection architecture for social robot oral interaction"
Proceedings of the Audio Mostly 2015 on Interaction With Sound, p. 33, ACM, Island of Rhodes, 2015 Oct

Social robotics have become a trend in contemporary robotics research, since they can be successfully used in a wide range of applications. One of the most fundamental communication skills a robot must have is the oral interaction with a human, in order to provide feedback or accept commands. And, although text-to-speech is an almost solved problem, this isn

@conference{2015TsardouliasPAMIWS,
author={Emmanouil G. Tsardoulias and Andreas L. Symeonidis and Pericles A. Mitkas},
title={An automatic speech detection architecture for social robot oral interaction},
booktitle={Proceedings of the Audio Mostly 2015 on Interaction With Sound},
pages={33},
publisher={ACM},
address={Island of Rhodes},
year={2015},
month={10},
date={2015-10-07},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/An-automatic-speech-detection-architecture-for-social-robot-oral-interaction.pdf},
abstract={Social robotics have become a trend in contemporary robotics research, since they can be successfully used in a wide range of applications. One of the most fundamental communication skills a robot must have is the oral interaction with a human, in order to provide feedback or accept commands. And, although text-to-speech is an almost solved problem, this isn}
}

Konstantinos Vavliakis, Anthony Chrysopoulos, Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis and Pericles A. Mitkas
"CASSANDRA: a simulation-based, decision-support tool for energy market stakeholders"
SimuTools, 2015 Dec

Energy gives personal comfort to people, and is essential for the generation of commercial and societal wealth. Nevertheless, energy production and consumption place considerable pressures on the environment, such as the emission of greenhouse gases and air pollutants. They contribute to climate change, damage natural ecosystems and the man-made environment, and cause adverse effects to human health. Lately, novel market schemes emerge, such as the formation and operation of customer coalitions aiming to improve their market power through the pursuit of common benefits. In this paper we present CASSANDRA, an open source, expandable software platform for modelling the demand side of power systems, focusing on small scale consumers. The structural elements of the platform are a) the electrical installations (i.e. households, commercial stores, small industries etc.), b) the respective appliances installed, and c) the electrical consumption-related activities of the people residing in the installations. CASSANDRA serves as a tool for simulation of real demand-side environments, providing decision support for energy market stakeholders. The ultimate goal of the CASSANDRA simulation functionality is the identification of good practices that lead to energy efficiency, the clustering of electric energy consumers according to their consumption patterns, and the study of consumer behaviour change when presented with various demand response programs.

@conference{2015VavliakisSimuTools,
author={Konstantinos Vavliakis and Anthony Chrysopoulos and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={CASSANDRA: a simulation-based, decision-support tool for energy market stakeholders},
booktitle={SimuTools},
year={2015},
month={12},
date={2015-12-00},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/09/CASSANDRA_SimuTools.pdf},
abstract={Energy gives personal comfort to people, and is essential for the generation of commercial and societal wealth. Nevertheless, energy production and consumption place considerable pressures on the environment, such as the emission of greenhouse gases and air pollutants. They contribute to climate change, damage natural ecosystems and the man-made environment, and cause adverse effects to human health. Lately, novel market schemes emerge, such as the formation and operation of customer coalitions aiming to improve their market power through the pursuit of common benefits. In this paper we present CASSANDRA, an open source, expandable software platform for modelling the demand side of power systems, focusing on small scale consumers. The structural elements of the platform are a) the electrical installations (i.e. households, commercial stores, small industries etc.), b) the respective appliances installed, and c) the electrical consumption-related activities of the people residing in the installations. CASSANDRA serves as a tool for simulation of real demand-side environments providing decision support for energy market stakeholders. The ultimate goal of the CASSANDRA simulation functionality is the identification of good practices that lead to energy efficiency, the clustering of electric energy consumers according to their consumption patterns, and the study of consumer behaviour change when presented with various demand response programs.}
}

Olga Vrousgou, Fotis Psomopoulos and Pericles Mitkas
"A grid-enabled modular framework for efficient sequence analysis workflows"
16th International Conference on Engineering Applications of Neural Networks, Island of Rhodes, 2015 Oct

In the era of Big Data in Life Sciences, efficient processing and analysis of vast amounts of sequence data is becoming an ever more daunting challenge. Among such analyses, sequence alignment is one of the most commonly used procedures, as it provides useful insights on the functionality and relationship of the involved entities. Sequence alignment is one of the most common computational bottlenecks in several bioinformatics workflows. We have designed and implemented a time-efficient distributed modular application for sequence alignment, phylogenetic profiling and clustering of protein sequences, by utilizing the European Grid Infrastructure. The optimal utilization of the Grid with regard to the respective modules allowed us to achieve significant speedups, to the order of 1400%.

@conference{2015VrousgouICEANN,
author={Olga Vrousgou and Fotis Psomopoulos and Pericles Mitkas},
title={A grid-enabled modular framework for efficient sequence analysis workflows},
booktitle={16th International Conference on Engineering Applications of Neural Networks},
address={Island of Rhodes},
year={2015},
month={10},
date={2015-10-22},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A-Grid-Enabled-Modular-Framework-for-Efficient-Sequence-Analysis-Workflows.pdf},
abstract={In the era of Big Data in Life Sciences, efficient processing and analysis of vast amounts of sequence data is becoming an ever more daunting challenge. Among such analyses, sequence alignment is one of the most commonly used procedures, as it provides useful insights on the functionality and relationship of the involved entities. Sequence alignment is one of the most common computational bottlenecks in several bioinformatics workflows. We have designed and implemented a time-efficient distributed modular application for sequence alignment, phylogenetic profiling and clustering of protein sequences, by utilizing the European Grid Infrastructure. The optimal utilization of the Grid with regard to the respective modules allowed us to achieve significant speedups, to the order of 1400%.}
}
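The chunk-and-distribute pattern behind this grid workflow can be illustrated with a minimal in-process sketch: the input pair set is split into fixed-size chunks, each chunk is scored independently (a trivial identity score stands in for a real aligner), and the partial results are merged. The chunk size, score function, and toy sequences are illustrative assumptions, not part of the actual EGI application.

```python
# Toy sketch of a chunked sequence-analysis workflow. In the grid setting,
# each chunk would be shipped to a separate worker node; here everything
# runs in-process to keep the example self-contained.

def identity_score(a, b):
    """Fraction of matching positions; a stand-in for a real alignment score."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def chunks(items, size):
    """Split a work list into fixed-size chunks for independent processing."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

pairs = [("ACGT", "ACGA"), ("TTTT", "TTTA"), ("GGGG", "GGGG"), ("ACAC", "TGTG")]

# Score each chunk independently, then merge the partial results.
partial = [[identity_score(a, b) for a, b in c] for c in chunks(pairs, 2)]
scores = [s for part in partial for s in part]
```

Because the chunks share no state, replacing the in-process loop with a pool of remote workers changes only the dispatch step, which is what makes this pattern grid-friendly.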

2014

Journal Articles

Antonios Chrysopoulos, Christos Diou, A.L. Symeonidis and Pericles A. Mitkas
"Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications"
Engineering Applications of Artificial Intelligence (EAAI), 35, pp. 299-315, 2014 Oct

In contemporary power systems, small-scale consumers account for up to 50% of a country's total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.

@article{2014chrysopoulosEAAI,
author={Antonios Chrysopoulos and Christos Diou and A.L. Symeonidis and Pericles A. Mitkas},
title={Bottom-up modeling of small-scale energy consumers for effective Demand Response Applications},
journal={EAAI},
volume={35},
pages={299-315},
year={2014},
month={10},
date={2014-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Bottom-up-modeling-of-small-scale-energy-consumers-for-effective-Demand-Response-Applications.pdf},
doi={10.1016/j.engappai.2014.06.015},
keywords={Small-scale consumer models;Demand simulation;Demand Response Applications},
abstract={In contemporary power systems, small-scale consumers account for up to 50% of a country's total electrical energy consumption. Nevertheless, not much has been achieved towards eliminating the problems caused by their inelastic consumption habits, namely the peaks in their daily power demand and the inability of energy suppliers to perform short-term forecasting and/or long-term portfolio management. Typical approaches applied in large-scale consumers, like providing targeted incentives for behavioral change, cannot be employed in this case due to the lack of models for everyday habits, activities and consumption patterns, as well as the inability to model consumer response based on personal comfort. Current work aspires to tackle these issues; it introduces a set of small-scale consumer models that provide statistical descriptions of electrical consumption patterns, parameterized from the analysis of real-life consumption measurements. These models allow (i) bottom-up aggregation of appliance use up to the overall installation load, (ii) simulation of various energy efficiency scenarios that involve changes at appliance and/or activity level and (iii) the assessment of change in consumer habits, and therefore the power consumption, as a result of applying different pricing policies. Furthermore, an autonomous agent architecture is introduced that adopts the proposed consumer models to perform simulation and result analysis. The conducted experiments indicate that (i) the proposed approach leads to accurate prediction of small-scale consumption (in terms of energy consumption and consumption activities) and (ii) small shifts in appliance usage times are sufficient to achieve significant peak power reduction.}
}
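The bottom-up aggregation of appliance use into an installation load, point (i) of the abstract, can be sketched with a toy statistical model. The appliance set, activation probabilities, and power ratings below are invented for illustration and are far simpler than the paper's measurement-parameterized models.

```python
import random

# Hypothetical bottom-up demand model: each appliance has a probability of
# being active in a given hour and a nominal power draw; the installation
# load is the sum of the sampled appliance loads.

APPLIANCES = {
    "fridge": {"p_on": 0.9, "watts": 150},
    "washer": {"p_on": 0.1, "watts": 2000},
    "lights": {"p_on": 0.5, "watts": 300},
}

def sample_installation_load(appliances, hours=24, rng=None):
    """Sample one hourly load curve (in watts) for a single installation."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    curve = []
    for _ in range(hours):
        load = sum(a["watts"] for a in appliances.values()
                   if rng.random() < a["p_on"])
        curve.append(load)
    return curve

curve = sample_installation_load(APPLIANCES)
peak = max(curve)
```

Shifting an appliance's activation probabilities across hours is then enough to simulate the demand-response scenarios the paper studies, e.g. moving the washer's activity away from the peak hour.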

2014

Conference Papers

Fotis Psomopoulos, Emmanouil Tsardoulias, Alexandros Giokas, Cezary Zielinski, Vincent Prunet, Ilias Trochidis, David Daney, Manuel Serrano, Ludovic Courtes, Stratos Arampatzis and Pericles A. Mitkas
"RAPP System Architecture, Assistance and Service Robotics in a Human Environment"
International Conference on Intelligent Robots and Systems (IEEE/RSJ), Chicago, Illinois, 2014 Sep

Robots are fast becoming a part of everyday life. This rise can be evidenced both through the public news and announcements, as well as in recent literature in the robotics scientific communities. This expanding development requires new paradigms in producing the necessary software to allow for the users

@conference{2014PsomopoulosIEEE/RSJ,
author={Fotis Psomopoulos and Emmanouil Tsardoulias and Alexandros Giokas and Cezary Zielinski and Vincent Prunet and Ilias Trochidis and David Daney and Manuel Serrano and Ludovic Courtes and Stratos Arampatzis and Pericles A. Mitkas},
title={RAPP System Architecture, Assistance and Service Robotics in a Human Environment},
booktitle={International Conference on Intelligent Robots and Systems (IEEE/RSJ)},
address={Chicago, Illinois},
year={2014},
month={09},
date={2014-09-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/RAPP-System-Architecture-Assistance-and-Service-Robotics-in-a-Human-Environment.pdf},
keywords={Load Forecasting},
abstract={Robots are fast becoming a part of everyday life. This rise can be evidenced both through the public news and announcements, as well as in recent literature in the robotics scientific communities. This expanding development requires new paradigms in producing the necessary software to allow for the users}
}

2014

Inproceedings Papers

Christos Dimou, Fani Tzima, Andreas L. Symeonidis and Pericles A. Mitkas
"Performance Evaluation of Agents and Multi-agent Systems using Formal Specifications in Z Notation"
Lecture Notes on Agents and Data Mining Interaction, pp. 50-54, Springer, Baltimore, Maryland, USA, 2014 May

Software requirements are commonly written in natural language, making them prone to ambiguity, incompleteness and inconsistency. By converting requirements to formal semantic representations, emerging problems can be detected at an early stage of the development process, thus reducing the number of ensuing errors and the development costs. In this paper, we treat the mapping from requirements to formal representations as a semantic parsing task. We describe a novel data set for this task that involves two contributions: first, we establish an ontology for formally representing requirements; and second, we introduce an iterative annotation scheme, in which formal representations are derived through step-wise refinements.

@inproceedings{2014Dimou,
author={Christos Dimou and Fani Tzima and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Performance Evaluation of Agents and Multi-agent Systems using Formal Specifications in Z Notation},
booktitle={Lecture Notes on Agents and Data Mining Interaction},
pages={50-54},
publisher={Springer},
address={Baltimore, Maryland, USA},
year={2014},
month={05},
date={2014-05-05},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Performance-Evaluation-of-Agents-and-Multi-agent-Systems-using-Formal-Specifications-in-Z-Notation.pdf},
keywords={Small-scale consumer models},
abstract={Software requirements are commonly written in natural language, making them prone to ambiguity, incompleteness and inconsistency. By converting requirements to formal semantic representations, emerging problems can be detected at an early stage of the development process, thus reducing the number of ensuing errors and the development costs. In this paper, we treat the mapping from requirements to formal representations as a semantic parsing task. We describe a novel data set for this task that involves two contributions: first, we establish an ontology for formally representing requirements; and second, we introduce an iterative annotation scheme, in which formal representations are derived through step-wise refinements.}
}

2013

Journal Articles

Kyriakos C. Chatzidimitriou and Pericles A. Mitkas
"Adaptive reservoir computing through evolution and learning"
Neurocomputing, 103, pp. 198-209, 2013 Jan

The development of real-world, fully autonomous agents would require mechanisms that would offer generalization capabilities from experience, suitable for a large range of machine learning tasks, like those from the areas of supervised and reinforcement learning. Such capacities could be offered by parametric function approximators that could either model the environment or the agent's policy. To promote autonomy, these structures should be adapted to the problem at hand with no or little human expert input. Towards this goal, we propose an adaptive function approximator method for developing appropriate neural networks in the form of reservoir computing systems through evolution and learning. Our neuro-evolution of augmenting reservoirs approach comprises several ideas, successful on their own, in an effort to develop an algorithm that could handle a large range of problems more efficiently. In particular, we use the neuro-evolution of augmented topologies algorithm as a meta-search method for the adaptation of echo state networks for handling problems to be encountered by autonomous entities. We test our approach on several test-beds from the realms of time series prediction and reinforcement learning. We compare our methodology against similar state-of-the-art algorithms with promising results.

@article{2013ChatzidimitriouN,
author={Kyriakos C. Chatzidimitriou and Pericles A. Mitkas},
title={Adaptive reservoir computing through evolution and learning},
journal={Neurocomputing},
volume={103},
pages={198-209},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Adaptive-reservoir-computing-through-evolution-and-learning.pdf},
keywords={Load Forecasting},
abstract={The development of real-world, fully autonomous agents would require mechanisms that would offer generalization capabilities from experience, suitable for a large range of machine learning tasks, like those from the areas of supervised and reinforcement learning. Such capacities could be offered by parametric function approximators that could either model the environment or the agent's policy. To promote autonomy, these structures should be adapted to the problem at hand with no or little human expert input. Towards this goal, we propose an adaptive function approximator method for developing appropriate neural networks in the form of reservoir computing systems through evolution and learning. Our neuro-evolution of augmenting reservoirs approach comprises several ideas, successful on their own, in an effort to develop an algorithm that could handle a large range of problems more efficiently. In particular, we use the neuro-evolution of augmented topologies algorithm as a meta-search method for the adaptation of echo state networks for handling problems to be encountered by autonomous entities. We test our approach on several test-beds from the realms of time series prediction and reinforcement learning. We compare our methodology against similar state-of-the-art algorithms with promising results.}
}
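A minimal echo state network, the building block this paper adapts through evolution, can be sketched as a fixed random recurrent reservoir with a linear readout trained by ridge regression. The reservoir size, spectral-radius scaling, regularization constant, and the toy sine-prediction task are illustrative assumptions, not the paper's evolved configurations.

```python
import numpy as np

# Minimal echo state network sketch: only the linear readout W_out is
# trained; the input and reservoir weights stay fixed after random init.

rng = np.random.default_rng(0)
n_in, n_res = 1, 50

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy next-step prediction task: fit the readout by ridge regression.
seq = np.sin(np.linspace(0, 8 * np.pi, 200))
S = run_reservoir(seq[:-1])          # reservoir states, one row per step
y = seq[1:]                          # next-step targets
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
pred = S @ W_out
```

Because only `W_out` is learned, evolutionary search over the fixed parts (topology, scaling, reservoir size) is cheap to evaluate, which is the property the paper's neuro-evolution approach exploits.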

Christos Maramis, Manolis Falelakis, Irini Lekka, Christos Diou, Pericles A. Mitkas and Anastasios Delopoulos
"Applying semantic technologies in cervical cancer research"
Data Knowl. Eng., 86, pp. 160-178, 2013 Jan

In this paper we present a research system that follows a semantic approach to facilitate medical association studies in the area of cervical cancer. Our system, named ASSIST and developed as an EU research project, assists in cervical cancer research by unifying multiple patient record repositories, physically located in different medical centers or hospitals. Semantic modeling of medical data and rules for inferring domain-specific information allow the system to (i) homogenize the information contained in the isolated repositories by translating it into the terms of a unified semantic representation, (ii) extract diagnostic information not explicitly stored in the individual repositories, and (iii) automate the process of evaluating medical hypotheses by performing case control association studies, which is the ultimate goal of the system.

@article{2013MaramisDKE,
author={Christos Maramis and Manolis Falelakis and Irini Lekka and Christos Diou and Pericles A. Mitkas and Anastasios Delopoulos},
title={Applying semantic technologies in cervical cancer research},
journal={Data Knowl. Eng.},
volume={86},
pages={160-178},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Applying-semantic-technologies-in-cervical-cancer-research.pdf},
keywords={event identification;social media analysis;topic maps;peak detectiontopic clustering},
abstract={In this paper we present a research system that follows a semantic approach to facilitate medical association studies in the area of cervical cancer. Our system, named ASSIST and developed as an EU research project, assists in cervical cancer research by unifying multiple patient record repositories, physically located in different medical centers or hospitals. Semantic modeling of medical data and rules for inferring domain-specific information allow the system to (i) homogenize the information contained in the isolated repositories by translating it into the terms of a unified semantic representation, (ii) extract diagnostic information not explicitly stored in the individual repositories, and (iii) automate the process of evaluating medical hypotheses by performing case control association studies, which is the ultimate goal of the system.}
}

Fotis E. Psomopoulos, Pericles A. Mitkas and Christos A. Ouzounis
"Detection of Genomic Idiosyncrasies Using Fuzzy Phylogenetic Profile"
Plos ONE, 2013 Jan

Phylogenetic profiles express the presence or absence of genes and their homologs across a number of reference genomes. They have emerged as an elegant representation framework for comparative genomics and have been used for the genome-wide inference and discovery of functionally linked genes or metabolic pathways. As the number of reference genomes grows, there is an acute need for faster and more accurate methods for phylogenetic profile analysis with increased performance in speed and quality. We propose a novel, efficient method for the detection of genomic idiosyncrasies, i.e. sets of genes found in a specific genome with peculiar phylogenetic properties, such as intra-genome correlations or inter-genome relationships. Our algorithm is a four-step process where genome profiles are first defined as fuzzy vectors, then discretized to binary vectors, followed by a de-noising step, and finally a comparison step to generate intra- and inter-genome distances for each gene profile. The method is validated with a carefully selected benchmark set of five reference genomes, using a range of approaches regarding similarity metrics and pre-processing stages for noise reduction. We demonstrate that the fuzzy profile method consistently identifies the actual phylogenetic relationship and origin of the genes under consideration for the majority of the cases, while the detected outliers are found to be particular genes with peculiar phylogenetic patterns. The proposed method provides a time-efficient and highly scalable approach for phylogenetic stratification, with the detected groups of genes being either similar to their own genome profile or different from it, thus revealing atypical evolutionary histories.

@article{2013PsomopoulosPlosOne,
author={Fotis E. Psomopoulos and Pericles A. Mitkas and Christos A. Ouzounis},
title={Detection of Genomic Idiosyncrasies Using Fuzzy Phylogenetic Profiles},
journal={PLoS ONE},
year={2013},
month={01},
date={2013-01-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/journal.pone_.0052854.pdf},
abstract={Phylogenetic profiles express the presence or absence of genes and their homologs across a number of reference genomes. They have emerged as an elegant representation framework for comparative genomics and have been used for the genome-wide inference and discovery of functionally linked genes or metabolic pathways. As the number of reference genomes grows, there is an acute need for faster and more accurate methods for phylogenetic profile analysis with increased performance in speed and quality. We propose a novel, efficient method for the detection of genomic idiosyncrasies, i.e. sets of genes found in a specific genome with peculiar phylogenetic properties, such as intra-genome correlations or inter-genome relationships. Our algorithm is a four-step process where genome profiles are first defined as fuzzy vectors, then discretized to binary vectors, followed by a de-noising step, and finally a comparison step to generate intra- and inter-genome distances for each gene profile. The method is validated with a carefully selected benchmark set of five reference genomes, using a range of approaches regarding similarity metrics and pre-processing stages for noise reduction. We demonstrate that the fuzzy profile method consistently identifies the actual phylogenetic relationship and origin of the genes under consideration for the majority of the cases, while the detected outliers are found to be particular genes with peculiar phylogenetic patterns. The proposed method provides a time-efficient and highly scalable approach for phylogenetic stratification, with the detected groups of genes being either similar to their own genome profile or different from it, thus revealing atypical evolutionary histories.}
}
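The four-step pipeline outlined in the abstract (fuzzy vectors, binary discretization, de-noising, distance comparison) can be sketched on toy data. The threshold, de-noising margin, Hamming metric, and the three invented gene profiles are illustrative assumptions rather than the paper's actual parameters.

```python
# Toy fuzzy phylogenetic profiles: gene -> presence scores across 5 genomes.
profiles = {
    "geneA": [0.9, 0.8, 0.1, 0.0, 0.7],
    "geneB": [0.8, 0.9, 0.2, 0.1, 0.6],
    "geneC": [0.1, 0.0, 0.9, 0.8, 0.1],
}

def discretize(profile, threshold=0.5):
    """Step 2: fuzzy vector -> binary presence/absence vector."""
    return [1 if v >= threshold else 0 for v in profile]

def denoise(binary, fuzzy, margin=0.15):
    """Step 3: drop presence calls whose fuzzy score sits near the threshold."""
    return [b if abs(f - 0.5) > margin else 0 for b, f in zip(binary, fuzzy)]

def hamming(p, q):
    """Step 4: compare cleaned profiles with a simple Hamming distance."""
    return sum(a != b for a, b in zip(p, q))

binary = {g: denoise(discretize(p), p) for g, p in profiles.items()}
d_ab = hamming(binary["geneA"], binary["geneB"])  # similar histories
d_ac = hamming(binary["geneA"], binary["geneC"])  # divergent histories
```

Genes whose cleaned profile sits far from every other profile of their own genome would then surface as the idiosyncrasies the method looks for.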

Konstantinos N. Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Event identification in web social media through named entity recognition and topic modeling"
Data & Knowledge Engineering, 88, pp. 1-24, 2013 Jan

@article{2013VavliakisDKE,
author={Konstantinos N. Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Event identification in web social media through named entity recognition and topic modeling},
journal={Data & Knowledge Engineering},
volume={88},
pages={1-24},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Event-identification-in-web-social-media-through-named-entity-recognition-and-topic-modeling.pdf},
keywords={event identification;social media analysis;topic maps;peak detectiontopic clustering}
}

2013

Inproceedings Papers

Kyriakos C. Chatzidimitriou, Konstantinos N. Vavliakis, Andreas Symeonidis and Pericles Mitkas
"Redefining the market power of small-scale electricity consumers through consumer social networks"
10th IEEE International Conference on e-Business Engineering (ICEBE 2013), pp. 30-44, Springer Berlin Heidelberg, 2013 Jan


@inproceedings{2013ChatzidimitriouICEBE,
author={Kyriakos C. Chatzidimitriou and Konstantinos N. Vavliakis and Andreas Symeonidis and Pericles Mitkas},
title={Redefining the market power of small-scale electricity consumers through consumer social networks},
booktitle={10th IEEE International Conference on e-Business Engineering (ICEBE 2013)},
pages={30-44},
publisher={Springer Berlin Heidelberg},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Redefining-the-market-power-of-small-scale-electricity-consumers-through-Consumer-Social-Networks.pdf},
doi={10.1007/978-3-642-40864-9_3},
keywords={Load Forecasting},
}

Antonios Chrysopoulos, Christos Diou, Andreas L. Symeonidis and Pericles Mitkas
"Agent-based small-scale energy consumer models for energy portfolio management"
Proceedings of the 2013 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2013), pp. 45-50, Atlanta, GA, USA, 2013 Jan

Locating software bugs is a difficult task, especially if they do not lead to crashes. Current research on automating non-crashing bug detection dictates collecting function call traces and representing them as graphs, and reducing the graphs before applying a subgraph mining algorithm. A ranking of potentially buggy functions is derived using frequency statistics for each node (function) in the correct and incorrect set of traces. Although most existing techniques are effective, they do not achieve scalability. To address this issue, this paper suggests reducing the graph dataset in order to isolate the graphs that are significant in localizing bugs. To this end, we propose the use of tree edit distance algorithms to identify the traces that are closer to each other, while belonging to different sets. The scalability of two proposed algorithms, an exact and a faster approximate one, is evaluated using a dataset derived from a real-world application. Finally, although the main scope of this work lies in scalability, the results indicate that there is no compromise in effectiveness.

@inproceedings{2013ChrysopoulosIAT,
author={Antonios Chrysopoulos and Christos Diou and Andreas L. Symeonidis and Pericles Mitkas},
title={Agent-based small-scale energy consumer models for energy portfolio management},
booktitle={Proceedings of the 2013 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2013)},
pages={45-50},
address={Atlanta, GA, USA},
year={2013},
month={01},
date={2013-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Agent-based-small-scale-energy-consumer-models-for-energy-portfolio-management.pdf},
keywords={Load Forecasting},
abstract={Locating software bugs is a difficult task, especially if they do not lead to crashes. Current research on automating non-crashing bug detection dictates collecting function call traces and representing them as graphs, and reducing the graphs before applying a subgraph mining algorithm. A ranking of potentially buggy functions is derived using frequency statistics for each node (function) in the correct and incorrect set of traces. Although most existing techniques are effective, they do not achieve scalability. To address this issue, this paper suggests reducing the graph dataset in order to isolate the graphs that are significant in localizing bugs. To this end, we propose the use of tree edit distance algorithms to identify the traces that are closer to each other, while belonging to different sets. The scalability of two proposed algorithms, an exact and a faster approximate one, is evaluated using a dataset derived from a real-world application. Finally, although the main scope of this work lies in scalability, the results indicate that there is no compromise in effectiveness.}
}
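The dataset-reduction idea in the (mismatched) abstract above, finding the call traces closest to each other across the passing and failing sets, can be sketched by comparing trace trees. For brevity this sketch approximates tree edit distance by the edit distance of preorder node sequences, a deliberate simplification of the exact algorithm; the trace trees and function names are hypothetical.

```python
def preorder(tree):
    """Flatten a (name, children) call tree into its preorder node sequence."""
    name, children = tree
    seq = [name]
    for c in children:
        seq.extend(preorder(c))
    return seq

def edit_distance(a, b):
    """Single-row Levenshtein distance between two sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[len(b)]

# Hypothetical passing and failing call traces as (function, children) trees.
passing = ("main", [("parse", []), ("run", [("save", [])])])
failing = ("main", [("parse", []), ("run", [("crash", [])])])

d = edit_distance(preorder(passing), preorder(failing))
```

Pairs with small cross-set distance would be kept for subgraph mining, since the small difference between a passing and a failing trace pinpoints the suspicious functions.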

2012

Journal Articles

Fani A. Tzima, John B. Theocharis and Pericles A. Mitkas
"Clustering-based initialization of Learning Classifier Systems. Effects on model performance, readability and induction time."
Soft Computing, 16, 2012 Jul

The present paper investigates whether an “informed” initialization process can help supervised LCS algorithms evolve rulesets with better characteristics, including greater predictive accuracy, shorter training times, and/or more compact knowledge representations. Inspired by previous research suggesting that the initialization phase of evolutionary algorithms may have a considerable impact on their convergence speed and the quality of the achieved solutions, we present an initialization method for the class of supervised Learning Classifier Systems (LCS) that extracts information about the structure of studied problems through a pre-training clustering phase and exploits this information by transforming it into rules suitable for the initialization of the learning process. The effectiveness of our approach is evaluated through an extensive experimental phase, involving a variety of real-world classification tasks. Obtained results suggest that clustering-based initialization can indeed improve the predictive accuracy, as well as the interpretability of the induced knowledge representations, and pave the way for further investigations of the potential of better-than-random initialization methods for LCS algorithms.

@article{2012TzimaTASC,
author={Fani A. Tzima and John B. Theocharis and Pericles A. Mitkas},
title={Clustering-based initialization of Learning Classifier Systems: effects on model performance, readability and induction time},
journal={Soft Computing},
volume={16},
year={2012},
month={07},
date={2012-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Clustering-based-initialization-of-Learning-Classifier-Systems.pdf},
keywords={Classification;Initialization;Learning Classifier Systems (LCS);Supervised Learning},
abstract={The present paper investigates whether an “informed” initialization process can help supervised LCS algorithms evolve rulesets with better characteristics, including greater predictive accuracy, shorter training times, and/or more compact knowledge representations. Inspired by previous research suggesting that the initialization phase of evolutionary algorithms may have a considerable impact on their convergence speed and the quality of the achieved solutions, we present an initialization method for the class of supervised Learning Classifier Systems (LCS) that extracts information about the structure of studied problems through a pre-training clustering phase and exploits this information by transforming it into rules suitable for the initialization of the learning process. The effectiveness of our approach is evaluated through an extensive experimental phase, involving a variety of real-world classification tasks. Obtained results suggest that clustering-based initialization can indeed improve the predictive accuracy, as well as the interpretability of the induced knowledge representations, and pave the way for further investigations of the potential of better-than-random initialization methods for LCS algorithms.}
}
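The core idea of the abstract, turning a pre-training clustering phase into initial rules, can be illustrated in miniature: take the per-class centroids of a tiny 1-D data set and encode each as an interval rule. The data, the fixed interval spread, and the rule encoding are invented for illustration; real LCS initialization works on multi-dimensional clusters and evolves the rules afterwards.

```python
# Toy labelled 1-D data standing in for the clustering phase's input.
data = [(0.10, "low"), (0.20, "low"), (0.15, "low"),
        (0.80, "high"), (0.90, "high"), (0.85, "high")]

def centroid_rules(points, spread=0.2):
    """One interval rule per class: [centroid - spread, centroid + spread] -> class."""
    rules = []
    for label in sorted({lbl for _, lbl in points}):
        xs = [x for x, lbl in points if lbl == label]
        c = sum(xs) / len(xs)  # class centroid acts as the cluster center
        rules.append(((c - spread, c + spread), label))
    return rules

def classify(rules, x):
    """Return the class of the first interval rule that covers x, if any."""
    for (lo, hi), label in rules:
        if lo <= x <= hi:
            return label
    return None

rules = centroid_rules(data)
```

Seeding the LCS population with such rules instead of random ones gives the evolutionary search a structured starting point, which is the effect the paper measures.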

2012

Inproceedings Papers

Georgios T. Andreou, Andreas L. Symeonidis, Christos Diou, Pericles A. Mitkas and Dimitrios P. Labridis
"A framework for the implementation of large scale Demand Response"
Smart Grid Technology, Economics and Policies (SG-TEP), 2012 International Conference on, Nuremberg, Germany, 2012 Jan

@inproceedings{2012andreouSGTEP2012,
author={Georgios T. Andreou and Andreas L. Symeonidis and Christos Diou and Pericles A. Mitkas and Dimitrios P. Labridis},
title={A framework for the implementation of large scale Demand Response},
booktitle={Smart Grid Technology, Economics and Policies (SG-TEP), 2012 International Conference on},
address={Nuremberg, Germany},
year={2012},
month={01},
date={2012-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/publications/tada2012.pdf}
}

Kyriakos C. Chatzidimitriou, Konstantinos Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Policy Search through Adaptive Function Approximation for Bidding in TAC SCM"
Joint Workshop on Trading Agents Design and Analysis and Agent Mediated Electronic Commerce, 2012 May

Agent autonomy is strongly related to learning and adaptation. Machine learning models generated, either by off-line or on-line adaptation, through the use of historical data or current environmental signals, provide agents with the necessary decision-making and generalization capabilities in competitive, dynamic, partially observable and stochastic environments. In this work, we discuss learning and adaptation in the context of the TAC SCM game. We apply a variety of machine learning and computational intelligence methods for generating the most efficient sales component of the agent, dealing with customer orders and production throughput. Along with utility maximization and bid acceptance probability estimation methods, we evaluate regression trees, particle swarm optimization, heuristic control and policy search via adaptive function approximation in order to build an efficient, near-real time, bidding mechanism. Results indicate that a suitable reinforcement learning setup coupled with the power of adaptive function approximation techniques adjusted to the problem at hand, is a good candidate for enabling high performance strategies.

@inproceedings{2012ChatzidimitriouAMEC,
author={Kyriakos C. Chatzidimitriou and Konstantinos Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Policy Search through Adaptive Function Approximation for Bidding in TAC SCM},
booktitle={Joint Workshop on Trading Agents Design and Analysis and Agent Mediated Electronic Commerce},
year={2012},
month={05},
date={2012-05-05},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Policy-Search-through-Adaptive-Function-Approximation-for-Bidding-in-TAC-SCM.pdf},
abstract={Agent autonomy is strongly related to learning and adaptation. Machine learning models generated, either by off-line or on-line adaptation, through the use of historical data or current environmental signals, provide agents with the necessary decision-making and generalization capabilities in competitive, dynamic, partially observable and stochastic environments. In this work, we discuss learning and adaptation in the context of the TAC SCM game. We apply a variety of machine learning and computational intelligence methods for generating the most efficient sales component of the agent, dealing with customer orders and production throughput. Along with utility maximization and bid acceptance probability estimation methods, we evaluate regression trees, particle swarm optimization, heuristic control and policy search via adaptive function approximation in order to build an efficient, near-real time, bidding mechanism. Results indicate that a suitable reinforcement learning setup coupled with the power of adaptive function approximation techniques adjusted to the problem at hand, is a good candidate for enabling high performance strategies.}
}

Athanasios Papadopoulos, Konstantinos Toumpas, Antonios Chrysopoulos and Pericles A. Mitkas
"Exploring Optimization Strategies in Board Game Abalone for Alpha-Beta Search"
IEEE Conference on Computational Intelligence and Games (CIG), pp. 63-70, Granada, Spain, 2012 Sep

This paper discusses the design and implementation of a highly efficient MiniMax algorithm for the game Abalone. For perfect information games with relatively low branching factor for their decision tree (such as Chess, Checkers etc.) and a highly accurate evaluation function, Alpha-Beta search proved to be far more efficient than Monte Carlo Tree Search. In recent years many new techniques have been developed to improve the efficiency of the Alpha-Beta tree, applied to a variety of scientific fields. This paper explores several techniques for increasing the efficiency of Alpha-Beta Search on the board game of Abalone while introducing some new innovative techniques that proved to be very effective. The main idea behind them is the incorporation of probabilistic features to the otherwise deterministic Alpha-Beta search.

@inproceedings{2012PapadopoulosCIG,
author={Athanasios Papadopoulos and Konstantinos Toumpas and Antonios Chrysopoulos and Pericles A. Mitkas},
title={Exploring Optimization Strategies in Board Game Abalone for Alpha-Beta Search},
booktitle={IEEE Conference on Computational Intelligence and Games (CIG)},
pages={63-70},
address={Granada, Spain},
year={2012},
month={09},
date={2012-09-11},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Exploring-Optimization-Strategies-in-Board-Game-Abalone-for-Alpha-Beta-Search.pdf},
abstract={This paper discusses the design and implementation of a highly efficient MiniMax algorithm for the game Abalone. For perfect information games with relatively low branching factor for their decision tree (such as Chess, Checkers etc.) and a highly accurate evaluation function, Alpha-Beta search proved to be far more efficient than Monte Carlo Tree Search. In recent years many new techniques have been developed to improve the efficiency of the Alpha-Beta tree, applied to a variety of scientific fields. This paper explores several techniques for increasing the efficiency of Alpha-Beta Search on the board game of Abalone while introducing some new innovative techniques that proved to be very effective. The main idea behind them is the incorporation of probabilistic features to the otherwise deterministic Alpha-Beta search.}
}

Andreas Symeonidis, Panagiotis Toulis and Pericles A. Mitkas
"Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development"
Agents and Data Mining Interaction workshop (ADMI 2012), at the 2012 Conference on Autonomous Agents and Multiagent Systems (AAMAS), Valencia, Spain, 2012 Jun

The emergence of Multi-Agent systems as a software paradigm that most suitably fits all types of problems and architectures is already experiencing significant revisions. A more consistent approach on agent programming, and the adoption of Software Engineering standards has indicated the pros and cons of Agent Technology and has limited the scope of the, once considered, programming ‘panacea’. Nowadays, the most active area of agent development is by far that of intelligent agent systems, where learning, adaptation, and knowledge extraction are at the core of the related research effort. Discussing knowledge extraction, data mining, once infamous for its application on bank processing and intelligence agencies, has become an unmatched enabling technology for intelligent systems. Naturally enough, a fruitful synergy of the aforementioned technologies has already been proposed that would combine the benefits of both worlds and would offer computer scientists with new tools in their effort to build more sophisticated software systems. Current work discusses Agent Academy, an agent toolkit that supports: a) rapid agent application development and, b) dynamic incorporation of knowledge extracted by the use of data mining techniques into agent behaviors in an as much untroubled manner as possible.

@inproceedings{2012SymeonidisADMI,
author={Andreas Symeonidis and Panagiotis Toulis and Pericles A. Mitkas},
title={Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development},
booktitle={Agents and Data Mining Interaction workshop (ADMI 2012), at the 2012 Conference on Autonomous Agents and Multiagent Systems (AAMAS)},
address={Valencia, Spain},
year={2012},
month={06},
date={2012-06-05},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Supporting-Agent-Oriented-Software-Engineering-for-Data-Mining-Enhanced-Agent-Development.pdf},
abstract={The emergence of Multi-Agent systems as a software paradigm that most suitably fits all types of problems and architectures is already experiencing significant revisions. A more consistent approach on agent programming, and the adoption of Software Engineering standards has indicated the pros and cons of Agent Technology and has limited the scope of the, once considered, programming ‘panacea’. Nowadays, the most active area of agent development is by far that of intelligent agent systems, where learning, adaptation, and knowledge extraction are at the core of the related research effort. Discussing knowledge extraction, data mining, once infamous for its application on bank processing and intelligence agencies, has become an unmatched enabling technology for intelligent systems. Naturally enough, a fruitful synergy of the aforementioned technologies has already been proposed that would combine the benefits of both worlds and would offer computer scientists with new tools in their effort to build more sophisticated software systems. Current work discusses Agent Academy, an agent toolkit that supports: a) rapid agent application development and, b) dynamic incorporation of knowledge extracted by the use of data mining techniques into agent behaviors in an as much untroubled manner as possible.}
}

Konstantinos N. Vavliakis, Georgios T. Karagiannis and Periklis A. Mitkas
"Semantic Web in Cultural Heritage After 2020"
What will the Semantic Web look like 10 Years From Now? Workshop held in conjunction with the 11th International Semantic Web Conference 2012 (ISWC 2012), Boston, USA, 2012 Nov

In this paper we present the current status of semantic data management in the cultural heritage field and we focus on the challenges imposed by the multidimensionality of the information in this domain. We identify current shortcomings, thus needs, that should be addressed in the coming years to enable the integration and exploitation of the rich information deriving from the multidisciplinary analysis of cultural heritage objects, monuments and sites. Our goal is to disseminate the needs of the cultural heritage community and drive Semantic Web research towards these directions.

@inproceedings{2012VavliakisISWC,
author={Konstantinos N. Vavliakis and Georgios T. Karagiannis and Periklis A. Mitkas},
title={Semantic Web in Cultural Heritage After 2020},
booktitle={What will the Semantic Web look like 10 Years From Now? Workshop held in conjunction with the 11th International Semantic Web Conference 2012 (ISWC 2012)},
address={Boston, USA},
year={2012},
month={11},
date={2012-11-07},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Semantic-Web-in-Cultural-Heritage-After-2020.pdf},
keywords={Cultural Heritage},
abstract={In this paper we present the current status of semantic data management in the cultural heritage field and we focus on the challenges imposed by the multidimensionality of the information in this domain. We identify current shortcomings, thus needs, that should be addressed in the coming years to enable the integration and exploitation of the rich information deriving from the multidisciplinary analysis of cultural heritage objects, monuments and sites. Our goal is to disseminate the needs of the cultural heritage community and drive Semantic Web research towards these directions.}
}

Konstantinos N. Vavliakis, Fani A. Tzima and Pericles A. Mitkas
"Event Detection via LDA for the MediaEval2012 SED Task"
Working Notes Proceedings of the MediaEval 2012, Santa Croce in Fossabanda, Pisa, Italy, 2012 Oct

In this paper we present our methodology for the Social Event Detection Task of the MediaEval 2012 Benchmarking Initiative. We adopt topic discovery using Latent Dirichlet Allocation (LDA), city classification using TF-IDF analysis, and other statistical and natural language processing methods. After describing the approach we employed, we present the corresponding results, and discuss the problems we faced, as well as the conclusions we drew.

@inproceedings{2012VavliakisLDA,
author={Konstantinos N. Vavliakis and Fani A. Tzima and Pericles A. Mitkas},
title={Event Detection via LDA for the MediaEval2012 SED Task},
booktitle={Working Notes Proceedings of the MediaEval 2012},
address={Santa Croce in Fossabanda, Pisa, Italy},
year={2012},
month={10},
date={2012-10-04},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Event-Detection-via-LDA-for-the-MediaEval2012-SED-Task.pdf},
keywords={Event Detection;Latent Dirichlet Allocation (LDA);Topic Identification;MediaEval},
abstract={In this paper we present our methodology for the Social Event Detection Task of the MediaEval 2012 Benchmarking Initiative. We adopt topic discovery using Latent Dirichlet Allocation (LDA), city classification using TF-IDF analysis, and other statistical and natural language processing methods. After describing the approach we employed, we present the corresponding results, and discuss the problems we faced, as well as the conclusions we drew.}
}

Dimitrios M. Vitsios, Fotis E. Psomopoulos, Pericles A. Mitkas and Christos A. Ouzounis
"Multi-genome Core Pathway Identification Through Gene Clustering"
1st Workshop on Algorithms for Data and Text Mining in Bioinformatics (WADTMB 2012) in conjunction with the 8th AIAI, Halkidiki, Greece, 2012 Sep

In the wake of gene-oriented data analysis in large-scale bioinformatics studies, focus in research is currently shifting towards the analysis of the functional association of genes, namely the metabolic pathways in which genes participate. The goal of this paper is to attempt to identify the core genes in a specific pathway, based on a user-defined selection of genomes. To this end, a novel methodology has been developed that uses data from the KEGG database, and through the application of the MCL clustering algorithm, identifies clusters that correspond to different “layers” of genes, either on a phylogenetic or a functional level. The algorithm’s complexity, evaluated experimentally, is presented and the results on a characteristic case study are discussed.

@inproceedings{2012VitsiosWADTMB,
author={Dimitrios M. Vitsios and Fotis E. Psomopoulos and Pericles A. Mitkas and Christos A. Ouzounis},
title={Multi-genome Core Pathway Identification Through Gene Clustering},
booktitle={1st Workshop on Algorithms for Data and Text Mining in Bioinformatics (WADTMB 2012) in conjunction with the 8th AIAI},
address={Halkidiki, Greece},
year={2012},
month={09},
date={2012-09-27},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Multi-genome-Core-Pathway-Identification-through-Gene-Clustering.pdf},
abstract={In the wake of gene-oriented data analysis in large-scale bioinformatics studies, focus in research is currently shifting towards the analysis of the functional association of genes, namely the metabolic pathways in which genes participate. The goal of this paper is to attempt to identify the core genes in a specific pathway, based on a user-defined selection of genomes. To this end, a novel methodology has been developed that uses data from the KEGG database, and through the application of the MCL clustering algorithm, identifies clusters that correspond to different “layers” of genes, either on a phylogenetic or a functional level. The algorithm’s complexity, evaluated experimentally, is presented and the results on a characteristic case study are discussed.}
}

2012

Inbooks

Kyriakos C. Chatzidimitriou, Ioannis Partalas, Pericles A. Mitkas and Ioannis Vlahavas
"Transferring Evolved Reservoir Features in Reinforcement Learning Tasks"
Chapter 1, vol. 7188, pp. 213-224, Springer Berlin Heidelberg, 2012 Jan

Lecture Notes in Artificial Intelligence (LNAI)

@inbook{2012ChatzidimitriouLNAI,
author={Kyriakos C. Chatzidimitriou and Ioannis Partalas and Pericles A. Mitkas and Ioannis Vlahavas},
title={Transferring Evolved Reservoir Features in Reinforcement Learning Tasks},
chapter={1},
volume={7188},
pages={213-224},
publisher={Springer Berlin Heidelberg},
year={2012},
month={01},
date={2012-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Transferring-Evolved-Reservoir-Features-in-Reinforcement-Learning-Tasks.pdf},
abstract={Lecture Notes in Artificial Intelligence (LNAI)}
}

Andreas L. Symeonidis, Panagiotis Toulis and Pericles A. Mitkas
"Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development"
Chapter 1, vol. 7607, pp. 7-21, Springer Berlin Heidelberg, 2012 Jun

Lecture Notes in Computer Science

@inbook{2012SymeonidisLNCS,
author={Andreas L. Symeonidis and Panagiotis Toulis and Pericles A. Mitkas},
title={Supporting Agent-Oriented Software Engineering for Data Mining Enhanced Agent Development},
chapter={1},
volume={7607},
pages={7-21},
publisher={Springer Berlin Heidelberg},
year={2012},
month={06},
date={2012-06-04},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Supporting-Agent-Oriented-Software-Engineering-for-Data-Mining-Enhanced-Agent-Development-1.pdf},
abstract={Lecture Notes in Computer Science}
}

2011

Journal Articles

Fani A. Tzima, Pericles A. Mitkas, Dimitris Voukantsis and Kostas Karatzas
"Sparse episode identification in environmental datasets: the case of air quality assessment"
Expert Systems with Applications, 38, 2011 May

Sparse episode identification in environmental datasets is not only a multi-faceted and computationally challenging problem for machine learning algorithms, but also a difficult task for human-decision makers: the strict regulatory framework, in combination with the public demand for better information services, poses the need for robust, efficient and, more importantly, understandable forecasting models. Additionally, these models need to provide decision-makers with “summarized” and valuable knowledge, that has to be subjected to a thorough evaluation procedure, easily translated to services and/or actions in actual decision making situations, and integratable with existing Environmental Management Systems (EMSs). On this basis, our current study investigates the potential of various machine learning algorithms as tools for air quality (AQ) episode forecasting and assesses them – given the corresponding domain-specific requirements – using an evaluation procedure, tailored to the task at hand. Among the algorithms employed in the experimental phase, our main focus is on ZCS-DM, an evolutionary rule-induction algorithm specifically designed to tackle this class of problems – that is classification problems with skewed class distributions, where cost-sensitive model building is required. Overall, we consider this investigation successful, in terms of its aforementioned goals and constraints: obtained experimental results reveal the potential of rule-based algorithms for urban AQ forecasting, and point towards ZCS-DM as the most suitable algorithm for the target domain, providing the best trade-off between model performance and understandability.

@article{2011TzimaESWA,
author={Fani A. Tzima and Pericles A. Mitkas and Dimitris Voukantsis and Kostas Karatzas},
title={Sparse episode identification in environmental datasets: the case of air quality assessment},
journal={Expert Systems with Applications},
volume={38},
year={2011},
month={05},
date={2011-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2015/06/1-s2.0-S095741741001105X-main.pdf},
keywords={Air quality (AQ);Domain-driven data mining;Model evaluation;Sparse episode identification},
abstract={Sparse episode identification in environmental datasets is not only a multi-faceted and computationally challenging problem for machine learning algorithms, but also a difficult task for human-decision makers: the strict regulatory framework, in combination with the public demand for better information services, poses the need for robust, efficient and, more importantly, understandable forecasting models. Additionally, these models need to provide decision-makers with “summarized” and valuable knowledge, that has to be subjected to a thorough evaluation procedure, easily translated to services and/or actions in actual decision making situations, and integratable with existing Environmental Management Systems (EMSs). On this basis, our current study investigates the potential of various machine learning algorithms as tools for air quality (AQ) episode forecasting and assesses them – given the corresponding domain-specific requirements – using an evaluation procedure, tailored to the task at hand. Among the algorithms employed in the experimental phase, our main focus is on ZCS-DM, an evolutionary rule-induction algorithm specifically designed to tackle this class of problems – that is classification problems with skewed class distributions, where cost-sensitive model building is required. Overall, we consider this investigation successful, in terms of its aforementioned goals and constraints: obtained experimental results reveal the potential of rule-based algorithms for urban AQ forecasting, and point towards ZCS-DM as the most suitable algorithm for the target domain, providing the best trade-off between model performance and understandability.}
}

Konstantinos N. Vavliakis, Andreas L. Symeonidis, Georgios T. Karagiannis and Pericles A. Mitkas
"An integrated framework for enhancing the semantic transformation, editing and querying of relational databases"
Expert Systems with Applications, 38, (4), pp. 3844-3856, 2011 Apr

The transition from the traditional to the Semantic Web has proven much more difficult than initially expected. The volume, complexity and versatility of data of various domains, the computational limitations imposed on semantic querying and inferencing have drastically reduced the thrust semantic technologies had when initially introduced. In order for the semantic web to actually

@article{2011VavliakisESWA,
author={Konstantinos N. Vavliakis and Andreas L. Symeonidis and Georgios T. Karagiannis and Pericles A. Mitkas},
title={An integrated framework for enhancing the semantic transformation, editing and querying of relational databases},
journal={Expert Systems with Applications},
volume={38},
number={4},
pages={3844-3856},
year={2011},
month={04},
date={2011-04-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-integrated-framework-for-enhancing-the-semantic-transformation-editing-and-querying-of-relational-databases.pdf},
keywords={Ontology editor;OWL-DL restriction creation;Relational database to ontology transformation;SPARQL query builder},
abstract={The transition from the traditional to the Semantic Web has proven much more difficult than initially expected. The volume, complexity and versatility of data of various domains, the computational limitations imposed on semantic querying and inferencing have drastically reduced the thrust semantic technologies had when initially introduced. In order for the semantic web to actually}
}

2011

Conference Papers

Michalis Tsapanos, Kyriakos C. Chatzidimitriou and Pericles A. Mitkas
"A Zeroth-Level Classifier System for Real Time Strategy Games"
Web Intelligence and Intelligent Agent Technology (WI-IAT), 2011 IEEE/WIC/ACM International Conference, pp. 244-247, Springer Berlin Heidelberg, Lyons, France, 2011 Aug

Real Time Strategy games (RTS) provide an interesting test bed for agents that use Reinforcement Learning (RL) algorithms. From an agent

@conference{2011TsapanosWI-IAT,
author={Michalis Tsapanos and Kyriakos C. Chatzidimitriou and Pericles A. Mitkas},
title={A Zeroth-Level Classifier System for Real Time Strategy Games},
booktitle={Web Intelligence and Intelligent Agent Technology (WI-IAT), 2011 IEEE/WIC/ACM International Conference},
pages={244-247},
publisher={Springer Berlin Heidelberg},
address={Lyons, France},
year={2011},
month={08},
date={2011-08-22},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A_Zeroth-Level_Classifier_System_for_Real_Time_Str.pdf},
keywords={Learning Classifier Systems;Real Time Strategy Games},
abstract={Real Time Strategy games (RTS) provide an interesting test bed for agents that use Reinforcement Learning (RL) algorithms. From an agent}
}

2011

Inproceedings Papers

Zinovia Alepidou, Konstantinos N. Vavliakis and Pericles A. Mitkas
"A Semantic Tag Recommendation Framework for Collaborative Tagging Systems"
Proceedings of the Third IEEE International Conference on Social Computing, pp. 633-636, Cambridge, MA, USA, 2011 Oct

In this work we focus on folksonomies. Our goal is to develop techniques that coordinate information processing, by taking advantage of user preferences, in order to automatically produce semantic tag recommendations. To this end, we propose a generalized tag recommendation framework that conveys the semantics of resources according to different user profiles. We present the integration of various models that take into account content, historic values, user preferences and tagging behavior to produce accurate personalized tag recommendations. Based on this information we build several Bayesian models, we evaluate their performance, and we discuss differences in accuracy with respect to semantic matching criteria, and other approaches.

@inproceedings{2011AlepidouSocialCom,
author={Zinovia Alepidou and Konstantinos N. Vavliakis and Pericles A. Mitkas},
title={A Semantic Tag Recommendation Framework for Collaborative Tagging Systems},
booktitle={Proceedings of the Third IEEE International Conference on Social Computing},
pages={633-636},
address={Cambridge, MA, USA},
year={2011},
month={10},
date={2011-10-09},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A_Semantic_Tag_Recommendation_Framework_for_Collab.pdf},
keywords={folksonomy;personalization;recommendation;semantic evaluation;tagging},
abstract={In this work we focus on folksonomies. Our goal is to develop techniques that coordinate information processing, by taking advantage of user preferences, in order to automatically produce semantic tag recommendations. To this end, we propose a generalized tag recommendation framework that conveys the semantics of resources according to different user profiles. We present the integration of various models that take into account content, historic values, user preferences and tagging behavior to produce accurate personalized tag recommendations. Based on this information we build several Bayesian models, we evaluate their performance, and we discuss differences in accuracy with respect to semantic matching criteria, and other approaches.}
}

Kyriakos C. Chatzidimitriou, Ioannis Partalas, Pericles A. Mitkas and Ioannis Vlahavas
"Transferring Evolved Reservoir Features in Reinforcement Learning Tasks"
European Workshop on Reinforcement Learning, pp. 213-224, Springer Berlin Heidelberg, Athens, Greece, 2011 Sep

The major goal of transfer learning is to transfer knowledge acquired on a source task in order to facilitate learning on another, different, but usually related, target task. In this paper, we are using neuroevolution to evolve echo state networks on the source task and transfer the best performing reservoirs to be used as initial population on the target task. The idea is that any non-linear, temporal features, represented by the neurons of the reservoir and evolved on the source task, along with reservoir properties, will be a good starting point for a stochastic search on the target task. In a step towards full autonomy and by taking advantage of the random and fully connected nature of echo state networks, we examine a transfer method that renders any inter-task mappings of states and actions unnecessary. We tested our approach and that of inter-task mappings in two RL testbeds: the mountain car and the server job scheduling domains. Under various setups the results we obtained in both cases are promising.

@inproceedings{2011Chatzidimitriou,
author={Kyriakos C. Chatzidimitriou and Ioannis Partalas and Pericles A. Mitkas and Ioannis Vlahavas},
title={Transferring Evolved Reservoir Features in Reinforcement Learning Tasks},
booktitle={European Workshop on Reinforcement Learning},
pages={213-224},
publisher={Springer Berlin Heidelberg},
address={Athens, Greece},
year={2011},
month={09},
date={2011-09-09},
url={http://link.springer.com/content/pdf/10.1007%2F978-3-642-29946-9_22.pdf},
keywords={Transfer knowledge},
abstract={The major goal of transfer learning is to transfer knowledge acquired on a source task in order to facilitate learning on another, different, but usually related, target task. In this paper, we are using neuroevolution to evolve echo state networks on the source task and transfer the best performing reservoirs to be used as initial population on the target task. The idea is that any non-linear, temporal features, represented by the neurons of the reservoir and evolved on the source task, along with reservoir properties, will be a good starting point for a stochastic search on the target task. In a step towards full autonomy and by taking advantage of the random and fully connected nature of echo state networks, we examine a transfer method that renders any inter-task mappings of states and actions unnecessary. We tested our approach and that of inter-task mappings in two RL testbeds: the mountain car and the server job scheduling domains. Under various setups the results we obtained in both cases are promising.}
}

Michael Tsapanos, Kyriakos C. Chatzidimitriou and Pericles A. Mitkas
"Combining Zeroth-Level Classifier System and Eligibility Traces for Real Time Strategy Games"
IEEE/WIC/ACM International Conference on Web Intelligent and Intelligent Agent Technology (WI-IAT'11), pp. 244-247, Lyons, France, 2011 Aug

This work introduces Energy City, a multi-agent framework designed and developed in order to simulate the power system and explore the potential of Consumer Social Networks (CSNs) as a means to promote demand-side response and raise social awareness towards energy consumption. The power system with all its involved actors (Consumers, Producers, Electricity Suppliers, Transmission and Distribution Operators) and their requirements are modeled. The semantic infrastructure for the formation and analysis of electricity CSNs is discussed, and the basic consumer attributes and CSN functionality are identified. Authors argue that the formation of such CSNs is expected to increase the electricity consumer market power by enabling them to act in a collective way.

@inproceedings{2011TsapanosIEEE,
author={Michael Tsapanos and Kiriakos C. Chatzidimitriou and Pericles A. Mitkas},
title={Combining Zeroth-Level Classifier System and Eligibility Traces for Real Time Strategy Games},
booktitle={IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT'11)},
pages={244-247},
address={Lyon, France},
year={2011},
month={08},
date={2011-08-22},
url={http://issel.ee.auth.gr/wp-content/uploads/4513b030.pdf},
keywords={agent communication},
abstract={This work introduces Energy City, a multi-agent framework designed and developed in order to simulate the power system and explore the potential of Consumer Social Networks (CSNs) as a means to promote demand-side response and raise social awareness towards energy consumption. The power system with all its involved actors (Consumers, Producers, Electricity Suppliers, Transmission and Distribution Operators) and their requirements are modeled. The semantic infrastructure for the formation and analysis of electricity CSNs is discussed, and the basic consumer attributes and CSN functionality are identified. Authors argue that the formation of such CSNs is expected to increase the electricity consumer market power by enabling them to act in a collective way.}
}

Kyriakos C. Chatzidimitriou, Antonios C. Chrysopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Enhancing Agent Intelligence through Evolving Reservoir Networks for Prediction in Power Stock Markets"
Agent and Data Mining Interaction 2011 Workshop held in conjunction with the conference on Autonomous Agents and Multi-Agent Systems (AAMAS) 2011, pp. 228-247, 2011 Apr

In recent years, Time Series Prediction and clustering have been employed in hyperactive and evolving environments, where temporal data play an important role, as a result of the need for reliable methods to estimate and predict the pattern or behavior of events and systems. Power Stock Markets are such highly dynamic and competitive auction environments, additionally perplexed by constrained power laws in the various stages, from production to transmission and consumption. As with all real-time auctioning environments, the limited time available for decision making provides an ideal testbed for autonomous agents to develop bidding strategies that exploit time series prediction. Within the context of this paper, we present Cassandra, a dynamic platform that fosters the development of Data-Mining enhanced Multi-agent systems. Special attention was given to the efficiency and reusability of Cassandra, which provides Plug-n-Play capabilities, so that users may adapt their solution to the problem at hand. Cassandra’s functionality is demonstrated through a pilot case, where autonomously adaptive Recurrent Neural Networks in the form of Echo State Networks are encapsulated into Cassandra agents, in order to generate power load and settlement price prediction models in typical Day-ahead Power Markets. The system has been tested in a real-world scenario, that of the Greek Energy Stock Market.

@inproceedings{2012ChatzidimitriouAAMAS,
author={Kyriakos C. Chatzidimitriou and Antonios C. Chrysopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Enhancing Agent Intelligence through Evolving Reservoir Networks for Prediction in Power Stock Markets},
booktitle={Agent and Data Mining Interaction 2011 Workshop held in conjunction with the conference on Autonomous Agents and Multi-Agent Systems (AAMAS) 2011},
pages={228-247},
year={2011},
month={04},
date={2011-04-19},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Enhancing-Agent-Intelligence-through-Evolving-Reservoir-Networks-for-Predictions-in-Power-Stock-Markets.pdf},
keywords={Neuroevolution;Power Stock Markets;Reservoir Computing},
abstract={In recent years, Time Series Prediction and clustering have been employed in hyperactive and evolving environments, where temporal data play an important role, as a result of the need for reliable methods to estimate and predict the pattern or behavior of events and systems. Power Stock Markets are such highly dynamic and competitive auction environments, additionally perplexed by constrained power laws in the various stages, from production to transmission and consumption. As with all real-time auctioning environments, the limited time available for decision making provides an ideal testbed for autonomous agents to develop bidding strategies that exploit time series prediction. Within the context of this paper, we present Cassandra, a dynamic platform that fosters the development of Data-Mining enhanced Multi-agent systems. Special attention was given to the efficiency and reusability of Cassandra, which provides Plug-n-Play capabilities, so that users may adapt their solution to the problem at hand. Cassandra’s functionality is demonstrated through a pilot case, where autonomously adaptive Recurrent Neural Networks in the form of Echo State Networks are encapsulated into Cassandra agents, in order to generate power load and settlement price prediction models in typical Day-ahead Power Markets. The system has been tested in a real-world scenario, that of the Greek Energy Stock Market.}
}

Kyriakos C. Chatzidimitriou, Lampros C. Stavrogiannis, Andreas Symeonidis and Pericles A. Mitkas
"An Adaptive Proportional Value-per-Click Agent for Bidding in Ad Auctions"
Trading Agent Design and Analysis (TADA) 2011 Workshop held in conjunction with the International Joint Conference on Artificial Intelligence (IJCAI) 2011, pp. 21-28, Barcelona, Spain, 2011 Jul

Sponsored search auctions constitute the most important source of revenue for search engine companies, offering new opportunities for advertisers. The Trading Agent Competition (TAC) Ad Auctions tournament is one of the first attempts to study the competition among advertisers for their placement in sponsored positions along with organic search engine results. In this paper, we describe agent Mertacor, a simulation-based game theoretic agent coupled with on-line learning techniques to optimize its behavior that successfully competed in the 2010 tournament. In addition, we evaluate different facets of our agent to draw conclusions about certain aspects of its strategy.

@inproceedings{Chatzidimitriou2011,
author={Kyriakos C. Chatzidimitriou and Lampros C. Stavrogiannis and Andreas Symeonidis and Pericles A. Mitkas},
title={An Adaptive Proportional Value-per-Click Agent for Bidding in Ad Auctions},
booktitle={Trading Agent Design and Analysis (TADA) 2011 Workshop held in conjunction with the International Joint Conference on Artificial Intelligence (IJCAI) 2011},
pages={21-28},
address={Barcelona, Spain},
year={2011},
month={07},
date={2011-07-17},
url={http://link.springer.com/content/pdf/10.1007%2F978-3-642-34889-1_2.pdf},
keywords={advertisement auction;game theory;sponsored search;trading agent},
abstract={Sponsored search auctions constitute the most important source of revenue for search engine companies, offering new opportunities for advertisers. The Trading Agent Competition (TAC) Ad Auctions tournament is one of the first attempts to study the competition among advertisers for their placement in sponsored positions along with organic search engine results. In this paper, we describe agent Mertacor, a simulation-based game theoretic agent coupled with on-line learning techniques to optimize its behavior that successfully competed in the 2010 tournament. In addition, we evaluate different facets of our agent to draw conclusions about certain aspects of its strategy.}
}

Dimitrios Vitsios, Fotis E. Psomopoulos, Pericles A. Mitkas and Christos A. Ouzounis
"Detecting Species Evolution Through Metabolic Pathways"
6th Conference of the Hellenic Society for Computational Biology & Bioinformatics (HSCBB11), pp. 16, Patras, Greece, 2011 Oct

The emergence and evolution of metabolic pathways represented a crucial step in molecular and cellular evolution. With the current advances in genomics and proteomics, it has become imperative to explore the impact of gene evolution as reflected in the metabolic signature of each genome (Zhang et al. (2006)). To this end, a methodology is presented, which applies a clustering algorithm to genes from different species participating in the same pathway.

@inproceedings{PsomopoulosHSCBB11,
author={Dimitrios Vitsios and Fotis E. Psomopoulos and Pericles A. Mitkas and Christos A. Ouzounis},
title={Detecting Species Evolution Through Metabolic Pathways},
booktitle={6th Conference of the Hellenic Society for Computational Biology \& Bioinformatics (HSCBB11)},
pages={16},
address={Patras, Greece},
year={2011},
month={10},
date={2011-10-07},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Detecting-species-evolution-through-metabolic-pathways..pdf},
keywords={folksonomy;personalization;recommendation;semantic evaluation;tagging},
abstract={The emergence and evolution of metabolic pathways represented a crucial step in molecular and cellular evolution. With the current advances in genomics and proteomics, it has become imperative to explore the impact of gene evolution as reflected in the metabolic signature of each genome (Zhang et al. (2006)). To this end, a methodology is presented, which applies a clustering algorithm to genes from different species participating in the same pathway.}
}

Konstantinos N. Vavliakis, Konstantina Gemenetzi and Pericles A. Mitkas
"A correlation analysis of web social media"
Proceedings of the International Conference on Web Intelligence, Mining and Semantics, pp. 54:1--54:5, ACM, Sogndal, Norway, 2011 Jan

In this paper we analyze and compare three popular content creation and sharing websites, namely Panoramio, YouTube and Epinions. This analysis aims at advancing our understanding of Web Social Media and their impact, and may be useful in creating feedback mechanisms for increasing user participation and sharing. For each of the three websites, we select five fundamental factors appearing in all content-centered Web Social Media and we use regression analysis to calculate their correlation. We present findings of statistically important correlations among these key factors and we rank the discovered correlations according to the degree of their influence. Furthermore, we perform analysis of variance in distinct subgroups of the collected data and we discuss differences found in the characteristics of these subgroups and how these differences may affect correlation results. Although we acknowledge that correlation does not imply causality, the discovered correlations may be a first step towards discovering causality laws behind content contribution, commenting and the formulation of friendship relations. These causality laws are useful for boosting user participation in social media.

@inproceedings{Vavliakis:2011:CAW:1988688.1988752,
author={Konstantinos N. Vavliakis and Konstantina Gemenetzi and Pericles A. Mitkas},
title={A correlation analysis of web social media},
booktitle={Proceedings of the International Conference on Web Intelligence, Mining and Semantics},
pages={54:1--54:5},
publisher={ACM},
address={Sogndal, Norway},
year={2011},
month={01},
date={2011-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-Correlation-Analysis-of-Web-Social-Media.pdf},
keywords={ANOVA;correlation;regression analysis;social media},
abstract={In this paper we analyze and compare three popular content creation and sharing websites, namely Panoramio, YouTube and Epinions. This analysis aims at advancing our understanding of Web Social Media and their impact, and may be useful in creating feedback mechanisms for increasing user participation and sharing. For each of the three websites, we select five fundamental factors appearing in all content-centered Web Social Media and we use regression analysis to calculate their correlation. We present findings of statistically important correlations among these key factors and we rank the discovered correlations according to the degree of their influence. Furthermore, we perform analysis of variance in distinct subgroups of the collected data and we discuss differences found in the characteristics of these subgroups and how these differences may affect correlation results. Although we acknowledge that correlation does not imply causality, the discovered correlations may be a first step towards discovering causality laws behind content contribution, commenting and the formulation of friendship relations. These causality laws are useful for boosting user participation in social media.}
}

2010

Journal Articles

Giorgos Papachristoudis, Sotiris Diplaris and Pericles A. Mitkas
"SoFoCles: Feature filtering for microarray classification based on Gene Ontology"
Journal of Biomedical Informatics, 43, (1), 2010 Feb

Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the "curse of dimensionality" by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.

@article{2010Papachristoudis-JBI,
author={Giorgos Papachristoudis and Sotiris Diplaris and Pericles A. Mitkas},
title={SoFoCles: Feature filtering for microarray classification based on Gene Ontology},
journal={Journal of Biomedical Informatics},
volume={43},
number={1},
year={2010},
month={02},
date={2010-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/SoFoCles-Feature-filtering-for-microarray-classification-based-on-Gene-Ontology.pdf},
keywords={Data Mining;Feature filtering;Microarray classification;Ontologies;Semantic similarity},
abstract={Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the "curse of dimensionality" by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.}
}

Fotis E. Psomopoulos and Pericles A. Mitkas
"Bioinformatics algorithm development for Grid environments"
Journal of Systems and Software, 83, (7), 2010 Jul

A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher throughput computing by taking advantage of many computers geographically dispersed and connected by a network. Bioinformatics applications stand to gain in such a distributed environment in terms of increased availability, reliability and efficiency of computational resources. There is already considerable research in progress toward applying parallel computing techniques on bioinformatics methods, such as multiple sequence alignment, gene expression analysis and phylogenetic studies. In order to cope with the dimensionality issue, most machine learning methods either focus on specific groups of proteins or reduce the size of the original data set and/or the number of attributes involved. Grid computing could potentially provide an alternative solution to this problem, by combining multiple approaches in a seamless way. In this paper we introduce a unifying methodology coupling the strengths of the Grid with the specific needs and constraints of the major bioinformatics approaches. We also present a tool that implements this process and allows researchers to assess the computational needs for a specific task and optimize the allocation of available resources for its efficient completion.

@article{2010PsomopoulosJOSAS,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Bioinformatics algorithm development for Grid environments},
journal={Journal of Systems and Software},
volume={83},
number={7},
year={2010},
month={07},
date={2010-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Bioinformatics-algorithm-development-for-Grid-environments.pdf},
keywords={Bioinformatics;Data analysis;Grid computing;Protein classification;Semi-automated tool;Workflow design},
abstract={A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher throughput computing by taking advantage of many computers geographically dispersed and connected by a network. Bioinformatics applications stand to gain in such a distributed environment in terms of increased availability, reliability and efficiency of computational resources. There is already considerable research in progress toward applying parallel computing techniques on bioinformatics methods, such as multiple sequence alignment, gene expression analysis and phylogenetic studies. In order to cope with the dimensionality issue, most machine learning methods either focus on specific groups of proteins or reduce the size of the original data set and/or the number of attributes involved. Grid computing could potentially provide an alternative solution to this problem, by combining multiple approaches in a seamless way. In this paper we introduce a unifying methodology coupling the strengths of the Grid with the specific needs and constraints of the major bioinformatics approaches. We also present a tool that implements this process and allows researchers to assess the computational needs for a specific task and optimize the allocation of available resources for its efficient completion.}
}

2010

Conference Papers

Fotis E. Psomopoulos and Pericles A. Mitkas
"Multi Level Clustering of Phylogenetic Profiles"
BioInformatics and BioEngineering (BIBE), 2010 IEEE International Conference, pp. 308-309, Freiburg, Germany, 2010 May

The prediction of gene function from genome sequences is one of the main issues in Bioinformatics. Most computational approaches are based on the similarity between sequences to infer gene function. However, the availability of several fully sequenced genomes has enabled alternative approaches, such as phylogenetic profiles. Phylogenetic profiles are vectors which indicate the presence or absence of a gene in other genomes. The main concept of phylogenetic profiles is that proteins participating in a common structural complex or metabolic pathway are likely to evolve in a correlated fashion. In this paper, a multi level clustering algorithm of phylogenetic profiles is presented, which aims to detect inter- and intra-genome gene clusters.

@conference{2010PsomopoulosBIBE,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Multi Level Clustering of Phylogenetic Profiles},
booktitle={BioInformatics and BioEngineering (BIBE), 2010 IEEE International Conference},
pages={308-309},
address={Freiburg, Germany},
year={2010},
month={05},
date={2010-05-31},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Multi-Level-Clustering-of-Phylogenetic-Profiles.pdf},
keywords={Algorithm;Clustering;Phylogenetic profiles},
abstract={The prediction of gene function from genome sequences is one of the main issues in Bioinformatics. Most computational approaches are based on the similarity between sequences to infer gene function. However, the availability of several fully sequenced genomes has enabled alternative approaches, such as phylogenetic profiles. Phylogenetic profiles are vectors which indicate the presence or absence of a gene in other genomes. The main concept of phylogenetic profiles is that proteins participating in a common structural complex or metabolic pathway are likely to evolve in a correlated fashion. In this paper, a multi level clustering algorithm of phylogenetic profiles is presented, which aims to detect inter- and intra-genome gene clusters.}
}

2010

Inproceedings Papers

Kyriakos C. Chatzidimitriou and Pericles A. Mitkas
"A NEAT Way for Evolving Echo State Networks"
European Conference on Artificial Intelligence, pp. 909-914, IOS Press, Alexandroupoli, Greece, 2010 Aug

The Reinforcement Learning (RL) paradigm is an appropriate formulation for agent, goal-directed, sequential decision making. In order though for RL methods to perform well in difficult, complex, real-world tasks, the choice and the architecture of an appropriate function approximator is of crucial importance. This work presents a method of automatically discovering such function approximators, based on a synergy of ideas and techniques that are proven to be working on their own. Using Echo State Networks (ESNs) as our function approximators of choice, we try to adapt them, by combining evolution and learning, for developing the appropriate ad-hoc architectures to solve the problem at hand. The choice of ESNs was made for their ability to handle both non-linear and non-Markovian tasks, while also being capable of learning online, through simple gradient descent temporal difference learning. For creating networks that enable efficient learning, a neuroevolution procedure was applied. Appropriate topologies and weights were acquired by applying the NeuroEvolution of Augmented Topologies (NEAT) method as a meta-search algorithm and by adapting ideas like historical markings, complexification and speciation, to the specifics of ESNs. Our methodology is tested on both supervised and reinforcement learning testbeds with promising results.

@inproceedings{2010ChatzidimitriouECAI,
author={Kyriakos C. Chatzidimitriou and Pericles A. Mitkas},
title={A NEAT Way for Evolving Echo State Networks},
booktitle={European Conference on Artificial Intelligence},
pages={909-914},
publisher={IOS Press},
address={Alexandroupoli, Greece},
year={2010},
month={08},
date={2010-08-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A_NEAT_way_for_evolving_Echo_State_Networks.pdf},
keywords={Echo State Networks;NeuroEvolution of Augmented Topologies;Reinforcement Learning},
abstract={The Reinforcement Learning (RL) paradigm is an appropriate formulation for agent, goal-directed, sequential decision making. In order though for RL methods to perform well in difficult, complex, real-world tasks, the choice and the architecture of an appropriate function approximator is of crucial importance. This work presents a method of automatically discovering such function approximators, based on a synergy of ideas and techniques that are proven to be working on their own. Using Echo State Networks (ESNs) as our function approximators of choice, we try to adapt them, by combining evolution and learning, for developing the appropriate ad-hoc architectures to solve the problem at hand. The choice of ESNs was made for their ability to handle both non-linear and non-Markovian tasks, while also being capable of learning online, through simple gradient descent temporal difference learning. For creating networks that enable efficient learning, a neuroevolution procedure was applied. Appropriate topologies and weights were acquired by applying the NeuroEvolution of Augmented Topologies (NEAT) method as a meta-search algorithm and by adapting ideas like historical markings, complexification and speciation, to the specifics of ESNs. Our methodology is tested on both supervised and reinforcement learning testbeds with promising results.}
}

Kyriakos C. Chatzidimitriou, Fotis E. Psomopoulos and Pericles A. Mitkas
"Grid-enabled parameter initialization for high performance machine learning tasks"
5th EGEE User Forum, pp. 113-114, 2010 Apr

In this work we use the NeuroEvolution of Augmented Topologies (NEAT) methodology for optimising Echo State Networks (ESNs), in order to achieve high performance in machine learning tasks. The large parameter space of NEAT, the many variations of ESNs, and the stochastic nature of evolutionary computation, which requires many evaluations for statistically valid conclusions, promote the Grid as a viable solution for robustly evaluating the alternatives and deriving significant conclusions.

@inproceedings{2010ChatzidimitriouEGEEForum,
author={Kyriakos C. Chatzidimitriou and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Grid-enabled parameter initialization for high performance machine learning tasks},
booktitle={5th EGEE User Forum},
pages={113-114},
year={2010},
month={04},
date={2010-04-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Grid-enabled-parameter-initialization-for-high-performance-machine-learning-tasks.pdf},
keywords={Neuroevolution;Parameter optimisation},
abstract={In this work we use the NeuroEvolution of Augmented Topologies (NEAT) methodology for optimising Echo State Networks (ESNs), in order to achieve high performance in machine learning tasks. The large parameter space of NEAT, the many variations of ESNs, and the stochastic nature of evolutionary computation, which requires many evaluations for statistically valid conclusions, promote the Grid as a viable solution for robustly evaluating the alternatives and deriving significant conclusions.}
}

Pericles A. Mitkas
"From Theory and the Research Lab to an Innovative Product for the Greek and the International Market: Agent Mertacor"
1st Private Equity Forum, Transforming the Crisis to Opportunities for Greece, Athens, Greece, 2010 Oct

During the last decade, there has been intense research and development in creating methodologies and tools able to map Relational Databases with the Resource Description Framework. Although some systems have gained wider acceptance in the Semantic Web community, they either require users to learn a declarative language for encoding mappings, or have limited expressivity. Thereupon we present RDOTE, a framework for easily transporting data residing in Relational Databases into the Semantic Web. RDOTE is available under GNU/GPL license and provides friendly graphical interfaces, as well as enough expressivity for creating custom RDF dumps.

@inproceedings{2010MitkasTCOG10,
author={Pericles A. Mitkas},
title={From Theory and the Research Lab to an Innovative Product for the Greek and the International Market: Agent Mertacor},
booktitle={1st Private Equity Forum, Transforming the Crisis to Opportunities for Greece},
address={Athens, Greece},
year={2010},
month={10},
date={2010-10-26},
keywords={Relational Databases to Ontology Transformation},
abstract={During the last decade, there has been intense research and development in creating methodologies and tools able to map Relational Databases with the Resource Description Framework. Although some systems have gained wider acceptance in the Semantic Web community, they either require users to learn a declarative language for encoding mappings, or have limited expressivity. Thereupon we present RDOTE, a framework for easily transporting data residing in Relational Databases into the Semantic Web. RDOTE is available under GNU/GPL license and provides friendly graphical interfaces, as well as enough expressivity for creating custom RDF dumps.}
}

Fotis E. Psomopoulos, Pericles A. Mitkas and Christos A. Ouzounis
"Clustering of discrete and fuzzy phylogenetic profiles"
5th Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB, pp. 58, Alexandroupoli, Greece, 2010 Oct

Phylogenetic profiles have long been a focus of interest in computational genomics. Encoding the subset of organisms that contain a homolog of a gene or protein, phylogenetic profiles are originally defined as binary vectors of n entries, where n corresponds to the number of target genomes. It is widely accepted that similar profiles, especially those not connected by sequence similarity, correspond to a correlated pattern of functional linkage. To this end, our study presents two methods of phylogenetic profile data analysis, aiming at detecting genes with peculiar, unique characteristics. Genes with similar phylogenetic profiles are likely to have similar structure or function, such as participating in a common structural complex or in a common pathway. Our two methods aim at detecting those outlier profiles of “interesting” genes, or groups of genes, with different characteristics from their parent genome.

@inproceedings{2010PsomopoulosHSCBB,
author={Fotis E. Psomopoulos and Pericles A. Mitkas and Christos A. Ouzounis},
title={Clustering of discrete and fuzzy phylogenetic profiles},
booktitle={5th Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB},
pages={58},
address={Alexandroupoli, Greece},
year={2010},
month={10},
date={2010-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Clustering-of-discrete-and-fuzzy-phylogenetic-profiles.pdf},
keywords={Computational genomics},
abstract={Phylogenetic profiles have long been a focus of interest in computational genomics. Encoding the subset of organisms that contain a homolog of a gene or protein, phylogenetic profiles are originally defined as binary vectors of n entries, where n corresponds to the number of target genomes. It is widely accepted that similar profiles, especially those not connected by sequence similarity, correspond to a correlated pattern of functional linkage. To this end, our study presents two methods of phylogenetic profile data analysis, aiming at detecting genes with peculiar, unique characteristics. Genes with similar phylogenetic profiles are likely to have similar structure or function, such as participating in a common structural complex or in a common pathway. Our two methods aim at detecting those outlier profiles of “interesting” genes, or groups of genes, with different characteristics from their parent genome.}
}

Andreas L. Symeonidis and Pericles A. Mitkas
"Monitoring Agent Communication in Soft Real-Time Environments"
IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, pp. 265--268, Los Alamitos, CA, USA, 2010 Jan

Real-time systems can be defined as systems operating under specific timing constraints, either hard or soft ones. In principle, agent systems are considered inappropriate for such kinds of systems, due to the asynchronous nature of their communication protocols, which directly influences their temporal behavior. Nevertheless, multi-agent systems could be successfully employed for solving problems where failure to meet a deadline does not have serious consequences, given the existence of a fail-safe system mechanism. Current work focuses on the analysis of multi-agent systems behavior under such soft real-time constraints. To this end, ERMIS has been developed: an integrated framework that provides the agent developer with the ability to benchmark his/her own architecture and identify its limitations and its optimal timing behavior, under specific hardware/software constraints. A variety of MAS configurations have been tested and indicative results are discussed within the context of this paper.

@inproceedings{2010SymeonidisWIIAT,
author={Andreas L. Symeonidis and Pericles A. Mitkas},
title={Monitoring Agent Communication in Soft Real-Time Environments},
booktitle={IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology},
pages={265--268},
address={Los Alamitos, CA, USA},
year={2010},
month={01},
date={2010-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Monitoring_Agent_Communication_in_Soft_Real-Time_E.pdf},
keywords={soft real-time systems;synchronization},
abstract={Real-time systems can be defined as systems operating under specific timing constraints, either hard or soft ones. In principle, agent systems are considered inappropriate for such kinds of systems, due to the asynchronous nature of their communication protocols, which directly influences their temporal behavior. Nevertheless, multi-agent systems could be successfully employed for solving problems where failure to meet a deadline does not have serious consequences, given the existence of a fail-safe system mechanism. Current work focuses on the analysis of multi-agent systems behavior under such soft real-time constraints. To this end, ERMIS has been developed: an integrated framework that provides the agent developer with the ability to benchmark his/her own architecture and identify its limitations and its optimal timing behavior, under specific hardware/software constraints. A variety of MAS configurations have been tested and indicative results are discussed within the context of this paper.}
}

Fani A. Tzima, Fotis E. Psomopoulos and Pericles A. Mitkas
"An investigation of the effect of clustering-based initialization on Learning Classifier Systems"
5th EGEE User Forum, pp. 111-112, 2010 Apr

Strength-based Learning Classifier Systems (LCS) are machine learning systems designed to tackle both sequential and single-step decision tasks by coupling a gradually evolving population of rules with a reinforcement component. ZCS-DM, a Zeroth-level Classifier System for Data Mining, is a novel algorithm in this field, recently shown to be very effective in several benchmark classification problems. In this paper, we evaluate the effect of clustering-based initialization on the algorithm’s performance, utilizing the EGEE infrastructure as a robust framework for an efficient parameter sweep.

@inproceedings{2010TzimaEGEEForum,
author={Fani A. Tzima and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={An investigation of the effect of clustering-based initialization on Learning Classifier Systems},
booktitle={5th EGEE User Forum},
pages={111-112},
year={2010},
month={04},
date={2010-04-01},
keywords={Algorithm Optimization;Parameter Sweep},
abstract={Strength-based Learning Classifier Systems (LCS) are machine learning systems designed to tackle both sequential and single-step decision tasks by coupling a gradually evolving population of rules with a reinforcement component. ZCS-DM, a Zeroth-level Classifier System for Data Mining, is a novel algorithm in this field, recently shown to be very effective in several benchmark classification problems. In this paper, we evaluate the effect of clustering-based initialization on the algorithm’s performance, utilizing the EGEE infrastructure as a robust framework for an efficient parameter sweep.}
}

Konstantinos N. Vavliakis, Theofanis K. Grollios and Pericles A. Mitkas
"RDOTE - Transforming Relational Databases into Semantic Web Data"
9th International Semantic Web Conference (ISWC2010), 2010 Nov

During the last decade, there has been intense research and development in creating methodologies and tools able to map Relational Databases with the Resource Description Framework. Although some systems have gained wider acceptance in the Semantic Web community, they either require users to learn a declarative language for encoding mappings, or have limited expressivity. Thereupon we present RDOTE, a framework for easily transporting data residing in Relational Databases into the Semantic Web. RDOTE is available under GNU/GPL license and provides friendly graphical interfaces, as well as enough expressivity for creating custom RDF dumps.

@inproceedings{2010Vavliakis-ISWC,
author={Konstantinos N. Vavliakis and Theofanis K. Grollios and Pericles A. Mitkas},
title={RDOTE - Transforming Relational Databases into Semantic Web Data},
booktitle={9th International Semantic Web Conference (ISWC2010)},
year={2010},
month={11},
date={2010-11-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/RDOTE-Transforming-Relational-Databases-into-Semantic-Web-Data.pdf},
keywords={Relational Databases to Ontology Transformation},
abstract={During the last decade, there has been intense research and development in creating methodologies and tools able to map Relational Databases with the Resource Description Framework. Although some systems have gained wider acceptance in the Semantic Web community, they either require users to learn a declarative language for encoding mappings, or have limited expressivity. Thereupon we present RDOTE, a framework for easily transporting data residing in Relational Databases into the Semantic Web. RDOTE is available under GNU/GPL license and provides friendly graphical interfaces, as well as enough expressivity for creating custom RDF dumps.}
}

Kyriakos C. Chatzidimitriou, Konstantinos N. Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Towards Understanding How Personality, Motivation, and Events Trigger Web User Activity"
Web Intelligence and Intelligent Agent Technology, IEEE/WIC/ACM International Conference, pp. 615-618, IEEE Computer Society, Los Alamitos, CA, USA, 2010 Jan

Web 2.0 provided internet users with a dynamic medium, where information is updated continuously and anyone can participate. Though preliminary analysis exists, there is still little understanding on what exactly stimulates users to actively participate, create and share content in online communities. In this paper we present a methodology that aspires to identify and analyze those events that trigger web user activity, content creation and sharing in Web 2.0. Our approach is based on user personality and motivation, and on the occurrence of events with a personal or global impact. The proposed methodology was applied on data collected from Flickr and analysis was performed through the use of statistics and data mining techniques.

@inproceedings{2010VavliakisWI,
author={Kyriakos C. Chatzidimitriou and Konstantinos N. Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Towards Understanding How Personality, Motivation, and Events Trigger Web User Activity},
booktitle={Web Intelligence and Intelligent Agent Technology, IEEE/WIC/ACM International Conference},
pages={615-618},
publisher={IEEE Computer Society},
address={Los Alamitos, CA, USA},
year={2010},
month={01},
date={2010-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Towards-Understanding-How-Personality-Motivation-and-Events-Trigger-Web-User-Activity.pdf},
keywords={Crowdsourcing;Flickr;Sharing},
abstract={Web 2.0 provided internet users with a dynamic medium, where information is updated continuously and anyone can participate. Though preliminary analysis exists, there is still little understanding on what exactly stimulates users to actively participate, create and share content in online communities. In this paper we present a methodology that aspires to identify and analyze those events that trigger web user activity, content creation and sharing in Web 2.0. Our approach is based on user personality and motivation, and on the occurrence of events with a personal or global impact. The proposed methodology was applied on data collected from Flickr and analysis was performed through the use of statistics and data mining techniques.}
}

2009

Journal Articles

Theodoros Agorastos, Vassilis Koutkias, Manolis Falelakis, Irini Lekka, T. Mikos, Anastasios Delopoulos, Periklis A. Mitkas, A. Tantsis, S. Weyers, P. Coorevits, A. M. Kaufmann, R. Kurzeja and Nicos Maglaveras
"Semantic Integration of Cervical Cancer Data Repositories to Facilitate Multicenter Association Studies: The ASSIST Approach"
Cancer Informatics Journal, Special Issue on Semantic Technologies, 8, (9), pp. 31-44, 2009 Feb

@article{2009AgorastosCIJSIOST,
author={Theodoros Agorastos and Vassilis Koutkias and Manolis Falelakis and Irini Lekka and T. Mikos and Anastasios Delopoulos and Periklis A. Mitkas and A. Tantsis and S. Weyers and P. Coorevits and A. M. Kaufmann and R. Kurzeja and Nicos Maglaveras},
title={Semantic Integration of Cervical Cancer Data Repositories to Facilitate Multicenter Association Studies: The ASSIST Approach},
journal={Cancer Informatics Journal, Special Issue on Semantic Technologies},
volume={8},
number={9},
pages={31-44},
year={2009},
month={02},
date={2009-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Semantic-Integration-of-Cervical-Cancer-Data-Repositories-to-Facilitate-Multicenter-Association-Studies-The-ASSIST-Approach.pdf}
}

Kyriakos C. Chatzidimitriou, Konstantinos N. Vavliakis, Andreas L. Symeonidis and Pericles A. Mitkas
"Data-Mining-Enhanced Agents in Dynamic Supply-Chain-Management Environments"
Intelligent Systems, 24, (3), pp. 54-63, 2009 Jan

Special issue on Agents and Data Mining

@article{2009ChatzidimitriouIS,
author={Kyriakos C. Chatzidimitriou and Konstantinos N. Vavliakis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Data-Mining-Enhanced Agents in Dynamic Supply-Chain-Management Environments},
journal={Intelligent Systems},
volume={24},
number={3},
pages={54-63},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Data-Mining-Enhanced_Agents_in_Dynamic_Supply-Chai.pdf},
note={Special issue on Agents and Data Mining}
}

John M. Konstantinides, Athanasios Mademlis, Petros Daras, Pericles A. Mitkas and Michael G. Strintzis
"Blind Robust 3D-Mesh Watermarking Based on Oblate Spheroidal Harmonics"
IEEE Transactions on Multimedia, 11, (1), pp. 23-38, 2009 Jan

In this paper, a novel transform-based, blind and robust 3D mesh watermarking scheme is presented. The 3D surface of the mesh is firstly divided into a number of discrete continuous regions, each of which is successively sampled and mapped onto oblate spheroids, using a novel surface parameterization scheme. The embedding is performed in the spheroidal harmonic coefficients of the spheroids, using a novel embedding scheme. Changes made to the transform domain are then reversed back to the spatial domain, thus forming the watermarked 3D mesh. The embedding scheme presented herein resembles, in principle, the ones using the multiplicative embedding rule (inherently providing high imperceptibility). The watermark detection is blind and by far more powerful than the various correlators typically incorporated by multiplicative schemes. Experimental results have shown that the proposed blind watermarking scheme is competitively robust against similarity transformations, connectivity attacks, mesh simplification and refinement, unbalanced re-sampling, smoothing and noise addition, even when juxtaposed to the informed ones.

@article{2009KonstantinidesIEEEToM,
author={John M. Konstantinides and Athanasios Mademlis and Petros Daras and Pericles A. Mitkas and Michael G. Strintzis},
title={Blind Robust 3D-Mesh Watermarking Based on Oblate Spheroidal Harmonics},
journal={IEEE Transactions on Multimedia},
volume={11},
number={1},
pages={23-38},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Blind-Robust-3D-Mesh-Watermarking-Based-onOblate-Spheroidal-Harmonics.pdf},
abstract={In this paper, a novel transform-based, blind and robust 3D mesh watermarking scheme is presented. The 3D surface of the mesh is firstly divided into a number of discrete continuous regions, each of which is successively sampled and mapped onto oblate spheroids, using a novel surface parameterization scheme. The embedding is performed in the spheroidal harmonic coefficients of the spheroids, using a novel embedding scheme. Changes made to the transform domain are then reversed back to the spatial domain, thus forming the watermarked 3D mesh. The embedding scheme presented herein resembles, in principle, the ones using the multiplicative embedding rule (inherently providing high imperceptibility). The watermark detection is blind and by far more powerful than the various correlators typically incorporated by multiplicative schemes. Experimental results have shown that the proposed blind watermarking scheme is competitively robust against similarity transformations, connectivity attacks, mesh simplification and refinement, unbalanced re-sampling, smoothing and noise addition, even when juxtaposed to the informed ones.}
}

Fotis E. Psomopoulos, Pericles A. Mitkas, Christos S. Krinas and Ioannis N. Demetropoulos
"A grid-enabled algorithm yields figure-eight molecular knot"
Molecular Simulation, 35, (9), pp. 725-736, 2009 Jun

The recently proposed general molecular knotting algorithm and its associated package, MolKnot, introduce programming into certain sections of stereochemistry. This work reports the G-MolKnot procedure that was deployed over the grid infrastructure; it applies a divide-and-conquer approach to the problem by splitting the initial search space into multiple independent processes and, combining the results at the end, yields significant improvements with regards to the overall efficiency. The algorithm successfully detected the smallest ever reported alkane configured to an open-knotted shape with four crossings.

@article{2009PsomopoulosMS,
author={Fotis E. Psomopoulos and Pericles A. Mitkas and Christos S. Krinas and Ioannis N. Demetropoulos},
title={A grid-enabled algorithm yields figure-eight molecular knot},
journal={Molecular Simulation},
volume={35},
number={9},
pages={725-736},
year={2009},
month={06},
date={2009-06-17},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-grid-enabled-algorithm-yields-Figure-Eight-molecular-knot.pdf},
keywords={data decomposition;figure-eight molecular knot;knot theory;stereochemistry},
abstract={The recently proposed general molecular knotting algorithm and its associated package, MolKnot, introduce programming into certain sections of stereochemistry. This work reports the G-MolKnot procedure that was deployed over the grid infrastructure; it applies a divide-and-conquer approach to the problem by splitting the initial search space into multiple independent processes and, combining the results at the end, yields significant improvements with regards to the overall efficiency. The algorithm successfully detected the smallest ever reported alkane configured to an open-knotted shape with four crossings.}
}

2009

Books

Fotis Psomopoulos and Pericles Mitkas
"Handbook of Research on Computational Grid Technologies for Life Sciences, Biomedicine, and Healthcare"
2, IGI Global, Catanzaro, Italy, 2009 May

@book{2009PsomopoulosHRCGTLSBH,
author={Fotis Psomopoulos and Pericles Mitkas},
title={Handbook of Research on Computational Grid Technologies for Life Sciences, Biomedicine, and Healthcare},
volume={2},
publisher={IGI Global},
address={Catanzaro, Italy},
year={2009},
month={05},
date={2009-05-00}
}

2009

Incollection

Fotis E. Psomopoulos and Pericles A. Mitkas
"Data Mining in Proteomics using Grid Computing"
Handbook of Research on Computational Grid Technologies for Life Sciences, Biomedicine and Healthcare, pp. 245-267, IGI Global, UK, 2009 May

The scope of this chapter is the presentation of Data Mining techniques for knowledge extraction in proteomics, taking into account both the particular features of most proteomics issues (such as data retrieval and system complexity), and the opportunities and constraints found in a Grid environment. The chapter discusses the way new and potentially useful knowledge can be extracted from proteomics data, utilizing Grid resources in a transparent way. Protein classification is introduced as a current research issue in proteomics, which also demonstrates most of the domain-specific traits. An overview of common and custom-made Data Mining algorithms is provided, with emphasis on the specific needs of protein classification problems. A unified methodology is presented for complex Data Mining processes on the Grid, highlighting the different application types and the benefits and drawbacks in each case. Finally, the methodology is validated through real-world case studies, deployed over the EGEE grid environment.

@incollection{2009PsomopoulosHRCGT,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Data Mining in Proteomics using Grid Computing},
booktitle={Handbook of Research on Computational Grid Technologies for Life Sciences, Biomedicine and Healthcare},
pages={245-267},
publisher={IGI Global},
address={UK},
year={2009},
month={05},
date={2009-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Data-Mining-in-Proteomics-Using-Grid-Computing.pdf},
keywords={Data Mining techniques;knowledge extraction in proteomics},
abstract={The scope of this chapter is the presentation of Data Mining techniques for knowledge extraction in proteomics, taking into account both the particular features of most proteomics issues (such as data retrieval and system complexity), and the opportunities and constraints found in a Grid environment. The chapter discusses the way new and potentially useful knowledge can be extracted from proteomics data, utilizing Grid resources in a transparent way. Protein classification is introduced as a current research issue in proteomics, which also demonstrates most of the domain-specific traits. An overview of common and custom-made Data Mining algorithms is provided, with emphasis on the specific needs of protein classification problems. A unified methodology is presented for complex Data Mining processes on the Grid, highlighting the different application types and the benefits and drawbacks in each case. Finally, the methodology is validated through real-world case studies, deployed over the EGEE grid environment.}
}

Fotis E. Psomopoulos and Pericles A. Mitkas
"BADGE: Bioinformatics Algorithm Development for Grid Environments"
13th Panhellenic Conference on Informatics, pp. 93-107, Corfu, Greece, 2009 Sep

A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher throughput computing by taking advantage of many computers geographically dispersed and connected by a network. Bioinformatics applications stand to gain in such a distributed environment in terms of availability, reliability and efficiency of computational resources. There is already considerable research in progress toward applying parallel computing techniques on bioinformatics methods, such as multiple sequence alignment, gene expression analysis and phylogenetic studies. In order to cope with the dimensionality issue, most machine learning methods focus on specific groups of proteins or reduce either the size of the original data set or the number of attributes involved. Grid computing could potentially provide an alternative solution to this problem, by combining multiple approaches in a seamless way. In this paper we introduce a unifying methodology coupling the strengths of the Grid with the specific needs and constraints of the major bioinformatics approaches. We also present a tool that implements this process and allows researchers to assess the computational needs for a specific task and optimize the allocation of available resources for its efficient completion.

@inproceedings{2009PsomopoulosPCI,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={BADGE: Bioinformatics Algorithm Development for Grid Environments},
booktitle={13th Panhellenic Conference on Informatics},
pages={93-107},
address={Corfu, Greece},
year={2009},
month={09},
date={2009-09-01},
url={http://issel.ee.auth.gr/wp-content/uploads/fpsompci20091.pdf},
abstract={A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher throughput computing by taking advantage of many computers geographically dispersed and connected by a network. Bioinformatics applications stand to gain in such a distributed environment in terms of availability, reliability and efficiency of computational resources. There is already considerable research in progress toward applying parallel computing techniques on bioinformatics methods, such as multiple sequence alignment, gene expression analysis and phylogenetic studies. In order to cope with the dimensionality issue, most machine learning methods focus on specific groups of proteins or reduce either the size of the original data set or the number of attributes involved. Grid computing could potentially provide an alternative solution to this problem, by combining multiple approaches in a seamless way. In this paper we introduce a unifying methodology coupling the strengths of the Grid with the specific needs and constraints of the major bioinformatics approaches. We also present a tool that implements this process and allows researchers to assess the computational needs for a specific task and optimize the allocation of available resources for its efficient completion.}
}

2009

Inproceedings Papers

Antonios C. Chrysopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Improving agent bidding in Power Stock Markets through a data mining enhanced agent platform"
Agents and Data Mining Interaction workshop AAMAS 2009, pp. 111-125, Springer-Verlag, Budapest, Hungary, 2009 May

Like in any other auctioning environment, entities participating in Power Stock Markets have to compete against each other in order to maximize their own revenue. Towards the satisfaction of their goal, these entities (agents - human or software ones) may adopt different types of strategies - from naive to extremely complex ones - in order to identify the most profitable goods compilation, the appropriate price to buy or sell etc., always under time pressure and auction environment constraints. Decisions become even more difficult to make in case one takes into account the vast volumes of historical data available: goods' prices, market fluctuations, bidding habits and buying opportunities. Within the context of this paper we present Cassandra, a multi-agent platform that exploits data mining in order to extract efficient models for predicting Power Settlement prices and Power Load values in typical Day-ahead Power markets. The functionality of Cassandra is discussed, with focus on the bidding mechanism of Cassandra's agents and the way data mining analysis is performed in order to generate the optimal forecasting models. Cassandra has been tested in a real-world scenario, with data derived from the Greek Energy Stock market.

@inproceedings{2009ChrysopoulosADMI,
author={Antonios C. Chrysopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Improving agent bidding in Power Stock Markets through a data mining enhanced agent platform},
booktitle={Agents and Data Mining Interaction workshop AAMAS 2009},
pages={111-125},
publisher={Springer-Verlag},
address={Budapest, Hungary},
year={2009},
month={05},
date={2009-05-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/04/Improving-agent-bidding-in-Power-Stock-Markets-through-a-data-mining-enhanced-agent-platform.pdf},
keywords={exploit data mining;multi-agent platform;predict Power Load;predict Power Settlement},
abstract={Like in any other auctioning environment, entities participating in Power Stock Markets have to compete against each other in order to maximize their own revenue. Towards the satisfaction of their goal, these entities (agents - human or software ones) may adopt different types of strategies - from naive to extremely complex ones - in order to identify the most profitable goods compilation, the appropriate price to buy or sell etc., always under time pressure and auction environment constraints. Decisions become even more difficult to make in case one takes into account the vast volumes of historical data available: goods' prices, market fluctuations, bidding habits and buying opportunities. Within the context of this paper we present Cassandra, a multi-agent platform that exploits data mining in order to extract efficient models for predicting Power Settlement prices and Power Load values in typical Day-ahead Power markets. The functionality of Cassandra is discussed, with focus on the bidding mechanism of Cassandra's agents and the way data mining analysis is performed in order to generate the optimal forecasting models. Cassandra has been tested in a real-world scenario, with data derived from the Greek Energy Stock market.}
}

Antonios C. Chrysopoulos, Andreas L. Symeonidis and Pericles A. Mitkas
"Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain"
Third Electrical and Computer Engineering Department Student Conference, pp. 245-267, IGI Global, Thessaloniki, Greece, 2009 Apr

@inproceedings{2009ChrysopoulosECEDSC,
author={Antonios C. Chrysopoulos and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain},
booktitle={Third Electrical and Computer Engineering Department Student Conference},
pages={245-267},
publisher={IGI Global},
address={Thessaloniki, Greece},
year={2009},
month={04},
date={2009-04-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Creating-and-Reusing-Metric-Graphs-for-Evaluating-Agent-Performance-in-the-Supply-Chain-Management-Domain.pdf},
keywords={Evaluating Agent Performance}
}

Christos Dimou, Fani A. Tzima, Andreas Symeonidis and Pericles Mitkas
"Specifying and Validating the Agent Performance Evaluation Methodology: The Symbiosis Use Case"
IADIS International Conference on Intelligent Systems and Agents, Algarve, Portugal, 2009 Jun

@inproceedings{2009DimouIADIS,
author={Christos Dimou and Fani A. Tzima and Andreas Symeonidis and Pericles Mitkas},
title={Specifying and Validating the Agent Performance Evaluation Methodology: The Symbiosis Use Case},
booktitle={IADIS International Conference on Intelligent Systems and Agents},
address={Algarve, Portugal},
year={2009},
month={06},
date={2009-06-17},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Specifying-and-Validating-the-Agent-Performance-Evaluation-Methodology.pdf},
keywords={evaluation methodology;formal specification;metrics representation;Z notation}
}

Manolis Falelakis, Christos Maramis, Irini Lekka, Pericles Mitkas and Anastasios Delopoulos
"An Ontology for Supporting Clinical Research on Cervical Cancer"
International Conference on Knowledge Engineering and Ontology Development, pp. 103--108, Springer-Verlag, Madeira, Portugal, 2009 Jan

This work presents an ontology for cervical cancer that is positioned in the center of a research system for conducting association studies. The ontology aims at providing a unified “language” for various heterogeneous medical repositories. To this end, it contains both generic patient-management and domain-specific concepts, as well as proper unification rules. The inference scheme adopted is coupled with a procedural programming layer in order to comply with the design requirements.

@inproceedings{2009FalelakisICKEOD,
author={Manolis Falelakis and Christos Maramis and Irini Lekka and Pericles Mitkas and Anastasios Delopoulos},
title={An Ontology for Supporting Clinical Research on Cervical Cancer},
booktitle={International Conference on Knowledge Engineering and Ontology Development},
pages={103--108},
publisher={Springer-Verlag},
address={Madeira, Portugal},
year={2009},
month={01},
date={2009-01-01},
url={http://mug.ee.auth.gr/wp-content/uploads/keod2009v22.pdf},
keywords={Domain modelling;Medical ontology},
abstract={This work presents an ontology for cervical cancer that is positioned in the center of a research system for conducting association studies. The ontology aims at providing a unified “language” for various heterogeneous medical repositories. To this end, it contains both generic patient-management and domain-specific concepts, as well as proper unification rules. The inference scheme adopted is coupled with a procedural programming layer in order to comply with the design requirements.}
}

Konstantinos M. Karagiannis, Fotis E. Psomopoulos and Pericles A. Mitkas
"Multi Level Clustering of Phylogenetic Profiles"
4th Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB '09, Athens, Greece, 2009 Dec

The prediction of gene function from genome sequences is one of the main issues in Bioinformatics. Most computational approaches are based on the similarity between sequences to infer gene function. However, the availability of several fully sequenced genomes has enabled alternative approaches, such as phylogenetic profiles (Pellegrini et al. (1999)). Phylogenetic profiles (pp) are vectors which indicate the presence or absence of a gene in other genomes. The main concept of pp’s is that proteins participating in a common structural complex or metabolic pathway are likely to evolve in a correlated fashion. In this paper, a multi level clustering algorithm of pp’s is presented, which aims to detect inter- and intra-genome gene clusters.

@inproceedings{2009KaragiannisHSCBB,
author={Konstantinos M. Karagiannis and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Multi Level Clustering of Phylogenetic Profiles},
booktitle={4th Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB '09},
address={Athens, Greece},
year={2009},
month={12},
date={2009-12-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Multi-Level-Clustering-of-Phylogenetic-Profiles.pdf},
keywords={infer gene function;prediction of gene},
abstract={The prediction of gene function from genome sequences is one of the main issues in Bioinformatics. Most computational approaches are based on the similarity between sequences to infer gene function. However, the availability of several fully sequenced genomes has enabled alternative approaches, such as phylogenetic profiles (Pellegrini et al. (1999)). Phylogenetic profiles (pp) are vectors which indicate the presence or absence of a gene in other genomes. The main concept of pp’s is that proteins participating in a common structural complex or metabolic pathway are likely to evolve in a correlated fashion. In this paper, a multi level clustering algorithm of pp’s is presented, which aims to detect inter- and intra-genome gene clusters.}
}
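The profile representation described in the abstract above is easy to sketch: each gene maps to a binary vector over a set of genomes, and genes with near-identical vectors are grouped. The following is an illustrative sketch only, with hypothetical gene names, genome count, and a greedy grouping rule; it is not the paper's multi level algorithm.

```python
# Sketch: phylogenetic profiles as binary presence/absence vectors,
# grouped by Hamming distance. Gene names and genomes are hypothetical.

def hamming(p, q):
    """Number of genomes where two profiles disagree."""
    return sum(a != b for a, b in zip(p, q))

def cluster_profiles(profiles, max_dist=1):
    """Greedy single-pass grouping: a gene joins the first cluster whose
    representative profile is within max_dist, else it starts a new one."""
    clusters = []  # list of (representative_profile, [gene names])
    for gene, prof in profiles.items():
        for rep, members in clusters:
            if hamming(rep, prof) <= max_dist:
                members.append(gene)
                break
        else:
            clusters.append((prof, [gene]))
    return [members for _, members in clusters]

# Hypothetical profiles over 5 genomes (1 = gene present in that genome)
profiles = {
    "geneA": (1, 1, 0, 1, 0),
    "geneB": (1, 1, 0, 1, 1),  # differs from geneA in one genome
    "geneC": (0, 0, 1, 0, 1),
}
print(cluster_profiles(profiles))  # geneA and geneB group together
```

Genes with correlated presence/absence patterns (here geneA and geneB) end up in the same cluster, which is the signal the profile method exploits.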

Pericles A. Mitkas, Anastasios Ntelopoulos, Konstantinos N. Vavliakis, Christos Maramis, Manolis Falelakis, Sotiris Diplaris, Vasilis Koutkias, Irini Lekka, A. Tantsis, T. Mikos, Nikolaos Maglaveras and Theodoros Agorastos
"Pooling data from different sources towards cervical cancer prevention - The ASSIST Project"
8th Scientific Meeting, New Developments in Prevention and Confrontation of Gynecological Cancer, Thessaloniki, Greece, 2009 Jan

@inproceedings{2009MitkasNDPCGC,
author={Pericles A. Mitkas and Anastasios Ntelopoulos and Konstantinos N. Vavliakis and Christos Maramis and Manolis Falelakis and Sotiris Diplaris and Vasilis Koutkias and Irini Lekka and A. Tantsis and T. Mikos and Nikolaos Maglaveras and Theodoros Agorastos},
title={Pooling data from different sources towards cervical cancer prevention - The ASSIST Project},
booktitle={8th Scientific Meeting, New Developments in Prevention and Confrontation of Gynecological Cancer},
address={Thessaloniki, Greece},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Pooling-data-from-different-sources-towards-cervical-cancer-prevention-The-ASSIST-Project.pdf},
keywords={cervical cancer prevention}
}

Vivia Nikolaidou and Pericles A. Mitkas
"A Sequence Mining Method to Predict the Bidding Strategy of Trading Agents"
4th International Workshop on Agents and Data Mining Interaction (ADMI 2009), pp. 139-151, Springer-Verlag, Berlin, Heidelberg, 2009 Jan

In this work, we describe the process used in order to predict the bidding strategy of trading agents. This was done in the context of the Reverse TAC, or CAT, game of the Trading Agent Competition. In this game, a set of trading agents, buyers or sellers, are provided by the server and they trade their goods in one of the markets operated by the competing agents. Better knowledge of the strategy of the trading agents will allow a market maker to adapt its incentives and attract more agents to its own market. Our prediction was based on the time series of the traders' past bids, taking into account the variation of each bid compared to its history. The results proved to be of satisfactory accuracy, both in the game's context and when compared to other existing approaches.

@inproceedings{2009NikolaidouADMI,
author={Vivia Nikolaidou and Pericles A. Mitkas},
title={A Sequence Mining Method to Predict the Bidding Strategy of Trading Agents},
booktitle={4th International Workshop on Agents and Data Mining Interaction (ADMI 2009)},
pages={139-151},
publisher={Springer-Verlag},
address={Berlin, Heidelberg},
year={2009},
month={01},
date={2009-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A_Sequence_Mining_Method_to_Predict_the_Bidding_St.pdf},
keywords={bidding strategy;trading agents},
abstract={In this work, we describe the process used in order to predict the bidding strategy of trading agents. This was done in the context of the Reverse TAC, or CAT, game of the Trading Agent Competition. In this game, a set of trading agents, buyers or sellers, are provided by the server and they trade their goods in one of the markets operated by the competing agents. Better knowledge of the strategy of the trading agents will allow a market maker to adapt its incentives and attract more agents to its own market. Our prediction was based on the time series of the traders' past bids, taking into account the variation of each bid compared to its history. The results proved to be of satisfactory accuracy, both in the game's context and when compared to other existing approaches.}
}
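The idea of representing a trader's bids as variations relative to its own history can be illustrated with a minimal sketch: encode consecutive bid changes as symbols and count short patterns, which a downstream model could use to label the strategy. The symbols, helper names, and bid series below are hypothetical; the paper's actual sequence mining method is more involved.

```python
# Sketch (assumptions, not the paper's method): encode a bid time series
# as up/down/steady moves and count short move patterns.

from collections import Counter

def encode_moves(bids):
    """Map each consecutive bid change to a symbol: U(p), D(own), S(teady)."""
    out = []
    for prev, cur in zip(bids, bids[1:]):
        out.append("U" if cur > prev else "D" if cur < prev else "S")
    return "".join(out)

def ngram_counts(seq, n=2):
    """Frequencies of length-n move patterns in the encoded sequence."""
    return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

bids = [10.0, 10.5, 10.5, 11.0, 10.8]  # hypothetical bid series
moves = encode_moves(bids)             # "USUD"
print(moves, dict(ngram_counts(moves)))
```

Different bidding strategies leave different pattern frequencies, so these counts can serve as features for a classifier.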

John E. Psaroudakis, Fani A. Tzima and Pericles A. Mitkas
"EVADING: An Evolutionary Algorithm with Dynamic Niching for Data Classification"
2009 International Conference on Genetic and Evolutionary Methods (GEM), pp. 59--65, Las Vegas, Nevada, USA, 2009 Jul

Multimodal optimization problems (MMOPs) have been widely studied in many fields of machine learning, including pattern recognition and data classification. Formulating the process of rule induction for the latter task as a MMOP and inspired by corresponding findings in the field of function optimization, our current work proposes an evolutionary algorithm (EVADING) capable of discovering a set of accurate and diverse classification rules. The proposed algorithm uses a dynamic clustering technique as a parallel niching method to maintain rule population diversity and converge to the optimal rules for the attribute-space defined by the target dataset. To demonstrate its applicability and potential, EVADING is applied to a series of real-life classification problems and its prediction accuracy is compared to that of other popular non-evolutionary machine learning techniques. Results are encouraging, since EVADING manages to achieve the best overall average ranking and performs significantly better (at significance level a

@inproceedings{2009PsaroudakisGEM,
author={John E. Psaroudakis and Fani A. Tzima and Pericles A. Mitkas},
title={EVADING: An Evolutionary Algorithm with Dynamic Niching for Data Classification},
booktitle={2009 International Conference on Genetic and Evolutionary Methods (GEM)},
pages={59--65},
address={Las Vegas, Nevada, USA},
year={2009},
month={07},
date={2009-07-13},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/EVADING-An-Evolutionary-Algorithm-with-Dynamic-Niching-for-Data-Classification.pdf},
keywords={agent performance},
abstract={Multimodal optimization problems (MMOPs) have been widely studied in many fields of machine learning, including pattern recognition and data classification. Formulating the process of rule induction for the latter task as a MMOP and inspired by corresponding findings in the field of function optimization, our current work proposes an evolutionary algorithm (EVADING) capable of discovering a set of accurate and diverse classification rules. The proposed algorithm uses a dynamic clustering technique as a parallel niching method to maintain rule population diversity and converge to the optimal rules for the attribute-space defined by the target dataset. To demonstrate its applicability and potential, EVADING is applied to a series of real-life classification problems and its prediction accuracy is compared to that of other popular non-evolutionary machine learning techniques. Results are encouraging, since EVADING manages to achieve the best overall average ranking and performs significantly better (at significance level a}
}
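Niching as described in the abstract above can be illustrated with a minimal fitness-sharing sketch: individuals assigned to the same cluster divide their raw fitness, so no single niche takes over the rule population. All names and the cluster assignment below are hypothetical; this is not the EVADING algorithm itself.

```python
# Sketch (assumptions, not EVADING): cluster-based fitness sharing,
# which penalizes crowded niches to preserve rule diversity.

from collections import defaultdict

def shared_fitness(population, raw_fitness, cluster_of):
    """Divide each individual's raw fitness by the size of its cluster,
    discouraging the population from converging to a single niche."""
    clusters = defaultdict(list)
    for ind in population:
        clusters[cluster_of(ind)].append(ind)
    return {ind: raw_fitness[ind] / len(clusters[cluster_of(ind)])
            for ind in population}

pop = ["r1", "r2", "r3"]                  # hypothetical rules
raw = {"r1": 0.9, "r2": 0.8, "r3": 0.6}
cluster = {"r1": 0, "r2": 0, "r3": 1}.get  # hypothetical cluster assignment
print(shared_fitness(pop, raw, cluster))   # r1 and r2 share niche 0
```

After sharing, r3 (alone in its niche) outranks r2 despite a lower raw fitness, which is exactly the pressure that keeps diverse rules alive.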

Marina Riga, Fani A. Tzima, Kostas Karatzas and Pericles A. Mitkas
"Development and evaluation of data mining models for air quality prediction in Athens, Greece"
Information Technologies in Environmental Engineering, Proceedings of the 4th International ICSC Symposium, ITEE 2009, pp. 331--344, Springer Berlin Heidelberg, Thessaloniki, Greece, 2009 May

Air pollution is a major problem in the world today, causing undesirable effects on both the environment and human health and, at the same time, stressing the need for effective simulation and forecasting models of atmospheric quality. Targeting this adverse situation, our current work focuses on investigating the potential of data mining algorithms in air pollution modeling and short-term forecasting problems. In this direction, various data mining methods are adopted for the qualitative forecasting of concentration levels of air pollutants or the quantitative prediction of their values (through the development of different classification and regression models respectively) in five locations of the greater Athens area. An additional aim of this work is the systematic assessment of the quality of experimental results, in order to discover the best performing algorithm (or set of algorithms) that can be proved to be significantly different from its rivals. Obtained experimental results are deemed satisfactory in terms of the aforementioned goals of the investigation, as high percentages of correct classifications are achieved in the set of monitoring stations and clear conclusions are drawn, as far as the determination of significantly best performing algorithms is concerned, for the development of air quality (AQ) prediction models.

@inproceedings{2009TzimaITEE,
author={Marina Riga and Fani A. Tzima and Kostas Karatzas and Pericles A. Mitkas},
title={Development and evaluation of data mining models for air quality prediction in Athens, Greece},
booktitle={Information Technologies in Environmental Engineering, Proceedings of the 4th International ICSC Symposium, ITEE 2009},
pages={331--344},
publisher={Springer Berlin Heidelberg},
address={Thessaloniki, Greece},
year={2009},
month={05},
date={2009-05-28},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Development-and-evaluation-of-data-mining-models-for-air-quality-prediction-in-Athens-Greece.pdf},
keywords={air pollution model;air quality;data mining algorithms},
abstract={Air pollution is a major problem in the world today, causing undesirable effects on both the environment and human health and, at the same time, stressing the need for effective simulation and forecasting models of atmospheric quality. Targeting this adverse situation, our current work focuses on investigating the potential of data mining algorithms in air pollution modeling and short-term forecasting problems. In this direction, various data mining methods are adopted for the qualitative forecasting of concentration levels of air pollutants or the quantitative prediction of their values (through the development of different classification and regression models respectively) in five locations of the greater Athens area. An additional aim of this work is the systematic assessment of the quality of experimental results, in order to discover the best performing algorithm (or set of algorithms) that can be proved to be significantly different from its rivals. Obtained experimental results are deemed satisfactory in terms of the aforementioned goals of the investigation, as high percentages of correct classifications are achieved in the set of monitoring stations and clear conclusions are drawn, as far as the determination of significantly best performing algorithms is concerned, for the development of air quality (AQ) prediction models.}
}
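The distinction drawn above between qualitative forecasting (classification) and quantitative prediction (regression) comes down to whether the target is the pollutant value itself or a discretized band of it. A minimal sketch, with hypothetical NO2 thresholds and readings:

```python
# Sketch (assumptions): the same pollutant series framed as a regression
# target (the value) or a classification target (a qualitative band).
# Band thresholds below are hypothetical, not an official AQ index.

def to_band(no2):
    """Map an NO2 concentration (ug/m3) to a qualitative class."""
    if no2 < 40:
        return "low"
    if no2 < 100:
        return "moderate"
    return "high"

values = [25.0, 80.0, 140.0]         # hypothetical hourly NO2 readings
print([to_band(v) for v in values])  # ['low', 'moderate', 'high']
```

A regression model would predict the raw values directly, while a classifier would be trained on the banded labels; the paper's comparison of algorithms covers both framings.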

2008

Journal Articles

Pericles A. Mitkas, Vassilis Koutkias, Andreas L. Symeonidis, Manolis Falelakis, Christos Diou, Irini Lekka, Anastasios T. Delopoulos, Theodoros Agorastos and Nicos Maglaveras
"Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologes: The ASSIST Approach"
Studies in Health Technology and Informatic, 136, pp. 241-246, 2008 Jan

Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.

@article{2007MitkasSHTI,
author={Pericles A. Mitkas and Vassilis Koutkias and Andreas L. Symeonidis and Manolis Falelakis and Christos Diou and Irini Lekka and Anastasios T. Delopoulos and Theodoros Agorastos and Nicos Maglaveras},
title={Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologies: The ASSIST Approach},
journal={Studies in Health Technology and Informatics},
volume={136},
pages={241-246},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Association-Studies-on-Cervical-Cancer-Facilitated-by-Inference-and-Semantic-Technologies-The-ASSIST-Approach-.pdf},
abstract={Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.}
}

Alexandros Batzios, Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"BioCrawler: An intelligent crawler for the semantic web"
Expert Systems with Applications, 36, (35), 2008 Jul

Web crawling has become an important aspect of web search, as the WWW keeps getting bigger and search engines strive to index the most important and up to date content. Many experimental approaches exist, but few actually try to model the current behaviour of search engines, which is to crawl and refresh the sites they deem as important, much more frequently than others. BioCrawler mirrors this behaviour on the semantic web, by applying the learning strategies adopted in previous work on ecosystem simulation, called BioTope. BioCrawler employs the principles of BioTope

@article{2008BatziosESwA,
author={Alexandros Batzios and Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={BioCrawler: An intelligent crawler for the semantic web},
journal={Expert Systems with Applications},
volume={36},
number={35},
year={2008},
month={07},
date={2008-07-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/BioCrawler-An-intelligent-crawler-for-the-semantic-web.pdf},
keywords={semantic web;Multi-Agent System;focused crawling;web crawling},
abstract={Web crawling has become an important aspect of web search, as the WWW keeps getting bigger and search engines strive to index the most important and up to date content. Many experimental approaches exist, but few actually try to model the current behaviour of search engines, which is to crawl and refresh the sites they deem as important, much more frequently than others. BioCrawler mirrors this behaviour on the semantic web, by applying the learning strategies adopted in previous work on ecosystem simulation, called BioTope. BioCrawler employs the principles of BioTope}
}
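The crawling behaviour described above, revisiting sites deemed important more frequently than others, can be sketched with a priority queue whose revisit interval shrinks as importance grows. The importance scores and URLs below are hypothetical, and this is not BioCrawler's learning strategy, only the scheduling idea.

```python
# Sketch (assumptions, not BioCrawler): importance-weighted revisit
# scheduling via a min-heap keyed by next-visit time.

import heapq

def schedule(pages, steps):
    """Simulate `steps` crawl slots; a page's revisit interval is the
    inverse of its importance, so important pages are crawled more often."""
    heap = [(0.0, url) for url in pages]  # (next_visit_time, url)
    heapq.heapify(heap)
    visits = []
    for _ in range(steps):
        t, url = heapq.heappop(heap)
        visits.append(url)
        interval = 1.0 / pages[url]       # high importance -> short interval
        heapq.heappush(heap, (t + interval, url))
    return visits

pages = {"hub.example": 0.9, "leaf.example": 0.3}  # hypothetical importance
visits = schedule(pages, 8)
print(visits.count("hub.example"), visits.count("leaf.example"))
```

Over the simulated slots the high-importance page accumulates the majority of visits, mirroring the refresh bias of production search engines that the paper models.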

Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis, Ioannis Kontogounis and Pericles A. Mitkas
"Agent Mertacor: A robust design for dealing with uncertainty and variation in SCM environments"
Expert Systems with Applications, 35, (3), pp. 591-603, 2008 Jan

Supply Chain Management (SCM) has recently entered a new era, where the old-fashioned static, long-term relationships between involved actors are being replaced by new, dynamic negotiating schemas, established over virtual organizations and trading marketplaces. SCM environments now operate under strict policies that all interested parties (suppliers, manufacturers, customers) have to abide by, in order to participate. And, though such dynamic markets provide greater profit potential, they also conceal greater risks, since competition is tougher and request and demand may vary significantly in the quest for maximum benefit. The need for efficient SCM actors is thus implied, actors that may handle the deluge of (either complete or incomplete) information generated, perceive variations and exploit the full potential of the environments they inhabit. In this context, we introduce Mertacor, an agent that employs robust mechanisms for dealing with all SCM facets and for trading within dynamic and competitive SCM environments. Its efficiency has been extensively tested in one of the most challenging SCM environments, the Trading Agent Competition (TAC) SCM game. This paper provides an extensive analysis of Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and thoroughly discusses Mertacor

@article{2008ChatzidimitriouESwA,
author={Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Ioannis Kontogounis and Pericles A. Mitkas},
title={Agent Mertacor: A robust design for dealing with uncertainty and variation in SCM environments},
journal={Expert Systems with Applications},
volume={35},
number={3},
pages={591-603},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Agent-Mertacor-A-robust-design-for-dealing-with-uncertaintyand-variation-in-SCM-environments.pdf},
keywords={machine learning;Agent intelligence;Autonomous trading agents;Electronic commerce},
abstract={Supply Chain Management (SCM) has recently entered a new era, where the old-fashioned static, long-term relationships between involved actors are being replaced by new, dynamic negotiating schemas, established over virtual organizations and trading marketplaces. SCM environments now operate under strict policies that all interested parties (suppliers, manufacturers, customers) have to abide by, in order to participate. And, though such dynamic markets provide greater profit potential, they also conceal greater risks, since competition is tougher and request and demand may vary significantly in the quest for maximum benefit. The need for efficient SCM actors is thus implied, actors that may handle the deluge of (either complete or incomplete) information generated, perceive variations and exploit the full potential of the environments they inhabit. In this context, we introduce Mertacor, an agent that employs robust mechanisms for dealing with all SCM facets and for trading within dynamic and competitive SCM environments. Its efficiency has been extensively tested in one of the most challenging SCM environments, the Trading Agent Competition (TAC) SCM game. This paper provides an extensive analysis of Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and thoroughly discusses Mertacor}
}

Alexandros Batzios, Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"An Integrated Infrastructure for Monitoring and Evaluating Agent-based Systems"
Expert Systems with Applications, 36, (4), 2008 Sep

Driven by the urging need to thoroughly identify and accentuate the merits of agent technology, we present in this paper, MEANDER, an integrated framework for evaluating the performance of agent-based systems. The proposed framework is based on the Agent Performance Evaluation (APE) methodology, which provides guidelines and representation tools for performance metrics, measurement collection and aggregation of measurements. MEANDER comprises a series of integrated software components that implement and automate various parts of the methodology and assist evaluators in their tasks. The main objective of MEANDER is to integrate performance evaluation processes into the entire development lifecycle, while clearly separating any evaluation-specific code from the application code at hand. In this paper, we describe in detail the architecture and functionality of the MEANDER components and test its applicability to an existing multi-agent system.

@article{2008DimouESwA,
author={Alexandros Batzios and Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={An Integrated Infrastructure for Monitoring and Evaluating Agent-based Systems},
journal={Expert Systems with Applications},
volume={36},
number={4},
year={2008},
month={09},
date={2008-09-09},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-integrated-infrastructure-for-monitoring-and-evaluating-agent-based-systems.pdf},
keywords={performance evaluation;automated software engineering;fuzzy measurement aggregation;software agents},
abstract={Driven by the urging need to thoroughly identify and accentuate the merits of agent technology, we present in this paper, MEANDER, an integrated framework for evaluating the performance of agent-based systems. The proposed framework is based on the Agent Performance Evaluation (APE) methodology, which provides guidelines and representation tools for performance metrics, measurement collection and aggregation of measurements. MEANDER comprises a series of integrated software components that implement and automate various parts of the methodology and assist evaluators in their tasks. The main objective of MEANDER is to integrate performance evaluation processes into the entire development lifecycle, while clearly separating any evaluation-specific code from the application code at hand. In this paper, we describe in detail the architecture and functionality of the MEANDER components and test its applicability to an existing multi-agent system.}
}

Andreas L. Symeonidis, Vivia Nikolaidou and Pericles A. Mitkas
"Sketching a methodology for efficient supply chain management agents enhanced through data mining"
International Journal of Intelligent Information and Database Systems (IJIIDS), 2, (1), 2008 Feb

Supply Chain Management (SCM) environments demand intelligent solutions, which can perceive variations and achieve maximum revenue. This highlights the importance of a commonly accepted design methodology, since most current implementations are application-specific. In this work, we present a methodology for building an intelligent trading agent and evaluating its performance at the Trading Agent Competition (TAC) SCM game. We justify architectural choices made, ranging from the adoption of specific Data Mining (DM) techniques, to the selection of the appropriate metrics for agent performance evaluation. Results indicate that our agent has proven capable of providing advanced SCM solutions in demanding SCM environments.

@article{2008SymeoniidsIJIIDS,
author={Andreas L. Symeonidis and Vivia Nikolaidou and Pericles A. Mitkas},
title={Sketching a methodology for efficient supply chain management agents enhanced through data mining},
journal={International Journal of Intelligent Information and Database Systems (IJIIDS)},
volume={2},
number={1},
year={2008},
month={02},
date={2008-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Sketching-a-methodology-for-efficient-supply-chain-management-agents-enhanced-through-data-mining.pdf},
keywords={performance evaluation;Intelligent agents;agent-based systems;multi-agent systems;MAS;trading agent competition;agent-oriented methodology;bidding;forecasting;SCM},
abstract={Supply Chain Management (SCM) environments demand intelligent solutions, which can perceive variations and achieve maximum revenue. This highlights the importance of a commonly accepted design methodology, since most current implementations are application-specific. In this work, we present a methodology for building an intelligent trading agent and evaluating its performance at the Trading Agent Competition (TAC) SCM game. We justify architectural choices made, ranging from the adoption of specific Data Mining (DM) techniques, to the selection of the appropriate metrics for agent performance evaluation. Results indicate that our agent has proven capable of providing advanced SCM solutions in demanding SCM environments.}
}

2008

Conference Papers

Pericles A. Mitkas, Vassilis Koutkias, Andreas Symeonidis, Manolis Falelakis, Christos Diou, Irini Lekka, Anastasios T. Delopoulos, Theodoros Agorastos and Nicos Maglaveras
"Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologes: The ASSIST Approach"
MIE, Goteborg, Sweden, 2008 May

Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.

@conference{2008MitkasMIE,
author={Pericles A. Mitkas and Vassilis Koutkias and Andreas Symeonidis and Manolis Falelakis and Christos Diou and Irini Lekka and Anastasios T. Delopoulos and Theodoros Agorastos and Nicos Maglaveras},
title={Association Studies on Cervical Cancer Facilitated by Inference and Semantic Technologies: The ASSIST Approach},
booktitle={MIE},
address={Goteborg, Sweden},
year={2008},
month={05},
date={2008-05-25},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Association-Studies-on-Cervical-Cancer-Facilitated-by-Inference-and-Semantic-Technologies-The-ASSIST-Approach-.pdf},
keywords={agent performance evaluation;Supply Chain Management systems},
abstract={Cervical cancer (CxCa) is currently the second leading cause of cancer-related deaths, for women between 20 and 39 years old. As infection by the human papillomavirus (HPV) is considered as the central risk factor for CxCa, current research focuses on the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. ASSIST is an EU-funded research project that aims to facilitate the design and execution of genetic association studies on CxCa in a systematic way by adopting inference and semantic technologies. Toward this goal, ASSIST provides the means for seamless integration and virtual unification of distributed and heterogeneous CxCa data repositories, and the underlying mechanisms to undertake the entire process of expressing and statistically evaluating medical hypotheses based on the collected data in order to generate medically important associations. The ultimate goal for ASSIST is to foster the biomedical research community by providing an open, integrated and collaborative framework to facilitate genetic association studies.}
}

2008

Inproceedings Papers

Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis and Pericles A. Mitkas
"Data Mining-Driven Analysis and Decomposition in Agent Supply Chain Management Networks"
IEEE/WIC/ACM Workshop on Agents and Data Mining Interaction, pp. 558-561, IEEE Computer Society, Sydney, Australia, 2008 Dec

In complex and dynamic environments where interdependencies cannot monotonously determine causality, data mining techniques may be employed in order to analyze the problem, extract key features and identify pivotal factors. Typical cases of such complexity and dynamicity are supply chain networks, where a number of involved stakeholders struggle towards their own benefit. These stakeholders may be agents with varying degrees of autonomy and intelligence, in a constant effort to establish beneficiary contracts and maximize own revenue. In this paper, we illustrate the benefits of data mining analysis on a well-established agent supply chain management network. We apply data mining techniques, both at a macro and micro level, analyze the results and discuss them in the context of agent performance improvement.

@inproceedings{2008ChatzidimitriouADMI,
author={Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Data Mining-Driven Analysis and Decomposition in Agent Supply Chain Management Networks},
booktitle={IEEE/WIC/ACM Workshop on Agents and Data Mining Interaction},
pages={558-561},
publisher={IEEE Computer Society},
address={Sydney, Australia},
year={2008},
month={12},
date={2008-12-08},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Data_Mining-Driven_Analysis_and_Decomposition_in_A.pdf},
keywords={fuzzy logic},
abstract={In complex and dynamic environments where interdependencies cannot monotonously determine causality, data mining techniques may be employed in order to analyze the problem, extract key features and identify pivotal factors. Typical cases of such complexity and dynamicity are supply chain networks, where a number of involved stakeholders struggle towards their own benefit. These stakeholders may be agents with varying degrees of autonomy and intelligence, in a constant effort to establish beneficiary contracts and maximize own revenue. In this paper, we illustrate the benefits of data mining analysis on a well-established agent supply chain management network. We apply data mining techniques, both at a macro and micro level, analyze the results and discuss them in the context of agent performance improvement.}
}

Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas
"Exploiting parallel data mining processing for protein annotation"
Student EUREKA 2008: 2nd Panhellenic Scientific Student Conference, pp. 242-252, Samos, Greece, 2008 Aug

Proteins are large organic compounds consisting of amino acids arranged in a linear chain and joined together by peptide bonds. One of the most important challenges in modern Bioinformatics is the accurate prediction of the functional behavior of proteins. In this paper a novel parallel methodology for automatic protein function annotation is presented. Data mining techniques are employed in order to construct models based on data generated from already annotated protein sequences. The first step of the methodology is to obtain the motifs present in these sequences, which are then provided as input to the data mining algorithms in order to create a model for every term. Experiments conducted using the EGEE Grid environment as a source of multiple CPUs clearly indicate that the methodology is highly efficient and accurate, as the utilization of many processors substantially reduces the execution time.

@inproceedings{2008CkekasEURECA,
author={Christos N. Gkekas and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Exploiting parallel data mining processing for protein annotation},
booktitle={Student EUREKA 2008: 2nd Panhellenic Scientific Student Conference},
pages={242-252},
address={Samos, Greece},
year={2008},
month={08},
date={2008-08-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Exploiting-parallel-data-mining-processing-for-protein-annotation-.pdf},
keywords={Finite State Automata;Parallel Processing},
abstract={Proteins are large organic compounds consisting of amino acids arranged in a linear chain and joined together by peptide bonds. One of the most important challenges in modern Bioinformatics is the accurate prediction of the functional behavior of proteins. In this paper a novel parallel methodology for automatic protein function annotation is presented. Data mining techniques are employed in order to construct models based on data generated from already annotated protein sequences. The first step of the methodology is to obtain the motifs present in these sequences, which are then provided as input to the data mining algorithms in order to create a model for every term. Experiments conducted using the EGEE Grid environment as a source of multiple CPUs clearly indicate that the methodology is highly efficient and accurate, as the utilization of many processors substantially reduces the execution time.}
}

Christos Dimou, Manolis Falelakis, Andreas Symeonidis, Anastasios Delopoulos and Pericles A. Mitkas
"Constructing Optimal Fuzzy Metric Trees for Agent Performance Evaluation"
IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT'08), pp. 336-339, IEEE Computer Society, Sydney, Australia, 2008 Dec

The field of multi-agent systems has reached a significant degree of maturity with respect to frameworks, standards and infrastructures. Focus is now shifted to performance evaluation of real-world applications, in order to quantify the practical benefits and drawbacks of agent systems. Our approach extends current work on generic evaluation methodologies for agents by employing fuzzy weighted trees for organizing evaluation-specific concepts/metrics and linguistic terms to intuitively represent and aggregate measurement information. Furthermore, we introduce meta-metrics that measure the validity and complexity of the contribution of each metric in the overall performance evaluation. These are all incorporated for selecting optimal subsets of metrics and designing the evaluation process in compliance with the demands/restrictions of various evaluation setups, thus minimizing intervention by domain experts. The applicability of the proposed methodology is demonstrated through the evaluation of a real-world test case.

@inproceedings{2008DimouIAT,
author={Christos Dimou and Manolis Falelakis and Andreas Symeonidis and Anastasios Delopoulos and Pericles A. Mitkas},
title={Constructing Optimal Fuzzy Metric Trees for Agent Performance Evaluation},
booktitle={IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT'08)},
pages={336--339},
publisher={IEEE Computer Society},
address={Sydney, Australia},
year={2008},
month={12},
date={2008-12-09},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Constructing-Optimal-Fuzzy-Metric-Trees-for-Agent-Performance-Evaluation.pdf},
keywords={fuzzy logic},
abstract={The field of multi-agent systems has reached a significant degree of maturity with respect to frameworks, standards and infrastructures. Focus is now shifted to performance evaluation of real-world applications, in order to quantify the practical benefits and drawbacks of agent systems. Our approach extends current work on generic evaluation methodologies for agents by employing fuzzy weighted trees for organizing evaluation-specific concepts/metrics and linguistic terms to intuitively represent and aggregate measurement information. Furthermore, we introduce meta-metrics that measure the validity and complexity of the contribution of each metric in the overall performance evaluation. These are all incorporated for selecting optimal subsets of metrics and designing the evaluation process in compliance with the demands/restrictions of various evaluation setups, thus minimizing intervention by domain experts. The applicability of the proposed methodology is demonstrated through the evaluation of a real-world test case.}
}
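The fuzzy metric tree idea above, organizing metrics hierarchically and aggregating weighted scores upward, can be sketched minimally as follows. This is an illustrative sketch, not the paper's implementation; the class and field names are assumptions, and a plain weighted average stands in for the paper's fuzzy aggregation operators.

```python
# Minimal sketch (assumed names, not the paper's code): a metric tree whose
# internal nodes aggregate their children's scores as a weighted average.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MetricNode:
    name: str
    weight: float = 1.0          # relative importance among siblings
    score: float = 0.0           # measured value in [0, 1]; used at leaves
    children: List["MetricNode"] = field(default_factory=list)

    def aggregate(self) -> float:
        """Leaves return their score; inner nodes take a weighted mean."""
        if not self.children:
            return self.score
        total_w = sum(c.weight for c in self.children)
        return sum(c.weight * c.aggregate() for c in self.children) / total_w

root = MetricNode("performance", children=[
    MetricNode("accuracy", weight=2.0, score=0.9),
    MetricNode("latency", weight=1.0, score=0.6),
])
print(round(root.aggregate(), 3))  # (2*0.9 + 1*0.6) / 3 = 0.8
```

The paper's meta-metrics (validity, complexity) would attach additional fields to each node and prune low-value subtrees before aggregation; that selection step is omitted here.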

Christos Dimou, Kyriakos C. Chatzidimitriou, Andreas Symeonidis and Pericles A. Mitkas
"Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain"
First Workshop on Knowledge Reuse (KREUSE), Beijing, China, 2008 May

The overwhelming demand for efficient agent performance in Supply Chain Management systems, as exemplified by numerous international competitions, raises the issue of defining and using generalized methods for performance evaluation. Up until now, most researchers test their findings in an ad-hoc manner, often having to re-invent existing evaluation-specific knowledge. In this position paper, we tackle the key issue of defining and using metrics within the context of evaluating agent performance in the SCM domain. We propose the Metrics Representation Graph, a structure that organizes performance metrics in hierarchical manner, and perform a preliminary assessment by instantiating an MRG for the TAC SCM competition, one of the most demanding SCM competitions currently established. We envision the automated generation of the MRG, as well as appropriate contribution from the TAC community towards the finalization of the MRG, so that it will be readily available for future performance evaluations.

@inproceedings{2008DimouKREUSE,
author={Christos Dimou and Kyriakos C. Chatzidimitriou and Andreas Symeonidis and Pericles A. Mitkas},
title={Creating and Reusing Metric Graphs for Evaluating Agent Performance in the Supply Chain Management Domain},
booktitle={First Workshop on Knowledge Reuse (KREUSE)},
address={Beijing, China},
year={2008},
month={05},
date={2008-05-25},
url={http://issel.ee.auth.gr/wp-content/uploads/Dimou-KREUSE-08.pdf},
keywords={agent performance evaluation;Supply Chain Management systems},
abstract={The overwhelming demand for efficient agent performance in Supply Chain Management systems, as exemplified by numerous international competitions, raises the issue of defining and using generalized methods for performance evaluation. Up until now, most researchers test their findings in an ad-hoc manner, often having to re-invent existing evaluation-specific knowledge. In this position paper, we tackle the key issue of defining and using metrics within the context of evaluating agent performance in the SCM domain. We propose the Metrics Representation Graph, a structure that organizes performance metrics in hierarchical manner, and perform a preliminary assessment by instantiating an MRG for the TAC SCM competition, one of the most demanding SCM competitions currently established. We envision the automated generation of the MRG, as well as appropriate contribution from the TAC community towards the finalization of the MRG, so that it will be readily available for future performance evaluations.}
}

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"Data Mining and Agent Technology: a fruitful symbiosis"
Soft Computing for Knowledge Discovery and Data Mining, pp. 327-362, Springer US, Clermont-Ferrand, France, 2008 Jan

Multi-agent systems (MAS) have grown quite popular in a wide spectrum of applications where argumentation, communication, scaling and adaptability are requested. And though the need for well-established engineering approaches for building and evaluating such intelligent systems has emerged, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability. Even existing well-tested evaluation methodologies applied in traditional software engineering prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. The following chapter aims to present such a unified and integrated methodology for a specific category of MAS. It takes all constraints and issues into account and denotes the way knowledge extracted with the use of Data Mining (DM) techniques can be used for the formulation initially, and the improvement, in the long run, of agent reasoning and MAS performance. The coupling of DM and Agent Technology (AT) principles, proposed within the context of this chapter, is therefore expected to provide the reader with an efficient gateway for developing and evaluating highly reconfigurable software approaches that incorporate domain knowledge and provide sophisticated Decision Making capabilities. The main objectives of this chapter could be summarized into the following: a) introduce Agent Technology (AT) as a successful paradigm for building Data Mining (DM)-enriched applications, b) provide a methodology for (re)evaluating the performance of such DM-enriched Multi-Agent Systems (MAS), c) introduce Agent Academy II, an Agent-Oriented Software Engineering framework for building MAS that incorporate knowledge models extracted by the use of (classical and novel) DM techniques and d) denote the benefits of the proposed approach through a real-world demonstrator.
This chapter provides a link between DM and AT and explains how these technologies can efficiently cooperate with each other. The exploitation of useful knowledge extracted by the use of DM may considerably improve agent infrastructures, while also increasing reusability and minimizing customization costs. The synergy between DM and AT is ultimately expected to provide MAS with higher levels of autonomy, adaptability and accuracy and, hence, intelligence.

@inproceedings{2008DimouSCKDDM,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Data Mining and Agent Technology: a fruitful symbiosis},
booktitle={Soft Computing for Knowledge Discovery and Data Mining},
pages={327-362},
publisher={Springer US},
address={Clermont-Ferrand, France},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Data-Mining-and-Agent-Technology-a-fruitful-symbiosis.pdf},
keywords={Gene Ontology;Parallel Algorithms;Protein Classification},
abstract={Multi-agent systems (MAS) have grown quite popular in a wide spectrum of applications where argumentation, communication, scaling and adaptability are requested. And though the need for well-established engineering approaches for building and evaluating such intelligent systems has emerged, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability. Even existing well-tested evaluation methodologies applied in traditional software engineering prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. The following chapter aims to present such a unified and integrated methodology for a specific category of MAS. It takes all constraints and issues into account and denotes the way knowledge extracted with the use of Data Mining (DM) techniques can be used for the formulation initially, and the improvement, in the long run, of agent reasoning and MAS performance. The coupling of DM and Agent Technology (AT) principles, proposed within the context of this chapter, is therefore expected to provide the reader with an efficient gateway for developing and evaluating highly reconfigurable software approaches that incorporate domain knowledge and provide sophisticated Decision Making capabilities. The main objectives of this chapter could be summarized into the following: a) introduce Agent Technology (AT) as a successful paradigm for building Data Mining (DM)-enriched applications, b) provide a methodology for (re)evaluating the performance of such DM-enriched Multi-Agent Systems (MAS), c) introduce Agent Academy II, an Agent-Oriented Software Engineering framework for building MAS that incorporate knowledge models extracted by the use of (classical and novel) DM techniques and d) denote the benefits of the proposed approach through a real-world demonstrator.
This chapter provides a link between DM and AT and explains how these technologies can efficiently cooperate with each other. The exploitation of useful knowledge extracted by the use of DM may considerably improve agent infrastructures, while also increasing reusability and minimizing customization costs. The synergy between DM and AT is ultimately expected to provide MAS with higher levels of autonomy, adaptability and accuracy and, hence, intelligence.}
}

Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas
"A Parallel Data Mining Application for Gene Ontology Term Prediction"
3rd EGEE User Forum, Clermont-Ferrand, France, 2008 Feb

One of the most important challenges in modern bioinformatics is the accurate prediction of the functional behaviour of proteins. The strong correlation that exists between the properties of a protein and its motif sequence makes such a prediction possible. In this paper a novel parallel methodology for protein function prediction will be presented. Data mining techniques are employed in order to construct a model for each Gene Ontology term, based on data generated from already annotated protein sequences. In order to predict the annotation of an unknown protein, its motif sequence is run through each GO term model, producing similarity scores for every term. Although it has been experimentally proven that this process is efficient, it unfortunately requires heavy processor resources. In order to address this issue, a parallel application has been implemented and tested using the EGEE Grid infrastructure.

@inproceedings{2008GkekasEGEEForum,
author={Christos N. Gkekas and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={A Parallel Data Mining Application for Gene Ontology Term Prediction},
booktitle={3rd EGEE User Forum},
address={Clermont-Ferrand, France},
year={2008},
month={02},
date={2008-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A_parallel_data_mining_application_for_Gene_Ontology_term_prediction_-_Contribution.pdf},
keywords={Gene Ontology;Parallel Algorithms;Protein Classification},
abstract={One of the most important challenges in modern bioinformatics is the accurate prediction of the functional behaviour of proteins. The strong correlation that exists between the properties of a protein and its motif sequence makes such a prediction possible. In this paper a novel parallel methodology for protein function prediction will be presented. Data mining techniques are employed in order to construct a model for each Gene Ontology term, based on data generated from already annotated protein sequences. In order to predict the annotation of an unknown protein, its motif sequence is run through each GO term model, producing similarity scores for every term. Although it has been experimentally proven that this process is efficient, it unfortunately requires heavy processor resources. In order to address this issue, a parallel application has been implemented and tested using the EGEE Grid infrastructure.}
}

Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas
"A Parallel Data Mining Methodology for Protein Function Prediction Utilizing Finite State Automata"
2nd Electrical and Computer Engineering Student Conference, Athens, Greece, 2008 Apr

One of the most important challenges in modern bioinformatics is the accurate prediction of the functional behaviour of proteins. The strong correlation that exists between the properties of a protein and its motif sequence makes such a prediction possible. In this paper a novel parallel methodology for protein function prediction will be presented. Data mining techniques are employed in order to construct a model for each Gene Ontology term, based on data generated from already annotated protein sequences. In order to predict the annotation of an unknown protein, its motif sequence is run through each GO term model, producing similarity scores for every term. Although it has been experimentally proven that this process is efficient, it unfortunately requires heavy processor resources. In order to address this issue, a parallel application has been implemented and tested using the EGEE Grid infrastructure.

@inproceedings{2008GkekasSFHMMY,
author={Christos N. Gkekas and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={A Parallel Data Mining Methodology for Protein Function Prediction Utilizing Finite State Automata},
booktitle={2nd Electrical and Computer Engineering Student Conference},
address={Athens, Greece},
year={2008},
month={04},
date={2008-04-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-Parallel-Data-Mining-Methodology-for-Protein-Function-Prediction-Utilizing-Finite-State-Automata.pdf},
keywords={Parallel Data Mining for Protein Function},
abstract={One of the most important challenges in modern bioinformatics is the accurate prediction of the functional behaviour of proteins. The strong correlation that exists between the properties of a protein and its motif sequence makes such a prediction possible. In this paper a novel parallel methodology for protein function prediction will be presented. Data mining techniques are employed in order to construct a model for each Gene Ontology term, based on data generated from already annotated protein sequences. In order to predict the annotation of an unknown protein, its motif sequence is run through each GO term model, producing similarity scores for every term. Although it has been experimentally proven that this process is efficient, it unfortunately requires heavy processor resources. In order to address this issue, a parallel application has been implemented and tested using the EGEE Grid infrastructure.}
}
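The per-term modeling described in the two abstracts above, build one model per Gene Ontology term from the motif sequences of annotated proteins, then score an unknown protein's motif sequence against each model, is what makes the Grid parallelization natural: every term's model can be trained and queried independently. A minimal sketch of that idea follows; the function names and the transition-set model are illustrative assumptions, not the papers' actual finite state automata.

```python
# Illustrative sketch (assumed names, not the project's code): a per-GO-term
# transition model over motif identifiers, and a similarity score for an
# unknown protein's motif sequence.
from collections import defaultdict

def train_model(motif_sequences):
    """Record the motif-to-motif transitions observed for one GO term."""
    transitions = defaultdict(set)
    for seq in motif_sequences:
        for a, b in zip(seq, seq[1:]):
            transitions[a].add(b)
    return transitions

def score(model, seq):
    """Fraction of the sequence's transitions that the model has seen."""
    pairs = list(zip(seq, seq[1:]))
    if not pairs:
        return 0.0
    hits = sum(1 for a, b in pairs if b in model.get(a, set()))
    return hits / len(pairs)

# One model per GO term; an unknown protein is scored against each model.
model = train_model([["PF001", "PF002", "PF003"], ["PF001", "PF003"]])
print(score(model, ["PF001", "PF002", "PF003"]))  # 1.0
print(score(model, ["PF002", "PF001"]))           # 0.0
```

On the Grid, each `train_model`/`score` pair would run as a separate job, one per GO term, which is why adding processors reduces execution time roughly linearly in the abstracts' experiments.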

Pericles A. Mitkas
"Training Intelligent Agents and Evaluating Their Performance"
International Workshop on Agents and Data Mining Interaction (ADMI), pp. 336-339, IEEE Computer Society, Sydney, Australia, 2008 Dec

The field of multi-agent systems has reached a significant degree of maturity with respect to frameworks, standards and infrastructures. Focus is now shifted to performance evaluation of real-world applications, in order to quantify the practical benefits and drawbacks of agent systems. Our approach extends current work on generic evaluation methodologies for agents by employing fuzzy weighted trees for organizing evaluation-specific concepts/metrics and linguistic terms to intuitively represent and aggregate measurement information. Furthermore, we introduce meta-metrics that measure the validity and complexity of the contribution of each metric in the overall performance evaluation. These are all incorporated for selecting optimal subsets of metrics and designing the evaluation process in compliance with the demands/restrictions of various evaluation setups, thus minimizing intervention by domain experts. The applicability of the proposed methodology is demonstrated through the evaluation of a real-world test case.

@inproceedings{2008MitkasADMI,
author={Pericles A. Mitkas},
title={Training Intelligent Agents and Evaluating Their Performance},
booktitle={International Workshop on Agents and Data Mining Interaction (ADMI)},
pages={336--339},
publisher={IEEE Computer Society},
address={Sydney, Australia},
year={2008},
month={12},
date={2008-12-09},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Constructing-Optimal-Fuzzy-Metric-Trees-for-Agent-Performance-Evaluation.pdf},
keywords={fuzzy logic},
abstract={The field of multi-agent systems has reached a significant degree of maturity with respect to frameworks, standards and infrastructures. Focus is now shifted to performance evaluation of real-world applications, in order to quantify the practical benefits and drawbacks of agent systems. Our approach extends current work on generic evaluation methodologies for agents by employing fuzzy weighted trees for organizing evaluation-specific concepts/metrics and linguistic terms to intuitively represent and aggregate measurement information. Furthermore, we introduce meta-metrics that measure the validity and complexity of the contribution of each metric in the overall performance evaluation. These are all incorporated for selecting optimal subsets of metrics and designing the evaluation process in compliance with the demands/restrictions of various evaluation setups, thus minimizing intervention by domain experts. The applicability of the proposed methodology is demonstrated through the evaluation of a real-world test case.}
}

Pericles A. Mitkas, Christos Maramis, Anastasios N. Delopoulos, Andreas Symeonidis, Sotiris Diplaris, Manolis Falelakis, Fotis E. Psomopoulos, Alexandros Batzios, Nikolaos Maglaveras, Irini Lekka, Vasilis Koutkias, Theodoros Agorastos, T. Mikos and A. Tatsis
"ASSIST: Employing Inference and Semantic Technologies to Facilitate Association Studies on Cervical Cancer"
6th European Symposium on Biomedical Engineering, Chania, Greece, 2008 Jun

Despite the proved close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight on the origin of complex diseases. Nevertheless, association studies are most of the times inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer providing larger high quality datasets, via a software system that virtually unifies multiple heterogeneous medical records, located in various sites. Furthermore, the system is being designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for the semantic modelling and fuzzy inferencing on medical knowledge aiming at meaningful data unification: (i) The ASSIST core ontology (being the first ontology ever modelling cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypotheses testing and association studies, and at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to corresponding entities as defined in the knowledge model of ASSIST. These patient data are generated by an advanced anonymisation tool also developed within the context of the project. 
(2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST by offering query expression as well as patient data and statistical results visualisation to the ASSIST end-users. We also have to point out that the system is easily extendable virtually to any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover he can define cases and controls, select records adjusting their validity and use the most popular statistical tools for drawing conclusions. The logical unification of medical records of participating sites, including clinical and genetic data, to a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.

@inproceedings{2008MitkasEsbmeAssist,
author={Pericles A. Mitkas and Christos Maramis and Anastasios N. Delopoulos and Andreas Symeonidis and Sotiris Diplaris and Manolis Falelakis and Fotis E. Psomopoulos and Alexandros Batzios and Nikolaos Maglaveras and Irini Lekka and Vasilis Koutkias and Theodoros Agorastos and T. Mikos and A. Tatsis},
title={ASSIST: Employing Inference and Semantic Technologies to Facilitate Association Studies on Cervical Cancer},
booktitle={6th European Symposium on Biomedical Engineering},
address={Chania, Greece},
year={2008},
month={06},
date={2008-06-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/ASSIST-EMPLOYING-INFERENCE-AND-SEMANTIC-TECHNOLOGIES-TO-FACILITATE-ASSOCIATION-STUDIES-ON-CERVICAL-CANCER-.pdf},
keywords={cervical cancer},
abstract={Despite the proved close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight on the origin of complex diseases. Nevertheless, association studies are most of the times inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer providing larger high quality datasets, via a software system that virtually unifies multiple heterogeneous medical records, located in various sites. Furthermore, the system is being designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for the semantic modelling and fuzzy inferencing on medical knowledge aiming at meaningful data unification: (i) The ASSIST core ontology (being the first ontology ever modelling cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypotheses testing and association studies, and at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to corresponding entities as defined in the knowledge model of ASSIST. 
These patient data are generated by an advanced anonymisation tool also developed within the context of the project. (2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST by offering query expression as well as patient data and statistical results visualisation to the ASSIST end-users. We also have to point out that the system is easily extendable virtually to any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover he can define cases and controls, select records adjusting their validity and use the most popular statistical tools for drawing conclusions. The logical unification of medical records of participating sites, including clinical and genetic data, to a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.}
}

Ioanna K. Mprouza, Fotis E. Psomopoulos and Pericles A. Mitkas
"AMoS: Agent-based Molecular Simulations"
Student EUREKA 2008: 2nd Panhellenic Scientific Student Conference, pp. 175-186, Samos, Greece, 2008 Aug

Molecular dynamics (MD) is a form of computer simulation wherein atoms and molecules are allowed to interact for a period of time under known laws of physics, giving a view of the motion of the atoms. Usually the number of particles involved in a simulation is so large, that the properties of the system in question are virtually impossible to compute analytically. MD circumvents this problem by employing numerical approaches. Utilizing theories and concepts from mathematics, physics and chemistry and employing algorithms from computer science and information theory, MD is a clear example of a multidisciplinary method. In this paper a new framework for MD simulations is presented, which utilizes software agents as particle representations and an empirical potential function as the means of interaction. The framework is applied on protein structural data (PDB files), using an implicit solvent environment and a time step of 5 femto-seconds (5×10⁻¹⁵ sec). The goal of the simulation is to provide another view to the study of emergent behaviours and trends in the movement of the agent-particles in the protein complex. This information can then be used to construct an abstract model of the rules that govern the motion of the particles.

@inproceedings{2008MprouzaEURECA,
author={Ioanna K. Mprouza and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={AMoS: Agent-based Molecular Simulations},
booktitle={Student EUREKA 2008: 2nd Panhellenic Scientific Student Conference},
pages={175-186},
address={Samos, Greece},
year={2008},
month={08},
date={2008-08-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/AMoS-Agent-based-Molecular-Simulations.pdf},
keywords={Force Field Equations;Molecular Dynamics;Protein Data Bank;Protein Prediction Structure;Simulation},
abstract={Molecular dynamics (MD) is a form of computer simulation wherein atoms and molecules are allowed to interact for a period of time under known laws of physics, giving a view of the motion of the atoms. Usually the number of particles involved in a simulation is so large, that the properties of the system in question are virtually impossible to compute analytically. MD circumvents this problem by employing numerical approaches. Utilizing theories and concepts from mathematics, physics and chemistry and employing algorithms from computer science and information theory, MD is a clear example of a multidisciplinary method. In this paper a new framework for MD simulations is presented, which utilizes software agents as particle representations and an empirical potential function as the means of interaction. The framework is applied on protein structural data (PDB files), using an implicit solvent environment and a time step of 5 femto-seconds (5×10−15 sec). The goal of the simulation is to provide another view to the study of emergent behaviours and trends in the movement of the agent-particles in the protein complex. This information can then be used to construct an abstract model of the rules that govern the motion of the particles.}
}

Fotis E. Psomopoulos and Pericles A. Mitkas
"Sizing Up: Bioinformatics in a Grid Context"
3rd Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB, pp. 558-561, IEEE Computer Society, Thessaloniki, Greece, 2008 Oct

A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher-throughput computing by taking advantage of many computers geographically distributed and connected by a network. Bioinformatics applications stand to gain in such an environment, both in terms of the computational resources available and in reliability and efficiency. There are several approaches in the literature which present the use of Grid resources in bioinformatics. Nevertheless, scientific progress is hindered by the fact that each researcher operates in relative isolation, regarding both datasets and efforts, since there is no universally accepted methodology for performing bioinformatics tasks on the Grid. Given the complexity of both the data and the algorithms involved in the majority of cases, a case study on protein classification utilizing the Grid infrastructure may be the first step towards a unifying methodology for bioinformatics in a Grid context.
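
The higher-throughput pattern described above, many independent tasks spread over distributed resources, can be sketched locally with a worker pool standing in for Grid nodes. The scoring function below is hypothetical (a toy hydrophobicity fraction) and is not the protein-classification pipeline of the paper.

```python
# Sketch: farming independent per-sequence tasks out to a pool of workers,
# a local stand-in for jobs distributed across Grid nodes.
from concurrent.futures import ThreadPoolExecutor

HYDROPHOBIC = set("AVILMFWC")

def classify_protein(sequence):
    # Hypothetical per-sequence task; a real Grid job would run a
    # classifier or an alignment tool here.
    return sum(1 for aa in sequence if aa in HYDROPHOBIC) / len(sequence)

def classify_all(sequences, workers=4):
    # Each task is independent, so throughput scales with the worker count.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(classify_protein, sequences))

print(classify_all(["MVLSPADKTNVKAAW", "GGGSSS"]))
```

On a real Grid the pool would be replaced by a job-submission middleware, but the decomposition into independent tasks is the same.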

@inproceedings{2008PsomopoulosHSCBB,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Sizing Up: Bioinformatics in a Grid Context},
booktitle={3rd Conference of the Hellenic Society For Computational Biology and Bioinformatics - HSCBB},
pages={558-561},
publisher={IEEE Computer Society},
address={Thessaloniki, Greece},
year={2008},
month={10},
date={2008-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Sizing-Up-Bioinformatics-in-a-Grid-Context.pdf},
keywords={Bioinformatics in Grid Context},
abstract={A Grid environment can be viewed as a virtual computing architecture that provides the ability to perform higher-throughput computing by taking advantage of many computers geographically distributed and connected by a network. Bioinformatics applications stand to gain in such an environment, both in terms of the computational resources available and in reliability and efficiency. There are several approaches in the literature which present the use of Grid resources in bioinformatics. Nevertheless, scientific progress is hindered by the fact that each researcher operates in relative isolation, regarding both datasets and efforts, since there is no universally accepted methodology for performing bioinformatics tasks on the Grid. Given the complexity of both the data and the algorithms involved in the majority of cases, a case study on protein classification utilizing the Grid infrastructure may be the first step towards a unifying methodology for bioinformatics in a Grid context.}
}

Fotis E. Psomopoulos, Pericles A. Mitkas, Christos S. Krinas and Ioannis N. Demetropoulos
"G-MolKnot: A grid enabled systematic algorithm to produce open molecular knots"
1st HellasGrid User Forum, pp. 327-362, Springer US, Athens, Greece, 2008 Jan

Multi-agent systems (MAS) have grown quite popular in a wide spectrum of applications where argumentation, communication, scaling and adaptability are requested. And though the need for well-established engineering approaches for building and evaluating such intelligent systems has emerged, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability. Even existing well-tested evaluation methodologies applied in traditional software engineering prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. The following chapter aims to present such a unified and integrated methodology for a specific category of MAS. It takes all constraints and issues into account and denotes the way knowledge extracted with the use of data mining (DM) techniques can be used for the formulation initially, and the improvement, in the long run, of agent reasoning and MAS performance. The coupling of DM and Agent Technology (AT) principles, proposed within the context of this chapter, is therefore expected to provide the reader with an efficient gateway for developing and evaluating highly reconfigurable software approaches that incorporate domain knowledge and provide sophisticated decision-making capabilities. The main objectives of this chapter can be summarized as follows: a) introduce Agent Technology (AT) as a successful paradigm for building Data Mining (DM)-enriched applications, b) provide a methodology for (re)evaluating the performance of such DM-enriched Multi-Agent Systems (MAS), c) introduce Agent Academy II, an Agent-Oriented Software Engineering framework for building MAS that incorporate knowledge models extracted by the use of (classical and novel) DM techniques and d) denote the benefits of the proposed approach through a real-world demonstrator.
This chapter provides a link between DM and AT and explains how these technologies can efficiently cooperate with each other. The exploitation of useful knowledge extracted by the use of DM may considerably improve agent infrastructures, while also increasing reusability and minimizing customization costs. The synergy between DM and AT is ultimately expected to provide MAS with higher levels of autonomy, adaptability and accuracy and, hence, intelligence.

@inproceedings{2008PsomopoulosHUF,
author={Fotis E. Psomopoulos and Pericles A. Mitkas and Christos S. Krinas and Ioannis N. Demetropoulos},
title={G-MolKnot: A grid enabled systematic algorithm to produce open molecular knots},
booktitle={1st HellasGrid User Forum},
pages={327-362},
publisher={Springer US},
address={Athens, Greece},
year={2008},
month={01},
date={2008-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/G-MolKnot-A-grid-enabled-systematic-algorithm-to-produce-open-molecular-knots-.pdf},
keywords={open molecular knots},
abstract={Multi-agent systems (MAS) have grown quite popular in a wide spectrum of applications where argumentation, communication, scaling and adaptability are requested. And though the need for well-established engineering approaches for building and evaluating such intelligent systems has emerged, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability. Even existing well-tested evaluation methodologies applied in traditional software engineering prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. The following chapter aims to present such a unified and integrated methodology for a specific category of MAS. It takes all constraints and issues into account and denotes the way knowledge extracted with the use of data mining (DM) techniques can be used for the formulation initially, and the improvement, in the long run, of agent reasoning and MAS performance. The coupling of DM and Agent Technology (AT) principles, proposed within the context of this chapter, is therefore expected to provide the reader with an efficient gateway for developing and evaluating highly reconfigurable software approaches that incorporate domain knowledge and provide sophisticated decision-making capabilities. The main objectives of this chapter can be summarized as follows: a) introduce Agent Technology (AT) as a successful paradigm for building Data Mining (DM)-enriched applications, b) provide a methodology for (re)evaluating the performance of such DM-enriched Multi-Agent Systems (MAS), c) introduce Agent Academy II, an Agent-Oriented Software Engineering framework for building MAS that incorporate knowledge models extracted by the use of (classical and novel) DM techniques and d) denote the benefits of the proposed approach through a real-world demonstrator.
This chapter provides a link between DM and AT and explains how these technologies can efficiently cooperate with each other. The exploitation of useful knowledge extracted by the use of DM may considerably improve agent infrastructures, while also increasing reusability and minimizing customization costs. The synergy between DM and AT is ultimately expected to provide MAS with higher levels of autonomy, adaptability and accuracy and, hence, intelligence.}
}

Fani A. Tzima and Pericles A. Mitkas
"ZCS Revisited: Zeroth-level Classifier Systems for Data Mining"
2008 IEEE International Conference on Data Mining Workshops, pp. 700--709, IEEE Computer Society, Washington, DC, 2008 Dec

Learning classifier systems (LCS) are machine learning systems designed to work for both multi-step and single-step decision tasks. The latter case presents an interesting, though not widely studied, challenge for such algorithms, especially when they are applied to real-world data mining problems. The present investigation departs from the popular approach of applying accuracy-based LCS to data mining problems and aims to uncover the potential of strength-based LCS in such tasks. In this direction, ZCS-DM, a Zeroth-level Classifier System for data mining, is applied to a series of real-world classification problems and its performance is compared to that of other state-of-the-art machine learning techniques (C4.5, HIDER and XCS). Results are encouraging, since with only a modest parameter exploration phase, ZCS-DM manages to outperform its rival algorithms in eleven out of the twelve benchmark datasets used in this study. We conclude this work by identifying future research directions.
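
The strength-based credit assignment that separates ZCS from accuracy-based systems such as XCS can be sketched for a single-step task. This is a deliberately stripped-down illustration (no genetic algorithm, no covering, no fitness sharing), not the ZCS-DM implementation evaluated in the paper; all names are ours.

```python
# Minimal strength-based update for a single-step classifier system.
# Each rule has a ternary condition ('#' = wildcard), an action, and a
# strength. Reward moves the strength of every rule that matched the
# state and advocated the chosen action toward the payoff (rate beta).

def matches(condition, state):
    return all(c in ("#", s) for c, s in zip(condition, state))

def update(rules, state, action_taken, reward, beta=0.2):
    for rule in rules:
        if matches(rule["cond"], state) and rule["act"] == action_taken:
            rule["strength"] += beta * (reward - rule["strength"])

rules = [
    {"cond": "1#", "act": 1, "strength": 10.0},
    {"cond": "0#", "act": 0, "strength": 10.0},
]
for _ in range(50):                 # repeated correct decisions in state "10"
    update(rules, "10", 1, reward=100.0)
# The strength of the matching rule converges toward the payoff of 100,
# while non-matching rules are untouched.
```

In a full ZCS, action selection would be strength-proportionate over the match set and a genetic algorithm would evolve the rule population; the exponential moving-average update above is the core of the strength-based scheme.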

@inproceedings{2008TzimaICDMW,
author={Fani A. Tzima and Pericles A. Mitkas},
title={ZCS Revisited: Zeroth-level Classifier Systems for Data Mining},
booktitle={2008 IEEE International Conference on Data Mining Workshops},
pages={700--709},
publisher={IEEE Computer Society},
address={Washington, DC},
year={2008},
month={12},
date={2008-12-15},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/ZCS-Revisited-Zeroth-level-Classifier-Systems-for-Data-Mining.pdf},
keywords={Learning Classifier System;Zeroth-level Classifier System (ZCS)},
abstract={Learning classifier systems (LCS) are machine learning systems designed to work for both multi-step and single-step decision tasks. The latter case presents an interesting, though not widely studied, challenge for such algorithms, especially when they are applied to real-world data mining problems. The present investigation departs from the popular approach of applying accuracy-based LCS to data mining problems and aims to uncover the potential of strength-based LCS in such tasks. In this direction, ZCS-DM, a Zeroth-level Classifier System for data mining, is applied to a series of real-world classification problems and its performance is compared to that of other state-of-the-art machine learning techniques (C4.5, HIDER and XCS). Results are encouraging, since with only a modest parameter exploration phase, ZCS-DM manages to outperform its rival algorithms in eleven out of the twelve benchmark datasets used in this study. We conclude this work by identifying future research directions.}
}

Theodoros Agorastos, Pericles A. Mitkas, Manolis Falelakis, Fotis E. Psomopoulos, Anastasios N. Delopoulos, Andreas Symeonidis, Sotiris Diplaris, Christos Maramis, Alexandros Batzios, Irini Lekka, Vasilis Koutkias, Themistoklis Mikos, A. Tatsis and Nikolaos Maglaveras
"Large Scale Association Studies Using Unified Data for Cervical Cancer and beyond: The ASSIST Project"
World Cancer Congress, Geneva, Switzerland, 2008 Aug

Despite the proven close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight into the origin of complex diseases. Nevertheless, association studies are most of the time inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer by providing larger, high-quality datasets, via a software system that virtually unifies multiple heterogeneous medical records, located in various sites. Furthermore, the system is being designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for the semantic modelling and fuzzy inferencing on medical knowledge aiming at meaningful data unification: (i) The ASSIST core ontology (being the first ontology ever modelling cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypotheses testing and association studies, and at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to corresponding entities as defined in the knowledge model of ASSIST. These patient data are generated by an advanced anonymisation tool also developed within the context of the project. 
(2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST by offering query expression as well as patient data and statistical results visualisation to the ASSIST end-users. We also have to point out that the system is easily extendable virtually to any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover he can define cases and controls, select records adjusting their validity and use the most popular statistical tools for drawing conclusions. The logical unification of medical records of participating sites, including clinical and genetic data, to a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.

@inproceedings{WCCAssist,
author={Theodoros Agorastos and Pericles A. Mitkas and Manolis Falelakis and Fotis E. Psomopoulos and Anastasios N. Delopoulos and Andreas Symeonidis and Sotiris Diplaris and Christos Maramis and Alexandros Batzios and Irini Lekka and Vasilis Koutkias and Themistoklis Mikos and A. Tatsis and Nikolaos Maglaveras},
title={Large Scale Association Studies Using Unified Data for Cervical Cancer and beyond: The ASSIST Project},
booktitle={World Cancer Congress},
address={Geneva, Switzerland},
year={2008},
month={08},
date={2008-08-01},
url={http://issel.ee.auth.gr/wp-content/uploads/wcc2008.pdf},
keywords={Unified Data for Cervical Cancer},
abstract={Despite the proven close connection of cervical cancer with the human papillomavirus (HPV), intensive ongoing research investigates the role of specific genetic and environmental factors in determining HPV persistence and subsequent progression of the disease. To this end, genetic association studies constitute a significant scientific approach that may lead to a more comprehensive insight into the origin of complex diseases. Nevertheless, association studies are most of the time inconclusive, since the datasets employed are small, usually incomplete and of poor quality. The main goal of ASSIST is to aid research in the field of cervical cancer by providing larger, high-quality datasets, via a software system that virtually unifies multiple heterogeneous medical records, located in various sites. Furthermore, the system is being designed in a generic manner, with provision for future extensions to include other types of cancer or even different medical fields. Within the context of ASSIST, innovative techniques have been elaborated for the semantic modelling and fuzzy inferencing on medical knowledge aiming at meaningful data unification: (i) The ASSIST core ontology (being the first ontology ever modelling cervical cancer) permits semantically equivalent but differently coded data to be mapped to a common language. (ii) The ASSIST inference engine maps medical entities to syntactic values that are understood by legacy medical systems, supporting the processes of hypotheses testing and association studies, and at the same time calculating the severity index of each patient record. These modules constitute the ASSIST Core and are accompanied by two other important subsystems: (1) The Interfacing to Medical Archives subsystem maps the information contained in each legacy medical archive to corresponding entities as defined in the knowledge model of ASSIST. 
These patient data are generated by an advanced anonymisation tool also developed within the context of the project. (2) The User Interface enables transparent and advanced access to the data repositories incorporated in ASSIST by offering query expression as well as patient data and statistical results visualisation to the ASSIST end-users. We also have to point out that the system is easily extendable virtually to any medical domain, as the core ontology was designed with this in mind and all subsystems are ontology-aware i.e., adaptable to any ontology changes/additions. Using ASSIST, a medical researcher can have seamless access to medical records of participating sites and, through a particularly handy computing environment, collect data records satisfying his criteria. Moreover he can define cases and controls, select records adjusting their validity and use the most popular statistical tools for drawing conclusions. The logical unification of medical records of participating sites, including clinical and genetic data, to a common knowledge base is expected to increase the effectiveness of research in the field of cervical cancer as it permits the creation of on-demand study groups as well as the recycling of data used in previous studies.}
}

2007

Journal Articles

Pericles A. Mitkas, Andreas L. Symeonidis, Dionisis Kehagias and Ioannis N. Athanasiadis
"Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering"
International Journal of Product Lifecycle Management, 2, (2), pp. 1097-1111, 2007 Jan

Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents

@article{2007MitkasIJPLM,
author={Pericles A. Mitkas and Andreas L. Symeonidis and Dionisis Kehagias and Ioannis N. Athanasiadis},
title={Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering},
journal={International Journal of Product Lifecycle Management},
volume={2},
number={2},
pages={1097-1111},
year={2007},
month={01},
date={2007-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Application-of-Data-Mining-and-Intelligent-Agent-Technologies-to-Concurrent-Engineering.pdf},
keywords={multi-agent systems;MAS},
abstract={Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents}
}

Andreas L. Symeonidis, Kyriakos C. Chatzidimitriou, Ioannis N. Athanasiadis and Pericles A. Mitkas
"Data mining for agent reasoning: A synergy for training intelligent agents"
Engineering Applications of Artificial Intelligence, 20, (8), pp. 1097-1111, 2007 Dec

The task-oriented nature of data mining (DM) has already been dealt with successfully through the employment of intelligent agent systems that distribute tasks, collaborate and synchronize in order to reach their ultimate goal, the extraction of knowledge. A number of sophisticated multi-agent systems (MAS) that perform DM have been developed, proving that agent technology can indeed be used to solve DM problems. Looking in the opposite direction, though, knowledge extracted through DM has not yet been exploited in MASs. The inductive nature of DM imposes logic limitations and hinders the application of the extracted knowledge to such deductive systems. This problem can be overcome, however, when certain conditions are satisfied a priori. In this paper, we present an approach that takes the relevant limitations and considerations into account and provides a gateway for the way DM techniques can be employed in order to augment agent intelligence. This work demonstrates how the extracted knowledge can be used for the formulation initially, and the improvement, in the long run, of agent reasoning.
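
The direction the abstract argues for, embedding DM-extracted knowledge into a deductive agent, can be sketched with a toy majority-rule inducer standing in for a real data-mining algorithm. All names here are illustrative, not the paper's API.

```python
# Sketch: knowledge extracted offline by a (toy) data-mining step is
# embedded as the decision model of an agent.
from collections import Counter, defaultdict

def mine_rules(training_data):
    """Toy inducer: for each observed state, keep the majority label.
    A real system would run C4.5, association-rule mining, etc. here."""
    by_state = defaultdict(list)
    for state, label in training_data:
        by_state[state].append(label)
    return {s: Counter(ls).most_common(1)[0][0] for s, ls in by_state.items()}

class Agent:
    """Deductive agent whose reasoning is the mined knowledge model."""
    def __init__(self, knowledge_model, default="wait"):
        self.model = knowledge_model
        self.default = default

    def decide(self, state):
        # Apply the extracted rule if one matches; otherwise fall back.
        return self.model.get(state, self.default)

data = [("low_stock", "reorder"), ("low_stock", "reorder"),
        ("low_stock", "wait"), ("full_stock", "wait")]
agent = Agent(mine_rules(data))
# agent.decide("low_stock") -> "reorder"
```

The fallback default illustrates the a-priori conditions the abstract mentions: inductively mined rules only cover observed states, so the deductive side needs a defined behavior elsewhere.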

@article{2007SymeonidisEAAI,
author={Andreas L. Symeonidis and Kyriakos C. Chatzidimitriou and Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={Data mining for agent reasoning: A synergy for training intelligent agents},
journal={Engineering Applications of Artificial Intelligence},
volume={20},
number={8},
pages={1097-1111},
year={2007},
month={12},
date={2007-12-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Data-mining-for-agent-reasoning-A-synergy-fortraining-intelligent-agents.pdf},
keywords={Agent Technology;Agent reasoning;Agent training;Knowledge model},
abstract={The task-oriented nature of data mining (DM) has already been dealt with successfully through the employment of intelligent agent systems that distribute tasks, collaborate and synchronize in order to reach their ultimate goal, the extraction of knowledge. A number of sophisticated multi-agent systems (MAS) that perform DM have been developed, proving that agent technology can indeed be used to solve DM problems. Looking in the opposite direction, though, knowledge extracted through DM has not yet been exploited in MASs. The inductive nature of DM imposes logic limitations and hinders the application of the extracted knowledge to such deductive systems. This problem can be overcome, however, when certain conditions are satisfied a priori. In this paper, we present an approach that takes the relevant limitations and considerations into account and provides a gateway for the way DM techniques can be employed in order to augment agent intelligence. This work demonstrates how the extracted knowledge can be used for the formulation initially, and the improvement, in the long run, of agent reasoning.}
}

Andreas L. Symeonidis, Ioannis N. Athanasiadis and Pericles A. Mitkas
"A retraining methodology for enhancing agent intelligence"
Knowledge-Based Systems, 20, (4), pp. 388-396, 2007 Jan

Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures, as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both design and control of multi-agent systems (MAS), as well as agent training. We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of data mining extracted knowledge. In this paper, we present the methodology and tools for agent retraining. Through experimental results with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead to the improvement in the long run of agent intelligence.
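
The retraining idea, automatically rebuilding an agent's decision model as new records accumulate so deployed behavior tracks the data without manual reconfiguration, can be sketched as follows. This is a schematic stand-in, not the Agent Academy implementation; every name is illustrative.

```python
# Sketch of agent retraining: the decision model is rebuilt whenever a
# batch of new observations has accumulated.
from collections import Counter

class RetrainableAgent:
    def __init__(self, batch_size=3):
        self.records = []          # (state, outcome) pairs seen so far
        self.model = {}            # state -> majority outcome
        self.batch_size = batch_size
        self.pending = 0           # observations since the last retraining

    def observe(self, state, outcome):
        self.records.append((state, outcome))
        self.pending += 1
        if self.pending >= self.batch_size:
            self.retrain()

    def retrain(self):
        # Rebuild the knowledge model from all records seen so far;
        # a real system would rerun its data-mining algorithm here.
        states = {s for s, _ in self.records}
        self.model = {
            s: Counter(o for st, o in self.records if st == s).most_common(1)[0][0]
            for s in states
        }
        self.pending = 0

    def decide(self, state, default="hold"):
        return self.model.get(state, default)

agent = RetrainableAgent()
for outcome in ["buy", "buy", "sell"]:
    agent.observe("price_drop", outcome)
# After the first batch triggers retraining, the agent follows the
# majority outcome observed so far; later batches can overturn it.
```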

@article{2007SymeonidisKBS,
author={Andreas L. Symeonidis and Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={A retraining methodology for enhancing agent intelligence},
journal={Knowledge-Based Systems},
volume={20},
number={4},
pages={388-396},
year={2007},
month={01},
date={2007-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-retraining-methodology-for-enhancing-agent-intelligence.pdf},
keywords={business data processing;logic programming},
abstract={Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures, as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both design and control of multi-agent systems (MAS), as well as agent training. We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of data mining extracted knowledge. In this paper, we present the methodology and tools for agent retraining. Through experimental results with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead to the improvement in the long run of agent intelligence.}
}

Andreas L. Symeonidis, Kyriakos C. Chatzidimitriou, Dionysios Kehagias and Pericles A. Mitkas
"A Multi-agent Infrastructure for Enhancing ERP system Intelligence"
Scalable Computing: Practice and Experience, 8, (1), pp. 101-114, 2007 Jan

Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. In this paper we present an alternative approach for incorporating adaptive business intelligence into the company

@article{2007SymeonidisSCPE,
author={Andreas L. Symeonidis and Kyriakos C. Chatzidimitriou and Dionysios Kehagias and Pericles A. Mitkas},
title={A Multi-agent Infrastructure for Enhancing ERP system Intelligence},
journal={Scalable Computing: Practice and Experience},
volume={8},
number={1},
pages={101-114},
year={2007},
month={01},
date={2007-01-01},
url={http://www.scpe.org/index.php/scpe/article/viewFile/401/75},
keywords={Adaptive Decision Making;ERP systems;Mutli-Agent Systems;Soft computing},
abstract={Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. In this paper we present an alternative approach for incorporating adaptive business intelligence into the company}
}

2007

Incollection

Pericles A. Mitkas and Paraskevi Nikolaidou
"Agents and Multi-Agent Systems in Supply Chain Management: An Overview"
Agents and Web Services in Virtual Enterprises, pp. 223-243, IGI Global, 2007 Jan

This chapter discusses the current state-of-the-art of agents and multi-agent systems (MAS) in supply chain management (SCM). Following a general description of SCM and the challenges it is currently faced with, we present MAS as a possible solution to these challenges. We argue that an application involving multiple autonomous actors, such as SCM, can best be served by a software paradigm that relies on multiple independent software entities, like agents. We then review the most significant current trends in this area, focusing on potential areas of further research. Furthermore, the authors believe that a clearer view of the current state-of-the-art and future extensions will help researchers improve existing standards and solve remaining issues, eventually helping MAS-based SCM systems to replace legacy ERP software, while also giving a boost to both areas of research separately.

@incollection{2007NikolaidouAWSVE,
author={Pericles A. Mitkas and Paraskevi Nikolaidou},
title={Agents and Multi-Agent Systems in Supply Chain Management: An Overview},
booktitle={Agents and Web Services in Virtual Enterprises},
pages={223-243},
publisher={IGI Global},
year={2007},
month={01},
date={2007-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Agent-based-modelling-and-simulation-in-the-irrigation-management-sector.pdf},
keywords={agents;multi-agent systems;supply chain management},
abstract={This chapter discusses the current state-of-the-art of agents and multi-agent systems (MAS) in supply chain management (SCM). Following a general description of SCM and the challenges it is currently faced with, we present MAS as a possible solution to these challenges. We argue that an application involving multiple autonomous actors, such as SCM, can best be served by a software paradigm that relies on multiple independent software entities, like agents. We then review the most significant current trends in this area, focusing on potential areas of further research. Furthermore, the authors believe that a clearer view of the current state-of-the-art and future extensions will help researchers improve existing standards and solve remaining issues, eventually helping MAS-based SCM systems to replace legacy ERP software, while also giving a boost to both areas of research separately.}
}

Fani A. Tzima and Pericles A. Mitkas
"Web services technology: an overview"
Agents and Web Services in Virtual Enterprises, pp. 25-44, IGI Global, 2007 Jan

This chapter examines the concept of Service-Oriented Architecture (SOA) in conjunction with the Web Services technology as an implementation of the former's design principles. Following a brief introduction of SOA and its advantages, a high-level overview of the structure and composition of the Web Services platform is provided. This overview covers the core Web services specifications as well as features of the extended architecture stack, which together form a powerful and robust foundation for building distributed systems. The chapter concludes with a discussion of the scope of applicability of SOA and Web services. The overall goal of this chapter is to portray the key assets of the presented technologies and evaluate them as tools for handling adaptability, portability, and interoperability issues that arise in modern business environments.

@incollection{2007TzimaAWSVE,
author={Fani A. Tzima and Pericles A. Mitkas},
title={Web services technology: an overview},
booktitle={Agents and Web Services in Virtual Enterprises},
pages={25-44},
publisher={IGI Global},
year={2007},
month={01},
date={2007-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Web-services-technology-an-overview.pdf},
keywords={Service-Oriented Architecture;SOA;Web Services},
abstract={This chapter examines the concept of Service-Oriented Architecture (SOA) in conjunction with the Web Services technology as an implementation of the former's design principles. Following a brief introduction of SOA and its advantages, a high-level overview of the structure and composition of the Web Services platform is provided. This overview covers the core Web services specifications as well as features of the extended architecture stack, which together form a powerful and robust foundation for building distributed systems. The chapter concludes with a discussion of the scope of applicability of SOA and Web services. The overall goal of this chapter is to portray the key assets of the presented technologies and evaluate them as tools for handling adaptability, portability, and interoperability issues that arise in modern business environments.}
}

2007

Inproceedings Papers

Chrysa Collyda, Sotiris Diplaris, Pericles A. Mitkas, Nicos Maglaveras and Costas Pappas
"Profile Fuzzy Hidden Markov Models for Phylogenetic Analysis and Protein Classification"
5th Annual Rocky Mountain Bioinformatics Conference, pp. 327-362, Springer US, Aspen/Snowmass, CO, USA, 2007 Nov

Multi-agent systems (MAS) have grown quite popular in a wide spectrum of applications where argumentation, communication, scaling and adaptability are requested. And though the need for well-established engineering approaches for building and evaluating such intelligent systems has emerged, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability. Even existing well-tested evaluation methodologies applied in traditional software engineering prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. The following chapter aims to present such a unified and integrated methodology for a specific category of MAS. It takes all constraints and issues into account and denotes the way knowledge extracted with the use of Data Mining (DM) techniques can be used for the formulation initially, and the improvement, in the long run, of agent reasoning and MAS performance. The coupling of DM and Agent Technology (AT) principles, proposed within the context of this chapter, is therefore expected to provide the reader with an efficient gateway for developing and evaluating highly reconfigurable software approaches that incorporate domain knowledge and provide sophisticated Decision Making capabilities. The main objectives of this chapter can be summarized as follows: a) introduce Agent Technology (AT) as a successful paradigm for building Data Mining (DM)-enriched applications, b) provide a methodology for (re)evaluating the performance of such DM-enriched Multi-Agent Systems (MAS), c) introduce Agent Academy II, an Agent-Oriented Software Engineering framework for building MAS that incorporate knowledge models extracted by the use of (classical and novel) DM techniques, and d) denote the benefits of the proposed approach through a real-world demonstrator.
This chapter provides a link between DM and AT and explains how these technologies can efficiently cooperate with each other. The exploitation of useful knowledge extracted by the use of DM may considerably improve agent infrastructures, while also increasing reusability and minimizing customization costs. The synergy between DM and AT is ultimately expected to provide MAS with higher levels of autonomy, adaptability and accuracy and, hence, intelligence.
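The core idea summarized above, a knowledge model mined offline and then injected into an agent's reasoning, can be caricatured in a few lines. This is purely an illustrative sketch: the rule, class and parameter names below are invented for demonstration and are not the Agent Academy II API.

```python
# Illustrative only: a DM-extracted knowledge model (here, a hard-coded
# decision rule standing in for one induced from data) plugged into an
# agent's reasoning step at build time.
def mined_rule(stock, demand):
    """Stand-in for a classifier produced by a data mining step."""
    return "reorder" if stock < 0.5 * demand else "hold"

class SupplyAgent:
    def __init__(self, rule):
        self.rule = rule  # knowledge model injected into the agent

    def act(self, stock, demand):
        # Agent reasoning delegates the decision to the mined model.
        return self.rule(stock, demand)

agent = SupplyAgent(mined_rule)
print(agent.act(stock=40, demand=100))  # → reorder
```

Retraining the model and rebuilding the agent with the new rule is, in this toy view, what "improvement in the long run of agent reasoning" amounts to.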

@inproceedings{2007CollydaARMBC,
author={Chrysa Collyda and Sotiris Diplaris and Pericles A. Mitkas and Nicos Maglaveras and Costas Pappas},
title={Profile Fuzzy Hidden Markov Models for Phylogenetic Analysis and Protein Classification},
booktitle={5th Annual Rocky Mountain Bioinformatics Conference},
pages={327-362},
publisher={Springer US},
address={Aspen/Snowmass, CO, USA},
year={2007},
month={11},
date={2007-11-30},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/G-MolKnot-A-grid-enabled-systematic-algorithm-to-produce-open-molecular-knots-.pdf},
keywords={Fuzzy Hidden Markov Models},
abstract={Multi-agent systems (MAS) have grown quite popular in a wide spectrum of applications where argumentation, communication, scaling and adaptability are requested. And though the need for well-established engineering approaches for building and evaluating such intelligent systems has emerged, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability. Even existing well-tested evaluation methodologies applied in traditional software engineering prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. The following chapter aims to present such a unified and integrated methodology for a specific category of MAS. It takes all constraints and issues into account and denotes the way knowledge extracted with the use of Data Mining (DM) techniques can be used for the formulation initially, and the improvement, in the long run, of agent reasoning and MAS performance. The coupling of DM and Agent Technology (AT) principles, proposed within the context of this chapter, is therefore expected to provide the reader with an efficient gateway for developing and evaluating highly reconfigurable software approaches that incorporate domain knowledge and provide sophisticated Decision Making capabilities. The main objectives of this chapter can be summarized as follows: a) introduce Agent Technology (AT) as a successful paradigm for building Data Mining (DM)-enriched applications, b) provide a methodology for (re)evaluating the performance of such DM-enriched Multi-Agent Systems (MAS), c) introduce Agent Academy II, an Agent-Oriented Software Engineering framework for building MAS that incorporate knowledge models extracted by the use of (classical and novel) DM techniques, and d) denote the benefits of the proposed approach through a real-world demonstrator.
This chapter provides a link between DM and AT and explains how these technologies can efficiently cooperate with each other. The exploitation of useful knowledge extracted by the use of DM may considerably improve agent infrastructures, while also increasing reusability and minimizing customization costs. The synergy between DM and AT is ultimately expected to provide MAS with higher levels of autonomy, adaptability and accuracy and, hence, intelligence.}
}

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"Evaluating Knowledge Intensive Multi-Agent Systems"
Autonomous Intelligent Systems: Multi-Agents and Data Mining (AIS-ADM 2007), pp. 74-87, Springer Berlin / Heidelberg, St. Petersburg, Russia, 2007 Jun

As modern applications tend to stretch between large, evergrowing datasets and increasing demand for meaningful content at the user end, more elaborate and sophisticated knowledge extraction technologies are needed. Towards this direction, the inherently contradicting technologies of deductive software agents and inductive data mining have been integrated, in order to address knowledge intensive problems. However, there exists no generalized evaluation methodology for assessing the efficiency of such applications. On the one hand, existing data mining evaluation methods focus only on algorithmic precision, ignoring overall system performance issues. On the other hand, existing systems evaluation techniques are insufficient, as the emergent intelligent behavior of agents introduces unpredictable factors of performance. In this paper, we present a generalized methodology for performance evaluation of intelligent agents that employ knowledge models produced through data mining. The proposed methodology consists of concise steps for selecting appropriate metrics, defining measurement methodologies and aggregating the measured performance indicators into thorough system characterizations. The paper concludes with a demonstration of the proposed methodology on a real-world application in the Supply Chain Management domain.

@inproceedings{2007DimouAIS,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Evaluating Knowledge Intensive Multi-Agent Systems},
booktitle={Autonomous Intelligent Systems: Multi-Agents and Data Mining (AIS-ADM 2007)},
pages={74-87},
publisher={Springer Berlin / Heidelberg},
address={St. Petersburg, Russia},
year={2007},
month={06},
date={2007-06-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Evaluating-Knowledge-Intensive-Multi-agent-Systems.pdf},
keywords={air pollution;decision making;environmental science computing},
abstract={As modern applications tend to stretch between large, evergrowing datasets and increasing demand for meaningful content at the user end, more elaborate and sophisticated knowledge extraction technologies are needed. Towards this direction, the inherently contradicting technologies of deductive software agents and inductive data mining have been integrated, in order to address knowledge intensive problems. However, there exists no generalized evaluation methodology for assessing the efficiency of such applications. On the one hand, existing data mining evaluation methods focus only on algorithmic precision, ignoring overall system performance issues. On the other hand, existing systems evaluation techniques are insufficient, as the emergent intelligent behavior of agents introduces unpredictable factors of performance. In this paper, we present a generalized methodology for performance evaluation of intelligent agents that employ knowledge models produced through data mining. The proposed methodology consists of concise steps for selecting appropriate metrics, defining measurement methodologies and aggregating the measured performance indicators into thorough system characterizations. The paper concludes with a demonstration of the proposed methodology on a real-world application in the Supply Chain Management domain.}
}
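The abstract above outlines a three-step pipeline: select appropriate metrics, measure them, and aggregate the performance indicators into one system characterization. As a rough illustration of the aggregation step only (the metric names, normalized scores and weights below are hypothetical, not taken from the paper), a weighted average over normalized scores might look like:

```python
# Minimal sketch, assuming metric scores are already normalized to [0, 1]
# and the evaluator has assigned relative weights to each metric.
def aggregate(scores, weights):
    """Weighted average of normalized metric scores."""
    total_w = sum(weights.values())
    return sum(scores[m] * w for m, w in weights.items()) / total_w

scores = {"accuracy": 0.92, "response_time": 0.70, "adaptability": 0.55}
weights = {"accuracy": 0.5, "response_time": 0.3, "adaptability": 0.2}
print(round(aggregate(scores, weights), 3))  # → 0.78
```

Real aggregation schemes may of course be nonlinear or hierarchical; the weighted average is only the simplest instance of the idea.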

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"Towards a Generic Methodology for Evaluating MAS Performance"
IEEE International Conference on Integration of Knowledge Intensive Multi-Agent Systems - KIMAS'07, pp. 174-179, Springer Berlin / Heidelberg, Waltham, MA, USA, 2007 Apr

As Agent Technology (AT) becomes a well-established engineering field of computing, the need for generalized, standardized methodologies for agent evaluation is imperative. Despite the plethora of available development tools and theories that researchers in agent computing have access to, there is a remarkable lack of general metrics, tools, benchmarks and experimental methods for formal validation and comparison of existing or newly developed systems. It is argued that AT has reached a certain degree of maturity, and it is therefore feasible to move from ad hoc, domain-specific evaluation methods to standardized, repeatable and easily verifiable procedures. In this paper, we outline a first attempt towards a generic evaluation methodology for MAS performance. Instead of following the research path towards defining more powerful mathematical description tools for determining intelligence and performance metrics, we adopt an engineering point of view to the problem of deploying a methodology that is both implementation and domain independent. The proposed methodology consists of a concise set of steps, novel theoretical representation tools and appropriate software tools that assist evaluators in selecting the appropriate metrics, undertaking measurement and aggregation techniques for the system at hand.

@inproceedings{2007DimouKIMAS,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Towards a Generic Methodology for Evaluating MAS Performance},
booktitle={IEEE International Conference on Integration of Knowledge Intensive Multi-Agent Systems - KIMAS'07},
pages={174--179},
publisher={Springer Berlin / Heidelberg},
address={Waltham, MA, USA},
year={2007},
month={04},
date={2007-04-29},
url={http://issel.ee.auth.gr/wp-content/uploads/Dimou-KIMAS-07.pdf},
keywords={agent evaluation},
abstract={As Agent Technology (AT) becomes a well-established engineering field of computing, the need for generalized, standardized methodologies for agent evaluation is imperative. Despite the plethora of available development tools and theories that researchers in agent computing have access to, there is a remarkable lack of general metrics, tools, benchmarks and experimental methods for formal validation and comparison of existing or newly developed systems. It is argued that AT has reached a certain degree of maturity, and it is therefore feasible to move from ad hoc, domain-specific evaluation methods to standardized, repeatable and easily verifiable procedures. In this paper, we outline a first attempt towards a generic evaluation methodology for MAS performance. Instead of following the research path towards defining more powerful mathematical description tools for determining intelligence and performance metrics, we adopt an engineering point of view to the problem of deploying a methodology that is both implementation and domain independent. The proposed methodology consists of a concise set of steps, novel theoretical representation tools and appropriate software tools that assist evaluators in selecting the appropriate metrics, undertaking measurement and aggregation techniques for the system at hand.}
}

Christos Dimou, Andreas L. Symeonidis and Pericles A. Mitkas
"An agent structure for evaluating micro-level MAS performance"
7th Workshop on Performance Metrics for Intelligent Systems - PerMIS-07, pp. 243--250, IEEE Computer Society, Gaithersburg, MD, 2007 Aug

Although the need for well-established engineering approaches in Intelligent Systems (IS) performance evaluation is pressing, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability, multi-disciplinary issues and the immaturity of the field of IS. Even existing well-tested evaluation methodologies applied in other domains, such as (traditional) software engineering, prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. In this paper, we present a generic methodology and associated tools for evaluating the performance of IS, by exploiting the software agent paradigm as a representative modeling concept for intelligent systems. Based on the assessment of observable behavior of agents or multi-agent systems, the proposed methodology provides a concise set of guidelines and representation tools for evaluators to use. The methodology comprises three main tasks, namely metrics selection, monitoring agent activities for appropriate measurements, and aggregation of the conducted measurements. Coupled to this methodology is the Evaluator Agent Framework, which aims at the automation of most of the provided steps of the methodology, by providing Graphical User Interfaces for metrics organization and results presentation, as well as a code-generating module that produces a skeleton of a monitoring agent. Once this agent is completed with domain-specific code, it is appended to the runtime of a multi-agent system and collects information from observable events and messages. Both the evaluation methodology and the automation framework are tested and demonstrated in Symbiosis, a MAS simulation environment for competing groups of autonomous entities.

@inproceedings{2007DimouPERMIS,
author={Christos Dimou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={An agent structure for evaluating micro-level MAS performance},
booktitle={7th Workshop on Performance Metrics for Intelligent Systems - PerMIS-07},
pages={243--250},
publisher={IEEE Computer Society},
address={Gaithersburg, MD},
year={2007},
month={08},
date={2007-08-28},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-agent-structure-for-evaluating-micro-level-MAS-performance.pdf},
keywords={automated evaluation;autonomous agents;performance evaluation methodology},
abstract={Although the need for well-established engineering approaches in Intelligent Systems (IS) performance evaluation is pressing, currently no widely accepted methodology exists, mainly due to lack of consensus on relevant definitions and scope of applicability, multi-disciplinary issues and the immaturity of the field of IS. Even existing well-tested evaluation methodologies applied in other domains, such as (traditional) software engineering, prove inadequate to address the unpredictable emerging factors of the behavior of intelligent components. In this paper, we present a generic methodology and associated tools for evaluating the performance of IS, by exploiting the software agent paradigm as a representative modeling concept for intelligent systems. Based on the assessment of observable behavior of agents or multi-agent systems, the proposed methodology provides a concise set of guidelines and representation tools for evaluators to use. The methodology comprises three main tasks, namely metrics selection, monitoring agent activities for appropriate measurements, and aggregation of the conducted measurements. Coupled to this methodology is the Evaluator Agent Framework, which aims at the automation of most of the provided steps of the methodology, by providing Graphical User Interfaces for metrics organization and results presentation, as well as a code-generating module that produces a skeleton of a monitoring agent. Once this agent is completed with domain-specific code, it is appended to the runtime of a multi-agent system and collects information from observable events and messages. Both the evaluation methodology and the automation framework are tested and demonstrated in Symbiosis, a MAS simulation environment for competing groups of autonomous entities.}
}
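The monitoring-agent idea described above, a generated skeleton appended to the MAS runtime that collects observable events and turns them into measurements, can be sketched in a few lines. This is a hedged, stdlib-only illustration; the class, method and metric names are invented for demonstration and do not come from the Evaluator Agent Framework:

```python
# Toy observer that records observable events per metric and reports a
# simple aggregate measurement (here, the mean) on request.
from collections import defaultdict

class MonitoringAgent:
    def __init__(self):
        self.events = defaultdict(list)

    def observe(self, metric, value):
        # Called whenever the runtime emits an observable event/message.
        self.events[metric].append(value)

    def measurement(self, metric):
        vals = self.events[metric]
        return sum(vals) / len(vals) if vals else None

mon = MonitoringAgent()
for latency in (12, 9, 15):
    mon.observe("msg_latency_ms", latency)
print(mon.measurement("msg_latency_ms"))  # → 12.0
```

In the paper's framework the domain-specific part, which events to hook and how, is exactly what the evaluator fills into the generated skeleton.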

Sotiris Diplaris, G. Papachristoudis and Pericles A. Mitkas
"SoFoCles: Feature Filtering for Microarray Classification Based on Gene Ontology"
Hellenic Bioinformatics and Medical Informatics Meeting, pp. 279--282, IEEE Computer Society, Athens, Greece, 2007 Oct

Semantic annotation and querying is currently applied in a number of versatile disciplines, proving the added value of such an approach and, consequently, the need for more elaborate – either case-specific or generic – tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, have been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, query results projection and restrictions creation.

@inproceedings{2007DiplarisHBMIM,
author={Sotiris Diplaris and G. Papachristoudis and Pericles A. Mitkas},
title={SoFoCles: Feature Filtering for Microarray Classification Based on Gene Ontology},
booktitle={Hellenic Bioinformatics and Medical Informatics Meeting},
pages={279--282},
publisher={IEEE Computer Society},
address={Athens, Greece},
year={2007},
month={10},
date={2007-10-04},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/SoFoCles-Feature-filtering-for-microarray-classification-based-on-Gene-Ontology.pdf},
keywords={art;inference mechanisms;ontologies (artificial intelligence);query processing},
abstract={Semantic annotation and querying is currently applied in a number of versatile disciplines, proving the added value of such an approach and, consequently, the need for more elaborate – either case-specific or generic – tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, have been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, query results projection and restrictions creation.}
}

Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas
"Modeling Gene Ontology Terms using Finite State Automata"
Hellenic Bioinformatics and Medical Informatics Meeting, pp. 279--282, IEEE Computer Society, Biomedical Research Foundation, Academy of Athens, Greece, 2007 Oct

Semantic annotation and querying is currently applied in a number of versatile disciplines, proving the added value of such an approach and, consequently, the need for more elaborate – either case-specific or generic – tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, have been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, query results projection and restrictions creation.

@inproceedings{2007GkekasBioacademy,
author={Christos N. Gkekas and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Modeling Gene Ontology Terms using Finite State Automata},
booktitle={Hellenic Bioinformatics and Medical Informatics Meeting},
pages={279--282},
publisher={IEEE Computer Society},
address={Biomedical Research Foundation, Academy of Athens, Greece},
year={2007},
month={10},
date={2007-10-01},
keywords={Modeling Gene Ontology},
abstract={Semantic annotation and querying is currently applied in a number of versatile disciplines, proving the added value of such an approach and, consequently, the need for more elaborate – either case-specific or generic – tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, have been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, query results projection and restrictions creation.}
}

Ioanna K. Mprouza, Fotis E. Psomopoulos and Pericles A. Mitkas
"Simulating molecular dynamics through intelligent software agents"
Hellenic Bioinformatics and Medical Informatics Meeting, pp. 279--282, IEEE Computer Society, Biomedical Research Foundation, Academy of Athens, Greece, 2007 Oct

Semantic annotation and querying is currently applied in a number of versatile disciplines, proving the added value of such an approach and, consequently, the need for more elaborate – either case-specific or generic – tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, have been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, query results projection and restrictions creation.

@inproceedings{2007MprouzaBioacademy,
author={Ioanna K. Mprouza and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={Simulating molecular dynamics through intelligent software agents},
booktitle={Hellenic Bioinformatics and Medical Informatics Meeting},
pages={279--282},
publisher={IEEE Computer Society},
address={Biomedical Research Foundation, Academy of Athens, Greece},
year={2007},
month={10},
date={2007-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Simulating-molecular-dynamics-through-intelligent-software-agents.pdf},
keywords={Modeling Gene Ontology},
abstract={Semantic annotation and querying is currently applied in a number of versatile disciplines, proving the added value of such an approach and, consequently, the need for more elaborate – either case-specific or generic – tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, have been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, query results projection and restrictions creation.}
}

P. Tsimpos, Sotiris Diplaris, Pericles A. Mitkas and Georgios Banos
"Mendelian Samples Mining and Cluster Monitoring for National Genetic Evaluations with AGELI"
Interbull Annual Meeting, pp. 73-77, Dublin, Ireland, 2007 Aug

We present an innovative approach for pre-processing, analysis, alarm issuing and presentation of national genetic evaluation data with AGELI, using Mendelian sampling mining and clustering techniques. AGELI (Eleftherohorinou et al., 2005) is a software platform that integrates the whole data mining procedure in order to produce a qualitative description of national genetic evaluation results, concerning three milk yield traits. Quality assurance constitutes a critical issue in the range of services provided by Interbull. Although the standard method appears sufficiently functional (Klei et al., 2002), during the last years there has been progress concerning an alternative validation method of genetic evaluation results using data mining (Banos et al., 2003; Diplaris et al., 2004), potentially leading to inference on data quality. This methodology was incorporated in AGELI in order to assess and assure data quality. The whole idea was to exploit decision trees and apply a goodness-of-fit test to individual tree nodes and an F-test to corresponding nodes from consecutive evaluation runs, aiming at discovering possible abnormalities in bull proof distributions. In a previous report (Banos et al., 2003) predictions led to associations, which were qualitatively compared to actual proofs, and existing discrepancies were confirmed using a data set with known errors. In this report we present AGELI's novel methods of performing data mining by using a series of decision tree and clustering algorithms. Different decision tree models can now be created in order to assess data quality by evaluating data with various criteria. To further assess data quality, a novel technique for cluster monitoring is implemented in AGELI. It is possible to form clusters of bulls and perform unsupervised monitoring on them over the entire period of genetic evaluation runs. Finally, analyses were conducted using bull Mendelian sampling over the whole dataset.

@inproceedings{2007TsimposIAM,
author={P. Tsimpos and Sotiris Diplaris and Pericles A. Mitkas and Georgios Banos},
title={Mendelian Samples Mining and Cluster Monitoring for National Genetic Evaluations with AGELI},
booktitle={Interbull Annual Meeting},
pages={73-77},
address={Dublin, Ireland},
year={2007},
month={08},
date={2007-08-23},
url={http://issel.ee.auth.gr/wp-content/uploads/Tsimpos.pdf},
keywords={AGELI;Cluster Monitoring;Mendelian Samples Mining},
abstract={We present an innovative approach for pre-processing, analysis, alarm issuing and presentation of national genetic evaluation data with AGELI, using Mendelian sampling mining and clustering techniques. AGELI (Eleftherohorinou et al., 2005) is a software platform that integrates the whole data mining procedure in order to produce a qualitative description of national genetic evaluation results, concerning three milk yield traits. Quality assurance constitutes a critical issue in the range of services provided by Interbull. Although the standard method appears sufficiently functional (Klei et al., 2002), during the last years there has been progress concerning an alternative validation method of genetic evaluation results using data mining (Banos et al., 2003; Diplaris et al., 2004), potentially leading to inference on data quality. This methodology was incorporated in AGELI in order to assess and assure data quality. The whole idea was to exploit decision trees and apply a goodness-of-fit test to individual tree nodes and an F-test to corresponding nodes from consecutive evaluation runs, aiming at discovering possible abnormalities in bull proof distributions. In a previous report (Banos et al., 2003) predictions led to associations, which were qualitatively compared to actual proofs, and existing discrepancies were confirmed using a data set with known errors. In this report we present AGELI's novel methods of performing data mining by using a series of decision tree and clustering algorithms. Different decision tree models can now be created in order to assess data quality by evaluating data with various criteria. To further assess data quality, a novel technique for cluster monitoring is implemented in AGELI. It is possible to form clusters of bulls and perform unsupervised monitoring on them over the entire period of genetic evaluation runs. Finally, analyses were conducted using bull Mendelian sampling over the whole dataset.}
}

Fani A. Tzima, Kostas D. Karatzas, Pericles A. Mitkas and Stavros Karathanasis
"Using data-mining techniques for PM10 forecasting in the metropolitan area of Thessaloniki, Greece"
IJCNN 2007 International Joint Conference on Neural Networks, pp. 2752--2757, Orlando, Florida, 2007 Aug

Knowledge extraction and acute forecasting are among the most challenging issues concerning the use of computational intelligence (CI) methods in real world applications. Both aspects are essential in cases where decision making is required, especially in domains directly related to the quality of life, like the quality of the atmospheric environment. In the present paper we emphasize short-term Air Quality (AQ) forecasting as a key constituent of every AQ management system, and we apply various CI methods and tools for assessing PM10 concentration values. We report our experimental strategy and preliminary results that reveal interesting interrelations between AQ and various city operations, while performing satisfactorily in predicting concentration values.

@inproceedings{2007TzimaIJCNN,
author={Fani A. Tzima and Kostas D. Karatzas and Pericles A. Mitkas and Stavros Karathanasis},
title={Using data-mining techniques for PM10 forecasting in the metropolitan area of Thessaloniki, Greece},
booktitle={IJCNN 2007 International Joint Conference on Neural Networks},
pages={2752--2757},
address={Orlando, Florida},
year={2007},
month={08},
date={2007-08-12},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Using-data-mining-techniques-for-PM10-forecasting-in-the-metropolitan-area-of-Thessaloniki-Greece.pdf},
keywords={air pollution;decision making;environmental science computing},
abstract={Knowledge extraction and acute forecasting are among the most challenging issues concerning the use of computational intelligence (CI) methods in real world applications. Both aspects are essential in cases where decision making is required, especially in domains directly related to the quality of life, like the quality of the atmospheric environment. In the present paper we emphasize short-term Air Quality (AQ) forecasting as a key constituent of every AQ management system, and we apply various CI methods and tools for assessing PM10 concentration values. We report our experimental strategy and preliminary results that reveal interesting interrelations between AQ and various city operations, while performing satisfactorily in predicting concentration values.}
}

Fani A. Tzima, Andreas L. Symeonidis and Pericles A. Mitkas
"Symbiosis: using predator-prey games as a test bed for studying competitive coevolution"
IEEE KIMAS conference, pp. 115-120, Springer Berlin / Heidelberg, Waltham, Massachusetts, 2007 Apr

The animat approach constitutes an intriguing attempt to study and comprehend the behavior of adaptive, learning entities in complex environments. Further inspired by the notions of co-evolution and evolutionary arms races, we have developed Symbiosis, a virtual ecosystem that hosts two self-organizing, combating species – preys and predators. All animats live and evolve in this shared environment, they are self-maintaining and engage in a series of vital activities (nutrition, growth, communication) with the ultimate goals of survival and reproduction. The main objective of Symbiosis is to study the behavior of ecosystem members, especially in terms of the emergent learning mechanisms and the effect of co-evolution on the evolved behavioral strategies. In this direction, several indicators are used to assess individual behavior, with the overall effectiveness metric depending strongly on the animats' net energy gain and reproduction rate. Several experiments have been conducted with the developed simulator under various environmental conditions. Overall experimental results support our original hypothesis that co-evolution is a driving factor in the animat learning procedure.

@inproceedings{2007TzimaKIMAS,
author={Fani A. Tzima and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Symbiosis: using predator-prey games as a test bed for studying competitive coevolution},
booktitle={IEEE KIMAS conference},
pages={115-120},
publisher={Springer Berlin / Heidelberg},
address={Waltham, Massachusetts},
year={2007},
month={04},
date={2007-04-29},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/05/Symbiosis-using-predator-prey-games-as-a-test-bed-for-studying-competitive-coevolution.pdf},
keywords={artificial life;learning (artificial intelligence);predator-prey systems},
abstract={The animat approach constitutes an intriguing attempt to study and comprehend the behavior of adaptive, learning entities in complex environments. Further inspired by the notions of co-evolution and evolutionary arms races, we have developed Symbiosis, a virtual ecosystem that hosts two self-organizing, combating species – preys and predators. All animats live and evolve in this shared environment, they are self-maintaining and engage in a series of vital activities (nutrition, growth, communication) with the ultimate goals of survival and reproduction. The main objective of Symbiosis is to study the behavior of ecosystem members, especially in terms of the emergent learning mechanisms and the effect of co-evolution on the evolved behavioral strategies. In this direction, several indicators are used to assess individual behavior, with the overall effectiveness metric depending strongly on the animats' net energy gain and reproduction rate. Several experiments have been conducted with the developed simulator under various environmental conditions. Overall experimental results support our original hypothesis that co-evolution is a driving factor in the animat learning procedure.}
}

Fani A. Tzima, Ioannis N. Athanasiadis and Pericles A. Mitkas
"Agent-based modelling and simulation in the irrigation management sector: applications and potential"
Options Mediterraneennes, Series B: Studies and Research, Proceedings of the WASAMED International Conference, pp. 273--286, 2007 Feb

In the field of sustainable development, the management of common-pool resources is an issue of major importance. Several models that attempt to address the problem can be found in the literature, especially in the case of irrigation management. In fact, the latter task represents a great challenge for researchers and decision makers, as it has to cope with various water-related activities and conflicting user perspectives within a specified geographic area. Simulation models, and particularly Agent-Based Modelling and Simulation (ABMS), can facilitate overcoming these limitations: their inherent ability of integrating ecological and socio-economic dimensions, allows their effective use as tools for evaluating the possible effects of different management plans, as well as for communicating with stakeholders. This great potential has already been recognized in the irrigation management sector, where a great number of test cases have already adopted the modelling paradigm of multi-agent simulation. Our current study of agent-based models for irrigation management draws some interesting conclusions, regarding the geographic and representation scale of the reviewed models, as well as the degree of stakeholder involvement in the various development phases. Overall, we argue that ABMS tools have a great potential in representing dynamic processes in integrated assessment tools for irrigation management. Such tools, when effectively capturing social interactions and coupling them with environmental and economical models, can promote active involvement of interested parties and produce sustainable and approvable solutions to irrigation management problems.

@inproceedings{2007TzimaWASAMED,
author={Fani A. Tzima and Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={Agent-based modelling and simulation in the irrigation management sector: applications and potential},
booktitle={Options Mediterraneennes, Series B: Studies and Research, Proceedings of the WASAMED International Conference},
pages={273--286},
year={2007},
month={02},
date={2007-02-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Agent-based-modelling-and-simulation-in-the-irrigation-management-sector.pdf},
keywords={agent;agent-based modeling;irrigation management;stakeholder participation},
abstract={In the field of sustainable development, the management of common-pool resources is an issue of major importance. Several models that attempt to address the problem can be found in the literature, especially in the case of irrigation management. In fact, the latter task represents a great challenge for researchers and decision makers, as it has to cope with various water-related activities and conflicting user perspectives within a specified geographic area. Simulation models, and particularly Agent-Based Modelling and Simulation (ABMS), can facilitate overcoming these limitations: their inherent ability of integrating ecological and socio-economic dimensions, allows their effective use as tools for evaluating the possible effects of different management plans, as well as for communicating with stakeholders. This great potential has already been recognized in the irrigation management sector, where a great number of test cases have already adopted the modelling paradigm of multi-agent simulation. Our current study of agent-based models for irrigation management draws some interesting conclusions, regarding the geographic and representation scale of the reviewed models, as well as the degree of stakeholder involvement in the various development phases. Overall, we argue that ABMS tools have a great potential in representing dynamic processes in integrated assessment tools for irrigation management. Such tools, when effectively capturing social interactions and coupling them with environmental and economical models, can promote active involvement of interested parties and produce sustainable and approvable solutions to irrigation management problems.}
}

Konstantinos N. Vavliakis, Andreas L. Symeonidis, Georgios T. Karagiannis and Pericles A. Mitkas
"Eikonomia – An Integrated Semantically Aware Tool for Description and Retrieval of Byzantine Art Information"
ICTAI, pp. 279--282, IEEE Computer Society, Washington, DC, USA, 2007 Oct

Semantic annotation and querying is currently applied on a number of versatile disciplines, providing the added value of such an approach and, consequently, the need for more elaborate – either case-specific or generic – tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, have been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, query results projection and restrictions creation.

@inproceedings{2007VavliakisICTAI,
author={Konstantinos N. Vavliakis and Andreas L. Symeonidis and Georgios T. Karagiannis and Pericles A. Mitkas},
title={Eikonomia-An Integrated Semantically Aware Tool for Description and Retrieval of Byzantine Art Information},
booktitle={ICTAI},
pages={279--282},
publisher={IEEE Computer Society},
address={Washington, DC, USA},
year={2007},
month={10},
date={2007-10-29},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Eikonomia-–-An-Integrated-Semantically-Aware-Tool-for-Description-and-Retrieval-of-Byzantine-Art-Information-.pdf},
keywords={art;inference mechanisms;ontologies (artificial intelligence);query processing},
abstract={Semantic annotation and querying is currently applied on a number of versatile disciplines, providing the added value of such an approach and, consequently, the need for more elaborate – either case-specific or generic – tools. In this context, we have developed Eikonomia: an integrated semantically-aware tool for the description and retrieval of Byzantine Artwork Information. Following the needs of the ORMYLIA Art Diagnosis Center for adding semantics to their legacy data, an ontology describing Byzantine artwork based on CIDOC-CRM, along with the interfaces for synchronization to and from the existing RDBMS, have been implemented. This ontology has been linked to a reasoning tool, while a dynamic interface for the automated creation of semantic queries in SPARQL was developed. Finally, all the appropriate interfaces were instantiated, in order to allow easy ontology manipulation, query results projection and restrictions creation.}
}

2006

Journal Articles

Sotiris Diplaris, Andreas L. Symeonidis, Pericles A. Mitkas, Georgios Banos and Z. Abas
"A decision-tree-based alarming system for the validation of national genetic evaluations"
Computers and Electronics in Agriculture, 52, (1--2), pp. 21--35, 2006 Jun

The aim of this work was to explore possibilities to build an alarming system based on the results of the application of data mining (DM) techniques in genetic evaluations of dairy cattle, in order to assess and assure data quality. The technique used combined data mining using classification and decision-tree algorithms, Gaussian binned fitting functions, and hypothesis tests. Data were quarterly national genetic evaluations, computed between February 1999 and February 2003 in nine countries. Each evaluation run included 73,000-90,000 bull records complete with their genetic values and evaluation information. Milk production traits were considered. Data mining algorithms were applied separately for each country and evaluation run to search for associations across several dimensions, including bull origin, type of proof, age of bull, and number of daughters. Then, data in each node were fitted to the Gaussian function and the quality of the fit was measured, thus providing a measure of the quality of data. In order to evaluate and ultimately predict decision-tree models, the implemented architecture can compare the node probabilities between two models and decide on their similarity, using hypothesis tests for the standard deviation of their distribution. The key utility of this technique lies in its capacity to identify the exact node where anomalies occur, and to fire a focused alarm pointing to erroneous data.

@article{2006DiplarisCEA,
author={Sotiris Diplaris and Andreas L. Symeonidis and Pericles A. Mitkas and Georgios Banos and Z. Abas},
title={A decision-tree-based alarming system for the validation of national genetic evaluations},
journal={Computers and Electronics in Agriculture},
volume={52},
number={1--2},
pages={21--35},
year={2006},
month={06},
date={2006-06-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-decision-tree-based-alarming-system-for-the-validation-of-national-genetic-evaluations.pdf},
keywords={Dairy cattle evaluations;Alarming technique;Genetic evaluations;Quality control},
abstract={The aim of this work was to explore possibilities to build an alarming system based on the results of the application of data mining (DM) techniques in genetic evaluations of dairy cattle, in order to assess and assure data quality. The technique used combined data mining using classification and decision-tree algorithms, Gaussian binned fitting functions, and hypothesis tests. Data were quarterly national genetic evaluations, computed between February 1999 and February 2003 in nine countries. Each evaluation run included 73,000-90,000 bull records complete with their genetic values and evaluation information. Milk production traits were considered. Data mining algorithms were applied separately for each country and evaluation run to search for associations across several dimensions, including bull origin, type of proof, age of bull, and number of daughters. Then, data in each node were fitted to the Gaussian function and the quality of the fit was measured, thus providing a measure of the quality of data. In order to evaluate and ultimately predict decision-tree models, the implemented architecture can compare the node probabilities between two models and decide on their similarity, using hypothesis tests for the standard deviation of their distribution. The key utility of this technique lies in its capacity to identify the exact node where anomalies occur, and to fire a focused alarm pointing to erroneous data.}
}

Andreas L. Symeonidis, Dionisis D. Kehagias, Pericles A. Mitkas and Adamantios Koumpis
"Open Source Supply Chains"
International Journal of Advanced Manufacturing Systems (IJAMS), 9, (1), pp. 33-42, 2006 Jan

Enterprise Resource Planning (ERP) systems tend to deploy Supply Chains, in order to successfully integrate customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs, while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated. Furthermore, the development of such systems is subject to strict licensing, since the exploitation of such kind of software is usually proprietary. This leads to a monolithic approach and to sub-utilization of efforts from all sides. Introducing a completely new paradigm of how primitive Supply Chain Management (SCM) rules apply on ERP systems, we have developed a framework as an Open Source Multi-Agent System that introduces adaptive intelligence as a powerful add-on for ERP software customization. In this paper the SCM system developed is described, whereas the expected benefits of the open source initiative employed are illustrated.

@article{2006SymeonidisIJAMS,
author={Andreas L. Symeonidis and Dionisis D. Kehagias and Pericles A. Mitkas and Adamantios Koumpis},
title={Open Source Supply Chains},
journal={International Journal of Advanced Manufacturing Systems (IJAMS)},
volume={9},
number={1},
pages={33-42},
year={2006},
month={01},
date={2006-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Open-Source-Supply-Chains.pdf},
keywords={agent-based social simulation},
abstract={Enterprise Resource Planning (ERP) systems tend to deploy Supply Chains, in order to successfully integrate customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs, while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated. Furthermore, the development of such systems is subject to strict licensing, since the exploitation of such kind of software is usually proprietary. This leads to a monolithic approach and to sub-utilization of efforts from all sides. Introducing a completely new paradigm of how primitive Supply Chain Management (SCM) rules apply on ERP systems, we have developed a framework as an Open Source Multi-Agent System that introduces adaptive intelligence as a powerful add-on for ERP software customization. In this paper the SCM system developed is described, whereas the expected benefits of the open source initiative employed are illustrated.}
}

2006

Inproceedings Papers

Z. Abas, Andreas L. Symeonidis, Alexandros Batzios, Zoi Basdagianni, Georgios Banos, Pericles A. Mitkas, E. Sinapis and A. Pampoukidou
"AMNOS-mobile: Exploiting handheld computers in efficient sheep recording"
35th ICAR, pp. 99--104, IEEE Computer Society, Kuopio, Finland, 2006 Jun

This paper focuses on AMNOS-mobile, a PDA application developed to support the tasks undertaken by sheep inspectors when visiting the farms. It works in close cooperation with AMNOS, an integrated web-based platform developed to record, monitor, evaluate and manage the dairy sheep population of the Chios and Serres breeds in Greece. Within the context of this paper, the design features of AMNOS-mobile are presented and the problems tackled by the use of handheld devices are discussed, illustrating how our application can enhance recording, improve the data collection process, and help farmers to more efficiently manage their flocks.

@inproceedings{2006AbasICAR,
author={Z. Abas and Andreas L. Symeonidis and Alexandros Batzios and Zoi Basdagianni and Georgios Banos and Pericles A. Mitkas and E. Sinapis and A. Pampoukidou},
title={AMNOS-mobile: Exploiting handheld computers in efficient sheep recording},
booktitle={35th ICAR},
pages={99--104},
publisher={IEEE Computer Society},
address={Kuopio, Finland},
year={2006},
month={06},
date={2006-06-06},
url={http://books.google.gr/books?id},
keywords={milk recording;data collection;handheld computers;transparent synchronization},
abstract={This paper focuses on AMNOS-mobile, a PDA application developed to support the tasks undertaken by sheep inspectors when visiting the farms. It works in close cooperation with AMNOS, an integrated web-based platform developed to record, monitor, evaluate and manage the dairy sheep population of the Chios and Serres breeds in Greece. Within the context of this paper, the design features of AMNOS-mobile are presented and the problems tackled by the use of handheld devices are discussed, illustrating how our application can enhance recording, improve the data collection process, and help farmers to more efficiently manage their flocks.}
}

Chrysa Collyda, Sotiris Diplaris, Pericles A. Mitkas, N. Maglaveras and C. Pappas
"Fuzzy Hidden Markov Models: A New Approach In Multiple Sequence Alignment"
20th International Congress of the European Federation for Medical Informatics (MIE 2006) Stud Health Technol Inform, pp. 99--104, IEEE Computer Society, Maastricht, Netherlands, 2006 Aug

This paper proposes a novel method for aligning multiple genomic or proteomic sequences using a fuzzified Hidden Markov Model (HMM). HMMs are known to provide compelling performance among multiple sequence alignment (MSA) algorithms, yet their stochastic nature does not help them cope with the existing dependence among the sequence elements. Fuzzy HMMs are a novel type of HMMs based on fuzzy sets and fuzzy integrals which generalizes the classical stochastic HMM, by relaxing its independence assumptions. In this paper, the fuzzy HMM model for MSA is mathematically defined. New fuzzy algorithms are described for building and training fuzzy HMMs, as well as for their use in aligning multiple sequences. Fuzzy HMMs can also increase the model capability of aligning multiple sequences mainly in terms of computation time. Modeling the multiple sequence alignment procedure with fuzzy HMMs can yield a robust and time-effective solution that can be widely used in bioinformatics in various applications, such as protein classification, phylogenetic analysis and gene prediction, among others.

@inproceedings{2006CollydaMIE,
author={Chrysa Collyda and Sotiris Diplaris and Pericles A. Mitkas and N. Maglaveras and C. Pappas},
title={Fuzzy Hidden Markov Models: A New Approach In Multiple Sequence Alignment},
booktitle={20th International Congress of the European Federation for Medical Informatics (MIE 2006) Stud Health Technol Inform},
pages={99--104},
publisher={IEEE Computer Society},
address={Maastricht, Netherlands},
year={2006},
month={08},
date={2006-08-27},
url={http://books.google.gr/books?hl},
keywords={multiple sequence alignment;fuzzy integrals;fuzzy measures;hidden Markov models;protein domains;phylogenetic analysis},
abstract={This paper proposes a novel method for aligning multiple genomic or proteomic sequences using a fuzzified Hidden Markov Model (HMM). HMMs are known to provide compelling performance among multiple sequence alignment (MSA) algorithms, yet their stochastic nature does not help them cope with the existing dependence among the sequence elements. Fuzzy HMMs are a novel type of HMMs based on fuzzy sets and fuzzy integrals which generalizes the classical stochastic HMM, by relaxing its independence assumptions. In this paper, the fuzzy HMM model for MSA is mathematically defined. New fuzzy algorithms are described for building and training fuzzy HMMs, as well as for their use in aligning multiple sequences. Fuzzy HMMs can also increase the model capability of aligning multiple sequences mainly in terms of computation time. Modeling the multiple sequence alignment procedure with fuzzy HMMs can yield a robust and time-effective solution that can be widely used in bioinformatics in various applications, such as protein classification, phylogenetic analysis and gene prediction, among others.}
}

Christos Dimou, Alexandros Batzios, Andreas L. Symeonidis and Pericles A. Mitkas
"A Multi-Agent Simulation Framework for Spiders Traversing the Semantic Web"
IEEE/WIC/ACM International Conference on Web Intelligence - WI 2006, pp. 736--739, Springer Berlin / Heidelberg, Hong Kong, China, 2006 Dec

Although search engines traditionally use spiders for traversing and indexing the web, there has not yet been any methodological attempt to model, deploy and test learning spiders. Moreover, the flourishing of the Semantic Web provides understandable information that may enhance search engines in providing more accurate results. Considering the above, we introduce BioSpider, an agent-based simulation framework for developing and testing autonomous, intelligent, semantically-focused web spiders. BioSpider assumes a direct analogy of the problem at hand with a multi-variate ecosystem, where each member is self-maintaining. The population of the ecosystem comprises cooperative spiders incorporating communication, mobility and learning skills, striving to improve efficiency. Genetic algorithms and classifier rules have been employed for spider adaptation and learning. A set of experiments has been set up in order to qualitatively test the efficacy and applicability of the proposed approach.

@inproceedings{2006DimouWI,
author={Christos Dimou and Alexandros Batzios and Andreas L. Symeonidis and Pericles A. Mitkas},
title={A Multi-Agent Simulation Framework for Spiders Traversing the Semantic Web},
booktitle={IEEE/WIC/ACM International Conference on Web Intelligence - WI 2006},
pages={736--739},
publisher={Springer Berlin / Heidelberg},
address={Hong Kong, China},
year={2006},
month={12},
date={2006-12-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A-Multi-Agent-Simulation-Framework-for-Spiders-Traversing-the-Semantic-Web.pdf},
keywords={artificial life;learning (artificial intelligence);predator-prey systems},
abstract={Although search engines traditionally use spiders for traversing and indexing the web, there has not yet been any methodological attempt to model, deploy and test learning spiders. Moreover, the flourishing of the Semantic Web provides understandable information that may enhance search engines in providing more accurate results. Considering the above, we introduce BioSpider, an agent-based simulation framework for developing and testing autonomous, intelligent, semantically-focused web spiders. BioSpider assumes a direct analogy of the problem at hand with a multi-variate ecosystem, where each member is self-maintaining. The population of the ecosystem comprises cooperative spiders incorporating communication, mobility and learning skills, striving to improve efficiency. Genetic algorithms and classifier rules have been employed for spider adaptation and learning. A set of experiments has been set up in order to qualitatively test the efficacy and applicability of the proposed approach.}
}

Demetrios G. Eliades, Andreas L. Symeonidis and Pericles A. Mitkas
"GeneCity: A multi-agent simulation environment for hereditary diseases"
4th ACS/IEEE International Conference on Computer Systems and Applications - AICCSA 06, pp. 529--536, Springer-Verlag, Dubai/Sharjah, UAE, 2006 Mar

Simulating the psycho-societal aspects of a human community is an issue always intriguing and challenging, aspiring us to help better understand, macroscopically, the way(s) humans behave. The mathematical models that have extensively been used for the analytical study of the various related phenomena prove inefficient, since they cannot conceive the notion of population heterogeneity, a parameter highly critical when it comes to community interactions. Following the more successful paradigm of artificial societies, coupled with multi-agent systems and other Artificial Intelligence primitives, and extending previous epidemiological research work, we have developed GeneCity: an extended agent community, where agents live and interact under the veil of a hereditary epidemic. The members of the community, which can be either healthy, carriers of a trait, or patients, exhibit a number of human-like social (and medical) characteristics: wealth, acceptance and influence, fear and knowledge, phenotype and reproduction ability. GeneCity provides a highly-configurable interface for simulating social environments and the way they are affected with the appearance of a hereditary disease, either Autosome or X-linked. This paper presents an analytical overview of the work conducted and examines a test-hypothesis based on the spreading of Thalassaemia major.

@inproceedings{2006EliadesAICCSA,
author={Demetrios G. Eliades and Andreas L. Symeonidis and Pericles A. Mitkas},
title={GeneCity: A multi-agent simulation environment for hereditary diseases},
booktitle={4th ACS/IEEE International Conference on Computer Systems and Applications - AICCSA 06},
pages={529--536},
publisher={Springer-Verlag},
address={Dubai/Sharjah, UAE},
year={2006},
month={03},
date={2006-03-08},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/GeneCity-A-Multi-Agent-Simulation-Environment-for-Hereditary-Diseases.pdf},
keywords={Agent-mediated E-commerce;Auctions},
abstract={Simulating the psycho-societal aspects of a human community is an issue always intriguing and challenging, aspiring us to help better understand, macroscopically, the way(s) humans behave. The mathematical models that have extensively been used for the analytical study of the various related phenomena prove inefficient, since they cannot conceive the notion of population heterogeneity, a parameter highly critical when it comes to community interactions. Following the more successful paradigm of artificial societies, coupled with multi-agent systems and other Artificial Intelligence primitives, and extending previous epidemiological research work, we have developed GeneCity: an extended agent community, where agents live and interact under the veil of a hereditary epidemic. The members of the community, which can be either healthy, carriers of a trait, or patients, exhibit a number of human-like social (and medical) characteristics: wealth, acceptance and influence, fear and knowledge, phenotype and reproduction ability. GeneCity provides a highly-configurable interface for simulating social environments and the way they are affected with the appearance of a hereditary disease, either Autosome or X-linked. This paper presents an analytical overview of the work conducted and examines a test-hypothesis based on the spreading of Thalassaemia major.}
}

Dionisis Kehagias, Panos Toulis and Pericles A. Mitkas
"A Long-Term Profit Seeking Strategy for Continuous Double Auctions in a Trading Agent Competition"
Fourth Hellenic Conference on Artificial Intelligence, pp. 127--136, Springer-Verlag, Heraklion, Crete, Greece, 2006 May

The leap from decision support to autonomous systems has often raised a number of issues, namely system safety, soundness and security. Depending on the field of application, these issues can either be easily overcome or even hinder progress. In the case of Supply Chain Management (SCM), where system performance implies loss or profit, these issues are of high importance. SCM environments are often dynamic markets providing incomplete information, therefore demanding intelligent solutions which can adhere to environment rules, perceive variations, and act in order to achieve maximum revenue. Advancing on the way such autonomous solutions deal with the SCM process, we have built a robust, highly-adaptable and easily-configurable mechanism for efficiently dealing with all SCM facets, from material procurement and inventory management to goods production and shipment. Our agent has been crash-tested in one of the most challenging SCM environments, the trading agent competition SCM game and has proven capable of providing advanced SCM solutions on behalf of its owner. This paper introduces Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and discusses Mertacor's performance.

@inproceedings{2006KehagiasHCAI,
author={Dionisis Kehagias and Panos Toulis and Pericles A. Mitkas},
title={A Long-Term Profit Seeking Strategy for Continuous Double Auctions in a Trading Agent Competition},
booktitle={Fourth Hellenic Conference on Artificial Intelligence},
pages={127--136},
publisher={Springer-Verlag},
address={Heraklion, Crete, Greece},
year={2006},
month={05},
date={2006-05-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-Long-Term-Profit-Seeking-Strategy-for-Continuous-Double-Auctions-in-a-Trading-Agent-Competition-.pdf},
keywords={TAC Travel},
abstract={The leap from decision support to autonomous systems has often raised a number of issues, namely system safety, soundness and security. Depending on the field of application, these issues can either be easily overcome or even hinder progress. In the case of Supply Chain Management (SCM), where system performance implies loss or profit, these issues are of high importance. SCM environments are often dynamic markets providing incomplete information, therefore demanding intelligent solutions which can adhere to environment rules, perceive variations, and act in order to achieve maximum revenue. Advancing on the way such autonomous solutions deal with the SCM process, we have built a robust, highly-adaptable and easily-configurable mechanism for efficiently dealing with all SCM facets, from material procurement and inventory management to goods production and shipment. Our agent has been crash-tested in one of the most challenging SCM environments, the trading agent competition SCM game and has proven capable of providing advanced SCM solutions on behalf of its owner. This paper introduces Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and discusses Mertacor's performance.}
}

Ioannis Kontogounnis, Kyriakos C. Chatzidimitriou, Andreas L. Symeonidis and Pericles A. Mitkas
"A Robust Agent Design for Dynamic SCM environments"
4th Hellenic Conference on Artificial Intelligence (SETN 06), pp. 127--136, Springer-Verlag, Heraklion, Crete, Greece, 2006 May

The leap from decision support to autonomous systems has often raised a number of issues, namely system safety, soundness and security. Depending on the field of application, these issues can either be easily overcome or even hinder progress. In the case of Supply Chain Management (SCM), where system performance implies loss or profit, these issues are of high importance. SCM environments are often dynamic markets providing incomplete information, therefore demanding intelligent solutions which can adhere to environment rules, perceive variations, and act in order to achieve maximum revenue. Advancing on the way such autonomous solutions deal with the SCM process, we have built a robust, highly-adaptable and easily-configurable mechanism for efficiently dealing with all SCM facets, from material procurement and inventory management to goods production and shipment. Our agent has been crash-tested in one of the most challenging SCM environments, the trading agent competition SCM game and has proven capable of providing advanced SCM solutions on behalf of its owner. This paper introduces Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and discusses Mertacor's performance.

@inproceedings{2006KontogounnisSETN,
author={Ioannis Kontogounnis and Kyriakos C. Chatzidimitriou and Andreas L. Symeonidis and Pericles A. Mitkas},
title={A Robust Agent Design for Dynamic SCM environments},
booktitle={4th Hellenic Conference on Artificial Intelligence (SETN 06)},
pages={127--136},
publisher={Springer-Verlag},
address={Heraklion, Crete, Greece},
year={2006},
month={05},
date={2006-05-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-Robust-Agent-Design-for-Dynamic-SCM-Environments.pdf},
keywords={milk recording;data collection;handheld computers;transparent synchronization},
abstract={The leap from decision support to autonomous systems has often raised a number of issues, namely system safety, soundness and security. Depending on the field of application, these issues can either be easily overcome or even hinder progress. In the case of Supply Chain Management (SCM), where system performance implies loss or profit, these issues are of high importance. SCM environments are often dynamic markets providing incomplete information, therefore demanding intelligent solutions which can adhere to environment rules, perceive variations, and act in order to achieve maximum revenue. Advancing on the way such autonomous solutions deal with the SCM process, we have built a robust, highly-adaptable and easily-configurable mechanism for efficiently dealing with all SCM facets, from material procurement and inventory management to goods production and shipment. Our agent has been crash-tested in one of the most challenging SCM environments, the trading agent competition SCM game and has proven capable of providing advanced SCM solutions on behalf of its owner. This paper introduces Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and discusses Mertacor's performance.}
}

Pericles A. Mitkas, Anastasios N. Delopoulos, Andreas L. Symeonidis and Fotis E. Psomopoulos
"A Framework for Semantic Data Integration and Inferencing on Cervical Cancer"
Hellenic Bioinformatics and Medical Informatics Meeting, pp. 23-26, IEEE Computer Society, Biomedical Research Foundation, Academy of Athens, Greece, 2006 Oct

Advances in the area of biomedicine and bioengineering have allowed for more accurate and detailed data acquisition in the area of health care. Examinations that once were time- and cost-forbidding, are now available to public, providing physicians and clinicians with more patient data for diagnosis and successful treatment. These data are also used by medical researchers in order to perform association studies among environmental agents, virus characteristics and genetic attributes, extracting new and interesting risk markers which can be used to enhance early diagnosis and prognosis. Nevertheless, scientific progress is hindered by the fact that each medical center operates in relative isolation, regarding datasets and medical effort, since there is no universally accepted archetype/ontology for medical data acquisition, data storage and labeling. This, exactly, is the major goal of ASSIST: to virtually unify multiple patient record repositories, physically located at different laboratories, clinics and/or hospitals. ASSIST focuses on cervical cancer and implements a semantically-aware integration layer that unifies data in a seamless manner. Data privacy and security are ensured by techniques for data anonymization, secure data access and storage. Both the clinician as well as the medical researcher will have access to a knowledge base on cervical cancer and will be able to perform more complex and elaborate association studies on larger groups.

@inproceedings{2006MitkasASSISTBioacademy,
author={Pericles A. Mitkas and Anastasios N. Delopoulos and Andreas L. Symeonidis and Fotis E. Psomopoulos},
title={A Framework for Semantic Data Integration and Inferencing on Cervical Cancer},
booktitle={Hellenic Bioinformatics and Medical Informatics Meeting},
pages={23-26},
publisher={IEEE Computer Society},
address={Biomedical Research Foundation, Academy of Athens, Greece},
year={2006},
month={10},
date={2006-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A-Framework-for-Semantic-Data-Integration-and-Inferencing-on-Cervical-Cancer.pdf},
keywords={bioinformatics databases},
abstract={Advances in the area of biomedicine and bioengineering have allowed for more accurate and detailed data acquisition in the area of health care. Examinations that once were time- and cost-forbidding, are now available to public, providing physicians and clinicians with more patient data for diagnosis and successful treatment. These data are also used by medical researchers in order to perform association studies among environmental agents, virus characteristics and genetic attributes, extracting new and interesting risk markers which can be used to enhance early diagnosis and prognosis. Nevertheless, scientific progress is hindered by the fact that each medical center operates in relative isolation, regarding datasets and medical effort, since there is no universally accepted archetype/ontology for medical data acquisition, data storage and labeling. This, exactly, is the major goal of ASSIST: to virtually unify multiple patient record repositories, physically located at different laboratories, clinics and/or hospitals. ASSIST focuses on cervical cancer and implements a semantically-aware integration layer that unifies data in a seamless manner. Data privacy and security are ensured by techniques for data anonymization, secure data access and storage. Both the clinician as well as the medical researcher will have access to a knowledge base on cervical cancer and will be able to perform more complex and elaborate association studies on larger groups.}
}

Helen E. Polychroniadou, Fotis E. Psomopoulos and Pericles A. Mitkas
"G-Class: A Divide and Conquer Application for Grid Protein Classification"
Proceedings of the 2nd ADMKD 2006: Workshop on Data Mining and Knowledge Discovery (in conjunction with ADBIS 2006: The 10th East-European Conference on Advances in Databases and Information Systems), pp. 121-132, IEEE Computer Society, Thessaloniki, Greece, 2006 Sep

Protein classification has always been one of the major challenges in modern functional proteomics. The presence of motifs in protein chains can make the prediction of the functional behavior of proteins possible. The correlation between protein properties and their motifs is not always obvious, since more than one motif may exist within a protein chain. Due to the complexity of this correlation most data mining algorithms are either non efficient or time consuming. In this paper a data mining methodology that utilizes grid technologies is presented. First, data are split into multiple sets while preserving the original data distribution in each set. Then, multiple models are created by using the data sets as independent training sets. Finally, the models are combined to produce the final classification rules, containing all the previously extracted information. The methodology is tested using various protein and protein class subsets. Results indicate the improved time efficiency of our technique compared to other known data mining algorithms.

@inproceedings{2006PolychroniadouGClass,
author={Helen E. Polychroniadou and Fotis E. Psomopoulos and Pericles A. Mitkas},
title={G-Class: A Divide and Conquer Application for Grid Protein Classification},
booktitle={Proceedings of the 2nd ADMKD 2006: Workshop on Data Mining and Knowledge Discovery (in conjunction with ADBIS 2006: The 10th East-European Conference on Advances in Databases and Information Systems)},
pages={121-132},
publisher={IEEE Computer Society},
address={Thessaloniki, Greece},
year={2006},
month={09},
date={2006-09-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/G-Class-A-Divide-and-Conquer-Application-for-Grid-Protein-Classification-.pdf},
keywords={bioinformatics databases},
abstract={Protein classification has always been one of the major challenges in modern functional proteomics. The presence of motifs in protein chains can make the prediction of the functional behavior of proteins possible. The correlation between protein properties and their motifs is not always obvious, since more than one motif may exist within a protein chain. Due to the complexity of this correlation most data mining algorithms are either non efficient or time consuming. In this paper a data mining methodology that utilizes grid technologies is presented. First, data are split into multiple sets while preserving the original data distribution in each set. Then, multiple models are created by using the data sets as independent training sets. Finally, the models are combined to produce the final classification rules, containing all the previously extracted information. The methodology is tested using various protein and protein class subsets. Results indicate the improved time efficiency of our technique compared to other known data mining algorithms.}
}

Fotis E. Psomopoulos and Pericles A. Mitkas
"PROTEAS: A Finite State Automata based data mining algorithm for rule extraction in protein classification"
Proceedings of the 5th Hellenic Data Management Symposium (HDMS), pp. 118-126, IEEE Computer Society, Thessaloniki, Greece, 2006 Sep

An important challenge in modern functional proteomics is the prediction of the functional behavior of proteins. Motifs in protein chains can make such a prediction possible. The correlation between protein properties and their motifs is not always obvious, since more than one motifs may exist within a protein chain. Thus, the behavior of a protein is a function of many motifs, where some overpower others. In this paper a data mining approach for a motif-based classification of proteins is presented. A new classification algorithm that induces rules and exploits finite state automata is introduced. First, data are modeled by terms of prefix tree acceptors, which are later merged into finite state automata. Finally, a new algorithm is proposed, for the induction of protein classification rules from finite state automata. The data mining model is trained and tested using various protein and protein class subsets, as well as the whole dataset of known proteins and protein classes. Results indicate the efficiency of our technique compared to other known data mining algorithms.

@inproceedings{2006PsomopoulosHDMS,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={PROTEAS: A Finite State Automata based data mining algorithm for rule extraction in protein classification},
booktitle={Proceedings of the 5th Hellenic Data Management Symposium (HDMS)},
pages={118-126},
publisher={IEEE Computer Society},
address={Thessaloniki, Greece},
year={2006},
month={09},
date={2006-09-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/PROTEAS-A-Finite-State-Automata-based-data-mining-algorithm-for-rule-extraction-in-protein-classification-.pdf},
keywords={mining methods and algorithms;classification rules},
abstract={An important challenge in modern functional proteomics is the prediction of the functional behavior of proteins. Motifs in protein chains can make such a prediction possible. The correlation between protein properties and their motifs is not always obvious, since more than one motifs may exist within a protein chain. Thus, the behavior of a protein is a function of many motifs, where some overpower others. In this paper a data mining approach for a motif-based classification of proteins is presented. A new classification algorithm that induces rules and exploits finite state automata is introduced. First, data are modeled by terms of prefix tree acceptors, which are later merged into finite state automata. Finally, a new algorithm is proposed, for the induction of protein classification rules from finite state automata. The data mining model is trained and tested using various protein and protein class subsets, as well as the whole dataset of known proteins and protein classes. Results indicate the efficiency of our technique compared to other known data mining algorithms.}
}

Andreas L. Symeonidis, Vivia Nikolaidou and Pericles A. Mitkas
"Exploiting Data Mining Techniques for Improving the Efficiency of a Supply Chain Management Agent"
WI-IATW 06: Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology, pp. 23-26, IEEE Computer Society, Hong Kong, China, 2006 Dec

Supply Chain Management (SCM) environments are often dynamic markets providing a plethora of information, either complete or incomplete. It is, therefore, evident that such environments demand intelligent solutions, which can perceive variations and act in order to achieve maximum revenue. To do so, they must also provide some sophisticated mechanism for exploiting the full potential of the environments they inhabit. Advancing on the way autonomous solutions usually deal with the SCM process, we have built a robust and highly-adaptable mechanism for efficiently dealing with all SCM facets, while at the same time incorporating a module that exploits data mining technology in order to forecast the price of the winning bid in a given order and, thus, adjust its bidding strategy. The paper presents our agent, Mertacor, and focuses on the forecasting mechanism it incorporates, aiming to optimal agent efficiency.

@inproceedings{2006SymeonidisIADM,
author={Andreas L. Symeonidis and Vivia Nikolaidou and Pericles A. Mitkas},
title={Exploiting Data Mining Techniques for Improving the Efficiency of a Supply Chain Management Agent},
booktitle={WI-IATW 06: Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology},
pages={23-26},
publisher={IEEE Computer Society},
address={Hong Kong, China},
year={2006},
month={12},
date={2006-12-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Exploiting-Data-Mining-Techniques-for-Improving-the-Efficiency-of-a-Supply-Chain-Management-Agen.pdf},
keywords={artificial life;learning (artificial intelligence);predator-prey systems},
abstract={Supply Chain Management (SCM) environments are often dynamic markets providing a plethora of information, either complete or incomplete. It is, therefore, evident that such environments demand intelligent solutions, which can perceive variations and act in order to achieve maximum revenue. To do so, they must also provide some sophisticated mechanism for exploiting the full potential of the environments they inhabit. Advancing on the way autonomous solutions usually deal with the SCM process, we have built a robust and highly-adaptable mechanism for efficiently dealing with all SCM facets, while at the same time incorporating a module that exploits data mining technology in order to forecast the price of the winning bid in a given order and, thus, adjust its bidding strategy. The paper presents our agent, Mertacor, and focuses on the forecasting mechanism it incorporates, aiming to optimal agent efficiency.}
}

Panos Toulis, Dionisis Kehagias and Pericles A. Mitkas
"Mertacor: A successful autonomous trading agent"
Autonomous Agents & Multi Agent Systems (AAMAS06), pp. 1191-1198, Springer-Verlag, Hakodate, Japan, 2006 May

The leap from decision support to autonomous systems has often raised a number of issues, namely system safety, soundness and security. Depending on the field of application, these issues can either be easily overcome or even hinder progress. In the case of Supply Chain Management (SCM), where system performance implies loss or profit, these issues are of high importance. SCM environments are often dynamic markets providing incomplete information, therefore demanding intelligent solutions which can adhere to environment rules, perceive variations, and act in order to achieve maximum revenue. Advancing on the way such autonomous solutions deal with the SCM process, we have built a robust, highly-adaptable and easily-configurable mechanism for efficiently dealing with all SCM facets, from material procurement and inventory management to goods production and shipment. Our agent has been crash-tested in one of the most challenging SCM environments, the trading agent competition SCM game and has proven capable of providing advanced SCM solutions on behalf of its owner. This paper introduces Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and discusses Mertacor's performance.

@inproceedings{2006ToulisAAMAS,
author={Panos Toulis and Dionisis Kehagias and Pericles A. Mitkas},
title={Mertacor: A successful autonomous trading agent},
booktitle={Autonomous Agents & Multi Agent Systems (AAMAS06)},
pages={1191-1198},
publisher={Springer-Verlag},
address={Hakodate, Japan},
year={2006},
month={05},
date={2006-05-08},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Mertacor-A-Successful-Autonomous-Trading-Agent.pdf},
keywords={Agent-mediated E-commerce;Auctions},
abstract={The leap from decision support to autonomous systems has often raised a number of issues, namely system safety, soundness and security. Depending on the field of application, these issues can either be easily overcome or even hinder progress. In the case of Supply Chain Management (SCM), where system performance implies loss or profit, these issues are of high importance. SCM environments are often dynamic markets providing incomplete information, therefore demanding intelligent solutions which can adhere to environment rules, perceive variations, and act in order to achieve maximum revenue. Advancing on the way such autonomous solutions deal with the SCM process, we have built a robust, highly-adaptable and easily-configurable mechanism for efficiently dealing with all SCM facets, from material procurement and inventory management to goods production and shipment. Our agent has been crash-tested in one of the most challenging SCM environments, the trading agent competition SCM game and has proven capable of providing advanced SCM solutions on behalf of its owner. This paper introduces Mertacor and its main architectural primitives, provides an overview of the TAC SCM environment, and discusses Mertacor's performance.}
}

2005

Journal Articles

Ioannis N. Athanasiadis, Alexandros K. Mentes, Pericles Alexandros Mitkas and Yiannis A. Mylopoulos
"A hybrid agent-based model for estimating residential water demand"
Simulation: Transactions of The Society for Modeling and Simulation International, 81, (3), pp. 175--187, 2005 Mar

The global effort toward sustainable development has initiated a transition in water management. Water utility companies use water-pricing policies as an instrument for controlling residential water demand. To support policy makers in their decisions, the authors have developed DAWN, a hybrid model for evaluating water-pricing policies. DAWN integrates an agent-based social model for the consumer with conventional econometric models and simulates the residential water demand-supply chain, enabling the evaluation of different scenarios for policy making. An agent community is assigned to behave as water consumers, while econometric and social models are incorporated into them for estimating water consumption. DAWN's main advantage is that it supports social interaction between consumers, through an influence diffusion mechanism, implemented via inter-agent communication. Parameters affecting water consumption and associated with consumers' social behavior can be simulated with DAWN. Real-world results of DAWN's application for the evaluation of five water-pricing policies in Thessaloniki, Greece, are presented.

@article{2005Athanasiadis-SIMULATION,
author={Ioannis N. Athanasiadis and Alexandros K. Mentes and Pericles Alexandros Mitkas and Yiannis A. Mylopoulos},
title={A hybrid agent-based model for estimating residential water demand},
journal={Simulation: Transactions of The Society for Modeling and Simulation International},
volume={81},
number={3},
pages={175--187},
year={2005},
month={03},
date={2005-03-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-Hybrid-Agent-Based-Model-for-EstimatingResidential-Water-Demand.pdf},
keywords={residential water demand;multiagent systems;social influence;pricing policies},
abstract={The global effort toward sustainable development has initiated a transition in water management. Water utility companies use water-pricing policies as an instrument for controlling residential water demand. To support policy makers in their decisions, the authors have developed DAWN, a hybrid model for evaluating water-pricing policies. DAWN integrates an agent-based social model for the consumer with conventional econometric models and simulates the residential water demand-supply chain, enabling the evaluation of different scenarios for policy making. An agent community is assigned to behave as water consumers, while econometric and social models are incorporated into them for estimating water consumption. DAWN's main advantage is that it supports social interaction between consumers, through an influence diffusion mechanism, implemented via inter-agent communication. Parameters affecting water consumption and associated with consumers' social behavior can be simulated with DAWN. Real-world results of DAWN's application for the evaluation of five water-pricing policies in Thessaloniki, Greece, are presented.}
}

Ioannis N. Athanasiadis and Pericles A. Mitkas
"Social influence and water conservation: An agent-based approach"
IEEE Computing in Science and Engineering, 7, (1), pp. 175--187, 2005 Jan

Every day, consumers are exposed to advertising campaigns that attempt to influence their decisions and affect their behavior. Word-of-mouth communication, the informal channels of daily interactions among friends, relatives, coworkers, neighbors, and acquaintances, plays a much more significant role in how consumer behavior is shaped, fashion is introduced, and product reputation is built. Macrolevel simulations that include this kind of social parameter are usually limited to generalized, often simplistic assumptions. In an effort to represent the phenomenon in a semantically coherent way and model it more realistically, we developed an influence-diffusion mechanism that follows agent-based social simulation primitives. The model is realized as a multiagent software platform, which we call Dawn (for distributed agents for water simulation).

@article{2005AthanasiadisIEEECSE,
author={Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={Social influence and water conservation: An agent-based approach},
journal={IEEE Computing in Science and Engineering},
volume={7},
number={1},
pages={175--187},
year={2005},
month={01},
date={2005-01-10},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Social-influence-and-water-conservation-An-agent-based-approach.pdf},
keywords={training},
abstract={Every day, consumers are exposed to advertising campaigns that attempt to influence their decisions and affect their behavior. Word-of-mouth communication, the informal channels of daily interactions among friends, relatives, coworkers, neighbors, and acquaintances, plays a much more significant role in how consumer behavior is shaped, fashion is introduced, and product reputation is built. Macrolevel simulations that include this kind of social parameter are usually limited to generalized, often simplistic assumptions. In an effort to represent the phenomenon in a semantically coherent way and model it more realistically, we developed an influence-diffusion mechanism that follows agent-based social simulation primitives. The model is realized as a multiagent software platform, which we call Dawn (for distributed agents for water simulation).}
}

Dionisis Kehagias, Andreas L. Symeonidis and Pericles A. Mitkas
"Designing Pricing Mechanisms for Autonomous Agents Based on Bid-Forecasting"
Electronic Markets, 15, (1), pp. 53--62, 2005 Jan

Autonomous agents that participate in the electronic market environment introduce an advanced paradigm for realizing automated deliberations over offered prices of auctioned goods. These agents represent humans and their assets, therefore it is critical for them not only to act rationally but also efficiently. By enabling agents to deploy bidding strategies and to compete with each other in a marketplace, a valuable amount of historical data is produced. An optimal dynamic forecasting of the maximum offered bid would enable more gainful behaviours by agents. In this respect, this paper presents a methodology that takes advantage of price offers generated in e-auctions, in order to provide an adequate short-term forecasting schema based on time-series analysis. The forecast is incorporated into the reasoning mechanism of a group of autonomous e-auction agents to improve their bidding behaviour. In order to test the improvement introduced by the proposed method, we set up a test-bed, on which a slightly variant version of the first-price ascending auction is simulated with many buyers and one seller, trading with each other over one item. The results of the proposed methodology are discussed and many possible extensions and improvements are advocated to ensure wide acceptance of the bid-forecasting reasoning mechanism.

@article{2005KehagiasEM,
author={Dionisis Kehagias and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Designing Pricing Mechanisms for Autonomous Agents Based on Bid-Forecasting},
journal={Electronic Markets},
volume={15},
number={1},
pages={53--62},
year={2005},
month={01},
date={2005-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Designing-Pricing-Mechanisms-for-Autonomous-Agents-Based-on-Bid-Forecasting.pdf},
abstract={Autonomous agents that participate in the electronic market environment introduce an advanced paradigm for realizing automated deliberations over offered prices of auctioned goods. These agents represent humans and their assets, therefore it is critical for them not only to act rationally but also efficiently. By enabling agents to deploy bidding strategies and to compete with each other in a marketplace, a valuable amount of historical data is produced. An optimal dynamic forecasting of the maximum offered bid would enable more gainful behaviours by agents. In this respect, this paper presents a methodology that takes advantage of price offers generated in e-auctions, in order to provide an adequate short-term forecasting schema based on time-series analysis. The forecast is incorporated into the reasoning mechanism of a group of autonomous e-auction agents to improve their bidding behaviour. In order to test the improvement introduced by the proposed method, we set up a test-bed, on which a slightly variant version of the first-price ascending auction is simulated with many buyers and one seller, trading with each other over one item. The results of the proposed methodology are discussed and many possible extensions and improvements are advocated to ensure wide acceptance of the bid-forecasting reasoning mechanism.}
}

Pericles A. Mitkas
"Knowledge Discovery for Training Intelligent Agents: Methodology, Tools and Applications"
Lecture Notes in Artificial Intelligence, 3505, pp. 2-18, 2005 May

In this paper we address a relatively young but important area of research: the intersection of agent technology and data mining. This intersection can take two forms: a) the more mundane use of intelligent agents for improved knowledge discovery and b) the use of data mining techniques for producing smarter, more efficient agents. The paper focuses on the second approach. Knowledge, hidden in voluminous data repositories routinely created and maintained by today's applications, can be extracted by data mining. The next step is to transform this knowledge into the inference mechanisms or simply the behavior of agents in multi-agent systems. We call this procedure “agent training.” We define different levels of agent training and we present a software engineering methodology that combines the application of deductive logic for generating intelligence from data with a process for transferring this knowledge into agents. We introduce Agent Academy, an integrated open-source framework, which supports data mining techniques and agent development tools. We also provide several examples of multi-agent systems developed with this approach.

@article{2005MitkasLNAI,
author={Pericles A. Mitkas},
title={Knowledge Discovery for Training Intelligent Agents: Methodology, Tools and Applications},
journal={Lecture Notes in Artificial Intelligence},
volume={3505},
pages={2-18},
year={2005},
month={05},
date={2005-05-30},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Knowledge-Discovery-for-Training-Intelligent-Agents-Methodology-Tools-and-Applications.pdf},
doi={http://dx.doi.org/10.1007/11492870_2},
abstract={In this paper we address a relatively young but important area of research: the intersection of agent technology and data mining. This intersection can take two forms: a) the more mundane use of intelligent agents for improved knowledge discovery and b) the use of data mining techniques for producing smarter, more efficient agents. The paper focuses on the second approach. Knowledge, hidden in voluminous data repositories routinely created and maintained by today's applications, can be extracted by data mining. The next step is to transform this knowledge into the inference mechanisms or simply the behavior of agents in multi-agent systems. We call this procedure “agent training.” We define different levels of agent training and we present a software engineering methodology that combines the application of deductive logic for generating intelligence from data with a process for transferring this knowledge into agents. We introduce Agent Academy, an integrated open-source framework, which supports data mining techniques and agent development tools. We also provide several examples of multi-agent systems developed with this approach.}
}
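The “agent training” procedure summarized in this abstract (extract knowledge from logged data, then transfer it into agent behavior) can be illustrated with a minimal, hypothetical sketch. The names below are not the Agent Academy API, and the “mining” step is reduced to a simple majority-vote policy:

```python
from collections import Counter, defaultdict

def train_agent_policy(history):
    """Distill a policy from logged (observation, action) pairs.

    For each observation, keep the most frequently taken action;
    a deliberately simple stand-in for a real data mining step.
    """
    votes = defaultdict(Counter)
    for observation, action in history:
        votes[observation][action] += 1
    return {obs: c.most_common(1)[0][0] for obs, c in votes.items()}

class TrainedAgent:
    """Agent whose behavior is the mined policy, with a fallback action."""
    def __init__(self, policy, default_action="wait"):
        self.policy = policy
        self.default_action = default_action

    def act(self, observation):
        return self.policy.get(observation, self.default_action)

history = [("price_rising", "bid"), ("price_rising", "bid"),
           ("price_rising", "hold"), ("price_falling", "hold")]
agent = TrainedAgent(train_agent_policy(history))
```

Retraining, in this toy setting, amounts to rerunning `train_agent_policy` on an extended log and swapping the new policy into the live agent.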

Andreas L. Symeonidis, Evangelos Valtos, Serafeim Seroglou and Pericles A. Mitkas
"Biotope: an integrated framework for simulating distributed multiagent computational systems"
IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 35, (3), pp. 420-432, 2005 May

The study of distributed computational systems issues, such as heterogeneity, concurrency, control, and coordination, has yielded a number of models and architectures, which aspire to provide satisfying solutions to each of the above problems. One of the most intriguing and complex classes of distributed systems are computational ecosystems, which add an "ecological" perspective to these issues and introduce the characteristic of self-organization. Extending previous research work on self-organizing communities, we have developed Biotope, which is an agent simulation framework, where each one of its members is dynamic and self-maintaining. The system provides a highly configurable interface for modeling various environments as well as the "living" or computational entities that reside in them, while it introduces a series of tools for monitoring system evolution. Classifier systems and genetic algorithms have been employed for agent learning, while the dispersal distance theory has been adopted for agent replication. The framework has been used for the development of a characteristic demonstrator, where Biotope agents are engaged in well-known vital activities (nutrition, communication, growth, death) directed toward their own self-replication, just like in natural environments. This paper presents an analytical overview of the work conducted and concludes with a methodology for simulating distributed multiagent computational systems.

@article{2005SymeonidisIEEETSMC,
author={Andreas L. Symeonidis and Evangelos Valtos and Serafeim Seroglou and Pericles A. Mitkas},
title={Biotope: an integrated framework for simulating distributed multiagent computational systems},
journal={IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans},
volume={35},
number={3},
pages={420-432},
year={2005},
month={05},
date={2005-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/publications/confaisadmMitkas05.pdf},
keywords={agent-based systems},
abstract={The study of distributed computational systems issues, such as heterogeneity, concurrency, control, and coordination, has yielded a number of models and architectures, which aspire to provide satisfying solutions to each of the above problems. One of the most intriguing and complex classes of distributed systems are computational ecosystems, which add an "ecological" perspective to these issues and introduce the characteristic of self-organization. Extending previous research work on self-organizing communities, we have developed Biotope, which is an agent simulation framework, where each one of its members is dynamic and self-maintaining. The system provides a highly configurable interface for modeling various environments as well as the "living" or computational entities that reside in them, while it introduces a series of tools for monitoring system evolution. Classifier systems and genetic algorithms have been employed for agent learning, while the dispersal distance theory has been adopted for agent replication. The framework has been used for the development of a characteristic demonstrator, where Biotope agents are engaged in well-known vital activities (nutrition, communication, growth, death) directed toward their own self-replication, just like in natural environments. This paper presents an analytical overview of the work conducted and concludes with a methodology for simulating distributed multiagent computational systems.}
}

2005

Books

Andreas Symeonidis and Pericles A. Mitkas
"Agent Intelligence Through Data Mining (Multiagent Systems, Artificial Societies, and Simulated Organizations)"
Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2005 Jul

@book{2005Symeonidis,
author={Andreas Symeonidis and Pericles A. Mitkas},
title={Agent Intelligence Through Data Mining (Multiagent Systems, Artificial Societies, and Simulated Organizations)},
publisher={Springer-Verlag New York, Inc.},
address={Secaucus, NJ, USA},
year={2005},
month={07},
date={2005-07-15}
}

2005

Conference Papers

Ioannis N. Athanasiadis and Pericles A. Mitkas
"A distributed system for managing and diffusing environmental information"
5th International Exhibition and Conference on Environmental Technology (HELECO 05), Environment and Development (HELECO 05), pp. 422--428, ACTA Press, Athens, Greece, 2005 Feb

In an effort to support Environmental Monitoring and Surveillance Centers (EMSC) to fuse, manage and diffuse environmental data in a more efficient manner, we developed a distributed system for managing and diffusing environmental information. The developed system, called AISLE, is an adaptive, intelligent tool for supporting advanced information management services. Its main characteristic is the provision of decision support and information diffusion abilities through electronic services to several users with diverse needs. Specifically, software agents are in charge of integrating and managing environmental data recorded by field sensors or other monitoring devices, along with their diffusion to a wide range of end-user applications, such as environmental databases, terminal applications, or public information services over the internet. The system has been demonstrated in two pilot cases. In the first case, AISLE has been applied for assessing and reporting ambient air quality in Valencia, Spain. In the second case, AISLE was used for monitoring weather conditions in Cyprus.

@inproceedings{2005AthanasiadisHELECO,
author={Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={A distributed system for managing and diffusing environmental information},
booktitle={5th International Exhibition and Conference on Environmental Technology (HELECO 05), Environment and Development (HELECO 05)},
pages={422--428},
publisher={ACTA Press},
address={Athens, Greece},
year={2005},
month={02},
date={2005-02-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-distributed-system-for-managing-and-diffusing-environmental-information.pdf},
keywords={environmental management systems;envirnmental informatics;methods and techniques for environmental monitoring;environmental information management and diffusion;ambient air quality assessment and reporting;weather conditions monitoring;efficient radar management;environmental informatics},
abstract={In an effort to support Environmental Monitoring and Surveillance Centers (EMSC) to fuse, manage and diffuse environmental data in a more efficient manner, we developed a distributed system for managing and diffusing environmental information. The developed system, called AISLE, is an adaptive, intelligent tool for supporting advanced information management services. Its main characteristic is the provision of decision support and information diffusion abilities through electronic services to several users with diverse needs. Specifically, software agents are in charge of integrating and managing environmental data recorded by field sensors or other monitoring devices, along with their diffusion to a wide range of end-user applications, such as environmental databases, terminal applications, or public information services over the internet. The system has been demonstrated in two pilot cases. In the first case, AISLE has been applied for assessing and reporting ambient air quality in Valencia, Spain. In the second case, AISLE was used for monitoring weather conditions in Cyprus.}
}

Ioannis N. Athanasiadis, A. K. Mentes, Pericles A. Mitkas and Yiannis A. Mylopoulos
"A system for evaluating water pricing alternatives in urban areas"
HELECO05, Water Management Section: New Legislative Framework for the Integrated Water Resources Management Track, Technical Chamber of Greece, 2005 Feb

@inproceedings{2005AthanasiadisHELECO05,
author={Ioannis N. Athanasiadis and A. K. Mentes and Pericles A. Mitkas and Yiannis A. Mylopoulos},
title={A system for evaluating water pricing alternatives in urban areas},
booktitle={HELECO05, Water Management Section: New Legislative Framework for the Integrated Water Resources Management Track, Technical Chamber of Greece},
year={2005},
month={02},
date={2005-02-03}
}

Ioannis N. Athanasiadis, Marios Milis, Pericles A. Mitkas and Silas C. Michaelides
"Abacus: A multi-agent system for meteorological radar data management and decision support"
Sixth Intl Symposium on Environmental Software Systems (ISESS05), pp. 183--187, Springer Berlin / Heidelberg, Sesimbra, Portugal, 2005 May

The continuous processing and evaluation of meteorological radar data require significant efforts by scientists, both for data processing, storage, and maintenance, and for data interpretation and visualization. To assist meteorologists and to automate a large part of these tasks, we have designed and developed Abacus, a multi-agent system for managing radar data and providing decision support. Abacus’ agents undertake data management and visualization tasks, while they are also responsible for extracting statistical indicators and assessing current weather conditions. The Abacus agent system identifies potentially hazardous incidents, disseminates preprocessed information over the web, and enables warning services provided via email notifications. In this paper, Abacus’ agent architecture is detailed and agent communication for information diffusion is presented. Focus is also given to the customizable logical rule bases for agent reasoning required in decision support. The platform has been tested with real-world data from the Meteorological Service of Cyprus.

@inproceedings{2005AthanasiadisISESS,
author={Ioannis N. Athanasiadis and Marios Milis and Pericles A. Mitkas and Silas C. Michaelides},
title={Abacus: A multi-agent system for meteorological radar data management and decision support},
booktitle={Sixth Intl Symposium on Environmental Software Systems (ISESS05)},
pages={183--187},
publisher={Springer Berlin / Heidelberg},
address={Sesimbra, Portugal},
year={2005},
month={05},
date={2005-05-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Abacus-A-multi-agent-system-for-meteorological-radar-data-management-and-decision-support.pdf},
keywords={agent-oriented software engineering;environmental management and decision- support systems;doppler radar data monitoring;assessment and reporting;meteorology software applications;distributed decision support},
abstract={The continuous processing and evaluation of meteorological radar data require significant efforts by scientists, both for data processing, storage, and maintenance, and for data interpretation and visualization. To assist meteorologists and to automate a large part of these tasks, we have designed and developed Abacus, a multi-agent system for managing radar data and providing decision support. Abacus’ agents undertake data management and visualization tasks, while they are also responsible for extracting statistical indicators and assessing current weather conditions. The Abacus agent system identifies potentially hazardous incidents, disseminates preprocessed information over the web, and enables warning services provided via email notifications. In this paper, Abacus’ agent architecture is detailed and agent communication for information diffusion is presented. Focus is also given to the customizable logical rule bases for agent reasoning required in decision support. The platform has been tested with real-world data from the Meteorological Service of Cyprus.}
}
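The “customizable logical rule bases” this abstract describes lend themselves to a compact illustration. The sketch below is hypothetical (field names, thresholds, and messages are invented, not taken from Abacus): each rule pairs a condition over radar-derived indicators with a warning message.

```python
# Hypothetical rule base: each rule is a (condition, warning message) pair.
# Thresholds and observation keys are invented for illustration only.
RULES = [
    (lambda obs: obs["reflectivity_dbz"] >= 55, "severe storm cell detected"),
    (lambda obs: obs["rain_rate_mm_h"] >= 30, "heavy rainfall warning"),
]

def evaluate_rules(observation, rules=RULES):
    """Return the warnings whose conditions hold for this observation."""
    return [message for condition, message in rules if condition(observation)]
```

A dissemination agent could then email every message returned by `evaluate_rules` for the latest radar frame; swapping out the `RULES` list is what makes such a rule base “customizable”.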

Ioannis N. Athanasiadis, Andreas Solsbach, Pericles A. Mitkas and Jorge Marx Gómez
"An Agent-based Middleware for Environmental Information Management"
Second Symposium on Information Technologies in Environmental Engineering (ITEE 2005), pp. 1371-1374, ICSC-NAISO Academic Press, Magdeburg, Germany, 2005 Sep

@inproceedings{2005AthanasiadisITEE,
author={Ioannis N. Athanasiadis and Andreas Solsbach and Pericles A. Mitkas and Jorge Marx Gómez},
title={An Agent-based Middleware for Environmental Information Management},
booktitle={Second Symposium on Information Technologies in Environmental Engineering (ITEE 2005)},
pages={1371-1374},
publisher={ICSC-NAISO Academic Press},
address={Magdeburg, Germany},
year={2005},
month={09},
date={2005-09-25},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-Agent-based-Middleware-for-Environmental-Information-Management.pdf},
}

Chrysa Collyda, Sotiris Diplaris, Anastasios Delopoulos, Nikolaos Maglaveras, Pericles A. Mitkas and C. Pappas
"Towards building a model for the unification of distributed and heterogeneous biomedical repositories"
10th International Symposium for Health Information Management Research, Thessaloniki, Greece, 2005 Sep

@inproceedings{2005CollydaISHIMR,
author={Chrysa Collyda and Sotiris Diplaris and Anastasios Delopoulos and Nikolaos Maglaveras and Pericles A. Mitkas and C. Pappas},
title={Towards building a model for the unification of distributed and heterogeneous biomedical repositories},
booktitle={10th International Symposium for Health Information Management Research},
address={Thessaloniki, Greece},
year={2005},
month={09},
date={2005-09-01}
}

Christos Dimou and Pericles A. Mitkas
"Biogrid: An Agent-based Metacomputing Ecosystem"
10th Panhellenic Conference on Informatics, pp. 88--98, Springer-Verlag, Volos, Greece, 2005 Nov

@inproceedings{2005DimouPCI,
author={Christos Dimou and Pericles A. Mitkas},
title={Biogrid: An Agent-based Metacomputing Ecosystem},
booktitle={10th Panhellenic Conference on Informatics},
pages={88--98},
publisher={Springer-Verlag},
address={Volos, Greece},
year={2005},
month={11},
date={2005-11-11},
url={http://issel.ee.auth.gr/wp-content/uploads/tsoumakas-pci2005a.pdf},
}

Sotiris Diplaris, Grigorios Tsoumakas, Pericles A. Mitkas and Ioannis Vlahavas
"Protein classification with multiple algorithms"
10th Panhellenic Conference in Informatics, pp. 448--456, Springer-Verlag, Volos, Greece, 2005 Nov

Nowadays, the number of protein sequences being stored in central protein databases from labs all over the world is constantly increasing. From these proteins only a fraction has been experimentally analyzed in order to detect their structure and hence their function in the corresponding organism. The reason is that experimental determination of structure is labor-intensive and quite time-consuming. Therefore there is the need for automated tools that can classify new proteins to structural families. This paper presents a comparative evaluation of several algorithms that learn such classification models from data concerning patterns of proteins with known structure. In addition, several approaches that combine multiple learning algorithms to increase the accuracy of predictions are evaluated. The results of the experiments provide insights that can help biologists and computer scientists design high-performance protein classification systems of high quality.

@inproceedings{2005DiplarisPCI,
author={Sotiris Diplaris and Grigorios Tsoumakas and Pericles A. Mitkas and Ioannis Vlahavas},
title={Protein classification with multiple algorithms},
booktitle={10th Panhellenic Conference in Informatics},
pages={448--456},
publisher={Springer-Verlag},
address={Volos, Greece},
year={2005},
month={11},
date={2005-11-21},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Protein-Classification-with-Multiple-Algorithms.pdf},
abstract={Nowadays, the number of protein sequences being stored in central protein databases from labs all over the world is constantly increasing. From these proteins only a fraction has been experimentally analyzed in order to detect their structure and hence their function in the corresponding organism. The reason is that experimental determination of structure is labor-intensive and quite time-consuming. Therefore there is the need for automated tools that can classify new proteins to structural families. This paper presents a comparative evaluation of several algorithms that learn such classification models from data concerning patterns of proteins with known structure. In addition, several approaches that combine multiple learning algorithms to increase the accuracy of predictions are evaluated. The results of the experiments provide insights that can help biologists and computer scientists design high-performance protein classification systems of high quality.}
}

H. Eleftherohorinou, Sotiris Diplaris, Pericles A. Mitkas and Georgios Banos
"AGELI: An Integrated Platform for the Assessment of National Genetic Evaluation Results by Learning and Informing"
Interbull Annual Meeting, pp. 183--187, Springer Berlin / Heidelberg, Uppsala, Sweden, 2005 Jun

@inproceedings{2005EleftherohorinouIAM,
author={H. Eleftherohorinou and Sotiris Diplaris and Pericles A. Mitkas and Georgios Banos},
title={AGELI: An Integrated Platform for the Assessment of National Genetic Evaluation Results by Learning and Informing},
booktitle={Interbull Annual Meeting},
pages={183--187},
publisher={Springer Berlin / Heidelberg},
address={Uppsala, Sweden},
year={2005},
month={06},
date={2005-06-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/AGELI-An-Integrated-Platform-for-the-Assessment-of-National-Genetic-Evaluation-Results-by-Learning-and-Informing.pdf},
}

Dionisis Kehagias and Pericles A. Mitkas
"Adaptive pricing functions for open outcry auctions"
Intelligent Agent Technology (IAT), pp. 653-656, IEEE Computer Society, Magdeburg, Germany, 2005 Sep

In agent-mediated marketplaces, autonomous agents deploy automated bidding mechanisms in order to increase revenue for humans. The ability of agents to estimate the next prices to be revealed in an auction, by applying forecasting, is a key element for efficient and successful bidding. In open outcry auctions, such as English and Dutch, information about bidders' behavior is revealed at each round. This paper proposes a bid calculation function based on forecasting of the next price in English and Dutch auctions. The forecasting is based on two linear adaptive filters for stochastic estimation, whose parameters are calculated using a genetic algorithm. In order to test the efficiency of the two bidding methods and to benchmark the performance of the two filters, we conduct a set of experiments and present the results.

@inproceedings{2005KehagiasIAT,
author={Dionisis Kehagias and Pericles A. Mitkas},
title={Adaptive pricing functions for open outcry auctions},
booktitle={Intelligent Agent Technology (IAT)},
pages={653-656},
publisher={IEEE Computer Society},
address={Magdeburg, Germany},
year={2005},
month={09},
date={2005-09-19},
url={http://issel.ee.auth.gr/wp-content/uploads/publications/01565618.pdf},
doi={http://doi.ieeecomputersociety.org/10.1109/IAT.2005.26},
abstract={In agent-mediated marketplaces, autonomous agents deploy automated bidding mechanisms in order to increase revenue for humans. The ability of agents to estimate the next prices to be revealed in an auction, by applying forecasting, is a key element for efficient and successful bidding. In open outcry auctions, such as English and Dutch, information about bidders' behavior is revealed at each round. This paper proposes a bid calculation function based on forecasting of the next price in English and Dutch auctions. The forecasting is based on two linear adaptive filters for stochastic estimation, whose parameters are calculated using a genetic algorithm. In order to test the efficiency of the two bidding methods and to benchmark the performance of the two filters, we conduct a set of experiments and present the results.}
}
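The abstract above pairs two linear adaptive filters with a genetic algorithm that tunes their parameters. The sketch below shows only the adaptive-filter half, as a generic normalized least-mean-squares (NLMS) one-step predictor with hand-picked parameters; the function name and defaults are illustrative, not the paper's filters.

```python
def nlms_forecaster(prices, order=2, mu=0.5, eps=1e-8):
    """One-step-ahead price forecasting with a normalized least-mean-squares
    (NLMS) adaptive linear filter: predict the next price from the last
    `order` prices, then nudge the weights after each auction round."""
    weights = [0.0] * order
    predictions = []
    for t in range(order, len(prices)):
        window = prices[t - order:t]
        prediction = sum(w * x for w, x in zip(weights, window))
        predictions.append(prediction)
        error = prices[t] - prediction
        norm = sum(x * x for x in window) + eps
        # normalized gradient step; stable for 0 < mu < 2
        weights = [w + mu * error * x / norm for w, x in zip(weights, window)]
    return weights, predictions
```

After each round the filter sees its prediction error and adjusts the weights, so on a stationary price stream the one-step forecast converges to the observed price.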

Pericles A. Mitkas
"Knowledge discovery for training intelligent agents"
International Workshop on Autonomous Intelligent Systems: Agents and Data Mining (AIS-ADM 2005), pp. 161--174, Springer Berlin / Heidelberg, St. Petersburg, Russia, 2005 Jun

One of the most interesting issues in agent technology has always been the modeling and enhancement of agent behavior. Numerous approaches exist, attempting to optimally reflect both the inner states, as well as the perceived environment of an agent, in order to provide it either with reactivity or proactivity. Within the context of this paper, an alternative methodology for enhancing agent behavior is presented. The core feature of this methodology is that it exploits knowledge extracted by the use of data mining techniques on historical data, data that describe the actions of agents within the MAS in which they reside. The main issues related to the design, development, and evaluation of such a methodology for predicting agent actions are discussed, while the basic concessions made to enable agent cooperation are outlined. We also present k-Profile, a new data mining mechanism for discovering action profiles and for providing recommendations on agent actions. Finally, indicative experimental results are presented and discussed.

@inproceedings{2005MitakasAIS-ADM,
author={Pericles A. Mitkas},
title={Knowledge discovery for training intelligent agents},
booktitle={International Workshop on Autonomous Intelligent Systems: Agents and Data Mining (AIS-ADM 2005)},
pages={161--174},
publisher={Springer Berlin / Heidelberg},
address={St. Petersburg, Russia},
year={2005},
month={06},
date={2005-06-06},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/Knowledge-Discovery-for-Training-Intelligent-Agents-Methodology-Tools-and-Applications.pdf},
abstract={One of the most interesting issues in agent technology has always been the modeling and enhancement of agent behavior. Numerous approaches exist, attempting to optimally reflect both the inner states, as well as the perceived environment of an agent, in order to provide it either with reactivity or proactivity. Within the context of this paper, an alternative methodology for enhancing agent behavior is presented. The core feature of this methodology is that it exploits knowledge extracted by the use of data mining techniques on historical data, data that describe the actions of agents within the MAS in which they reside. The main issues related to the design, development, and evaluation of such a methodology for predicting agent actions are discussed, while the basic concessions made to enable agent cooperation are outlined. We also present k-Profile, a new data mining mechanism for discovering action profiles and for providing recommendations on agent actions. Finally, indicative experimental results are presented and discussed.}
}

Pericles A. Mitkas, Andreas L. Symeonidis and Ioannis N. Athanasiadis
"A Retraining Methodology for Enhancing Agent Intelligence"
IEEE Intl Conference on Integration of Knowledge Intensive Multi-Agent Systems - KIMAS 05, pp. 422--428, Springer Berlin / Heidelberg, Waltham, MA, USA, 2005 Apr

Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures, as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both design and control of multi-agent systems (MAS), as well as “agent training”. We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of data mining extracted knowledge. In this paper, we present the methodology and tools for agent retraining. Through experimental results with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead to the improvement, in the long run, of agent intelligence.

@inproceedings{2005MitkasKIMAS,
author={Pericles A. Mitkas and Andreas L. Symeonidis and Ioannis N. Athanasiadis},
title={A Retraining Methodology for Enhancing Agent Intelligence},
booktitle={IEEE Intl Conference on Integration of Knowledge Intensive Multi-Agent Systems - KIMAS 05},
pages={422--428},
publisher={Springer Berlin / Heidelberg},
address={Waltham, MA, USA},
year={2005},
month={04},
date={2005-04-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A_retraining_methodology_for_enhancing_agent_intel.pdf},
keywords={retraining},
abstract={Data mining has proven a successful gateway for discovering useful knowledge and for enhancing business intelligence in a range of application fields. Incorporating this knowledge into already deployed applications, though, is highly impractical, since it requires reconfigurable software architectures, as well as human expert consulting. In an attempt to overcome this deficiency, we have developed Agent Academy, an integrated development framework that supports both design and control of multi-agent systems (MAS), as well as "agent training". We define agent training as the automated incorporation of logic structures generated through data mining into the agents of the system. The increased flexibility and cooperation primitives of MAS, augmented with the training and retraining capabilities of Agent Academy, provide a powerful means for the dynamic exploitation of data mining extracted knowledge. In this paper, we present the methodology and tools for agent retraining. Through experimental results with the Agent Academy platform, we demonstrate how the extracted knowledge can be formulated and how retraining can lead to the improvement, in the long run, of agent intelligence.}
}

Fotis E. Psomopoulos and Pericles A. Mitkas
"A protein classification engine based on stochastic finite state automata"
Lecture Series on Computer and Computational Sciences VSP/Brill (Proceedings of the Symposium 35: Computational Methods in Molecular Biology in conjunction with ICCMSE), pp. 1371-1374, Springer-Verlag, Loutraki, Greece, 2005 Oct

Accurate protein classification is one of the major challenges in modern bioinformatics. Motifs that exist in the protein chain can make such a classification possible. A plethora of algorithms to address this problem have been proposed by both the artificial intelligence and the pattern recognition communities. In this paper, a data mining methodology for classification rules induction is proposed. Initially, expert-based protein families are processed to create a new hybrid set of families. Then, a prefix tree acceptor is created from the motifs in the protein chains, and subsequently transformed into a stochastic finite state automaton using the ALERGIA algorithm. Finally, an algorithm is presented for the extraction of classification rules from the automaton.

@inproceedings{2005PsomopoulosICCMSE,
author={Fotis E. Psomopoulos and Pericles A. Mitkas},
title={A protein classification engine based on stochastic finite state automata},
booktitle={Lecture Series on Computer and Computational Sciences VSP/Brill (Proceedings of the Symposium 35: Computational Methods in Molecular Biology in conjunction with ICCMSE)},
pages={1371-1374},
publisher={Springer-Verlag},
address={Loutraki, Greece},
year={2005},
month={10},
date={2005-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-protein-classification-engine-based-on-stochastic-finite-state-automata-.pdf},
keywords={motifs},
abstract={Accurate protein classification is one of the major challenges in modern bioinformatics. Motifs that exist in the protein chain can make such a classification possible. A plethora of algorithms to address this problem have been proposed by both the artificial intelligence and the pattern recognition communities. In this paper, a data mining methodology for classification rules induction is proposed. Initially, expert-based protein families are processed to create a new hybrid set of families. Then, a prefix tree acceptor is created from the motifs in the protein chains, and subsequently transformed into a stochastic finite state automaton using the ALERGIA algorithm. Finally, an algorithm is presented for the extraction of classification rules from the automaton.}
}

Andreas L. Symeonidis, Kyriakos C. Chatzidimitriou, Dionisis Kehagias and Pericles A. Mitkas
"An Intelligent Recommendation Framework for ERP Systems"
AIA 2005: Artificial Intelligence and Applications, pp. 422--428, ACTA Press, Innsbruck, Austria, 2005 Feb

Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. We present an alternative approach for incorporating adaptive business intelligence into the company

@inproceedings{2005SymeonidisAIA,
author={Andreas L. Symeonidis and Kyriakos C. Chatzidimitriou and Dionisis Kehagias and Pericles A. Mitkas},
title={An Intelligent Recommendation Framework for ERP Systems},
booktitle={AIA 2005: Artificial Intelligence and Applications},
pages={422--428},
publisher={ACTA Press},
address={Innsbruck, Austria},
year={2005},
month={02},
date={2005-02-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-Intelligent-Recommendation-Framework-for-ERP-Systems.pdf},
keywords={retraining},
abstract={Enterprise Resource Planning systems efficiently administer all tasks concerning real-time planning and manufacturing, material procurement and inventory monitoring, customer and supplier management. Nevertheless, the incorporation of domain knowledge and the application of adaptive decision making into such systems require extreme customization with a cost that becomes unaffordable, especially in the case of SMEs. We present an alternative approach for incorporating adaptive business intelligence into the company}
}

Andreas L. Symeonidis and Pericles A. Mitkas
"A Methodology for Predicting Agent Behavior by the Use of Data Mining Techniques"
Autonomous Intelligent Systems: Agents and Data Mining, pp. 161--174, Springer Berlin / Heidelberg, St. Petersburg, Russia, 2005 Jun

One of the most interesting issues in agent technology has always been the modeling and enhancement of agent behavior. Numerous approaches exist, attempting to optimally reflect both the inner states, as well as the perceived environment of an agent, in order to provide it either with reactivity or proactivity. Within the context of this paper, an alternative methodology for enhancing agent behavior is presented. The core feature of this methodology is that it exploits knowledge extracted by the use of data mining techniques on historical data, data that describe the actions of agents within the MAS in which they reside. The main issues related to the design, development, and evaluation of such a methodology for predicting agent actions are discussed, while the basic concessions made to enable agent cooperation are outlined. We also present k-Profile, a new data mining mechanism for discovering action profiles and for providing recommendations on agent actions. Finally, indicative experimental results are presented and discussed.

@inproceedings{2005SymeonidisAISADM,
author={Andreas L. Symeonidis and Pericles A. Mitkas},
title={A Methodology for Predicting Agent Behavior by the Use of Data Mining Techniques},
booktitle={Autonomous Intelligent Systems: Agents and Data Mining},
pages={161--174},
publisher={Springer Berlin / Heidelberg},
address={St. Petersburg, Russia},
year={2005},
month={06},
date={2005-06-06},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A-Methodology-for-Predicting-Agent-Behavior-by-the-Use-of-Data-Mining-Techniques.pdf},
abstract={One of the most interesting issues in agent technology has always been the modeling and enhancement of agent behavior. Numerous approaches exist, attempting to optimally reflect both the inner states, as well as the perceived environment of an agent, in order to provide it either with reactivity or proactivity. Within the context of this paper, an alternative methodology for enhancing agent behavior is presented. The core feature of this methodology is that it exploits knowledge extracted by the use of data mining techniques on historical data, data that describe the actions of agents within the MAS in which they reside. The main issues related to the design, development, and evaluation of such a methodology for predicting agent actions are discussed, while the basic concessions made to enable agent cooperation are outlined. We also present k-Profile, a new data mining mechanism for discovering action profiles and for providing recommendations on agent actions. Finally, indicative experimental results are presented and discussed.}
}

2004

Inproceedings Papers

Ioannis N. Athanasiadis and Pericles A. Mitkas
"Software Agents for Assessing Environmental Quality: Advantages and Limitations"
18th International Conference Informatics for Environmental Protection: Sharing (EnviroInfo 2004), Geneva, Switzerland, 2004 Oct

@inproceedings{2004AthanasiadisICIEP,
author={Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={Software Agents for Assessing Environmental Quality: Advantages and Limitations},
booktitle={18th International Conference Informatics for Environmental Protection: Sharing (EnviroInfo 2004)},
address={Geneva, Switzerland},
year={2004},
month={10},
date={2004-10-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Software-Agents-for-Assessing-Environmental-Quality.pdf},
keywords={agent-based simulation}
}

Ioannis N. Athanasiadis and Pericles A. Mitkas
"Applying agent technology in environmental management systems under real-time constraints"
Second Biennial Meeting of the International Environmental Modelling and Software Society at Environmental Informatics towards Citizen-centered Electronic Information Services Workshop, pp. 54--60, Osnabruck, Germany, 2004 Jun

Changes in the natural environment affect our quality of life. Thus, government, industry, and the public call for integrated environmental management systems capable of supplying all parties with validated, accurate and timely information. The ‘near real-time’ constraint reveals two critical problems in delivering such tasks: the low quality or absence of data, and the changing conditions over a long period. These problems are common in environmental monitoring networks and although harmless for off-line studies, they may be serious for near real-time systems. In this work, we discuss the problem space of near real-time reporting Environmental Management Systems and present a methodology for applying agent technology in this area. The proposed methodology applies powerful tools from the IT sector, such as software agents and machine learning, and identifies the potential use for solving real-world problems. An experimental agent-based prototype developed for monitoring and assessing air-quality in near real time is presented. A community of software agents is assigned to monitor and validate measurements coming from several sensors, to assess air-quality, and, finally, to deliver air quality indicators and alarms to appropriate recipients, when needed, over the web. The architecture of the developed system is presented and the deployment of a real-world test case is demonstrated.

@inproceedings{2004AthanasiadisIEMSS,
author={Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={Applying agent technology in environmental management systems under real-time constraints},
booktitle={Second Biennial Meeting of the International Environmental Modelling and Software Society at Environmental Informatics towards Citizen-centered Electronic Information Services Workshop},
pages={54--60},
address={Osnabruck, Germany},
year={2004},
month={06},
date={2004-06-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Applying-agent-technology-in-environmental-management-systems-under-real-time-constraints.pdf},
keywords={environmental monitoring systems;decision support systems},
abstract={Changes in the natural environment affect our quality of life. Thus, government, industry, and the public call for integrated environmental management systems capable of supplying all parties with validated, accurate and timely information. The ‘near real-time’ constraint reveals two critical problems in delivering such tasks: the low quality or absence of data, and the changing conditions over a long period. These problems are common in environmental monitoring networks and although harmless for off-line studies, they may be serious for near real-time systems. In this work, we discuss the problem space of near real-time reporting Environmental Management Systems and present a methodology for applying agent technology in this area. The proposed methodology applies powerful tools from the IT sector, such as software agents and machine learning, and identifies the potential use for solving real-world problems. An experimental agent-based prototype developed for monitoring and assessing air-quality in near real time is presented. A community of software agents is assigned to monitor and validate measurements coming from several sensors, to assess air-quality, and, finally, to deliver air quality indicators and alarms to appropriate recipients, when needed, over the web. The architecture of the developed system is presented and the deployment of a real-world test case is demonstrated.}
}

Ioannis N. Athanasiadis and Pericles A. Mitkas
"Supporting the Decision-Making Process in Environmental Monitoring Systems with Knowledge Discovery Techniques"
KDnet Symposium Knowledge Discovery for Environmental Management, pp. 1--12, Bonn, Germany, 2004 Jun

In this paper an empirical approach for supporting the decision making process involved in an Environmental Management System (EMS) that monitors air quality and triggers air quality alerts is presented. Data uncertainty problems associated with an air quality monitoring network, such as measurement validation and estimation of missing or erroneous values, are addressed through the exploitation of data mining techniques. Exhaustive experiments with real world data have produced trustworthy predictive models, capable of supporting the decision-making process. The outstanding performance of the induced predictive models indicates the added value of this approach for supporting the decision making process in an EMS.

@inproceedings{2004AthanasiadisSKDEM,
author={Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={Supporting the Decision-Making Process in Environmental Monitoring Systems with Knowledge Discovery Techniques},
booktitle={KDnet Symposium Knowledge Discovery for Environmental Management},
pages={1--12},
address={Bonn, Germany},
year={2004},
month={06},
date={2004-06-03},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Supporting-the-Decision-Making-Process-in-Environmental-Monitoring-Systems-with-Knowledge-Discovery-Techniques.pdf},
abstract={In this paper an empirical approach for supporting the decision making process involved in an Environmental Management System (EMS) that monitors air quality and triggers air quality alerts is presented. Data uncertainty problems associated with an air quality monitoring network, such as measurement validation and estimation of missing or erroneous values, are addressed through the exploitation of data mining techniques. Exhaustive experiments with real world data have produced trustworthy predictive models, capable of supporting the decision-making process. The outstanding performance of the induced predictive models indicates the added value of this approach for supporting the decision making process in an EMS.}
}

Sotiris Diplaris, Andreas Symeonidis, Pericles A. Mitkas, Georgios Banos and Z. Abas
"An Alarm Firing System for National Genetic Evaluation Quality Control"
Interbull Annual Meeting, pp. 146--150, Tunis, Tunisia, 2004 May

@inproceedings{2004DiplarisIAM,
author={Sotiris Diplaris and Andreas Symeonidis and Pericles A. Mitkas and Georgios Banos and Z. Abas},
title={An Alarm Firing System for National Genetic Evaluation Quality Control},
booktitle={Interbull Annual Meeting},
pages={146--150},
address={Tunis, Tunisia},
year={2004},
month={05},
date={2004-05-30},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-Alarm-Firing-System-for-National-Genetic-Evaluation-Quality-Control.pdf}
}

D. Kehagias, Kyriakos C. Chatzidimitriou, Andreas Symeonidis and Pericles A. Mitkas
"Information Agents Cooperating with Heterogeneous Data Sources for Customer-Order Management"
Paper presented at the 19th Annual ACM Symposium on Applied Computing (SAC 2004), pp. 52--57, Nicosia, Cyprus, 2004 Mar

As multi-agent systems and information agents obtain an increasing acceptance by application developers, existing legacy Enterprise Resource Planning (ERP) systems still provide the main source of data used in customer, supplier and inventory resource management. In this paper we present a multi-agent system, comprised of information agents, which cooperates with a legacy ERP in order to carry out orders posted by customers in an enterprise environment. Our system is enriched by the capability of producing recommendations to the interested customer through agent cooperation. At first, we address the problem of information workload in an enterprise environment and explore the opportunity of a plausible solution. Secondly we present the architecture of our system and the types of agents involved in it. Finally, we show how it manipulates retrieved information for efficient and facile customer-order management and illustrate results derived from real data.

@inproceedings{2004KehagiasSAC,
author={D. Kehagias and Kyriakos C. Chatzidimitriou and Andreas Symeonidis and Pericles A. Mitkas},
title={Information Agents Cooperating with Heterogeneous Data Sources for Customer-Order Management},
booktitle={Paper presented at the 19th Annual ACM Symposium on Applied Computing (SAC 2004)},
pages={52--57},
address={Nicosia, Cyprus},
year={2004},
month={03},
date={2004-03-14},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Information-Agents-Cooperating-with-Heterogeneous-Data-Sources-for-Customer-Order-Management.pdf},
keywords={information agents;enterprise resource planning;customer-order management},
abstract={As multi-agent systems and information agents obtain an increasing acceptance by application developers, existing legacy Enterprise Resource Planning (ERP) systems still provide the main source of data used in customer, supplier and inventory resource management. In this paper we present a multi-agent system, comprised of information agents, which cooperates with a legacy ERP in order to carry out orders posted by customers in an enterprise environment. Our system is enriched by the capability of producing recommendations to the interested customer through agent cooperation. At first, we address the problem of information workload in an enterprise environment and explore the opportunity of a plausible solution. Secondly we present the architecture of our system and the types of agents involved in it. Finally, we show how it manipulates retrieved information for efficient and facile customer-order management and illustrate results derived from real data.}
}

Fotis E. Psomopoulos, Sotiris Diplaris and Pericles A. Mitkas
"A finite state automata based technique for protein classification rules induction"
Proceedings of the Second European Workshop on Data Mining and Text Mining in Bioinformatics (in conjunction with ECML/PKDD), pp. 54--60, Pisa, Italy, 2004 Sep

An important challenge in modern functional proteomics is the prediction of the functional behavior of proteins. Motifs in protein chains can make such a prediction possible. The correlation between protein properties and their motifs is not always obvious, since more than one motif can exist within a protein chain. Thus, the behavior of a protein is a function of many motifs, where some overpower others. In this paper a data-mining approach for motif-based classification of proteins is presented. A new classification rules inducing algorithm that exploits finite state automata is introduced. First, data are modeled in terms of prefix tree acceptors, which are later merged into finite state automata. Finally, we propose a new algorithm for the induction of protein classification rules from finite state automata. The data-mining model is trained and tested using various protein and protein class subsets, as well as the whole dataset of known proteins and protein classes. Results indicate the efficiency of our technique compared to other known data-mining algorithms.

@inproceedings{2004PsomopoulosPSEWDMTMB,
author={Fotis E. Psomopoulos and Sotiris Diplaris and Pericles A. Mitkas},
title={A finite state automata based technique for protein classification rules induction},
booktitle={Proceedings of the Second European Workshop on Data Mining and Text Mining in Bioinformatics (in conjunction with ECML/PKDD)},
pages={54--60},
address={Pisa, Italy},
year={2004},
month={09},
date={2004-09-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/A_finite_state_automata_based_technique_for_protei.pdf},
keywords={proteomics},
abstract={An important challenge in modern functional proteomics is the prediction of the functional behavior of proteins. Motifs in protein chains can make such a prediction possible. The correlation between protein properties and their motifs is not always obvious, since more than one motif can exist within a protein chain. Thus, the behavior of a protein is a function of many motifs, where some overpower others. In this paper a data-mining approach for motif-based classification of proteins is presented. A new classification rules inducing algorithm that exploits finite state automata is introduced. First, data are modeled in terms of prefix tree acceptors, which are later merged into finite state automata. Finally, we propose a new algorithm for the induction of protein classification rules from finite state automata. The data-mining model is trained and tested using various protein and protein class subsets, as well as the whole dataset of known proteins and protein classes. Results indicate the efficiency of our technique compared to other known data-mining algorithms.}
}

2003

Journal Articles

Andreas L. Symeonidis, Dionisis Kehagias and Pericles A. Mitkas
"Intelligent Policy Recommendations on Enterprise Resource Planning by the use of agent technology and data mining techniques"
Expert Systems with Applications, 25, (4), pp. 589-602, 2003 Jan

Enterprise Resource Planning systems tend to deploy Supply Chain Management and/or Customer Relationship Management techniques, in order to successfully fuse information to customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated with existing knowledge. Advancing on the way the above mentioned techniques apply on ERP systems, we have developed a multi-agent system that introduces adaptive intelligence as a powerful add-on for ERP software customization. The system can be thought of as a recommendation engine, which takes advantage of knowledge gained through the use of data mining techniques, and incorporates it into the resulting company selling policy. The intelligent agents of the system can be periodically retrained as new information is added to the ERP. In this paper, we present the architecture and development details of the system, and demonstrate its application on a real test case.

@article{2003SymeonidisESWA,
author={Andreas L. Symeonidis and Dionisis Kehagias and Pericles A. Mitkas},
title={Intelligent Policy Recommendations on Enterprise Resource Planning by the use of agent technology and data mining techniques},
journal={Expert Systems with Applications},
volume={25},
number={4},
pages={589-602},
year={2003},
month={01},
date={2003-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Intelligent-policy-recommendations-on-enterprise-resource-planningby-the-use-of-agent-technology-and-data-mining-techniques.pdf},
keywords={agents},
abstract={Enterprise Resource Planning systems tend to deploy Supply Chain Management and/or Customer Relationship Management techniques, in order to successfully fuse information to customers, suppliers, manufacturers and warehouses, and therefore minimize system-wide costs while satisfying service level requirements. Although efficient, these systems are neither versatile nor adaptive, since newly discovered customer trends cannot be easily integrated with existing knowledge. Advancing on the way the above mentioned techniques apply on ERP systems, we have developed a multi-agent system that introduces adaptive intelligence as a powerful add-on for ERP software customization. The system can be thought of as a recommendation engine, which takes advantage of knowledge gained through the use of data mining techniques, and incorporates it into the resulting company selling policy. The intelligent agents of the system can be periodically retrained as new information is added to the ERP. In this paper, we present the architecture and development details of the system, and demonstrate its application on a real test case.}
}

2003

Inproceedings Papers

Ioannis N. Athanasiadis, Pericles A. Mitkas, G. B. Laleci and Y. Kabak
"Embedding data-driven decision strategies on software agents: The case of a Multi-Agent System for Monitoring Air-Quality Indexes"
10th ISPE International Conference on Concurrent Engineering: Research and Applications, pp. 23--30, Madeira, Portugal, 2003 Jul

This paper describes the design and deployment of an agent community, which is responsible for monitoring and assessing air quality, based on measurements generated by a meteorological station. Software agents acting as mediators or decision makers deliver validated information to the appropriate destinations. We outline the procedure for creating agent ontologies, agent types, and, finally, for training agents based on historical data volumes. The C4.5 algorithm for decision tree extraction is applied on meteorological and air-pollutant measurements. The decision models extracted are related to the validation of incoming measurements and to the estimation of missing or erroneous measurements. Emphasis is given on the agent training process, which must embed these data-driven decision models on software agents in a simple and effortless way. We developed a prototype system, which demonstrates the advantages of agent-based solutions for intelligent environmental applications.

@inproceedings{2003AthanasiadisISPE,
author={Ioannis N. Athanasiadis and Pericles A. Mitkas and G. B. Laleci and Y. Kabak},
title={Embedding data-driven decision strategies on software agents: The case of a Multi-Agent System for Monitoring Air-Quality Indexes},
booktitle={10th ISPE International Conference on Concurrent Engineering: Research and Applications},
pages={23--30},
address={Madeira, Portugal},
year={2003},
month={07},
date={2003-07-29},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Embedding-data-driven-decision-strategies-on-software-agents-The-case-of-a-Multi-Agent-System-for-Monitoring-Air-Quality-Indexes.pdf},
keywords={agent academy},
abstract={This paper describes the design and deployment of an agent community, which is responsible for monitoring and assessing air quality, based on measurements generated by a meteorological station. Software agents acting as mediators or decision makers deliver validated information to the appropriate destinations. We outline the procedure for creating agent ontologies, agent types, and, finally, for training agents based on historical data volumes. The C4.5 algorithm for decision tree extraction is applied on meteorological and air-pollutant measurements. The decision models extracted are related to the validation of incoming measurements and to the estimation of missing or erroneous measurements. Emphasis is given on the agent training process, which must embed these data-driven decision models on software agents in a simple and effortless way. We developed a prototype system, which demonstrates the advantages of agent-based solutions for intelligent environmental applications.}
}

Ioannis N. Athanasiadis, V. G. Kaburlasos, Pericles A. Mitkas and V. Petridis
"Applying Machine Learning Techniques on Air Quality Data for Real-Time Decision Support"
First International NAISO Symposium on Information Technologies in Environmental Engineering (ITEE 2003), pp. 11--18, Gdansk, Poland, 2003 Jun

Fairly rapid environmental changes call for continuous surveillance and decision making, areas where IT technologies can be valuable. In the aforementioned context this work describes the application of a novel classifier, namely σ-FLNMAP, for estimating the ozone concentration level in the atmosphere. In a series of experiments on meteorological and air pollutants data, the σ-FLNMAP classifier compares favorably with both back-propagation neural networks and the C4.5 algorithm; moreover σ-FLNMAP induces only a few rules from the data. The σ-FLNMAP classifier can be implemented as either a neural network or a decision tree. We also discuss the far-reaching potential of σ-FLNMAP in IT applications due to its applicability on partially (lattice) ordered data.

@inproceedings{2003AthanasiadisITEE,
author={Ioannis N. Athanasiadis and V. G. Kaburlasos and Pericles A. Mitkas and V. Petridis},
title={Applying Machine Learning Techniques on Air Quality Data for Real-Time Decision Support},
booktitle={First International NAISO Symposium on Information Technologies in Environmental Engineering (ITEE 2003)},
pages={11--18},
address={Gdansk, Poland},
year={2003},
month={06},
date={2003-06-24},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Applying-Machine-Learning-Techniques-on-Air-Quality-Data-for-Real-Time-Decision-Support.pdf},
keywords={concurrent engineering;intelligent agents.},
abstract={Fairly rapid environmental changes call for continuous surveillance and decision making, areas where IT technologies can be valuable. In the aforementioned context this work describes the application of a novel classifier, namely σ-FLNMAP, for estimating the ozone concentration level in the atmosphere. In a series of experiments on meteorological and air pollutants data, the σ-FLNMAP classifier compares favorably with both back-propagation neural networks and the C4.5 algorithm; moreover σ-FLNMAP induces only a few rules from the data. The σ-FLNMAP classifier can be implemented as either a neural network or a decision tree. We also discuss the far-reaching potential of σ-FLNMAP in IT applications due to its applicability on partially (lattice) ordered data.}
}

Sotiris Diplaris, Andreas L. Symeonidis, Pericles A. Mitkas, Georgios Banos and Z. Abas
"Quality Control of National Genetic Eva luation Results Using Data-Mining Techniques; A Progress Report"
Interbull Annual Meeting, pp. 8--15, Rome, Italy, 2003 Aug

@inproceedings{2003BanosIAM,
author={Sotiris Diplaris and Andreas L. Symeonidis and Pericles A. Mitkas and Georgios Banos and Z. Abas},
title={Quality Control of National Genetic Evaluation Results Using Data-Mining Techniques; A Progress Report},
booktitle={Interbull Annual Meeting},
pages={8--15},
address={Rome, Italy},
year={2003},
month={08},
date={2003-08-25},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Quality-Control-of-National-Genetic-Eva-luation-Results-Using-Data-Mining-Techniques-A-Progress-Report.pdf},
keywords={agent academy},
}

Gerasimos Hatzidamianos, Sotiris Diplaris, Ioannis N. Athanasiadis and Pericles A. Mitkas
"GenMiner: A data mining tool for protein analysis"
9th Panhellenic Conference in Informatics, pp. 346--360, Thessaloniki, Greece, 2003 Nov

We present an integrated tool for the preprocessing and analysis of genetic data through data mining. Our goal is the prediction of the functional behavior of proteins, a critical problem in functional genomics. In recent years, many programming approaches have been developed for the identification of short amino-acid chains, which are included in families of related proteins. These chains are called motifs and they are widely used for the prediction of a protein's behavior, since the latter depends on them. The idea of using data mining techniques stems from the sheer size of the problem. Since every protein consists of a specific number of motifs, some stronger than others, the identification of the properties of a protein requires the examination of innumerable combinations. The presence or absence of stronger motifs affects the way in which a protein reacts. GenMiner is a preprocessing software tool that can receive data from three major protein databases and transform them into a form suitable for input to the WEKA data mining suite. A decision tree model was created using the derived training set and an efficiency test was conducted. Finally, the model was applied to unknown proteins. Our experiments have shown that the use of the decision tree model for mining protein data is an efficient and easy-to-implement solution, since it possesses a high degree of parameterization and can therefore be used in a plethora of cases.

@inproceedings{2003HatzidamianosPCI,
author={Gerasimos Hatzidamianos and Sotiris Diplaris and Ioannis N. Athanasiadis and Pericles A. Mitkas},
title={GenMiner: A data mining tool for protein analysis},
booktitle={9th Panhellenic Conference in Informatics},
pages={346--360},
address={Thessaloniki, Greece},
year={2003},
month={11},
date={2003-11-21},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/GenMiner-A-data-mining-tool-for-protein-analysis.pdf},
keywords={agent academy},
abstract={We present an integrated tool for the preprocessing and analysis of genetic data through data mining. Our goal is the prediction of the functional behavior of proteins, a critical problem in functional genomics. In recent years, many programming approaches have been developed for the identification of short amino-acid chains, which are included in families of related proteins. These chains are called motifs and they are widely used for the prediction of a protein's behavior, since the latter depends on them. The idea of using data mining techniques stems from the sheer size of the problem. Since every protein consists of a specific number of motifs, some stronger than others, the identification of the properties of a protein requires the examination of innumerable combinations. The presence or absence of stronger motifs affects the way in which a protein reacts. GenMiner is a preprocessing software tool that can receive data from three major protein databases and transform them into a form suitable for input to the WEKA data mining suite. A decision tree model was created using the derived training set and an efficiency test was conducted. Finally, the model was applied to unknown proteins. Our experiments have shown that the use of the decision tree model for mining protein data is an efficient and easy-to-implement solution, since it possesses a high degree of parameterization and can therefore be used in a plethora of cases.}
}

G. Milis, Andreas L. Symeonidis and Pericles A. Mitkas
"Ergasiognomon: A Model System of Advanced Digital Services Designed and Developed to Support the Job Marketplace"
9th Panhellenic Conference in Informatics, pp. 346--360, Thessaloniki, Greece, 2003 Nov

The continuous expansion of the Internet has enabled the development of a wide range of advanced digital services. Real-time data diffusion has eliminated processing bottlenecks and has led to fast, easy, and no-cost communication. This primitive has been widely exploited in the process of job searching. Numerous systems have been developed, offering job candidates the opportunity to browse vacancies, submit resumes, and even contact the most appealing employers. Although effective, most of these systems are characterized by their simplicity, acting more like an enhanced bulletin board than an integrated, fully functional system. Even for the more advanced of these systems, user interaction is obligatory in order to couple job seekers with job providers; thus continuous supervision of the process is unavoidable. Improving on the way primitive job recruitment techniques are applied in Internet-based systems, and addressing their lack of efficiency and interactivity, we have developed a robust software system that employs intelligent techniques for coupling candidates and jobs, according to the former's skills and the latter's requirements. A thorough analysis of the system specifications has been conducted, and all issues concerning information retrieval and data filtering, coupling intelligence, storage, security, user interaction, and ease of use have been integrated into one web-based job portal.

@inproceedings{2003MilisPCI,
author={G. Milis and Andreas L. Symeonidis and Pericles A. Mitkas},
title={Ergasiognomon: A Model System of Advanced Digital Services Designed and Developed to Support the Job Marketplace},
booktitle={9th Panhellenic Conference in Informatics},
pages={346--360},
address={Thessaloniki, Greece},
year={2003},
month={11},
date={2003-11-21},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Ergasiognomon-A-Model-System-of-Advanced-Digital-Services-Designed-and-Developed-to-Support-the-Job-Marketplace.pdf},
keywords={agent academy},
abstract={The continuous expansion of the Internet has enabled the development of a wide range of advanced digital services. Real-time data diffusion has eliminated processing bottlenecks and has led to fast, easy, and no-cost communication. This primitive has been widely exploited in the process of job searching. Numerous systems have been developed, offering job candidates the opportunity to browse vacancies, submit resumes, and even contact the most appealing employers. Although effective, most of these systems are characterized by their simplicity, acting more like an enhanced bulletin board than an integrated, fully functional system. Even for the more advanced of these systems, user interaction is obligatory in order to couple job seekers with job providers; thus continuous supervision of the process is unavoidable. Improving on the way primitive job recruitment techniques are applied in Internet-based systems, and addressing their lack of efficiency and interactivity, we have developed a robust software system that employs intelligent techniques for coupling candidates and jobs, according to the former's skills and the latter's requirements. A thorough analysis of the system specifications has been conducted, and all issues concerning information retrieval and data filtering, coupling intelligence, storage, security, user interaction, and ease of use have been integrated into one web-based job portal.}
}

Pericles A. Mitkas, Dionisis Kehagias, Andreas L. Symeonidis and Ioannis N. Athanasiadis
"A Framework for Constructing Multi-Agent Applications and Training Intelligent Agents"
4th International Workshop on Agent-Oriented Software Engineering (AOSE-2003), Autonomous Agents \& Multi-Agent Systems (AAMAS 2003), pp. 96--109, Melbourne, Australia, 2003 Jun

As the agent-oriented paradigm is reaching a significant level of acceptance among software developers, there is still a lack of integrated high-level abstraction tools for the design and development of agent-based applications. In an effort to mitigate this deficiency, we introduce Agent Academy, an integrated development framework, itself implemented as a multi-agent system, that supports, in a single tool, the design of agent behaviours and reusable agent types, the definition of ontologies, and the instantiation of single agents or multi-agent communities. In addition to these characteristics, our framework goes deeper into agents by implementing a mechanism for embedding rule-based reasoning into them. We call this procedure «agent training»; it is realized by the application of AI techniques for knowledge discovery to application-specific data, which may be available to the agent developer. In this respect, Agent Academy provides an easy-to-use facility that encourages the replacement of existing, traditionally developed applications by new ones that follow the agent-orientation paradigm.

@inproceedings{2003MitkasAOSE,
author={Pericles A. Mitkas and Dionisis Kehagias and Andreas L. Symeonidis and Ioannis N. Athanasiadis},
title={A Framework for Constructing Multi-Agent Applications and Training Intelligent Agents},
booktitle={4th International Workshop on Agent-Oriented Software Engineering (AOSE-2003), Autonomous Agents \& Multi-Agent Systems (AAMAS 2003)},
pages={96--109},
address={Melbourne, Australia},
year={2003},
month={06},
date={2003-06-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/A-Framework-for-Constructing-Multi-Agent-Applications-and-Training-Intelligent-Agents.pdf},
keywords={concurrent engineering;intelligent agents.},
abstract={As the agent-oriented paradigm is reaching a significant level of acceptance among software developers, there is still a lack of integrated high-level abstraction tools for the design and development of agent-based applications. In an effort to mitigate this deficiency, we introduce Agent Academy, an integrated development framework, itself implemented as a multi-agent system, that supports, in a single tool, the design of agent behaviours and reusable agent types, the definition of ontologies, and the instantiation of single agents or multi-agent communities. In addition to these characteristics, our framework goes deeper into agents by implementing a mechanism for embedding rule-based reasoning into them. We call this procedure «agent training»; it is realized by the application of AI techniques for knowledge discovery to application-specific data, which may be available to the agent developer. In this respect, Agent Academy provides an easy-to-use facility that encourages the replacement of existing, traditionally developed applications by new ones that follow the agent-orientation paradigm.}
}

Pericles A. Mitkas, Dionisis Kehagias, Andreas L. Symeonidis and Ioannis N. Athanasiadis
"Agent Academy: An integrated tool for developing multi-agent systems and embedding decision structures into agents"
First European Workshop on Multi-Agent Systems (EUMAS 2003), Oxford, UK, 2003 Dec

In this paper we present Agent Academy, a framework that enables software developers to quickly develop multi-agent applications when prior historical data relevant to a desired rule-based behaviour are available. Agent Academy is itself implemented as a multi-agent system and supports, in a single tool, the design of agent behaviours and reusable agent types, the definition of ontologies, and the instantiation of single agents or multi-agent communities. Once an agent has been designed within the framework, the agent developer can create a specific ontology that describes the historical data. In this way, agents become capable of embedded rule-based reasoning. We call this procedure «agent training»; it is realized by the application of data mining and knowledge discovery techniques to the application-specific historical data. From this point of view, Agent Academy provides a tool both for creating multi-agent systems and for embedding rule-based decision structures into one or more of the participating agents.

@inproceedings{2003MitkasEUMAS,
author={Pericles A. Mitkas and Dionisis Kehagias and Andreas L. Symeonidis and Ioannis N. Athanasiadis},
title={Agent Academy: An integrated tool for developing multi-agent systems and embedding decision structures into agents},
booktitle={First European Workshop on Multi-Agent Systems (EUMAS 2003)},
address={Oxford, UK},
year={2003},
month={12},
date={2003-12-18},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Agent-Academy-An-integrated-tool-for-developing-multi-agent-systems-and-embedding-decision-structures-into-agents.pdf},
keywords={agent academy},
abstract={In this paper we present Agent Academy, a framework that enables software developers to quickly develop multi-agent applications when prior historical data relevant to a desired rule-based behaviour are available. Agent Academy is itself implemented as a multi-agent system and supports, in a single tool, the design of agent behaviours and reusable agent types, the definition of ontologies, and the instantiation of single agents or multi-agent communities. Once an agent has been designed within the framework, the agent developer can create a specific ontology that describes the historical data. In this way, agents become capable of embedded rule-based reasoning. We call this procedure «agent training»; it is realized by the application of data mining and knowledge discovery techniques to the application-specific historical data. From this point of view, Agent Academy provides a tool both for creating multi-agent systems and for embedding rule-based decision structures into one or more of the participating agents.}
}

Pericles A. Mitkas, Andreas Symeonidis, Dionisis Kehagias and Ioannis N. Athanasiadis
"Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering"
10th ISPE International Conference on Concurrent Engineering: Research and Applications, pp. 11--18, Madeira, Portugal, 2003 Jul

Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents' intelligence can range from rudimentary sensor monitoring and data reporting to more advanced forms of decision making and autonomous behaviour. The behaviour and intelligence of each agent in the community can be obtained by performing Data Mining on available application data and the respective knowledge domain. We have developed Agent Academy (AA), a software platform for the design, creation, and deployment of MAS, which combines the power of knowledge discovery algorithms with the versatility of agents. Using this platform, we illustrate how agents, equipped with a data-driven inference engine, can be dynamically and continuously trained. We also discuss three prototype MAS developed with AA.

@inproceedings{2003MitkasISPE,
author={Pericles A. Mitkas and Andreas Symeonidis and Dionisis Kehagias and Ioannis N. Athanasiadis},
title={Application of Data Mining and Intelligent Agent Technologies to Concurrent Engineering},
booktitle={10th ISPE International Conference on Concurrent Engineering: Research and Applications},
pages={11--18},
address={Madeira, Portugal},
year={2003},
month={07},
date={2003-07-26},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/Application-of-Data-Mining-and-Intelligent-Agent-Technologies-to-Concurrent-Engineering.pdf},
keywords={concurrent engineering;intelligent agents.},
abstract={Software agent technology has matured enough to produce intelligent agents, which can be used to control a large number of Concurrent Engineering tasks. Multi-Agent Systems (MAS) are communities of agents that exchange information and data in the form of messages. The agents' intelligence can range from rudimentary sensor monitoring and data reporting to more advanced forms of decision making and autonomous behaviour. The behaviour and intelligence of each agent in the community can be obtained by performing Data Mining on available application data and the respective knowledge domain. We have developed Agent Academy (AA), a software platform for the design, creation, and deployment of MAS, which combines the power of knowledge discovery algorithms with the versatility of agents. Using this platform, we illustrate how agents, equipped with a data-driven inference engine, can be dynamically and continuously trained. We also discuss three prototype MAS developed with AA.}
}

2002

Inproceedings Papers

Z. Abas, G. Banos, Pericles A. Mitkas, P. Saragiotis and I. Maltaris
"AMNOS: An Integrated Web-Based Platform for Dairy Sheep Breeding Management"
7th World Congress on Genetics Applied to Livestock Production, pp. 757--764, Montpellier, France, 2002 Aug

The objective of this paper is to describe AMNOS, an integrated web-based platform developed to record, monitor, evaluate and manage the dairy sheep population of the Chios breed in Greece. The key component of the platform is a database with several relations operating at the flock and individual animal level. The system is based on Microsoft SQL Server. Dynamic web pages are generated using the Microsoft ActiveX Data Objects technology. Business logic was implemented in ASP pages, which are also responsible for creating the HTML pages sent to the user's browser. A series of conventions and rules have been added in order to ensure incoming data integrity. The key advantages of AMNOS are accessibility and ease of management. User (sheep producer) participation is currently being solicited amongst members of the Breeders Cooperative that administers the system.

@inproceedings{2002AbasWCGALP,
author={Z. Abas and G. Banos and Pericles A. Mitkas and P. Saragiotis and I. Maltaris},
title={AMNOS: An Integrated Web-Based Platform for Dairy Sheep Breeding Management},
booktitle={7th World Congress on Genetics Applied to Livestock Production},
pages={757--764},
address={Montpellier, France},
year={2002},
month={08},
date={2002-08-19},
abstract={The objective of this paper is to describe AMNOS, an integrated web-based platform developed to record, monitor, evaluate and manage the dairy sheep population of the Chios breed in Greece. The key component of the platform is a database with several relations operating at the flock and individual animal level. The system is based on Microsoft SQL Server. Dynamic web pages are generated using the Microsoft ActiveX Data Objects technology. Business logic was implemented in ASP pages, which are also responsible for creating the HTML pages sent to the user's browser. A series of conventions and rules have been added in order to ensure incoming data integrity. The key advantages of AMNOS are accessibility and ease of management. User (sheep producer) participation is currently being solicited amongst members of the Breeders Cooperative that administers the system.}
}

Dionisis Kehagias, Andreas L. Symeonidis, Pericles A. Mitkas and M. Alborg
"Towards improving Multi-Agent Simulation in safety management and hazard control environments"
Simulation and Planning in High Autonomy Systems AIS 2002, pp. 757--764, Lisbon, Portugal, 2002 Apr

This paper introduces the capabilities of Agent Academy in the area of Safety Management and Hazard Control Systems. Agent Academy is a framework under development which uses data mining techniques for training intelligent agents. This framework generates software agents with an initial degree of intelligence and trains them to manipulate complex tasks. The agents are further integrated into a simulation multi-agent environment capable of managing issues in a hazardous environment, as well as regulating the parameters of the safety management strategy to be deployed in order to control the hazards. The initially created agents take part in long agent-to-agent transactions and their activities are recorded as behavioural data, which are stored in a database. As soon as the amount of collected data increases sufficiently, a data mining process is initiated in order to extract the specific trends adopted by agents and improve their intelligence. The overall procedure aims to improve the simulation environment of safety management. The communication of agents, as well as the architectural characteristics of the simulation environment, adheres to the set of specifications imposed by the Foundation for Intelligent Physical Agents (FIPA).

@inproceedings{2002KehagiasAIS,
author={Dionisis Kehagias and Andreas L. Symeonidis and Pericles A. Mitkas and M. Alborg},
title={Towards improving Multi-Agent Simulation in safety management and hazard control environments},
booktitle={Simulation and Planning in High Autonomy Systems AIS 2002},
pages={757--764},
address={Lisbon, Portugal},
year={2002},
month={04},
date={2002-04-07},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/TOWARDS-IMPROVING-MULTI-AGENT-SIMULATION-IN-SAFETY-MANAGEMENT-AND-HAZARD-CONTROL-ENVIRONMENTS.pdf},
keywords={hazard control},
abstract={This paper introduces the capabilities of Agent Academy in the area of Safety Management and Hazard Control Systems. Agent Academy is a framework under development which uses data mining techniques for training intelligent agents. This framework generates software agents with an initial degree of intelligence and trains them to manipulate complex tasks. The agents are further integrated into a simulation multi-agent environment capable of managing issues in a hazardous environment, as well as regulating the parameters of the safety management strategy to be deployed in order to control the hazards. The initially created agents take part in long agent-to-agent transactions and their activities are recorded as behavioural data, which are stored in a database. As soon as the amount of collected data increases sufficiently, a data mining process is initiated in order to extract the specific trends adopted by agents and improve their intelligence. The overall procedure aims to improve the simulation environment of safety management. The communication of agents, as well as the architectural characteristics of the simulation environment, adheres to the set of specifications imposed by the Foundation for Intelligent Physical Agents (FIPA).}
}

Pericles A. Mitkas, Andreas L. Symeonidis, Dionisis Kehagias, Ioannis N. Athanasiadis, G. Laleci, G. Kurt, Y. Kabak, A. Acar and A. Dogac
"An Agent Framework for Dynamic Agent Retraining: Agent Academy"
eBusiness and eWork 2002 (e2002) 12th annual conference and exhibition, pp. 757--764, Prague, Czech Republic, 2002 Oct

Agent Academy (AA) aims to develop a multi-agent society that can train new agents for specific or general tasks, while constantly retraining existing agents in a recursive mode. The system collects information both from the environment and from the behaviors of the acting agents and their related successes/failures to generate a body of data, stored in the Agent Use Repository, which is mined by the Data Miner module in order to generate useful knowledge about the application domain. Knowledge extracted by the Data Miner is used by the Agent Training Module to train new agents or to enhance the behavior of agents already running. In this paper the Agent Academy framework is introduced, and its overall architecture and functionality are presented. Training issues as well as agent ontologies are discussed. Finally, a scenario which aims to provide environmental alerts to both individuals and public authorities is described as an AA-based use case.

@inproceedings{2002MitkaseBusiness,
author={Pericles A. Mitkas and Andreas L. Symeonidis and Dionisis Kehagias and Ioannis N. Athanasiadis and G. Laleci and G. Kurt and Y. Kabak and A. Acar and A. Dogac},
title={An Agent Framework for Dynamic Agent Retraining: Agent Academy},
booktitle={eBusiness and eWork 2002 (e2002) 12th annual conference and exhibition},
pages={757--764},
address={Prague, Czech Republic},
year={2002},
month={10},
date={2002-10-16},
url={http://issel.ee.auth.gr/wp-content/uploads/2016/02/An-Agent-Framework-for-Dynamic-Agent-Retraining-Agent-Academy.pdf},
abstract={Agent Academy (AA) aims to develop a multi-agent society that can train new agents for specific or general tasks, while constantly retraining existing agents in a recursive mode. The system collects information both from the environment and from the behaviors of the acting agents and their related successes/failures to generate a body of data, stored in the Agent Use Repository, which is mined by the Data Miner module in order to generate useful knowledge about the application domain. Knowledge extracted by the Data Miner is used by the Agent Training Module to train new agents or to enhance the behavior of agents already running. In this paper the Agent Academy framework is introduced, and its overall architecture and functionality are presented. Training issues as well as agent ontologies are discussed. Finally, a scenario which aims to provide environmental alerts to both individuals and public authorities is described as an AA-based use case.}
}

Andreas Symeonidis, Pericles A. Mitkas and Dionisis Kehagias
"Mining Patterns and Rules for Improving Agent Intelligence Through an Integrated Multi-Agent Platform"
6th IASTED International Conference on Artificial Intelligence and Soft Computing (ASC 2002), pp. 757--764, Banff, Alberta, Canada, 2002 Jan

@inproceedings{2002SymeonidisASC,
author={Andreas Symeonidis and Pericles A. Mitkas and Dionisis Kehagias},
title={Mining Patterns and Rules for Improving Agent Intelligence Through an Integrated Multi-Agent Platform},
booktitle={6th IASTED International Conference on Artificial Intelligence and Soft Computing (ASC 2002)},
pages={757--764},
address={Banff, Alberta, Canada},
year={2002},
month={01},
date={2002-01-01},
url={http://issel.ee.auth.gr/wp-content/uploads/2017/01/MINING-PATTERNS-AND-RULES-FOR-IMPROVING-AGENT-INTELLIGENCE-THROUGH-AN-INTEGRATED-MULTI-AGENT-PLATFORM.pdf},
keywords={hazard control},
}