Plenary Lectures

Detlef Streitferdt

Department of Computer Science and Automation
Technische Universität Ilmenau, Germany

Inside Neural Networks

Abstract. Machine learning has become a prominent technology, with artificial neural networks as its current and widely discussed model. Although the results of image analysis and recognition or speech recognition are very promising, the analysis of neural networks themselves and their behavior is still a very hard task due to the complexity of the models. Even a single software neuron puts high demands on the assessment of software quality aspects. Current models with thousands of interconnected neurons require far more elaborate software tools and methods.
This talk gives a software engineering overview of the current state of analyzing neural networks within the software development life cycle. It addresses the limits of using neural networks and emphasizes the corresponding pitfalls a software engineer has to cope with.

Brief Biography of the Speaker: Detlef Streitferdt is a senior researcher at Technische Universität Ilmenau, where he has headed the research group Software Architectures and Product Lines since 2010. His research fields are the efficient development of software architectures and product lines, their analysis and assessment, as well as software development processes and model-driven development. Before returning to the university he was a Principal Scientist at the ABB AG Corporate Research Center in Ladenburg, Germany, where he worked on software development for embedded systems. Detlef studied Computer Science at the University of Stuttgart and spent a year of his studies at the University of Waterloo in Canada. He received his doctoral degree from the Technische Universität Ilmenau in 2004 in the field of requirements engineering for product-line software development.

Milan Tuba

Singidunum University
Belgrade, Serbia

Recent Topics of Convolutional Neural Networks Applications

Abstract. Artificial intelligence and machine learning algorithms have become a significant part of numerous applications in fields ranging from medicine and security to agriculture, astronomy, and many more. Most of these applications require a classification algorithm, often for the classification of digital images. Due to the wide need for classification methods and the intensive study of the classification problem, numerous classification methods have been proposed and used. However, convolutional neural networks have proven to be a far better method for certain classification problems and have brought revolutionary changes to some areas. Convolutional neural networks (CNNs) are a type of deep artificial neural network that, by preserving spatial correlations in the input, significantly improves signal classification accuracy, especially for digital images. Creating, training, and using a CNN is a relatively simple task thanks to the various available software tools, but finding the optimal configuration and architecture remains a problem. Designing and tuning a CNN is a very challenging task that must be addressed in order to achieve the best possible results. The optimal CNN configuration depends on the problem at hand, and a CNN that is good for one problem is not necessarily good for others. Finding the optimal configuration is not simple, since there are numerous hyperparameters, such as the number, type, and order of layers, the number of neurons in each layer, kernel size, optimization algorithm, padding, stride, and many others, that should be fine-tuned for each classification problem. There is no single efficient method for finding optimal values of a CNN's hyperparameters. A commonly used approach is to guess good starting values and then estimate better ones ("guesstimating"). This method is simple but not the most efficient.
Since this is an optimization problem, some recent studies have tested different optimization metaheuristics, such as swarm intelligence algorithms. Using swarm intelligence algorithms to find a CNN's configuration can be time-consuming, but the improvement in classification accuracy is significant. In this talk, the advantages and challenges of finding the optimal CNN configuration will be presented.
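As a minimal illustration of the search problem the abstract describes, the sketch below frames CNN configuration tuning as black-box optimization over a discrete hyperparameter space. The search space, hyperparameter names, and scoring function are illustrative stand-ins (a real run would build, train, and validate an actual network); a plain random-search baseline is shown, and a swarm intelligence algorithm would replace the way candidate configurations are proposed.

```python
import random

# Illustrative (hypothetical) search space for a few CNN hyperparameters.
SEARCH_SPACE = {
    "num_conv_layers": [1, 2, 3, 4],
    "filters": [16, 32, 64, 128],
    "kernel_size": [3, 5, 7],
    "stride": [1, 2],
}

def mock_accuracy(config):
    """Stand-in for the expensive step: training the CNN with this
    configuration and returning its validation accuracy."""
    score = 0.5
    score += 0.10 * (config["num_conv_layers"] == 3)
    score += 0.10 * (config["filters"] == 64)
    score += 0.05 * (config["kernel_size"] == 3)
    return score

def random_config(rng):
    # A metaheuristic (e.g., particle swarm) would bias this proposal
    # step toward promising regions instead of sampling uniformly.
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def random_search(evaluations=50, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(evaluations):
        cfg = random_config(rng)
        s = mock_accuracy(cfg)
        if s > best_score:
            best, best_score = cfg, s
    return best, best_score

best, score = random_search()
```

Even this toy setup shows why guesstimating scales poorly: the product of the option counts grows multiplicatively with each added hyperparameter, which is what motivates guided metaheuristic search.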

Brief Biography of the Speaker: Milan Tuba is the Vice Rector for International Relations at Singidunum University, Belgrade, Serbia. He was previously the Head of the Department for Mathematical Sciences at the State University of Novi Pazar and the Dean of the Graduate School of Computer Science at John Naisbitt University. He was listed among the World's Top 2% Scientists by Stanford University in 2020 and 2021. Prof. Tuba is the author or coauthor of more than 250 scientific papers (cited more than 5,000 times, h-index 42) and editor, coeditor, or member of the editorial board or scientific committee of a number of scientific journals and conferences. He has been invited to deliver around 60 keynote lectures at international conferences.
He received his B.S. in Mathematics, M.S. in Mathematics, M.S. in Computer Science, M.Phil. in Computer Science, and Ph.D. in Computer Science from the University of Belgrade and New York University. From 1983 to 1994 he was in the U.S.A., first at Vanderbilt University in Nashville and the Courant Institute of Mathematical Sciences, New York University, and later as Assistant Professor of Electrical Engineering at the Cooper Union School of Engineering, New York. During that time he founded and directed the Microprocessor Lab and the VLSI Lab, led NSF scientific projects, and supervised theses. From 1994 he was Assistant Professor of Computer Science and Director of the Computer Center at the University of Belgrade; from 2001, Associate Professor at the Faculty of Mathematics, University of Belgrade; and from 2004 also Professor of Computer Science and Dean of the College of Computer Science, Megatrend University, Belgrade. Prof. Tuba was the principal creator of the new curricula and programs at the Faculty of Mathematics and Computer Science at the University of Belgrade and later at John Naisbitt University, where he founded and practically single-handedly established a completely new school with bachelor, master, and PhD programs. He has taught more than 20 graduate and undergraduate courses, from VLSI Design and Computer Architecture to Computer Networks, Operating Systems, Artificial Intelligence, Image Processing, Calculus, and Queuing Theory.
His research interests include nature-inspired optimization applied to image processing, computer networks, and neural networks. He is a member of the ACM, IEEE, AMS, SIAM, IFNA, and IASEI.

Dan Cristea

"Alexandru Ioan Cuza" University of Iași
Faculty of Computer Science, Iași, Romania

A Technology of Deciphering Old Cyrillic-Romanian

Abstract. Between the 16th century and the middle of the 19th, a unique Cyrillic alphabet circulated in the territories of historical Romania, with slight variations in the shapes of graphemes or their phonetic values. As a result, a huge bibliography of Cyrillic-Romanian texts has accumulated in various libraries, while very few of these books have been transliterated by a small number of specialised linguists. Access for the broad public of Romanian readers interested in these documents is still very restricted. This is why an automatic deciphering of old Romanian documents from Cyrillic to Latin would be most welcome. I will present the DeLORo project (Deep Learning for Old Romanian), which aimed to build such a technology for printed and uncial Cyrillic-Romanian documents (not for manuscripts). In this talk I will describe the structure of DeLORo's data repository, which includes images of scanned pages, annotations made collaboratively over them, and alignments between annotated objects in the images and (sequences of) decoded Latin characters. The primary data are used to train the deep learning recognition technology. Since the manual annotation process is very time-consuming and the density of characters is highly non-uniform across documents, I will also give an overview of a strategy for data augmentation that exploits a collection of documents transcribed entirely by experts outside our project. Different processing phases are applied to the page images, combining binarization and partial blurring operations with segmentation of the page image, detection of objects (such as rows of text and characters), and labelling of characters. I will also show some results, as they will be reported at the end of the project (October 2022), for character detection and recognition, in comparison with other approaches.
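To make the page-processing stages concrete, the toy sketch below illustrates the general pattern of binarization followed by text-row segmentation via a horizontal projection profile. This is a generic illustration, not the DeLORo pipeline itself: the synthetic "page", the threshold, and the function names are all assumptions, and a real system would use learned detection on scanned images.

```python
import numpy as np

# Synthetic 60x40 "page": 255 is paper; two dark bands stand in for text rows.
page = np.full((60, 40), 255, dtype=np.uint8)
page[10:18, 5:35] = 30   # first text row
page[30:38, 5:35] = 40   # second text row

def binarize(img, threshold=128):
    """Global binarization: True marks ink pixels (a crude stand-in for
    the binarization and partial-blurring steps mentioned in the talk)."""
    return img < threshold

def segment_rows(ink, min_ink_per_row=1):
    """Segment the page into text-row spans using a horizontal projection
    profile: contiguous runs of image rows that contain ink pixels."""
    has_ink = ink.sum(axis=1) >= min_ink_per_row
    spans, start = [], None
    for y, flag in enumerate(has_ink):
        if flag and start is None:
            start = y                  # a run of inked rows begins
        elif not flag and start is not None:
            spans.append((start, y))   # the run ends; record its extent
            start = None
    if start is not None:
        spans.append((start, len(has_ink)))
    return spans

ink = binarize(page)
rows = segment_rows(ink)   # → [(10, 18), (30, 38)] for this synthetic page
```

Character detection within each recovered row, and the subsequent labelling of characters, would then operate on these spans.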

Brief Biography of the Speaker: Dan Cristea is an emeritus professor of the “Alexandru Ioan Cuza” University of Iași (UAIC), Faculty of Computer Science, and still holds a part-time position as a principal researcher at the Institute of Computer Science of the Iași branch of the Romanian Academy. He is a corresponding member of the Romanian Academy and a full member of the Academy of Technical Sciences of Romania. The main courses he has taught are artificial intelligence, rule-based programming, and techniques of natural language engineering. He is the initiator of the master's program in Computational Linguistics at UAIC. His research interests have mainly been related to discourse theories and applications (methods and techniques for anaphora resolution, discourse parsing, question answering, summarization), automatic configuration of NLP architectures, language evolution, semantic representations of natural language, computerisation of dictionaries, and lexical semantics. He was involved in the construction of resources for the Romanian language and, recently, in computational studies of old Romanian. Prof. Cristea initiated and is the co-director of the EUROLAN series of Summer Schools on Human Language Technology, which has had 15 biennial editions since 1993, and co-initiated the series of international conferences on Linguistic Resources and Tools for Natural Language Processing, which holds its 17th edition this year.

Marcel Kyas

Reykjavik University
Reykjavik, Iceland

Autonomous Drones and Analogue Missions

Abstract. We report our involvement in an analog space mission in Holuhraun, Iceland, testing unmanned aerial systems (UAS) intended for operation on Mars. Our goal in this mission was to test autonomous landing methods: the UAS has to find a suitable, safe landing site and land reliably. Iceland's lava fields, crevasses, and craters resemble the environment we expect on other planets, and this environment allows us to experiment with revolutionary approaches to space exploration. At the same time, the mission allowed the participating engineers, geologists, and computer scientists to express their views on mission design, equipment, and programming. Many components of such systems are mission- and safety-critical. Their failure jeopardizes the mission and may result in tremendous financial loss; the Perseverance launch, for example, is estimated to cost $2.9 billion. Consequently, developers take the utmost care to avoid all potential issues and follow strict coding standards such as MISRA. Their goal of ensuring predictable and deterministic behavior is at odds with revolutionary developments. We want to use machine learning for landing-site identification and autonomous control of unmanned vehicles in planetary environments, yet such systems are often unpredictable. Our position is that machine learning can be used safely in space missions and avionics, provided the learned components are shown not to jeopardize the system's operation, or are supervised by deterministic components or prioritized modularization.

Brief Biography of the Speaker: Marcel Kyas received his Ph.D. from Leiden University in 2006. In his dissertation, "Verifying OCL specifications of UML models," he developed compositional, computer-aided verification methods for object-oriented real-time systems. He then became a postdoc at the University of Oslo, where he researched type systems for dynamically evolving distributed systems. As an assistant professor at the Freie Universität Berlin, he studied indoor positioning systems and their empirical validation, publishing competitive positioning methods and an indoor positioning test bed. Since 2015 he has worked at Reykjavik University, extending his work on indoor positioning systems to distributed systems. He works on autonomous unmanned aerial vehicles, especially on landing in GPS-denied areas. His group participated in an analog mission to investigate equipment for geological sampling on Mars.