KEOD 2019 Abstracts


Full Papers
Paper Nr: 2
Title:

The Application of Knowledge Management to Overcome Barriers to Enterprise Architecture Adoption: A South African Motor Vehicle and Asset Finance Case Study

Authors:

Innocent Gumede, J. P. van Deventer, Hanlie Smuts and Joyce Jordaan

Abstract: Information Technology (IT) enables an organisation to gain competitive advantage by exploiting new opportunities and capabilities offered by evolving technologies. IT strategy therefore needs to be holistically aligned with organisational strategy, and Enterprise Architecture (EA) is considered a means to achieve such alignment. However, EA adoption is impacted by many organisational barriers, in particular organisational culture factors. Knowledge Management (KM) is a candidate for addressing these organisational culture issues. The purpose of this study was therefore to understand the barriers to EA adoption, as well as the KM interventions likely to increase the success of EA initiatives. The study was conducted in the South African motor vehicle and asset finance industry; a lack of understanding of the purpose of EA, as well as employees not actively participating in the development of EA, were identified as major barriers. The KM interventions identified as effective in overcoming the barriers pointed to the promotion of knowledge sharing between employees and the EA team, and the increased involvement of EA stakeholders and users in EA development. By considering the research findings, organisations may apply KM to overcome barriers that prevent the successful implementation of EA initiatives.
Download

Paper Nr: 3
Title:

Automation of Software Testing Process using Ontologies

Authors:

Vladimir Tarasov, He Tan and Anders Adlemo

Abstract: Testing a software system is a resource-consuming activity that requires high-level expert knowledge. Methods based on knowledge representation and reasoning can alleviate this problem. This paper presents an approach to enhance the automation of the testing process using ontologies and inference rules. The approach takes software requirements specifications written as structured text documents as input and produces test scripts as output. It makes use of ontologies to deal with the knowledge embodied in requirements specifications and to represent the desired structure of test cases, as well as a set of inference rules to represent strategies for deriving test cases. The implementation of the approach, in the context of an industrial case, demonstrates the validity of the overall approach.
Download

Paper Nr: 4
Title:

On Deciding Admissibility in Abstract Argumentation Frameworks

Authors:

Samer Nofal, Katie Atkinson and Paul E. Dunne

Abstract: In the context of abstract argumentation frameworks, the admissibility problem is about deciding whether a given argument (i.e. piece of knowledge) is admissible in a conflicting knowledge base. In this paper we present an enhanced backtracking-based algorithm for solving the admissibility problem. The algorithm performs successfully when applied to a wide range of benchmark abstract argumentation frameworks and when compared to the state-of-the-art algorithm.
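For readers unfamiliar with the admissibility semantics the paper builds on, the following minimal Python sketch checks whether a given set of arguments is admissible (conflict-free and self-defending); it illustrates the decision criterion only, not the authors' backtracking algorithm, and the example framework is invented.

    # Minimal sketch of the standard admissibility check for abstract argumentation.
    # A framework is a set of arguments plus an "attacks" relation; a set S is
    # admissible iff it is conflict-free and defends every member against attackers.

    def is_conflict_free(S, attacks):
        return not any((a, b) in attacks for a in S for b in S)

    def defends(S, arg, attacks, arguments):
        # Every attacker of `arg` must itself be attacked by some member of S.
        attackers = {a for a in arguments if (a, arg) in attacks}
        return all(any((d, a) in attacks for d in S) for a in attackers)

    def is_admissible(S, attacks, arguments):
        return is_conflict_free(S, attacks) and all(
            defends(S, arg, attacks, arguments) for arg in S)

    arguments = {"a", "b", "c"}
    attacks = {("b", "a"), ("c", "b")}                    # c attacks b, b attacks a
    print(is_admissible({"a", "c"}, attacks, arguments))  # True
    print(is_admissible({"a"}, attacks, arguments))       # False: a is undefended
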
Download

Paper Nr: 5
Title:

Observing the Impact and Adaptation to the Evolution of an Imported Ontology

Authors:

Omar Qawasmeh, Maxime Lefrançois, Antoine Zimmermann and Pierre Maret

Abstract: Ontology evolution is the process of keeping an ontology up to date with respect to changes that arise in the targeted domain or in the requirements. Inspired by this definition, we introduce two concepts for observing the impact of, and the adaptation to, the evolution of an imported ontology. The first targets the evolution of an imported ontology (if ontology O uses ontology O′, and O′ then evolves). The second targets the adaptation to the evolution of the imported ontology. Based on our definition, we provide a systematic categorization of the different cases that can arise during the evolution of ontologies (e.g. a term t is deleted from O′, but O continues to use it). We conducted an experiment to identify and count the occurrences of the different cases among the ontologies referenced on two ontology portals: 1. the Linked Open Vocabularies (LOV) portal, which references 648 different ontologies, 88 of which evolved; here we identified 74 cases that satisfy our definition, involving 28 different ontologies. 2. BioPortal, which references 770 different ontologies, 485 of which evolved; here we identified 14 cases that satisfy our definition, involving 10 different ontologies. We present the observation results from this study and show the number of different cases that occurred during the evolution. We conclude by showing that knowledge engineers could take advantage of a methodological framework based on our study for the maintenance of their ontologies.
Download

Paper Nr: 7
Title:

Multi-criteria Modelling Approach for Ambient Assisted Coaching of Senior Adults

Authors:

Martin Žnidaršič, Bernard Ženko, Aljaž Osojnik, Marko Bohanec, Panče Panov, Helena Burger, Zlatko Matjačić and Mojca Debeljak

Abstract: This paper presents and critically discusses an approach for knowledge modelling and reasoning in a system for the monitoring and coaching of senior adults. We present a modular architecture of the system and a detailed description of the modelling methodology, which originates from the field of multi-criteria decision modelling and differs from the approaches commonly used in this problem domain. The methodology has several characteristics that make it well suited to its purpose in this application, and initial insights from potential users are positive. A discussion of the suitability of the proposed methodology for knowledge representation and reasoning in the given problem domain is provided, with an outline of its potential benefits and drawbacks and a comparison with the ontological approach.
Download

Paper Nr: 11
Title:

Ontology Learning from Twitter Data

Authors:

Saad Alajlan, Frans Coenen, Boris Konev and Angrosh Mandya

Abstract: This paper presents and compares three mechanisms for learning an ontology describing a domain of discourse as defined in a collection of tweets. The task in part involves the identification of entities and relations in the free text data, which can then be used to produce a set of RDF triples from which an ontology can be generated. The first mechanism is founded on the Stanford CoreNLP Toolkit, in particular the Named Entity Recognition and Relation Extraction mechanisms that come with this toolkit. The second is founded on GATE (General Architecture for Text Engineering), which provides an alternative mechanism for relation extraction from text. Both require a substantial amount of training data. To reduce the training data requirement, the third mechanism is founded on the concept of Regular Expressions extracted from a training data “seed set”. Although the third mechanism still requires training data, the amount is significantly reduced without adversely affecting the quality of the ontologies generated.
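As a rough illustration of the third, seed-pattern mechanism (not the paper's actual patterns or pipeline), the following Python sketch extracts subject-relation-object triples from tweet text with hand-written regular expressions; the patterns and example tweets are hypothetical.

    import re

    # Illustrative pattern-based relation extraction from tweets; the seed
    # patterns and relation names below are invented for the example.
    PATTERNS = [
        # e.g. "London is the capital of England" -> (London, capitalOf, England)
        (re.compile(r"(\w+) is the capital of (\w+)"), "capitalOf"),
        (re.compile(r"(\w+) works for (\w+)"), "worksFor"),
    ]

    def extract_triples(tweets):
        triples = []
        for text in tweets:
            for pattern, relation in PATTERNS:
                for subj, obj in pattern.findall(text):
                    triples.append((subj, relation, obj))
        return triples

    tweets = ["London is the capital of England", "Alice works for Acme"]
    for triple in extract_triples(tweets):
        print(triple)   # RDF-style (subject, predicate, object) triples
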
Download

Paper Nr: 12
Title:

Development of a Gestational Diabetes Computer Interpretable Guideline using Semantic Web Technologies

Authors:

Garazi Artola, Jordi Torres, Nekane Larburu, Roberto Álvarez and Naiara Muro

Abstract: The benefits of following Clinical Practice Guidelines (CPGs) in the daily practice of medicine have been widely studied, as they are a powerful method for the standardization and improvement of medical care quality. However, applying these guidelines to promote evidence-based and up-to-date clinical practice is a known challenge due to the lack of digitalization of clinical guidelines. In order to overcome this issue, the use of Clinical Decision Support Systems (CDSS) has been promoted in clinical centres. Nevertheless, CPGs must be formalized in a computer-interpretable way to be implemented within a CDSS. Moreover, these systems are usually developed and implemented using local setups, and hence local terminologies, which causes a lack of semantic interoperability. In this context, the use of Semantic Web Technologies (SWTs) to formalize the concepts used in guidelines promotes the interoperability and standardization of these systems. In this paper, an architecture that allows the formalization of CPGs into Computer Interpretable Guidelines (CIGs), supported by an ontology in the gestational diabetes domain, is presented. This CIG has been implemented within a CDSS, and a mobile application has been developed to guide patients based on up-to-date, evidence-based clinical guidelines.
Download

Paper Nr: 13
Title:

Memory Nets: Knowledge Representation for Intelligent Agent Operations in Real World

Authors:

Julian Eggert, Jörg Deigmöller, Lydia Fischer and Andreas Richter

Abstract: In this paper, we introduce Memory Nets, a knowledge representation targeted at autonomous Intelligent Agents (IAs) operating in the real world. The main focus is on a knowledge base (KB) that, on the one hand, is able to leverage the large body of openly available semantic information and, on the other hand, allows additional knowledge to be incrementally accumulated from situated interaction. Such a KB can only rely on operable semantics fully contained in the knowledge base itself, avoiding any type of hidden semantics in the KB attributes, such as human-interpretable identifiers. In addition, it has to provide means for tightly coupling the internal representation to real-world events. We propose a KB structure and inference processes based on a knowledge graph that has a small number of link types with operational semantics only, and where the main information lies in the complex patterns and connectivity structures that can be built incrementally using these links. We describe the basic domain-independent features of Memory Nets and their relation to the measurement and actuator capabilities available to autonomous entities, with the goal of providing a KB framework for researching how to create IAs that continuously expand their knowledge about the world.
Download

Paper Nr: 17
Title:

Ontological Integration of Semantics and Domain Knowledge in Energy Scenario Co-simulation

Authors:

Jan S. Schwarz and Sebastian Lehnhoff

Abstract: The transition of the power system to more decentralized power plants and intelligent devices in a smart grid leads to a significant rise in complexity. For testing new technologies before their implementation in the field, co-simulation is an important approach, as it allows diverse simulation models from different domains to be coupled. In the planning and evaluation of co-simulation scenarios, experts from different domains have to collaborate. To assist the stakeholders in this process, we propose to integrate, on the one hand, the semantics of simulation models and exchanged data and, on the other hand, domain knowledge into the planning, execution, and evaluation of interdisciplinary co-simulation based on ontologies. This approach aims to allow the high-level planning of simulations and the seamless integration of their information into simulation scenario specification, execution, and evaluation. Thus, our approach intends to improve the usability of large-scale interdisciplinary co-simulation scenarios.
Download

Paper Nr: 20
Title:

Engineering Smart Behavior in Evacuation Planning using Local Cooperative Path Finding Algorithms and Agent-based Simulations

Authors:

Róbert Selvek and Pavel Surynek

Abstract: This paper addresses evacuation problems from the perspective of cooperative path finding (CPF). The evacuation problem we call multi-agent evacuation (MAE) consists of an undirected graph and a set of agents. The task is to move the agents from the endangered part of the graph into the safe part as quickly as possible. Although there exist centralized evacuation algorithms based on network flows that are optimal with respect to various objectives, such algorithms would hardly be applicable in practice since real agents would not be able to follow the centrally created plan. Therefore, we designed a local evacuation planning algorithm called LC-MAE based on local CPF techniques. Agent-based simulations in multiple real-life scenarios show that LC-MAE produces solutions that are only worse than the optimum by a small factor. Moreover, our approach led to important findings about how many agents need to behave rationally to increase the speed of evacuation.
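The path-finding ingredient underlying such evacuation planning can be illustrated with a small Python sketch: each agent searches, by breadth-first search, for a shortest path from its current vertex to the nearest vertex in the safe part of the graph. This is only the basic CPF building block, not the LC-MAE coordination mechanism itself, and the toy graph is invented.

    from collections import deque

    # Sketch: one agent finds a shortest escape route to the nearest safe vertex.
    def path_to_safety(graph, start, safe):
        parent = {start: None}
        queue = deque([start])
        while queue:
            v = queue.popleft()
            if v in safe:                      # reached the safe part of the graph
                path = []
                while v is not None:
                    path.append(v)
                    v = parent[v]
                return list(reversed(path))
            for w in graph[v]:
                if w not in parent:
                    parent[w] = v
                    queue.append(w)
        return None                            # no escape route exists

    graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}   # a simple corridor
    print(path_to_safety(graph, start=1, safe={4}))  # [1, 2, 3, 4]
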
Download

Paper Nr: 22
Title:

Formal Grammatical and Ontological Modeling of Corpus Data on Tibetan Compounds

Authors:

Aleksei Dobrov, Anastasia Dobrova, Maria Smirnova and Nikolay Soms

Abstract: This article provides a consistent formal grammatical and ontological description of the model of the Tibetan compound system, developed and used for automatic syntactic and semantic analysis of Tibetan texts, on the material of a hand-verified corpus. This model covers all types of Tibetan compounds previously introduced by other authors and introduces a number of new classes of compounds, taking into account their derivation, structure and semantics. The article describes the tools used for the ontological modeling of Tibetan compounds; special attention is paid to the problem of modeling the semantics of verbs and verbal compounds. Nominal and verbal compounds are considered separately; it is noted that the importance of verbal compounds for the Tibetan language system is no less than that of nominal compounds. Statistical data are given on the absolute frequency distribution of the use of compounds of different types in the current version of the corpus annotation and on the number of ontology concepts associated with each class of compounds.
Download

Paper Nr: 24
Title:

Knowledge-based Education and Awareness about the Radiological and Nuclear Hazards

Authors:

Anca D. Ionita, Adriana Olteanu and Radu N. Pietraru

Abstract: There are multiple approaches to organizing and formalizing the knowledge related to nuclear accidents, emergency situations and the management of hazards. However, in general, the materials available for educational and awareness purposes are not directly linked to an organized knowledge base. This paper presents our studies on representing and using experts’ knowledge of radiological and nuclear risks, with the purpose of making it more accessible to junior students and to other interested stakeholders. This effort resulted in an ontology of nuclear vulnerabilities, a set of rules, and processes for prevention, protection and emergency response, useful for understanding the decisions made by responsible institutions. These representations were applied in the development of a platform for informal education and awareness.
Download

Paper Nr: 28
Title:

Biochemistry Procedure-oriented Ontology: A Case Study

Authors:

Mohammed Alliheedi, Yetian Wang and Robert E. Mercer

Abstract: Ontologies must provide the entities, concepts, and relations required by the domain being represented. The domain of interest in this paper is the biochemistry experimental procedure. The ontology language being used is OWL-DL, adopted for its well-balanced trade-off between expressiveness (e.g., class descriptions, cardinality restrictions, etc.), completeness, and decidability. These procedures are composed of procedure steps which can be represented as sequences. Sequences are composed of totally ordered, partially ordered, and alternative subsequences. Subsequences are represented with two relations, directlyFollows and directlyPrecedes. Alternative subsequences can be generated by composing a oneOf function in OWL-DL, referred to in this work as optionalStepOf, which is a simple generalization of exclusive-OR. Alkaline Agarose Gel Electrophoresis, a biochemistry procedure, is described and examples of these subsequences are provided.
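A minimal sketch of how such sequence relations might be asserted with rdflib is shown below; the namespace, step names and the inverse-property axiom are illustrative assumptions, not taken from the paper's ontology.

    from rdflib import Graph, Namespace, RDF, OWL

    # Sketch of the sequence relations as OWL object properties using rdflib.
    BIO = Namespace("http://example.org/biochem#")   # hypothetical namespace
    g = Graph()
    g.bind("bio", BIO)

    for prop in (BIO.directlyFollows, BIO.directlyPrecedes, BIO.optionalStepOf):
        g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((BIO.directlyFollows, OWL.inverseOf, BIO.directlyPrecedes))

    # A totally ordered fragment: prepare gel -> load samples -> run electrophoresis
    for step in (BIO.PrepareGel, BIO.LoadSamples, BIO.RunElectrophoresis):
        g.add((step, RDF.type, BIO.ProcedureStep))
    g.add((BIO.LoadSamples, BIO.directlyFollows, BIO.PrepareGel))
    g.add((BIO.RunElectrophoresis, BIO.directlyFollows, BIO.LoadSamples))

    print(g.serialize(format="turtle"))
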
Download

Paper Nr: 33
Title:

Toward Measuring Knowledge Loss due to Ontology Modularization

Authors:

Andrew LeClair, Ridha Khedri and Alicia Marinache

Abstract: This paper formalizes the graphical modularization technique, View Traversal, for an ontology-based system represented using the Domain Information System (DIS). Our work is motivated by the need for autonomous agents, within an ontology-based system, to automatically create their own views of the ontology to address the problems of ontology evolution and data integration found in an enterprise setting. Through DIS, we explore specific ontologies that give Cartesian perspectives of the domain, which allows modularization to be a means for agents to extract views of specific combinations of data. The theory of ideals from Boolean algebra is used to formalize a module. Then, with the use of homomorphisms, the quantity of knowledge within the module can be measured. More specifically, through the first isomorphism theorem, we establish that the loss of information is quantified by the kernel of the homomorphism. This constitutes a foundational step towards theories related to reasoning on partial domain knowledge, and is important for applications where an agent needs to quickly extract a view that contains a specific set of knowledge.
Download

Paper Nr: 34
Title:

Assisted Composition of Linked Data Queries

Authors:

Imen Sarray and Aziz Salah

Abstract: Much research has been undertaken to facilitate the construction of SPARQL queries, while other research has attempted to facilitate the construction of RDF dataset schemas to help understand the structure of RDF datasets. However, there is no effective approach that brings together these two complementary objectives. This work is an effort in this direction. Interrogating linked data is difficult not only because it requires mastering a query language such as SPARQL, but mainly because RDF datasets do not have an explicit schema of the kind one can expect in relational databases. We propose an approach that allows assisted SPARQL query composition. This paper provides two complementary solutions: the synthesis of an interrogation-oriented schema and a form-based RDF query construction tool, named EXPLO-RDF.
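A generic sketch of form-based query composition (not EXPLO-RDF itself) is shown below: the user's form selections are assembled into a SPARQL query and executed over a small RDF graph with rdflib; the form fields and data are illustrative.

    from rdflib import Graph

    # Hypothetical form selections made by the user in a query-building UI.
    form = {"class": "foaf:Person", "property": "foaf:name"}

    query = f"""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?s ?value WHERE {{
        ?s a {form['class']} ;
           {form['property']} ?value .
    }}
    """

    g = Graph()
    g.parse(data="""
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <http://example.org/alice> a foaf:Person ; foaf:name "Alice" .
    """, format="turtle")

    for row in g.query(query):        # run the composed query
        print(row.s, row.value)
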
Download

Paper Nr: 54
Title:

FoodOntoMap: Linking Food Concepts across Different Food Ontologies

Authors:

Gorjan Popovski, Barbara K. Seljak and Tome Eftimov

Abstract: In the last decade, a great amount of work has been done on predictive modelling in healthcare. All this work is made possible by the existence of several available biomedical vocabularies and standards, which play a crucial role in understanding health information. Moreover, there are systems available, such as the Unified Medical Language System, that bring together and link all these biomedical vocabularies to enable interoperability between computer systems. In 2019, The Lancet Planetary Health announced that 2019 would be the year of nutrition, with the focus on the links between food systems, human health, and the environment. While there is a large number of available resources for the biomedical domain, only a limited number of resources can be utilized in the food domain. There is still no annotated corpus with food concepts, and there are only a few rule-based food named-entity recognition systems for food concept extraction. Several food ontologies also exist, each developed for a specific application scenario; however, there are no links between these ontologies. For this reason, we have created the FoodOntoMap resource, which consists of food concepts extracted from recipes. For each food concept, semantic tags from four food ontologies are assigned. With this, we have created a resource that provides a link between different food ontologies and that can be further reused to develop applications for understanding the relation between food systems, human health, and the environment.
Download

Paper Nr: 70
Title:

A Combination between Textual and Visual Modalities for Knowledge Extraction of Movie Documents

Authors:

Manel Fourati, Anis Jedidi and Faiez Gargouri

Abstract: In view of the proliferation of audiovisual documents and the indexing limits mentioned in the literature, progress towards a new solution requires a better description extracted from the content. In this paper, we propose an approach to improve the description of cinematic audiovisual documents. This consists not only in extracting the knowledge conveyed by the content but also in combining textual and visual modalities. In fact, the semiotic description represents important information from the content. We propose an approach based on the use of pre- and post-production film documents. Consequently, we concentrate our efforts on extracting descriptions using not only the probabilistic Latent Dirichlet Allocation (LDA) model but also the LSCOM semantic ontology. Finally, a process of identifying a description is highlighted. The experimental results confirm the performance of our approach through the comparison of our results with human judgment and with a semi-automatic method using the MovieLens dataset.
Download

Short Papers
Paper Nr: 9
Title:

Modelling Attitudes of a Conversational Agent

Authors:

Mare Koit

Abstract: The paper introduces work in progress on modelling the attitudes of a conversational agent in negotiation. Two kinds of attitudes are under consideration: (1) attitudes related to different aspects of a negotiation object (in our case, doing an action), which direct reasoning about the action, and (2) attitudes related to a communication partner (dominance, collaboration, communicative distance, etc.), which are modelled using the concept of a multidimensional social space. Attitudes of participants have been annotated in a small sub-corpus of the Estonian dialogue corpus. An example from the sub-corpus is presented to illustrate how the models describe the change of attitudes of human participants. A limited version of the model of a conversational agent has been implemented on the computer. Our further aim is to develop a dialogue system that trains the user’s negotiation skills by interacting with the user in natural language.
Download

Paper Nr: 15
Title:

Use of Ontologies in Chemical Kinetic Database CHEMCONNECT

Authors:

Edward S. Blurock

Abstract: CHEMCONNECT is an ontology-based cloud repository of experimental, theoretical and computational data for the experimental sciences domain that supports the FAIR data principles, namely that data is findable, accessible, interoperable and re-usable. The design also promotes the good scientific practices of accountability, traceability and reproducibility. The key to meeting these design goals is the use of ontologies. The primary goals of using ontologies include not only capturing a domain-specific knowledge base (with the support of domain experts), but also creating a data/ontology-driven software system for the data objects, data entry, the database and the graphical interface. Within the combustion research domain, the initial focus of CHEMCONNECT, the impetus for the knowledge base is the formation and documentation of standard data reporting practices. The ontology is a software implementation of practices within the community. Storing and querying of specific instantiations of object data is done using a NoSQL database (Google Datastore). This initial design of CHEMCONNECT is modelled for the chemical kinetics and combustion domain. Within this domain, the ontology defines templates of typical experimental devices producing data, algorithms and protocols manipulating data, and the data forms that are encountered in this pipeline. These templates are then instantiated, with the aid of an ontology-driven cloud-based interface, to specific objects within the database. The knowledge base is key to uniting data input in various forms (including diverse labelling) into a common base for ease of search and comparison. The structure is not limited to this domain and will be expanded in future collaborative work. CHEMCONNECT is currently implemented with the Google App Engine at http://www.connectedsmartdata.info.
Download

Paper Nr: 16
Title:

Adding ‘Sense’ to Conceptual Modeling: An Interdisciplinary Approach

Authors:

Veikko Halttunen

Abstract: In this paper, our aim is to widen the prevailing foundations of conceptual modeling theories and practices, particularly in the context of information systems development. The approach shifts the focus from the link between a model and the modelled reality to the link between human cognition and the model. Our approach combines theoretical issues from different disciplines relevant to conceptual modeling. We make an explicit distinction between individual conceptions and interpersonal concepts and show how this distinction can be utilized to produce conceptual models with better consistency. We hope that this article can also serve as a starting point for a profound scientific discussion on the real sources of conceptual models, i.e. the human mind.
Download

Paper Nr: 18
Title:

Enterprise Transformation Management based on Enterprise Engineering Approach with Unified Enterprise Transformation Dimensions

Authors:

Shoji Konno and Junichi Iijima

Abstract: In enterprise transformation (ET), there are many ideal models, blueprints and situations. Ideal pictures are provided by practitioners and researchers, one picture at a time, as changes are predicted or occur in the business environment, for example “digital enterprise transformation” driven by “business models in the digital age”. Indeed, a variety of approaches have been proposed in the literature. On the other hand, according to our literature survey, existing management frameworks address one specific perspective of enterprise management and focus on one kind of measurement. Enterprise transformation management systems based on the relationship between architecture and transformation practices have not yet seen significant adoption. The goal of this work is, therefore, to propose a holistic management framework to support transformation based on enterprise engineering. The dimensions, analysis perspectives, and impact analyses of change practices together bridge the adaptable enterprise architecture world and the real transformation world. The aim is to enable the framework to be used in state-of-the-art enterprise change environments.
Download

Paper Nr: 19
Title:

Validation and Recommendation Engine from Service Architecture and Ontology

Authors:

Daniel Mercier and Anthony Ruto

Abstract: The Cloud has emerged as a common platform for data convergence. Structured, unstructured, serialized, or even chunked data in various formats are now being transferred, processed, and exchanged by a multitude of services. New, service-oriented applications rely heavily on managing these flows of data. The situation is such that the perspective is changing, placing data at the center. In this context, end-users must rely on new derivative services to validate these flows of information, and expect from these services accurate feedback and some degree of intelligent recommendation. In this article, we introduce a new validation and recommendation engine encapsulated in a service, backed by an ontology and a knowledge structure based on reusable components for fast integration and increased resilience.
Download

Paper Nr: 29
Title:

Routing Algorithms in Connected Cars Context

Authors:

Ioan Stan, Vasile Suciu and Rodica Potolea

Abstract: Most of the existing navigation solutions compute individual routes based on map topology and traffic data, but without considering the route’s effect on the entire navigation ecosystem. Traffic data usage and sharing in the context of connected cars is a key element for route planning. Such solutions require efficient implementation and deployment in order to reduce any kind of risk. Following a smart driving methodology, we run different route search algorithms on connected-car traffic scenarios in order to avoid traffic congestion and minimize total driving time across the entire navigation ecosystem. The experiments in this work show that connected-car data usage and sharing reduce the total driving time of the navigation ecosystem, and also that specific routing algorithms are more suitable for specific connected-car scenarios in order to obtain relevant results.
Download

Paper Nr: 37
Title:

Ontology Building and Enhancement using a Lexical Semantic Network

Authors:

Nadia Bebeshina-Clairet, Sylvie Despres and Mathieu Lafourcade

Abstract: In the present article, we explore lexical semantic knowledge resources such as lexical semantic networks for ontology building and enhancement. The ontology building process is often a descending, manual process in which high-level concepts are defined and then detailed by human experts. We explore a way of using a multilingual or monolingual lexical semantic network to evolve or localize an existing ontology with limited human effort.

Paper Nr: 42
Title:

Fault Detection of Elevator System using Deep Autoencoder Feature Extraction for Acceleration Signals

Authors:

Krishna M. Mishra and Kalevi J. Huhtala

Abstract: In this research, we propose a generic deep autoencoder model for the automatic extraction of highly informative deep features from elevator time series data. A random forest algorithm is used for fault detection based on the extracted deep features. Recorded maintenance actions are used to label the sensor data as healthy or faulty. False positives are avoided by validating the model on the rest of the healthy data to prove its efficacy. The newly extracted deep features provide 100% accuracy in fault detection while avoiding false positives, which is better than the existing features. A random forest was also used to detect faults based on the existing features, to compare results. The new deep features extracted from the dataset with the deep autoencoder and random forest outperform the existing features. Good classification and robustness against overfitting are key characteristics of our model. This research will help to reduce unnecessary visits of service technicians to installation sites by detecting false alarms in various predictive maintenance systems.
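A minimal sketch of such a two-stage pipeline is shown below, using tensorflow.keras for the autoencoder and scikit-learn for the random forest; the layer sizes, window length and toy data are assumptions, not the authors' configuration.

    import numpy as np
    from tensorflow.keras import layers, models
    from sklearn.ensemble import RandomForestClassifier

    # Sketch: a deep autoencoder compresses windowed acceleration signals into
    # "deep features"; a random forest then classifies them as healthy/faulty.
    def build_autoencoder(n_inputs, n_code=16):
        inp = layers.Input(shape=(n_inputs,))
        h = layers.Dense(128, activation="relu")(inp)
        code = layers.Dense(n_code, activation="relu")(h)
        h = layers.Dense(128, activation="relu")(code)
        out = layers.Dense(n_inputs, activation="linear")(h)
        autoencoder = models.Model(inp, out)
        encoder = models.Model(inp, code)
        autoencoder.compile(optimizer="adam", loss="mse")
        return autoencoder, encoder

    # Toy data standing in for acceleration windows and maintenance-based labels.
    X = np.random.rand(200, 256).astype("float32")
    y = np.random.randint(0, 2, size=200)

    autoencoder, encoder = build_autoencoder(n_inputs=256)
    autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)   # unsupervised step

    deep_features = encoder.predict(X, verbose=0)               # extracted features
    clf = RandomForestClassifier(n_estimators=100).fit(deep_features, y)
    print(clf.score(deep_features, y))
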
Download

Paper Nr: 43
Title:

Forensic Analysis of Heterogeneous Social Media Data

Authors:

Aikaterini Nikolaidou, Michalis Lazaridis, Theodoros Semertzidis, Apostolos Axenopoulos and Petros Daras

Abstract: It is a challenge to aggregate and analyze data from heterogeneous social media sources, not only for businesses and organizations but also for Law Enforcement Agencies. The latter’s core objectives are to monitor criminal and terrorist-related activities and to identify the “key players” in various networks. In this paper, a framework for homogenizing and exploiting data from multiple sources is presented. Moreover, as part of the framework, an ontology that reflects today’s social media perceptions is introduced. Data from multiple sources is transformed into a labeled property graph and stored in a graph database in a homogenized way based on the proposed ontology. The result is a cross-source analysis system where end-users can explore different scenarios and draw conclusions through a library of predefined query placeholders that focus on forensic investigation. The framework is evaluated on the Stormfront dataset, a radical-right web community. Finally, the benefits of applying the proposed framework to discover and visualize the relationships between Stormfront profiles are presented.
Download

Paper Nr: 46
Title:

A Neuro-inspired Approach for a Generic Knowledge Management System of the Intelligent Cyber-Enterprise

Authors:

Caramihai Simona, Dumitrache Ioan, Moisescu Mihnea, Saru Daniela and Sacala Ioan

Abstract: The paradigm of Cyber-Physical Systems may be successfully applied to a large number of case studies, but the most challenging of them focus on large-scale systems whose dynamics adapt to various functional scenarios and environmental conditions, such as energy networks, traffic systems and, especially, different kinds of cooperative networks of enterprises. However, the large spectrum of possible configurations of such processes raises many issues with respect to the identification of the problems to be solved and, furthermore, of the solving method itself, making the efficient use of available knowledge a real challenge. This paper presents a neuro-inspired approach to the design of a knowledge management system dedicated to complex networked enterprises organized as Cyber-Physical Systems, whose functioning implies dynamic reconfiguration in response to environmental changes based on the gathering and use of large flows of information; such enterprises are referred to as Intelligent Cyber-Enterprises. The proposed ideas are based on a human-brain model of reasoning and learning.
Download

Paper Nr: 49
Title:

Models and Capabilities for Supporting Transformation based on Enterprise Dimensions with Enterprise Engineering

Authors:

Shoji Konno and Junichi Iijima

Abstract: In response to changes in the environment surrounding an enterprise, many occasional To-Be models, such as IT governance models and IT service management models, have been proposed. Recently, the digital enterprise model has attracted attention. The concepts, frameworks, and methodologies dealing with the enterprise have also changed in response to this movement. While we lead enterprises through transformation towards a To-Be model and/or an ambitious picture from various perspectives, it is difficult to promote transformations that maintain interoperability across those perspectives. It seems that we are working within the closed boundaries of individual frameworks and methodologies that deal with the same enterprise. The purpose of this position paper is to propose commonly available dimensions related to the enterprise and ET-CMF. The mechanism aims to analyze the influence of change based on those dimensions, using the concepts of enterprise engineering for enterprise transformation as connectors. The mechanism, currently in development, could become a holistic management framework to support transformation by using Enterprise Engineering.
Download

Paper Nr: 50
Title:

Axiom-based Probabilistic Description Logic

Authors:

Martin Unold and Christophe Cruz

Abstract: The paper proposes a new type of probabilistic description logic (p-DL) with a different interpretation of uncertain knowledge. In both approaches (classical state-of-the-art approaches and the approach of this paper), probability values are assigned to axioms in a knowledge base. While in classical p-DLs the probability value of an axiom is interpreted as the probability of the axiom being true as opposed to being false or unknown, the probability value in this approach is interpreted as the probability of the axiom being true as opposed to other axioms being true. The paper presents the theory of this novel approach and a method for the treatment of such data. The proposed description logic is evaluated with some sample knowledge bases and the results are discussed.
Download

Paper Nr: 51
Title:

Project Management Tools Assessment with OSSpal

Authors:

Samuel Cruz and Jorge Bernardino

Abstract: This paper highlights the importance of using a methodology to evaluate project management tools and to choose the one that will make project management tasks easier. Three project management tools, GitLab, Microsoft Planner and ]Project-Open[, are analysed with the open-source assessment methodology OSSpal, which focuses on the important features of this kind of tool. This is one of the most reliable and efficient ways to choose which tool should be used in a project.
Download

Paper Nr: 56
Title:

Open Source Project Management Tools Assessment using QSOS Methodology

Authors:

Anabela Carreira and Jorge Bernardino

Abstract: With the increasing expansion of open-source tools in our daily life, it is crucial to identify the best tools among the immense number that exist. In order to compare open-source project management tools, it is recommended to use a methodology such as QSOS, which helps to evaluate and choose the tool that best suits our objectives. This paper describes some of the most commonly used open-source project management tools, such as GanttProject, OpenProject, and ProjectLibre, and then compares them using the QSOS methodology.
Download

Paper Nr: 57
Title:

Integrating Internet Directories by Estimating Category Correspondences

Authors:

Yoshimi Suzuki and Fumiyo Fukumoto

Abstract: This paper focuses on two existing category hierarchies and proposes a method for integrating them into one. The integration of hierarchies proceeds on the basis of semantically related categories, which are extracted using text categorization. We extract semantically related category pairs by estimating category correspondences. Some categories within the hierarchies are merged based on the extracted category pairs, and the remaining categories are assigned to the newly constructed hierarchy. To evaluate the method, we applied the resulting new hierarchy to a text categorization task. The results showed that the method was effective for categorization.
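One simple way to approximate category correspondences is to compare the text filed under each category; the sketch below uses TF-IDF vectors and cosine similarity as a stand-in for the paper's text-categorization-based estimation, with invented toy directories and an assumed merge threshold.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy directories: category name -> concatenated text of its documents.
    dir_a = {"Soccer": "football league goal match referee",
             "Cooking": "recipe oven dish ingredients"}
    dir_b = {"Football": "goal match league team referee",
             "Recipes": "dish oven ingredients cooking"}

    texts = list(dir_a.values()) + list(dir_b.values())
    tfidf = TfidfVectorizer().fit_transform(texts)
    sim = cosine_similarity(tfidf[:len(dir_a)], tfidf[len(dir_a):])

    for i, a in enumerate(dir_a):
        for j, b in enumerate(dir_b):
            if sim[i, j] > 0.3:                  # assumed correspondence threshold
                print(f"merge candidate: {a} <-> {b} (cosine={sim[i, j]:.2f})")
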
Download

Paper Nr: 58
Title:

D7-R4: Software Development Life-Cycle for Intelligent Vision Systems

Authors:

J. I. Olszewska

Abstract: Intelligent Vision Systems (IVS) are omnipresent in our daily life, from social media apps to m-health services, from street surveillance cameras to airport e-gates, from drones to companion robots. Hence, IVS encompass any software that has a visual input processed by means of algorithm(s) involving Artificial Intelligence (AI) methods. The design and development of IVS software has become an increasingly complex task, since vision-based systems have evolved into (semi-)autonomous AI systems, usually requiring effective and ethical data processing along with efficient signal processing and real-time hardware/software integration as well as User Experience (UX) and (cyber)security features. Consequently, IVS development necessitates an adapted software development life-cycle (SDLC) addressing these multi-domain needs whilst being developer friendly. Hence, we propose in this paper a new SDLC, called D7-R4, which allows developers to produce quality, new-generation IVS to be deployed in real time and in real-world, unstructured environments.
Download

Paper Nr: 59
Title:

Measuring and Avoiding Information Loss During Concept Import from a Source to a Target Ontology

Authors:

James Geller, Shmuel T. Klein and Vipina K. Keloth

Abstract: Comparing pairs of ontologies in the same biomedical content domain often uncovers surprising differences. In many cases these differences can be characterized as “density differences,” where one ontology describes the content domain with more concepts in a more detailed manner. Using the Unified Medical Language System across pairs of ontologies contained in it, these differences can be precisely observed and used as the basis for importing concepts from the ontology of higher density into the ontology of lower density. However, such an import can lead to an intuitive loss of information that is hard to formalize. This paper proposes an approach based on information theory that mathematically distinguishes between different methods of concept import and measures the associated avoidance of information loss.
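The information-theoretic intuition can be illustrated with a small entropy computation: collapsing several fine-grained source concepts into one coarse target concept discards bits of information. The sketch below is only illustrative of the idea; the paper's actual measure over concept-import methods may be defined differently.

    from math import log2

    # Shannon entropy of an instance distribution over concepts.
    def entropy(counts):
        total = sum(counts)
        return -sum(c / total * log2(c / total) for c in counts if c > 0)

    # A dense source region vs. the coarser target concept a naive import
    # would collapse it into (counts are invented for the example).
    source_region = [40, 35, 25]        # three fine-grained sibling concepts
    target_region = [100]               # one coarse concept after lossy import

    loss = entropy(source_region) - entropy(target_region)
    print(f"information lost by collapsing the concepts: {loss:.3f} bits")
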
Download

Paper Nr: 60
Title:

Development of Ontologies for Reasoning and Communication in Multi-Agent Systems

Authors:

Sebastian Törsleff, Constantin Hildebrandt and Alexander Fay

Abstract: In future cyber-physical systems, such as smart factories and energy grids, ontologies can serve as the enabler for semantically precise communication as well as for knowledge representation and reasoning. Multi-agent systems have been shown to be a suitable software development paradigm for cyber-physical systems and may well profit from harnessing ontologies in terms of reduced engineering effort and better interoperability. This contribution presents a development methodology for ontologies that enable communication and reasoning in multi-agent systems for cyber-physical systems. The methodology is unique in addressing a set of requirements specific to this application domain.
Download

Paper Nr: 61
Title:

Multi-aspect Ontology for Interoperability in Human-machine Collective Intelligence Systems for Decision Support

Authors:

Alexander Smirnov, Tatiana Levashova, Nikolay Shilov and Andrew Ponomarev

Abstract: A collective intelligence system could significantly help to improve decision making. Its advantage is that often collective decisions can be more efficient than individual ones. The paper considers the human-machine collective intelligence as shared intelligence, which is a product of the collaboration between humans and software services, their joint efforts and conformed decisions. Usually, multiple collaborators do not share a common view on the domain or problem they are working on. The paper assumes usage of multi-aspect ontologies to overcome the problem of different views thus enabling humans and intelligent software services to self-organize into a collaborative community for decision support. A methodology for development of the above multi-aspect ontologies is proposed. The major ideas behind the approach are demonstrated by an example from the smart city domain.
Download

Paper Nr: 62
Title:

Towards a Semantic Matchmaking Algorithm for Capacity Exchange in Manufacturing Supply Chains

Authors:

Audun Vennesland, Johannes Cornelis de Man, Peter H. Haro, Emrah Arica and Manuel Oliveira

Abstract: Within supply chains, companies have difficulties finding suppliers outside their known supplier pool or geographical areas. The EU project MANUSQUARE aims to deploy a marketplace to match supply and demand of supply chain resources and to facilitate accurate and efficient matchmaking. To this end, a semantic matching algorithm has been developed as one of the key enablers of such a marketplace. The algorithm exploits formal descriptions of resources provided by an ontology developed in the project and will later be extended to incorporate additional data from different endpoints. This paper describes the main components of the semantic matching algorithm, which, on the basis of the formally described supply chain resources, returns a ranked list of relevant suppliers for a given customer query. The paper further describes a comparative evaluation of a set of common semantic similarity techniques, conducted in order to identify the most appropriate technique for our purpose. The results from the evaluation show that all four techniques perform well and are able to distinguish relevant suppliers from irrelevant ones. The best-performing technique is the edge-based Wu-Palmer measure.
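The Wu-Palmer measure mentioned in the evaluation can be computed, for example, over WordNet with NLTK as sketched below; the synset names stand in for capability concepts and are not taken from the MANUSQUARE ontology (running it requires the WordNet corpus, e.g. via nltk.download).

    from nltk.corpus import wordnet as wn   # needs the WordNet corpus installed

    # Illustrative ranking of candidate capabilities against a query concept
    # using edge-based Wu-Palmer similarity (synsets are stand-ins only).
    query = wn.synset("drill.n.01")
    candidates = ["saw.n.01", "hammer.n.01", "car.n.01"]

    ranked = sorted(
        ((name, query.wup_similarity(wn.synset(name))) for name in candidates),
        key=lambda pair: pair[1] or 0.0,
        reverse=True,
    )
    for name, score in ranked:
        print(name, round(score or 0.0, 2))
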
Download

Paper Nr: 63
Title:

A Comparative Evaluation of Visual and Natural Language Question Answering over Linked Data

Authors:

Gerhard Wohlgenannt, Dmitry Mouromtsev, Dmitry Pavlov, Yury Emelyanov and Alexey Morozov

Abstract: With the growing number and size of Linked Data datasets, it is crucial to make the data accessible and useful for users without knowledge of formal query languages. Two approaches towards this goal are knowledge graph visualization and natural language interfaces. Here, we investigate specifically question answering (QA) over Linked Data by comparing a diagrammatic visual approach with existing natural language-based systems. Given a QA benchmark (QALD7), we evaluate a visual method which is based on iteratively creating diagrams until the answer is found, against four QA systems that have natural language queries as input. Besides other benefits, the visual approach provides higher performance, but also requires more manual input. The results indicate that the methods can be used complementary, and that such a combination has a large positive impact on QA performance, and also facilitates additional features such as data exploration.
Download

Paper Nr: 64
Title:

Organizational Engineering Processes: Integration of the Cause-and-Effect Analysis in the Detection of Exception Kinds

Authors:

Dulce Pacheco, David Aveiro and Nelson Tenório

Abstract: Enterprises are dynamic systems that struggle to adapt to the constant changes in their environment. The complexity of these systems frequently originates inefficiencies that turn into the loss of resources and might even compromise an organization’s viability. Control and G.O.D. (sub)organizations allow enterprises to specify measures and viability norms that help to identify, acknowledge, and handle exceptions. Organizational engineering processes are deployed to treat dysfunctions within the G.O.D. organization but often fail to eliminate or circumvent their root cause. In this paper, we propose an extension to the model to allow a thorough investigation of the root causes of dysfunctions within organizational engineering processes. Grounded in the seven guidelines for Information Systems Research in the design-science paradigm, we claim that the organizational engineering process should be supplemented with a systematic and broader investigation of causes, namely the Ishikawa approach of cause-and-effect analysis. The main contributions of this paper are the improvement of the organizational engineering process for handling unexpected exceptions in reactive change dynamics and the freely available Dysfunctions Bank of common dysfunctions and their probable causes. This work should trigger a reduction in the number of organizational dysfunctions and help keep the organizational self and the organization’s ontological model up to date.
Download

Paper Nr: 65
Title:

Towards a Usable Ontology for the Quantum World

Authors:

Marcin Skulimowski

Abstract: We present and discuss selected issues related to the problem of representing in a machine-readable way data and knowledge about the quantum level of reality. In particular, we propose a method of creating an ontology for the quantum world. The method uses a mathematical structure of quantum mechanics. We apply the method to obtain a toy ontology corresponding to the Hilbert space formulation of quantum mechanics. We use the terms from the ontology to describe a simple quantum system. We also show how we can use the ontology to create semantic enhancements of scientific publications on quantum mechanics.
Download

Paper Nr: 68
Title:

An Ontology based Personalized Privacy Preservation

Authors:

Ozgu Can and Buket Usenmez

Abstract: Various organizations share sensitive personal data for data analysis, so sensitive information must be protected. For this purpose, privacy preservation has become a major issue in data disclosure and data publishing: an individual’s sensitive data must be indistinguishable after publication. Data anonymization techniques perform various operations on data before it is shared publicly. At the same time, data must remain available for accurate analysis when it is released; therefore, the differential privacy method, which adds noise to query results, is used. The purpose of data anonymization is to ensure that data cannot be misused even if it is stolen, and to enhance the privacy of individuals. In this paper, an ontology-based approach is proposed to support privacy-preservation methods by integrating data anonymization techniques in order to develop a generic anonymization model. The proposed personalized privacy approach also considers individuals’ different privacy concerns and includes the concepts of privacy-preserving algorithms.
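The differential-privacy ingredient mentioned above can be illustrated with the standard Laplace mechanism, sketched below in Python with NumPy; the query, sensitivity and epsilon values are illustrative, not the paper's configuration.

    import numpy as np

    # Laplace mechanism: noise scaled to sensitivity/epsilon is added to the
    # true result of a count query before it is released.
    def laplace_count(true_count, epsilon, sensitivity=1.0):
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return true_count + noise

    true_count = 42                      # e.g. number of patients with a condition
    for epsilon in (0.1, 1.0, 10.0):     # smaller epsilon -> stronger privacy
        print(epsilon, round(laplace_count(true_count, epsilon), 2))
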
Download

Paper Nr: 69
Title:

RDF Doctor: A Holistic Approach for Syntax Error Detection and Correction of RDF Data

Authors:

Ahmad Hemid, Lavdim Halilaj, Abderrahmane Khiat and Steffen Lohmann

Abstract: Over the years, the demand for interoperability support between diverse applications has significantly increased. The Resource Description Framework (RDF), among other solutions, is utilized as a data modeling language that allows knowledge from various domains to be encoded in a unified representation. Moreover, a vast amount of data from heterogeneous data sources is continuously published in documents using the RDF format. These RDF documents must therefore be syntactically correct to enable software agents to perform further processing. Although a number of approaches have been proposed for ensuring error-free RDF documents, they are commonly not able to identify all syntax errors at once, failing on the first error encountered. In this paper, we tackle the problem of simultaneous error identification and propose RDF Doctor, a holistic approach for detecting and resolving syntactic errors in a semi-automatic fashion. First, we define a comprehensive list of errors that can be detected, along with customized error messages that give users a better understanding of the actual errors. Next, a subset of syntactic errors is corrected automatically by matching them with predefined error messages. Finally, for a particular number of errors, customized and meaningful messages are delivered to users to facilitate the manual correction process. The results from empirical evaluations provide evidence that the presented approach is able to effectively detect a wide range of syntax errors and automatically correct a large subset of them.
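The baseline limitation that RDF Doctor addresses, a parser stopping at the first syntax error, can be reproduced with a few lines of rdflib, as sketched below on an invented, deliberately broken Turtle snippet.

    from rdflib import Graph

    # Two syntax problems, but a standard parser reports only the first one.
    broken_turtle = """
    @prefix ex: <http://example.org/> .
    ex:alice ex:knows ex:bob          # missing the terminating '.'
    ex:bob ex:age "thirty"^^xsd:int . # 'xsd' prefix never declared
    """

    try:
        Graph().parse(data=broken_turtle, format="turtle")
    except Exception as err:           # rdflib raises on the first error it meets
        print("first error only:", err)
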
Download

Paper Nr: 8
Title:

A Knowledge Chunk Reuse Support Tool based on Heterogeneous Ontologies

Authors:

Takeshi Morita, Naoya Takahashi, Mizuki Kosuda and Takahira Yamaguchi

Abstract: To develop service robot applications, it is necessary to acquire domain expert knowledge and develop the applications based on that knowledge. However, since many of these applications are currently developed by engineers using robot middleware, the domain expert knowledge is embedded in the code and is difficult to reuse. Therefore, a tool is needed that supports the development of applications based on machine-readable knowledge from domain experts. We also believe that this machine-readable knowledge can be reused not only by service robots but also by novices in the domain. To address these problems, this paper proposes a knowledge chunk (KC) reuse support tool based on heterogeneous ontologies. In this study, the parts of a reusable workflow, the indexes required for a search, and a movie recording of the robots’ movement based on the parts of the workflow are collectively called a KC. Using the framework of case-based reasoning, the proposed tool accumulates parts of reusable workflows as case examples based on heterogeneous ontologies and facilitates the search and reuse of KCs. It promotes domain expert knowledge acquisition and supports novices in learning the knowledge. As a case study, we applied the proposed tool to teaching assistant (TA) robots. Two public elementary school teachers created workflows for TA robots using the proposed tool, and each teacher conducted a lesson with TA robots once. Through questionnaires given to the teachers, the proposed tool and the TA robot application were evaluated to confirm their usefulness.
Download

Paper Nr: 14
Title:

Relevancy Scoring for Knowledge-based Recommender Systems

Authors:

Robert David and Trineke Kamerling

Abstract: Knowledge-based recommender systems are well suited for users to explore complex knowledge domains like iconography without having domain knowledge. To help them understand and make decisions for navigation in the information space, we can show how important specific concept annotations are for the description of an item in a collection. We present an approach to automatically determine relevancy scores for concepts of a domain model. These scores represent the importance for item descriptions as part of knowledge-based recommender systems. In this paper we focus on the knowledge domain of iconography, which is quite complex, difficult to understand and not commonly known. The use case for a knowledge-based recommender system in this knowledge domain is the exploration of a museum collection of historical artworks. The relevancy scores for the concepts of an artwork should help the user to understand the iconographic interpretation and to navigate the collection based on personal interests.
Download

Paper Nr: 25
Title:

Inventing ET Rules to Improve an MI Solver on KR-logic

Authors:

Tadayuki Yoshida, Ekawit Nantajeewarawat, Masaharu Munetomo and Kiyoshi Akama

Abstract: Many logical problems cannot be solved by using logic programs, since logic programs have limited representational capability. We try to overcome this limitation by adopting KR-logic, an extension of first-order logic that includes function variables. In this paper, we take a problem that is well described with function variables. We rely on the Logical Problem Solving Framework (LPSF) to formalize the problem as a model-intersection (MI) problem. We then develop a solver for MI problems by adding five new transformation rules concerning function variables. The correctness of each rule is proved, i.e., each rule is an equivalent transformation (ET) rule. Since each rule is correct, all ET rules can be used together without modification or combinational cost. Thus, the invented rules can be safely reused in other LPSF-based solvers.
Download

Paper Nr: 26
Title:

Logical Approach to Theorem Proving with Term Rewriting on KR-logic

Authors:

Tadayuki Yoshida, Ekawit Nantajeewarawat, Masaharu Munetomo and Kiyoshi Akama

Abstract: Term rewriting is often used for proving theorems. To mechanize such a proof method with computational correctness strictly guaranteed, we follow LPSF, a general framework for generating logical problem-solving methods. In place of first-order logic, we use KR-logic, which has function variables, for correct formalization. By repeating (1) specialization by a substitution for usual variables and (2) application of an already derived rewriting rule, we can generate a term rewriting rule from the resulting equational clause. The obtained term rewriting rules are proved to be equivalent transformation rules, so the correctness of the computation results is guaranteed. This theory shows that LPSF integrates logical inference and functional rewriting under the broader concept of equivalent transformation.
Download

Paper Nr: 30
Title:

EmoCulture: Towards an Ontology for Describing Cultural Differences When Expressing, Handling and Regulating Emotions

Authors:

Azza Labidi, Fadoua Ouamani and Narjès B. Ben Saoud

Abstract: Collaborative learning environments bring together learners from different sociocultural contexts around a common task. Besides, these environments are emotional places where learners frequently experience emotions and bring in emotions concerning events from outside the learning environment. Moreover, learners express, handle and regulate their emotions differently according to the sociocultural context to which they belong. As empirical research studies have shown, emotions can have important effects on students’ learning and achievement. Therefore, detecting, understanding, handling and regulating learners’ emotions, and understanding their cultural differences, is a key issue that needs to be tackled to enhance collaborative learning. To do so, we propose the EmoCulture ontology, a domain ontology for representing relevant aspects of affective phenomena and their cultural differences in collaborative learning environments. In this paper, we first discuss the concept of emotion and its relations with learning, collaborative learning and culture. Second, we present a set of selected existing emotion ontologies, which are compared according to criteria relevant to our study. Third, we describe the process by which EmoCulture was built. Finally, we discuss the quality of the proposed ontology and how it will be used in future work to guide the building of an emotionally and culturally aware collaborative learning environment.
Download

Paper Nr: 31
Title:

A Typology of Temporal Data Imperfection

Authors:

Nassira Achich, Fatma Ghorbel, Fayçal Hamdi, Elisabeth Metais and Faiez Gargouri

Abstract: Temporal data may be subject to several types of imperfection (e.g., uncertainty, imprecision). Several typologies of data imperfections have already been proposed in this context. However, these typologies cannot be applied to temporal data because of the complexity and specificity of this type of data. Besides, to the best of our knowledge, there is no typology of temporal data imperfections. In this paper, we propose such a typology. It is divided into direct imperfections of both numeric temporal data and natural-language-based temporal data, indirect imperfections that can be deduced from the direct ones, and granularity (i.e., context-dependent temporal data), which is related to several factors that can interfere in specifying the imperfection type, such as a person’s profile and multiculturalism. We finish by presenting an example of imprecise temporal data in the PersonLink ontology.
Download

Paper Nr: 36
Title:

Ontology Learning from Clinical Practice Guidelines

Authors:

Samia Sbissi, Mariem Mahfoudh and Said Gattoufi

Abstract: In order to assist professionals and doctors in making decisions about appropriate health care for patients who are at risk of cardiovascular disease, we propose a decision support system based on an OWL (Web Ontology Language) ontology with SWRL (Semantic Web Rule Language) rules. The idea is to parse clinical practice guidelines (i.e., documents that contain recommendations and medical knowledge) in order to enrich and exploit an existing cardiovascular domain ontology. The enrichment process is carried out as an ontology learning task. We first pre-process the text and extract the relevant concepts. Then, we enrich the ontology not only with OWL DL axioms, but also with SWRL rules. To identify the similarity between text terms and ontology concepts, we use a combination of methods such as Levenshtein similarity and Word2Vec.
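As a rough sketch of the kind of term-to-concept matching described, Levenshtein string similarity and word-embedding cosine similarity can be blended into a single score. The tiny embedding table and the equal weighting below are assumptions standing in for a trained Word2Vec model and tuned weights; the paper's actual combination is not reproduced here.

import numpy as np

def levenshtein_similarity(a: str, b: str) -> float:
    """Normalised Levenshtein similarity in [0, 1]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + cost)
    return 1.0 - dp[m][n] / max(m, n, 1)

# Toy vectors standing in for Word2Vec embeddings of terms/concept labels.
embeddings = {
    "hypertension": np.array([0.9, 0.1, 0.2]),
    "high blood pressure": np.array([0.85, 0.15, 0.25]),
}

def cosine_similarity(t1: str, t2: str) -> float:
    v1, v2 = embeddings.get(t1), embeddings.get(t2)
    if v1 is None or v2 is None:
        return 0.0
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def combined_similarity(term: str, concept: str, w: float = 0.5) -> float:
    # Equal weighting of string and embedding similarity (an assumption).
    return w * levenshtein_similarity(term, concept) + \
           (1 - w) * cosine_similarity(term, concept)

print(combined_similarity("high blood pressure", "hypertension"))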
Download

Paper Nr: 41
Title:

How Do the Members of a Parliament Negotiate? Analysing Verbatim Records

Authors:

Mare Koit, Haldur Õim and Tiit Roosmaa

Abstract: Negotiation is a strategic discussion that resolves an issue in a way that both parties find acceptable. Specific forms of negotiation are used in many situations, among them parliamentary discussions. In this paper we report on a pilot study of verbatim records of sittings held in the Estonian Parliament. The structure of the discussions is represented using the dialogue acts of a custom-made typology and compared with the structure of negotiation in everyday life. Our further aim is to create means for automatically recognizing the structure and analysing the contents of parliamentary negotiations and political arguments. To our knowledge, this is the first attempt to model Estonian political discussions.
Download

Paper Nr: 44
Title:

Supporting Taxonomy Development and Evolution by Means of Crowdsourcing

Authors:

Binh Vu and Matthias Hemmje

Abstract: Information overload continues to be a challenge. By dividing material into many small subsets, classification based on a taxonomy makes data exploration and retrieval faster and more accurate. Instead of having to know the exact keywords that describe a knowledge resource, users can browse and search for it by selecting the categories to which the resource is most likely to belong. Nevertheless, developing taxonomies is not an easy task: it requires the authors to have a certain amount of domain knowledge. Furthermore, the workload increases because any taxonomy needs to be updated frequently to remain relevant and useful. To address these problems, this paper proposes an approach that crowdsources taxonomy development and evolution. We describe the concept of this approach along with different types of evaluations that aim, on the one hand, to demonstrate the feasibility of the approach and the usability of the initial prototype and, on the other hand, to assess the quality and effectiveness of the chosen method.
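The taxonomy-based browsing motivated above can be illustrated with a small tree of categories in which resources are retrieved from a category and all of its descendants. The categories and resources below are invented for illustration and have no connection to the authors' prototype.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Category:
    # A taxonomy node: subcategories plus directly assigned resources.
    name: str
    children: List["Category"] = field(default_factory=list)
    resources: List[str] = field(default_factory=list)

    def all_resources(self) -> List[str]:
        """Return resources classified under this category or any descendant."""
        found = list(self.resources)
        for child in self.children:
            found.extend(child.all_resources())
        return found

# Hypothetical taxonomy of learning material.
root = Category("Computer Science", children=[
    Category("Databases", resources=["Intro to SQL"]),
    Category("Knowledge Engineering", children=[
        Category("Ontologies", resources=["OWL Primer", "SWRL Tutorial"]),
    ]),
])

print(root.children[1].all_resources())   # ['OWL Primer', 'SWRL Tutorial']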
Download

Paper Nr: 45
Title:

Challenges of Modeling and Evaluating the Semantics of Technical Content Deployed in Recommendation Systems for Industry 4.0

Authors:

Jos Lehmann, Michael Shamiyeh and Sven Ziemer

Abstract: In the context of Industry 4.0, the Smart Factory is enabled by the automation of physical production activities. The automation of intellectual pre-production activities enables what is here dubbed the “Smart Studio”. A key element of the Smart Studio is Semantic Technology. While prototyping an ontology-based recommendation system for technical content in a case study from the aviation industry, the problem of the readiness level of Semantic Technology became apparent. This led to the formulation of a Semantic Modeling and Tagging Methodology. The evaluation of both the prototype and the methodology yielded valuable insights into (i) the quantity and quality of semantics needed in the Smart Studio, (ii) the different interaction profiles identified when testing recommendations, (iii) the efficiency and effectiveness of the methods required to achieve semantics of the right quantity and quality, and (iv) the extent to which an ontology-based recommendation system is feasible and reduces double work for knowledge workers. Based on these results, this paper formulates a position on the challenges for the viable application of Semantic Technology to technical content in Industry 4.0.
Download

Paper Nr: 52
Title:

Evaluation of Asana, Odoo, and ProjectLibre Project Management Tools using the OSSpal Methodology

Authors:

Joana F. Marques and Jorge Bernardino

Abstract: In order to complete projects successfully, it is essential for companies to acquire a project management tool that assists with planning, cost and resource management. Currently, there are several open source project management tools on the market that offer much of the required functionality. Thus, it is important to choose the most appropriate one according to the needs of the user. To help with this choice, open source software evaluation methodologies can be used. In this paper, we use the OSSpal methodology to evaluate three popular open source project management tools: Asana, Odoo, and ProjectLibre.
Download

Paper Nr: 53
Title:

Distributed Ontology for the Needs of Disabled People

Authors:

Caroline Wintergerst and Guilaine Talens

Abstract: In French society, a great deal of assistance is provided to people, and in the particular case of disabled people it is quite difficult to deal with all the information coming from heterogeneous contexts. Such heterogeneous knowledge cannot be directly integrated, so we propose to build an ontology for each aspect. Three ontologies are built, each from different existing sources (thesauri, ontologies, ...). The disability ontology covers the medical and social domains. The service ontology represents generic and local services. The individual needs ontology supports the description of an individual’s file and its links with the other ontologies. Each ontology cooperates with the others, and the cooperation of these distributed ontologies must resolve semantic conflicts. A framework is proposed to build each ontology and also to manage ontology collaboration. To ensure the representation of the interactive guidance process, we model a workflow that follows an individual file through its successive steps. This allows better long-term assistance monitoring and demonstrates the need for evolutive knowledge representations such as ontologies.
Download

Paper Nr: 55
Title:

A Flexible Schema for Document Oriented Database (SDOD)

Authors:

Shady Hamouda, Zurinahni Zainol and Mohammed Anbar

Abstract: Big data is emerging as one of the most crucial issues in the modern world. Most studies note that a relational database cannot handle big data. This challenge has led to the emergence of the Not Only SQL (NoSQL) database as a new concept in database technology. NoSQL supports large volumes of data by providing mechanisms for data storage and retrieval that are more flexible than those of a relational database. One of the most powerful types of NoSQL database is the document-oriented database. Recently, many software developers have been willing to migrate from relational databases to NoSQL databases because of scalability, availability, and performance. A key challenge for the document-oriented database is how to obtain an appropriate schema. Existing approaches for migrating a relational database to a document-oriented database do not consider all the properties of the former, especially how to handle various types of relationships. This research proposes a flexible schema for a document-oriented database (SDOD). The study evaluates development agility based on the proposed document-oriented schema as well as query execution time, and the evaluation verifies the reliability of the proposed schema.
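To make the relationship-handling issue concrete, the sketch below shows one common way a one-to-many relational relationship can be mapped into an embedded document structure of the kind used by document-oriented databases such as MongoDB. The table names and the decision to embed rather than reference are illustrative assumptions, not the SDOD mapping rules.

# Illustrative mapping of a relational one-to-many relationship
# (customer -> orders) into a single embedded document.

customers = [{"customer_id": 1, "name": "Alice"}]
orders = [
    {"order_id": 10, "customer_id": 1, "total": 99.5},
    {"order_id": 11, "customer_id": 1, "total": 42.0},
]

def to_document(customer, all_orders):
    """Embed the customer's orders inside the customer document."""
    return {
        "_id": customer["customer_id"],
        "name": customer["name"],
        "orders": [
            {"order_id": o["order_id"], "total": o["total"]}
            for o in all_orders
            if o["customer_id"] == customer["customer_id"]
        ],
    }

documents = [to_document(c, orders) for c in customers]
print(documents[0])

Embedding avoids join-style lookups at query time but duplicates or nests data; referencing (storing only order ids) is the usual alternative when the related collection is large or shared, which is exactly the kind of trade-off a migration schema has to decide.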
Download

Paper Nr: 67
Title:

Towards the Prokaryotic Regulation Ontology: An Ontological Model to Infer Gene Regulation Physiology from Mechanisms in Bacteria

Authors:

Citlalli Mejía-Almonte and Julio Collado-Vides

Abstract: Here we present a formal ontological model that explicitly represents regulatory interactions among the main objects involved in transcriptional regulation in bacteria. These formal relations allow the inference of gene regulation physiology from gene regulation mechanisms. The automatically instantiated classes can be used to assist in the mechanistic interpretation of gene expression experiments performed at the physiological level, such as RNA-seq. This is a first step towards developing a more comprehensive ontology focused on prokaryotic gene regulation. The ontology is available at https://github.com/prokaryotic-regulation-ontology.
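The inference of a physiological statement from a mechanistic one can be illustrated with a few RDF triples and a hand-rolled rule, as sketched below. The namespace, class and property names (TranscriptionFactor, represses, negativelyRegulatesExpressionOf) are invented for this sketch and are not the classes or relations of the actual Prokaryotic Regulation Ontology.

from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/pro-sketch#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Mechanistic facts: a transcription factor represses a gene's promoter.
g.add((EX.LacI, RDF.type, EX.TranscriptionFactor))
g.add((EX.lacZ, RDF.type, EX.Gene))
g.add((EX.LacI, EX.represses, EX.lacZ))

# Toy inference rule:
# "X represses Y" (mechanism)  =>  "X negatively regulates expression of Y" (physiology)
for tf, _, gene in list(g.triples((None, EX.represses, None))):
    g.add((tf, EX.negativelyRegulatesExpressionOf, gene))

for s, _, o in g.triples((None, EX.negativelyRegulatesExpressionOf, None)):
    print(s.split("#")[-1], "negatively regulates expression of", o.split("#")[-1])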
Download