KEOD 2017 Abstracts


Full Papers
Paper Nr: 2
Title:

Onto.KOM - Towards a Minimally Supervised Ontology Learning System based on Word Embeddings and Convolutional Neural Networks

Authors:

Wael Alkhatib, Leon Alexander Herrmann and Christoph Rensing

Abstract: This paper introduces Onto.KOM: a minimally supervised ontology learning system which minimizes the reliance on complicated feature engineering and supervised linguistic modules for constructing the different consecutive components of an ontology, potentially providing a domain-independent and fully automatic ontology learning system. The focus here is to fill the gap between automatically identifying the different ontological categories reflecting the domain of interest and the extraction and classification of semantic relations between the concepts under the different categories. In Onto.KOM, we depart from traditional approaches with intensive linguistic analysis and manual feature engineering for relation classification by introducing a convolutional neural network (CNN) that automatically learns features from word-pair offsets in the vector space. The experimental results show that our system outperforms the state-of-the-art systems for relation classification in terms of F1-measure.
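A minimal sketch of the word-pair offset idea the system builds on: the difference between two word vectors carries the relation signal from which the CNN learns features. The toy embedding table below is a stand-in for pre-trained vectors (e.g. word2vec); it is illustrative only.

```python
import numpy as np

# Toy stand-in for a pre-trained embedding lookup (in practice: word2vec/GloVe vectors).
emb = {
    "paris":  np.array([0.9, 0.1, 0.3]),
    "france": np.array([0.8, 0.2, 0.7]),
    "rome":   np.array([0.7, 0.0, 0.2]),
    "italy":  np.array([0.6, 0.1, 0.6]),
}

def offset(head, tail):
    """Word-pair offset vector: the relation signal consumed by the classifier."""
    return emb[head] - emb[tail]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pairs holding the same relation (capital-of) should have similar offsets.
print(cosine(offset("paris", "france"), offset("rome", "italy")))
```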

Paper Nr: 4
Title:

Towards a Complex Interaction Scenario in Worker-cobot Reconfigurable Collaborative Manufacturing via Reactive Agent Ontology - Case-study: Two Workers in Cooperation with One Cobot

Authors:

Ahmed R. Sadik and Bodo Urban

Abstract: Close Human-Robot Interaction (HRI) has been a major focus of research for the last decades. One outcome of this focus is a new field in industrial robotics called collaborative robotics. A collaborative robot (cobot) is usually an industrial robot designed to operate safely in a work environment shared with the human worker. In contrast to conventional Industrial Robots (IRs), which operate in isolation from the worker's workspace, the cobot changes the concept of automation from fully automated operations to semi-autonomous operations, where the decisions of the worker influence the actions of the cobot and vice versa. Therefore, a communication and information control framework must exist to connect the worker and the cobot and fulfil this semi-autonomous paradigm. This framework should provide a method to represent the common knowledge that supports collaborative manufacturing between the worker and the cobot. In this research, we propose an ontology-based Holonic Control Architecture (HCA) as a solution for sharing and communicating the knowledge needed to achieve complex interaction scenarios between the worker and the cobot.

Paper Nr: 6
Title:

Is (President,大統領) a Correct Sense Pair? - Linking and Creating Bilingual Sense Correspondences

Authors:

Fumiyo Fukumoto, Yoshimi Suzuki and Attaporn Wangpoonsarp

Abstract: This paper presents a method of linking and creating bilingual sense correspondences between English and Japanese noun word dictionaries. Locally, we extracted bilingual noun words using sentence-based similarity. Globally, for each monolingual dictionary, we identified domain-specific senses using a textual corpus with category information. We incorporated these, i.e., we assigned a sense to each noun word of the extracted bilingual words while keeping domain (category) consistency. Evaluation on the WordNet 3.0 and EDR Japanese dictionaries using the Reuters and Mainichi Japanese newspaper corpora showed a 23.1% improvement of bilingual noun word extraction over the baseline with the local data view only. Moreover, we found that the extracted bilingual noun senses can be used as a lexical resource for machine translation.

Paper Nr: 19
Title:

Automatic Algorithm for Extracting an Ontology for a Specific Domain Name

Authors:

Saeed Sarencheh and Andrea Schiffauerova

Abstract: Scientists use knowledge representation techniques to transfer knowledge from humans to machines. The ontology is the best-known representation technique for transferring knowledge to machines. Creating a new knowledge ontology is a complex task, and most proposed algorithms for creating an ontology from documents have problems in detecting complex concepts and their non-taxonomic relationships. Moreover, previous algorithms are not able to analyze a multidimensional context, where each concept might have different meanings. This study proposes a framework that separates the process of finding important concepts from linguistic analysis in order to extract more taxonomic and non-taxonomic relationships. In this framework, we use a modified version of the Term Frequency-Inverse Document Frequency (TF-IDF) weight to extract important concepts from an online encyclopedia. Data mining algorithms such as semantic class labeling are used to connect concepts, categorize attributes, and label them, and an online encyclopedia is used to create a structure for the knowledge of the given domain. Part-of-Speech (POS) tagging and the dependency trees of sentences are used to extract concepts and their relationships (i.e. taxonomic and non-taxonomic). We then evaluate this framework by comparing its results with an existing ontology in the area of biochemistry. The results show that the proposed method can detect more detailed information and has better performance.
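The framework relies on a modified TF-IDF weight; the abstract does not give the modification, so the sketch below only shows the standard TF-IDF weight that it departs from, applied to a toy corpus.

```python
import math
from collections import Counter

docs = [
    "enzymes catalyse biochemical reactions",
    "proteins are built from amino acids",
    "enzymes are proteins",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)
df = Counter(t for doc in tokenized for t in set(doc))   # document frequency per term

def tf_idf(term, doc):
    tf = doc.count(term) / len(doc)
    idf = math.log(N / df[term])   # standard IDF; the paper modifies this weighting
    return tf * idf

for term in ("enzymes", "proteins", "amino"):
    print(term, round(tf_idf(term, tokenized[2]), 3))
```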

Paper Nr: 22
Title:

Multi-user Feedback for Large-scale Cross-lingual Ontology Matching

Authors:

Mamoun Abu Helou and Matteo Palmonari

Abstract: Automatic matching systems are introduced to reduce the manual workload of users that need to align two ontologies by finding potential mappings and determining which ones should be included in a final alignment. Mappings found by fully automatic matching systems are neither correct nor complete when compared to gold standards. In addition, automatic matching systems may not be able to decide which one, among a set of candidate target concepts, is the best match for a source concept based on the available evidence. To handle the above mentioned problems, we present an interactive mapping Web tool named ICLM (Interactive Cross-lingual Mapping), which aims to improve an alignment computed by an automatic matching system by incorporating the feedback of multiple users. Users are asked to validate mappings computed by the automatic matching system by selecting the best match among a set of candidates, i.e., by performing a mapping selection task. ICLM tries to reduce users’ effort required to validate mappings. ICLM distributes the mapping selection tasks to users based on the tasks’ difficulty, which is estimated by considering the lexical characterization of the ontology concepts, and the confidence of automatic matching algorithms. Accordingly, ICLM estimates the effort (number of users) needed to validate the mappings. An experiment with several users involved in the alignment of large lexical ontologies is discussed in the paper, where different strategies for distributing the workload among the users are evaluated. Experimental results show that ICLM significantly improves the accuracy of the final alignment using the strategies proposed to balance and reduce the user workload.
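A minimal sketch of the workload-distribution idea described above, under assumed heuristics: harder mapping selection tasks (low matcher confidence) are given to more validators, and their selections are aggregated by majority vote. The thresholds and vote rule below are illustrative, not ICLM's actual strategy.

```python
from collections import Counter

def validators_needed(confidence, min_users=1, max_users=5):
    """Hypothetical effort estimate: the lower the matcher confidence,
    the more users are asked to validate this mapping selection task."""
    difficulty = 1.0 - confidence
    return min(max_users, max(min_users, round(1 + difficulty * (max_users - 1))))

def select_mapping(votes):
    """Aggregate the users' selections for one source concept by majority vote."""
    return Counter(votes).most_common(1)[0][0]

print(validators_needed(0.95))   # easy task -> few validators
print(validators_needed(0.40))   # hard task -> more validators
print(select_mapping(["ex:Employee", "ex:Worker", "ex:Employee"]))
```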

Paper Nr: 24
Title:

Deep Associative Semantic Neural Graphs for Knowledge Representation and Fast Data Exploration

Authors:

Adrian Horzyk

Abstract: This paper presents new deep associative neural networks that can semantically associate any data, represent their complex relations of various kinds, and be used for fast information search, data mining, and knowledge exploration. They allow various horizontal and vertical relations between data to be stored and significantly broaden and accelerate various search operations. Many relations which must be searched for in relational databases are immediately available using the presented associative data model, which is based on a new special kind of associative spiking neurons and sensors used for the construction of these networks. The inference operations are also performed using the reactive abilities of these spiking neurons. The paper describes the transformation of any relational database into this kind of network. All related data and their combinations representing various objects are contextually connected with different strengths, reproducing various similarities, proximities, successions, orders, inclusions, rarities, or frequencies of these data. The computational complexity of the described operations is usually constant and lower than that of the corresponding database operations. The theory is illustrated by a few examples and used for inference on this kind of neural network.
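The transformation of relational data into an associative network is only described at a high level above; the toy sketch below illustrates the underlying idea (connecting records through shared attribute values so that related objects become reachable without join queries), not the authors' spiking-neuron model.

```python
from collections import defaultdict

rows = [
    {"id": 1, "name": "Alice", "city": "Krakow", "dept": "AI"},
    {"id": 2, "name": "Bob",   "city": "Krakow", "dept": "DB"},
    {"id": 3, "name": "Carol", "city": "Vienna", "dept": "AI"},
]

# Value "sensors": each distinct attribute value points to the records containing it,
# so records sharing a value are directly associated (no join needed at query time).
value_index = defaultdict(set)
for row in rows:
    for attr, val in row.items():
        if attr != "id":
            value_index[(attr, val)].add(row["id"])

def associated(record_id):
    """All records connected to record_id through at least one shared value."""
    links = set()
    for ids in value_index.values():
        if record_id in ids:
            links |= ids
    return links - {record_id}

print(associated(1))   # {2, 3}: Bob via shared city, Carol via shared dept
```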

Paper Nr: 31
Title:

Cybersecurity Ontology for Critical Infrastructures

Authors:

Sandra Bergner and Ulrike Lechner

Abstract: The number and frequency of hacker attacks on critical infrastructures such as waterworks, government institutions and airports are increasing. The cybersecurity of critical infrastructures is a complex topic, with a plethora of requirements, measures from the BSI and NIST, as well as vulnerabilities, that need to be considered. This paper describes an ontology for the IT security of critical infrastructures that combines the aforementioned requirements to give critical infrastructures a guideline or roadmap for security and safety measures, in order to preventively protect them against hacker attacks.

Paper Nr: 46
Title:

Representing Ecological Network Specifications with Semantic Web Techniques

Authors:

Gianluca Torta, Liliana Ardissono, Luigi La Riccia, Adriano Savoca and Angioletta Voghera

Abstract: Ecological Networks (ENs) are a way to describe the structures of existing real ecosystems and to plan their expansion, conservation and improvement. In this work, we present a model to represent the specifications for the local planning of ENs in a way that can support reasoning, e.g., to detect violations within new expansion proposals, or to reason about improvements of the networks. Moreover, we describe an OWL ontology for the representation of the ENs themselves. In the context of knowledge engineering, ENs provide a complex, inherently geographic domain that demands the expressive power of a language like OWL, augmented with the GeoSPARQL ontology, to be conveniently represented. More importantly, the set of specification rules that we consider (taken from the project for a local EN implementation) constitutes a challenging problem for representing constraints over complex geographic domains and evaluating whether a given large knowledge base satisfies or violates them.

Paper Nr: 53
Title:

Exploiting Linked Open Data for Enhancing MediaWiki-based Semantic Organizational Knowledge Bases

Authors:

Matthias Frank and Stefan Zander

Abstract: One of the main driving forces for the integration of Semantic MediaWiki systems in corporate contexts is their query construction capabilities on top of organization-specific vocabularies, together with the possibility to directly embed query results in wiki pages. However, exploiting knowledge from external sources like other organizational knowledge bases or Linked Open Data, as well as sharing knowledge in a meaningful way, is difficult due to the lack of a common and shared schema definition. In this paper, we introduce Linked Data Wiki (LD-Wiki), an approach that combines the power of Linked Open Vocabularies and Data with established organizational semantic wiki systems for knowledge management. It supports suggestions for annotations from Linked Open Data sources for organizational knowledge bases in order to enrich them with background information from Linked Open Data. The inclusion of potentially uncertain, incomplete, inconsistent or redundant Linked Open Data within an organization's knowledge base poses the challenge of interpreting such data correctly within the respective context. In our approach, we evaluate data provenance information in order to handle data from heterogeneous internal and external sources adequately and provide data consumers with the latest and best-evaluated information according to a ranking system.
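As a small illustration of pulling background information from Linked Open Data for a wiki page subject (not the LD-Wiki implementation itself), a DBpedia lookup with SPARQLWrapper might look like this; the chosen resource and property are illustrative.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
      <http://dbpedia.org/resource/Semantic_MediaWiki> dbo:abstract ?abstract .
      FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for binding in results["results"]["bindings"]:
    print(binding["abstract"]["value"][:200])   # candidate background text for the wiki page
```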

Paper Nr: 55
Title:

ETL4Social-Data: Modeling Approach for Topic Hierarchy

Authors:

Afef Walha, Faiza Ghozzi and Faïez Gargouri

Abstract: Transforming social media data into meaningful and useful information to enable more effective decision-making is nowadays a hot topic for Social Business Intelligence (SBI) systems. Integrating such data into a Social Data Warehouse (SDW) is the task of ETL (Extraction, Transformation and Loading) processes, which are typically recognized as a complex combination of operations and technologies that consumes a significant portion of DW development effort. These processes become even more complex when unstructured social sources are considered. We therefore propose ETL4Social, a modeling approach that designs ETL processes suited to the characteristics of social data. This approach offers specific models for social ETL operations that help the ETL designer to integrate data. A key role in the analysis of textual data is also played by topics, meant as specific concepts of interest within a subject area. In this paper, we focus mainly on models for discovering emerging topics from textual media clips. The proposed models are instantiated through a Twitter case study. ETL4Social is a standards-based modeling approach using Business Process Model and Notation (BPMN). The ETL operation models are validated against the ETL4Social meta-model, which is an extension of the BPMN meta-model.

Paper Nr: 56
Title:

The Role of Community and Social Metrics in Ontology Evaluation: An Interview Study of Ontology Reuse

Authors:

Marzieh Talebpour, Martin Sykora and Tom Jackson

Abstract: Finding a "good" or the "right" ontology for reuse is an ongoing challenge in the field of ontology engineering, where the main aim is to share and reuse existing semantics. This paper reports on a qualitative study with interviews of ontologists and knowledge engineers in different domains, ranging from the biomedical field to the manufacturing industry, and investigates the challenges they face while searching for, evaluating, and selecting an ontology for reuse. Analysis of the interviews reveals diverse sets of quality metrics that are used when evaluating the quality of an ontology. While some of the metrics have already been mentioned in the literature, the findings from our study identify new sets of quality metrics, such as community- and social-related metrics. We believe that this work represents a noteworthy contribution to the field of ontology engineering, with the hope that the research community can further draw on these initial findings in developing relevant quality metrics and ontology search and selection.

Paper Nr: 57
Title:

HybQA: Hybrid Deep Relation Extraction for Question Answering on Freebase

Authors:

Reham Mohamed, Nagwa M. El-Makky and Khaled Nagi

Abstract: Question Answering over knowledge-based data is one of the most important Natural Language Processing tasks. Despite numerous efforts that have been made in this field, it is not yet in the mainstream. Question Answering can be formulated as a Relation Extraction task between the question focus entity and the expected answer. Therefore, it requires high accuracy to solve a dual problem where both the relation and the answer are unknown. In this work, we propose HybQA, a hybrid Relation Extraction system providing high accuracy for the Relation Extraction and Question Answering tasks over Freebase. We propose a hybrid model that combines different types of state-of-the-art deep networks, which capture the relation type between the question and the expected answer from different perspectives, and combines their outputs to provide accurate relations. We then use a joint model to infer the possible relation and answer pairs simultaneously. However, since Relation Extraction might still be prone to errors due to the large size of the knowledge-base corpus (Freebase), we finally use evidence from Wikipedia as an unstructured knowledge base to select the best relation-answer pair. We evaluate the system on the WebQuestions dataset and show that it achieves a statistically significant improvement over existing state-of-the-art models, reaching the best accuracy of 57%.
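The combination rule is not given in the abstract; the sketch below only illustrates the general idea of merging per-relation scores from several models and then jointly scoring relation-answer pairs, with assumed scores and example Freebase relation names.

```python
# Per-relation scores from two hypothetical deep models for one question.
model_scores = [
    {"people.person.place_of_birth": 0.62, "people.person.nationality": 0.31},
    {"people.person.place_of_birth": 0.55, "people.person.nationality": 0.40},
]

def combine(scores_list):
    """Average the per-relation scores of the individual models."""
    relations = set().union(*scores_list)
    return {r: sum(s.get(r, 0.0) for s in scores_list) / len(scores_list)
            for r in relations}

# Candidate (relation, answer, retrieval score) triples from the knowledge base.
candidates = [("people.person.place_of_birth", "Honolulu", 0.9),
              ("people.person.nationality", "United States", 0.8)]

combined = combine(model_scores)
best = max(candidates, key=lambda c: combined.get(c[0], 0.0) * c[2])
print(best)   # jointly highest-scoring relation-answer pair
```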

Short Papers
Paper Nr: 1
Title:

Software Centric Innovative Methodology for Ontology Development

Authors:

Santhosh John, Nazaraf Shah, Craig Stewart and Leon Samlov

Abstract: Ontologies are mainly used to establish ontological agreements explicitly, which serve as the basis for communication between either humans or software agents. In terms of knowledge representation, a knowledge base starts where an ontology ends. Ontology Engineering is a branch of knowledge engineering dedicated to the methods, methodologies, techniques and technologies used for the design, development and maintenance of ontologies. Though ontology engineering and software engineering are two complementary engineering branches, there exists a significant gap between them in terms of maturity level and popularity. The absence of effective methodologies that can claim to be 'standardized' and that support the development of large-scale ontologies is one of the reasons behind this gap. This paper attempts to bridge the gap by proposing a software centric innovative methodology (SCIM) for ontology development that extends the process models of software engineering with a defined ontology development life cycle (ODLC). The proposed methodology defines the stages, workflows, activities and techniques for the development of an ontology, regardless of domain, in a systematic manner for practitioners to follow.

Paper Nr: 3
Title:

Modelling Decision Support Systems using Conceptual Constraints - Linking Process Systems Engineering and Decision Making Models

Authors:

Canan Dombayci and Antonio Espuña

Abstract: This paper presents the use of a Conceptual Constraint (CC) Domain to systematize the construction of Decision Making Models (DMMs). The modelling systematics include the integration between the CC Domain and production systems, as well as an identification procedure containing steps aimed at constraint identification using the CC Domain. The CC Domain consists of different modelling elements such as Conceptual Constraints (generic constraint types), Conceptual Components (pieces of a constraint), and Conceptual Component Elements (pieces of a conceptual component that may be connected to production systems). In this instance, the CC Domain is integrated with the Process Systems Engineering (PSE) Domain as a production system domain. The PSE Domain contains information from the multi-level functional hierarchy in an enterprise, and it will be used to cover a wide range of scenarios related to the hierarchical integration of DMMs. In addition, an integration step between the CC and PSE Domains is illustrated. The focus of the work is to show how these models should be developed in order to be properly integrated, and how they are used by different functionalities with an identification procedure.

Paper Nr: 8
Title:

Ontology-based Sentiment Analysis Model for Recommendation Systems

Authors:

Samreen Zehra, Shaukat Wasi, Imran Jami, Aisha Nazir, Ambreen Khan and Nusrat Waheed

Abstract: In this paper, we propose a novel approach towards developing a recommendation system using ontology-based sentiment analysis. To conduct our study, we targeted a closed Facebook group which contains posts/reviews regarding different schools. For elucidating the knowledge domain, a school ontology is manually designed based on a set of extracted post/comment data. Sentiment analysis is then performed on the resulting data set, and the relative sentiment scores are stored back in the ontology for making recommendations in the future.

Paper Nr: 9
Title:

POMap: An Effective Pairwise Ontology Matching System

Authors:

A. Laadhar, F. Ghozzi, I. Megdiche, F. Ravat, O. Teste and F. Gargouri

Abstract: The identification of alignments between heterogeneous ontologies is one of the main research issues in the semantic web. The manual matching of ontologies is a complex, time-consuming and error-prone task. Therefore, ontology matching systems aim to automate this process. Usually, these systems perform the matching process by combining element-level and structural-level matchers. Selecting the optimal string similarity measure, together with its threshold, is an important issue in order to enhance the effectiveness of the element-level matcher, which in turn improves the results of the whole matching system. In this paper, we present POMap, an ontology matching system based on a syntactic study covering the element and structural levels. For the element-level matcher, we adopted the best configuration based on an analysis of the performance of many string similarity measures and their associated thresholds. For the structural level, we performed a syntactic study on both subclasses and siblings in order to infer the structural similarity. Our proposed matching system is validated and evaluated on the Anatomy, Conference and Large Biomedical tracks provided by the OAEI 2016 ontology matching campaign.
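A minimal sketch of the kind of element-level matcher tuned here: a normalized edit-distance similarity between entity labels, accepted only above a chosen threshold. The measure and the 0.8 threshold below are illustrative, not the configuration selected in the paper.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similarity(a, b):
    a, b = a.lower(), b.lower()
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

def element_level_match(label1, label2, threshold=0.8):
    return similarity(label1, label2) >= threshold

print(element_level_match("Temporal_Bone", "TemporalBone"))   # True
print(element_level_match("Temporal_Bone", "Frontal_Bone"))   # False
```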

Paper Nr: 10
Title:

ULSOnt: Ontology in Intellirehab System - Development of Ontology for Intelligent Rehabilitation System

Authors:

Radhi Rafiee Afandi, Abduljalil Radman, Mahadi Bahari, Lailatul Qadri Zakaria, Muzaimi Mustapha and Waidah Ismail

Abstract: Upper limb complications are common following stroke and may be seriously debilitating. There are many treatments and assessments to improve upper limb movement ability. Due to the increasing number of stroke survivors, specialists in rehabilitation departments usually use a Patients Information System (PIS) to store and manage patients' information and their assessment records. Designing an ontology for the PIS is crucial to help the specialists seek patients' information and manage their assessments. In this paper, an Upper Limb Stroke Ontology (ULSOnt) was developed to enable semantic knowledge representation for the PIS at a rehabilitation department. ULSOnt consists of tangible objects listed by the specialists at the rehabilitation department of Hospital Universiti Sains Malaysia (HUSM). ULSOnt was designed on the basis of the Enterprise Ontology, the TOronto Virtual Enterprise Ontology, METHONTOLOGY and Ontology Development 101. The logical consistency and model completeness of the ULSOnt ontology were verified by ontology experts. The ULSOnt ontology offers more flexibility in accessing patients' information, which means that it can be utilized for designing Intelligent Rehabilitation (IntelliRehab) systems.

Paper Nr: 13
Title:

The Mid Level Data Collection Ontology (DCO) - Generic Data Collection using a Mid Level Ontology

Authors:

Joel Cummings and Deborah Stacey

Abstract: Capturing data through an ontology is a common goal, where instances exist as data mapping to universal terms defined in an ontology. Currently, these ontologies lack a shared conceptualization of data collection terms. We propose a mid-level Data Collection Ontology (DCO) that defines data collection terms in a domain-agnostic way, enabling domain ontologies to extend it. Such an ontology should provide the reasoning support and automated error detection required by all data collection ontologies. By using the Basic Formal Ontology (BFO) as its base, DCO enables existing OBO Foundry ontologies to extend it in their designs, giving existing domain-level ontologies an entry point.

Paper Nr: 16
Title:

Linked Data Research Management System

Authors:

Abdolreza Hajmoosaei, Rodrigo de Oliveira Costa and John Arthur Graham

Abstract: Various educational institutes around the world are currently engaging in opening up their research data to other organizations as well as ensuring it is available to the public. Educational institutes typically use research management systems (RMSs) to manage their research material. An RMS can support researchers in producing new research by providing access to material that already exists in the system. The Wellington Institute of Technology (WelTec) is an educational institute in New Zealand that actively produces academic output. However, WelTec currently uses an outdated RMS which lacks features providing access to its research output for external stakeholders, and which does not illustrate the connections and relationships between its content. WelTec research data is maintained in a private, traditional database repository, and access is only provided to managers and to those who are directly involved in accumulating and studying research data pertaining to their particular project. In order for WelTec research output to be accessible and shareable to the widest possible audience, it needs to be made available in a standard, non-proprietary format that is both human readable and machine consumable. This paper focuses on the infrastructure requirements of opening up research data and proposes a system architecture designed to ensure that WelTec research material is easily accessible by external stakeholders in a scalable way. The proposed system uses technologies and standards that embody the new generation of the World Wide Web (WWW): Linked Data.

Paper Nr: 23
Title:

Parallel Markov-based Clustering Strategy for Large-scale Ontology Partitioning

Authors:

Imadeddine Mountasser, Brahim Ouhbi and Bouchra Frikh

Abstract: Nowadays, huge amounts of data are generated at distributed heterogeneous sources to create and share information on several domains. Thus, data scientists need to develop appropriate and efficient management strategies to cope with the heterogeneity and interoperability issues of data sources. Ontologies, as schema-less graph models, and ontology matching, as an enabler of dynamic, real-time, large-scale data integration, are used to design and develop advanced management mechanisms. However, given the large-scale context, we adopt ontology partitioning strategies, which split ontologies into a set of disjoint partitions, as a crucial step to reduce the computational complexity and to improve the performance of the ontology matching process. To this end, this paper proposes a novel approach for large-scale ontology partitioning through a parallel Markov-based clustering strategy using the Spark framework. The latter offers the ability to run in-memory computations that provide faster and more expressive partitioning and increase the speed of the matching system. The results obtained by our strategy over real-world ontologies demonstrate significant performance, which makes it suitable for incorporation into our large-scale ontology matching system.
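The partitioning itself is based on Markov clustering (MCL); below is a serial numpy sketch of one MCL run on a small adjacency matrix (the paper parallelizes these matrix operations on Spark, which is not reproduced here).

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iterations=20):
    """Markov clustering: alternate expansion (matrix power) and inflation
    (element-wise power plus column normalization) until the flow stabilizes."""
    M = adjacency + np.eye(len(adjacency))        # add self-loops
    M = M / M.sum(axis=0)                         # make column-stochastic
    for _ in range(iterations):
        M = np.linalg.matrix_power(M, expansion)  # expansion
        M = M ** inflation                        # inflation
        M = M / M.sum(axis=0)                     # renormalize columns
    clusters = {}
    for col in range(M.shape[1]):                 # group columns by their attractor row
        attractor = int(np.argmax(M[:, col]))
        clusters.setdefault(attractor, []).append(col)
    return list(clusters.values())

# Two loosely connected triangles -> two partitions.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(mcl(A))   # e.g. [[0, 1, 2], [3, 4, 5]]
```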

Paper Nr: 25
Title:

Mapping Food Composition Data from Various Data Sources to a Domain-Specific Ontology

Authors:

Gordana Ispirova, Tome Eftimov, Barbara Koroušić Seljak and Peter Korošec

Abstract: Food composition data are detailed sets of information on food components, providing values for energy and nutrients, food classifiers and descriptors. Data of this kind are presented in food composition databases, which are a powerful source of knowledge. Food composition databases may differ in their structure between countries, which makes it difficult to connect them and, preferably, compare them in order to borrow missing values. In this paper, we present a method for mapping food composition data from various sources to a terminological resource, a food domain ontology. An existing ontology used for the mapping was extended and modelled to cover a larger portion of the food domain. The method was evaluated on two food composition databases: EuroFIR and USDA.

Paper Nr: 26
Title:

Rule-based System Enriched with a Folksonomy-based Matcher for Generating Information Integration Alignments

Authors:

Alexandre Gouveia, Nuno Silva and Paulo Martins

Abstract: Ontology matchers establish correspondences between ontologies to enable knowledge from different sources and domains to be used in ontology mediation tasks (e.g. data transformation and information/knowledge integration) in many ways. While these processes demand high-quality alignments, even the best-performing alignments need to be corrected and completed before application. In this paper, we propose a rule-based system that improves and completes automatically generated alignments into fully-fledged alignments. For that, the rules capture the pre-conditions (existing facts) and the actions to solve each (ambiguous) scenario, in which automatic decisions supported by a folksonomy-based matcher are adopted. The evaluation of the proposed system shows the increased accuracy of the alignments.

Paper Nr: 29
Title:

When Data Science Becomes Software Engineering

Authors:

Lito Perez Cruz

Abstract: Data science is strongly related to knowledge discovery. It can be said that the output of data science work is input to the knowledge discovery process. With data science evolving as a discipline of its own, it is estimated that the U.S.A. alone needs more than 1M professionals skilled in the discipline by next year. If we include the needs of the rest of the world, the international demand is even greater. Consequently, private and public educational institutions are hurriedly offering data science courses to candidates. The general emphasis of these courses, understandably, is on the use of data mining and machine learning tools and methods. In this paper, we argue that the subject of software engineering should also be taught to these candidates formally, and not haphazardly, as if it were something the would-be data scientist can pick up along the way. We examine the data science work process and the present state of skills training provided by data science educators, and present warrants and arguments that software engineering as a discipline cannot be taken for granted in the training of a data scientist.

Paper Nr: 36
Title:

Method of Reconstruction of Semantic Relations using Translingual Information

Authors:

Viktor Osika, Sergey Klimenkov, Evgenij Tsopa, Alexey Pismak, Vladimir Nikolaev and Alexander Yarkeev

Abstract: This article is devoted to the problem of developing a method for restoring semantic relations using translingual information. A Wiktionary article may contain a translation section (a translingual section), an important element that allows a sense from one language to be linked to a sense in another language, expressed by a reference to the lexeme of that sense. One of our tasks is the design and implementation of a Wiktionary-based ontology for semantic analysis. In this article we present a set of rules that can be used to establish the coincidence of the same senses in different languages, and consequently to map links restored from the Russian Wiktionary to English nodes. The algorithm for restoring inter-sense references includes the selection of candidate links across senses of different language sections and a set of rules for accepting a candidate into the list of sense links. As a result, 69,309 potential translation links (excluding duplicated links) were selected. More than 16,000 links between the nodes of semantic senses of the Russian and English sections of Wiktionary were confirmed, which allowed the creation of a generalized ontology.

Paper Nr: 37
Title:

A Large Scale Knowledge Base Representing the Base Form of Kaomoji

Authors:

Noriyuki Okumura

Abstract: In this paper, we construct a large-scale knowledge base representing the base form of kaomoji (emoticons) together with the other elements of kaomoji (eyes, nose, mouth, and so on), in order to analyze the features of kaomoji in detail. Previous methods for analyzing kaomoji mainly aim to extract kaomoji from sentences, paragraphs, or documents, or to classify kaomoji into emotion classes based on the emotion that a kaomoji shows or potentially includes. We define the base form of kaomoji for detailed kaomoji analytics. Application systems can estimate other features of a derivative kaomoji based on its base form and other elements, for sentiment analytics, emotion extraction, or kaomoji classification. We annotated about 40,000 kinds of kaomoji to construct a large-scale knowledge base. The total number of extracted base forms is about 3,000. In experimental evaluations based on cosine similarity using N-gram based features and simple Skip-gram based features, we show that the model can estimate the base form of a kaomoji with an accuracy of about 50%.
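A minimal sketch of the kind of similarity used in the evaluation: character n-gram profiles of emoticon strings compared with cosine similarity to find the closest base form. The toy base-form list is illustrative, not the authors' feature set or knowledge base.

```python
from collections import Counter
from math import sqrt

def char_ngrams(s, n=2):
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in common)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

base_forms = ["(^_^)", "(T_T)", "(-_-)"]   # toy base forms

def nearest_base_form(kaomoji, n=2):
    profile = char_ngrams(kaomoji, n)
    return max(base_forms, key=lambda b: cosine(profile, char_ngrams(b, n)))

print(nearest_base_form("(^o^)/"))   # closest to (^_^)
```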

Paper Nr: 40
Title:

Dynamic Behavior Control of Interoperability: An Ontological Approach

Authors:

Wided Guédria and Sérgio Guerreiro

Abstract: The obligation to become more competitive and effective in providing better products and services requires enterprises to transform from traditional businesses into networked businesses. One of the challenges faced by a network of enterprises is the development of interoperability between its members. Transformations in this context are usually driven by the Enterprise Interoperability (EI) problems that may be faced. In order to overcome these problems quickly, enterprises need to characterize and assess interoperability so that they are prepared to establish means for collaboration and to initiate corrective actions before potential interoperability problems occur, rather than being obliged to make unprepared transformations that may be costly and induce unmanageable issues. In this paper, we define an integrated metamodel for interoperability using DEMO. The proposed metamodel is based on the Ontology of Enterprise Interoperability (OoEI) and on concepts from a maturity model for interoperability, while taking into account principles from the Enterprise Dynamic Systems Control (EDSC) domain. It allows the dynamic behavior of interoperability between companies to be understood and controlled.

Paper Nr: 41
Title:

A Semantic Representation of Time Intervals in OWL 2

Authors:

Noura Herradi, Fayçal Hamdi and Elisabeth Métais

Abstract: Representing time on the Semantic Web has always been a challenging issue that many scientific works have addressed. To the best of our knowledge, the most important ones have focused on models, whereas the Semantic Web, and especially OWL 2, offers semantics that can be efficiently used to describe qualitative diachronic information (i.e. information evolving in time whose start and/or end time is unknown). In this work, we show the relationship between the OWL 2 semantics and the representation of time intervals; we then introduce a qualitative representation of temporal information based on a set of SWRL rules that allows a sound and complete reasoning mechanism.
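The paper's representation is in OWL 2 plus SWRL rules, which cannot be reconstructed from the abstract alone; as a rough analogue, the Python sketch below illustrates qualitative reasoning over intervals with possibly unknown endpoints (relation names follow Allen's interval algebra; the handling of unknown endpoints is an assumption for illustration).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interval:
    start: Optional[int]   # None = unknown start (diachronic information)
    end: Optional[int]     # None = unknown end

def definitely_before(a: Interval, b: Interval) -> bool:
    """True only when the known endpoints already entail Allen's 'before'."""
    return a.end is not None and b.start is not None and a.end < b.start

def possibly_overlaps(a: Interval, b: Interval) -> bool:
    """True unless the known endpoints rule an overlap out."""
    if a.end is not None and b.start is not None and a.end < b.start:
        return False
    if b.end is not None and a.start is not None and b.end < a.start:
        return False
    return True

lived_in_paris = Interval(start=1990, end=None)      # end time unknown
worked_at_cnam = Interval(start=2005, end=2010)
print(definitely_before(lived_in_paris, worked_at_cnam))  # False: end is unknown
print(possibly_overlaps(lived_in_paris, worked_at_cnam))  # True
```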

Paper Nr: 51
Title:

Ontological and Machine Learning Approaches for Managing Driving Context in Intelligent Transportation

Authors:

Manolo Dulva Hina, Clement Thierry, Assia Soukane and Amar Ramdane-Cherif

Abstract: In this paper, a novel approach to managing driving context information in smart transportation is presented. The driving context refers to the ensemble of parameters that make up the contexts of the environment, the vehicle and the driver. To manage this rich information, knowledge representation using an ontology is used, and through it such information becomes a source of knowledge. When this context information (basically a template or model) is instantiated with actual instances of objects, we can describe any kind of driving situation. Furthermore, through ontological knowledge management, we can find the answers to various queries about the given driving situation. A smart vehicle is equipped with machine learning functionalities that are capable of classifying any driving situation and providing assistance to the driver, the vehicle, or both in order to avoid accidents when necessary. This work is a contribution to the ongoing research on safe driving and a specific application of using data from the Internet of Things.

Paper Nr: 21
Title:

Academic Style Marker Ontology Design

Authors:

Viacheslav Lanin and Sofia Philipson

Abstract: As with any other genre, the academic paper can be characterized by its own specific rules and features. The authors assume that academic style features, called "style markers" in this research, can be modelled by means of ontology engineering. The article is aimed at describing the design of the academic style marker ontology and its practical use. The designed ontology is divided into two levels: the first level provides information about linguistic terms, and the second level consists of the style markers suggested by experts in linguistics. It is assumed that two tasks will be solved on the basis of the developed ontology. The first task is generating the lexical-semantic templates that are used to identify the list of markers in a text. Owing to the ontology approach and the application of Domain-Specific Language (DSL) technologies, users are able to extend and modify the marker templates. The second task is developing expert system rules for text style enhancement.

Paper Nr: 27
Title:

Using the Unified Foundational Ontology (UFO) for Grounding Legal Domain Ontologies

Authors:

Mirna El Ghosh, Habib Abdulrab, Hala Naja and Mohamad Khalil

Abstract: In this paper, the concept of ontology-driven conceptual modelling is outlined, and the grounding of a modular legal domain ontology in the Unified Foundational Ontology (UFO) is overviewed. The domain ontology is modularized into four independent modules; the top ontology modules, upper and core, are discussed in this work. The ontology modelling language OntoUML is used for the conceptual modelling process.

Paper Nr: 30
Title:

Reinforcement Learning for Modeling Large-Scale Cognitive Reasoning

Authors:

Ying Zhao, Emily Mooren and Nate Derbinsky

Abstract: Accurate, relevant, and timely combat identification (CID) enables warfighters to locate and identify critical airborne targets with high precision. Current CID processes include a wide combination of platforms, sensors, networks, and decision makers. The diversified doctrines, rules of engagement, knowledge databases, and expert systems used in the current process make decision making very complex. Furthermore, the CID decision process is still largely manual, and decision makers are constantly overwhelmed by the cognitive reasoning required. Soar is a cognitive architecture that can be used to model complex reasoning, cognitive functions, and decision making for warfighting processes like the ones in a kill chain. In this paper, we present a feasibility study of Soar, and in particular its reinforcement learning (RL) module, for optimal decision making using existing expert systems and smart data. The system has the potential to scale up and automate CID decision making, reducing the cognitive load of human operators.
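Soar's RL module adjusts numeric preferences over operators from reward; as a stand-alone illustration of that value-learning idea (not Soar itself, and with a toy identification task invented for the example), a minimal tabular Q-learning update might look like this.

```python
import random
from collections import defaultdict

# Q-values over (state, action), analogous to Soar-RL's numeric operator preferences.
Q = defaultdict(float)
actions = ["identify_friend", "identify_foe"]
alpha, epsilon = 0.1, 0.1   # learning rate and exploration rate

def choose(state):
    """Epsilon-greedy selection over the learned preferences."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def reward(state, action):
    """Toy single-step environment: the correct label is encoded in the state."""
    return 1.0 if action.endswith(state) else -1.0

for _ in range(2000):
    state = random.choice(["friend", "foe"])
    action = choose(state)
    Q[(state, action)] += alpha * (reward(state, action) - Q[(state, action)])

print(max(actions, key=lambda a: Q[("foe", a)]))   # expected: identify_foe
```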

Paper Nr: 39
Title:

A Meta Model for Interoperability of Secure Business Transactions - Using BlockChain and DEMO

Authors:

Sérgio Guerreiro, Wided Guédria, Robert Lagerström and Steven van Kervel

Abstract: Business transactions executed between organizations and individuals are largely operated in digital environments, leading to an industrial interoperability challenge that demands secure environments in which to cooperate safely, thereby increasing credibility and trust between end-users. This paper conceptualizes and prescribes a fine-grained control solution for the execution of business transactions involving critical assets, using a human-based coordination and interaction design to minimize the negative impacts of security risks, non-conformant operation and coarse-grained control. This solution integrates the DEMO-based Enterprise Operating System (EOS) with BlockChain as a way to redesign, and distribute globally, a set of services founded on a human-oriented approach, thereby offering trust, authenticity, resilience, robustness against fraud, and identification and mitigation of risk. The impacts for organizations and individuals are manifold: a security risk-based solution for end-users with budgetary constraints; education on cyber security issues; and increased trust in digital business process environments.
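The integration itself is architectural, but the tamper-evidence property contributed by the BlockChain side can be illustrated with a minimal hash-chained log of business-transaction steps (a generic sketch with DEMO-style coordination acts, not the EOS implementation described in the paper).

```python
import hashlib
import json

def add_block(chain, transaction_step):
    """Append a business-transaction step, linking it to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"step": transaction_step, "prev": prev_hash}
    payload["hash"] = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    chain.append(payload)

def verify(chain):
    """Recompute every hash; tampering with any recorded step breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        expected = dict(block)
        stored = expected.pop("hash")
        recomputed = hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest()
        if stored != recomputed or expected["prev"] != prev_hash:
            return False
        prev_hash = stored
    return True

chain = []
add_block(chain, {"actor": "customer", "act": "request", "product": "insurance policy"})
add_block(chain, {"actor": "insurer", "act": "promise"})
print(verify(chain))                    # True
chain[0]["step"]["act"] = "decline"     # alter the recorded history
print(verify(chain))                    # False
```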

Paper Nr: 42
Title:

NewSQL Databases - MemSQL and VoltDB Experimental Evaluation

Authors:

João Oliveira and Jorge Bernardino

Abstract: NewSQL databases are a set of new relational databases that provide better performance than existing systems, while maintaining the use of the SQL language. Due to the huge amounts of data stored by organizations, these databases are well suited to processing this information efficiently. In this paper, we describe and test two of the most popular NewSQL databases: MemSQL and VoltDB. We show the advantages of NewSQL database engines using the TPC-H benchmark. The experimental evaluation demonstrated the ability of MemSQL and VoltDB to execute TPC-H benchmark queries effectively.
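As an illustration of how such a comparison can be run programmatically, the sketch below times TPC-H query Q1 over a MySQL-wire-compatible endpoint (MemSQL speaks the MySQL protocol; host, credentials and schema name are placeholders, and VoltDB would require its own client instead).

```python
import time
import pymysql   # MemSQL is MySQL wire-compatible; connection details below are placeholders

TPCH_Q1 = """
SELECT l_returnflag, l_linestatus,
       SUM(l_quantity)                                       AS sum_qty,
       SUM(l_extendedprice)                                  AS sum_base_price,
       SUM(l_extendedprice * (1 - l_discount))               AS sum_disc_price,
       SUM(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge,
       AVG(l_quantity) AS avg_qty, AVG(l_extendedprice) AS avg_price,
       AVG(l_discount) AS avg_disc, COUNT(*) AS count_order
FROM lineitem
WHERE l_shipdate <= DATE_SUB('1998-12-01', INTERVAL 90 DAY)
GROUP BY l_returnflag, l_linestatus
ORDER BY l_returnflag, l_linestatus
"""

conn = pymysql.connect(host="memsql-host", user="root", password="", database="tpch")
with conn.cursor() as cur:
    start = time.perf_counter()
    cur.execute(TPCH_Q1)
    rows = cur.fetchall()
elapsed = time.perf_counter() - start
print(f"{len(rows)} groups in {elapsed:.3f}s")
conn.close()
```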

Paper Nr: 43
Title:

Using Data Mining to Predict Diseases in Vineyards and Olive Groves

Authors:

Luís Alves, Rodrigo Rocha Silva and Jorge Bernardino

Abstract: Currently, advancements in computer technology allow the agricultural sector to progress. Producers and service providers are exploring the value of information and its importance in increasing the productivity and profitability of a farm. This paper evaluates various data mining classification algorithms to predict diseases in vineyards and olive groves. We propose using machine learning to predict diseases based on symptoms and weather data. The accuracy of classification algorithms such as Random Forest, IBk, Naïve Bayes and SMO has been compared using the Weka software. Using our proposal, the incidence of diseases is expected to be reduced by more than 75%.
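The paper compares Random Forest, IBk, Naïve Bayes and SMO in Weka; an analogous comparison scripted in Python/scikit-learn, with synthetic placeholder data standing in for the symptom and weather features, could look like this.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier   # counterpart of Weka's IBk
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC                           # SMO-trained SVM in Weka
from sklearn.model_selection import cross_val_score

# Placeholder for the real symptom + weather dataset.
X, y = make_classification(n_samples=500, n_features=12, n_informative=6, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "IBk (k-NN)": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "SMO (SVM)": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```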

Paper Nr: 44
Title:

An Ontology for Clinical Decision Support System to Predict Female’s Fertile Period

Authors:

Francisco Vaz, Rodrigo Rocha Silva and Jorge Bernardino

Abstract: Nowadays, many women do not fully understand what the fertile period is, or what it represents in their lives. The fertile period is quite complex and difficult to predict accurately, and this is undoubtedly a problem for women. There are still some completely wrong ideas about reproductive health, especially about the fertile period and the menstrual cycle. A good example of the myths that persist is the fact that many women continue to believe that ovulation occurs precisely in the middle of their menstrual cycle, which is not always true. Therefore, to better understand the female cycle, we propose a Clinical Decision Support System based on the use of an ontology. Our proposal can predict the female fertile period based on certain factors that allow a more accurate calculation, improving patients' quality of life.

Paper Nr: 49
Title:

Querying Natural Logic Knowledge Bases

Authors:

Troels Andreasen, Henrik Bulskov, Per Anker Jensen and Jørgen Fischer Nilsson

Abstract: This paper describes the principles of a system applying natural logic as a knowledge base language. Natural logics are regimented fragments of natural language employing high level inference rules. We advocate the use of natural logic for knowledge bases dealing with querying of classes in ontologies and class-relationships such as are common in life-science descriptions. The paper adopts a version of natural logic with recursive restrictive clauses such as relative clauses and adnominal prepositional phrases. It includes passive as well as active voice sentences. We outline a prototype for partial translation of natural language into natural logic, featuring further querying and conceptual path finding in natural logic knowledge bases.

Paper Nr: 52
Title:

Integrating a Survey Ontology into an Upper Level Ontology - Using the Data Collection Ontology (DCO) as the Basis for a Survey Ontology

Authors:

Joel Cummings and Deborah Stacey

Abstract: Capturing data in a step-by-step manner is generally accomplished using surveys that maintain some flow between questions in order to capture data from a large number of respondents in a consistent manner. In other words, capturing data using surveys is a form of data collection that imposes a specific process on data collection. In this paper, we present the benefit of utilizing the mid-level Data Collection Ontology (DCO) to construct a survey ontology that is domain independent, and compare it to an existing Survey Ontology implementation (Fox M.S., 2016).

Paper Nr: 59
Title:

Search and Extraction of Process Models from Unstructured Web Content

Authors:

Maya Lincoln and Avi Wasser

Abstract: While research on searching structured business process repositories has been extensive, little attention was dedicated to searching and extracting process content from unstructured repositories, such as the Web. We demonstrate how current search technologies are not useful for extracting process content from the Web, and explain the core reasons for the deficiency. We then present a framework for overcoming this challenge by enabling operational searches on unstructured, free-text pages on the Web.