Biblio

Filters: Keyword is Linked data
2022-11-08
Drakopoulos, Georgios, Giannoukou, Ioanna, Mylonas, Phivos, Sioutas, Spyros.  2020.  A Graph Neural Network For Assessing The Affective Coherence Of Twitter Graphs. 2020 IEEE International Conference on Big Data (Big Data). :3618–3627.
Graph neural networks (GNNs) are an emerging class of iterative connectionist models that take full advantage of the interaction patterns in an underlying domain. Depending on their configuration, GNNs aggregate local state information to obtain robust estimates of global properties. Since graphs inherently represent high dimensional data, GNNs can effectively perform dimensionality reduction for certain aggregator selections. One such task is assigning sentiment polarity labels to the vertices of a large social network based on local ground truth state vectors containing structural, functional, and affective attributes. Emotions have long been identified as key factors in overall social network resiliency, and determining such labels robustly would be a major indicator of it. As a concrete example, the proposed methodology has been applied to two benchmark graphs obtained from political Twitter with topic sampling regarding the Greek 1821 Independence Revolution and the US 2020 Presidential Elections. Based on the results, recommendations for researchers and practitioners are offered.
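As a rough illustration of the aggregation idea in the abstract above, the sketch below runs one round of neighborhood mean-aggregation over per-vertex state vectors and reads off polarity labels. It is a minimal NumPy sketch, not the paper's model; the graph, feature vectors, and weights are all illustrative and untrained.

```python
import numpy as np

def gnn_layer(adjacency, features, weights):
    """One aggregation round: average each vertex's neighbors, then transform."""
    degrees = adjacency.sum(axis=1, keepdims=True)
    degrees[degrees == 0] = 1.0           # guard isolated vertices against /0
    aggregated = (adjacency / degrees) @ features
    return np.tanh(aggregated @ weights)  # bounded nonlinearity

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(float)  # toy follower graph (hypothetical)
X = rng.normal(size=(5, 8))   # per-vertex structural/functional/affective states
W = rng.normal(size=(8, 2))   # classifier weights (untrained, for illustration)
polarity = gnn_layer(A, X, W).argmax(axis=1)  # 0/1 stand in for polarity labels
print(polarity)
```
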
2019-01-16
Akhtar, U., Lee, S.  2018.  Adaptive Cache Replacement in Efficiently Querying Semantic Big Data. 2018 IEEE International Conference on Web Services (ICWS). :367–370.
This paper addresses the problem of querying knowledge bases (KBs) that store semantic big data. For efficient querying, the most important factor is the cache replacement policy, which largely determines overall query response time. Since the cache is limited in size, less frequently accessed data should be evicted to make room for hot (frequently accessed) triples. Moreover, the performance bottleneck of triplestores makes real-world applications difficult. To achieve performance closer to that of an RDBMS, we propose an Adaptive Cache Replacement (ACR) policy that predicts hot triples from the query log. Our proposed algorithm replaces cache entries effectively and with high accuracy. To implement the cache replacement policy, we apply exponential smoothing, a forecasting method, to identify the most frequently accessed triples. The evaluation results show that the proposed scheme outperforms existing cache replacement policies, such as LRU (least recently used) and LFU (least frequently used), in terms of higher hit rates and lower time overhead.
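The forecasting step lends itself to a short sketch. The following is a minimal illustration, under assumed names and parameters (ALPHA, CACHE_SIZE, the query-log window format), of how exponential smoothing over access counts could score triples so the top-scoring "hot" ones are kept in cache; it is not the authors' implementation.

```python
from collections import defaultdict

ALPHA = 0.5      # smoothing factor (assumed); larger weights recent windows more
CACHE_SIZE = 2   # illustrative cache capacity, in triples

def hot_triples(log_windows, k=CACHE_SIZE):
    """Score triples by exponentially smoothed access counts, oldest window first."""
    scores = defaultdict(float)
    for window in log_windows:            # window: dict of triple -> access count
        for triple, count in window.items():
            scores[triple] = ALPHA * count + (1 - ALPHA) * scores[triple]
        for triple in set(scores) - set(window):
            scores[triple] *= (1 - ALPHA)  # unseen triples decay toward cold
    return sorted(scores, key=scores.get, reverse=True)[:k]

windows = [{"t1": 5, "t2": 1}, {"t1": 4, "t3": 6}]  # made-up query-log windows
print(hot_triples(windows))  # the predicted hot triples to keep cached
```
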
2017-12-12
Jiang, L., Kuhn, W., Yue, P.  2017.  An interoperable approach for Sensor Web provenance. 2017 6th International Conference on Agro-Geoinformatics. :1–6.

The Sensor Web is evolving into a complex information space, where large volumes of sensor observation data are often consumed by complex applications. Provenance has become an important issue in the Sensor Web, since it allows applications to answer “what”, “when”, “where”, “who”, “why”, and “how” queries related to observations and consumption processes, which helps determine the usability and reliability of data products. This paper investigates characteristics and requirements of provenance in the Sensor Web and proposes an interoperable approach to building a provenance model for the Sensor Web. Our provenance model extends the W3C PROV Data Model with Sensor Web domain vocabularies. It is developed using Semantic Web technologies and thus allows provenance information of sensor observations to be exposed in the Web of Data using the Linked Data approach. A use case illustrates the applicability of the approach.
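To make the model concrete, here is a minimal sketch, assuming rdflib is available, of recording PROV-style provenance for a single sensor observation. The ex: namespace and resource names are hypothetical, and the paper's actual domain vocabularies are richer than these bare PROV terms.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import PROV, RDF, XSD

EX = Namespace("http://example.org/sensorweb/")  # hypothetical domain namespace

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

obs = EX["observation/42"]          # the data product (a prov:Entity)
sensing = EX["activity/sensing42"]  # the generating process (a prov:Activity)
sensor = EX["sensor/ts7"]           # the sensor (a prov:Agent)

g.add((obs, RDF.type, PROV.Entity))
g.add((sensing, RDF.type, PROV.Activity))
g.add((sensor, RDF.type, PROV.Agent))
g.add((obs, PROV.wasGeneratedBy, sensing))        # answers "how"
g.add((sensing, PROV.wasAssociatedWith, sensor))  # answers "who"
g.add((sensing, PROV.endedAtTime,                 # answers "when"
       Literal("2017-06-01T12:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))  # ready to publish as Linked Data
```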

Ktob, A., Li, Z.  2017.  The Arabic Knowledge Graph: Opportunities and Challenges. 2017 IEEE 11th International Conference on Semantic Computing (ICSC). :48–52.

The Semantic Web has brought forth the idea of computing with knowledge, hence attributing to machines the ability to think. Knowledge graphs represent a major advancement in the construction of the Web of Data, where machines are context-aware when answering users' queries. The English Knowledge Graph was a milestone realized by Google in 2012. Even though it is a useful source of information for English users and applications, it does not offer much for Arabic users and applications. In this paper, we investigate the challenges and opportunities inherent in the life-cycle of constructing the Arabic Knowledge Graph (AKG) while following established best practices and techniques, and we suggest some potential solutions to these challenges. The proprietary nature of much data creates a major obstacle to harvesting it. Moreover, when Arabic data is openly available, it is generally in an unstructured form that requires further processing. The complexity of the Arabic language itself poses a further problem for any automatic or semi-automatic extraction process; therefore, the use of NLP techniques is a feasible solution. Some preliminary results are presented later in this paper. The AKG holds very promising outcomes for the Semantic Web in general and the Arabic community in particular. Its goal is mainly the integration of the different isolated datasets available on the Web; it can then be used in both the academic sector (by providing a large dataset for many research fields and enhancing discovery) and the commercial sector (by improving search engines, providing metadata, and interlinking businesses).
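The integration goal can be illustrated with a small hedged sketch: once an NLP step has extracted entity mentions, records denoting the same entity across isolated datasets can be interlinked with owl:sameAs. The matcher output, namespaces, and URIs below are hypothetical; this is not the paper's pipeline.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

DS1 = Namespace("http://example.org/dataset1/")  # hypothetical isolated datasets
DS2 = Namespace("http://example.org/dataset2/")

# Stand-in output of an upstream NLP/matching step over extracted Arabic entities.
matches = [("القاهرة", DS1["Cairo"], DS2["city/88"])]

g = Graph()
g.bind("owl", OWL)
for _mention, uri_a, uri_b in matches:
    g.add((uri_a, OWL.sameAs, uri_b))  # assert the two records denote one entity

print(g.serialize(format="turtle"))
```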

2017-11-03
Collarana, Diego, Lange, Christoph, Auer, Sören.  2016.  FuhSen: A Platform for Federated, RDF-based Hybrid Search. Proceedings of the 25th International Conference Companion on World Wide Web. :171–174.
The increasing amount of structured and semi-structured information available on the Web and in distributed information systems, as well as the Web's diversification into different segments such as the Social Web, the Deep Web, or the Dark Web, requires new methods for horizontal search. FuhSen is a federated, RDF-based, hybrid search platform that searches, integrates and summarizes information about entities from distributed heterogeneous information sources using Linked Data. As a use case, we present scenarios where law enforcement institutions search and integrate data spread across these different Web segments to identify cases of organized crime. We present the architecture and implementation of FuhSen and explain the queries that can be addressed with this new approach.
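The federated pattern can be sketched as follows: a keyword query fans out to per-source wrappers, and the normalized hits are merged by entity. This is an assumption-laden illustration, not FuhSen's actual API; the wrapper functions and result shape are made up for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def search_social_web(keyword):  # stub wrappers standing in for real connectors
    return [{"name": "ACME Ltd", "source": "social-web"}]

def search_deep_web(keyword):
    return [{"name": "ACME Ltd", "source": "deep-web"}]

WRAPPERS = [search_social_web, search_deep_web]

def federated_search(keyword):
    with ThreadPoolExecutor() as pool:  # query all sources concurrently
        result_lists = list(pool.map(lambda wrap: wrap(keyword), WRAPPERS))
    merged = {}                         # merge hits describing the same entity
    for hits in result_lists:
        for hit in hits:
            entry = merged.setdefault(hit["name"],
                                      {"name": hit["name"], "sources": []})
            entry["sources"].append(hit["source"])
    return list(merged.values())

print(federated_search("ACME"))  # one summarized entity, two supporting sources
```
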
2015-05-01
Keivanloo, Iman, Rilling, Juergen.  2014.  Software Trustworthiness 2.0: A Semantic Web Enabled Global Source Code Analysis Approach. J. Syst. Softw. 89:33–50.

There has been an ongoing trend toward collaborative software development using open and shared source code published in large software repositories on the Internet. While traditional source code analysis techniques perform well in single-project contexts, new types of source code analysis techniques are emerging that focus on global source code analysis challenges. In this article, we discuss how the Semantic Web can become an enabling technology providing standardized, formal, and semantically rich representations for modeling and analyzing large global source code corpora. Furthermore, inference services and other services provided by Semantic Web technologies can be used to support a variety of core source code analysis techniques, such as semantic code search, call graph construction, and clone detection. We introduce SeCold, the first publicly available online linked data source code dataset for software engineering researchers and practitioners. Along with its dataset, SeCold also provides Semantic Web enabled core services to support the analysis of Internet-scale source code repositories. We illustrate through several examples how this linked data, combined with Semantic Web technologies, can be harvested for different source code analysis tasks to support software trustworthiness. For the case studies, we combine our linked dataset and Semantic Web enabled source code analysis services with knowledge extracted from StackOverflow, a crowdsourcing website. Through these case studies, we demonstrate that our approach is not only capable of crawling, processing, and scaling to traditional types of structured data (e.g., source code), but also supports emerging non-structured data sources, such as crowdsourced information (e.g., StackOverflow.com), in a global source code analysis context.
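As a hedged illustration of the kind of query such a linked dataset enables, the sketch below stores clone-pair facts as triples and retrieves the clones of one fragment with SPARQL via rdflib. The code: vocabulary and URIs are hypothetical, not SeCold's published ontology.

```python
from rdflib import Graph, Namespace

CODE = Namespace("http://example.org/code/")  # hypothetical, not SeCold's ontology

g = Graph()
g.add((CODE["fragment/a1"], CODE.isCloneOf, CODE["fragment/b7"]))
g.add((CODE["fragment/a1"], CODE.isCloneOf, CODE["fragment/c3"]))

# Retrieve every recorded clone of fragment a1.
rows = g.query("""
    SELECT ?clone WHERE {
        <http://example.org/code/fragment/a1>
            <http://example.org/code/isCloneOf> ?clone .
    }
""")
for row in rows:
    print(row.clone)
```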