Bibliography
By connecting devices, people, vehicles and infrastructures everywhere in a city, governments and their partners can improve community wellbeing and other economic and financial aspects (e.g., cost and energy savings). Nonetheless, smart cities are complex ecosystems that comprise many different stakeholders (network operators, managed service providers, logistic centers...) who must work together to provide the best services and unlock the commercial potential of the IoT. This is one of the major challenges facing today's smart city movement, and more generally the IoT as a whole. Indeed, while new smart connected objects hit the market every day, they mostly feed "vertical silos" (e.g., vertical apps, siloed apps...) that are closed to the rest of the IoT, thus preventing developers from producing new added value across multiple platforms. Within this context, the contribution of this paper is twofold: (i) to present the EU vision and ongoing activities to overcome the problem of vertical silos; (ii) to introduce recent IoT standards used as part of a recent Horizon 2020 IoT project to address this problem. The implementation of those standards for enhanced sporting event management in a smart city/government context (FIFA World Cup 2022) is developed, presented, and evaluated as a proof-of-concept.
We propose a new voting scheme, BeleniosRF, that offers both receipt-freeness and end-to-end verifiability. It is receipt-free in a strong sense, meaning that even dishonest voters cannot prove how they voted. We provide a game-based definition of receipt-freeness for voting protocols with non-interactive ballot casting, which we name strong receipt-freeness (sRF). To our knowledge, sRF is the first game-based definition of receipt-freeness in the literature, and it has the merit of being particularly concise and simple. Built upon the Helios protocol, BeleniosRF inherits its simplicity and does not require any anti-coercion strategy from the voters. We implement BeleniosRF and show its feasibility on a number of platforms, including desktop computers and smartphones.
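As a minimal illustration of the randomizable encryption underlying receipt-free schemes such as BeleniosRF (a sketch under toy parameters, not the paper's actual construction), the following shows how an ElGamal ciphertext can be re-randomized so that the posted ballot no longer matches any receipt the voter kept:

```python
# Toy sketch of ElGamal ciphertext re-randomization -- the core primitive
# behind "randomizable ciphertexts" in receipt-free voting. Parameters are
# NOT secure (and `random` is not a CSPRNG); illustration only.
import random

p = 2**127 - 1   # toy prime modulus; real schemes use large prime-order groups
g = 3

def keygen():
    x = random.randrange(2, p - 1)            # secret key
    return x, pow(g, x, p)                    # (sk, pk)

def encrypt(pk, m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p   # (c1, c2)

def rerandomize(pk, c):
    """Multiply in a fresh encryption of 1: same plaintext, new ciphertext."""
    s = random.randrange(2, p - 1)
    c1, c2 = c
    return (c1 * pow(g, s, p)) % p, (c2 * pow(pk, s, p)) % p

def decrypt(sk, c):
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - sk, p)) % p  # c2 / c1^sk, by Fermat's little theorem

sk, pk = keygen()
ballot = encrypt(pk, 42)               # voter's encrypted choice
posted = rerandomize(pk, ballot)       # server re-randomizes before posting
assert posted != ballot                # any kept "receipt" no longer matches
assert decrypt(sk, posted) == decrypt(sk, ballot) == 42
```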
This paper presents a holistic approach to cyber resilience as a means of preparing for the "unknown unknowns". Principles of augmented cyber risk management and a resilience management model at the national level are presented, with elaboration on multi-stakeholder engagement and partnership for the implementation of a national cyber resilience collaborative framework. The complementarity of governance, law, and business/industry initiatives is outlined, with examples from the collaborative resilience model of the Bulgarian national strategy and its multi-national engagements.
Cloud computing is one of the most prominent technologies of recent years and opens up many research directions. Banks are likely to enter the cloud computing field because of the abundant advantages it offers, such as reduced IT costs, pay-per-use modeling, business agility, and green IT. The main challenges to be addressed when moving a bank to the cloud are security breaches, governance, and Service Level Agreements (SLAs). Banks cannot afford security breaches at any cost, and access control and authorization are vital solutions to these security risks. We therefore propose a knowledge-based security model addressing this issue. Separate ontologies for the subject, object, and action elements are created, and an authorization rule is framed by considering the interlinkage between those elements to ensure data security with restricted access. Moreover, banks now use Software as a Service (SaaS), which is managed by Cloud Service Providers (CSPs), and rely upon the security measures those CSPs provide. If CSPs follow only a traditional security model, data security remains an open question. Our work enables a bank to impose security measures on its side in addition to the security provided by the CSPs: banks can add and delete rules according to their needs and retain control over their data alongside the CSPs. We also present a performance analysis of our model and show that it provides secure access to bank data.
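A hypothetical sketch of the subject/object/action idea described above (the ontologies and rules are invented for illustration, not taken from the paper): each element is classified against a small hierarchy, and an authorization rule fires only when all three elements match.

```python
# Hypothetical sketch of ontology-based authorization: subjects, objects,
# and actions are each arranged in a small parent-child hierarchy, and a
# rule applies to a request when all three request elements fall under the
# rule's classes. All names are illustrative.
SUBJECTS = {"teller": "employee", "manager": "employee", "employee": None}
OBJECTS  = {"savings_account": "account", "loan_record": "account", "account": None}
ACTIONS  = {"read": None, "update": None}

def is_a(term, ancestor, ontology):
    """True if `term` equals `ancestor` or inherits from it in the hierarchy."""
    while term is not None:
        if term == ancestor:
            return True
        term = ontology.get(term)
    return False

# Authorization rules interlink the three ontologies: (subject, object, action).
RULES = [
    ("employee", "account", "read"),        # any employee may read any account
    ("manager",  "loan_record", "update"),  # only managers may update loan records
]

def authorized(subject, obj, action):
    return any(is_a(subject, s, SUBJECTS) and is_a(obj, o, OBJECTS)
               and is_a(action, a, ACTIONS) for s, o, a in RULES)

print(authorized("teller",  "savings_account", "read"))    # True
print(authorized("teller",  "loan_record",     "update"))  # False
print(authorized("manager", "loan_record",     "update"))  # True
```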
Regarding Information and Communication Technologies (ICTs) in the public sector, electronic governance was the first concept to emerge, and it has been recognized as an important issue in governments' outreach to citizens since the early 1990s. The most important recent development in e-governance is Open Government Data, which provides citizens with the opportunity to freely access government data, build value-added applications, provide creative public services, and participate in different kinds of democratic processes. Open Government Data is expected to enhance the quality and efficiency of government services, strengthen democratic participation, and create benefits for the public and enterprises. Its success hinges on accessibility, data quality, security policy, and platform functions in general. This article presents a robust assessment framework that not only provides a valuable understanding of the development of Open Government Data but also provides an effective feedback mechanism for mid-course corrections. We further apply the framework to evaluate the central government's Open Government Data platform, analyzing the open data of nine major government agencies. Our results indicate that the Financial Supervisory Commission performs better than the other agencies, especially in terms of accessibility: it mostly provides dataset formats rated 3-star or above, and the quality of its metadata is well established. However, most of the data released by government agencies are regulations, reports, operations, and other administrative data, which are not immediately applicable. Overall, government agencies should continuously increase the amount and quality of Open Government Data, strengthen the discussion and linkage functions of their platforms, and improve dataset quality. Aside from consolidating collaborations and interactions with open data communities, government agencies should improve the awareness and ability of personnel to manage and apply open data. As personnel acceptance of open data improves, the quantity and quality of Open Government Data will improve as well.
This paper proposes the Digital Public Service Innovation Framework that extends the "standard" provision of digital public services according to the emerging, enhanced, transactional and connected stages underpinning the United Nations Global e-Government Survey, with seven example "innovations" in digital public service delivery: transparent, participatory, anticipatory, personalized, co-created, context-aware and context-smart. Unlike the "standard" provisions, innovations in digital public service delivery are open-ended (new forms may continuously emerge in response to new policy demands and technological progress) and non-linear (one innovation may or may not depend on others). The framework builds on the foundations of public sector innovation and the Digital Government Evolution model. In line with the latter, the paper equips each innovation with a sharp logical characterization, a body of research literature, and real-life cases from around the world, simultaneously serving the goals of illustration and validation. The paper also identifies some policy implications of the framework, covering a broad range of issues from infrastructure, capacity, eco-system and partnerships, to inclusion, value, channels, security, privacy and authentication.
Recent years have seen exponential growth in the collection and processing of data from heterogeneous sources for a variety of purposes. Several methods and techniques have been proposed to transform and fuse data into "useful" information. However, the security aspects of fusing sensitive data are often overlooked. This paper investigates the problem of data fusion and derived data control. In particular, we identify the requirements for regulating the fusion process and eliciting restrictions on the access and usage of derived data. Based on these requirements, we propose an attribute-based policy framework to control the fusion of data from different information sources and under the control of different authorities. The framework comprises two types of policies: access control policies, which define the authorizations governing the resources used in the fusion process, and fusion policies, which define constraints on allowed fusion processes. We also discuss how such policies can be obtained for derived data.
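The distinction between the two policy types might be sketched as follows (attribute names and data sources are hypothetical, not from the paper): access control policies authorize the use of individual sources, while fusion policies forbid certain combinations of sources.

```python
# Illustrative sketch of the two policy types the abstract distinguishes:
# attribute-based access control over individual sources, plus fusion
# policies constraining which sources may be combined. Names hypothetical.

# Access control: requester attributes -> permission per source.
ACCESS_POLICIES = {
    "gps_traces":      lambda u: u.get("role") == "analyst",
    "medical_records": lambda u: u.get("role") == "analyst" and u.get("clearance", 0) >= 3,
    "public_census":   lambda u: True,
}

# Fusion policies: combinations forbidden, e.g. because joining the sources
# could re-identify individuals in the derived data.
FORBIDDEN_FUSIONS = [
    {"gps_traces", "medical_records"},
]

def may_fuse(user, sources):
    # 1. Access control: the user must be authorized for every input source.
    for s in sources:
        if not ACCESS_POLICIES.get(s, lambda u: False)(user):
            return False, f"access denied on source '{s}'"
    # 2. Fusion control: no forbidden combination may appear among the inputs.
    for bad in FORBIDDEN_FUSIONS:
        if bad <= set(sources):
            return False, f"fusion of {sorted(bad)} is not allowed"
    return True, "fusion permitted"

user = {"role": "analyst", "clearance": 3}
print(may_fuse(user, ["public_census", "gps_traces"]))    # permitted
print(may_fuse(user, ["gps_traces", "medical_records"]))  # fusion denied
```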
While in business and private settings the disruptive impact of advanced information and communication technology (ICT) has already been felt, the legal sector is now starting to face great disruptions due to such ICTs. Bits and pieces of innovation in the legal sector have been emerging for some time, affecting the performance of core functions and the legitimacy of public institutions. In this paper, we present our framework for enabling the smart government vision, particularly for the case of criminal justice systems, by unifying different isolated ICT-based solutions. Our framework, coined Legal Logistics, supports the well-functioning of a legal system and streamlines innovation within it. The framework targets the exploitation of all relevant data generated by the ICT-based solutions. As will be illustrated for the Dutch criminal justice system, the framework may be used to integrate different ICT-based innovations and to gain insights into the well-functioning of the system. Furthermore, Legal Logistics can be regarded as a roadmap towards a smart and open justice.
In this workshop, participants coming from a variety of disciplinary backgrounds and countries (China, South Korea, the EU, and the US) will present their countries' cyber security initiatives and challenges. Following the presentations, participants will discuss current trends, lessons learned in implementing the initiatives, and international collaboration. The workshop will culminate in setting an agenda for future collaborative studies in cyber security.
While rapid progress in smart city technologies is changing cities and the lifestyle of their people, it brings increasingly enormous challenges for the safety and security of smart cities. Potential vulnerabilities in e-government products and imminent attacks on smart city infrastructure and services would have catastrophic consequences for governments and could cause substantial economic and noneconomic losses, even chaos, to cities and their residents. This paper explores alternative economic solutions, ranging from incentive mechanisms to market-based solutions, to motivate smart city product vendors, governments, and vulnerability researchers and finders to improve the cybersecurity of smart cities.
With an immense number of threats pouring in from nation states, hacktivists, terrorists, and cybercriminals, a globally secure infrastructure has become a major obligation. Most critical infrastructures were originally designed to work in isolation from the normal communication network, but with the advent of the "Smart Grid", which uses advanced and intelligent approaches to control critical infrastructure, these cyber-physical systems now require access to the communication network. Consequently, such critical systems have become prime targets, and securing critical infrastructure is currently one of the most challenging research problems. Performing an extensive security analysis involving experiments with cyberattacks on a live industrial control system (ICS) is not possible, so researchers generally resort to testbeds and complex simulations to answer questions about SCADA systems. Since all conclusions are drawn from the testbed, validation against a physical model is necessary. This paper examines the fidelity of a virtual SCADA testbed with respect to a physical testbed and allows for the study of the effects of cyberattacks on both systems.
Security and privacy are crucial for all IT systems and services. The diversity of applications places high demands on the knowledge and experience of software developers and IT professionals. Besides programming skills, developers need to understand security and privacy aspects and consider them during development. Developers who have not been trained in these topics find it especially difficult to prevent problematic security issues such as vulnerabilities. In this work we present an interactive e-learning platform focused on solving real-world cybersecurity tasks in a sandboxed web environment. With our platform, students can learn and understand how security vulnerabilities can be exploited in different scenarios. The platform has been evaluated in four university IT security courses with around 1100 participants over three years.
Nowadays, both the number of cyberattacks and their sophistication have increased considerably, and their prevention concerns many organizations. Cooperation by means of information sharing is a promising strategy to address this problem, but unfortunately it poses many challenges. Indeed, establishing a win-win environment is not straightforward, and organizations are not properly motivated to share information. This work presents a model to analyse the benefits and drawbacks of information sharing among organizations that exhibit a certain level of dependency. The proposed model applies functional dependency network analysis to emulate attack propagation, and game theory for information sharing management. We present a simulation framework implementing the model that allows for testing different sharing strategies under several network and attack settings. Experiments in simulated environments show how the proposed model provides insights into which conditions and scenarios are beneficial for information sharing.
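A toy sketch of the dependency-driven propagation idea (the organizations, weights, and mitigation parameter are invented for illustration, not the paper's model): damage spreads along weighted dependency edges, and shared threat information is modeled as dependents mitigating part of the incoming degradation.

```python
# Toy functional-dependency-style attack propagation. An attack degrades one
# organization; the degradation propagates to dependents in proportion to
# dependency weights. Information sharing is modeled as a mitigation factor
# that reduces inherited damage. All parameters are hypothetical.
DEPENDS_ON = {          # org -> {supplier: dependency weight in [0, 1]}
    "bank":  {"telco": 0.6, "power": 0.8},
    "telco": {"power": 0.5},
    "power": {},
}

def propagate(initial_victim, hit=1.0, mitigation=0.0):
    """Return each organization's degradation after the attack propagates."""
    damage = {org: 0.0 for org in DEPENDS_ON}
    damage[initial_victim] = hit
    # Simple fixed-point iteration over the dependency graph.
    for _ in range(len(DEPENDS_ON)):
        for org, suppliers in DEPENDS_ON.items():
            inherited = sum(w * damage[sup] * (1 - mitigation)
                            for sup, w in suppliers.items())
            damage[org] = max(damage[org], min(1.0, inherited))
    return damage

print("no sharing:  ", propagate("power", mitigation=0.0))
print("with sharing:", propagate("power", mitigation=0.5))  # shared intel halves spillover
```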
Cyber ranges are well-defined controlled virtual environments used in cybersecurity training as an efficient way for trainees to gain practical knowledge through hands-on activities. However, creating an environment that contains all the necessary features and settings, such as virtual machines, network topology, and security-related content, is not an easy task, especially for a large number of participants. We therefore propose CyRIS (Cyber Range Instantiation System) as a solution to this problem. CyRIS provides a mechanism to automatically prepare and manage cyber ranges for cybersecurity education and training based on specifications defined by the instructors. In this paper, we first describe the design, implementation, and use of CyRIS. We then evaluate CyRIS in terms of feature coverage, compared to the Technical Guide to Information Security Testing and Assessment of the U.S. National Institute of Standards and Technology, and in terms of functionality, compared to other similar tools. We also discuss the execution performance of CyRIS for several representative scenarios.
College students, as digital natives, suffer from cyberattacks that include social engineering and phishing attacks. Moreover, students, as college computer users and as future employees, may inadvertently commit cybercrimes as insiders. Cybersecurity awareness programs and training have been found to be effective in reducing the risk of successful cyberattacks related to human users. In this outline, we propose a three-dimensional (3D) approach to cybersecurity awareness and training for college students.
This paper presents a supervisory control and data acquisition (SCADA) testbed recently built at the University of New Orleans. The testbed consists of models of three industrial physical processes: a gas pipeline, a power transmission and distribution system, and a wastewater treatment plant; these systems are fully functional and implemented at small scale. It utilizes real-world industrial equipment such as transformers, programmable logic controllers (PLCs), and aerators, bringing it closer to modeling real-world SCADA systems. Sensors, actuators, and PLCs are deployed at each physical process for local control and monitoring, and the PLCs are also connected to a computer running human-machine interface (HMI) software for monitoring the status of the physical processes. The testbed is a useful resource for cybersecurity research, forensic research, and education on different aspects of SCADA systems such as PLC programming, protocol analysis, and demonstration of cyberattacks.
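For a flavor of the traffic such a testbed generates for protocol-analysis exercises, the sketch below polls a PLC's holding registers over Modbus/TCP using the pymodbus library (the host address and register map are hypothetical, and pymodbus call signatures vary slightly between library versions):

```python
# Hedged sketch: reading a PLC's holding registers over Modbus/TCP, a common
# industrial protocol in SCADA testbeds. Host and registers are illustrative.
from pymodbus.client import ModbusTcpClient  # pymodbus 3.x import path

client = ModbusTcpClient("192.0.2.10", port=502)  # example PLC address
client.connect()

# Read 4 holding registers starting at address 0 (e.g., pressure/flow values).
result = client.read_holding_registers(0, count=4)
if not result.isError():
    print("register values:", result.registers)

client.close()
```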
Moving target defense (MTD) is an emerging paradigm in which system defenses dynamically mutate in order to decrease the overall system attack surface. Though the concept is promising, implementations have not been widely adopted. The field has been actively researched for over ten years, yet it has produced only a small number of widely adopted defenses, most notably address space layout randomization (ASLR), despite the fact that a variety of moving target implementations and proofs-of-concept exist. We suspect this is because moving target controls break critical system dependencies from the perspective of users and administrators, not just attackers. As a result, the impact of the controls on overall system security is not sufficient to outweigh the inconvenience imposed on legitimate system users. In this paper, we analyze a successful MTD approach. We study the control's dependency graphs, showing how graph-theoretic and network properties can be used to predict the effectiveness of the selected control.
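The dependency-graph intuition can be sketched as follows (the graph is hypothetical, not the paper's): an artifact that many human-facing nodes transitively depend on is costly to mutate, whereas one hidden behind a resolver, like the memory layout behind the loader in ASLR, is cheap.

```python
# Hedged sketch (hypothetical graph): model system dependencies and count how
# many human-facing nodes transitively depend on the artifact an MTD control
# would mutate. High human-facing dependency predicts breakage and therefore
# low adoption; zero predicts ASLR-like transparency.
import networkx as nx

G = nx.DiGraph()  # edge (a, b) means "a depends on b staying stable"
G.add_edges_from([
    ("user", "app"), ("admin", "monitoring"),
    ("app", "ip_address"), ("monitoring", "ip_address"),  # hard-coded addresses
    ("loader", "memory_layout"),  # layout is resolved at load time, unseen by users
])

for artifact in ("ip_address", "memory_layout"):
    dependents = {n for n in G if nx.has_path(G, n, artifact)} - {artifact}
    humans = dependents & {"user", "admin"}
    verdict = "costly to mutate" if humans else "cheap to mutate (ASLR-like)"
    print(f"{artifact}: {len(dependents)} dependents, "
          f"{len(humans)} human-facing -> {verdict}")
```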
The Physical Web is a project announced by Google's Chrome team that provides a framework to discover "smart" physical objects (e.g., vending machines, classrooms, conference rooms, cafeterias) and interact with specific, contextual content without having to download a dedicated app. A common app, such as the open-source and freely available Physical Web app on the Google Play Store or the BKON Browser on the Apple App Store, can access nearby beacons. A current work in progress at the University of Maui College is a campus-wide prototype of beacon technology using the Eddystone-URL and EID protocols with beacons from various vendors.
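The Eddystone-URL frames such beacons broadcast follow Google's published encoding: a frame type byte, a TX power byte, a URL scheme code, then the URL compressed with single-byte expansion codes. A sketch of the encoder follows (the example URL is illustrative, not necessarily the deployment's):

```python
# Sketch of Eddystone-URL frame encoding per Google's published Eddystone
# specification. The example URL is illustrative only.
SCHEMES = {"http://www.": 0x00, "https://www.": 0x01,
           "http://": 0x02, "https://": 0x03}
EXPANSIONS = {".com/": 0x00, ".org/": 0x01, ".edu/": 0x02, ".net/": 0x03,
              ".com": 0x07, ".org": 0x08, ".edu": 0x09, ".net": 0x0A}

def encode_eddystone_url(url, tx_power=-20):
    frame = bytearray([0x10, tx_power & 0xFF])   # frame type, TX power at 0 m
    for prefix in sorted(SCHEMES, key=len, reverse=True):
        if url.startswith(prefix):
            frame.append(SCHEMES[prefix])
            url = url[len(prefix):]
            break
    else:
        raise ValueError("URL scheme not supported by Eddystone-URL")
    while url:
        for text in sorted(EXPANSIONS, key=len, reverse=True):
            if url.startswith(text):             # compress common suffixes
                frame.append(EXPANSIONS[text])
                url = url[len(text):]
                break
        else:
            frame.append(ord(url[0]))            # plain character
            url = url[1:]
    if len(frame) > 20:                          # BLE advertisement size limit
        raise ValueError("encoded URL too long; use a URL shortener")
    return bytes(frame)

# e.g., a classroom beacon pointing at a (hypothetical) campus page:
print(encode_eddystone_url("https://maui.hawaii.edu").hex())
```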
Objective: To evaluate the effectiveness of domain highlighting in helping users identify whether webpages are legitimate or spurious.
Background: As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which website they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar.
Method: Two phishing detection experiments were conducted. Experiment 1 was run online: Participants judged the legitimacy of webpages in two phases. In phase one, participants were to judge the legitimacy based on any information on the webpage, whereas in phase two they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations.
Results: Participants differentiated the legitimate and fraudulent webpages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants’ visual attention was attracted by the domain highlighting.
Conclusion: Failure to detect many fraudulent webpages even when the domain was highlighted implies that users lacked knowledge of webpage security cues or how to use those cues.
Application: Potential applications include development of phishing-prevention training incorporating domain highlighting with other methods to help users identify phishing webpages.
The threat of DDoS and other cyberattacks has increased during the last decade. In addition to the radical increase in the number of attacks, they are also becoming more sophisticated, with targets ranging from ordinary users to service providers and even critical infrastructure. According to some sources, the sophistication of attacks is increasing faster than the mitigating actions against them. For example, determining the location of the attack origin is becoming impossible, as cyber attackers employ specific means to evade detection of the attack origin, such as proxy services and source address spoofing. The purpose of this paper is to initiate discussion about the effective Internet Protocol traceback mechanisms needed to overcome this problem. We propose an approach to traceback based on extensive use of security metrics before (proactive) and during (reactive) attacks.
This study focuses on the spatial context of hacking against networks of honeypots. We investigate the relationship between the topological and geographic positions of victimized computers and system trespassers. We deployed research honeypots on the computer networks of two academic institutions, collected information on successful brute force attacks (BFA) and system trespassing events (sessions), and used Social Network Analysis (SNA) techniques to depict and understand the correlation between spatial attributes (IP addresses) and the topology of hacking networks. We mapped and explored hacking patterns and found that geography might shape both attacker behavior and the topology of hacking networks. The contribution of this study stems from the fact that there are no prior studies of geographic influences on the topology of hacking networks and from the unique use of SNA to investigate hacking activities. Looking ahead, our study can assist policymakers in forming effective policies in the field of cybercrime.
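The SNA step might look like the following sketch (session data is synthetic): brute-force sessions become edges of an attacker-to-honeypot graph, and topological position is then compared across attacker geolocations.

```python
# Illustrative sketch (synthetic data): build the attacker -> honeypot graph
# from brute-force session logs and compare topological position (degree
# centrality) across attacker geolocations, in the spirit of the SNA approach
# described above.
import networkx as nx

sessions = [  # (attacker_ip, honeypot, attacker_country) -- synthetic examples
    ("198.51.100.7", "hp-uni-A", "CN"),
    ("198.51.100.7", "hp-uni-B", "CN"),
    ("203.0.113.42", "hp-uni-A", "RU"),
    ("192.0.2.99",   "hp-uni-B", "US"),
]

G = nx.Graph()
for ip, hp, country in sessions:
    G.add_node(ip, kind="attacker", country=country)
    G.add_node(hp, kind="honeypot")
    G.add_edge(ip, hp)   # one edge per observed trespassing session pair

centrality = nx.degree_centrality(G)
for ip, data in G.nodes(data=True):
    if data["kind"] == "attacker":
        print(f"{ip} ({data['country']}): centrality {centrality[ip]:.2f}")
```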
This paper introduces a research agenda focusing on cybersecurity in the context of product lifecycle management. The paper discusses research directions on critical protection techniques, including protection from insider threats, access control systems, secure supply chains and remote 3D printing, compliance techniques, and secure collaboration techniques. The paper then presents an overview of DBSAFE, a system for protecting data from insider threat.
Intro: Computer network defense has models for attacks, and for incidents comprising multiple attacks, after the fact. However, we lack an evidence-based model of the likelihood and intensity of attacks and incidents. Purpose: We propose a model of global capability advancement, the adversarial capability chain (ACC), to fill this need. The model enables cyber risk analysis to better understand the costs for an adversary to attack a system, which directly influences the cost to defend it. Method: The model is based on four historical studies of adversarial capabilities: the capability to exploit Windows XP, to exploit the Android API, to exploit Apache, and to administer compromised industrial control systems. Result: We propose the ACC with five phases: Discovery, Validation, Escalation, Democratization, and Ubiquity. We use the four case studies as examples of how the ACC can be applied and used to predict attack likelihood and intensity.
Cybersecurity is a problem of growing relevance that impacts all facets of society. As a result, many researchers have become interested in studying cybercriminals and online hacker communities in order to develop more effective cyber defenses. In particular, analysis of hacker community contents may reveal existing and emerging threats that pose great risk to individuals, businesses, and government. Thus, we are interested in developing an automated methodology for identifying tangible and verifiable evidence of potential threats within hacker forums, IRC channels, and carding shops. To identify threats, we couple machine learning methodology with information retrieval techniques. Our approach allows us to distill potential threats from the entirety of collected hacker contents. We present several examples of identified threats found through our analysis techniques. Results suggest that hacker communities can be analyzed to aid in cyber threat detection, thus providing promising direction for future work.
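A minimal sketch of the kind of pipeline the abstract describes (the posts, labels, and model choices here are synthetic and illustrative; the paper's actual features and classifiers may differ): information-retrieval features feeding a supervised classifier that flags threat-related forum content.

```python
# Minimal sketch: TF-IDF information-retrieval features coupled with a
# machine learning classifier to flag potentially threat-related posts.
# Training data is synthetic; a real system would use labeled forum corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "selling fresh dumps cvv fullz track2",        # carding-shop listing
    "new 0day exploit for apache, dm for price",   # exploit advertisement
    "how do i install linux on my laptop",         # benign
    "best pizza places near campus",               # benign
]
labels = [1, 1, 0, 0]                              # 1 = potential threat

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["anyone selling cvv dumps cheap"]))  # expected: [1]
```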