Biblio
Controllers for software defined networks (SDNs) are quickly maturing to offer network operators more intuitive programming frameworks and greater abstractions for network application development. Likewise, many security solutions now exist within SDN environments for detecting and blocking clients who violate network policies. However, many of these solutions stop at triggering the security measure and give little thought to amending it. As a consequence, once the violation is addressed, no clear path exists for reinstating the flagged client beyond having the network operator reset the controller or manually implement a state change via an external command. This presents a burden for the network, its clients, and its administrators. Hence, we present a security policy transition framework for revoking security measures in an SDN environment once said measures are activated.
Hypervisors are the main components for managing virtual machines in cloud computing systems. Thus, hypervisor security is crucial, as the whole system can be compromised when just one vulnerability is exploited. In this paper, we assess the vulnerabilities of widely used hypervisors, including VMware ESXi, Citrix XenServer, and KVM, using the NIST 800-115 security testing framework. We perform real experiments to assess the vulnerabilities of these hypervisors using security testing tools. The results are evaluated using weakness information from CWE and vulnerability information from CVE. We also compute severity scores using CVSS information. All vulnerabilities found in the three hypervisors are compared in terms of weaknesses, severity scores, and impact. The experimental results show that ESXi and XenServer share common weaknesses and vulnerabilities, whereas KVM has fewer vulnerabilities. In addition, we discover a new HTTP response splitting vulnerability in the ESXi Web interface.
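To make the severity-scoring step concrete, here is a minimal sketch of the CVSS v2 base score computation used for such rankings; the metric weights are the published CVSS v2 constants, while the example vector is hypothetical rather than taken from the paper.

```python
# CVSS v2 base score from the standard metric weights.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Conf./Integ./Avail. impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Hypothetical vector AV:N/AC:L/Au:N/C:P/I:P/A:P
print(cvss2_base("N", "L", "N", "P", "P", "P"))  # -> 7.5
```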
Improving cyber-security presents an ongoing challenge to security professionals. Research continuously suggests that online users are a weak link in information security. This research explores the relationship between cyber-security and cultural, personality, and demographic variables. The study was conducted in four different countries and presents a multi-cultural view of cyber-security. In particular, it looks at how behavior, self-efficacy, and privacy attitude are affected by culture compared to other psychological and demographic variables (such as gender and computer expertise). It also examines what kind of data people tend to share online and how culture affects these choices. This work supports the idea of developing personality-based UI design to increase users' cyber-security. Its results show that certain personality traits affect users' cyber-security-related behavior across different cultures, which further reinforces their contribution compared to cultural effects.
The Software Assurance Metrics and Tool Evaluation (SAMATE) project at the National Institute of Standards and Technology (NIST) has created the Software Assurance Reference Dataset (SARD) to provide researchers and software security assurance tool developers with a set of known security flaws. As part of an empirical evaluation of a runtime monitoring framework, two test suites were executed and monitored, revealing deficiencies which led to a collaboration with the NIST SAMATE team to provide replacements. Test Suites 45 and 46 are analyzed, discussed, and updated to improve accuracy, consistency, precision, and automation. Empirical results show metrics such as recall, precision, and F-Measure are all impacted by invalid base assumptions regarding the test suites.
Two-factor authenticated key-agreement schemes for the session initiation protocol (SIP) have emerged as the best remedy to overcome the limitations of password-based authentication schemes. Recently, Lu et al. proposed an anonymous two-factor authenticated key-agreement scheme for SIP using elliptic curve cryptography. They claimed that their scheme is secure against attacks and achieves user anonymity. Conversely, this paper's analysis points out several severe security weaknesses in Lu et al.'s scheme. In addition, this paper puts forward an enhanced anonymous two-factor mutual authenticated key-agreement scheme for SIP using elliptic curve cryptography. The security and performance analyses demonstrate that the proposed scheme is more robust and efficient than Lu et al.'s scheme.
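The core primitive in such ECC-based key agreement is an elliptic-curve Diffie-Hellman exchange followed by key derivation; the sketch below illustrates this building block with the Python cryptography package (curve choice and KDF parameters are illustrative assumptions, not the scheme itself).

```python
# Elliptic-curve Diffie-Hellman exchange plus key derivation, the
# building block of ECC-based authenticated key agreement for SIP.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

client_priv = ec.generate_private_key(ec.SECP256R1())  # client ephemeral key
server_priv = ec.generate_private_key(ec.SECP256R1())  # server ephemeral key

# Each side combines its own private key with the peer's public key.
client_secret = client_priv.exchange(ec.ECDH(), server_priv.public_key())
server_secret = server_priv.exchange(ec.ECDH(), client_priv.public_key())
assert client_secret == server_secret

# Derive the session key from the shared secret (context label is hypothetical).
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"sip-session").derive(client_secret)
```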
Recently, various certificateless signature (CLS) schemes have been developed using bilinear pairing to provide message authenticity. In 2015, Jia-Lun Tsai proposed a certificateless pairing-based short signature scheme using elliptic curve cryptography (ECC) and proved its security in the random oracle model. However, the scheme is shown to be inappropriate for practical use because no message-signature dependency is present during signature generation and verification, leaving the scheme vulnerable to attacks. To overcome these attacks, this paper presents a variant of Jia-Lun Tsai's short signature scheme. Our scheme is secure under the hardness of the collusion attack algorithm with k traitors (k-CAA). The performance analysis demonstrates that the proposed scheme is more efficient than other related signature schemes.
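The missing property, message-signature dependency, is illustrated below with a standard ECDSA sign/verify pair from the Python cryptography package: because the message digest is bound into the signature, verification fails for any other message. This demonstrates the property in general, not the paper's certificateless construction.

```python
# Message-dependent signing: the digest of the message is bound into
# the signature, so it cannot be transplanted onto other content.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

signer = ec.generate_private_key(ec.SECP256R1())
signature = signer.sign(b"original message", ec.ECDSA(hashes.SHA256()))

try:
    signer.public_key().verify(signature, b"tampered message",
                               ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("signature rejected for a different message")
```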
Security patterns are generic solutions that can be applied from the early stages of the software lifecycle to overcome recurrent security weaknesses. Their generic nature and growing number make their choice difficult, even for experts in system design. To help with pattern choice, this paper proposes a semi-automatic classification methodology, and the classification itself, which exposes relationships among software weaknesses, security principles, and security patterns. It expresses which patterns remove a given weakness with respect to the security principles that have to be addressed to fix the weakness. The methodology is based on seven steps, which anatomize patterns and weaknesses into sets of more precise sub-properties that are associated through a hierarchical organization of security principles. These steps provide detailed justifications for the resulting classification and allow it to be upgraded. Without loss of generality, the classification has been established for Web applications and covers 185 software weaknesses, 26 security patterns, and 66 security principles. Research supported by the industrial chair on Digital Confidence (http://confiance-numerique.clermont-universite.fr/index-en.html).
Defending key network infrastructure, such as Internet backbone links or the communication channels of critical infrastructure, is paramount, yet challenging. The inherently complex nature and quantity of network data impedes detecting attacks in real world settings. In this paper, we utilize features of network flows, characterized by their entropy, together with an extended version of the original Replicator Neural Network (RNN) and deep learning techniques to learn models of normality. This combination allows us to apply anomaly-based intrusion detection on arbitrarily large amounts of data and, consequently, large networks. Our approach is unsupervised and requires no labeled data. It also accurately detects network-wide anomalies without presuming that the training data is completely free of attacks. The evaluation of our intrusion detection method on real network data indicates that it can accurately detect resource exhaustion attacks and network profiling techniques of varying intensities. The developed method is efficient because a normality model can be learned by training an RNN within only a few seconds.
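The entropy characterization can be pictured as follows: each traffic window is summarized by the Shannon entropy of selected flow attributes, and the resulting vectors are what the replicator network learns to reproduce. The feature choice in this sketch is an assumption for illustration.

```python
# Shannon entropy of flow attributes over one traffic window.
import math
from collections import Counter

def shannon_entropy(values):
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy window of flows as (src_ip, dst_ip, dst_port) tuples.
window = [("10.0.0.1", "10.0.0.9", 443), ("10.0.0.2", "10.0.0.9", 443),
          ("10.0.0.3", "10.0.0.9", 22)]
features = [shannon_entropy([flow[i] for flow in window]) for i in range(3)]
print(features)  # entropies of source IPs, destination IPs, destination ports
```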
Power system security is one of the key issues in the operation of smart grid systems. Evaluating power system security is a big challenge when all contingencies are considered, due to the huge computational effort involved. Phasor measurement units (PMUs) play a vital role in real-time power system monitoring and control. This paper presents a static security assessment scheme for large-scale interconnected power systems with PMUs using an Artificial Neural Network (ANN). Voltage magnitude and phase angle are used as input variables of the ANN. The optimal locations of PMUs under the base case and critical contingency cases are determined using a genetic algorithm. The performance of the proposed optimization model was tested on the standard IEEE 30-bus system incorporating zero-injection buses, and successful results were obtained.
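A minimal sketch of such an ANN classifier is shown below, mapping PMU voltage magnitudes and phase angles to a secure/insecure label; the network size, training data, and labeling rule are toy assumptions, not the paper's model.

```python
# Toy ANN security classifier: PMU voltage magnitudes and angles in,
# secure/insecure label out (data and labeling rule are illustrative).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_buses = 30                                      # e.g., IEEE 30-bus system
low = [0.9] * n_buses + [-0.5] * n_buses          # p.u. magnitudes, rad angles
high = [1.1] * n_buses + [0.5] * n_buses
X = rng.uniform(low, high, size=(200, 2 * n_buses))
y = (X[:, :n_buses].min(axis=1) > 0.95).astype(int)  # toy security rule

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))  # 1 = secure, 0 = insecure
```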
Many malware families utilize domain generation algorithms (DGAs) to establish command and control (C&C) connections. While there are many methods to pseudorandomly generate domains, we focus in this paper on detecting (and generating) domains on a per-domain basis which provides a simple and flexible means to detect known DGA families. Recent machine learning approaches to DGA detection have been successful on fairly simplistic DGAs, many of which produce names of fixed length. However, models trained on limited datasets are somewhat blind to new DGA variants. In this paper, we leverage the concept of generative adversarial networks to construct a deep learning based DGA that is designed to intentionally bypass a deep learning based detector. In a series of adversarial rounds, the generator learns to generate domain names that are increasingly more difficult to detect. In turn, a detector model updates its parameters to compensate for the adversarially generated domains. We test the hypothesis of whether adversarially generated domains may be used to augment training sets in order to harden other machine learning models against yet-to-be-observed DGAs. We detail solutions to several challenges in training this character-based generative adversarial network. In particular, our deep learning architecture begins as a domain name auto-encoder (encoder + decoder) trained on domains in the Alexa one million. Then the encoder and decoder are reassembled competitively in a generative adversarial network (detector + generator), with novel neural architectures and training strategies to improve convergence.
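The detector half of such a setup can be sketched as a character-level recurrent classifier; the PyTorch fragment below shows one adversarial round from the detector's side, with vocabulary, layer sizes, and the toy batch all being illustrative assumptions rather than the paper's architecture.

```python
# Character-level LSTM detector scoring domains as benign vs. DGA.
import torch
import torch.nn as nn

class DomainDetector(nn.Module):
    def __init__(self, vocab_size=40, embed_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, chars) int tokens
        _, (h, _) = self.lstm(self.embed(x))
        return self.head(h[-1]).squeeze(-1)  # logit: > 0 means "DGA"

detector = DomainDetector()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One detector update in an adversarial round (toy batch): generated
# domains are labeled 1, benign Alexa-style domains 0.
batch = torch.randint(0, 40, (8, 12))        # placeholder encoded domains
labels = torch.tensor([1., 1., 1., 1., 0., 0., 0., 0.])
optimizer.zero_grad()
loss = loss_fn(detector(batch), labels)
loss.backward()
optimizer.step()
```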
Traditional text classification methods usually follow this process: first, a sentence is treated as a bag of words (BOW); then it is transformed into a sentence feature vector, which can be classified by methods such as maximum entropy (ME), Naive Bayes (NB), and support vector machines (SVM). However, when these methods are applied to text classification, the results are usually not ideal. The most important reason is that the semantic relations between words are very important for text categorization, yet traditional methods cannot capture them. Sentiment classification, as a special case of text classification, is binary classification (positive or negative). Inspired by sentiment analysis, we use a novel deep learning-based recurrent neural network (RNN) model for automatic security auditing of short messages from prisons, which classifies short messages as secure or insecure. In this paper, the features of short messages are extracted by word2vec, which captures word order information, and each sentence is mapped to a feature vector. In particular, words with similar meanings are mapped to similar positions in the vector space and then classified by the RNN. RNNs are now widely used, and their network structure makes it easy to process sequence data. We preprocess the short messages, extract typical features from existing secure and insecure short messages via word2vec, and classify the short messages through an RNN that accepts a fixed-sized vector as input and produces a fixed-sized vector as output. The experimental results show that the RNN model achieves an average accuracy of 92.7%, which is higher than SVM.
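The word2vec feature-extraction step can be sketched as follows; averaging the token vectors into one message vector is a simplification of feeding the sequence to the RNN, and the toy corpus and gensim (>= 4) API are assumptions for illustration.

```python
# word2vec features for short messages (averaged token embeddings).
import numpy as np
from gensim.models import Word2Vec

messages = [["meet", "me", "at", "noon"],
            ["send", "the", "package", "tonight"],
            ["call", "home", "tomorrow"]]       # toy tokenized messages

w2v = Word2Vec(messages, vector_size=50, window=3, min_count=1, seed=0)

def message_vector(tokens):
    # Words with similar meanings land near each other in this space.
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

features = np.stack([message_vector(m) for m in messages])
print(features.shape)  # (3, 50): one feature vector per message
```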
Software defined networking (SDN) promises to dramatically simplify network management for network operators. It provides flexibility and innovation through network programmability. With SDN, network management moves from codifying functionality in terms of low-level device configuration to building software that facilitates network management and debugging [1]. SDN provides new techniques to solve long-standing problems in networking, such as routing, by separating the complexity of state distribution from network specification. Despite all the hype surrounding SDN, exploiting its full potential is demanding. Security is still the major issue and a striking challenge that hinders the growth of SDN. Moreover, the introduction of various architectural components and novel entities in SDN poses new security issues and threats. SDN is considered a major target for digital threats and cyber-attacks [2], which can have more devastating effects than in conventional networks. The initial SDN design did not consider security; therefore, security must be raised on the agenda. This article discusses the solutions proposed to secure SDNs. We categorize these security solutions by presenting a thematic taxonomy based on SDN architectural layers/interfaces [3], security measures and goals, and simulation frameworks. Moreover, the article points out possible attacks [2] targeting different layers/interfaces of SDNs. For securing SDNs, the potential requirements and their key enablers are identified and presented. The article also sketches the design of secure and dependable SDNs. Finally, we discuss open issues and challenges of SDN security that professionals and researchers may address in the future.
Modern smart surveillance systems can not only record the monitored environment but also identify targeted objects and detect anomalous activities. These advanced functions are often facilitated by deep neural networks, achieving very high accuracy and large data processing throughput. However, inappropriate design of the neural network may expose such smart systems to the risk of leaking the target being searched for, or even the adopted learning model itself, to attackers. In this talk, we will present the security challenges in the design of smart surveillance systems. We will also discuss some possible solutions that leverage the unique properties of emerging nano-devices, including the incurred design and performance costs and optimization methods for minimizing these overheads.
The coming days pose a challenging task for power system researchers due to the anomalous increase in load demand on existing systems. As a result, a mismatch exists between the transmission and generation frameworks, which severely pressures power utilities. In this paper, a quick and efficient methodology is proposed to identify the most sensitive or susceptible regions in any power system network. The technique comprises the mapping of a multi-bus power system network to an equivalent two-bus network, along with the application of an Artificial Neural Network (ANN) architecture with a training algorithm for online monitoring of the voltage security of the system under multiple contingencies, which makes it more flexible. A fast voltage stability indicator, the Unified Voltage Stability Indicator (UVSI), is proposed and used as the basis for assessing the voltage collapse point in the IEEE 30-bus power system, in combination with a Feed Forward Neural Network (FFNN) to establish the accuracy of the system status for different contingency configurations.
In view of the high demand for the security of data access in power systems, this paper proposes a network data security analysis method based on deep packet inspection (DPI) technology, to solve the problem of a security gateway judging the legality of network data. Since the legitimacy of data involves both the data protocol and the data contents, the method filters data through protocol matching and content detection: DPI technology is used to screen the protocol, and protocol analysis is used to detect the contents of the data. This paper implements the function of allowing secure data through the gateway while blocking threatening data. An example proves that the method effectively guarantees the safety of data access.
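The two-stage filter can be pictured with the toy fragment below: a packet first passes protocol matching, then content detection on its payload. The protocol table and threat signatures are hypothetical placeholders.

```python
# Two-stage gateway filter: protocol matching, then content detection.
import re

ALLOWED_PORTS = {102: "IEC 61850/MMS", 502: "Modbus/TCP", 443: "HTTPS"}
THREAT_SIGNATURES = [re.compile(rb"DROP\s+TABLE", re.I),
                     re.compile(rb"<script>", re.I)]

def gateway_allows(dst_port: int, payload: bytes) -> bool:
    if dst_port not in ALLOWED_PORTS:              # protocol matching
        return False
    return not any(sig.search(payload)             # content detection
                   for sig in THREAT_SIGNATURES)

print(gateway_allows(502, b"read holding registers"))  # True: passes both stages
print(gateway_allows(502, b"'; DROP TABLE users;--"))  # False: blocked content
```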
With the popularization and development of network knowledge, network intruders are increasing and attack modes keep evolving. Intrusion detection is an active defense technology that extracts key information from the network system and quickly judges and protects against internal or external network intrusions. It provides real-time protection against internal attacks, external attacks, and misuse, and it plays an important role in ensuring network security. However, with the diversification of intrusion techniques, traditional intrusion detection systems cannot meet the requirements of current network security. Therefore, the implementation of intrusion detection needs to be diversified. In this context, we apply neural network technology to the network intrusion detection system to solve this problem. In this paper, building on intrusion detection methods, we analyze the development history and present situation of intrusion detection technology, and summarize the overview and architecture of intrusion detection systems. The neural network intrusion detection process is divided into data acquisition, data analysis, pretreatment, intrusion behavior detection, and testing.
This paper considers physical layer security for cluster-based cooperative wireless sensor networks (WSNs), where each node is equipped with a single antenna and sensor nodes cooperate at each cluster of the network to form a virtual multi-input multi-output (MIMO) communication architecture. We propose a joint cooperative beamforming and jamming scheme to enhance the security of the WSN, where some sensor nodes in Alice's cluster are deployed to transmit beamforming signals to Bob while some sensor nodes in Bob's cluster are utilized to jam Eve with artificial noise. The optimization of the beamforming and jamming vectors to minimize total energy consumption while satisfying the quality-of-service (QoS) constraints is an NP-hard problem. Fortunately, through reformulation, the problem is proved to be a quadratically constrained quadratic problem (QCQP), which can be solved with the Solving Constraint Integer Programs (SCIP) algorithm. Finally, we give simulation results for our proposed scheme.
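A generic form of this power-minimization QCQP reads as follows, where the symbols are illustrative assumptions: w and v are the beamforming and jamming vectors, h_b, h_e, g_e the channels toward Bob and Eve, and gamma_b, gamma_e the QoS and secrecy thresholds.

```latex
\begin{aligned}
\min_{\mathbf{w},\,\mathbf{v}}\quad & \|\mathbf{w}\|^2 + \|\mathbf{v}\|^2 \\
\text{s.t.}\quad
& \frac{|\mathbf{h}_b^{H}\mathbf{w}|^2}{\sigma_b^2} \ge \gamma_b,\\
& \frac{|\mathbf{h}_e^{H}\mathbf{w}|^2}{|\mathbf{g}_e^{H}\mathbf{v}|^2 + \sigma_e^2} \le \gamma_e .
\end{aligned}
```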
By connecting devices, people, vehicles, and infrastructures everywhere in a city, governments and their partners can improve community wellbeing and other economic and financial aspects (e.g., cost and energy savings). Nonetheless, smart cities are complex ecosystems that comprise many different stakeholders (network operators, managed service providers, logistic centers...) who must work together to provide the best services and unlock the commercial potential of the IoT. This is one of the major challenges facing today's smart city movement, and more generally the IoT as a whole. Indeed, while new smart connected objects hit the market every day, they mostly feed "vertical silos" (e.g., vertical apps, siloed apps...) that are closed to the rest of the IoT, thus hampering developers from producing new added value across multiple platforms. Within this context, the contribution of this paper is twofold: (i) present the EU vision and ongoing activities to overcome the problem of vertical silos; (ii) introduce recent IoT standards used as part of a recent Horizon 2020 IoT project to address this problem. The implementation of those standards for enhanced sporting event management in a smart city/government context (FIFA World Cup 2022) is developed, presented, and evaluated as a proof-of-concept.
We propose a new voting scheme, BeleniosRF, that offers both receipt-freeness and end-to-end verifiability. It is receipt-free in a strong sense, meaning that even dishonest voters cannot prove how they voted. We provide a game-based definition of receipt-freeness for voting protocols with non-interactive ballot casting, which we name strong receipt-freeness (sRF). To our knowledge, sRF is the first game-based definition of receipt-freeness in the literature, and it has the merit of being particularly concise and simple. Built upon the Helios protocol, BeleniosRF inherits its simplicity and does not require any anti-coercion strategy from the voters. We implement BeleniosRF and show its feasibility on a number of platforms, including desktop computers and smartphones.
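Receipt-freeness of this flavor rests on re-randomizable encryption: a ballot ciphertext can be publicly refreshed so the voter's own randomness no longer serves as a receipt. The toy ElGamal fragment below illustrates re-randomization only; the parameters are insecure and everything here is a simplified stand-in for the scheme's actual primitives.

```python
# Toy ElGamal re-randomization: same plaintext, fresh randomness.
import random

p = 2**127 - 1          # Mersenne prime; far too structured for real use
g = 3

def encrypt(pk, m, r):
    return (pow(g, r, p), (m * pow(pk, r, p)) % p)

def rerandomize(pk, ct):
    s = random.randrange(2, p - 1)
    return ((ct[0] * pow(g, s, p)) % p, (ct[1] * pow(pk, s, p)) % p)

def decrypt(sk, ct):
    return (ct[1] * pow(pow(ct[0], sk, p), p - 2, p)) % p

sk = random.randrange(2, p - 1)
pk = pow(g, sk, p)
ballot = encrypt(pk, 42, random.randrange(2, p - 1))
fresh = rerandomize(pk, ballot)     # looks unrelated to the original ballot
assert decrypt(sk, fresh) == 42     # yet decrypts to the same vote
```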
This paper presents a holistic approach to cyber resilience as a means of preparing for the "unknown unknowns". Principles of augmented cyber risk management and a resilience management model at the national level are presented, with elaboration on multi-stakeholder engagement and partnership for the implementation of a national cyber resilience collaborative framework. The complementarity of governance, law, and business/industry initiatives is outlined, with examples of the collaborative resilience model for the Bulgarian national strategy and its multi-national engagements.
Cloud computing is one of the most prominent technologies of recent years and gives scope to a lot of research ideas. Banks are likely to enter the cloud computing field because of the abundant advantages offered by the cloud, such as reduced IT costs, pay-per-use modeling, business agility, and green IT. The main challenges to be addressed while moving a bank to the cloud are security breaches, governance, and Service Level Agreements (SLAs). Banks cannot afford security breaches at any cost. Access control and authorization are vital solutions to security risks. Thus, we propose a knowledge-based security model addressing this issue. Separate ontologies for the subject, object, and action elements are created, and an authorization rule is framed by considering the interlinkage between those elements to ensure data security with restricted access. Moreover, banks now use Software as a Service (SaaS), which is managed by Cloud Service Providers (CSPs), and rely upon the security measures provided by the CSPs. If CSPs follow a traditional security model, data security becomes a big question. Our work enables the bank to impose security measures on its side along with the security provided by the CSPs. Banks can add and delete rules according to their needs and can have control over their data in addition to the CSPs. We also present a performance analysis of our model and show that it provides secure access to bank data.
Regarding Information and Communication Technologies (ICTs) in the public sector, electronic governance is the first concept to have emerged, and it has been recognized as an important issue in governments' outreach to citizens since the early 1990s. The most important recent development of e-governance is Open Government Data, which provides citizens with the opportunity to freely access government data, conduct value-added applications, provide creative public services, and participate in different kinds of democratic processes. Open Government Data is expected to enhance the quality and efficiency of government services, strengthen democratic participation, and create benefits for the public and enterprises. The success of Open Government Data hinges on its accessibility, data quality, security policy, and platform functions in general. This article presents a robust assessment framework that not only provides a valuable understanding of the development of Open Government Data but also provides an effective feedback mechanism for mid-course corrections. We further apply the framework to evaluate the Open Government Data platform of the central government, on which open data of nine major government agencies are analyzed. Our results indicate that the Financial Supervisory Commission performs better than other agencies, especially in terms of accessibility: it mostly provides 3-star or above dataset formats, and the quality of its metadata is well established. However, most of the data released by government agencies are regulations, reports, operations, and other administrative data, which are not immediately applicable. Overall, government agencies should proactively and continuously enhance the amount and quality of Open Government Data, and strengthen the discussion and linkage functions of the platforms and the quality of the datasets. Aside from consolidating collaborations and interactions with open data communities, government agencies should improve the awareness and ability of personnel to manage and apply open data. As the level of acceptance of open data among personnel improves, the quantity and quality of Open Government Data will improve as well.
This paper proposes the Digital Public Service Innovation Framework, which extends the "standard" provision of digital public services according to the emerging, enhanced, transactional, and connected stages underpinning the United Nations Global e-Government Survey, with seven example "innovations" in digital public service delivery: transparent, participatory, anticipatory, personalized, co-created, context-aware, and context-smart. Unlike the "standard" provisions, innovations in digital public service delivery are open-ended (new forms may continuously emerge in response to new policy demands and technological progress) and non-linear (one innovation may or may not depend on others). The framework builds on the foundations of public sector innovation and the Digital Government Evolution model. In line with the latter, the paper equips each innovation with a sharp logical characterization, a body of research literature, and real-life cases from around the world to serve both illustration and validation goals. The paper also identifies some policy implications of the framework, covering a broad range of issues from infrastructure, capacity, eco-system, and partnerships, to inclusion, value, channels, security, privacy, and authentication.
Recent years have seen exponential growth in the collection and processing of data from heterogeneous sources for a variety of purposes. Several methods and techniques have been proposed to transform and fuse data into "useful" information. However, the security aspects concerning the fusion of sensitive data are often overlooked. This paper investigates the problem of data fusion and derived data control. In particular, we identify the requirements for regulating the fusion process and eliciting restrictions on the access and usage of derived data. Based on these requirements, we propose an attribute-based policy framework to control the fusion of data from different information sources and under the control of different authorities. The framework comprises two types of policies: access control policies, which define the authorizations governing the resources used in the fusion process, and fusion policies, which define constraints on allowed fusion processes. We also discuss how such policies can be obtained for derived data.
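The interplay of the two policy types can be sketched as simple predicates over attributes; every attribute name and rule below is a hypothetical placeholder for illustration.

```python
# Access-control policies authorize each source; fusion policies
# constrain which combinations of sources may be fused.

def access_allowed(subject, resource):
    # Access-control policy: clearance must dominate the source's level.
    return subject["clearance"] >= resource["sensitivity"]

def fusion_allowed(resources):
    # Fusion policy: location data may not be fused with health records.
    categories = {r["category"] for r in resources}
    return not {"location", "health"} <= categories

analyst = {"clearance": 2}
sources = [{"category": "location", "sensitivity": 1},
           {"category": "purchase", "sensitivity": 2}]

if all(access_allowed(analyst, r) for r in sources) and fusion_allowed(sources):
    # Derived data inherits the strictest sensitivity of its inputs.
    print("fusion permitted; derived sensitivity:",
          max(r["sensitivity"] for r in sources))
```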
While in business and private settings the disruptive impact of advanced information and communication technology (ICT) has already been felt, the legal sector is now starting to face great disruptions due to such ICTs. Bits and pieces of innovation in the legal sector have been emerging for some time, affecting the performance of core functions and the legitimacy of public institutions. In this paper, we present our framework for enabling the smart government vision, particularly for the case of criminal justice systems, by unifying different isolated ICT-based solutions. Our framework, coined Legal Logistics, supports the well-functioning of a legal system in order to streamline innovations in these legal systems. The framework targets the exploitation of all relevant data generated by the ICT-based solutions. As illustrated for the Dutch criminal justice system, the framework may be used to integrate different ICT-based innovations and to gain insights into the well-functioning of the system. Furthermore, Legal Logistics can be regarded as a roadmap towards smart and open justice.