

Filters: Keyword is Resiliency
Song, Yang, Venkataramani, Arun, Gao, Lixin.  2016.  Identifying and Addressing Reachability and Policy Attacks in “Secure” BGP. IEEE/ACM Trans. Netw. 24:2969–2982.

BGP is known to have many security vulnerabilities due to the very nature of its underlying assumptions of trust among independently operated networks. Most prior efforts have focused on attacks that can be addressed using traditional cryptographic techniques to ensure authentication or integrity, e.g., BGPSec and related works. Although augmenting BGP with authentication and integrity mechanisms is critical, they are, by design, far from sufficient to prevent attacks based on manipulating the complex BGP protocol itself. In this paper, we identify two serious attacks on two of the most fundamental goals of BGP—to ensure reachability and to enable ASes to pick routes available to them according to their routing policies—even in the presence of BGPSec-like mechanisms. Our key contributions are to (1) formalize a series of critical security properties, (2) experimentally validate using commodity router implementations that BGP fails to achieve those properties, (3) quantify the extent of these vulnerabilities in the Internet's AS topology, and (4) propose simple modifications to provably ensure that those properties are satisfied. Our experiments show that, using our attacks, a single malicious AS can cause thousands of ASes to become disconnected from thousands of other ASes for arbitrarily long, while our suggested modifications almost completely eliminate such attacks.
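
For intuition about the reachability property at stake, here is a minimal sketch of our own (not the paper's artifact; it ignores routing policies and models announcement propagation as plain flooding), showing how a single AS that accepts routes but never re-announces them cuts whole regions of the topology off from a destination:

```python
from collections import deque

def reachable_ases(neighbors, dest, malicious=None):
    """BFS from the destination: an AS learns a route to `dest` only if the
    ASes along the propagation path re-announce it; `malicious` drops all."""
    seen = {dest}
    queue = deque([dest])
    while queue:
        asn = queue.popleft()
        if asn == malicious:  # the attacker accepts routes but never re-announces
            continue
        for nbr in neighbors.get(asn, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# Toy topology: a chain 1-2-3-4 with a spur AS 5 attached to AS 3.
topo = {1: [2], 2: [1, 3], 3: [2, 4, 5], 4: [3], 5: [3]}
print(sorted(reachable_ases(topo, dest=1)))               # [1, 2, 3, 4, 5]
print(sorted(reachable_ases(topo, dest=1, malicious=3)))  # [1, 2, 3]: ASes 4 and 5 lose AS 1
```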

Gupta, Arpit, Feamster, Nick, Vanbever, Laurent.  2016.  Authorizing Network Control at Software Defined Internet Exchange Points. Proceedings of the Symposium on SDN Research. :16:1–16:6.

Software Defined Internet Exchange Points (SDXes) increase the flexibility of interdomain traffic delivery on the Internet. Yet, an SDX inherently requires multiple participants to have access to a single, shared physical switch, which creates the need for an authorization mechanism to mediate this access. In this paper, we introduce a logic and mechanism called FLANC (A Formal Logic for Authorizing Network Control), which authorizes each participant to control forwarding actions on a shared switch and also allows participants to delegate forwarding actions to other participants at the switch (e.g., a trusted third party). FLANC extends the "says" and "speaks for" logics previously designed for operating system objects to handle expressions involving network traffic flows. We describe FLANC, explain how participants can use it to express authorization policies for realistic interdomain routing settings, and demonstrate that it is efficient enough to operate in operational settings.
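
To make the delegation idea concrete, here is a toy, hypothetical interpreter for "says"/"speaks for"-style checks; the relation names and flow encoding are ours, not FLANC's actual syntax:

```python
OWNS = {("AS_A", ("AS_A", "AS_B")), ("AS_B", ("AS_A", "AS_B"))}  # who controls a flow
SPEAKS_FOR = {("ThirdParty", "AS_A")}  # AS_A delegates to a trusted third party

def speaks_for(p, q):
    """Reflexive, transitive closure of the delegation relation."""
    if p == q:
        return True
    return any(a == p and speaks_for(b, q) for a, b in SPEAKS_FOR)

def may_control(principal, flow):
    """A principal may install rules on a flow if it speaks for an owner of it."""
    return any(f == flow and speaks_for(principal, owner) for owner, f in OWNS)

assert may_control("AS_A", ("AS_A", "AS_B"))
assert may_control("ThirdParty", ("AS_A", "AS_B"))  # authorized via delegation
assert not may_control("AS_C", ("AS_A", "AS_B"))
```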

Roos, Stefanie, Strufe, Thorsten.  2016.  Dealing with Dead Ends: Efficient Routing in Darknets. ACM Trans. Model. Perform. Eval. Comput. Syst. 1:4:1–4:30.

Darknets, membership-concealing peer-to-peer networks, suffer from high message delivery delays due to insufficient routing strategies. They form topologies restricted to a subgraph of the social network of their users by limiting connections to peers with a mutual trust relationship in real life. Whereas centralized, highly successful social networking services entail a privacy loss for their users, Darknets with higher performance would represent an optimal private and censorship-resistant communication substrate for social applications. Decentralized routing has so far been analyzed under the assumption that the network resembles a perfect lattice structure. Freenet, currently the only widely used Darknet, attempts to approximate this structure by embedding the social graph into a metric space. Considering the resulting distortion, the common greedy routing algorithm is adapted to account for local optima. Yet the impact of the adaptation has not been adequately analyzed. We thus suggest a model integrating inaccuracies in the embedding. In the context of this model, we show that the Freenet routing algorithm cannot achieve polylog performance. Consequently, we design NextBestOnce, a provably polylog algorithm based only on information about neighbors. Furthermore, we show that the routing length of NextBestOnce is further decreased by more than a constant factor if neighbor-of-neighbor information is included in the decision process.
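
A minimal sketch of the dead-end handling idea, under our own simplifying assumptions (a unit-ring embedding and an explicit path stack for backtracking; the real NextBestOnce carries this state in the message rather than in a toy list):

```python
def ring_dist(a, b):
    """Distance between positions embedded on the unit ring [0, 1)."""
    return min(abs(a - b), 1.0 - abs(a - b))

def route(graph, pos, src, dst, max_steps=50):
    """Greedy routing that marks visited nodes and backtracks at dead ends."""
    visited, path = {src}, [src]
    for _ in range(max_steps):
        node = path[-1]
        if node == dst:
            return path
        unvisited = [n for n in graph[node] if n not in visited]
        if unvisited:
            nxt = min(unvisited, key=lambda n: ring_dist(pos[n], pos[dst]))
            visited.add(nxt)
            path.append(nxt)
        elif len(path) > 1:
            path.pop()  # dead end: step back and take the next-best option
        else:
            return None
    return None

# B looks closest to D in the embedding but is a dead end (a "local optimum").
graph = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D"], "D": ["C"]}
pos = {"A": 0.1, "B": 0.45, "C": 0.3, "D": 0.5}
print(route(graph, pos, "A", "D"))  # ['A', 'C', 'D'] after backtracking out of B
```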

Priayoheswari, B., Kulothungan, K., Kannan, A..  2016.  Beta Reputation and Direct Trust Model for Secure Communication in Wireless Sensor Networks. Proceedings of the International Conference on Informatics and Analytics. :73:1–73:5.

A WSN is a collection of tiny nodes used to sense natural phenomena in the operational environment and send readings to a control station, where useful information is extracted. Most existing systems assume that the operational environment of the deployed sensor nodes is trustworthy and secured by means of cryptographic operations and an existing trust model, but in reality this is not the case: most existing systems fall short of providing reliable security to the sensor nodes. To overcome this problem, in this paper, the Beta Reputation and Direct Trust model (BRDT) combines direct trust and beta-reputation trust for secure communication in Wireless Sensor Networks. The model is used to perform secure routing in WSNs. Overall, the method provides more efficient trust in WSNs than existing methods.
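
For intuition, a small sketch of the standard beta-reputation expectation blended with direct trust; the 50/50 weighting and the function names are illustrative assumptions, not values from the paper:

```python
def beta_reputation(pos, neg):
    """Expected value of a Beta(pos + 1, neg + 1) distribution."""
    return (pos + 1) / (pos + neg + 2)

def brdt_trust(direct_pos, direct_neg, rec_pos, rec_neg, w_direct=0.5):
    """Weighted blend of a node's own observations and neighbours' reports."""
    direct = beta_reputation(direct_pos, direct_neg)
    reputation = beta_reputation(rec_pos, rec_neg)
    return w_direct * direct + (1 - w_direct) * reputation

# A node with 8/10 successful direct interactions and 15/20 positive reports.
print(round(brdt_trust(8, 2, 15, 5), 3))  # 0.739
```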

Mitropoulos, Dimitris, Stroggylos, Konstantinos, Spinellis, Diomidis, Keromytis, Angelos D..  2016.  How to Train Your Browser: Preventing XSS Attacks Using Contextual Script Fingerprints. ACM Trans. Priv. Secur. 19:2:1–2:31.

Cross-Site Scripting (XSS) is one of the most common web application vulnerabilities. It is therefore sometimes referred to as the “buffer overflow of the web.” Drawing a parallel from the current state of practice in preventing unauthorized native code execution (the typical goal in a code injection), we propose a script whitelisting approach to tame JavaScript-driven XSS attacks. Our scheme involves a transparent script interception layer placed in the browser’s JavaScript engine. This layer is designed to detect every script that reaches the browser, from every possible route, and compare it to a list of valid scripts for the site or page being accessed; scripts not on the list are prevented from executing. To avoid the false positives caused by minor syntactic changes (e.g., due to dynamic code generation), our layer uses the concept of contextual fingerprints when comparing scripts. Contextual fingerprints are identifiers that represent specific elements of a script and its execution context. Fingerprints can be easily enriched with new elements, if needed, to enhance the proposed method’s robustness. The list can be populated by the website’s administrators or a trusted third party. To verify our approach, we have developed a prototype and tested it successfully against an extensive array of attacks that were performed on more than 50 real-world vulnerable web applications. We measured the browsing performance overhead of the proposed solution on eight websites that make heavy use of JavaScript. Our mechanism imposed an average overhead of 11.1% on the execution time of the JavaScript engine. When measured as part of a full browsing session, and for all tested websites, the overhead introduced by our layer was less than 0.05%. When script elements are altered or new scripts are added on the server side, a new fingerprint generation phase is required. To examine the temporal aspect of contextual fingerprints, we performed a short-term and a long-term experiment based on the same websites. The former showed that in a short period of time (10 days), for seven of the eight websites, the majority of valid fingerprints stay the same (more than 92% on average). The latter, though, indicated that, in the long run, the number of fingerprints that do not change is reduced. Both experiments can be seen as one of the first attempts to study the feasibility of a whitelisting approach for the web.
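
A rough sketch of the whitelisting check; the literal-masking normalization and the context string below are our stand-ins for the paper's richer contextual fingerprint elements:

```python
import hashlib
import re

def fingerprint(script, context):
    """Hash of a crudely normalized script body plus its execution context."""
    masked = re.sub(r'"[^"]*"|\'[^\']*\'|\b\d+\b', '#', script)  # mask literals
    normalized = re.sub(r'\s+', ' ', masked).strip()
    return hashlib.sha256((context + '|' + normalized).encode()).hexdigest()

WHITELIST = {fingerprint('var id = "abc123"; track(id);', 'example.com/page#inline')}

def allow(script, context):
    return fingerprint(script, context) in WHITELIST

print(allow('var id = "zzz999"; track(id);', 'example.com/page#inline'))  # True
print(allow('steal(document.cookie);', 'example.com/page#inline'))        # False
```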

Perrey, Heiner, Landsmann, Martin, Ugus, Osman, Wählisch, Matthias, Schmidt, Thomas C..  2016.  TRAIL: Topology Authentication in RPL. Proceedings of the 2016 International Conference on Embedded Wireless Systems and Networks. :59–64.

The IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) was recently introduced as the new routing standard for the Internet of Things. Although RPL defines basic security modes, it remains vulnerable to topological attacks which facilitate blackholing, interception, and resource exhaustion. We are concerned with analyzing the corresponding threats and protecting future RPL deployments from such attacks. Our contributions are twofold. First, we analyze the state of the art, in particular the protective scheme VeRA and present two new rank order attacks as well as extensions to mitigate them. Second, we derive and evaluate TRAIL, a generic scheme for topology authentication in RPL. TRAIL solely relies on the basic assumptions of RPL that (1) the root node serves as a trust anchor and (2) each node interconnects to the root in a straight hierarchy. Using proper reachability tests, TRAIL scalably and reliably identifies any topological attacker without strong cryptographic efforts.
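
The following toy check illustrates only the rank-monotonicity invariant that TRAIL's reachability tests enforce; the actual protocol attests it with nonces round-tripped to the root rather than by inspecting a global table as done here:

```python
def attest_path(nodes, start, root):
    """Accept `start`'s parent chain iff ranks strictly decrease to the root."""
    node, rank = start, nodes[start]["rank"]
    while node != root:
        parent = nodes[node]["parent"]
        if parent is None or nodes[parent]["rank"] >= rank:
            return False  # rank fails to decrease: topological inconsistency
        node, rank = parent, nodes[parent]["rank"]
    return True

honest = {"root": {"parent": None, "rank": 0},
          "a":    {"parent": "root", "rank": 1},
          "b":    {"parent": "a", "rank": 2}}
forged = dict(honest, m={"parent": "a", "rank": 1},  # attacker advertises a rank
              b={"parent": "m", "rank": 2})          # as low as its own parent's
print(attest_path(honest, "b", "root"))  # True
print(attest_path(forged, "b", "root"))  # False: m's rank does not sit below a's
```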

Usman, Aminu Bello, Gutierrez, Jairo.  2016.  A Reliability-Based Trust Model for Efficient Collaborative Routing in Wireless Networks. Proceedings of the 11th International Conference on Queueing Theory and Network Applications. :15:1–15:7.

Different wireless Peer-to-Peer (P2P) routing protocols rely on cooperative protocols of interaction among peers, yet most of the surveyed protocols provide little detail on how peers can take other peers' reliability into consideration to improve routing efficiency in collaborative networks. Previous research has shown that in most trust and reputation evaluation schemes, the peers' rating behaviour can be improved to include the peers' attributes for understanding peers' reliability. This paper proposes a reliability-based trust model for dynamic trust evaluation between the peers in P2P networks for collaborative routing. Since the peers' routing attributes vary dynamically, our proposed model must also accommodate the dynamic changes of peers' attributes and behaviour. We introduce peers' buffers as a scaling factor for peers' trust evaluation in trust and reputation routing protocols. A simulation-based comparison between reliability- and non-reliability-based trust models shows the improved performance of our proposed model in terms of delivery ratio and average message latency.
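
As a loose illustration of buffers as a scaling factor, a sketch under stated assumptions: the success-ratio trust value and the multiplicative capacity scaling are our illustrative choices, not the paper's formula:

```python
def reliability_trust(delivered, forwarded_total, buffer_free, buffer_size):
    """Success-ratio trust scaled by free buffer capacity (illustrative only)."""
    capacity = buffer_free / buffer_size  # a congested peer is less reliable
    if forwarded_total == 0:
        return 0.5 * capacity             # neutral prior for unknown peers
    return (delivered / forwarded_total) * capacity

print(reliability_trust(18, 20, 40, 50))  # 0.72: well-behaved peer with spare capacity
```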

Hinge, Rashmi, Dubey, Jigyasu.  2016.  Opinion Based Trusted AODV Routing Protocol for MANET. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :126:1–126:5.

Mobile ad hoc networks (MANETs) are a popular network technology for rapid deployment in critical situations, but their ad hoc nature gives rise to a number of issues. To investigate security in mobile ad hoc networks, a number of research articles are surveyed, and it is observed that most attacks succeed because of poor routing methodology. To provide security in ad hoc networks, an opinion-based trust model is proposed that works on the basis of network properties. The model uses two techniques: trust calculation, which helps find the most trustworthy node, and opinion evaluation, which yields the most secure route to the destination. The experimental outcomes are compared with the traditional trust-based security approach, and the results show that network performance improves on all evaluated parameters compared to the traditional technique. The proposed model is thus better suited for secure routing in MANETs.

Shillair, Ruth.  2016.  Talking About Online Safety: A Qualitative Study Exploring the Cybersecurity Learning Process of Online Labor Market Workers. Proceedings of the 34th ACM International Conference on the Design of Communication. :21:1–21:9.

Technological changes bring great efficiencies and opportunities; however, they also bring new threats and dangers that users are often ill prepared to handle. Some individuals have training at work or school while others have family or friends to help them. However, there are few widely known or ubiquitous educational programs to inform and motivate users to develop safe cybersecurity practices. Additionally, little is known about learning strategies in this domain. Understanding how active Internet users have learned their security practices can give insight into more effective learning methods. I surveyed 800 online labor workers to discover their learning processes. They shared how they had to construct their own schema and negotiate meaning in a complex domain. Findings suggest a need to help users build a dynamic mental model of security. Participants recommend encouraging participatory and constructive learning, multi-model dissemination, and ubiquitous opportunities for learning security behaviors.

Sprengel, Matthew D., Pittman, Jason M..  2016.  An Enhanced Visualization Tool for Teaching Monoalphabetic Substitution Cipher Frequency Analysis. Proceedings of the 2016 ACM SIGMIS Conference on Computers and People Research. :29–30.

Information Systems curricula require on-going and frequent review [2] [11]. Furthermore, such curricula must be flexible because of the fast-paced, dynamic nature of the workplace. Such flexibility can be maintained through modernizing course content or, inclusively, exchanging hardware or software for newer versions. Alternatively, flexibility can arise from incorporating new information into curricula from other disciplines. One field where the pace of change is extremely high is cybersecurity [3]. Students are left with outdated skills when curricula lag behind the pace of change in industry. For example, cryptography is a required learning objective in the DHS/NSA Center of Academic Excellence (CAE) knowledge criteria [1]. However, the overarching curriculum associated with basic ciphers has gone unchanged for decades. Indeed, a general problem in cybersecurity education is that students lack fundamental knowledge in areas such as ciphers [5]. In response, researchers have developed a variety of interactive classroom visualization tools [5] [8] [9]. Such tools visualize the standard approach to frequency analysis of simple substitution ciphers that includes review of the most common single letters in ciphertext. While fundamental ciphers such as the monoalphabetic substitution cipher have not been updated (these are historical ciphers), collective understanding of how humans interact with language has changed. Updated understanding in both English language pedagogy [10] [12] and automated cryptanalysis of substitution ciphers [4] potentially renders the interactive classroom visualization tools incomplete or outdated. Classroom visualization tools are powerful teaching aids, particularly for abstract concepts. Existing research has established that such tools promote an active learning environment that translates to not only effective learning conditions but also higher student retention rates [7]. However, visualization tools require extensive planning and design when used to actively engage students with detailed, specific knowledge units such as ciphers [7] [8]. Accordingly, we propose a heatmap-based frequency analysis visualization solution that (a) incorporates digraph and trigraph language processing norms; (b) enhances the active learning pedagogy inherent in visualization tools. Preliminary results indicate that study participants take approximately 15% longer to learn the heatmap-based frequency analysis technique compared to traditional frequency analysis but demonstrate a 50% increase in efficacy when tasked with solving simple substitution ciphers. Further, a heatmap-based solution contributes positively to the field insofar as educators have an additional tool to use in the classroom. As well, the heatmap visualization tool may allow researchers to comparatively examine efficacy of visualization tools in the cryptanalysis of mono-alphabetic substitution ciphers.
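
The computation behind such a heatmap is easy to sketch; the code below builds the 26x26 digraph-count matrix a heatmap would color (word boundaries are ignored in this toy, and rendering is omitted):

```python
from collections import Counter
import string

def digraph_matrix(ciphertext):
    """26x26 matrix of adjacent-letter pair counts in the ciphertext."""
    letters = [c for c in ciphertext.upper() if c in string.ascii_uppercase]
    counts = Counter(zip(letters, letters[1:]))  # pairs cross word boundaries here
    idx = {c: i for i, c in enumerate(string.ascii_uppercase)}
    matrix = [[0] * 26 for _ in range(26)]
    for (a, b), n in counts.items():
        matrix[idx[a]][idx[b]] = n
    return matrix

m = digraph_matrix("XLMW MW E WIGVIX QIWWEKI")  # "THIS IS A SECRET MESSAGE", Caesar +4
best = max((n, a, b) for a, row in enumerate(m) for b, n in enumerate(row))
# The hottest cell hints at the ciphertext image of a common English digraph.
print(best[0], string.ascii_uppercase[best[1]] + string.ascii_uppercase[best[2]])
```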

Li, Yanyan, Xie, Mengjun.  2016.  Platoon: A Virtual Platform for Team-oriented Cybersecurity Training and Exercises. Proceedings of the 17th Annual Conference on Information Technology Education. :20–25.

Recent years have witnessed a flourish of hands-on cybersecurity labs and competitions. The information technology (IT) education community has recognized their significant role in boosting students' interest in security and enhancing their security knowledge and skills. Compared to the focus on individual-based education materials, much less attention has been paid to the development of tools and materials suitable for team-based security practices, which, however, prevail in real-world environments. One major bottleneck is the lack of suitable platforms for this type of practice in the IT education community. In this paper, we propose a low-cost, team-oriented cybersecurity practice platform called Platoon. The Platoon platform allows for quickly and automatically creating one or more virtual networks that mimic real-world corporate networks using a regular computer. The virtual environment created by Platoon is suitable for cybersecurity labs, competitions, and projects. The performance data and user feedback collected from our cyber-defense exercises indicate that Platoon is practical and useful for enhancing students' security learning outcomes.

Armitage, William D., Gauvin, William, Sheffield, Adam.  2016.  Design and Launch of an Intensive Cybersecurity Program for Military Veterans. Proceedings of the 17th Annual Conference on Information Technology Education. :40–45.

The demand for trained cybersecurity operators is growing more quickly than traditional programs in higher education can fill. At the same time, unemployment for returning military veterans has become a nationally discussed problem. We describe the design and launch of New Skills for a New Fight (NSNF), an intensive, one-year program to train military veterans for the cybersecurity field. This non-traditional program, which leverages experience that veterans gained in military service, includes recruitment and selection, a base of knowledge in the form of four university courses in a simultaneous cohort mode, a period of hands-on cybersecurity training, industry certifications and a practical internship in a Security Operations Center (SOC). Twenty veterans entered this pilot program in January of 2016 and will complete it in less than a year's time. Initially funded by a global financial services company, the program provides veterans with an expense-free preparation for an entry-level cybersecurity job.

Blair, Jean, Sobiesk, Edward, Ekstrom, Joseph J., Parrish, Allen.  2016.  What is Information Technology's Role in Cybersecurity? Proceedings of the 17th Annual Conference on Information Technology Education. :46–47.

This panel will discuss and debate what role(s) the information technology discipline should have in cybersecurity. Diverse viewpoints will be considered including current and potential ACM curricular recommendations, current and potential ABET and NSA accreditation criteria, the emerging cybersecurity discipline(s), consideration of government frameworks, the need for a multi-disciplinary approach to cybersecurity, and what aspects of cybersecurity should be under information technology's purview.

Burley, Diana, Bishop, Matt, Hawthorne, Elizabeth, Kaza, Siddharth, Buck, Scott, Futcher, Lynn.  2016.  Special Session: ACM Joint Task Force on Cyber Education. Proceedings of the 47th ACM Technical Symposium on Computing Science Education. :234–235.

In this special session, members of the ACM Joint Task Force on Cyber Education to Develop Undergraduate Curricular Guidance will provide an overview of the task force mission, objectives, and work plan. After the overview, task force members will engage session participants in the curricular development process.

Lakhdhar, Yosra, Rekhis, Slim, Boudriga, Noureddine.  2016.  An Approach To A Graph-Based Active Cyber Defense Model. Proceedings of the 14th International Conference on Advances in Mobile Computing and Multi Media. :261–268.

Securing cyber systems is a major concern as security attacks become more and more sophisticated. We develop in this paper a novel graph-based Active Cyber Defense (ACD) model to proactively respond to cyber attacks. The proposed model is based on the use of a semantically rich graph to describe cyber systems, the types of interconnection used between them, and security-related data useful for developing active defense strategies. The developed model takes into consideration the probabilistic nature of cyber attacks and their degree of complexity. In this context, analytics are provided to proactively test the impact of an increase in vulnerabilities/threats on the system, analyze the consequent behavior of cyber systems and security solutions, and decide on the security state of the whole cyber system. Our model integrates in the same framework decisions made by cyber defenders based on their expertise and knowledge, and decisions that are automatically generated using security analytic rules.
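
One way to read "analytics over a probabilistic attack graph" is as a most-likely-path query; the sketch below is our illustration (the edge probabilities and the Dijkstra-on-negative-logs formulation are assumptions, not the paper's model):

```python
import heapq
import math

def most_likely_path_prob(edges, src, dst):
    """Dijkstra on -log(p): minimizing the sum maximizes the product of probabilities."""
    graph = {}
    for u, v, p in edges:
        graph.setdefault(u, []).append((v, -math.log(p)))
    best, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return math.exp(-d)
        if d > best.get(u, math.inf):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < best.get(v, math.inf):
                best[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return 0.0

edges = [("internet", "web", 0.6), ("web", "db", 0.3),
         ("internet", "vpn", 0.25), ("vpn", "db", 0.9)]
print(most_likely_path_prob(edges, "internet", "db"))  # ~0.225 via the VPN host
```

Raising an edge's probability and re-running the query is exactly the kind of "what if this vulnerability worsens" test the abstract describes.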

Ji, Shouling, Li, Weiqing, Srivatsa, Mudhakar, He, Jing Selena, Beyah, Raheem.  2016.  General Graph Data De-Anonymization: From Mobility Traces to Social Networks. ACM Trans. Inf. Syst. Secur. 18:12:1–12:29.

When people use social applications and services, their privacy faces a potentially serious threat. In this article, we present a novel, robust, and effective de-anonymization attack on mobility trace data and social data. First, we design a Unified Similarity (US) measurement, which takes into account local and global structural characteristics of the data, information obtained from auxiliary data, and knowledge inherited from ongoing de-anonymization results. By analyzing the measurement on real datasets, we find that some data can potentially be de-anonymized accurately while other data can be de-anonymized only at a coarse granularity. Utilizing this property, we present a US-based De-Anonymization (DA) framework, which iteratively de-anonymizes data with an accuracy guarantee. Then, to de-anonymize large-scale data without knowledge of the overlap size between the anonymized data and the auxiliary data, we generalize DA to an Adaptive De-Anonymization (ADA) framework. By smartly working on two core matching subgraphs, ADA achieves high de-anonymization accuracy and reduces computational overhead. Finally, we examine the presented de-anonymization attack on three well-known mobility traces: St Andrews, Infocom06, and Smallblue, and three social datasets: ArnetMiner, Google+, and Facebook. The experimental results demonstrate that the presented de-anonymization framework is very effective and robust to noise. The source code and employed datasets are now publicly available at SecGraph [2015].
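
A toy rendering of the unified-similarity idea, with illustrative features and weights of our choosing (degree as the local feature, overlap with already-mapped neighbours as the inherited knowledge):

```python
def similarity(u, v, g_anon, g_aux, mapping, w_local=0.4):
    """Local feature (degree) blended with overlap of already-mapped neighbours."""
    du, dv = len(g_anon[u]), len(g_aux[v])
    deg_sim = 1 - abs(du - dv) / max(du, dv, 1)
    mapped = {mapping[n] for n in g_anon[u] if n in mapping}
    overlap = len(mapped & set(g_aux[v])) / max(len(mapped | set(g_aux[v])), 1)
    return w_local * deg_sim + (1 - w_local) * overlap

g_anon = {"u1": ["u2", "u3"], "u2": ["u1"], "u3": ["u1"]}
g_aux = {"alice": ["bob", "carol"], "bob": ["alice"], "carol": ["alice"]}
mapping = {"u2": "bob"}  # a pair identified in an earlier iteration
print(similarity("u1", "alice", g_anon, g_aux, mapping))  # 0.7
```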

Pei, Kexin, Gu, Zhongshu, Saltaformaggio, Brendan, Ma, Shiqing, Wang, Fei, Zhang, Zhiwei, Si, Luo, Zhang, Xiangyu, Xu, Dongyan.  2016.  HERCULE: Attack Story Reconstruction via Community Discovery on Correlated Log Graph. Proceedings of the 32Nd Annual Conference on Computer Security Applications. :583–595.

Advanced cyber attacks consist of multiple stages aimed at being stealthy and elusive. Such attack patterns leave their footprints spatio-temporally dispersed across many different logs in victim machines. However, existing log-mining intrusion analysis systems typically target only a single type of log to discover evidence of an attack and therefore fail to exploit fundamental inter-log connections. The output of such single-log analysis can hardly reveal the complete attack story for complex, multi-stage attacks. Additionally, some existing approaches require heavyweight system instrumentation, which makes them impractical to deploy in real production environments. To address these problems, we present HERCULE, an automated multi-stage log-based intrusion analysis system. Inspired by graph analytics research in social network analysis, we model multi-stage intrusion analysis as a community discovery problem. HERCULE builds multi-dimensional weighted graphs by correlating log entries across multiple lightweight logs that are readily available on commodity systems. From these, HERCULE discovers any "attack communities" embedded within the graphs. Our evaluation with 15 well known APT attack families demonstrates that HERCULE can reconstruct attack behaviors from a spectrum of cyber attacks that involve multiple stages with high accuracy and low false positive rates.
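
The correlation step can be sketched as follows, under simplifying assumptions (the field names and the shared-field edge weight are ours; HERCULE's graphs are richer, and the community-detection step is omitted):

```python
def correlate(entries, fields=("pid", "ip", "path")):
    """Weight an edge between two log entries by how many fields they share."""
    edges = {}
    for i, a in enumerate(entries):
        for j in range(i + 1, len(entries)):
            b = entries[j]
            w = sum(1 for f in fields if f in a and f in b and a[f] == b[f])
            if w:
                edges[(i, j)] = w
    return edges

logs = [{"pid": 42, "ip": "10.0.0.5"},               # e.g., a network log entry
        {"pid": 42, "path": "/tmp/payload"},         # e.g., an audit log entry
        {"ip": "10.0.0.5", "path": "/tmp/payload"}]  # e.g., a firewall log entry
print(correlate(logs))  # {(0, 1): 1, (0, 2): 1, (1, 2): 1}: one tight "community"
```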

Cook, Kyle, Shaw, Thomas, Hawrylak, Peter, Hale, John.  2016.  Scalable Attack Graph Generation. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :21:1–21:4.

Attack graphs are a powerful modeling technique with which to explore the attack surface of a system. However, they can be difficult to generate due to the exponential growth of the state space, often making exhaustive search impractical. This paper discusses an approach for generating large attack graphs with an emphasis on scalable generation over a distributed system. First, a serial algorithm is presented, highlighting bottlenecks and opportunities to exploit inherent concurrency in the generation process. Then a strategy to parallelize this process is presented. Finally, we discuss plans for future work to implement the parallel algorithm using a hybrid distributed/shared memory programming model on a heterogeneous compute node cluster.
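
A compact serial sketch of attack-graph generation as state-space search, in the spirit of the algorithm the paper parallelizes; the exploit rules and the fact-set state encoding are hypothetical:

```python
from collections import deque

EXPLOITS = [  # (name, preconditions, postcondition), all hypothetical
    ("phish",   {"email_access"},               "user_shell"),
    ("privesc", {"user_shell"},                 "root_shell"),
    ("dump",    {"root_shell", "db_reachable"}, "db_creds"),
]

def generate(initial):
    """BFS over attacker states; each applicable exploit yields an edge."""
    start = frozenset(initial)
    states, edges, queue = {start}, [], deque([start])
    while queue:
        s = queue.popleft()
        for name, pre, post in EXPLOITS:
            if pre <= s and post not in s:
                t = s | {post}
                edges.append((sorted(s), name, sorted(t)))
                if t not in states:  # deduplication keeps the state space in check
                    states.add(t)
                    queue.append(t)
    return states, edges

states, edges = generate({"email_access", "db_reachable"})
print(len(states), "states,", len(edges), "edges")  # 4 states, 3 edges
```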

Kubler, Sylvain, Robert, Jérémy, Hefnawy, Ahmed, Cherifi, Chantal, Bouras, Abdelaziz, Främling, Kary.  2016.  IoT-based Smart Parking System for Sporting Event Management. Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. :104–114.

By connecting devices, people, vehicles and infrastructures everywhere in a city, governments and their partners can improve community wellbeing and other economic and financial aspects (e.g., cost and energy savings). Nonetheless, smart cities are complex ecosystems that comprise many different stakeholders (network operators, managed service providers, logistic centers...) who must work together to provide the best services and unlock the commercial potential of the IoT. This is one of the major challenges that face today's smart city movement, and more generally the IoT as a whole. Indeed, while new smart connected objects hit the market every day, they mostly feed "vertical silos" (e.g., vertical apps, siloed apps...) that are closed to the rest of the IoT, thus hampering developers' ability to produce new added value across multiple platforms. Within this context, the contribution of this paper is twofold: (i) present the EU vision and ongoing activities to overcome the problem of vertical silos; (ii) introduce recent IoT standards used as part of a recent Horizon 2020 IoT project to address this problem. The implementation of those standards for enhanced sporting event management in a smart city/government context (FIFA World Cup 2022) is developed, presented, and evaluated as a proof-of-concept.

Chaidos, Pyrros, Cortier, Veronique, Fuchsbauer, Georg, Galindo, David.  2016.  BeleniosRF: A Non-interactive Receipt-Free Electronic Voting Scheme. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1614–1625.

We propose a new voting scheme, BeleniosRF, that offers both receipt-freeness and end-to-end verifiability. It is receipt-free in a strong sense, meaning that even dishonest voters cannot prove how they voted. We provide a game-based definition of receipt-freeness for voting protocols with non-interactive ballot casting, which we name strong receipt-freeness (sRF). To our knowledge, sRF is the first game-based definition of receipt-freeness in the literature, and it has the merit of being particularly concise and simple. Built upon the Helios protocol, BeleniosRF inherits its simplicity and does not require any anti-coercion strategy from the voters. We implement BeleniosRF and show its feasibility on a number of platforms, including desktop computers and smartphones.
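
The primitive that makes non-interactive receipt-freeness possible is randomizable encryption: the ballot box can re-randomize a ciphertext so that the randomness a voter remembers no longer matches the stored ballot. A math-only sketch with plain ElGamal follows (tiny illustrative parameters; BeleniosRF's actual construction signs randomizable ciphertexts and is considerably more involved):

```python
import random

P = (1 << 127) - 1  # a Mersenne prime; far too small for real use
G = 3

def keygen():
    x = random.randrange(2, P - 1)
    return x, pow(G, x, P)

def enc(h, m, r):
    return pow(G, r, P), (m * pow(h, r, P)) % P

def rerandomize(h, c):
    """Same plaintext, fresh randomness: (c1 * g^s, c2 * h^s)."""
    s = random.randrange(2, P - 1)
    return (c[0] * pow(G, s, P)) % P, (c[1] * pow(h, s, P)) % P

def dec(x, c):
    return (c[1] * pow(c[0], P - 1 - x, P)) % P  # c2 / c1^x via Fermat

x, h = keygen()
ballot = enc(h, m=7, r=12345)    # the voter knows r and could try to show it
stored = rerandomize(h, ballot)  # the ballot box refreshes the randomness
assert stored != ballot and dec(x, stored) == 7  # vote intact, receipt useless
```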

Sharkov, George.  2016.  From Cybersecurity to Collaborative Resiliency. Proceedings of the 2016 ACM Workshop on Automated Decision Making for Active Cyber Defense. :3–9.

This paper presents the holistic approach to cyber resilience as a means of preparing for the "unknown unknowns". Principles of augmented cyber risks management and resilience management model at national level are presented, with elaboration on multi-stakeholder engagement and partnership for the implementation of national cyber resilience collaborative framework. The complementarity of governance, law, and business/industry initiatives is outlined, with examples of the collaborative resilience model for the Bulgarian national strategy and its multi-national engagements.

Auxilia, M., Raja, K..  2016.  Knowledge Based Security Model for Banking in Cloud. Proceedings of the International Conference on Informatics and Analytics. :51:1–51:6.

Cloud computing is one of the most prominent technologies of recent years and opens up scope for many research ideas. Banks are likely to enter the cloud computing field because of the abundant advantages offered by the cloud, such as reduced IT costs, pay-per-use modeling, business agility, and green IT. The main challenges to be addressed while moving a bank to the cloud are security breaches, governance, and Service Level Agreements (SLAs). Banks must not leave room for security breaches at any cost. Access control and authorization are vital solutions to security risks. Thus we propose a knowledge-based security model addressing this issue. Separate ontologies for the subject, object, and action elements are created, and an authorization rule is framed by considering the interlinkage between those elements to ensure data security with restricted access. Moreover, banks now use Software as a Service (SaaS), which is managed by Cloud Service Providers (CSPs), and rely upon the security measures provided by the CSPs. If CSPs follow a traditional security model, data security becomes a big question. Our work enables the bank to impose security measures on its side along with the security provided by the CSPs. Banks can add and delete rules according to their needs and can retain control over the data in addition to the CSPs. We also present a performance analysis of our model and show that it provides secure access to bank data.
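
A minimal sketch of an authorization rule over subject/object/action elements; the roles, actions, and classifications below are hypothetical examples, not the authors' ontology:

```python
RULES = {  # permitted (subject role, action, object classification) triples
    ("teller",  "read",  "account"),
    ("manager", "read",  "account"),
    ("manager", "write", "account"),
}

def authorized(subject, action, obj):
    """Allow an action only if the subject/action/object triple is linked by a rule."""
    return (subject["role"], action, obj["class"]) in RULES

print(authorized({"role": "teller"}, "read",  {"class": "account"}))  # True
print(authorized({"role": "teller"}, "write", {"class": "account"}))  # False
```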

Chu, Pin-Yu, Tseng, Hsien-Lee.  2016.  A Theoretical Framework for Evaluating Government Open Data Platform. Proceedings of the International Conference on Electronic Governance and Open Society: Challenges in Eurasia. :135–142.

Regarding Information and Communication Technologies (ICTs) in the public sector, electronic governance was the first concept to emerge, and it has been recognized as an important issue in governments' outreach to citizens since the early 1990s. The most important recent development in e-governance is Open Government Data, which provides citizens with the opportunity to freely access government data, conduct value-added applications, provide creative public services, and participate in different kinds of democratic processes. Open Government Data is expected to enhance the quality and efficiency of government services, strengthen democratic participation, and create benefits for the public and enterprises. The success of Open Government Data hinges on its accessibility, the quality of the data, security policy, and platform functions in general. This article presents a robust assessment framework that not only provides a valuable understanding of the development of Open Government Data but also provides an effective feedback mechanism for mid-course corrections. We further apply the framework to evaluate the Open Government Data platform of the central government, on which open data of nine major government agencies are analyzed. Our research results indicate that the Financial Supervisory Commission performs better than other agencies, especially in terms of accessibility. The Financial Supervisory Commission mostly provides dataset formats of 3 stars or above, and the quality of its metadata is well established. However, most of the data released by government agencies are regulations, reports, operations, and other administrative data, which are not immediately applicable. Overall, government agencies should enhance the amount and quality of Open Government Data continuously, and strengthen the discussion and linkage functions of the platforms as well as the quality of the datasets. Aside from consolidating collaboration and interaction with open data communities, government agencies should improve the awareness and ability of personnel to manage and apply open data. As the level of acceptance of open data among personnel improves, the quantity and quality of Open Government Data will improve as well.

Bertot, John Carlo, Estevez, Elsa, Janowski, Tomasz.  2016.  Digital Public Service Innovation: Framework Proposal. Proceedings of the 9th International Conference on Theory and Practice of Electronic Governance. :113–122.

This paper proposes the Digital Public Service Innovation Framework, which extends the "standard" provision of digital public services according to the emerging, enhanced, transactional and connected stages underpinning the United Nations Global e-Government Survey, with seven example "innovations" in digital public service delivery – transparent, participatory, anticipatory, personalized, co-created, context-aware and context-smart. Unlike the "standard" provisions, innovations in digital public service delivery are open-ended – new forms may continuously emerge in response to new policy demands and technological progress – and non-linear – one innovation may or may not depend on others. The framework builds on the foundations of public sector innovation and the Digital Government Evolution model. In line with the latter, the paper equips each innovation with a sharp logical characterization, a body of research literature, and real-life cases from around the world, to simultaneously serve the illustration and validation goals. The paper also identifies some policy implications of the framework, covering a broad range of issues from infrastructure, capacity, eco-system and partnerships, to inclusion, value, channels, security, privacy and authentication.

den Hartog, Jerry, Zannone, Nicola.  2016.  A Policy Framework for Data Fusion and Derived Data Control. Proceedings of the 2016 ACM International Workshop on Attribute Based Access Control. :47–57.

Recent years have seen an exponential growth of the collection and processing of data from heterogeneous sources for a variety of purposes. Several methods and techniques have been proposed to transform and fuse data into "useful" information. However, the security aspects concerning the fusion of sensitive data are often overlooked. This paper investigates the problem of data fusion and derived data control. In particular, we identify the requirements for regulating the fusion process and eliciting restrictions on the access and usage of derived data. Based on these requirements, we propose an attribute-based policy framework to control the fusion of data from different information sources and under the control of different authorities. The framework comprises two types of policies: access control policies, which define the authorizations governing the resources used in the fusion process, and fusion policies, which define constraints on allowed fusion processes. We also discuss how such policies can be obtained for derived data.
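
A small sketch of the two policy types under simplifying assumptions: policies as reader sets, fusion policies as permitted source combinations, and intersection as the derivation rule, which is one possible combination operator rather than the paper's full attribute-based framework:

```python
from functools import reduce

ACCESS = {"gps_trace":     {"analyst", "operator"},
          "health_record": {"analyst", "physician"}}
FUSION_ALLOWED = {frozenset({"gps_trace", "health_record"})}

def fuse(sources):
    """Check the fusion policy, then derive the fused data's access policy."""
    if frozenset(sources) not in FUSION_ALLOWED:
        raise PermissionError("fusion of %s not permitted" % sorted(sources))
    return reduce(set.intersection, (ACCESS[s] for s in sources))

print(fuse(["gps_trace", "health_record"]))  # {'analyst'}: only the common readers
```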