Biblio

Found 3403 results

Filters: First Letter Of Last Name is A
2019-06-28
Dixit, Vaibhav Hemant, Doupé, Adam, Shoshitaishvili, Yan, Zhao, Ziming, Ahn, Gail-Joon.  2018.  AIM-SDN: Attacking Information Mismanagement in SDN-Datastores. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :664-676.

Network management is a critical process for an enterprise to configure and monitor network devices using cost-effective methods. It is imperative that it be robust and free from adversarial or accidental security flaws. With the advent of cloud computing and increasing demands for centralized network control, conventional management protocols like SNMP appear inadequate, and newer techniques like NMDA and NETCONF have been invented. However, unlike SNMP, which underwent improvements concentrating on security, the new data management and storage techniques have not been scrutinized for inherent security flaws. In this paper, we identify several vulnerabilities in widely used critical infrastructures that leverage the Network Management Datastore Architecture (NMDA) design. Software-Defined Networking (SDN), a proponent of NMDA, relies heavily on its datastores to program and manage the network. We base our research on the security challenges posed by the existing datastore design as implemented by SDN controllers. The vulnerabilities identified in this work have a direct impact on controllers like OpenDaylight and Open Network Operating System, and on their proprietary implementations (by Cisco, Ericsson, Red Hat, Brocade, Juniper, etc.). Using our threat detection methodology, we demonstrate how NMDA-based implementations are vulnerable to attacks that compromise the availability, integrity, and confidentiality of the network. We finally propose defense measures to address the security threats in the existing design and discuss the challenges faced while employing these countermeasures.

2019-03-15
Salman, Muhammad, Husna, Diyanatul, Apriliani, Stella Gabriella, Pinem, Josua Geovani.  2018.  Anomaly Based Detection Analysis for Intrusion Detection System Using Big Data Technique with Learning Vector Quantization (LVQ) and Principal Component Analysis (PCA). Proceedings of the 2018 International Conference on Artificial Intelligence and Virtual Reality. :20-23.

Data security has become a very serious part of any organizational information system. More and more threats across the Internet have evolved that are capable of deceiving firewalls as well as antivirus software. In addition, the number of attacks has grown too large and too difficult for firewalls or antivirus software to process. System security is usually improved by adding an Intrusion Detection System (IDS), which comes in two varieties: anomaly-based detection and signature-based detection. In this research, Big Data techniques are used to process the huge amount of data. Anomaly-based detection using the Learning Vector Quantization algorithm is proposed to detect the attacks. Learning Vector Quantization is a neural network technique that learns from the input itself and then gives the appropriate output according to the input. Modifications were made to improve test accuracy by varying the test parameters present in LVQ. Varying the learning rate, the number of epochs, and k-fold cross validation resulted in more efficient output. The output is obtained by calculating the information retrieval values from the confusion matrix table for each attack class. The Principal Component Analysis technique is used along with Learning Vector Quantization to improve system performance by reducing the data dimensionality. Using 18 principal components, the dataset was successfully reduced by 47.3%, with a best recognition rate of 96.52% and a time efficiency improvement of up to 43.16%.
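
The pipeline the abstract describes (PCA for dimensionality reduction feeding an LVQ classifier) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation; the dataset, the 18-component reduction, and the reported accuracies are not reproduced here.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project data onto its top principal components (dimensionality reduction)."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def lvq_train(X, y, lr=0.1, epochs=20, seed=0):
    """LVQ1 with one prototype per class: the winning prototype is pulled
    toward same-class samples and pushed away from other-class samples."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # Initialize each prototype at its class mean
    protos = np.array([X[y == c].mean(axis=0) for c in classes])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))  # winner
            step = lr * (X[i] - protos[j])
            protos[j] += step if classes[j] == y[i] else -step
    return classes, protos

def lvq_predict(X, classes, protos):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Toy example: two well-separated "traffic feature" clusters (normal vs attack)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 6)), rng.normal(3, 0.3, (50, 6))])
y = np.array([0] * 50 + [1] * 50)
Xr = pca_reduce(X, 2)                      # 6 raw features -> 2 components
classes, protos = lvq_train(Xr, y)
acc = (lvq_predict(Xr, classes, protos) == y).mean()
```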

2019-02-14
Anand, Priya, Ryoo, Jungwoo.  2018.  Architectural Solutions to Mitigate Security Vulnerabilities in Software Systems. Proceedings of the 13th International Conference on Availability, Reliability and Security. :5:1-5:5.

Security issues emerging from constantly evolving software applications have become a huge challenge for software security experts. In this paper, we propose a prototype to detect vulnerabilities by identifying their architectural sources, and we use security patterns to mitigate the identified vulnerabilities. We emphasize the need to consider architectural relations to introduce an effective security solution. In this research, we focused on the taint-style vulnerabilities that can induce injection-based attacks such as XSS and SQLi in web applications. With numerous tools available to detect taint-style vulnerabilities in web applications, we scanned for the presence of repeated vulnerable code patterns in the software. Most importantly, we attempted to identify the architectural source files or modules by developing a tool named ArT Analyzer. We conducted a case study on leading health-care software by applying the proposed architectural taint analysis and identified the vulnerable spots. We were able to identify the architectural roots of those vulnerable spots with our tool, ArT Analyzer. We verified the results by sharing them with the lead software architect of the project. By adopting an architectural solution, we avoided changes to 252 different lines of code by introducing just 2 lines of code changes at the architectural roots. This solution was eventually integrated into the latest release of the health-care software.

2019-01-31
Abou-Zahra, Shadi, Brewer, Judy, Cooper, Michael.  2018.  Artificial Intelligence (AI) for Web Accessibility: Is Conformance Evaluation a Way Forward? Proceedings of the Internet of Accessible Things. :20:1–20:4.

The term "artificial intelligence" is a buzzword today and is heavily used to market products, services, research, conferences, and more. It is scientifically disputed which types of products and services actually qualify as "artificial intelligence" versus simply advanced computer technologies mimicking aspects of natural intelligence. Yet it is undisputed that, despite the often inflationary use of the term, there are mainstream products and services today that for decades were thought to be science fiction only. They range from industrial automation to self-driving cars, robotics, and consumer electronics for smart homes, workspaces, education, and many more contexts. Several technological advances enable what is commonly referred to as "artificial intelligence", including connected computers and the Internet of Things (IoT), open and big data, and low-cost computing and storage. Yet regardless of how the term is defined, technological advancements in this area provide immense potential, especially for people with disabilities. In this paper we explore some of this potential in the context of web accessibility. We review some existing products and services and their support for web accessibility. We propose accessibility conformance evaluation as one potential way forward to accelerate the uptake of artificial intelligence and to improve web accessibility.

2019-03-15
Noor, U., Anwar, Z., Rashid, Z..  2018.  An Association Rule Mining-Based Framework for Profiling Regularities in Tactics Techniques and Procedures of Cyber Threat Actors. 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE). :1-6.

Tactics, Techniques, and Procedures (TTPs) in the cyber domain are important threat information that describes the behavior and attack patterns of an adversary. Timely identification of associations between TTPs can lead to effective strategies for diagnosing Cyber Threat Actors (CTAs) and their attack vectors. This study profiles the prevalence and regularities in the TTPs of CTAs. We developed a machine learning-based framework that takes Cyber Threat Intelligence (CTI) documents as input, selects the most prevalent TTPs with high information gain as features, and, based on them, mines interesting regularities between TTPs using Association Rule Mining (ARM). We evaluated the proposed framework with publicly available TTP-based CTI documents. The results show that 28 TTPs are more prevalent than the others. Our system identified 155 interesting association rules among the TTPs of CTAs. A summary of these rules is given to effectively investigate threats in the network.
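
The core of the framework, mining association rules between TTPs under support and confidence thresholds, can be sketched in a few lines. The TTP names, toy corpus, and thresholds below are hypothetical; the paper's 28 prevalent TTPs and 155 rules come from real CTI documents.

```python
from itertools import combinations

def mine_rules(docs, min_support=0.4, min_confidence=0.8):
    """Mine one-to-one association rules (TTP_a => TTP_b) from CTI documents,
    each document represented as the set of TTPs it mentions."""
    n = len(docs)
    support = lambda items: sum(items <= d for d in docs) / n  # subset count
    ttps = {t for d in docs for t in d}
    rules = []
    for a, b in combinations(sorted(ttps), 2):
        for ant, con in ((a, b), (b, a)):       # try both rule directions
            s = support({ant, con})
            if s >= min_support and s / support({ant}) >= min_confidence:
                rules.append((ant, con, s, s / support({ant})))
    return rules

# Toy corpus: "spearphishing" almost always co-occurs with "credential-dumping"
docs = [{"spearphishing", "credential-dumping", "lateral-movement"},
        {"spearphishing", "credential-dumping"},
        {"drive-by", "lateral-movement"},
        {"spearphishing", "credential-dumping", "exfiltration"}]
rules = mine_rules(docs)
```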

2019-04-01
Abe, Ryosuke, Nakamura, Keita, Teramoto, Kentaro, Takahashi, Misato.  2018.  Attack Incentive and Security of Exchanging Tokens on Proof-of-Work Blockchain. Proceedings of the Asian Internet Engineering Conference. :32–37.

In a consensus algorithm based on Proof-of-Work, miners are motivated by crypto rewards, and security is guaranteed because the cost of a 50% attack is higher than the potential rewards. However, because of the sudden price jump of cryptocurrencies and the cheap prices of mining machines like ASICs, the cost and profit reached equilibrium for Bitcoin in 2017. In this situation, attackers are motivated by the balance between hash power and profit. In this paper, we show that there is a relationship between the mining power on the network and the value of tokens that can be exchanged securely on a blockchain. Users who exchange tokens on a PoW blockchain should monitor mining power and exchange tokens worth less than the attack cost, so that the attacker's profit and cost are never in equilibrium.
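
The paper's guideline, exchange only token amounts cheaper than the attack cost, reduces to simple arithmetic. The hash rates and rental prices below are hypothetical placeholders, not figures from the paper.

```python
def attack_cost(network_hashrate_ths, rent_per_ths_hour, hours):
    """To out-mine the honest network, an attacker must match its hash power
    for the whole confirmation window (a 50% attack on rented hardware)."""
    return network_hashrate_ths * rent_per_ths_hour * hours

def is_safe_exchange(token_value, network_hashrate_ths, rent_per_ths_hour, hours):
    # Exchange only amounts cheaper than the attack cost, so the attacker's
    # profit and cost never reach equilibrium.
    return token_value < attack_cost(network_hashrate_ths, rent_per_ths_hour, hours)

# Hypothetical numbers: 100,000 TH/s network, $2 per TH/s-hour, ~1h of confirmations
cost = attack_cost(100_000, 2.0, 1.0)
```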

2019-08-05
Sertbaş, Nurefşan, Aytaç, Samet, Ermiş, Orhan, Alagöz, Fatih, Gür, Gürkan.  2018.  Attribute Based Content Security and Caching in Information Centric IoT. Proceedings of the 13th International Conference on Availability, Reliability and Security. :34:1–34:8.

Information-centric networking (ICN) is a Future Internet paradigm which uses named information (data objects) instead of host-based end-to-end communications. In-network caching is a key pillar of ICN: data objects are cached in ICN routers and retrieved from these network elements upon availability when they are requested. It is a particularly promising networking approach due to the expected benefits of data dissemination efficiency, reduced delay, and improved robustness for challenging communication scenarios in the IoT domain. From the security perspective, ICN concentrates on securing data objects instead of ensuring the security of the end-to-end communication link. However, it inherently involves the security challenge of access control for content. Thus, an efficient access control mechanism is crucial for secure information dissemination. In this work, we investigate Attribute-Based Encryption (ABE) as an access control apparatus for information-centric IoT. Moreover, we elaborate on how such a system performs for different parameter settings such as different numbers of attributes and file sizes.

2019-11-04
Abani, Noor, Braun, Torsten, Gerla, Mario.  2018.  Betweenness Centrality and Cache Privacy in Information-Centric Networks. Proceedings of the 5th ACM Conference on Information-Centric Networking. :106-116.

In-network caching is a feature shared by all proposed Information Centric Networking (ICN) architectures as it is critical to achieving a more efficient retrieval of content. However, the default "cache everything everywhere" universal caching scheme has caused the emergence of several privacy threats. Timing attacks are one such privacy breach where attackers can probe caches and use timing analysis of data retrievals to identify if content was retrieved from the data source or from the cache, the latter case inferring that this content was requested recently. We have previously proposed a betweenness centrality based caching strategy to mitigate such attacks by increasing user anonymity. We demonstrated its efficacy in a transit-stub topology. In this paper, we further investigate the effect of betweenness centrality based caching on cache privacy and user anonymity in more general synthetic and real-world Internet topologies. It was also shown that an attacker with access to multiple compromised routers can locate and track a mobile user by carrying out multiple timing analysis attacks from various parts of the network. We extend our privacy evaluation to a scenario with mobile users and show that a betweenness centrality based caching policy provides a mobile user with path privacy by increasing an attacker's difficulty in locating a moving user or identifying his/her route.
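
A betweenness centrality based caching policy of the kind evaluated here can be sketched as follows: compute each router's betweenness and let only the most central routers cache content. This is an illustrative sketch on a toy tree topology (where shortest paths are unique), not the authors' implementation.

```python
from collections import deque
from itertools import combinations

def shortest_path(adj, src, dst):
    """BFS shortest path in an unweighted graph given as an adjacency dict."""
    prev, q, seen = {src: None}, deque([src]), {src}
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in seen:
                seen.add(v); prev[v] = u; q.append(v)
    path, node = [], dst
    while node is not None:
        path.append(node); node = prev[node]
    return path[::-1]

def betweenness(adj):
    """Count, for every node, how many pairwise shortest paths pass through it
    (unnormalized; assumes one shortest path per pair, e.g. a tree)."""
    score = {n: 0 for n in adj}
    for s, t in combinations(adj, 2):
        for n in shortest_path(adj, s, t)[1:-1]:   # interior nodes only
            score[n] += 1
    return score

def caching_nodes(adj, fraction=0.25):
    """Let only the top `fraction` most central routers cache content,
    instead of 'cache everything everywhere'."""
    bc = betweenness(adj)
    k = max(1, int(len(adj) * fraction))
    return set(sorted(bc, key=bc.get, reverse=True)[:k])

# Toy transit-stub-like tree: core router 0 with two stub branches
adj = {0: [1, 2, 3], 1: [0, 4], 2: [0, 5], 3: [0], 4: [1], 5: [2]}
caches = caching_nodes(adj)
# On a delivery from 5 to 4, content is cached only at on-path cache routers
on_path = [n for n in shortest_path(adj, 4, 5) if n in caches]
```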

2019-02-14
Dauda, Ahmed, Mclean, Scott, Almehmadi, Abdulaziz, El-Khatib, Khalil.  2018.  Big Data Analytics Architecture for Security Intelligence. Proceedings of the 11th International Conference on Security of Information and Networks. :19:1-19:4.

The need for security continues to grow in distributed computing. Today's security solutions require greater scalability and convenience in cloud-computing architectures, in addition to the ability to store and process larger volumes of data to address very sophisticated attacks. This paper explores some of the existing architectures for big data intelligence analytics and proposes an architecture that promises to provide greater security for data-intensive environments. The architecture is designed to leverage the wealth of multi-source information for security intelligence.

2019-02-08
Kroes, Taddeus, Altinay, Anil, Nash, Joseph, Na, Yeoul, Volckaert, Stijn, Bos, Herbert, Franz, Michael, Giuffrida, Cristiano.  2018.  BinRec: Attack Surface Reduction Through Dynamic Binary Recovery. Proceedings of the 2018 Workshop on Forming an Ecosystem Around Software Transformation. :8-13.

Compile-time specialization and feature pruning through static binary rewriting have been proposed repeatedly as techniques for reducing the attack surface of large programs, and for minimizing the trusted computing base. We propose a new approach to attack surface reduction: dynamic binary lifting and recompilation. We present BinRec, a binary recompilation framework that lifts binaries to a compiler-level intermediate representation (IR) to allow complex transformations on the captured code. After transformation, BinRec lowers the IR back to a "recovered" binary, which is semantically equivalent to the input binary but with unnecessary features removed. Unlike existing approaches, which are mostly based on static analysis and rewriting, our framework analyzes and lifts binaries dynamically. The crucial advantage is that we not only observe the full program, including all of its dependencies, but can also determine which program features the end-user actually uses. We evaluate the correctness and performance of BinRec, and show that our approach enables aggressive pruning of unwanted features in COTS binaries.

2019-02-22
Mutiarachim, A., Pranata, S. Felix, Ansor, B., Shidik, G. Faiar, Fanani, A. Zainul, Soeleman, A., Pramunendar, R. Anggi.  2018.  Bit Localization in Least Significant Bit Using Fuzzy C-Means. 2018 International Seminar on Application for Technology of Information and Communication. :290-294.

Least Significant Bit (LSB) embedding, one of the existing steganography methods, is popular because it is easy to use, but it has a weakness: the hidden message is too easy to decode, because in LSB the message is embedded evenly across all pixels of an image. This paper introduces a steganography method that combines LSB with a clustering method, Fuzzy C-Means (FCM). The method is abbreviated LSB_FCM, and its stego results are compared with the plain LSB method. Each image is divided into two clusters, the cluster with the largest capacity is chosen, and the cluster coordinate key is saved as the location of the embedded message. The key serves as a reference when decoding the message, and each image has its own cluster capacity key. LSB_FCM has the disadvantage of limited space for the embedded message, but it also has an advantage over LSB: the message is more difficult to decrypt, because in LSB_FCM the message is embedded randomly in the best cluster pixels of an image, so to decrypt it one must have the cluster coordinate key of the image. Evaluation results show that the MSE and PSNR values of LSB_FCM are similar to those of pure LSB, which means that LSB_FCM produces images as imperceptible as pure LSB while providing better security through its choice of embedding locations.
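
The embedding step described above, hiding message bits only at pixel coordinates selected by clustering, with those coordinates acting as the key, can be sketched as follows. The FCM clustering itself is omitted; a fixed coordinate list stands in for the cluster coordinate key.

```python
import numpy as np

def lsb_embed(pixels, message_bits, coords):
    """Embed message bits in the least significant bit of selected pixels.
    `coords` plays the role of the FCM cluster-coordinate key: only someone
    holding these coordinates knows where the message bits live."""
    stego = pixels.copy()
    for (r, c), bit in zip(coords, message_bits):
        stego[r, c] = (stego[r, c] & 0xFE) | bit   # clear LSB, set message bit
    return stego

def lsb_extract(stego, coords, n_bits):
    """Recover the message by reading LSBs at the keyed coordinates."""
    return [int(stego[r, c] & 1) for (r, c) in coords[:n_bits]]

# Toy 4x4 grayscale "image"; a fixed list stands in for the FCM cluster key
img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 10
key_coords = [(0, 0), (1, 2), (3, 3), (2, 1)]
bits = [1, 0, 1, 1]
stego = lsb_embed(img, bits, key_coords)
recovered = lsb_extract(stego, key_coords, len(bits))
```

Because only the LSBs at the keyed coordinates change, no pixel moves by more than one intensity level, which is why MSE/PSNR stay close to plain LSB.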

2019-02-08
Ispoglou, Kyriakos K., AlBassam, Bader, Jaeger, Trent, Payer, Mathias.  2018.  Block Oriented Programming: Automating Data-Only Attacks. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1868-1882.

With the widespread deployment of Control-Flow Integrity (CFI), control-flow hijacking attacks, and consequently code reuse attacks, are significantly more difficult. CFI limits control flow to well-known locations, severely restricting arbitrary code execution. Assessing the remaining attack surface of an application under advanced control-flow hijack defenses such as CFI and shadow stacks remains an open problem. We introduce BOPC, a mechanism to automatically assess whether an attacker can execute arbitrary code on a binary hardened with CFI/shadow stack defenses. BOPC computes exploits for a target program from payload specifications written in a Turing-complete, high-level language called SPL that abstracts away architecture and program-specific details. SPL payloads are compiled into a program trace that executes the desired behavior on top of the target binary. The input for BOPC is an SPL payload, a starting point (e.g., from a fuzzer crash) and an arbitrary memory write primitive that allows application state corruption. To map SPL payloads to a program trace, BOPC introduces Block Oriented Programming (BOP), a new code reuse technique that utilizes entire basic blocks as gadgets along valid execution paths in the program, i.e., without violating CFI or shadow stack policies. We find that the problem of mapping payloads to program traces is NP-hard, so BOPC first reduces the search space by pruning infeasible paths and then uses heuristics to guide the search to probable paths. BOPC encodes the BOP payload as a set of memory writes. We execute 13 SPL payloads applied to 10 popular applications. BOPC successfully finds payloads and complex execution traces – which would likely not have been found through manual analysis – while following the target's Control-Flow Graph under an ideal CFI policy in 81% of the cases.

2019-11-26
Acharjamayum, Irani, Patgiri, Ripon, Devi, Dhruwajita.  2018.  Blockchain: A Tale of Peer to Peer Security. 2018 IEEE Symposium Series on Computational Intelligence (SSCI). :609-617.

The underlying core technology of the Bitcoin cryptocurrency has become a blessing in this era, as everything gradually shifts to digitization. Bitcoin creates virtual money using Blockchain, and it has become popular all over the world. A Blockchain is a shared public ledger that includes all confirmed transactions, and it is almost impossible to crack the information hidden in its blocks. However, there are certain security and technical challenges, like scalability, privacy leakage, and selfish mining, that hamper the wide application of Blockchain. In this paper, we briefly discuss this emerging technology and provide in-depth insight into Blockchain.

2019-02-25
Al-Waisi, Zainab, Agyeman, Michael Opoku.  2018.  On the Challenges and Opportunities of Smart Meters in Smart Homes and Smart Grids. Proceedings of the 2Nd International Symposium on Computer Science and Intelligent Control. :16:1-16:6.

Nowadays, electricity companies have started applying smart grids in their systems rather than the conventional electrical grid (manual grid). A smart grid provides efficient and effective energy management and control, reduces the cost of production, saves energy, and is more reliable compared to the conventional grid. As advanced energy meters, smart meters can measure power consumption as well as monitor and control electrical devices. Smart meters have been adopted in many countries since the 2000s as they provide economic, social, and environmental benefits for multiple stakeholders. The design of a smart meter can be customized depending on the needs of the customer and the utility company. There are different sensors and devices supported by dedicated communication infrastructure which can be utilized to implement smart meters. This paper presents a study of the challenges associated with smart meters, smart homes, and smart grids in an effort to highlight opportunities for emerging research and industrial solutions.

2019-01-21
Dixit, Vaibhav Hemant, Kyung, Sukwha, Zhao, Ziming, Doupé, Adam, Shoshitaishvili, Yan, Ahn, Gail-Joon.  2018.  Challenges and Preparedness of SDN-based Firewalls. Proceedings of the 2018 ACM International Workshop on Security in Software Defined Networks & Network Function Virtualization. :33–38.

Software-Defined Network (SDN) is a novel architecture created to address the issues of traditional and vertically integrated networks. To increase cost-effectiveness and enable logical control, SDN provides high programmability and centralized view of the network through separation of network traffic delivery (the "data plane") from network configuration (the "control plane"). SDN controllers and related protocols are rapidly evolving to address the demands for scaling in complex enterprise networks. Because of the evolution of modern SDN technologies, production networks employing SDN are prone to several security vulnerabilities. The rate at which SDN frameworks are evolving continues to overtake attempts to address their security issues. According to our study, existing defense mechanisms, particularly SDN-based firewalls, face new and SDN-specific challenges in successfully enforcing security policies in the underlying network. In this paper, we identify problems associated with SDN-based firewalls, such as ambiguous flow path calculations and poor scalability in large networks. We survey existing SDN-based firewall designs and their shortcomings in protecting a dynamically scaling network like a data center. We extend our study by evaluating one such SDN-specific security solution called FlowGuard, and identifying new attack vectors and vulnerabilities. We also present corresponding threat detection techniques and respective mitigation strategies.

2019-03-22
Ali, Syed Ahmed, Memon, Shahzad, Sahito, Farhan.  2018.  Challenges and Solutions in Cloud Forensics. Proceedings of the 2018 2Nd International Conference on Cloud and Big Data Computing. :6-10.

Cloud computing is a cutting-edge platform in this information age, to which organizations are shifting their business due to its elasticity, ubiquity, and cost-effectiveness. Unfortunately, cyber criminals have used these characteristics for criminal activities, victimizing multiple users at the same time with a single exploit, which was impossible before. Cloud forensics is a special branch of digital forensics which aims to find evidence of such exploitation in order to present that evidence in a court of law and hold the culprit accountable. Collecting evidence in the cloud is not as simple as in traditional digital forensics because of the cloud's complex distributed architecture, which is scattered globally. In this paper, various issues and challenges in the field of cloud forensics research and their proposed solutions are critically reviewed, summarized, and presented.

2019-09-13
P. Damacharla, A. Y. Javaid, J. J. Gallimore, V. K. Devabhaktuni.  2018.  Common Metrics to Benchmark Human-Machine Teams (HMT): A Review. IEEE Access. 6:38637-38655.

A significant amount of work is invested in human-machine teaming (HMT) across multiple fields. Accurately and effectively measuring system performance of an HMT is crucial for moving the design of these systems forward. Metrics are the enabling tools to devise a benchmark in any system and serve as an evaluation platform for assessing the performance, along with the verification and validation, of a system. Currently, there is no agreed-upon set of benchmark metrics for developing HMT systems. Therefore, identification and classification of common metrics are imperative to create a benchmark in the HMT field. The key focus of this review is to conduct a detailed survey aimed at identification of metrics employed in different segments of HMT and to determine the common metrics that can be used in the future to benchmark HMTs. We have organized this review as follows: identification of metrics used in HMTs until now, and classification based on functionality and measuring techniques. Additionally, we have also attempted to analyze all the identified metrics in detail while classifying them as theoretical, applied, real-time, non-real-time, measurable, and observable metrics. We conclude this review with a detailed analysis of the identified common metrics along with their usage to benchmark HMTs.

2019-12-10
Ponuma, R, Amutha, R, Haritha, B.  2018.  Compressive Sensing and Hyper-Chaos Based Image Compression-Encryption. 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB). :1-5.

A 2D-Compressive Sensing and hyper-chaos based image compression-encryption algorithm is proposed. The 2D image is compressively sampled and encrypted using two measurement matrices. A chaos based measurement matrix construction is employed. The construction of the measurement matrix is controlled by the initial and control parameters of the chaotic system, which are used as the secret key for encryption. The linear measurements of the sparse coefficients of the image are then subjected to a hyper-chaos based diffusion which results in the cipher image. Numerical simulation and security analysis are performed to verify the validity and reliability of the proposed algorithm.
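
The key-controlled measurement matrix construction described in the abstract can be illustrated with a logistic map, a common choice for chaos-based constructions; the paper's exact chaotic system and the hyper-chaos diffusion stage are not reproduced here.

```python
import numpy as np

def chaotic_measurement_matrix(m, n, x0=0.357, r=3.99, skip=1000):
    """Build an m x n compressive-sensing measurement matrix from a logistic
    map x_{k+1} = r*x_k*(1-x_k). The initial value x0 and parameter r act as
    the secret key: a receiver with the same (x0, r) regenerates the matrix."""
    x = x0
    for _ in range(skip):              # discard the transient iterations
        x = r * x * (1 - x)
    vals = np.empty(m * n)
    for i in range(m * n):
        x = r * x * (1 - x)
        vals[i] = x
    # Map (0,1) to (-1,1) and scale (a common CS normalization convention)
    return (2 * vals - 1).reshape(m, n) / np.sqrt(m)

# Compressively sample (and implicitly encrypt) a sparse length-64 signal
phi = chaotic_measurement_matrix(32, 64)
signal = np.zeros(64); signal[[5, 20, 41]] = [1.0, -0.5, 2.0]
y = phi @ signal                        # 32 linear measurements

# A wrong key produces a completely different matrix (sensitive dependence)
phi_wrong = chaotic_measurement_matrix(32, 64, x0=0.358)
```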

2019-10-30
Belkin, Maxim, Haas, Roland, Arnold, Galen Wesley, Leong, Hon Wai, Huerta, Eliu A., Lesny, David, Neubauer, Mark.  2018.  Container Solutions for HPC Systems: A Case Study of Using Shifter on Blue Waters. Proceedings of the Practice and Experience on Advanced Research Computing. :43:1-43:8.

Software container solutions have revolutionized application development approaches by enabling lightweight platform abstractions within so-called "containers." Several solutions are being actively developed in attempts to bring the benefits of containers to high-performance computing systems, with their stringent security demands on the one hand and fundamental resource-sharing requirements on the other. In this paper, we discuss the benefits and shortcomings of such solutions when deployed on real HPC systems and applied to production scientific applications. We highlight use cases that are either enabled by or significantly benefit from such solutions. We discuss the efforts of HPC system administrators and support staff to support users of these types of workloads on HPC systems not initially designed with them in mind, focusing on NCSA's Blue Waters system.

2019-12-09
Alemán, Concepción Sánchez, Pissinou, Niki, Alemany, Sheila, Boroojeni, Kianoosh, Miller, Jerry, Ding, Ziqian.  2018.  Context-Aware Data Cleaning for Mobile Wireless Sensor Networks: A Diversified Trust Approach. 2018 International Conference on Computing, Networking and Communications (ICNC). :226–230.

In mobile wireless sensor networks (MWSN), data imprecision is a common problem. Decision making in real time applications may be greatly affected by a minor error. Even though there are many existing techniques that take advantage of the spatio-temporal characteristics exhibited in mobile environments, few measure the trustworthiness of sensor data accuracy. We propose a unique online context-aware data cleaning method that measures trustworthiness by employing an initial candidate reduction through the analysis of trust parameters used in financial markets theory. Sensors with similar trajectory behaviors are assigned trust scores estimated through the calculation of “betas” for finding the most accurate data to trust. Instead of devoting all the trust into a single candidate sensor's data to perform the cleaning, a Diversified Trust Portfolio (DTP) is generated based on the selected set of spatially autocorrelated candidate sensors. Our results show that samples cleaned by the proposed method exhibit lower percent error when compared to two well-known and effective data cleaning algorithms in tested outdoor and indoor scenarios.
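
The "beta" computation borrowed from financial markets theory can be sketched as follows. The candidate selection and averaging below are simplified illustrations, not the paper's exact Diversified Trust Portfolio construction.

```python
import numpy as np

def beta(candidate, reference):
    """'Beta' from portfolio theory applied to sensor trajectories: how
    closely a candidate sensor's readings track a reference series
    (population covariance over variance)."""
    return np.cov(candidate, reference, ddof=0)[0, 1] / np.var(reference)

def clean_estimate(candidates, reference, k=2):
    """Diversified-trust sketch: instead of trusting a single neighbor,
    average the latest readings of the k candidates whose betas are closest
    to 1 (i.e., whose behavior is most similar to the reference sensor)."""
    ranked = sorted(candidates, key=lambda s: abs(beta(s, reference) - 1.0))
    return float(np.mean([s[-1] for s in ranked[:k]]))

# Reference sensor trajectory and three neighboring sensors (one faulty)
reference = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
good1 = reference + 0.1                        # tracks reference: beta = 1.0
good2 = reference * 1.05                       # tracks reference: beta = 1.05
faulty = np.array([3.0, 3.0, 3.0, 3.0, 9.0])   # stuck, then spiking
estimate = clean_estimate([faulty, good1, good2], reference)
```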

2018-11-14
Alagar, V., Alsaig, A., Ormandjiva, O., Wan, K..  2018.  Context-Based Security and Privacy for Healthcare IoT. 2018 IEEE International Conference on Smart Internet of Things (SmartIoT). :122–128.

Healthcare Internet of Things (HIoT) is transforming healthcare industry by providing large scale connectivity for medical devices, patients, physicians, clinical and nursing staff who use them and facilitate real-time monitoring based on the information gathered from the connected things. Heterogeneity and vastness of this network provide both opportunity and challenges for information collection and sharing. Patient-centric information such as health status and medical devices used by them must be protected to respect their safety and privacy, while healthcare knowledge should be shared in confidence by experts for healthcare innovation and timely treatment of patients. In this paper an overview of HIoT is given, emphasizing its characteristics to those of Big Data, and a security and privacy architecture is proposed for it. Context-sensitive role-based access control scheme is discussed to ensure that HIoT is reliable, provides data privacy, and achieves regulatory compliance.
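
A context-sensitive role-based access check of the kind discussed here can be sketched as follows. The policy rules, roles, and context attributes are hypothetical examples, not the paper's scheme.

```python
def authorized(role, action, resource, context, policy):
    """Context-sensitive RBAC sketch: a role's permission applies only when
    every context condition attached to a matching rule holds for the request."""
    for rule in policy:
        if (rule["role"] == role and rule["action"] == action
                and rule["resource"] == resource
                and all(context.get(k) == v for k, v in rule["context"].items())):
            return True
    return False

# Hypothetical HIoT policy: nurses read vitals only on shift in the ward;
# physicians may read vitals anywhere, but only during an emergency.
policy = [
    {"role": "nurse", "action": "read", "resource": "vitals",
     "context": {"location": "ward", "on_shift": True}},
    {"role": "physician", "action": "read", "resource": "vitals",
     "context": {"emergency": True}},
]
```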

2018-10-26
Halabi, T., Bellaiche, M., Abusitta, A..  2018.  A Cooperative Game for Online Cloud Federation Formation Based on Security Risk Assessment. 2018 5th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2018 4th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :83–88.

Cloud federations allow Cloud Service Providers (CSPs) to deliver more efficient service performance by interconnecting their Cloud environments and sharing their resources. However, the security of the federated Cloud service could be compromised if the resources are shared with relatively insecure and unreliable CSPs. In this paper, we propose a Cloud federation formation model that considers the security risk levels of CSPs. We start by quantifying the security risk of CSPs according to well defined evaluation criteria related to security risk avoidance and mitigation, then we model the Cloud federation formation process as a hedonic coalitional game with a preference relation that is based on the security risk levels and reputations of CSPs. We propose a federation formation algorithm that enables CSPs to cooperate while considering the security risk introduced to their infrastructures, and refrain from cooperating with undesirable CSPs. According to the stability-based solution concepts that we use to evaluate the game, the model shows that CSPs will be able to form acceptable federations on the fly to service incoming resource provisioning requests whenever required.
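
The security-risk-based preference relation can be illustrated with a simple hedonic-style choice rule. The `preferred_federation` helper, risk scores, and tolerances below are hypothetical; the paper's full coalitional game and stability analysis are not reproduced.

```python
def preferred_federation(csp, federations, risk, tolerance):
    """Hedonic-preference sketch: a CSP ranks candidate federations by their
    worst (highest) member risk score, refuses any federation whose worst
    member exceeds its tolerance, and stays alone as the fallback."""
    acceptable = [f for f in federations
                  if max(risk[m] for m in f) <= tolerance[csp]]
    if not acceptable:
        return frozenset({csp})          # refrain from undesirable partners
    best = min(acceptable, key=lambda f: max(risk[m] for m in f))
    return best | {csp}

# Hypothetical risk scores (lower = more secure) and a joining CSP "D"
risk = {"A": 0.2, "B": 0.3, "C": 0.9, "D": 0.4}
tolerance = {"D": 0.5}
federations = [frozenset({"A", "B"}), frozenset({"B", "C"})]
choice = preferred_federation("D", federations, risk, tolerance)
```

Here "D" joins {A, B} because {B, C} contains a member ("C") whose risk exceeds D's tolerance; with no acceptable federation, D would stay alone.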

2019-02-22
Anderson, Ross.  2018.  Covert and Deniable Communications. Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security. :1-1.

At the first Information Hiding Workshop in 1996 we tried to clarify the models and assumptions behind information hiding. We agreed the terminology of cover text and stego text against a background of the game proposed by our keynote speaker Gus Simmons: that Alice and Bob are in jail and wish to hatch an escape plan without the fact of their communication coming to the attention of the warden, Willie. Since then there have been significant strides in developing technical mechanisms for steganography and steganalysis, with new techniques from machine learning providing ever more powerful tools for the analyst, such as the ensemble classifier. There have also been a number of conceptual advances, such as the square root law and effective key length. But there always remains the question whether we are using the right security metrics for the application. In this talk I plan to take a step backwards and look at the systems context. When can stegosystems actually be used? The deployment history is patchy, one example being TrueCrypt's hidden volumes, inspired by the steganographic file system. Image forensics also finds some use, and may be helpful against some adversarial machine learning attacks (or at least help us understand them). But there are other contexts in which patterns of activity have to be hidden for that activity to be effective. I will discuss a number of examples starting with deception mechanisms such as honeypots, Tor bridges and pluggable transports, which merely have to evade detection for a while; then moving on to the more challenging task of designing deniability mechanisms, from leaking secrets to a newspaper through bitcoin mixes, which have to withstand forensic examination once the participants come under suspicion. We already know that, at the system level, anonymity is hard. However the increasing quantity and richness of the data available to opponents may move a number of applications from the deception category to that of deniability.
To pick up on our model of 20 years ago, Willie might not just put Alice and Bob in solitary confinement if he finds them communicating, but torture them or even execute them. Changing threat models have historically been one of the great disruptive forces in security engineering. This leads me to suspect that a useful research area may be the intersection of deception and forensics, and how information hiding systems can be designed in anticipation of richer and more complex threat models. The ever-more-aggressive censorship systems deployed in some parts of the world also raise the possibility of using information hiding techniques in censorship circumvention. As an example of recent practical work, I will discuss CovertMark, a toolkit for testing pluggable transports that was partly inspired by StirMark, a tool we presented at the second Information Hiding Workshop twenty years ago.
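The square root law mentioned in the abstract is a standard result from the steganography literature rather than something derived in the talk itself; a rough statement of it, as a sketch for orientation:

```latex
% Square root law of imperfect steganography (standard formulation):
% for a cover object of n samples, the payload M that can be embedded
% while keeping the detector's advantage below a fixed bound grows only
% as the square root of n, so the secure embedding *rate* tends to zero
% as covers get larger.
\[
  M_{\max}(n) = \Theta\!\left(\sqrt{n}\right),
  \qquad
  \frac{M_{\max}(n)}{n} \longrightarrow 0
  \quad \text{as } n \to \infty .
\]
```

This is why "capacity per cover element" is arguably the wrong security metric for large covers, one of the points the talk revisits.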

2020-04-24
Bahman Soltani, Hooman, Abiri, Habibollah.  2018.  Criteria for Determining Maximum Theoretical Oscillating Frequency of Extended Interaction Oscillators for Terahertz Applications. IEEE Transactions on Electron Devices. 65:1564-1571.

Extended interaction oscillators (EIOs) are high-frequency vacuum-electronic sources capable of generating millimeter-wave to terahertz (THz) radiation. They are considered potential sources of high-power submillimeter wavelengths. Different slow-wave structures and beam geometries are used for EIOs. This paper presents a quantitative figure of merit, the critical unloaded oscillating frequency (fcr), for any specific EIO geometry. This figure is calculated and tested for the 2π standing-wave modes (a common mode for EIOs) of two different slow-wave structures (SWSs): one double-ridge SWS driven by a sheet electron beam and one ring-loaded waveguide driven by a cylindrical beam. The calculated fcrs are compared with particle-in-cell (PIC) results, showing acceptable agreement. The derived fcr is calculated three to four orders of magnitude faster than the PIC solver. The generality of the method, its clear physical interpretation, and its computational rapidity make it a convenient approach for evaluating the high-frequency behavior of any specified EIO geometry, and allow changes in geometry to be investigated to attain higher frequencies in the THz spectrum.

2019-06-24
Bessa, Ricardo J., Rua, David, Abreu, Cláudia, Machado, Paulo, Andrade, José R., Pinto, Rui, Gonçalves, Carla, Reis, Marisa.  2018.  Data Economy for Prosumers in a Smart Grid Ecosystem. Proceedings of the Ninth International Conference on Future Energy Systems. :622–630.

Smart grid technologies are enablers of new business models for domestic consumers with local flexibility (generation, loads, storage), in which access to data is a key requirement in the value stream. However, legislation on personal data privacy and protection imposes the need to develop local models for flexibility modeling and forecasting, and to exchange models instead of personal data. This paper describes the functional architecture of a home energy management system (HEMS) and its optimization functions. A set of data-driven models, embedded in the HEMS, is discussed for improving renewable energy forecasting skill and modeling the multi-period flexibility of distributed energy resources.
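The privacy pattern the abstract describes, fitting models locally and exchanging only the model rather than personal data, can be illustrated with a minimal sketch. This is a hypothetical toy (a plain least-squares trend model, not the paper's HEMS models); the function names and data are invented for illustration.

```python
# Toy illustration of "exchange models instead of personal data":
# each prosumer fits a local model on its own consumption readings
# and shares only the fitted parameters, never the raw time series.

def fit_local_model(readings):
    """Fit ordinary least squares (y = a + b*t) to a household's
    consumption readings, using the sample index as the time axis."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return {"slope": slope, "intercept": mean_y - slope * mean_x}

def predict(model, t):
    """Aggregator-side forecast computed from the exchanged
    parameters alone; raw readings are never transmitted."""
    return model["intercept"] + model["slope"] * t

# The readings stay inside the household; only `model` is shared.
readings = [1.0, 1.2, 1.1, 1.4, 1.5]  # kWh per period, local data
model = fit_local_model(readings)
print(round(predict(model, 5), 2))  # forecast for the next period -> 1.6
```

The design point is that the exchanged object (here two floats) carries far less personal information than the underlying consumption trace, which is the constraint the paper's legislation-driven architecture must satisfy.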