Biblio

Found 154 results

Filters: Keyword is Prototypes
2020-07-13
Ghosh, Debanjana, Chatterjee, Soumyajit, Kothari, Vasudha, Kumar, Aakash, Nair, Mahesh, Lokesh, Ella.  2019.  An application of Li-Fi based Wireless Communication System using Visible Light Communication. 2019 International Conference on Opto-Electronics and Applied Optics (Optronix). :1–3.
This paper attempts to clarify the concept and applications of Li-Fi technology. Current Wi-Fi networks use radio frequency waves, but the usage of the available RF spectrum is limited; therefore a new technology, Li-Fi, has come into the picture. Li-Fi is a recently developed technology. This paper explains how an array of LEDs is used to transmit data in the visible light spectrum. This technology has advantages such as security, an increased accessible spectrum, low latency, efficiency, and much higher speed compared to Wi-Fi. The aim of this research paper is to design a Li-Fi transceiver using Arduino that is able to transmit and receive data in binary format. The software coding is done on the Arduino Uno platform. Alphanumeric data has been successfully transmitted and received.
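As a rough illustration of the kind of encoding such a transceiver performs (a minimal Python sketch, not the authors' Arduino firmware; the framing format of one start bit, eight data bits and one stop bit is an assumption), the alphanumeric payload can be turned into a flat stream of LED on/off states and decoded back:

    # Illustrative on-off-keying framing for a Li-Fi style link (assumed format:
    # one start bit, 8 data bits LSB-first, one stop bit per character).

    def frame_message(text):
        """Turn an alphanumeric string into a flat list of 0/1 LED states."""
        bits = []
        for ch in text:
            bits.append(0)                                   # start bit (LED off)
            byte = ord(ch)
            bits.extend((byte >> i) & 1 for i in range(8))   # data bits, LSB first
            bits.append(1)                                   # stop bit (idle level)
        return bits

    def deframe_message(bits):
        """Recover the string from the bit stream produced above."""
        chars = []
        for i in range(0, len(bits), 10):
            frame = bits[i:i + 10]
            if len(frame) < 10 or frame[0] != 0 or frame[9] != 1:
                break                                        # framing error
            byte = sum(b << j for j, b in enumerate(frame[1:9]))
            chars.append(chr(byte))
        return "".join(chars)

    sent = frame_message("LIFI42")
    print(deframe_message(sent))                             # -> LIFI42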
2020-04-03
Kuznetsov, Alexandr, Kiian, Anastasiia, Gorbenko, Yurii, Smirnov, Oleksii, Cherep, Oleksandr, Bexhter, Liliia.  2019.  Code-based Pseudorandom Generator for the Post-Quantum Period. 2019 IEEE International Conference on Advanced Trends in Information Theory (ATIT). :204–209.
This paper focuses on research into provably secure code-based pseudorandom sequence generators whose cryptanalysis problem is equivalent to syndrome decoding (which belongs to the NP-complete class). It was found that the sequences generated by the well-known Fischer-Stern code-based generator do not have a maximum period; the actual period is much shorter than expected. In our work, we have created a new generator scheme. It retains all the advantages of the Fischer-Stern algorithm and produces pseudorandom sequences with a maximum period. A comparative analysis of the proposed generator and popular generators was also conducted.
2020-03-30
Dreher, Patrick, Ramasami, Madhuvanti.  2019.  Prototype Container-Based Platform for Extreme Quantum Computing Algorithm Development. 2019 IEEE High Performance Extreme Computing Conference (HPEC). :1–7.
Recent advances in the development of the first generation of quantum computing devices have provided researchers with computational platforms to explore new ideas and reformulate conventional computational codes suitable for a quantum computer. Developers can now implement these reformulations on both quantum simulators and hardware platforms through a cloud computing software environment. For example, the IBM Q Experience provides direct access to IBM's quantum simulators and quantum computing hardware platforms. However, these current access options may not be an optimal environment for developers who need to download and modify the source code and libraries. This paper focuses on the construction of a Docker container environment with Qiskit source code and libraries running on a local cloud computing system that can directly access the IBM Q Experience. This prototype container-based system allows single users and small project groups to do rapid prototype development, testing and implementation of extreme-capability algorithms with more agility and flexibility than can be provided through the IBM Q Experience website. This prototype environment also provides an excellent teaching environment for labs and project assignments within graduate courses in cloud computing and quantum computing. The paper also discusses computer security challenges for expanding this prototype container system to larger groups of quantum computing researchers.
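As a hedged sketch of the workflow such a container supports (the circuit is an arbitrary example, not from the paper, and the calls below follow the Qiskit 0.x API contemporary with it; newer Qiskit releases moved the simulator to qiskit_aer and replaced execute with primitives), a developer might validate a small circuit on the local Aer simulator before targeting IBM Q hardware:

    # Minimal Qiskit example (0.x-era API; version-dependent, see note above).
    from qiskit import QuantumCircuit, Aer, execute

    qc = QuantumCircuit(2, 2)        # two qubits, two classical bits
    qc.h(0)                          # put qubit 0 into superposition
    qc.cx(0, 1)                      # entangle qubit 1 with qubit 0
    qc.measure([0, 1], [0, 1])       # measure both qubits

    backend = Aer.get_backend("qasm_simulator")   # local simulator in the container
    counts = execute(qc, backend, shots=1024).result().get_counts()
    print(counts)                    # roughly half '00' and half '11'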
2020-03-23
Pewny, Jannik, Koppe, Philipp, Holz, Thorsten.  2019.  STEROIDS for DOPed Applications: A Compiler for Automated Data-Oriented Programming. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :111–126.
The wide-spread adoption of system defenses such as the randomization of code, stack, and heap raises the bar for code-reuse attacks. Thus, attackers utilize a scripting engine in target programs like a web browser to prepare the code-reuse chain, e.g., relocate gadget addresses or perform a just-in-time gadget search. However, many types of programs do not provide such an execution context that an attacker can use. Recent advances in data-oriented programming (DOP) explored an orthogonal way to abuse memory corruption vulnerabilities and demonstrated that an attacker can achieve Turing-complete computations without modifying code pointers in applications. As of now, constructing DOP exploits requires a lot of manual work, anew for every combination of application and payload. In this paper, we present novel techniques to automate the process of generating DOP exploits. We implemented a compiler called STEROIDS that leverages these techniques and compiles our high-level language SLANG into low-level DOP data structures driving malicious computations at run time. This enables an attacker to specify her intent in an application- and vulnerability-independent manner to maximize reusability. We demonstrate the effectiveness of our techniques and prototype implementation by specifying four programs of varying complexity in SLANG that calculate the Levenshtein distance, traverse a pointer chain to steal a private key, relocate a ROP chain, and perform a JIT-ROP attack. STEROIDS compiles each of those programs to low-level DOP data structures targeted at five different applications including GStreamer, Wireshark and ProFTPd, which have vastly different vulnerabilities and DOP instances. Ultimately, this shows that our compiler is versatile, can be used for both 32-bit and 64-bit applications, works across bug classes, and enables highly expressive attacks without conventional code-injection or code-reuse techniques in applications lacking a scripting engine.
2020-03-16
Goli, Mehran, Drechsler, Rolf.  2019.  Scalable Simulation-Based Verification of SystemC-Based Virtual Prototypes. 2019 22nd Euromicro Conference on Digital System Design (DSD). :522–529.
Virtual Prototypes (VPs) at the Electronic System Level (ESL), written in the SystemC language using its Transaction Level Modeling (TLM) framework, are increasingly adopted by the semiconductor industry. The main reason is that VPs are available much earlier, and their simulation is orders of magnitude faster, in comparison to hardware models implemented at lower levels of abstraction (e.g. RTL). This leads designers to use VPs as reference models for early design verification. Hence, the correctness assurance of these reference models (VPs) is critical, as undetected faults may propagate to less abstract levels in the design process, increasing the fixing cost and effort. In this paper, we propose a novel simulation-based verification approach to automatically validate the simulation behavior of a given SystemC VP against both the TLM-2.0 rules and its specifications (i.e. the functional and timing behavior of communications in the VP). The scalability and efficiency of the proposed approach are demonstrated using an extensive set of experiments, including a real-world VP.
2020-03-09
Hăjmăȿan, Gheorghe, Mondoc, Alexandra, Creț, Octavian.  2019.  Bytecode Heuristic Signatures for Detecting Malware Behavior. 2019 Conference on Next Generation Computing Applications (NextComp). :1–6.
For a long time, the most important approach for detecting malicious applications was the use of static, hash-based signatures. This approach provides a fast response time, has a low performance overhead and is very stable due to its simplicity. However, with the rapid growth in the number of malware samples, as well as their increased complexity in terms of polymorphism and evasion, the era of reactive security solutions started to fade in favor of new, proactive approaches such as behavior-based detection. We propose a novel approach that uses an interpreter virtual machine to run proactive behavior heuristics from bytecode signatures, thus combining the advantages of behavior-based detection with those of signatures. Based on our estimates, using this approach we reduced by 85% the time required to update a behavior-based detection solution to detect new threats, while continuing to benefit from the versatility of behavior heuristics.
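The paper's signature format and virtual machine are not reproduced here; as a loose illustration of the general idea (an interpreter that evaluates a behavior heuristic encoded as bytecode over observed events), consider the following toy stack machine, whose opcodes and example "signature" are invented for this sketch:

    # Toy stack-based interpreter for behavior heuristics. A "signature" is a
    # list of (opcode, argument) pairs evaluated against observed behaviors.

    def run_signature(bytecode, observed_events):
        stack = []
        for op, arg in bytecode:
            if op == "OBSERVED":              # push True if the event was seen
                stack.append(arg in observed_events)
            elif op == "AND":
                b, a = stack.pop(), stack.pop()
                stack.append(a and b)
            elif op == "OR":
                b, a = stack.pop(), stack.pop()
                stack.append(a or b)
            elif op == "NOT":
                stack.append(not stack.pop())
            else:
                raise ValueError("unknown opcode: %s" % op)
        return stack.pop()                    # True means the heuristic fires

    # Invented heuristic: autorun-key write AND (code injection OR AV shutdown).
    suspicious = [
        ("OBSERVED", "registry_autorun_write"),
        ("OBSERVED", "code_injection"),
        ("OBSERVED", "av_service_stop"),
        ("OR", None),
        ("AND", None),
    ]

    events = {"registry_autorun_write", "code_injection"}
    print(run_signature(suspicious, events))  # -> True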
2020-02-10
Cha, Shi-Cho, Li, Zhuo-Xun, Fan, Chuan-Yen, Tsai, Mila, Li, Je-Yu, Huang, Tzu-Chia.  2019.  On Design and Implementation a Federated Chat Service Framework in Social Network Applications. 2019 IEEE International Conference on Agents (ICA). :33–36.
As many organizations deploy their chatbots on social network applications to interact with their customers, a person may switch among different chatbots for different services. To reduce this switching cost, this study proposes the Federated Chat Service Framework. The framework maintains user profiles and historical behaviors. Instead of deploying chatbots, organizations follow the rules of the framework to provide chat services. The framework can therefore combine service requests with context information and responses to emulate the conversations between users and chat services. Consequently, the study can hopefully contribute to reducing the cost for a user to communicate with different chatbots.
2019-11-12
Luo, Qiming, Lv, Ang, Hou, Ligang, Wang, Zhongchao.  2018.  Realization of System Verification Platform of IoT Smart Node Chip. 2018 IEEE 3rd International Conference on Integrated Circuits and Microsystems (ICICM). :341-344.

With the development of large-scale integrated circuits, the functions of IoT chips have become increasingly complete, and verification has become one of the most important aspects of their design. On the one hand, an efficient verification platform can ensure the correctness of the design. On the other hand, it can shorten the chip design cycle and reduce the design cost. In this paper, based on a transmission protocol of the IoT node, we propose a verification method which combines simulation verification and FPGA-based prototype verification. We also constructed a system verification platform for the IoT smart node chip combining the two kinds of verification above. We have successfully simulated and verified the related functions of the node chip using this platform, which provides a valuable reference.

2019-10-14
Koo, H., Chen, Y., Lu, L., Kemerlis, V. P., Polychronakis, M..  2018.  Compiler-Assisted Code Randomization. 2018 IEEE Symposium on Security and Privacy (SP). :461–477.

Despite decades of research on software diversification, only address space layout randomization has seen widespread adoption. Code randomization, an effective defense against return-oriented programming exploits, has remained an academic exercise mainly due to i) the lack of a transparent and streamlined deployment model that does not disrupt existing software distribution norms, and ii) the inherent incompatibility of program variants with error reporting, whitelisting, patching, and other operations that rely on code uniformity. In this work we present compiler-assisted code randomization (CCR), a hybrid approach that relies on compiler-rewriter cooperation to enable fast and robust fine-grained code randomization on end-user systems, while maintaining compatibility with existing software distribution models. The main concept behind CCR is to augment binaries with a minimal set of transformation-assisting metadata, which i) facilitate rapid fine-grained code transformation at installation or load time, and ii) form the basis for reversing any applied code transformation when needed, to maintain compatibility with existing mechanisms that rely on referencing the original code. We have implemented a prototype of this approach by extending the LLVM compiler toolchain, and developing a simple binary rewriter that leverages the embedded metadata to generate randomized variants using basic block reordering. The results of our experimental evaluation demonstrate the feasibility and practicality of CCR, as on average it incurs a modest file size increase of 11.46% and a negligible runtime overhead of 0.28%, while it is compatible with link-time optimization and control flow integrity.

2019-09-23
Yazici, I. M., Karabulut, E., Aktas, M. S..  2018.  A Data Provenance Visualization Approach. 2018 14th International Conference on Semantics, Knowledge and Grids (SKG). :84–91.
In recent years, data provenance has created an emerging requirement for technologies that enable end users to access, evaluate, and act on the provenance of data. In the era of Big Data, the amount of data created by corporations around the world has grown each year. As an example, in both the Social Media and e-Science domains, data is growing at an unprecedented rate. As the data has grown rapidly, information on the origin and lifecycle of the data has also grown. In turn, this requires technologies that enable the clarification and interpretation of data through the use of data provenance. This study proposes methodologies for the visualization of provenance data compatible with the W3C PROV-O specification. The visualizations are done by summarization and comparison of the data provenance. We facilitated the testing of these methodologies by providing a prototype that extends an existing open source visualization tool. We discuss the usability of the proposed methodologies with an experimental study; our initial results show that the proposed approach is usable, and its processing overhead is negligible.
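As a small, hedged illustration of the summarization step only (the record layout below is a simplified PROV-JSON-like dictionary invented for the example, not the authors' data format or tool), one can count the entities, activities, agents, and relations before rendering them:

    # Summarize a simplified PROV-style document: count node and relation types.
    from collections import Counter

    def summarize_provenance(doc):
        node_counts = Counter({k: len(doc.get(k, {}))
                               for k in ("entity", "activity", "agent")})
        relation_counts = Counter({k: len(doc.get(k, []))
                                   for k in ("wasGeneratedBy", "used",
                                             "wasAssociatedWith", "wasDerivedFrom")})
        return node_counts, relation_counts

    example = {
        "entity": {"ex:raw_tweets": {}, "ex:clean_tweets": {}},
        "activity": {"ex:cleaning": {}},
        "agent": {"ex:pipeline_v2": {}},
        "used": [("ex:cleaning", "ex:raw_tweets")],
        "wasGeneratedBy": [("ex:clean_tweets", "ex:cleaning")],
        "wasAssociatedWith": [("ex:cleaning", "ex:pipeline_v2")],
        "wasDerivedFrom": [("ex:clean_tweets", "ex:raw_tweets")],
    }

    nodes, relations = summarize_provenance(example)
    print(nodes)        # Counter({'entity': 2, 'activity': 1, 'agent': 1})
    print(relations)    # one relation of each type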
2019-01-21
Leal, A. G., Teixeira, Í C..  2018.  Development of a suite of IPv6 vulnerability scanning tests using the TTCN-3 language. 2018 International Symposium on Networks, Computers and Communications (ISNCC). :1–6.

With the transition from the IPv4 to the IPv6 protocol to improve network communications, there are concerns about device and application security that must be dealt with at the beginning of implementation or during its lifecycle. Automating the vulnerability assessment process reduces management overhead, enabling better management of risks and control of vulnerabilities. Consequently, it reduces the effort needed for each test and allows tests to be applied more frequently, freeing time to perform all the other complicated tasks necessary to support a secure network. Several researchers are involved in testing vulnerabilities in IPv6 networks, exploiting addressing mechanisms, extension headers, fragmentation, tunnelling or dual-stack networks (using both IPv4 and IPv6 at the same time). Most existing tools use the programming languages C, Java, and Python instead of a language designed specifically to create a suite of tests, which reduces the maintainability and extensibility of the tests. This paper presents a solution for IPv6 vulnerability scanning tests based on attack simulations, combining passive analysis (observing the manifestation of behaviours of the system under test) and active analysis (stimulating the system to become symptomatic). It also describes a prototype that simulates and detects denial-of-service attacks on the ICMPv6 protocol of IPv6. In addition, a detailed report is created with the identified vulnerability and the possible existing solutions to mitigate such a gap, thus assisting the vulnerability management process.
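The prototype itself consists of TTCN-3 test cases; as a language-agnostic sketch of the detection half only (flagging an ICMPv6 flood by its packet rate), the following Python snippet counts packets per source in a sliding time window. The window length and threshold are arbitrary assumptions.

    # Sliding-window rate detector for an ICMPv6 flood (illustrative only).
    from collections import defaultdict, deque

    WINDOW_SECONDS = 1.0
    THRESHOLD = 100       # assumed packets/second per source that counts as a flood

    class FloodDetector:
        def __init__(self):
            self.timestamps = defaultdict(deque)   # source address -> arrival times

        def observe(self, src_addr, now):
            """Record one ICMPv6 packet; return True if src exceeds the threshold."""
            times = self.timestamps[src_addr]
            times.append(now)
            while times and now - times[0] > WINDOW_SECONDS:
                times.popleft()
            return len(times) > THRESHOLD

    detector = FloodDetector()
    for i in range(150):                           # simulated burst from one host
        alarm = detector.observe("fe80::1", now=0.001 * i)
    print(alarm)                                   # -> True once the rate is exceeded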

2019-01-16
Hwang, D., Shin, J., Choi, Y..  2018.  Authentication Protocol for Wearable Devices Using Mobile Authentication Proxy. 2018 Tenth International Conference on Ubiquitous and Future Networks (ICUFN). :700–702.
The data transmitted from a wearable device commonly includes sensitive information, so application services using data collected from unauthorized wearable devices can cause serious problems. It is therefore important to authenticate any wearable device and then protect the data transmitted between the wearable devices and the application server. In this paper, we propose an authentication protocol, which is designed by using the Transport Layer Security (TLS) handshake protocol combined with a mobile authentication proxy. By using the proposed authentication protocol, we can authenticate the wearable device, and we can secure data transmission since a session key is shared between the wearable device and the application server. In addition, the proposed authentication protocol is secure even when the mobile authentication proxy is unreliable.
2018-11-19
Lebeck, K., Ruth, K., Kohno, T., Roesner, F..  2017.  Securing Augmented Reality Output. 2017 IEEE Symposium on Security and Privacy (SP). :320–337.

Augmented reality (AR) technologies, such as Microsoft's HoloLens head-mounted display and AR-enabled car windshields, are rapidly emerging. AR applications provide users with immersive virtual experiences by capturing input from a user's surroundings and overlaying virtual output on the user's perception of the real world. These applications enable users to interact with and perceive virtual content in fundamentally new ways. However, the immersive nature of AR applications raises serious security and privacy concerns. Prior work has focused primarily on input privacy risks stemming from applications with unrestricted access to sensor data. However, the risks associated with malicious or buggy AR output remain largely unexplored. For example, an AR windshield application could intentionally or accidentally obscure oncoming vehicles or safety-critical output of other AR applications. In this work, we address the fundamental challenge of securing AR output in the face of malicious or buggy applications. We design, prototype, and evaluate Arya, an AR platform that controls application output according to policies specified in a constrained yet expressive policy framework. In doing so, we identify and overcome numerous challenges in securing AR output.
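As a much-simplified, hedged illustration of the kind of output policy such a platform can enforce (the policy, object model, and threshold below are inventions for this sketch, not the Arya policy framework), consider rejecting any virtual object whose bounding box would obscure a safety-critical real-world region:

    # Toy AR output-policy check: block virtual objects that cover too much of a
    # safety-critical region (e.g., where an oncoming vehicle was detected).

    def overlap_area(a, b):
        """Overlap of two axis-aligned boxes given as (x1, y1, x2, y2)."""
        w = min(a[2], b[2]) - max(a[0], b[0])
        h = min(a[3], b[3]) - max(a[1], b[1])
        return max(0, w) * max(0, h)

    def allowed(virtual_box, critical_regions, max_occlusion=0.10):
        """Assumed policy: cover at most 10% of any critical region."""
        for region in critical_regions:
            area = (region[2] - region[0]) * (region[3] - region[1])
            if area and overlap_area(virtual_box, region) / area > max_occlusion:
                return False
        return True

    oncoming_car = (100, 100, 200, 180)            # detected safety-critical region
    advert_popup = (90, 90, 210, 200)              # virtual object requested by an app
    print(allowed(advert_popup, [oncoming_car]))   # -> False, so output is suppressed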

2018-11-14
Sakumoto, S., Kanaoka, A..  2017.  Improvement of Privacy Preserved Rule-Based Risk Analysis via Secure Multi-Party Computation. 2017 12th Asia Joint Conference on Information Security (AsiaJCIS). :15–22.

Currently, when companies conduct risk analysis of their own networks and systems, it is common to outsource the risk analysis to third-party experts. At that time, the company passes the information used for the risk analysis, including confidential information such as the network configuration, to the third-party expert. This raises the risk of leakage and abuse of confidential information. Therefore, a method of risk analysis using secure computation, which does not pass the company's confidential information, has been proposed. Although Liu's method was the first to achieve secure risk analysis using multi-party computation and attack tree analysis, it has several problems that keep it from being practical. In this paper, an improvement of the secure risk analysis method is proposed. It can dynamically reduce compilation time and enhance the scale of the target network and system without increasing execution time. Experimental work is carried out with a prototype implementation. As a result, we achieved improved performance in compile time and an enhanced target scale with equivalent performance in execution time.
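The contribution above lies in running the analysis under secure multi-party computation; as a plaintext illustration of the underlying attack-tree arithmetic only (in the secure variant, each operation would be carried out jointly on secret-shared values), the sketch below evaluates attack success probabilities over AND/OR nodes. The tree and probabilities are invented.

    # Plaintext attack-tree evaluation (the secure version performs the same
    # arithmetic under multi-party computation on secret-shared inputs).

    def evaluate(node):
        """node is ('LEAF', p), ('AND', [children]) or ('OR', [children])."""
        kind, payload = node
        if kind == "LEAF":
            return payload
        probs = [evaluate(child) for child in payload]
        if kind == "AND":                 # every sub-attack must succeed
            result = 1.0
            for p in probs:
                result *= p
            return result
        if kind == "OR":                  # at least one sub-attack succeeds
            result = 1.0
            for p in probs:
                result *= (1.0 - p)
            return 1.0 - result
        raise ValueError(kind)

    # Invented example: (phish admin AND escalate) OR exploit web server.
    tree = ("OR", [
        ("AND", [("LEAF", 0.4), ("LEAF", 0.5)]),
        ("LEAF", 0.1),
    ])
    print(round(evaluate(tree), 3))       # -> 0.28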

2018-06-07
Larisch, J., Choffnes, D., Levin, D., Maggs, B. M., Mislove, A., Wilson, C..  2017.  CRLite: A Scalable System for Pushing All TLS Revocations to All Browsers. 2017 IEEE Symposium on Security and Privacy (SP). :539–556.

Currently, no major browser fully checks for TLS/SSL certificate revocations. This is largely due to the fact that the deployed mechanisms for disseminating revocations (CRLs, OCSP, OCSP Stapling, CRLSet, and OneCRL) are each either incomplete, insecure, inefficient, slow to update, not private, or some combination thereof. In this paper, we present CRLite, an efficient and easily-deployable system for proactively pushing all TLS certificate revocations to browsers. CRLite servers aggregate revocation information for all known, valid TLS certificates on the web, and store them in a space-efficient filter cascade data structure. Browsers periodically download and use this data to check for revocations of observed certificates in real-time. CRLite does not require any additional trust beyond the existing PKI, and it allows clients to adopt a fail-closed security posture even in the face of network errors or attacks that make revocation information temporarily unavailable. We present a prototype of CRLite that processes TLS certificates gathered by Rapid7, the University of Michigan, and Google's Certificate Transparency on the server-side, with a Firefox extension on the client-side. Comparing CRLite to an idealized browser that performs correct CRL/OCSP checking, we show that CRLite reduces latency and eliminates privacy concerns. Moreover, CRLite has low bandwidth costs: it can represent all certificates with an initial download of 10 MB (less than 1 byte per revocation) followed by daily updates of 580 KB on average. Taken together, our results demonstrate that complete TLS/SSL revocation checking is within reach for all clients.
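As a rough sketch of the filter-cascade idea (a Bloom filter of revoked certificates whose false positives are corrected by a second filter built from the valid certificates that collide with the first), the example below builds a two-level cascade over toy certificate identifiers. The filter sizes are arbitrary, and the real system uses a carefully sized multi-level cascade rather than this two-level toy.

    # Two-level filter cascade over toy certificate IDs (illustrative sizes only).
    import hashlib

    class Bloom:
        def __init__(self, bits=4096, hashes=3):
            self.bits, self.hashes, self.array = bits, hashes, bytearray(bits)

        def _positions(self, item):
            for i in range(self.hashes):
                digest = hashlib.sha256(b"%d:%s" % (i, item.encode())).digest()
                yield int.from_bytes(digest[:8], "big") % self.bits

        def add(self, item):
            for pos in self._positions(item):
                self.array[pos] = 1

        def __contains__(self, item):
            return all(self.array[pos] for pos in self._positions(item))

    revoked = {"cert:%d" % i for i in range(0, 1000, 7)}      # toy revocation set
    valid = {"cert:%d" % i for i in range(1000)} - revoked

    level1 = Bloom()                      # "possibly revoked"
    for cert in revoked:
        level1.add(cert)
    level2 = Bloom()                      # valid certs that level1 falsely matches
    for cert in valid:
        if cert in level1:
            level2.add(cert)
    # A full cascade would add further levels to correct level2's own false positives.

    def is_revoked(cert):                 # the check a browser would run locally
        return cert in level1 and cert not in level2

    print(is_revoked("cert:7"), is_revoked("cert:8"))         # -> True False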

2018-04-04
Zhang, B., Ye, J., Feng, C., Tang, C..  2017.  S2F: Discover Hard-to-Reach Vulnerabilities by Semi-Symbolic Fuzz Testing. 2017 13th International Conference on Computational Intelligence and Security (CIS). :548–552.
Fuzz testing is a popular program testing technique. However, it is difficult to find hard-to-reach vulnerabilities that are nested within complex branches. In this paper, we propose semi-symbolic fuzz testing to discover hard-to-reach vulnerabilities. Our method groups inputs into high-frequency and low-frequency ones. Symbolic execution is then utilized to solve only uncovered branches, to mitigate the path explosion problem. In particular, in order to exploit the advantages of fuzz testing, our method locates the critical branch for each low-frequency input and corrects the generated test cases to conform to the branch condition. We also implemented a prototype, S2F, and the experimental results show that S2F can gain 17.70% in coverage performance and discover more hard-to-reach vulnerabilities than other vulnerability detection tools on our benchmark.
2018-04-02
Doynikova, E., Kotenko, I..  2017.  CVSS-Based Probabilistic Risk Assessment for Cyber Situational Awareness and Countermeasure Selection. 2017 25th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP). :346–353.

The paper suggests several techniques for computer network risk assessment based on the Common Vulnerability Scoring System (CVSS) and attack modeling. The techniques use a set of integrated security metrics and consider input data from security information and event management (SIEM) systems. The risk assessment techniques differ according to the input data used, and they allow risk assessments that take into account requirements on accuracy and efficiency. Input data includes network characteristics, attacks, attacker characteristics, security events and countermeasures. A tool that implements these techniques is presented. Experiments demonstrate the operation of the techniques for different security situations.
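As a simplified, hedged illustration of CVSS-weighted risk aggregation (a generic probability-times-impact roll-up over hosts, not the authors' integrated metrics), consider:

    # Generic CVSS-based risk roll-up per host: risk = business criticality times
    # the worst normalized CVSS base score among its unpatched vulnerabilities.

    hosts = {
        # host: (business criticality 0..1, list of CVSS base scores 0..10)
        "db-server": (0.9, [9.8, 5.3]),
        "web-proxy": (0.6, [7.5]),
        "printer":   (0.2, [6.1, 4.3]),
    }

    def host_risk(criticality, cvss_scores):
        worst = max(cvss_scores, default=0.0) / 10.0   # normalize worst score to 0..1
        return criticality * worst

    ranked = sorted(((host, host_risk(*data)) for host, data in hosts.items()),
                    key=lambda pair: pair[1], reverse=True)
    for host, risk in ranked:
        print("%-10s risk=%.2f" % (host, risk))
    # db-server ranks highest and would be prioritized for countermeasures.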

2018-02-06
Brannsten, M. R., Bloebaum, T. H., Johnsen, F. T., Reitan, B. K..  2017.  Kings Eye: Platform Independent Situational Awareness. 2017 International Conference on Military Communications and Information Systems (ICMCIS). :1–5.

Kings Eye is a platform-independent situational awareness prototype for smart devices. Platform independence is important as more and more soldiers bring their own devices, with different operating systems, into the field. The concept of Bring Your Own Device (BYOD) is a low-cost approach to equipping soldiers with situational awareness tools, so it is important to facilitate and evaluate such solutions.

Vorobiev, E. G., Petrenko, S. A., Kovaleva, I. V., Abrosimov, I. K..  2017.  Organization of the Entrusted Calculations in Crucial Objects of Informatization under Uncertainty. 2017 XX IEEE International Conference on Soft Computing and Measurements (SCM). :299–300.

The urgent task of organizing confidential computations in critical objects of informatization on the basis of domestic TPM (Trusted Platform Module) technologies is considered. Corresponding recommendations and architectural concepts for a special hardware TPM module built into a computing platform, which realizes a so-called "root of trust", are proposed. As a result, confidential computations can be organized on the basis of a domestic electronic component base.

2018-01-10
Zheng, Y., Shi, Y., Guo, K., Li, W., Zhu, L..  2017.  Enhanced word embedding with multiple prototypes. 2017 4th International Conference on Industrial Economics System and Industrial Security Engineering (IEIS). :1–5.

Word embedding is one of the basic word representation methods in natural language processing, which maps a word into a dense real-valued vector space based on the hypothesis that words with similar contexts have similar meanings. Models like NNLM, C&W, CBOW and Skip-gram have been designed for word embedding learning and are widely used in many NLP tasks. However, these models assume that a word has only one meaning, which is contrary to real language use. In this paper we propose a new word unit with multiple meanings and an algorithm to distinguish them by their context. This new unit can be embedded in most language models and yields a series of efficient representations by learning multiple embeddings. We evaluate a new model, MCBOW, that integrates CBOW with our word unit on a word similarity evaluation task and some downstream experiments; the results indicate that our new model can learn different meanings of a word and achieves better results on some other tasks.
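As a hedged illustration of the general multi-prototype idea only (cluster the contexts of an ambiguous word and keep one vector per cluster; MCBOW instead learns the prototypes jointly with CBOW), the sketch below uses scikit-learn's KMeans on invented context vectors:

    # Multi-prototype illustration: cluster the context vectors of one ambiguous
    # word ("bank") and keep one prototype vector per sense. Toy data only.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    finance_contexts = rng.normal(loc=[1.0, 0.0], scale=0.1, size=(50, 2))
    river_contexts = rng.normal(loc=[0.0, 1.0], scale=0.1, size=(50, 2))
    contexts = np.vstack([finance_contexts, river_contexts])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(contexts)
    prototypes = kmeans.cluster_centers_          # one embedding per word sense

    def sense_of(context_vector):
        """Pick the prototype (sense) closest to an observed context."""
        distances = np.linalg.norm(prototypes - context_vector, axis=1)
        return int(np.argmin(distances))

    print(prototypes.round(2))                    # two well-separated prototypes
    print(sense_of(np.array([0.9, 0.1])))         # resolves to the "finance" sense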

2017-12-20
Wampler, J. A., Hsieh, C., Toth, A..  2017.  Efficient distribution of fragmented sensor data for obfuscation. MILCOM 2017 - 2017 IEEE Military Communications Conference (MILCOM). :695–700.
The inherent nature of unattended sensors makes these devices most vulnerable to detection, exploitation, and denial in contested environments. Physical access is often cited as the easiest way to compromise any device or network. A new mechanism for mitigating these types of attacks, developed under the Assistant Secretary of Defense for Research and Engineering, ASD(R&E) project “Smoke Screen in Cyberspace”, was previously demonstrated in a live, over-the-air experiment. Smoke Screen encrypts, slices up, and disburses redundant fragments of files throughout the network. This paper describes enhancements to the routing of the disbursed file fragments, improving the efficiency and time to completion of fragment distribution by defining the exact route fragments should take to the destination. This is the first step in defining a custom protocol for the discovery of participating nodes and the efficient distribution of fragments in a mobile network. Future work will focus on the movement of fragments to avoid traffic analysis and to prevent the collection of the entire fragment set that would enable an adversary to reconstruct the original piece of data.
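As a loose, non-cryptographic illustration of the routing enhancement described above (assigning each redundant fragment copy an explicit route to the destination; encryption and the actual discovery protocol are deliberately omitted), consider:

    # Illustrative only: slice a payload into fragments, replicate each fragment,
    # and pin every copy to an explicit route through named relay nodes.
    import itertools

    def fragment(payload, size):
        return [payload[i:i + size] for i in range(0, len(payload), size)]

    def assign_routes(fragments, routes, redundancy=2):
        """Round-robin each of `redundancy` copies of every fragment onto a route."""
        route_cycle = itertools.cycle(routes)
        plan = []
        for index, frag in enumerate(fragments):
            for copy in range(redundancy):
                plan.append({"fragment": index, "copy": copy,
                             "route": next(route_cycle), "data": frag})
        return plan

    routes = [("A", "C", "DEST"), ("A", "B", "DEST"), ("A", "D", "E", "DEST")]
    plan = assign_routes(fragment(b"sensor-report-0042", size=6), routes)
    for entry in plan:
        print(entry["fragment"], entry["copy"], "->", "/".join(entry["route"]))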
2017-04-20
Zhang, X., Gong, L., Xun, Y., Piao, X., Leit, K..  2016.  Centaur: A evolutionary design of hybrid NDN/IP transport architecture for streaming application. 2016 IEEE 7th Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). :1–7.

Named Data Networking (NDN), a clean-slate data-oriented Internet architecture aiming to replace IP, brings many potential benefits for content distribution. Real deployment of NDN is crucial to verify this new architecture and promote academic research, but work in this field is at an early stage. Due to the fundamental difference in design paradigm between NDN and IP, deploying NDN as an IP overlay causes high overhead and inefficient transmission, typically in streaming applications. Aiming at efficient NDN streaming distribution, this paper proposes a transitional NDN/IP hybrid network architecture dubbed Centaur, which embodies both NDN's smartness and scalability and IP's transmission efficiency and deployment feasibility. In Centaur, the upper NDN module acts as the smart head while the lower IP module functions as the powerful feet. The head is intelligent in content retrieval and self-control, while the IP feet are able to transport large amounts of media data faster than if NDN were directly overlaid on IP. To evaluate the performance of our proposal, we implement a real streaming prototype in ndnSIM and compare it with both NDN-Hippo and P2P under various experiment scenarios. The results show that Centaur can achieve better load balance with lower overhead, close to the performance that ideal NDN can achieve. All of this validates that our proposal is a promising choice for the incremental and compatible deployment of NDN.

2017-02-27
Zhang, L., Li, B., Zhang, L., Li, D..  2015.  Fuzzy clustering of incomplete data based on missing attribute interval size. 2015 IEEE 9th International Conference on Anti-counterfeiting, Security, and Identification (ASID). :101–104.

The fuzzy c-means algorithm is used to identify clusters of similar objects within a data set, but it cannot be applied directly to incomplete data. In this paper, we propose a novel fuzzy c-means algorithm based on missing-attribute interval size for the clustering of incomplete data. In the new algorithm, the incomplete data set is transformed into an interval data set according to the nearest neighbor rule. Each missing attribute value is replaced by the corresponding interval median, and the interval size is kept as an additional property of the incomplete data to control the effect of the interval size on clustering. Experiments on standard UCI data sets show that our approach outperforms other clustering methods for incomplete data.
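As a hedged sketch of the preprocessing step described above (not the authors' full algorithm), the snippet below builds an interval for each missing attribute from the K nearest complete records, substitutes the interval median, and keeps the interval size for later use in clustering; K is an assumption, and a standard fuzzy c-means run would follow on the completed data.

    # Interval construction for incomplete records via the nearest-neighbor rule.
    import numpy as np

    def complete_with_intervals(X, k=2):
        X = np.array(X, dtype=float)                 # np.nan marks missing values
        complete_rows = X[~np.isnan(X).any(axis=1)]
        filled = X.copy()
        interval_sizes = np.zeros_like(X)
        for i, row in enumerate(X):
            missing = np.isnan(row)
            if not missing.any():
                continue
            observed = ~missing
            dists = np.linalg.norm(complete_rows[:, observed] - row[observed], axis=1)
            neighbors = complete_rows[np.argsort(dists)[:k]]
            lo = neighbors[:, missing].min(axis=0)
            hi = neighbors[:, missing].max(axis=0)
            filled[i, missing] = (lo + hi) / 2.0     # interval median
            interval_sizes[i, missing] = hi - lo     # kept to weight the clustering
        return filled, interval_sizes

    data = [[1.0, 2.0], [1.2, 2.3], [5.0, 6.2], [5.2, 6.0],
            [1.1, np.nan], [np.nan, 6.1]]
    filled, sizes = complete_with_intervals(data, k=2)
    print(filled)      # missing entries replaced by interval medians
    print(sizes)       # nonzero only where a value was missing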

2015-05-06
Shimauchi, S., Ohmuro, H..  2014.  Accurate adaptive filtering in square-root Hann windowed short-time fourier transform domain. Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. :1305-1309.

A novel short-time Fourier transform (STFT) domain adaptive filtering scheme is proposed that can be easily combined with nonlinear post-filters such as residual echo or noise reduction in acoustic echo cancellation. Unlike normal STFT subband adaptive filters, which suffer from aliasing artifacts due to their poor prototype filter, our scheme achieves good accuracy by exploiting the relationship between the linear convolution and the poor prototype filter, i.e., the STFT window function. The effectiveness of our scheme was confirmed through the results of simulations conducted to compare it with conventional methods.
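As a small numerical illustration of the square-root Hann analysis/synthesis windowing that such a scheme operates in (the adaptive filter update itself is omitted; the 50% overlap and frame length are assumptions for this sketch), the snippet below shows that sqrt-Hann analysis followed by sqrt-Hann synthesis with overlap-add reconstructs the signal away from the edges:

    # Square-root (periodic) Hann STFT analysis/synthesis with 50% overlap.
    import numpy as np

    N = 256                                    # frame length (assumed)
    hop = N // 2                               # 50% overlap
    window = np.sin(np.pi * np.arange(N) / N)  # square root of the periodic Hann

    signal = np.random.default_rng(1).standard_normal(20 * hop)

    # Analysis: windowed FFT of each frame.
    frames = [np.fft.rfft(window * signal[start:start + N])
              for start in range(0, len(signal) - N + 1, hop)]

    # (An echo canceller would multiply each frame by adaptive weights here.)

    # Synthesis: inverse FFT, window again, overlap-add.
    output = np.zeros(len(signal))
    for k, spectrum in enumerate(frames):
        start = k * hop
        output[start:start + N] += window * np.fft.irfft(spectrum, N)

    middle = slice(N, len(signal) - N)         # skip edges without full overlap
    print(np.max(np.abs(output[middle] - signal[middle])))   # round-off only, ~1e-15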

2015-05-05
Boleng, J., Novakouski, M., Cahill, G., Simanta, S., Morris, E..  2014.  Fusing Open Source Intelligence and Handheld Situational Awareness: Benghazi Case Study. Military Communications Conference (MILCOM), 2014 IEEE. :1421-1426.

This paper reports the results and findings of a historical analysis of open source intelligence (OSINT) information (namely Twitter data) surrounding the events of the September 11, 2012 attack on the US Diplomatic mission in Benghazi, Libya. In addition to this historical analysis, two prototype capabilities were combined in a tabletop exercise to explore the effectiveness of using OSINT combined with a context-aware handheld situational awareness framework and application to better inform potential responders as the events unfolded. Our experience shows that the ability to model sentiment and trends and to monitor keywords in streaming social media, coupled with the ability to share that information with edge operators, can increase their ability to effectively respond to contingency operations as they unfold.
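As a toy, hedged illustration of the keyword and sentiment monitoring described above (the lexicon and messages are invented; the actual prototypes and their Twitter pipeline are far richer), consider:

    # Toy keyword and sentiment monitor over a stream of posts.
    from collections import Counter

    KEYWORDS = {"protest", "fire", "embassy", "gunfire"}
    SENTIMENT = {"angry": -1, "attack": -1, "help": -1, "safe": +1, "calm": +1}

    def monitor(stream):
        keyword_hits = Counter()
        sentiment_total = 0
        for post in stream:
            words = [w.strip(".,!?") for w in post.lower().split()]
            keyword_hits.update(w for w in words if w in KEYWORDS)
            sentiment_total += sum(SENTIMENT.get(w, 0) for w in words)
        return keyword_hits, sentiment_total

    stream = [
        "Crowd gathering near the embassy, people look angry",
        "Reports of gunfire and fire near the compound, send help",
        "Streets calm again this morning",
    ]
    hits, sentiment = monitor(stream)
    print(hits.most_common())     # which monitored keywords are trending
    print(sentiment)              # a negative total suggests a deteriorating situation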