Biblio

Filters: Keyword is Context
2017-11-27
Mohammadi, M., Chu, B., Lipford, H. R., Murphy-Hill, E..  2016.  Automatic Web Security Unit Testing: XSS Vulnerability Detection. 2016 IEEE/ACM 11th International Workshop on Automation of Software Test (AST). :78–84.

Integrating security testing into the workflow of software developers not only saves the resources needed for separate security testing but also reduces the cost of fixing security vulnerabilities by detecting them early in the development cycle. We present an automatic testing approach to detect a common type of Cross Site Scripting (XSS) vulnerability caused by improper encoding of untrusted data. We automatically extract encoding functions used in a web application to sanitize untrusted inputs and then evaluate their effectiveness by automatically generating XSS attack strings. Our evaluations show that this technique can detect 0-day XSS vulnerabilities that cannot be found by static analysis tools. We also show that our approach can efficiently cover a common type of XSS vulnerability. This approach can be generalized to test input validation against other types of injections, such as command line injection.
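A minimal sketch of the core idea, in Python: unit-test an extracted encoding function against generated XSS attack strings. The encoder and the attack strings below are illustrative stand-ins, not the paper's extraction or generation machinery.

```python
import html
import unittest

# Stand-in for an encoding function extracted from a web application;
# a real run would test the application's own sanitizer.
def encode_untrusted(value: str) -> str:
    return html.escape(value, quote=True)

# Illustrative attack strings standing in for automatically generated ones.
XSS_ATTACKS = [
    "<script>alert(1)</script>",
    '" onmouseover="alert(1)',
    "<img src=x onerror=alert(1)>",
]

class EncoderXSSTest(unittest.TestCase):
    def test_encoder_neutralizes_attacks(self):
        for attack in XSS_ATTACKS:
            encoded = encode_untrusted(attack)
            # Encoded output must not retain raw HTML metacharacters that
            # would let the payload execute in an HTML attribute or body.
            self.assertNotIn("<", encoded)
            self.assertNotIn('"', encoded)

if __name__ == "__main__":
    unittest.main()
```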

2017-05-16
AlEroud, Ahmed, Karabatis, George.  2016.  Beyond Data: Contextual Information Fusion for Cyber Security Analytics. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :73–79.

A major challenge of existing attack detection approaches is identifying the information relevant to a particular situation and using that information to perform multi-evidence intrusion detection. Addressing this limitation requires integrating several aspects of context to better predict, avoid, and respond to impending attacks. The quality and adequacy of contextual information is important to decrease uncertainty and correctly identify potential cyber-attacks. In this paper, a systematic methodology has been used to identify contextual dimensions that improve the effectiveness of detecting cyber-attacks. This methodology combines graph, probability, and information theories to create several context-based attack prediction models that analyze data at both high and low levels. An extensive validation of our approach has been performed using a prototype system and several benchmark intrusion detection datasets, yielding very promising results.

2017-03-29
Kosek, A. M..  2016.  Contextual anomaly detection for cyber-physical security in Smart Grids based on an artificial neural network model. 2016 Joint Workshop on Cyber-Physical Security and Resilience in Smart Grids (CPSR-SG). :1–6.

This paper presents a contextual anomaly detection method and its use in the discovery of malicious voltage control actions in the low voltage distribution grid. The model-based anomaly detection uses an artificial neural network model to identify a distributed energy resource's behaviour under control. An intrusion detection system observes the distributed energy resource's behaviour, control actions, and the power system impact, and is tested together with an ongoing voltage control attack in a co-simulation set-up. The simulation results, obtained with real photovoltaic rooftop power plant data, show that contextual anomaly detection performs on average 55% better at control detection and over 56% better at malicious control detection than point anomaly detection.
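A minimal sketch of the model-based detection idea: an artificial neural network learns a distributed energy resource's expected behaviour from its context, and observations with large prediction residuals are flagged as anomalous. The features, network size, and threshold below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Illustrative training data: context features (e.g. irradiance, control
# setpoint) mapped to expected DER output; the paper uses data from a real
# photovoltaic rooftop power plant instead.
X_train = rng.uniform(0, 1, size=(500, 2))
y_train = 0.8 * X_train[:, 0] + 0.2 * X_train[:, 1] + rng.normal(0, 0.01, 500)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

def is_anomalous(context: np.ndarray, observed_output: float,
                 threshold: float = 0.1) -> bool:
    """Flag behaviour whose residual against the learned model is large."""
    expected = model.predict(context.reshape(1, -1))[0]
    return abs(observed_output - expected) > threshold

# A malicious voltage control action shows up as output that is
# inconsistent with the observed context.
print(is_anomalous(np.array([0.9, 0.5]), observed_output=0.1))  # True
```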

2017-12-28
Thuraisingham, B., Kantarcioglu, M., Hamlen, K., Khan, L., Finin, T., Joshi, A., Oates, T., Bertino, E..  2016.  A Data Driven Approach for the Science of Cyber Security: Challenges and Directions. 2016 IEEE 17th International Conference on Information Reuse and Integration (IRI). :1–10.

This paper describes a data driven approach to studying the science of cyber security (SoS). It argues that science is driven by data. It then describes issues and approaches towards the following three aspects: (i) Data Driven Science for Attack Detection and Mitigation, (ii) Foundations for Data Trustworthiness and Policy-based Sharing, and (iii) A Risk-based Approach to Security Metrics. We believe that the three aspects addressed in this paper will form the basis for studying the Science of Cyber Security.

2017-04-20
Bronzino, F., Raychaudhuri, D., Seskar, I..  2016.  Demonstrating Context-Aware Services in the MobilityFirst Future Internet Architecture. 2016 28th International Teletraffic Congress (ITC 28). 01:201–204.

As the number of mobile devices populating the Internet keeps growing at a tremendous pace, context-aware services have gained a lot of traction thanks to the wide set of potential use cases they can be applied to. Environmental sensing applications, emergency services, and location-aware messaging are just a few examples of applications that are expected to increase in popularity in the next few years. The MobilityFirst future Internet architecture, a clean-slate Internet architecture design, provides the necessary abstractions for creating and managing context-aware services. Starting from these abstractions, we design a context services framework based on three fundamental mechanisms: an easy way to specify context based on human-understandable techniques, i.e. the use of names; an architecture-supported management mechanism that allows both convenient deployment of the service and efficient management capabilities; and a native delivery system that reduces the burden on network components and the overhead of deploying such applications. In this paper, we present an emergency alert system for vehicles assisting first responders that exploits users' location awareness to support quick and reliable alert messages for interested vehicles. By deploying a demo of the system on a nationwide testbed, we aim to provide a better understanding of the dynamics involved in our designed framework.

2017-11-20
Chaisiri, S., Ko, R. K. L..  2016.  From Reactionary to Proactive Security: Context-Aware Security Policy Management and Optimization under Uncertainty. 2016 IEEE Trustcom/BigDataSE/ISPA. :535–543.

At its core, security is a highly contextual and dynamic challenge. However, current security policy approaches are usually static and slow to adapt to ever-changing requirements, let alone catch up with reality. A 2012 Sophos survey stated that a unique piece of malware is created every half second. This gives a glimpse of the unsustainable nature of a global problem; any improvement in closing the "time window to adapt" would be a significant step forward. To exacerbate the situation, a simple change in threat and attack vector, or even an implementation of the so-called "bring-your-own-device" paradigm, will greatly change the frequency of changed security requirements and the solutions required for each new context. Current security policies also typically overlook the direct and indirect costs of implementing policies. As a result, technical teams often lack the ability to justify the budget to management from a business risk viewpoint. This paper considers both the adaptive and cost-benefit aspects of security, and introduces a novel context-aware technique for designing and implementing adaptive, optimized security policies. Our approach leverages the capabilities of stochastic programming models to optimize security policy planning, and our preliminary results demonstrate a promising step towards proactive, context-aware security policies.
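A minimal sketch of the cost-benefit reasoning under uncertainty: choose the policy that minimizes implementation cost plus expected loss over threat scenarios. The policies, probabilities, and losses are invented for illustration; the paper's stochastic programming models are considerably more general.

```python
# Candidate policies with an implementation cost and the loss expected
# if a given threat scenario materializes (all figures hypothetical).
policies = {
    "baseline":   {"cost": 10, "breach_loss": {"malware": 100, "byod_leak": 80}},
    "hardened":   {"cost": 40, "breach_loss": {"malware": 20,  "byod_leak": 60}},
    "contextual": {"cost": 55, "breach_loss": {"malware": 15,  "byod_leak": 10}},
}

# Scenario probabilities for the current context (e.g. a BYOD rollout).
scenario_prob = {"malware": 0.3, "byod_leak": 0.5}

def expected_total_cost(policy: dict) -> float:
    expected_loss = sum(scenario_prob[s] * loss
                        for s, loss in policy["breach_loss"].items())
    return policy["cost"] + expected_loss

best = min(policies, key=lambda name: expected_total_cost(policies[name]))
print(best, expected_total_cost(policies[best]))  # contextual 64.5
```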

2019-09-09
E. Peterson.  2016.  Dagger: Modeling and visualization for mission impact situation awareness. MILCOM 2016 - 2016 IEEE Military Communications Conference. :25–30.

Dagger is a modeling and visualization framework that addresses the challenge of representing knowledge and information for decision-makers, enabling them to better comprehend the operational context of network security data. It allows users to answer critical questions such as “Given that I care about mission X, is there any reason I should be worried about what is going on in cyberspace?” or “If this system fails, will I still be able to accomplish my mission?”.

2019-12-18
Kania, Elsa B..  2016.  Cyber deterrence in times of cyber anarchy - evaluating the divergences in U.S. and Chinese strategic thinking. 2016 International Conference on Cyber Conflict (CyCon U.S.). :1–17.

The advent of the cyber domain has introduced a new dimension into warfare and complicated existing strategic concepts, provoking divergent responses within different national contexts and strategic cultures. Although current theories regarding cyber deterrence remain relatively nascent, a comparison of U.S. and Chinese strategic thinking highlights notable asymmetries between their respective approaches. While U.S. debates on cyber deterrence have primarily focused on the deterrence of cyber threats, Chinese theorists have also emphasized the potential importance of cyber capabilities to enhance strategic deterrence. Whereas the U.S. government has maintained a consistent declaratory policy for response, Beijing has yet to progress toward transparency regarding its cyber strategy or capabilities. However, certain PLA strategists, informed by a conceptualization of deterrence as integrated with warfighting, have advocated for the actualization of deterrence through engaging in cyber attacks. Regardless of whether these major cyber powers' evolving strategic thinking on cyber deterrence will prove logically consistent or feasibly operational, their respective perspectives will certainly shape their attempts to achieve cyber deterrence. Ultimately, cyber deterrence may continue to be "what states make of it," given conditions of "cyber anarchy" and prevailing uncertainties regarding cyber conflict. Looking forward, future strategic stability in Sino-U.S. cyber interactions will require mitigation of the misperceptions and heightened risks of escalation that could be exacerbated by these divergent strategic approaches.

2018-02-02
Gouglidis, A., Green, B., Busby, J., Rouncefield, M., Hutchison, D., Schauer, S..  2016.  Threat awareness for critical infrastructures resilience. 2016 8th International Workshop on Resilient Networks Design and Modeling (RNDM). :196–202.

Utility networks are part of every nation's critical infrastructure, and their protection is now seen as a high priority objective. In this paper, we propose a threat awareness architecture for critical infrastructures, which we believe will raise security awareness and increase resilience in utility networks. We first describe an investigation of trends and threats that may impose security risks in utility networks. This was performed on the basis of a viewpoint approach that is capable of identifying technical and non-technical issues (e.g., the behaviour of humans). The result of our analysis indicated that utility networks are strongly affected by technological trends, but that humans pose an important threat to them. This confirmed that the protection of utility networks is a multi-variable problem, and thus requires the examination of information stemming from various viewpoints of a network. In order to accomplish our objective, we propose a systematic threat awareness architecture in the context of a resilience strategy, which ultimately aims at providing and maintaining an acceptable level of security and safety in critical infrastructures. As a proof of concept, we partially demonstrate, via a case study, the application of the proposed threat awareness architecture, examining the potential impact of social engineering attacks on a European utility company.

2020-08-28
Dauenhauer, Ralf, Müller, Tobias.  2016.  An Evaluation of Information Connection in Augmented Reality for 3D Scenes with Occlusion. 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct). :235–237.

Most augmented reality applications connect virtual information to anchors, i.e. physical places or objects, by using spatial overlays or proximity. However, for industrial use cases this is not always feasible, because specific parts must remain fully visible in order to meet work or security requirements. In these situations virtual information must be displayed at alternative positions while connections to anchors remain clearly recognizable. In our previous research we were the first to show that for simple scenes connection lines are most suitable for this. To extend these results to more complex environments, we conducted an experiment on the effects of visual interruptions in connection lines and incorrect occlusion. Completion time and subjective mental effort for search tasks were used as measures. Our findings confirm that, in 3D scenes with partial occlusion as well, connection lines are preferable for connecting virtual information with anchors when an assignment via overlay or close proximity is not feasible. The results further imply that neither incorrectly used depth cues nor missing parts of connection lines makes a significant difference in completion time or subjective mental effort. For designers of industrial augmented reality applications this means that they can choose either visualization based on their needs.

2017-03-08
Sandic-Stankovic, D., Kukolj, D., Le Callet, P..  2015.  DIBR synthesized image quality assessment based on morphological pyramids. 2015 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON). :1–4.

Most Depth Image Based Rendering (DIBR) techniques produce synthesized images which contain non-uniform geometric distortions affecting edge coherency. This type of distortion is challenging for common image quality metrics. Morphological filters maintain important geometric information, such as edges, across different resolution levels. There is inherent congruence between the morphological pyramid decomposition scheme and human visual perception. In this paper, a multi-scale measure, morphological pyramid peak signal-to-noise ratio (MP-PSNR), based on morphological pyramid decomposition, is proposed for the evaluation of DIBR synthesized images. It is shown that MP-PSNR achieves much higher correlation with human judgment compared to state-of-the-art image quality measures in this context.
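A minimal sketch of a morphological-pyramid PSNR in Python, using grey-scale morphology from SciPy; the exact decomposition and pooling in MP-PSNR may differ from this illustrative version.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morph_pyramid(img: np.ndarray, levels: int = 4):
    """Pyramid built by morphological opening followed by 2x decimation,
    so edges are preserved across resolution levels."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        opened = grey_dilation(grey_erosion(pyr[-1], size=(3, 3)), size=(3, 3))
        pyr.append(opened[::2, ::2])
    return pyr

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((a - b) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def mp_psnr(reference: np.ndarray, synthesized: np.ndarray) -> float:
    # Multi-scale score: average PSNR over corresponding pyramid levels,
    # penalizing geometric distortions around edges at every scale.
    ref_pyr, syn_pyr = morph_pyramid(reference), morph_pyramid(synthesized)
    return float(np.mean([psnr(r, s) for r, s in zip(ref_pyr, syn_pyr)]))
```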

Gonzalez, N., Calot, E. P..  2015.  Finite Context Modeling of Keystroke Dynamics in Free Text. 2015 International Conference of the Biometrics Special Interest Group (BIOSIG). :1–5.

Keystroke dynamics analysis has been applied successfully to the verification of passwords and other fixed short texts as a means to reduce their inherent security limitations, because their length and the fact that they are typed often make their characteristic timings fairly stable. Free text analysis, on the other hand, was neglected until recent years due to the inherent difficulties of dealing with short-term behavioral noise and long-term effects on typing rhythm. In this paper we examine finite context modeling of keystroke dynamics in free text and report promising results for user verification over an extensive data set, collected from a real-world environment outside the laboratory setting, that we make publicly available.
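A minimal sketch of finite-context timing models for free text: latencies are modeled per digraph (the context here being the preceding key), and verification scores how far new samples deviate from the enrolled model. The features and threshold are illustrative assumptions, not the paper's model.

```python
import statistics
from collections import defaultdict

def build_profile(samples):
    """samples: iterable of (digraph, latency_ms) from enrollment free text."""
    timings = defaultdict(list)
    for digraph, latency in samples:
        timings[digraph].append(latency)
    # Keep mean/stdev per digraph; contexts seen fewer than twice are dropped.
    return {d: (statistics.mean(t), statistics.stdev(t))
            for d, t in timings.items() if len(t) >= 2}

def verify(profile, samples, max_mean_z: float = 2.0) -> bool:
    """Accept if the average timing deviation across known contexts is small."""
    zs = []
    for digraph, latency in samples:
        if digraph in profile:
            mean, std = profile[digraph]
            if std > 0:
                zs.append(abs(latency - mean) / std)
    return bool(zs) and statistics.mean(zs) < max_mean_z

enroll = [("th", 110), ("th", 120), ("he", 90), ("he", 95), ("he", 100)]
profile = build_profile(enroll)
print(verify(profile, [("th", 115), ("he", 93)]))  # True: rhythm matches
```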

2016-12-05
Marwan Abi-Antoun, Yibin Wang, Ebrahim Khalaj, Andrew Giang, Vaclav Rajlich.  2015.  Impact Analysis based on a Global Hierarchical Object Graph. 2015 IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER).

During impact analysis on object-oriented code, statically extracting dependencies is often complicated by subclassing, programming to interfaces, aliasing, and collections, among others. When a tool recommends a large number of types or does not rank its recommendations, it may lead developers to explore more irrelevant code. We propose to mine and rank dependencies based on a global, hierarchical points-to graph that is extracted using abstract interpretation. A previous whole-program static analysis interprets a program enriched with annotations that express hierarchy, and over-approximates all the objects that may be created at runtime and how they may communicate. In this paper, an analysis mines the hierarchy and the edges in the graph to extract and rank dependencies such as the most important classes related to a class, or the most important classes behind an interface. An evaluation using two case studies on two systems totaling 10,000 lines of code and five completed code modification tasks shows that following dependencies based on abstract interpretation achieves higher effectiveness compared to following dependencies extracted from the abstract syntax tree. As a result, developers explore less irrelevant code.
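A minimal sketch of the ranking idea: given an over-approximated object graph, the dependencies of a type are ranked by how strongly connected they are to it. The toy graph below is illustrative, not one produced by the paper's annotation-driven analysis.

```python
from collections import Counter

# Over-approximated communication edges between object types, of the kind
# a whole-program points-to analysis might extract (hypothetical example).
edges = [
    ("Editor", "Document"), ("Editor", "Clipboard"),
    ("Editor", "Document"), ("Viewer", "Document"),
    ("Document", "StyleSheet"),
]

def ranked_dependencies(graph_edges, of_type: str):
    """Rank the types a given type depends on by edge multiplicity."""
    counts = Counter(dst for src, dst in graph_edges if src == of_type)
    return counts.most_common()  # most important related classes first

print(ranked_dependencies(edges, "Editor"))
# [('Document', 2), ('Clipboard', 1)]
```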

2017-03-07
Gupta, M. K., Govil, M. C., Singh, G., Sharma, P..  2015.  XSSDM: Towards detection and mitigation of cross-site scripting vulnerabilities in web applications. 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI). :2010–2015.

With the growth of the Internet, web applications are becoming very popular in user communities. However, the presence of security vulnerabilities in the source code of these applications is rapidly raising the cyber crime rate. These vulnerabilities must be detected and mitigated before they are exploited in the execution environment. Recently, the Open Web Application Security Project (OWASP) and the Common Weakness Enumeration (CWE) reported Cross-Site Scripting (XSS) as one of the most serious vulnerabilities in web applications. Though many detection approaches have been proposed in the past, they suffer from false positive and false negative results. This paper proposes a context-sensitive approach based on static taint analysis and pattern matching techniques to detect and mitigate XSS vulnerabilities in the source code of web applications. The proposed approach has been implemented in a prototype tool and evaluated on a public data set of 9408 samples. Experimental results show that the tool outperforms existing popular open source tools in the detection of XSS vulnerabilities.
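A minimal sketch of the source-to-sink idea behind such detectors. A real context-sensitive taint analysis tracks flows across statements; this toy scanner only flags single lines where an untrusted source reaches an output sink with no sanitizer. The patterns and the PHP snippet are illustrative assumptions.

```python
import re

SOURCE = re.compile(r"\$_(GET|POST|REQUEST)\b")      # untrusted input
SINK = re.compile(r"\becho\b|\bprint\b")             # HTML output sink
SANITIZER = re.compile(r"\bhtmlspecialchars\s*\(")   # context-appropriate encoder

def scan(php_source: str):
    findings = []
    for lineno, line in enumerate(php_source.splitlines(), start=1):
        if SOURCE.search(line) and SINK.search(line) and not SANITIZER.search(line):
            findings.append((lineno, line.strip()))
    return findings

code = """<?php
echo htmlspecialchars($_GET['name']);
echo $_GET['comment'];
"""
for lineno, line in scan(code):
    print(f"potential XSS at line {lineno}: {line}")
```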

2017-03-08
Herrera, A., Janczewski, L..  2015.  Cloud supply chain resilience. 2015 Information Security for South Africa (ISSA). :1–9.

Cloud computing is a service-based model for sourcing computing resources that is changing the way companies deploy and operate information and communication technologies (ICT). This model introduces several advantages over traditional environments, along with typical outsourcing benefits, and reshapes the ICT services supply chain by creating a more dynamic ICT environment and a broader variety of service offerings. This leads to a higher risk of disruption and brings additional challenges for organisational resilience, defined herein as the ability of organisations to survive and also to thrive when exposed to disruptive incidents. This paper draws on supply chain theory and supply chain resilience concepts to identify a set of coordination mechanisms that positively impact ICT operational resilience processes within cloud supply chains, and packages them into a conceptual model.

2017-03-07
Alnaami, K., Ayoade, G., Siddiqui, A., Ruozzi, N., Khan, L., Thuraisingham, B..  2015.  P2V: Effective Website Fingerprinting Using Vector Space Representations. 2015 IEEE Symposium Series on Computational Intelligence. :59–66.

Language vector space models (VSMs) have recently proven to be effective across a variety of tasks. In VSMs, each word in a corpus is represented as a real-valued vector. These vectors can be used as features in many applications in machine learning and natural language processing. In this paper, we study the effect of vector space representations in cyber security. In particular, we consider a passive traffic analysis attack (website fingerprinting) that threatens users' navigation privacy on the web. By using anonymous communication, Internet users (such as online activists) may wish to hide the destination of the web pages they access for different reasons, such as avoiding oppressive governments. Traditional website fingerprinting studies collect packets from the users' network and extract features that are used by machine learning techniques to reveal the destination of certain web pages. In this work, we propose the packet-to-vector (P2V) approach, where we model the website fingerprinting attack using word vector representations. We show how the suggested model outperforms previous website fingerprinting works.
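A minimal sketch of the packet-to-word idea: each packet becomes a token built from its direction and a size bucket, a trace becomes a vector over those tokens, and traces from the same site end up close in the vector space. A hashed bag-of-words stands in here for the learned word vectors of P2V.

```python
import numpy as np

def packet_word(direction: str, size: int, bucket: int = 100) -> str:
    """Map a packet to a 'word' from its direction and size bucket."""
    return f"{direction}:{size // bucket}"

def trace_vector(packets, dim: int = 64) -> np.ndarray:
    """Normalized hashed bag-of-words over a trace's packet words."""
    vec = np.zeros(dim)
    for direction, size in packets:
        vec[hash(packet_word(direction, size)) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Hypothetical traces: two visits to site A and one to site B.
site_a = [("out", 120), ("in", 1500), ("in", 1480), ("out", 90)]
site_a2 = [("out", 110), ("in", 1490), ("in", 1500), ("out", 80)]
site_b = [("out", 600), ("in", 200), ("out", 620)]

probe = trace_vector(site_a2)
# The probe trace is closer to site A's fingerprint than to site B's.
print(float(probe @ trace_vector(site_a)) > float(probe @ trace_vector(site_b)))
```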

2017-02-27
Njenga, K., Ndlovu, S..  2015.  Mobile banking and information security risks: Demand-side predilections of South African lead-users. 2015 Second International Conference on Information Security and Cyber Forensics (InfoSec). :86–92.

South African lead-users' predilections to tinker with and innovate mobile banking services are driven by various constructs. Advanced technologies have made mobile banking services easy to use, attractive, and beneficial. While this is welcome news to many, there are concerns that when lead-users tinker with these services, information security risks are exacerbated. The aim of this article is to present an insightful understanding of the demand-side predilections of South Africa's lead-users in such contexts. We assimilate the theory of Usage Control (UCON), the Technology Acceptance Model (TAM), and the Theory of Perceived Risk (TPR) to explain predilections over technology. We demonstrate that constructs derived from these theories can explain the general demand-side predilection to tinker with mobile banking services. A quantitative approach was used to test this. Data was collected from a sample of South African banking lead-users operating in the Gauteng province of South Africa and analysed with the help of a software package. We found, unexpectedly, that lead-users' predilections to tinker with mobile banking services were inhibited by perceived risk. Moreover, male lead-users were more dominant in the tinkering process than female lead-users. The implications of this are discussed and explained in the main body of the work.

2017-02-14
B. C. M. Cappers, J. J. van Wijk.  2015.  "SNAPS: Semantic network traffic analysis through projection and selection". 2015 IEEE Symposium on Visualization for Cyber Security (VizSec). :1–8.

Most network traffic analysis applications are designed to discover malicious activity by relying only on high-level flow-based message properties. However, to detect security breaches that are specifically designed to target one network (e.g., Advanced Persistent Threats), deep packet inspection and anomaly detection are indispensable. In this paper, we focus on how we can support experts in discovering whether anomalies at message level imply a security risk at network level. In SNAPS (Semantic Network traffic Analysis through Projection and Selection), we provide a bottom-up, pixel-oriented approach for network traffic analysis where the expert starts with low-level anomalies and iteratively gains insight into higher-level events through the creation of multiple selections of interest in parallel. The tight integration between visualization and machine learning enables the expert to iteratively refine anomaly scores, making the approach suitable for both post-traffic analysis and online monitoring tasks. To illustrate the effectiveness of this approach, we present example explorations on two real-world data sets for the detection and understanding of potential Advanced Persistent Threats in progress.

2017-03-08
Casola, V., Benedictis, A. D., Rak, M., Villano, U..  2015.  SLA-Based Secure Cloud Application Development: The SPECS Framework. 2015 17th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC). :337–344.

The perception of a lack of control over resources deployed in the cloud may be one of the critical factors in an organization's decision whether to cloudify their services. Furthermore, in spite of the idea of offering security-as-a-service, the development of secure cloud applications requires security skills that can slow down cloud adoption by non-expert users. In recent years, the concept of Security Service Level Agreements (Security SLAs) has assumed a key role in the provisioning of cloud resources. This paper presents the SPECS framework, which enables the development of secure cloud applications covered by a Security SLA. The SPECS framework offers APIs to manage the whole Security SLA life cycle and provides all the functionality needed to automate the enforcement of proper security mechanisms and to monitor user-defined security features. The development process of SPECS applications offering security-enhanced services is illustrated, presenting as a real-world case study the provisioning of a secure web server.

2017-02-21
J. Pan, R. Jain, S. Paul.  2015.  "Enhanced Evaluation of the Interdomain Routing System for Balanced Routing Scalability and New Internet Architecture Deployments". IEEE Systems Journal. 9:892–903.

The Internet is facing many challenges that cannot be solved easily through ad hoc patches. To address these challenges, many research programs and projects have been initiated and many solutions are being proposed. However, before we can have a new architecture that motivates Internet service providers (ISPs) to deploy and evolve it, we need to address two issues: 1) know the current status better by appropriately evaluating the existing Internet; and 2) find how various incentives and strategies will affect the deployment of the new architecture. For the first issue, we define a series of quantitative metrics that can potentially unify results from several measurement projects using different approaches and can be an intrinsic part of a future Internet architecture (FIA) for monitoring and evaluation. Using these metrics, we systematically evaluate the current interdomain routing system and reveal many “autonomous-system-level” observations and key lessons for new Internet architectures. In particular, the evaluation results reveal the imbalance underlying the interdomain routing system and how the deployment of FIAs can benefit from these findings. With these findings, for the second issue, appropriate deployment strategies for future architecture changes can be formed with balanced incentives for both customers and ISPs. The results can be used to shape the short- and long-term goals for new architectures that are simple evolutions of the current Internet (so-called dirty-slate architectures) and, to some extent, for clean-slate architectures.

2019-09-09
Johnson, Matthew, Bradshaw, Jeffrey M., Feltovich, Paul J., Jonker, Catholijn M., van Riemsdijk, M. Birna, Sierhuis, Maarten.  2014.  Coactive Design: Designing Support for Interdependence in Joint Activity. J. Hum.-Robot Interact.. 3:43–69.

Coactive Design is a new approach to address the increasingly sophisticated roles that people and robots play as the use of robots expands into new, complex domains. The approach is motivated by the desire for robots to perform less like teleoperated tools or independent automatons and more like interdependent teammates. In this article, we describe what it means to be interdependent, why this is important, and the design implications that follow from this perspective. We argue for a human-robot system model that supports interdependence through careful attention to requirements for observability, predictability, and directability. We present a Coactive Design method and show how it can be a useful approach for developers trying to understand how to translate high-level teamwork concepts into reusable control algorithms, interface elements, and behaviors that enable robots to fulfill their envisioned role as teammates. As an example of the Coactive Design approach, we present our results from the DARPA Virtual Robotics Challenge, a competition designed to spur development of advanced robots that can assist humans in recovering from natural and man-made disasters. Twenty-six teams from eight countries competed in three different tasks, providing an excellent evaluation of the relative effectiveness of different approaches to human-machine system design.

2015-05-06
Kobayashi, F., Talburt, J.R..  2014.  Decoupling Identity Resolution from the Maintenance of Identity Information. 2014 11th International Conference on Information Technology: New Generations (ITNG). :349–354.

The EIIM model for ER allows for the creation and maintenance of persistent entity identity structures. It accomplishes this through a collection of batch configurations that allow updates and asserted fixes to be made to the identity knowledge base (IKB). The model also provides a batch IR configuration that performs no maintenance activity but instead allows access to the identity information. This batch IR configuration is limited in a few ways: it is driven by the same rules used for maintaining the IKB, has no inherent method to identify "close" matches, and can only identify and return positive matches. By decoupling this configuration and moving it into an interactive role under the umbrella of an Identity Management Service, a more robust access method can be provided for the use of identity information. This more robust access improves the quality of the information along multiple information quality dimensions.
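A minimal sketch of the kind of interactive access the paper argues for: besides positive matches, an identity-resolution service can also surface "close" matches with similarity scores. The similarity measure and thresholds are illustrative assumptions, not the EIIM rules.

```python
from difflib import SequenceMatcher

def similarity(a: dict, b: dict) -> float:
    """Average string similarity over the fields two records share."""
    fields = set(a) & set(b)
    return sum(SequenceMatcher(None, a[f], b[f]).ratio() for f in fields) / len(fields)

def resolve(query: dict, knowledge_base: list,
            positive: float = 0.95, close: float = 0.7):
    """Return positive matches and 'close' candidates, unlike a batch
    configuration that reports positive matches only."""
    results = {"positive": [], "close": []}
    for identity in knowledge_base:
        score = similarity(query, identity)
        if score >= positive:
            results["positive"].append((identity, round(score, 3)))
        elif score >= close:
            results["close"].append((identity, round(score, 3)))
    return results

ikb = [{"name": "John Smith", "city": "Little Rock"},
       {"name": "John Smyth", "city": "Littlerock"}]
print(resolve({"name": "John Smith", "city": "Little Rock"}, ikb))
```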

Adjei, J.K..  2014.  Explaining the Role of Trust in Cloud Service Acquisition. 2014 2nd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud). :283–288.

An effective digital identity management system is a critical enabler of cloud computing, since it supports the provision of the required assurances to the transacting parties. Such assurances sometimes require the disclosure of sensitive personal information. Given the prevalence of various forms of identity abuse on the Internet, a re-examination of the factors underlying cloud service acquisition has become critical and imperative. In order to provide better assurances, parties to cloud transactions must have confidence in service providers' ability and integrity in protecting their interests and personal information. A trusted cloud identity ecosystem could thus promote such user confidence and assurances. Using a qualitative research approach, this paper explains the role of trust in the acquisition of cloud services by organizations. The paper focuses on the processes of acquisition of cloud services by financial institutions in Ghana. The study forms part of a comprehensive study on the monetization of personal identity information.
