Biblio

Filters: Keyword is Context
2015-05-06
Nemoianu, I.-D., Greco, C., Cagnazzo, M., Pesquet-Popescu, B..  2014.  On a Hashing-Based Enhancement of Source Separation Algorithms Over Finite Fields With Network Coding Perspectives. Multimedia, IEEE Transactions on. 16:2011-2024.

Blind Source Separation (BSS) deals with the recovery of source signals from a set of observed mixtures, when little or no knowledge of the mixing process is available. BSS can find application in the context of network coding, where relaying linear combinations of packets maximizes the throughput and increases the loss immunity. By relieving the nodes from the need to send the combination coefficients, the overhead cost is largely reduced. However, the scaling ambiguity of the technique and the quasi-uniformity of compressed media sources make it unfit, in its present state, for multimedia transmission. In order to open new practical applications for BSS in the context of multimedia transmission, we have recently proposed to use a non-linear encoding to increase the discriminating power of the classical entropy-based separation methods. Here, we propose to append to each source a non-linear message digest, which offers an overhead smaller than a per-symbol encoding and can be more easily tuned. Our results show that our algorithm provides high decoding rates for different media types such as image, audio, and video, when the transmitted messages are smaller than 1.5 kilobytes, which is typically the case in a realistic transmission scenario.
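
A minimal sketch of the digest idea described above: each source carries a short non-linear digest, and the receiver uses it to reject wrong candidate separations that a purely entropy-based criterion could not distinguish. SHA-256 here stands in for the paper's finite-field digest, and the digest length is an illustrative assumption, not the paper's parameter.

    import hashlib

    DIGEST_LEN = 8  # bytes; illustrative trade-off between overhead and discriminating power

    def append_digest(payload: bytes) -> bytes:
        # Non-linear digest appended once per source, cheaper than per-symbol encoding.
        return payload + hashlib.sha256(payload).digest()[:DIGEST_LEN]

    def is_valid_source(candidate: bytes) -> bool:
        # The receiver keeps only demixed candidates whose digest verifies.
        payload, digest = candidate[:-DIGEST_LEN], candidate[-DIGEST_LEN:]
        return hashlib.sha256(payload).digest()[:DIGEST_LEN] == digest

    packet = append_digest(b"compressed media payload")
    print(is_valid_source(packet), is_valid_source(packet[::-1]))  # True False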

Barclay, C..  2014.  Sustainable security advantage in a changing environment: The Cybersecurity Capability Maturity Model (CM2). ITU Kaleidoscope Academic Conference: Living in a converged world - Impossible without standards?, Proceedings of the 2014. :275-282.

With the rapid advancement in technology and the growing complexities in the interaction of these technologies and networks, it is ever more important for countries and organizations to gain sustainable security advantage. Security advantage refers to the ability to manage and respond to threats and vulnerabilities with a proactive security posture. This is accomplished through effectively planning, managing, responding to and recovering from threats and vulnerabilities. However, not many organizations, or even countries, especially in the developing world, have been able to equip themselves with the necessary and sufficient know-how or ability to integrate knowledge and capabilities to achieve security advantage within their environment. Having a structured set of requirements or indicators to aid in progressively attaining different levels of maturity and capabilities is one important method to determine the state of cybersecurity readiness. The research introduces the Cybersecurity Capability Maturity Model (CM2), a 6-step process of progressive development of cybersecurity maturity and knowledge integration that ranges from a state of limited awareness and application of security controls to pervasive optimization of the protection of critical assets.

Badis, H., Doyen, G., Khatoun, R..  2014.  Understanding botclouds from a system perspective: A principal component analysis. Network Operations and Management Symposium (NOMS), 2014 IEEE. :1-9.

Cloud computing is gaining ground and becoming one of the fastest growing segments of the IT industry. However, while its numerous advantages mainly support legitimate activities, it is now also exploited for a use it was not meant for: malicious users leverage its power and fast provisioning to turn it into an attack support. Botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use, since they can be set up on demand and at very large scale without requiring a long dissemination phase or expensive deployment costs. For cloud service providers, preventing their infrastructure from being turned into an Attack-as-a-Service delivery model is very challenging, since it requires detecting threats at the source, in a highly dynamic and heterogeneous environment. In this paper, we present the results of an experiment campaign we performed in order to understand the operational behavior of a botcloud used for a DDoS attack. The originality of our work resides in the consideration of system metrics that, while never considered for state-of-the-art botnet detection, can be leveraged in the context of a cloud to enable source-based detection. Our study considers attacks based on both TCP flood and UDP storm, and for each of them we provide statistical results, based on a principal component analysis, that highlight the recognizable behavior of a botcloud as compared to other legitimate workloads.
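
A hedged sketch of the analysis step described above: standardize per-VM system metrics and project them onto their first principal components, where a botcloud workload would separate from legitimate ones. The metric layout and the random data are placeholders, not the paper's dataset.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Rows: observations of one VM; columns: system metrics (placeholders
    # standing in for e.g. CPU, memory, disk and network counters).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))

    X_std = StandardScaler().fit_transform(X)   # PCA is scale-sensitive
    pca = PCA(n_components=2).fit(X_std)
    scores = pca.transform(X_std)               # coordinates to plot or cluster

    print("explained variance ratios:", pca.explained_variance_ratio_)
    # Plotting `scores` for botcloud vs. legitimate workloads would reveal
    # the separable clusters the paper reports.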

2015-05-05
Zadeh, B.Q., Handschuh, S..  2014.  Random Manhattan Indexing. Database and Expert Systems Applications (DEXA), 2014 25th International Workshop on. :203-208.

Vector space models (VSMs) are mathematically well-defined frameworks that have been widely used in text processing. In these models, high-dimensional, often sparse vectors represent text units. In an application, the similarity of vectors -- and hence of the text units that they represent -- is computed by a distance formula. The high dimensionality of vectors, however, is a barrier to the performance of methods that employ VSMs. Consequently, a dimensionality reduction technique is employed to alleviate this problem. This paper introduces a new method, called Random Manhattan Indexing (RMI), for the construction of L1-normed VSMs at reduced dimensionality. RMI combines the construction of a VSM and dimension reduction into an incremental, and thus scalable, procedure. To attain its goal, RMI employs sparse Cauchy random projections.
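
A minimal sketch of the underlying idea, under the assumption of a simple sparsification scheme (the paper's exact construction may differ): because the Cauchy distribution is 1-stable, the median of the absolute coordinate-wise differences of the projected vectors estimates the original L1 distance.

    import numpy as np

    def cauchy_projection(dim_in, dim_out, density=0.1, seed=0):
        # Sparse random matrix whose non-zero entries are standard Cauchy.
        rng = np.random.default_rng(seed)
        mask = rng.random((dim_in, dim_out)) < density
        return np.where(mask, rng.standard_cauchy((dim_in, dim_out)), 0.0)

    def l1_estimate(ya, yb, density=0.1):
        # 1-stability: each coordinate of (ya - yb) is Cauchy with scale
        # roughly density * ||xa - xb||_1, and median(|Cauchy(s)|) = s.
        return np.median(np.abs(ya - yb)) / density

    rng = np.random.default_rng(1)
    xa, xb = rng.random(5000), rng.random(5000)   # two "document" vectors
    R = cauchy_projection(5000, 500)
    print(l1_estimate(xa @ R, xb @ R), np.abs(xa - xb).sum())  # estimate vs. exact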

Babour, A., Khan, J.I..  2014.  Tweet Sentiment Analytics with Context Sensitive Tone-Word Lexicon. Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on. 1:392-399.

In this paper we propose a Twitter sentiment analytics approach that mines for opinion polarity about a given topic. Most current semantic sentiment analytics depend on polarity lexicons. However, many key tone words are frequently bipolar. In this paper we demonstrate a technique which accommodates the bipolarity of tone words through a context-sensitive tone lexicon learning mechanism, where the context is modeled by the semantic neighborhood of the main target. Performance analysis shows that the ability to contextualize tone word polarity significantly improves the accuracy.
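
To illustrate why context sensitivity matters, here is a toy scoring function in which a tone word's polarity is keyed on the target topic; all lexicon entries are invented for illustration and would, in the paper's approach, be learned from the target's semantic neighborhood.

    # Toy context-sensitive lexicon: a tone word's polarity depends on the topic.
    lexicon = {
        ("unpredictable", "movie"): +1,   # an unpredictable plot is good
        ("unpredictable", "car"):   -1,   # unpredictable steering is bad
        ("small", "phone"):         +1,
        ("small", "salary"):        -1,
    }

    def polarity(tweet_tokens, topic):
        scores = [lexicon.get((w, topic)) for w in tweet_tokens]
        scores = [s for s in scores if s is not None]
        return sum(scores) / len(scores) if scores else 0.0

    print(polarity("the plot was unpredictable".split(), "movie"))    # 1.0
    print(polarity("the steering felt unpredictable".split(), "car")) # -1.0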

Heimerl, F., Lohmann, S., Lange, S., Ertl, T..  2014.  Word Cloud Explorer: Text Analytics Based on Word Clouds. System Sciences (HICSS), 2014 47th Hawaii International Conference on. :1833-1842.

Word clouds have emerged as a straightforward and visually appealing visualization method for text. They are used in various contexts as a means to provide an overview by distilling text down to those words that appear with highest frequency. Typically, this is done in a static way as pure text summarization. We think, however, that there is a larger potential to this simple yet powerful visualization paradigm in text analytics. In this work, we explore the usefulness of word clouds for general text analysis tasks. We developed a prototypical system called the Word Cloud Explorer that relies entirely on word clouds as a visualization method. It equips them with advanced natural language processing, sophisticated interaction techniques, and context information. We show how this approach can be effectively used to solve text analysis tasks and evaluate it in a qualitative user study.
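
The frequency-distillation core that word clouds build on can be sketched in a few lines; the Word Cloud Explorer layers advanced NLP and interaction techniques on top of this core, which the sketch deliberately omits. The stopword list is a tiny illustrative sample.

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "of", "to", "and", "in", "is", "for", "that", "with"}

    def word_cloud_weights(text, top_n=50):
        # Distil text to its most frequent content words, the basis of a word cloud.
        tokens = re.findall(r"[a-z]+", text.lower())
        counts = Counter(t for t in tokens if t not in STOPWORDS)
        return counts.most_common(top_n)

    print(word_cloud_weights("Word clouds distil text down to the words "
                             "that appear with the highest frequency."))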

Dey, L., Mahajan, D., Gupta, H..  2014.  Obtaining Technology Insights from Large and Heterogeneous Document Collections. Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on. 1:102-109.

Keeping up with rapid advances in research in various fields of Engineering and Technology is a challenging task. Decision makers including academics, program managers, venture capital investors, industry leaders and funding agencies not only need to be abreast of the latest developments but also be able to assess the effect of growth in certain areas on their core business. Though analyst agencies like Gartner, McKinsey, etc. provide such reports for some areas, thought leaders of all organisations still need to amass data from heterogeneous collections like research publications, analyst reports, patent applications, competitor information, etc. to help them finalize their own strategies. Text mining and data analytics researchers have been looking at integrating statistics, text analytics and information visualization to aid the process of retrieval and analytics. In this paper, we present our work on automated topical analysis and insight generation from large heterogeneous text collections of publications and patents. While most of the earlier work in this area provides search-based platforms, ours is an integrated platform for search and analysis. We present several methods and techniques that help in the analysis and better comprehension of search results, methods for generating insights about emerging and popular trends in research along with contextual differences between academic research and patenting profiles, and novel techniques to present topic evolution that help users understand how a particular area has evolved over time.

Pirinen, R..  2014.  Studies of Integration Readiness Levels: Case Shared Maritime Situational Awareness System. Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint. :212-215.

The research question of this study is: how can Integration Readiness Level (IRL) metrics be understood and realized in the domain of border control information systems? The study addresses the IRL metrics and their definition, criteria, references, and questionnaires for the validation of border control information systems, in the case of the shared maritime situational awareness system. The target of the study is improvement of the ways of acceptance, operational validation, risk assessment, and development of sharing mechanisms and integration of information systems, border control information interactions, and collaboration concepts in the Finnish national and European border control domains.

Kornmaier, A., Jaouen, F..  2014.  Beyond technical data - a more comprehensive situational awareness fed by available intelligence information. Cyber Conflict (CyCon 2014), 2014 6th International Conference On. :139-154.

Information on cyber incidents and threats is currently collected and processed with a strong technical focus. Threat and vulnerability information alone is not a solid base for effective, affordable or actionable security advice for decision makers. They need more than a small technical cut of a bigger situational picture to combat, and not only mitigate, the cyber threat. We first give a short overview of the related work in the literature. We found that the approaches mostly analysed “what” has been done, instead of looking more generically beyond the technical aspects at the tactics, techniques and procedures to identify “how” it was done, by whom, and why. We then examine what information categories and data already exist to answer questions about an adversary's capabilities and objectives. As traditional intelligence serves a better understanding of adversaries' capabilities, actions, and intent, the same is feasible in cyberspace with cyber intelligence. Thus, we identify information sources in the military and civil environments, before proposing to link that traditional information with the technical data for a better situational picture. We give examples of information that can be collected from traditional intelligence for correlation with technical data, so that the same kind of intelligence operational picture traditionally fed from conventional intelligence disciplines can be developed for the cyber sphere. Finally, we propose a way of including intelligence processing in cyber analysis and outline requirements that are key for a successful exchange of information and intelligence between military and civil information providers.

Boleng, J., Novakouski, M., Cahill, G., Simanta, S., Morris, E..  2014.  Fusing Open Source Intelligence and Handheld Situational Awareness: Benghazi Case Study. Military Communications Conference (MILCOM), 2014 IEEE. :1421-1426.

This paper reports the results and findings of a historical analysis of open source intelligence (OSINT) information (namely Twitter data) surrounding the events of the September 11, 2012 attack on the US diplomatic mission in Benghazi, Libya. In addition to this historical analysis, two prototype capabilities were combined in a tabletop exercise to explore the effectiveness of using OSINT together with a context-aware handheld situational awareness framework and application to better inform potential responders as the events unfolded. Our experience shows that the ability to model sentiment and trends and to monitor keywords in streaming social media, coupled with the ability to share that information with edge operators, can increase their ability to effectively respond to contingency operations as they unfold.
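
As a rough illustration of the keyword-monitoring half of that pipeline (the actual prototypes are not public, so the class, its interface, and the keyword list are hypothetical), a sliding-window counter over a tweet stream might look like this:

    from collections import Counter, deque

    class KeywordTrendMonitor:
        """Sliding-window keyword counter over a tweet stream (illustrative only)."""
        def __init__(self, keywords, window=1000):
            self.keywords = {k.lower() for k in keywords}
            self.window = deque(maxlen=window)   # keyword hits of recent tweets
            self.counts = Counter()

        def push(self, tweet_text):
            if len(self.window) == self.window.maxlen:
                for k in self.window[0]:
                    self.counts[k] -= 1          # expire the oldest tweet's hits
            hits = {k for k in self.keywords if k in tweet_text.lower()}
            self.window.append(hits)
            self.counts.update(hits)
            return hits

    monitor = KeywordTrendMonitor(["benghazi", "attack", "embassy"])
    monitor.push("Reports of an attack near the US mission in Benghazi")
    print(monitor.counts.most_common())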

Gupta, M.K., Govil, M.C., Singh, G..  2014.  A context-sensitive approach for precise detection of cross-site scripting vulnerabilities. Innovations in Information Technology (INNOVATIONS), 2014 10th International Conference on. :7-12.

Currently, dependence on web applications is increasing rapidly for social communication, health services, financial transactions and many other purposes. Unfortunately, the presence of cross-site scripting (XSS) vulnerabilities in these applications allows malicious users to steal sensitive information, install malware, and perform various other malicious operations. Researchers have proposed various approaches and developed tools to detect XSS vulnerabilities in the source code of web applications. However, existing approaches and tools are not free from false positive and false negative results. In this paper, we propose a taint analysis and defensive programming based, HTML context-sensitive approach for precise detection of XSS vulnerabilities in the source code of PHP web applications. It also provides automatic suggestions to improve the vulnerable source code. Preliminary experiments and results on test subjects show that the proposed approach is more efficient than existing ones.
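
The role of HTML context in such an approach can be pictured with a small Python sketch (the paper itself targets PHP): the safe encoding for tainted data depends on where in the document it is emitted, which is exactly what context-insensitive detectors and sanitizers miss.

    import html, json

    # Escaping must match the HTML context where tainted data lands; a single
    # one-size-fits-all escaper is a classic source of XSS false negatives.
    def sanitize(value, context):
        if context == "html_text":        # e.g. <p>{value}</p>
            return html.escape(value, quote=False)
        if context == "html_attribute":   # e.g. <input value="{value}">
            return html.escape(value)     # also escapes quote characters
        if context == "js_string":        # e.g. <script>var x = {value};</script>
            # Quoted JS string literal; a real implementation must also
            # escape '<' and '/' to prevent </script> breakout.
            return json.dumps(value)
        raise ValueError(f"unknown output context: {context}")

    tainted = '"><script>alert(1)</script>'
    print(sanitize(tainted, "html_attribute"))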

Sanger, J., Richthammer, C., Hassan, S., Pernul, G..  2014.  Trust and Big Data: A Roadmap for Research. Database and Expert Systems Applications (DEXA), 2014 25th International Workshop on. :278-282.

We are currently living in the age of Big Data, which comes with the challenge of grasping the golden opportunities at hand. This mixed blessing also dominates the relation between Big Data and trust. On the one side, large amounts of trust-related data can be utilized to establish innovative data-driven approaches for reputation-based trust management. On the other side, this is intrinsically tied to the trust we can put in the origins and quality of the underlying data. In this paper, we address both sides of trust and Big Data by structuring the problem domain and presenting current research directions and inter-dependencies. Based on this, we define focal issues which serve as future research directions on the way to our vision of Next Generation Online Trust within the FORSEC project.

Arimura, S., Fujita, M., Kobayashi, S., Kani, J., Nishigaki, M., Shiba, A..  2014.  i/k-Contact: A context-aware user authentication using physical social trust. Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on. :407-413.

In recent years, with growing demands towards big data applications, research on context-awareness has once again become active. This paper proposes a new type of context-aware user authentication that controls the authentication level of users, using the context of the “physical trust relationship” that is built between users by visual contact. In our proposal, authentication control is carried out by two mechanisms: “i-Contact” and “k-Contact”. i-Contact is the mechanism that visually confirms the user (the owner of a mobile device) using the surrounding users' eyes. The authenticity of users can be reliably assessed by these people (witnesses), even when the user exhibits ambiguous behavior. k-Contact is the mechanism that dynamically changes the authentication level of each user using the context information collected through i-Contact. Once a user is authenticated by eyewitness reports, the user is no longer prompted for a password to unlock his/her mobile device and/or to access confidential resources. Thus, by leveraging the proposed authentication system, usability can be securely enhanced for trusted users only. At the same time, our proposal anticipates the promotion of physical social communication, as face-to-face communication between users is triggered by the proposed authentication system.
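
A minimal sketch of the k-Contact idea as described above: the required authentication strength drops as trusted witnesses visually confirm the user. The levels, thresholds, and function names are illustrative assumptions, not the paper's parameters.

    AUTH_LEVELS = {0: "password + one-time code", 1: "password", 2: "none (unlocked)"}

    def auth_level(witness_confirmations: int, k: int = 2) -> str:
        # More eyewitness confirmations (i-Contact events) -> weaker prompt.
        if witness_confirmations >= k:
            return AUTH_LEVELS[2]
        if witness_confirmations == k - 1:
            return AUTH_LEVELS[1]
        return AUTH_LEVELS[0]

    for n in range(3):
        print(n, "witnesses ->", auth_level(n))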

Buranasaksee, U., Porkaew, K., Supasitthimethee, U..  2014.  AccAuth: Accounting system for OAuth protocol. Applications of Digital Information and Web Technologies (ICADIWT), 2014 Fifth International Conference on the. :8-13.

When a user accesses a resource, the accounting process at the server side keeps track of the resource usage so as to charge the user. In cloud computing, a user may use more than one service provider and may need two independent service providers to work together. In this user-centric context, the user is the owner of the information and has the right to authorize a third-party application to access the protected resource on the user's behalf. Therefore, the user also needs to monitor the authorized resource usage he granted to third-party applications. However, the existing accounting protocols were proposed to monitor resource usage in terms of how the user uses the resource from the service provider. This paper proposes a user-centric accounting model called AccAuth, which adds an accounting layer to the OAuth protocol. A prototype was implemented, and the proposed model was evaluated against the standard requirements. The results showed that AccAuth passed all the requirements.
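
A minimal sketch of what such an accounting layer could record per delegated OAuth token; the interface and record fields are assumptions for illustration, not the AccAuth specification.

    import time
    from collections import defaultdict

    class AccountingLayer:
        """Per-token usage ledger in front of OAuth-protected resources (sketch)."""
        def __init__(self):
            self.ledger = defaultdict(list)   # access_token -> usage records

        def record(self, access_token, resource, units):
            self.ledger[access_token].append(
                {"ts": time.time(), "resource": resource, "units": units})

        def usage(self, access_token):
            # What the resource owner sees when auditing a third-party app.
            return sum(r["units"] for r in self.ledger[access_token])

    acc = AccountingLayer()
    acc.record("token-abc", "/photos/42", units=1)
    acc.record("token-abc", "/photos/43", units=1)
    print(acc.usage("token-abc"))   # 2 units billed to this delegation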

2015-05-04
Hummen, R., Shafagh, H., Raza, S., Voigt, T., Wehrle, K..  2014.  Delegation-based authentication and authorization for the IP-based Internet of Things. Sensing, Communication, and Networking (SECON), 2014 Eleventh Annual IEEE International Conference on. :284-292.

IP technology for resource-constrained devices enables transparent end-to-end connections between a vast variety of devices and services in the Internet of Things (IoT). To protect these connections, several variants of traditional IP security protocols have recently been proposed for standardization, most notably the DTLS protocol. In this paper, we identify significant resource requirements for the DTLS handshake when employing public-key cryptography for peer authentication and key agreement purposes. These overheads particularly hamper secure communication for memory-constrained devices. To alleviate these limitations, we propose a delegation architecture that offloads the expensive DTLS connection establishment to a delegation server. By handing over the established security context to the constrained device, our delegation architecture significantly reduces the resource requirements of DTLS-protected communication for constrained devices. Additionally, our delegation architecture naturally provides authorization functionality by leveraging the central role of the delegation server in the initial connection establishment. Hence, in this paper, we present a comprehensive, yet compact solution for authentication, authorization, and secure data transmission in the IP-based IoT. The evaluation results show that, compared to a public-key-based DTLS handshake, our delegation architecture reduces memory overhead by 64%, computations by 97%, and network transmissions by 68%.
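
A structural sketch of the handoff, not the paper's protocol: the class interfaces and the simplified context fields are assumptions, and the expensive public-key handshake itself is elided since it would require a real DTLS stack. The point is that after the handover the device only performs symmetric-key operations.

    from dataclasses import dataclass

    @dataclass
    class SecurityContext:
        master_secret: bytes   # session state negotiated by the delegation server
        cipher_suite: str
        peer_identity: str

    class ConstrainedDevice:
        def install_context(self, ctx: SecurityContext) -> None:
            # From here on, only symmetric-key record protection is needed;
            # the costly public-key handshake never runs on the device.
            self.ctx = ctx

    class DelegationServer:
        def establish_session(self, peer_identity: str) -> SecurityContext:
            # Placeholder for the public-key DTLS handshake the server runs
            # (and authorizes) on the constrained device's behalf.
            return SecurityContext(b"\x00" * 48, "TLS_PSK_WITH_AES_128_CCM_8",
                                   peer_identity)

    server, device = DelegationServer(), ConstrainedDevice()
    device.install_context(server.establish_session("client.example.org"))
    print(device.ctx.cipher_suite)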

Gimenez, A., Gamblin, T., Rountree, B., Bhatele, A., Jusufi, I., Bremer, P.-T., Hamann, B..  2014.  Dissecting On-Node Memory Access Performance: A Semantic Approach. High Performance Computing, Networking, Storage and Analysis, SC14: International Conference for. :166-176.

Optimizing memory access is critical for performance and power efficiency. CPU manufacturers have developed sampling-based performance measurement units (PMUs) that report precise costs of memory accesses at specific addresses. However, this data is too low-level to be meaningfully interpreted and contains an excessive amount of irrelevant or uninteresting information. We have developed a method to gather fine-grained memory access performance data for specific data objects and regions of code with low overhead and attribute semantic information to the sampled memory accesses. This information provides the context necessary to more effectively interpret the data. We have developed a tool that performs this sampling and attribution and used the tool to discover and diagnose performance problems in real-world applications. Our techniques provide useful insight into the memory behaviour of applications and allow programmers to understand the performance ramifications of key design decisions: domain decomposition, multi-threading, and data motion within distributed memory systems.
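
The attribution step described above can be pictured with a toy address-range lookup; the allocation table and object names below are hypothetical stand-ins for what an allocation tracker would record alongside PMU address samples.

    import bisect

    # Hypothetical data-object allocations: (start address, size, name),
    # sorted by start address.
    allocations = [(0x1000, 0x400, "grid.cells"),
                   (0x2000, 0x800, "particles.pos"),
                   (0x3000, 0x100, "rng_state")]
    starts = [a[0] for a in allocations]

    def attribute(sample_addr):
        # Map a sampled memory access to the enclosing data object.
        i = bisect.bisect_right(starts, sample_addr) - 1
        if i >= 0:
            start, size, name = allocations[i]
            if sample_addr < start + size:
                return name
        return "<unattributed>"

    print(attribute(0x2010))   # -> particles.pos
    print(attribute(0x0500))   # -> <unattributed>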

2015-05-01
Achouri, A., Hlaoui, Y.B., Jemni Ben Ayed, L..  2014.  Institution Theory for Services Oriented Applications. Computer Software and Applications Conference Workshops (COMPSACW), 2014 IEEE 38th International. :516-521.

In this paper, we present our approach for the transformation of workflow applications based on institution theory. The workflow application is modeled with a UML Activity Diagram (UML AD). Then, for formal verification purposes, the graphical model is translated into an Event-B specification. Institution theory is used at two levels. First, we define local semantics for UML AD and Event-B specifications using a categorical description of each. Second, we define an institution comorphism to link the two defined institutions. Thanks to the use of institution theory, the theoretical foundations of our approach can be studied within a single mathematical framework. The Event-B specification resulting from the transformation approach is then used for the formal verification of functional properties and of the absence of problems such as deadlock. Additionally, through the institution comorphism, we define the semantic correctness and coherence of the model transformation.
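
For readers unfamiliar with the framework, the standard definition (due to Goguen and Burstall) that such work builds on can be stated compactly; the satisfaction condition below is what allows the UML AD and Event-B semantics to be related soundly across signature changes.

    An institution $\mathcal{I} = (\mathbf{Sign}, \mathit{Sen}, \mathit{Mod}, \models)$
    consists of a category $\mathbf{Sign}$ of signatures, a functor
    $\mathit{Sen} : \mathbf{Sign} \to \mathbf{Set}$ giving sentences, a functor
    $\mathit{Mod} : \mathbf{Sign}^{\mathrm{op}} \to \mathbf{Cat}$ giving models, and,
    for each signature $\Sigma$, a satisfaction relation
    ${\models_{\Sigma}} \subseteq |\mathit{Mod}(\Sigma)| \times \mathit{Sen}(\Sigma)$
    such that for every signature morphism $\varphi : \Sigma \to \Sigma'$, every
    $M' \in |\mathit{Mod}(\Sigma')|$ and every $e \in \mathit{Sen}(\Sigma)$:
    \[
      M' \models_{\Sigma'} \mathit{Sen}(\varphi)(e)
      \quad\Longleftrightarrow\quad
      \mathit{Mod}(\varphi)(M') \models_{\Sigma} e .
    \]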

Ammann, P., Delamaro, M.E., Offutt, J..  2014.  Establishing Theoretical Minimal Sets of Mutants. Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on. :21-30.

Mutation analysis generates tests that distinguish variations, or mutants, of an artifact from the original. Mutation analysis is widely considered to be a powerful approach to testing, and hence is often used to evaluate other test criteria in terms of mutation score, which is the fraction of mutants that are killed by a test set. But mutation analysis is also known to produce large numbers of redundant mutants, and these mutants can inflate the mutation score. While mutation approaches broadly characterized as reduced mutation try to eliminate redundant mutants, the literature lacks a theoretical result that articulates just how many mutants are needed in any given situation. Hence, there is, at present, no way to characterize the contribution of, for example, a particular approach to reduced mutation with respect to any theoretical minimal set of mutants. This paper's contribution is to provide such a theoretical foundation for mutant set minimization. The central theoretical result of the paper shows how to efficiently minimize mutant sets with respect to a set of test cases. We evaluate our method with a widely-used benchmark.
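
One simple quadratic formulation of minimization over a kill matrix, sketched as an illustration rather than as the paper's algorithm: discard any mutant whose kill set strictly contains another mutant's (killing the latter guarantees killing the former), and keep one representative per identical kill set.

    def minimal_mutants(kill):
        # kill: dict mutant -> frozenset of tests that kill it (the kill matrix).
        alive = {m: s for m, s in kill.items() if s}   # drop equivalent mutants
        keep = []
        for m, s in alive.items():
            if any(o != m and alive[o] < s for o in alive):   # strictly subsumed
                continue
            if any(alive[o] == s for o in keep):              # duplicate kill set
                continue
            keep.append(m)
        return keep

    kill = {"m1": frozenset({"t1"}), "m2": frozenset({"t1", "t2"}),
            "m3": frozenset(), "m4": frozenset({"t3"}), "m5": frozenset({"t3"})}
    print(minimal_mutants(kill))  # ['m1', 'm4']: killing these kills every
                                  # other non-equivalent mutant as well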

2015-04-30
Chai, H., Zhao, W..  2014.  Towards trustworthy complex event processing. Software Engineering and Service Science (ICSESS), 2014 5th IEEE International Conference on. :758-761.

Complex event processing has become an important technology for big data and intelligent computing because it facilitates the creation of actionable, situational knowledge from potentially large amounts of events in soft real time. Complex event processing can be instrumental for many mission-critical applications, such as business intelligence, algorithmic stock trading, and intrusion detection. Hence, the servers that carry out complex event processing must be made trustworthy. In this paper, we present a threat analysis on complex event processing systems and describe a set of mechanisms that can be used to control various threats. By exploiting the application semantics of typical event processing operations, we are able to design lightweight mechanisms that incur minimal runtime overhead, appropriate for soft real-time computing.

Cailleux, L., Bouabdallah, A., Bonnin, J.-M..  2014.  A confident email system based on a new correspondence model. Advanced Communication Technology (ICACT), 2014 16th International Conference on. :489-492.

Despite all the current controversies, the email service remains a success. The ease of use of its various features has contributed to its widespread adoption. In general, the email system provides all its users with the same set of features, controlled by a single monolithic policy. Such solutions are efficient but limited, because they leave no room for the concept of usage, which denotes a user's intention of communication: private, professional, administrative, official, military. The ability to efficiently send emails from mobile devices creates interesting new opportunities. We argue that the context (location, time, device, operating system, access network...) of the email sender appears as a new dimension we have to take into account to complete the picture. Context is clearly orthogonal to usage, because the same usage may require different features depending on the context. It is clear that there is no global policy meeting the requirements of all possible usages and contexts. To address this problem, we propose to define a correspondence model which, for a given usage and context, derives a correspondence type encapsulating the exact set of required features. With this model, it becomes possible to define an advanced email system which can cope with multiple policies instead of a single monolithic one. By allowing a user to select the exact policy matching her needs, we argue that our approach reduces risk-taking and allows the email system to evolve from a trusted one into a confident one.
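
The correspondence model can be pictured as a lookup from (usage, context) pairs to feature sets; the usages, contexts, and feature names below are illustrative assumptions, not the paper's taxonomy.

    # Toy correspondence model: (usage, context) -> required feature set.
    CORRESPONDENCE = {
        ("professional", "corporate_desktop"): {"signature", "archiving"},
        ("professional", "mobile"): {"signature", "archiving", "device_auth"},
        ("private", "mobile"): {"encryption_optional"},
        ("official", "corporate_desktop"): {"signature", "encryption",
                                            "registered_delivery"},
    }

    def correspondence_type(usage, context):
        # Derive the exact feature set (the "correspondence type") to apply.
        try:
            return CORRESPONDENCE[(usage, context)]
        except KeyError:
            raise ValueError(f"no policy defined for ({usage}, {context})") from None

    print(correspondence_type("professional", "mobile"))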

Yang, Y., Falcao, H., Delicado, N., Ortony, A..  2014.  Reducing Mistrust in Agent-Human Negotiations. Intelligent Systems, IEEE. 29:36-43.

Face-to-face negotiations always benefit if the interacting individuals trust each other. But trust is also important in online interactions, even for humans interacting with a computational agent. In this article, the authors describe a behavioral experiment to determine whether, by volunteering information that it need not disclose, a software agent in a multi-issue negotiation can alleviate mistrust in human counterparts who differ in their propensities to mistrust others. Results indicated that when cynical, mistrusting humans negotiated with an agent that proactively communicated its issue priority and invited reciprocation, there were significantly more agreements and better utilities than when the agent didn't volunteer such information. Furthermore, when the agent volunteered its issue priority, the outcomes for mistrusting individuals were as good as those for trusting individuals, for whom the volunteering of issue priority conferred no advantage. These findings provide insights for designing more effective, socially intelligent agents in online negotiation settings.

Dondio, P., Longo, L..  2014.  Computing Trust as a Form of Presumptive Reasoning. Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on. 2:274-281.

This study describes and evaluates a novel trust model for a range of collaborative applications. The model assumes that humans routinely choose to trust their peers by relying on a few recurrent presumptions, which are domain independent and which form a recognisable trust expertise. We refer to these presumptions as trust schemes, a specialised version of Walton's argumentation schemes. Evidence is provided about the efficacy of trust schemes using a detailed experiment on an online community of 80,000 members. Results show that the proposed trust schemes are more effective in trust computation when they are combined together and when their plausibility in the selected context is considered.

Hao, F., Min, G., Lin, M., Luo, C., Yang, L.T..  2014.  MobiFuzzyTrust: An Efficient Fuzzy Trust Inference Mechanism in Mobile Social Networks. Parallel and Distributed Systems, IEEE Transactions on. 25:2944-2955.

Mobile social networks (MSNs) facilitate connections between mobile users and allow them to find other potential users who have similar interests through mobile devices, communicate with them, and benefit from their information. As MSNs are distributed public virtual social spaces, the available information may not be trustworthy to all. Therefore, mobile users are often at risk since they may not have any prior knowledge about others who are socially connected. To address this problem, trust inference plays a critical role in establishing social links between mobile users in MSNs. Taking into account the non-semantic representation of trust between users in existing trust models for social networks, this paper proposes a new fuzzy inference mechanism, namely MobiFuzzyTrust, for inferring trust semantically from one mobile user to another that may not be directly connected in the trust graph of MSNs. First, a mobile context including an intersection of prestige of users, location, time, and social context is constructed. Second, a mobile context-aware trust model is devised to evaluate the trust value between two mobile users efficiently. Finally, a fuzzy linguistic technique is used to express the trust between two mobile users and enhance a human's understanding of trust. A real-world mobile dataset is adopted to evaluate the performance of the MobiFuzzyTrust inference mechanism. The experimental results demonstrate that MobiFuzzyTrust can efficiently infer trust with high precision.
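
A sketch of the final linguistic step described above: mapping a numeric trust value in [0, 1] to linguistic terms via triangular membership functions. The term names and breakpoints are illustrative assumptions, not the paper's membership functions.

    def triangle(x, a, b, c):
        # Triangular membership function peaking at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    TERMS = {"low": (-0.01, 0.0, 0.5),
             "medium": (0.0, 0.5, 1.0),
             "high": (0.5, 1.0, 1.01)}

    def linguistic_trust(value):
        memberships = {t: triangle(value, *abc) for t, abc in TERMS.items()}
        return max(memberships, key=memberships.get), memberships

    print(linguistic_trust(0.85))   # -> ('high', {'low': 0.0, 'medium': 0.3, 'high': 0.7})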

Sousa, S., Dias, P., Lamas, D..  2014.  A model for Human-computer trust: A key contribution for leveraging trustful interactions. Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on. :1-6.

This article addresses trust in computer systems as a social phenomenon, which depends on the type of relationship that is established through the computer, or with other individuals. It starts by theoretically contextualizing trust, and then situates trust in the field of computer science. It then describes the proposed model, which builds on what one perceives to be trustworthy and is influenced by a number of factors such as the history of participation and the user's perceptions. It ends by situating the proposed model as a key contribution for leveraging trustful interactions, and proposes that it serve as a complement to address users' trust needs in Human-Computer Interaction and Computer-mediated Interactions.