Bibliography
ARGOS is a web service we implemented to offer face-recognition Authentication-as-a-Service (AaaS) to mobile and desktop (via the web browser) end users. The authentication service may be used by third-party service organizations to enhance the offering to their customers. ARGOS implements a secure face-recognition-based authentication service that aims to provide simple and intuitive tools for third-party service providers (such as PayPal, banks, and e-commerce sites) to replace passwords with face biometrics. It supports authentication from any device with a front-facing 2D or 3D camera (mobile phones, laptops, tablets, etc.) and almost any operating system (iOS, Android, Windows, and Ubuntu Linux).
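As a rough illustration of how a third-party service might consume such an AaaS endpoint, the sketch below shows a client-side authentication round trip in Python. The URL, request fields, and token format are hypothetical assumptions; the abstract does not specify ARGOS's wire protocol.

```python
# Hypothetical client-side round trip against an ARGOS-style AaaS endpoint.
# The URL, request fields, and response shape are illustrative assumptions,
# not the documented ARGOS API.
import requests

ARGOS_URL = "https://argos.example.com/api/v1/authenticate"  # hypothetical

def authenticate(user_id, image_path, api_key):
    """POST a frontal face image; return a session token on success."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            ARGOS_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            data={"user_id": user_id},
            files={"face_image": f},
            timeout=10,
        )
    if resp.status_code == 200:
        return resp.json().get("session_token")
    return None  # authentication rejected or service error

token = authenticate("alice", "alice_selfie.jpg", api_key="THIRD_PARTY_KEY")
```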
Personal data are collected and processed by a great number of organizations in the course of their legal activities. Organizations are legally bound to maintain the confidentiality and security of personal data, and are therefore required to keep access logs for personal information. Depending on needs and capacity, personal data can be exposed to users through platforms such as file systems, databases, and web services. The web service platform is a popular alternative, since it is autonomous and isolates the data source from the user. This paper discusses how to log access to personal data exposed via web services. As an alternative to the classical method, in which logs are recorded and stored by client applications, a mechanism that builds a central audit log in an API manager is investigated. A model policy exemplifying the central logging method is designed, and its advantages and disadvantages are explored. It is concluded that this model can be employed to record audit logs centrally.
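The central-logging mechanism that the abstract contrasts with client-side logging can be pictured as a small gateway that writes an audit entry for every personal-data request before forwarding it. The Python sketch below is a minimal illustration under our own assumptions about the log schema and endpoints; it is not the paper's actual API-manager policy.

```python
# Sketch of central audit logging at an API-manager-style gateway: the proxy
# records every access itself, so client applications no longer keep logs.
# The log schema and file-based store are illustrative assumptions.
import json
import time
import urllib.error
import urllib.request

AUDIT_LOG = "central_audit.log"  # stand-in for a central log store

def log_access(client_id, resource, status):
    entry = {"ts": time.time(), "client": client_id,
             "resource": resource, "status": status}
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def proxy_request(client_id, backend_url):
    """Forward the call to the data web service and audit it centrally."""
    try:
        with urllib.request.urlopen(backend_url, timeout=5) as resp:
            body, status = resp.read(), resp.status
    except urllib.error.HTTPError as err:
        body, status = b"", err.code
    log_access(client_id, backend_url, status)  # logged at the gateway,
    return body                                 # not by the client app
```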
The Agave Platform first appeared in 2011 as a pilot project, known as Foundation, for the iPlant Collaborative [11]. In its first two years, Foundation saw over 40% growth per month, supporting more than 1,000 clients, more than 600 applications, and 4 HPC systems at 3 centers across the US. It also gained users outside of plant biology. To better serve the needs of the general open science community, we rewrote Foundation as a scalable, cloud-native application and named it the Agave Platform. In this paper we present the Agave Platform, a Science-as-a-Service (ScaaS) platform for reproducible science. We provide a brief history and technical overview of the project, and highlight three case studies that leverage the platform to create synergistic value for their users.
The Extensible Markup Language (XML) is a complex language, and consequently, XML-based protocols are susceptible to entire classes of implicit and explicit security problems. Message formats in XML-based protocols are usually specified in XML Schema, and as a first line of defense, schema validation should reject malformed input. However, extension points in most protocol specifications break validation. Extension points are wildcards and are considered best practice for loose composition, but they also enable an attacker to add unchecked content to a document, e.g., for a signature wrapping attack. This paper introduces datatyped XML visibly pushdown automata (dXVPAs) as a language representation for mixed-content XML and presents an incremental learner that infers a dXVPA from example documents. The learner generalizes XML types and datatypes in terms of automaton states and transitions, and an inferred dXVPA converges to a good-enough approximation of the true language. The automaton is free from extension points and capable of stream validation, e.g., as an anomaly detector for XML-based protocols. To deal with adversarial training data, two poisoning scenarios are considered: a poisoning attack is either uncovered at a later time or remains hidden. Unlearning can therefore remove an identified poisoning attack from a dXVPA, and sanitization trims low-frequency states and transitions to get rid of hidden attacks. All algorithms were evaluated in four scenarios, including a web service implemented with Apache Axis2 and Apache Rampart, where attacks were simulated. In all scenarios, the learned automaton had zero false positives and outperformed traditional schema validation.
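To make the stream-validation idea concrete, here is a toy Python validator in the spirit of a visibly pushdown automaton: open tags push the current state, close tags pop it, and a hand-written transition table stands in for a learned dXVPA. It checks element structure only and ignores datatypes, so it is a deliberate simplification of the paper's construction.

```python
# Toy VPA-style stream validator: open tags push, close tags pop, and a
# transition table (standing in for a learned dXVPA) decides which child
# elements are allowed. Unknown content, as injected through an extension
# point, is rejected. Real dXVPAs also type text content; this does not.
import xml.parsers.expat

class ToyVPA:
    def __init__(self, transitions):
        # transitions: {(state, tag): next_state}; states are plain labels
        self.transitions = transitions

    def validate(self, document):
        stack, state, ok = [], "root", True

        def start(tag, attrs):
            nonlocal state, ok
            if (state, tag) not in self.transitions:
                ok = False                    # unchecked content detected
            stack.append(state)               # push on the open tag
            state = self.transitions.get((state, tag), "reject")

        def end(tag):
            nonlocal state
            state = stack.pop()               # pop on the matching close tag

        p = xml.parsers.expat.ParserCreate()
        p.StartElementHandler = start
        p.EndElementHandler = end
        p.Parse(document, True)               # raises ExpatError if not XML
        return ok and not stack

vpa = ToyVPA({("root", "order"): "in_order", ("in_order", "item"): "in_item"})
print(vpa.validate("<order><item/></order>"))     # True: learned structure
print(vpa.validate("<order><wrapped/></order>"))  # False: injected element
```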
The rapid development of cloud computing has resulted in the emergence of numerous web services on the Internet. Selecting a suitable cloud service is becoming a major problem for users, especially non-professionals. Quality of Service (QoS) is considered the criterion for judging web services, and several Collaborative Filtering (CF)-based QoS prediction methods have been proposed in recent years. QoS values observed by different users may vary widely owing to differences in network conditions and geographical location. Moreover, QoS data provided by untrusted users can severely distort prediction accuracy. However, most existing methods seldom take both factors into consideration. In this paper, we present a trust-aware and location-based approach to web service QoS prediction: a trust value for each user is evaluated before the similarity calculation, and location is taken into account when selecting similar neighbors. A series of experiments is performed on a real-world QoS dataset covering 339 service users and 5,825 services. The experimental analysis shows that the accuracy of our method is much higher than that of other CF-based methods.
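A minimal sketch of such a prediction pipeline is given below. The trust heuristic (agreement with per-service medians) and the exact-region neighbor filter are illustrative stand-ins of our own; the abstract does not spell out the paper's trust evaluation or similarity measure.

```python
# Sketch: weigh users by a trust score, keep trusted same-region neighbors,
# and predict a missing QoS value by trust-weighted averaging. The trust
# heuristic and region filter are simplifying assumptions, not the paper's
# exact formulas.
import numpy as np

def trust_scores(Q):
    """Trust = closeness of a user's QoS reports to per-service medians."""
    med = np.nanmedian(Q, axis=0)
    dev = np.nanmean(np.abs(Q - med), axis=1)
    return 1.0 / (1.0 + dev)  # in (0, 1]; large deviation -> low trust

def predict(Q, regions, u, s, k=3, min_trust=0.5):
    trust = trust_scores(Q)
    cands = [v for v in range(Q.shape[0])
             if v != u and not np.isnan(Q[v, s])
             and trust[v] >= min_trust and regions[v] == regions[u]]
    def sim(v):  # similarity: negative mean abs difference on co-rated services
        mask = ~np.isnan(Q[u]) & ~np.isnan(Q[v])
        return -np.mean(np.abs(Q[u][mask] - Q[v][mask])) if mask.any() else -np.inf
    top = sorted(cands, key=sim, reverse=True)[:k]
    if not top:
        return np.nan
    return float(np.average([Q[v, s] for v in top],
                            weights=[trust[v] for v in top]))

# 4 users x 3 services; NaN marks the value to predict (user 0, service 2).
Q = np.array([[0.8, 1.2, np.nan],
              [0.9, 1.1, 2.0],
              [0.7, 1.3, 2.2],
              [5.0, 9.0, 9.5]])  # last row: an untrusted outlier
print(predict(Q, regions=["EU", "EU", "EU", "EU"], u=0, s=2))
```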
Attacks against websites are increasing rapidly with the expansion of web services. The growing number and diversity of web services make such attacks difficult to prevent, given the many known vulnerabilities in websites. To overcome this problem, it is necessary to collect the most recent attacks using decoy web honeypots and to implement countermeasures against malicious threats. Web honeypots collect not only malicious accesses by attackers but also benign accesses such as those by web search crawlers. Thus, it is essential to automatically identify malicious accesses in collected data that mixes malicious and benign traffic. In particular, detecting vulnerability scanning, a preliminary step of many attacks, is important for preventing them. In this study, we focus on distinguishing web-crawling accesses from vulnerability scanning, since the two are too similar to be told apart from individual requests. We propose a feature vector that captures properties of collective accesses, e.g., the intervals between request arrivals and the dispersion of source port numbers, observed across multiple honeypots deployed in different networks. Through an evaluation using data collected from 37 honeypots in a real network, we show that such collective-access features are advantageous for classifying vulnerability scanning and crawling.
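The collective-access features named in the abstract can be made concrete with a short Python sketch: for each source, it derives the mean inter-arrival time of requests, the dispersion of source port numbers, and the number of distinct honeypots hit. The input format is our own assumption; the paper feeds such vectors into a classifier.

```python
# Sketch of the collective-access features: inter-arrival intervals, source
# port dispersion, and honeypot coverage per source. The tuple format is an
# illustrative assumption about how honeypot logs might be aggregated.
import statistics

def feature_vector(accesses):
    """accesses: list of (timestamp, src_port, honeypot_id) for one source."""
    times = sorted(t for t, _, _ in accesses)
    ports = [p for _, p, _ in accesses]
    pots = {h for _, _, h in accesses}
    intervals = [b - a for a, b in zip(times, times[1:])] or [0.0]
    return {
        "mean_interval": statistics.mean(intervals),  # scanners burst fast
        "port_dispersion": statistics.pstdev(ports),  # scanners churn ports
        "honeypots_hit": len(pots),                   # scans sweep networks
    }

# A crawler paces its requests and reuses ports; a scanner does neither.
crawler = [(0.0, 51000, "hp1"), (5.1, 51000, "hp1"), (10.3, 51002, "hp1")]
scanner = [(0.0, 40001, "hp1"), (0.1, 40733, "hp2"),
           (0.2, 41980, "hp3"), (0.3, 43111, "hp4")]
print(feature_vector(crawler))
print(feature_vector(scanner))
```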
Quality of Service (QoS) has been considered a significant criterion for choosing among functionally similar web services. Most research focuses on QoS search over certain (deterministic) data, which may not cover some practical scenarios, and recent approaches to uncertain web service QoS deal only with discrete data domains. In this paper, we address QoS search under continuous probability distributions. We define two kinds of queries over uncertain QoS and derive optimization approaches for specific distributions; based on these, the search is extended to the general case. Experiments show the feasibility of the proposed methods.
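As one concrete reading of a QoS query under a continuous distribution, the sketch below models each service's response time as a Normal distribution and answers a probabilistic threshold query. The distributions, parameters, and query form are illustrative assumptions, not the paper's definitions.

```python
# Probabilistic threshold query over continuously distributed QoS: return
# services whose probability of responding within t_ms is at least p. The
# Normal model and the numbers are assumptions made for illustration.
from statistics import NormalDist

SERVICES = {            # name: (mean latency in ms, standard deviation)
    "svc_a": (120.0, 15.0),
    "svc_b": (100.0, 60.0),
    "svc_c": (150.0, 5.0),
}

def threshold_query(t_ms, p):
    """Services with P(latency <= t_ms) >= p, most reliable first."""
    scored = {name: NormalDist(mu, sd).cdf(t_ms)
              for name, (mu, sd) in SERVICES.items()}
    hits = [(name, prob) for name, prob in scored.items() if prob >= p]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

print(threshold_query(t_ms=140.0, p=0.9))  # only svc_a qualifies
```

Note that svc_b has the lowest mean latency yet fails the query because of its high variance; this is exactly the kind of case a deterministic QoS search would rank incorrectly.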
This paper presents a credibility model for assessing trust in Web services. The model relies on consumers' ratings, whose accuracy can be questioned due to various biases. A category of consumers known as strict raters is usually excluded from the process of reaching a majority consensus; we demonstrate that this exclusion is unwarranted. The proposed model reduces the gap between these consumers' ratings and the current majority rating, and fuzzy clustering is used to compute consumers' credibility. A set of experiments is carried out to validate the model.
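To show how fuzzy clustering can yield graded credibility instead of excluding strict raters outright, here is a small self-contained sketch over one service's ratings. The clustering is textbook fuzzy c-means, not necessarily the paper's exact model, and the ratings are invented.

```python
# Fuzzy c-means over consumers' ratings of one service: each consumer's
# credibility is read off as their membership in the majority cluster, so a
# "strict" low rater gets a reduced, but nonzero, weight. Generic textbook
# FCM; the paper's precise credibility formula may differ.
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100):
    centers = np.linspace(x.min(), x.max(), c)   # deterministic init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))       # membership degrees
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
    return u, centers

ratings = np.array([4.5, 4.7, 4.6, 4.4, 2.0])    # last consumer is "strict"
u, centers = fuzzy_cmeans_1d(ratings)
majority = int(np.argmax(u.sum(axis=0)))         # cluster most mass backs
credibility = u[:, majority]                     # graded, not all-or-nothing
print(dict(zip(["c1", "c2", "c3", "c4", "strict"], credibility.round(2))))
```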