Biblio

Filters: Keyword is learning
2023-03-06
Gori, Monica, Volpe, Gualtiero, Cappagli, Giulia, Volta, Erica, Cuturi, Luigi F..  2021.  Embodied multisensory training for learning in primary school children. 2021 IEEE International Conference on Development and Learning (ICDL). :1–7.
Recent scientific results show that audio feedback associated with body movements can be fundamental during development for learning new spatial concepts [1], [2]. Within the weDraw project [3], [4], we have investigated how this link can be useful for learning mathematical concepts. Here we present a study investigating how mathematical skills change after multisensory training based on human-computer interaction (RobotAngle and BodyFraction activities). We show that embodied angle and fraction exploration associated with audio and visual feedback can be used in typical children to improve cognition of spatial mathematical concepts. We finally present the exploitation of our results: an online, optimized version of one of the tested activities to be used at school. The training results suggest that audio and visual feedback associated with body movements is informative for spatial learning and reinforces the idea that spatial representation development is based on sensory-motor interactions.
2022-03-23
Agana, Moses Adah, Edu, Joseph Ikpabi.  2021.  Predicting Cyber Attacks in a Proxy Server using Support Vector Machine (SVM) Learning Algorithm. 2021 IST-Africa Conference (IST-Africa). :1–11.
This study used the support vector machine (SVM) algorithm to predict Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks on a proxy server. Proxy servers are prone to attacks such as DoS and DDoS, and existing detection and prediction systems are inefficient. Three convex optimization problems using the Gaussian, linear and non-linear kernel methods were solved using the SVM module to detect the attacks. The SVM module and proxy server were implemented in Python and JavaScript respectively and made to run on a local network. Four other computers running on the same network were made to each communicate with the proxy server (two dedicated to attacking the server). The server was able to detect and filter out the malicious requests from the attacking clients. Hence, the SVM module can effectively predict cyber attacks and can be integrated into any server to detect such attacks for improved security.
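
As a rough illustration of the kind of classifier the abstract describes (not the authors' implementation), the sketch below trains scikit-learn SVMs with the linear and Gaussian (RBF) kernels on hypothetical per-client request features; the feature set and the synthetic data are assumptions made for the example.

# Minimal sketch: SVM classifiers (linear and Gaussian/RBF kernels) on
# hypothetical per-client request features, labelled benign (0) or attack (1).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic stand-in data: [requests/sec, mean inter-arrival (ms), payload bytes]
benign = rng.normal([5, 200, 800], [2, 50, 300], size=(500, 3))
attack = rng.normal([80, 12, 150], [20, 5, 80], size=(500, 3))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for kernel in ("linear", "rbf"):  # "rbf" is scikit-learn's Gaussian kernel
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0))
    clf.fit(X_tr, y_tr)
    print(kernel, "accuracy:", clf.score(X_te, y_te))
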
2021-07-27
Fatehi, Nina, Shahhoseini, HadiShahriar.  2020.  A Hybrid Algorithm for Evaluating Trust in Online Social Networks. 2020 10th International Conference on Computer and Knowledge Engineering (ICCKE). :158–162.
The accelerating popularity of Online Social Networks (OSNs), thanks to the various services they provide, is inevitable. This is why security, as a way to protect users' private data from abuse by unauthorized people, plays a vital role in OSNs. Trust evaluation is a security approach that has been utilized since the advent of OSNs. Graph-based approaches are among the most popular methods for trust evaluation. However, graph-based models need to impose limitations on the search process for finding trusted paths, which reduces trust accuracy. In this investigation, a learning-based model is proposed which, with no such limitation, is able to find the reliable users of any target user. Experimental results show a 12% improvement in trust accuracy compared to models based on the graph-based approach.
2021-05-13
Ho, Tsung-Yu, Chen, Wei-An, Huang, Chiung-Ying.  2020.  The Burden of Artificial Intelligence on Internal Security Detection. 2020 IEEE 17th International Conference on Smart Communities: Improving Quality of Life Using ICT, IoT and AI (HONET). :148–150.
Our research team has devoted many years to extracting internal malicious behavior by monitoring network traffic. We applied a deep learning approach to recognize malicious patterns within the network, but this methodology may lead to additional work to examine the results produced by AI models. Hence, this paper addresses a scenario that considers the burden of AI and proposes an idea for long-term reliable detection in future work.
2021-05-05
Cano M, Jeimy J..  2020.  Sandbox: Revindicate failure as the foundation of learning. 2020 IEEE World Conference on Engineering Education (EDUNINE). :1–6.

In an increasingly asymmetric context of both instability and permanent innovation, organizations demand new capacities and learning patterns. In this sense, supervisors have adopted the metaphor of the "sandbox" as a strategy that allows their regulated parties to experiment and test new proposals in order to study them and adjust to the established compliance frameworks. Therefore, the concept of the "sandbox" is of educational interest as a way to revindicate failure as a right in the learning process, allowing students to think, experiment, ask questions and propose ideas outside the known theories, and thus overcome the mechanistic formation rooted in many of the higher education institutions. Consequently, this article proposes the application of this concept for educational institutions as a way of resignifying what students have learned.

2020-10-12
D'Angelo, Mirko, Gerasimou, Simos, Ghahremani, Sona, Grohmann, Johannes, Nunes, Ingrid, Pournaras, Evangelos, Tomforde, Sven.  2019.  On Learning in Collective Self-Adaptive Systems: State of Practice and a 3D Framework. 2019 IEEE/ACM 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). :13–24.
Collective self-adaptive systems (CSAS) are distributed and interconnected systems composed of multiple agents that can perform complex tasks such as environmental data collection, search and rescue operations, and discovery of natural resources. By providing individual agents with learning capabilities, CSAS can cope with challenges related to distributed sensing and decision-making and operate in uncertain environments. This unique characteristic of CSAS enables the collective to exhibit robust behaviour while achieving system-wide and agent-specific goals. Although learning has been explored in many CSAS applications, selecting suitable learning models and techniques remains a significant challenge that is heavily influenced by expert knowledge. We address this gap by performing a multifaceted analysis of existing CSAS with learning capabilities reported in the literature. Based on this analysis, we introduce a 3D framework that illustrates the learning aspects of CSAS considering the dimensions of autonomy, knowledge access, and behaviour, and facilitates the selection of learning techniques and models. Finally, using example applications from this analysis, we derive open challenges and highlight the need for research on collaborative, resilient and privacy-aware mechanisms for CSAS.
2020-08-28
Parafita, Álvaro, Vitrià, Jordi.  2019.  Explaining Visual Models by Causal Attribution. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). :4167–4175.

Model explanations based on pure observational data cannot compute the effects of features reliably, due to their inability to estimate how altering each factor could affect the rest. We argue that explanations should be based on the causal model of the data and the derived intervened causal models, which represent the data distribution subject to interventions. With these models, we can compute counterfactuals: new samples that inform us of how the model reacts to feature changes in our input. We propose a novel explanation methodology based on Causal Counterfactuals and identify the limitations of current Image Generative Models in their application to counterfactual creation.

2020-06-01
Baruwal Chhetri, Mohan, Uzunov, Anton, Vo, Bao, Nepal, Surya, Kowalczyk, Ryszard.  2019.  Self-Improving Autonomic Systems for Antifragile Cyber Defence: Challenges and Opportunities. 2019 IEEE International Conference on Autonomic Computing (ICAC). :18–23.

Antifragile systems enhance their capabilities and become stronger when exposed to adverse conditions, stresses or attacks, making antifragility a desirable property for cyber defence systems that operate in contested military environments. Self-improvement in autonomic systems refers to the improvement of their self-* capabilities, so that they are able to (a) better handle previously known (anticipated) situations, and (b) deal with previously unknown (unanticipated) situations. In this position paper, we present a vision of using self-improvement through learning to achieve antifragility in autonomic cyber defence systems. We first enumerate some of the major challenges associated with realizing distributed self-improvement. We then propose a reference model for middleware frameworks for self-improving autonomic systems and a set of desirable features of such frameworks.

2019-11-04
Li, Teng, Ma, Jianfeng, Pei, Qingqi, Shen, Yulong, Sun, Cong.  2018.  Anomalies Detection of Routers Based on Multiple Information Learning. 2018 International Conference on Networking and Network Applications (NaNA). :206–211.

Routers are important devices in networks that carry the burden of transmitting information among the communication devices on the Internet. If a malicious adversary wants to intercept information or paralyze the network, it can directly attack the routers to achieve its goals. Thus, protecting router security is of great importance. However, router systems are notoriously difficult to understand or diagnose because of their inaccessibility and heterogeneity. The common way of gaining access to the router system and detecting anomalous behaviors is to inspect the router syslogs or monitor the packets of information flowing to the routers. These approaches diagnose the routers from only one aspect and do not consider them from multiple views. In this paper, we propose an approach to detect anomalies and faults in routers using multiple information learning. We use the routers' information not from the developer's view but from the user's view, which does not require any expert knowledge. First, we perform offline learning to map benign or corrupted user actions to the corresponding syslogs. Then, we decide whether the input routers' conditions are poor or not with clustering. During the detection phase, we use the distance between an event and the clusters to decide whether it is an anomalous event, and we can provide the corresponding solutions. We applied our approach for three months in a university network containing Cisco, Huawei and D-Link routers. We aligned our experiment with former work as a baseline for comparison. Our approach achieves 89.6% accuracy in detecting attacks, which is 5.1% higher than the former work. The results show that our approach performs within limited time and memory usage and has high detection and low false positive rates.
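
The detection step described above (cluster benign events offline, then flag events that lie far from every cluster) can be illustrated with a small sketch; the feature vectors, cluster count and threshold below are assumptions for the example, not the paper's actual pipeline.

# Minimal sketch: cluster feature vectors derived offline from benign router
# events, then flag a new event as anomalous when its distance to the nearest
# cluster centre exceeds a threshold learned from the benign data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
benign_events = rng.normal(0.0, 1.0, size=(1000, 8))  # hypothetical event features

kmeans = KMeans(n_clusters=5, n_init=10, random_state=1).fit(benign_events)
# Threshold: e.g. the 99th percentile of benign distances to the nearest centroid.
benign_dist = kmeans.transform(benign_events).min(axis=1)
threshold = np.percentile(benign_dist, 99)

def is_anomaly(event_features: np.ndarray) -> bool:
    """Return True if the event is farther from every cluster than the threshold."""
    dist = kmeans.transform(event_features.reshape(1, -1)).min(axis=1)[0]
    return dist > threshold

print(is_anomaly(rng.normal(0.0, 1.0, size=8)))  # benign-like event: likely False
print(is_anomaly(np.full(8, 6.0)))               # far from all clusters: likely True
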

2019-08-26
Paletov, Rumen, Tsankov, Petar, Raychev, Veselin, Vechev, Martin.  2018.  Inferring Crypto API Rules from Code Changes. Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation. :450–464.
Creating and maintaining an up-to-date set of security rules that match misuses of crypto APIs is challenging, as crypto APIs constantly evolve over time with new cryptographic primitives and settings, making existing ones obsolete. To address this challenge, we present a new approach to extract security fixes from thousands of code changes. Our approach consists of: (i) identifying code changes, which often capture security fixes, (ii) an abstraction that filters irrelevant code changes (such as refactorings), and (iii) a clustering analysis that reveals commonalities between semantic code changes and helps in eliciting security rules. We applied our approach to the Java Crypto API and showed that it is effective: (i) our abstraction effectively filters non-semantic code changes (over 99% of all changes) without removing security fixes, and (ii) over 80% of the code changes are security fixes identifying security rules. Based on our results, we identified 13 rules, including new ones not supported by existing security checkers.
2019-05-01
Chen, Yudong, Su, Lili, Xu, Jiaming.  2018.  Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent. Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems. :96–96.

We consider the distributed statistical learning problem over decentralized systems that are prone to adversarial attacks. This setup arises in many practical applications, including Google's Federated Learning. Formally, we focus on a decentralized system that consists of a parameter server and m working machines; each working machine keeps N/m data samples, where N is the total number of samples. In each iteration, up to q of the m working machines suffer Byzantine faults – a faulty machine in the given iteration behaves arbitrarily badly against the system and has complete knowledge of the system. Additionally, the sets of faulty machines may be different across iterations. Our goal is to design robust algorithms such that the system can learn the underlying true parameter, which is of dimension d, despite the interruption of the Byzantine attacks. In this paper, based on the geometric median of means of the gradients, we propose a simple variant of the classical gradient descent method. We show that our method can tolerate q Byzantine failures up to 2(1+ε)q ≤ m for an arbitrarily small but fixed constant ε > 0. The parameter estimate converges in O(log N) rounds with an estimation error on the order of max{√(dq/N), √(d/N)}, which is larger than the minimax-optimal error rate √(d/N) in the centralized and failure-free setting by at most a factor of √q. The total computational complexity of our algorithm is O((Nd/m) log N) at each working machine and O(md + kd log³ N) at the central server, and the total communication cost is O(md log N). We further provide an application of our general results to the linear regression problem. A key challenge that arises in the above problem is that Byzantine failures create arbitrary and unspecified dependency among the iterations and the aggregated gradients. To handle this issue in the analysis, we prove that the aggregated gradient, as a function of the model parameter, converges uniformly to the true gradient function.
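
The aggregation rule at the core of the method (the geometric median of means of the worker gradients) can be sketched as follows; this is an illustrative reimplementation under an assumed group count and toy data, not the authors' code.

# Minimal sketch: partition the m worker gradients into k groups, average each
# group, then take the geometric median of the group means (Weiszfeld iterations)
# as the robust descent direction.
import numpy as np

def geometric_median(points: np.ndarray, iters: int = 100, eps: float = 1e-8) -> np.ndarray:
    """Weiszfeld's algorithm for the geometric median of the rows of `points`."""
    median = points.mean(axis=0)
    for _ in range(iters):
        dist = np.maximum(np.linalg.norm(points - median, axis=1), eps)
        weights = 1.0 / dist
        median = (weights[:, None] * points).sum(axis=0) / weights.sum()
    return median

def median_of_means_gradient(worker_grads: np.ndarray, k: int) -> np.ndarray:
    """Group the worker gradients into k buckets, average, take the geometric median."""
    group_means = np.stack([g.mean(axis=0) for g in np.array_split(worker_grads, k)])
    return geometric_median(group_means)

# Toy usage: 20 workers, 3 of them Byzantine, gradient dimension 5.
rng = np.random.default_rng(2)
honest = rng.normal(1.0, 0.1, size=(17, 5))      # true gradient ~ all-ones vector
byzantine = rng.normal(-50.0, 1.0, size=(3, 5))  # adversarial gradients
grads = np.vstack([honest, byzantine])
print(median_of_means_gradient(grads, k=7))      # stays close to the all-ones vector
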

2017-05-22
Carlsten, Miles, Kalodner, Harry, Weinberg, S. Matthew, Narayanan, Arvind.  2016.  On the Instability of Bitcoin Without the Block Reward. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :154–167.

Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain. We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a "wealthy" block to "steal" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest. We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.
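
The key insight about reward variance can be illustrated numerically: in a fee-only regime a block collects roughly the fees that accrued since the previous block, so the per-block reward scales with the exponentially distributed inter-block time. The sketch below is a toy illustration under a constant fee-accrual rate, not the paper's mining simulator.

# Minimal sketch: compare per-block reward variability under a fixed block
# subsidy versus a fee-only regime where the reward is proportional to the
# exponentially distributed inter-block time.
import numpy as np

rng = np.random.default_rng(5)
blocks = 100_000
fee_rate = 1.0                                           # fees accruing per unit time
interarrival = rng.exponential(scale=1.0, size=blocks)   # block times, mean 1

fixed_subsidy = np.full(blocks, 1.0)   # block-reward regime: constant reward
fee_only = fee_rate * interarrival     # fee-only regime: reward tracks waiting time

print("block-reward regime:  mean", fixed_subsidy.mean(), "std", fixed_subsidy.std())
print("fee-only regime:      mean", round(fee_only.mean(), 3), "std", round(fee_only.std(), 3))
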

2017-05-17
Ostberg, Jan-Peter, Wagner, Stefan, Weilemann, Erica.  2016.  Does Personality Influence the Usage of Static Analysis Tools? An Explorative Experiment. Proceedings of the 9th International Workshop on Cooperative and Human Aspects of Software Engineering. :75–81.

There are many techniques to improve software quality. One is using automatic static analysis tools. We have observed, however, that despite the low-cost help they offer, these tools are underused and often discourage beginners. There is evidence that personality traits influence the perceived usability of software. Thus, to support beginners better, we need to understand how the workflow of people with different prevalent personality traits varies when using these tools. For this purpose, we observed users' solution strategies and correlated them with their prevalent personality traits in an exploratory study with student participants within a controlled experiment. We gathered data by screen capturing and chat protocols as well as a Big Five personality traits test. We found strong correlations between particular personality traits and different strategies for removing the findings of static code analysis, as well as between personality and tool utilization. Based on that, we offer take-away improvement suggestions. Our results imply that developers should be aware of these solution strategies and use this information to build tools that are more appealing to people with different prevalent personality traits.

2017-05-16
Conway, Dan, Chen, Fang, Yu, Kun, Zhou, Jianlong, Morris, Richard.  2016.  Misplaced Trust: A Bias in Human-Machine Trust Attribution – In Contradiction to Learning Theory. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :3035–3041.

Human-machine trust is a critical mitigating factor in many HCI instances. Lack of trust in a system can lead to system disuse whilst over-trust can lead to inappropriate use. Whilst human-machine trust has been examined extensively from within a technico-social framework, few efforts have been made to link the dynamics of trust within a steady-state operator-machine environment to the existing literature of the psychology of learning. We set out to recreate a commonly reported learning phenomenon within a trust acquisition environment: Users learning which algorithms can and cannot be trusted to reduce traffic in a city. We failed to replicate (after repeated efforts) the learning phenomena of "blocking", resulting in a finding that people consistently make a very specific error in trust assignment to cues in conditions of uncertainty. This error can be seen as a cognitive bias and has important implications for HCI.

2017-03-20
Canfora, Gerardo, Medvet, Eric, Mercaldo, Francesco, Visaggio, Corrado Aaron.  2016.  Acquiring and Analyzing App Metrics for Effective Mobile Malware Detection. Proceedings of the 2016 ACM on International Workshop on Security And Privacy Analytics. :50–57.

Android malware is becoming very effective at evading detection techniques, and traditional malware detection techniques are demonstrating their weaknesses. Signature-based detection shows at least two drawbacks: first, the detection is possible only after the malware has been identified, and the time needed to produce and distribute the signature provides attackers with a window of opportunity for spreading the malware in the wild. To solve this problem, different approaches that try to characterize the malicious behavior through the invoked system and API calls have emerged. Unfortunately, several evasion techniques have proven effective at evading detection based on system and API calls. In this paper, we propose an approach for capturing the malicious behavior in terms of device resource consumption (using a thorough set of features), which is much more difficult to camouflage. We describe a procedure, and the corresponding practical setting, for extracting those features with the aim of maximizing their discriminative power. Finally, we describe the promising results we obtained experimenting on more than 2000 applications, on which our approach exhibited an accuracy greater than 99%.

2017-03-08
Leong, F. H..  2015.  Automatic detection of frustration of novice programmers from contextual and keystroke logs. 2015 10th International Conference on Computer Science Education (ICCSE). :373–377.

Novice programmers exhibit a repertoire of affective states over time when they are learning computer programming. The modeling of frustration is important as it informs the need for pedagogical intervention for a student who may otherwise lose confidence and interest in the learning. In this paper, contextual and keystroke features of students within a Java tutoring system are used to detect the frustration of a student within a programming exercise session. Compared to the psychological sensors used in other studies, the use of contextual and keystroke logs is less obtrusive, and the equipment used (a keyboard) is ubiquitous in most learning environments. The technique of logistic regression with lasso regularization is utilized for the modeling to prevent over-fitting. The results showed that a model that uses only contextual and keystroke features achieved a prediction accuracy of 0.67 and a recall measure of 0.833. Thus, we conclude that it is possible to detect the frustration of a student by distilling both the contextual and keystroke logs within the tutoring system with an adequate level of accuracy.
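
A minimal sketch of the modeling step (logistic regression with lasso, i.e. L1, regularization) is shown below; the contextual and keystroke features and the synthetic labels are hypothetical stand-ins, not the study's data.

# Minimal sketch: L1-regularized logistic regression predicting frustration
# from hypothetical contextual and keystroke-derived features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(3)
n = 600
# Hypothetical features: [compile errors, pause length (s), backspace rate, time on task (min)]
X = rng.normal(size=(n, 4))
# Synthetic labels loosely tied to the first two features, for the sketch only.
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)  # L1 = lasso
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "recall:", recall_score(y_te, pred))
print("coefficients (L1 drives uninformative ones to zero):", clf.coef_)
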

2015-05-04
Balakrishnan, R., Parekh, R..  2014.  Learning to predict subject-line opens for large-scale email marketing. 2014 IEEE International Conference on Big Data (Big Data). :579–584.

Billions of dollars of services and goods are sold through email marketing. Subject lines have a strong influence on the open rates of e-mails, as consumers often open e-mails based on the subject. Traditionally, e-mail subject lines are compiled based on the best assessment of human editors. We propose a method to help the editors by predicting subject-line open rates by learning from past subject lines. The method derives different types of features from subject lines based on keywords, performance of past subject lines and syntax. Furthermore, we evaluate the contribution of individual subject-line keywords to overall open rates based on an iterative method, namely Attribution Scoring, and use this for improved predictions. A random forest based model is trained to combine these features to predict the performance. We use a dataset of more than a hundred thousand different subject lines with many billions of impressions to train and test the method. The proposed method shows significant improvement in prediction accuracy over the baselines for both new as well as already used subject lines.
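
A minimal sketch of the prediction step (a random forest combining keyword, past-performance and syntax features) follows; the feature set and data are hypothetical stand-ins, not the authors' pipeline.

# Minimal sketch: random forest regressor predicting subject-line open rate
# from simple keyword, historical-performance and syntax features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(4)
n = 5000
# Hypothetical features: [keyword score, past open rate of similar lines,
#                         subject length (chars), contains personalization (0/1)]
X = np.column_stack([
    rng.uniform(0, 1, n),
    rng.uniform(0, 0.6, n),
    rng.integers(10, 90, n),
    rng.integers(0, 2, n),
])
# Synthetic target roughly driven by the first two features, for illustration only.
y = 0.3 * X[:, 0] + 0.6 * X[:, 1] + 0.02 * X[:, 3] + rng.normal(0, 0.03, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=4)
model = RandomForestRegressor(n_estimators=200, random_state=4).fit(X_tr, y_tr)
print("MAE on held-out subject lines:", mean_absolute_error(y_te, model.predict(X_te)))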