Krit, S., Haimoud, E.
2017.
Overview of Firewalls: Types and Policies: Managing Windows Embedded Firewall Programmatically. 2017 International Conference on Engineering & MIS (ICEMIS). :1–7.
Due to the increasing threat of network attacks, firewalls have become crucial elements of network security and have been widely deployed in most businesses and institutions to secure private networks. The function of a firewall is to examine each packet that passes through it and decide whether to let it pass or halt it based on preconfigured rules and policies, which makes the firewall the first line of defense against cyber attacks. However, most people do not know how a firewall works, and most users of the Windows operating system do not know how to use the Windows embedded firewall. This paper explains how firewalls work, describes firewall types, and covers all you need to know about firewall policies; it then presents a novel application (QudsWall), developed by the authors, that manages the Windows embedded firewall and makes it easy to use.
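As an illustration of the per-packet decision process this abstract describes, a minimal first-match packet filter can be sketched as follows. The `Rule`/`Packet` fields and the default-deny policy are illustrative assumptions, not QudsWall's actual API:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    proto: str      # "tcp", "udp", or "*" for any protocol
    dst_port: int   # -1 matches any port
    action: str     # "allow" or "deny"

@dataclass
class Packet:
    proto: str
    dst_port: int

def decide(rules, packet, default="deny"):
    """Return the action of the first rule that matches the packet."""
    for r in rules:
        if r.proto in ("*", packet.proto) and r.dst_port in (-1, packet.dst_port):
            return r.action
    return default  # default-deny policy when no rule matches

rules = [Rule("tcp", 443, "allow"), Rule("tcp", 22, "deny"), Rule("udp", -1, "allow")]
print(decide(rules, Packet("tcp", 443)))   # allow
print(decide(rules, Packet("tcp", 8080)))  # deny (falls through to the default)
```

Real firewalls match on many more fields (source/destination address, interface, connection state), but the first-match-wins evaluation order shown here is the core policy mechanism.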
D. Kergl.
2015.
"Enhancing Network Security by Software Vulnerability Detection Using Social Media Analysis Extended Abstract". 2015 IEEE International Conference on Data Mining Workshop (ICDMW). :1532-1533.
Detecting attacks that are based on unknown security vulnerabilities is a challenging problem. The timely detection of attacks based on hitherto unknown vulnerabilities is crucial for protecting other users and systems from being affected as well. Knowing the attributes of a novel attack's target system can support automated reconfiguration of firewalls and the sending of alerts to administrators of other vulnerable targets. We suggest a novel approach of post-incident intrusion detection by utilizing information gathered from real-time social media streams. To accomplish this we take advantage of social media users posting about incidents that affect their user accounts on attacked target systems or their observations about misbehaving online services. Combining knowledge of the attacked systems and reported incidents, we should be able to recognize patterns that define the attributes of vulnerable systems. By matching detected attribute sets with the attributes of well-known attacks, we should furthermore be able to link attacks to already existing entries in the Common Vulnerabilities and Exposures database. If a link to an existing entry is not found, we can assume that we have detected an exploitation of an unknown vulnerability, i.e., a zero-day exploit or the result of an advanced persistent threat. This finding could also be used to direct efforts of examining vulnerabilities of attacked systems and therefore lead to faster patch deployment.
D. L. Schales, X. Hu, J. Jang, R. Sailer, M. P. Stoecklin, T. Wang.
2015.
"FCCE: Highly scalable distributed Feature Collection and Correlation Engine for low latency big data analytics". 2015 IEEE 31st International Conference on Data Engineering. :1316-1327.
In this paper, we present the design, architecture, and implementation of a novel analysis engine, called Feature Collection and Correlation Engine (FCCE), that finds correlations across a diverse set of data types spanning over large time windows with very small latency and with minimal access to raw data. FCCE scales well to collecting, extracting, and querying features from geographically distributed large data sets. FCCE has been deployed in a large production network with over 450,000 workstations for 3 years, ingesting more than 2 billion events per day and providing low latency query responses for various analytics. We explore two security analytics use cases to demonstrate how we utilize the deployment of FCCE on large diverse data sets in the cyber security domain: 1) detecting fluxing domain names of potential botnet activity and identifying all the devices in the production network querying these names, and 2) detecting advanced persistent threat infection. Both evaluation results and our experience with real-world applications show that FCCE yields superior performance over existing approaches, and excels in the challenging cyber security domain by correlating multiple features and deriving security intelligence.
D. Y. Kao.
2015.
"Performing an APT Investigation: Using People-Process-Technology-Strategy Model in Digital Triage Forensics". 2015 IEEE 39th Annual Computer Software and Applications Conference. 3:47-52.
Taiwan has become the frontline in an emerging cyberspace battle. Cyberattacks from different countries have been constantly reported during the past decades. Advanced Persistent Threat (APT) incidents are analyzed from the golden-triangle components (people, process and technology) to ensure the application of digital forensics. This study presents a novel People-Process-Technology-Strategy (PPTS) model by implementing a triage investigative step to identify evidence dynamics in digital data and essential information in audit logs. The result of this study is expected to improve APT investigation. The investigation scenario of the proposed methodology is illustrated by applying it to some APT incidents in Taiwan.
D. Zhu, Z. Fan, N. Pang.
2015.
"A Dynamic Supervisory Mechanism of Process Behaviors Based on Dalvik VM". 2015 International Conference on Computational Intelligence and Communication Networks (CICN). :1203-1210.
The threats to smartphone security come mostly from privacy disclosure and malicious chargeback software that deducts expenses abnormally. These attacks exploit vulnerabilities in the previous permission mechanism to attack mobile phones and, moreover, may invoke hardware to spy on private data invisibly in the background. Since the existing Android operating system does not allow users to monitor and audit system resources, a dynamic supervisory mechanism of process behavior based on the Dalvik VM is proposed to solve this problem. The existing Android framework layer and application layer are modified and extended, and special underlying system services are used to realize dynamic supervision of process behavior on the Dalvik VM. Via this mechanism, each process's use of system resources and the behavior of each app process can be monitored and analyzed in real time. The mechanism reduces security threats at the system level and identifies which process is using a given system resource. It achieves detection and interception before or at the moment a behavior occurs, thereby protecting private information, important data, and the security of sensitive system behavior. Extensive experiments have demonstrated the accuracy, effectiveness, and robustness of our approach.
D'Agostino, Jack, Kul, Gokhan.
2021.
Toward Pinpointing Data Leakage from Advanced Persistent Threats. 2021 7th IEEE Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :157–162.
Advanced Persistent Threats (APTs) are carried out by highly skilled attackers who employ sophisticated techniques to stealthily gain unauthorized access to private networks and exfiltrate sensitive data. When their presence is discovered, organizations - if they can sustain business continuity - mostly have to perform forensics activities to assess the damage of the attack and discover the extent of sensitive data leakage. In this paper, we construct a novel framework to pinpoint sensitive data that may have been leaked in such an attack. Our framework consists of creating baseline fingerprints for each workstation that capture normal activity, and we consider the change in the behavior of the network overall. We compare the suspect fingerprint with sensitive database information by utilizing both Levenshtein distance and TF-IDF/cosine similarity, resulting in a similarity percentage. This allows us to pinpoint what part of the data was exfiltrated by the perpetrators, where in the network the data originated, and whether that data is sensitive to the private company's network. We then perform feasibility experiments to show that even these simple methods are feasible to run on a network representative of a mid-size business.
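The two similarity measures named in the abstract can be sketched briefly. The combination rule (a simple average of the two scores) is an assumption for illustration; the paper does not prescribe it here, and with only two documents the TF-IDF step reduces to plain term-frequency cosine similarity:

```python
import math
from collections import Counter

def levenshtein(a: str, b: str) -> int:
    """Classic edit-distance DP with a rolling row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cosine_tf(doc_a: str, doc_b: str) -> float:
    """Term-frequency cosine similarity (IDF is constant across two docs)."""
    ta, tb = Counter(doc_a.split()), Counter(doc_b.split())
    num = sum(ta[w] * tb[w] for w in set(ta) & set(tb))
    den = math.sqrt(sum(v * v for v in ta.values())) * \
          math.sqrt(sum(v * v for v in tb.values()))
    return num / den if den else 0.0

def similarity_percent(a: str, b: str) -> float:
    lev = 1 - levenshtein(a, b) / max(len(a), len(b), 1)
    return 100 * (lev + cosine_tf(a, b)) / 2

print(round(similarity_percent("ssn 123 45 6789", "ssn 123 45 6789"), 1))  # 100.0
```

A fingerprint fragment that exactly matches a sensitive database record scores 100%, while unrelated text scores near zero, which is what lets the framework rank candidate leaked records.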
D'Angelo, Mirko, Gerasimou, Simos, Ghahremani, Sona, Grohmann, Johannes, Nunes, Ingrid, Pournaras, Evangelos, Tomforde, Sven.
2019.
On Learning in Collective Self-Adaptive Systems: State of Practice and a 3D Framework. 2019 IEEE/ACM 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS). :13–24.
Collective self-adaptive systems (CSAS) are distributed and interconnected systems composed of multiple agents that can perform complex tasks such as environmental data collection, search and rescue operations, and discovery of natural resources. By providing individual agents with learning capabilities, CSAS can cope with challenges related to distributed sensing and decision-making and operate in uncertain environments. This unique characteristic of CSAS enables the collective to exhibit robust behaviour while achieving system-wide and agent-specific goals. Although learning has been explored in many CSAS applications, selecting suitable learning models and techniques remains a significant challenge that is heavily influenced by expert knowledge. We address this gap by performing a multifaceted analysis of existing CSAS with learning capabilities reported in the literature. Based on this analysis, we introduce a 3D framework that illustrates the learning aspects of CSAS considering the dimensions of autonomy, knowledge access, and behaviour, and facilitates the selection of learning techniques and models. Finally, using example applications from this analysis, we derive open challenges and highlight the need for research on collaborative, resilient and privacy-aware mechanisms for CSAS.
D'Arco, Paolo, Ansaroudi, Zahra Ebadi.
2021.
Security Attacks on Multi-Stage Proof-of-Work. 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). :698–703.
Multi-stage Proof-of-Work is a recently proposed protocol which extends the Proof-of-Work protocol used in Bitcoin. It splits Proof-of-Work into multiple stages, to achieve a more efficient block generation and a fair reward distribution. In this paper we study some of the Multi-stage Proof-of-Work security vulnerabilities. Precisely, we present two attacks: a Selfish Mining attack and a Selfish Stage-Withholding attack. We show that Multi-stage Proof-of-Work is not secure against a selfish miner owning more than 25% of the network hashing power. Moreover, we show that Selfish Stage-Withholding is a complementary strategy to boost a selfish miner's profitability.
D'Lima, N., Mittal, J..
2015.
Password authentication using Keystroke Biometrics. 2015 International Conference on Communication, Information & Computing Technology (ICCICT). :1–6.
The majority of applications prompt for a username and password. Passwords are recommended to be unique, long, complex, alphanumeric and non-repetitive. The very properties that make passwords secure, however, may prove to be a point of weakness: the complexity of a password is a challenge for the user, who may choose to record it, which compromises the security of the password and takes away its advantage. An alternative method of security is keystroke biometrics. This approach uses the natural typing pattern of a user for authentication. This paper proposes a new method for reducing error rates and creating a robust technique. The new method makes use of multiple sensors to obtain information about a user. An artificial neural network is used to model a user's behavior as well as to retrain the system. An alternate user verification mechanism is used in case a user is unable to match their typing pattern.
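Keystroke-biometric systems like the one described typically derive a feature vector from key-timing events before any classifier sees the data. A minimal sketch of the two standard features, dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next), is shown below; the event format and feature layout are illustrative assumptions, not the paper's exact representation:

```python
def keystroke_features(events):
    """events: list of (key, press_time, release_time), ordered by press time.

    Returns dwell times followed by flight times -- a simple feature
    vector that a neural network could be trained on per user.
    """
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell + flight

# Three keystrokes of "pas": timestamps in seconds
events = [("p", 0.00, 0.08), ("a", 0.15, 0.22), ("s", 0.30, 0.37)]
print(keystroke_features(events))
```

A per-user model is then trained on many such vectors; at login, the distance between the live vector and the user's template decides acceptance, with the paper's fallback mechanism covering users whose pattern fails to match.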
D’Alterio, P., Garibaldi, J. M., John, R. I..
2020.
Constrained Interval Type-2 Fuzzy Classification Systems for Explainable AI (XAI). 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
In recent years, there has been a growing need for intelligent systems that not only are able to provide reliable classifications but can also produce explanations for the decisions they make. The demand for increased explainability has led to the emergence of explainable artificial intelligence (XAI) as a specific research field. In this context, fuzzy logic systems represent a promising tool thanks to their inherently interpretable structure. The use of a rule base and linguistic terms, in fact, has allowed researchers to create models that are able to produce explanations in natural language for each of the classifications they make. So far, however, designing systems that make use of interval type-2 (IT2) fuzzy logic and also give explanations for their outputs has been very challenging, partially due to the presence of the type-reduction step. In this paper, it will be shown how constrained interval type-2 (CIT2) fuzzy sets represent a valid alternative to conventional interval type-2 sets in order to address this issue. Through the analysis of two case studies from the medical domain, it is shown how explainable CIT2 classifiers are produced. These systems can explain which rules contributed to the creation of each of the endpoints of the output interval centroid, while showing (in these examples) the same level of accuracy as their IT2 counterpart.
Da Costa, Alessandro Monteiro, de Sá, Alan Oliveira, Machado, Raphael C. S..
2022.
Data Acquisition and Extraction on Mobile Devices - A Review. 2022 IEEE International Workshop on Metrology for Industry 4.0 & IoT (MetroInd4.0&IoT). :294–299.
Forensic Science comprises a set of technical-scientific knowledge used to solve illicit acts. The increasing use of mobile devices as the main computing platform, in particular smartphones, makes existing information valuable for forensics. However, the blocking mechanisms imposed by the manufacturers and the variety of models and technologies make the task of reconstructing the data for analysis challenging. It is worth mentioning that the conclusion of a case requires more than the simple identification of evidence, as it is extremely important to correlate all the data and sources obtained, to confirm a suspicion or to seek new evidence. This work carries out a systematic review of the literature, identifying the different types of existing image acquisition and the main extraction and encryption methods used in smartphones with the Android operating system.
da Costa, Patricia, Pereira, Pedro T. L., Paim, Guilherme, da Costa, Eduardo, Bampi, Sergio.
2021.
Boosting the Efficiency of the Harmonics Elimination VLSI Architecture by Arithmetic Approximations. 2021 28th IEEE International Conference on Electronics, Circuits, and Systems (ICECS). :1–4.
Approximate computing emerged as a key alternative for trading off accuracy against energy efficiency and area reduction. Error-tolerant applications, such as multimedia processing, machine learning, and signal processing, can process the information with lower-than-standard accuracy at the circuit level while still fulfilling a good and acceptable service quality at the application level. Adaptive filtering-based systems have been demonstrating high resiliency against hardware errors due to their intrinsic self-healing characteristic. This paper investigates the design space exploration of arithmetic approximations in a Very Large-Scale Integration (VLSI) harmonic elimination (HE) hardware architecture based on Least Mean Square (LMS) adaptive filters. We evaluate the Pareto front of the area and power versus quality curves by relaxing the arithmetic precision and by adopting approximate multipliers (AxMs) in combination with approximate adders (AxAs). This paper explores the benefits and impacts of the Dynamic Range Unbiased (DRUM), Rounding-based Approximate (RoBA), and Leading one Bit-based Approximate (LoBA) multipliers on the power dissipation, circuit area, and quality of the VLSI HE architectures. Our results highlight LoBA 0 as the most efficient AxM applied in the HE architecture. We combine LoBA 0 with Copy and LOA AxAs with variations in the approximation level (L). Notably, LoBA 0 and LOA with L = 6 resulted in savings of 43.7% in circuit area and 45.2% in power dissipation, compared to the exact HE, which uses a multiplier and adder automatically selected by the logic synthesis tool. Finally, we demonstrate that the best hardware architecture found in our investigation successfully eliminates the contaminating spurious noise (i.e., 60 Hz and its harmonics) from the signal.
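The leading-one-based multipliers discussed above share one core idea: keep only the k most significant bits of each operand, counted from its leading one, and multiply the short products. A behavioral sketch of that idea (in the spirit of DRUM, with its LSB-set-to-1 bias compensation; this is an illustration, not the paper's exact LoBA/DRUM circuits) follows:

```python
def approx_mult(a: int, b: int, k: int = 4) -> int:
    """Approximate product of nonnegative ints via leading-one truncation."""
    def truncate(x):
        if x < (1 << k):
            return x, 0                      # short operands are exact
        shift = x.bit_length() - k           # position of the kept window
        return (x >> shift) | 1, shift       # set LSB to 1 to reduce bias

    ta, sa = truncate(a)
    tb, sb = truncate(b)
    return (ta * tb) << (sa + sb)            # small multiply, then rescale

exact = 201 * 178
approx = approx_mult(201, 178, k=4)
print(exact, approx, f"{100 * abs(exact - approx) / exact:.1f}% error")
```

In hardware this turns a wide multiplier into a k-by-k one plus shifters, which is where the area and power savings reported in the paper come from; the error stays small relative to the operand magnitudes because the discarded bits are always the least significant ones.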
da Silva Andrade, Richardson B., Souto Rosa, Nelson.
2019.
MidSecThings: Assurance Solution for Security Smart Homes in IoT. 2019 IEEE 19th International Symposium on High Assurance Systems Engineering (HASE). :171–178.
Interest in building security-based solutions to reduce vulnerability exploits and mitigate the risks associated with smart homes in IoT is growing. However, our investigation identified that architecting and implementing distributed security mechanisms is still a challenge, because security and privacy must be handled in IoT middleware with a strong focus. Our investigation also identified that a significant proportion of systems did not address security and did not describe their security approach in any meaningful detail. The idea proposed in this work is to provide middleware that implements security mechanisms in smart homes and serves as a how-to guide for beginner developers of IoT middleware. The advantages of using MidSecThings are avoiding data leakage, unavailable services, unidentified actions, and unauthorized access to IoT devices in smart homes.
Da Veiga, Tomás, Chandler, James H., Pittiglio, Giovanni, Lloyd, Peter, Holdar, Mohammad, Onaizah, Onaizah, Alazmani, Ali, Valdastri, Pietro.
2021.
Material Characterization for Magnetic Soft Robots. 2021 IEEE 4th International Conference on Soft Robotics (RoboSoft). :335–342.
Magnetic soft robots are increasingly popular as they provide many advantages such as miniaturization and tetherless control that are ideal for applications inside the human body or in previously inaccessible locations. While non-magnetic elastomers have been extensively characterized and modelled for optimizing the fabrication of soft robots, a systematic material characterization of their magnetic counterparts is still missing. In this paper, commonly employed magnetic materials made out of Ecoflex™ 00-30 and Dragon Skin™ 10 with different concentrations of NdFeB microparticles were mechanically and magnetically characterized. The magnetic materials were evaluated under uniaxial tensile testing and their behavior analyzed through linear and hyperelastic model comparison. To determine the corresponding magnetic properties, we present a method to determine the magnetization vector, and magnetic remanence, by means of a force and torque load cell and a large reference permanent magnet, demonstrating a high level of accuracy. Furthermore, we study the influence of impulse magnetizing fields of varied magnitude on the resultant magnetizations. In combination, by applying improved, material-specific mechanical and magnetic properties to a 2-segment discrete magnetic robot, we show the potential to reduce simulation errors from 8.5% to 5.4%.
Da, Gaofeng, Xu, Maochao, Xu, Shouhuai.
2014.
A New Approach to Modeling and Analyzing Security of Networked Systems. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :6:1–6:12.
Modeling and analyzing security of networked systems is an important problem in the emerging Science of Security and has been under active investigation. In this paper, we propose a new approach towards tackling the problem. Our approach is inspired by the shock model and random environment techniques in the Theory of Reliability, while accommodating security ingredients. To the best of our knowledge, our model is the first that can accommodate a certain degree of adaptiveness of attacks, which substantially weakens the often-made independence and exponential attack inter-arrival time assumptions. The approach leads to a stochastic process model with two security metrics, and we attain some analytic results in terms of the security metrics.
Dabas, K., Madaan, N., Arya, V., Mehta, S., Chakraborty, T., Singh, G..
2019.
Fair Transfer of Multiple Style Attributes in Text. 2019 Grace Hopper Celebration India (GHCI). :1–5.
To preserve anonymity and obfuscate their identity on online platforms, users may morph their text and portray themselves as a different gender or demographic. Similarly, a chatbot may need to customize its communication style to improve engagement with its audience. This manner of changing the style of written text has gained significant attention in recent years. Yet these past research works largely cater to the transfer of a single style attribute. The disadvantage of focusing on a single style alone is that this often results in target text where other existing style attributes behave unpredictably or are unfairly dominated by the new style. To counteract this behavior, it is desirable to have a style transfer mechanism that can transfer or control multiple styles simultaneously and fairly. Through such an approach, one could obtain obfuscated or written text incorporating a desired degree of multiple soft styles such as female-quality, politeness, or formalness. To the best of our knowledge, this work is the first to identify and attempt to solve the issues related to multiple style transfer. We also demonstrate that the transfer of multiple styles cannot be achieved by sequentially performing multiple single-style transfers, because each single style-transfer step often reverses or dominates over the style incorporated by a previous transfer step. We then propose a neural network architecture for fairly transferring multiple style attributes in a given text. We test our architecture on the Yelp dataset to demonstrate our superior performance compared to existing single-style transfers performed in sequence.
Dabas, N., Singh, R. P., Kher, G., Chaudhary, V..
2017.
A novel SVD and online sequential extreme learning machine based watermark method for copyright protection. 2017 8th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–5.
With the increasing use of the internet, it is equally important to protect intellectual property. For copyright protection, a blind digital watermarking algorithm with SVD and OSELM in the IWT domain has been proposed. During the embedding process, SVD is applied to the coefficient blocks to get the singular values in the IWT domain. The singular values are modulated to embed the watermark in the host image. An online sequential extreme learning machine is trained to learn the relationship between the original coefficients and the corresponding watermarked versions. During the extraction process, this trained OSELM is used to extract the embedded watermark logo blindly, as no original host image is required during this process. The watermarked image is altered using various attacks like blurring, noise, sharpening, rotation and cropping. The experimental results show that the proposed watermarking scheme is robust against various attacks. The extracted watermark is very similar to the original watermark and serves well to prove ownership.
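The embedding step the abstract describes (modulate a block's singular values with watermark bits) can be sketched briefly. The additive modulation rule and strength `alpha` are common choices and an assumption here; the IWT stage and the OSELM extraction stage of the paper are omitted:

```python
import numpy as np

def embed_block(block: np.ndarray, wbit: int, alpha: float = 10.0) -> np.ndarray:
    """Embed one watermark bit by modulating the block's largest singular value."""
    U, S, Vt = np.linalg.svd(block, full_matrices=False)
    S = S.copy()
    S[0] += alpha * wbit          # shift the dominant singular value for bit 1
    return U @ np.diag(S) @ Vt    # reconstruct the (slightly changed) block

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))   # stand-in for one 8x8 coefficient block
marked = embed_block(block, wbit=1)
# The perturbation norm equals alpha exactly: only one singular value moved.
print(float(np.linalg.norm(marked - block)))
```

Robustness to attacks like blurring and cropping comes from modulating the largest singular values, which capture the block's dominant structure and change little under such distortions.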
Dabbaghi Varnosfaderani, Shirin, Kasprzak, Piotr, Pohl, Christof, Yahyapour, Ramin.
2019.
A Flexible and Compatible Model for Supporting Assurance Level through a Central Proxy. 2019 6th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/ 2019 5th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :46–52.
Generally, the methods of authentication and identification utilized in asserting users' credentials directly affect the security of offered services. In a federated environment, service owners must trust external credentials and make access control decisions based on Assurance Information received from remote Identity Providers (IdPs). Communities (e.g., NIST and IETF) have tried to provide a coherent and justifiable architecture in order to evaluate Assurance Information and define Assurance Levels (AL). Expensive deployment, service owners' limited authority to define their own requirements, and a lack of compatibility between heterogeneous existing standards can be considered some of the unsolved concerns that hinder developers from openly accepting published works. By assessing the advantages and disadvantages of well-known models, a comprehensive, flexible and compatible solution is proposed to evaluate and deploy assurance levels through a central entity called a Proxy.
Dąbrowski, Marcin, Pacyna, Piotr.
2022.
Blockchain-based identity discovery between heterogeneous identity management systems. 2022 6th International Conference on Cryptography, Security and Privacy (CSP). :131–137.
Identity Management Systems (IdMS) have evolved in recent years, both in terms of modelling approach and in terms of the technology used. The early centralized, later federated and user-centric Identity Management (IdM) was finally succeeded by Self-Sovereign Identity (SSI). Solutions based on Distributed Ledger Technology (DLT) appeared, with prominent examples such as uPort, Sovrin or ShoCard. In effect, users got more freedom in the creation and management of their identities, and IdM systems became more distributed, too. However, in the area of interoperability and dynamic, ad-hoc identity management there has been almost no significant progress. The quest for one best IdM system to be used by all entities and organizations is deemed to fail. The environment of IdM systems is, and in the near future will remain, heterogeneous; therefore a person will have to manage her or his identities in multiple IdM systems. In this article the authors argue that future-proof IdM systems should be able to interoperate with each other dynamically, i.e., be able to discover the existence of different identities of a person across multiple IdM systems, dynamically build trust relations, and translate identity assertions and claims across various IdM domains. Finally, the authors introduce an identity relationship model and a corresponding identity discovery algorithm, propose an IdMS-agnostic identity discovery service design, and describe its implementation using Ethereum and Smart Contracts.
Dabthong, Hachol, Warasart, Maykin, Duma, Phongsaphat, Rakdej, Pongpat, Majaroen, Natt, Lilakiatsakun, Woraphon.
2021.
Low Cost Automated OS Security Audit Platform Using Robot Framework. 2021 Research, Invention, and Innovation Congress: Innovation Electricals and Electronics (RI2C). :31–34.
Security baseline hardening is a baseline configuration framework that aims to improve the robustness of the operating system, lowering the risk and impact of breach incidents. In typical best practice, security baseline hardening requires regular checks and follow-up to keep the system in check; this set of activities is called a "Security Baseline Audit". The IT department is responsible for the Security Baseline Audit process. In business terms, this process consumes a fair amount of resources such as man-hours, time, and technical knowledge. In a huge production environment, these resources are multiplied by the number of systems in the environment. This research proposes improving the process with automation while maintaining quality and security at the standard level. Robot Framework, a useful and flexible open-source automation framework, is utilized in this research, with a very successful result: the automated process verifies that the configuration is aligned with CIS (Center for Internet Security) benchmarks. A tremendous amount of time and effort is saved while the configuration conforms to the tool's standard.
Dabush, Lital, Routtenberg, Tirza.
2022.
Detection of False Data Injection Attacks in Unobservable Power Systems by Laplacian Regularization. 2022 IEEE 12th Sensor Array and Multichannel Signal Processing Workshop (SAM). :415–419.
The modern electrical grid is a complex cyber-physical system, and thus is vulnerable to measurement losses and attacks. In this paper, we consider the problem of detecting false data injection (FDI) attacks and bad data in unobservable power systems. Classical bad-data detection methods usually assume observable systems and cannot detect stealth FDI attacks. We use the smoothness property of the system states (voltages) w.r.t. the admittance matrix, which is also the Laplacian of the graph representation of the grid. First, we present the Laplacian-based regularized state estimator, which does not require full observability of the network. Then, we derive the Laplacian-regularized generalized likelihood ratio test (LR-GLRT). We show that the LR-GLRT has a component of a soft high-pass graph filter applied to the state estimator. Numerical results on the IEEE 118-bus system demonstrate that the LR-GLRT outperforms other detection approaches and is robust to missing data.
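The Laplacian-regularized state estimator described above can be sketched on a toy grid: with partial measurements z = Hx + noise, the estimate solves min_x ||z - Hx||² + μ·xᵀLx, giving x̂ = (HᵀH + μL)⁻¹Hᵀz, where L is the grid's graph Laplacian. The 4-bus path graph, measurement matrix, and μ below are illustrative assumptions, not the paper's IEEE 118-bus setup:

```python
import numpy as np

def laplacian_regularized_estimate(H, z, L, mu=1.0):
    """Closed-form minimizer of ||z - H x||^2 + mu * x^T L x."""
    return np.linalg.solve(H.T @ H + mu * L, H.T @ z)

# Laplacian of a 4-node path graph (toy stand-in for the grid admittance Laplacian)
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

# Only buses 0 and 2 are measured -> the system alone is unobservable,
# but the smoothness prior x^T L x fills in the unmeasured states.
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

x_true = np.array([1.00, 1.10, 1.20, 1.30])  # smooth w.r.t. the graph
z = H @ x_true                               # noiseless measurements
x_hat = laplacian_regularized_estimate(H, z, L, mu=0.1)
print(np.round(x_hat, 3))
```

A detector in the spirit of the LR-GLRT would then compare the residual statistic of this regularized estimate against a threshold; the key point shown here is that the Laplacian term makes estimation well-posed even when HᵀH alone is singular.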
Daemen, Joan.
2016.
On Non-uniformity in Threshold Sharings. Proceedings of the 2016 ACM Workshop on Theory of Implementation Security. :41–41.
In threshold schemes one represents each sensitive variable by a number n of shares such that their (usually) bitwise sum equals that variable. These shares are initially generated in such a way that any subset of n-1 shares gives no information about the sensitive variable. Functions (S-boxes, mixing layers, round functions, etc.) are computed on the shares of the inputs resulting in the output as a number of shares. An essential property of a threshold implementation of a function is that each output share is computed from at most n-1 input shares. This is called incompleteness and guarantees that that computation cannot leak information about sensitive variables. The resulting output is then typically subject to some further computation, again in the form of separate, incomplete, computation on shares. For these subsequent computations to not leak information about the sensitive variables, the output of the previous stage must still be uniform. Hence, in an iterative cryptographic primitive such as a block cipher, we need a threshold implementation of the round function that yields a uniformly shared output if its input is uniformly shared. This property of the threshold implementation is called uniformity. Threshold schemes form a good protection mechanism against differential power analysis (DPA). In particular, using it allows building cryptographic hardware that is guaranteed to be unattackable with first-order DPA, assuming certain leakage models of the cryptographic hardware at hand and for a plausible definition of "first order". Constructing an incomplete threshold implementation of a non-linear function is rather straightforward. To offer resistance against first-order DPA, the number of shares equals the algebraic degree of the function plus one. However, constructing one that is at the same time incomplete and uniform may present a challenge. 
For instance, for the Keccak non-linear layer, incomplete 3-share threshold implementations are easy to generate, but no uniform one is known. Exhaustive investigations have been performed on all small S-boxes (3 to 5 bits), and there are many S-boxes for which it is not known how to build uniform threshold implementations with d+1 shares if their algebraic degree is d. Uniformity of a threshold implementation is essential in its information-theoretical proof of resistance against first-order DPA. However, given a non-uniform threshold implementation, it is not immediate how to exploit its non-uniformity in an attack. In my talk I discuss the local and global effects of non-uniformity in iterated functions and their significance for the resistance against DPA. I treat methods to quantitatively limit the amount of non-uniformity and to keep it away from where it may be harmful. These techniques are relatively cheap and can reduce non-uniformity to such a low level that it would require an astronomical number of samples to measure it.
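The sharing scheme the abstract opens with (a sensitive variable represented by n shares whose bitwise sum is the variable, with any n-1 shares uniformly random) is Boolean masking, and can be sketched as follows; the bit width and share count are illustrative:

```python
import secrets

def share(x: int, n: int = 3, bits: int = 8) -> list[int]:
    """Split x into n shares whose XOR equals x; any n-1 shares are uniform."""
    shares = [secrets.randbits(bits) for _ in range(n - 1)]
    last = x
    for s in shares:
        last ^= s            # the final share absorbs the correction
    return shares + [last]

def unshare(shares: list[int]) -> int:
    """Recombine shares: bitwise XOR of all of them."""
    out = 0
    for s in shares:
        out ^= s
    return out

x = 0b10110101
s = share(x, n=3)
print(s, "->", bin(unshare(s)))
```

An incomplete threshold implementation of a function then computes each output share from at most n-1 input shares, which is exactly what prevents any single intermediate from depending on the full sensitive value; the uniformity question the talk addresses concerns the joint distribution of those output shares.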
Daesung Choi, Sungdae Hong, Hyoung-Kee Choi.
2014.
A group-based security protocol for Machine Type Communications in LTE-Advanced. 2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :161-162.
We propose Authentication and Key Agreement (AKA) for Machine Type Communications (MTC) in LTE-Advanced. This protocol is based on an idea of grouping devices so that it would reduce signaling congestion in the access network and overload on the single authentication server. We verified that this protocol is designed to be secure against many attacks by using a software verification tool. Furthermore, performance evaluation suggests that this protocol is efficient with respect to authentication overhead and handover delay.
Dagelić, Ante, Perković, Toni, Čagalj, Mario.
2019.
Location Privacy and Changes in WiFi Probe Request Based Connection Protocols Usage Through Years. 2019 4th International Conference on Smart and Sustainable Technologies (SpliTech). :1–5.
Location privacy is one of the most frequently discussed topics in mobile device security breaches and data leaks. With the expected growth in the number of IoT devices, projected to reach 20 billion by 2020, location privacy issues will be brought further into focus. In this paper we give an overview of location privacy implications in wireless networks, mainly focusing on the user's Preferred Network List (the list of previously used WiFi access points) contained within WiFi Probe Request packets. We showcase the existing work and suggest interesting topics for future work. A chronological overview of sensitive location data we collected at a music festival in the years 2014, 2015, 2017 and 2018 is provided. We conclude that passive WiFi monitoring scans produce different results through the years, with a significant increase in the usage of the more secure broadcast Probe Request packets and MAC address randomization by smartphone operating systems.
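The MAC address randomization the paper observes can be flagged in probe-request datasets with a standard heuristic: randomized addresses set the locally administered bit (bit 1 of the first octet), while burned-in addresses use globally unique OUIs with that bit clear. This is the common test, not necessarily the paper's exact methodology:

```python
def is_randomized_mac(mac: str) -> bool:
    """True if the MAC sets the locally administered bit (0x02 in octet 0)."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

# Typical randomized address (0xda = 1101 1010, bit 1 set)
print(is_randomized_mac("da:a1:19:00:00:01"))   # True
# Vendor-assigned address with a real OUI (bit 1 clear)
print(is_randomized_mac("00:16:3e:00:00:01"))   # False
```

Counting the fraction of locally administered source addresses per capture year is one simple way to quantify the randomization trend the authors report across 2014-2018.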
Dahan, Mathieu, Amin, Saurabh.
2015.
Network Flow Routing under Strategic Link Disruptions. arXiv preprint arXiv:1512.09335.
This paper considers a 2-player strategic game for network routing under link disruptions. Player 1 (defender) routes flow through a network to maximize her value of effective flow while facing transportation costs. Player 2 (attacker) simultaneously disrupts one or more links to maximize her value of lost flow but also faces cost of disrupting links. This game is strategically equivalent to a zero-sum game. Linear programming duality and the max-flow min-cut theorem are applied to obtain properties that are satisfied in any mixed Nash equilibrium. In any equilibrium, both players achieve identical payoffs. While the defender's expected transportation cost decreases in attacker's marginal value of lost flow, the attacker's expected cost of attack increases in defender's marginal value of effective flow. Interestingly, the expected amount of effective flow decreases in both these parameters. These results can be viewed as a generalization of the classical max-flow with minimum transportation cost problem to adversarial environments.
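The defender's baseline problem, before the attacker and the cost terms enter, is a max-flow computation, and the max-flow min-cut theorem the authors invoke is easy to see on a toy network. A minimal Edmonds-Karp sketch (the graph and capacities are illustrative; transportation costs and the attacker's disruption strategy are omitted):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(cap)
    residual = [row[:] for row in cap]
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if residual[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow                      # no path left: flow is maximal
        # Find the bottleneck capacity along the path
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # Push the bottleneck along the path
        v = t
        while v != s:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Toy network: source 0, sink 3; min cut = edges into the sink (3 + 2 = 5)
cap = [[0, 3, 2, 0],
       [0, 0, 1, 3],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 5
```

In the game, the attacker's link disruptions effectively zero out chosen capacities, and LP duality over this max-flow structure is what yields the equilibrium properties the paper derives.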