Biblio

Found 101 results

Filters: Keyword is computational complexity
A
Agirre, I..  2020.  Safe and secure software updates on high-performance embedded systems. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :68–69.

The next generation of dependable embedded systems features autonomy and higher levels of interconnection. Autonomy is commonly achieved with the support of artificial intelligence algorithms that place high computing demands on the hardware platform, reaching a high performance scale. This involves a dramatic increase in software and hardware complexity, a fact that, together with the novelty of the technology, raises serious concerns regarding system dependability. Traditional approaches to certification require demonstrating that the system will be acceptably safe to operate before it is deployed into service. The nature of autonomous systems, with potentially infinite scenarios, configurations, and unanticipated interactions, makes it increasingly difficult to support such a claim at design time. In this context, the extended networking technologies can be exploited to collect post-deployment evidence that serves to oversee whether safety assumptions are preserved during operation and to continuously improve the system through regular software updates. These software updates are not only convenient for critical bug fixing but also necessary for keeping the interconnected system resilient against security threats. However, such an approach requires rethinking traditional certification practices.

Aires Urquiza, Abraão, AlTurki, Musab A., Kanovich, Max, Ban Kirigin, Tajana, Nigam, Vivek, Scedrov, Andre, Talcott, Carolyn.  2019.  Resource-Bounded Intruders in Denial of Service Attacks. 2019 IEEE 32nd Computer Security Foundations Symposium (CSF). :382–396.

Denial of Service (DoS) attacks have been a serious security concern, as no service is, in principle, protected against them. Although a Dolev-Yao intruder with unlimited resources can trivially render any service unavailable, DoS attacks do not necessarily have to be carried out by such (extremely) powerful intruders. It is useful in practice, and more challenging for formal protocol verification, to determine whether a service is vulnerable even to resource-bounded intruders that cannot generate or intercept arbitrarily large volumes of traffic. This paper proposes a novel, more refined intruder model where the intruder can only consume at most some specified amount of resources in any given time window. Additionally, we propose protocol theories that may contain timeouts and specify service resource usage during protocol execution. In contrast to existing resource-conscious protocol verification models, our model allows finer and more subtle analysis of DoS problems. We illustrate the power of our approach by representing a number of classes of DoS attacks, such as Slow, Asymmetric, and Amplification DoS attacks, exhausting different types of resources of the target, such as the number of workers, processing power, memory, and network bandwidth. We show that the proposed DoS problem is undecidable in general and is PSPACE-complete for the class of resource-bounded, balanced systems. Finally, we implemented our formal verification model in the rewriting logic tool Maude and analyzed a number of DoS attacks in Maude using Rewriting Modulo SMT in an automated fashion.

Akbarzadeh, Aida, Pandey, Pankaj, Katsikas, Sokratis.  2019.  Cyber-Physical Interdependencies in Power Plant Systems: A Review of Cyber Security Risks. 2019 IEEE Conference on Information and Communication Technology. :1–6.

Realizing the importance of the concept of the “smart city” and its impact on the quality of life, many infrastructures, such as power plants, have begun their digital transformation by leveraging modern computing and advanced communication technologies. Unfortunately, with the increasing number of connections, power plants become more and more vulnerable and also an attractive target for cyber-physical attacks. The analysis of interdependencies among system components reveals interdependent connections and facilitates the identification of those that are in need of special protection. In this paper, we review the recent literature that utilizes graph-based and network-based models to study these interdependencies. A comprehensive overview, based on the main features of the systems, including communication direction, control parameters, research target, scalability, security, and safety, is presented. We also assess the computational complexity associated with the approaches presented in the reviewed papers, and we use this metric to assess their scalability.

Algehed, M., Flanagan, C..  2020.  Transparent IFC Enforcement: Possibility and (In)Efficiency Results. 2020 IEEE 33rd Computer Security Foundations Symposium (CSF). :65–78.

Information Flow Control (IFC) is a collection of techniques for ensuring a no-write-down no-read-up style security policy known as noninterference. Traditional methods for both static (e.g. type systems) and dynamic (e.g. runtime monitors) IFC suffer from untenable numbers of false alarms on real-world programs. Secure Multi-Execution (SME) promises to provide secure information flow control without modifying the behaviour of already secure programs, a property commonly referred to as transparency. Implementations of SME exist for the web in the form of the FlowFox browser and as plug-ins to several programming languages. Furthermore, SME can in theory work in a black-box manner, meaning that it can be programming language agnostic, making it perfect for securing legacy or third-party systems. As such, SME and its variants like Multiple Facets (MF) and Faceted Secure Multi-Execution (FSME) appear to be a family of panaceas for the security engineer. The question is: given all these advantages, why are these techniques not ubiquitous in practice? The answer lies, partially, in the issue of runtime and memory overhead. SME and its variants are prohibitively expensive to deploy in many non-trivial situations. The natural question is: why is this the case? On the surface, the reason is simple. The techniques in the SME family all rely on the idea of multi-execution, running all or parts of a program multiple times to achieve noninterference. Naturally, this causes some overhead. However, the predominant thinking in the IFC community has been that these overheads can be overcome. In this paper we argue that there are fundamental reasons to expect this not to be the case and prove two key theorems: (1) all transparent enforcement is polynomial-time equivalent to multi-execution; (2) all black-box enforcement takes time exponential in the number of principals in the security lattice. Our methods also allow us to answer, in the affirmative, an open question about the possibility of secure and transparent enforcement of a security condition known as Termination Insensitive Noninterference.
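
To make the multi-execution idea concrete, here is a minimal Python sketch of SME for a two-level lattice (LOW below HIGH). The example program, label names, and default value are hypothetical illustrations, not from the paper; real SME implementations also handle scheduling and termination channels.

    # Placeholder substituted for secret inputs in the LOW run.
    DEFAULT = ""

    def sme(program, inputs, labels):
        """Run `program` once per security level; the run at level l sees
        only inputs at or below l, and only its level-l outputs count."""
        outputs = {}
        for level in ("LOW", "HIGH"):
            visible = {k: (v if labels[k] == "LOW" or level == "HIGH" else DEFAULT)
                       for k, v in inputs.items()}
            run_out = program(visible)
            outputs.update({k: v for k, v in run_out.items() if labels[k] == level})
        return outputs

    def program(inp):
        # An insecure program: it copies a secret into a public output.
        return {"public_out": inp["secret"], "private_out": inp["secret"]}

    labels = {"secret": "HIGH", "public_out": "LOW", "private_out": "HIGH"}
    print(sme(program, {"secret": "hunter2"}, labels))
    # {'public_out': '', 'private_out': 'hunter2'} -- the leak is suppressed,
    # and the extra execution is exactly the overhead the paper analyzes.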

Alias T, E., Naveen, N., Mathew, D..  2014.  A Novel Acoustic Fingerprint Method for Audio Signal Pattern Detection. Advances in Computing and Communications (ICACC), 2014 Fourth International Conference on. :64–68.

This paper presents a novel and efficient audio signal recognition algorithm with limited computational complexity. As the audio recognition system will be used in real-world environments where background noise is high, conventional speech recognition techniques are not directly applicable, since they perform poorly in these environments. We therefore introduce a new audio recognition algorithm optimized for mechanical sounds such as car horns, telephone rings, etc. It is a hybrid time-frequency approach which makes use of an acoustic fingerprint for the recognition of audio signal patterns. The limited computational complexity is achieved through efficient usage of both the time domain and the frequency domain in two different processing phases, detection and recognition respectively, with the transition between these two phases carried out through a finite state machine (FSM) model. Simulation results show that the algorithm effectively recognizes audio signals in a noisy environment.
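
As a rough illustration of the two-phase structure described in the abstract, the following Python sketch gates a frequency-domain fingerprint match behind a cheap time-domain energy detector via a two-state FSM. The frame size, thresholds, and matching rule are assumed placeholders, not the paper's parameters.

    import numpy as np

    FRAME = 1024
    ENERGY_THRESHOLD = 0.01      # assumed tuning parameter
    MATCH_THRESHOLD = 0.8        # assumed tuning parameter

    def detect(frame):
        """Cheap time-domain check: short-term energy above a threshold."""
        return np.mean(frame ** 2) > ENERGY_THRESHOLD

    def recognize(frame, fingerprints):
        """Costlier frequency-domain match against stored fingerprints
        (unit-norm magnitude spectra keyed by sound name)."""
        spectrum = np.abs(np.fft.rfft(frame))
        spectrum /= np.linalg.norm(spectrum) + 1e-12
        scores = {name: float(spectrum @ fp) for name, fp in fingerprints.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > MATCH_THRESHOLD else None

    def run(stream, fingerprints):
        state = "DETECT"                    # two-state FSM: DETECT <-> RECOGNIZE
        for i in range(0, len(stream) - FRAME, FRAME):
            frame = stream[i:i + FRAME]
            if state == "DETECT":
                if detect(frame):           # activity found: switch phases
                    state = "RECOGNIZE"
            else:
                label = recognize(frame, fingerprints)
                if label:
                    yield (i, label)
                state = "DETECT"            # fall back to cheap monitoring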

Alim, Adil, Zhao, Xujiang, Cho, Jin-Hee, Chen, Feng.  2019.  Uncertainty-Aware Opinion Inference Under Adversarial Attacks. 2019 IEEE International Conference on Big Data (Big Data). :6–15.

Inference of unknown opinions with uncertain, adversarial (e.g., incorrect or conflicting) evidence in large datasets is not a trivial task. Without proper handling, it can easily mislead decision making in data mining tasks. In this work, we propose a highly scalable opinion inference probabilistic model, namely Adversarial Collective Opinion Inference (Adv-COI), which provides a solution to infer unknown opinions with high scalability and robustness in the presence of uncertain, adversarial evidence, by enhancing Collective Subjective Logic (CSL), which is developed by combining SL and Probabilistic Soft Logic (PSL). The key idea behind Adv-COI is to learn a model that is robust against uncertain, adversarial evidence, formulated as a min-max problem. We validate the outperformance of Adv-COI against baseline models and its competitive counterparts under possible adversarial attacks on logic-rule-based structured data, and under white- and black-box adversarial attacks on both clean and perturbed semi-synthetic and real-world datasets, in three real-world applications. The results show that Adv-COI generates the lowest mean absolute error in the expected truth probability while also having the lowest running time among all.

Arslan, B., Ulker, M., Akleylek, S., Sagiroglu, S..  2018.  A Study on the Use of Quantum Computers, Risk Assessment and Security Problems. 2018 6th International Symposium on Digital Forensic and Security (ISDFS). :1–6.

In computer-based solutions to the problems of today's world, a problem with high complexity can impose different requirements, such as the simultaneous operation of many computers, long processing times for the algorithms involved, or hardware capable of delivering high performance. For this reason, it is inevitable that computers based on quantum physics will be used in the near future to render today's cryptosystems unsafe, to search servers and other information storage centers on the internet very quickly, to solve NP-hard optimization problems with very wide solution spaces, and to analyze information in large-scale data processing and process high-resolution images for artificial intelligence applications. In this study, we examine quantum approaches and quantum computers, which will be widely used in the near future, and evaluate the areas in which this innovation can be used. We also examine, under the heading of security, the malicious and non-malicious uses of quantum computers with this capacity, the advantages and disadvantages of the high performance they provide, and the effect of this recent technology on existing security systems.

Atiqoh, Jihan Lailatul, Moesrami Barmawi, Ari, Afianti, Farah.  2022.  Blockchain-based Smart Parking System using Ring Learning With Errors based Signature. 2022 6th International Conference on Cryptography, Security and Privacy (CSP). :154–158.

Recently, placing vehicles in parking areas has become a problem. A smart parking system is proposed to solve this problem. Most smart parking systems are centralized, and that type of system is at risk of a single point of failure that can affect the whole system. To overcome this weakness of centralized systems, the most popular mechanism researchers have proposed is blockchain. However, if no mechanism is implemented in the blockchain to verify the authenticity of every transaction, the system is not secure against impersonation attacks. This study combines a blockchain mechanism with a Ring Learning With Errors (RLWE) based digital signature to secure the scheme against impersonation and double-spending attacks. RLWE, first proposed by Lyubashevsky et al., is a development of the earlier Learning With Errors (LWE) scheme.
Azab, M..  2014.  Multidimensional Diversity Employment for Software Behavior Encryption. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1–5.

Modern cyber systems and their integration with the infrastructure have an immense effect on productivity and quality of life. Their involvement in our daily life elevates the need for means to ensure their resilience against attacks and failure. One major threat is software monoculture. Recent research has demonstrated the danger of software monoculture and presented diversity as a way to reduce the attack surface. In this paper, we propose ChameleonSoft, a multidimensional software diversity employment that, in effect, induces spatiotemporal software behavior encryption and a moving target defense. ChameleonSoft introduces a loosely coupled, online programmable software-execution foundation separating logic, state, and physical resources. The elastic construction of the foundation enables ChameleonSoft to define running software as a set of behaviorally-mutated, functionally-equivalent code variants. ChameleonSoft intelligently shuffles these variants at runtime while changing their physical location, inducing enough untraceable confusion and diffusion to encrypt the execution behavior of the running software. ChameleonSoft is also equipped with an autonomic failure recovery mechanism for enhanced resilience. In order to test the applicability of the proposed approach, we present a prototype of the ChameleonSoft Behavior Encryption (CBE) and recovery mechanisms. Further, using analysis and simulation, we study the performance and security aspects of the proposed system. This study aims to assess the provisioned level of security by measuring the avalanche effect percentage and the induced confusion and diffusion levels to evaluate the strength of the CBE mechanism. Further, we compute the computational cost of security provisioning and of enhancing system resilience.
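
The core shuffling idea, runtime selection among functionally-equivalent code variants, can be sketched in a few lines of Python. The variants and shuffle period below are hypothetical stand-ins; ChameleonSoft's actual mechanism also migrates variants across physical resources.

    import random
    import time

    def sum_loop(xs):                  # variant 1: explicit loop
        total = 0
        for x in xs:
            total += x
        return total

    def sum_builtin(xs):               # variant 2: library builtin
        return sum(xs)

    def sum_halves(xs):                # variant 3: divide and conquer
        if len(xs) <= 1:
            return xs[0] if xs else 0
        mid = len(xs) // 2
        return sum_halves(xs[:mid]) + sum_halves(xs[mid:])

    VARIANTS = [sum_loop, sum_builtin, sum_halves]

    class ShufflingTask:
        """Re-selects a random variant every `period` seconds."""
        def __init__(self, period=1.0):
            self.period = period
            self._pick()

        def _pick(self):
            self.current = random.choice(VARIANTS)
            self.expires = time.monotonic() + self.period

        def __call__(self, xs):
            if time.monotonic() >= self.expires:
                self._pick()           # shuffle: different behavior, same result
            return self.current(xs)

    task = ShufflingTask()
    assert task(list(range(100))) == 4950   # functionally equivalent throughout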

B
Babenko, Mikhail, Redvanov, Aziz Salimovich, Deryabin, Maxim, Chervyakov, Nikolay, Nazarov, Anton, Al-Galda, Safwat Chiad, Vashchenko, Irina, Dvoryaninova, Inna, Nepretimova, Elena.  2019.  Efficient Implementation of Cryptography on Points of an Elliptic Curve in Residue Number System. 2019 International Conference on Engineering and Telecommunication (EnT). :1–5.

The article explores the question of the efficient implementation of arithmetic operations on points of an elliptic curve given over a prime field. Given that the basic arithmetic operations on points of an elliptic curve are point addition and point doubling, we study the question of implementing these operations in various coordinate systems using the weighted number system and using the Residue Number System (RNS). We show that using a four-module RNS yields an average gain of 8.67% for elliptic curve point addition and of 8.32% for point doubling, compared to an implementation using modular multiplication with the special moduli from NIST FIPS 186.
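
For reference, the two basic operations the paper accelerates, point addition and point doubling on y^2 = x^3 + Ax + B over a prime field, look as follows in plain affine coordinates with ordinary modular arithmetic (no RNS); the toy curve parameters are illustrative only.

    P = 97                      # toy field prime
    A, B = 2, 3                 # curve coefficients of y^2 = x^3 + Ax + B

    def inv(x):
        return pow(x, P - 2, P)             # Fermat inverse modulo prime P

    def add(p1, p2):
        """Add two curve points; None represents the point at infinity."""
        if p1 is None: return p2
        if p2 is None: return p1
        (x1, y1), (x2, y2) = p1, p2
        if x1 == x2 and (y1 + y2) % P == 0:
            return None                     # Q + (-Q) = infinity
        if p1 == p2:
            s = (3 * x1 * x1 + A) * inv(2 * y1) % P    # tangent slope (doubling)
        else:
            s = (y2 - y1) * inv(x2 - x1) % P           # chord slope (addition)
        x3 = (s * s - x1 - x2) % P
        return (x3, (s * (x1 - x3) - y1) % P)

    def double(pt):
        return add(pt, pt)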

Bai, Kunpeng, Wu, Chuankun, Zhang, Zhenfeng.  2018.  Protect white-box AES to resist table composition attacks. IET Information Security. 12:305–313.

White-box cryptography protects cryptographic software in a white-box attack context (WBAC), where the dynamic execution of the cryptographic software is under the full control of an adversary. Protecting AES in the white-box setting has attracted many scientists and engineers, and several solutions have emerged. However, almost all of these solutions have been badly broken by various efficient white-box attacks, which target compositions of key-embedding lookup tables. In 2014, Luo, Lai, and You proposed a new WBAC-oriented AES implementation and claimed that their implementation is secure against both Billet et al.'s attack and De Mulder et al.'s attack. In this study, based on the existing table-composition-targeting cryptanalysis techniques, the authors show that the secret key of the Luo-Lai-You (LLY) implementation can be recovered with a time complexity of about 2^44. Furthermore, the authors propose a new white-box AES implementation based on table lookups, which is shown to be resistant against the existing table-composition-targeting white-box attacks. The authors' key-embedding tables are obfuscated with large affine mappings, which cannot be cancelled out by the table compositions of the existing cryptanalysis techniques. Although their implementation requires twice as much memory as the LLY WBAES to store the tables, its speed is about 63 times that of the latter.
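
The table-encoding principle that such white-box designs build on can be illustrated with a toy Python sketch: a key-embedding table T(x) = S(x XOR k) is published only under secret input/output encodings, so the raw table never appears. The sketch uses random byte permutations as encodings for simplicity; the paper's point is precisely that the encodings must instead be large affine mappings, since permutation encodings like these fall to table-composition attacks.

    import random

    rng = random.Random(0)
    SBOX = list(range(256))
    rng.shuffle(SBOX)                        # stand-in for the AES S-box
    key_byte = 0x5A                          # the key byte being hidden

    f = list(range(256))
    rng.shuffle(f)                           # secret input encoding
    g = list(range(256))
    rng.shuffle(g)                           # secret output encoding
    f_inv = [0] * 256
    for x, y in enumerate(f):
        f_inv[y] = x

    # Published table: the attacker sees T_pub but neither key_byte nor SBOX.
    T_pub = [g[SBOX[f_inv[x] ^ key_byte]] for x in range(256)]

    def whitebox_lookup(x):
        """Evaluate S(x XOR k) via the encoded table; output stays g-encoded."""
        return T_pub[f[x]]

    x = 0xC3
    assert g.index(whitebox_lookup(x)) == SBOX[x ^ key_byte]
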
Bao, D., Yang, F., Jiang, Q., Li, S., He, X..  2017.  Block RLS algorithm for surveillance video processing based on image sparse representation. 2017 29th Chinese Control And Decision Conference (CCDC). :2195–2200.

A block recursive least squares (BRLS) algorithm for dictionary learning in a compressed sensing system is developed for surveillance video processing. The new method uses image blocks directly and iteratively to train dictionaries via the BRLS algorithm, which is different from classical methods that require transforming blocks into columns first and then supplying all training blocks at one time. Since the background in surveillance video is almost fixed, the foreground residual can be represented sparsely and reconstructed with background subtraction directly. The new method and framework are applied to real image and surveillance video processing. Simulation results show that the new method achieves better representation performance than classical ones in both image and surveillance video processing.
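
The observation that the foreground residual is sparse once a fixed background is subtracted can be demonstrated with a short numpy sketch; the threshold and toy clip are illustrative, and the BRLS dictionary training itself is not shown.

    import numpy as np

    def foreground_residual(frames, threshold=25):
        """frames: (T, H, W) grayscale surveillance clip."""
        frames = frames.astype(np.int16)
        background = np.median(frames, axis=0)     # static-scene estimate
        residual = frames - background             # sparse foreground signal
        mask = np.abs(residual) > threshold        # keep significant pixels only
        return residual * mask                     # sparse representation

    # Toy clip: fixed background with a small moving "object".
    clip = np.full((10, 32, 32), 100, dtype=np.uint8)
    for t in range(10):
        clip[t, 10:14, t:t + 4] = 200
    sparse = foreground_residual(clip)
    print(np.count_nonzero(sparse) / sparse.size)  # small fraction -> sparse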

Bayat-sarmadi, S., Mozaffari-Kermani, M., Reyhani-Masoleh, A..  2014.  Efficient and Concurrent Reliable Realization of the Secure Cryptographic SHA-3 Algorithm. Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on. 33:1105–1109.

The secure hash algorithm (SHA)-3 was selected in 2012 and will be used to provide security to any application which requires hashing, pseudo-random number generation, and integrity checking. This algorithm was selected based on various benchmarks such as security, performance, and complexity. In this paper, in order to provide reliable architectures for this algorithm, an efficient concurrent error detection scheme for the selected SHA-3 algorithm, i.e., Keccak, is proposed. To the best of our knowledge, effective countermeasures for potential reliability issues in the hardware implementations of this algorithm have not been presented to date. In proposing the error detection approach, our aim is to have acceptable complexity and performance overheads while maintaining high error coverage. In this regard, we present a low-complexity recomputing with rotated operands-based scheme, which is a step forward toward reducing the hardware overhead of the proposed error detection approach. Moreover, we perform injection-based fault simulations and show that error coverage of close to 100% is achieved. Furthermore, we have designed the proposed scheme, and through ASIC analysis it is shown that acceptable complexity and performance overheads are reached. By utilizing the proposed high-performance concurrent error detection scheme, more reliable and robust hardware implementations of the newly-standardized SHA-3 are realized.
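
In software, the skeleton of such concurrent error detection is recompute-and-compare, sketched below with hashlib. The paper's actual contribution, recomputing with rotated operands inside the Keccak rounds so that a persistent hardware fault does not strike both passes identically, lives below the level this sketch can reach.

    import hashlib

    def sha3_256_checked(message: bytes) -> bytes:
        primary = hashlib.sha3_256(message).digest()    # original computation
        recheck = hashlib.sha3_256(message).digest()    # concurrent recomputation
        if primary != recheck:                          # mismatch => fault caught
            raise RuntimeError("SHA-3 computation fault detected")
        return primary

    print(sha3_256_checked(b"keccak").hex())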

Berge, Pierre, Crampton, Jason, Gutin, Gregory, Watrigant, Rémi.  2017.  The Authorization Policy Existence Problem. Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy. :163–165.

Constraints such as separation-of-duty are widely used to specify requirements that supplement basic authorization policies. However, the existence of constraints (and authorization policies) may mean that a user is unable to fulfill her/his organizational duties because access to resources is denied. In short, there is a tension between the need to protect resources (using policies and constraints) and the availability of resources. Recent work on workflow satisfiability and resiliency in access control asks whether this tension compromises the ability of an organization to achieve its objectives. In this paper, we develop a new method of specifying constraints which subsumes much related work and allows a wider range of constraints to be specified. The use of such constraints leads naturally to a range of questions related to "policy existence", where a positive answer means that an organization's objectives can be realized. We provide an overview of our results establishing that some policy existence questions, notably for those instances that are restricted to user-independent constraints, are fixed-parameter tractable.

Bhotto, M.Z.A., Antoniou, A..  2014.  Affine-Projection-Like Adaptive-Filtering Algorithms Using Gradient-Based Step Size. Circuits and Systems I: Regular Papers, IEEE Transactions on. 61:2048–2056.

A new class of affine-projection-like (APL) adaptive-filtering algorithms is proposed. The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda. In this way, direct or indirect inversion of the input signal matrix is not required and, consequently, the amount of computation required per iteration can be reduced. In addition, as demonstrated by extensive simulation results, the proposed algorithms offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancelation applications. A mean-square-error analysis of the proposed APL algorithms is also carried out and its accuracy is verified by using simulation results in a system-identification application.
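
For orientation, the order-1 special case of the affine projection family, the classic NLMS update, is sketched below in a system-identification setting; the authors' APL algorithms generalize this while avoiding the matrix inversion that higher-order affine projection requires. Step size and filter length are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    true_w = rng.standard_normal(8)        # unknown system to identify
    mu, eps = 0.5, 1e-6                    # step size, regularizer

    w = np.zeros(8)                        # adaptive filter weights
    x_buf = np.zeros(8)                    # input regressor (delay line)
    for _ in range(5000):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = rng.standard_normal()
        d = true_w @ x_buf + 0.01 * rng.standard_normal()  # noisy system output
        e = d - w @ x_buf                                  # a priori error
        w += mu * e * x_buf / (x_buf @ x_buf + eps)        # normalized update

    print(np.linalg.norm(w - true_w))      # small steady-state misalignment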

C
Cai, Zhipeng, Miao, Dongjing, Li, Yingshu.  2019.  Deletion Propagation for Multiple Key Preserving Conjunctive Queries: Approximations and Complexity. 2019 IEEE 35th International Conference on Data Engineering (ICDE). :506–517.

This paper studies the deletion propagation problem in terms of minimizing view side-effect. It is a problem fundamental to data lineage and quality management, and can be a key step in analyzing view propagation and repairing data. The investigated problem is a variant of the standard deletion propagation problem, where given a source database D, a set of key-preserving conjunctive queries Q, and the set of views V obtained by the queries in Q, we try to identify a set T of tuples from D whose elimination prevents all the tuples in a given set of deletions on views ΔV while preserving any other results. The complexity of this problem has been well studied for the case with only a single query; dichotomies, even trichotomies, for different settings have been developed. However, no results are given for multiple queries, which is a more realistic case. We study the complexity and approximations of optimizing the side-effect on the views, i.e., finding T to minimize the additional damage on V after removing all the tuples of ΔV. We focus on the class of key-preserving conjunctive queries, for which a dichotomy exists in the single-query case. It is surprising to find that, except for the single-query case, this problem is NP-hard to approximate within any constant, even for a non-trivial set of multiple project-free conjunctive queries, in terms of view side-effect. We propose an algorithm showing that the problem can be approximated within a bound depending on the number of tuples of both V and ΔV. We identify a class of polynomially tractable inputs and provide a dynamic programming algorithm to solve the problem. Besides data lineage, the study of this problem also provides important foundations for computational issues in data repairing. Furthermore, we introduce some related applications of this problem, especially for query-feedback-based data cleaning.

Carmosino, Marco L., Gao, Jiawei, Impagliazzo, Russell, Mihajlin, Ivan, Paturi, Ramamohan, Schneider, Stefan.  2016.  Nondeterministic Extensions of the Strong Exponential Time Hypothesis and Consequences for Non-reducibility. Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science. :261–270.

We introduce the Nondeterministic Strong Exponential Time Hypothesis (NSETH) as a natural extension of the Strong Exponential Time Hypothesis (SETH). We show that both refuting and proving NSETH would have interesting consequences. In particular we show that disproving NSETH would give new nontrivial circuit lower bounds. On the other hand, NSETH implies non-reducibility results, i.e. the absence of (deterministic) fine-grained reductions from SAT to a number of problems. As a consequence we conclude that unless this hypothesis fails, problems such as 3-SUM, APSP and model checking of a large class of first-order graph properties cannot be shown to be SETH-hard using deterministic or zero-error probabilistic reductions.

Chadha, R., Sistla, A. P., Viswanathan, M..  2017.  Verification of Randomized Security Protocols. 2017 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). :1–12.

We consider the problem of verifying the security of finitely many sessions of a protocol that tosses coins in addition to standard cryptographic primitives against a Dolev-Yao adversary. Two properties are investigated here: secrecy, which asks if no adversary interacting with a protocol P can determine a secret sec with probability > 1 − p; and indistinguishability, which asks if the probability of observing any sequence of observations ō in P1 is the same as that of observing ō in P2, under the same adversary. Both secrecy and indistinguishability are known to be coNP-complete for non-randomized protocols. In contrast, we show that, for randomized protocols, secrecy and indistinguishability are both decidable in coNEXPTIME. We also prove a matching lower bound for the secrecy problem by reducing the non-satisfiability problem of monadic first-order logic without equality.

Chen, Z., Jia, Z., Wang, Z., Jafar, S. A..  2020.  GCSA Codes with Noise Alignment for Secure Coded Multi-Party Batch Matrix Multiplication. 2020 IEEE International Symposium on Information Theory (ISIT). :227–232.

A secure multi-party batch matrix multiplication problem (SMBMM) is considered, where the goal is to allow a master to efficiently compute the pairwise products of two batches of massive matrices by distributing the computation across S servers. Any X colluding servers gain no information about the input, and the master gains no additional information about the input beyond the product. A solution called Generalized Cross Subspace Alignment codes with Noise Alignment (GCSA-NA) is proposed in this work, based on cross-subspace alignment codes. The state-of-the-art solution to SMBMM is a coding scheme called polynomial sharing (PS) that was proposed by Nodehi and Maddah-Ali. GCSA-NA outperforms PS codes in several key aspects: more efficient and secure inter-server communication, lower latency, flexible inter-server network topology, efficient batch processing, and tolerance to stragglers.
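
A toy numpy sketch of the polynomial-sharing idea underlying both PS and GCSA-NA: each input matrix is masked with a random matrix polynomial, servers multiply their evaluations, and the master Lagrange-interpolates the product at x = 0. This shows one matrix pair with one colluding server tolerated (X = 1); the modulus and sizes are illustrative, and none of GCSA-NA's alignment machinery is captured.

    import numpy as np

    P = 1_000_003                     # toy prime modulus
    rng = np.random.default_rng(1)

    def shares(M, points):
        R = rng.integers(0, P, M.shape)           # random mask: any single
        return [(M + x * R) % P for x in points]  # share looks uniform

    n, points = 3, [1, 2, 3]          # a degree-2 product needs 3 servers
    A = rng.integers(0, P, (n, n))
    B = rng.integers(0, P, (n, n))

    # Each server multiplies the two masked evaluations it holds.
    answers = [Ai @ Bi % P for Ai, Bi in zip(shares(A, points), shares(B, points))]

    # Master: Lagrange-interpolate the matrix polynomial C(x) = A(x)B(x)
    # at x = 0, where the constant term is the true product AB.
    C = np.zeros((n, n), dtype=np.int64)
    for i, xi in enumerate(points):
        lam = 1
        for j, xj in enumerate(points):
            if j != i:
                lam = lam * xj % P * pow(xj - xi, P - 2, P) % P
        C = (C + lam * answers[i]) % P

    assert np.array_equal(C, (A @ B) % P)         # product recovered exactly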

Chen, Z., Tondi, B., Li, X., Ni, R., Zhao, Y., Barni, M..  2017.  A Gradient-Based Pixel-Domain Attack against SVM Detection of Global Image Manipulations. 2017 IEEE Workshop on Information Forensics and Security (WIFS). :1–6.

We present a gradient-based attack against SVM-based forensic techniques relying on high-dimensional SPAM features. As opposed to prior work, the attack works directly in the pixel domain even though the relationship between pixel values and SPAM features cannot be inverted. The proposed method relies on the estimation of the gradient of the SVM output with respect to pixel values; however, it departs from standard gradient descent methodology due to the necessity of preserving the integer nature of pixels and of reducing the effect of the attack on image quality. A fast algorithm to estimate the gradient is also introduced to reduce the complexity of the attack. We tested the proposed attack against SVM detection of histogram stretching, adaptive histogram equalization, and median filtering. In all cases the attack succeeded in inducing a decision error with a very limited distortion, the PSNR between the original and the attacked images ranging from 50 to 70 dB. The attack is also effective in the case of attacks with Limited Knowledge (LK), when the SVM used by the attacker is trained on a different dataset with respect to that used by the analyst.
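
A minimal Python sketch of the pixel-domain strategy: estimate the gradient of the SVM score with respect to pixels by finite differences and step against it while keeping pixel values integer. The histogram feature extractor, the assumption that `clf` is a trained scikit-learn SVC whose positive scores mean "manipulated", and all step sizes are hypothetical simplifications of the paper's SPAM-feature setup.

    import numpy as np
    from sklearn.svm import SVC   # clf below is assumed to be a trained SVC

    def features(img):
        # Placeholder for the (non-invertible) SPAM feature extraction.
        h = np.histogram(img, bins=16, range=(0, 256))[0].astype(float)
        return h / h.sum()

    def estimate_grad(clf, img, n=64):
        """Finite-difference estimate of d(score)/d(pixel) on n random pixels."""
        base = clf.decision_function([features(img)])[0]
        grad = np.zeros(img.size)
        for k in np.random.choice(img.size, n, replace=False):
            bumped = img.copy()
            bumped.flat[k] = min(bumped.flat[k] + 1, 255)
            grad[k] = clf.decision_function([features(bumped)])[0] - base
        return grad

    def attack(clf, img, steps=50):
        img = img.astype(np.int16)
        for _ in range(steps):
            if clf.decision_function([features(img)])[0] < 0:
                break                       # detector now reports "untouched"
            grad = estimate_grad(clf, img)
            step = np.sign(grad).reshape(img.shape)
            img = np.clip(img - step, 0, 255).astype(np.int16)   # stay integer
        return img.astype(np.uint8)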

Crampton, Jason, Gutin, Gregory, Watrigant, Rémi.  2016.  Resiliency Policies in Access Control Revisited. Proceedings of the 21st ACM on Symposium on Access Control Models and Technologies. :101–111.

Resiliency is a relatively new topic in the context of access control. Informally, it refers to the extent to which a multi-user computer system, subject to an authorization policy, is able to continue functioning if a number of authorized users are unavailable. Several interesting problems connected to resiliency were introduced by Li, Wang and Tripunitara [13], many of which were found to be intractable. In this paper, we show that these resiliency problems have unexpected connections with the workflow satisfiability problem (WSP). In particular, we show that an instance of the resiliency checking problem (RCP) may be reduced to an instance of WSP. We then demonstrate that recent advances in our understanding of WSP enable us to develop fixed-parameter tractable algorithms for RCP. Moreover, these algorithms are likely to be useful in practice, given recent experimental work demonstrating the advantages of bespoke algorithms to solve WSP. We also generalize RCP in several different ways, showing in each case how to adapt the reduction to WSP. Li et al. also showed that the coexistence of resiliency policies and static separation-of-duty policies gives rise to further interesting questions. We show how our reduction of RCP to WSP may be extended to solve these problems as well and establish that they are also fixed-parameter tractable.

F
Fahrbach, M., Miller, G. L., Peng, R., Sawlani, S., Wang, J., Xu, S. C..  2018.  Graph Sketching against Adaptive Adversaries Applied to the Minimum Degree Algorithm. 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS). :101–112.

Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in polylogarithmic time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely-used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees. While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating interleaved queries and updates to this randomized data structure may be of independent interest.
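
For context, the exact minimum degree heuristic that the paper approximates takes only a few lines (sketched here with networkx, assumed available): repeatedly eliminate a minimum-degree vertex and add fill edges turning its neighborhood into a clique. This naive version is quadratic or worse, which is what motivates the nearly-linear sketching approach.

    import networkx as nx

    def min_degree_ordering(G):
        G = G.copy()
        order = []
        while G.number_of_nodes() > 0:
            v = min(G.nodes, key=G.degree)        # vertex of minimum fill degree
            nbrs = list(G.neighbors(v))
            G.remove_node(v)
            # Eliminating v adds "fill" edges making its neighborhood a clique.
            for i in range(len(nbrs)):
                for j in range(i + 1, len(nbrs)):
                    G.add_edge(nbrs[i], nbrs[j])
            order.append(v)
        return order

    print(min_degree_ordering(nx.path_graph(6)))
    # [0, 1, 2, 3, 4, 5] -- on a path, endpoints always have minimum degree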

Fang, Bo, Hua, Zhongyun, Huang, Hejiao.  2019.  Locality-Sensitive Hashing Scheme Based on Heap Sort of Hash Bucket. 2019 14th International Conference on Computer Science Education (ICCSE). :5–10.

Nearest neighbor search (NNS) is a popular research direction, widely used in machine learning, pattern recognition, image detection, and so on. For low-dimensional data, tree-based search methods achieve good results, but as the data dimension grows they suffer from the curse of dimensionality. The Locality-Sensitive Hashing algorithm (LSH) greatly improves the efficiency of nearest neighbor queries for high-dimensional data, but it relies on building a large number of hash tables, which makes its space complexity very high. C2LSH, based on dynamic collision counting, improves on this disadvantage of LSH, but it needs to count the collisions of a large number of data points, which increases query time. Later researchers have therefore proposed many improved algorithms based on LSH, but the results are still not ideal. In this paper, we put forward the Locality-Sensitive Hashing Scheme Based on Heap Sort of Hash Bucket (HSLSH) algorithm, aiming at the shortcomings of LSH and C2LSH. Its main idea is to take advantage of the efficiency of heapsort on massive data to improve the efficiency of nearest neighbor queries. Relying on only a small number of hash functions, it both overcomes LSH's need to build a large number of hash tables and avoids the defects of C2LSH. Experiments show that our algorithm is more than 20% better than C2LSH in query accuracy and 40% lower in query time.
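
A minimal random-projection LSH sketch in Python shows the mechanism all of these schemes share: points whose projection sign patterns agree land in the same bucket, so a query compares against one bucket rather than the whole dataset. The plane count is illustrative, and HSLSH's heap-based candidate ordering is not modeled.

    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)

    class LSHIndex:
        def __init__(self, dim, n_planes=12):
            self.planes = rng.standard_normal((n_planes, dim))
            self.buckets = defaultdict(list)

        def _key(self, v):
            return tuple((self.planes @ v > 0).astype(int))   # sign pattern

        def add(self, i, v):
            self.buckets[self._key(v)].append((i, v))

        def query(self, q):
            candidates = self.buckets.get(self._key(q), [])
            if not candidates:
                return None       # a full version would probe nearby buckets
            return min(candidates, key=lambda iv: np.linalg.norm(iv[1] - q))[0]

    data = rng.standard_normal((10_000, 32))
    index = LSHIndex(32)
    for i, v in enumerate(data):
        index.add(i, v)
    print(index.query(data[42] + 0.01 * rng.standard_normal(32)))   # likely 42
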
Fuchs, Caro, Spolaor, Simone, Nobile, Marco S., Kaymak, Uzay.  2019.  A Swarm Intelligence Approach to Avoid Local Optima in Fuzzy C-Means Clustering. 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.

Clustering analysis is an important computational task with applications in many domains. One of the most popular algorithms to solve the clustering problem is fuzzy c-means, which exploits notions from fuzzy logic to provide a smooth partitioning of the data into classes, allowing the possibility of multiple membership for each data sample. The fuzzy c-means algorithm is based on the optimization of a partitioning function which minimizes inter-cluster similarity. This optimization problem is known to be NP-hard, and it is generally tackled using a hill climbing method, a local optimizer that provides acceptable but sub-optimal solutions, since it is sensitive to initialization and tends to get stuck in local optima. In this work we propose an alternative approach based on the swarm intelligence global optimization method Fuzzy Self-Tuning Particle Swarm Optimization (FST-PSO). We solve the fuzzy clustering task by optimizing fuzzy c-means' partitioning function using FST-PSO. We show that this population-based metaheuristic is more effective than hill climbing, providing high-quality solutions at the cost of additional computational complexity. It is noteworthy that, since this particle swarm optimization algorithm is self-tuning, the user does not have to specify additional hyperparameters for the optimization process.
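
For reference, the hill-climbing baseline the paper improves on, the standard fuzzy c-means alternating update, is sketched below; the fuzzifier m, iteration count, and toy data are illustrative.

    import numpy as np

    def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))                 # random fuzzy memberships
        U /= U.sum(axis=1, keepdims=True)           # each row sums to 1
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]       # weighted means
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
            U = 1.0 / ratio.sum(axis=2)             # standard membership update
        return centers, U

    pts = np.random.default_rng(1)
    X = np.vstack([pts.normal(mu, 0.3, (50, 2)) for mu in ([0, 0], [3, 3], [0, 3])])
    centers, U = fuzzy_c_means(X)
    print(np.round(centers, 1))                     # near the three cluster means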