Biblio
The mitigation of insider threats against databases is a challenging problem, as insiders often have legitimate access privileges to sensitive data. Conventional security mechanisms, such as authentication and access control, may therefore be insufficient to protect databases against insider threats and need to be complemented with techniques that support real-time detection of access anomalies. Existing real-time anomaly detection techniques consider anomalies in references to database entities and in the amounts of data accessed, but they are unable to track access frequencies. According to recent security reports, an increase in an insider's access frequency is an indicator of potential data misuse and may result from malicious intent to steal or corrupt the data. In this paper, we propose techniques for tracking users' access frequencies and detecting related anomalous activities in real time. We present detailed algorithms for constructing accurate profiles that describe the access patterns of database users and for matching subsequent accesses by these users against the profiles. Our methods report and log mismatches as anomalies that may need further investigation. We evaluated our techniques on the OLTP-Benchmark; the results indicate that our techniques are very effective in detecting anomalies.
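A minimal sketch of such frequency profiling, assuming a simple mean/standard-deviation profile per user and a 3-sigma threshold — all names and values here are illustrative, not the paper's actual algorithm:

```python
from statistics import mean, stdev

def build_profile(window_counts):
    """Profile a user's access frequency from per-window access counts
    observed during a training period (hypothetical interface)."""
    return mean(window_counts), stdev(window_counts)

def is_anomalous(count, profile, k=3.0):
    """Flag a window whose access count exceeds mean + k * stddev."""
    mu, sigma = profile
    return count > mu + k * sigma

# Train on six observed windows, then test two new windows.
profile = build_profile([10, 12, 11, 9, 13, 10])
normal = is_anomalous(12, profile)   # within the learned pattern
spike = is_anomalous(40, profile)    # sudden frequency increase
```

A real detector would, as the abstract notes, also log each mismatch for later investigation rather than only returning a flag.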
Cloud systems offer a diversity of security mechanisms with potentially complex configuration options. So far, security engineering has focused on achievable security levels, but not on the costs associated with a specific security mechanism and its configuration. Through a series of experiments with a variety of cloud datastores conducted over recent years, we gained substantial knowledge of how one desired quality, such as security, can have a significant impact on other system qualities, such as performance. In this paper, we report on selected findings related to security-performance trade-offs for three prominent cloud datastores, focusing on encryption of data in transit, and propose a simple, structured approach for making trade-off decisions based on factual evidence gained through experimentation. Our approach allows one to reason rationally about security trade-offs.
Insider attacks are one of the most dangerous threats to an organization. Unfortunately, they are very difficult to foresee, detect, and defend against due to the trust and responsibilities placed on employees. In this paper, we first define the notion of user intent and construct a model for the most common threat scenario used in the literature, which poses a very high risk for sensitive data stored in the organization's database. We show that the complexity of identifying pseudo-intents of a user is coNP-complete in this domain, and that launching a harvester insider attack within the boundaries of the defined threat model takes linear time, while the targeted threat model is an NP-complete problem. We also discuss general defense mechanisms against the modeled threats and show that countering the harvester insider attack model takes quadratic time, while countering the targeted insider attack model can take linear to quadratic time depending on the strategy chosen. Finally, we analyze adversarial behavior and show that launching an attack with minimum risk is also an NP-complete problem.
Volumetric DDoS attacks continue to inflict serious damage. Many proposed defenses for mitigating such attacks assume that a monitoring system has already detected the attack. However, many proposed DDoS monitoring systems do not focus on efficiently analyzing high-volume network traffic to provide important characterizations of the attack in real time to downstream traffic filtering systems. We propose a scalable real-time framework for an effective volumetric DDoS monitoring system that leverages modern big data technologies for streaming analytics of high-volume network traffic to accurately detect and characterize attacks.
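A toy tumbling-window characterization, far simpler than a big-data streaming pipeline, illustrates the kind of per-destination rate summary such a monitor could hand to a downstream filter (the window size, threshold, and function names are invented for illustration):

```python
from collections import Counter

WINDOW = 1.0    # seconds per tumbling window (assumed)
THRESHOLD = 3   # packets per window considered volumetric (toy value)

def characterize(packets):
    """packets: iterable of (timestamp, dst_ip) tuples. Returns, per
    window index, the destinations whose packet count exceeds THRESHOLD."""
    windows = {}
    for ts, dst in packets:
        windows.setdefault(int(ts // WINDOW), Counter())[dst] += 1
    return {w: [d for d, n in c.items() if n > THRESHOLD]
            for w, c in windows.items()}

# Five packets to 10.0.0.5 in window 0 trip the threshold; later traffic does not.
traffic = [(0.1, "10.0.0.5")] * 5 + [(0.2, "10.0.0.9")] + [(1.3, "10.0.0.5")]
report = characterize(traffic)
```

A production system would compute such summaries incrementally over a stream rather than batching, but the output shape — attack characterizations keyed by time window — is the same.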
Confidentiality concerns are important in the context of cloud databases. In this paper, the technique of vertical fragmentation is explored to break sensitive associations between columns of several database tables according to confidentiality constraints. By storing insensitive portions of the database at different non-communicating servers it is possible to overcome confidentiality concerns. In addition, visibility constraints and data dependencies are supported. Moreover, to provide some control over the distribution of columns among different servers, novel closeness constraints are introduced. Finding confidentiality-preserving fragmentations is studied in the context of mathematical optimization and a corresponding integer linear program formulation is presented. Benchmarks were performed to evaluate the suitability of our approach.
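The core feasibility check behind such a formulation can be sketched by brute force — an illustrative checker under invented column names, not the paper's integer linear program:

```python
def breaks_all_constraints(fragments, constraints):
    """A confidentiality constraint (a set of column names) is broken
    when no single fragment contains all of its columns together."""
    return all(not set(c) <= set(f) for c in constraints for f in fragments)

# Toy schema: {name, ssn} together are sensitive, as are {dob, zip, ssn}.
constraints = [{"name", "ssn"}, {"dob", "zip", "ssn"}]
ok = breaks_all_constraints([{"name", "dob"}, {"ssn", "zip"}], constraints)
bad = breaks_all_constraints([{"name", "ssn"}, {"dob", "zip"}], constraints)
```

The optimization problem adds objectives and the visibility, dependency, and closeness constraints on top of this basic feasibility test, which is why the paper resorts to an ILP rather than enumeration.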
The privacy of information is an increasing concern among users of software applications. This concern has been fueled by attacks on cloud services over the last few years that leaked confidential information such as passwords, emails, and even private pictures. Once information is leaked, users and software applications are powerless to contain its spread and misuse. As the central component in which applications store almost all of their data, databases are one of the most common targets of attacks. However, typical database deployments do not leverage security mechanisms to stop attacks and do not apply cryptographic schemes to protect data. This issue has been tackled by multiple secure databases that provide trade-offs between security, query capabilities, and performance. Despite providing stronger security guarantees, the proposed solutions still entrust their data to a single entity that can be corrupted or hacked. Secret sharing can solve this problem by dividing data into multiple secrets and storing each secret at a different location. The division is done in such a way that if one location is hacked, no information can be leaked. Depending on the protocols used to divide the data, functions can be computed over it through secure protocols that neither disclose information nor even learn which values are being calculated. We propose a SQL database prototype capable of offering a trade-off between security and query latency by using different secure protocols. An evaluation of the protocols is also performed, showing that our most relaxed protocol achieves an improvement of more than 5x in query latency over the original protocol.
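The splitting idea can be illustrated with a minimal additive secret-sharing scheme — a standard construction, not necessarily the protocols used by the prototype; the modulus and names are assumptions:

```python
import secrets

MOD = 2 ** 64  # working modulus (illustrative)

def split(value, n):
    """Additively share `value` into n shares. Any n-1 shares are
    uniformly random, so a single compromised server learns nothing."""
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Only the sum of all shares recovers the original value."""
    return sum(shares) % MOD

shares = split(123456, 3)   # one share per storage location
recovered = reconstruct(shares)
```

Additive shares also let servers add their shares locally to compute a shared sum, which is the kind of secure computation over divided data the abstract alludes to; richer queries require more elaborate protocols.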
Various critical state models have been developed to understand the hysteresis loss mechanism of high-temperature superconducting (HTSC) films. The analytic relation between the hysteresis loss and the remanent field was obtained based on Bean's critical state model for thin films in the full-penetration case. Furthermore, numerical calculation of local hysteresis loops was carried out using Kim's critical state model. In this paper, we investigated local hysteresis losses for a GdBCO coated conductor using low-temperature scanning Hall probe microscopy and reproduced the experimental results by applying the critical state model. Because of the demagnetizing effect in thin films, the analysis of local hysteresis losses can be a useful approach to understanding total hysteresis losses.
Playing a significant role in the storage, delivery, and conversion of energy, permanent magnets are key elements of current technology. In many applications, the gap between ferrites and rare-earth (RE)-based sintered permanent magnets is nowadays filled by RE bonded magnets, which are used in many applications below their full magnetic performance. Moreover, recent trends in the RE market concerning scarcity compel the EU to consider alternative, RE-free magnets to fill this gap. This paper presents the chemical synthesis of exchange-coupled SrFe12O19/CoFe2O4 nanocomposites based on nanoferrites. Appropriate annealing increases the main magnetic characteristics, the saturation magnetization MS and the intrinsic coercivity Hc, to the ranges 49-53 emu/g and 126.5-306 kA/m, respectively. The ratio of remanent magnetization to saturation magnetization exceeds 0.5, which proves that exchange interaction occurs between the two magnetic phases.
Alternatives to rare-earth permanent magnets, such as alnico, will reduce supply instability, increase sustainability, and could decrease the cost of permanent magnets, especially for high-temperature applications such as traction drive motors. Alnico magnets with moderate coercivity, high remanence, and relatively high energy product are conventionally processed by directional solidification and (significant) final machining, contributing to increased costs and additional material waste. Additive manufacturing (AM) is developing as a cost-effective method to build net-shape 3-D parts with minimal final machining and properties comparable to wrought parts. This paper describes initial studies of net-shape fabrication of alnico magnets by AM using a laser engineered net shaping (LENS) system. High-pressure gas atomized pre-alloyed powders of two different modified alnico "8" compositions, with high purity and sphericity, were built into cylinders using the LENS process and then heat treated. The magnetic properties showed improvement over their cast and sintered counterparts. The resulting alnico permanent magnets were characterized using scanning electron microscopy, energy dispersive spectroscopy, electron backscatter diffraction, and hysteresisgraph measurements. These results demonstrate the potential for net-shape processing of alnico permanent magnets for use in next-generation traction drive motors and other applications requiring high temperatures and/or complex engineered part geometries.
The heat load of the original cryomodules for the continuous electron beam accelerator facility is 50% higher than the target value of 100 W at 2.07 K for refurbished cavities operating at an accelerating gradient of 12.5 MV/m. This issue is due to the quality factor of the cavities being 50% lower in the cryomodule than when tested in a vertical cryostat, even at low RF field. Previous studies were not conclusive about the origin of the additional losses. We present the results of a systematic study of the additional losses in a five-cell cavity from a decommissioned cryomodule after attaching components, which are part of the cryomodule, such as the cold tuner, the He tank, and the cold magnetic shield, prior to cryogenic testing in a vertical cryostat. Flux-gate magnetometers and temperature sensors are used as diagnostic elements. Different cool-down procedures and tests in different residual magnetic fields were investigated during the study. Three flux-gate magnetometers attached to one of the cavities installed in the refurbished cryomodule C50-12 confirmed the hypothesis of high residual magnetic field as a major cause for the increased RF losses.
To investigate the relationships among the applied DC current, excitation source, excitation-loop current, sensitivity, and induced voltage of the detecting winding, and their effect on the performance of a magnetic modulator, this paper measures the initial permeability, maximum permeability, saturation magnetic induction, remanent magnetic induction, coercivity, saturation magnetic field intensity, magnetization curve, permeability curve, and hysteresis loop of the modulator's main core, 1J85 permalloy, using the ballistic method. On this foundation, we employed MATLAB's curve-fitting tool and a multiple-regression approach to comprehensively compare and analyze the sum of squares due to error (SSE), the coefficient of determination (R-square), the degree-of-freedom-adjusted coefficient of determination (Adjusted R-square), and the root mean squared error (RMSE) of the fitting results. Finally, we established a B-H curve mathematical model based on the sum of an arc-hyperbolic sine function and a polynomial.
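A sketch of evaluating a model of this shape, together with the RMSE and R-square measures named above, might look as follows; the coefficients and data are toy values, not the paper's fitted parameters:

```python
import math

def b_model(h, a, b, c1, c3):
    # B(H) = a*asinh(b*H) + c1*H + c3*H^3 -- the arc-hyperbolic-sine-
    # plus-polynomial shape; these coefficients are illustrative only.
    return a * math.asinh(b * h) + c1 * h + c3 * h ** 3

def rmse_r2(data, params):
    """Fit-quality measures for (H, B) pairs against the model."""
    preds = [b_model(h, *params) for h, _ in data]
    obs = [b for _, b in data]
    sse = sum((p - o) ** 2 for p, o in zip(preds, obs))
    mean_b = sum(obs) / len(obs)
    sst = sum((o - mean_b) ** 2 for o in obs)
    return math.sqrt(sse / len(obs)), 1.0 - sse / sst

# Synthetic (H, B) data generated from the model itself, so the fit is exact.
params = (0.8, 2.0, 0.01, -1e-6)
data = [(h, b_model(h, *params)) for h in range(-50, 51, 10)]
rmse, r2 = rmse_r2(data, params)
```

With measured core data, one would minimize the SSE over the four coefficients (as the paper does with MATLAB's curve-fitting tool) rather than plug in known values.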
Ensemble waveform analysis is used to calculate the signal-to-noise ratio (SNR) and other recording characteristics from micromagnetically modeled heat-assisted magnetic recording waveforms and from waveforms measured at both the drive and spin-stand level. Windowing functions provide the breakdown between transition and remanence SNRs. In addition, channel bit density (CBD) can be extracted from the ensemble waveforms using the di-bit extraction method. Trends in transition SNR, remanence SNR, and CBD as functions of ambient temperature at constant track width showed good agreement between model and measurement. Both the model and drive-level measurements show degradation in SNR at higher ambient temperatures, which may be due to changes in the down-track profile at the track edges compared with the track center. CBD as a function of cross-track position is also calculated for both modeling and spin-stand measurements. The CBD widening at high cross-track offset, observed in both measurement and model, was directly related to the radius of curvature of the written transitions observed in the model and to the thermal profiles used.
A significant advance in magnetic field management in a fully assembled superconducting radiofrequency cryomodule has been achieved and is reported here. Demagnetization of the entire cryomodule after assembly is a crucial step toward the goal of average magnetic flux density less than 0.5 μT at the location of the superconducting radio frequency cavities. An explanation of the physics of demagnetization and experimental results are presented.
The start-up value of an SRAM cell is unique, random, and unclonable, as it is determined by the inherent process mismatch between transistors. These properties make SRAM an attractive circuit for generating encryption keys. The primary challenge for SRAM-based key generation, however, is poor stability when the circuit is subject to random noise, temperature and voltage changes, and device aging. Temporal majority voting (TMV) and bit masking were used in previous works to identify and store the locations of unstable or marginally stable SRAM cells. However, TMV requires a long test time and significant hardware resources, and the number of repetitive power-ups required to find the most stable cells is prohibitively high. To overcome these shortcomings, we propose a novel data-remanence-based technique to detect the SRAM cells with the highest stability for reliable key generation. The approach requires only two remanence tests: writing '1' (or '0') to the entire array and momentarily shutting down the power until a few cells flip. We exploit the fact that the cells that are easily flipped are the most robust cells when written with the opposite data. The proposed method is more effective at finding the most stable cells in a large SRAM array than a TMV scheme with 1,000 power-up tests. Experimental studies show that a 256-bit key generated from a 512-kbit SRAM using the proposed data-remanence method is 100% stable under different temperatures, power ramp-up times, and device aging.
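The two-test selection idea can be sketched with a toy simulation; the per-cell skew model and flip threshold are invented for illustration and real hardware behavior is far richer:

```python
import random

random.seed(7)
# Each simulated cell has a power-up skew in [-1, 1]: strongly negative
# cells snap to '0', strongly positive cells snap to '1' (toy model).
skews = [random.uniform(-1, 1) for _ in range(256)]

def remanence_test(written, strength=0.9):
    """Write `written` to every cell and cut power briefly; the cells
    with the strongest opposite skew flip first, and those indices
    identify the most stable cells for the opposite value."""
    if written == 1:
        return {i for i, s in enumerate(skews) if s < -strength}
    return {i for i, s in enumerate(skews) if s > strength}

stable_as_zero = remanence_test(1)  # flipped to 0 -> robust storing '0'
stable_as_one = remanence_test(0)   # flipped to 1 -> robust storing '1'
```

Key bits would then be drawn only from the two recovered sets, mirroring the abstract's point that the easily flipped cells are exactly the most robust ones under opposite data.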
SDN is a new network architecture that separates the control and data-forwarding logic, providing a high degree of openness and programmability and many advantages unavailable in traditional networks. However, some problems remain unsolved: the controller is easily attacked because packet sources are not verified, and the limited range of match fields cannot meet the requirements of precise control of network services. To address these problems, this paper proposes an SDN security control and forwarding mechanism based on cipher identification. When packets flow into and out of the network, the forwarding device must verify their source to ensure the user's non-repudiation and the authenticity of the packets. In addition, administrators control data forwarding based on cipher identification, forming network management and control capabilities over users, resources, and business flows, and providing a new method and means for future Internet security.
Digital signatures have become a crucial requirement in communication and digital messaging, as digital messages are highly vulnerable to manipulation by irresponsible parties. Digital signatures seek to maintain two security aspects that cryptography aims for: integrity and non-repudiation. This research applies the MAC address with AES-128 and SHA-2 (256-bit) for digital signatures. Using the MAC address in AES-128 can improve the security of the digital signature because the address is unique to each computer and can randomize the standard AES process. SHA-2 256-bit provides unique randomized digests at reasonable speed. As a result, the proposed digital signature can be implemented and works on many platforms.
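A standard-library-only sketch of the idea follows. Since AES-128 is not in the Python standard library, HMAC-SHA-256 stands in here for the paper's AES-128 + SHA-2 pipeline, and the key derivation from `uuid.getnode()` (which returns the machine's MAC address as a 48-bit integer) is purely illustrative:

```python
import hashlib
import hmac
import uuid

def device_key():
    # Derive 16 bytes of key material from this machine's MAC address;
    # the paper seeds AES-128 key material from the MAC, while this
    # particular derivation is an assumption for the sketch.
    mac = uuid.getnode().to_bytes(6, "big")
    return hashlib.sha256(mac).digest()[:16]

def sign(message: bytes, key: bytes) -> str:
    # HMAC-SHA-256 substitutes for the AES-based signing step.
    return hmac.new(key, message, hashlib.sha256).hexdigest()

key = device_key()
tag = sign(b"transfer $100 to Bob", key)
valid = hmac.compare_digest(tag, sign(b"transfer $100 to Bob", key))
tampered = hmac.compare_digest(tag, sign(b"transfer $900 to Bob", key))
```

Because the key depends on the MAC address, a tag produced on one machine will not verify on another, which captures the per-device uniqueness the abstract relies on (at the cost of tying verification to the signing machine).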
In the field of communication, the need for cryptography is growing rapidly, yet achieving cryptography's objectives of confidentiality, data integrity, and non-repudiation remains very difficult. To ensure data security, the algorithm depends on key scheduling and key management. In this paper, the enciphering and deciphering processes of the SERPENT algorithm are implemented using a graphical programming tool. SERPENT is a substitution-permutation-network algorithm whose round function includes key scheduling, S-boxes, and a linear mixing stage. It is fast and easy to implement, and it requires little memory.
Embedded electronic devices and sensors such as smartphones, smart watches, medical implants, and Wireless Sensor Nodes (WSN) are making the “Internet of Things” (IoT) a reality. Such devices often require cryptographic services such as authentication, integrity and non-repudiation, which are provided by Public-Key Cryptography (PKC). As these devices are severely resource-constrained, choosing a suitable cryptographic system is challenging. Pairing Based Cryptography (PBC) is among the best candidates to implement PKC in lightweight devices. In this research, we present a fast and energy efficient implementation of PBC based on Barreto-Naehrig (BN) curves and optimal Ate pairing using hardware/software co-design. Our solution consists of a hardware-based Montgomery multiplier, and pairing software running on an ARM Cortex A9 processor in a Zynq-7020 System-on-Chip (SoC). The multiplier is protected against simple power analysis (SPA) and differential power analysis (DPA), and can be instantiated with a variable number of processing elements (PE). Our solution improves performance (in terms of latency) over an open-source software PBC implementation by factors of 2.34 and 2.02, for 256- and 160-bit field sizes, respectively, as measured in the Zynq-7020 SoC.
In cloud storage systems, users can upload their data along with associated tags (authentication information) to cloud storage servers. To ensure the availability and integrity of the outsourced data, provable data possession (PDP) schemes convince verifiers (users or third parties) that the outsourced data stored in the cloud storage server are correct and unchanged. Recently, several PDP schemes with a designated verifier (DV-PDP) were proposed to provide the flexibility of an arbitrary designated verifier. A designated verifier (private verifier) is trustable and is designated by a user to check the integrity of the outsourced data. However, these DV-PDP schemes are either inefficient or insecure under some circumstances. In this paper, we propose the first non-repudiable PDP scheme with a designated verifier (DV-NRPDP) to address the non-repudiation issue and resolve possible disputes between users and cloud storage servers. We define the system model, framework, and adversary model of DV-NRPDP schemes. Afterward, a concrete DV-NRPDP scheme is presented. Based on the computational discrete logarithm assumption, we formally prove that the proposed DV-NRPDP scheme is secure against several forgery attacks in the random oracle model. Comparisons with previously proposed schemes demonstrate the advantages of our scheme.
In several critical military missions, more than one decision level is involved. These decision levels are often independent and distributed, and sensitive pieces of information making up the military mission must be kept hidden from one level to another, even when all decision levels cooperate on the same task. Usually, a mission is negotiated over insecure networks such as the Internet using cryptographic protocols, in which a few security properties have to be ensured. However, designing a secure cryptographic protocol that ensures several properties at once is a very challenging task. In this paper, we propose a new secure protocol for multipart military missions that involve two independent and distributed decision levels with different security levels. We show that it ensures the secrecy, authentication, and non-repudiation properties, and that it resists man-in-the-middle attacks.