Biblio
2017-03-07
Agnihotri, Lalitha, Mojarad, Shirin, Lewkow, Nicholas, Essa, Alfred.  2016.  Educational Data Mining with Python and Apache Spark: A Hands-on Tutorial. Proceedings of the Sixth International Conference on Learning Analytics & Knowledge. :507–508.

An enormous amount of educational data has been accumulated through Massive Open Online Courses (MOOCs), as well as commercial and non-commercial learning platforms. This is in addition to the educational data released by the US government since 2012 to facilitate disruption in education by making data freely available. The high volume, variety, and velocity of the collected data necessitate the use of big data tools and storage systems such as distributed databases for storage and Apache Spark for analysis. This tutorial will introduce researchers and faculty to real-world applications involving data mining and predictive analytics in the learning sciences. In addition, the tutorial will introduce the statistics required to validate and accurately report results. Topics will cover how big data is being used to transform education. Specifically, we will demonstrate how exploratory data analysis, data mining, predictive analytics, machine learning, and visualization techniques are being applied to educational big data to improve learning and scale insights derived from millions of students' records. The tutorial will be held over a half day and will be hands-on, with pre-posted material. Due to the interdisciplinary nature of the work, the tutorial appeals to researchers from a wide range of backgrounds, including big data, predictive analytics, learning sciences, educational data mining, and, in general, those interested in how big data analytics can transform learning. As a prerequisite, attendees are required to have familiarity with at least one programming language.
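
As a flavor of the hands-on portion, here is a minimal PySpark sketch of exploratory analysis over student records; the file name and column names (course_id, score) are hypothetical, not the tutorial's materials.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("edm-sketch").getOrCreate()

# Hypothetical interaction log: one row per student/course event.
df = spark.read.csv("student_records.csv", header=True, inferSchema=True)

# Exploratory aggregate: per-course activity volume and mean score.
summary = (df.groupBy("course_id")
             .agg(F.count("*").alias("events"),
                  F.avg("score").alias("mean_score"))
             .orderBy(F.desc("events")))
summary.show(10)
```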

Olabelurin, A., Veluru, S., Healing, A., Rajarajan, M..  2015.  Entropy clustering approach for improving forecasting in DDoS attacks. 2015 IEEE 12th International Conference on Networking, Sensing and Control. :315–320.

Volume anomalies such as distributed denial-of-service (DDoS) attacks have been around for ages, but with advances in technology they have become stronger, shorter, and the weapon of choice for attackers. Digital forensic analysis of intrusions using alerts generated by existing intrusion detection systems (IDS) faces major challenges, especially for IDS deployed in large networks. In this paper, the concept of automatically sifting through a huge volume of alerts to distinguish the different stages of a DDoS attack is developed. The proposed novel framework is purpose-built to analyze multiple logs from the network for proactive forecasting and timely detection of DDoS attacks, through a combined approach of the Shannon-entropy concept and a clustering algorithm over relevant feature variables. Experimental studies on a cyber-range simulation dataset from the project's industrial partners show that the technique is able to distinguish precursor alerts for DDoS attacks, as well as the attack itself, with a false positive rate (FPR) of 22.5%. Application of this technique greatly assists security experts in network analysis to combat DDoS attacks.
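
To make the combined approach concrete, the following Python sketch (not the authors' code) pairs the two ingredients: Shannon entropy computed over windows of a traffic feature, and k-means clustering of the resulting entropy values. The window contents and cluster count are illustrative assumptions.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def shannon_entropy(values):
    """Entropy (bits) of the empirical distribution of `values` in one window."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical alert windows: each window is a list of source-IP strings.
windows = [["10.0.0.%d" % (i % k) for i in range(100)] for k in (2, 3, 50, 60)]
features = np.array([[shannon_entropy(w)] for w in windows])

# Cluster windows into low-entropy vs high-entropy regimes.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(labels)
```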

Benjamin, V., Li, W., Holt, T., Chen, H..  2015.  Exploring threats and vulnerabilities in hacker web: Forums, IRC and carding shops. 2015 IEEE International Conference on Intelligence and Security Informatics (ISI). :85–90.

Cybersecurity is a problem of growing relevance that impacts all facets of society. As a result, many researchers have become interested in studying cybercriminals and online hacker communities in order to develop more effective cyber defenses. In particular, analysis of hacker community contents may reveal existing and emerging threats that pose great risk to individuals, businesses, and government. Thus, we are interested in developing an automated methodology for identifying tangible and verifiable evidence of potential threats within hacker forums, IRC channels, and carding shops. To identify threats, we couple machine learning methodology with information retrieval techniques. Our approach allows us to distill potential threats from the entirety of collected hacker contents. We present several examples of identified threats found through our analysis techniques. Results suggest that hacker communities can be analyzed to aid in cyber threat detection, thus providing promising direction for future work.
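
The coupling of information retrieval with machine learning that the authors describe can be illustrated with a small scikit-learn pipeline; the forum posts and labels below are toy stand-ins, not the collected hacker contents.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

posts = ["selling fresh cc dumps with pin",
         "how do I configure my firewall",
         "new 0day exploit for sale, dm me",
         "best linux distro for beginners?"]
labels = [1, 0, 1, 0]  # 1 = potential threat, 0 = benign (toy labels)

# TF-IDF retrieval features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(posts, labels)
print(clf.predict(["private exploit kit for sale"]))
```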

Lin, C. H., Tien, C. W., Chen, C. W., Tien, C. W., Pao, H. K..  2015.  Efficient spear-phishing threat detection using hypervisor monitor. 2015 International Carnahan Conference on Security Technology (ICCST). :299–303.

In recent years, cyber security threats have become increasingly dangerous. Hackers fabricate fake emails to spoof specific users into clicking on malicious attachments or URL links in them. This kind of threat is called a spear-phishing attack. Because spear-phishing attacks use unknown exploits to trigger malicious activities, it is difficult to defend against them effectively. This study therefore focuses on these challenges, and we develop a Cloud-threat Inspection Appliance (CIA) system to defend against spear-phishing threats. Exploiting hardware-assisted virtualization technology, we use the CIA to develop a transparent hypervisor monitor that conceals the presence of the detection engine in the hypervisor kernel. In addition, the CIA includes a document pre-filtering algorithm to enhance system performance: by inspecting PDF format structures, the proposed CIA was able to filter out 77% of PDF attachments, preventing them from being sent to the hypervisor monitor for deeper analysis. Finally, we tested the CIA in real-world scenarios. The hypervisor monitor was shown to be a better anti-evasion sandbox than commercial ones. During 2014, the CIA inspected 780,000 mails in a company with 200 user accounts and found 65 unknown samples that were not detected by commercial anti-virus software.
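
A structure-based pre-filter of this kind can be approximated in a few lines: pass a PDF to the sandbox only if it contains object types commonly required to trigger an exploit. The keyword list below is an illustrative assumption, not the paper's actual filtering rules.

```python
# PDF name objects often involved in exploit triggering (illustrative list).
SUSPICIOUS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA", b"/Launch",
              b"/EmbeddedFile"]

def needs_deep_analysis(path):
    """Send to the hypervisor monitor only if a suspicious structure appears."""
    with open(path, "rb") as f:
        data = f.read()
    return any(keyword in data for keyword in SUSPICIOUS)

print(needs_deep_analysis("attachment.pdf"))  # hypothetical input file
```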

Amin, R., Islam, S. K. H., Biswas, G. P., Khan, M. K..  2015.  An efficient remote mutual authentication scheme using smart mobile phone over insecure networks. 2015 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). :1–7.

To establish a secure connection between a mobile user and a remote server, this paper presents a session key agreement scheme built on a remote mutual authentication protocol using mobile application software (MAS). We analyze the security of our protocol informally, which confirms that it is secure against all the relevant attacks, including off-line identity/password guessing attacks, user/server impersonation attacks, and insider attacks. In addition, the widely accepted simulation tool AVISPA confirms that the proposed protocol is SAFE under the OFMC and CL-AtSe back-ends. Our protocol not only provides strong security against the relevant attacks but also achieves proper mutual authentication, user anonymity, known-key secrecy, and an efficient password change operation. A performance comparison is also performed, which shows that the protocol is efficient in terms of computation and communication costs.
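
The paper's exact protocol is not reproduced here, but the general shape of such schemes can be sketched as a challenge-response exchange over a shared secret; the HMAC construction and message labels below are illustrative, not the proposed scheme.

```python
import hmac, hashlib, os

K = os.urandom(32)  # shared long-term secret (established at registration)

# 1. User -> Server: nonce_u ; 2. Server -> User: nonce_s, proof_s
nonce_u, nonce_s = os.urandom(16), os.urandom(16)
proof_s = hmac.new(K, b"server" + nonce_u + nonce_s, hashlib.sha256).digest()

# 3. User verifies the server's proof, then answers with its own.
assert hmac.compare_digest(
    proof_s, hmac.new(K, b"server" + nonce_u + nonce_s, hashlib.sha256).digest())
proof_u = hmac.new(K, b"user" + nonce_s + nonce_u, hashlib.sha256).digest()

# 4. Server verifies proof_u; both sides then derive the session key.
session_key = hmac.new(K, b"session" + nonce_u + nonce_s, hashlib.sha256).digest()
print(session_key.hex())
```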

Tosh, D., Sengupta, S., Kamhoua, C., Kwiat, K., Martin, A..  2015.  An evolutionary game-theoretic framework for cyber-threat information sharing. 2015 IEEE International Conference on Communications (ICC). :7341–7346.

Protecting against future cyber crimes requires a collaborative effort from all types of agencies, spanning industry, academia, federal institutions, and the military. Therefore, a Cybersecurity Information Exchange (CYBEX) framework is required to facilitate breach/patch-related information sharing among the participants (firms) to combat cyber attacks. In this paper, we formulate a non-cooperative cybersecurity information sharing game that can guide: (i) the firms (players) to independently decide whether to “participate in CYBEX and share” or not; (ii) the CYBEX framework to utilize the participation cost dynamically as an incentive (to attract firms toward self-enforced sharing) and as a charge (to increase revenue). We analyze the game from an evolutionary game-theoretic perspective and determine the conditions under which the players' self-enforced evolutionary stability can be achieved. We present a distributed learning heuristic to attain the evolutionary stable strategy (ESS) under various conditions. We also show how CYBEX can wisely vary its pricing for participation to increase sharing as well as its own revenue, eventually evolving toward a win-win situation.
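
The evolutionary analysis rests on replicator dynamics, which are easy to sketch; the payoff matrix below is a toy placeholder rather than the paper's CYBEX payoff model.

```python
import numpy as np

A = np.array([[3.0, 0.0],   # payoff of 'share' vs (share, not-share)
              [2.0, 1.0]])  # payoff of 'not share'

x = np.array([0.2, 0.8])    # initial population shares
for _ in range(2000):
    fitness = A @ x
    avg = x @ fitness
    x = x + 0.01 * x * (fitness - avg)  # replicator update (Euler step)

# From this starting point the population converges to the 'not share' ESS,
# illustrating why incentive design matters for self-enforced sharing.
print(x)
```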

Jaina, J., Suma, G. S., Dija, S., Thomas, K. L..  2015.  Extracting network connections from Windows 7 64-bit physical memory. 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). :1–4.

Nowadays, memory forensics is increasingly important in cyber forensics investigations because malware authors and attackers store critical information in RAM (physical memory) instead of on the hard disk. Volatile physical memory contains forensically relevant artifacts such as user credentials, chats, messages, running processes and their details (loaded DLLs, open files, commands), and network connections. Memory forensics involves acquiring a memory dump from the suspect's machine and analyzing it to find crucial evidence with the help of Windows' pre-defined kernel data structures. While retrieving different artifacts from these data structures, finding the network connections in a Windows 7 memory dump is particularly challenging, because the data structures that store network connections in earlier versions of Windows are not present in Windows 7. In this paper, a methodology is described for efficiently retrieving the details of network-related activities from a Windows 7 x64 memory dump, including the remote and local IP addresses and the associated port information corresponding to each running process. This can provide crucial information in a cyber crime investigation.
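
For comparison, the standard-tooling counterpart of this task (not the authors' implementation) is the Volatility framework's netscan plugin, which pool-scans a Windows 7 x64 dump for connection objects; the dump path below is hypothetical.

```python
import subprocess

# Volatility 2 invocation: netscan lists local/remote IP:port pairs with the
# owning PID and process name for a Windows 7 SP1 x64 memory image.
out = subprocess.run(
    ["vol.py", "-f", "win7_x64.raw", "--profile=Win7SP1x64", "netscan"],
    capture_output=True, text=True, check=True)
print(out.stdout)
```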

Zhang, Ce, Shin, Jaeho, Ré, Christopher, Cafarella, Michael, Niu, Feng.  2016.  Extracting Databases from Dark Data with DeepDive. Proceedings of the 2016 International Conference on Management of Data. :847–859.

DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data (scientific papers, Web classified ads, customer service notes, and so on) were instead in a relational database, it would give analysts access to a massive and highly valuable new set of "big data" to exploit. DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that meets that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference.

Xia, Xiaoxu, Song, Wei, Chen, Fangfei, Li, Xuansong, Zhang, Pengcheng.  2016.  Effa: A proM Plugin for Recovering Event Logs. Proceedings of the 8th Asia-Pacific Symposium on Internetware. :108–111.

While event logs generated by business processes play an increasingly significant role in business analysis, the quality of the data remains a serious problem. Automatic recovery of dirty event logs is therefore desirable and has received growing attention. However, existing methods either focus only on recovering missing events or fall short in efficiency. To this end, we present Effa, a ProM plugin that automatically recovers event logs in the light of process specifications. Based on advanced heuristics, including process decomposition and trace replaying to search for the minimum recovery, Effa achieves a balance between repair accuracy and efficiency.

Legaard, Lasse, Thomsen, Josephine Raun, Lorentzen, Christian Hannesbo, Techen, Jonas Peter.  2016.  Exploring SCI As Means of Interaction Through the Design Case of Vacuum Cleaning. Proceedings of the TEI '16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction. :488–493.

This paper explores opportunities for incorporating shape-changing properties into everyday home appliances. Using a design research approach, the vacuum cleaner is taken as a design case with the overall aim of enhancing the user experience by transforming the appliance into a sensing object. Three fully functional prototypes were developed to illustrate how shape change can fit into the context of our homes. The shape-changing functionalities are: 1) a digital power button that supports dynamic affordances, 2) an analog handle that mediates the amount of dust particles through haptic feedback, and 3) a body that behaves in a lifelike manner depending on how the user treats it. We report the development and implementation of the functional prototypes as well as technical limitations and initial user reactions to the prototypes.

Lappalainen, Tuomas, Virtanen, Lasse, Häkkilä, Jonna.  2016.  Experiences with Wellness Ring and Bracelet Form Factor. Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia. :351–353.

This paper explores user experiences with ring and bracelet activity-tracker form factors. During the first week of a two-week field study, participants (n=6) wore non-functional mock-ups of ring and bracelet wellness trackers and provided feedback on their experiences. During the second week, participants used a commercial wellness-tracking ring, which collected physical exercise and sleep data and visualized it in a mobile application. Our salient findings, based on 196 user diary entries, suggest that the ring form factor is considered beautiful and aesthetic and contributes to the wearer's image. However, the bracelet form factor is more practical for an active lifestyle and is preferred in situations where the hands are performing tasks that require gripping objects, such as sports activities, cleaning the car, cooking, and washing dishes. Users strongly identified the ring form factor as jewellery that is intended to be seen, whereas bracelets were considered hidden and inconspicuous elements of the user's ensemble.

Francese, Rita, Gravino, Carmine, Risi, Michele, Tortora, Genoveffa, Scanniello, Giuseppe.  2016.  Estimate Method Calls in Android Apps. Proceedings of the International Conference on Mobile Software Engineering and Systems. :13–14.

In this paper, we focus on defining estimators to predict method calls in Android apps. The estimation models are based on information from requirements specification documents (e.g., the number of actors, the number of use cases, and the number of classes in the conceptual model). We used a dataset containing information on 23 Android apps. After data cleaning, we applied linear regression to build estimation models on 21 data points. Results suggest that measures gathered from requirements specification documents can be good predictors of the number of internal calls (i.e., methods invoking other methods present in the app) and external calls (i.e., invocations of APIs), as well as their sum.
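
The modeling step can be sketched with ordinary least squares over requirements metrics; the feature values and call counts below are made up for illustration, since the paper's 21-point dataset is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: [#actors, #use cases, #conceptual classes] per app (toy values).
X = np.array([[2, 8, 12], [1, 5, 7], [3, 12, 20], [2, 6, 9], [4, 15, 25]])
y = np.array([310, 150, 540, 220, 700])  # observed method calls (toy values)

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)
print(model.predict([[2, 7, 10]]))  # estimate for a new app's specification
```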

West, Ruth, Kajihara, Meghan, Parola, Max, Hays, Kathryn, Hillard, Luke, Carlew, Anne, Deutsch, Jeremey, Lane, Brandon, Holloway, Michelle, John, Brendan et al..  2016.  Eliciting Tacit Expertise in 3D Volume Segmentation. Proceedings of the 9th International Symposium on Visual Information Communication and Interaction. :59–66.

The output of 3D volume segmentation is crucial to a wide range of endeavors. Producing accurate segmentations often proves both inefficient and challenging, in part due to poor imaging data quality (contrast and resolution) and because of ambiguity in the data that can only be resolved with higher-level knowledge of the structure and the context wherein it resides. Automatic and semi-automatic approaches are improving, but in many cases they still fail or require substantial manual clean-up or intervention. Expert manual segmentation and review is therefore still the gold standard for many applications. Unfortunately, existing tools (both custom-made and commercial) are often designed around the underlying algorithm rather than the best method for expressing higher-level intention. Our goal is to analyze manual (or semi-automatic) segmentation to gain a better understanding of both low-level perceptual tasks and actions and high-level decision making. This understanding can be used to produce segmentation tools that are more accurate, efficient, and easier to use. Because questioning or observation alone is insufficient to capture this information, we utilize a hybrid capture protocol that blends observation, surveys, and eye tracking. We then developed, and validated, data coding schemes capable of discerning low-level actions and overall task structures.

Chung, Yeounoh, Mortensen, Michael Lind, Binnig, Carsten, Kraska, Tim.  2016.  Estimating the Impact of Unknown Unknowns on Aggregate Query Results. Proceedings of the 2016 International Conference on Management of Data. :861–876.

It is common practice for data scientists to acquire and integrate disparate data sources to achieve higher quality results. But even with a perfectly cleaned and merged data set, two fundamental questions remain: (1) is the integrated data set complete, and (2) what is the impact of any unknown (i.e., unobserved) data on query results? In this work, we develop and analyze techniques to estimate the impact of unknown data (a.k.a. unknown unknowns) on simple aggregate queries. The key idea is that the overlap between different data sources enables us to estimate the number and values of the missing data items. Our main techniques are parameter-free and do not assume prior knowledge about the distribution. Through a series of experiments, we show that estimating the impact of unknown unknowns is invaluable to better assess the results of aggregate queries over integrated data sources.
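
The key idea, estimating unseen items from overlap across sources, echoes species estimation; a Chao1-style sketch (an illustrative choice, not necessarily the paper's exact estimator) looks like this.

```python
from collections import Counter

def chao1_unseen(observations):
    """Estimate the number of never-observed items from sighting frequencies.

    observations: list of item ids, possibly repeated across data sources.
    """
    freq = Counter(Counter(observations).values())
    f1, f2 = freq.get(1, 0), freq.get(2, 0)  # singletons and doubletons
    if f2 == 0:
        return f1 * (f1 - 1) / 2.0           # bias-corrected variant
    return f1 * f1 / (2.0 * f2)              # classic Chao1 estimate

sightings = ["a", "a", "b", "c", "c", "d", "e", "e", "f"]
print(chao1_unseen(sightings))  # estimated count of unknown unknowns
```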

Pinsenschaum, Richard, Neff, Flaithri.  2016.  Evaluating Gesture Characteristics When Using a Bluetooth Handheld Music Controller. Proceedings of the Audio Mostly 2016. :209–214.

This paper describes a study investigating tilt-gesture depth on a Bluetooth handheld music controller for activating and deactivating music loops. Using a Wii Remote's 3-axis ADXL330 accelerometer, a Max patch was programmed to receive, handle, and store incoming accelerometer data. Each loop corresponded to the front, back, left, or right tilt-gesture direction, with each gesture motion toggling a loop 'On' or 'Off' depending on its playback status. The study comprised 40 undergraduate students interacting with the prototype controller for 5 minutes each. Each participant performed three full cycles beginning with the front gesture direction and moving clockwise, corresponding to a total of 24 trigger motions per participant. Raw data associated with tilt-gesture depth was scaled, analyzed, and graphed. Results show significant differences between the gesture directions in terms of tilt-gesture depth, as well as noise issues for left/right gesture motion due to its dependency on Roll and Yaw values. Front and Left tilt-gesture depths displayed significantly higher threshold levels than the Back and Right axes. Front and Left tilt-gesture thresholds therefore allow the device to easily differentiate between intentional sample triggering and general device handling, while this is more difficult for the Back and Right directions. Future work will include finding an alternative method for evaluating intentional tilt-gesture triggering on the Back and Right axes, as well as utilizing two 2-axis accelerometers to garner clean data from the Left and Right axes.
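
The thresholding logic under study can be sketched by converting raw 3-axis accelerometer samples to pitch/roll angles and firing a trigger only past a per-direction depth threshold; the threshold values below are illustrative, not the study's measurements.

```python
import math

# Per-direction tilt-depth thresholds in degrees (illustrative values).
THRESHOLDS = {"front": 40.0, "back": 25.0, "left": 40.0, "right": 25.0}

def detect_tilt(ax, ay, az):
    """Map one accelerometer sample to a triggered direction, or None."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))
    roll = math.degrees(math.atan2(ay, az))
    if pitch > THRESHOLDS["front"]:
        return "front"
    if pitch < -THRESHOLDS["back"]:
        return "back"
    if roll > THRESHOLDS["right"]:
        return "right"
    if roll < -THRESHOLDS["left"]:
        return "left"
    return None  # general device handling, no loop toggled

print(detect_tilt(-0.8, 0.0, 0.6))  # strong forward tilt -> "front"
```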

2017-02-27
Dou, Huijing, Bian, Tingting.  2015.  An effective information filtering method based on the LTE network. 2015 4th International Conference on Computer Science and Network Technology (ICCSNT). 01:1428–1432.

With the rapid development of information technology, more and more high-speed networks have emerged, and the 4G LTE network has gradually entered the mainstream of communication networks. This paper proposes an effective content-based information filtering method for the 4G LTE high-speed network that combines a content-based filter with a traditional simple filter. First, raw information is pre-processed by a five-tuple filter. Second, we determine the topics and character of the source data by k-nearest-neighbor text classification after minimum-risk Bayesian classification. Finally, an improved AdaBoost algorithm achieves the four-level content-based information filtering. The experiments reveal that this filtering method can be applied to network security, big data analysis, and other fields, and it has high research and market value.
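
The final AdaBoost stage can be illustrated with a small scikit-learn pipeline; the documents and labels are toy data, and the five-tuple and Bayesian/kNN stages are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

docs = ["breaking sports news", "cheap pills online", "weather update",
        "win money now"]
keep = [1, 0, 1, 0]  # 1 = pass the content filter, 0 = block (toy labels)

# Boosted decision stumps over text features as the filtering decision.
clf = make_pipeline(TfidfVectorizer(), AdaBoostClassifier(n_estimators=50))
clf.fit(docs, keep)
print(clf.predict(["today's weather news"]))
```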

Orojloo, H., Azgomi, M. A..  2015.  Evaluating the complexity and impacts of attacks on cyber-physical systems. 2015 CSI Symposium on Real-Time and Embedded Systems and Technologies (RTEST). :1–8.

In this paper, a new method for the quantitative evaluation of the security of cyber-physical systems (CPSs) is proposed. The proposed method models the different classes of adversarial attacks against CPSs, including cross-domain attacks, i.e., cyber-to-cyber and cyber-to-physical attacks, and takes the secondary consequences of attacks on CPSs into consideration. The intrusion process of attackers is modeled using an attack graph, and the consequence estimation process of an attack is investigated using a process model. The security attributes and the special parameters involved in the security analysis of CPSs are identified and considered. The quantitative evaluation is done using the probability of attacks, the time-to-shutdown of the system, and security risks. The proposed model is validated in a case study, applying it to a boiling water power plant and estimating suitable security measures.
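
The attack-graph side of such an evaluation can be sketched as a maximum over attack paths of the product of per-step exploit probabilities; the graph and probabilities below are illustrative, not the paper's case-study values.

```python
import networkx as nx

g = nx.DiGraph()
g.add_edge("internet", "hmi", p=0.6)      # cyber-to-cyber step
g.add_edge("hmi", "plc", p=0.5)           # cyber-to-cyber step
g.add_edge("plc", "valve_damage", p=0.8)  # cyber-to-physical step

def attack_probability(graph, source, target):
    """Best-case (for the attacker) probability over all simple paths."""
    best = 0.0
    for path in nx.all_simple_paths(graph, source, target):
        prob = 1.0
        for u, v in zip(path, path[1:]):
            prob *= graph[u][v]["p"]
        best = max(best, prob)
    return best

print(attack_probability(g, "internet", "valve_damage"))  # 0.24
```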

2017-02-21
H. S. Jeon, H. Jung, W. Chun.  2015.  "An extended web browser for id/locator separation network". 2015 International Conference on Information and Communication Technology Convergence (ICTC). :749-754.

With the rapid growth of Internet content, the main usage pattern of the Internet is shifting from the traditional host-to-host model to a content dissemination model. To support content distribution, content delivery networks (CDNs) provide an ad hoc solution, and some future-internet projects suggest a clean-slate design. Web applications have become one of the fundamental Internet services, and effectively supporting popular browser-based web applications is one of the keys to success for future-internet projects. This paper proposes IDNet-based web applications. IDNet consists of an id/locator separation scheme and a domain-insulated autonomous network architecture (DIANA), which redesigns the future Internet on a clean-slate basis. We design and develop an IDNet browser based on the open-source Qt framework. The IDNet browser enables ID fetching and rendering through both `idp:/'-scheme URIDs (Universal Resource Identifiers) and `http:/'-scheme URIs in HTML. Experiments show that it is applicable to the IDNet test topology.

J. Pan, R. Jain, S. Paul.  2015.  "Enhanced Evaluation of the Interdomain Routing System for Balanced Routing Scalability and New Internet Architecture Deployments". IEEE Systems Journal. 9:892-903.

The Internet is facing many challenges that cannot be solved easily through ad hoc patches. To address these challenges, many research programs and projects have been initiated and many solutions are being proposed. However, before we have a new architecture that can motivate Internet service providers (ISPs) to deploy and evolve it, we need to address two issues: 1) knowing the current status better by appropriately evaluating the existing Internet; and 2) finding how various incentives and strategies will affect the deployment of the new architecture. For the first issue, we define a series of quantitative metrics that can potentially unify results from several measurement projects using different approaches and can be an intrinsic part of future Internet architecture (FIA) for monitoring and evaluation. Using these metrics, we systematically evaluate the current interdomain routing system and reveal many autonomous-system-level observations and key lessons for new Internet architectures. In particular, the evaluation results reveal the imbalance underlying the interdomain routing system and how the deployment of FIAs can benefit from these findings. With these findings, for the second issue, appropriate deployment strategies for future architecture changes can be formed with balanced incentives for both customers and ISPs. The results can be used to shape the short- and long-term goals for new architectures that are simple evolutions of the current Internet (so-called dirty-slate architectures) and, to some extent, for clean-slate architectures.

2017-02-15
Wenxuan Zhou, University of Illinois at Urbana-Champaign, Dong Jin, Illinois Institute of Technology, Jason Croft, University of Illinois at Urbana-Champaign, Matthew Caesar, University of Illinois at Urbana-Champaign, P. Brighten Godfrey, University of Illinois at Urbana-Champaign.  2015.  Enforcing Generalized Consistency Properties in Software-Defined Networks. 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI 2015).

It is critical to ensure that network policy remains consistent during state transitions. However, existing techniques impose a high cost in update delay and/or FIB space. We propose the Customizable Consistency Generator (CCG), a fast and generic framework to support customizable consistency policies during network updates. CCG effectively reduces the task of synthesizing an update plan under the constraint of a given consistency policy to a verification problem, by checking whether an update can safely be installed in the network at a particular time, and greedily processing network state transitions to heuristically minimize transition delay. We show that a large class of consistency policies is guaranteed by this greedy heuristic alone; in addition, CCG makes judicious use of existing heavier-weight network update mechanisms to provide guarantees when necessary. As such, CCG nearly achieves the “best of both worlds”: the efficiency of simply passing through updates in most cases, with the consistency guarantees of more heavyweight techniques. Mininet and physical testbed evaluations demonstrate CCG's capability to achieve various types of consistency, such as path and bandwidth properties, with zero switch memory overhead and up to a 3× delay reduction compared to previous solutions.
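
CCG's greedy core can be sketched as a fixpoint loop that installs any pending update a verifier certifies as safe; `verify` below is a stand-in predicate, not the paper's verification engine.

```python
def greedy_schedule(pending, state, verify):
    """Apply updates as soon as they are individually safe; defer the rest."""
    progress = True
    while pending and progress:
        progress = False
        for upd in list(pending):
            if verify(state, upd):      # safe to install right now?
                state = state | {upd}   # install the update
                pending.remove(upd)
                progress = True
    return state, pending               # leftovers need heavier mechanisms

# Toy consistency property: update 'b' may only be installed after 'a'.
safe = lambda st, u: u != "b" or "a" in st
print(greedy_schedule({"a", "b", "c"}, frozenset(), safe))
```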

2017-02-14
D. Kergl.  2015.  "Enhancing Network Security by Software Vulnerability Detection Using Social Media Analysis Extended Abstract". 2015 IEEE International Conference on Data Mining Workshop (ICDMW). :1532-1533.

Detecting attacks that are based on unknown security vulnerabilities is a challenging problem. The timely detection of attacks based on hitherto unknown vulnerabilities is crucial for protecting other users and systems from being affected as well. Knowing the attributes of a novel attack's target system can support automated reconfiguration of firewalls and the sending of alerts to administrators of other vulnerable targets. We suggest a novel approach to post-incident intrusion detection that utilizes information gathered from real-time social media streams. To accomplish this, we take advantage of social media users posting about incidents that affect their user accounts on attacked target systems, or posting their observations about misbehaving online services. Combining knowledge of the attacked systems and the reported incidents, we should be able to recognize patterns that define the attributes of vulnerable systems. By matching detected attribute sets with the attributes of well-known attacks, we should furthermore be able to link attacks to already existing entries in the Common Vulnerabilities and Exposures database. If a link to an existing entry is not found, we can assume we have detected the exploitation of an unknown vulnerability, i.e., a zero-day exploit or the result of an advanced persistent threat. This finding could also be used to direct efforts toward examining the vulnerabilities of attacked systems and therefore lead to faster patch deployment.

A. K. M. A., J. C. D..  2015.  "Execution Time Measurement of Virtual Machine Volatile Artifacts Analyzers". 2015 IEEE 21st International Conference on Parallel and Distributed Systems (ICPADS). :314-319.

Due to the rapid evolution of virtualization environments, virtual machines (VMs) are a target for attackers seeking privileged access to the virtual infrastructure. Advanced Persistent Threats (APTs) such as malware, rootkits, spyware, etc. are potent enough to bypass the existing defense mechanisms designed for VMs. To address this issue, Virtual Machine Introspection (VMI) has emerged as a promising approach that monitors the run state of a VM externally, from the hypervisor. However, the limitation of VMI lies in the semantic gap. An open-source tool called LibVMI addresses the semantic gap. A Memory Forensic Analysis (MFA) tool such as Volatility can also be used to address the semantic gap, but it needs a captured memory dump (RAM) as input. Memory-dump acquisition time and analysis time are highly crucial if an Intrusion Detection System (IDS) depends on the data supplied by the MFA or VMI tool. In this work, the time LibVMI takes to acquire a RAM dump of a live virtual machine is measured. In addition, the time Volatility takes to analyze the captured memory dump is measured and compared with another memory analyzer, Rekall. Experimental results show that Rekall takes more execution time than Volatility for most plugins. Further, Volatility and Rekall are compared with LibVMI, and it is observed that examining volatile data through LibVMI is faster, as it eliminates the memory-dump acquisition time.
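
The measurement methodology itself is straightforward to sketch: wall-clock each analyzer running the same plugin over the same dump. The command lines below are standard Volatility 2 and Rekall invocations, but the dump path and profile are assumptions.

```python
import subprocess, time

def timed(cmd):
    """Wall-clock one analyzer run (output discarded)."""
    t0 = time.perf_counter()
    subprocess.run(cmd, capture_output=True, check=True)
    return time.perf_counter() - t0

dump = "vm_ram.raw"  # hypothetical live-VM memory dump
print("Volatility:", timed(["vol.py", "-f", dump,
                            "--profile=Win7SP1x64", "pslist"]))
print("Rekall:    ", timed(["rekall", "-f", dump, "pslist"]))
```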

E. Pisek, S. Abu-Surra, R. Taori, J. Dunham, D. Rajan.  2015.  "Enhanced Cryptcoding: Joint Security and Advanced Dual-Step Quasi-Cyclic LDPC Coding". 2015 IEEE Global Communications Conference (GLOBECOM). :1-7.

Data security has always been a major concern and a huge challenge for governments and individuals throughout the world since early times. Recent advances in technology, such as the introduction of cloud computing, make keeping data secure an even bigger challenge. In parallel, high-throughput mobile devices such as smartphones and tablets are designed to support these new technologies. The high throughput requires power-efficient designs to maintain battery life. In this paper, we propose a novel Joint Security and Advanced Low Density Parity Check (LDPC) Coding (JSALC) method. JSALC is composed of two parts: Joint Security and Advanced LDPC-based Encryption (JSALE) and the dual-step Secure LDPC code for Channel Coding (SLCC). JSALE is obtained by interlacing Advanced Encryption Standard (AES)-like rounds and Quasi-Cyclic (QC)-LDPC rows into a single primitive. Both the JSALE code and the SLCC code share the same base quasi-cyclic parity-check matrix (PCM), which retains power efficiency compared to conventional systems. We show that the overall JSALC frame-error-rate (FER) performance outperforms other cryptcoding methods by over 1.5 dB while maintaining the AES-128 security level. Moreover, JSALC enables error resilience and has higher diffusion than AES-128.
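
The shared building block, a quasi-cyclic PCM, expands a base matrix of circulant shift values into a full parity-check matrix; the base matrix and lifting size below are toy values, not the paper's code design.

```python
import numpy as np

def qc_pcm(base, z):
    """Expand a base matrix into a QC parity-check matrix.

    base[i][j] = -1 for an all-zero block, otherwise the right circular
    shift applied to a z-by-z identity block.
    """
    rows = []
    for brow in base:
        blocks = [np.zeros((z, z), dtype=int) if s < 0
                  else np.roll(np.eye(z, dtype=int), s, axis=1)
                  for s in brow]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

H = qc_pcm([[0, 1, -1, 2],
            [2, -1, 0, 1]], z=4)
print(H.shape)  # (8, 16)
```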

S. Chandran, Hrudya P, P. Poornachandran.  2015.  "An efficient classification model for detecting advanced persistent threat". 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI). :2001-2009.

Among the cyber attacks that occur, the most drastic are advanced persistent threats. APTs differ from other attacks in that they have multiple phases, are often silent for long periods of time, and are launched by adamant, well-funded opponents. These targeted attacks mainly concentrate on government agencies and organizations in industry, such as those involved in international trade or holding sensitive data. APTs escape detection by antivirus solutions, intrusion detection and prevention systems, and firewalls. In this paper we propose a classification model, with 99.8% accuracy, for the detection of APTs.

2017-02-13
R. Mishra, A. Mishra, P. Bhanodiya.  2015.  "An edge based image steganography with compression and encryption". 2015 International Conference on Computer, Communication and Control (IC4). :1-4.

The security of secret data has been a major concern since ancient times. Steganography and cryptography are two techniques used to reduce this security threat. Cryptography is the art of converting a secret message into a non-human-readable form; steganography is the art of hiding the very existence of a secret message. These techniques are required to protect against data theft over rapidly growing networks. To achieve this, a system is needed whose output is barely perceptible to the human visual system. This paper introduces a new technique for data transmission over an insecure channel. The secret data is first compressed using the LZW algorithm before being embedded behind a cover medium; compression reduces its size. After compression, the data is encrypted to increase security. Encryption is performed with the help of a key, which makes it difficult to recover the secret message even if its existence is revealed. The edge regions are then detected using the Canny edge detector, and the compressed, encrypted secret data is embedded there with the help of a hash function. The proposed technique is implemented in MATLAB; its key strengths are a large data-hiding capacity and minimal distortion in the stego image. The technique was applied to various images, and the results show minimal distortion in the altered images.
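
The pipeline can be sketched outside MATLAB as well. In this Python/OpenCV sketch, zlib stands in for LZW, XOR for the paper's encryption step, and sequential LSB embedding at Canny edge pixels for the hash-based placement; all three are simplifying assumptions.

```python
import zlib, cv2
import numpy as np

def embed(cover_path, secret: bytes, key: bytes):
    data = zlib.compress(secret)                                  # compress
    data = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))  # encrypt
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))

    img = cv2.imread(cover_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 100, 200)                              # edge map
    ys, xs = np.nonzero(edges)
    assert len(ys) >= bits.size, "cover image has too few edge pixels"

    stego = img.copy()
    for bit, y, x in zip(bits, ys, xs):                           # LSB embed
        stego[y, x] = (stego[y, x] & 0xFE) | bit
    return stego
```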