Biblio

Found 148 results

Filters: Keyword is Computer science
2018-02-02
Kim, H., Ben-Othman, J., Mokdad, L., Cho, S., Bellavista, P..  2017.  On collision-free reinforced barriers for multi domain IoT with heterogeneous UAVs. 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON). :466–471.

Thanks to advances in vehicle technology, Unmanned Aerial Vehicles (UAVs) are now widely deployed in practical services and applications that positively affect people's daily lives. In particular, multiple heterogeneous UAVs with different capabilities must be considered, since UAVs can play an important role in Internet of Things (IoT) environments in which heterogeneity and multiple domains are indispensable. The concept of barrier coverage has also proved promising for surveillance and security. In this paper, we present collision-free reinforced barriers formed by heterogeneous UAVs to support multiple domains. We then define the problem of minimizing the maximum movement of UAVs, under the condition that collision-freedom among UAVs is assured while they travel from their current positions to the specific locations that form reinforced barriers within the multiple domains. Because the defined problem depends on how UAVs are placed on the barriers, we develop a novel approach that provides collision-free movement as well as the creation of virtual lines across the domains. Furthermore, we discuss future research topics that should be handled carefully for barrier coverage by heterogeneous UAVs.
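The min-max movement objective can be pictured with a bottleneck-assignment sketch: try candidate movement limits in increasing order and test whether a perfect UAV-to-barrier-slot matching exists under each limit. This is a minimal sketch that ignores the paper's collision-avoidance and multi-domain constraints; the coordinates and helper names are illustrative assumptions.

```python
import math

def min_max_movement(uavs, slots):
    """Smallest movement limit under which every barrier slot receives one UAV.

    Sketch only: collision avoidance and multi-domain constraints are ignored.
    """
    dist = [[math.dist(u, s) for s in slots] for u in uavs]

    def feasible(limit):
        match = [-1] * len(slots)               # slot index -> assigned UAV

        def augment(i, seen):                   # Kuhn's augmenting-path search
            for j in range(len(slots)):
                if dist[i][j] <= limit and not seen[j]:
                    seen[j] = True
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = i
                        return True
            return False

        return all(augment(i, [False] * len(slots)) for i in range(len(uavs)))

    for limit in sorted({d for row in dist for d in row}):
        if feasible(limit):
            return limit
    return None

# Hypothetical 2-D positions: three UAVs and three barrier slots.
print(min_max_movement([(0, 0), (4, 0), (9, 1)], [(2, 2), (5, 2), (8, 2)]))
```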

2018-01-23
Erola, A., Agrafiotis, I., Happa, J., Goldsmith, M., Creese, S., Legg, P. A..  2017.  RicherPicture: Semi-automated cyber defence using context-aware data analytics. 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–8.

In a continually evolving cyber-threat landscape, the detection and prevention of cyber attacks has become a complex task. Technological developments have led organisations to digitise the majority of their operations. This practice, however, has its perils, since cyberspace offers a new attack surface. Institutions tasked with protecting organisations from these threats rely mainly on network data, and their incident response strategies remain oblivious to the organisation's needs when it comes to protecting operational aspects. This paper presents a system able to combine threat-intelligence data, attack-trend data and organisational data (along with other available data sources) in order to achieve automated network-defence actions. Our approach combines machine learning, visual analytics and information from business processes to guide the decision-making process in a Security Operation Centre environment. We test our system on two synthetic scenarios and show that correlating network data with non-network data for automated network defences is possible and worth investigating further.

Yasin, M., Mazumdar, B., Rajendran, J. J. V., Sinanoglu, O..  2017.  TTLock: Tenacious and traceless logic locking. 2017 IEEE International Symposium on Hardware Oriented Security and Trust (HOST). :166–166.

Logic locking is an intellectual property (IP) protection technique that prevents IP piracy, reverse engineering and overbuilding attacks by an untrusted foundry or end users. Existing logic locking techniques are all vulnerable to various attacks, such as sensitization, key pruning and removal attacks enabled by signal skew analysis. In this paper, we propose TTLock, which provably withstands all known attacks. TTLock protects a designer-specified number of input patterns, enabling a controlled and provably secure trade-off between key-pruning attack resilience and removal attack resilience. All the key bits converge on a single signal, creating maximal interference and thus resisting sensitization attacks. Obfuscation is performed by modifying the design IP in a secret and traceless way, thwarting signal skew analysis and the removal attack it enables. Experimental results confirm our theoretical expectation that the computational complexity of attacks launched on TTLock grows exponentially with increasing key size, while the area, power and delay overheads increase only linearly.
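One way to picture the single-pattern case is the functional toy below: the modified logic inverts the output for one protected input pattern, and a key-controlled restore unit re-inverts it only when the applied key equals that pattern, so only the correct key recovers the original function everywhere. This is an illustrative sketch, not a gate-level implementation of TTLock, and the stand-in function and pattern are assumptions.

```python
def original(x: int) -> int:
    """Stand-in 1-bit combinational function (assumed for the demo)."""
    return (x ^ (x >> 3)) & 1

PROTECTED = 0b1011        # designer-chosen protected input pattern
CORRECT_KEY = PROTECTED   # the correct key equals the protected pattern

def locked(x: int, key: int) -> int:
    flipped = original(x) ^ (x == PROTECTED)   # modified logic cone
    restore = (x == key)                       # key-controlled restore unit
    return flipped ^ restore

# With the correct key the locked circuit matches the original on every input;
# with a wrong key it errs on exactly two input patterns.
assert all(locked(x, CORRECT_KEY) == original(x) for x in range(16))
wrong_key = 0b0110
print([x for x in range(16) if locked(x, wrong_key) != original(x)])
```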
Dabas, N., Singh, R. P., Kher, G., Chaudhary, V..  2017.  A novel SVD and online sequential extreme learning machine based watermark method for copyright protection. 2017 8th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–5.

With the increasing use of the internet, it is equally important to protect intellectual property. For copyright protection, a blind digital watermarking algorithm using SVD and OSELM in the IWT domain is proposed. During the embedding process, SVD is applied to the coefficient blocks to obtain the singular values in the IWT domain, and the singular values are modulated to embed the watermark into the host image. An online sequential extreme learning machine (OSELM) is trained to learn the relationship between the original coefficients and the corresponding watermarked versions. During extraction, this trained OSELM is used to extract the embedded watermark logo blindly, as no original host image is required. The watermarked image is altered using various attacks such as blurring, noise, sharpening, rotation and cropping. The experimental results show that the proposed watermarking scheme is robust against these attacks; the extracted watermark is highly similar to the original watermark and serves well to prove ownership.
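The singular-value modulation step can be sketched as block-wise quantization-index modulation of the largest singular value, as below. This is a minimal pixel-domain sketch: the IWT decomposition and the OSELM-based blind extraction described in the paper are omitted, and the block size and quantization step are assumptions.

```python
import numpy as np

def embed_bit(block, bit, q=12.0):
    """Embed one watermark bit by quantizing the largest singular value."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = int(np.floor(s[0] / q))
    if k % 2 != bit:
        k += 1                      # move to the nearest bin with the right parity
    while k * q + q / 2 < s[1]:
        k += 2                      # keep s[0] the largest singular value
    s[0] = k * q + q / 2
    return U @ np.diag(s) @ Vt

def extract_bit(block, q=12.0):
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.floor(s[0] / q)) % 2

rng = np.random.default_rng(0)
host_block = rng.integers(0, 256, (8, 8)).astype(float)   # one 8x8 host block
marked = embed_bit(host_block, bit=1)
print(extract_bit(marked))                                 # -> 1
```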

Al-Mashhadi, H. M., Abduljaleel, I. Q..  2017.  Color image encryption using chaotic maps, triangular scrambling, with DNA sequences. 2017 International Conference on Current Research in Computer Science and Information Technology (ICCIT). :93–98.

Applying security to transmitted images is a very important issue, because the transmission channel is open and can be compromised by attackers. To secure this channel from eavesdropping, man-in-the-middle attacks and so on, a new hybrid image encryption mechanism that utilizes triangular scrambling, DNA encoding and a chaotic map is implemented. The scheme takes a 320-bit master key and produces a group of sub-keys of two lengths (32 and 128 bits) to encrypt the image blocks; a new triangular scrambling method is then used to increase the security of the image. Many experiments were carried out on several different images. The analysis of these experiments shows that the security obtained by the proposed method is very suitable for securing transmitted images. The current work has been compared with other works, and the comparison shows that it is very strong against attacks.
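The chaotic-map component can be illustrated with a logistic-map keystream XORed over the image bytes, as in the sketch below. This shows only the chaotic-keystream idea; the paper's triangular scrambling, DNA encoding and key schedule are omitted, and the map parameters are assumptions.

```python
import numpy as np

def logistic_keystream(length, x0=0.7, r=3.99):
    """Byte keystream from the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    x = x0
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_image(img, key_x0=0.7):
    flat = img.reshape(-1)
    ks = logistic_keystream(flat.size, x0=key_x0)
    return (flat ^ ks).reshape(img.shape)

img = np.random.default_rng(1).integers(0, 256, (4, 4), dtype=np.uint8)
enc = xor_image(img)
assert np.array_equal(xor_image(enc), img)   # XOR keystream is its own inverse
```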

Srilatha, N., Mrali, G., Deepthi, M..  2017.  A framework to improve E-seva services through E-governance by using DNA cryptography. 2017 International Conference on Algorithms, Methodology, Models and Applications in Emerging Technologies (ICAMMAET). :1–4.

The proposed framework describes two objectives: one is to issue certificates online, and the second is to provide three-level security through DNA cryptography. DNA cryptography means converting the data into a DNA sequence. DNA is a sequence composed of four letters, A, C, G and T, each corresponding to a nucleotide. DNA can be used to store data, transmit data and also to perform computation on data. This paper implements three levels of cryptography; the receiver applies the decryption to recover the readable form from the unreadable format. DNA cryptography provides more security than other cryptographic approaches, but it requires more time for encoding and decoding, and there remains a chance that the data can be hacked. So in this paper we implement a fast three-level DNA cryptography for e-seva services.
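The basic DNA encoding layer, two bits per nucleotide, can be sketched as below. This covers only the encoding/decoding step; the paper's three security levels and the e-seva certificate workflow are not reproduced, and the particular bit-to-base mapping is an assumption.

```python
ENC = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DEC = {v: k for k, v in ENC.items()}

def to_dna(data: bytes) -> str:
    """Encode each byte as four nucleotides (two bits per base)."""
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(ENC[(byte >> shift) & 0b11])
    return "".join(seq)

def from_dna(seq: str) -> bytes:
    """Inverse mapping: four nucleotides back to one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for ch in seq[i:i + 4]:
            byte = (byte << 2) | DEC[ch]
        out.append(byte)
    return bytes(out)

msg = b"issue certificate"
assert from_dna(to_dna(msg)) == msg
print(to_dna(b"Hi"))   # -> CAGACGGC
```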

2017-12-28
Tane, E., Fujigaki, Y..  2017.  Cross-Disciplinary Survey on "Data Science" Field Development: Historical Analysis from 1600s-2000s. 2017 Portland International Conference on Management of Engineering and Technology (PICMET). :1–10.

For the last several decades, the rapid development of information technology and computer performance has accelerated the generation, transportation and accumulation of digital data, which has come to be called "Big Data". In this context, researchers and companies are eager to utilize the data to create new value or manage a wide range of issues, and much focus is being placed on "Data Science" to extract useful information (knowledge) from digital data. Data Science has developed since the 1800s from several independent fields such as Mathematics/Operations Research, Computer Science, Data Engineering, Visualization and Statistics, and Artificial Intelligence has converged on this stream in recent years. At the same time, national projects have been established to utilize data for society amid concerns surrounding security and privacy. In this paper, through a detailed analysis of the history of this field, the processes of development and integration among related fields are discussed, along with comparative aspects between Japan and the United States. The paper also includes a brief discussion of future directions.

Poon, W. N., Bennin, K. E., Huang, J., Phannachitta, P., Keung, J. W..  2017.  Cross-Project Defect Prediction Using a Credibility Theory Based Naive Bayes Classifier. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :434–441.

Several proposed defect prediction models are effective when historical datasets are available, but defect prediction becomes difficult when no historical data exist. Cross-project defect prediction (CPDP), which uses projects from other sources/companies to predict defects in target projects, has been proposed in recent studies and has shown promising results. However, the performance of most CPDP approaches is still far from satisfactory, mainly due to the distribution mismatch between the source and target projects. In this study, a credibility theory based Naïve Bayes (CNB) classifier is proposed to establish a novel reweighting mechanism between the source and target projects, so that the source data can simultaneously adapt to the target data distribution and retain its own pattern. Our experimental results show the feasibility of the novel algorithm design and demonstrate a significant improvement over other CPDP approaches in terms of the performance metrics considered.
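The reweighting idea can be sketched with an instance-weighted Naïve Bayes: source instances that look more like the target data receive larger training weights. The weighting rule below (an RBF kernel on the distance to the target centroid) is a generic stand-in, not the credibility-theory weights from the paper, and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Hypothetical source project (labelled) and target project (unlabelled),
# drawn from shifted distributions to mimic the CPDP distribution mismatch.
X_src = np.vstack([rng.normal(0.0, 1.0, (200, 5)), rng.normal(2.0, 1.0, (200, 5))])
y_src = np.array([0] * 200 + [1] * 200)
X_tgt = rng.normal(1.2, 1.0, (150, 5))

# Weight each source instance by its similarity to the target centroid.
tgt_mean = X_tgt.mean(axis=0)
d2 = ((X_src - tgt_mean) ** 2).sum(axis=1)
weights = np.exp(-d2 / (2 * d2.mean()))

clf = GaussianNB()
clf.fit(X_src, y_src, sample_weight=weights)   # instance-weighted Naive Bayes
print(clf.predict(X_tgt[:5]))
```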

2017-12-20
Williams, N., Li, S..  2017.  Simulating Human Detection of Phishing Websites: An Investigation into the Applicability of the ACT-R Cognitive Behaviour Architecture Model. 2017 3rd IEEE International Conference on Cybernetics (CYBCONF). :1–8.

The prevalence and effectiveness of phishing attacks, despite the presence of a vast array of technical defences, are due largely to the fact that attackers are ruthlessly targeting what is often referred to as the weakest link in the system - the human. This paper reports the results of an investigation into how end users behave when faced with phishing websites and how this behaviour exposes them to attack. Specifically, the paper presents a proof of concept computer model for simulating human behaviour with respect to phishing website detection based on the ACT-R cognitive architecture, and draws conclusions as to the applicability of this architecture to human behaviour modelling within a phishing detection scenario. Following the development of a high-level conceptual model of the phishing website detection process, the study draws upon ACT-R to model and simulate the cognitive processes involved in judging the validity of a representative webpage based primarily around the characteristics of the HTTPS padlock security indicator. The study concludes that despite the low-level nature of the architecture and its very basic user interface support, ACT-R possesses strong capabilities which map well onto the phishing use case, and that further work to more fully represent the range of human security knowledge and behaviours in an ACT-R model could lead to improved insights into how best to combine technical and human defences to reduce the risk to end users from phishing attacks.

Nguyen, C. T., Hoang, T. T., Phan, V. X..  2017.  A simple method for anonymous tag cardinality estimation in RFID systems with false detection. 2017 4th NAFOSTED Conference on Information and Computer Science. :101–104.

This work investigates the anonymous tag cardinality estimation problem in radio frequency identification systems with a frame slotted aloha-based protocol. For privacy and security, each tag, instead of sending its identity upon receiving the reader's request, randomly responds with only one bit in one of the time slots of the frame. As a result, each slot with no response is observed as being in an empty state, while the others are non-empty, and this information can be used for tag cardinality estimation. Nevertheless, under the effects of fading and noise, time slots with tag responses might be observed as empty, while those with no response might be detected as non-empty, a phenomenon known as false detection. The performance of conventional estimation methods is thus degraded because of the inaccurate observations. In order to cope with this issue, we propose a new estimation algorithm using the expectation-maximization method, in which both the tag cardinality and the probability of false detection are iteratively estimated to maximize a likelihood function. Computer simulations are provided to show the merit of the proposed method.
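The estimation target can be illustrated with a simplified maximum-likelihood grid search over the tag count and the false-detection probability, treating slots as independent and flipping each true slot state with that probability. This grid search stands in for the paper's iterative EM updates; the slot-independence assumption and the grids are deliberate simplifications.

```python
import numpy as np
from scipy.stats import binom

def estimate(frame_size, observed_empty, max_tags=None):
    """Grid-search ML estimate of (tag count n, false-detection probability p).

    Simplified model: a slot is truly empty with probability (1 - 1/F)**n,
    the observation flips the true state with probability p, and slots are
    treated as independent. With a single frame observation the pair (n, p)
    is only weakly identifiable; this is a sketch, not the paper's estimator.
    """
    F = frame_size
    n_grid = np.arange(1, (max_tags or 10 * F) + 1)
    p_grid = np.linspace(0.0, 0.3, 61)
    best, best_ll = (None, None), -np.inf
    for n in n_grid:
        pe = (1.0 - 1.0 / F) ** n                  # P(slot truly empty)
        for p in p_grid:
            q = pe * (1 - p) + (1 - pe) * p        # P(slot observed empty)
            ll = binom.logpmf(observed_empty, F, q)
            if ll > best_ll:
                best, best_ll = (int(n), float(p)), ll
    return best

print(estimate(frame_size=64, observed_empty=22))
```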

2017-12-12
Tuan, D. M., Viet, N. A..  2017.  A new multi-proxy multi-signature scheme based on elliptic curve cryptography. 2017 4th NAFOSTED Conference on Information and Computer Science. :105–109.

In multi-proxy multi-signature schemes, an original group of signers can authorize another group of proxy signers under the agreement of all signers in both the original group and the proxy group. This paper proposes a new multi-proxy multi-signature scheme based on elliptic curve cryptography. The new scheme is secure against the insider attack, a powerful attack on multi-signature schemes.
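As background, the aggregation step common to multi-signature constructions can be sketched with a toy Schnorr-style co-signing protocol in a prime-order subgroup of Z_p*. This is only a discrete-log analogue under tiny, insecure toy parameters: it is not the paper's elliptic-curve multi-proxy scheme, and it ignores proxy delegation and rogue-key defences.

```python
import hashlib
import secrets

# Toy parameters: g generates the order-11 subgroup of Z_23*. Far too small
# to be secure; for illustration of the aggregation algebra only.
p, q, g = 23, 11, 4

def h(*parts):
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def cosign(msg, keys):
    xs, Xs = zip(*keys)
    rs = [secrets.randbelow(q - 1) + 1 for _ in xs]
    R = 1
    for r in rs:
        R = R * pow(g, r, p) % p                 # combined commitment
    c = h(R, *Xs, msg)
    s = sum(r + c * x for r, x in zip(rs, xs)) % q   # aggregated response
    return R, s

def verify(msg, Xs, sig):
    R, s = sig
    c = h(R, *Xs, msg)
    agg = 1
    for X in Xs:
        agg = agg * X % p
    return pow(g, s, p) == R * pow(agg, c, p) % p

keys = [keygen() for _ in range(3)]
sig = cosign("delegate signing rights", keys)
print(verify("delegate signing rights", [X for _, X in keys], sig))   # True
```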

2017-02-23
P. Jain, S. Nandanwar.  2015.  "Securing the Clustered Database Using Data Modification Technique". 2015 International Conference on Computational Intelligence and Communication Networks (CICN). :1163-1166.

In the new era of information and communication technology (ICT), everyone wants to store or share their data and information in online media such as cloud databases, mobile databases, grid databases and online drives. When data is stored in online media, the main problem that arises is data privacy, because different types of hackers, attackers and crackers want to disclose private information publicly. Security is a continuous process of protecting data or information from attacks. To secure that information from such unauthorized people, we propose and implement a technique based on the data modification concept, applied to the iris database in the Weka tool. This approach provides high privacy in distributed clustered database environments.
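Data modification for privacy is often realised as value perturbation; the sketch below adds calibrated noise to the Iris attributes as a generic illustration. It is not necessarily the paper's exact modification rule, and the noise scale is an assumption.

```python
import numpy as np
from sklearn.datasets import load_iris

rng = np.random.default_rng(seed=1)
X = load_iris().data

# Additive-noise value perturbation: each attribute receives zero-mean Gaussian
# noise scaled to a fraction of its standard deviation, so clustering structure
# is largely preserved while exact records are no longer disclosed.
noise_scale = 0.1 * X.std(axis=0)
X_private = X + rng.normal(0.0, noise_scale, size=X.shape)

print(np.abs(X_private - X).mean(axis=0))   # average distortion per attribute
```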

2017-02-14
F. Hassan, J. L. Magalini, V. de Campos Pentea, R. A. Santos.  2015.  "A project-based multi-disciplinary elective on digital data processing techniques". 2015 IEEE Frontiers in Education Conference (FIE). :1-7.

Today's era of the internet-of-things, cloud computing and big data centers calls for more fresh graduates with expertise in digital data processing techniques such as compression, encryption and error-correcting codes. This paper describes a project-based elective that covers these three main digital data processing techniques and can be offered to three different undergraduate majors: electrical engineering, computer engineering and computer science. The course has been offered successfully for three years. Registration statistics show equal interest from the three majors. Assessment data show that students have successfully completed the different course outcomes. Students' feedback shows that they appreciate the knowledge they attain from this elective and suggests that the workload for this course, in relation to other courses of equal credit, is as expected.

S. Majumdar, A. Maiti, A. Nath.  2015.  "New Secured Steganography Algorithm Using Encrypted Secret Message inside QR™ Code: System Implemented in Android Phone". 2015 International Conference on Computational Intelligence and Communication Networks (CICN). :1130-1134.

Steganography is a method of hiding information, whereas the goal of cryptography is to make data unreadable. Both methodologies have their own advantages and disadvantages. Encrypted messages are easily detectable: if someone is spying on a communication channel, he or she can readily identify encrypted messages, and encryption may draw unnecessary attention to the transferred messages. This may lead to cryptanalysis of the encrypted message if the spy tries to recover it, and if the encryption technique is not strong enough, the message may be deciphered. In contrast, steganography tries to hide the data from a third party by smartly embedding it in some other file that is not at all related to the message; care must be taken to minimize the modification of the container file while embedding the data. The disadvantage of steganography, however, is that it is not as secure as cryptography. In the present method the authors introduce three-step security. First, the secret message is encrypted using the bit-level columnar transposition method introduced by Nath et al., after which the encrypted message is embedded in an image file along with its size. Finally, the modified image is encoded into a QR Code™. The entire method has also been implemented for the Android mobile environment and may be used to transfer confidential messages through Android mobile phones.
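The first step, columnar transposition, can be sketched at the character level as below. The paper applies a bit-level variant (Nath et al.) followed by image embedding and QR encoding, which are not shown here; the padding character and key handling are assumptions.

```python
import math

def columnar_encrypt(plaintext, key):
    """Write row-wise under the key, read columns in alphabetical key order."""
    cols = len(key)
    rows = math.ceil(len(plaintext) / cols)
    padded = plaintext.ljust(rows * cols, "_")           # assumed pad character
    order = sorted(range(cols), key=lambda i: key[i])
    return "".join(padded[r * cols + c] for c in order for r in range(rows))

def columnar_decrypt(ciphertext, key):
    cols = len(key)
    rows = len(ciphertext) // cols
    order = sorted(range(cols), key=lambda i: key[i])
    grid = [""] * cols
    for idx, c in enumerate(order):
        grid[c] = ciphertext[idx * rows:(idx + 1) * rows]
    return "".join(grid[c][r] for r in range(rows) for c in range(cols)).rstrip("_")

key = "ZEBRA"
ct = columnar_encrypt("HELLOWORLD", key)
print(ct, columnar_decrypt(ct, key))   # ODLREOLLHW HELLOWORLD
```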

2015-05-06
Daesung Choi, Sungdae Hong, Hyoung-Kee Choi.  2014.  A group-based security protocol for Machine Type Communications in LTE-Advanced. Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on. :161-162.

We propose an Authentication and Key Agreement (AKA) protocol for Machine Type Communications (MTC) in LTE-Advanced. The protocol is based on the idea of grouping devices so as to reduce signaling congestion in the access network and the overload on the single authentication server. Using a software verification tool, we verified that the protocol is secure against many attacks. Furthermore, performance evaluation suggests that the protocol is efficient with respect to authentication overhead and handover delay.

2015-05-05
Pal, S.K., Sardana, P., Sardana, A..  2014.  Efficient search on encrypted data using bloom filter. Computing for Sustainable Global Development (INDIACom), 2014 International Conference on. :412-416.

Efficient and secure search on encrypted data is an important problem in computer science. Users with large amounts of data or information in multiple documents face problems with their storage and security. Cloud services have become popular due to the reduced cost of storage and flexibility of use, but there is a risk of data loss, misuse and theft. The reliability and security of data stored in the cloud are therefore a matter of concern, specifically for critical applications and those for which the security and privacy of the data are important. Cryptographic techniques preserve the confidentiality of data but make the data unusable for many applications. In this paper we report a novel approach to securely store data at a remote location and perform search in constant time without the need to decrypt documents. We use Bloom filters to perform simple as well as advanced search operations such as case-sensitive search, sentence search and approximate search.
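The core data structure can be sketched as a per-document Bloom filter built over keyword tokens before encryption: the server keeps only the filters and the ciphertexts, answers membership queries in constant time, and may return false positives but never false negatives. The sketch below is a minimal keyword-only version; the paper's case-sensitive, sentence and approximate search variants are not shown, and the filter parameters are assumptions.

```python
import hashlib

class BloomIndex:
    """Per-document Bloom filter over keyword tokens."""

    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, word):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{word}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, word):
        for pos in self._positions(word.lower()):
            self.bits |= 1 << pos

    def might_contain(self, word):
        return all(self.bits >> pos & 1 for pos in self._positions(word.lower()))

index = BloomIndex()
for token in "encrypted search with bloom filters".split():
    index.add(token)
print(index.might_contain("bloom"), index.might_contain("hash"))  # True, (almost surely) False
```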

Uddin, M.P., Abu Marjan, M., Binte Sadia, N., Islam, M.R..  2014.  Developing a cryptographic algorithm based on ASCII conversions and a cyclic mathematical function. Informatics, Electronics Vision (ICIEV), 2014 International Conference on. :1-5.

Encryption and decryption of data in an efficient manner is one of the challenging aspects of modern computer science. This paper introduces a new cryptographic algorithm to achieve a higher level of security, in which it becomes possible to hide the meaning of a message in unprintable characters. The main goal is to make the encrypted message undoubtedly unprintable using several rounds of ASCII conversions and a cyclic mathematical function. The original message is divided into packets, and binary matrices are formed for each packet to produce the unprintable encrypted message by making the ASCII value of each character fall below 32. Similarly, several ASCII conversions and the inverse cyclic mathematical function are used to decrypt the unprintable encrypted message. The final encrypted message, obtained after three rounds of encryption, becomes unprintable text, through which the algorithm attains a higher level of security without increasing the size of the data or losing any data.
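The "keep every code point below 32" idea can be sketched by splitting each byte into 4-bit halves and applying a keyed cyclic shift modulo 16, as below. This toy keeps the output unprintable and reversible, but it is a single-round illustration rather than the paper's packetised three-round scheme, and it is not cryptographically strong.

```python
def encrypt(text: str, key: int) -> bytes:
    """Split each byte into two 4-bit halves and apply a cyclic shift mod 16,
    so every output code point is below 32 (unprintable control range)."""
    out = bytearray()
    for byte in text.encode("utf-8"):
        for nibble in (byte >> 4, byte & 0x0F):
            out.append((nibble + key) % 16)        # values 0..15, all < 32
    return bytes(out)

def decrypt(blob: bytes, key: int) -> str:
    nibbles = [(b - key) % 16 for b in blob]
    data = bytes((hi << 4) | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))
    return data.decode("utf-8")

blob = encrypt("secret", key=7)
assert all(b < 32 for b in blob) and decrypt(blob, key=7) == "secret"
```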

Manandhar, K., Adcock, B., Xiaojun Cao.  2014.  Preserving the Anonymity in MobilityFirst networks. Computer Communication and Networks (ICCCN), 2014 23rd International Conference on. :1-6.

A scheme for preserving privacy in the MobilityFirst (MF) clean-slate future Internet architecture is proposed in this paper. The proposed scheme, called Anonymity in MobilityFirst (AMF), utilizes a three-tiered approach to effectively exploit inherent properties of the MF network, such as the Globally Unique Flat Identifier (GUID) and the Global Name Resolution Service (GNRS), to provide anonymity to users. While employing newly proposed schemes for exchanging keys between the different tiers of routers to alleviate trust issues, the scheme uses multiple routers in each tier to prevent routers across the three tiers from colluding to expose the end users.

Sanger, J., Richthammer, C., Hassan, S., Pernul, G..  2014.  Trust and Big Data: A Roadmap for Research. Database and Expert Systems Applications (DEXA), 2014 25th International Workshop on. :278-282.

We are currently living in the age of Big Data, which comes with the challenge of grasping the golden opportunities at hand. This mixed blessing also dominates the relation between Big Data and trust. On the one side, large amounts of trust-related data can be utilized to establish innovative data-driven approaches for reputation-based trust management. On the other side, this is intrinsically tied to the trust we can put in the origins and quality of the underlying data. In this paper, we address both sides of trust and Big Data by structuring the problem domain and presenting current research directions and inter-dependencies. Based on this, we define focal issues which serve as future research directions on the track to our vision of Next Generation Online Trust within the FORSEC project.

2015-05-04
Khosmood, F., Nico, P.L., Woolery, J..  2014.  User identification through command history analysis. Computational Intelligence in Cyber Security (CICS), 2014 IEEE Symposium on. :1-7.

As any veteran of the editor wars can attest, Unix users can be fiercely and irrationally attached to the commands they use and the manner in which they use them. In this work, we investigate the problem of identifying users out of a large set of candidates (25-97) through their command-line histories. Using standard algorithms and feature sets inspired by natural language authorship attribution literature, we demonstrate conclusively that individual users can be identified with a high degree of accuracy through their command-line behavior. Further, we report on the best performing feature combinations, from the many thousands that are possible, both in terms of accuracy and generality. We validate our work by experimenting on three user corpora comprising data gathered over three decades at three distinct locations. These are the Greenberg user profile corpus (168 users), Schonlau masquerading corpus (50 users) and Cal Poly command history corpus (97 users). The first two are well known corpora published in 1991 and 2001 respectively. The last is developed by the authors in a year-long study in 2014 and represents the most recent corpus of its kind. For a 50 user configuration, we find feature sets that can successfully identify users with over 90% accuracy on the Cal Poly, Greenberg and one variant of the Schonlau corpus, and over 87% on the other Schonlau variant.
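As an illustration of the general approach (not the authors' exact feature sets), command histories can be turned into n-gram count features and fed to a simple classifier, as sketched below. The training sessions are hypothetical; real experiments would load the Greenberg, Schonlau or Cal Poly corpora instead.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: one command-history string per session,
# labelled with the user who produced it.
histories = [
    "ls -la; cd src; vim main.c; make; ./a.out",
    "git status; git diff; git commit -m fix; git push",
    "ls; emacs notes.org; grep -r TODO .; python run.py",
    "git pull; make test; gdb ./a.out; vim Makefile",
]
users = ["alice", "bob", "alice", "bob"]

# Command and command-pair (1- and 2-gram) token counts as stylometric features.
model = make_pipeline(
    CountVectorizer(token_pattern=r"[^\s;]+", ngram_range=(1, 2)),
    MultinomialNB(),
)
model.fit(histories, users)
print(model.predict(["git fetch; git diff; vim README; git push"]))
```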

Shahare, P.C., Chavhan, N.A..  2014.  An Approach to Secure Sink Node's Location Privacy in Wireless Sensor Networks. Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on. :748-751.

Wireless sensor networks have a wide range of applications, including environmental monitoring and data gathering in hostile environments. This kind of network is easily exposed to different external and internal attacks because of its open nature. The sink node is a receiving and collection point that gathers data from the sensor nodes present in the network; it thus forms a bridge between the sensors and the user. A complete sensor network can be rendered useless if this sink node is attacked, so to ensure continuous usage it is very important to preserve the location privacy of sink nodes. An approach for securing the location privacy of the sink node is proposed in this paper. The proposed scheme modifies the traditional Blast technique by adding a shortest-path algorithm and an efficient clustering mechanism in the network, and tries to minimize energy consumption and packet delay.

2015-04-30
Sousa, S., Dias, P., Lamas, D..  2014.  A model for Human-computer trust: A key contribution for leveraging trustful interactions. Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on. :1-6.

This article addresses trust in computer systems as a social phenomenon, which depends on the type of relationship that is established through the computer or with other individuals. It starts by theoretically contextualizing trust and then situates trust in the field of computer science. It then describes the proposed model, which builds on what one perceives to be trustworthy and is influenced by a number of factors such as the history of participation and the user's perceptions. It ends by situating the proposed model as a key contribution for leveraging trustful interactions and by proposing that it serve as a complement to foster users' trust needs in what concerns Human-Computer Interaction or Computer-mediated Interactions.

2014-09-26
Sommer, R., Paxson, V..  2010.  Outside the Closed World: On Using Machine Learning for Network Intrusion Detection. Security and Privacy (SP), 2010 IEEE Symposium on. :305-316.

In network intrusion detection research, one popular strategy for finding attacks is monitoring a network's activity for anomalies: deviations from profiles of normality previously learned from benign traffic, typically identified using tools borrowed from the machine learning community. However, despite extensive academic research one finds a striking gap in terms of actual deployments of such systems: compared with other intrusion detection approaches, machine learning is rarely employed in operational "real world" settings. We examine the differences between the network intrusion detection problem and other areas where machine learning regularly finds much more success. Our main claim is that the task of finding attacks is fundamentally different from these other applications, making it significantly harder for the intrusion detection community to employ machine learning effectively. We support this claim by identifying challenges particular to network intrusion detection, and provide a set of guidelines meant to strengthen future research on anomaly detection.