Biblio

Found 433 results

Filters: Keyword is Neural networks
2021-08-31
Manavi, Farnoush, Hamzeh, Ali.  2020.  A New Method for Ransomware Detection Based on PE Header Using Convolutional Neural Networks. 2020 17th International ISC Conference on Information Security and Cryptology (ISCISC). :82–87.
With the spread of information technology in human life, data protection is a critical task. On the other hand, malicious programs are developed that can manipulate sensitive and critical data and restrict access to it. Ransomware is an example of such a malicious program: it encrypts data, restricts users' access to the system or their data, and then requests a ransom payment. Many methods have been proposed for ransomware detection, most of which attempt to identify ransomware by relying on program behavior during execution. The main weakness of these methods is that it is not clear how long a program should be monitored before it shows its real behavior; as a result, they sometimes fail to detect ransomware early. In this paper, a new method for ransomware detection is proposed that does not require running the program and instead uses the PE header of the executable file. To extract effective features from the PE header, an image is constructed from its bytes. Then, given the advantages of convolutional neural networks in extracting features from images and classifying them, a CNN is used. The proposed method achieves 93.33% accuracy. Our results indicate the usefulness and practicality of the method for ransomware detection.
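For illustration, a minimal sketch of the pipeline this abstract describes: raw PE-header bytes are reshaped into a small grayscale image and fed to a CNN. The 1024-byte header window, the 32×32 image size, and the architecture are assumptions for the sketch, not the authors' exact configuration.

```python
# Hypothetical sketch: PE-header bytes -> 32x32 grayscale image -> CNN.
import numpy as np
import tensorflow as tf

def pe_header_to_image(path, size=1024):
    """Read the first `size` bytes of an executable and reshape to 32x32."""
    raw = np.fromfile(path, dtype=np.uint8, count=size)
    raw = np.pad(raw, (0, size - len(raw)))          # pad short files
    return raw.reshape(32, 32, 1).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # ransomware vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```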
2021-10-12
Gouk, Henry, Hospedales, Timothy M..  2020.  Optimising Network Architectures for Provable Adversarial Robustness. 2020 Sensor Signal Processing for Defence Conference (SSPD). :1–5.
Existing Lipschitz-based provable defences to adversarial examples only cover the L2 threat model. We introduce the first bound that makes use of Lipschitz continuity to provide a more general guarantee for threat models based on any Lp norm. Additionally, a new strategy is proposed for designing network architectures that exhibit superior provable adversarial robustness over conventional convolutional neural networks. Experiments are conducted to validate our theoretical contributions, show that the assumptions made during the design of our novel architecture hold in practice, and quantify the empirical robustness of several Lipschitz-based adversarial defence methods.
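As background, the core idea behind Lipschitz-based certificates can be sketched as follows. This uses the common folklore margin bound (certified radius of margin / 2L when each logit is L-Lipschitz in the chosen Lp norm); it is illustrative only, not the paper's tighter bound.

```python
# Illustrative margin-based Lipschitz certificate (not the paper's exact
# bound): the prediction cannot change within an Lp ball of radius M / (2L),
# where M is the logit margin and L a Lipschitz constant of the logits.
import numpy as np

def certified_radius(logits, lipschitz_const):
    top2 = np.sort(logits)[-2:]          # second-best and best logits
    margin = top2[1] - top2[0]
    return margin / (2.0 * lipschitz_const)

print(certified_radius(np.array([0.1, 2.3, 5.0]), lipschitz_const=1.5))
```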
2021-05-13
Sheptunov, Sergey A., Sukhanova, Natalia V..  2020.  The Problems of Design and Application of Switching Neural Networks in Creation of Artificial Intelligence. 2020 International Conference Quality Management, Transport and Information Security, Information Technologies (IT QM IS). :428–431.
A new switching architecture for neural networks is proposed. Switching neural networks consist of neurons and switchers; the goal is to reduce the expense of designing and training artificial neural networks. Realizing complex models, algorithms, and control methods requires neural networks of large size, and the number of all-to-all interconnection links grows with the number of neurons, so training large neural networks requires supercomputer resources. Training time likewise depends on the number of neurons in the network. Switching neural networks are divided into fragments connected by switchers, and the network is trained fragment by fragment. On the basis of switching neural networks, associative memory devices were designed with a number of neurons comparable to the human brain.
Li, Yizhi.  2020.  Research on Application of Convolutional Neural Network in Intrusion Detection. 2020 7th International Forum on Electrical Engineering and Automation (IFEEA). :720–723.
At present, our lives are almost inseparable from networks, which provide great convenience; however, network security incidents of many kinds occur very frequently. In recent years, with the continuous development of neural network technology, more and more researchers have applied neural networks to intrusion detection, which has developed into a new research direction within the field. As long as a neural network is provided with input data that includes network packets, it can, through self-learning, separate out the features of abnormal data and detect abnormal data effectively. This article therefore proposes an intrusion detection method based on deep convolutional neural networks (CNNs), which is tested on public data sets. The results show that the model has a higher accuracy rate and a lower false negative rate than traditional intrusion detection methods.
2020-07-20
Boumiza, Safa, Braham, Rafik.  2019.  An Anomaly Detector for CAN Bus Networks in Autonomous Cars based on Neural Networks. 2019 International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). :1–6.
The domain of securing in-vehicle networks has attracted both academic and industrial researchers due to the high danger that attacks pose to drivers and passengers. While securing wired and wireless interfaces is important to defend against these threats, detecting attacks is still the critical phase in constructing a robust secure system. Despite the efficiency of anomaly-detection techniques in systems that need real-time detection, there are only a few results on using them to secure communication inside vehicles. Therefore, we propose an intrusion detection system (IDS) for the Controller Area Network (CAN) bus based on a Multi-Layer Perceptron (MLP) neural network. This IDS divides data according to the ID field of CAN packets using the K-means clustering algorithm, then extracts suitable features and uses them to train and construct the neural network. The proposed IDS works on each ID separately and finally combines the individual decisions into a final score, generating an alert in the presence of an attack. The strength of our intrusion detection method is that it works simultaneously for two types of attacks, which eliminates the need for several separate IDSs and thus reduces the complexity and cost of implementation.
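A hedged sketch of the described pipeline: packets are grouped by CAN ID with K-means, and one MLP detector is trained per cluster. The feature extraction and alert fusion here are simplified placeholders, not the authors' implementation.

```python
# Sketch: cluster CAN IDs with K-means, train one MLP detector per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def train_per_id_detectors(ids, features, labels, n_clusters=4):
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(ids.reshape(-1, 1))
    detectors = {}
    for c in range(n_clusters):
        mask = km.labels_ == c
        clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
        detectors[c] = clf.fit(features[mask], labels[mask])
    return km, detectors

def alert(km, detectors, can_id, feature_vec):
    c = km.predict(np.array([[can_id]]))[0]           # route by CAN ID cluster
    return detectors[c].predict(feature_vec.reshape(1, -1))[0]  # 1 = attack
```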
2020-05-08
Fu, Tian, Lu, Yiqin, Zhen, Wang.  2019.  APT Attack Situation Assessment Model Based on optimized BP Neural Network. 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). :2108–2111.
This paper first analyzes the characteristics of Advanced Persistent Threats (APT). It then establishes a BP neural network, optimized by an improved adaptive genetic algorithm, to predict the security risk of nodes in the network under APT attack, and calculates the APT attack path with the maximum attack probability. Finally, experiments verify the effectiveness and correctness of the algorithm by simulating attacks. The experiments show that this model can effectively evaluate the security situation in the network, allowing defenders to adopt effective measures against APT attacks and thus improve network security.
2020-09-04
Jing, Huiyun, Meng, Chengrui, He, Xin, Wei, Wei.  2019.  Black Box Explanation Guided Decision-Based Adversarial Attacks. 2019 IEEE 5th International Conference on Computer and Communications (ICCC). :1592–1596.
Adversarial attacks have been a hot research field in artificial intelligence security. Decision-based black-box adversarial attacks are the most appropriate for real-world scenarios, where only the final decisions of the targeted deep neural networks are accessible. However, since there is no available guidance for searching for an imperceptible adversarial perturbation, the boundary attack, one of the best-performing decision-based black-box attacks, carries out a computationally expensive search. To improve attack efficiency, we propose a novel black-box-explanation-guided decision-based adversarial attack. Firstly, the problem of decision-based adversarial attacks is modeled as a derivative-free, constrained optimization problem. To solve this optimization problem, a black-box-explanation-guided constrained random search method is proposed to find the imperceptible adversarial example more quickly. The insights into the targeted deep neural networks explored by the black-box explanation are fully used to accelerate the computationally expensive random search. Experimental results demonstrate that our proposed attack improves attack efficiency by 64% compared with the boundary attack.
2020-09-11
Azakami, Tomoka, Shibata, Chihiro, Uda, Ryuya, Kinoshita, Toshiyuki.  2019.  Creation of Adversarial Examples with Keeping High Visual Performance. 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT). :52–56.
The accuracy of image classification by convolutional neural networks now exceeds that of human beings and contributes to various fields. However, the improvement of image recognition technology deals a great blow to image-based security systems such as CAPTCHA. In particular, since character-string CAPTCHAs already add distortion and noise so that they cannot be read by computers, lowered human readability becomes a problem. Adversarial examples are a technique for producing images that intentionally cause a machine learning classifier to be wrong. The best feature of this technique is that when human beings compare the original image with an adversarial example, they cannot perceive any difference in appearance. However, adversarial examples created with the conventional FGSM cannot reliably cause strongly nonlinear networks such as CNNs to misclassify. Osadchy et al. researched applying adversarial examples to CAPTCHA and attempted to make a CNN misclassify them, but they could not make the CNN misclassify character images. In this research, we propose a method to apply FGSM to character-string CAPTCHAs and make a CNN misclassify them.
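For reference, the conventional FGSM baseline the abstract builds on is a one-step gradient-sign perturbation; a minimal sketch follows (the epsilon value and Keras-style model interface are illustrative):

```python
# Minimal FGSM (Goodfellow et al.): perturb the input in the direction of
# the sign of the loss gradient, then clip to the valid pixel range.
import tensorflow as tf

def fgsm(model, x, y_true, eps=0.03):
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, model(x))
    grad = tape.gradient(loss, x)
    x_adv = x + eps * tf.sign(grad)           # one-step perturbation
    return tf.clip_by_value(x_adv, 0.0, 1.0)  # keep valid pixel range
```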
2020-09-08
de Almeida Ramos, Elias, Filho, João Carlos Britto, Reis, Ricardo.  2019.  Cryptography by Synchronization of Hopfield Neural Networks that Simulate Chaotic Signals Generated by the Human Body. 2019 17th IEEE International New Circuits and Systems Conference (NEWCAS). :1–4.
In this work, an asymmetric cryptography method for information security was developed, inspired by the fact that the human body generates chaotic signals, and these signals can be used to create sequences of random numbers. The encryption circuit was implemented in reconfigurable hardware (FPGA). To encode and decode an image, chaotic synchronization between two dynamic systems, such as Hopfield neural networks (HNNs), was used to simulate chaotic signals. The notion of homotopy, an argument of topological nature, was used for the synchronization. The results show efficiency when compared to the state of the art, in terms of image correlation, histogram analysis, and hardware implementation.
2020-07-03
Bashir, Muzammil, Rundensteiner, Elke A., Ahsan, Ramoza.  2019.  A deep learning approach to trespassing detection using video surveillance data. 2019 IEEE International Conference on Big Data (Big Data). :3535–3544.
Railroad trespassing is a dangerous activity with significant security and safety risks. However, regular patrolling of potential trespassing sites is infeasible due to exceedingly high resource demands and personnel costs. This raises the need to design automated trespass detection and early warning prediction techniques leveraging state-of-the-art machine learning. To meet this need, we propose ARTS, a novel framework for an Automated Railroad Trespassing detection System using video surveillance data. As the core of our solution, we adopt a CNN-based deep learning architecture capable of video processing. However, these deep learning-based methods, while effective, are known to be computationally expensive and time consuming, especially when applied to a large volume of surveillance data. Leveraging the sparsity of railroad trespassing activity, ARTS uses a dual-stage deep learning architecture composed of an inexpensive pre-filtering stage for activity detection, followed by a high-fidelity trespass classification stage employing a deep neural network. The resulting dual-stage architecture represents a flexible solution capable of trading off accuracy against computational time. We demonstrate the efficacy of our approach on public domain surveillance data, achieving an F1 score of 0.87 while keeping up with the enormous video volume, a practical time and accuracy trade-off.
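A toy sketch of the dual-stage idea: a cheap frame-difference filter discards inactive frames so the expensive classifier only sees candidate activity. The motion threshold and classifier interface are illustrative assumptions, not ARTS internals.

```python
# Stage 1: cheap motion pre-filter. Stage 2: expensive deep classifier.
import numpy as np

def dual_stage(frames, heavy_classifier, motion_thresh=12.0):
    alerts, prev = [], None
    for i, frame in enumerate(frames):
        gray = frame.mean(axis=2)                 # cheap grayscale conversion
        if prev is not None:
            motion = np.abs(gray - prev).mean()   # stage 1: activity filter
            if motion > motion_thresh and heavy_classifier(frame):
                alerts.append(i)                  # stage 2 confirmed trespass
        prev = gray
    return alerts
```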
2021-01-15
Yadav, D., Salmani, S..  2019.  Deepfake: A Survey on Facial Forgery Technique Using Generative Adversarial Network. 2019 International Conference on Intelligent Computing and Control Systems (ICCS). :852–857.
"Deepfake" it is an incipiently emerging face video forgery technique predicated on AI technology which is used for creating the fake video. It takes images and video as source and it coalesces these to make a new video using the generative adversarial network and the output is very convincing. This technique is utilized for generating the unauthentic spurious video and it is capable of making it possible to generate an unauthentic spurious video of authentic people verbally expressing and doing things that they never did by swapping the face of the person in the video. Deepfake can create disputes in countries by influencing their election process by defaming the character of the politician. This technique is now being used for character defamation of celebrities and high-profile politician just by swapping the face with someone else. If it is utilized in unethical ways, this could lead to a serious problem. Someone can use this technique for taking revenge from the person by swapping face in video and then posting it to a social media platform. In this paper, working of Deepfake technique along with how it can swap faces with maximum precision in the video has been presented. Further explained are the different ways through which we can identify if the video is generated by Deepfake and its advantages and drawback have been listed.
2020-09-21
Chow, Ka-Ho, Wei, Wenqi, Wu, Yanzhao, Liu, Ling.  2019.  Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks. 2019 IEEE International Conference on Big Data (Big Data). :1282–1291.
Deep neural networks (DNNs) have demonstrated impressive performance on many challenging machine learning tasks. However, DNNs are vulnerable to adversarial inputs generated by adding maliciously crafted perturbations to benign inputs. As a growing number of attacks have been reported that generate adversarial inputs of varying sophistication, the defense-attack arms race has accelerated. In this paper, we present MODEF, a cross-layer model diversity ensemble framework. MODEF intelligently combines an unsupervised model-denoising ensemble with a supervised model-verification ensemble by quantifying model diversity, aiming to boost the robustness of the target model against adversarial examples. Evaluated using eleven representative attacks on popular benchmark datasets, MODEF achieves remarkable defense success rates compared with existing defense methods and provides a superior capability of repairing adversarial inputs and making correct predictions with high accuracy in the presence of black-box attacks.
2020-12-07
Yang, Z..  2019.  Fidelity: Towards Measuring the Trustworthiness of Neural Network Classification. 2019 IEEE Conference on Dependable and Secure Computing (DSC). :1–8.
With the increasing performance of neural networks on many security-critical tasks, the security concerns of machine learning have become increasingly prominent. Recent studies have shown that neural networks are vulnerable to adversarial examples: carefully crafted inputs with negligible perturbations on legitimate samples can mislead a neural network into producing adversary-selected outputs while humans can still correctly classify them. Therefore, we need an additional measure of the trustworthiness of the results of a machine learning model, especially in adversarial settings. In this paper, we analyse the root cause of adversarial examples and propose a new property of machine learning models, namely fidelity, to describe the gap between what a model learns and the ground truth learned by humans. One of its benefits is detecting adversarial attacks. We formally define fidelity and propose a novel approach to quantify it. We evaluate the quantification of fidelity in adversarial settings on two neural networks. The study shows that incorporating fidelity enables a neural network system to detect adversarial examples with a true positive rate of 97.7% and a false positive rate of 1.67% on a studied neural network.
2020-01-27
Qureshi, Ayyaz-Ul-Haq, Larijani, Hadi, Javed, Abbas, Mtetwa, Nhamoinesu, Ahmad, Jawad.  2019.  Intrusion Detection Using Swarm Intelligence. 2019 UK/ China Emerging Technologies (UCET). :1–5.
Recent advances in networking and communication technologies have enabled Internet-of-Things (IoT) devices to communicate more frequently and faster. An IoT device typically transmits data over the Internet, which is an insecure channel. Cyber attacks such as denial-of-service (DoS), man-in-the-middle, and SQL injection are considered big threats to IoT devices. In this paper, an anomaly-based intrusion detection scheme is proposed that can protect sensitive information and detect novel cyber-attacks. The Artificial Bee Colony (ABC) algorithm is used to train the Random Neural Network (RNN) based system (RNN-ABC). The proposed scheme is trained on NSL-KDD Train+ and tested on unseen data. The experimental results suggest that swarm intelligence and the RNN successfully classify novel attacks with an accuracy of 91.65%. Additionally, the performance of the proposed scheme is compared with a hybrid multilayer perceptron (MLP) based intrusion detection system using sensitivity, mean of mean squared error (MMSE), standard deviation of MSE (SDMSE), best mean squared error (BMSE) and worst mean squared error (WMSE) parameters. All experimental tests confirm the robustness and high accuracy of the proposed scheme.
2020-05-11
Peng, Wang, Kong, Xiangwei, Peng, Guojin, Li, Xiaoya, Wang, Zhongjie.  2019.  Network Intrusion Detection Based on Deep Learning. 2019 International Conference on Communications, Information System and Computer Engineering (CISCE). :431–435.
With the continuous development of computer network technology, security problems in networks are emerging one after another and are becoming increasingly difficult to ignore. For network administrators, preventing malicious hackers from intruding, so that network systems and computers remain in safe and normal operation, is an urgent task. This paper proposes a network intrusion detection method based on deep learning. The method uses a deep belief network to extract features from network monitoring data and a BP neural network as the top-level classifier to classify intrusion types. The method was validated using the KDD CUP'99 dataset from the Lincoln Laboratory of the Massachusetts Institute of Technology. The results show that the proposed method offers a significant improvement in accuracy over traditional machine learning methods.
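A hedged stand-in for the described architecture, using scikit-learn: stacked RBMs approximate the deep-belief-network feature extractor, and an MLP stands in for the BP top-level classifier. The layer sizes are assumptions.

```python
# Stacked RBM feature extraction (DBN stand-in) + BP-style MLP classifier,
# intended for KDD-style feature vectors scaled to [0, 1].
from sklearn.neural_network import BernoulliRBM, MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

nids = Pipeline([
    ("scale", MinMaxScaler()),
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05)),
    ("bp",   MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)),
])
# nids.fit(X_train, y_train); nids.predict(X_test)  # X: KDD CUP'99 features
```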
2020-08-03
Juuti, Mika, Szyller, Sebastian, Marchal, Samuel, Asokan, N..  2019.  PRADA: Protecting Against DNN Model Stealing Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :512–527.
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
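A simplified sketch of PRADA's core test as described: benign clients tend to produce query-to-query distances that look normally distributed, while synthetic extraction queries do not. The window size and threshold below are assumptions, not the paper's tuned values.

```python
# Flag model-extraction behavior when recent query distances fail a
# normality test (Shapiro-Wilk), per the abstract's description.
import numpy as np
from scipy.stats import shapiro

def prada_alarm(queries, w_threshold=0.90, window=100):
    q = np.asarray(queries, dtype=float).reshape(len(queries), -1)
    dists = [np.min(np.linalg.norm(q[i] - q[:i], axis=1))
             for i in range(1, len(q))]      # min distance to past queries
    stat, _ = shapiro(dists[-window:])       # normality of recent distances
    return stat < w_threshold                # True -> raise extraction alarm
```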
2020-09-14
Ma, Zhuo, Liu, Yang, Liu, Ximeng, Ma, Jianfeng, Li, Feifei.  2019.  Privacy-Preserving Outsourced Speech Recognition for Smart IoT Devices. IEEE Internet of Things Journal. 6:8406–8420.
Most current intelligent Internet of Things (IoT) products take neural network-based speech recognition as the standard human-machine interaction interface. However, traditional speech recognition frameworks for smart IoT devices collect and transmit voice information in the form of plaintext, which may cause the disclosure of user privacy. Because speech features are widely used for biometric authentication, such privacy leakage can cause immeasurable losses to personal property and privacy. Therefore, in this paper, we propose an outsourced privacy-preserving speech recognition framework (OPSR) for smart IoT devices based on the long short-term memory (LSTM) neural network and edge computing. In the framework, a series of additive secret sharing-based interactive protocols between two edge servers is designed to achieve lightweight outsourced computation, and based on these protocols, we implement the LSTM neural network training process for intelligent IoT device voice control. Finally, combining universal composability theory with experimental results, we theoretically prove the correctness and security of our framework.
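A minimal illustration of the additive secret sharing underlying the two-edge-server design: a feature vector is split into two random shares, each meaningless on its own, and results recombine by addition. Real protocols operate over a finite ring; floating point is used here purely for illustration.

```python
# Additive secret sharing: share_1 + share_2 == x, yet neither share alone
# reveals anything about x.
import numpy as np

def share(x, rng):
    r = rng.standard_normal(x.shape)   # random mask held by server 1
    return r, x - r                    # server 2 holds the masked remainder

rng = np.random.default_rng(0)
x = rng.standard_normal(8)             # e.g., an LSTM input feature vector
s1, s2 = share(x, rng)
assert np.allclose(s1 + s2, x)         # reconstruction, never done by one server
```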
2019-12-30
Liu, Keng-Cheng, Hsu, Chen-Chien, Wang, Wei-Yen, Chiang, Hsin-Han.  2019.  Real-Time Facial Expression Recognition Based on CNN. 2019 International Conference on System Science and Engineering (ICSSE). :120–123.
In this paper, we propose a method for improving the robustness of real-time facial expression recognition. Although there are many ways to improve the accuracy of facial expression recognition, revamping the training framework and image preprocessing allows better results in applications. One existing problem is that when the camera captures images at high speed, the image characteristics may change at certain moments due to the influence of light and other factors, and such changes can result in incorrect recognition of the facial expression. To solve this problem while keeping the system running smoothly and maintaining recognition speed, we take these changes in image characteristics during high-speed capture into account. The proposed method does not use the immediate output directly, but averages it with the outputs for previous images to facilitate recognition, thereby reducing interference from momentary image characteristics. The experimental results show that after adopting this method, the overall robustness and accuracy of facial expression recognition are greatly improved compared to those obtained with the convolutional neural network (CNN) alone.
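A sketch of the described smoothing: rather than trusting a single frame's CNN output, the class probabilities of the last few frames are averaged so a momentary lighting change cannot flip the recognized expression. The window size is an assumption.

```python
# Temporal averaging of per-frame softmax outputs before taking argmax.
from collections import deque
import numpy as np

class SmoothedExpression:
    def __init__(self, n_frames=5):
        self.history = deque(maxlen=n_frames)   # sliding window of outputs

    def update(self, softmax_probs):
        self.history.append(np.asarray(softmax_probs))
        return int(np.mean(self.history, axis=0).argmax())  # smoothed label
```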
2020-11-30
Stokes, J. W., Agrawal, R., McDonald, G., Hausknecht, M..  2019.  ScriptNet: Neural Static Analysis for Malicious JavaScript Detection. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :1–8.
Malicious scripts are an important computer infection threat vector for computer users. For internet-scale processing, static analysis offers substantial computing efficiencies. We propose the ScriptNet system for neural malicious JavaScript detection which is based on static analysis. We also propose a novel deep learning model, Pre-Informant Learning (PIL), which processes JavaScript files as byte sequences. Lower layers capture the sequential nature of these byte sequences while higher layers classify the resulting embedding as malicious or benign. Unlike previously proposed solutions, our model variants are trained in an end-to-end fashion allowing discriminative training even for the sequential processing layers. Evaluating this model on a large corpus of 212,408 JavaScript files indicates that the best performing PIL model offers a 98.10% true positive rate (TPR) for the first 60K byte subsequences and 81.66% for the full-length files, at a false positive rate (FPR) of 0.50%. Both models significantly outperform several baseline models. The best performing PIL model can successfully detect 92.02% of unknown malware samples in a hindsight experiment where the true labels of the malicious JavaScript files were not known when the model was trained.
2020-08-28
Huang, Angus F.M., Chi-Wei, Yang, Tai, Hsiao-Chi, Chuan, Yang, Huang, Jay J.C., Liao, Yu-Han.  2019.  Suspicious Network Event Recognition Using Modified Stacking Ensemble Machine Learning. 2019 IEEE International Conference on Big Data (Big Data). :5873–5880.
This study aims to distinguish genuine suspicious events from false alarms within a dataset of network traffic alerts. The rapid development of cloud computing and artificial intelligence-oriented automatic services has enabled a large amount of data and information to be transmitted among network nodes; however, cyber-threats, cyberattacks, and network intrusions have increased across various network environments. Drawing on data science and machine learning, this paper proposes a series of solutions involving data preprocessing, exploratory data analysis, new feature creation, feature selection, ensemble learning, model construction, and verification to identify suspicious network events. The paper proposes a modified form of stacking ensemble machine learning which includes AdaBoost, Neural Networks, Random Forest, LightGBM, and Extremely Randomised Trees (Extra Trees) to realise high-performance classification. A suspicious network event recognition dataset for a security operations centre, which uses real network log observations from the 2019 IEEE BigData Cup Challenge, is used as the experimental dataset. This paper investigates the possibility of integrating big-data analytics, machine learning, and data science to improve intelligent cybersecurity.
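A sketch of such a stacking ensemble with scikit-learn's StackingClassifier; GradientBoostingClassifier is swapped in for LightGBM to keep the example self-contained, and the logistic-regression meta-learner is an assumption.

```python
# Stacking ensemble over heterogeneous base learners, in the spirit of the
# abstract's AdaBoost / NN / RF / boosting / Extra Trees combination.
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

stack = StackingClassifier(
    estimators=[
        ("ada",   AdaBoostClassifier()),
        ("mlp",   MLPClassifier(max_iter=500)),
        ("rf",    RandomForestClassifier()),
        ("gbm",   GradientBoostingClassifier()),   # stand-in for LightGBM
        ("extra", ExtraTreesClassifier()),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
# stack.fit(X_train, y_train)  # X: engineered alert features, y: event labels
```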
2020-05-04
Su, Liya, Yao, Yepeng, Lu, Zhigang, Liu, Baoxu.  2019.  Understanding the Influence of Graph Kernels on Deep Learning Architecture: A Case Study of Flow-Based Network Attack Detection. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :312–318.
Flow-based network attack detection technology is able to identify many threats in network traffic. Existing techniques have several drawbacks: i) rule-based approaches are vulnerable because they need signatures defined for every possible attack; ii) anomaly-based approaches are not efficient because it is easy to find ways to launch attacks that bypass detection; and iii) both rule-based and anomaly-based approaches rely heavily on domain knowledge of networked systems and cyber security. The major challenge for existing methods is to understand novel attack scenarios and design a model to detect novel and more serious attacks. In this paper, we investigate network attacks and unveil their key activities and the relationships between those activities. To this end, we propose methods for modeling network security practices using theoretic concepts such as graph kernels. In addition, we integrate graph kernels into a deep learning architecture to exploit the relationship expressiveness among network flows, combining the ability of deep neural networks (DNNs) to learn hidden representations with the communication representation graph of each network flow in a specific time interval; flow-based network attack detection can then be done effectively by measuring the similarity between the graphs of two flows. The proposed study proves effective at obtaining insights about network attacks and detecting them. Using two real-world datasets that contain several new types of network attacks, we achieve significant improvements in accuracy over existing network attack detection tasks.
2020-05-08
Lavrova, Daria, Zegzhda, Dmitry, Yarmak, Anastasiia.  2019.  Using GRU neural network for cyber-attack detection in automated process control systems. 2019 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom). :1–3.
This paper provides an approach to the detection of information security breaches in automated process control systems (APCS), which consists of forecasting multivariate time series formed from the values of the operating parameters of the end-system devices. Using an experimental water treatment model, the forecasting results for the parameters characterizing the operation of the entire model were compared with those for the parameters characterizing the flow of individual subprocesses implemented by the model. A GRU neural network was trained to perform the forecasting.
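A hedged sketch of this style of forecasting detector: a GRU predicts the next vector of sensor readings, and a large forecast error flags a possible attack. The window length, network size, and error threshold are illustrative assumptions, not the paper's configuration.

```python
# GRU forecaster over multivariate sensor series; anomaly = large MSE
# between the forecast and the actually observed next reading.
import numpy as np
import tensorflow as tf

n_sensors, window = 8, 20
model = tf.keras.Sequential([
    tf.keras.layers.GRU(64, input_shape=(window, n_sensors)),
    tf.keras.layers.Dense(n_sensors),      # next-step multivariate forecast
])
model.compile(optimizer="adam", loss="mse")

def is_attack(model, recent, observed, threshold=0.5):
    pred = model(recent[np.newaxis])[0]    # forecast from last `window` steps
    err = float(tf.reduce_mean((pred - observed) ** 2))
    return err > threshold                 # True -> possible security breach
```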
2020-09-28
Liu, Kai, Zhou, Yun, Wang, Qingyong, Zhu, Xianqiang.  2019.  Vulnerability Severity Prediction With Deep Neural Network. 2019 5th International Conference on Big Data and Information Analytics (BigDIA). :114–119.
The high frequency of network security incidents in recent years has brought many negative effects and even huge economic losses to countries, enterprises, and individuals, so more and more attention has been paid to the problem of network security. In order to evaluate newly disclosed vulnerability text accurately, and to reduce both the workload of experts and the false negative rate of the traditional method, multiple deep learning methods for vulnerability text classification are proposed in this paper. The standard Cross Site Scripting (XSS) vulnerability text data is processed first and then classified using three kinds of deep neural networks (CNN, LSTM, TextRCNN) and one traditional machine learning method (XGBoost). The dropout ratio of the optimal CNN network, the number of epochs for all deep neural networks, and the training set data were tuned via experiments to improve the fit on our target task. The results show that the deep learning methods evaluate vulnerability risk levels better than traditional machine learning methods, but cost more time. We train our models on various training sets and test them with the same testing set. The performance and utility of the recurrent convolutional neural network (TextRCNN) is highest in comparison to all other methods, with a classification accuracy of 93.95%.
2020-06-26
Betha, Durga Janardhana Anudeep, Bhanuj, Tatineni Sai, Umamaheshwari, B, Iyer, R. Abirami, Devi, R. Santhiya, Amirtharajan, Rengarajan, Praveenkumar, Padmapriya.  2019.  Chaotic based Image Encryption - A Neutral Perspective. 2019 International Conference on Computer Communication and Informatics (ICCCI). :1–5.
Today, there are several applications that allow us to share images over the internet. All these images must be stored in a secure manner and should be accessible only to the intended recipients, so it is of utmost importance to develop efficient and fast algorithms for the encryption of images. This paper uses chaotic generators to produce random sequences that can be used as keys for image encryption. These sequences are seemingly random and have good statistical properties, which makes them resistant to analysis and correlation attacks. However, these sequences have fixed cycle lengths, which restricts the number of sequences that can be used as keys. This paper utilises a neural network as a source of perturbation in a chaotic generator and uses its output to encrypt an image. The robustness of the encryption algorithm can be verified using NPCR, UACI, correlation coefficient analysis and information entropy analysis.
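A minimal illustration of the chaotic core of such schemes: a logistic-map keystream XOR-ed with the image bytes. The neural network perturbation described in the paper is omitted, and the map parameters here are arbitrary.

```python
# Logistic-map keystream XOR cipher; the same call encrypts and decrypts.
import numpy as np

def logistic_keystream(n, x0=0.7, r=3.99):
    ks, x = np.empty(n, dtype=np.uint8), x0
    for i in range(n):
        x = r * x * (1.0 - x)              # chaotic logistic map iteration
        ks[i] = int(x * 256) % 256
    return ks

def xor_image(img_bytes, key=0.7):
    ks = logistic_keystream(img_bytes.size, x0=key)
    return img_bytes.flatten() ^ ks        # XOR with the chaotic keystream

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
enc = xor_image(img)
assert (xor_image(enc.reshape(4, 4)) == img.flatten()).all()  # round trip
```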

2020-06-12
Cui, Yongcheng, Wang, Wenyong.  2019.  Colorless Video Rendering System via Generative Adversarial Networks. 2019 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA). :464–467.
In today's society, even though technology is highly developed, the colorization of computer images has remained a manual process. As a carrier of human culture and art, film has existed in our history for over a hundred years, and with the development of science and technology, movies have progressed from the simple black-and-white film era to the current digital age. Coloring old movies is a very complicated process: aside from traditional hand-painting techniques, the most common method is to use post-processing software to color movie frames, an operation that demands extraordinary skill, patience, and aesthetic sense from the operator. In recent years, the extensive use of machine learning and neural networks has made it possible for computers to process images intelligently. Since 2016, various generative adversarial network models have been proposed, making deep learning shine in the fields of image style transfer, image colorization, and image style change. In this work, the experiment uses the generative adversarial network principle to process pictures and videos and realize the automatic rendering of old documentary movies.