Biblio
Filters: First Letter Of Title is C
Comparison Of Different Machine Learning Methods Applied To Obesity Classification. 2022 International Conference on Machine Learning and Intelligent Systems Engineering (MLISE). :467—472.
2022. Estimating obesity levels is an important topic in the medical field, since it can provide useful guidance for people who want to lose weight or keep fit. The article tries to find a model that can predict obesity and provide people with information on how to avoid becoming overweight. Specifically, the article applies dimension reduction to simplify the data set and uses Principal Component Analysis (PCA) to identify the most decisive feature of obesity. It also applies machine learning methods such as Support Vector Machine (SVM) and Decision Tree to predict obesity and to find its major causes, and additionally uses an Artificial Neural Network (ANN), which has stronger feature extraction ability, for prediction. The article finds that family history of obesity is the most decisive feature, which may be because obesity is strongly affected by genes or by the family's eating habits. Both the ANN and the Decision Tree achieve prediction accuracy higher than 90%.
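As an illustrative aside, the sketch below shows a PCA-plus-classifier pipeline of the kind the abstract describes, using scikit-learn with synthetic data standing in for the obesity dataset; it is not the authors' code, and the feature counts are assumptions.

```python
# Minimal sketch of a PCA + classifier pipeline of the kind described above.
# Synthetic data stands in for the obesity dataset (an assumption).
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=16, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=8).fit(StandardScaler().fit_transform(X_train))
print("explained variance ratio:", pca.explained_variance_ratio_)  # hints at the most informative directions

for name, clf in [("SVM", SVC()),
                  ("DecisionTree", DecisionTreeClassifier(random_state=0)),
                  ("ANN (MLP)", MLPClassifier(max_iter=1000, random_state=0))]:
    model = make_pipeline(StandardScaler(), PCA(n_components=8), clf)
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```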
Comparison of Different Machine Learning Algorithms Based on Intrusion Detection System. 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON). 1:667—672.
2022. An IDS is a system that helps detect any kind of doubtful activity on a computer network. It is capable of identifying suspicious activities at both levels, i.e., locally at the system level and in transit at the network level. Because such a system does not have its own dataset, it is inefficient at identifying unknown attacks. To overcome this inefficiency, we make use of ML, which assists in analysing and categorizing attacks on diverse datasets. In this study, the efficacy of eight machine learning algorithms is assessed on KDD CUP99. Based on our implementation and analysis, among the eight algorithms considered here, Support Vector Machine (SVM), Random Forest (RF) and Decision Tree (DT) have the highest testing accuracy, with SVM achieving the highest accuracy of the three.
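A minimal sketch of benchmarking several classifiers by cross-validated accuracy, assuming scikit-learn and synthetic data in place of KDD CUP99; the classifier list here is generic, not the paper's exact eight.

```python
# Minimal sketch of comparing classifiers by accuracy. Synthetic data stands in
# for KDD CUP99 (an assumption); real features would be preprocessed connection records.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=20, n_informative=10, random_state=1)

candidates = {
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=1),
    "DecisionTree": DecisionTreeClassifier(random_state=1),
    "NaiveBayes": GaussianNB(),
    "kNN": KNeighborsClassifier(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
}
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name:20s} mean accuracy = {scores.mean():.3f}")
```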
Classification of Mobile Phone Price Dataset Using Machine Learning Algorithms. 2022 3rd International Conference on Pattern Recognition and Machine Learning (PRML). :438—443.
2022. With the development of technology, mobile phones have become an indispensable part of human life. Factors such as brand, internal memory, wifi, battery power, camera and availability of 4G now shape consumers' decisions when buying mobile phones, but people often fail to link those factors to price. This paper addresses the problem by using machine learning algorithms, namely Support Vector Machine, Decision Tree, K Nearest Neighbors and Naive Bayes, to train on a mobile phone dataset before predicting the price level. We compare the algorithms in terms of accuracy, precision, recall and F1 score. This not only helps customers make a better choice of mobile phone but also advises businesses on how to set reasonable prices for the features they offer, supporting customers in choosing mobile phones wisely in the future. The results show that, among the four classifiers, SVM delivers the most desirable performance, with 94.8% accuracy and 97.3% F1 score without feature selection, and 95.5% accuracy and 97.7% F1 score with feature selection.
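A minimal sketch of the with/without feature-selection comparison using an SVM and standard metrics; the data are a synthetic stand-in for the mobile phone dataset, and keeping k=10 features is an assumption.

```python
# Minimal sketch of price-level classification with and without feature selection.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

plain = make_pipeline(StandardScaler(), SVC())
selected = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC())

for label, model in [("without feature selection", plain),
                     ("with feature selection", selected)]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(label, "accuracy:", accuracy_score(y_te, pred),
          "macro F1:", f1_score(y_te, pred, average="macro"))
```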
A Comparative Analysis of Open Source Automated Malware Tools. 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom). :226—230.
2022. Malware is designed to cause harm to a machine without the user's knowledge. Malware belonging to different families infects the system in its own unique way, causing damage that can be irreversible, and hence there is a need to detect and analyse it. Manual analysis of all types of malware is not practical due to the huge effort involved, so Automated Malware Analysis is used to reduce the burden on humans and make the process more robust. Many Automated Malware Analysis tools are available today, both offline and online, but the problem is which tool to select when analysing a suspicious binary. A comparative analysis of three of the most widely used automated tools has been carried out with samples from different malware classes. These tools are Cuckoo Sandbox, Any.Run and Intezer Analyze. To check the efficacy of each tool in both online and offline analysis, Cuckoo Sandbox was configured for offline use, while Any.Run and Intezer Analyze were configured for online analysis. Each tool analyses every malware sample, and after analysis is completed, a comparative chart is prepared to determine which tool is good at finding registry changes, processes created, files created, network connections, etc., made by the malicious binary. The findings conclude that Intezer Analyze recognizes file changes better than the others, but otherwise Cuckoo Sandbox and Any.Run are better at determining the other functionalities.
CFGExplainer: Explaining Graph Neural Network-Based Malware Classification from Control Flow Graphs. 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :172—184.
2022. With the ever increasing threat of malware, extensive research effort has been put into applying Deep Learning to malware classification tasks. Graph Neural Networks (GNNs) that process malware as Control Flow Graphs (CFGs) have shown great promise for malware classification. However, these models are viewed as black boxes, which makes it hard to validate them and identify malicious patterns. To that end, we propose CFGExplainer, a deep learning-based model for interpreting GNN-oriented malware classification results. CFGExplainer identifies a subgraph of the malware CFG that contributes most towards classification and provides insight into the importance of the nodes (i.e., basic blocks) within it. To the best of our knowledge, CFGExplainer is the first work that explains GNN-based malware classification. We compared CFGExplainer against three explainers, namely GNNExplainer, SubgraphX and PGExplainer, and showed that CFGExplainer is able to identify top equisized subgraphs with higher classification accuracy than the other three models.
Construction of Computer Big Data Security Technology Platform Based on Artificial Intelligence. 2022 Second International Conference on Advanced Technologies in Intelligent Control, Environment, Computing & Communication Engineering (ICATIECE). :1–4.
2022. Artificial intelligence (AI) has developed rapidly in recent years. It enables intelligent systems that can perform tasks without human intervention and can be used for various purposes, such as speech recognition and face recognition, for good or bad ends depending on how it is implemented. We discuss the application of AI in data security technology and its advantages over traditional security methods, focusing on the beneficial use of AI by analyzing its impact on the development of big data security technology. AI can enhance security technology through machine learning algorithms, which can analyze large amounts of data and identify patterns that humans cannot detect on their own. The computer big data security technology platform based on artificial intelligence in this paper is concerned with creating a system that can identify and prevent malicious programs. The system must be able to detect all types of threats, including viruses, worms, Trojans and spyware, and it should also monitor network activity and respond quickly in the event of an attack.
Current Trends in Internet of Things Forensics. 2022 International Arab Conference on Information Technology (ACIT). :1—5.
2022. Digital forensics is essential when performing in-depth crime investigations and evidence extraction, especially in the field of the Internet of Things, where an enormous amount of information is produced every second by the latest and smartest technological devices. However, the enormous growth of data and its complexity can constrain the examination process, since traditional data acquisition techniques are no longer applicable. If the knowledge gap between digital forensics and the Internet of Things is not bridged, investigators risk losing a potentially rich source of evidence that could otherwise act as a lead in solving open cases. This work introduces examples of employing the latest Internet of Things forensics approaches to address this problem. The paper covers a variety of articles presenting new Blockchain-, fog-, and video-based applications that can ease the process of digital forensics investigation with a focus on the Internet of Things. The results of the review indicate that these current trends are very promising in the field of Internet of Things digital forensics and need to be explored and applied more actively.
A Crawler-based Digital Forensics Method Oriented to Illegal Website. 2022 IEEE 5th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). 5:1883—1887.
2022. There are a large number of illegal websites on the Internet, such as pornographic websites, gambling websites, online fraud websites, online pyramid selling websites, etc. This paper studies the use of crawler technology for digital forensics on illegal websites. First, a crawler-based illegal website forensics program is designed and developed, which can detect the peripheral information of an illegal website, such as domain name, IP address and network topology, and crawl key information such as website text, pictures, and scripts. Then, comprehensive analysis of the obtained data, such as word cloud analysis, word frequency analysis and statistics, helps judge whether a website is illegal.
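A minimal sketch of the crawl-then-analyze idea: fetch a page, resolve peripheral information such as the IP address, and compute word-frequency statistics. The URL and keyword list are placeholders; this is not the paper's forensics program.

```python
# Minimal sketch of crawling a page and computing word-frequency statistics.
import re
import socket
from collections import Counter
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def profile_site(url, suspicious_keywords):
    host = urlparse(url).hostname
    ip = socket.gethostbyname(host)                      # peripheral info: domain -> IP
    resp = requests.get(url, timeout=10)
    text = BeautifulSoup(resp.text, "html.parser").get_text(" ")
    words = re.findall(r"\w+", text.lower())
    freq = Counter(words)                                # word-frequency statistics
    hits = {kw: freq[kw] for kw in suspicious_keywords if freq[kw] > 0}
    return {"domain": host, "ip": ip, "top_words": freq.most_common(20), "keyword_hits": hits}

# Example (placeholder URL and keywords):
# print(profile_site("https://example.com", ["casino", "bet", "loan"]))
```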
A Coordination Artifact for Multi-disciplinary Reuse in Production Systems Engineering. 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA). :1—8.
2022. In Production System Engineering (PSE), domain experts from different disciplines reuse assets such as products, production processes, and resources. Therefore, PSE organizations aim at establishing reuse across engineering disciplines. However, the coordination of multi-disciplinary reuse tasks, e.g., the re-validation of related assets after changes, is hampered by the coarse-grained representation of tasks and by scattered, heterogeneous domain knowledge. This paper introduces the Multi-disciplinary Reuse Coordination (MRC) artifact to improve task management for multi-disciplinary reuse. For assets and their properties, the MRC artifact describes sub-tasks with progress and result states to provide references for detailed reuse task management across engineering disciplines. In a feasibility study on a typical robot cell in automotive manufacturing, we investigate the effectiveness of task management with the MRC artifact compared to traditional approaches. Results indicate that the MRC artifact is feasible and provides effective capabilities for coordinating multi-disciplinary re-validation after changes.
Cloud Security Analysis Based on Virtualization Technology. 2022 International Conference on Big Data, Information and Computer Network (BDICN). :519—522.
2022. With the development of cloud computing, more and more people use cloud computing for all kinds of tasks. For cloud computing, however, the most important thing is to ensure the stability of user data while improving security. Cloud computing makes extensive use of technical means such as computing virtualization, storage system virtualization and network system virtualization; it abstracts the underlying physical facilities into unified external interfaces, maps several virtual networks with different topologies onto the underlying infrastructure, and provides differentiated services to external users. The analysis in this paper indicates that virtualization technology will be the main way to address cloud computing security. Virtualization introduces a virtual layer between software and hardware, provides an independent running environment for applications, shields the dynamics, distribution and differences of hardware platforms, supports the sharing and reuse of hardware resources, provides each user with an independent and isolated computing environment, and facilitates efficient and dynamic management and maintenance of the software and hardware resources of the whole system. Applying virtualization technology to cloud security reduces the hardware and management costs of "cloud security" enterprises to a certain extent and improves the security of "cloud security" technology to a certain extent. This paper outlines basic cloud computing security methods and focuses on the analysis of virtualization-based cloud security technology.
Colored Petri Net Reusing for Service Function Chaining Validation. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC). :1531—1535.
2022. With the development of software-defined networking and network function virtualization, network operators can flexibly deploy service function chains (SFCs) to provide network security services according to the security requirements of business systems. At present, most research on verifying the correctness of an SFC checks, before deployment, whether the logical sequence of service functions (SFs) in the SFC is correct, and there is less research on verifying correctness after deployment. Therefore, this paper proposes a method that uses Colored Petri Nets (CPN) to establish a verification model offline and then verify, after online deployment, whether each SF in the SFC is deployed correctly. After the SFC deployment is completed, the relevant information is obtained online and fed into the established model for verification. The experimental results show that the proposed method can effectively verify whether each SF in the deployed SFC is deployed correctly. In this process, the correctness of the SF models is verified by reusing SF models from a model library, and this model reuse technique is discussed preliminarily.
Compliance Checking Based Detection of Insider Threat in Industrial Control System of Power Utilities. 2022 7th Asia Conference on Power and Electrical Engineering (ACPEE). :1142—1147.
2022. Compared to outside threats, insider threats that originate within targeted systems are more destructive and less visible. More importantly, it is more difficult to detect and mitigate these insider threats, which poses significant cyber security challenges to an industrial control system (ICS) tightly coupled with today's information technology infrastructure. Currently, power utilities rely mainly on authentication mechanisms to prevent insider threats; if an internal intruder breaks through this protection barrier, it is hard to identify and intervene in time to prevent harmful damage. Building on the existing in-depth security defense system, this paper proposes an insider threat protection scheme for the ICSs of power utilities. The scheme conducts compliance checks by taking advantage of the compliance characteristics of business processes and the nesting of upstream and downstream business processes. Taking the Advanced Metering Infrastructures (AMIs) of power utilities as an example, the potential insider threats of violation and misoperation under the current management mechanism are identified after analysing the remote charge control operation. According to the business process, a compliance check scheme for remote charge control commands is presented. Finally, the analysis of a specific example demonstrates that the proposed scheme can effectively prevent consumer power outages caused by insider threats.
Contribution of Blockchain in Development of Metaverse. 2022 7th International Conference on Communication and Electronics Systems (ICCES). :845–850.
2022. The metaverse has been becoming the new standard for social networks and 3D virtual worlds since Facebook officially rebranded itself as Meta in October 2021. Many relevant technologies are used in the metaverse to offer 3D immersive and customized experiences at the user's fingertips. Although the metaverse receives a lot of attention and offers many advantages, one of the most pressing concerns for its users is the safety of their digital material and data. As a result of its decentralization, immutability, and transparency, blockchain is a possible solution. Our goal is to conduct a comprehensive assessment of blockchain systems in the metaverse to properly appreciate their function there. The paper first introduces blockchain and the metaverse and explains why the metaverse needs to adopt blockchain technology. Beyond these technological considerations, the article focuses on how blockchain-based approaches for the metaverse may be used from a privacy and security standpoint. Several technological challenges still need to be addressed to make the metaverse a reality. The influence of blockchain on key technologies within the metaverse, such as Artificial Intelligence, big data and the Internet-of-Things (IoT), is also examined. Several prominent initiatives are presented to demonstrate the importance of blockchain technology in the development of metaverse apps and services. There are many possibilities for future development and research in the application of blockchain technology in the metaverse.
Cybersecurity Education in the Age of Artificial Intelligence: A Novel Proactive and Collaborative Learning Paradigm. 2022 IEEE Frontiers in Education Conference (FIE). :1–5.
2022. This Innovative Practice Work-in-Progress paper presents a virtual, proactive, and collaborative learning paradigm that can engage learners with different backgrounds and enable effective retention and transfer of multidisciplinary AI-cybersecurity knowledge. While progress has been made in understanding the trustworthiness and security of artificial intelligence (AI) techniques, little has been done to translate this knowledge into education and training. There is a critical need to foster a qualified cybersecurity workforce that understands the usefulness, limitations, and best practices of AI technologies in the cybersecurity domain. To address this important issue, our proposed learning paradigm leverages multidisciplinary expertise in cybersecurity, AI, and statistics to systematically investigate two cohesive research and education goals. First, we develop an immersive learning environment that motivates students to explore AI/machine learning (ML) development in the context of real-world cybersecurity scenarios by constructing learning models with tangible objects. Second, we design a proactive education paradigm that uses hackathon activities based on game-based learning, lifelong learning, and social constructivism. The proposed paradigm will benefit a wide range of learners, especially underrepresented students, and will also help the general public understand the security implications of AI. In this paper, we describe the proposed learning paradigm and present the current progress of this ongoing research. At this stage, we focus on the first research and education goal and have been leveraging the cost-effective Minecraft platform to develop an immersive learning environment where learners can investigate emerging AI/ML concepts by constructing related learning modules from tangible AI/ML building blocks.
ISSN: 2377-634X
Cloud Storage I/O Load Prediction Based on XB-IOPS Feature Engineering. 2022 IEEE 8th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :54—60.
2022. With the popularization of cloud computing and the deepening of its application, more and more cloud block storage systems have been put into use. Performance optimization of cloud block storage systems has become an important challenge, manifested in the reduction of system performance caused by unbalanced resource load. Accurately predicting the I/O load status of a cloud block storage system can effectively avoid the load imbalance problem. However, cloud block storage systems are characterized by frequent random reads and writes and a large number of I/O requests, which makes prediction difficult. Therefore, we propose a novel I/O load prediction method based on XB-IOPS feature engineering. The features are designed according to the I/O request pattern, I/O size and I/O interference, and the method predicts both the actual load value at a given future moment and the average load value over a future time interval. Validated on a real dataset from the Alibaba Cloud block storage system, the results show that the XB-IOPS feature engineering prediction model performs better on Alibaba Cloud block storage devices, where random I/O and small I/O dominate, with better prediction accuracy and shorter prediction time than other prediction models.
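For illustration only, a generic sketch of windowed load prediction; the window statistics below are simple stand-ins, not the paper's XB-IOPS feature engineering, and the IOPS trace is synthetic.

```python
# Generic sketch of predicting the average I/O load over a future interval
# from sliding-window statistics of a (synthetic) IOPS trace.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
iops = 500 + 50 * np.sin(np.arange(2000) / 30.0) + rng.normal(0, 20, 2000)  # synthetic IOPS trace

window, horizon = 32, 8
X, y = [], []
for t in range(window, len(iops) - horizon):
    hist = iops[t - window:t]
    X.append([hist.mean(), hist.std(), hist.min(), hist.max(), hist[-1]])  # placeholder features
    y.append(iops[t:t + horizon].mean())          # average load over the next interval
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("mean absolute error:", np.abs(pred - y[split:]).mean())
```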
Comparative Analysis of Password Storage Security using Double Secure Hash Algorithm. 2022 IEEE North Karnataka Subsection Flagship International Conference (NKCon). :1—5.
2022. Passwords are generally used to keep unauthorized users out of a system. Password hacking has become more common as the number of internet users has grown, causing a slew of issues. These problems include adversaries stealing the confidential information of a company or a country, which harms the economy or the security of the organization, and hackers often use password hacking for criminal activities, so it is indispensable to protect passwords from them. There are many hacking methods, such as credential stuffing, social engineering, traffic interception, and password spraying. To counter them, hashing algorithms are commonly used to hash passwords and make password cracking more difficult. In this work, hashing algorithms such as SHA-1, MD-5, Salted MD-5, SHA-256, and SHA-512 have been used, and a MySQL database stores the hash values of passwords generated by the various hash functions. It is shown that SHA is better than MD-5 and Salted MD-5, while within the SHA family, SHA-512 and SHA-256 have their own benefits. Four new hashing functions are proposed by combining the existing SHA-256 and SHA-512 algorithms, namely SHA-256_with_SHA-256, SHA-256_with_SHA-512, SHA-512_with_SHA-512, and SHA-512_with_SHA-256. They provide strong hash values for passwords, which increases password security and helps control hacking to an extent.
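A minimal sketch of chaining two SHA digests in the way the proposed names suggest (here SHA-256 followed by SHA-512); the paper's exact construction, salting and encoding may differ.

```python
# Minimal sketch of a chained ("double secure") hash: SHA-256 inner digest,
# SHA-512 outer digest. The salting scheme is an assumption, not the paper's.
import hashlib
import os

def sha256_with_sha512(password: str, salt: bytes) -> str:
    inner = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return hashlib.sha512(salt + inner).hexdigest()      # outer digest stored in the database

salt = os.urandom(16)
stored = sha256_with_sha512("correct horse battery staple", salt)
print(len(stored), "hex chars:", stored[:32], "...")

# Verification recomputes the chained digest with the stored salt.
assert sha256_with_sha512("correct horse battery staple", salt) == stored
```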
Cost-Efficient Network Protection Games Against Uncertain Types of Cyber-Attackers. 2022 IEEE International Symposium on Technologies for Homeland Security (HST). :1–7.
2022. This paper considers network protection games for a heterogeneous network system with N nodes against cyber-attackers with two different types of intentions. The first type tries to maximize damage based on the value of each networked node, while the second type only aims at successful infiltration. A defender, by applying defensive resources to networked nodes, can decrease those nodes' vulnerabilities, but needs to balance the cost of using defensive resources against the potential security benefits. Existing literature shows that, in a Nash equilibrium, the defender should adopt different resource allocation strategies against the different types of attackers. However, it can be difficult for the defender to know the type of an incoming cyber-attacker, so a Bayesian game is investigated for the case in which the defender is uncertain about the attacker's type. We demonstrate that the Bayesian equilibrium defensive resource allocation strategy is a mixture of the Nash equilibrium strategies from the games against the two types of attackers considered separately.
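As a toy illustration of the "mixture" result, one can blend two per-node allocations by the prior over attacker types; the allocations and prior below are made up, and the paper derives the actual Bayesian equilibrium game-theoretically rather than by this simple weighting.

```python
# Toy illustration of blending two hypothetical per-node defense allocations
# by the defender's belief about the attacker type.
import numpy as np

node_values = np.array([10.0, 4.0, 7.0, 1.0, 3.0])

# Hypothetical equilibrium allocations against each attacker type
# (type 1: damage-maximizing, type 2: infiltration-only), normalized budgets.
alloc_type1 = node_values / node_values.sum()                     # protect valuable nodes more
alloc_type2 = np.full_like(node_values, 1.0 / len(node_values))   # protect uniformly

p_type1 = 0.7                                                     # belief that attacker is type 1
bayes_alloc = p_type1 * alloc_type1 + (1 - p_type1) * alloc_type2
print("blended allocation per node:", np.round(bayes_alloc, 3))
```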
CySec Game: A Framework and Tool for Cyber Risk Assessment and Security Investment Optimization in Critical Infrastructures. 2022 Resilience Week (RWS). :1–6.
2022. Cyber-physical system (CPS) critical infrastructures (CIs), such as power and energy systems, are increasingly vulnerable to cyber attacks. Mitigating cyber risks in CIs is one of the key objectives in the design and maintenance of these systems. These CPS CIs commonly use legacy devices for remote monitoring and control, for which complete upgrades are uneconomical and infeasible. Therefore, risk assessment plays an important role in systematically enumerating and selectively securing vulnerable or high-risk assets through optimal investments in the cybersecurity of CPS CIs. In this paper, we propose a CPS CI security framework and software tool, CySec Game, to be used by the CI industry and academic researchers to assess cyber risks and to optimally allocate cybersecurity investments that mitigate them. The framework uses attack tree, attack-defense tree, and game theory algorithms to identify high-risk targets and suggest optimal investments to mitigate the identified risks. We evaluate the efficacy of the framework and the tool through a smart grid case study, which shows accurate analysis and feasible implementation in this CPS CI environment.
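A minimal sketch of evaluating success probability on a toy attack tree (AND gates multiply child probabilities, OR gates assume independent attempts); the tree and numbers are illustrative only, and the CySec Game framework adds attack-defense trees and investment optimization on top of this kind of evaluation.

```python
# Toy attack-tree evaluation: AND nodes require all children to succeed,
# OR nodes succeed if any child does (independence assumed).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    gate: str = "LEAF"            # "AND", "OR", or "LEAF"
    prob: float = 0.0             # success probability for leaves
    children: List["Node"] = field(default_factory=list)

def success_probability(node: Node) -> float:
    if node.gate == "LEAF":
        return node.prob
    child_probs = [success_probability(c) for c in node.children]
    if node.gate == "AND":
        result = 1.0
        for p in child_probs:
            result *= p
        return result
    # OR gate: complement of "all children fail"
    fail = 1.0
    for p in child_probs:
        fail *= (1.0 - p)
    return 1.0 - fail

tree = Node("compromise substation", "OR", children=[
    Node("remote path", "AND", children=[
        Node("phish operator", prob=0.3),
        Node("pivot to SCADA", prob=0.4),
    ]),
    Node("physical access", prob=0.05),
])
print("attack success probability:", round(success_probability(tree), 4))
```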
Cross-Layer Design for UAV-Based Streaming Media Transmission. IEEE Transactions on Circuits and Systems for Video Technology. 32:4710–4723.
2022. Unmanned Aerial Vehicle (UAV)-based streaming media transmission may become unstable when the bit rate generated by the source load exceeds the channel capacity owing to changes in the UAV's location and speed: a change of location can affect the network connection, reducing the transmission rate, while a change of flying speed can increase the video payload due to more I-frames. To improve transmission reliability, in this paper we design a Client-Server-Ground&User (C-S-G&U) framework and propose a splitting-merging stream (SMS) algorithm for multi-link concurrent transmission. We also establish multiple transport links and configure the routing rules for the cross-layer design. Multi-link transmission can achieve higher throughput and significantly smaller end-to-end delay than a single link, especially under heavy load. The audio and video data are packaged into the payload by the Real-time Transport Protocol (RTP) before being transmitted over the User Datagram Protocol (UDP). A forward error correction (FEC) algorithm is implemented to improve the reliability of the UDP transmission, and an encryption algorithm to enhance security. In addition, we propose a Quality of Service (QoS) strategy so that the server and the user can control the UAV to adapt its transmission mode dynamically according to the load, delay, and packet loss. Our design has been implemented on an engineering platform, whose efficacy has been verified through comprehensive experiments.
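A toy sketch of the FEC-over-UDP idea: send media packets over UDP and add one XOR parity packet per group so a single loss in the group can be recovered at the receiver; real RTP framing and the paper's FEC and encryption scheme are more involved, and the address and header layout here are placeholders.

```python
# Toy UDP sender with one XOR parity packet per group of data packets.
import socket
import struct

DEST = ("127.0.0.1", 5004)   # placeholder receiver address
GROUP = 4                    # one parity packet per 4 data packets

def xor_parity(packets):
    size = max(len(p) for p in packets)
    parity = bytearray(size)
    for p in packets:
        padded = p.ljust(size, b"\x00")
        for i in range(size):
            parity[i] ^= padded[i]
    return bytes(parity)

def send_stream(chunks):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    for start in range(0, len(chunks), GROUP):
        group = chunks[start:start + GROUP]
        for payload in group:
            header = struct.pack("!HB", seq, 0)      # sequence number, type 0 = data
            sock.sendto(header + payload, DEST)
            seq += 1
        header = struct.pack("!HB", seq, 1)          # type 1 = XOR parity for the group
        sock.sendto(header + xor_parity(group), DEST)
        seq += 1
    sock.close()

send_stream([b"frame-%d" % i for i in range(8)])
```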
Clustering-based routing protocol using FCM-RSOA and DNA cryptography algorithm for smart building. 2022 IEEE 2nd Mysore Sub Section International Conference (MysuruCon). :1—8.
2022. WSN nodes are arranged uniformly or randomly over the area of interest to gather the required data, and the admin uses wireless broadband networks to connect to the Internet and acquire the data from the base station (BS). Although these sensor nodes play a significant role in a variety of professional and industrial domains, concerns such as memory, transmission, battery power and processing power limit the growth of WSNs. The most significant issue under these restrictions is to increase the energy efficiency of the WSN while keeping data transfer rapid and trustworthy. In the designed model, the sensor nodes are clustered using the Fuzzy C-Means (FCM) clustering algorithm with Reptile Search Optimization (RSO) for finding the cluster centres, and the cluster head is determined using African Vulture Optimization (AVO). For selecting the data transmission path from the cluster head to the base station, adaptive relay nodes are selected using a fuzzy rule. The data from the base station are then delivered to the server with a DNA cryptography encryption algorithm for secure transmission. The performance of the designed model is evaluated with respect to average residual energy, throughput, end-to-end delay, information loss and execution time for a secure and energy-efficient routing protocol; the values obtained for the proposed model are 0.91%, 1.17 Mbps, 1.76 ms, 0.14% and 0.225 s respectively. These results show that the designed clustering-based routing protocol using FCM-RSOA and DNA cryptography for smart buildings performs better than existing techniques.
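A minimal sketch of plain Fuzzy C-Means clustering of 2-D node positions; the RSO tuning of the cluster centres, the AVO cluster-head selection and the DNA cryptography from the paper are not reproduced here.

```python
# Minimal Fuzzy C-Means clustering of synthetic sensor-node positions.
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                     # random initial memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)          # standard FCM membership update
    return centers, u

nodes = np.random.default_rng(1).random((60, 2)) * 100.0  # node positions in a 100m x 100m field
centers, memberships = fuzzy_c_means(nodes, n_clusters=4)
print("cluster centres:\n", np.round(centers, 1))
print("node 0 memberships:", np.round(memberships[0], 2))
```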
A Classification Method of Power Unstructured Encrypted Data Based on Fuzzy Data Matching. 2022 3rd International Conference on Intelligent Design (ICID). :294—298.
2022. With the digital transformation of the power grid, the classification of power unstructured encrypted data is an important basis for data security protection. However, most studies focus on exact-match classification or single-keyword fuzzy-match classification. This paper proposes a multi-keyword fuzzy matching classification method for power unstructured encrypted data. The data owner generates an index vector from a power unstructured file, and the data user generates a query vector from the query through the same process. The index and query vectors are uploaded to the cloud server in encrypted form; the cloud server calculates relevance scores, sorts them, and returns the classification result with the highest score to the user. The method realizes multi-keyword fuzzy matching classification of power unstructured encrypted data, and experimental simulation on a large number of data sets demonstrates its effectiveness and feasibility.
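A plaintext sketch of the index-vector/query-vector relevance idea: keywords are hashed into character-bigram vectors so that approximate spellings still overlap, and the server-side step reduces to an inner-product score; the encryption of the vectors, which is the core of the paper, is omitted here.

```python
# Plaintext sketch of fuzzy multi-keyword relevance scoring via hashed bigram vectors.
import numpy as np
from hashlib import md5

DIM = 512

def keyword_vector(words):
    vec = np.zeros(DIM)
    for w in words:
        padded = f"#{w.lower()}#"
        for i in range(len(padded) - 1):
            bigram = padded[i:i + 2]
            slot = int(md5(bigram.encode()).hexdigest(), 16) % DIM   # hashed bigram slot
            vec[slot] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

index_vectors = {
    "maintenance_report": keyword_vector(["transformer", "overload", "inspection"]),
    "billing_record": keyword_vector(["invoice", "tariff", "payment"]),
}
query = keyword_vector(["transfomer", "overlod"])          # misspelled query keywords still match
scores = {name: float(vec @ query) for name, vec in index_vectors.items()}
print(max(scores, key=scores.get), scores)
```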
A Co-regularization Facial Emotion Recognition Based on Multi-Task Facial Action Unit Recognition. 2022 41st Chinese Control Conference (CCC). :6806—6810.
2022. Facial emotion recognition feeds the growth of future artificial intelligence through the development of emotion recognition, learning, and analysis of different angles of the human face and head pose. The recent pandemic gave rise to the rapid deployment of facial recognition for certain applications, while emotion recognition is still within experimental boundaries, and a current challenge for facial emotion recognition (FER) is coping with differences in background noise. As robots increasingly take on roles involving human perception, attention, memory, decision-making, and human-robot interaction (HRI), we merge head pose with FER to boost the robustness of emotion understanding using convolutional neural networks (CNN). Stochastic gradient descent with a comprehensive model is adopted by applying multi-task learning, which is capable of implicit parallelism and acts as an inherently better global optimizer in finding better network weights. After training a multi-task learning model on two independent datasets, the FER and head-pose learning multi-view co-regularization frameworks are merged and evaluated in terms of validation accuracy.
CNN based Recognition of Emotion and Speech from Gestures and Facial Expressions. 2022 6th International Conference on Electronics, Communication and Aerospace Technology. :1360—1365.
2022. The major mode of communication between hearing-impaired or mute people and others is sign language. Previously, most sign language recognition systems were designed simply to recognize hand signs and convey them as text. The proposed model, however, tries to provide speech to the mute. First, hand gestures for sign language recognition and facial emotions are trained using a Convolutional Neural Network (CNN), and an emotion-to-speech model is then trained. Finally, hand gestures and facial emotions are combined to realize both the emotion and the speech.
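A minimal sketch of a small CNN classifier of the kind used for hand-gesture or facial-emotion images, written with tf.keras; the 48x48 grayscale input and seven output classes are assumptions, not the paper's configuration.

```python
# Minimal CNN classifier for small grayscale gesture/emotion images.
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn(num_classes=7, input_shape=(48, 48, 1)):
    return tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=20, validation_split=0.1)  # with a real dataset
```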
Constant False Alarm Rate Frame Detection Strategy for Terrestrial ASM/VDE Signals Received by Satellite. 2022 IEEE 5th International Conference on Electronics and Communication Engineering (ICECE). :29—33.
2022. Frame detection is an important part of a reconnaissance satellite receiver for identifying terrestrial application specific messages (ASM) / VHF data exchange (VDE) signals, and it is challenged by Doppler shift and message collision. A constant false alarm rate (CFAR) frame detection strategy insensitive to Doppler shift is proposed in this paper. Based on the double Barker sequence, a periodic sequence is constructed, and differential operations are adopted to eliminate the Doppler shift. Moreover, amplitude normalization helps suppress the interference introduced by message collision. Simulations prove that the proposed CFAR frame detection strategy is very attractive for a reconnaissance satellite identifying terrestrial ASM/VDE signals.
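A toy sketch of Doppler-tolerant preamble detection: differential correlation against a Barker-based reference followed by a cell-averaging CFAR threshold; the preamble, parameters and thresholds are assumptions, not the ASM/VDE specification or the paper's exact scheme.

```python
# Toy differential-correlation frame detector with a cell-averaging CFAR test.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
preamble = np.tile(barker13, 2)                       # "double Barker" style preamble

rng = np.random.default_rng(0)
n_total, start, doppler = 400, 150, 0.01              # normalized Doppler shift
signal = (rng.normal(size=n_total) + 1j * rng.normal(size=n_total)) * 0.5
n = np.arange(len(preamble))
signal[start:start + len(preamble)] += preamble * np.exp(2j * np.pi * doppler * n)

# The differential operation removes the unknown frequency offset (up to a constant phase).
diff_sig = signal[1:] * np.conj(signal[:-1])
diff_ref = preamble[1:] * preamble[:-1]

corr = np.abs(np.correlate(diff_sig, diff_ref, mode="valid"))

# Cell-averaging CFAR: compare each cell to the mean of surrounding training cells.
guard, train, alpha = 4, 24, 6.0
detections = []
for i in range(train + guard, len(corr) - train - guard):
    noise = np.concatenate([corr[i - guard - train:i - guard],
                            corr[i + guard + 1:i + guard + train + 1]])
    if corr[i] > alpha * noise.mean():
        detections.append(i)
print("detected frame starts near:", detections)
```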
A Context-Based Decision-Making Trust Scheme for Malicious Detection in Connected and Autonomous Vehicles. 2022 International Conference on Computing, Electronics & Communications Engineering (iCCECE). :31—36.
2022. Fast-evolving Intelligent Transportation Systems (ITS) are crucial in the 21st century, promising answers to the congestion and accidents that bother people worldwide. ITS applications such as Connected and Autonomous Vehicles (CAVs) update and broadcast road incident event messages, which requires significant data to be transmitted between vehicles for decisions to be made in real time. However, broadcasting trusted incident messages such as accident alerts between vehicles poses a challenge for CAVs. Most existing trust solutions are based on reputation from a vehicle's direct interactions and on psychological approaches to evaluating the trustworthiness of received messages. This paper provides a scheme for improving trust in received incident alert messages for real-time decision-making, detecting malicious alerts between CAVs using both direct and indirect interactions. It applies artificial intelligence and statistical data classification to decide on the received messages. The model is trained on the US Department of Transportation's Safety Pilot Model Deployment (SPMD) data. An Autonomous Decision-making Trust Scheme (ADmTS) that incorporates a machine learning algorithm and a local trust manager for decision-making has been developed. The experiment showed that the trained model can make correct predictions, achieving 98% accuracy with a 0.55% standard deviation when predicting false alerts on data containing 25% malicious messages.