Biblio
Proving secure compilation of partial programs typically requires back-translating an attack against the compiled program into an attack against the source program. To prove back-translation, one can either syntactically translate the target attacker into a source one (syntax-directed back-translation) or show that the interaction traces of the target attacker can also be emitted by source attackers (trace-directed back-translation). Syntax-directed back-translation is not suitable when the target attacker may use unstructured control flow that the source language cannot directly represent. Trace-directed back-translation copes with such syntactic dissimilarity because only the external interactions of the target attacker have to be mimicked in the source, not its internal control flow. Revealing only external interactions is, however, inconvenient when memory is shared via unforgeable pointers, since information about shared pointers stashed in private memory does not appear on the trace. This made prior proofs unnecessarily complex, since the generated attacker had to stash all reachable pointers instead. In this work, we introduce more informative data-flow traces, combining the best of syntax- and trace-directed back-translation in a simpler technique that handles both syntactic dissimilarity and memory sharing well, and that is proved correct in Coq. Additionally, we develop a novel turn-taking simulation relation and use it to prove a recomposition lemma, which is key to reusing compiler correctness in such secure compilation proofs. We are the first to mechanize such a recomposition lemma in the presence of memory sharing. We apply these two innovations in a secure compilation proof for a code-generation compiler pass between a source language with structured control flow and a target language with unstructured control flow, both with safe pointers and components.
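To make the contrast concrete, here is a toy Python sketch of trace-directed back-translation, entirely our own illustration (the paper's actual development is in Coq, and the names Call, Return, back_translate, and Program are hypothetical): the generated source attacker ignores the target attacker's internal control flow and simply replays its externally observable events.

```python
# Toy illustration of trace-directed back-translation.
# All names are hypothetical; this is not the paper's Coq development.
from dataclasses import dataclass

@dataclass(frozen=True)
class Call:          # external event: attacker calls into the program
    proc: str
    arg: int

@dataclass(frozen=True)
class Return:        # external event: attacker returns a value
    value: int

# A trace of external interactions observed from the compiled (target)
# attacker. Its internal control flow (e.g., unstructured jumps) never
# shows up here, which is why syntactic dissimilarity does not matter.
target_trace = [Call("compartment.f", 1), Return(42), Call("compartment.g", 7)]

def back_translate(trace):
    """Generate a source-level attacker that emits the same trace.

    The generated attacker does not reconstruct how the target attacker
    computed its events; it replays them in order, which is all that
    trace-directed back-translation requires."""
    def source_attacker(program):
        for event in trace:
            if isinstance(event, Call):
                program.call(event.proc, event.arg)
            else:
                program.ret(event.value)
    return source_attacker

class Program:  # stub standing in for the protected program's interface
    def call(self, proc, arg): print(f"call {proc}({arg})")
    def ret(self, value): print(f"return {value}")

attacker = back_translate(target_trace)
attacker(Program())   # replays: call f(1), return 42, call g(7)
```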
Deep learning approaches to public safety and tracking have attracted considerable interest in recent years. Existing public safety methods focus on counting and detecting persons, but anomalous events occurring in public spaces are seldom detected and reported so as to raise an automated alarm. Our proposed method detects anomalies (deviations from normal events) in video surveillance footage using deep learning and raises an alarm if an anomaly is found. The proposed model is trained to detect anomalies and is then applied to recordings from the surveillance system used to monitor public safety. The video is assessed frame by frame, and if a frame matches an anomaly, an alarm is raised.
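A minimal sketch of the frame-by-frame detection loop described above, assuming a pretrained Keras classifier saved as anomaly_model.h5 with a single sigmoid output; the model file, input size, and alarm threshold are illustrative assumptions, not details from the paper.

```python
# Frame-by-frame anomaly detection sketch; model path, input size,
# and alarm threshold are illustrative assumptions.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("anomaly_model.h5")   # hypothetical pretrained detector
THRESHOLD = 0.5                          # assumed anomaly-score cutoff

cap = cv2.VideoCapture("surveillance.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalize to the model's assumed 224x224 RGB input.
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    score = float(model.predict(x[np.newaxis, ...], verbose=0)[0][0])
    if score > THRESHOLD:                # deviation from normal events
        print(f"ALARM: anomaly at frame {frame_idx} (score={score:.2f})")
    frame_idx += 1
cap.release()
```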
Fake news is a recent phenomenon that spreads misleading information and fraud via internet social media and traditional news sources. Fake news is readily manufactured and transmitted across numerous social media platforms and has a significant influence on the real world, so it is vital to create effective algorithms and tools for detecting misleading information on these platforms. Most modern approaches to identifying fraudulent information rely on machine learning, deep learning, feature engineering, graph mining, image and video analysis, and newly built datasets and online services, yet there is a pressing need for a viable approach that detects misleading information easily. In this work, deep learning LSTM and Bi-LSTM models are proposed for detecting fake news. First, the NLTK toolkit is used to remove stop words, punctuation, and special characters from the text; the same toolkit then tokenizes and preprocesses it. Next, GloVe word embeddings add higher-level features of the input text to the preprocessed text, while the RNN-LSTM and Bi-LSTM models capture long-term relationships between word sequences. The proposed model additionally employs dropout together with Dense layers to improve efficiency. The proposed RNN Bi-LSTM-based technique obtains its best accuracies of 94% and 93% using the Adam optimizer and the binary cross-entropy loss function with dropout rates of 0.1 and 0.2, respectively; increasing the dropout rate reduces accuracy, which falls to 92% at a dropout of 0.3.
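A compact Keras sketch of the kind of Bi-LSTM architecture the abstract describes; the vocabulary size, sequence length, layer widths, and the (zero-initialized) GloVe embedding matrix are assumptions, and in practice the matrix would be filled from pretrained GloVe vectors.

```python
# Bi-LSTM fake-news classifier sketch; hyperparameters are illustrative.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dropout, Dense
from tensorflow.keras.initializers import Constant

VOCAB_SIZE, EMBED_DIM, MAX_LEN = 20000, 100, 300      # assumed values
# In practice, filled row-by-row from pretrained GloVe vectors.
embedding_matrix = np.zeros((VOCAB_SIZE, EMBED_DIM))

model = Sequential([
    Embedding(VOCAB_SIZE, EMBED_DIM,
              embeddings_initializer=Constant(embedding_matrix),
              trainable=False),
    Bidirectional(LSTM(64)),   # captures long-term word-sequence relations
    Dropout(0.2),              # the abstract reports best results at 0.1-0.2
    Dense(32, activation="relu"),
    Dropout(0.2),
    Dense(1, activation="sigmoid"),   # binary output: fake vs. real
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.build(input_shape=(None, MAX_LEN))
```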
With the rapid development of Internet technology in recent years, the demand for security support for complex applications has grown ever stronger. Intel Software Guard Extensions (Intel SGX) is an extension of the Intel architecture designed to enhance software security. Intel SGX allows application developers to create so-called enclaves, which encapsulate sensitive application code and data in a Trusted Execution Environment (TEE) that is completely isolated from other applications, the operating system, and administrative programs. The enclave is the core structure of Intel SGX technology and supports multithreading: a Thread Control Structure (TCS) stores the information needed to restore an enclave thread when entering or exiting the enclave, and each execution thread in the enclave is associated with a TCS. This paper analyzes and verifies the security risks an enclave may face under concurrency. It is found that, under multithreaded concurrency, a single enclave cannot resist flooding attacks, and the affected threads throw TCS-related exception codes.
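The TCS-exhaustion scenario can be pictured with a small, purely conceptual Python simulation; it uses no real SGX APIs, and the slot count, workload duration, and rejection counter are assumptions made only for illustration.

```python
# Conceptual simulation of TCS exhaustion under a flooding attack.
# This models the idea only; it does not use real SGX APIs.
import threading
import time

TCS_SLOTS = 4                       # enclaves are built with a fixed number of TCS
slots = threading.Semaphore(TCS_SLOTS)
results = {"entered": 0, "rejected": 0}
lock = threading.Lock()

def ecall():
    # A real enclave entry binds the thread to a free TCS; if none is
    # free, the entry fails (analogous to the TCS exception codes the
    # paper observes under flooding).
    if slots.acquire(blocking=False):
        try:
            with lock:
                results["entered"] += 1
            time.sleep(0.1)         # simulate work inside the enclave
        finally:
            slots.release()
        return
    with lock:
        results["rejected"] += 1

# Flood the "enclave" with far more concurrent entries than TCS slots.
threads = [threading.Thread(target=ecall) for _ in range(64)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # most entries are rejected once all TCS slots are occupied
```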