Biblio
The steady decline of IP transit prices over the past two decades has helped fuel the growth of traffic demands in the Internet ecosystem. Despite declining unit prices, bandwidth costs remain significant due to the ever-increasing scale and reach of the Internet, combined with the price disparity between the Internet's core hubs and remote regions. In the meantime, cloud providers have been auctioning underutilized computing resources in their marketplaces as spot instances at a much lower price than their on-demand instances. This state of affairs has led the networking community to devote extensive effort to cloud-assisted networks - the idea of offloading network functionality to cloud platforms, ultimately leading to more flexible and highly composable network service chains. We initiate a critical discussion on the economic and technological aspects of leveraging cloud-assisted networks for Internet-scale interconnections and data transfers. Namely, we investigate the prospect of constructing a large-scale virtualized network provider that does not own any fixed or dedicated resources and runs atop several spot instances. We construct a cloud-assisted overlay as a virtual network provider by leveraging third-party cloud spot instances. We identify three use-case scenarios in which such an approach is not only economically and technologically viable but also provides performance benefits compared to current commercial offerings of connectivity and transit providers.
This paper describes the application of neural networks to the problem of forecasting information security incidents. We describe the general problem of analyzing and predicting time series in graphical and mathematical terms. To solve this problem, we propose a neural network model. To forecast the time series of information security incidents, we generate and describe the data on which the neural network is trained. We propose a neural network structure, train the network, and estimate its adequacy and forecasting ability. We show that such a neural network model can be used effectively as part of an intelligent forecasting system.
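The abstract above gives no implementation details; as a hedged illustration of the general approach it describes (framing an incident-count series as a supervised learning problem with a sliding window and fitting a small feed-forward network), the following minimal Python sketch may help. The window length, network size, synthetic data, and scikit-learn estimator are assumptions made here, not details from the cited paper.

```python
# Minimal sketch: sliding-window time-series forecasting with a small MLP.
# All hyperparameters (window size, hidden layers) are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, window=7):
    """Turn a 1-D series into (X, y) pairs: predict the next value from the last `window` values."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

# Synthetic stand-in for a daily incident-count series.
rng = np.random.default_rng(0)
incidents = np.abs(10 + 3 * np.sin(np.arange(200) / 7.0) + rng.normal(0, 1, 200))

X, y = make_windows(incidents, window=7)
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("held-out R^2:", model.score(X[split:], y[split:]))
```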
In this paper we address the development of a neural network technology for e-mail message classification. We analyze basic spam filtering methods such as sender IP address analysis, detection of repeated spam messages, and word-based Bayesian filtering. We propose a neural network technology for this problem because neural networks are universal approximators and are effective at classification tasks. We also present a scheme of this technology for classifying e-mail messages as “spam” or “not spam”. The construction of an effective neural network spam filtering model follows the knowledge discovery in databases process: a training set is formed, the neural network model is trained, and its adequacy and classification ability are estimated. Experimental studies show that the developed artificial neural network model is adequate and can be used effectively for e-mail message classification. Thus, this paper demonstrates that an effective neural network model can be used for e-mail message filtering and presents a scheme for using the artificial neural network model as part of an intelligent e-mail spam filtering system.
Deep learning technologies, which are key components of state-of-the-art Artificial Intelligence (AI) services, have shown great success in providing human-level capabilities for a variety of tasks, such as visual analysis, speech recognition, and natural language processing. Building a production-level deep learning model is a non-trivial task that requires a large amount of training data, powerful computing resources, and human expertise. Illegitimate reproduction, distribution, and derivation of proprietary deep learning models can therefore lead to copyright infringement and economic harm to model creators. It is thus essential to devise techniques that protect the intellectual property of deep learning models and enable external verification of model ownership. In this paper, we generalize the "digital watermarking" concept from multimedia ownership verification to deep neural network (DNN) models. We investigate three DNN-applicable watermark generation algorithms, propose a watermark implanting approach that infuses watermarks into deep learning models, and design a remote verification mechanism to determine model ownership. By extending the intrinsic generalization and memorization capabilities of deep neural networks, we enable the models to learn specially crafted watermarks during training and to activate pre-specified predictions when observing the watermark patterns at inference time. We evaluate our approach on two image recognition benchmark datasets. Our framework accurately (100%) and quickly verifies the ownership of all remotely deployed deep learning models without affecting model accuracy on normal input data. In addition, the embedded watermarks in DNN models are robust and resilient to different counter-watermark mechanisms, such as fine-tuning, parameter pruning, and model inversion attacks.
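As a hedged illustration only (the cited paper's actual watermark generation and embedding algorithms are not reproduced here), the Python sketch below shows the general idea of trigger-set watermarking: stamping a fixed pattern onto a few inputs, assigning them a pre-specified label, mixing them into the training data, and later checking whether a deployed model answers with that label on stamped probes. The pattern positions, target label, dataset, and classifier are all assumptions.

```python
# Hedged sketch of trigger-set watermarking for a classifier (not the paper's exact method).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)            # 8x8 digit images, flattened to 64 features
X = X / 16.0

def stamp(images, value=1.0):
    """Overlay a fixed 'watermark' pattern: set a few fixed pixel positions to a constant."""
    marked = images.copy()
    marked[:, [0, 9, 18, 27]] = value           # arbitrary, fixed trigger positions (assumption)
    return marked

# Build a small trigger set with a pre-specified target label (here: class 7).
trigger_X = stamp(X[:20])
trigger_y = np.full(20, 7)

# Train on normal data plus the trigger set.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=600, random_state=0)
model.fit(np.vstack([X, trigger_X]), np.concatenate([y, trigger_y]))

# Ownership check: a watermarked model should answer the pre-specified label on stamped probes.
probes = stamp(X[100:120])
print("watermark match rate:", np.mean(model.predict(probes) == 7))
```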
Most traditional recommendation algorithms consider only the binary relationship between users and items, so they can essentially be reduced to score prediction problems. Most of these algorithms, however, ignore users' interests, latent factors, and other social factors of the recommended products. In this paper, based on an existing trustworthiness model and similarity measure, we put forward the concept of trust similarity and design a joint interest-content recommendation framework that suggests to users which videos to watch on an online video site. In this framework, we first analyze the user's viewing history and tags and establish the user's interest feature vector. Then, based on the updated vector, users are clustered with a sparse subspace clustering algorithm, which improves the efficiency of the algorithm. We also improve the similarity calculation to help users find better neighbors. Finally, we conduct experiments using real traces from Tencent Weibo and Youku to verify our method and evaluate its performance. The results demonstrate the effectiveness of our approach and show that it can substantially improve recommendation accuracy.
The recently developed deep belief network (DBN) has been shown to be an effective methodology for time series forecasting problems. However, the performance of a DBN depends heavily on a reasonable setting of its hyperparameters. At present, random search, grid search, and Bayesian optimization are the most common hyperparameter optimization methods. As an alternative, a state-of-the-art derivative-free optimizer, negative correlation search (NCS), is adopted in this paper to determine the layer sizes of the DBN and the learning rates during training. A comparative analysis is performed between the proposed method and other popular techniques in time series forecasting experiments on two types of time series datasets. The experimental results statistically affirm the efficiency of the proposed model in obtaining better prediction results than conventional neural network models.
Information and Communications Technology (ICT) has rationalized government services into a more efficient and transparent government. However, a large part of government services remains a manual process due to the high cost of ICT. The purpose of this paper is to explore the role of e-governance and ICT in the legislative management of municipalities in the Philippines. This study adopted the phases of the Princeton Project Management Methodology (PPMM) as the approach to the development of LeMTrac. The paper used a developmental-quantitative research design involving two (2) sets of respondents: end-users and IT experts. The majority of respondents perceived the system as "highly acceptable", with an average Likert score of 4.72 for the ISO 9126 software quality metric Usability. The findings also reveal that the integration of LeMTrac within the Sangguniang Bayan (SB) Office in the Municipal Local Government Units (LGUs) of Nabua and Bula, Camarines Sur provided better accessibility, security, and management of documents.
In our daily lives, advances in new technology can be used to sustain the development of people across the globe. In particular, e-government can be a dynamo of development for the people. The development of technology and the rapid growth of Internet use create a big challenge for administration in both the public and private sectors. E-government is a vital accomplishment, but security is the main downside that arises in every e-government process. E-government has to remain secure as technology grows, and users have to follow the proper procedures to keep their own transactions safe. This paper tackles the challenges and obstacles to enhancing the security of information in e-government. Data hiding techniques are found to be a trustworthy means of achieving this security. Reversible data hiding (RDH) is an emerging technique that retains the quality of the cover image, so it is preferred over traditional data hiding techniques. The existing image encryption and data hiding schemes are modified in order to improve the results. To achieve this, the secret data is split into 20 parts and data concealment is performed on each part. The data hiding procedure embeds the data into the least significant nibble of the cover image, and the bits are distributed equally across the cover image to obtain the key security parameters. The results obtained validate that the proposed scheme outperforms the existing schemes.
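The abstract only outlines the embedding step; as a hedged, generic illustration of hiding bits in the least significant nibble of cover-image pixels (not the cited paper's complete reversible scheme, and without its 20-part splitting, encryption, or key derivation), a minimal Python sketch might look like this.

```python
# Hedged sketch: embed/extract a bit string in the least significant nibble of image pixels.
# This is a generic LSB-nibble illustration, not the paper's full RDH scheme.
import numpy as np

def embed(cover, bits):
    """Write 4 bits of payload into the low nibble of each pixel (cover: uint8 array)."""
    flat = cover.flatten().astype(np.uint8)
    assert len(bits) <= 4 * len(flat), "payload too large for cover"
    stego = flat.copy()
    for i in range(0, len(bits), 4):
        chunk = bits[i:i + 4].ljust(4, "0")            # pad the last nibble with zeros
        stego[i // 4] = (stego[i // 4] & 0xF0) | int(chunk, 2)
    return stego.reshape(cover.shape)

def extract(stego, n_bits):
    """Read the payload back out of the low nibbles."""
    flat = stego.flatten()
    bits = "".join(format(int(p) & 0x0F, "04b") for p in flat[: (n_bits + 3) // 4])
    return bits[:n_bits]

cover = np.random.default_rng(1).integers(0, 256, size=(8, 8), dtype=np.uint8)
payload = "1011001110001111"
stego = embed(cover, payload)
assert extract(stego, len(payload)) == payload
```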
Error correction in quantum cryptography based on artificial neural networks is a new and promising solution. In this paper, the security of this method is verified and the results of many simulations with different parameters are presented. The test scenarios assumed partially synchronized neural networks, typical for the error rates found in quantum cryptography. The results were also compared with scenarios based on neural networks with randomly chosen weights to show the difficulty of passive attacks.
In recent years, the spread of malicious social media messages about financial stocks has threatened the security of financial markets. Market anomaly attacks are an illegal practice in stock or commodities markets that induces investors to make purchase or sale decisions based on false information. Identifying these threats in noisy social media datasets remains challenging because of the long time sequences in social media postings, the ambiguous textual context, and the difficulty for traditional deep learning approaches of handling data that is both temporal and text-dependent, such as financial social media messages. This research developed a temporal recurrent neural network (TRNN) approach to capturing both time and text sequence dependencies for intelligent detection of market anomalies. We tested the approach using financial social media messages about U.S. technology companies and their stock returns. Compared with traditional neural network approaches, TRNN was found to classify abnormal returns more efficiently and effectively.
Monitoring circuits are widely applied in radiation environments, and it is important to study circuit reliability under radiation effects. In this paper, an intelligent analysis method based on a Deep Belief Network (DBN) and Support Vector Machines is proposed, built on an analysis of radiation experiments on the monitoring circuit. The Total Ionizing Dose (TID) of the monitoring circuit is used to identify the circuit's degradation trend. First, the output waveforms of the monitoring circuit are obtained by irradiating it with different TIDs. Subsequently, the Deep Belief Network model is trained to extract features of the circuit signal. Finally, a Support Vector Machine (SVM) and Support Vector Regression (SVR) are applied to classify the circuit state and predict the remaining useful life (RUL) of the monitoring circuit. According to the experimental results, DBN-SVM outperforms the DBN method for feature extraction and classification, and SVR is effective for predicting the degradation.
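As a hedged illustration of the general pipeline described above (unsupervised feature extraction followed by an SVM classifier and an SVR regressor), the sketch below uses a single restricted Boltzmann machine from scikit-learn as a stand-in for a full DBN; the data, layer sizes, and estimators are assumptions, not the cited paper's setup.

```python
# Hedged sketch: RBM-based feature extraction feeding an SVM classifier and an SVR regressor.
# A single BernoulliRBM stands in for a full DBN; all data and hyperparameters are illustrative.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
waveforms = rng.normal(size=(300, 64))          # stand-in for sampled output waveforms
dose = rng.uniform(0, 100, size=300)            # stand-in for total ionizing dose (TID)
state = (dose > 50).astype(int)                 # stand-in degradation class label
rul = 100 - dose                                # stand-in remaining useful life target

# RBM-extracted features -> SVM for classification, SVR for RUL regression.
classifier = make_pipeline(MinMaxScaler(), BernoulliRBM(n_components=32, random_state=0), SVC())
regressor = make_pipeline(MinMaxScaler(), BernoulliRBM(n_components=32, random_state=0), SVR())

classifier.fit(waveforms, state)
regressor.fit(waveforms, rul)
print(classifier.predict(waveforms[:3]), regressor.predict(waveforms[:3]))
```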
With the increase in targeted attacks such as advanced persistent threats (APTs), enterprise defenders require comprehensive frameworks that allow them to collaborate and evaluate their defense systems against such attacks. MITRE has developed a framework that includes a database of the kill chains, tactics, techniques, and procedures that attackers employ to carry out these attacks. In this work, we leverage natural language processing techniques to extract attacker actions from threat report documents generated by different organizations and automatically classify them into standardized tactics and techniques, while providing relevant mitigation advisories for each attack. A naïve way to achieve this is to train a machine learning model that predicts labels associating the reports with the relevant categories. In practice, however, sufficient labeled data for model training is not always readily available, so training and test data often come from different sources, resulting in bias. A naïve model typically underperforms in such a situation. We address this major challenge by incorporating an importance weighting scheme, called bias correction, that efficiently utilizes the available labeled data given the threat reports whose categories are to be automatically predicted. We empirically evaluate our approach on 18,257 real-world threat reports generated between 2000 and 2018 by various computer security organizations, and demonstrate its superiority by comparing its performance with an existing approach.
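The abstract does not spell out the bias correction procedure. A common way to realize importance weighting under covariate shift is to estimate, for each labeled training document, the ratio of its density under the unlabeled target reports to its density under the training source (for example with a source-vs-target domain classifier) and use that ratio as a per-sample weight when fitting the final classifier. The sketch below is a hedged, generic illustration of that idea, not the cited paper's exact scheme; the toy documents, vectorizer, classifiers, and clipping values are assumptions.

```python
# Hedged sketch: importance weighting for covariate shift via a domain classifier.
# Generic illustration of bias correction, not the cited paper's exact procedure.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_docs = ["phishing email delivered a macro dropper", "credential dumping via lsass"]
labels = ["initial-access", "credential-access"]
target_docs = ["actors dumped credentials from lsass memory", "spearphishing attachment observed"]

vec = TfidfVectorizer().fit(labeled_docs + target_docs)
X_src, X_tgt = vec.transform(labeled_docs), vec.transform(target_docs)

# 1) Train a domain classifier: source (0) vs target (1).
X_dom = np.vstack([X_src.toarray(), X_tgt.toarray()])
y_dom = np.array([0] * X_src.shape[0] + [1] * X_tgt.shape[0])
dom = LogisticRegression(max_iter=1000).fit(X_dom, y_dom)

# 2) Importance weight per source document: p(target|x) / p(source|x), clipped for stability.
p_tgt = dom.predict_proba(X_src.toarray())[:, 1]
weights = np.clip(p_tgt / np.clip(1.0 - p_tgt, 1e-6, None), 0.1, 10.0)

# 3) Fit the tactic/technique classifier on the weighted source data.
clf = LogisticRegression(max_iter=1000).fit(X_src, labels, sample_weight=weights)
print(clf.predict(X_tgt))
```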
The authors have proposed the Fallback Control System (FCS) as a countermeasure to be taken after cyber-attacks occur in Industrial Control Systems (ICSs). For increased robustness against cyber-attacks, introducing multiple countermeasures is desirable, and an appropriate collaboration between them is then essential. This paper introduces two FCSs into an ICS: a field-network-signal-driven FCS and an analog-signal-driven FCS. The paper also implements a collaborative FCS through a collaboration function between the two FCSs, in which the analog-signal-driven FCS estimates the state of the other FCS. After a cyber-attack occurs, the collaborative FCS decides on a countermeasure based on the result of this estimation. Finally, we show practical experimental results to analyze the effectiveness of the proposed method.
In order to improve the accuracy of similarity computation, an improved collaborative filtering algorithm based on trust and information entropy is proposed in this paper. Firstly, the direct trust between users is determined from their ratings in order to explore the potential trust relationships between users, and a time decay function is introduced to capture how a user's interest decays over time. Secondly, direct trust and indirect trust are combined to obtain an overall trust, which is weighted with the Pearson similarity to obtain a trust similarity. Then, information entropy theory is introduced to compute a similarity based on weighted information entropy. Finally, the trust similarity and the weighted-information-entropy similarity are combined to obtain a similarity measure incorporating both trust and information entropy, which is used to predict the target user's ratings and generate recommendations. Simulations show that the improved algorithm achieves higher recommendation accuracy and can provide a more accurate and reliable recommendation service.
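The abstract gives the combination only in words; one plausible form of the final similarity, written as a hedged LaTeX sketch, is shown below. The mixing weights, the harmonic-mean coupling of trust with Pearson similarity, and the symbols themselves are assumptions for illustration, not formulas taken from the cited paper.

```latex
% Hedged sketch of the combined similarity; \alpha, \beta are assumed mixing weights.
\begin{align}
  T(u,v)                &= \beta\, T_{\mathrm{direct}}(u,v) + (1-\beta)\, T_{\mathrm{indirect}}(u,v) \\
  \mathrm{sim}_{T}(u,v) &= \frac{2\, T(u,v)\, \mathrm{sim}_{\mathrm{Pearson}}(u,v)}
                               {T(u,v) + \mathrm{sim}_{\mathrm{Pearson}}(u,v)} \\
  \mathrm{sim}(u,v)     &= \alpha\, \mathrm{sim}_{T}(u,v) + (1-\alpha)\, \mathrm{sim}_{H}(u,v)
\end{align}
```

Here sim_H denotes the weighted-information-entropy similarity; a simple convex combination in the second line would be an equally plausible reading of "weighted with the Pearson similarity".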
With the construction and implementation of government information resource sharing mechanisms, the protection of citizens' privacy has become a vital issue for government departments and the public. This paper discusses the risk of citizens' privacy disclosure arising from data sharing among government departments and analyzes the current major privacy protection models for data sharing. To address the low efficiency and low reliability of existing e-government applications, a statistical data sharing framework among government departments based on local differential privacy and blockchain is established, and its applicability and advantages are illustrated through an example analysis. The characteristics of a private blockchain enhance the security, credibility, and responsiveness of information sharing between departments, while local differential privacy provides better usability and security for sharing statistics: it keeps the statistics available while protecting the privacy of citizens.
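The abstract does not specify the local differential privacy mechanism; a standard building block for locally private statistics is randomized response, sketched below as a hedged illustration. The epsilon value, the binary-attribute setting, and the synthetic population are assumptions, not details from the cited paper.

```python
# Hedged sketch: randomized response, a basic local differential privacy mechanism
# for estimating the frequency of a sensitive binary attribute.
import numpy as np

def randomize(bit, eps, rng):
    """Report the true bit with probability e^eps / (e^eps + 1), otherwise flip it."""
    p_true = np.exp(eps) / (np.exp(eps) + 1.0)
    return bit if rng.random() < p_true else 1 - bit

def estimate_frequency(reports, eps):
    """Debias the observed proportion of 1s to estimate the true proportion."""
    p = np.exp(eps) / (np.exp(eps) + 1.0)
    observed = np.mean(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(0)
true_bits = (rng.random(100_000) < 0.30).astype(int)      # 30% of citizens hold the attribute
reports = [randomize(b, eps=1.0, rng=rng) for b in true_bits]
print("estimated frequency:", round(estimate_frequency(reports, eps=1.0), 3))
```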
False alarms and misses are two general kinds of alarm errors, and they can decrease an operator's trust in the alarm system. There are two distinct forms of trust in such systems, represented in this research by two kinds of responses to alarms: compliance and reliance. Beyond false alarms and misses, the two responses are differentially affected by properties of the alarm system, situational factors, and operator factors. However, most existing studies have only qualitatively analyzed the relationship between a single variable and the two responses. In this research, all available experimental studies published up to December 2017 were identified through database searches using the keywords "compliance and reliance", without restriction on year of publication. Six relevant studies and fifty-two sets of key data were obtained as the data basis of this research. A neural network is then adopted as a tool to establish the quantitative relationship between multiple factors and each of the two forms of trust. The results will be of great significance for further study of the influence of human decision making on the overall fault detection rate and false alarm rate of the human-machine system.
The e-government concept and healthcare have usually been studied separately. Even when e-government and healthcare systems were combined in a study, the roles of e-government in healthcare were not examined. As a result, the complementarity of the systems poses potential challenges. The interpretive approach was applied in this study. Existing materials in the areas of healthcare and e-government were used as data from a qualitative-method viewpoint. The dimension-of-change perspective of structuration theory was employed to guide the data analysis. From the analysis, six factors were found to be the main roles of e-government in the implementation and application of e-health in the delivery of healthcare services. An understanding of these roles promotes complementarity, which enhances healthcare service delivery to the community.
Cryptographic protocols are the basis for the security of any protected system, including electronic voting systems. One of the most effective ways to analyze protocol security is to use verifiers. In this paper, the formal verifier SPIN, which is based on model checking with linear temporal logic (LTL), is used to analyze the security of a cryptographic protocol for e-voting. The cryptographic protocol of electronic voting is described, along with the main structural units of the Promela language used for modeling in the SPIN verifier. A model of the electronic voting protocol in Promela is given, describing the interacting parties, the transferred data, and the order of the messages transmitted between the parties. The security of the cryptographic protocol is then verified with the SPIN tool, and the protocol is simulated with an active intruder who uses a man-in-the-middle (MITM) attack to substitute data. The simulation results establish that the protocol correctly handles the case of an active attack on the parties' authentication.
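The abstract does not quote the properties that were checked; as a hedged illustration of the kind of LTL formulas such a verification typically uses (the proposition names below are invented for illustration and are not taken from the cited paper), a liveness property and a safety property could be written as:

```latex
% Hedged example LTL properties; proposition names are illustrative only.
\square\,\big(\mathit{ballot\_cast} \rightarrow \lozenge\, \mathit{ballot\_counted}\big)
\qquad
\square\,\neg\,\mathit{accepted\_without\_authentication}
```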
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding delicately crafted distortions to original legal inputs, can mislead a DNN into classifying them as any target label. In a successful adversarial attack, the targeted misclassification should be achieved with the minimal added distortion. In the literature, the added distortions are usually measured by the L0, L1, L2, and L∞ norms, giving L0, L1, L2, and L∞ attacks, respectively. However, a versatile framework covering all types of adversarial attacks has been lacking. This work, for the first time, unifies the methods of generating adversarial examples by leveraging ADMM (Alternating Direction Method of Multipliers), an operator-splitting optimization approach, such that L0, L1, L2, and L∞ attacks can all be implemented effectively in this general framework with few modifications. Compared with the state-of-the-art attacks in each category, our ADMM-based attacks are so far the strongest, achieving both a 100% attack success rate and the minimal distortion.
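The abstract states only that ADMM unifies the attacks. A hedged sketch of how such an attack problem is commonly split for ADMM is given below, with δ the distortion, D a norm-based distortion measure, g a loss penalizing failure to reach the target label, and x₀ the original input; this is a generic scaled-dual ADMM formulation for illustration, not necessarily the cited paper's exact one.

```latex
% Hedged generic ADMM splitting for adversarial example generation.
% D measures distortion (an L_p norm), g penalizes failure to reach the target label.
\begin{align}
  \min_{\delta,\, z}\;& D(z) + c\, g(x_0 + \delta)
      \quad \text{s.t.}\quad \delta = z \\[2pt]
  \delta^{k+1} &= \arg\min_{\delta}\; c\, g(x_0 + \delta)
      + \tfrac{\rho}{2}\,\lVert \delta - z^{k} + u^{k} \rVert_2^2 \\
  z^{k+1} &= \arg\min_{z}\; D(z)
      + \tfrac{\rho}{2}\,\lVert \delta^{k+1} - z + u^{k} \rVert_2^2 \\
  u^{k+1} &= u^{k} + \delta^{k+1} - z^{k+1}
\end{align}
```

Switching the attack type then amounts to switching D (and hence the proximal step in the z-update), which is what makes the framework uniform across L0, L1, L2, and L∞.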
This paper advocates programming high-performance code using partial evaluation. We present a clean-slate programming system with a simple, annotation-based, online partial evaluator that operates on a CPS-style intermediate representation. Our system exposes code generation for accelerators (vectorization/parallelization for CPUs and GPUs) via compiler-known higher-order functions that can be subjected to partial evaluation. In this way, generic implementations can be instantiated with target-specific code at compile time. In our experimental evaluation we present three extensive case studies from image processing, ray tracing, and genome sequence alignment. We demonstrate that, using partial evaluation, we obtain high-performance implementations for CPUs and GPUs from one language and one code base in a generic way. The performance of our code is mostly within 10% of, and often closer to, the performance of multi-man-year, industry-grade, manually optimized expert codes that are considered to be among the top contenders in their fields.
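The system above is a compiler-level partial evaluator; purely as a hedged, language-agnostic illustration of the underlying idea (specialize a generic function with respect to inputs known early, so later calls pay only for the residual work), the Python sketch below specializes a generic 1-D convolution on a fixed kernel. It does not reflect the cited system's language, annotations, or CPS intermediate representation.

```python
# Hedged illustration of partial evaluation as "specialize on known inputs, reuse the residual".
# The kernel is the statically known input; the signal is the dynamic one.

def specialize_convolve1d(kernel):
    """Partially evaluate a 1-D convolution for a fixed kernel:
    the kernel taps are fixed at specialization time, only the signal varies later."""
    taps = list(enumerate(kernel))              # 'static' work, done once

    def residual(signal):
        n, k = len(signal), len(kernel)
        return [sum(w * signal[i + j] for j, w in taps) for i in range(n - k + 1)]

    return residual

blur3 = specialize_convolve1d([0.25, 0.5, 0.25])   # specialization happens once
print(blur3([1.0, 2.0, 4.0, 8.0, 16.0]))           # residual code runs on dynamic data
```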
As a valuable source of information, Word of Mouth has always been valued by consumers and business marketers. The Internet provides a new medium for Word of Mouth communication: consumers share their views and comments on products, services, brands, and enterprises through online platforms, thus forming Internet Word of Mouth, which is of great importance to B2C enterprises. However, disturbing and even false information, as well as the uncertainties and risks of the online communication environment, lead to a crisis of online trust. Accordingly, this study constructs a trust mechanism model of the Internet Word of Mouth effect, which shows that the professionalism of communicators, online relationship strength, communication channels, and product involvement are key factors significantly affecting the Word of Mouth effect. This model can provide theoretical guidance for word-of-mouth marketing and the operation of B2C e-commerce enterprises.
Advancements in computing, communication, and sensing technologies are making it possible to embed, control, and gather vital information from tiny devices that are being deployed and utilized in practically every aspect of our modernized society. From smart home appliances to municipal water and electric industrial facilities to our everyday work environments, the next Internet frontier, dubbed the IoT, promises to revolutionize our lives and tackle some of our nations' most pressing challenges. While the seamless interconnection of IoT devices with the physical realm is envisioned to bring a plethora of critical improvements in many aspects and diverse domains, it will undoubtedly pave the way for attackers to target and exploit such devices, threatening the integrity of their data and the reliability of critical infrastructure. Further, such compromised devices will undeniably be leveraged as the next generation of botnets, given their increased processing capabilities and abundant bandwidth. While several demonstrations exist in the literature describing the exploitation procedures of a number of IoT devices, the up-to-date inference, characterization, and analysis of unsolicited IoT devices currently deployed "in the wild" is still in its infancy. In this article, we address this imperative task by leveraging active and passive measurements to report on unsolicited Internet-scale IoT devices. This work describes a first step toward exploring the use of passive measurements in combination with the results of active measurements to shed light on the Internet-scale insecurities of the IoT paradigm. By correlating the results of Internet-wide scanning with Internet background radiation traffic, we disclose close to 14,000 compromised IoT devices in diverse sectors, including critical infrastructure and smart home appliances. We also analyze their generated traffic to create effective mitigation signatures that could be deployed in local IoT realms. To support large-scale empirical data analytics in the context of the IoT, we make the inferred and extracted IoT malicious raw data available through an authenticated front-end service. The outcomes of this work confirm the existence of such compromised devices on an Internet scale, while the generated inferences and insights are postulated to be employed for inferring other similarly compromised IoT devices, in addition to contributing to IoT cyber security situational awareness.
With the development of the Internet of Things, numerous IoT devices have been brought into our daily lives. Bluetooth Low Energy (BLE), due to its low energy consumption and generic service stack, has become one of the most popular wireless communication technologies for IoT. However, because of its short communication range and exclusive connection pattern, a BLE-equipped device can only be used by a single user near the device. To fully explore the benefits of BLE and make BLE-equipped devices truly accessible over the Internet as IoT devices, in this paper we propose a cloud-based software framework that enables multiple users to interact with various BLE IoT devices over the Internet. The framework includes an agent program, a suite of services hosted in the cloud, and a set of RESTful APIs exposed to Internet users. With this framework, access to BLE devices can be extended from the local scale to the Internet scale without any software or hardware changes to the BLE devices, and, more importantly, shared use of remote BLE devices over the Internet is also made available.
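The abstract names the framework's components but not its API; as a hedged sketch of what such a RESTful facade could look like (the route names, the agent relay function, and the use of Flask are assumptions made here for illustration, not the cited framework's actual interface):

```python
# Hedged sketch: a minimal RESTful facade a cloud service could expose for BLE devices.
# Route names and the agent relay are hypothetical; they are not the cited framework's API.
from flask import Flask, jsonify, request

app = Flask(__name__)

def relay_to_agent(device_id, characteristic, value=None):
    """Placeholder for forwarding a read/write request to the local agent
    that holds the actual BLE connection (e.g., over a message queue)."""
    return {"device": device_id, "characteristic": characteristic, "value": value}

@app.get("/devices/<device_id>/characteristics/<characteristic>")
def read_characteristic(device_id, characteristic):
    return jsonify(relay_to_agent(device_id, characteristic))

@app.post("/devices/<device_id>/characteristics/<characteristic>")
def write_characteristic(device_id, characteristic):
    value = request.get_json(force=True).get("value")
    return jsonify(relay_to_agent(device_id, characteristic, value)), 202

if __name__ == "__main__":
    app.run(port=8080)
```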