Biblio

Found 721 results

Filters: Keyword is Computational modeling
2020-11-20
Sun, Y., Wang, J., Lu, Z..  2019.  Asynchronous Parallel Surrogate Optimization Algorithm Based on Ensemble Surrogating Model and Stochastic Response Surface Method. :74—84.
Surrogate model-based optimization algorithms remain an important solution to expensive black-box function optimization. The introduction of an ensemble model enables the algorithm to automatically choose a proper model integration mode and adapt to various parameter spaces when dealing with different problems. However, this also significantly increases the computational burden of the algorithm. On the other hand, utilizing parallel computing resources to improve the efficiency of black-box function optimization requires an efficient parallel parameter space sampling mechanism designed in combination with the surrogate optimization algorithm. This paper uses parallel computing technology to speed up the weight-updating computation for the ensemble model based on Dempster-Shafer theory, and combines it with the stochastic response surface method to develop a novel parallel sampling mechanism for asynchronous parameter optimization. Furthermore, it designs and implements a corresponding parallel computing framework and applies the developed algorithm to quantitative trading strategy tuning in financial markets. The algorithm is verified to be both feasible and effective in actual application. The experiments demonstrate that, while maintaining optimization performance, the parallel optimization algorithm achieves an excellent acceleration effect.
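
The asynchronous sampling idea can be pictured with a minimal sketch, assuming a toy objective, an inverse-distance surrogate stand-in, and a Gaussian-perturbation candidate rule in the spirit of the stochastic response surface method; none of these choices are taken from the paper itself.

```python
# Illustrative sketch of asynchronous surrogate-assisted sampling (not the
# authors' implementation): candidates are generated by Gaussian perturbation
# of the current best point, screened by a cheap surrogate, and dispatched to
# a worker as soon as one becomes idle.
import math
import random
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def expensive_black_box(x):
    # Stand-in for the costly objective (e.g., a trading-strategy backtest).
    return sum((xi - 1.0) ** 2 for xi in x)

def surrogate_predict(x, history):
    # Toy inverse-distance-weighted surrogate over already-evaluated points.
    num = den = 0.0
    for xs, ys in history:
        d = math.dist(x, xs) + 1e-9
        num += ys / d
        den += 1.0 / d
    return num / den if den else 0.0

def propose_candidate(best_x, history, n_trials=50, sigma=0.3):
    # Gaussian perturbation of the incumbent, screened by the surrogate.
    trials = [[xi + random.gauss(0.0, sigma) for xi in best_x] for _ in range(n_trials)]
    return min(trials, key=lambda t: surrogate_predict(t, history))

def asynchronous_optimize(dim=4, budget=40, n_workers=4):
    best_x = [random.uniform(-2.0, 2.0) for _ in range(dim)]
    best_y = expensive_black_box(best_x)
    history = [(best_x, best_y)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        pending = {}
        for _ in range(n_workers):
            cand = propose_candidate(best_x, history)
            pending[pool.submit(expensive_black_box, cand)] = cand
        evaluated = 1
        while evaluated < budget:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                cand = pending.pop(fut)
                y = fut.result()
                history.append((cand, y))
                evaluated += 1
                if y < best_y:
                    best_x, best_y = cand, y
                # Refill the freed worker slot immediately (asynchronous step).
                new_cand = propose_candidate(best_x, history)
                pending[pool.submit(expensive_black_box, new_cand)] = new_cand
    return best_x, best_y

if __name__ == "__main__":
    print(asynchronous_optimize())
```
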
Liu, D., Lou, F., Wang, H..  2019.  Modeling and measurement internal threat process based on advanced stochastic model*. 2019 Chinese Automation Congress (CAC). :1077—1081.
Previous research on internal threats has mostly focused on modeling threat behaviors and has paid little attention to risk measurement. This paper analyzes internal threat scenarios, introduces an operation-related protection model into the firewall-password model, and constructs a series of sub-models. By analyzing the process of illegal data exfiltration, an analysis model of the target network can be rapidly generated from the four protection sub-models. The risk value of an assessment point can then be computed dynamically according to the Petri net's computing characteristics, and the effectiveness of the overall network protection can be measured. This method improves the granularity of the model, reduces the complexity of modeling complex networks, and enables dynamic, real-time risk measurement.
2020-11-17
Abuzainab, N., Saad, W..  2018.  Misinformation Control in the Internet of Battlefield Things: A Multiclass Mean-Field Game. 2018 IEEE Global Communications Conference (GLOBECOM). :1—7.

In this paper, the problem of misinformation propagation is studied for an Internet of Battlefield Things (IoBT) system in which an attacker seeks to inject false information into the IoBT nodes in order to compromise the IoBT operations. In the considered model, each IoBT node seeks to counter the misinformation attack by finding the optimal probability of accepting a given piece of information that minimizes its cost at each time instant. The cost is expressed in terms of the quality of information received as well as the infection cost. The problem is formulated as a mean-field game with multiclass agents, which is suitable for modeling a massive heterogeneous IoBT system. For this game, the mean-field equilibrium is characterized, and an algorithm based on the forward-backward sweep method is proposed. Then, the finite IoBT case is considered, and the conditions for convergence of the equilibria in the finite case to the mean-field equilibrium are presented. Numerical results show that the proposed scheme can achieve a two-fold increase in the quality of information (QoI) compared to the baseline in which the nodes are always transmitting.
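
The forward-backward sweep mentioned above is a standard numerical pattern; the sketch below applies it to a toy linear-quadratic control problem rather than the multiclass mean-field IoBT model, so the dynamics, costs and update rule are placeholders.

```python
# Generic forward-backward sweep for a toy optimal control problem:
# minimize the integral of x^2 + u^2 subject to x' = -x + u.
# It only illustrates the numerical pattern named in the abstract; it is not
# the paper's mean-field game model.
import numpy as np

def forward_backward_sweep(x0=1.0, T=5.0, n=500, iters=100, relax=0.5, tol=1e-6):
    dt = T / n
    u = np.zeros(n + 1)          # initial control guess
    x = np.zeros(n + 1)
    p = np.zeros(n + 1)
    for _ in range(iters):
        # Forward sweep: integrate the state under the current control.
        x[0] = x0
        for k in range(n):
            x[k + 1] = x[k] + dt * (-x[k] + u[k])
        # Backward sweep: integrate the costate from the terminal condition.
        p[-1] = 0.0
        for k in range(n, 0, -1):
            p[k - 1] = p[k] - dt * (-2.0 * x[k] + p[k])
        # Optimality condition dH/du = 2u + p = 0  =>  u = -p / 2,
        # blended with the previous control for stability.
        u_new = relax * (-p / 2.0) + (1.0 - relax) * u
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return x, p, u

if __name__ == "__main__":
    x, p, u = forward_backward_sweep()
    print("final state:", x[-1], "initial control:", u[0])
```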

Kamhoua, C. A..  2018.  Game theoretic modeling of cyber deception in the Internet of Battlefield Things. 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :862—862.

Internet of Battlefield Things (IoBT) devices such as actuators, sensors, wearable devices, robots, drones, and autonomous vehicles facilitate Intelligence, Surveillance and Reconnaissance (ISR), Command and Control, and battlefield services. IoBT devices have the ability to collect operational field data, to compute on the data, and to upload their information to the network. Securing the IoBT presents additional challenges compared with traditional information technology (IT) systems. First, IoBT devices are mass-produced rapidly as low-cost commodity items without security protection in their original design. Second, IoBT devices are highly dynamic, mobile, and heterogeneous, without common standards. Third, it is imperative to understand the natural world, the physical process(es) under IoBT control, and how these real-world processes can be compromised before recommending any relevant security countermeasure. Moreover, unprotected IoBT devices can be used as “stepping stones” by attackers to launch more sophisticated attacks such as advanced persistent threats (APTs). As a result of these challenges, IoBT systems are the frequent targets of sophisticated cyber attacks that aim to disrupt mission effectiveness.

Khakurel, U., Rawat, D., Njilla, L..  2019.  2019 IEEE International Conference on Industrial Internet (ICII). :241—247.

FastChain is a simulator built in NS-3 which simulates a networked battlefield scenario with military applications, connecting tankers, soldiers and drones to form the Internet-of-Battlefield-Things (IoBT). Computing, storage and communication resources in IoBT are limited in certain situations. Under these circumstances, these resources should be carefully combined to handle the tasks needed to accomplish the mission. The FastChain simulator uses a sharding approach to combine the resources of IoBT devices efficiently by identifying the best set of IoBT devices for a given scenario; this set of devices then collaborates in a sharding-enabled blockchain. Interested researchers, policy makers and developers can download and use the FastChain simulator to design, develop and evaluate blockchain-enabled IoBT scenarios, helping them make robust and trustworthy informed decisions in mission-critical IoBT environments.
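
A rough illustration of the device-selection step might look like the following sketch; the `Device` fields, weights and shard size are invented for the example and are not FastChain's actual selection logic.

```python
# Hypothetical device-selection sketch: rank IoBT devices by a weighted
# resource score and fill shards greedily. Field names and weights are
# illustrative only; FastChain's real selection logic lives in its NS-3 code.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    compute: float    # normalized 0..1
    storage: float    # normalized 0..1
    bandwidth: float  # normalized 0..1

def resource_score(d, w_compute=0.5, w_storage=0.2, w_bandwidth=0.3):
    return w_compute * d.compute + w_storage * d.storage + w_bandwidth * d.bandwidth

def form_shards(devices, shard_size=3, min_score=0.3):
    # Keep only devices with enough resources, then group them into shards.
    eligible = sorted((d for d in devices if resource_score(d) >= min_score),
                      key=resource_score, reverse=True)
    return [eligible[i:i + shard_size] for i in range(0, len(eligible), shard_size)]

devices = [
    Device("drone-1", 0.4, 0.2, 0.9),
    Device("tank-1", 0.9, 0.8, 0.5),
    Device("soldier-radio-7", 0.2, 0.1, 0.4),
    Device("drone-2", 0.5, 0.3, 0.8),
]
for i, shard in enumerate(form_shards(devices)):
    print(f"shard {i}:", [d.name for d in shard])
```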

2020-11-09
Pflanzner, T., Feher, Z., Kertesz, A..  2019.  A Crawling Approach to Facilitate Open IoT Data Archiving and Reuse. 2019 Sixth International Conference on Internet of Things: Systems, Management and Security (IOTSMS). :235–242.
Several cloud providers have started to offer specific data management services in response to the new trend called the Internet of Things (IoT). In recent years, we have already seen that cloud computing has managed to serve IoT needs for data retrieval, processing and visualization transparently to the user side. IoT-Cloud systems for smart cities and smart regions can be very complex, therefore their design and analysis should be supported by means of simulation. Nevertheless, the models used in simulation environments should be as close as possible to real-world utilization to provide reliable results. To facilitate such simulations, in earlier work we proposed an IoT trace archiving service called SUMMON that can be used to gather real-world datasets and to reuse them for simulation experiments. In this paper we provide an extension to SUMMON with an automated web crawling service that gathers IoT and sensor data from publicly available websites. We introduce the architecture and operation of this approach, and exemplify its utilization with three use cases. The provided archiving solution can be used by simulators to perform realistic evaluations.
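
A minimal sketch of the crawl-and-archive loop, assuming a hypothetical public JSON sensor feed and a simple line-delimited archive file; the URL and record layout are placeholders, not SUMMON's interface.

```python
# Minimal sketch of fetching open sensor data and archiving it with a
# timestamp for later replay in simulations. The endpoint URL and archive
# layout are placeholders, not the SUMMON service's real API.
import json
import time
import urllib.request

ARCHIVE_PATH = "iot_trace_archive.jsonl"
FEED_URL = "https://example.org/open-sensors/latest.json"  # hypothetical feed

def fetch_feed(url, timeout=10):
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

def archive_sample(sample, path=ARCHIVE_PATH):
    record = {"collected_at": time.time(), "payload": sample}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def crawl_once():
    try:
        sample = fetch_feed(FEED_URL)
    except OSError as err:
        print("fetch failed:", err)
        return
    archive_sample(sample)
    print("archived one sample")

if __name__ == "__main__":
    crawl_once()
```
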
2020-11-04
Zhang, J., Chen, J., Wu, D., Chen, B., Yu, S..  2019.  Poisoning Attack in Federated Learning using Generative Adversarial Nets. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :374—380.

Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively among thousands of participants. Only model parameters are shared between the server and the participants, which prevents the server from directly accessing the private training data. However, we notice that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, where the attacker can act as a benign participant and upload poisoned updates to the server so as to easily affect the performance of the global model. In this work, we study and evaluate a poisoning attack on a federated learning system based on generative adversarial nets (GAN). That is, an attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples of the other participants' training sets, which do not belong to the attacker. These generated samples are then fully controlled by the attacker to generate poisoning updates, and the global model is compromised when the attacker uploads the scaled poisoning updates to the server. In our evaluation, we show that the attacker in our construction can successfully generate samples of other benign participants using the GAN, and the global model achieves more than 80% accuracy on both the poisoning task and the main task.
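
The scaled-update step can be illustrated with a short numpy sketch in a plain federated-averaging setting; the boosting factor and the toy parameter vectors are illustrative, not the paper's exact construction.

```python
# Illustrative sketch of boosting a poisoned update in federated averaging.
# The model is a plain parameter vector; the boosting factor and averaging
# step mimic the usual FedAvg setup rather than the paper's GAN pipeline.
import numpy as np

def benign_update(global_params, rng):
    # Stand-in for a normal local training round.
    return global_params + 0.01 * rng.standard_normal(global_params.shape)

def poisoned_update(global_params, rng, boost=10.0):
    # The attacker's local objective pushes parameters toward a malicious
    # target; the delta is scaled so it dominates the server average.
    malicious_target = global_params + 0.5 * rng.standard_normal(global_params.shape)
    delta = malicious_target - global_params
    return global_params + boost * delta

def federated_average(updates):
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
global_params = np.zeros(8)
updates = [benign_update(global_params, rng) for _ in range(9)]
updates.append(poisoned_update(global_params, rng))
new_global = federated_average(updates)
print("drift caused mostly by the single scaled update:",
      np.linalg.norm(new_global - global_params))
```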

Bell, S., Oudshoorn, M..  2018.  Meeting the Demand: Building a Cybersecurity Degree Program With Limited Resources. 2018 IEEE Frontiers in Education Conference (FIE). :1—7.

This innovative practice paper considers the heightening awareness of the need for cybersecurity programs in light of several well-publicized cyber-attacks in recent years. An examination of the academic job market reveals that a significant number of institutions are looking to hire new faculty in the area of cybersecurity. Additionally, a growing number of universities are starting to offer courses, certifications and degrees in cybersecurity. Other recent activity includes the development of a model cybersecurity curriculum and the creation of program accreditation criteria for cybersecurity through ABET. This sudden and significant growth in demand for cybersecurity expertise has some similarities to the significant demand for networking faculty that Computer Science programs experienced in the late 1980s as a result of the rise of the Internet. This paper examines the resources necessary to respond to the demand for cybersecurity courses and programs and draws some parallels and distinctions to the demand for networking faculty over 25 years ago. Faculty and administration are faced with a plethora of questions to answer as they approach this problem: what degree and courses to offer, what certifications to consider, which curriculum to incorporate, and how to deliver the material (online, face-to-face, or something in between)? However, the most pressing question in today's fiscal climate in higher education is: what resources will it take to deliver a cybersecurity program?

Torkura, K. A., Sukmana, M. I. H., Strauss, T., Graupner, H., Cheng, F., Meinel, C..  2018.  CSBAuditor: Proactive Security Risk Analysis for Cloud Storage Broker Systems. 2018 IEEE 17th International Symposium on Network Computing and Applications (NCA). :1—10.

Cloud Storage Brokers (CSB) provide seamless and concurrent access to multiple Cloud Storage Services (CSS) while abstracting cloud complexities from end-users. However, this multi-cloud strategy faces several security challenges including enlarged attack surfaces, malicious insider threats, security complexities due to the integration of disparate components, and API interoperability issues. Novel security approaches are imperative to tackle these security issues. Therefore, this paper proposes CSBAuditor, a novel cloud security system that continuously audits CSB resources to detect malicious activities and unauthorized changes, e.g. bucket policy misconfigurations, and remediates these anomalies. The cloud state is maintained via a continuous snapshotting mechanism, thereby ensuring fault tolerance. We adopt the principles of chaos engineering by integrating BrokerMonkey, a component that continuously injects failure into our reference CSB system, CloudRAID. Hence, CSBAuditor is continuously tested for efficiency, i.e. its ability to detect the changes injected by BrokerMonkey. CSBAuditor employs security metrics for risk analysis by computing severity scores for detected vulnerabilities using the Common Configuration Scoring System, thereby overcoming the limitation of insufficient security metrics in existing cloud auditing schemes. CSBAuditor has been tested using various strategies including chaos engineering failure injection strategies. Our experimental evaluation validates the efficiency of our approach against the aforementioned security issues with a detection and recovery rate of over 96%.
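
The snapshot-and-diff idea behind continuous auditing can be sketched as a comparison of an expected configuration against the observed one; the state schema and the remediation hook below are hypothetical, not CSBAuditor's internals.

```python
# Simplified configuration-drift check in the spirit of continuous cloud
# auditing: diff an expected state snapshot against the observed state and
# flag (and notionally remediate) deviations. The snapshot schema below is
# hypothetical.
EXPECTED_STATE = {
    "bucket:customer-data": {"public_read": False, "versioning": True},
    "bucket:logs": {"public_read": False, "versioning": True},
}

def diff_states(expected, observed):
    findings = []
    for resource, expected_cfg in expected.items():
        observed_cfg = observed.get(resource)
        if observed_cfg is None:
            findings.append((resource, "missing", None, None))
            continue
        for key, want in expected_cfg.items():
            have = observed_cfg.get(key)
            if have != want:
                findings.append((resource, key, want, have))
    return findings

def remediate(finding):
    resource, key, want, have = finding
    # Placeholder: a real system would call the storage provider's API here.
    print(f"remediating {resource}: reset {key} from {have!r} to {want!r}")

observed = {
    "bucket:customer-data": {"public_read": True, "versioning": True},  # drifted
    "bucket:logs": {"public_read": False, "versioning": True},
}
for finding in diff_states(EXPECTED_STATE, observed):
    remediate(finding)
```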

2020-11-02
Pan, C., Huang, J., Gong, J., Yuan, X..  2019.  Few-Shot Transfer Learning for Text Classification With Lightweight Word Embedding Based Models. IEEE Access. 7:53296–53304.
Many deep learning architectures have been employed to model semantic compositionality for text sequences, but they require a huge amount of supervised data for parameter training, making them infeasible in situations where numerous annotated samples are unavailable or do not exist. Different from data-hungry deep models, lightweight word embedding-based models can represent text sequences in a plug-and-play way due to their parameter-free property. In this paper, a modified hierarchical pooling strategy over pre-trained word embeddings is proposed for text classification in a few-shot transfer learning setting. The model leverages and transfers knowledge obtained from some source domains to recognize and classify unseen text sequences with just a handful of support examples in the target problem domain. Extensive experiments on five datasets including both English and Chinese text demonstrate that simple word embedding-based models (SWEMs) with parameter-free pooling operations are able to abstract and represent semantic text. The proposed modified hierarchical pooling method exhibits significant classification performance in few-shot transfer learning tasks compared with other alternative methods.
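
Hierarchical pooling itself is simple enough to sketch directly: average-pool local windows of word vectors, then max-pool across windows; the random embeddings and window size below are placeholders.

```python
# Parameter-free hierarchical pooling over word embeddings: average-pool
# each local window, then max-pool across windows. The embeddings below are
# random stand-ins for pre-trained vectors; only the pooling is the point.
import numpy as np

def hierarchical_pool(embeddings, window=3):
    # embeddings: (sequence_length, embedding_dim)
    n, d = embeddings.shape
    if n <= window:
        return embeddings.mean(axis=0)
    window_means = np.stack([embeddings[i:i + window].mean(axis=0)
                             for i in range(n - window + 1)])
    return window_means.max(axis=0)          # (embedding_dim,)

rng = np.random.default_rng(0)
vocab_embeddings = {w: rng.standard_normal(50) for w in
                    ["the", "market", "fell", "sharply", "today"]}
sentence = ["the", "market", "fell", "sharply", "today"]
feature = hierarchical_pool(np.stack([vocab_embeddings[w] for w in sentence]))
print(feature.shape)  # (50,): fixed-length, parameter-free sentence representation
```
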
2020-10-29
Roseline, S. Abijah, Sasisri, A. D., Geetha, S., Balasubramanian, C..  2019.  Towards Efficient Malware Detection and Classification using Multilayered Random Forest Ensemble Technique. 2019 International Carnahan Conference on Security Technology (ICCST). :1—6.

The exponential growth rate of malware causes significant security concern in this digital era for computer users and for private and government organizations. Traditional malware detection methods employ static and dynamic analysis, which are ineffective in identifying unknown malware. Malware authors develop new malware by applying polymorphic and evasion techniques to existing malware to escape detection. Newly arriving malware are variants of existing malware, and their patterns can be analyzed using a vision-based method: malware patterns are visualized as images and their features are characterized. Class vectors and feature vectors are generated alternately using ensemble forests in multiple sequential layers to classify malware. This paper proposes a hybrid stacked multilayered ensembling approach which is more robust and efficient than deep learning models. The proposed model outperforms machine learning and deep learning models with an accuracy of 98.91%. The proposed system works well for both small-scale and large-scale data because it sets its parameters (the number of sequential levels) adaptively and automatically. It is computationally efficient in terms of resources and time, and uses far fewer hyper-parameters than deep neural networks.
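
A compact sketch of the cascade idea, assuming scikit-learn random forests and a synthetic dataset: each level's class-probability vectors are appended to the original features before the next level is trained. The depth, stopping rule and use of in-sample probabilities are simplifications, not the paper's configuration.

```python
# Cascade-style multilayer random forest sketch: each level's predicted
# class-probability vectors are concatenated to the original features and
# fed to the next level. Dataset and hyper-parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_levels = 3
layers = []
feat_train, feat_test = X_train, X_test
for level in range(n_levels):
    clf = RandomForestClassifier(n_estimators=100, random_state=level)
    clf.fit(feat_train, y_train)
    layers.append(clf)
    if level < n_levels - 1:
        # Class-probability vectors from this level become extra features
        # for the next level (cascade-forest implementations usually use
        # out-of-fold probabilities; in-sample ones are used here for brevity).
        feat_train = np.hstack([X_train, clf.predict_proba(feat_train)])
        feat_test = np.hstack([X_test, clf.predict_proba(feat_test)])

print("cascade accuracy:", accuracy_score(y_test, layers[-1].predict(feat_test)))
```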

2020-10-26
Mutalemwa, Lilian C., Shin, Seokjoo.  2019.  Investigating the Influence of Routing Scheme Algorithms on the Source Location Privacy Protection and Network Lifetime. 2019 International Conference on Information and Communication Technology Convergence (ICTC). :1188–1191.
There exist numerous Source Location Privacy (SLP) routing schemes. In this study, an experimental analysis of a few routing schemes is done to investigate the influence of the routing scheme algorithms on the privacy protection level and the network lifetime performance. The analysis involved four categories of SLP routing schemes. The results revealed that the algorithms used in the representative tree-based and angle-based routing schemes exert the strongest influence. The tree-based algorithm causes the highest energy consumption and the lowest network lifetime, while the angle-based algorithm does the opposite. Moreover, for the tree-based algorithm, the influence is highly dependent on the region of the network domain.
2020-10-19
Engoulou, Richard Gilles, Bellaiche, Martine, Halabi, Talal, Pierre, Samuel.  2019.  A Decentralized Reputation Management System for Securing the Internet of Vehicles. 2019 International Conference on Computing, Networking and Communications (ICNC). :900–904.
The evolution of the Internet of Vehicles (IoV) paradigm has recently attracted a lot of researchers and industries. Vehicular Ad Hoc Networks (VANET) is the networking model that lies at the heart of this technology. It enables the vehicles to exchange relevant information concerning road conditions and safety. However, ensuring communication security has been and still is one of the main challenges to vehicles' interconnection. To secure the interconnected vehicular system, many cryptography techniques, communication protocols, and certification and reputation-based security approaches were proposed. Nonetheless, some limitations are still present, preventing the practical implementation of such approaches. In this paper, we first define a set of locally-perceived behavioral reputation parameters that enable a distributed evaluation of vehicles' reputation. Then, we integrate these parameters into the design of a reputation management system to exclude malicious or faulty vehicles from the IoV network. Our system can help in the prevention of several attacks on the VANET environment such as Sybil and Denial of Service attacks, and can be implemented in a fully decentralized fashion.
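
One way to picture the locally perceived behavioral parameters is a small weighted-aggregation sketch; the parameter names, weights and exclusion threshold are invented for the example and are not the paper's model.

```python
# Toy reputation aggregation: combine locally observed behavioral parameters
# into a score per vehicle and exclude low-reputation nodes. The parameter
# names, weights and threshold are illustrative only.
WEIGHTS = {"msg_consistency": 0.4, "beacon_regularity": 0.2,
           "cooperation": 0.2, "plausibility": 0.2}

def reputation(observations):
    # observations: dict of behavioral parameters in [0, 1]
    return sum(WEIGHTS[k] * observations.get(k, 0.0) for k in WEIGHTS)

def filter_vehicles(reports, threshold=0.6):
    trusted, excluded = {}, {}
    for vehicle_id, obs in reports.items():
        score = reputation(obs)
        (trusted if score >= threshold else excluded)[vehicle_id] = round(score, 2)
    return trusted, excluded

reports = {
    "veh-17": {"msg_consistency": 0.9, "beacon_regularity": 0.8,
               "cooperation": 0.7, "plausibility": 0.9},
    "veh-42": {"msg_consistency": 0.2, "beacon_regularity": 0.9,
               "cooperation": 0.3, "plausibility": 0.1},  # likely faulty/Sybil
}
print(filter_vehicles(reports))
```
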
2020-10-12
Kannan, Uma, Swamidurai, Rajendran.  2019.  Empirical Validation of System Dynamics Cyber Security Models. 2019 SoutheastCon. :1–6.

Model validation, though a continuous and complex process, establishes confidence in the soundness and usefulness of a model. Making sure that the model reproduces the modes of behavior seen in real systems allows the model builder to accumulate confidence in the model and thus validate it. While doing this, the model builder is also required to build the target audience's confidence in the model by communicating its basis. The validation of system dynamics models, both in general and in the field of cyber security, relies on a causal loop diagram of the system being agreed upon by a group of experts. Model validation also uses formal quantitative and informal qualitative tools in addition to the validation techniques used in system dynamics. Amongst others, the usefulness of a model in a user's eyes is a valid standard by which to evaluate it. To validate our system dynamics cyber security model, we used empirical structural and behavioral tests. This paper describes tests of model structure and model behavior, including each test's purpose, the way the tests were conducted, and empirical validation results using a proof-of-concept cyber security model.

Granatyr, Jones, Gomes, Heitor Murilo, Dias, João Miguel, Paiva, Ana Maria, Nunes, Maria Augusta Silveira Netto, Scalabrin, Edson Emílio, Spak, Fábio.  2019.  Inferring Trust Using Personality Aspects Extracted from Texts. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :3840–3846.
Trust mechanisms are considered the logical protection of software systems, preventing malicious people from taking advantage of or cheating others. Although these concepts are widely used, most applications in this field do not consider affective aspects to aid in trust computation. Researchers in Psychology, Neurology, Anthropology, and Computer Science argue that affective aspects are essential to humans' decision-making processes. So far, there is a lack of understanding about how these aspects impact users' trust, particularly when they are embedded in an evaluation system. In this paper, we propose a trust model that accounts for personality using three personality models: Big Five, Needs, and Values. We tested our approach by extracting personality aspects from texts provided by two online human-fed evaluation systems and correlating them with reputation values. The empirical experiments show statistically significantly better results in comparison to non-personality-wise approaches.
2020-10-06
Ravikumar, Gelli, Hyder, Burhan, Govindarasu, Manimaran.  2019.  Efficient Modeling of HIL Multi-Grid System for Scalability Concurrency in CPS Security Testbed. 2019 North American Power Symposium (NAPS). :1—6.
Cyber-event-triggered power grid blackouts compel utility operators to intensify cyber-aware and physics-constrained recovery and restoration processes. Recently, coordinated cyber attacks on the Ukrainian grid caused such a cyber-event-triggered power system blackout. Various cyber-physical system (CPS) testbeds with a multitude of designs have attempted to analyze such interdependent events and evaluate remedy measures. However, resource constraints and modular integration designs have been significant barriers to modeling large-scale grid models (scalability) and multi-grid isolated models (concurrency) under a single real-time execution environment for hardware-in-the-loop (HIL) CPS security testbeds. This paper proposes a meticulous design and effective modeling for simulating large-scale grid models and multi-grid isolated models in a HIL real-time digital simulator environment integrated with industry-grade hardware and software systems. We have used our existing HIL CPS security testbed to demonstrate scalability through the real-time performance of a Texas-2000 bus US synthetic grid model, and concurrency through the real-time performance of ten simultaneous IEEE-39 bus grid models and an IEEE-118 bus grid model. The experiments demonstrated significant results: 100% real-time performance with zero overruns, low latency while receiving and executing control signals from SEL Relays via the IEC-61850 protocol, and low latency while computing and transmitting grid data streams, including stability measures, via the IEEE C37.118 synchrophasor data protocol to SEL Phasor Data Concentrators.
Akbarzadeh, Aida, Pandey, Pankaj, Katsikas, Sokratis.  2019.  Cyber-Physical Interdependencies in Power Plant Systems: A Review of Cyber Security Risks. 2019 IEEE Conference on Information and Communication Technology. :1—6.

Realizing the importance of the concept of “smart city” and its impact on the quality of life, many infrastructures, such as power plants, began their digital transformation process by leveraging modern computing and advanced communication technologies. Unfortunately, by increasing the number of connections, power plants become more and more vulnerable and also an attractive target for cyber-physical attacks. The analysis of interdependencies among system components reveals interdependent connections, and facilitates the identification of those among them that are in need of special protection. In this paper, we review the recent literature which utilizes graph-based models and network-based models to study these interdependencies. A comprehensive overview, based on the main features of the systems including communication direction, control parameters, research target, scalability, security and safety, is presented. We also assess the computational complexity associated with the approaches presented in the reviewed papers, and we use this metric to assess the scalability of the approaches.

Dattana, Vishal, Gupta, Kishu, Kush, Ashwani.  2019.  A Probability based Model for Big Data Security in Smart City. 2019 4th MEC International Conference on Big Data and Smart City (ICBDSC). :1—6.

Smart technologies at hand have facilitated the generation and collection of huge volumes of data on a daily basis. This involves highly sensitive and diverse data such as personal, organisational, environmental, energy, transport and economic data. Data analytics provides solutions for various issues faced by smart cities, such as crisis response, disaster resilience, emergency management and smart traffic management; it requires the distribution of sensitive data among various entities within or outside the smart city. Sharing of sensitive data creates a need for the efficient usage of smart city data to provide smart applications and utility to end users in a trustworthy and safe mode. If this shared sensitive data is leaked, the consequences can cause damage and severe risk to the city's resources. Fortification of critical data against unauthorized disclosure is the biggest issue for the success of any project. Data leakage detection provides a set of tools and technologies that can efficiently resolve the concerns related to smart city critical data. The paper showcases an approach to detect leakage caused intentionally or unintentionally. The model represents the allotment of data objects between diverse agents using a bigraph. The objective is to make critical data secure by revealing the guilty agent who caused the data leakage.
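
A simplified guilty-agent scoring rule, common in the data-leakage-detection literature, conveys the intuition; the bigraph-based allotment model of the paper is reduced here to plain agent-object sets.

```python
# Simplified guilty-agent scoring for data leakage detection: an agent is
# more suspicious the more leaked objects it held, with each object's
# contribution discounted by the number of agents it was shared with.
# This is a generic scoring rule, not the paper's bigraph-based model.
def guilt_scores(allotment, leaked):
    # allotment: {agent: set of objects handed to that agent}
    # leaked: set of objects found outside the organisation
    holders = {obj: [a for a, objs in allotment.items() if obj in objs]
               for obj in leaked}
    scores = {agent: 0.0 for agent in allotment}
    for obj, agents_with_obj in holders.items():
        if not agents_with_obj:
            continue  # object leaked from an unknown source
        share = 1.0 / len(agents_with_obj)
        for agent in agents_with_obj:
            scores[agent] += share
    total = sum(scores.values()) or 1.0
    return {agent: round(s / total, 3) for agent, s in scores.items()}

allotment = {
    "transport-dept": {"o1", "o2", "o3"},
    "energy-dept": {"o2", "o4"},
    "vendor-x": {"o3", "o4", "o5"},
}
print(guilt_scores(allotment, leaked={"o3", "o4"}))
```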

Drozd, Oleksandr, Kharchenko, Vyacheslav, Rucinski, Andrzej, Kochanski, Thaddeus, Garbos, Raymond, Maevsky, Dmitry.  2019.  Development of Models in Resilient Computing. 2019 10th International Conference on Dependable Systems, Services and Technologies (DESSERT). :1—6.

The article analyzes the concept of "Resilience" in relation to the development of computing. The strategy for reacting to perturbations in this process can be based either on "harsh Resistance" or "smarter Elasticity." Our "Models" are descriptive in defining the path of evolutionary development as structuring under the perturbations of the natural order, and enable the analysis of the relationship among models, structures and factors of evolution. Among those, two features are critical: parallelism and "fuzziness", which to a large extent determine the rate of change of computing development, especially in critical applications. Both reversible and irreversible development processes related to elastic and resistant methods of problem solving are discussed. The sources of perturbations are located in the vicinity of the resource boundaries, related to problem sizes that grow with progress, combined with the lack of computational "checkability" of resources, i.e. data with inadequate models, methodologies and means. As a case study, the problem of hidden faults caused by growth, the deficit of resources, and the checkability of digital circuits in critical applications is analyzed.

Bartan, Burak, Pilanci, Mert.  2019.  Straggler Resilient Serverless Computing Based on Polar Codes. 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :276—283.

We propose a serverless computing mechanism for distributed computation based on polar codes. Serverless computing is an emerging cloud-based computation model that lets users run their functions on the cloud without provisioning or managing servers. Our proposed approach is a hybrid computing framework that carries out computationally expensive tasks, such as linear algebraic operations involving large-scale data, using serverless computing, and does the rest of the processing locally. We address the limitations and reliability issues of serverless platforms, such as straggling workers, using coding theory, drawing ideas from recent literature on coded computation. The proposed mechanism uses polar codes to ensure straggler resilience in a computationally effective manner. We provide extensive evidence showing polar codes outperform other coding methods. We have designed a sequential decoder specifically for polar codes in erasure channels with full-precision inputs and outputs. In addition, we have extended the proposed method to the matrix multiplication case where both matrices being multiplied are coded. The proposed coded computation scheme is implemented for AWS Lambda. Experimental results are presented in which the performance of the proposed coded computation technique is tested in optimization via gradient descent. Finally, we introduce the idea of partial polarization, which reduces the computational burden of encoding and decoding at the expense of straggler resilience.
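
The core polar transform is easy to sketch; the butterfly below implements the recursive kernel [[1, 0], [1, 1]] over the binary field, while the frozen-bit selection and erasure (straggler) decoding that the paper adds are omitted.

```python
# Basic polar transform (binary butterfly applying the kernel [[1,0],[1,1]]
# recursively). The coded-computation scheme in the paper builds on this
# transform and adds frozen-bit selection and an erasure decoder, both
# omitted from this sketch.
import numpy as np

def polar_transform(u):
    # u: 1-D array of bits with length a power of two.
    x = np.array(u, dtype=np.uint8) % 2
    n = len(x)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    step = 1
    while step < n:
        for start in range(0, n, 2 * step):
            for j in range(start, start + step):
                x[j] ^= x[j + step]   # upper branch gets the XOR combination
        step *= 2
    return x

u = np.array([1, 0, 1, 1, 0, 0, 1, 0])
x = polar_transform(u)
print("codeword: ", x)
# Over GF(2) the transform is an involution: applying it twice recovers u.
print("recovered:", polar_transform(x))
```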

2020-10-05
Ahmed, Abdelmuttlib Ibrahim Abdalla, Khan, Suleman, Gani, Abdullah, Hamid, Siti Hafizah Ab, Guizani, Mohsen.  2018.  Entropy-based Fuzzy AHP Model for Trustworthy Service Provider Selection in Internet of Things. 2018 IEEE 43rd Conference on Local Computer Networks (LCN). :606—613.

Nowadays, trust and reputation models are used to build a wide range of trust-based security mechanisms and trust-based service management applications on the Internet of Things (IoT). Considering trust as a single unit can result in missing important and significant factors. We split trust into its building blocks, then sort and assign weights to these building blocks (trust metrics) on the basis of their priorities for the transaction context of a particular goal. To perform these processes, we consider trust as a multi-criteria decision-making problem, where a set of trustworthiness metrics represent the decision criteria. We introduce the Entropy-based fuzzy analytic hierarchy process (EFAHP) as a trust model for selecting a trustworthy service provider, since decision making regarding multi-metric trust is structural in nature. EFAHP provides 1) fuzziness, which fits the vagueness, uncertainty, and subjectivity of trust attributes; 2) AHP, which is a systematic way of making decisions in complex multi-criteria decision making; and 3) the entropy concept, which is utilized to calculate the aggregate weights for each service provider. We present a numerical illustration in trust-based Service Oriented Architecture in the IoT (SOA-IoT) to demonstrate service provider selection using the EFAHP model in assessing and aggregating the trust scores.
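
The entropy-weighting step can be sketched on its own, assuming a made-up matrix of trust-metric scores per service provider; the fuzzy-AHP pairwise comparison stage of EFAHP is not shown.

```python
# Entropy-based objective weighting of trust metrics: metrics whose values
# vary more across service providers receive higher weight. Metric names and
# the score matrix are illustrative; the fuzzy-AHP stage of EFAHP is omitted.
import numpy as np

metrics = ["availability", "response_time", "honesty", "cooperativeness"]
# rows: candidate service providers, columns: trust metrics (higher = better)
scores = np.array([
    [0.9, 0.7, 0.8, 0.6],
    [0.8, 0.7, 0.4, 0.9],
    [0.9, 0.7, 0.9, 0.5],
])

m = scores.shape[0]
p = scores / scores.sum(axis=0)                    # column-wise normalisation
with np.errstate(divide="ignore", invalid="ignore"):
    plogp = np.where(p > 0, p * np.log(p), 0.0)
entropy = -plogp.sum(axis=0) / np.log(m)           # entropy per metric
divergence = 1.0 - entropy
weights = divergence / divergence.sum()            # objective metric weights

aggregate = scores @ weights                        # weighted trust per provider
for name, w in zip(metrics, weights):
    print(f"{name}: weight {w:.3f}")
print("provider aggregate trust:", np.round(aggregate, 3))
```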

Yu, Zihuan.  2018.  Research on Cloud Computing Security Evaluation Model Based on Trust Management. 2018 IEEE 4th International Conference on Computer and Communications (ICCC). :1934—1937.

At present, cloud computing technology has made outstanding contributions to the Internet in data unification and sharing applications. However, the problem of information security in the cloud computing environment must be addressed, and effective measures have to be taken to solve it. In order to control data security under cloud services, the DS evidence theory method is introduced. A trust management mechanism is established from the source of big data, and a cloud computing security assessment model is constructed to achieve quantifiable analysis of cloud computing security. Through simulation, the innovative way of quantifying the confidence criterion through big data trust management and DS evidence theory not only regulates the credible data quantification mechanism under cloud computing, but also improves the effectiveness of cloud computing security assessment, providing a friendly service support platform for subsequent cloud computing services.
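
Dempster's rule of combination, the core of the DS evidence theory mentioned above, can be sketched in a few lines; the security-state hypotheses and mass values are invented for the example.

```python
# Dempster's rule of combination for two basic probability assignments
# (mass functions) over a frame of discernment. Hypotheses and mass values
# are illustrative; only the combination rule itself is the point.
from itertools import product

def combine(m1, m2):
    # m1, m2: {frozenset of hypotheses: mass}, masses sum to 1.
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict; evidence cannot be combined")
    # Renormalise by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

SECURE, RISKY = frozenset({"secure"}), frozenset({"risky"})
EITHER = SECURE | RISKY   # ignorance: mass assigned to the whole frame

evidence_monitoring = {SECURE: 0.6, RISKY: 0.1, EITHER: 0.3}
evidence_audit_log  = {SECURE: 0.5, RISKY: 0.3, EITHER: 0.2}

fused = combine(evidence_monitoring, evidence_audit_log)
for hypothesis, mass in fused.items():
    print(set(hypothesis), round(mass, 3))
```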

Lago, Loris Dal, Ferrante, Orlando, Passerone, Roberto, Ferrari, Alberto.  2018.  Dependability Assessment of SOA-Based CPS With Contracts and Model-Based Fault Injection. IEEE Transactions on Industrial Informatics. 14:360—369.

Engineering complex distributed systems is challenging. Recent solutions for the development of cyber-physical systems (CPS) in industry tend to rely on architectural designs based on service orientation, where the constituent components are deployed according to their service behavior and are to be understood as loosely coupled and mostly independent. In this paper, we develop a workflow that combines contract-based and CPS model-based specifications with service orientation, and analyze the resulting model using fault injection to assess the dependability of the systems. Compositionality principles based on the contract specification help us to make the analysis practical. The presented techniques are evaluated on two case studies.

Parra, Pablo, Polo, Oscar R., Fernández, Javier, Da Silva, Antonio, Sanchez Prieto, Sebastian, Martinez, Agustin.  2018.  A Platform-Aware Model-Driven Embedded Software Engineering Process Based on Annotated Analysis Models. IEEE Transactions on Emerging Topics in Computing. :1—1.

In this work a platform-aware model-driven engineering process for building component-based embedded software systems using annotated analysis models is described. The process is supported by a framework, called MICOBS, that allows working with different component technologies and integrating different tools that, independently of the component technology, enable the analysis of non-functional properties based on the principles of composability and compositionality. An actor, called Framework Architect, is responsible for this integration. Three other actors take a relevant part in the analysis process. The Component Provider supplies the components, while the Component Tester is in charge of their validation. The latter also feeds MICOBS with the annotated analysis models that characterize the extra-functional properties of the components for the different platforms on which they can be deployed. The Application Architect uses these components to build new systems, performing the trade-off between different alternatives. At this stage, and in order to verify that the final system meets the extra-functional requirements, the Application Architect uses the reports generated by the integrated analysis tools. This process has been used to support the validation and verification of the on-board application software for the Instrument Control Unit of the Energetic Particle Detector of the Solar Orbiter mission.

Chakraborty, Anit, Dutta, Sayandip, Bhattacharyya, Siddhartha, Platos, Jan, Snasel, Vaclav.  2018.  Reinforcement Learning inspired Deep Learned Compositional Model for Decision Making in Tracking. 2018 Fourth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN). :158—163.

We formulate a tracker which performs continual decision making in order to track objects that may undergo different challenges such as partial occlusion, camera motion and cluttered backgrounds. In the process, the agent must decide whether to keep tracking the object when it is occluded or has moved out of the frame temporarily, based on its prediction from the previous location, or to reinitialize the tracker based on the belief that the target has been lost. Instead of heuristic methods, we depend on reward- and penalty-based training that helps the agent reach an optimal solution via this partially observable Markov decision process (POMDP). Furthermore, we employ a deeply learned compositional model to estimate human pose in order to better handle occlusion without needing human inputs. By learning the compositionality of human bodies via a deep neural network, the agent can make better decisions on the presence or absence of a human in a frame under occlusion. We adopt a skeleton-based part representation and do away with the large spatial state requirement. This especially helps in cases where the orientation of the target in focus is unorthodox. Finally, we demonstrate that deep reinforcement learning based training coupled with pose estimation capabilities allows us to train and tag multiple large video datasets much more quickly than previous works.