Biblio
Since neural networks can be used to extract information from an image, Gatys et al. found that they could separate the content and style of images and recombine them into a new image, a process called Style Transfer. Moreover, many feed-forward neural networks have been proposed to speed up the original method and make Style Transfer practical. However, this comes at a price: these feed-forward networks are inflexible because of their fixed parameters, which means they cannot transfer arbitrary styles in real time but only a single one. Several approaches have been proposed to relieve this dilemma, such as a style-swap layer and an adaptive instance normalization (AdaIN) layer. It is worth noting that the AdaIN layer only aligns the means and variances of the content feature maps with those of the style feature maps. Our method aims to provide a practical approach that enables arbitrary style transfer in real time, preserving more statistical information through histogram matching and providing more reliable texture clarity and more intuitive user control. We achieve better quality than existing approaches without adding computational complexity, at a speed comparable to the fastest Style Transfer methods. Our method provides more flexible user control and trustworthy quality and stability.
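As a rough illustration of the statistics-alignment step this abstract contrasts against, the sketch below shows AdaIN-style mean/variance alignment of content feature maps to style feature maps, plus a simple channel-wise histogram matching pass that preserves more of the style statistics. This is not the authors' implementation; the array shapes, function names, and the quantile-matching variant are assumptions.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Align per-channel mean/std of content features to those of the style
    features. content, style: arrays of shape (C, H, W) (assumed layout)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return (content - c_mean) / c_std * s_std + s_mean

def histogram_match(content, style):
    """Channel-wise histogram matching: remap content values so their
    empirical distribution follows the style distribution, which keeps more
    statistics than mean and variance alone."""
    out = np.empty_like(content)
    for c in range(content.shape[0]):
        src, ref = content[c].ravel(), style[c].ravel()
        src_order = np.argsort(src)
        ref_sorted = np.sort(ref)
        # assign the i-th smallest content value the i-th style quantile
        matched = np.empty_like(src)
        matched[src_order] = ref_sorted[
            np.linspace(0, ref_sorted.size - 1, src.size).astype(int)]
        out[c] = matched.reshape(content[c].shape)
    return out
```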
We present ctrlTCP, a method to combine the congestion controls of multiple TCP connections. In contrast to previous methods such as the Congestion Manager, ctrlTCP can couple all TCP flows that leave one sender, traverse a common bottleneck (e.g., a home user's thin uplink), and arrive at different destinations. Using ns-2 simulations and an implementation in the FreeBSD kernel, we show that our mechanism reduces queuing delay, packet loss, and short flow completion times while enabling precise allocation of the share of the available bandwidth between the connections according to the needs of the applications.
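A minimal sketch of the general coupling idea behind such mechanisms (not the ctrlTCP algorithm itself, whose details are in the paper): flows sharing a bottleneck are registered with a coordinator that maintains one aggregate congestion window and redistributes it according to application-specified priorities. All class and field names are assumptions.

```python
class CoupledFlows:
    """Toy coupled congestion control: keep one aggregate window for all
    flows sharing a bottleneck and split it by per-flow priority weight."""

    def __init__(self):
        self.flows = {}  # flow_id -> priority weight

    def register(self, flow_id, priority=1.0):
        self.flows[flow_id] = priority

    def allocate(self, aggregate_cwnd):
        """Split the aggregate congestion window proportionally to priority."""
        total = sum(self.flows.values())
        return {fid: aggregate_cwnd * w / total for fid, w in self.flows.items()}

# Example: the interactive flow gets twice the bulk flow's share.
coupler = CoupledFlows()
coupler.register("bulk", priority=1.0)
coupler.register("interactive", priority=2.0)
print(coupler.allocate(aggregate_cwnd=30))  # {'bulk': 10.0, 'interactive': 20.0}
```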
Existing systems allow manufacturers to acquire factory floor data and perform analysis with cloud applications for machine health monitoring, product quality prediction, fault diagnosis, prognosis, and so on. However, they do not provide capabilities to test machine tools and associated components remotely, which is often crucial to identify causes of failure. This paper presents a fault diagnosis system in a cyber-physical manufacturing cloud (CPMC) that allows manufacturers to perform diagnosis and maintenance of manufacturing machine tools through remote monitoring and online testing using Machine Tool Communication (MTComm). MTComm is an Internet-scale communication method that enables both monitoring and operation of heterogeneous machine tools through RESTful web services over the Internet. It allows manufacturers to perform testing operations from cloud applications at both the machine and component level for regular maintenance and fault diagnosis. This paper describes the components of the system, their functionalities in the CPMC, and the techniques used for anomaly detection and remote online testing using MTComm. It also presents the development of a prototype of the proposed system in a CPMC testbed. Experiments were conducted to evaluate its performance in diagnosing faults and testing machine tools remotely during various manufacturing scenarios. The results demonstrated excellent feasibility of detecting anomalies during manufacturing operations and performing testing operations remotely from cloud applications using MTComm.
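To make the "monitoring and operation over RESTful web services" idea concrete, the following sketch shows how a cloud application might read a component's status and trigger a remote test. The base URL, resource paths, and payload fields are hypothetical placeholders, not the actual MTComm API.

```python
import json
from urllib.request import Request, urlopen

BASE_URL = "http://cpmc.example.com/mtcomm"  # hypothetical endpoint

def read_component_status(machine_id, component_id):
    """Monitoring: fetch the latest sensor readings of one machine component."""
    url = f"{BASE_URL}/machines/{machine_id}/components/{component_id}/status"
    with urlopen(url) as resp:
        return json.load(resp)

def run_component_test(machine_id, component_id, test_name):
    """Operation: trigger a remote online test on a component."""
    url = f"{BASE_URL}/machines/{machine_id}/components/{component_id}/tests"
    body = json.dumps({"test": test_name}).encode()
    req = Request(url, data=body, headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)
```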
The presence of robots is becoming more apparent as technology progresses and the market focus transitions from smartphones to robotic personal assistants such as those provided by Amazon and Google. The integration of robots into our societies is an inevitable tendency in which robots in many forms and with many functionalities will provide services to humans. This calls for an understanding of how humans are affected both by the presence of robots and by relying on them to perform services. In this paper we explore the effects that robots have on humans when a service is performed on request. We expose three groups of human participants to three levels of service completion performed by robots. We record and analyse human perceptions such as propensity to trust, competency, responsiveness, sociability, and teamwork ability. Our results demonstrate that humans tend to trust robots and are more willing to interact with them when they autonomously recover from failure by requesting help from other robots to fulfil their service. This supports the view that autonomy and teamworking capabilities must be brought into robots in an effort to strengthen trust in robots performing a service.
This Work-In-Progress Paper for the Innovative Practice Category presents a novel experiment in active learning of cybersecurity. We introduced a new workshop on hacking for an existing science-popularizing program at our university. The workshop participants, 28 teenagers, played a cybersecurity game designed for training undergraduates and professionals in penetration testing. Unlike in learning environments that are simplified for young learners, the game features a realistic virtual network infrastructure. This allows exploring security tools in an authentic scenario, which is complemented by a background story. Our research aim is to examine how young players approach using cybersecurity tools by interacting with the professional game. A preliminary analysis of the game session showed several challenges that the workshop participants faced. Nevertheless, they reported learning about security tools and exploits, and 61% of them reported wanting to learn more about cybersecurity after the workshop. Our results support the notion that young learners should be allowed more hands-on experience with security topics, both in formal education and informal extracurricular events.
Testing, an indispensable part of software engineering, is itself an art and a science that emerged as a discipline over time. When testing finds defects, testers reduce risk by providing awareness of the defects and of ways to deal with them before release. If testing does not find any defects, it provides assurance that, under the tested conditions, the system functions correctly. To guarantee that enough testing has been done, the major risk areas need to be tested: we have to identify the risks, analyse them, and control them, and we need to categorize the risk items to decide the extent of testing to be covered. In addition, the implementation of structured metrics is lagging in software testing. Efficient metrics are necessary to evaluate and manage the testing process and to make testing part of the engineering discipline. This paper proposes risk-based testing using the FMEA technique and provides an ideal set of metrics that help ensure an effective testing process.
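One common FMEA convention for prioritizing risk items (used here as a generic illustration, not a metric taken from the paper) is the Risk Priority Number RPN = severity × occurrence × detection, with the highest-RPN items tested most thoroughly. A minimal sketch, with made-up example items:

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    name: str
    severity: int    # 1..10, impact of the failure
    occurrence: int  # 1..10, likelihood of the failure
    detection: int   # 1..10, difficulty of detection (10 = hardest)

    @property
    def rpn(self):
        """Risk Priority Number used to rank items for test coverage."""
        return self.severity * self.occurrence * self.detection

items = [
    RiskItem("payment timeout", severity=9, occurrence=4, detection=6),
    RiskItem("typo in label", severity=2, occurrence=5, detection=2),
]
for item in sorted(items, key=lambda i: i.rpn, reverse=True):
    print(item.name, item.rpn)  # test the highest-RPN items first
```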
Crowd sensing is one of the core features of the internet of vehicles, and using the internet of vehicles for crowd sensing is conducive to the rational allocation of sensing tasks. This paper studies the task allocation problem for crowd sensing in the internet of vehicles and proposes a trajectory-based task allocation scheme. Under a limited budget constraint, participants' trajectories are taken as an indicator of spatiotemporal availability. Based on the solution idea of the minimum-cover problem, the scheme selects the minimum number of participating vehicles needed to cover the target area.
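A hedged sketch of the greedy minimum-cover idea the abstract alludes to: given each candidate vehicle's trajectory (modeled as the set of target cells it passes through) and a recruitment cost, repeatedly pick the vehicle that covers the most still-uncovered cells per unit cost until the area is covered or the budget is exhausted. The data structures and cost model below are assumptions, not the paper's exact formulation.

```python
def select_vehicles(target_cells, trajectories, costs, budget):
    """Greedy budgeted set cover: trajectories maps vehicle -> set of cells,
    costs maps vehicle -> recruitment cost."""
    uncovered = set(target_cells)
    chosen, spent = [], 0.0
    while uncovered:
        best, best_ratio = None, 0.0
        for v, cells in trajectories.items():
            if v in chosen or spent + costs[v] > budget:
                continue
            gain = len(cells & uncovered)
            if gain and gain / costs[v] > best_ratio:
                best, best_ratio = v, gain / costs[v]
        if best is None:          # budget exhausted or no vehicle helps
            break
        chosen.append(best)
        spent += costs[best]
        uncovered -= trajectories[best]
    return chosen, uncovered

chosen, missed = select_vehicles(
    target_cells={1, 2, 3, 4},
    trajectories={"v1": {1, 2}, "v2": {3}, "v3": {3, 4}},
    costs={"v1": 1.0, "v2": 1.0, "v3": 1.5},
    budget=3.0)
print(chosen, missed)  # ['v1', 'v3'] set()
```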
Among malware family classification methods, image-based methods have attracted much attention. In particular, owing to their fast classification speed and high accuracy, Convolutional Neural Network (CNN)-based malware family classification methods have been widely studied. However, previous studies on CNN-based classification methods focused only on improving the classification accuracy of malware families; they did not consider that the accuracy of CNN-based malware classification can decrease under adversarial attacks. In this paper, we analyze the robustness of various CNN-based malware family classification models under adversarial attacks. While adding imperceptible non-random perturbations to the input image, we measure how the accuracy of the CNN-based malware family classification model is affected. We also show the influence of three significant visualization parameters (the size of the input image, the dimension of the input image, and the conversion color of a special character) on the accuracy variation under adversarial attacks. Evaluation results on the Microsoft malware dataset show that the accuracy of a CNN-based malware family classification method can drop from over 98% to less than 7%.
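The abstract does not name the specific attack beyond "imperceptible non-random perturbations"; as one standard example of such a perturbation, the FGSM-style step below nudges the input image in the sign of the loss gradient and then measures the accuracy change. The gradient is assumed to come from whichever CNN framework is in use; function names and the epsilon value are assumptions.

```python
import numpy as np

def fgsm_perturb(image, grad_wrt_image, epsilon=2.0 / 255):
    """Add an imperceptible perturbation: step in the sign of the loss
    gradient. image: float array in [0, 1], e.g. a grayscale malware image."""
    adversarial = image + epsilon * np.sign(grad_wrt_image)
    return np.clip(adversarial, 0.0, 1.0)

def accuracy(model_predict, images, labels):
    """Classification accuracy, computed before and after perturbation."""
    preds = [model_predict(x) for x in images]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```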
In our daily lives, advances in new technology can be used to sustain the development of people across the globe; in particular, e-government can be a dynamo of that development. The growth of technology and the rapid growth in the use of the internet create a big challenge for administration in both the public and the private sector. E-government is a vital accomplishment, but security is the main downside that arises in every e-government process. E-government has to remain secure as technology grows, and users have to follow the procedures to keep their own transactions safe. This paper tackles the challenges and obstacles of enhancing the security of information in e-government. To achieve this security, data hiding techniques are found to be trustworthy. Reversible data hiding (RDH) is an emerging technique that helps retain the quality of the cover image and is therefore preferred over traditional data hiding techniques. The existing algorithm is modified in both the image encryption scheme and the data hiding scheme in order to improve the results. To achieve this, the secret data is split into 20 parts and data concealment is performed on each part. The data hiding procedure embeds the data into the least significant nibble of the cover image, and the bits are distributed equally across the cover image to obtain the key security parameters. The obtained results validate that the proposed scheme is better than the existing schemes.
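A minimal sketch of the least-significant-nibble embedding step described above; the splitting into 20 parts, the encryption, the equal distribution of bits, and the reversibility bookkeeping of the actual scheme are omitted, and the sequential pixel layout is an assumption.

```python
def embed_nibbles(cover_pixels, secret_bytes):
    """Hide each 4-bit nibble of the secret in the low nibble of one pixel."""
    nibbles = []
    for b in secret_bytes:
        nibbles.extend([(b >> 4) & 0x0F, b & 0x0F])
    if len(nibbles) > len(cover_pixels):
        raise ValueError("cover image too small for the secret data")
    stego = list(cover_pixels)
    for i, n in enumerate(nibbles):
        stego[i] = (stego[i] & 0xF0) | n   # replace the least significant nibble
    return stego

def extract_nibbles(stego_pixels, n_bytes):
    """Recover the secret bytes from the low nibbles."""
    out = bytearray()
    for i in range(n_bytes):
        hi = stego_pixels[2 * i] & 0x0F
        lo = stego_pixels[2 * i + 1] & 0x0F
        out.append((hi << 4) | lo)
    return bytes(out)

pixels = [200, 13, 77, 54, 120, 9]
stego = embed_nibbles(pixels, b"Hi")
print(extract_nibbles(stego, 2))  # b'Hi'
```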
In the open network environment, network offensive information is embedded in the big data environment, so it is necessary to carry out accurate location marking of this information in order to realize network attack detection. Combined with big data analysis methods, the location of network attack nodes can be determined, but when network attacks cross in series, the performance of attack information tagging is poor. This paper proposes an accurate marking technique for network attack information based on big data fusion tracking recognition. An adaptive learning model combined with big data is used to mark and sample the network attack information, and a feature analysis model of the attack information chain is designed by extracting association rules. The data types of the network attack nodes are classified, the network attack detection ability is improved by a task scheduling method over the network attack information nodes, and accurate marking of the network attack information is realized. Simulation results show that the proposed algorithm effectively improves the accuracy of marking offensive information in an open network environment, improves the efficiency of attack detection and the ability of intrusion prevention, and has good application value in the field of network security defense.
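The abstract does not define its association-rule extraction precisely; purely as a generic illustration, the support and confidence of a candidate rule over logged attack events can be computed as below. The event representation and field names are assumptions.

```python
def rule_metrics(events, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent over a
    list of event feature sets (e.g. {'port_scan', 'node_17'})."""
    n = len(events)
    both = sum(1 for e in events if antecedent <= e and consequent <= e)
    ante = sum(1 for e in events if antecedent <= e)
    support = both / n if n else 0.0
    confidence = both / ante if ante else 0.0
    return support, confidence

events = [{"port_scan", "node_17"}, {"port_scan"}, {"port_scan", "node_17", "exfil"}]
print(rule_metrics(events, {"port_scan"}, {"node_17"}))  # (0.666..., 0.666...)
```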
A Mobile Ad hoc Network (MANET) is a collection of mobile devices that can change their locations and configure themselves without a centralized base point. Mobile Ad hoc Networks are vulnerable to attacks due to their dynamic infrastructure, and routing attacks are among the possible attacks that cause damage to a MANET. This paper gives a new risk-aware response technique that combines Dijkstra's shortest path algorithm with the Destination Sequenced Distance Vector (DSDV) algorithm; this combination can reduce black hole attacks. Dijkstra's algorithm finds the shortest path from a single source to the destination when the edges have positive weights. DSDV is an improved version of the conventional distance-vector technique obtained by adding a sequence number and the next-hop address to each routing table entry.
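For reference, a standard Dijkstra shortest-path implementation over a positively weighted graph, which is the first half of the combined technique; the DSDV sequence-number bookkeeping and the risk-aware response logic are not shown, and the graph representation is an assumption.

```python
import heapq

def dijkstra(graph, source):
    """graph: {node: {neighbor: positive_weight}}. Returns shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

net = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}
print(dijkstra(net, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```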