Biblio
Semi-supervised learning has recently gained increasing attention because it can combine abundant unlabeled data with carefully labeled data to train deep neural networks. However, common semi-supervised methods rely heavily on the quality of pseudo labels. In this paper, we propose a new semi-supervised learning method based on the Generative Adversarial Network (GAN): instead of generating pseudo labels that cannot all be correct, the discriminator learns features from both labeled and unlabeled data. Our approach, the semi-supervised conditional GAN (SCGAN), builds upon the conditional GAN model, extending it to semi-supervised learning by giving the discriminator both a classification output and a real-or-fake output. We evaluate our approach against a basic semi-supervised model on the MNIST dataset. The results show that our approach achieves a classification accuracy of 84.15%, outperforming the basic semi-supervised model's 72.94%, when only 1/600 of the data are labeled.
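The abstract above describes a discriminator with two outputs: one for class prediction and one for real-versus-fake discrimination. The sketch below is a minimal, hypothetical PyTorch illustration of that idea for MNIST-sized inputs; the class name, layer sizes, and widths are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a discriminator with a shared
# feature extractor and two heads, as described in the SCGAN abstract:
# one head classifies the digit, the other scores real vs. fake.
import torch
import torch.nn as nn

class SCGANDiscriminator(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shared feature extractor for 28x28 MNIST images (assumed architecture).
        self.features = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 128),
            nn.LeakyReLU(0.2),
        )
        # Head 1: class logits, trained on the labeled subset.
        self.classifier = nn.Linear(128, num_classes)
        # Head 2: real/fake logit, trained on labeled, unlabeled, and generated data.
        self.adversarial = nn.Linear(128, 1)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h), self.adversarial(h)

# Usage: both heads share features, so unlabeled data still shapes the classifier.
disc = SCGANDiscriminator()
images = torch.randn(16, 1, 28, 28)               # dummy batch of MNIST-like images
class_logits, real_fake_logit = disc(images)
print(class_logits.shape, real_fake_logit.shape)  # torch.Size([16, 10]) torch.Size([16, 1])
```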
Using blockchain technology to solve the security problems of the Internet of Things (IoT) is a research hotspot. Although many related ideas have been proposed, very few works provide theoretical analysis and supporting data. This paper focuses on model construction and performance evaluation. First, an IoT security model is established based on blockchain and the InterPlanetary File System (IPFS). This model avoids many security risks of traditional IoT architectures and significantly improves system performance in distributed large-capacity storage, concurrency, and query. Second, the performance of the proposed model is evaluated through average latency and throughput, which is meaningful for further research and optimization in this direction. Analysis and test results demonstrate the effectiveness of the blockchain-based security model.
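The two evaluation metrics named above, average latency and throughput, reduce to simple formulas over per-transaction timestamps. The sketch below is an illustrative computation on hypothetical data, not the paper's benchmark harness.

```python
# Illustrative sketch (not the paper's benchmark): computing the two metrics
# named in the abstract from per-transaction submit/confirm timestamps.
#   average latency = mean(confirm_time - submit_time)
#   throughput      = confirmed transactions / total elapsed time
from statistics import mean

# Hypothetical (submit_time, confirm_time) pairs in seconds for one test run.
transactions = [(0.00, 0.42), (0.10, 0.55), (0.20, 0.71), (0.35, 0.90)]

latencies = [confirm - submit for submit, confirm in transactions]
avg_latency = mean(latencies)

elapsed = max(c for _, c in transactions) - min(s for s, _ in transactions)
throughput = len(transactions) / elapsed  # transactions per second

print(f"average latency: {avg_latency:.3f} s")
print(f"throughput: {throughput:.2f} tx/s")
```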
An approximate sparse recovery system in the ℓ_1 norm consists of parameters k, ε, N; an m-by-N measurement matrix Φ; and a recovery algorithm R. Given a vector x, the system approximates x by x̂ = R(Φx), which must satisfy ‖x̂ − x‖_1 ≤ (1 + ε)‖x − x_k‖_1, where x_k is the best k-term approximation of x. We consider the “for all” model, in which a single matrix Φ, possibly “constructed” non-explicitly using the probabilistic method, is used for all signals x. The best existing sublinear algorithm by Porat and Strauss [2012] uses O(ε^{-3} k log(N/k)) measurements and runs in time O(k^{1−α} N^α) for any constant α > 0. In this article, we improve the number of measurements to O(ε^{-2} k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to O(k^{1+β} poly(log N, 1/ε)), with a modest restriction that k ≤ N^{1−α} and ε ≤ (log k / log N)^γ for any constants α, β, γ > 0. When k ≤ log^c N for some c > 0, the runtime is reduced to O(k poly(N, 1/ε)). With no restrictions on ε, we have an approximate recovery system with m = O((k/ε) log(N/k)((log N / log k)^γ + 1/ε)) measurements. The overall architecture of this algorithm is similar to that of Porat and Strauss [2012] in that we repeatedly use a weak recovery system (with varying parameters) to obtain a top-level recovery algorithm. The weak recovery system consists of a two-layer hashing procedure (or two unbalanced expanders for a deterministic algorithm). The algorithmic innovation is a novel encoding procedure that is reminiscent of network coding and that reflects the structure of the hashing stages. The idea is to encode the signal position index i by associating it with a unique message m_i, which is then encoded into a longer message m′_i (in contrast to Porat and Strauss [2012], in which the encoding is simply the identity). Portions of the message m′_i correspond to repetitions of the hashing, and we use a regular expander graph to encode the linkages among these portions. The decoding or recovery algorithm consists of recovering the portions of the longer messages m′_i and then decoding them to the original messages m_i, all the while ensuring that corruptions can be detected and/or corrected. The recovery algorithm is similar to the list recovery introduced in Indyk et al. [2010] and used in Gilbert et al. [2013]. In our algorithm, the messages {m_i} are independent of the hashing, which enables us to obtain a better result.
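The ℓ_1/ℓ_1 guarantee stated at the start of this abstract can be made concrete with a small numeric check. The sketch below is an illustration of that guarantee only, not the paper's recovery algorithm: it computes the best k-term approximation x_k and tests whether a candidate x̂ satisfies the bound for toy, hypothetical data.

```python
# Illustrative check (not the paper's algorithm): given a signal x, a candidate
# approximation x_hat, and parameters k and eps, test the l1/l1 guarantee
#   ||x_hat - x||_1 <= (1 + eps) * ||x - x_k||_1,
# where x_k keeps only the k largest-magnitude entries of x.
import numpy as np

def best_k_term(x: np.ndarray, k: int) -> np.ndarray:
    """Best k-term approximation: zero out all but the k largest |x_i|."""
    xk = np.zeros_like(x)
    top = np.argsort(np.abs(x))[-k:]   # indices of the k largest-magnitude entries
    xk[top] = x[top]
    return xk

def satisfies_l1_guarantee(x, x_hat, k, eps) -> bool:
    xk = best_k_term(x, k)
    return np.sum(np.abs(x_hat - x)) <= (1 + eps) * np.sum(np.abs(x - xk))

# Hypothetical example: a nearly 2-sparse signal and a crude approximation of it.
x = np.array([5.0, 0.1, -3.0, 0.05, 0.0, 0.2])
x_hat = np.array([5.0, 0.0, -3.1, 0.0, 0.0, 0.0])
print(satisfies_l1_guarantee(x, x_hat, k=2, eps=0.5))  # True for this toy data
```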