Biblio
Filters: Author is Yuan, Yuyu
A Novel Trust-based Model for Collaborative Filtering Recommendation Systems using Entropy. 2021 8th International Conference on Dependable Systems and Their Applications (DSA). :184–188.
2021. With the proliferation of false and redundant information on e-commerce platforms, ineffective recommendations and other untrustworthy behaviors have seriously hindered the healthy development of these platforms. Modern recommendation systems often use side information to alleviate these problems and to increase prediction accuracy. One such piece of side information, which has been widely investigated, is trust. However, explicit trust relationship data are difficult to obtain, so researchers infer trust values by other means, such as the user-to-item relationship. In this paper, to address these problems, we propose a novel trust-based recommender model called UITrust, which uses the user-item relationship value to improve prediction accuracy. We improve traditional similarity measures by employing the entropies of user and item rating histories to reflect the global rating behavior of both. We evaluate the proposed model on two real-world datasets, where it performs significantly better than the baseline methods. UITrust can also alleviate the sparsity problem associated with correlation-based similarity. In addition, the proposed model has better computational complexity for making predictions than the k-nearest neighbor (kNN) method.
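The abstract describes weighting similarity by the entropy of rating histories but gives no formula. The sketch below is a hypothetical illustration of that idea, not the paper's actual UITrust model: the entropy of each history is normalized by the maximum entropy of a 5-star scale (an assumption) and used to scale a base similarity.

```python
import math
from collections import Counter

def rating_entropy(ratings):
    """Shannon entropy (bits) of a user's or item's rating history,
    reflecting how varied the global rating behavior is."""
    counts = Counter(ratings)
    total = len(ratings)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_weighted_similarity(base_sim, ratings_u, ratings_v,
                                max_entropy=math.log2(5)):
    """Scale a base similarity by the normalized entropies of two rating
    histories (hypothetical weighting; the paper's exact formula is not
    given in the abstract). max_entropy assumes a 5-point rating scale."""
    w_u = rating_entropy(ratings_u) / max_entropy
    w_v = rating_entropy(ratings_v) / max_entropy
    return base_sim * w_u * w_v
```

A user who always gives the same rating has zero entropy, so under this toy weighting their history contributes no discriminative information to the similarity.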
Measuring Trust and Automatic Verification in Multi-Agent Systems. 2021 8th International Conference on Dependable Systems and Their Applications (DSA). :271–277.
2021. Due to the shortage of resources and services, agents are often in competition with each other, and excessive competition leads to a social dilemma. With the aim of breaking this social dilemma, we present a novel trust-based logic framework called Trust Computation Logic (TCL) for measuring trust, finding the best partners to collaborate with, and automatically verifying trust in Multi-Agent Systems (MASs). TCL starts by defining the trust state in a Multi-Agent System, which is based on the contrast between behaviors in a trust behavior library and observed behaviors. In particular, we put forward a set of reasoning postulates, along with formal proofs, to support our measurement process. Moreover, we introduce symbolic model checking algorithms to formally and automatically verify the system. Finally, we evaluated the trust measurement method and report experimental results using DeepMind's Sequential Social Dilemma (SSD) multi-agent game-theoretic environments.