Online social networks (OSNs) have fundamentally transformed how billions of people use the Internet. These users increasingly discover books, bands, TV shows, movies, news articles, products, and other content through posts from trusted users they follow. All major OSNs have deployed content curation algorithms that are designed to increase interaction and act as the "gatekeepers" of what users see. While this curation and filtering is useful and necessary given the sheer volume of content available, it has also exposed people and platforms to manipulation attacks, whereby bad actors promote content that users would otherwise prefer not to see. This has driven the creation of an underground ecosystem that provides services and techniques tailored toward subverting OSNs' content curation algorithms for economic and ideological gain. This project will conduct open research to improve our understanding of the attackers who currently target algorithmic curation. The team will devise content curation algorithms and defenses that are hardened against manipulation and can be adopted by OSN platforms, providing a systematic approach to improving design and practice in an area of critical national importance. Technology transfer from this project will protect the integrity of social media discourse from adversarial manipulation. The project will also train students with expertise in security and machine learning, areas of broad national need, and produce educational materials to engage both high school students and the general public in these critical questions.

The team will holistically explore the economic, social, and technical dimensions of the weaknesses of machine learning-based content curation algorithms. The research comprises three main activities: 1) understanding how OSNs are currently being manipulated successfully and at large scale, 2) investigating the defenses OSNs have in place, and 3) designing more resilient defenses. The team will build the first-ever taxonomy of the services and techniques actively used to manipulate curation algorithms. Another thrust of the project is a framework for externally evaluating deployed manipulation defenses, based on collecting both public data from the OSN platforms and external data against which to compare it. The team will then develop robust and scalable algorithms to detect OSN manipulation within the collected data. Finally, the team will use the insights from the taxonomy of effective manipulation techniques and the exploration of the limitations of current defenses to design fundamentally resilient content curation algorithms. The project will explore both new curation algorithms and more effective mitigation techniques for existing ones. The project's findings will deepen our understanding of social network manipulation and adversarial learning and produce reliable approaches to algorithmic content curation.