Biblio

Filters: Keyword is Bot (Internet)
2022-04-12
Rane, Prachi, Rao, Aishwarya, Verma, Diksha, Mhaisgawali, Amrapali.  2021.  Redacting Sensitive Information from the Data. 2021 International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON). :1-5.
Redaction of personal, confidential, and sensitive information from documents is becoming increasingly important for individuals and organizations. In recent years, there have been many well-publicized data leaks from popular companies; when the leaked data contains sensitive information, these leaks pose a serious threat. To protect and conceal sensitive information, many companies are subject to policies and laws governing the processing and sanitizing of sensitive information in business documents. The traditional approach of manually finding and matching millions of words and then redacting them is slow and error-prone. This paper examines different models for automating the identification and redaction of personal and sensitive information within documents using named entity recognition. Sensitive entities targeted for redaction, for example a person's name, bank account details, or an Aadhaar number, are recognized based on the file's content, giving users an interactive way to redact documents by changing selected sensitive terms.
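As a rough illustration of the NER-based redaction workflow this abstract describes, the sketch below uses spaCy (an assumed toolkit; the paper does not name its NER library) to find person names, plus a simple regular expression for Aadhaar-style 12-digit numbers, and replaces the matched spans. The pattern and the replacement token are illustrative choices, not the paper's.

```python
import re
import spacy

# Assumes spaCy's small English model is installed:
#   pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Illustrative pattern for Aadhaar-style numbers (12 digits, optionally
# grouped 4-4-4); the paper's actual matching rules are not given.
AADHAAR_RE = re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b")

def redact(text, labels=("PERSON",)):
    """Replace NER hits and Aadhaar-like numbers with [REDACTED]."""
    doc = nlp(text)
    spans = [(e.start_char, e.end_char) for e in doc.ents if e.label_ in labels]
    spans += [m.span() for m in AADHAAR_RE.finditer(text)]
    # Apply replacements right-to-left so earlier offsets stay valid.
    for start, end in sorted(spans, reverse=True):
        text = text[:start] + "[REDACTED]" + text[end:]
    return text

print(redact("Priya Sharma's Aadhaar number is 1234 5678 9012."))
```

In an interactive setting like the one the paper describes, the detected spans would be shown to the user for confirmation before rewriting the document, rather than replaced unconditionally as above.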
2022-01-31
Li, Xigao, Azad, Babak Amin, Rahmati, Amir, Nikiforakis, Nick.  2021.  Good Bot, Bad Bot: Characterizing Automated Browsing Activity. 2021 IEEE Symposium on Security and Privacy (SP). :1589-1605.
As the web keeps increasing in size, the number of vulnerable and poorly managed websites increases commensurately. Attackers rely on armies of malicious bots to discover these vulnerable websites, compromise their servers, and exfiltrate sensitive user data. It is therefore crucial for the security of the web to understand the population and behavior of malicious bots. In this paper, we report on the design, implementation, and results of Aristaeus, a system for deploying large numbers of "honeysites", i.e., websites that exist for the sole purpose of attracting and recording bot traffic. Through a seven-month experiment with 100 dedicated honeysites, Aristaeus recorded 26.4 million requests sent by more than 287K unique IP addresses, 76,396 of which belonged to clearly malicious bots. By analyzing the types of requests and payloads that these bots send, we find that the average honeysite received more than 37K requests each month, with more than 50% of these requests attempting to brute-force credentials, fingerprint the deployed web applications, and exploit large numbers of different vulnerabilities. By comparing the declared identity of these bots with their TLS handshakes and HTTP headers, we uncover that more than 86.2% of bots claim to be Mozilla Firefox or Google Chrome, yet are built on simple HTTP libraries and command-line tools.
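The identity check in the abstract's last sentence can be sketched roughly as follows. This is an illustrative reconstruction, not Aristaeus code: it compares the browser family a client claims in its User-Agent header against a JA3-style TLS fingerprint, with placeholder hash values standing in for real measured browser fingerprints.

```python
# Illustrative sketch of the declared-identity vs. TLS-fingerprint check.
# The JA3-style hashes below are placeholders, not real fingerprints.

KNOWN_BROWSER_JA3 = {
    "Firefox": {"placeholder-ja3-hash-for-firefox"},
    "Chrome": {"placeholder-ja3-hash-for-chrome"},
}

def claimed_browser(user_agent: str):
    """Return the browser family the client claims in its User-Agent,
    using simple substring checks for the sake of illustration."""
    if "Firefox/" in user_agent:
        return "Firefox"
    if "Chrome/" in user_agent:
        return "Chrome"
    return None

def is_impersonating(user_agent: str, ja3_hash: str) -> bool:
    """Flag a client whose TLS fingerprint matches no known fingerprint
    for the browser it claims to be, the mismatch signal the paper uses
    to unmask bots built on HTTP libraries and command-line tools."""
    browser = claimed_browser(user_agent)
    if browser is None:
        return False  # no browser claim to contradict
    return ja3_hash not in KNOWN_BROWSER_JA3[browser]

# A command-line client spoofing Chrome's User-Agent would be flagged:
ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36")
print(is_impersonating(ua, "ja3-hash-of-a-python-requests-client"))  # True
```

In practice, such a check would be complemented by the HTTP-header comparisons the paper mentions, since simple HTTP clients typically omit or reorder headers that real browsers always send.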