Biblio

Filters: Keyword is Ethics
2023-08-24
Aliman, Nadisha-Marie, Kester, Leon.  2022.  VR, Deepfakes and Epistemic Security. 2022 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). :93–98.
In recent years, technological advancements in the AI and VR fields have increasingly often been paired with considerations on ethics and safety aimed at mitigating unintentional design failures. However, cybersecurity-oriented AI and VR safety research has emphasized the need to additionally appraise instantiations of intentional malice exhibited by unethical actors at pre- and post-deployment stages. On top of that, in view of ongoing malicious deepfake developments that can represent a threat to the epistemic security of a society, security-aware AI and VR design strategies require an epistemically-sensitive stance. In this vein, this paper provides a theoretical basis for two novel AIVR safety research directions: 1) VR as immersive testbed for a VR-deepfake-aided epistemic security training and 2) AI as catalyst within a deepfake-aided so-called cyborgnetic creativity augmentation facilitating an epistemically-sensitive threat modelling. For illustration, we focus our use case on deepfake text – an underestimated deepfake modality. In the main, the two proposed transdisciplinary lines of research exemplify how AIVR safety to defend against unethical actors could naturally converge toward AIVR ethics whilst counteracting epistemic security threats.
ISSN: 2771-7453
2023-06-30
Libicki, Martin C..  2022.  Obnoxious Deterrence. 2022 14th International Conference on Cyber Conflict: Keep Moving! (CyCon). 700:65–77.
The reigning U.S. paradigm for deterring malicious cyberspace activity carried out by or condoned by other countries is to levy penalties on them. The results have been disappointing. There is little evidence of the permanent reduction of such activity, and the narrative behind the paradigm presupposes a U.S./allied posture that assumes the morally superior role of judge upon whom also falls the burden of proof, a posture not accepted but nevertheless exploited by other countries. In this paper, we explore an alternative paradigm, obnoxious deterrence, in which the United States itself carries out malicious cyberspace activity that is used as a bargaining chip to persuade others to abandon objectionable cyberspace activity. We then analyze the necessary characteristics of this malicious cyberspace activity, which is generated only to be traded off. It turns out that two fundamental criteria, that the activity be sufficiently obnoxious to induce bargaining but not so valuable that it cannot be traded away, may greatly reduce the feasibility of such a ploy. Even if symmetric agreements are easier to enforce than pseudo-symmetric agreements (e.g., the Xi-Obama agreement of 2015) or asymmetric red lines (e.g., the Biden demand that Russia not condone its citizens hacking U.S. critical infrastructure), when violations occur, many of today's problems recur. We then evaluate the practical consequences of this approach, one that is superficially attractive.
ISSN: 2325-5374
2023-04-14
Qian, Jun, Gan, Zijie, Zhang, Jie, Bhunia, Suman.  2022.  Analyzing SocialArks Data Leak - A Brute Force Web Login Attack. 2022 4th International Conference on Computer Communication and the Internet (ICCCI). :21–27.
In this work, we discuss data breaches based on the "2012 SocialArks data breach" case study. Data leakage refers to security violations in which unauthorized individuals copy, transmit, view, steal, or use sensitive, protected, or confidential data. Data leakage is becoming more and more serious, as traditional information security protections such as anti-virus software, intrusion detection, and firewalls find it increasingly challenging to cope on their own. Fortunately, rapidly evolving IT technologies are challenging traditional security practices and providing new opportunities to develop the information security market. The SocialArks data breach was caused by a misconfiguration of an ElasticSearch database owned by SocialArks, which is owned by Tencent. The attack methodology is classic, and five common Elasticsearch mistakes are discussed to explain how such leaks become possible. The defense solution focuses on how to harden the Elasticsearch server. Furthermore, Elasticsearch's open-source nature also raises ethical problems: anyone can download and install it for free, and almost anywhere. Some companies install it on their internal servers, while others deploy it in the cloud (on any provider they want). There are also cloud service companies that provide hosted versions of Elasticsearch, meaning they host and manage Elasticsearch clusters for their customers, as Tencent does.
Selvaganesh, M., Naveen Karthi, P., Nitish Kumar, V. A., Prashanna Moorthy, S. R..  2022.  Efficient Brute-force handling methodology using Indexed-Cluster Architecture of Splunk. 2022 International Conference on Electronics and Renewable Systems (ICEARS). :697–701.
Brute force is a hacking methodology used to decrypt login passwords, keys, and credentials. Hacks that exploit vulnerabilities in packages are rare, whereas brute-force attacks aim to be the simplest, cheapest, and most straightforward approach to accessing a website. Using Splunk to analyse massive amounts of data can be very beneficial. The application makes it possible to capture, search, and analyse log information in real time. By analysing logs as well as many other sources of system information, security events can be uncovered. A log file, which details the events that have occurred in the environment of the application and the server on which it runs, is a valuable piece of information. Attacks against these systems can be identified by analysing and correlating this information. Massive amounts of ambiguous and amorphous information can be analysed with Splunk's superior resolution. The paper includes instructions on setting up a Splunk server and routing information to it from multiple sources. Practical search examples and pre-built add-on applications are provided. Splunk is a powerful tool that allows users to explore big data with greater ease. Suspicious activity can be tracked in near real time by searching through logs, and big data can be analysed in a short amount of time using map-reduce technology. Briefly, Splunk helps analyse unstructured log data to better understand how applications operate. With Splunk, clients can detect patterns in the data through a powerful query language, and it is easy to set up alerts and warnings based on these queries, which notify the client about ongoing (suspected) activity in real time.
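The log-correlation idea described in the abstract can be sketched outside Splunk as well. The following minimal Python sketch counts failed logins per source IP and flags those over a threshold; the log format, field names, and threshold are invented for illustration, and a real deployment would express this as an SPL query over indexed events (typically with a time window as well):

```python
import re
from collections import Counter

# Hypothetical auth-log lines; a real Splunk setup would index these and
# run a search query instead -- this sketch only illustrates the idea.
LOG_LINES = [
    "2022-01-10T10:00:01 sshd: Failed password for root from 203.0.113.5",
    "2022-01-10T10:00:02 sshd: Failed password for root from 203.0.113.5",
    "2022-01-10T10:00:03 sshd: Failed password for admin from 203.0.113.5",
    "2022-01-10T10:00:04 sshd: Accepted password for alice from 198.51.100.7",
]

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def brute_force_suspects(lines, threshold=3):
    """Count failed logins per source IP and flag those at/over threshold."""
    fails = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            fails[m.group(1)] += 1
    return {ip: n for ip, n in fails.items() if n >= threshold}

print(brute_force_suspects(LOG_LINES))  # flags 203.0.113.5 with 3 failures
```

In practice the threshold and the aggregation window are tuned per environment; the point is only that correlating many log events per source reveals the brute-force pattern that no single event shows.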
2023-03-31
Kahla, Mostafa, Chen, Si, Just, Hoang Anh, Jia, Ruoxi.  2022.  Label-Only Model Inversion Attacks via Boundary Repulsion. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :15025–15033.
Recent studies show that the state-of-the-art deep neural networks are vulnerable to model inversion attacks, in which access to a model is abused to reconstruct private training data of any given target class. Existing attacks rely on having access to either the complete target model (whitebox) or the model's soft-labels (blackbox). However, no prior work has been done in the harder but more practical scenario, in which the attacker only has access to the model's predicted label, without a confidence measure. In this paper, we introduce an algorithm, Boundary-Repelling Model Inversion (BREP-MI), to invert private training data using only the target model's predicted labels. The key idea of our algorithm is to evaluate the model's predicted labels over a sphere and then estimate the direction to reach the target class's centroid. Using the example of face recognition, we show that the images reconstructed by BREP-MI successfully reproduce the semantics of the private training data for various datasets and target model architectures. We compare BREP-MI with the state-of-the-art white-box and blackbox model inversion attacks, and the results show that despite assuming less knowledge about the target model, BREP-MI outperforms the blackbox attack and achieves comparable results to the whitebox attack. Our code is available online: https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion
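The core loop the abstract describes (query hard labels on a sphere around the current estimate, then move toward the target class's centroid) can be sketched on a toy 2-D model. Everything below (the stand-in classifier, centroid, radius, and step size) is invented for illustration and greatly simplified from the paper's BREP-MI algorithm, which operates in a GAN's latent space against a real face-recognition model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a label-only target model: class 1 is everything within
# distance 2 of the centroid (3, 3), which is unknown to the attacker.
# The attack only ever observes hard labels -- no confidences, no gradients.
CENTROID = np.array([3.0, 3.0])

def predict_label(x):
    return int(np.linalg.norm(x - CENTROID) < 2.0)

def brep_step(z, radius=1.0, n_dirs=64, lr=0.5):
    """One simplified boundary-repulsion step: sample unit directions on a
    sphere around z, keep those the model still labels as the target class,
    and move z along their mean."""
    dirs = rng.normal(size=(n_dirs, z.size))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    hits = np.array([predict_label(z + radius * d) for d in dirs], bool)
    if not hits.any():
        return z  # the sphere saw no target-class points; z is stuck
    return z + lr * dirs[hits].mean(axis=0)

z = np.array([1.2, 1.2])  # initial guess outside the target region
for _ in range(60):
    z = brep_step(z)
print(z, np.linalg.norm(z - CENTROID))
```

The surviving directions are, on average, those pointing deeper into the target class, so their mean approximates the direction to the class centroid even though only hard labels were observed.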
2023-02-17
Zehnder, E., Dinet, J., Charpillet, F..  2022.  Perception of physical and virtual agents: exploration of factors influencing the acceptance of intrusive domestic agents. 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :1050–1057.
Domestic robots and agents are widely sold to the general public, raising ethical issues related to the data harvested by such machines. While users show a general acceptance of these robots, concerns remain when it comes to information security and privacy. Current research indicates that there is a privacy-security trade-off in exchange for better use, but the anthropomorphic and social abilities of a robot are also known to modulate its acceptance and use. To explore and deepen what the literature has already established on the subject, we examined how users perceived their robot (Replika, Roomba©, Amazon Echo©, Google Home©, or Cozmo©/Vector©) through an online questionnaire exploring acceptance, perceived privacy and security, anthropomorphism, disclosure, perceived intimacy, and loneliness. The results supported the literature regarding the potentially manipulative effects of a robot's anthropomorphism on acceptance, but also on information disclosure, perceived intimacy, security, and privacy.
ISSN: 1944-9437
2023-02-03
Chakraborty, Joymallya, Majumder, Suvodeep, Tu, Huy.  2022.  Fair-SSL: Building fair ML Software with less data. 2022 IEEE/ACM International Workshop on Equitable Data & Technology (FairWare). :1–8.
Ethical bias in machine learning models has become a matter of concern in the software engineering community. Most prior software engineering work concentrated on finding ethical bias in models rather than fixing it. After finding bias, the next step is mitigation. Prior researchers mainly tried to use supervised approaches to achieve fairness. However, in the real world, getting data with trustworthy ground truth is challenging, and ground truth can itself contain human bias. Semi-supervised learning is a technique in which labeled data is incrementally used to generate pseudo-labels for the rest of the data (and then all of that data is used for model training). In this work, we apply four popular semi-supervised techniques as pseudo-labelers to create fair classification models. Our framework, Fair-SSL, takes a very small amount (10%) of labeled data as input and generates pseudo-labels for the unlabeled data. We then synthetically generate new data points to balance the training data based on class and protected attribute, as proposed by Chakraborty et al. in FSE 2021. Finally, the classification model is trained on the balanced pseudo-labeled data and validated on test data. After experimenting on ten datasets and three learners, we find that Fair-SSL achieves similar performance to three state-of-the-art bias mitigation algorithms. That said, the clear advantage of Fair-SSL is that it requires only 10% of the labeled training data. To the best of our knowledge, this is the first SE work in which semi-supervised techniques are used to fight ethical bias in SE ML models. To facilitate open science and replication, all our source code and datasets are publicly available at https://github.com/joymallyac/FairSSL.
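The pseudo-labeling pipeline the abstract outlines (fit on the 10% labeled pool, pseudo-label the rest, retrain on everything) can be sketched with a toy nearest-centroid learner. The data, learner, and all numbers below are invented, and Fair-SSL's fairness-specific rebalancing over class and protected attribute is only indicated in a comment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class data; only 10% of it keeps its true label,
# mimicking the Fair-SSL setting (illustrative, not the paper's data).
X = np.vstack([rng.normal(loc=-2.0, size=(100, 2)),
               rng.normal(loc=+2.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

lab0 = rng.choice(100, size=10, replace=False)        # 10 labeled from class 0
lab1 = 100 + rng.choice(100, size=10, replace=False)  # 10 labeled from class 1
labeled = np.concatenate([lab0, lab1])
unlabeled = np.setdiff1d(np.arange(200), labeled)

def centroid_fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def centroid_predict(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# 1. fit on the small labeled pool; 2. pseudo-label the rest;
# 3. retrain on everything.  Fair-SSL would additionally rebalance the
#    pseudo-labeled data across class and protected attribute here.
model = centroid_fit(X[labeled], y[labeled])
y_aug = y.copy()
y_aug[unlabeled] = centroid_predict(model, X[unlabeled])
final = centroid_fit(X, y_aug)

acc = (centroid_predict(final, X) == y).mean()
print(f"accuracy vs. true labels: {acc:.2f}")
```

The point of the sketch is the data flow, not the learner: any classifier that can be fit twice (first on the labeled pool, then on the pseudo-labeled whole) slots into the same three steps.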
2023-01-06
S, Harichandana B S, Agarwal, Vibhav, Ghosh, Sourav, Ramena, Gopi, Kumar, Sumit, Raja, Barath Raj Kandur.  2022.  PrivPAS: A real time Privacy-Preserving AI System and applied ethics. 2022 IEEE 16th International Conference on Semantic Computing (ICSC). :9–16.
With 3.78 billion social media users worldwide in 2021 (48% of the human population), almost 3 billion images are shared daily. At the same time, a consistent evolution of smartphone cameras has led to a photography explosion with 85% of all new pictures being captured using smartphones. However, lately, there has been an increased discussion of privacy concerns when a person being photographed is unaware of the picture being taken or has reservations about the same being shared. These privacy violations are amplified for people with disabilities, who may find it challenging to raise dissent even if they are aware. Such unauthorized image captures may also be misused to gain sympathy by third-party organizations, leading to a privacy breach. Privacy for people with disabilities has so far received comparatively less attention from the AI community. This motivates us to work towards a solution to generate privacy-conscious cues for raising awareness in smartphone users of any sensitivity in their viewfinder content. To this end, we introduce PrivPAS (a real-time Privacy-Preserving AI System), a novel framework to identify sensitive content. Additionally, we curate and annotate a dataset to identify and localize accessibility markers and classify whether an image is sensitive to a featured subject with a disability. We demonstrate that the proposed lightweight architecture, with a memory footprint of a mere 8.49MB, achieves a high mAP of 89.52% on resource-constrained devices. Furthermore, our pipeline, trained on face-anonymized data, achieves an F1-score of 73.1%.
Banciu, Doina, Cîrnu, Carmen Elena.  2022.  AI Ethics and Data Privacy compliance. 2022 14th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). :1—5.
Throughout history, technological evolution has generated undesired side effects with an impact on society. In the field of IT&C, there are ongoing discussions about the role of robots within the economy, but also about their impact on the labour market. In the case of digital media systems, we talk about misinformation, manipulation, fake news, etc. Issues related to protecting citizens' lives in the face of technology began to be addressed more than 25 years ago. In addition to the many messages such as "the citizen is at the center of concern" or "privacy must be respected", transmitted through various channels by different entities and companies in the field of ICT, the EU has promoted a number of legislative and normative documents to protect citizens' rights and freedoms.
Xu, Huikai, Yu, Miao, Wang, Yanhao, Liu, Yue, Hou, Qinsheng, Ma, Zhenbang, Duan, Haixin, Zhuge, Jianwei, Liu, Baojun.  2022.  Trampoline Over the Air: Breaking in IoT Devices Through MQTT Brokers. 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P). :171–187.
MQTT is widely adopted by IoT devices because it allows for the most efficient data transfer over a variety of communication lines. The security of MQTT has received increasing attention in recent years, and several studies have demonstrated that the configurations of many MQTT brokers are insecure. Adversaries can exploit vulnerable brokers and publish malicious messages to subscribers. However, little has been done toward understanding the security issues that arise on the device side when devices handle unauthorized MQTT messages. To fill this research gap, we propose a fuzzing framework named ShadowFuzzer to find client-side vulnerabilities in the processing of incoming MQTT messages. To avoid ethical issues, ShadowFuzzer redirects traffic destined for the actual broker to a shadow broker under our control in order to monitor for vulnerabilities. We select 15 IoT devices communicating with vulnerable brokers and leverage ShadowFuzzer to find vulnerabilities in how they parse MQTT messages. For these devices, ShadowFuzzer reports 34 zero-day vulnerabilities in 11 devices. We evaluated the exploitability of these vulnerabilities and received a total of 44,000 USD in bug bounty rewards, and 16 CVE/CNVD/CNNVD identifiers have been assigned to us.
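The mutation step such a fuzzer applies to device-bound messages can be sketched in a few lines, assuming the simplest byte-flipping strategy; the payload, function name, and mutation policy below are invented and far cruder than ShadowFuzzer's actual approach, which also tracks device state to catch crashes:

```python
import random

random.seed(42)

def mutate_payload(payload: bytes, n_flips: int = 3) -> bytes:
    """XOR a few random bytes -- the simplest mutation a fuzzer might
    apply to an MQTT PUBLISH payload before relaying it from the shadow
    broker to the device under test."""
    data = bytearray(payload)
    for _ in range(n_flips):
        i = random.randrange(len(data))
        data[i] ^= random.randrange(1, 256)  # nonzero XOR: byte changes
    return bytes(data)

# A device-bound JSON command, as might arrive in an MQTT PUBLISH.
seed_msg = b'{"cmd":"set_temp","value":21}'
for _ in range(3):
    print(mutate_payload(seed_msg))
```

Mutations like these break the structure the device's parser expects (quote characters, delimiters, numeric fields), which is exactly where the client-side parsing vulnerabilities the paper reports tend to live.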
2022-05-24
Chan, Matthew.  2021.  Bare-metal hypervisor virtual servers with a custom-built automatic scheduling system for educational use. 2021 Fourth International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–5.
In contrast to traditional physical servers, a custom-built system utilizing a bare-metal hypervisor virtual server environment provides advantages of both cost savings and flexibility in terms of systems configuration. This system is designed to facilitate hands-on experience for Computer Science students, particularly those specializing in systems administration and computer networking. This multi-purpose and functional system uses an automatic advanced virtual server reservation system (AAVSRsv), written in C++, to schedule and manage virtual servers. The use of such a system could be extended to additional courses focusing on such topics as cloud computing, database systems, information assurance, as well as ethical hacking and system defense. The design can also be replicated to offer training sessions to other information technology professionals.
2022-02-03
Battistuzzi, Linda, Grassi, Lucrezia, Recchiuto, Carmine Tommaso, Sgorbissa, Antonio.  2021.  Towards Ethics Training in Disaster Robotics: Design and Usability Testing of a Text-Based Simulation. 2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). :104–109.
Rescue robots are expected to soon become commonplace at disaster sites, where they are increasingly being deployed to provide rescuers with improved access and intervention capabilities while mitigating risks. The presence of robots in operation areas, however, is likely to add a layer of ethical complexity to situations that are already ethically challenging. In addition, limited guidance is available for ethically informed, practical decision-making in real-life disaster settings, and specific ethics training programs are lacking. The contribution of this paper is thus to propose a tool aimed at supporting ethics training for rescuers operating with rescue robots. To this end, we have designed an interactive text-based simulation. The simulation was developed in Python, using Tkinter, Python's de-facto standard GUI toolkit. It is designed in accordance with the Case-Based Learning approach, a widely used instructional method that has been found to work well for ethics training. The simulation revolves around a case grounded in ethical themes we identified in previous work on ethical issues in rescue robotics: fairness and discrimination, false or excessive expectations, labor replacement, safety, and trust. Here we present the design of the simulation and the results of usability testing.
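The Case-Based Learning design described above amounts to a small branching scenario graph. A console-only Python sketch of that structure follows; the paper's tool uses a Tkinter GUI, and the scenario text, options, and names here are invented for illustration:

```python
# Minimal text-based branching scenario in the spirit of Case-Based
# Learning (the scenario content is invented, not the paper's case).
SCENARIO = {
    "start": {
        "text": "Your rescue robot locates two trapped survivors; battery "
                "allows reaching only one first. Whom do you prioritise?",
        "options": {"a": ("the nearer survivor", "debrief_near"),
                    "b": ("the more injured survivor", "debrief_injured")},
    },
    "debrief_near": {
        "text": "Prioritising proximity maximises the certainty of one "
                "rescue but raises fairness questions.",
        "options": {}},
    "debrief_injured": {
        "text": "Prioritising need follows triage ethics but risks losing "
                "both if the robot fails en route.",
        "options": {}},
}

def run(choices):
    """Walk the scenario graph with a scripted list of choices and
    return the sequence of nodes visited."""
    node, visited = "start", []
    for choice in choices:
        visited.append(node)
        node = SCENARIO[node]["options"][choice][1]
    visited.append(node)
    return visited

print(run(["a"]))  # ['start', 'debrief_near']
```

An interactive front end (Tkinter or otherwise) only needs to render `SCENARIO[node]["text"]` and its options at each step; the ethical content lives entirely in the graph.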
2021-02-01
Papadopoulos, A. V., Esterle, L..  2020.  Situational Trust in Self-aware Collaborating Systems. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :91–94.
Trust among humans affects the way we interact with each other. In autonomous systems, this trust is often predefined and hard-coded before the systems are deployed. However, when systems encounter unfolding situations, requiring them to interact with others, a notion of trust will be inevitable. In this paper, we discuss trust as a fundamental measure to enable an autonomous system to decide whether or not to interact with another system, whether biological or artificial. These decisions become increasingly important when continuously integrating with others during runtime.
2020-12-01
Poulsen, A., Burmeister, O. K., Tien, D..  2018.  Care Robot Transparency Isn't Enough for Trust. 2018 IEEE Region Ten Symposium (Tensymp). :293–297.

A recent study featuring a new kind of care robot indicated that participants expect a robot's ethical decision-making to be transparent in order to develop trust, even though the same kind of `inspection of thoughts' isn't expected of a human carer. At first glance, this might suggest that robot transparency mechanisms are required for users to develop trust in robot-made ethical decisions. But the participants were found to desire transparency only when they did not know the specifics of a human-robot social interaction. Humans trust others without observing their thoughts, which implies other means of determining trustworthiness. The study reported here suggests that the method is social interaction and observation, signifying that trust is a social construct, and that these `social determinants of trust' are the transparent elements. This socially determined behaviour draws on notions of virtue ethics: if a caregiver (nurse or robot) consistently provides good, ethical care, then patients can trust that caregiver to continue doing so. The same social determinants may apply to care robots, and thus it ought to be possible to trust them without the ability to see their thoughts. This study suggests why transparency mechanisms may not be effective in helping to develop trust in care robot ethical decision-making, and that roboticists need to build sociable elements into care robots to help patients develop trust in the care robot's ethical decision-making.

2020-03-18
Yang, Yunxue, Ji, Guohua, Yang, Zhenqi, Xue, Shengjun.  2019.  Incentive Contract for Cybersecurity Information Sharing Considering Monitoring Signals. 2019 International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData). :507–512.
Cyber insurance is a viable method for cyber risk transfer. However, cyber insurance faces critical challenges, the most important of which is a lack of statistical data. In this paper, we propose an incentive model considering monitoring signals for cybersecurity information sharing, based on principal-agent theory. We study the effect of monitoring signals on increasing the rationality of the incentive contract and reducing moral hazard in the process of cybersecurity information sharing, and analyze the factors influencing the effectiveness of the incentive contract. We show that by introducing monitoring signals, the insurer can collect more information about the effort level of the insured and encourage the insured to share cybersecurity information based on the information-sharing output and monitoring signals of the effort level. This can not only reduce the blindness of incentives to the insured in the process of cybersecurity information sharing, but also reduce moral hazard.
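The abstract's argument follows the standard hidden-action setting. As a hedged sketch, the textbook principal-agent formulation with a monitoring signal (the general form of such models, not necessarily the paper's exact specification) is:

```latex
% Principal (insurer) chooses a payment schedule w(x,s) over the
% information-sharing output x and the monitoring signal s, anticipating
% the insured's unobserved effort e:
\[
\max_{w(x,s),\, e}\; \mathbb{E}\,[\, x - w(x,s) \mid e \,]
\]
% subject to the participation (IR) and incentive (IC) constraints:
\[
\mathbb{E}\,[\, u(w(x,s)) \mid e \,] - c(e) \;\ge\; \bar{u},
\qquad
e \in \arg\max_{e'} \;\mathbb{E}\,[\, u(w(x,s)) \mid e' \,] - c(e').
\]
```

Here \(u\) is the insured's utility, \(c(e)\) the cost of effort, and \(\bar{u}\) the reservation utility. When the signal \(s\) is informative about \(e\), conditioning the payment on \((x,s)\) rather than on \(x\) alone relaxes the incentive constraint (Holmström's informativeness principle), which is the mechanism behind the abstract's claim that monitoring signals reduce moral hazard.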
2019-12-18
Shepherd, Morgan M., Klein, Gary.  2012.  Using Deterrence to Mitigate Employee Internet Abuse. 2012 45th Hawaii International Conference on System Sciences. :5261–5266.
This study looks at the question of how to reduce or eliminate employee Internet abuse. Companies have used acceptable use policies (AUPs) and technology in an attempt to mitigate employees' personal use of company resources. Research shows that AUPs do not do a good job at this but that technology does. Research also shows that while technology can be used to greatly restrict personal use of the Internet in the workplace, employee satisfaction with the workplace suffers when this is done. In this research experiment we used technology not to restrict employee use of company resources for personal use, but to make employees more aware of the current acceptable use policy, and measured the decrease in employee Internet abuse. The results show that this method can produce a drop in personal use of company networks from 27 to 21 percent.
2018-11-19
Eskandari, S., Leoutsarakos, A., Mursch, T., Clark, J..  2018.  A First Look at Browser-Based Cryptojacking. 2018 IEEE European Symposium on Security and Privacy Workshops (EuroS PW). :58–66.

In this paper, we examine the recent trend towards in-browser mining of cryptocurrencies; in particular, the mining of Monero through Coinhive and similar codebases. In this model, a user visiting a website will download a JavaScript code that executes client-side in her browser, mines a cryptocurrency - typically without her consent or knowledge - and pays out the seigniorage to the website. Websites may consciously employ this as an alternative or to supplement advertisement revenue, may offer premium content in exchange for mining, or may be unwittingly serving the code as a result of a breach (in which case the seigniorage is collected by the attacker). The cryptocurrency Monero is preferred seemingly for its unfriendliness to large-scale ASIC mining that would drive browser-based efforts out of the market, as well as for its purported privacy features. In this paper, we survey this landscape, conduct some measurements to establish its prevalence and profitability, outline an ethical framework for considering whether it should be classified as an attack or business opportunity, and make suggestions for the detection, mitigation and/or prevention of browser-based mining for non-consenting users.
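A signature-based scanner for such mining scripts can be sketched in a few lines. The blocklist below is illustrative (Coinhive itself shut down in 2019), and real detection also needs behavioral signals such as sustained WebAssembly CPU usage; this only shows the simplest script-source check:

```python
import re

# A few script hosts historically associated with in-browser miners
# (an illustrative blocklist, not a maintained one).
MINER_HOSTS = {"coinhive.com", "coin-hive.com", "crypto-loot.com"}

SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']https?://([^/"\']+)', re.I)

def find_miner_scripts(html: str):
    """Return the external script hosts in a page that match the
    blocklist -- the check a simple cryptojacking scanner performs."""
    hosts = {m.group(1).lower() for m in SCRIPT_SRC.finditer(html)}
    return sorted(hosts & MINER_HOSTS)

page = ('<html><body>'
        '<script src="https://coinhive.com/lib/coinhive.min.js"></script>'
        '</body></html>')
print(find_miner_scripts(page))  # ['coinhive.com']
```

Blocklists of this kind are what early ad-blocker-style defenses against cryptojacking amounted to, which is also why miners quickly moved to proxied and obfuscated script URLs.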

2017-10-18
Miller, David.  2016.  AgentSmith: Exploring Agentic Systems. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :234–238.

The design of systems with independent agency to act on the environment or which can act as persuasive agents requires consideration of not only the technical aspects of design, but of the psychological, sociological, and philosophical aspects as well. Creating usable, safe, and ethical systems will require research into human-computer communication, in order to design systems that can create and maintain a relationship with users, explain their workings, and act in the best interests of both users and of the larger society.

2017-03-08
Xu, Kun, Bao, Xinzhong, Tao, Qiuyan.  2015.  Research on income distribution model of supply chain financing based on third-party trading platform. 2015 International Conference on Logistics, Informatics and Service Sciences (LISS). :1–6.

The stability and effectiveness of a supply chain financing union are directly affected by income fluctuation and unequal distribution problems, which in turn impact the economic interests of the involved parties. In this paper, the incomes of the parties in the union are distributed using the Shapley value, from the perspective of cooperative game theory, under the background of supply chain financing based on a third-party trading platform. The model is then improved by introducing risk correction factors whose weights are determined with the analytic hierarchy process (AHP). Finally, the feasibility of the scheme is demonstrated with a worked example.
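The base Shapley allocation the paper starts from can be computed directly by averaging each party's marginal contribution over all join orders. The three-party characteristic function below is invented for illustration; the paper's AHP-weighted risk correction factors would then adjust these shares:

```python
from itertools import permutations

# Toy three-party financing union: v maps each coalition to its payoff.
# These characteristic-function values are invented for illustration.
v = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20,
     frozenset("C"): 30, frozenset("AB"): 50, frozenset("AC"): 60,
     frozenset("BC"): 70, frozenset("ABC"): 120}

def shapley(players, v):
    """Average each player's marginal contribution over all join orders."""
    phi = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

print(shapley("ABC", v))  # {'A': 30.0, 'B': 40.0, 'C': 50.0}
```

Note that the shares sum to v(ABC) = 120 (efficiency), which is the property that makes the Shapley value a natural starting point for distributing a union's joint income before risk adjustments are applied.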

2017-03-07
Rashid, A., Moore, K., May-Chahal, C., Chitchyan, R..  2015.  Managing Emergent Ethical Concerns for Software Engineering in Society. 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering. 2:523–526.

This paper presents an initial framework for managing emergent ethical concerns during software engineering in society projects. We argue that such emergent considerations can neither be framed as absolute rules about how to act in relation to fixed and measurable conditions, nor be addressed by simply framing them as non-functional requirements to be satisficed. Instead, a continuous process is needed that accepts the 'messiness' of social life and social research, seeks to understand complexity (rather than seek clarity), demands collective (not just individual) responsibility, and focuses on dialogue over solutions. The framework has been derived from a retrospective analysis of ethical considerations in four software engineering in society projects in three different domains.

2015-05-05
Al Barghuthi, N.B., Said, H..  2014.  Ethics behind Cyber Warfare: A study of Arab citizens awareness. 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering. :1–7.

Persisting in ignoring the consequences of cyber warfare will bring severe concerns to all people. Hackers and governments alike should understand the boundaries to which their methods take them. Governments use cyber warfare to gain a tactical advantage over other countries, defend themselves from their enemies, or inflict damage upon their adversaries. Hackers use cyber warfare to gain personal information, commit crimes, or reveal sensitive and beneficial intelligence. Although both methods can have ethical uses, the equivalent can be said of the other end of the spectrum. Knowing and comprehending these devices will not only strengthen the ability to detect such attacks and combat them, but will also provide means to divulge despotic government plans, as the outcome of cyber warfare can be worse than the outcome of conventional warfare. The paper discusses the concept of ethics and the reasons that have led to the use of information technology in military war, the effects of cyber war on civilians, the legality of cyber war, and ways of controlling the use of information technology that may be employed against civilians. This research uses a survey methodology to assess the awareness of Arab citizens of the idea of cyber war and to provide findings and evidence concerning the ethics behind offensive cyber warfare. Detailed strategies and approaches should be developed in this respect. The authors recommend urging scientific and technological research centers to improve security and develop defensive systems to prevent the use of technology in military war against civilians.