Algorithm fairness has started to attract the attention of researchers in the AI, Software Engineering, and Law communities, with more than twenty different notions of fairness proposed in the last few years. Yet, there is no clear agreement on which definition to apply in each situation. Moreover, the detailed differences between multiple definitions are difficult to grasp. To address this issue, this paper collects the most prominent definitions of fairness for the algorithmic classification problem, explains the rationale behind these definitions, and demonstrates each of them on a single unifying case study. Our analysis intuitively explains why the same case can be considered fair according to some definitions and unfair according to others.
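To make that contrast concrete, the following minimal Python sketch (not taken from the paper; the group labels, toy data, and the two metrics chosen are purely illustrative) shows how one and the same set of predictions can satisfy statistical parity while violating equal opportunity.

```python
# Illustrative sketch only: the same predictions can satisfy one fairness
# definition while violating another. Groups, labels, and predictions below
# are invented for demonstration.

def statistical_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    rate = lambda g: sum(p for p, gr in zip(y_pred, group) if gr == g) / group.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between the two groups."""
    def tpr(g):
        pos = [(t, p) for t, p, gr in zip(y_true, y_pred, group) if gr == g and t == 1]
        return sum(p for _, p in pos) / len(pos)
    return abs(tpr("A") - tpr("B"))

# Toy data: protected groups A and B, true labels, and classifier predictions.
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [ 1,   1,   0,   0,   1,   0,   0,   0 ]
y_pred = [ 1,   0,   0,   0,   1,   0,   0,   0 ]

print(statistical_parity_gap(y_pred, group))        # 0.0 -> fair by statistical parity
print(equal_opportunity_gap(y_true, y_pred, group)) # 0.5 -> unfair by equal opportunity
```

Both definitions look reasonable in isolation, which is precisely why a single case can be judged fair under one notion and unfair under another.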
Software Design Patterns (SDPs) are core solutions to recurring problems in software. However, adopting SDPs without taking into account their value implications may result in breaches of social values and ultimately lead to user dissatisfaction, lack of adoption, and financial loss. An example is the airline system that overcharged people who were trying to escape from Hurricane Irma. Although not intentional, oversight of social values in the design of the airline system resulted in significant customer dissatisfaction and loss of trust. To mitigate such value breaches in software design, we propose taking social values into account in SDPs explicitly. To achieve this, we outline a collaborative framework that allows for (i) specifying the value implications of SDPs, (ii) developing or extending SDPs to integrate social values, (iii) providing guidance on the value-conscious adoption of design patterns, (iv) collecting and analyzing insights from collaborators, (v) maintaining an up-to-date library of the valuefied design patterns, and (vi) incorporating lessons learned from the real-world adoption of the valuefied design patterns into the proposed framework for its continuous improvement in integrating social values into software.
Today's software is highly intertwined with our lives, and it possesses an increasing ability to act on and influence us. Besides the renowned example of self-driving cars and their potential harmfulness, more mundane software such as social networks can introduce bias, break privacy preferences, lead to digital addiction, etc. Additionally, the software engineering (SE) process itself is highly affected by ethical issues, such as diversity and business ethics. This paper introduces ethics-aware SE, a version of SE in which the ethical values of the stakeholders (including developers and users) are captured, analyzed, and reflected in software specifications and in the SE processes. We propose an analytical framework that assists stakeholders in analyzing ethical issues in terms of subject (software artifact or SE process), relevant value (diversity, privacy, autonomy, ...), and threatened object (user, developer, ...). We also define a roadmap that illustrates the necessary steps for the SE research and practice community to fully realize ethics-aware SE.
Decision-making software may exhibit biases due to hidden dependencies between protected characteristics and the data used as input for making decisions. To uncover such dependencies, we propose the development of a framework to support discrimination analysis during the system design phase, based on system models and available data.
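As a rough illustration of the kind of dependency such a framework would need to surface, the sketch below is a simplified, data-level stand-in (the paper proposes a model-based approach); the feature names, toy values, and threshold are hypothetical. It flags input features whose distribution differs markedly across a protected attribute and which may therefore act as proxies for it.

```python
# Illustrative sketch only: flag input features that look like proxies for a
# protected attribute. Feature names, values, and the 0.3 threshold are made up.

def mean_gap_by_group(feature, protected):
    """Absolute difference of a feature's mean between the two protected groups."""
    vals = {g: [x for x, p in zip(feature, protected) if p == g] for g in set(protected)}
    means = [sum(v) / len(v) for v in vals.values()]
    return abs(means[0] - means[1])

# Toy decision inputs: 'postcode_score' correlates with the protected group,
# 'years_experience' does not.
protected        = ["A", "A", "A", "B", "B", "B"]
postcode_score   = [0.9, 0.8, 0.85, 0.2, 0.25, 0.3]
years_experience = [3,   7,   5,    4,   6,    5  ]

for name, feat in [("postcode_score", postcode_score),
                   ("years_experience", years_experience)]:
    gap = mean_gap_by_group(feat, protected)
    flag = "potential proxy" if gap > 0.3 else "ok"
    print(f"{name}: gap={gap:.2f} -> {flag}")
```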
Most of the world's financial markets are electronic (i.e., are implemented as software systems) and continuous (i.e., process orders received from market participants immediately, on a FIFO basis). In this short position paper I argue that such markets cannot provide 'racetrack fairness' to their participants, yet this form of fairness seems to feature quite prominently throughout the large, multi-jurisdictional body of law governing financial markets. What seems to follow from this is that electronic batch-style markets are not only a desirable replacement for continuous ones---as a number of economists have recently argued---but a necessary replacement.
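As a toy illustration of the argument (not from the paper; the latencies, prices, and batch interval are invented), the sketch below contrasts continuous FIFO matching, where the participant with the faster connection always wins, with a batch-style auction, where orders arriving within the same interval compete on price rather than speed.

```python
# Illustrative sketch only: continuous FIFO processing rewards pure speed,
# whereas a batch auction treats orders in the same interval equally.
from collections import namedtuple

Order = namedtuple("Order", "trader arrival_us price")

incoming = [
    Order("fast_firm", arrival_us=101, price=10.00),  # faster connection
    Order("slow_firm", arrival_us=250, price=10.01),  # better price, later arrival
]

def continuous_fifo(orders, available_qty=1):
    """Continuous market: strictly first-come, first-served."""
    return [o.trader for o in sorted(orders, key=lambda o: o.arrival_us)][:available_qty]

def batch_auction(orders, interval_us=1000, available_qty=1):
    """Batch market: orders within one interval compete on price, not arrival time."""
    same_batch = [o for o in orders if o.arrival_us < interval_us]
    return [o.trader for o in sorted(same_batch, key=lambda o: -o.price)][:available_qty]

print(continuous_fifo(incoming))  # ['fast_firm'] -> speed decides
print(batch_auction(incoming))    # ['slow_firm'] -> price decides within the batch
```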
As an envisaged future of transportation, self-driving cars are being discussed from various perspectives, including social, economic, engineering, computer science, design, and ethical aspects. On the one hand, self-driving cars present new engineering problems that are gradually and successfully being solved. On the other hand, social and ethical problems have up to now been presented in the form of an idealized, unsolvable decision-making problem, the so-called "trolley problem", which is built on assumptions that are neither technically nor ethically justifiable. The intrinsic unfairness of the trolley problem comes from the assumption that the lives of different people have different values.
In this paper, techno-social arguments are used to show the infeasibility of the trolley problem when addressing the ethics of self-driving cars. We argue that different components can contribute to "unfair" behaviour and features, which requires ethical analysis at multiple levels and stages of the development process. Instead of an idealized and intrinsically unfair thought experiment, we present real-life techno-social challenges relevant to the domain of software fairness in the context of self-driving cars.
The IEEE P7003 Standard for Algorithmic Bias Considerations is one of eleven IEEE ethics-related standards currently under development as part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. The purpose of the IEEE P7003 standard is to provide individuals and organizations creating algorithmic systems with a development framework to avoid unintended, unjustified, and inappropriately differential outcomes for users. In this paper, we present the scope and structure of the IEEE P7003 draft standard, and the methodology of the development process.