WE REGRET THAT THIS EVENT HAS BEEN CANCELLED.
Abstract: An algorithm is a set of instructions that describes how to solve a problem. Some algorithms are value-laden in the sense that agents who seek to solve one and the same problem, and who share the same factual beliefs, have compelling reasons to design their algorithms differently because they accept different value judgments. In this talk I give some examples of this phenomenon and ask who should get to specify the value judgments in value-laden algorithms. Some common answers are: (1) Users should specify the value judgments. (2) Well-informed and morally conscientious software designers should specify the value judgments. (3) Regulators should specify the value judgments. (4) Market forces should specify the value judgments. (5) The machine itself should do the job. The solution I propose can be summarized as follows: In each algorithm, the value judgments should be specified in whatever way they should be specified in the most morally similar case in which a comparable problem is solved without using algorithms. This is a reasonable application of the plausible idea that we should treat like cases alike.
Martin Peterson is Professor of Philosophy & Sue and Harry E. Bovay Professor of the History and Ethics of Professional Engineering at Texas A&M University. His research covers the ethics of technology, moral philosophy, and decision theory.