After a successful workshop and a great iConference we would like to thank all authors and speakers for their excellent contributions.
The tentative program for the workshop on Sunday, March 25, is as follows:
- 13.30 – 14.15: Keynote: Policy approaches to socio-technical causes of bias in algorithmic systems – what role can ethical standards play? (Ansgar Koene), followed by discussion
- 14.15 – 14.40: Detecting Bias: Does an Algorithm Have to Be Transparent in Order to Be Fair? (William Seymour)
- 14.40 – 15.05: Algorithms, Bias and the Importance of Agency (Alan Rubel, Clinton Castro and Adam Pham)
- 15.05 – 15.30: Re-Considering Bias: What Could Bringing Gender Studies and Computing Together Teach Us About Bias in Information Systems? (Claude Draude, Goda Klumbyte and Pat Treusch)
- 15.30 – 16.00
- 16.00 – 16.25: Towards Bias Detection in Online Text Corpora (Christoph Hube, Besnik Fetahu and Robert Jäschke)
- 16.25 – 16.50: Dealing with Bias via Data Augmentation in Supervised Learning Scenarios (Vasileios Iosifidis and Eirini Ntoutsi)
- 16.50 – 17.15: The Politics and Biases of the Crime Anticipation System of the Dutch Police (Serena Oosterloo and Gerwin van Schie)
- 17.15 – 17.30
Only two weeks to go until the workshop starts – time to present the accepted papers. Of the 14 submissions we received, the following six were accepted and will be presented at the workshop on March 25 in Sheffield:
- Claude Draude, Goda Klumbyte and Pat Treusch: Re-Considering Bias: What Could Bringing Gender Studies and Computing Together Teach Us About Bias in Information Systems?
- Christoph Hube, Besnik Fetahu and Robert Jäschke: Towards Bias Detection in Online Text Corpora
- Vasileios Iosifidis and Eirini Ntoutsi: Dealing with Bias via Data Augmentation in Supervised Learning Scenarios
- Serena Oosterloo and Gerwin van Schie: The Politics and Biases of the Crime Anticipation System of the Dutch Police
- Alan Rubel, Clinton Castro and Adam Pham: Algorithms, Bias, and the Importance of Agency
- William Seymour: Detecting Bias: Does an Algorithm Have to Be Transparent in Order to Be Fair?
A big thank you to everyone who submitted, and congratulations to the authors of the accepted papers. We look forward to seeing you in Sheffield.
We are proud to announce that Ansgar Koene from the University of Nottingham will give a keynote at the BIAS workshop on March 25, 2018.
Ansgar is a Co-Investigator on the UnBias project, whose goal is to emancipate users against algorithmic biases for a trusted digital economy.
The title of his keynote is “Policy approaches to socio-technical causes of bias in algorithmic systems – what role can ethical standards play?”
A half-day workshop at the 2018 iConference, to be held in Sheffield, UK, on Sunday, March 25, 2018
More than ever before, information, algorithms and systems have the potential to influence and shape our experiences and views. With increased access to digital media and the ubiquity of data and data-driven processes in all areas of life, awareness and understanding of areas such as algorithmic accountability, transparency, governance and bias are becoming increasingly important. Recent cases in the news and media have highlighted the wider societal effects of data and algorithms, requiring that we pay them closer attention.
The BIAS workshop will bring together researchers from different disciplines who are interested in analysing and tackling bias within their discipline, arising from the data, algorithms and methods they use. The theme of the workshop, bias in information, algorithms, and systems, includes, but is not limited to, the following areas:
- Bias in sources of data and information (e.g., datasets, data production, publications, visualisations, annotations, knowledge bases)
- Bias in categorisation and representation schemes (e.g., vocabularies, standards, etc.)
- Bias in algorithms (e.g., information retrieval, recommendation, classification, etc.)
- Bias in the broader context of information and social systems (e.g., social media, search engines, social networks, crowdsourcing, etc.)
- Considerations in evaluation (e.g., to identify and avoid bias, to create unbiased test and training collections, crowdsourcing, etc.)
- Interactions between individuals, technologies and data/information
- Considerations for data governance and policy
The workshop aims to identify avenues for future research around the notions of bias, algorithmic transparency and accountability, with the concrete goal of generating a collaborative proposal for publishing a position paper (e.g., in ACM SIGIR Forum) and/or coordinating a special issue on BIAS for the journal Online Information Review. With these goals in mind, the workshop will feature a keynote talk, presentations and posters from workshop participants, and thematic discussions in small groups.
Submission and Publication
The workshop welcomes the following types of submissions:
- Extended abstracts of up to 1,500 words,
- Short research papers of up to 6 pages, and
- Full research papers of up to 12 pages.
Submissions will be peer-reviewed by at least two members of the programme committee. Submissions should be formatted according to Springer’s LNCS style guidelines and must not exceed the word/page limits above. Submissions are to be made via EasyChair. All accepted submissions will be published as workshop proceedings on CEUR-WS.org; their metadata will also be provided in BibSonomy, and everything will be linked on the workshop homepage, together with the programme and presentation slides. At least one author of each accepted paper must register for the conference and present the paper there.
- Abstract submission deadline: Jan 10, 2018
- Submission deadline: Jan 20, 2018
- Notification of acceptance: Feb 25, 2018
- Camera-ready paper: Mar 10, 2018
(Abstract submission: Please submit the title and a short abstract of your work by January 10 to speed up reviewer assignment.)
- Alessandro Checco, Information School, University of Sheffield
- Maria Gäde, Humboldt-Universität zu Berlin
- David Garcia, Complexity Science Hub Vienna
- Jutta Haider, Lund University
- Libby Hemphill, University of Michigan iSchool
- Frank Hopfgartner, Information School, University of Sheffield
- Ansgar Koene, University of Nottingham
- Jochen Leidner, Thomson Reuters
- Kristian Lum, Human Rights Data Analysis Group
- Elvira Perez Vallejos, University of Nottingham
- Emilee Rader, Michigan State University
- Kalpana Shankar, UCD, School of Information and Library Studies
- Claudia Wagner, GESIS Cologne
- Ziqi Zhang, Information School, University of Sheffield