Hi, I'm Matthew Olckers. Welcome to my research website.
I work as a research fellow in the School of Computer Science and Engineering at UNSW Sydney. I am part of a team, led by Toby Walsh, studying how to build fair, transparent, and explainable AI.

Although I now work in an engineering department, my training is in economics. I completed my PhD at the Paris School of Economics and my undergraduate degree at the University of Cape Town.

My research interests include social networks, development economics, and household finance. I am particularly interested in the intersection of economics and computer science, such as using mechanism design in practical applications or using alternative data sources to answer questions about poverty and development.

I am a co-organizer of the Mechanism Design for Social Good (MD4SG) initiative. I am an affiliate of SoDa Labs.

Outside of my research, I love spending time with my family, surfing, helping out at church, and seeing graffiti on trains. I compiled a book about graffiti in Cape Town, South Africa, which subsequently evolved into a documentary.
I use tools from economics and computer science to study a range of topics. I am working on projects in the following areas:
Rejected job and university applicants usually receive generic feedback—if they receive any feedback at all. Can techniques from AI provide personalized explanations? What are the barriers to providing personalized explanations?
Through social interaction, we learn about our peers. For example, we may learn which of our peers is in need of financial aid. How can a social planner use peer information to target aid to the most needy individuals? How can the planner prevent manipulation of a targeting mechanism that relies on peer information?
Household finance focuses on how we manage our money. How much do we save for retirement? How do we use credit? Do we take risks? Which types of financial products do we use?
with Toby Walsh

In peer mechanisms, the competitors for a prize also determine who wins. Each competitor may be asked to rank, grade, or nominate peers for the prize. Since the prize can be valuable, such as financial aid, course grades, or an award at a conference, competitors may be tempted to manipulate the mechanism. We survey approaches to prevent or discourage the manipulation of peer mechanisms. We conclude our survey by identifying several important research challenges.
with Alicia Vidler and Toby Walsh

Rejected job applicants seldom receive explanations from employers. Techniques from Explainable AI (XAI) could provide explanations at scale. Although XAI researchers have developed many different types of explanations, we know little about the type of explanations job applicants want. We use a survey of recent job applicants to fill this gap. Our survey generates three main insights. First, the current norm of, at most, generic feedback frustrates applicants. Second, applicants feel the employer has an obligation to provide an explanation. Third, job applicants want to know why they were unsuccessful and how to improve.