Meta
Responsible AI

/ Background
At Meta, I worked on the Responsible Innovation and Ethics team, where I helped surface and mitigate the potentially harmful impacts of new products and features.
As part of our Responsible AI initiatives, I helped develop product responsibility principles to ensure Meta's consumer-facing AI products were safe, fair, and transparent.
/ Role
Product Design Lead, Meta Responsible Innovation and Ethics
/ Year
2021 - 2023
Product Goals
Define potential harms caused by consumer-facing AI features
Create a taxonomy of harms, including a prioritization framework for teams to use.
Create reusable methods to mitigate harms
Create artifacts that teams around the company can use as a "playbook" to mitigate harms from the taxonomy.
Develop measurement strategies to detect harmful unintended consequences
Work with data science and AI engineering to establish countermetrics and guardrails that help us detect when harmful unintended consequences happen.
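The countermetric-and-guardrail idea above can be illustrated as a simple threshold check. This is a hypothetical sketch only; the metric names, thresholds, and alerting logic are illustrative assumptions, not Meta's actual system:

```python
# Illustrative sketch of a countermetric guardrail. Metric names and
# threshold values below are hypothetical, chosen only to show the shape
# of the idea: metrics we want to keep low, each paired with a guardrail.

from dataclasses import dataclass

@dataclass
class Countermetric:
    """A rate we want to keep LOW, paired with a guardrail threshold."""
    name: str
    threshold: float  # alert when the observed rate exceeds this value

    def check(self, observed_rate: float) -> bool:
        """Return True if the guardrail is breached."""
        return observed_rate > self.threshold

# Example guardrails for an AI feature launch (hypothetical values)
guardrails = [
    Countermetric("reported_unsafe_outputs_per_10k", threshold=5.0),
    Countermetric("user_corrections_of_ai_inferences_per_10k", threshold=50.0),
]

def breached(observed: dict) -> list:
    """List the countermetrics whose observed rates exceed their thresholds."""
    return [m.name for m in guardrails if m.check(observed.get(m.name, 0.0))]

print(breached({"reported_unsafe_outputs_per_10k": 7.2,
                "user_corrections_of_ai_inferences_per_10k": 12.0}))
# → ['reported_unsafe_outputs_per_10k']
```

In practice the value of a setup like this is less the code than the agreement it forces: product, data science, and engineering commit in advance to which harm signals count and at what level they block or roll back a launch.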
I started by mapping what Responsible AI controls we had developed on the model level, and then widened that to identify the gaps in product-level considerations.
Product-level considerations are harms that surface in user-facing features, which design and product teams can help mitigate.
Model-level responsibility considerations
Image: Diagram showing AI model mitigations
Product-level responsibility considerations
Misinformation
Over-personalization
Transparency
Privacy
Unintended inferences
Data reuse and drift
Loss of agency
Overreliance
Unsafe outputs
Context collapse
Cultural insensitivity
Misaligned incentives
Bad actor abuse
Physical space harms
Inter-social harms
and more
While I can't show much of my process work here, I can share some of the product mitigations that shipped in the current Meta AI product.
How are we making sure people know how to use the new features and understand their limitations?
We provide information within the features to help people understand when they’re interacting with AI and how this new technology works.
We note within the product experience that AI features may return inaccurate or inappropriate outputs.
How are we helping people to know when images are created with our AI features?
Images created or edited by Meta AI, Restyle, and Backdrop will have visible markers so people know the content was created with AI.
We’re also developing additional techniques to include information within image files that were created by Meta AI, and we intend to expand this to other experiences as the technology improves.
How can we equip people with the tools to use AI responsibly?
We created a set of system cards to promote transparency and help people engage with responsible AI in the products that are relevant to them.