IBM reaffirms its commitment to the Rome Call for AI ethics | News | Mike Murphy | 15 Jul 2024 | Tags: AI; Fairness, Accountability, Transparency
What is red teaming for generative AI? | Explainer | Kim Martineau | 11 Apr 2024 | Tags: Adversarial Robustness and Privacy; AI; AI Testing; Fairness, Accountability, Transparency; Foundation Models; Natural Language Processing; Security; Trustworthy AI
The latest AI safety method is a throwback to our maritime past | Research | Kim Martineau | 16 Nov 2023 | Tags: AI; AI Transparency; Explainable AI; Fairness, Accountability, Transparency; Generative AI
What is AI alignment? | Explainer | Kim Martineau | 08 Nov 2023 | Tags: AI; Automated AI; Fairness, Accountability, Transparency; Foundation Models; Granite; Natural Language Processing
Accelerator Technologies
We're developing technological solutions that assist subject matter experts with their scientific workflows by enabling the human-AI co-creation process.
The Literary Canons of Large-Language Models: An Exploration of the Frequency of Novel and Author Generations Across Gender, Race and Ethnicity, and Nationality | Paulina Toro Isaza, Nalani Kopp | NAACL 2025
Which Contributions Deserve Credit? Perceptions of Attribution in Human-AI Co-Creation | Jessica He, Stephanie Houde, et al. | CHI 2025
Responsible Prompting Recommendation: Fostering Responsible AI Practices in Prompting-Time | Vagner Figueredo De Santana, Sara Berger, et al. | CHI 2025
Ethical Co-Development of AI Applications with Indigenous Communities | Claudio Santos Pinhanez, Edem Wornyo | CHI 2025
Explain Yourself, Briefly! Self-Explaining Neural Networks with Concise Sufficient Reasons | Shahaf Bassan, Ron Eliav, et al. | ICLR 2025