IBM Research Africa at Deep Learning Indaba 2023
- University of Ghana in Accra, Ghana
About
The Deep Learning Indaba is the annual meeting of the African machine learning and AI community with the mission to Strengthen African AI. In 2023, the Indaba and Africa’s artificial intelligence community will meet for a week-long event of learning, research, exchange, ideation, and debate around the state of the art in machine learning and artificial intelligence in Accra, Ghana from the 3rd to the 9th of September. AI research teams from the Africa Lab will be at the Deep Learning Indaba 2023. Various IBMers will showcase some of our exciting projects by presenting posters and hosting interactive workshops.
Agenda
- Description:
Thapelo Andrew Sindane and Nomonde Francisca Khalo will be showcasing their work at the poster session. Nomonde Francisca Khalo will be presenting her PhD work: "Knowledge Enhancement to improve the Reliability of Large Language Models for Patient-Centered Text Simplification."
- Speakers: Sekou Remy, Staff Research Scientist, IBM Research
- Description:
The mission of the Workshop is to build an active community of robotics and machine learning practitioners with the skills to develop practical and sustainable solutions for addressing key societal issues on the African continent. Through this workshop, we hope to introduce robotics to the curious, spark collaborations, and provide opportunities for sharing knowledge and resources.
More details: https://sites.google.com/view/robotlearning4africa/home
Ndivhuwo Makondo is one of the organizers of the Robot Learning for Africa Workshop.
Speakers: Ndivhuwo Makondo
- Description:
Recent years have seen an overwhelming body of work on fairness and robustness in machine learning (ML) models. This is not unexpected, as these are increasingly important concerns as ML models are used to support decision-making in high-stakes applications such as mortgage lending, hiring, and diagnosis in healthcare. Trustworthy AI aims to provide an explainable, robust, and fair decision-making process. In addition, transparency and security also play a significant role in improving the adoption and impact of ML solutions.

Currently, most ML models assume ideal conditions and rely on the assumption that test/clinical data comes from the same distribution as the training samples. However, this assumption is not satisfied in most real-world applications; in a clinical setting, we can find different hardware devices, diverse patient populations, or samples from unknown medical conditions. We also need to assess potential disparities in outcomes that can be reproduced and deepened by our ML solutions. In particular, when building solutions for developing countries, data and models are often imported from external sources, raising potential security issues. The divergence of imported data and models from the population at hand also undermines transparency and explainability in the decision-making process.

In this second edition of the workshop, we aim to bring together researchers, policymakers, and regulators to discuss ways to ensure security and transparency while addressing fundamental problems in developing countries, particularly when data and models are imported and/or collected locally with less focus on ethical considerations and governance guidelines.
More details: https://trustaideepindaba.github.io/about/
Celia Cintas is one of the organizers of The Trustworthy AI Workshop @ DeepIndaba 2023.
Speakers: Celia Cintas