Celebrating Google Summer of Code Responsible AI Projects
April 04, 2023

Posted by Bhaktipriya Radharapu, Software Engineer, Google Research

One of the key goals of Responsible AI is to develop software ethically and in a way that is responsive to the needs of society and takes into account the diverse viewpoints of users. Open source software helps address this by providing a way for a wide range of stakeholders to contribute.

To continue making Responsible AI development more inclusive and transparent, and in line with our AI Principles, Google’s Responsible AI team partnered with Google Summer of Code (GSoC) to provide students and professionals with the opportunity to contribute to open source projects that promote Responsible AI resources and practices. GSoC is a global, online program focused on bringing new contributors into open source software development. GSoC contributors work with an open source organization on a 12+ week programming project under the guidance of mentors. By bringing in new contributors and ideas, GSoC helped foster a more innovative and creative environment for Responsible AI development.

This was also the first time several of Google’s Responsible AI tools, such as The Learning Interpretability Tool (LIT), TensorFlow Model Remediation and Data Cards Playbook, pulled in contributions from third-party developers across the globe, bringing in diverse and new developers to join us in our journey for building Responsible AI for all.

We’re happy to share the work completed by GSoC participants: what they learned about working with state-of-the-art fairness and interpretability techniques, what we learned as mentors, and how rewarding Summer of Code was for each of us and for the Responsible AI community.

We had the opportunity to mentor four developers: Aryan Chaurasia, Taylor Lee, Anjishnu Mukherjee, and Chris Schmitz. Aryan successfully implemented XAI tutorials for LIT under the mentorship of Ryan Mullins, a software engineer at Google. These showcase how LIT can be used to evaluate the performance of (multilingual) question-answering models and understand behavioral patterns in text-to-image generation models.

Anjishnu also implemented tutorials for LIT under the mentorship of Ryan Mullins. Anjishnu’s work influenced in-review research assessing professionals’ interpretability practices in production settings.

Chris, under the technical guidance of Jenny Hamer, a software engineer at Google, created two tutorials for TensorFlow Model Remediation’s experimental technique, Fair Data Reweighting. The tutorials help developers apply a fairness-enforcing data reweighting algorithm, a pre-processing bias remediation technique that is agnostic to model architecture.
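To give a flavor of the idea behind pre-processing data reweighting, here is a minimal, self-contained sketch in the spirit of classic "reweighing" schemes. This is not the TensorFlow Model Remediation API, just an illustration of the core concept: assign each example a weight so that, after weighting, group membership and labels look statistically independent, without touching the model itself.

```python
# Illustrative sketch of pre-processing data reweighting (NOT the
# TensorFlow Model Remediation Fair Data Reweighting API).
# Each (group, label) pair gets weight = expected count under
# independence / observed count, so over- and under-represented
# combinations are balanced before training.
from collections import Counter

def reweight(groups, labels):
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    pair_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] * label_counts[y]) / (n * pair_counts[(g, y)])
        for g, y in zip(groups, labels)
    ]

# Toy dataset: group "a" skews toward positive labels, "b" toward negative.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweight(groups, labels)
# Over-represented pairs like ("a", 1) get weight < 1;
# under-represented pairs like ("a", 0) get weight > 1.
```

Because the adjustment happens entirely in the data, the resulting per-example weights can be passed to any training loop that accepts sample weights, which is what makes this family of techniques architecture-agnostic.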

Finally, Taylor, under the guidance of Mahima Pushkarna, a senior UX designer at Google Research, and Andrew Zaldivar, a Responsible AI Developer Advocate at Google, designed the information architecture and user experience for activities from the Data Cards Playbook. This project translated a manual calculator that helps groups assess the reader-centricity of their Data Card templates into virtual experiences to foster rich discussion.

The participants learned a lot about working with state-of-the-art fairness and interpretability techniques. They also learned about the challenges of developing Responsible AI systems and the importance of considering the social implications of their work. What is also unique about GSoC is that it wasn't just code and development: mentees were exposed to code-adjacent work such as design and technical writing, skills that are essential to the success of software projects and critical for cutting-edge Responsible AI work, giving them a 360° view into the lifecycle of Responsible AI projects.

The program was open to participants from all over the world, and saw participation from 14 countries. We set up several community channels for participants and professionals to discuss Responsible AI topics and Google’s Responsible AI tools and offerings, which organically grew to 300+ members. The community engaged in various hands-on starter projects for GSoC in the areas of fairness, interpretability, and transparency, guided by a team of 8 Google Research mentors and organizers.

We were able to underscore the importance of community and collaboration in open source software development, especially in a field like Responsible AI, which thrives on transparent, inclusive development. Overall, the Google Summer of Code program has been a valuable tool for democratizing the responsible development of AI technologies. By providing a platform for mentorship and innovation, GSoC has helped us improve the quality of open source software and guide developers with tools and techniques for building AI in a safe and responsible way.

We’d like to say a heartfelt thank you to all the participants, mentors, and organizers who made Summer of Code a success. We're excited to see how our developer community continues to work on the future of Responsible AI, together.

We encourage you to check out Google’s Responsible AI toolkit and share what you have built with us by tagging #TFResponsibleAI on your social media posts, or share your work for the community spotlight program.

If you’re interested in participating in the Summer of Code with TensorFlow in 2023, you can find more information about our organization and suggested projects here.

Acknowledgements:

Mentors and Organizers:

Andrew Zaldivar, Mahima Pushkarna, Ryan Mullins, Jenny Hamer, Pranjal Awasthi, Tesh Goyal, Parker Barnes, Bhaktipriya Radharapu

Sponsors and champions:

Special thanks to Shivani Poddar, Amy Wang, Piyush Kumar, Donald Gonzalez, Nikhil Thorat, Daniel Smilkov, James Wexler, Stephanie Taylor, Thea Lamkin, Philip Nelson, Christina Greer, Kathy Meier-Hellstern and Marian Croak for enabling this work.
