GW Engineering Hosts Multidisciplinary Discussion on the Powers and Perils of AI


October 11, 2023

Moderator Dean Lach and panelists Zoe Szajnfarber, Rebecca Hwa, Ryan Watkins, and Susan Aaronson stand with President Ellen Granberg

Barely a day goes by that we don’t hear something about artificial intelligence (AI), either as an exciting new frontier or a serious ethical conundrum. Faculty at the George Washington University are taking a holistic approach to AI research and education that addresses both sides, as demonstrated by the panel “The Multifaceted Landscape of AI” during the 2023 GW Alumni & Families Weekend. The panel was moderated by School of Engineering & Applied Science (GW Engineering) Dean John Lach but featured a multidisciplinary set of panelists who addressed the societal implications of AI in addition to its technical design.

Panelists included Rebecca Hwa, Professor and Department Chair of Computer Science (CS) at GW Engineering; Zoe Szajnfarber, Director of Strategic Initiatives and Professor of Engineering Management and Systems Engineering (EMSE) at GW Engineering; Susan Aaronson, Research Professor of International Affairs at the Elliott School; and Ryan Watkins, Professor of Educational Technology at the GW Graduate School of Education and Human Development. Together, they explored GW’s work around AI, examined cross-cutting research happening at the forefront of this critical technology, and discussed its powers and perils.

“One of the things I love most about GW is that it doesn’t just produce scholarship to sit on a shelf. It produces scholarship to make a difference,” said President Ellen Granberg in her opening remarks. “We have students and faculty working on real-world solutions to some of our most pressing challenges and our most exciting opportunities. Of course, we all know that those two things are deeply integrated when it comes to artificial intelligence.”

The definition of AI has been contested since its advent. Professor Hwa pointed to the Tin Man from The Wizard of Oz as an example of early popular perceptions of AI and described how, in the 1950s, the field came to be thought of more scientifically. By the 1990s, she said, AI had shifted toward a more data-driven, statistical machine learning approach, and by the 2010s we had machines fast enough, data plentiful enough, and algorithms refined enough to advance AI to the point it is today.

While the numerous benefits of AI have been demonstrated, such as improving processes and workflows and assisting in medical applications, its risks still cause concern and fear among the general public. AI once lived largely in a vacuum, Professor Szajnfarber said, but that is no longer true: it is being embedded into more and more systems that interact with countless people daily.

“It’s not just about the technology we’re developing. It’s about understanding it in context,” said Szajnfarber.

Generative AI is built on vast sets of data, whether proprietary or scraped from the web. Professor Aaronson noted that much of the fear of AI stems from a lack of trust, due in part to designers not yet asking whether this data is accurate, valid, complete, and representative before using it to build an AI system.

“Trust is the grease that makes government, or really any relationship, work,” Aaronson stated. “The idea is that if we can build trust in the technology, people will be much more willing to accept the changes that technology may have on society.”

The Institute for Trustworthy AI in Law & Society (TRAILS), a multi-institutional effort co-led by GW and supported by the National Science Foundation, was founded earlier this year on the premise that trust in AI is currently lacking and that, to reverse this, trust must be embedded at every level of AI design, deployment, and governance. This belief necessitates an interdisciplinary approach, uniting specialists in AI and machine learning with systems engineers, social scientists, legal scholars, educators, and public policy experts.

In fact, many of the risks around deploying AI in society are systems problems, and systems engineers are themselves heavy users of computing. GW Engineering’s CS and EMSE departments have therefore created multiple joint programs, including a Ph.D. program in the Co-Design of Trustworthy AI Systems.

Other perils of deploying AI systems include the inequitable harms felt by historically underrepresented communities whose concerns are not reflected in the design process. GW researchers in the TRAILS Institute will work directly with these impacted communities to ensure AI is created in a way that aligns with the values and interests of diverse groups of people.

“Our vision is to transform the field of AI from tech-first to people-first where AI is developed and governed in a way that promotes human rights and serves the interest of people who use it,” said Professor Aaronson.

AI is also becoming increasingly prevalent in the classroom, regardless of discipline. Many faculty members in the social sciences and humanities, like Professor Watkins, have begun teaching their students how to leverage the power of AI because graduates who understand how to use these technologies may have an advantage in the workforce over those who do not. Since these faculty members are not the technical designers of AI systems, they often consult those who are, such as GW Engineering faculty and students.

“There’s risk in the security and privacy of data that we may not know about because we are not trained in those areas, so we must do this in collaboration with people with greater expertise in those,” Watkins stated.

Training the next generation of AI experts through a solid foundational AI curriculum is part of the CS Department’s mission. The department also offers courses on general knowledge it believes every student should have, such as how computing impacts society. Professor Hwa said that because CS is fundamentally about thinking logically, it connects with many other disciplines, including the social sciences and humanities.

The panel rounded out with audience questions, which primarily concerned ethical dilemmas in AI, whether in self-driving cars, the upcoming election, or the classroom. Panelists’ responses showed how the answers to these pressing questions are part of the ongoing discussion around AI regulation. Through institutes such as TRAILS, GW is bringing together faculty and students from all disciplines to discuss how AI can be regulated in a way that addresses its risks and takes advantage of its opportunities without stifling innovation.

President Granberg best summarized all of GW’s work around AI, saying, “By collaborating across disciplines and working directly with impacted communities, we can add value to traditional academic disciplines, we can prepare our students to succeed in an increasingly complex and interdisciplinary world, and we can ensure our research will have a very real and immediate impact on society.”