Barnard’s ‘Responsible AI and the Liberal Arts’ symposium
A day of discussion and presentations about the use of AI in the liberal arts.

Photo by Haley Scull/The Barnard Bulletin
February 27, 2026
To promote Barnard’s AI literacy framework and demonstrate ways that artificial intelligence (AI) can be used in educational and professional settings, the College held a “Responsible AI and the Liberal Arts” symposium on Friday, February 6. The symposium highlighted how responsible AI usage intersects with the liberal arts in multiple fields and domains, including mental health, libraries, journalism, and research.
The day’s events featured computer science professor Kathleen McKeown as a keynote speaker, an alumni panel that highlighted “the application of AI in diverse fields and research domains,” and a community showcase featuring research and projects from across campus.
Professor McKeown’s address, titled “Research Arcs in AI: From Literature to Art with a Few Stops in Between,” traced the evolution of her research in natural language processing, a core area of AI. She reflected on her early work on language generation in the 1980s before turning to three major strands of her later research: text summarization and narrative analysis, social media analysis, and multimodal AI systems that engage with visual art.
“While the disciplines we interact with may have changed over time, interdisciplinary interaction is still a critical component of our work,” McKeown said, highlighting that collaboration across fields remains essential in defining and practicing responsible AI.
Following the keynote, five Barnard alumni offered their perspectives on the responsible use of AI. Throughout the discussion and audience Q&A, panelists and attendees addressed ideas about accessibility, risk, efficiency, ethics, and regulation. The panel converged on the idea that AI literacy is not simply technical proficiency but also the ability to weigh factors such as safety, bias, human relationships, and societal consequences when deciding how AI tools should be used.
“We now are in a country where we have incredibly high rates of severe mental health problems, … even in elementary and middle school levels,” said Julie Scelfo (BC ’96), a former New York Times journalist and media ecologist and founder of Mothers Against Media Addiction. Scelfo spoke about the mental health implications of technology use, particularly among young people. These trends led her to found her organization as a way to advocate for safer digital environments for children.
“Tech could be really helpful and really fun, but it’s not always automatically the best solution,” she stated. “My idea about responsible AI is that it’s AI built with people in mind, that if it’s going to be used by children, by default, it is safe for children.”
For Jessica Wall (BC ’10), the Director of Participant Success for JSTOR Digital Stewardship Services at ITHAKA, a nonprofit organization working to make higher education more accessible, responsible AI involves flexibility that allows her team to avoid being “beholden to a single entity” while evaluating how AI tools should be implemented. Reflecting on her liberal arts education, Wall credited her time at Barnard with strengthening the critical thinking skills that inform her work today, especially as she works with multiple large language models (LLMs) rather than relying on a single provider.
In a conversation with The Bulletin following the panel, Wall reflected further on how her liberal arts education shapes her approach to AI. As an art history major at Barnard, she said she was drawn to visual art as a way to “connect to the idea of the human story.” That perspective continues to inform Wall’s work in digital stewardship, where she views AI not as an end in itself but as a tool in service of “a greater mission.” For students utilizing LLMs in their academic work, she encouraged engaging with “a critical eye,” understanding what the tool is doing, and using it as a supplement rather than a substitute for independent thinking.
“I knew I wanted to be interdisciplinary,” said Grace Li (BC ’24), a computer science PhD student at the University of Chicago studying AI literacy in education. “I knew I wanted to incorporate writing and education into my computer science research, in the sense of really understanding how humans are interacting with these tools.”
Li explained that Barnard’s required coursework and research opportunities shaped her academic trajectory and reinforced the importance of examining AI through multiple lenses. “I think that type of interdisciplinary research is something that is really cultivated by a liberal arts education,” she said. Li also emphasized the importance of correctly distinguishing AI from LLMs in discussions about responsible AI use.
Sonia Mohandas (BC ’23), a graduate student studying dance and movement therapy, emphasized both the promise and risks of AI in the mental health field. She stressed that human connection in therapy is important even with the breadth of AI resources available, though she acknowledged that certain AI tools have the potential to improve efficiency for therapists. “Accessibility is such a huge struggle for therapists,” she pointed out, noting that financial, temporal, and systemic barriers prevent many people from receiving care. “Being in a physical relationship with somebody is not something that can be replaced … [AI] can’t mirror you. It can’t be in a relationship with you the same way that another person can.”
Lauren Beltrone (BC ’17), a Conversational Designer at Google DeepMind, stated that designers of LLMs have a responsibility to embed safety and ethics into everyday decisions. She cautioned against technological over-optimism, echoing Scelfo’s advice, “don’t believe the hype,” and urging attendees to critically evaluate when AI enhancements are actually effective.
Following the panel was the Community Showcase, in which students, faculty, and staff presented projects, research, and open discussions related to responsible AI in the liberal arts. Spread across a series of tables in the James Room, the Community Showcase created an interactive “show-and-tell” environment for attendees.
Among the featured projects was “Improper Conduct: A Human-AI Collaboration Framework for Identifying Prosecutorial Misconduct,” presented by Dina Blachman (BC ’27) and second-year Columbia PhD student Amy Pu. Their project used AI to assist journalists in identifying patterns of prosecutorial misconduct within large volumes of legal case documents, inspired by journalists who had spent years manually combing through cases and sought out computer scientists for assistance.
“We have a really strong emphasis on the collaboration between humans and AI,” Pu explained, noting that the system would not function without experts to train models and interpret their outputs. Blachman added that their goal was not to replace the job of journalists but to support them, stating that AI in their project was intended to “give human experts more time for more critical thinking, rather than replace any of the people in the process.”
“Find a way to use [AI] as a tool but just don’t let it do the thinking for you,” concluded Wall.