
Critical Thinking About Generative AI
Generative AI is reshaping how we create, communicate, and engage with the world—but what do we gain, and what do we risk losing? This thought-provoking guide challenges you to move beyond fear or hype, applying critical thinking to AI’s evolving role in media, creativity, ethics, and society.


Communication and media scholars think critically about the introduction of new technologies, exploring what society gains and loses as new tools for communicating and new forms of media production and distribution become integrated into society. Rather than being motivated by fear of a new technology or a tendency to ask if a new technology is “good” or “bad,” communication and media scholars apply critical thinking strategies to consider its benefits and challenges as well as its design and uses. Through these lenses, scholars argue that the design of each technology, including Artificial Intelligence (AI), affords us the ability to use it in certain ways, including uses that we find beneficial and those that we find harmful. At the same time, we understand that a technology does not determine its own impact on the world; instead, we can think about how those who design, distribute, profit from, and use the technology are all part of the mélange of factors that shape how a new technology will be integrated into and possibly change society. Since AI is always developing, below are some questions you can ask to think critically about its value, use, and impact.
We encourage you to spend some time thinking about your own answers to these questions.
What might we gain and/or lose from the introduction of Generative AI (GenAI)?

As many of us have experienced, GenAI can increase efficiency and process large amounts of information. It may even increase creativity by helping us to think outside of the confines of human thought. On the one hand, these tools could lead us to develop complex perspectives and stronger evidence-based arguments. On the other hand, AI processes could also lead us to rely less on our own memories and analytical skills, potentially atrophying our abilities to think critically, develop expertise, and exercise moral judgments.
What moral codes and ethical principles does AI use as it creates communications and media?

Like all new technologies, AI’s processes are encoded with the biases of its developers. In a capitalist society, developers are likely to value profit over human wellness. In addition, some scholars are concerned that because GenAI relies on user prompts and develops its intelligence by building on existing information and patterns, it is not equipped to challenge social norms or stereotypes. It is therefore essential to consider when AI should value humanity or ecology over profit. If we first specify, through prompts, what a GenAI tool should value, it may abide, but what about when we don’t? In other words, what is or should be the AI moral default? Alternatively, can AI be developed that helps users critically reflect on their own biases and consider alternative ideologies?
How might AI impact the creative industries?

Much of the buzz surrounding GenAI has focused on its potential uses in artistic endeavors, such as creating literature, music, and video games. It’s worth considering the potential benefits and challenges of using AI in these areas. AI might lower the barrier to entry for creative work and thus help even more people create and share their artistic visions with the world. Yet it’s possible that AI trained primarily on media content that reflects predominant power dynamics and stereotypes would largely generate output reflecting those same power dynamics and stereotypes, thus impeding the creation and spread of innovative and resistive creative ideas and expressions. AI has already provoked a “crisis” in intellectual property, with artists expressing concern that AI is using their works without permission and threatening their livelihood. AI thus raises critical questions about what it means to “own” an idea or creative expression as well as the meaning of creativity in general.
As users who interact with new and expanding AI tools, how can we help shape their use?

It’s important to recognize that AI is a technology made by and ultimately used by humans, thus giving us influence over how AI is designed and implemented. New literacies must be developed to help people learn how to use AI safely and responsibly. New norms, expectations, and regulations are needed to make sure AI is used ethically and to hold accountable those who fail to do so. Serious consideration must also go into developing and implementing strategies to prevent AI from exacerbating the digital divide. Who will have access to the highest quality AI? Will it remain free and open, or will those with greater privilege have access to more powerful and advanced tools? What might be the long-term socio-political and economic impact of this divide?
Living the AI Experiment
As artificial intelligence seeps into every facet of life, Pace scholars are working to harness the technology’s potential to transform teaching and research. While the road ahead is fraught with uncertainty, these Pace experts see a fairer and safer AI-driven future.


When philosophy professor James Brusseau, PhD, introduced his students to the Caffeinated Professor, a generative artificial intelligence (AI) chatbot trained on his business ethics textbook, he wasn’t trying to replace traditional teaching by handing the classroom over to a robot.
He was embarking on an experiment into uncharted educational territory, a journey without a map and only one direction of travel.

“I don’t know all the ways that it will help and hurt my students,” said Brusseau, who unveiled the AI professor to his Philosophy 121 class this semester. Students are encouraged to converse with the bot day or night, just as they might with him. “When answers are a few keystrokes away, there’s a clear pedagogical negative to introducing a tool like this.”
“But if I didn’t build this, someone else would have,” he added. “While I can’t control the world’s ‘AI experiment,’ I do have the opportunity to see for myself how it’s working.”
The rise of generative AI—tools like ChatGPT, Gemini, and Grok that generate original text, images, and videos—has sent shockwaves through many industries. For some observers, fear is the dominant emotion, with concerns that AI could take jobs or lead to humanity’s downfall.
Professors and researchers at Pace University, however, see a different future. For them, AI anxiety is giving way to a cautious acceptance of a technology that’s transforming how we live, work, study, and play. While creators urge caution and experts debate regulations, scholars are concluding that, for better or worse, AI is here to stay.
The real question is what we choose to do with that reality.
At Pace, experimentation is the only way forward. In Fall 2024, Pace added an AI course—Introduction to Computing—to its core curriculum for undergraduates, bringing the number of courses that incorporate AI at the undergraduate and graduate levels to 39.
“While I can’t control the world’s ‘AI experiment,’ I do have the opportunity to see for myself how it’s working.”
Pace is also leading the way in cross-disciplinary AI and machine learning research. At the Pace AI Lab, led by pioneering AI researcher Christelle Scharff, PhD, faculty, staff, and students integrate their knowledge areas into collective problem solving powered by the technology.
In doing so, Pace’s academics are writing and revising the script for how to balance the dangers and opportunities that AI presents. “We’re living in a heuristic reality, where we experiment, see what happens, and then do another experiment,” said Brusseau.
A Defining Moment
Jessica Magaldi’s AI experiment began with revenge porn. Early in her career, the award-winning Ivan Fox Scholar and professor of business law at the Lubin School of Business studied intellectual property law and transactions for emerging and established companies.

In 2020, she turned her attention to laws criminalizing the sharing of sexually explicit images or videos of a person online without their consent. Shockingly, most revenge porn laws were toothless, she said, and there was very little public or political appetite to sharpen them.
Now, fast forward to January 2024, when fake sexually explicit images of singer Taylor Swift went viral on X. Public outrage was immediate. Users demanded accountability, and fans initiated a “Protect Taylor Swift” campaign online. In Europe, lawmakers called for blood.
For Magaldi, something didn’t add up. “We were at a moment when AI-generated content that everyone knows is fake was producing more outrage than so-called revenge porn photos, images that are real.” Understanding that contradiction could offer clues on how to draft legislation that is more effective for victims, she said.
Eventually, it might even teach us something about ourselves. “My greatest hope is that we can use what we learn about the differences between how we feel about what is real and what is AI to explore what that means for us and our collective humanity,” she said.
Optimism Grows
Harnessing the benefits of AI is also what occupies Brian McKernan, PhD, an assistant professor of communication and media studies at the Dyson College of Arts and Sciences.

McKernan, who describes himself as cautiously optimistic about AI, would be excused for taking a less rosy view of the technology. His research areas include misinformation, cognitive biases, and political campaign transparency—topics where the use of AI is rarely benevolent. In a 2024 study of the 2020 US presidential election, McKernan and his collaborators found that President Donald Trump used the massive exposure popular social media platforms offer in an attempt to sow distrust in the electoral process.
“There are great uses for AI, particularly in cases with huge amounts of data. But we will always need humans involved in verifying."
And yet, McKernan remains upbeat, an optimism stemming from the fact that AI helps him keep tabs on what politicians are saying, and doing, online.
“It’s a data deluge,” he said. To help sort through it, McKernan and colleagues at the Illuminating project, based at Syracuse University, train supervised AI models to classify and analyze social media content. Researchers check the performance of the models before making their findings public.
“There are great uses for AI, particularly in cases with huge amounts of data. But we will always need humans involved in verifying,” he said.
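The Illuminating project's actual models and data are not detailed here, but the workflow McKernan describes, training a supervised classifier on hand-labeled posts and checking its held-out performance before trusting its output at scale, can be sketched in a few lines of Python. The posts, labels, and classifier below are hypothetical stand-ins, not the project's real pipeline.

```python
# Minimal sketch of a supervised text-classification workflow with a human
# verification step. Illustrative only; the data and model are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labeled posts: 1 = questions election integrity, 0 = other.
posts = [
    "The vote counting machines cannot be trusted",
    "Polls open at 7am tomorrow, bring photo ID",
    "Mail-in ballots are being thrown away by officials",
    "Volunteers needed to drive voters to their polling place",
]
labels = [1, 0, 1, 0]

train_x, test_x, train_y, test_y = train_test_split(
    posts, labels, test_size=0.5, stratify=labels, random_state=0
)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_x, train_y)

# Researchers inspect held-out performance before the model's labels are
# applied to the full data deluge or made public.
print(classification_report(test_y, model.predict(test_x)))
```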
Racing to Regulate?
To be sure, there are social and ethical dangers inherent in AI’s application—even when people are at the keyboard. One concern is access. Many generative AI tools are free, but they won’t be forever. When people can’t afford “the shiniest tools,” McKernan said, the digital divide will deepen.
Other challenges include maintaining data privacy, expanding availability of non-English tools, protecting the intellectual property of creators, and reducing biases in code. Even AI terrorism is an area of increasing concern for security experts.
Emilie Zaslow, PhD, a professor and chair of communication and media studies at Pace, said that, given these concerns, a regulatory framework for AI might eventually be wise.

“In media, we have examples of both government regulatory oversight, through the Federal Communications Commission, for example, and industry self-regulation, such as the Motion Picture Association film rating system,” Zaslow said. “There is also government involvement in evaluating new consumer products; take the Food and Drug Administration, for example. Every time a new drug comes to market, the FDA evaluates it, tests it, and decides whether it gets released and with what kind of warnings.”
“There should be increased regulatory oversight for technology,” she said.
Regulations are emerging. In Europe, the AI Act bans certain applications deemed to pose an “unacceptable risk” to citizens. Prohibited applications include social scoring systems; real-time facial recognition and other forms of biometric identification that categorize people by race, sex life, sexual orientation, and other attributes; and “manipulative” AI tools.
Companies face fines of up to €35 million (about $35.8 million) or 7% of their global annual revenue—whichever amount is higher.
Brusseau, while sensitive to the dangers, doubts that the punitive approach will pay off. “The internet has no geography; it isn’t anywhere,” he said. “How do we prohibit something that isn't anywhere?”
“There should be increased regulatory oversight for technology.”
He suggests a different approach: using technology to regulate itself. He calls this acceleration ethics, the idea that the most effective way to approach the risks raised by innovation is with still more innovation.
In a recent paper, Brusseau examined how TELUS, a Canadian telecommunications company, developed an automated safety tool to monitor its customer-serving chatbot. When the safety tool detected hallucinations, phishing threats, or privacy risks in the chatbot’s answers, it flagged them for human review.
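The paper's technical detail is not reproduced here, but the general pattern, one automated layer screening another's output and escalating anything suspicious to a person, can be roughed out in code. The checks below are deliberately simplistic, hypothetical placeholders rather than TELUS's actual safety tool.

```python
# Rough sketch of an "AI watching AI" safety layer: screen a chatbot's draft
# reply and flag risky answers for human review. All checks are hypothetical.
def screen_reply(reply: str, source_documents: list[str]) -> list[str]:
    flags = []
    text = reply.lower()

    # Crude stand-in for hallucination detection: the reply shares no words
    # with the documents the chatbot was supposed to draw on.
    source_words = set(" ".join(source_documents).lower().split())
    if not source_words & set(text.split()):
        flags.append("possible hallucination: reply unsupported by sources")

    # Crude stand-ins for phishing and privacy checks.
    if any(term in text for term in ("password", "social security", "wire transfer")):
        flags.append("possible phishing or privacy risk")

    return flags


flags = screen_reply(
    "Please confirm your password so I can upgrade your plan.",
    source_documents=["Plan upgrades can be made from the account settings page."],
)
if flags:
    print("Escalating to human review:", flags)
else:
    print("Reply cleared automatically.")
```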
“While the purity of theoretical positions is blurred by real-world ambiguities,” Brusseau wrote, “the TELUS case illustrates how the acceleration strategy transforms AI ethics into an innovation catalyst.”
Risks Worth Taking
Ask ChatGPT whether it’s dangerous, and its response is unequivocal: “I’m here to help and have meaningful conversations.”
Ask ChatGPT whether AI is dangerous, and the reply is a bit murkier: “It depends on how it's used.”
But point out that ChatGPT is AI, and the contradiction isn’t lost on the technology itself. “What I meant to convey is that I am designed to be helpful, safe, and non-threatening. But it’s true that, like any tool, the potential for harm exists if used irresponsibly.”
When scholars and historians look back at this era of AI experimentation, they may be similarly conflicted. Magaldi, who understands how devastating sexually explicit deepfake images can be, also recognizes the usefulness of AI’s creativity. In Spring 2024, she even used AI to help her flesh out an idea for a class on Taylor Swift. She did it, in part, as an exercise for herself to use AI in a creative way.
“I'm not worried in the least. Humans produce knowledge through causality, while machines do it exclusively through correspondence. They reason wrong.”
“With ChatGPT, I was able to build an entire music industry law class based on Swift's disputes and lawsuits,” Magaldi said. After lots of tweaking, she ended up with the syllabus for a three-credit class exploring the singer’s experiences with copyright infringement, music industry contracts, trademark law, and ticketing practices.
It was a massive success. TikTok videos were made about the class, registration closed in minutes, and students are eager for it to run again.
This type of human-AI interaction—using the technology as a “thought partner,” as Magaldi puts it—is the sweet spot in AI’s societal integration.
It’s also why Brusseau is upbeat. “I'm not worried in the least,” he said. “Humans produce knowledge through causality, while machines do it exclusively through correspondence. They reason wrong.”
That certainty, however, doesn’t mean he has all the answers. With AI, there are only questions. “Like buying a one-way plane ticket, it’s not the destination that matters, but the journey,” he said. “That’s why I built the Caffeinated Professor—to see where it takes us.”
Haub Law’s Mock Trial Team Finishes Strong in the Queens County District Attorney's Office 10th Annual Mock Trial Competition
The Elisabeth Haub School of Law at Pace University’s Mock Trial Team recently competed in the Queens District Attorney’s Office 10th Annual Mock Trial Competition, held in the court facilities of the Queens Criminal Court. The Pace Haub Law team, consisting of Skyler Pozo (2L), Maiya Aubry (2L), Alexa Saccomanno (2L), and James Page (2L), finished in second place out of the eighteen nationally ranked law schools that competed. During the intense competition, students argued before senior prosecutors and members of the defense bar, with judges from Queens and Brooklyn, along with prosecutors and defense attorneys, presiding over the rounds.
The Haub Law Mock Trial Team made it through two preliminary rounds, a blind quarterfinal round, and the semifinals before finishing in second place in the final round. “It was a challenging competition with some of the best and brightest law students throughout the country, but I’m proud to say that our student advocates rose to the occasion,” said Luis Felix ’15, who coached the Pace Haub Law team. “Their dedication, hard work, and knowledge of the fact pattern were reflected in their strong finish, and I look forward to seeing what else they accomplish in the courtroom.” Alexa Saccomanno (2L) also received the individual award for Best Opening Statement.
“The performance by our 2L students demonstrates both the strength and depth of our program,” said Professor Louis Fasulo, Director of Advocacy Programs and Professor of Trial Practice. “These students, along with the support of Coach Felix, make us all proud and are a major highlight of this year’s competitions.”
Smart Medicine: The Promise and Peril of AI in Healthcare
With artificial intelligence remodeling how healthcare is researched and delivered, Pace experts are shaping the technology—and erecting the guardrails—driving the revolution.


To the untrained eye, the grainy medical images vaguely look like knees, black-and-white scans of what might be muscle, bone, and green wisps of something else.
But to Juan Shan, PhD, an associate professor of computer science in the Seidenberg School of Computer Science and Information Systems at Pace University, the photos are validation of a decades-long hunch: robots can read an MRI.

“The method does not require any human intervention,” Shan wrote in a recent paper detailing her machine learning tool for identifying bone marrow lesions (BMLs), early indicators of knee osteoarthritis. In a standard MRI, BMLs appear as pixelated clouds. In Shan’s model, they pop in vibrant hues of color.
“This work provides a possible convenient tool to assess BML volumes efficiently in larger MRI data sets to facilitate the assessment of knee osteoarthritis progression,” Shan wrote.
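Shan's published method is not reproduced here, but the visualization step described above, painting a model's predicted lesion pixels in a vivid color over a grayscale slice, is simple to sketch. The random slice and hand-placed mask below are placeholders for a real MRI and a real model's prediction.

```python
# Illustrative overlay of a predicted lesion mask on a grayscale MRI slice.
# The "slice" and "mask" are synthetic placeholders, not real data or output
# from Shan's model.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
mri_slice = rng.random((256, 256))           # stand-in for one grayscale MRI slice
lesion_mask = np.zeros((256, 256), dtype=bool)
lesion_mask[100:130, 140:170] = True         # stand-in for a predicted BML region

# Convert the grayscale slice to RGB, then paint predicted lesion pixels green.
overlay = np.stack([mri_slice] * 3, axis=-1)
overlay[lesion_mask] = [0.0, 1.0, 0.0]

plt.imshow(overlay)
plt.title("Predicted bone marrow lesion (green) over MRI slice")
plt.axis("off")
plt.show()
```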
As artificial intelligence (AI) reshapes how medicine is practiced and delivered, Pace researchers like Shan are shaping the technology—and the guardrails—driving the revolution in clinical care. Computer scientists at Pace harness machine learning to build tools to reduce medical errors in pediatric care and strengthen clinical decision-making. Social scientists work to ensure fairness and transparency in AI-supported applications. And students are taking their skills to the field, addressing challenges like diagnosing autism.
Collectively, their goal isn’t to replace people in lab coats. Rather, it’s to facilitate doctors’ work and make medicine more precise, efficient, and equitable.
“In healthcare, AI enables earlier disease detection, personalized medicine, improves patient and clinical outcomes, and reduces the burden on healthcare systems,” said Soheyla Amirian, PhD, an assistant professor of computer science at Seidenberg who, like Shan, trains computers to diagnose illnesses.
“New York is a world-class hub for innovation, healthcare, and advanced technologies, and its diversity makes it the perfect place to explore how fair and responsible AI can address inequities across populations,” Amirian said.
In Shan’s lab, that work begins below the kneecap. Together with colleagues, she feeds medical images—MRIs and X-rays—into machine learning models to train them to detect early signs of joint disease. They’re looking to identify biomarkers—cartilage, bone marrow lesions, effusions—that might indicate whether a patient has or is prone to developing osteoarthritis, the fourth leading cause of disability in the world. Current results indicate that her models’ outputs are highly correlated with the manual labels marked by physicians.
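The study's evaluation protocol is not spelled out here, but the kind of check behind a claim like "highly correlated with manual labels," comparing automated lesion measurements against a physician's for the same scans, looks roughly like this. The volumes below are invented for illustration.

```python
# Hypothetical comparison of model-estimated lesion volumes against
# physician-labeled volumes for the same scans. Numbers are made up.
import numpy as np

physician_volumes = np.array([1.2, 0.4, 2.8, 0.9, 1.7])  # hypothetical, in cm^3
model_volumes = np.array([1.1, 0.5, 2.6, 1.0, 1.9])      # hypothetical, in cm^3

r = np.corrcoef(physician_volumes, model_volumes)[0, 1]
print(f"Pearson correlation between manual and automated volumes: r = {r:.2f}")
```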
“We want to apply the most advanced techniques in machine learning to the medical domain, to give doctors, radiologists, and other practitioners a second opinion to improve their diagnosis accuracy."
Shan’s vision is to create diagnostic tools that would supplement human interventions and pre-screen patients who are at lower risk of disease.
“We want to apply the most advanced techniques in machine learning to the medical domain, to give doctors, radiologists, and other practitioners a second opinion to improve their diagnosis accuracy,” she said. “Our goal is to automate time-consuming medical tasks—like manual labeling of scans—to free doctors for other, more human tasks.”
Pace has invested heavily in training future leaders in AI and machine learning applications. A key focal point for these efforts has been the healthcare sector, where rapid innovations are changing the patient experience for the better. Over the last decade, Pace researchers have published more than 100 papers in peer-reviewed journals addressing questions in psychology, biology, and medicine. Much of this work has taken advantage of AI applications.
Information technology professor Yegin Genc, PhD, and PhD student Xing Chen explored the use of AI in clinical psychology. Computer science professor D. Paul Benjamin, PhD, and PhD student Gunjan Asrani used machine learning to analyze features of patients’ speech to assess diagnostic criteria for cluttering, a fluency disorder.
Lu Shi, PhD, an associate professor of health sciences at the College of Health Professions, even uses AI to brainstorm complex healthcare questions for his students—like whether public health insurance should cover the cost of birth companions (doulas) for undocumented migrant women.
“In the past, that kind of population-wide analysis could be an entire dissertation project for a PhD student, who would have spent up to two years reaching a conclusion,” Shi said. “With consumer-grade generative AI, answering a question like that might take a couple of days.”
Pace’s efforts complement rapid developments in healthcare technology around the world. Today, AI is helping emergency dispatchers in Denmark assess callers’ risk of cardiac arrest, accelerating drug discoveries in the US, and revolutionizing how neurologists in Britain read brain scans.

Amirian, like Shan, is developing AI-powered tools for analyzing the knee. Her work, which she said has significant potential for commercialization, aims to assist clinicians in diagnosing and monitoring osteoarthritis with accurate and actionable insights. “Its scalability and ability to integrate with existing healthcare systems make it a promising innovation for widespread adoption,” she said.
A key focus for Amirian is building equity into the algorithms she creates. “Reducing healthcare disparities is central to my work,” she said. As head of the Applied Machine Intelligence Initiatives and Education (AMIIE) Laboratory at Pace, Amirian leads a multidisciplinary team of computer scientists, informaticians, physicians, AI experts, and students to create AI models that work well for diverse populations.
Intentionality is essential. “The objective is to develop algorithms that minimize bias related to sex, ethnicity, or socioeconomic status, ensuring equitable healthcare outcomes,” Amirian said. “This work is guided by the principle that AI should benefit everyone, not just a privileged few.”
Zhan Zhang, PhD, another Pace computer science researcher, has won accolades for his contribution to the field of AI and medicine. Like Amirian and Shan, he shares the view that while AI holds great potential, it must be developed with caution. In a recent literature review, he warned that “bias, whether in data or algorithms, is a cardinal ethical concern” in medicine.
“Data bias arises when data used to train the AI models are not representative of the entire patient population,” Zhang wrote in a co-authored editorial for the journal Frontiers in Computer Science. “This can lead to erroneous conclusions, misdiagnoses, and inappropriate treatment recommendations, disproportionately affecting underrepresented populations.”
“While AI offers immense opportunities, addressing challenges like algorithmic bias, data privacy, and transparency is crucial.”
Preventing bias in AI healthcare applications won’t be easy. For one, privacy concerns can create a bottleneck for securing data for research. There’s also a simple numbers challenge. Unlike AI models trained on public image benchmarks, which draw on millions of inputs, models trained on medical images are limited by a dearth of data, said Shan. While there are efforts to augment the dataset and generate synthetic data, the relatively small size of available medical datasets is still a barrier to fully unlocking the potential of deep learning models.
Solving these challenges will be essential for AI’s potential in healthcare to be realized. “While AI offers immense opportunities, addressing challenges like algorithmic bias, data privacy, and transparency is crucial,” Amirian said.
Simply put, AI is both a threat and an opportunity. “The opportunity lies in its potential to revolutionize industries, improve efficiency, and solve global challenges,” Amirian said. “But it becomes a threat if not used ethically and responsibly. By fostering ethical frameworks and interdisciplinary collaboration, we can ensure AI serves as a tool for good, promoting equity and trust.”
Above all, she said, as AI offers “smarter solutions” to many modern problems, it’s also “challenging us to consider its societal and ethical implications.”