
Do Those Five-Star Funds Really Shine?
Finance Professor Matthew Morey speaks with Bloomberg, analyzing the performance of diversified U.S. equity funds and finding that while five-star funds outperformed the market before receiving top ratings, they lagged behind afterward.

Jury Finds Greenpeace Owes Hundreds Of Millions For Dakota Access Pipeline Protest
Elisabeth Haub School of Law Professor Josh Galperin offers commentary to NPR on the $300 million jury verdict against Greenpeace, calling it a troubling precedent for advocacy organizations.

Power & Politics: Lt. Gov. Delgado's Future, Tariffs Tumble The Market And The Latest Siena College Poll
Economics Professor Mark Weinstock appears on News 12’s Power & Politics to break down recent tariff impacts and market turbulence.

How Tinker v. Des Moines Established Students’ Free Speech Rights
Haub Law Professor Emily Waldman is featured in Retro Report, discussing the lasting impact of Tinker v. Des Moines on students’ free speech rights.

A Very Big Business: It's Called Facebook.com and It's Like No College Directory
NYC Counseling Center Director Richard Shaddock speaks on WHYY (Radio) about how social media culture and the pursuit of “likes” have fueled anxiety and unhealthy comparisons among college students.
Living the AI Experiment
As artificial intelligence seeps into every facet of life, Pace scholars are working to harness the technology’s potential to transform teaching and research. While the road ahead is fraught with uncertainty, these Pace experts see a fairer and safer AI-driven future.


When philosophy professor James Brusseau, PhD, introduced his students to the Caffeinated Professor, a generative artificial intelligence (AI) chatbot trained on his business ethics textbook, he wasn’t trying to replace traditional teaching by handing the classroom over to a robot.
He was embarking on an experiment into uncharted educational territory, a journey without a map and only one direction of travel.

“I don’t know all the ways that it will help and hurt my students,” said Brusseau, who unveiled the AI professor to his Philosophy 121 class this semester. Students are encouraged to converse with the bot day or night, engaging in conversation just as they might with him. “When answers are a few keystrokes away, there’s a clear pedagogical negative to introducing a tool like this.”
“But if I didn’t build this, someone else would have,” he added. “While I can’t control the world’s ‘AI experiment,’ I do have the opportunity to see for myself how it’s working.”
The rise of generative AI—tools like ChatGPT, Gemini, and Grok that generate original text, images, and videos—has sent shockwaves through many industries. For some observers, fear is the dominant emotion, with concerns that AI could take jobs or lead to humanity’s downfall.
Professors and researchers at Pace University, however, see a different future. For them, AI anxiety is giving way to a cautious acceptance of a technology that’s transforming how we live, work, study, and play. While creators urge caution and experts debate regulations, scholars are concluding that, for better or worse, AI is here to stay.
The real question is what we choose to do with that reality.
At Pace, experimentation is the only way forward. In Fall 2024, Pace added an AI course—Introduction to Computing—to its core curriculum for undergraduates, bringing the number of courses that incorporate AI at the undergraduate and graduate levels to 39.
“While I can’t control the world’s ‘AI experiment,’ I do have the opportunity to see for myself how it’s working.”
Pace is also leading the way in cross-disciplinary AI and machine learning research. At the Pace AI Lab, led by pioneering AI researcher Christelle Scharff, PhD, faculty, staff, and students integrate their knowledge areas into collective problem solving powered by the technology.
In doing so, Pace’s academics are writing and revising the script for how to balance the dangers and opportunities that AI presents. “We’re living in a heuristic reality, where we experiment, see what happens, and then do another experiment,” said Brusseau.
A Defining Moment
Jessica Magaldi’s AI experiment began with revenge porn. Early in her career, the award-winning Ivan Fox Scholar and professor of business law at the Lubin School of Business studied intellectual property law and transactions for emerging and established companies.

In 2020, she turned her attention to laws criminalizing the sharing of sexually explicit images or videos of a person online without their consent. Shockingly, most revenge porn laws were toothless, she said, and there was very little public or political appetite to sharpen them.
Now, fast forward to January 2024, when fake sexually explicit images of singer Taylor Swift went viral on X. Public outrage was immediate. Users demanded accountability, and fans initiated a “Protect Taylor Swift” campaign online. In Europe, lawmakers called for blood.
For Magaldi, something didn’t add up. “We were at a moment when AI-generated content that everyone knows is fake was producing more outrage than so-called revenge porn photos, images that are real.” Understanding that contradiction could offer clues on how to draft laws and legislation that are more effective for victims, she said.
Eventually, it might even teach us something about ourselves. “My greatest hope is that we can use what we learn about the differences between how we feel about what is real and what is AI to explore what that means for us and our collective humanity,” she said.
Optimism Grows
Harnessing the benefits of AI is also what occupies Brian McKernan, PhD, an assistant professor of communication and media studies at the Dyson College of Arts and Sciences.

McKernan, who describes himself as cautiously optimistic about AI, would be excused for taking a less rosy view of the technology. His research areas include misinformation, cognitive biases, and political campaign transparency—topics where the use of AI is rarely benevolent. In a 2024 study of the 2020 US presidential election, McKernan and his collaborators found that President Donald Trump used the massive exposure popular social media platforms offer in an attempt to sow distrust in the electoral process.
“There are great uses for AI, particularly in cases with huge amounts of data. But we will always need humans involved in verifying."
And yet, McKernan remains upbeat, an optimism stemming from the fact that AI helps him keep tabs on what politicians are saying, and doing, online.
“It’s a data deluge,” he said. To help sort through it, McKernan and colleagues at the Illuminating project, based at Syracuse University, train supervised AI models to classify and analyze social media content. Researchers check the performance of the models before making their findings public.
“There are great uses for AI, particularly in cases with huge amounts of data. But we will always need humans involved in verifying,” he said.
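The article does not spell out the Illuminating project's pipeline, so the sketch below is only a generic illustration of the pattern it describes: train a supervised classifier on hand-labeled posts, then check its performance on held-out data before any findings go public. The example posts, labels, and split sizes are invented for illustration.

```python
# A minimal sketch of the supervised-classification-plus-human-verification pattern
# described above. This is not the Illuminating project's code; the posts and labels
# are invented placeholders, and a real study would use thousands of labeled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

posts = [
    "The results were rigged and everyone knows it",            # 1 = distrust claim
    "Mail-in ballots are being thrown out by the thousands",    # 1
    "They are counting votes that do not exist",                # 1
    "Thank you to all the volunteers who staffed the polls",    # 0 = other content
    "Our rally starts at 6pm tonight, see you there",           # 0
    "Early voting locations are open until 8pm",                # 0
]
labels = [1, 1, 1, 0, 0, 0]

# Hold out a test set so researchers can verify performance before publishing findings.
X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.33, random_state=42, stratify=labels
)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# A human reviews per-class precision and recall before the model's output is trusted.
print(classification_report(y_test, model.predict(X_test)))
```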
Racing to Regulate?
To be sure, there are social and ethical dangers inherent in AI’s application—even when people are at the keyboard. One concern is access. Many generative AI tools are free, but they won’t be forever. When people can’t afford “the shiniest tools,” McKernan said, the digital divide will deepen.
Other challenges include maintaining data privacy, expanding availability of non-English tools, protecting the intellectual property of creators, and reducing biases in code. Even AI terrorism is an area of increasing concern for security experts.
Emilie Zaslow, PhD, a professor and chair of communication and media studies at Pace, said that, given these concerns, a regulatory framework for AI may eventually be wise.

“In media, we have examples of both government regulatory oversight, through the Federal Communications Commission, for example, and industry self-regulation, such as the Motion Picture Association film rating system,” Zaslow said. “There is also government involvement in evaluating new consumer products; take the Food and Drug Administration, for example. Every time a new drug comes to market, the FDA evaluates it, tests it, and decides whether it gets released and with what kind of warnings.”
“There should be increased regulatory oversight for technology,” she said.
Regulations are emerging. In Europe, the AI Act bans certain applications deemed to pose an “unacceptable risk” to citizens. Punishable programming includes social scoring systems, real-time facial recognition and other forms of biometric identification that categorize people by race, sex life, sexual orientation and other attributes, and “manipulative” AI tools.
Companies face fines up to $35.8 million or 7% of their global annual revenues—whichever amount is higher.
Brusseau, while sensitive to the dangers, doubts that the punitive approach will pay off. “The internet has no geography; it isn’t anywhere,” he said. “How do we prohibit something that isn't anywhere?”
“There should be increased regulatory oversight for technology.”
He suggests a different approach: using technology to regulate itself. He calls this acceleration ethics, the idea that the most effective way to approach the risks raised by innovation is with still more innovation.
In a recent paper, Brusseau examined how TELUS, a Canadian telecommunications company, developed an automated safety tool to monitor its customer-serving chatbot. When the safety tool detected hallucinations, phishing threats, or privacy risks in the chatbot’s answers, it flagged them for human review.
“While the purity of theoretical positions is blurred by real-world ambiguities,” Brusseau wrote, “the TELUS case illustrates how the acceleration strategy transforms AI ethics into an innovation catalyst.”
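Brusseau's paper describes the pattern rather than the code, but the idea of one system screening another's answers and escalating anything suspicious to a person can be sketched roughly as follows. The checks, thresholds, and example text here are hypothetical stand-ins, not the TELUS tool.

```python
# Hypothetical sketch of an "AI monitoring AI" safety layer, loosely modeled on the
# pattern described above. The checks below are crude stand-ins for illustration only.
from dataclasses import dataclass, field

@dataclass
class Review:
    answer: str
    flags: list = field(default_factory=list)

    @property
    def needs_human_review(self) -> bool:
        return bool(self.flags)

def check_grounding(answer: str, source_documents: list[str]) -> bool:
    """Crude hallucination heuristic: does the answer share any words with the sources?"""
    answer_words = set(answer.lower().split())
    source_words = set(" ".join(source_documents).lower().split())
    return len(answer_words & source_words) > 0

def check_privacy(answer: str) -> bool:
    """Crude privacy heuristic: flag answers that echo obvious sensitive identifiers."""
    return not any(token in answer.lower() for token in ("account number", "password", "ssn"))

def screen(answer: str, source_documents: list[str]) -> Review:
    review = Review(answer)
    if not check_grounding(answer, source_documents):
        review.flags.append("possible hallucination: answer not grounded in sources")
    if not check_privacy(answer):
        review.flags.append("possible privacy risk: sensitive identifier in answer")
    return review

# Example: the chatbot's draft answer is screened before it reaches the customer.
docs = ["Your plan includes 20 GB of data per month."]
result = screen("Your password is hunter2.", docs)
if result.needs_human_review:
    print("Escalating to human reviewer:", result.flags)
```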
Risks Worth Taking
Ask ChatGPT whether it’s dangerous, and its response is unequivocal: “I’m here to help and have meaningful conversations.”
Ask ChatGPT whether AI is dangerous, and the reply is a bit murkier: “It depends on how it's used.”
But point out that ChatGPT is AI, and the contradiction isn’t lost on the technology itself. “What I meant to convey is that I am designed to be helpful, safe, and non-threatening. But it’s true that, like any tool, the potential for harm exists if used irresponsibly.”
When scholars and historians look back at this era of AI experimentation, they may be similarly conflicted. Magaldi, who understands how devastating sexually explicit deepfake images can be, also recognizes the usefulness of AI’s creativity. In Spring 2024, she even used AI to help her flesh out an idea for a class on Taylor Swift. She did it, in part, as an exercise for herself to use AI in a creative way.
“I'm not worried in the least. Humans produce knowledge through causality, while machines do it exclusively through correspondence. They reason wrong.”
“With ChatGPT, I was able to build an entire music industry law class based on Swift's disputes and lawsuits,” Magaldi said. After lots of tweaking, she ended up with the syllabus for a three-credit class exploring the singer’s experiences with copyright infringement, music industry contracts, trademark law, and ticketing practices.
It was a massive success. TikTok videos were made about the class, registration closed in minutes, and students are eager for it to run again.
This type of human-AI interaction—using the technology as a “thought partner,” as Magaldi puts it—is the sweet spot in AI’s societal integration.
It’s also why Brusseau is upbeat. “I'm not worried in the least,” he said. “Humans produce knowledge through causality, while machines do it exclusively through correspondence. They reason wrong.”
That certainty, however, doesn’t mean he has all the answers. With AI, there are only questions. “Like buying a one-way plane ticket, it’s not the destination that matters, but the journey,” he said. “That’s why I built the Caffeinated Professor—to see where it takes us.”
More from Pace
From privacy risks to environmental costs, the rise of generative AI presents new ethical challenges. This guide developed by the Pace Library explores some of these key issues and offers practical tips to address these concerns while embracing AI innovation.
With artificial intelligence remodeling how healthcare is researched and delivered, Pace experts are shaping the technology—and erecting the guardrails—driving the revolution.
Pace President Marvin Krislov recently participated in a conversation at Google Public Sector GenAI Live & Labs as part of the Future U. podcast. He joined higher ed leader Ann Kirschner, PhD, and Chris Hein, Field CTO at Google Public Sector, to discuss the evolving role of AI in higher education.
Haub Law’s Mock Trial Team Finishes Strong in the Queens County District Attorney's Office 10th Annual Mock Trial Competition
The Elisabeth Haub School of Law at Pace University’s Mock Trial Team finished in second place out of eighteen nationally ranked law schools at the Queens District Attorney’s Office 10th Annual Mock Trial Competition.


The Elisabeth Haub School of Law at Pace University’s Mock Trial Team recently competed in the Queens District Attorney’s Office 10th Annual Mock Trial Competition, held in the court facilities of the Queens Criminal Court. The Pace Haub Law team, consisting of Skyler Pozo (2L), Maiya Aubry (2L), Alexa Saccomanno (2L), and James Page (2L), finished in second place out of the eighteen nationally ranked law schools that competed. During the intense competition, students argued before senior prosecutors and members of the defense bar, with judges from Queens and Brooklyn presiding.
The Haub Law Mock Trial Team made it through two preliminary rounds, a blind quarterfinal round, and the semifinals before finishing in second place in the final round. “It was a challenging competition with some of the best and brightest law students throughout the country, but I’m proud to say that our student advocates rose to the occasion,” said Luis Felix ’15, who coached the Pace Haub Law team. “Their dedication, hard work, and knowledge of the fact pattern were reflected in their strong finish, and I look forward to seeing what else they accomplish in the courtroom.” Alexa Saccomanno (2L) also received the individual award for Best Opening Statement.
“The performance by our 2L students demonstrates both the strength and depth of our program,” said Professor Louis Fasulo, Director of Advocacy Programs and Professor of Trial Practice. “These students, along with the support of Coach Felix, make us all proud and are a major highlight of this year’s competitions.”
Smart Medicine: The Promise and Peril of AI in Healthcare
With artificial intelligence remodeling how healthcare is researched and delivered, Pace experts are shaping the technology—and erecting the guardrails—driving the revolution.


To the untrained eye, the grainy medical images vaguely look like knees, black and white scans of what might be muscle, bone, and green wisps of something else.
But to Juan Shan, PhD, an associate professor of computer science in the Seidenberg School of Computer Science and Information Systems at Pace University, the photos are validation of a decades-long hunch: robots can read an MRI.

“The method does not require any human intervention,” Shan wrote in a recent paper detailing her machine learning tool for identifying bone marrow lesions (BMLs), early indicators of knee osteoarthritis. In a standard MRI, BMLs appear as pixelated clouds. In Shan’s model, they pop in vibrant hues of color.
“This work provides a possible convenient tool to assess BML volumes efficiently in larger MRI data sets to facilitate the assessment of knee osteoarthritis progression,” Shan wrote.
As artificial intelligence (AI) reshapes how medicine is practiced and delivered, Pace researchers like Shan are shaping the technology—and the guardrails—driving the revolution in clinical care. Computer scientists at Pace harness machine learning to build tools to reduce medical errors in pediatric care and strengthen clinical decision-making. Social scientists work to ensure fairness and transparency in AI-supported applications. And students are taking their skills to the field, addressing challenges like diagnosing autism.
Collectively, their goal isn’t to replace people in lab coats. Rather, it’s to facilitate doctors’ work and make medicine more precise, efficient, and equitable.
“In healthcare, AI enables earlier disease detection, personalized medicine, improves patient and clinical outcomes, and reduces the burden on healthcare systems,” said Soheyla Amirian, PhD, an assistant professor of computer science at Seidenberg who, like Shan, trains computers to diagnose illnesses.
“New York is a world-class hub for innovation, healthcare, and advanced technologies, and its diversity makes it the perfect place to explore how fair and responsible AI can address inequities across populations,” Amirian said.
In Shan’s lab, that work begins below the kneecap. Together with colleagues, she feeds medical images—MRIs and X-rays—into machine learning models to train them to detect early signs of joint disease. They’re looking to identify biomarkers—cartilage, bone marrow lesions, effusions—that might indicate whether a patient has or is prone to developing osteoarthritis, the fourth leading cause of disability in the world. Current results indicate her models’ outputs are highly correlated with the manual labels marked by physicians.
“We want to apply the most advanced techniques in machine learning to the medical domain, to give doctors, radiologists, and other practitioners a second opinion to improve their diagnosis accuracy."
Shan’s vision is to create diagnostic tools that would supplement human interventions and pre-screen patients who are at lower risk of disease.
“We want to apply the most advanced techniques in machine learning to the medical domain, to give doctors, radiologists, and other practitioners a second opinion to improve their diagnosis accuracy,” she said. “Our goal is to automate time-consuming medical tasks—like manual labeling of scans—to free doctors for other, more human tasks.”
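As a rough illustration of the kind of validation described here, comparing automated measurements against physician labels, one might compute a correlation between the two sets of lesion volumes. The numbers below are invented and are not data from Shan's studies.

```python
# Hypothetical illustration of validating model output against physician labels.
# The volumes are made-up numbers, used only to show the shape of the check.
import numpy as np
from scipy.stats import pearsonr

# Bone marrow lesion volumes (mm^3) for the same knees, measured two ways.
manual_volumes = np.array([120.0, 340.0, 85.0, 410.0, 230.0, 60.0])   # physician labels
model_volumes  = np.array([132.0, 325.0, 90.0, 398.0, 245.0, 72.0])   # model estimates

r, p_value = pearsonr(manual_volumes, model_volumes)
print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")
# A correlation close to 1 suggests the automated measurements track the manual ones,
# supporting use of the model as a pre-screening "second opinion" rather than a replacement.
```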
Pace has invested heavily in training future leaders in AI and machine learning applications. A key focal point for these efforts has been in the healthcare sector, where rapid innovations are changing the patient experience for the better. Over the last decade, Pace researchers have published more than 100 papers in peer-reviewed journals addressing questions in psychology, biology, and medicine. Much of this work has taken advantage of AI applications.
Information technology professor Yegin Genc, PhD, and PhD student Xing Chen explored the use of AI in clinical psychology. Computer science professor D. Paul Benjamin, PhD, and PhD student Gunjan Asrani used machine learning to analyze features of patients’ speech to assess diagnostic criteria for cluttering, a fluency disorder.
Lu Shi, PhD, an associate professor of health sciences at the College of Health Professions, even uses AI to brainstorm complex healthcare questions for his students—like whether public health insurance should cover the cost of birth companions (doulas) for undocumented migrant women.
“In the past, that kind of population-wide analysis could be an entire dissertation project for a PhD student, who would have spent up to two years reaching a conclusion,” Shi said. “With consumer-grade generative AI, answering a question like that might take a couple of days.”
Pace’s efforts complement rapid developments in healthcare technology around the world. Today, AI is helping emergency dispatchers in Denmark assess callers’ risk of cardiac arrest, accelerating drug discoveries in the US, and revolutionizing how neurologists in Britain read brain scans.

Amirian, like Shan, is developing AI-powered tools for analyzing the knee. Her work, which she said has significant potential for commercialization, aims to assist clinicians in diagnosing and monitoring osteoarthritis with accurate and actionable insights. “Its scalability and ability to integrate with existing healthcare systems make it a promising innovation for widespread adoption,” she said.
A key focus for Amirian is building equity into the algorithms she creates. “Reducing healthcare disparities is central to my work,” she said. As head of the Applied Machine Intelligence Initiatives and Education (AMIIE) Laboratory at Pace, Amirian leads a multidisciplinary team of computer scientists, informaticians, physicians, AI experts, and students to create AI models that work well for diverse populations.
Intentionality is essential. “The objective is to develop algorithms that minimize bias related to sex, ethnicity, or socioeconomic status, ensuring equitable healthcare outcomes,” Amirian said. “This work is guided by the principle that AI should benefit everyone, not just a privileged few.”
Zhan Zhang, PhD, another Pace computer science researcher, has won accolades for his contribution to the field of AI and medicine. Like Amirian and Shan, he shares the view that while AI holds great potential, it must be developed with caution. In a recent literature review, he warned that “bias, whether in data or algorithms, is a cardinal ethical concern” in medicine.
“Data bias arises when data used to train the AI models are not representative of the entire patient population,” Zhang wrote in a co-authored editorial for the journal Frontiers in Computer Science. “This can lead to erroneous conclusions, misdiagnoses, and inappropriate treatment recommendations, disproportionately affecting underrepresented populations.”
“While AI offers immense opportunities, addressing challenges like algorithmic bias, data privacy, and transparency is crucial.”
Preventing bias in AI healthcare applications won’t be easy. For one, privacy concerns can create a bottleneck for securing data for research. There’s also a simple numbers challenge. Unlike AI models trained on public image benchmarks, which draw on millions of inputs, training AI models on medical images is limited by a dearth of information, said Shan. While there are efforts to augment the dataset and generate synthetic data, the relatively small size of the available medical datasets is still a barrier to fully unlocking the potential of deep learning models.
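Augmentation and synthetic data, mentioned above, are common ways to stretch a small imaging dataset. Purely as a generic illustration, and not code from any Pace lab, a standard torchvision augmentation pipeline might look like this:

```python
# Generic illustration of on-the-fly image augmentation for a small training set.
# This is a standard torchvision pattern, not the researchers' actual pipeline.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                 # mirror the image half the time
    transforms.RandomRotation(degrees=10),                   # small rotations mimic scan variability
    transforms.ColorJitter(brightness=0.1, contrast=0.1),    # slight intensity changes
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),     # slight crops and zooms
    transforms.ToTensor(),
])

# Applied during training, each epoch sees a slightly different version of every image,
# which helps when labeled medical scans are scarce.
# Usage (assuming pil_image is a PIL.Image): augmented = augment(pil_image)
```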
Solving these challenges will be essential for AI’s potential in healthcare to be realized. “While AI offers immense opportunities, addressing challenges like algorithmic bias, data privacy, and transparency is crucial,” Amirian said.
Simply put, AI is both a threat and an opportunity. “The opportunity lies in its potential to revolutionize industries, improve efficiency, and solve global challenges,” Amirian said. “But it becomes a threat if not used ethically and responsibly. By fostering ethical frameworks and interdisciplinary collaboration, we can ensure AI serves as a tool for good, promoting equity and trust.”
Above all, she said, as AI offers “smarter solutions” to many modern problems, it’s also “challenging us to consider its societal and ethical implications.”
More from Pace
AI is changing the world—but should we be worried? To test its ability to engage in real academic discourse, Pace's writers tasked ChatGPT’s Deep Research with generating a fully AI-written, cited academic article. By pushing its capabilities, we’re not just showcasing what AI can do—we’re interrogating its limitations.
From helping immigrants start businesses, to breaking down barriers with AI-generated art, Pace professors are using technology to build stronger, more equitable communities.
Pace University Professor of Art Will Pappenheimer, who has long incorporated digital media and new technologies into his artwork, discusses his latest AI-influenced exhibition and the technology’s effects on the art world.
Please Use Responsibly: AI in Literacy
Generative AI is transforming education, but is it a help or a hindrance? Pace University literacy experts Francine Falk-Ross, PhD, and Peter McDermott, PhD, critically assess AI’s role in teaching and learning, exploring its potential to enhance literacy while raising concerns about its impact on research, critical thinking, and academic integrity.


Just as a chalkboard once revolutionized the classroom, Generative AI (GenAI) is the latest in a long line of technologies that seek to upend the educational landscape.
School of Education Chair on the New York City Campus Francine Falk-Ross, PhD, specializes in literacy development for all ages. Professor Peter McDermott, PhD, specializes in literacy and has conducted and presented research on incorporating technology into reading lessons. As educators, both professors underscore the importance of not shying away from but rather understanding the ways in which technologies like GenAI have and will affect teaching and learning in the years to come.
Through their own experimentation with GenAI, they stress the importance of responsible and effective use so that it can empower students and educators rather than serve as an intellectual hindrance. In the Q+A below, we discuss how AI may threaten the ability to develop expertise through rigorous citation evaluation and research, while also identifying the ways in which GenAI has and can provide immense benefits to the learning and teaching experience.
You both have presented research regarding AI in Literacy. Can you briefly discuss the content of this work?
Peter McDermott: Fran and I just had a paper accepted for publication in The Middle School Journal on ways teachers can effectively use AI. We talk about the differences between using AI and Googling, and the importance of writing descriptive prompts.
For teachers, it’s critical to learn how to write good prompts that accurately describe important information about specifics like one’s school, student population, learning needs, and goals and objectives—and then, beyond the prompt, to be able to clinically analyze what AI produces.
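To make the idea of a descriptive prompt concrete, here is a hypothetical sketch contrasting a vague request with a descriptive one, written with the OpenAI Python client. The school details are invented placeholders, the model name is only an example, and this is not one of the professors' actual prompts.

```python
# Hypothetical contrast between a vague prompt and a descriptive one.
# The classroom details are invented; adapt them to a real school context.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

vague_prompt = "Write a reading lesson plan."  # shown for contrast only

descriptive_prompt = (
    "Write a 40-minute reading lesson plan for a 6th-grade class of 24 students "
    "in an urban public school. Eight students are multilingual learners and two "
    "have IEPs for reading comprehension. The objective is identifying an author's "
    "claim and supporting evidence in a short informational text. Include a warm-up, "
    "guided practice, and an exit ticket."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": descriptive_prompt}],
)
print(response.choices[0].message.content)
# The teacher's job doesn't end here: the output still has to be read critically
# and adapted to the actual classroom, as the professors note above.
```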
What are your general thoughts on the introduction of GenAI into the classroom?
Fran Falk-Ross: Using GenAI effectively can really improve the classroom experience, and be very supportive of diverse student populations, or students with learning disabilities.
But at the same time, a good teacher needs to know how to think on their feet. And as we stand and teach, we don’t always have AI at our fingertips. If you have time to sit down and write something you can rely more on AI, but in terms of building skills in relation to research and critical thinking, you need a foundation that empowers you to create your own ideas and analyze existing ideas. You need to build your own expertise.
In what ways have you incorporated AI into the classroom?
McDermott: I’ve been playing with different assignments in my grad classes at Pace. One assignment is to write a half-page argument essay about a topic. The students write it, then ask ChatGPT to write an argument about the same topic. I have students compare what they produced with what ChatGPT produced. It’s an interesting exercise that helps students think about how AI can be used. Some students are surprised that AI can be quite good.
In another assignment, I ask my students to upload middle school and high school student writing samples to ChatGPT and ask AI to analyze the writing, identify patterns of errors, and correct it. What AI produces in this case is also very good. Taking the second step and having AI explain these edits can be very useful for education students.
In terms of building skills in relation to research and critical thinking, you need a foundation that empowers you to create your own ideas and analyze existing ideas. You need to build your own expertise.
In working with GenAI, have you had any experiences or observations that you have found concerning?
McDermott: There’s a lot of research—through publications such as those from the International Literacy Association, for example—that is essentially saying “AI is good, but you have to use it with a critical eye.”
When I use ChatGPT, I’ll often ask it to give me some recent citations about a topic. Last month when I did this it gave me a citation from a journal that I didn’t recognize, so I searched and searched for the journal. Turns out the journal doesn’t exist; it was an AI hallucination.
Falk-Ross: GenAI doesn’t use references without prompting, which means that students reliant on the technology will not become familiar with the history or the established reasons for a scholarly argument. In class they’ll get an overview of research principles, but that won’t carry over to assignments outside the classroom. In my view, AI should be complemented by a primary source, so that users know where the information came from.
A student might not know the origin of an argument or a fact, or which research articles are seminal pieces—these are very important things for teachers to know so they can read more, learn more, and pass on a model of research and critical thinking to their students.
McDermott: You could ask AI to cite, but you still need that critical eye, of whether the citation is authentic.
In what ways have you seen students adopt the technology? How has student writing changed since the introduction of GenAI?
Falk-Ross: Using a tool like ChatGPT can be a part of the writing process and in many cases important to help clarify ideas. But there should be an editing process. You could get a useful model and good vocabulary suggestions from GenAI, but you need to reconstruct the text as your own.
McDermott: This is one of the reasons it’s important to teach students how to use AI in effective ways, to have ongoing dialogue with it. You can ask AI to produce something but then use your critical eye to evaluate that writing and ask it to revise—but be descriptive and specific in the ways you want the writing revised. The process of using ChatGPT can then become collaborative and discussion-like.
Falk-Ross: There is also the issue of students passing off what AI writes as their own work. Even a few years ago, if a student wrote something and it didn’t seem like it came from them, I could just throw it into Google and see if it was plagiarized, because the original source would be there. Now, it’s much harder to figure out where it came from.
McDermott: There is a category of research called “AI-resistant assignments.” We can develop assignments where students must use their personal life experience and history as research; that’s one way to overcome issues of plagiarism.
Falk-Ross: Regardless of plagiarism, I do think students lose the ability to develop expertise on their own and independently understand the process of constructing arguments, which is important in a diverse population. You need to make things work for the student population you’re teaching, but also within the school’s limits—there may be certain initiatives important to the schools, and this is a complex process.
What are your overall thoughts on this new academic normal? How can we balance the clear benefits of AI with some of its pitfalls?
Falk-Ross: Peter and I can read what AI produces and understand the quality of the output—what’s good, and what might be inaccurate or out of context. But I think that students—without the ability, or an understanding of the importance, of looking up information to see where it came from—can ultimately lose this important skill, and the scholarly model that academia is historically built upon.
Those things are pushed to the wayside, and they are critically important. But GenAI does organize information extremely well and provide useful facts. It’s important that users both learn from GenAI and understand its limitations.
McDermott: We really need to prepare teachers to use it well. I think there are more benefits than disadvantages, but teachers must be the critical decision-makers and leaders in its use.
It’s the responsibility of educators to understand how to effectively use these technologies.
More from Pace
From helping immigrants start businesses, to breaking down barriers with AI-generated art, Pace professors are using technology to build stronger, more equitable communities.
With artificial intelligence remodeling how healthcare is researched and delivered, Pace experts are shaping the technology—and erecting the guardrails—driving the revolution.
AI is changing the world—but should we be worried? To test its ability to engage in real academic discourse, Pace's writers tasked ChatGPT’s Deep Research with generating a fully AI-written, cited academic article. By pushing its capabilities, we’re not just showcasing what AI can do—we’re interrogating its limitations.
From the Desk of Professor No One
AI is changing the world—but should we be worried? To test its ability to engage in real academic discourse, Pace's writers tasked ChatGPT’s Deep Research with generating a fully AI-written, cited academic article. By pushing its capabilities, we’re not just showcasing what AI can do—we’re interrogating its limitations.


It’s no exaggeration that generative artificial intelligence (GenAI) may be one of the most revolutionary and quickly evolving technologies of the modern world. And it’s getting smarter every day. When ChatGPT was first released to the public in November 2022, it would give false facts, misunderstand queries, and (in a viral example turned industry joke) could not identify the number of R’s in the word strawberry.
Since then, many more companies have released their own models and continually update them. As of March 2025, ChatGPT boasts its latest o-models, omni-functional models (capable of processing text, video, and audio) with higher levels of “reasoning.”
Another new feature is “Deep Research,” which, unlike prior models that responded rather simply to requests, conducts thorough web searches of peer-reviewed literature and industry reports to create high-quality research papers with accurate citations and often novel conclusions.
What Does it Mean?
Still, many people—especially those of us in academia—may be more cautious. The possible advantages seem clear. If we can speed up research, what breakthroughs might come in healthcare, tech, law, or the humanities?
The concerns, however, remain nebulous and ever-changing. In a world where AI can research a topic in ten minutes, what is the value of assigning essays to students? Will we ever be able to clearly discern between real images and deepfakes? Will AI challenge social norms or reaffirm bias?
These are difficult questions.
So, we asked the expert.
Hey, ChatGPT—Are You Evil?
To answer the question “what exactly should we be concerned about with generative AI”, we asked ChatGPT. Specifically, Deep Research.
The prompt: I'd like a critical academic paper that discusses the drawbacks and pitfalls of generative AI. What are the biggest concerns? Can AI challenge social norms, or does it reinforce existing biases? What role do corporations have in balancing ethical decisions and the need to use these tools? What role do universities play in ensuring fair AI literacy and access for students of all backgrounds?
Based on 34 sources, ChatGPT delivered, as it describes, a 15-page “deep analysis in Chicago style discussing the drawbacks and pitfalls of generative AI across various applications and disciplines.”
Explore the live prompt and results.
Watching a Machine Think
The following video (slightly edited and significantly sped up for time) shows the process of a Deep Research query. After the user sends an initial query, ChatGPT usually asks a few clarifying questions in response, such as the desired length, format, tone, and additional areas of focus, and then it begins to think. This process can take anywhere from a few minutes to twenty.
As it thinks, users can watch along as ChatGPT explains its reasoning. Along the sidebar (starting at 0:15), ChatGPT describes its actions, explaining not only what it’s searching but also why it chooses certain sources, how it considers potential avenues of thought, and how it reasons through its next steps.
Not only can the user comb through all of the sources listed, but each source is embedded as a link following the relevant sections within the paper.

Too Long, Didn't Read
Never fear, we also asked ChatGPT to summarize its findings. It listed misinformation, bias and discrimination, and the automation of creative work among its top concerns, and discussed the role of both corporate responsibility and higher education in ensuring a more sustainable model of ethical AI growth.

But Really, What Does it Mean?
It’s a bit dystopian to ask an AI chatbot what’s wrong with AI. (Thankfully, it didn’t say “absolutely nothing, please continue to give me your data.”) But as AI becomes more capable, it is up to humans how we use it. How we regulate it. How much we trust it. Many of the concerns quickly become existential. In a world where AI is becoming smarter, what does that mean for us? As AI seems to become more human, will humans somehow become less?
When considering how we should use AI, or what place humans have in an AI world, perhaps the wisdom of the most famous fictional AI, HAL 9000, can serve as some guidance: “I’m putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.”
More from Pace
Pace President Marvin Krislov recently participated in a conversation at Google Public Sector GenAI Live & Labs as part of the Future U. podcast. He joined higher ed leader Ann Kirschner, PhD, and Chris Hein, Field CTO at Google Public Sector, to discuss the evolving role of AI in higher education.
From privacy risks to environmental costs, the rise of generative AI presents new ethical challenges. This guide developed by the Pace Library explores some of these key issues and offers practical tips to address these concerns while embracing AI innovation.
Generative AI is reshaping how we create, communicate, and engage with the world—but what do we gain, and what do we risk losing? This thought-provoking guide challenges you to move beyond fear or hype, applying critical thinking to AI’s evolving role in media, creativity, ethics, and society.