
Smart Medicine: The Promise and Peril of AI in Healthcare
With artificial intelligence remodeling how healthcare is researched and delivered, Pace experts are shaping the technology—and erecting the guardrails—driving the revolution.


To the untrained eye, the grainy medical images look vaguely like knees: black-and-white scans of what might be muscle, bone, and green wisps of something else.
But to Juan Shan, PhD, an associate professor of computer science in the Seidenberg School of Computer Science and Information Systems at Pace University, the photos are validation of a decades-long hunch: robots can read an MRI.

“The method does not require any human intervention,” Shan wrote in a recent paper detailing her machine learning tool for identifying bone marrow lesions (BMLs), early indicators of knee osteoarthritis. In a standard MRI, BMLs appear as pixelated clouds. In Shan’s model, they pop in vibrant hues.
“This work provides a possible convenient tool to assess BML volumes efficiently in larger MRI data sets to facilitate the assessment of knee osteoarthritis progression,” Shan wrote.
As artificial intelligence (AI) reshapes how medicine is practiced and delivered, Pace researchers like Shan are shaping the technology—and the guardrails—driving the revolution in clinical care. Computer scientists at Pace harness machine learning to build tools to reduce medical errors in pediatric care and strengthen clinical decision-making. Social scientists work to ensure fairness and transparency in AI-supported applications. And students are taking their skills to the field, addressing challenges like diagnosing autism.
Collectively, their goal isn’t to replace people in lab coats. Rather, it’s to facilitate doctors’ work and make medicine more precise, efficient, and equitable.
“In healthcare, AI enables earlier disease detection and personalized medicine, improves patient and clinical outcomes, and reduces the burden on healthcare systems,” said Soheyla Amirian, PhD, an assistant professor of computer science at Seidenberg who, like Shan, trains computers to diagnose illnesses.
“New York is a world-class hub for innovation, healthcare, and advanced technologies, and its diversity makes it the perfect place to explore how fair and responsible AI can address inequities across populations,” Amirian said.
In Shan’s lab, that work begins below the kneecap. Together with colleagues, she feeds medical images—MRIs and X-rays—into machine learning models to train them to detect early signs of joint disease. They’re looking to identify biomarkers—cartilage, bone marrow lesions, effusions—that might indicate whether a patient has or is prone to developing osteoarthritis, the fourth leading cause of disability in the world. Current results indicate that her models’ outputs correlate highly with manual labels marked by physicians.
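How might that agreement be checked? As a minimal, hypothetical sketch (not Shan's actual pipeline), automated lesion measurements can be compared against physicians' manual labels with a standard correlation test; the volume numbers below are invented for illustration:

```python
# Hypothetical sketch: validating automated bone marrow lesion (BML)
# volume estimates against physicians' manual labels. The numbers are
# invented for illustration; this is not Shan's actual pipeline.
import numpy as np
from scipy.stats import pearsonr

model_volumes = np.array([120.5, 340.2, 89.7, 410.0, 233.1])   # mm^3, per scan
manual_volumes = np.array([118.0, 355.9, 95.2, 398.4, 240.6])  # mm^3, per scan

r, p = pearsonr(model_volumes, manual_volumes)
print(f"Pearson r = {r:.3f} (p = {p:.4f})")  # r near 1.0 means strong agreement
```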
Shan’s vision is to create diagnostic tools that would supplement human interventions and pre-screen patients who are at lower risk of disease.
“We want to apply the most advanced techniques in machine learning to the medical domain, to give doctors, radiologists, and other practitioners a second opinion to improve their diagnosis accuracy,” she said. “Our goal is to automate time-consuming medical tasks—like manual labeling of scans—to free doctors for other, more human tasks.”
Pace has invested heavily in training future leaders in AI and machine learning applications. A key focal point for these efforts has been the healthcare sector, where rapid innovations are changing the patient experience for the better. Over the last decade, Pace researchers have published more than 100 papers in peer-reviewed journals addressing questions in psychology, biology, and medicine. Much of this work has taken advantage of AI applications.
Information technology professor Yegin Genc, PhD, and PhD student Xing Chen explored the use of AI in clinical psychology. Computer science professor D. Paul Benjamin, PhD, and PhD student Gunjan Asrani used machine learning to analyze features of patients’ speech to assess diagnostic criteria for cluttering, a fluency disorder.
Lu Shi, PhD, an associate professor of health sciences at the College of Health Professions, even uses AI to brainstorm complex healthcare questions for his students—like whether public health insurance should cover the cost of birth companions (doulas) for undocumented migrant women.
“In the past, that kind of population-wide analysis could be an entire dissertation project for a PhD student, who would have spent up to two years reaching a conclusion,” Shi said. “With consumer-grade generative AI, answering a question like that might take a couple of days.”
Pace’s efforts complement rapid developments in healthcare technology around the world. Today, AI is helping emergency dispatchers in Denmark assess callers’ risk of cardiac arrest, accelerating drug discoveries in the US, and revolutionizing how neurologists in Britain read brain scans.

Amirian, like Shan, is developing AI-powered tools for analyzing the knee. Her work, which she said has significant potential for commercialization, aims to assist clinicians in diagnosing and monitoring osteoarthritis with accurate and actionable insights. “Its scalability and ability to integrate with existing healthcare systems make it a promising innovation for widespread adoption,” she said.
A key focus for Amirian is building equity into the algorithms she creates. “Reducing healthcare disparities is central to my work,” she said. As head of the Applied Machine Intelligence Initiatives and Education (AMIIE) Laboratory at Pace, Amirian leads a multidisciplinary team of computer scientists, informaticians, physicians, AI experts, and students to create AI models that work well for diverse populations.
Intentionality is essential. “The objective is to develop algorithms that minimize bias related to sex, ethnicity, or socioeconomic status, ensuring equitable healthcare outcomes,” Amirian said. “This work is guided by the principle that AI should benefit everyone, not just a privileged few.”
Zhan Zhang, PhD, another Pace computer science researcher, has won accolades for his contribution to the field of AI and medicine. Like Amirian and Shan, he shares the view that while AI holds great potential, it must be developed with caution. In a recent literature review, he warned that “bias, whether in data or algorithms, is a cardinal ethical concern” in medicine.
“Data bias arises when data used to train the AI models are not representative of the entire patient population,” Zhang wrote in a co-authored editorial for the journal Frontiers in Computer Science. “This can lead to erroneous conclusions, misdiagnoses, and inappropriate treatment recommendations, disproportionately affecting underrepresented populations.”
Preventing bias in AI healthcare applications won’t be easy. For one, privacy concerns can create a bottleneck for securing research data. There’s also a simple numbers challenge. Unlike AI models trained on public image benchmarks, which draw on millions of inputs, models trained on medical images contend with a dearth of data, said Shan. While there are efforts to augment datasets and generate synthetic data, the relatively small size of available medical datasets remains a barrier to fully unlocking the potential of deep learning models.
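As a rough illustration of what such augmentation can look like in practice (a hypothetical sketch, not a clinically validated pipeline), standard image transforms can multiply the effective variety of a small scan collection:

```python
# Hypothetical sketch of data augmentation for a small medical-imaging
# dataset, using standard torchvision transforms. Real clinical pipelines
# would be validated far more carefully.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                 # mirror the joint
    transforms.RandomRotation(degrees=10),                  # small, plausible rotations
    transforms.ColorJitter(brightness=0.1, contrast=0.1),   # mimic scanner variation
    transforms.ToTensor(),
])

# Applied at training time, every epoch sees a slightly different version
# of each scan, e.g.: augmented = augment(mri_slice)  # mri_slice: a PIL.Image
```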
Solving these challenges will be essential for AI’s potential in healthcare to be realized. “While AI offers immense opportunities, addressing challenges like algorithmic bias, data privacy, and transparency is crucial,” Amirian said.
Simply put, AI is both a threat and an opportunity. “The opportunity lies in its potential to revolutionize industries, improve efficiency, and solve global challenges,” Amirian said. “But it becomes a threat if not used ethically and responsibly. By fostering ethical frameworks and interdisciplinary collaboration, we can ensure AI serves as a tool for good, promoting equity and trust.”
Above all, she said, as AI offers “smarter solutions” to many modern problems, it’s also “challenging us to consider its societal and ethical implications.”
Please Use Responsibly: AI in Literacy
Generative AI is transforming education, but is it a help or a hindrance? Pace University literacy experts Francine Falk-Ross, PhD, and Peter McDermott, PhD, critically assess AI’s role in teaching and learning, exploring its potential to enhance literacy while raising concerns about its impact on research, critical thinking, and academic integrity.


Just as the chalkboard once revolutionized the classroom, generative AI (GenAI) is the latest in a long line of technologies poised to upend the educational landscape.
School of Education Chair on the New York City Campus Francine Falk-Ross, PhD, specializes in literacy development for all ages. Professor Peter McDermott, PhD, also specializes in literacy and has conducted and presented research on incorporating technology into reading lessons. As educators, both professors underscore the importance of not shying away from, but rather understanding, the ways in which technologies like GenAI have affected and will continue to affect teaching and learning in the years to come.
Through their own experimentation with GenAI, they stress the importance of responsible and effective use, so that the technology can empower students and educators rather than serve as an intellectual hindrance. In the Q+A below, we discuss how AI may threaten the ability to develop expertise through rigorous citation evaluation and research, while also identifying the ways in which GenAI has provided, and can provide, immense benefits to the learning and teaching experience.
You both have presented research regarding AI in Literacy. Can you briefly discuss the content of this work?
Peter McDermott: Fran and I just had a paper accepted for publication in The Middle School Journal on ways teachers can effectively use AI. We talk about the differences between using AI and Googling, and the importance of writing descriptive prompts.
For teachers, it’s critical to learn how to write good prompts that accurately describe important information regarding specifics like one’s school, student population, learning needs, what your goals and objectives are—and then, beyond the prompt, to be able to clinically analyze what AI produces.
What are your general thoughts on the introduction of GenAI into the classroom?
Fran Falk-Ross: Using GenAI effectively can really improve the classroom experience and be very supportive of diverse student populations and students with learning disabilities.
But at the same time, a good teacher needs to know how to think on their feet. And as we stand and teach, we don’t always have AI at our fingertips. If you have time to sit down and write something, you can rely more on AI; but in terms of building skills in relation to research and critical thinking, you need a foundation that empowers you to create your own ideas and analyze existing ideas. You need to build your own expertise.
In what ways have you incorporated AI into the classroom?
McDermott: I’ve been playing with different assignments in my grad classes at Pace. One assignment is to write a half-page argument essay about a topic. The students write it, then ask ChatGPT to write an argument about the same topic. I have students compare what they produced with what ChatGPT produced. It’s an interesting exercise that helps students think about how AI can be used. Some students are surprised that AI can be quite good.
In another assignment, I ask my students to upload middle school and high school student writing samples to ChatGPT and ask AI to analyze the writing, identify patterns of errors, and correct it. What AI produces in this case is also very good. Taking the second step and having AI explain its edits can be very useful for education students.
In working with GenAI, have you had any experiences or observations that you have found concerning?
McDermott: There’s a lot of research—through organizations such as the International Literacy Association, for example—that is essentially saying, “AI is good, but you have to use it with a critical eye.”
When I use ChatGPT, I’ll often ask it to give me some recent citations about a topic. Last month when I did this it gave me a citation from a journal that I didn’t recognize, so I searched and searched for the journal. Turns out the journal doesn’t exist; it was an AI hallucination.
Falk-Ross: GenAI doesn’t use references without prompting, which means that students reliant on the technology will not become familiar with the history or the established reasoning behind a scholarly argument. In class they’ll get an overview of research principles, but that won’t carry over to assignments outside the classroom. In my view, AI should be complemented by a primary source, so that users know where the information came from.
A student might not know the origin of an argument or a fact, or which research articles are seminal pieces—these are very important things for teachers to know, so they can read more, learn more, and pass on a model of research and critical thinking to the students they teach.
McDermott: You could ask AI to cite, but you still need that critical eye to judge whether the citation is authentic.
In what ways have you seen students adopt the technology? How has student writing changed since the introduction of GenAI?
Falk-Ross: Using a tool like ChatGPT can be a part of the writing process and in many cases important to help clarify ideas. But there should be an editing process. You could get a useful model and good vocabulary suggestions from GenAI, but you need to reconstruct the text as your own.
McDermott: This is one of the reasons it’s important to teach students how to use AI in effective ways, to have ongoing dialogue with it. You can ask AI to produce something but then use your critical eye to evaluate that writing and ask it to revise—but be descriptive and specific in the ways you want the writing revised. The process of using ChatGPT can then become collaborative and discussion-like.
Falk-Ross: There is also the issue of students passing off what AI writes as their own work. Even a few years ago, if a student wrote something and it didn’t seem like their own voice, I could just throw it into Google and see if it was plagiarized, because the source was referenced somewhere. Now, it’s much harder to figure out where it came from.
McDermott: There is a category of research called “AI-resistant assignments.” We can develop assignments where students must use their personal life experience and history as research; that’s a way to overcome issues of plagiarism.
Falk-Ross: Regardless of plagiarism, I do think students lose the ability to develop expertise on their own and independently understand the process of constructing arguments, which is important in a diverse population. You need to make things work for the student population you’re teaching, but also within the school’s limits—there may be certain initiatives important to the schools, and this is a complex process.
What are your overall thoughts on this new academic normal? How can we balance the clear benefits of AI with some of its pitfalls?
Falk-Ross: Peter and I can read what AI produces and understand the quality of the output—what’s good, and what might be inaccurate or out of context. But students—without the ability, or an understanding of the importance, of looking up information to see where it came from—can ultimately lose this important skill, and the scholarly model that academia is historically built upon.
Those things are pushed to the wayside, and they are critically important. But GenAI does organize information extremely well and provide useful facts. It’s important that users both learn from GenAI and understand its limitations.
McDermott: We really need to prepare teachers to use it well. I think there are more benefits than disadvantages, but teachers must be the critical decision-makers and leaders in its use.
It’s the responsibility of educators to understand how to effectively use these technologies.
From the Desk of Professor No One
AI is changing the world—but should we be worried? To test its ability to engage in real academic discourse, Pace's writers tasked ChatGPT’s Deep Research with generating a fully AI-written, cited academic article. By pushing its capabilities, we’re not just showcasing what AI can do—we’re interrogating its limitations.


It’s no exaggeration to say that generative artificial intelligence (GenAI) may be one of the most revolutionary and quickly evolving technologies of the modern world. And it’s getting smarter every day. When ChatGPT was first released to the public in November 2022, it would give false facts, misunderstand queries, and (in a viral example turned industry joke) fail to count the number of R’s in the word strawberry.
Since then, many more companies have released their own models and continually update them. As of March 2025, ChatGPT boasts its latest o-series models, omni-functional models (capable of processing text, video, and audio) with higher levels of ‘reasoning.’
Another new feature is Deep Research, which—unlike prior models that would respond rather simply to requests—conducts thorough web searches of peer-reviewed literature and industry reports to create high-quality research papers with accurate citations and often novel conclusions.
What Does it Mean?
Still, many people—especially those of us in academia—may be more cautious. The possible advantages seem clear: if we can speed up research, what breakthroughs might come in healthcare, tech, law, the humanities?
The concerns, however, remain nebulous and ever-changing. In a world where AI can research a topic in ten minutes, what is the value of assigning essays to students? Will we ever be able to clearly discern between real images and deepfakes? Will AI challenge social norms, or reinforce existing biases?
These are difficult questions.
So, we asked the expert.
Hey, ChatGPT—Are You Evil?
To answer the question “What exactly should we be concerned about with generative AI?” we asked ChatGPT. Specifically, Deep Research.
The prompt: I'd like a critical academic paper that discusses the drawbacks and pitfalls of generative AI. What are the biggest concerns? Can AI challenge social norms, or does it reinforce existing biases? What role do corporations have in balancing ethical decisions and the need to use these tools? What role do universities play in ensuring fair AI literacy and access for students of all backgrounds?
Drawing on 34 sources, ChatGPT delivered what it describes as a 15-page “deep analysis in Chicago style discussing the drawbacks and pitfalls of generative AI across various applications and disciplines.”
Explore the live prompt and results.
Watching a Machine Think
The following video (slightly edited and significantly sped up for time) shows the process of a Deep Research query. After the user sends an initial query, ChatGPT usually asks a few follow-up questions about preferred length, format, and tone, and any additional areas of focus, and then it begins to think. This process can take anywhere from a few minutes to twenty.
As it thinks, users can watch along as ChatGPT explains its thought process. Along the sidebar (starting at 0:15), ChatGPT describes its actions, explaining not only what it’s searching but also why it chooses certain sources, how it considers potential avenues of thought, and how it reasons through its next steps.
Not only can the user comb through all of the sources listed, but each source is also embedded as a link following the relevant section within the paper.

Too Long, Didn't Read
Never fear: we also asked ChatGPT to summarize its findings. It listed misinformation, bias and discrimination, and the automation of creative work among the top concerns, and discussed the role of both corporate responsibility and higher education in ensuring a more sustainable model of ethical AI growth.

But Really, What Does it Mean?
It’s a bit dystopian to ask an AI chatbot what’s wrong with AI. (Thankfully, it didn’t say “absolutely nothing, please continue to give me your data.”) But as AI becomes more capable, it is up to humans how we use it. How we regulate it. How much we trust it. Many of the concerns quickly become existential. In a world where AI is becoming smarter, what does that mean for us? As AI seems to become more human, will humans somehow become less?
When considering how we should use AI, or what place humans have in an AI world, perhaps the wisdom of the most famous fictional AI, HAL 9000, can serve as some guidance: “I’m putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.”
Pace MPA Alumna’s Path Is About Public Service
From municipal government to her role at the Federal Reserve Bank of New York, MPA alumna Andrea Grenadier has navigated a successful career in public administration.


Andrea Grenadier
Class of 2016
Master of Public Administration
DISCLAIMER: The views expressed here are my own and do not necessarily represent those of the Federal Reserve Bank of New York or the Federal Reserve System.
Tell us more about your current role at the Federal Reserve Bank of New York.
As part of my role at the Federal Reserve Bank of New York, my team regularly meets with business, community, academic, and government leaders to obtain on-the-ground insights to inform our understanding of the national and regional economy. The work I do is extremely rewarding because it helps to humanize the data and bridge the gap between economic indicators and real-world experiences. I appreciate that the role is people-centered, and it feels good to know that the stakeholders I meet help to influence the monetary policy making process.
Why did you choose to study public administration?
I’ve always gravitated towards people-centered and mission-driven work and realized early that a public administration degree was necessary for career mobility. I saw the value of a broad-based degree as a pathway to good-paying, stable jobs and wanted a foundation that would give me flexibility without pigeonholing me into one or two “types” of jobs or titles. Further, I knew that I could market myself for any position in the public or private sector.
Why did you choose to enroll in the Master of Public Administration (MPA) at Pace?
I chose to enroll in Pace’s MPA program (government track) because, first, it had an amazing reputation among those interested in a public sector career in downstate New York. Second was its convenience: I wanted to continue working full-time while getting my master’s degree, and Pace’s program allowed me to “have my cake and eat it too” rather than put my career on hold. At the time I was getting my master’s, most schools did not offer fully online programs or classes. Because of its bi-campus structure, Pace was ahead of the curve in the modality of its course offerings.
How have faculty in the MPA program been instrumental in your academic journey?
The Public Administration faculty are approachable, down-to-earth, and extremely considerate. They were always willing to meet with me and set me up for success. As a testament to the exemplary faculty, the relationships I built have lasted for years after graduation. Whether I’ve needed advice or a reference, I’ve relied on the strong relationships I built with them. A few of the jobs I’ve had during and after my degree were a direct result of the faculty at Pace. They truly want to elevate their students.
How have your studies in the MPA program benefited you in your career?
My studies in the program have helped me to reflect on and analyze experiences in my career. I’m able to understand the systems and processes of the institutions I interact with, and the degree allowed me to build a strong foundation, which has made career progression easier. Many job postings require a certain number of years of experience or education/coursework in a relevant field. Consequently, the degree has been invaluable to me in terms of return on investment and career progression.
How did you get started in your career; what has been your trajectory to the present?
I started my career in municipal government in Westchester County, serving as a congressional staffer for a Westchester representative. From there, I pivoted to a communications role with a New York State Assemblyman who, after two years, gave me the opportunity to lend my communications skills to the Westchester County Executive campaign. I then landed a position with the New York City Mayor’s Office as an advance associate for Mayor de Blasio.
Post-pandemic, I was able to pivot to the City Legislative Affairs Unit, where I stayed for half a year. Next, during the mayoral transition, I worked for the New York City Economic Development Corporation. In 2022, I found my way to the Federal Reserve Bank of New York.
How has your time as a Pace student influenced the person you are today?
Pace’s motto of Opportunitas has influenced the person I am today. My career has had peaks and valleys, and even during challenging and difficult situations, the motto of Opportunitas has allowed me to reframe and embrace the experiences as learning opportunities. I will always be grateful and pay it forward, as I keep in touch with many of my peers and classmates from the program who continue to inspire me both personally and professionally. Ultimately, my life would be a lot less rich if Pace was not a part of my story.
In addition, outside of Westchester County, I’ve been outnumbered by my peers with the same degree from Harvard, NYU, and Baruch. Though at first, I was intimidated, I quickly began to view being a Pace graduate as a competitive edge. Pace has opened many doors and was the catalyst to my career. I could not be happier with my decision almost 10 years ago. Go Setters!
Rethinking Education for an AI Future
Pace President Marvin Krislov recently participated in a conversation at Google Public Sector GenAI Live & Labs as part of the Future U. podcast. He joined higher ed leader Ann Kirschner, PhD, and Chris Hein, Field CTO at Google Public Sector, to discuss the evolving role of AI in higher education.


President Marvin Krislov recently joined the Future U. podcast at Google Public Sector GenAI Live & Labs, recorded at Google’s headquarters on Manhattan’s Pier 57. In a conversation alongside Ann Kirschner, PhD, of CUNY and Arizona State University, and Chris Hein, Field CTO at Google Public Sector, Krislov explored the profound impact of AI on higher education and the workforce.
Hosted by Future U.’s Michael Horn and Jeff Selingo, the discussion centered on the need for institutions to develop a strategic approach to AI, its role in shaping the future of work, and the importance of university-industry partnerships in ensuring equitable access to AI-driven education.
Krislov emphasized that AI is not just a passing trend—it requires proactive planning, faculty training, and industry collaboration to prepare students for the evolving job market.
“Pace has always been focused on preparing people for the next step. Thinking about your career, your job, skills, and internships is part of the discussion the minute you enter Pace University,” said Krislov. “When we saw the important change happening with technology and AI, we said, ‘We owe it to our students and our faculty to help them navigate this.’”
He highlighted Pace University’s leadership in AI education, including AI-integrated coursework across disciplines, real-world partnerships, and initiatives like the "AI in the Workplace" program. As AI continues to reshape industries, Krislov reinforced that higher education must not just adapt, but lead, ensuring students graduate not just AI-literate, but AI-ready.
Navigating AI Responsibly: A Practical Guide from the Pace Library
From privacy risks to environmental costs, the rise of generative AI presents new ethical challenges. This guide developed by the Pace Library explores some of these key issues and offers practical tips to address these concerns while embracing AI innovation.

It’s our new best friend!
It’s the end of critical thought!
It will destroy/revolutionize education!
Many, if not most, of us are grappling with understanding and learning a suddenly pervasive technology: generative AI (GenAI). Like most new technologies, GenAI carries a load of anxieties along with its benefits, presenting not only skills issues but also wider ethical questions.
Though the death of writing appears to have been exaggerated, there is still plenty to be concerned about during this rapid adoption. Is it possible to use GenAI in a way that feels safe and principled?
Here are a few things you might worry about when using ChatGPT, Claude, Gemini, or any of the array of AI models, and how you might adjust your practices.

Privacy
Since the advent of GenAI tools, privacy has been an issue. Any info that you put into a GenAI tool—including the content you create, like prompts or material you upload to work on—can theoretically be used to train the AI. If it trains on your work, your work may come out in another user’s output.
Big AI companies usually claim that user information is not used as training material, but their privacy policies and terms of use say otherwise.
An even more obvious privacy problem comes from the fact that AI companies can collect your identity and contact data, IP address (which indicates your location), device and network information, and possibly other information as available to them.
What can I do to safeguard my privacy?
Better safe than sorry. You can proactively opt out of having your data kept and possibly used by an AI company by finding the opt-out process. (Not all US states have opt-out requirements for companies that gather personal data, but enough of them do that there should be a mechanism.) The companies don’t make it easy to locate, but it is usually in the privacy policy or the terms of use.
Environmental Cost
Generative AI is extremely sophisticated and powerful, and it requires sophisticated and powerful computers to run it. These, in turn, demand enormous amounts of energy and water.
The Washington Post has estimated that every prompt entered into a GenAI tool consumes about a bottle’s worth of water. That’s not a lot, but it’s 10 times as much as a Google search, and over the course of a big project, you could end up using a truckload of bottles.
What can I do to reduce AI-induced waste?
Abandoning AI isn’t the answer. Even the greenest among us are using resources all the time—simply by being alive—and it’s possible that AI will be able to reduce our energy use in the long run. For now, while the short-term costs are high, the best thing you can do is be efficient about how you use it.
Learn how to write good prompts (you can use LinkedIn Learning through Pace ITS or review Pace’s resources on prompting), think them out beforehand, and you’ll need to use fewer of them.
Loss of Skills
This is the one that probably worries us, as university affiliates, the most. We’re in the business of teaching and learning; what happens when we outsource planning, writing, even drawing to AI? It seems like uniquely human abilities—critical thinking, logical planning, creativity—can’t help but atrophy.
What can I do to make sure my skills stay sharp?
Don’t panic. ChatGPT may be able to produce 500 readable words that address a topic, but truly useful content requires a lot of human intervention. AI doesn’t do your weightlifting for you; it’s the gym equipment that makes it easy and convenient for you to do the weightlifting yourself.
As a result, those high-level intellectual skills are still very much required to get good results out of GenAI. A prompt that produces what you want must be planned and broken down, step by step, and written carefully with attention to detail and subject-specific knowledge.
Conclusion
These aren’t the only issues with AI, and these suggestions aren’t the only ways to improve your relationship with AI. But, if you’re a member of the Pace Community, the Pace Library can help you with specific questions, instruction, class policies, and more. Ultimately, it’s up to each of us to balance these ethical challenges with AI’s potential, ensuring AI is used effectively, thoughtfully, and responsibly.
For more information, check out the Pace Library’s faculty and student guides to AI, or set up an appointment with a librarian in NYC or Westchester.
Seidenberg’s Deep Learning Afternoon with Thunder Compute
The Seidenberg School of Computer Science and Information Systems’ Pace Data Science Club recently hosted an exciting and informative event featuring Thunder Compute, a Y-Combinator-backed pioneering company in GPU cloud computing.


The Seidenberg School of Computer Science and Information Systems’ Pace Data Science Club recently hosted an exciting and informative event featuring Thunder Compute, a Y-Combinator-backed pioneering company in GPU cloud computing. Co-founders Carl Peterson and Brian Model joined Pace students to share insights into their cutting-edge technology and its impact on the future of deep learning.
Thunder Compute is known for revolutionizing the deep learning landscape with its GPU virtualization technology, which powers a highly efficient cloud platform and makes powerful GPUs significantly more accessible to users.
During the event, the co-founders provided an in-depth look into their innovative platform before guiding students through a hands-on installation process. Their interactive approach ensured that attendees received personalized support and had their questions addressed effectively. The session aimed to demystify GPU virtualization and provide students with firsthand experience in setting up and utilizing powerful cloud-based compute instances.
As part of the event’s hands-on session, participants ran a deep learning model script developed by the Pace Data Science Club. By leveraging Thunder Compute’s GPU acceleration, students were able to experience the significant performance improvements firsthand, reinforcing the advantages of such a solution for deep learning applications.
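For readers curious what such a hands-on check might look like, here is a minimal, hypothetical sketch (not the club's actual script) that times the same matrix workload on CPU and GPU with PyTorch:

```python
# Hypothetical sketch (not the Pace Data Science Club's actual script):
# time the same matrix workload on CPU vs. GPU to see the acceleration.
import time
import torch

def benchmark(device: str, size: int = 4096, reps: int = 10) -> float:
    x = torch.randn(size, size, device=device)
    torch.matmul(x, x)                    # warm-up run
    if device == "cuda":
        torch.cuda.synchronize()          # wait for queued GPU work to finish
    start = time.perf_counter()
    for _ in range(reps):
        torch.matmul(x, x)
    if device == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

print(f"CPU: {benchmark('cpu'):.2f} s")
if torch.cuda.is_available():
    print(f"GPU: {benchmark('cuda'):.2f} s")
```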
Throughout the session, students actively engaged with the co-founders in discussions about the evolving landscape of cloud-based GPU computing, particularly in data science and machine learning. These conversations highlighted the growing significance of cost-efficient, high-performance solutions in the industry, reinforcing Thunder Compute’s unique value proposition. The dialogue also explored broader industry trends, including AI model training, scalability challenges, and the future of cloud-based infrastructure.
The event concluded with a heartfelt expression of gratitude to Carl and Brian for traveling to New York to share their expertise and connect with Pace students. Their visit provided a valuable opportunity for attendees to gain hands-on experience while expanding their professional networks in the tech industry. With the rapid advancements in AI and machine learning, events like these serve as crucial learning experiences, empowering students with the knowledge and skills necessary to navigate the evolving landscape.
Haub Law's Trial Advocacy Team Advances to ICC Moot Court Competition in The Hague
On March 8–9, 2025, the Elisabeth Haub School of Law at Pace University hosted the 2025 Regional Round for the Americas and the Caribbean of the International Criminal Court Moot Court Competition (ICC Moot). The event brought seven teams to Haub Law, with the top US teams qualifying for the global ICC Moot Court Competition held annually in The Hague, Netherlands. This year, Haub Law’s team qualified as a finalist and will be traveling to The Hague in June.


On March 8–9, 2025, the Elisabeth Haub School of Law at Pace University hosted the 2025 Regional Round for the Americas and the Caribbean of the International Criminal Court Moot Court Competition (ICC Moot). The event brought seven teams to Haub Law, with the top US teams qualifying for the global ICC Moot Court Competition held annually in The Hague, Netherlands. This year, Haub Law’s team qualified as a finalist and will be traveling to The Hague in June.
“Haub Law’s team was impressive in the qualifying rounds,” said Bradford Gorson ’13, one of the team’s coaches. “Each student prepared diligently for this competition and the results are reflective of that.” The Haub Law team consists of 3L Priscilla Holloway, 2L Sophie Bacas, 2L Jacob Cannon, 2L Tenzin Lhamo, and 2L Victoria Perretti. The team was coached by two Haub Law alumni, Bradford Gorson ’13 and Steph Areford ’24, along with David Anderson. In addition to the team advancing, Sophie Bacas was awarded first place in the Best Prosecutor category for her performance during the competition.

“Our team dedicated seven months of rigorous preparation to this competition, and the journey was nothing short of challenging—especially since none of us had prior experience with the ICC,” said 2L Tenzin Lhamo. “However, with the guidance of our exceptional coaches, Bradford Gorson, David Anderson, and Steph Areford, we were able to rise to the challenge.” Notably, alumni coach Bradford Gorson was part of the Haub Law team that competed in The Hague 12 years ago.
“Haub Law founded the ICC Moot, and as it has grown into a global competition, we now host the qualifying round for American teams hoping to compete in The Hague,” said Professor Alexander K.A. Greenawalt, who serves as faculty director of the Moot. “It is wonderful to have a Haub Law team advancing once again to the global competition.” The Elisabeth Haub School of Law at Pace University is home to a top-ranked trial advocacy program. In 2024, it was ranked #13 in the nation by U.S. News & World Report, placing it among the top 10% of schools nationwide.
The ICC Moot was first organized in 2004 by Haub Law Professor Emeritus Gayl S. Westerman and Matthew E. Brotmann. At the time, the moot was the world’s only moot court competition based on the law and procedures of the newly created International Criminal Court (ICC), the first permanent international tribunal dedicated to the prosecution of international criminal offenses. Since 2004, the ICC has grown, and the Competition has grown with it. In 2014, Haub Law partnered with the International Criminal Court and the Grotius Centre for International Legal Studies at Leiden University to create a global competition, the ICC Moot Court Competition, which is held annually in The Hague, Netherlands, with the final round judged at the ICC itself by ICC judges and legal officers. More recently, in 2017, the ICC Moot began its collaboration with the International Bar Association (IBA), and in 2020 the IBA became a name partner in the Competition.
This year, the five top US teams were the University of Chicago, Georgetown University Law Center, Case Western Reserve University School of Law, Elisabeth Haub School of Law at Pace University, and Tulane University School of Law. These top five teams all qualified for the International Criminal Court Moot Court Competition to be held in June in The Hague.
2025 Regional Qualifying Round for the Americas and Caribbean results
Best Overall
- First: University of Chicago
- Second: Georgetown University Law Center
- Third: Case Western Reserve University School of Law
Best Preliminary Round Oralists – Prosecution
- First: Sophie Bacas, Elisabeth Haub School of Law at Pace University
- Second: Jade Armstrong, University of Miami School of Law
- Third: Kaylara Benfield, Case Western Reserve University School of Law
Best Preliminary Round Oralists – Defense
- First: Inanna Khansa, University of Chicago
- Second: Rose Leakin, Case Western Reserve University School of Law
- Third: Luke Dykowski, Georgetown University Law Center
Best Preliminary Round Oralists – Victims’ Advocate
- First: Vikram Ramaswamy, University of Chicago
- Second: Haley Dykstra, Tulane University School of Law
- Third: Minah Malik, University of Miami School of Law
Best Prosecutorial Memorial
- First: Georgetown University Law Center
- Second (TIE): Tulane University School of Law
- Second (TIE): Case Western Reserve University School of Law
Best Defense Memorial
- First: Case Western Reserve University School of Law
- Second: Georgetown University Law Center
- Third: University of Miami School of Law
Best Victims’ Advocate Memorial
- First: Case Western Reserve University School of Law
- Second (TIE): University of Miami School of Law
- Second (TIE): Georgetown University Law Center
Semifinalist Teams
- University of Chicago
- Georgetown University Law Center
- Case Western Reserve University School of Law
- Elisabeth Haub School of Law at Pace University
- Tulane University School of Law
Participating Teams
- Case Western Reserve University School of Law
- Elisabeth Haub School of Law at Pace University
- Georgetown University Law Center
- Tulane University School of Law
- University of Chicago
- University of Miami School of Law
- Chicago-Kent College of Law
Op-ed: Higher Education Drives New York's Economic and Social Vitality
Pace President Marvin Krislov writes an op-ed with Jessica Lappin of the Downtown Alliance in Crain's New York Business, emphasizing the critical role of higher education in sustaining New York City's economic and social well-being.

Fox 5 News: Professor Bennett Gershman on Babylon Village Retail Gun Store Ban
Haub Law Professor Bennett Gershman speaks to Fox 5 News about the controversial law enacted in Babylon Village, New York, which bans the sale of firearms and ammunition within its borders, saying it could be viewed as unconstitutional.
