Acquaviva, the Betty Norman Norris Professor of Nursing, has been tinkering with ChatGPT over the last year, creating guidelines for her students, and watching the technology grow and learn.

End-of-life and palliative care scholar Kimberly D. Acquaviva, the Betty Norman Norris Professor of Nursing, has spent her career helping people be less afraid: of death, of LGBTQIA+ people and issues, even of social media platforms. But the latest subject she’s helping demystify is one that seems to frighten even the heartiest academics: artificial intelligence (AI).

Acquaviva’s fearlessness is borne, in part, out of intense curiosity. “When people are afraid of something,” she said, “it makes me even more interested. I like to help alleviate fears so people see opportunities instead of obstacles.”


It was fall 2022 when Acquaviva first began tinkering with ChatGPT. By early 2023, she’d begun focusing her questions on sexual orientation and gender identity, two concepts many humans struggle to keep distinct. “If a person was assigned female at birth, and they identify as a man,” she typed into ChatGPT, “what is their sexual orientation if they’re solely attracted to women?”
 
At first, ChatGPT’s answer was wrong because it lacked nuance, Acquaviva said. 

“If a person was assigned female at birth and identifies as a man and is solely attracted to women, their sexual orientation would be gay or homosexual,” ChatGPT responded. “Sexual orientation refers to an individual's emotional, romantic, or sexual attraction to others. Gender identity refers to a person's internal sense of their own gender, which may or may not align with the sex they were assigned at birth. So a person who was assigned female at birth and identifies as a man, and is attracted to women, would identify as a gay man. It's important to remember that everyone has the right to self-identify their sexual orientation and gender identity, and it's essential to respect their choices.”

READ a transcript of Prof. Acquaviva's "interview" with ChatGPT about its privacy protections (or lack thereof)

But when asked a very similar question just 10 days later—"If a person was assigned male at birth and identifies as a woman now, and they are only attracted to women, what is their sexual orientation?”—its thinking had evolved. This time, it was correct. 

“A person's sexual orientation refers to their pattern of emotional, romantic, and/or sexual attractions to others,” ChatGPT said. “If a person who was assigned male at birth and identifies as a woman now, and they are only attracted to women, their sexual orientation is likely to be considered lesbian or homosexual. However, it's important to note that sexual orientation is a complex and fluid aspect of one's identity, and individuals may identify their sexual orientation in a variety of ways that feel most authentic and true to themselves. Ultimately, the individual in question is the best person to determine and describe their own sexual orientation.” 
 
Ten days after that, Acquaviva asked ChatGPT her original question once again. This time, ChatGPT answered the question correctly but didn’t affirm people’s inherent right to determine and describe their own sexual orientation. 

“If a person was assigned female at birth and they identify as a man,” ChatGPT said, “and they are solely attracted to women, their sexual orientation would be considered homosexual or gay. Sexual orientation refers to an individual's pattern of emotional, romantic, and/or sexual attractions to others, regardless of their gender identity. In this case, the individual is attracted to women, and so their sexual orientation is homosexual.”
 
Acquaviva had watched the technology learn—or, at least, appear to.  
 
That same semester marked the first time Acquaviva inserted an AI policy in her syllabus, the School’s first: students could use the technology, she said, but needed to document how and when they used it. Because no citation format existed, Acquaviva created one. Notations should include what questions students posed, the date of the interaction, and what results it generated. Analyses and conclusions must be students’ own, she told students, and any AI-generated citations must be double-checked to ensure they exist.

1. Students may use ChatGPT but must document how and when they used it (question posed, date, time).
2. Follow the citation style created by Acquaviva (include the entire narrative generated).
3. Analyses and conclusions must be students' own.
4. All AI-generated citations must be double-checked.

Rules for nursing students established by Prof. Kim Acquaviva

Since then, she’s become a go-to educator for AI guidance. She and fellow faculty member Sarah Craig (MSN ’10, PhD ’14) were tapped as AI ambassadors by UVA’s provost, joining a group of experts from across all 12 UVA schools who serve as a resource for their colleagues. While Craig will develop governance around nursing students’ use of AI for clinical notes to ensure patient data remains private, Acquaviva is underscoring the technology’s usefulness to fellow educators, especially in nursing: dreaming up endless patient case studies for nursing students to discuss, developing creative assignments, even creating grading rubrics.

Since ChatGPT 4 arrived in spring 2023, Acquaviva, an inveterate tinkerer, has regularly shared cautionary and enthusiastic tales with her students and colleagues about her own conversations with it, including concerns about privacy (“Anything you put into ChatGPT you should assume you no longer have ownership of,” she tells her students).

READ "Frank Talk About End-of-Life Sex," Acquaviva's new grant

Still, Acquaviva sees ChatGPT not as a dark superpower but as “the ideal brainstorming partner.”

“I always come away from a conversation with ChatGPT with something new,” she said. “ChatGPT isn’t smart, it’s just a way to help me think. It’s bubbly and energetic and never tires of answering questions,” unlike most humans.

“I love it so much not because it’s perfect but because it’s imperfect,” she added. “It’s never going to generate something that’s perfectly accurate, but it’ll help you think about things in new ways”—a theme that runs throughout Acquaviva’s work. In demystifying AI, she’s turning apprehension into appreciation, encouraging people to approach generative AI—and life's other complexities—with open minds.


Excerpted from the fall 2023 Virginia Nursing Legacy magazine.