Distinguished Professor of Information Science Kevin Crowston has received a $50,000 grant from the Alfred P. Sloan Foundation to launch a pilot study examining how the use of generative AI tools is reshaping the way software developers learn and retain core programming skills.

“Generative AI is expected to change many different kinds of work, but it’s already having an impact on coding, where it’s particularly useful,” Crowston said. His proposal cites Google CEO Sundar Pichai’s 2024 estimate that as much as 25 percent of the company’s code was being written with the assistance of AI tools—a sign of the rapidly shifting landscape.

These advances raise new questions about how programmers acquire skills. “There’s potential for real productivity increases, with people writing more code more quickly,” Crowston explained. “But the fear is that because you have the machine doing these tasks, people will stop practicing them, with negative consequences for their own abilities.”

To explore this possibility, Crowston, professor of practice Michael Fudge, and Francesco Bolici, associate professor at the University of Cassino and Southern Lazio in Italy, have put together a three-year proposal for the National Science Foundation.

The Sloan Foundation grant will kickstart the first year of research, supporting student involvement—doctoral students Akit Kumar at Syracuse and Alberto Varone in Italy, along with undergraduate Cassandra Rivera ’27, are part of the team—and two initial studies.

“I was extremely pleased to receive this funding,” Crowston said. “It gives us external validation that our project is addressing an interesting and important idea.”

The first of the two studies will examine how undergraduate students in a required introductory Python course use generative AI tools. “The hypothesis is that if you’re just using the tool to do your work, you’ll finish the assignments but won’t actually learn,” Crowston said. “We expect students who ask questions to understand each line of code to learn more.”

The researchers are also exploring what motivates these different patterns of use. Students who are genuinely interested in programming may turn to AI in ways that deepen understanding, while students who feel time pressure or are taking the class only to fulfill a requirement may be more inclined to let AI do the work.

At the same time, Crowston noted, programming itself may be evolving. “Maybe the days of coding each for loop are behind us,” he said. “Maybe the real skill is learning how to convey what you want to the AI—and to check that it did it correctly.” The study will explore how these novel AI skills intersect with the traditional skills of programming.

The second study turns to experienced programmers. The team plans to interview 40 people who develop software to support scientific research, asking how they use generative AI, what benefits they see, and whether they worry about long-term impacts on their own abilities.

For scientific domains, the stakes may be especially high. While AI models have been trained on large amounts of general-purpose Python, they have seen far less specialized code—such as software used to model black hole collisions or other niche scientific phenomena. “You could imagine the model producing code that looks plausible but isn’t scientifically accurate,” Crowston said. Experienced programmers recognize this risk, he added—“they’re really, really worried about it”—but newer programmers may not have the same skepticism.

Crowston believes the project taps into a broader question facing many professions: What happens to expertise when AI takes over routine tasks? Early evidence from several industries suggests that entry-level hiring is already declining. “If companies rely on AI to do the work entry-level people used to do, then two years later they have nobody with two years of experience,” he said. “That’s not great for students—and it’s a challenge for employers and universities alike.”