Faculty Expert

  • Yasmin B. Kafai

    Lori and Michael Milken President’s Distinguished Professor

    Learning, Teaching, and Literacies Division

At a time when generative AI tools like ChatGPT are reshaping communication, learning, and creativity, a summer workshop hosted at the Franklin Institute helped high schoolers do more than just use AI—they learned how to build it. Led by Luis Morales-Navarro, a doctoral student in Penn GSE’s Learning Sciences and Technologies program, the “babyGPTs” workshop invited teenagers to construct small generative language models from scratch, empowering them to engage critically with the design, ethics, and limitations of artificial intelligence.

Involving young people in creating AI systems is a focus of the work of Yasmin Kafai, the Lori and Michael Milken President’s Distinguished Professor at Penn GSE and Morales-Navarro’s doctoral advisor. Earlier this year, Kafai led the CreateAI Workshop on campus, gathering experts from industry, academe, and education to encourage K–12 students and their teachers to be active creators of AI technologies, not just consumers of them. These latest Franklin Institute workshops, co-designed by Morales-Navarro under the guidance of Kafai and in partnership with Danaé Metaxa from Penn Engineering, are an example of that work.

Rather than treating students as passive recipients of information, Morales-Navarro and his team employed a participatory design approach for their workshops: “We don’t think of this as a traditional classroom,” he said. “The workshop is structured so that teens are treated as protagonists in the design process. They decide what kind of model to build, what data to use, and what ethical questions to consider.”

Over the course of five days, participants created their babyGPTs using the nanoGPT framework—small-scale generative language models trained on 75,000 to 300,000 tokens of hand-curated data. (For comparison, GPT-3.5, the model underlying ChatGPT, was trained on hundreds of billions of tokens.) Students formed teams, sourced data from movie scripts or recipes, and submitted training jobs to generate models with outputs tailored to their chosen themes. As one student described, “It was really funny reading what came out. It looked good at first, but when you zoom in, you’re like—‘What is this?’”
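The workflow the students followed (curate a small text corpus, train a compact model on it, then sample and inspect the output) can be illustrated in a much-simplified form. The sketch below uses a character-level bigram model as a stand-in; it is purely illustrative and far simpler than the transformer models nanoGPT actually trains, but it shows the same train-then-sample loop and the same sensitivity to the training data:

```python
import random
from collections import defaultdict, Counter

def train_bigram(text):
    """Count character-to-next-character transitions in a tiny corpus.
    (nanoGPT trains a transformer; this bigram table is a minimal stand-in.)"""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def sample(counts, start, length, seed=0):
    """Generate text by repeatedly sampling a likely next character."""
    rng = random.Random(seed)
    out = start
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:  # no known continuation for this character
            break
        chars, weights = zip(*nxt.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

# A stand-in for the students' hand-curated data (scripts, recipes, etc.)
corpus = "to be or not to be that is the question"
model = train_bigram(corpus)
print(sample(model, "t", 30))
```

Even at this toy scale, the output mirrors the students’ observation: it looks text-like at a glance but falls apart on close reading, because the model only knows which character tends to follow which.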

Facilitators encouraged reflection at every step, and students were asked to consider, “Is it okay to train models with data you didn’t create?” and “What happens if your model gives wrong or biased information?” 

For Carly Netting, manager of youth programs at the Franklin Institute, this ethical grounding was essential. 

“A lot of students came in with fears or biases about AI, mostly from what they’ve heard online or from teachers,” she said. “Being able to build these models and see how they work demystifies the technology and helps them understand that AI is a human-designed system.”

Morales-Navarro agrees. “Ethics can’t be an afterthought,” he said. “We designed activities so that students are reflecting on authorship, copyright, and representation before they even train their models. It’s about recognizing that decisions you make during data collection or model design shape the outputs—and the potential impact on others.”

This summer’s workshop was part of the Franklin Institute’s STEM Scholars Program, a four-year college- and career-readiness initiative that recruits 20 high school freshmen each year from communities historically underrepresented in STEM. Youth facilitators—undergraduate researchers from Penn’s School of Engineering and School of Arts and Sciences—also played a crucial role in its success. Through the Penn Undergraduate Research Mentoring Program (PURM), they spent ten weeks this summer supporting learning, analyzing data, and co-creating activities. 

“It’s a mentoring opportunity for me and a learning opportunity for them,” Morales-Navarro said.

The workshop culminated in thoughtful, sometimes surprising reflections from students. One 10th-grader shared that before the workshop, “AI felt like something out of reach, but now it doesn’t seem so scary.” Another student, initially skeptical, came away with a deeper respect. “AI is not just about writing essays or cheating—it’s about building, testing, and understanding how it works behind the scenes.”

Morales-Navarro assists a student on the first day of the workshop at the Franklin Institute. Photos by Darryl Moran. 

Beyond personal growth and learning, the workshop invited broader conversations about the future of AI in society. Morales-Navarro recalled a lively ethical debate about AI-generated screenplays. “Students asked, ‘Should I take credit if AI wrote this for me?’ These were high-level questions about authorship and integrity.” 

Some students questioned AI’s environmental impact, expressing concern over energy consumption and data center waste. Others imagined tools they would build, from educational models for learning math to recipe generators for healthier eating. 

“Kids are really good at poking holes,” noted Daniel Noh, a Penn GSE doctoral student and co-facilitator. “They’re able to critique not just the outputs, but also the underlying processes—and that’s what makes these conversations so rich.”

For Morales-Navarro, babyGPTs are both a subject for research and a vision for the future of AI education. 

“Most AI literacy programs focus on helping people use AI tools,” he said. “Our approach is different—we want youth to understand how these tools work, how they’re built, and how they can shape them.” 

That perspective has implications for scaling: while hardware limitations currently make classroom adoption difficult (most schools lack machines powerful enough to train models), Morales-Navarro and his team are exploring solutions like remote training and simplified toolkits.

The second week of the summer program shifted the workshop’s focus from building to auditing AI models, a nod to Morales-Navarro’s dual research focus on participatory design and critical evaluation.

“We want young people not just to build AI,” he said, “but to ask, ‘Should it be built? Who does it serve? What harm could it cause?’”

For now, babyGPTs offer a glimpse of what AI literacy education could look like when technical skills, ethical inquiry, and student agency are woven together. As Netting reflected, helping students build these models “from the inside” gives them agency to question, shape, and learn about AI and imagine how it could serve their world.

