Introduction to Volume 17, Issue 1
David Joiner
pp. 1–1
A brief introduction to this issue of the Journal of Computational Science Education from the editor.
pp. 2–10
@article{jocse-17-1-1,
author={Ikponmwosa J. Iyinbor and Ken-ichi Nomura and Paulo S. Branicio},
title={Machine Learning Prediction of Stacking Fault Energy in Steel Alloys Based on Chemical Composition},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={2--10},
doi={10.22369/issn.2153-4136/17/1/1}
}
Stacking fault energy (SFE) is a critical parameter in the design of steels with desirable mechanical properties such as strength, ductility, and strain-hardening rate. SFE influences secondary deformation mechanisms like Transformation Induced Plasticity (TRIP) and Twinning Induced Plasticity (TWIP). This work involves creating a machine learning model to classify steel alloys into low, medium, or high SFE categories, aiding in the prediction of secondary deformation behaviors. Data from literature containing experimental and theoretical SFE values for various steel alloy compositions were compiled and preprocessed, resulting in a dataset of 374 observations. Using this dataset, several machine learning models, including Feedforward Neural Network (FFNN), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Random Forest (RF), Gradient Boosting Regressor (GBR), CatBoost Regressor (CAT), and Adaptive Boost Regressor (ADB), were trained and evaluated for SFE prediction accuracy. Two models, SVM and RF, emerged as the top performers. To enhance accuracy and reduce misclassification, threshold probabilities were applied, allowing fuzzy classification when model uncertainty was high. Validation against literature data showed strong agreement between predictions and reported SFE values. This study provides valuable insights into predicting SFE and guiding the development of austenitic steel alloys with tailored properties.
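The threshold-probability idea the abstract describes can be sketched in a few lines: when the model's top class probability falls below a cutoff, report a set of plausible classes instead of a single label. This is a minimal illustration; the class names, the 0.6 cutoff, and the fallback rule are assumptions, not values from the paper.

```python
def classify_sfe(probs, threshold=0.6):
    """Return a single SFE class when the model is confident,
    otherwise the set of plausible classes (fuzzy classification).

    probs: mapping of class label -> predicted probability.
    """
    best = max(probs, key=probs.get)
    if probs[best] >= threshold:
        return best  # confident single-class prediction
    # Low confidence: report every class whose probability is
    # within half of the top probability (illustrative rule).
    return {c for c in probs if probs[c] >= probs[best] / 2}

# Usage: a confident and an uncertain prediction.
print(classify_sfe({"low": 0.05, "medium": 0.80, "high": 0.15}))  # medium
print(classify_sfe({"low": 0.40, "medium": 0.45, "high": 0.15}))  # {'low', 'medium'}
```

In practice the probabilities would come from a calibrated classifier (e.g. an SVM or random forest with probability estimates); only the thresholding logic is shown here.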
pp. 11–18
@article{jocse-17-1-2,
author={Katy Luchini-Colbry and Dirk Colbry and Julie Rojewski},
title={Expanding the CyberAmbassadors Program to Include Mentoring for Emerging CI Careers},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={11--18},
doi={10.22369/issn.2153-4136/17/1/2}
}
Advanced computing infrastructure has fostered tremendous growth and innovation across research and practice in STEM. Cyberinfrastructure (CI) professionals often collaborate with disciplinary experts who want to leverage computation; in order to contribute effectively to this work, CI professionals need both technical and professional skills. There are many formal and informal opportunities for the CI workforce to gain technical skills, and the CyberAmbassadors program developed a new curriculum to provide CI professionals with opportunities to build their professional skills. More than 19,000 participant trainings have been completed, including almost 900 individuals who have earned a certificate for completing the entire CyberAmbassadors program. This paper describes initial efforts to expand CyberAmbassadors to include training on culturally-aware mentoring skills, with a focus on fostering professional success in the CI workforce, which is still an evolving profession with no single entry path. The new mentoring curriculum will help CI professionals at all levels develop the self-assessment, planning, and networking skills necessary to build strong mentoring relationships that can help them navigate emerging CI career paths. The mentoring curriculum will build on the communications, teamwork, and leadership skills training from the existing CyberAmbassadors program, and will offer specialized practice in key career development activities like offering constructive feedback, fostering a growth mindset, developing a mentoring network, and building transferable skills. The new curriculum will also integrate research about the benefits of culturally-aware mentoring, which seeks to provide broad support for mentees with diverse identities and experiences. Once finalized, the new curriculum will be distributed broadly through a national network of volunteer facilitators who provide trainings for their own campuses, companies, and communities.
pp. 19–27
@article{jocse-17-1-3,
author={Patrick Diehl and Ying Wai Li and Christoph Junghans and John K. Holmen and Elijah MacCarthy and Suzanne Parete-Koon and Yun (Helen) He and Rebecca Hartman-Baker and Charles Lively and Kevin Gott and Lipi Gupta and Kristina Streu and Yasaman Ghadar and Paige Kinsley and Jane Herriman and Erik W. Draeger and Victor Eijkhout and Susan Mehringer},
title={Shaping the Future Workforce: Challenges and Lessons Learned in HPC Education from National Labs and Computing Centers},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={19--27},
doi={10.22369/issn.2153-4136/17/1/3}
}
Workforce training at national laboratories and computing centers is essential and typically falls into two categories: foundational training for newcomers and advanced training for experienced users. Foundational topics—such as version control, build systems, and basic HPC usage—are largely transferable across institutions, while cluster-specific training varies due to differences in hardware, job schedulers, and local workflows. Training on emerging technologies is split between hardware-specific content and broadly applicable programming paradigms. To reduce redundancy and increase impact, national labs, computing centers, and vendors are collaborating through initiatives like the HPC Training Working Group to share best practices, co-develop materials, and broaden outreach. These coordinated efforts aim to make HPC training more accessible, scalable, and consistent across the community.
pp. 28–33
@article{jocse-17-1-4,
author={Julia Mullen and Sam Corey and Lauren Milechin and Riya Tyagi and Daniel Burrill},
title={Advancing HPC skills by Developing Large Language Model Retrieval Augmented Generation (LLM-RAG) Systems},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={28--33},
doi={10.22369/issn.2153-4136/17/1/4}
}
Artificial intelligence (AI) and generative large language models (LLMs) are key computational drivers. For researchers developing new tools or incorporating LLMs into their processing pipelines, the scale of data and models requires supercomputing resources, which can only be met through cloud or High Performance Computing (HPC) architectures. Many of these researchers have deep experience with AI, LLMs, and their research area but are new to HPC concepts, challenges, tools, and practices. To assist this researcher community, the Research Facilitation Teams at the MIT Office of Research Computing and Data (ORCD) and the MIT Lincoln Laboratory Supercomputing Center (LLSC) have developed tutorial materials to teach researchers how to build their own Retrieval Augmented Generation (RAG) workflows. Selecting RAG systems as the project focus provides motivation for developing a wide range of skills necessary for efficiently working with LLMs on an HPC system while creating a useful application. This work details LLM-RAG implementation concerns on two different systems, the design decisions associated with developing the examples, deployment of the workshop training, and the feedback received from the participants. Both the MIT ORCD and MIT LLSC systems are representative of HPC community systems, and we plan to refactor the in-person and live virtual workshops into a micro-course built from online, self-paced modules that will be reusable across other HPC centers with slight modifications.
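The retrieval step at the heart of any RAG workflow can be sketched without an embedding model or vector store: represent each chunk as a term-frequency vector, rank chunks by cosine similarity to the query, and paste the best match into the prompt. The toy "embedding", the example chunks, and the function names below are illustrative assumptions; the workshop's actual pipeline is not reproduced here.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Return the k document chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Submit batch jobs with sbatch and monitor them with squeue.",
    "Load compilers and libraries with the module load command.",
    "GPU nodes are requested with the --gres=gpu option.",
]
context = retrieve("how do I submit a batch job", chunks, k=1)
prompt = f"Answer using this context:\n{context[0]}\nQuestion: how do I submit a batch job?"
```

A production system would swap `embed` for a real embedding model and `chunks` for an indexed document store; the retrieve-then-prompt structure stays the same.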
pp. 34–41
@article{jocse-17-1-5,
author={Habiba Morsy and Essence Toone and Charlie Dey and Zilu Wang and Mary Thomas and David Joiner},
title={HPC-ED: Testing Automated Agents to Assess the Quality of Training Resource Metadata},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={34--41},
doi={10.22369/issn.2153-4136/17/1/5}
}
We present a proof-of-concept system for automating quality assurance in the HPC-ED federated training catalog using large language models (LLMs). The HPC-ED catalog system integrates metadata crawling, video transcript extraction, and model-based evaluation to score and provide recommendations on metadata quality at scale.
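Alongside an LLM-based evaluator of the kind the abstract describes, simple rule-based checks can score metadata completeness deterministically. The field names, weights, and thresholds below are invented for illustration and are not the HPC-ED schema.

```python
REQUIRED = ("title", "description", "url", "keywords")

def score_metadata(record):
    """Score a training-resource record in [0, 1] and collect
    one recommendation per failed check."""
    issues = []
    present = sum(1 for f in REQUIRED if record.get(f))
    for f in REQUIRED:
        if not record.get(f):
            issues.append(f"add a '{f}' field")
    # Flag descriptions that exist but are too short to be useful.
    if record.get("description") and len(record["description"]) < 40:
        issues.append("expand the description (under 40 characters)")
    score = present / len(REQUIRED)
    return score, issues

record = {"title": "Intro to MPI", "url": "https://example.org/mpi",
          "keywords": ["MPI", "parallel"]}
print(score_metadata(record))  # (0.75, ["add a 'description' field"])
```

Checks like these are cheap to run at crawl time; the model-based evaluation can then focus on the harder, semantic judgments of quality.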
pp. 42–49
@article{jocse-17-1-6,
author={Bryan Johnston and Nick Thorne and Matthew Cawood and Eugene de Beste and David Macleod and John Poole},
title={A Retrospective on South Africa's Student Cluster Competition and its Model for Inclusive HPC Outreach and Training (2012-2020)},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={42--49},
doi={10.22369/issn.2153-4136/17/1/6}
}
The Centre for High Performance Computing (CHPC) is South Africa's national supercomputing facility. In 2012, it launched an outreach initiative to raise awareness of High-Performance Computing (HPC) among undergraduate students through the creation of the Student Cluster Competition (SCC). A national contest was designed to train and showcase student talent in a spirited, hands-on environment. The initial stage of the CHPC SCC saw twenty teams of four undergraduate students undergo an intensive week of HPC training, covering Linux fundamentals, cluster design, and system administration. Finalists from this selection round would then compete in a live challenge using HPC systems of their own design, with the top competitors selected to represent the CHPC at the International Student Cluster Competition hosted at the ISC High Performance conference in Germany. From its inception, the CHPC SCC has prioritised demographic diversity and equal opportunity, actively recruiting students from historically disadvantaged communities to ensure inclusive participation and representation. A rapid teaching framework was developed to address key knowledge gaps in HPC system design, administration, and optimisation, empowering students with limited prior exposure to HPC to excel in the field. This approach has proven highly effective: South African teams ranked in the top three internationally for eight consecutive years, demonstrating the strength of the program. This paper presents the strategy and structure behind the CHPC SCC, detailing the training model, selection process, and evaluation methods used for both national and international rounds. It highlights how the initiative has evolved into a recognised platform for HPC education, enabling students to learn about HPC and become global contenders in the field.
pp. 50–56
@article{jocse-17-1-7,
author={Charlie Dey and Susan Lindsey},
title={Teaching AI Through Narrative Data: A Practical Framework for Data Science and Retrieval-Augmented Generation},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={50--56},
doi={10.22369/issn.2153-4136/17/1/7}
}
Artificial intelligence (AI) and machine learning (ML) education has traditionally been split between technical model-building and data literacy. While these skills are often taught separately, the emergence of large language models (LLMs) offers an opportunity to unify them through narrative-driven, human-readable data transformation. This approach enables learners to query structured data using natural language while still engaging deeply with the underlying analytical processes. We present a hands-on educational framework, debuted at the 2025 Big Data School in Costa Rica, that grounds AI learning in real-world data by transforming a single, richly structured dataset into narrative text that LLMs can ingest and reason over.
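The narrative transformation the framework describes amounts to rendering each structured record as a human-readable sentence that an LLM can ingest. A minimal sketch, with field names and sample values invented for illustration (not the Big Data School dataset):

```python
def to_narrative(row):
    """Render one structured observation as a natural-language sentence."""
    return (f"On {row['date']}, station {row['station']} recorded "
            f"a temperature of {row['temp_c']} degrees C and "
            f"{row['rain_mm']} mm of rain.")

rows = [
    {"date": "2025-03-01", "station": "SJO-1", "temp_c": 24.5, "rain_mm": 0.0},
    {"date": "2025-03-02", "station": "SJO-1", "temp_c": 22.1, "rain_mm": 12.4},
]
# Concatenated narrative text, ready to feed to an LLM or a RAG index.
corpus = "\n".join(to_narrative(r) for r in rows)
```

Because each sentence carries its own context (date, station, values), learners can then ask natural-language questions over the corpus while still seeing exactly how the underlying table was transformed.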
pp. 57–58
@article{jocse-17-1-8,
author={Cristina Carbunaru and Sriram Sami},
title={Enhancing HPC Curriculum through Competitions},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={57--58},
doi={10.22369/issn.2153-4136/17/1/8}
}
High Performance Computing (HPC) supports breakthroughs in artificial intelligence (AI), data-intensive science, and engineering. At the National University of Singapore (NUS), core parallelism concepts are currently taught through courses in Parallel Computing and Concurrent Programming, with additional domain-specific exposure in other courses. While these offerings build strong theoretical foundations, they leave a gap in systems-level competencies essential for deploying, optimizing, and scaling applications on real HPC infrastructure. We addressed this gap by initiating several projects meant to build system-level HPC skills. A main initiative is participation in HPC student cluster competitions, through which we integrated training in resource management, profiling, monitoring, containerized workflows, and distributed AI workloads for our selected students. This focus enables participants to bridge programming theory with operational expertise, preparing them to work effectively with clusters and heterogeneous architectures. Building on the interest around HPC competitions, the main computer science curriculum is being developed to include full-fledged HPC courses. We faced several challenges in this process, including a steep learning curve with complex systems, limited access to costly and shared cluster resources, and a shortage of instructors with up-to-date expertise. Pedagogically, bridging theory and large-scale practice is difficult, especially in the HPC context, where access to resources is remote. Therefore, sustainable curriculum development calls for a gradual expansion of teaching topics and resources, coupled with the integration of hands-on, competition-driven learning to maintain engagement.
Formal HPC training enhances students' readiness for careers in computational science, promotes cross-disciplinary collaboration, and equips graduates with the advanced skills essential for solving complex challenges in AI and data-intensive fields.
pp. 59–64
@article{jocse-17-1-9,
author={Aaron Jezghani and Jason Fry},
title={Experience and Outcomes Organizing a Hackathon in the Physical Sciences},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={59--64},
doi={10.22369/issn.2153-4136/17/1/9}
}
Despite its growing importance in the physical sciences, research computing with cluster resources remains difficult to access and sustain, especially in long-term, multi-institutional projects. Challenges include site-specific workflows, evolving software stacks, and rapid changes in hardware driven by the rise of generative AI. The Nab collaboration, conducting a precision test of the Standard Model at Oak Ridge National Laboratory, hosted a hackathon to address these issues. Over four half-days, 25 participants engaged in training and collaborative problem-solving across four priority areas, supported by mentors and structured sessions. Post-event surveys showed improved computational knowledge and strong interest in recurring events. This paper shares insights from organizing the hackathon and discusses scalable strategies for computational training in experimental research.
pp. 65–69
@article{jocse-17-1-10,
author={Yue Yu},
title={Building Scalable and Inclusive Foundations for HPC: Lessons from UC Merced's Introductory HPC Training Program},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={65--69},
doi={10.22369/issn.2153-4136/17/1/10}
}
High-performance computing (HPC) is becoming essential across a broad range of disciplines, including those historically underrepresented in computational research, such as sociology, psychology, and the arts. To reduce barriers to entry, the University of California, Merced (UC Merced) developed a 90-minute introductory HPC workshop designed for participants with no prior technical background. The workshop includes a theoretical overview of campus clusters, fundamental Linux commands, and core HPC concepts, followed by a hands-on session where participants connect through SSH and browser-based tools, load software modules, and submit jobs to institutional HPC resources using Slurm. Delivered in a hybrid format with both synchronous and asynchronous learning materials, the program has been offered in more than 20 sessions since 2021. Post-workshop surveys indicate that 83 percent of participants are more likely to incorporate HPC into their research after attending, contributing to a doubling of active HPC users on campus since the program's launch. This scalable and inclusive model provides an effective framework for expanding HPC adoption and fostering computational engagement across disciplines.
pp. 70–74
@article{jocse-17-1-11,
author={Nitin Sukhija and Shelley Knuth and Alana Romanella and Marisa Brazil},
title={Building Expertise, Connections, and Communities for Computational AI and HPC Training and Education: NAIRR Pilot User Experience Group Initiatives},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={70--74},
doi={10.22369/issn.2153-4136/17/1/11}
}
Given the rapidly changing computing landscape, propelled by innovations and the convergence of new cutting-edge technologies such as high-performance computing (HPC), AI, cybersecurity, quantum computing, and more, the need for upskilling and reskilling the workforce to mitigate skills gaps is becoming increasingly important. Whether one is a student, researcher, faculty or staff member, or another stakeholder in academia or industry who is part of this evolving digital ecosystem, continuous learning and adaptation of HPC and AI best practices, research, and technology is key to remaining competitive. Furthermore, a triumvirate of user expertise, connections, and communities is required to enable efficient integration of the HPC and AI ecosystem, offering key technologies for meeting performance requirements that push innovation to its limits in science, engineering, and other domains. To address the challenges involved in leveraging Artificial Intelligence (AI) along with computational, data, software, training, and educational resources for the U.S. research and education communities, the National Artificial Intelligence Research Resource (NAIRR) Pilot was launched in 2024. As part of this effort, the NAIRR Pilot User Experience Working Group (UEWG) has conducted various engagement initiatives, such as researcher showcases, pilot industry partner showcases, webinar series, regional workshops, and a national workshop on AI training. This paper presents a reproducible roadmap, based on the observations and results of these training and education efforts, that can be used to efficiently train the next-generation workforce in AI and HPC at all levels, thus bridging the talent gap and advancing secure and trustworthy AI in research and society.
pp. 75–78
@article{jocse-17-1-12,
author={Injila Rasul and Georgia Stuart},
title={Investigating User Attitudes Towards and Benefits from Integrating AI Assistants into Research Computing Support},
journal={The Journal of Computational Science Education},
year=2026,
month=mar,
volume=17,
number=1,
pages={75--78},
doi={10.22369/issn.2153-4136/17/1/12}
}
High-performance computing clusters hosted by universities for research computing are an essential part of the ongoing teaching, learning, and research at these institutions. Users must understand myriad scientific, mathematical, and computing concepts, and they bring a range of experience and comfort with these platforms, requiring regular support as they engage with them for their research. To assist users of the Unity Research Computing Platform, the support team provides a Facilitation Slack channel where users can get help, find relevant documentation, learn new information, and troubleshoot, which requires significant staff time and funding. This study explores the design and implementation of an AI assistant chatbot that augments existing support under HPC Facilitator oversight. We investigate the efficacy of AI assistants in extending the productivity and impact of research computing personnel while maintaining a high degree of direct contact with users. We discuss the Human-Centered AI Design and testing process and its significance for large-scale interventions.
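The oversight pattern the study describes, an assistant that answers routine questions itself but keeps humans in the loop, is often implemented as confidence-gated routing: the bot replies only when its retrieval or answer confidence clears a cutoff, and otherwise hands the question to a facilitator. The function name and the 0.75 cutoff below are assumptions for illustration, not details from the paper.

```python
def route_question(question, draft_answer, confidence, cutoff=0.75):
    """Return (handler, text): the bot's answer when confidence is
    high, otherwise an escalation to a human facilitator."""
    if confidence >= cutoff:
        return "assistant", draft_answer
    return "facilitator", f"Escalated to staff: {question}"

# A routine question the bot can answer, and a harder one it escalates.
print(route_question("How do I load gcc?", "Use `module load gcc`.", 0.92))
print(route_question("Why did my MPI job deadlock?", "(low-confidence draft)", 0.40))
```

The cutoff becomes a tunable knob for balancing staff workload against the risk of the bot answering questions it should not.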