Volume 3 Issue 2 — December 2012

Contents

Cyber Collaboratory-based Sustainable Design Education: A Pedagogical Framework

Kyoung-Yun Kim, Karl R. Haapala, Gül E. Okudan Kremer, and Michael K. Barbour

pp. 2–10

https://doi.org/10.22369/issn.2153-4136/3/2/1

BibTeX
@article{jocse-3-2-1,
  author={Kyoung-Yun Kim and Karl R. Haapala and G\"{u}l E. Okudan Kremer and Michael K. Barbour},
  title={Cyber Collaboratory-based Sustainable Design Education: A Pedagogical Framework},
  journal={The Journal of Computational Science Education},
  year=2012,
  month=dec,
  volume={3},
  number={2},
  pages={2--10},
  doi={10.22369/issn.2153-4136/3/2/1}
}

Educators from across the educational spectrum face challenges in delivering curricula that address sustainability issues. This article introduces a cyber-based interactive e-learning platform, the Sustainable Product Development Collaboratory, which addresses this need. The collaboratory aims to educate a wide spectrum of learners in the concepts of sustainable design and manufacturing by demonstrating the effects of product design on supply chain costs and environmental impacts. In this paper, we discuss the overall conceptual framework of the collaboratory, along with the pedagogical and instructional methodologies behind collaboratory-based sustainable design education. Finally, a sample learning module is presented, along with methods for assessing student learning and experiences with the collaboratory.

A Hands-on Education Program on Cyber Physical Systems for High School Students

Vijay Gadepally, Ashok Krishnamurthy, and Umit Ozguner

pp. 11–17

https://doi.org/10.22369/issn.2153-4136/3/2/2

BibTeX
@article{jocse-3-2-2,
  author={Vijay Gadepally and Ashok Krishnamurthy and Umit Ozguner},
  title={A Hands-on Education Program on Cyber Physical Systems for High School Students},
  journal={The Journal of Computational Science Education},
  year=2012,
  month=dec,
  volume={3},
  number={2},
  pages={11--17},
  doi={10.22369/issn.2153-4136/3/2/2}
}

Cyber Physical Systems (CPS) are the conjoining of an entity's physical and computational elements. The development of a typical CPS follows a sequence from conceptual modeling, to testing in simulated (virtual) worlds, to testing in controlled (possibly laboratory) environments, and finally to deployment. Throughout each (repeatable) stage, the behavior of the physical entities, the sensing and situation assessment, and the computation and control options have to be understood and carefully represented through abstraction. The CPS Group at the Ohio State University, as part of an NSF-funded CPS project on "Autonomous Driving in Mixed Environments", has been developing CPS-related educational activities at the K-12, undergraduate, and graduate levels. These activities aim to train students in the principles and design issues of CPS and to broaden participation in science and engineering. The project team has a strong commitment to impacting STEM education across the entire K-20 community. In this paper, we focus on the K-12 community and present a two-week summer program for high school juniors and seniors that introduces them to the principles of CPS design and walks them through several of the design steps. We also provide an online repository that aids CPS researchers in offering a similar educational experience.

Using Supercomputing to Conduct Virtual Screen as Part of the Drug Discovery Process in a Medicinal Chemistry Course

David Toth and Jimmy Franco

pp. 18–25

https://doi.org/10.22369/issn.2153-4136/3/2/3

BibTeX
@article{jocse-3-2-3,
  author={David Toth and Jimmy Franco},
  title={Using Supercomputing to Conduct Virtual Screen as Part of the Drug Discovery Process in a Medicinal Chemistry Course},
  journal={The Journal of Computational Science Education},
  year=2012,
  month=dec,
  volume={3},
  number={2},
  pages={18--25},
  doi={10.22369/issn.2153-4136/3/2/3}
}

The ever-increasing amount of computational power available has made it possible to use docking programs to screen large numbers of compounds for molecules that inhibit proteins. This technique can be used not only by pharmaceutical companies with large research and development budgets and by large research universities, but also at small liberal arts colleges with no special computing equipment beyond the desktop PCs in any campus's computer laboratory. However, despite the significant quantities of compute time available to small colleges for conducting these virtual screens, such as supercomputing time available through grants, we are unaware of any small colleges that do this. We describe the experiences of an interdisciplinary research collaboration between faculty in the Chemistry and Computer Science Departments in a chemistry course where chemistry and biology students were shown how to conduct virtual screens. The project began when the authors, who had been collaborating on drug discovery research using virtual screening, realized that the virtual screening process they were using in their research could be adapted to fit into a couple of lab periods and would complement one instructor's course on medicinal chemistry. The resulting labs introduce students to the virtual screening portion of the drug discovery process.
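The core of the screening workflow the abstract describes is ranking a compound library by predicted binding affinity. The sketch below shows only that post-docking ranking step, assuming scores have already been produced by a docking program; the compound identifiers and scores are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch of the post-docking step of a virtual screen:
# given predicted binding energies (kcal/mol, more negative = better),
# rank the compound library and keep the top candidates.
# Compound names and scores are hypothetical placeholders.

def rank_hits(scores, top_n=3):
    """Return the top_n compounds sorted by ascending docking score."""
    return sorted(scores.items(), key=lambda kv: kv[1])[:top_n]

if __name__ == "__main__":
    docking_scores = {
        "ZINC00001": -7.2,
        "ZINC00002": -5.9,
        "ZINC00003": -8.4,
        "ZINC00004": -6.1,
        "ZINC00005": -9.0,
    }
    for name, score in rank_hits(docking_scores):
        print(f"{name}: {score} kcal/mol")
```

In a real screen the dictionary would be populated by parsing the output of a docking run over thousands of library compounds; the ranking logic itself stays this simple.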

Metadata Management in Scientific Computing

Eric L. Seidel

pp. 26–33

https://doi.org/10.22369/issn.2153-4136/3/2/4

BibTeX
@article{jocse-3-2-4,
  author={Eric L. Seidel},
  title={Metadata Management in Scientific Computing},
  journal={The Journal of Computational Science Education},
  year=2012,
  month=dec,
  volume={3},
  number={2},
  pages={26--33},
  doi={10.22369/issn.2153-4136/3/2/4}
}

Complex scientific codes and the datasets they generate are in need of a sophisticated categorization environment that allows the community to store, search, and enhance metadata in an open, dynamic system. Currently, data is often presented in a read-only format, distilled and curated by a select group of researchers. We envision a more open and dynamic system, where authors can publish their data in a writeable format, allowing users to annotate the datasets with their own comments and data. This would enable the scientific community to collaborate on a higher level than before; researchers could, for example, annotate a published dataset with their citations. Such a system would require a complete set of permissions to ensure that an individual's data cannot be altered by others unless they specifically allow it. For this reason datasets and codes are generally presented read-only, to protect the author's data; however, this also prevents the kind of social revolutions that the private sector has seen with Facebook and Twitter. In this paper, we present an alternative method of publishing codes and datasets, based on Fluidinfo, an openly writeable and social metadata engine. We use the specific example of the Einstein Toolkit, a part of the Cactus Framework, to illustrate how the code's metadata may be published in writeable form via Fluidinfo.
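The model the abstract describes — any user may annotate any object, but only within a namespace they own — can be illustrated with a toy store. The class and method names below are invented for illustration and are not Fluidinfo's actual API.

```python
# Toy sketch of an openly writeable metadata store with per-user
# namespaces, loosely in the spirit of the tag-based model the
# abstract describes. Names and API are hypothetical, not Fluidinfo's.

class MetadataStore:
    def __init__(self):
        # (object_id, user, tag_name) -> value
        self._tags = {}

    def tag(self, user, object_id, tag_name, value):
        """Any user may tag any object, but only in their own namespace,
        so one user's annotations can never overwrite another's."""
        self._tags[(object_id, user, tag_name)] = value

    def get(self, object_id, user, tag_name):
        """Look up one user's value for a tag on an object."""
        return self._tags.get((object_id, user, tag_name))

    def annotations(self, object_id):
        """All tags on an object, across every user's namespace."""
        return {
            (user, name): value
            for (oid, user, name), value in self._tags.items()
            if oid == object_id
        }

if __name__ == "__main__":
    store = MetadataStore()
    store.tag("author", "einstein-toolkit", "version", "2012.05")
    store.tag("reader1", "einstein-toolkit", "citation", "doi:10.1000/xyz")
    print(store.annotations("einstein-toolkit"))
```

Because the user is part of every key, the permission property falls out of the data model itself: readers enrich the object's metadata without ever touching the author's fields.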

Bringing ab initio Electronic Structure Calculations to the Nano Scale through High Performance Computing

James Currie, Rachel Cramm Horn, and Paul Rulis

pp. 34–40

https://doi.org/10.22369/issn.2153-4136/3/2/5

BibTeX
@article{jocse-3-2-5,
  author={James Currie and Rachel Cramm Horn and Paul Rulis},
  title={Bringing ab initio Electronic Structure Calculations to the Nano Scale through High Performance Computing},
  journal={The Journal of Computational Science Education},
  year=2012,
  month=dec,
  volume={3},
  number={2},
  pages={34--40},
  doi={10.22369/issn.2153-4136/3/2/5}
}

The Orthogonalized Linear Combination of Atomic Orbitals (OLCAO) method is an ab initio, density-functional-theory-based method with a long history of handling large, complex systems. However, it does not operate in parallel, and, while the program is empirically observed to be fast, many components of its source code have not been analyzed for efficiency. This paper describes the beginnings of a concerted effort to modernize, parallelize, and functionally extend the OLCAO program so that it can be better applied to the complex and challenging problems of materials design. Specifically, profiling data were collected and analyzed using the popular performance monitoring tools TAU and PAPI, as well as standard UNIX time commands. Each of the major components of the program was studied so that parallel algorithms that either modify or replace the serial algorithm could be suggested. The program was run with a collection of different input parameters to observe trends in compute time. Additionally, the algorithm for computing interatomic interaction integrals was restructured and its performance measured. The results indicate that a fair degree of speed-up of even the serial version of the program could be achieved rather easily, but that implementing a parallel version will require more substantial consideration.
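The coarse, whole-component timing the abstract mentions (alongside TAU and PAPI) can be sketched with simple wall-clock measurement. The component names below are illustrative stand-ins, not OLCAO's actual stages.

```python
# Sketch of coarse per-component wall-clock profiling, the simplest
# form of the measurements the abstract describes (UNIX time-style).
# Component names and workloads are illustrative stand-ins.
import time

def profile(components):
    """Run each (name, callable) pair and record its wall-clock time."""
    timings = {}
    for name, fn in components:
        start = time.perf_counter()
        fn()
        timings[name] = time.perf_counter() - start
    return timings

if __name__ == "__main__":
    work = [
        ("setup",     lambda: sum(range(100_000))),
        ("integrals", lambda: sum(i * i for i in range(200_000))),
        ("solve",     lambda: sorted(range(50_000), reverse=True)),
    ]
    # Report components from slowest to fastest, as a profiler would.
    for name, secs in sorted(profile(work).items(), key=lambda kv: -kv[1]):
        print(f"{name:10s} {secs:.4f} s")
```

Tools like TAU and PAPI refine this idea with per-function instrumentation and hardware counters, but the ranking of hotspots is the same first step toward deciding which serial components are worth parallelizing.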

A Performance Comparison of a Naïve Algorithm to Solve the Party Problem using GPUs

Michael V.E. Bryant and David Toth

pp. 41–48

https://doi.org/10.22369/issn.2153-4136/3/2/6

BibTeX
@article{jocse-3-2-6,
  author={Michael V.E. Bryant and David Toth},
  title={A Performance Comparison of a Na\"{i}ve Algorithm to Solve the Party Problem using GPUs},
  journal={The Journal of Computational Science Education},
  year=2012,
  month=dec,
  volume={3},
  number={2},
  pages={41--48},
  doi={10.22369/issn.2153-4136/3/2/6}
}

The R(m, n) instance of the party problem asks how many people must attend a party to guarantee that there is either a group of m people who all know each other or a group of n people who are all complete strangers. GPUs have been shown to significantly decrease the running time of some mathematical and scientific applications with embarrassingly parallel portions. A brute-force algorithm to solve the R(5, 5) instance of the party problem can be parallelized across a number of processing cores many orders of magnitude greater than the number of cores in the fastest supercomputer today. We therefore believed that this currently unsolved problem is so computationally intensive that GPUs could significantly reduce the time needed to solve it. In this work, we compare the running time of a naïve algorithm, intended to help make progress toward solving the R(5, 5) instance of the party problem, on a CPU and on five different GPUs ranging from low-end consumer cards to a high-end GPU. Using just the GPUs' computational capabilities, we observed speedups ranging from 1.9 to over 21 relative to our quad-core CPU system.
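The brute-force approach behind the paper can be illustrated on the small, long-solved instance R(3, 3) = 6: model acquaintance as a 2-coloring of the edges of a complete graph and exhaustively check every coloring for a monochromatic clique. This is a sketch of the general technique only; the paper's R(5, 5) search applies the same idea at a scale far beyond exhaustive serial enumeration.

```python
# Brute-force check of the party problem for the solved instance
# R(3, 3) = 6: every 2-coloring of the edges of K6 contains a
# monochromatic triangle, while K5 admits a coloring with none.
# The R(5, 5) search in the paper is this same idea at vastly
# larger scale, which is what makes it embarrassingly parallel.
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j) with i < j to color 0 or 1."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def forces_triangle(n):
    """True if every 2-coloring of K_n has a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product((0, 1), repeat=len(edges))
    )

if __name__ == "__main__":
    print(forces_triangle(5))  # False: K5 can avoid a mono triangle
    print(forces_triangle(6))  # True:  so R(3, 3) = 6
```

K6 has only 2^15 = 32,768 colorings, so this finishes quickly; K_n for the R(5, 5) question has 2^(n(n-1)/2) colorings, which is why the search must be distributed across many cores and checked clique-by-clique rather than enumerated like this.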