Rose-Hulman Institute of Technology Parallel Computing
Cluster Computing

Cluster computing connects a number of commodity PCs (called nodes) by means of a standard LAN. There is one head node and a number of slave nodes. Jobs are submitted to the cluster through the head node. A typical problem is broken up into a series of sub-problems, which the head node assigns to the slave nodes. If necessary, the nodes communicate intermediate results to one another via the Message Passing Interface (MPI). At the end of the computation the results are integrated and returned to the head node. A critical problem-solving step is to parallelize the calculation, i.e., to break up the problem so that the individual pieces can run simultaneously on different nodes, with some exchange of data between the nodes. If there is too much internode communication, or too much waiting on results from other nodes, then the parallelization doesn't really help. Thus the parallel design step may require some deep thought.
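To make the division of labor concrete, here is a minimal MPI sketch in C. It is an illustration only, not specific to brain's configuration; it assumes a standard MPI installation in which programs are compiled with mpicc and launched with mpirun. Each node sums its own slice of the range 1..N, and a single MPI_Reduce call combines the partial sums on the head node (rank 0).

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        const long N = 10000;            /* size of the whole problem */
        long i, local_sum = 0, total_sum = 0;
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this node's id  */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of nodes */

        /* Each node sums its own slice of 1..N. */
        for (i = rank + 1; i <= N; i += size)
            local_sum += i;

        /* Combine the partial sums on the head node (rank 0). */
        MPI_Reduce(&local_sum, &total_sum, 1, MPI_LONG, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of 1..%ld = %ld\n", N, total_sum);

        MPI_Finalize();
        return 0;
    }

Launched with, e.g., mpirun -np 4, the slices of the loop run simultaneously on four nodes, and the only internode communication is the single reduction at the end, which is exactly the pattern a well-parallelized calculation should aim for.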
The cluster brain.rose-hulman.edu is an academic computing resource for faculty, students, and staff, primarily supporting teaching, research, and project work that uses parallel computing methods. The facility is shared among all academic departments and is hosted and supported by the Computing Center. The cluster was purchased in the summer of 2001 and was installed and put online during the fall of 2001. Dean Western has appointed a committee, the Parallel Computing Steering Committee, to establish usage policy and to promote use of the facility by forming an inter-departmental users group. The cluster was funded by Academic Affairs, with supplemental funding from IAIT (the Computing Center) and the Mathematics Department.
OpenMP Computing

The OpenMP version of parallel computing takes programs written in C or Fortran for a single-CPU computer and tweaks the iterative parts of the program to take full advantage of multiple processors on a single machine. The parallelization is effected by smart compiling, in which the work in the for loops and do loops is spread out over all the processors on the machine. For this to work, all the CPUs must share the same memory, and the amount of memory should be fairly large. The advantage of OpenMP is that the parallelization does not have to be built into the design of the program; once the loops are marked with OpenMP directives, the compiler takes care of it. The same program can be run on a single-processor machine simply by compiling with the OpenMP directives turned off. OpenMP programs can be run either on a single node of the cluster (two CPUs) or on one of the Sun compute servers with four CPUs.
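As an illustration, here is a minimal OpenMP sketch in C (a generic example, not a program installed on the cluster). The single pragma line tells an OpenMP-aware compiler to spread the loop over all available CPUs; compiled without OpenMP support, the pragma is ignored and the same code runs serially on one processor.

    #include <stdio.h>
    #ifdef _OPENMP
    #include <omp.h>      /* only needed when compiled with OpenMP */
    #endif

    int main(void)
    {
        const int N = 1000000;
        double sum = 0.0;
        int i;

    #ifdef _OPENMP
        printf("running in parallel on %d processors\n",
               omp_get_num_procs());
    #endif

        /* An OpenMP-aware compiler spreads the work of this loop over
           all the CPUs that share the machine's memory; the
           reduction(+:sum) clause safely combines each thread's
           partial sum at the end. */
    #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < N; i++)
            sum += 1.0 / (i + 1.0);

        printf("harmonic sum = %f\n", sum);
        return 0;
    }

The same source file compiles serially with an ordinary C compiler and in parallel with an OpenMP-capable one; the exact compiler flag that enables OpenMP varies by vendor.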
The two compute servers abacus and sliderule were purchased by IAIT to support academic computing as well as FTP services. OpenMP-capable compilers have not yet been installed on them.
Send questions and comments to: email@example.com