Date of Award

6-1-1999

Document Type

Thesis (Undergraduate)

Department

Department of Computer Science

First Advisor

David Nicol

Abstract

The Dartmouth implementation of the Scalable Simulation Framework (DaSSF) is a discrete-event simulator used primarily for network simulation. It achieves high performance through parallel processing. DaSSF 1.22 requires shared memory among all processors to operate, which limits both the number of processors available and the hardware platforms that can exploit parallelism. We are interested in extending parallel DaSSF operation to architectures without shared memory. We explore these requirements by implementing parallel DaSSF with MPI as the sole form of interaction between processors. The approaches used here can be abstracted and applied to the current version of DaSSF. This would allow parallel simulation using shared memory by processors within a single machine, and also, at a higher level, between separate machines using distributed memory.
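The abstract's core idea is replacing shared-memory interaction with explicit message passing. As a loose, hypothetical illustration only (this is not code from the thesis, and it uses Python's `multiprocessing` pipes in place of MPI send/receive so it runs self-contained), two processes below exchange timestamped simulation events purely by messages, with no shared state:

```python
from multiprocessing import Process, Pipe

def node(rank, conn, events):
    """One 'processor': send local events as messages, then collect remote ones.

    Stands in for an MPI rank doing MPI_Send/MPI_Recv; the pipe is the
    only channel between processes, mirroring a distributed-memory setup.
    """
    for t, payload in events:
        conn.send((t, rank, payload))   # explicit message, no shared memory
    conn.send(None)                     # end-of-stream marker
    received = []
    while True:
        msg = conn.recv()
        if msg is None:
            break
        received.append(msg)
    conn.close()
    return received

def run_pair():
    a, b = Pipe()
    # rank 1 runs in a child process with its own address space
    p = Process(target=node, args=(1, b, [(5, "pkt"), (9, "ack")]))
    p.start()
    # rank 0 runs in the parent so we can inspect what it received
    got = node(0, a, [(2, "pkt"), (7, "ack")])
    p.join()
    # merge the remote events into timestamp order, as an event list would
    return sorted(got)

if __name__ == "__main__":
    print(run_pair())
```

In a real MPI port, the pipe send/receive pairs would become `MPI_Send`/`MPI_Recv` (or their nonblocking variants), and each rank would merge incoming remote events into its local event queue in timestamp order.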

Comments

Originally posted in the Dartmouth College Computer Science Technical Report Series, number PCS-TR99-346.
