
Running Programs

For the purposes of this discussion, it is assumed that if you are running programs Off-Site, a Bmad Distribution has already been built. If not, please consult the instructions for building a Distribution.

Environment Setup

The first thing to do is to check that your environment has been properly set up. To do this, use the command:
accinfo

This command should display a summary of the active Distribution: its build architecture, selected Fortran compiler, and environment configuration. If not, instructions for setting up the environment for a Distribution are given here. On-Site environment setup is given here.
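For Off-Site work, environment setup generally amounts to sourcing a setup script from the Distribution root before running accinfo. A minimal sketch, assuming a typical Distribution layout (the path and script name here are placeholders; consult the setup instructions for your Distribution):

cd /path/to/bmad_dist          # your Distribution root (placeholder path)
source util/dist_source_me     # sets $ACC_ROOT_DIR, $ACC_EXE, etc.
accinfo                        # verify the environment is active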

List Programs

To get a list of the Bmad programs that have been built use the command:
ls $ACC_EXE

[If On-Site you will also see other CESR related programs.]

Note: If you have built your own custom program, it will not be in the $ACC_EXE directory (unless you did your program build from within the Distribution tree, which is not recommended). See the build system documentation for information on building your own Bmad-based programs.
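As a rough sketch of an out-of-tree build (this assumes your Distribution's ACC build system provides the mk command and puts executables in a production/bin subdirectory; details may differ, so see the build system documentation):

cd ~/my_programs/my_program    # program directory outside the Distribution tree (placeholder)
mk                             # production build using the ACC build system
ls production/bin              # the built executable should appear here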

Run a Program

The $ACC_EXE directory is in the system path, so you can run any program in that directory simply by typing its name. As a test, you can run the Tao program:
tao -lat $ACC_ROOT_DIR/tao/examples/cesr/bmad_L9A18A000-_MOVEREC.lat 

Assuming there is no tao.init file in your working directory, running Tao should result in a plot window popping up that looks like:

[Image: tao.jpg — the Tao plot window]
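If you are running without a display (for example, over ssh without X forwarding), Tao can be started with plotting disabled. A sketch, assuming your Tao build supports the -noplot option:

tao -noplot -lat $ACC_ROOT_DIR/tao/examples/cesr/bmad_L9A18A000-_MOVEREC.lat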

Run an MPI-Based Program

Some Bmad-based programs use MPI (Message Passing Interface) for parallel running. To create MPI programs when using an Off-Site Distribution, the Distribution needs to have been built with MPI compiling turned on (specifically, the ACC_ENABLE_MPI variable in the file util/dist_prefs must be set to "Y"). For On-Site work, the program will need to be locally built since Releases are not built with MPI enabled (see a local Guru for details).

Running MPI-based programs is more complicated due to the coordination required between processes. At Cornell, the Compute Farm is available for parallel processing (as well as standard single-threaded running). See the Compute Farm documentation for more details. For Off-Site work, see your local computer Guru for details about running an MPI job. If you just want to run on a single machine using multiple cores, the standard mpirun launcher can be used (the web is a good source of information on mpirun).
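For example, to launch a program on 4 cores of a single machine (my_mpi_program here is a placeholder for your own MPI-enabled executable):

mpirun -np 4 my_mpi_program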