
ACC's Computing Resources

This page summarizes ACC's centralized computing resources at CLASSE.

If you need to contact someone for help, please see the Getting Help section.

Operating Systems and Architectures

On Linux, uname -m will tell you the architecture, and cat /etc/redhat-release will tell you the operating system.
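
For example, on a typical SL6.5 node the two commands report something like the following (the output shown here is illustrative):

    $ uname -m
    x86_64
    $ cat /etc/redhat-release
    Scientific Linux release 6.5 (Carbon)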

Network and System Layout

italics = gigabit

Public

Servers and Interactive Nodes

| Node name | network connection | function | details | Location |
| accserv | | 64-bit SL6.5 Repository & Libraries server. Do not run processing jobs on this machine! | | W221 |
| lnx4200 | | BMAD development | | EM107 |
| lnx201 | | general use login node | use for daily work, and to launch interactive and batch jobs in the Compute Farm | |
| lnx498 (ilc201) | hps76 port 8 | interactive compute node | | LEPP Trailer Computer Room |
| lnx4124 (ilc202) | hps54 port 7 | interactive compute node | | LEPP Trailer Computer Room |
| lnx6153 (ilc203) | hps54 port 19 | interactive compute node | | LEPP Trailer Computer Room |
| lnx6166 (cesrbuild) | hps16 port C2 | linux build system | | LEPP Trailer Computer Room |
| accfs | | Primary Fileserver, Experimental Disk Backup | /nfs/acc/user, /nfs/acc/temp, /nfs/acc/srf, /nfs/ac/cesrtacam, /nfs/cesrta | CLASSE Cluster in W221 |
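
To check that the accfs-served areas listed above are visible from your node, df can be pointed at the mount points; this is just a quick sketch, and the filesystem column in the output will show the actual exporting host:

    $ df -h /nfs/acc/user /nfs/acc/temp    # report size, usage, and serving filesystem for each area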

Farm (172.16.)

| Node name | network connection | function | details |
| lnx1625 | hps50-f1 port | NOT part of linux compute farm | has AMD CPUs, which produce different results from Intel CPUs |
| lnx1626 | hps50-f1 port B23 | part of linux compute farm | general purpose CESR build and processing machine |
| lnx301-lnx305 and lnx313-lnx324 | hps50-f1 | part of linux compute farm | CESR members may run up to 2 interactive jobs on each of these machines. Please use the top program to make sure that no more than 2 CESR jobs are running (see the example after this table). These machines are rather isolated from the lnx209 server, so linking and I/O-intensive use of them is discouraged. |
| ilc3[01-11] | hps50-f1 | part of linux compute farm | |
| ilcsim (lnx11a) | hps50-f1 port C14 | simulation disk server | /nfs/ilc/sim[1-3] |
| ilc204 (lnx16511) | hps50-f1 | interactive compute node | |
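
To honor the two-interactive-job limit noted above for lnx301-lnx305 and lnx313-lnx324, a single batch-mode snapshot from top is enough to see what is already running on a machine (this is just a sketch; interactive top works equally well):

    $ top -b -n 1 | head -n 20    # one snapshot of the busiest processes on this node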

A number of other Linux nodes are available via the GridEngine batch queues for running production jobs.
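
A minimal GridEngine sketch for submitting and monitoring work from lnx201 (the script name myjob.sh is a placeholder, and any site-specific queue options are omitted):

    $ qsub myjob.sh     # submit a batch job to the default queue
    $ qstat -u $USER    # list the state of your pending and running jobs
    $ qrsh              # request an interactive shell on a farm node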

CESR Private (192.168.1.)

The CESR Linux Control System Cluster

The CESR Linux Control System cluster provides clustered file systems and high-availability services using the Red Hat Cluster Suite. All members have connections to both the CESR Private and CESR BPM subnets. See CesrHAC for more information.
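
On a cluster member, the standard Red Hat Cluster Suite status command summarizes node and service state; this assumes the stock RHCS tools are installed, so defer to CesrHAC for the site-specific procedures:

    $ clustat    # list cluster members, quorum state, and high-availability services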

Servers and Interactive Nodes

| Node name | network connection | function | details | Location |

Linux Consoles

All linux consoles are running SL6.5. They auto-login to local user accounts and are only used as displays (windows appear from remote systems). They are also known as the "Grinnel replacement". In general, they are not used with a mouse or keyboard.
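
Because the consoles act purely as X displays, a remote system can push a window to one by setting DISPLAY; the console name and display number below are illustrative, and X access controls must allow the connection:

    $ DISPLAY=cesr181:0 xterm &    # open an xterm on the cesr181 console display
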
| Node name | network connection | function | details |
| cesr18[1-4], cesr186 | hps63-p1 | Control Console | |
| cesr185 | hps45-p1 port 17 | Control Console | |


Software Versions

The primary Linux server and desktop operating system in use in the lab is Scientific Linux (SL), which is assembled jointly by Fermilab and CERN as a standardized experimental environment. It is based on Red Hat Enterprise Linux. CLASSE currently deploys SL6.5 unless otherwise noted.

VMS/Linux Shared Disk

Backups

See BackupSchedule for a list of the Linux, DEC Alpha (Unix), VMS, and Solaris systems we currently back up.

See WindowsBackupSchedule for a list of the Windows systems we currently back up.


Please contact the CESR Code Librarian with questions or problems about this page.