ACC's Computing Resources

This page summarizes ACC's centralized computing resources at CLASSE.

If you need to contact someone for help, please see the Getting Help section.

Operating Systems and Architectures

On Linux, uname -m reports the machine architecture, and cat /etc/redhat-release reports the operating system release.
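For example, both checks can be combined in one short script (output is illustrative; a 64-bit SL6.5 node would report x86_64 and a Scientific Linux release string):

```shell
# Query the hardware architecture and OS release of a Linux node.
arch=$(uname -m)    # e.g. x86_64 (64-bit) or i686 (32-bit)
os=$(cat /etc/redhat-release 2>/dev/null \
      || echo "no /etc/redhat-release (not a Red Hat-derived system)")
echo "architecture: $arch"
echo "os release:   $os"
```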

Network and System Layout

italics = gigabit network connection

Public

Servers and Interactive Nodes

| Node name | Network connection | Function | Details | Location |
| acc101 (accserv) | | 64-bit SL6.5 Repository & Libraries. Use this machine ONLY to build and link; do not run processing jobs on it! | /nfs/acc/libs (400G), https://accserv.classe.cornell.edu/svn/ (400G) | W221 |
| lnx209 | s7i01 port 4 | 32-bit SL4 build machine | | LEPP Trailer Computer Room, directly to your right as you enter from the Computer Group "central area" |
| lnx113 (accfs) | | Primary fileserver; experimental disk backup | /nfs/acc/user (500G), /nfs/acc/temp (2T), /nfs/acc/srf, /nfs/ac/cesrtacam, /nfs/cesrta, /mnt/auto/duple (1TB), /mnt/auto/delta (1.5TB) (SL3) | CESR rack in W221 |
| lnx106 | s7i01 port 26 | Disk server | /nfs/grp/cesr (100G) - Opera (SL3) | LEPP Trailer Computer Room, disk server rack |
| erd101 (erlbuild) | | Primary ERL development machine | /nfs/erl/erd | W221 |
| lnx156 | | Home disk server | /home | LEPP Trailer Computer Room |
| lnx248 (webdb) | | Public web and database server | ELOG, CESRTACAM | LEPP Trailer Computer Room, to your right as you enter from the Computer Group "central area" |
| lnx6166 (cesrbuild1) | hps16 port C2 | Linux build system | | LEPP Trailer Computer Room |
| lnx498 (ilc201) | hps76 port 8 | Interactive compute node | | LEPP Trailer Computer Room |
| lnx4124 (ilc202) | hps54 port 7 | Interactive compute node | | LEPP Trailer Computer Room |
| lnx6153 (ilc203) | hps54 port 19 | Interactive compute node | | LEPP Trailer Computer Room |
| pc6115 (cesrbuild3) | hps02 port A2 | Windows build system | | |
| ilcsim (lnx11a) | s7i01 port 2 | Simulation disk server | /nfs/ilc/sim[1-3] | LEPP Trailer Computer Room, disk server rack |
| lnx280c (cesrweb) | s48t port 15 | CESR Web and Database Server | /nfs/cesr/glassfish, /nfs/elog, /nfs/cesr/archive, /nfs/cesr/web | W221 |

CESR OSF1 Unix (HP Tru64) Computers

ACC has several DEC Alpha computers running Tru64 Unix (originally named OSF/1), which are used as file and compute servers.

When building CESR codes on OSF, take care to build only on the CESR OSF machines, which are named with the cesr prefix (e.g., cesr38). The reason is that CESR codes use a more recent version of the HP compilers than is installed on the general LEPP OSF machines (which are all named with the lns prefix). OSF machines to build on include:
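Since only the cesr-prefixed nodes carry the newer compilers, a build script can guard against running on the wrong machine with a simple hostname check. This is a sketch, not the site's actual build wrapper:

```shell
# Refuse to build CESR codes unless the hostname starts with "cesr".
host=$(hostname -s 2>/dev/null || hostname)
case "$host" in
  cesr*) echo "OK to build CESR codes on $host" ;;
  *)     echo "do NOT build CESR codes on $host (not a cesr* node)" ;;
esac
```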
| Node Name | Alias | Type | CPU Speed | Primary connection |
| cesr34 | | Digital Personal Workstation (DPW 500au) | 500 MHz | hps56 port 7 |
| cesr35 | | AlphaStation (AS 500/333) | 333 MHz | hps12 port 16 |
| cesr38 | | Digital Personal Workstation (DPW 500au) | 500 MHz | hps56 port 1 |
| cesr63 | | Digital Personal Workstation | 600 MHz | |
| cesr64 | | Digital Personal Workstation | 600 MHz | |
| cesr65 | | Digital Personal Workstation (DPW 433au) | 433 MHz | hps04 port A7 |
| cesr66 | | DS20E (dual processor) (DS20E 833) | 833 MHz | hps56 port 15 |
| cesr68 | cesrbuild2 | DS20E (dual processor) (DS20E 833) | 833 MHz | hps56 port 22 |
| cesr71 | | DS10E (DS10L 617) | 617 MHz | hps56 port 3 |
| cesr72 | | DS10E (DS10L 617) | 617 MHz | hps56 port 4 |
| cesr73 | | DS10E (DS10L 617) | 617 MHz | hps56 port 5 |

Desktop Systems

Current desktop systems are deployed with at least one hyperthreaded 3 GHz Core i5 processor and 2 GB of memory. Linux desktops use 8 GB of swap.

Farm (172.16.)

| Node name | Network connection | Function | Details |
| sol105 | hps50-f1 port F18 | Home disk server | /home |
| lnx209 (accserv) | hps50-f1 port B16 | Repository & Libraries. Use this machine ONLY to build and link; do not run processing jobs on it! | /nfs/acc/libs (400G), https://accserv.classe.cornell.edu/svn/ (400G) |
| lnx113 (accfs) | hps50-f1 port F8 | Primary fileserver; experimental disk backup | /nfs/acc/user (500G), /nfs/acc/temp (2T), /nfs/acc/srf, /nfs/ac/cesrtacam, /nfs/cesrta, /mnt/auto/duple (1TB), /mnt/auto/delta (1.5TB) (SL3) |
| lnx106 | hps50-f1 port B21 | Disk server | /nfs/grp/cesr (100G) - Opera (SL3) |
| lnx1625 | hps50-f1 port | NOT part of the Linux compute farm | Has AMD CPUs, which produce different results from Intel CPUs |
| lnx1626 | hps50-f1 port B23 | Part of the Linux compute farm | General-purpose CESR build and processing machine |
| lnx301-lnx305 and lnx313-lnx324 | hps50-f1 | Part of the Linux compute farm | CESR members may run up to 2 interactive jobs on each of these machines. Please use the top program to make sure that no more than 2 CESR jobs are running. These machines are rather isolated from the lnx209 server, so linking and I/O-intensive use of these machines is discouraged. |
| ilc3[01-11] | hps50-f1 | Part of the Linux compute farm | |
| ilcsim (lnx11a) | hps50-f1 port C14 | Simulation disk server | /nfs/ilc/sim[1-3] |
| ilc204 (lnx16511) | hps50-f1 | Interactive compute node | |
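To honor the two-job limit on the farm nodes above, you can check your own load before starting another job. The sketch below counts your processes above a CPU threshold as a rough proxy for interactive jobs; the 10% threshold is an assumption, not a site policy (top gives the same picture interactively):

```shell
# Count this user's processes using more than 10% CPU, as a rough proxy
# for running interactive jobs; the 10% threshold is illustrative.
user=$(id -un)
njobs=$(ps -u "$user" -o pcpu= | awk '$1 > 10 {n++} END {print n+0}')
echo "busy jobs for $user on $(hostname): $njobs"
if [ "$njobs" -ge 2 ]; then
  echo "at or over the 2-job limit; please wait before starting another job"
fi
```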

A number of other Linux nodes are available via the GridEngine batch queues for running production jobs.
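A minimal GridEngine submit script looks like the sketch below; the job name and directives are illustrative, and the available queues should be checked with qstat before relying on them:

```shell
# Write a minimal GridEngine job script (all names here are illustrative).
cat > demo_job.sh <<'EOF'
#!/bin/bash
#$ -cwd          # run from the submission directory
#$ -j y          # merge stdout and stderr into one output file
#$ -N demo_job   # job name shown by qstat
echo "running on $(hostname)"
EOF
chmod +x demo_job.sh
# On a farm node you would submit it with:  qsub demo_job.sh
# Running it directly previews what the job prints:
./demo_job.sh
```

The #$ lines are GridEngine directives embedded in the script, so the same file works both as a batch submission and as an ordinary shell script.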

CESR Private (192.168.1.)

The CESR Linux Control System Cluster

The CESR Linux Control System cluster provides clustered file systems and high-availability services using the Red Hat Cluster Suite. All members have connections to both the CESR Private and CESR BPM subnets. See CesrHAC for more information.

The CESR OpenVMS Control System Cluster

The CESR Control System Cluster is a set of Alpha workstations (as well as a couple of de-supported VAXes) running the OpenVMS operating system.
| Machine | Configuration | MPM Access | Special Comments |
| cesr23 | Alpha | Elec Shop Rack | DO NOT USE! |
| cesr24 | VAX | No longer: bottom VME backplane | DO NOT USE! Failed |
| cesr25 | VAX | Yes: top VME backplane | DO NOT USE! |
| cesr26 | Alpha | No longer | Failed; replaced by cesr2E |
| cesr27 | Alpha | Yes: middle VME backplane | |
| cesr28 | Alpha | Yes: top VME backplane | |
| cesr29 | Alpha | No | Primary Cluster Disk Server |
| cesr2A | Alpha | Yes: bottom VME backplane (29) | CESR control (was Simulation Node) |
| cesr2B | Alpha | Yes: top VME backplane | |
| cesr2C | Alpha | MPM Test Rack | HDW Testing Node in CRS Office (W231) |
| cesr2D | Alpha | Yes: middle VME backplane | |
| cesr2E | Alpha | Yes: bottom VME backplane (26) | Was in LT107; moved to W221 for CESR control |
| cesr2F | Alpha | No | Release Build Node |

Servers and Interactive Nodes

| Node name | Network connection | Function | Details | Location |
| lnx280c (cesrweb) | hps08-p1 | CESR Web, database, and data server | /nfs/cesr/glassfish, /nfs/elog, /nfs/cesr/archive, /nfs/cesr/web | W221 |
| cesr3001-7 | | Control System Workstation | | W201 |
| cesr3101-4 | | Control System Workstation | | W207 |
| cesrW3105 | | Control System Windows Workstation | Sync Light framegrabber | W207 |
| cesrW3106 | | Control System Windows Workstation | Replacement framegrabber | W207 |
| cesrW3107 | | Control System Windows Workstation | Vacuum Monitor Kiosk | W207 |
| cesr3201 | | "Xterm" | | W101 |
| cesr3301 | | | | North Area Spur |
| lnx183c | hps08-p1 port G5 | Control System Server | EPICS (4ns Fdbk) | W221 |
| lnx184c | hps08-p1 port G6 | Control System Server | CBPM DAQ | W221 |

Linux Consoles

All Linux consoles run SL6.5. They auto-login to local user accounts and are used only as displays (windows appear from remote systems). They are also known as the "Grinnel replacement". In general, they are not used with a mouse or keyboard.
| Node name | Network connection | Function | Details |
| cesr18[1-4], cesr186 | hps63-p1 | Control Console | |
| cesr185 | hps45-p1 port 17 | Control Console | |

Instrumentation (192.168.32.)

BPMs, scopes, etc.
| Node name | Network connection | Function | Details |
| lnx184c | hps84-p32 | Interactive compute node | CBPM DAQ Server |

Longitudinal Feedback Test Network (192.168.130.)

| Node name | Network connection | Function | Details |
| lnx183c | | Interactive compute node | EPICS (4ns Fdbk) |


Software Versions

The primary Linux server and desktop operating system in use in the lab is Scientific Linux (SL), which is assembled jointly by Fermilab and CERN as a standardized experimental environment. It is based on Red Hat Enterprise Linux. CLASSE currently deploys SL6.5 unless otherwise noted.

VMS/Linux Shared Disk

Backups

See BackupSchedule for a list of the Linux, DEC Alpha (Unix), VMS, and Solaris systems we currently back up.

See WindowsBackupSchedule for a list of the Windows systems we currently back up.

Disk backup strategy

For quick access, the ELOG and repository backups are also written to disk on lnx113 (/mnt/backup).
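A disk-to-disk backup of this kind can be as simple as snapshotting the source tree into a dated directory. The sketch below uses temporary directories as stand-ins, not the real lnx113 paths:

```shell
# Copy a source tree into a dated snapshot directory (paths here are
# illustrative stand-ins for the real ELOG/repository locations).
src=$(mktemp -d) && dst=$(mktemp -d)
echo "demo entry" > "$src/elog.txt"
snap="$dst/backup-$(date +%Y%m%d)"
cp -a "$src" "$snap"      # "$snap" becomes a full copy of "$src"
ls "$snap"                # → elog.txt
```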


Please contact the CESR Code Librarian with questions or problems about this page.
Topic revision: r61 - 04 Sep 2014, AttilioDeFalco