The Argon HPC system is the latest HPC system at the University of Iowa. It consists of 346 compute nodes, each of which contains 28 2.4GHz Intel Broadwell processor cores and runs CentOS-7.4 Linux. There are several compute node configurations:
There are 16 machines with single Nvidia P100 accelerators, 6 machines with dual Nvidia P100 accelerators, 2 machines with an Nvidia K80 accelerator, 2 machines with an Nvidia P40 accelerator, and 6 machines with a Titan V accelerator.
The Titan V is now considered a supported configuration in GPU-capable compute nodes but is restricted to a single card per node. Staff have completed the qualification process for the 1080 Ti and concluded that it is not a viable option for current Argon compute nodes.
The Rpeak (theoretical peak Flops) is 285.60 TFlops, not including the accelerators, and the compute nodes have 67.25 TB of memory in aggregate. In addition, there are 2 login nodes of the same system architecture; the login nodes have 256GB of memory each.
While on the backend Argon is a completely new architecture, the frontend should be very familiar to those who have used previous generation HPC systems at the University of Iowa. There are, however, a few key differences, which are discussed on this page.
One important difference between Argon and previous systems is that Argon has hyperthreaded (HT) processor cores turned on. A hyperthreaded core can be thought of as splitting a single physical core into two virtual cores, much as a Linux process can be split into threads. That is an oversimplification, but if your application is multithreaded then hyperthreaded cores can potentially run it more efficiently. For non-threaded applications, you can think of a pair of hyperthreaded cores as roughly equivalent to two cores at half the speed when both cores of the pair are in use. This can help keep the physical processor busy for processes that do not always use the full capacity of a core. HT was enabled on Argon to try to increase system efficiency on the workloads that we have observed. There are some things to keep in mind as you develop your workflows.
Like previous UI HPC systems, Argon uses SGE, although this version is based on a slightly different code base. If anyone is interested in the history of SGE, there is an interesting writeup at History of Grid Engine Development. The version of SGE that Argon uses is from the Son of Grid Engine project. For the most part this will be very familiar to people who have used previous generations of UI HPC systems. One thing that will look a little different is the output of the qhost command, which shows the CPU topology.
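For illustration, a qhost line for a standard Argon compute node might look something like the following (the hostname, load, and memory figures here are invented; the topology columns are the ones described below):

    HOSTNAME             ARCH      NCPU NSOC NCOR NTHR  LOAD   MEMTOT   MEMUSE
    argon-compute-1-01   lx-amd64    56    2   28   56  0.05   125.6G     2.3G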
As you can see, that shows the number of CPUs (NCPU), the number of CPU sockets (NSOC), the number of cores (NCOR), and the number of threads (NTHR). This information could be important as you plan jobs, but it essentially reflects what was said above about HT cores. Note that all Argon nodes have the same processor topology. SGE uses the concept of job slots, which serve as a proxy for the number of cores as well as the amount of memory on a machine. Job slots are one of the resources requested when submitting a job to the system. As a general rule, the number of job slots requested should be equal to or greater than the number of processes/threads that will actually consume resources. The parallel environment used to request an entire node on Argon is called 56cpn. For one node you would request 56 slots:
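    # myscript.job is a placeholder for your job script
    qsub -pe 56cpn 56 myscript.job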
More nodes would be requested by specifying a slot count that is a multiple of 56. So for 2 nodes:
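    # two nodes = 112 slots; myscript.job is a placeholder
    qsub -pe 56cpn 112 myscript.job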
and so on.
You will need to be aware of the approximate amount of memory per job slot when setting up jobs if your job uses a significant amount of memory. The actual amount will vary due to OS overhead, and will be slightly lower than the values given below.
|Node memory (GB)||Job slots||Memory (GB) per slot|
|128||56||~2.2|
|256||56||~4.5|
|512||56||~9.1|
Using the Basic Job Submission and Advanced Job Submission pages as a reference, how would one submit jobs taking HT into account? For single-process high throughput jobs it probably does not matter; just request one slot per job. For multithreaded or MPI jobs, request one job slot per thread or process. So if your application runs best with 4 threads, request something like the following.
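A minimal sketch, assuming a shared-node parallel environment named smp (substitute whichever parallel environment is appropriate for your job) and a placeholder job script:

    qsub -pe smp 4 myscript.job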
That will run on two physical cores and two HT cores. For non-threaded processes that are also CPU bound, you can avoid running on HT cores by requesting 2x the number of slots as cores that will be used. So, if your process is a non-threaded MPI process and you want to run 4 MPI ranks, your job submission would be something like the following.
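A sketch under the same assumptions as above (2x slots for 4 non-threaded MPI ranks):

    qsub -pe smp 8 myscript.job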
and your job script would contain an mpirun command similar to
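    # my_mpi_program is a placeholder for your MPI executable
    mpirun -np 4 ./my_mpi_program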
That would run the 4 MPI ranks on physical cores and not HT cores. Note that this will work for non-MPI jobs as well. If you have a non-threaded process that you want to ensure runs on an actual core, you could use the same 2x slot request.
Note that if you do not use the above strategy then it is possible that your job processes will share cores with other job processes. That may be okay, and even preferred for high throughput jobs, but it is something to keep in mind. It is especially important to keep in mind when using the orte parallel environment. There is more discussion of the orte parallel environment on the Advanced Job Submission page. In short, that parallel environment is used in node sharing scenarios, which implies potential core sharing as well. For MPI jobs, that is probably not what you want. As on previous systems, there is a parallel environment (56cpn) for requesting entire nodes. This is especially useful for MPI jobs to ensure the best performance.
For MPI jobs, the system-provided openmpi will not bind processes to cores by default, as would be the normal default for openmpi. It is set up this way to avoid inadvertently oversubscribing processes on cores. In addition, the system openmpi settings map processes by socket. This should give a good process distribution in most cases. However, if you wish to use fewer than 28 processes per node in an MPI job then you may want to map by node to get the most even distribution of processes across nodes. You can do that with the --map-by node option flag to mpirun.
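For example (my_mpi_program is a placeholder):

    # spread 16 ranks round-robin across the nodes of the job
    mpirun -np 16 --map-by node ./my_mpi_program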
If you wish to control mapping and binding in a more fine-grained manner, these parameters can be overridden with options to mpirun. Openmpi provides many options for fine-grained control of process layout. The defaults set on the system should be good in most cases but can be overridden with the openmpi options for mapping and binding. See the mpirun manual page (man mpirun) for more detailed information. If you do override the defaults, keep the processor topology in mind.
If you set your own binding, for instance --bind-to core, be aware that the number of cores is half the total number of HT processors. Note that core binding by itself may not boost performance much. Generally speaking, if you want to minimize contention with hardware threads, simply request twice the number of slots as the cores your job will use. Even if the processes are not bound to cores, the OS scheduler will do a good job of minimizing contention.
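A sketch of explicit binding (my_mpi_program is a placeholder):

    # pin each of the 4 ranks to a physical core
    mpirun -np 4 --bind-to core ./my_mpi_program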
If your job does not use the system openmpi, or does not use MPI, then any desired core binding will need to be set up with whatever mechanism the software uses. Otherwise, there will be no core binding. Again, that may not be a major issue. If your job does not work well with HT then run on a number of cores equal to half of the number of slots requested and the OS scheduler will minimize contention.
While SoGE is very similar to previous versions of SGE, there are some new utilities that people may find of interest, such as the nodes-in-job command for listing the nodes assigned to a job. There are manual pages for each of these.
On previous UI HPC systems it was possible to briefly ssh to any compute node before getting booted from that node if a registered job was not found. This was sufficient to run an ssh command, for instance, on any node. This is not the case for Argon: SSH connections to compute nodes are only allowed if you have a registered job on that host. Of course, qlogin sessions will allow you to log in to a node directly as well. If you have a job running on a node, you can ssh to that node to check status, etc. You can find the nodes of a job with the nodes-in-job command mentioned above. We ask that you do no more than observe things while logged into a node, as it may have other jobs sharing it.
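A sketch of the flow; the job id and hostname here are hypothetical, and the exact nodes-in-job invocation may differ (see its manual page):

    nodes-in-job 12345
    # then ssh to one of the reported hosts where your job is running
    ssh argon-compute-1-01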
While there are many software applications installed from RPM packages, many commonly used packages, and their dependencies, are built from source. See the Argon Software List to view the packages and versions installed. Note that this list does not include all of the dependencies that are installed, which are generally newer versions than those installed via RPM. Use of these packages is facilitated through environment modules, which set up the appropriate environment for an application, including loading required dependencies. Some packages, such as Perl, Ruby, R, and Python, are extendable. We build a set of extensions based on commonly used and requested extensions, so loading modules for those packages will load the dependencies needed for the core package as well as all of the extensions. The number of extensions installed, particularly for Python and R, is too large to list here; use the standard tools of those packages to determine what extensions are installed.
Like previous generation UI HPC systems, Argon uses environment modules for managing the shell environment needed by software packages. However, Argon uses Lmod rather than the TCL modules used in previous generation UI HPC systems. More information about Lmod can be found in the Lmod: A New Environment Module System — Lmod 6.0 documentation. Briefly, Lmod improves on TCL modules in some key ways. One is that Lmod will automatically load and/or swap dependent environment modules when higher level modules are changed in the environment. It can also temporarily deactivate modules if a suitable alternative is not found, and reactivate them when the environment changes back. We are not using all of the features that Lmod is capable of, so module behavior should be very close to previous systems but with a more robust way of handling dependencies.
Lmod provides a mechanism to save a set of modules as a collection that can then be restored. For those who wish to load modules at shell startup, this provides a better mechanism than calling individual module files. The reasons are that restoring a collection is more efficient than loading modules individually, and that the restore performs the equivalent of a module purge, which will ensure that the environment, at least the part controlled by modules, is predictable.
To use this, simply load the modules that you want to have loaded as a set. Then run the following command.
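    module save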
That will save the loaded modules as the default set. To restore that run
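    module restore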
That command could then be put in your shell initialization file. In addition to saving/restoring a default set you can also assign a name to the collection.
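For example ("mymodules" is an arbitrary collection name):

    module save mymodules
    module restore mymodules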
There is also a technical reason to use the module save/restore feature as opposed to individual modules that involves how the LD_LIBRARY_PATH environment variable is handled at shell initialization.
One of the things that environment modules set up is $LD_LIBRARY_PATH. However, when a setuid/setgid program runs, it unsets $LD_LIBRARY_PATH for security reasons. One such setgid program is the Duo login program that runs as part of an ssh session. This can leave you with a partially broken environment: a module is loaded and sets $LD_LIBRARY_PATH, but the variable is then unset before shell initialization is complete. On previous systems this was worked around by always forcing a reload of the environment module, but that is not very efficient. Use module restore to load saved modules if you are loading modules from your ~/.bashrc or similar.
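A minimal sketch of the corresponding ~/.bashrc snippet:

    # restore the default saved module collection at shell startup
    module restore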
Other than the above items, and some other additional features, the environment modules controlled by Lmod should behave very similarly to the TCL modules on previous UI HPC systems.
Unix attributes are now available in the campus-wide Active Directory service and Argon makes use of them. One of those attributes is the default Unix shell, which can be set via the following tool: Set Login Shell - Conch. Most people will want the shell set to /bin/bash, so that is a good choice if you are not sure. For reference, previous generation UI HPC systems set the shell to /bin/bash for everyone unless requested otherwise. We recommend that you check your shell setting via the Set Login Shell - Conch tool and set it as desired before logging in for the first time. Note that changes to the shell setting may take up to 24 hours to become effective on Argon.
|Queue||Node Description||Queue Manager||Slots||Total memory (GB)|
|AML||(1) mid memory||Aaron Miller||56||256|
|ANTH||(4) standard memory||Andrew Kitchen||224||512|
|(8) standard memory||Jun Wang||448||1024|
|AS||(5) mid memory|
|BH||(1) high memory||Bin He||56||512|
|(13) mid memory|
|(1) mid memory|
|(2) standard memory||Grant Brown||112||256|
|BIO-INSTR||(3) mid memory||JJ Urich, Albert Erives||168||768|
|CBIG||(1) mid memory with P100 accelerator||Mathews Jacob||56||256|
|CBIG-HM||(1) high memory with P100 accelerator||Mathews Jacob||56||512|
|CCOM||(18) high memory|
Note: 5 running jobs per user
|CCOM-GPU||(2) high memory with P100 accelerator|
CGRER + LMOS
|(10) standard memory|
|CHEMISTRY||(3) mid memory|
|(2) mid memory|
|CLL||(5) standard memory|
|COB||(2) mid memory||Brian Heil||112||512|
(10) mid memory
Note: Users are restricted to no more than three running jobs in the COE queue.
|(1) mid memory|
|FERBIN||(13) standard memory||Adrian Elcock||728||1664|
|(6) standard memory|
|MF-HM||(2) high memory||Michael Flatte||112||1024|
|(8) standard memory|
|AIS||(1) mid memory||Grant Brown||56||256|
(3) standard memory
|GV||(2) mid memory|
|HJ||(10) standard memory||Hans Johnson||560||1280|
|HJ-GPU||(1) high memory with P100 accelerator||Hans Johnson||56||512|
|IFC||(10) mid memory|
|IIHG||(10) mid memory|
|(12) mid memory||Ben Rogers||672||3072|
|(2) mid memory with Titan V accelerators||Ben Rogers||112||512|
|(1) high memory with (2) P100 accelerators||Ben Rogers||56||512|
|IVR||(4) mid memory|
(1) high memory
|IVR-GPU||(1) high memory with K80 accelerator||Todd Scheetz||56||1536|
|IVRVOLTA||(4) high memory with Titan V||Mike Schnieders||224||2048|
|IWA||(11) standard memory|
|JM||(3) high memory|
|JM-GPU||(1) mid memory with P100 accelerator||Jake Michaelson||56||512|
|JP||(2) high memory|
|JS||(10) mid memory||James Shepherd||560||2560|
|LUNG||(2) high memory with P40 accelerator||Joe Reinhardt||112||1024|
|MANSCI||(1) standard memory|
|MANSCI-GPU||(1) high memory with P100 accelerator||Qihang Lin||56||512|
|MANORG||(1) standard memory||Michele Williams/Brian Heil||56||128|
|(5) mid memory|
William (Daniel) Walls
|MORL-GPU||(5) mid memory with dual P100 accelerators|
William (Daniel) Walls
|NEURO||(1) mid memory||Marie Gaine/Ted Abel||56||256|
|NOLA||(1) high memory||Ed Sander||56||512|
|PINC||(6) mid memory||Jason Evans||336||1536|
|REX||(4) standard memory|
|REX-HM||(1) high memory|
|SB||(4) standard memory|
|STATEPI||(1) mid memory||Linnea Polgreen||56||256|
|UDAY||(4) standard memory|
|UI||(20) mid memory||1120||5120|
|(1) mid memory|
(1) mid memory with P100 accelerator
(4) mid memory with P100 accelerator
|UI-HM||(5) high memory||280||2560|
(19) mid memory
(115) standard memory
|NEUROSURGERY||(1) high memory with K80 accelerator|
|SEMI||(1) standard memory|
|ACB||(1) mid memory||Adam Dupuy||56||256|
|FFME||(16) standard memory||Mark Wilson||896||2048|
|FFME-HM||(1) high memory||Mark Wilson||56||512|
|RP||(2) high memory||Robert Philibert||112||1024|
|LT||(2) high memory with P100 accelerator||Luke Tierney||112||1024|
|KA||(1) high memory||Kin Fai Au||56||512|
A significant portion of the HPC cluster systems at UI was funded centrally. These nodes are placed in queues named UI or prefixed with UI-.
These queues are available to everyone who has an account on an HPC system. Since that is a fairly large user base, there are limits placed on these shared queues. Also note that there is a limit of 10,000 active (running and pending) jobs per user on the system.
|Centrally funded queues||Node Description||Wall clock limit||Running jobs per user|
(20) mid memory
(5) high memory
(20) mid memory
(4) mid memory with P100 accelerator
|UI-DEVELOP||(1) mid memory|
(1) mid memory with P100 accelerator
Note that the number of slots available in the UI queue can vary depending on whether anyone has purchased a reservation of nodes. The UI queue is the default queue and will be used if no queue is specified. This queue is available to everyone who has an account on a UI HPC cluster system.
Please use the UI-DEVELOP queue for testing new jobs at a smaller scale before committing many nodes to your job.
In addition to the above, the HPC systems have some nodes that are not part of any investor queue. These are in the all.q queue and are used for node rentals and future purchases. The number of nodes for this purpose varies.
There are many resources that SGE keeps track of, and most of them can be used in job submissions. However, the resource designations for machines based on memory and GPU are the most likely to be used in practice. For the most part, machines with different amounts of memory and GPU capability are segregated by queues. However, the all.q queue contains all machines, and when running jobs in that queue it may be desirable to request specific machine types. The following table lists these resources. They would be selected with the '-l resource' flag to qsub. These are all Boolean resources.
|Full Resource Name||Shortcut Resource Name|
For example, if you run a job in the all.q queue and want to use a node with a GPU, but do not care which type,
qsub -l gpu=true
If you specifically wanted to use a node with a P100 GPU,
qsub -l gpu_p100=true
or use the shortcut,
qsub -l p100=true
There are some non-Boolean resources for GPU nodes that could be useful in a shared node scenario. Most of these are requestable but some are informational. Note that these are host-based resources, so they are probably most useful when using the all.q queue for jobs. GPU jobs in investor queues will most likely want to use the Boolean resources listed in the previous table.
number of CUDA GPUs on the host
number of OpenCL GPUs on the host
total number of GPUs on the host
free memory on CUDA GPU N
number of processes on CUDA GPU N
maximum clock speed of CUDA GPU N (in MHz)
compute utilization of CUDA GPU N (in %)
total number of processes running on devices (not requestable)
number of devices with no current processes (requestable)
maximum clock speed of OpenCL GPU N (in MHz)
global memory of OpenCL GPU N
semi-colon-separated list of GPU model names
For example, to request a node with at least 2G of memory available on the first GPU device:
qsub -l gpu.cuda.0.mem_free=2G
If there is more than one GPU device on a node, you will need to determine which device you will use and specify it accordingly in your code.
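One common approach, shown here as a sketch rather than Argon-specific guidance, is to limit which device the application sees via the CUDA_VISIBLE_DEVICES environment variable in your job script:

    # expose only CUDA device 0 to the application (the index is an example)
    export CUDA_VISIBLE_DEVICES=0
    ./my_gpu_program   # placeholder executable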