ARCHER2 is the UK’s primary academic high performance computing (HPC) system, hosted by EPCC at the University of Edinburgh. ARCHER2 is the replacement for the ARCHER service, which was also hosted by EPCC. ARCHER2 allows you to access supercomputing resources without incurring the high capital costs of purchasing and maintaining such a system yourself.
Researchers have used supercomputing systems similar to ARCHER2 to model, visualise and analyse large systems, or process large amounts of data across many different scientific fields, including chemistry, physics, materials science, climate science and biomedical sciences. You can use ARCHER2 for most tasks requiring large amounts of computational power, particularly tightly-coupled parallel problems.
Computational Science and Engineering (CSE) support on ARCHER2 is provided by EPCC. You can get help with porting, optimising and developing your codes for ARCHER2, request that the scientific software you need is installed on the system, and obtain training and advice on best practice to help you exploit ARCHER2 resources.
Find out more about ARCHER2
Free or paid for:
Paid access to ARCHER2 is primarily through a technical application, with costs included in a grant proposal. Exact processes and procedures vary depending on the access route used and the research funder. Costs are £0.20 per CU (compute unit; 1 CU = 1 ARCHER2 node hour) for EPSRC and NERC projects and £0.39 per CU for other academic research.
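To make the pricing concrete, here is a minimal sketch of the CU arithmetic using the rates quoted above; the job size (128 nodes for 10 hours) is purely illustrative:

```python
# Illustrative CU cost calculator for the rates quoted above.
# 1 CU = 1 ARCHER2 node hour; the job size below is hypothetical.

RATE_EPSRC_NERC = 0.20      # GBP per CU, EPSRC and NERC projects
RATE_OTHER_ACADEMIC = 0.39  # GBP per CU, other academic research

def job_cost(nodes: int, hours: float, rate_per_cu: float) -> float:
    """Cost of a job using `nodes` nodes for `hours` wall-clock hours."""
    cus = nodes * hours  # one CU per node hour
    return cus * rate_per_cu

# A 128-node job running for 10 hours consumes 1,280 CUs:
print(f"£{job_cost(128, 10, RATE_EPSRC_NERC):.2f}")  # -> £256.00
```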
For industrial access, please contact support@epcc.ed.ac.uk. ARCHER2 is initially available only to research that falls under the EPSRC remit. As the service matures it will be opened up to NERC and other funders' projects.
About ARCHER2 access
Product Features
- Processors:
- ARCHER2 compute nodes each contain two 64-core processors running at 2.25 GHz, giving 128 cores per node and 748,544 processor cores in total across 5,848 nodes (see the sketch after this list)
- Backing Storage:
- 1 PB of NFS storage is available on the data analysis nodes, with an additional 14.5 PB of HPE Cray ClusterStor storage available on the login, data analysis, and compute nodes.
- Memory:
- ARCHER2 has 256 GiB of standard memory (512 GiB on high-memory nodes) per compute node and 512 GiB per login and data analysis node.
- Software:
- CASTEP, ChemShell, Code_Saturne, CP2K, Elk, FEniCS, GROMACS, LAMMPS, Met Office Unified Model, NAMD, Nektar++, NEMO, NWChem, ONETEP, OpenFOAM, Quantum Espresso, VASP
- More applications can potentially be installed on request
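As a quick sanity check on the hardware figures above, the per-node and per-core numbers can be derived directly from the listed specs; this is a minimal illustrative sketch, not an official sizing tool:

```python
# Back-of-envelope figures derived from the specs listed above (illustrative).

CORES_PER_CPU = 64
CPUS_PER_NODE = 2
NODES = 5_848
STANDARD_MEM_GIB = 256  # standard-memory compute node

cores_per_node = CORES_PER_CPU * CPUS_PER_NODE        # 128
total_cores = cores_per_node * NODES                  # 748,544
mem_per_core_gib = STANDARD_MEM_GIB / cores_per_node  # 2.0 GiB per core

print(cores_per_node, total_cores, mem_per_core_gib)
```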
Applicable Disciplines
- Materials Science
- Engineering
- Industrial Research
- Physiology
- Climate Science
- Bioscience
- Palaeontology
- Veterinary Sciences
- Physics
- Medical Research
- Anatomy
- Any project using large quantities of data
Case Studies
Terms & Conditions
ARCHER2 policies
Technical Requirements
- Internet Access
Skills Required
- Linux command line knowledge
- Knowledge of cluster computing and of parallel applications written using programming interfaces such as MPI and OpenMP (a minimal MPI sketch follows below)
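To give a flavour of the MPI knowledge referred to above, here is a minimal hello-world-style sketch using the mpi4py Python bindings. This is an illustrative assumption on our part: production ARCHER2 codes are more typically written in C, C++ or Fortran against MPI directly, and launched through the system's batch scheduler.

```python
# Minimal MPI sketch using the mpi4py bindings (illustrative only).
# Launch with several processes, e.g.: mpirun -n 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD   # communicator spanning every rank in the job
rank = comm.Get_rank()  # this process's rank, 0 .. size-1
size = comm.Get_size()  # total number of ranks

# Each rank contributes its rank number; rank 0 receives the sum.
total = comm.reduce(rank, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} ranks, sum of ranks = {total}")
```

Each rank runs as a separate process, and the reduce call is the kind of tightly-coupled communication step that distinguishes these workloads from trivially parallel ones.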