The equipment used in the ATTOSTRUCTURA project is built around a high-performance computing (HPC) cluster for running complex simulations and analyses. Construction of the cluster began in 2017 thanks to a BBVA Foundation Leonardo Grant, an award that recognizes and supports researchers and creators with great potential in Spain.
Since its inception, the cluster has experienced significant growth driven by various research projects, particularly the ERC Starting Grant ATTOSTRUCTURA, funded by the European Research Council. The ATTOSTRUCTURA project has been fundamental, providing additional resources that have allowed the expansion of the cluster and the incorporation of new technologies and processing capabilities.
Thanks to ATTOSTRUCTURA, the cluster now comprises 13 computing nodes equipped with Intel Xeon Gold and Platinum processors, known for their high performance and efficiency under intensive computing workloads. In addition, the cluster houses 5 graphics processing units (GPUs), essential for tasks requiring massive parallel processing, such as machine learning, the simulation of complex physical phenomena, and the analysis of large datasets.
This HPC cluster is a crucial tool, allowing researchers to execute complex calculations and model systems at a scale that would be impossible with conventional computing equipment. The combination of high-performance CPUs and specialized GPUs offers a versatile, powerful platform that can be adapted to a wide variety of scientific and technological research projects. The contribution of ATTOSTRUCTURA has been key to reaching this level of capacity and performance.
In addition to its own equipment, the ERC ATTOSTRUCTURA project has access (through competitive calls or collaboration agreements) to the Spanish Supercomputing Network (RES), specifically to the MareNostrum machine at the Barcelona Supercomputing Center and the SCAYLE supercomputer (Castilla y León).
CLUSTER COMPONENTS AND MAIN DETAILS
*Funded in whole or in part by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 851201)
Main node (ATTO)*
– 2x CPU Intel(R) Xeon(R) Gold 6140 (18 cores, 36 threads) @ 2.30 GHz
– 192 GB RAM DDR4 2666 MHz ECC
– 4x GPU NVIDIA A30
– Hard drive 1 TB SSD
– Ethernet 10 Gb/s
– InfiniBand 56 Gb/s
Storage node (NAS)*
– 2x CPU Intel(R) Xeon(R) Gold 6140 (18 cores, 36 threads) @ 2.30 GHz
– 192 GB RAM DDR4 2666 MHz ECC
– 1x GPU NVIDIA GeForce RTX 3080
– Hard drive 512 GB SSD
– Shared storage 24 TB RAID 5
– Ethernet 10 Gb/s
– InfiniBand 56 Gb/s
Computing nodes (nodo01, nodo02)*
– 2x CPU Intel(R) Xeon(R) Gold 6140 (18 cores, 36 threads) @ 2.30 GHz
– 192 GB RAM DDR4 2666 MHz ECC
– Hard drive 512 GB SSD
– Ethernet 10 Gb/s
– InfiniBand 56 Gb/s
Computing nodes (nodo03, nodo04, nodo05, nodo06)*
– 2x CPU Intel(R) Xeon(R) Gold 6240 (18 cores, 36 threads) @ 2.60 GHz
– 192 GB RAM DDR4 2933 MHz ECC
– Hard drive 500 GB NVMe
– Ethernet 10 Gb/s
– InfiniBand 56 Gb/s
Computing nodes (nodo07, nodo08, nodo09, nodo10)*
– 2x CPU Intel(R) Xeon(R) Gold 6240R (24 cores, 48 threads) @ 2.40 GHz
– 192 GB RAM DDR4 2933 MHz ECC
– Hard drive 1 TB SSD
– Ethernet 10 Gb/s
– InfiniBand 56 Gb/s
High-performance node (nodo11)
– 2x Intel(R) Xeon(R) Platinum 8362 (32 cores, 64 threads) @ 2.80 GHz
– 1 TB RAM DDR4 3200 MHz ECC
– Hard drive 2 TB NVMe
– Ethernet 10 Gb/s
Ethernet switches:
– 24-port TP-LINK TL-SG3424
– 24-port D-LINK DGS-1210-24
InfiniBand switch:
– 12-port Mellanox MSX6005F-1BFS 56 Gb/s
Backup storage (memento)*
– Synology DS1821+
– 100 TB RAID 5
UPS (Uninterruptible Power Supply)
– RIELLO SDH 2200 VA
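As a quick sanity check on the specifications above, the per-node figures can be tallied programmatically. The sketch below (plain Python; node counts, cores per CPU, and RAM are copied directly from the list above, assuming 2 CPUs per node as stated) sums the cluster's aggregate cores, hardware threads, and memory:

```python
# Aggregate CPU and RAM totals from the node specifications listed above.
# Each entry: group name -> (number of nodes, cores per CPU, RAM in GB per node)
nodes = {
    "ATTO":      (1, 18, 192),    # 2x Xeon Gold 6140
    "NAS":       (1, 18, 192),    # 2x Xeon Gold 6140
    "nodo01-02": (2, 18, 192),    # 2x Xeon Gold 6140
    "nodo03-06": (4, 18, 192),    # 2x Xeon Gold 6240
    "nodo07-10": (4, 24, 192),    # 2x Xeon Gold 6240R
    "nodo11":    (1, 32, 1024),   # 2x Xeon Platinum 8362, 1 TB RAM
}

CPUS_PER_NODE = 2  # every node above lists two CPU sockets

total_cores = sum(n * CPUS_PER_NODE * cores for n, cores, _ in nodes.values())
total_threads = 2 * total_cores  # all listed CPUs expose 2 threads per core
total_ram_gb = sum(n * ram for n, _, ram in nodes.values())

print(f"{total_cores} cores, {total_threads} threads, {total_ram_gb} GB RAM")
# → 544 cores, 1088 threads, 3328 GB RAM
```

This is only an illustrative tally of the published specifications, not a benchmark: usable capacity depends on the scheduler configuration and on how much memory each job actually requests.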