GRICAD platforms

A brief overview of our computing capabilities

Pierre-Antoine Bouttier


GRICAD infrastructures (very briefly)


GRICAD services

Infrastructures and expertise for:

  • Scientific and intensive computing
  • Cloud computing
  • Research data management
  • Scientific software development
  • Training and community outreach
  • Audiovisual production

GRICAD access policy

  • Freely accessible to all academic researchers belonging to an institution of the Grenoble COMUE UGA, and to their collaborators in the context of research projects
  • Pooling and rationalisation of hardware and human resources across the Grenoble site (COMUE UGA)

Focus on computing resources

Scientific computing and data analysis


Scientific computing and data analysis

Several platforms for several needs

  • Froggy and Dahu clusters: Swiss-army-knife tools (HPC, HTC, data analysis, AI, visualisation)
  • Luke for non-standard computing nodes
  • Computing grid for HTC and data analysis
  • NOVA for cloud computing and development (including GPU)

...and high-performance, high-capacity storage

Platforms in detail

Froggy (end-of-life)

  • 190 nodes, each with two 8-core Xeon E5-2670 CPUs and 64 GB RAM
  • One fat node with four 8-core Xeon E5-4620 CPUs and 512 GB RAM
  • 3 GPU nodes (Tesla K20m and K40t)
  • One visualisation node
  • 90 TB shared Lustre scratch and 30 TB shared home

Luke

  • 1126 cores in total
  • 62 nodes, heterogeneous in RAM, disk space, CPUs, etc.

Platforms in detail

Dahu

  • Up to 3500 cores, 15 TB shared home
  • 3 GPU nodes with 4 Tesla V100 SXM2 GPUs and 192 GB RAM
  • One visualisation node
  • One fat node (1192 GB RAM)
  • 2 fast nodes (high-frequency CPUs)

Storage

  • Bettik: scratch storage for the Luke and Dahu clusters, 1.3 PB, BeeGFS
  • MANTIS: cloud storage accessible from all clusters, ~1 PB, iRODS (see the sketch below)
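
Since MANTIS is iRODS-based, data movement typically goes through the standard iRODS icommands. A minimal sketch (file names are hypothetical; the exact zone and path setup depends on your project, see the documentation):

$ iinit                         # Authenticate to the iRODS server (asks for your password once)
$ iput results.tar.gz           # Upload a file from the cluster to MANTIS
$ ils                           # List the contents of your current iRODS collection
$ iget results.tar.gz restored/ # Retrieve the file back into a local directory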

Accessing GRICAD computing resources

GRICAD web portal

  • Permanent staff
    • Create an account (with your Agalan credentials)
    • Create or join an existing project
    • Access the machines
  • Non-permanent staff
    • Create an account (with your Agalan credentials)
    • Join an existing project
    • Access the machines

Prerequisites: be familiar with Linux commands and read the documentation

Software environment on our clusters

Nix and Guix

  • Package managers
  • In user space
  • Reproducibility oriented
  • Easy to set up the same environment on multiple platforms, even your local machine (see the manifest sketch below)

For GPU workloads, global Conda environments are provided
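
To illustrate the portability point, one common Guix pattern is to describe the environment in a manifest file and re-instantiate it anywhere Guix runs; a minimal sketch (the file name and package list are just examples):

$ cat manifest.scm
;; Hypothetical manifest describing the software environment
(specifications->manifest
 (list "r" "r-data-table" "r-plotly"))
$ guix package -m manifest.scm  # Recreate exactly this environment, here or on your laptop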

Software environment on our clusters

Containers

  • Available on Luke and Dahu
  • Singularity or Charliecloud (see the example below)
  • Can be tricky for multi-node or GPU computing
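
As an example, a typical Singularity session could look like the following sketch (the image and script names are hypothetical; check the documentation for the exact setup on each cluster):

$ singularity pull docker://rocker/r-base               # Build a local .sif image from Docker Hub
$ singularity exec r-base_latest.sif Rscript analysis.R # Run a script inside the container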

In user space (within reasonable limits)

  • Conda, Spack
  • No help from GRICAD for these solutions
  • Not shared solutions (bad)...
  • ...but sometimes unavoidable (a Conda sketch follows below)
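
If you do fall back on Conda, a self-contained user-space environment might look like this (environment and package names are just examples):

$ conda create -n myenv python=3.10 numpy  # Create an isolated environment in your home
$ conda activate myenv                     # Activate it for the current session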

Good practices for computing and storage

  • Read the doc.
  • Read the doc.
  • Be aware of rules of usage (read the doc)
  • Do not launch heavy workloads on shared spaces
  • Identify the platforms adapted to your needs (especially for storage!)
  • Your data is not backed up on our clusters
  • Do not hesitate to contact us!

Some useful links

GRICAD website
GRICAD documentation
GRICAD web portal
GRICAD support

How to use R on our clusters (here, with Guix on Dahu)

To install R

$ source /applis/site/guix-start.sh # Activate the Guix package manager
$ guix install r r-data-table r-plotly # Install R and the needed R packages

To search for a package

$ guix search packagename

To import a CRAN package (and its dependencies) into Guix

$ guix import cran --recursive cran_packagename
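
Once installed, R runs as usual from your Guix profile; for instance (script name is hypothetical):

$ Rscript my_analysis.R # Run an R script with the Guix-installed R and packages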

Many thanks for your attention!