Setting up Cryo-EM tools and workflows in HPC Phoenix: lessons learnt

Dr Fabien Voisin1

1University Of Adelaide, Adelaide, Australia

Biography:

After finishing my PhD in high-energy astrophysics at the University of Adelaide, I started my journey in high performance computing. I have since helped countless researchers and projects with their research computing needs, from troubleshooting code to setting up cloud environments that can be used in parallel with our HPC system, Phoenix.

I thrive on tackling complex research infrastructure problems and still count on returning to academia when the opportunity presents itself.

My ORCiD is https://orcid.org/0009-0004-6872-9858

Abstract:

Adelaide Microscopy has acquired an FEI Glacios 200 kV cryo-transmission electron microscope. This instrument easily generates terabytes of data that need to be processed as soon as possible. We opted to make use of our existing storage and compute infrastructure rather than buy additional local machines for data processing. Notably, we offered our HPC GPU capabilities (i.e. 200x 40 GB A100 SXM GPUs) and high-performance scratch storage to minimize bottlenecks, with dedicated GPU nodes to accommodate CryoSPARC live processing.

In this talk, we will highlight the workflow and infrastructure behind this setup. While it has delivered good performance and scalability, as GPU resources can easily be added when demand arises, we will also discuss some serious challenges commonly associated with shared infrastructure: from installing and configuring cryo-EM software tools to data transfer and data management.
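As a rough sketch of how dedicated GPU nodes might be reserved for live processing on a Slurm-based cluster such as Phoenix, a CryoSPARC worker job could look like the following. The partition name, hostnames, and paths below are hypothetical illustrations, not Phoenix's actual configuration:

```shell
#!/bin/bash
#SBATCH --job-name=cryosparc-live
#SBATCH --partition=gpu-cryoem      # hypothetical dedicated cryo-EM GPU partition
#SBATCH --gres=gpu:a100:4           # request 4 of the 40 GB A100 SXMs
#SBATCH --cpus-per-task=16
#SBATCH --mem=128G
#SBATCH --time=24:00:00

# Stage raw movies onto high-performance scratch to reduce I/O bottlenecks
# (source and destination paths are illustrative)
rsync -a /data/glacios/session01/ "$SCRATCH/session01/"

# Register this node with the CryoSPARC master as a worker
# (cryosparcw is CryoSPARC's worker CLI; the master hostname is illustrative)
cryosparcw connect --worker "$(hostname)" \
    --master cryosparc-master.example.edu \
    --ssdpath "$SCRATCH/cryosparc_cache"
```

A setup along these lines lets GPU nodes be added to or removed from the live-processing pool through the scheduler, rather than by provisioning dedicated workstations.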


Categories