Intelligent Architectures for HPC Platforms

Andrew McGee1

1Senior Technologist, Huawei Technologies

 


Huawei has successfully deployed high-performance computing (HPC) clusters for its customers in industries such as manufacturing, supercomputing, and higher education through a complete portfolio of HPC products and solutions. Huawei has claimed a position at the top of the global supercomputing market and accumulated rich experience and industry practice in the AI domain, exemplified by public cloud and video analytics applications. These advantages are helping Huawei build industry-leading HPC cloud and AI convergence solutions. Come and hear more at this session.

Fast and Accurate Training of an AI Radiologist on the Zenith Supercomputer

Andrew Underwood1

1Director, HPC & AI, Dell EMC

 

The health care industry is expected to be an early adopter of AI and deep learning to improve patient outcomes, reduce costs, and speed up diagnosis. We have developed models for using AI to diagnose pneumonia, emphysema, and other thoracic pathologies from chest x-rays. Using the Stanford University CheXNet model as inspiration, we explore ways of developing accurate models for this problem with fast parallel training on Zenith, the Intel Xeon-based supercomputer at Dell EMC’s HPC and AI Innovation Lab. We explore various network topologies to gain insight into what types of neural networks scale well in parallel and improve training time from days to hours. We then explore transferring this learned knowledge to other radiology subdomains, such as mammography, and whether this leads to better models than developing subdomain models independently.
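Parallel training of this kind is commonly implemented with TensorFlow and Horovod on Xeon clusters; the sketch below illustrates that pattern, though the stack, the learning-rate rule, and the data pipeline are assumptions rather than the presenters' exact configuration (the 14-class multi-label head follows CheXNet).

```python
# Hedged sketch: data-parallel training of a CheXNet-style DenseNet-121 with
# Horovod + TensorFlow/Keras. Model head, learning-rate scaling and the data
# pipeline are illustrative assumptions, not the presenters' configuration.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one MPI process per node (or per socket), launched with mpirun

# DenseNet-121 backbone, as in CheXNet, with a 14-way multi-label head.
base = tf.keras.applications.DenseNet121(weights="imagenet",
                                         include_top=False, pooling="avg")
model = tf.keras.Model(base.input,
                       tf.keras.layers.Dense(14, activation="sigmoid")(base.output))

# Scale the learning rate with the worker count and wrap the optimizer so
# gradients are averaged across all workers at every step.
opt = hvd.DistributedOptimizer(
    tf.keras.optimizers.SGD(learning_rate=0.001 * hvd.size(), momentum=0.9))
model.compile(optimizer=opt, loss="binary_crossentropy")

# Broadcast rank 0's initial weights so all workers start identically.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]

# `train_ds` would be a per-worker shard of the chest x-ray dataset
# (hypothetical loader, sharded by hvd.rank()):
# model.fit(train_ds, epochs=10, callbacks=callbacks)
```

Spreading the batch across workers in this way is what turns days of training into hours, at the cost of retuning the learning-rate schedule as the effective batch size grows.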

Building AI tools and frameworks on Dell EMC HPC Systems

Matt Wallis1

1HPC Solution Architect, Dell EMC

 

This session is a technical demonstration of deploying and managing the deep learning frameworks, tools and libraries used on Dell EMC supercomputers, with the objective of accelerating data science and artificial intelligence research projects.

The Future Impact of Sequencing Everyone

Michael McManus

 

As the use of genome sequencing becomes mainstream in clinical practice, we are faced with the daunting realization of a future where we are “sequencing everyone”. What would it take to sequence the 7.6 billion people on the planet? This presentation will describe a quantitative methodology for sizing the compute and storage systems, and the energy usage, required to sequence 7.6 billion people.
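For a sense of the arithmetic involved, a back-of-envelope estimate can be sketched as below; every per-genome figure is an order-of-magnitude assumption chosen for illustration, not the presenter's methodology or numbers.

```python
# Back-of-envelope sizing for "sequencing everyone". All constants are
# illustrative assumptions, not figures from the presentation.
PEOPLE = 7.6e9
RAW_GB_PER_GENOME = 100       # ~30x whole-genome data, a common rule of thumb
CORE_HOURS_PER_GENOME = 300   # rough order for alignment + variant calling
WATTS_PER_CORE = 5            # approximate server power per core with overheads

storage_exabytes = PEOPLE * RAW_GB_PER_GENOME / 1e9   # GB -> EB
core_hours = PEOPLE * CORE_HOURS_PER_GENOME
energy_twh = core_hours * WATTS_PER_CORE / 1e12       # Wh -> TWh

print(f"Storage: {storage_exabytes:,.0f} EB")   # ~760 EB under these assumptions
print(f"Compute: {core_hours:,.0f} core-hours") # ~2.3 trillion core-hours
print(f"Energy:  {energy_twh:,.1f} TWh")        # ~11 TWh
```

Even with generous rounding, the totals land in the hundreds of exabytes and trillions of core-hours, which is why system design and energy use dominate the discussion.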

Intersect and La Trobe University – Leveraging a relationship through partnership

Jason Schimming1

1La Trobe University

 

La Trobe University (LTU) joined Intersect as a member in 2016, primarily in response to an increasing need for high-performance computing (HPC) capability in support of La Trobe’s strategic priorities. Over the ensuing three years, the relationship has grown and matured, resulting in LTU expanding its use of HPC, and adopting other major Intersect product and service groups: Time, Energy, Learn, and Space.

This presentation will provide an overview of the development of the relationship between La Trobe University and Intersect Australia, with a focus on three key areas: High Performance Computing, eResearch Analyst initiatives, and the realisation of strategic opportunities.

During La Trobe’s membership, HPC usage has grown strongly, from 75% of the 2016 compute allocation to more than 100% of the 2017 allocation, with further growth in 2018.

The presence of a full-time, on-campus Intersect eResearch Analyst has been transformative for La Trobe in terms of:

  • Provision or expansion of eResearch services usage,
  • eResearch skills development encompassing training and expert advice,
  • Researcher consultation and engagement,
  • Identification of strategic opportunities/initiatives and analysis of associated requirements.

As the partnership has matured, the timely identification of key technology-related pain points, combined with consultative remedial and tactical technical advice, has led to core strategic projects, including a recent project to improve the management of the capture and analysis of instrument-generated data. Following a detailed requirements-gathering and scoping exercise, Intersect worked with LTU to develop a solution that automates the capture of data from instruments, its upload into Intersect Space, and its processing in an Analytical Hub powered by Intersect Time.
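The pattern such a solution follows can be sketched roughly as below. The drop directory, bucket name, S3-compatible endpoint and scheduler command are all hypothetical placeholders; the abstract does not describe the actual Intersect Space or Time interfaces.

```python
# Minimal sketch of the capture -> upload -> process pattern: watch an
# instrument drop folder, upload each new file, then queue analysis.
# Paths, endpoint, bucket and job script are hypothetical placeholders.
import subprocess
import time
from pathlib import Path

import boto3  # assumes an S3-compatible interface to the storage service

DROP_DIR = Path("/data/instrument/outbox")   # hypothetical instrument share
BUCKET = "ltu-instrument-data"               # hypothetical bucket name
s3 = boto3.client("s3", endpoint_url="https://space.example.edu.au")

seen = set()
while True:  # runs as a long-lived capture daemon
    for f in DROP_DIR.glob("*.raw"):
        if f in seen:
            continue
        s3.upload_file(str(f), BUCKET, f.name)            # capture -> storage
        subprocess.run(["sbatch", "analyse.sh", f.name])  # queue on the hub
        seen.add(f)
    time.sleep(30)  # poll; a production service would track state durably
```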

La Trobe has a strategic imperative to develop and grow its research capacity, and Intersect plays a pivotal role, as a valued partner, in supporting that agenda.


Biography:

Jason Schimming is the Manager of Infrastructure, Facilities and Storage at La Trobe University, and is also Acting Manager of the Networks group. Jason joined ICT at La Trobe in 2005 and has been involved in many and varied technological advances and changes within the university’s IT infrastructure space. In his capacity as Network Manager, Jason is focussed on providing strategic and implementation programs for La Trobe’s storage and data networks that align with the University’s future-ready and cloud-based strategies.

Accelerating HPC innovation for today and tomorrow

Steve Tolnai1

1HPC & AI Lead, Asia Pacific & Japan, Hewlett Packard Enterprise

 

System developers, scientists and researchers face obstacles in deploying complex new HPC technologies, such as energy efficiency, reliability and resiliency requirements, and the difficulty of developing software that exploits HPC hardware. All of these can delay technology adoption and critical projects. Meanwhile, the need to accelerate real-time insights and intelligence for deep learning is growing at break-neck speed. HPE & Intel have forged a multi-faceted alliance to advance customer innovation and expand HPC accessibility to enterprises of all sizes. Join this session to discover how HPE’s ecosystem of industry partnerships is delivering breakthroughs in HPC deployment, security, power use and density, making supercomputing more accessible and affordable for today and tomorrow.

Milton: The Research Computing Cloud – Staying flexible

Jakub Szarlat1

1Manager, ITS eResearch, The Walter and Eliza Hall Institute of Medical Research

 

The Walter and Eliza Hall Institute made the decision last year to substantially invest in moving its research compute capability to the private cloud model. This included the creation of a dynamic HPC cluster.

The benefit of this strategy has been a flexible infrastructure that has enabled researchers across multiple divisions of the institute to undertake a range of computational science approaches that were previously unavailable to them. We have also been able to progressively evolve the research computing capability to meet the changing demands of researchers.
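One common way to make such a cluster dynamic is to provision worker nodes through the private cloud’s API as demand rises. A hedged sketch using the OpenStack SDK follows; the cloud name, image, flavor and network are hypothetical, and the institute’s actual tooling is not described in the abstract.

```python
# Hedged sketch: grow a "dynamic" HPC cluster on a private cloud by
# provisioning a worker node via the OpenStack API when the queue backs up.
# Cloud name, image, flavor and network below are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="milton")  # reads credentials from clouds.yaml

def add_worker(index: int):
    """Boot one prebuilt worker VM and wait until it is ACTIVE."""
    return conn.create_server(
        name=f"hpc-worker-{index:03d}",
        image="rocky9-slurm-worker",   # hypothetical prebuilt worker image
        flavor="m3.xlarge",            # hypothetical flavor
        network="cluster-net",         # hypothetical cluster network
        wait=True,
    )

# e.g. call add_worker(1) when pending jobs exceed a threshold, and delete
# idle workers again to return capacity to other institute workloads.
```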

Accelerating HPC innovation with Artificial Intelligence for today and tomorrow

Steve Tolnai1
1Group Manager & Chief Technologist, HPC & AI Asia Pacific and Japan, Hewlett Packard Enterprise

 

System developers, scientists and researchers face obstacles in deploying complex new HPC technologies, such as energy efficiency, reliability and resiliency requirements, and the difficulty of developing software that exploits HPC hardware. All of these can delay technology adoption and critical projects. Meanwhile, the need to accelerate real-time insights and intelligence for deep learning is growing at break-neck speed. HPE & Intel have forged a multi-faceted alliance to advance customer innovation and expand HPC accessibility to enterprises of all sizes. Join this session to discover how HPE’s ecosystem of industry partnerships is delivering breakthroughs in HPC deployment, security, power use and density, making supercomputing more accessible and affordable for today and tomorrow.


Biography:

Steve Tolnai is the Group Manager and Chief Technologist in the Hewlett Packard Enterprise servers business across Asia Pacific & Japan. In this position, he is responsible for all aspects of High Performance Computing technical strategy within Asia Pacific & Japan.

Steve manages the Technical and Solutions team for High Performance Computing and Artificial Intelligence, encompassing Scientific Research, the Financial Services Industry, Life and Material Sciences, Computer-Aided Engineering, Government and Defence, Oil and Gas, Digital Content Creation and Electronic Design Automation.

Steve’s prior role was Principal Architect for HPC in Asia Pacific, where he architected the largest Life Science supercomputers in Australia, Korea and Singapore, as well as the largest supercomputer in Asia Pacific (excluding Japan), located in Australia.

In his 30+ years at Digital, Compaq, HP and HPE, he has worked as a member of the Technical Sales and Marketing divisions, where he has been involved in IT consulting to major corporations and governments in the areas of UNIX/Linux, Windows and VMS, using VAX, Alpha, x86 and NVIDIA architectures.

Building Artificial Intelligence workflows for 21st Century research

Andrew Underwood1 and a guest from NVIDIA

1High-Performance Computing Leader, Dell EMC Australia and New Zealand

 

The world we know is built on artificial intelligence: Netflix suggests our entertainment, Facebook suggests our friends, and transportation is fast becoming driverless.

Artificial intelligence has the same potential in scientific research, completing tasks that may be mundane, impossible (or so we thought!), or too complex for us to undertake manually. This session will take you through “deep learning”, the modern-day precursor to artificial intelligence, and show how Dell EMC PowerEdge HPC systems with NVIDIA technology can be used to build your first “AI workflow”.
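To make the idea concrete, a first workflow can be as small as the sketch below: load data, define a network, and train on a GPU-equipped node. The dataset and model are deliberately simple placeholders, not the session’s material.

```python
# Minimal "first AI workflow" sketch: train a small image classifier.
# TensorFlow places the computation on an NVIDIA GPU automatically when
# one is visible on the node.
import tensorflow as tf

# Load and normalise a small benchmark dataset (placeholder for real data).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define a tiny fully connected network.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train and validate; the same script scales to larger models and data.
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```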


Biography:

Andrew Underwood leads the Dell EMC High-Performance Computing and Machine Learning strategy in Australia and New Zealand. His passion for innovation has driven him to architect some of the world’s largest and most powerful Supercomputers and Artificial Intelligence platforms critical to the scientific advancement and global economic competitiveness of our clients throughout the world.

About the conference

eResearch Australasia provides opportunities for delegates to engage, connect, and share their ideas and exemplars concerning new information-centric research capabilities, and how information and communication technologies help researchers to collaborate, collect, manage, share, process, analyse, store, find, understand and re-use information.
