AeRO Leadership Development Forum


Whether you are just starting out or already an experienced leader, improving your communication and pitching skills is critical to influencing effectively.

Communicate with Influence

Spend the day learning how to communicate your research with the Happiness Concierge team. In these sessions, we will cover:


Brand You 9am – 10.30am

with Nicole Hatherly

Learn how to articulate your value and communicate your elevator pitch. In this fun 90-minute session, you will be guided through a framework for communicating your expertise and will draft your personal elevator pitch.

Case Studies 11am – 12.30pm

with Rachel Service

Learn the critical elements of a compelling case study so you can communicate the outcomes of your research. In this practical session, you will create an outline of your research and, optionally, present it to the audience.

Work The Room 1.30pm – 3pm

with Huon Latham

Learn the fundamentals of networking in this fun, practical 90-minute session. Identify what to talk about with strangers, how to enter a conversation, and how to exit warmly.

Executive Insights – Q&A with experienced leaders 3.30pm – 5pm

Hear from experienced leaders drawn from the eResearch sector in this Q&A session.


The workshop is aimed at eResearch professionals who are in a management/leadership role or who aspire to be the next generation of managers and leaders in eResearch.



Nicole Hatherly

Nicole Hatherly is a Senior Consultant and coach with over 24 years’ experience working with global brands including Commonwealth Bank and FOXTEL. Nicole focuses on personal brand and internal visibility for Happiness Concierge’s academic clients, including USYD, Franklin Women and Macquarie University.

Huon Latham

Happiness Concierge’s Huon Latham teaches interpersonal communication skills across Australia, working with clients such as RMIT Activator, Swinburne University and the University of Sydney. Huon’s experience with research has been shaped by being raised in an academic family and by pursuing a career in education and teaching.

Rachel Service

Rachel Service is the Founder of Happiness Concierge. Working with some of the biggest and most influential experts in Australia, Rachel has built Happiness Concierge from a blog into the training company it is today, servicing clients including the Reserve Bank of Australia, Powercor, AGL, Vanguard Investments, Lazard Australia, RMIT, UNSW, chief scientists, biologists, doctors, and top researchers.


Fundamentals of Deep Learning for Computer Vision

Christopher Watkins1

1CSIRO, Melbourne, Australia

This workshop teaches you to apply deep learning techniques to a wide range of computer vision tasks through a series of hands-on exercises. You will work with widely used deep learning tools, frameworks, and workflows by performing neural network training and deployment on a fully configured, GPU-accelerated workstation in the cloud. After a quick introduction to deep learning, you will advance to building and deploying deep learning applications for image classification and object detection, then modify the applications to improve their accuracy and performance, and finish by implementing the workflow you have learned on a final project. At the end of the workshop, you will have access to additional resources for creating new deep learning applications on your own.

  • Duration: 8 hours
  • Certification: Upon successful completion of this workshop, you will receive NVIDIA DLI Certification to prove subject matter competency and support professional career growth
  • Prerequisites: Familiarity with basic programming fundamentals, like functions and variables
  • Languages: English
  • Tools, libraries, and frameworks: Caffe, DIGITS


At the conclusion of the workshop, you will have an understanding of the fundamentals of deep learning and be able to:

  • Implement common deep learning workflows, such as image classification and object detection
  • Experiment with data, training parameters, network structure, and other strategies to increase performance and capability of neural networks
  • Integrate and deploy neural networks in your own applications to start solving sophisticated real-world problems


Workshop outline:

Getting Started (45 mins)

  • Course Overview
  • Getting Started with Deep Learning

Introduction to deep learning, situations in which it is useful, key terminology, industry trends, and challenges.

Break (15 mins)

Unlocking New Capabilities (120 mins)

  • Biological inspiration for Deep Neural Networks (DNNs)
  • Training DNNs with Big Data

Hands-on exercise: training neural networks to perform image classification by harnessing the three main ingredients of deep learning: Deep Neural Networks, Big Data, and the GPU.

Break (45 mins)

Unlocking New Capabilities (40 mins)

  • Deploying DNN models

Deployment of trained neural networks from their training environment into real applications.

Measuring and Improving Performance (100 mins)

  • Optimizing DNN Performance
  • Incorporating Object Detection

Hands-on exercise: neural network performance optimization and applying DNNs to object detection.

Summary (20 mins)

  • Summary of Key Learnings

Review of concepts and practical takeaways.

Break (15 mins)

Assessment (60 mins)

  • Assessment Project: Train and Deploy a Deep Neural Network

Validate your learning by applying the deep learning application development workflow (load dataset, train model, deploy model) to a new problem.

Next Steps (15 mins)

  • Workshop Survey
  • Setting up your own GPU-enabled environment
  • Additional project ideas
  • Getting Data

Learn how to set up your GPU-enabled environment to begin work on your own projects, and get additional project ideas along with resources for the NVIDIA AMI on the cloud, nvidia-docker, and the NVIDIA DIGITS container.


This course is also available as a self-paced online option at


  • Learn how to build deep learning and accelerated computing applications across a wide range of industry segments such as Autonomous Vehicles, Digital Content Creation, Finance, Game Development, and Healthcare
  • Obtain guided hands-on experience using the most widely used, industry-standard software, tools, and frameworks
  • Attain real world expertise through content designed in collaboration with industry leaders such as the Children’s Hospital of Los Angeles, Mayo Clinic, and PwC
  • Earn NVIDIA DLI Certification to prove your subject matter competency and support professional career growth
  • Access courses anywhere, anytime with a fully configured GPU-accelerated workstation in the cloud

Next Generation eResearch Leaders Roundtable



  • The workshop provides a series of highly interactive sessions, with practical exercises that demonstrate the key learning outcomes. The session style also allows participants to share their own knowledge with one another.
  • Maximum of 36 participants.


The Next Generation eResearch Leaders Roundtable is a small-cohort workshop for new managers or leaders working in the eResearch system and individuals actively seeking their first management/leadership role. The roundtable provides over six hours of sessions, designed to broaden perspectives and develop leadership abilities, enabling attendees to assume greater leadership roles within their organisations.  The goals of the workshop are to enhance grounding in the overall context in which leadership takes place in the eResearch sector, develop an understanding of the style and context in which decisions are made, and enhance awareness of the need for strong communication, partnership building, and organisational skills.

You’ll have time for reflection, synthesis, and informal networking during the eResearch Australasia Conference.

You will have the opportunity to develop relationships with a cohort you can turn to for advice and guidance as you progress into senior leadership roles.



A 9.00am start and 5.00pm finish

1 hour for lunch

  1. Leadership in the eResearch Landscape. 60 minutes
  2. Emotional Intelligence. 45 minutes
  3. Influencing with Stories. 60 minutes
  4. Lunch. 60 minutes
  5. 20/20 Insight Session. 60 minutes
  6. Spheres of Influence and Partnerships. 45 minutes
  7. Organisational Decision Making. 60 minutes
  8. Leadership Roundtable. 60 minutes


The workshop is aimed at eResearch professionals who are in a management/leadership role or who aspire to be the next generation of managers and leaders in eResearch.


No special requirements for delegates.


Workshop for RSEs: Recognition and Career Development for Researchers who Code

Mr Rowland Mosbergen1, Mr Nicholas May2, Dr Georgina Rae3, Dr Manodeep Sinha4,5, Dr Lance Wilson6, Dr Steven Manos7

1ARDC, Parkville, Australia
2RMIT University, Melbourne, Australia
3NeSI, Auckland, New Zealand
4Centre for Astrophysics & Supercomputing, Swinburne University of Technology
5ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D)
6Monash eResearch Centre, Monash University, Clayton, Australia
7University of Melbourne



  • Workshop Length: 1 day
  • Hands-on component: No technical hands-on components
  • Attendees: All welcome.


This workshop will bring together people who identify as Research Software Engineers (RSEs), as well as leaders, policy makers and academics who are keen to see this community grow and be recognised. This one-day workshop is the first time such a community gathering will take place. The ambition is to build recognition and to better define career opportunities for RSEs.

The term RSE was originally coined by the UK RSE Association, which says the following about RSEs: “A growing number of people in academia combine expertise in programming with an intricate understanding of research. Although this combination of skills is extremely valuable, these people lack a formal place in the academic system.” In Australia, the term RSE includes academics/researchers who code but lack the recognition and metrics needed to progress their academic career; professional software engineers working in the research space; and system administrators who maintain research systems.

Inspired by the success of the RSE Association in the UK, we are seeking to establish an Australasian Chapter of the RSE Association. Together with international bodies and support from our national organisations such as AeRO, NeSI, CAUDIT, ARDC, and research institutions, we aim to campaign for the recognition and adoption of the RSE role within academia, alongside the need for appropriate recognition, reward and career opportunities. We plan to organise regular events to allow RSEs to meet, exchange knowledge and collaborate.

We ran an initial Expression of Interest survey in 2017 among Australian and New Zealand researchers and found that the majority of respondents prioritised: (1) increasing recognition of the RSE role, (2) building more appropriate, consistent and representative position descriptions and KPIs, and (3) developing a stronger community through regular collaborative events and gatherings.

Please join us at this event to actively work on how we can grow this community and advocate for each other. Together, we can build a sustainable community that supports RSEs, ultimately advancing our reputation in data-intensive research and contributing to more efficient and reproducible computational research.


  1. Introduction and background on the RSE landscape, including meetings held and an institutional review. 30 minutes
  2. Introductions and lightning talks from delegates. 60 minutes
  3. Break. 30 minutes
  4. Practical solution brainstorming breakout session, including current sustainable career stories. 90 minutes
  5. Break. 60 minutes
  6. Virtual meeting time to connect with other RSEs nationally and assign champions/steering committee members. 60 minutes
  7. Virtual breakout to organise into smaller teams and start planning implementation strategies. 30 minutes
  8. Break. 30 minutes
  9. Write up details of the day and close. 60 minutes


This workshop is about building a community for academics/researchers who create and maintain research software but lack the recognition and metrics needed to progress their academic careers. It is also for professional software engineers working in the research space; research support team members who work closely with researchers; system administrators who maintain research systems; academics who rely on such expertise; and eResearch leaders and policy makers.


Attendees need to bring enthusiasm and a willingness to volunteer their thoughts and their time, plus a lightning talk to introduce themselves and their tools for research.



Rowland is the Manager of Community Platforms at the ARDC. He has 20 years’ experience in IT, along with 8 years’ experience in research computing.

MyTardis: FAIR data management for instrument data

Conveners: Wojtek J. Goscinski1, Amr Hassan2

Presenters: Andrew Janke3, Andrew Mehnert4, Aswin Narayanan5, Dean Taylor6, James M. Wettenhall7, Jonathan Knispel8, Keith E. Schulze9, Lance Wilson10, Manish Kumar11, Samitha Amarapathy12

1Monash eResearch Centre, Monash University, Melbourne
2Monash eResearch Centre, Monash University, Melbourne
3National Imaging Facility, Centre for Advanced Imaging, The University of Queensland, Brisbane
4Centre for Microscopy, Characterisation and Analysis, The University of Western Australia, Perth
5National Imaging Facility, Centre for Advanced Imaging, The University of Queensland, Brisbane
6Centre for Microscopy, Characterisation and Analysis, The University of Western Australia, Perth
7Monash eResearch Centre, Monash University, Melbourne
8Centre for Microscopy, Characterisation and Analysis, The University of Western Australia, Perth
9Monash eResearch Centre, Monash University, Melbourne
10Monash eResearch Centre, Monash University, Melbourne
11Monash eResearch Centre, Monash University, Melbourne
12Monash eResearch Centre, Monash University, Melbourne


  • Workshop Length: One Day
  • This workshop will have the last 2 hours as a hands-on component


Research data management platforms aim to meet the challenges of capturing and managing large volumes of research data while ensuring that the data is Findable, Accessible, Interoperable and Reusable (FAIR). One such platform is MyTardis, an open source research data management platform that was initially established to handle and store macromolecular crystallography data {Meyer:2014ub, Androulakis:2008ku}. Through several national projects, such as the NeCTAR Characterisation Virtual Laboratory, ImageTrove and the ANDS Trusted Data projects, MyTardis has evolved into a general-purpose research data management system with a focus on integrating scientific instruments and instrument facilities. It is used across light microscopy, electron microscopy, proteomics, cytometry, magnetic resonance imaging (MRI), positron emission tomography (PET), and other scientific techniques. It integrates over 100 Australian instruments across Monash University, University of Queensland, University of Newcastle, University of New South Wales, RMIT, and University of Western Australia.

In this workshop, representatives from the Characterisation community will share their experience in developing and operating large deployments of MyTardis. We will emphasise how MyTardis helps to securely store and manage data from a variety of different instruments. We will also outline the short- to medium-term roadmap for MyTardis development and our plan to engage the wider community to help us build the next-generation platform for instrument data management. Finally, we will run a hands-on workshop on best practices for deploying and operating MyTardis, specifically targeted at developers and system administrators.

Workshop Contents:

Overview of MyTardis and its deployments

  • Overview of MyTardis
  • Developing and operating MyTardis at Monash University
  • NIF Trusted Data Repositories
  • Developing and operating MyTardis at the University of Queensland and NIF
  • Developing and operating MyTardis at the University of Western Australia
  • Developing and operating MyTardis at the University of Newcastle
  • MyTardis features for instrument facilities

Future Roadmap

  • The Future of MyTardis
  • Requirements from instrument facilities
  • Addressing FAIR by integrating with the experiment and trusted data repositories
  • Panel discussion / BoF: next-generation instrument data – future and challenges

Hands On

  • Hands on session on deployment of MyTardis


  • Instrument facility managers
  • Data Managers
  • IT Managers & Directors
  • Professionals in associated disciplines
  • Research Computing Specialists
  • Research Managers
  • University Representatives
  • Researchers
  • Librarians
  • Software & App engineers


Attendees need to bring a laptop.


Dr Wojtek James Goscinski is the coordinator of MASSIVE, a national high performance computing facility for data science, and Associate Director at the Monash eResearch Centre, a role in which he leads teams to develop and implement digital strategies to nurture and underpin next-generation research. He holds a PhD in Computer Science, a Bachelor of Design (Architecture), and a Bachelor of Computer Science.

Dr Amr Hassan is the eResearch Delivery leader at the Monash eResearch Centre. He leads a team of eResearch professionals to ensure the delivery of high-quality ICT services, projects and programmes that enable the achievement of the eResearch strategic agenda of Monash University. He holds an interdisciplinary PhD in Computational Sciences, an M.Sc in Scientific Computing, and a B.Sc. of Computer Science.


Systems Administration in Research Computing

Conveners: Mr Greg Lehmann1, Mr Jake Carroll2

Gin Tan3, Dr Robert Bell4, Michael Mallon6, Linh Vu7, Steve McMahon5

1CSIRO, Pullenvale, Australia
2The University of Queensland, St. Lucia, Australia
3Monash University, Melbourne, Australia
4CSIRO, Melbourne, Australia
5CSIRO, Canberra, Australia
6The University of Queensland, Brisbane, Australia
7The University of Melbourne, Melbourne, Australia


The workshop will be a full-day event without a hands-on component. There is no limit on the number of attendees, and no special equipment is required.


Research computing uses tools and techniques that are specialised in nature. Systems administrators working with these tools, and the scientists who use them, have a different skill set from the IT norm. This workshop will present information in this area and showcase use cases, with the aim of transferring knowledge between practitioners.

1. Workshop introduction and site introductions. 5 minutes per site, e.g.:

a. Pawsey
b. NCI
c. DST
d. Monash
e. Swinburne
f. CQU
g. From the floor

2. Space/data management techniques. Flushing, quotas and HSM with encapsulation. Data life cycle, dataset concept. Exclude publication of datasets. – various – Rob Bell, Greg Lehmann, David Rose
45 mins


3. BeeGFS Use Cases in Australian HPC – Jake Carroll and Greg Lehmann
(1) Filesystems for accelerated computing – Australia’s first all flash BeeGFS production environment

Through analysis and system observability, it has become evident that accelerated supercomputing presents a new kind of challenge to filesystems. This presentation discusses the challenges the University of Queensland faced in scaling DL, AI, ML and deconvolution workloads, and the pressures these workloads created on traditional parallel filesystems. Arriving eventually at an RDMA all-flash BeeGFS implementation, the presentation details the architectural considerations, workloads and corner cases that motivated such an approach.

(2) CSIRO’s new scratch FS – a first look, a couple of months in.
30 mins

4. A Year with CephFS for HPC – Linh Vu
This presentation discusses the findings and challenges that the University of Melbourne experienced within a year of implementing CephFS as the storage solution for our growing HPC service. I will talk about our journey from a small POC 6-node 768TB (raw) NLSAS cluster to over 10 times the size, with a mix of NLSAS, SAS SSD and NVME SSD storage pools to cater for different workloads. I will address the design, technical and managerial challenges we have had to face to bring a relatively unknown filesystem to HPC, which we are now heavily investing in.
30 mins

5. Efficiently sharing data between HPC and cloud computing platforms – Michael Mallon
One of the guiding principles of the Medici project is to make where data lives somewhat independent of how a researcher might want to consume it. Adhering to this principle enables researchers to choose the most appropriate tool for a particular part of a workflow without incurring a mirroring or replication overhead. One of the more difficult places to adhere to this principle is the intersection of cloud computing and HPC resources in workflows. I’ll talk about how we’ve addressed this using GPFS’s unified object and file interface and SwiftHLM.
30 mins


6. Ansible for Cluster Build – Gin Tan
The new M3 cluster is a little different from a traditional HPC cluster. The cluster sits on the Monash research cloud, and instances are provisioned with Ansible – we call it cluster-in-a-box. The idea is to be able to provision a cluster anytime, anywhere.
30 mins

7. OpenHPC Experiences on the UQ Wiener cluster – Jake Carroll
30 mins

8. Using Bright Cluster Manager to streamline and improve HPC operations – Steve McMahon
Managing HPC systems can be complex.  There’s a lot happening and a lot of things to check to make sure they are working correctly.  This talk is about how using a product like Bright Cluster Manager can simplify HPC operations, check for common problems and improve service levels.
30 mins


9. Slurm on OzStar at Swinburne – Chris Samuel
This short talk will cover how we use Slurm on Swinburne’s OzStar GPU cluster. It will cover what plugins we use, and why, as well as how we try and balance the various competing requirements for scheduling our workload through fair-share, partition configurations and our Lua job submit plugin. If time permits it will also cover as yet unsolved problems we wish to address.
30 mins

10. Scheduling containers in the cloud and HPC – Gin Tan
How we use the same container to run jobs in both Kubernetes and Slurm. The idea is to burst HPC workloads into the cloud; we are also looking for suggestions from the audience. The workload will be as simple as running TensorFlow in a container.
30 mins

11. HPC procurement panel discussion – various speakers including Jake Carroll
30 mins


IT workers who maintain the underlying Computing and Data Infrastructure used by scientists to do eResearch.


No special equipment required. Some background in IT required, preferably in HPC/Cloud computing.



Greg Lehmann has 35 years’ IT experience. He worked at the University of Queensland early in his career and has had varied mini-careers within CSIRO. At present he works in the data team, focused on filesystem delivery for HPC and cloud. He retains a strong interest in HPC systems in general, the focus of his previous role, and is also the InfiniBand fabric tech lead for CSIRO.

Jake Carroll is currently the Associate Director of Research Computing for UQ’s three large scientifically intensive research institutes – the Australian Institute for Bioengineering and Nanotechnology, the Institute for Molecular Bioscience and the Queensland Brain Institute.

Jake has spent the last 12 years in scientific computing, working on everything from building supercomputers to managing the strategy and complexity that comes with scientific endeavour.

Jake spends his time working to make scientific computing platforms, technology and infrastructure as good as they can be, such that world-class research can be conducted, unencumbered.


Collaborative Research in Practice – a roadmap to using Git and Docker

Convener: Rebecca Lange
Presenters: Dr Rebecca Lange1, Mark Gray2, Brian Skjerven2

1Curtin Institute for Computation, Curtin University, Perth, Australia
2Pawsey Supercomputing Centre, Perth, Australia


  • Workshop Length: 1 day
  • Hands-on component: Yes, we are aiming for half of the workshop to be hands-on
  • Attendees: less than 40


The Internet has made it easy for researchers to collaborate across universities using tools such as email, instant messaging and videoconferencing. However, these tools do not encourage transparency and reproducibility in research. For example, what happens if you forget to include someone on your email chain, and how do you reconcile different code snippets that various team members have worked on?

In this workshop we will first have a look at the modern scientific landscape in which code plays an increasingly important role in obtaining and reproducing research results.

We then move on to a hands-on introduction to version control (with Git and GitHub) and containerisation with Docker. Finally, we will conclude the workshop with a session on using version control and Docker in practice for a collaborative, portable workflow.

1. Welcome, introductions and workshop outline.

The importance of software in research and academia. 30 minutes

2. Introduction to version control with Git and GitHub – Part I. 60 minutes

  • What is version control
  • Setting up git
  • Creating your first repository
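
As a sketch of what these first steps look like in a terminal, the commands below set up an identity and create a repository (the name and email are placeholders):

```shell
# Create a project directory and turn it into a Git repository
mkdir my-analysis && cd my-analysis
git init                                  # creates the hidden .git directory
git config user.name "Your Name"          # placeholder identity, stored per-repository
git config user.email "you@example.org"
git status                                # confirms an empty repository, nothing committed yet
```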

3. Break – 30 minutes

4. Introduction to version control with Git and GitHub – Part II. 90 minutes

  • Tracking changes
  • Exploring the repo history
  • Remote repositories on GitHub
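
The track-and-inspect cycle covered in this part can be sketched as follows (the repository, identity and remote URL are placeholders):

```shell
# Track a change and inspect the history in a throwaway repository
mkdir demo-repo && cd demo-repo
git init -q
git config user.name "Demo"               # placeholder identity for the example
git config user.email "demo@example.org"
echo "print('hello')" > analysis.py
git add analysis.py                       # stage the new file
git commit -q -m "Add first analysis script"
git log --oneline                         # one-line summary of the history

# Connecting to GitHub (placeholder URL; requires an account and an empty remote repo):
# git remote add origin https://github.com/YOUR-USER/demo-repo.git
# git push -u origin main
```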

5. Lunch Break – 60 minutes

6. Introduction to containers using Docker and Docker Hub. 60 minutes

  • What is Docker
  • Creating a Docker container
  • Running a Docker container
  • Containers on Docker Hub
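
As a sketch of what creating a container involves, a minimal Dockerfile might look like this (the base image and script name are hypothetical):

```dockerfile
# Hypothetical minimal recipe: a base image plus one analysis script
FROM python:3.11-slim
WORKDIR /app
COPY analysis.py .
CMD ["python", "analysis.py"]
```

With Docker installed, `docker build -t my-analysis .` builds the image from this recipe, `docker run --rm my-analysis` runs it, and `docker push` can publish it to Docker Hub under your own account.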

7. Break – 30 minutes

8. A collaborative (software/coding) workflow in practice. 120 minutes

  • Working collaboratively on GitHub
  • Managing containers and dockerfiles with Git
  • Managing your workflows using containers
  • Where to next? Pointers to further reading and learning
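
One way the pieces above fit together: Dockerfiles live in the same Git repository as the code, and collaborative work happens on branches (on GitHub the merge step would be a pull request). All names below are placeholders:

```shell
# Version a Dockerfile alongside code, then merge a feature branch
mkdir team-repo && cd team-repo
git init -q
git config user.name "Demo"; git config user.email "demo@example.org"
echo "FROM python:3.11-slim" > Dockerfile    # the container recipe is versioned too
git add Dockerfile && git commit -q -m "Add base Dockerfile"
git checkout -q -b update-base               # do the work on a feature branch
echo "FROM python:3.12-slim" > Dockerfile
git commit -q -am "Bump base image"
git checkout -q -                            # back to the original branch
git merge -q update-base                     # on GitHub this would be a pull request
grep FROM Dockerfile                         # the merged container recipe
```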


This workshop is aimed at academics/researchers who regularly write and/or work with software, and research support staff who would like to know more about useful tools for code and workflow management. We will cover:

  • how to track changes,
  • how to make sharing of code and software easier, and
  • how to manage issue tracking and workflows in a collaborative software development environment.


Attendees will need to bring their own laptop with some software pre-installed.

It would be beneficial if participants have used a bash terminal (or similar) before.

We will provide information on what to install and how to install closer to the time.


Rebecca Lange received her PhD in astronomy from the International Centre for Radio Astronomy Research at the University of Western Australia.

Before Rebecca moved to Australia she studied Astronomy and Physics at Nottingham Trent University where she also worked as a research assistant in scientific imaging for art conservation and archaeology. Her work there included the development and testing of instruments and software for imaging and spectroscopy as well as the organisation and supervision of field trips, which often required liaising with art curators and conservators.

Throughout her studies and research, Rebecca has gained extensive experience in programming, data analytics and visualisation across various programming languages.

Currently she is working as a data scientist for the Curtin Institute for Computation where she helps researchers by providing data analytics and computational support and training.

