Best Practice for FAIR and Sustainable Research Software

Dr Michelle Barker1, Mr Nicholas May2, Mr Tom Honeyman3, Rebecca Lange4, Justin Baker5

1Research Software Alliance
4Curtin University


This BoF aims to utilise audience expertise to identify examples of best practice in developing FAIR/sustainable software for research, to advance community knowledge and networks.

The FAIR For Research Software Working Group (FAIR4RS WG) is leading the research software community in the crucial step of agreeing how to apply the FAIR principles to research software by mid-2021. This BoF will engage the audience by identifying local examples of best practice in creating FAIR software and utilising the Lamprecht et al paper to provide guidance on what the FAIR principles might include. The best practice examples can be used to promote the FAIR principles for research software when finalised.

This BoF brings together 3 organisations:

– Research Software Alliance (ReSA)’s co-convening of the FAIR4RS WG with Research Data Alliance (RDA) and FORCE11 to create international community-agreed standards and guidelines

– RSE-AUNZ in sharing the best research software practices across the community

– ARDC in working towards research software as a first-class research output

Why attend?

– Engagement with work developing the FAIR principles for research software

– Sharing of best practice to enable knowledge transfer across projects

– Development of networks across the RSE community

– Contribution to a set of impact stories from the community demonstrating best practice for sustainable research software.

– Increased awareness of the activities of RSE-AUNZ, ReSA and ARDC


– Three short talks, one from each organisation

– Topic identification to guide breaking into smaller groups

– Breakout into groups

– Report backs


Michelle is the Director of the Research Software Alliance (ReSA). She has extensive expertise in open science, research software, digital workforce capability and digital research infrastructure. As a sociologist, Michelle is passionate about building collaborative partnerships to achieve system change.

She recently chaired the OECD Global Science Forum expert group on digital skills for the research sector, and is an Advisory Committee member of the US Research Software Sustainability Institute (URSSI). Michelle is a former Director of the Australian Research Data Commons.

Research tool roadmap

Dr Christopher McAvaney2, Dr Jerry Lai1, Dr Yoong Goh1

1Intersect Australia, Sydney, Australia, 2Deakin University, Melbourne, Australia

‘Should I use R or Python in my research?’, ‘Can Tableau do anything that Power BI can do?’ and ‘Is Excel able to process my 3 terabytes of data?’ are questions researchers frequently ask when deciding which tools to use in their research. The primary reason for these questions is that people, in general, do not know enough about the pros and cons of these tools in relation to their research context. As a result, many researchers select tools that are not best suited to their research timeline and budget.

To assist researchers with the selection process, we propose to design a roadmap for research tools. The roadmap would be structured as an interactive decision tree which can be implemented into an online recommender system.

As the initial stage of the roadmap, we focus primarily on data collection, manipulation and visualisation tools that are commonly available to researchers. We began by designing a rating scale to measure the usability, complexity/technical threshold, and ubiquity of these tools in different research contexts. Moreover, we are also considering other technical and financial aspects, such as operating platforms, licensing and the availability of support within Deakin University. This roadmap will provide a template for designing decision trees for other tools for the wider eResearch community.
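The interactive decision tree described above could be sketched as a small recommender function. The questions, thresholds and tool names below are illustrative stand-ins only, not the actual roadmap criteria being developed at Deakin:

```python
# Hypothetical sketch of a decision-tree tool recommender. Each branch is one
# question a researcher might be asked; the thresholds and tool names are
# illustrative, not the roadmap's real rating scale.

def recommend_tool(data_size_gb, needs_coding, needs_free):
    """Walk a tiny decision tree and return a suggested tool."""
    if data_size_gb > 100:  # beyond what spreadsheet-style tools handle well
        return "Python" if needs_coding else "Power BI (with a database backend)"
    if not needs_coding:
        return "Tableau" if not needs_free else "Excel"
    return "R" if needs_free else "Python"
```

An online recommender system would replace these hard-coded branches with the rating-scale data (usability, technical threshold, licensing) gathered for each tool.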

In this presentation, we will provide a progress update and details of the roadmap development at the initial stage which may lead to an online recommender system.


Andrew (Yoong) has a strong background in computer science and mathematics. His work interests are natural language processing, data analysis and machine learning.

Jerry has a background in psychological science and statistics. Jerry works with multiple research teams in Deakin on survey design; statistical programming; data-analytics and visualisation and promoting statistical literacy.

Christopher McAvaney is Services Manager, eResearch at Deakin University. He is responsible for establishing an eResearch program of work at Deakin which includes REDCap. Christopher has extensive research administration experience at Deakin and is keen to leverage as much as possible from REDCap via its extensible design.

Training for and by the RSE community

Dr Rebecca Lange1, Heidi Perrett6, Dr Manodeep Sinha2, Mrs Nooriyah Lohani3, Mr Justin Baker4, Nicholas May5

1Curtin University, Bentley, Australia
2Swinburne University of Technology, Hawthorn, Australia
3New Zealand eScience Infrastructure (NeSI), New Zealand
4CSIRO, Australia
5RMIT University, Melbourne, Australia
6Ceres Tag, Australia

The successful collaboration between Research Software Engineers (RSEs) and researchers depends on:

– the RSEs’ understanding of the different workflows of software versus research projects,

– the RSEs’ technical and soft skills, and

– the researchers’ basic understanding of the technical skills required.

All of these aspects are underpinned by training for, and often by, RSEs.

This BoF aims to establish an overview of this training landscape and its gaps by exploring two questions:

What training would be beneficial for RSEs’ career development?

The technical landscape changes constantly, and RSEs need (re-)training to stay up-to-date with new developments. Additionally, senior RSEs need management skills for further career advancement.

In this part of the BoF, participants will share and discuss the training available to RSEs across AUNZ and whether it covers the skills and professional development desired by RSEs.

What training would improve the interactions between RSEs and researchers?

Researchers are not software developers, nor do they need to be. However, having a basic understanding of coding and software development practices will improve communication between RSEs and researchers and lead to better project outcomes.

Participants will share and discuss what computational training they are expected to deliver and whether it actually covers the concepts researchers should be taught to enable better collaboration with RSEs.

The outcomes will be collated and shared publicly, and a working group may be established during the BoF to assess training needs and evaluate mechanisms for establishing best practices for employers in training for and by RSEs.


Rebecca is a data scientist at the Curtin Institute for Computation where she applies her research, data analysis, and coding skills to support researchers with their computational projects. In addition, she leads the Curtin University node of the Astronomy Data And Computing Services (ADACS) initiative and oversees the ADACS computational training efforts.

Across her roles Rebecca actively supports and promotes open source software and open source development. She is  involved with the open astronomy community, a member of the AUNZ Research Software Engineer (RSE) association steering committee, and a founding and organisational member of the R-Ladies Perth chapter.

Research software engineers: Creating a community and an identity within complex research ecosystems

Mrs Nooriyah Lohani1, Mr Nick May2, Mr Justin Baker3, Dr Rebecca Lange4, Dr Manodeep Sinha5, Heidi Perrett6

1New Zealand eScience Infrastructure (NeSI), New Zealand
2RMIT University, Melbourne, Australia
3CSIRO, Clayton, Australia
4Curtin University, Bentley, WA, Australia
5Centre for Astrophysics & Supercomputing at Swinburne University, Melbourne
6CERES, Brisbane

The Research Software Engineering (RSE) movement, which started in the UK around 2013, describes the RSE community as: “A growing number of people in academia combine expertise in programming with an intricate understanding of research. Although this combination of skills is extremely valuable, these people lack a formal place in the academic system.”

In Australasia, we interpret the term RSE inclusively to encompass: academics and researchers who code; professional software engineers working in the research space; system administrators who maintain research systems and closely interact with their researchers; and generalists who bring communities together and can engage with both researchers and technical staff. RSEs are now acknowledged in many countries across Europe and North America, and are gaining recognition in Australia and New Zealand.

This presentation will highlight trends in the global RSE movement and provide an update on the current status of the RSE community in Australia and New Zealand (RSE-AUNZ), which has seen an upward trend in membership since 2018. The relationship between RSE-AUNZ and other communities within the research ecosystem will also be explored. Finally, we will look at the various ways that an RSE can contribute to and benefit from the RSE-AUNZ community.


Nooriyah Lohani is a research communities advisor at the New Zealand eScience Infrastructure, a national HPC provider in New Zealand. Nooriyah has a background in genetics and computer science, and has worked as a bioinformatician at the University of Auckland and at the cancer diagnostics company Pacific Edge.

Nicholas May is an accredited software engineer with over thirty years of Information Technology experience, across a variety of roles, languages, systems and domains. The last thirteen years have been spent in the research domain, partly in a software architecture research group, but mostly helping researchers with software and data projects.

Machine learning for the rest of us

Dr Chris Hines1

1Monash eResearch Centre, Clayton, Australia


Neural networks are the new hawtness in machine learning and, more generally, in any field that relies heavily on computers and automation. Many people feel their promise is overhyped, but there is no denying that the automated image processing available today is astounding compared to ten years ago. While the premise of machine learning is simple, obtaining a large enough labelled dataset, creating a network and waiting for it to converge before you see a modicum of progress is beyond most of us. In this talk I consider a hypothetical automated kiosk called “Beerbot”. Beerbot’s premise is stated simply: keep a database of how many beers each person has taken from the beer fridge. I show how existing open-source published networks can be chained together to create a “good enough” solution for a real-world situation, with little data collection or labelling required by the developer and no more skill than a bit of basic Python. I then consider a number of research areas where further automation could significantly improve “time to science” and encourage all eResearch practitioners to have a go.
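The chaining idea can be illustrated with a short sketch. Here the two detector functions are stand-ins for off-the-shelf pretrained networks (a face recogniser and an object detector filtered to the bottle class); everything about them, including the frame format, is hypothetical glue code, not the actual Beerbot implementation:

```python
# Illustrative "Beerbot"-style chaining: one pretrained network identifies who
# is at the fridge, another counts bottles, and plain Python joins the results
# into a running tally. The detectors below are stand-ins for real models.

from collections import Counter

def identify_person(frame):
    # stand-in for a pretrained face-recognition network
    return frame.get("face")

def count_beers(frame):
    # stand-in for a pretrained object detector filtered to the "bottle" class
    return frame.get("bottles", 0)

def update_tally(tally, before_frame, after_frame):
    """Charge the person seen at the fridge for any bottles that went missing."""
    person = identify_person(after_frame)
    taken = count_beers(before_frame) - count_beers(after_frame)
    if person and taken > 0:
        tally[person] += taken
    return tally

tally = Counter()
update_tally(tally, {"bottles": 12}, {"face": "alice", "bottles": 10})
```

The point of the talk is that the hard parts (the two networks) already exist as published models; only the dozen lines of glue are left to the developer.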


Chris has been kicking around the eResearch sector for over a decade. He has a background in quantum physics and, with the arrogance of physicists everywhere, thinks this qualifies him to stick his big nose into topics he knows nothing about.

Transforming Research Code to A Modern Web Architecture – Pipetools

Paulus Lahur, Kieran Lomas

CSIRO, Clayton, Australia



In this paper, we outline the process of transforming research code into a web application, using the Pipetools project as a case study. The target is to reach a wide audience of users who can benefit from the code. We are constructing infrastructure and code that support and encapsulate the research code, to significantly improve its usability as well as expand its modes of usage. The project is currently moving along at a reasonable pace. We would like to share the challenges we have faced and our thought process in solving them. The lessons learned here will hopefully benefit researchers and software developers working on similar projects.


Research code is a highly valuable asset hidden deep inside research institutions. It typically runs on a very specific device and environment, and is accessible only to a few researchers. Throughout the course of the research, algorithms and code are accumulated and improved. As the research matures, the potential benefit to other people increases. In many cases, there will be people who are willing to pay to use the software. The problem is, the software is practically usable only by those who made it, or at least by those who have an intimate understanding of how it works. In order to make the software usable to a wider audience, another stage of software development is required. More code needs to be built around the research code in order to improve the user experience.

There are two major approaches here. The first is to make a “native application”, that is, software that is native to a certain operating system; in fact, the research software itself belongs to this type. The other approach is to turn it into a web application, that is, software that runs on a remote machine. Although there are many pros and cons to either approach, we opt for the latter because it is accessible to people on various operating systems, and is therefore easier to support and maintain. Software licensing and protection become simpler. Rolling out a new version is also trivial. Furthermore, a web application opens the door to collaborative work, where a number of people, possibly in different parts of the world, work on the same set of data.


In order to develop an effective solution, we need to create a modular system, where developers can focus on specific modules. This is outlined in Figure 1. In essence, the development is split into these parts:

  • Front End. It deals with the user interface. It translates user commands and sends them to the Back End.
  • Back End. It receives commands from the Front End and calls the research code to do the actual computation.
  • Infrastructure. It deals with services that enable the Front and Back Ends to work. This includes containers, as well as continuous integration and deployment.

Each part has its own challenges. Details of the system will be presented in the paper.
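The Front End / Back End split above can be sketched in a few lines: the front end sends a JSON command, the back end validates it and invokes the research code. The `run` action and the `run_research_code` stub are hypothetical placeholders, not the actual Pipetools interface:

```python
# Minimal sketch of the back end's role: parse a front-end command, dispatch
# it, and wrap the research code's output as JSON. The command schema and the
# research-code stub are illustrative only.

import json

def run_research_code(params):
    # stand-in for the encapsulated research code (in practice, e.g. a
    # subprocess call into the original application)
    return {"status": "ok", "result": sum(params.get("values", []))}

def handle_request(raw_body):
    """Back-end entry point: parse a front-end command and dispatch it."""
    command = json.loads(raw_body)
    if command.get("action") != "run":
        return json.dumps({"status": "error", "reason": "unknown action"})
    return json.dumps(run_research_code(command.get("params", {})))

reply = handle_request('{"action": "run", "params": {"values": [1, 2, 3]}}')
```

Keeping this dispatch layer thin is what lets the research code itself remain untouched while the web layers evolve around it.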

Figure 1: Simplified layout


Research code: Lionel Pullum (Mineral Resources, Clayton VIC)

Project manager: Andrew Chryss (Mineral Resources, Clayton VIC)

Team lead on IMT side: Daniel Collins (IMT, Kensington WA)

Front End: Kieran Lomas (IMT, Clayton VIC)

Back End: Paulus Lahur, Sam Moskwa (IMT, Clayton VIC)

Infrastructure: Dylan Graham, Andrew Spiers, Sam Moskwa (IMT, Clayton VIC)

Paulus Lahur has been a member of CSIRO staff since 2015, working in the Scientific Computing group of IMT.

Framework for fully automated analysis of neuroimaging data

Thomas G. Close1,2, Francesco Sforazzini1, Phillip G. D. Ward1,3,4, Zhaolin Chen1,5, Gary F. Egan1,3,4

1Monash Biomedical Imaging, Melbourne, Australia,

2Australian National Imaging Facility, Australia

3Australian Research Council Centre of Excellence for integrative Brain Function, Melbourne, Australia

4Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Melbourne, Australia.

5Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia



Despite the availability of well-established analysis packages, designing comprehensive and portable neuroimaging workflows is challenging due to the wide variety of tools, parameters, and scanner configurations involved. Abstraction of Repository-Centric ANAlysis (Arcana) is a Python framework for designing complex workflows that are comprehensive, portable between sites and scalable to large studies. Several neuroimaging workflows, including fibre tracking and vein analysis, were implemented using Arcana in the NiAnalysis package, and run over a large cohort to demonstrate the scalability of this approach. These workflows are integrated with Monash Biomedical Imaging’s XNAT repository [1] and can be triggered on data ingest to fully automate analyses.


Arcana builds on Nipype [2] to encapsulate repository data within Python classes (Fig. 1). Central to its design is the Study class, which specifies all products, and the pipelines to produce them, that can be derived from a specific set of acquisitions. Study objects aggregate modular repository (XNAT [1], BIDS [3], or a custom directory format), execution (linear, multi-process and SLURM) [2] and data-matching components, in order to manage the derivation of products on demand.

Imaging contrast or modality-specific analyses are implemented in Arcana by sub-classing the Study class to specify new products and pipelines (Fig. 1). Multi-contrast/modality studies can be represented by combining Study classes in MultiStudy classes.
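The Study pattern described above can be sketched schematically: a base class maps named products to the pipelines that derive them, and analyses are added by subclassing. The class and method names below are illustrative only; see the Arcana package itself for the real API:

```python
# Schematic sketch of the Study pattern: products are derived on demand by
# named pipelines and cached, and new analyses are added by subclassing.
# All names here are illustrative, not the actual Arcana interface.

class Study:
    data_specs = {}  # product name -> name of the method that derives it

    def __init__(self, acquisitions):
        self.data = dict(acquisitions)  # acquired inputs plus derived products

    def derive(self, name):
        """Return a product, running its pipeline on demand and caching it."""
        if name not in self.data:
            pipeline = getattr(self, self.data_specs[name])
            self.data[name] = pipeline()
        return self.data[name]

class VeinStudy(Study):
    data_specs = {"vein_mask": "vein_pipeline"}

    def vein_pipeline(self):
        # stand-in for the composite vein-mask workflow: threshold the SWI data
        return [v > 0.5 for v in self.derive("swi")]

study = VeinStudy({"swi": [0.2, 0.9, 0.7]})
mask = study.derive("vein_mask")
```

The caching in `derive` mirrors how Arcana saves intermediate products back to the repository so that subsequent analyses can reuse them instead of recomputing.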

Susceptibility weighted imaging (SWI) (1.8 mm, 256×232×72, TE=20ms, TR=30ms), MPRAGE (1 mm, 256×240×192, TR=2300ms, TE=2.07ms), and dMRI (2 mm, 110×100×60, TE=95ms, TR=8200ms, 33 directions with b=1500 s/mm² and 3 b=0) datasets were acquired for 544 healthy elderly subjects.


Pipelines for producing composite vein masks from QSM, SWI and T1-weighted MRI [4], and white matter tractograms from diffusion-weighted MRI [5], were implemented in the Arcana framework and applied to a cohort of healthy elderly subjects. Composite vein masks were produced for 535 subjects without manual intervention, within 13k compute hours on a cluster of Intel Xeon E5 2.50GHz nodes. Data is pulled from, and derived products pushed to, an XNAT repository, with processing jobs submitted to MASSIVE [6].


The encapsulation of repository data and pipeline generation by Arcana enables workflow designers to create portable, comprehensive workflows while focusing purely on the core logic of their analysis. Arcana’s modular pipeline and inheritance architecture promotes code reuse through the sharing of common segments (e.g. registration) and modifications of existing workflows. Intermediate products are saved in the repository and can be reused by subsequent analyses, saving computation time and manual quality control checks.

When using an XNAT repository, it is possible to trigger Arcana workflows on data ingest and thereby fully automate the analysis. This automation makes it practical to analyse data from large studies as they are acquired, and to identify any issues with the acquisition protocol that might arise (e.g. from scanner upgrades or hardware faults).

Figure 1: UML description of the Arcana framework and its application to neuroimaging analysis.
Boxes: Python classes (blue=core, green=interchangeable, grey=specialised). Arrows: orange=data, magenta=processing, diamond=aggregated-in, triangle=subclass-of. The Study class generates the requisite pipelines (specified in Study.data_specs) to produce requested data and uses the runner to execute them. Data is pulled and processed, then the products are pushed back to the repository.


By managing the complete flow of data from/to a repository in a flexible and extensible manner, Arcana enables the automation of complex analyses of large-scale neuroimaging studies.


  1. Marcus, D.S., Olsen, T.R., Ramaratnam, M., & Buckner, R.L. (2007), ‘The extensible neuroimaging archive toolkit’. Neuroinformatics, vol. 5, pp. 11–33.
  2. Gorgolewski, K., Burns, C.D., Madison, C., Clark, D., Halchenko, Y.O., Waskom, M.L., & Ghosh, S.S. (2011), ‘Nipype: A Flexible, Lightweight and Extensible Neuroimaging Data Processing Framework in Python’. Frontiers in Neuroinformatics, vol. 5.
  3. Gorgolewski, K.J., Auer, T., Calhoun, V.D., Craddock, R.C., Das, S., Duff, E.P., et al. (2016), ‘The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments’. Scientific Data, vol. 3.
  4. Ward, P.G.D., Ferris, N.J., Raniga, P., Dowe, D.L., Ng, A.C.L., Barnes, D.G., & Egan, G.F. (2018), ‘Combining images and anatomical knowledge to improve automated vein segmentation in MRI’. NeuroImage, vol. 165, pp. 294–305.
  5. Tournier, J.D., Calamante, F., & Connelly, A. (2012), ‘MRtrix: Diffusion tractography in crossing fiber regions’. International Journal of Imaging Systems and Technology, vol. 22, pp. 53–66.
  6. Goscinski, W.J., McIntosh, P., Felzmann, U.C., Maksimenko, A., Hall, C.J., Gureyev, T.D., Thompson, D., Janke, A., Galloway, G., Killeen, N.E.B., Raniga, P., Kaluza, O., Ng, A., Poudel, G., Barnes, D., Nguyen, T., Bonnington, P., & Egan, G.F. (2014), ‘The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research’. Frontiers in Neuroinformatics.


Tom completed his PhD on global tractography methods from diffusion MRI at the University of Melbourne. He became interested in neuroinformatics during a post-doc at the Okinawan Institute of Science and Technology, Japan, where he worked on standardising model descriptions of neural systems. Since returning to Melbourne, he has been developing robust and reproducible informatics workflows for the analysis of neuroimaging data at Monash Biomedical Imaging.

Access to the Atlas of Living Australia’s data and tools using ALA4R

Ms Peggy Newman1

1Atlas of Living Australia, Carlton, Australia, 


The Atlas of Living Australia (ALA) [1] is almost 10 years old. Its primary goal was always clear: to aggregate biodiversity data and make it available for reuse, and it currently holds around 75 million occurrence records for more than 120,000 species. Now, in the heyday of big data, the ALA finds itself able to comfortably provide large, interesting, standardised datasets to anyone who wants them through a ‘data as a service’ model. Internal infrastructure and external services are built on the same set of APIs.

In that time the open-source R language has become a tool of choice for researchers for statistical computing, publication-grade visualisation and reproducible research. The CRAN repository hosts more than 12,000 R packages, amongst which ecology and spatial research are well represented. ALA4R joins a newer brigade of packages which provide an R wrapper to ALA’s APIs for convenient data retrieval.

ALA4R’s core functions reflect ALA’s core functions: searching for information about species and names, providing taxon information, and downloading occurrence and environmental/contextual information. The package caches datasets locally to minimise network traffic. This presentation demonstrates some of ALA4R’s functionality using two case studies.


  1. Atlas of Living Australia (ALA) –
  2. R –
  3. ALA4R Package –


Peggy Newman is a Project Manager at the Atlas of Living Australia with a background in software engineering and database development.

Delivering Software Solutions to Astronomy Researchers

Prof. Jarrod Hurley1, Dr Jenni Harrison2, Dr Rory Smith4, Dr Greg Poole3

1Swinburne University of Technology, Melbourne, Australia

2Pawsey Supercomputing Centre, Perth, Australia

3Swinburne University of Technology, Melbourne, Australia

4Monash University, Melbourne, Australia



Astronomy Data and Computing Services (ADACS) was established in early 2017 by Astronomy Australia Limited (AAL) to empower the national astronomy community to maximize the scientific return from their data and eResearch infrastructure. ADACS is delivered through a partnership that has been created between Swinburne University, Curtin University and the Pawsey Supercomputing Centre – comprising Melbourne-based and Perth-based nodes.

A key element of ADACS is to provide professional software development and data management services to astronomy researchers. By developing partnerships between researchers and computational scientists – adding professional software engineering, project management, system analysis and design methodologies to projects – we aim to address the quality and performance benchmarks that can be lacking in platforms developed by researcher-only teams. Targeted eResearch fields include big-data analysis and processing, optimization of workflows for high-performance computing, parallel computing techniques, data-sharing and dissemination, large-scale visualization and construction of real-time data analysis platforms. The long-term goal is to provide a central hub for resources in these areas that can be accessed by astronomers – embedding the service within projects to develop nationally significant software pipelines and data platforms.

The proposed session is aligned with the Research Software Engineering stream and specifically the Software as a Service and Development Methods, Tools & Practices themes.

Astronomers apply for ADACS services through a merit-based allocation scheme that is operated twice per year, first submitting an expression of interest, then working with an ADACS member to develop the technical requirements for their project and finally submitting a full proposal which is considered by an independent Time Allocation Committee drawn from the national astronomy community by AAL. Proposals are ranked on merit and matched against the developer weeks (and expertise) available (generally the equivalent of 2-3 full-time developers per year). To date the scheme has focused on short to mid-range projects with an over-subscription rate of 300-500%. This clearly demonstrates a need for such services and a lack of provision in the past. Projects have ranged from developing graphics programming unit (GPU) algorithms for speeding up gravitational wave analysis to enhancing the user interface and back-end for citizen science projects.

A key aspect of the success of these projects is a constructive working relationship between the computational scientists and the researcher (the sponsor), and how this operates within a project management framework. Such a framework can be a foreign environment for a researcher (e.g. working in sprints, providing user stories upfront), so there is a need for adaptability on both sides. In this session we aim to explore this relationship, presenting ADACS case studies from both perspectives, with the aim of sharing our experiences to date and starting discussions with others who have worked through similar experiences. We are also interested in discussions around how we ensure the long-term sustainability of such schemes, how the ongoing needs of delivered projects should be managed, and how we can prioritise larger-scale, technically challenging projects while still meeting the needs of the general research community.

The proposed BoF session is intended to be 40 minutes in duration.

The proposed format of the session is a mixture of presentations, contributed talks and facilitated discussion as follows:

  • Introduction to ADACS and the methodology behind the delivery of software as a service to researchers within the national astronomy community [10 mins];
  • Case study of an ADACS development project from the developer perspective, focusing on project management, techniques applied and delivered outcomes [10 mins];
  • Case study of an ADACS development project from the researcher perspective, focusing on the science goals, project interaction experience and application of the delivered outcome [10 mins];
  • Facilitated discussion on the developer-researcher relationship when delivering software as a service to a research community, including best practice and lessons learnt to date [10 mins].

The case study from a researcher perspective will be sourced from the pool of completed ADACS projects as a contributed talk. The introduction, case study from a developer perspective and discussion will be provided/convened by the listed convenors/presenters who are all ADACS members.

The targeted audience for the session includes researchers and technical staff with an interest in bringing professional software development practices into the methodology of the scientific research community. The audience need not be astronomy specific. In fact, a primary goal for the facilitated discussion is to initiate conversations aimed at translating support services across scientific domains, promoting collaboration and skill sharing between like-minded entities.


An example of a success story for an already completed ADACS project can be found here:

Below are two examples of ADACS projects currently under development. Both are on track for completion by end of June 2018.

  1. An Automated Data Reduction Pipeline for AAO Data Central

Lead Researcher – Simon O’Toole (Australian Astronomical Observatory: AAO)

Development – ADACS Perth node

Summary: This project will create a data reduction pipeline using Python and Django to manage CLI functions of the 2DFDR application via a RESTful API. The API will be accessible to the AAO team in addition to their partners.

  2. GPU Acceleration of Gravitational Wave Signal Models

Lead Researcher – Rory Smith (Monash)

Development – ADACS Swinburne node

Summary: This project will develop a CUDA-based GPU implementation of highly parallelisable gravitational-wave signal models to alleviate the computational bottleneck in Laser Interferometer Gravitational-Wave Observatory (LIGO) parameter estimation codes. Parameter estimation is an optimal tool for gravitational-wave signal detection, but its current high computational cost prohibits its use as a search pipeline.


Professor Jarrod Hurley has led the supercomputing program at Swinburne for the past decade and is the manager of the NCRIS-funded OzSTAR national facility. Hurley obtained his PhD in astrophysics at the University of Cambridge (UK) before research positions as a Hubble Fellow at the American Museum of Natural History (USA) and at Monash University. Hurley has a strong research background in computational astrophysics, specialising in realistic N-body simulations of star cluster evolution. He was a founding member of the Astronomy Supercomputing Time Allocation Committee (ASTAC), a former steering committee member for the Australia National Institute for Theoretical Astrophysics (ANITA), a member of the Astronomy eResearch Advisory Committee (AeRAC) for Astronomy Australia Limited (AAL) and manager of the Swinburne node of the Astronomy Data and Computing Services (ADACS) initiative. Hurley is passionate about creating a HPC environment that readily adopts new technology and enables researchers to meet their research goals.

BoF for RSEs: Recognition and Career Development for Researchers who Code

Ms Kerri Wait1, Dr Rebecca Lange2, Ms Amanda Miotto3, Dr Manodeep Sinha4,5, Dr Jens Klump6, Mr Rowland Mosbergen7, Dr Steven Manos8, Ms Heidi Perrett9

1 Monash eResearch Centre, Monash University, Australia,

2 Curtin Institute for Computation, Curtin University, Perth, Australia,

3 eResearch Services Griffith/QCIF, Griffith University, Nathan, Australia,

4 Centre for Astrophysics & Supercomputing, Swinburne University of Technology, Australia,

5 ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D)

6 CSIRO Mineral Resources, Perth, Australia,

7 University of Melbourne, Parkville, Australia,

8 University of Melbourne, Australia,

9 eResearch Support Services, Griffith University, Nathan, Australia, 



This BoF is to build awareness of the Research Software Engineer (RSE) community and to identify volunteers to help implement the practical solutions for the RSE community that have been identified at the pre-conference workshop. If you are an academic/researcher who codes but are lacking the recognition and metrics needed to progress your academic career; a professional software engineer working in the research space; or a system administrator who maintains research systems, you should attend.

The UK RSE Association, which originally coined the term RSE, says the following about RSEs: “A growing number of people in academia combine expertise in programming with an intricate understanding of research. Although this combination of skills is extremely valuable, these people lack a formal place in the academic system.”

Inspired by the success of the RSE Association in the UK, we are seeking to establish an Australasian Chapter of the RSE Association. Together with international bodies and support from our national organisations such as AeRO, NeSI, CAUDIT, the Australian Research Data Commons (ARDC), and research institutions, we aim to campaign for the recognition and adoption of the RSE role within the research ecosystem. Alongside this, appropriate recognition, reward and career opportunities for RSEs are needed. We plan to organise regular events to allow RSEs to meet, exchange knowledge and collaborate on methods to create these opportunities.

We ran an initial Expression of Interest survey in 2017 among Australian and New Zealand researchers and found that the majority of respondents prioritised: (1) Increased recognition of the RSE role, (2) More appropriate, consistent and representative position descriptions and KPIs, and (3) Community development through regular events and gatherings.

Please join us at this event to actively work on how we can grow this community and advocate for others. Together, we can build a sustainable community that benefits research software engineers and ultimately contributes to more efficient and reproducible research.


A quick presentation of “Who is an RSE?”, the problems that RSEs face, the status of different institutions with regard to employing RSEs, and suggested solutions will kick off the BoF. There will then be a breakout session to identify who would like to volunteer their time for which solution.


  • Research Software Engineers Association. Available from:, accessed 6th June


Kerri Wait is an HPC Consultant at Monash University. As an engineer, Kerri has a keen interest in pulling things apart and reassembling them in novel ways. She applies the same principles to her work in eResearch, and is passionate about making scientific research faster, more robust, and repeatable by upskilling user communities and removing entry barriers. Kerri currently works with the neuroscience and bioinformatics communities.

