Don’t Forget the Workflows – the Drivers of the Compute and Storage Infrastructure

Mr Jake Carroll1

1The University of Queensland, QBI, St Lucia (Brisbane), Australia,


Within the research computing sector, significant time, money and resources are spent on the architecture, engineering and design of infrastructure to undertake a task. Whether the expenditure is in building localised capability or in the consumption of cloud resources, the result is generally the same: a user, group or institution pays for the capability to run a task on a platform to achieve a scientifically significant output. Missing from this, however, and oftentimes left to the end user to consider for themselves, is the question of which workflows best complement the infrastructure and interlock with it to deliver the best performance characteristics. Workflows [1] are not novel and have an established place in scientific computing. A growing trend, and a performance-driven objective, is the use of workflow-aware infrastructure and its converse, infrastructure-aware workflows. This presentation will provide insight into the efforts of the University of Queensland's Research Computing Centre in the development, surfacing and eventual use of a data-locality, caching and workflow-aware storage infrastructure engine, known as MeDiCI (Metropolitan Data Intensive Caching Infrastructure). The presentation will further illustrate a real-life example of how this infrastructure is being used to deliver a multi-site, multi-supercomputer, data-locality-aware scientific outcome.


The concept of location-aware infrastructure is still relatively nascent and is the subject of significant research in industry and academia. At a macro level, in code-path optimisation, a great deal of work has taken place in finding lower-cost code paths with respect to core-to-memory and core-to-L1/L2/L3 access latency, to avoid inefficiency in pipeline execution [2]. One layer where data-caching locality has so far received only light treatment, however, is the storage I/O layer. The MeDiCI architecture, and the underpinning work of Abramson et al., challenges the notion that a supercomputer should hold data in one location at all times simply because of the workflow that initially generated it. Moreover, there are circumstances where the generation of data is widely distributed and disconnected from the processing hub or sink. MeDiCI aims to tie instrument and processing together via intelligent policy, autonomous filesystem movement semantics and workflow-aware migratory patterns.

Figure 1: The caching architecture touch points of the UQ campus for the MeDiCI framework

In several cases, workflows have been constructed to take advantage of advanced computing functionality locally, within organisational units of the University of Queensland, and equally at the remote site where the majority of the larger-scale supercomputing platforms reside. One such example is the work of UQ's largest genomics research group, who have found utility in local HPC facilities as well as remote HPC facilities for different workloads. Combining these facilities gives the group not only a greater number of resources and more scale, but flexibility in determining where a workload runs to suit the specific use cases of each workload category. One example is the code known as GCTA, which is MPI-aware and benefits significantly from well-coupled, low-latency RDMA constructs in the infrastructure, whereas other codes such as BayesR benefit from large shared-memory spaces. Yet another code, epiGPU, benefits from massively scaled GPU arrays via CUDA frameworks. Using the MeDiCI fabric semantics and single transport namespace allows the data to surface where it needs to, but, importantly, only when it needs to.
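The workload-to-facility matching described above can be sketched as a simple capability test: each code declares the hardware characteristics it benefits from, and a job is routed to a facility whose features satisfy them. This is an illustrative sketch only, not RCC's actual scheduling logic; the facility names and feature sets are hypothetical examples.

```python
# Hypothetical facilities and their hardware characteristics.
FACILITIES = {
    "local_hpc":  {"mpi", "rdma"},          # well-coupled, low-latency fabric
    "remote_smp": {"large_shared_memory"},  # large shared-memory nodes
    "remote_gpu": {"cuda"},                 # massively scaled GPU arrays
}

# Requirements per code, as characterised in the abstract.
CODE_REQUIREMENTS = {
    "GCTA":   {"mpi", "rdma"},
    "BayesR": {"large_shared_memory"},
    "epiGPU": {"cuda"},
}

def place(code: str) -> str:
    """Return the first facility whose features satisfy the code's needs."""
    needs = CODE_REQUIREMENTS[code]
    for name, features in FACILITIES.items():
        if needs <= features:               # subset test: all needs are met
            return name
    raise LookupError(f"no facility satisfies {needs}")

for code in ("GCTA", "BayesR", "epiGPU"):
    print(code, "->", place(code))
```

In practice the decision also weighs queue depth, data locality and cost, but the core idea is the same: categorise the workload, then let its requirements select the platform.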

The implementation and use of the AFM (Active File Management) data transport for parallel I/O movement between cache and home nodes is the key to automating data movement upon fopen() of a file or structure on the filesystem within the workflow. MeDiCI removes the need to place data in more than one location manually, providing an autonomous mount and data-distribution methodology that presents an eventually consistent filesystem namespace to all of the computational resources mentioned above. The workflows take advantage of this: processing can occur at one computational 'edge', then trigger processing at another facility for a different treatment.
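The fetch-on-open behaviour can be modelled in a few lines: opening a file at a cache site pulls it from the home site on demand, and subsequent I/O is purely local. This is a minimal behavioural sketch only; real AFM operates inside the parallel filesystem with parallel transports and revalidation, not in application code, and the class and path names here are hypothetical.

```python
import shutil
import tempfile
from pathlib import Path

class CacheSite:
    """Toy model of a cache site backed by a remote 'home' site."""
    def __init__(self, home: Path, cache: Path):
        self.home, self.cache = home, cache

    def open(self, relpath: str, mode: str = "r"):
        local = self.cache / relpath
        if not local.exists():                        # cache miss: migrate from home
            local.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(self.home / relpath, local)  # stand-in for parallel transport
        return local.open(mode)                       # cache hit: purely local I/O

# Demo: a file written at 'home' surfaces at the cache only when opened.
home, cache = Path(tempfile.mkdtemp()), Path(tempfile.mkdtemp())
(home / "reads.fastq").write_text("ACGT")
site = CacheSite(home, cache)
with site.open("reads.fastq") as f:
    data = f.read()
```

The "only when it needs to" property falls out naturally: data never moves until the workflow's open() touches it, which is the essence of workflow-driven data locality.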

Figure 2: Data location and transports from one platform to another, autonomously, exhibiting automated cache hierarchy in the workflow.


  1. Taylor, I.J., Deelman, E., Gannon, D.B., Shields, M. (2007). Workflows for e-Science: Scientific Workflows for Grids, 1st Edition [Ebook]. New York: Springer, pp. 3-6. Available at [Accessed 13th June, 2017]
  2. Shi, Q., Kurian, G., Devadas, S., Khan, O. (2016). LDAC: Locality-Aware Data Access Control for Large-Scale Multicore Cache Hierarchies. ACM Transactions on Architecture and Code Optimization (TACO), Volume 13, Issue 4, Article No. 37.



Jake Carroll is the Senior ICT Manager (Research) for one of the largest neuroscientific research organisations in the world – the Queensland Brain Institute, within the University of Queensland.

Jake has worked in the research and scientific computing sector for around a decade in various roles. Along the way, as he's run all over the planet, he's collected many wonderful colleagues, friends, collaborators and associates. You might find it odd to hear this from a 'once was IT guy', but the truth is, Jake believes in people and their passion, brilliance and innovation as the most precious commodity we have. People and their passion are what make Jake still come to work every day.

Jake is passionate about making technology go further and work harder for research and has a track record of building things that allow scientists to do things that were not possible, previously, at the campus, the national and even the international scale.

Slightly older (and, if he's very lucky, slightly wiser), Jake now takes a quieter and more considered seat in the leadership, governance and direction of research-focused technology teams, infrastructure projects and think-tanks in Australia and across the world.

About the conference

eResearch Australasia provides opportunities for delegates to engage, connect, and share their ideas and exemplars concerning new information centric research capabilities, and how information and communication technologies help researchers to collaborate, collect, manage, share, process, analyse, store, find, understand and re-use information.
