Bending the Rules of Reality for Improved Collaboration and Faster Data Access

David Hiatt

WekaIO, San Jose, CA, United States, Dave@Weka.IO

 

The popular belief is that research data is heavy and that data locality is therefore a key factor in designing the appropriate data storage system to support research workloads. The usual solution is to locate data near the compute and depend on a local file system or block storage for performance. This tactic results in a compromise that severely limits the ability to scale these systems with data growth or to provide shared access to data.

Advances in technology such as NVMe flash, virtualization, distributed parallel file systems, and low-latency networks leverage parallelism to bend the rules of reality and deliver faster-than-local file system performance with cloud scalability. The impact on research is a greatly simplified and less costly approach to HPC-class storage, meaning researchers spend less time waiting on results and more of their grant money goes to research rather than specialty hardware.


Biography:

David Hiatt is the Director of Strategic Market Development at WekaIO, where he is responsible for developing business opportunities within the research and high-performance computing communities. Previously, Mr. Hiatt led market development activities in healthcare and life sciences at HGST’s Cloud Storage Business Unit and Violin Memory. He has been a featured speaker on data storage related topics at numerous industry events. Mr. Hiatt earned an MBA from the Booth School of Management at the University of Chicago and a BSBA from the University of Central Florida.

Gravitational Wave Astronomy in the Era of Big Data

Dr Kendall Ackley1

1Monash University/OzGrav, Murrumbeena, Australia, kendall.ackley@monash.edu 

 

The uniquely sensitive Laser Interferometer Gravitational-Wave Observatory (LIGO) facilities have begun routinely detecting signal traces from distant massive black hole and neutron star mergers, some of which happened hundreds of millions of years ago. Extracting these minute gravitational-wave signatures from detector noise is a multi-layered data analysis problem for both real-time and offline analyses, and with the aid of computing clusters around the world it has become a reality.

On 17 August 2017, LIGO detected its first signal from less massive objects thought to be neutron stars, reinforced by the observation of a coincident weak gamma-ray burst by the Fermi satellite. Neither instrument has good spatial resolution, and with LIGO being an all-sky instrument, the challenge for astronomers of finding, amongst billions of objects in the sky, the single light-emitting source associated with a particular event is not to be understated. Thus began a race among astronomical facilities around the world to be the first to detect the electromagnetic counterpart of the event.

The fact that the source was detected within hours of the first alert, on the first ever occasion, established and validated the field of multi-messenger gravitational-wave astronomy, which had been a growing initiative, practically overnight. I will give insights into how this feat was accomplished, how we plan to manage the massive influx of nightly data as we begin to build larger and more sensitive telescopes, and how we utilise machine learning to accomplish the most data-intensive tasks in an automated fashion.


Biography:

Dr Kendall Ackley has been a member of the LIGO Scientific Collaboration since 2012. She joined the School of Physics and Astronomy at Monash University in 2017 as part of the ARC Centre of Excellence (OzGrav) working on identifying optical counterparts to gravitational-wave events with the Gravitational-wave Optical Transient Observer (GOTO) telescope. Her research interests include optimising follow-up studies for detecting coincident gravitational-wave and electromagnetic counterpart events, searches for gravitational waves from massive compact binaries, and utilising machine-learning algorithms to identify high-energy astrophysical transients which may accompany gravitational-wave events discovered with LIGO.

Improving Predictive Machine Learning Using Wavelet Reconstructions

Rakib Hassan1, John Wilford2

1Geoscience Australia, Canberra, Australia, rakib.hassan@ga.gov.au

2Geoscience Australia, Canberra, Australia, john.wilford@ga.gov.au

 

‘Uncover’ Machine Learning

Uncover Machine Learning is an initiative at Geoscience Australia to exploit recent advances in machine learning as a predictive analytics tool to support mineral exploration in Australia. Uncover-ML, a codebase developed in collaboration with CSIRO’s Data61, implements Bayesian regression models for supervised learning and leverages a suite of clustering and regression algorithms implemented in Scikit-Learn, a widely used, open-source library for machine learning.

The Uncover-ML codebase can be logically categorized into three sets of modules that comprise its machine learning pipeline: (1) Preprocessing, (2) Training and Prediction, and (3) Output Generation. The Preprocessing modules implement a suite of algorithms for transforming, filtering and manipulating high-resolution (~90 m), continental-scale raster data sets representing, for example, topography, gravity and magnetics. The Training and Prediction modules expose machine learning algorithms that consume raster and point data sets, known as covariates and targets respectively, during the training phase. The last leg of the pipeline takes a trained model and generates probabilistic predictions, e.g. the likelihood of the occurrence of a mineral of interest at a given location. The pipeline is highly parallelized and is optimized for predictive modelling on large national datasets.
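As a rough illustration of the covariate/target workflow described above, the following is a minimal, hypothetical sketch using Scikit-Learn's BayesianRidge regressor. The array shapes, the synthetic data and the choice of model are assumptions for illustration only and do not reflect the actual Uncover-ML API.

```python
# Hypothetical sketch only: not the Uncover-ML API. Covariates are raster
# values sampled at target locations; targets are the point observations.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(42)
X_train = rng.normal(size=(500, 6))                  # 500 sites x 6 covariate rasters
y_train = X_train @ rng.normal(size=6) + rng.normal(scale=0.1, size=500)

model = BayesianRidge()
model.fit(X_train, y_train)

# Prediction phase: apply the trained model to every pixel of the covariate
# stack, obtaining a mean prediction and an uncertainty estimate per pixel.
X_pixels = rng.normal(size=(10_000, 6))              # flattened raster stack
mean_map, std_map = model.predict(X_pixels, return_std=True)
```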

Self-similarity of geophysical datasets

Many landscape and geophysical datasets, e.g. topography, drainage networks, magnetic intensity and earthquake epicenters, exhibit fractal patterns (Turcotte 1992). Fractal patterns show the same statistical properties at many different scales.

Figure 1: Drainage networks, as illustrated in this Landsat TM 8 image, are often used as an exemplar of fractal patterns.

However, machine learning algorithms are typically unable to exploit the self-similarity of input data sets at long wavelengths, such as the similarity of the branching patterns of the drainage system at different scales in Fig. 1. Targets and the corresponding covariate values used for training are point measurements/observations and invariably don’t take into account neighborhood relationships. We capture these neighborhood relationships by generating several multiscale versions of each covariate using 2D wavelet reconstructions (Kalbermatten et al. 2012). By including these multiscale versions of each raster in the input data, we enable machine learning algorithms to embed these relationships into a model during the training phase.

We use PyWavelets, an open-source Python package, for decomposing and reconstructing raster data based on dyadic wavelet transforms, as shown in Fig. 2. We apply the following steps to decompose and reconstruct each raster into progressively longer wavelength representations, while preserving their original pixel resolution, which is an essential requirement for the machine learning pipeline:

  1. Compute the 2D wavelet transform of the raster
  2. Keep the low-pass filter coefficients and set the horizontal, vertical and diagonal high-pass filter coefficients to zero
  3. Compute the 2D inverse wavelet transform based on the coefficients from step 2.

The above steps produce a Level-1 representation of the original raster, but with the spatial wavelength doubled. The same procedure can be applied again on the Level-1 raster to obtain a Level-2 representation of the original raster, but with the spatial wavelength quadrupled. These steps are repeated to produce successively longer wavelength versions of a given raster, as sketched in the example below.
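The following minimal Python sketch shows one way the decompose-zero-reconstruct loop above could be implemented with PyWavelets. The wavelet choice ('db2'), boundary handling and treatment of odd-sized rasters are simplifying assumptions for illustration, not the production Uncover-ML code.

```python
# Illustrative sketch, not the Uncover-ML implementation.
import numpy as np
import pywt

def lowpass_reconstruction(raster, wavelet="db2", levels=1):
    """Return a longer-wavelength version of `raster` at the original resolution."""
    out = np.asarray(raster, dtype=float)
    rows, cols = out.shape
    for _ in range(levels):
        # Step 1: 2D wavelet transform (approximation + detail coefficients).
        cA, (cH, cV, cD) = pywt.dwt2(out, wavelet)
        # Step 2: keep the low-pass (approximation) coefficients and zero the
        # horizontal, vertical and diagonal high-pass coefficients.
        zeros = np.zeros_like(cH)
        # Step 3: inverse transform back onto the original pixel grid.
        out = pywt.idwt2((cA, (zeros, zeros, zeros)), wavelet)[:rows, :cols]
    return out

level1 = lowpass_reconstruction(np.random.rand(256, 256), levels=1)  # wavelength doubled
level2 = lowpass_reconstruction(np.random.rand(256, 256), levels=2)  # wavelength quadrupled
```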

We have incorporated this multiscaling functionality into the Preprocessing module of Uncover-ML, which allows us to selectively apply it to continuous, non-categorical raster data. Preliminary prediction results obtained by including multiscale rasters in the training phase show improvements compared to those from standard models. With further tests and parameter tuning we expect further improvements in predictive mapping capabilities.

REFERENCES

  1. Turcotte, D. L. (1992), Fractals, chaos, self-organized criticality and tectonics. Terra Nova, 4: 4-12. doi:10.1111/j.1365-3121.1992.tb00444.x
  2. Kalbermatten, M., et al. (2012), Multiscale analysis of geomorphological and geological features in high resolution digital elevation models using the wavelet transform. Geomorphology, 138(1): 352-363.
  3. Mallat, S. (2000), Une exploration des signaux en ondelettes. Paris: Les éditions de l'école polytechnique.

This paper is published with the permission of the CEO, Geoscience Australia


Biography:

Dr Hassan has worked as a computational software developer in both industry and academia since 2004. He obtained a Bachelor of Applied Physics at RMIT University in 2003, a Master of Geoscience at Macquarie University in 2009 and, more recently, a PhD in computational geophysics at the University of Sydney in 2016.

Performance Improvements with GPUs for Marine Biodiversity: A Cross-Tasman Collaboration

Lev Lafayette1, Mitch Turnbull2, Mark Wilcox3, Eric A. Treml4

1University of Melbourne, Parkville, Australia, lev.lafayette@unimelb.edu.au

2Nyriad, Cambridge, New Zealand, mitch.turnbull@nyriad.com

3Nyriad, Cambridge, New Zealand, mark.wilcox@nyriad.com

4Deakin University, Geelong, Australia, e.treml@deakin.edu.au

 

Identifying probable dispersal routes for marine populations is a data- and processing-intensive task for which traditional high performance computing systems are well suited, even for single-threaded applications. Whilst processing dependencies between the datasets exist, a large degree of independence between sets allows job arrays to significantly improve processing time. Identification of bottlenecks within the code base suitable for GPU optimisation has led to additional performance improvements, which can be coupled with the existing benefits from job arrays. This case offers an example of how to optimise single-threaded applications for GPU architectures for significant performance improvements. Further development is suggested with the expansion of the GPU capability of the University of Melbourne’s “Spartan” HPC system.

University of Melbourne HPC and Marine Spatial Ecology With Job Arrays

From 2011 to 2016, the University of Melbourne provided general researcher access to a medium-sized HPC cluster called “Edward”, designed in a traditional fashion. As “Edward” was being retired, an analysis of actual job metrics indicated that the overwhelming majority of jobs were single node or even single core, especially as job arrays. The successor system, “Spartan”, was therefore designed with a view to high throughput rather than high performance. A small traditional HPC partition with a high-speed interconnect was combined with a much larger partition built on OpenStack virtual machines from the NeCTAR research cloud. This proved to be a highly efficient and optimised approach both in terms of finances and throughput [1].

A specific example of a large number of computational tasks designed as single-threaded applications with modest memory requirements is research into marine biodiversity and population connectivity, which has significant implications for the design of marine protected areas. In particular, there is a lack of quantitative methods to incorporate, for example, larval dispersal via ocean currents, population persistence, and impact on fisheries. The Marine Spatial Ecology and Conservation (MSEC) laboratory at the University of Melbourne has been engaged in several research projects to identify the probable dispersal routes and spatial population structure for marine species, and to integrate these connectivity estimates into marine conservation planning [2].

Code Review for GPGPU Optimisation

There are a number of architectural constraints on GPUs. They are, to a very large extent, independent of their host system: object code needs to be compiled specifically for the GPU (e.g., using OpenCL or nvcc), and there is no shared memory between the GPU and CPU, so any unprocessed data must be transferred to the GPGPU environment and the results transferred back to the CPU environment when completed. In addition, GPUs typically have only small amounts of cache memory, if any, compensating instead with GPU pipelining and very high memory bandwidth between the GPU and the host [3].
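As a language-agnostic illustration of the host/device transfer constraint described above (the project code itself is MATLAB), a minimal Python sketch using CuPy might look like the following; the array size and the FFT workload are arbitrary placeholders.

```python
# Illustrative only: shows the explicit CPU -> GPU -> CPU data movement that
# any GPU port must manage. Not the project's MATLAB/GPU code.
import numpy as np
import cupy as cp

host_data = np.random.rand(4096, 4096).astype(np.float32)

device_data = cp.asarray(host_data)       # copy host (CPU) memory to GPU memory
device_result = cp.fft.fft2(device_data)  # the computation itself runs on the GPU
host_result = cp.asnumpy(device_result)   # copy the result back to host memory
```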

During the first half of 2017, Nyriad reviewed the HPC infrastructure, the existing MATLAB(R) source code and sample data, and wrote a test suite designed to run the CPU and GPU versions at the same time. There were two review stages: the first for optimisation of the existing MATLAB(R) code base, followed by identification of functions that could be distributed and rewritten for GPUs.

Performance Improvements

Nyriad’s code review identified bottlenecks suitable for GPGPU workloads. On the University of Melbourne HPC system, “Spartan”, using a single GPU, a 90x performance improvement was achieved over the original code, and a 3.75x improvement over the CPU version with 12 threads available, for the 4.6 GB Atlantic Model simulating 442 reefs. The simulation, which previously took 8 days to complete on one of the most powerful nodes (GPU or physical), could be completed in 2 hours. For the 4 MB South Africa Benguela Region dataset, on the other hand, the GPU version is faster than the original code, but slower than the improved CPU implementation.

If the code is refactored to process reefs in parallel, we anticipate that utilisation of the node would improve at a per-GPU and multi-GPU level, significantly reducing single-simulation time by fully utilising the Spartan GPU node on which it is run. With this change we predict a performance improvement of over 5x compared to the existing GPU code: while using more resources on a node, the execution time of a single simulation would be greatly reduced. Smaller datasets would also likely see some improvement as per-GPU utilisation would increase. Figure 2 shows the performance increase of the current two versions, and the predicted performance of the multithreaded GPU version, when running a single simulation on the Atlantic data set of 442 reefs over 100 days.

Further Developments

Nyriad’s review found that there is significant opportunity in the use of data integrity and mathematical equivalence algorithmic techniques to enable porting of code to GPUs with minimal impact on the research workflow. With notable performance improvements across a range of job profiles, a significant expansion of Spartan’s GPGPU capacity has just been implemented. The partition, funded by Linkage Infrastructure, Equipment and Facilities (LIEF) grants from the Australian Research Council, is composed of 68 nodes and 272 nVidia P100 GPGPU cards. The major usage of the new system will be for turbulent flows, theoretical and computational chemistry, and genomics, representative of the needs of major participants.

The University of Melbourne and Nyriad will continue their research collaborations, especially in the GPGPU environment for data integrity and mathematical equivalence, scalability testing and hybrid clusters to enable more scientific programming users to progressively scale their work up to larger systems.

REFERENCES

  1. Lev Lafayette, Greg Sauter, Linh Vu, Bernard Meade, “Spartan: Performance and Flexibility: An HPC-Cloud Chimera”, OpenStack Summit, Barcelona, October 27, 2016
  2. For example: Keyse, J., Treml, E. A., Huelsken, T., Barber, P., DeBoer, T., Kochzuis, M., Muryanto, A., Gardner, J., Liu, L., Penny, S., Riginos, C. (2018), Journal of Biogeography, February 2018
  3. Shigeyoshi Tsutsui, Pierre Collet (eds), (2013), Massively Parallel Evolutionary Computation on GPGPUs, Springer-Verlag

Biography:

Lev Lafayette is the Senior HPC Support and Training Officer at the University of Melbourne, where he has been since 2015. Prior to that he worked at the Victorian Partnership for Advanced Computing in a similar role for eight years.

Fostering an organisation-wide accelerated computing strategy

Jake Carroll1

1The University of Queensland, Brisbane, Australia, jake.carroll@uq.edu.au

 

Background

The use of accelerators (GPU, ASIC, FPGA) in research computing has become more prevalent as hardware/software ecosystems have matured. To complement this, frameworks from vendors such as nVidia and AMD have become fully featured. As a result of a series of significant ARC/NHMRC grants, an unprecedented amount of scientific imaging infrastructure is being commissioned on the University of Queensland St Lucia campus. To leverage scientific outputs and process the data that this new infrastructure will generate, UQ procured its first tranche of accelerated computing capability in late 2017. This presentation discusses the user-engagement strategy behind UQ’s accelerated computing deployment, how it worked, why it worked and why it was a novel approach in the sector.

WIENER

In late 2017, after an extensive benchmarking, analysis and design process, the Wiener supercomputer was procured to enable near real-time deconvolution and deskew of data from imaging infrastructure such as UQ’s new Lattice Light Sheet Microscope (LLSM) [1]. This platform was the first in the Asia Pacific to feature the nVidia Volta V100 GPU and only the fourth production deployment in the world. The Wiener supercomputer was the largest investment in GPU/accelerated supercomputing that the state had ever made. The initial intention of Wiener was to provide a powerful means of deconvolution [2] for the LLSM, but it was quickly realised that, with this many GPUs connected tightly in a dedicated supercomputing deployment, the platform would serve as UQ’s launchpad for a general accelerated computing strategy.

Basis of advanced computing strategy

UQ, as with several of its contemporaries, has a significant investment in supercomputing. UQ’s strategy differs somewhat from its equivalent national and sister-state facilities in that it provides different pillars of supercomputing for different workloads on dedicated infrastructure.

 

Table 1: UQ’s Supercomputing Infrastructure load-out

Platform name | Machine domain focus | Workload characterisation | Expected user demand | Actual user demand
Tinaroo | Multi-discipline | MPI, tightly coupled shared memory, massively parallel | High | High
Awoonga | Multi-discipline | Loosely coupled, MPI-slack, high latency, cloud-like | Medium | Medium
FlashLite | Multi-discipline | High throughput, high memory | High | Low
Wiener | Multi-discipline | GPU, ML, DL, CNN and imaging specific | Low | High

 

UQ misjudged the user demand for both FlashLite and Wiener, but for different reasons, which the strategic discussion in this presentation will explain and articulate.

Fostering an accelerated computing community

Initially, as can be seen in Table 1, UQ made some assumptions about where it thought the most user demand would be, which proved incorrect. This led to initial interest in Wiener being far greater than first anticipated. UQ expected that Wiener would cater to a niche subset of imaging workloads; what was unanticipated was the level of sophistication and understanding of the application of convolutional neural networks, deep learning and machine learning techniques in the imaging domain itself. An example was our overt expectation that deconvolution algorithms would run against the GPU infrastructure using codes such as Microvolution and SVI’s Huygens. In truth, researchers were already considering using machine vision techniques and TensorFlow at scale to characterise and train image sets for more accurate detection of cells, cancers and viruses [3].

At this point, UQ rationalised that it needed to take a more direct approach to engagement and collaboration with end users to effectively liberate the capability of this new platform. A core tenet of this was a personal, one-on-one approach to each workload. Whilst this is an administrative burden, it has been demonstrated to deliver significantly better outcomes. Thus, from an early point in the production state, the general ‘onboarding’ process for Wiener became the following:

  1. User approaches RCC with a request for compute time on accelerator based HPC.
  2. A subject matter (computer science, HPC) expert will then make an appointment to meet with the researcher or research group in order to better understand the science.
  3. A longer discussion takes place, to learn about the workload type, the potential hardware/software and computing environment impact. At this point the researcher and subject matter expert work towards a defined job-layout which is both optimal for the workload and best fit for infrastructure.

The initial consultation process generally takes between two to three hours.

UQ has empirical and measured evidence suggesting that this method of personal interaction, intended to breed a stronger capability in accelerated computing, creates far more efficient use of infrastructure than the generally accepted process of providing a user with a set of documents, READMEs and how-to instructions at a distance.

Conclusion

Early analysis suggests a correlation between the employment of direct consultation and scientific discussion, between a domain expert in the scientific research domain and a research computing specialist, and the quality of the computational runs and inputs on these accelerated computing platforms. This now forms the basis of the operating procedures of the Wiener supercomputing facility.

REFERENCES

  1. UQ IMB ARC/NHMR Lattice Light Sheet Microscopy installation. Retrieved from https://imb.uq.edu.au/article/2016/11/45-million-imb-led-discovery-research, accessed June 8th, 2018
  2. Deconvolution Definition, Retrieved from https://en.wikipedia.org/wiki/Deconvolution, accessed June 8th, 2018.
  3. HPC Wiener harnessed for automating skin cancer diagnosis, Retrieved from https://rcc.uq.edu.au/article/2018/05/hpc-wiener-harnessed-automating-skin-cancer-diagnosis, accessed June 8th, 2018.

Biography:

Jake is currently the Associate Director of Research Computing for UQ’s three large scientifically intensive research institutes – the Australian Institute for Bioengineering and Nanotechnology, the Institute for Molecular Bioscience and the Queensland Brain Institute.

Jake has spent the last 12 years in scientific computing, working on everything from building supercomputers to managing the strategy and complexity that comes with scientific endeavour.

Jake spends his time working to make scientific computing platforms, technology and infrastructure as good as it can be, such that world class research can be conducted, unencumbered.

Jake’s background is in both computer science and business leadership – constantly fighting with himself, trying to accommodate both (very different concepts) in his working life – ultimately to try and make them work together.

High-level Cloud Application Description and Management

Gabor Terstyanszky1, Gab Pierantoni2, Tamas Kiss3

1University of Westminster, London, United Kingdom, terstyg@gmail.com

2University of Westminster, London, United Kingdom, G.Pierantoni@westminster.ac.uk

3University of Westminster, London, United Kingdom, T.Kiss@westminster.ac.uk

 

Introduction

Cloud computing has successfully and steadily addressed the question of how to run applications on complex distributed computing infrastructures. However, it must also address specific deployment, scalability and security requirements. Nowadays, Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) solutions are widely used in academia, business and the public sector to manage applications in the Cloud. On the one hand, on-demand access to the Cloud in a flexible and elastic way can result in significant cost savings due to more efficient and convenient utilization, and can replace large upfront investment costs with long-term operational costs. On the other hand, the efficient and dynamic utilization of the Cloud to run applications is not trivial. The take-up of cloud computing in some application areas is still relatively low due to limited application-level flexibility and shortages in cloud-specific skills. As a result, the move to the Cloud has been somewhat slower and more cautious in these areas due to both application- and infrastructure-level complexity.

To enable the execution of a large variety of applications in the Cloud in a cost-effective, flexible, seamless and secure way, applications must be deployed, launched, executed and removed through a framework that hides cloud-specific details. To manage applications in the Cloud, such a framework needs information about their architecture, the resources and services they need, and the QoS parameters they have to meet. Application description languages can define the application architecture, specify where to deploy and run applications, how to achieve their cost-effective execution, and how to provide the security required to protect data.

TOSCA-based high-level application description and execution

The Cloud Orchestration at the Level of Application (COLA) project [1], funded by H2020, aims at fostering the adoption of cloud applications by public sector organisations and SMEs. COLA is elaborating a generic and pluggable framework, called the Microservices-based Cloud Application-level Dynamic Orchestrator (MiCADO) [2], to support the optimal and secure deployment and run-time orchestration of cloud applications. Application Developers can describe applications, including their Quality of Service (QoS) parameters related to deployment (flexibility), economic viability (costs), performance (scalability) and security (data protection and privacy), and submit this description to the MiCADO framework. The framework builds on existing low-level cloud container technologies (e.g. Docker Swarm [3]) and management and orchestration solutions (e.g. Occopus [4]). MiCADO is generic in the sense that its services are not restricted to particular technologies and can be implemented using different existing technologies and services.

We are focusing on application description and management in the Cloud. There are three major application description approaches: cloud platform dependent approaches (Amazon, Microsoft Azure, etc.), cloud orchestration tool dependent approaches (Chef, Heat, etc.), and platform-independent application description languages (CAMP and TOSCA). All these approaches properly describe an application’s architecture, specifying the services it is composed of, how they are connected, and the artefacts and resources needed to run it. Approaches used by cloud platforms and cloud orchestration tools are not based on standards and are tied to specific implementations or platforms; as a result, it is not easy to reuse their application descriptions in heterogeneous cloud environments. There are also major differences in how these approaches specify and manage QoS properties. We use TOSCA [5], an emerging standard, to describe applications, but it also has some limitations. TOSCA supports management of containers and virtual machines, but these entities are assigned only to node types, not to applications. The TOSCA specification defines only abstract policy classes that cover only a subset of QoS properties, and neither the original policy taxonomy nor the extended ones contain all parameters required to manage a wide range of policies. Currently there is no platform-independent solution to process TOSCA application descriptions and run the applications in the Cloud. Considering these limitations, we addressed the following challenges:

  • how to describe and manage containerized applications with policies assigned to them,
  • how to extend the TOSCA policy hierarchy to manage a wide range of QoS properties and how TOSCA policies can be parametrized to support these policies, and
  • how to process and execute TOSCA specifications in a technology agnostic way.

To address these challenges we made three major contributions. First, to combine the flexibility offered by technology agnosticism with the expressiveness required to describe different properties of a large variety of applications, we elaborated the Application Description Template (ADT) to specify two main aspects of applications: their architecture (application topology) and QoS properties (application policies). ADTs connect Application Developers to the application components. Each ADT contains a parameter section, a topology section with container and virtual image sub-sections, and a policies section. The first holds the input and output parameters of the application. The topology section incorporates the container and virtual image sub-sections. The policies section describes QoS parameters as TOSCA policies. As a second contribution, we introduced a flexible policy hierarchy and extended the TOSCA policy hierarchy by adding a security policy with several sub-policies, such as authentication, authorisation and data protection, and further sub-policies to the deployment and scaling policies. We also defined a Policy Template to describe policy properties. Each template is divided into two main sections: a description section and a properties section. The first outlines in plain text to which service and when the policy is applied; the second contains two types of parameters: common and specific properties. Finally, we extended the MiCADO framework with the MiCADO Submitter (Fig. 1) to process TOSCA descriptions. The ADT is submitted to the MiCADO Submitter, where it is parsed and validated by the OpenStack TOSCA Parser and the MiCADO Validator. Next, the Mapper uses a key list to isolate information and pass it to adaptors that translate the information for the MiCADO framework’s Container Orchestrator, which manages Docker containers; the Cloud Orchestrator, which handles the virtual machines in which the containers are deployed and run; the Policy Keeper, which manages all policies except security policies; and the Security Enforcer, which handles security policies.

Figure 1: MiCADO Submitter
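As a rough illustration of the parsing step performed by the MiCADO Submitter, the following Python sketch reads a TOSCA-based description with the OpenStack TOSCA Parser. The file name and the attributes printed are assumptions for illustration only, not the actual MiCADO code.

```python
# Illustrative sketch only: parse a TOSCA-based application description with
# the OpenStack TOSCA Parser, as the MiCADO Submitter does before mapping
# sections to the orchestration adaptors.
from toscaparser.tosca_template import ToscaTemplate

adt = ToscaTemplate("application_description_template.yaml")  # hypothetical ADT file

for node in adt.nodetemplates:   # container and virtual image node templates
    print("node:", node.name, "type:", node.type)

for policy in adt.policies:      # QoS policies (deployment, scaling, security, ...)
    print("policy:", policy.name, "type:", policy.type)
```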

To assess the application descriptions and how applications are executed through the MiCADO framework, COLA tests its applicability using demonstrators and proof-of-concept case studies from four distinct application areas involving public sector organisations and SMEs. For example, these use cases include social media data analytics for local governments, simulation-based evacuation planning, data-intensive web applications, and simulation solutions for manufacturing and engineering.

This presentation will outline the MiCADO framework, the Application Description Template, the extended TOSCA policy architecture with the Policy Template and how ADTs are managed in the MiCADO framework. Further, it will present how a particular public sector organization’s application can be cost-effectively and efficiently executed through the MiCADO framework in the Cloud.

REFERENCES

  1. COLA Project – Cloud Orchestration at the Level of Application, [Online]. Available: http://www.project-cola.eu/cola-project/
  2. T. Kiss, P. Kacsuk, J. Kovacs, B. Rakoczi, A. Hajnal, A. Farkas, G. Gesmier, G. Terstyanszky: MiCADO – Microservice-based Cloud Application-level Dynamic Orchestrator, Future Generation Computer Systems, 2017, https://doi.org/10.1016/j.future.2017.09.050
  3. Docker Swarm overview, [Online]. Available: https://docs.docker.com/swarm/overview/
  4. Kovács J. and Kacsuk P.: Occopus: a Multi-Cloud Orchestrator to Deploy and Manage Complex Scientific Infrastructures, Journal of Grid Computing, March 2018, Volume 16, Issue 1, pp 19–37
  5. OASIS: TOSCA – Simple Profile in YAML Version 1.0, [Online]. Available: http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csd03/TOSCA-Simple-Profile-YAML-v1.0-csd03.html

Biography:

Prof. Dr Gabor Terstyanszky is a Professor in Distributed Computing at the University of Westminster. His research interests include distributed and parallel computing, cloud, cluster and Grid computing. He supervised several European projects, such as COPERNICUS, COST, WINPAR, HPCTI and SEPP, as local coordinator. He had a leading role in the FP7 EDGeS, DEGISCO, EDGI, SHIWA, SCI-BUS, ER-flow and H2020 CloudSME research projects. Currently he is working on the H2020 COLA and CloudFacturing projects. He has published more than 130 technical papers at conferences and in journals, and has been a member of the programme committees of several conferences and workshops.

Introduction to and Demonstration of Containers in the ARDC Nectar Research Cloud

Conveners: Dr Glenn Moloney5, Wilfred Brimblecombe1

Presenters: Andy Botting2, Sam Morrison3, Jake Yip4

1Australian Research Data Commons (ARDC) (built from ANDS, Nectar, RDS), wilfred@nectar.org.au
2Australian Research Data Commons (ARDC) (built from ANDS, Nectar, RDS), andrew.botting@unimelb.edu.au
3Australian Research Data Commons (ARDC) (built from ANDS, Nectar, RDS), sam.morrison@unimelb.edu.au
4Australian Research Data Commons (ARDC) (built from ANDS, Nectar, RDS), jake.yip@unimelb.edu.au
5Australian Research Data Commons (ARDC) (built from ANDS, Nectar, RDS), glenn.moloney@nectar.org.au

GENERAL INFORMATION

  • Half day workshop
  • Include a hands-on component
  • Maximum of 40 people

DESCRIPTION

Containers provide a solution to the problem of how to get software to run reliably when moved from one computing environment to another.  This workshop provides an introduction to this popular technology by briefly going over container concepts and then demonstrating containers in use on the ARDC Nectar Research Cloud.

The workshop will have a “hands on” component so please bring your laptop.

The following topics/activities will be covered:

  1. Introduction to container concepts and products
  2. Using Docker and Kubernetes technologies on the Research Cloud
  3. Exercise using a simple tool kit that can be deployed and experimented with at the workshop and may be extended for use after the workshop
  4. Seek feedback from participants on “tuning” the container offering on the Research Cloud to meet their needs.

This workshop will provide you with a useful introduction to Container technology and help ARDC determine the container offerings that may be supported on the ARDC Nectar Research Cloud.

WHO SHOULD ATTEND

Target audience: researchers who may benefit from a lightweight, easy-to-use container service, or who are looking for an introduction to the area. We assume that current sophisticated heavy users of container technology have already set up their environments and will continue to want to do so.

eResearch staff who are interested in learning about container technologies and how they can be used on the Nectar Research Cloud or across multiple cloud services.

WHAT TO BRING

Bring your laptop.  Required prerequisite knowledge – Moderate to advanced understanding of Unix and cloud environments.  If you are not from an Australian or New Zealand University you will need an AAF account to gain access for the hands-on component.


BIOGRAPHIES

Andy Botting – Senior Engineer at the Australian Research Data Commons (ARDC), Nectar Research Cloud.  I’m a cloud-native Systems Engineer with a background in Linux, HPC.  Specialities: Linux, Android, Puppet, OpenStack and AWS.

Wilfred Brimblecombe – ICT Manager at the Australian Research Data Commons (ARDC), Nectar Research Cloud, is an IT management veteran with over 20 years of leadership experience across various organisations.

Sam Morrison – Senior Engineer at the Australian Research Data Commons (ARDC), Nectar Research Cloud.  Specialties: Linux system administration, Python/Django web programming, Security, Openstack cloud technologies.

Jake Yip – DevOps Engineer at Australian Research Data Commons (ARDC), Nectar Research Cloud. Specialities: Puppet, OpenStack, Networks, DevOps and Security.

 

ARDC Nectar Research Cloud 101 for Beginners

Conveners: Dr Glenn Moloney5, Wilfred Brimblecombe1
Presenters: Andy Botting2, Sam Morrison3, Jake Yip4

1Australian Research Data Commons (ARDC) (built from ANDS, Nectar, RDS), wilfred@nectar.org.au
2Australian Research Data Commons (ARDC) (built from ANDS, Nectar, RDS), andrew.botting@unimelb.edu.au
3Australian Research Data Commons (ARDC) (built from ANDS, Nectar, RDS), sam.morrison@unimelb.edu.au
4Australian Research Data Commons (ARDC) (built from ANDS, Nectar, RDS), jake.yip@unimelb.edu.au
5Australian Research Data Commons (ARDC) (built from ANDS, Nectar, RDS), glenn.moloney@nectar.org.au

GENERAL INFORMATION

  • Half day workshop
  • Includes a hands-on component
  • Maximum of 30 people

DESCRIPTION

This workshop will provide an introduction to using the Australian Research Data Commons (ARDC) Nectar Research Cloud.  It  is for people who have no or limited experience in setting up and using Virtual Machines (VMs) in any cloud environment.

The workshop will have a “hands on” component so please bring your laptop.

The following topics/activities will be covered:

  1. What is the ARDC Nectar Research Cloud and how does it compare with commercial offerings (at a technical usage level only – will not go into cost comparisons etc.)? 20 minutes
  2. What is the Research Cloud good for? And what is it not good for? 10 minutes
  3. Access to the Research Cloud – how to get some of the Nectar Research Cloud. 20 minutes
    1. How to request access
    2. How to request resources
    3. How to request support to help you deploy and use your VMs
  4. Exercise – spin up a VM and install some software on it using the Nectar Research Cloud. 120 minutes

WHO SHOULD ATTEND

Researchers or research support people who have perhaps heard of cloud computing or of the Nectar Research Cloud and would like an introduction.

WHAT TO BRING

Bring your laptop.  Required prerequisite knowledge – Moderate understanding of Unix or any other software development environment.  If you are not from an Australian or New Zealand University you will need an AAF account to gain access for the hands-on component.


BIOGRAPHIES

Andy Botting – Senior Engineer at the Australian Research Data Commons (ARDC), Nectar Research Cloud.  I’m a cloud-native Systems Engineer with a background in Linux, HPC.  Specialities: Linux, Android, Puppet, OpenStack and AWS.

Wilfred Brimblecombe – ICT Manager at the Australian Research Data Commons (ARDC), Nectar Research Cloud, is an IT management veteran with over 20 years of leadership experience across various organisations.

Sam Morrison – Senior Engineer at the Australian Research Data Commons (ARDC), Nectar Research Cloud.  Specialties: Linux system administration, Python/Django web programming, Security, Openstack cloud technologies.

Jake Yip – DevOps Engineer at Australian Research Data Commons (ARDC), Nectar Research Cloud. Specialities: Puppet, OpenStack, Networks, DevOps and Security.

 

Galaxy architecture and deployment experiences: a case study in how to build complex analysis systems for data-focussed science

Mr Simon Gladman1, Mr Derek Benson4, Dr Jeff Christiansen2, Dr Gareth Price3, A/Prof. Andrew Lonie5

1Melbourne Bioinformatics, University of Melbourne, Melbourne, Australia, simon.gladman@unimelb.edu.au
2Queensland Cyber Infrastructure Foundation, Brisbane, Australia, j.christiansen@uq.edu.au
3Queensland Facility for Advanced Bioinformatics, Brisbane, Australia, g.price@qfab.org
4Research Computing Centre, University of Queensland, Brisbane, Australia, d.benson@imb.uq.edu.au
5Melbourne Bioinformatics, University of Melbourne, Melbourne, Australia, alonie@unimelb.edu.au

GENERAL INFORMATION

  • Half day Workshop
  • includes a hands-on component
  • Up to 20 attendees

DESCRIPTION

Galaxy (https://galaxyproject.org) is a widely used, highly capable bioinformatics analysis platform. It provides users with a large library of analysis and visualization tools, reference datasets, interfaces to global databases, and evolving workflow capabilities that provide provenance and reproducibility. Users build complex analysis jobs in a highly accessible interface, which are then deployed via a scheduler to underlying computational resources. Galaxy has a relatively sophisticated approach to managing user jobs to compute resources and can, for instance, be configured to schedule jobs to disparate HPC and/or cloud resources depending on the job characteristics.
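To make the job-building and scheduling model above concrete, the following minimal Python sketch drives a Galaxy server programmatically with BioBlend, the Python client for the Galaxy REST API. It is not part of the workshop materials; the API key and the uploaded file name are placeholders, and only the Galaxy Australia URL is taken from the text.

```python
# Illustrative sketch only: each tool execution becomes a Galaxy job that the
# server schedules onto one of its configured compute resources.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.org.au", key="YOUR_API_KEY")  # placeholder key

history = gi.histories.create_history(name="architecture-demo")
upload = gi.tools.upload_file("reads.fastq", history["id"])  # hypothetical input file

# Inspect jobs and their states as Galaxy dispatches them to HPC/cloud back ends.
for job in gi.jobs.get_jobs():
    print(job["id"], job["tool_id"], job["state"])
```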

In this workshop we will explore the architecture of Galaxy Australia (http://usegalaxy.org.au), understanding how it is architected to deploy jobs from a common front end to compute resources in Queensland and Victoria. Jobs have access to a common multi-hundred-terabyte reference dataset collection that is intelligently mirrored in real time from the US-based  Galaxy Main (http://usegalaxy.org) using the CernVM file system (https://cernvm.cern.ch/portal/filesystem). We will explore the technologies, cover our experiences of how they work in practice, and discuss the ambitions of a global Galaxy infrastructure network that can leverage the efforts of a global community to maintain and support critical data and software resources.

OUTLINE OF WORKSHOP CONTENT:

  1. Overview of Galaxy. Technical overview of the componentry of Galaxy as a software platform and as a workflow generation and deployment system. 30 minutes
  2. Galaxy Australia architecture. Overview of the Galaxy Australia architectural and deployment model. 30 minutes
  3. Underlying technologies. Detailed exploration of the job distribution and data sharing technologies being used for Galaxy Australia. 90 minutes
  4. Galaxy ‘World’ – roadmap discussion. How can multiple instances of Galaxy make use of complex, high maintenance resources, including a dependency-free tool library and growing global reference datasets, whilst appearing as a seamless experience to non-expert users? 30 minutes

WHO SHOULD ATTEND

Research infrastructure staff  interested in complex, distributed software systems and cutting edge technologies for job and data distribution.

WHAT TO BRING

A laptop, no special software required. We hope to demonstrate some of the technologies being used in Galaxy.


BIOGRAPHY

Andrew Lonie is Director of the Melbourne Bioinformatics, Director of the EMBL Australia Bioinformatics Resource (EMBL-ABR: http://embl-abr.org.au), and an associate professor at the Faculty of Medicine, Dentistry and Health Sciences at the University of Melbourne, where he coordinates the MSc (Bioinformatics). Andrew directs a group of bioinformaticians, computational biologists and HPC specialists within the Melbourne Bioinformatics and EMBL-ABR to collaborate with and support life sciences researchers in a variety of research projects across Australia.

 

Pawsey Supercomputing Centre – Engaging for the Future

Dr Neil Stringfellow1, Dr Daniel Grimwood1

1Pawsey Supercomputing Centre, Kensington, Australia

 

ABSTRACT

The Pawsey Supercomputing Centre continues to evolve and grow. Recent developments at Pawsey include the new advanced technology testbed called Athena, as well as the expansion of the Zeus commodity Linux cluster. The Athena testbed includes Intel Xeon Phi “Knights Landing” processors as well as Nvidia “Pascal” GPUs.

I will also touch on the longer term vision for Pawsey and the Federal infrastructure roadmap.

 


Biography:

Dr Neil Stringfellow is the Executive Director of the Pawsey Supercomputing Centre.

Neil has led the Pawsey Supercomputing Centre since 2013, overseeing the operational launch of the Magnus Cray supercomputer. Neil joined Pawsey from the Swiss National Supercomputing Centre (CSCS), where he was involved in application and science support, the management of strategic partnerships and Switzerland’s flagship supercomputing systems.
