Dr Laura Stanford1, Mr James Kingsmill1, Mr Tom Butler1
1Geoscience Australia, Canberra, Australia
Over the last two years Geoscience Australia has developed and commenced implementing its Digital Science Strategy, a project that transforms how scientific processes and analysis within the agency are performed. A key aspect of this project is using cloud services in the delivery of agency applications and in the analysis of data. In these domains, cloud services can enable a massive shift in the way developers and scientists create end-to-end solutions for their own applications. The success of this new way of working requires cultural buy-in from all levels of the organisation, and also necessitates increased upskilling of staff across infrastructure, security, and governance. Here, we describe the processes and methodologies used to enable this within the agency, and the resulting success of enterprise applications with measurable improvements to services and science.
Cloud services are enabling a massive shift in the way that scientific analysis is being performed, from provisioning of servers and deployment of applications to providing access to high-performance compute to extract new information from big data. Traditional infrastructure in a government agency involves on-premise servers, centralised IT teams to provision and deploy applications, and specialised operations teams to maintain those applications into the future. These arrangements act as a constraint, restricting the ability of the organisation to move at the speed necessary to deliver value to its consumers. Equally significantly, the scale of scientific analysis and research is limited by the infrastructure available on-premise, its performance optimisation, and the speed at which it can be requested and deployed. With high volumes of data across the science fields there is an increased demand for reliable and elastic infrastructure that is optimised for individual applications and analyses.
CLOUD AT GA
In 2015 Geoscience Australia started moving towards cloud services as the new standard for deploying applications and performing scientific analysis. Science teams within the organisation now have the ability to deploy their own cloud infrastructure and applications, granting them responsibility for the entire stack with high-level support available from a centralised cloud enablement team. The main provider for cloud services at GA is currently Amazon Web Services (AWS).
Moving to a self-service model for infrastructure gave the science areas within the agency more control over their own work, increasing their ability to innovate in scientific analysis and to deliver more reliable and secure solutions to stakeholders. However, a significant part of this change was cultural: success required buy-in not only from the team members implementing the solutions, but from all levels within the organisation. Strong support for innovation and commitment from team members were essential. A further benefit of the move has been the growth in technical skills of the scientists involved.
The cloud enablement team provided the governance, advice and tools to support the business areas in building their solutions. This involved example architectures for cloud infrastructure using Docker, Packer, Terraform and Bitbucket Pipelines for continuous delivery. Infrastructure and application code are version controlled and deployable within minutes. The application of DevOps principles has increased productivity and has led to quicker software release cycles, continuous improvement and an increase in quality and security. Automation of cloud infrastructure and software deployment enables rapid response to feedback, changes and improvements.
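The central idea behind declarative infrastructure tools such as Terraform is to version-control a desired state and compute the actions needed to reach it from the current state. The following sketch illustrates that plan/apply model in miniature; the resource names and attributes are invented for illustration and do not reflect GA's actual configurations.

```python
# Illustrative sketch of the declarative model used by tools like Terraform:
# diff a version-controlled desired state against the current deployed state
# and derive the minimal set of create/update/delete actions.

def plan(desired: dict, current: dict) -> dict:
    """Return the actions needed to move `current` to `desired`."""
    to_create = {k: v for k, v in desired.items() if k not in current}
    to_update = {k: v for k, v in desired.items()
                 if k in current and current[k] != v}
    to_delete = [k for k in current if k not in desired]
    return {"create": to_create, "update": to_update, "delete": to_delete}

# Hypothetical resources: one server needs resizing, one database is new.
desired = {"web_server": {"type": "t3.small"},
           "database": {"type": "db.t3.micro"}}
current = {"web_server": {"type": "t2.micro"}}

actions = plan(desired, current)
```

Because the plan is derived rather than hand-written, the same definition can be applied repeatedly and reviewed in version control like any other code change.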
SECURITY AND COST
Cloud services raise new challenges around security, but can also provide exciting opportunities for increased visibility and automation. Security configuration can be ‘baked into’ application and infrastructure code early on in development. Secure configurations can be developed once and used as the basis for multiple applications. Security testing can be performed automatically in build and deployment pipelines. The data from these tests can be easily radiated to all stakeholders.
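As a sketch of the kind of automated security test a build pipeline could run, the check below scans firewall-style ingress rules for sensitive ports exposed to the whole internet. This is an assumption-laden illustration, not GA's actual tooling; the rule format is hypothetical.

```python
# Minimal sketch (not GA's actual pipeline) of an automated security check
# that could run in a build/deployment pipeline before infrastructure
# changes are applied.

def check_security_groups(rules):
    """Flag ingress rules exposing sensitive ports to the whole internet."""
    sensitive_ports = {22, 3389}  # SSH and RDP
    findings = []
    for rule in rules:
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in sensitive_ports:
            findings.append(f"Port {rule['port']} open to the internet")
    return findings

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},  # public HTTPS: acceptable
    {"port": 22, "cidr": "0.0.0.0/0"},   # SSH open to the world: flagged
]
findings = check_security_groups(rules)
```

A failing check can stop the deployment, so insecure configuration never reaches production, and the findings themselves become pipeline output visible to all stakeholders.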
In a similar way, cloud costing models can raise risks and generate benefits for the organisation. Cloud costs can have huge peaks. This can occur from expected events, such as autoscaling from increased demand, as well as from unexpected events, such as the public loss of access keys. These risks are serious, but can be mitigated with appropriate monitoring. The benefits of cloud’s on-demand pricing, however, include the ability to shut down development and test environments when they aren’t needed, the ability to create new environments at a moment’s notice, and the visibility into costs that have traditionally been hidden from the business.
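One simple form the cost monitoring mentioned above can take is comparing each day's spend against a rolling baseline and alerting on a large multiple. The sketch below illustrates the idea; the dollar figures and the threshold factor are invented for illustration.

```python
# Hedged sketch of cost-spike monitoring: alert when today's spend exceeds
# a multiple of the recent daily average. Figures are illustrative only.

from statistics import mean

def cost_alert(daily_costs, today, factor=2.0):
    """Return True if today's spend exceeds `factor` times the recent average."""
    baseline = mean(daily_costs)
    return today > factor * baseline

history = [120.0, 110.0, 130.0, 115.0, 125.0]  # last five days, in dollars
normal = cost_alert(history, today=140.0)   # ordinary variation
spike = cost_alert(history, today=600.0)    # e.g. a leaked access key
```

In practice a check like this would be wired to the cloud provider's billing data and a notification channel, so that both expected peaks (autoscaling) and unexpected ones (compromised keys) surface quickly.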
Sentinel Hotspots (sentinel.ga.gov.au) is a service that provides bushfire information to both the general public and emergency services. It is one of GA’s mission-critical applications. When hosted on-premise, it had service availability of 90% over the 2015/16 bushfire season. Outages were caused by a combination of unreliable compute and networking infrastructure, a lack of high availability in the architectural design, and poor monitoring. The outages occurred at moments of high demand, precisely when service availability was essential.
The application was migrated to AWS in time for the 2016/17 bushfire season, and the cloud-based service was available for 100% of that season, using autoscaling, serverless technology, and blue/green deployments to ensure uptime.
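The logic of a blue/green deployment can be sketched as follows: release to the idle environment, verify its health, then switch traffic while keeping the old environment ready for instant rollback. This is a conceptual illustration, not the Sentinel implementation; the environment names and version labels are placeholders.

```python
# Illustrative blue/green deployment logic (a sketch, not Sentinel's actual
# implementation): deploy to the idle environment, health-check it, then
# switch live traffic, retaining the previous environment for rollback.

def blue_green_deploy(state, new_version, health_check):
    """Deploy `new_version` to the idle environment and promote it if healthy."""
    idle = "green" if state["live"] == "blue" else "blue"
    state["versions"][idle] = new_version
    if not health_check(idle, new_version):
        return state  # unhealthy: keep serving from the current live env
    state["live"] = idle  # traffic switch; old env kept for instant rollback
    return state

state = {"live": "blue", "versions": {"blue": "v1", "green": None}}
state = blue_green_deploy(state, "v2", health_check=lambda env, ver: True)
```

Because the previous version stays deployed and untouched, a bad release is reverted by switching traffic back rather than redeploying, which is what makes the approach attractive for a mission-critical service.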
Figure 1. Screen capture of Sentinel application
AUSCORS NTRIP Broadcaster (auscors.ga.gov.au) streams data from Global Navigation Satellite System (GNSS) stations throughout Australia, Antarctica and the Pacific. It had suffered service degradation for similar reasons as Sentinel Hotspots.
The architecture for AUSCORS was somewhat restricted by the requirements of end-users and by the design of the NTRIP protocol. Despite these architectural restrictions, however, the service has proved far more reliable than the on-premise system. Day-to-day management of the system is also entirely in the hands of its developers, giving them a high level of control and responsiveness in its operation, with the added benefit of freeing centralised IT staff to work on improving core services.
Geoscience Australia’s move towards the use of cloud infrastructure and technologies has had major benefits for the organisation, not only in the way applications and services are deployed but also in their reliability and security.
- Digital Science Strategy, Geoscience Australia, DOI:
- Amazon Web Services, Available at: http://aws.amazon.com
- Shakeabubakor, A., Sundararajan, E. & Hamdan, A., 2015. Cloud Computing Services and Applications to Improve Productivity of University Researchers. International Journal of Information and Electronics Engineering, 5(2), p.153. Available at: http://www.ijiee.org/vol5/521-F005.pdf
Laura Stanford obtained her PhD in Astronomy & Astrophysics from the Australian National University in 2007. After completing a postdoctoral position at the University of Texas at Austin, she moved away from academia and in 2012 began working at Geoscience Australia in the National Geodetic Reference System Section as an astronomer working on the International Celestial Reference Frame. She then moved into IT, where as a Data Scientist she covered many roles including data governance, citation, modelling and enterprise provenance. In the last year she has begun working with the Cloud Enablement team, learning new and exciting technologies and working with people across the agency.