Demonstrating the Impact of Research Infrastructures

Michelle Barker1, Ingrid Mason2, Peter Elford2

1Nectar, Parkville, Australia

2AARNet, Canberra

DESCRIPTION

This BoF adds to the expanding discussion on how to demonstrate the value of research infrastructures. A wide range of approaches to demonstrating the outcomes and/or impact of research infrastructures already exist, alongside exemplars of different methods focused on a variety of national and international infrastructures, including virtual infrastructures. This BoF will introduce some of the major types of impact categories – scientific, economic, workforce and social/cultural – and some examples of their use. Participants will then engage in small-group discussion on the most appropriate metrics for the type of value they are trying to evidence, taking into consideration the stakeholders at whom this evidence is targeted. Participants will have the opportunity to reflect on their current approach to metrics to demonstrate the value of physical and/or virtual facilities they are involved with, and consider alternatives. The BoF will end with a group feedback session to enable participants to widen their perspectives by hearing how others are approaching similar issues.

This BoF is targeted at eResearch professionals involved in developing and managing research infrastructure, whether on behalf of specific research communities or providing services across all research domains. Participants may include research managers and infrastructure managers, eResearch directors, academic librarians, eResearch analysts, and communication and service managers.

Duration: 60 minutes.

EXTENDED ABSTRACT

Research infrastructures continue to play a key role in enabling research impact, and demand is growing for methodologies and tools to assess that impact. Because research infrastructures facilitate many different parts of the research lifecycle, detailing their contribution within impact pathways can be challenging, prompting national and international conversations on how best to demonstrate their value.

A range of quantitative and qualitative methodologies currently exist for demonstrating the value of research infrastructures in terms of outcomes and/or impacts, including:

  1. Return on investment
  2. Cost-benefit ratio (a worked sketch of these first two measures follows this list)
  3. Contingent valuation
  4. Policy change
  5. On-ground interventions and changes in actions
  6. Amplification and improvement in the creation, delivery and competitive advantage of new products and services and enterprises by others
  7. Efficiency gains: productivity and efficiency gains in information management and delivery of services
  8. Cultural change in the behaviours relating to open sharing of data and integration and interoperability between platforms and services and data
  9. Extent of contribution to the application of knowledge and activities in key sectors, including through the creation of “information derivatives”
  10. Instrument and use value/Uptake and usage of services/accessibility
  11. Platform and facility attribution tracking
  12. Production of reports, papers and publications
  13. Data sharing, data re-use
  14. Software sharing, software re-use
  15. Software citation and attribution
  16. Data citation and attribution
  17. Facilitation of collaboration
  18. Skills development

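To make the first two measures concrete, the sketch below shows how a simple return-on-investment and benefit-cost ratio calculation might be framed for an infrastructure investment. The figures, time horizon and discount rate are entirely hypothetical and are not drawn from any of the studies cited below; real evaluations use far more detailed cost and benefit models.

```python
# Hypothetical ROI / benefit-cost ratio sketch for a research infrastructure
# investment. All figures are illustrative only, not taken from any cited study.

def npv(cashflows, rate):
    """Net present value of a list of annual cashflows (year 0 first)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

# Illustrative annual costs and estimated benefits over five years (AUD millions)
costs = [10.0, 4.0, 4.0, 4.0, 4.0]       # capital outlay, then operations
benefits = [0.0, 5.0, 8.0, 12.0, 15.0]   # estimated research / economic benefits
discount_rate = 0.07                      # assumed real discount rate

pv_costs = npv(costs, discount_rate)
pv_benefits = npv(benefits, discount_rate)

bcr = pv_benefits / pv_costs                # benefit-cost ratio
roi = (pv_benefits - pv_costs) / pv_costs   # return on investment

print(f"PV costs:    {pv_costs:.1f}m")
print(f"PV benefits: {pv_benefits:.1f}m")
print(f"BCR: {bcr:.2f}, ROI: {roi:.1%}")
```

Contingent valuation and the qualitative measures later in the list rely on survey and case-study evidence rather than calculations of this kind.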
Australian exemplars include evaluations of the impact of AuScope [1], the Atlas of Living Australia [2] and CSIRO [3]. International exemplars include studies of the European Bioinformatics Institute [4] and CERN [5]. Examples of methodological approaches include tools from CSIRO [6], the Council of Rural Research and Development Corporations [7] and the CESSDA Cost-Benefit Advocacy Toolkit [8].

This BoF will frame discussion around four major types of impact categories: scientific, economic, workforce and social/cultural. After discussion of these approaches and their use, participants will have the opportunity to reflect on their own approaches to evidencing value and discuss this with their peers, to encourage critical engagement on effective demonstration of value. These methodologies can be applied to both physical and virtual infrastructures.

REFERENCES

  1. Lateral Economics. AuScope Infrastructure Program – evaluation of impacts. August 2016. http://www.auscope.org.au/wp-content/uploads/2017/02/Lateral-Economics-report.pdf, accessed on 3 June, 2017.
  2. Alluvium Consulting. Assessment of the Atlas of Living Australia’s Impact and Value. 2016. https://docs.google.com/document/d/1QQLG-wivYDwDMNXIMsKpRIydnwcqPJe4LDCBWloB5Hk/edit, accessed on 12 June, 2017.
  3. ACIL Allen Consulting. CSIRO’s Impact and Value. December 2014. http://www.acilallen.com.au/cms_files/ACILAllen_CSIROAssessment_2014.pdf, accessed on 12 June, 2017.
  4. Beagrie, Neil, and Houghton, John. The Value and Impact of the European Bioinformatics Institute. 2016. https://beagrie.com/static/resource/EBI-impact-report.pdf, accessed on 10 June, 2017.
  5. Organisation for Economic Co-operation and Development. The Impacts of Large Research Infrastructures on Economic Innovation and on Society: Case Studies at CERN. 2014. https://www.oecd.org/sti/sci-tech/CERN-case-studies.pdf, accessed on 5 March, 2017.
  6. CSIRO. Evaluating Our Impact. http://www.csiro.au/en/About/Our-impact/Evaluating-our-impact, accessed on 7 April, 2017.
  7. Council of Rural Research and Development Corporations. Impact Assessment Guidelines. 2014. http://www.ruralrdc.com.au/wp-content/uploads/2016/04/CRRDC-Impact-Assessment-Guidelines-V.1-070514.pdf, accessed on 12 June, 2017.
  8. Consortium of European Social Science Data Archives. Cost-Benefit Advocacy Toolkit. https://cessda.net/CESSDA-Services/Projects/Current-projects/CESSDA-SaW/Work-Packages/WP4/Cost-Benefit-Advocacy-Toolkit, accessed on 12 June, 2017.

BIOGRAPHY

Michelle Barker is Deputy Director (Research Software Infrastructure) at National eResearch Collaborative Tools and Resources (Nectar), a National Collaborative Research Infrastructure Strategy (NCRIS) funded program. She is one of the convenors of the annual International Workshop on Science Gateways – Australia, and the International Coalition on Science Gateways. She also facilitates national conversations around common challenges such as research impact metrics, software sustainability and reproducibility.

Ingrid Mason, Deployment Strategist with AARNet, provides support for engagement and the uptake of national research and education network (NREN) and services with AARNet members across the research, cultural and collections sectors. Ingrid has worked on several NCRIS programs: Australian National Data Service, National eResearch Collaborative Tools and Resources, and Research Data Services.

As the Director, Government Relations and eResearch, Peter Elford manages AARNet’s relationships across a broad range of Federal and state government agencies, and AARNet’s engagement with the Australian (e)Research community. He is a strong and passionate advocate for the role Information and Communications Technology (ICT) plays in enabling globally collaborative and competitive research through ultra-high speed broadband connectivity. Peter is an ICT professional with over 30 years’ experience within the government, education, research and industry sectors having worked at the Australian National University, AARNet (twice) and Cisco. In his first stint at AARNet (in 1990) he engineered much of the original Internet in Australia.

Using the NeCTAR research cloud to develop national on-line services to annotate and analyse underwater imagery: SQUIDLE+ and GlobalArchive

Dr Roger Proctor1, Dr Timothy Langlois2, Dr Ariell Friedman3, Mr Brendan Davey4

1IMOS, Hobart, Australia

2UWA, Perth, Australia

3Greybits, Sydney, Australia

4TPAC, Hobart, Australia

ABSTRACT

Fish image annotation data is currently collected by various research, management and academic institutions globally (more than 100,000 hours of deployments) with varying degrees of standardisation and limited formal collaboration or data synthesis. We present a case study of how national on-line services, developed within the NeCTAR Marine Sciences Cloud, have been used to annotate habitat images and synthesise fish annotation data sets collected using Autonomous Underwater Vehicles (AUVs) and baited remote underwater stereo-video (stereo-BRUV).

Two software tools under development have been brought together in the science cloud to provide marine biologists with a powerful service for image annotation. SQUIDLE+ is an online platform designed for exploration, management and annotation of georeferenced image and video data. It provides a flexible annotation framework allowing users to work with their preferred annotation schemes. We have used SQUIDLE+ to sample the habitat composition and complexity of images of the benthos collected using stereo-BRUV. GlobalArchive is designed to be a centralised repository of aquatic ecological survey data, with design principles including ease of use, secure user access, flexible data import, and the collection of any sampling and image analysis information. To make it easy to share and synthesise data we have implemented data sharing protocols, including Open Data and synthesis Collaborations, and a spatial map to explore global datasets and filter them to create a synthesis.

These tools in the science cloud, together with a virtual desktop analysis suite offering Python and R environments, offer an unprecedented capability to deliver marine biodiversity information of value to marine managers and scientists alike.
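As a simple illustration of the kind of synthesis this enables, the sketch below joins a habitat-annotation table with a fish-count table on shared deployment and image identifiers and summarises abundance by habitat class. The column names and data values are assumptions for illustration only and do not reflect the actual SQUIDLE+ or GlobalArchive export schemas.

```python
# Illustrative synthesis of habitat annotations and fish counts.
# Column names and values are hypothetical, not the real export schemas.
import pandas as pd

# Hypothetical habitat annotations: one row per annotated image
habitat = pd.DataFrame({
    "deployment_id": ["d01", "d01", "d02", "d02"],
    "image_id":      ["i1", "i2", "i3", "i4"],
    "habitat_class": ["reef", "reef", "sand", "seagrass"],
})

# Hypothetical fish counts: one row per image and species
fish = pd.DataFrame({
    "deployment_id": ["d01", "d01", "d02", "d02"],
    "image_id":      ["i1", "i2", "i3", "i4"],
    "species":       ["P. auratus", "P. auratus", "S. hippos", "P. auratus"],
    "count":         [3, 5, 1, 2],
})

# Join on deployment and image, then summarise abundance by habitat class
merged = fish.merge(habitat, on=["deployment_id", "image_id"])
summary = (merged.groupby(["habitat_class", "species"])["count"]
                 .sum()
                 .reset_index())
print(summary)
```

An analysis of this kind could be run directly in the virtual desktop environments described above.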


BIOGRAPHY

Roger Proctor has been the Director of the Australian Ocean Data Network of the NCRIS Integrated Marine Observing System since 2011. He is a NeCTAR Board member, co-leader of the Marine Virtual Laboratory and co-leader of the Marine Sciences Cloud. He has a PhD in coastal oceanography from the University of Liverpool, UK.

Geoscience in the Cloud

Dr Laura Stanford1, Mr James Kingsmill1, Mr Tom Butler1, Dr Carina Kemp1

1Geoscience Australia, Canberra, Australia

ABSTRACT

Over the last two years Geoscience Australia has developed and commenced implementing its Digital Science Strategy [1], a project that transforms how scientific processes and analysis within the agency are performed. A key aspect of this project is using cloud services in the delivery of agency applications and in the analysis of data. In these domains, cloud services can enable a massive shift in the way developers and scientists create end-to-end solutions for their own applications. The success of this new way of working requires cultural buy-in from all levels of the organisation, and also necessitates upskilling of staff in infrastructure, security, and governance. Here we describe the processes and methodologies used to enable this within the agency, and the resulting success of enterprise applications with measurable improvements to services and science.

INTRODUCTION

Cloud services are enabling a massive shift in the way that scientific analysis is performed, from the provisioning of servers and deployment of applications to providing access to high performance compute for extracting new information from big data. Traditional infrastructure in a government agency involves on-premise servers, centralised IT teams to provision and deploy applications, and specialised operations teams to maintain those applications into the future. These arrangements act as a constraint, restricting the organisation's ability to move at the speed needed to deliver value to its consumers. Equally significantly, the scale of scientific analysis and research is limited by the infrastructure available on premise, its performance optimisation, and the speed at which it can be requested and deployed. With high volumes of data across the science fields there is increased demand for reliable and elastic infrastructure that is optimised for individual applications and analyses.

CLOUD AT GA

In 2015, Geoscience Australia started moving towards using cloud services as the new standard for deploying applications and performing science analysis. Science teams within the organisation now have the ability to deploy their own cloud infrastructure and applications, granting them responsibility for the entire stack, with high-level support available from a centralised cloud enablement team. The main provider of cloud services at GA is currently Amazon Web Services (AWS) [2].

Moving to a self-service model for infrastructure gave the science areas within the agency more control over their own work, increasing their ability to innovate in scientific analysis and to deliver more reliable and secure solutions to stakeholders. A significant part of this change, however, was cultural: success required buy-in not only from the team members implementing the solutions but from all levels of the organisation, and strong support for innovation together with the commitment of team members was essential. A further advantage of the move was the growth in technical skills of the scientists involved.

The cloud enablement team provided the governance, advice and tools to support the business areas in building their solutions.  This involved example architectures for cloud infrastructure using Docker, Packer, Terraform and Bitbucket Pipelines for continuous delivery.  Infrastructure and application code are version controlled and deployable within minutes.  The application of DevOps principles has increased productivity and has led to quicker software release cycles, continuous improvement and an increase in quality and security.  Automation of cloud infrastructure and software deployment enables rapid response to feedback, changes and improvements.
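As a hedged illustration of what "deployable within minutes" can look like in practice, the sketch below wraps the standard Terraform workflow in a small Python script of the kind a continuous-delivery pipeline might invoke. It is not GA's actual pipeline; the directory layout and workspace name are assumptions made for the example.

```python
# Minimal sketch of a pipeline step that applies version-controlled
# infrastructure code with Terraform. Not the agency's actual pipeline;
# the directory and workspace names are hypothetical.
import subprocess

def run(cmd, cwd):
    """Run a command, echoing it, and fail the pipeline on a non-zero exit."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def deploy(infra_dir="infrastructure/", workspace="dev"):
    run(["terraform", "init", "-input=false"], cwd=infra_dir)
    run(["terraform", "workspace", "select", workspace], cwd=infra_dir)
    run(["terraform", "plan", "-input=false", "-out=tfplan"], cwd=infra_dir)
    run(["terraform", "apply", "-input=false", "tfplan"], cwd=infra_dir)

if __name__ == "__main__":
    deploy()
```

Because the plan is reviewed and then applied from the same saved plan file, the step is repeatable and auditable from the version-controlled code alone.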

SECURITY AND COST

Cloud services raise new challenges around security, but can also provide exciting opportunities for increased visibility and automation. Security configuration can be ‘baked into’ application and infrastructure code early in development. Secure configurations can be developed once and used as the basis for multiple applications. Security testing can be performed automatically in build and deployment pipelines, and the data from these tests can easily be made visible to all stakeholders.
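The sketch below illustrates the kind of automated check that could run in a build or deployment pipeline, flagging security groups that expose SSH to the internet. It uses the AWS boto3 SDK and is an illustration only, not GA's actual security tooling.

```python
# Illustrative pipeline security check: flag security groups that allow
# SSH (port 22) from anywhere. An example only, not GA's actual tooling.
import sys
import boto3

def open_ssh_groups():
    ec2 = boto3.client("ec2")
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            covers_ssh = (perm.get("IpProtocol") == "-1" or
                          perm.get("FromPort", -1) <= 22 <= perm.get("ToPort", -1))
            world_open = any(r.get("CidrIp") == "0.0.0.0/0"
                             for r in perm.get("IpRanges", []))
            if covers_ssh and world_open:
                findings.append(sg["GroupId"])
    return findings

if __name__ == "__main__":
    bad = open_ssh_groups()
    if bad:
        print("Security groups open to SSH from anywhere:", ", ".join(bad))
        sys.exit(1)  # fail the pipeline so the finding is visible to everyone
    print("No world-open SSH ingress found.")
```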

In a similar way, cloud costing models can raise risks and generate benefits for the organisation. Cloud costs can have huge peaks, arising from expected events, such as autoscaling under increased demand, as well as from unexpected events, such as the public loss of access keys. These risks are serious, but can be mitigated with appropriate monitoring. The benefits of cloud’s on-demand pricing, however, include the ability to shut down development and test environments when they aren’t needed, the ability to create new environments at a moment’s notice, and visibility into costs that have traditionally been hidden from the business.
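One common way to realise the shut-down benefit described above is a scheduled job that stops non-production resources outside working hours. The boto3 sketch below stops running EC2 instances tagged as development or test environments; the tag names are assumptions and this is not GA's actual cost-control mechanism.

```python
# Minimal sketch: stop running EC2 instances tagged as dev/test environments,
# e.g. from a scheduled job outside business hours. Tag names are hypothetical.
import boto3

def stop_non_production():
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:Environment", "Values": ["dev", "test"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    instance_ids = [inst["InstanceId"]
                    for reservation in resp["Reservations"]
                    for inst in reservation["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
        print("Stopped:", ", ".join(instance_ids))
    else:
        print("Nothing to stop.")

if __name__ == "__main__":
    stop_non_production()
```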

EXEMPLARS

Sentinel Hotspots (sentinel.ga.gov.au) is a service that provides bushfire information to both ordinary citizens and emergency services, and is one of GA’s mission-critical applications. When it was hosted on-premise, it had service availability of 90% over the 2015/16 bushfire season. Outages were caused by a combination of unreliable compute and networking infrastructure, a lack of high availability in the architectural design, and poor monitoring. The outages occurred at moments of high demand, which is precisely when service availability was essential.

The application was migrated to AWS in time for the 2016/17 bushfire season, and the newly cloudified application was 100% available for the entirety of that season, using autoscaling, serverless technology, and blue/green deployments to ensure uptime.

Figure 1. Screen capture of Sentinel application
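A blue/green cutover of the kind mentioned above can be as simple as repointing a load balancer listener from the "blue" target group to the newly deployed "green" one. The boto3 sketch below illustrates that pattern; the ARNs are placeholders and this is not necessarily how the Sentinel deployment is implemented.

```python
# Illustrative blue/green cutover: point an Application Load Balancer listener
# at the newly deployed "green" target group. ARNs are placeholders only.
import boto3

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/example/..."
GREEN_TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/green/..."

def cut_over_to_green():
    elbv2 = boto3.client("elbv2")
    # Switch all traffic from the blue to the green target group;
    # rolling back is the same call with the blue target group ARN.
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{"Type": "forward",
                         "TargetGroupArn": GREEN_TARGET_GROUP_ARN}],
    )

if __name__ == "__main__":
    cut_over_to_green()
```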

AUSCORS NTRIP Broadcaster (auscors.ga.gov.au) provides data streaming from Global Navigation Satellite System (GNSS) stations throughout Australia, Antarctica and the Pacific. It had suffered service degradation for reasons similar to those affecting Sentinel Hotspots.

The architecture for AUSCORS was somewhat restricted by the requirements of end-users and by the design of the NTRIP protocol. Despite these architectural restrictions, however, the service has had far higher reliability than the on-premise system. The day-to-day management of the system is also entirely within the hands of its developers, allowing them a high level of control and responsiveness in its operation, and has the added benefit of freeing up centralised IT staff to work on improving core services.

CONCLUSIONS

Geoscience Australia’s move towards the use of cloud infrastructure and technologies has had major benefits for the organisation, not only in the way applications and services are deployed but also in their reliability and security.

REFERENCES

  1. Digital Science Strategy, Geoscience Australia, DOI:
  2. Amazon Web Services, Available from: http://aws.amazon.com
  3. Shakeabubakor, A., Sundararajan, E. & Hamdan, A., 2015. Cloud Computing Services and Applications to Improve Productivity of University Researchers. International Journal of Information and Electronics Engineering, 5(2), p.153. Available at: http://www.ijiee.org/vol5/521-F005.pdf
