The importance of research data management: The value of electronic laboratory notebooks in the management of data integrity and data availability

Mr Paul Cremin1, Matt Miller1, Mr Matt Dunie2

1LabArchives LLC, Coogee, Australia, 

2LabArchives LLC, San Diego, United States, 



The data produced in the practice of the scientific method is not, by design, managed in a uniform way. In academia especially, the breadth of mechanisms in use is extreme. Labs, researchers and collaborators are using thumb drives, isolated PCs, network drives, large commercial storage services, open-access blogging sites or hosted blogs. Some labs are handling sensitive information in paper notebooks not kept under lock and key, and others are using smartphones. These environments raise major issues of compliance, intellectual property protection and data integrity. The use of secure, online or locally based (data resides in-country or in-institution) systems designed for research data management, research workflow and institutional support facilitates better research data management, researcher and institutional oversight, access, provisioning and reporting. The collaborative nature of scientific research is more enabled now than ever before. Scientists of every discipline use sophisticated technology in their workflows; they should do the same for research data management.


There has been a growth in research-related data for several reasons, but chief among them are the advancement of sophisticated research technology and the advent of immediate, low-cost communications and collaboration technology, which together enable the creation of massive amounts of research data. These data must be managed properly in order to facilitate quality research. Research funding agencies make statements regarding data management, and publishers provide for the submission of supplemental data. Yet there are research retractions for faulty and falsified data every month. Good data management tools and processes can help to limit “bad” research.

According to a report published by the International Association of Scientific, Technical and Medical Publishers, there are more than 28,100 English-language peer-reviewed journals in publication, with an estimated output of 2.5 million articles [3]. The report also notes that surveys of researchers suggest as many as 1%–2% of scientists have fabricated or falsified research data, and that there were more than four million unique authors in 2014. Simple arithmetic then suggests that anywhere from forty thousand to eighty thousand of the named authors in the 2014 sample may have used incomplete or inaccurate data at some point in their careers.
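The “simple arithmetic” above is easy to make explicit. A back-of-envelope check, using only the figures quoted from the report:

```python
# Back-of-envelope check of the estimate above. The input numbers come
# from the cited STM report figures; nothing here is an independent estimate.
authors_2014 = 4_000_000           # unique authors reported for 2014
low_rate, high_rate = 0.01, 0.02   # the 1%-2% fabrication/falsification estimate

low_estimate = int(authors_2014 * low_rate)
high_estimate = int(authors_2014 * high_rate)
print(low_estimate, high_estimate)  # 40000 80000
```

This confirms the forty-to-eighty-thousand range stated in the text.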

Data management and Laboratory notebooks

Data Management Plans have been a requirement of various funding agencies for most of the past decade. Governmental agencies have been issuing policies on data and management for much longer. The basics are the same across all of the plans: preserve data & provide access. There are other components to be considered, however the basic premise is to make the data available for future research and evaluation for the purposes of reproducibility, research integrity, further research, or challenge.

The reasons for keeping a lab notebook are well known. The U.S. Health & Human Services Office of Research Integrity web page sums up the reasons very nicely [1]:

  1. To establish good work practices.
  2. To teach people in your lab.
  3. To meet contractual requirements.
  4. To avoid fraud.
  5. To defend patents.
  6. To allow work to be reproduced by others.
  7. To facilitate preparation of formal reports, presentations and papers.
  8. To validate your research.
  9. To serve as a source for assigning credit to lab members.

The sixth item in the list above may be the most important. Among the most basic underpinnings of science is the reproducibility of research. Without strong data management policies, documentation, and practice, reproducibility is at risk. Research labs of all disciplines have varying types of equipment, but there is at least one standard among them: research is to be documented in accordance with the scientific method. Good data is data that is documented, stored, and accessible.

The scientific method, at the highest level, appears quite simple. It is a circular process in which a researcher goes through at least four phases: ideation, research, analysis, and conclusion. The output is typically either a peer-reviewed or “grey” paper or report. When evaluated in more detail, however, the process is much more complex than these four steps.

At a granular level, good research requires skill, tools, and rigor. One of the more granular components is data management, and possibly the definition of data itself. To the lay person, data is limited to the results of an experiment or survey. But in reality, and depending upon the discipline, the data may include documentation of process and procedure. A research notebook may include information on failed experiments and documentation of missteps or unresolved questions. Research notes, environmental observations, lab notes and evaluations are all part of the research data. All of this can be “templated” into an Electronic Laboratory Notebook (ELN), ensuring that the complete research record is preserved and available for future use.

This is especially true when considering that a major element of validation in the research process is the reproducibility of previous work. Many retractions are the result of irreproducible research. According to some in the research community, the community itself has conflicting objectives. The pressure placed on researchers to increase their research output, to win more grant proposals, and to publish regularly can be at odds with solid scientific research management. Indeed, a recent article details two peer-reviewed articles that “. . . urge scientists to make research more reproducible” [2].

Research Data Management practices are highly variable. For proof, walk the halls on different floors of a research building in academia and visit a dozen research labs. Depending on institutional policies and the level of independence granted to lab heads, you may see as many as a dozen different mechanisms for managing research data in a dozen different labs. But research labs must differ; that is the nature of the business of research, especially in academia, and those differences are the basic nature and strength of academia.

Electronic Laboratory notebooks

But there are tools in the marketplace that can help researchers maintain their independence, give researchers and their administrators the ability to protect their work product, and enable scientific reproducibility. They are Electronic Laboratory Notebooks (ELN – a terrible 1980s-era name describing a product that essentially replaces the analog paper notebook with a much-improved digital version).

When data are entered into an ELN – and data means research data, notes, observations, formulas, equations, sketches, data sets, images. . . any type of data – the platform must be able to support several interested parties: 1) a funding agency’s requirement for a Data Management Plan; 2) the researcher’s need to document their research; 3) the administration’s need to protect IP and prove discovery; 4) the publisher’s need to review and publish data supporting the research; and 5) the institute’s industry partners’ need to easily review and acquire data in a format that improves commercialization efforts and is the standard in the industry.

ELNs can support the Scientific Method in ways traditional paper notebooks cannot. They also support institutional research policies and objectives and provide a platform for institutional data management and research support. A robust ELN supports Data Integrity, Data Lifecycle Management, Data Management, Data Accessibility, Collaboration, and Research Reproducibility. In today’s world of global collaborative research, digital information, and robust, advanced information technology, ELNs are becoming the “must have” tool for researchers and institutions.



[2] Two manifestos for better science, Discover Magazine January 11 (2017), neuroskeptic/2017/01/11/manifestos-better-science/#.WT6mvmjyuUk.

[3] M. Ware and M. Mabe, The STM Report: An Overview of Scientific and Scholarly Journal Publishing, 4th edn, International Association of Scientific, Technical and Medical Publishers, Netherlands, 2015, 2015_02_20_STM_Report_2015.pdf.

** Full article published by Matt Dunie, President and Co-Founder, LabArchives in:

Information Services & Use 37 (2017) 355–359

DOI 10.3233/ISU-170843

IOS Press


Experienced data management consultant, specialist literature researcher, trainer and reference info specialist.

I have 20 years of high-level research experience across fields such as competitive intelligence, company research, medical literature searching and pharmacovigilance, market research, drug safety literature searching, intellectual property and finance. My experience in the corporate, academic and scientific communities has given me a working knowledge of the required information resources, the various data management resources available, and how these can be leveraged to improve research workflows.

A conceptual model to deliver end-to-end user-centric e-infrastructure for ecosystem science

Siddeswara Guru1, Beryl Morris1

1TERN, University of Queensland, Brisbane, Australia, 



The environment is changing at a faster pace than anticipated due to human-induced activities, potentially leading to climate change, loss of biodiversity and seasonal variation. These changes need to be tracked, studied and predicted so that adverse changes can be mitigated and plans made for a sustainable future. It is also essential to understand how biological and social systems respond to environmental change.

There is an increasing need for ecological information to address the challenges described above and other significant social, economic and environmental problems. There are several initiatives at regional, national and international scales to collect and publish a large variety of ecosystem science data, such as TERN, NEON, TERENO and ICOS, as well as programs of state and federal government agencies.

Terrestrial ecosystem data is exceptionally heterogeneous with respect to data types and collection methods, with data collected from human observers, hand-held instruments, in-situ sensors and remote sensing observations. The observations also vary in spatial scale: point, plot, site, region, biome, state and continent. The data handling mechanisms for these datasets differ accordingly.

One of the challenges is to provide complex data streams in an easily accessible form, with standard data formats and exchange protocols, along with tools and platforms to interrogate, access, analyse and share analysis pipelines. The data infrastructure should enable easy access to custom data and information for a particular problem, along with access to scientific computing, visualisation and toolkits that enable data-driven science. It needs to evolve in response to community needs and provide a robust platform that meets the requirements not only of the ecosystem science community but also of interdisciplinary researchers. It should also enable researchers to use the same datasets for multiple applications and allow complete re-use and re-purposing of data and infrastructure capabilities.

Conceptual Model

The emerging data e-infrastructure should support holistic capabilities: not only storing, curating and distributing data, but also enabling processing, transformation based on user needs, access control, model-data assimilation, tracking of data transformations at different stages of analysis, and linking consistent data to different analysis tools and pipelines.

The data e-infrastructure should also adopt big data technologies to deal with the increase in data production by different observing initiatives, and should provide a coherent eResearch platform offering services for research data management and cross-disciplinary collaboration under a flexible governance model. In ecosystem science, this should enable access to large collections of earth, environment and biodiversity datasets that bear on regional, continental and global issues and policy matters, while supporting: on-demand compute and storage for data-centric scientific applications; a secure environment for data storage and processing; policies for data security, privacy and confidentiality; and authentication and authorisation to support virtual collaboration and the building of scientific communities.

A conceptual framework has been developed to identify the significant elements and functionalities needed to bring coherent services to the ecosystem science community. The framework brings together all the data management components, in addition to the computation platforms, to enable users to access and analyse data in a data-centric approach. The collective interaction between the framework’s components will deliver to the needs of ecosystem science data users. The framework identifies most of the required services, broadly classified into data sources, transformation services, enabling services, delivery services and governance.

Data sources are an important part of any data e-infrastructure. Data can come from many sources, but we focus on the types most useful in terrestrial ecosystem science; the key is to provide standardised access to different data streams so that information and knowledge can be extracted. Transformation services are the part of the data management suite that transforms data into information. Enabling services perform the key activities that meet user needs; they sit between user access and raw information. Delivery services are the accessibility points through which users interact with the infrastructure. Service initiation describes how users will initiate services to obtain the sample outcomes listed under outcomes.

The framework is structured so that each service fulfils a certain task and runs independently, with the overall framework operating as a set of loosely coupled collaborative services. Services can be managed by different organisations yet be coupled with other services to meet particular use cases. The interaction between the services is the key.

Figure 1: A conceptual framework to deliver End-to-End services to the ecosystem science community.

We envisage that the infrastructure will be hosted on the cloud-based NeCTAR and RDS infrastructure of the Australian Research Data Commons (ARDC). Cloud technologies should simplify the building of such infrastructure, with access to combinations of IaaS, PaaS and SaaS. Access to multiple ARDC nodes would enable the infrastructure to be spread across multiple geographic regions, improving fault tolerance and availability.


This abstract presents some future thinking on the development of a more cohesive e-infrastructure to meet user needs. The approach taken is to examine potential use cases and identify the critical services that may be required to meet them. Each service performs certain tasks with the ability to interact with other services.


Siddeswara Guru is the program lead for the data services capability of TERN. He has a substantial research and management background and has worked on eResearch projects across multiple domains.


LibraryBox: Boxing the Internet

Mr Hasitha Thalaththani Ralalage1

1Deakin University, Geelong, Australia,


Websites and online file sharing services have become so ubiquitous and accessible that most of us take them for granted. But what happens when we don’t have fast, cheap or unrestricted access to the internet?

Meet the LibraryBox:  an inexpensive portable webserver that will serve rich digital content without requiring an internet connection!

Early this year Deakin University Library, in conjunction with a Humanities and Social Sciences researcher, developed an oral history website to showcase Papua New Guineans’ experiences of World War 2. The aim of this project was to help keep alive memories and war stories handed down through the generations, and share them with others around the world. The website PNG Voices arose out of this project.

However, many parts of Papua New Guinea lack reliable or affordable internet access, so PNG Voices was largely out of reach for many Papua New Guineans. The LibraryBox gave us an excellent opportunity to bridge the gap between poor internet infrastructure and a media-rich website.

Hosting a full-blown website that relies on advanced server technologies on a low-powered device proved challenging. This talk discusses how we overcame those challenges and how the LibraryBox can be a useful tool for the Australian eResearch community.

Among the advantages, LibraryBox:

  • Provides a quick and easy method of sharing large data sets wirelessly over a local area network
  • Allows data to be shared without relying on third-party infrastructure or being subject to network restrictions
  • Provides a secure, inexpensive, low-power data sharing platform
  • Enables sharing of data sets or websites at remote locations


Hasitha Thalaththani Ralalage is a DevOps Engineer at the Deakin University Library. He provides creative technical solutions to challenging problems in the digital libraries and repositories space. In his current role, he contributes to the development, administration and maintenance of the University’s research publication management, repository and reporting systems.


“In teaching you will learn”: Building HASS DEVL training elements collaboratively

Ms Alexis Tindall1

1eRSA, Thebarton, Adelaide, Australia,


The Humanities, Arts and Social Sciences Data-Enhanced Virtual Lab (HASS DEVL) promises a national skills and training framework that supports the range of other project deliverables, including a data curation framework and a workbench environment. The project aims to support a diverse research community, varying in ambition, experience and confidence with data and digital research methods. The project team worked on skills and training elements concurrently with the teams working on other project outputs, and these outputs will be tested at a two-day Train the Trainer program to be delivered in November.

In this lightning talk Alexis will provide an overview of the HASS DEVL skills and training framework, highlight some of the opportunities in this area, and briefly explore our challenges, including simultaneous development of tools and training, working across a distributed team, and determining the needs of a diverse research community. The delivery of these project outputs has been a learning process, and we might be a lot better at it once the project is finished!


Alexis is part of eRSA’s Research Engagement team, working with humanities and social sciences researchers to help them access data storage and compute facilities through eRSA. In this role she leads the skills and training elements of the Humanities, Arts and Social Sciences Data-Enhanced Virtual Lab, as well as contributing to the project team and project community engagement.
Alexis has a postgraduate qualification in Museum Studies and extensive project management experience in a variety of environments. Prior to working with eRSA, she worked in natural history museum digitisation and for the Atlas of Living Australia, improving digital accessibility of the nation’s biological collections.

Transforming research capabilities for evidence-based policy making

Ms Michelle Zwagerman1

1Analysis & Policy Observatory, Hawthorn, Australia, 


The “Linked semantic platforms” LIEF project aims to develop the next generation of decision-support tools for interdisciplinary research on critical public policy issues. This presentation covers the results of the project’s needs analysis phase. The needs analysis focuses on a specific subset of interdisciplinary policy topics and homes in on particular research subgroups within government, industry and academia. Drawing on systems thinking, we expect to identify the unique challenges that confront interdisciplinary research, where information is often siloed and not easily applied to wicked policy problems. An example is digital health, where research combining the disciplines of health and technology must connect evidence based on dissimilar research methods and varied research publication practices.


Our project is investigating the use of information technology to assist researchers working in interdisciplinary fields. Through applying appropriate information architectural designs, we aim to develop eResearch tools to support existing and new forms of research.


Michelle Zwagerman is the Digital Product Manager for Swinburne’s and the CRC for Low Carbon Living’s Knowledge Hub. She has completed a Master of Public Policy at RMIT, a Master of Business Administration at University of NSW, and a Bachelor of Science at University of Melbourne. She has over 20 years’ experience in Information Technology having delivered numerous IT projects and managed various IT support services.

Archives and the semantic web: Contextualising the record

Mr Owen Oneill1, Mr Nick Fahey1

1Public Record Office Victoria, North Melbourne, Australia, 


Public Record Office Victoria (PROV) is the archive of the State of Victoria. PROV is responsible for preserving and facilitating access to the permanent records of the Victorian Government. The collection is used by a very broad range of users including Government, academic researchers and the public.

Users of the collection may also create contextual information, including definitions of relationships to external knowledge and data sources. By structuring and describing the collection using semantic web technologies, PROV is keen to support the reuse of this valuable contextual information by enabling it to be explicitly linked to the collection using machine-readable and persistent methods. This will enable the accumulated knowledge of the collection to be assembled and built upon more easily. It will also enable the PROV collection to be more actively linked to other sources of knowledge.

PROV has developed, and is progressively implementing, a data model for associating contextual information with records in its collection. PROV has implemented the data model using RDF and a Fedora Commons repository. Records are further disaggregated into their component parts using the International Image Interoperability Framework (IIIF) specification. This approach makes it easier for external users to curate, cite and reference the collection.
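The core idea can be sketched in a few lines: contextual annotations are expressed as machine-readable (subject, predicate, object) triples linked persistently to archival records. All identifiers and predicate names below are hypothetical illustrations, not PROV’s actual URIs or vocabulary.

```python
# Hypothetical RDF-style triples linking an external annotation to an
# archival record. Identifiers and predicates are illustrative only.
record = "https://example.org/record/VPRS-1234"
annotation = "https://example.org/context/annotation-7"

triples = [
    (annotation, "describes", record),
    (annotation, "citesSource", "https://example.org/external/source-9"),
]

# An RDF store indexes such triples, so the accumulated context for a
# record can be retrieved with a simple pattern query.
context_for_record = [s for (s, p, o) in triples if p == "describes" and o == record]
```

In a real deployment, a triple store behind the repository would answer such pattern queries; the point is that the link between record and context is explicit, persistent and machine-readable.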


Owen ONeill is responsible for a program of work at Public Record Office Victoria (PROV) to implement a number of key software systems for preserving, managing and facilitating access to the PROV collection.  He is an IT professional with experience working on a number of eResearch and digital preservation projects.

Priming the pump: an experiment enabling HASS research students to become digital research practitioners

Mr Marco Fahmi1

1University Of Queensland, St Lucia, Australia,



Adopting digital tools in research opens up new and exciting domains of enquiry. However, the very wide variety of research disciplines, research practices and levels of digital literacy makes attempts to adequately support digital researchers non-trivial and labour-intensive.

One particular challenge is supporting communities that have traditionally been underserved by eResearch and that lack the opportunity and capability to support themselves. One such community is research higher degree students in the Humanities, Arts and Social Sciences (HASS).

Although HASS disciplines represent more than 40% of Australia’s research output [1], they often do not have a commensurate level of digital research support. In 2018, for example, HASS’s share of federal funding for digital research infrastructure was less than 1% [2].

Among HASS researchers, higher degree research students are the most likely beneficiaries of digital tools as they investigate new research directions and methods. Yet, they encounter informational, technical and cultural barriers that make it difficult for them to engage in digital scholarship.

A 2017 report [3] by the Digital Humanities and Social Sciences program at The University of Queensland highlights the challenges research students face when pursuing digital research: lack of awareness of how technology is being used in their disciplines; limited technical know-how; supervisors with little exposure to digital research; pressure to complete the degree on time; and a lack of recognition of digital research and non-traditional theses as legitimate and prestigious research.

Meeting the Challenge

The 2017 report’s main suggestion is not only to create many points of engagement with research students (through training and education programs, advisory services, connecting research students to digital research specialists, and so on), but also to create a social environment and intellectual context that addresses existing barriers, encourages research students to engage in digital research, and increases the likelihood that digital research projects succeed.

The Graduate Digital Research Fellowship set up by The University of Queensland is one such opportunity. Based on Stanford University’s CESTA Fellowships [4], it is a one-year fellowship for confirmed PhD students to gain the skills and experience necessary to carry out digital research in their disciplines.

The fellowships have two main objectives: for the fellows to gain a well-rounded understanding of digital research practice in their discipline as well as practical, hands-on experience working with digital research specialists and building scholarly digital artefacts. Working together as a cohort, and with the support of specialist staff, the fellows are likely to save time and effort they would have otherwise wasted searching for information, experimenting with inadequate tools and reinventing wheels.

The second objective of the fellowships is focused on the research community. Recognising digital research by the university normalises and raises the profile of digital research to the level of legitimate scholarship. It also allows traditional researchers to observe digital research as carried out by fellows up close and to demystify digital skills and practices.


In 2018, the Digital Humanities and Social Sciences program at the University of Queensland partnered with the Graduate School to offer a one-year Graduate Digital Research Fellowship starting in July 2018. The fellowship was open to all confirmed PhD students who had the support of their PhD advisors and who self-identified as researchers in the Humanities, Arts or Social Sciences.

Fellows must nominate the digital research proposal they will be working on and are required to give a non-technical public seminar to present their digital research area and demonstrate its scholarly merit.

Fellows meet weekly with a multi-disciplinary support team composed of librarians, technology specialists and eResearch analysts. The purpose of the meetings is to manage the fellowships as digital research projects, with clear deliverables, timelines and planned activities, and to give the fellows an opportunity to work together as a cohort and exchange ideas and experiences with their peers.

While the fellowships do not offer a stipend, there is a budget allocation to support conference attendance. In addition, fellows have priority one-on-one consultations with technical staff, and participate in digital research workshops and technical seminars on offer at the university.

The Graduate School received ten applications for the 2018 round of the Graduate Digital Research Fellowship. Six fellows were selected across Literature, Political Science, Psychology, Cultural Studies, History, and Communication. The presentation will provide an update on the Graduate Digital Research Fellowship, including preliminary findings from its first three months of operation and insights into how it can be applied in other environments.


  1. Turner, G. and K. Brass. Mapping the Humanities, Arts and Social Sciences in Australia. October 2014. p.2. Available from: accessed 22 June 2018
  2. DASSH response to the Australian Government Research Infrastructure Investment Plan. 16 May 2018. Available from: accessed 22 June 2018
  3. Fahmi M. Digital Humanities and Social Sciences at the University of Queensland: A report to the Faculty of Humanities and Social Sciences on digital research activities. 2 May 2017.
  4. Graduate Digital Humanities Fellows. Center for Spatial and Textual Analysis. Stanford University. Available from: accessed 22 June 2018


Marco’s expertise is in technology- and data-driven research with experience in humanities, social sciences and ecological disciplines.

Curating Arts and Social Sciences collections

Ms Tanya Holm1

1UNSW Library, Coogee, Australia, 


My lightning talk focuses on the Arts and Social Sciences Repository (A&SS Repository). The Faculty of Arts and Social Sciences is responsible for content and the Library is responsible for the curation of collections. These collections include at-risk and/or orphaned collections, high-value collections and non-traditional research material. The material is often directly generated from a research project and the creators of the collections are not necessarily affiliated with the university.

The talk addresses the provenance of current collections: the Irish Anzacs collection being the output of research funded by the Irish Government’s Emigrant Support Program in 2013; the streaming video collection of ‘dance on screen’ which is a product of the ReelDance organization losing its funding in 2012; the collection of audio interviews and transcripts which are the result of ARC-funded research related to the School of the Arts and Media’s participation in an AusStage project; and the Social Science Data Guide collection describing datasets on elections, democracy and autocracy, developed by a researcher from the School of Social Sciences.

Further discussion includes how users engage with the repository (research, study, teaching and artistic development), its content model (content creation and management, the responsibilities of the library and faculty, and rights and permission management) and how it incorporates FAIR principles.

Relevance to conference:

The topic is relevant to the conference topics of ‘Making data FAIR’ and ‘Libraries Supporting Research’ – since the topic of the talk is how the Library is supporting the Arts and Social Sciences faculty in making their collections findable, accessible, interoperable and re-usable via the A&SS repository.


Tanya Holm has been a Senior Data Librarian at UNSW Library in Sydney, Australia for the last two years. She oversees the institutional and disciplinary repositories managed by the Library. Previously she worked in Library Application Support for UNILINC, a not-for-profit organisation.

Strudel V2: Lowering the barriers to entry for batch computing

Dr Chris Hines1

1Monash eResearch Centre, Clayton, Australia


Once upon a time, the CVL wanted to make graphical applications easy to use on HPC systems with big data storage. As part of this we created a simple tool, installable on most major operating systems, that can set up SSH tunnels, initiate a VNC server and launch a VNC viewer locally. This worked great: we hid all the machinations of using a batch scheduler and SSH tunnels from the user behind a simple GUI. We called it Strudel. A few years later, someone suggested it would be really sweet if we could handle the whole thing from within a web browser. Thus was born Strudel-web. We made some neat hacks with Strudel-web: it used the same config files as Strudel (desktop), and we found a way to create short-lived credentials, marrying this to OAuth2 and AAF authentication to make a general way for web apps to perform SSH access. Strudel-web has been chugging along happily for a couple of years now, and Strudel desktop even longer, and we’ve learnt a few lessons. In this talk I’ll cover some of the things we’ve learnt and the features we’re going to add in Strudel v2, including:

1. Alternative applications (like Jupyter notebooks)

2. Enhanced separation of configuration between batch scheduler and application

3. Unified codebase for both web and desktop applications

Please come along to hear about this tasty new solution.


Chris has been kicking around the eResearch sector for over a decade. He has a background in quantum physics and, with the arrogance of physicists everywhere, thinks this qualifies him to stick his big nose into topics he knows nothing about.

Altmetrics: ‘big data’ that map the influence of Australian research

Mrs Stephanie Guichard1, Ms Stacy Konkiel2

1Digital Science, Melbourne, Australia,

2Altmetric, London, United Kingdom,



Altmetrics are scholarly “big data” illustrating where, how often, and by whom research is discussed and commented on, offering universities and research-intensive organizations a unique opportunity to understand the non-traditional scholarly and public influence of their institutions’ research. Crucially, academic librarians are uniquely situated, as information experts, to discover and interpret altmetrics for their organizations.

This presentation will explore altmetrics data for Australian research published in 2017 in the most popular journals.  We will discuss the rate of online engagement, the most popular platforms for sharing Australian research, and where in the world Australian research is most mentioned in news, public policy, Facebook and Twitter.


Using a linked research insights tool, Dimensions, we extracted 4,354 publications published in 2017 by authors from Australian institutions in the country’s five most popular journals (Scientific Reports; PLoS One; Heart, Lung & Circulation; RSC Advances; and Journal of Materials Chemistry). We imported article identifiers into the Altmetric Explorer database, a leading resource for finding and analyzing altmetrics data, and then used the database to generate reports and visualizations for those research outputs.


Overall, 2,082 articles (47.8%) had attention in at least one source that Altmetric tracks. These articles gathered 32,533 mentions in total.

In terms of demographics engaging with Australian research, results indicated the following: 27,474 tweets by 17,375 unique tweeters in 163 countries; 1,515 Facebook posts on 1,141 unique Facebook Pages in 51 countries; 2,701 news stories by 635 unique news outlets in 49 countries; and 11 policy documents written by 4 unique policy sources in 4 countries. The presentation will include detailed maps showcasing the country-level engagement with Australian research.


Overall, Australian-authored research from 2017 demonstrated considerable international influence across a variety of stakeholder groups.

Altmetrics offer a unique view of engagement with domestic research and should be considered as part of any organization’s larger reporting and evaluation plans.


I have previously worked in corporate banking and academic publishing before joining Digital Science. I have an interest in promoting academia, particularly literature and history, and have a keen passion for innovative digital research technologies.

About the conference

eResearch Australasia provides opportunities for delegates to engage, connect, and share their ideas and exemplars concerning new information-centric research capabilities, and how information and communication technologies help researchers to collaborate, collect, manage, share, process, analyse, store, find, understand and re-use information.
