Harnessing The Wisdom of Crowds: The SWARM Project

Prof. Richard O. Sinnott1 & the SWARM Team1

1University of Melbourne, rsinnott@unimelb.edu.au

 

The Smartly-assembled, Wiki-style Argument Marshalling (SWARM) project was funded by the US Intelligence Advanced Research Projects Activity (IARPA) as part of the Crowdsourcing Evidence, Argumentation, Thinking and Evaluation (CREATE) program. The project formally commenced in January 2018 and has been awarded up to $19m. SWARM is one of four projects funded globally through the IARPA CREATE program. These projects are tasked with supporting improved reasoning to aid the intelligence community by leveraging the wisdom of crowds. Whilst previous IARPA programs have demonstrated the benefits of leveraging the wisdom of crowds to obtain improved answers, the actual reasoning and deliberation behind what makes a good answer remains unclear. Understanding this is the primary goal of SWARM.

The evaluation of the SWARM platform and the other platforms is currently being undertaken by an independent crowd managed by IARPA and their Test & Evaluation team. This crowd will be organised into separate teams. Each team will be assigned a set of questions that require reasoning and evaluation to arrive at the most highly regarded answers over several months, i.e. those answers (hypotheses) with the best reasoning and presentation.

This presentation will cover the overarching goals of SWARM and the underpinning technical solutions that have been developed. This includes the mobile applications that have been developed to encourage crowd participation. The talk will also briefly cover early (non-funded) SWARM work exploring the extent to which deep learning approaches can be used to automate the assessment of collective reasoning.


Biography:

Professor Richard O. Sinnott is the Director of eResearch at the University of Melbourne and Chair of Applied Computing Systems. In these roles he is responsible for all aspects of eResearch (research-oriented IT development) at the University. He has been lead software engineer/architect  on  an  extensive  portfolio  of  national  and  international  projects,  with specific focus on those research domains requiring finer-grained access control (security). He has over 300 peer reviewed publications across a range of applied computing research areas.

Visualisation of research activity in Curtin’s virtual library

Mr Peter Green1, Dr Pauline Joseph1, Ms Amanda Bellenger1, Mr Aaron Kent1, Mr Matthew Robinson1

1Curtin University, Perth, Australia, P.Green@curtin.edu.au, P.Joseph@curtin.edu.au, A.Bellenger@curtin.edu.au, Aaron.J.Kent1@gmail.com, Matt.Robinson@curtin.edu.au

 

Curtin University Library manages authenticated access to its online journal, book and database collections using the URL re-writing proxy service EZproxy[1]. EZproxy mediates the request between user and publisher platform via the Library. The proxy service is widely deployed in libraries worldwide and has been a standard authentication solution for the industry for many years. The EZproxy software creates a log entry for each request in the Combined Log Format[2]. The log files are extensive, with approximately 30 million lines written per month. The log files capture information for each request such as the IP address, client ID, date and time, HTTP request and response, and so forth. The Curtin Library has retained at least five years of the log files.
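
To make the raw material concrete, a single Combined Log Format entry can be parsed with a short Python sketch like the one below. This is illustrative only and not code from the Curtin project; the sample values are invented.

import re

# Combined Log Format: host ident authuser [date] "request" status bytes "referer" "user-agent"
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_line(line):
    """Return a dict of fields for one log entry, or None if the line does not match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

sample = ('203.0.113.7 - jsmith [08/Jun/2018:10:15:32 +0800] '
          '"GET /login?url=https://publisher.example/article HTTP/1.1" 200 5120 '
          '"-" "Mozilla/5.0"')
print(parse_line(sample)["request"])

At roughly 30 million lines per month, a parser like this would typically be run over the files in a streaming fashion rather than loading them into memory.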

This large dataset presents an opportunity to learn more about the information seeking behaviour of Curtin Library clients, but also presents a challenge. Traditional analysis of such data tends to produce aggregated usage statistics that do not reveal activity at a granular level. Immersive visualisation could provide a means to see the data in a new way and reveal insights into the information seeking behaviour of Curtin Library clients. In collaboration with Dr Pauline Joseph, Senior Lecturer (School of Media, Creative Arts and Social Inquiry), the Curtin Library proposed this work for funding under the Curtin HIVE[3] Research Internships program. The proposal was successful and a computer science student, Aaron Kent, was employed for a ten-week period to produce visualisations from the EZproxy log file dataset.

The data was anonymised to protect client confidentiality whilst retaining granularity. The number of lines in the log files was reduced by removing ‘noise’. The Unity3D[4] software was chosen for its ability to provide visualisations that could be displayed on the large screens of the HIVE as well as on desktop screens. Many possibilities were discussed for visualisations that might give insight into client behaviour, but two were chosen for the internship.
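
The abstract does not describe the anonymisation and noise-reduction steps in detail; a minimal sketch of one common approach (a keyed hash for identifiers so granularity is kept without exposing clients, plus dropping requests for static assets) might look like the following. The key and extension list are purely illustrative.

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-private-key"          # hypothetical key, stored separately from the data
NOISE_EXTENSIONS = (".css", ".js", ".png", ".gif", ".ico")

def pseudonymise(identifier: str) -> str:
    """Replace an IP address or client ID with a stable keyed hash (same input, same output)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def is_noise(request: str) -> bool:
    """Treat requests for static page assets as noise rather than information seeking activity."""
    parts = request.split(" ")
    path = parts[1] if len(parts) > 1 else request
    return path.lower().split("?")[0].endswith(NOISE_EXTENSIONS)

A stable (rather than random) pseudonym is what preserves granularity: the same client can still be followed across requests without revealing who they are.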

The first visualisation focusses on the behaviour of individual users in time and space and represents each information request using an inverted waterfall display on a global map, as illustrated by Figure 1. Different sizes and shapes are used to represent different client groups, and the size of the information request is reflected in the size of the object. Geolocation information is used to anchor each request on the map.
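
The abstract does not say how the anchoring is computed; as an illustrative sketch (not the Unity implementation), a geolocated request can be placed on a flat world map with a simple equirectangular projection:

def latlon_to_map(lat: float, lon: float, map_width: float, map_height: float):
    """Project a latitude/longitude pair onto an equirectangular world map of the given pixel size."""
    x = (lon + 180.0) / 360.0 * map_width
    y = (90.0 - lat) / 180.0 * map_height
    return x, y

# e.g. anchor a request geolocated to Perth (approx. -31.95, 115.86) on a 1920 x 960 map
print(latlon_to_map(-31.95, 115.86, 1920, 960))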

Figure 1: Global user visualisation

The second visualisation focusses on the usage of particular resources over time and represents each information request as a building block in a 3D city as illustrated by Figure 2. The different client groups and the volume of requests are illustrated over time by location and size against each particular scholarly resource.

Figure 2: Scholarly resource visualisation

The successful visualisation prototypes have shown that the EZproxy log file data is a rich source of data for immersive visualisation, and further development will yield tools that Curtin Library can use to better understand client information seeking behaviour.

REFERENCES

  1. EZproxy Documentation. Available from: https://www.oclc.org/support/services/ezproxy/documentation/learn.en.html accessed 8 June 2018.
  2. Combined Log Format. Available from: http://fileformats.archiveteam.org/wiki/Combined_Log_Format accessed 8 June 2018.
  3. Curtin HIVE (Hub for Immersive Visualisation and eResearch). Available from: https://humanities.curtin.edu.au/research/centres-institutes-groups/hive/ accessed 8 June 2018.
  4. Unity3D. Available from: https://unity3d.com/ accessed 8 June 2018.

Biography:

Peter Green is the Associate Director, Research, Collections, Systems and Infrastructure in the Curtin University Library, Australia. He is responsible for providing strategic direction, leadership and management of library services in support of research, the acquisition, management, discovery and access of scholarly information resources, and information technology, infrastructure and facilities.

We attended Library Carpentry, what next?

Ms Janice Chan1

1Curtin University, Perth, Australia, Janice.Chan@curtin.edu.au

 

In 2017, a few Curtin Library staff ran a series of Library Carpentry[1] workshops. The workshops were well attended by library staff. Feedback from these workshops suggested attendees recognised the need to develop data and computational skills, and expressed interest in discussing how they can apply these skills to their day-to-day work.

Based on the HackyHour model[2], a regular meetup called “Library HackyHour” was established in 2018. Using the Curtin Makerspace[3] as a neutral venue, Curtin Library staff and Curtin School of Media, Creative Arts, and Social Inquiry staff and students get together every fortnight to share ideas, get help, discuss a problem, and work on projects with like-minded colleagues.

Some of the activities in Curtin Library HackyHour have included lightning talks, a learning group for a specific programming language, discussions of work problems related to data and automation, and exploration of open datasets in preparation for future hackathons.

This lightning talk is relevant to this year’s conference because libraries play a key role in supporting eResearch. This is an example of how a workplace can foster a peer-to-peer learning community that is inexpensive and has a low barrier to entry for any library staff (and academics training future library staff) who want to become digitally fluent and be better equipped to support researchers now and into the future.

Reference

  1. http://librarycarpentry.org/
  2. https://github.com/amandamiotto/HackyHourHandbook
  3. https://maker.library.curtin.edu.au/

Biography:

https://orcid.org/0000-0001-7300-3489

Janice Chan is Coordinator, Research Services at Curtin University, Perth, Western Australia. Janice’s experience is in repository management and scholarly communications. She is interested in open research, metrics and impact assessment, research data management, library-led publishing, data analysis and visualisation, and innovative practice in library service delivery.

An Open Question: A comparison of proprietary and open-access teaching materials for researchers

Weisi Chen

Intersect Australia, Sydney, Australia

 

Intersect Australia has been a significant eResearch training provider for several years. Since the first courses in eResearch tools like HPC and Microsoft Excel, the Intersect repertoire has expanded to over 25 distinct courses, delivered at our 12 member universities, hundreds of times per year to thousands of researchers.

Intersect began utilising open access training materials in 2015: teaching Software Carpentry’s Creative Commons licensed courseware in Python, Matlab, R, Unix, and Git. Shortly thereafter, two Intersect eResearch Analysts were accredited as Software Carpentry instructors. The following year this was expanded with four more accredited instructors, and in 2017, a further six instructors were accredited and Intersect joined the Software Carpentry Foundation as a silver member, a status we recently reaffirmed.

Throughout this period, Intersect has continued to maintain a proprietary catalogue of Intersect-developed courses taught alongside the Software Carpentry materials.

In this presentation, we will explore the differences, if any, in the reception of Intersect-developed course material and openly available Software Carpentry material by course attendees. The difference in cost between maintaining proprietary courseware and utilising openly available materials is also explored. We will also analyse differences between the delivery of the two sets of courses based on other variables, such as the experience level and teaching style of the trainer.

This presentation will be valuable to similar organisations who are grappling with the logistics of running eResearch training courses and deciding on strategies for developing their own material or using openly available material that already exists.

As one of Australia’s most recognised eResearch training organisations, Intersect hopes that other, similar organisations may be able to benefit from our experiences, so that the research community can ultimately benefit from high-quality training from a diverse range of providers.

10 Sports Science Things – building a strong data management culture

Mr Cameron Barrie1, Mrs Linda Forbes1

1Victoria University, Footscray, Australia

 

Victoria University has a strong track record in producing high quality research in sports science. We are noted as one of the best institutions to study or conduct research in sport science, are ranked in the top 20 in the world[1], and produce research well above world average in Human Movement and Sports Science. We have built strong connections to industry, e.g. the AFL and FIFA (World Cup soccer), and are data-intensive in this discipline area. But what are we doing to make this research data more FAIR (Findable, Accessible, Interoperable and Re-usable)?

Sports science has not had a 23 Research Data Things focus to date, so it was decided to produce a 10 Sports Science Data Things guide. Creating the guide would be a good opportunity for VU Library to engage with this group of researchers on the issues and challenges they face in managing their data, e.g. storage, description, documentation, archiving etc. 10 Sports Science Data Things also starts a conversation about the opportunities that exist when you add value to datasets, such as: increased collaboration with other researchers (nationally and internationally), further engagement with industry (enhanced translation), exploiting Australia’s data advantage in this discipline, and accelerating innovation to broaden the impact of our research output.

10 Sports Science Data Things is a self-paced learning program that provides an opportunity for our sports researchers and research students to explore issues surrounding the management of research data.

The activities in this learning program aim to encourage better data management practices and get researchers thinking about possibilities they may not have considered.

[1] https://www.vu.edu.au/study-at-vu/why-choose-vu/our-global-ranking


Biography:

Cameron Barrie, Research Services Librarian with Victoria University, provides support to researchers and research students across the university. He has nearly 20 years work experience in academic libraries and is passionate about the role research plays in driving innovation. He has a particular interest in new technologies and finding more efficient ways to get things done.

Linda Forbes, College Librarian for Sport & Exercise Science and Engineering & Science. She provides advice and leadership in library liaison and collaborative work with teaching and research staff. She is passionate about library and educational environments and their ability to extend participatory learning and building connected communities through library programs and services.

Managing data provenance for gravitational wave data analysis

Mr Patrick Clearwater1,2,3, Dr Surya Nepal3, Prof Andrew Melatos1,2

1The University Of Melbourne, Melbourne, Australia, p.clearwater@student.unimelb.edu.au 

2ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav), Australia

3Data61, CSIRO, Marsfield, Australia

 

First detected in 2015 by the Laser Interferometer Gravitational-wave Observatory (LIGO), gravitational waves are an exciting new area in astronomy, allowing astrophysicists to directly observe hitherto-invisible objects in the sky. However, this breakthrough brings with it a plethora of challenges in managing the terabytes of data LIGO produces. Many of the expected future detections will come from complex signal processing algorithms searching large parameter spaces. To make such a detection scientifically credible, understanding data provenance—what original source data were used, and exactly how they were processed—is crucially important. In particular, without a thorough record of what was done to produce a scientific result, that result can neither be reproduced nor extended by other researchers.

One approach to solving this problem is through a virtual laboratory — an environment that knows what data are available, and understands the semantics of the tools used to process them. As well as addressing the data provenance problem, this approach also allows the existing LIGO software suite to be repackaged in an easy-to-use way that makes it accessible to astronomers who are not necessarily gravitational wave domain experts. We discuss some of the work that has been done to apply these techniques to gravitational wave astronomy, as well as how a gravitational wave virtual laboratory could solve the data provenance problem.
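
As a much-simplified illustration of the kind of record such an environment needs to keep (this is a generic Python sketch, not the OzGrav/Data61 implementation), each processing step can capture the exact input files, their checksums, the code version and the parameters used:

import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum an input file so the exact data used can later be verified."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def provenance_record(inputs, parameters, output):
    """Assemble a minimal provenance record for one analysis step."""
    code_version = subprocess.run(
        ["git", "rev-parse", "HEAD"],              # assumes the analysis code lives in a git repository
        capture_output=True, text=True
    ).stdout.strip()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "code_version": code_version,
        "inputs": [{"path": str(p), "sha256": sha256(Path(p))} for p in inputs],
        "parameters": parameters,
        "output": str(output),
    }

# Hypothetical usage; file names and parameters are placeholders only.
# record = provenance_record(["strain_H1.hdf5"], {"f_min": 20.0}, "candidates.json")
# Path("provenance.json").write_text(json.dumps(record, indent=2))

In practice a virtual laboratory would generate and store records like this automatically, so that a published result can always be traced back to the data and code that produced it.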


Biography:

Patrick Clearwater is a PhD student in the Astrophysics group, School of Physics, at the University of Melbourne. His research interests lie in analysing gravitational wave data to find evidence of signals at very low signal to noise, such as those expected from neutron stars, as well as building tools to make such searches computationally efficient, easy to use, and reproducible. He is a member of the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav) and the LIGO Scientific Collaboration.

Digital Earth Australia’s approach to FAIR Data and Open Architectures via the Open Data Cube, Cloud Optimized GeoTIFFs and STAC

Mr David Gavin1,2

1Geoscience Australia, Symonston, Australia,

2Digital Earth Australia, Symonston, Australia, David.Gavin@ga.gov.au

 

Digital Earth Australia is embracing emergent industry and community trends by releasing digital Satellite Observation products directly to the public and providing a variety of tools for users to derive new insights from the data. This is achieved via:

  • AWS Simple Storage Service (S3);
  • Cloud Optimized GeoTIFF (COG) format;
  • SpatioTemporal Asset Catalog (STAC);
  • OGC Web Services;
  • The Open Data Cube initiative.

Why is it relevant?

Digital Earth Australia aims to supply Australian Earth Observation (EO) data to the international research community, the Australian public, the Australian Government and Australian industry clients. The guiding principle of our work is to deliver access to “Big EO Data” directly to users who don’t have the resources to store petabytes of data themselves, and to provide the data in open and interoperable formats and standards that allow users to bring their tools to the data.
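
A minimal sketch of what “bringing your tools to the data” can look like for a Cloud Optimized GeoTIFF, assuming the rasterio package and a placeholder URL (not an actual DEA product path): because COGs are internally tiled, only the header and the requested window are fetched over HTTP rather than the whole file.

import rasterio
from rasterio.windows import Window

# Placeholder URL for illustration only; substitute a real COG published on S3/HTTPS.
COG_URL = "https://example-bucket.s3.amazonaws.com/path/to/scene_band1.tif"

with rasterio.open(COG_URL) as src:
    print(src.width, src.height, src.crs)                        # metadata read from the file header
    window = Window(col_off=0, row_off=0, width=512, height=512)
    block = src.read(1, window=window)                           # only this window is downloaded
    print(block.shape, block.dtype)

A STAC catalogue plays the complementary role of letting users discover which COGs cover their area and time of interest before any pixel data is read.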


Biography:

David has been working for Geoscience Australia for over 10 years, working as a Web Application Developer, Web Systems Administrator, AWS Engineer and Agile Team Leader.

The importance of research data management: The value of electronic laboratory notebooks in the management of data integrity and data availability

Mr Paul Cremin1, Matt Miller1, Mr Matt Dunie2

1LabArchives LLC, Coogee, Australia, pcremin@labarchives.com, mmiller@labarchives.com

2LabArchives LLC, San Diego, United States, mdunie@labarchives.com 

 

Background

The data produced in the practice of the scientific method is not managed in a uniform way, by design. Specifically, in Academia, the breadth of mechanisms in use is extreme. Labs, researchers and collaborators are using thumb-drives, isolated PCs, network drives, large commercial storage services, open access blogging sites or hosted blogs. Some labs are handling sensitive information in paper notebooks, not under lock & key, and others are using smart phones. Major issues with compliance, intellectual property protection and data integrity are raised in these environments. The use of secure, online or locally based (data resides in-country or in-institution) systems designed for research data management, research workflow and institutional support facilitates better research data management, researcher & institutional oversight, access, provisioning and reporting. The collaborative nature of scientific research is more enabled now than ever before. Scientists of any discipline use sophisticated technology in their workflows and should do the same for research data management.

Introduction

There has been a growth in research-related data for several reasons. But the advancement and use of sophisticated research technology combined with the advent of immediate and low-cost communications and collaboration technology are enabling the creation of massive amounts of research data. These data must be managed properly in order to facilitate quality research. Research funding agencies make statements regarding data management and publishers provide for the submission of supplemental data. Yet there are research retractions for faulty and falsified data every month. Good data management tools and processes can help to limit “bad” research.

According to a report published by the International Association of Scientific, Technical and Medical Publishers, there are more than twenty-eight thousand one hundred English language peer-reviewed journals in publication with an output of an estimated 2.5 million articles [3]. In addition, the report mentions that surveys of researchers suggest as many as 1%–2% of scientists have fabricated or falsified research data. The report states there were more than four million unique authors in 2014. Simple arithmetic then suggests that anywhere from forty thousand to eighty thousand of the named authors of the 2014 sample may have used incomplete or inaccurate data at some point in their career.

Data management and Laboratory notebooks

Data Management Plans have been a requirement of various funding agencies for most of the past decade. Governmental agencies have been issuing policies on data and management for much longer. The basics are the same across all of the plans: preserve data & provide access. There are other components to be considered, however the basic premise is to make the data available for future research and evaluation for the purposes of reproducibility, research integrity, further research, or challenge.

The reasons for keeping a lab notebook are well known. The U.S. Health & Human Services Office of Research Integrity web page sums up the reasons very nicely [1]:

  1. To establish good work practices.
  2. To teach people in your lab.
  3. To meet contractual requirements.
  4. To avoid fraud.
  5. To defend patents.
  6. To allow work to be reproduced by others.
  7. To facilitate preparation of formal reports, presentations and papers.
  8. To validate your research.
  9. To serve as a source for assigning credit to lab members.

The sixth item above may be the most important. Among the most basic underpinnings of science is the reproducibility of research. Without strong data management policies and documentation, reproducibility is at risk. Research labs of all disciplines have varying types of equipment, but there is at least one standard among them: research is to be documented in accordance with the scientific method. Good data is data that is documented, stored, and accessible.

The scientific method, at the highest level, appears quite simple. It is a circular process by which a researcher goes through at least four phases: ideation, research, analysis, and conclusion. The output is typically either a peer reviewed or “grey” paper or report. However, when evaluated in more detail, the process is much more complex than the four steps just mentioned.

In granularity, good research requires skill, tools, and rigor. One of the more granular components is data management. . . and possibly, the definition of data. To the lay person, the data is limited to the results of an experiment or survey. But in reality, and depending upon the discipline, the data may include documentation of process and procedure. A research notebook may include information on failed experiments and documentation of missteps or unresolved questions. Research notes, environmental observations, lab notes and evaluations are all part of the research data. All of this can be “templated” into an Electronic Laboratory Notebook (ELN), ensuring that the complete research record is preserved and available for future use.

This is especially true when considering that a major element of validation in the research process is the reproducibility of previous work. Many retractions are the result of irreproducible research. According to some in the research community, the community itself has conflicting objectives. The pressure placed on researchers to increase their research output, to win more grant proposals, and to regularly publish their research can be at odds with solid scientific research management. Indeed, a recent article details two peer reviewed articles which “. . . urge scientists to make research more reproducible” [2].

It appears Research Data Management practices are highly variable. If you want proof, walk the halls on different floors of a research building in academia and visit a dozen research labs. Depending on institutional policies and the level of independence provided to lab heads, you may see as many as a dozen different mechanisms to manage research data in a dozen different labs. But research labs must be different... that is the nature of the business of research, especially in Academia. And those differences are the basic nature and strength of academia.

Electronic Laboratory notebooks

But there are tools in the marketplace that can help researchers maintain their independence, provide researchers and their administrators with the ability to protect their work product, and enable scientific reproducibility. They are Electronic Laboratory Notebooks (ELN – a terrible 1980s-era name describing a product that essentially replaces the analog paper notebook with a much-improved digital version).

When data are entered into an ELN – and data means research data, notes, observations, formulas, equations, sketches, data sets, images... any type of data – the platform must provide the ability to support several interested parties: 1) a funding agency’s requirement for a Data Management Plan; 2) the researcher’s need to document their research; 3) the administration’s need to be able to protect IP and prove discovery; 4) the publisher’s need to review and publish data supporting the research; and 5) the institute’s industry partners’ need to easily review and acquire data in a format that improves commercialization efforts and is, frankly, the standard in the industry.

ELNs can support the Scientific Method in ways traditional paper notebooks cannot. They also support institutional research policies and objectives and provide a platform for institutional data management and research support. A robust ELN supports Data Integrity, Data Lifecycle Management, Data Management, Data Accessibility, Collaboration, and Research Reproducibility. In today’s world of global collaborative research, digital information, and robust, advanced information technology, ELNs are becoming the “must have” tool for researchers and institutions.

REFERENCES

[1] https://ori.hhs.gov/education/products/wsu/data_lab.html.

[2] Two manifestos for better science, Discover Magazine, January 11 (2017), http://blogs.discovermagazine.com/neuroskeptic/2017/01/11/manifestos-better-science/#.WT6mvmjyuUk.

[3] M. Ware and M. Mabe, The STM Report: An Overview of Scientific and Scholarly Journal Publishing, 4th edn, International Association of Scientific, Technical and Medical Publishers, Netherlands, 2015, http://www.stm-assoc.org/2015_02_20_STM_Report_2015.pdf.

** Full article published by Matt Dunie, President and Co-Founder, LabArchives in:

Information Services & Use 37 (2017) 355–359

DOI 10.3233/ISU-170843

IOS Press


Biography:

Experienced data management consultant, specialist literature researcher, trainer and reference info specialist.

I’ve had 20 years of high level research experience, across fields such as competitive intelligence, company research, medical literature searching and pharmacovigilance, market research, drug safety literature searching, intellectual property and finance. My significant experience in the corporate, academic and scientific community means I have a working knowledge of the required knowledge resources, the various data management resources available and how these resources can be leveraged to improve research workflows.

A conceptual model to deliver end-to-end user-centric e-infrastructure for ecosystem science

Siddeswara Guru1, Beryl Morris1

1TERN, University of Queensland, Brisbane, Australia, s.guru@uq.edu.au, beryl.morris@uq.edu.au 

 

Introduction

The environment is changing at a faster pace than anticipated due to human-induced activities, potentially leading to climate change, loss of biodiversity and seasonal variations. The changes in our environment need to be tracked, studied and predicted so that adverse changes can be mitigated and plans made for a sustainable future. It is also essential to understand how biological and social systems respond to environmental changes.

There is an increasing need for ecological information to address some of the challenges described above and other significant social, economic and environmental problems. There are several initiatives at regional, national and international scales to collect and publish a large variety of ecosystem science data, such as TERN (www.tern.org.au), NEON (www.neonscience.org), TERENO (www.tereno.net) and ICOS (http://www.icos-cp.eu), in addition to state and federal government agencies.

Terrestrial ecosystem data is exceptionally heterogeneous in terms of data types and data collection methods – data are collected by humans, hand-held and in-situ sensors, and remote sensing observations. The observations also vary in spatial scale – point, plot, site, region, biome, state and continent. The data handling mechanisms for these datasets are also different.

One of the challenges is to provide complex data streams in an easily accessible form, with standard data formats and exchange protocols, along with tools and platforms to interrogate, access, analyse and share analysis pipelines. The data infrastructure should enable easy access to custom data and information for a particular problem, access to scientific computing, visualisation and toolkits to enable data-driven science. The data infrastructure needs to evolve in response to community needs and provide a robust platform that meets the requirements not only of the ecosystem science community but also of inter-disciplinary researchers. It should also enable researchers to use the same datasets for multiple applications and allow complete re-use and re-purposing of data and infrastructure capabilities.

Conceptual Model

The emerging data e-infrastructure should support holistic capabilities that not only store, curate and distribute data but also enable processing and transformation based on user needs, access control, model-data assimilation, tracking of data transformations at different stages of analysis, and linking of consistent data to different analysis tools and pipelines.

The data e-infrastructure should also move towards big data technologies as drivers to deal with the increase in data production by different data observing initiatives, and needs to provide a coherent eResearch platform offering services for research data management and cross-disciplinary collaboration with a flexible governance model. In ecosystem science, this should enable access to large collections of earth, environment and biodiversity datasets that support regional, continental and global issues and policy matters, while supporting: on-demand compute and storage for data-centric scientific applications; a secure environment for data storage and processing; policy to support data security, privacy and confidentiality; and authentication and authorisation to support virtual collaboration and the building of scientific communities.

A conceptual framework has been developed to identify the significant elements and functionalities needed to bring coherent services to the ecosystem science community. The framework brings together all the data management components in addition to the computation platforms, enabling users to access and perform data analysis in a data-centric approach. A collective interaction between the different components of the framework will meet the needs of ecosystem science data users. The conceptual framework identifies most of the services required, broadly classified into data sources, transformation services, enabling services, delivery services and governance.

Data sources are an important part of any data e-infrastructure. Data can come from different sources, but we are focusing on the types of data sources that are most useful in terrestrial ecosystem science; the key is to provide standardised access to different data streams to extract information and knowledge. Transformation services are part of the data management suite that transforms data into information. The enabling services perform key activities to meet user needs; they sit between user access and the raw information. The delivery services are the accessibility points through which users interact with the infrastructure. Service initiation describes how users will initiate services to achieve some of the sample outcomes listed under outcomes.

The framework is structured such that each of the services fulfils a certain task and runs independently, and the overall framework runs as a set of loosely coupled collaborative services. The services can also be managed by different organisations but can be coupled with other services to meet certain use cases. The interaction between the services is the key.
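
As a highly simplified sketch of this loose coupling (names and interfaces hypothetical, not TERN's design), each service exposes a small contract and a pipeline is composed from whichever implementations are available:

from typing import Any, Dict, Iterable, Protocol

class DataSource(Protocol):
    def fetch(self, query: Dict[str, Any]) -> Iterable[Dict[str, Any]]: ...

class TransformationService(Protocol):
    def transform(self, records: Iterable[Dict[str, Any]]) -> Iterable[Dict[str, Any]]: ...

class DeliveryService(Protocol):
    def publish(self, records: Iterable[Dict[str, Any]]) -> str: ...

def run_pipeline(source: DataSource, transformer: TransformationService,
                 delivery: DeliveryService, query: Dict[str, Any]) -> str:
    """Each service runs (and can be hosted) independently; only the data contract couples them."""
    return delivery.publish(transformer.transform(source.fetch(query)))

Because the coupling is only through the data contract, a service operated by one organisation can be swapped for an equivalent service operated by another without changing the rest of the pipeline.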

Figure 1: A conceptual framework to deliver End-to-End services to the ecosystem science community.

We envisage that the infrastructure will be hosted on the cloud-based NeCTAR and RDS infrastructure of the Australian Research Data Commons (ARDC). Cloud technologies should simplify the building of such infrastructure with access to combinations of IaaS, PaaS and SaaS. Access to multiple nodes of the ARDC would enable the infrastructure to be spread across multiple geographic regions and improve fault tolerance and availability.

Conclusion

This abstract presents some of the future thinking in the development of a more cohesive e-infrastructure to meet user needs. The approach taken is to look at potential use cases and identify several critical services that may be required to meet those use cases. Each of the services performs certain tasks with an ability to interact with other services.


Biography:

Siddeswara Guru is a program lead for the data services capability of TERN. He has a substantial research and management background and has worked on eResearch projects across multiple domains.

 

LibraryBox: Boxing the Internet

Mr Hasitha Thalaththani Ralalage1

1Deakin University, Geelong, Australia, hasitha.ariyaratne@deakin.edu.au

 

Websites and online file sharing services have become so ubiquitous and accessible that most of us take them for granted. But what happens when we don’t have fast, cheap or unrestricted access to the internet?

Meet the LibraryBox:  an inexpensive portable webserver that will serve rich digital content without requiring an internet connection!

Early this year Deakin University Library, in conjunction with a Humanities and Social Sciences researcher, developed an oral history website to showcase Papua New Guineans’ experiences of World War 2. The aim of this project was to help keep alive memories and war stories handed down through the generations, and share them with others around the world. The website PNG Voices arose out of this project.

However, many parts of Papua New Guinea don’t have access to reliable or affordable internet, so PNG Voices was largely inaccessible to many Papua New Guineans. The LibraryBox provided us with an excellent opportunity to bridge this gap between poor internet infrastructure and a multimedia-rich website.

Hosting a full-blown website utilising advanced server technologies on a low-powered device proved to be challenging. This talk discusses how we overcame the challenges and how the LibraryBox can be a useful tool for the Australian eResearch community.
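
The LibraryBox itself is purpose-built hardware, but the core idea of serving a static export of a site over a local wireless network can be sketched with Python’s built-in web server (directory name and port are placeholders; this is not the LibraryBox firmware):

import http.server
import socketserver

SITE_DIRECTORY = "png_voices_static"   # hypothetical folder holding an exported copy of the site
PORT = 8000

class SiteHandler(http.server.SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        # Serve files from the exported site rather than the current working directory.
        super().__init__(*args, directory=SITE_DIRECTORY, **kwargs)

with socketserver.TCPServer(("0.0.0.0", PORT), SiteHandler) as httpd:
    print(f"Serving {SITE_DIRECTORY} on port {PORT} for the local network")
    httpd.serve_forever()

Dynamic features of the original site (search, server-side rendering) have to be pre-rendered or dropped, which is part of what made hosting a full website on a low-powered device challenging.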

Among the advantages, LibraryBox:

  • Provides a quick and easy method of sharing large data sets over a local area network wirelessly
  • Allows sharing data without needing to rely on third party infrastructure or network restrictions
  • Provides a secure, inexpensive, low-power consuming data sharing platform

  • Enables sharing of data sets or websites at remote locations


Biography:

Hasitha Thalaththani Ralalage is a DevOps Engineer at the Deakin University Library. He provides creative technical solutions to challenging problems in the digital libraries and repositories space. In his current role, he contributes to the development, administration and maintenance of the University’s research publication management, repository and reporting systems.

 

