Presentations 2011

PAPERS
Guido Aben, Wendy Mason, Paul Bonnington, Louis Moresi – The FileSender project: integrated software development from Proof-of-Concept code to package
The FileSender project (http://www.filesender.org) aims to bring very user-friendly large file transfer to the widest possible constituency of researchers — certainly beyond the traditional remit of comparatively computer-literate users of eScience-type tools. The project delivers software that allows an R&E service provider to bring up a file transfer service presented as a webservice. The Australian version of this service is called “CloudStor” (https://cloudstor.aarnet.edu.au) and is hosted by AARNet. Judging by the uptake of the FileSender project’s principal software package, the project has been a success; user uptake is swift, does not require any hand-holding, and the majority of users are repeat customers. Also, the number of international R&E providers running this package is increasing rapidly, thereby spreading maintenance load and providing many more critical eyeballs to spot quirks and bugs.
This talk focuses on a handful of reasons why we think the FileSender project has managed to increase its chances of success; mostly they revolve around our policy of assuming from the outset that the project might have to scale up significantly beyond a small in-house development, rather than taking an ad-hoc path-of-least-resistance early on. We hope the talk may inspire members of the audience involved in software development, software project management and eResearch tools policy to take the longer view and plan for large scale and longer horizons; the reward, we feel, is well worth it.
Biography – Guido Aben
Guido Aben is AARNet’s director of eResearch. He’s involved with challenges ranging from developing networking specials with particular research groups, to enticing business to deliver their products in specially packaged or licensed forms to turn them into feasible tools for research, and the occasional stint at software development. Guido has interests in international collaboration between NRENs, as well as collaboration and partnerships between NRENs and market players. He holds an MSc in Physics from Utrecht University.
Biography – Wendy Mason
Wendy has been completing a PhD in Computational Geodynamics and eResearch in the School of Geosciences at Monash University. She has several years’ experience in writing user documentation and in software beta / release testing for open source software development projects.
Biography – Paul Bonnington
Prof. Paul Bonnington, Director of the Monash e-Research Centre, is a member of the Go8 Digital Futures group, and on the Board of Directors for the Victorian Partnership for Advanced Computing. He is a member of the steering committees for the Victorian Life Sciences Computing Initiative (VLSCI), Victorian e-Research Strategic Initiative (VeRSI) and National Computational Infrastructure’s Specialist Facility for Imaging and Visualisation (MASSIVE). Paul is also a member of CSIRO’s e-Research Council, and currently serving on the National Research Infrastructure Roadmap Expert Working Group for e-Research. He recently served as the Chair of the Steering Committee for the Australian National Data Service Establishment Project. Paul is a Professor in association with the School of Mathematical Sciences at Monash University.
Biography – Louis Moresi
Louis Moresi is a professor of geophysics and computational mathematics at Monash University. He is interested in the deformation and failure of geological materials and develops novel computational techniques in this area. He oversees the development of the Underworld parallel, particle-in-cell finite element code, and has a keen interest in techniques for interacting with large-scale simulations running remotely.
Rasika Amarasiri, Philip Gharghori – MonFiDS: An Integrated Financial Database
With a number of researchers working on the same datasets, effort is duplicated and a significant amount of valuable time and resources is wasted that could have been spent on more productive work. The Monash Financial Database System (MonFiDS) was developed with the objective of minimising this duplication and allowing researchers to concentrate on the actual research, by removing the burden of integrating the different datasets and checking the integrity of the merged result. MonFiDS currently integrates financial, accounting, market index and earnings estimate data for Australian listed companies from five different data sources. The database allows researchers to download this data selectively or as a whole via a web portal. Common manipulations in the merging process are included in the extraction process, minimising the requirement for post-processing of the data.
Biography – Rasika Amarasiri
Rasika Amarasiri obtained his PhD from the Faculty of Information Technology at Monash University. He completed his Bachelor of Science in Engineering (honours) and Master of Science degrees at the University of Moratuwa, Sri Lanka. Prior to his PhD, Rasika held various positions in industry and academia, including Hostmaster for the .LK domain registry, webmaster, systems administrator, consultant and lecturer. He has had numerous publications accepted in journals and conferences, including a best paper award. Currently, he is the Manager, Research IT Services at the Department of Accounting and Finance at Monash University, Australia.
Biography – Philip Gharghori
Philip Gharghori is a senior lecturer in Finance at the Department of Accounting and Finance at Monash University. Prior to this, he was a PhD student and tutor in the department. His PhD was supported by both an Australian Postgraduate Award and a Departmental Scholarship. He is the author of the finance chapters of Principles of Accounting and Finance, a text used in undergraduate business and commerce courses. He has won awards for conference papers presented at the Australasian Finance and Banking Conference and at the AFAANZ Conference. Phil’s research is primarily in the area of asset pricing. In particular, he has focused on the performance of multifactor models and stock market anomalies. He also has written papers in the areas of funds management, default risk analysis and the market reaction to corporate actions.
Steve Androulakis, Ulrich Felzmann, Alistair Grant, Ian Thomas, Ryan Green, Grischa Meyer, Anthony Beitz, Paul Bonnington, Chris Myers, Heinz Schmidt, Ashley Buckle – Taking TARDIS Into New Dimensions: Results and Reflections
MyTARDIS began as an automated solution for managing and sharing raw protein crystallography data. Since then, efforts from many independent projects have enhanced and evolved the central MyTARDIS product. New features such as data staging mounts, automated metadata extractors, parameter set creation and high performance computing task scheduling have been added to meet researcher needs. With these new features in hand, MyTARDIS is currently being deployed to manage data from diverse areas of research, including microscopy / microanalysis, particle physics and next-gen sequencing, in addition to expansion at the Australian Synchrotron and ANSTO to support small / wide angle x-ray scattering, infrared microspectroscopy, powder diffraction, neutron reflectometry, small-angle neutron scattering and strain scanning data. Furthermore, an initiative to capture and publish all types of research data at an institutional level has begun.
This presentation will feature speakers from individual projects working with the open source MyTARDIS code base, along with an explanation of the software’s new developments and personal experiences from attempting to richly capture and manage an expanding range of research data.
Biography – Steve Androulakis
Steve Androulakis is a software developer with the Monash e-Research Centre. He is the lead developer of the MyTARDIS project. His focus is on solving research data management problems, particularly in areas of structural biology.
Biography – Grischa R Meyer
Grischa R. Meyer was a software developer with the Monash e-Research Centre. He used to work as a post-doctoral researcher in the field of computational biophysics. In addition to data processing expertise, his background in research provided the MyTARDIS project with important feedback on what researchers want and expect.
Biography – Paul Bonnington
Prof. Paul Bonnington, Director of the Monash e-Research Centre, is a member of the Go8 Digital Futures group, and on the Board of Directors for the Victorian Partnership for Advanced Computing. He is a member of the steering committees for the Victorian Life Sciences Computing Initiative (VLSCI), Victorian e-Research Strategic Initiative (VeRSI) and National Computational Infrastructure’s Specialist Facility for Imaging and Visualisation (MASSIVE). Paul is also a member of CSIRO’s e-Research Council, and currently serving on the National Research Infrastructure Roadmap Expert Working Group for e-Research. He recently served as the Chair of the Steering Committee for the Australian National Data Service Establishment Project. Paul is a Professor in association with the School of Mathematical Sciences at Monash University.
Biography – Ian Thomas
Ian Thomas is a software developer and system administrator at the eResearch Office of RMIT University. He was formerly a post-doctoral researcher in the School of Computer Science and IT at RMIT University, investigating software engineering for real-time reliable systems and agent-based management systems for e-Health (with Monash University). His current work is in data curation for three domains: high-performance computing, microscopy data for materials engineering, and screen media objects (films and television).
Baden Appleyard – Licensing Australia’s Research Outputs Under a Single Framework
AusGOAL (Australian Governments Open Access and Licensing Framework) provides support and guidance to government and related sectors to facilitate open access to publicly funded information. AusGOAL makes it possible to use and re-use information or data in a way that drives innovation and entrepreneurial activities, and provides economic and social benefits to the community.
Opening Australia’s Publicly Funded Research and Innovation
Australia has a vibrant research and innovation sector which supports and enhances virtually every aspect of our life experiences. Although all aspects of research are important, this part of the AusGOAL website focuses on the licensing of research outputs. A significant fraction of the research output produced in Australia is publicly funded, either directly via the major funding bodies, or indirectly via institutional funding. Like the Australian Governments, the major research funding bodies and research organisations are moving towards requiring that information and data from their projects be published and made available for re-use.
Licensing research data using AusGOAL
One of the essential ingredients for making data re-usable is the provision of clarity of permissions, terms, and conditions. Prospective re-users need to know what they can and cannot do with the data. A lack of clarity about permission to re-use data can have the same result as forbidding data re-use, because uncertainty can be enough to discourage the potential re-user. While there are many different ways of licensing data, there is a strong advantage in using a consistent approach. The technology now exists to allow individual datasets to be combined with others in novel ways to help solve ever more complex problems.
The use of a single licensing framework, such as AusGOAL, across the research community and public sector enhances the potential for data sets to be combined for analysis, relieving researchers of the burden of keeping track of different conditions and permissions.
Biography
Baden holds degrees in law and commerce, in addition to tertiary qualifications in management. Baden is a Barrister of the Supreme Court of Queensland and of the High Court of Australia. From 2000 – 2008 Baden was engaged with the Australian Taxation Office, predominantly as Principal Litigator and Litigation Manager within the ATO Legal Services Branch. In those roles Baden had carriage of a broad variety of complex litigation in general and commercial law, insolvency and bankruptcy law, workplace law, family law, and administrative law. From 2007 – 2008 Baden was Principal Research Fellow with the Faculty of Law at the Queensland University of Technology. During that period Baden was Project Manager of CRC for Spatial Information Project 3.05, which provided assistance to underpin the legal and policy framework development of the Queensland Government Information Licensing Framework (GILF). Baden is currently National Programme Director of the Australian Governments Open Access and Licensing Framework (AusGOAL – GILF’s successor). He has responsibility for the development and implementation of AusGOAL in each of the jurisdictions and related sectors. He advises all levels of government, and their agencies, and the research sector on the implementation of AusGOAL and related copyright, contractual and administrative law (e.g. FOI and Privacy) issues.
Justin Baker, Peter Tyson – CSIRO Remote Visualisation Capability
CSIRO is a large, geographically dispersed research organisation with over six-thousand staff at fifty-six sites. Like all modern research organisations, CSIRO researchers are dealing with ever larger, more complex scientific datasets. In response to these two significant and disparate issues, geographical separation of staff and increasing data complexity, an enterprise-wide remote visualisation capability has been developed. Scientific visualisation is generally recognised as a key to addressing many data complexity issues. As an adjunct to that, remote visualisation provides researchers with virtualised access to advanced visualisation hardware and software, regardless of their location. CSIRO’s remote visualisation capability is being progressively rolled out across the organisation. The architecture is based on commodity computing hardware and two key open source projects. The first of these, VirtualGL, virtualises the graphics hardware, making it remotely accessible. The second, VizStack, is used to allocate limited compute and graphics hardware resources to users as required. There are several significant advantages to this approach. The remote visualisation service enables scientists to access high-end graphics hardware directly from their desktop PCs, mitigating the need to purchase dedicated visualisation workstations. It provides a shared/remote collaborative viewing capability enabling researchers at different locations to interact with one another’s data in real-time.
The virtualised environment provides access to a wider range of applications on different platforms, rather than being limited to the user’s local operating system. Lastly, the visualisation hardware is centrally managed in a data centre, and makes use of a shared storage system, minimising the need to transfer large volumes of data between different systems.
Biography – Justin Baker
Justin Baker is the manager of CSIRO’s Visualisation and Collaboration Team as a part of the eResearch Program.
Biography – Peter Tyson
Peter Tyson is the principal architect of CSIRO’s remote visualisation system.
Venki Balasubramanian, Amir Aryani, Ian Thomas, Heinz Schmidt – Data Capture from High-Performance Computing Facilities: A Case Study
E-Research has enabled researchers to develop new insights and new solutions to complex problems through the use of technology in research collaboration. The Data Curation (DC) application is being developed in the eResearch Office at RMIT University to curate datasets generated by various material physics simulation packages on High Performance Computing (HPC) facilities.
The purpose of this presentation is to exemplify the challenges faced during development and to show how some of these problems were resolved. An end-to-end e-research system involves sub-systems that are heterogeneous and domain specific. We addressed interoperability as a prime consideration because of the lack of established e-Research standards for both systems and data. This necessitated the use of data adapters and converters. Contemporary designs that have role-based security models may not work well with e-Research software: it is difficult to define definite roles for researchers due to extensive collaboration between institutions. We identify that authorisation of researchers is a major concern, best addressed by a federated approach. In e-Research, we identify harvesting data from existing legacy systems as a common concern. We chose to use a “system of systems” architecture. Due to the ad-hoc nature of the orchestration of such components, we note difficulties in controlling overall reliability. While reusability is a prime consideration to reduce development time and costs, we identify that researchers and developers in specific disciplines create software to solve their own problems, which limits wider, long-term reuse of solutions. However, the involvement of domain experts in the development is imperative for success. We observed that the project would not be successful without the collaboration of the researchers (domain users). However, users want their current workflows changed as little as possible, so solutions must integrate carefully and show clear advantages to the researchers.
Biography – Venki Balasubramanian
Venki Balasubramanian is one of the core developers of the Data Curation project. He also works on other eResearch projects in the RMIT eResearch office. He completed his PhD in sensor networks at iNext, University of Technology, Sydney, under Prof. Doan B. Hoang. He also completed his Masters and Post-Graduate Diploma in Networking at the University of Sydney and the University of New South Wales, Australia, respectively. He worked as a Research Assistant in the Advanced Networking Lab at the University of Sydney. His research interests include eResearch, sensor and wireless ad hoc networks, QoS, web adaptation techniques for mobile devices and health care monitoring.
Biography – Amir Aryani
Amir Aryani is a software engineer in the RMIT eResearch office, working on myTardis extensions as part of the RMIT data capture project. He is a PhD candidate in the School of Computer Science & Information Technology at RMIT University. His research interest is software engineering, with a focus on software evolution and maintenance.
Biography – Ian Thomas
Ian Thomas is a software developer and system administrator at the eResearch Office of RMIT University. He was formerly a post-doctoral researcher in the School of Computer Science and IT at RMIT University, investigating software engineering for real-time reliable systems and agent-based management systems for e-Health (with Monash University). His current work is in data curation for three domains: high-performance computing, microscopy data for materials engineering, and screen media objects (films and television).
Biography – Heinz Schmidt
Professor Heinz Schmidt is the E-Research Director at RMIT University and a member of VDI, IEEE and ACM. He is a Professor of Software Engineering and an Adjunct Professor at Mälardalen University, Västerås, Sweden.
Anthony Beitz, Calvin Chow, Paul Bonnington, Steve Androulakis, Virginia Gutierre, Simon Yu – Platforms for Research Data Management: Lessons Learned
Properly capturing, managing, publishing, and reusing research data and metadata is of growing importance, and the Australian Government has injected research infrastructure funds into this activity (via the NCRIS and Super Science initiatives), distributed and managed by the Australian National Data Service (ANDS). Consequently, there has been growth of new and adapted platforms for Research Data Management (RDM) at Australian research institutions. This trend is expected to continue over the next few years due to additional Australian research infrastructure investments.
Monash University has experience in: developing research data management platforms for the DART and ARCHER projects; determining research data capture and management solutions for various research disciplines for ANDS data capture projects conducted at Monash; and developing and deploying innovative research capabilities within the university. This paper, informed by these experiences, provides some useful guidance in the selection, development, and deployment of platforms for research data management.
Biography – Anthony Beitz
Anthony Beitz, Manager of the Monash e-Research Centre, started his career in 1992 at the Telstra Research Laboratories. In 2006, he joined Monash University, where he managed the development of a prototype system which successfully demonstrated the management of research data from its acquisition to publication (DART Project), and then successfully led the development of a Data Management Portal for research data (ARCHER Project). He then joined the Monash e-Research Centre in 2008 as its Technical Manager.
Biography – Paul Bonnington
Prof. Paul Bonnington, Director of the Monash e-Research Centre, is a member of the Go8 Digital Futures group, and on the Board of Directors for the Victorian Partnership for Advanced Computing. He is a member of the steering committees for the Victorian Life Sciences Computing
Initiative (VLSCI), Victorian e-Research Strategic Initiative (VeRSI) and National Computational Infrastructure’s Specialist Facility for Imaging and Visualisation (MASSIVE). Paul is also a member of CSIRO’s e-Research Council, and currently serving on the National Research Infrastructure Roadmap Expert Working Group for e-Research. He recently served as the Chair of the Steering Committee for the Australian National Data Service Establishment Project. Paul is a Professor in association with the School of Mathematical Sciences at Monash University.
Biography – Steve Androulakis
Steve Androulakis is a software developer with the Monash e-Research Centre. He is the lead developer of the MyTARDIS project. His focus is on solving research data management problems, particularly in areas of structural biology.
Biography – Calvin Chow
Calvin Chow, MeRC Program Manager (Software Development), is a certified project manager and an Agile SCRUM master with over 14 years of software development experience. He is the project manager for the 8 Data Capture projects funded by the Australian National Data Service (ANDS). Calvin graduated from Monash University in 1996 with 1st class honours in Computer Science and Engineering.
Biography – Simon Yu
Simon Yu has been a Senior Software Developer at the Monash eResearch Centre since 2007. He is the lead developer for 4 of the 8 ANDS Data Capture projects: Ecosystem, Climate and Weather; Interferome; Multimedia Collections; and ARROW. Prior to that, he was the lead developer for the ARCHER project and a Java developer on the Persistent Identifier Linking Infrastructure (PILIN) project. Simon has a Master of Technology (Internet and Web Computing) from RMIT University.
Biography – Virginia Gutierre
Virginia Gutierre is a Research Systems Facilitator at the Monash eResearch Centre. She is the senior business analyst on the ANDS Data Capture projects at Monash University. Virginia has an honours degree in Software Engineering and Commerce from the University of Melbourne.
Craig Bellamy – Teaching the Digital Humanities through Virtual Research Environments
At the core of the work done within the digital humanities is a difficult interdisciplinary relationship between the at times divergent cognate fields of computer science and the humanities. This presentation will discuss some of the central characteristics of the digital humanities whilst examining some of its ‘hard-interdisciplinary’ relationships.
The author will suggest a model where ‘hard-interdisciplinarity’ may be taught and assessed through the framework of Virtual Research Environments (VREs). The presentation will demonstrate some of the latest work in the development of VREs in the humanities that encourage the critical use and analysis of the digital objects within them. It is the contention of the author that building ‘hard-interdisciplinary’ relationships between humanities and computing technology should engender a critical and deeply scholarly understanding of technological production, and VREs are one way to achieve this in the classroom.
Biography
Craig Bellamy has a research background in history and has worked at the intersection between computing technology and the humanities for a number of years. He has an MA in history (history and hypertext) and a PhD in history and new media (interactive hypertextual video). He has worked at a number of leading digital humanities centres in the US and the UK (including King’s College London and the University of Virginia) and is Secretary of the recently formed Australasian Association for Digital Humanities. At VeRSI, he promotes the use of computing within the humanities through the development of a number of innovative projects and activities.
Jared Berghold, Tim Churches – SURE: a secure remote-access data analysis laboratory for research using linked health data
Australia has a well-established system of health data collections, many of which cover the entire population and/or both the public and private health care sectors. The scope and population coverage of this health data infrastructure offers enormously valuable opportunities for research which explores disease causation and prevention, health differentials and inequalities, geographic and spatial aspects of health, and the effectiveness of treatments and health services. However, most of these health data collections relate to episodes of care or to specific diseases. In order to assemble research data which provides a longitudinal view at the person level, the records in these data collections which relate to a particular individual must be linked together. In the absence of a unique personal identification number of broad scope in Australia, linkage of routinely-collected health records from multiple sources and settings is performed for research purposes by special-purpose Data Linkage Units (DLUs). These DLUs receive names, addresses, dates of birth and other identifying details abstracted from health system records, but they have no access to the health or medical details contained in those source records. Researchers are provided with de-identified versions of the relevant health records, together with research-study-specific sets of person-level links between those records, enabling them to conduct longitudinal and other complex linked-data analyses. Such arrangements provide excellent first-order protection of individual privacy. However, residual risks to privacy remain, despite the nominal de-identification of the linked data provided to researchers, due to the very high dimensionality of the linked data sets, and the high cardinality of many of the data items which they contain – this makes re-identification of the linked data feasible, and in some cases, easy. 
Therefore, it is important that the linked, de-identified data are treated as highly confidential by researchers, and that these data are kept very safe. The impact of unauthorised access to or loss of control over these data is potentially large, given that many linked data research studies require access to health data for hundreds of thousands or millions of individuals. Currently in Australia, this risk is managed through researcher undertakings that they will not attempt any form of re-identification or further record linkage of the data supplied to them, and that they will prevent unauthorised access to those data. There are no reasons to believe that researchers do not strive to honour these obligations. However, the data supplied to researchers are, of course, in digital form, and are typically stored and used in institutional computing environments which may not have been designed with security in mind. Often researchers have limited ability to influence or even determine the computing security arrangements in their workplaces. Such problems are compounded when researchers in multiple locations need to work on the same linked research datasets as part of a collaborative study.
In order to better manage these risks, as part of the NCRIS Population Health Research Network capability and supplemented by EIF SuperScience funds, the Australian and NSW governments have jointly funded the establishment of a secure, remote-access data analysis facility specifically for use by population health, health services and clinical researchers working with linked data. The facility, known as SURE, is currently undergoing pre-production testing prior to becoming operational in December 2011. It provides researchers with a highly secure remote virtual computing desktop for each research study on which they are an investigator.
All data ingress and egress from the facility is via a “Curated Gateway” in order to ensure that only those data files which have been approved and permitted by the Human Research Ethics Committee(s) with oversight of each research study are brought into the SURE environment, and that only those research outputs which have been screened for potential privacy disclosure risk are released from the facility to the relevant researchers’ normal computing environments. Researchers must undergo compulsory training in privacy, IT security and statistical disclosure risk assessment and control before being permitted to use the facility. Inside the facility, each research study is additionally confined within its own security perimeter – there is no possibility of data exchange between research studies. Access to the facility is strongly authenticated using two factors. The remote virtual computing environments provided to each researcher are powerful, highly-specified Microsoft Windows 7 desktops, furnished with a wide range of proprietary and open-source data manipulation and analysis software. However, despite all these perimeter controls, within the workspaces for each study there are no additional restrictions placed on researchers – they may examine, manipulate and analyse the data in whatever ways they see fit, within the constraints placed upon them by the data providers and overseeing ethics committee(s) for that study. The centralised nature of the facility allows significant economies of scale with respect to hardware provisioning and software licensing costs, and it is expected to be financially sustainable at levels of cost-recovery which are acceptable to researchers and research funding agencies. SURE also facilitates the creation of specialised data manipulation and analysis tools which would be difficult to deploy in existing, heterogeneous research computing environments – these tools will be described.
Biography – Jared Berghold
Jared Berghold has a research and development background in computer visualisation, interactivity and enterprise architecture. Jared has practiced software engineering for over eight years; prior to Intersect he worked at iCinema, a research centre at UNSW, on interdisciplinary projects with a focus on interactive and immersive narrative systems. Before this, he worked at Avolution, developers of leading enterprise architecture modelling tools, and at CiSRA, the Australian research and development lab for Canon. Jared has a BE/BA (Hons) in Computer Systems and International Studies from the University of Technology, Sydney.
Biography – Tim Churches
Tim Churches is a medical epidemiologist with a clinical background in general practice and geriatrics. However, over the last two decades he has spent much of his time developing and running various population health and health services data collections and analysis systems, covering a wide range of areas from cancer, diabetes and intensive care medicine to near-real-time surveillance and outbreak control of communicable diseases, including most recently pandemic influenza. He is also an active researcher, and is a co-investigator on several NH&MRC- and ARC-funded population health research studies. He is currently the consulting epidemiologist and chief technical advisor for the SURE project.
Peter Blain, Paola Petrelli, Jason Lohrey, Nathan Bindoff Outcomes of the Marine and Climate Data Discovery and Access Project (MACDDAP)
The Marine and Climate Data Discovery and Access Project (MACDDAP) was an e-Research project, funded by the National eResearch Architecture Taskforce (NeAT) under the National Collaborative Research Infrastructure Strategy (NCRIS). The project was completed in June 2011 and successfully delivered on its stated objective, which was to integrate large marine and climate data sets, and to deliver them through a wide range of data streams – thus engaging a broad community. The project built on web services technology to integrate marine and climate data sets distributed across Australian research institutions. The outputs delivered by MACDDAP facilitate knowledge discovery for marine and climate related applications by enabling researchers to collect, combine and analyse relevant data across scientific disciplines. MACDDAP has built on open scientific and geospatial data standards to enhance specialised web harvesters and search tools, to deliver large geospatial data-sets to users via web portals. MACDDAP also provides the functionality required to support these services, including an aggregator for combining geospatial data from distributed sources, and a translator for translating data sets into standard vocabularies used in meteorology and oceanography.
Biography – Peter Blain
Dr Blain is the software development manager and software architect at the Tasmanian Partnership for Advanced Computing (TPAC) at the University of Tasmania. He is the project manager for a number of eResearch projects at TPAC, including the Marine and Climate Data Discovery and Access Project (MACDDAP). Dr Blain received his PhD from the School of Computing and Information Technology at Griffith University in 2007, and a Bachelor of Engineering in computer systems from the University of Queensland in 1992. Prior to joining TPAC in 2008, he worked as a freelance software engineer/consultant for large Australian and international financial services companies including the Bank of Tokyo, HSBC, Westpac, the Commonwealth Bank of Australia, and the National Australia Bank.
Biography – Paola Petrelli
Dr Petrelli is the earth system data librarian at the Tasmanian Partnership for Advanced Computing (TPAC) at the University of Tasmania. She is the data manager for the TPAC Oceans and Climate Digital Library Portal. Dr Petrelli received a PhD from the Department of Earth Science of the University of Siena (Italy) in 2005, and a Bachelor in Marine Environmental Sciences from the University of Venice (Italy) in 1999. Her research interests have included modelling ocean and atmosphere interactions in Antarctica and sea ice processes. For the past 4 years she has been managing oceanographic and climatological datasets, acquiring extensive experience in web services and software used by the earth science research community.
Biography – Jason Lohrey
Jason Lohrey is the CTO of Arcitecta and the conceiver and architect of Mediaflux™. Jason has a degree in Physics and Computer Science, augmented with Fine Arts, from the University of Tasmania and has worked in the IT industry for approximately 16 years. His background includes industrial, commercial, scientific, and creative applications for computing. For most of his working career, Jason has focused on research, design and development of digital asset management and database systems with companies including Kodak (with the Academy Award-winning non-linear editing and compositing system, Cineon), Discreet Logic and Silicon Graphics. Five years ago, while in residence at painter Arthur Boyd’s property at Bundanon, New South Wales, he penned the first lines of software for Mediaflux™.
Biography – Nathan Bindoff
Professor Bindoff is Professor of Physical Oceanography at the University of Tasmania and CSIRO Marine Research Laboratories, and Director of the Tasmanian Partnership for Advanced Computing. Professor Bindoff is a physical oceanographer, specialising in ocean climate and the earth’s climate system. He was the coordinating lead author for the ocean chapter in the Inter-Governmental Panel on Climate Change (IPCC) Fourth Assessment Report and for the detection and attribution chapter of the Fifth Assessment Report. His current interests are primarily in understanding how the changing ocean can be used to infer changes in the atmosphere, whether those changes can be attributed to rising greenhouse gases, and in projecting future changes and their impacts on regional climates. Professor Bindoff has served on 13 international committees (six of which are still current), been an invited speaker at 18 conferences and workshops, co-chaired two workshops, was guest editor on two special volumes of Deep Sea Research, and convened the Oceans session of the Climate Change Congress in Copenhagen, March 2009. He has published more than 64 scientific papers and 35 reports. He established the programs and experiments that determined the total production of Adelie Land Bottom Water and its contribution to Antarctic Bottom Water formation, contributed to the development of some of the largest and highest-resolution model simulations of the oceans, and was deeply involved in oceanographic data and data management as chairman of the Data Products Committee for the World Ocean Circulation Experiment and the International Polar Year.
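The translator component described in the MACDDAP abstract above maps provider-specific parameter names onto the standard vocabularies used in meteorology and oceanography (such as CF standard names). As a rough sketch of the idea only (the table entries and function names below are illustrative assumptions, not MACDDAP's actual code):

```python
# Illustrative local-name -> CF-standard-name table (not MACDDAP's real one).
CF_STANDARD_NAMES = {
    "temp": "sea_water_temperature",
    "psal": "sea_water_salinity",
    "airt": "air_temperature",
}

def translate_record(record):
    """Rename a record's keys to standard vocabulary terms where known,
    leaving unrecognised names untouched."""
    return {CF_STANDARD_NAMES.get(name, name): value
            for name, value in record.items()}
```

Translating at the service boundary like this lets downstream portals and harvesters combine data sets from different providers without each one knowing every provider's local naming scheme.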
Ann Borda, Lyle Winton Research Data Infrastructure Approaches
The development and implementation of research infrastructures have been shaped by the need to collaborate, retain and reuse data. Documenting the practice of research, and therefore the context of the data, is essential for the easy discovery of appropriate data for reuse in the future. Increasingly, institutions are providing support systems that facilitate the management of and collaboration in research projects. Such systems allow the creation of virtual research environments (VREs) or collaboration environments, which can serve as documentation of the research process, as well as repositories of data and records. Research infrastructure providers take on responsibility for supporting this research need, but such systems are not always ideally suited to long-term retention.
Biography – Ann Borda
Dr. Ann Borda has held strategic and operational roles in academic and research organizations for over 10 years. Ann is currently Executive Director of the Victorian eResearch Strategic Initiative (VeRSI – www.versi.edu.au) which is a Victorian State funded Program to provide a coordinated approach to accelerating the uptake of eResearch across the Victorian research community. Previously, Ann held the position of eResearch Senior Programme Manager with the Joint Information Systems Committee (JISC – www.jisc.ac.uk) based at King’s College, London, at which time she was responsible for the quality delivery of multi-million government-funded projects in building a UK wide e-Infrastructure to support research, and in facilitating broader community take-up of e-Science tools, grid services and resources. Concurrently, she was a Research Fellow at the Institute for Computing Research, London South Bank University, where she investigated HCI and collaborative technologies among other topics. In December 2009, Ann was appointed Associate Professor & Honorary Principal Fellow in Computer Science & Software Engineering (CSSE) at the University of Melbourne.
Biography – Lyle Winton
Lyle is the Associate Director for eResearch at Victoria University working with the Office for Research, IT Services and Library to develop e-research capability across VU. Lyle was formerly an analyst with the Victorian eResearch Strategic Initiative (VeRSI http://www.versi.edu.au/),
a senior research support officer with the eScholarship Research Centre (http://www.esrc.unimelb.edu.au/) supporting the research community and eResearch initiatives at the University of Melbourne, and also a consultant to the DEEWR/JISC led international e-Framework for Education and Research (http://www.e-framework.org/). His research background is in experimental high energy physics and distributed computing, involving large-scale international collaborations. Lyle’s professional background is in the IT areas of infrastructure development, software design, development and project management.
Joshua Bowden OpenCL implementations of principal component analysis for large scale data analysis and benchmarking of heterogeneous clusters.
Programming environments for General-Purpose computation on Graphics Processing Units (GPUs) have improved rapidly in the past decade. They allow a programmer to tap into the potential of GPU-based devices for non-graphics tasks. As a widely adopted programming standard, OpenCL attempts to standardise the programming of the various devices constituting a heterogeneous computing system. A benefit of using widely adopted standards such as OpenCL is that they allow the performance of an algorithm to be compared across a variety of modern CPU architectures and GPU-based systems. An OpenCL implementation of the Non-linear Iterative Partial Least Squares (NIPALS) algorithm used in principal component analysis has been used as a benchmarking program to test a range of hardware on the core vector-matrix operations that are at the heart of the algorithm. This algorithm can be time-demanding for large data sets owing to its iterative nature. Results of benchmarking workstation, cluster and cloud-based solutions are described. The measurement and modelling of these workloads results in a better understanding of the economies the different systems bring to research-based computation.
Biography
Dr Joshua Bowden has an Honours degree in Chemistry from Flinders University and a PhD in Materials Science from the University of South Australia. He also has two Graduate Certificates, in Information Technology and Object-Oriented Programming, gained during the completion of his PhD. He undertook a postdoctoral position at CSIRO in the area of informatics systems, where he developed software solutions to problems involving the analysis of large data sets used to determine wood properties from a range of analytical equipment, including X-ray diffraction and near-infrared spectroscopy. While finishing his PhD he worked for three years at Queensland University of Technology in the bio-materials field, investigating the microstructure and chemistry of cartilage. There he developed his interest in the computational problems of multivariate methods. He has been working with the CSIRO Advanced Scientific Computing group for the past year, providing software support to a number of high performance computing projects.
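The NIPALS algorithm benchmarked in the abstract above extracts principal components one at a time: it alternates score and loading updates until convergence, then deflates the data matrix and repeats. A minimal NumPy sketch of the procedure for reference (the benchmarked implementation was in OpenCL, parallelising the underlying vector-matrix operations; the function name and tolerances here are assumptions):

```python
import numpy as np

def nipals_pca(X, n_components=2, tol=1e-8, max_iter=500):
    """NIPALS: extract principal components one at a time by alternating
    score/loading updates until convergence, then deflating the matrix."""
    X = np.asarray(X, dtype=float)
    X = X - X.mean(axis=0)              # centre the columns
    scores, loadings = [], []
    for _ in range(n_components):
        t = X[:, 0].copy()              # initial score vector
        for _ in range(max_iter):
            p = X.T @ t / (t @ t)       # loading: regress columns on scores
            p /= np.linalg.norm(p)      # normalise loading to unit length
            t_new = X @ p               # score: project rows onto loading
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X = X - np.outer(t, p)          # deflate: remove this component
        scores.append(t)
        loadings.append(p)
    return np.array(scores).T, np.array(loadings).T
```

The two matrix-vector products inside the inner loop are the "core vector-matrix operations" the benchmark exercises, and the outer iteration is what makes the method time-demanding on large data sets.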
Andrew Buttsworth, Rhys Newman, Peter Wheeler The SkyNet: Harnessing the Power of the Community for Radio Astronomy Research
The International Centre for Radio Astronomy Research initiated the SkyNet project to engage the community and raise awareness of radio astronomy. The initial proposal was inspired by the many citizen science projects that have gained from making science accessible to the broader community; this community engagement enabled those projects to access resources which would have been impossible to fund by normal means. The distributed computing backend of the SkyNet is based on NereusV, an open-source pure Java™ desktop cloud distributed computing technology. If we can achieve a client base of approximately 10,000 workstations in the higher education sector around Perth, we will have a distributed computing network capable of approximately 100 TFLOPS. This has the potential to have a significant impact on the Australia/New Zealand bid to host the Square Kilometre Array, and will allow thousands of users to contribute directly to ground-breaking research.
Biography – Andrew Buttsworth
Andrew Buttsworth graduated from Curtin University with a BSc in Applied Science and a GradDip in Computing, before starting work in the Australian higher education IT sector in 1996. After working at the University of Western Australia for 4 years, Andrew started work at Curtin University in 2000 for the Faculty of Science & Engineering, as an IT systems administrator. He is currently working as the Team Leader for Curtin IT Service’s eResearch Support team, with a focus on enabling researchers to enhance their research outcomes through the use of ICT.
Biography – Rhys Newman
Rhys Newman obtained a BSc (Hons) and MSc in Applied Mathematics from the University of Western Australia, before winning several scholarships to study Computer Vision at Oxford University. Completing his DPhil in 1998, Dr Newman has since worked in internet startups and city finance houses, but in 2004 returned to Oxford (Physics) to develop desktop cloud computing technology. In 2010 he became CTO of a spin-out company, “eMediaTrack”, which aims to commercialise the award-winning NereusV desktop cloud and JPC emulation technologies. Dr Newman maintains an active research profile, including building the backend systems at the heart of the “theSkyNet” outreach project.
Biography – Pete Wheeler
After graduating from Leeds University in 2001 with a BSc in Physics, Pete Wheeler worked in London as a Test Engineer for a company called Electron Tubes. In late 2002 Pete immigrated to Western Australia and began working for Scitech, Perth’s Science Discovery Centre, as an Outreach Presenter, honing his skills as an effective science communicator. After a series of roles involving the development of education resources for WA teachers, managing Perth’s first Planetarium and coordinating state-wide education and outreach initiatives, Pete now calls himself a Science Communicator, a job type that is becoming more and more recognised by science-based organisations seeking to engage the outside world. As such, Pete now performs the role of Outreach and Education Manager for the International Centre for Radio Astronomy Research (ICRAR), a joint venture of Curtin University and The University of Western Australia.
Leslie Carr Mind the Gap! Moving From Aspiration to Experience in UK Institutional Research Data Management
The aim of the Institutional Data Management Blueprint (IDMB) project, funded by JISC in the UK, has been to create a practical and attainable institutional framework for managing research data that facilitates ambitious e-research practice. A candidate tool to support this responsibility is the institutional repository – an information storage and management tool conjoined with extensive social support and advice structures from the library. To help institutions acknowledge and manage their data management responsibilities, IDMB provides an overall framework within which to plan and develop institutional data management strategy. This paper describes the main practical developments being made to an institutional repository platform as a result of the IDMB data management survey and audit. The University of Southampton Institutional Repository is based on the EPrints platform (v. 3.2), configured for some rudimentary data support that makes research data discoverable, but not easily interpretable or reusable. A table of data points may be provided as a spreadsheet, a database or a PDF, but guidance as to the interpretation of those figures is not easy to come by. Nor is it easy to understand the relationship between multiple data files (components of complex data objects). The paper describes some simple amendments to the repository’s document model to facilitate human and software interpretation of the document contents and the role of individual data components.
Biography
Dr Leslie Carr is a senior lecturer in the Web and Internet Science research group at the University of Southampton, UK, and a director of the Web Science Doctoral Training Centre. Dr Carr is the director of EPrints (eprints.org), the first institutional repository platform established in 1999, which supports over 320 public repositories supporting open access, open data and open educational resources agendas. Dr Carr is also the Director of EPrints Services, a spinout of the University of Southampton that commercially exploits the open source EPrints software, providing repository services and training to the international research industry. In conjunction with the EPrints team at Southampton, Dr Carr has led over twenty funded digital library and repository projects including IDMB and DepositMO.
Ron Chernich, Peggy Newman, Simon McNaughton, Jane Hunter CABER – A Registry for Recording and Reporting Australian Algal Blooms
CABER is a web-based interface for capturing monitoring data and sightings of potentially toxic algal blooms. The project is a collaboration between the Qld Dept of Environment and Resource Management (DERM), the UQ eResearch Lab and Healthy Waterways. Currently it contains data specific to the coastal and estuarine regions of South-East Queensland, but is designed to support algal bloom observations from across Australia. Our presentation will demonstrate the data capture, upload and visualization methods (including the mapping and timeline search and browse interface) for recording and analysing algal bloom observations. We will also describe the iPhone application developed to enable field data capture (including photos and species identification). Challenges experienced in adopting smart-phone technology will be described, together with future directions and the problems associated with managing community-generated data.
Biography – Jane Hunter
Jane Hunter is the Director of the eResearch Lab at the University of Queensland where she manages a team of PhD students, Post-docs and software engineers, developing innovative IT solutions for the capture, analysis and visualization of research data. She has published over 100 papers on the topics of scientific data management, Semantic Web/Linked Data and digital libraries. She is Deputy Chair of the Academy of Sciences National Committee for Data in Science and also Deputy Chair of the Australasian Association for Digital Humanities (aaDH) and on the boards of the Journal of Web Semantics, International Journal of Digital Curation and IEEE Multimedia.
Biography – Peggy Newman
Peggy Newman is a software developer in the eResearch Lab at the University of Queensland. She has a professional background in data warehouse development and maintenance, and is interested in scientific data access, analysis and visualisation. Her current focus is on passive acoustic and GPS-based animal tracking data, and on automating systems to collect, statistically analyse and geographically represent data, so that researchers can spend more time answering science questions and less time negotiating disparate software systems.
Paul Coddington Outcomes of the NeAT Program: eResearch tools and services for national research communities
The National eResearch Architecture Taskforce (NeAT) was a committee of experts established under the NCRIS Platforms for Collaboration capability. NeAT was responsible for identifying a portfolio of projects to develop and implement new eResearch tools and services. Each project needed to: provide production eResearch tools or services to meet the needs of a particular research community, but with potential for broader use; encourage eResearch uptake, awareness raising and skills development; aim to significantly improve research processes; identify long-term providers that will host and support the services; and have significant co-investment from the user community. Fifteen projects targeting a broad range of research disciplines were selected by NeAT. Projects received funding for 2.5 to 4 EFTs for 18 months to 3 years, with significant additional in-kind effort and resources provided by the project partners. Funding, management and technical input to the NeAT program was provided jointly by the Australian National Data Service (ANDS) and the Australian Research Collaboration Services (ARCS). Ten of the NeAT projects targeted the requirements of NCRIS national scientific research communities, with four other projects working with national organisations in the humanities and social sciences. Most of the projects were strongly focussed on managing, accessing or sharing data, with the others providing tools and services for analysis, processing and visualisation of data sets. The NeAT projects provide exemplars of how research practices can be improved or transformed by the use of eResearch tools. Researchers and research organisations are reporting that the NeAT-funded tools are having a significant impact on their research or on how they deliver data or eResearch services to their community. This presentation will provide an overview of the NeAT program and a brief summary of the outcomes of each of the NeAT projects, and of the overall program.
Biography
Dr Paul Coddington is Deputy Director of eResearch SA, where he has managed eResearch projects and infrastructure since 2002. From 2007 to 2011 he also worked for the Australian Research Collaboration Service and managed the National eResearch Architecture Taskforce (NeAT) program, which funded projects to implement eResearch tools and services for many national research communities. He has over 25 years of experience in eResearch, working on university research and development projects in high-performance and distributed computing, computational science and research data management.
Michael D’Silva, Chris Myers The ‘Imax’ of science labs – the next generation of eResearch
In the past, VeRSI demonstrated the eVBL (educational Virtual BeamLine), which proved that remote access to the Australian Synchrotron was possible. VeRSI then showed that Synchrotron users could remotely load samples and move motors on MX1 (Macromolecular Crystallography). VeRSI has now pushed the boundaries of remote access and remote control in the Australian research space. Providing remote access and remote control in a collaborative space to an expensive instrument, such as a beamline at the Australian Synchrotron or the XPS (X-ray photoelectron spectroscopy) instrument at La Trobe University Bundoora, is genuinely difficult. The people responsible for these expensive instruments do not like having more than a few people near them. Special training, such as OHSE and Radiation Safety, needs to be undertaken by all users who go near the instrument and/or the facility. Also, “due to the nature and expense of these instruments, sharing instruments is essential and may require researchers to travel to the location of the instrument.”[2] This costs both time and money and often causes scheduling and data transportation problems. To tackle this problem, a collaboration of La Trobe’s eResearch Office, La Trobe’s CMSS (Centre for Materials and Surface Science) and VeRSI built a room called VisLab1. This room provides an immersive environment for a group of up to 30 researchers or students to access instruments from a remote location. The high-tech laboratory contains all the latest in visualisation technology, including a 95m² multi-screen projection wall, six touch screens and video conferencing equipment, all in 1080p High Definition. It also has a twelve-monitor 175” display wall running a Microsoft Windows PC for displaying ultra-high resolution visualisation data.
REFERENCES
1. La Trobe University Bulletin, The ‘Imax’ of science labs – next generation facilities, 2011. http://latrobeuniversitybulletin.com/2011/06/08/the-‘imax’-of-science-labs/
2. VeRSI, VisLab Sneak Peek, eNewsletter 15, 2011. https://www.versi.edu.au/news-and-publications/enewsletter/enewsletter-15/vislab-launch
Biography – Michael D’Silva
Michael is the Project Manager and a Software Systems Engineer for the Collaborative Cyber Infrastructure for Instrumentation team at VeRSI. His key responsibilities include software project management and the design and development of eResearch software. His current projects include implementing a collaborative working environment by developing the Virtual BeamLine (VBL) at the Australian Synchrotron. The VBL enables users to access experiments and collaborate remotely. In addition, he was involved in several of VeRSI’s life science projects, such as the Genome Data Mining and the Laboratory Supervisor Management System projects. Other projects he is involved in include the Remote Laboratory Instrumentation at La Trobe University and the VisLab1 room at La Trobe University. Used for remote teaching purposes, the VisLab1 removes the need for small group sizes, as well as the time spent travelling and completing safety inductions, as it allows students to run experiments remotely while enjoying the same immersive experience as at the Synchrotron and La Trobe.
Biography – Chris Myers
Chris is the Program Director at VeRSI. His duties include delivery of a collaborative communication environment that allows researchers to remotely interact with the MX1 (Macromolecular Crystallography) beamline and Powder Diffraction beamline (PD) at the Australian Synchrotron. He is managing the construction of a remote VBL environment at La Trobe University as well as an integrated instrumentation environment for materials and surface scientists.
David Eyers, Russell Butson Managing sensitive data across the data life cycle
The management of and responsibility for raw data is a central aspect of empirical research. Most traditional texts on the practice of research have sections outlining various approaches for categorizing, storing and recovering information from concrete artefacts like paper and tape. For many, the adoption of the more abstract digital media meant a shift from autonomous control to a reliance on technologists. Over the years researchers have become more self-reliant through the wide-spread use of personal computing. The proliferation of cheap, high-capacity storage technology has made it possible for researchers to store large amounts of data. However, the ethical responsibility on principal investigators requires the management of raw data beyond the task of storage. Many types of research projects require collaborative sourcing, management or sharing of sensitive datasets. Often researchers make do with an ad hoc approach to sharing data, without fully appreciating (or even considering) the risks involved, often because of the perceived inaccessibility of higher quality, managed solutions. However, the significant economies of scale to be gained by having backup and on-line redundancy of physical media managed independently from research data create difficulties in cases where the repositories contain highly sensitive data. In these contexts a technology host that was previously able to remain agnostic to the application specifics of researchers must now partition its infrastructure in a complementary manner in order to provide appropriate security assurances. The University of Otago is currently exploring the implications of developing a secure storage capability, including the integrated Rule-Oriented Data System (iRODS) storage middleware, that aligns with the workflow needs of researchers working with patient data within the healthcare sector.
The project aims to achieve a workable model and a set of guidelines for controlling the access, storage, retrieval, replication and analysis of highly sensitive data within a secure environment.
Biography – David Eyers
David Eyers is a lecturer in Computer Science at the University of Otago. He does research into wide-area distributed systems, with particular interests in security and efficient network communication. Security topics of relevance to this project include distributed access control systems, and data management techniques that facilitate the effective tracking and protection of sensitive information. David is interested in the evolution of policy within access control systems, i.e. their ongoing management, in addition to what they are able to enforce at any point in time. As academic projects in the aforementioned fields can sometimes drift away from the actual needs of users, David is keen to undertake projects that examine how such research topics can be grounded in reality. eResearch projects often require researchers to incorporate and evaluate new technologies, and the eResearch community is thus a particularly appropriate target for his research.
Biography – Russell Butson
Russell Butson is a lecturer in Higher Education at the University of Otago. His primary area of interest is the practice of collaboration, particularly within virtual research environments. The emerging nature of this field has meant that Russell has had to design and create a number of virtual research environments in order to advance his research area. While there is a technical component to this work, the actual research is more sociological than technical, with an emphasis on the nature of knowledge work through the analysis of the environments, communities, and practices involved in researcher collaborations. Russell has been in the eResearch domain for some years, promoting the uptake of ICT to support the practice of research both nationally and locally at the University of Otago. He is a member of the University of Otago’s eResearch Advisory Group and an advocate for eResearch generally.
Ryan Fraser, Terry Rankine, Josh Vote, Lesley Wyborn, Ben Evans Virtual Geophysics Laboratory (VGL): scientific workflows exploiting the Cloud
The Virtual Geophysics Laboratory (VGL) is a scientific workflow portal that provides geophysicists with access to an integrated environment that exploits eResearch tools and Cloud computing technology. The VGL is a collaboration between CSIRO, Geoscience Australia (GA) and the National Computational Infrastructure (NCI), and has been funded by the Federal Government’s Education Investment Fund. The VGL provides scientists with easy access to multiple eResearch and Cloud technologies through a user-driven interface. The VGL was developed in close collaboration with the geophysics user community, with representatives from GA and ANU, and has been deployed directly into their environment.
Biography – Ryan Fraser
Ryan Fraser is a Project Leader within CSIRO’s Minerals Down Under flagship. He leads several large projects dealing with the exchange and delivery of spatial information and eResearch tools. He manages projects that focus on enabling the delivery of data in an interoperable manner to the various science domains. He has a software engineering background, has expertise in high performance data and computational technologies, and has primarily been involved in the design and execution of systems to deliver spatial information and the provision of data and computing services to the research community and industry. He leads a large team to deliver technologies to enable data exchange and orchestrate change within the community.
Biography – Josh Vote
Joshua Vote is trained in Computer Science and has been involved in developing a broad spectrum of applications for both government and commercial organisations. His professional interests include user interface design, human computer interaction and generally making software accessible to as many people as possible. Since joining CSIRO in 2009, Joshua has been involved with the AuScope Grid project and has taken an active role in developing the AuScope Portal and integrating it with the Spatial Information Services Stack (SISS). This has involved supporting various community standards over numerous components of the SISS and weaving them into a single consistent application.
Biography – Terry Rankine
Terry Rankine is a Research Group Leader in CSIRO’s Earth Science and Resource Engineering Division. He leads the Computational Geoscience group, a capability providing science and technologies to integrate and interpret geoscience data and knowledge in order to understand, quantify, and predict geological processes. In particular, these tools have been applied in the minerals exploration context, with the aim of reducing risks and uncertainties and potentially leading to cheaper, faster discovery. Terry originally studied Computational Chemistry, and has a background in High Performance Computing, data management, data mining, workflow engines, and various collaboration toolkits. He is currently working with Ryan Fraser and his team, combining those skills into community virtual laboratories, like the VGL.
Biography – Ben Evans
Ben Evans is the Head of the ANU Supercomputer Facility at the Australian National University. He leads projects in HPC and Data-Intensive analysis, working with the partners of NCI and the research sector.
Biography – Lesley Wyborn
Lesley Wyborn is a Senior Geoscience Advisor at Geoscience Australia and is a member of the Australian Academy of Science National Data in Science Committee, and the Executive Committee of the Earth and Space Science Informatics Focus Group of the American Geophysical Union. She is currently lead on the GA/CSIRO eResearch Collaboration Project and the GA/NCI High Performance Computing Pilot Project.
Dave Fulker OPeNDAP Roadmap to New Server-Side Capabilities and Other Supports for Data-Intensive
This presentation summarizes the roadmap (parts of which are firm while others are tentative) being charted for the future of Hyrax and OPeNDAP. Four topics will be covered: a) server-side subsetting of UGRIDs and other classes of non-rectangular meshes, as well as unstructured collections of (space-time) point observations such as station data; b) building and utilizing inventories of OPeNDAP-accessible data sets that reflect user-specified constraint expressions and space-time resolutions; c) increased compatibility and commonality between OPeNDAP’s Hyrax and Unidata’s THREDDS Data Server (TDS) based on a newly minted set of OPeNDAP protocol specifications and an associated set of conformance tests; d) the impact of cloud computing on needs for (Hyrax) data services, including potential changes in the social aspects of data exchange, use and reuse. For example, might a new paradigm emerge in which cloud-based processing systems are expected to create provenance and citation records, immediately suitable for publication?
Biography
Dave Fulker, President of OPeNDAP, has focused his career on serving scientists and science educators via computing and networking advances. His teams have combined leading-edge technologies with end-user service, underpinned by expertise in both technical and social aspects of the information age. Dave directed the Unidata Program (at the University Corp. for Atmospheric Research) from inception until 2002, overseeing development of the Network Common Data Form (netCDF) and other software now considered critical infrastructure in the geosciences and other fields. Unidata is often considered an exemplar of community participation and data sharing. Before launching Unidata and serving as (founding) Executive Director of the National Science Digital Library (NSDL), Dave spent 18 years in software-development and leadership at the National Center for Atmospheric Research (NCAR). Dave is a Fellow of the American Meteorological Society (AMS) and recipient of the AMS Cleveland Abbe Award, the Educom Medal for Technology in Education and the NCAR Technology Advancement Award. Dave holds Master and Bachelor of Arts degrees in Mathematics from the University of Colorado, is a professional trumpet player (jazz and classical), and is President of the Boulder Philharmonic Orchestra.
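The “user-specified constraint expressions” mentioned in the roadmap abstract above are the DAP mechanism for server-side subsetting. As a rough illustration only (the dataset URL and variable name are hypothetical, and the helper function is ours, not part of any OPeNDAP library), a DAP2-style hyperslab constraint can be appended to a dataset URL like so:

```python
# Illustrative sketch: building a DAP2-style constraint expression for
# server-side subsetting. The base URL and variable name are hypothetical;
# the helper below is our own, not an OPeNDAP API.

def dap_subset_url(base_url, var, *dims):
    """Build a DAP2 constraint of the form base_url.ascii?var[start:stride:stop]...
    so that the server returns only the requested hyperslab."""
    hyperslab = "".join(f"[{start}:{stride}:{stop}]" for start, stride, stop in dims)
    return f"{base_url}.ascii?{var}{hyperslab}"

# Request a 6 x 11 subset of a (hypothetical) sea-surface temperature grid.
url = dap_subset_url("http://example.org/opendap/sst.nc", "sst",
                     (0, 1, 5), (10, 1, 20))
print(url)
# http://example.org/opendap/sst.nc.ascii?sst[0:1:5][10:1:20]
```

The server, rather than the client, applies the `[start:stride:stop]` projection, which is what makes subsetting of very large data sets practical over the network.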
Wojtek Goscinski, Timur Gureyev, Chris Hall, Anton Maksimenko, Arthur Sakellariou, Darren Thompson – The Multi-Modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) for Near Realtime CT Reconstruction using XLI at Australian Synchrotron
The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) is a specialised Australian high performance computing facility for computational imaging and visualisation. This facility has been formed to provide researchers with the hardware, software and expertise to drive research in the characterization, biomedical science, materials research, engineering, and geoscience communities, and stimulate advanced imaging research that will be exploited across a range of imaging modalities, including synchrotron imaging, neuroimaging, electron microscopy and optical microscopy. This presentation will introduce the MASSIVE project, and present its role in a near-realtime Computed Tomography (CT) reconstruction service for the Imaging and Medical Beamline (IMBL) at the Australian Synchrotron, using XLI / X-TRACT software. This service allows researchers using the IMBL to perform fast CT reconstruction and visualisation while in-experiment. We will present XLI CT reconstruction performance results across the MASSIVE platform and early experience using the prototype service.
Biography – Wojtek Goscinski
Wojtek James Goscinski is the Coordinator of the Multi-Modal Australian ScienceS Imaging and Visualisation Environment.
Rhys Hawkins, Ben Evans, Deborah Mitchell, Steven McEachern – Visualising spatially-coded data at the Australian Data Archive
There has been an increasing need for spatial data information to be made available through web-based tools which link seamlessly to data repositories. The Australian Data Archive (ADA, formerly the Australian Social Science Data Archive – ASSDA) is one such example of a critical research data repository with a potential for such tools. In this paper we will present our work on the ADA spatial data framework and describe our new online tools for exploring spatial social science data. This new capability has had implications for the entire data workflow for archiving of survey data: from the design of surveys to incorporate the accurate recording of geospatial identifiers, to maintaining the confidentiality of geo-located respondents’ information to prevent identification by unauthorised users, to allowing researchers access to the data in new and powerful ways.
Biography – Rhys Hawkins
Rhys Hawkins is a programmer at the ANU Supercomputer Facility, specializing in visualization.
Biography – Ben Evans
Ben Evans is the Head of the ANU Supercomputer Facility at the Australian National University. He leads projects in HPC and Data-Intensive analysis, working with the partners of NCI and the research sector.
Biography – Deborah Mitchell
Deborah Mitchell is the Director of the Australian Data Archive.
Biography – Steven McEachern
Steven McEachern is the National manager of the Australian Data Archive.
Leonie Hellmers eResearch Survey: first longitudinal report
This paper will present findings from the first longitudinal survey investigating eResearch practices and attitudes across the higher education sector. The survey was first rolled out in 2009 across seven NSW universities, with over 1,000 participating researchers. It is about to be repeated and this second round of responses, available in time for the conference, will allow us to observe changes in attitudes and behaviours with respect to eResearch over the last two years. Results from 2009 provided valuable baseline data. Significantly, results pointed to a gap between researchers’ willingness and obvious need to adopt eResearch practices and their limited awareness and utilisation of eResearch and eResearch bodies.
This presentation offers findings from the second round of the survey towards three aims: a) tracking movements in researcher technology-enhanced practices, needs and constraints; b) continuing the discussion about the importance of considering these practices when developing research infrastructures and services; and c) monitoring the effectiveness of eResearch support agencies over time. Are we effectively engaging with researchers? Is there a noticeable impact on the uptake of eResearch technologies? What improvements are evident in specific areas such as research data management, data re-use and research collaboration?
Biography
Leonie provides communication and special project support to Intersect. Leonie worked in the digital humanities as a project lead, manager and advocate for 15 years, variously with the National Portrait Gallery, the National Gallery of Australia, Fairfax, Brainwa@ve. More recently she was founding Director of the Dictionary of Australian Artists Online for a consortium led by the University of NSW.
Mary Hobson, Andrew Rohl, Rob Cook, Ian Gibson, Bill Appelbe, Nathan Bindoff – Australian eResearch Organisation – a Nationwide Collaboration for Research Infrastructure & Services
High performance infrastructure and services are essential for the effective conduct of globally significant research in many disciplines. Their impact is spreading rapidly as modeling, simulation, visualisation, information-centric computing, collaboration and other applications of computing open up new possibilities for innovative research. The NRIC (National Research Infrastructure Council) Roadmapping exercise conducted during 2011 is planning the next stages in the national support for what are now essential platforms for advanced research. Access to infrastructure and services is promoted through institutional information technology (ITS) and eResearch services groups and CAUDIT, through the regional eResearch service providers established in each state, and through the Commonwealth organisations established under the NCRIS and EIF Super Science programs, collectively known as the Platforms for Collaboration. To date these separate entities have collaborated loosely to develop and promote the facilities available to researchers. To enable greater research impact through broad access to high performance infrastructure, the eResearch service providers have organized themselves into AeRO. This presentation “launches” AeRO to the national eResearch community, presents its program and opens membership to any organisation involved in the provision and support of eResearch services.
Biography – Mary Hobson
Mary Hobson is the Director of eResearch SA, a joint venture between the University of Adelaide, Flinders University and UniSA. Mary started her technology career in 1975 programming for the Ministry of Defence in the UK. She went on to own a software house and then lectured in systems analysis and management information systems for 10 years. In the early 1990s she went to work in Russia with the Moscow University of Microelectronic Technology, setting up an innovation park. She worked with Russian engineers starting an independent integrated circuit design house, with clients including GEC Plessey Semiconductors, Intel and Alcatel. She also set up a technology transfer consultancy, introducing western companies to technologies developed in the research institutes in the former USSR. In 2005 she moved to the Polytechnic sector in New Zealand, working as a Head of School and later a member of senior management. She became Director of eResearch SA in August 2010.
Biography – Andrew Rohl
Andrew is a world-recognized leader in the field of computer simulation of surfaces. He has focused on the simulation of surface interactions in growing crystals, but the methods and programs that he has developed are directly relevant to all areas of materials science and nanotechnology. His success in this endeavour is demonstrated by his large number of publications with world-leading research groups, his major roles in two Cooperative Research Centres, and his bringing together researchers from Curtin University and the University of Oxford, leading to several patents. In 2004, he became the Executive Director of iVEC. His skills have been utilized at iVEC to develop a successful partnership across five institutions that provides major advanced computing facilities for all Western Australian researchers. In May 2009, iVEC was awarded $80M from the Commonwealth Government to build and operate one of the world’s leading supercomputing centres – the Pawsey Centre.
Biography – Nathan Bindoff
Nathan Bindoff is Professor of Physical Oceanography at the University of Tasmania and CSIRO Marine Research Laboratories, and Director of the Tasmanian Partnership for Advanced Computing. Professor Bindoff is a physical oceanographer, specialising in ocean climate and the earth’s climate system. He was the coordinating lead author for the ocean chapter in the Inter-Governmental Panel on Climate Change (IPCC) Fourth Assessment Report and for the detection and attribution chapter of the Fifth Assessment Report. His current interests are primarily in understanding how the changing ocean can be used to infer changes in the atmosphere, whether these changes can be attributed to rising greenhouse gases, and in projecting future changes and their impacts on regional climates. He has established the programs and experiments that determined the total production of Adelie Land Bottom Water formation and its contribution to Antarctic Bottom Water Formation, contributed to the development of some of the largest and highest resolution model simulations of the oceans, and was deeply involved in oceanographic data and data management as the chairman of the Data Products Committee for the World Ocean Circulation Experiment and the International Polar Year. As Director of TPAC, Professor Bindoff provides expertise and educational programs, as well as high performance computing facilities, to the Australian and international research community.
Biography – Bill Appelbe
Bill Appelbe has been the founding CEO and Chief Scientist of the Victorian Partnership for Advanced Computing (VPAC) since 2000. Bill completed an undergraduate honours science degree at Monash University in 1974, then completed a Masters and a Doctorate in Computer Science and Electrical Engineering in 1978 at the University of British Columbia. Subsequently, he was employed as an Assistant Professor at the University of California, San Diego (1979-1986), then as an Associate Professor at Georgia Tech (1987-1998). Bill’s research interests are in parallel programming tools, software engineering and software frameworks. Bill’s research in parallel programming dates back to the early 1980s with the development of a unique parallel programming static debugging tool, followed by ongoing development of interactive parallelization toolkits and animation tools for parallel programming (funded by the NSF, IBM, and LANL). More recently, Bill and the team at VPAC, in collaboration with Monash and CIG/Caltech since 2001, have been developing frameworks and tools for computational mechanics/geophysics: StGermain, Underworld, Gale, and MADDS. Bill is an honorary faculty member of Monash University and RMIT.
Biography – Rob Cook
Rob is the CEO of QCIF (the Queensland Cyber Infrastructure Foundation), a not-for-profit company established by the Queensland universities to provide high performance infrastructure and services. His consulting company, Pangalax, has been active in the research sector helping with the establishment and development of major research and research infrastructure facilities, including several Cooperative Research Centres. Prior to Pangalax, Rob spent several years in North America leading Astracon, a start-up company providing broadband network provisioning software to the telecommunications industry, and before that CiTR – a telecoms software company in Brisbane.
Biography – Ian Gibson
Dr Ian Gibson is Chief Executive Officer of Intersect Australia Ltd. Ian has extensive experience in executive-level R&D management. He has a strong track record in the research, development and commercialisation of new technology across a broad range of electrical engineering, computer science and digital imaging. Previously Ian was a Division General Manager at CiSRA, the Australian R&D lab for Canon. There he built research capability over several years to deliver original, world-leading technology into a wide range of Canon’s major product groups. Ian has a PhD from the University of New South Wales in Computer Science, a BE in Electrical Engineering (Hons) and a BSc, is on several industry advisory boards at Australian universities and is an Adjunct Professor at the University of Queensland.
Nick Horspool, Jeffrey Johnson, Ben Evans – From the sea to the clouds: TsuDAT, a community-based tsunami simulation application hosted in the cloud
TsuDAT, the Tsunami Data Access and Simulation Tool, is a novel approach to tsunami inundation simulations that farms out computationally intensive tsunami simulations to a computing cloud. TsuDAT consists of a web-based mapping application where users can explore their offshore tsunami hazard and easily build detailed tsunami inundation simulations, and a backend that employs an open-source hydrodynamic modelling application run on virtual machines in the cloud. This approach allows non-modellers, such as emergency managers, planning officials and land-use planners, to create detailed tsunami inundation hazard maps and assess the tsunami risk for coastal communities. The benefit of this approach is that users do not require specialist software or high performance computing resources to be installed locally. In addition, the scalability of cloud computing offers increased supply of computational resources as demand increases. TsuDAT aims to transfer the capability of computational tsunami simulations from a handful of experts to a much wider community of modellers located in the state and territory governments around Australia.
Biography – Nick Horspool
Nick Horspool is a natural hazard scientist in the Risk and Impact Analysis Group at Geoscience Australia. His work involves developing risk assessments for a variety of geological hazards in Australia and the Asia-Pacific region.
Biography – Jeffrey Johnson
As a member of the GeoNode team, Jeff works with the user community to solve challenging problems related to the collaborative use of geospatial data. He’s excited to be able to help people use technology to describe, analyze and understand the earth, and share that knowledge with others.
Biography – Ben Evans
Ben Evans is the Head of the ANU Supercomputer Facility at the Australian National University. He leads projects in HPC and Data-Intensive analysis, working with the partners of NCI and the research sector.
John Houghton, Greg Laughlin – Costs and benefits of data provision
Over the last decade there has been increasing awareness of the potential benefits of more open access to Public Sector Information (PSI), research publications and research data both within Australia and around the world. That awareness is based on economic principles and evidence, and it finds expression in policy at organisational, national and international levels. Government and research policies seek to optimise innovation by making publicly funded data available for use and re-use with minimal barriers in the form of cost or convenience. This confers three responsibilities on publicly funded agencies: (i) to arrange stewardship and curation of their data, (ii) to make their data readily discoverable and available for use and re-use with minimal restrictions, and (iii) to forgo fees wherever practical. This paper reports on the findings of a study (currently underway) which: presents case studies examining the costs and benefits involved in making publicly funded data freely available for the agencies and their users; estimates the wider impacts of making publicly funded data available; and draws out lessons for the research sector regarding the curation and open sharing of research data.
Biography – John Houghton
John Houghton is currently Professorial Fellow at Victoria University’s Centre for Strategic Economic Studies (CSES) and Director of the Centre’s Information Technologies and the Information Economy Program. He has a number of years’ experience in information technology policy, science and technology policy and more general industry policy related economic research. He has published and spoken widely, and is a regular consultant to the Organization for Economic Cooperation and Development (OECD) in Paris. In 1998, John was awarded a National Australia Day Council, Australia Day Medal for his contribution to IT industry policy development. John’s research is at the interface of theory and practice with a strong focus on the policy application of economic and social theory, and of leading-edge research in various relevant fields. Consequently, his contribution tends to be in bringing knowledge and research methods to bear on policy issues in an effort to raise the level of policy debate and improve policy outcomes.
Biography – Greg Laughlin
Greg Laughlin is currently Principal Policy Adviser at The Australian National Data Service. He has a strong background in the physical sciences (formerly Senior Principal Scientist at the Bureau of Rural Sciences), risk management and statistics. Greg has significant experience and publications in applied modelling, ecology and climatology. He has held positions at the Australian National University, the Resource Assessment Commission, the Australian Bureau of Statistics and the Department of Agriculture, Fisheries and Forestry.
Andrew Isaac, Sam Morrison – Karaage – Cluster account management and reporting
In just over 12 months, the Victorian Life Science Computation Initiative (VLSCI, http://vlsci.org.au) has gained over 300 users and 80 projects, and collected user feedback and usage data for four quarterly reports and one annual report. This has all been done through Karaage (https://code.vpac.org/trac/karaage), an online account management and reporting system developed at the Victorian Partnership for Advanced Computing (VPAC, http://vpac.org). When a user’s access to the supercomputers has been approved, they are given a Karaage account. Karaage removes a major impediment to account management – the paper trail required to perform many account management tasks and to collect reporting information – by delegating account management and responsibility to the people best suited to it: the users. Users are assigned different levels of privilege. Through the user portal, for example, a nominated project manager can invite and approve new users, complete reports and questionnaires, and track project resource usage. System administrators need only monitor user management and collate report information. This simplifies the entire account management and reporting process for users and administrators. Built on the Django web framework, Karaage provides web-based account management facilities to administrators and users. Its design allows users to manage their own accounts and projects. This is achieved by providing modules and middleware to connect the various sub-systems of compute resources and their administration systems. This not only streamlines authentication and authorisation, but also provides interaction with various databases to collect and collate information in a readily accessible manner, all through a single web-portal. Karaage is developed and supported by VPAC and released under the GPL v3 licence.
Biography – Andrew Isaac
Andrew Isaac is a specialist programmer at VLSCI. He has been involved in the development of Karaage modules specific to the policy and procedure of VLSCI. His research interest is in statistical and high throughput computing. He has a PhD in machine learning and has worked as a researcher in bioinformatics, autonomous systems, and human factors engineering.
Biography – Sam Morrison
Sam Morrison is a systems administrator and programmer at VPAC. He created Karaage in 2007 and has been the main developer since. He has assisted in the installation of Karaage at several institutions.
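The delegation model described in the Karaage abstract above can be caricatured in a few lines. Karaage itself is a Django application; the plain-Python sketch below uses entirely hypothetical class and method names, and only illustrates the idea that a nominated project manager, rather than a system administrator, invites and approves new users:

```python
# Illustrative sketch only (not Karaage's actual API): delegated account
# management, where the project manager holds the elevated privilege.

class Project:
    def __init__(self, name, manager):
        self.name = name
        self.manager = manager      # user holding the elevated privilege
        self.members = {manager}
        self.pending = set()

    def invite(self, inviter, user):
        # Only the nominated project manager may invite new users.
        if inviter != self.manager:
            raise PermissionError("only the project manager can invite")
        self.pending.add(user)

    def approve(self, approver, user):
        # Approval is likewise delegated to the manager, not a sysadmin.
        if approver != self.manager:
            raise PermissionError("only the project manager can approve")
        self.pending.discard(user)
        self.members.add(user)

project = Project("vlsci-demo", manager="alice")
project.invite("alice", "bob")
project.approve("alice", "bob")
print(sorted(project.members))   # ['alice', 'bob']
```

The design point is simply that the paper trail disappears: the people with the context to make membership decisions perform them directly through the portal, and administrators only observe and collate.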
Peter Isaac, Calvin Chow, Simon Yu, Virginia Gutierre – Transforming Research in Ecosystem
Ecosystem research addresses Australian ecosystem dynamics: the role of Australian ecosystems in the cycling of water and carbon between biospheric and atmospheric stores, and the response of these ecosystems to changes in these cycles. Effective research is hampered by the lack of coordination in data collection, archiving and quality control across measurement stations in remote Australia, each of which has been implemented independently. Underpinning this initiative was the need for a more collaborative research environment to address global climate challenges.
This presentation will review the systems in place that will provide integrated research data access and facilitate a collaborative approach for researchers by addressing the following key principles:
• standardisation and automation of the data collection, archival and quality control of measurements from a network of measurement stations;
• integration of complementary data streams from different sources into a single data and metadata repository;
• facilitation of linking the data into a common research data space, through the Australian Research Data Commons, to encourage re-use of research data.
Biography – Peter Isaac
Dr Peter Isaac (presenting) is part of the Monash Weather and Climate Group and School of Geography and Environmental Science, Faculty of Arts, Monash University. Peter specialises in micrometeorology and numerical modelling of the land surface, and his research interest is in climate science. Peter is interested in the detection of changes in the environment and vegetation and its influence on surface energy and trace gas exchanges and ecosystem processes – especially in Australia’s Northern Territory, tropical savanna region. Dr Peter Isaac is a research fellow and lecturer at Monash University.
Biography – Simon Yu
Simon Yu has been a Senior Software Developer at the Monash eResearch Centre since 2007. He is the lead developer for the Ecosystem platform. Prior to that, he was the lead developer for the ARCHER project and a Java developer on the Persistent Identifier Linking Infrastructure (PILIN) project. Simon has a Master of Technology (Internet and Web Computing) from RMIT University.
Biography – Virginia Gutierre
Virginia Gutierre is a Research Systems Facilitator at the Monash eResearch Centre. She is the senior business analyst on the Ecosystem project and the other 7 ANDS Data Capture projects. Virginia has an honours degree in Software Engineering and Commerce from the University of Melbourne.
Edward King, Ben Evans, Lesley Wyborn, Wenjun Wu, Leo Lymburner, Medhavy Thankappan, Peter Tan, Fei Zhang, Mark Gray, Joseph Antony, Muhammad Atif, Matt Paget, Stefan Maier, Thomas Schroeder – A National Environmental Satellite Data Virtual Laboratory
We have constructed an environment in which different research communities using large remote sensing data sets can coalesce, based on a common platform for data, workflows, and analysis tools in a high-performance environment. Earth Observing (EO) sensors carried on space-borne platforms produce large (multiple TB/year) data sets serving multiple research and application communities. The limited overlap between these end-user groups, together with the data management challenges, often leads to fragmentation of data storage and duplication of processing systems and user analysis environments. Moreover, where overlaps exist, they are often difficult to exploit because of specific implementation differences such as agency network firewalls, incompatible storage formats and the degree of intermediate processing. This problem is common across a number of existing satellite sensors and will only get worse as new sensors are launched in the future. A virtual laboratory is a means by which communities can work together to collectively overcome the problems in common and focus on their specific research interests. This virtual laboratory has been constructed around both the computing power and data-intensive cloud facility at the NCI with the support of both the IMOS and TERN NCRIS capabilities. The result is a scalable platform for collaboration in this data-rich area with far-reaching interests in the research community. This development facilitates a long-term goal of the remote sensing community: to convert earth observation data into information at the spatial and temporal scales that are relevant to decision makers.
Biography – Edward King
Edward King heads the Satellite Remote Sensing Facility in IMOS and leads projects in the Water for a Healthy Country and Wealth from Oceans Flagships in CSIRO.
Biography – Ben Evans
Ben Evans is the Head of the ANU Supercomputer Facility at the Australian National University. He leads projects in HPC and Data-Intensive analysis, working with the partners of NCI and the research sector.
Biography – Lesley Wyborn
Lesley Wyborn is a Senior Geoscience Advisor at Geoscience Australia and is a member of the Australian Academy of Science National Data in Science Committee, and the Executive Committee of the Earth and Space Science Informatics Focus Group of the American Geophysical Union.
Wenjun Wu, Leo Lymburner, Medhavy Thankappan, Peter Tan and Fei Zhang are members of the National Earth Observation group at Geoscience Australia.
Biography – Mark Gray
Mark Gray is a specialist in MODIS data processing and use.
Joseph Antony and Muhammad Atif are data-intensive computing specialists at the NCI.
Biography – Matt Paget
Matt Paget is the TERN/AusCover data and systems coordinator.
Biography – Stefan Maier
Stefan Maier is an expert in the use of MODIS data for fire and fire scar detection.
Biography – Thomas Schroeder
Thomas Schroeder is a CSIRO research scientist developing remote sensing algorithms for inland, coastal, marine and coral reef ecosystems.
Qing Liu, Greg Timms, Yanfeng Shu, Daniel Smith, Andrew Terhorst – Provenance-Aware Automated Data Quality Control
As automated data collection has become more commonplace (e.g. through industrial and environmental sensors and sensor networks), the volume of data produced has risen exponentially. For these data to be shared and re-used, it is crucial that automated techniques for the assessment of data quality are also developed. Such techniques have begun to appear in the literature [1, 2] in recent years, combining data statistics and domain expertise to produce data quality flags or estimates of uncertainty. Where the quality of data is assessed by the organisation responsible for collecting the data, these approaches are relatively easy to implement. However, in many circumstances, it is unavoidable that users will sometimes have to use data provided by a third party. Therefore, knowledge of how data quality is assessed plays a critical role for users to decide if data is trustworthy and fit-for-purpose. In this paper we discuss how data provenance can enable proper assessment of an automated quality assessment (QA) process.
Biography – Qing Liu
Qing Liu received her Ph.D. degree in computer science from the University of New South Wales, Australia, in 2006. After graduating, she joined the University of Queensland as a postdoc researcher and worked in the area of spatial temporal data management. Qing has worked as a research scientist with CSIRO since 2007. Her research interest is in developing effective and efficient solutions for managing, integrating and analysing large amounts of complex data for biology and hydrology applications.
Biography – Greg Timms
Greg Timms received the B.Sc. (Hons) and Ph.D. degrees in physics from the University of Sydney, Australia, in 1993 and 1997 respectively. In 1997, he joined the Australian Nuclear Science and Technology Organisation where he spent five years investigating the environmental impacts of mining, focusing on the physical transport of reactants and pollutants within mine wastes.
Since 2002, Greg has been with the Commonwealth Scientific and Industrial Research Organisation (CSIRO), initially engaged in research on microwave communication networks and then leading a team which developed a novel 190 GHz millimetre-wave imager in 2006.
For the past four years Greg has been based at the Tasmanian ICT Centre at CSIRO’s Hobart site, where he has been part of a team developing low-cost sensor network and information system technologies for deployment in marine environments. Greg’s particular interest is in the development of techniques for automated quality control of real-time streaming data.
Biography – Yanfeng Shu
Yanfeng Shu is a research scientist at the CSIRO ICT Centre. Her research interests include but are not limited to database systems, semantic web, Peer-to-Peer systems, and sensor networks.
Biography – Paulo de Souza
Paulo de Souza is a physicist with an M.Sc. from UFES (Brazil); he obtained his Ph.D. in Natural Sciences (Dr. rer. nat.) from the Johannes Gutenberg University in Mainz, Germany. Paulo is an internationally recognized researcher in the fields of geochemistry, mineralogy, environmental sciences and sensor networks. Paulo has 10+ years of experience in industrial research, having worked in mining, steelmaking and consultancy before joining CSIRO. He is currently the acting Research Director for the Tasmanian ICT Centre and Science Leader – Sensor Networks with the CSIRO ICT Centre. Paulo is a collaborating scientist with NASA’s Mars Exploration Rovers program, has over 200 scientific publications and is co-author of the scientific papers identified as the ‘Breakthrough of the Year 2004’ by Science magazine.Biography – Daniel Smith
Daniel Smith received the B.Eng. (Hons) and PhD in engineering from the University of Wollongong, Australia in 2002 and 2007, respectively. He is currently a research scientist at the CSIRO Tasmanian ICT Centre where he has worked on technology projects in the aquaculture and marine space. His research interests include audio processing, blind signal separation and quality control.Biography – Andrew Terhorst
Andrew Terhorst is a geoscientist with considerable experience in resource management. This includes more than 10 years working as an exploration geologist, running his own spatial technologies firm for 13 years, and directing projects in government research agencies for the past 7 years. Andrew has participated in a range of interesting and challenging environmental projects over the years and has a record of accomplishment in innovation and leading change. He currently leads research into next-generation Sensor Webs at CSIRO.
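The style of automated quality assessment described in the abstract above, combining domain-expert thresholds with simple statistics to produce per-observation quality flags, can be illustrated with a minimal sketch. The thresholds, flag names and checks below are illustrative assumptions only, not the scheme used by the authors.

```python
# Minimal sketch of automated quality flagging for a stream of sensor
# readings: a domain-expert range check plus a statistical spike check.
# Flag names and thresholds are hypothetical, for illustration only.

def qa_flags(values, valid_min, valid_max, spike_sigma=3.0):
    """Return a quality flag ('good', 'range' or 'spike') per observation."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    flags = []
    for v in values:
        if not (valid_min <= v <= valid_max):
            flags.append("range")   # outside physically plausible bounds
        elif std > 0 and abs(v - mean) > spike_sigma * std:
            flags.append("spike")   # statistical outlier relative to the batch
        else:
            flags.append("good")
    return flags

# Example: water temperature readings with one implausible value
print(qa_flags([14.2, 14.3, 99.0, 14.1], valid_min=-2.0, valid_max=40.0))
# → ['good', 'good', 'range', 'good']
```

In a provenance-aware setting, each flag would be stored alongside a record of which check produced it and with what parameters, so that downstream users of third-party data can judge fitness-for-purpose.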
Nicholas May Implementing eResearch Projects using Agile Development
Agile development encompasses a family of methodologies that aim to overcome some problems common to heavyweight, document-centric software development processes. An active area of discussion in e-research relates to the benefits of using agile development with e-research projects. At the e-Research Office of RMIT University we are currently implementing two such projects with an agile process. In this presentation, we discuss why agile should be appropriate to e-research projects. We describe the agile process that has evolved over the lifetime of the projects, including the practices, tasks and meetings, and discuss the lessons we have learned in implementing our process. Finally, we draw some conclusions about whether and when it may be appropriate to use an agile process to develop an e-research application.Biography
Nicholas May is a software engineer with the eResearch Office at RMIT University, with responsibility for setting up and monitoring the agile development process. He is a certified professional member of the Australian Computer Society, and has more than twenty years of development experience across the software development lifecycle. In addition, he is a PhD Candidate in the School of Computer Science & Information Technology, at RMIT University, with research interests in the fields of service-oriented computing and fault tolerance.
Ann MorganMark Baldock Research repository models: can one size fit all?
University of South Australia Library has developed a number of research metadata repositories in collaboration with other divisions in the University. This presentation describes the repositories, the model used for each, and how this model has been adapted for different types of research. It also describes future developments at UniSA and how the repository model will be utilised.Biography – Ann Morgan
Ann Morgan is the Project Management and Quality Assurance Coordinator at University of South Australia Library. Prior to this, she worked as the Archivist at the Bob Hawke Prime Ministerial Library (at UniSA). During this time she also served as the Project Coordinator on the Architecture Museum Metadata Project – which involved the development of an archival database for UniSA’s Architecture Museum. Ann has also worked as a metadata librarian at UniSA, and has worked in several public and corporate libraries in Ireland. After graduating from University, Ann worked as a mainframe programmer in one of Ireland’s largest financial institutions. Ann has a Bachelor of Arts (Hons – German and Irish), a Higher Diploma in Business and Financial Information Systems from University College Cork and a Higher Diploma in Library and Information Systems from University College Dublin.Biography – Mark Baldock
Mark Baldock is the Web Coordinator for the University of South Australia Library. He manages the UniSA Library Web Team supporting a range of locally developed applications and services as well as third party systems. Mark joined UniSA after working in numerous roles for HP, including technical lead, test manager and project manager. He worked with many of Hewlett Packard’s customers in a range of industries, including government, financial services, and manufacturing. Mark was recognised globally as one of HP’s leading innovators having developed a number of internationally deployed, distinguished level, software systems. Mark has a Bachelor of Computer & Information Science and a Bachelor of Management from the University of South Australia.
John Morrissey Building Data Management services supporting a multi-disciplinary national research organization
Update on CSIRO’s development activities in building Data Management platforms supporting multi-disciplinary research. The presentation will include a discussion of DM architecture, current development activities and an overview of planned activities for the next 2-3 years. Finally, CSIRO will be releasing its ‘Review and recommendations for a data and information management strategy for CSIRO’ document as an example of an enterprise planning document for science data management.Biography – John Morrissey
John Morrissey is currently working in CSIRO’s eResearch team in a strategic planning role, looking at how eResearch tools can be applied to CSIRO’s diverse range of science disciplines. With a CSIRO career spanning nearly 30 years in various IT-related roles, ranging from building CSIRO’s advanced IP network servicing all 54 sites across the organisation to establishing CSIRO’s Data Management Service, John has a unique perspective on building ICT systems supporting a major research enterprise. In recent years John has represented CSIRO on all the technical working groups formed as part of the NCRIS Platforms for Collaboration program, and he has contributed to a number of major strategic planning documents including the ANDS ‘Towards the Australian Data Commons’.
Sam Moskwa The Accelerated Computing Initiative
CSIRO is one of the largest and most diverse scientific institutions in the world with more than 50 sites throughout Australia and overseas. A 2007 review of high performance scientific computing (HPC) identified that CSIRO should not only offer HPC infrastructure, but should work closely with researchers to improve its uptake across all scientific domains. How best to provide the required training and engage with distributed researchers was unspecified. Impediments to commencing HPC use include both hardware costs and programming expertise. The Advanced Scientific Computing (ASC) group provides access and support to HPC facilities at no direct cost to CSIRO researchers. In response to the review, the ASC developed the Accelerated Computing Initiative, which provides targeted training and programming support via small seed projects, also at no cost to recipients. Through engaging directly with researchers on such projects, HPC uptake has significantly increased and researchers have achieved improved science results.Biography
Sam Moskwa leads the Accelerated Computing project within CSIRO’s eResearch program.
A part of the Advanced Scientific Computing team, he provides support and training, porting, and optimisation expertise to researchers wishing to make use of high performance facilities. His goals include increasing the uptake of CSIRO’s GPU cluster and productivity in the computational sciences.
Trina MyersJarrod TrevathanIan AtkinsonRob CookJeremy Vanderwal The Tropical Data Hub (TDH) – A virtual research environment for tropical science knowledge innovation and discovery
The Tropical Data Hub (TDH) is an e-Research initiative to provide a data hosting infrastructure that congregates significant tropical environmental data sets. Tropical regions support some of the world’s most diverse and unique ecosystems. However, these sensitive areas are coming under increasing pressure from human activities, which significantly threatens their future sustainability. Therefore, a need exists for more informed use of environmental monitoring procedures to help better manage tropical regions. At present, data is collected in disjoint repositories and is not visible or accessible for reuse by other lines of enquiry. Without this data being publicised, many opportunities are missed for holistic discovery of major trends that influence tropical ecosystems. The TDH serves as a focal point for amalgamating disparate data sources to facilitate data reuse, integration, searching and knowledge discovery by environmental researchers and government departments. This will give researchers and planners access to extensive and readily available data that can be used to provide a more accurate representation of the state of tropical regions and allow more suitable environmental management practices to be devised. We present two visualisation tools that model data from the Tropical Data Hub. The first is for assessing land space across Northern Australia and the second is a system to rapidly assess the potential impacts of climate change on global biodiversity.
Peggy NewmanNigel WardHamish CampbellMatthew WattsCraig FranklinJane Hunter OzTrack: Data Management and Analytics Tools for Australian Animal Tracking
Studying animal movements is of critical importance when addressing environmental challenges such as invasive species, infectious diseases, climate and land-use change. The number of species tracking projects in Australia is rapidly expanding – due to both the reduction in the cost of tracking devices (radio, acoustic, and satellite) and the need for ecology management communities to study the behaviour of species across taxa, space and time. The high-resolution sensor and tracking devices deployed to monitor species typically generate very large datasets which can be difficult to interpret without advanced analytical computing and visualization tools. Much of the animal tracking data collected within Australia is not analysed or stored in an efficient and systematic manner, and as a direct result data loss and study repetition are common. The aim of the OzTrack project is to develop the critical data management infrastructure needed to support the animal tracking research community. The project is developing three software components: a central repository for the data and metadata being generated; a set of analysis, modeling and visualization services; and a Web portal interface that enables scientists to search, retrieve, analyse and visualize the data.
Liam O’Brien CSIRO eResearch Architecture
CSIRO is a large, geographically dispersed research organisation with over six thousand staff at fifty-six sites. Scientists at CSIRO carry out research in various domains, deal with large, complex datasets, and need to collaborate effectively and efficiently. In response to these issues of geographical separation of staff, increasing data complexity and the need for effective collaboration, an architecture for software and systems that supports eResearch within CSIRO has been developed. It continues to evolve to take advantage of new opportunities to support the work of scientists within the organisation. Several projects to build the underlying infrastructure and software systems that support eResearch have already been completed and several are underway. Significant areas being addressed within the eResearch Architecture include a Research Data Management Service, support for electronic Laboratory Notebooks, and support for eTools/Scientific Workflow. These systems use underlying infrastructure that includes advanced scientific computing, visualisation and imaging, data storage, networking, collaboration tools and cloud computing. Developing an eResearch Architecture for CSIRO presents several significant challenges: the diversity of research domains and the needs of the scientists within each domain; the introduction of new technology and approaches within the organisation, and the cultural change this requires in some cases; the scalability and usability of the solution architectures for the various systems that are developed; and the integration and interoperability of a diverse set of new systems with the existing technology used by scientists within the organisation.Biography – Liam O’Brien
Liam O’Brien is the Chief Software Architect for CSIRO eResearch. At CSIRO he is involved in architecting eResearch solutions based on service oriented architectures and Cloud computing. He is also an Adjunct Research Fellow at the School of Computer Science and a Visiting Fellow at the School of Accounting and Business Information Systems at the Australian National University. His research and technical interests also include software and service oriented architecture, reengineering, business transformation, enterprise architectures and cloud computing. He has worked at NICTA (Australia), Lero (the Irish Software Engineering Research Centre) and at the Software Engineering Institute (at Carnegie Mellon University in the US). He holds a PhD and BSc from the University of Limerick, Ireland.
Rebecca ParkerDana McKayTerrence Bennett Lessons for data sharing from institutional repositories
Governments and institutions are increasingly interested in promoting open sharing of research data through institutional repositories: showcasing quality research data brings prestige to institutions and gives governments a visible return on financial investment in research and development. While the incentives for open data are clear for institutions and governments, any attempt to create an open data climate depends on the researchers who will choose to share their data (or not). Early attempts to foster an open data movement have met with little interest or action on the part of researchers, a result reminiscent of early attempts to recruit publications to institutional repositories. In this paper we draw on the institutional repositories literature to identify four major barriers to open data sharing: (dis)incentives, difficulty, danger, and existing disciplinary sharing practices. To change practice (and data sharing would be a major change for many disciplines), sufficient incentives must be in place to overcome old habits. There is currently little reward for researchers in data sharing: the risks are high and there are no research metrics available for measuring the impact of shared data. With so little incentive, the barrier to participation must be very low; however, data sharing and curation are difficult at best. There are no standard ways to describe data, meaning cataloguing is taxing for both researchers and the repository librarians who would assist them. Not only is data sharing low-benefit and difficult, it is threatening to researchers: it may alienate their participants, and research data could be ill-used or misinterpreted. Finally, those who already share data in their own disciplines are unlikely to be willing to change their practice to meet institutional requirements: it is simply not worth it to them.
The institutional repository literature highlights all these problems, and may even provide insight into some solutions.Biography – Rebecca Parker
Rebecca Parker graduated from Curtin with a Master of Information Management in 2007, and is currently the Research Services Librarian at Swinburne University of Technology. She has managed Swinburne’s institutional repository since 2008, and more recently is involved in planning Swinburne’s support for research data management, supporting online publishing, and developing the content for researcher profile pages. Rebecca is keen to ensure that academic library services are tailored not just to the needs of students, but also to researchers and corporate areas of universities. She served as subject expert on the ARROW-funded NicNames Project, which explored how best to display name variants in digital libraries from a researcher perspective. She was a member of the ARDC Party Infrastructure Project Advisory Group, a committee set up by the National Library of Australia in partnership with the Australian National Data Service. Rebecca is also a regional coordinator for the ALIA New Graduates Group and is passionate about bringing new people and perspectives to libraries.Biography – Dana McKay
Dana McKay comes from an academic background in computer science, earning a Masters degree in digital library usability at the University of Waikato in 2001. Following her degree she worked as a research fellow and usability analyst at the University of Waikato focusing on information seeking and retrieval, and as a usability analyst at Nokia focussing on mobile usability. Since 2007 she has been working at Swinburne University of Technology Library, investigating a range of issues for user experience and information seeking, including author names, institutional repository usability, and researchers’ approach to research data management.Biography – Terrence Bennett
Terrence Bennett is currently the Business and Economics Librarian at The College of New Jersey; prior to that, he held a similar position at Emory University. His current research is focussed on data services in academic libraries, and in 2010-2011 he worked as the Research Data Librarian at Swinburne University of Technology, in support of a project funded by the Australian National Data Service. Terrence has a Master of Science in Library Science from the Graduate School of Library & Information Science at the University of Illinois, where he is currently an adjunct instructor of Business Information. He has given numerous presentations on research data services, information literacy, and business information resources for IASSIST (International Association for Social Science Information Services and Technology), the Association of College and Research Libraries, and the Special Libraries Association. Terrence is also a former co-chair of the American Library Association’s PRIMO (Peer-Reviewed Information Materials Online) committee.
Kevin PuloBen EvansDeborah MitchellSteven McEachern Panemalia: visualising longitudinal datasets at the Australian Data Archive
Longitudinal surveys are a very rich form of social science data, often containing a wealth of as-yet untapped hidden knowledge. However, such datasets are typically examined using analytic techniques and simple graphs. We believe that much more can be done in the analysis and exploration of such fertile datasets. Panemalia is the application of an advanced visualisation technique to longitudinal survey data. It is a highly interactive DHTML application that is integrated with the data repository at the ADA, is accessible to non-IT-savvy social science users, and supports the requirements of data familiarisation, exploration and quality assurance.Biography – Kevin Pulo
Kevin Pulo is an Academic Consultant and Systems Programmer at the ANU Supercomputer Facility. He works in the areas of information and data analysis and visualisation, parallel application programming, and HPC user environments.Biography – Ben Evans
Ben Evans is the Head of the ANU Supercomputer Facility at the Australian National University. He leads projects in HPC and Data-Intensive analysis, working with the partners of NCI and the research sector.Biography – Deborah Mitchell
Deborah Mitchell is the Director of the Australian Data Archive.Biography – Steven McEachern
Steven McEachern is the National Manager of the Australian Data Archive.
Robyn RebolloMichael HaughSimon MusgraveXiaobin Shen Sustainable Solutions to Intellectual Property and Ethical Complexities in Building a National Corpus
A myriad of complicated legal and ethical issues have arisen from the Australian National Corpus (AusNC) project as an outcome of making large amounts of language data available to other researchers and the public, where permissible. This presentation will discuss the steps taken by the AusNC project to ensure both legal protections and ethical considerations are dealt with for collections intended for inclusion in the AusNC.Biography – Robyn Rebollo
Robyn Rebollo has over 16 years’ experience in information management, data management, online research, library technologies and vendor relations. Her authority in the information industry is well established, having contributed to the design and development of e-solutions in the U.S. and Australasia. Her current role is Senior Project Manager, eResearch Services, Griffith University. She presently manages the Australian National Corpus project, an initiative that aims to provide a federated solution for Australia’s language data, enabling researchers and educators to have access to a wide range of multi-modal language data.Biography – Michael Haugh
Michael Haugh is a Senior Lecturer in Linguistics/International English teaching a range of second and third year courses in pragmatics, conversation analysis, intercultural communication, and sociolinguistics. His research interests are closely aligned with his teaching, which emphasises the importance of research-based learning and applying linguistic theory to real world data. He is an Associate Editor for Journal of Pragmatics (Elsevier, Amsterdam), and on the Editorial Board of Intercultural Pragmatics (Mouton de Gruyter, Berlin). He has edited a number of volumes on the topics of politeness, face, intention, and corpus linguistics. He is also leading the establishment of the Australian National Corpus.Biography – Simon Musgrave
Simon Musgrave is a linguist at Monash University and Treasurer of AusNC Inc, with a strong interest in data management in language documentation. Over the last few years, Simon has sought to contribute his knowledge and experience from language documentation to the design and administration of the AusNC project.Biography – Xiaobin Shen
Xiaobin Shen is a software lead and integration support at the Australian National Data Service. He is particularly interested in eResearch-related technology and data management, information visualization and human computer interaction. He is also the ANDS engaged research analyst for the Australian National Corpus project.
Matthias ReumannAndreas PflaumerCoeli M LopesBlake G FitchMichael C PitmanChanghuan KimSimon WailStephen MooreRyan HoefenArthur J MossJin O-UchiChristian JonsScott McNittWojciech ZarebaIlan GoldenbergDavid AbramsonJohn J Rice Clinical Application of Cardiac Modelling: a Need for Supercomputing?
Cardiac models are among the most mature biophysical models, with research going back to Hodgkin and Huxley’s work on mathematical models of excitable membranes in the 1950s. Since then, the field has advanced to a stage where clinical application of cardiac modelling can be conceived. Multiscale, multiphysics cardiac models with a high degree of detail at both the molecular and organ levels require large computer resources. Having said that, we have recently developed a model to support risk stratification of long QT 1 patients that does not necessarily require supercomputing resources. This leads to the question of whether supercomputing in cardiac modelling is required to have a clinical impact. We find that clinical impact can be achieved with cardiac computer models that do not require the supercomputing capabilities of systems with thousands or tens of thousands of cores. However, when investigating patho-physiological processes at the organ level that take place over hours, such as blockage of a coronary vessel causing ischaemia and infarction, or when electro-mechanical processes are investigated, multiscale, multiphysics models of the heart are required that demand the use of supercomputers to achieve simulation times that can be integrated into clinical workflows.Biography – Matthias Reumann
Dr. Reumann was born in Frankfurt/Main, Germany, in 1978. He studied Electronics and received the MEng in Electronics with the Tripartite Diploma in 2003 from the University of Southampton, UK. In February 2007 he received his PhD from the Institute of Biomedical Engineering at the Universität Karlsruhe (TH). From June 2007 to June 2010 he worked as a Post-Doctoral Fellow at the Computational Biology Center, IBM TJ Watson Research Center in Yorktown Heights, USA. Since June 2010 Dr. Reumann has been working at the IBM Research Collaboratory for Life Sciences-Melbourne in Melbourne, Australia, and holds an Honorary Senior Fellow position at the University of Melbourne. His primary research interests are systems biology, computational medicine and bioinformatics. His research work was awarded the Wolfgang Trautwein Research Award by the German Cardiac Society and the Award of the German Society for Biomedical Engineering of the Family Klee Foundation in 2008. Dr. Reumann has been a member of the IEEE Engineering in Medicine and Biology Society (EMBS) since 2006, where he currently serves in the Administrative Committee as GOLD representative. He is also active in IEEE, where he represents GOLD members on the Technical Advisory Board in 2011.
Matthias ReumannKathryn E HoltMichael InouyeTim StinearBenjamin W GoudeyGad AbrahamQiao WangFan ShiAdam KowalczykAdrian PearceAndrew IsaacBernie J PopeHelmut ButzkuevenJohn WagnerStephen MooreMatthew DowntonPhilip C ChurchSteve J TurnerJudith FieldMelissa SoutheyDavid BowtellDaniel SchmidtEnes MakalicJustin ZobelJohn HopperSlave PetrovskiTerence O’Brien Precision Medicine: Dawn of Supercomputing in ‘omics Research
People vary greatly in their underlying genetic risks of diseases and in their responses to treatment for these diseases. Unpredictability of treatment outcomes results in significant personal and societal costs. Complex networks of gene regulation, gene-gene and gene-environment interactions have replaced the notion that a single gene is causative of a disease or trait. Quantifying personalized risks, devising prevention strategies, and optimizing drug responses are major challenges for the application of Precision Medicine. Many research groups worldwide have been working diligently over the last few decades to develop epidemiological and clinical resources of biospecimens from population-based and clinic-based, well-characterised large samples of cases, controls, twin pairs and families, with the aim of performing analysis using complex modelling in Precision Medicine based on genomic, transcriptomic and proteomic data. However, it is currently impossible to carry out any but simplistic analyses on these large data sets due to a lack of computer power and memory, and therefore the full utility of the resources and technology has not been realised. We will discuss current computational challenges in Precision Medicine and propose the use of supercomputing resources to tackle these challenges. In particular, we will present two approaches to whole-genome comparison that are of critical importance but currently computationally prohibitive: multiple sequence alignment and the detection of single nucleotide polymorphism (SNP) interactions. Both approaches use the massively parallel, distributed-memory supercomputer at the Victorian Life Sciences Computation Initiative (VLSCI).Biography
The authors form a group of researchers at the University of Melbourne working in the area of Precision Medicine. Their goal is to use the supercomputing capabilities of the Victorian Life Sciences Computation Initiative to analyse sequencing data in order to identify more accurate predictors of people at increased risk of a disease, devise prevention strategies and, should people develop the disease, determine the optimal drug response, which is a major challenge for the application of Precision Medicine.Biography – Dr Reumann
Dr. Reumann was born in Frankfurt/Main, Germany, in 1978. He studied Electronics and received the MEng in Electronics with the Tripartite Diploma in 2003 from the University of Southampton, UK. In February 2007 he received his PhD from the Institute of Biomedical Engineering at the Universität Karlsruhe (TH). From June 2007 to June 2010 he worked as a Post-Doctoral Fellow at the Computational Biology Center, IBM TJ Watson Research Center in Yorktown Heights, USA. Since June 2010 Dr. Reumann has been working at the IBM Research Collaboratory for Life Sciences-Melbourne in Melbourne, Australia, and holds an Honorary Senior Fellow position at the University of Melbourne. His primary research interests are systems biology, computational medicine and bioinformatics. His research work was awarded the Wolfgang Trautwein Research Award by the German Cardiac Society and the Award of the German Society for Biomedical Engineering of the Family Klee Foundation in 2008. Dr. Reumann has been a member of the IEEE Engineering in Medicine and Biology Society (EMBS) since 2006, where he currently serves in the Administrative Committee as GOLD representative. He is also active in IEEE, where he represents GOLD members on the Technical Advisory Board in 2011.
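The scale of the SNP-interaction problem described in the abstract above comes from the combinatorics alone: an exhaustive scan must score every pair of SNPs, so the number of tests grows quadratically with the number of SNPs. The sketch below illustrates this; the scoring function is a toy placeholder, not the statistic used by the authors.

```python
# Sketch of why exhaustive SNP-SNP interaction detection is computationally
# prohibitive: every pair of SNPs must be scored, giving n*(n-1)/2 tests.
# The score function passed in is an illustrative placeholder only.
from itertools import combinations

def pairwise_interaction_scan(genotypes, phenotype, score):
    """Score every SNP pair; genotypes maps snp_id -> list of 0/1/2 calls."""
    results = {}
    for a, b in combinations(sorted(genotypes), 2):
        results[(a, b)] = score(genotypes[a], genotypes[b], phenotype)
    return results

def n_pairs(n_snps):
    """Number of pairwise tests an exhaustive scan must perform."""
    return n_snps * (n_snps - 1) // 2

# A genome-wide panel of one million SNPs implies ~5e11 pairwise tests,
# which is why a distributed-memory supercomputer is proposed.
print(n_pairs(1_000_000))  # → 499999500000
```

On a single core this loop is hopeless at genome scale, but the pairs are independent, so the scan partitions naturally across the nodes of a massively parallel machine such as the VLSCI system mentioned in the abstract.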
Anna ShadboltAnn Borda Building training into the value proposition of eResearch
Advances in technology have accelerated the rate of research outputs. In spite of this, the capacity of research organisations to build and maintain the eResearch infrastructure required to enable researchers to maximise benefits from emerging technologies continues to lag behind the pace of innovation. The importance of the human e-Enabling component of this research infrastructure is well acknowledged and valued, yet institutions continue to struggle to build sustainable programs of eEnablers across their research communities. The Victorian eResearch Strategic Initiative (VeRSI) was established in 2006 and funded by the Victorian Government to accelerate and coordinate the uptake of eResearch in universities, government departments and other research organisations. In 2010, the VeRSI team commenced a review of the role that education, training and outreach (virtual and face-to-face) could play in the enhancement of eResearch outcomes. For VeRSI, outreach and training provision has been mostly opportunistic, usually coinciding with a visit from an international expert or a large regional/national meeting opportunity. This approach has been well received to date, but it is unclear if the impact would be greater with a more targeted implementation strategy. The focus of this paper is our investigation of the feasibility of embedding training and outreach into the project planning and delivery cycle, i.e. a greater coupling between training, outreach, communication planning, and project delivery. Embedding training and communications in project planning and delivery is not new in large-scale transformational technology-based projects. eResearch-enabling projects generally focus on ‘innovators’ and ‘early adopters’. As with research more broadly, eResearch project delivery is usually highly variable and, depending on the project, the partner requirements and the deliverables, is not always clearly articulated.
Leveraging project success for expanded impact could support benefits beyond the life of the project. VeRSI is looking at ways to build in ongoing Partner benefits beyond the life of a project by including outreach and training in the project delivery process. This is a shift in emphasis from ‘product creation’ to ‘value creation’ as the prime focus of project outcomes. Value creation is supported with education, training and outreach, and will be used to enhance the positioning of products and services both within and across Partner institutions. Working with Partners to communicate the value propositions of new technology from the perspective of actual and potential users should enhance uptake of that technology and support business ownership by sustaining the change required to shift the balance in eResearch uptake. In this paper we will provide examples of how these activities are evolving and what we are learning along the way as we take eResearch uptake to the eXtreme.Biography – Anna Shadbolt
Anna Shadbolt is currently working as a training consultant with the VeRSI team to develop a plan to support a coordinated eResearch training program for Partners and other stakeholders in Victoria. This will include a number of targeted workshops and the mapping of existing eResearch training programs and resources, as well as documenting, where possible, the unmet training needs and requirements of research communities. Anna has extensive experience in the development and evaluation of training programs in a variety of contexts. Originally trained as an educational psychologist, Anna became interested in eResearch when she managed an APSR-funded project auditing the sustainability of data management practices used by a number of data-intensive research communities at The University of Melbourne. Her special interest is in research information and data management policy and support infrastructure, and she works closely with Melbourne Research to develop, review and audit research data management policy and code compliance. Anna is Manager, Information Management Services at The University of Melbourne and manages Melbourne’s Seeding the Commons (ANDS) project.Biography – Ann Borda
Dr. Ann Borda has held strategic and operational roles in academic and research organizations for over 10 years. Ann is currently Executive Director of the Victorian eResearch Strategic Initiative (VeRSI) which is an Australian State Government funded Program to provide a coordinated approach to accelerating the uptake of eResearch by researchers in the state of Victoria and nationally. Previously, Ann held the position of eResearch Programme Manager with the Joint Information Systems Committee (JISC – www.jisc.ac.uk) based at King’s College, London, at which time she was responsible for the quality delivery of multi-million government-funded projects in developing a UK wide e-Infrastructure to support research, and in facilitating community engagement and broader take-up of e-Science tools, services and resources. Concurrently, she is Associate Professor & Honorary Principal Fellow with the Department of Computer Science & Software Engineering at The University of Melbourne.
Richard Sinnott, Anthony Stell A Virtual Research Environment for International Adrenal Cancer Research
For many research areas, the need to collaborate across organisational and in certain cases national boundaries is essential. This is especially the case when dealing with rare diseases, where a lack of data, information and/or shared expertise can delay progress in understanding, diagnosing and treating such diseases. Research into adrenal tumours, understanding their different molecular mechanisms and in turn developing targeted personalised treatments, is one such area where co-ordination of international cancer efforts is essential. The European Network for the Study of Adrenal Tumours – Structuring clinical research on adrenal cancers in adults (ENS@T-CANCER – www.ensat-cancer.eu) project has been funded by the European Union to establish a state-of-the-art Virtual Research Environment (VRE) supporting all aspects of international research and collaboration into the aetiology and diagnosis of adrenal cancer, and into establishing optimal treatment strategies for patients. In developing this platform it is essential that access to clinical and biological data (samples) is strictly enforced according to ethical arrangements. This presentation outlines the goals of the ENS@T-CANCER project and describes the ongoing implementation work. We show how security-oriented information can be collected and tracked through the VRE, including support for the collection of clinical data sets and their linkage with associated bio-samples in an ethically-driven framework. We also outline how this project is expected to shape many related efforts around the Parkville Precinct, where clinical and biological matchmaking services across a range of clinical research areas are to be supported.Biography – Anthony Stell
Anthony Stell is a clinical software developer at the Melbourne eResearch Group. He holds an MSc in Information Technology from the University of Glasgow, an MPhys (Hons) in Astrophysics from the University of Edinburgh, and is a Chartered IT Professional (CITP) with the British Computer Society. He has previously been the Glasgow representative of the UK Grid Engineering Task Force (ETF) and one of the lead developers on the EU FP7 Avert-IT project – an initiative to create a hypotensive event prediction system through the collection of real physiological data from specialist neurosurgery centres around Europe – and is now the senior developer for the ENSAT-CANCER project, a distributed digital repository and biobanking project specialising in the linkage of information and samples concerning rare adrenal tumours in Europe.Biography – Richard Sinnott
Professor Richard Sinnott is the Director of eResearch at the University of Melbourne.
Prior to coming to Melbourne in July 2010 Prof. Sinnott was the Technical Director of the UK National e-Science Centre; Director of e-Science at the University of Glasgow; Deputy Director (Technical) for the Bioinformatics Research Centre also at the University of Glasgow and the Technical Director for the National Centre for e-Social Science in the UK.
He holds a PhD from the University of Stirling, Scotland where his research was based on the architectural design of open distributed processing systems (he edited numerous international standards in this area); an MSc in Software Engineering from Stirling, and a BSc Hons in Theoretical Physics from the University of East Anglia in Norwich.
Richard has published extensively across a range of computing science research areas, from theoretical computer science through real-time and distributed systems, with a more recent focus on provisioning platforms for research scientists, particularly in domains requiring finer-grained security.
Richard Sinnott Classifying Data Sharing Models for e-Health Collaborations
Seamless access to clinical and biomedical data sets is the cornerstone upon which the vision of e-Health depends. A multitude of projects and initiatives developing e-Health infrastructures providing access to a range of clinical and biomedical data sets have occurred [1-3]; however, by and large no clear consensus on the best way to build e-Health infrastructures has been established. Rather, different projects and initiatives have typically developed their own software solutions for their own particular needs, and reuse of existing systems has been the exception rather than the rule. This is not surprising in many respects, given the heterogeneity of many existing clinical systems, the rapid evolution taking place across the post-genomic space (genomics, proteomics, metabolomics, etc.) and the numerous advances in imaging and diagnostic techniques. However, it is clear that the future success of the e-Health vision and its translation into personalised medicine, improved healthcare and the many other opportunities identified in the post-genomic age depends upon lessons learnt in developing e-Infrastructures, and ultimately upon being able to classify and compare solutions. Ideally a common architectural framework would exist against which reference implementations could be compared – much like the OSI protocol stack. However, no such overarching framework exists, and it would appear that, at least for the foreseeable future, e-Health infrastructures are likely to remain largely ad hoc and uncoordinated across different communities in different countries. In this context, establishing best practice and comparing different solutions is non-trivial, as they are typically developed with different scenarios and different communities in mind. The aim of this presentation is to structure the discussion of e-Health infrastructures through the fundamental architectural data sharing patterns that are at the heart of many kinds of e-Health collaboration.
Thus, whilst no single common architecture for e-Health infrastructures exists, common patterns of data sharing do exist to support e-Health collaborations – at least at an architectural/conceptual level, as opposed to the lower-level implementation patterns found in the work of Gamma [4]. We identify such patterns and outline their advantages and disadvantages. In describing these patterns we do not focus in detail on the technologies used to implement them per se; rather, our focus is on the fundamental nature of the data sharing and collaboration models they support. Each pattern is illustrated with an exemplar project, along with the advantages and disadvantages of the pattern itself. It is intended that this classification will help better shape future e-Health infrastructure discussions, provide a basis for comparing solutions and offer insight to others developing their own e-Health solutions.Biography – Richard Sinnott
Professor Richard Sinnott is the Director of eResearch at the University of Melbourne.
Prior to coming to Melbourne in July 2010 Prof. Sinnott was the Technical Director of the UK National e-Science Centre; Director of e-Science at the University of Glasgow; Deputy Director (Technical) for the Bioinformatics Research Centre also at the University of Glasgow and the Technical Director for the National Centre for e-Social Science in the UK.
He holds a PhD from the University of Stirling, Scotland where his research was based on the architectural design of open distributed processing systems (he edited numerous international standards in this area); an MSc in Software Engineering from Stirling, and a BSc Hons in Theoretical Physics from the University of East Anglia in Norwich.
Richard has published extensively across a range of computing science research areas, from theoretical computer science through real-time and distributed systems, with a more recent focus on provisioning platforms for research scientists, particularly in domains requiring finer-grained security.
Richard Sinnott, Martin Tomko, Gerson Galang, Robert Stimson Towards an e-Infrastructure for Australian Urban Research
Australian urban and built environment research covers a multitude of research disciplines investigating social, economic and physical phenomena at a multitude of spatial and temporal scales, across diverse aggregation levels (from individuals through cohorts to populations) and across a range of scenarios, e.g. public health, voting patterns, traffic, energy and water. The development of a common software platform (e-Infrastructure) meeting the needs of such research communities must tackle many challenges associated with data-intensive areas of research. This includes dealing with data sets from a multitude of federal, state, municipal, academic and private institutions, all of whom hold vast arrays of heterogeneous data. For many researchers these data sets are difficult to discover, access, interrogate and use more generally. It is also unrealistic to expect researchers to always have the technical capability and capacity to handle such large amounts of diverse data, to develop data processing tools making use of such data sets, or indeed to run computationally intensive simulations and models based on these data sets. Islands of expertise and islands (silos) of data currently exist that have fragmented urban research and thwarted a holistic approach to the study of the Australian urban and built environment system.Biography – Martin Tomko
Dr. Martin Tomko is the Senior Project Manager in charge of the Information Infrastructure Design of AURIN, and a Lecturer at the Faculty of Architecture at the University of Melbourne. He has a background in spatial information science, with experience in geospatial infrastructures and data handling. Most recently, he completed post-doctoral research in Zurich, Switzerland, within the large EU FP6 project TRIPOD.Biography – Gerson Galang
Gerson C. Galang is a software developer (e-Enabler) at the Melbourne eResearch Group. He was formerly with the Victorian eResearch Strategic Initiative (VeRSI). To date, he has been the primary software developer on the AURIN project.Biography – Robert Stimson
Prof. Robert J. Stimson is the Director of AURIN and Professor Emeritus in the School of Geography, Planning and Environmental Management, The University of Queensland, a Fellow of the Academy of the Social Sciences in Australia and of the Regional Science Association International, and former Convenor of the ARC Research Network in Spatially Integrated Social Science. He is the author of 48 books and monographs and more than 300 scientific papers and book chapters.Biography – Richard Sinnott
Prof. Richard O. Sinnott is Director of eResearch at the University of Melbourne. Before this he was the Technical Director of the National e-Science Centre at the University of Glasgow; Deputy Director of the Bioinformatics Research Centre (also in Glasgow) and the Technical Director of the National Centre for e-Social Science. He is the Technical Architect of the AURIN project. He has been involved in an extensive portfolio of e-Science projects in the UK, Europe and now in Australia.
Rod Harris, Jon Smillie, Ben Evans An archive for optical astronomy
To ensure the ongoing availability and use of data products from various optical astronomy projects throughout Australia, a national astronomy data archive has been created. This data archive includes dedicated resources for hosting and long-term support of nationally significant datasets, and a suite of Virtual Observatory (VO) web services implemented on top of these datasets, designed to allow scientists to discover, access and analyse various observations in a consistent fashion. In this paper we will present the work done in implementing select VO services for the SkyMapper, WiggleZ and GAMA projects.Biography – Jon Smillie
Jon Smillie is a programming consultant in the ANU Supercomputer Facility with a focus in Astronomy and Astrophysics.Biography – Rod Harris
Rod Harris is a programmer in the ANU Supercomputer Facility.Biography – Ben Evans
Ben Evans is the Head of the ANU Supercomputer Facility.
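The VO services described in the abstract above typically implement IVOA protocols such as Simple Cone Search, in which a positional query is an HTTP GET carrying RA, DEC and SR (search radius) parameters in decimal degrees. The following is a minimal sketch of building such a query; the endpoint URL and coordinates are invented for illustration and are not the actual SkyMapper, WiggleZ or GAMA service addresses:

```python
from urllib.parse import urlencode

def cone_search_url(base_url, ra_deg, dec_deg, radius_deg):
    """Build an IVOA Simple Cone Search query URL.

    Parameters follow the SCS convention: RA and DEC in decimal
    degrees, SR as the search radius in degrees.
    """
    params = urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})
    return f"{base_url}?{params}"

# Hypothetical endpoint, for illustration only.
url = cone_search_url("https://example.org/scs/skymapper", 187.25, -45.0, 0.1)
print(url)
```

The service would respond with a VOTable of catalogue rows within the given radius, which client tools can then analyse in a consistent fashion across archives.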
Salim Taleb, Peter Hicks Connecting the dots to unify research data and metadata
Curtin University is developing a research data management system that aims to provide researchers with the necessary tools to plan, create, store, access, share, describe, archive and curate their research data. The components of the research data management system are built on the core principle of information reusability. Each component will create, manage and propagate information that can be utilised or reused by other components of the system. This interconnection provides value that is greater than that of the individual tools. The three major components forming the data management system are the data management planning tool (DMP), the data management layer, notionally labelled the Research Data Portal (RDP), and a metadata management system, named the Metadata Hub. The DMP will assist researchers in determining their requirements for data capture, storage, access, reuse, ownership, archival and preservation. The RDP is a data management layer overseeing a number of data storage solutions. The information gathered in the DMP will be utilised by the RDP to initiate data storage for awarded grants, creating a data storage location with default access and security rights, file and folder structures, and default collection-level descriptions. The RDP contains an engine that utilises the DMP information to recommend or create default connections to storage solutions. The Metadata Hub will be an aggregator of information about research data (from the RDP) and about researchers and their respective projects (from other institutional systems). This design enables domain-specific data capture workflows and systems to be integrated with institutional metadata and data capture channels.Biography – Salim Taleb
Salim Taleb is currently a member of the eResearch Support Team at Curtin University in Bentley, Western Australia. He is the lead business analyst working on the Australian National Data Service (ANDS) projects at Curtin University. Salim has extensive requirements-gathering and communication skills, has been involved in the analysis and documentation of large-scale projects, and has a demonstrated ability to manage multi-faceted programs and effectively interface with all levels within an organisation. His extensive knowledge and experience stem from many years as a business analyst in the higher education, telecommunication and engineering industries.Biography – Peter Hicks
Peter Hicks is currently responsible for the eResearch Support portfolio in Curtin IT Services, focused on helping researchers enhance their research outcomes through the use of ICT. For the past 10 years he has worked in various ICT service roles at Curtin University, including dial-up helpdesk support, network systems engineering, team coordination, and leading School/Faculty teams through consolidation and restructure.
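The abstract above describes DMP answers flowing into the RDP to create default storage for awarded grants. A sketch of how such propagation might look; all field names, defaults and the folder layout here are hypothetical illustrations, not Curtin's actual schema:

```python
# Sketch of how DMP answers might drive default storage provisioning
# in the RDP. All field names and defaults are hypothetical.

def provision_storage(dmp):
    """Derive a default RDP storage configuration from a DMP record."""
    sensitive = dmp.get("contains_sensitive_data", False)
    return {
        "project": dmp["project_title"],
        # Sensitive data gets a more restrictive default access right.
        "access": "private" if sensitive else "project-members",
        "folders": ["raw", "processed", "documentation"],
        # Default collection-level description, reusing DMP answers.
        "collection_description": {
            "title": dmp["project_title"],
            "custodian": dmp["chief_investigator"],
        },
    }

config = provision_storage({
    "project_title": "Groundwater Survey",
    "chief_investigator": "A. Researcher",
    "contains_sensitive_data": True,
})
print(config["access"])  # sensitive data defaults to private access
```

The point of the sketch is the reuse principle the abstract describes: answers captured once in the DMP are consumed by the RDP, and the same records could later feed the Metadata Hub.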
John Taylor The CSIRO eResearch Strategy: Transforming the way research is done in CSIRO
In order to participate in research that is increasingly enabled by ICT infrastructure, CSIRO has developed a strategic framework for developing its enterprise-wide capabilities in the areas of:
• Data Management
• Scientific Computing Infrastructure
• Advanced collaborative & visualisation environments
• Scientific tools and services
The CSIRO eResearch technology and infrastructure strategy is improving and transforming the way that research is conducted in CSIRO. In this presentation we will provide an update on progress in implementing the CSIRO eResearch strategy.Biography
Dr Taylor is currently Director, CSIRO eResearch & Computational and Simulation Sciences. Dr Taylor has written more than 140 articles and books on computational and simulation science, climate change, global biogeochemical cycles, air quality and environmental policy, from the local to the global scale, spanning science, impacts and environmental policy. Dr Taylor has worked as a Computational Scientist and group leader both at the Mathematics and Computer Science Division, Argonne National Laboratory and at the Atmospheric Science Division at Lawrence Livermore National Laboratory. Dr Taylor was Senior Fellow in the Computation Institute at the University of Chicago. Dr Taylor has served on the Advisory Panel of the Scientific Computing Division of US National Center for Atmospheric Research (NCAR) and the US National Energy Research Scientific Computing Center NUGEX Advisory Committee. Dr Taylor is a Fellow of the Clean Air Society of Australia and New Zealand.
Kerry Taylor, Michael Compton, Laurent Lefort Semantically-Enabling the Web of Things: The W3C Semantic Sensor Network Ontology
The ecological and agricultural sciences, industrial processes, and consumer gadgets are increasingly relying on live data streams generated by large numbers of heterogeneous sensors to deliver knowledge and services. All the traditional problems of data management and data integration arise in this context of real time data, plus a few more. Semantic technologies are being rapidly adopted for traditional data management and data integration problems, and there are many international research projects now using semantic technologies for sensor network data management. The World Wide Web Consortium (W3C) established an Incubator Group (SSN-XG) in March 2009 to develop ontologies for describing sensors and methods for using those ontologies for annotation, especially in the context of the Open Geospatial Consortium’s (OGC) Sensor Web Enablement standards. The Group completed its work in June 2011 with the publication of the final report, including the SSN OWL 2 ontology, use cases, extensive documentation and several worked examples. We present the ontology and some of the ways it is being used.Biography – Kerry Taylor
Kerry Taylor has been a computer scientist in CSIRO for over 15 years, working in areas broadly identified as data management, most often with a focus on e-science. Kerry holds a BSc (Hons) in computer science from the University of NSW and a PhD in computer science and information technology from the Australian National University, where she is also currently an adjunct Associate Professor. Kerry leads the semantic data management research group in the CSIRO ICT Centre, was a founding co-chair of the W3C Semantic Sensor Network Incubator Group, co-chairs the series of Semantic Sensor Network Workshops held annually with the International Semantic Web Conference, and also co-chairs the annual Australasian Ontology Workshop. In 2012 she will chair the Mobile and Sensor Web track of the international Extended Semantic Web Conference (ESWC).Biography – Michael Compton
Michael Compton was awarded a BSc (Hons) in computer science and information technology from the Australian National University and a PhD from the University of Cambridge, UK. His research work has focused on fundamental aspects of logic, ontologies and other formal representation techniques, as well as applications in data translation, data provenance modeling and sensor composition. Michael was the lead editor of the SSN ontology for the W3C Semantic Sensor Network Incubator Group.Biography – Laurent Lefort
Laurent Lefort graduated in computer science (Engineering Degree) in 1983 from the Ecole Nationale Supérieure d’Informatique et de Mathématiques Appliquées de Grenoble (ENSIMAG), France, and has been a researcher at the CSIRO ICT Centre in Canberra since 2001. He is an ontologist in the team working on the application of semantic web technologies to develop environmental sensor networks for agricultural meteorology, biodiversity, water and climate change research. His current research interests include the design of ontologies, linked datasets and semantic mashups, and their use in data-intensive research service infrastructures. He has also served as a co-chair of W3C’s Semantic Sensor Network Incubator Group (XG) and as the W3C Australia Office manager.
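As a toy illustration of the style of sensor annotation the SSN ontology described above enables: real SSN data is OWL 2/RDF and would be queried with SPARQL, so the triples, identifiers and term names below are simplified stand-ins rather than the actual ontology vocabulary:

```python
# Toy triple store illustrating SSN-style sensor annotation.
# Real SSN data is OWL 2/RDF queried via SPARQL; these strings
# are simplified stand-ins for the actual ontology terms.

triples = [
    ("station42", "rdf:type", "ssn:Sensor"),
    ("station42", "ssn:observes", "airTemperature"),
    ("obs1", "ssn:observedBy", "station42"),
    ("obs1", "ssn:observationResult", "21.5"),
]

def match(pattern):
    """Return triples matching a (s, p, o) pattern; None is a wildcard."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Which sensors observe air temperature?
sensors = [s for s, _, _ in match((None, "ssn:observes", "airTemperature"))]
print(sensors)  # ['station42']
```

Annotating heterogeneous sensors with a shared vocabulary in this way is what lets queries like the one above work uniformly across live data streams from different sources.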
Joe Thurbon Lessons from Intersect’s Engagement Experience
Over the last three years Intersect has grown from a single employee to approximately forty. One constant during that period is that every non-trivial project and service we have provided has been based on highly interactive engagement with the research community: approximately one quarter of our staff are dedicated to engagement, and are embedded across our member organisations. Even within that constancy, we have tried many approaches, both strategic and tactical, to engage the research community. We’ve learned many lessons – some confirming our suspicions, some confounding our expectations. This presentation summarises the lessons we’ve learned, with a view to sharing our experience with the wider eResearch community. It will cover issues such as the challenges of being a distributed organisation, the importance of being research-driven vs technology-driven, the need for tailoring the engagement model to the individual needs of members, and other insights we’ve gained to maximise the value we deliver to our members.Biography
Dr Joe Thurbon is the Member Services Manager at Intersect, as well as the eResearch Analyst at Southern Cross University. He has a research background in logic and diagrammatic reasoning, and has practiced software engineering for almost 20 years. For the eight years prior to joining Intersect, Joe worked at CISRA, Canon’s Australian R&D company, researching and developing machine learning approaches to image processing problems. Joe has a BSc (Hons) from the University of Sydney in computer science and psychology, and a PhD in computer science from the University of New South Wales.
Conal Tuohy, Abigail Belfrage Public Record Office Victoria Crowdsourcing Transcription Project
Public Record Office Victoria (PROV), and the Victorian eResearch Strategic Initiative (VeRSI), are collaborating in a pilot software development of a crowdsourcing online transcription platform, through which members of the public can access images of public records, transcribe, tag and geo-locate them. The collaboration represents a melding of the “Gov 2.0” and “eResearch” strategies of the two organisations, and a utilisation of the emerging cultural activity of crowdsourcing that has the potential to create a valuable public information resource and a rewarding experience for participants.Biography – Abigail Belfrage
Abigail Belfrage is an historian working in Online Engagement at Public Record Office Victoria (PROV), the archives of the State of Victoria. In partnership with the Victorian eResearch Strategic Initiative she is working on a pilot project to create online transcription and text encoding software for public records held at PROV. As well as pursuing her own research interests in landscapes and places, she is passionate about how social media and other emerging web technologies can provide new ways for people to create and share knowledge about the places, people, technology and processes that they care about.Biography – Conal Tuohy
Conal Tuohy is an eResearch Business Analyst and software developer at the Victorian eResearch Strategic Initiative (VeRSI), working in the Digital Humanities area. Between 2002 and 2008 he was lead developer at the New Zealand Electronic Text Centre, where he built a platform for online publishing of the Centre’s collection of digitised New Zealand and Pacific books. From 2008 to 2010 he worked for the eScholarship Research Centre at the University of Melbourne, particularly in the area of archival metadata. His interests include the digitisation of cultural heritage, text encoding and knowledge representation.
Paul Walk Developer Community Supporting Innovation (DevCSI)
This presentation will describe the work of the DevCSI project, outlining some of its successful outputs and, in particular, demonstrating its relevance to publicly-funded research. The presentation will also outline some plans for the future and indicate opportunities for international collaboration – with an open invitation to delegates at the conference to engage with us and explore this potential.Biography – Paul Walk
Paul Walk is Deputy Director of UKOLN, based at the University of Bath in the UK, where he has worked since 2006. Prior to this he worked in the academic library at the University of North London and went on to lead the technical development in eLearning systems and related software at London Metropolitan University. At UKOLN, Paul is primarily engaged in developing UKOLN in its role as a JISC-funded ‘Innovation Support Centre’. He gives technical advice to the JISC with a particular focus on the standards and technologies related to resource discovery, and leads the Developer Community Supporting Innovation (DevCSI).
Nigel Ward, Tung-Kai Shyy, Syed Irfanullah, Friska Dhen Ungkara Analysis and Visualisation Tools for Spatially Integrated Social Science
The field of Spatially Integrated Social Science (SISS) recognises that much of the data that social scientists examine has an associated geographic location (for example, a survey respondent’s location). SISS systems use this geographic information as the basis both for integrating heterogeneous social science data sets and for visualising the results of analyses. However, sourcing data sets, understanding relationships between the data and the geography, and implementing appropriate statistical analysis techniques are all time-consuming and highly skilled processes. The UQ SISS System project aims to relieve social scientists of this burden. The project is developing online tools that allow researchers to quickly access rich Australian socio-spatial datasets (e.g. voting outcomes and census data), conduct statistical modelling and visualise spatial relationships between the results.
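One core geographic operation implied by the abstract above, assigning a point-located record such as a survey respondent to a region, can be sketched with a standard ray-casting point-in-polygon test; the region and coordinates below are invented for illustration:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside polygon (a list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Invented unit-square "region" and respondent locations.
region = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, region))  # True: respondent falls in region
print(point_in_polygon(2.0, 0.5, region))  # False: outside the region
```

Once each point record is assigned a region in this way, heterogeneous data sets (survey responses, voting outcomes, census aggregates) can be joined on the shared region identifier, which is the integration step the abstract describes.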
BoFs
Andrew Alexander Mobility Research Tools Roundtable. “A look at the current activity in the Research Sector with Mobile Devices and what they need to address within the changing research environment”.
Mobile devices are increasingly becoming a daily tool for improved productivity: email, messaging, personal reference, reminders, notes, recording voice, pictures or video, and accessing web-based applications, all from a pocket device. How is the research sector using mobile tools, and what are some of the applications mobile tools are providing to assist research outcomes? What is the future role for mobile devices in the research sector, and what direction should future development take to meet the sector’s needs?
The BoF will hear from a number of presenters about specific mobile applications and the value they have delivered in their work. This will be followed by a panel discussion with the audience to examine the likely future direction and sector needs for mobile devices, and to make recommendations on what should be developed for the future.Biography
Andrew is the Manager of Collaboration Services within ARCS. With over 20 years’ experience in the ICT industry, Andrew has spent the last 5 years working in the research and education sector. Prior to his role at ARCS, Andrew worked in the university sector for 2 years as a consultant on governance, technology management and business improvement issues. He obtained significant commercial experience as a Director of the IT companies Ram Computers and Computer Guru, and within the large corporate arena as Enterprise Development Manager at Commander and Senior Business Development Manager at Ipex. He obtained his Masters in Technology Management from the University of Queensland and is a graduate of the Australian Institute of Company Directors, where he was admitted as a Fellow in 2005. He also holds qualifications in ITIL, LANDesk and various management practices. He has a major interest in the areas of innovation development, knowledge management and organisational assessment, where he continues to maintain his interest and contribute to the wider community.
Kylie Bailin, Joanne Croucher Open and shut (or not): Conversations about data, access and openness
In this BoF we seek engagement with researchers and eResearch professionals to explore different approaches to having conversations about research data access and reuse. Rather than typifying open data as an all-or-nothing dichotomy, the discussion will be framed around the idea of a ‘continuum of openness’. Key areas to be explored include research communities’ expectations of reciprocity, and the changing expectations of funding agencies and publishers. Another topic for discussion is the current and future roles of libraries, data librarians and eResearch intermediaries in research data management. One of the biggest hurdles in beginning the eResearch discussion with researchers is explaining this spectrum of open data and quelling fears that all data will have to be completely open. This discussion will look at the complexities involved in supporting researchers and informing them about the different levels of openness.
This BoF will also look at education and training as it relates to open data and building capabilities among both support professionals and researchers.Biography – Kylie Bailin
Kylie Bailin is the Outreach Team Leader for the Science, Engineering and Medicine Unit at the University of New South Wales Library where she manages Faculty Outreach Librarians. It is the role of the Outreach Librarians to be the primary point of contact for the Faculties and to communicate with them on library resources and services. The Outreach Librarians have already started to act as eResearch Intermediaries and we envisage this conversation increasing in the future. Kylie has also previously worked as the Outreach Librarian to Engineering and before that, was in the Document Services Team. Kylie has a background in environmental management and her research interests include user experience and library design.Biography – Joanne Croucher
Joanne Croucher is based at UNSW Library within the Library Repository Services team. Current projects include the UNSW ANDS-funded Seeding the Commons project and the NCHSR Clearinghouse, a subject-based repository for Australian resources related to social and policy research in HIV/AIDS, hepatitis C and related diseases. She has previously been involved in the UNSW ‘Researching the Researchers’ project and also tutored in Information Management at the University of Technology, Sydney. Joanne has a background in health research, through previous roles at the Department of Epidemiology and Preventive Medicine, Monash University and the NSW Central Cancer Registry.
Andrew Cheetham, Andrew Leahy, Peter Bugeia – Starting up in eResearch? How to Hit the Ground Running
The session aims to provide young research institutions that are incubating an eResearch capability with practical advice on how to get things rolling in the right direction while delivering early value to researchers. It is hoped that individuals from institutions that have already been through the startup period will attend the session to share their experiences.
Biography – Andrew Cheetham
Professor Andrew Cheetham is the Pro Vice-Chancellor (Research) at the University of Western Sydney (since 2007), leading strategic development and management of research and research training in the University, with the aim of providing a research-rich environment. His background is in plasma and fusion physics, where he worked for 15 years in laboratories in Germany, Switzerland and Australia and, ultimately, as a Principal Scientific Officer at the Euratom nuclear fusion project, the Joint European Torus (JET), UK, before returning to Australia in 1990 to work in the higher education sector, where he has spent the last 20 years. Throughout his career Professor Cheetham has been involved in computer-controlled instrumentation, data acquisition, analysis and display systems, and he carries his enthusiasm in this area through into modern eResearch techniques. He is currently a member of the DIISR Infrastructure Roadmap Expert Working Group for eResearch and is Deputy Chair of the NeCTAR Board.
Biography – Andrew Leahy
Andrew is the eResearch Technical Advisor at the University of Western Sydney. Prior to his current role he provided technical research support for the university research centres. Andrew has 20 years of experience delivering information & communications technology for academic and enterprise environments.
Biography – Peter Bugeia
Peter is the Intersect eResearch Analyst for the University of Western Sydney. Peter has 27 years’ IT experience across a wide range of industries including medicine, banking and finance, and media. He has worked in the commercial, not-for-profit and public sectors and has held various roles from Senior Software Engineer and Test Manager to Project Manager, Enterprise Architect and Business Analyst.
Anne Cregan, Joe Thurbon, Bill Appelbe, Peter Blain, Ann Borda, Graham Chen, Luke Edwards, Mary Hobson, Phil Tannenbaum – eResearch State Agencies
This Birds of a Feather session is for sharing approaches and experience between the various Australian state-based eResearch agencies – Intersect, QCIF, VeRSI, VPAC, TPAC, eRSA and iVEC. It provides an opportunity for those on the frontline of eResearch to find out how the other eResearch agencies are approaching engagement with the research community and national initiatives, and to discuss alternative models for providing eResearch services and products, and their pros and cons. The key goal of the BoF is for those doing engagement at the state level to network with, learn from and provide feedback to others engaged in similar activities in other states.
Biography – Dr Anne Cregan (Convenor)
Anne Cregan is an eResearch Analyst at Intersect. Her commercial IT background encompasses programming, business analysis, data mining and senior level management, as well as project management and co-ordination. Anne has a PhD in Computer Science from UNSW, specialising in semantic web technologies, and a BSc (Hons) in Mathematical Statistics and Psychology from the University of Sydney.
Biography – Dr Joe Thurbon
Joe Thurbon is the Member Services Manager at Intersect, as well as the eResearch Analyst at Southern Cross University. He has a research background in logic and diagrammatic reasoning, and has practiced software engineering for almost 20 years. For the eight years prior to joining Intersect, Joe worked at CISRA, Canon’s Australian R&D company, researching and developing machine learning approaches to image processing problems. Joe has a BSc (Hons) from the University of Sydney in Computer Science and Psychology, and a PhD in Computer Science from the University of New South Wales.
Biography – Dr Bill Appelbe
Bill Appelbe has been the founding CEO and Chief Scientist of the Victorian Partnership for Advanced Computing (VPAC) since 2000. Bill completed an undergraduate honours science degree at Monash University in 1974, then a Masters followed by a Doctorate in Computer Science and Electrical Engineering in 1978 at the University of British Columbia. Subsequently, he was employed as an Assistant Professor at the University of California, San Diego (1979-1986), then as an Associate Professor at Georgia Tech (1987-1998). Bill’s research interests are in parallel programming tools, software engineering and software frameworks. Bill’s research in parallel programming dates back to the early 1980s, with the development of a unique parallel programming static debugging tool, followed by ongoing development of interactive parallelization toolkits and animation tools for parallel programming (funded by the NSF, IBM, and LANL). Bill is an honorary faculty member of Monash University and RMIT.
Biography – Dr Ann Borda
Ann Borda is the Executive Director of the Victorian eResearch Strategic Initiative (VeRSI). Ann has held senior operational and management roles within academic, research and public sector organisations in the UK and Canada, with substantial experience in overseeing and delivering large-scale initiatives to further education and research. Ann has formerly been responsible for engaging with eResearch communities across the UK in order to facilitate broader and more effective use of national eInfrastructures such as Grid services and data facilities, as well as support for new capabilities and research practices to enable leading edge research. This involved working closely with key stakeholders across the UK Research Councils, government, eScience Centres, and the JISC Committee for the Support of Research, among others, to identify national requirements. Priority areas consisted of the impact on institutions, Research Councils and communities of new technologies, relationships to industry, and the support infrastructures needed for researchers. Among other roles, Ann has overseen the UK Open Source Software Advisory Service and sat on its Advisory Board, participating in discussions about national dissemination efforts, sustainability and licensing. Additionally, Ann has pursued academic research on informatics, human-computer interaction and collaborative technologies, and has undertaken consultative projects on soft system design, data modeling and information systems development.
Biography – Dr Peter Blain
Peter Blain is the software development manager and software architect at the Tasmanian Partnership for Advanced Computing (TPAC) at the University of Tasmania. He is the project manager for a number of eResearch projects at TPAC, including the Marine and Climate Data Discovery and Access Project (MACDDAP). Dr Blain received his PhD from the School of Computing and Information Technology at Griffith University in 2007, and a Bachelor of Engineering in computer systems from the University of Queensland in 1992. Prior to joining TPAC in 2008, he worked as a freelance software engineer/consultant for large Australian and international financial services companies including the Bank of Tokyo, HSBC, Westpac, the Commonwealth Bank of Australia, and the National Australia Bank.
Biography – Dr Graham Chen
Graham Chen is the eResearch Manager of the Queensland Cyber Infrastructure Foundation (QCIF). He is responsible for managing QCIF’s extensive HPC and research data storage infrastructure and providing eResearch support services to QCIF’s six member universities in Queensland. He also manages a range of QCIF Data Intensive Computing projects, as well as QCIF’s interactions with national eResearch programs such as RDSI, NeCTAR, AAF, ARCS, ANDS and NCI. Graham worked in commercial software development and international R&D programs in the IT and telecommunications industry for over 20 years, and was the Chief Technology Officer of CiTR, a commercial R&D and software services company originating from UQ. Graham received his PhD in Computer Science from Heriot-Watt University, UK in 1986, and a BSc in Computer Science from Nanjing University, China in 1982.
Biography – Luke Edwards
Luke Edwards is employed as a Marine Data Manager by IMOS – eMII (Integrated Marine Observing System – eMarine Information Infrastructure), iVEC, WAMSI (Western Australian Marine Science Institution) and Curtin University. He is part of the iVEC eResearch program and is working with the marine community to create a WA Node of the Australian Ocean Data Network. He is also working on the AusCover component of TERN (Terrestrial Ecosystem Research Network). Previously he worked in State Government and the university sector, focusing on geographic information systems and spatial data / metadata management. Luke holds a BSc (Env. Sci.) with first-class honours from UWA.
Biography – Mary Hobson
Mary Hobson is the Director of eResearch SA, a joint venture between the University of Adelaide, Flinders University and UniSA. Mary started her technology career in 1975 programming for the Ministry of Defence in the UK. She went on to own a software house and then lectured in systems analysis and management information systems for 10 years. In the early 1990s she went to work in Russia with the Moscow University of Microelectronic Technology, setting up an innovation park. She worked with Russian engineers starting an independent integrated circuit design house, with clients including GEC Plessey Semiconductors, Intel and Alcatel. She also set up a technology transfer consultancy, introducing western companies to technologies developed in the research institutes in the former USSR. In 2005 she moved to the Polytechnic sector in New Zealand, working as a Head of School and later a member of senior management. She became Director of eResearch SA in August 2010.
Biography – Phil Tannenbaum
Phil Tannenbaum is the Centre Manager at the Victorian Partnership for Advanced Computing (VPAC). He has more than 35 years’ experience in supercomputing and related technologies, including senior positions at the Bureau of Meteorology, NEC, and Cray. He sat on teams that defined next-generation supercomputers at NEC and Cray, including architecture, high performance peripherals, and input-output subsystems. His earlier career included technical positions with Control Data Corporation, NASA contractors, and Texaco Geophysical, all focused on high end scientific systems. He holds an M.Sc. (University of Houston, USA) and a B.Sc. (Old Dominion University, USA). He currently advises and assists the research community regarding high performance computing systems, and manages the VPAC HPC Centre.
Glenn Moloney, Steve Manos, Tom Fifield, Bernard Meade – The NeCTAR Project and Programs: Clouds, Apps and Virtual Labs
Participants in this session are invited to discuss the NeCTAR Project including the four NeCTAR Programs: Virtual Laboratories, eResearch Tools, the Research Cloud and National Server Programs, with a focus on the experience and lessons learned from the first node of the Research Cloud at the University of Melbourne, including:
· Deploying the first node of the Research Cloud at the University of Melbourne
· Research Applications in the Cloud – developing an ecosystem of cloud apps
Biography – Glenn Moloney
Associate Professor Glenn Moloney has an established track record in eResearch leadership and industry-based management, having worked with the European EGEE eScience program and previously as a project leader with the APAC national grid program. His most recent position has been as Associate Director eResearch at Victoria University.
Biography – Dr Steven Manos
Dr Steven Manos, Manager, Research Services in the University’s Information Technology Services, was previously at University College London, where he was involved in research that aimed to produce a unique prototype system for studying brain blood flow in hypertension. “To understand how organisms develop and function, we need to not only understand how the constituent parts (such as cells, organs and tissues) operate, but how these parts interact,” Dr Manos says. “By using computers, we open the door to a radically new way of treating disease; the effectiveness of drugs or surgery can be simulated and customised for a particular person by using their genes or physical characteristics as input to a computer simulation.”
Biography – Bernard Meade
Bernard Meade is a Collaboration and Research Analyst with ITS Research Services at the University of Melbourne. He has managed the department’s Computer Visualisation Facility and the University’s tiled visualisation cluster, the Optiportal, and continues to work in the field of emerging technologies.
Bernard has recently been involved in the Unimelb Research Cloud pilot program as the Customer Engagement Officer, recruiting pilot users from a variety of disciplines to participate in the pilot.
Biography – Tom Fifield
Tom Fifield is a software engineer based at The University of Melbourne in Australia. After gaining experience in grid computing working to support ATLAS at the Large Hadron Collider, Tom worked extensively with collaborators from numerous overseas locations to facilitate the Belle II experiment’s distributed computing design, and investigated interoperability between grid and cloud based solutions.
Tom is now in the role of Research Infrastructure Architect, and will design, develop and build the NeCTAR Research Cloud and its first node at Melbourne.
David Fulker, James Gallagher – OPeNDAP Server-Side Capabilities and Other Supports for Data-Intensive Science
This session will afford attendees opportunities to hear about and influence the roadmap being charted for OPeNDAP’s future. A major focus will be on increased server-side functionality in those client-server systems built around the (evolving) DAP protocol. Four BoF segments will cover key areas of advancement: extended forms of server-side subsetting, to fully embrace non-rectangular meshes and so-called unstructured grids (UGRIDs); support for user-specified inventories of OPeNDAP-accessible data sets; increased compatibility and commonality between OPeNDAP’s Hyrax and Unidata’s THREDDS Data Server (TDS); and the impact of cloud computing on needed data services. Attendees will be asked to describe use cases and provide other feedback on the likely utility of the advances being considered.
Biography – Dave Fulker
President of the nonprofit Open Source Project for a Network Data Access Protocol (OPeNDAP), Dave has focused his career on serving scientists and science educators via computing/networking advances. His teams have combined leading-edge technologies with end-user service, underpinned by expertise in both technical and social aspects of the information age. Dave directed the Unidata Program (at the University Corp. for Atmospheric Research) from inception until 2002, overseeing development of the Network Common Data Form (netCDF) and other software now considered critical infrastructure in the geosciences and other fields. Unidata is often considered an exemplar of community participation and data sharing. Before launching Unidata and serving as (founding) Executive Director of the National Science Digital Library (NSDL), Dave spent 18 years in software-development and leadership at the National Center for Atmospheric Research (NCAR). Dave is a Fellow of the American Meteorological Society (AMS) and recipient of the AMS Cleveland Abbe Award, the Educom Medal for Technology in Education and the NCAR Technology Advancement Award. Dave holds Master and Bachelor of Arts degrees in Mathematics from the University of Colorado, is a professional trumpet player (jazz and classical), and is President of the Boulder Philharmonic Orchestra.
Biography – James Gallagher
OPeNDAP Vice President, James leads all software development, including that for Hyrax, and has served as principal investigator on a significant fraction of OPeNDAP’s research grants. James holds Master and Bachelor of Science degrees respectively from University of Rhode Island and Wilkes University.
David Groenewegen – ANDS Projects BoF
During 2010-11, ANDS has been undertaking projects through its Seeding the Commons, Public Sector Data and Data Capture Programs, in partnership with a large number of Australian universities and research bodies. These projects are designed to improve the management of research data and to encourage the development of the Australian Research Data Commons. This Birds of a Feather session is designed to gather together all of those people who are engaged in ANDS projects, whether in universities or other research organisations or ANDS itself. Others are welcome to join in. Whatever your background, we would like you to come along and tell us about your project, share your experiences and link up with other members of the wider ANDS community.
Biography
David Groenewegen is currently the Director of Research Data at the Australian National Data Service (ANDS). Previously he has been ARROW Project Manager and ARCHER Project Director, and spent a number of years working in the areas of electronic information provision and information literacy at Monash University, and in information resources at the University of Ballarat.
Kerry Kilner, Jonathan Bollen, Richard Maltby, Deb Verhoeven, Ross Harley – Humanities and Creative Arts eResearch Consortium: A BoF for Practitioners
This BoF session focuses on the requirements, aspirations and opportunities for collaboration between research databases containing content relating to the humanities and creative arts sector in Australia. It is designed to be a useful brain-storming event that will enable the identification and articulation of the similarities, differences, overlaps and tensions between a range of research infrastructure initiatives that serve research activities and information provision in the humanities.
Biography – Jonathan Bollen
Jonathan Bollen is a senior lecturer in Drama at Flinders University in Adelaide. He coordinates research for the AusStage database of live performance (http://www.ausstage.edu.au), focusing recently on geographic mapping, network visualisation and audience research. He is co-author of Men at Play: Masculinities in Australian Theatre since the 1950s (with Adrian Kiernander and Bruce Parr, Rodopi 2008). His research on gender, sexuality and performance has been published in The Drama Review, Journal of Australian Studies and Australasian Drama Studies.
Biography – Ross Harley
Ross Harley is Professor and Head of the School of Media Arts at the College of Fine Arts, UNSW. He is an artist, writer, and educator in the field of new media and popular culture. His video and sound work has been presented at the Pompidou Centre in Paris, New York MoMA, Ars Electronica in Austria, and at the Sydney Opera House. He is currently Lead Chief Investigator for the ARC-funded Design and Art Australia Online (DAAO). Other current research projects include the ARC linkage projects “Video Art Online: from Ubu to Imperial Slacks”, investigating the history of video art in Sydney, and “Reconsidering Australian Media Art History in an International Context”.
Biography – Kerry Kilner
Kerry Kilner is the Director of AustLit Research and Publications (www.austlit.edu.au), based at the School of English, Media Studies and Art History at The University of Queensland. For the past 12 years she has been involved in the development of digital humanities initiatives to support research in a diverse range of fields relating to Australian literary and narrative culture. She was project manager and associate editor of the Bibliography of Australian Literature, a four-volume print bibliography (UQP, 2002-2008) which is likely to be the very last of its kind. Her current research, and the subject of her PhD, is the role of Wikipedia in the university.
Biography – Richard Maltby
Richard Maltby is Executive Dean of the Faculty of Education, Humanities and Law and Professor of Screen Studies at Flinders University, South Australia. He has been the lead Chief Investigator on two Australian Research Council Discovery projects examining the structure of the distribution and exhibition industry and the history of cinema audiences in Australia, which have developed the Cinema and Audiences Research Project (CAARP) database. He has also recently directed an ANDS-funded Seeding the Commons project creating a dataset from the 35,000 digital images of documents digitised from the General Correspondence files of the Motion Picture Producers and Distributors of America, Inc. (MPPDA), Hollywood’s industry trade association, covering the period from 1922 to 1939.
Biography – Deb Verhoeven
Deb Verhoeven is Chair of Media and Communication at Deakin University. Deb has a longstanding interest in the digital humanities. She has developed a range of film industry related datasets including the award-winning bonza database (www.bonza.rmit.edu.au); the ANDS-funded Screen Media Research Archive and the Cinema and Audiences Research Project (CAARP) database. From 2008-2011 she was Deputy Chair of the National Film and Sound Archive. In 2011 she was elected to the foundation committee of the Australasian Association for Digital Humanities (http://aa-dh.org/).
Valerie Maxville, Lyle Winton, Markus Buchhorn, Sam Searle, Anna Shadbolt, Belinda Weaver – eResearch Education and Training
eXtreme Research not only pushes the boundaries of technology, it challenges and extends techniques for data collection, analysis and communication. As new infrastructure is made available, we need to update researcher and support staff skills to fully utilise these resources. While the majority of researchers may not be eXtreme, their increasing reliance on technology throughout the research process is pushing the boundaries of education and training. With (near) zero resourcing, we struggle to keep up with the rapid change occurring across all research disciplines. With eResearch becoming mainstream, we face an eXtreme workforce development challenge: reskilling researchers, growing specialised research staff and updating graduate attributes and the academic curriculum. This BoF continues the conversation (2008-2010) on how to address these common issues through collaboration.
Biography – Valerie Maxville
Valerie Maxville leads the Education Program at iVEC, Western Australia’s supercomputing and eResearch facility. iVEC is a shared resource for advanced computing in WA, particularly serving the members at CSIRO, Curtin, ECU, Murdoch and UWA, and working to increase high-end computing and eResearch expertise in the wider community. Through the Education Program, Valerie coordinates the iVEC training program, promotes eResearch in WA, organises and supervises the summer internship program and is developing an outreach program for schools. She has a background in Computer Science (Curtin) and is completing a PhD in Software Engineering. Valerie is an active volunteer in the computing industry, chairing the Western Australia Section of the IEEE and IEEE Computer Society, and serving as a state and national representative with the Australian Computer Society.
Biography – Markus Buchhorn
Dr Markus Buchhorn manages Strategic Initiatives for Intersect, the peak eResearch body in NSW. He has been a leader in the emerging field of eResearch for over 15 years. Markus was previously Director of ICT Environments at the Australian National University, and is an advisor on the NCRIS Platforms for Collaboration program and the Australian National Data Service, and a member of the National e-Research Architecture Taskforce (NeAT). He has been involved in national infrastructure programs such as APAC (the Australian Partnership for Advanced Computing) for high performance computing services and GrangeNet for high performance network services, and continues to be actively engaged in a wide range of international eResearch initiatives.
Biography – Dr Lyle Winton
Dr Lyle Winton is the Associate Director for eResearch at Victoria University working with the Office for Research, IT Services and Library to develop e-research capability across VU. Lyle was formerly an analyst with the Victorian eResearch Strategic Initiative (VeRSI http://www.versi.edu.au/), a senior research support officer with the eScholarship Research Centre (http://www.esrc.unimelb.edu.au/) supporting the research community and eResearch initiatives at the University of Melbourne, and also a consultant to the DEEWR/JISC led international e-Framework for Education and Research (http://www.e-framework.org/). His research background is in experimental high energy physics and distributed computing, involving large-scale international collaborations. Lyle’s professional background is in the IT areas of infrastructure development, software design, development and project management.
Biography – Sam Searle
Sam Searle has been the Data Management Coordinator at Monash University since August 2008. In this role, she coordinates a range of activities relating to research data management, including policy frameworks, skills development, and researcher engagement. Sam is based in the Library, and works with the Monash e-Research Centre, Research Graduate School, Research Office, Records and Archives Service, and academic and professional staff across the university’s ten faculties. She also contributes to the community of data librarians emerging out of projects sponsored by the Australian National Data Service (ANDS). She previously worked in e-research business development at Victoria University of Wellington, on digital library projects at the National Library of New Zealand, and in other research, library, archives and publishing roles in universities in Australia, New Zealand and Scotland.
Biography – Anna Shadbolt
Anna Shadbolt is currently working as a training consultant with the VeRSI team to develop a plan to support a coordinated eResearch training program for Partners and other stakeholders in Victoria. This will include a number of targeted workshops and the mapping of existing eResearch training programs and resources, as well as documenting unmet training needs and requirements of research communities where possible. Anna’s interest in eResearch began when she managed an APSR-funded project auditing the sustainability of data management practices used by a number of data-intensive research communities at The University of Melbourne. Anna’s special interest is in research information and data management policy and support infrastructure, and she works closely with the Melbourne Research Office to develop, review and audit research data management policy and code compliance. Anna is Manager, Information Management Services at The University of Melbourne Library and is managing Melbourne’s Seeding the Commons (ANDS) project.
Biography – Belinda Weaver
Belinda Weaver is the Manager, Research Data Collections Service, at the University of Queensland Library. She is responsible for implementing the Library’s research data management strategy. She is currently working with Nigel Ward of UQ’s eResearch Lab on the Seeding the Commons@UQ project for ANDS. She is a former manager of the UQ eSpace repository and was the co-ordinator of the UQ testbed for the APSR project.
Teula Morgan, Lyle Winton – User-facing Data Services and Capability Building – Institutional Development (BoF)
We invite people to the second Birds-of-a-Feather discussion on user-facing data services and the underlying institutional models for building research data capability. In 2010 we were at an exploratory point in the development of research data services, helped along by the stimulus of external funding and attempts to engage with our research communities around research data. Based on feedback, we propose to meet again in 2011 to discuss what we’ve learnt, the service models we’re implementing, what has worked and what hasn’t, and how we’re building sustainable capabilities within our institutions. This BoF will consist of a summary of the 2010 discussion, followed by several two-minute summaries from people across the eResearch and ANDS community on how eResearch is supported in their institutions, lessons learned, and good and bad ideas. Presentations will include eResearch community members from Curtin University, CSIRO, Monash University, Queensland University of Technology, Swinburne University of Technology, University of Queensland and Victoria University, with more expected and all welcome! The short presentations will again be followed by an open discussion. We would like to discuss commonalities and differences in our approaches, and whether models have matured enough to form communities of interest and/or good practice.
Biography – Teula Morgan
Teula Morgan is Associate Director, Information Management at Swinburne Library, responsible for the Online Services and Liaison areas of the Library, and the University website. In her role at Swinburne Teula has successfully established content management systems and processes across a range of materials, such as institutional research outputs, online journals, images, and corporate publications, bringing a strong focus on improving the user experience to all projects. Teula is active in the institutional repository community and the library community generally, and has a background in online publishing and content management in the university, government, and community sectors.
Biography – Dr Lyle Winton
Lyle is the Associate Director for eResearch at Victoria University working with the Office for Research, IT Services and Library to develop e-research capability across VU. Lyle was formerly an analyst with the Victorian eResearch Strategic Initiative (VeRSI http://www.versi.edu.au/), a senior research support officer with the eScholarship Research Centre (http://www.esrc.unimelb.edu.au/) supporting the research community and eResearch initiatives at the University of Melbourne, and also a consultant to the DEEWR/JISC led international e-Framework for Education and Research (http://www.e-framework.org/). His research background is in experimental high energy physics and distributed computing, involving large-scale international collaborations. Lyle’s professional background is in the IT areas of infrastructure development, software design, development and project management.
Tim Pugh, Ben Evans, Lesley Wyborn – Harmonizing Spatial Data Services for Earth and Environmental Science Applications in Data Clouds and Petascale Computing
Spatial information and data service providers are building software service stacks and computing infrastructure for specific community HPC use cases and for requirements such as data staging for analysis, visualization, and modelling, aggregation services, server-side processing, web processing services, and virtual laboratories. The intent of the BoF is to bring together leading spatial information service providers and data producers from a variety of communities to disseminate knowledge about current service architectures, to discuss desired service and data interoperability and features within high performance data and computing environments, and to seek feedback from the user community. Representatives from data producers/collectors will discuss their data product formats and services, and interoperability with service providers and user communities. The representatives will range from collectors of large volume remotely sensed data sets to collectors of small scale data sets that store precise measurements of real world phenomena. Of further interest is to identify data production and services that could not be accommodated and why. Representatives from the service providers will discuss their business drivers and community use cases, software stack design, and the ICT infrastructure that influenced the design of the software services and requirements for data producers or other service providers. Of further interest is the need to identify use cases that could not be accommodated by the software services, data producers, or service providers, and why. Representatives from the user community will present use cases for accessing data products and services from either service providers or directly from data producers/collectors. Of interest is the users’ current assessment of services and products, and future needs for services and products.
Biography – Tim Pugh
Tim F. Pugh is a member of the Centre for Australian Weather and Climate Research (CAWCR), a partnership between CSIRO and the Australian Bureau of Meteorology. Tim is a scientific programmer specializing in computational fluid dynamics, parallel computing and application development, and internet-based information technology and data services.Biography – Ben Evans
Ben Evans is the Head of the ANU Supercomputer Facility at the Australian National University. He leads projects in HPC and Data-Intensive analysis, working with the partners of NCI and the research sector.Biography – Lesley Wyborn
Lesley Wyborn is a Senior Geoscience Advisor at Geoscience Australia and is a member of the Australian Academy of Science National Data in Science Committee, and the Executive Committee of the Earth and Space Science Informatics Focus Group of the American Geophysical Union.
Richard SinnottMartin Tomko Australian Urban Research Infrastructure Network
This BoF will provide an overview of the $20m EIF SuperScience Australian Urban Research Infrastructure Network (AURIN – www.aurin.org.au) project. It will provide a demonstration of the existing systems and outline how the work is progressing based on feedback from expert groups and the urban and built environment community at large. It will also outline the plans for the future for the work as a whole.Biography – Martin Tomko
Dr. Martin Tomko is the Senior Project Manager in charge of the Information Infrastructure Design of AURIN, and Lecturer at the Faculty of Architecture at the University of Melbourne. He has a background in spatial information science, with experience in geospatial infrastructures and data handling. Most recently, he completed post-doctoral research in Zurich, Switzerland, within the large EU FP6 project TRIPOD.Biography – Richard Sinnott
Professor Richard Sinnott is the Director of eResearch at the University of Melbourne.
Prior to coming to Melbourne in July 2010 Prof. Sinnott was the Technical Director of the UK National e-Science Centre; Director of e-Science at the University of Glasgow; Deputy Director (Technical) for the Bioinformatics Research Centre also at the University of Glasgow and the Technical Director for the National Centre for e-Social Science in the UK.
He holds a PhD from the University of Stirling, Scotland where his research was based on the architectural design of open distributed processing systems (he edited numerous international standards in this area); an MSc in Software Engineering from Stirling, and a BSc Hons in Theoretical Physics from the University of East Anglia in Norwich.
Richard has published extensively across a range of computing science research areas: from theoretical computing science; real-time systems; distributed systems; with more recent focus being based around provisioning of platforms for research scientists with specific focus on those domains requiring finer-grained security.
Joe ThurbonAnne CreganBill AppelbePaul CoddingtonRob CookLuke EdwardsPaola PetrelliPhil Tannenbaum Epic Fails in eResearch
Those involved in research understand that failure is par for the course. eResearch brings together research and software, applying technology to a rapidly changing, constantly evolving landscape. In this challenging environment, the software development process has the capacity for many and varied epic fails, and examples of failed services and infrastructure also abound. The purpose of this BoF is to provide a forum for a free and frank discussion of our most epic failures as eResearch organisations. We will laugh, cry, and most importantly learn from our mistakes. Presenters representing Australian eResearch state-based agencies will each describe a project, service or process that has spectacularly not worked, and share the lessons learned from the failure. As the airline industry illustrates, careful and intense scrutiny of disasters and their underlying causes is extremely fertile ground for identifying problems and issues, and provides a platform for making systematic improvements to standard operations and procedures. The goal of this BoF is to learn from one another’s mistakes, and to gain the insights of others into our own.Biography – Dr Joe Thurbon (Convenor)
Joe Thurbon is the Member Services Manager at Intersect, as well as the eResearch Analyst at Southern Cross University. He has a research background in logic and diagrammatic reasoning, and has practiced software engineering for almost 20 years. For the eight years prior to joining Intersect, Joe worked at CISRA, Canon’s Australian R&D company, researching and developing machine learning approaches to image processing problems. Joe has a BSc (Hons) from the University of Sydney in Computer Science and Psychology, and a PhD in Computer Science from the University of New South Wales.Biography – Dr Anne Cregan (Convenor)
Anne Cregan is an eResearch Analyst at Intersect. Her commercial IT background encompasses programming, business analysis, data mining and senior level management, as well as project management and co-ordination. Anne has a PhD in Computer Science from UNSW, specialising in semantic web technologies, and a BSc (Hons) in Mathematical Statistics and Psychology from the University of Sydney.Biography – Dr Bill Appelbe
Bill Appelbe has been the founding CEO and Chief Scientist of the Victorian Partnership for Advanced Computing (VPAC) since 2000. Bill completed an undergraduate honours science degree at Monash University in 1974, then a Masters and a Doctorate in Computer Science and Electrical Engineering in 1978 at the University of British Columbia. Subsequently, he was employed as an Assistant Professor at the University of California, San Diego (1979-1986), then as an Associate Professor at Georgia Tech (1987-1998). Bill’s research interests are in parallel programming tools, software engineering and software frameworks. Bill’s research in parallel programming dates back to the early 1980s with the development of a unique parallel programming static debugging tool, followed by ongoing development of interactive parallelization toolkits and animation tools for parallel programming (funded by the NSF, IBM, and LANL). Bill is an honorary faculty member of Monash University and RMIT.Biography – Dr Ann Borda
Ann Borda is the Executive Director of Victorian E-Research Strategic Initiative (VeRSI). Ann has held senior operational and management roles within academic, research and public sector organisations in the UK and Canada, with substantial experience in overseeing and delivering large-scale initiatives to further education and research. Ann has formerly been responsible for engaging with eResearch communities across the UK in order to facilitate broader and more effective use of national eInfrastructures such as Grid services and data facilities, as well as support for new capabilities and research practices to enable leading edge research. This involved working closely with key stakeholders across the UK Research Councils, government, eScience Centres, and the JISC Committee for the Support of Research, among others, to identify national requirements. Priority areas consisted of the impact on institutions, Research Councils and communities of new technologies, relationships to industry, and the support infrastructures needed for researchers. Among other roles, Ann has overseen the UK Open Source Software Advisory Service and sat on the Advisory Board to participate on discussions about national dissemination efforts, sustainability and licensing. Additionally, Ann has pursued academic research on informatics, human-computer interaction and collaborative technologies, and has undertaken consultative projects on soft system design, data modeling and information systems development.Biography – Dr Paul Coddington
Dr Paul Coddington is Deputy Director of eResearch SA, where he has managed eResearch projects and infrastructure since 2002. From 2007 to 2011 he also worked for the Australian Research Collaboration Service and managed the National eResearch Architecture Taskforce (NeAT) program, which funded projects to implement eResearch tools and services for many national research communities. He has over 25 years of experience in eResearch, working on university research and development projects in high-performance and distributed computing, computational science and research data management.Biography – Dr Rob Cook
Rob Cook is the CEO of QCIF (the Queensland Cyber Infrastructure Foundation), a not-for-profit company established by the Queensland universities to provide high performance infrastructure and services. His consulting company, Pangalax, has been active in the research sector helping with the establishment and development of major research and research infrastructure facilities, including several Cooperative Research Centres. Prior to Pangalax, Rob spent several years in North America leading Astracon, a start-up company providing broadband network provisioning software to the telecommunications industry, and before that CiTR, a telecoms software company in Brisbane.Biography – Luke Edwards
Luke Edwards is employed as a Marine Data Manager by IMOS – eMII (Integrated Marine Observing System – eMarine Information Infrastructure), iVEC, WAMSI (Western Australian Marine Science Institution) and Curtin University. He is part of the iVEC eResearch program and is working with the marine community to create a WA Node of the Australian Ocean Data Network. He is also working on the AusCover component of TERN (Terrestrial Ecosystem Research Network). Previously he worked in State Government and the university sector, focusing on geographic information systems and spatial data / metadata management. Luke holds a BSc (Env. Sci.) with first-class honours from UWA.Biography – Dr Paola Petrelli
Paola Petrelli is the earth system data librarian at the Tasmanian Partnership for Advanced Computing (TPAC) at the University of Tasmania. She is the data manager for the TPAC Oceans and Climate Digital Library Portal. Dr Petrelli received a PhD from the Department of Earth Science of the University of Siena (Italy) in 2005, and a Bachelor in Marine Environmental Sciences from the University of Venice (Italy) in 1999. Her research interests include modeling ocean and atmosphere interactions in Antarctica and sea ice processes. For the past 4 years she has been managing oceanographic and climatological datasets, acquiring extensive experience in web services and software used by the earth science research community.Biography – Phil Tannenbaum
Phil Tannenbaum is the Centre Manager at the Victorian Partnership for Advanced Computing (VPAC). He has more than 35 years’ experience in supercomputing and related technologies, including senior positions at the Bureau of Meteorology, NEC, and Cray. He sat on teams that defined next-generation supercomputers at NEC and Cray, including architecture, high performance peripherals, and input-output subsystems. His earlier career included technical positions with Control Data Corporation, NASA contractors, and Texaco Geophysical, all focused on high end scientific systems. He holds an M.Sc. (University of Houston, USA) and a B.Sc. (Old Dominion University, USA). He currently advises and assists the research community regarding high performance computing systems, and manages the VPAC HPC Centre.
Belinda WeaverNigel WardSuzanne Morris Joining the Dots
We invite people to a Birds-of-a-Feather discussion on how best to support research data management within a university. Rather than try to create individual, isolated services, we propose a ‘join the dots’ approach that will build a seamless service consisting of a web of referrals, advice and support across a range of units, underpinned by practicable university policy and procedures. We believe this is the best approach for a large, decentralised research university with multiple disciplines, but the approach would also suit smaller, more centralised universities.Biography – Belinda Weaver
Belinda Weaver is the Manager, Research Data Collections Service, at the University of Queensland Library. She is responsible for implementing the Library’s research data management strategy. She is currently working with Dr Nigel Ward of UQ’s eResearch Lab on the Seeding the Commons@UQ project for ANDS. She is a former manager of the UQ eSpace repository and was the co-ordinator of the UQ testbed for the APSR project.Biography – Dr Nigel Ward
Dr Nigel Ward is based in the eResearch Lab within the School of ITEE at The University of Queensland. He is currently managing UQ’s ANDS-funded Seeding the Commons and Data Capture projects. Nigel has a background in digital library technologies and practices such as metadata semantics, representation, storage and management, resource discovery protocols, and persistent identifiers. As part of his current role he is investigating semantic web approaches to representing the research data context, and resource oriented interfaces to metadata registries.Biography – Dr Suzanne Morris
In 2010, Dr Suzanne Morris commenced as UQ’s first Research Integrity Officer in the Office of the Deputy Vice-Chancellor (Research). Her previous long-term position was Education Officer for the CRC Sugarcane Biotechnology also based at UQ. Suzanne is passionate about research integrity and developing skills in research higher degree students and early career researchers to help them negotiate the rocky road to successful research careers.
Lesley WybornBryan HeidornAndrew Treloar Dark Data and the Long Tail of Science
The increased use of instruments, including sensor networks, is enabling the collection of large volume and increasingly higher resolution scientific data sets. Many of these are collected by airborne or satellite instruments, and the resultant data sets constitute proxies of real world phenomena (e.g. remote sensing satellite images that can proxy for vegetation types). New petascale computational infrastructures enable enhanced capabilities in modeling and simulation of these large volume data sets (Big Science). However, to be of value many of these large volume data sets need to be calibrated by precise measurements of point-located sample data. Unfortunately these observational data are small in volume and can be collected by many individual researchers as part of a multitude of sampling campaigns (Small Science). The collection of large volume data sets is usually done by a few specialized but well funded research teams, who have to undertake good data management practices in order to be able to manipulate, share and reuse their data. In contrast, the data from many small science projects is termed ‘dark data’ because it is rarely indexed, stored and described so it can be reused. Often, once the research paper has been written, the scientist rarely has the resources or the incentives to ensure that the underpinning data are preserved so that they can be reused by others and/or aggregated into more significant national scale data sets. Most initiatives to preserve and store data tend to focus on the large volume, homogeneous, file-based data sets, which can be Petabytes in size. Although the hardware is expensive, for these large volume data sets it is relatively cheap to develop software storage infrastructures that facilitate reuse and repurposing.
In contrast, although small science data sets are only in the range of Gigabytes, developing effective software data storage infrastructures that enable their reuse and repurposing is expensive. This BoF will discuss why dark data is increasingly important in the era of the data deluge, which is perceived to be dominated by large volume data sets. The BoF will provide a heads-up on international and Australian initiatives to deal with the increasingly complex issue of aggregating small scale sample-based data sets into homogeneous national data assets that can be reused and repurposed for use cases that the original collectors rarely considered.Biography – Bryan Heidorn
Dr Bryan Heidorn is Director of the School of Information Resources and Library Science at the University of Arizona and the president of the JRS Biodiversity Foundation.Biography – Andrew Treloar
Dr Andrew Treloar is the Director of Technology for the Australian National Data Service (ANDS, http://ands.org.au/), with particular responsibility for the Applications and Metadata Stores programs.Biography – Lesley Wyborn
Dr. Lesley Wyborn is Senior Geoscience Advisor at Geoscience Australia. She is leader of the GA/CSIRO Collaborative eResearch Project and the GA/NCI High Performance Computing Pilot.
Sponsor Presentations
Adrian De Luca, Hitachi Data Systems

Designing a Data Storage Architecture for e-Research
Although many of the technologies that underpin e-Research models such as high performance and grid computing have been around for some time, one of the unique challenges to e-Research is how to effectively store, access, share and leverage the large collections of data produced across a geographically dispersed environment. In this session, Adrian De Luca discusses the necessary attributes to build and scale an ideal storage architecture.

Biography
Adrian De Luca brings over 15 years of experience in Information Technology to Hitachi Data Systems. In his role as Director Pre-Sales & Solutions, Adrian is responsible for directing the technical pre-sales capability in Australia and New Zealand, managing the team of Systems Engineers, Solution Architects and Business Development Managers. He is also Chief Technology Evangelist for Hitachi’s solutions, working closely with Hitachi’s development facilities in Santa Clara, California and customers around the region to bring innovative solutions to the market. Adrian works actively with industry bodies such as the Storage Networking Industry Association (SNIA), as well as the analysts and press. He is also a popular keynote speaker at major industry events around Asia Pacific, has written a number of business white papers, and co-authored “Storage Virtualization for Dummies”.

Formerly, Adrian De Luca held senior positions in the Asia Pacific Product Management team, Australia/New Zealand Management team and Systems Engineering group at Hitachi Data Systems. He has a background in content management consulting at BroadVision Inc and held numerous Software Engineering positions at various enterprise software vendors.

Bill Mannel, Vice President, Product Marketing, SGI
Data Centric Computing
The increase in data-intensive workloads is the new paradigm in technical computing. Exponential data volume growth from all kinds of sources, such as telescopes, satellites, next-generation gene sequencers and high resolution scientific instruments, fuels this need. Data-intensive workloads are typically model-free computation with nothing to decompose, hence all data needs to be loaded to find insights. This presentation outlines SGI’s view on data-intensive workloads and what has been learnt from institutions using SGI solutions around the world.Biography
Bill Mannel is Vice President of product marketing at SGI. He has been at SGI for over twenty years in various roles of increasing responsibility, including Server Product Marketing, Advanced Graphics Product Marketing, Business Development and Sales Operations, and Customer Education. Prior to SGI, Bill worked at NASA-Dryden Flight Research Facility as a systems engineer on the X-29 aircraft test program. He began his career as a United States Air Force officer, doing structural flight testing including flutter, loads and vibro-acoustics on a variety of aircraft including the B-1B and F-16. Bill’s educational background includes a Bachelor of Science degree in Mechanical Engineering & Materials Science from Duke University and a Master of Business Administration degree from California State University-San Jose.
Clive Gold, CTO, EMC ANZ

Spend 100% of Your Research Resources doing Research!
Do you spend more time and budget creating or using your research environment? How much more could you achieve if you could double the amount of research you do?

In the past, creating a research environment that would scale large enough, perform fast enough and provide a platform for research was a major part of any project. Times have changed: improvements in multicore CPUs, FLASH technologies, and highly parallel, polymorphic database tools are revolutionising research.

EMC has invested in the technologies that allow you to store at scale, analyse in real time, and act upon your insights.

EMC has enabled researchers to break free from traditional storage technologies and achieve faster results in fields as diverse as genomics, biomedical, clinical mass spectroscopy, synchrotron based research, DNA sequencing, nuclear medicine, positron emission tomography, magnetic resonance imaging and device development.

EMC’s innovative Big Data solutions enable highly concurrent processing of research data, increasing productivity and speeding time-to-results. EMC eliminates the cost and complexity barriers of traditional data storage architectures, so researchers can streamline data access and analysis and increase their workflow efficiency.

Big Data resources can scale in lockstep with the data-intensive demands of scientific applications, minimising capital and operating costs while accelerating time-to-results for leading-edge research. With EMC’s Big Data solution, performance increases as data volumes grow, enabling previously unattainable levels of data performance at high scale.

EMC Big Data enables researchers to devote precious time and resources to ground-breaking research. Our proven solution for eResearch accelerates time-to-discovery for a vast array of initiatives. Scaling up requires almost no management or technical skill, meaning researchers can add capacity without waiting on IT resources to respond.

Biography
Clive Gold is a longstanding employee at EMC, having been with the company since 1999, working in roles such as Regional Product Marketing Manager. He has recently been promoted to the role of Marketing CTO, where he is responsible for EMC’s strategic product marketing activities and plays a critical role in advising on the overall direction of EMC.

Gold has over 25 years’ experience in the IT industry and is currently the Vice Chairman of the Storage Networking Industry Association (SNIA-ANZ). Having commenced his career with Hewlett-Packard, he subsequently held a variety of senior positions in sales, marketing and services with companies including ComTech and Pyramid Technology.

Gold holds a Masters degree in Business Administration as well as a Bachelor of Science in Electrical Engineering.

Ian Dolphin, CEO, Sakai Foundation; Paul Walk, Deputy Director, UKOLN
Software, Community and Sustainability
One important factor affecting the sustainability of software is the degree of care and professionalism with which it is created. Software created in the research process is often:

  • not a primary output
  • not normally formally assessed
  • not created by someone trained to create software

As such, one approach to improving the sustainability of software produced in research is to train those who create it to do so using well-proven, good practice.

In the UK, the JISC-funded DevCSI project has begun to explore how developers can pass on, to researchers, the skills, knowledge and good practice which enable them to create software in a sustainable way.

Jasig-Sakai – Ian Dolphin
Over the last ten years, open source solutions have become a major force in answering a range of software challenges facing higher education. Sustaining software development is possibly the largest of these challenges. Open source initiatives in education remain fragmented, with not-for-profit entities proliferating to serve a diverse range of communities and solutions.

Two such communities are currently working through the process of reducing this fragmentation. Jasig, the parent organisation for the uPortal, CAS, Bedework and uMobile initiatives, and Sakai, the parent organisation for the Sakai Collaboration and Learning Environment and the Open Academic Environment, are pooling resources. The objective is to provide a sustaining home for software in support of the academic mission. One of the merged organisation’s first objectives is an inclusive software incubation process. This process will provide focussed advice for new projects based on a rich range of experiences, mentoring them through some of the challenges, from licensing choices to building a sustaining community.

Ram Durairaj Cisco and OpenStack
One of the key ecosystems in cloud computing is the open source community. Cisco views OpenStack as the leading open source community for cloud infrastructure automation, and has committed to contributing to the networking services capabilities of the project. We believe Australia’s eResearch program will see tremendous advantages in the growing strength of OpenStack’s compute, storage and networking automation services. In this short presentation, we will discuss why OpenStack is strategic to Cisco’s cloud strategy, and what exactly Cisco has contributed to Quantum, the network services project that is in the “incubation” phase for the Diablo release. We will also discuss the work we are addressing in the Essex release, due in the Spring of 2012.
John Josephakis, DataDirect Networks

Scalable Solutions for Geo-Distributed Collaborative Research
The emergence of a global and high-speed internet has enabled organizations to collaborate and communicate in ways which have been previously unachievable. As research initiatives increasingly reach across geographies and as data locality becomes critical to the responsiveness and accuracy of computational simulation and analysis, new tools are being introduced within the global sciences community which extend many of the concepts which are proven today in web and cloud mega-datacenters. DDN, the world’s leading provider of high-performance computing storage solutions, will discuss the convergence of web and HPC storage technology requirements as we approach the exascale era and new tools for enabling researchers to maximize the value of their research data.

Biography
John Josephakis has spent over 15 years in management roles in the field of high technology. His prior experience includes operations and marketing roles at Bottom Line Distribution, and sales management at Peripheral Vision Inc. Mr. Josephakis holds a degree in Mechanical Engineering and Economics from the University of Texas at Austin and an MBA from St. Edwards University in Austin, Texas.

Mark Nielsen, HP Storage Product Strategy and Operations Manager, HP South Pacific

Scale out infrastructure powering eResearch
Today, more than ever, researchers are generating hundreds of terabytes, if not petabytes, of data in their quest to solve the mysteries of the world and the universe. This massive pool of invaluable intellectual property, whilst a tremendous asset to researchers in their quest, poses a significant challenge in terms of how the data is stored, managed, backed up and secured. HP is at the forefront of providing cost effective, scalable infrastructure. In this session HP will discuss the technologies and solutions that are available today to help researchers focus more on their research and not on how to store their data.

Biography
Mark Nielsen is responsible for the product strategy development & management, and for marketing and operations of HP’s Storage Portfolio across Australia and New Zealand. Mark also supports the sales and services teams to deliver integrated storage solutions to both Enterprise and SMB customers throughout Australia.

As part of his role at HP, Mark works closely with the enterprise team to develop end-to-end customer solutions and help HP achieve market objectives for its storage business. Mark works closely with the other HP product and services groups which include HP’s Business Critical Server (BCS) and Industry Standard Server (ISS) groups to put together comprehensive solutions offerings for HP’s enterprise customers.

Mark has more than 20 years’ experience in the Australian IT industry. He has over 15 years of continuous service with HP, joining the company in 1995 with Digital Equipment Corporation. Mark has held numerous roles in sales, business management and business development across a broad product and solutions portfolio which includes OEM, storage, servers and services.

Mark is a board member of SNIA (Storage Networking Industry Association).

Matthew Jones, Group Manager – Industry Development, Australia and New Zealand, Intel

Intel®: Accelerating the Path to Exascale
Learn about Intel’s global eResearch collaboration projects and the key ingredients required for crossing the threshold from Petascale computing to Exascale computing. Intel will also discuss future product roadmaps and the role that the Intel Many Integrated Core (MIC) architecture will play.

Biography
Matthew’s career has spanned the IT consulting, banking, resources, utilities, telecommunications, healthcare and public sectors, and has focused on how technology solutions can enable business transformation, drive best practices and enhance competitive advantage. Matthew has a Bachelor of Engineering from the University of Leeds (UK), a Graduate Certificate of Management, and an MBA from Deakin University.

Matthias Reumann, IBM

From Petascale to Exascale…
In this presentation, Matthias Reumann will talk about the current and future direction at IBM in High Performance Computing and Storage. He will also discuss issues and solutions that affect research projects involving large complex data sets, and will share real life use case stories.

Rex Tanakit, Director, Industry Solutions, Panasas, Inc. The Last Bottleneck: How Parallel I/O can attenuate Amdahl’s Law
Parallel processing is a key approach to improving performance in technical computing. Over the past 15+ years, Panasas has seen several standards (e.g. OpenMP, MPI) that focus on improving inter-processor communication. With such techniques, applications can experience significant performance enhancements when the problem size fits in memory, but they still slow down when they have to access I/O. The upcoming pNFS (parallel NFS) standard helps complete the last step in parallel processing by minimizing the impact of Amdahl’s law on application performance. This presentation will discuss pNFS, the benefits of parallel I/O, and how they can improve real world applications. Several ISVs have seen significant performance improvements when moving from serial to parallel I/O. The Panasas session will share some of these results.Biography
Rex Tanakit leads the Industry Solutions effort at Panasas, an acknowledged industry and market leader in high performance storage and parallel file system technology. In this role, Rex interacts daily with researchers and scientists in the high performance computing (HPC) sector globally. Prior to joining Panasas, Rex was Director of Engineering at SGI for nearly 12 years, responsible for performance engineering and benchmarking. Rex and his team demonstrated and publicized the world-class performance of SGI systems on HPC, graphics, storage, and file serving, and led the creation of the Global Benchmarking Center (GBC) there. The GBC plays a key role in providing large and next-generation system access for benchmarking and development to field sales, applications engineers, and ISVs. Rex also worked with NASA on Project Columbia to help SGI deploy the fastest 64-bit supercomputer in the world. Rex has a BS in Chemical Engineering from UC San Diego and an MS from UCLA.
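The Amdahl’s-law argument behind the Panasas talk can be made concrete with a short calculation. The sketch below is illustrative only (it is not from the presentation, and the figures are hypothetical): it computes the bound Amdahl’s law places on speedup when some fraction of runtime, such as single-stream I/O, remains serial.

```python
# Amdahl's law: on N processors, overall speedup is bounded by the
# serial fraction s of the workload:
#   speedup(N) = 1 / (s + (1 - s) / N)
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# If serial I/O accounts for 5% of runtime, 1024 processors yield a
# speedup of only about 19.6x, nowhere near 1024x:
print(round(amdahl_speedup(0.05, 1024), 1))   # → 19.6

# Parallelising the I/O path (the role pNFS plays) shrinks the serial
# fraction; at 0.5% serial, the same machine yields about 167x:
print(round(amdahl_speedup(0.005, 1024), 1))  # → 167.5
```

This is why the abstract calls I/O "the last bottleneck": past a point, adding processors barely helps until the serial I/O fraction itself is reduced.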
Peter Elford, Business Development Manager, Higher Education and Research, Cisco
eResearch Collaboration – Solved problem or eResearch’s greatest challenge?
Collaboration is the all-too-often forgotten child of the eResearch family, with most of the attention focussed on high performance computing and data management. Yet it is the capability applicable to the broadest range of disciplines, and hence could reasonably be expected to deliver the best return on investment. It is also the capability with a rich set of offerings from both open source and commercial providers, particularly as improving the productivity of knowledge workers is a common driver across all industry sectors. This talk explores some of the issues in the adoption of collaborative tools and services across the research sector.Biography
Peter Elford is an eighteen-year Cisco veteran and currently holds the position of Business Development Manager, Higher Education and Research. He has held several other roles at Cisco, including Public Sector Solutions Architect, responsible for articulating the alignment between networked technologies and public sector outcomes; Federal Region Manager, responsible for Cisco’s engagements with the Australian Federal government; Corporate Consulting Engineer, working on residential broadband solutions; and Systems Engineer, working on network security and internetwork design. Before joining Cisco Systems in February 1993 he worked for three years at the Australian Academic and Research Network (AARNet), where he had responsibility for much of the hands-on engineering for the embryonic Australian Internet. Peter holds a BSc (Hons) from the Australian National University.
Xenon Presentation Heterogeneous HPC Systems for E-Research – NVIDIA