Design of putative T-cell epitope-based vaccine against SARS-CoV-2: reverse vaccinology approach

Khaerunissa Anbar Istiadi1, Fadilah Fadilah1,2, Rafika Indah Paramita1,2, Linda Erlina1,2, Budi Wiweko3

1Bioinformatics Core Facilities – IMERI, Faculty of Medicine, Universitas Indonesia. Jalan Salemba Raya number 6, Jakarta, Indonesia

2Department of Medical Chemistry, Faculty of Medicine, Universitas Indonesia. Jalan Salemba Raya number 4, Jakarta, Indonesia

3Department of Obstetrics and Gynaecology, Faculty of Medicine, Universitas Indonesia, Jakarta, Indonesia

 

In late 2019, severe pneumonia cases were reported in Wuhan, China. The cases were subsequently identified as COVID-19, caused by a novel coronavirus, SARS-CoV-2. To date, there has been no efficacious, approved therapy to control the spread of SARS-CoV-2. The rapid global spread and vulnerability to infection make it urgent to produce drugs or vaccines that enhance immunity. Researchers and companies worldwide are currently working to develop a vaccine for SARS-CoV-2, and most candidates target the spike (S) protein with the aim of eliciting protective antibodies. The S protein is a suitable target for vaccine development since it is the outer surface protein of SARS-CoV-2 and can be directly recognized by the immune system. Here, an attempt has been made to design a putative epitope-based vaccine for SARS-CoV-2 by analyzing the viral genome using a reverse vaccinology approach based on SARS-CoV-2 spike protein sequence data. About 6,000 sequences were downloaded from the ViPR database and processed with AVANA to find conserved regions. Based on predictions through the reverse vaccinology approach and molecular docking, the HLA Class I putative epitope YQPYRVVVL, which overlaps with the HLA Class II putative epitope NGVGYQPYRVVVLSF, was selected as a potential peptide-vaccine candidate.

Keywords: HLA, Reverse vaccinology, SARS-CoV-2, Spike protein, T-cell epitope-based vaccine
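
As a minimal sketch of the conservancy screening step described in this abstract (the FASTA file name is hypothetical, Biopython is assumed to be available, and this is an illustration rather than the AVANA pipeline itself), the fraction of downloaded spike sequences containing the selected epitope can be checked directly:

from Bio import SeqIO  # assumes Biopython is installed

EPITOPE = "YQPYRVVVL"            # HLA Class I putative epitope named above
FASTA = "spike_sequences.fasta"  # hypothetical file of SARS-CoV-2 spike protein sequences

total = 0
hits = 0
for record in SeqIO.parse(FASTA, "fasta"):
    total += 1
    if EPITOPE in str(record.seq):
        hits += 1

# A highly conserved epitope should appear in nearly all downloaded spike sequences.
if total:
    print(f"{hits}/{total} sequences ({hits / total:.1%}) contain {EPITOPE}")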

COVID-19 eResearch Experiences and Collaborations in Australasia and Southeast Asia – Upskilling Support for Pandemic Response, Preparedness and Recovery

Dr Markus Buchhorn1, Sam Moskwa2, Dr Idris Sulaiman2,3, Rowland Mosbergen

1Asia-Pacific Advanced Network (APAN.net)

2Australasia eResearch Organisations (AeRO.edu.au)

3SE Asia Engagement

 

The COVID-19 pandemic is affecting countries in Southeast Asia and the broader Indo-Pacific region in ways that increasingly demand Australia and New Zealand’s collective attention: in fighting a global pandemic, no country can go it alone.

The ability to weather a crisis of this magnitude depends upon partnerships and collaboration around research capacity and infrastructure, public health capacity, and economic, social and many other dimensions.

This Birds of a Feather aims to provide participants the opportunity to consider potential topics centred on accelerating upskilling and access to e-Research infrastructure and data sharing capabilities for the region, including disciplinary efforts to develop data-driven, timely and effective upskilling as part of national and regional COVID-19 responses.

Objectives

To help address inequities in medical research capability and capacity by leveraging opportunities with the Australasian research infrastructure providers and the wider research community.

Overview of emerging trends obtained from various stakeholders

* Extent of recognition that sharing data is important and worthwhile

* Requirements for greater capacity in computational infrastructure

* Extent of recognition that community-led infrastructure development is important

Agenda

* Introduction and Overview of eResearch experiences and constraints (10 minutes)

* Panel of speakers, discussants and general discussion (Lightning Talks & discussion: 40 minutes)

* Possible Threads:

(1) Designing vaccines through High Performance Computing (HPC) and “big data” techniques

(2) Framework for Research Data Management and eResearch Community of Practice (CoP) Development

(3) Data Sharing Standards (such as the Research Data Alliance COVID-19 Data Sharing Guidelines)

(4) Role of Identity and Federated Access Management

* Wrap-up for take-aways (10 minutes)

* Possible extra time for networking at following break of eResConf (10 minutes, 14:30 – 14:40)

 

Persistent Identifiers for Instruments BoF: presenting the community’s current issues and developments in a global context

Siobhann McCafferty1

1ARDC

Much work has been done previously on identifiers and schemas for sensors, sequencers, and research platforms in geoscience and genomics. These areas have led the development of methods for identifying instruments and instances, recording calibration, and connecting metadata to instruments and infrastructure, amongst others.

There is also growing awareness outside of these traditional areas of the benefits of uniquely identifying instruments. This has resulted in an alignment of direction and energies in recent years towards developing and building agreement on a schema and Persistent Identifier (PID) that can be used globally and across disciplines, in the drive for better description, usage metrics, richer metadata and more joined-up e-research.

Recent events in this area include the publication of a draft schema and white paper by the Research Data Alliance (RDA) Persistent Identification Working Group (PIDInst), and the establishment and growth of the Identifiers for Instruments in Australia and Aotearoa/New Zealand Community of Practice (i4iOz).

This BoF will introduce the context and current work in instrumentation PIDs globally by bringing together representatives from the RDA PIDInst Working Group, the i4iOz Community of Practice, recent schema adoption pilot projects from both, and long-time community members working in the area.

It aims to drive discussion and cooperation in instrument identifier development in Australasia and internationally, bringing together the groups currently working in the area and providing a platform for ongoing collaboration.

 

Best Practice for FAIR and Sustainable Research Software

Dr Michelle Barker1, Mr Nicholas May2, Mr Tom Honeyman3, Rebecca Lange4, Justin Baker5

1Research Software Alliance
2RSE-AUNZ
3ARDC
4Curtin University
5CSIRO

 

This BoF aims to utilise audience expertise to identify examples of best practice in developing FAIR/sustainable software for research, to advance community knowledge and networks.

The FAIR for Research Software Working Group (FAIR4RS WG) is leading the research software community in the crucial step of agreeing how to apply the FAIR principles to research software by mid-2021. This BoF will engage the audience by identifying local examples of best practice in creating FAIR software and drawing on the Lamprecht et al. paper for guidance on what the FAIR principles might include. The best practice examples can be used to promote the FAIR principles for research software once they are finalised.

This BoF brings together 3 organisations:

– Research Software Alliance (ReSA)’s co-convening of the FAIR4RS WG with Research Data Alliance (RDA) and FORCE11 to create international community-agreed standards and guidelines

– RSE-AUNZ in sharing the best research software practices across the community

– ARDC in working towards research software as a first class research output

Why attend?

– Engagement with work developing the FAIR principles for research software

– Sharing of best practice to enable knowledge transfer across projects

– Development of networks across the RSE community

– Contribution to a set of impact stories from the community demonstrating best practice for sustainable research software

– Increased awareness of the activities of RSE-AUNZ, ReSA and ARDC

Format:

– 3 short talks, one from each organisation

– Topic identification to guide breaking into smaller groups

– Breakout into groups

– Report backs


Biography:

Michelle is the Director of the Research Software Alliance (ReSA). She has extensive expertise in open science, research software, digital workforce capability and digital research infrastructure. As a sociologist, Michelle is passionate about building collaborative partnerships to achieve system change.

She recently chaired the OECD Global Science Forum expert group on digital skills for the research sector and is an Advisory Committee Member of the US Software Sustainability Institute (URSSI). Michelle is a former Director of the Australian Research Data Commons.

Effective Data Visualisations with Kibana Dashboards for CRAMS

Mr Samitha Amarapathy1, Mr Rafi Mohamed Feroze1

1Monash University, Australia, Clayton, Australia

Data mining, or the extraction of patterns and useful information from data, has been around the software space for a long time, especially in the financial and retail sectors. Analysis of data for mining is an iterative process in which different views are built and refined.

In the e-Research space, infrastructure can provide raw information on usage, which can be visualised with tools. However, linking this information with metadata related to users, research domains, funding, publications, etc., requires integration with metadata management systems. Monash eResearch has developed CRAMS (Cloud Resource Allocation Management System), which provides an effective self-service mechanism for researchers and research facilities to request cloud resources, monitor usage and manage their own allocations.

As part of CRAMS, a dashboard was built for the Monash Research Data storage team to visualise usage information at the individual project level and at the faculty level. While this is useful, a lot more can be done by integrating the raw data captured by CRAMS with infrastructure and other available data sources. Software developers manage data models and write tools to specific requirements; it would be a waste of resources to rely on them to do data analysis.

In this presentation we discuss how we intend to leverage ElasticSearch/Kibana, together with data extracted from CRAMS, to generate effective data visualisations that will provide management insights centred around research data.
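
As a minimal sketch of the kind of integration described above (assuming the elasticsearch-py 8.x client and a local Elasticsearch node; the index and field names are hypothetical, not the actual CRAMS schema), usage records could be pushed into Elasticsearch for Kibana dashboards to aggregate:

from datetime import datetime, timezone
from elasticsearch import Elasticsearch  # assumes the elasticsearch-py 8.x client

# Connect to a local Elasticsearch node (URL is an assumption).
es = Elasticsearch("http://localhost:9200")

# Hypothetical CRAMS usage record; field names are illustrative only.
usage_record = {
    "project_id": "example-project-001",
    "faculty": "Faculty of Science",
    "storage_used_tb": 4.2,
    "storage_allocated_tb": 10.0,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Index the record so a Kibana dashboard can aggregate usage by faculty or project.
es.index(index="crams-usage", document=usage_record)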


Biography:

Samitha leads the agile-driven application development capability at Monash eResearch, and leads and manages IT projects of strategic importance to eResearch, including the delivery of the CRAMS program of work, MyTardis-based implementations in the research instrument integration space, projects for research platforms, and projects for the Australian research cloud, NeCTAR.

Rafi M Feroze is a Senior Analyst Programmer at Monash eResearch

Contributing To the International HPC Certification Forum

Mr Lev Lafayette1, Dr Julian Kunkel2, Ms Weronika Filinger3, Dr Christian Meesters4, Ms Anja Gerbes5

1University Of Melbourne
2University of Reading, Reading, United Kingdom
3University of Edinburgh, Edinburgh, United Kingdom
4Johannes Gutenberg-Universität Mainz, Mainz, Germany
5Goethe-Universität Frankfurt am Main, Frankfurt am Main, Germany

As datasets grow in size and complexity faster than personal computational devices can handle, more researchers turn to HPC systems as a solution to their computational problems. However, many researchers lack familiarity with the HPC environment and require training, and the formal education curriculum has not yet responded sufficiently to this pressure, leaving HPC centres to provide basic training.

One proposed solution to this issue has been the international HPC Certification Forum, established in 2018 and developed from the Performance Conscious HPC (PeCoH) project in 2017 with the Hamburg HPC Competence Center (HHCC), which had the explicit goal of creating broad standards for an “HPC driving license”. Since its establishment, the Forum has developed a detailed skill tree across multiple branches (e.g., HPC Knowledge, HPC Use, Performance Engineering) and levels of competency (basic, intermediate, expert), in which specific skills map to particular competencies. In addition, the Forum has developed a summative examination system and a PGP-signed certificate.

Whilst the Forum separates examination and certification from curriculum development and content delivery, it also requires a feedback mechanism from HPC education providers. Review of learning objectives and specific competencies, and development of branches in depth and breadth, all contribute to building a community ecosystem for the development of the Forum and its success. The availability of “HPC CF Endorsed Training” with certifiable content is a clear avenue for HPC centres to contribute to the Forum, and this will be elaborated in this presentation with examples from current work.


Biography:

Lev Lafayette is an HPC systems administrator and educator at the University of Melbourne, where he has been for the past five years. Prior to that, he held a similar role at the University of Melbourne for eight years. He has also worked for the Ministry of Foreign Affairs (Timor-Leste) and the Parliament of Victoria, and has been active in Linux community development for over fifteen years. He collects post-graduate degrees for fun and profit and is currently studying at the University of Otago (his sixth degree) and the University of London, London School of Economics (his seventh).

Software development in a hurry: designing for reusability

Dr Paulus Lahur1

1CSIRO, Clayton, Australia

The intention of this presentation is to raise awareness of software development for reusability, which has the potential to save significant time and effort in software development within a research organisation such as CSIRO. When reusable software is developed for a certain project, other projects can also reap the benefit. In this presentation, the term reusability applies to both the entire software and its parts: the software can be reused for broad applications of the same type, and its parts can also be reused as building blocks in other software.

The method for developing reusable software follows the same lines as existing best practices in software development. More important than the method is the mindset when designing the code. Under time constraints, the temptation is great to start coding right after one or two iterations of design. Designing for reusability means doing one more iteration and asking whether other people can reuse the software and its parts for other purposes. For example, in the case study presented here, this thought process led to the addition of scripting capability.

As a result, even before the software is reused by others, the benefit is already apparent to the original developer, because the software is now more flexible than it would have been if implemented without consideration for reusability.
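
As a minimal illustration of this kind of flexibility (the names are hypothetical, and this is not the author’s actual code), a processing step written as an importable function with a thin command-line wrapper can be reused both as a building block from other code and as a standalone scriptable tool:

import argparse

def smooth(values, window=3):
    """Reusable building block: a simple moving average over a list of numbers."""
    if window < 1 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

def main():
    # Thin scripting wrapper: the same function is usable from other code or the shell.
    parser = argparse.ArgumentParser(description="Smooth a series of numbers.")
    parser.add_argument("values", nargs="+", type=float)
    parser.add_argument("--window", type=int, default=3)
    args = parser.parse_args()
    print(smooth(args.values, args.window))

if __name__ == "__main__":
    main()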

Given that everybody benefits from software designed for reusability, this approach needs to be put into wider practice. Note that the case study presented here is still ongoing.


Biography:

The author has been working at CSIRO as IMT Scientific Computing staff for five years.

The Value of eResearch Services: Harmonising Key Performance and Key Value Indicators

Dr Jerry Lai1,2, Dr Christopher McAvaney1, Dr Andrew Goh1,2

1Deakin University, Waurn Ponds, Geelong, Australia
2Intersect Australia, Sydney, Australia

The scope of eResearch services has expanded and evolved rather drastically over the past decade. Traditional key performance indicators (KPIs), which focus on infrastructure usage and financial return-on-investment (ROI), whilst capable of addressing part of what eResearch entails, fall short of capturing other key aspects of eResearch: engaging, empowering and supporting researchers to maximise research output (i.e., soft benefits and implicit outcomes). At Deakin eResearch (DUeR), we plan to introduce Key Value Indicators (KVIs) to further enhance our ability to portray the value of our services.

The eResearch services at Deakin can be broadly conceptualised into three pillars:

* Infrastructure

* Training

* Research Collaboration and Consultancy

The traditional KPIs are quite suitable for showing the value of our Infrastructure services, based on usage and ROI. For Training, on the other hand, key stakeholders at senior executive level have advised that “the feedback and attendance looks good, but we wish also to see how attending these eResearch training would benefit researchers in the long run, and more importantly, the research capability of the university at large”. This seemingly simple request led us to rethink our approach to reporting the value of our eResearch training and its interdependence with our (DUeR and Intersect Australia) contribution to the research output and potential of Deakin University.

This presentation outlines the service structure of DUeR, the key challenges we face in value and performance reporting, and our interim and proposed long-term solutions.


Biography:

Dr Jerry Lai is a Senior eResearch Analyst at Deakin University and Intersect Australia. Jerry has a background in psychological science and statistics. He has a passion for data-driven storytelling, through which he helps develop insights into new questions and practices. Led by Dr Christopher McAvaney (Services Manager, eResearch), Jerry works with multiple research teams at Deakin, by providing consultancy, hands-on support, or through research collaboration, on the following:

– Research Methods and Survey Design

– Data-Analytics and Visualisation

– Statistical Programming

– Promoting Statistical Literacy for Non-Statisticians

ORCID : https://orcid.org/0000-0001-7017-9008

LinkedIn : https://www.linkedin.com/in/jerry-lai

I’ve got a dashboard for that! The Telegraf-Influx-Grafana (TIG) stack meets HPC at Monash University

Miss Kerri Wait1

1Monash eResearch Centre, Monash University, Clayton Campus, Australia

The Telegraf-Influx-Grafana (TIG) stack is a powerful tool to explore and visualise the state of High Performance Computing (HPC) environments and their auxiliary services. Join me for a demonstration of the metrics and dashboards we’ve found most useful at the Monash eResearch Centre (MeRC).

The TIG stack consists of:

– Telegraf, plugin driven metric collection agent

– InfluxDB, time-series database

– Grafana, visualisation and dashboard web UI

Leveraging the software-defined HPC infrastructure at MeRC, deploying the Telegraf agent to any number of different services is trivial. There’s no need to struggle with obscure configurations in vendor-specific web UIs: Telegraf is plugin driven and configured using a text file; enable and configure the appropriate plugins for the service in question and the agent is ready to go. The data is stored in InfluxDB and queried using Grafana.
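
As a minimal sketch of the kind of query a Grafana panel runs against this data (assuming the legacy influxdb 1.x Python client, a default “telegraf” database, and the standard cpu input plugin; connection details are assumptions for a local setup), the same metrics can also be pulled programmatically:

from influxdb import InfluxDBClient  # assumes the influxdb 1.x Python client

# Connection details for a default local Telegraf/InfluxDB deployment (assumed).
client = InfluxDBClient(host="localhost", port=8086, database="telegraf")

# Mean CPU idle time over the last hour in 5-minute buckets, as a Grafana panel might request.
query = (
    'SELECT mean("usage_idle") FROM "cpu" '
    "WHERE time > now() - 1h GROUP BY time(5m)"
)
result = client.query(query)

for point in result.get_points():
    print(point["time"], point["mean"])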

With TIG, I’m able to collect, store, and visualise spine switch hardware counters from the fabric, compute node health metrics like cpu and diskio, detailed jobstats for our Lustre storage, operations on our OpenLDAP servers, the utilisation of FlexLM tokens on license servers, as well as traffic on nginx and apache servers. I can monitor for specific disk usage patterns, alert particular team members via Slack, and troubleshoot user jobs.

Community maintained stacks like TIG provide rapid access to metrics, an interactive troubleshooting and exploration environment, as well as alerting and reporting functionality. They also allow you to antagonise your colleagues with the catchcry “I’ve got a dashboard for that!”


Biography:

Kerri Wait is an HPC Consultant at Monash University. As an engineer, Kerri has a keen interest in pulling things apart and reassembling them in novel ways. She applies the same principles to her work in eResearch, and is passionate about making scientific research faster, more robust, and repeatable by upskilling user communities and removing entry barriers. Kerri is currently focused on monitoring and visualisation techniques for infrastructure at all levels of the Monash HPC platforms.

Multidisciplinary Big-Data Sharing, Analysis, and Policy Recommendations to Support COVID-19 Pandemic Response

Dr Damar Susilaradeya1, Dr Prasandhya Astagiri Yusuf1, Gina Anindyajati1, Dr Ahmad Gamal2, Dr Adila Alfa Krisnadhi3, Dr Bagus Takwin4, Jarot Mulyo Semedi5, Iwan Ariawan6, Rus’an Nasrudin7, Dr Hervita Diatri1, Ima Mayasari8, Prof Budi Wiweko1

1Faculty of Medicine Universitas Indonesia, Depok, Indonesia
2Faculty of Engineering Universitas Indonesia, Depok, Indonesia
3Faculty of Computer Sciences Universitas Indonesia, Depok, Indonesia
4Faculty of Psychology Universitas Indonesia, Depok, Indonesia
5Faculty of Mathematics and Natural Sciences Universitas Indonesia, Depok, Indonesia
6Faculty of Public Health Universitas Indonesia, Depok, Indonesia
7Faculty of Economics & Business Universitas Indonesia, Depok, Indonesia
8Faculty of Administrative Science Universitas Indonesia, Depok, Indonesia

Introduction

The COVID-19 pandemic requires massive action across multiple disciplines to prevent extended consequences. Indonesia, a multi-cultural archipelago with a population of more than 260 million, faces a unique challenge in addressing the pandemic. Universitas Indonesia formed a multidisciplinary research group that analyses big data in collaboration with various academic, government, and industry partners to inform the national COVID-19 committee.

Methods

We analysed data related to the large-scale social restrictions implemented by the government. We looked at COVID-19 cases, mobility data, mental health status, and health-related and economic behaviours. We performed spatial analysis using a Geographical Information System (GIS) to map region-specific risks that require tailored government intervention.
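
As a minimal sketch of the kind of spatial analysis described above (file names, column names and the case data are hypothetical; geopandas, pandas and matplotlib are assumed to be available; this is not the team’s actual workflow), region-level case rates can be joined to administrative boundaries and mapped as a choropleth:

import geopandas as gpd
import pandas as pd

# Hypothetical inputs: district boundaries and district-level COVID-19 case counts.
districts = gpd.read_file("jakarta_districts.geojson")   # columns: district, population, geometry
cases = pd.read_csv("covid19_cases_by_district.csv")     # columns: district, confirmed_cases

# Join case counts to boundaries and compute cases per 100,000 population.
gdf = districts.merge(cases, on="district")
gdf["cases_per_100k"] = gdf["confirmed_cases"] / gdf["population"] * 100_000

# Choropleth map of region-specific risk, the kind of output used to tailor interventions.
ax = gdf.plot(column="cases_per_100k", cmap="Reds", legend=True, figsize=(8, 8))
ax.set_title("COVID-19 cases per 100,000 population by district (illustrative)")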

Results and discussions

This multidisciplinary approach allowed us to have a comprehensive and well-rounded view of the problem our nation is facing, which was then translated into big-data-driven policy recommendations. We faced several challenges during this research utilising big data from various sources, including variability (variety of data), validity (study design, statistical power, statistics), reproducibility (transparency and open science), and responsibility (ethics, security, and privacy). These challenges need to be addressed in order to provide the government with accurate scientific COVID-19 advice. Moreover, joint collaborations providing the necessary human resources, software platforms and computational infrastructure, during COVID-19 and beyond, are needed, specifically to enable responsible data sharing and data mining.


Biography:

Damar Susilaradeya is a researcher at the Medical Technology Cluster, Indonesian Medical Education and Research Institute (IMERI), Faculty of Medicine, Universitas Indonesia. She is also a lecturer in the Medical Physics Department at the Faculty of Medicine, Universitas Indonesia, and the coordinator of the Universitas Indonesia Big-Data Synergy Against COVID-19 team.
