TY - CONF
T1 - Ad hoc Cloud Computing
T2 - IEEE Cloud
Y1 - 2015
A1 - Gary McGilvary
A1 - Barker, Adam
A1 - Malcolm Atkinson
KW - ad hoc
KW - cloud computing
KW - reliability
KW - virtualization
KW - volunteer computing
AB - This paper presents the first complete, integrated and end-to-end solution for ad hoc cloud computing environments. Ad hoc clouds harvest resources from existing sporadically available, non-exclusive (i.e. primarily used for some other purpose) and unreliable infrastructures. In this paper we discuss the problems ad hoc cloud computing solves and outline our architecture which is based on BOINC.
JF - IEEE Cloud
UR - http://arxiv.org/abs/1505.08097
ER -
TY - CONF
T1 - Applying selectively parallel IO compression to parallel storage systems
T2 - Euro-Par
Y1 - 2014
A1 - Rosa Filgueira
A1 - Malcolm Atkinson
A1 - Yusuke Tanimura
A1 - Isao Kojima
JF - Euro-Par
ER -
TY - CONF
T1 - FAST: Flexible Automated Synchronization Transfer tool
T2 - Proceedings of the Sixth International Workshop on Data-Intensive Distributed Computing
Y1 - 2014
A1 - Rosa Filgueira
A1 - Iraklis Klampanos
A1 - Yusuke Tanimura
A1 - Malcolm Atkinson
JF - Proceedings of the Sixth International Workshop on Data-Intensive Distributed Computing
PB - ACM
CY - New York, NY, USA
ER -
TY - Generic
T1 - Varpy: A Python library for volcanology and rock physics data analysis. EGU2014-3699
Y1 - 2014
A1 - Rosa Filgueira
A1 - Malcolm Atkinson
A1 - Andrew Bell
A1 - Branwen Snelling
ER -
TY - CONF
T1 - C2MS: Dynamic Monitoring and Management of Cloud Infrastructures
T2 - IEEE CloudCom
Y1 - 2013
A1 - Gary McGilvary
A1 - Josep Rius
A1 - Íñigo Goiri
A1 - Francesc Solsona
A1 - Barker, Adam
A1 - Atkinson, Malcolm P.
AB - Server clustering is a common design principle employed by many organisations that require high availability, scalability and easier management of their infrastructure. Servers are typically clustered according to the service they provide: for example, the application(s) installed, the role of the server or server accessibility. In order to optimize performance, manage load and maintain availability, servers may migrate from one cluster group to another, making it difficult for server monitoring tools to continuously monitor these dynamically changing groups. Server monitoring tools are usually statically configured, and any change of group membership requires manual reconfiguration; this is an unreasonable task to undertake on large-scale cloud infrastructures. In this paper we present the Cloudlet Control and Management System (C2MS), a system for monitoring and controlling dynamic groups of physical or virtual servers within cloud infrastructures. The C2MS extends Ganglia, an open-source scalable system-performance monitoring tool, by allowing system administrators to define, monitor and modify server groups without the need for server reconfiguration. In turn, administrators can easily monitor group and individual server metrics on large-scale dynamic cloud infrastructures where the roles of servers may change frequently. Furthermore, we complement group monitoring with a control element that allows administrator-specified actions to be performed over servers within service groups, and we introduce further customized monitoring metrics. This paper outlines the design, implementation and evaluation of the C2MS.
JF - IEEE CloudCom
CY - Bristol, UK
ER -
TY - BOOK
T1 - The DATA Bonanza: Improving Knowledge Discovery in Science, Engineering, and Business
T2 - Wiley Series on Parallel and Distributed Computing (Editor: Albert Y. Zomaya)
Y1 - 2013
A1 - Atkinson, Malcolm P.
A1 - Baxter, Robert M.
A1 - Peter Brezany
A1 - Oscar Corcho
A1 - Michelle Galea
A1 - Parsons, Mark
A1 - Snelling, David
A1 - van Hemert, Jano
KW - Big Data
KW - Data Intensive
KW - data mining
KW - Data Streaming
KW - Databases
KW - Dispel
KW - Distributed Computing
KW - Knowledge Discovery
KW - Workflows
AB - With the digital revolution opening up tremendous opportunities in many fields, there is a growing need for skilled professionals who can develop data-intensive systems and extract information and knowledge from them. This book frames for the first time a new systematic approach for tackling the challenges of data-intensive computing, providing decision makers and technical experts alike with practical tools for dealing with our exploding data collections. Emphasising data-intensive thinking and interdisciplinary collaboration, The DATA Bonanza: Improving Knowledge Discovery in Science, Engineering, and Business examines the essential components of knowledge discovery, surveys many of the current research efforts worldwide, and points to new areas for innovation. Complete with a wealth of examples and DISPEL-based methods demonstrating how to gain more from data in real-world systems, the book: * Outlines the concepts and rationale for implementing data-intensive computing in organisations * Covers from the ground up problem-solving strategies for data analysis in a data-rich world * Introduces techniques for data-intensive engineering using the Data-Intensive Systems Process Engineering Language DISPEL * Features in-depth case studies in customer relations, environmental hazards, seismology, and more * Showcases successful applications in areas ranging from astronomy and the humanities to transport engineering * Includes sample program snippets throughout the text as well as additional materials on a companion website The DATA Bonanza is a must-have guide for information strategists, data analysts, and engineers in business, research, and government, and for anyone wishing to be on the cutting edge of data mining, machine learning, databases, distributed systems, or large-scale computing.
JF - Wiley Series on Parallel and Distributed Computing (Editor: Albert Y. Zomaya)
PB - John Wiley & Sons Inc.
SN - 978-1-118-39864-7
ER -
TY - CHAP
T1 - Data-Intensive Analysis
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Oscar Corcho
A1 - van Hemert, Jano
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - data mining
KW - Data-Analysis Experts
KW - Data-Intensive Analysis
KW - Knowledge Discovery
AB - Part II: "Data-intensive Knowledge Discovery", focuses on the needs of data-analysis experts. It illustrates the problem-solving strategies appropriate for a data-rich world, without delving into the details of underlying technologies. It should engage and inform data-analysis specialists, such as statisticians, data miners, image analysts, bio-informaticians or chemo-informaticians, and generate ideas pertinent to their application areas. Chapter 5: "Data-intensive Analysis", introduces a set of common problems that data-analysis experts often encounter, by means of a set of scenarios of increasing levels of complexity. The scenarios typify knowledge discovery challenges and the presented solutions provide practical methods; a starting point for readers addressing their own data challenges.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Ltd.
ER -
TY - CHAP
T1 - Data-Intensive Components and Usage Patterns
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Oscar Corcho
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Data Analysis
KW - data mining
KW - Data-Intensive Components
KW - Registry
KW - Workflow Libraries
KW - Workflow Sharing
AB - Chapter 7: "Data-intensive components and usage patterns", provides a systematic review of the components that are commonly used in knowledge discovery tasks as well as common patterns of component composition. That is, it introduces the processing elements from which knowledge discovery solutions are built and common composition patterns for delivering trustworthy information. It reflects on how these components and patterns are evolving in a data-intensive context.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Ltd.
ER -
TY - CHAP
T1 - The Data-Intensive Survival Guide
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Malcolm Atkinson
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Data-Analysis Experts
KW - Data-Intensive Architecture
KW - Data-intensive Computing
KW - Data-Intensive Engineers
KW - Datascopes
KW - Dispel
KW - Domain Experts
KW - Intellectual Ramps
KW - Knowledge Discovery
KW - Workflows
AB - Chapter 3: "The data-intensive survival guide", presents an overview of all of the elements of the proposed data-intensive strategy. Sufficient detail is presented for readers to understand the principles and practice that we recommend. It should also provide a good preparation for readers who choose to sample later chapters. It introduces three professional viewpoints: domain experts, data-analysis experts, and data-intensive engineers. Success depends on a balanced approach that develops the capacity of all three groups. A data-intensive architecture provides a flexible framework for that balanced approach. This enables the three groups to build and exploit data-intensive processes that incrementally step from data to results. A language is introduced to describe these incremental data processes from all three points of view. The chapter introduces ‘datascopes’ as the productized data-handling environments and ‘intellectual ramps’ as the ‘on ramps’ for the highways from data to knowledge.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Ltd.
ER -
TY - CHAP
T1 - Data-Intensive Thinking with DISPEL
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Malcolm Atkinson
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Data-Intensive Machines
KW - Data-Intensive Thinking
KW - Data-intensive Computing
KW - Dispel
KW - Distributed Computing
KW - Knowledge Discovery
AB - Chapter 4: "Data-intensive thinking with DISPEL", engages the reader with technical issues and solutions, by working through a sequence of examples, building up from a sketch of a solution to a large-scale data challenge. It uses the DISPEL language extensively, introducing its concepts and constructs. It shows how DISPEL may help designers, data-analysts, and engineers develop solutions to the requirements emerging in any data-intensive application domain. The reader is taken through simple steps initially; this then builds to the conceptually complex steps that are necessary to cope with the realities of real data providers, real data, real distributed systems, and long-running processes.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Inc.
ER -
TY - CHAP
T1 - Definition of the DISPEL Language
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Paul Martin
A1 - Yaikhom, Gagarine
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Data Streaming
KW - Data-intensive Computing
KW - Dispel
AB - Chapter 10: "Definition of the DISPEL language", describes the novel aspects of the DISPEL language: its constructs, capabilities, and anticipated programming style.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
T3 - Parallel and Distributed Computing, series editor Albert Y. Zomaya
PB - John Wiley & Sons Inc.
ER -
TY - CONF
T1 - The demand for consistent web-based workflow editors
T2 - Proceedings of the 8th Workshop on Workflows in Support of Large-Scale Science
Y1 - 2013
A1 - Gesing, Sandra
A1 - Atkinson, Malcolm
A1 - Klampanos, Iraklis
A1 - Galea, Michelle
A1 - Berthold, Michael R.
A1 - Barbera, Roberto
A1 - Scardaci, Diego
A1 - Terstyanszky, Gabor
A1 - Kiss, Tamas
A1 - Kacsuk, Peter
KW - web-based workflow editors
KW - workflow composition
KW - workflow interoperability
KW - workflow languages and concepts
JF - Proceedings of the 8th Workshop on Workflows in Support of Large-Scale Science
PB - ACM
CY - New York, NY, USA
SN - 978-1-4503-2502-8
UR - http://doi.acm.org/10.1145/2534248.2534260
ER -
TY - CHAP
T1 - The Digital-Data Challenge
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Malcolm Atkinson
A1 - Parsons, Mark
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Big Data
KW - Data-intensive Computing
KW - Knowledge Discovery
KW - Digital Data
KW - Digital-Data Revolution
AB - Part I: "Strategies for success in the digital-data revolution", provides an executive summary of the whole book to convince strategists, politicians, managers, and educators that our future data-intensive society requires new thinking, new behavior, new culture, and new distribution of investment and effort. This part will introduce the major concepts so that readers are equipped to discuss and steer their organization’s response to the opportunities and obligations brought by the growing wealth of data. It will help readers understand the changing context brought about by advances in digital devices, digital communication, and ubiquitous computing. Chapter 1: "The digital-data challenge", will help readers to understand the challenges ahead in making good use of the data and introduce ideas that will lead to helpful strategies. A global digital-data revolution is catalyzing change in the ways in which we live, work, relax, govern, and organize. This is a significant change in society, as important as the invention of printing or the industrial revolution, but more challenging because it is happening globally at Internet speed. Becoming agile in adapting to this new world is essential.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Ltd.
ER -
TY - CHAP
T1 - The Digital-Data Revolution
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Malcolm Atkinson
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Data
KW - Information
KW - Knowledge
KW - Knowledge Discovery
KW - Social Impact of Digital Data
KW - Wisdom
KW - Data-intensive Computing
AB - Chapter 2: "The digital-data revolution", reviews the relationships between data, information, knowledge, and wisdom. It analyses and quantifies the changes in technology and society that are delivering the data bonanza, and then reviews the consequential changes via representative examples in biology, Earth sciences, social sciences, leisure activity, and business. It exposes quantitative details and shows the complexity and diversity of the growing wealth of data, introducing some of its potential benefits and examples of the impediments to successfully realizing those benefits.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Ltd.
ER -
TY - CHAP
T1 - DISPEL Development
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Adrian Mouat
A1 - Snelling, David
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Diagnostics
KW - Dispel
KW - IDE
KW - Libraries
KW - Processing Elements
AB - Chapter 11: "DISPEL development", describes the tools and libraries that a DISPEL developer might expect to use. The tools include those needed during process definition, those required to organize enactment, and diagnostic aids for developers of applications and platforms.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Inc.
ER -
TY - CHAP
T1 - DISPEL Enactment
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Chee Sun Liew
A1 - Krause, Amrey
A1 - Snelling, David
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Data Streaming
KW - Data-Intensive Engineering
KW - Dispel
KW - Workflow Enactment
AB - Chapter 12: "DISPEL enactment", describes the four stages of DISPEL enactment. It is targeted at the data-intensive engineers who implement enactment services.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Inc.
ER -
TY - CHAP
T1 - Foreword
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Tony Hey
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Big Data
KW - Data-intensive Computing
KW - Knowledge Discovery
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Ltd.
ER -
TY - JOUR
T1 - Lesion Area Detection Using Source Image Correlation Coefficient for CT Perfusion Imaging
JF - IEEE Journal of Biomedical and Health Informatics
Y1 - 2013
A1 - Fan Zhu
A1 - Rodríguez, David
A1 - Carpenter, Trevor K.
A1 - Atkinson, Malcolm P.
A1 - Wardlaw, Joanna M.
KW - CT
KW - Pattern Recognition
KW - Perfusion Source Images
KW - Segmentation
AB - Computed tomography (CT) perfusion imaging is widely used to calculate brain hemodynamic quantities such as Cerebral Blood Flow (CBF), Cerebral Blood Volume (CBV) and Mean Transit Time (MTT) that aid the diagnosis of acute stroke. Since perfusion source images contain more information than hemodynamic maps, good utilisation of the source images can lead to better understanding than the hemodynamic maps alone. Correlation-coefficient tests are used in our approach to measure the similarity between healthy tissue time-concentration curves and unknown curves. This information is then used to differentiate penumbra and dead tissues from healthy tissues. The goal of the segmentation is to fully utilize the information in the perfusion source images. Our method directly identifies suspected abnormal areas from perfusion source images and then delivers a suggested segmentation of healthy, penumbra and dead tissue. This approach is designed to handle CT perfusion images, but it can also be used to detect lesion areas in MR perfusion images.
VL - 17
IS - 5
ER -
TY - CONF
T1 - MPI collective I/O based on advanced reservations to obtain performance guarantees from shared storage systems
T2 - CLUSTER
Y1 - 2013
A1 - Yusuke Tanimura
A1 - Rosa Filgueira
A1 - Isao Kojima
A1 - Malcolm P. Atkinson
JF - CLUSTER
ER -
TY - CHAP
T1 - Platforms for Data-Intensive Analysis
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Snelling, David
ED - Malcolm Atkinson
ED - Baxter, Robert M.
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Data-Intensive Engineering
KW - Data-Intensive Systems
KW - Dispel
KW - Distributed Systems
AB - Part III: "Data-intensive engineering", is targeted at technical experts who will develop complex applications, new components, or data-intensive platforms. The techniques introduced may be applied very widely; for example, to any data-intensive distributed application, such as index generation, image processing, sequence comparison, text analysis, and sensor-stream monitoring. The challenges, methods, and implementation requirements are illustrated by making extensive use of DISPEL. Chapter 9: "Platforms for data-intensive analysis", gives a reprise of data-intensive architectures, examines the business case for investing in them, and introduces the stages of data-intensive workflow enactment.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Ltd.
ER -
TY - CHAP
T1 - Preface
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Malcolm Atkinson
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Big Data
KW - Data-intensive Computing
KW - Knowledge Discovery
AB - Who should read the book and why. The structure and conventions used. Suggested reading paths for different categories of reader.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Ltd.
ER -
TY - CHAP
T1 - Problem Solving in Data-Intensive Knowledge Discovery
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Oscar Corcho
A1 - van Hemert, Jano
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Data-Analysis Experts
KW - Data-Intensive Analysis
KW - Design Patterns for Knowledge Discovery
KW - Knowledge Discovery
AB - Chapter 6: "Problem solving in data-intensive knowledge discovery", builds on the preceding scenarios to provide an overview of effective strategies in knowledge discovery, highlighting common problem-solving methods that apply in conventional contexts and focusing on the similarities and differences of these methods.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Ltd.
ER -
TY - CONF
T1 - Provenance for seismological processing pipelines in a distributed streaming workflow
T2 - EDBT/ICDT Workshops
Y1 - 2013
A1 - Alessandro Spinuso
A1 - James Cheney
A1 - Malcolm Atkinson
JF - EDBT/ICDT Workshops
ER -
TY - CHAP
T1 - Sharing and Reuse in Knowledge Discovery
T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
Y1 - 2013
A1 - Oscar Corcho
ED - Malcolm Atkinson
ED - Rob Baxter
ED - Peter Brezany
ED - Oscar Corcho
ED - Michelle Galea
ED - Parsons, Mark
ED - Snelling, David
ED - van Hemert, Jano
KW - Data-Intensive Analysis
KW - Knowledge Discovery
KW - Ontologies
KW - Semantic Web
KW - Sharing
AB - Chapter 8: "Sharing and reuse in knowledge discovery", introduces more advanced knowledge discovery problems, and shows how improved component and pattern descriptions facilitate reuse. This supports the assembly of libraries of high-level components well adapted to classes of knowledge discovery methods or application domains. The descriptions are made more powerful by introducing notations from the Semantic Web.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business
PB - John Wiley & Sons Ltd.
ER -
TY - CONF
T1 - Towards Addressing CPU-Intensive Seismological Applications in Europe
T2 - International Supercomputing Conference
Y1 - 2013
A1 - Michele Carpené
A1 - I.A. Klampanos
A1 - Siew Hoon Leong
A1 - Emanuele Casarotti
A1 - Peter Danecek
A1 - Graziella Ferini
A1 - Andre Gemünd
A1 - Amrey Krause
A1 - Lion Krischer
A1 - Federica Magnoni
A1 - Marek Simon
A1 - Alessandro Spinuso
A1 - Luca Trani
A1 - Malcolm Atkinson
A1 - Giovanni Erbacci
A1 - Anton Frank
A1 - Heiner Igel
A1 - Andreas Rietbrock
A1 - Horst Schwichtenberg
A1 - Jean-Pierre Vilotte
AB - Advanced application environments for seismic analysis help geoscientists to execute complex simulations to predict the behaviour of a geophysical system and potential surface observations. At the same time, data collected from seismic stations must be processed by comparing recorded signals with predictions. The EU-funded project VERCE (http://verce.eu/) aims to enable specific seismological use-cases and, on the basis of requirements elicited from the seismology community, provide a service-oriented infrastructure to deal with such challenges. In this paper we present VERCE’s architecture, in particular relating to forward and inverse modelling of Earth models, and how the largely file-based HPC model can be combined with data streaming operations to enhance the scalability of experiments. We posit that the integration of services and HPC resources in an open, collaborative environment is an essential medium for the advancement of sciences of critical importance, such as seismology.
JF - International Supercomputing Conference
CY - Leipzig, Germany
ER -
TY - CONF
T1 - V-BOINC: The Virtualization of BOINC
T2 - Proceedings of the 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2013)
Y1 - 2013
A1 - Gary McGilvary
A1 - Barker, Adam
A1 - Ashley Lloyd
A1 - Malcolm Atkinson
AB - The Berkeley Open Infrastructure for Network Computing (BOINC) is an open source client-server middleware system created to allow projects with large computational requirements, usually set in the scientific domain, to utilize a technically unlimited number of volunteer machines distributed over large physical distances. However, various problems exist in deploying applications over these heterogeneous machines using BOINC: applications must be ported to each machine architecture type, the project server must be trusted to supply authentic applications, applications that do not regularly checkpoint may lose execution progress upon volunteer machine termination, and applications that have dependencies may find it difficult to run under BOINC. To solve such problems we introduce virtual BOINC, or V-BOINC, where virtual machines are used to run computations on volunteer machines. Application developers can then compile their applications on a single architecture, checkpointing issues are solved through virtualization APIs, and many security concerns are addressed via the virtual machine's sandbox environment. In this paper we focus on outlining a unique approach to introducing virtualization into BOINC and demonstrate that V-BOINC offers acceptable computational performance when compared to regular BOINC. Finally, we show that applications with dependencies can easily run under V-BOINC, in turn increasing the computational potential volunteer computing offers to the general public and project developers. V-BOINC can be downloaded at http://garymcgilvary.co.uk/vboinc.html
JF - Proceedings of the 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2013)
CY - Delft, The Netherlands
ER -
TY - CONF
T1 - Abstract: Reservation-Based I/O Performance Guarantee for MPI-IO Applications Using Shared Storage Systems
T2 - SC Companion
Y1 - 2012
A1 - Yusuke Tanimura
A1 - Rosa Filgueira
A1 - Isao Kojima
A1 - Malcolm P. Atkinson
JF - SC Companion
ER -
TY - CONF
T1 - An adaptive, scalable, and portable technique for speeding up MPI-based applications
T2 - International European Conference on Parallel and Distributed Computing, Europar-2012
Y1 - 2012
A1 - Rosa Filgueira
A1 - Alberto Nuñez
A1 - Javier Fernandez
A1 - Malcolm Atkinson
JF - International European Conference on Parallel and Distributed Computing, Europar-2012
ER -
TY - JOUR
T1 - Computed Tomography Perfusion Imaging Denoising Using Gaussian Process Regression
JF - Physics in Medicine and Biology
Y1 - 2012
A1 - Fan Zhu
A1 - Carpenter, Trevor
A1 - Rodríguez, David
A1 - Malcolm Atkinson
A1 - Wardlaw, Joanna
AB - Objective: Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, Computed Tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation. The development of methods for improving the CNR is therefore valuable. Methods: The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D as it also contains temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Results: Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing a better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves and reduces the oscillations in the curve. Conclusion: GPR is superior to the comparable techniques used in this study.
ER -
TY - JOUR
T1 - Data-Intensive Architecture for Scientific Knowledge Discovery
JF - Distributed and Parallel Databases
Y1 - 2012
A1 - Atkinson, Malcolm P.
A1 - Chee Sun Liew
A1 - Michelle Galea
A1 - Paul Martin
A1 - Krause, Amrey
A1 - Adrian Mouat
A1 - Oscar Corcho
A1 - Snelling, David
KW - Knowledge Discovery
KW - Workflow Management System
AB - This paper presents a data-intensive architecture that demonstrates the ability to support applications from a wide range of application domains, and to support the different types of users involved in defining, designing and executing data-intensive processing tasks. The prototype architecture is introduced, and the pivotal role of DISPEL as a canonical language is explained. The architecture promotes the exploration and exploitation of distributed and heterogeneous data and spans the complete knowledge discovery process, from data preparation, to analysis, to evaluation and reiteration. The architecture evaluation included large-scale applications from astronomy, cosmology, hydrology, functional genetics, image processing and seismology.
VL - 30
UR - http://dx.doi.org/10.1007/s10619-012-7105-3
IS - 5
ER -
TY - JOUR
T1 - Parallel perfusion imaging processing using GPGPU
JF - Computer Methods and Programs in Biomedicine
Y1 - 2012
A1 - Fan Zhu
A1 - Rodríguez, David
A1 - Carpenter, Trevor
A1 - Malcolm Atkinson
A1 - Wardlaw, Joanna
KW - Deconvolution
KW - GPGPU
KW - Local AIF
KW - Parallelization
KW - Perfusion Imaging
AB - Background and purpose: The objective of brain perfusion quantification is to generate parametric maps of relevant hemodynamic quantities such as cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) that can be used in the diagnosis of acute stroke. These calculations involve deconvolution operations that can be very computationally expensive when using local Arterial Input Functions (AIF). As time is vitally important in the case of acute stroke, reducing the analysis time will reduce the number of brain cells damaged and increase the potential for recovery. Methods: GPUs originated as dedicated graphics-generation co-processors, but modern GPUs have evolved into more general processors capable of executing scientific computations. They provide a highly parallel computing environment due to their large number of computing cores, and constitute an affordable high-performance computing method. In this paper, we present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPU (General Purpose Graphics Processor Units) using the CUDA programming model. We present the serial and parallel implementations of such algorithms and an evaluation of the performance gains using GPUs. Results: Our method achieves speedups of 5.56 and 3.75 for CT and MR images respectively. Conclusions: Using GPGPU appears to be a desirable approach in perfusion imaging analysis, which does not harm the quality of cerebral hemodynamic maps but delivers results faster than the traditional computation.
UR - http://www.sciencedirect.com/science/article/pii/S0169260712001587
ER -
TY - RPRT
T1 - EDIM1 Progress Report
Y1 - 2011
A1 - Paul Martin
A1 - Malcolm Atkinson
A1 - Parsons, Mark
A1 - Adam Carter
A1 - Gareth Francis
AB - The Edinburgh Data-Intensive Machine (EDIM1) is the product of a joint collaboration between the data-intensive group at the School of Informatics and EPCC. EDIM1 is an experimental system, offering an alternative architecture for data-intensive computation and providing a platform for evaluating tools for data-intensive research: a 120-node cluster of ‘data-bricks’ with high storage yet modest computational capacity. This document gives some background on the context in which EDIM1 was designed and constructed, as well as providing an overview of its use so far and future plans.
ER -
TY - JOUR
T1 - A Generic Parallel Processing Model for Facilitating Data Mining and Integration
JF - Parallel Computing
Y1 - 2011
A1 - Liangxiu Han
A1 - Chee Sun Liew
A1 - van Hemert, Jano
A1 - Malcolm Atkinson
KW - Data Mining and Data Integration (DMI)
KW - Life Sciences
KW - OGSA-DAI
KW - Parallelism
KW - Pipeline Streaming
KW - workflow
AB - To facilitate Data Mining and Integration (DMI) processes in a generic way, we investigate a parallel pipeline streaming model. We model a DMI task as a streaming data-flow graph: a directed acyclic graph (DAG) of processing elements (PEs). The composition mechanism links PEs via data streams, which may be in memory, buffered via disks, or inter-computer data-flows. This makes it possible to build arbitrary DAGs with pipelining and both data and task parallelism, which provides room for performance enhancement. We have applied this approach to a real DMI case in the life sciences and implemented a prototype. To demonstrate the feasibility of the modelled DMI task and to assess the efficiency of the prototype, we have also built a performance evaluation model. The experimental results show that a linear speedup is achieved in this case study as the number of distributed computing nodes increases.
PB - Elsevier
VL - 37
IS - 3
ER -
TY - CONF
T1 - Optimum Platform Selection and Configuration for Computational Jobs
T2 - All Hands Meeting 2011
Y1 - 2011
A1 - Gary McGilvary
A1 - Malcolm Atkinson
A1 - Barker, Adam
A1 - Ashley Lloyd
AB - The performance and cost of many scientific applications which execute on a variety of High Performance Computing (HPC), local cluster and cloud environments could be enhanced, and costs reduced, if the platform were carefully selected on a per-application basis and the application itself were optimally configured for a given platform. With a wide variety of computing platforms on offer, each possessing different properties, platform decisions are all too frequently made on an ad hoc basis with limited ‘black-box’ information. The limitless number of possible application configurations also makes it difficult for an individual to achieve cost-effective results with the maximum performance available. Such individuals may include biomedical researchers analysing microarray data, software developers running aviation simulations or bankers performing risk assessments. In each case, it is likely that many lack the knowledge required to select the optimum platform and setup for their application; to do so would require extensive knowledge of both their applications and the various platforms. In this paper we describe a framework that aims to resolve such issues by (i) reducing the detail required in the decision-making process by placing this information within a selection framework, thereby (ii) maximising an application’s performance gain and/or reducing its costs. We present a set of preliminary results in which we compare the performance of running the Simple Parallel R INTerface (SPRINT) over a variety of platforms. SPRINT is a framework providing parallel functions for the statistical package R, allowing post-genomic data to be easily analysed on HPC resources [1]. We run SPRINT on Amazon’s Elastic Compute Cloud (EC2) to compare its performance with the results obtained from HECToR, the UK’s National Supercomputing Service, and the Edinburgh Compute and Data Facility (ECDF) cluster.
JF - All Hands Meeting 2011
CY - York
ER -
TY - CONF
T1 - A Parallel Deconvolution Algorithm in Perfusion Imaging
T2 - Healthcare Informatics, Imaging, and Systems Biology (HISB)
Y1 - 2011
A1 - Zhu, Fan
A1 - Rodríguez, David
A1 - Carpenter, Trevor
A1 - Malcolm Atkinson
A1 - Wardlaw, Joanna
KW - Deconvolution
KW - GPGPU
KW - Parallelization
KW - Perfusion Imaging
AB - In this paper, we present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPUs (General Purpose Graphics Processing Units) using the CUDA programming model. GPUs originated as dedicated graphics co-processors, but modern GPUs have evolved into more general processors capable of executing scientific computations. They provide a highly parallel computing environment due to their large number of computing cores and constitute an affordable high-performance computing method. The objective of brain perfusion quantification is to generate parametric maps of relevant haemodynamic quantities such as Cerebral Blood Flow (CBF), Cerebral Blood Volume (CBV) and Mean Transit Time (MTT) that can be used in the diagnosis of conditions such as stroke or brain tumours. These calculations involve deconvolution operations that can be very computationally expensive when using local Arterial Input Functions (AIF). We present the serial and parallel implementations of this algorithm and evaluate the performance gains from using GPUs.
JF - Healthcare Informatics, Imaging, and Systems Biology (HISB)
CY - San Jose, California
SN - 978-1-4577-0325-6
UR - http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6061411&tag=1
ER -
TY - JOUR
T1 - Performance database: capturing data for optimizing distributed streaming workflows
JF - Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
Y1 - 2011
A1 - Chee Sun Liew
A1 - Atkinson, Malcolm P.
A1 - Radoslaw Ostrowski
A1 - Murray Cole
A1 - van Hemert, Jano I.
A1 - Liangxiu Han
KW - measurement framework
KW - performance data
KW - streaming workflows
AB - The performance database (PDB) stores performance-related data gathered during workflow enactment. We argue that by carefully understanding and manipulating these data, we can improve efficiency when enacting workflows. This paper describes the rationale behind the PDB and proposes a systematic way to implement it. The prototype is built as part of the Advanced Data Mining and Integration Research for Europe project. We use workflows from real-world experiments to demonstrate the use of the PDB.
VL - 369
IS - 1949
ER -
TY - JOUR
T1 - Validation and mismatch repair of workflows through typed data streams
JF - Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
Y1 - 2011
A1 - Yaikhom, Gagarine
A1 - Malcolm Atkinson
A1 - van Hemert, Jano
A1 - Oscar Corcho
A1 - Krause, Amy
AB - The type system of a language guarantees that all of the operations on a set of data comply with the rules and conditions set by the language. While language typing is a fundamental requirement for any programming language, the typing of data that flow between processing elements within a workflow is currently treated as optional. In this paper, we introduce a three-level type system for typing workflow data streams. These types are part of the Data Intensive System Process Engineering Language, which empowers users to validate the connections inside a workflow composition and to apply appropriate data type conversions when necessary. Furthermore, this system enables the enactment engine to carry out type-directed workflow optimizations.
VL - 369
IS - 1949
ER -
TY - RPRT
T1 - Data-Intensive Research Workshop (15-19 March 2010) Report
Y1 - 2010
A1 - Malcolm Atkinson
A1 - Roure, David De
A1 - van Hemert, Jano
A1 - Shantenu Jha
A1 - Ruth McNally
A1 - Robert Mann
A1 - Stratis Viglas
A1 - Chris Williams
KW - Data-intensive Computing
KW - Data-Intensive Machines
KW - Machine Learning
KW - Scientific Databases
AB - We met at the National e-Science Institute in Edinburgh on 15-19 March 2010 to develop our understanding of data-intensive research (DIR). Approximately 100 participants (see Appendix A) worked together to develop their own understanding, and we offer this report as the first step in communicating that to a wider community. We present it in terms of our developing understanding of "What is DIR?" and "Why is it important?". We then review the status of the field, report what the workshop achieved and what remains as open questions.
JF - National e-Science Centre
PB - Data-Intensive Research Group, School of Informatics, University of Edinburgh
CY - Edinburgh
ER -
TY - Generic
T1 - Federated Enactment of Workflow Patterns
T2 - Lecture Notes in Computer Science
Y1 - 2010
A1 - Yaikhom, Gagarine
A1 - Liew, Chee
A1 - Liangxiu Han
A1 - van Hemert, Jano
A1 - Malcolm Atkinson
A1 - Krause, Amy
ED - D’Ambra, Pasqua
ED - Guarracino, Mario
ED - Talia, Domenico
AB - In this paper we address two research questions concerning workflows: (1) how do we abstract and catalogue recurring workflow patterns?; and (2) how do we facilitate optimisation of the mapping from workflow patterns to actual resources at runtime? Our aim is to explore techniques that are applicable to large-scale workflow compositions, where the resources may change dynamically during the lifetime of an application. We achieve this by introducing a registry-based mechanism in which pattern abstractions are catalogued and stored. In conjunction with an enactment engine, which communicates with this registry, concrete computational implementations and resources are assigned to these patterns, conditional on the execution parameters. We demonstrate this new approach using a data mining application from the life sciences.
JF - Lecture Notes in Computer Science
PB - Springer Berlin / Heidelberg
VL - 6271
UR - http://dx.doi.org/10.1007/978-3-642-15277-1_31
N1 - 10.1007/978-3-642-15277-1_31
ER -
TY - JOUR
T1 - Integrating distributed data sources with OGSA--DAI DQP and Views
JF - Philosophical Transactions A
Y1 - 2010
A1 - Dobrzelecki, B.
A1 - Krause, A.
A1 - Hume, A. C.
A1 - Grant, A.
A1 - Antonioletti, M.
A1 - Alemu, T. Y.
A1 - Atkinson, M.
A1 - Jackson, M.
A1 - Theocharopoulos, E.
AB - OGSA-DAI (Open Grid Services Architecture Data Access and Integration) is a framework for building distributed data access and integration systems. Until recently, it lacked the built-in functionality that would allow easy creation of federations of distributed data sources. The latest release of the OGSA-DAI framework introduced the OGSA-DAI DQP (Distributed Query Processing) resource. The new resource encapsulates a distributed query processor that is able to orchestrate distributed data sources when answering declarative user queries. The query processor has many extensibility points, making it easy to customize. We have also introduced a new OGSA-DAI Views resource that provides a flexible method for defining views over relational data. The interoperability of the two new resources, together with the flexibility of the OGSA-DAI framework, allows the building of highly customized data integration solutions.
VL - 368
ER -
TY - CONF
T1 - Towards Optimising Distributed Data Streaming Graphs using Parallel Streams
T2 - Data Intensive Distributed Computing (DIDC'10), in conjunction with the 19th International Symposium on High Performance Distributed Computing
Y1 - 2010
A1 - Chee Sun Liew
A1 - Atkinson, Malcolm P.
A1 - van Hemert, Jano
A1 - Liangxiu Han
KW - Data-intensive Computing
KW - Distributed Computing
KW - Optimisation
KW - Parallel Stream
KW - Scientific Workflows
AB - Modern scientific collaborations have opened up the opportunity of solving complex problems that involve multidisciplinary expertise and large-scale computational experiments. These experiments usually involve large amounts of data that are located in distributed data repositories running various software systems, and managed by different organisations. A common strategy to make the experiments more manageable is executing the processing steps as a workflow. In this paper, we look into the implementation of fine-grained data-flow between computational elements in a scientific workflow as streams. We model the distributed computation as a directed acyclic graph where the nodes represent processing elements that incrementally implement specific subtasks. The processing elements are connected in a pipelined streaming manner, which allows task executions to overlap. We further optimise the execution by splitting pipelines across processes and by introducing extra parallel streams. We identify performance metrics and design a measurement tool to evaluate each enactment. We conducted experiments to evaluate our optimisation strategies with a real-world problem in the life sciences: EURExpress-II. The paper presents our distributed data-handling model, the optimisation and instrumentation strategies and the evaluation experiments. We demonstrate linear speed-up and argue that this use of data streaming to enable both overlapped pipelining and parallelised enactment is a generally applicable optimisation strategy.
JF - Data Intensive Distributed Computing (DIDC'10), in conjunction with the 19th International Symposium on High Performance Distributed Computing
PB - ACM
CY - Chicago, Illinois
UR - http://www.cct.lsu.edu/~kosar/didc10/index.php
ER -
TY - RPRT
T1 - ADMIRE D1.5 – Report defining an iteration of the model and language: PM3 and DL3
Y1 - 2009
A1 - Peter Brezany
A1 - Ivan Janciak
A1 - Alexander Woehrer
A1 - Carlos Buil Aranda
A1 - Malcolm Atkinson
A1 - van Hemert, Jano
AB - This document is the third deliverable to report on the progress of the model, language and ontology research conducted within Workpackage 1 of the ADMIRE project. Significant progress has been made in each of the above areas. The new results that we achieved are recorded against the targets defined for project month 18 and are reported in four sections of this document.
PB - ADMIRE project
UR - http://www.admire-project.eu/docs/ADMIRE-D1.5-model-language-ontology.pdf
ER -
TY - CONF
T1 - Adoption of e-Infrastructure Services: inhibitors, enablers and opportunities
T2 - 5th International Conference on e-Social Science
Y1 - 2009
A1 - Voss, A.
A1 - Asgari-Targhi, M.
A1 - Procter, R.
A1 - Halfpenny, P.
A1 - Fragkouli, E.
A1 - Anderson, S.
A1 - Hughes, L.
A1 - Fergusson, D.
A1 - Vander Meer, E.
A1 - Atkinson, M.
AB - Based on more than 100 interviews with respondents from the academic community and information services, we present findings from our study of inhibitors and enablers of adoption of e-Infrastructure services for research. We discuss issues raised and potential ways of addressing them.
JF - 5th International Conference on e-Social Science
CY - Maternushaus, Cologne
ER -
TY - CONF
T1 - Advanced Data Mining and Integration Research for Europe
T2 - All Hands Meeting 2009
Y1 - 2009
A1 - Atkinson, M.
A1 - Brezany, P.
A1 - Corcho, O.
A1 - Han, L
A1 - van Hemert, J.
A1 - Hluchy, L.
A1 - Hume, A.
A1 - Janciak, I.
A1 - Krause, A.
A1 - Snelling, D.
A1 - Wöhrer, A.
AB - There is a rapidly growing wealth of data [1]. The number of sources of data is increasing, while, at the same time, the diversity, complexity and scale of these data resources are also increasing dramatically. This cornucopia of data offers much potential: a combinatorial explosion of opportunities for knowledge discovery, improved decisions and better policies. Today, most of these opportunities are not realised because composing data from multiple sources and extracting information is too difficult. Every business, organisation and government faces problems that can only be addressed successfully if we improve our techniques for exploiting the data we gather.
JF - All Hands Meeting 2009
CY - Oxford
ER -
TY - CONF
T1 - Automating Gene Expression Annotation for Mouse Embryo
T2 - Lecture Notes in Computer Science (Advanced Data Mining and Applications, 5th International Conference)
Y1 - 2009
A1 - Liangxiu Han
A1 - van Hemert, Jano
A1 - Richard Baldock
A1 - Atkinson, Malcolm P.
ED - Ronghuai Huang
ED - Qiang Yang
ED - Jian Pei
ED - et al
JF - Lecture Notes in Computer Science (Advanced Data Mining and Applications, 5th International Conference)
PB - Springer
VL - LNAI 5678
ER -
TY - Generic
T1 - Crossing boundaries: computational science, e-Science and global e-Infrastructure II
T2 - All Hands Meeting 2008
Y1 - 2009
A1 - Coveney, P. V.
A1 - Atkinson, M. P.
ED - Coveney, P. V.
ED - Atkinson, M. P.
JF - All Hands Meeting 2008
T3 - Philosophical Transactions of the Royal Society Series A
PB - Royal Society Publishing
CY - Edinburgh
VL - 367
UR - http://rsta.royalsocietypublishing.org/content/367/1898.toc
ER -
TY - Generic
T1 - Crossing boundaries: computational science, e-Science and global e-Infrastructure I
T2 - All Hands meeting 2008
Y1 - 2009
A1 - Coveney, P. V.
A1 - Atkinson, M. P.
ED - Coveney, P. V.
ED - Atkinson, M. P.
JF - All Hands meeting 2008
T3 - Philosophical Transactions of the Royal Society Series A
PB - Royal Society Publishing
CY - Edinburgh
VL - 367
UR - http://rsta.royalsocietypublishing.org/content/367/1897.toc
ER -
TY - CONF
T1 - A Distributed Architecture for Data Mining and Integration
T2 - Data-Aware Distributed Computing (DADC'09), in conjunction with the 18th International Symposium on High Performance Distributed Computing
Y1 - 2009
A1 - Atkinson, Malcolm P.
A1 - van Hemert, Jano
A1 - Liangxiu Han
A1 - Ally Hume
A1 - Chee Sun Liew
AB - This paper presents the rationale for a new architecture to support a significant increase in the scale of data integration and data mining. It proposes the composition into one framework of (1) data mining and (2) data access and integration. We name the combined activity “DMI”. It supports enactment of DMI processes across heterogeneous and distributed data resources and data mining services. It posits that a useful division can be made between the facilities established to support the definition of DMI processes and the computational infrastructure provided to enact DMI processes. Communication between those two divisions is restricted to requests submitted to gateway services in a canonical DMI language. Larger-scale processes are enabled by incremental refinement of DMI-process definitions, often by recomposition of lower-level definitions. Autonomous types and descriptions will support detection of inconsistencies and semi-automatic insertion of adaptations. These architectural ideas are being evaluated in a feasibility study that involves an application scenario and representatives of the community.
JF - Data-Aware Distributed Computing (DADC'09), in conjunction with the 18th International Symposium on High Performance Distributed Computing
PB - ACM
ER -
TY - RPRT
T1 - An e-Infrastructure for Collaborative Research in Human Embryo Development
Y1 - 2009
A1 - Barker, Adam
A1 - van Hemert, Jano I.
A1 - Baldock, Richard A.
A1 - Atkinson, Malcolm P.
AB - Within the context of the EU Design Study Developmental Gene Expression Map, we identify a set of challenges when facilitating collaborative research on early human embryo development. These challenges bring forth requirements, for which we have identified solutions and technology. We summarise our solutions and demonstrate how they integrate to form an e-infrastructure to support collaborative research in this area of developmental biology.
UR - http://arxiv.org/pdf/0901.2310v1
ER -
TY - CONF
T1 - An E-infrastructure to Support Collaborative Embryo Research
T2 - Cluster Computing and the Grid
Y1 - 2009
A1 - Barker, Adam
A1 - van Hemert, Jano I.
A1 - Baldock, Richard A.
A1 - Atkinson, Malcolm P.
JF - Cluster Computing and the Grid
PB - IEEE Computer Society
SN - 978-0-7695-3622-4
ER -
TY - JOUR
T1 - Guest Editorial: Research Data: It’s What You Do With Them
JF - International Journal of Digital Curation
Y1 - 2009
A1 - Malcolm Atkinson
AB - These days it may be stating the obvious that the number of data resources, and their complexity and diversity, is growing rapidly due to the compound effects of the increasing speed and resolution of digital instruments, pervasive data-collection automation and the growing power of computers. Just because we are becoming used to the accelerating growth of data resources, it does not mean we can be complacent; they represent an enormous wealth of opportunity to extract information, to make discoveries and to inform policy. But all too often it still takes a heroic effort to discover and exploit those opportunities, hence the research and progress charted by the Fourth International Digital Curation Conference and recorded in this issue of the International Journal of Digital Curation are an invaluable step on a long and demanding journey.
VL - 4
UR - http://www.ijdc.net/index.php/ijdc/article/view/96
IS - 1
ER -
TY - JOUR
T1 - Preface. Crossing boundaries: computational science, e-Science and global e-Infrastructure
JF - Philosophical Transactions of the Royal Society Series A
Y1 - 2009
A1 - Coveney, P. V.
A1 - Atkinson, M. P.
PB - Royal Society Publishing
VL - 367
ER -
TY - JOUR
T1 - Strategies and Policies to Support and Advance Education in e-Science
JF - Computing Now
Y1 - 2009
A1 - Malcolm Atkinson
A1 - Elizabeth Vander Meer
A1 - Fergusson, David
A1 - Clive Davenhall
A1 - Hamza Mehammed
AB - In previous installments of this series, we’ve presented tools and resources that university undergraduate and graduate environments must provide to allow for the continued development and success of e-Science education. We’ve introduced related summer (http://doi.ieeecomputersociety.org/10.1109/MDSO.2008.20) and winter (http://doi.ieeecomputersociety.org/10.1109/MDSO.2008.26) schools and important issues such as t-Infrastructure provision (http://doi.ieeecomputersociety.org/10.1109/MDSO.2008.28), intellectual property rights in the context of digital repositories (http://doi.ieeecomputersociety.org/10.1109/MDSO.2008.34), and curriculum content (http://www2.computer.org/portal/web/computingnow/0309/education). We conclude now with an overview of areas in which we must focus effort, and strategies and policies that could provide much-needed support in these areas. We direct these strategy and policy recommendations toward key stakeholders in e-Science education, such as ministries of education, councils in professional societies, and professional teachers and educational strategists. Ministries of education can influence funding councils, thus financially supporting our proposals. Professional societies can assist in curricula revision, and teachers and strategists shape curricula in institutions, which makes them valuable in improving and developing education in e-Science and (perhaps) e-Science in education. We envision incremental change in curricula, so our proposals aim to evolve existing courses, rather than suggesting drastic upheavals and isolated additions. The long-term goal is to ensure that every graduate obtains the appropriate level of e-Science competency for their field, but we don’t presume to define that level for any given discipline or institution. We set out issues and ideas but don’t offer rigid prescriptions, which would take control away from important stakeholders.
UR - http://www.computer.org/portal/web/computingnow/education
ER -
TY - JOUR
T1 - A Strategy for Research and Innovation in the Century of Information
JF - Prometheus
Y1 - 2009
A1 - e-Science Directors’ Forum Strategy Working Group
A1 - Atkinson, M.
A1 - Britton, D.
A1 - Coveney, P.
A1 - De Roure, D
A1 - Garnett, N.
A1 - Geddes, N.
A1 - Gurney, R.
A1 - Haines, K.
A1 - Hughes, L.
A1 - Ingram, D.
A1 - Jeffreys, P.
A1 - Lyon, L.
A1 - Osborne, I.
A1 - Perrott, P.
A1 - Procter, R.
A1 - Rusbridge, C.
AB - More data will be produced in the next five years than in the entire history of human kind, a digital deluge that marks the beginning of the Century of Information. Through a year‐long consultation with UK researchers, a coherent strategy has been developed, which will nurture Century‐of‐Information Research (CIR); it crystallises the ideas developed by the e‐Science Directors’ Forum Strategy Working Group. This paper is an abridged version of their latest report which can be found at: http://wikis.nesc.ac.uk/escienvoy/Century_of_Information_Research_Strategy which also records the consultation process and the affiliations of the authors. This document is derived from a paper presented at the Oxford e‐Research Conference 2008 and takes into account suggestions made in the ensuing panel discussion. The goals of the CIR Strategy are to facilitate the growth of UK research and innovation that is data and computationally intensive and to develop a new culture of ‘digital‐systems judgement’ that will equip research communities, businesses, government and society as a whole, with the skills essential to compete and prosper in the Century of Information. The CIR Strategy identifies a national requirement for a balanced programme of coordination, research, infrastructure, translational investment and education to empower UK researchers, industry, government and society. The Strategy is designed to deliver an environment which meets the needs of UK researchers so that they can respond agilely to challenges, can create knowledge and skills, and can lead new kinds of research. It is a call to action for those engaged in research, those providing data and computational facilities, those governing research and those shaping education policies. The ultimate aim is to help researchers strengthen the international competitiveness of the UK research base and increase its contribution to the economy. 
The objectives of the Strategy are to better enable UK researchers across all disciplines to contribute world‐leading fundamental research; to accelerate the translation of research into practice; and to develop improved capabilities, facilities and context for research and innovation. It envisages a culture that is better able to grasp the opportunities provided by the growing wealth of digital information. Computing has, of course, already become a fundamental tool in all research disciplines. The UK e‐Science programme (2001–06)—since emulated internationally—pioneered the invention and use of new research methods, and a new wave of innovations in digital‐information technologies which have enabled them. The Strategy argues that the UK must now harness and leverage its own, plus the now global, investment in digital‐information technology in order to spread the benefits as widely as possible in research, education, industry and government. Implementing the Strategy would deliver the computational infrastructure and its benefits as envisaged in the Science & Innovation Investment Framework 2004–2014 (July 2004), and in the reports developing those proposals. To achieve this, the Strategy proposes the following actions: 1. support the continuous innovation of digital‐information research methods; 2. provide easily used, pervasive and sustained e‐Infrastructure for all research; 3. enlarge the productive research community which exploits the new methods efficiently; 4. generate capacity, propagate knowledge and develop skills via new curricula; and 5. develop coordination mechanisms to improve the opportunities for interdisciplinary research and to make digital‐infrastructure provision more cost effective. To gain the best value for money strategic coordination is required across a broad spectrum of stakeholders. 
A coherent strategy is essential in order to establish and sustain the UK as an international leader of well‐curated national data assets and computational infrastructure, which is expertly used to shape policy, support decisions, empower researchers and to roll out the results to the wider benefit of society. The value of data as a foundation for wellbeing and a sustainable society must be appreciated; national resources must be more wisely directed to the collection, curation, discovery, widening access, analysis and exploitation of these data. Every researcher must be able to draw on skills, tools and computational resources to develop insights, test hypotheses and translate inventions into productive use, or to extract knowledge in support of governmental decision making. This foundation plus the skills developed will launch significant advances in research, in business, in professional practice and in government with many consequent benefits for UK citizens. The Strategy presented here addresses these complex and interlocking requirements.
VL - 27
ER -
TY - JOUR
T1 - Distributed Computing Education, Part 1: A Special Case?
JF - IEEE Distributed Systems Online
Y1 - 2008
A1 - Fergusson, D.
A1 - Hopkins, R.
A1 - Romano, D.
A1 - Vander Meer, E.
A1 - Atkinson, M.
VL - 9
UR - http://dsonline.computer.org/portal/site/dsonline/menuitem.9ed3d9924aeb0dcd82ccc6716bbe36ec/index.jsp?&pName=dso_level1&path=dsonline/2008/06&file=o6002edu.xml&xsl=article.xsl&;jsessionid=LZ5zjySvc2xPnVv4qTYJXhlvwSnRGGj7S7WvPtrPyv23rJGQdjJr!982319602
IS - 6
ER -
TY - JOUR
T1 - Distributed Computing Education, Part 2: International Summer Schools
JF - IEEE Distributed Systems Online
Y1 - 2008
A1 - Fergusson, D.
A1 - Hopkins, R.
A1 - Romano, D.
A1 - Vander Meer, E.
A1 - Atkinson, M.
VL - 9
UR - http://dsonline.computer.org/portal/site/dsonline/menuitem.9ed3d9924aeb0dcd82ccc6716bbe36ec/index.jsp?&pName=dso_level1&path=dsonline/2008/07&file=o7002edu.xml&xsl=article.xsl&
IS - 7
ER -
TY - JOUR
T1 - Distributed Computing Education, Part 3: The Winter School Online Experience
JF - Distributed Systems Online
Y1 - 2008
A1 - Low, B.
A1 - Cassidy, K.
A1 - Fergusson, D.
A1 - Atkinson, M.
A1 - Vander Meer, E.
A1 - McGeever, M.
AB - The International Summer Schools in Grid Computing (ISSGC) have provided numerous international students with the opportunity to learn grid systems, as detailed in part 2 of this series (http://doi.ieeecomputersociety.org/10.1109/MDSO.2008.20). The International Winter School on Grid Computing 2008 (IWSGC 08) followed the successful summer schools, opening up the ISSGC experience to a wider range of students because of its online format. The previous summer schools made it clear that many students found the registration and travel costs and the time requirements prohibitive. The EU FP6 ICEAGE project held the first winter school from 6 February to 12 March 2008. The winter school repurposed summer school materials and added resources such as the ICEAGE digital library and summer-school-tested t-Infrastructures such as GILDA (Grid INFN Laboratory for Dissemination Activities). The winter schools shared the goals of the summer school, which emphasized disseminating grid knowledge. The students act as multipliers, spreading the skills and knowledge they acquired at the winter school to their colleagues to build strong and enthusiastic local grid communities.
PB - IEEE Computer Society
VL - 9
UR - http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4659260
IS - 9
ER -
TY - JOUR
T1 - Distributed Computing Education, Part 4: Training Infrastructure
JF - Distributed Systems Online
Y1 - 2008
A1 - Fergusson, D.
A1 - Barbera, R.
A1 - Giorgio, E.
A1 - Fargetta, M.
A1 - Sipos, G.
A1 - Romano, D.
A1 - Atkinson, M.
A1 - Vander Meer, E.
AB - In the first article of this series (see http://doi.ieeecomputersociety.org/10.1109/MDSO.2008.16), we identified the need for teaching environments that provide infrastructure to support education and training in distributed computing. Training infrastructure, or t-infrastructure, is analogous to the teaching laboratory in biology and is a vital tool for educators and students. In practice, t-infrastructure includes the computing equipment, digital communications, software, data, and support staff necessary to teach a course. The International Summer Schools in Grid Computing (ISSGC) series and the first International Winter School on Grid Computing (IWSGC 08) used the Grid INFN Laboratory of Dissemination Activities (GILDA) infrastructure so students could gain hands-on experience with middleware. Here, we describe GILDA, related summer and winter school experiences, multimiddleware integration, t-infrastructure, and academic courses, concluding with an analysis and recommendations.
PB - IEEE Computer Society
VL - 9
UR - http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4752926
IS - 10
ER -
TY - JOUR
T1 - Distributed Computing Education, Part 5: Coming to Terms with Intellectual Property Rights
JF - Distributed Systems Online
Y1 - 2008
A1 - Boon Low
A1 - Kathryn Cassidy
A1 - Fergusson, David
A1 - Malcolm Atkinson
A1 - Elizabeth Vander Meer
A1 - Mags McGeever
AB - In part 1 of this series on distributed computing education, we introduced a list of components important for teaching environments. We outlined the first three components, which included development of materials for education, education for educators and teaching infrastructures, identifying current practice, challenges, and opportunities for provision. The final component, a supportive policy framework that encourages cooperation and sharing, includes the need to manage intellectual property rights (IPR).
PB - IEEE Computer Society
VL - 9
UR - http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4755177
IS - 12
ER -
TY - RPRT
T1 - Education and Training Task Force Report
Y1 - 2008
A1 - Atkinson, M.
A1 - Vander Meer, E.
A1 - Fergusson, D.
A1 - Artacho, M.
AB - The development of e-Infrastructure, of which grid computing is a fundamental element, will have major economic and social benefits. Online and financial businesses already successfully use grid computing technologies, for instance. There are already demonstrations showing the benefits to engineering, medicine and the creative industries as well. New research methods and technologies generate large data sets that need to be shared in order to ensure continued social and scientific research and innovation. e-Infrastructure provides an environment for coping with these large data sets and for sharing data across regions. An investment in educating people in this technology, then, is an investment that will strengthen our economies and societies. In order to deliver e-Infrastructure education and training successfully in the EU, we must develop a policy framework that will ensure shared responsibility and equivalent training in the field. This document focuses primarily on the current state of grid and e-Science education, introducing key challenges and the opportunities available to educational planners that serve as a starting point for further work. It then proposes strategies and policies to provide a supportive framework for e-Infrastructure education and training. The ETTF Report concludes with policy recommendations to be taken forward by the e-IRG. These recommendations address issues such as the level of Member State investment in e-Infrastructure education, the harmonisation of education in distributed-computation thinking and in the use of e-Infrastructure and the development of standards for student and teacher identification, for the sharing of t-Infrastructure (and training material) and for accreditation.
JF - e-Infrastructure Reflection Group
UR - http://www.e-irg.eu/index.php?option=com_content&task=view&id=38&Itemid=37
ER -
TY - CONF
T1 - Fostering e-Infrastructures: from user-designer relations to community engagement
T2 - Symposium on Project Management in e-Science
Y1 - 2008
A1 - Voss, A.
A1 - Asgari-Targhi, M.
A1 - Halfpenny, P.
A1 - Procter, R.
A1 - Anderson, S.
A1 - Dunn, S.
A1 - Fragkouli, E.
A1 - Hughes, L.
A1 - Atkinson, M.
A1 - Fergusson, D.
A1 - Mineter, M.
A1 - Rodden, T.
AB - In this paper we discuss how e-Science can draw on the findings, approaches and methods developed in other disciplines to foster e-Infrastructures for research. We also discuss the issue of making user involvement in IT development scale across an open community of researchers and from single systems to distributed e-Infrastructures supporting collaborative research.
JF - Symposium on Project Management in e-Science
CY - Oxford
ER -
TY - CONF
T1 - OGSA-DAI: Middleware for Data Integration: Selected Applications
T2 - ESCIENCE '08: Proceedings of the 2008 Fourth IEEE International Conference on eScience
Y1 - 2008
A1 - Grant, Alistair
A1 - Antonioletti, Mario
A1 - Hume, Alastair C.
A1 - Krause, Amy
A1 - Dobrzelecki, Bartosz
A1 - Jackson, Michael J.
A1 - Parsons, Mark
A1 - Atkinson, Malcolm P.
A1 - Theocharopoulos, Elias
JF - ESCIENCE '08: Proceedings of the 2008 Fourth IEEE International Conference on eScience
PB - IEEE Computer Society
CY - Washington, DC, USA
SN - 978-0-7695-3535-7
ER -
TY - CONF
T1 - Widening Uptake of e-Infrastructure Services
T2 - 4th International Conference on e-Social Science
Y1 - 2008
A1 - Voss, A.
A1 - Asgari-Targhi, M.
A1 - Procter, R.
A1 - Halfpenny, P.
A1 - Dunn, S.
A1 - Fragkouli, E.
A1 - Anderson, S.
A1 - Hughes, L.
A1 - Mineter, M.
A1 - Fergusson, D.
A1 - Atkinson, M.
AB - This paper presents findings from the e-Uptake project which aims to widen the uptake of e-Infrastructure Services for research. We focus specifically on the identification of barriers and enablers of uptake and the taxonomy developed to structure our findings. Based on these findings, we describe the development of a number of interventions such as training and outreach events, workshops and the deployment of a UK 'one-stop-shop' for support and event information as well as training material. Finally, we describe how the project relates to other ongoing community engagement efforts in the UK and worldwide. Introduction: Existing investments in e-Science and Grid computing technologies have helped to develop the capacity to build e-Infrastructures for research: distributed, networked, interoperable computing and data resources that are available to underpin a wide range of research activities in all research disciplines. In the UK, the Research Councils and the JISC are funding programmes to support the development of essential components of such infrastructures, such as the National Grid Service (www.ngs.ac.uk) or the UK Access Management Federation (www.ukfederation.org.uk), as well as discipline-specific efforts to build consistent and accessible instantiations of e-Infrastructures, for example the e-Infrastructure for the Social Sciences (Daw et al. 2007). These investments are complemented by an active programme of community engagement (Voss et al. 2007). As part of the community engagement strand of its e-Infrastructure programme, JISC has funded the e-Uptake project, a collaboration between the ESRC National Centre for e-Social Science at the University of Manchester, the Arts & Humanities e-Science Support Centre at King's College London and the National e-Science Centre at the University of Edinburgh.
In this paper we present the project's activities to date to widen the uptake of e-Infrastructure services by eliciting information about the barriers to and enablers of uptake, developing adequate interventions such as training and outreach events, running workshops and deploying a UK 'one-stop-shop' for support and event information as well as training material.
JF - 4th International Conference on e-Social Science
CY - Manchester
UR - http://www.ncess.ac.uk/events/conference/programme/workshop1/?ref=/programme/thurs/1aVoss.htm
ER -
TY - CONF
T1 - Accessing Data in Grids Using OGSA-DAI
T2 - Knowledge and Data Management in Grids
Y1 - 2007
A1 - Chue Hong, N. P.
A1 - Antonioletti, M.
A1 - Karasavvas, K. A.
A1 - Atkinson, M.
ED - Talia, D.
ED - Bilas, A.
ED - Dikaiakos, M.
AB - The grid provides a vision in which resources, including storage and data, can be shared across organisational boundaries. The original emphasis of grid computing lay in the sharing of computational resources but technological and scientific advances have led to an ongoing data explosion in many fields. However, data is stored in many different storage systems and data formats, with different schema, access rights, metadata attributes, and ontologies all of which are obstacles to the access, integration and management of this information. In this chapter we examine some of the ways in which these differences can be addressed by grid technology to enable the meaningful sharing of data. In particular, we present an overview of the OGSA-DAI (Open Grid Service Architecture - Data Access and Integration) software, which provides a uniform, extensible framework for accessing structured and semi-structured data and provide some examples of its use in other projects. The open-source OGSA-DAI software is freely available from http://www.ogsadai.org.uk.
JF - Knowledge and Data Management in Grids
SN - 978-0-387-37830-5
UR - http://www.springer.com/computer/communication+networks/book/978-0-387-37830-5
ER -
TY - CONF
T1 - e-Research Infrastructure Development and Community Engagement
T2 - All Hands Meeting 2007
Y1 - 2007
A1 - Voss, A.
A1 - Mascord, M.
A1 - Fraser, M.
A1 - Jirotka, M.
A1 - Procter, R.
A1 - Halfpenny, P.
A1 - Fergusson, D.
A1 - Atkinson, M.
A1 - Dunn, S.
A1 - Blanke, T.
A1 - Hughes, L.
A1 - Anderson, S.
AB - The UK and wider international e-Research initiatives are entering a critical phase in which they need to move from the development of the basic underlying technology, demonstrators, prototypes and early applications to wider adoption and the development of stable infrastructures. In this paper we will review existing work on studies of infrastructure and community development, requirements elicitation for existing services as well as work within the arts and humanities and the social sciences to establish e-Research in these communities. We then describe two projects recently funded by JISC to study barriers to adoption and responses to them as well as use cases and service usage models.
JF - All Hands Meeting 2007
CY - Nottingham, UK
ER -
TY - CONF
T1 - Grid Enabling Your Data Resources with OGSA-DAI
T2 - Applied Parallel Computing. State of the Art in Scientific Computing
Y1 - 2007
A1 - Antonioletti, M.
A1 - Atkinson, M.
A1 - Chue Hong, N. P.
A1 - Dobrzelecki, B.
A1 - Hume, A. C.
A1 - Jackson, M.
A1 - Karasavvas, K.
A1 - Krause, A.
A1 - Schopf, J. M.
A1 - Sugden, T.
A1 - Theocharopoulos, E.
JF - Applied Parallel Computing. State of the Art in Scientific Computing
T3 - Lecture Notes in Computer Science
VL - 4699
ER -
TY - CONF
T1 - OGSA-DAI 3.0 - The What's and Whys
T2 - UK e-Science All Hands Meeting
Y1 - 2007
A1 - Antonioletti, M.
A1 - Hong, N. P. Chue
A1 - Hume, A. C.
A1 - Jackson, M.
A1 - Karasavvas, K.
A1 - Krause, A.
A1 - Schopf, J. M.
A1 - Atkinson, M. P.
A1 - Dobrzelecki, B.
A1 - Illingworth, M.
A1 - McDonnell, N.
A1 - Parsons, M.
A1 - Theocharopoulos, E.
JF - UK e-Science All Hands Meeting
ER -
TY - Generic
T1 - Special Issue: Selected Papers from the 2004 U.K. e-Science All Hands Meeting
T2 - All Hands Meeting 2004
Y1 - 2007
A1 - Walker, D. W.
A1 - Atkinson, M. P.
A1 - Sommerville, I.
ED - Walker, D. W.
ED - Atkinson, M. P.
ED - Sommerville, I.
JF - All Hands Meeting 2004
T3 - Concurrency and Computation: Practice and Experience
PB - John Wiley & Sons Ltd
CY - Nottingham, UK
VL - 19
ER -
TY - CONF
T1 - Study of User Priorities for e-Infrastructure for e-Research (SUPER)
T2 - Proceedings of the UK e-Science All Hands Meeting
Y1 - 2007
A1 - Newhouse, S.
A1 - Schopf, J. M.
A1 - Richards, A.
A1 - Atkinson, M. P.
JF - Proceedings of the UK e-Science All Hands Meeting
ER -
TY - CONF
T1 - EGEE: building a pan-European grid training organisation
T2 - ACSW Frontiers
Y1 - 2006
A1 - Berlich, Rüdiger
A1 - Hardt, Marcus
A1 - Kunze, Marcel
A1 - Atkinson, Malcolm P.
A1 - Fergusson, David
JF - ACSW Frontiers
ER -
TY - CONF
T1 - FireGrid: Integrated emergency response and fire safety engineering for the future built environment
T2 - All Hands Meeting 2005
Y1 - 2006
A1 - D. Berry
A1 - Usmani, A.
A1 - Torero, J.
A1 - Tate, A.
A1 - McLaughlin, S.
A1 - Potter, S.
A1 - Trew, A.
A1 - Baxter, R.
A1 - Bull, M.
A1 - Atkinson, M.
AB - Analyses of disasters such as the Piper Alpha explosion (Sylvester-Evans and Drysdale, 1998), the World Trade Centre collapse (Torero et al, 2002, Usmani et al, 2003) and the fires at Kings Cross (Drysdale et al, 1992) and the Mont Blanc tunnel (Rapport Commun, 1999) have revealed many mistaken decisions, such as that which sent 300 fire-fighters to their deaths in the World Trade Centre. Many of these mistakes have been attributed to a lack of information about the conditions within the fire and the imminent consequences of the event. E-Science offers an opportunity to significantly improve the intervention in fire emergencies. The FireGrid Consortium is working on a mixture of research projects to make this vision a reality. This paper describes the research challenges and our plans for solving them.
JF - All Hands Meeting 2005
CY - Nottingham, UK
ER -
TY - CONF
T1 - Grid Enabling your Data Resources with OGSA-DAI
T2 - Workshop on State-of-the-Art in Scientific and Parallel Computing
Y1 - 2006
A1 - Antonioletti, M.
A1 - Atkinson, M.
A1 - Hong, N. Chue
A1 - Dobrzelecki, B.
A1 - Hume, A.
A1 - Jackson, M.
A1 - Karasavvas, K.
A1 - Krause, A.
A1 - Sugden, T.
A1 - Theocharopoulos, E.
JF - Workshop on State-of-the-Art in Scientific and Parallel Computing
ER -
TY - CHAP
T1 - Knowledge and Data Management in Grids, CoreGRID
T2 - Euro-Par'06 Proceedings of the CoreGRID 2006, UNICORE Summit 2006, Petascale Computational Biology and Bioinformatics conference on Parallel processing
Y1 - 2006
A1 - Chue Hong, N. P.
A1 - Antonioletti, M.
A1 - Karasavvas, K. A.
A1 - Atkinson, M.
ED - Lehner, W.
ED - Meyer, N.
ED - Streit, A.
ED - Stewart, C.
JF - Euro-Par'06 Proceedings of the CoreGRID 2006, UNICORE Summit 2006, Petascale Computational Biology and Bioinformatics conference on Parallel processing
T3 - Lecture Notes in Computer Science
PB - Springer
CY - Berlin, Germany
VL - 4375
SN - 978-3-540-72226-7
UR - http://www.springer.com/computer/communication+networks/book/978-3-540-72226-7
ER -
TY - CONF
T1 - Profiling OGSA-DAI Performance for Common Use Patterns
T2 - UK e-Science All Hands Meeting
Y1 - 2006
A1 - Dobrzelecki, B.
A1 - Antonioletti, M.
A1 - Schopf, J. M.
A1 - Hume, A. C.
A1 - Atkinson, M.
A1 - Hong, N. P. Chue
A1 - Jackson, M.
A1 - Karasavvas, K.
A1 - Krause, A.
A1 - Parsons, M.
A1 - Sugden, T.
A1 - Theocharopoulos, E.
JF - UK e-Science All Hands Meeting
ER -
TY - JOUR
T1 - The design and implementation of Grid database services in OGSA-DAI
JF - Concurrency - Practice and Experience
Y1 - 2005
A1 - Antonioletti, Mario
A1 - Atkinson, Malcolm P.
A1 - Baxter, Robert M.
A1 - Borley, Andrew
A1 - Hong, Neil P. Chue
A1 - Collins, Brian
A1 - Hardman, Neil
A1 - Hume, Alastair C.
A1 - Knox, Alan
A1 - Mike Jackson
A1 - Krause, Amrey
A1 - Laws, Simon
A1 - Magowan, James
A1 - Pato
VL - 17
ER -
TY - CONF
T1 - The Digital Curation Centre: a vision for digital curation
T2 - 2005 IEEE International Symposium on Mass Storage Systems and Technology
Y1 - 2005
A1 - Rusbridge, C.
A1 - P. Burnhill
A1 - S. Ross
A1 - P. Buneman
A1 - D. Giaretta
A1 - Lyon, L.
A1 - Atkinson, M.
AB - We describe the aims and aspirations for the Digital Curation Centre (DCC), the UK response to the realisation that digital information is both essential and fragile. We recognise the equivalence of preservation as "interoperability with the future", asserting that digital curation is concerned with "communication across time". We see the DCC as having relevance for present day data curation and for continuing data access for generations to come. We describe the structure and plans of the DCC, designed to support these aspirations and based on a view of world class research being developed into curation services, all of which are underpinned by outreach to the broadest community.
JF - 2005 IEEE International Symposium on Mass Storage Systems and Technology
PB - IEEE Computer Society
CY - Sardinia, Italy
SN - 0-7803-9228-0
ER -
TY - CONF
T1 - Introduction to OGSA-DAI Services
T2 - Scientific Applications of Grid Computing
Y1 - 2005
A1 - Karasavvas, K.
A1 - Antonioletti, M.
A1 - Atkinson, M.
A1 - Hong, N. C.
A1 - Sugden, T.
A1 - Hume, A.
A1 - Jackson, M.
A1 - Krause, A.
A1 - Palansuriya, C.
JF - Scientific Applications of Grid Computing
VL - 3458
SN - 978-3-540-25810-0
ER -
TY - CONF
T1 - A New Architecture for OGSA-DAI
T2 - UK e-Science All Hands Meeting
Y1 - 2005
A1 - Atkinson, M.
A1 - Karasavvas, K.
A1 - Antonioletti, M.
A1 - Baxter, R.
A1 - Borley, A.
A1 - Hong, N. C.
A1 - Hume, A.
A1 - Jackson, M.
A1 - Krause, A.
A1 - Laws, S.
A1 - Paton, N.
A1 - Schopf, J.
A1 - Sugden, T.
A1 - Tourlas, K.
A1 - Watson, P.
JF - UK e-Science All Hands Meeting
ER -
TY - CONF
T1 - OGSA-DAI Status and Benchmarks
T2 - All Hands Meeting 2005
Y1 - 2005
A1 - Antonioletti, Mario
A1 - Malcolm Atkinson
A1 - Rob Baxter
A1 - Andrew Borle
A1 - Hong, Neil P. Chue
A1 - Patrick Dantressangle
A1 - Hume, Alastair C.
A1 - Mike Jackson
A1 - Krause, Amy
A1 - Laws, Simon
A1 - Parsons, Mark
A1 - Paton, Norman W.
A1 - Jennifer M. Schopf
A1 - Tom Sugden
A1 - Watson, Paul
AB - This paper presents a status report on some of the highlights that have taken place within the OGSA-DAI project since the last AHM. A description of Release 6.0 functionality and details of the forthcoming release, due in September 2005, are given. Future directions for this project are discussed. This paper also describes initial results of work being done to systematically benchmark recent OGSA-DAI releases. The OGSA-DAI software distribution, and more information about the project, is available from the project website at www.ogsadai.org.uk.
JF - All Hands Meeting 2005
CY - Nottingham, UK
ER -
TY - JOUR
T1 - Web Service Grids: an evolutionary approach
JF - Concurrency - Practice and Experience
Y1 - 2005
A1 - Atkinson, Malcolm P.
A1 - Roure, David De
A1 - Dunlop, Alistair N.
A1 - Fox, Geoffrey
A1 - Henderson, Peter
A1 - Hey, Anthony J. G.
A1 - Paton, Norman W.
A1 - Newhouse, Steven
A1 - Parastatidis, Savas
A1 - Trefethen, Anne E.
A1 - Watson, Paul
A1 - Webber, Jim
VL - 17
ER -
TY - Generic
T1 - Grid Services Supporting the Usage of Secure Federated, Distributed Biomedical Data
T2 - All Hands Meeting 2004
Y1 - 2004
A1 - Richard Sinnott
A1 - Malcolm Atkinson
A1 - Micha Bayer
A1 - Dave Berry
A1 - Anna Dominiczak
A1 - Magnus Ferrier
A1 - David Gilbert
A1 - Neil Hanlon
A1 - Derek Houghton
A1 - Hunt, Ela
A1 - David White
AB - The BRIDGES project is a UK e-Science project that provides grid based support for biomedical research into the genetics of hypertension – the Cardiovascular Functional Genomics Project (CFG). Its main goal is to provide an effective environment for CFG, and biomedical research in general, including access to integrated data, analysis and visualization, with appropriate authorisation and privacy, as well as grid based computational tools and resources. It also aims to provide an improved understanding of the requirements of academic biomedical research virtual organizations and to evaluate the utility of existing data federation tools.
JF - All Hands Meeting 2004
CY - Nottingham, UK
UR - http://www.allhands.org.uk/2004/proceedings/papers/87.pdf
ER -
TY - CONF
T1 - Grid-Based Metadata Services
T2 - SSDBM
Y1 - 2004
A1 - Deelman, Ewa
A1 - Singh, Gurmeet Singh
A1 - Atkinson, Malcolm P.
A1 - Chervenak, Ann L.
A1 - Hong, Neil P. Chue
A1 - Kesselman, Carl
A1 - Patil, Sonal
A1 - Pearlman, Laura
A1 - Su, Mei-Hui
JF - SSDBM
ER -
TY - CONF
T1 - OGSA-DAI Status Report and Future Directions
T2 - All Hands Meeting 2004
Y1 - 2004
A1 - Antonioletti, Mario
A1 - Malcolm Atkinson
A1 - Rob Baxter
A1 - Borley, Andrew
A1 - Hong, Neil P. Chue
A1 - Collins, Brian
A1 - Jonathan Davies
A1 - Desmond Fitzgerald
A1 - Hardman, Neil
A1 - Hume, Alastair C.
A1 - Mike Jackson
A1 - Krause, Amrey
A1 - Laws, Simon
A1 - Paton, Norman W.
A1 - Tom Sugden
A1 - Watson, Paul
A1 - Mar
AB - The OGSA-DAI project provides middleware for Data Access and Integration (DAI) of data resources, such as relational and XML databases, within a Grid context. Project members also participate in the development of DAI standards through the GGF DAIS WG. The standards that emerge through this effort will be adopted by OGSA-DAI once they have stabilised. The OGSA-DAI developers are also engaging with a growing user community to gather their data and functionality requirements. Several large projects are already using OGSA-DAI to provide their DAI capabilities. This paper presents a status report on OGSA-DAI activities since the last AHM and announces future directions. The OGSA-DAI software distribution and more information about the project is available from the project website at http://www.ogsadai.org.uk/.
JF - All Hands Meeting 2004
CY - Nottingham, UK
ER -
TY - CONF
T1 - OGSA-DAI: Two Years On
T2 - GGF10
Y1 - 2004
A1 - Antonioletti, Mario
A1 - Malcolm Atkinson
A1 - Rob Baxter
A1 - Borley, Andrew
A1 - Neil Chue Hong
A1 - Collins, Brian
A1 - Jonathan Davies
A1 - Hardman, Neil
A1 - George Hicken
A1 - Ally Hume
A1 - Mike Jackson
A1 - Krause, Amrey
A1 - Laws, Simon
A1 - Magowan, James
A1 - Jeremy Nowell
A1 - Paton, Norman W.
A1 - Dave Pearson
A1 - To
AB - The OGSA-DAI project has been producing Grid-enabled middleware for almost two years now, providing data access and integration capabilities to data resources, such as databases, within an OGSA context. In these two years, OGSA-DAI has been tracking rapidly evolving standards, managing changes in software dependencies, contributing to the standardisation process and liasing with a growing user community together with their associated data requirements. This process has imparted important lessons and raised a number of issues that need to be addressed if a middleware product is to be widely adopted. This paper examines the experiences of OGSA-DAI in implementing proposed standards, the likely impact that the still-evolving standards landscape will have on future implementations and how these affect uptake of the software. The paper also examines the gathering of requirements from and engagement with the Grid community, the difficulties of defining a process for the management and publishing of metadata, and whether relevant standards can be implemented in an efficient manner. The OGSA-DAI software distribution and more details about the project are available from the project Web site at http://www.ogsadai.org.uk/.
JF - GGF10
CY - Berlin, Germany
ER -
TY - RPRT
T1 - SPLAT (Suffix-tree Powered Local Alignment Tool): A Full-Sensitivity Protein Database Search Program that Accelerates the Smith-Waterman Algorithm using a Generalised Suffix Tree Index
Y1 - 2004
A1 - Harding, N. J.
A1 - Atkinson, M. P.
JF - Department of Computer Science (DCS Tech Report TR-2003-141)
PB - University of Glasgow
ER -
TY - RPRT
T1 - Web Service Grids: An Evolutionary Approach
Y1 - 2004
A1 - Malcolm Atkinson
A1 - Roure, David De
A1 - Alistair Dunlop
A1 - Fox, Geoffrey
A1 - Henderson, Peter
A1 - Tony Hey
A1 - Norman Paton
A1 - Newhouse, Steven
A1 - Parastatidis, Savas
A1 - Anne Trefethen
A1 - Watson, Paul
A1 - Webber, Jim
AB - The UK e-Science Programme is a £250M, 5 year initiative which has funded over 100 projects. These application-led projects are underpinned by an emerging set of core middleware services that allow the coordinated, collaborative use of distributed resources. This set of middleware services runs on top of the research network and beneath the applications we call the ‘Grid’. Grid middleware is currently in transition from pre-Web Service versions to a new version based on Web Services. Unfortunately, only a very basic set of Web Services embodied in the Web Services Interoperability proposal, WS-I, are agreed by most IT companies. IBM and others have submitted proposals for Web Services for Grids - the Web Services Resource Framework and Web Services Notification specifications - to the OASIS organisation for standardisation. This process could take up to 12 months from March 2004 and the specifications are subject to debate and potentially significant changes. Since several significant UK e-Science projects come to an end before the end of this process, the UK therefore needs to develop a strategy that will protect the UK’s investment in Grid middleware by informing the Open Middleware Infrastructure Institute’s (OMII) roadmap and UK middleware repository in Southampton. This paper sets out an evolutionary roadmap that will allow us to capture generic middleware components from projects in a form that will facilitate migration or interoperability with the emerging Grid Web Services standards and with on-going OGSA developments. In this paper we therefore define a set of Web Services specifications - that we call ‘WS-I+’ to reflect the fact that this is a larger set than currently accepted by WS-I - that we believe will enable us to achieve the twin goals of capturing these components and facilitating migration to future standards.
We believe that the extra Web Services specifications we have included in WS-I+ are both helpful in building e-Science Grids and likely to be widely accepted.
JF - UK e-Science Technical Report Series
ER -
TY - RPRT
T1 - Computer Challenges to emerge from e-Science.
Y1 - 2003
A1 - Atkinson, M.
A1 - Crowcroft, J.
A1 - Goble, C.
A1 - Gurd, J.
A1 - Rodden, T.
A1 - Shadbolt, N.
A1 - Sloman, M.
A1 - Sommerville, I.
A1 - Storey, T.
AB - The UK e-Science programme has initiated significant developments that allow networked grid technology to be used to form virtual collaboratories. The e-Science vision of a globally connected community has broader application than science, with the same fundamental technologies being used to support e-Commerce and e-Government. The broadest vision of e-Science outlines a challenging research agenda for the computing community. New theories and models will be needed to provide a sound foundation for the tools used to specify, design, analyse and prove the properties of future grid technologies and applications. Fundamental research is needed in order to build a future e-Science infrastructure and to understand how to exploit the infrastructure to best effect. A future infrastructure needs to be dynamic, universally available and promote trust. Realising this infrastructure will need new theories, methods and techniques to be developed and deployed. Although often not directly visible, these fundamental infrastructure advances will provide the foundation for future scientific advancement, wealth generation and governance. • We need to move from the current data focus to a semantic grid with facilities for the generation, support and traceability of knowledge. • We need to make the infrastructure more available and more trusted by developing trusted ubiquitous systems. • We need to reduce the cost of development by enabling the rapid customised assembly of services. • We need to reduce the cost and complexity of managing the infrastructure by realising autonomic computing systems.
JF - EPSRC
ER -
TY - CHAP
T1 - Data Access, Integration, and Management
T2 - The Grid 2: Blueprint for a New Computing Infrastructure (2nd edition),
Y1 - 2003
A1 - Atkinson, M.
A1 - Chervenak, A. L.
A1 - Kunszt, P.
A1 - Narang, I.
A1 - Paton, N. W.
A1 - Pearson, D.
A1 - Shoshani, A.
A1 - Watson, P.
ED - Foster, I.
ED - Kesselman, C
JF - The Grid 2: Blueprint for a New Computing Infrastructure (2nd edition),
PB - Morgan Kaufmann
SN - 1-55860-933-4
ER -
TY - CONF
T1 - Databases and the Grid: Who Challenges Whom?
T2 - BNCOD
Y1 - 2003
A1 - Atkinson, Malcolm P.
JF - BNCOD
ER -
TY - CONF
T1 - The Design and Implementation of Grid Database Services in OGSA-DAI
T2 - All Hands Meeting 2003
Y1 - 2003
A1 - Ali Anjomshoaa
A1 - Antonioletti, Mario
A1 - Malcolm Atkinson
A1 - Rob Baxter
A1 - Borley, Andrew
A1 - Hong, Neil P. Chue
A1 - Collins, Brian
A1 - Hardman, Neil
A1 - George Hicken
A1 - Ally Hume
A1 - Knox, Alan
A1 - Mike Jackson
A1 - Krause, Amrey
A1 - Laws, Simon
A1 - Magowan, James
A1 - Charaka Palansuriya
A1 - Paton, Norman W.
AB - This paper presents a high-level overview of the design and implementation of the core components of the OGSA-DAI project. It describes the design decisions made, the project’s interaction with the Data Access and Integration Working Group of the Global Grid Forum and provides an overview of implementation characteristics. Further details of the implementation are provided in the extensive documentation available from the project web site.
JF - All Hands Meeting 2003
CY - Nottingham, UK
ER -
TY - RPRT
T1 - Grid Database Access and Integration: Requirements and Functionalities
Y1 - 2003
A1 - Atkinson, M. P.
A1 - Dialani, V.
A1 - Guy, L.
A1 - Narang, I.
A1 - Paton, N. W.
A1 - Pearson, D.
A1 - Storey, T.
A1 - Watson, P.
AB - This document is intended to provide the context for developing Grid data service standard recommendations within the Global Grid Forum. It defines the generic requirements for accessing and integrating persistent structured and semi-structured data. In addition, it defines the generic functionalities which a Grid data service needs to provide in supporting discovery of and controlled access to data, in performing data manipulation operations, and in virtualising data resources. The document also defines the scope of Grid data service standard recommendations which are presented in a separate document.
JF - Global Grid Forum
ER -
TY - JOUR
T1 - The pervasiveness of evolution in GRUMPS software
JF - Softw., Pract. Exper.
Y1 - 2003
A1 - Evans, Huw
A1 - Atkinson, Malcolm P.
A1 - Brown, Margaret
A1 - Cargill, Julie
A1 - Crease, Murray
A1 - Draper, Steve
A1 - Gray, Philip D.
A1 - Thomas, Richard
VL - 33
ER -
TY - CHAP
T1 - Rationale for Choosing the Open Grid Services Architecture
T2 - Grid Computing: Making the Global Infrastructure a Reality
Y1 - 2003
A1 - Atkinson, M.
ED - F. Berman
ED - G. Fox
ED - T. Hey
JF - Grid Computing: Making the Global Infrastructure a Reality
PB - John Wiley & Sons, Ltd
CY - Chichester, UK
SN - 9780470853191
ER -
TY - JOUR
T1 - Database indexing for large DNA and protein sequence collections
JF - VLDB J.
Y1 - 2002
A1 - Hunt, Ela
A1 - Atkinson, Malcolm P.
A1 - Irving, Robert W.
VL - 11
ER -
TY - CONF
T1 - A Database Index to Large Biological Sequences
T2 - VLDB
Y1 - 2001
A1 - Hunt, Ela
A1 - Atkinson, Malcolm P.
A1 - Irving, Robert W.
JF - VLDB
ER -
TY - JOUR
T1 - An efficient object promotion algorithm for persistent object systems
JF - Softw., Pract. Exper.
Y1 - 2001
A1 - Printezis, Tony
A1 - Atkinson, Malcolm P.
VL - 31
ER -
TY - CONF
T1 - The GRUMPS Architecture: Run-time Evolution in a Large Scale Distributed System
T2 - Proceedings of the Workshop on Engineering Complex Object-Oriented Solutions for Evolution (ECOOSE), held as part of OOPSLA 2001.
Y1 - 2001
A1 - Evans, Huw
A1 - Peter Dickman
A1 - Malcolm Atkinson
AB - This paper describes the first version of the distributed programming architecture for the Grumps project. The architecture consists of objects that communicate in terms of both asynchronous and synchronous events. A novel three-level extensible naming scheme is discussed that allows Grumps developers to deploy systems that can refer to entities not identified at the time when the Grumps system and application-level code were implemented. Examples detailing how the topology of a Grumps system may be changed at run-time and how new object implementations may be distributed during system execution are given. The separation of policy from mechanism is shown to be a major part of how system evolution is supported and this is made even more flexible when expressed through the use of Java interfaces for crucial core concepts.
JF - Proceedings of the Workshop on Engineering Complex Object-Oriented Solutions for Evolution (ECOOSE), held as part of OOPSLA 2001.
ER -
TY - BOOK
T1 - GRUMPS Summer Anthology, 2001
Y1 - 2001
A1 - Atkinson, M.
A1 - Brown, M.
A1 - Cargill, J.
A1 - Crease, M.
A1 - Draper, S.
A1 - Evans, H.
A1 - Gray, P.
A1 - Mitchell, C.
A1 - Ritchie, M.
A1 - Thomas, R.
AB - This is the first collection of papers from GRUMPS [http://grumps.dcs.gla.ac.uk]. The project only started up in February 2001, and this collection (frozen at 1 Sept 2001) shows that it got off to a productive start. Versions of some of these papers have been submitted to conferences and workshops: the website will have more information on publication status and history. GRUMPS decided to begin with a first study, partly to help the team coalesce. This involved installing two pieces of software in a first year computing science lab: one (the "UAR") to record a large volume of student actions at a low level with a view to mining them later, another (the "LSS") directly designed to assist tutor-student interaction. Some of the papers derive from that, although more are planned. Results from this first study can be found on the website. The project also has a link to UWA in Perth, Western Australia, where related software has already been developed and used as described in one of the papers. Another project strand concerns using handsets in lecture theatres to support interactivity there, as two other papers describe. As yet unrepresented in this collection, GRUMPS will also be entering the bioinformatics application area. The GRUMPS project operates on several levels. It is based in the field of Distributed Information Management (DIM), expecting to cover both mobile and static nodes, synchronous and detached clients, high and low volume data sources. The specific focus of the project (see the original proposal on the web site) is to address records of computational activity (where any such pre-existing usage might have extra record collection installed) and data experimentation, where the questions to be asked of the data emerge concurrently with data collection which will therefore be dynamically modifiable: a requirement that further pushes on the space of DIM. The level above concerns building and making usable tools for asking questions of the data, or rather of the activities that generate the data. Above that again is the application domain level: what the original computational activities serve, education and bioinformatics being two identified cases. The GRUMPS team is therefore multidisciplinary, from DIM architecture researchers to educational evaluators. The mix of papers reflects this.
PB - Academic Press
ER -
TY - CHAP
T1 - Persistence and Java — A Balancing Act
T2 - Objects and Databases
Y1 - 2001
A1 - Atkinson, M.
ED - Klaus Dittrich
ED - Giovanna Guerrini
ED - Isabella Merlo
ED - Marta Oliva
ED - M. Elena Rodriguez
AB - Large scale and long-lived application systems, enterprise applications, require persistence, that is, provision of storage for many of their data structures. The Java™ programming language is a typical example of a strongly-typed, object-oriented programming language that is becoming popular for building enterprise applications. It therefore needs persistence. The present options for obtaining this persistence are reviewed. We conclude that the Orthogonal Persistence Hypothesis, OPH, is still persuasive. It states that the universal and automated provision of longevity or brevity for all data will significantly enhance developer productivity and improve applications. This position paper reports on the PJama project with particular reference to its test of the OPH. We review why orthogonal persistence has not been taken up widely, and why the OPH is still incompletely tested. This leads to a more general challenge of how to conduct experiments which reveal large-scale and long-term effects and some thoughts on how that challenge might be addressed by the software research community.
JF - Objects and Databases
T3 - Lecture Notes in Computer Science
PB - Springer
VL - 1944
UR - http://www.springerlink.com/content/8t7x3m1ehtdqk4bm/?p=7ece1338fff3480b83520df395784cc6&pi=0
ER -
TY - CHAP
T1 - Scalable and Recoverable Implementation of Object Evolution for the PJama1 Platform
T2 - Persistent Object Systems: Design, Implementation, and Use 9th International Workshop, POS-9 Lillehammer, Norway, September 6–8, 2000 Revised Papers
Y1 - 2001
A1 - Atkinson, M. P.
A1 - Dmitriev, M. A.
A1 - Hamilton, C.
A1 - Printezis, T.
ED - Graham N. C. Kirby
ED - Alan Dearle
ED - Dag I. K. Sjøberg
AB - PJama1 is the latest version of an orthogonally persistent platform for Java. It depends on a new persistent object store, Sphere, and provides facilities for class evolution. This evolution technology supports an arbitrary set of changes to the classes, which may have arbitrarily large populations of persistent objects. We verify that the changes are safe. When there are format changes, we also convert all of the instances, while leaving their identities unchanged. We aspire to both very large persistent object stores and freedom for developers to specify arbitrary conversion methods in Java to convey information from old to new formats. Evolution operations must be safe and the evolution cost should be approximately linear in the number of objects that must be reformatted. In order that these conversion methods can be written easily, we continue to present the pre-evolution state consistently to Java executions throughout an evolution. At the completion of applying all of these transformations, we must switch the store state to present only the post-evolution state, with object identity preserved. We present an algorithm that meets these requirements for eager, total conversion. This paper focuses on the mechanisms built into Sphere to support safe, atomic and scalable evolution. We report our experiences in using this technology and include a preliminary set of performance measurements.
JF - Persistent Object Systems: Design, Implementation, and Use 9th International Workshop, POS-9 Lillehammer, Norway, September 6–8, 2000 Revised Papers
T3 - Lecture Notes in Computer Science
PB - Springer
VL - 2135
UR - http://www.springerlink.com/content/09hx07h9lw0p1h82/?p=2bc20319905146bab8ba93b2fcc8cc01&pi=23
ER -
TY - JOUR
T1 - Guest editorial
JF - VLDB J.
Y1 - 2000
A1 - Atkinson, Malcolm P.
VL - 9
ER -
TY - CONF
T1 - Defining and Handling Transient Fields in PJama
T2 - DBPL
Y1 - 1999
A1 - Printezis, Tony
A1 - Atkinson, Malcolm P.
A1 - Jordan, Mick J.
JF - DBPL
ER -
TY - CONF
T1 - Evolutionary Data Conversion in the PJama Persistent Language
T2 - ECOOP Workshop on Object-Oriented Databases
Y1 - 1999
A1 - Dmitriev, Misha
A1 - Atkinson, Malcolm P.
JF - ECOOP Workshop on Object-Oriented Databases
ER -
TY - CONF
T1 - Issues Raised by Three Years of Developing PJama: An Orthogonally Persistent Platform for Java
T2 - ICDT
Y1 - 1999
A1 - Atkinson, Malcolm P.
A1 - Jordan, Mick J.
JF - ICDT
ER -
TY - Generic
T1 - VLDB'99, Proceedings of 25th International Conference on Very Large Data Bases, September 7-10, 1999, Edinburgh, Scotland, UK
Y1 - 1999
ED - Atkinson, Malcolm P.
ED - Maria E. Orlowska
ED - Patrick Valduriez
ED - Stanley B. Zdonik
ED - Michael L. Brodie
PB - Morgan Kaufmann
SN - 1-55860-615-7
ER -