TY - CONF T1 - Ad hoc Cloud Computing T2 - IEEE Cloud Y1 - 2015 A1 - Gary McGilvary A1 - Barker, Adam A1 - Malcolm Atkinson KW - ad hoc KW - cloud computing KW - reliability KW - virtualization KW - volunteer computing AB - This paper presents the first complete, integrated and end-to-end solution for ad hoc cloud computing environments. Ad hoc clouds harvest resources from existing sporadically available, non-exclusive (i.e. primarily used for some other purpose) and unreliable infrastructures. In this paper we discuss the problems ad hoc cloud computing solves and outline our architecture which is based on BOINC. JF - IEEE Cloud UR - http://arxiv.org/abs/1505.08097 ER - TY - JOUR T1 - Precise montaging and metric quantification of retinal surface area from ultra-widefield fundus photography and fluorescein angiography JF - Ophthalmic Surg Lasers Imaging Retina Y1 - 2014 A1 - Croft, D.E. A1 - van Hemert, J. A1 - Wykoff, C.C. A1 - Clifton, D. A1 - Verhoek, M. A1 - Fleming, A. A1 - Brown, D.M. KW - medical KW - retinal imaging AB - BACKGROUND AND OBJECTIVE: Accurate quantification of retinal surface area from ultra-widefield (UWF) images is challenging due to warping produced when the retina is projected onto a two-dimensional plane for analysis. By accounting for this, the authors sought to precisely montage and accurately quantify retinal surface area in square millimeters. PATIENTS AND METHODS: Montages were created using Optos 200Tx (Optos, Dunfermline, U.K.) images taken at different gaze angles. A transformation projected the images to their correct location on a three-dimensional model. Area was quantified with spherical trigonometry. Warping, precision, and accuracy were assessed. RESULTS: Uncorrected, posterior pixels represented up to 79% greater surface area than peripheral pixels. Assessing precision, a standard region was quantified across 10 montages of the same eye (RSD: 0.7%; mean: 408.97 mm(2); range: 405.34-413.87 mm(2)). Assessing accuracy, 50 patients' disc areas were quantified (mean: 2.21 mm(2); SE: 0.06 mm(2)), and the results fell within the normative range. CONCLUSION: By accounting for warping inherent in UWF images, precise montaging and accurate quantification of retinal surface area in square millimeters were achieved. [Ophthalmic Surg Lasers Imaging Retina. 2014;45:312-317.]. VL - 45 ER - TY - JOUR T1 - Quantification of Ultra-Widefield Retinal Images JF - Retina Today Y1 - 2014 A1 - D.E. Croft A1 - C.C. Wykoff A1 - D.M. Brown A1 - van Hemert, J. A1 - M. Verhoek KW - medical KW - retinal imaging AB - Advances in imaging periodically lead to dramatic changes in the diagnosis, management, and study of retinal disease. For example, the innovation and wide-spread application of fluorescein angiography and optical coherence tomography (OCT) have had tremendous impact on the management of retinal disorders.1,2 Recently, ultra-widefield (UWF) imaging has opened a new window into the retina, allowing the capture of greater than 80% of the fundus with a single shot.3 With montaging, much of the remaining retinal surface area can be captured.4,5 However, to maximize the potential of these new modalities, accurate quantification of the pathology they capture is critical. UR - http://www.bmctoday.net/retinatoday/pdfs/0514RT_imaging_Croft.pdf ER - TY - Generic T1 - Varpy: A python library for volcanology and rock physics data analysis. 
EGU2014-3699 Y1 - 2014 A1 - Rosa Filgueira A1 - Malcolm Atkinson A1 - Andrew Bell A1 - Branwen Snelling ER - TY - CONF T1 - C2MS: Dynamic Monitoring and Management of Cloud Infrastructures T2 - IEEE CloudCom Y1 - 2013 A1 - Gary McGilvary A1 - Josep Rius A1 - Íñigo Goiri A1 - Francesc Solsona A1 - Barker, Adam A1 - Atkinson, Malcolm P. AB - Server clustering is a common design principle employed by many organisations that require high availability, scalability and easier management of their infrastructure. Servers are typically clustered according to the service they provide, whether it be the application(s) installed, the role of the server or server accessibility, for example. In order to optimize performance, manage load and maintain availability, servers may migrate from one cluster group to another, making it difficult for server monitoring tools to continuously monitor these dynamically changing groups. Server monitoring tools are usually statically configured, and any change of group membership requires manual reconfiguration; this is an unreasonable task to undertake on large-scale cloud infrastructures. In this paper we present the Cloudlet Control and Management System (C2MS), a system for monitoring and controlling dynamic groups of physical or virtual servers within cloud infrastructures. The C2MS extends Ganglia - an open source scalable system performance monitoring tool - by allowing system administrators to define, monitor and modify server groups without the need for server reconfiguration. In turn, administrators can easily monitor group and individual server metrics on large-scale dynamic cloud infrastructures where roles of servers may change frequently. Furthermore, we complement group monitoring with a control element allowing administrator-specified actions to be performed over servers within service groups as well as introduce further customized monitoring metrics. This paper outlines the design, implementation and evaluation of the C2MS. JF - IEEE CloudCom CY - Bristol, UK ER - TY - BOOK T1 - The DATA Bonanza: Improving Knowledge Discovery in Science, Engineering, and Business T2 - Wiley Series on Parallel and Distributed Computing (Editor: Albert Y. Zomaya) Y1 - 2013 A1 - Atkinson, Malcolm P. A1 - Baxter, Robert M. A1 - Peter Brezany A1 - Oscar Corcho A1 - Michelle Galea A1 - Parsons, Mark A1 - Snelling, David A1 - van Hemert, Jano KW - Big Data KW - Data Intensive KW - data mining KW - Data Streaming KW - Databases KW - Dispel KW - Distributed Computing KW - Knowledge Discovery KW - Workflows AB - With the digital revolution opening up tremendous opportunities in many fields, there is a growing need for skilled professionals who can develop data-intensive systems and extract information and knowledge from them. This book frames for the first time a new systematic approach for tackling the challenges of data-intensive computing, providing decision makers and technical experts alike with practical tools for dealing with our exploding data collections. Emphasising data-intensive thinking and interdisciplinary collaboration, The DATA Bonanza: Improving Knowledge Discovery in Science, Engineering, and Business examines the essential components of knowledge discovery, surveys many of the current research efforts worldwide, and points to new areas for innovation.
Complete with a wealth of examples and DISPEL-based methods demonstrating how to gain more from data in real-world systems, the book: * Outlines the concepts and rationale for implementing data-intensive computing in organisations * Covers from the ground up problem-solving strategies for data analysis in a data-rich world * Introduces techniques for data-intensive engineering using the Data-Intensive Systems Process Engineering Language DISPEL * Features in-depth case studies in customer relations, environmental hazards, seismology, and more * Showcases successful applications in areas ranging from astronomy and the humanities to transport engineering * Includes sample program snippets throughout the text as well as additional materials on a companion website The DATA Bonanza is a must-have guide for information strategists, data analysts, and engineers in business, research, and government, and for anyone wishing to be on the cutting edge of data mining, machine learning, databases, distributed systems, or large-scale computing. JF - Wiley Series on Parallel and Distributed Computing (Editor: Albert Y. Zomaya) PB - John Wiley & Sons Inc. SN - 978-1-118-39864-7 ER - TY - CHAP T1 - Data-Intensive Analysis T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Oscar Corcho A1 - van Hemert, Jano ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - data mining KW - Data-Analysis Experts KW - Data-Intensive Analysis KW - Knowledge Discovery AB - Part II: "Data-intensive Knowledge Discovery", focuses on the needs of data-analysis experts. It illustrates the problem-solving strategies appropriate for a data-rich world, without delving into the details of underlying technologies. It should engage and inform data-analysis specialists, such as statisticians, data miners, image analysts, bio-informaticians or chemo-informaticians, and generate ideas pertinent to their application areas. Chapter 5: "Data-intensive Analysis", introduces a set of common problems that data-analysis experts often encounter, by means of a set of scenarios of increasing levels of complexity. The scenarios typify knowledge discovery challenges and the presented solutions provide practical methods; a starting point for readers addressing their own data challenges. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - Data-Intensive Components and Usage Patterns T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Oscar Corcho ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data Analysis KW - data mining KW - Data-Intensive Components KW - Registry KW - Workflow Libraries KW - Workflow Sharing AB - Chapter 7: "Data-intensive components and usage patterns", provides a systematic review of the components that are commonly used in knowledge discovery tasks as well as common patterns of component composition. That is, it introduces the processing elements from which knowledge discovery solutions are built and common composition patterns for delivering trustworthy information. It reflects on how these components and patterns are evolving in a data-intensive context. 
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - The Data-Intensive Survival Guide T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Malcolm Atkinson ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Analysis Experts KW - Data-Intensive Architecture KW - Data-intensive Computing KW - Data-Intensive Engineers KW - Datascopes KW - Dispel KW - Domain Experts KW - Intellectual Ramps KW - Knowledge Discovery KW - Workflows AB - Chapter 3: "The data-intensive survival guide", presents an overview of all of the elements of the proposed data-intensive strategy. Sufficient detail is presented for readers to understand the principles and practice that we recommend. It should also provide a good preparation for readers who choose to sample later chapters. It introduces three professional viewpoints: domain experts, data-analysis experts, and data-intensive engineers. Success depends on a balanced approach that develops the capacity of all three groups. A data-intensive architecture provides a flexible framework for that balanced approach. This enables the three groups to build and exploit data-intensive processes that incrementally step from data to results. A language is introduced to describe these incremental data processes from all three points of view. The chapter introduces ‘datascopes’ as the productized data-handling environments and ‘intellectual ramps’ as the ‘on ramps’ for the highways from data to knowledge. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - Data-Intensive Thinking with DISPEL T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Malcolm Atkinson ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Intensive Machines KW - Data-Intensive Thinking, Data-intensive Computing KW - Dispel KW - Distributed Computing KW - Knowledge Discovery AB - Chapter 4: "Data-intensive thinking with DISPEL", engages the reader with technical issues and solutions, by working through a sequence of examples, building up from a sketch of a solution to a large-scale data challenge. It uses the DISPEL language extensively, introducing its concepts and constructs. It shows how DISPEL may help designers, data-analysts, and engineers develop solutions to the requirements emerging in any data-intensive application domain. The reader is taken through simple steps initially, this then builds to conceptually complex steps that are necessary to cope with the realities of real data providers, real data, real distributed systems, and long-running processes. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Inc. 
ER - TY - CHAP T1 - Definition of the DISPEL Language T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Paul Martin A1 - Yaikhom, Gagarine ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data Streaming KW - Data-intensive Computing KW - Dispel AB - Chapter 10: "Definition of the DISPEL language", describes the novel aspects of the DISPEL language: its constructs, capabilities, and anticipated programming style. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business T3 - Parallel and Distributed Computing, series editor Albert Y. Zomaya PB - John Wiley & Sons Inc. ER - TY - CONF T1 - The demand for consistent web-based workflow editors T2 - Proceedings of the 8th Workshop on Workflows in Support of Large-Scale Science Y1 - 2013 A1 - Gesing, Sandra A1 - Atkinson, Malcolm A1 - Klampanos, Iraklis A1 - Galea, Michelle A1 - Berthold, Michael R. A1 - Barbera, Roberto A1 - Scardaci, Diego A1 - Terstyanszky, Gabor A1 - Kiss, Tamas A1 - Kacsuk, Peter KW - web-based workflow editors KW - workflow composition KW - workflow interoperability KW - workflow languages and concepts JF - Proceedings of the 8th Workshop on Workflows in Support of Large-Scale Science PB - ACM CY - New York, NY, USA SN - 978-1-4503-2502-8 UR - http://doi.acm.org/10.1145/2534248.2534260 ER - TY - CHAP T1 - The Digital-Data Challenge T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Malcolm Atkinson A1 - Parsons, Mark ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Big Data KW - Data-intensive Computing, Knowledge Discovery KW - Digital Data KW - Digital-Data Revolution AB - Part I: "Strategies for success in the digital-data revolution", provides an executive summary of the whole book to convince strategists, politicians, managers, and educators that our future data-intensive society requires new thinking, new behavior, new culture, and new distribution of investment and effort. This part will introduce the major concepts so that readers are equipped to discuss and steer their organization’s response to the opportunities and obligations brought by the growing wealth of data. It will help readers understand the changing context brought about by advances in digital devices, digital communication, and ubiquitous computing. Chapter 1: "The digital-data challenge", will help readers to understand the challenges ahead in making good use of the data and introduce ideas that will lead to helpful strategies. A global digital-data revolution is catalyzing change in the ways in which we live, work, relax, govern, and organize. This is a significant change in society, as important as the invention of printing or the industrial revolution, but more challenging because it is happening globally at Internet speed. Becoming agile in adapting to this new world is essential. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd.
ER - TY - CHAP T1 - The Digital-Data Revolution T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Malcolm Atkinson ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data KW - Information KW - Knowledge KW - Knowledge Discovery KW - Social Impact of Digital Data KW - Wisdom, Data-intensive Computing AB - Chapter 2: "The digital-data revolution", reviews the relationships between data, information, knowledge, and wisdom. It analyses and quantifies the changes in technology and society that are delivering the data bonanza, and then reviews the consequential changes via representative examples in biology, Earth sciences, social sciences, leisure activity, and business. It exposes quantitative details and shows the complexity and diversity of the growing wealth of data, introducing some of its potential benefits and examples of the impediments to successfully realizing those benefits. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - DISPEL Development T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Adrian Mouat A1 - Snelling, David ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Diagnostics KW - Dispel KW - IDE KW - Libraries KW - Processing Elements AB - Chapter 11: "DISPEL development", describes the tools and libraries that a DISPEL developer might expect to use. The tools include those needed during process definition, those required to organize enactment, and diagnostic aids for developers of applications and platforms. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Inc. ER - TY - CHAP T1 - DISPEL Enactment T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Chee Sun Liew A1 - Krause, Amrey A1 - Snelling, David ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data Streaming KW - Data-Intensive Engineering KW - Dispel KW - Workflow Enactment AB - Chapter 12: "DISPEL enactment", describes the four stages of DISPEL enactment. It is targeted at the data-intensive engineers who implement enactment services. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Inc. ER - TY - CHAP T1 - Foreword T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Tony Hey ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Big Data KW - Data-intensive Computing, Knowledge Discovery JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - Platforms for Data-Intensive Analysis T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Snelling, David ED - Malcolm Atkinson ED - Baxter, Robert M. 
ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Intensive Engineering KW - Data-Intensive Systems KW - Dispel KW - Distributed Systems AB - Part III: "Data-intensive engineering", is targeted at technical experts who will develop complex applications, new components, or data-intensive platforms. The techniques introduced may be applied very widely; for example, to any data-intensive distributed application, such as index generation, image processing, sequence comparison, text analysis, and sensor-stream monitoring. The challenges, methods, and implementation requirements are illustrated by making extensive use of DISPEL. Chapter 9: "Platforms for data-intensive analysis", gives a reprise of data-intensive architectures, examines the business case for investing in them, and introduces the stages of data-intensive workflow enactment. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - Preface T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Malcolm Atkinson ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Big Data, Data-intensive Computing, Knowledge Discovery AB - Who should read the book and why. The structure and conventions used. Suggested reading paths for different categories of reader. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - Problem Solving in Data-Intensive Knowledge Discovery T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Oscar Corcho A1 - van Hemert, Jano ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Analysis Experts KW - Data-Intensive Analysis KW - Design Patterns for Knowledge Discovery KW - Knowledge Discovery AB - Chapter 6: "Problem solving in data-intensive knowledge discovery", on the basis of the previous scenarios, this chapter provides an overview of effective strategies in knowledge discovery, highlighting common problem-solving methods that apply in conventional contexts, and focusing on the similarities and differences of these methods. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - Sharing and Reuse in Knowledge Discovery T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Oscar Corcho ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Intensive Analysis KW - Knowledge Discovery KW - Ontologies KW - Semantic Web KW - Sharing AB - Chapter 8: "Sharing and re-use in knowledge discovery", introduces more advanced knowledge discovery problems, and shows how improved component and pattern descriptions facilitate re-use. This supports the assembly of libraries of high level components well-adapted to classes of knowledge discovery methods or application domains. The descriptions are made more powerful by introducing notations from the semantic Web. 
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CONF T1 - User-friendly workflows in quantum chemistry T2 - IWSG 2013 Y1 - 2013 A1 - Herres-Pawlis, Sonja A1 - Balaskó, Ákos A1 - Birkenheuer, Georg A1 - Brinkmann, André A1 - Gesing, Sandra A1 - Grunzke, Richard A1 - Hoffmann, Alexander A1 - Kacsuk, Peter A1 - Krüger, Jens A1 - Packschies, Lars A1 - Terstyansky, Gabor A1 - Weingarten, Noam JF - IWSG 2013 PB - CEUR Workshop Proceedings CY - Zurich, Switzerland UR - http://ceur-ws.org/Vol-993/paper14.pdf ER - TY - CONF T1 - V-BOINC: The Virtualization of BOINC T2 - In Proceedings of the 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2013). Y1 - 2013 A1 - Gary McGilvary A1 - Barker, Adam A1 - Ashley Lloyd A1 - Malcolm Atkinson AB - The Berkeley Open Infrastructure for Network Computing (BOINC) is an open source client-server middleware system created to allow projects with large computational requirements, usually set in the scientific domain, to utilize a technically unlimited number of volunteer machines distributed over large physical distances. However, various problems exist deploying applications over these heterogeneous machines using BOINC: applications must be ported to each machine architecture type, the project server must be trusted to supply authentic applications, applications that do not regularly checkpoint may lose execution progress upon volunteer machine termination, and applications that have dependencies may find it difficult to run under BOINC. To solve such problems we introduce virtual BOINC, or V-BOINC, where virtual machines are used to run computations on volunteer machines. Application developers can then compile their applications on a single architecture, checkpointing issues are solved through virtualization APIs and many security concerns are addressed via the virtual machine's sandbox environment. In this paper we focus on outlining a unique approach to how virtualization can be introduced into BOINC and demonstrate that V-BOINC offers acceptable computational performance when compared to regular BOINC. Finally, we show that applications with dependencies can easily run under V-BOINC, in turn increasing the computational potential volunteer computing offers to the general public and project developers. V-BOINC can be downloaded at http://garymcgilvary.co.uk/vboinc.html JF - In Proceedings of the 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2013). CY - Delft, The Netherlands ER - TY - CONF T1 - The W3C PROV family of specifications for modelling provenance metadata T2 - EDBT Y1 - 2013 A1 - Paolo Missier A1 - Khalid Belhajjame A1 - James Cheney JF - EDBT ER - TY - JOUR T1 - Consistency and repair for XML write-access control policies JF - VLDB J. Y1 - 2012 A1 - Loreto Bravo A1 - James Cheney A1 - Irini Fundulaki A1 - Ricardo Segovia VL - 21 ER - TY - CONF T1 - A Data Driven Science Gateway for Computational Workflows T2 - UNICORE Summit 2012 Y1 - 2012 A1 - Grunzke, Richard A1 - Birkenheuer, G. A1 - Blunk, D. A1 - Breuers, S. A1 - Brinkmann, A. A1 - Gesing, Sandra A1 - Herres-Pawlis, S. A1 - Kohlbacher, O. A1 - Krüger, J. A1 - Kruse, M. A1 - Müller-Pfefferkorn, R. A1 - Schäfer, P. A1 - Schuller, B. A1 - Steinke, T. A1 - Zink, A.
JF - UNICORE Summit 2012 ER - TY - CONF T1 - Generic User Management for Science Gateways via Virtual Organizations T2 - EGI Technical Forum 2012 Y1 - 2012 A1 - Schlemmer, Tobias A1 - Grunzke, Richard A1 - Gesing, Sandra A1 - Krüger, Jens A1 - Birkenheuer, Georg A1 - Müller-Pfefferkorn, Ralph A1 - Kohlbacher, Oliver JF - EGI Technical Forum 2012 ER - TY - CONF T1 - The MoSGrid Community - From National to International Scale T2 - EGI Community Forum 2012 Y1 - 2012 A1 - Gesing, Sandra A1 - Herres-Pawlis, Sonja A1 - Birkenheuer, Georg A1 - Brinkmann, André A1 - Grunzke, Richard A1 - Kacsuk, Peter A1 - Kohlbacher, Oliver A1 - Kozlovszky, Miklos A1 - Krüger, Jens A1 - Müller-Pfefferkorn, Ralph A1 - Schäfer, Patrick A1 - Steinke, Thomas JF - EGI Community Forum 2012 ER - TY - CONF T1 - MoSGrid: Progress of Workflow driven Chemical Simulations T2 - Grid Workflow Workshop 2011 Y1 - 2012 A1 - Birkenheuer, Georg A1 - Blunk, Dirk A1 - Breuers, Sebastian A1 - Brinkmann, André A1 - Fels, Gregor A1 - Gesing, Sandra A1 - Grunzke, Richard A1 - Herres-Pawlis, Sonja A1 - Kohlbacher, Oliver A1 - Krüger, Jens A1 - Packschies, Lars A1 - Schäfer, Patrick A1 - Schuller, B. A1 - Schuster, Johannes A1 - Steinke, Thomas A1 - Szikszay Fabri, Anna A1 - Wewior, Martin A1 - Müller-Pfefferkorn, Ralph A1 - Kohlbacher, Oliver JF - Grid Workflow Workshop 2011 PB - CEUR Workshop Proceedings ER - TY - CHAP T1 - Multi-agent Negotiation of Virtual Machine Migration Using the Lightweight Coordination Calculus T2 - Agent and Multi-Agent Systems. Technologies and Applications Y1 - 2012 A1 - Anderson, Paul A1 - Shahriar Bijani A1 - Vichos, Alexandros ED - Jezic, Gordan ED - Kusek, Mario ED - Nguyen, Ngoc-Thanh ED - Howlett, Robert ED - Jain, Lakhmi JF - Agent and Multi-Agent Systems. Technologies and Applications T3 - Lecture Notes in Computer Science PB - Springer Berlin / Heidelberg VL - 7327 SN - 978-3-642-30946-5 UR - http://dx.doi.org/10.1007/978-3-642-30947-2_16 ER - TY - JOUR T1 - OMERO: flexible, model-driven data management for experimental biology JF - NATURE METHODS Y1 - 2012 A1 - Chris Allan A1 - Jean-Marie Burel A1 - Josh Moore A1 - Colin Blackburn A1 - Melissa Linkert A1 - Scott Loynton A1 - Donald MacDonald A1 - et al. AB - Data-intensive research depends on tools that manage multidimensional, heterogeneous datasets. We built OME Remote Objects (OMERO), a software platform that enables access to and use of a wide range of biological data. OMERO uses a server-based middleware application to provide a unified interface for images, matrices and tables. OMERO's design and flexibility have enabled its use for light-microscopy, high-content-screening, electron-microscopy and even non-image-genotype data. OMERO is open-source software, available at http://openmicroscopy.org/. PB - Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved. VL - 9 SN - 1548-7091 UR - http://dx.doi.org/10.1038/nmeth.1896 IS - 3 ER - TY - Generic T1 - A Review of Attacks and Security Approaches in Open Multi-agent Systems Y1 - 2012 A1 - Shahriar Bijani A1 - David Robertson AB - Open multi-agent systems (MASs) have growing popularity in the Multi-agent Systems community and are predicted to have many applications in future, as large scale distributed systems become more widespread. A major practical limitation to open MASs is security because the openness of such systems negates many traditional security solutions. In this paper we introduce and classify main attacks on open MASs. 
We then survey and analyse various security techniques in the literature and categorise them under prevention and detection approaches. Finally, we suggest which security technique is an appropriate countermeasure for which classes of attack. ER - TY - CONF T1 - A Science Gateway Getting Ready for Serving the International Molecular Simulation Community T2 - Proceedings of Science Y1 - 2012 A1 - Gesing, Sandra A1 - Herres-Pawlis, Sonja A1 - Birkenheuer, Georg A1 - Brinkmann, André A1 - Grunzke, Richard A1 - Kacsuk, Peter A1 - Kohlbacher, Oliver A1 - Kozlovszky, Miklos A1 - Krüger, Jens A1 - Müller-Pfefferkorn, Ralph A1 - Schäfer, Patrick A1 - Steinke, Thomas JF - Proceedings of Science ER - TY - JOUR T1 - A Single Sign-On Infrastructure for Science Gateways on a Use Case for Structural Bioinformatics JF - Journal of Grid Computing Y1 - 2012 A1 - Gesing, Sandra A1 - Grunzke, Richard A1 - Krüger, Jens A1 - Birkenheuer, Georg A1 - Wewior, Martin A1 - Schäfer, Patrick A1 - Schuller, Bernd A1 - Schuster, Johannes A1 - Herres-Pawlis, Sonja A1 - Breuers, Sebastian A1 - Balaskó, Ákos A1 - Kozlovszky, Miklos A1 - Fabri, AnnaSzikszay A1 - Packschies, Lars A1 - Kacsuk, Peter A1 - Blunk, Dirk A1 - Steinke, Thomas A1 - Brinkmann, André A1 - Fels, Gregor A1 - Müller-Pfefferkorn, Ralph A1 - Jäkel, René A1 - Kohlbacher, Oliver KW - DCIs KW - Science gateway KW - security KW - Single sign-on KW - Structural bioinformatics VL - 10 UR - http://dx.doi.org/10.1007/s10723-012-9247-y ER - TY - CONF T1 - Workflow-enhanced conformational analysis of guanidine zinc complexes via a science gateway T2 - HealthGrid Applications and Technologies Meet Science Gateways for Life Sciences Y1 - 2012 A1 - Herres-Pawlis, Sonja A1 - Birkenheuer, Georg A1 - Brinkmann, André A1 - Gesing, Sandra A1 - Grunzke, Richard A1 - Jäkel, René A1 - Kohlbacher, Oliver A1 - Krüger, Jens A1 - Dos Santos Vieira, Ines JF - HealthGrid Applications and Technologies Meet Science Gateways for Life Sciences PB - IOS Press ER - TY - JOUR T1 - Automatically Identifying and Annotating Mouse Embryo Gene Expression Patterns JF - Bioinformatics Y1 - 2011 A1 - Liangxiu Han A1 - van Hemert, Jano A1 - Richard Baldock KW - classification KW - e-Science AB - Motivation: Deciphering the regulatory and developmental mechanisms for multicellular organisms requires detailed knowledge of gene interactions and gene expressions. The availability of large datasets with both spatial and ontological annotation of the spatio-temporal patterns of gene-expression in mouse embryo provides a powerful resource to discover the biological function of embryo organisation. Ontological annotation of gene expressions consists of labelling images with terms from the anatomy ontology for mouse development. If the spatial genes of an anatomical component are expressed in an image, the image is then tagged with a term of that anatomical component. The current annotation is done manually by domain experts, which is both time consuming and costly. In addition, the level of detail is variable and inevitably, errors arise from the tedious nature of the task. In this paper, we present a new method to automatically identify and annotate gene expression patterns in the mouse embryo with anatomical terms. 
Results: The method takes images from in situ hybridisation studies and the ontology for the developing mouse embryo; it then combines machine learning and image processing techniques to produce classifiers that automatically identify and annotate gene expression patterns in these images. We evaluate our method on image data from the EURExpress-II study, where we use it to automatically classify nine anatomical terms: humerus, handplate, fibula, tibia, femur, ribs, petrous part, scapula and head mesenchyme. The accuracy of our method lies between 70% and 80%, with few exceptions. Conclusions: We show that other known methods have lower classification performance than ours. We have investigated the images misclassified by our method and found several cases where the original annotation was not correct. This shows our method is robust against this kind of noise. Availability: The annotation result and the experimental dataset in the paper can be freely accessed at http://www2.docm.mmu.ac.uk/STAFF/L.Han/geneannotation/ Contact: l.han@mmu.ac.uk, j.vanhemert@ed.ac.uk and Richard.Baldock@hgu.mrc.ac.uk VL - 27 UR - http://bioinformatics.oxfordjournals.org/content/early/2011/02/25/bioinformatics.btr105.abstract ER - TY - JOUR T1 - An evaluation of ontology matching in geo-service applications JF - Geoinformatica Y1 - 2011 A1 - Lorenzino Vaccari A1 - Pavel Shvaiko A1 - Juan Pane A1 - Paolo Besana A1 - Maurizio Marchese ER - TY - JOUR T1 - Generating web-based user interfaces for computational science JF - Concurrency and Computation: Practice and Experience Y1 - 2011 A1 - van Hemert, J. A1 - Koetsier, J. A1 - Torterolo, L. A1 - Porro, I. A1 - Melato, M. A1 - Barbera, R. AB - Scientific gateways in the form of web portals are becoming the popular approach to share knowledge and resources around a topic in a community of researchers. Unfortunately, the development of web portals is expensive and requires specialist skills. Commercial and more generic web portals have a much larger user base and can afford this kind of development. Here we present two solutions that address this problem in the area of portals for scientific computing; both take the same approach. The whole process of designing, delivering and maintaining a portal can be made more cost-effective by generating a portal from a description rather than programming in the traditional sense. We present four successful use cases to show how this process works and the results it can deliver.
PB - Wiley VL - 23 ER - TY - CONF T1 - Granular Security for a Science Gateway in Structural Bioinformatics T2 - Proceedings of the International Workshop on Science Gateways for Life Sciences (IWSG-Life 2011) Y1 - 2011 A1 - Gesing, Sandra A1 - Grunzke, Richard A1 - Balaskó, Ákos A1 - Birkenheuer, Georg A1 - Blunk, Dirk A1 - Breuers, Sebastian A1 - Brinkmann, André A1 - Fels, Gregor A1 - Herres-Pawlis, Sonja A1 - Kacsuk, Peter A1 - Kozlovszky, Miklos A1 - Krüger, Jens A1 - Packschies, Lars A1 - Schäfer, Patrick A1 - Schuller, Bernd A1 - Schuster, Johannes A1 - Steinke, Thomas A1 - Szikszay Fabri, Anna A1 - Wewior, Martin A1 - Müller-Pfefferkorn, Ralph A1 - Kohlbacher, Oliver JF - Proceedings of the International Workshop on Science Gateways for Life Sciences (IWSG-Life 2011) PB - CEUR Workshop Proceedings ER - TY - Generic T1 - Intrusion Detection in Open Peer-to-Peer Multi-agent Systems T2 - 5th International Conference on Autonomous Infrastructure, Management and Security (AIMS 2011) Y1 - 2011 A1 - Shahriar Bijani A1 - David Robertson AB - One way to build large-scale autonomous systems is to develop open peer-to-peer architectures in which peers are not pre-engineered to work together and in which peers themselves determine the social norms that govern collective behaviour. A major practical limitation to such systems is security because the very openness of such systems negates most traditional security solutions. We propose a programme of research that addresses this problem by devising ways of attack detection and damage limitation that take advantage of social norms described by electronic institutions. We have analysed security issues of open peer-to-peer multi-agent systems and focused on probing attacks against confidentiality. We have proposed a framework and adapted an inference system, which shows the possibility of private information disclosure by an adversary. We shall suggest effective countermeasures in such systems and propose attack response techniques to limit possible damages. JF - 5th International Conference on Autonomous Infrastructure, Management and Security (AIMS 2011) T3 - Managing the dynamics of networks and services PB - Springer-Verlag Berlin SN - 978-3-642-21483-7 ER - TY - JOUR T1 - Managing dynamic enterprise and urgent workloads on clouds using layered queuing and historical performance models JF - Simulation Modelling Practice and Theory Y1 - 2011 A1 - David A. Bacigalupo A1 - van Hemert, Jano I. A1 - Xiaoyu Chen A1 - Asif Usmani A1 - Adam P. Chester A1 - Ligang He A1 - Donna N. Dillenberger A1 - Gary B. Wills A1 - Lester Gilbert A1 - Stephen A. Jarvis KW - e-Science AB - The automatic allocation of enterprise workload to resources can be enhanced by being able to make what–if response time predictions whilst different allocations are being considered. We experimentally investigate an historical and a layered queuing performance model and show how they can provide a good level of support for a dynamic-urgent cloud environment. Using this we define, implement and experimentally investigate the effectiveness of a prediction-based cloud workload and resource management algorithm. Based on these experimental analyses we: (i) comparatively evaluate the layered queuing and historical techniques; (ii) evaluate the effectiveness of the management algorithm in different operating scenarios; and (iii) provide guidance on using prediction-based workload and resource management. 
VL - 19 ER - TY - CONF T1 - Optimum Platform Selection and Configuration for Computational Jobs T2 - All Hands Meeting 2011 Y1 - 2011 A1 - Gary McGilvary A1 - Malcolm Atkinson A1 - Barker, Adam A1 - Ashley Lloyd AB - The performance and cost of many scientific applications that execute on a variety of High Performance Computing (HPC), local cluster environments and cloud services could be enhanced, and costs reduced, if the platform were carefully selected on a per-application basis and the application itself were optimally configured for a given platform. With a wide variety of computing platforms on offer, each possessing different properties, all too frequently platform decisions are made on an ad-hoc basis with limited ‘black-box’ information. The limitless number of possible application configurations also makes it difficult for an individual to achieve cost-effective results with the maximum performance available. Such individuals may include biomedical researchers analysing microarray data, software developers running aviation simulations or bankers performing risk assessments. However, in each case it is likely that many may not have the required knowledge to select the optimum platform and setup for their application; to do so would require extensive knowledge of their applications and various platforms. In this paper we describe a framework that aims to resolve such issues by (i) reducing the detail required in the decision-making process by placing this information within a selection framework, thereby (ii) maximising an application’s performance gain and/or reducing costs. We present a set of preliminary results where we compare the performance of running the Simple Parallel R INTerface (SPRINT) over a variety of platforms. SPRINT is a framework providing parallel functions of the statistical package R, allowing post-genomic data to be easily analysed on HPC resources [1]. We run SPRINT on Amazon’s Elastic Compute Cloud (EC2) to compare the performance with the results obtained from HECToR, the UK’s National Supercomputing Service, and the Edinburgh Compute and Data Facilities (ECDF) cluster. JF - All Hands Meeting 2011 CY - York ER - TY - Generic T1 - Probing Attacks on Multi-agent Systems using Electronic Institutions T2 - Declarative Agent Languages and Technologies Workshop (DALT), AAMAS 2011 Y1 - 2011 A1 - Shahriar Bijani A1 - David Robertson A1 - David Aspinall JF - Declarative Agent Languages and Technologies Workshop (DALT), AAMAS 2011 ER - TY - CONF T1 - A Science Gateway for Molecular Simulations T2 - EGI User Forum 2011 Y1 - 2011 A1 - Gesing, Sandra A1 - Kacsuk, Peter A1 - Kozlovszky, Miklos A1 - Birkenheuer, Georg A1 - Blunk, Dirk A1 - Breuers, Sebastian A1 - Brinkmann, André A1 - Fels, Gregor A1 - Grunzke, Richard A1 - Herres-Pawlis, Sonja A1 - Krüger, Jens A1 - Packschies, Lars A1 - Müller-Pfefferkorn, Ralph A1 - Schäfer, Patrick A1 - Steinke, Thomas A1 - Szikszay Fabri, Anna A1 - Warzecha, Klaus A1 - Wewior, Martin A1 - Kohlbacher, Oliver JF - EGI User Forum 2011 SN - 978 90 816927 1 7 ER - TY - JOUR T1 - Comparing Clinical Decision Support Systems for Recruitment in Clinical Trials JF - Journal of Medical Informatics Y1 - 2010 A1 - Marc Cuggia A1 - Paolo Besana A1 - David Glasspool ER - TY - JOUR T1 - Correcting for intra-experiment variation in Illumina BeadChip data is necessary to generate robust gene-expression profiles JF - BMC Genomics Y1 - 2010 A1 - R. R. Kitchen A1 - V. S. Sabine A1 - A. H. Sims A1 - E. J. Macaskill A1 - L. Renshaw A1 - J. S.
Thomas A1 - van Hemert, J. I. A1 - J. M. Dixon A1 - J. M. S. Bartlett AB - Background Microarray technology is a popular means of producing whole genome transcriptional profiles; however, the high cost and scarcity of mRNA have led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. Results A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples was greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (ranging from 0.9833-0.9991 to 0.9997-0.9999) and increased the consistency of the gene-lists from the duplicate clinical samples, from 11.6% in quantile normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. Conclusion In the interests of practicality and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data. VL - 11 UR - http://www.biomedcentral.com/1471-2164/11/134 IS - 134 ER - TY - CONF T1 - Grid-Workflows in Molecular Science T2 - Software Engineering 2010, Grid Workflow Workshop Y1 - 2010 A1 - Birkenheuer, Georg A1 - Breuers, Sebastian A1 - Brinkmann, André A1 - Blunk, Dirk A1 - Fels, Gregor A1 - Gesing, Sandra A1 - Herres-Pawlis, Sonja A1 - Kohlbacher, Oliver A1 - Krüger, Jens A1 - Packschies, Lars JF - Software Engineering 2010, Grid Workflow Workshop PB - GI-Edition - Lecture Notes in Informatics (LNI) ER - TY - CONF T1 - The MoSGrid Gaussian Portlet – Technologies for the Implementation of Portlets for Molecular Simulations T2 - Proceedings of the International Workshop on Science Gateways (IWSG10) Y1 - 2010 A1 - Wewior, Martin A1 - Packschies, Lars A1 - Blunk, Dirk A1 - Wickeroth, D. A1 - Warzecha, Klaus A1 - Herres-Pawlis, Sonja A1 - Gesing, Sandra A1 - Breuers, Sebastian A1 - Krüger, Jens A1 - Birkenheuer, Georg A1 - Lang, Ulrich ED - Barbera, Roberto ED - Andronico, Giuseppe ED - La Rocca, Giuseppe JF - Proceedings of the International Workshop on Science Gateways (IWSG10) PB - Consorzio COMETA ER - TY - JOUR T1 - Quality control for quantitative PCR based on amplification compatibility test JF - Methods Y1 - 2010 A1 - Tichopad, Ales A1 - Tzachi Bar A1 - Ladislav Pecen A1 - Robert R. Kitchen A1 - Kubista, Mikael A1 - Michael W. Pfaffl AB - Quantitative PCR (qPCR) is a routinely used method for the accurate quantification of nucleic acids. Yet it may generate erroneous results if the amplification process is obscured by inhibition or generation of aberrant side-products such as primer dimers.
Several methods have been established to control for pre-processing performance that rely on the introduction of a co-amplified reference sequence; however, there is currently no method to allow for reliable control of the amplification process without directly modifying the sample mix. Herein we present a statistical approach based on multivariate analysis of the amplification response data generated in real-time. The amplification trajectory in its most resolved and dynamic phase is fitted with a suitable model. Two parameters of this model, related to amplification efficiency, are then used for calculation of the Z-score statistics. Each studied sample is compared to a predefined reference set of reactions, typically calibration reactions. A probabilistic decision for each individual Z-score is then used to identify the majority of inhibited reactions in our experiments. We compare this approach to univariate methods using only the sample-specific amplification efficiency as a reporter of compatibility. We demonstrate improved identification performance using the multivariate approach compared to the univariate approach. Finally, we stress that the performance of the amplification compatibility test as a quality control procedure depends on the quality of the reference set. PB - Elsevier VL - 50 UR - http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WN5-4Y88DBN-3&_user=10&_coverDate=04%2F30%2F2010&_alid=1247745718&_rdoc=1&_fmt=high&_orig=search&_cdi=6953&_sort=r&_docanchor=&view=c&_ct=2&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5 IS - 4 ER - TY - CONF T1 - Resource management of enterprise cloud systems using layered queuing and historical performance models T2 - IEEE International Symposium on Parallel Distributed Processing Y1 - 2010 A1 - Bacigalupo, D. A. A1 - van Hemert, J. A1 - Usmani, A. A1 - Dillenberger, D. N. A1 - Wills, G. B. A1 - Jarvis, S. A. KW - e-Science AB - The automatic allocation of enterprise workload to resources can be enhanced by being able to make `what-if' response time predictions, whilst different allocations are being considered. It is important to quantitatively compare the effectiveness of different prediction techniques for use in cloud infrastructures. To help make the comparison of relevance to a wide range of possible cloud environments it is useful to consider the following: 1.) Urgent cloud customers such as the emergency services that can demand cloud resources at short notice (e.g. for our FireGrid emergency response software). 2.) Dynamic enterprise systems that must rapidly adapt to frequent changes in workload, system configuration and/or available cloud servers. 3.) The use of the predictions in a coordinated manner by both the cloud infrastructure and cloud customer management systems. 4.) A broad range of criteria for evaluating each technique. However, there have been no previous comparisons meeting these requirements. This paper, meeting the above requirements, quantitatively compares the layered queuing and ("HYDRA") historical techniques - including our initial thoughts on how they could be combined. Supporting results and experiments include the following: i.) defining, investigating and hence providing guidelines on the use of a historical and layered queuing model; ii.) using these guidelines, showing that both techniques can make low-overhead and typically over 70% accurate predictions, for new server architectures for which only a small number of benchmarks have been run; and iii.)
defining and investigating the tuning of a prediction-based cloud workload and resource management algorithm. JF - IEEE International Symposium on Parallel Distributed Processing ER - TY - CONF T1 - TOPP goes Rapid T2 - Cluster Computing and the Grid, IEEE International Symposium on Y1 - 2010 A1 - Gesing, Sandra A1 - van Hemert, Jano A1 - Jos Koetsier A1 - Bertsch, Andreas A1 - Kohlbacher, Oliver AB - Proteomics, the study of all the proteins contained in a particular sample, e.g., a cell, is a key technology in current biomedical research. The complexity and volume of proteomics data sets produced by mass spectrometric methods clearly suggest the use of grid-based high-performance computing for analysis. TOPP and OpenMS are open-source packages for proteomics data analysis; however, they do not provide support for Grid computing. In this work we present a portal interface for high-throughput data analysis with TOPP. The portal is based on Rapid, a tool for efficiently generating standardized portlets for a wide range of applications. The web-based interface allows the creation and editing of user-defined pipelines and their execution and monitoring on a Grid infrastructure. The portal also supports several file transfer protocols for data staging. It thus provides a simple and complete solution to high-throughput proteomics data analysis for inexperienced users through a convenient portal interface. JF - Cluster Computing and the Grid, IEEE International Symposium on PB - IEEE Computer Society CY - Los Alamitos, CA, USA SN - 978-0-7695-4039-9 ER - TY - CONF T1 - Understanding TSP Difficulty by Learning from Evolved Instances T2 - Lecture Notes in Computer Science Y1 - 2010 A1 - Smith-Miles, Kate A1 - van Hemert, Jano A1 - Lim, Xin ED - Blum, Christian ED - Battiti, Roberto AB - Whether the goal is performance prediction, or insights into the relationships between algorithm performance and instance characteristics, a comprehensive set of meta-data from which relationships can be learned is needed. This paper provides a methodology to determine if the meta-data is sufficient, and demonstrates the critical role played by instance generation methods. Instances of the Travelling Salesman Problem (TSP) are evolved using an evolutionary algorithm to produce distinct classes of instances that are intentionally easy or hard for certain algorithms. A comprehensive set of features is used to characterise instances of the TSP, and the impact of these features on difficulty for each algorithm is analysed. Finally, performance predictions are achieved with high accuracy on unseen instances for predicting search effort as well as identifying the algorithm likely to perform best.
JF - Lecture Notes in Computer Science PB - Springer Berlin / Heidelberg VL - 6073 UR - http://dx.doi.org/10.1007/978-3-642-13800-3_29 N1 - 10.1007/978-3-642-13800-3_29 ER - TY - CONF T1 - Workflow Interoperability in a Grid Portal for Molecular Simulations T2 - Proceedings of the International Workshop on Science Gateways (IWSG10) Y1 - 2010 A1 - Gesing, Sandra A1 - Marton, Istvan A1 - Birkenheuer, Georg A1 - Schuller, Bernd A1 - Grunzke, Richard A1 - Krüger, Jens A1 - Breuers, Sebastian A1 - Blunk, Dirk A1 - Fels, Gregor A1 - Packschies, Lars A1 - Brinkmann, André A1 - Kohlbacher, Oliver A1 - Kozlovszky, Miklos JF - Proceedings of the International Workshop on Science Gateways (IWSG10) PB - Consorzio COMETA ER - TY - RPRT T1 - ADMIRE D1.5 – Report defining an iteration of the model and language: PM3 and DL3 Y1 - 2009 A1 - Peter Brezany A1 - Ivan Janciak A1 - Alexander Woehrer A1 - Carlos Buil Aranda A1 - Malcolm Atkinson A1 - van Hemert, Jano AB - This document is the third deliverable to report on the progress of the model, language and ontology research conducted within Workpackage 1 of the ADMIRE project. Significant progress has been made on each of the above areas. The new results that we achieved are recorded against the targets defined for project month 18 and are reported in four sections of this document. PB - ADMIRE project UR - http://www.admire-project.eu/docs/ADMIRE-D1.5-model-language-ontology.pdf ER - TY - CONF T1 - Advanced Data Mining and Integration Research for Europe T2 - All Hands Meeting 2009 Y1 - 2009 A1 - Atkinson, M. A1 - Brezany, P. A1 - Corcho, O. A1 - Han, L. A1 - van Hemert, J. A1 - Hluchy, L. A1 - Hume, A. A1 - Janciak, I. A1 - Krause, A. A1 - Snelling, D. A1 - Wöhrer, A. AB - There is a rapidly growing wealth of data [1]. The number of sources of data is increasing, while, at the same time, the diversity, complexity and scale of these data resources are also increasing dramatically. This cornucopia of data offers much potential; a combinatorial explosion of opportunities for knowledge discovery, improved decisions and better policies. Today, most of these opportunities are not realised because composing data from multiple sources and extracting information is too difficult. Every business, organisation and government faces problems that can only be addressed successfully if we improve our techniques for exploiting the data we gather. JF - All Hands Meeting 2009 CY - Oxford ER - TY - CONF T1 - Automating Gene Expression Annotation for Mouse Embryo T2 - Lecture Notes in Computer Science (Advanced Data Mining and Applications, 5th International Conference) Y1 - 2009 A1 - Liangxiu Han A1 - van Hemert, Jano A1 - Richard Baldock A1 - Atkinson, Malcolm P. ED - Ronghuai Huang ED - Qiang Yang ED - Jian Pei ED - et al JF - Lecture Notes in Computer Science (Advanced Data Mining and Applications, 5th International Conference) PB - Springer VL - LNAI 5678 ER - TY - JOUR T1 - The Circulate Architecture: Avoiding Workflow Bottlenecks Caused By Centralised Orchestration JF - Cluster Computing Y1 - 2009 A1 - Barker, A. A1 - Weissman, J. A1 - van Hemert, J. I.
KW - grid computing KW - workflow VL - 12 UR - http://www.springerlink.com/content/080q5857711w2054/?p=824749739c6a432ea95a0c3b59f4025f&pi=1 ER - TY - JOUR T1 - Design and Optimization of Reverse-Transcription Quantitative PCR Experiments JF - Clin Chem Y1 - 2009 A1 - Tichopad, Ales A1 - Kitchen, Rob A1 - Riedmaier, Irmgard A1 - Becker, Christiane A1 - Stahlberg, Anders A1 - Kubista, Mikael AB - BACKGROUND: Quantitative PCR (qPCR) is a valuable technique for accurately and reliably profiling and quantifying gene expression. Typically, samples obtained from the organism of study have to be processed via several preparative steps before qPCR. METHOD: We estimated the errors of sample withdrawal and extraction, reverse transcription (RT), and qPCR that are introduced into measurements of mRNA concentrations. We performed hierarchically arranged experiments with 3 animals, 3 samples, 3 RT reactions, and 3 qPCRs and quantified the expression of several genes in solid tissue, blood, cell culture, and single cells. RESULTS: A nested ANOVA design was used to model the experiments, and relative and absolute errors were calculated with this model for each processing level in the hierarchical design. We found that intersubject differences became easily confounded by sample heterogeneity for single cells and solid tissue. In cell cultures and blood, the noise from the RT and qPCR steps contributed substantially to the overall error because the sampling noise was less pronounced. CONCLUSIONS: We recommend the use of sample replicates preferentially to any other replicates when working with solid tissue, cell cultures, and single cells, and we recommend the use of RT replicates when working with blood. We show how an optimal sampling plan can be calculated for a limited budget. UR - http://www.clinchem.org/cgi/content/abstract/clinchem.2009.126201v1 ER - TY - RPRT T1 - An e-Infrastructure for Collaborative Research in Human Embryo Development Y1 - 2009 A1 - Barker, Adam A1 - van Hemert, Jano I. A1 - Baldock, Richard A. A1 - Atkinson, Malcolm P. AB - Within the context of the EU Design Study Developmental Gene Expression Map, we identify a set of challenges when facilitating collaborative research on early human embryo development. These challenges bring forth requirements, for which we have identified solutions and technology. We summarise our solutions and demonstrate how they integrate to form an e-infrastructure to support collaborative research in this area of developmental biology. UR - http://arxiv.org/pdf/0901.2310v1 ER - TY - CONF T1 - An E-infrastructure to Support Collaborative Embryo Research T2 - Cluster Computing and the Grid Y1 - 2009 A1 - Barker, Adam A1 - van Hemert, Jano I. A1 - Baldock, Richard A. A1 - Atkinson, Malcolm P. 
JF - Cluster Computing and the Grid PB - IEEE Computer Society SN - 978-0-7695-3622-4 ER - TY - Generic T1 - A Methodology for Mobile Network Security Risk Management T2 - Sixth International Conference on Information Technology: New Generations (ITNG '09) Y1 - 2009 A1 - Mahdi Seify A1 - Shahriar Bijani JF - Sixth International Conference on Information Technology: New Generations (ITNG '09) PB - IEEE Computer Society ER - TY - JOUR T1 - An Open Grid Services Architecture Primer JF - Computer Y1 - 2009 A1 - Grimshaw, Andrew A1 - Morgan, Mark A1 - Merrill, Duane A1 - Kishimoto, Hiro A1 - Savva, Andreas A1 - Snelling, David A1 - Smith, Chris A1 - Dave Berry PB - IEEE Computer Society Press CY - Los Alamitos, CA, USA VL - 42 ER - TY - JOUR T1 - A Strategy for Research and Innovation in the Century of Information JF - Prometheus Y1 - 2009 A1 - e-Science Directors’ Forum Strategy Working Group A1 - Atkinson, M. A1 - Britton, D. A1 - Coveney, P. A1 - De Roure, D A1 - Garnett, N. A1 - Geddes, N. A1 - Gurney, R. A1 - Haines, K. A1 - Hughes, L. A1 - Ingram, D. A1 - Jeffreys, P. A1 - Lyon, L. A1 - Osborne, I. A1 - Perrott, P. A1 - Procter. R. A1 - Rusbridge, C. AB - More data will be produced in the next five years than in the entire history of human kind, a digital deluge that marks the beginning of the Century of Information. Through a year‐long consultation with UK researchers, a coherent strategy has been developed, which will nurture Century‐of‐Information Research (CIR); it crystallises the ideas developed by the e‐Science Directors’ Forum Strategy Working Group. This paper is an abridged version of their latest report which can be found at: http://wikis.nesc.ac.uk/escienvoy/Century_of_Information_Research_Strategy which also records the consultation process and the affiliations of the authors. This document is derived from a paper presented at the Oxford e‐Research Conference 2008 and takes into account suggestions made in the ensuing panel discussion. The goals of the CIR Strategy are to facilitate the growth of UK research and innovation that is data and computationally intensive and to develop a new culture of ‘digital‐systems judgement’ that will equip research communities, businesses, government and society as a whole, with the skills essential to compete and prosper in the Century of Information. The CIR Strategy identifies a national requirement for a balanced programme of coordination, research, infrastructure, translational investment and education to empower UK researchers, industry, government and society. The Strategy is designed to deliver an environment which meets the needs of UK researchers so that they can respond agilely to challenges, can create knowledge and skills, and can lead new kinds of research. It is a call to action for those engaged in research, those providing data and computational facilities, those governing research and those shaping education policies. The ultimate aim is to help researchers strengthen the international competitiveness of the UK research base and increase its contribution to the economy. The objectives of the Strategy are to better enable UK researchers across all disciplines to contribute world‐leading fundamental research; to accelerate the translation of research into practice; and to develop improved capabilities, facilities and context for research and innovation. It envisages a culture that is better able to grasp the opportunities provided by the growing wealth of digital information. 
Computing has, of course, already become a fundamental tool in all research disciplines. The UK e‐Science programme (2001–06)—since emulated internationally—pioneered the invention and use of new research methods, and a new wave of innovations in digital‐information technologies which have enabled them. The Strategy argues that the UK must now harness and leverage its own, plus the now global, investment in digital‐information technology in order to spread the benefits as widely as possible in research, education, industry and government. Implementing the Strategy would deliver the computational infrastructure and its benefits as envisaged in the Science & Innovation Investment Framework 2004–2014 (July 2004), and in the reports developing those proposals. To achieve this, the Strategy proposes the following actions: 1. support the continuous innovation of digital‐information research methods; 2. provide easily used, pervasive and sustained e‐Infrastructure for all research; 3. enlarge the productive research community which exploits the new methods efficiently; 4. generate capacity, propagate knowledge and develop skills via new curricula; and 5. develop coordination mechanisms to improve the opportunities for interdisciplinary research and to make digital‐infrastructure provision more cost effective. To gain the best value for money strategic coordination is required across a broad spectrum of stakeholders. A coherent strategy is essential in order to establish and sustain the UK as an international leader of well‐curated national data assets and computational infrastructure, which is expertly used to shape policy, support decisions, empower researchers and to roll out the results to the wider benefit of society. The value of data as a foundation for wellbeing and a sustainable society must be appreciated; national resources must be more wisely directed to the collection, curation, discovery, widening access, analysis and exploitation of these data. Every researcher must be able to draw on skills, tools and computational resources to develop insights, test hypotheses and translate inventions into productive use, or to extract knowledge in support of governmental decision making. This foundation plus the skills developed will launch significant advances in research, in business, in professional practice and in government with many consequent benefits for UK citizens. The Strategy presented here addresses these complex and interlocking requirements. VL - 27 ER - TY - CONF T1 - Using Simulation for Decision Support: Lessons Learned from FireGrid T2 - Proceedings of the 6th International Conference on Information Systems for Crisis Response and Management (ISCRAM 2009) Y1 - 2009 A1 - Gerhard Wickler A1 - George Beckett A1 - Liangxiu Han A1 - Sung Han Koo A1 - Stephen Potter A1 - Gavin Pringle A1 - Austin Tate ED - J. Landgren, U. Nulden ED - B. 
Van de Walle JF - Proceedings of the 6th International Conference on Information Systems for Crisis Response and Management (ISCRAM 2009) CY - Gothenburg, Sweden ER - TY - CONF T1 - An Architecture for an Integrated Fire Emergency Response System for the Built Environment T2 - 9th Symposium of the International Association for Fire Safety Science (IAFSS) Y1 - 2008 A1 - Rochan Upadhyay A1 - Galvin Pringle A1 - George Beckett A1 - Stephen Potter A1 - Liangxiu Han A1 - Stephen Welch A1 - Asif Usmani A1 - Jose Torero KW - emergency response system KW - FireGrid KW - system architecture KW - technology integration AB - FireGrid is a modern concept that aims to leverage a number of modern technologies to aid fire emergency response. In this paper we provide a brief introduction to the FireGrid project. A number of different technologies such as wireless sensor networks, grid-enabled High Performance Computing (HPC) implementation of fire models, and artificial intelligence tools need to be integrated to build up a modern fire emergency response system. We propose a system architecture that provides the framework for integration of the various technologies. We describe the components of the generic FireGrid system architecture in detail. Finally we present a small-scale demonstration experiment which has been completed to highlight the concept and application of the FireGrid system to an actual fire. Although our proposed system architecture provides a versatile framework for integration, a number of new and interesting research problems need to be solved before actual deployment of the system. We outline some of the challenges involved which require significant interdisciplinary collaborations. JF - 9th Symposium of the International Association for Fire Safety Science (IAFSS) PB - IAFSS CY - Karlsruhe, GERMANY ER - TY - JOUR T1 - Distributed Computing Education, Part 4: Training Infrastructure JF - Distributed Systems Online Y1 - 2008 A1 - Fergusson, D. A1 - Barbera, R. A1 - Giorgio, E. A1 - Fargetta, M. A1 - Sipos, G. A1 - Romano, D. A1 - Atkinson, M. A1 - Vander Meer, E. AB - In the first article of this series (see http://doi.ieeecomputersociety.org/10.1109/MDSO.2008.16), we identified the need for teaching environments that provide infrastructure to support education and training in distributed computing. Training infrastructure, or t-infrastructure, is analogous to the teaching laboratory in biology and is a vital tool for educators and students. In practice, t-infrastructure includes the computing equipment, digital communications, software, data, and support staff necessary to teach a course. The International Summer Schools in Grid Computing (ISSGC) series and the first International Winter School on Grid Computing (IWSGC 08) used the Grid INFN Laboratory of Dissemination Activities (GILDA) infrastructure so students could gain hands-on experience with middleware. Here, we describe GILDA, related summer and winter school experiences, multimiddleware integration, t-infrastructure, and academic courses, concluding with an analysis and recommendations. PB - IEEE Computer Society VL - 9 UR - http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4752926 IS - 10 ER - TY - CONF T1 - Eliminating the Middle Man: Peer-to-Peer Dataflow T2 - HPDC '08: Proceedings of the 17th International Symposium on High Performance Distributed Computing Y1 - 2008 A1 - Barker, Adam A1 - Weissman, Jon B. 
A1 - van Hemert, Jano KW - grid computing KW - workflow JF - HPDC '08: Proceedings of the 17th International Symposium on High Performance Distributed Computing PB - ACM ER - TY - JOUR T1 - A Grid infrastructure for parallel and interactive applications JF - Computing and Informatics Y1 - 2008 A1 - Gomes, J. A1 - Borges, B. A1 - Montecelo, M. A1 - David, M. A1 - Silva, B. A1 - Dias, N. A1 - Martins, JP A1 - Fernandez, C. A1 - Garcia-Tarres, L. , A1 - Veiga, C. A1 - Cordero, D. A1 - Lopez, J. A1 - J Marco A1 - Campos, I. A1 - Rodríguez, David A1 - Marco, R. A1 - Lopez, A. A1 - Orviz, P. A1 - Hammad, A. VL - 27 IS - 2 ER - TY - Generic T1 - HIDGM: A Hybrid Intrusion Detection System for Mobile networks T2 - International Conference on Computer and Electrical Engineering (ICEEE) Y1 - 2008 A1 - Shahriar Bijani A1 - Maryam Kazemitabar JF - International Conference on Computer and Electrical Engineering (ICEEE) PB - IEEE Computer Society ER - TY - JOUR T1 - The interactive European Grid: Project objectives and achievements JF - Computing and Informatics Y1 - 2008 A1 - J Marco A1 - Campos, I. A1 - Coterillo, I. A1 - Diaz, I. A1 - Lopez, A. A1 - Marco, R. A1 - Martinez-Rivero, C. A1 - Orviz, P. A1 - Rodríguez, David A1 - Gomes, J. A1 - Borges, G. A1 - Montecelo, M. A1 - David, M. A1 - Silva, B. A1 - Dias, N. A1 - Martins, JP A1 - Fernandez, C. A1 - Garcia-Tarres, L. VL - 27 IS - 2 ER - TY - CONF T1 - Matching Spatial Regions with Combinations of Interacting Gene Expression Patterns T2 - Communications in Computer and Information Science Y1 - 2008 A1 - van Hemert, J. I. A1 - Baldock, R. A. ED - M. Elloumi ED - \emph ED - et al KW - biomedical KW - data mining KW - DGEMap KW - e-Science AB - The Edinburgh Mouse Atlas aims to capture in-situ gene expression patterns in a common spatial framework. In this study, we construct a grammar to define spatial regions by combinations of these patterns. Combinations are formed by applying operators to curated gene expression patterns from the atlas, thereby resembling gene interactions in a spatial context. The space of combinations is searched using an evolutionary algorithm with the objective of finding the best match to a given target pattern. We evaluate the method by testing its robustness and the statistical significance of the results it finds. JF - Communications in Computer and Information Science PB - Springer Verlag ER - TY - CONF T1 - Orchestrating Data-Centric Workflows T2 - The 8th IEEE International Symposium on Cluster Computing and the Grid (CCGrid) Y1 - 2008 A1 - Barker, Adam A1 - Weissman, Jon B. A1 - van Hemert, Jano KW - grid computing KW - workflow JF - The 8th IEEE International Symposium on Cluster Computing and the Grid (CCGrid) PB - IEEE Computer Society ER - TY - CONF T1 - Scientific Workflow: A Survey and Research Directions T2 - Lecture Notes in Computer Science Y1 - 2008 A1 - Barker, Adam A1 - van Hemert, Jano KW - e-Science KW - workflow AB - Workflow technologies are emerging as the dominant approach to coordinate groups of distributed services. However with a space filled with competing specifications, standards and frameworks from multiple domains, choosing the right tool for the job is not always a straightforward task. Researchers are often unaware of the range of technology that already exists and focus on implementing yet another proprietary workflow system. 
As an antidote to this common problem, this paper presents a concise survey of existing workflow technology from the business and scientific domain and makes a number of key suggestions towards the future development of scientific workflow systems. JF - Lecture Notes in Computer Science PB - Springer VL - 4967 UR - http://dx.doi.org/10.1007/978-3-540-68111-3_78 ER - TY - JOUR T1 - Semantic-Supported and Agent-Based Decentralized Grid Resource Discovery JF - Future Generation Computer Systems Y1 - 2008 A1 - Liangxiu Han A1 - Dave Berry KW - grid resource discovery, decentralization, agent, semantic similarity, ontology AB - One of the open issues in grid computing is efficient resource discovery. In this paper, we propose a novel semantic-supported and agent-based decentralized grid resource discovery mechanism. Without overhead of negotiation, the algorithm allows individual resource agents to semantically interact with neighbour agents based on local knowledge and dynamically form a resource service chain to complete a task. The algorithm ensures resource agent's ability to cooperate and coordinate on neighbour knowledge requisition for flexible problem solving. The developed algorithm is evaluated by investigating the relationship between the success probability of resource discovery and semantic similarity under different factors. The experiments show the algorithm could flexibly and dynamically discover resources and therefore provide a valuable addition to the field. PB - ScienceDirect VL - 24 IS - 8 ER - TY - CONF T1 - Accessing Data in Grids Using OGSA-DAI T2 - Knowledge and Data Management in Grids Y1 - 2007 A1 - Chue Hong, N. P. A1 - Antonioletti, M. A1 - Karasavvas, K. A. A1 - Atkinson, M. ED - Talia, D. ED - Bilas, A. ED - Dikaiakos, M. AB - The grid provides a vision in which resources, including storage and data, can be shared across organisational boundaries. The original emphasis of grid computing lay in the sharing of computational resources but technological and scientific advances have led to an ongoing data explosion in many fields. However, data is stored in many different storage systems and data formats, with different schema, access rights, metadata attributes, and ontologies, all of which are obstacles to the access, integration and management of this information. In this chapter we examine some of the ways in which these differences can be addressed by grid technology to enable the meaningful sharing of data. In particular, we present an overview of the OGSA-DAI (Open Grid Service Architecture - Data Access and Integration) software, which provides a uniform, extensible framework for accessing structured and semi-structured data and provide some examples of its use in other projects. The open-source OGSA-DAI software is freely available from http://www.ogsadai.org.uk. JF - Knowledge and Data Management in Grids SN - 978-0-387-37830-5 UR - http://www.springer.com/computer/communication+networks/book/978-0-387-37830-5 ER - TY - CHAP T1 - COBrA and COBrA-CT: Ontology Engineering Tools T2 - Anatomy Ontologies for Bioinformatics: Principles and Practice Y1 - 2007 A1 - Stuart Aitken A1 - Yin Chen ED - Albert Burger ED - Duncan Davidson ED - Richard Baldock AB - COBrA is a Java-based ontology editor for bio-ontologies and anatomies that differs from other editors by supporting the linking of concepts between two ontologies, and providing sophisticated analysis and verification functions.
In addition to the Gene Ontology and Open Biology Ontologies formats, COBrA can import and export ontologies in the Semantic Web formats RDF, RDFS and OWL. COBrA is being re-engineered as a Protégé plug-in, and complemented by an ontology server and a tool for the management of ontology versions and collaborative ontology development. We describe both the original COBrA tool and the current developments in this chapter. JF - Anatomy Ontologies for Bioinformatics: Principles and Practice PB - Springer SN - ISBN-10:1846288843 UR - http://www.amazon.ca/Anatomy-Ontologies-Bioinformatics-Principles-Practice/dp/1846288843 ER - TY - CONF T1 - Data Integration in eHealth: A Domain/Disease Specific Roadmap T2 - Studies in Health Technology and Informatics Y1 - 2007 A1 - Ure, J. A1 - Proctor, R. A1 - Martone, M. A1 - Porteous, D. A1 - Lloyd, S. A1 - Lawrie, S. A1 - Job, D. A1 - Baldock, R. A1 - Philp, A. A1 - Liewald, D. A1 - Rakebrand, F. A1 - Blaikie, A. A1 - McKay, C. A1 - Anderson, S. A1 - Ainsworth, J. A1 - van Hemert, J. A1 - Blanquer, I. A1 - Sinno ED - N. Jacq ED - Y. Legré ED - H. Muller ED - I. Blanquer ED - V. Breton ED - D. Hausser ED - V. Hernández ED - T. Solomonides ED - M. Hofman-Apitius KW - e-Science AB - The paper documents a series of data integration workshops held in 2006 at the UK National e-Science Centre, summarizing a range of the problem/solution scenarios in multi-site and multi-scale data integration with six HealthGrid projects using schizophrenia as a domain-specific test case. It outlines emerging strategies, recommendations and objectives for collaboration on shared ontology-building and harmonization of data for multi-site trials in this domain. JF - Studies in Health Technology and Informatics PB - IOPress VL - 126 SN - 978-1-58603-738-3 ER - TY - CONF T1 - e-Research Infrastructure Development and Community Engagement T2 - All Hands Meeting 2007 Y1 - 2007 A1 - Voss, A. A1 - Mascord, M. A1 - Fraser, M. A1 - Jirotka, M. A1 - Procter, R. A1 - Halfpenny, P. A1 - Fergusson, D. A1 - Atkinson, M. A1 - Dunn, S. A1 - Blanke, T. A1 - Hughes, L. A1 - Anderson, S. AB - The UK and wider international e-Research initiatives are entering a critical phase in which they need to move from the development of the basic underlying technology, demonstrators, prototypes and early applications to wider adoption and the development of stable infrastructures. In this paper we will review existing work on studies of infrastructure and community development, requirements elicitation for existing services as well as work within the arts and humanities and the social sciences to establish e-Research in these communities. We then describe two projects recently funded by JISC to study barriers to adoption and responses to them as well as use cases and service usage models. JF - All Hands Meeting 2007 CY - Nottingham, UK ER - TY - CONF T1 - Interaction as a Grounding for Peer to Peer Knowledge Sharing T2 - Advances in Web Semantics Y1 - 2007 A1 - Robertson, D. A1 - Walton, C. A1 - Barker, A. A1 - Besana, P. A1 - Chen-Burger, Y. A1 - Hassan, F. A1 - Lambert, D. A1 - Li, G. A1 - McGinnis, J A1 - Osman, N. A1 - Bundy, A. A1 - McNeill, F. A1 - van Harmelen, F. A1 - Sierra, C. A1 - Giunchiglia, F.
JF - Advances in Web Semantics PB - LNCS-IFIP VL - 1 ER - TY - JOUR T1 - Mining co-regulated gene profiles for the detection of functional associations in gene expression data JF - Bioinformatics Y1 - 2007 A1 - Gyenesei, Attila A1 - Wagner, Ulrich A1 - Barkow-Oesterreicher, Simon A1 - Stolte, Etzard A1 - Schlapbach, Ralph VL - 23 ER - TY - CONF T1 - Mining spatial gene expression data for association rules T2 - Lecture Notes in Bioinformatics Y1 - 2007 A1 - van Hemert, J. I. A1 - Baldock, R. A. ED - S. Hochreiter ED - R. Wagner KW - biomedical KW - data mining KW - DGEMap KW - e-Science AB - We analyse data from the Edinburgh Mouse Atlas Gene-Expression Database (EMAGE) which is a high quality data source for spatio-temporal gene expression patterns. Using a novel process whereby generated patterns are used to probe spatially-mapped gene expression domains, we are able to get unbiased results as opposed to using annotations based on predefined anatomy regions. We describe two processes to form association rules based on spatial configurations, one that associates spatial regions, the other associates genes. JF - Lecture Notes in Bioinformatics PB - Springer Verlag UR - http://dx.doi.org/10.1007/978-3-540-71233-6_6 ER - TY - JOUR T1 - OBO Explorer: An Editor for Open Biomedical Ontologies in OWL JF - Bioinformatics Y1 - 2007 A1 - Stuart Aitken A1 - Yin Chen A1 - Jonathan Bard AB - To clarify the semantics, and take advantage of tools and algorithms developed for the Semantic Web, a mapping from the Open Biomedical Ontologies (OBO) format to the Web Ontology Language (OWL) has been established. We present an ontology editor that allows end users to work directly with this OWL representation of OBO format ontologies. PB - Oxford Journals UR - http://bioinformatics.oxfordjournals.org/cgi/content/abstract/btm593? ER - TY - CONF T1 - Towards a Grid-Enabled Simulation Framework for Nano-CMOS Electronics T2 - 3rd IEEE International Conference on eScience and Grid Computing Y1 - 2007 A1 - Liangxiu Han A1 - Asen Asenov A1 - Dave Berry A1 - Campbell Millar A1 - Gareth Roy A1 - Scott Roy A1 - Richard Sinnott A1 - Gordon Stewart AB - The electronics design industry is facing major challenges as transistors continue to decrease in size. The next generation of devices will be so small that the position of individual atoms will affect their behaviour. This will cause the transistors on a chip to have highly variable characteristics, which in turn will impact circuit and system design tools. The EPSRC project “Meeting the Design Challenges of Nano-CMOS Electronics” (Nano-CMOS) has been funded to explore this area. In this paper, we describe the distributed data-management and computing framework under development within Nano-CMOS. A key aspect of this framework is the need for robust and reliable security mechanisms that support distributed electronics design groups who wish to collaborate by sharing designs, simulations, workflows, datasets and computation resources. This paper presents the system design, and an early prototype of the project which has been useful in helping us to understand the benefits of such a grid infrastructure. In particular, we also present two typical use cases: user authentication, and execution of large-scale device simulations. JF - 3rd IEEE International Conference on eScience and Grid Computing PB - IEEE Computer Society CY - Bangalore, India ER - TY - CONF T1 - Transaction-Based Grid Database Replication T2 - UK e-Science All Hands Meeting 2007 Y1 - 2007 A1 - Y. Chen A1 - D.
Berry A1 - P. Dantressangle KW - Grid, Replication, Transaction-based, OGSA-DAI AB - We present a framework for grid database replication. Data replication is one of the most useful strategies to achieve high levels of availability and fault tolerance as well as minimal access time in grids. It is commonly demanded by many grid applications. However, most existing grid replication systems only deal with read-only files. By contrast, several relational database vendors provide tools that offer transaction-based replication, but the capabilities of these products are insufficient to address grid issues. They lack scalability and cannot cope with the heterogeneous nature of grid resources. Our approach uses existing grid mechanisms to provide a metadata registry and to make initial replicas of data resources. We then define high-level APIs for managing transaction-based replication. These APIs can be mapped to a variety of relational database replication mechanisms allowing us to use existing vendor-specific solutions. The next stage in the project will use OGSA-DAI to manage replication across multiple domains. In this way, our framework can support transaction-based database synchronisation that maintains consistency in a data-intensive, large-scale distributed, disparate networking environment. JF - UK e-Science All Hands Meeting 2007 CY - Nottingham, UK ER - TY - CONF T1 - EGEE: building a pan-European grid training organisation T2 - ACSW Frontiers Y1 - 2006 A1 - Berlich, Rüdiger A1 - Hardt, Marcus A1 - Kunze, Marcel A1 - Atkinson, Malcolm P. A1 - Fergusson, David JF - ACSW Frontiers ER - TY - CONF T1 - FireGrid: Integrated emergency response and fire safety engineering for the future built environment T2 - All Hands Meeting 2005 Y1 - 2006 A1 - D. Berry A1 - Usmani, A. A1 - Torero, J. A1 - Tate, A. A1 - McLaughlin, S. A1 - Potter, S. A1 - Trew, A. A1 - Baxter, R. A1 - Bull, M. A1 - Atkinson, M. AB - Analyses of disasters such as the Piper Alpha explosion (Sylvester-Evans and Drysdale, 1998), the World Trade Centre collapse (Torero et al, 2002, Usmani et al, 2003) and the fires at Kings Cross (Drysdale et al, 1992) and the Mont Blanc tunnel (Rapport Commun, 1999) have revealed many mistaken decisions, such as that which sent 300 fire-fighters to their deaths in the World Trade Centre. Many of these mistakes have been attributed to a lack of information about the conditions within the fire and the imminent consequences of the event. E-Science offers an opportunity to significantly improve the intervention in fire emergencies. The FireGrid Consortium is working on a mixture of research projects to make this vision a reality. This paper describes the research challenges and our plans for solving them. JF - All Hands Meeting 2005 CY - Nottingham, UK ER - TY - CONF T1 - Grid Infrastructures for Secure Access to and Use of Bioinformatics Data: Experiences from the BRIDGES Project T2 - Proceedings of the First International Conference on Availability, Reliability and Security, ARES Y1 - 2006 A1 - Richard O. Sinnott A1 - Micha Bayer A1 - A. J. Stell A1 - Jos Koetsier JF - Proceedings of the First International Conference on Availability, Reliability and Security, ARES T3 - Proceedings of the First International Conference on Availability, Reliability and Security CY - Vienna, Austria ER - TY - JOUR T1 - A Criticality-Based Framework for Task Composition in Multi-Agent Bioinformatics Integration Systems JF - Bioinformatics Y1 - 2005 A1 - Karasavvas, K. A1 - Baldock, R. A1 - Burger, A.
VL - 21 ER - TY - JOUR T1 - The design and implementation of Grid database services in OGSA-DAI JF - Concurrency - Practice and Experience Y1 - 2005 A1 - Antonioletti, Mario A1 - Atkinson, Malcolm P. A1 - Baxter, Robert M. A1 - Borley, Andrew A1 - Hong, Neil P. Chue A1 - Collins, Brian A1 - Hardman, Neil A1 - Hume, Alastair C. A1 - Knox, Alan A1 - Mike Jackson A1 - Krause, Amrey A1 - Laws, Simon A1 - Magowan, James A1 - Pato VL - 17 ER - TY - CONF T1 - The Digital Curation Centre: a vision for digital curation T2 - 2005 IEEE International Symposium on Mass Storage Systems and Technology Y1 - 2005 A1 - Rusbridge, C. A1 - P. Burnhill A1 - S. Ross A1 - P. Buneman A1 - D. Giaretta A1 - Lyon, L. A1 - Atkinson, M. AB - We describe the aims and aspirations for the Digital Curation Centre (DCC), the UK response to the realisation that digital information is both essential and fragile. We recognise the equivalence of preservation as "interoperability with the future", asserting that digital curation is concerned with "communication across time". We see the DCC as having relevance for present day data curation and for continuing data access for generations to come. We describe the structure and plans of the DCC, designed to support these aspirations and based on a view of world class research being developed into curation services, all of which are underpinned by outreach to the broadest community. JF - 2005 IEEE International Symposium on Mass Storage Systems and Technology PB - IEEE Computer Society CY - Sardinia, Italy SN - 0-7803-9228-0 ER - TY - CONF T1 - Evolutionary Transitions as a Metaphor for Evolutionary Optimization T2 - LNAI 3630 Y1 - 2005 A1 - Defaweux, A. A1 - Lenaerts, T. A1 - van Hemert, J. I. ED - M. Capcarrere ED - A. A. Freitas ED - P. J. Bentley ED - C. G. Johnson ED - J. Timmis KW - constraint satisfaction KW - transition models AB - This paper proposes a computational model for solving optimisation problems that mimics the principle of evolutionary transitions in individual complexity. More specifically it incorporates mechanisms for the emergence of increasingly complex individuals from the interaction of more simple ones. The biological principles for transition are outlined and mapped onto an evolutionary computation context. The class of binary constraint satisfaction problems is used to illustrate the transition mechanism. JF - LNAI 3630 PB - Springer-Verlag SN - 3-540-28848-1 ER - TY - Generic T1 - Experience with the international testbed in the crossgrid project T2 - Advances in Grid Computing-EGC 2005 Y1 - 2005 A1 - Gomes, J. A1 - David, M. A1 - Martins, J. A1 - Bernardo, L. A1 - A García A1 - Hardt, M. A1 - Kornmayer, H. A1 - Marco, Jesus A1 - Marco, Rafael A1 - Rodríguez, David A1 - Diaz, Irma A1 - Cano, Daniel A1 - Salt, J. A1 - Gonzalez, S. A1 - J Sánchez A1 - Fassi, F. A1 - Lara, V. A1 - Nyczyk, P. A1 - Lason, P. A1 - Ozieblo, A. A1 - Wolniewicz, P. A1 - Bluj, M. A1 - K Nawrocki A1 - A Padee A1 - W Wislicki ED - Peter M. A. Sloot, Alfons G. Hoekstra, Thierry Priol, Alexander Reinefeld ED - Marian Bubak JF - Advances in Grid Computing-EGC 2005 T3 - LNCS PB - Springer Berlin/Heidelberg CY - Amsterdam VL - 3470 ER - TY - CONF T1 - A New Architecture for OGSA-DAI T2 - UK e-Science All Hands Meeting Y1 - 2005 A1 - Atkinson, M. A1 - Karasavvas, K. A1 - Antonioletti, M. A1 - Baxter, R. A1 - Borley, A. A1 - Hong, N. C. A1 - Hume, A. A1 - Jackson, M. A1 - Krause, A. A1 - Laws, S. A1 - Paton, N. A1 - Schopf, J. A1 - Sugden, T. A1 - Tourlas, K. A1 - Watson, P. 
JF - UK e-Science All Hands Meeting ER - TY - CONF T1 - OGSA-DAI Status and Benchmarks T2 - All Hands Meeting 2005 Y1 - 2005 A1 - Antonioletti, Mario A1 - Malcolm Atkinson A1 - Rob Baxter A1 - Andrew Borle A1 - Hong, Neil P. Chue A1 - Patrick Dantressangle A1 - Hume, Alastair C. A1 - Mike Jackson A1 - Krause, Amy A1 - Laws, Simon A1 - Parsons, Mark A1 - Paton, Norman W. A1 - Jennifer M. Schopf A1 - Tom Sugden A1 - Watson, Paul AB - This paper presents a status report on some of the highlights that have taken place within the OGSA-DAI project since the last AHM. A description of Release 6.0 functionality and details of the forthcoming release, due in September 2005, is given. Future directions for this project are discussed. This paper also describes initial results of work being done to systematically benchmark recent OGSA-DAI releases. The OGSA-DAI software distribution, and more information about the project, is available from the project website at www.ogsadai.org.uk. JF - All Hands Meeting 2005 CY - Nottingham, UK ER - TY - CONF T1 - Organization of the International Testbed of the CrossGrid Project T2 - Cracow Grid Workshop 2005 Y1 - 2005 A1 - Gomes, J. A1 - David, M. A1 - Martins, J. A1 - Bernardo, L. A1 - Garcia, A. A1 - Hardt, M. A1 - Kornmayer, H. A1 - Marco, Rafael A1 - Rodríguez, David A1 - Diaz, Irma A1 - Cano, Daniel A1 - Salt, J. A1 - Gonzalez, S. A1 - Sanchez, J. A1 - Fassi, F. A1 - Lara, V. A1 - Nyczyk, P. A1 - Lason, P. A1 - Ozieblo, A. A1 - Wolniewicz, P. A1 - Bluj, M. JF - Cracow Grid Workshop 2005 ER - TY - CONF T1 - Transition Models as an incremental approach for problem solving in Evolutionary Algorithms T2 - Proceedings of the Genetic and Evolutionary Computation Conference Y1 - 2005 A1 - Defaweux, A. A1 - Lenaerts, T. A1 - van Hemert, J. I. A1 - Parent, J. ED - H.-G. Beyer ED - et al KW - constraint satisfaction KW - transition models AB - This paper proposes an incremental approach for building solutions using evolutionary computation. It presents a simple evolutionary model called a Transition model. It lets building units of a solution interact and then uses an evolutionary process to merge these units toward a full solution for the problem at hand. The paper provides a preliminary study on the evolutionary dynamics of this model as well as an empirical comparison with other evolutionary techniques on binary constraint satisfaction. JF - Proceedings of the Genetic and Evolutionary Computation Conference PB - ACM Press ER - TY - JOUR T1 - Bioinformatics System Integration and Agent Technology JF - Journal of Biomedical Informatics Y1 - 2004 A1 - Karasavvas, K. A1 - Baldock, R. A1 - Burger, A. VL - 37 ER - TY - Generic T1 - Development of a Grid Infrastructure for Functional Genomics T2 - Life Science Grid Conference (LSGrid 2004) Y1 - 2004 A1 - Sinnott, R. O. A1 - Bayer, M. A1 - Houghton, D. A1 - D. Berry A1 - Ferrier, M. JF - Life Science Grid Conference (LSGrid 2004) T3 - LNCS PB - Springer Verlag CY - Kanazawa, Japan ER - TY - CONF T1 - Dynamic Routing Problems with Fruitful Regions: Models and Evolutionary Computation T2 - LNCS Y1 - 2004 A1 - van Hemert, J. I. A1 - la Poutré, J. A. ED - Xin Yao ED - Edmund Burke ED - Jose A. Lozano ED - Jim Smith ED - Juan J. Merelo-Guervós ED - John A.
Bullinaria ED - Jonathan Rowe ED - Peter Tiňo ED - Ata Kabán ED - Hans-Paul Schwefel KW - dynamic problems KW - evolutionary computation KW - vehicle routing AB - We introduce the concept of fruitful regions in a dynamic routing context: regions that have a high potential of generating loads to be transported. The objective is to maximise the number of loads transported, while keeping to capacity and time constraints. Loads arrive while the problem is being solved, which makes it a real-time routing problem. The solver is a self-adaptive evolutionary algorithm that ensures feasible solutions at all times. We investigate under what conditions the exploration of fruitful regions improves the effectiveness of the evolutionary algorithm. JF - LNCS PB - Springer-Verlag CY - Birmingham, UK VL - 3242 SN - 3-540-23092-0 ER - TY - Generic T1 - Grid Services Supporting the Usage of Secure Federated, Distributed Biomedical Data T2 - All Hands Meeting 2004 Y1 - 2004 A1 - Richard Sinnott A1 - Malcolm Atkinson A1 - Micha Bayer A1 - Dave Berry A1 - Anna Dominiczak A1 - Magnus Ferrier A1 - David Gilbert A1 - Neil Hanlon A1 - Derek Houghton A1 - Hunt, Ela A1 - David White AB - The BRIDGES project is a UK e-Science project that provides grid based support for biomedical research into the genetics of hypertension – the Cardiovascular Functional Genomics Project (CFG). Its main goal is to provide an effective environment for CFG, and biomedical research in general, including access to integrated data, analysis and visualization, with appropriate authorisation and privacy, as well as grid based computational tools and resources. It also aims to provide an improved understanding of the requirements of academic biomedical research virtual organizations and to evaluate the utility of existing data federation tools. JF - All Hands Meeting 2004 CY - Nottingham, UK UR - http://www.allhands.org.uk/2004/proceedings/papers/87.pdf ER - TY - CONF T1 - OGSA-DAI Status Report and Future Directions T2 - All Hands Meeting 2004 Y1 - 2004 A1 - Antonioletti, Mario A1 - Malcolm Atkinson A1 - Rob Baxter A1 - Borley, Andrew A1 - Hong, Neil P. Chue A1 - Collins, Brian A1 - Jonathan Davies A1 - Desmond Fitzgerald A1 - Hardman, Neil A1 - Hume, Alastair C. A1 - Mike Jackson A1 - Krause, Amrey A1 - Laws, Simon A1 - Paton, Norman W. A1 - Tom Sugden A1 - Watson, Paul A1 - Mar AB - Data Access and Integration (DAI) of data resources, such as relational and XML databases, within a Grid context. Project members also participate in the development of DAI standards through the GGF DAIS WG. The standards that emerge through this effort will be adopted by OGSA-DAI once they have stabilised. The OGSA-DAI developers are also engaging with a growing user community to gather their data and functionality requirements. Several large projects are already using OGSA-DAI to provide their DAI capabilities. This paper presents a status report on OGSA-DAI activities since the last AHM and announces future directions. The OGSA-DAI software distribution and more information about the project is available from the project website at http://www.ogsadai.org.uk/.
JF - All Hands Meeting 2004 CY - Nottingham, UK ER - TY - CONF T1 - OGSA-DAI: Two Years On T2 - GGF10 Y1 - 2004 A1 - Antonioletti, Mario A1 - Malcolm Atkinson A1 - Rob Baxter A1 - Borley, Andrew A1 - Neil Chue Hong A1 - Collins, Brian A1 - Jonathan Davies A1 - Hardman, Neil A1 - George Hicken A1 - Ally Hume A1 - Mike Jackson A1 - Krause, Amrey A1 - Laws, Simon A1 - Magowan, James A1 - Jeremy Nowell A1 - Paton, Norman W. A1 - Dave Pearson A1 - To AB - The OGSA-DAI project has been producing Grid-enabled middleware for almost two years now, providing data access and integration capabilities to data resources, such as databases, within an OGSA context. In these two years, OGSA-DAI has been tracking rapidly evolving standards, managing changes in software dependencies, contributing to the standardisation process and liaising with a growing user community together with their associated data requirements. This process has imparted important lessons and raised a number of issues that need to be addressed if a middleware product is to be widely adopted. This paper examines the experiences of OGSA-DAI in implementing proposed standards, the likely impact that the still-evolving standards landscape will have on future implementations and how these affect uptake of the software. The paper also examines the gathering of requirements from and engagement with the Grid community, the difficulties of defining a process for the management and publishing of metadata, and whether relevant standards can be implemented in an efficient manner. The OGSA-DAI software distribution and more details about the project are available from the project Web site at http://www.ogsadai.org.uk/. JF - GGF10 CY - Berlin, Germany ER - TY - CONF T1 - Phase transition properties of clustered travelling salesman problem instances generated with evolutionary computation T2 - LNCS Y1 - 2004 A1 - van Hemert, J. I. A1 - Urquhart, N. B. ED - Xin Yao ED - Edmund Burke ED - Jose A. Lozano ED - Jim Smith ED - Juan J. Merelo-Guervós ED - John A. Bullinaria ED - Jonathan Rowe ED - Peter Tiňo ED - Ata Kabán ED - Hans-Paul Schwefel KW - evolutionary computation KW - problem evolving KW - travelling salesman AB - This paper introduces a generator that creates problem instances for the Euclidean symmetric travelling salesman problem. To fit real world problems, we look at maps consisting of clustered nodes. Uniform random sampling methods do not result in maps where the nodes are spread out to form identifiable clusters. To improve upon this, we propose an evolutionary algorithm that uses the layout of nodes on a map as its genotype. By optimising the spread until a set of constraints is satisfied, we are able to produce better clustered maps, in a more robust way. When varying the number of clusters in these maps and, when solving the Euclidean symmetric travelling salesman problem using Chained Lin-Kernighan, we observe a phase transition in the form of an easy-hard-easy pattern. JF - LNCS PB - Springer-Verlag CY - Birmingham, UK VL - 3242 SN - 3-540-23092-0 UR - http://www.vanhemert.co.uk/files/clustered-phase-transition-tsp.tar.gz ER - TY - JOUR T1 - Robust parameter settings for variation operators by measuring the resampling ratio: A study on binary constraint satisfaction problems JF - Journal of Heuristics Y1 - 2004 A1 - van Hemert, J. I. A1 - Bäck, T.
KW - constraint satisfaction KW - evolutionary computation KW - resampling ratio AB - In this article, we try to provide insight into the consequence of mutation and crossover rates when solving binary constraint satisfaction problems. This insight is based on a measurement of the space searched by an evolutionary algorithm. From data empirically acquired we describe the relation between the success ratio and the searched space. This is achieved using the resampling ratio, which is a measure for the amount of points revisited by a search algorithm. This relation is based on combinations of parameter settings for the variation operators. We then show that the resampling ratio is useful for identifying the quality of parameter settings, and provide a range that corresponds to robust parameter settings. VL - 10 ER - TY - CONF T1 - The Design and Implementation of Grid Database Services in OGSA-DAI T2 - All Hands Meeting 2003 Y1 - 2003 A1 - Ali Anjomshoaa A1 - Antonioletti, Mario A1 - Malcolm Atkinson A1 - Rob Baxter A1 - Borley, Andrew A1 - Hong, Neil P. Chue A1 - Collins, Brian A1 - Hardman, Neil A1 - George Hicken A1 - Ally Hume A1 - Knox, Alan A1 - Mike Jackson A1 - Krause, Amrey A1 - Laws, Simon A1 - Magowan, James A1 - Charaka Palansuriya A1 - Paton, Norman W. AB - This paper presents a high-level overview of the design and implementation of the core components of the OGSA-DAI project. It describes the design decisions made, the project’s interaction with the Data Access and Integration Working Group of the Global Grid Forum and provides an overview of implementation characteristics. Further details of the implementation are provided in the extensive documentation available from the project web site. JF - All Hands Meeting 2003 CY - Nottingham, UK ER - TY - JOUR T1 - The pervasiveness of evolution in GRUMPS software JF - Softw., Pract. Exper. Y1 - 2003 A1 - Evans, Huw A1 - Atkinson, Malcolm P. A1 - Brown, Margaret A1 - Cargill, Julie A1 - Crease, Murray A1 - Draper, Steve A1 - Gray, Philip D. A1 - Thomas, Richard VL - 33 ER - TY - CHAP T1 - Rationale for Choosing the Open Grid Services Architecture T2 - Grid Computing: Making the Global Infrastructure a Reality Y1 - 2003 A1 - Atkinson, M. ED - F. Berman ED - G. Fox ED - T. Hey JF - Grid Computing: Making the Global Infrastructure a Reality PB - John Wiley & Sons, Ltd CY - Chichester, UK SN - 9780470853191 ER - TY - CONF T1 - Criticality-Based Task Composition in Distributed Bioinformatics Systems T2 - Proceedings of the Twelfth International Conference on Intelligent Systems for Molecular Biology Y1 - 2002 A1 - Karasavvas, K. A1 - Baldock, R. A1 - Burger, A. JF - Proceedings of the Twelfth International Conference on Intelligent Systems for Molecular Biology ER - TY - CONF T1 - Measuring the Searched Space to Guide Efficiency: The Principle and Evidence on Constraint Satisfaction T2 - Springer Lecture Notes on Computer Science Y1 - 2002 A1 - van Hemert, J. I. A1 - Bäck, T. ED - J. J. Merelo ED - A. Panagiotis ED - H.-G. Beyer ED - José-Luis Fernández-Villacañas ED - Hans-Paul Schwefel KW - constraint satisfaction KW - resampling ratio AB - In this paper we present a new tool to measure the efficiency of evolutionary algorithms by storing the whole searched space of a run, a process whereby we gain insight into the number of distinct points in the state space an algorithm has visited as opposed to the number of function evaluations done within the run.
This investigation demonstrates a certain inefficiency of the classical mutation operator with mutation-rate 1/l, where l is the dimension of the state space. Furthermore we present a model for predicting this inefficiency and verify it empirically using the new tool on binary constraint satisfaction problems. JF - Springer Lecture Notes on Computer Science PB - Springer-Verlag, Berlin SN - 3-540-44139-5 ER - TY - CONF T1 - A Multi-Agent Bioinformatics Integration System with Adjustable Autonomy: An Overview T2 - Proceedings of the First International Conference on Autonomous Agents and Multi-Agent Systems Y1 - 2002 A1 - Karasavvas, K. A1 - Burger, A. A1 - Baldock, R. JF - Proceedings of the First International Conference on Autonomous Agents and Multi-Agent Systems PB - ACM ER - TY - CONF T1 - A Multi-Agent Bioinformatics Integration System with Adjustable Autonomy T2 - Lecture Notes in Computer Science Y1 - 2002 A1 - Karasavvas, K. A1 - Burger, A. A1 - Baldock, R. JF - Lecture Notes in Computer Science VL - 2417 ER - TY - CONF T1 - Use of Evolutionary Algorithms for Telescope Scheduling T2 - Integrated Modeling of Telescopes Y1 - 2002 A1 - Grim, R. A1 - Jansen, M. L. M. A1 - Baan, A. A1 - van Hemert, J. I. A1 - de Wolf, H. ED - Torben Anderson KW - constraint satisfaction KW - scheduling AB - LOFAR, a new radio telescope, will be designed to observe with up to 8 independent beams, thus allowing several simultaneous observations. Scheduling of multiple observations parallel in time, each having their own constraints, requires a more intelligent and flexible scheduling function than operated before. In support of the LOFAR radio telescope project, and in co-operation with Leiden University, Fokker Space has started a study to investigate the suitability of the use of evolutionary algorithms applied to complex scheduling problems. After a positive familiarisation phase, we now examine the potential use of evolutionary algorithms via a demonstration project. Results of the familiarisation phase, and the first results of the demonstration project are presented in this paper. JF - Integrated Modeling of Telescopes PB - The International Society for Optical Engineering (SPIE) VL - 4757 ER - TY - CONF T1 - An Engineering Approach to Evolutionary Art T2 - Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001) Y1 - 2001 A1 - van Hemert, J. I. A1 - Jansen, M. L. M. ED - Lee Spector ED - Erik D. Goodman ED - Annie Wu ED - W. B. Langdon ED - Hans-Michael Voigt ED - Mitsuo Gen ED - Sandip Sen ED - Marco Dorigo ED - Shahram Pezeshk ED - Max H. Garzon ED - Edmund Burke KW - evolutionary art AB - We present a general system that evolves art on the Internet. The system runs on a server which enables it to collect information about its usage world wide; its core uses operators and representations from genetic programming. We show two types of art that can be evolved using this general system. JF - Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001) PB - Morgan Kaufmann Publishers, San Francisco ER - TY - CONF T1 - A “Futurist” approach to dynamic environments T2 - Proceedings of the Workshops at the Genetic and Evolutionary Computation Conference, Dynamic Optimization Problems Y1 - 2001 A1 - van Hemert, J. I. A1 - Van Hoyweghen, C. A1 - Lukschandl, E. A1 - Verbeeck, K. ED - J. Branke ED - Th. Bäck KW - dynamic problems AB - The optimization of dynamic environments has proved to be a difficult area for Evolutionary Algorithms.
As standard haploid populations find it difficult to track a moving target, different schemes have been suggested to improve the situation. We study a novel approach by making use of a meta learner which tries to predict the next state of the environment, i.e. the next value of the goal the individuals have to achieve, by making use of the accumulated knowledge from past performance. JF - Proceedings of the Workshops at the Genetic and Evolutionary Computation Conference, Dynamic Optimization Problems PB - Morgan Kaufmann Publishers, San Francisco ER - TY - BOOK T1 - GRUMPS Summer Anthology, 2001 Y1 - 2001 A1 - Atkinson, M. A1 - Brown, M. A1 - Cargill, J. A1 - Crease, M. A1 - Draper, S. A1 - Evans, H. A1 - Gray, P. A1 - Mitchell, C. A1 - Ritchie, M. A1 - Thomas, R. AB - This is the first collection of papers from GRUMPS [http://grumps.dcs.gla.ac.uk]. The project only started up in February 2001, and this collection (frozen at 1 Sept 2001) shows that it got off to a productive start. Versions of some of these papers have been submitted to conferences and workshops: the website will have more information on publication status and history. GRUMPS decided to begin with a first study, partly to help the team coalesce. This involved installing two pieces of software in a first year computing science lab: one (the "UAR") to record a large volume of student actions at a low level with a view to mining them later, another (the "LSS") directly designed to assist tutor-student interaction. Some of the papers derive from that, although more are planned. Results from this first study can be found on the website. The project also has a link to UWA in Perth, Western Australia, where related software has already been developed and used as described in one of the papers. Another project strand concerns using handsets in lecture theatres to support interactivity there, as two other papers describe. As yet unrepresented in this collection, GRUMPS will also be entering the bioinformatics application area. The GRUMPS project operates on several levels. It is based in the field of Distributed Information Management (DIM), expecting to cover both mobile and static nodes, synchronous and detached clients, high and low volume data sources. The specific focus of the project (see the original proposal on the web site) is to address records of computational activity (where any such pre-existing usage might have extra record collection installed) and data experimentation, where the questions to be asked of the data emerge concurrently with data collection which will therefore be dynamically modifiable: a requirement that further pushes on the space of DIM. The level above concerns building and making usable tools for asking questions of the data, or rather of the activities that generate the data. Above that again is the application domain level: what the original computational activities serve, education and bioinformatics being two identified cases. The GRUMPS team is therefore multidisciplinary, from DIM architecture researchers to educational evaluators. The mix of papers reflects this. PB - Academic Press ER - TY - CONF T1 - Constraint Satisfaction Problems and Evolutionary Algorithms: A Reality Check T2 - Proceedings of the Twelfth Belgium/Netherlands Conference on Artificial Intelligence (BNAIC'00) Y1 - 2000 A1 - van Hemert, J. I. ED - van den Bosch, A. ED - H. Weigand KW - constraint satisfaction AB - Constraint satisfaction has been the subject of many studies.
Different areas of research have tried to solve all kinds of constraint problems. Here we will look at a general model for constraint satisfaction problems in the form of binary constraint satisfaction. The problems generated from this model are studied in the research area of constraint programming and in the research area of evolutionary computation. This paper provides an empirical comparison of two techniques from each area. Basically, this is a check on how well both areas are doing. It turns out that, although evolutionary algorithms are doing well, classic approaches are still more successful. JF - Proceedings of the Twelfth Belgium/Netherlands Conference on Artificial Intelligence (BNAIC'00) PB - BNVKI, Dutch and the Belgian AI Association ER - TY - CONF T1 - Managing Transparency in Distributed Bioinformatics Systems T2 - European Media Lab Workshop on Management and Integration of Biochemical Data Y1 - 2000 A1 - Karasavvas, K. A1 - Baldock, R. A1 - Burger, A. JF - European Media Lab Workshop on Management and Integration of Biochemical Data ER - TY - CONF T1 - Stepwise Adaptation of Weights for Symbolic Regression with Genetic Programming T2 - Proceedings of the Twelfth Belgium/Netherlands Conference on Artificial Intelligence (BNAIC'00) Y1 - 2000 A1 - Eggermont, J. A1 - van Hemert, J. I. ED - van den Bosch, A. ED - H. Weigand KW - data mining KW - genetic programming AB - In this paper we continue study on the Stepwise Adaptation of Weights (SAW) technique. Previous studies on constraint satisfaction and data classification have indicated that SAW is a promising technique to boost the performance of evolutionary algorithms. Here we use SAW to boost performance of a genetic programming algorithm on simple symbolic regression problems. We measure the performance of a standard GP and two variants of SAW extensions on two different symbolic regression problems. JF - Proceedings of the Twelfth Belgium/Netherlands Conference on Artificial Intelligence (BNAIC'00) PB - BNVKI, Dutch and the Belgian AI Association ER - TY - CONF T1 - A comparison of genetic programming variants for data classification T2 - Springer Lecture Notes on Computer Science Y1 - 1999 A1 - Eggermont, J. A1 - Eiben, A. E. A1 - van Hemert, J. I. ED - D. J. Hand ED - J. N. Kok ED - M. R. Berthold KW - classification KW - data mining KW - genetic programming AB - In this paper we report the results of a comparative study on different variations of genetic programming applied on binary data classification problems. The first genetic programming variant is weighting data records for calculating the classification error and modifying the weights during the run. Hereby the algorithm is defining its own fitness function in an on-line fashion giving higher weights to 'hard' records. Another novel feature we study is the atomic representation, where 'Booleanization' of data is not performed at the root, but at the leafs of the trees and only Boolean functions are used in the trees' body. As a third aspect we look at generational and steady-state models in combination of both features. JF - Springer Lecture Notes on Computer Science PB - Springer-Verlag, Berlin SN - 3-540-66332-0 ER - TY - CONF T1 - Population dynamics and emerging features in AEGIS T2 - Proceedings of the Genetic and Evolutionary Computation Conference Y1 - 1999 A1 - Eiben, A. E. A1 - Elia, D. A1 - van Hemert, J. I. ED - W. Banzhaf ED - J. Daida ED - Eiben, A. E. ED - M. H. Garzon ED - V. Honavar ED - M. Jakiela ED - R. E.
Smith KW - dynamic problems AB - We describe an empirical investigation within an artificial world, aegis, where a population of animals and plants is evolving. We compare different system setups in search of an 'ideal' world that allows a constantly high number of inhabitants for a long period of time. We observe that high responsiveness at individual level (speed of movement) or population level (high fertility) are 'ideal'. Furthermore, we investigate the emergence of the so-called mental features of animals determining their social, consumptional and aggressive behaviour. The tests show that being socially oriented is generally advantageous, while aggressive behaviour only emerges under specific circumstances. JF - Proceedings of the Genetic and Evolutionary Computation Conference PB - Morgan Kaufmann Publishers, San Francisco ER - TY - Generic T1 - VLDB'99, Proceedings of 25th International Conference on Very Large Data Bases, September 7-10, 1999, Edinburgh, Scotland, UK Y1 - 1999 A1 - Atkinson, Malcolm P. A1 - Maria E. Orlowska A1 - Patrick Valduriez A1 - Stanley B. Zdonik A1 - Michael L. Brodie ED - Atkinson, Malcolm P. ED - Maria E. Orlowska ED - Patrick Valduriez ED - Stanley B. Zdonik ED - Michael L. Brodie PB - Morgan Kaufmann SN - 1-55860-615-7 ER - TY - CONF T1 - Solving Binary Constraint Satisfaction Problems using Evolutionary Algorithms with an Adaptive Fitness Function T2 - Springer Lecture Notes on Computer Science Y1 - 1998 A1 - Eiben, A. E. A1 - van Hemert, J. I. A1 - Marchiori, E. A1 - Steenbeek, A. G. ED - Eiben, A. E. ED - Th. Bäck ED - M. Schoenauer ED - H.-P. Schwefel KW - constraint satisfaction AB - This paper presents a comparative study of Evolutionary Algorithms (EAs) for Constraint Satisfaction Problems (CSPs). We focus on EAs where fitness is based on penalization of constraint violations and the penalties are adapted during the execution. Three different EAs based on this approach are implemented. For highly connected constraint networks, the results provide further empirical support to the theoretical prediction of the phase transition in binary CSPs. JF - Springer Lecture Notes on Computer Science PB - Springer-Verlag, Berlin ER -