TY - BOOK T1 - The DATA Bonanza: Improving Knowledge Discovery in Science, Engineering, and Business T2 - Wiley Series on Parallel and Distributed Computing (Editor: Albert Y. Zomaya) Y1 - 2013 A1 - Atkinson, Malcolm P. A1 - Baxter, Robert M. A1 - Peter Brezany A1 - Oscar Corcho A1 - Michelle Galea A1 - Parsons, Mark A1 - Snelling, David A1 - van Hemert, Jano KW - Big Data KW - Data Intensive KW - data mining KW - Data Streaming KW - Databases KW - Dispel KW - Distributed Computing KW - Knowledge Discovery KW - Workflows AB - With the digital revolution opening up tremendous opportunities in many fields, there is a growing need for skilled professionals who can develop data-intensive systems and extract information and knowledge from them. This book frames for the first time a new systematic approach for tackling the challenges of data-intensive computing, providing decision makers and technical experts alike with practical tools for dealing with our exploding data collections. Emphasising data-intensive thinking and interdisciplinary collaboration, The DATA Bonanza: Improving Knowledge Discovery in Science, Engineering, and Business examines the essential components of knowledge discovery, surveys many of the current research efforts worldwide, and points to new areas for innovation. 
Complete with a wealth of examples and DISPEL-based methods demonstrating how to gain more from data in real-world systems, the book: * Outlines the concepts and rationale for implementing data-intensive computing in organisations * Covers from the ground up problem-solving strategies for data analysis in a data-rich world * Introduces techniques for data-intensive engineering using the Data-Intensive Systems Process Engineering Language DISPEL * Features in-depth case studies in customer relations, environmental hazards, seismology, and more * Showcases successful applications in areas ranging from astronomy and the humanities to transport engineering * Includes sample program snippets throughout the text as well as additional materials on a companion website The DATA Bonanza is a must-have guide for information strategists, data analysts, and engineers in business, research, and government, and for anyone wishing to be on the cutting edge of data mining, machine learning, databases, distributed systems, or large-scale computing. JF - Wiley Series on Parallel and Distributed Computing (Editor: Albert Y. Zomaya) PB - John Wiley & Sons Inc. SN - 978-1-118-39864-7 ER - TY - CHAP T1 - Data-Intensive Analysis T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Oscar Corcho A1 - van Hemert, Jano ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - data mining KW - Data-Analysis Experts KW - Data-Intensive Analysis KW - Knowledge Discovery AB - Part II: "Data-intensive Knowledge Discovery", focuses on the needs of data-analysis experts. It illustrates the problem-solving strategies appropriate for a data-rich world, without delving into the details of underlying technologies. 
It should engage and inform data-analysis specialists, such as statisticians, data miners, image analysts, bio-informaticians or chemo-informaticians, and generate ideas pertinent to their application areas. Chapter 5: "Data-intensive Analysis", introduces a set of common problems that data-analysis experts often encounter, by means of a set of scenarios of increasing levels of complexity. The scenarios typify knowledge discovery challenges and the presented solutions provide practical methods, offering a starting point for readers addressing their own data challenges. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - Data-Intensive Components and Usage Patterns T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Oscar Corcho ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data Analysis KW - data mining KW - Data-Intensive Components KW - Registry KW - Workflow Libraries KW - Workflow Sharing AB - Chapter 7: "Data-intensive components and usage patterns", provides a systematic review of the components that are commonly used in knowledge discovery tasks as well as common patterns of component composition. That is, it introduces the processing elements from which knowledge discovery solutions are built and common composition patterns for delivering trustworthy information. It reflects on how these components and patterns are evolving in a data-intensive context. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd.
ER - TY - CHAP T1 - The Data-Intensive Survival Guide T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Malcolm Atkinson ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Analysis Experts KW - Data-Intensive Architecture KW - Data-intensive Computing KW - Data-Intensive Engineers KW - Datascopes KW - Dispel KW - Domain Experts KW - Intellectual Ramps KW - Knowledge Discovery KW - Workflows AB - Chapter 3: "The data-intensive survival guide", presents an overview of all of the elements of the proposed data-intensive strategy. Sufficient detail is presented for readers to understand the principles and practice that we recommend. It should also provide a good preparation for readers who choose to sample later chapters. It introduces three professional viewpoints: domain experts, data-analysis experts, and data-intensive engineers. Success depends on a balanced approach that develops the capacity of all three groups. A data-intensive architecture provides a flexible framework for that balanced approach. This enables the three groups to build and exploit data-intensive processes that incrementally step from data to results. A language is introduced to describe these incremental data processes from all three points of view. The chapter introduces ‘datascopes’ as the productized data-handling environments and ‘intellectual ramps’ as the ‘on ramps’ for the highways from data to knowledge. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. 
ER - TY - CHAP T1 - Data-Intensive Thinking with DISPEL T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Malcolm Atkinson ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Intensive Machines KW - Data-Intensive Thinking KW - Data-intensive Computing KW - Dispel KW - Distributed Computing KW - Knowledge Discovery AB - Chapter 4: "Data-intensive thinking with DISPEL", engages the reader with technical issues and solutions, by working through a sequence of examples, building up from a sketch of a solution to a large-scale data challenge. It uses the DISPEL language extensively, introducing its concepts and constructs. It shows how DISPEL may help designers, data-analysts, and engineers develop solutions to the requirements emerging in any data-intensive application domain. The reader is taken through simple steps initially; these build up to the conceptually complex steps necessary to cope with the realities of real data providers, real data, real distributed systems, and long-running processes. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Inc. ER - TY - CHAP T1 - Definition of the DISPEL Language T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Paul Martin A1 - Yaikhom, Gagarine ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data Streaming KW - Data-intensive Computing KW - Dispel AB - Chapter 10: "Definition of the DISPEL language", describes the novel aspects of the DISPEL language: its constructs, capabilities, and anticipated programming style.
JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business T3 - Parallel and Distributed Computing, series editor Albert Y. Zomaya PB - John Wiley & Sons Inc. ER - TY - CHAP T1 - The Digital-Data Challenge T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Malcolm Atkinson A1 - Parsons, Mark ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Big Data KW - Data-intensive Computing KW - Knowledge Discovery KW - Digital Data KW - Digital-Data Revolution AB - Part I: "Strategies for success in the digital-data revolution", provides an executive summary of the whole book to convince strategists, politicians, managers, and educators that our future data-intensive society requires new thinking, new behavior, new culture, and new distribution of investment and effort. This part will introduce the major concepts so that readers are equipped to discuss and steer their organization’s response to the opportunities and obligations brought by the growing wealth of data. It will help readers understand the changing context brought about by advances in digital devices, digital communication, and ubiquitous computing. Chapter 1: "The digital-data challenge", will help readers to understand the challenges ahead in making good use of the data and introduce ideas that will lead to helpful strategies. A global digital-data revolution is catalyzing change in the ways in which we live, work, relax, govern, and organize. This is a significant change in society, as important as the invention of printing or the industrial revolution, but more challenging because it is happening globally at Internet speed. Becoming agile in adapting to this new world is essential. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd.
ER - TY - CHAP T1 - The Digital-Data Revolution T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Malcolm Atkinson ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data KW - Information KW - Knowledge KW - Knowledge Discovery KW - Social Impact of Digital Data KW - Wisdom KW - Data-intensive Computing AB - Chapter 2: "The digital-data revolution", reviews the relationships between data, information, knowledge, and wisdom. It analyses and quantifies the changes in technology and society that are delivering the data bonanza, and then reviews the consequential changes via representative examples in biology, Earth sciences, social sciences, leisure activity, and business. It exposes quantitative details and shows the complexity and diversity of the growing wealth of data, introducing some of its potential benefits and examples of the impediments to successfully realizing those benefits. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - DISPEL Development T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Adrian Mouat A1 - Snelling, David ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Diagnostics KW - Dispel KW - IDE KW - Libraries KW - Processing Elements AB - Chapter 11: "DISPEL development", describes the tools and libraries that a DISPEL developer might expect to use. The tools include those needed during process definition, those required to organize enactment, and diagnostic aids for developers of applications and platforms. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Inc.
ER - TY - CHAP T1 - DISPEL Enactment T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Chee Sun Liew A1 - Krause, Amrey A1 - Snelling, David ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data Streaming KW - Data-Intensive Engineering KW - Dispel KW - Workflow Enactment AB - Chapter 12: "DISPEL enactment", describes the four stages of DISPEL enactment. It is targeted at the data-intensive engineers who implement enactment services. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Inc. ER - TY - CHAP T1 - Foreword T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Tony Hey ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Big Data KW - Data-intensive Computing KW - Knowledge Discovery JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - Platforms for Data-Intensive Analysis T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Snelling, David ED - Malcolm Atkinson ED - Baxter, Robert M. ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Intensive Engineering KW - Data-Intensive Systems KW - Dispel KW - Distributed Systems AB - Part III: "Data-intensive engineering", is targeted at technical experts who will develop complex applications, new components, or data-intensive platforms.
The techniques introduced may be applied very widely; for example, to any data-intensive distributed application, such as index generation, image processing, sequence comparison, text analysis, and sensor-stream monitoring. The challenges, methods, and implementation requirements are illustrated by making extensive use of DISPEL. Chapter 9: "Platforms for data-intensive analysis", gives a reprise of data-intensive architectures, examines the business case for investing in them, and introduces the stages of data-intensive workflow enactment. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - Preface T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Malcolm Atkinson ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Big Data KW - Data-intensive Computing KW - Knowledge Discovery AB - Who should read the book and why. The structure and conventions used. Suggested reading paths for different categories of reader. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd.
ER - TY - CHAP T1 - Problem Solving in Data-Intensive Knowledge Discovery T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Oscar Corcho A1 - van Hemert, Jano ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Analysis Experts KW - Data-Intensive Analysis KW - Design Patterns for Knowledge Discovery KW - Knowledge Discovery AB - Chapter 6: "Problem solving in data-intensive knowledge discovery", builds on the previous scenarios to provide an overview of effective strategies in knowledge discovery, highlighting common problem-solving methods that apply in conventional contexts and focusing on the similarities and differences of these methods. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - Sharing and Reuse in Knowledge Discovery T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Oscar Corcho ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Intensive Analysis KW - Knowledge Discovery KW - Ontologies KW - Semantic Web KW - Sharing AB - Chapter 8: "Sharing and reuse in knowledge discovery", introduces more advanced knowledge discovery problems, and shows how improved component and pattern descriptions facilitate reuse. This supports the assembly of libraries of high-level components well-adapted to classes of knowledge discovery methods or application domains. The descriptions are made more powerful by introducing notations from the Semantic Web. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd.
ER - TY - JOUR T1 - EnzML: multi-label prediction of enzyme classes using InterPro signatures. JF - BMC Bioinformatics Y1 - 2012 A1 - De Ferrari, Luna A1 - Stuart Aitken A1 - van Hemert, Jano A1 - Goryanin, Igor AB - BACKGROUND: Manual annotation of enzymatic functions cannot keep up with automatic genome sequencing. In this work we explore the capacity of InterPro sequence signatures to automatically predict enzymatic function. RESULTS: We present EnzML, a multi-label classification method that can also efficiently account for proteins with multiple enzymatic functions (50,000 in UniProt). EnzML was evaluated using a standard set of 300,747 proteins for which the manually curated Swiss-Prot and KEGG databases have agreeing Enzyme Commission (EC) annotations. EnzML achieved more than 98% subset accuracy (exact match of all correct Enzyme Commission classes of a protein) for the entire dataset and between 87% and 97% subset accuracy in reannotating eight entire proteomes: human, mouse, rat, mouse-ear cress, fruit fly, the S. pombe yeast, the E. coli bacterium and the M. jannaschii archaebacterium. To understand the role played by the dataset size, we compared the cross-evaluation results of smaller datasets, either constructed at random or from specific taxonomic domains such as archaea, bacteria, fungi, invertebrates, plants and vertebrates. The results were confirmed even when the redundancy in the dataset was reduced using UniRef100, UniRef90 or UniRef50 clusters. CONCLUSIONS: InterPro signatures are a compact and powerful attribute space for the prediction of enzymatic function. This representation makes multi-label machine learning feasible in reasonable time (30 minutes to train on 300,747 instances with 10,852 attributes and 2,201 class values) using the Mulan Binary Relevance Nearest Neighbours algorithm implementation (BR-kNN).
VL - 13 ER - TY - JOUR T1 - Automatically Identifying and Annotating Mouse Embryo Gene Expression Patterns JF - Bioinformatics Y1 - 2011 A1 - Liangxiu Han A1 - van Hemert, Jano A1 - Richard Baldock KW - classification KW - e-Science AB - Motivation: Deciphering the regulatory and developmental mechanisms for multicellular organisms requires detailed knowledge of gene interactions and gene expressions. The availability of large datasets with both spatial and ontological annotation of the spatio-temporal patterns of gene expression in the mouse embryo provides a powerful resource to discover the biological function of embryo organisation. Ontological annotation of gene expressions consists of labelling images with terms from the anatomy ontology for mouse development. If the spatial genes of an anatomical component are expressed in an image, the image is then tagged with a term of that anatomical component. The current annotation is done manually by domain experts, which is both time-consuming and costly. In addition, the level of detail is variable and, inevitably, errors arise from the tedious nature of the task. In this paper, we present a new method to automatically identify and annotate gene expression patterns in the mouse embryo with anatomical terms. Results: The method takes images from in situ hybridisation studies and the ontology for the developing mouse embryo, and then combines machine learning and image processing techniques to produce classifiers that automatically identify and annotate gene expression patterns in these images. We evaluate our method on image data from the EURExpress-II study where we use it to automatically classify nine anatomical terms: humerus, handplate, fibula, tibia, femur, ribs, petrous part, scapula and head mesenchyme. The accuracy of our method lies between 70% and 80% with few exceptions. Conclusions: We show that other known methods have lower classification performance than ours.
We have investigated the images misclassified by our method and found several cases where the original annotation was not correct. This shows our method is robust against this kind of noise. Availability: The annotation result and the experimental dataset in the paper can be freely accessed at http://www2.docm.mmu.ac.uk/STAFF/L.Han/geneannotation/ Contact: l.han@mmu.ac.uk, j.vanhemert@ed.ac.uk and Richard.Baldock@hgu.mrc.ac.uk VL - 27 UR - http://bioinformatics.oxfordjournals.org/content/early/2011/02/25/bioinformatics.btr105.abstract ER - TY - JOUR T1 - A Generic Parallel Processing Model for Facilitating Data Mining and Integration JF - Parallel Computing Y1 - 2011 A1 - Liangxiu Han A1 - Chee Sun Liew A1 - van Hemert, Jano A1 - Malcolm Atkinson KW - Data Mining and Data Integration (DMI) KW - Life Sciences KW - OGSA-DAI KW - Parallelism KW - Pipeline Streaming KW - workflow AB - To facilitate Data Mining and Integration (DMI) processes in a generic way, we investigate a parallel pipeline streaming model. We model a DMI task as a streaming data-flow graph: a directed acyclic graph (DAG) of Processing Elements (PEs). The composition mechanism links PEs via data streams, which may be in memory, buffered via disks, or inter-computer data-flows. This makes it possible to build arbitrary DAGs with pipelining and both data and task parallelism, which provides room for performance enhancement. We have applied this approach to a real DMI case in the Life Sciences and implemented a prototype. To demonstrate the feasibility of the modelled DMI task and assess the efficiency of the prototype, we have also built a performance evaluation model. The experimental evaluation results show that linear speedup is achieved as the number of distributed computing nodes increases in this case study.
PB - Elsevier VL - 37 IS - 3 ER - TY - CONF T1 - RapidBrain: Developing a Portal for Brain Research Imaging T2 - All Hands Meeting 2011, York Y1 - 2011 A1 - Kenton D'Mellow A1 - Rodríguez, David A1 - Carpenter, Trevor A1 - Jos Koetsier A1 - Dominic Job A1 - van Hemert, Jano A1 - Wardlaw, Joanna A1 - Fan Zhu AB - Brain imaging researchers execute complex multistep workflows in their computational analysis. Those workflows often include applications that have very different user interfaces and sometimes use different data formats. A good example is the brain perfusion quantification workflow used at the BRIC (Brain Research Imaging Centre) in Edinburgh. Rapid provides an easy method for creating portlets for computational jobs, and at the same time it is extensible. We have exploited this extensibility with additions that stretch the functionality beyond the original limits. These changes can be used by other projects to create their own portals, but it should be noted that the development of such portals involves greater effort than the regular use of Rapid for creating portlets. In our case it has been used to provide a user-friendly interface for perfusion analysis that covers from volume JF - All Hands Meeting 2011, York CY - York ER - TY - JOUR T1 - Validation and mismatch repair of workflows through typed data streams JF - Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences Y1 - 2011 A1 - Yaikhom, Gagarine A1 - Malcolm Atkinson A1 - van Hemert, Jano A1 - Oscar Corcho A1 - Krause, Amy AB - The type system of a language guarantees that all of the operations on a set of data comply with the rules and conditions set by the language. While language typing is a fundamental requirement for any programming language, the typing of data that flow between processing elements within a workflow is currently being treated as optional.
In this paper, we introduce a three-level type system for typing workflow data streams. These types are part of the Data-Intensive Systems Process Engineering Language (DISPEL), which empowers users to validate the connections inside a workflow composition and to apply appropriate data type conversions when necessary. Furthermore, this system enables the enactment engine to carry out type-directed workflow optimizations. VL - 369 IS - 1949 ER - TY - RPRT T1 - Data-Intensive Research Workshop (15-19 March 2010) Report Y1 - 2010 A1 - Malcolm Atkinson A1 - Roure, David De A1 - van Hemert, Jano A1 - Shantenu Jha A1 - Ruth McNally A1 - Robert Mann A1 - Stratis Viglas A1 - Chris Williams KW - Data-intensive Computing KW - Data-Intensive Machines KW - Machine Learning KW - Scientific Databases AB - We met at the National e-Science Institute in Edinburgh on 15-19 March 2010 to develop our understanding of DIR. Approximately 100 participants (see Appendix A) worked together to develop their own understanding, and we are offering this report as the first step in communicating that to a wider community. We present this in terms of our developing understanding of "What is DIR?" and "Why is it important?". We then review the status of the field, report what the workshop achieved, and identify the questions that remain open.
JF - National e-Science Centre PB - Data-Intensive Research Group, School of Informatics, University of Edinburgh CY - Edinburgh ER - TY - Generic T1 - Federated Enactment of Workflow Patterns T2 - Lecture Notes in Computer Science Y1 - 2010 A1 - Yaikhom, Gagarine A1 - Liew, Chee A1 - Liangxiu Han A1 - van Hemert, Jano A1 - Malcolm Atkinson A1 - Krause, Amy ED - D’Ambra, Pasqua ED - Guarracino, Mario ED - Talia, Domenico AB - In this paper we address two research questions concerning workflows: (1) how do we abstract and catalogue recurring workflow patterns, and (2) how do we facilitate optimisation of the mapping from workflow patterns to actual resources at runtime? Our aim here is to explore techniques that are applicable to large-scale workflow compositions, where the resources could change dynamically during the lifetime of an application. We achieve this by introducing a registry-based mechanism where pattern abstractions are catalogued and stored. In conjunction with an enactment engine, which communicates with this registry, concrete computational implementations and resources are assigned to these patterns, conditional on the execution parameters. Using a data mining application from the life sciences, we demonstrate this new approach. JF - Lecture Notes in Computer Science PB - Springer Berlin / Heidelberg VL - 6271 UR - http://dx.doi.org/10.1007/978-3-642-15277-1_31 N1 - 10.1007/978-3-642-15277-1_31 ER - TY - CONF T1 - TOPP goes Rapid T2 - Cluster Computing and the Grid, IEEE International Symposium on Y1 - 2010 A1 - Gesing, Sandra A1 - van Hemert, Jano A1 - Jos Koetsier A1 - Bertsch, Andreas A1 - Kohlbacher, Oliver AB - Proteomics, the study of all the proteins contained in a particular sample, e.g., a cell, is a key technology in current biomedical research. The complexity and volume of proteomics data sets produced by mass spectrometric methods clearly suggest the use of grid-based high-performance computing for analysis.
TOPP and OpenMS are open-source packages for proteomics data analysis; however, they do not provide support for Grid computing. In this work we present a portal interface for high-throughput data analysis with TOPP. The portal is based on Rapid, a tool for efficiently generating standardized portlets for a wide range of applications. The web-based interface allows the creation and editing of user-defined pipelines and their execution and monitoring on a Grid infrastructure. The portal also supports several file transfer protocols for data staging. It thus provides a simple and complete solution to high-throughput proteomics data analysis for inexperienced users through a convenient portal interface. JF - Cluster Computing and the Grid, IEEE International Symposium on PB - IEEE Computer Society CY - Los Alamitos, CA, USA SN - 978-0-7695-4039-9 ER - TY - CONF T1 - Towards Optimising Distributed Data Streaming Graphs using Parallel Streams T2 - Data Intensive Distributed Computing (DIDC'10), in conjunction with the 19th International Symposium on High Performance Distributed Computing Y1 - 2010 A1 - Chee Sun Liew A1 - Atkinson, Malcolm P. A1 - van Hemert, Jano A1 - Liangxiu Han KW - Data-intensive Computing KW - Distributed Computing KW - Optimisation KW - Parallel Stream KW - Scientific Workflows AB - Modern scientific collaborations have opened up the opportunity of solving complex problems that involve multi- disciplinary expertise and large-scale computational experiments. These experiments usually involve large amounts of data that are located in distributed data repositories running various software systems, and managed by different organisations. A common strategy to make the experiments more manageable is executing the processing steps as a workflow. In this paper, we look into the implementation of fine-grained data-flow between computational elements in a scientific workflow as streams. 
We model the distributed computation as a directed acyclic graph where the nodes represent the processing elements that incrementally implement specific subtasks. The processing elements are connected in a pipelined streaming manner, which allows task executions to overlap. We further optimise the execution by splitting pipelines across processes and by introducing extra parallel streams. We identify performance metrics and design a measurement tool to evaluate each enactment. We conducted experiments to evaluate our optimisation strategies with a real-world problem in the Life Sciences: EURExpress-II. The paper presents our distributed data-handling model, the optimisation and instrumentation strategies and the evaluation experiments. We demonstrate linear speedup and argue that this use of data-streaming to enable both overlapped pipeline and parallelised enactment is a generally applicable optimisation strategy. JF - Data Intensive Distributed Computing (DIDC'10), in conjunction with the 19th International Symposium on High Performance Distributed Computing PB - ACM CY - Chicago, Illinois UR - http://www.cct.lsu.edu/~kosar/didc10/index.php ER - TY - CONF T1 - Understanding TSP Difficulty by Learning from Evolved Instances T2 - Lecture Notes in Computer Science Y1 - 2010 A1 - Smith-Miles, Kate A1 - van Hemert, Jano A1 - Lim, Xin ED - Blum, Christian ED - Battiti, Roberto AB - Whether the goal is performance prediction, or insights into the relationships between algorithm performance and instance characteristics, a comprehensive set of meta-data from which relationships can be learned is needed. This paper provides a methodology to determine if the meta-data is sufficient, and demonstrates the critical role played by instance generation methods. Instances of the Travelling Salesman Problem (TSP) are evolved using an evolutionary algorithm to produce distinct classes of instances that are intentionally easy or hard for certain algorithms.
A comprehensive set of features is used to characterise instances of the TSP, and the impact of these features on difficulty for each algorithm is analysed. Finally, performance predictions are achieved with high accuracy on unseen instances for predicting search effort as well as identifying the algorithm likely to perform best. JF - Lecture Notes in Computer Science PB - Springer Berlin / Heidelberg VL - 6073 UR - http://dx.doi.org/10.1007/978-3-642-13800-3_29 N1 - 10.1007/978-3-642-13800-3_29 ER - TY - RPRT T1 - ADMIRE D1.5 – Report defining an iteration of the model and language: PM3 and DL3 Y1 - 2009 A1 - Peter Brezany A1 - Ivan Janciak A1 - Alexander Woehrer A1 - Carlos Buil Aranda A1 - Malcolm Atkinson A1 - van Hemert, Jano AB - This document is the third deliverable to report on the progress of the model, language and ontology research conducted within Workpackage 1 of the ADMIRE project. Significant progress has been made on each of the above areas. The new results that we achieved are recorded against the targets defined for project month 18 and are reported in four sections of this document PB - ADMIRE project UR - http://www.admire-project.eu/docs/ADMIRE-D1.5-model-language-ontology.pdf ER - TY - CONF T1 - Automating Gene Expression Annotation for Mouse Embryo T2 - Lecture Notes in Computer Science (Advanced Data Mining and Applications, 5th International Conference) Y1 - 2009 A1 - Liangxiu Han A1 - van Hemert, Jano A1 - Richard Baldock A1 - Atkinson, Malcolm P. ED - Ronghuai Huang ED - Qiang Yang ED - Jian Pei ED - et al JF - Lecture Notes in Computer Science (Advanced Data Mining and Applications, 5th International Conference) PB - Springer VL - LNAI 5678 ER - TY - CONF T1 - A Distributed Architecture for Data Mining and Integration T2 - Data-Aware Distributed Computing (DADC'09), in conjunction with the 18th International Symposium on High Performance Distributed Computing Y1 - 2009 A1 - Atkinson, Malcolm P. 
A1 - van Hemert, Jano A1 - Liangxiu Han A1 - Ally Hume A1 - Chee Sun Liew AB - This paper presents the rationale for a new architecture to support a significant increase in the scale of data integration and data mining. It proposes the composition into one framework of (1) data mining and (2) data access and integration. We name the combined activity “DMI”. It supports enactment of DMI processes across heterogeneous and distributed data resources and data mining services. It posits that a useful division can be made between the facilities established to support the definition of DMI processes and the computational infrastructure provided to enact DMI processes. Communication between those two divisions is restricted to requests submitted to gateway services in a canonical DMI language. Larger-scale processes are enabled by incremental refinement of DMI-process definitions, often by recomposition of lower-level definitions. Autonomous types and descriptions will support detection of inconsistencies and semi-automatic insertion of adaptations. These architectural ideas are being evaluated in a feasibility study that involves an application scenario and representatives of the community. JF - Data-Aware Distributed Computing (DADC'09), in conjunction with the 18th International Symposium on High Performance Distributed Computing PB - ACM ER - TY - CONF T1 - A model of social collaboration in Molecular Biology knowledge bases T2 - Proceedings of the 6th Conference of the European Social Simulation Association (ESSA'09) Y1 - 2009 A1 - De Ferrari, Luna A1 - Stuart Aitken A1 - van Hemert, Jano A1 - Goryanin, Igor AB - Manual annotation of biological data cannot keep up with data production. Open annotation models using wikis have been proposed to address this problem. In this empirical study we analyse 36 years of knowledge collection by 738 authors in two Molecular Biology wikis (EcoliWiki and WikiPathways) and two knowledge bases (OMIM and Reactome).
We first investigate authorship metrics (authors per entry and edits per author), which are power-law distributed in Wikipedia, and find that they are heavy-tailed in these four systems too. We also find surprising similarities between the open systems (editing open to everyone) and the closed systems (expert curators only). Secondly, to discriminate between driving forces in the measured distributions, we simulate the curation process and find that knowledge overlap among authors can drive the number of authors per entry, while the time users spend on the knowledge base can drive the number of contributions per author. JF - Proceedings of the 6th Conference of the European Social Simulation Association (ESSA'09) PB - European Social Simulation Association ER - TY - CONF T1 - Using architectural simulation models to aid the design of data intensive application T2 - The Third International Conference on Advanced Engineering Computing and Applications in Sciences (ADVCOMP 2009) Y1 - 2009 A1 - Javier Fernández A1 - Liangxiu Han A1 - Alberto Nuñez A1 - Jesus Carretero A1 - van Hemert, Jano JF - The Third International Conference on Advanced Engineering Computing and Applications in Sciences (ADVCOMP 2009) PB - IEEE Computer Society CY - Sliema, Malta ER - TY - CONF T1 - Eliminating the Middle Man: Peer-to-Peer Dataflow T2 - HPDC '08: Proceedings of the 17th International Symposium on High Performance Distributed Computing Y1 - 2008 A1 - Barker, Adam A1 - Weissman, Jon B.
A1 - van Hemert, Jano KW - grid computing KW - workflow JF - HPDC '08: Proceedings of the 17th International Symposium on High Performance Distributed Computing PB - ACM ER - TY - Generic T1 - European Graduate Student Workshop on Evolutionary Computation Y1 - 2008 A1 - Di Chio, Cecilia A1 - Giacobini, Mario A1 - van Hemert, Jano ED - Di Chio, Cecilia ED - Giacobini, Mario ED - van Hemert, Jano KW - evolutionary computation AB - Evolutionary computation involves the study of problem-solving and optimization techniques inspired by principles of evolution and genetics. As with any other scientific field, its success relies on the continuity provided by new researchers joining the field to help it progress. One of the most important sources of new researchers is the next generation of PhD students actively studying a topic relevant to this field. It is from this observation that the idea arose of providing a platform exclusively for PhD students. ER - TY - Generic T1 - Evolutionary Computation in Combinatorial Optimization, 8th European Conference T2 - Lecture Notes in Computer Science Y1 - 2008 A1 - van Hemert, Jano A1 - Cotta, Carlos ED - van Hemert, Jano ED - Cotta, Carlos KW - evolutionary computation AB - Metaheuristics have been shown to be effective for difficult combinatorial optimization problems appearing in various industrial, economical, and scientific domains. Prominent examples of metaheuristics are evolutionary algorithms, tabu search, simulated annealing, scatter search, memetic algorithms, variable neighborhood search, iterated local search, greedy randomized adaptive search procedures, ant colony optimization and estimation of distribution algorithms. Problems solved successfully include scheduling, timetabling, network design, transportation and distribution, vehicle routing, the travelling salesman problem, packing and cutting, satisfiability and general mixed integer programming. EvoCOP began in 2001 and has been held annually since then.
It was the first event specifically dedicated to the application of evolutionary computation and related methods to combinatorial optimization problems. Originally held as a workshop, EvoCOP became a conference in 2004. The events gave researchers an excellent opportunity to present their latest research and to discuss current developments and applications. Following the general trend of hybrid metaheuristics and diminishing boundaries between the different classes of metaheuristics, EvoCOP has broadened its scope in recent years and invited submissions on any kind of metaheuristic for combinatorial optimization. JF - Lecture Notes in Computer Science PB - Springer VL - LNCS 4972 ER - TY - CONF T1 - Graph Colouring Heuristics Guided by Higher Order Graph Properties T2 - Lecture Notes in Computer Science Y1 - 2008 A1 - Juhos, István A1 - van Hemert, Jano ED - van Hemert, Jano ED - Cotta, Carlos KW - evolutionary computation KW - graph colouring AB - Graph vertex colouring can be defined in such a way that colour assignments are substituted by vertex contractions. We present various hyper-graph representations for the graph colouring problem, all based on the approach where vertices are merged into groups. In this paper, we show this provides a uniform and compact way to define algorithms, of both a complete and a heuristic nature. Moreover, the representation provides information useful to guide algorithms during their search. We focus on the quality of solutions obtained by graph colouring heuristics that make use of higher-order properties derived during the search. An evolutionary algorithm is used to search permutations of possible merge orderings.
A1 - van Hemert, Jano KW - grid computing KW - workflow JF - The 8th IEEE International Symposium on Cluster Computing and the Grid (CCGrid) PB - IEEE Computer Society ER - TY - BOOK T1 - Recent Advances in Evolutionary Computation for Combinatorial Optimization T2 - Studies in Computational Intelligence Y1 - 2008 A1 - Cotta, Carlos A1 - van Hemert, Jano AB - Combinatorial optimisation is a ubiquitous discipline whose usefulness spans vast application domains. The intrinsic complexity of most combinatorial optimisation problems makes classical methods unaffordable in many cases. Acquiring practical solutions to these problems requires the use of metaheuristic approaches that trade completeness for pragmatic effectiveness. Such approaches are able to provide optimal or quasi-optimal solutions to a plethora of difficult combinatorial optimisation problems. The application of metaheuristics to combinatorial optimisation is an active field in which new theoretical developments, new algorithmic models, and new application areas are continuously emerging. This volume presents recent advances in the area of metaheuristic combinatorial optimisation, with a special focus on evolutionary computation methods. Moreover, it addresses local search methods and hybrid approaches. In this sense, the book includes cutting-edge theoretical, methodological, algorithmic and applied developments in the field, from respected experts and with a sound perspective. JF - Studies in Computational Intelligence PB - Springer VL - 153 SN - 978-3-540-70806-3 UR - http://www.springer.com/engineering/book/978-3-540-70806-3 ER - TY - CONF T1 - Scientific Workflow: A Survey and Research Directions T2 - Lecture Notes in Computer Science Y1 - 2008 A1 - Barker, Adam A1 - van Hemert, Jano KW - e-Science KW - workflow AB - Workflow technologies are emerging as the dominant approach to coordinate groups of distributed services.
However, with a space filled with competing specifications, standards and frameworks from multiple domains, choosing the right tool for the job is not always a straightforward task. Researchers are often unaware of the range of technology that already exists and focus on implementing yet another proprietary workflow system. As an antidote to this common problem, this paper presents a concise survey of existing workflow technology from the business and scientific domains and makes a number of key suggestions towards the future development of scientific workflow systems. JF - Lecture Notes in Computer Science PB - Springer VL - 4967 UR - http://dx.doi.org/10.1007/978-3-540-68111-3_78 ER - TY - CONF T1 - WikiSim: simulating knowledge collection and curation in structured wikis. T2 - Proceedings of the 2008 International Symposium on Wikis in Porto, Portugal Y1 - 2008 A1 - De Ferrari, Luna A1 - Stuart Aitken A1 - van Hemert, Jano A1 - Goryanin, Igor AB - The aim of this work is to model quantitatively one of the main properties of wikis: how high-quality knowledge can emerge from the individual work of independent volunteers. The approach chosen is to simulate knowledge collection and curation in wikis. The basic model represents the wiki as a set of true/false values, added and edited at each simulation round by software agents (users) following a fixed set of rules. The resulting WikiSim simulations already manage to reach distributions of edits and user contributions very close to those reported for Wikipedia. WikiSim can also span conditions not easily measurable in real-life wikis, such as the impact of various amounts of user mistakes. WikiSim could be extended to model wiki software features, such as discussion pages and watch lists, while monitoring the impact they have on user actions and consensus, and their effect on knowledge quality. The method could also be used to compare wikis with other curation scenarios based on centralised editing by experts.
The future challenges for WikiSim will be to find appropriate ways to evaluate and validate the models and to keep them simple while still capturing relevant properties of wiki systems. JF - Proceedings of the 2008 International Symposium on Wikis in Porto, Portugal PB - ACM CY - New York, NY, USA ER - TY - Generic T1 - European Graduate Student Workshop on Evolutionary Computation Y1 - 2007 A1 - Giacobini, Mario A1 - van Hemert, Jano ED - Mario Giacobini ED - van Hemert, Jano KW - evolutionary computation AB - Evolutionary computation involves the study of problem-solving and optimization techniques inspired by principles of evolution and genetics. As with any other scientific field, its success relies on the continuity provided by new researchers joining the field to help it progress. One of the most important sources of new researchers is the next generation of PhD students actively studying a topic relevant to this field. It is from this observation that the idea arose of providing a platform exclusively for PhD students. CY - Valencia, Spain ER - TY - Generic T1 - Evolutionary Computation in Combinatorial Optimization, 7th European Conference T2 - Lecture Notes in Computer Science Y1 - 2007 A1 - Cotta, Carlos A1 - van Hemert, Jano ED - Carlos Cotta ED - van Hemert, Jano KW - evolutionary computation AB - Metaheuristics have often been shown to be effective for difficult combinatorial optimization problems appearing in various industrial, economical, and scientific domains. Prominent examples of metaheuristics are evolutionary algorithms, simulated annealing, tabu search, scatter search, memetic algorithms, variable neighborhood search, iterated local search, greedy randomized adaptive search procedures, estimation of distribution algorithms, and ant colony optimization.
Successfully solved problems include scheduling, timetabling, network design, transportation and distribution, vehicle routing, the traveling salesman problem, satisfiability, packing and cutting, and general mixed integer programming. EvoCOP began in 2001 and has been held annually since then. It was the first event specifically dedicated to the application of evolutionary computation and related methods to combinatorial optimization problems. Originally held as a workshop, EvoCOP became a conference in 2004. The events gave researchers an excellent opportunity to present their latest research and to discuss current developments and applications, as well as providing for improved interaction between members of this scientific community. Following the general trend of hybrid metaheuristics and diminishing boundaries between the different classes of metaheuristics, EvoCOP has broadened its scope in recent years and invited submissions on any kind of metaheuristic for combinatorial optimization. JF - Lecture Notes in Computer Science PB - Springer VL - LNCS 4446 UR - http://springerlink.metapress.com/content/105633/ ER - TY - Generic T1 - European Graduate Student Workshop on Evolutionary Computation Y1 - 2006 A1 - Giacobini, Mario A1 - van Hemert, Jano ED - Giacobini, Mario ED - van Hemert, Jano KW - evolutionary computation AB - Evolutionary computation involves the study of problem-solving and optimization techniques inspired by principles of evolution and genetics. As with any other scientific field, its success relies on the continuity provided by new researchers joining the field to help it progress. One of the most important sources of new researchers is the next generation of PhD students actively studying a topic relevant to this field. It is from this observation that the idea arose of providing a platform exclusively for PhD students. CY - Budapest, Hungary ER -