TY - JOUR T1 - Precise montaging and metric quantification of retinal surface area from ultra-widefield fundus photography and fluorescein angiography JF - Ophthalmic Surg Lasers Imaging Retina Y1 - 2014 A1 - Croft, D.E. A1 - van Hemert, J. A1 - Wykoff, C.C. A1 - Clifton, D. A1 - Verhoek, M. A1 - Fleming, A. A1 - Brown, D.M. KW - medical KW - retinal imaging AB - BACKGROUND AND OBJECTIVE: Accurate quantification of retinal surface area from ultra-widefield (UWF) images is challenging due to warping produced when the retina is projected onto a two-dimensional plane for analysis. By accounting for this, the authors sought to precisely montage and accurately quantify retinal surface area in square millimeters. PATIENTS AND METHODS: Montages were created using Optos 200Tx (Optos, Dunfermline, U.K.) images taken at different gaze angles. A transformation projected the images to their correct location on a three-dimensional model. Area was quantified with spherical trigonometry. Warping, precision, and accuracy were assessed. RESULTS: Uncorrected, posterior pixels represented up to 79% greater surface area than peripheral pixels. Assessing precision, a standard region was quantified across 10 montages of the same eye (RSD: 0.7%; mean: 408.97 mm(2); range: 405.34-413.87 mm(2)). Assessing accuracy, 50 patients' disc areas were quantified (mean: 2.21 mm(2); SE: 0.06 mm(2)), and the results fell within the normative range. CONCLUSION: By accounting for warping inherent in UWF images, precise montaging and accurate quantification of retinal surface area in square millimeters were achieved. [Ophthalmic Surg Lasers Imaging Retina. 2014;45:312-317.]. VL - 45 ER - TY - CHAP T1 - Platforms for Data-Intensive Analysis T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Snelling, David ED - Malcolm Atkinson ED - Baxter, Robert M. 
ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Intensive Engineering KW - Data-Intensive Systems KW - Dispel KW - Distributed Systems AB - Part III: "Data-intensive engineering", is targeted at technical experts who will develop complex applications, new components, or data-intensive platforms. The techniques introduced may be applied very widely; for example, to any data-intensive distributed application, such as index generation, image processing, sequence comparison, text analysis, and sensor-stream monitoring. The challenges, methods, and implementation requirements are illustrated by making extensive use of DISPEL. Chapter 9: "Platforms for data-intensive analysis", gives a reprise of data-intensive architectures, examines the business case for investing in them, and introduces the stages of data-intensive workflow enactment. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CHAP T1 - Preface T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Malcolm Atkinson ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Big Data, Data-intensive Computing, Knowledge Discovery AB - Who should read the book and why. The structure and conventions used. Suggested reading paths for different categories of reader. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. 
ER - TY - CHAP T1 - Problem Solving in Data-Intensive Knowledge Discovery T2 - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business Y1 - 2013 A1 - Oscar Corcho A1 - van Hemert, Jano ED - Malcolm Atkinson ED - Rob Baxter ED - Peter Brezany ED - Oscar Corcho ED - Michelle Galea ED - Parsons, Mark ED - Snelling, David ED - van Hemert, Jano KW - Data-Analysis Experts KW - Data-Intensive Analysis KW - Design Patterns for Knowledge Discovery KW - Knowledge Discovery AB - Chapter 6: "Problem solving in data-intensive knowledge discovery", on the basis of the previous scenarios, this chapter provides an overview of effective strategies in knowledge discovery, highlighting common problem-solving methods that apply in conventional contexts, and focusing on the similarities and differences of these methods. JF - THE DATA BONANZA: Improving Knowledge Discovery for Science, Engineering and Business PB - John Wiley & Sons Ltd. ER - TY - CONF T1 - Provenance for seismological processing pipelines in a distributed streaming workflow T2 - EDBT/ICDT Workshops Y1 - 2013 A1 - Alessandro Spinuso A1 - James Cheney A1 - Malcolm Atkinson JF - EDBT/ICDT Workshops ER - TY - JOUR T1 - Parallel perfusion imaging processing using GPGPU JF - Computer Methods and Programs in Biomedicine Y1 - 2012 A1 - Fan Zhu A1 - Rodríguez, David A1 - Carpenter, Trevor A1 - Malcolm Atkinson A1 - Wardlaw, Joanna KW - Deconvolution KW - GPGPU KW - Local AIF KW - Parallelization KW - Perfusion Imaging AB - Background and purpose The objective of brain perfusion quantification is to generate parametric maps of relevant hemodynamic quantities such as cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) that can be used in diagnosis of acute stroke. These calculations involve deconvolution operations that can be very computationally expensive when using local Arterial Input Functions (AIF). 
As time is vitally important in the case of acute stroke, reducing the analysis time will reduce the number of brain cells damaged and increase the potential for recovery. Methods GPUs originated as graphics generation dedicated co-processors, but modern GPUs have evolved into more general processors capable of executing scientific computations. They provide a highly parallel computing environment due to their large number of computing cores and constitute an affordable high-performance computing method. In this paper, we present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPU (General Purpose Graphics Processor Units) using the CUDA programming model. We present the serial and parallel implementations of such algorithms and the evaluation of the performance gains using GPUs. Results Our method achieved speedups of 5.56 for CT images and 3.75 for MR images. Conclusions Using GPGPU is a desirable approach in perfusion imaging analysis: it does not harm the quality of cerebral hemodynamic maps but delivers results faster than traditional computation. UR - http://www.sciencedirect.com/science/article/pii/S0169260712001587 ER - TY - BOOK T1 - (PhD Thesis) Brain Perfusion Imaging - Performance and Accuracy Y1 - 2012 A1 - Fan Zhu AB - Title: Brain Perfusion Imaging - Performance and Accuracy Abstract: Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. The purpose of my PhD research is to develop novel methodologies for improving the efficiency and quality of brain perfusion-imaging analysis so that clinical decisions can be made more accurately and in a shorter time. This thesis consists of three parts: 1. My research investigates the possibilities that parallel computing brings to make perfusion-imaging analysis faster in order to deliver results that are used in stroke diagnosis earlier.
Brain perfusion analysis using the local Arterial Input Functions (AIF) technique takes a long time to execute due to its heavy computational load. As time is vitally important in the case of acute stroke, reducing analysis time and therefore diagnosis time can reduce the number of brain cells damaged and improve the chances for patient recovery. We present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPU (General Purpose computing on Graphics Processing Units) using the CUDA programming model. Our method aims to accelerate the process without any quality loss. 2. Specific features of perfusion source images are also used to reduce noise impact, which consequently improves the accuracy of hemodynamic maps. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D as it also contains temporal information. Our approach using Gaussian process regression (GPR) makes use of the temporal information in the perfusion source images to reduce the noise level. Over the entire image, our noise reduction method based on Gaussian process regression gains a 99% contrast-to-noise ratio improvement over the raw image and also improves the quality of hemodynamic maps, allowing a better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps identify key parameters from tissue time-concentration curves and reduces the oscillations in the curves. Furthermore, the results show that GPR is superior to the alternative techniques compared in this study. 3. My research also explores automatic segmentation of perfusion images into potentially healthy areas and lesion areas which can be used as additional information that assists in clinical diagnosis.
Since perfusion source images contain more information than hemodynamic maps, good utilisation of source images leads to better understanding than the hemodynamic maps alone. Correlation coefficient tests are used to measure the similarities between the expected tissue time-concentration curves (from reference tissue) and the measured time-concentration curves (from target tissue). This information is then used to distinguish tissues at risk and dead tissues from healthy tissues. A correlation coefficient based signal analysis method that directly spots suspected lesion areas from perfusion source images is presented. Our method delivers a clear automatic segmentation of healthy tissue, tissue at risk and dead tissue. From our segmentation maps, it is easier to identify lesion boundaries than using traditional hemodynamic maps. ER - TY - JOUR T1 - Principles of Provenance (Dagstuhl Seminar 12091) JF - Dagstuhl Reports Y1 - 2012 A1 - James Cheney A1 - Anthony Finkelstein A1 - Bertram Ludäscher A1 - Stijn Vansummeren VL - 2 ER - TY - CONF T1 - A Parallel Deconvolution Algorithm in Perfusion Imaging T2 - Healthcare Informatics, Imaging, and Systems Biology (HISB) Y1 - 2011 A1 - Zhu, Fan A1 - Rodríguez, David A1 - Carpenter, Trevor A1 - Malcolm Atkinson A1 - Wardlaw, Joanna KW - Deconvolution KW - GPGPU KW - Parallelization KW - Perfusion Imaging AB - In this paper, we present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPU (General Purpose Graphics Processor Units) using the CUDA programming model. GPUs originated as graphics generation dedicated co-processors, but modern GPUs have evolved into more general processors capable of executing scientific computations. They provide a highly parallel computing environment due to their large number of computing cores and constitute an affordable high-performance computing method.
The objective of brain perfusion quantification is to generate parametric maps of relevant haemodynamic quantities such as Cerebral Blood Flow (CBF), Cerebral Blood Volume (CBV) and Mean Transit Time (MTT) that can be used in diagnosis of conditions such as stroke or brain tumors. These calculations involve deconvolution operations that can be computationally very expensive when local Arterial Input Functions (AIF) are used. We present the serial and parallel implementations of this algorithm and the evaluation of the performance gains using GPUs. JF - Healthcare Informatics, Imaging, and Systems Biology (HISB) CY - San Jose, California SN - 978-1-4577-0325-6 UR - http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6061411&tag=1 ER - TY - JOUR T1 - Performance database: capturing data for optimizing distributed streaming workflows JF - Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences Y1 - 2011 A1 - Chee Sun Liew A1 - Atkinson, Malcolm P. A1 - Radoslaw Ostrowski A1 - Murray Cole A1 - van Hemert, Jano I. A1 - Liangxiu Han KW - measurement framework KW - performance data KW - streaming workflows AB - The performance database (PDB) stores performance-related data gathered during workflow enactment. We argue that by carefully understanding and manipulating this data, we can improve efficiency when enacting workflows. This paper describes the rationale behind the PDB and proposes a systematic way to implement it. The prototype is built as part of the Advanced Data Mining and Integration Research for Europe project. We use workflows from real-world experiments to demonstrate the usage of the PDB.
VL - 369 IS - 1949 ER - TY - Generic T1 - Probing Attacks on Multi-agent Systems using Electronic Institutions T2 - Declarative Agent Languages and Technologies Workshop (DALT), AAMAS 2011 Y1 - 2011 A1 - Shahriar Bijani A1 - David Robertson A1 - David Aspinall JF - Declarative Agent Languages and Technologies Workshop (DALT), AAMAS 2011 ER - TY - JOUR T1 - The performance model of dynamic Virtual Organization (VO) formations within grid computing context JF - Chaos, Solitons & Fractals Y1 - 2009 A1 - Liangxiu Han KW - complex network KW - graph theory KW - grid computing KW - virtual organization formation PB - Elsevier Science VL - 40 IS - 4 N1 - In press ER - TY - CONF T1 - Portals for Life Sciences—a Brief Introduction T2 - Proceedings of the IWPLS09 International Workshop on Portals for Life Sciences Y1 - 2009 A1 - Gesing, Sandra A1 - Kohlbacher, O. A1 - van Hemert, J. I. AB - The topic 'Portals for Life Sciences' covers various research fields: on the one hand, many different topics from the life sciences, e.g. mass spectrometry; on the other hand, portal technologies and different aspects of computer science, such as the usability of user interfaces and the security of systems. The main aim of portals is to simplify the user's interaction with the computational resources connected to a supported application domain. JF - Proceedings of the IWPLS09 International Workshop on Portals for Life Sciences T3 - CEUR Workshop Proceedings UR - http://ceur-ws.org/Vol-513/paper01.pdf ER - TY - JOUR T1 - Preface. Crossing boundaries: computational science, e-Science and global e-Infrastructure JF - Philosophical Transactions of the Royal Society Series A Y1 - 2009 A1 - Coveney, P. V. A1 - Atkinson, M. P. PB - Royal Society Publishing VL - 367 ER - TY - Generic T1 - Proceedings of the 1st International Workshop on Portals for Life Sciences T2 - IWPLS09 International Workshop on Portals for Life Sciences Y1 - 2009 A1 - Gesing, Sandra A1 - van Hemert, Jano I.
JF - IWPLS09 International Workshop on Portals for Life Sciences T3 - CEUR Workshop Proceedings CY - e-Science Institute, Edinburgh, UK UR - http://ceur-ws.org/Vol-513 ER - TY - CONF T1 - Profiling OGSA-DAI Performance for Common Use Patterns T2 - UK e-Science All Hands Meeting Y1 - 2006 A1 - Dobrzelecki, B. A1 - Antonioletti, M. A1 - Schopf, J. M. A1 - Hume, A. C. A1 - Atkinson, M. A1 - Chue Hong, N. P. A1 - Jackson, M. A1 - Karasavvas, K. A1 - Krause, A. A1 - Parsons, M. A1 - Sugden, T. A1 - Theocharopoulos, E. JF - UK e-Science All Hands Meeting ER - TY - CONF T1 - Property analysis of symmetric travelling salesman problem instances acquired through evolution T2 - Springer Lecture Notes on Computer Science Y1 - 2005 A1 - van Hemert, J. I. ED - G. Raidl ED - J. Gottlieb KW - problem evolving KW - travelling salesman AB - We show how an evolutionary algorithm can successfully be used to evolve a set of difficult-to-solve symmetric travelling salesman problem instances for two variants of the Lin-Kernighan algorithm. We then analyse the instances in those sets to guide us towards inferring general knowledge about the efficiency of the two variants in relation to structural properties of the symmetric travelling salesman problem. JF - Springer Lecture Notes on Computer Science PB - Springer-Verlag, Berlin ER - TY - CONF T1 - Phase transition properties of clustered travelling salesman problem instances generated with evolutionary computation T2 - LNCS Y1 - 2004 A1 - van Hemert, J. I. A1 - Urquhart, N. B. ED - Xin Yao ED - Edmund Burke ED - Jose A. Lozano ED - Jim Smith ED - Juan J. Merelo-Guerv\'os ED - John A. Bullinaria ED - Jonathan Rowe ED - Peter Ti\v{n}o ED - Ata Kab\'an ED - Hans-Paul Schwefel KW - evolutionary computation KW - problem evolving KW - travelling salesman AB - This paper introduces a generator that creates problem instances for the Euclidean symmetric travelling salesman problem.
To fit real-world problems, we look at maps consisting of clustered nodes. Uniform random sampling methods do not result in maps where the nodes are spread out to form identifiable clusters. To improve upon this, we propose an evolutionary algorithm that uses the layout of nodes on a map as its genotype. By optimising the spread until a set of constraints is satisfied, we are able to produce better-clustered maps in a more robust way. When varying the number of clusters in these maps and solving the Euclidean symmetric travelling salesman problem using Chained Lin-Kernighan, we observe a phase transition in the form of an easy-hard-easy pattern. JF - LNCS PB - Springer-Verlag CY - Birmingham, UK VL - 3242 SN - 3-540-23092-0 UR - http://www.vanhemert.co.uk/files/clustered-phase-transition-tsp.tar.gz ER - TY - JOUR T1 - The pervasiveness of evolution in GRUMPS software JF - Softw., Pract. Exper. Y1 - 2003 A1 - Evans, Huw A1 - Atkinson, Malcolm P. A1 - Brown, Margaret A1 - Cargill, Julie A1 - Crease, Murray A1 - Draper, Steve A1 - Gray, Philip D. A1 - Thomas, Richard VL - 33 ER - TY - CHAP T1 - Persistence and Java — A Balancing Act T2 - Objects and Databases Y1 - 2001 A1 - Atkinson, M. ED - Klaus Dittrich ED - Giovanna Guerrini ED - Isabella Merlo ED - Marta Oliva ED - M. Elena Rodriguez AB - Large-scale and long-lived application systems, i.e. enterprise applications, require persistence, that is, the provision of storage for many of their data structures. The Java™ programming language is a typical example of a strongly typed, object-oriented programming language that is becoming popular for building enterprise applications. It therefore needs persistence. The present options for obtaining this persistence are reviewed. We conclude that the Orthogonal Persistence Hypothesis (OPH) is still persuasive. It states that the universal and automated provision of longevity or brevity for all data will significantly enhance developer productivity and improve applications.
This position paper reports on the PJama project with particular reference to its test of the OPH. We review why orthogonal persistence has not been taken up widely, and why the OPH is still incompletely tested. This leads to a more general challenge: how to conduct experiments that reveal large-scale and long-term effects, and some thoughts on how that challenge might be addressed by the software research community. JF - Objects and Databases T3 - Lecture Notes in Computer Science PB - Springer VL - 1944 UR - http://www.springerlink.com/content/8t7x3m1ehtdqk4bm/?p=7ece1338fff3480b83520df395784cc6&pi=0 ER - TY - CONF T1 - Persistence and Java - A Balancing Act T2 - Objects and Databases Y1 - 2000 A1 - Atkinson, Malcolm P. JF - Objects and Databases ER - TY - CONF T1 - Population dynamics and emerging features in AEGIS T2 - Proceedings of the Genetic and Evolutionary Computation Conference Y1 - 1999 A1 - Eiben, A. E. A1 - Elia, D. A1 - van Hemert, J. I. ED - W. Banzhaf ED - J. Daida ED - Eiben, A. E. ED - M. H. Garzon ED - V. Honavar ED - M. Jakiela ED - R. E. Smith KW - dynamic problems AB - We describe an empirical investigation within an artificial world, aegis, where a population of animals and plants is evolving. We compare different system setups in search of an `ideal' world that allows a constantly high number of inhabitants for a long period of time. We observe that high responsiveness at the individual level (speed of movement) or at the population level (high fertility) is `ideal'. Furthermore, we investigate the emergence of the so-called mental features of animals determining their social, consumptional and aggressive behaviour. The tests show that being socially oriented is generally advantageous, while aggressive behaviour emerges only under specific circumstances. ER -