This is a static HTML version of an old Drupal site. The site is no longer maintained and could be deleted at any point. It is only here for historical interest.
Presentations by group members at external events
We regularly present our work at seminars, specific meetings, and national and international conferences.
The 10 years of the e-Science programme, and the many earlier years of e-Science, have shown the critical importance of digital communication in data-intensive research and in collaboration that brings sufficient expertise to bear on challenges. A review of the 10-year programme shows that the most significant positive outcomes often arrive years after the initial work, even when that work itself led to major breakthroughs and achievements.
This presentation will focus on some of Amazon EC2's distinctive performance and cost characteristics. We test the impact of regional cost differences by submitting a job to EC2 from both Thailand and the UK. Although start-up cost differences are small, cost differences scale with use and hence affect overall business competitiveness. Predicting this is complicated by variations in the underlying cloud load, which mean application performance can differ on each run. Costs can also vary because data-transfer usage may or may not be recorded, and hence charged.
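To make the cost reasoning concrete, here is a minimal back-of-the-envelope sketch in Python. The hourly and data-transfer prices, run lengths and transfer volumes below are invented placeholders, not figures from the study; the `billed_transfer` flag stands in for the observation that transfer usage may or may not be recorded and charged.

```python
# Hypothetical figures only: prices, run times and transfer volumes are
# illustrative assumptions, not measurements from the study described above.

def run_cost(hours, price_per_hour, gb_out, price_per_gb_out, billed_transfer=True):
    """Estimate the cost of one run: instance-hours plus (optionally) data egress."""
    compute = hours * price_per_hour
    transfer = gb_out * price_per_gb_out if billed_transfer else 0.0
    return compute + transfer

# Two hypothetical regions with slightly different hourly and egress prices.
uk = run_cost(hours=120, price_per_hour=0.38, gb_out=50, price_per_gb_out=0.12)
ap = run_cost(hours=120, price_per_hour=0.42, gb_out=50, price_per_gb_out=0.19)

print(f"UK-region estimate:      ${uk:.2f}")
print(f"Asia-Pacific estimate:   ${ap:.2f}")
print(f"Difference per run:      ${ap - uk:.2f}  (grows linearly with use)")
```

Even with small per-hour differences, the gap between regions grows in proportion to usage, which is why it matters for competitiveness once workloads run continuously.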
We present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPUs (General Purpose Graphics Processing Units) using the CUDA programming model. GPUs originated as dedicated co-processors for graphics generation, but modern GPUs have evolved into more general processors capable of executing scientific computations. They provide a highly parallel computing environment, thanks to their large number of computing cores, and constitute an affordable high-performance computing option.
Date and time:
Tuesday, 26 July, 2011 - 11:30
Location:
Healthcare Informatics, Imaging and Systems Biology (HISB) 2011, California, US
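A common algorithm family for perfusion quantification is deconvolution by truncated singular value decomposition (SVD). Assuming that family of methods, the sketch below illustrates the deconvolution step described in the abstract above on a synthetic single voxel, in Python/NumPy. It is not the GPU code from the talk; the perfusion model, array sizes and threshold are simplifying assumptions for illustration only.

```python
# Conceptual CPU sketch of truncated-SVD deconvolution for perfusion quantification.
# NOT the CUDA implementation from the talk; a noise-free, single-voxel toy example.
import numpy as np

def deconvolve_svd(aif, tissue, dt, rel_threshold=0.05):
    """Estimate F*R(t) from tissue(t) = dt * (AIF convolved with F*R)(t)."""
    n = len(aif)
    # Lower-triangular Toeplitz convolution matrix built from the arterial input function.
    idx = np.subtract.outer(np.arange(n), np.arange(n))
    A = dt * np.where(idx >= 0, aif[idx.clip(min=0)], 0.0)
    U, s, Vt = np.linalg.svd(A)
    # Zero out small singular values so measurement noise is not amplified.
    s_inv = np.where(s > rel_threshold * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))

# Synthetic single-voxel data: instantaneous bolus with exponential washout.
dt, n = 1.0, 60
t = np.arange(n) * dt
aif = np.exp(-t / 4.0)                       # toy arterial input function
flow_times_residue = 0.6 * np.exp(-t / 8.0)  # F * R(t), with F = 0.6 in arbitrary units
tissue = dt * np.convolve(aif, flow_times_residue)[:n]

estimate = deconvolve_svd(aif, tissue, dt)
print("estimated flow:", estimate.max())     # ~0.6 for this noise-free toy data
```

On a GPU the same computation is applied independently to every voxel, which is why the problem maps well onto the massively parallel architecture the abstract describes.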
At the start of the e-Science programme we thought scale was the predominant challenge.
Within a year we realised that there were many more aspects to the challenge of empowering researchers by applying distributed computation.
We now understand that e-Science is a continuous process, progress being achieved by walking paths together, discovering critical issues and inventing solutions collaboratively.
The Data-Intensive Systems Process Engineering Language (DISPEL) has been developed in the ADMIRE project to encourage partitioning of data-intensive process design and development. It manipulates processing elements and data streams to generate graphs that represent the requested processes. Some of the features of the language designed to make this possible will be introduced.
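To make the idea of manipulating processing elements and data streams to generate graphs more concrete, here is a small Python analogue. This is not DISPEL syntax, and the element and port names are invented for illustration; it only sketches the underlying composition pattern.

```python
# Illustrative only: NOT DISPEL syntax. A Python analogue of composing named
# processing elements and connecting their data streams into a process graph.

class ProcessingElement:
    def __init__(self, name, inputs=(), outputs=()):
        self.name, self.inputs, self.outputs = name, tuple(inputs), tuple(outputs)

class Workflow:
    def __init__(self):
        self.edges = []  # (source element, output port, target element, input port)

    def connect(self, src, out_port, dst, in_port):
        assert out_port in src.outputs and in_port in dst.inputs
        self.edges.append((src, out_port, dst, in_port))

    def describe(self):
        for src, out_port, dst, in_port in self.edges:
            print(f"{src.name}.{out_port} => {dst.name}.{in_port}")

# A toy pipeline: query a data source, transform the stream, deliver the results.
query     = ProcessingElement("SQLQuery",  inputs=("expression",), outputs=("data",))
transform = ProcessingElement("Transform", inputs=("input",),      outputs=("output",))
deliver   = ProcessingElement("Results",   inputs=("input",),      outputs=())

wf = Workflow()
wf.connect(query, "data", transform, "input")
wf.connect(transform, "output", deliver, "input")
wf.describe()   # prints the data-flow graph, edge by edge
```

The point of the pattern is that the designer describes only how streams connect; the resulting graph can then be optimised and mapped onto distributed resources by the enactment platform.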
Not every user knows how to submit a compute job by remote login, or how to adapt to different job-submission systems when switching between facilities. In recognition of this, a recent trend is to provide web portals as an interface; these come in two types, each with its own major drawback. The first type consists of generic job-submission portals, which still require many technical specifics to be supplied by the user and much manual handling of data and results. The second type consists of domain-specific portals, which are expensive and time-consuming to build and maintain.
The global digital revolution provides a fertile and turbulent ecological environment in which e-Science is a small but vital element. There is a deep history of e-Science, but coining the term and injecting leadership and modest funds had a huge impact. A veritable explosion of activity has led to a global burst of new e-Science species. Our challenge is to understand what will enable them to thrive and yield maximum benefit as the digital revolution continues to be driven by commerce and media.
It is evident that data-intensive research is transforming the computing landscape. We are facing the challenge of handling the deluge of data generated by the sensors and modern instruments that are widely used in all domains. The number of sources of data is increasing, while, at the same time, the diversity, complexity and scale of these data resources are also growing dramatically. To survive the data tsunami, we need to improve our apparatus for the exploration and exploitation of the growing wealth of data.
Date and time:
Wednesday, 7 July, 2010 - 14:00
Location:
Information Sciences Institute, University of Southern California, Los Angeles, California, US.
It is evident that data-intensive research is transforming the computing landscape. We are facing the challenge of handling the deluge of data generated by the sensors and modern instruments that are widely used in all domains. The number of sources of data is increasing, while, at the same time, the diversity, complexity and scale of these data resources are also growing dramatically. To survive the data tsunami, we need to improve our apparatus for the exploration and exploitation of the growing wealth of data.
Date and time:
Thursday, 1 July, 2010 - 13:00
Location:
Computation Institute, University of Chicago, Chicago, Illinois, US.