This is a static HTML version of an old Drupal site. The site is no longer maintained and could be deleted at any point. It is only here for historical interest.
Reusable computational models
Computational models that can be applied in more than one discipline
Diversity, in every dimension, is a key attribute of today’s data bonanza. Our research takes a holistic view, embracing this diversity and the consequent intricate interactions between users and systems. We created the Dispel data-streaming language to describe complex computation patterns at high levels of abstraction, while providing meta-information for optimisation. Provenance and contextual information must be harnessed to achieve autonomous execution, data placement, energy efficiency and reliability.
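Dispel has its own syntax for wiring processing elements into streaming workflows; as a rough illustration of the stream-composition idea only (not actual Dispel, and with entirely hypothetical names), here is a minimal Python sketch in which processing elements are generators connected into a pipeline:

```python
# Minimal sketch of stream-style composition, in the spirit of (but not
# actual) Dispel: processing elements are generators wired into a pipeline.
# All names are hypothetical illustrations, not Dispel constructs.

def source(records):
    """A processing element that emits a stream of records."""
    for r in records:
        yield r

def transform(stream, fn):
    """A processing element that applies fn to every item in the stream."""
    for item in stream:
        yield fn(item)

def sink(stream):
    """A terminal element that consumes the stream and collects results."""
    return list(stream)

# Compose a pipeline: source => transform => sink
pipeline = transform(source([1, 2, 3]), lambda x: x * 10)
print(sink(pipeline))  # [10, 20, 30]
```

Because each element only pulls items on demand, data flows through the pipeline lazily; an optimiser with meta-information about the elements could, in principle, decide where each stage runs.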
From the ENVRI Description of Work: "Frontier environmental research increasingly depends on a wide range of data and advanced capabilities to process and analyse them. The ENVRI project, 'Common Operations of Environmental Research Infrastructures' is a collaboration in the ESFRI Environment Cluster, with support from ICT experts, to develop common e-science components and services for their facilities. The results will speed up the construction of these infrastructures and will allow scientists to use the data and software from each facility to enable multi-disciplinary science."
The agenda of this meeting will be flexible; the aim is to provide the informaticians with an understanding of the specific challenges in monitoring, analysis and modelling of experimental and seismological data.
"Data-intensive" refers to huge volumes of data, complex patterns of data integration and analysis, and intricate interactions between data and users. Current methods and tools are failing to address data-intensive challenges effectively; they fail for several reasons, all of which are aspects of scalability.
Date and time:
Wednesday, 28 April, 2010 - 10:30
Location:
edikt2010 Symposium - Using computing in your research, e-Science Institute, 15 South College Street, Edinburgh
We present Edinburgh Data-Intensive Research, a research group in Edinburgh Informatics and part of the UK National e-Science Centre. The demonstration comprises several 15-minute rounds: we briefly introduce the group (2 minutes), then attendees can pick people to talk to for the remaining time. All team members will be present with laptops to provide in-depth demonstrations of our methods and their applications.
Date and time:
Monday, 15 March, 2010 - 14:00
Location:
Data-Intensive Research Workshop, e-Science Institute, UK
The high cost and difficulty of obtaining high-quality mRNA from primary tissue have led many microarray studies to be conducted on the analysis of single sample-replicates. The purpose of our study was to quantify the impact of this practice on the quality and reproducibility of reported results by exploiting the multiple-array-per-chip design of the Illumina BeadChip platform.
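The intuition behind the study's question can be illustrated with a toy simulation (not the study's actual analysis; all parameters are invented for illustration): measuring each sample once leaves more noise in the reported expression values than averaging several technical replicates.

```python
# Illustrative sketch only, not the study's method: simulate noisy array
# measurements of 1000 genes and compare the error of a single replicate
# against the average of four replicates. All numbers are hypothetical.
import random

random.seed(0)

N_GENES = 1000
true_expr = [random.gauss(8.0, 2.0) for _ in range(N_GENES)]

def measure(noise_sd):
    """One noisy array measurement of every gene."""
    return [t + random.gauss(0.0, noise_sd) for t in true_expr]

def mean_abs_error(measured):
    """Average deviation of a measurement from the true expression."""
    return sum(abs(m - t) for m, t in zip(measured, true_expr)) / N_GENES

single = measure(1.0)
replicates = [measure(1.0) for _ in range(4)]
averaged = [sum(vals) / 4 for vals in zip(*replicates)]

print(mean_abs_error(single))    # error using one replicate per sample
print(mean_abs_error(averaged))  # smaller: averaging reduces the noise
```

Averaging n independent replicates shrinks the measurement noise by roughly a factor of the square root of n, which is why single-replicate designs trade cost savings for reproducibility.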
Date and time:
Tuesday, 9 March, 2010 - 09:30
Location:
MRC Human Genetics Unit, Western General Hospital, Edinburgh
High-throughput array-based screens are a popular experimental technique employed by molecular biologists in their endeavour to unravel highly complex gene interactions. Over the past decade a wealth of research has focused on the development of advanced statistical techniques with which to interrogate these large datasets. Simultaneously, a similar level of effort has been expended on the development of large-scale, open-access data repositories and accompanying standards for the reporting of experiment metadata.
Science is witnessing a data revolution. Data are now created by faster and cheaper physical technologies, software tools and digital collaborations; examples include satellite networks, simulation models and social network data. To transform these data successfully into information, then into knowledge, and finally into wisdom, we need new forms of computational thinking. These may be enabled by building "instruments" that make data comprehensible to the "naked mind", in a similar fashion to the way telescopes reveal the universe to the naked eye.
Date and time:
Tuesday, 9 February, 2010 - 09:30
Location:
Seminar Room, Biomedical Systems Analysis, Human Genetics Unit, Medical Research Council, Edinburgh, UK
Presenting the research of the Data-Intensive Research Group as part of a visit of Professor Robin Stanton (Pro Vice-Chancellor) and Professor Lindsay Botten (Director, National Computational Infrastructure), Australian National University, to the UK National e-Science Centre.