This is a static HTML version of an old Drupal site. The site is no longer maintained and could be deleted at any point. It is only here for historical interest.
Latest news on our projects, seminars, presentations, publications and software.
Below you will find our latest news, which you can subscribe to using the RSS feed at the bottom. More specific information about people, publications, projects and our regular research seminar can be accessed through the menu at the top.
The turbulent global digital-data revolution is delivering a bonanza of research opportunities. In most disciplines these promise significant advances in understanding, but today we have to invest unsustainable amounts of intellectual effort and energy to obtain those advances, because our conceptual tools and their supporting technology have not yet grown to meet the challenge of data wealth. The talk reviews some of the ways in which we can sharpen our data-intensive tools and discusses early experiences in several application areas.
Modern science involves enormous amounts of data that need to be transferred and shared among various locations. For the EFFORT (Earthquake and Failure Forecasting in Real Time) project, large data files need to be synchronized between different locations and operating systems in near real time. Performing large, continuous data transfers over long periods of time poses many challenges, and using Globus Online for the transfers addresses many of them. Globus Online is quickly becoming a new standard for high-performance data transfer.
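One way to script such a recurring synchronisation is the Globus Python SDK (globus_sdk). The sketch below is a minimal illustration rather than the EFFORT project's actual setup; the endpoint IDs, paths, label and token handling are placeholders you would replace with your own.

```python
import globus_sdk

# Hypothetical endpoint UUIDs and paths; replace with your own.
SOURCE_ENDPOINT = "source-endpoint-uuid"
DEST_ENDPOINT = "destination-endpoint-uuid"

# Assume a transfer access token has already been obtained (for example via
# globus_sdk's native-app OAuth2 flow); passing it in directly keeps the
# sketch short.
authorizer = globus_sdk.AccessTokenAuthorizer("TRANSFER_ACCESS_TOKEN")
tc = globus_sdk.TransferClient(authorizer=authorizer)

# Describe a recursive transfer that only copies files whose modification
# time differs; run periodically, this approximates the near-real-time
# synchronisation described above.
task = globus_sdk.TransferData(
    tc,
    SOURCE_ENDPOINT,
    DEST_ENDPOINT,
    label="EFFORT data sync",   # hypothetical label
    sync_level="mtime",
)
task.add_item("/data/effort/", "/mirror/effort/", recursive=True)

result = tc.submit_transfer(task)
print("Submitted Globus transfer task:", result["task_id"])
```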
Concurrent use of the parallel storage systems on HPC clusters is a significant problem: contention between jobs can undo the performance improvements that would otherwise be obtained from MPI-IO optimizations such as data sieving and two-phase collective I/O.
Our approach to achieving performance guarantees is to let users and applications explicitly reserve I/O throughput on the storage system in advance, specifying the start and end times of the access.
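As a rough illustration of the idea (not the actual reservation system), the sketch below shows a toy admission check that accepts a new throughput reservation only if, at every instant of its time window, the sum of overlapping reservations stays within an assumed aggregate capacity. All names and numbers are invented for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reservation:
    start: float        # reservation start time (seconds)
    end: float          # reservation end time (seconds)
    throughput: float   # reserved I/O bandwidth (MB/s)

def admit(new: Reservation, existing: List[Reservation], capacity: float) -> bool:
    """Accept the new reservation only if, at every instant it is active,
    the total demand of overlapping reservations stays within the storage
    system's aggregate throughput (capacity, in MB/s)."""
    # Demand only changes where some reservation starts, so checking the new
    # reservation's start plus every existing start inside its window is enough.
    points = (
        {new.start}
        | {r.start for r in existing if new.start < r.start < new.end}
    )
    for t in points:
        demand = new.throughput + sum(
            r.throughput for r in existing if r.start <= t < r.end
        )
        if demand > capacity:
            return False
    return True

# Example: a 200 MB/s system with one existing 150 MB/s reservation.
existing = [Reservation(start=0, end=3600, throughput=150.0)]
print(admit(Reservation(1800, 5400, 100.0), existing, capacity=200.0))  # False: overlaps
print(admit(Reservation(3600, 7200, 100.0), existing, capacity=200.0))  # True: no overlap
```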
There is increasing demand for a more personalised approach to diagnosis and treatment for patients, such as those with cancer, so that the treatment offered is based on knowledge that it will be effective. The current "one size fits all" approach should not be applied to care and treatment when tools are now available that can target the individual.
Large applications now depend on components and services from numerous providers, and many individuals and organisations will have requirements on various aspects of the composite system. For example, the configuration of a simple web server may involve some settings provided by default by the vendor, some mandated by the local security policies, some needed for compatibility with a collaborator's service, some delegated to a particular application developer, and so on.
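To make the composition problem concrete, the toy sketch below merges several configuration layers under an explicit precedence order; all keys, values and the ordering itself are invented for illustration and are not taken from the talk.

```python
# Hypothetical configuration layers for a web server, lowest precedence first.
vendor_defaults = {"port": 80, "ssl": False, "max_clients": 150}
collaborator_compat = {"max_clients": 256}            # needed by a partner service
app_developer = {"port": 8080, "document_root": "/srv/app"}
local_security_policy = {"ssl": True}                 # mandated locally, overrides all

def compose(*layers: dict) -> dict:
    """Merge layers left to right; later layers override earlier ones."""
    merged: dict = {}
    for layer in layers:
        merged.update(layer)
    return merged

config = compose(vendor_defaults, collaborator_compat, app_developer,
                 local_security_policy)
print(config)
# {'port': 8080, 'ssl': True, 'max_clients': 256, 'document_root': '/srv/app'}
```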
Overview of large-scale annotation in bioinformatics research.
1. A brief introduction to what gene expression annotation is and how it can be captured (using the Eurexpress.org project as an example), and how collected annotation corpora can be used for analysis and auto-annotation.
2. Introduce the Virtual Fly Brain (VFB) project.
3. Describe the modes of collecting annotation within the scope of VFB and touch on the problem of data verification.
4. Introduce the proposed social-media-based approach to data verification and further data analysis (a small illustrative sketch follows this list).
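As a purely illustrative example of turning community feedback into a verification signal (not the method proposed in the talk), the sketch below accepts an annotation once enough independent users have confirmed it. The annotation IDs, vote format and thresholds are invented for the example.

```python
from collections import defaultdict

def verify(votes, min_votes=3, min_agreement=0.8):
    """votes: iterable of (annotation_id, user_id, agrees: bool).
    Returns the set of annotation IDs considered verified."""
    tallies = defaultdict(lambda: {"yes": set(), "no": set()})
    for annotation_id, user_id, agrees in votes:
        tallies[annotation_id]["yes" if agrees else "no"].add(user_id)
    verified = set()
    for annotation_id, t in tallies.items():
        total = len(t["yes"]) + len(t["no"])
        # Require both a minimum number of voters and a minimum agreement rate.
        if total >= min_votes and len(t["yes"]) / total >= min_agreement:
            verified.add(annotation_id)
    return verified

votes = [
    ("neuron_42_expression", "alice", True),
    ("neuron_42_expression", "bob", True),
    ("neuron_42_expression", "carol", True),
    ("neuron_7_expression", "alice", True),
    ("neuron_7_expression", "dave", False),
]
print(verify(votes))   # {'neuron_42_expression'}
```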