Last week I was a co-organiser of a Newton Institute workshop on high-dimensional statistics in biology. It was a great meeting and there were lots of interesting discussions, in particular on ChIP-seq methods and protein-DNA binding array work. I also finally heard Peter Bickel talk about the “Genome Structure Correction” method (GSC), something which he developed for ENCODE statistics, and which I now, finally, understand. It is a really important advance in the way we think about statistics on the genome.

The headache for genome analysis is that we know for sure it is a heterogeneous place – lots of things vary, from gene density to GC content to… nearly anything you name. This means that naive parametric statistical measures, for example assuming everything is Poisson, will completely overestimate the significance. In contrast, naive randomisation experiments to build some potential empirical distribution of the genome can easily lead to over-dispersed null distributions, ie, end up underestimating the significance (given a choice it is always better to underestimate). What’s nice is that Peter has come up with a sampling method to give you the “right” empirical null distribution. This involves a segmented-block-bootstrap method where in effect you create “feasible” miniature genome samples by sampling the existing data. As well as being intuitively correct, Peter can show it is actually correct given only two assumptions: first, that the genome’s heterogeneity is block-y at a suitably larger scale than the items being measured, and second, that the genome has independence of structure once one samples from far enough away – a sort of mixing property. Finally Peter appeals to the same ergodic theory used in physics to convert a sampling over space into a sampling over time; in other words, by sampling the single genome’s heterogeneity from the genome we have, this produces a set of samples of “potential genomes” that evolution could have created. All these assumptions are justifiable, and certainly this is far fewer assumptions than other statistics require. Using this method, empirical distributions (which in some cases can be safely assumed to be Gaussian, so far fewer points are needed to get the estimate) can be generated, and test statistics built off these distributions. (Peter prefers confidence limits of a null distribution.)
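
To make the idea concrete, here is a toy sketch of the block-bootstrap flavour of resampling – not Peter’s actual GSC implementation (which involves segmentation and more care about block structure), and all names here are illustrative. The point is simply that resampling contiguous blocks, rather than individual positions, preserves local heterogeneity in the null:

```python
import random

def block_bootstrap(values, block_len, n_samples, rng=None):
    """Toy block-bootstrap sketch: build pseudo-genomes by stitching
    together randomly chosen contiguous blocks of the observed track,
    then compute a test statistic (here, the mean) on each one.
    Resampling blocks rather than single positions keeps block-scale
    heterogeneity intact, which is the key intuition behind GSC."""
    rng = rng or random.Random(0)
    n = len(values)
    stats = []
    for _ in range(n_samples):
        sample = []
        while len(sample) < n:
            start = rng.randrange(0, n - block_len + 1)
            sample.extend(values[start:start + block_len])
        # test statistic on the pseudo-genome, trimmed to the original length
        stats.append(sum(sample[:n]) / n)
    return stats
```

An observed statistic would then be compared against this empirical null distribution to get a significance estimate; the real method’s guarantees depend on choosing block sizes consistent with the two assumptions above.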

End result – one can correctly control for heterogeneity (of certain sorts, but including much of the class you want to, eg, gene density). Peter is part of the ENCODE DAC group I am putting together, and Peter and his postdoc, Ben Brown, are going to be making Perl, pseudo-code and R routines for this statistic. I think we in Ensembl will implement this as a web page, so that everyone can easily use it. Overall… it is a great step forward in handling genome-wide statistics.

It is also about as mathematical as I get.


(you wouldn’t expect orangutans out of the trees, would you?)

Ensembl 49 will contain good news on the comparative genomics side. Apart from the new whole-genome multiple alignments, which can now handle segmental duplications and infer ancestral sequences (see Ewan’s post of the 20th of January), two new species will be available, namely horse and orangutan.

We are especially excited about the new orangutan genome as it is a key species in the primate lineage, in between the human, chimp and gorilla group and the Old World monkeys. Its inclusion in our gene trees will result in a better resolution of the phylogeny of the primate genes.

We’re pleased to announce that Ensembl now has a mirror at the Beijing Genomics Institute, Shenzhen (BGI-SZ). The mirror can be found at http://ensembl.genomics.org.cn/


Most of the functionality of the main Ensembl site is mirrored; however, we’re still working with our colleagues at the BGI to provide the rest, for example BioMart.


Due to a combination of the volume of data comprising a single Ensembl release (the MySQL data and index files for release 48 take up around 600 GB, and that’s without counting all of the flat-file dumps) and the very slow Internet connection between the UK and China, we’re using a “sneakernet” solution – i.e. dumping the data onto a hard drive and shipping it to China. This has proved to be an interesting challenge but it’s working out pretty well so far.

We hope that this mirror will make life easier for our users in and around China. We’re actively trying to set up mirrors elsewhere around the world to reduce network delays and improve people’s Ensembl experience; we’ll post here as soon as any new mirrors come online.

I would like to thank our colleagues at the BGI-SZ, particularly Lin Fang, for setting this mirror up.

I’ve just been visiting CNIO in Madrid – a great, fancy new(ish) institute focusing on cancer – and it was a great visit if you ignore the 2-hour delay (thanks, Iberia) coming out and the current 1-hour delay (thanks, BA…) coming back. They are doing all the things one expects from a high-end molecular biology institute. There are ChIP-chip groups moving to ChIP-seq. There are some classic cell biologists moving into more genome-wide assays (in this case, replication). They have a great prospective sample collection in two cancers, and are about to get into a Genome Wide Association Study (GWAS).

David – the head of the bioinformatics service – is already leveraging Ensembl a lot. They script against our databases (mainly via the Perl API) and have a local mirror set up. They have run courses, bringing over Ensembl people for both an API course and a browser course (contact helpdesk if you’d like this to happen at your institute…). But even then, discussions with David made us realise that they could use us even more – for the functional genomics schema and the variation schema in particular.

This is what Ensembl is all about. We make it easier for people who want to work genomically to do the sometimes painful data manipulation and plumbing. In particular, Ensembl provides public domain information on a large scale, well organised, ready to be browsed on the web, scripted against in Perl, and accessible to clients like Bioconductor. And more than any other group, we help groups like David’s do more for their institutes while worrying less about the infrastructure. David was very interested in the “geek for a week” programme, where someone comes to work at Ensembl to help accelerate a project.

Returning to the airline theme, some of the biologists admitted, a little embarrassed, to using the UCSC browser. I responded that that was fine – UCSC is a great browser, with some great tools. Like airlines, we know people have a choice of browsers, and we hope people come “fly Ensembl” and enjoy it, but we know the competition is good (and really friendly as well – we like working with those crazy Californians, and have a number of joint projects). If you are a biologist, you should use the best tool for the job at hand. Of course, we know where we’re lacking, in particular in comparison to UCSC, and we are working on getting better. Keep an eye on changes in Ensembl this year – and do come fly with us, even if your “regular browser” is US-based.

Finally, I think my plane is ready to depart.

(Madrid airport is so big I think I’m half way to the UK already)

Today the 1000 Genomes project was announced. By any measure this is a big deal.
The goal is simple: to create the most comprehensive and medically useful collection of human variation ever assembled, by producing approximately 6 terabases of sequence. To put this amount of data in perspective, 6 terabases is more than 60 times the amount of data currently available in the DDBJ/GenBank/EMBL archive, which took more than 25 years to collect. At peak production, the 1000 Genomes project will sequence more than 8 billion base pairs per day. That’s the data output of the entire Human Genome Project every week. All made publicly available.
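
A quick back-of-the-envelope check of those figures (using only the numbers quoted above; the archive size is implied by the 60x ratio, not stated directly):

```python
# Sanity-check the scale of the 1000 Genomes figures quoted above.
project_bp = 6e12        # ~6 terabases of sequence planned
archive_ratio = 60       # "more than 60 times" the existing archive
implied_archive_bp = project_bp / archive_ratio   # ~1e11 bp (~100 Gbp)

peak_bp_per_day = 8e9    # "more than 8 billion base pairs per day"
days_at_peak = project_bp / peak_bp_per_day       # ~750 days at peak rate

print(f"implied archive size: {implied_archive_bp:.0e} bp")
print(f"days of sequencing at peak rate: {days_at_peak:.0f}")
```

So the existing archive works out to roughly 100 gigabases, and even at full tilt the project represents a couple of years of sequencing at a rate no one had previously sustained.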
The data generation rate and the short read lengths mean that the bioinformatics required for the project is equally ambitious (or terrifying, depending on your point of view). The EBI and NCBI, working together, are creating a joint DCC (data coordination centre) to collect, organise and provide the data to the world. Steve Sherry at the NCBI and I are eager to take this on.
At Ensembl we’ve been expecting this development, and built support for re-sequencing data into our variation database a couple of years ago. So far we have data for about 6 humans, 5 mouse strains, and a smattering of rat data. Small stuff compared to six months from now, but large enough that we have both experience and confidence in dealing with large-scale resequencing data. We are probably going to need both.
Check out more at http://www.1000genomes.org

Richard posted the next release intentions here:

ensembl-dev archive

Lots of good stuff – orangutan and horse being released, the usual tweaks such as removing contamination (viral genes) from the gene sets, and other little details.

But one thing is quite a change. It is from Javier’s Compara team, and it is simply stated as

“Generate the 7-way alignments using the new enredo-pecan-ortheus pipeline”

Unpacking this statement, it is a big change in how we’re thinking about comparative genomics alignments. Enredo is a method to produce a set of co-linear regions, sometimes called a “synteny map”, though that is a dreadful term. The key thing is that it handles duplications in the genome, allowing (say) two regions of human to be co-linear with one region of mouse. This is hard to handle on a genome-wide scale in a scalable manner. Pecan is the multiple aligner written by the brilliant Ben Paten (he used to be my student, and wrote Pecan whilst at the EBI; he is now at UCSC with Jim, David and co). Pecan is the best aligner – by both simulation testing and ancient-repeat alignability testing, it has the highest sensitivity of alignment, with the same specificity as the next-best aligner. Finally Ortheus, also from Ben, provides (potentially) realignment whilst simultaneously sampling correctly from a probabilistic model of sequence evolution, critically including insertions and deletions, and thus, as a side effect, producing likely ancestral sequences. This too has been stringently tested, using a hold-one-out criterion: basically, can we “predict” the marmoset sequence using only the other extant species? (Answer: not completely correctly, but better than any other method, eg taking the nearest sequence.)

So – what does this all mean? Basically there are two key things:

  1. Handling lineage-specific duplications. This is a headache, and we have a good solution, providing the alignment of the paralogous and orthologous regions simultaneously (the paralogy is limited to relatively recent paralogy, ie, within mammals)
  2. We can reliably predict ancestral sequences
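
To illustrate the first point, here is a toy sketch (names and structure illustrative – this is not the Enredo data model) of what a duplication-aware co-linear map allows: a single block can contain more than one region from the same genome, for example two human copies aligned to one mouse region.

```python
from collections import defaultdict

# A co-linear block groups regions across genomes; a duplication-aware
# map permits the same species to contribute several regions to one block.
colinear_blocks = [
    ("block_1", [("human", "1", 1_000_000, 1_050_000),
                 ("human", "7", 2_300_000, 2_350_000),   # lineage-specific duplicate
                 ("mouse", "4", 5_000_000, 5_048_000)]),
]

def regions_per_species(block):
    """Count how many regions each species contributes to a block;
    a count greater than 1 signals a lineage-specific duplication."""
    counts = defaultdict(int)
    for species, *_ in block[1]:
        counts[species] += 1
    return dict(counts)

print(regions_per_species(colinear_blocks[0]))  # human appears twice, mouse once
```

Older alignment pipelines effectively forced each species to appear at most once per block, which is exactly why paralogy could be ignored before and has to be confronted now.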

One headache is that some of the things we display, in particular the GERP continuous conservation score, now need to be adapted to work with regions containing paralogy. There is a fascinating piece of theory to work through here – what is the concept of the “neutral tree” when there has been a lineage-specific duplication? How should one treat paralogs? Currently this is ignored, by virtue of the fact that the alignments don’t allow it. Now the alignments do allow it, and we need to do something sensible, as well as stimulate evolutionary theory people to look at the data and work out new methods.

The next headache is what do we do with the ancestral sequences? Dump them? Display them? Gene predict on them? If so, how?

The end result is that release 49 won’t look very different, even on the comparative genomics side, but it will have these new alignments, and over 2008 we will be working out how to present, analyse and leverage them further – so if you are interested, please do take them for a spin!

(Release 49 is due to be out sometime in mid-Feb)

We’re going to be experimenting with broader content generated by the Ensembl team on the Ensembl blog – at the very least by myself, Ewan Birney. So you can expect to read more about what we’re doing, the things coming up in the pipeline, and our thoughts on how genomic infrastructure will evolve over time. Ensembl is a big team with a lot of components, so it is often hard to track what we’re doing and why we’ve made certain decisions. Hopefully this blog will keep you up to date with our progress in an informal manner.