Hello to our readers – I hope everyone is having a nice April. In the UK we are experiencing a long winter with some rain, but spring seems to be around the corner… as are these upcoming workshops…

Did you know? The EBI has released tutorial videos.

Have a look at the Ensembl browser videos for information and direction to some of its pages! Or, learn more about BioMart, a fast data mining tool.

Upcoming workshops – May

Browser workshop at the WHO in Cairo (12-13 May)
Module in the Open Door Workshop at the Sanger (12-14 May)
Ensembl in China: The Shanghai Center for Bioinformation Technology (14-16 May)
Ensembl in China: Center for Bioinformatics, Beijing (19-21 May)
Browser and API workshops at the GTPB in Oeiras, Portugal (27-30 May)
Presentation at the European Human Genetics Conference in Barcelona (30 May)

We have four groups on campus interested in human genes: Ensembl; Havana, whose data forms the bulk of the Vega database; HGNC, the human gene nomenclature committee; and finally UniProt, which has a special initiative on human proteins. With all these groups on the Hinxton campus, and with all of them reporting to at least one of myself (Ewan Birney), Rolf Apweiler or Tim Hubbard, who form the three-way coordination body now called the “Hinxton Sequence Forum” (HSF), it should all work out well, right?

Which is sort of true. The main change over the last year has been far, far closer coordination between these four groups than there ever was before, meaning we will achieve an even closer coordination of our data, hopefully leaving the only differences being update cycle and genes which cannot be coordinated fully (e.g., due to gaps in the assembly).

Each of these groups has a unique viewpoint on the problem. Ensembl wants to create the best possible gene set across the entire human genome; its genesis back in 2000 rested on the insight that this had to be largely automatic to be achievable on the desired timescale, being months (not years) after the data was present. Havana wants to provide the best possible individual gene calls when they annotate a region, integrating computational, high-throughput and individual literature references together. UniProt wants to provide maximal functional information on the protein products of genes, using many literature references on protein function which are not directly informative on gene structure. Finally, HGNC wants to provide a single, unique symbol for each gene, giving a framework for discussing genes, in particular between practising scientists.

Three years ago, each group knew of the others’ existence, often discussed things, and was friendly enough, but rarely tried to understand in depth why certain data items caused conflicts as they moved between the different groups. Result: many coordinated genes, but a rather persistent set of things which were not coordinated. Result of that: irritated users.

This year, this has already changed, and it will change even more over 2008 and 2009. Ensembl is now using full-length Havana genes in the gene build, such that once Havana has integrated the usually complex web of high-throughput cDNAs, ESTs and literature information, these gene structures “lock down” that part of the genome. About one third of the genome has Havana annotation, and because of the ENCODE scale-up award to a consortium headed by Tim Hubbard, this will now both extend across the entire genome and be challenged and refined by some of the leading computational gene finders worldwide (Michael Brent, Mark Diekhans and Manolis Kellis, please take a bow). Previously Ensembl brought in Havana annotation on a one-off basis; now this process has been robustly engineered, and Steve Searle, the co-head of the Gene Build team, is confident it can work on a 4-monthly cycle. This means it seems possible that we can promise a worst-case response of six months for a bad gene structure to be fixed, with the fixed gene structure appearing far faster on the Vega web site. It also means that the Ensembl “automated” system will be progressively replaced by this expert-led “manual” annotation across the entire genome over the next 3 years.

(An aside: I hate using the words “automated” and “manual” for these two processes. The Ensembl gene build is, in parts, very un-automated, with each gene build being precisely tailored to the genome of interest in a manual manner by the so-called “gene builder”. In contrast, “manual” annotation is an expert curator looking at the results of many computational tools, each usually using different experimental information mapped, in often sophisticated ways, onto the genome. Both use a lot of human expertise and a lot of computational expertise. The “Ensembl” approach is to use human expertise in crafting rules, setting parameters and choosing which evidence is most reliable in the context of the genome of interest, but to have the final decision executed on those rules systematically; the “Havana” curation approach is to use human expertise gene by gene to make the decision in each case, and to have the computational expertise focus on making that decision-making as efficient as possible. Both will continue as critical parts of what we do, with higher-investment genomes (or gene regions in some genomes) deserving the more human-resource-hungry “manual” curation, whereas “automated” systems, which still require considerable human resource, can be scaled across many more genomes easily.)

This joint Havana/Ensembl build will, by construction, be both more correct and more stable over time, due to the nature of the Havana annotation process. This means other groups interacting with Havana/Ensembl can work in a smoother, more predictable way. In particular, on campus it provides a route for the UniProt team both to schedule their own curation in a smart way (basically, post-Havana curation) and to feed back issues noticed during UniProt curation, which can then be fixed gene by gene. This coordination also helps drive down the issues with HGNC. HGNC has always had a tight relationship with Havana, providing HGNC names for their structures, but the HGNC naming process did not coordinate so well with the Ensembl models, with gene names in complex cases becoming confused. This can now be untangled at the right levels: when it is an issue with gene structures, those are prioritised for the manual route; when it is an issue with transferring HGNC name assignments (which primarily attach to individual sequences, with notes to provide disambiguation) to the final Havana/Ensembl gene models, it can be triaged and fixed. HGNC will be providing new classifiers of gene names to deal with complex scenarios where there is just no consistent rule-based way of classifying the difference between “gene”, “locus” and “transcript” that works genome-wide. The most extreme example is the Ig loci, with a specialised naming scheme for the components of each locus, but there are other oddities in the genome, such as the protocadherin locus, which is… just complex. By having these flags, we can warn users that they are looking at a complex scenario, and give people who want to work only with cases that follow the “simple” rules (one gene, in one location, with multiple transcripts) the ability to work just in that genome space, without pretending that these parts of biology don’t exist.

It also means our relationships with the other groups in this area – in particular NCBI and UCSC (via the CCDS collaboration), NCBI Entrez Gene (via the HGNC collaboration) and other places worldwide – can work better, because (a) we’ve got more of our shop in order and (b) if we want to change information or a system, there is only one place we need to change it.

End result: far more synchrony of data, far less confusion for users, far better use of our own resources and better integration with other groups. Everyone’s a winner. Although this is all fiddly, sometimes annoying, detail-oriented work, it really makes me happy to see us on a path where we can see this resolved.

Last week I was a co-organiser of a Newton Institute workshop on high-dimensional statistics in biology. It was a great meeting with lots of interesting discussions, in particular on ChIP-seq methods and protein-DNA binding array work. I also finally heard Peter Bickel talk about the “Genome Structure Correction” method (GSC), something which he developed for ENCODE statistics and which I now, finally, understand. It is a really important advance in the way we think about statistics on the genome.

The headache for genome analysis is that we know for sure the genome is a heterogeneous place – lots of things vary, from gene density to GC content to… nearly anything you can name. This means that naive parametric statistical measures – for example, assuming everything is Poisson – will completely overestimate the significance. In contrast, naive randomisation experiments, used to build some potential empirical distribution of the genome, can easily lead to over-dispersed null distributions, i.e., end up underestimating the significance (given a choice, it is always better to underestimate). What’s nice is that Peter has come up with a sampling method to give you the “right” empirical null distribution. This involves a segmented-block-bootstrap method in which, in effect, you create “feasible” miniature genome samples by sampling the existing data. As well as being intuitively correct, Peter can show it is actually correct given only two assumptions: first, that the genome’s heterogeneity is blocky at a suitably larger scale than the items being measured; and second, that the genome has independence of structure once one samples from far enough away – a sort of mixing property. Finally, Peter appeals to the same ergodic theory used in physics to convert a sampling over space into a sampling over time; in other words, by sampling the single genome’s heterogeneity from the genome we have, this produces a set of samples of “potential genomes” that evolution could have created. All of these assumptions are justifiable, and certainly there are far fewer of them than behind other statistics. Using this method, empirical distributions (which in some cases can safely be assumed to be Gaussian, so that far fewer points are needed to get the estimate) can be generated, and test statistics built off these distributions. (Peter prefers confidence limits of a null distribution.)
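To make the block-bootstrap idea concrete, here is a minimal sketch in Python. It is only an illustration of the general principle (resample blocks of the genome, preserving within-block structure while breaking long-range association), not the published GSC algorithm; the overlap statistic, block length and track encoding are all assumptions for the example.

```python
import numpy as np

def block_bootstrap_null(track_a, track_b, block_len, n_samples=1000, seed=0):
    """Empirical null for the overlap of two binary feature tracks.

    Track B is rebuilt from randomly placed blocks of itself: this keeps
    its local (within-block) heterogeneity, but breaks any association
    with track A at scales larger than one block.
    """
    rng = np.random.default_rng(seed)
    n = len(track_a)
    n_blocks = n // block_len  # enough blocks to roughly cover the genome
    null = np.empty(n_samples)
    for i in range(n_samples):
        # Random block start positions along the genome
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        # Paste the blocks together into a "feasible" pseudo-genome
        resampled = np.concatenate([track_b[s:s + block_len] for s in starts])
        m = len(resampled)  # may be slightly shorter than n
        # Overlap statistic on the resampled genome
        null[i] = np.mean(track_a[:m] & resampled)
    return null

# Usage: compare the observed overlap against the empirical null
# observed = np.mean(track_a & track_b)
# p = (np.sum(null >= observed) + 1) / (len(null) + 1)
```

The key design point is that the block length must sit above the scale of the features being tested but below the scale at which the genome "mixes" – exactly the two assumptions described above.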

End result – one can control, correctly, for heterogeneity (of certain sorts, but including many of the kinds you want to control for, e.g., gene density). Peter is part of the ENCODE DAC group I am putting together, and Peter and his postdoc, Ben Brown, are going to be making Perl, pseudo-code and R routines for this statistic. I think we in Ensembl will implement this in a web page, so that everyone can use it easily. Overall… it is a great step forward in handling genome-wide statistics.

It is also about as mathematical as I get.

Yesterday, Ensembl released a new version of the browser and database (version 49). Along with new species, homologue predictions, and new code in our API, there have been changes in how the multiple alignments are done on the whole-genome scale. Have a look at the news for more details.

We are looking forward to release 50, as we are working on some new features! Keep your eye out in August for this next release. A reminder: we will not release another version between now and August, and during that time updates may appear on the Pre! site but not on the main site.

Please explore features in release 49, such as BLAST, which is now configured to align queries against top-level sequences (i.e. chromosomes and scaffolds), and BLAT, a fast alignment program which is now the default selection.

Paralogues are shown in blue in GeneTreeView to help guide your eye.

Upcoming workshops – April

(March workshops are listed in a previous post)

Browser workshops at the VIB Ghent and Leuven (31 Mar – 2 Apr)
Browser workshop (focus: rat) at the ULB Brussels (EURATools) (16 Apr)
Browser workshop at the BCB UCL/Birkbeck (21 Apr)
Module in the EBI roadshow in Poitiers (23, 24 Apr)
API workshop at the Dept. of Genetics, Cambridge (28, 29, 30 Apr)

Keep your eye out for Release 49, which is due on Tuesday 18 March. The delay is due to the scheduled downtime and maintenance at the Sanger and EBI this weekend, which has caused some trouble. However, Release 49 will soon be visible to the community!

New features in release 49 will include BLAST against top-level sequences for all species, updates to the GeneTreeView page that should make things easier to see, and new Ensembl gene sets for Orangutan, Horse and Takifugu. FlyBase 5.4 will be imported for Fruitfly. For API users, the regulatory features will be moved from the core API to the functional genomics API.

Also, a word of warning to those using our mouse clones under ‘DAS sources’: the MICER clones and the bMQ set (129S7/AB2.2) in the ‘DAS Sources’ menu of ContigView. The clones, originally mapped to NCBI m36, have been lifted over to the new assembly (NCBIM37) coordinates. The drawing indicates where each clone lifts over to in the new assembly; however, the pop-up box shows the coordinates of the original mapping. This is indicated in Ensembl by the ‘NCBIM36’ label above the coordinates.

Write our helpdesk if you are confused!

After a very successful Ensembl US West Coast Tour last month, the Ensembl Outreach team is presently looking into the possibility of organising a similar tour on the US East Coast in the second half of 2008. At the moment we are mainly thinking of 1-day browser workshops, but if there is interest in an API workshop we can of course also consider this.
The participating institutions would only have to pay the instructor’s expenses and would share the travel costs; we would not otherwise charge for the workshops. People who are potentially interested in hosting a workshop can contact me for more details.

We are looking forward to Ensembl release 49, which has been delayed to 13 March 2008. This is a result of some downtime planned at the Wellcome Trust Sanger Institute. Users, beware! Ensembl will not be available from Friday 7 March to Sunday 9 March.

On 13 March, Ensembl version 49 will be available. Keep an eye out for:
Drosophila melanogaster (assembly BDGP 5.4)
Horse (first gene build)

Viral genes have been removed from multiple species. ncRNA updates will be ready for Pika and Mouse Lemur, and new variations from dbSNP 128 (mouse, chicken, cow and zebrafish) and dbSNP 126 (rat) will be available. Have a look at the new pairwise alignments between the human and horse genomes.

Upcoming Workshops – March

A series of talks and workshops are happening in March.

Browser Workshop (2-day course at the EBI) 5-7 March
Browser Workshop (part of a 2 day course, MRC London: EBI Roadshow) 11-12 March
Presentation at the Genomes to Systems 2008 conference in Manchester, 17-19 March

Interested in organising a course in Ensembl and BioMart? Contact our helpdesk.

-The Ensembl Outreach team

Orangutan on an Ensembl gene tree
(you wouldn’t expect orangutans out of the trees, would you?)

Ensembl 49 will contain good news on the comparative genomics side. Apart from the new whole-genome multiple alignments, in which we can now handle segmental duplications and infer ancestral sequences (see Ewan’s post of 20 January), two new species will be available, namely horse and orangutan.

We are especially excited about the new orangutan genome, as it is a key species in the primate lineage, sitting between the human/chimp/gorilla group and the Old World monkeys. Its inclusion in our gene trees will result in a better resolution of the phylogeny of primate genes.

We’re pleased to announce that Ensembl now has a mirror at the Beijing Genomics Institute, Shenzhen (BGI-SZ). The mirror can be found at http://ensembl.genomics.org.cn/

Most of the functionality of the main Ensembl site is mirrored; however, we’re still working with our colleagues at the BGI to provide the rest, for example BioMart.

Due to a combination of the volume of data comprising a single Ensembl release (the MySQL data and index files for release 48 take up around 600 GB, and that’s without counting all of the flat-file dumps) and the very slow Internet connection between the UK and China, we’re using a “sneakernet” solution – i.e. dumping the data onto a hard drive and shipping it to China. This has proved to be an interesting challenge, but it’s working out pretty well so far.
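As a rough sanity check on that decision, here is a back-of-envelope calculation. The 2 Mbit/s sustained throughput is an assumed figure purely for illustration, not a measured value for the actual UK-China link:

```python
data_gb = 600        # approximate size of one release's MySQL data and indexes
link_mbit_s = 2      # assumed sustained throughput (illustrative, not measured)

# GB -> bits -> seconds -> days, using decimal units throughout
transfer_days = data_gb * 8 * 1000 / link_mbit_s / 86400
print(f"Network transfer: ~{transfer_days:.0f} days")  # roughly a month

# Shipping a hard drive takes on the order of a week door to door,
# so at this data volume and link speed the disk wins comfortably.
```

Even if the real link were several times faster than assumed here, the transfer would still take longer than the post.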

We hope that this mirror will make life easier for our users in and around China. We’re actively trying to set up mirrors elsewhere around the world to reduce network delays and improve people’s Ensembl experience; we’ll post here as soon as any new mirrors come online.

I would like to thank our colleagues at the BGI-SZ, particularly Lin Fang, for setting this mirror up.

You might think that only our group and team leaders are travelling the globe, but the members of the Ensembl outreach team (Xose Fernandez, Giulietta Spudich and myself) also spend a fair amount of their time on the road (or in the air…) spreading the word about Ensembl.

I myself, for example, have just returned to the UK from a 3-week “Ensembl US West Coast tour”. That means no more Margaritas, motels with an ocean view or trendy LA restaurants for me for a while, but also no more lost luggage or cancelled or delayed flights (it’s not all glitz and glamour…).

My tour started with a visit to the Plant and Animal Genome XVI Conference in San Diego, where I gave a presentation on Ensembl and, together with other EBI colleagues, spent time in the EBI booth promoting our institute. After that I gave Ensembl browser workshops at City of Hope (see picture), the University of Oregon, UCSF, UCSC (where the audience mainly consisted of genome browser folks!), and UCLA. Numbers of participants ranged from around 15 to over 50, and in all places the workshop was very enthusiastically received. In fact, several of my hosts were already asking when we could repeat it….

The principal aim of our workshops is of course to teach people how to get the most out of Ensembl, but apart from that it is also a really good way for us to stay in contact with our users. We can see exactly what people use Ensembl for, how they use it, and what they like and dislike about it, so we always return home with lots of new ideas and suggestions. One thing that often strikes me, for instance, is that most people are not aware of the existence of our data mining tool BioMart. However, after a short explanation and some hands-on exercises, almost without exception they find it very useful! So, we still have some work to do to promote this very handy tool.

By the way, we not only offer browser workshops, but also workshops on the use of the various Ensembl Perl APIs. Keep an eye on this blog to see where and when the next workshops will be. Or, even better, host one at your own university or institute! For more information about our workshops you can contact our helpdesk.