Thursday, 22 October 2015

Genomics and Big Data in Medicine

One of the great challenges – and opportunities – over the coming decade is the perfusion of molecular measurement, and accompanying data analysis, into general medicine. This will be nothing new for clinical genetics and other niche disciplines, but as medicine begins to mine the rich data streams from genomics, transcriptomics and metabolomics research, we will start running into some rather tricky integration problems. This is interesting both scientifically and socially, as a huge wave of technology pushes us to create clinical utility out of a confluence of molecular data, high-resolution imaging and data from continuous-sensing devices.

Opinion-makers have been grappling with these issues publicly for a while, and there are programmes in place in many different countries to enable, exploit and empower this change. Futuristic language like "The End of Medicine" and "The Revolution in the Clinic" is bandied about, and governments, charities and companies are all keen to get involved.

It’s all very exciting.

I have two different perspectives on this issue. First, as one of the world’s major sources of reference molecular information, EMBL-EBI is a trusted adviser and public data and knowledge provider. Our medical strategy is in place, supported by our advisory boards and ready for implementation (I will be writing a paper on this strategy with Rolf Apweiler). As always, we are prepared to help different sectors and communities deal with ‘big data’ storage, standardisation, integration and knowledge management.

On a more personal level, my research collaborations with clinician scientists have opened my eyes to the challenges and opportunities of practical medicine – some of which I mentioned in my blog post on human as model organism.

I also think we should go back to looking at how different technologies have enriched – but not fundamentally changed – medicine, and at how medicine has adopted new technologies over the years. For me, there is no better example than X-rays (I am indebted to the excellent essay and references in “X-rays as Evidence in German Orthopedic Surgery, 1895–1900”, by Andrew Warwick, Isis, 2005, 96:1–24).

Technology, medicine and consumers

Picture of Anna Bertha Röntgen's hand
The fact that X-rays could reveal internal aspects of the human body was discovered serendipitously in 1895 by Wilhelm Conrad Röntgen at the University of Würzburg, when his wife's hand was placed between the discharge tube generating the rays (a strong X-ray source) and photographic film during Röntgen’s systematic analysis of these new electromagnetic rays. The iconic image of her bones and wedding ring shows at a glance that he had discovered a new way to image living tissue, and in that gestalt moment it seems Röntgen himself understood that this would be useful in medicine.

But it took more than 20 years for X-rays to be used widely in medicine, for a number of reasons. For one, the early developers and adopters of X-ray machines were driven not by medical altruism, but by the need to capture the public’s interest and sell kit – notably in the wealthy, technology-obsessed US of the time. Fairgrounds in the northeast began offering ‘bone portraiture’ salons: amusing devices with a live X-ray tube provided either a picture you could take away or even a fluorescent screen for a live “show”. Such portraits were quite a fad in turn-of-the-century New York, with many families proudly mounting their own X-ray portraits in their houses as a talking point. (This does rather remind me of genomics and genotyping being marketed directly to modern self-obsessed consumers.)

Advertisement for a bone portraiture studio
There was both enthusiasm and scepticism in clinical circles – mainly the latter. X-rays made bones visible to the naked eye, but didn’t do much else in terms of treatment. The resolution wasn’t good enough to pick up hairline or in-place fractures, and an obviously broken bone didn’t require X-rays to diagnose. One German doctor in the 1890s, disillusioned by the difficulty of even getting an X-ray machine to work, declared that widespread use of this new technology was “an idle fantasy”. As the poisoning effect of radiation became clear, notably on fairground ‘bone portraitists’, many people in the mainstream medical establishment hardened their view that this technology was mere quackery, useless for clinical practice.

Shooting from the hip

But the ability to see inside the body remained tantalising to many clinicians and scientists, who continued to work on the technology. One group of clinical innovators saw the possibility to improve gallstone treatment. Gallstones are painful, dangerous and difficult to remove, but with the advent of general anaesthesia, surgery was becoming a practical option. What was missing was a way to diagnose the presence of gallstones in patients without having to carry out surgery. However, for those keen to make use of X-rays, there was a catch. Gallstones, despite being quite solid, were in fact transparent to the ‘hard’ X-rays used at that time – unless they had become calcified, which happened in only 5% of cases. So the clinicians had the right idea: make a better diagnosis to inform a clinical action (surgery) that helps the patient. But a key technical detail made it only occasionally successful. As one might imagine, the anti-X-ray crowd pointed to this failure as indicative of the futility of using the technology at all.
Modern X-ray of childhood hip displacement

Clinicians at the time were also arguing over whether surgery or manipulation was the best way to treat childhood hip displacement. Manipulating the hip joint into the socket without surgery (under anaesthetic, in a medical setting) seemed to work well enough. However, this non-surgical approach was traditional, largely carried out by informal medical help, and scorned by the more professionalised medical establishment. As the dispute deepened, the manipulation group (interestingly, led by a surgeon) started taking X-rays before and after treatment to show how the hip joint moved into the correct place following their treatment.

All systems go

Andrew Warwick uses this example to explore how evidence (i.e. X-rays) gains currency in practical medical discourse. Once the medical establishment was convinced of the utility of X-rays, more and more practitioners began to buy X-ray machines, and engineers began to develop the technology for making and controlling X-rays in earnest in a clinical rather than a physics-laboratory setting. Both led to more widespread adoption for things like orthopaedic interventions and, later, the breakthrough use of X-rays to diagnose tuberculosis.

The use of X-rays was also catalysed by historical events. The Great War called for all manner of medical innovation, and in 1915 Marie Curie and her daughter famously set out to help doctors on the battlefields of France see bullets, shrapnel, and broken bones in their patients in the new Red Cross Radiology Service.

X-rays, radiology and imaging have become essential tools for any medical practice. Every clinician must have at least some working knowledge of the different types of images (and the different risks to the patient). They have practical applications in many disciplines, including neurology, internal medicine and cardiology. The discipline of medical imaging, where rather geeky clinicians work with physicists to push the limits of MRI, X-rays (of all sorts) and ultrasound, brings a vast range of technology to bear on improving our ability to look inside the body – and sometimes intervene – in finer and finer detail.

Hindsight is 20-20

Innovative clinicians who believe in the potential of genomics and big-data analysis can learn a lot from the story of X-rays. The enthusiasm of those who grasped the potential of X-rays early on – including visionaries like Röntgen – can serve as a caution, reminding us not to be overconfident about predicting early success. My take-home is that we need to explore many avenues simultaneously – we cannot easily predict where the quickest win will be. Perhaps rare disease diagnosis, or personalised cancer treatment, or infectious disease? But this uncertainty – having to spread our bets to find the first beneficial area – should not deter us from the long-term view that genomics and data analysis will become an everyday part of medicine. Fundamentally this is about understanding ourselves in finer and finer detail, and this information will be useful when we are ill. Who can imagine medicine today without medical imaging? Twenty years from now, will medicine before genomics be recognisable?

Direct-to-consumer genomics has enjoyed rapid uptake from early adopters (myself included!) who may be motivated to show off their knowledge, or who may be keenly interested in their ancestry. But the direct-to-consumer market is not the same as integration into healthcare systems; genomics and data analysis are not an “end run” around medical practice, but rather another tool in the never-ending quest to ensure our good health.

Just as X-rays and imaging were eventually absorbed and codified into clinical practice, genomics and data science will become so ingrained that we will not remember what it was like before. Medical imaging is a rigorous discipline in its own right, but it remains firmly rooted in traditional medical structures to ensure it fits seamlessly into everyday clinical work. Because of this, it remains a familiar sight to any patient with a habit of falling out of trees (and their frantic parents).

Similar to medical imaging, genomics and data science will not change the fundamentals of clinical practice: skilled professionals who have seen many similar (but not identical) cases in others will use their experience and knowledge to diagnose and, hopefully, treat disease. However, it’s quite likely there will be unexpected setbacks and surprising successes along the way, in particular at the start. New medical disciplines (clinical genomics? clinical bioinformatics?) will emerge inside clinical structures, which will provide the bedrock of routine practice. Every clinician will be expected to have a grasp of the fundamentals of these techniques, and specialists will offer more in-depth knowledge. Society is sure to become more comfortable with this new flavour of information, and more self-monitoring (on devices, at home) will change how information is gathered around an individual – often the same people who are already motivated to research on the Internet before a visit to the clinician. But intelligent, skilled individuals who have seen many examples of a particular scenario will still be needed to guide, inform and treat, whatever the density of information gathered.

Genomics and data science, once they’ve shown their worth in practical day-to-day practice, will help clinicians make better decisions for their patients. Some diseases will transition from problematic to routine diagnosis and treatment. Some diseases will not be as affected by these technologies. There may be plenty of glitches, dead ends and troubling uncertainties, but if we learn from innovators of the past, using these technologies will quickly become as routine as going to the X-ray department to examine a hairline fracture.

Monday, 5 October 2015

10,000 Up

I've just passed my 10,000th follower on Twitter, and, as when I went past 5,000 followers, this feels like a good point to reflect on the open, 'blog-and-tweet' world evolving around me.

Many of the comments I made two years ago have stood the test of time: Twitter is still fundamentally a conversation, broadcast not just to your lunch queue but worldwide, and blogs remain lightweight, informal platforms for review and commentary. And as with any conversation, you have to consider your audience first; and as with all public writing, everyone still need and editor [sic].

So, happily, the basic principles hold true. But with this uptick, I want to reflect on whether these media/platforms are actually right for everyone, and on the future of open discussion.

You don't have to join Twitter

I get a lot out of Twitter, and part of that includes sharing interesting papers or blog posts I've spotted in my twitter stream or adding new ones to the mix. I find myself asking people if they are on Twitter so I can be sure to reference them when their work comes up. This sometimes elicits a somewhat sheepish response along the lines of, "I don't do Twitter," or "Twitter makes me feel uncomfortable," or, in rare cases, a flash of rage against the world of informal and all-too-short critiques/discussions. 

I try to be positive about my Twitter experience while acknowledging that Twitter is not for everyone, and recognising its pitfalls: it can be distracting, people are often tempted to over-share aspects of their lives, etc. Twitter is certainly not a requirement, and blogs are rather personal. While I am not alone in finding these media useful, I wouldn't say that you are missing out on crucial information if you don't jump in. Mainstream scientific interaction is still via published, peer-reviewed papers and giving talks at conferences and workshops, and that is where the really important stuff is to be found.

You don't have to interact on Twitter

Many people use Twitter as a sort of personalised, text-based radio station: something you can tune into when you want, that gives you a personalised mix of news. I am pretty sure no one else has my mix of interests in worldwide science, UK, European and US politics, cricket and events in the North Tyne valley (in Northumberland). Twitter is useful just in this "consumption" mode.

Twitter is also inherently ephemeral - people shouldn't expect you to be on it all the time (although it seems like some people are!), and no one can expect that a comment (when someone @'s you) will always elicit a response. People have lives, holidays, emergencies and so forth, and all of those things make it perfectly reasonable to ignore this global conversation.

But if you do start a conversation about a topic, I think it is reasonable for others to expect that you will respond and follow through on sensible comments. Some conversations spark a need for deeper discussion (i.e. "we're going to need to meet up face to face for this one..."), and some might dwindle because you have to take your kids to a pirate party, sleep, etc. And that's fine. The instant, global nature of Twitter makes it empowering in some ways, but it also needs to be tamed to fit into your life.

140 characters: plenty of room for misinterpretation

Twitter's limitation of 140 characters can be challenging, and it's surprisingly easy to misinterpret a tweet - in particular when it is intended to be funny, or when it involves a disagreement. Without body language, vocal nuance or the comfort of expository paragraphs, it is extremely easy to blunder on, not realising that the other person has taken offence, become confused or both.

So I stand by my rules of avoiding 'jokes' on Twitter (unless I'm very confident that the people in the conversation will get it), and avoiding disagreements, particularly complex ones. Science is full of ambiguity and disagreement, which is part of what makes it exciting – but Twitter's natural limitations turn those elements into confusion all too easily. It just is not the right medium for this kind of engagement (more on this below).


Post-publication peer review is cool

The rise of open, “pre-peer review” platforms in molecular biology communication is a big step change. It started with the quantitative biology community putting papers on the physics arXiv site, which had long since become the mainstream way to communicate in high-energy and theoretical physics. Two platforms have been added to this: the first is bioRxiv, which has a similar model to arXiv but with a stronger emphasis on biology papers (e.g., it processes Word files). The second is F1000Research, which combines submission, peer review and publication production seamlessly in one workflow – and insists on open peer review, so the peer-review comments and replies are all published together with the paper. Everything in the open.

In this new publishing environment, blogs are starting to feel very … 2000 to me. Why am I blogging when I could write a pre-publication with a DOI, which can be cited and made a more formal part of the scientific discourse? Blogging is nice because it is accessible to a broader audience and allows for a more chatty, 'natural language' style – but if the main purpose is to communicate with scientists, pre-publication servers are a better way to go. If the goal is to communicate with a broad set of interested people, an online magazine or aggregation site might be better. The independent blog is on borrowed time (which does make me wonder why I’m writing this).

There is a lot of social innovation happening in science, and each new development takes a bit of adjustment. (For example, keeping track of comments on F1000Research articles can be a bit exhausting.) In general, the move towards a more rapid, open discourse is almost certainly a good thing... Except...

Open, recorded, textual criticism is complex

These new, open communication platforms share three key characteristics: they are text-based, they can be close to immediate (like spoken conversation) and, importantly, they are 'on the record', indefinitely.

This is an explosive combination for constructive criticism. The fluid, conversational nature of the media makes it easier to voice criticism, but it demands a certain level of nuanced writing skill to make up for the absence of voice and body language. These platforms make it too easy for people to barrel on with their views, offering no mechanism to signal, for example, the tone of (or meaningful silence from) the other participants. Disagreements can flare up, triggered by trivial things like different uses of common definitions, or by deeper problems like a fundamental misunderstanding of the important elements of different positions in a debate.

The global nature of these communications adds many layers of cultural complexity: in science they are mainly carried out in English, which provides endless opportunities for misunderstanding and perceived callousness arising from people's different use of the language. But it is perhaps the permanent, public, recorded nature of these critical conversations that raises the stakes highest. Onlookers can read motivations into the text that they would not perceive in the same spoken conversation. One cannot easily forget such a disagreement when it has been so public and lasting, in particular when it becomes personal.

Online constructive criticism in science

I don’t think we yet have the social mores for handling criticism on these communication media. There are so many cultures and subtleties – there is a reason, after all, why scientific writing in papers is so dry and literal. In online conversations criticism is often so polite as to be obtuse, or impossible to understand. More upsettingly, criticism can be so vitriolic and personal that it cannot be absorbed at all by the person receiving it or by the audience observing the dialogue.

Science is not the only field that has to work out a social contract for online communication, and I have a feeling that it’s going to be the generation that grew up with Facebook, Twitter and Snapchat that works out how best to do this. But I don’t think the current behaviour is right. Constructive criticism – honest, but aiming for the best science – is such an important part of our world that we need to create the right environment for it online.

Overall, I am still positive about open communication. I find Twitter interesting, informative and fun, and I find blog posts provide an outlet and means of communication that lets me write more broadly and accessibly about science. These media will evolve - just as every other medium has - and be used by more and more scientists. I still enjoy being part of this evolution.