Peer-reviewed videos: the way forward for methods papers?

Last year I published my first ‘paper’ with JoVE – the Journal of Visualized Experiments. JoVE is a video journal that I’d heard about from a collaborator, who suggested that our MRI-targeted prostate slicing method ‘PEOPLE’ might be a good fit. It sounded like a great idea!

I’m happy to report that there’s no twist coming in this blog – the experience was great, and I’d recommend them to others too!

Image: ‘Seal of Approval’ by Jaco Haasbroek – source: threadless.com

With JoVE, you submit an abstract & basic written paper of your method (or whatever research you’d like to publish as a video). The written submission is peer reviewed and edited as necessary, and once the reviewers are happy, you begin to plan a filming day. There are a few options here – I chose the more expensive option of having JoVE arrange the script, filming & editing for me, rather than doing it all myself. The benefit is that you get to work with professionals who know how to get the right shots and the right lighting, and who edit everything so that other scientists can clearly see everything they need to see and learn the method well enough to carry it out themselves.

This was of particular benefit to me, as a (very!) amateur YouTuber with Cancer Research Demystified – I wanted to learn how the professionals do it!

Our videographer was Graham from https://www.sciphi.tv/. Working with him was a brilliant experience – he was an ex-researcher himself, and had extensive experience both carrying out and filming science. He made the day fun, quick and easy – if you ever need someone to film an academic video for you I highly recommend his company!

Filming day itself wouldn’t have been possible without the rest of our research team helping out (in particular Hayley and Aiman – thank you!) and of course a very generous prostate cancer patient, who was undergoing radical prostatectomy, kindly agreeing to take part in our research.

After a short wait we received a first draft of our video, which we were really happy with – we had the opportunity to make a round of edits (there weren’t many), and then before long the video was up on JoVE’s website, as well as PubMed and all the usual places you’d read scientific research in paper form!

Personally, I think videos make a whole lot more sense than written papers for sharing methodologies. I’ve used JoVE videos for training myself – notably for learning to build tissue microarrays (TMAs), and without those videos I’m not sure I could have learned this skill at all – as our resident experts had left the lab! A paper just wouldn’t be able to clearly explain how to use that equipment. With JoVE, there’s always a PDF that goes alongside the paper too, so once you’ve watched and understood the practical side, you have the written protocol to hand while you’re in the lab. The best of both worlds.

I’ve always been a fan of simple solutions (I’m a bit of a broken record on this) – and JoVE is a perfectly simple solution to providing training that will show you how to do something rather than just tell you.

One caveat – it’s not cheap. But your fellow scientists who want to learn your methods will thank you – you’re doing the rest of us a favour! Of course, there’s always YouTube for a free(ish) alternative. But in my view, the added layers of peer review and professional production are worth the extra cost.

Here’s our JoVE video & PDF publication – enjoy!

https://www.jove.com/t/60216/use-magnetic-resonance-imaging-biopsy-data-to-guide-sampling

And no, this blog was not sponsored by anyone – I’m just a fan & paying customer!

A tour of our lab!

A quick blog this week! I wanted to take a moment to introduce one of our favourite Cancer Research Demystified videos. Here, we give a tour of our lab so that cancer patients, carers, students and anyone with an interest can see what cancer research really looks like!

During our first couple of years meeting with cancer patients, Hayley and I noticed that for a lot of them, their main frame of reference for what a science lab looked like was ‘the telly’. Whether it was CSI, or even a particularly slick BBC News segment, it was clear that research labs were expected to be minimalist, futuristic, and full of coloured liquids.

The occasional person would describe the opposite picture – dark wooden cabinets filled with dusty glass specimen jars, stained benches, blackboards, worn-off labels on mystery chemicals, and that strong, ambiguous smell.

Of course, neither is accurate. Real cancer research labs are somewhat modern, sure, but even the most expensive and ‘futuristic’ equipment typically looks more like a tumble dryer than an interactive hologram, and though much of our equipment does use lasers – they are hidden deep inside rather than scanning the lab for spies! Blackboards are long gone, replaced with whiteboards, and dusty unlabelled jars are disposed of under strict health and safety protocols. Although stains on benches…? Well, some of those remain.

We did face some mild resistance when we first attempted to film this video. A senior member of staff advised us that patients want the comfort of knowing that the best brains in the world are working on a cure, using the best technology and most impressive workspaces. That’s why, we were told, we needed to clear out so much lab mess before the camera crews came in for a news segment.

But frankly – those perfect, sterile, swish labs are out there – if someone wants to see a scientist in a never-before-worn white coat pipetting some pink liquid into a plate, all they need to do is turn on the news. We wanted to show something different – and frankly, more honest – warts and all!

The video we ended up with is a little on the nose perhaps, but we felt it needed to be. We show the reality of what it’s like to work in a lab (well, close to reality anyway – we filmed after hours to avoid getting in people’s way, so it is unusually quiet). Some of the differences between day-to-day lab work and office work are highlighted, such as not being able to eat, drink or touch up your make-up within the lab, and having to wear appropriate PPE.

I came back to this video during lockdown because I missed the lab. I still haven’t been back in there, and I’m not sure when I next will be. Other people are back there now though, under strict covid protocols, with significantly reduced capacity and masks. I hope to join them one day, but for now I’m minding my asthmatic lungs at home!

If you’re a cancer patient or carer – here’s a real look at where we’re carrying out the research to build better diagnostics and therapeutics. If you’re a student thinking about doing a medical/biology based research project – this is the sort of place you’ll find yourself working. Please enjoy!

For more Cancer Research Demystified content, here’s where you can find us:

YouTube: https://www.youtube.com/c/CancerResearchDemystified

Twitter: @CRDemystified

Instagram: cancer.research.demystified

These blogs come out every Monday at 11am GMT – so I’ll see you next week!

My love/hate relationship with impact metrics.

Academic impact metrics fascinate me. They always have. I’m the kind of person that loves to self-reflect in quantitative ways – to chart my own progress over time, and with NUMBERS. That go UP. It’s why I’ve been a Fitbit addict for five years. And it’s why I’ve joined endless academic networks that calculate various impact metrics and show me how they go UP over time. I love it. It’s satisfying.

Image: a ‘graph going up’ icon – source: SeekPNG

But as with anything one tends to fangirl over, early on I started picking holes in the details. Some of the metrics overlook key papers of mine for no apparent reason. Almost all value citations above all else – and citations themselves are problematic to say the least.

Journal impact factor is a great example of a problematic and overly relied-upon metric. I am currently teaching our MSc students about this, and I found some useful graphs from Nature that show exactly why (which you can read about here) – from variations across disciplines & over time, outlier effects and impact factor inflation, all of which were no surprise, to an over-reliance on front matter – which was new to me!
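
The outlier effect in particular is easy to demonstrate with a toy calculation. Here’s a minimal Python sketch, assuming the standard two-year impact factor definition (a simple mean of citations per citable item) and entirely made-up numbers:

```python
# Toy demonstration of the outlier effect on journal impact factor (JIF).
# A journal's two-year JIF is essentially a mean: citations received this
# year to items published in the previous two years, divided by the number
# of citable items. All numbers below are invented for illustration.

citations = [2, 0, 1, 3, 0, 2, 1, 0, 2, 1]  # ten citable items, modestly cited

print(sum(citations) / len(citations))   # 1.2 -- an unremarkable JIF

citations.append(500)                    # one blockbuster paper in the same window
print(sum(citations) / len(citations))   # ~46.5 -- driven almost entirely by one paper
```

Because the impact factor is a mean over a very skewed distribution, a single heavily cited paper can drag it far away from what a typical paper in that journal achieves.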

There are problems.

They are noteworthy.

But we still use impact factor religiously regardless.

My husband used to run committee meetings for a funding body, where he would sometimes have to remind the members & peer reviewers that they should not take journal impact factor into account when assessing publication record as part of researcher track record, as per the San Francisco Declaration on Research Assessment (DORA): https://sfdora.org/read/. Naturally, these reminders would often be ignored.

There’s a bit of a false sense of security around ‘high impact’ journals. That feeling of surely this has been so thoroughly and rigorously peer reviewed that it MUST be true. But sadly this is not the case. Some recent articles published in very high impact journals (the New England Journal of Medicine, Nature, The Lancet) were retracted after being found to include fabricated or unethical research. These can be read about at the following links:

1. “New England Journal of Medicine reviews controversial stent study”: https://www.bmj.com/content/368/bmj.m878

2. “Two retractions highlight long-standing issues of trust and sloppiness that must be addressed”: https://www.nature.com/news/stap-retracted-1.15488

3. “Retraction—Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis”: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)31324-6/fulltext

Individual metrics such as the H-index also typically rely on citations. An author’s H-index is calculated as the number of papers (H) that have been cited at least H times. For example, a researcher who has at least 4 papers that have each been cited at least 4 times has an H-index of 4. This researcher may have many more publications – but the rest have not been cited at least 4 times. Equally, this researcher may have one paper that has been cited 200 times – but their H-index remains 4. The way the H-index is calculated attempts to correct for unusually highly cited articles, such as the example given above, reducing the effect of outliers.
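
For illustration, here’s a minimal Python sketch of that calculation – my own implementation of the standard definition, not taken from any particular tool:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank      # this paper still clears the bar
        else:
            break         # every later paper has fewer citations
    return h

# The examples from the text: four papers cited >= 4 times each gives h = 4,
# and swapping in a single 200-citation paper leaves it unchanged.
print(h_index([10, 6, 5, 4, 1]))    # 4
print(h_index([200, 6, 5, 4, 1]))   # still 4
```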

The H-index is quite a useful measure of how highly cited an individual researcher is across their papers. However, as with impact factor – it is a metric based on citations, and citations do not necessarily imply quality or impact.

Another key limitation is that the H-index does not take authorship position into account. Depending on the field, the first author may have carried out the majority of the work and written the majority of the manuscript – but the seventeenth author on a fifty-author paper will get the same benefit to their own personal H-index. In some studies hundreds of authors are listed – and all will benefit equally, though some will have contributed little.

An individual’s H-index will also improve over time, given that it takes into account the quantity of papers they have written and the citations on those papers – both of which accumulate over time. The H-index therefore correlates with age, making it difficult to compare researchers at different career stages using this metric.

Then of course there’s also the sea of unreliable metrics dreamt up by specific websites trying to inflate their own readership and authority, such as ResearchGate. This is one of the most blatant, openly giving significant extra weight to reads, downloads, recommendations and Q&A posts within its own website when calculating its impact metrics, the ‘RG Score’ and ‘Research Impact’ – a thinly veiled advertisement for ResearchGate itself.

If you’re looking for a bad metric rabbit hole to go down, please enjoy the wide range of controversy both highlighted by and surrounding Beall’s lists: https://beallslist.net/misleading-metrics/

Altmetrics represent an attempt to broaden the scope of these types of impact metrics. While most other metrics focus on citations, altmetrics include other types of indicators: journal article indicators (page views, downloads, saves to social bookmarks), social media indicators (tweets, Facebook mentions), non-scholarly indicators (Wikipedia mentions) and more. While it is beneficial that altmetrics rely on more than just citations, their disadvantages include susceptibility to gaming, data sparsity, and difficulty translating the evidence into specific types of impact.

Of course, despite all of the known issues with all kinds of impact metrics, I still have profiles on Google Scholar, ResearchGate, LinkedIn, Mendeley, Publons, Scopus, Loop, and God knows how many others.

I can’t help it, I like to see numbers that go up!

In an effort to fix the issues, I did make a somewhat naive attempt at designing my own personal research impact metric this summer. It took authorship position into account, as well as weighting different types of articles differently (I’ve never thought my metrics should get as much of a bump from conference proceedings or editorials as they do from original articles, for example). I used it to rank my 84 Google Scholar items from top to bottom, to see which papers represented my most significant personal contributions to the field. But beyond the extra weighting I brought in, I found myself falling into the pitfall of incorporating citations, journal impact factor etc. – so it was still very far from perfect.
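
For the curious, a sketch of the general idea is below – note that every weight and the whole scoring scheme are hypothetical stand-ins for illustration, not the actual numbers I used:

```python
import math

# Hypothetical article-type weights: original articles count most,
# conference proceedings and editorials far less.
TYPE_WEIGHTS = {"original": 1.0, "review": 0.6, "conference": 0.3, "editorial": 0.2}

def author_weight(position, n_authors):
    """Give first and last authors full credit; middle authors share a fixed pot."""
    if position in (1, n_authors):
        return 1.0
    return 0.5 / max(n_authors - 2, 1)

def contribution_score(article_type, position, n_authors, citations):
    # log1p dampens raw citation counts so one blockbuster paper can't dominate
    return TYPE_WEIGHTS[article_type] * author_weight(position, n_authors) * math.log1p(citations)

papers = [
    ("original", 1, 6, 40),      # first author on an original article
    ("conference", 17, 50, 40),  # seventeenth of fifty authors on a proceedings paper
]
for paper in sorted(papers, key=lambda p: contribution_score(*p), reverse=True):
    print(paper, round(contribution_score(*paper), 2))
```

Even this toy version makes the point: the same 40 citations contribute very differently depending on article type and authorship position – though, because the score still contains a citation term, it inherits all the usual citation problems.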

If you know of a better attempt out there please let me know – I’m very curious to find alternatives & maybe even make my own attempt workable!

Many thanks to Prof Kurinchi Gurusamy for discussions and examples around this topic.

Shifting the bench/desk balance: impact of COVID-19.

During the last few years I’ve noticed one topic coming up again and again over coffee/drinks with other researchers: our collective gradual shift from the bench to the desk.

Of course, none of us were expecting the wet lab to actually go off limits for six months!

Image source: labmanager.com

Pre-covid:

During my PhD, most of the day would be spent hanging out in the lab, with two or three ‘wet’ experiments on the go at a time, and minimal time during incubations for analysis/writing. During my postdoc years, this balance began to shift for me, and I think this is the same for a lot of us. We had all noticed a massive increase in wet lab data being generated, with virtually every technique gradually being made obsolete by increasingly affordable multiplexed or genome-wide versions. With more and more data being generated quicker and quicker, we all had a bit more time to sit at the desk, and a lot more data to play with there.

This manifested itself quite clearly in the perpetual fight for space in academic departments, which shifted from competition over bench space in the labs to competition over desk space in the offices!

With my generation of researchers not always having in-depth bioinformatics or statistical knowledge as a given, there has been an element of trying to play catch-up at the desk. Most of us know one or two computer whizzes we can ask for help in our departments, but they of course are swamped with ‘quick’ questions from everyone, and just can’t train everyone from first principles. So we’ve been collectively trying to teach ourselves large-scale data analysis while still producing wet lab data at the same time. It’s been a lot.

The covid months:

So how has seven months at home affected this? Well for me, it’s safe to say I’m beginning to run out of data to analyse for the first time in a very long time. I didn’t anticipate ‘running out’ of my own wet lab data ever – so it’s quite an odd feeling. I’m simultaneously making the transition to life as a faculty member, taking over modules and preparing new ways of teaching online, so it probably took me a bit longer than the average researcher to run out of research data – I imagine many wet lab PhD students hit this stage a good few months before I did.

For others, from what I’ve seen and heard, there has been a lot of upskilling happening to fill that lab-gap, and not a moment too soon. Many have been learning R or Python for the first time, or brushing off old half-attempted databases. Many have been learning to conduct systematic reviews and meta-analyses for the first time too, with our Division’s online modules on these topics having recently been made available to staff as well as students – and with an enthusiastic uptake.

On a wider scale, for the first time in what feels like a long time, my field is starting to catch up with itself. People are stepping back, taking a breath, and appreciating the enormous volume of data around us. What’s more, we’re taking the time to not only read more of each other’s papers, but critically analyse them, validate what we can from home, and publish these findings too. This is something we’ve all previously lamented at those coffee/drinks chats that we wish we had the time to do!

This is much-needed, and well overdue.  

Post-lockdown:

I can only hope we continue to take this approach to research, as we gradually transition back to life in the lab. I now fully believe that one or two days of the week at the bench, with three or four at home or in the office could honestly achieve more overall than my previous habit of 5 days minimum in the lab.

For this academic year, although our labs have partially reopened, I’ve designed four student research projects that are all fully desk-based. This means that whether lockdowns happen or not, research can continue. If you’d asked me this time last year, I wouldn’t have thought I could supervise four non-wet-lab students, but the collective ‘we will figure this out’ attitude has rubbed off on me! If all four go to plan, they’ll really help to get my lab off the ground while I’m recruiting my new team, and I’m really glad that this is possible from home.

It’s hard to find silver linings from 2020, but I honestly think our collective shift in focus from creation of data to critical analysis of data could be transformative. Let’s hope we all learn from this and continue to improve our practice as time goes on!

Myth-busting the fake news about cancer research

When Hayley and I began our YouTube channel, Cancer Research Demystified, we had a clear aim in mind: to give patients & their loved ones answers to their questions about cancer research. We began with tackling the science of common treatments like chemotherapy and radiotherapy, explaining the latest hot topics in research like immunotherapy, and showing footage of what happens to a patient’s donated blood or tissue sample when we receive it in a research lab.

But over time, we noticed that these weren’t necessarily the most common questions we were actually getting from patients. Whether we were discussing latest advances in a support group meeting, consenting a patient to take part in a research study, or even just chatting to a taxi driver or barman who mentioned they had a family member with cancer – one question type was emerging as a very common trend.

Cancer conspiracies.

Now and then, patients & their loved ones would ask us if it was true that big pharma is keeping the cure for cancer a secret. Or indeed, politely inform us that this was happening, with certainty – to them it was a fact.

While getting an Uber to my lab one day at Cold Spring Harbor Laboratory, USA, my driver told me that what I was doing was a waste of my time – that his cousin was importing the cure from China and selling it at a very reasonable price, and that the US regulators refuse to approve it, because they make too much money from chemotherapy.

In trying to engage with the online cancer patient support community, I joined a wide range of Facebook cancer support groups early on in the Cancer Research Demystified days. I was baffled at the sheer volume of misinformation being shared there. It seemed every time I logged in I came across someone trying to make money off desperate cancer patients – whether it was essential oils, CBD products or alkaline water, the list goes on.

It enraged me to see people trying to make a quick buck off vulnerable people. A cancer diagnosis is an extremely overwhelming thing, with patients getting a huge amount of technical jargon thrown at them during a time of great emotional challenge. You can’t be expected to get a PhD or MD overnight, in order to tell apart the clinicians from the scam artists, and you shouldn’t have to.

Of course, the moment you bring up this topic in an office full of cancer researchers – you get a response. Everyone has a story to tell, whether it’s a vulnerable relative being led to believe they could avoid surgery for their cancer and just get acupuncture instead, or a set of memes or viral tweets convincing people that cancer researchers like us are keeping a cure secret in order to line our own pockets.

It didn’t take long for us to decide to make a small series about this for YouTube. We roped in a colleague, Ben Simpson, who had a penchant for schooling those who were attempting to spread misinformation online. So far, we’ve produced three episodes, under our series ‘Spam Filter’. The aim is to address these sorts of questions by reviewing the peer-reviewed literature on each topic, explaining the facts, and discussing why some of these rumours or myths might have managed to take hold.

Is cannabis a cure for cancer?

This topic is persistent online, and it’s easy to understand how it has grown legs, given some of the chemicals found in cannabis can genuinely help to relieve some symptoms/side effects of cancer or cancer treatment. It is not, however, a cure.

Is big pharma covering up the cure for cancer?

This one is a bit irritating for us, to say the least, given we have all dedicated our lives to researching cancer. It’s also hard to provide peer-reviewed data on something that isn’t real, but we’ve done our best to explain just how hard it would be to cover up a cure, given the numbers involved – as well as why nobody would bother, given they’d become rich beyond their wildest dreams by simply marketing the cure instead!

Finally, the alkaline diet

This is a persistent myth online: that making your body more alkaline by eating alkaline foods (which in some cases are actually acidic) could prevent or cure cancer. It’s a trendy diet that really doesn’t make much sense at all. However, it’s very easy to see why people might think it is working, given they can test differences in their urine’s pH that make it seem like something is changing. For this video we did some urine and blood tests on Ben before, during and after a day of eating this diet, and discussed the facts and myths involved.

Which cancer myth do you think we should bust next? Or better yet, is there a rumour, trend or theory going around that you’ve seen, and you can’t tell whether it’s legit or not? Let us know and we’ll try our best to get to the bottom of it!

Back to (virtual) school: perspectives of a new lecturer during COVID.

Image: the classic ‘I don’t know what I’m doing’ dog meme

My first term leading a module during COVID also happens to be my first term leading a module at all – and of course I’m not just leading one, but two! Everything is new, and this brings with it lots of challenges, but also lots of support. While I don’t particularly know what I’m doing this year, the comforting thing is – nobody else does either!

Fortunately, all module & programme leads at UCL were enrolled as students on a mandatory online module this summer, which was written to help us learn how to adopt the University’s ‘Connected Learning’ approach.

The module was packed full of ideas to engage students from their computer screens, get everyone involved & motivated, and explain complex concepts in simple terms without necessarily having live feedback or queries from the students.

It was brilliant.

It was also terrifying.

Frequently throughout the module, reference would be made to last year. How did you engage your students on this module last year? How did you collect feedback, and what did you learn from it? How much of this content was made available online last year? How will this need to change during COVID? For me of course, the answer was generally ‘…dunno?’

It’s a strange thing to start from square one during a year like this.

The massive positive, as I said at the beginning, was that everyone else was just as lost as me. Every time I (virtually) ran to someone for help and gave the disclaimer I HAVE NO IDEA WHAT I AM DOING, I was always greeted with a smile, a laugh, and a ‘me neither’. We muddled our way through together. Not one person told me off or said I should have already known the answer to something. A huge range of people patiently (virtually) sat me down and taught me the basics of the systems I was new to, that they were experts in.

One of the biggest challenges was that we re-wrote one of our modules from scratch this summer. As well as having new learning outcomes and themes, it required 21 brand new lectures and hundreds of new randomized online exam questions, and the content itself is now entirely online and viewable directly in the course page. This includes interactive elements, videos, and different kinds of ‘check your knowledge’ sections, from drag-and-drop answers to standard MCQs, all with a handy progress bar and navigation panels to make it as straightforward and accessible for the students as possible. The formatting was made consistent using HTML, which I was surprised to find I somewhat remembered, having learned it way back during the MSN Messenger days! This module was a team effort and a huge body of work that I’m very proud to have had a hand in.

Aside from developing new content in new ways, there were a lot of other tasks I needed to learn in order to lead modules this year. I was taught to use a range of different central college systems to arrange things like student timetables, exam marking and academic records, and even how to install virtual laboratory simulations within a course page (which, by the way, is so much fun).

On that note – particular thanks to Atalanta, Kurinchi, Norman, Darren, Alvena, Zahra, Tope, Lauren, Umber, Faith & everyone else who has taught me how to lead modules over the last few months – I very much owe you all a drink if we ever see each other again!

The collective sense of WE WILL FIGURE THIS THING OUT TOGETHER was a truly inspiring thing.

To be quite honest, there can be times in academia where everyone around you is so deflated or overwhelmed with their own individual academic stresses, that peer support can be truly lacking. But in these last few months trying to get our new online teaching up and running, this could not have been further from the case. Everyone I mentioned above (and more) had time and patience for me when I needed it, and for that I am extremely grateful.

We’re a couple of weeks into term now, and so far we’ve had no major issues or setbacks. My first few Zoom lectures went off without a hitch – no technical difficulties, no complaints, not even one moment of ‘eh I think you’re on mute there Susan’!

The students I have met so far have been motivated, eager and engaged. Even with the majority of cameras off, I can still hear the smiles in their voices. They laugh along, they suggest things, they answer questions – the Zoom fatigue I had expected (that many of us around the world have fallen victim to) was not particularly apparent. I am sure over the course of the term this may change, but so far things are incredibly positive. They are here (virtually) and they are ready to learn.

During one of my online induction sessions I used a poll to see how students were feeling about completing a module virtually. I included a range of answers from ‘anxious’ to ‘lonely’ to ‘excited’ and was giddy to see that by far the most popular answer was ‘delighted to be able to attend lectures in my PJs’!

To summarize, my experience of being a lecturer during covid has been significantly better than expected – so please cross your fingers for me that this continues!

Research integrity: good practice for new PIs!

Everyone loves a fresh start. Founding a research group is an exciting time in anyone’s career, and offers a great opportunity to start with a clean slate and embed good practice within the team right from the get-go!

For me, this is my first year as a member of faculty, and I’m hoping to recruit the first members of my research team as soon as covid settles down a bit. I’ve also been lucky enough to get involved in co-leading a postgraduate module on research methodologies this year, for which I am developing content on research integrity alongside a Professor of evidence-based medicine. He has a wealth of knowledge on these topics, and has highlighted a range of evidence-based resources that we’ve been able to incorporate into our teaching. It’s great timing, so I also plan to build these into the training that I provide for my research team, as we hopefully lay the foundations for a happy, productive and impactful few decades of ‘Heavey lab’.

Here are six examples of good practice that I plan to incorporate, along with some links if you’d like to use them in your own teaching/research.

1. Research integrity: this is key to ensuring that our work is of the utmost quality, that it can be replicated and validated, and that it can ultimately drive change in the world. While this is something researchers often discuss ad hoc over coffee, there are also formal guidelines available, and these remove some of the ambiguity around individual versus institutional responsibilities. Below you’ll find a link to the UK concordat to support research integrity. It is a detailed summary of the agreements signed by UK funding bodies, higher education institutions and relevant government departments, setting out the specific responsibilities we all have with regard to the integrity of our research. I intend to go through this with my team so they are clear on their own responsibilities as well as mine, and those of our funding bodies and institutes. https://www.universitiesuk.ac.uk/policy-and-analysis/reports/Documents/2019/the-concordat-to-support-research-integrity.pdf
2. Prevention of research waste: research waste should be actively avoided. This figure is a clear summary, and I’ll keep it visible to my team so that we can all work together to prevent wasting our own time and resources, and to maximise the impact of our work. Some of these points force us to really raise our game, and I’m excited to get stuck in.

Figure ref: Macleod MR, Michie S, Roberts I, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383(9912):101-104. doi:10.1016/S0140-6736(13)62329-6

3. Prevention of misconduct: The word ‘misconduct’ may strike fear in the heart – but it describes a whole range of things, not just the extreme cases. Misconduct is not always intentional, and should be actively and consciously avoided rather than assuming ‘we’re good people, I’m sure we’re not doing anything wrong’. Here’s a quick checklist that you can use as a code of practice, to keep track of your research integrity and prevent research waste or misconduct. It’s not as detailed as the last link, and I plan to use it with each member of my team before, during and after our projects, to help us to consciously avoid misconduct. https://ukrio.org/wp-content/uploads/UKRIO-Code-of-Practice-for-Research.pdf

4. Prevention of ‘questionable research practices’: This figure below, from another blog, does a great job of highlighting many of the ‘grey areas’ in research that border on misconduct. Sadly, we’ve all seen some of these – from data secrecy (often due to laziness or lack of understanding rather than malice) to p-hacking (where someone runs as many statistical tests as they need to until they find/force a ‘significant’ result), or maybe it’s manipulating authorships for political gain, or playing games with peer review to win a perceived race. The ethical questions around these practices are often brushed aside as we try to ‘pick our battles’ and avoid conflict, but they can only be stopped if we’re open about them, and discuss the ramifications to the field and the wider world. I plan to display this figure and share anecdotes of bad past experiences with my team, so that they can learn from others’ bad practice in the same way I have. Unfortunately some lessons are best learned as ‘how not to do it’.  

https://blogs.lse.ac.uk/impactofsocialsciences/2015/07/03/data-secrecy-bad-science-or-scientific-misconduct/

5. Making documentation visible: To adhere to our own personal responsibilities around research integrity, we need to be clear on which rules and regulations we are each beholden to. I will keep ethics procedure documents, protocols, patient information sheets and consent forms visible and easily accessible to those who are authorized. I want my staff and students to know exactly what they can and can’t do in their research practice. I will also ensure they are familiar with the intricacies of each project’s approval, which can vary significantly. This sounds like a no-brainer – but ask yourself, have you ever worked on a project where you couldn’t access the latest full version of the ethics approval? Where maybe you had laid eyes on a draft or an approval letter, but not the full application? This happens far more often than it should, and leaves researchers unable to adequately adhere to their own personal responsibilities under the concordat linked above. It’s required, it’s an easy win, and I will make sure it’s the case for my team.

6. Safe space: I believe it’s crucial to encourage a safe environment where team members can ‘speak up’ about any of the above. This requires extra effort in the world of academia, which often discourages this. The life of an early career researcher is fragile, as you bounce from contract to contract, always worrying about stability and fighting for the next grant, the next authorship. The slightest knock to your reputation can seriously affect your future career, and this conscious fear can lead to team members not feeling safe to call out questionable practice. It’s not going to be easy to foster an environment where the whole team feels comfortable speaking up about questionable practice without it leading to a conflict, but I’m going to try my best to achieve this. I aim to make it abundantly clear to my team that they will not face any retaliation for calling out others’ questionable practice or identifying their own – no matter the consequence, even if it means ultimately we have to scrap a massive project, I will thank them. I would much rather know that something has gone wrong so I can correct it, retract it or edit it, rather than continue on not knowing. Anyone who comes to me with an honest concern will be treated with gratitude.

These six measures are of course not exhaustive, and I aim to continue to appraise the literature on good research practices, so that as well as starting on a solid foundation, we can also build better and better practice as we go.

Onwards and upwards!

Particular thanks to Prof Kurinchi Gurusamy for pointing me towards some of these great resources!

Why I started writing ‘To Did’ lists!

I’ve always been a fan of writing ‘To Do’ lists – they’re great for keeping track of small bits of work that could slip between the cracks during a busy day or week, and they’re also great for a little dopamine burst when you tick off an item.

Of course the drawback is the list always grows longer, and never gets completed!

Recently, as part of my transition into life as a member of faculty, I’ve started occasionally writing the opposite version, which I’ve taken to calling my ‘To Did’ list. Yes, I realize some people go with ‘To Done’ – but it’s on my ear now and I’m sticking to it!

The list consists of things that I have taken care of in a given day or week, and forces me to take a few minutes to acknowledge the work that I have managed to get done, rather than always focusing on the mountain ahead.

It also allows me to visualise the spread of different types of work that I’ve done, to see if it aligns roughly with how I intended to balance my time between research, teaching, and other tasks.

Image credit: Inside Higher Ed

This is useful, because I’ve received warnings from quite a few academics that in my first year as a lecturer I would likely end up doing virtually all teaching, and virtually no research, and that I should try to make sure my research isn’t neglected if at all possible.

I always wondered whether this early research-teaching imbalance is real, or whether we academics just convince ourselves that the balance is shifted further towards teaching than it really is. I suspect this could happen because we have a tendency to feel perpetually behind on our research, and teaching ‘To Do’ jobs usually have harder deadlines than research ones, so we often feel like we’re being forced to spend time on teaching tasks instead of research ones… Maybe it’s just a trick of the mind, and we are actually doing a bit more research than we think? Or maybe it’s true, and my research will take a huge hit in year one that I should actively work to prevent?

Of course, with covid-era teaching requiring significant extra hours from teaching staff, and preventing new research experiments from being carried out within the lab during lockdown, I suspected that I might fall victim to this potential research-teaching imbalance even more than your average first year PI.

And given I am a scientist, the urge to collect data to answer this question was strong.

Hence the ‘To Did’ list.

Did it identify a huge imbalance toward teaching?

No, not really!

I’m writing this in the evening, having just written out my ‘To Did’ list for today. It seems nicely varied, with eight items that I spent roughly equal time on. The two most time consuming items (by only a small margin) were pure teaching, one item sat nicely on the teaching-research border, four items were pure research, and the smallest one was ‘other’.

Over the summer, before I brought in the ‘To Did’ list, I started going through old ‘To Do’ lists and highlighting research items yellow, teaching items green, and everything else blue, to try to collect similar data on how I was balancing these types of work. I found that yellow and green were almost perfectly equal, with blue less common. Which, to me, seems ideal – between the results of the ‘To Do’ & ‘To Did’ lists, I am reassured that things seem to be relatively well balanced so far!

An unexpected positive was that the ‘To Did’ list also highlighted for me how international my work has become, which hadn’t really clicked for me. Increasing my international network will (I hope) help my research career, and so it was exciting to notice items related to collaborations with Ireland, Finland, India and the US all in there alongside my main work in the UK.

Aside from the broad overview the ‘To Did’ list gives me of the variety of work I’m doing, it does also provide the same sort of dopamine release that ticking off a ‘To Do’ list does, only in this case, for me at least – it’s even better! Everything on my ‘To Did’ list is complete, even if it’s just a small step in a bigger picture. It’s something I’ve done that day, something I’ve accomplished, and something that is not hanging over me anymore.

One rule of my ‘To Did’ list, is that I do not allow myself to write ‘wrote/read emails’ as an item on the list. This is because I’ve had a bad habit in the past of putting myself down by saying ‘all I did all day was emails’, when in actual fact I may have been troubleshooting research problems, liaising with collaborators, submitting proposals, planning projects or reviewing papers – email was purely the vehicle. Calling those items ‘emails’ is a bit like spending three days on a wet lab experiment and saying ‘all I did the last few days was move stuff with my hands’ or teaching all day and saying ‘all I did today was speak!’ Writing these kinds of items on the list with verbs like liaised/reviewed/edited has made me acknowledge the reality of the work being done, and also helped me to feel better about previously perceived lack of productivity during lockdown, while I was really missing the lab!

So whether you’re trying to collect data on how you break up your time, or just looking for reassurance that you’re still getting s#!t done during the pandemic, I wholeheartedly recommend writing a ‘To Did’ list.

I guess I can now add a 9th item to today’s list – writing this blog!

How saying ‘yes’ to a quick side project led to one of my main research interests!

It was the final year of my PhD, and I was presenting a poster at a conference, alongside my supervisor Dr Kathy Gately. We were showing off our new panel of PI3K inhibitor resistant lung cancer cell lines, which we had developed and begun to characterize. We were excited to tease out which signalling pathways might be playing a role in resistance to these drugs.

Along came Dr Michael O’Neill, the co-founder of Inflection Bioscience, who had recently licensed a drug that targeted the PIM kinases. At the time, I had never heard of PIM. He saw our poster, and suggested we should test their drug in our cell lines. It seemed straightforward enough.

After a couple of quick ‘look see’ experiments, we ended up submitting a grant.

Then another.

Then some student projects.

Some posters….

Image: the ‘That escalated quickly’ meme

Before we knew it, this ‘quick win’ was becoming a driving interest for Kathy, and she was gathering researchers along the way (notably Dr Gillian Moore). I had left Kathy’s lab at this stage, but as a wider team we were beginning to build up a picture of how best we could potentially develop these drugs in the lung cancer space.

PIM research didn’t stop for Kathy, and it didn’t stop for me either.

When interviewing for a postdoc position in University College London with Dr Hayley Whitaker, I was asked ‘if you had access to human prostate cancer specimens, what would you do with them?’ On a whim, and with interview pressure weighing down on me, I responded ‘well there’s this really exciting drug target called PIM in lung cancer, I think it looks like it might be promising in prostate cancer too, so I’d probably run some experiments on that’.

I arrived home to Dublin that night, exhausted after a long day of travel & interviewing, and found out immediately that I’d been invited to a second round interview. This was great – but it would be in London again, in just a few days! I purchased a second pair of flights, cried over my bank balance for a moment, and then hunkered down in our basement office for the weekend, trying to pull together a presentation that had been assigned for the second round. The challenge that had been set was of course ‘if you had access to human prostate cancer specimens, what would you do with them?’ How could I present on anything other than PIM after suggesting it in my previous interview?!

I rushed a project pitch, which by chance turned out quite promising. There were a good few papers looking at PIM in prostate cancer, but not many looking at drug treatments, and none looking at the same co-targets that we were working on in lung cancer. I checked with Kathy if it was ok with her for me to present this, while rushing out of the building to get to the airport – but our conversation got slightly side-tracked when she told me she was expecting a baby! Safe to say PIM got a bit overlooked that lovely day.

The presentation went well, I got the job, and to my delight I was offered the chance to actually work on the project that I had pitched in the interview. What a wonderful opportunity for a postdoc to be given that level of freedom!

In order to differentiate my new prostate cancer project from the work Kathy was leading on, I set out to investigate a wider panel of drugs, including the PIM inhibitors but also quite a few others. The aim was to test promising late stage pre-clinical drugs in human prostate cancer tissue, using ex vivo culture and new omics technologies. I gathered some preliminary data and submitted it as a fellowship proposal, trying to position myself as someone who worked on drug development in general. Thankfully, I was successful.

It wasn’t meant to be a ‘PIM project’. But as luck would have it, PIM wasn’t going away.

One by one, the other drugs dropped off for one reason or another. Some couldn’t be investigated in an ex vivo model because they needed to be metabolised within the body, some needed to build up for a few weeks before an effect would be seen, some failed during concurrent animal testing, and some just showed disappointingly little activity in my model. By the time the work was close to publication, we were down to just 4 different treatments, and they were a very similar panel to what Kathy was leading on in lung cancer. I hope she forgives me!

Now, years later, we’ve just had our first original article on PIM in prostate cancer come out [1]. This is our first ‘flag in the sand’, where we put forward the idea of co-targeting PIM with the PI3K pathway. There are bigger and more detailed works to come from this in the future. If you’d like to read about the paper itself, I wrote a tweetorial that you can read unfurled here: https://threadreaderapp.com/thread/1300721602854871040

This paper came off the back of a couple of reviews on PIM as a drug target [2,3], and there is of course more on the way.

Now, plans are brewing for wider PIM collaborations, and who knows, maybe PIM will stick around in my world even longer.

Did I ever set out to become a PIM researcher? No, not particularly.

But I suppose the lessons learned here are to say yes to opportunities, and to follow the data – if something isn’t your ‘plan A’ but it might make a difference to cancer patients in the future, then why wouldn’t you follow it?

Extra credit to my friend AJ (@AyoksAJ) for his very inspiring ‘Say Yes’ presentation to our postdoc networking group a few years ago, which still sticks around in my mind, and led me to say YES to an opportunity that came my way this morning – let’s see where this one goes!

Thank you to Kathy, and to all the PIM friends I’ve made over the years.

[1] https://www.nature.com/articles/s41598-020-71263-9

[2] https://www.sciencedirect.com/science/article/pii/S0163725819302062

[3] https://www.nature.com/articles/s41392-020-0109-y