Last year I published my first ‘paper’ with JoVE – the Journal of Visualized Experiments. JoVE is a video journal that I had first heard about from a collaborator, who suggested that our MRI-targeted prostate slicing method ‘PEOPLE’ might be a good fit. It sounded like a great idea!
I’m happy to report that there’s no twist coming in this blog – the experience was great, and I’d recommend them to others too!
Image source: threadless.com
With JoVE, you submit an abstract & basic written paper of your method (or whatever research you’d like to publish as a video). The written submission is peer reviewed and edited as necessary, and once the reviewers are happy, you begin to plan a filming day. There are a few options here – I chose the more expensive option of having JoVE arrange the script, filming & editing for me, rather than doing it all myself. The benefit is that you get to work with professionals who know how to get the right shots and the right lighting, and who edit everything so that other scientists can clearly see what they need to see, and learn the method well enough to carry it out themselves.
This was of particular benefit to me, as a (very!) amateur YouTuber with Cancer Research Demystified – I wanted to learn how the professionals do it!
Our videographer was Graham from https://www.sciphi.tv/. Working with him was a brilliant experience – he was an ex-researcher himself, and had extensive experience both carrying out and filming science. He made the day fun, quick and easy – if you ever need someone to film an academic video for you I highly recommend his company!
Filming day itself wouldn’t have been possible without the rest of our research team helping out (in particular Hayley and Aiman – thank you!) and of course a very generous prostate cancer patient undergoing radical prostatectomy, who kindly agreed to take part in our research.
After a short wait we received a first draft of our video, which we were really happy with – we had the opportunity to make a round of edits (there weren’t many), and before long the video was up on JoVE’s website, as well as on PubMed and all the usual places you’d read scientific research in paper form!
Personally, I think videos make a whole lot more sense than written papers for sharing methodologies. I’ve used JoVE videos for training myself – notably for learning to build tissue microarrays (TMAs), and without those videos I’m not sure I could have learned this skill at all – as our resident experts had left the lab! A paper just wouldn’t be able to clearly explain how to use that equipment. With JoVE, there’s always a PDF that goes alongside the paper too, so once you’ve watched and understood the practical side, you have the written protocol to hand while you’re in the lab. The best of both worlds.
I’ve always been a fan of simple solutions (I’m a bit of a broken record on this) – and JoVE is a perfectly simple solution to providing training that will show you how to do something rather than just tell you.
One caveat – it’s not cheap. But your fellow scientists who want to learn your methods will thank you – you’re doing the rest of us a favour! Of course, there’s always YouTube as a free(ish) alternative. But in my view, the added layers of peer review and professional production are worth the extra cost.
Academic impact metrics fascinate me. They always have. I’m the kind of person that loves to self-reflect in quantitative ways – to chart my own progress over time, and with NUMBERS. That go UP. It’s why I’ve been a Fitbit addict for five years. And it’s why I’ve joined endless academic networks that calculate various impact metrics and show me how they go UP over time. I love it. It’s satisfying.
Image from SeekPNG
But as with anything one tends to fangirl over, early on I started picking holes in the details. Some of the metrics overlook key papers of mine for no apparent reason. Almost all value citations above all else – and citations themselves are problematic to say the least.
Journal impact factor is a great example of a problematic and overly relied-upon metric. I am currently teaching our MSc students about this, and I found some useful graphs from Nature that show exactly why (which you can read about here) – from variations across disciplines & time, outlier effects and impact factor inflation, all of which were no surprise, to an over-reliance on front matter – which was new to me!
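For anyone who hasn’t seen it written out, the standard two-year impact factor is just a ratio – for a given year $Y$:

$$\mathrm{IF}_Y = \frac{\text{citations received in year } Y \text{ to items published in } Y-1 \text{ and } Y-2}{\text{number of citable items published in } Y-1 \text{ and } Y-2}$$

The ‘front matter’ problem follows straight from this definition: as I understand it, editorials, news pieces and the like can attract citations that count in the numerator, while not counting as ‘citable items’ in the denominator – nudging the ratio upwards.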
There are problems.
They are noteworthy.
But we still use impact factor religiously regardless.
My husband used to run committee meetings for a funding body, where he would sometimes have to remind the members & peer reviewers that they should not take journal impact factor into account when assessing publication record as part of researcher track record, as per the San Francisco Declaration on Research Assessment: https://sfdora.org/read/. Naturally, these reminders would often be ignored.
There’s a bit of a false sense of security around ‘high impact’ journals – that feeling that surely this has been so thoroughly and rigorously peer reviewed that it MUST be true. Sadly, this is not the case. Some recent articles published in very high impact journals (New England Journal of Medicine, Nature, The Lancet) have been retracted after being found to include fabricated or unethical research. These can be read about at the following links:
Individual metrics such as the H-index also typically rely on citations. An author’s H-index is calculated as the number of papers (H) that have been cited at least H times. For example, a researcher who has at least 4 papers that have each been cited at least 4 times has an H-index of 4. This researcher may have many more publications – but the rest have not been cited at least 4 times. Equally, this researcher may have one paper that has been cited 200 times – but their H-index remains 4. The way the H-index is calculated thus attempts to correct for unusually highly cited articles, such as the example just given, reducing the effects of outliers.
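For the programmatically minded, the calculation is easy to sketch – here’s a minimal Python version, with citation counts made up to match the researcher described above:

```python
def h_index(citations):
    """Return the H-index for a list of per-paper citation counts."""
    # Rank papers from most to least cited, then find the largest rank h
    # at which the h-th paper still has at least h citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The researcher from the example: one paper cited 200 times, three more
# cited at least 4 times each, plus a couple of barely cited papers.
print(h_index([200, 6, 5, 4, 2, 1]))  # -> 4
```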
The H-index is quite a useful measure of how highly cited an individual researcher is across their papers. However, as with impact factor, it is a metric based on citations – and citations do not necessarily imply quality or impact.
Another key limitation is that the H-index does not take authorship position into account. Depending on the field, the first author may have carried out the majority of the work and written the majority of the manuscript – but the seventeenth author on a fifty-author paper will get the same benefit to their own personal H-index. In some studies hundreds of authors are listed – and all will benefit equally, though some will have contributed little.
An individual’s H-index will also improve over time, given that it takes into account the quantity of papers they have written and the citations on those papers – which themselves accumulate over time. The H-index therefore correlates with age, making it difficult to compare researchers at different career stages using this metric.
Then of course there’s also the sea of unreliable metrics dreamt up by specific websites trying to inflate their own readership and authority, such as ResearchGate. It is one of the most blatant, openly giving significant extra weight to reads, downloads, recommendations and Q&A posts within its own website when calculating its impact metrics, ‘RG Score’ and ‘Research Impact’ – a thinly veiled advertisement for ResearchGate itself.
If you’re looking for a bad metric rabbit hole to go down, please enjoy the wide range of controversy both highlighted by and surrounding Beall’s lists: https://beallslist.net/misleading-metrics/
Altmetrics represent an attempt to broaden the scope of these types of impact metrics. While most other metrics focus on citations, altmetrics include other types of indicators: journal article indicators (page views, downloads, saves to social bookmarks), social media indicators (tweets, Facebook mentions), non-scholarly indicators (Wikipedia mentions) and more. While it is beneficial that altmetrics rely on more than just citations, their disadvantages include susceptibility to gaming, data sparsity, and difficulty translating the evidence into specific types of impact.
Of course, despite all of the known issues with all kinds of impact metrics, I still have profiles on Google Scholar, ResearchGate, LinkedIn, Mendeley, Publons, Scopus, Loop, and God knows how many others.
I can’t help it, I like to see numbers that go up!
In an effort to fix these issues, I made a somewhat naive attempt at designing my own personal research impact metric this summer. It took authorship position into account, and weighted different types of articles differently (I’ve never thought my metrics should get as much of a bump from conference proceedings or editorials as they do from original articles, for example). I used it to rank my 84 Google Scholar items from top to bottom, to see which papers represented my most significant contributions to the field. But beyond the extra weighting I brought in, I found myself falling into the pitfall of incorporating citations, journal impact factor and so on – so it was still very far from perfect.
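To give a flavour of what I mean (this is an illustrative sketch rather than my actual formula – the weights and names below are entirely hypothetical), a metric along these lines might look like:

```python
# A hypothetical personal impact score: weight each paper by article type
# and authorship position. The weights are invented for illustration only.
TYPE_WEIGHTS = {"original": 1.0, "review": 0.7, "conference": 0.3, "editorial": 0.2}

def position_weight(position, n_authors):
    # Full credit for first or last author; middle authors share what's left.
    if position in (1, n_authors):
        return 1.0
    return 0.5 / max(n_authors - 2, 1)

def paper_score(article_type, position, n_authors, citations):
    # Citations still creep into the formula here - exactly the pitfall
    # described above, which is why this remains far from perfect.
    return TYPE_WEIGHTS[article_type] * position_weight(position, n_authors) * (1 + citations)

papers = [
    ("original", 1, 6, 40),     # first author on an original article
    ("conference", 17, 50, 40), # seventeenth of fifty authors on proceedings
]
for p in sorted(papers, key=lambda p: paper_score(*p), reverse=True):
    print(p, round(paper_score(*p), 2))
```

Even in this toy version, the first-author original article dwarfs the fifty-author proceedings paper – but you can also see citations sneaking straight back into the calculation.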
If you know of a better attempt out there please let me know – I’m very curious to find alternatives & maybe even make my own attempt workable!
Many thanks to Prof Kurinchi Gurusamy for discussions and examples around this topic.
Everyone loves a fresh start. Founding a research group is an exciting time in anyone’s career – a great opportunity for a clean slate, and to embed good practice within the team right from the get-go!
For me, this is my first year as a member of faculty, and I’m hoping to recruit the first members of my research team as soon as Covid settles down a bit. I’ve also been lucky enough to get involved in co-leading a postgraduate module on research methodologies this year, for which I am developing content on research integrity alongside a Professor of evidence-based medicine. He has a wealth of knowledge on these topics, and has highlighted a range of evidence-based resources that we’ve been able to incorporate into our teaching. It’s great timing, so I also plan to incorporate these into the training that I provide for my research team, as we hopefully lay the foundations for a happy, productive and impactful few decades of ‘Heavey lab’.
Here are six examples of good practice that I plan to incorporate, along with some links if you’d like to use them in your own teaching/research.
1. Research integrity: this is key to ensuring that our work is of the utmost quality, that it can be replicated, validated and that it can ultimately drive change in the world. While this is something researchers often discuss ad hoc over coffee, there are also formal guidelines available, and these remove some of the ambiguity around individual versus institutional responsibilities related to this topic. Below you’ll find a link to the UK concordat to support research integrity. It is a detailed summary of the agreements signed by UK funding bodies, higher education institutes and relevant government departments, setting out the specific responsibilities we all have with regard to the integrity of our research. I intend to go through this with my team so they are clear on their own responsibilities as well as mine, and those of our funding bodies and institutes. https://www.universitiesuk.ac.uk/policy-and-analysis/reports/Documents/2019/the-concordat-to-support-research-integrity.pdf
2. Prevention of research waste: research waste should be actively avoided. This figure is a clear summary, and I’ll keep it visible to my team so that we can all work together to prevent wasting our own time and resources, and maximise the impact of our work. Some of these points force us to really raise our game, and I’m excited to get stuck in.
Figure ref: Macleod MR, Michie S, Roberts I, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383(9912):101-104. doi:10.1016/S0140-6736(13)62329-6
3. Prevention of misconduct: The word ‘misconduct’ may strike fear in the heart – but it describes a whole range of things, not just the extreme cases. Misconduct is not always intentional, and should be actively and consciously avoided rather than assuming ‘we’re good people, I’m sure we’re not doing anything wrong’. Here’s a quick checklist that you can use as a code of practice, to keep track of your research integrity and prevent research waste or misconduct. It’s not as detailed as the last link, and I plan to use it with each member of my team before, during and after our projects, to help us to consciously avoid misconduct. https://ukrio.org/wp-content/uploads/UKRIO-Code-of-Practice-for-Research.pdf
4. Prevention of ‘questionable research practices’: This figure below, from another blog, does a great job of highlighting many of the ‘grey areas’ in research that border on misconduct. Sadly, we’ve all seen some of these – from data secrecy (often due to laziness or lack of understanding rather than malice) to p-hacking (where someone runs as many statistical tests as they need to until they find/force a ‘significant’ result), or maybe it’s manipulating authorships for political gain, or playing games with peer review to win a perceived race. The ethical questions around these practices are often brushed aside as we try to ‘pick our battles’ and avoid conflict, but they can only be stopped if we’re open about them, and discuss the ramifications to the field and the wider world. I plan to display this figure and share anecdotes of bad past experiences with my team, so that they can learn from others’ bad practice in the same way I have. Unfortunately some lessons are best learned as ‘how not to do it’.
5. Making documentation visible: To adhere to our own personal responsibilities around research integrity, we need to be clear on which rules and regulations we are each beholden to. I will keep ethics procedure documents, protocols, patient information sheets and consent forms visible and easily accessible to those who are authorized. I want my staff and students to know exactly what they can and can’t do in their research practice. I will also ensure they are familiar with the intricacies of each project’s approval, which can vary significantly. This sounds like a no-brainer – but ask yourself, have you ever worked on a project where you couldn’t access the latest full version of the ethics approval? Where maybe you had laid eyes on a draft or an approval letter, but not the full application? This happens far more often than it should, and leaves researchers unable to adequately adhere to their own personal responsibilities under the concordat linked above. It’s required, it’s an easy win, and I will make sure it’s the case for my team.
6. Safe space: I believe it’s crucial to foster a safe environment where team members can ‘speak up’ about any of the above. This takes extra effort in the world of academia, which often discourages it. The life of an early career researcher is fragile: you bounce from contract to contract, always worrying about stability and fighting for the next grant, the next authorship. The slightest knock to your reputation can seriously affect your future career, and this conscious fear can leave team members not feeling safe to call out questionable practice. It won’t be easy to create an environment where the whole team feels comfortable speaking up without it leading to conflict, but I’m going to try my best. I aim to make it abundantly clear to my team that they will not face any retaliation for calling out others’ questionable practice or identifying their own – no matter the consequences. Even if it ultimately means we have to scrap a massive project, I will thank them. I would much rather know that something has gone wrong so that I can correct it, retract it or edit it, than continue on not knowing. Anyone who comes to me with an honest concern will be treated with gratitude.
These six measures are of course not exhaustive, and I aim to continue to appraise the literature on good research practices, so that as well as starting on a solid foundation, we can also build better and better practice as we go.
Onwards and upwards!
Particular thanks to Prof Kurinchi Gurusamy for pointing me towards some of these great resources!
Are the influential Professors who made their names guesting on talk radio shows or writing opinion pieces in the national papers a thing of the past? Will they be replaced by a generation of #scicomm advocates, sharing lab bench selfies and vlogs? I’ve seen the latter spark eye-rolling and accusations of vanity – could it be true that this new brand of public engagement is less impactful, or does it still do the same job of engaging with the public, and ultimately make lofty academics more relatable to the average Joe?
These are some questions I’ve been asking myself over the last few years, while actively engaging more with other academics and the wider public online. Am I building towards becoming part of a new generation of influential Profs in the future, or just making myself look like an attention seeker?
To pick this apart, let’s start with why academics seek to communicate with the wider community in the first place.
Effective communication is key to ensuring our research has an impact within the wider community. For us to enact change in any field of academic research, we need to have discourse with other non-academic professionals, patients, advocates, teachers, politicians – whoever it might be.
Additionally, with more and more papers being published each year, it’s hard to get ours noticed within the academic community above the sea of new data without publicizing our work in some way. As a result, researchers like myself are scrambling to draw attention to our findings everywhere and anywhere we can, living in fear that our work may go unnoticed, gathering dust in the depths of PubMed.
But at what point does publicizing your work, or describing it for a wider audience, cross over into attempted-influencer territory? And if it does – is that a bad thing?
As with anything, opinions vary – but it’s certainly not rare in my experience to come across a peer who believes that an influential Prof who gets invited onto talk radio should be revered as a great communicator, while one who engages on Twitter, or god forbid, Instagram (!) should be dismissed as an attention seeker.
Will this continue? Will one or the other die out, or perhaps will traditional media and social media merge over time, with the distinction becoming less relevant? To me, this appears to already be happening, with some of those revered Profs gradually turning to social media – particularly Twitter.
Will tomorrow’s generation of great academic communicators summarize their think piece from the Economist with a viral TikTok?
Where do I fit in to all of this?
Let me share some of the numbers with you that encouraged me to start engaging more online myself.
In the ten years from 2009, when I was an undergrad, to 2019, when I was applying for my first faculty roles, the number of papers about cancer published in PubMed almost doubled, from an already massive 123,530 to a phenomenal 222,784. That’s nearly a quarter of a million papers in just one field, in just one year!
More research is great – more papers, more data, more opportunities for our field to advance. But it’s also more papers for each of us to keep up with. We can’t possibly read this colossal amount of work while still conducting our own research. And if a paper does get lost and nobody reads it, then nobody will ask the next question, run the next experiment, or take the next step. At that point – why did we bother doing the work in the first place?
With this in mind, many of us – myself included – who lack contacts or cred in traditional media are turning to social media to get our work out there. We’re using the internet to try to improve our papers’ Altmetric scores – a number that quantifies how much attention a publication has garnered. Maybe it starts with a tweet, or a blog (ahem), or a post on LinkedIn or Reddit. Personally, I’ve tried all of these, as well as dabbling in producing lay YouTube videos describing our latest published work. I’ve even tried posting paper abstracts on Instagram at this point – which surprisingly did get a few likes, despite semi-drowning in a sea of selfies.
All sounds fine, right?
But with social media, you have the same issue as with Pubmed – an ever increasing deluge of content, drowning out your little post among millions of others.
To fight this, I’ve been actively trying to build a following on various social media platforms, within the scientific and academic communities. I want like-minded people to read my papers, and for them to do that, they need to see the tweets/blogs/videos that describe or link to them, so I need more followers, more retweets, more likes, etc. etc.
I’ve actively gotten into a habit of putting spare moments here and there into tiny social media tasks, all with the aim of building my own following. You might catch me liking a post about someone else’s paper while I’m making a cup of tea, retweeting a video while I’m walking to the shop, or following a few fellow scientists while half-watching Netflix. Any content of my own such as a blog like this, a YouTube video or a more detailed Twitter/Instagram post is produced in bulk on the weekends, and scheduled to be released throughout the week so that it appears like I’m constantly engaged, even though I do actually have a ‘day job’!
And it has worked a little bit, getting me from a few dozen followers to a few hundred, and now heading towards a couple of thousand. Nothing huge. Social media is the signpost as far as I’m concerned, directing people to my ‘actual’ work, rather than the endgame in and of itself.
But is that how it comes across?
Do I just look like an attempted influencer?
I received a message from a stranger a couple of weeks ago, after I helped to launch a joint Twitter account with a few like minded academics called ‘Academic YouTubers’. The stranger said something along the lines of ‘prepare for an influx of academic influencers’.
It was the first time I had heard the term ‘influencer’ used to refer to what I had considered to be people working in science communication or ‘sci comm’.
To others, does it look like we’re trying to become influencers? Using papers to garner followers rather than the other way around? Using social media in an attempt at garnering fame or even financial gain?
The idea of being an influencer within academia does tickle me a bit, I must admit. Imagine if our next ‘Cancer Research Demystified’ video included a sponsorship deal, where we advocate the use of a particular brand of RNA extraction kits and offer a discount code on your next purchase, or happen to be wearing lab coats with a name brand clearly visible and ‘#ad’ in the video description…!
I would like to think it’s clear to our (few) viewers that that’s not the aim of our channel – we’re purely trying to connect with cancer patients to tell them about cancer research.
But is that really how it comes across?
One moment sticks out in my memory on this. I posted a story on my Instagram page a couple of years ago, toasting a paper acceptance with a flute of prosecco. I was happy about the good news, and it’s a habit of mine to share the positive moments in academia, as they can be few and far between! That evening I received a reply to the story, from an old friend I went to college with. It read ‘Sweet Jesus, Sue is insufferable‘. Within an instant they deleted the message, but I had already seen it, before they presumably sent it to whoever the intended recipient was. It hurt my feelings a little, and made me question my online presence. Is it too much? Too self-congratulatory? Too vain?
That message still lingers in the back of my mind today, whenever I hit ‘post’.
Am I doing the right thing by posting frequently, trying to build a moderate audience and grow better at communicating my work with the wider world – or am I simply alienating my peers, overloading their feeds and making them roll their eyes at my perceived attention-seeking behaviour?
Furthermore, can I really argue against this label, when really attention-seeking is pretty much exactly what I’m doing, just for my work, rather than myself?
At the end of the day, when I take a step back from all the minor details and self doubt, I firmly believe that engagement between academia and wider society is key to the advancement of civilization.
And if I believe that, then I suppose I need to continue to do what I’m doing – communicate wherever I can get a platform, explain my findings, their significance and what I believe should happen next. And for that to get seen, I also need to continue the less substantive posts, the odd meme and a whole lot of retweets, that help to game the algorithms and build that all important follower list.
After all, that’s how the rest of society is communicating nowadays – so for academia to stay relevant, surely we should follow suit.
Furthermore, the benefits to me of engaging with other scientists online are immeasurable, and deserve their own future blog – I’ve learned so much from other researchers debating their views on Twitter, and this does allow me to better inform my own work.
I suppose I’ll just have to accept that for every ‘like’ I receive from a former colleague, I’m probably also receiving an eye roll from another. And for as long as my #scicomm attempts seem to stimulate some minor engagement and/or discussion, I’ll have to keep going.
To be honest, I still feel uncomfortable about the term ‘academic influencer’, but perhaps that will change. I look at the next generation – today’s undergrads, the #scientistsofinstagram (yes it’s real, go look) who I oftentimes see posting heavily edited selfies in their lab coats and plugging particular trendy stationery brands. They seem to be actively aiming for the ‘academic influencer’ label. Is there anything wrong with what they’re doing? I don’t think so, so long as they aren’t spreading misinformation. And if I’m not going to judge them for their brand of science communication, then I suppose I shouldn’t judge myself for my own version either.
As always, comments, thoughts and discussion welcome. Go on – tell me I’m insufferable, you know you want to!
Times are strange due to #Covid19 – so we’re coming to you not from our lab, but on a virtual blackboard instead, from home! This video aims to give a whistle-stop tour of the costs involved in carrying out cancer research. We get asked about this a lot – so we’re here to show you where those valuable funds raised in pub quizzes, sponsored walks & raffles all go! Do you have a guess at how much it costs to carry out a full PhD? Watch the video to find out!
After adamantly refusing to blog for a very long time… it’s time to give in.
Let me introduce myself. I’m Susan. I’m a cancer researcher. My passion is understanding how to exploit vulnerabilities within tumours so that we can find better ways to treat the disease.
Over the last 13 years I’ve been developing my skills, learning more and more about cancer, and working towards the ultimate goal of starting my own research lab.
Now, it is finally happening!
As I work towards building ‘Heavey Lab’ in University College London, where I’ve recently been appointed as a Lecturer in Translational Medicine, I’ll endeavor to pop in now and then, chronicling each of the ‘firsts’ that come along with being a brand new member of faculty.
I’ve enjoyed communicating my research over the years, both online and in the real world, so that cancer patients, advocates, carers and students alike can get a taste of what the world of cancer research is really like. A lot of this #scicomm activity has been through Cancer Research Demystified, which I co-founded and run. I’ll share some of the material that we created for CRD here too, with brief introductions on why we wanted to share these aspects of our work with the world.
I’ll also share our publications, along with plain English explanations of what we found, why it was interesting to us, and with the benefit of hindsight – what happened next.