PAASP Newsletter 12/2017
Issue No. 012
February 2017
Heidelberg, Germany (Old Bridge and Castle)
Dear Friends and Colleagues,
Welcome to the PAASP Newsletter!
With these newsletters, we aim to update our readers on a regular basis about PAASP’s most recent developments and, more generally, what is currently happening related to Good Research Practice and Data Integrity in the area of non-regulated biomedical research.

 
Table of contents:
Biowebspin: PAASP partners with the academic-industry network platform Biowebspin - Read more
Author checklists: We do make an impact - Read more

Recent publications related to Good Research Practice - Read more
Publication archive on the PAASP website - Read more
Case study: Bexarotene - Can clinical results resolve controversial preclinical findings? - Read more

Quote of the Month - Read more
History: Reproducibility in the 19th century - Read more

Useful tools and resources - Read more
Calendar and next events related to Good Research Practice and Data Integrity - Read more
Previous issues of our Newsletters can be found here.
If you have any questions or would like to discuss certain aspects in more detail please do contact us!


Your PAASP team

We are proud to announce that PAASP has partnered with Biowebspin, the leading online Life Sciences partnering platform between academia and industry with over 2 million researchers all over the world. Biowebspin’s mission is to help academic researchers and start-ups to find industrial partners for their research projects, technologies and licenses. Biowebspin detects new research and licensing opportunities matching the R&D and Business Development strategy of the industrial partner.
Under this partnership, PAASP and Biowebspin have developed mechanisms and services to pre-evaluate academic projects based on the quality, integrity and robustness of the underlying research data. This will provide industrial partners and VC investors with valuable instruments to select specific projects and drug targets/candidates according to their needs and requirements.
Many people are aware of Nature’s checklists that need to be completed by the authors and that address a number of important questions related to study design and data analysis. However, it is not widely known that, although reviewers have access to these checklists, Nature journals do not currently publish them and therefore the readers may not have access to potentially important information. After checking recently published papers and confirming that at least some of them indeed do not disclose any information related to details such as blinding or randomization, we (Anton Bespalov and Malcolm Macleod) addressed this question to NPG journal editors.
After several months of waiting for feedback and repeated discussions, we are glad to announce that the wait has paid off and that we have made an impact: the NPG journals “aim to start publishing the checklists along with the manuscripts in the near future.”
This is exciting news and we will publish the full original commentary here in the next issue of the Newsletter.
Recent news and publications

A manifesto for reproducible science
Munafò et al. have just published an open-access manifesto for reproducible science in Nature Human Behaviour. This interesting piece summarizes the current issues in science and presents several key measures that need to be widely implemented to optimize and improve the scientific process. The authors refer to a recent analysis showing that several of these measures indeed have the potential to improve the situation. Yet these measures, spanning methods, reporting and dissemination, reproducibility, evaluation and incentives, are only slowly being adopted.
Importantly, the authors state that science will not work without a well-implemented evaluation process. However, modifications to this process are needed and further ideas are required since, as the authors point out, peer review is changing under the influence of several factors (e.g. pre-registration, results-blind and post-publication peer review), and the trend towards open peer review brings new problems of its own. A true understanding of how best to structure and incentivize science will emerge only gradually and will remain an ongoing effort.

Study by H. Würbel reveals very low reporting of protection mechanisms against biases
Hanno Würbel and his team investigated the measures taken to reduce the risk of bias, as reported in grant applications and publications. Their findings reveal that only a few grant applications describe protection mechanisms against research biases: A) only 8% mention whether a sample-size calculation was performed when designing the experiments, B) only 13% mention whether animals were randomly assigned to experimental groups, and C) only 3% state whether the experiments were performed in a blinded fashion.
However, when the same scientists were asked about their research practices in a questionnaire, the numbers were considerably higher: A) 69% of the researchers stated that they perform sample-size calculations, B) 86% perform randomization, and C) 47% perform blinded experiments.
Although only 302 of the 1,891 invited researchers responded to the survey, the results show that there is still a lack of awareness that reporting quality measures such as randomization, blinding and power analysis is essential for judging data quality and integrity. In addition, they also demonstrate that the current incentives and processes installed by scientific journals and funding organizations are not sufficient to support the implementation and/or reporting of measures to reduce the risk of bias.
The situation might even be worse than suggested by Würbel et al.: given the low response rate (302 of 1,891 invited researchers, about 16%), it cannot be excluded that scientists who have already implemented measures to reduce bias in their laboratories were more willing to take part in the survey (selection bias).
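For readers who want to see what these three protection measures look like in practice, here is a minimal, hypothetical sketch in Python (using numpy and statsmodels); the effect size, power level and group labels are our own illustrative assumptions and are not taken from the Würbel study.
```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# --- A) Sample-size calculation (a priori power analysis) ---
# How many animals per group are needed to detect a large effect
# (Cohen's d = 0.8) with 80% power at a two-sided alpha of 0.05?
n_per_group = int(np.ceil(TTestIndPower().solve_power(
    effect_size=0.8, alpha=0.05, power=0.8, alternative='two-sided')))
print(f"Required animals per group: {n_per_group}")  # roughly 26 per group

# --- B) Randomization ---
# Assign animals to groups by chance rather than by the experimenter's judgment;
# recording the seed makes the allocation reproducible and auditable.
rng = np.random.default_rng(seed=42)
animal_ids = [f"animal_{i:02d}" for i in range(2 * n_per_group)]
shuffled = list(rng.permutation(animal_ids))
allocation = {"treatment": shuffled[:n_per_group],
              "control": shuffled[n_per_group:]}

# --- C) Blinding ---
# Replace informative group labels with neutral codes; the key is held by a
# third party and only revealed after the analysis is complete.
blinding_key = {"treatment": "group_A", "control": "group_B"}
```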
Elsevier Implements Data Citation Standards to Encourage and Reward Authors for Sharing Research Data
Elsevier has implemented the FORCE11 Joint Declaration of Data Citation Principles for over 1800 of its journals and has incorporated them in its production and publication workflow. This means that authors publishing with Elsevier are now able to cite the research data underlying their article, contributing to attribution and encouraging research data sharing with research articles.
The FORCE11 data citation principles were launched in 2014 with the aim to make research data an integral part of the scholarly record. These principles help to ensure that authors receive credit for sharing through proper citation of research data, thereby increasing the availability of research data. Elsevier was involved in drafting these principles and, along with many other publishers, data repositories and research institutions, endorsed them as an industry standard.
Hopefully, together with author guidance and education, this will encourage and reward researchers for sharing their research data.
Meta-analysis in basic biology
In this editorial, published in Nature Methods, Natalie de Souza discusses the role meta-analysis could play in basic research.
Meta-analysis is a statistical approach that combines independent studies testing the same hypothesis and can therefore determine whether a result holds across the larger, combined sample. It is valuable when additional statistical power is needed to overcome small effect sizes or small sample sizes. The methodology is frequently used in clinical research, owing to the standardized designs and relatively simple endpoints of most clinical studies, and it has proven helpful for making informed decisions about health guidelines and care.
However, apart from genome-wide association studies (GWAS) and similar projects, meta-analyses are not very common in exploratory basic research. Here too, though, meta-analysis could help resolve contradictory conclusions, identify and correct for confounders, or point to the fact that the biology is more complex than initially thought, especially when results cannot be reproduced. Meta-analysis of mouse behavioral phenotyping, for instance, has been proposed as a basis for modeling genotype–laboratory interactions and for setting thresholds to identify phenotypes that should replicate more readily across laboratories.
Nevertheless, it is important to note that effective meta-analysis requires data that are accessible in a suitable format, well annotated, and comparable in a biologically meaningful way. Any filtering or bioinformatic processing must be assessed for its potential to bias the results and, where necessary, corrected by appropriate normalization. Only when study planning, conduct and evaluation are performed in a structured way will meta-analysis-based approaches provide meaningful insights and help improve research reproducibility.
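As a minimal illustration of the underlying arithmetic, the sketch below runs a fixed-effect (inverse-variance) meta-analysis in Python; the effect estimates and standard errors are invented for illustration and do not come from any study discussed in this Newsletter.
```python
import numpy as np

# Hypothetical per-study results: effect estimates (e.g. standardized mean
# differences) and their standard errors from independent labs testing the
# same hypothesis. All values are made up for illustration.
effects = np.array([0.42, 0.15, 0.55, -0.05, 0.30])
std_errors = np.array([0.20, 0.25, 0.30, 0.22, 0.18])

# Fixed-effect (inverse-variance) pooling: each study is weighted by the
# precision of its estimate.
weights = 1.0 / std_errors**2
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# 95% confidence interval for the pooled effect.
ci_low = pooled_effect - 1.96 * pooled_se
ci_high = pooled_effect + 1.96 * pooled_se

# Cochran's Q as a simple check for between-study heterogeneity: a Q much
# larger than its degrees of freedom (k - 1) suggests the studies may not
# share a single common effect, and a random-effects model (or a closer look
# at confounders) would be more appropriate.
Q = np.sum(weights * (effects - pooled_effect)**2)

print(f"Pooled effect: {pooled_effect:.2f} "
      f"(95% CI {ci_low:.2f} to {ci_high:.2f}), Q = {Q:.2f}")
```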
POSTnote: Integrity in Research
Integrity in research refers to the behaviours and values that result in “high quality, ethical and valuable research”. This POSTnote from the Parliamentary Office of Science & Technology (POST) considers current approaches to fostering an environment conducive to good research in the UK and to detecting and preventing practices that fall short of expected standards. It also examines the mechanisms for supporting integrity and how these might be improved.
The key points in this briefing are:
  • There are concerns about how to maintain integrity in research, because of fears that the ‘publish or perish’ culture leads to poor or questionable research practices.
  • Compromised research integrity can put public health at risk and waste resources, undermine public trust in science and damage reputations.
  • Various mechanisms exist to promote good practice in research, including: institutional guidelines; a sector-wide concordat; regulatory bodies for some disciplines; peer review; and a variety of legal actions.
  • There are differing views over whether these mechanisms are sufficient, or if another form of oversight, such as regulation, might be preferable.
Reproducibility Project: Cancer Biology - first results
In 2013, the Reproducibility Project: Cancer Biology started with the aim to reproduce key findings in 50 cancer papers published in Nature, Science, Cell and other high-impact journals (the project was later downsized to 29 papers, mainly due to budget constraints).
In a series of articles, eLife has just released the results of the first five attempts from the Reproducibility Project to replicate experiments in cancer biology:
  • two of the attempts “substantially reproduced” the original findings, although not all data sets met thresholds of statistical significance;
  • one attempt to replicate the published results failed;
  • the two remaining attempts yielded “uninterpretable results”.
While it is too early to draw any conclusions, the results of the first set of replication studies are mixed and indicate that assessing reproducibility in cancer biology is going to be as complex as it was in a similar project in psychology (Open Science Collaboration, 2015).
Interpreting the results of a replication study is difficult, and scientists are starting to realize that a failure to reproduce the original results does not mean that the original was wrong. In theory, the original could be a false positive, the replication could be a false negative, the replication may have done something wrong, or both studies could be correct, with the cause of the discrepancy still unknown. It is therefore important to note that a replication may fail for a number of reasons, independently of the quality of the original research.
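To make this point concrete, below is a minimal, hypothetical simulation in Python (using numpy and scipy); the sample sizes and effect sizes are our own illustrative assumptions, not values from the Reproducibility Project. It shows that an underpowered replication of a perfectly real effect will often fail, while a “significant” original can itself be a false positive.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative assumptions: small two-group studies with a genuinely large effect.
n_per_group = 10      # animals or samples per group, in original and replication
true_effect = 0.8     # standardized effect size when the effect is real
n_sim = 20000         # number of simulated study pairs

def is_significant(effect):
    """Simulate one two-group comparison; True if two-sided t-test gives p < 0.05."""
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(effect, 1.0, n_per_group)
    return stats.ttest_ind(treated, control).pvalue < 0.05

# Case 1: the effect is real; original and replication use the same design.
originals = np.array([is_significant(true_effect) for _ in range(n_sim)])
replications = np.array([is_significant(true_effect) for _ in range(n_sim)])
both = originals & replications
print("P(replication significant | original significant, effect real):",
      round(both.sum() / originals.sum(), 2))   # well below 1 despite a real effect

# Case 2: no true effect at all - a "significant" original is a false positive.
false_pos = np.array([is_significant(0.0) for _ in range(n_sim)])
print("False-positive rate of the original design:", round(false_pos.mean(), 2))
```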
As soon as further information becomes available, we will keep you posted and will report on it.
Consensus Preclinical Checklist (PRECHECK): Experimental Conditions - Rodent Disclosure Checklist
Published in the International Journal of Comparative Psychology, the presented guidelines follow the recommendations of a number of external bodies that regulate the use of animals in research. They can be used both for transparency in publication and as guidance for regulatory or funding institutions to request information before, during, or after funding, and to ensure adherence to regulations.
This checklist is based on and extends the following guidelines:
Animals in Research Ethical Guidelines
Description of Animal Research in Scientific Publications
Animal Welfare Act (7 U.S.C. §2131 et seq.) and applicable federal regulations, policies, and guidelines regarding personnel, supervision, record keeping, and veterinary care
ARRIVE guidelines
PAASP archive of publications
PAASP maintains a list of previous publications and a collection of articles on our website.
We will be regularly updating the Literature and Guidelines sections where you can find references for some of the most valuable publications related to the growing fields of GRP, Data Integrity and Reproducibility:
-  Consequences of poor data quality
-  Study design
-  Reproducibility and data robustness
-  Data analysis and statistics
-  Data quality tools
-  Therapeutic indication
-  Reporting bias
-  Peer Review and Impact Factor
-  Various commentaries
Read more
Case study: Bexarotene - Can clinical results resolve controversial preclinical findings?
Several years ago the nuclear retinoid X receptor agonist bexarotene was reported to produce a marked (50%) reduction in amyloid plaque burden, to significantly reduce levels of soluble and insoluble brain Aβ, and to restore cognitive and memory functions in amyloid precursor protein-presenilin 1 transgenic mice. This report received a lot of attention when, about a year later, several labs commented on difficulties in confirming some of the claimed effects of bexarotene. By the time these reports were published (May 2013), however, a clinical trial was already enrolling patients (link)...
The results of this clinical trial were published in 2016 but may have escaped the attention of colleagues who follow the data robustness and reproducibility discussion but do not actively track the Alzheimer’s disease literature.
According to the report by Cummings et al., the primary outcome was negative: bexarotene did not affect the composite or regional amyloid burden (measured using amyloid plaque-labeling florbetapir PET) when all patients were included in the analysis.
We leave it to the experts to comment on whether the secondary analyses are of interest and warrant further studies (e.g. bexarotene-treated ApoE4 noncarriers (N=4) showed a significant reduction in brain amyloid, while no change in amyloid burden was observed in ApoE4 carriers (N=12)).
Introducing the rationale for this clinical study, Cummings et al described controversial preclinical evidence and summarized it as “most follow-up studies reproduced effects on soluble Aβ; effects on amyloid plaques and behavioral outcomes were more variable”.
 
We find this summary to be an interesting example of how negative follow-up studies are trivialized in the context of the original hypothesis and its positive supporting data. Indeed, while effects on soluble Aβ may be called “variable”, as some studies reported negative findings (here; here; see also here), to the best of our knowledge essentially all follow-up studies have NOT confirmed the effects on amyloid plaques (here; here; here; here; here and here).
 
What do we learn from this case?
Some people say that a drug with weak and inconsistent efficacy in animals is not likely to work in humans. Others argue that certain labs may simply be more experienced with particular experimental methods or animal models, and that one should not hesitate to advance experimental treatments that produce signs of efficacy in some labs but not in others. In a sense, we hope that clinical data will serve as a reference to prove one view or the other. The bexarotene example shows that this path takes a lot of time and does not necessarily bring clarity. Nor is it likely to facilitate the delivery of robust preclinical data. The only reliable approach to making preclinical research robust is to invest in its quality. As the saying goes, “if you need a helping hand, the best place to look is at the end of your sleeve”.
History: Reproducibility in the 19th century
In the late 19th century, the observatory of Harvard University pioneered a new approach in astronomy. While previous researchers had looked at the stars through their telescopes and described what they saw, these pioneers took photographic images, which could be used for quantitative analysis and allowed data to be stored and analyzed independently. During the night of May 19th, 1893, Solon Bailey, working from the satellite facility at Arequipa, Peru, looked at clusters of stars in distant nebulae and attempted a census of their populations. He captured a single cluster in a two-hour exposure, applied a grid of 400 tiny boxes and counted the stars in each compartment. Realizing the possibility of making mistakes, he had the same plate re-counted by his wife, Ruth Bailey. As she came up with a somewhat higher number, he decided to average the data and reported a mean of 6,389 stars in this cluster in his article in Astronomy and Astro-Physics. This is an early example of attempting to objectify data collection and reduce individual bias in data analysis.
Based on Dava Sobel “The Glass Universe”, 4th Estate, London 2016
In this section, we provide information about useful tools and instruments for designing and analyzing experiments, which we at PAASP also use on relevant occasions. In addition, we introduce helpful resources for educational and training purposes.
NIH Rigor and Reproducibility Training Modules
Scientific rigor is the structured and controlled application of the scientific method using the highest standards in the field, including considerations in experimental design, data analysis, and reproducibility.
SfN has partnered with NIH and leading neuroscientists who are experts in the field of scientific rigor to offer the series ‘Promoting Awareness and Knowledge to Enhance Scientific Rigor in Neuroscience’ as a part of NIH’s Training Modules to Enhance Data Reproducibility (TMEDR). Graduate students, postdoctoral fellows and early stage investigators are the primary audiences for these training modules.
The products of this initiative (training modules and webinars) are available, or will become available in the near future, on this website - LINK.

March 6-9, 2017 - Heidelberg, Germany:
2nd German Pharm-Tox Summit, held by the Deutsche Gesellschaft für experimentelle und klinische Pharmakologie und Toxikologie (DGPT) together with the Association of Clinical Pharmacology (VKliPha) and the Association for Applied Human Pharmacology (AGAH).
PAASP is organizing the ‘Advanced Courses in Experimental and Clinical Pharmacology’ on Monday, 6th of March from 12-5pm.

March 7, 2017
Webinar by Dr Phil Skolnick
Phil Skolnick will talk on March 7th at 16.00 CET (10 am EST). Tentative title: 'Trust but verify: developing (+)-opioids as medications to treat SUDs'. This webinar is hosted by Elena Koustova and supported by Cohen Veterans Bioscience.
 
March 9-12, 2017 - Nice, France:
ECNP Workshop on Neuropsychopharmacology for Junior Scientists in Europe
OS.1: Data sharing, reproducibility and team science by Magali Haas (Saturday 16:30 - 17:00)
 
March 24, 2017 – Leiden, Netherlands:
ECNP Network 'Preclinical Data Forum'
 
April 7-13, 2017 - Windsor, UK:
Workshop on Advanced Methods for Reproducible Science (BBSRC/ECNP)

May 21-24, 2017 - Stockholm, Sweden:
6th Pharmaceutical Sciences World Congress (PSWC): Symposium on 'Data quality, robustness and relevance in preclinical research and development', organized and chaired by Thomas Steckler.
 
May 28-31, 2017 - Amsterdam, Netherlands:
5th World Conference on Research Integrity

July 3-5, 2017 - Heidelberg, Germany:
ECNP Workshop for Young Scientists “How to Make Preclinical Research Robust”

Please let us know if you are aware of additional or future meetings covering relevant aspects of Good Research Practice and Data Integrity in the biomedical research area:
Contact us
Get involved!
We at PAASP would like to invite you to join our projects and to help improve the quality of biomedical research together.
Please reach out to us to learn more about opportunities to collaborate on different activities, ranging from the development of new quality standards for preclinical research to training for students and scientists in Good Scientific Practices.
Contact us
Copyright © 2017 PAASP GmbH, All rights reserved.

