The Centiloid Project: standardizing quantitative amyloid plaque estimation by PET. Alzheimers Dement. 2015 Jan;11(1):1-15.e1-4. Epub 2014 Oct 28. PubMed.


Comments

  1. Overall, this project is a highly welcome initiative. The ability to confidently compare and translate quantitative results among the various amyloid tracers should facilitate both basic research on disease pathophysiology and the development of drugs acting on the amyloid pathway. Large multicenter studies or trials could confidently employ different amyloid ligands depending on availability, approval status, or other factors, without having to worry about the difficulty of pooling the results in the end. In theory, even longitudinal follow-up of a given patient could use different ligands at different time points and still arrive at a numerical value quantifying changes in amyloid load. A welcome “side effect” of the centiloid scale is that measures of reliability will be more comparable between tracers, which is important when calculating power or sample sizes for trials planning to use a mixed set of ligands.

    However, there are some challenges that the project needs to overcome. The first concerns the inherent convertibility of amyloid measures between the ligands. The centiloid scale is essentially a linear scale anchored at two points. Convertibility requires, among other things, that the amyloid load estimates for all ligands change in a similar, linear way in response to increasing true plaque load. That is to say, non-linearities in the dynamic range may lead to problems in converting/standardizing intermediate load levels. Some possible sources of non-linearity are:

    • Differences in specificity: ligands not binding to the same binding sites on the amyloid fibrils (in the thermometer example, do they all actually measure temperature, or also a bit of something else?). The current amyloid ligands (except FDDNP) seem reasonably similar, but new tracers may emerge that measure different species of fibrillar (or non-fibrillar?) amyloid, and perhaps other targets as well.
    • Differences in non-specific white-matter binding, which influence amyloid load measures differently across the dynamic range (e.g., due to atrophy, ligands with higher white-matter binding will report relatively higher amyloid load at the higher end of the spectrum).
    • Fully quantitative (e.g., distribution volume ratio, DVR) and semi-quantitative (standardized uptake value ratio, SUVR) measures are differently sensitive to perturbing effects such as atrophy and blood-flow changes (especially hypoperfusion).
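    The two-point linear anchoring mentioned above can be sketched in a few lines. The anchor SUVR values used in the example are illustrative placeholders chosen for this sketch, not the published calibration constants:

```python
def to_centiloid(suvr, suvr_young_control_mean, suvr_ad_mean):
    """Map a tracer-specific SUVR onto the 0-100 centiloid scale.

    The scale is linear and anchored at two points: the mean uptake of
    young amyloid-negative controls (defined as 0) and the mean uptake
    of typical AD patients (defined as 100).
    """
    return 100.0 * (suvr - suvr_young_control_mean) / (
        suvr_ad_mean - suvr_young_control_mean
    )

# Hypothetical PiB-like anchor values (placeholders, not real constants):
intermediate = to_centiloid(1.8, 1.1, 2.1)  # an intermediate load level
```

    Note that the mapping is only as good as its linearity assumption: if two ligands respond non-linearly (and differently) to the same true plaque load, their centiloid values will agree at the anchors but can diverge in between.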

    These issues may be particularly important for longitudinal tracking of amyloid load, and less so when, for example, simply separating amyloid-positive from amyloid-negative subjects. A further challenge is applying centiloid standardization at the regional/voxel level. While the project is geared toward standardizing the global load measure, it would be interesting to apply it at the region/image level as well; in that case the non-linearities described above (especially differences in white-matter binding) may make it difficult to obtain an equivalent centiloid image.

    Another set of challenges comes when trying to implement/use the centiloid scale in actual research projects/drug trials:

    • On-site calibration is key to ensuring comparability of published findings. This step is described in the Centiloid Project. It is worth emphasizing that correct implementation will be facilitated by a detailed description of the acquisition, reconstruction, and processing methodologies used to determine the values for the public calibration data sets (such as image preprocessing steps with their key parameters, e.g., smoothing filters used, derivation of regions of interest, quantification method, etc.).
    • It may be envisaged that research groups/pharmaceutical companies would simply adopt standard acquisition protocols and methods (cookbook recipes) instead of performing full local (re-)calibration work. This places even stronger demands on fully documented descriptions of the settings/methods used for the calibration data sets. Ideally, even the actual software components or algorithms could be shared, so that groups could simply and reliably analyze their data without on-site calibration.
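    As a rough sketch of what local (re-)calibration amounts to, a site could fit a linear mapping from its own pipeline's SUVR output for the public calibration scans onto the standard centiloid values published for those scans. The numbers below are made up purely for illustration:

```python
import numpy as np

def fit_local_calibration(local_suvr, standard_centiloid):
    """Least-squares fit of the linear mapping from a site's own SUVR
    pipeline output to the centiloid values published for the public
    calibration scans. Returns (slope, intercept) so that
    CL ~= slope * SUVR + intercept for new local scans.
    """
    slope, intercept = np.polyfit(local_suvr, standard_centiloid, deg=1)
    return slope, intercept

# Hypothetical calibration-set values (illustrative, not real data):
local = np.array([1.0, 1.2, 1.5, 1.9, 2.2])       # site's own SUVRs
standard = np.array([-2.0, 15.0, 42.0, 80.0, 108.0])  # published centiloids
slope, intercept = fit_local_calibration(local, standard)
cl_new = slope * 1.6 + intercept  # convert a new local SUVR to centiloids
```

    The quality of such a fit (e.g., its residuals on the calibration scans) is itself informative: large or structured residuals would flag exactly the kind of non-linearity or methodological mismatch discussed above.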
