Search results
Leveraging the Adolescent Brain Cognitive Development Study to improve behavioral prediction from neuroimaging in smaller replication samples.
Magnetic resonance imaging (MRI) is a popular and useful non-invasive method to map patterns of brain structure and function to complex human traits. Recently published observations in multiple large-scale studies cast doubt upon these prospects, particularly for prediction of cognitive traits from structural and resting-state functional MRI, which seem to account for little behavioral variability. We leverage baseline data from thousands of children in the Adolescent Brain Cognitive Development (ABCD) Study to inform the replication sample size required with both univariate and multivariate methods across different imaging modalities to detect reproducible brain-behavior associations. We demonstrate that by applying multivariate methods to high-dimensional brain imaging data, we can capture lower-dimensional patterns of structural and functional brain architecture that correlate robustly with cognitive phenotypes and are reproducible with only 41 individuals in the replication sample for working memory-related functional MRI, and ~100 subjects for structural MRI. Even with 100 random re-samplings of 50 subjects in the discovery sample, prediction can be adequately powered with 98 subjects in the replication sample for multivariate prediction of cognition with working memory task functional MRI. These results point to an important role for neuroimaging in translational neurodevelopmental research and showcase how findings in large samples can inform reproducible brain-behavior associations in the small sample sizes that are at the heart of many investigators' research programs and grants.
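The multivariate discovery-replication approach described above can be sketched with synthetic data: a handful of latent components stand in for low-dimensional brain architecture, PCA (via SVD) extracts them from high-dimensional features in a discovery sample, and a linear model is evaluated in a deliberately small replication sample. All sizes and signal strengths here are illustrative assumptions, not values from the ABCD Study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for high-dimensional imaging features: a few latent
# components drive both the features and the "cognition" phenotype.
n, p, k = 500, 1000, 5
latent = rng.normal(size=(n, k))
X = latent @ rng.normal(size=(k, p)) + rng.normal(scale=2.0, size=(n, p))
y = latent[:, 0] + 0.2 * rng.normal(size=n)

disc, rep = np.arange(400), np.arange(400, 500)   # replication deliberately small

# Low-dimensional pattern learned from the discovery sample only (PCA via SVD).
mu = X[disc].mean(axis=0)
_, _, vt = np.linalg.svd(X[disc] - mu, full_matrices=False)
components = vt[:k]

scores_disc = (X[disc] - mu) @ components.T
beta, *_ = np.linalg.lstsq(
    np.column_stack([np.ones(len(disc)), scores_disc]), y[disc], rcond=None)

# Out-of-sample brain-behavior association in the replication sample.
scores_rep = (X[rep] - mu) @ components.T
y_hat = np.column_stack([np.ones(len(rep)), scores_rep]) @ beta
r = float(np.corrcoef(y_hat, y[rep])[0, 1])
print(round(r, 2))
```

When the signal truly is low-dimensional, the out-of-sample correlation stays high even though the replication sample holds only 100 subjects, mirroring the paper's central claim.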
Association of Regular Opioid Use With Incident Dementia and Neuroimaging Markers of Brain Health in Chronic Pain Patients: Analysis of UK Biobank.
Objectives: We aimed to investigate the association of regular opioid use, compared with non-opioid analgesics, with incident dementia and neuroimaging outcomes among chronic pain patients. Design: The primary design is a prospective cohort study. To triangulate evidence, we also conducted a nested case-control study analyzing opioid prescriptions and a cross-sectional study analyzing neuroimaging outcomes. Setting and participants: Dementia-free UK Biobank participants with chronic pain and regular analgesic use. Measurements: Chronic pain status and regular analgesic use were captured using self-reported questionnaires and verbal interviews. Opioid prescription data were obtained from primary care records. Dementia cases were ascertained using primary care, hospital, and death registry records. Propensity score-matched Cox proportional hazards analysis, conditional logistic regression, and linear regression were applied to the data in the prospective cohort, nested case-control, and cross-sectional studies, respectively. Results: Prospective analyses revealed that regular opioid use, compared with non-opioid analgesics, was associated with an increased dementia risk over the 15-year follow-up (hazard ratio [HR], 1.18 [95% confidence interval (CI): 1.08-1.30]; absolute rate difference [ARD], 0.44 [95% CI: 0.19-0.71] per 1000 person-years; Wald χ2 = 3.65; df = 1; p < 0.001). The nested case-control study suggested that a higher number of opioid prescriptions was associated with an increased risk of dementia (1 to 5 prescriptions: OR = 1.21, 95% CI: 1.07-1.37, Wald χ2 = 3.02, df = 1, p = 0.003; 6 to 20: OR = 1.27, 95% CI: 1.08-1.50, Wald χ2 = 2.93, df = 1, p = 0.003; more than 20: OR = 1.43, 95% CI: 1.23-1.67, Wald χ2 = 4.57, df = 1, p < 0.001). Finally, neuroimaging analyses revealed that regular opioid use was associated with lower total grey matter and hippocampal volumes, and higher white matter hyperintensity volumes. Conclusion: Regular opioid use in chronic pain patients was associated with an increased risk of dementia and poorer brain health when compared to non-opioid analgesic use. These findings imply a need for re-evaluation of opioid prescription practices for chronic pain patients and, if further evidence supports causality, provide insights into strategies to mitigate the burden of dementia.
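The absolute rate difference reported above is a simple function of events and person-time. A minimal sketch with made-up counts, chosen only so the outputs land near the reported effect sizes; these are not UK Biobank data:

```python
# Hypothetical event counts and person-years per group; illustrative only.
def rate_per_1000(events, person_years):
    return 1000 * events / person_years

opioid_rate = rate_per_1000(580, 200_000)
non_opioid_rate = rate_per_1000(490, 200_000)

ard = opioid_rate - non_opioid_rate          # absolute rate difference per 1000 person-years
rate_ratio = opioid_rate / non_opioid_rate   # crude analogue of the adjusted HR
print(round(ard, 2), round(rate_ratio, 2))
```

The study's actual HR additionally adjusts for confounding via propensity score matching, which a crude rate ratio does not.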
Changes in End-of-Life Symptom Management Prescribing among Long-Term Care Residents during COVID-19.
Objective: To examine changes in the prescribing of end-of-life symptom management medications in long-term care (LTC) homes during the COVID-19 pandemic. Design: Retrospective cohort study using routinely collected health administrative data in Ontario, Canada. Setting and participants: We included all individuals who died in LTC homes between January 1, 2017, and March 31, 2021. We separated the study into 2 periods: before COVID-19 (January 1, 2017, to March 17, 2020) and during COVID-19 (March 18, 2020, to March 31, 2021). Methods: For each LTC home, we measured the percentage of residents who died before and during COVID-19 who had a subcutaneous symptom management medication prescription in their last 14 days of life. We grouped LTC homes into quintiles based on their mean prescribing rates before COVID-19, and examined changes in prescribing during COVID-19 and COVID-19 outcomes across quintiles. Results: We captured 75,438 LTC residents who died in Ontario's 626 LTC homes during the entire study period, with 19,522 (25.9%) dying during COVID-19. The mean prescribing rate during COVID-19 ranged from 46.9% to 79.4% between the lowest and highest prescribing quintiles. During COVID-19, the mean prescribing rate in the lowest prescribing quintile increased by 9.6% compared to before COVID-19. Compared to LTC homes in the highest prescribing quintile, homes in the lowest prescribing quintile experienced the highest proportion of COVID-19 outbreaks (73.4% vs 50.0%), the largest mean outbreak intensity (0.27 vs 0.09 cases/bed), the highest mean total days with a COVID-19 outbreak (72.7 vs 24.2 days), and the greatest proportion of decedents who were transferred and died outside of LTC (22.1% vs 8.6%). Conclusions and implications: LTC homes in Ontario had wide variations in the prescribing rates of end-of-life symptom management medications before and during COVID-19. Homes in the lower prescribing quintiles had more COVID-19 cases per bed and days spent in an outbreak.
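Grouping homes into quintiles by their pre-pandemic prescribing rate, as in the study design, reduces to percentile cut points. A sketch with simulated rates; the 626-home count matches the abstract, but the rates themselves are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative per-home prescribing rates (%), not the study data.
rates = rng.uniform(30, 90, size=626)

# Quintile cut points computed on the (pre-pandemic) rates.
edges = np.percentile(rates, [20, 40, 60, 80])
quintile = np.searchsorted(edges, rates)          # 0 = lowest, 4 = highest

means = [float(rates[quintile == q].mean()) for q in range(5)]
print([round(m, 1) for m in means])
```

The same group labels can then be carried forward to compare pandemic-period outcomes (outbreaks, transfers) across quintiles.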
Characterization of the genetic determinants of context-specific DNA methylation in primary monocytes.
To better understand inter-individual variation in sensitivity of DNA methylation (DNAm) to immune activity, we characterized effects of inflammatory stimuli on primary monocyte DNAm (n = 190). We find that monocyte DNAm is site-dependently sensitive to lipopolysaccharide (LPS), with LPS-induced demethylation occurring following hydroxymethylation. We identify 7,359 high-confidence immune-modulated CpGs (imCpGs) that differ in genomic localization and transcription factor usage according to whether they represent a gain or loss in DNAm. Demethylated imCpGs are profoundly enriched for enhancers and colocalize to genes enriched for disease associations, especially cancer. DNAm is age associated, and we find that 24-h LPS exposure triggers approximately 6 months of gain in epigenetic age, directly linking epigenetic aging with innate immune activity. By integrating LPS-induced changes in DNAm with genetic variation, we identify 234 imCpGs under local genetic control. Exploring shared causal loci between LPS-induced DNAm responses and human disease traits highlights examples of disease-associated loci that modulate imCpG formation.
New Tools and Nuanced Interventions to Accelerate Achievement of the 2030 Roadmap for Neglected Tropical Diseases.
The World Health Organization roadmap for neglected tropical diseases (NTDs) sets out ambitious targets for disease control and elimination by 2030, including 90% fewer people requiring interventions against NTDs and the elimination of at least 1 NTD in 100 countries. Mathematical models are an important tool for understanding NTD dynamics, optimizing interventions, assessing the efficacy of new tools, and estimating the economic costs associated with control programs. As NTD control shifts to increased country ownership and programs progress toward disease elimination, tailored models that better incorporate local context and can help to address questions that are important for decision-making at the national level are gaining importance. In this introduction to the supplement, New Tools and Nuanced Interventions to Accelerate Achievement of the 2030 Roadmap for Neglected Tropical Diseases, we discuss current challenges in generating more locally relevant models and summarize how the articles in this supplement present novel ways in which NTD modeling can help to accelerate achievement and sustainability of the 2030 targets.
Using Passive Surveillance to Maintain Elimination as a Public Health Problem for Neglected Tropical Diseases: A Model-Based Exploration.
Background: Great progress is being made toward the goal of elimination as a public health problem for neglected tropical diseases such as leprosy, human African trypanosomiasis, Buruli ulcer, and visceral leishmaniasis, which relies on intensified disease management and case finding. However, strategies for maintaining this goal are still under discussion. Passive surveillance is a core pillar of a long-term, sustainable surveillance program. Methods: We use a generic model of disease transmission with slow epidemic growth rates and cases detected through severe symptoms and passive detection to evaluate under what circumstances passive detection alone can keep transmission under control. Results: Reducing the period of infectiousness by decreasing time to treatment has a small effect on reducing transmission. Therefore, to prevent resurgence, passive surveillance needs to be very efficient. For some diseases, the treatment time and level of passive detection needed to prevent resurgence are unlikely to be obtainable. Conclusions: The success of a passive surveillance program crucially depends on what proportion of cases are detected, how much of their infectious period is reduced, and the underlying reproduction number of the disease. Modeling suggests that relying on passive detection alone is unlikely to be enough to maintain elimination goals.
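The core argument can be expressed as a one-line model: if a fraction p of cases is passively detected and treatment truncates their infectious period to a fraction f of its usual length, the effective reproduction number is R_eff = R0 * ((1 - p) + p * f). A sketch with illustrative parameter values (not taken from any specific disease in the paper):

```python
# Effective reproduction number under passive detection alone.
def r_eff(r0, p, f):
    # Detected cases (fraction p) transmit for only a fraction f of the
    # usual infectious period; undetected cases transmit fully.
    return r0 * ((1 - p) + p * f)

# Illustrative: R0 just above 1, 30% of cases detected, treatment halving
# the infectious period still leaves R_eff above 1 -> resurgence.
print(round(r_eff(1.2, 0.3, 0.5), 2))

# Minimum detection proportion needed for control (R_eff < 1).
p_min = (1 - 1 / 1.2) / (1 - 0.5)
print(round(p_min, 2))
```

Resurgence is prevented only when R_eff < 1, i.e. p > (1 - 1/R0)/(1 - f); even for slow-growing diseases, the required detection proportion can exceed what passive surveillance realistically achieves, which is the abstract's conclusion.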
Impact of national-scale targeted point-of-care symptomatic lateral flow testing on trends in COVID-19 infections and hospitalisations during the second epidemic wave in Austria.
Background: In October 2020, amidst the second COVID-19 epidemic wave and before the second national lockdown, Austria introduced a policy of population-wide point-of-care lateral flow antigen testing (POC-LFT). This study explores the impact of this policy by quantifying the association between trends in POC-LFT activity and trends in PCR positivity (as a proxy for symptomatic infection) and hospitalisations related to COVID-19 between October 22 and December 6, 2020. Methods: We stratified 94 Austrian districts according to POC-LFT activity (number of POC-LFTs performed per 100,000 inhabitants over the study period) into three population cohorts: (i) high (N=24), (ii) medium (N=45) and (iii) low (N=25). Across the cohorts we (a) compared trends in POC-LFT activity with PCR positivity and hospital admissions; and (b) compared the epidemic growth rate before and after the epidemic peak. Results: The trend in POC-LFT activity was similar to PCR-positivity and hospitalisation trends across the high, medium and low POC-LFT activity cohorts. Compared to the low POC-LFT activity cohort, the high-activity cohort had a steeper pre-peak daily increase in PCR positivity (2.24 more cases per day, per district and per 100,000 inhabitants; 95% CI: 2.0-2.7; p<0.001) and hospitalisations (0.10; 95% CI: 0.02, 0.18; p<0.15), and a 6-day-earlier peak of PCR positivity. The high-activity cohort also had a steeper daily reduction in the post-peak trend in PCR positivity (-3.6; 95% CI: -4.8, -2.3; p<0.001) and hospitalisations (-0.2; 95% CI: -0.32, -0.08; p<0.05). Conclusions: High POC-LFT use was associated with increased and earlier case finding during the second Austrian COVID-19 epidemic wave, and with an early and significant reduction in cases and hospitalisations during the second national lockdown. A national policy promoting symptomatic POC-LFT in primary care can capture trends in PCR positivity and hospitalisations.
Symptomatic POC-LFT delivered at scale and combined with immediate self-quarantining and contact tracing can thus be a proxy for epidemic status, and hence a useful tool that can replace large-scale PCR testing.
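Comparing pre- and post-peak epidemic growth, as done across the Austrian district cohorts, amounts to fitting linear trends on either side of the peak. A sketch on simulated daily case counts; the shape and slopes are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(30)
# Illustrative daily PCR-positive counts: rise to a peak at day 15, then fall.
true = np.where(days <= 15, 2.0 * days, 30 - 3.0 * (days - 15))
cases = true + rng.normal(scale=1.0, size=days.size)

pre_slope = np.polyfit(days[:16], cases[:16], 1)[0]    # pre-peak daily increase
post_slope = np.polyfit(days[15:], cases[15:], 1)[0]   # post-peak daily change
print(round(float(pre_slope), 1), round(float(post_slope), 1))
```

A steeper pre-peak slope with an earlier peak is the signature the study associates with higher testing activity (earlier case finding), and a steeper negative post-peak slope with a faster decline under lockdown.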
Inferring community transmission of SARS-CoV-2 in the United Kingdom using the ONS COVID-19 Infection Survey.
Key epidemiological parameters, including the effective reproduction number, R(t), and the instantaneous growth rate, r(t), generated from an ensemble of models, have been informing public health policy throughout the COVID-19 pandemic in the four nations of the United Kingdom of Great Britain and Northern Ireland (UK). However, estimation of these quantities became challenging with the scaling down of surveillance systems as part of the transition from the "emergency" to "endemic" phase of the pandemic. The Office for National Statistics (ONS) COVID-19 Infection Survey (CIS) provided an opportunity to continue estimating these parameters in the absence of other data streams. We used a penalised spline model fitted to the publicly-available ONS CIS test positivity estimates to produce a smoothed estimate of the prevalence of SARS-CoV-2 positivity over time. The resulting fitted curve was used to estimate the "ONS-based" R(t) and r(t) across the four nations of the UK. Estimates produced under this model are compared to government-published estimates with particular consideration given to the contribution that this single data stream can offer in the estimation of these parameters. Depending on the nation and parameter, we found that up to 77% of the variance in the government-published estimates can be explained by the ONS-based estimates, demonstrating the value of this singular data stream to track the epidemic in each of the four nations. We additionally find that the ONS-based estimates uncover epidemic trends earlier than the corresponding government-published estimates. Our work shows that the ONS CIS can be used to generate key COVID-19 epidemiological parameters across the four UK nations, further underlining the enormous value of such population-level studies of infection. 
This is not intended as an alternative to ensemble modelling, rather it is intended as a potential solution to the aforementioned challenge faced by public health officials in the UK in early 2022.
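A minimal version of the approach, under the simplifying assumption that the penalised spline is fitted to log-positivity so that its derivative is directly the instantaneous growth rate r(t); the data here are simulated, whereas the paper fits the published ONS CIS positivity estimates:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
t = np.arange(60, dtype=float)
# Illustrative positivity series growing at a true rate of r = 0.05/day.
prev = 0.02 * np.exp(0.05 * t) * (1 + rng.normal(scale=0.03, size=t.size))

# Smoothing spline on log-prevalence (s controls the smoothing penalty).
spline = UnivariateSpline(t, np.log(prev), s=0.1)
r_t = spline.derivative()(t)     # instantaneous growth rate r(t)
print(round(float(np.median(r_t)), 3))
```

For small r, R(t) can then be approximated from r(t) together with an assumed generation-time distribution, which is how a single smoothed prevalence stream yields both headline parameters.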
Digital measurement of SARS-CoV-2 transmission risk from 7 million contacts.
How likely is it to become infected by SARS-CoV-2 after being exposed? Almost everyone wondered about this question during the COVID-19 pandemic. Contact-tracing apps [1,2] recorded measurements of proximity [3] and duration between nearby smartphones. Contacts (individuals exposed to confirmed cases) were notified according to public health policies such as the 2 m, 15 min guideline [4,5], despite limited evidence supporting this threshold. Here we analysed 7 million contacts notified by the National Health Service COVID-19 app [6,7] in England and Wales to infer how app measurements translated to actual transmissions. Empirical metrics and statistical modelling showed a strong relation between app-computed risk scores and actual transmission probability. Longer exposures at greater distances had risk similar to that of shorter exposures at closer distances. The probability of transmission confirmed by a reported positive test increased initially linearly with duration of exposure (1.1% per hour) and continued increasing over several days. Whereas most exposures were short (median 0.7 h, interquartile range 0.4-1.6), transmissions typically resulted from exposures lasting between 1 h and several days (median 6 h, interquartile range 1.4-28). Households accounted for about 6% of contacts but 40% of transmissions. With sufficient preparation, privacy-preserving yet precise analyses of risk that would inform public health measures, based on digital contact tracing, could be performed within weeks of the emergence of a new pathogen.
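The reported initial slope of 1.1% per hour is consistent with a constant-hazard sketch, P(transmission) = 1 − exp(−λ·duration), which is approximately linear for short exposures. Here the first two durations are the abstract's median short and long exposures, with a day-long exposure added for illustration; the constant-hazard form is an assumption, not the paper's fitted model:

```python
import numpy as np

lam = 0.011                       # ~1.1% per hour initial slope
hours = np.array([0.7, 6.0, 24.0])
p = 1 - np.exp(-lam * hours)      # transmission probability per exposure

print(np.round(p, 3))
# For short exposures the curve is nearly linear: P ≈ lam * hours.
print(round(float(p[0] / (lam * hours[0])), 2))
```

The saturating exponential also captures the qualitative finding that risk keeps rising over multi-day exposures but more slowly than linearly.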
A novel approach to evaluating the UK childhood immunisation schedule: estimating the effective coverage vector across the entire vaccine programme.
Background: The availability of new vaccines can prompt policy makers to consider changes to the routine childhood immunisation programme in the UK. Alterations to one aspect of the schedule may have implications for other areas of the programme (e.g. adding more injections could reduce uptake of vaccines featuring later in the schedule). Colleagues at the Department of Health (DH) in the UK therefore wanted to know whether assessing the impact across the entire programme of a proposed change to the UK schedule could lead to different decisions than those made on the current case-by-case basis. This work is a first step towards addressing this question. Methods: A novel framework for estimating the effective coverage against all of the diseases within a vaccination programme was developed. The framework was applied to the current (August 2015) UK childhood immunisation programme, to plausible extensions to it in the foreseeable future (introducing vaccination against Meningitis B and/or Hepatitis B) and to a "what-if" scenario regarding a Hepatitis B vaccine scare that was developed in close collaboration with DH. Results: Our applications of the framework demonstrate that a programme-wide view of hypothetical changes to the schedule is important. For example, we show how introducing Hepatitis B vaccination could negatively impact aspects of the current programme by reducing uptake of vaccines featuring later in the schedule, and illustrate that the potential benefits of introducing any new vaccine are susceptible to behaviour changes affecting uptake (e.g. a vaccine scare). We also show how it may be useful to consider the potential benefits and scheduling needs of all vaccinations on the horizon of interest rather than those of an individual vaccine in isolation, e.g. how introducing Meningitis B vaccination could saturate the early (2-month) visit, thereby potentially restricting scheduling options for Hepatitis B immunisation should it be introduced to the programme in the future. Conclusions: Our results demonstrate the potential benefit of considering the programme-wide impact of changes to an immunisation schedule, and our framework is an important step in the development of a means for systematically doing so.
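The framework's central object, an effective coverage value per disease that responds to schedule changes, can be sketched as a dictionary update with a hypothetical knock-on uptake penalty on later visits. The vaccine names come from the abstract; all coverage numbers and the penalty are invented for illustration:

```python
# Effective coverage per disease before any schedule change (illustrative).
baseline = {"MenB": 0.0, "HepB": 0.0, "MMR": 0.92, "DTaP": 0.95}

def add_vaccine(coverage, name, uptake, later=("MMR",), penalty=0.02):
    """Add a new vaccine and apply a knock-on uptake penalty to vaccines
    featuring later in the schedule (the penalty is a stand-in for the
    behavioural effect the abstract describes)."""
    updated = dict(coverage)
    updated[name] = uptake
    for disease in later:
        updated[disease] -= penalty
    return updated

with_menb = add_vaccine(baseline, "MenB", 0.90)
print(with_menb["MenB"], round(with_menb["MMR"], 2))
```

Evaluating the whole coverage vector before and after a change, rather than the single added vaccine, is what makes the assessment programme-wide.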
Are we prepared for the next influenza pandemic? Lessons from modelling different preparedness policies against four pandemic scenarios.
In the event of a novel influenza strain that is markedly different from the strains currently circulating in humans, the population has little or no immunity, and infection spreads quickly, causing a global pandemic. Over the past century, there have been four major influenza pandemics: the 1918 pandemic ("Spanish Flu"), the 1957-58 pandemic (the "Asian Flu"), the 1968-69 pandemic (the "Hong Kong Flu") and the 2009 pandemic (the "Swine Flu"). To inform planning against future pandemics, this paper investigates how the net-present value of employing pre-purchase and responsive-purchase vaccine programmes, in the presence and absence of antiviral drugs, differs across scenarios that resemble these historic influenza pandemics. Using the existing literature and discussions with policy decision makers in the UK, we first characterised the four past influenza pandemics by their transmissibility and infection severity. For these combinations of parameters, we then projected the net-present value of employing pre-purchase vaccine (PPV) and responsive-purchase vaccine (RPV) programmes in the presence and absence of antiviral drugs. To differentiate between PPV and RPV policies, we varied the vaccine effectiveness value and the time to when the vaccine first becomes available. Our results are "heat-map" graphs displaying the benefits of different strategies in pandemic scenarios that resemble historic influenza pandemics. Our results suggest that immunisation with either PPV or RPV in the presence of a stockpile of effective antiviral drugs does not have a positive net-present value for all of the pandemic scenarios considered. In contrast, in the absence of effective antivirals, both PPV and RPV policies have positive net-present value across all the pandemic scenarios. Moreover, in all considered circumstances, vaccination was most beneficial if started sufficiently early and if it covered a sufficiently large number of people.
When comparing the two vaccine programmes, the RPV policy allowed a longer timeframe and lower coverage to attain the same benefit as the PPV policy. Our findings suggest that responsive-purchase vaccination policy has a bigger window of positive net-present value when employed against each of the historic influenza pandemic strains but needs to be rapidly available to maximise benefit. This is important for future planning as it suggests that future preparedness policies may wish to consider utilising timely (i.e. responsive-purchased) vaccines against emerging influenza pandemics.
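The net-present-value comparison underlying these results reduces to discounting each period's benefits minus costs. A sketch with invented cash flows, in which the PPV pays its purchase cost up front while the RPV pays only once the pandemic strain is known (3.5% is a commonly used discount rate, not necessarily the paper's):

```python
# Net-present value of a vaccine programme over the pandemic timeline.
def npv(benefits, costs, rate=0.035):
    return sum((b - c) / (1 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Illustrative cash flows per period (e.g. years), not the paper's numbers.
ppv = npv(benefits=[0, 40, 60], costs=[30, 10, 10])   # stockpile bought up front
rpv = npv(benefits=[0, 30, 70], costs=[0, 25, 15])    # purchased responsively
print(round(ppv, 1), round(rpv, 1))
```

Deferring the purchase cost is what gives the responsive-purchase policy its wider window of positive net-present value, provided the vaccine arrives in time to deliver the later benefits.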
The interplay between susceptibility and vaccine effectiveness controls the timing and size of an emerging seasonal influenza wave in England.
The relaxation of social distancing measures and the reduced circulation of influenza over the last two seasons may lead to a winter 2022 influenza wave in England. We used an established model for influenza transmission and vaccination to evaluate the influenza immunisation programme rolled out over October to December 2022. Specifically, we explored how the interplay between pre-season population susceptibility and influenza vaccine efficacy controls the timing and size of a possible winter influenza wave. Our findings suggest that susceptibility affects the timing and height of a potential influenza wave, with higher susceptibility leading to an earlier and larger wave, while vaccine efficacy controls the size of its peak. With pre-season susceptibility higher than pre-COVID-19 levels, an early influenza epidemic wave is possible under the planned vaccine programme, its size dependent on vaccine effectiveness against the circulating strain. If pre-season susceptibility is low and similar to pre-COVID-19 levels, the planned influenza vaccine programme with an effective vaccine could largely suppress a winter 2022 influenza outbreak in England.
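The qualitative claim, that higher pre-season susceptibility brings the wave earlier and makes it larger while the vaccine trims the peak, can be reproduced with a toy SIR model; all parameters here are illustrative, not the paper's fitted values:

```python
def sir_peak(s0, ve=0.0, coverage=0.0, beta=0.4, gamma=0.25, days=600):
    """Daily-step Euler SIR; vaccination removes ve * coverage of the
    susceptible pool up front. Returns (peak day, peak prevalence)."""
    s = s0 * (1 - ve * coverage)
    i = 1e-3
    peak, peak_day = i, 0
    for day in range(1, days + 1):
        new = beta * s * i
        s -= new
        i += new - gamma * i
        if i > peak:
            peak, peak_day = i, day
    return peak_day, peak

early = sir_peak(s0=0.9)                              # high pre-season susceptibility
late = sir_peak(s0=0.7)                               # pre-COVID-like susceptibility
protected = sir_peak(s0=0.9, ve=0.6, coverage=0.3)    # effective vaccine rolled out
print(early[0], late[0], round(early[1], 3), round(late[1], 3))
```

Higher initial susceptibility raises the early growth rate (beta*s − gamma), which both advances the peak and enlarges it; vaccination shrinks the susceptible pool and hence the peak.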
A method for evaluating and comparing immunisation schedules that cover multiple diseases: Illustrative application to the UK routine childhood vaccine schedule.
Background: In the UK, the childhood immunisation programme is given in the first 5 years of life and protects against 12 vaccine-preventable diseases. Recently, this programme has undergone changes, with the addition of vaccination against Meningitis B from September 2015 and the removal of the primary dose of protection against Meningitis C from July 2016. These changes have a direct impact on the associated diseases but may also induce indirect effects on the vaccines that are given simultaneously or later in the programme. In this work, we developed a novel formal method to evaluate the impact of changes to one aspect of the programme across an entire vaccine programme. Methods: First, we combined transmission modelling (for four diseases) and historic data synthesis (for eight diseases) to project, for each disease, the disease burden at different levels of effective coverage against the associated disease. Second, we used a simulation model to determine the vector of effective coverage against each disease under three variations of the current childhood schedule. Combining these, we calculated the vector of disease burden across the programme under different scenarios, and assessed the direct and indirect effects of the schedule changes. Results: Through illustrative application of our novel framework to three scenarios of the current childhood immunisation programme in the UK, we demonstrated the feasibility of this unifying approach. For each disease in the programme, we successfully quantified the residual disease burden due to the change. For some diseases, the change was indirectly beneficial and reduced the burden, whereas for others the effect was adverse and the change increased the disease burden. Conclusions: Our results demonstrate the potential benefit of considering the programme-wide impact of changes to an immunisation schedule, and our framework is an important step in the development of a means for systematically doing so.
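The two-stage construction, burden as a function of effective coverage per disease, then evaluation of a coverage vector for a schedule scenario, can be sketched with invented burden curves (disease names from the abstract; all numbers are made up):

```python
import numpy as np

# Stage 1: projected annual cases at 0%..100% effective coverage.
coverage_grid = np.linspace(0, 1, 11)
burden = {
    "MenC": 1200 * (1 - coverage_grid) ** 2,       # strong herd effects
    "Pertussis": 60000 * (1 - 0.9 * coverage_grid),
}

# Stage 2: evaluate a whole effective-coverage vector for a scenario.
def programme_burden(effective_coverage):
    return {d: float(np.interp(effective_coverage[d], coverage_grid, burden[d]))
            for d in burden}

current = programme_burden({"MenC": 0.95, "Pertussis": 0.95})
changed = programme_burden({"MenC": 0.90, "Pertussis": 0.93})  # schedule change
print(current, changed)
```

Differencing the two burden vectors separates the direct effect (the disease whose dose changed) from the indirect effects (uptake knock-ons elsewhere in the schedule).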
The changing health impact of vaccines in the COVID-19 pandemic: A modeling study.
Much of the world's population had already been infected with COVID-19 by the time the Omicron variant emerged at the end of 2021, but the scale of the Omicron wave was larger than any that had come before or has happened since, and it left a global imprinting of immunity that changed the COVID-19 landscape. In this study, we simulate a South African population and demonstrate how population-level vaccine effectiveness and efficiency changed over the course of the first 2 years of the pandemic. We then introduce three hypothetical variants and evaluate the impact of vaccines with different properties. We find that variant-chasing vaccines have a narrow window of dominating pre-existing vaccines but that a variant-chasing vaccine strategy may have global utility, depending on the rate of spread from setting to setting. Next-generation vaccines might be able to overcome uncertainty in pace and degree of viral evolution.
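The trade-off between a pre-existing and a variant-chasing vaccine can be illustrated with the classic final-size relation z = s0·(1 − exp(−R0·z)), where vaccination shrinks the initially susceptible fraction s0. The R0, coverage and effectiveness values below are invented, and this single-population sketch ignores the setting-to-setting spread the study emphasises:

```python
import math

def final_size(r0, s0, iters=200):
    """Fixed-point solve of z = s0 * (1 - exp(-r0 * z)) for the attack rate z."""
    z = 0.5
    for _ in range(iters):
        z = s0 * (1 - math.exp(-r0 * z))
    return z

# New variant with R0 = 4; vaccines differ only in how much susceptibility
# they remove at 30% coverage (illustrative effectiveness values).
pre_existing = final_size(4.0, s0=1 - 0.5 * 0.3)      # VE 50% against the variant
variant_chasing = final_size(4.0, s0=1 - 0.8 * 0.3)   # VE 80% if matched in time
print(round(pre_existing, 2), round(variant_chasing, 2))
```

The matched vaccine wins only if it arrives before the wave, which is why the window for variant-chasing strategies depends on how fast the variant spreads between settings.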