Search results
Multiomic analyses direct hypotheses for Creutzfeldt-Jakob disease risk genes.
Prions are assemblies of misfolded prion protein that cause several fatal and transmissible neurodegenerative diseases, with the most common phenotype in humans being sporadic Creutzfeldt-Jakob disease (sCJD). Aside from variation of the prion protein itself, molecular risk factors are not well understood. Prion and prion-like mechanisms are thought to underpin common neurodegenerative disorders, meaning that elucidating these mechanisms could have broad relevance. Herein we sought to further develop our understanding of the factors that confer risk of sCJD using a systematic gene prioritization and functional interpretation pipeline based on multiomic integrative analyses. We integrated the published sCJD genome-wide association study (GWAS) summary statistics with publicly available bulk brain and brain cell type gene and protein expression datasets. We performed multiple transcriptome- and proteome-wide association studies (TWAS & PWAS) and Bayesian genetic colocalization analyses between sCJD risk association signals and multiple brain molecular quantitative trait loci signals. We then applied our systematic gene prioritization pipeline to the results and nominated prioritized sCJD risk genes, with their risk-associated molecular mechanisms, in a transcriptome- and proteome-wide manner. Genetic upregulation of both gene and protein expression of syntaxin-6 (STX6) in the brain was associated with sCJD risk in multiple datasets, with the risk-associated regulation of gene expression specific to oligodendrocytes. Similarly, increased gene and protein expression of protein disulfide isomerase family A member 4 (PDIA4), involved in the unfolded protein response, was linked to increased disease risk, particularly in excitatory neurons. Protein expression of mesencephalic astrocyte-derived neurotrophic factor (MANF), involved in protection against endoplasmic reticulum stress and in sulfatide binding (linking it to the enzyme in the final step of sulfatide synthesis, encoded by the sCJD risk gene GAL3ST1), was identified as protective against sCJD. In total, 32 genes were prioritized into two tiers based on the level of evidence and confidence for further studies. This study provides insights into the genetically associated molecular mechanisms underlying sCJD susceptibility and prioritizes several specific hypotheses for exploration beyond the prion protein itself and beyond the previously highlighted sCJD risk loci. These findings highlight the importance of glial cells, sulfatides and the excitatory neuron unfolded protein response in sCJD pathogenesis.
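The abstract names Bayesian genetic colocalization between GWAS and brain QTL signals as a core step but gives no implementation details. Purely as an illustration of that class of analysis, and not the authors' actual pipeline, here is a minimal Python sketch of the standard coloc-style calculation: Wakefield approximate Bayes factors per SNP, combined into posterior probabilities for the five usual hypotheses. The priors, the prior effect-size SD of 0.15, and the toy summary statistics are all assumptions.

```python
# A minimal sketch of the coloc-style Bayesian colocalization described
# above: Wakefield approximate Bayes factors per SNP, combined into
# posterior probabilities for the five standard hypotheses. Priors and
# toy inputs are assumptions, not the authors' pipeline settings.
import numpy as np
from scipy.special import logsumexp

def log_abf(beta, se, prior_sd=0.15):
    # Wakefield log approximate Bayes factor for each SNP.
    v, w = se ** 2, prior_sd ** 2
    z2 = (beta / se) ** 2
    return 0.5 * (np.log(v / (v + w)) + z2 * w / (v + w))

def coloc_posteriors(beta1, se1, beta2, se2, p1=1e-4, p2=1e-4, p12=1e-5):
    l1 = log_abf(beta1, se1)   # trait 1, e.g. the sCJD GWAS signal
    l2 = log_abf(beta2, se2)   # trait 2, e.g. a brain eQTL/pQTL signal
    s1, s2, s12 = logsumexp(l1), logsumexp(l2), logsumexp(l1 + l2)
    log_terms = np.array([
        0.0,                                    # H0: neither trait associated
        np.log(p1) + s1,                        # H1: trait 1 only
        np.log(p2) + s2,                        # H2: trait 2 only
        np.log(p1 * p2) + s1 + s2
            + np.log1p(-np.exp(s12 - s1 - s2)), # H3: two distinct causal SNPs
        np.log(p12) + s12,                      # H4: one shared causal SNP
    ])
    return np.exp(log_terms - logsumexp(log_terms))

# Toy region of 500 SNPs with one shared causal variant: a high PP(H4)
# is the colocalization evidence a prioritization pipeline would use.
rng = np.random.default_rng(0)
beta = rng.normal(0, 0.02, 500)
beta[250] = 0.15
se = np.full(500, 0.02)
pp = coloc_posteriors(beta, se, 0.8 * beta, se)
print(dict(zip(["H0", "H1", "H2", "H3", "H4"], pp.round(3))))
```

A high PP(H4) for, say, an STX6 brain eQTL region is the kind of evidence such a pipeline would feed into its prioritization tiers.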
Genome-wide association analyses identify distinct genetic architectures for age-related macular degeneration across ancestries.
To effectively reduce vision loss due to age-related macular degeneration (AMD) on a global scale, knowledge of its genetic architecture in diverse populations is necessary. A critical element, AMD risk profiles in African and Hispanic/Latino ancestries, remains largely unknown. We combined data from the Million Veteran Program with five other cohorts to conduct the first multi-ancestry genome-wide association study of AMD and discovered 63 loci (30 novel). We observe marked cross-ancestry heterogeneity at major risk loci, especially in African-ancestry populations, which show a primary signal in a major histocompatibility complex class II haplotype and reduced risk at the established CFH and ARMS2/HTRA1 loci. Dissecting local ancestry in admixed individuals, we find significantly smaller marginal effect sizes for CFH risk alleles in African-ancestry haplotypes. Broadening efforts to include ancestrally distinct populations helped uncover genes and pathways that boost risk in an ancestry-dependent manner and are potential targets for corrective therapies.
Genome-wide association neural networks identify genes linked to family history of Alzheimer's disease.
Augmenting traditional genome-wide association studies (GWAS) with advanced machine learning algorithms can allow the detection of novel signals in available cohorts. We introduce "genome-wide association neural networks (GWANN)", a novel approach that uses neural networks (NNs) to perform a gene-level association study with family history of Alzheimer's disease (AD). In UK Biobank, we defined cases (n = 42,110) as those with AD or a family history of AD and sampled an equal number of controls. The data were split 80:20 into training and testing samples; GWANN was trained on the former, and associated genes were identified from its performance on the latter. Our method identified 18 genes associated with family history of AD. APOE, BIN1, SORL1, ADAM10, APH1B, and SPI1 have been identified by previous AD GWAS. Among the 12 new genes, PCDH9, NRG3, ROR1, LINGO2, SMYD3, and LRRC7 have been associated with neurofibrillary tangles or phosphorylated tau in previous studies. Furthermore, there is evidence for differential transcriptomic or proteomic expression between AD and healthy brains for 10 of the 12 new genes. A series of post hoc analyses resulted in a significantly enriched protein-protein interaction network (P-value
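The abstract specifies the training recipe (one gene-level network, an 80:20 split, genes called from held-out performance) but not the architecture. As a toy illustration of that recipe only, on synthetic dosages with an assumed small feed-forward network rather than the published GWANN model, a PyTorch sketch:

```python
# A toy illustration of the gene-level recipe described above: one small
# network per gene, trained on that gene's SNP dosages with an 80:20
# split, and the gene scored by held-out performance. Synthetic data and
# an assumed feed-forward architecture, not the published GWANN model.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

torch.manual_seed(0)
n, n_snps = 4000, 50                        # individuals, SNPs in one gene window
dosages = torch.randint(0, 3, (n, n_snps)).float()
signal = 0.6 * dosages[:, 7] - 0.4 * dosages[:, 21]   # two causal SNPs
y = torch.bernoulli(torch.sigmoid(signal - signal.mean()))

split = int(0.8 * n)
model = nn.Sequential(nn.Linear(n_snps, 32), nn.ReLU(),
                      nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):                        # full-batch training on the 80% split
    opt.zero_grad()
    loss = loss_fn(model(dosages[:split]).squeeze(1), y[:split])
    loss.backward()
    opt.step()

with torch.no_grad():                       # gene "association" = held-out skill
    auc = roc_auc_score(y[split:], model(dosages[split:]).squeeze(1))
print(f"held-out AUC for this gene: {auc:.3f}")
```

In practice a held-out score like this would be compared against a null distribution across genes before calling an association.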
Genetic epidemiology of Alzheimer's disease
Dementia is a major health problem in the elderly. It is a syndrome characterized by impairment in intellectual functioning, resulting in a distressing condition for both the patient and the caregiver. Alzheimer's disease (AD) is the most common cause of dementia in Western society. AD is clinically characterized by an insidious onset of decline in memory and at least one other area of cognition. Additional characteristics are a gradually progressive course, a preserved level of consciousness, and the absence of other conditions able to cause these symptoms. The pathological hallmarks in the brains of AD patients are extracellular plaques composed mainly of the amyloid-β peptide and intracellular neurofibrillary tangles containing hyperphosphorylated tau protein (Braak and Braak, 1991).
Large-scale evaluation of outcomes after a genetic diagnosis in children with severe developmental disorders.
Purpose: We sought to evaluate outcomes for clinical management after a genetic diagnosis from the Deciphering Developmental Disorders study. Methods: Individuals in the Deciphering Developmental Disorders study who had a pathogenic/likely pathogenic genotype in the DECIPHER database were selected for inclusion (n = 5010). Clinical notes from regional clinical genetics services were reviewed to assess predefined clinical outcomes relating to interventions, prenatal choices, and information provision. Results: Outcomes were recorded for 4237 diagnosed probands (85% of those eligible) from all 24 recruiting centers across the United Kingdom and Ireland. Clinical management was reported to have changed in 28% of affected individuals. Where individual-level interventions were recorded, additional diagnostic or screening tests were started in 903 (21%) probands through referral to a range of different clinical specialties, and stopped or avoided in a further 26 (0.6%). Disease-specific treatment was started in 85 (2%) probands, including seizure-control medications and dietary supplements, and contraindicated medications were stopped or avoided in a further 20 (0.5%). The option of prenatal/preimplantation genetic testing was discussed with 1204 (28%) families, despite the relatively advanced age of the parents at the time of diagnosis. Importantly, condition-specific information or literature was given to 3214 (76%) families, and 880 (21%) were involved in family support groups. In the most common condition (KBG syndrome; 79 [2%] probands), clinical interventions only partially reflected the temporal development of phenotypes, highlighting the importance of consensus management guidelines and patient support groups. Conclusion: Our results underscore the importance of achieving a clinico-molecular diagnosis to ensure timely onward referral of patients, enabling appropriate care and anticipatory surveillance, and for accessing relevant patient support groups.
Self-interactive learning: Fusion and evolution of multi-scale histomorphology features for molecular traits prediction in computational pathology.
Predicting disease-related molecular traits from histomorphology brings great opportunities for precision medicine. Despite the rich information present in histopathological images, extracting fine-grained molecular features from standard whole slide images (WSIs) is non-trivial. The task is further complicated by the lack of annotations for subtyping and by contextual histomorphological features that might span multiple scales. This work proposes a novel multiple-instance learning (MIL) framework capable of WSI-based cancer morpho-molecular subtyping through the fusion of different-scale features. Our method, termed Inter-MIL, follows a weakly supervised scheme. It enables the training of the patch-level encoder for WSIs in a task-aware optimisation procedure, a step normally not modelled in most existing MIL-based WSI analysis frameworks. We demonstrate that optimising the patch-level encoder is crucial to achieving high-quality fine-grained and tissue-level subtyping results and offers a significant improvement over task-agnostic encoders. Our approach deploys a pseudo-label propagation strategy to update the patch encoder iteratively, allowing discriminative subtype features to be learned. This mechanism also empowers the extraction of fine-grained attention within image tiles (the small patches), a task largely ignored in most existing weakly supervised frameworks. With Inter-MIL, we carried out four challenging cancer molecular subtyping tasks in the context of ovarian, colorectal, lung, and breast cancer. Extensive evaluation results show that Inter-MIL is a robust framework for cancer morpho-molecular subtyping, with superior performance compared to several recently proposed methods in small-dataset scenarios where fewer than 100 training slides are available. The iterative optimisation mechanism of Inter-MIL significantly improves the quality of the image features learned by the patch embedder and generally directs the attention map to areas that better align with experts' interpretation, leading to the identification of more reliable histopathology biomarkers. Moreover, an external validation cohort is used to verify the robustness of Inter-MIL on molecular trait prediction.
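Inter-MIL's iterative pseudo-label propagation is specific to the paper, but the backbone such frameworks build on, attention-based multiple-instance pooling over patch embeddings with a slide-level label, can be sketched. The dimensions and the attention form below are assumptions, not the paper's architecture:

```python
# A sketch of the attention-based MIL pooling that slide-level subtyping
# frameworks of this kind build on: patch embeddings -> attention weights
# -> slide embedding -> subtype logits. Dimensions are assumptions, and
# Inter-MIL's iterative pseudo-label propagation is not reproduced here.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, patches):                        # (n_patches, dim)
        a = torch.softmax(self.attn(patches), dim=0)   # per-patch attention
        slide = (a * patches).sum(dim=0)               # weighted slide embedding
        return self.head(slide), a.squeeze(1)

bag = torch.randn(1000, 512)          # embeddings of one slide's tiles
logits, attention = AttentionMIL()(bag)
print(logits.shape, attention.shape)  # the attention map flags informative tiles
```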
Cluster Triplet Loss for Unsupervised Domain Adaptation on Histology Images
Deep learning models that predict cancer patient treatment response from medical images need to be generalisable across different patient cohorts. However, this can be difficult due to heterogeneity across patient populations. Here we focus on the problem of predicting colorectal cancer patients' response to radiotherapy from histology images scanned from tumour biopsies, and adapt this prediction model to a new, visibly different target cohort of patients. We present a novel unsupervised domain adaptation method with a Cluster Triplet Loss function, using minimal information from the source domain, resulting in an improvement in AUC from 0.544 to 0.818 on the target cohort. We forgo pseudo-labels and class feature centres so as not to add noise and bias to the adapted model, and perform experiments to verify that our model outperforms state-of-the-art methods that use them. Our proposed approach can be applied in many complex medical imaging cases, including prediction on large whole slide images, by combining predictions from smaller, memory-feasible representations of the image extracted from graph neural networks.
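The exact Cluster Triplet Loss is the paper's contribution; as a reference point only, here is a plain triplet margin objective over source-domain clusters, the family the loss belongss to. The k-means step, k = 2, and the margin value are assumptions, not the paper's formulation:

```python
# A sketch of a triplet objective over source-domain clusters, the family
# the paper's Cluster Triplet Loss belongs to: pull each target feature
# toward its nearest source centroid, push it from the other. The k-means
# step, k=2, and margin are assumptions, not the paper's exact formulation.
import torch
from sklearn.cluster import KMeans

def cluster_triplet_loss(target_feats, source_feats, margin=1.0):
    km = KMeans(n_clusters=2, n_init=10).fit(source_feats.numpy())
    centroids = torch.tensor(km.cluster_centers_, dtype=torch.float32)
    d = torch.cdist(target_feats, centroids)   # (n_target, 2)
    pos = d.min(dim=1).values                  # nearest centroid acts as positive
    neg = d.max(dim=1).values                  # other centroid acts as negative
    return torch.clamp(pos - neg + margin, min=0).mean()

src = torch.cat([torch.randn(100, 64), torch.randn(100, 64) + 3.0])  # two source modes
tgt = torch.randn(100, 64) + 1.5
print(cluster_triplet_loss(tgt, src))
```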
Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges.
Automatic analysis of colonoscopy images has been an active field of research, motivated by the importance of early detection of precancerous polyps. However, detecting polyps during the live examination can be challenging due to various factors such as variation in skill and experience among endoscopists, lack of attentiveness, and fatigue, leading to a high polyp miss-rate. Therefore, there is a need for an automated system that can flag missed polyps during the examination and improve patient care. Deep learning has emerged as a promising solution to this challenge, as it can assist endoscopists in detecting and classifying overlooked polyps and abnormalities in real time, improving the accuracy of diagnosis and enhancing treatment. In addition to an algorithm's accuracy, transparency and interpretability are crucial for explaining the whys and hows of its predictions. Further, conclusions based on incorrect decisions may be fatal, especially in medicine. Despite these pitfalls, most algorithms are developed on private data, as closed source, or in proprietary software, and methods lack reproducibility. Therefore, to promote the development of efficient and transparent methods, we organized the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image Segmentation (MedAI 2021)" competitions. The Medico 2020 challenge received submissions from 17 teams, and the MedAI 2021 challenge gathered submissions from another 17 distinct teams the following year. We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the potential for translating such methods into the clinic. Our analysis revealed that participants improved the Dice coefficient from 0.8607 in 2020 to 0.8993 in 2021, despite the addition of diverse and challenging frames (containing irregular, smaller, sessile, or flat polyps) that are frequently missed during routine clinical examination. For the instrument segmentation task, the best team obtained a mean intersection-over-union (IoU) of 0.9364. For the transparency task, a multi-disciplinary team including expert gastroenterologists assessed each submission and evaluated the teams on open-source practices, failure-case analysis, ablation studies, and the usability and understandability of their evaluations, to gain a deeper understanding of the models' credibility for clinical deployment. The best team obtained a final transparency score of 21 out of 25. Through this comprehensive analysis of the challenges, we not only highlight advancements in polyp and surgical instrument segmentation but also encourage subjective evaluation for building more transparent and understandable AI-based colonoscopy systems. Moreover, we discuss the need for multi-center and out-of-distribution testing to address the current limitations of these methods, reduce the cancer burden, and improve patient care.
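Both headline metrics are standard overlap ratios. For concreteness, a minimal numpy implementation of the Dice coefficient and IoU as usually computed for binary masks (the challenges' exact evaluation harness, e.g. its handling of empty masks, is not reproduced here):

```python
# The challenges' headline metrics for binary segmentation masks: Dice
# coefficient and intersection over union (IoU). Standard definitions;
# the official evaluation harness may differ in edge-case handling.
import numpy as np

def dice(pred, truth, eps=1e-7):
    inter = np.logical_and(pred, truth).sum()
    return (2 * inter + eps) / (pred.sum() + truth.sum() + eps)

def iou(pred, truth, eps=1e-7):
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return (inter + eps) / (union + eps)

# Toy masks: two overlapping squares on a 256x256 frame.
pred = np.zeros((256, 256), bool); pred[50:150, 50:150] = True
truth = np.zeros((256, 256), bool); truth[60:160, 60:160] = True
print(f"Dice={dice(pred, truth):.4f}  IoU={iou(pred, truth):.4f}")
```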
Long-term Mental Health Morbidity in Adult Survivors of COVID-19 Critical Illness - A Population-based Cohort Study.
Background: Survivorship after coronavirus disease 2019 (COVID-19) critical illness may be associated with important long-term sequelae, but little is known regarding mental health outcomes. Research Question: What is the association between COVID-19 critical illness and new post-discharge mental health diagnoses? Study Design and Methods: We conducted a population-based cohort study in Ontario, Canada (January 1, 2020-March 31, 2022). We included consecutive adult survivors (age ≥ 18 years) of COVID-19 critical illness and compared them with consecutive adult survivors of critical illness from non-COVID-19 pneumonia. The primary outcome was a new mental health diagnosis (a composite of mood, anxiety, or related disorders; schizophrenia/psychotic disorders; and other mental health disorders) following hospital discharge. We compared patients using overlap propensity score-weighted, cause-specific proportional hazards models. Results: We included 6,098 survivors of COVID-19 critical illness and 2,568 adult survivors of critical illness from non-COVID-19 pneumonia at 102 centres. The incidence of new mental health diagnosis among survivors of COVID-19 critical illness was 25.3 per 100 person-years (95% confidence interval [CI] 24.0-26.6), versus 25.9 per 100 person-years (95% CI 24.0-27.8) after non-COVID-19 pneumonia. Following propensity weighting, COVID-19 critical illness was not associated with increased risk of new mental health diagnosis overall (hazard ratio [HR] 1.08 [95% CI 0.96-1.23]), but was associated with increased risk in the category of new mood, anxiety, or related disorders (HR 1.21 [95% CI 1.05-1.40]). No difference was seen in psychotic disorders, other mental health diagnoses, social problems, or deliberate self-harm. Interpretation: Compared with survival after critical illness from non-COVID-19 pneumonia, survival after COVID-19 critical illness was not associated with increased risk of the composite outcome of new mental health diagnosis, but was associated with elevated risk of new mood, anxiety, or related disorders.
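The methods name overlap propensity-score weighting, which has a simple closed form (Li et al., 2018): exposed patients are weighted by 1 − e(x) and comparators by e(x), where e(x) is the fitted propensity of exposure. A schematic on synthetic covariates; the propensity model, covariates, and the hazards-model comment below are assumptions, not the study's code:

```python
# A schematic of the overlap propensity-score weighting named in the
# methods: fit a propensity model for the exposure (COVID-19 vs
# non-COVID-19 pneumonia critical illness), then weight exposed patients
# by 1 - e(x) and comparators by e(x) (Li et al., 2018). Covariates and
# data are synthetic placeholders, not the study's variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(8666, 5))          # 6,098 + 2,568 patients, toy covariates
exposed = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))

ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]
weights = np.where(exposed == 1, 1 - ps, ps)   # overlap weights
# These weights would then feed a weighted cause-specific proportional
# hazards model (e.g. lifelines' CoxPHFitter with weights_col=..., robust=True).
print(weights[:5].round(3))
```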
Exome-wide evidence of compound heterozygous effects across common phenotypes in the UK Biobank.
The phenotypic impact of compound heterozygous (CH) variation has not been investigated at the population scale. We phased rare variants (MAF ∼0.001%) in the UK Biobank (UKBB) exome-sequencing data to characterize recessive effects in 175,587 individuals across 311 common diseases. A total of 6.5% of individuals carry putatively damaging CH variants, 90% of which are identifiable only upon phasing rare variants. We identify gene-trait pairs with CH associations at exome-wide significance (p < 10⁻⁷) after accounting for relatedness, polygenicity, nearby common variants, and rare variant burden. Of these, just one is discovered when considering homozygosity alone. Using longitudinal health records, we additionally identify and replicate a novel association between bi-allelic variation in ATP2C2 and an earlier age at onset of chronic obstructive pulmonary disease (COPD) (p < 10⁻⁸). Genetic phase contributes to disease risk for gene-trait pairs: ATP2C2-COPD (p = 0.000238), FLG-asthma (p = 0.00205), and USH2A-visual impairment (p = 0.0084). We demonstrate the power of phasing large-scale genetic cohorts to discover phenome-wide consequences of compound heterozygosity.
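The operation the study turns on, calling compound heterozygosity from phase, is mechanically simple once genotypes are phased: two damaging heterozygotes in a gene are CH only if they sit on opposite haplotypes. A small illustration on VCF-style genotype strings (the variants are invented examples):

```python
# Illustration of the core compound-heterozygosity call: with phased
# genotypes, two damaging heterozygous variants in one gene are CH only
# if they land on opposite haplotypes (in trans). Genotype strings use
# VCF phased notation; the variants are invented examples.
def haplotypes(gt):                    # "0|1" -> (0, 1)
    left, right = gt.split("|")
    return int(left), int(right)

def is_compound_het(gt_a, gt_b):
    ha, hb = haplotypes(gt_a), haplotypes(gt_b)
    het = (sum(ha) == 1) and (sum(hb) == 1)
    in_trans = ha != hb                # alternate alleles on different haplotypes
    return het and in_trans

print(is_compound_het("0|1", "1|0"))   # True: in trans -> both gene copies hit
print(is_compound_het("0|1", "0|1"))   # False: in cis -> one intact copy remains
# Unphased data ("0/1", "0/1") cannot distinguish these two configurations,
# which is why phasing recovers the 90% of CH genotypes described above.
```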
Predicting individual patient and hospital-level discharge using machine learning.
Background: Accurately predicting hospital discharge events could help improve patient flow and the efficiency of healthcare delivery. However, using machine learning and diverse electronic health record (EHR) data for this task remains incompletely explored. Methods: We used EHR data from February 2017 to January 2020 from Oxfordshire, UK to predict hospital discharges in the next 24 h. We fitted separate extreme gradient boosting models for elective and emergency admissions, trained on the first two years of data and tested on the final year of data. We examined individual-level and hospital-level model performance and evaluated the impact of training data size and recency, prediction time, and performance in subgroups. Results: Our models achieve AUROCs of 0.87 and 0.86, AUPRCs of 0.66 and 0.64, and F1 scores of 0.61 and 0.59 for elective and emergency admissions, respectively. These models outperform a logistic regression model using the same features and are substantially better than a baseline logistic regression model with more limited features. Notably, the relative performance increase from adding additional features is greater than the increase from using a more sophisticated model. Aggregating individual probabilities yields accurate daily total discharge estimates, with mean absolute errors of 8.9% (elective) and 4.9% (emergency). The most informative predictors include antibiotic prescriptions, medications, and hospital capacity factors. Performance remains robust across patient subgroups and different training strategies, but is lower in patients with longer admissions and those who died in hospital. Conclusions: Our findings highlight the potential of machine learning in optimising hospital patient flow and facilitating patient care and recovery.
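The modelling recipe is conventional enough to sketch: a gradient-boosted classifier per admission type, a temporal train/test split, AUROC/AUPRC at the individual level, and summed probabilities for the hospital-level daily estimate. Everything below (features, sizes, hyperparameters) is a synthetic stand-in, not the paper's configuration:

```python
# A sketch of the modelling recipe described above: an extreme gradient
# boosting classifier for discharge-within-24 h, trained on earlier
# admissions and tested on later ones. Features, sizes, and hyperparameters
# are synthetic stand-ins, not the paper's EHR variables or configuration.
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 30))        # stand-ins for prescriptions, capacity, etc.
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))

split = int(2 / 3 * len(X))             # temporal split: train on years 1-2, test on year 3
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X[:split], y[:split])
prob = model.predict_proba(X[split:])[:, 1]
print(f"AUROC={roc_auc_score(y[split:], prob):.3f}  "
      f"AUPRC={average_precision_score(y[split:], prob):.3f}")
# Summing per-patient probabilities gives the expected number of discharges,
# the hospital-level daily estimate evaluated in the paper.
print(f"expected discharges in next 24 h: {prob.sum():.1f}")
```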
Self-monitoring of blood pressure following a stroke or transient ischaemic attack (TASMIN5S): a randomised controlled trial.
Background: Blood pressure (BP) control following stroke is important but currently sub-optimal. This trial aimed to determine whether self-monitoring of hypertension with telemonitoring and a treatment escalation protocol results in lower BP than usual care in people with previous stroke or transient ischaemic attack (TIA). Methods: Unblinded randomised controlled trial comparing a BP telemonitoring-based intervention with control (usual care) for hypertension management in 12 primary care practices in England. People with previous stroke or TIA, with clinic systolic BP 130-180 mmHg, taking ≤ 3 antihypertensive medications, and on stable treatment for at least four weeks were randomised 1:1 using a secure online system to intervention or control. The BP:Together intervention comprised self-monitoring of blood pressure with a digital behavioural intervention which supported telemonitoring of self-monitored BP with feedback to clinicians and patients regarding medication titration. The planned primary outcome was the difference in clinic-measured systolic BP 12 months from randomisation, but this was not available following early study termination due to withdrawal of funding during the COVID-19 pandemic. Instead, in addition to pre-randomisation data, routinely recorded BP was extracted from electronic patient records both pre- and post-randomisation and is presented descriptively only. An intention-to-treat approach was taken. Results: From 650 postal invitations, 129 (20%) responded, of whom 95 people had been screened for eligibility prior to the pandemic (November 2019-March 2020) and 55 (58%) were randomised. Pre-randomisation routinely recorded mean BP was 145/78 mmHg in the control (n = 26) and 145/79 mmHg in the self-monitoring (n = 21) groups. Post-randomisation mean BP was 134/73 mmHg in the control (n = 19) and 130/75 mmHg in the self-monitoring (n = 25) groups. Participants randomised to self-monitoring used the intervention for ≥ 7 months in 25/27 (93%) of cases. Conclusions: Recruitment of people with stroke/TIA to a trial comparing a BP self-monitoring and digital behavioural intervention with usual care was feasible prior to the COVID-19 pandemic, and the vast majority of those randomised to the intervention used it while the trial was running. Routinely recorded blood pressure control improved in both groups. Digital interventions including self-monitoring are feasible for people with stroke/TIA and should be definitively evaluated in future trials. Trial registration: ISRCTN57946500, prospectively registered 06/09/2019.
Feasibility and Acceptability of Community Coronavirus Disease 2019 Testing Strategies (FACTS) in a University Setting.
Background: During the coronavirus disease 2019 (COVID-19) pandemic in 2020, the UK government began a mass severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) testing program. This study aimed to determine the feasibility and acceptability of organized regular self-testing for SARS-CoV-2. Methods: This was a mixed-methods observational cohort study in asymptomatic students and staff at the University of Oxford, who performed SARS-CoV-2 antigen lateral flow self-testing. Data on uptake and adherence, acceptability, and test interpretation were collected via a smartphone app, an online survey, and qualitative interviews. Results: Across 3 main sites, 551 participants (25% of those invited) performed 2728 tests during a follow-up of 5.6 weeks; 447 participants (81%) completed at least 2 tests, and 340 (62%) completed at least 4. The survey, completed by 214 participants (39%), found that 98% of people were confident to self-test and believed self-testing to be beneficial. Acceptability of self-testing was high, with 91% of ratings being acceptable or very acceptable. A total of 2711 (99.4%) test results were negative, 9 were positive, and 8 were inconclusive. Results from 18 qualitative interviews with students and staff revealed that participants valued regular testing, but there were concerns about test accuracy that impacted uptake and adherence. Conclusions: This is the first study to assess the feasibility and acceptability of regular SARS-CoV-2 self-testing. It provides evidence to inform recruitment for, adherence to, and acceptability of regular SARS-CoV-2 self-testing programs for asymptomatic individuals using lateral flow tests. We found that self-testing is acceptable and that people were able to interpret results accurately.
Early detection of physiological deterioration in post-surgical patients using wearable technology combined with an integrated monitoring system: a pre- and post-interventional study
Objectives: Late recognition of physiological deterioration is a frequent problem in hospital wards. We assessed whether ambulatory (wearable) physiological monitoring, combined with a system that continuously merges physiological variables into a single “risk” score (VSI), changed care and outcome in patients after major surgery. Design: Pre- and post-interventional study. Setting: A single-centre tertiary referral university hospital upper-gastrointestinal service. Participants: Patients who underwent major upper-gastrointestinal surgery. Interventions: Phase-I (pre-intervention phase): patients received continuous wearable monitoring and standard care, but the VSI score was not available for clinical use. Phase-II (post-intervention phase): patients received continuous wearable monitoring; in addition to standard care, the VSI score was displayed for use in clinical practice. Measurements and Main Results: 200 participants were monitored in phase-I and 207 in phase-II. Participants were monitored (median, interquartile range, IQR) for 30.2% (13.8-49.2) of available time in phase-I and 58.2% (33.1-75.2) of available time in phase-II. Clinical staff recorded observations more frequently in the 36 hours prior to a major adverse event (death, cardiac arrest or unplanned admission to intensive care) for phase-II participants (median, IQR, time between observations of 1.00, 0.50-2.08 hours) than phase-I participants (1.50, 0.75-2.50 hours; p < 0.001). There was no difference in observation frequency between the two phases for participants who did not undergo an adverse event (p = 0.129). 6/200 participants died before hospital discharge in phase-I; 1/207 participants died in hospital in phase-II. 20 (10.0%) patients in phase-I and 26 (12.6%) patients in phase-II had an unplanned admission to intensive care. Ward length-of-stay was unaltered (8.91, 6.71-14.02 days in phase-I vs. 8.97, 5.99-13.85 days in phase-II; p = 0.327). Conclusion: The combination of the integrated monitoring system with ambulatory monitoring in high-risk post-surgical patients improved recognition and management of deteriorating patients without increasing the observation rate in those patients who did not deteriorate.