Search results
Found 9685 matches for
Environmental sustainability and the limits of healthcare resource allocation.
Recent literature has drawn attention to the complex relationship between health care and the environmental crisis. Healthcare systems are significant contributors to climate change and environmental degradation, and the environmental crisis is making our health worse and thus putting more pressure on healthcare systems; our health and the environment are intricately linked. In light of this relationship, we might think that there are no trade-offs between health and the environment; that healthcare decision-makers have special responsibilities to the environment; and that environmental values should be included in healthcare resource-allocation decisions. However, we argue that these claims are mistaken. The environmental crisis involves a wider range of considerations than just health. There is a plurality of reasons to act on the environment; we might do so to protect the natural world, to prevent catastrophes in other parts of the world, or to avert climate war and displacement. Trading-off between health care and environmental sustainability is thus unavoidable and requires sensitivity to all these reasons. Healthcare decision-makers are not well placed to be sensitive to these reasons, nor do they have the democratic authority to make such value judgements. Therefore, decisions about environmental sustainability interventions should be made at a 'higher level' of resource allocation. Importantly, hospitals have environmental duties but not environmental responsibilities; their job is to provide the best healthcare possible within the constraints given to them, not to choose between health care and other goods.
Alterations in care for children with special healthcare needs during the early COVID-19 pandemic: ethical and policy considerations.
Healthcare delivery and access, both in the United States and globally, were negatively affected throughout the COVID-19 pandemic. This was particularly true during the first year, when countries grappled with high rates of illness and implemented non-pharmaceutical interventions such as stay-at-home orders. Among children with special healthcare needs (CSHCN), research from the United Kingdom (U.K.) has shown that the pandemic response uniquely impacted various aspects of their care, including decreased access to care, delays in diagnosis, and poorer chronic disease control. In response to these findings, and to begin to understand whether the concerning findings from the nationalized system of healthcare in the U.K. extend to the highly dissimilar United States (U.S.) healthcare context, we reviewed the literature on alterations in access to and delivery of care during the early stages of the COVID-19 pandemic for CSHCN in the U.S. We then use these findings to examine the ethical and policy implications of alterations in healthcare provision during pandemics and crisis events in the U.K. and U.S. and to make recommendations regarding how the needs of CSHCN should be considered in future responses.
ECG analysis of ventricular fibrillation dynamics reflects ischaemic progression subject to variability in patient anatomy and electrode location.
Background: Ventricular fibrillation (VF) is the deadliest arrhythmia, often caused by myocardial ischaemia. VF patients require urgent intervention planned quickly and non-invasively. However, the accuracy with which electrocardiographic (ECG) markers reflect the underlying arrhythmic substrate is unknown. Methods: We analysed how ECG metrics reflect the fibrillatory dynamics of electrical excitation and ischaemic substrate. For this, we developed a human-based computational modelling and simulation framework for the quantification of ECG metrics, namely, frequency, slope, and amplitude spectrum area (AMSA) during VF in acute ischaemia for several electrode configurations. Simulations reproduced experimental and clinical findings in 21 scenarios presenting variability in the location and transmural extent of regional ischaemia, and severity of ischaemia in the remote myocardium secondary to VF. Results: Regional acute myocardial ischaemia facilitated re-entries, potentially breaking up into VF. Ischaemia in the remote myocardium modulated fibrillation dynamics. Cases presenting a mildly ischaemic remote myocardium yielded sustained VF, enabled by the high proliferation of phase singularities (PS, 11-22) causing remarkably disorganised activation patterns. Conversely, global acute ischaemia induced stable rotors (3-12 PS). Changes in frequency and morphology of the ECG during VF reproduced clinical findings but did not show a direct correlation with the underlying wave dynamics. AMSA allowed the precise stratification of VF according to ischaemic severity in the remote myocardium (healthy: 23.62-24.45 mV Hz; mild ischaemia: 10.58-21.47 mV Hz; moderate ischaemia: 4.82-11.12 mV Hz). Within the context of clinical reference values, apex-anterior and apex-posterior electrode configurations were the most discriminatory in stratifying VF based on the underlying ischaemic substrate. Conclusion: This in silico study provides further insights into non-invasive patient-specific strategies for assessing acute ventricular arrhythmias. The use of reliable ECG markers to characterise VF is critical for developing tailored resuscitation strategies.
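For orientation, the sketch below shows how an AMSA-style metric could be computed from a digitised ECG segment, assuming the commonly used definition (the sum of spectral amplitude multiplied by frequency over a band of roughly 2-48 Hz). The band limits, sampling rate, and simulated signal are illustrative assumptions, not the framework or parameters used in the study.

```python
# Illustrative AMSA sketch: sum of single-sided spectral amplitude x frequency
# over an assumed 2-48 Hz band. All parameters here are hypothetical.
import numpy as np

def amsa(ecg_mv, fs, f_lo=2.0, f_hi=48.0):
    """Amplitude spectrum area (mV*Hz) of an ECG segment sampled at fs Hz."""
    n = len(ecg_mv)
    amplitude = np.abs(np.fft.rfft(ecg_mv)) * 2.0 / n    # single-sided amplitude (mV)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)               # frequency axis (Hz)
    band = (freqs >= f_lo) & (freqs <= f_hi)             # restrict to the VF-relevant band
    return float(np.sum(amplitude[band] * freqs[band]))

# Example: 4 s of a simulated VF-like segment at 500 Hz
fs = 500
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
vf_like = 0.5 * np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)
print(f"AMSA ~ {amsa(vf_like, fs):.2f} mV*Hz")
```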
Dynamic Network Analysis of Electrophysiological Task Data
An important approach for studying the human brain is to use functional neuroimaging combined with a task. In electrophysiological data this often involves a time-frequency analysis, in which recorded brain activity is time-frequency transformed and epoched around task events of interest, followed by trial-averaging of the power. Whilst this simple approach can reveal fast oscillatory dynamics, the brain regions are analysed one at a time. This causes difficulties for interpretation and a debilitating number of multiple comparisons. In addition, it is now recognised that the brain responds to tasks through the coordinated activity of networks of brain areas. As such, techniques that take a whole-brain network perspective are needed. Here, we show how the oscillatory task responses from conventional time-frequency approaches can be represented more parsimoniously at the network level using two state-of-the-art methods: the HMM (Hidden Markov Model) and DyNeMo (Dynamic Network Modes). Both methods reveal frequency-resolved networks of oscillatory activity with millisecond resolution. Comparing DyNeMo, HMM, and traditional oscillatory response analysis, we show DyNeMo can identify task activations/deactivations that the other approaches fail to detect. DyNeMo offers a powerful new method for analysing task data from the perspective of dynamic brain networks.
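To make the conventional pipeline that the network methods are compared against concrete, here is a minimal sketch of a single-channel time-frequency analysis with epoching around task events and trial-averaging of power. The sampling rate, event times, and placeholder signal are assumptions for illustration; they are not taken from the study, and the HMM/DyNeMo models themselves are not implemented here.

```python
# Minimal sketch of the conventional single-region approach: time-frequency
# transform, epoching around task events, then trial-averaging of power.
# Sampling rate, event times, and the signal are hypothetical placeholders.
import numpy as np
from scipy.signal import stft

fs = 250                                    # sampling rate (Hz), assumed
rng = np.random.default_rng(1)
signal = rng.standard_normal(fs * 60)       # 60 s of one channel's recording (placeholder)
event_onsets = np.arange(5, 55, 5) * fs     # task event onsets in samples, assumed
win = fs                                    # analyse 1 s of data after each event

powers = []
for onset in event_onsets:
    epoch = signal[onset:onset + win]
    f, t, Z = stft(epoch, fs=fs, nperseg=fs // 4)
    powers.append(np.abs(Z) ** 2)           # power = squared magnitude of the STFT
trial_avg_power = np.mean(powers, axis=0)   # (freq x time) trial-averaged power
print(trial_avg_power.shape)
```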
Bayesian Inference of Phylogenetic Distances: Revisiting the Eigenvalue Approach.
Using genetic data to infer evolutionary distances between molecular sequence pairs based on a Markov substitution model is a common procedure in phylogenetics, in particular for selecting a good starting tree to improve upon. Many evolutionary patterns can be accurately modelled using substitution models that are available in closed form, including the popular general time reversible model (GTR) for DNA data. For more complex biological phenomena, such as variations in lineage-specific evolutionary rates over time (heterotachy), other approaches such as the GTR with rate variation (GTR+Γ) are required, but do not admit analytical solutions and do not automatically allow for likelihood calculations crucial for Bayesian analysis. In this paper, we derive a hybrid approach between these two methods, incorporating Γ(α,α)-distributed rate variation and heterotachy into a hierarchical Bayesian GTR-style framework. Our approach is differentiable and amenable to both stochastic gradient descent for optimisation and Hamiltonian Markov chain Monte Carlo for Bayesian inference. We show the utility of our approach by studying hypotheses regarding the origins of the eukaryotic cell within the context of a universal tree of life and find evidence for a two-domain theory.
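As a toy illustration of the kind of closed-form calculation the eigenvalue approach enables, the sketch below diagonalises a GTR rate matrix and applies the Γ(α,α) moment generating function to its eigenvalues to obtain rate-averaged transition probabilities, E[exp(λrt)] = (1 - λt/α)^(-α) for r ~ Γ(α,α). The exchangeabilities, base frequencies, branch length, and α value are made up, and this is not the authors' hierarchical Bayesian model.

```python
# Toy sketch: Gamma(alpha, alpha) rate-averaged GTR transition probabilities
# via eigendecomposition. Rates, frequencies, t, and alpha are hypothetical.
import numpy as np

def gtr_q(rates, pi):
    """Build a GTR rate matrix from 6 exchangeabilities (AC,AG,AT,CG,CT,GT) and base freqs."""
    ac, ag, at, cg, ct, gt = rates
    R = np.array([[0, ac, ag, at],
                  [ac, 0, cg, ct],
                  [ag, cg, 0, gt],
                  [at, ct, gt, 0]], dtype=float)
    Q = R * pi                                 # q_ij = r_ij * pi_j for i != j
    np.fill_diagonal(Q, -Q.sum(axis=1))        # rows sum to zero
    return Q

def transition_probs_gamma(Q, t, alpha):
    """E_r[expm(Q r t)] with r ~ Gamma(alpha, alpha), via the eigenvalues of Q."""
    lam, U = np.linalg.eig(Q)
    g = (1.0 - lam * t / alpha) ** (-alpha)    # Gamma MGF applied to each eigenvalue
    return (U @ np.diag(g) @ np.linalg.inv(U)).real

pi = np.array([0.3, 0.2, 0.2, 0.3])
Q = gtr_q([1, 2, 1, 1, 2, 1], pi)
P = transition_probs_gamma(Q, t=0.5, alpha=0.4)
print(P.sum(axis=1))                           # each row should sum to ~1
```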
Understanding reservoirs of multi-host pathogens: A One Health approach to rabies in Tanzania
Rabies virus is a multi-host zoonotic pathogen that is endemic across large parts of sub-Saharan Africa. This case study reports a One Health approach to rabies in Tanzania which highlights the value of multi-sectoral collaboration and illustrates the importance of understanding the reservoir dynamics of multi-host pathogens when targeting interventions. As part of our research we have established contact tracing for rabies in Tanzania. This involves identifying patients presenting with animal-bite injuries and investigating the animals responsible for the bites. Through contact tracing we identify the owners of biting animals, ascertain the rabies status of these animals, and identify additional bite victims who have not presented to healthcare centres. Domestic dog vaccination is a key component of current rabies control programmes as domestic dogs are responsible for most human rabies exposures across Africa. However, in some areas rabies transmission also occurs within wildlife populations. For example, in the Lindi and Mtwara regions of southern Tanzania, jackals represent an unusually high proportion of animal rabies cases. Maintenance of rabies virus within wildlife populations can have implications for control strategies centred around domestic dog vaccination. Throughout this case study we illustrate the use of multiple data sources to identify the role of domestic dogs and wildlife in rabies transmission in the Lindi and Mtwara regions of southern Tanzania and compare this to the island of Pemba where domestic dog cases dominate. We highlight how domestic dog vaccination plays a vital role in controlling rabies in these two areas with very different disease ecology.
Characteristics of Madariaga and Venezuelan Equine Encephalitis Virus Infections, Panama.
Madariaga virus (MADV) and Venezuelan equine encephalitis virus (VEEV) are emerging arboviruses affecting rural and remote areas of Latin America. However, clinical and epidemiologic reports are limited, and outbreaks are occurring at an increasing frequency. We addressed the data gap by analyzing all available clinical and epidemiologic data of MADV and VEEV infections recorded since 1961 in Panama. A total of 168 human alphavirus encephalitis cases were detected in Panama during 1961‒2023. We described the clinical signs and symptoms and epidemiologic characteristics of those cases, and also explored signs and symptoms as potential predictors of encephalitic alphavirus infection compared with those of other arbovirus infections occurring in the region. Our results highlight the challenges for the clinical diagnosis of alphavirus disease in endemic regions with overlapping circulation of multiple arboviruses.
Role of age and exposure duration in the association between metabolic syndrome and risk of incident dementia: a prospective cohort study.
Background: Metabolic syndrome could be a modifiable risk factor for dementia. However, the effects of age and duration of exposure to metabolic syndrome on dementia risk remain underexplored. The aim of this study was to determine whether the association between metabolic syndrome and risk of dementia differs across mid-life versus late-life, and to explore how duration of metabolic syndrome affects this risk. Methods: We conducted a population-based prospective study using data from the European Prospective Investigation into Cancer in Norfolk (EPIC-Norfolk) cohort. Metabolic syndrome was defined as having at least three of the following: elevated waist circumference, triglycerides, blood pressure, or glycated haemoglobin, or reduced HDL cholesterol. Incident all-cause dementia was ascertained through hospital inpatient, death, and mental health-care records. In full-cohort analyses, we studied 20 150 adults without dementia aged 50-79 years who attended baseline assessments. Cox proportional hazards models were used to estimate the association between metabolic syndrome and dementia in the full sample, and in mid-life (50-59 years and 60-69 years) and late-life (70-79 years). To assess duration of metabolic syndrome, group-based trajectory analysis was performed on 12 756 participants who attended at least two health assessments over 20 years. Findings: The mean age of participants was 62·6 years (SD 7·5), and 10 857 (54%) were female. Over 25 years of follow-up (mean 18·8 years [SD 6·3]), 2653 (13%) participants developed dementia. In the full cohort, metabolic syndrome was associated with an increased risk of dementia (hazard ratio 1·11, 95% CI 1·01-1·21). In age-specific analyses, the association was similar for participants in late mid-life (age 60-69 years: 1·21, 1·05-1·39) and, although non-significant, in early mid-life (age 50-59 years: 1·12, 0·87-1·43), but attenuated for participants in late-life (age 70-79 years: 0·96, 0·81-1·14). A linear trend was observed between the number of metabolic syndrome components and dementia risk in those aged 60-69 years (ptrend=0·0040), but not in other age groups. In trajectory analysis, a prolonged duration of metabolic syndrome was associated with a significantly increased risk of developing dementia (1·26, 1·13-1·40) when compared to those with consistently low metabolic syndrome. No association was found for increasing metabolic syndrome (1·01, 0·88-1·17). Interpretation: These findings provide insights into how certain age windows and time periods might differentially affect dementia risk in the context of metabolic syndrome, and highlight the importance of considering age and duration of exposure to metabolic syndrome when devising dementia prevention strategies. Funding: Canadian Institutes of Health Research-Institute of Aging, Oxford Population Health, and the Nicolaus and Margrit Langbehn Foundation.
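For readers less familiar with the modelling described above, here is a minimal sketch of fitting a Cox proportional hazards model with the lifelines package on a synthetic dataset. The variable names, covariates, and data are hypothetical; this shows only the shape of the analysis, not the EPIC-Norfolk results.

```python
# Minimal Cox proportional hazards sketch on synthetic data (lifelines).
# Columns and values are hypothetical placeholders, not study data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "metabolic_syndrome": rng.integers(0, 2, n),       # 1 = meets >=3 criteria at baseline
    "age": rng.uniform(50, 79, n),
    "sex": rng.integers(0, 2, n),
    "follow_up_years": rng.exponential(18, n).clip(0.1, 25),
    "dementia": rng.integers(0, 2, n),                  # 1 = incident dementia observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="dementia",
        formula="metabolic_syndrome + age + sex")
cph.print_summary()   # hazard ratios with 95% CIs, cf. the HR 1.11 (1.01-1.21) reported above
```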
Role of primary and secondary care data in atrial fibrillation ascertainment: impact on risk factor associations, patient management, and mortality in UK Biobank.
Electronic healthcare records (EHR) are at the forefront of advances in epidemiological research emerging from large-scale population biobanks and clinical studies. Hospital admissions, diagnoses, and procedures (HADP) data are often used to identify disease cases. However, this may result in incomplete ascertainment of chronic conditions such as atrial fibrillation (AF), which are principally managed in primary care (PC). We examined the relevance of EHR sources for AF ascertainment, and the implications for risk factor associations, patient management, and outcomes in UK Biobank. UK Biobank is a prospective study, with HADP and PC records available for 230 000 participants (to 2016). AF cases were ascertained in three groups: from PC records only (PC-only), HADP only (HADP-only), or both (PC + HADP). Conventional statistical methods were used to describe differences between groups in terms of characteristics, risk factor associations, ascertainment timing, rates of anticoagulation, and post-AF stroke and death. A total of 7136 incident AF cases were identified during 7 years median follow-up (PC-only: 22%, PC + HADP: 49%, HADP-only: 29%). There was a median lag of 1.3 years between cases ascertained in PC and subsequently in HADP. AF cases in each of the ascertainment groups had comparable baseline demographic characteristics. However, AF cases identified in hospital data alone had a higher prevalence of cardiometabolic comorbidities and lower rates of subsequent anticoagulation (PC-only: 44%, PC + HADP: 48%, HADP-only: 10%, P < 0.0001) than other groups. HADP-only cases also had higher rates of death [PC-only: 9.3 (6.8, 12.7), PC + HADP: 23.4 (20.5, 26.6), HADP-only: 81.2 (73.8, 89.2) events per 1000 person-years, P < 0.0001] compared to other groups. Integration of data from primary care with that from hospital records has a substantial impact on AF ascertainment, identifying a third more cases than hospital records alone. However, about a third of AF cases recorded in hospital were not present in the primary care records, and these cases had lower rates of anticoagulation, as well as higher mortality from both cardiovascular and non-cardiovascular causes. Initiatives aimed at enhancing information exchange of clinically confirmed AF between healthcare settings have the potential to benefit patient management and AF-related outcomes at an individual and population level. This research underscores the importance of access and integration of de-identified comprehensive EHR data for a definitive understanding of patient trajectories, and for robust epidemiological and translational research into AF.
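The post-AF death figures above are expressed as events per 1000 person-years; as a small worked example, the sketch below computes such a rate with an approximate Poisson-based 95% confidence interval. The event and person-year counts are made up and do not correspond to the UK Biobank ascertainment groups.

```python
# Worked sketch: incidence rate per 1000 person-years with an approximate
# 95% CI (Poisson approximation on the log scale). Counts are hypothetical.
import math

def rate_per_1000_py(events, person_years):
    rate = 1000.0 * events / person_years
    se_log = 1.0 / math.sqrt(events)          # SE of log(rate) under a Poisson model
    lo = rate * math.exp(-1.96 * se_log)
    hi = rate * math.exp(1.96 * se_log)
    return rate, lo, hi

print(rate_per_1000_py(events=180, person_years=2200))   # hypothetical counts
```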
Plasma proteomic evidence for increased β-amyloid pathology after SARS-CoV-2 infection.
Previous studies have suggested that systemic viral infections may increase risks of dementia. Whether this holds true for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections is unknown. Determining this is important for anticipating the potential future incidence of dementia. To begin to do this, we measured plasma biomarkers linked to Alzheimer's disease pathology in the UK Biobank before and after serology-confirmed SARS-CoV-2 infections. SARS-CoV-2 infection was associated with biomarker changes indicative of β-amyloid pathology: a reduced plasma Aβ42:Aβ40 ratio and, in more vulnerable participants, lower plasma Aβ42 and higher plasma pTau-181. The plasma biomarker changes were greater in participants who had been hospitalized with COVID-19 or had reported hypertension previously. We showed that the changes in biomarkers were linked to brain structural imaging patterns associated with Alzheimer's disease, lower cognitive test scores, and poorer overall health evaluations. Our data from this post hoc case-control matched study thus provide observational biomarker evidence that SARS-CoV-2 infection can be associated with greater brain β-amyloid pathology in older adults. While these results do not establish causality, they suggest that SARS-CoV-2 (and possibly other systemic inflammatory diseases) may increase the risk of future Alzheimer's disease.
Cytisine Versus Varenicline for Smoking Cessation in a Primary Care Setting: A Randomized Non-inferiority Trial.
Introduction: A smoking-cessation program was implemented as a randomized non-inferiority trial in primary care practices in Croatia and Slovenia to investigate whether a standard 4-week treatment with cytisine was at least as effective and feasible as a standard 12-week treatment with varenicline in helping smokers quit. Aims and Methods: Out of 982 surveyed smokers, 377 were recruited to the non-inferiority trial: 186 were randomly assigned to cytisine and 191 to varenicline treatment. The primary cessation outcome was 7-day abstinence after 24 weeks, while the primary feasibility outcome was defined by adherence to the treatment plan. We also compared the rates of adverse events between the two treatment groups. Results: The cessation rate after 24 weeks was 32.46% (62/191) in the varenicline group and 23.12% (43/186) in the cytisine group (odds ratio [OR], 95% credible interval [CI]: 0.39 to 0.98). Of 191 participants assigned to varenicline treatment, 59.16% (113) were adherent, while 70.43% (131 of 186) were adherent in the cytisine group (OR: 1.65, 95% CI: 1.07 to 2.56). Participants assigned to cytisine experienced fewer total (incidence rate ratio [IRR]: 0.59, 95% CI: 0.43 to 0.81) and fewer severe or more extreme adverse events (IRR: 0.72, 95% CI: 0.35 to 1.47). Conclusions: This randomized non-inferiority trial (n = 377) found the standard 4-week cytisine treatment to be less effective than the standard 12-week varenicline treatment for smoking cessation. However, adherence to the treatment plan, i.e., feasibility, was higher, and the rate of adverse events was lower among participants assigned to cytisine treatment. Implications: The present study found the standard 12 weeks of varenicline treatment to be more effective than the standard 4 weeks of cytisine treatment for smoking cessation in a primary care setting in Croatia and Slovenia. Participants assigned to cytisine, however, had a higher adherence to the treatment plan and a lower rate of adverse events. Estimates from the present study may be especially suitable for generalizations to high-smoking prevalence populations in Europe. Given the much lower cost of cytisine treatment, its lower rate of adverse events, and higher feasibility (but its likely lower effectiveness with the standard dosage regimen), future analyses should assess the cost-effectiveness of the two treatments for health policy considerations.
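As a rough check on the cessation result above, the sketch below computes the odds ratio from the reported quit counts (cytisine 43/186 vs varenicline 62/191) with a simple Wald-type 95% interval. The paper reports a Bayesian credible interval, so this frequentist approximation is only for orientation.

```python
# Odds ratio and Wald 95% CI from a 2x2 table; counts taken from the abstract.
import math

def odds_ratio(a, b, c, d):
    """OR for group 1 (a quit / b no-quit) vs group 2 (c quit / d no-quit)."""
    or_ = (a / b) / (c / d)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# cytisine: 43 quit, 143 not; varenicline: 62 quit, 129 not
print(odds_ratio(43, 186 - 43, 62, 191 - 62))   # ~0.63 (0.40, 0.99), close to the reported 0.39-0.98
```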
Facilitating clinical trials in hip fracture in the UK: the role and potential of the National Hip Fracture Database and routinely collected data.
AIMS: The aim of this study was to evaluate the suitability, against an accepted international standard, of a linked hip fracture registry and routinely collected administrative dataset in England to embed and deliver randomized controlled trials (RCTs). METHODS: First, a bespoke cohort of individuals sustaining hip fractures between 2011 and 2016 was generated from the National Hip Fracture Database (NHFD) and linked to individual Hospital Episode Statistics (HES) records and mortality data. Second, in order to explore the availability and distribution of outcomes available in linked HES-Office for National Statistics (ONS) data, a more contemporary cohort with incident hip fracture was identified within HES between January 2014 and December 2018. Distributions of the outcomes within the HES-ONS dataset were reported using standard statistical summaries; descriptive characteristics of the NHFD and linked HES-ONS dataset were reported in line with the Clinical Trials Transformation Initiative recommendations for registry-enabled trials. RESULTS: Case ascertainment of the NHFD likely exceeds 94%. The assessment of the robustness, relevance, and reliability of the datasets was favourable. Outcomes from the HES-ONS dataset were concordant with other contemporaneous prospective cohort studies with bespoke data collection frameworks. CONCLUSION: Our findings support the feasibility of using the NHFD and HES-ONS to deliver a registry-embedded, data-enabled RCT.
The Effect of Vehicle Motion (Cab Vibration) on Accelerometer Cut-Point Determined Moderate-to-Vigorous Physical Activity in Heavy Goods Vehicle Drivers
This study aimed to determine the impact of cab noise when driving Heavy Goods Vehicles (HGV) on cut-point estimated moderate-to-vigorous physical activity (MVPA) from wrist-worn accelerometers. First, we investigated the impact of cab noise on accelerometer output during HGV driving and then on cut-point estimated MVPA in HGV drivers. A GENEActiv accelerometer was located beneath the seat in six HGVs for 8 days. Acceleration recorded during driving lay predominantly (~94%) within the sedentary range (<40 mg). HGV drivers (N = 386, 47.9 ± 9.3 years) wore a wrist-worn GENEActiv and a thigh-worn activPAL simultaneously for 8 days covering workdays and nonworkdays. MVPA recorded by the activPAL excludes seated transport and thus provided the criterion. Wrist accelerometer MVPA was classified using two cut-points approximating 3 metabolic equivalents (MVPA100mg) and 4.3 metabolic equivalents (indicative of brisk walking, MVPABRISK_WALK). Acceleration classified as MVPA100mg or MVPABRISK_WALK during activPAL-determined seated transport was considered erroneous. Across all days, activPAL MVPA was 15 (interquartile range: 9, 26) min/day. Compared with activPAL, MVPA100mg was 100 min/day higher (95% limits of agreement ±53 min), but MVPABRISK_WALK was similar (mean bias = −2 min/day, 95% limits of agreement ±15). On workdays, 23 (interquartile range: 11, 52) min of MVPA100mg and 2 (1, 7) min of MVPABRISK_WALK were erroneous. However, on nonworkdays, only 4 (3, 14) and 0.4 (0, 1) min, respectively, were erroneous. In conclusion, MVPA may be erroneously captured using cut-point analyses of accelerometer data in HGV drivers. However, this was substantially reduced by using an MVPA cut-point indicative of brisk walking, which also approximated activPAL-estimated MVPA.
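A minimal sketch of the cut-point classification described above is given below: per-minute wrist acceleration values are thresholded at 100 mg (approximating 3 METs, as in the abstract) and minutes coinciding with activPAL-determined seated transport are flagged as erroneous. The epoch length, simulated acceleration values, and driving indicator are assumptions for illustration.

```python
# Sketch of cut-point MVPA classification with an "erroneous during driving" flag.
# Only the 100 mg (~3 MET) threshold comes from the abstract; data are simulated.
import numpy as np

rng = np.random.default_rng(7)
enmo_mg = rng.gamma(shape=2.0, scale=30.0, size=1440)   # per-minute mean acceleration (mg), placeholder
seated_transport = rng.random(1440) < 0.2               # activPAL-style driving indicator, placeholder

mvpa_100mg = enmo_mg >= 100                             # ~3 MET cut-point from the abstract
erroneous = mvpa_100mg & seated_transport               # MVPA flagged while in seated transport
print(f"MVPA minutes/day: {mvpa_100mg.sum()}, of which erroneous: {erroneous.sum()}")
```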
The predictive validity of a Brain Care Score for dementia and stroke: data from the UK Biobank cohort.
Introduction: The 21-point Brain Care Score (BCS) was developed through a modified Delphi process in partnership with practitioners and patients to promote behavior changes and lifestyle choices in order to sustainably reduce the risk of dementia and stroke. We aimed to assess the associations of the BCS with risk of incident dementia and stroke. Methods: The BCS was derived from the United Kingdom Biobank (UKB) baseline evaluation for participants aged 40-69 years, recruited between 2006 and 2010. Associations of BCS and risk of subsequent incident dementia and stroke were estimated using Cox proportional hazard regressions, adjusted for sex assigned at birth and stratified by age groups at baseline. Results: The BCS (median: 12; IQR: 11-14) was derived for 398,990 UKB participants (mean age: 57; females: 54%). There were 5,354 incident cases of dementia and 7,259 incident cases of stroke recorded during a median follow-up of 12.5 years. A five-point higher BCS at baseline was associated with a 59% (95% CI: 40-72%) lower risk of dementia among participants aged <50; among those aged 50-59 the corresponding figure was 32% (95% CI: 20-42%), and among those aged >59 years it was 8% (95% CI: 2-14%). A five-point higher BCS was associated with a 48% (95% CI: 39-56%) lower risk of stroke among participants aged <50, 52% (95% CI: 47-56%) among those aged 50-59, and 33% (95% CI: 29-37%) among those aged >59. Discussion: The BCS has clinically relevant and statistically significant associations with risk of dementia and stroke in approximately 0.4 million UK people. Future research includes investigating the feasibility, adaptability, and implementation of the BCS for patients and providers worldwide.
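The percentage figures above come from Cox model hazard ratios; as a small worked example of the conversion, a five-point hazard ratio HR_5 corresponds to a (1 - HR_5) x 100% lower risk, and under proportional hazards a per-point ratio h gives HR_5 = h^5. The per-point value in the sketch is hypothetical, chosen only to roughly match the reported 59% figure.

```python
# Converting between hazard ratios and "% lower risk" figures of the kind reported above.
# The per-point hazard ratio below is an assumed illustrative value, not a study estimate.
hr_per_point = 0.84                      # assumed HR per 1-point higher BCS
hr_per_5pt = hr_per_point ** 5           # proportional hazards: effects multiply over 5 points
pct_lower = (1 - hr_per_5pt) * 100
print(f"HR per 5 points: {hr_per_5pt:.2f} -> {pct_lower:.0f}% lower risk")
# e.g. the reported 59% lower risk corresponds to a five-point HR of about 0.41
```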