921 |
Do Analysts Benefit from Online Feedback and Visibility?
Khavis, Joshua A. January 2019 (has links)
I explore whether participation on Estimize.com, a crowdsourced earnings-forecasting platform aimed primarily at novices, improves professional analysts’ forecast accuracy and career outcomes. Estimize provides its contributors with frequent and timely feedback on their forecast performance and offers them a new channel for disseminating their forecasts to a wider public, features that could help analysts improve their forecast accuracy and raise their online visibility. Using proprietary data obtained from Estimize and a difference-in-differences research design, I find that IBES analysts who are active on Estimize improve their EPS forecast accuracy by 13% relative to the sample-mean forecast error, as well as reduce forecast bias. These improvements in performance vary predictably in ways consistent with learning through feedback. Additionally, I find increased market reaction to the positive earnings-forecast revisions issued by analysts who are active on Estimize. I also find that analysts active on Estimize enjoy incremental positive career outcomes after controlling for forecast accuracy. My results suggest that professional analysts can learn to become better forecasters through online feedback and consequently garner more attention from the market. My results also suggest analysts can improve their career outcomes by gaining additional online visibility. / Business Administration/Accounting
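The difference-in-differences logic behind this design can be sketched in a few lines. Everything below is illustrative: the forecast errors, group sizes, and the simple mean-comparison estimator are assumptions for exposition, not the study's data or specification.

```python
# Difference-in-differences on hypothetical forecast-error data.
# "Treated" = analysts who joined Estimize; "control" = matched analysts.
# All numbers are invented for illustration.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD effect = (change for treated) - (change for controls)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Absolute EPS forecast errors (smaller is better), illustrative only.
treat_pre  = [0.10, 0.12, 0.11]
treat_post = [0.08, 0.09, 0.07]
ctrl_pre   = [0.10, 0.11, 0.12]
ctrl_post  = [0.10, 0.11, 0.12]

effect = did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post)
print(round(effect, 3))  # negative value → accuracy improved for treated analysts
```

In practice the estimate would come from a panel regression with analyst and time fixed effects; the arithmetic identity above is only the core of the design.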
|
922 |
DEVELOPMENT OF ENGLISH ORAL PROFICIENCY AMONG JAPANESE HIGH SCHOOL STUDENTS
Kanda, Makiko January 2015 (has links)
This longitudinal study investigated the development of English oral proficiency—complexity, accuracy, and fluency—under pre-task and on-line planning conditions with task repetition among Japanese high school students. The study is unique because it is longitudinal and includes qualitative data. The participants were 15 Japanese high school students whose English proficiency was categorized as low. Narrative tasks, post-task questionnaires, journals, and interviews were used. In the narrative tasks, participants were asked to describe a four-picture story three times with two minutes of planning time, during which they were allowed to listen to an ALT (assistant language teacher) tell the story and take notes. They completed a post-task questionnaire and a journal after completing the task. Interviews were conducted twice to further investigate their questionnaire responses and journal entries. The results showed that low-proficiency learners increased oral fluency, syntactic complexity, lexical complexity, and syntactic accuracy by repeating the same task within a single session, and increased syntactic complexity and lexical complexity by repeating the same type of task during the academic year. The aural input between the first, second, and third performances can draw learners' attention to form-meaning connections, resulting in improved oral performance. In addition, low and intermediate beginners benefited by increasing oral fluency, syntactic complexity, and syntactic accuracy, while high beginners benefited by improving oral fluency and lexical complexity under pre-task and on-line planning conditions with repetition during the academic year. 
The study suggests that the combined use of pre-task planning, on-line planning, and task repetition has a cumulative effect and can facilitate the development of oral fluency, syntactic complexity, lexical complexity, and syntactic accuracy for low-proficiency high school learners of English. Giving learners the opportunity to plan before and during task performance, with repetition, under conditions that draw their attention to both form and meaning, is an effective strategy for improving oral fluency, syntactic complexity, lexical complexity, and syntactic accuracy in task-based classroom teaching. / Language Arts
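The complexity, accuracy, and fluency (CAF) constructs can be made concrete with simple proxy measures. The metric definitions below are common operationalizations assumed for illustration; the study's exact coding schemes may differ.

```python
# Illustrative CAF proxy measures for a narrative transcript.
# These definitions (speech rate, clauses per AS-unit, type-token ratio)
# are standard proxies, not necessarily the study's operationalizations.

def fluency_wpm(word_count, seconds):
    """Speech rate: words per minute."""
    return 60.0 * word_count / seconds

def syntactic_complexity(clause_count, as_unit_count):
    """Clauses per AS-unit (analysis-of-speech unit)."""
    return clause_count / as_unit_count

def lexical_complexity(tokens):
    """Type-token ratio: distinct words / total words."""
    return len(set(tokens)) / len(tokens)

story = "the boy saw a dog and the dog ran after the boy".split()  # 12 tokens
print(fluency_wpm(len(story), seconds=20))  # 36.0 words per minute
print(syntactic_complexity(2, 1))           # 2.0 clauses per AS-unit
print(round(lexical_complexity(story), 2))  # 0.67 (8 distinct words / 12 tokens)
```

Tracking these numbers across the three repetitions of a task is one concrete way to quantify the gains the abstract describes.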
|
923 |
Contributions to statistical methods for meta-analysis of diagnostic test accuracy studies / Methods for meta-analysis of diagnostic test accuracy studies
Negeri, Zelalem January 2019 (has links)
Meta-analysis is a popular statistical method that synthesizes evidence from multiple studies. Conventionally, both the hierarchical and bivariate models for meta-analysis of diagnostic test accuracy (DTA) studies assume that the random effects follow the bivariate normal distribution. However, this assumption is restrictive, and inferences could be misleading when it is violated. On the other hand, subjective methods such as inspection of forest plots are used to identify outlying studies in a meta-analysis of DTA studies. Moreover, inferences made using the well-established bivariate random-effects models, when outlying or influential studies are present, may lead to misleading conclusions. Thus, the aim of this thesis is to address these issues by introducing alternative and robust statistical methods. First, we extend the current bivariate linear mixed model (LMM) by assuming a flexible bivariate skew-normal distribution for the random effects. The marginal distribution of the proposed model is analytically derived so that parameter estimation can be performed using standard likelihood methods. Overall, the proposed model performs better than the traditional bivariate LMM in terms of confidence interval width for the overall sensitivity and specificity, and in terms of bias and root mean squared error of the between-study (co)variances. Second, we propose objective methods based on solid statistical reasoning for identifying outlying and/or influential studies in a meta-analysis of DTA studies. The performances of the proposed methods are evaluated using a simulation study. The proposed methods outperform and avoid the subjectivity of the currently used ad hoc approaches. Finally, we develop a new robust bivariate random-effects model which accommodates outlying and influential observations and leads to robust statistical inference by down-weighting the effect of outlying and influential studies. 
The proposed model produces robust point estimates of sensitivity and specificity compared to the standard models, and in the absence of outlying or influential studies it generates point and interval estimates of sensitivity and specificity similar to those of the standard models. / Thesis / Doctor of Philosophy (PhD) / Diagnostic tests range from the noninvasive rapid strep test, used to identify whether a patient has a bacterial sore throat, to the much more complex and invasive biopsy, used to examine the presence, cause, and extent of a severe condition such as cancer. Meta-analysis is a widely used statistical method that synthesizes evidence from several studies. In this thesis, we develop novel statistical methods extending the traditional methods for meta-analysis of diagnostic test accuracy studies. Our proposed methods address the issues of modelling asymmetric data, identifying outlying studies, and optimally accommodating these outlying studies in a meta-analysis of diagnostic test accuracy studies. Using both real-life and simulated datasets, we show that our proposed methods perform better than conventional methods in a wide range of scenarios.
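One simple objective criterion of the kind the thesis argues for, in place of eyeballing forest plots, is a standardized residual from the pooled mean. The sketch below is a generic univariate illustration with hypothetical numbers; it is not the thesis's actual (bivariate, robust) method.

```python
# Flag studies whose standardized residual from the inverse-variance pooled
# mean exceeds 3. Illustration of the general idea only; the thesis's
# methods are bivariate and more sophisticated.
import math

def pooled_mean(effects, variances):
    """Fixed-effect inverse-variance pooled estimate."""
    w = [1.0 / v for v in variances]
    return sum(wi * e for wi, e in zip(w, effects)) / sum(w)

def flag_outliers(effects, variances, z_cut=3.0):
    """Indices of studies with |standardized residual| above z_cut."""
    mu = pooled_mean(effects, variances)
    return [i for i, (e, v) in enumerate(zip(effects, variances))
            if abs(e - mu) / math.sqrt(v) > z_cut]

# Hypothetical logit-sensitivities from six studies; study 5 is aberrant.
effects   = [2.1, 2.3, 2.0, 2.2, 2.1, -0.5]
variances = [0.04, 0.05, 0.04, 0.06, 0.05, 0.05]
print(flag_outliers(effects, variances))  # [5]
```

A leave-one-out version (recomputing the pooled mean without the candidate study) would be more robust, since an extreme study drags the pooled mean toward itself.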
|
924 |
Words in the Wilds
Snefjella, Bryor January 2019 (has links)
affect, concreteness, corpus linguistics, cognitive science, cognitive linguistics, stereotype accuracy, national character stereotypes, semantic prosody / Increasing use of natural language corpora and methods from corpus and computational linguistics as a
supplement to traditional modes of scholarship in the social sciences and humanities has been labeled the
"text as data movement." Corpora afford greater scope in terms of sample sizes, time, geography, and subject
populations, as well as the opportunity to ecologically validate theories by testing their predictions against
behaviour that is not elicited by an experimenter. Herein, five projects are presented, each either exploiting
or taking inspiration from natural language data to make novel contributions to a subject matter area in
the psychological sciences, including social psychology and psycholinguistics. Additionally, each project
incorporates notions of word meaning grounded in psycholinguistic and psychoevolutionary theory, either the
affective or sensorimotor connotations of words. This thesis ends with a discussion of the necessity of taking
both experimental and observational approaches, as well as the challenge of how to link natural language
data to psychological constructs. / Thesis / Doctor of Philosophy (PhD) / The internet and modern computers are changing how scientists study the mind. Instead of doing experiments
within a laboratory, it is more and more common for cognitive scientists to observe patterns in online language
use. These patterns in language use are then used to comment on how the mind works. Online language
use is created by diverse people as they go about their lives. This is valuable for scientists studying the
mind. Our experiments are often limited by how many people and which people do experiments. Sometimes,
experiments can be misleading because people don't act in the real world like they do in a lab. This thesis
has five studies, each using online language use to comment on some part of how the mind works. Also, each
study involves how words make people feel, or whether a word refers to something you can see or touch.
Studying real people as they communicate offers new perspectives on old ideas or unanswered questions.
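A minimal example of the lexicon-based approach underlying such studies: score a text's affective tone by averaging per-word valence ratings. The lexicon values below are invented for illustration; real work uses normed ratings such as 1-9 valence scales.

```python
# Toy lexicon-based affect scoring. The valence numbers are made up;
# published norms (e.g., 1-9 rating scales) would be used in practice.
valence = {"sunshine": 8.1, "friend": 7.8, "tax": 3.2, "funeral": 1.6, "walk": 6.2}

def mean_valence(text):
    """Average valence of the words in `text` that appear in the lexicon."""
    scores = [valence[w] for w in text.lower().split() if w in valence]
    return sum(scores) / len(scores) if scores else None

print(mean_valence("A walk in the sunshine with a friend"))  # (8.1+7.8+6.2)/3
```

Aggregating such scores over millions of naturally occurring texts is what lets corpus studies link affective word meaning to behaviour outside the lab.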
|
925 |
Precisionen av digital hälsoinformation : en systematisk översikt / The accuracy of online health information : a systematic review
Holpers, Emelie, Sällström, Oskar January 2024 (has links)
Introduktion: I dagens digitala tidsålder vänder sig många individer till internet som sin första källa till hälsoinformation. Forskning har dock visat att majoriteten av digital hälsoinformation är av låg kvalitet och har bristfällig korrekthet. En stor del av denna information består sannolikt av både desinformation och missinformation. Syfte: Syftet med denna studie var att sammanställa resultaten från peer-granskade studier som utvärderar korrektheten hos digital hälsoinformation, med särskilt fokus på studier som använder expertgranskade texter utvärderade mot etablerade riktlinjer. Metod: Den 7 april 2024 genomsöktes tre databaser – MEDLINE EBSCO, Scopus och PubMed. Totalt 21 artiklar inkluderades. Resultaten från dessa artiklar kategoriserades och analyserades induktivt utifrån Narro och Tudges neo-ekologiska teori. Resultat: Fyra huvudkategorier skapades gällande informationens korrekthet: god, måttlig, bristfällig och varierande. Av de inkluderade artiklarna bedömdes 3 (14%) ha god korrekthet, 4 (19%) artiklar måttlig korrekthet, 13 (62%) bristfällig korrekthet och 1 (5%) varierande korrekthet. Därtill ansågs 76% av det inkluderade materialet från trovärdiga källor ha bristfällig korrekthet. Slutsats: Majoriteten av de inkluderade studierna och det trovärdiga materialet har bristfällig korrekthet. Dessutom tenderar studier som uppvisar en högre grad av korrekthet att ha sämre läsbarhet, och den övergripande kvaliteten är ofta bristfällig, vilket även gör det svårt för konsumenter att förstå informationen. / Introduction: In today's digital age, many individuals turn to the internet as their initial source of health information. However, research has shown that the majority of online health information is of poor quality and low accuracy. A significant portion of this information likely includes both misinformation and disinformation. 
Aim: The aim of this study was to synthesize findings from peer-reviewed research assessing the accuracy of online health information, with a particular emphasis on studies that utilize expert-reviewed content evaluated against established guidelines. Method: On 7 April 2024, three databases—MEDLINE EBSCO, Scopus, and PubMed—were searched. A total of 21 articles were included in the systematic review. The results from these articles were categorized and inductively examined through the lens of Narro and Tudge’s neo-ecological theory. Results: Four primary categories were created concerning the accuracy of the information: good, moderate, poor, and varied. Among the included articles, 3 (14%) were deemed to have good accuracy, 4 (19%) articles exhibited moderate accuracy, 13 (62%) demonstrated poor accuracy, and 1 (5%) had varied accuracy. Additionally, 76% of the included material from trustworthy sources was categorized as having poor accuracy. Conclusion: The majority of the included studies and trustworthy material were found to have poor accuracy. Furthermore, even when studies exhibited a higher degree of accuracy, the readability and overall quality were often deficient, making it difficult for consumers to understand.
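The reported percentages follow directly from the category counts over the 21 included articles, as a quick arithmetic check confirms:

```python
# Reproduce the review's category percentages from its raw counts (n = 21).
counts = {"good": 3, "moderate": 4, "poor": 13, "varied": 1}
total = sum(counts.values())
percentages = {k: round(100 * v / total) for k, v in counts.items()}
print(total)        # 21
print(percentages)  # {'good': 14, 'moderate': 19, 'poor': 62, 'varied': 5}
```

Rounding to whole percents reproduces the 14/19/62/5 split reported in the abstract.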
|
926 |
Biologically Inspired Modular Neural Networks
Azam, Farooq 19 June 2000
This dissertation explores modular learning in artificial neural networks, driven mainly by inspiration from the neurobiological basis of human learning. The modularization approaches to neural network design and learning presented here are inspired by engineering, complexity, psychological, and neurobiological considerations. The main theme of this dissertation is to explore the organization and functioning of the brain to discover new structural and learning inspirations that can subsequently be utilized to design artificial neural networks.
Artificial neural networks are touted as a neurobiologically inspired paradigm that emulates the functioning of the vertebrate brain. The brain is a highly structured entity with localized regions of neurons specialized in performing specific tasks. In contrast, mainstream monolithic feed-forward neural networks are generally unstructured black boxes, which is a major performance-limiting characteristic. The lack of explicit structure and the monolithic nature of current mainstream artificial neural networks make it difficult to systematically incorporate functional or task-specific a priori knowledge into the design process. The problems caused by these limitations are discussed in detail in this dissertation, and remedial solutions are presented that are driven by the functioning of the brain and its structural organization.
This dissertation also presents an in-depth study of currently available modular neural network architectures, highlights their shortcomings, and investigates new modular artificial neural network models designed to overcome those shortcomings. The proposed modular neural network models offer greater accuracy, better generalization, a comprehensible and simplified neural structure, ease of training, and greater user confidence. These benefits are readily apparent for certain problems, depending on the availability and use of a priori knowledge about the problems.
The modular neural network models presented in this dissertation exploit the principle of divide and conquer in the design and learning of modular artificial neural networks. The divide-and-conquer strategy solves a complex computational problem by dividing it into simpler sub-problems and then combining the individual solutions to the sub-problems into a solution to the original problem. The task divisions considered in this dissertation are the automatic decomposition of the mappings to be learned, decomposition of the artificial neural networks to minimize harmful interaction during the learning process, and explicit decomposition of the application task into sub-tasks that are learned separately.
The versatility and capabilities of the proposed modular neural networks are demonstrated by experimental results. A comparison of current modular neural network design techniques with the ones introduced in this dissertation is also presented for reference. The results presented in this dissertation lay a solid foundation for the design and learning of artificial neural networks with a sound neurobiological basis, leading to superior design techniques. Areas of future research are also presented. / Ph. D.
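The divide-and-conquer principle can be illustrated with a deliberately tiny example: two fixed "experts", each correct only on its own sub-region of the input space, combined by a hard gate. This is a sketch of the principle only; the dissertation's modules are trainable networks, not hand-written rules.

```python
# Minimal divide-and-conquer sketch: each expert handles one sub-region,
# a gate routes each input to the responsible expert. Illustrative only;
# the dissertation's experts and gates are learned, not hard-coded.

def expert_negative(x):   # specialist for x < 0
    return -x

def expert_positive(x):   # specialist for x >= 0
    return x

def gate(x):
    """Hard gating: route the input to exactly one expert."""
    return expert_negative(x) if x < 0 else expert_positive(x)

# The modular system solves the full task (absolute value) even though
# neither expert alone does.
print([gate(x) for x in (-3, -1, 0, 2, 5)])  # [3, 1, 0, 2, 5]
```

Mixture-of-experts architectures replace the hard rule with a learned gating network and train the experts jointly, which is the trainable analogue of this decomposition.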
|
927 |
Terrängunderlags inverkan på habitatanalyser genom hydraulisk modellering av torrfåror / The Impact of Terrain Data on Habitat Analysis Through Hydraulic Modeling of Dry Channels
Gothe, Miranda, Hagwall, Karin January 2024 (has links)
Detta arbete utreder olika terrängunderlags påverkan på habitatanalyser genom hydraulisk modellering, med fokus på att bestämma de bäst anpassade terrängunderlagen för olika områden nedströms vattenkraftanläggningar. Studien genomför en jämförande analys av olika terrängunderlag, inklusive olika laserskannad data från Lantmäteriet samt fotogrammetrimodeller baserade på drönarbilder, för att avgöra hur dessa metoder påverkar habitatanalyser. Arbetet undersöker även hur habitatanalyser påverkas av andra osäkerheter som introduceras vid hydraulisk modellering av habitat, nämligen Mannings råhetskoefficient och de antagna preferenserna av flödesförhållanden vilka bestämmer de habitabla områdena. Studien inkluderar prognoser för habitabla områden för lax vid olika flödesförhållanden och vid tre olika vattenkraftverk med torrfåror av olika karaktär. Hydraulisk modellering är ett användbart verktyg för att få en förståelse för de ekologiska konsekvenserna av vattenkraftsanläggningar. Representativa terrängdata är avgörande eftersom de direkt påverkar beräkningarna av de hydrauliska variablerna inom modellerna, vilket i sin tur bestämmer lämpligheten av habitat. Resultaten visar på betydande variationer i omfattningen av habitabla områden beroende på vilken typ av terrängunderlag som användes, samt beroende på flödesområdets karaktär. Generellt visade de modifierade versionerna av terrängunderlagen på mer representativa resultat än de omodifierade, och modifieringar baserade på uppmätta värden är att föredra för att generera representativa modifieringar. I områden med mindre vegetation ger terrängmodeller genererade med drönarfotogrammetri de mest representativa resultaten, medan de ger osäkra resultat i områden med mer vegetation. I bevuxna områden visar istället modifierade terrängmodeller baserade på laserskannade data från Lantmäteriet på mer representativa resultat. 
De olika typerna av laserskannade data från Lantmäteriet visar inte på någon betydande skillnad i detta syfte. Vidare genomfördes känslighetsanalyser för Mannings råhetskoefficient och laxens habitatpreferenser. Dessa analyser visade på relativt förväntade resultat, där ett ökat värde på Mannings råhetskoefficient resulterade i reducerade hastigheter och ökade djup, vilket i sin tur resulterade i större beboeliga områden vid högre flödeshastigheter. Motsatta resultat observerades för minskade värden för Mannings råhetskoefficient. Variationerna i Mannings råhetskoefficient visar inga betydande skillnader i vilket flöde som genererar ett ekologiskt maximum. De varierade habitatpreferenserna visade på större variation i optimala flöden, vilket understryker vikten av att ta fram representativa habitatpreferenser baserade på specifika ekologiska förhållanden på platsen. / Introduction This thesis investigates the influence of various terrain data on habitat analyses through hydraulic modelling, aiming to identify the most suitable terrain models for different areas downstream of hydropower plants in Sweden. The study aims to evaluate the utility of different advanced and costly measurement techniques by addressing the following research questions: How significant is the difference in estimated habitable area for salmon downstream of hydropower plants using terrain data of varying levels of detail? What causes the differences in habitable area and what characterizes the watercourses where terrain models with different levels of detail are best suited? How much do variations in Manning’s roughness coefficient and the hydraulic preferences for salmon impact the estimated habitable areas? 
Methodology The study conducts a comparative analysis of various terrain data, including different laser-scanned datasets from Lantmäteriet and photogrammetry models based on drone footage, to determine how these methods affect predictions of habitable areas in streams downstream of hydropower plants. The habitat analyses were performed using hydraulic modelling in the HEC-RAS software, analysing three different models with varying terrain data and flow rates. The simulation results were then used to estimate the extent of areas with suitable habitat based on calculated water depths and velocities. Hydraulic modelling is a useful tool for understanding the ecological impacts of hydropower installations. Accurate terrain data are crucial as they directly influence the calculation of hydraulic variables within the models, which in turn determines the predicted suitability of habitats for various aquatic species. Results and Discussion The results show significant variations in the extent of habitable areas depending on the type of terrain data used and the characteristics of the flow area. Generally, modified versions of the terrain showed more representative results than unmodified versions, with modifications based on measured values being preferable. In areas with less vegetation, terrain models generated from drone photogrammetry provided the most representative results, while these models produced uncertain results in more vegetated areas. In vegetated areas, modified terrain models based on laser-scanned data from Lantmäteriet showed more representative results. No significant difference was observed between different types of laser-scanned data from Lantmäteriet within the purpose of the study. Additionally, sensitivity analyses were conducted for Manning’s roughness coefficient and hydraulic habitat preferences for salmon. 
These analyses showed that increased Manning’s roughness coefficient values resulted in reduced velocities and increased depths, leading to larger habitable areas at higher flow rates. Opposite results were observed for decreased values of Manning’s roughness coefficient. Variations in Manning’s roughness coefficient showed no significant differences in the flow rates generating an ecological maximum. However, varying habitat preferences showed greater variation in optimal flow rates, underlining the importance of deriving accurate habitat preferences based on specific ecological conditions at the site. Conclusions The analysis results emphasise the importance of a thorough understanding of the methods and tools used in habitat analyses. This involves ensuring the quality of terrain data and making precise choices of habitat preferences based on species presence. Creating accurate models is complex, and simplified models can lead to misleading results.
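The direction of the Manning's-n sensitivity result follows from Manning's equation itself. The sketch below assumes a wide rectangular channel (hydraulic radius ≈ depth) at fixed unit discharge and slope; the thesis used full 2-D HEC-RAS models, so the geometry and numbers here are purely illustrative.

```python
# Sensitivity of depth and velocity to Manning's n for a wide rectangular
# channel: q = (1/n) * h^(5/3) * sqrt(S), with q the unit discharge, h the
# normal depth, S the slope. Illustrative geometry, not the thesis's models.
import math

def normal_depth(q, n, S):
    """Solve Manning's equation for depth: h = (q*n/sqrt(S))^(3/5)."""
    return (q * n / math.sqrt(S)) ** (3.0 / 5.0)

def velocity(q, n, S):
    """Mean velocity from continuity: V = q / h."""
    return q / normal_depth(q, n, S)

q, S = 2.0, 0.005            # m^2/s per unit width, channel slope (assumed)
low_n, high_n = 0.03, 0.06   # smooth bed vs. vegetated/rough bed

# A rougher bed (higher n) gives greater depth and lower velocity,
# which matches the direction of the sensitivity analysis in the text.
assert normal_depth(q, high_n, S) > normal_depth(q, low_n, S)
assert velocity(q, high_n, S) < velocity(q, low_n, S)
print(round(normal_depth(q, low_n, S), 2), round(normal_depth(q, high_n, S), 2))
```

Because habitat suitability is a function of depth and velocity, this one-line dependence on n is exactly why the roughness coefficient propagates into the predicted habitable area.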
|
928 |
An Extended Calibration and Validation of a Slotted-Wall Transonic Wall-Interference Correction Method for the National Transonic Facility
Bailey, Matthew Marlando 26 November 2019
Correcting wind tunnel data for wall interference is a critical part of relating the acquired data to a free-air condition. Accurately determining and correcting for the interference caused by the presence of boundaries in wind tunnels can be difficult especially for facilities employing ventilated boundaries. In this work, three varying levels of ventilation at the National Transonic Facility (NTF) were modeled and calibrated with a general slotted wall (GSW) linear boundary condition to validate the computational model used to determine wall interference corrections. Free-air lift, drag, and pitching moment coefficient predictions were compared for a range of lift production and Mach conditions to determine the uncertainty in the corrections process and the expected domain of applicability.
Exploiting a previously designed statistical validation method, this effort accomplishes the extension of a calibration and validation for a boundary pressure wall interference corrections method. The foundational calibration and validation work was based on blockage interference only, while this present work extends the assessment of the method to encompass blockage and lift interference production. The validation method involves the establishment of independent cases that are then compared to rigorously determine the degree to which the correction method can converge free-air solutions for differing interference fields. The process involved first establishing an empty-tunnel calibration to gain both a centerline Mach profile of the facility at various ventilation settings, and to gain a baseline wall pressure signature undisturbed by a test article. The wall boundary condition parameters were then calibrated with a blockage and lift interference producing test article, and final corrected performance coefficients were compared for varying test section ventilated configurations to validate the corrections process and assess its domain of applicability. During the validation process discrimination between homogeneous and discrete implementations of the boundary condition was accomplished and final results indicated comparative strength in the discrete implementation's ability to capture experimental flow physics.
Final results indicate that a discrete implementation of the General Slotted Wall boundary condition is effective in significantly reducing variations caused by differing interference fields. Corrections performed with the discrete implementation of the boundary condition collapse differing measurements of lift coefficient to within 0.0027, drag coefficient to within 0.0002, and pitching moment coefficient to within 0.0020. / Doctor of Philosophy / The purpose of conducting experimental tests in wind tunnels is often to acquire a quantitative measure of test article aerodynamic characteristics in such a way that those specific characteristics can be accurately translated into performance characteristics of the real vehicle that the test article intends to simulate. The difficulty in accurately simulating the real flow problem may not be readily apparent, but scientists and engineers have been working to improve this desired equivalence for the better part of the last half-century.
The primary aspects of experimental aerodynamics simulation that present difficulty in attaining equivalence are: geometric fidelity, accurate scaling, and accounting for the presence of walls. The problem of scaling has been largely addressed by adequately matching conditions of similarity like compressibility (Mach number), and viscous effects (Reynolds number). However, accounting for the presence of walls in the experimental setup has presented ongoing challenges for ventilated boundaries; these challenges include difficulties in the correction process, but also extend into the determination of correction uncertainties.
Exploiting a previously designed statistical validation method, this effort accomplishes the extension of a calibration and validation effort for a boundary pressure wall interference corrections method. The foundational calibration and validation work was based on blockage interference only, while this present work extends the assessment of the method to encompass blockage and lift interference production. The validation method involves the establishment of independent cases that are then compared to rigorously determine the degree to which the correction method can converge free-air solutions for differing interference scenarios. The process involved first establishing an empty-tunnel calibration to gain both a centerline Mach profile of the facility at various ventilation settings, and to gain a baseline wall pressure signature undisturbed by a test article. The wall boundary condition parameters were then calibrated with a blockage and lift interference producing test article, and final corrected performance coefficients were compared for varying test section ventilated configurations to validate the corrections process and assess its domain of applicability. During the validation process discrimination between homogeneous and discrete implementations of the boundary condition was accomplished and final results indicated comparative strength in the discrete implementation's ability to capture experimental flow physics.
Final results indicate that a discrete implementation of the General Slotted Wall boundary condition is effective in significantly reducing variations caused by differing interference fields. Corrections performed with the discrete implementation of the boundary condition collapse differing measurements of lift coefficient to within 0.0027, drag coefficient to within 0.0002, and pitching moment coefficient to within 0.0020.
|
929 |
Bivariate meta-analysis of sensitivity and specificity of radiographers' plain radiograph reporting in clinical practice.
Brealey, S., Hewitt, C., Scally, Andy J., Hahn, S., Godfrey, C., Thomas, N. January 2009 (has links)
No / Studies of diagnostic accuracy often report paired tests for sensitivity and specificity that can be pooled separately to produce summary estimates in a meta-analysis. This was done recently for a systematic review of radiographers' reporting accuracy of plain radiographs. The problem with pooling sensitivities and specificities separately is that it does not acknowledge any possible (negative) correlation between these two measures. A possible cause of this negative correlation is that different thresholds are used in studies to define abnormal and normal radiographs, because of implicit variations in thresholds that occur when radiographers report plain radiographs. A method that allows for the correlation that can exist between pairs of sensitivity and specificity within a study, using a random-effects approach, is the bivariate model. When estimates of accuracy were pooled separately using a fixed-effects model, radiographers reported plain radiographs in clinical practice at 93% (95% confidence interval (CI) 92-93%) sensitivity and 98% (95% CI 98-98%) specificity. The bivariate model produced the same summary estimates of sensitivity and specificity but with wider confidence intervals (93% (95% CI 91-95%) and 98% (95% CI 96-98%), respectively) that take into account the heterogeneity beyond chance between studies. This method also allowed us to calculate a 95% confidence ellipse around the mean values of sensitivity and specificity and a 95% prediction ellipse for individual values of sensitivity and specificity. The bivariate model is an improvement on pooling sensitivity and specificity separately when there is a threshold effect, and it is the preferred method.
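The separate-pooling step described above can be sketched for sensitivity alone: transform each study's proportion to the logit scale, weight by inverse variance, and back-transform. The study counts below are hypothetical, not the review's data; a bivariate model would additionally estimate the correlation between logit-sensitivity and logit-specificity.

```python
# Fixed-effect pooling of sensitivity on the logit scale for hypothetical
# studies (tp = correctly reported abnormal radiographs among n abnormal).
# Illustrates the separate-pooling step only, not the bivariate model.
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pool_logit(events, totals):
    """Inverse-variance weighted mean of per-study logits, back-transformed."""
    ests, weights = [], []
    for e, n in zip(events, totals):
        ests.append(logit(e / n))
        # var(logit p) is approximately 1/e + 1/(n - e)
        weights.append(1 / (1 / e + 1 / (n - e)))
    mean = sum(w * t for w, t in zip(weights, ests)) / sum(weights)
    return inv_logit(mean)

tp = [90, 185, 47]          # hypothetical true positives per study
n_abnormal = [97, 200, 50]  # hypothetical abnormal radiographs per study
print(round(pool_logit(tp, n_abnormal), 2))  # 0.93 with these invented counts
```

The bivariate model generalizes this by treating each study's (logit-sensitivity, logit-specificity) pair as a draw from a bivariate normal, which is what widens the confidence intervals reported above.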
|
930 |
Using out-of-office blood pressure measurements in established cardiovascular risk scores: implications for practice
Stevens, S.L., Stevens, R.J., de Leeuw, P., Kroon, A.A., Greenfield, S., Mohammed, Mohammed A., Gill, P., Verberk, W.J., McManus, R.J. 07 September 2018
Yes / Background: Blood pressure (BP) measurement is increasingly carried out through home or ambulatory monitoring, yet existing cardiovascular risk scores were developed for use with measurements obtained in clinic.
Aim: To describe differences in cardiovascular risk estimates obtained using ambulatory or home BP measurements instead of clinic readings.
Design and setting: Secondary analysis of data from adults aged 30-84 without prior history of cardiovascular disease (CVD) in two BP monitoring studies (BP-Eth and HOMERUS).
Method: The primary comparison was Framingham risk calculated using BP measured as in the Framingham study or daytime ambulatory BP measurements. The QRISK2 and SCORE risk equations were also studied. Statistical and clinical significance were determined using the Wilcoxon signed-rank test and scatter plots respectively.
Results: In 442 BP-Eth patients (mean age = 58 years, 50% female) the median absolute difference in 10-year Framingham cardiovascular risk calculated using BP measured as in the Framingham study or daytime ambulatory BP measurements was 1.84% (interquartile range 0.65 to 3.63, p=0.67). Only 31/442 (7.0%) of patients were reclassified across the 10% risk treatment threshold. In 165 HOMERUS patients (mean age = 56 years, 46% female) the median difference in 10-year risk was 2.76% (IQR 1.19 to 6.39, p<0.001) and only 8/165 (4.8%) of patients were reclassified.
Conclusion: Estimates of cardiovascular risk are similar when calculated using BP measurements obtained as in the risk score derivation study or through ambulatory monitoring. Further research is required to determine if differences in estimated risk would meaningfully influence risk score accuracy.
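The reclassification analysis reduces to counting patients whose estimated risk crosses the 10% treatment threshold when the BP source changes. The risk values below are illustrative, not the BP-Eth or HOMERUS data.

```python
# Count patients whose treatment decision flips at a 10% risk threshold
# when risk is computed from clinic vs. ambulatory BP. Invented numbers.

def reclassified(risk_clinic, risk_ambulatory, threshold=0.10):
    """Patients on opposite sides of the threshold under the two BP sources."""
    return sum((a >= threshold) != (b >= threshold)
               for a, b in zip(risk_clinic, risk_ambulatory))

clinic     = [0.08, 0.12, 0.25, 0.09, 0.11]  # 10-year risk, clinic BP
ambulatory = [0.09, 0.09, 0.22, 0.11, 0.13]  # 10-year risk, ambulatory BP
print(reclassified(clinic, ambulatory))  # 2 of 5 cross the 10% threshold
```

Note that a large median risk difference matters clinically only to the extent that it moves patients across the treatment threshold, which is why the study reports both quantities.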
|