231 |
Uma nova abordagem para reconhecimento biométrico baseado em características dinâmicas da íris humana / A new approach for biometric recognition based on dynamic characteristics of the human iris
Ronaldo Martins da Costa, 19 February 2010
Personal identification by iris texture analysis is a highly effective biometric method. Algorithms and techniques have been developed based on the texture features of the iris image in the human eye. However, because these features are static, they are also susceptible to fraud: a photograph can replace the live iris under analysis. For that reason, this work proposes a method for extracting iris texture features during pupil contraction and dilation, in addition to the dynamic contraction and dilation characteristics themselves. To this end, a new image acquisition system was developed using NIR (Near Infra-Red) illumination and taking the consensual reflex of the eyes into account. Features are measured under a dynamic illumination pattern controlled by the software and are afterwards selected by means of data mining. This makes it possible to increase the security of iris-based biometric recognition devices, since only living irises can be used. The results show a significant level of accuracy in the discrimination capacity of these features.
|
232 |
Semiparametric Estimation of a Gaptime-Associated Hazard Function
Teravainen, Timothy, January 2014
This dissertation proposes a suite of novel Bayesian semiparametric estimators for a proportional hazard function associated with the gaptimes, or inter-arrival times, of a counting process in survival analysis. The Cox model is applied and extended in order to identify the subsequent effect of an event on future events in a system with renewal. The estimators may also be applied, without changes, to model the effect of a point treatment on subsequent events, as well as the effect of an event on subsequent events in neighboring subjects.
These Bayesian semiparametric estimators are used to analyze the survival and reliability of the New York City electric grid. In particular, the phenomenon of "infant mortality," whereby electrical supply units are prone to immediate recurrence of failure, is flexibly quantified as a period of increased risk. In this setting, the Cox model removes the significant confounding effect of seasonality. Without this correction, infant mortality would be misestimated due to the exogenously increased failure rate during summer months and times of high demand. The structural assumptions of the Bayesian estimators allow the use and interpretation of sparse event data without the rigid constraints of standard parametric models used in reliability studies.
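The "infant mortality" phenomenon described here can be illustrated with a simple piecewise-constant hazard estimate over gaptimes: events divided by exposure within each interval. This is a generic occurrence/exposure sketch, not the dissertation's Bayesian semiparametric estimator, and all names and values are illustrative.

```python
import numpy as np

def piecewise_hazard(gaptimes, events, cuts):
    """Piecewise-constant hazard: events / total exposure in each interval.

    gaptimes: observed inter-arrival times (possibly censored)
    events:   1 if the gaptime ended in a failure, 0 if censored
    cuts:     interior cut points defining the intervals
    """
    gaptimes = np.asarray(gaptimes, float)
    events = np.asarray(events)
    edges = np.concatenate(([0.0], np.asarray(cuts, float), [np.inf]))
    rates = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # time each unit spent at risk inside (lo, hi]
        exposure = np.clip(np.minimum(gaptimes, hi) - lo, 0.0, None).sum()
        d = int(np.sum((events == 1) & (gaptimes > lo) & (gaptimes <= hi)))
        rates.append(d / exposure if exposure > 0 else float("nan"))
    return np.array(rates)
```

A unit exhibiting infant mortality shows a first-interval rate well above later ones; the dissertation additionally removes seasonal confounding through Cox covariates rather than relying on such a raw estimate.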
|
233 |
Statistical modeling and statistical learning for disease prediction and classification
Chen, Tianle, January 2014
This dissertation studies prediction and classification models for disease risk through semiparametric modeling and statistical learning. It consists of three parts. In the first part, we propose several survival models to analyze data from the Cooperative Huntington's Observational Research Trial (COHORT) study, accounting for the missing mutation status of participating family members (Kieburtz and Huntington Study Group, 1996a). Huntington's disease (HD) is a progressive neurodegenerative disorder caused by an expansion of cytosine-adenine-guanine (CAG) repeats at the IT15 gene. A CAG repeat number greater than or equal to 36 is defined as carrying the mutation, and carriers will eventually show symptoms if not censored by other events. There is an inverse relationship between the age-at-onset of HD and the CAG repeat length: the greater the CAG expansion, the earlier the age-at-onset. Accurate estimation of age-at-onset based on CAG repeat length is important for genetic counseling and the design of clinical trials for HD. Participants in COHORT (denoted as probands) undergo a genetic test and their CAG repeat number is determined. Family members of the probands do not undergo the genetic test, and their HD onset information is provided by the probands. Several methods have been proposed in the literature to model the age-specific cumulative distribution function (CDF) of HD onset as a function of the CAG repeat length. However, none of the existing methods can be directly used to analyze COHORT proband and family data because family members' mutation status is not always known. In this work, we treat the presence or absence of an expanded CAG repeat in first-degree family members as missing data and use the expectation-maximization (EM) algorithm to carry out maximum likelihood estimation of the COHORT proband and family data jointly.
We perform simulation studies to examine finite sample performance of the proposed methods and apply these methods to estimate the CDF of HD age-at-onset from the COHORT proband and family combined data. Our results show a slightly lower estimated cumulative risk of HD with the combined data compared to using proband data alone.
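The E-step/M-step alternation for latent carrier status can be sketched with a deliberately simplified toy model: relatives' carrier status is unknown with a fixed Mendelian prior, noncarriers never develop disease, and carriers' age-at-onset is exponential. The dissertation's actual likelihood is far richer; all function and variable names here are invented for illustration.

```python
import numpy as np

def em_carrier_rate(onset, cens_age, p=0.5, iters=100):
    """EM for an exponential onset rate with latent carrier status.

    onset:    observed onset ages (these subjects are carriers for sure)
    cens_age: censoring ages of unaffected relatives with unknown status
    p:        prior (Mendelian) carrier probability for relatives
    """
    onset = np.asarray(onset, float)
    cens_age = np.asarray(cens_age, float)
    lam = 1.0 / onset.mean()                       # init from observed onsets
    for _ in range(iters):
        # E-step: posterior carrier probability for each censored relative
        surv = np.exp(-lam * cens_age)             # P(no onset by c | carrier)
        w = p * surv / (p * surv + (1.0 - p))
        # M-step: weighted exponential MLE with censoring
        lam = len(onset) / (onset.sum() + (w * cens_age).sum())
    return lam, w
```

An older unaffected relative ends up with a lower posterior carrier probability, which is exactly the information the joint proband-family likelihood exploits.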
We then extend the approach to predict the cumulative risk of disease accommodating predictors with time-varying effects and outcomes subject to censoring. We model the time-specific effect through a nonparametric varying-coefficient function and handle censoring through self-consistency equations that redistribute the probability mass of censored outcomes to the right. The computational procedure is extremely convenient and can be implemented by standard software. We prove large sample properties of the proposed estimator and evaluate its finite sample performance through simulation studies. We apply the method to estimate the cumulative risk of developing HD from the mutation carriers in COHORT data and illustrate an inverse relationship between the cumulative risk of HD and the length of CAG repeats at the IT15 gene.
In the second part of the dissertation, we develop methods to accurately predict whether pre-symptomatic individuals are at risk of a disease based on their various marker profiles, which offers an opportunity for early intervention well before definitive clinical diagnosis. For many diseases, the existing clinical literature may suggest that the risk of disease varies with some markers of biological and etiological importance, for example, age. To identify effective prediction rules using nonparametric decision functions, standard statistical learning approaches treat markers with clear biological importance (e.g., age) and other markers without prior knowledge on disease etiology interchangeably as input variables. Therefore, these approaches may be inadequate in singling out and preserving the effects of the biologically important variables, especially in the presence of potential noise markers. Using age as an example of a salient marker that receives special care in the analysis, we propose a local smoothing large-margin classifier implemented with a support vector machine to construct effective age-dependent classification rules. The method adaptively adjusts the age effect and tunes age and other markers separately to achieve optimal performance. We derive the asymptotic risk bound of the local smoothing support vector machine and perform extensive simulation studies to compare it with standard approaches. We apply the proposed method to two studies of premanifest HD subjects and controls to construct age-sensitive predictive scores for the risk of HD and the risk of receiving an HD diagnosis during the study period.
In the third part of the dissertation, we develop a novel statistical learning method for longitudinal data. Predicting disease risk and progression is one of the main goals in many clinical studies. Cohort studies on the natural history and etiology of chronic diseases span years, and data are collected at multiple visits. Although kernel-based statistical learning methods have proven powerful for a wide range of disease prediction problems, these methods are only well studied for independent data, not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. We develop a statistical learning method for longitudinal data by introducing subject-specific long-term and short-term latent effects through designed kernels to account for the within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning method with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use a different kernel for each data source, taking advantage of the distinctive features of each data modality, and then optimally combine data across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's disease (Alzheimer's Disease Neuroimaging Initiative, ADNI), where we explore a unique opportunity to combine imaging and genetic data to predict the conversion from mild cognitive impairment to dementia, and show a substantial gain in performance while accounting for the longitudinal nature of the data.
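A minimal numerical sketch of the multiple-kernel idea: one kernel per data source, combined with fixed weights and plugged into kernel ridge regression. The dissertation learns the combination weights and adds subject-specific random-effect kernels, which this toy omits; all data, names, and parameters are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # Gaussian (RBF) Gram matrix between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mkl_ridge_fit(K_list, y, weights, lam=1e-2):
    """Kernel ridge regression on a fixed convex combination of
    per-source kernels (a simplified stand-in for learned MKL)."""
    K = sum(w * Km for w, Km in zip(weights, K_list))
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return alpha, K

rng = np.random.default_rng(0)
X_img = rng.normal(size=(40, 3))   # e.g. imaging features (illustrative)
X_gen = rng.normal(size=(40, 5))   # e.g. genetic features (illustrative)
y = np.sin(X_img[:, 0]) + 0.5 * X_gen[:, 1]

K_list = [rbf_kernel(X_img, X_img, 0.5), rbf_kernel(X_gen, X_gen, 0.5)]
alpha, K = mkl_ridge_fit(K_list, y, weights=[0.5, 0.5])
fitted = K @ alpha                 # in-sample predictions
```

Using one kernel per modality keeps each source's geometry intact; the combined Gram matrix is what enters the ridge solve.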
|
234 |
Learning Logic Rules for Disease Classification: With an Application to Developing Criteria Sets for the Diagnostic and Statistical Manual of Mental Disorders
Mauro, Christine, January 2015
This dissertation develops several new statistical methods for disease classification that directly account for the unique logic structure of criteria sets found in the Diagnostic and Statistical Manual of Mental Disorders. For psychiatric disorders, a clinically significant anatomical or physiological deviation cannot be used to determine disease status. Instead, clinicians rely on criteria sets from the Diagnostic and Statistical Manual of Mental Disorders to make diagnoses. Each criteria set is comprised of several symptom domains, with the domains determined by expert opinion or psychometric analyses. In order to be diagnosed, an individual must meet the minimum number of symptoms, or threshold, required for each domain. If both the overall number of domains and the number of symptoms within each domain are small, an exhaustive search to determine these thresholds is feasible, with the thresholds chosen to minimize the overall misclassification rate. However, for more complicated scenarios, such as incorporating a continuous biomarker into the diagnostic criteria, a novel technique is necessary. In this dissertation, we propose several novel approaches to empirically determine these thresholds.
Within each domain, we start by fitting a linear discriminant function based upon a sample of individuals in which disease status and the number of symptoms present in that domain are both known. Since one must meet the criteria for all domains, an overall positive diagnosis is only issued if the prediction in each domain is positive. Therefore, the overall decision rule is the intersection of all the domain specific rules. We fit this model using several approaches. In the first approach, we directly apply the framework of the support vector machine (SVM). This results in a non-convex minimization problem, which we can approximate by an iterative algorithm based on the Difference of Convex functions algorithm. In the second approach, we recognize that the expected population loss function can be re-expressed in an alternative form. Based on this alternative form, we propose two more iterative algorithms, SVM Iterative and Logistic Iterative. Although the number of symptoms per domain for the current clinical application is small, the proposed iterative methods are general and flexible enough to be adapted to complicated settings such as using continuous biomarker data, high-dimensional data (for example, imaging markers or genetic markers), other logic structures, or non-linear discriminant functions to assist in disease diagnosis.
Under varying simulation scenarios, the Exhaustive Search and both proposed methods, SVM Iterative and Logistic Iterative, have good performance characteristics when compared with the oracle decision rule. We also examine one simulation in which the Exhaustive Search is not feasible and find that SVM Iterative and Logistic Iterative perform quite well. Each of these methods is then applied to a real data set in order to construct a criteria set for Complicated Grief, a new psychiatric disorder of interest. As the domain structure is currently unknown, both a two-domain and a three-domain structure are considered. For both domain structures, all three methods choose the same thresholds. The resulting criteria sets are then evaluated on an independent data set of cases and shown to have high sensitivities. Using these same data, we also evaluate the sensitivity of three previously published criteria sets for Complicated Grief. Two of the three published criteria sets show poor sensitivity, while the sensitivity of the third is quite good. To fully evaluate our proposed criteria sets, as well as the previously published sets, a sample of controls is necessary so that specificity can also be assessed. The collection of these data is currently ongoing. We conclude the dissertation by considering the influence of study design on criteria set development and evaluation. We also discuss future extensions of this work, such as handling complex logic structures and simultaneously discovering both the domain structure and domain thresholds.
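For small domains, the exhaustive threshold search described above can be sketched directly: enumerate every vector of per-domain thresholds, issue a positive diagnosis only when all domains meet their thresholds (the intersection rule), and keep the vector with the lowest misclassification rate. A minimal sketch with invented names and data:

```python
import itertools
import numpy as np

def exhaustive_thresholds(X, y, max_sym):
    """X: n x D matrix of symptom counts per domain; y: disease status (0/1);
    max_sym: maximum symptom count per domain.
    Returns the threshold vector minimizing the misclassification rate."""
    best, best_err = None, np.inf
    for t in itertools.product(*[range(m + 1) for m in max_sym]):
        # positive diagnosis only if every domain meets its threshold
        pred = np.all(X >= np.array(t), axis=1).astype(int)
        err = np.mean(pred != y)
        if err < best_err:
            best, best_err = t, err
    return best, best_err
```

The grid has prod(max_sym + 1) cells, which is exactly why the iterative SVM/logistic methods are needed once continuous biomarkers or many domains enter the criteria set.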
|
235 |
Statistical Learning Methods for Personalized Medical Decision Making
Liu, Ying, January 2016
The theme of my dissertation is merging statistical modeling with medical domain knowledge and machine learning algorithms to assist in making personalized medical decisions. In its simplest form, making personalized medical decisions for treatment choices and disease diagnosis modality choices can be transformed into classification or prediction problems in machine learning, where the optimal decision for an individual is a decision rule that yields the best future clinical outcome or maximizes diagnosis accuracy. However, challenges emerge when analyzing complex medical data. On one hand, statistical modeling is needed to deal with inherent practical complications such as missing data, patients' loss to follow-up, and ethical and resource constraints in randomized controlled clinical trials. On the other hand, new data types and larger scales of data call for innovations combining statistical modeling, domain knowledge, and information technologies. This dissertation contains three parts, addressing the estimation of optimal personalized rules for choosing treatments, the estimation of optimal individualized rules for choosing disease diagnosis modalities, and methods for variable selection in the presence of missing data.
In the first part of this dissertation, we propose a method to find optimal dynamic treatment regimens (DTRs) in Sequential Multiple Assignment Randomized Trial (SMART) data. DTRs are sequential decision rules tailored at each stage of treatment by potentially time-varying patient features and intermediate outcomes observed in previous stages. The complexity, patient heterogeneity, and chronicity of many diseases and disorders call for learning optimal DTRs that best dynamically tailor treatment to each individual's response over time. We propose a robust and efficient approach, referred to as Augmented Multistage Outcome-Weighted Learning (AMOL), to identify optimal DTRs from sequential multiple assignment randomized trials. We improve outcome-weighted learning (Zhao et al., 2012) to allow for negative outcomes; we propose methods to reduce the variability of weights to achieve numeric stability and higher efficiency; and finally, for multiple-stage trials, we introduce robust augmentation to improve efficiency by drawing information from Q-function regression models at each stage. The proposed AMOL remains valid even if the regression model is misspecified. We formally justify that a proper choice of augmentation guarantees smaller stochastic errors in value function estimation for AMOL, and we then establish the convergence rates for AMOL. The comparative advantage of AMOL over existing methods is demonstrated in extensive simulation studies and in applications to two SMART data sets: a two-stage trial for attention deficit hyperactivity disorder and the STAR*D trial for major depressive disorder.
The second part of the dissertation introduces a machine learning algorithm to estimate personalized decision rules for medical diagnosis/screening that maximize a weighted combination of sensitivity and specificity. Using subject-specific risk factors and feature variables, such rules administer screening tests with balanced sensitivity and specificity, and thus protect low-risk subjects from the unnecessary pain and stress caused by false positive tests, while achieving high sensitivity for subjects at high risk. We conducted a simulation study mimicking a real breast cancer study and found significant improvements in sensitivity and specificity when comparing our personalized screening strategy (assigning mammography+MRI to high-risk patients and mammography alone to low-risk subjects based on a composite score of their risk factors) to a one-size-fits-all strategy (assigning mammography+MRI or mammography alone to all subjects). When applied to Parkinson's disease (PD) FDG-PET and fMRI data, we showed that the method provides individualized modality selection that can improve the AUC, and that it yields interpretable decision rules for choosing a brain imaging modality for early detection of PD. To the best of our knowledge, this is the first automatic, data-driven learning method proposed in the literature for personalized diagnosis/screening strategies.
In the last part of the dissertation, we propose a method, Multiple Imputation Random Lasso (MIRL), to select important variables and to predict the outcome in the presence of missing data for an epidemiological study of Eating and Activity in Teens. In this study, 80% of individuals have at least one variable missing. Therefore, using variable selection methods developed for complete data after list-wise deletion substantially reduces prediction power. Recent work on prediction models in the presence of incomplete data cannot adequately account for large numbers of variables with arbitrary missing patterns. We propose MIRL to combine penalized regression techniques with multiple imputation and stability selection. Extensive simulation studies are conducted to compare MIRL with several alternatives. MIRL outperforms the other methods in high-dimensional scenarios in terms of both reduced prediction error and improved variable selection performance, and its advantage is greater when the correlation among variables is high and the missing proportion is high. MIRL shows improved performance relative to other applicable methods when applied to the study of Eating and Activity in Teens for boys and girls separately, and to a subgroup of low socioeconomic status (SES) Asian boys who are at high risk of developing obesity.
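A toy sketch of the MIRL idea under strong simplifications: each missing entry is imputed by a random draw from its column's observed values (a crude stand-in for proper multiple imputation), a lasso is fit on each imputed data set, and variables are kept by selection frequency across fits, in the spirit of stability selection. Function names and defaults are illustrative, not the dissertation's implementation.

```python
import numpy as np

def lasso_cd(X, y, alpha, iters=200):
    """Lasso via cyclic coordinate descent for
    (1/2n)||y - Xb||^2 + alpha * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            resid = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ resid
            beta[j] = np.sign(rho) * max(abs(rho) - n * alpha, 0.0) / col_ss[j]
    return beta

def mirl_select(X, y, alpha=0.1, n_imp=10, freq_cut=0.5, seed=0):
    """Impute, fit a lasso per imputed data set, keep variables selected
    in at least freq_cut of the fits."""
    rng = np.random.default_rng(seed)
    sel = np.zeros(X.shape[1])
    for _ in range(n_imp):
        Xi = X.copy()
        for j in range(Xi.shape[1]):
            miss = np.isnan(Xi[:, j])
            if miss.any():
                Xi[miss, j] = rng.choice(Xi[~miss, j], miss.sum())
        sel += np.abs(lasso_cd(Xi, y, alpha)) > 1e-8
    freq = sel / n_imp
    return freq >= freq_cut, freq
```

Averaging selection over imputations is what stabilizes the variable list relative to a single list-wise-deleted lasso fit.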
|
236 |
Analysis of Oncogenic Signal Transduction with Application to KRAS Signaling Pathways
Broyde, Joshua, January 2018
The discovery of novel members of tumorigenic pathways remains a critical step in fully dissecting the molecular biology of cancer. Indeed, because a number of cancer drivers are themselves undruggable, elucidating the signaling apparatuses in which they participate is essential for discovering novel therapeutic targets that will allow the treatment of aggressive neoplastic growth. In the context of oncoproteins and tumor suppressors, novel participants may be upstream regulators, downstream effectors, or physical cognate binding partners. In this work, we develop in silico approaches to more fully elucidate the tumorigenic signaling machinery used by tumor suppressors and oncoproteins. We first report applications of machine-learning algorithms that integrate diverse network-based information to generate testable hypotheses about proteins involved in canonical oncogenic pathways. We develop the OncoSig algorithm to build protein-centric maps that identify upstream modulators, cognate binding partners, and downstream effectors for any tumor suppressor or oncogene in a tumor-specific fashion. We specifically apply OncoSig to elucidate the oncogenic KRAS regulatory map in lung adenocarcinoma (LUAD). Oncogenic KRAS is a key driver of aggressive tumor growth in many LUAD patients, yet no FDA-approved drugs target it. Thus, elucidating members of the KRAS protein-centric map is critical for discovering synthetic lethal interactions that may be subject to therapeutic targeting. Critically, 18/22 of novel predicted KRAS interactors elicited synthetic lethality in LUAD organoid cultures harboring an activating KRAS mutation. We then extend the OncoSig algorithm to 10 oncogenic/tumor suppressor pathways (such as TP53, EGFR, and PI3K), and show that OncoSig recovers known regulators and downstream effectors of these critical mediators of tumorigenesis. We then focus specifically on dissecting KRAS's physical protein-protein interactions.
Many cognate binding partners bind to KRAS via a structurally conserved RAS-Binding Domain (RBD), thus propagating KRAS signal transduction; for example, CRAF, PI3K, and RALGDS all bind to KRAS via an RBD. To elucidate novel KRAS protein-protein interactors, we use structural and sequence-based approaches to discover biophysical properties of known RBDs. We apply the PrePPI algorithm, which predicts novel protein-protein interactions based on structural similarity, and find that PrePPI successfully recovers known RBDs while discriminating them from domains structurally similar to the RBD that do not bind KRAS. Using this information, we develop biophysical features to computationally predict novel KRAS binding partners. Finally, we report computational and experimental work addressing whether KRAS forms a homodimer. The precise mechanism by which KRAS propagates signal transduction after binding to the RBD remains elusive, and KRAS homodimerization, for example, may play a key role in KRAS-induced tumorigenesis. Using analytical ultracentrifugation to measure binding affinity, we find that KRAS forms either a weak dimer or a large non-specific multimer. Furthermore, analysis of KRAS protein structures deposited in the Protein Data Bank reveals key regions with a propensity to form homodimer contacts in the crystal complexes, which may mediate KRAS homodimerization in a biological setting as well. These results provide mechanistic insight into how KRAS dimerization may facilitate cellular signal transduction.
|
237 |
Bayesian Modeling for Mental Health Surveys
Williams, Sharifa Zakiya, January 2018
Sample surveys are often used to collect data for obtaining estimates of finite population quantities, such as disease prevalence. However, non-response and sampling frame under-coverage can cause the survey sample to differ from the target population in important ways. To reduce bias in the survey estimates that can arise from these differences, auxiliary information about the target population from sources including administrative files or census data can be used. Survey weighting is one approach commonly used to reduce bias. Although weighted estimates are relatively easy to obtain, they can be inefficient in the presence of highly dispersed weights. Model-based estimation in survey research offers the advantage of improved efficiency in the presence of sparse data and highly variable weights. However, such models can be subject to model misspecification. In this dissertation, we propose Bayesian penalized spline regression models for survey inference about proportions in the entire population as well as in sub-populations. The proposed methods incorporate survey weights as covariates using a penalized spline to protect against model misspecification. We show by simulations that the proposed methods perform well, yielding efficient estimates of population proportions for binary survey data in the presence of highly dispersed weights that are robust to misspecification of the model for survey outcomes. We illustrate the use of the proposed methods to estimate the prevalence of lifetime temper dysregulation disorder among National Guard service members overall and in sub-populations defined by gender and race using the Ohio Army National Guard Mental Health Initiative 2008-2009 survey data.
We further extend the proposed framework to the setting where individual auxiliary data for the population are not available and utilize a Bayesian bootstrap approach to complete model-based estimation of current and undiagnosed depression in Hispanics/Latinos of different national backgrounds from the 2015 Washington Heights Community Survey.
|
238 |
Statistical methods for the study of etiologic heterogeneity
Zabor, Emily Craig, January 2019
Traditionally, cancer epidemiologists have investigated the causes of disease under the premise that patients with a certain site of disease can be treated as a single entity. Then risk factors associated with the disease are identified through case-control or cohort studies for the disease as a whole. However, with the rise of molecular and genomic profiling, in recent years biologic subtypes have increasingly been identified. Once subtypes are known, it is natural to ask whether they share a common etiology or in fact arise from distinct sets of risk factors, a concept known as etiologic heterogeneity. This dissertation seeks to evaluate methods for the study of etiologic heterogeneity in the context of cancer research, with a focus on methods for case-control studies. First, a number of existing regression-based methods for the study of etiologic heterogeneity in the context of pre-defined subtypes are compared using a data example and simulation studies. This work found that a standard polytomous logistic regression approach performs at least as well as more complex methods, and is easy to implement in standard software. Next, simulation studies investigate the statistical properties of an approach that combines the search for the most etiologically distinct subtype solution from high-dimensional tumor marker data with estimation of risk factor effects. The method performs well when appropriate up-front selection of tumor markers is performed, even when there is confounding structure or high-dimensional noise. Finally, an application to a breast cancer case-control study demonstrates the usefulness of the novel clustering approach to identify a more risk-heterogeneous class solution in breast cancer based on a panel of gene expression data and known risk factors.
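The baseline-category polytomous (multinomial) logistic regression favored above, with controls as the reference class and one coefficient row per tumor subtype, can be sketched with plain gradient descent. This is a generic fit, not the dissertation's comparison; data and names are illustrative.

```python
import numpy as np

def polytomous_fit(X, y, n_class, lr=0.5, iters=5000):
    """Baseline-category multinomial logit (class 0 = reference),
    fit by batch gradient descent on the average log-loss."""
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])      # add intercept
    W = np.zeros((n_class, Xb.shape[1]))
    Y = np.eye(n_class)[y]                     # one-hot labels
    for _ in range(iters):
        Z = Xb @ W.T
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)      # softmax probabilities
        W -= lr * ((P - Y).T @ Xb) / n
        W[0] = 0.0                             # pin the reference class
    return W

def polytomous_predict(X, W):
    Xb = np.column_stack([np.ones(len(X)), X])
    return np.argmax(Xb @ W.T, axis=1)
```

Each non-reference row of W is the log-odds of that subtype versus control, so subtype-specific risk-factor effects (and their heterogeneity) can be read off and compared directly.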
|
239 |
Uso do laser scanner terrestre na estimativa de parâmetros biométricos em povoamentos florestais / Use of terrestrial laser scanning for biometric parameter estimation in forest plantations
Almeida, Gustavo José Ferreira de, 11 August 2017
Forest resources assessment serves diverse purposes in the natural sciences and depends on fast, accurate field data acquisition; forest inventory has relied mainly on manual human labor for this task. LiDAR technology, based on laser systems, allows these data to be collected through a three-dimensional representation of the environment, generating spatially accurate information about the objects within it. Terrestrial laser scanning (TLS) applies this technology from a ground-based platform and can therefore be used for the 3D representation of forests and natural scenes. Owing to the growing number of studies on this subject, TLS systems can now provide basic forest metrics with high accuracy, such as stand density and diameter at breast height (DBH), as well as information not obtained by standard forest inventory, such as biomass estimates and leaf area index, among others. This work assesses the ability of a TLS system to provide accurate metrics for individual trees selected in two forest stands in southeastern Brazil. Trees of Eucalyptus sp. (n = 6) and Pinus elliottii var. elliottii (n = 5) were scanned, and the values obtained from the 3D mapping were compared with manually measured field data. The results show that the trunk-filtering algorithm was efficient in isolating individual stems up to the total height of the sampled trees, while the stem-modelling algorithm provided diameter measurements up to 50% of the samples' total height. The accuracy (RMSE) of the TLS-derived DBH measurements was 0.91 cm for Eucalyptus and 2.77 cm for Pinus. Diameters along the stem were estimated more accurately for Eucalyptus (RMSE = 2.75 cm, r = 0.77) than for Pinus (RMSE = 3.62 cm, r = 0.86), results consistent with those found in the literature. The accuracy of the diameter estimates decreased along the stem. The author suggests that strong wind at the time of scanning may have degraded the point clouds with noise and thus reduced the accuracy of the diameter-estimation models. From these results it is concluded that, for the environmental conditions and scanning parameters presented, the TLS system provided data of acceptable accuracy, and that further studies should be conducted to understand and mitigate the effects that hinder accurate data acquisition in the upper strata of the forest canopy.
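The accuracy figures reported above (RMSE and Pearson's r between TLS-derived and field-measured diameters) follow standard definitions and can be sketched in a few lines. The diameter values below are hypothetical illustrations, not the thesis data:

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between paired measurements."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted))

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical TLS-derived vs. field-measured DBH values (cm)
tls_dbh   = [24.1, 26.8, 23.5, 25.0, 27.2, 24.9]
field_dbh = [24.8, 26.1, 24.4, 25.7, 26.5, 25.3]

print(f"RMSE = {rmse(tls_dbh, field_dbh):.2f} cm")
print(f"r    = {pearson_r(tls_dbh, field_dbh):.2f}")
```

An RMSE near zero and an r near 1 would indicate close agreement between the scanner-derived and manually measured diameters, which is how the agreement for Eucalyptus and Pinus is summarized in the abstract.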
|
240 |
Sonographic determination of liver size in normal newborns, infants, and children under 7 years of age. Rocha, Silvia Maria Sucena da, 09 April 2009 (has links)
INTRODUCTION: Sonographic liver biometry is frequently requested in the diagnostic work-up of children; however, multiple measurement methods have been described, none of them consensually accepted, and normal reference values are lacking. OBJECTIVES: To determine sonographic liver size in normal newborns, infants, and children under 7 years of age, and to correlate the obtained values with age, sex, body height, body weight, and body mass index (BMI). METHODS: Between 2003 and 2005, 584 healthy children aged 0 to 7 years were examined with a standardized sonographic method. Liver measurements were performed on longitudinal sectional planes defined by external orientation lines combined with extra- and intra-hepatic anatomic landmarks. Two diameters were measured: a) the cranio-caudal diameter of the left hepatic lobe on the mid-sternal line (CCMSL), and b) the cranio-caudal diameter of the posterior surface of the right hepatic lobe on the midclavicular line (PCCMCL). The children were sorted into 11 age groups. Pearson's correlation coefficient was used for the correlation study; the unpaired Student t-test was applied to compare measurements between sexes, and the Bonferroni test was used for analysis of variance of the means across age groups. Nomograms of liver size as a function of age were established by non-linear regression models. RESULTS: The liver measurements showed a positive and significant correlation with age (r = 0.75 for CCMSL and 0.80 for PCCMCL), body height (r = 0.80 for CCMSL and 0.85 for PCCMCL), and body weight (r = 0.74 for CCMSL and 0.82 for PCCMCL); no correlation was found with BMI (r < 0.11). A significant difference between sexes was observed in 3 age groups, with higher values in boys. Liver size increased progressively over the age range studied, proportionally less than body growth and with a distinct pattern for each lobe: the left hepatic lobe grew most markedly in the first 3 years of life, while the right lobe grew gradually and progressively from birth to 7 years of age. CONCLUSIONS: Liver size in normal newborns, infants, and children under 7 years of age was determined with a standardized sonographic technique. The analysis revealed a positive and significant correlation between liver size and age, body height, and body weight; no relevant correlation was found with BMI, and no consistent difference between sexes was observed. The nomograms presented demonstrate the normal range of liver size in the studied population, showing a progressive increase with age, proportionally smaller than body growth, and a distinct growth pattern for each hepatic lobe.
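The age nomograms described above come from non-linear regression of liver size on age. As a minimal sketch, the fit below uses a curvilinear model, size ≈ a + b·ln(age + 1); both the model form and the sample values are illustrative assumptions, not the thesis's actual regression model or data:

```python
import math

def fit_log_model(ages, sizes):
    """Least-squares fit of size = a + b*ln(age + 1), a simple curvilinear nomogram.

    The log transform makes the model linear in its coefficients, so the
    ordinary least-squares closed form applies directly.
    """
    xs = [math.log(t + 1.0) for t in ages]
    n = len(xs)
    mx, my = sum(xs) / n, sum(sizes) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, sizes)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical left-lobe cranio-caudal diameters (cm) by age (years)
ages  = [0.1, 0.5, 1, 2, 3, 4, 5, 6, 7]
sizes = [4.0, 4.8, 5.5, 6.3, 6.8, 7.0, 7.2, 7.4, 7.5]

a, b = fit_log_model(ages, sizes)
for age in (0, 1, 3, 7):  # reference values at selected ages
    print(f"age {age}: predicted size = {a + b * math.log(age + 1):.1f} cm")
```

A shape of this kind (steep early growth that flattens with age) matches the qualitative finding that the left lobe grows most in the first 3 years of life; the published nomograms would supply the actual reference curves.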
|