About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Análise do impacto de perturbações sobre medidas de qualidade de ajuste para modelos de equações estruturais / Analysis of the impact of disturbances over the measures of goodness of fit for structural equation models

Brunelli, Renata Trevisan 11 May 2012
Structural Equation Modeling (SEM) is a multivariate methodology for studying cause-and-effect and correlation relationships among a set of variables (observed or latent) simultaneously. The technique has become increasingly widespread in recent years across different fields of knowledge. One of its main applications is the confirmation of theoretical models proposed by the researcher (Confirmatory Factor Analysis). The literature suggests several measures for evaluating the goodness of fit of a SEM model. However, few studies relate the values of these different measures to possible problems in the sample or in the model specification, that is, which problems of this kind affect which measures (and which they do not), and in what way. This information is important because it helps explain why a model may be judged poorly fitted. The objective of this work is to investigate how different disturbances in the sampling, specification, and estimation of a SEM model can affect the goodness-of-fit measures, and whether sample size influences this effect. The work also examines how such disturbances affect the parameter estimates, since there are disturbances under which some measures indicate a poor fit even though the parameters remain well estimated, and, conversely, situations in which the measures indicate a good fit while the parameter estimates are distorted. These investigations are carried out by simulating samples of different sizes for each type of disturbance. SEM models with different specifications are then fitted to these samples, and their parameters are estimated by two methods: Generalized Least Squares and Maximum Likelihood. With these results, a researcher applying SEM can anticipate such issues and choose, among the available goodness-of-fit measures, those best suited to the characteristics of the study.
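
The abstract above is narrative, but the kind of simulation it describes is easy to prototype. The sketch below (Python, not the thesis code) simulates a one-factor confirmatory model with four indicators, fits it by minimizing the maximum-likelihood discrepancy function, and reports two common goodness-of-fit measures, the chi-square statistic and RMSEA, for clean versus contaminated samples of two sizes. The loadings, contamination scheme, and sample sizes are illustrative assumptions, not values from the thesis.

# Minimal sketch: one-factor model, ML fit, chi-square and RMSEA under a sampling disturbance.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
p = 4
lam_true = np.array([0.8, 0.7, 0.6, 0.5])      # assumed factor loadings
psi_true = 1.0 - lam_true**2                   # unique variances (unit indicator variance)

def simulate(n, contaminate=False):
    eta = rng.normal(size=n)                                   # latent factor scores
    y = np.outer(eta, lam_true) + rng.normal(size=(n, p)) * np.sqrt(psi_true)
    if contaminate:                                            # disturbance: ~5% gross outliers
        idx = rng.choice(n, size=n // 20, replace=False)
        y[idx] += rng.normal(scale=5.0, size=(len(idx), p))
    return y

def ml_fit(S, n):
    """Minimize F_ML = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p for the one-factor model."""
    def f(theta):
        lam, psi = theta[:p], np.exp(theta[p:])                # log-parametrize variances
        Sigma = np.outer(lam, lam) + np.diag(psi)
        return (np.linalg.slogdet(Sigma)[1] + np.trace(S @ np.linalg.inv(Sigma))
                - np.linalg.slogdet(S)[1] - p)
    res = minimize(f, np.concatenate([np.full(p, 0.5), np.zeros(p)]), method="BFGS")
    T = (n - 1) * res.fun                                      # chi-square test statistic
    df = p * (p + 1) // 2 - 2 * p                              # 10 sample moments - 8 parameters
    rmsea = np.sqrt(max(T - df, 0.0) / (df * (n - 1)))
    return T, df, rmsea

for n in (100, 1000):
    for contaminate in (False, True):
        y = simulate(n, contaminate)
        T, df, rmsea = ml_fit(np.cov(y, rowvar=False), n)
        print(f"n={n:5d} contaminated={contaminate}: chi2={T:6.2f} (df={df}), RMSEA={rmsea:.3f}")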

MODELS FOR ASSESSMENT OF FLAWS IN PRESSURE TUBES OF CANDU REACTORS

Sahoo, Anup Kumar January 2009
Probabilistic assessment and life-cycle management of engineering components and systems in a nuclear power plant are intended to ensure safe and efficient energy generation over the plant's entire life. The CANDU reactor core consists of 380-480 pressure tubes, which act as miniature pressure vessels containing natural uranium fuel. Pressure tubes operate under severe temperature and radiation conditions, which result in degradation with ageing. The presence of flaws in a pressure tube makes it vulnerable to delayed hydride cracking (DHC), which may lead to rupture or a break-before-leak situation. Assessment of flaws in the pressure tubes is therefore considered an integral part of a reactor core assessment program. The main objective of the thesis is to develop advanced probabilistic and mechanical stress-field models for the assessment of flaws. The flaw assessment models used in industry are based on deterministic upper/lower-bound values of the variables and ignore uncertainties associated with system parameters. In this thesis, explicit limit state equations are formulated and the first-order reliability method is employed for the reliability computation, which is more efficient than simulation-based methods. A semi-probabilistic approach is adopted to develop an assessment model consisting of a mechanics-based condition (or equation) involving partial factors that are calibrated to a specified reliability level. This approach is applied to develop models for DHC initiation and leak-before-break assessments. A novel feature of the proposed method is that it bridges the gap between a simple deterministic analysis and complex simulations, and it is amenable to practical applications. Nuclear power plant systems are not easily accessible for inspection and data collection because of exposure to high radiation. For this reason, small samples of pressure tubes are inspected at periodic intervals, and the small samples of data so collected are used as input to the probabilistic analysis. Pressure tube flaw assessment is therefore confounded by large sampling uncertainties, and determining an adequate sample size is an important issue. In this thesis, a risk-informed approach is proposed to define the sample size requirement for flaw assessment. The notch-tip stress field is a key factor in any flaw assessment model. Traditionally, linear elastic fracture mechanics (LEFM) and its extension serve as the basis for determining the notch-tip stress field for elastic and elastic-perfectly-plastic materials, respectively. However, the LEFM solution is based on small-deformation theory and a fixed crack geometry, which leads to singular stress and strain fields at the crack tip. The thesis presents new models for notch- and crack-induced stress fields based on the deformed geometry. In contrast with the classical solution based on small-deformation theory, the proposed model uses Cauchy's stress definition and boundary conditions that are coupled with the deformed geometry. The formulation also incorporates the rotation near the crack tip, which leads to blunting and displacement of the crack tip. The solution obtained from the final deformed configuration yields a non-singular stress field at the crack tip and a non-linear variation of the stress concentration factor for both elastic and elastic-perfectly-plastic materials.
The proposed stress-field formulation is applied to derive an analytical model for estimating the threshold stress intensity factor (KIH) for DHC initiation. The analytical approach provides a relationship between KIH and temperature that is consistent with experimental results.
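
As a rough illustration of the first-order reliability method (FORM) mentioned in the abstract, the sketch below solves a generic strength-minus-load limit state with the Hasofer-Lind/Rackwitz-Fiessler iteration and checks the result against crude Monte Carlo; the efficiency gap between the two is the point the abstract makes. The limit state, distributions, and parameter values are illustrative assumptions and are not taken from the thesis.

# Hedged FORM sketch for a generic limit state g = R - S (failure when g <= 0).
import numpy as np
from scipy.stats import norm

mu_lnR, sig_lnR = np.log(60.0), 0.10     # resistance R ~ lognormal (assumed)
mu_S, sig_S = 40.0, 5.0                  # load effect S ~ normal (assumed)

def g_std(u):
    """Limit state written in standard normal space u = (u1, u2)."""
    R = np.exp(mu_lnR + sig_lnR * u[0])
    S = mu_S + sig_S * u[1]
    return R - S

def grad_g(u, h=1e-6):
    """Finite-difference gradient of g in u-space."""
    return np.array([(g_std(u + h * e) - g_std(u - h * e)) / (2 * h) for e in np.eye(2)])

# Hasofer-Lind / Rackwitz-Fiessler iteration for the design point u*.
u = np.zeros(2)
for _ in range(50):
    grad = grad_g(u)
    u_new = (grad @ u - g_std(u)) * grad / (grad @ grad)
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)
print(f"FORM: beta = {beta:.3f}, Pf = {norm.cdf(-beta):.2e}")

# Crude Monte Carlo check (needs millions of samples for a comparable estimate).
rng = np.random.default_rng(1)
U = rng.normal(size=(2_000_000, 2))
R = np.exp(mu_lnR + sig_lnR * U[:, 0])
S = mu_S + sig_S * U[:, 1]
print(f"MC  : Pf = {np.mean(R - S <= 0):.2e}")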

Adaptive Reliability Analysis of Reinforced Concrete Bridges Using Nondestructive Testing

Huang, Qindan May 2010
There has been increasing interest in evaluating the performance of existing reinforced concrete (RC) bridges just after natural disasters or man-made events, especially when the defects are invisible, or in quantifying the improvement after rehabilitation. In order to obtain an accurate assessment of the reliability of an RC bridge, it is critical to incorporate information about its current structural properties, which reflect possible aging and deterioration. This dissertation develops an adaptive reliability analysis of RC bridges that incorporates damage detection information obtained from nondestructive testing (NDT). In this study, seismic fragility is used to describe the reliability of a structure withstanding future seismic demand. It is defined as the conditional probability that a seismic demand quantity attains or exceeds a specified capacity level for given values of earthquake intensity. The dissertation first develops a probabilistic capacity model for RC columns that can be used when the flexural stiffness decays nonuniformly over the column height. Then, a general methodology to construct probabilistic seismic demand models for RC highway bridges with a single-column bent is presented. Next, a combination of global and local NDT methods is proposed to identify in-place structural properties. The global NDT uses the dynamic responses of a structure to assess its global/equivalent structural properties and detect potential damage locations; the local NDT uses local measurements to identify the local characteristics of the structure. Measurement and modeling errors are considered in the application of the NDT methods and the analysis of the NDT data. The information obtained from NDT is then used in the probabilistic capacity and demand models to estimate the seismic fragility of the bridge. As an illustration, the proposed probabilistic framework is applied to a reinforced concrete bridge with a one-column bent. The results show that the proposed framework can successfully provide up-to-date structural properties and accurate fragility estimates.
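
For readers unfamiliar with the fragility definition used above, the following sketch (illustrative only, not the dissertation's models) evaluates a lognormal fragility curve from a power-law probabilistic seismic demand model and a capacity model, and shows how an NDT-informed reduction of the capacity median shifts the curve. All coefficients are assumed values.

# Hedged sketch: lognormal fragility curve with prior vs NDT-updated capacity.
import numpy as np
from scipy.stats import norm

a, b, beta_d = -3.0, 1.0, 0.35        # demand model: ln(drift) = a + b*ln(Sa) + eps (assumed)
beta_c = 0.30                         # log-standard deviation of drift capacity (assumed)

def fragility(sa, c_med):
    """P(demand >= capacity | Sa) with lognormal demand and capacity."""
    ln_d_med = a + b * np.log(sa)
    return norm.cdf((ln_d_med - np.log(c_med)) / np.hypot(beta_d, beta_c))

for sa in (0.1, 0.25, 0.5, 1.0, 2.0):
    p_prior = fragility(sa, c_med=0.06)     # capacity median from design information
    p_ndt = fragility(sa, c_med=0.045)      # reduced capacity median inferred from NDT
    print(f"Sa = {sa:4.2f} g: P_fail(prior) = {p_prior:.3f}, P_fail(NDT-updated) = {p_ndt:.3f}")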

Untersuchung von Holzwerkstoffen unter Schlagbelastung zur Beurteilung der Werkstoffeignung für den Maschinenbau / Investigation of wood-based materials under impact loading to assess material suitability for mechanical engineering

Müller, Christoph 20 October 2015
In this work, wood-based materials are tested comparatively in static bending and impact bending tests. Selected wood-based materials are thermally degraded, and one relevant notch geometry is also tested. The aim of the investigation is to assess the suitability of different types of materials for use in safety-relevant applications involving impact loads. To this end, the fundamentals of instrumented impact testing and of wood-based materials are first presented, the state of the art is outlined, and previous studies are analyzed. Building on this, a dedicated test rig for time-resolved force-acceleration measurement during impact tests was developed; its suitability is verified by several methods, and the measured signals are checked for plausibility. In addition, a statistical procedure for verifying adequate sample size is developed and applied to the measurements. Based on the characteristic values obtained under static and impact bending loads, a classification model for material comparison and selection is proposed. It integrally captures the mechanical performance of the tested wood-based materials and can be applied to further wood-based materials. Finally, building on these findings, a concept for component testing under impact loads is proposed for future studies.
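
A minimal sketch of a sample-size adequacy check of the kind mentioned in the abstract is given below; it uses the textbook confidence-interval criterion for the mean and is not necessarily the procedure developed in the thesis. The specimen values are invented for illustration.

# Hedged sketch: is the sample large enough to estimate the mean within a relative margin?
import numpy as np
from scipy.stats import t

def check_sample_size(values, rel_margin=0.10, confidence=0.95):
    """Return (is the CI half-width within rel_margin * mean, approximate required n)."""
    x = np.asarray(values, dtype=float)
    n, mean, sd = len(x), x.mean(), x.std(ddof=1)
    t_crit = t.ppf(0.5 + confidence / 2, df=n - 1)
    half_width = t_crit * sd / np.sqrt(n)
    n_required = int(np.ceil((t_crit * sd / (rel_margin * mean)) ** 2))
    return half_width <= rel_margin * mean, n_required

# Made-up impact bending work values (kJ/m^2) for illustration:
sample = [6.1, 7.4, 5.8, 6.9, 7.1, 6.5, 5.5, 7.8, 6.2, 6.7]
adequate, n_req = check_sample_size(sample)
print(f"n = {len(sample)}, adequate = {adequate}, approx. required n = {n_req}")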

Species Distribution Modeling: Implications of Modeling Approaches, Biotic Effects, Sample Size, and Detection Limit

Wang, Lifei 14 January 2014
When we develop and use species distribution models to predict species' current or potential distributions, we are faced with trade-offs between model generality, precision, and realism. It is important to know how to improve and validate model generality while maintaining good model precision and realism. However, it is difficult for ecologists to evaluate species distribution models using field-sampled data alone because the true species response function to environmental or ecological factors is unknown. Species distribution models should be able to approximate the true characteristics and distributions of species if ecologists want to use them as reliable tools. Simulated data provide the advantage of knowing the true species-environment relationships and controlling the causal factors of interest, giving insight into the effects of these factors on model performance. I used a case study on Bythotrephes longimanus distributions from several hundred Ontario lakes and a simulation study to explore the effects on model performance of several factors: the choice of predictor variables, the model evaluation methods, the quantity and quality of the data used for developing models, and the strengths and weaknesses of different species distribution models. Linear discriminant analysis, multiple logistic regression, random forests, and artificial neural networks were compared in both studies. Results based on field data sampled from lakes indicated that the predictive performance of the four models was more variable when they were developed on abiotic (physical and chemical) conditions alone, whereas the generality of these models improved when biotic (relevant species) information was included. When using simulated data, although the overall performance of random forests and artificial neural networks was better than that of linear discriminant analysis and multiple logistic regression, linear discriminant analysis and multiple logistic regression had relatively good and stable model sensitivity at different sample size and detection limit levels, which may be useful for predicting species presences when data are limited. Random forests performed consistently well at different sample size levels, but were more sensitive to a high detection limit. The performance of artificial neural networks was affected by both sample size and detection limit, and it was more sensitive to small sample sizes.
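
The comparison described above can be prototyped with standard tools. The sketch below (not the thesis code) fits the four model types named in the abstract to simulated presence/absence data and scores them while varying sample size and a crude detection limit, under an assumed data-generating model; every numeric choice is an illustrative assumption.

# Hedged sketch: LDA, logistic regression, random forest, and ANN on simulated data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def simulate(n, detection_limit=0.0):
    X = rng.normal(size=(n, 4))                        # environmental predictors
    abundance = np.exp(1.0 + 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.5, n))
    y = (abundance > 1.0).astype(int)                  # true presence/absence
    observed = y.copy()
    observed[abundance < detection_limit] = 0          # low abundances go undetected
    return X, observed

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "Logistic": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}

for n in (100, 1000):
    for dl in (0.0, 2.0):
        X, y = simulate(n, detection_limit=dl)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
        scores = {name: roc_auc_score(yte, m.fit(Xtr, ytr).predict_proba(Xte)[:, 1])
                  for name, m in models.items()}
        print(f"n={n:5d} detection_limit={dl}:",
              " ".join(f"{k}={v:.2f}" for k, v in scores.items()))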

Model-Based Optimization of Clinical Trial Designs

Vong, Camille January 2014
The high overall attrition rates in the drug development pipeline have made it necessary to shift gears towards new methodologies that allow earlier and correct decisions, and the optimal use of all information accrued throughout the process. The quantitative science of pharmacometrics, using pharmacokinetic-pharmacodynamic models, was identified as one of the strategies core to this renaissance. Coupled with Optimal Design (OD), they together constitute an attractive toolkit to usher new agents more rapidly and successfully to marketing approval. The general aim of this thesis was to investigate how the use of novel pharmacometric methodologies can improve the design and analysis of clinical trials within drug development. The implementation of a Monte-Carlo Mapped Power method made it possible to rapidly generate multiple hypotheses and to compute the corresponding sample size in about 1% of the time usually required by more traditional model-based power assessments. By allowing statistical inference across all available data and the integration of mechanistic interpretation of the models, the performance of this new methodology in proof-of-concept and dose-finding trials highlighted the possibility of drastically reducing the number of healthy volunteers and patients exposed to experimental drugs. This thesis furthermore addressed the benefits of OD in planning trials with bioanalytical limits and toxicity constraints, through the development of novel optimality criteria that foremost pinpoint information and safety aspects. The use of these methodologies showed better estimation properties and robustness for the ensuing data analysis and reduced the number of patients exposed to severe toxicity seven-fold. Finally, predictive tools for maximum tolerated dose selection in Phase I oncology trials were explored for a combination therapy characterized by hematological toxicity as the main dose-limiting toxicity. In this example, Bayesian and model-based approaches provided the incentive for a paradigm shift away from the traditional rule-based “3+3” design algorithm. Throughout this thesis several examples have shown the possibility of streamlining clinical trials with more model-based design and analysis support. Ultimately, efficient use of the data can raise the probability of a successful trial and strengthen ethical conduct.
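
As a hedged illustration of simulation-based power assessment in general (not an implementation of the Monte-Carlo Mapped Power method itself), the sketch below estimates the power of a likelihood-ratio test for a drug effect in a simple two-arm trial and scans the per-arm sample size; the assumed effect size and residual variability are invented for the example.

# Hedged sketch: Monte Carlo power of a likelihood-ratio test versus per-arm sample size.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
effect, sd = 0.4, 1.0                  # assumed true drug effect and residual SD
alpha = 0.05
lrt_crit = chi2.ppf(1 - alpha, df=1)   # LRT threshold for one extra parameter

def one_trial(n_per_arm):
    """Simulate one two-arm trial and return the LRT statistic for the drug effect."""
    placebo = rng.normal(0.0, sd, n_per_arm)
    drug = rng.normal(effect, sd, n_per_arm)
    y = np.concatenate([placebo, drug])
    # For a normal model, -2 log-likelihood is n*log(RSS/n) up to a constant.
    rss_full = np.sum((placebo - placebo.mean()) ** 2) + np.sum((drug - drug.mean()) ** 2)
    rss_reduced = np.sum((y - y.mean()) ** 2)
    n = len(y)
    return n * (np.log(rss_reduced / n) - np.log(rss_full / n))

def power(n_per_arm, n_sim=2000):
    stats = np.array([one_trial(n_per_arm) for _ in range(n_sim)])
    return np.mean(stats > lrt_crit)

for n_per_arm in (25, 50, 75, 100, 125):
    print(f"n per arm = {n_per_arm:3d}: power = {power(n_per_arm):.2f}")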

A decompositional investigation of 3D face recognition

Cook, James Allen January 2007
Automated Face Recognition is the process of determining a subject's identity from digital imagery of their face without user intervention. The term in fact encompasses two distinct tasks: Face Verification is the process of verifying a subject's claimed identity, while Face Identification involves selecting the most likely identity from a database of subjects. This dissertation focuses on the task of Face Verification, which has a myriad of applications in security ranging from border control to personal banking. Recently the use of 3D facial imagery has found favour in the research community due to its inherent robustness to the pose and illumination variations which plague the 2D modality. The field of 3D face recognition is, however, yet to fully mature, and there remain many unanswered research questions particular to the modality. The relative expense and specialty of 3D acquisition devices also means that the availability of databases of 3D face imagery lags significantly behind that of standard 2D face images. Human recognition of faces is rooted in an inherently 2D visual system, and much is known regarding the use of 2D image information in the recognition of individuals. The corresponding knowledge of how discriminative information is distributed in the 3D modality is much less well defined. This dissertation addresses these issues through the use of decompositional techniques. Decomposition alleviates the problems associated with dimensionality explosion and the Small Sample Size (SSS) problem, and spatial decomposition is a technique which has been widely used in face recognition. The application of decomposition in the frequency domain, however, has not received the same attention in the literature. The use of decomposition techniques allows a mapping of the regions (both spatial and frequency) which contain the discriminative information that enables recognition. In this dissertation these techniques are covered in significant detail, both in terms of practical issues in the respective domains and in terms of the underlying distributions which they expose. Significant discussion is given to the manner in which the inherent information of the human face is manifested in the 2D and 3D domains and how these two modalities inter-relate. The investigation is extended to cover the manner in which the decomposition techniques presented can be recombined into a single decision. Two new methods for learning the weighting functions for both the sum and product rules are presented, along with extensive testing against established methods. Knowledge acquired from these examinations is then used to create a combined technique termed Log-Gabor Templates. The proposed technique utilises both the spatial and frequency domains to extract superior performance to either in isolation. Experimentation demonstrates that the spatial and frequency domain decompositions are complementary and can be combined to give improved performance and robustness.
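
To make the frequency-domain decomposition concrete, the sketch below builds a radial log-Gabor transfer function and applies it to an image through the FFT; the filter form is the standard log-Gabor definition, while the centre frequency, bandwidth ratio, and the synthetic input are illustrative assumptions rather than the thesis's actual pipeline.

# Hedged sketch: radial log-Gabor filtering of a 2D image in the frequency domain.
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.65):
    """Radial log-Gabor transfer function G(f) = exp(-(ln(f/f0))^2 / (2 ln(sigma_ratio)^2))."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    g = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0                           # log-Gabor has no DC component
    return g

def filter_image(img, f0=0.1):
    """Return the complex log-Gabor response of a 2D image."""
    G = log_gabor_radial(img.shape, f0=f0)
    return np.fft.ifft2(np.fft.fft2(img) * G)

# Example on a synthetic range-image-like array (a smooth bump):
y, x = np.mgrid[-64:64, -64:64]
img = np.exp(-(x ** 2 + y ** 2) / (2 * 30.0 ** 2))
response = filter_image(img, f0=0.05)
print(response.shape, np.abs(response).max())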
