401

ANALYZING THE GEO-DEPENDENCE OF HUMAN FACE APPEARANCE AND ITS APPLICATIONS

Islam, Mohammad T. 01 January 2016 (has links)
Human faces have been a subject of study in computer science for decades. The rich set of features from human faces has been used to solve various problems in computer vision, including person identification, facial expression analysis, and attribute classification. In this work, I explore the human facial features that depend on geo-location using a data-driven approach. I analyze millions of public domain images to extract geo-dependent human facial features and explore their applications. Using various machine learning and statistical techniques, I show that the geo-dependent features of human faces can be used to solve the image geo-localization task: given an image, predict where it was taken. Deep Convolutional Neural Networks (CNNs) have recently been shown to excel at image classification; I have used CNNs to geo-localize images using the human face as a cue. I also show that the facial features used in image localization can be used to solve other problems, such as ethnicity, gender, and age estimation.
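A minimal sketch of how CNN-based geo-localization from faces could be framed, assuming the globe is discretized into cells and the task is treated as classification; the cell count, input size, and architecture are illustrative placeholders, not the model used in the thesis:

```python
# Hedged sketch: geo-localization as classification over geographic cells,
# with face crops as input. All sizes and the architecture are assumptions.
import torch
import torch.nn as nn

NUM_GEO_CELLS = 64  # hypothetical discretization of the globe into regions

class FaceGeoCNN(nn.Module):
    def __init__(self, num_cells=NUM_GEO_CELLS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_cells)

    def forward(self, x):                     # x: (batch, 3, 64, 64) face crops
        h = self.features(x)
        return self.classifier(h.flatten(1))  # logits over geo-cells

model = FaceGeoCNN()
loss_fn = nn.CrossEntropyLoss()
faces = torch.randn(8, 3, 64, 64)              # placeholder face crops
cells = torch.randint(0, NUM_GEO_CELLS, (8,))  # placeholder ground-truth cells
loss = loss_fn(model(faces), cells)
loss.backward()
```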
402

Analysis and Optimization of Classifier Error Estimator Performance within a Bayesian Modeling Framework

Dalton, Lori Anne May 2012 (has links)
With the advent of high-throughput genomic and proteomic technologies, in conjunction with the difficulty in obtaining even moderately sized samples, small-sample classifier design has become a major issue in the biological and medical communities. Training-data error estimation becomes mandatory, yet none of the popular error estimation techniques have been rigorously designed via statistical inference or optimization. In this investigation, we place classifier error estimation in a framework of minimum mean-square error (MMSE) signal estimation in the presence of uncertainty, where uncertainty is relative to a prior over a family of distributions. This results in a Bayesian approach to error estimation that is optimal and unbiased relative to the model. The prior addresses a trade-off between estimator robustness (modeling assumptions) and accuracy. Closed-form representations for Bayesian error estimators are provided for two important models: discrete classification with Dirichlet priors (the discrete model) and linear classification of Gaussian distributions with fixed, scaled identity or arbitrary covariances and conjugate priors (the Gaussian model). We examine robustness to false modeling assumptions and demonstrate that Bayesian error estimators perform especially well for moderate true errors. The Bayesian modeling framework facilitates both optimization and analysis. It naturally gives rise to a practical expected measure of performance for arbitrary error estimators: the sample-conditioned mean-square error (MSE). Closed-form expressions are provided for both Bayesian models. We examine the consistency of Bayesian error estimation and illustrate a salient application in censored sampling, where sample points are collected one at a time until the conditional MSE reaches a stopping criterion. We address practical considerations for gene-expression microarray data, including the suitability of the Gaussian model, a methodology for calibrating normal-inverse-Wishart priors from unused data, and an approximation method for non-linear classification. We observe superior performance on synthetic high-dimensional data and real data, especially for moderate to high expected true errors and small feature sizes. Finally, arbitrary error estimators may be optimally calibrated assuming a fixed Bayesian model, sample size, classification rule, and error estimation rule. Using a calibration function, computed off-line, that maps error estimates to their optimally calibrated values, error estimates may be calibrated on the fly whenever the assumptions apply.
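As a hedged illustration of the discrete model described above, the following sketch computes a Bayesian MMSE error estimate for a fixed classifier on a discrete feature space with Dirichlet priors. The known class prior, toy counts, and variable names are assumptions for the example; the thesis's full treatment also covers the Gaussian model, the conditional MSE, and calibration:

```python
# Posterior expected true error of a fixed discrete classifier under
# Dirichlet priors: a minimal sketch assuming a known class-0 prior c.
import numpy as np

def bayesian_error_estimate(counts0, counts1, alpha0, alpha1, c, assign0):
    """counts0, counts1 : per-bin sample counts from class 0 and class 1
    alpha0, alpha1      : Dirichlet hyperparameters for each class pmf
    c                   : known prior probability of class 0
    assign0             : boolean mask, True where classifier outputs class 0
    """
    # Dirichlet posterior means of the bin probabilities for each class
    p0 = (alpha0 + counts0) / (alpha0 + counts0).sum()
    p1 = (alpha1 + counts1) / (alpha1 + counts1).sum()
    # Error = class-0 mass classified as 1 + class-1 mass classified as 0
    return c * p0[~assign0].sum() + (1 - c) * p1[assign0].sum()

counts0 = np.array([8, 3, 1, 0])   # toy 4-bin sample counts
counts1 = np.array([1, 2, 4, 6])
alpha = np.ones(4)                 # flat (uniform) Dirichlet prior
assign0 = counts0 >= counts1       # a simple plug-in classifier
print(bayesian_error_estimate(counts0, counts1, alpha, alpha, 0.5, assign0))
```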
403

Power estimation of microprocessors

Sambamurthy, Sriram 13 December 2010 (has links)
The widespread use of microprocessor chips in high-performance applications like graphics simulators and low-power applications like mobile phones, laptops, and medical devices has made power estimation an important step in the manufacture of VLSI chips. It has become necessary to estimate the power consumption not only after the circuits have been laid out, but also during the design of the modules of the microprocessor at higher levels of design abstraction. The design of a microprocessor is complex and is performed at multiple layers of abstraction before it finally gets manufactured. The processor is first conceptually designed using blocks at the system level, and then modeled using a high-level language (C, C++, SystemC). This enables the early development of software applications using these high-level models. The C/C++ model is then translated to a hardware description language (HDL), which typically corresponds to the register transfer level (RT-Level). Once the processor is defined at the RT-Level, it is synthesized into gates and state elements based on user-defined constraints. In this thesis, novel techniques to estimate the power consumed by microprocessor circuits at the gate level and RT-Level of abstraction are presented. At the gate level, the average power consumed by microprocessor circuits is straightforward to estimate, as the implementation is known. However, estimating the maximum or peak instantaneous power consumed by the microprocessor as a whole, when it is executing instructions, is a hard problem due to the high complexity of the state space involved. A hierarchical approach to estimate the peak power using powerful search techniques and formal tools is presented in this thesis. This approach has been extended and applied to solve the problem of estimating the maximum supply drop. Details on this extension and a discussion of promising results are also presented. In addition, this approach has been applied to explore the possibility of minimizing the leakage component of power dissipation when the processor is idle. At the register transfer level, estimating the average power consumed by the circuits of the microprocessor is by itself a challenging problem, because their implementation is unknown at this level of abstraction. The average power consumption directly depends on the implementation, which in turn depends on the performance constraint imposed on the microprocessor. One of the factors affecting the performance of the microprocessor is the speed of operation of its circuits. Considering these factors and dependencies (for making early design decisions at the RT-Level), a methodology that estimates the power vs. delay curves of microprocessor circuits has been developed. This will enable designers to make design decisions for even rudimentary designs without going through the time-consuming process of synthesis.
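For context, a back-of-the-envelope sketch of the "straightforward" gate-level quantity mentioned above: average dynamic power as P = α·C·V²·f summed over gates. The netlist values are placeholders; the thesis's peak-power search and RT-Level power-versus-delay methodology are considerably more involved:

```python
# Hedged sketch: gate-level average dynamic power from switching activity.
# Capacitances and activity factors below are illustrative placeholders.
def average_dynamic_power(gates, vdd, freq_hz):
    """gates: list of (switching_activity, load_capacitance_farads) pairs."""
    return sum(a * c * vdd ** 2 * freq_hz for a, c in gates)

gates = [(0.15, 2e-15), (0.40, 5e-15), (0.08, 1e-15)]  # toy netlist
print(f"{average_dynamic_power(gates, vdd=1.0, freq_hz=2e9) * 1e6:.3f} uW")
```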
404

Filtering Approaches for Inequality Constrained Parameter Estimation

Yang, Xiongtan Unknown Date
No description available.
405

Density Estimation in Kernel Exponential Families: Methods and Their Sensitivities

Zhou, Chenxi January 2022 (has links)
No description available.
406

On Numerical Error Estimation for the Finite-Volume Method with an Application to Computational Fluid Dynamics

Tyson, William Conrad 29 November 2018 (has links)
Computational fluid dynamics (CFD) simulations can provide tremendous insight into complex physical processes and are often faster and more cost-effective to execute than experiments. However, each CFD result inherently contains numerical errors that can significantly degrade the accuracy of a simulation. Discretization error is typically the largest contributor to the overall numerical error in a given simulation. Discretization error can be very difficult to estimate since the generation, transport, and diffusion of these errors is a highly nonlinear function of the computational grid and discretization scheme. As CFD is increasingly used in engineering design and analysis, it is imperative that CFD practitioners be able to accurately quantify discretization errors to minimize risk and improve the performance of engineering systems. In this work, improvements are made to the accuracy and efficiency of existing error estimation techniques. Discretization error is estimated by deriving and solving an error transport equation (ETE) for the local discretization error everywhere in the computational domain. Truncation error is shown to act as the local source for discretization error in numerical solutions. An equivalence between adjoint methods and ETE methods for functional error estimation is presented. This adjoint/ETE equivalence is exploited to efficiently obtain error estimates for multiple output functionals and to extend the higher-order properties of adjoint methods to ETE methods. Higher-order discretization error estimates are obtained when truncation error estimates are sufficiently accurate. Truncation error estimates are demonstrated to deteriorate on grids with a non-smooth variation in grid metrics (e.g., unstructured grids) regardless of how smooth the underlying exact solution may be. The loss of accuracy is shown to stem from noise in the discrete solution on the order of discretization error. When using conventional least-squares reconstruction techniques, this noise is exactly captured and introduces a lower-order error into the truncation error estimate. A novel reconstruction method based on polyharmonic smoothing splines is developed to smoothly reconstruct the discrete solution and improve the accuracy of error estimates. Furthermore, a method for iteratively improving discretization error estimates is devised. Efficiency and robustness considerations are discussed. Results are presented for several inviscid and viscous flow problems. To facilitate the study of discretization error estimation, a new, higher-order finite-volume solver is developed. A detailed description of the code base is provided along with a discussion of best practices for CFD code design. / Ph. D. / Computational fluid dynamics (CFD) is a branch of computational physics at the intersection of fluid mechanics and scientific computing in which the governing equations of fluid motion, such as the Euler and Navier-Stokes equations, are solved numerically on a computer. CFD is utilized in numerous fields including biomedical engineering, meteorology, oceanography, and aerospace engineering. CFD simulations can provide tremendous insight into physical processes and are often preferred over experiments because they can be performed more quickly, are typically more cost-effective, and can provide data in regions where it may be difficult to measure. While CFD can be an extremely powerful tool, CFD simulations are inherently subject to numerical errors. 
These errors, which are generated when the governing equations of fluid motion are solved on a computer, can have a significant impact on the accuracy of a CFD solution. If numerical errors are not accurately quantified, ill-informed decision-making can lead to poor system performance, increased risk of injury, or even system failure. In this work, research efforts are focused on numerical error estimation for the finite-volume method, arguably the most widely used numerical algorithm for solving CFD problems. The error estimation techniques provided herein target discretization error, the largest contributor to the overall numerical error in a given simulation. Discretization error can be very difficult to estimate since these errors are generated, convected, and diffused by the same physical processes embedded in the governing equations. In this work, improvements are made to the accuracy and efficiency of existing discretization error estimation techniques. Results are presented for several inviscid and viscous flow problems. To facilitate the study of these error estimators, a new, higher-order finite-volume solver is developed. A detailed description of the code base is provided along with a discussion of best practices for CFD code design.
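A minimal one-dimensional sketch of the ETE idea summarized above, assuming a model Poisson problem in place of the flow equations: the truncation error acts as the source in a linear system for the discretization error. Here the truncation error is computed from the known exact solution purely for demonstration; estimating it by reconstruction from the discrete solution is where the difficulty (and the thesis's contribution) lies:

```python
# Hedged sketch: A u_h = f, tau = A u* - f  =>  A (u_h - u*) = -tau,
# so solving A e = -tau recovers the discretization error estimate.
import numpy as np

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)[1:-1]        # interior nodes
u_exact = np.sin(np.pi * x)                    # exact solution of -u'' = f
f = np.pi ** 2 * np.sin(np.pi * x)

# Second-order central-difference operator for -u'' with zero Dirichlet BCs
A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h ** 2

u_h = np.linalg.solve(A, f)                    # discrete solution
tau = A @ u_exact - f                          # truncation error (exact here)
e_est = np.linalg.solve(A, -tau)               # ETE: A e = -tau
e_true = u_h - u_exact

print("max true error:         ", np.abs(e_true).max())
print("max ETE error estimate: ", np.abs(e_est).max())  # matches e_true
```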
407

Power System Parameter Estimation for Enhanced Grid Stability Assessment in Systems with Renewable Energy Sources

Schmitt, Andreas Joachim 05 June 2018 (has links)
The modern power grid is a highly complex system; as such, maintaining stable grid operations relies on many factors. Additionally, the increased usage of renewable energy sources significantly complicates matters. Attempts to assess the current stability of the grid make use of several key parameters; however, obtaining these parameters to make an assessment has its own challenges. Due to the limited number of measurements and the unavailability of information, it is often difficult to accurately know the current values of the parameters needed for stability assessment. This work attempts to estimate three of these parameters: the inertia, the topology, and the voltage phasors. Without these parameters, it is not possible to determine the current stability of the grid. Through the use of machine learning, empirical studies, and mathematical optimization, it is possible to estimate these three parameters when previously this was not the case. These three methodologies perform estimation through measurement-based approaches, allowing the parameters to be obtained without prior system knowledge while improving results when system information is known. / Ph. D. / Stable grid operation means that electricity is supplied to all customers at any given time regardless of changes in the system. As the power grid grows and develops, the number of ways in which a grid can lose stability also grows. As a result, the metrics used to determine whether a grid is stable at any given time have grown increasingly complex and rely on significantly more information. The information required to obtain these stability metrics often has key limitations on when and how it can be obtained. The work presented details several methods for obtaining this information in situations where it was previously not possible to do so. The methods are all measurement-based, meaning that no prior knowledge about the grid is required to compute the values.
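As one concrete instance of the measurement-based philosophy above, a minimal sketch of inertia estimation from frequency measurements via the swing equation, (2H/f₀)·d(Δf)/dt = ΔP in per unit. The synthetic frequency trace and simple least-squares fit are illustrative assumptions, not the thesis's machine-learning approach:

```python
# Hedged sketch: estimate the inertia constant H from a noisy frequency
# trace after a known per-unit power imbalance, via the swing equation.
import numpy as np

f0 = 60.0          # nominal frequency, Hz
H_true = 5.0       # true inertia constant, s
dP = -0.1          # per-unit power imbalance (generation loss)

t = np.linspace(0.0, 1.0, 201)
rocof = dP * f0 / (2.0 * H_true)               # df/dt from the swing equation
freq = f0 + rocof * t + 1e-3 * np.random.randn(t.size)  # noisy measurements

dfdt = np.gradient(freq, t)                    # numerical ROCOF estimate
# Least squares: dfdt ≈ (dP * f0 / 2) * (1 / H)  ->  solve for 1/H
inv_H = np.linalg.lstsq(np.full((t.size, 1), dP * f0 / 2.0),
                        dfdt, rcond=None)[0][0]
print("estimated inertia H ≈", 1.0 / inv_H, "s (true:", H_true, ")")
```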
408

The importance of contextual factors on the accuracy of estimates in project management : an emergence of a framework for more realistic estimation process

Lazarski, Adam January 2014 (has links)
Successful projects are characterized by the quality of their planning. Good planning that better takes contextual factors into account allows more accurate estimates to be achieved. As an outcome of this research, a new framework composed of best practices has been developed. This comprises an open platform that project experts and practitioners can work with efficiently, and that researchers can develop further as required. The research investigation commenced in the autumn of 2008 with a pilot study and then proceeded through an inductive research process involving a series of eleven interviews: four with well-recognized experts in the field, four with practitioners, and three group interviews. In addition, a forty-five-day observation was undertaken and combined with other data sources before culminating in the proposal of a new framework for improving the accuracy of estimates. Furthermore, the emerging framework, and a description of the know-how required to apply it, were systematically reviewed over the course of four hundred and twenty-five days of meetings, dedicated for the most part to improving the use of a wide range of specific project management tools and techniques and to deepening understanding of planning and the estimation process associated with it. This approach constituted an ongoing verification of the research’s findings against project management practice and also served as an invaluable resource for the researcher’s professional and practice-oriented development. The results obtained offered fresh insights into the importance of knowledge management in the estimation process, including the “value of not knowing”, the oft-overlooked phenomenon of underestimation and its potential to co-exist with overestimation, and the use of negative buffer management in the critical chain concept to secure project deadlines. The project also highlighted areas of improvement for future research practice that wishes to make use of an inductive approach in order to achieve a socially agreed framework, rather than a theory alone. In addition, improvements were suggested to the various qualitative tools employed in the customized data analysis process.
409

Age estimation in the living : a test of 6 radiographic methods

Hackman, S. Lucina M. R. January 2012 (has links)
There is a growing recognition that methods of age estimation of the living must be rigorously tested to ensure that they are accurate, reliable, and valid for use in forensic and humanitarian age estimation. The necessity for accurate and reliable methods of age estimation is driven by humanitarian, political, and judicial need. Age estimation methods commonly in use today are based on the application of reference standards, known as atlases, which were developed using data collected from children who participated in longitudinal studies in the early to mid-1900s. The standards were originally developed to provide a baseline to which radiographs could be compared in order to assess a child’s stage of skeletal development in relation to their chronological age, a purpose for which they are still utilised in the medical community. These atlases provide a testable link between skeletal age and chronological age, which has been recognised by forensic practitioners who have essentially hijacked this medical capability and applied it to their fields. This has resulted in an increased use of these standards as a method of predicting the chronological age from the skeletal age of a child when the former is unknown. This novel use of the atlases on populations that are ethnically, temporally, and geographically distinct from those whose data informed the design of the standards leaves forensic outcomes vulnerable to challenge in court. This study aims to examine the reliability and accuracy of these standards in relation to a modern population, providing a sound statistical base for their use for forensic purposes. Radiographs were collected from the local hospital from children who had been X-rayed for investigation during attendance at the local A&E department. Four body areas were selected for investigation: the hand-wrist, the elbow, the knee, and the foot-ankle. Tests were undertaken to assess the radiographs using six commonly used methods of age estimation. Further images of the wrist and elbow were collected from children in New Delhi, India. These images were subjected to age estimation utilising the methods described.
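A minimal sketch of the kind of accuracy assessment described above, comparing atlas-based age estimates against known chronological ages to obtain bias and limits of agreement; the data are synthetic placeholders, and the thesis applies six specific radiographic methods to real hospital radiographs:

```python
# Hedged sketch: bias and Bland-Altman-style limits of agreement between
# estimated skeletal age and known chronological age. Data are synthetic.
import numpy as np

chronological = np.array([7.2, 9.5, 11.0, 13.4, 15.1])   # years, known
estimated = np.array([7.8, 9.1, 11.9, 13.0, 16.0])       # atlas-based estimates

diff = estimated - chronological
print("mean bias (years):", diff.mean())
print("SD of differences:", diff.std(ddof=1))
# 95% limits of agreement
print("limits of agreement:", diff.mean() - 1.96 * diff.std(ddof=1),
      "to", diff.mean() + 1.96 * diff.std(ddof=1))
```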
410

Identification passive en acoustique : estimateurs et applications au SHM / Passive estimation in acoustics : estimators and applications to SHM

Vincent, Rémy 08 January 2016 (has links)
The Ward identity is a relationship that enables the identification of a dissipative linear propagation medium, i.e., the estimation of the parameters that characterize it. In the work presented here, this identity is used to derive new observation models for a passive estimation context: the sources that excite the system are not controlled by the user. Estimation and detection theory in this context is studied, and performance analyses are conducted for various estimators. The proposed methods target Structural Health Monitoring (SHM), i.e., tracking the state of health of structures such as buildings and bridges. The approach is developed for the acoustic modality at audible frequencies, which proves complementary to state-of-the-art SHM techniques and gives access, among other things, to structural and geometrical parameters. Various scenarios are illustrated through experimental implementations of the developed algorithms, adapted to the constraints of embedded computation on an autonomous sensor network.
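A minimal sketch in the spirit of the passive estimation described above: recovering structural parameters (here, the resonance frequency and damping ratio of a single-degree-of-freedom stand-in) from the response to an uncontrolled random excitation, using only output measurements. The half-power-bandwidth fit is an illustrative substitute for the Ward-identity-based estimators developed in the thesis:

```python
# Hedged sketch: output-only estimation of a resonance frequency and
# damping ratio from the response of an SDOF system to unobserved noise.
import numpy as np
from scipy import signal

fs = 1000.0                    # sampling rate, Hz
f_n, zeta = 50.0, 0.02         # true natural frequency and damping ratio
w_n = 2 * np.pi * f_n

# SDOF oscillator driven by white noise the "user" never controls
sys = signal.TransferFunction([w_n ** 2], [1.0, 2 * zeta * w_n, w_n ** 2])
t = np.arange(0, 60.0, 1 / fs)
noise = np.random.randn(t.size)
_, response, _ = signal.lsim(sys, noise, t)

# Estimate the resonance from the output PSD alone (no input needed)
freqs, psd = signal.welch(response, fs=fs, nperseg=4096)
peak = int(np.argmax(psd))
near = slice(max(peak - 50, 0), peak + 50)      # window around the peak
band = freqs[near][psd[near] >= psd[peak] / 2]  # half-power bandwidth
print("estimated f_n:", freqs[peak], "Hz (true:", f_n, ")")
print("estimated zeta:", (band.max() - band.min()) / (2 * freqs[peak]),
      "(true:", zeta, ")")
```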
