141

Video stabilization and rectification for handheld cameras

Jia, Chao 26 June 2014 (has links)
Video data has increased dramatically in recent years due to the prevalence of handheld cameras. Such videos, however, are usually shakier compared to videos shot by tripod-mounted cameras or cameras with mechanical stabilizers. In addition, most handheld cameras use CMOS sensors. In a CMOS sensor camera, different rows in a frame are read/reset sequentially from top to bottom. When there is fast relative motion between the scene and the video camera, a frame can be distorted because each row was captured under a different 3D-to-2D projection. This kind of distortion is known as the rolling shutter effect. Digital video stabilization and rolling shutter rectification seek to remove the unwanted frame-to-frame jitter and rolling shutter effect in order to generate visually stable and pleasant videos. In general, we need to (1) estimate the camera motion, (2) regenerate the camera motion, and (3) synthesize new frames. This dissertation aims at improving the first two steps of video stabilization and rolling shutter rectification. It has been shown that the inertial sensors in handheld devices can provide more accurate and robust motion estimation than vision-based methods. This dissertation proposes an online camera-gyroscope calibration method for sensor fusion while a user is capturing video. The proposed method uses an implicit extended Kalman filter and is based on multiple-view geometry in a rolling shutter camera model. It is able to estimate the needed calibration parameters online under all kinds of camera motion. Given the camera motion estimated from inertial sensors after the proposed calibration method, this dissertation first proposes an offline motion smoothing algorithm based on a 3D rotational camera motion model. The offline motion smoothing is formulated as a geodesic-convex regression problem on the manifold of rotation matrix sequences. The formulated problem is solved by an efficient two-metric projection algorithm on the manifold. The geodesic-distance-based smoothness metric better exploits the manifold structure of sequences of rotation matrices. Then this dissertation proposes two online motion smoothing algorithms that are also based on a 3D rotational camera motion model. The first algorithm extends IIR filtering from Euclidean space to the nonlinear manifold of 3D rotation matrices. The second algorithm uses unscented Kalman filtering on a constant angular velocity model. Both offline and online motion smoothing algorithms are constrained to guarantee that no black borders intrude into the stabilized frames.
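The IIR-on-SO(3) idea in the online smoothing step can be sketched compactly. The snippet below is a minimal illustration, not the dissertation's code: it assumes per-frame orientations as scipy `Rotation` objects and a hypothetical gain `alpha`, and it omits the black-border constraint the dissertation enforces.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def iir_smooth_rotations(raw, alpha=0.1):
    """First-order IIR low-pass filter extended from Euclidean space to the
    manifold of 3D rotations: each step moves the smoothed orientation a
    fraction alpha along the geodesic toward the new measurement, using the
    log map (rotation vector) and exp map instead of a Euclidean average.

    raw   : list of scipy Rotation objects (camera orientation per frame)
    alpha : smoothing gain in (0, 1]; smaller means smoother but more lag
    """
    smoothed = [raw[0]]
    for r in raw[1:]:
        prev = smoothed[-1]
        # relative rotation from the current estimate to the measurement,
        # expressed in the tangent space via the log map
        delta = (prev.inv() * r).as_rotvec()
        # step a fraction of the way along the geodesic (exp map)
        smoothed.append(prev * Rotation.from_rotvec(alpha * delta))
    return smoothed
```

A Euclidean IIR filter would blend the matrices entry-wise and drift off the rotation group; stepping in the tangent space keeps every smoothed estimate a valid rotation.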
142

Topology optimization for additive manufacturing of customized meso-structures using homogenization and parametric smoothing functions

Sundararajan, Vikram Gopalakrishnan 16 February 2011 (has links)
Topology optimization tools are useful for distributing material in a geometric domain to match targets for mass, displacement, structural stiffness, and other characteristics as closely as possible. They are especially applicable to additive manufacturing, which provides nearly unlimited freedom for customizing the internal and external architecture of a part. Existing topology optimization tools, however, do not take full advantage of the capabilities of additive manufacturing. Prominent tools use micro- or meso-scale voids or artificial materials to parameterize the topology optimization problem, but they use filters, penalization functions, and other schemes to force convergence to regions of fully dense (solid) material and fully void (open) space in the final structure as a means of accommodating conventional manufacturing processes. Since additive manufacturing processes are capable of fabricating intermediate densities (e.g., via porous mesostructures), significant performance advantages could be achieved by preserving and exploiting those features during the topology optimization process. Towards this goal, a topology optimization tool has been created by combining homogenization with parametric smoothing functions. Rectangular mesoscale voids are used to represent the material topology, and homogenization is used to compute the effective properties of the resulting structure. B-spline-based parametric smoothing functions control the size of the voids throughout the design domain, thereby smoothing the topology and reducing the number of required design variables relative to homogenization-based approaches. Resulting designs are fabricated with selective laser sintering technology, and their geometric and elastic properties are evaluated experimentally.
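To make the role of the parametric smoothing functions concrete, here is a small sketch under assumed names and dimensions (the `RectBivariateSpline` helper and the [0, 1] void-size convention are illustrative choices, not the thesis implementation): a coarse grid of B-spline control values generates a smooth field of void sizes over the whole design domain, so the optimizer handles far fewer design variables than one density per element.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def void_size_field(ctrl_vals, nx=60, ny=20):
    """Map a coarse grid of B-spline control values to a smooth field of
    rectangular-void sizes over the design domain. The optimizer's design
    variables are the control values, not the per-element densities.

    ctrl_vals : (m, n) array of control values in [0, 1]
                (0 = fully solid material, 1 = fully open void)
    """
    m, n = ctrl_vals.shape
    xs = np.linspace(0.0, 1.0, m)
    ys = np.linspace(0.0, 1.0, n)
    # cubic (or lower-order, for tiny control grids) B-spline surface
    spline = RectBivariateSpline(xs, ys, ctrl_vals,
                                 kx=min(3, m - 1), ky=min(3, n - 1))
    gx = np.linspace(0.0, 1.0, nx)
    gy = np.linspace(0.0, 1.0, ny)
    field = spline(gx, gy)             # (nx, ny) smooth void-size field
    return np.clip(field, 0.0, 1.0)    # keep void sizes physically meaningful
```

The homogenization step, which maps each local void size to effective elastic constants for analysis, is not shown here.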
143

Next Generation Computer Controlled Optical Surfacing

Kim, Dae Wook January 2009 (has links)
Precision optics can be accurately fabricated by computer controlled optical surfacing (CCOS), which uses well-characterized polishing tools driven by numerically controlled machines. The CCOS process is optimized to vary the dwell time of the tool on the workpiece according to the desired removal and the calibrated tool influence function (TIF), which is the shape of the wear function produced by the tool. This study investigates four major topics to improve current CCOS processes and provides new solutions and approaches for the next generation of CCOS processes.

The first topic is to develop a tool for highly aspheric optics fabrication. Both the TIF stability and the surface finish rely on the tool maintaining intimate contact with the workpiece. Rigid tools smooth the surface but do not maintain intimate contact for aspheric surfaces. Flexible tools conform to the surface but lack smoothing. A rigid conformal (RC) lap using a visco-elastic non-Newtonian medium was developed. It conforms to the aspheric shape, yet maintains stability to provide natural smoothing.

The second topic is a smoothing model for the RC lap. Smoothing naturally removes mid-to-high frequency errors while a large tool runs over the workpiece to remove low frequency errors efficiently. The CCOS process convergence rate can be significantly improved by predicting the smoothing effects. A parametric smoothing model was introduced and verified.

The third topic is establishing a TIF model to represent measured TIFs. While the linear Preston's model works for most cases, the non-linear removal behavior as the tool overhangs the workpiece edge introduces a difficulty in modeling. A parametric model for the edge TIFs was introduced and demonstrated, and various TIFs based on the model are provided as a library.

The last topic is an enhanced process optimization technique: a non-sequential optimization technique using multiple TIFs. Operating a CCOS process with a small, well-characterized TIF achieves excellent performance, but takes a long time. Sequential polishing runs using large and small tools can reduce this polishing time. The non-sequential approach performs the dwell time optimizations for all of the CCOS runs simultaneously; the actual runs remain sequential, but the optimization is comprehensive.
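The core of the dwell-time optimization is a deconvolution: the removal map equals the TIF convolved with the dwell-time map, and dwell times must be non-negative. Below is a toy version of that single step, assuming a small grid and using an explicit matrix with non-negative least squares; the actual CCOS optimizer, including the non-sequential multi-TIF scheme, is more elaborate.

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.optimize import nnls

def solve_dwell_time(target_removal, tif):
    """Solve removal = TIF (*) dwell for a non-negative dwell-time map.

    Builds the convolution operator column by column and solves with
    NNLS; O(n^2) memory, so this is only practical for small grids.
    """
    h, w = target_removal.shape
    n = h * w
    A = np.zeros((n, n))
    for j in range(n):
        unit = np.zeros(n)
        unit[j] = 1.0
        # column j = removal produced by unit dwell time at cell j
        A[:, j] = convolve2d(unit.reshape(h, w), tif, mode="same").ravel()
    dwell, _ = nnls(A, target_removal.ravel())
    return dwell.reshape(h, w)
```

A non-sequential technique in the spirit described above would stack several such operators, one per tool/TIF, and optimize all dwell-time maps in a single problem.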
144

Stochastic claims reserving in non-life insurance : Bootstrap and smoothing models

Björkwall, Susanna January 2011 (has links)
In practice there is a long tradition of actuaries calculating reserve estimates according to deterministic methods without explicit reference to a stochastic model. For instance, the chain-ladder was originally a deterministic reserving method. Moreover, actuaries often make ad hoc adjustments to these methods, for example smoothing the chain-ladder development factors, in order to fit the data set under analysis. However, stochastic models are needed in order to assess the variability of the claims reserve. The standard statistical approach would be to first specify a model, then find an estimate of the outstanding claims under that model, typically by maximum likelihood, and finally use the model to assess the precision of the estimate. As a compromise between this approach and the actuary's way of working without reference to a model, the objective in this research area has often been to first construct a model and a method that reproduce the actuary's estimate, and then use this model to assess the uncertainty of the estimate. A drawback of this approach is that the suggested models have been constructed to give a measure of the precision of the reserve estimate without the possibility of changing the estimate itself. The starting point of this thesis is the inconsistency between the deterministic approaches used in practice and the stochastic ones suggested in the literature. On the one hand, the purpose of Paper I is to develop a bootstrap technique which easily enables the actuary to use development factor methods other than the pure chain-ladder while relying on as few model assumptions as possible. This bootstrap technique is then extended and applied to the separation method in Paper II. On the other hand, the purpose of Paper III is to create a stochastic framework which imitates the ad hoc deterministic smoothing of chain-ladder development factors that is frequently used in practice.
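For readers unfamiliar with the deterministic starting point, here is a compact sketch of the pure chain-ladder reserve estimate that the papers build on (the array layout and `np.nan` masking are illustrative conventions; the bootstrap in Papers I-II resamples residuals and recomputes exactly this kind of estimate many times):

```python
import numpy as np

def chain_ladder_factors(triangle):
    """Development factors from a cumulative run-off triangle.

    triangle : (n, n) array of cumulative claims, with np.nan in the
               not-yet-observed cells below the anti-diagonal.
    """
    n = triangle.shape[1]
    factors = []
    for j in range(n - 1):
        seen = ~np.isnan(triangle[:, j + 1])   # rows observed at period j+1
        factors.append(triangle[seen, j + 1].sum() / triangle[seen, j].sum())
    return np.array(factors)

def chain_ladder_reserve(triangle):
    """Project each accident year to ultimate and sum the outstanding part."""
    n = triangle.shape[0]
    f = chain_ladder_factors(triangle)
    reserve = 0.0
    for i in range(n):
        latest = n - 1 - i                     # last observed development period
        ultimate = triangle[i, latest] * np.prod(f[latest:])
        reserve += ultimate - triangle[i, latest]
    return reserve
```

The ad hoc practice Paper III formalizes would replace the raw factors above with smoothed ones (e.g., a curve fitted across development periods) before projecting.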
145

ECG Noise Filtering Using Online Model-Based Bayesian Filtering Techniques

Su, Aron Wei-Hsiang January 2013 (has links)
The electrocardiogram (ECG) is a time-varying electrical signal that records the electrical activity of the heart. It is obtained by a non-invasive surface-electrode technique used widely in hospitals. There are many clinical contexts in which ECGs are used, such as medical diagnosis, physiological therapy and arrhythmia monitoring. In medical diagnosis, medical conditions are interpreted by examining information and features in ECGs. Physiological therapy involves the control of some aspect of the physiological effort of a patient, such as the use of a pacemaker to regulate the beating of the heart. Arrhythmia monitoring involves observing and detecting life-threatening conditions, such as myocardial infarction or heart attacks, in a patient. ECG signals are usually corrupted with various types of unwanted interference, such as muscle (electromyogram, EMG) artifacts, electrode artifacts, power line noise and respiration interference, and are distorted in such a way that it can be difficult to perform medical diagnosis, physiological therapy or arrhythmia monitoring. Consequently, signal processing of ECGs is required to remove noise and interference for successful clinical applications. Existing signal processing techniques can remove some of the noise in an ECG signal, but are typically inadequate for extracting weak ECG components contaminated with background noise and for retaining various subtle features of the ECG. For example, EMG noise usually overlaps the fundamental ECG cardiac components in the frequency domain, in the range of 0.01 Hz to 100 Hz; simple filters are inadequate to remove noise which overlaps with the ECG cardiac components. Sameni et al. have proposed a Bayesian filtering framework to resolve these problems, and it gives results which are clearly superior to those obtained by applying conventional signal processing methods to the ECG. However, a drawback of this Bayesian filtering framework is that it must run offline, which is not desirable for clinical applications such as arrhythmia monitoring and physiological therapy, both of which require online operation in near real-time. To resolve this problem, in this thesis we propose a dynamical model which permits the Bayesian filtering framework to function online. The framework with the proposed dynamical model has less than 4% loss in performance compared to the previous (offline) version of the framework. The proposed dynamical model is based on the theory of fixed-lag smoothing.
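The state-augmentation trick behind fixed-lag smoothing can be shown on a toy linear model (this is an illustrative scalar AR(1) example, not the ECG dynamical model of the thesis): by stacking the last few states into the state vector, an ordinary online Kalman filter delivers smoothed estimates with a fixed, bounded delay.

```python
import numpy as np

def fixed_lag_kalman(y, a, q, r, lag=5):
    """Fixed-lag smoother for x_k = a*x_{k-1} + w,  y_k = x_k + v.

    The augmented state holds [x_k, x_{k-1}, ..., x_{k-lag}], so the last
    component of the filtered estimate is the smoothed x_{k-lag}.
    """
    d = lag + 1
    F = np.zeros((d, d))
    F[0, 0] = a                     # dynamics of the newest state
    F[1:, :-1] = np.eye(lag)        # shift register of past states
    H = np.zeros((1, d)); H[0, 0] = 1.0
    Q = np.zeros((d, d)); Q[0, 0] = q
    x, P = np.zeros(d), np.eye(d)
    out = []
    for yk in y:
        x, P = F @ x, F @ P @ F.T + Q          # predict
        S = H @ P @ H.T + r                    # innovation variance
        K = (P @ H.T) / S                      # Kalman gain, shape (d, 1)
        x = x + (K * (yk - H @ x)).ravel()     # update
        P = P - K @ H @ P
        out.append(x[-1])                      # smoothed estimate of x_{k-lag}
    return np.array(out)
```

The output at step k estimates the state at step k - lag, which is the bounded near-real-time delay the abstract refers to.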
146

Faktorgraph-basierte Sensordatenfusion zur Anwendung auf einem Quadrocopter / Factor Graph Based Sensor Fusion for a Quadrotor UAV

Lange, Sven 13 December 2013 (has links) (PDF)
Sensor data fusion is a ubiquitous task in mobile robotics and beyond. This thesis questions the approach typically used for sensor data fusion in robotics and solves the problem with novel algorithms based on a factor graph, comparing them against a corresponding extended Kalman filter implementation. The focus is the technical and algorithmic sensor concept for navigating a flying robot indoors. Extensive experiments show the gain in quality obtained with the new variant of sensor data fusion, but also its limitations, along with examples in which both variants of sensor data fusion yield nearly identical results. Besides experiments based on a hardware-oriented simulation, the approach is also evaluated on real hardware data.
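To illustrate the contrast with an EKF in miniature, here is a hypothetical 1D analogue of factor-graph fusion (the names and noise scales are invented for the example): all odometry and measurement factors are kept and solved jointly as one nonlinear least-squares problem, rather than marginalizing past states away at each step.

```python
import numpy as np
from scipy.optimize import least_squares

def fuse_factor_graph(odom, meas, s_odom=0.1, s_meas=0.5):
    """Estimate a 1D trajectory x_0..x_N from relative odometry and
    absolute position measurements by stacking all factors into one
    weighted least-squares problem -- the optimization a factor graph
    encodes.

    odom : length-N array of measured increments x_{k+1} - x_k
    meas : length-(N+1) array of noisy absolute positions
    """
    def residuals(x):
        r_odom = (np.diff(x) - odom) / s_odom   # one factor per odometry edge
        r_meas = (x - meas) / s_meas            # one factor per measurement
        return np.concatenate([r_odom, r_meas])

    x0 = np.asarray(meas, dtype=float)          # initialize from measurements
    return least_squares(residuals, x0).x
```

Because every past state stays in the problem, later measurements can correct earlier ones, which is one source of the quality gain reported above; the price is an optimization problem that grows with the trajectory.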
147

Robust techniques for regression models with minimal assumptions / M.M. van der Westhuizen

Van der Westhuizen, Magdelena Marianna January 2011 (has links)
Good quality management decisions often rely on the evaluation and interpretation of data. One of the most popular ways to investigate possible relationships in a given data set is to fit models to the data. Regression models are often employed to assist with decision making, and they can also be used for the optimization and prediction of data. The success of a regression model, however, relies heavily on assumptions made by the model builder. In addition, the model may be influenced by the presence of outliers; a more robust model, which is not as easily affected by outliers, is necessary for making more accurate interpretations of the data. In this research study, robust techniques for regression models with minimal assumptions are explored. Mathematical programming techniques such as linear programming, mixed integer linear programming, and piecewise linear regression are used to formulate a nonlinear regression model. Outlier detection and smoothing techniques are included to address the robustness of the model and to improve predictive accuracy. The performance of the model is tested by applying it to a variety of data sets and comparing the results to those of other models. The results of the empirical experiments are also presented in this study. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
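As one concrete instance of the linear-programming ingredient, a least-absolute-deviations fit can be posed as an LP; this sketch (the variable layout and scipy solver are illustrative choices, not the thesis formulation) shows why such fits resist outliers better than least squares.

```python
import numpy as np
from scipy.optimize import linprog

def lad_regression(X, y):
    """Least-absolute-deviations regression via linear programming.

    Minimize sum_i e_i subject to |y_i - X_i @ b| <= e_i, with
    decision variables [b (p coefficients), e (n residual bounds)].
    Absolute errors grow only linearly, so single outliers cannot
    dominate the fit the way squared errors allow.
    """
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])        # minimize sum of e
    # encode  X b - e <= y   and   -X b - e <= -y
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n        # b free, e >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p]
```

Mixed integer variants of the same formulation can additionally select breakpoints for piecewise linear fits or flag suspected outliers.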
149

Forecasting and inventory control for hospital management

Crowe, Walter Ramsey January 1977 (has links)
Economic stringencies have compelled Canadian hospitals to examine their administrative effectiveness critically. Improved supplies and inventory procedures adopted by leading industrial corporations suggest that hospitals might benefit from such systems. Lack of the profit incentive, and the high ratio of wages to total expenses in hospitals, have delayed the adoption of modern inventory management techniques. This study examined the economic status of Canadian hospitals and endeavoured to discover whether a computer-based inventory management system, incorporating short-term statistical demand forecasting, would be feasible and advantageous. Scientific forecasting for inventory management is not used by hospitals. The writer considered which technique would be most suited to their needs, taking account of benefits claimed by industrial users. Samples of demand data were subjected to a variety of simple forecasting methods, including moving averages, exponentially smoothed averages and the Box-Jenkins method. Comparisons were made in terms of the relative size of forecast errors, ease of data maintenance, and demands upon hospital clerical staff. The computer system BRUFICH facilitated scrutiny of the effect of each technique upon the major components of the system. It is concluded that either of two methods would be appropriate: moving averages or double exponential smoothing. The latter, when combined with adaptive control through tracking signals, is easily incorporated within the total inventory system. It requires only a short run of data, tracks trend satisfactorily, and demands little operator intervention. The original system designed by this writer was adopted by the Hospital for Sick Children, Toronto, and has significantly improved their inventory management.
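Double exponential smoothing with a tracking signal, the combination recommended above, fits in a few lines; this is a generic textbook sketch (the smoothing constants and Trigg-style signal are illustrative, not the BRUFICH parameters):

```python
def holt_forecast(demand, alpha=0.2, beta=0.1):
    """Double exponential smoothing (level + trend) with a tracking signal.

    Returns the one-step-ahead forecast and the history of the tracking
    signal (smoothed error / smoothed absolute error); values near +/-1
    indicate a persistently biased forecast that needs intervention.
    """
    level, trend = float(demand[0]), 0.0
    sm_err, sm_abs = 0.0, 1e-9
    signals = []
    for d in demand[1:]:
        err = d - (level + trend)                          # forecast error
        new_level = alpha * d + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
        sm_err = alpha * err + (1 - alpha) * sm_err        # smoothed error
        sm_abs = alpha * abs(err) + (1 - alpha) * sm_abs   # smoothed |error|
        signals.append(sm_err / sm_abs)                    # tracking signal
    return level + trend, signals
```

Only the level, trend and two error accumulators need storing per item, which is why the method demands so little data and operator intervention.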
150

Ensuring Safe Exploitation of Wind Turbine Kinetic Energy : An Invariance Kernel Formulation

Rawn, Barry Gordon 21 April 2010 (has links)
This thesis investigates the computation of invariance kernels for planar nonlinear systems with one input, with application to wind turbine stability. Given a known bound on the absolute value of the input variations (possibly around a fixed non-zero value), it is of interest to determine whether the system's state can be guaranteed to stay within a desired region K of the state space irrespective of the input variations. The collection of all initial conditions for which trajectories never exit K irrespective of input variations is called the invariance kernel. This thesis develops theory to characterize the boundary of the invariance kernel and an algorithm to compute that boundary exactly. The algorithm is applied to two simplified wind turbine systems that tap the kinetic energy of the turbine to support the frequency of the grid: one provides power smoothing, and the other provides inertial response. For these models, limits on speed and torque specify a desired region of operation K in the state space, while the wind is represented as a bounded input. The theory developed in the thesis makes it possible to define a measure called the wind disturbance margin, which quantifies the largest range of wind variations under which the specified type of grid support may be provided. The wind disturbance margin quantifies how the exploitation of kinetic energy reduces a turbine's tolerance to wind disturbances. As an example, the improvement in power smoothing and inertial response made available by the increased speed range of a full converter-interfaced turbine is quantified.
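A brute-force numerical surrogate makes the kernel-membership question concrete (this sketch only samples constant inputs with a forward-Euler integrator, so it merely indicates membership; the thesis computes the exact kernel boundary, which must hold for all admissible time-varying inputs):

```python
import numpy as np

def stays_in_K(f, x0, in_K, u_min, u_max, dt=1e-3, T=20.0, n_inputs=7):
    """Simulate the planar system x' = f(x, u) from x0 under a sample of
    admissible constant inputs and report whether every trajectory stays
    inside the region K (as tested by the predicate in_K).
    """
    for u in np.linspace(u_min, u_max, n_inputs):
        x = np.array(x0, dtype=float)
        for _ in range(int(T / dt)):
            if not in_K(x):
                return False                   # left K under this input
            x = x + dt * np.asarray(f(x, u))   # forward-Euler step
    return True
```

Sweeping such a test over a grid of initial conditions outlines the invariance kernel roughly; the wind disturbance margin then corresponds to the largest input bound for which the operating point remains inside it.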
