71

High speed very thin films with reverse roll coatings : an experimental investigation of reverse roll coating of fluids using rigid and deformable rolls at high speeds

Shibata, Yusuke January 2012 (has links)
The objective of a coating operation is to transfer a defect-free liquid film onto a continuous substrate in order to meet the requirements of the final product. Two main concerns govern the process: the economics of the process and the quality of the coated film. The economics are dictated by the speed of coating and the film thickness; clearly, higher speeds mean better productivity and hence a lower cost of operation, and thinner films are desirable because less material is used. Quality is governed by film uniformity and integrity, indicating that the film will perform as designed. Film defects such as streaks or tiny air bubbles are an indication that the film properties are not uniform, rendering the film unacceptable to customers. One of the most versatile coating systems for achieving thin films at high speeds is reverse roll coating, which has been used for a long time all over the world. At low speed, typically 1 m/s, this coating operation is inherently stable and, with small gaps of the order of 100 microns, can lead to film thicknesses of the order of 30-50 microns. Much research, theoretical and experimental, has been devoted to this coating flow, but only at low speeds and for large gaps (>100 microns). There are no comprehensive data on how very thin films, of 20 microns and less (particularly with lower limits in the region of 5 microns), can be achieved at high speeds of 2 or more metres per second. This study is concerned precisely with this aim: investigating the effect of large speeds and small roller gaps (rollers nearly touching or in elastohydrodynamic contact) to achieve the very thin films desired by modern applications (electronics, medical and others). To achieve this aim, a rig was designed and built to enable the effect of various coating conditions and liquid properties on the metered film thickness and coating instability to be understood. To achieve thin films at high speeds, a small roll gap and a low viscosity are needed; however, flow instabilities develop under these conditions. To achieve a stable coating window at high speeds, a high surface tension is needed. It was found that the roll gap and the viscosity have a complicated effect on the coating window: in the case of a low-viscosity liquid (7 mPa.s), small roll gaps are needed, whereas in the case of a high-viscosity liquid (more than 30 mPa.s), large gaps are needed. It was also found that the Weber number is a better descriptor of the ribbing instability in rigid reverse roll coating, unlike rigid forward roll coating, where the capillary number is the governing parameter. In addition, the potential of reverse deformable roll coating (rolls in elastohydrodynamic contact) was investigated in order to achieve much thinner films at higher speeds. This investigation showed that much thinner stable films can potentially be obtained at much higher speeds than with rigid reverse roll coating. The liquid transfer from an applicator roller to a PET film was also investigated. It was found that air stagnation at the downstream meniscus and air entrainment at the upstream meniscus depend on liquid properties, such as viscosity and surface tension, and on coating conditions, such as web tension and the wrap angle of the web. As a result, wet film instability also depends on liquid properties and coating conditions: air stagnation causes streaks on the wet film and air entrainment causes bubbles. To obtain a stable wet film, a suitable viscosity and a high surface tension are needed.
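
The abstract contrasts the Weber and capillary numbers as descriptors of ribbing instability. The sketch below, purely illustrative, computes both dimensionless groups for conditions similar to those quoted (a 7 mPa.s liquid at 2 m/s); the surface tension, density and characteristic length (roll gap) are assumed values, not figures from the thesis.

```python
# Illustrative only: the surface tension, density and roll gap below are
# assumed values, not taken from the thesis.

def capillary_number(viscosity_pa_s, speed_m_s, surface_tension_n_m):
    """Ca = mu * U / sigma (viscous vs. capillary forces)."""
    return viscosity_pa_s * speed_m_s / surface_tension_n_m

def weber_number(density_kg_m3, speed_m_s, length_m, surface_tension_n_m):
    """We = rho * U^2 * L / sigma (inertial vs. capillary forces)."""
    return density_kg_m3 * speed_m_s**2 * length_m / surface_tension_n_m

mu = 7e-3       # Pa.s   (the low-viscosity case quoted in the abstract)
U = 2.0         # m/s    (the high-speed regime discussed in the abstract)
sigma = 0.05    # N/m    (assumed surface tension)
rho = 1000.0    # kg/m^3 (assumed, water-like liquid)
gap = 50e-6     # m      (assumed roll gap as the characteristic length)

print(f"Ca = {capillary_number(mu, U, sigma):.3f}")
print(f"We = {weber_number(rho, U, gap, sigma):.3f}")
```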
72

Object detection in videos using mixtures of deformable part models obtained from an image set

Castaneda Leon, Leissi Margarita 23 October 2012 (has links)
The problem of detecting objects that belong to a specific class in videos is widely studied because of its potential applications. For videos taken from a stationary camera, applications include security and traffic surveillance; when the video is taken from a dynamic camera, a possible application is driver assistance. The literature presents several approaches that treat each of these cases indiscriminately and consider images obtained from only one type of camera to train the detectors. These approaches can lead to poor performance when the techniques are applied to video sequences from different types of camera. The state of the art in single-class object detection shows a tendency toward the use of histograms and supervised training, and basically follows this structure: construction of the object class model, detection of candidates in the image/frame, and application of a measure to those candidates. Another disadvantage is that some approaches use a separate model for each viewpoint of the object, generating many models and, in some cases, one classifier per viewpoint. In this work, we approach the problem of object detection using a model of the object class created from a dataset of static images, and we then use this model to detect objects in videos (sequences of images) collected from stationary and dynamic cameras, that is, in a setting completely different from the one used for training. The model is created in an off-line learning phase using the PASCAL 2007 image set. It is based on a mixture of deformable part models (MDPM), originally proposed by Felzenszwalb et al. (2010b) for object detection in static images. We do not limit the model to any particular viewpoint. A set of experiments was designed to explore the best number of mixture components and the number of parts in the model. In addition, we performed a comparative study of symmetric and asymmetric MDPMs. We evaluated the method for detecting people and cars in videos obtained by stationary and dynamic cameras. Our results not only show the good performance of the MDPM, with better results than state-of-the-art approaches for object detection in videos obtained from stationary or dynamic cameras, but also indicate the best number of mixture components and parts for the created model. Finally, the results show some differences between symmetric and asymmetric MDPMs in the detection of objects in different videos.
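
As a rough illustration of the detection model described above, the sketch below implements the generic scoring rule of a deformable part model: a root filter response plus, for each part, the best part response minus a quadratic deformation cost. The response maps, anchors and weights are synthetic placeholders, not the trained MDPM from the dissertation.

```python
import numpy as np

# score = root response + sum_i max over displacements (part response - deformation cost) + bias.
# Random maps stand in for HOG filter convolutions; anchors and weights are illustrative.
rng = np.random.default_rng(0)
H, W = 40, 60
root_response = rng.normal(size=(H, W))           # root filter scores per location
part_responses = [rng.normal(size=(H, W)) for _ in range(4)]
anchors = [(2, 3), (-2, 3), (2, -3), (-2, -3)]     # part offsets relative to the root
deform_w = 0.1                                     # quadratic deformation penalty weight
bias = -1.0

def score_hypothesis(y, x, search=4):
    """Score a root placement, letting each part shift within +/- search cells."""
    s = root_response[y, x] + bias
    for resp, (ay, ax) in zip(part_responses, anchors):
        best = -np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                py, px = y + ay + dy, x + ax + dx
                if 0 <= py < H and 0 <= px < W:
                    best = max(best, resp[py, px] - deform_w * (dy * dy + dx * dx))
        s += best
    return s

# Detections are root placements whose score exceeds a threshold.
scores = np.array([[score_hypothesis(y, x) for x in range(W)] for y in range(H)])
print("best score:", scores.max())
```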
73

Vector Flow Model in Video Estimation and Effects of Network Congestion in Low Bit-Rate Compression Standards

Ramadoss, Balaji 16 October 2003 (has links)
The use of digitized information is rapidly gaining acceptance in bio-medical applications. Video compression plays an important role in the archiving and transmission of different digital diagnostic modalities. The present scheme of video compression for low bit-rate networks is not suitable for medical video sequences; the instability is the result of block artifacts arising from block-based DCT coefficient quantization. The possibility of applying deformable motion estimation techniques to make the video compression standard (H.263) more adaptable to bio-medical applications was studied in detail. A study of the network characteristics and the behavior of various congestion control mechanisms was used to analyze the complete characteristics of existing low bit-rate video compression algorithms. The study was conducted in three phases. The first phase involved the implementation and study of the present H.263 compression standard and its limitations. The second phase dealt with the analysis of an external force for active contours, which was used to obtain estimates for deformable objects. The external force, termed Gradient Vector Flow (GVF), was computed as a diffusion of the gradient vectors associated with a gray-level or binary edge map derived from the image. The mathematical aspects of a multi-scale framework, based on a medial representation for the segmentation and shape characterization of anatomical objects in medical imagery, were derived in detail. The medial representations were based on a hierarchical representation of linked figural models such as protrusions, indentations, neighboring figures and included figures, which represented solid regions and their boundaries. The third phase dealt with the parameters vital for effective video streaming over the Internet, in particular the bottleneck bandwidth, which gives the upper limit for the speed of data delivery from one end point to the other in a network. If a codec attempts to send data beyond this limit, all packets above the limit will be lost; on the other hand, sending under this limit will clearly result in suboptimal video quality. During this phase, the packet-drop-rate (PDR) performance of TCP(1/2) was investigated in conjunction with a few representative TCP-friendly congestion control protocols (CCP). The CCPs were TCP(1/256), SQRT(1/256) and TFRC(256), with and without self-clocking. The CCPs were studied when subjected to an abrupt reduction in the available bandwidth. Additionally, the investigation studied the effect on the drop rates of TCP-compatible algorithms of changing the queuing scheme from Random Early Detection (RED) to DropTail.
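
The Gradient Vector Flow field mentioned above has a standard iterative form (Xu and Prince): the gradient of the edge map is diffused while being anchored to strong edges. The sketch below is a minimal, illustrative implementation on a synthetic edge map; the parameters (mu, time step, iteration count) are assumed values, not those used in the thesis.

```python
import numpy as np

def laplacian(a):
    """5-point finite-difference Laplacian with replicated borders."""
    p = np.pad(a, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * a

def gvf(edge_map, mu=0.2, iterations=200, dt=0.5):
    """Diffuse the gradient of an edge map into a Gradient Vector Flow field."""
    fy, fx = np.gradient(edge_map)          # gradient of the edge map
    mag2 = fx**2 + fy**2                    # squared gradient magnitude
    u, v = fx.copy(), fy.copy()             # initialise the flow with the gradient
    for _ in range(iterations):
        u += dt * (mu * laplacian(u) - (u - fx) * mag2)
        v += dt * (mu * laplacian(v) - (v - fy) * mag2)
    return u, v

# Synthetic edge map: a bright ring standing in for an object boundary.
yy, xx = np.mgrid[:64, :64]
edge = np.exp(-((np.hypot(yy - 32, xx - 32) - 20) ** 2) / 4.0)
u, v = gvf(edge)
print(u.shape, v.shape)
```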
74

Efficient implementation of the Particle Level Set method

Johansson, John January 2010 (has links)
The particle level set method is a successful extension of level set methods that improves volume preservation in fluid simulations. This thesis analyzes how sparse volume data structures can be used to store both the signed distance function and the particles in order to improve access speed and memory efficiency. The resulting particle level set implementation is evaluated against Digital Domain's current particle level set implementation. Different degrees of quantization are used to implement particle representations with varying accuracy; these particles are tested, and both visual results and error measurements are presented. The sparse volume data structures DB-Grid and Field3D are evaluated in terms of speed and memory efficiency.
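
To illustrate the kind of sparse volume storage discussed above, here is a minimal blocked-grid sketch in which only blocks touched by narrow-band data are allocated and everything else falls back to a background signed-distance value. It is an assumed, generic structure for illustration, not DB-Grid, Field3D or Digital Domain's implementation.

```python
import numpy as np

class SparseBlockGrid:
    """Blocks are allocated lazily; unset voxels return a background value."""

    def __init__(self, block_size=8, background=1.0):
        self.block_size = block_size
        self.background = background
        self.blocks = {}  # (bi, bj, bk) -> dense block of shape (B, B, B)

    def _locate(self, i, j, k):
        b = self.block_size
        return (i // b, j // b, k // b), (i % b, j % b, k % b)

    def set(self, i, j, k, value):
        key, local = self._locate(i, j, k)
        block = self.blocks.setdefault(
            key, np.full((self.block_size,) * 3, self.background, dtype=np.float32)
        )
        block[local] = value

    def get(self, i, j, k):
        key, local = self._locate(i, j, k)
        block = self.blocks.get(key)
        return self.background if block is None else float(block[local])

grid = SparseBlockGrid()
grid.set(3, 4, 5, -0.25)          # narrow-band signed distance sample
print(grid.get(3, 4, 5), grid.get(100, 100, 100), len(grid.blocks))
```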
75

Model-Based Matching by Linear Combinations of Prototypes

Jones, Michael J., Poggio, Tomaso 01 December 1996 (has links)
We describe a method for modeling object classes (such as faces) using 2D example images and an algorithm for matching a model to a novel image. The object class models are "learned" from example images that we call prototypes. In addition to the images, the pixelwise correspondences between a reference prototype and each of the other prototypes must also be provided. Thus a model consists of a linear combination of prototypical shapes and textures. A stochastic gradient descent algorithm is used to match a model to a novel image by minimizing the error between the model and the novel image. Example models are shown as well as example matches to novel images. The robustness of the matching algorithm is also evaluated. The technique can be used for a number of applications including the computation of correspondence between novel images of a certain known class, object recognition, image synthesis and image compression.
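
As a rough sketch of the matching idea above, the snippet below fits the coefficients of a linear combination of prototypes to a novel image by gradient descent on the reconstruction error. The shape side of the model (pixelwise correspondences and warping) is omitted for brevity, and the prototypes and novel image are synthetic placeholders.

```python
import numpy as np

# Approximate a novel image as a weighted sum of prototype images; the weights
# are found by gradient descent on the squared reconstruction error.
rng = np.random.default_rng(1)
prototypes = rng.normal(size=(5, 32 * 32))        # 5 flattened prototype images
true_c = np.array([0.5, -0.2, 0.1, 0.7, 0.0])
novel = true_c @ prototypes + 0.01 * rng.normal(size=32 * 32)

c = np.zeros(5)                                   # model coefficients
lr = 1e-4
for step in range(2000):
    residual = c @ prototypes - novel             # model minus novel image
    grad = 2.0 * prototypes @ residual            # gradient of squared error w.r.t. c
    c -= lr * grad

print("recovered coefficients:", np.round(c, 2))
```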
76

Spectral/hp Finite Element Models for Fluids and Structures

Payette, Gregory May 2012 (has links)
We consider the application of high-order spectral/hp finite element technology to the numerical solution of boundary-value problems arising in the fields of fluid and solid mechanics. For many problems in these areas, high-order finite element procedures offer many theoretical and practical computational advantages over the low-order finite element technologies that have come to dominate much of the academic research and commercial software of the last several decades. Most notably, we may avoid various forms of locking which, without suitable stabilization, often plague low-order least-squares finite element models of incompressible viscous fluids as well as weak-form Galerkin finite element models of elastic and inelastic structures. The research documented in this dissertation includes applications of spectral/hp finite element technology to an analysis of the roles played by the linearization and minimization operators in least-squares finite element models of nonlinear boundary value problems, a novel least-squares finite element model of the incompressible Navier-Stokes equations with improved local mass conservation, weak-form Galerkin finite element models of viscoelastic beams and a high-order seven parameter continuum shell element for the numerical simulation of the fully geometrically nonlinear mechanical response of isotropic, laminated composite and functionally graded elastic shell structures. In addition, we also present a simple and efficient sparse global finite element coefficient matrix assembly operator that may be readily parallelized for use on shared memory systems. We demonstrate, through the numerical simulation of carefully chosen benchmark problems, that the finite element formulations proposed in this study are efficient, reliable and insensitive to all forms of numerical locking and element geometric distortions.
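
The dissertation mentions a simple sparse global coefficient-matrix assembly operator. The sketch below shows the generic idea under assumed inputs: element matrices are scattered into COO triplets and summed into one sparse global matrix. It is not the dissertation's actual operator, and the element stiffness used here is a placeholder.

```python
import numpy as np
import scipy.sparse as sp

def assemble(element_matrices, connectivity, n_dofs):
    """Scatter (m, m) element matrices into a sparse global matrix via COO triplets."""
    rows, cols, vals = [], [], []
    for ke, dofs in zip(element_matrices, connectivity):
        dofs = np.asarray(dofs)
        r, c = np.meshgrid(dofs, dofs, indexing="ij")
        rows.append(r.ravel())
        cols.append(c.ravel())
        vals.append(np.asarray(ke).ravel())
    triplets = (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols)))
    return sp.coo_matrix(triplets, shape=(n_dofs, n_dofs)).tocsr()  # duplicates are summed

# Two 3-node 1D elements sharing one end node (5 global DOFs); ke is a placeholder.
ke = np.array([[ 7., -8.,  1.],
               [-8., 16., -8.],
               [ 1., -8.,  7.]]) / 3.0
K = assemble([ke, ke], [[0, 1, 2], [2, 3, 4]], n_dofs=5)
print(K.toarray())
```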
77

Modeling and Control of a Magnetic Fluid Deformable Mirror for Ophthalmic Adaptive Optics Systems

Iqbal, Azhar 13 April 2010 (has links)
Adaptive optics (AO) systems make use of active optical elements, namely wavefront correctors, to improve the resolution of imaging systems by compensating for complex optical aberrations. Recently, magnetic fluid deformable mirrors (MFDM) were proposed as a novel type of wavefront corrector that offers cost and performance advantages over existing wavefront correctors. These mirrors are made by coating the free surface of a magnetic fluid with a thin reflective film of nano-particles. The reflective surface of the mirror can be deformed using a locally applied magnetic field and thus serves as a wavefront corrector. MFDMs have been found particularly suitable for ophthalmic imaging systems, where they can be used to compensate for the complex aberrations in the eye that blur images of its internal structures. However, their practical implementation in clinical devices is hampered by the lack of effective methods to control the shape of their deformable surface. The research work reported in this thesis presents solutions to the surface shape control problem in an MFDM that will make it possible for such devices to become integral components of retinal imaging AO systems. The first major contribution of this research is the development of an accurate analytical model of the dynamics of the mirror surface shape. The model is developed by analytically solving the coupled system of fluid-magnetic equations that govern the dynamics of the surface shape; it is presented in state-space form and can be readily used in the development of surface shape control algorithms. The second major contribution is a novel design of the MFDM. The design change was prompted by the findings of the analytical work undertaken to develop the model mentioned above and is aimed at linearizing the response of the mirror surface. The proposed design also allows for mirror surface deflections many times larger than those provided by conventional MFDM designs. A third contribution of this thesis is the development of control algorithms that allowed the first ever use of an MFDM in a closed-loop adaptive optics system. A decentralized proportional-integral (PI) control algorithm, developed based on the DC model of the wavefront corrector, is presented to deal with static or slowly time-varying aberrations. To improve the stability robustness of the closed-loop AO system, a decentralized robust proportional-integral-derivative (PID) controller is developed using the linear-matrix-inequality (LMI) approach. To compensate for more complex dynamic aberrations, an H-infinity controller is designed using the mixed-sensitivity H-infinity design method. The proposed model, design and control algorithms are experimentally tested and validated.
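
As a toy illustration of the decentralized PI surface-shape control described above, the sketch below runs one PI loop per actuator channel against a stand-in mirror model (a static influence matrix with first-order dynamics). The plant, gains and dimensions are assumed values, not the thesis's fluid-magnetic MFDM model.

```python
import numpy as np

# Each actuator channel integrates its own residual error (decentralized PI).
# The "mirror" below is an assumed placeholder plant, not the MFDM model.
rng = np.random.default_rng(2)
n = 8                                            # actuators / surface samples
B = np.eye(n) + 0.05 * rng.normal(size=(n, n))   # assumed influence matrix
desired = np.sin(np.linspace(0, np.pi, n))       # target surface deflection
surface = np.zeros(n)
integ = np.zeros(n)
kp, ki, dt, tau = 0.4, 2.0, 1e-3, 0.01

for _ in range(5000):
    error = desired - surface                    # residual shape error per channel
    integ += error * dt
    u = kp * error + ki * integ                  # decentralized PI law
    surface += dt / tau * (B @ u - surface)      # first-order surface response

print("max residual error:", np.abs(desired - surface).max())
```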
79

Robust Image Registration for Improved Clinical Efficiency : Using Local Structure Analysis and Model-Based Processing

Forsberg, Daniel January 2013 (has links)
Medical imaging plays an increasingly important role in modern healthcare. In medical imaging, it is often relevant to relate different images to each other, something that can prove challenging, since there rarely exists a pre-defined mapping between the pixels in different images. Hence, there is a need to find such a mapping/transformation, a procedure known as image registration. Over the years, image registration has proved useful in a number of clinical situations. Despite this, the current use of image registration in clinical practice is rather limited, typically restricted to image fusion. The limited use is, to a large extent, caused by excessive computation times, a lack of established validation methods and metrics, and a general skepticism toward the trustworthiness of the estimated transformations in deformable image registration. This thesis aims to overcome some of the issues limiting the use of image registration by proposing a set of technical contributions and two clinical applications targeted at improved clinical efficiency. The contributions are made in the context of a generic framework for non-parametric image registration and use an image registration method known as the Morphon. In image registration, regularization of the estimated transformation forms an integral part of controlling the registration process, and in this thesis two regularizers are proposed and their applicability demonstrated. Although the regularizers are similar in that they rely on local structure analysis, they differ in implementation: one is implemented by applying a set of filter kernels, and the other by solving a global optimization problem. Furthermore, it is proposed to use a set of quadrature filters with parallel scales when estimating the phase difference driving the registration, a proposal that brings both accuracy and robustness to the registration process, as shown on a set of challenging image sequences. Computational complexity, in general, is addressed by porting the employed Morphon algorithm to the GPU, by which a performance improvement of 38-44x is achieved compared to a single-threaded CPU implementation. The suggested clinical applications are based upon the concept of paint on priors, which was formulated in conjunction with the initial presentation of the Morphon and denotes the notion of assigning a model a set of properties (local operators) that guide the registration process. In this thesis, this is taken one step further: properties of a model are assigned to the patient data after registration is completed. Based upon this, an application using the concept of anatomical transfer functions is presented, in which different organs can be visualized with separate transfer functions. This has been implemented for both 2D slice visualization and 3D volume rendering. A second application is proposed, in which landmarks relevant for determining various measures describing the anatomy are transferred to the patient data. In particular, this is applied to idiopathic scoliosis and used to obtain various measures relevant for assessing spinal deformity. In addition, a data analysis scheme is proposed, useful for quantifying the linear dependence between the different measures used to describe spinal deformities.
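
The phase-difference idea mentioned above can be illustrated in one dimension: local phase is taken from the analytic signal (a simplification of the quadrature-filter banks used in the thesis), and the displacement between two signals is estimated as the phase difference divided by the local spatial frequency. The signals and weighting below are synthetic and illustrative only.

```python
import numpy as np
from scipy.signal import hilbert

# Estimate a 1D displacement from local phase differences (a simplified,
# quadrature-like approach; not the thesis's full Morphon registration).
x = np.linspace(0, 20 * np.pi, 2000)
shift = 7                                         # true displacement in samples
fixed = np.sin(x) * np.exp(-((x - 30) ** 2) / 200)
moving = np.roll(fixed, shift)

phase_fixed = np.angle(hilbert(fixed))
phase_moving = np.angle(hilbert(moving))
dphi = np.angle(np.exp(1j * (phase_fixed - phase_moving)))   # wrapped phase difference
local_freq = np.gradient(np.unwrap(phase_fixed))             # radians per sample

# Weight the estimate by local signal amplitude so empty regions contribute little.
w = np.abs(hilbert(fixed))
estimate = np.sum(w * dphi / np.maximum(local_freq, 1e-6)) / np.sum(w)
print(f"estimated displacement: {estimate:.1f} samples (true: {shift})")
```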
80

Segmentation of Torso CT Images

Demirkol, Onur Ali 01 July 2006 (has links)
Medical imaging modalities provide effective information about the anatomy and metabolic activity of tissues and organs in the body; medical imaging technology is therefore a critical component in the diagnosis and treatment of various illnesses. Medical image segmentation plays an important role in converting medical images into anatomically, functionally or surgically identifiable structures, and is used in various applications. In this study, some of the major medical image segmentation methods are examined and applied to 2D CT images of the upper torso for segmentation of the heart, lungs, bones, and muscle and fat tissues. The implemented segmentation methods are thresholding, region growing, watershed transformation, deformable models, and a hybrid method combining watershed transformation and region merging. Moreover, a comparative analysis is performed among these methods to identify the most efficient segmentation method for each tissue and organ in the torso. Some improvements are proposed for increasing the accuracy of several of the image segmentation methods.
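
To make two of the methods named above concrete, the sketch below chains global (Otsu) thresholding with a marker-based watershed transform using scikit-image and SciPy. The input is a synthetic image standing in for a CT slice, and the marker heuristic is an assumed illustration, not the thesis's tuned pipeline for specific organs.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

# Synthetic image: two bright discs on a noisy background, standing in for a CT slice.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[:128, :128]
image = (np.hypot(yy - 40, xx - 40) < 20).astype(float) \
      + (np.hypot(yy - 80, xx - 90) < 25).astype(float) \
      + 0.1 * rng.normal(size=(128, 128))

binary = image > threshold_otsu(image)                    # thresholding step
distance = ndi.distance_transform_edt(binary)             # distance to background
markers, _ = ndi.label(distance > 0.6 * distance.max())   # one marker per object core
labels = watershed(-distance, markers, mask=binary)       # watershed on inverted distance

print("number of segmented regions:", labels.max())
```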
