  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

The Use of Press Archives in the Temporal and Spatial Analysis of Rainfall-Induced Landslides in Tegucigalpa, Honduras, 1980-2005

Garcia-Urquia, Elias January 2015 (has links)
The scarcity of data poses a challenging obstacle for the study of natural disasters, especially in developing countries, where social vulnerability plays as important a role as physical vulnerability.  The work presented in this thesis demonstrates the usefulness of press archives as a data source for the temporal and spatial analysis of landslides in Tegucigalpa, Honduras, between 1980 and 2005.  Over the last four decades, Tegucigalpa has been characterized by disorganized urban growth that has significantly contributed to the destabilization of the city’s slopes.  The first part of the thesis describes the database compilation procedure and addresses the limitations of press-derived data and how these affect the subsequent landslide analyses.  In the second part, the temporal richness offered by press archives allows the establishment of rainfall thresholds for landslide occurrence.  Using the critical rainfall intensity method, the analysis of rainfall thresholds for 7, 15, 30 and 60 antecedent days shows that the number of false alarms increases with the threshold duration.  A new method based on rainfall frequency contour lines is proposed to improve the distinction between days with and without landslides.  This method also makes it possible to identify the landslides that occur only with a major contribution of anthropogenic disturbances, as well as landslides induced by high-magnitude rainfall events.  In the third part, the matrix method is employed to construct two landslide susceptibility maps: one based on the multi-temporal press-based landslide inventory and another based on a landslide inventory derived from an aerial photograph interpretation carried out in 2014.  
Despite the low spatial accuracy of the press archives in locating landslides, both maps exhibit 69% consistency in the susceptibility classes and agree well in the areas most prone to landslides.  Finally, the integration of these studies with major actions required to improve landslide data collection is proposed to prepare Tegucigalpa for future landslides.
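The antecedent-rainfall threshold idea in the abstract above can be illustrated with a small sketch: for each day, sum the rainfall over the preceding N days and raise a warning when the sum exceeds a threshold; warning days with no recorded landslide count as false alarms. The rainfall series, landslide days, window, and threshold below are all invented for illustration, not taken from the thesis.

```python
# Hypothetical sketch of antecedent-rainfall thresholding for landslide
# warning. Data and threshold values are invented.

def antecedent_sums(rain, window):
    """Sum of rainfall over the `window` days ending at each day (inclusive)."""
    sums = []
    for i in range(len(rain)):
        start = max(0, i - window + 1)
        sums.append(sum(rain[start:i + 1]))
    return sums

def false_alarms(rain, landslide_days, window, threshold):
    """Count days whose antecedent sum exceeds the threshold
    but for which no landslide was reported."""
    sums = antecedent_sums(rain, window)
    return sum(1 for i, s in enumerate(sums)
               if s > threshold and i not in landslide_days)

rain = [0, 5, 40, 60, 0, 0, 80, 0, 0, 0]   # mm/day, invented
slides = {3, 6}                            # days with reported landslides

print(false_alarms(rain, slides, window=2, threshold=50))   # -> 2
```

Longer windows flag more days, which is consistent with the abstract's observation that false alarms increase with threshold duration.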
262

Feature selection based segmentation of multi-source images: application to brain tumor segmentation in multi-sequence MRI

Zhang, Nan 12 September 2011 (has links) (PDF)
Multi-spectral images have the advantage of providing complementary information to resolve ambiguities. The challenge, however, is how to make use of multi-spectral images effectively. In this thesis, our study focuses on the fusion of multi-spectral images by extracting the most useful features to obtain the best segmentation at the lowest computational cost. A Support Vector Machine (SVM) classification integrated with a selection of features in a kernel space is proposed, with the selection criterion defined by kernel class separability. Based on this SVM classification, a framework to follow up brain tumor evolution is proposed, which consists of the following steps: learn the brain tumors and select the features from the first magnetic resonance imaging (MRI) examination of the patient; automatically segment the tumor in new data using a multi-kernel SVM based classification; refine the tumor contour by a region-growing technique; and possibly carry out adaptive training. The proposed system was tested on 13 patients with 24 examinations, including 72 MRI sequences and 1728 images. Compared with the manual tracings of the doctors as the ground truth, the average classification accuracy reaches 98.9%. The system uses several novel feature selection methods to test the integration of feature selection and SVM classifiers. Compared with the traditional SVM, Fuzzy C-means, a neural network, and an improved level-set method, the segmentation results and quantitative data analysis demonstrate the effectiveness of the proposed system.
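The kernel class-separability criterion mentioned in the abstract above can be sketched in a heavily simplified form: rank a scalar feature by how similar same-class samples look versus different-class samples under an RBF kernel. This is an illustrative stand-in, not the thesis's actual criterion; the data, labels, and gamma value are invented.

```python
import math

# Hypothetical sketch of a kernel class-separability score for feature
# ranking: mean within-class kernel similarity minus mean between-class
# similarity. All data are invented.

def rbf(a, b, gamma=1.0):
    return math.exp(-gamma * (a - b) ** 2)

def separability(values, labels, gamma=1.0):
    """Larger score means the two classes are better separated in the
    kernel-induced space for this single scalar feature."""
    within, between = [], []
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            k = rbf(values[i], values[j], gamma)
            (within if labels[i] == labels[j] else between).append(k)
    return sum(within) / len(within) - sum(between) / len(between)

labels = [0, 0, 0, 1, 1, 1]
good = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]   # separates the classes well
bad = [0.0, 5.0, 0.1, 5.1, 0.2, 5.2]    # values mixed across classes

scores = {"good": separability(good, labels), "bad": separability(bad, labels)}
print(scores["good"] > scores["bad"])   # the well-separated feature ranks higher
```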
263

Vision-based object detection and tracking using region-based active contour segmentation

Ait Fares, Wassima 27 September 2013 (has links) (PDF)
Object segmentation and tracking are competitive research areas in computer vision. One of their important applications is in robotics, where the ability to accurately segment an object of interest from the image background is crucial, particularly in images acquired on board while the robot is moving. Segmenting an object in an image consists in distinguishing the object region from the background according to a defined criterion. Tracking an object in an image sequence consists in locating the object region over time in a video. Several techniques can be used to perform these operations. In this thesis, we focus on segmenting and tracking objects using the active contour method, owing to its robustness and its ability to segment and track non-rigid objects. This method evolves a curve from an initial position, surrounding the object to be detected, toward a converged position corresponding to the boundary of the object of interest. We first propose a global criterion that depends on the image regions, which can impose certain constraints on the characteristics of these regions, such as a homogeneity assumption. This assumption cannot always be verified, owing to the heterogeneity often present in images. To account for heterogeneity that may appear either on the object of interest or on the background, in noisy images and with an inadequate initialization of the active contour, we propose a technique that combines local and global statistics to define the segmentation criterion. Using a fixed radius, a half-disk is superimposed on each point of the active contour to define the local extraction regions.
When heterogeneity is present both on the object of interest and on the image background, we develop a technique based on a flexible radius that defines two half-disks with two different radii to extract the local information. The values of the two radii are chosen by taking into account the size of the object to be segmented and the distance separating the object of interest from its neighbors. Finally, to track a moving object in a video sequence using the active contour method, we develop a hybrid tracking approach based on the region characteristics and on the motion vectors of interest points extracted in the object region. With our approach, the initial active contour in each frame is adjusted so that it is as close as possible to the true boundary of the object of interest, so that the region-based evolution of the active contour is not trapped by false contours. Simulation results on synthetic and real images validate the effectiveness of the proposed approaches.
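The local extraction idea in the abstract above can be caricatured in one dimension: around a candidate contour position, the samples within a fixed radius on each side play the role of the two half-disks, and the contour moves to the position where the two local means differ most. This is a deliberately simplified stand-in for the thesis's statistical criterion; the signal, radius, and starting position are invented.

```python
# Hypothetical 1D caricature of local region-based contour evolution:
# the contour greedily moves toward the position that maximizes the
# contrast between the local means on each side. Data are invented.

def local_contrast(signal, c, radius):
    """|mean of the `radius` samples left of c - mean of those right of c|."""
    left = signal[c - radius:c]
    right = signal[c:c + radius]
    return abs(sum(left) / radius - sum(right) / radius)

def evolve_contour(signal, c, radius, steps=50):
    """Greedy local search: shift the contour by one sample at a time
    while the local contrast keeps improving."""
    for _ in range(steps):
        candidates = [p for p in (c - 1, c, c + 1)
                      if radius <= p <= len(signal) - radius]
        best = max(candidates, key=lambda p: local_contrast(signal, p, radius))
        if best == c:
            break
        c = best
    return c

signal = [0] * 6 + [8] * 6          # true edge between indices 5 and 6
print(evolve_contour(signal, c=4, radius=3))   # converges to 6
```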
264

Segmentation and quantification of retinal layers in optical coherence tomography images for healthy and pathological subjects

Ghorbel, Itebeddine 12 April 2012 (has links) (PDF)
Optical coherence tomography (OCT) is a non-invasive imaging technique based on the principle of interferometry. It is now a standard examination for the screening and follow-up of retinal conditions, in particular macular degeneration. The first objective of this thesis falls within this context: we propose a new method for segmenting OCT images of healthy subjects. The main difficulties stem from image noise, the variability of morphology from one patient to another, and the poorly defined interfaces between the different layers. Our new approach is based on more global segmentation algorithms. Eight retinal layers can thus be detected, including the inner segments (IS) of the photoreceptors. The slow progression of retinal diseases raises the problem of evaluating their treatments. The second objective of this thesis falls within this context: we extend the methods developed for healthy subjects to subjects suffering from retinitis pigmentosa. We have thus developed a new parametric deformable model that integrates prior information by adding a parallelism constraint. For both healthy and pathological cases, we carried out an exhaustive qualitative and quantitative evaluation. The automatic segmentation results were compared with manual segmentations performed by several experts. These evaluations show very good concordance and a strong correlation between the automatic segmentations and the manual segmentations performed by an expert.
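One way to picture the parallelism constraint mentioned in the abstract above is as a penalty on the variance of the column-wise gap between two layer boundaries: the penalty is zero when the boundaries are perfectly parallel and grows as they diverge. This is an illustrative sketch, not the thesis's actual energy term; the boundary positions are invented.

```python
from statistics import pvariance

# Hypothetical sketch of a parallelism penalty between two retinal layer
# boundaries, given as per-column row positions. Data are invented.

def parallelism_penalty(upper, lower):
    """Variance of the column-wise gap between two boundary curves;
    zero when the curves are perfectly parallel."""
    gaps = [l - u for u, l in zip(upper, lower)]
    return pvariance(gaps)

parallel = [10, 11, 12, 13]
also_parallel = [15, 16, 17, 18]      # constant gap of 5
not_parallel = [15, 18, 16, 22]       # gap varies column to column

print(parallelism_penalty(parallel, also_parallel) == 0)   # True
print(parallelism_penalty(parallel, not_parallel) > 0)     # True
```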
265

3D Surface Analysis for the Automated Detection of Deformations on Automotive Panels

Yogeswaran, Arjun 16 May 2011 (has links)
This thesis examines an automated method to detect surface deformations on automotive panels for the purpose of quality control along a manufacturing assembly line. Automation in the automotive manufacturing industry is becoming more prominent, but quality control is still largely performed by human workers. Quality control is important in the context of automotive body panels because deformations can occur along the assembly line from causes such as inadequate handling of parts or tools around a vehicle during assembly, rack storage, and shipping from subcontractors. These defects are currently identified and marked before panels are either rectified or discarded. This work attempts to develop an automated system to detect deformations, alleviating the dependence on human workers in quality control and improving performance by increasing speed and accuracy. Some existing techniques make use of an ideal CAD model acting as a master work; panels scanned on the assembly line are compared to this model to determine the location of deformations. This thesis presents a solution for detecting deformations of various scales without a master work. It also focuses on automated analysis requiring only a few intuitive operator-set parameters, and provides the ability to classify the deformations as dings, which are deformations that protrude from the surface, or dents, which are depressions into the surface. A complete automated deformation detection system is proposed, comprising a feature extraction module, a segmentation module, and a classification module, which outputs the locations of deformations when provided with the 3D mesh of an automotive panel. Two feature extraction techniques are proposed. The first is a general feature extraction technique for 3D meshes that uses octrees for multi-resolution analysis and evaluates the amount of surface variation to locate deformations. 
The second is specifically designed for deformation detection and analyzes multi-resolution cross-sections of a 3D mesh to locate deformations based on their estimated size. The performance of the proposed automated deformation detection system, and of all its sub-modules, is tested on a set of meshes that represent differing characteristics of deformations in surface panels, including deformations of different scales. Noisy, low-resolution meshes are captured with a 3D acquisition system, while artificial meshes are generated to simulate ideal acquisition conditions. The proposed system shows accurate results in both ideal and non-ideal situations, under conditions of noise and complex surface curvature, by extracting only the deformations of interest and accurately classifying them as dings or dents.
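The ding/dent distinction described above reduces to the sign of the deformation relative to the nominal surface. A minimal sketch, assuming the nominal surface can be approximated by the median height of a patch (the thesis itself works on full 3D meshes; the heights and tolerance here are invented):

```python
from statistics import median

# Hypothetical sketch of ding/dent classification for a patch of surface
# heights: a protrusion is a "ding", a depression is a "dent". Data invented.

def classify_deformation(heights, tol=0.1):
    """Return 'ding', 'dent', or 'flat' for a patch of surface heights."""
    ref = median(heights)                      # crude nominal surface
    deviations = [h - ref for h in heights]
    extreme = max(deviations, key=abs)         # largest signed deviation
    if abs(extreme) <= tol:
        return "flat"
    return "ding" if extreme > 0 else "dent"

print(classify_deformation([0.0, 0.01, 0.02, 0.9, 0.01]))   # ding
print(classify_deformation([0.0, -0.01, -0.8, 0.02, 0.0]))  # dent
```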
266

Modeling Phoneme Durations And Fundamental Frequency Contours In Turkish Speech

Ozturk, Ozlem 01 October 2005 (has links) (PDF)
The term prosody refers to characteristics of speech such as intonation, timing, loudness, and other acoustic properties imposed by the physical, intentional, and emotional state of the speaker. Phone durations and fundamental frequency contours are considered two of the most prominent aspects of prosody. This thesis studies the modeling of phone durations and fundamental frequency contours in Turkish speech. Various methods exist for building prosody models; the state of the art is dominated by corpus-based methods. This study introduces corpus-based approaches using classification and regression trees to discover the relationships between prosodic attributes and phone durations or fundamental frequency contours. In this context, a speech corpus designed to have specific phonetic and prosodic content has been recorded and annotated, and a set of prosodic attributes has been compiled. The elements of the set are determined based on linguistic studies and literature surveys. The relevance of the prosodic attributes is investigated by statistical measures such as mutual information and information gain. Fundamental frequency contour and phone duration modeling are handled as independent problems. Phone durations are predicted using regression trees, where the set of prosodic attributes is formed by forward selection. Quantization of phone durations is studied to improve prediction quality. A two-stage duration prediction process is proposed for handling specific ranges of phone duration values. Scaling and shifting of the predicted durations are proposed to minimize the mean squared error. Fundamental frequency contour modeling is studied under two different frameworks. The first generates a codebook of syllable fundamental frequency contours by vector quantization; the codewords are used to predict sentence fundamental frequency contours. Pitch accent prediction, by two different clusterings of the codewords into accented and non-accented subsets, is also considered in this framework. 
The second approach builds on this experience. An algorithm has been developed to identify syllables having perceptual prominence, or pitch accents. The slope of the fundamental frequency contour is then predicted for the syllables identified as accented, and the pitch contours of sentences are predicted using the duration information and the estimated slope values. The performance of the phone duration and fundamental frequency contour models is evaluated quantitatively using statistical measures such as mean absolute error, root mean squared error, and correlation, and by kappa coefficients and correct classification rate in the case of discrete symbol prediction.
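The "scaling and shifting of predicted durations to minimize mean squared error" step mentioned above has a closed-form solution: it is ordinary least squares with a single regressor, choosing a scale a and shift b that minimize the MSE between a*pred + b and the true durations. The duration values below are invented for illustration:

```python
# Closed-form scale-and-shift correction of predicted durations
# (ordinary least squares with one regressor). Data are invented.

def fit_scale_shift(pred, true):
    """Return (a, b) minimizing mean((a*p + b - t)^2) over all pairs."""
    n = len(pred)
    mp = sum(pred) / n
    mt = sum(true) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, true))
    var = sum((p - mp) ** 2 for p in pred)
    a = cov / var
    b = mt - a * mp
    return a, b

pred = [50, 60, 70, 80]          # predicted phone durations (ms), invented
true = [110, 130, 150, 170]      # true durations: exactly 2*pred + 10

a, b = fit_scale_shift(pred, true)
print(a, b)   # recovers scale 2.0 and shift 10.0
```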
267

Mechanistic modeling of evaporating thin liquid film instability on a BWR fuel rod with parallel and cross vapor flow

Hu, Chih-Chieh 20 January 2009 (has links)
This work has been aimed at developing a mechanistic, transient, 3-D numerical model to predict the behavior of an evaporating thin liquid film on a non-uniformly heated cylindrical rod with simultaneous parallel and cross flow of vapor. Interest in this problem has been motivated by the fact that the liquid film on a full-length boiling water reactor fuel rod may experience significant axial and azimuthal heat flux gradients and cross flow due to variations in the thermal-hydraulic conditions in surrounding subchannels caused by proximity to an inserted control blade tip and/or the top of part-length fuel rods. Such heat flux gradients, coupled with localized cross flow, may cause the liquid film on the fuel rod surface to rupture, thereby forming a dry hot spot. These localized dryout phenomena cannot be accurately predicted by traditional subchannel analysis methods in conjunction with empirical dryout correlations. To this end, a numerical model based on the Level Contour Reconstruction Method was developed. The standard k-ε turbulence model is included, and a cylindrical coordinate system has been used to enhance the resolution of the Level Contour Reconstruction Method. Satisfactory agreement has been achieved between the model predictions and experimental data. A model of this type is necessary to supplement current state-of-the-art BWR core thermal-hydraulic design methods based on subchannel analysis techniques coupled with empirical dryout correlations. In essence, such a model provides the core designer with a "magnifying glass" through which the behavior of the liquid film at specific locations within the core (a specific axial node at a specific location within a specific bundle in the subchannel analysis model) can be closely examined. A tool of this type would allow the designer to examine the effectiveness of possible design changes and/or modified control strategies to prevent conditions leading to localized film instability and possible fuel failure.
268

Human Action Recognition In Video Data For Surveillance Applications

Gurrapu, Chaitanya January 2004 (has links)
Detecting human actions using a camera has many possible applications in the security industry. When a human performs an action, his/her body goes through a signature sequence of poses. To detect these pose changes and hence the activities performed, a pattern recogniser needs to be built into the video system. Due to the temporal nature of the patterns, Hidden Markov Models (HMM), used extensively in speech recognition, were investigated. Initially a gesture recognition system was built using novel features. These features were obtained by approximating the contour of the foreground object with a polygon and extracting the polygon's vertices. A Gaussian Mixture Model (GMM) was fit to the vertices obtained from a few frames and the parameters of the GMM itself were used as features for the HMM. A more practical activity detection system using a more sophisticated foreground segmentation algorithm immune to varying lighting conditions and permanent changes to the foreground was then built. The foreground segmentation algorithm models each of the pixel values using clusters and continually uses incoming pixels to update the cluster parameters. Cast shadows were identified and removed by assuming that shadow regions were less likely to produce strong edges in the image than real objects and that this likelihood further decreases after colour segmentation. Colour segmentation itself was performed by clustering together pixel values in the feature space using a gradient ascent algorithm called mean shift. More robust features in the form of mesh features were also obtained by dividing the bounding box of the binarised object into grid elements and calculating the ratio of foreground to background pixels in each of the grid elements. These features were vector quantized to reduce their dimensionality and the resulting symbols presented as features to the HMM to achieve a recognition rate of 62% for an event involving a person writing on a white board. 
The recognition rate increased to 80% for the "seen" person sequences, i.e., the sequences of the person used to train the models. With a fixed lighting position, the lack of a shadow removal subsystem improved the detection rate. This is because of the consistent profile of the shadows in both the training and testing sequences due to the fixed lighting positions. Even with a lower recognition rate, the shadow removal subsystem was considered an indispensable part of a practical, generic surveillance system.
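The mesh features described above, dividing the bounding box of the binarised object into grid elements and taking the foreground ratio in each, can be sketched directly. The grid size and toy mask below are invented; the thesis vector-quantizes these ratios before feeding them to the HMM, which is not shown here.

```python
# Hypothetical sketch of mesh features: per-grid-element ratio of
# foreground (1) pixels in a binary mask. Mask and grid size invented.

def mesh_features(mask, rows, cols):
    """mask: 2D list of 0/1; returns a row-major list of foreground ratios."""
    h, w = len(mask), len(mask[0])
    feats = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            cell = [mask[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            feats.append(sum(cell) / len(cell))
    return feats

mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
print(mesh_features(mask, 2, 2))   # [1.0, 0.0, 0.0, 1.0]
```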
269

Automatic extraction of building roof contours from a digital elevation model using Bayesian inference and Markov random fields

Galvanin, Edinéia Aparecida dos Santos [UNESP] 29 March 2007 (has links) (PDF)
Methodologies for automatic building roof extraction are important in the context of spatial information acquisition for geographic information systems (GIS). This work therefore proposes a methodology for the automatic extraction of building roof contours from laser scanning data. The methodology is based on two stages: 1) extraction of high regions (buildings, trees, etc.) from a Digital Elevation Model (DEM) derived from laser scanning data; 2) extraction of the high regions that correspond to roof contours. In the first stage, a recursive splitting technique using the quadtree structure is applied, followed by a Bayesian merging technique based on a Markov Random Field (MRF) model. The recursive splitting technique subdivides the DEM into homogeneous regions. However, due to slight height differences in the DEM, the region fragmentation can be relatively high at this stage. To minimize the fragmentation, a region merging technique based on the Bayesian framework is applied to the previously segmented data. A hierarchical model is used, in which the mean heights of the regions depend on a general mean plus a random effect that incorporates the neighborhood relations between the regions. The prior distribution for the random effect is specified as a Conditional Autoregressive (CAR) model. The posterior distributions for the parameters of interest are obtained using the Gibbs sampler. In the second stage, the roof contours are identified among all the high objects extracted in the first stage. Taking into account some roof properties and the measurements of several attributes (for example, area, rectangularity, and the angles between the principal axes of the objects), an energy function is constructed from the MRF model.
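The recursive quadtree splitting described above can be sketched on a toy height grid: a square region is split into four quadrants until every leaf is homogeneous, here meaning its height range is within a tolerance. The DEM values and tolerance are invented, and the subsequent Bayesian merging stage is not sketched.

```python
# Hypothetical sketch of recursive quadtree splitting of a DEM into
# homogeneous regions. Heights and tolerance are invented.

def quadtree_split(dem, x, y, size, tol, leaves):
    """Append (x, y, size) leaves whose height range is within tol."""
    vals = [dem[y + j][x + i] for j in range(size) for i in range(size)]
    if size == 1 or max(vals) - min(vals) <= tol:
        leaves.append((x, y, size))
        return
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            quadtree_split(dem, x + dx, y + dy, half, tol, leaves)

dem = [
    [1, 1, 9, 9],   # a "building" of height 9 in the top-right quadrant
    [1, 1, 9, 9],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
]
leaves = []
quadtree_split(dem, 0, 0, 4, tol=0, leaves=leaves)
print(len(leaves))   # 4 homogeneous quadrants
```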
270

Multiple registration of coronal and sagittal MR temporal image sequences based on the Hough transform

Neylor Stevo 20 August 2010 (has links)
This work addresses the determination of breathing patterns in time sequences of images obtained by Magnetic Resonance (MR) imaging, and their use in the temporal registration of coronal and sagittal images. The registration is done without any triggering information and without any special gas to enhance contrast. The temporal image sequences are acquired in free breathing. The real movement of the lung has never been observed directly, as the lung is totally dependent on its surrounding muscles and collapses without them. Visualization of the lung in motion is a current research topic in medicine. Lung movement is not periodic and is susceptible to variations in the degree of respiration. Compared with Computed Tomography (CT), MR imaging involves longer acquisition times, but it is preferable because it does not involve radiation. Since the coronal and sagittal image sequences are orthogonal to each other, their intersection corresponds to a line segment in three-dimensional space. The registration is based on the analysis of this intersection segment. The time sequence of this intersection segment can be stacked, defining a two-dimensional spatio-temporal (2DST) image. It is assumed that the diaphragmatic movement is the principal movement and that all lung structures move almost synchronously. The synchronization of this motion is performed through a pattern called the respiratory function, which is obtained by processing a 2DST image. An interval Hough transform algorithm searches for movements synchronized with the respiratory function. A greedy active contour algorithm adjusts small discrepancies caused by asynchronous movements in the respiratory patterns. The output is a set of respiratory patterns. 
Finally, coronal and sagittal images in the same breathing phase are composed by comparing respiratory patterns originating from the diaphragmatic and upper boundary surfaces. When available, respiratory patterns associated with internal lung structures are also used. Several results and conclusions are presented.
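The 2DST construction described above can be sketched in miniature: the 1D intensity profile along the coronal/sagittal intersection segment is stacked over time, and a crude breathing pattern is read off as the position of the largest intensity jump in each row (a stand-in for the diaphragm edge; the actual work uses an interval Hough transform and active contours). The profiles below are invented.

```python
# Hypothetical sketch: stack intersection-segment profiles into a 2DST
# image and extract a crude per-frame edge position. Data are invented.

def build_2dst(profiles):
    """Stack per-time-step intersection profiles into a 2DST image (rows = time)."""
    width = len(profiles[0])
    assert all(len(p) == width for p in profiles)
    return [list(p) for p in profiles]

def breathing_pattern(image2dst):
    """Index of the largest adjacent-intensity jump in each row."""
    pattern = []
    for row in image2dst:
        jumps = [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
        pattern.append(jumps.index(max(jumps)))
    return pattern

profiles = [
    [0, 0, 0, 9, 9],   # edge between indices 2 and 3
    [0, 0, 9, 9, 9],   # edge has moved: between indices 1 and 2
    [0, 0, 0, 9, 9],
]
print(breathing_pattern(build_2dst(profiles)))   # [2, 1, 2]
```

Matching the pattern extracted from a coronal sequence against the one from a sagittal sequence is what allows frames in the same breathing phase to be paired.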
