581

A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The development of an automated computer system for the identification of humans by integrating facial and iris features using localization, feature extraction, handcrafted and deep learning techniques.

Nassar, Alaa S.N. January 2018 (has links)
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis focuses on combining the face with the left and right irises in a unified hybrid multimodal biometric identification system using different fusion approaches at the score and rank levels. Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which is based on merging the advantages of the Curvelet transform with the Fractal dimension. Secondly, a novel framework based on merging the advantages of local handcrafted feature descriptors with deep learning approaches, the Multimodal Deep Face Recognition (MDFR) framework, is proposed to address the face recognition problem in unconstrained conditions. Thirdly, an efficient deep learning system, termed IrisConvNet, is employed, whose architecture is based on a combination of a Convolutional Neural Network (CNN) and a Softmax classifier to extract discriminative features from an iris image. Finally, the performance of the unimodal and multimodal systems has been evaluated by conducting a number of extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1, and IITD) and the SDUMLA-HMT multimodal dataset.
The results obtained have demonstrated the superiority of the proposed systems compared to previous works by achieving new state-of-the-art recognition rates on all the employed datasets with less time required to recognize the person's identity. / Higher Committee for Education Development in Iraq
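The abstract above mentions fusion at the score level. A minimal sketch of one common score-level fusion scheme — min-max normalization followed by a weighted sum — may clarify the idea; the matchers, weights, and scores below are hypothetical illustrations, not values from the thesis:

```python
def min_max_normalize(scores):
    """Scale a list of matcher scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(per_matcher_scores, weights):
    """Weighted-sum fusion: normalize each matcher's scores over the
    gallery, then combine them per gallery identity."""
    normalized = [min_max_normalize(s) for s in per_matcher_scores]
    n_identities = len(per_matcher_scores[0])
    return [sum(w * m[i] for w, m in zip(weights, normalized))
            for i in range(n_identities)]

# Hypothetical similarity scores of one probe against a 3-identity
# gallery, from three matchers (face, left iris, right iris), each
# with its own score range.
face   = [0.9, 0.2, 0.4]
iris_l = [12.0, 30.0, 5.0]
iris_r = [8.0, 25.0, 3.0]
fused = fuse_scores([face, iris_l, iris_r], weights=[0.5, 0.25, 0.25])
best = max(range(len(fused)), key=fused.__getitem__)
```

Normalizing before summing matters because raw scores from different matchers live on incomparable scales; the weights can then express how much each trait is trusted.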
582

Biomass to Biofuel: Syngas Cleaning and Biomass Feedstock

Sadegh-Vaziri, Ramiar January 2017 (has links)
This thesis builds around the idea of a biofuel production process that comprises biomass production, biomass gasification, gas cleaning, and fuel production. In this work, we specifically looked into H2S removal, as part of cleaning the producer gas, and flocculation of microalgae, which is involved in the harvesting of microalgae after biomass production. One of the impurities to remove from the producer gas is hydrogen sulfide, which can be removed using a packed bed of zinc oxide. Despite its regular use, it was only recently shown that, during reaction with H2S, nano-sized particles of ZnO exhibit void formation and outward growth. In this work, a micro-scale model was introduced to describe the void formation and outward growth. On the macro-scale, the simulations captured pore clogging of pellets due to the outward growth. The pore clogging prevents the full conversion of pellets and consequently leads to shorter breakthrough times of beds. The second problem investigated here deals with the flocculation of microalgae. Microalgae are produced in relatively low concentrations in the incubator liquid medium, and during harvesting the concentration is increased to an acceptable level. The harvesting process includes flocculation followed by a filtration or centrifuge unit. During flocculation, microalgae are stimulated to aggregate and form clusters. The experiments showed that the mean size of the clusters formed during flocculation increases with time to a maximum and then starts decreasing, resulting in an overshoot in the mean size profile. The size of the clusters influences the efficiency of the subsequent filtration or centrifugation, so it is of interest to carefully track the size evolution of the clusters, making the study of this overshoot a crucial research topic. In this work, the possible mechanisms behind this overshoot were investigated. / QC 20170330
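A back-of-the-envelope check that pairs with the packed-bed discussion above: under ideal plug flow with complete uptake, the breakthrough time of a sorbent bed is simply its total sulfur capacity divided by the H2S mass feed rate. The sketch below illustrates that scaling only; all numbers are hypothetical, not from the thesis, and real beds break through earlier, e.g. when pore clogging limits pellet conversion:

```python
def ideal_breakthrough_time(bed_mass_kg, capacity_kg_per_kg,
                            gas_flow_m3_per_s, h2s_conc_kg_per_m3):
    """Ideal (sharp-front) breakthrough time in seconds: the bed's
    total H2S capacity divided by the H2S mass feed rate."""
    total_capacity = bed_mass_kg * capacity_kg_per_kg
    feed_rate = gas_flow_m3_per_s * h2s_conc_kg_per_m3
    return total_capacity / feed_rate

# Hypothetical example: a 10 kg ZnO bed taking up 0.3 kg H2S per kg,
# fed 0.01 m^3/s of producer gas carrying 2e-4 kg/m^3 of H2S.
t_b = ideal_breakthrough_time(10.0, 0.3, 0.01, 2e-4)
hours = t_b / 3600.0
```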
583

Numerical simulation of multi-dimensional fractal soot aggregates

Suarez, Andres January 2018 (has links)
Superaggregates are clusters formed by diverse aggregation mechanisms at different scales. They can be found in fluidized nanoparticles and soot formation. An aggregate formed by a single aggregation mechanism can be described by the fractal dimension, df, which is a measure of the distribution and configuration of primary particles in the aggregate. Similarly, a superaggregate can be analyzed through the different fractal dimensions found at each scale. In a fractal aggregate, self-similarity can be identified at different scales, and there is a power-law relation between mass and aggregate size that can be related to properties like density or light scattering. The fractal dimension, df, can be influenced by the aggregation mechanism, particle concentration, temperature, and residence time, among other variables. Moreover, this parameter can help in estimating aggregate properties, which in turn supports the design of new processes, the analysis of health issues, and the characterization of new materials.
A multi-dimensional soot aggregate was simulated with the following approach. The first aggregation stage was modeled with a Diffusion-Limited cluster-cluster aggregation (DLCA) mechanism, where primary clusters with a fractal dimension, df1, close to 1.44 were obtained. The second aggregation stage was then specified by a Ballistic Aggregation (BA) mechanism, where the primary clusters generated in the first stage were used to form a superaggregate. All the models were validated against reported data from different experiments and computer models. Using the Ballistic Aggregation (BA) model with primary particles as the building blocks, the fractal dimension, df2, was close to 2.0, which is the value reported in the literature.
However, a decrease in this parameter is observed when primary clusters from a DLCA model are used as the building blocks, because the distribution of primary particles in the superaggregate's structure is less compact. In the second aggregation stage, the fractal dimension, df2, increases as the superaggregate size increases, showing asymptotic behavior toward 2.0, which would develop at larger scales. Partial reorganization was implemented in the Ballistic Aggregation (BA) mechanism, where two contact points between primary clusters were enforced for stabilization purposes. This implementation showed a faster increase in the fractal dimension, df2, than without partial reorganization. This behavior is the result of a more packed distribution of primary clusters at short-range scales, but it does not affect the scaling behavior of multi-dimensional fractal structures. Moreover, the same results were obtained in different scenarios where the building-block sizes ranged from 200 to 300 and from 700 to 800 primary particles.
The obtained results demonstrate the importance of the fractal dimension, df, for aggregate characterization. This parameter is powerful, universal, and accurate, since identifying the different aggregation stages in a superaggregate can increase the accuracy of property estimation, which is crucial in physics and process modeling.
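The mass-size power law mentioned above suggests a simple way to estimate a fractal dimension from particle coordinates: count the "mass" N(r) inside nested radii around the centroid and fit the slope of log N versus log r. A minimal sketch on synthetic point sets (not the thesis's DLCA/BA code, and without the multi-scale treatment a superaggregate needs):

```python
import math

def mass_radius_dimension(points, radii):
    """Estimate a fractal dimension from the mass-radius scaling
    N(r) ~ r^df: count points within radius r of the centroid,
    then fit the slope of log N vs log r by least squares."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    xs, ys = [], []
    for r in radii:
        n = sum(1 for (x, y) in points
                if math.hypot(x - cx, y - cy) <= r)
        if n > 0:
            xs.append(math.log(r))
            ys.append(math.log(n))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Sanity checks on synthetic sets: points on a line should give
# df ~ 1, a filled square lattice should give df ~ 2.
line = [(i * 0.01, 0.0) for i in range(-1000, 1001)]
square = [(i * 0.05, j * 0.05) for i in range(-100, 101)
                               for j in range(-100, 101)]
radii = [0.5, 1.0, 2.0, 4.0]   # kept inside both sets' extents
df_line = mass_radius_dimension(line, radii)
df_square = mass_radius_dimension(square, radii)
```

The radii must stay well inside the cluster, since N(r) saturates once r exceeds the aggregate size, which would bias the fitted slope downward.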
584

MULTISCALING ANALYSIS OF FLUIDIC SYSTEMS: MIXING AND MICROSTRUCTURE CHARACTERIZATION

Camesasca, Marco 07 April 2006 (has links)
No description available.
585

Measurement of White Matter Structure Changes in Amyotrophic Lateral Sclerosis Using Fractal Analysis

Liu, Zao 13 September 2011 (has links)
No description available.
586

Diabetic Retinopathy Classification Using Gray Level Textural Contrast and Blood Vessel Edge Profile Map

Gurudath, Nikita January 2014 (has links)
No description available.
587

Bond Improvement of Al/Cu Joints Created by Very High Power Ultrasonic Additive Manufacturing

Truog, Adam G. 25 June 2012 (has links)
No description available.
588

Application of Wavelets to Filtering and Analysis of Self-Similar Signals

Wirsing, Karlton 30 June 2014 (has links)
Digital Signal Processing has been dominated by the Fourier transform since the Fast Fourier Transform (FFT) was developed in 1965 by Cooley and Tukey. In the 1980s a new transform was developed, called the wavelet transform, even though the first wavelet goes back to 1910. With the Fourier transform, all information about localized changes in signal features is spread out across the entire signal space, making local features global in scope. Wavelets are able to retain localized information about the signal by applying a function of limited duration, also called a wavelet, to the signal. As with the Fourier transform, the discrete wavelet transform has an inverse transform, which allows us to make changes to a signal in the wavelet domain and then transform it back into the time domain. In this thesis, we have investigated the filtering properties of this technique and analyzed its performance under various settings. Another popular application of the wavelet transform is data compression, as described in the JPEG 2000 standard and the compressed digital storage of fingerprints developed by the FBI. Previous work on filtering has focused on the discrete wavelet transform. Here, we extended that method to the stationary wavelet transform and found that it gives a performance boost of as much as 9 dB over that of the discrete wavelet transform. We also found that the SNR of noise filtering decreases as the frequency of the base signal increases, up to the Nyquist limit, for both the discrete and stationary wavelet transforms. Besides filtering the signal, the discrete wavelet transform can also be used to estimate the standard deviation of the white noise present in the signal. We extended the developed estimator for the discrete wavelet transform to the stationary wavelet transform. As with filtering, the quality of the estimate is found to decrease as the frequency of the base signal increases.
Many interesting signals are self-similar, which means that one of their properties is invariant on many different scales. One popular example is strict self-similarity, where an exact copy of a signal is replicated on many scales, but the most common property is statistical self-similarity, where a random segment of a signal is replicated on many different scales. In this work, we investigated wavelet-based methods to detect statistical self-similarities in a signal and their performance on various types of self-similar signals. Specifically, we found that the quality of the estimate depends on the type of the units of the signal being investigated for low Hurst exponent and on the type of edge padding being used for high Hurst exponent. / Master of Science
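The noise standard-deviation estimator described above is commonly built from the finest-scale wavelet detail coefficients: for white Gaussian noise, sigma ≈ median(|d|)/0.6745 (the MAD rule, popularized by Donoho). A minimal sketch using a one-level Haar transform, standing in for the thesis's specific wavelets:

```python
import math
import random
from statistics import median

def haar_detail(signal):
    """Finest-scale Haar detail coefficients:
    d[k] = (x[2k] - x[2k+1]) / sqrt(2)."""
    return [(signal[2 * k] - signal[2 * k + 1]) / math.sqrt(2)
            for k in range(len(signal) // 2)]

def estimate_noise_sigma(signal):
    """MAD-based estimate of white-noise sigma; 0.6745 is the
    MAD-to-sigma ratio of the normal distribution."""
    d = haar_detail(signal)
    return median(abs(c) for c in d) / 0.6745

# Sanity check: a slowly varying base signal plus Gaussian noise of
# known sigma; the estimate should land near the true value, since
# the smooth component contributes almost nothing to the details.
random.seed(42)
true_sigma = 0.5
n = 4096
clean = [math.sin(2 * math.pi * i / n) for i in range(n)]
noisy = [c + random.gauss(0.0, true_sigma) for c in clean]
sigma_hat = estimate_noise_sigma(noisy)
```

The median makes the estimate robust: a few large detail coefficients caused by sharp signal features barely move it, whereas a standard-deviation estimate over the same coefficients would be inflated by them.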
589

(Not) Drawing The Line: Technology Reexamined

Liguori, Elizabeth Angela 07 June 2017 (has links)
(Not) Drawing The Line: Technology Re-examined is the culmination of interdisciplinary research exploring the nature of materiality and process in the fields of art, science, and technology. Exploration and experimentation in these diverse disciplines have helped to illuminate many of the ideas and concepts that have guided the overall research process. These explorations have also honed the ability to critically examine how technology is perceived and represented, post-internet.   This document illustrates the processes involved in the conception and creation of a body of work manifested through visual and technological problem solving, investigative research of materials and technologies, and the fundamental concerns of art, technology, form and pattern. These empirical areas of research are punctuated by literary texts on the philosophy of art and technology that have informed many of the visual comparisons represented. This body of evidence is an exploration of the idea that the evolution of technological developments can often be attributed to the creation of art through the heuristic experimentation and visual explorations of the artist. / Master of Fine Arts / (Not) Drawing The Line: Technology Re-examined is the result of exploration and research in the areas of studio art, science, and technology. Many of the ideas and concepts presented in this documentation are a result of curiosity-driven research which uses materials and processes to help form connections across disciplines. In most examples, the materials used are common or familiar items such as the #2 pencil, reflective surfaces such as a mirror, or clay used for ceramics. The intention behind the work documented in this thesis is to help the viewer look at these everyday materials in a different way than their common use suggests through visual comparisons and wordplay. 
Its goal is to suggest that a new use or discovery may be present in common materials and that solutions to complicated problems could be found right under our noses, so to speak. At its core, it is a commentary on technology, its uses, and perceptions of it during our current, post-internet place in history.
590

Methods for polygonal approximation and the development of shape feature extractors based on the turning angle function

Carvalho, Juliano Daloia de 12 September 2008 (has links)
Whereas manually drawn contours may contain artifacts related to hand tremor, automatically detected contours may contain noise and inaccuracies due to limitations or errors in the procedures for the detection and segmentation of the related regions. To improve the subsequent description step, modeling procedures are desired that eliminate the artifacts in a given contour while preserving the important and significant details present in it. In this work, two polygonal modeling methods are presented: one applied directly to the original contour and another derived from the turning angle function. Both methods use the parameters Smin and µmax to decide whether a given segment is removed or maintained; through these parameters, the proposed methods can be configured for the application at hand. Both methods have been shown to be efficient in reducing the influence of noise and artifacts while preserving characteristics relevant for further analysis. Systems for computer-aided diagnosis (CAD) and content-based image retrieval (CBIR) use shape descriptor methods to make it possible to infer the presence of factors in a given contour, or as a basis for classifying groups with different patterns. A shape factor should produce a value that is affected by the shape of an object, so that it is possible to characterize the presence of a factor in the contour or to identify similarity among contours. Shape factors should be invariant to rotation, translation, and scale. In the present work, the following shape features are proposed, all derived from the smoothed turning angle function: an index of the presence of convex regions (XRTAF), an index of the presence of concave regions (VRTAF), an index of convexity (CXTAF), two measures of fractal dimension (DFTAF and DF1TAF), and an index of spiculation (ISTAF). The smoothed turning angle function represents the contour in terms of its concave and convex regions.
The polygonal modeling and shape descriptor methods were applied to the problem of breast mass classification to evaluate their performance. The polygonal modeling procedure proposed in this work provided higher compression and a better polygonal fit. The best classification accuracies in discriminating between benign masses and malignant tumors, obtained for XRTAF, VRTAF, CXTAF, DFTAF, DF1TAF, and ISTAF, in terms of area under the receiver operating characteristic curve, were 0.92, 0.92, 0.93, 0.93, 0.92, and 0.94, respectively. / Master in Computer Science
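The turning angle function that these descriptors build on can be computed directly from a polygon's vertices: walk the boundary and accumulate the signed exterior angle at each vertex, positive turns marking convex regions and negative turns concave ones. A minimal sketch (not the thesis's implementation; the smoothing step and the Smin/µmax parametrization are omitted):

```python
import math

def turning_angles(polygon):
    """Signed exterior angle at each vertex of a closed polygon,
    traversed counter-clockwise. Positive = left (convex) turn,
    negative = right (concave) turn."""
    n = len(polygon)
    angles = []
    for i in range(n):
        ax, ay = polygon[i - 1]            # previous vertex
        bx, by = polygon[i]                # current vertex
        cx, cy = polygon[(i + 1) % n]      # next vertex
        h1 = math.atan2(by - ay, bx - ax)  # heading into the vertex
        h2 = math.atan2(cy - by, cx - bx)  # heading out of the vertex
        d = h2 - h1
        # wrap the turn into (-pi, pi]
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        angles.append(d)
    return angles

# For any simple polygon traversed counter-clockwise, the exterior
# angles sum to 2*pi; a square turns pi/2 at each corner.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
angs = turning_angles(square)
total = sum(angs)
```

Descriptors like the convexity and concavity indices can then be read off this sequence, e.g. by comparing the magnitudes of the positive and negative turns.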
