  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
761

Porosity Analysis in Starch Imbued Handsheets - Challenges using impulse drying and methods for image analysis

Thabot, Arnaud Henri 15 November 2007
After some 30 years of experimentation and development, impulse drying is now considered a well-understood technology and a strong candidate in the ongoing effort to save energy in the paper industry; the drying section is the most expensive part of the papermaking process. However, this promising technology has a major disadvantage that has blocked its industrial implementation. Paper, a porous material of variable compressibility, experiences a sudden release of energy at the nip opening during impulse drying. Under such high-intensity conditions (in both temperature and pressure), the fiber mat tends to delaminate, and this web disruption is a critical obstacle to impulse drying. This thesis proposes a new approach to the problem. In recent years the technology itself has been the main target of improvement, and much progress has been made in controlling the energy release (heat-transfer control, material coating). The novel idea here is instead to investigate the inner structure of the paper once it has been coated with a substantial amount of starch (up to 10 or 20% of the relative basis weight). Starch is widely used in industry and is known to expand at high temperature. Both the relative-strength and bulking effects are therefore investigated in this thesis, using numerous experiments at varying temperatures and pressures together with ultrasonic testing and image analysis. These experiments make it possible to characterize heat transfer and mass transport in the coated medium while achieving promising results in terms of strength and bulk, which are finally examined using scanning electron microscopy as a first step toward a pore-expansion model for starch-imbued handsheets.
762

High Hydrostatic Pressure Induced Inactivation Kinetics of E. coli O157:H7 and S. aureus in Carrot Juice and Analysis of Cell Volume Change

Pilavtepe, Mutlu 01 December 2007
The main objective of this study was to determine the pressure-induced inactivation mechanism of pressure-resistant Escherichia coli O157:H7 933 and Staphylococcus aureus 485 in a low-acid food. First, inactivation curves of the pathogens were obtained at 200 to 400 MPa and 40 °C in peptone water and in carrot juice. First-order and Weibull models were fitted; the Weibull model described the inactivation curves of both pathogens more accurately than the first-order model, revealing that food systems can exert either a protective or a sensitizing effect on microorganisms. Carrot juice had a protective effect on E. coli O157:H7 but a sensitizing effect on S. aureus, attributable to naturally occurring constituents or phytoalexins in carrot roots that can be toxic. Second, scanning electron microscopy (SEM) and fluorescence microscopy images of the studied pathogens were taken. Purpose-built software was used to analyze the SEM images and calculate the change in the view area and volume of the cells, and the membrane integrity of pressurized cells was examined in the fluorescence microscopy images. The increase in the average view area and volume of both pathogens was significant at the highest pressure levels studied. This increase can be explained by modification of membrane properties, i.e., disruption or increased permeability, loss of membrane integrity, denaturation of membrane-bound proteins, and a pressure-induced phase transition of the membrane lipid bilayer. The change in the volume and view area of the microorganisms adds another dimension to the understanding of the inactivation mechanisms of microbial cells by high hydrostatic pressure (HHP).
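The Weibull survival model compared above can be sketched in a few lines. This is a minimal illustration under assumptions, not the thesis's actual fitting code: it uses the common Mafart parameterization log10(N/N0) = -(t/δ)^p, linearizes it, and recovers the parameters from noiseless synthetic data; the function names and hold times are invented for the example.

```python
import numpy as np

def weibull_survival(t, delta, p):
    # Mafart-form Weibull model: log10(N/N0) = -(t/delta)**p
    return -(t / delta) ** p

def fit_weibull(t, log10_survival):
    # Linearize: log(-log10 S) = p*log t - p*log delta, then least squares
    y = np.log(-np.asarray(log10_survival, dtype=float))
    x = np.log(np.asarray(t, dtype=float))
    p, intercept = np.polyfit(x, y, 1)
    delta = np.exp(-intercept / p)
    return delta, p

# Synthetic pressure-hold times (minutes) generated from a known curve
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
true_delta, true_p = 3.0, 0.6
logS = weibull_survival(t, true_delta, true_p)
delta_hat, p_hat = fit_weibull(t, logS)
```

A shape parameter p < 1, as recovered here, corresponds to the concave-upward survival curves with pressure-resistant tails that make the Weibull model outperform first-order kinetics.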
763

Topics in living cell multiphoton laser scanning microscopy (MPLSM) image analysis

Zhang, Weimin 30 October 2006
Multiphoton laser scanning microscopy (MPLSM) is an advanced fluorescence imaging technology that produces less noisy microscope images and minimizes damage to living tissue. The MPLSM images in this research show dehydroergosterol (DHE, a fluorescent sterol whose behavior closely mimics that of cholesterol in lipoproteins and membranes) in the plasma membrane region of living cells. The objective is to use statistical image analysis to describe how cholesterol is distributed on a living cell's membrane. The statistical methods applied in this research include image segmentation/classification and spatial analysis. For image segmentation, we design a supervised learning method that combines a smoothing technique with rank statistics; this approach is especially useful when only very limited information is available about the classes to be segmented. We also apply unsupervised learning methods to the image data. For spatial analysis, we explore the spatial correlation of the segmented data with a Monte Carlo test. Our research shows that the distributions of DHE exhibit a spatially aggregated pattern. We fit two aggregated point-pattern models to the data: an area-interaction process model and a Poisson cluster process model. For the area-interaction process model, we design algorithms for the maximum pseudo-likelihood estimator and the Monte Carlo maximum likelihood estimator in a lattice-data setting. For the Poisson cluster process, parameters are estimated with a method for implicit statistical models. A group of simulation studies shows that the Monte Carlo maximum likelihood method produces consistent parameter estimates. Goodness-of-fit tests fail to reject either model; we propose using the area-interaction process model in further research.
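A Monte Carlo test for spatial aggregation of the kind described above can be illustrated with a simple nearest-neighbour statistic compared against simulations of complete spatial randomness (CSR). The statistic, function names, and synthetic clustered pattern below are assumptions for illustration, not the thesis's actual procedure:

```python
import numpy as np

def mean_nn_distance(pts):
    # Mean nearest-neighbour distance of a 2-D point pattern
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def csr_monte_carlo_test(pts, n_sim=199, seed=0):
    # An observed NN distance smaller than CSR simulations suggests aggregation
    rng = np.random.default_rng(seed)
    obs = mean_nn_distance(pts)
    sims = [mean_nn_distance(rng.random((len(pts), 2))) for _ in range(n_sim)]
    # One-sided Monte Carlo p-value: rank of the observed statistic
    p = (1 + sum(s <= obs for s in sims)) / (n_sim + 1)
    return obs, p

# Clustered pattern on the unit square: offspring around a few parent centres
rng = np.random.default_rng(1)
parents = rng.random((5, 2))
pts = np.clip(parents.repeat(20, axis=0) + 0.02 * rng.standard_normal((100, 2)), 0, 1)
obs, p = csr_monte_carlo_test(pts)
```

A small p-value rejects CSR in favour of aggregation, which is the qualitative conclusion the thesis reaches for the DHE distributions before fitting the two cluster-process models.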
764

Widefield fluorescence correlation spectroscopy

Nicovich, Philip R. 26 March 2010
Fluorescence correlation spectroscopy has become a standard technique in modern biophysics and single-molecule spectroscopy research. Presented here is a novel widefield extension of the established single-point technique. Flow in microfluidic devices was used as a model system for microscopic motion, and widefield fluorescence correlation spectroscopy was used to map flow profiles in three dimensions. The technique is shown to be tolerant of low signal strength, producing accurate flow maps from image data with signal-to-noise ratios as low as 1.4, even with single dye-labeled antibodies as flow tracers. With proper instrumentation, flow along the axial direction can also be measured. Widefield fluorescence correlation spectroscopy has also been used to produce super-resolution confocal microscopic images that exploit the microsecond single-molecule blinking dynamics of fluorescent silver clusters. A method for extracting the fluorescence modulation signal, as well as the synthesis of several novel noble-metal fluorophores, is also presented.
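At its core, a fluorescence correlation curve is a normalized autocorrelation of intensity fluctuations, G(τ) = ⟨δI(t)·δI(t+τ)⟩ / ⟨I⟩². A minimal sketch follows; the function name and the AR(1) test signal are assumptions for illustration, not the author's analysis code:

```python
import numpy as np

def fcs_autocorrelation(intensity, max_lag):
    # Normalized intensity autocorrelation: G(tau) = <dI(t) dI(t+tau)> / <I>^2
    I = np.asarray(intensity, dtype=float)
    dI = I - I.mean()
    n = len(I)
    g = np.array([np.mean(dI[: n - lag] * dI[lag:]) for lag in range(1, max_lag + 1)])
    return g / I.mean() ** 2

# Synthetic trace: correlated fluctuations (AR(1)) on a constant background,
# mimicking molecules dwelling in the observation volume for several frames
rng = np.random.default_rng(0)
n = 20000
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()
trace = 100.0 + x
g = fcs_autocorrelation(trace, 50)
```

The decay of G(τ) with lag encodes the dwell time of fluorophores in the observation volume; in the widefield variant, one such curve is computed per pixel to build a spatial map of the dynamics.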
765

An exemplar-based approach to search-assisted computer-aided diagnosis of pigmented skin lesions

Zhou, Zhen Hao (Howard) 15 November 2010
Over the years, exemplar-based methods have yielded significant improvements over their model-based counterparts in image synthesis applications. Notably, texture synthesis algorithms using an exemplar-based approach have succeeded where traditional stochastic methods failed. As an illustrative example, I will present an exemplar-based approach that yields substantial benefits for user-guided terrain synthesis using Digital Elevation Models (DEMs), a success realized by exploiting structural properties of natural terrain. Beyond their proliferation in image synthesis, exemplar-based methods are also gaining popularity in image analysis applications as annotated image datasets become increasingly available. This thesis addresses the intersection between exemplar-based analysis and the problem of content-based image retrieval (CBIR). A basic problem in CBIR is the process by which the user refines the search criteria by manipulating the returned exemplars. Exemplar-based analysis is particularly well suited to query refinement because of its interpretability and the ease with which it can be incorporated into an interactive system. I investigate this connection in the domain of computer-aided diagnosis (CAD) of dermatological images and demonstrate that exemplar-based approaches in CBIR can be effective for diagnosing pigmented skin lesions (PSLs). I will present an exemplar-based algorithm for segmenting PSLs in dermoscopic images, along with a generalized representation of dermoscopic features for detection and matching. This representation not only leads to an exemplar-based PSL diagnosis scheme but also enables interactive region-of-interest retrieval, including a relevance-feedback mechanism that facilitates more flexible query-by-example analysis. Finally, I will assess the benefit of this CBIR-CAD approach through both quantitative evaluations and user studies.
766

Real-time Object Recognition on a GPU

Pettersson, Johan January 2007
Shape-based matching (SBM) is a well-known method for 2D object recognition that is fairly robust against illumination variations, noise, clutter, and partial occlusion. The objects to be recognized can be translated, rotated, and scaled. The translation of an object is determined by evaluating a similarity measure at all possible positions (similar to cross-correlation); the similarity measure is based on dot products between normalized gradient directions in edges. Rotation and scale are determined by evaluating all possible combinations, spanning a huge search space, and a resolution pyramid is used as a search heuristic to reach real-time performance. In standard SBM, a model consisting of normalized edge gradient directions is constructed for every possible combination of rotation and scale. We avoid this by using (bilinear) interpolation in the search gradient map, which greatly reduces the amount of storage required. SBM is highly parallelizable by nature, and with our suggested improvements it becomes well suited to running on a GPU. This has been implemented and tested, and the results clearly outperform our reference CPU implementation, by a factor of several hundred. The implementation is also very scalable and will readily benefit from future devices. Extensive evaluation material and tools for evaluating object recognition algorithms have been developed, and the implementation is evaluated and compared against two commercial 2D object recognition solutions. The results show that the method copes very well with the distortions listed above and competes well with its commercial counterparts.
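The translation search with the gradient-direction similarity measure described above can be sketched roughly as follows. This is a toy illustration with hypothetical names, omitting the rotation/scale search and resolution pyramid, and is not the thesis's GPU implementation:

```python
import numpy as np

def unit(v, eps=1e-9):
    # Normalize gradient vectors to unit length
    n = np.linalg.norm(v, axis=-1, keepdims=True)
    return v / np.maximum(n, eps)

def sbm_match(model_pts, model_dirs, search_map):
    # Evaluate the SBM similarity (mean dot product of unit gradient
    # directions) at every feasible translation; 1.0 is a perfect match.
    H, W, _ = search_map.shape
    h = int(model_pts[:, 0].max()) + 1
    w = int(model_pts[:, 1].max()) + 1
    best_score, best_pos = -1.0, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            dirs = search_map[model_pts[:, 0] + r, model_pts[:, 1] + c]
            score = float(np.mean(np.sum(model_dirs * dirs, axis=1)))
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_score, best_pos

rng = np.random.default_rng(0)
model_pts = np.array([[0, 0], [0, 2], [1, 1], [2, 0], [2, 2]])
model_dirs = unit(rng.standard_normal((5, 2)))
search_map = unit(rng.standard_normal((12, 12, 2)))
search_map[model_pts[:, 0] + 3, model_pts[:, 1] + 4] = model_dirs  # plant model at (3, 4)
score, pos = sbm_match(model_pts, model_dirs, search_map)
```

Because the score is a mean of dot products, it degrades gracefully under occlusion (missing edge points simply lower the average), which is the robustness property the abstract highlights.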
767

Vehicle Detection in Monochrome Images

Lundagårds, Marcus January 2008
The purpose of this master's thesis was to study computer vision algorithms for vehicle detection in monochrome images captured by a monocular camera. The work has mainly focused on detecting rear views of cars in daylight conditions. Previous work in the literature has been reviewed, and algorithms using edges, shadows, and motion as vehicle cues have been modified, implemented, and evaluated. This work presents a combination of multiscale edge-based detection and shadow-based detection as the most promising algorithm, with a positive detection rate of 96.4% on vehicles at distances between 5 m and 30 m. For the algorithm to work in a complete vehicle detection system, future work should focus on developing a vehicle classifier to reject false detections.
768

Visual Observation of Human Emotions / L'observation visuelle des émotions humaines

Jain, Varun 30 March 2015
In this thesis we focus on the development of methods and techniques to infer affect from visual information. We concentrate on facial expression analysis, since the face is one of the least occluded parts of the body and facial expressions are among the most visible manifestations of affect. We explore the different psychological theories of affect and emotion, different ways to represent and classify emotions, and the relationship between facial expressions and underlying emotions. We present multiscale Gaussian derivatives as an image descriptor for head pose estimation and smile detection before applying them to affect sensing. Principal Component Analysis is used for dimensionality reduction, while Support Vector Machines are used for classification and regression. We are able to employ the same simple and effective architecture for head pose estimation, smile detection, and affect sensing. We also demonstrate that multiscale Gaussian derivatives not only perform better than the popular Gabor filters but are also computationally less expensive. While performing these experiments we discovered that multiscale Gaussian derivatives do not provide a sufficiently discriminative image description when the face is only partly illuminated. We overcome this problem by combining Gaussian derivatives with Local Binary Pattern (LBP) histograms. This combination achieves state-of-the-art results for smile detection on the benchmark GENKI database, which contains images of people "in the wild" collected from the internet. We use the same description method for face recognition on the CMU-PIE database and the challenging extended YaleB database, and our results compare well with the state of the art. For face recognition we use metric learning for classification, adopting the Minkowski distance as the similarity measure. We find that the L1 and L2 norms are not always the optimal distance metrics; the optimum is often an Lp norm where p is not an integer. Lastly, we develop a multi-modal system for depression estimation with audio and video information as input. We use Local Binary Patterns - Three Orthogonal Planes (LBP-TOP) features to capture intra-facial movements in the videos and dense trajectories for macro movements such as those of the head and shoulders. These video features, along with Low-Level Descriptor (LLD) audio features, are encoded using Fisher vectors, and finally a Support Vector Machine is used for regression. We find that the LBP-TOP features encoded with Fisher vectors alone are enough to outperform the baseline method on the Audio Visual Emotion Challenge (AVEC) 2014 database. We thereby present an effective technique for depression estimation that can easily be extended to other slowly varying aspects of emotion, such as mood.
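The Minkowski (Lp) distance used for metric learning above is straightforward to sketch. A minimal example, with illustrative names, showing that a fractional order p yields a distance between the L1 and L2 values, which is why a non-integer p can be the optimum:

```python
import numpy as np

def minkowski(u, v, p):
    # Lp (Minkowski) distance between feature vectors; p need not be an integer
    d = np.abs(np.asarray(u, dtype=float) - np.asarray(v, dtype=float))
    return float(np.sum(d ** p) ** (1.0 / p))

d1 = minkowski([0, 0], [3, 4], 1)     # city-block (L1)
d2 = minkowski([0, 0], [3, 4], 2)     # Euclidean (L2)
d15 = minkowski([0, 0], [3, 4], 1.5)  # fractional order between the two
```

For fixed vectors the Lp distance decreases monotonically as p grows, so sweeping p over a real-valued grid (rather than only integers) is a natural way to search for the best-performing metric.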
769

Quantitative metallography tracking and analysis for the scanning laser epitaxy process applied to CMSX-4 and Rene-80 nickel-based superalloys

Gambone, Justin J. 14 November 2012
This thesis develops digital algorithms for the microstructural analysis of metallic deposits produced through Scanning Laser Epitaxy (SLE). SLE is a new direct digital manufacturing (DDM) technique that allows the creation of three-dimensional nickel-based superalloy components using an incremental layering system. Using a bed of powder placed on an underlying substrate and a laser propagating a melt pool across the sample, a layer of material can be added, and through careful control of the SLE settings various microstructures can be created or extended from the substrate. To create parts within specified microstructure tolerances, the ideal SLE settings must be located through experimental runs, with each material needing different operating parameters. This thesis improves the microstructural analysis with a program that tracks various features in samples produced by SLE and a data analysis program that provides greater insight into how the SLE settings influence the microstructure. Using these programs, optimal SLE settings can be isolated faster, with greater insight into the process than is currently possible. The microstructure recognition program has three key aspects. First, it evaluates major characteristics that typically arise during the SLE process, such as sample deformation, the properties of a single-crystal deposit, and the total deposit height. Second, it saves the data and all relevant test settings in a format that allows future analysis and comparison with other samples. Finally, its execution is robust yet rapid, so it can be applied to entire runs of SLE samples, which can number up to 25, within a week. The program is designed for the types of microstructure found in CMSX-4 and Rene-80, specifically single-crystal and equiaxed regions.
The data fitting program uses optimally piecewise-fitted equations to find relationships between the SLE settings and the microstructure traits. The data are piecewise fitted because the SLE process is a two-stage procedure, establishing and then propagating the melt pool across a sample, which creates distinct microstructure transitions. Using the information gathered, graphs provide a visual aid to help the experimenter understand the process, and a design of experiments (DOE) is performed using sequential analysis, allowing previously run samples to influence future trials and reducing the amount of material used while still providing great insight into the parameter space. Access to microstructure data across the entire sample, combined with an advanced data fitting program that accurately relates the data to the SLE settings, allows features to be tracked and optimized in ways that were not previously possible.
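A piecewise fit with an unknown breakpoint, of the kind motivated by the two-stage melt-pool behavior above, can be sketched as an exhaustive breakpoint search over two independent line fits. This is a toy version with assumed names and synthetic data, not the thesis's fitting program:

```python
import numpy as np

def two_segment_fit(x, y):
    # Exhaustively search for the breakpoint that minimizes the total squared
    # error of two independent straight-line fits, one per regime.
    best_err, best_k = np.inf, None
    for k in range(2, len(x) - 1):
        err = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)
            err += float(np.sum((np.polyval(coef, xs) - ys) ** 2))
        if err < best_err:
            best_err, best_k = err, k
    return best_k

# Synthetic microstructure response with an abrupt regime change at x = 10
x = np.arange(20.0)
y = np.where(x < 10, x, 50.0 - 2.0 * x)
k = two_segment_fit(x, y)
```

With only one breakpoint the exhaustive search is O(n) line fits; for real runs, the recovered breakpoint marks the transition from melt-pool establishment to steady propagation.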
770

Assessment of Image Analysis as a Measure of Scleractinian Coral Growth

Gustafson, Steven K. 29 March 2006
Image analysis was used to measure the basal areas of selected colonies of Montastraea annularis and Porites astreoides, following the colonies over a three-year period from 2002 to 2004. Existing digital images of permanently marked quadrats in the Caye Caulker Marine Reserve, Belize, were selected based on image quality and the availability of images of the selected quadrats for all three years. Annual growth rates were calculated from the basal-area measurements. Mean growth rates (radial skeletal extension) for M. annularis and P. astreoides were 0.02 cm/yr and -0.20 cm/yr, respectively. The basal-area measurements showed a large degree of variability: increases were approximately balanced by declines, giving an impression of stasis. After removing negative values and applying a 25% correction to allow comparison with vertical growth rates, mean values increased to ~0.5 cm/yr for M. annularis and ~0.8 cm/yr for P. astreoides. Basal area as a growth measure was compared with the methods used in earlier studies, and a new growth index based on basal area and perimeter was proposed and modeled. This index can be used to report growth measured from basal areas and other comparable methods; it also captures negative growth, or mortality, which conventional methods cannot.
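Converting two basal-area measurements into a radial extension rate can be sketched under a simplifying circular-base assumption (r = sqrt(A/π)); the thesis's proposed index also incorporates perimeter, which this illustrative version, with invented names, omits:

```python
import math

def radial_extension_rate(area_start_cm2, area_end_cm2, years):
    # Radial skeletal extension (cm/yr) inferred from two basal-area
    # measurements, assuming a roughly circular base so r = sqrt(A / pi).
    # A negative value indicates basal retreat, i.e. partial mortality.
    r_start = math.sqrt(area_start_cm2 / math.pi)
    r_end = math.sqrt(area_end_cm2 / math.pi)
    return (r_end - r_start) / years

# A colony whose equivalent radius grows from 5.0 cm to 5.5 cm in one year
rate = radial_extension_rate(25.0 * math.pi, 30.25 * math.pi, 1.0)
```

Because the rate is signed, shrinking colonies produce negative values, which is the mortality signal that conventional extension measures discard.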
