  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

A quantitative description at multiple scales of observation of accumulation and displacement patterns in single and dual-species biofilms

Klayman, Benjamin Joseph. January 2007 (has links) (PDF)
Thesis (Ph. D.)--Montana State University--Bozeman, 2007. / Typescript. Chairperson, Graduate Committee: Anne Camper. Includes bibliographical references (leaves 104-113).
52

Video-based nearshore depth inversion using WDM method

Hampson, Robert W. January 2009 (has links)
Thesis (M.C.E.)--University of Delaware, 2008. / Principal faculty advisor: James T. Kirby, Dept. of Civil & Environmental Engineering. Includes bibliographical references.
53

Algorithms for Applied Digital Image Cytometry

Wählby, Carolina January 2003 (has links)
Image analysis can provide genetic as well as protein-level information from fluorescence-stained fixed or living cells without losing tissue morphology. Analysis of the spatial, spectral, and temporal distribution of fluorescence can reveal important information at the single-cell level. This is in contrast to most other methods for cell analysis, which do not account for inter-cellular variation. Flow cytometry enables single-cell analysis, but tissue morphology is lost in the process, and temporal events cannot be observed. The need for reproducibility, speed, and accuracy calls for computerized methods for cell image analysis, i.e., digital image cytometry, which is the topic of this thesis. Algorithms for cell-based screening are presented and applied to evaluate the effect of insulin on translocation events in single cells. Algorithms of this type could form the basis of high-throughput drug-screening systems, and they have been developed in close cooperation with the biomedical industry. Image-based studies of cell cycle proteins in cultured cells and tissue sections show that cyclin A has a well-preserved expression pattern, while the expression pattern of cyclin E is disturbed in tumors. The results indicate that analysis of cyclin E expression provides additional valuable information for cancer prognosis, not visible by standard tumor grading techniques. Complex chains of events and interactions can be visualized by simultaneous staining of different proteins involved in a process. A combination of image analysis and staining procedures that allows sequential staining and visualization of large numbers of different antigens in single cells is presented. Preliminary results show that at least six different antigens can be stained in the same set of cells. All image cytometry requires robust segmentation techniques. Clustered objects, background variation, and internal intensity variations complicate the segmentation of cells in tissue. 
Algorithms for segmentation of 2D and 3D images of cell nuclei in tissue by combining intensity, shape, and gradient information are presented. The algorithms and applications presented show that fast, robust, and automatic digital image cytometry can increase the throughput and power of image-based single-cell analysis.
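As a rough illustration of the intensity cue used in nuclei segmentation, here is a minimal thresholding-plus-labeling sketch. The thesis's algorithms additionally combine shape and gradient information; the threshold heuristic and the synthetic image below are illustrative assumptions only:

```python
import numpy as np
from scipy import ndimage

def segment_nuclei(image, threshold=None):
    """Toy intensity-based segmentation: threshold the image, then
    label connected components.  Real cytometry pipelines add shape
    and gradient cues to split clustered nuclei."""
    if threshold is None:
        threshold = image.mean() + image.std()  # crude global threshold
    mask = image > threshold
    labels, n = ndimage.label(mask)
    return labels, n

# Synthetic "image": two bright blobs on a dark background.
img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0
img[40:50, 40:50] = 1.0
labels, n = segment_nuclei(img)
print(n)  # two separated "nuclei"
```

Each labeled region can then be measured (area, centroid, total intensity) to produce per-cell statistics, which is the essence of image cytometry.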
54

Chart Detection and Recognition in Graphics Intensive Business Documents

Svendsen, Jeremy Paul 24 December 2015 (has links)
Document image analysis involves the recognition and understanding of document images using computer vision techniques. The research described in this thesis relates to the recognition of graphical elements of a document image. More specifically, an approach for recognizing various types of charts as well as their components is presented. This research has many potential applications. For example, a user could redraw a chart in a different style or convert the chart to a table, without possessing the original information that was used to create the chart. Another application is the ability to find information, which is only presented in the chart, using a search engine. A complete solution to chart image recognition and understanding is presented. The proposed algorithm extracts enough information such that the chart can be recreated. The method is a syntactic approach which uses mathematical grammars to recognize and classify every component of a chart. There are two grammars presented in this thesis, one which analyzes 2D and 3D pie charts and the other which analyzes 2D and 3D bar charts, as well as line charts. The pie chart grammar isolates each slice and its properties whereas the bar and line chart grammar recognizes the bars, indices, gridlines and polylines. The method is evaluated in two ways. A qualitative approach redraws the chart for the user, and a semi-automated quantitative approach provides a complete analysis of the accuracy of the proposed method. The qualitative analysis allows the user to see exactly what has been classified correctly. The quantitative analysis gives more detailed information about the strengths and weaknesses of the proposed method. The results of the evaluation process show that the accuracy of the proposed methods for chart recognition is very high. / Graduate
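The syntactic, grammar-driven recognition described above can be illustrated with a toy recursive-descent parser over pre-extracted graphical tokens. The token names and the single production rule below are illustrative assumptions, not the grammars actually developed in the thesis:

```python
# Toy "chart grammar": bar_chart := AXIS AXIS GRIDLINE* BAR+
# Tokens are (kind, value) pairs assumed to come from a lower-level
# graphics-extraction stage.

def parse_bar_chart(tokens):
    """Return the list of bar heights if the token stream matches the
    bar-chart production, else None."""
    i = 0
    def accept(kind):
        nonlocal i
        if i < len(tokens) and tokens[i][0] == kind:
            i += 1
            return True
        return False
    if not (accept("AXIS") and accept("AXIS")):
        return None          # a bar chart needs two axes
    while accept("GRIDLINE"):
        pass                 # gridlines are optional and repeatable
    bars = []
    while i < len(tokens) and tokens[i][0] == "BAR":
        bars.append(tokens[i][1])
        i += 1
    return bars if bars and i == len(tokens) else None

tokens = [("AXIS", None), ("AXIS", None), ("GRIDLINE", None),
          ("BAR", 3.0), ("BAR", 5.5), ("BAR", 2.25)]
print(parse_bar_chart(tokens))
```

Because the parse recovers the bar heights themselves, the chart could be redrawn in another style or converted to a table, as the abstract suggests.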
55

Monitoramento por microscopia óptica e processamento digital de imagens do processo de conformação cerâmica por conformação com amidos comerciais

Cruz, Tessie Gouvea da [UNESP] 11 September 2007 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / A methodology is proposed to establish standards for the consolidation-casting ceramics process with commercial starches, based on digital image processing. Hot-stage light microscopy was used to study porous channel formation and the gelling process, evaluating the behavior of the starches as temperature rises. Depth-from-focus reconstruction and quantitative microscopy were applied to characterize pores during their formation. As a supplementary result, a new method for statistics-based spatial characterization of the three-dimensional pore distribution was developed. It maps porosity concentrations and provides 3-D visualization of those regions.
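The depth-from-focus reconstruction mentioned above can be sketched as follows: from a stack of images focused at different heights, each pixel keeps the slice where its local contrast is highest. The sharpness measure (neighborhood variance), window size, and synthetic slices are illustrative assumptions, not the thesis's actual pipeline:

```python
import numpy as np

def local_variance(img, k=3):
    """Variance of each pixel's k x k neighborhood (reflect-padded),
    used here as a simple per-pixel sharpness measure."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return windows.var(axis=(-1, -2))

def depth_from_focus(stack):
    """stack: (n_slices, H, W) focus series.  Returns the per-pixel
    index of the sharpest slice, i.e., a coarse depth map."""
    sharpness = np.stack([local_variance(s) for s in stack])
    return sharpness.argmax(axis=0)

# Two synthetic slices: slice 0 is "in focus" (textured) on the left
# half, slice 1 on the right half.
h = w = 8
s0 = np.zeros((h, w)); s0[:, :w // 2] = np.tile([0.0, 1.0], (h, w // 4))
s1 = np.zeros((h, w)); s1[:, w // 2:] = np.tile([0.0, 1.0], (h, w // 4))
depth = depth_from_focus(np.stack([s0, s1]))
```

The resulting index map is what an extended-focus reconstruction would use to assemble a single all-in-focus image and to estimate pore depth.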
56

Volume measurement of wood disks

Dånmark, Anders January 2013 (has links)
At the Department of Forest Products at the Swedish University of Agricultural Sciences, different metrics for wood are used. The volume of wood disks is measured using Archimedes' principle. There are concerns about how accurate this measurement is, and a different measuring system is wanted. This thesis has investigated the possibility of measuring the disks' volumes with image analysis. The recovery error should be less than 1% of the actual volume. In general, there are two classes of methods for recovering an object with image analysis: active and passive. Active methods usually require simpler algorithms but more expensive equipment than passive methods. Different methods for measuring objects' volumes have been evaluated, and the chosen method was ``shape from silhouette''. Shape from silhouette is a passive method that uses only the silhouettes of an object from multiple views to recover its volume. Passive methods have one drawback: they can only recover the visual hull of an object, and the wood disks can be slightly concave. Due to the questionable accuracy of the current measurement method, it was still deemed possible to achieve at least equal performance. The volume-measuring algorithm was first tested in two simulations on a sphere to determine its performance with different voxel sizes and different numbers of images. The algorithm performed well, and an error of less than 1% was achieved with a sphere. A third simulation was performed using a simulated wood disk, a much more complex object, and 5% accuracy was achieved. Finally, an experiment on real images was performed. This experiment did, however, fail due to the low-quality imaging setup. The conclusion of this thesis is that it is not possible to achieve less than 1% error in the recovered volume using the shape-from-silhouette technique.
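The shape-from-silhouette idea, and its visual-hull over-estimation, can be demonstrated with a small NumPy voxel-carving sketch. The three axis-aligned orthographic views, grid resolution, and sphere radius are illustrative assumptions, not the thesis's camera setup:

```python
import numpy as np

# Carve a voxel grid of a sphere using three orthographic silhouettes
# (top, front, side).  With only three views, the visual hull of a
# sphere is the intersection of three cylinders, which is larger than
# the sphere itself -- the same limitation the thesis notes for
# slightly concave wood disks.

n = 100                       # voxels per axis
r = 0.8                       # sphere radius in the [-1, 1] grid
ax = np.linspace(-1, 1, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")

sphere = x**2 + y**2 + z**2 <= r**2
voxel = (2.0 / n) ** 3        # volume of one voxel

# Orthographic silhouettes along each axis.
sil_z = sphere.any(axis=2)    # top view  -> (x, y) mask
sil_y = sphere.any(axis=1)    # front view -> (x, z) mask
sil_x = sphere.any(axis=0)    # side view -> (y, z) mask

# Carve: keep only voxels whose projection lies inside every silhouette.
hull = sil_z[:, :, None] & sil_y[:, None, :] & sil_x[None, :, :]

true_vol = 4.0 / 3.0 * np.pi * r**3
hull_vol = hull.sum() * voxel
print(hull_vol / true_vol)    # > 1: the visual hull over-estimates
```

Adding more views shrinks the hull toward the true shape, but concavities can never be carved away, which is why the thesis flags concave disks as a fundamental limit of the method.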
57

Automated Seed Point Selection in Confocal Image Stacks of Neuron Cells

Bilodeau, Gregory Peter 25 July 2013 (has links)
This paper provides a fully automated method of finding high-quality seed points in 3D space from a stack of images of neuron cells. These seed points may then be used as initial starting points for automated local tracing algorithms, removing a time-consuming user interaction required by current methodologies. Methods to collapse the search space and provide rudimentary topology estimates are also presented. / Master of Science
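A bare-bones version of automated seed-point selection in a 3D stack is local-maximum detection with an intensity floor. The neighborhood size and threshold below are illustrative; the paper's method adds quality scoring and search-space collapsing on top of something like this:

```python
import numpy as np
from scipy import ndimage

def find_seed_points(stack, min_intensity=0.5, size=3):
    """Toy seed-point selection for a 3D image stack: a voxel is a
    seed if it equals the maximum of its size^3 neighborhood and is
    brighter than min_intensity.  Thresholds are illustrative."""
    local_max = ndimage.maximum_filter(stack, size=size) == stack
    seeds = local_max & (stack > min_intensity)
    return np.argwhere(seeds)   # (z, y, x) coordinates of seeds

# Synthetic stack: two bright point-like structures on a dark volume.
stack = np.zeros((10, 10, 10))
stack[2, 3, 4] = 1.0
stack[7, 6, 5] = 0.9
seeds = find_seed_points(stack)
print(seeds)
```

Each returned coordinate could then initialize a local tracing algorithm, replacing the manual seed clicks the abstract mentions.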
58

Improving the Performance of Hyperspectral Target Detection

Ma, Ben 15 December 2012 (has links)
This dissertation develops new approaches for improving the performance of hyperspectral target detection. Different aspects of hyperspectral target detection are reviewed and studied to effectively distinguish target features from background interference. The contributions of this dissertation are as follows. 1) Propose an adaptive background characterization method that integrates region segmentation with target detection. The experiments consider not only unstructured matched-filter-based detectors but also two hybrid detectors that combine fully constrained least-squares abundance estimation with statistical tests (i.e., the adaptive matched subspace detector and the adaptive cosine/coherent detector). The experimental results demonstrate that with local adaptive background characterization, background clutter can be suppressed better than by the original algorithms with global characterization. 2) Propose a new approach to estimating abundance fractions based on the linear spectral mixture model for hybrid structured and unstructured detectors. The new approach uses a sparseness constraint to estimate abundance fractions, and it achieves better performance than the popular non-negative and fully constrained methods in situations where the background endmember spectra are not accurately acquired or estimated, which is very common in practical applications. To improve dictionary incoherence, band selection is proposed to strengthen the sparseness-constrained linear unmixing. 3) Propose a random-projection-based dimensionality reduction and decision fusion approach for detection improvement. Such a data-independent dimensionality reduction process has very low computational cost, and it is capable of preserving the original data structure. Target detection can be robustly improved by decision fusion over multiple runs of random projection. 
A graphics processing unit (GPU) parallel implementation scheme is developed to expedite the overall process. 4) Propose nonlinear dimensionality reduction approaches for target detection. Auto-associative-neural-network-based Nonlinear Principal Component Analysis (NLPCA) and Kernel Principal Component Analysis (KPCA) are applied to the original data to extract principal components as features for target detection. The results show that NLPCA and KPCA can efficiently suppress trivial spectral variations and perform better in target detection than the traditional linear version of PCA. Their performance may even exceed that of directly kernelized detectors.
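The random-projection-plus-decision-fusion idea in contribution 3 can be sketched on synthetic data: each run projects the pixels to a low dimension, runs a simple matched filter, and casts a binary vote; the fused decision is a majority vote. The scene, signature, noise level, and thresholds are all illustrative assumptions, not the dissertation's data or exact detector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hyperspectral scene: 200 background pixels and 20 target pixels
# over 100 bands.
bands = 100
target_sig = np.linspace(0.2, 1.0, bands)
background = rng.normal(0.0, 0.2, (200, bands))
targets = target_sig + rng.normal(0.0, 0.2, (20, bands))
pixels = np.vstack([background, targets])
truth = np.r_[np.zeros(200, bool), np.ones(20, bool)]

def matched_filter_scores(X, sig):
    # Classic matched filter: whiten by the (regularized) sample
    # covariance, then correlate with the mean-removed signature.
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-3 * np.eye(X.shape[1])
    w = np.linalg.solve(cov, sig - mu)
    return (X - mu) @ w

# Decision fusion over several data-independent random projections.
runs, k = 11, 20
votes = np.zeros(len(pixels))
for _ in range(runs):
    P = rng.normal(0.0, 1.0 / np.sqrt(k), (bands, k))  # random projection
    s = matched_filter_scores(pixels @ P, target_sig @ P)
    votes += s > np.quantile(s, 0.9)   # flag the top 10% each run
fused = votes / runs > 0.5             # majority vote across runs

accuracy = (fused == truth).mean()
print(accuracy)
```

Fusing several cheap projected-domain detections stabilizes the result against any single unlucky projection, which is the robustness argument made in the abstract.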
59

Insights from the Use of a Standard Taxonomy for Remote Sensing Analysis

Kari, Swapna 11 December 2004 (has links)
Knowledge acquisition is concerned with finding and structuring knowledge so that it can be used in a variety of intelligent decision-making tools. Knowledge of a domain can be encoded as a taxonomy, i.e., a hierarchically organized set of categories. The relationships within the hierarchy can be of different kinds, depending on the application, and a typical taxonomy includes several different kinds of relations. Thus, taxonomies play an important role in analyzing and modeling knowledge. The focus of this study is deriving knowledge from a standard taxonomic structure in the remote sensing domain. The various methodological channels adopted by remote sensing analysts to produce different information products normally follow definite processes, which can be examined, along with their context (spectral, spatial, temporal), through the taxonomic approach. This allows users to assess the applicability of a methodology to a particular area of interest, and it also aids upper-level decision-makers in understanding why different approaches might produce different outputs from the same source data. Previous work by a number of multidisciplinary researchers in analyzing remote sensing data has been used in this study to examine the structure of their methodologies from a taxonomic perspective. The analysis of the developed taxonomies clearly indicates a definite structure in the underlying analysis procedures and shows potential for the development of systems that automate them.
60

Blasting Design Using Fracture Toughness and Image Analysis of the Bench Face and Muckpile

Kim, Kwangmin 21 September 2006 (has links)
Few studies of blasting exist because of difficulties in obtaining reliable fragmentation data or even obtaining consistent blasting results. Many researchers have attempted to predict blast fragmentation using the Kuz-Ram model, an empirical fragmentation model suggested by Cunningham. The purpose of this study is to develop an empirical model relating specific explosives energy (ESE) to the blasting fragmentation reduction ratio (RR) and rock fracture toughness (KIC). The reduction ratio was obtained by analyzing the bench-face block size distribution and the muck fragment size distribution using image analysis. The fracture toughness was determined using the Edge Notched Disk Wedge Splitting test. Blasting data from twelve (12) blasts at four (4) different quarries were analyzed. Based on this data set, an empirical relationship, ESE = 11.7 (RR80)^1.202 (KIC)^4.14, has been developed. Using this relationship, the burden and spacing may be determined from the blasting energy input predicted for a desired eighty-percent-passing (P80) muckpile fragment size. / Master of Science
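A quick worked example of the empirical relation above, ESE = 11.7 (RR80)^1.202 (KIC)^4.14. The exponent placement follows the reconstructed equation, and the RR80 and KIC input values below are purely illustrative, not data from the thesis:

```python
# Evaluate the reconstructed empirical blasting-energy relation for
# hypothetical inputs, to show the strong sensitivity to fracture
# toughness (exponent 4.14 on KIC vs. 1.202 on RR80).

def specific_explosives_energy(rr80, kic):
    """Predicted specific explosives energy for an 80%-passing
    reduction ratio rr80 and rock fracture toughness kic."""
    return 11.7 * rr80**1.202 * kic**4.14

# A tougher rock needs far more energy for the same size reduction.
e1 = specific_explosives_energy(rr80=2.0, kic=1.0)
e2 = specific_explosives_energy(rr80=2.0, kic=1.5)
print(e2 / e1)  # equals 1.5**4.14, about a 5.4x increase
```

The steep KIC exponent explains why the thesis pairs the image-analysis reduction ratio with a direct fracture-toughness test: small errors in KIC dominate the energy prediction.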
