  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Avaliação de descritores de textura para segmentação não-supervisionada de imagens / Texture descriptors evaluation for unsupervised image segmentation

Souto Junior, Carlos Alberto 16 August 2018 (has links)
Orientador: Clésio Luis Tozzi / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Made available in DSpace on 2018-08-16T00:09:42Z (GMT). No. of bitstreams: 1 SoutoJunior_CarlosAlberto_M.pdf: 16501917 bytes, checksum: 490a2364c9bd25c00b6cfa939af84889 (MD5) Previous issue date: 2010 / Abstract (translated from the Portuguese Resumo): This work consists of an evaluation of texture attribute descriptors for the fully unsupervised case, in which nothing is known beforehand about the nature of the textures or the number of regions present in the image. Decomposition by Gabor filters, scalar descriptors based on gray-level co-occurrence matrices, and Gauss-Markov random fields were chosen to describe the textures, and a procedure based on the k-means algorithm was applied, in which the optimal value of the parameter k was estimated from a quality metric computed on the results of running k-means for several values of k. The optimal k was obtained by the "elbow method". The procedure was applied to synthetic and natural images and compared against a manual segmentation. The best results for low-altitude agricultural and foreground-background images were obtained with descriptors based on co-occurrence matrices; for satellite images, the method employing random fields was more successful. / Abstract: This work comprises an evaluation of texture feature descriptors focusing on the fully unsupervised case, where neither the nature of the textures nor the number of regions in the image is known in advance. Three distinct texture descriptors were chosen: image decomposition with Gabor filters, scalar descriptors based on gray-level co-occurrence matrices, and Gauss-Markov random fields; and an automatic region-number determination framework was applied.
For performance evaluation, the procedure was applied to both synthetic and natural images / Master's / Computer Engineering / Master in Electrical Engineering
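The unsupervised model-selection step this record describes, running k-means for several values of k and picking the "elbow" of a quality metric, can be sketched as follows. This is an illustrative reconstruction, not the thesis's code: the plain NumPy k-means, the within-cluster sum of squares as the quality metric, and the curvature-based elbow rule are all assumptions.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd k-means: returns labels and the within-cluster
    sum of squares (WCSS), used here as the quality metric."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    wcss = ((X - centers[labels]) ** 2).sum()
    return labels, wcss

def elbow_k(X, k_max=6):
    """Estimate the region count: run k-means for k = 1..k_max and pick
    the k where the WCSS curve bends the most (largest second difference)."""
    ks = np.arange(1, k_max + 1)
    wcss = np.array([kmeans(X, k)[1] for k in ks])
    curvature = wcss[:-2] - 2 * wcss[1:-1] + wcss[2:]
    return int(ks[1:-1][curvature.argmax()])
```

Applied to per-pixel or per-block texture feature vectors (e.g. Gabor filter responses), `elbow_k` would supply the region count that the fully unsupervised setting leaves unknown.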
2

Study on Optimality Conditions in Stochastic Linear Programming

Zhao, Lei January 2005 (has links)
In the rapidly changing world of today, people have to make decisions under some degree of uncertainty. At the same time, the development of computing technologies enables people to take uncertain factors into consideration while making their decisions. Stochastic programming techniques have been widely applied in financial engineering, supply chain management, logistics, transportation, etc. Such applications often involve a large, possibly infinite, set of scenarios, so the resulting programs tend to be large in scale. The need to solve large-scale programs calls for a combination of mathematical programming techniques and sample-based approximation. When using sample-based approximations, it is important to determine the extent to which the resulting solutions depend on the specific sample used. This dissertation research focuses on computational evaluation of the solutions from sample-based two-stage/multistage stochastic linear programming algorithms, with a focus on the effectiveness of optimality tests and the quality of a proposed solution. In the first part of this dissertation, two alternative approaches to optimality tests of sample-based solutions, adaptive and non-adaptive sampling methods, are examined and computationally compared. The results of the computational experiment are in favor of the adaptive methods. In the second part of this dissertation, statistically motivated bound-based solution validation techniques in multistage linear stochastic programs are studied both theoretically and computationally. Different representations of the nonanticipativity constraints are studied, and bounds are established through manipulations of the nonanticipativity constraints.
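As a toy illustration of sample-based approximation and solution-quality testing, consider a newsvendor-style two-stage problem, whose sample average approximation (SAA) has a closed-form solution. The batch-based optimality-gap estimate below follows the general idea of statistically motivated quality bounds; the problem, parameters, and estimator details are illustrative assumptions, not the dissertation's algorithms.

```python
import numpy as np

def saa_newsvendor(demand_sample, c=3.0, p=5.0):
    """SAA solution of the newsvendor problem (cost c, price p):
    the empirical (1 - c/p) quantile of the sampled demand."""
    return np.quantile(demand_sample, 1.0 - c / p)

def profit(x, d, c=3.0, p=5.0):
    """Two-stage objective: order x now, sell min(x, demand) later."""
    return p * np.minimum(x, d) - c * x

def gap_estimate(x_hat, sampler, n=2000, m=20, c=3.0, p=5.0):
    """Monte Carlo optimality-gap estimate for a candidate x_hat:
    average over m independent batches of (batch-optimal SAA value
    minus the candidate's value); the batch optimum upper-bounds the
    true optimum in expectation, so a small gap certifies quality."""
    gaps = []
    for _ in range(m):
        d = sampler(n)
        x_b = saa_newsvendor(d, c, p)
        gaps.append(profit(x_b, d, c, p).mean() - profit(x_hat, d, c, p).mean())
    return float(np.mean(gaps))
```

A near-optimal candidate yields a gap estimate near zero, while a poor candidate is flagged by a large gap, which is the role the optimality tests in the abstract play for full-scale stochastic linear programs.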
3

Dimensions of statistically self-affine functions and random Cantor sets

Jones, Taylor 05 1900 (has links)
The subject of fractal geometry has exploded over the past 40 years with the availability of computer generated images. It was seen early on that there are many interesting questions at the intersection of probability and fractal geometry. In this dissertation we will introduce two random models for constructing fractals and prove various facts about them.
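One of the simplest random constructions of the kind the abstract mentions is a randomized middle-thirds Cantor set, in which each offspring interval survives independently with some probability. The sketch below is a generic illustration; the survival-probability model is an assumption and the dissertation's two models may differ.

```python
import numpy as np

def random_cantor(levels, keep_prob=1.0, ratio=1/3, seed=0):
    """Random Cantor construction: start from [0, 1]; at each level replace
    every surviving interval [a, b] by its two outer thirds, keeping each
    child independently with probability keep_prob. keep_prob=1 gives the
    classical deterministic middle-thirds Cantor set."""
    rng = np.random.default_rng(seed)
    intervals = [(0.0, 1.0)]
    for _ in range(levels):
        nxt = []
        for a, b in intervals:
            w = (b - a) * ratio
            for child in [(a, a + w), (b - w, b)]:
                if rng.random() < keep_prob:
                    nxt.append(child)
        intervals = nxt
    return intervals
```

With `keep_prob=1` the box-counting dimension of the limit set is log 2 / log 3 ≈ 0.631; with random survival, the interval counts form a branching process, which is where the probabilistic dimension questions in the dissertation begin.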
4

Describing and Predicting Breakthrough Curves for non-Reactive Solute Transport in Statistically Homogeneous Porous Media

Wang, Huaguo 06 December 2002 (has links)
The applicability and adequacy of three modeling approaches to describe and predict breakthrough curves (BTCs) for non-reactive solutes in statistically homogeneous porous media were numerically and experimentally investigated. The modeling approaches were: the convection-dispersion equation (CDE) with scale-dependent dispersivity, the mobile-immobile model (MIM), and the fractional convection-dispersion equation (FCDE). In order to test these modeling approaches, a prototype laboratory column system was designed for conducting miscible displacement experiments with a free-inlet boundary; its performance and operating conditions were rigorously evaluated. When the CDE with scale-dependent dispersivity is solved numerically to generate a BTC at a given location, the scale-dependent dispersivity can be specified in several ways, namely: local time-dependent dispersivity, average time-dependent dispersivity, apparent time-dependent dispersivity, apparent distance-dependent dispersivity, and local distance-dependent dispersivity. Theoretical analysis showed that, when dispersion was assumed to be a diffusion-like process, the scale-dependent dispersivity was locally time-dependent. In this case, definitions of the other dispersivities and the relationships between them were directly or indirectly derived from the local time-dependent dispersivity. Choosing among these dispersivities and relationships depended on the solute transport problem, the solute transport conditions, the required accuracy of the calculated BTC, and computational efficiency. The distribution of these scale-dependent dispersivities over scales could be described as a power-law function, a hyperbolic function, a log-power function, or a new scale-dependent dispersivity function (termed the LIC). The hyperbolic function and the LIC were the two potentially applicable functions for adequately describing the scale-dependent dispersivity distribution in statistically homogeneous porous media.
All three modeling approaches described the observed BTCs very well. The MIM was the only model that could explain the tailing phenomenon in the experimental BTCs. However, none of them could accurately predict BTCs at other scales using parameters determined at one observed scale. For the MIM and the FCDE, the predictions might be acceptable only when the scale for prediction was very close to the observed scale. When the distribution of the dispersivity over a range of scales could be reasonably well defined by observations, the CDE might be the best choice for predicting non-reactive solute transport in statistically homogeneous porous media. / Ph. D.
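For reference, the constant-dispersivity baseline that the scale-dependent formulations above generalize is the classical Ogata-Banks solution of the 1-D CDE for a continuous step input. The snippet below is that textbook solution, not the thesis's scale-dependent models, and the parameter values are purely illustrative.

```python
import math

def btc_ogata_banks(x, t, v, D):
    """Ogata-Banks solution of the 1-D convection-dispersion equation:
    relative concentration C/C0 at distance x and time t for a step input,
    with pore-water velocity v and constant dispersion coefficient D.
    The thesis's scale-dependent dispersivities replace the constant D
    with functions of travel time or travel distance."""
    s = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((x - v * t) / s)
    # boundary-correction term; numerically delicate (exp grows while
    # erfc shrinks), so the exponent is clamped for this illustration
    term2 = math.exp(min(v * x / D, 700.0)) * math.erfc((x + v * t) / s)
    return 0.5 * (term1 + term2)
```

Evaluating this at fixed x as t increases traces out the sigmoidal breakthrough curve; fitting it to an observed BTC is how a (scale-local) dispersivity is typically back-calculated.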
5

Inferência da densidade da madeira estimada por esclerometria / Inference of wood density estimated by sclerometry

Veiga, Nádia Schiavon da, 1986- 25 August 2018 (has links)
Orientador: Julio Soriano / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Agrícola / Made available in DSpace on 2018-08-25T02:26:09Z (GMT). No. of bitstreams: 1 Veiga_NadiaSchiavonda_M.pdf: 2631028 bytes, checksum: 691c2efa4c134e00748b530909cb9973 (MD5) Previous issue date: 2014 / Abstract (translated from the Portuguese Resumo): Wood density is a physical property important for many applications, such as civil construction, furniture manufacturing, and the pulp and paper industry. It varies from species to species and is also influenced by variations in moisture content. Conventionally, wood density is determined in the laboratory by standardized procedures. Many studies have sought nondestructive test methods whose results can be correlated with wood density. In this context, this research aimed to establish correlations between the results of sclerometry (a nondestructive test) and wood density. Three hardwood species with distinct densities were chosen, namely Cumaru (Dipteryx odorata), Garapa (Apuleia leiocarpa), and Cedro (Cedrela ssp), from which prismatic pieces measuring 80 mm x 200 mm x 300 mm were extracted. Two moisture conditions were considered: non-stabilized wood, at lumber-yard moisture content, and wood stabilized by drying. The prisms were subjected to sclerometric impacts in the longitudinal, radial, and tangential anatomical directions. The densities were determined by the method of ABNT NBR 7190 (1997). The correlations generated for each anatomical direction yielded correlation coefficients above 0.81, indicating that sclerometric indexes and density can be correlated linearly.
Finally, it can be concluded that sclerometry is a suitable method for estimating wood density. / Abstract: Wood density is an important physical property for various applications, such as the construction, furniture-making, and pulp and paper industries. This property varies among species and is also influenced by variations in moisture content. Conventionally, wood density is obtained in the laboratory by standardized procedures. Several studies have sought nondestructive testing methods whose results can be correlated with wood density. In this context, this study aimed to establish correlations between the results of sclerometry (nondestructive testing) and wood density. For this purpose, three broadleaf species with different densities, Cumaru (Dipteryx odorata), Garapa (Apuleia leiocarpa), and Cedro (Cedrela ssp), were chosen, from which prismatic pieces measuring 80 mm x 200 mm x 300 mm were extracted. Two moisture content conditions were considered: non-stabilized wood at sawmill-yard moisture content, and wood stabilized by drying. These prisms were subjected to sclerometric impacts in the longitudinal, radial, and tangential anatomical directions. The densities were determined by the method of ABNT NBR 7190 (1997). From the correlations generated for each anatomical direction, correlation coefficients greater than 0.81 were obtained, indicating that sclerometric indexes and density can be correlated linearly. Finally, it can be concluded that sclerometry is a suitable method for estimating wood density. / Master's / Rural Constructions and Ambience / Master in Engineering
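The linear calibration this abstract reports (sclerometric index vs. density, r > 0.81) amounts to a least-squares fit plus a Pearson correlation. A minimal sketch, with synthetic numbers rather than the thesis's measurements:

```python
import numpy as np

def density_from_sclerometry(indexes, densities, new_index):
    """Linear calibration of wood density against sclerometric rebound
    index: fit density = a*index + b by least squares and report
    Pearson's r alongside the prediction for a new index value.
    The linear form follows the abstract; the data are synthetic."""
    a, b = np.polyfit(indexes, densities, 1)
    r = np.corrcoef(indexes, densities)[0, 1]
    return a * new_index + b, r
```

In the thesis's setting one such calibration would be fitted per anatomical direction (longitudinal, radial, tangential), each judged by its correlation coefficient.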
6

Application of Statistically Optimized Near-field Acoustical Holography (SONAH) in Cylindrical Coordinates to Noise Control of a Bladeless Fan

Weimin Thor (8085548) 05 December 2019 (has links)
Near-field Acoustical Holography is a tool that is conventionally used to visualize sound fields through an inverse process in a three-dimensional space so that either sound field projections or sound source localization can be performed. The visualization is conducted by using sound pressure measurements taken in the near-field region close to the surface of the unknown sound source. Traditional Fourier-based Near-field Acoustical Holography requires a large number of measurement inputs to avoid spatial truncation effects. However, the use of a large number of measurements is usually not feasible since having a large number of microphones is costly, and usually the array is limited in size by the physical environment, thus limiting the practicality of this method. In the present work, because of the desire to reduce the number of microphones required to conduct acoustical holography, a method known as Statistically Optimized Near-field Acoustical Holography initially proposed by Steiner and Hald was analyzed. The main difference between the present work and the concept mentioned by Steiner and Hald is the cylindrical coordinate system employed here for the purpose of experimenting on a bladeless fan, which resembles a cylindrical structure and which could be assumed to be a cylindrical source. The algorithm was first verified via simulations and measurements, and was then applied to experimental data obtained via pressure measurements made with a cylindrical microphone array. Finally, suggestions for noise control strategies for the bladeless fan are described, based on the measurement results.
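The core of SONAH, in any coordinate system, is a regularized least-squares fit of the measured pressures to a set of elementary wave functions, which is then evaluated at the reconstruction points, avoiding the large apertures that Fourier-based NAH needs. The sketch below uses 2-D propagating plane waves in free field for simplicity; the thesis's cylindrical geometry would substitute cylindrical wave functions, and the regularization value and wave count here are illustrative assumptions.

```python
import numpy as np

def sonah_reconstruct(p_meas, mic_pos, rec_pos, k, n_waves=36, reg=1e-8):
    """SONAH-style reconstruction (plane-wave variant, after Steiner & Hald):
    fit the measured field in a basis of propagating plane waves via a
    Tikhonov-regularized least-squares problem, then evaluate the fitted
    field at the reconstruction points. Free-field 2-D illustration only."""
    angles = np.linspace(0, 2 * np.pi, n_waves, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (n_waves, 2)
    A = np.exp(-1j * k * mic_pos @ dirs.T)   # elementary waves at microphones
    B = np.exp(-1j * k * rec_pos @ dirs.T)   # elementary waves at rec. points
    G = A.conj().T @ A + reg * np.eye(n_waves)
    coef = np.linalg.solve(G, A.conj().T @ p_meas)
    return B @ coef
```

Because the fit is regularized, the method degrades gracefully when the microphone count is small, which is precisely the motivation given in the abstract.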
7

Universality of Kolmogorov's Cascade Picture in Inverse Energy Cascade Range of Two-dimensional turbulence / 2次元乱流のエネルギー逆カスケード領域における、コルモゴロフのカスケード描像の普遍性について

Mizuta, Atsushi 23 May 2014 (has links)
Kyoto University / 0048 / New-system doctoral course / Doctor of Science / Degree no. 甲第18446号 / 理博第4006号 / 新制||理||1578 (University Library) / 31324 / Division of Physics and Astronomy, Graduate School of Science, Kyoto University / Examining committee: Associate Professor Sadayoshi Toh (chair), Professor Shin-ichi Sasa, Professor Hisao Hayakawa / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
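Since this record carries no abstract, it may help to recall the textbook cascade picture the title refers to (Kraichnan's extension of Kolmogorov's argument to two dimensions, stated here as background, not as a result of the thesis): in 2-D turbulence forced at wavenumber $k_f$, energy flows to larger scales at a constant flux $\epsilon$, and dimensional analysis predicts a Kolmogorov-type spectrum in the inverse-cascade range.

```latex
% Kolmogorov-Kraichnan prediction for the inverse energy cascade of
% two-dimensional turbulence (constant inverse energy flux \epsilon):
E(k) = C\,\epsilon^{2/3}\,k^{-5/3}, \qquad k < k_f
```

The "universality" question in the title concerns whether this cascade picture (constant flux, $-5/3$ scaling, universal constant $C$) holds independently of the forcing and dissipation details.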
8

Constrained Spectral Conditioning for the Spatial Mapping of Sound

Spalt, Taylor Brooke 05 November 2014 (has links)
In aeroacoustic experiments of aircraft models and/or components, arrays of microphones are utilized to spatially isolate distinct sources and mitigate interfering noise which contaminates single-microphone measurements. Array measurements are still biased by interfering noise which is coherent over the spatial array aperture. When interfering noise is accounted for, existing algorithms which aim to both spatially isolate distinct sources and determine their individual levels as measured by the array are complex and require assumptions about the nature of the sound field. This work develops a processing scheme which uses spatially-defined phase constraints to remove correlated, interfering noise at the single-channel level. This is achieved through a merger of Conditioned Spectral Analysis (CSA) and the Generalized Sidelobe Canceller (GSC). A cross-spectral, frequency-domain filter is created using the GSC methodology to edit the CSA formulation. The only constraint needed is the user-defined, relative phase difference between the channel being filtered and the reference channel used for filtering. This process, titled Constrained Spectral Conditioning (CSC), produces single-channel Fourier Transform estimates of signals which satisfy the user-defined phase differences. In a spatial sound field mapping context, CSC produces sub-datasets derived from the original which estimate the signal characteristics from distinct locations in space. Because single-channel Fourier Transforms are produced, CSC's outputs could theoretically be used as inputs to many existing algorithms. As an example, data-independent, frequency-domain beamforming (FDBF) using CSC's outputs is shown to exhibit finer spatial resolution and lower sidelobe levels than FDBF using the original, unmodified dataset. 
However, these improvements decrease with Signal-to-Noise Ratio (SNR), and CSC's quantitative accuracy is dependent upon accurate modeling of the sound propagation and inter-source coherence if multiple and/or distributed sources are measured. In order to demonstrate systematic spatial sound mapping using CSC, it is embedded into the CLEAN algorithm which is then titled CLEAN-CSC. Simulated data analysis indicates that CLEAN-CSC is biased towards the mapping and energy allocation of relatively stronger sources in the field, which limits its ability to identify and estimate the level of relatively weaker sources. It is also shown that CLEAN-CSC underestimates the true integrated levels of sources in the field and exhibits higher-than-true peak source levels, and these effects increase and decrease respectively with increasing frequency. Five independent scaling methods are proposed for correcting the CLEAN-CSC total integrated output levels, each with their own assumptions about the sound field being measured. As the entire output map is scaled, these do not account for relative source level errors that may exist. Results from two airfoil tests conducted in NASA Langley's Quiet Flow Facility show that CLEAN-CSC exhibits less map noise than CLEAN yet more segmented spatial sound distributions and lower integrated source levels. However, using the same source propagation model that CLEAN assumes, the scaled CLEAN-CSC integrated source levels are brought into closer agreement with those obtained with CLEAN. / Ph. D.
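The baseline that CSC is compared against above, conventional frequency-domain beamforming of the cross-spectral matrix (CSM), can be sketched compactly. This is the standard FDBF step only; in the thesis, CSC pre-filters the channel data before the CSM is formed. The monopole steering model and normalization below are common textbook choices, assumed for illustration.

```python
import numpy as np

def fdbf_map(csm, mic_pos, grid, k):
    """Conventional frequency-domain beamforming: steer the measured
    cross-spectral matrix over candidate source locations using
    free-field monopole steering vectors; the output map peaks near
    the true source position."""
    out = np.empty(len(grid))
    for i, x in enumerate(grid):
        r = np.linalg.norm(mic_pos - x, axis=1)  # mic-to-focus distances
        v = np.exp(-1j * k * r) / r              # monopole propagation model
        w = v / (np.linalg.norm(v) ** 2)         # normalized steering weights
        out[i] = np.real(w.conj() @ csm @ w)
    return out
```

Running the same beamformer on CSC's phase-constrained sub-datasets, instead of the original channels, is what yields the finer resolution and lower sidelobes reported in the abstract.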
9

Enhanced gradient crystal-plasticity study of size effects in B.C.C. metal

Demiral, Murat January 2012 (has links)
Owing to continuous miniaturization, many modern high-technology applications such as medical and optical devices, thermal barrier coatings, electronics, micro- and nano-electro mechanical systems (MEMS and NEMS), gems industry and semiconductors increasingly use components with sizes down to a few micrometers and even smaller. Understanding their deformation mechanisms and assessing their mechanical performance help to achieve new insights or design new material systems with superior properties through controlled microstructure at the appropriate scales. However, a fundamental understanding of mechanical response in surface-dominated structures, different than their bulk behaviours, is still elusive. In this thesis, the size effect in a single-crystal Ti alloy (Ti15V3Cr3Al3Sn) is investigated. To achieve this, nanoindentation and micropillar (with a square cross-section) compression tests were carried out in collaboration with Swiss Federal Laboratories for Materials Testing and Research (EMPA), Switzerland. Three-dimensional finite element models of compression and indentation with an implicit time-integration scheme incorporating a strain-gradient crystal-plasticity (SGCP) theory were developed to accurately represent deformation of the studied body-centered cubic metallic material. An appropriate hardening model was implemented to account for strain-hardening of the active slip systems, determined experimentally. The optimized set of parameters characterizing the deformation behaviour of Ti alloy was obtained based on a direct comparison of simulations and the experiments. An enhanced model based on the SGCP theory (EMSGCP), accounting for an initial microstructure of samples in terms of different types of dislocations (statistically stored and geometrically necessary dislocations), was suggested and used in the numerical analysis. 
This meso-scale continuum theory bridges the gap between discrete-dislocation dynamics, where simulations are performed at strain rates several orders of magnitude higher than those in experiments, and classical continuum plasticity, which cannot explain the dependence of mechanical response on a specimen's size since there is no length scale in its constitutive description. A case study was performed using a cylindrical pillar to examine, on the one hand, the accuracy of the proposed EMSGCP theory and, on the other hand, its universality for different pillar geometries. An extensive numerical study of the size effect in micron-size pillars was also implemented. In addition, the anisotropic character of surface topographies around indents along different crystallographic orientations of single crystals obtained in numerical simulations was compared to experimental findings. The size effect in nano-indentation was studied numerically, and the differences in the observed hardness values for various indenter types were investigated using the developed EMSGCP theory.
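A common way such theories encode the two dislocation populations mentioned above is a Taylor-type slip-system strength in which statistically stored and geometrically necessary dislocation densities both contribute to hardening. The sketch below states that relation only; the parameter values are generic illustrative numbers, not the thesis's calibrated Ti-alloy constants, and the thesis's EMSGCP formulation may differ in detail.

```python
import math

def slip_resistance(rho_ssd, rho_gnd, alpha=0.3, mu=44e9, b=2.86e-10):
    """Taylor-type slip resistance tau = alpha * mu * b * sqrt(rho_SSD + rho_GND):
    both statistically stored (SSD) and geometrically necessary (GND)
    dislocation densities harden the slip system, which is how an initial
    microstructure enters strain-gradient crystal plasticity.
    alpha: interaction constant, mu: shear modulus [Pa], b: Burgers vector [m]."""
    return alpha * mu * b * math.sqrt(rho_ssd + rho_gnd)
```

Because GND density scales with plastic strain gradients, and gradients grow as specimens shrink, this relation is one route by which a size effect emerges from the constitutive description.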
10

A Comparative Assessment Of Available Methods For Seismic Performance Evaluation Of Buried Structures

Ozcebe, Ali Guney 01 August 2009 (has links) (PDF)
In the last three decades, seismic performance assessment of buried structures has evolved through the following stages: i) buried structures were considered not prone to seismically-induced damage, so no detailed investigation was needed; ii) soil-structure-earthquake interaction was neglected and seismically-induced free-field ground deformations were used directly as the basis for seismic demand, producing conservative results; and finally iii) soil-structure and earthquake interaction models incorporating both kinematic and inertial interactions. Among soil-structure and earthquake interaction models, simplified frame analysis has established the state of practice and is widely used. Within the confines of this thesis, the results of simplified frame analyses of the response of buried structures are compared with those of 2-D finite element dynamic analyses. For this purpose, 1-D dynamic and 2-D pseudo-dynamic analyses of free-field and buried structural systems are performed for a number of generic soil, structure and earthquake combinations. The analysis results revealed that, in general, available closed-form solutions are in good agreement with the results of finite element analyses. However, because dynamic analyses can model both kinematic and inertial effects, they should be preferred for the design of critical structures.
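Stage (ii) above, taking free-field ground deformation directly as the seismic demand, reduces to a very small computation for vertically propagating shear waves. The sketch below uses the standard free-field shear-strain estimate (peak ground velocity divided by shear-wave velocity); the numbers in the usage example are illustrative, not from the thesis.

```python
def free_field_racking(pgv, vs, height):
    """Stage-(ii) simplified seismic demand for a buried rectangular
    structure: free-field shear strain gamma = PGV / Vs (vertically
    propagating shear waves), imposed as a racking deformation
    delta = gamma * height. Soil-structure interaction is ignored,
    which is the source of the conservatism noted in the abstract.
    pgv [m/s], vs [m/s], height [m] -> (strain [-], racking [m])."""
    gamma = pgv / vs
    return gamma, gamma * height
```

For example, PGV = 0.5 m/s in a soil with Vs = 250 m/s gives a free-field strain of 0.002, i.e. 8 mm of racking over a 4 m tall box; the interaction models of stage (iii) then correct this demand for the relative stiffness of structure and soil.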
