141

Coupled analysis of degradation processes in concrete specimens at the meso-level

Idiart, Andrés Enrique 10 July 2009
In recent years, the numerical analysis of coupled problems, such as the degradation processes of materials and structures caused by environmental effects, has gained particular importance in the concrete mechanics research community. Examples of such problems include chemical attack, the effect of high temperatures, and drying shrinkage. Traditionally, the coupled analyses available in the literature have been carried out at the macroscopic level, treating the material as a continuous, homogeneous medium. However, it is well known that the degradation observed at the macroscopic level often originates in the interaction between the aggregates and the mortar, especially when differential volume changes occur between the two components. For this reason, meso-mechanical analysis is emerging as a powerful tool for the study of heterogeneous materials, although few numerical models currently exist that can simulate a coupled problem at this scale of observation. In this thesis, the applicability of the meso-mechanical finite element model developed within the research group over the last fifteen years is extended to the analysis of coupled hygro-mechanical and chemo-mechanical problems, in order to study drying shrinkage and external sulfate attack in concrete specimens. The numerical generation of meso-geometries and finite element meshes, with the largest aggregates surrounded by the mortar phase, is achieved by means of Voronoï/Delaunay theory. Additionally, in order to simulate the main cracking paths, zero-thickness interface elements, equipped with a constitutive law based on nonlinear fracture mechanics, are inserted a priori along all aggregate-matrix contacts and along some matrix-matrix lines. The main contribution of this thesis is, together with the coupled analyses performed on a mesostructural representation of the material, the simulation not only of crack formation and propagation but also the explicit consideration of the influence of cracks on the diffusion process. The numerical calculations are performed with the finite element codes DRAC and DRACFLOW, previously developed within the research group and coupled via a staggered strategy. The simulations carried out cover, among other aspects, the evaluation of the coupled behavior, the fitting of model parameters to experimental results available in the literature, several studies of the effect of aggregates on drying-induced microcracking and on the expansions caused by sulfate attack, and the simultaneous effect of diffusion-driven processes and mechanical loads. The results obtained agree with experimental observations of cracking, the spalling phenomenon, and the evolution of deformations, and show that the model can be used to study coupled problems in which the heterogeneous, quasi-brittle nature of the material plays a predominant role.
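For readers unfamiliar with the Voronoï/Delaunay approach to meso-geometry generation mentioned above, here is a minimal Python sketch of one common variant: Voronoi cells shrunk toward their centroids serve as polygonal aggregates, and the gaps left between them act as the mortar phase. The specimen size, aggregate count, and shrink factor are illustrative assumptions; this is not the DRAC/DRACFLOW implementation.

```python
# Minimal sketch: shrunken Voronoi cells as aggregates in a 2-D meso-geometry.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
seeds = rng.uniform(0.0, 100.0, size=(60, 2))    # 100x100 mm specimen, 60 seeds (assumed)
vor = Voronoi(seeds)

shrink = 0.8   # factor < 1 leaves a mortar layer around every aggregate (assumed)
aggregates = []
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if -1 in region or len(region) == 0:
        continue                                  # skip unbounded cells on the boundary
    poly = vor.vertices[region]
    centroid = poly.mean(axis=0)
    aggregates.append(centroid + shrink * (poly - centroid))

print(f"{len(aggregates)} polygonal aggregates generated")
```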
142

A novel approach to restoration of Poissonian images

Shaked, Elad 09 February 2010
The problem of reconstruction of digital images from their degraded measurements is regarded as a problem of central importance in various fields of engineering and imaging sciences. In such cases, the degradation is typically caused by the resolution limitations of an imaging device in use and/or by the destructive influence of measurement noise. Specifically, when the noise obeys a Poisson probability law, standard approaches to the problem of image reconstruction are based on using fixed-point algorithms which follow the methodology proposed by Richardson and Lucy in the beginning of the 1970s. The practice of using such methods, however, shows that their convergence properties tend to deteriorate at relatively high noise levels (which typically takes place in so-called low-count settings). This work introduces a novel method for de-noising and/or de-blurring of digital images that have been corrupted by Poisson noise. The proposed method is derived using the framework of MAP estimation, under the assumption that the image of interest can be sparsely represented in the domain of a properly designed linear transform. Consequently, a shrinkage-based iterative procedure is proposed, which guarantees the maximization of an associated maximum-a-posteriori criterion. It is shown in a series of both computer-simulated and real-life experiments that the proposed method outperforms a number of existing alternatives in terms of stability, precision, and computational efficiency.
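The Richardson-Lucy fixed-point iteration that the abstract cites as the standard baseline can be sketched in a few lines of Python. This is the classical algorithm, not the MAP shrinkage method proposed in the thesis; the iteration count and stabilizing epsilon are assumed values.

```python
# Minimal sketch of classical Richardson-Lucy deconvolution for Poisson noise.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(y, psf, n_iter=30, eps=1e-12):
    """Deblur a Poisson-noisy 2-D observation y given a known point spread function."""
    x = np.full_like(y, y.mean())            # flat initial estimate
    psf_mirror = psf[::-1, ::-1]             # adjoint of the blur operator
    for _ in range(n_iter):
        blurred = fftconvolve(x, psf, mode="same")
        ratio = y / (blurred + eps)          # eps guards against division by zero
        x *= fftconvolve(ratio, psf_mirror, mode="same")
    return x
```

In low-count settings the iterates of this scheme tend to amplify noise, which is the convergence deterioration the abstract refers to.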
143

Denoising of Infrared Images Using Independent Component Analysis

Björling, Robin January 2005
The purpose of this thesis is to evaluate the applicability of the method Independent Component Analysis (ICA) for noise reduction of infrared images. The focus lies on reducing the additive uncorrelated noise and the sensor-specific additive Fixed Pattern Noise (FPN). The well-known method sparse code shrinkage, in combination with ICA, is applied to reduce the uncorrelated noise degrading infrared images, and the result is compared to an adaptive Wiener filter. A novel method, also based on ICA, for reducing FPN is developed: an independent component analysis is performed on images from an infrared sensor, and typical fixed pattern noise components are manually identified. The identified components are then used to quickly and effectively reduce the FPN in images taken by the specific sensor. It is shown that both the FPN reduction algorithm and the sparse code shrinkage method work well for infrared images. The algorithms are tested on synthetic as well as on real images, and the performance is measured.
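A minimal sketch of the sparse code shrinkage idea follows: learn an ICA basis from noisy image patches, shrink the component activations, and reconstruct. Scikit-learn's FastICA and a plain soft threshold stand in here for the thesis's ICA stage and its density-derived shrinkage nonlinearity; the threshold and component count are assumed values.

```python
# Minimal sketch of sparse code shrinkage: ICA transform, shrink, invert.
import numpy as np
from sklearn.decomposition import FastICA

def sparse_code_shrinkage(patches, threshold=0.1, n_components=32):
    """patches: (n_patches, patch_size**2) array of vectorized noisy patches."""
    ica = FastICA(n_components=n_components, whiten="unit-variance", random_state=0)
    codes = ica.fit_transform(patches)                  # project onto the ICA basis
    # Soft threshold as a stand-in for the density-specific shrinkage function.
    shrunk = np.sign(codes) * np.maximum(np.abs(codes) - threshold, 0.0)
    return ica.inverse_transform(shrunk)                # back to pixel space
```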
144

Urban Shrinkage in Liepāja: Awareness of population decline in the planning process

Kaugurs, Kristaps January 2011
The aim of the study is to investigate the current state of awareness of urban shrinkage in Liepāja among the key actors involved in the planning process. The last couple of hundred years brought many transformations in urbanity, which were almost always accompanied by population growth and the expansion of cities. In recent decades, however, new patterns of urban development have emerged all over the globe, causing cities to lose inhabitants and resulting in urban shrinkage. Liepāja, the third largest city in Latvia, has lost a quarter of its population in the last two decades, and the trend continues. The long-term municipal planning document is being presented during this research, in light of which the research question is asked: "What is the current state of awareness of urban shrinkage in Liepāja by the key actors?" Utilising Flyvbjerg's phronetic form of inquiry in combination with a case study and repeated semi-structured interviews, the dominant planning views related to urban shrinkage are sought and analysed. The research identifies three underlying causalities that shape planning decisions and leave formidable consequences for the future of the city: (1) the planning legacy; (2) the misconception; and (3) the political sensitivity of urban shrinkage.
145

Estimation and bias correction of the magnitude of an abrupt level shift

Liu, Wenjie January 2012
Consider a time series model which is stationary apart from a single shift in mean. If the time of a level shift is known, the least squares estimator of the magnitude of this level shift is a minimum variance unbiased estimator. If the time is unknown, however, this estimator is biased. Here, we first carry out extensive simulation studies to determine the relationship between the bias and three parameters of our time series model: the true magnitude of the level shift, the true time point and the autocorrelation of adjacent observations. Thereafter, we use two generalized additive models to generalize the simulation results. Finally, we examine to what extent the bias can be reduced by multiplying the least squares estimator with a shrinkage factor. Our results showed that the bias of the estimated magnitude of the level shift can be reduced when the level shift does not occur close to the beginning or end of the time series. However, it was not possible to simultaneously reduce the bias for all possible time points and magnitudes of the level shift.
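The least squares estimator described above can be sketched as follows: for each candidate change point, the estimated magnitude is the difference of segment means; the change point minimizing the residual sum of squares is selected, and a shrinkage factor then scales the estimate. This sketch ignores the autocorrelation structure studied in the thesis, and the shrinkage factor shown is an arbitrary illustrative value.

```python
# Minimal sketch: least squares level shift estimation with a shrinkage factor.
import numpy as np

def estimate_level_shift(y, shrinkage=0.9):
    n = len(y)
    best = (np.inf, None, None)
    for t in range(2, n - 1):                    # candidate change points
        m1, m2 = y[:t].mean(), y[t:].mean()
        rss = ((y[:t] - m1) ** 2).sum() + ((y[t:] - m2) ** 2).sum()
        if rss < best[0]:
            best = (rss, t, m2 - m1)             # magnitude = difference of means
    _, t_hat, delta_hat = best
    return t_hat, shrinkage * delta_hat          # shrink to reduce the bias

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, 200)
y[120:] += 1.5                                   # true shift of 1.5 at t = 120
print(estimate_level_shift(y))
```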
147

Characterization of material behavior during the manufacturing process of a co-extruded solid oxide fuel cell

Eisele, Prescott L. (Prescott Lawrence) 08 April 2004
Recent developments in powder metal oxide processing have enabled co-extrusion of a honeycomb structure with alternating layers of metal and ceramic. Such a structure is envisioned for use as a Solid Oxide Fuel Cell (SOFC) if defects can be minimized during the manufacturing process. The two dissimilar materials tend to shrink at different rates during hydrogen reduction and sintering, inducing internal stresses that lead to structural defects such as cracks, or to high residual stresses. The objective of this thesis is to characterize the shrinkage and relaxation mechanisms inherent in both the metal and the ceramic so that internal stresses developed during manufacturing can be estimated and ultimately minimized. Constitutive models are adapted from the literature to simulate the sintering and viscoelastic behaviors of the ceramic. Likewise, existing models from the literature are used to characterize the viscoplastic relaxation of the porous powder metal phase and its sintering behavior. Empirical models are developed for the reduction behavior of the metal oxides, based on a series of experiments measuring water vapor (hygrometry) and dimensional change (dilatometry) during reduction and sintering. Similarly, the parameters needed for the sintering and viscoplastic models were determined through a series of experiments. The constructed system of constitutive equations appears to have the essential elements for modeling dimensional change, porosity/strength, and the development of internal (residual) stresses in co-extruded SOFC structures.
148

Bayesian variable selection in clustering via Dirichlet process mixture models

Kim, Sinae 17 September 2007
The increased collection of high-dimensional data in various fields has raised a strong interest in clustering algorithms and variable selection procedures. In this dissertation, I propose a model-based method that addresses the two problems simultaneously. I use Dirichlet process mixture models to define the cluster structure and to introduce in the model a latent binary vector to identify discriminating variables. I update the variable selection index using a Metropolis algorithm and obtain inference on the cluster structure via a split-merge Markov chain Monte Carlo technique. I evaluate the method on simulated data and illustrate an application with a DNA microarray study. I also show that the methodology can be adapted to the problem of clustering functional high-dimensional data. There I employ wavelet thresholding methods in order to reduce the dimension of the data and to remove noise from the observed curves. I then apply variable selection and sample clustering methods in the wavelet domain. Thus my methodology is wavelet-based and aims at clustering the curves while identifying wavelet coefficients describing discriminating local features. I exemplify the method on high-dimensional and high-frequency tidal volume traces measured under an induced panic attack model in normal humans.
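The Metropolis update of the latent binary selection vector can be sketched generically as a single-indicator flip. The log_posterior function below is a placeholder for the Dirichlet process mixture marginal likelihood actually used in the dissertation.

```python
# Minimal sketch of one Metropolis step on a binary variable-selection vector.
import numpy as np

def metropolis_flip(gamma, log_posterior, rng):
    """Propose flipping one randomly chosen inclusion indicator in gamma."""
    proposal = gamma.copy()
    j = rng.integers(len(gamma))
    proposal[j] = 1 - proposal[j]                # toggle variable j in/out
    log_ratio = log_posterior(proposal) - log_posterior(gamma)
    if np.log(rng.uniform()) < log_ratio:        # accept with prob min(1, ratio)
        return proposal
    return gamma                                 # reject: keep the current state
```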
149

Statistical Idealities and Expected Realities in the Wavelet Techniques Used for Denoising

DeNooyer, Eric-Jan D. 01 January 2010
In the field of signal processing, one of the underlying enemies in obtaining a good quality signal is noise. The most common examples of signals that can be corrupted by noise are images and audio signals. Since the early 1980s, when wavelet transforms were given their modern definition, statistical techniques have been incorporated into processes that use wavelets with the goal of maximizing signal-to-noise ratios. We provide a brief history of wavelet theory, going back to Alfréd Haar's 1909 dissertation on orthogonal functions, as well as its important relationship to the earlier work of Joseph Fourier (circa 1801), which brought about that famous mathematical transformation, the Fourier series. We demonstrate how wavelet theory can be used to reconstruct an analyzed function, ergo, that it can be used to analyze and reconstruct images and audio signals as well. Then, in order to ground the understanding of the application of wavelets to the science of denoising, we discuss some important concepts from statistics. From all of these, we introduce the subject of wavelet shrinkage, a technique that combines wavelets and statistics into a "thresholding" scheme that effectively reduces noise without doing too much damage to the desired signal. Subsequently, we discuss how the effectiveness of these techniques is measured, both in the ideal sense and in the expected sense. We then look at an illustrative example of the application of one technique. Finally, we analyze this example more generally, in accordance with the underlying theory, and draw some conclusions as to when wavelets are an effective technique for increasing a signal-to-noise ratio.
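A minimal sketch of the wavelet shrinkage scheme described above, using PyWavelets with the Donoho-Johnstone universal threshold and a MAD-based noise estimate; the choice of wavelet ('db4') and the decomposition level are illustrative assumptions.

```python
# Minimal sketch of wavelet shrinkage via soft thresholding (VisuShrink-style).
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale from finest details
    lam = sigma * np.sqrt(2 * np.log(len(signal)))       # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)
```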
150

Effect of Dosage of Non-Chloride Accelerator versus Chloride Accelerator on the Cracking Potential of Concrete Repair Slabs

Meagher, Thomas F. 01 January 2015
Due to strict placement time and strength constraints during the construction of concrete pavement repair slabs, accelerators must be incorporated into the mixture design. Since the most common accelerator, calcium chloride, promotes corrosion of concrete reinforcement, a calcium nitrate-based accelerator was studied as an alternative. To replicate mixtures used in the field, commercial accelerators commonly used in concrete pavement repair slabs were used in the current study. The cracking risk of different mixtures was assessed using modeling and cracking frame testing. HIPERPAV modeling was conducted using several measured mixture properties: concrete mechanical properties, strength-based and heat-of-hydration-based activation energies, hydration parameters from calorimetric studies, and adiabatic temperature rise profiles. Autogenous shrinkage was also measured to assess the effect of moisture consumption on concrete volume contraction. The findings of the current study indicate that the cracking risk associated with the calcium nitrate-based accelerator matches the performance of the calcium chloride-based accelerator when placement is conducted during nighttime hours.
