141

A novel approach to restoration of Poissonian images

Shaked, Elad 09 February 2010 (has links)
The problem of reconstructing digital images from degraded measurements is of central importance in various fields of engineering and the imaging sciences. In such cases, the degradation is typically caused by the resolution limitations of the imaging device in use and/or by the destructive influence of measurement noise. Specifically, when the noise obeys a Poisson probability law, standard approaches to image reconstruction are based on fixed-point algorithms following the methodology proposed by Richardson and Lucy in the early 1970s. In practice, however, the convergence properties of such methods tend to deteriorate at relatively high noise levels (as typically occurs in so-called low-count settings). This work introduces a novel method for de-noising and/or de-blurring of digital images corrupted by Poisson noise. The proposed method is derived within the framework of maximum a posteriori (MAP) estimation, under the assumption that the image of interest can be sparsely represented in the domain of a properly designed linear transform. Consequently, a shrinkage-based iterative procedure is proposed which guarantees maximization of the associated MAP criterion. A series of both computer-simulated and real-life experiments shows that the proposed method outperforms a number of existing alternatives in terms of stability, precision, and computational efficiency.
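
The fixed-point baseline this abstract refers to is easy to state concretely. Below is a minimal numpy/scipy sketch of the classical Richardson-Lucy iteration for Poisson deblurring — the standard method the thesis improves upon, not the proposed shrinkage-based procedure itself; the flat initialization and the iteration count are illustrative choices.

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(y, psf, n_iter=50, eps=1e-12):
    """Classical Richardson-Lucy fixed-point iteration for Poisson deblurring.

    y   : observed (blurred, Poisson-noisy) image, nonnegative
    psf : point-spread function, normalized to sum to one
    """
    x = np.full_like(y, y.mean(), dtype=float)   # flat nonnegative start
    psf_adj = psf[::-1, ::-1]                    # adjoint (flipped) kernel
    for _ in range(n_iter):
        blurred = convolve(x, psf, mode="reflect")
        ratio = y / np.maximum(blurred, eps)     # Poisson data-fidelity ratio
        x = x * convolve(ratio, psf_adj, mode="reflect")
    return x
```
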
142

Denoising of Infrared Images Using Independent Component Analysis

Björling, Robin January 2005 (has links)
The purpose of this thesis is to evaluate the applicability of Independent Component Analysis (ICA) to noise reduction in infrared images. The focus lies on reducing additive uncorrelated noise and the sensor-specific additive Fixed Pattern Noise (FPN). The well-known method of sparse code shrinkage, built on ICA, is applied to reduce the uncorrelated noise degrading infrared images, and the result is compared to an adaptive Wiener filter. A novel method, also based on ICA, is developed for reducing FPN: an independent component analysis is performed on images from an infrared sensor, typical fixed pattern noise components are identified manually, and the identified components are then used to quickly and effectively reduce the FPN in images taken by that sensor. It is shown that both the FPN reduction algorithm and the sparse code shrinkage method work well for infrared images. The algorithms are tested on synthetic as well as real images, and their performance is measured and compared with other algorithms.
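
For orientation, a minimal sketch of the sparse code shrinkage step is given below, assuming an ICA transform W learned offline from clean training patches and orthogonalized for simplicity; the Laplacian-prior soft threshold is one common choice, not necessarily the one used in the thesis.

```python
import numpy as np

def sparse_code_shrinkage(patches, W, sigma):
    """Denoise flattened image patches by shrinkage in a learned ICA basis.

    patches : (n, d) array of noisy patches
    W       : (d, d) ICA transform, learned offline from clean training
              data and assumed orthogonal here for simplicity
    sigma   : standard deviation of the additive Gaussian noise
    """
    s = patches @ W.T                    # sparse components
    t = np.sqrt(2.0) * sigma**2          # soft threshold (Laplacian prior, illustrative)
    s_hat = np.sign(s) * np.maximum(np.abs(s) - t, 0.0)
    return s_hat @ W                     # orthogonal W: inverse transform is W.T
```
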
143

Urban Shrinkage in Liepāja: Awareness of population decline in the planning process

Kaugurs, Kristaps January 2011 (has links)
The aim of this study is to investigate the current state of awareness of urban shrinkage in Liepāja among the key actors involved in the planning process. The last couple of hundred years have brought many transformations in urbanity, which were always accompanied by population growth and the expansion of the city. In recent decades, however, new patterns of urban development have emerged all over the globe, causing cities to lose inhabitants and resulting in urban shrinkage. Liepāja, the third largest city in Latvia, has lost a quarter of its population in the last two decades, and the trend continues. The long-term municipal planning document is examined during this research, in light of which the research question is asked: “What is the current state of awareness of urban shrinkage in Liepāja among the key actors?” Utilising Flyvbjerg’s phronetic form of inquiry in combination with a case study and repeated semi-structured interviews, the dominant planning views related to urban shrinkage are sought and analysed. The research identifies three underlying causalities that shape planning decisions and have formidable consequences for the future of the city: (1) the planning legacy; (2) the misconception; and (3) the political sensitivity of urban shrinkage.
144

Estimation and bias correction of the magnitude of an abrupt level shift

Liu, Wenjie January 2012 (has links)
Consider a time series model which is stationary apart from a single shift in mean. If the time of the level shift is known, the least squares estimator of its magnitude is a minimum variance unbiased estimator; if the time is unknown, however, this estimator is biased. Here, we first carry out extensive simulation studies to determine the relationship between the bias and three parameters of our time series model: the true magnitude of the level shift, the true time of the shift, and the autocorrelation of adjacent observations. Thereafter, we use two generalized additive models to generalize the simulation results. Finally, we examine to what extent the bias can be reduced by multiplying the least squares estimator by a shrinkage factor. Our results show that the bias of the estimated magnitude of the level shift can be reduced when the level shift does not occur close to the beginning or end of the time series. However, it was not possible to simultaneously reduce the bias for all possible time points and magnitudes of the level shift.
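
As a concrete illustration of the estimator under discussion, here is a small sketch that locates a single level shift by least squares and applies a multiplicative shrinkage factor to the estimated magnitude; the constant factor is a placeholder, whereas the thesis derives the appropriate amount of shrinkage from the simulated bias surface.

```python
import numpy as np

def estimate_level_shift(x, shrink=0.9):
    """Least squares estimate of a single level shift, with shrinkage.

    Scans all candidate change points, keeps the split minimizing the
    residual sum of squares, and returns (time, shrunken magnitude).
    The fixed factor `shrink` is purely illustrative.
    """
    n = len(x)
    best_t, best_rss = 1, np.inf
    for t in range(1, n):                       # shift occurs after x[t-1]
        m1, m2 = x[:t].mean(), x[t:].mean()
        rss = ((x[:t] - m1) ** 2).sum() + ((x[t:] - m2) ** 2).sum()
        if rss < best_rss:
            best_t, best_rss = t, rss
    delta = x[best_t:].mean() - x[:best_t].mean()
    return best_t, shrink * delta
```

On a simulated series such as `x = np.random.default_rng(1).normal(size=200); x[120:] += 1.5`, the unshrunken estimate tends to overshoot when the shift is small relative to the noise, which is the selection bias the thesis quantifies.
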
146

Characterization of material behavior during the manufacturing process of a co-extruded solid oxide fuel cell

Eisele, Prescott L. (Prescott Lawrence) 08 April 2004 (has links)
Recent developments in powder metal oxide processing have enabled co-extrusion of a honeycomb structure with alternating layers of metal and ceramic. Such a structure is envisioned for use as a Solid Oxide Fuel Cell (SOFC) if defects can be minimized during the manufacturing process. The two dissimilar materials tend to shrink at different rates during hydrogen reduction and sintering, inducing internal stresses that lead to structural defects, such as cracks, or to high residual stresses. The objective of this thesis is to characterize the shrinkage and relaxation mechanisms inherent in both the metal and the ceramic so that internal stresses developed during manufacturing can be estimated and ultimately minimized. Constitutive models are adapted from the literature to simulate the sintering and viscoelastic behavior of the ceramic. Likewise, existing models from the literature are used to characterize the viscoplastic relaxation of the porous powder metal phase and its sintering behavior. Empirical models are developed for the reduction behavior of the metal oxides, based on a series of experiments measuring water vapor (hygrometry) and dimensional change (dilatometry) during reduction and sintering. The parameters needed for the sintering and viscoplastic models were likewise determined through a series of experiments. The constructed system of constitutive equations appears to have the essential elements for modeling dimensional change, porosity/strength, and the development of internal (residual) stresses in co-extruded SOFC structures.
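
To make the shrinkage-mismatch idea concrete, here is a schematic sketch — not the constitutive models used in the thesis — that Euler-integrates an Arrhenius-type free sintering strain rate for two layers with different hypothetical parameters; the mismatch strain between them is what drives the internal stresses during co-firing.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def free_sintering_strain(times_s, temps_K, A, Q):
    """Euler-integrate a schematic densification law d(eps)/dt = -A*exp(-Q/RT).

    A (1/s) and Q (J/mol) are hypothetical placeholders; the thesis
    calibrates its models from dilatometry and hygrometry data instead.
    """
    eps = np.zeros(len(times_s))
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        eps[i] = eps[i - 1] - A * np.exp(-Q / (R * temps_K[i])) * dt
    return eps

# Mismatch between a faster-shrinking metal layer and a slower ceramic one.
t = np.linspace(0.0, 3600.0, 361)              # one hour of firing, seconds
T = np.full_like(t, 1500.0)                    # isothermal hold, K
mismatch = free_sintering_strain(t, T, 2e2, 2.0e5) - \
           free_sintering_strain(t, T, 1e2, 2.0e5)
```
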
147

Bayesian variable selection in clustering via dirichlet process mixture models

Kim, Sinae 17 September 2007 (has links)
The increased collection of high-dimensional data in various fields has raised a strong interest in clustering algorithms and variable selection procedures. In this dissertation, I propose a model-based method that addresses the two problems simultaneously. I use Dirichlet process mixture models to define the cluster structure and to introduce in the model a latent binary vector to identify discriminating variables. I update the variable selection index using a Metropolis algorithm and obtain inference on the cluster structure via a split-merge Markov chain Monte Carlo technique. I evaluate the method on simulated data and illustrate an application with a DNA microarray study. I also show that the methodology can be adapted to the problem of clustering functional high-dimensional data. There I employ wavelet thresholding methods in order to reduce the dimension of the data and to remove noise from the observed curves. I then apply variable selection and sample clustering methods in the wavelet domain. Thus my methodology is wavelet-based and aims at clustering the curves while identifying wavelet coefficients describing discriminating local features. I exemplify the method on high-dimensional and high-frequency tidal volume traces measured under an induced panic attack model in normal humans.
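
As a sketch of how such a sampler updates the variable selection indicators, the fragment below performs one add/delete Metropolis step on the binary inclusion vector; `log_post` stands in for the model-specific marginal posterior of the indicators and is an assumed placeholder, as is the single-coordinate proposal.

```python
import numpy as np

def metropolis_gamma_step(gamma, log_post, rng):
    """One add/delete Metropolis step on a 0/1 variable-inclusion vector.

    gamma    : current binary vector flagging discriminating variables
    log_post : callable giving the (unnormalized) log posterior of a
               gamma vector -- a placeholder for the model-specific
               computation under the Dirichlet process mixture
    rng      : numpy Generator, e.g. np.random.default_rng(0)
    """
    proposal = gamma.copy()
    j = rng.integers(len(gamma))
    proposal[j] = 1 - proposal[j]              # flip one coordinate
    if np.log(rng.uniform()) < log_post(proposal) - log_post(gamma):
        return proposal                        # accept
    return gamma                               # reject
```
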
148

Statistical Idealities and Expected Realities in the Wavelet Techniques Used for Denoising

DeNooyer, Eric-Jan D. 01 January 2010 (has links)
In the field of signal processing, one of the underlying enemies of obtaining a good quality signal is noise. The most common examples of signals that can be corrupted by noise are images and audio signals. Since the early 1980s, when wavelet transforms were given their modern formulation, statistical techniques have been incorporated into wavelet-based processes with the goal of maximizing signal-to-noise ratios. We provide a brief history of wavelet theory, going back to Alfréd Haar's 1909 dissertation on orthogonal functions, as well as its important relationship to the earlier work of Joseph Fourier (circa 1801), which brought about that famous mathematical transformation, the Fourier series. We demonstrate how wavelet theory can be used to reconstruct an analyzed function, and hence to analyze and reconstruct images and audio signals as well. Then, to ground the understanding of how wavelets apply to the science of denoising, we discuss some important concepts from statistics. From all of these, we introduce the subject of wavelet shrinkage, a technique that combines wavelets and statistics into a "thresholding" scheme that effectively reduces noise without doing too much damage to the desired signal. Subsequently, we discuss how the effectiveness of these techniques is measured, both in the ideal sense and in the expected sense. We then look at an illustrative example of the application of one technique. Finally, we analyze this example more generally, in accordance with the underlying theory, and draw some conclusions as to when wavelets are an effective technique for increasing a signal-to-noise ratio.
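
A minimal sketch of the thresholding scheme this abstract describes, using soft shrinkage with Donoho's universal threshold and a MAD-based noise estimate; the wavelet family and decomposition depth are illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet shrinkage with the universal threshold.

    The noise level is estimated from the finest-scale detail
    coefficients via the median absolute deviation.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # MAD noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)
```
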
149

Effect of Dosage of Non-Chloride Accelerator versus Chloride Accelerator on the Cracking Potential of Concrete Repair Slabs

Meagher, Thomas F. 01 January 2015 (has links)
Due to strict placement time and strength constraints during the construction of concrete pavement repair slabs, accelerators must be incorporated into the mixture design. Since the most common accelerator, calcium chloride, promotes corrosion of concrete reinforcement, a calcium nitrate-based accelerator was studied as an alternative. To replicate mixtures used in the field, commercial accelerators commonly used in concrete pavement repair slabs were used in the current study. The cracking risk of the different mixtures was assessed using modeling and cracking frame testing. HIPERPAV modeling was conducted using several measured mixture properties: concrete mechanical properties, strength-based and heat-of-hydration-based activation energies, hydration parameters obtained from calorimetric studies, and adiabatic temperature rise profiles. Autogenous shrinkage was also measured to assess the effect of moisture consumption on concrete volume contraction. The findings of the current study indicate that the cracking risk associated with the calcium nitrate-based accelerator matches the performance of a calcium chloride-based accelerator when placement is conducted during nighttime hours.
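
The activation energies mentioned feed into equivalent-age (maturity) calculations of the kind that thermal cracking models such as HIPERPAV rely on. Below is a small sketch of the standard Arrhenius equivalent-age formula; the default activation energy is only a typical literature value, not one measured in this study.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def equivalent_age(times_h, temps_C, Ea=40_000.0, T_ref_C=23.0):
    """Arrhenius equivalent age of concrete, in hours.

    Ea (J/mol) is the apparent activation energy -- the kind of value
    the study back-calculates from strength and calorimetry data; the
    default here is a generic placeholder.
    """
    T = np.asarray(temps_C, dtype=float) + 273.15
    dt = np.diff(np.asarray(times_h, dtype=float))
    rate = np.exp(-Ea / R * (1.0 / T[1:] - 1.0 / (T_ref_C + 273.15)))
    return float(np.sum(rate * dt))
```
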
150

Shrinkage Influence on Tension-Stiffening of Concrete Structures

Gribniak, Viktor 02 November 2009 (has links)
Due to the use of refined ultimate state theories as well as high strength concrete and reinforcement, resulting in longer spans and smaller depths, serviceability criteria often limit the application of modern reinforced concrete (RC) superstructures. In structural analysis, civil engineers can choose between traditional design code methods and numerical techniques, and in order to choose a particular calculation method they should be aware of the accuracy of the different techniques. Adequate modelling of RC cracking and, particularly, of post-cracking behaviour, as one of the major sources of nonlinearity, is the most important and difficult task of deformational analysis. In the smeared crack approach, which deals with average cracking and strains, post-cracking effects can be modelled by a stress-strain tension-stiffening relationship. Most known tension-stiffening relationships have been derived from test data of shrunk tension or shear members. Subsequently, these constitutive laws were applied to the modelling of bending elements, whose behaviour differs from that of the test members. Furthermore, such relationships were coupled with the shrinkage effect. The present research therefore aims at developing a technique for deriving a free-of-shrinkage tension-stiffening relationship using test data of shrunk bending RC members. The main objective of this PhD dissertation is to investigate the influence of shrinkage on the deformations and tension-stiffening of RC members subjected to short-term loading. Present... [to full text]
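
For readers unfamiliar with the term, a tension-stiffening relationship assigns an average tensile stress to the cracked concrete as a function of average strain. The sketch below uses a generic linear descending branch with placeholder parameters — it is not the free-of-shrinkage relationship derived in the dissertation, only an illustration of the kind of constitutive law being discussed.

```python
def tension_stiffening_stress(eps, f_ct=2.5, E_c=30e3, beta=20.0):
    """Schematic average tensile stress (MPa) vs average strain.

    f_ct (tensile strength, MPa), E_c (modulus, MPa) and beta (strain
    multiple at which stiffening vanishes) are illustrative placeholders.
    """
    eps_cr = f_ct / E_c                       # cracking strain
    if eps <= eps_cr:
        return E_c * eps                      # uncracked, linear elastic
    eps_u = beta * eps_cr                     # end of the descending branch
    if eps >= eps_u:
        return 0.0                            # stiffening fully degraded
    return f_ct * (eps_u - eps) / (eps_u - eps_cr)
```
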
