  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
351

Um procedimento de estimação de parâmetros de linhas de transmissão baseado na teoria de decomposição modal [A procedure for estimating transmission line parameters based on modal decomposition theory]

Asti, Gislaine Aparecida. January 2010 (has links)
Advisor: Sérgio Kurokawa / Committee: Afonso José do Prado / Committee: José Carlos da Costa Campos / Abstract: The objective of this work is to present a methodology for estimating transmission line parameters. The method is based on the theory of modal decomposition of transmission lines and is developed from measurements of the currents and voltages at the line terminals. According to tests carried out by Kurokawa et al. (2006), the parameter estimation method is exact if the modal decomposition matrix is known. Thus, in this work the method is applied to a non-transposed 440 kV three-phase transmission line at a frequency of 60 Hz, for several line lengths, using the Clarke matrix as the modal decomposition matrix. / Master's degree
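To illustrate the modal decomposition step used in this estimation procedure, the sketch below applies the Clarke matrix to a hypothetical phase-domain series impedance matrix; the impedance values are illustrative assumptions, not data from the thesis.

```python
import numpy as np

# Clarke (alpha-beta-zero) transformation matrix, orthonormal form.
T = np.array([[ np.sqrt(2/3),           0.0, 1/np.sqrt(3)],
              [-1/np.sqrt(6),  1/np.sqrt(2), 1/np.sqrt(3)],
              [-1/np.sqrt(6), -1/np.sqrt(2), 1/np.sqrt(3)]])

# Hypothetical per-unit-length series impedance matrix (ohm/km) of a
# 440 kV line at 60 Hz -- illustrative numbers only, not from the thesis.
Z_phase = np.array([[0.05+0.50j, 0.03+0.20j, 0.03+0.15j],
                    [0.03+0.20j, 0.05+0.50j, 0.03+0.20j],
                    [0.03+0.15j, 0.03+0.20j, 0.05+0.50j]])

# Approximate modal decomposition: Z_modal = T^-1 Z T (T is orthogonal, so T^-1 = T.T).
Z_modal = T.T @ Z_phase @ T
print(np.round(Z_modal, 3))   # off-diagonal terms are small if Clarke nearly diagonalizes Z
```

For a perfectly transposed line the Clarke matrix diagonalizes the phase matrices exactly; for a non-transposed line, as studied here, it is only an approximation to the exact modal transformation matrix.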
352

The Exploration of the Relationship Between Guessing and Latent Ability in IRT Models

Gao, Song 01 December 2011 (has links)
This study explored the relationship between successful guessing and latent ability in IRT models. A new IRT model was developed with a guessing function that integrates the probability of guessing an item correctly with the examinee's ability and the item parameters. The conventional 3PL IRT model was compared with the new 2PL-Guessing model on parameter estimation using the Monte Carlo method, and a SAS program was used to implement the data simulation and the maximum likelihood estimation. Compared with the traditional 3PL model, the new model is designed so that: a) the maximum probability of guessing is no more than 0.5, even for the highest-ability examinees; b) examinees of different ability have different probabilities of successful guessing, because a basic assumption of the new model is that higher-ability examinees have a higher probability of successful guessing than lower-ability examinees; c) the standard errors in parameter estimation are smaller; and d) the running time is shorter. The results illustrated that the new 2PL-Guessing model was superior to the 3PL model in all four aspects.
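A minimal sketch of the two item response functions being compared is given below; the standard 3PL form is well known, while the ability-dependent guessing function shown for the 2PL-Guessing model is only an assumed illustrative form, since the abstract does not spell out its exact expression.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_3pl(theta, a, b, c):
    """Standard 3PL model: constant pseudo-guessing parameter c."""
    return c + (1.0 - c) * sigmoid(a * (theta - b))

def p_2pl_guessing(theta, a, b):
    """Illustrative 2PL-with-guessing: the guessing probability grows with
    ability but is capped at 0.5.  The exact function used in the thesis is
    not given here; this form is only an assumption for illustration."""
    g = 0.5 * sigmoid(theta)   # higher ability -> higher chance of a lucky guess, never above 0.5
    return g + (1.0 - g) * sigmoid(a * (theta - b))

theta = np.linspace(-3, 3, 7)
print(p_3pl(theta, a=1.2, b=0.0, c=0.2))
print(p_2pl_guessing(theta, a=1.2, b=0.0))
```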
353

VECTOR QUANTIZATION USING ODE BASED NEURAL NETWORK WITH VARYING VIGILANCE PARAMETER

Khudhair, Ali Dheyaa 01 May 2012 (has links)
The importance of vector quantization has been increasing, and it is becoming a vital element in the classification and clustering of different types of information, supporting the development of machine learning and decision making. However, the techniques that implement vector quantization have always fallen short in some respect. Many researchers have pursued the idea of a vector quantization mechanism that is fast and can classify data generated rapidly from some source; most of these mechanisms depend on a specific style of neural network, and this research is one such attempt. One dilemma this technology faces is the compromise that must be made between the accuracy of the results and the speed of the classification or quantization process. In addition, the complexity of the proposed algorithms makes it very hard to implement and realize any of them in hardware that could serve as a fast online classifier able to keep up with the speed of the information presented to the system; examples of such information sources are high-speed processors and computer-network intrusion detection systems. This research focuses on creating a vector quantizer using neural networks. The neural network used in this study is novel and has a unique feature: it is based solely on a set of ordinary differential equations. The input data are injected into those equations, and classification is based on finding the equilibrium points of the system in the presence of those input patterns. The elimination of conditional statements in this neural network means that the implementation and execution of the classification process follow a single path that can accommodate any value. A single execution path allows easier algorithm analysis and opens the possibility of realizing the network as a purely analog circuit whose operating speed can match the speed of the incoming information and classify the data in real time. The details of this dynamical system are provided in this research, along with the shortcomings we faced and how we overcame them. A drastic change in the way of looking at the speed-versus-accuracy compromise is also presented, aiming toward a technique that can produce accurate results at high speed.
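The sketch below illustrates the general idea of classification by integrating an ODE to its equilibrium point, using simple replicator dynamics over a hypothetical codebook; it is not the specific dynamical system developed in this research.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Prototype (codebook) vectors -- illustrative values, not from the thesis.
prototypes = np.array([[0.0, 0.0],
                       [1.0, 0.0],
                       [0.0, 1.0]])

def classify(x, t_final=50.0):
    """Assign x to a prototype by integrating a conditional-free ODE
    (replicator dynamics) and reading off the stable equilibrium point.
    This is only a sketch of the 'classification as equilibrium' idea,
    not the system proposed in the thesis."""
    s = -np.linalg.norm(prototypes - x, axis=1)      # similarity = negative distance
    def rhs(t, y):
        return y * (s - y @ s)                       # replicator dynamics on the simplex
    y0 = np.full(len(prototypes), 1.0 / len(prototypes))
    sol = solve_ivp(rhs, (0.0, t_final), y0, rtol=1e-8)
    return int(np.argmax(sol.y[:, -1]))              # index of the dominant component

print(classify(np.array([0.9, 0.1])))   # expected: prototype 1
```

The right-hand side contains no branching, so the trajectory follows a single execution path regardless of the input value, which is the property the abstract highlights for analog realization.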
354

A study concerning homeostasis and population development of collagen fibers

Alves, Calebe de Andrade January 2017 (has links)
ALVES, C. A. A study concerning homeostasis and population development of collagen fibers. 2017. 88 f. Tese (Doutorado em Física) – Centro de Ciências, Universidade Federal do Ceará, Fortaleza, 2017. / Collagen is a generic name for the group of the most common proteins in mammals. It confers mechanical stability, strength and toughness to tissues in a large number of species. In this work we investigate two properties of collagen that explain in part the choice by natural selection of this substance as an essential building material. In the first study the property under investigation is the homeostasis of a single fiber, i.e., the maintenance of its elastic properties under the action of collagen monomers, which contribute to its stiffening, and enzymes, which digest it. The model used for this purpose is a one-dimensional chain of linearly elastic springs in series coupled with layers of sites. Particles representing monomers and enzymes can diffuse along these layers and interact with the springs according to specified rules. The predicted lognormal distribution for the local stiffness is compared to experimental data from electron microscopy images, and a good concordance is found.
The second part of this work deals with the distribution of sizes among multiple collagen fibers, which is found to be bimodal, hypothetically because this leads to a compromise between stiffness and toughness of the bundle of fibers. We propose a mechanism for the evolution of the fiber population which includes growth, fusion and birth of fibers, and we write a Population Balance Equation for it. By performing a parameter estimation over a set of Monte Carlo simulations, we determine the parameters that best fit the available data.
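A minimal sketch of the ingredients of the first model is shown below: springs in series subject to multiplicative stiffening (monomer attachment) and softening (enzymatic digestion) events. The rates and factors are illustrative assumptions, but multiplicative updates of this kind naturally produce an approximately lognormal local stiffness, consistent with the distribution reported in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# One fiber = N springs in series; each event multiplies a random spring's
# stiffness (monomer deposition stiffens, enzyme digestion softens).
# Parameter values are illustrative assumptions, not the thesis calibration.
N, n_events = 200, 20_000
k = np.ones(N)
for _ in range(n_events):
    i = rng.integers(N)
    if rng.random() < 0.5:
        k[i] *= 1.05      # monomer attaches -> local stiffening
    else:
        k[i] *= 0.95      # enzyme acts -> local softening

k_eff = 1.0 / np.sum(1.0 / k)          # springs in series: 1/k_eff = sum_i 1/k_i
print("effective fiber stiffness:", k_eff)
print("log-stiffness mean/std:", np.log(k).mean(), np.log(k).std())  # local stiffness ~ lognormal
```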
355

QUALITATIVE AND QUANTITATIVE PROCEDURE FOR UNCERTAINTY ANALYSIS IN LIFE CYCLE ASSESSMENT OF WASTEWATER SOLIDS TREATMENT PROCESSES

Alyaseri, Isam 01 May 2014 (has links)
In order to perform environmental analysis and find the best management option for wastewater treatment processes using the life cycle assessment (LCA) method, uncertainty in LCA has to be evaluated. A qualitative and quantitative procedure was constructed to deal with uncertainty in wastewater treatment LCA studies during the inventory and analysis stages. The qualitative steps in the procedure include setting rules for the inclusion of inputs and outputs in the life cycle inventory (LCI), setting rules for the proper collection of data, identifying and conducting data collection analysis for the significant contributors in the model, evaluating data quality indicators, selecting the proper life cycle impact assessment (LCIA) method, evaluating the uncertainty in the model through different cultural perspectives, and comparing with other LCIA methods. The quantitative steps in the procedure include assigning the best-guess value and the proper distribution for each input or output in the model, calculating the uncertainty for those inputs or outputs based on data characteristics and the data quality indicators, and finally using probabilistic analysis (Monte Carlo simulation) to estimate uncertainty in the outcomes. Environmental burdens from the solids handling unit at Bissell Point Wastewater Treatment Plant (BPWWTP) in Saint Louis, Missouri were analyzed. Plant-specific data plus literature data were used to build an input-output model. The environmental performance of the existing treatment scenario (dewatering-multiple hearth incineration-ash to landfill) was analyzed. To improve the environmental performance, two alternative scenarios (fluid bed incineration and anaerobic digestion) were proposed, constructed, and evaluated. System boundaries were set to include the construction, operation and dismantling phases. The impact assessment method chosen was Eco-indicator 99, and the impact categories were: carcinogenicity, respiratory organics and inorganics, climate change, radiation, ozone depletion, ecotoxicity, acidification-eutrophication, and minerals and fossil fuels depletion. Analysis of the existing scenario shows that most of the impacts came from the operation phase, in the categories related to fossil fuels depletion, respiratory inorganics, and carcinogens, due to the energy consumed and the emissions from incineration. The proposed alternatives showed better performance than the existing treatment, and fluid bed incineration performed better than anaerobic digestion. Uncertainty analysis showed a 57.6% probability that fluid bed incineration has less environmental impact than anaerobic digestion. Based on single-score ranking in the Eco-indicator 99 method, the environmental impact order is: multiple hearth incineration > anaerobic digestion > fluid bed incineration. This order was the same for the three model perspectives in the Eco-indicator 99 method and when using other LCIA methods (Eco-point 97 and CML 2000). The study showed that incorporating qualitative/quantitative uncertainty analysis into LCA gives more information than deterministic LCA and can strengthen the LCA study. The procedure tested in this study showed that Monte Carlo simulation can be used to quantify uncertainty in wastewater treatment studies, and the procedure can be used to analyze the performance of other treatment options.
Although the analyses under different perspectives and with different LCIA methods did not change the ranking of the scenarios, they showed a possibility of variation in the final outcomes of some categories. The study showed the importance of providing decision makers with the best and worst possible outcomes in any LCA study and informing them about the perspectives and assumptions used in the assessment. Monte Carlo simulation can perform uncertainty analysis in a comparative LCA only between two products or scenarios at a time, based on the (A-B) approach, because the probability distributions of the outcomes overlap. It is recommended that the procedure be modified to include more than two scenarios.
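A minimal sketch of the quantitative step, Monte Carlo propagation with the comparative (A-B) approach, is shown below; the single-score values and geometric standard deviations are hypothetical placeholders, not the BPWWTP inventory data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical single-score impacts (Eco-indicator points) for two scenarios,
# each modelled as lognormal with a geometric standard deviation derived from
# data-quality indicators.  Numbers are illustrative, not the BPWWTP inventory.
def lognormal(median, gsd, size):
    return rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=size)

fluid_bed = lognormal(median=100.0, gsd=1.3, size=n)   # scenario A
anaerobic = lognormal(median=105.0, gsd=1.3, size=n)   # scenario B

# Comparative (A-B) approach: look at the distribution of the difference
# between the two scenarios, not at the two overlapping marginals.
diff = fluid_bed - anaerobic
print("P(fluid bed has lower impact) =", np.mean(diff < 0.0))
```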
356

Modelos de regressão aleatória para a estimação de parâmetros genéticos da produção e constituintes do leite de búfalas [Random regression models for estimating genetic parameters of milk yield and milk constituents of dairy buffaloes]

Aspilcueta Borquis, Rusbel Raúl [UNESP] 31 May 2011 (has links) (PDF)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / Abstract: Genetic parameters for milk, fat and protein yields on the test day were estimated for the first lactations of dairy buffaloes using single- and multiple-trait random regression analyses. For the single-trait random regression analyses, the 1,433 first lactations were analyzed. The models included the random effects of additive genetics, permanent environment and residual, and the fixed effects of contemporary group and number of milkings (one or two); the linear and quadratic effects of the covariable age of the cow at calving and the mean lactation curve of the population were modeled with third-order Legendre orthogonal polynomials. The additive genetic and permanent environment random effects were modeled by random regression on Legendre orthogonal polynomials of third to sixth order. The results indicate that low-order Legendre polynomials are sufficient to model the structure of the genetic and permanent environment (co)variances. The heritability estimates of the traits studied were moderate, which supports selecting animals to obtain genetic gains. The genetic correlation estimates between test days were high, indicating that whatever selection criterion is adopted, indirect genetic gains are expected over the whole lactation curve. For the multiple-trait random regression model, the same data set was analyzed under the same assumptions as the single-trait model.
To model the random effects, Legendre polynomials of third and fourth order were used for the genetic and permanent environment effects, respectively. The residual variances were modeled by grouping four residual classes: months 1, 2-3, 4-8 and 9-10 of lactation. The results indicate that the traits presented enough genetic variance for selection, and the estimates of the variance components may be used to implement a BLUP evaluation in the Brazilian buffalo population. The genetic correlations among test days for a trait ... (complete abstract: click electronic access below)
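The sketch below shows how the Legendre covariables used in such random regression models can be built from the test days of a lactation; the test days and the (unnormalized) polynomial basis are illustrative only.

```python
import numpy as np
from numpy.polynomial import legendre

# Test-day records across a 305-day lactation; days are rescaled to [-1, 1],
# the interval on which Legendre polynomials are orthogonal.
days = np.array([15, 45, 75, 105, 135, 165, 195, 225, 255, 285])
x = 2.0 * (days - days.min()) / (days.max() - days.min()) - 1.0

# Design matrix of Legendre covariables up to third order (intercept..cubic),
# as used to model the mean lactation curve and the random regression effects.
# Normalization constants, often applied in animal breeding software, are omitted.
order = 3
Phi = legendre.legvander(x, order)        # shape (n_records, order + 1)
print(Phi.round(3))
```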
357

Algumas distribuições de probabilidade para dados grupados e censurados [Some probability distributions for grouped and censored data]

Cruz, José Nilton da [UNESP] 09 February 2012 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Abstract: Experiments are commonly conducted in a way that does not allow the exact time of the event (for example, death) to be observed, only the interval in which it occurred, which characterizes interval-censored responses. When the individuals are assessed at the same times, a particular case of interval censoring arises, and data of this type are known as grouped and censored. Grouped data can present a large number of ties, that is, a proportion of ties greater than 25% (Chalita et al., 2002), and can be analyzed by treating time as discrete and fitting models to the probability that an individual fails in a given interval, given that it survived the previous interval (Lawless, 1982). The aim of this work is to propose survival models for grouped and censored data based on the Generalized Weibull (Mudholkar et al., 1996), Log-Exponentiated Weibull (Hashimoto et al., 2010) and Log-Burr XII (Silva, 2008) distributions. These models, together with the Generalized Log-Normal and Exponentiated Weibull models extended to grouped and censored data by Silveira et al. (2010), are then applied to a data set from a study of patients who underwent Duhamel-Haddad surgery (for which the Generalized Log-Normal and Exponentiated Weibull models had already been fitted in Silveira et al. (2010)) and compared by the corrected Akaike Information Criterion (AICc).
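A minimal sketch of the discrete-time formulation mentioned above, the probability of failing in an interval given survival to its start, is shown below for a plain Weibull baseline; the thesis instead fits the generalized distributions listed, and the shape and scale values here are arbitrary.

```python
import numpy as np

def weibull_survival(t, shape, scale):
    return np.exp(-(t / scale) ** shape)

# Interval boundaries (e.g., equally spaced follow-up visits) and a Weibull
# baseline.  The shape/scale values are illustrative; the thesis fits
# generalized Weibull-type distributions to the grouped data instead.
edges = np.arange(0.0, 13.0)              # 12 unit-length intervals
shape, scale = 1.4, 8.0

S = weibull_survival(edges, shape, scale)
# Discrete hazard: probability of failing in interval j given survival to t_{j-1}.
hazard = 1.0 - S[1:] / S[:-1]
print(np.round(hazard, 4))
```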
358

Modelagem do escoamento ao longo de evaporadores de serpentina com tubos aletados [Modeling of the flow through finned-tube coil evaporators]

Bueno, Sandhoerts Said [UNESP] 17 May 2004 (has links) (PDF)
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / Abstract: This work presents a distributed numerical model to simulate the unsteady refrigerant flow and air flow in dry-expansion finned-tube coil evaporators, of the kind widely used in air conditioning and refrigeration systems. The model divides the refrigerant flow inside the tubes into a two-phase liquid-vapor region and a single-phase region where the refrigerant is superheated. The refrigerant pressure drop and the moisture condensation from the air flowing in cross flow over the outside of the tubes are also taken into account. The two-phase refrigerant flow is simplified as one-dimensional, and the slip between the liquid and vapor phases is considered. For the refrigerant flow, the mass, momentum and energy conservation equations are solved to obtain the density, velocity and temperature of the refrigerant. For the air flow, the energy and mass (humidity) conservation equations are solved to obtain the temperature and absolute humidity of the air crossing the evaporator. The energy conservation equation for the tube wall is also solved to determine the wall temperature distribution. The finite volume method is used to discretize the governing equations, and a Newton-Raphson scheme is used to solve the resulting system of equations.
To analyze the unsteady behavior of the evaporator, steady-state conditions are obtained first, and then a step change in the refrigerant mass flow rate is imposed at the tube inlet. Results such as the superheating degree along the coil and the air temperature at the outlet are compared with experimental data available in the open literature. Given known operating conditions and geometric parameters, the model can also determine the refrigerant mass flow rate through parameter estimation with the Levenberg-Marquardt minimization method ... (Complete abstract: click electronic address below)
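A minimal sketch of the Levenberg-Marquardt parameter estimation step is shown below; the algebraic surrogate for the evaporator outlet superheat and all numerical values are assumptions for illustration, standing in for the full finite-volume model.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy surrogate of the evaporator model: outlet superheat (K) as a function of
# refrigerant mass flow rate m_dot (kg/s) at several air inlet temperatures.
# The thesis model solves the distributed finite-volume equations; this
# algebraic surrogate only illustrates the Levenberg-Marquardt estimation step.
T_air = np.array([22.0, 25.0, 28.0, 31.0])

def superheat_model(m_dot, T_air):
    return 0.4 * (T_air - 5.0) / (m_dot * 100.0)

m_dot_true = 0.018
measured = superheat_model(m_dot_true, T_air) + np.array([0.1, -0.05, 0.08, -0.02])

def residuals(p):
    # Difference between model prediction and "measured" superheat.
    return superheat_model(p[0], T_air) - measured

fit = least_squares(residuals, x0=[0.010], method='lm')   # Levenberg-Marquardt
print("estimated mass flow rate:", fit.x[0])
```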
359

Algumas distribuições de probabilidade para dados grupados e censurados [Some probability distributions for grouped and censored data]

Cruz, José Nilton da. January 2012 (has links)
Advisor: Liciana Vaz de Arruda Silveira / Committee: Roseli Aparecida Leandro / Committee: Lídia Raquel de Carvalho / Abstract: Experiments are commonly conducted in a way that does not allow the exact time of the event (for example, death) to be observed, only the interval in which it occurred, which characterizes interval-censored responses. When the individuals are assessed at the same times, a particular case of interval censoring arises, and data of this type are known as grouped and censored. Grouped data can present a large number of ties, that is, a proportion of ties greater than 25% (Chalita et al., 2002), and can be analyzed by treating time as discrete and fitting models to the probability that an individual fails in a given interval, given that it survived the previous interval (Lawless, 1982). The aim of this work is to propose survival models for grouped and censored data based on the Generalized Weibull (Mudholkar et al., 1996), Log-Exponentiated Weibull (Hashimoto et al., 2010) and Log-Burr XII (Silva, 2008) distributions. These models, together with the Generalized Log-Normal and Exponentiated Weibull models extended to grouped and censored data by Silveira et al. (2010), are then applied to a data set from a study of patients who underwent Duhamel-Haddad surgery (for which the Generalized Log-Normal and Exponentiated Weibull models had already been fitted in Silveira et al. (2010)) and compared by the corrected Akaike Information Criterion (AICc). / Master's degree
360

Topics in image recovery and image quality assessment

Cui, Lei 16 November 2016 (has links)
Image recovery, especially image denoising and deblurring, has been widely studied during the last decades. Variational models can preserve the edges of images well while restoring them from noise and blur. Some variational models are non-convex, and at present the methods for non-convex optimization are limited. This thesis develops a non-convex optimization approach based on the difference of convex functions algorithm (DCA) for solving different variational models for various kinds of noise removal problems. Depending on the imaging environment and imaging technique, the noise that appears in images can follow different kinds of distribution. Here we show how to apply DCA to Rician noise removal and Cauchy noise removal. Our experiments demonstrate that the proposed non-convex algorithms outperform existing ones, with better PSNR and less computation time. The progress made by our new method can improve the precision of diagnostic techniques by reducing Rician noise more efficiently, and can improve the precision of synthetic aperture radar imaging by reducing the Cauchy noise within it. When applying variational models to image denoising and deblurring, a significant issue is the choice of the regularization parameters. Few methods have been proposed for regularization parameter selection so far, and the numerical algorithms of existing methods are either complicated or implicit. In order to find a more efficient and easier way to estimate regularization parameters, we create a new image sharpness metric called SQ-Index, which is based on the theory of Global Phase Coherence. The new metric can be used to estimate parameters for a variety of variational models, and can also estimate the noise intensity based on specific models. In our experiments, we show the noise estimation performance of this new metric. Moreover, extensive experiments are carried out on image denoising and deblurring under different kinds of noise and blur. The numerical results show the robust performance of image restoration when our metric is applied to parameter selection for different variational models.
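A minimal sketch of a DCA iteration for a non-convex denoising model is given below; it uses a pixelwise logarithmic penalty on a 1-D signal rather than the Rician or Cauchy fidelity terms treated in the thesis, so the splitting and parameter values are illustrative assumptions only.

```python
import numpy as np

def dca_log_denoise(v, lam=0.8, eps=0.5, iters=30):
    """DCA sketch for the non-convex model
        min_u 0.5*||u - v||^2 + lam * sum(log(1 + |u|/eps)),
    split as G(u) - H(u) with
        G(u) = 0.5*||u - v||^2 + (lam/eps)*||u||_1            (convex),
        H(u) = (lam/eps)*||u||_1 - lam*sum(log(1 + |u|/eps))   (convex).
    Each DCA step linearizes H at the current iterate and solves the
    remaining convex problem in closed form (soft-thresholding).
    A pixelwise toy, not the TV-type models treated in the thesis."""
    u = v.copy()
    for _ in range(iters):
        w = np.sign(u) * (lam / eps) * (1.0 - 1.0 / (1.0 + np.abs(u) / eps))  # subgradient of H
        z = v + w
        u = np.sign(z) * np.maximum(np.abs(z) - lam / eps, 0.0)               # soft-threshold
    return u

rng = np.random.default_rng(1)
clean = np.zeros(50); clean[[5, 20, 35]] = [3.0, -2.0, 4.0]
noisy = clean + 0.3 * rng.standard_normal(50)
print(np.round(dca_log_denoise(noisy)[[5, 20, 35]], 2))
```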
