About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

d-Limonene, a Renewable Component for Polymer Synthesis

Ren, Shanshan January 2017 (has links)
d-Limonene (Lim) was used in various polymer formulations to achieve more sustainable polymerization. Lim is a renewable and essentially non-toxic compound, derived from citrus fruit peels, that may replace some of the many toxic and fossil-based chemicals used in polymer synthesis. Bulk free-radical polymerizations of n-butyl acrylate (BA) with Lim were performed to investigate Lim copolymerization kinetics and to estimate the monomer reactivity ratios, important parameters for predicting copolymer composition. Kinetic modeling of the BA/Lim copolymerization was performed with the PREDICI simulation software. The model supports the presence of a significant degradative chain transfer reaction attributable to the allylic hydrogens in Lim. Nonetheless, relatively high molecular weight polymers were produced, and it was concluded that Lim behaves more like a chain transfer agent than a co-monomer. Terpolymerizations of BA and butyl methacrylate (BMA) with Lim were then performed. To predict the terpolymer composition, the monomer reactivity ratios for BA/BMA were estimated. By applying the three pairs of co-monomer reactivity ratios to the integrated Mayo-Lewis equation, terpolymer compositions were accurately predicted up to high monomer conversion levels. Lim was then used as a chain transfer agent to prepare core-shell latex-based pressure-sensitive adhesives (PSAs) comprising BA and styrene via seeded semi-batch emulsion polymerization. By varying the concentrations of Lim and the divinylbenzene crosslinker, the core polymer microstructure was modified to yield different molecular weights and degrees of crosslinking. The core latex was then used as a seed to prepare core-shell latexes, and by changing the Lim concentration during the shell-stage polymerization, the molecular weight of the shell polymer was also modified. The latexes were characterized for their microstructure and were cast as films for PSA performance evaluation. PSA performance was shown to be strongly related to the polymer microstructure: tack and peel strength decreased with increasing Lim concentration, while shear strength passed through a maximum as the core Lim concentration increased from 0 to 5 phm.
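For readers unfamiliar with the terminal model, the (differential) Mayo-Lewis equation predicts the instantaneous copolymer composition from the feed composition and the two reactivity ratios. Below is a minimal Python sketch of that calculation; the reactivity ratios used are purely hypothetical, not the BA/Lim values estimated in the thesis.

```python
import numpy as np

def mayo_lewis_F1(f1, r1, r2):
    """Instantaneous copolymer composition F1 (terminal model).

    f1     : mole fraction of monomer 1 in the feed
    r1, r2 : monomer reactivity ratios
    """
    f2 = 1.0 - f1
    num = r1 * f1**2 + f1 * f2
    den = r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2
    return num / den

# Illustrative values only -- not the thesis's estimated ratios.
r1, r2 = 5.0, 0.01   # hypothetical: monomer 1 far more reactive
for f1 in (0.2, 0.5, 0.8):
    print(f"feed f1={f1:.1f} -> copolymer F1={mayo_lewis_F1(f1, r1, r2):.3f}")
```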
2

Extending Ranked Set Sampling to Survey Methodology

Sroka, Christopher J. 11 September 2008 (has links)
No description available.
3

New methods for projecting enrollments within urban school districts

Smith, Geoffrey Hutchinson 15 December 2017 (has links)
This dissertation models K-12 enrollment within an urban school district using two grade progression ratio (gpr)-based methods and two housing-choice methods. The housing-choice methods provide, for the first time, a new spatio-demographic model for projecting school enrollments by grade for any flexibly defined set of individual catchment areas. All methods use the geocoded pattern of individual, address-matched enrollments within the study district but differ in how they model these data to estimate key parameters. The conventional method projects the intra-urban pattern of enrollment by assuming no change in gprs, even though gprs are themselves functions of enrollment change. The adaptive kernel ratio estimation (KRE) of local gprs successfully predicts local changes in gprs from three preceding two-year periods of gpr change. The two housing-choice methods are based on different mixtures of a generalized linear model and a periodic model, each of which uses housing counts and characteristics; results are clearly sensitive to these differences. Using the above predictions of gpr change, the adaptive KRE enrollment projections are 4.1% better than those made using the conventional model. The two housing-choice models were 2.0% less accurate than the conventional model for the first three years of the projection but were 5.1% more accurate for the fourth and fifth years. Limitations are discussed. These findings help close a major gap in the literature on small-area enrollment projections, shed new light on spatial dynamics at scales below that of the school district, and permit new kinds of investigations of urban/suburban school district demography.
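As background, the conventional cohort-survival approach carries each grade's enrollment forward with a fixed grade progression ratio. A minimal sketch with hypothetical enrollments and gprs follows; the adaptive KRE and housing-choice refinements studied in the dissertation are not shown.

```python
import numpy as np

def project_gpr(enroll, gprs, years=5):
    """Project K-12 enrollment with constant grade progression ratios.

    enroll : current enrollment by grade (K..12, length 13)
    gprs   : length-12 array; gprs[g] = E[g+1, t+1] / E[g, t]
    Returns a (years+1, 13) array. The entry grade (K) is held constant
    here, though in practice it would come from births or housing data.
    """
    out = [np.asarray(enroll, dtype=float)]
    for _ in range(years):
        prev = out[-1]
        nxt = np.empty_like(prev)
        nxt[0] = prev[0]               # simplistic kindergarten assumption
        nxt[1:] = gprs * prev[:-1]     # cohort survival by grade
        out.append(nxt)
    return np.vstack(out)

# Hypothetical district: 1000 per grade, mild attrition in upper grades.
enroll = np.full(13, 1000.0)
gprs = np.concatenate([np.full(8, 1.01), np.full(4, 0.97)])
print(project_gpr(enroll, gprs, years=3).round(0))
```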
4

Signal-to-noise-plus-interference ratio estimation and statistics for direct sequence spread spectrum code division multiple access communications

Gupta, Amit January 2004 (has links)
No description available.
5

Towards Adaptation of OFDM Based Wireless Communication Systems

Billoori, Sharath Reddy 31 March 2004 (has links)
OFDM has been recognized as a powerful multi-carrier modulation technique that provides efficient spectral utilization and resilience to frequency-selective fading channels. Adaptive modulation is a concept whereby the modulation modes are dynamically changed based on the perceived instantaneous channel conditions. In conjunction with OFDM, adaptive modulation is a very powerful technique for combating the frequency-selective nature of mobile channels while simultaneously attempting to maximize the time-varying capacity of the channel. This is based on the fact that frequency-selective fading affects the sub-carriers unevenly, causing some of them to fade more severely than others. The modulation modes are adaptively selected on the sub-carriers, depending on the amount of fading, to maximize throughput and improve the overall BER. Transmission parameter adaptation is the response of the transmitter to the time-varying channel quality. To react efficiently to the dynamic nature of the channel, adaptive OFDM systems rely on efficient algorithms in three key areas: channel quality estimation, transmission parameter selection, and signaling or blind detection of the modified parameters. Together, these are termed the enabling techniques that contribute to the effective performance of adaptive OFDM systems. This thesis deals with higher-performance, more efficient enabling parameter estimation algorithms that further improve the overall performance of adaptive OFDM systems. Traditional estimation of channel quality indicators, such as noise power and SNR, assumes that the noise has a flat power spectral density within the transmission band of the OFDM signal; hence, a single estimate of the noise power is obtained by averaging the instantaneous noise power values across all the sub-carriers. In reality, the noise within the OFDM bandwidth is a combination of white and correlated noise components and has an uneven effect across the sub-carriers. It is this fact that has motivated the proposal of a windowing approach for noise power estimation. Windowing provides many local estimates of the dynamic noise statistics and allows better noise tracking across the OFDM transmission band. This method is particularly useful for better resource utilization and improved performance in sub-band adaptive modulation, where adaptation is performed on the sub-carriers on a group-by-group basis according to the observed channel conditions. Blind modulation mode detection is another relatively unexplored issue in the adaptation of OFDM systems. The receiver has to be informed of the modulation modes used at the transmitter for proper demodulation; if this can be done without any explicit signaling information embedded within the OFDM symbol, it has the advantage of improved throughput and data capacity. A model selection approach is taken, and a novel statistical blind modulation detection method based on the Kullback-Leibler (K-L) distance is proposed. This algorithm takes into account the distribution of the Euclidean distances from the received noisy samples on the complex plane to the closest legitimate constellation points of all the modulation modes used.
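To illustrate the windowing idea, the sketch below contrasts a single band-wide noise-power average with per-window estimates on synthetic per-sub-carrier noise. The window length and the coloured-noise profile are assumptions for illustration, not values from the thesis.

```python
import numpy as np

def windowed_noise_power(noise_inst, win=8):
    """Local noise-power estimates across OFDM sub-carriers.

    noise_inst : instantaneous noise power per sub-carrier
                 (e.g. measured on pilot symbols); shape (n_subcarriers,)
    win        : sub-carriers per window
    Returns one estimate per window instead of a single band-wide
    average, so coloured (correlated) noise is tracked across the band.
    """
    n = len(noise_inst) // win * win
    return noise_inst[:n].reshape(-1, win).mean(axis=1)

rng = np.random.default_rng(0)
# Hypothetical 64-carrier band: white noise plus a coloured bump mid-band.
inst = rng.exponential(1.0, 64) + np.r_[np.zeros(24), 3 * np.ones(16), np.zeros(24)]
print("single band-wide estimate:", inst.mean().round(2))
print("windowed estimates:", windowed_noise_power(inst, 8).round(2))
```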
6

Design of robust blind detector with application to watermarking

Anamalu, Ernest Sopuru 14 February 2014 (has links)
One of the difficult issues in detection theory is to design a robust detector that takes into account the actual distribution of the original data. The most commonly used statistical model for blind detection is the Gaussian distribution; specifically, linear correlation is an optimal detection method in the presence of Gaussian distributed features, but it has been found to be a sub-optimal detection metric when the density deviates from the Gaussian distribution. Hence, we formulate a detection algorithm that improves detection probability by exploiting the true characteristics of the original data. To understand the underlying distribution of the data, we employ estimation techniques such as a parametric model (an approximated density ratio logistic regression model) and semiparametric estimation. The semiparametric model has the advantage of yielding density ratios as well as the individual densities. Both methods are applicable to signals such as watermarks embedded in the spatial domain and outperform conventional linear correlation on non-Gaussian distributed data.
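As one hedged illustration of density-ratio-based detection (not necessarily the exact model used in the thesis), a probabilistic classifier trained to separate watermarked from unwatermarked feature samples yields a logit that approximates the log likelihood ratio, which can replace plain linear correlation on heavy-tailed host data. All distributions and the watermark shift below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
h0 = rng.laplace(0.0, 1.0, (2000, 8))               # heavy-tailed host features (H0)
h1 = rng.laplace(0.0, 1.0, (2000, 8)) + 0.3         # hypothetical watermark shift (H1)
X = np.vstack([h0, h1])
y = np.r_[np.zeros(len(h0)), np.ones(len(h1))]

# With balanced classes, the logistic-regression logit approximates
# log p1(x)/p0(x), i.e. the log likelihood ratio detection statistic.
clf = LogisticRegression(max_iter=1000).fit(X, y)
log_ratio = clf.decision_function(h1[:5])
print("detection statistic for 5 watermarked samples:", log_ratio.round(2))
```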
7

GENERATIVE MODELS WITH MARGINAL CONSTRAINTS

Bingjing Tang (16380291) 16 June 2023 (has links)
Generative models form powerful tools for learning data distributions and simulating new samples. Recent years have seen significant advances in the flexibility and applicability of such models, with Bayesian approaches like nonparametric Bayesian models and deep neural network models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) finding use in a wide range of domains. However, the black-box nature of these models means that they are often hard to interpret, and they often come with modeling implications that are inconsistent with side information derived from domain knowledge. This thesis studies situations where the modeler has side knowledge represented as probability distributions on functionals of the objects being modeled, and we study methods to incorporate this particular kind of side knowledge into flexible generative models. This dissertation covers three main parts.

The first part focuses on incorporating a special case of the aforementioned side knowledge into flexible nonparametric Bayesian models. Practitioners often have additional distributional information about a subset of the coordinates of the observations being modeled. The flexibility of nonparametric Bayesian models usually implies incompatibility with this side information, and such inconsistency makes it necessary to develop methods for incorporating the side knowledge into these models. We design a specialized generative process to build in this side knowledge and propose a novel sigmoid Gaussian process conditional model. We also develop a corresponding posterior sampling method based on data augmentation to overcome a doubly intractable problem. We illustrate the efficacy of our proposed constrained nonparametric Bayesian model in a variety of real-world scenarios, including modeling environmental and earthquake data.

The second part of the dissertation discusses neural network approaches to satisfying the more general side knowledge described above; here the generative models considered broaden to black-box models. We formulate the side knowledge incorporation problem as a constrained divergence minimization problem and propose two scalable neural network approaches as its solution. We demonstrate their practicality using various synthetic and real examples.

The third part of the dissertation concentrates on a specific generative model of the individual pixels of fMRI data constructed from a latent group image. There is usually two-fold side knowledge about the latent group image: spatial structure and partial activation zones. The former can be captured by modeling the prior for the group image with Markov random fields; the latter, which is often obtained from previous related studies, is left for future research. We propose a novel Bayesian model with Markov random fields and estimate the maximum a posteriori (MAP) group image. We also derive a variational Bayes algorithm to overcome local optima in the optimization.
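One simple way to read "side knowledge as a probability distribution on a functional" is as a penalty on the mismatch between a generated marginal and the known marginal. The sketch below computes such a penalized objective with a sample-based MMD term; the penalty form, weight, and toy distributions are assumptions for illustration, not the thesis's formulation.

```python
import numpy as np

def mmd2(x, y, bw=1.0):
    """Squared MMD with a Gaussian kernel (sample-based, biased estimate)."""
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-d2 / (2 * bw**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Hypothetical setup: generator samples of shape (n, 2); side knowledge
# says the marginal of coordinate 0 should follow N(0, 1). A constrained
# objective adds an MMD penalty on that marginal to the usual loss.
rng = np.random.default_rng(2)
gen = rng.normal(0.5, 1.5, (500, 2))       # stand-in for generator output
side = rng.normal(0.0, 1.0, 500)           # samples from the side marginal

fit_loss = 0.0                             # placeholder for the divergence term
lam = 10.0                                 # hypothetical penalty weight
objective = fit_loss + lam * mmd2(gen[:, 0], side)
print("penalized objective:", round(objective, 4))
```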
8

Stochastic density ratio estimation and its application to feature selection

Braga, Ígor Assis 23 October 2014 (has links)
The estimation of the ratio of two probability densities is an important statistical tool in supervised machine learning. In this work, we introduce new methods of density ratio estimation based on the solution of a multidimensional integral equation involving cumulative distribution functions. The resulting methods use the novel V-matrix, a concept that does not appear in previous density ratio estimation methods. Experiments demonstrate the good potential of this new approach compared with previous methods. Mutual information (MI) estimation is a key component in feature selection and depends essentially on density ratio estimation. Using one of the methods of density ratio estimation proposed in this work, we derive a new estimator (VMI) and compare it experimentally to previously proposed MI estimators. Experiments conducted solely on mutual information estimation show that VMI compares favorably to previous estimators, and experiments applying MI estimation to feature selection in classification tasks show that better MI estimation leads to better feature selection performance. Parameter selection greatly impacts the classification accuracy of kernel-based Support Vector Machines (SVM); however, this step is often overlooked in experimental comparisons because it is time-consuming and requires familiarity with the inner workings of SVM. In this work, we propose procedures for SVM parameter selection that are economical in their running time. In addition, we propose the use of a non-linear kernel function, the min kernel, that can be applied to both low- and high-dimensional cases without adding another parameter to the selection process. The combination of the proposed parameter selection procedures and the min kernel yields a convenient way of economically extracting good classification performance from SVM. The Regularized Least Squares (RLS) regression method is another kernel method that depends on proper selection of its parameters. When training data are scarce, traditional parameter selection often leads to poor regression estimation. To mitigate this issue, we explore a kernel that is less susceptible to overfitting, the additive INK-splines kernel. We then consider alternatives to cross-validation for parameter selection that have been shown to perform well for other regression methods. Experiments conducted on real-world datasets show that the additive INK-splines kernel outperforms both the RBF kernel and the previously proposed multiplicative INK-splines kernel, and that the alternative parameter selection procedures fail to consistently improve performance. Still, we find that the Finite Prediction Error method with the additive INK-splines kernel performs comparably to cross-validation.
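Since MI estimation depends essentially on density ratio estimation, here is a minimal sketch of that connection using a generic logistic-regression density-ratio estimator (not the V-matrix method proposed in the thesis): MI(X;Y) is the mean log ratio p(x,y)/(p(x)p(y)), estimated by separating joint samples from marginal-shuffled samples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)      # correlated pair, rho = 0.8

joint = np.column_stack([x, y])
indep = np.column_stack([x, rng.permutation(y)])  # product of marginals

def feats(a):
    # quadratic features so a linear logit can express the Gaussian log-ratio
    return np.column_stack([a, a[:, 0] * a[:, 1], a[:, 0] ** 2, a[:, 1] ** 2])

X = np.vstack([feats(joint), feats(indep)])
lab = np.r_[np.ones(n), np.zeros(n)]
clf = LogisticRegression(max_iter=1000).fit(X, lab)

# With balanced classes, the logit approximates the log density ratio;
# averaging it over joint samples estimates MI in nats.
mi_hat = clf.decision_function(feats(joint)).mean()
print(f"estimated MI: {mi_hat:.3f}  (Gaussian truth: {-0.5 * np.log(1 - 0.8**2):.3f})")
```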
9

Air-fuel ratio estimation using the cylinder pressure signal in Otto cycle internal combustion engines powered by ethanol

Costa, Fabiano Tadeu Mathias 25 April 2005 (has links)
The increasing demands for lower emissions and lower fuel consumption in internal combustion engines require improved methods for real-time diagnosis and better control of the combustion process. Determining the air-fuel ratio over a wide range of operating conditions is therefore desirable for better engine control. This work presents an application of the Method of Moments for obtaining an air-fuel ratio estimation model from the cylinder pressure signal in an ethanol-fueled Otto cycle engine. The resulting model will allow the development of new control systems for ethanol-fueled engines that use cylinder pressure as the control strategy.
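The abstract does not spell out the Moment Method, so the sketch below is only a hedged reading of it: summary moments of the crank-angle-resolved pressure trace are regressed onto known air-fuel ratios to calibrate an estimator. The toy pressure model and every number here are invented for illustration.

```python
import numpy as np

def pressure_moments(theta, p, k_max=3):
    """Raw moments of the normalized pressure trace over crank angle."""
    d = theta[1] - theta[0]
    w = p / (p.sum() * d)                      # normalize to a unit-area weight
    return np.array([(w * theta**k).sum() * d for k in range(1, k_max + 1)])

rng = np.random.default_rng(4)
theta = np.linspace(-60.0, 60.0, 241)          # crank angle, degrees
afr_true = rng.uniform(8.0, 10.0, 40)          # hypothetical range for ethanol
M = []
for afr in afr_true:
    # Toy model: peak pressure falls as the mixture leans out, plus noise.
    peak = 50.0 - 2.0 * (afr - 9.0) + rng.normal(0.0, 0.5)
    p = 20.0 + peak * np.exp(-((theta - 10.0) / 25.0) ** 2)
    M.append(pressure_moments(theta, p))

# Ordinary least squares maps moment features to air-fuel ratio.
A = np.column_stack([np.ones(len(M)), np.vstack(M)])
coef, *_ = np.linalg.lstsq(A, afr_true, rcond=None)
print("calibration coefficients:", coef.round(3))
```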
