1

Understanding kafirin microparticle formation and morphology

Da Silva, Marcio Faria January 2016 (has links)
A laboratory process exists for extracting kafirin protein from sorghum grain in order to form kafirin encapsulating microparticles. This laboratory process yields approximately 2 g of protein and takes in excess of 60 hours from start to finish. A scaled-up extraction process based on it, built around a 100 L extraction vessel, was established in order to extract large quantities of kafirin protein from sorghum grain. Approximately 2.5 kg of kafirin protein, containing approximately 80% protein after defatting, was extracted from red sorghum grain. This blended kafirin, the product of combining 9 batches from the scaled-up process, provided a consistent base raw material for further experimentation.

The blended kafirin was used to investigate the formation of kafirin encapsulating microparticles by means of the solvent phase separation technique, with acetic acid as the solvent phase. A series of experiments, selected from a partial factorial design, screened how microparticle formation was affected by four parameters: solvent-to-protein ratio, stirring speed, water addition rate and number of water droplets. The morphology of the resulting microparticles was analysed by light microscopy, FTIR and particle size analysis, and the different microparticles were characterised. The screening design showed that the acetic acid concentration was crucial for microparticle formation: microparticles did not form at a low mass ratio (2.3) of glacial acetic acid solvent to protein. Water addition rate and stirring rate also affected microparticle formation, while the number of water droplets was insignificant. Therefore, using a high solvent-to-protein mass ratio (6.8), additional refined partial factorial experiments were conducted, focusing on the effect of water addition rate and stirring speed on the final kafirin microparticle size.

Ultimately, a polynomial model was developed to predict the final kafirin microparticle size using only the water addition rate and stirring speed as inputs. The model had an R² value of 0.986 and was found to be relatively accurate during validation. The model also identified three distinct regions within the workspace:
• a region of large particles caused by protein mass agglomeration and crosslinking, occurring at low stirring speeds (< 400 rpm) and high water addition rates (> 5 mL/min);
• a region of only small individual microparticles, occurring at high stirring speeds (> 800 rpm) and low water addition rates (< 2 mL/min);
• a region of moderately sized particles existing as uniform agglomerates of the microparticles, occurring at moderate stirring speeds (≈ 600 rpm) and moderate water addition rates (≈ 3.5 mL/min).

Finally, these kafirin microparticles, prepared from protein extracted in the scaled-up process, were used to form qualitative microparticle films. The films were made without plasticiser and without dewatering the microparticles, using microparticles from the regions identified by the model. This qualitative film formation showed that agglomerated microparticles can form films.
This could be beneficial for the feasibility of a commercialised process for kafirin microparticle films, since production would be faster and less energy intensive. / Dissertation (MEng)--University of Pretoria, 2016. / Chemical Engineering / MEng / Unrestricted
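As a minimal sketch of how such a two-input polynomial model can be fitted and then used to classify the workspace, the snippet below fits a second-order response surface by least squares and applies the region thresholds reported in the abstract. The sample data and the choice of a second-order surface are illustrative assumptions, not values or settings from the dissertation.

```python
import numpy as np

# Hypothetical screening data: (water addition rate [mL/min], stirring speed [rpm])
# -> mean particle size [um]. Values are illustrative, not from the dissertation.
rate = np.array([2.0, 2.0, 3.5, 3.5, 5.0, 5.0, 3.5, 2.0, 5.0])
rpm = np.array([400, 800, 400, 800, 400, 800, 600, 600, 600])
size = np.array([14.0, 6.0, 55.0, 12.0, 120.0, 30.0, 28.0, 9.0, 70.0])

# Second-order polynomial response surface:
# size ~ b0 + b1*rate + b2*rpm + b3*rate^2 + b4*rpm^2 + b5*rate*rpm
X = np.column_stack([np.ones_like(rate), rate, rpm, rate**2, rpm**2, rate * rpm])
beta, *_ = np.linalg.lstsq(X, size, rcond=None)

def predict_size(rate_ml_min, rpm_val):
    """Evaluate the fitted response surface at one operating point."""
    x = np.array([1.0, rate_ml_min, rpm_val, rate_ml_min**2, rpm_val**2,
                  rate_ml_min * rpm_val])
    return float(x @ beta)

def region(rate_ml_min, rpm_val):
    """Classify the workspace using the thresholds reported in the abstract."""
    if rpm_val < 400 and rate_ml_min > 5:
        return "large agglomerated/crosslinked particles"
    if rpm_val > 800 and rate_ml_min < 2:
        return "small individual microparticles"
    return "uniform agglomerates of microparticles"

print(predict_size(3.5, 600), region(3.5, 600))
```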
2

Use of fractional polynomials in mixed models [Uso de polinômios fracionários nos modelos mistos]

Garcia, Edijane Paredes January 2019 (has links)
Advisor: Luzia Aparecida Trinca / Abstract: The class of regression models incorporating Fractional Polynomials (FPs), proposed by Royston & Altman (1994), has been extensively studied. The use of FPs in mixed models is a very attractive alternative for explaining the dependence of within-subject measurements in models where the relationship between the response variable and continuous covariates is nonlinear. This characteristic arises because FPs offer, for the mean response, a variety of nonlinear functional forms for the continuous covariates, among which the family of conventional polynomials and some asymmetric curves with asymptotes stand out. The incorporation of FPs into the structure of mixed models has been investigated by several authors. However, there are no publications on the following issues: the modelling of the fixed and the random parts (mainly in the presence of several continuous and categorical covariates); the study of the influence of FPs on the structure of the random effects; the investigation of an adequate structure for the covariance matrix of the random errors; or, a point of central importance for model selection, the diagnostic analysis of the fitted models. In our view, a contribution of great relevance is the investigation and proposal of strategies for fitting fractional-polynomial models with mixed effects encompassing the points mentioned above, with the goals of filling these gaps and awakening users to the great potential of mixed models, now even mor... (Complete abstract: click electronic access below) / Doctorate
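As a hedged illustration of the Royston & Altman idea referenced above, the sketch below builds first-degree fractional-polynomial (FP1) transforms of a positive covariate over the conventional power set {-2, -1, -0.5, 0, 0.5, 1, 2, 3} (with power 0 taken as log) and selects the power with the best least-squares fit. The data and the plain least-squares selection criterion are illustrative assumptions; a real analysis along the thesis's lines would embed these transforms in a mixed model.

```python
import numpy as np

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # conventional FP power set

def fp_transform(x, p):
    """First-degree fractional-polynomial transform; power 0 means log(x)."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if p == 0 else x ** p

def best_fp1(x, y):
    """Pick the FP1 power minimising the RSS of y ~ a + b * x^(p)."""
    best = None
    for p in POWERS:
        t = fp_transform(x, p)
        X = np.column_stack([np.ones_like(t), t])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(res[0]) if res.size else float(np.sum((y - X @ beta) ** 2))
        if best is None or rss < best[1]:
            best = (p, rss, beta)
    return best

# Illustrative data with a square-root-like trend.
rng = np.random.default_rng(0)
x = np.linspace(0.5, 10, 80)
y = 2 + 3 * np.sqrt(x) + rng.normal(0, 0.3, x.size)
p, rss, beta = best_fp1(x, y)
print(f"selected power: {p}, RSS: {rss:.3f}")
```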
3

Analysis of 3D reconstruction from an HR-CCD/CBERS-2 stereoscopic pair using two mathematical models [Análise da reconstrução 3D a partir de um par estereoscópico HR-CCD/CBERS-2 usando dois modelos matemáticos]

Galindo, José Roberto Fernandes [UNESP] 18 January 2008 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Since the advent of the first Remote Sensing satellites, many studies have been developed with the intention of using the images produced by these sensors for cartographic purposes. Although low- and medium-resolution images do not possess the accuracy required for cartographic applications at large scales, their advantages include being multispectral, offering periodic repetition of acquisition, and costing less than images obtained through traditional aerial photogrammetric surveys. An improvement present in medium- and high-resolution satellites is their off-nadir viewing capability, which allows the formation of stereoscopic pairs and the 3D reconstruction of the imaged scene, the generation of digital elevation models (DEM) and the production of orthorectified images, among other products. With the first stereoscopic pairs acquired by the CBERS-2 (2004) HR-CCD (High Resolution Charge-Coupled Device) sensor, the possibility arose of carrying out studies aimed at generating cartographic products from these stereo pairs. Within this context, this work evaluated the geometric quality of a CBERS-2 HR-CCD stereo pair using the DLT (Direct Linear Transformation) mathematical model and the Polynomial-Based Pushbroom model, both available in the Leica Photogrammetry Suite (LPS) digital photogrammetry system by Leica Geosystems, classifying the results in accordance with the Cartographic Accuracy Standards... (Complete abstract: click electronic access below)
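For context, the DLT model mentioned above relates object-space coordinates (X, Y, Z) to image coordinates (x, y) through 11 parameters, x = (L1·X + L2·Y + L3·Z + L4) / (L9·X + L10·Y + L11·Z + 1), and similarly for y. The sketch below estimates those parameters from ground control points by linear least squares; it is the generic textbook formulation, not the exact LPS implementation, and any control-point values fed to it are placeholders.

```python
import numpy as np

def dlt_fit(obj_pts, img_pts):
    """Estimate the 11 DLT parameters from >= 6 ground control points.

    obj_pts: (N, 3) object-space coordinates (X, Y, Z)
    img_pts: (N, 2) image coordinates (x, y)
    """
    A, b = [], []
    for (X, Y, Z), (x, y) in zip(obj_pts, img_pts):
        # x*(L9 X + L10 Y + L11 Z + 1) = L1 X + L2 Y + L3 Z + L4, linearised:
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        b.append(x)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        b.append(y)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L

def dlt_project(L, X, Y, Z):
    """Project an object-space point into the image with fitted parameters."""
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    x = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    y = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return x, y
```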
4

Analysis of 3D reconstruction from an HR-CCD/CBERS-2 stereoscopic pair using two mathematical models [Análise da reconstrução 3D a partir de um par estereoscópico HR-CCD/CBERS-2 usando dois modelos matemáticos]

Galindo, José Roberto Fernandes. January 2008 (has links)
Abstract as in record 3 above. / Advisor: Júlio Kiyoshi Hasegawa / Co-advisor: Maurício Galo / Committee: João Fernando Custódio da Silva / Committee: Hideo Araki / Master's
5

METHOD TO ESTIMATE THE ELECTRIC LOSSES BASED ON THE LOAD PARAMETER ALLOCATION IN MEDIUM VOLTAGE DISTRIBUTION SYSTEMS

VICTOR DANIEL ARMAULIA SANCHEZ 02 February 2016 (has links)
In electrical distribution systems, one of the greatest challenges for utilities is the estimation of technical losses. According to the literature, energy losses in distribution networks vary between countries from approximately 3 percent to 25 percent of the energy supplied to the network, which can have a large impact on electrical system costs. Specifically in Brazil, the appropriate evaluation of energy losses provides valuable information for the regulator to establish energy distribution tariffs. In the literature there are several methods for estimating technical energy losses, but owing to the difficulty of modelling the system equipment precisely, as well as the lack of information on the energy consumed by each load, the estimates may carry large errors. To deal with this problem, this dissertation proposes a new method, based on a modified polynomial load model, for estimating electric losses, taking into account voltage and power measurements at the substation and, when available, voltage and power measurements at metered loads. The main contribution of the proposed method is the use of network topology information and of the correlation between the power consumed by the loads and the quantities measured at the substation. Three electric systems are used to detail and analyse the performance of the proposed method, and the resulting estimates are compared with those obtained by other reference methods found in the literature and in practical applications.
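As a rough sketch of the kind of polynomial load model involved, the snippet below implements the classical ZIP form P(V) = P0·(a·v² + b·v + c), with v the per-unit voltage, and uses it to estimate the I²R loss on a single feeder section. The dissertation's modified model and its parameter-allocation scheme are more elaborate, and every numeric value here is an illustrative assumption.

```python
import math

def zip_power(p0_kw, v_pu, a, b, c):
    """Polynomial (ZIP) load model: P = P0*(a*v^2 + b*v + c), with a+b+c = 1.

    a, b, c weight the constant-impedance, constant-current and
    constant-power components; v_pu is the voltage in per-unit.
    """
    assert abs(a + b + c - 1.0) < 1e-9
    return p0_kw * (a * v_pu**2 + b * v_pu + c)

def section_loss_kw(p_kw, q_kvar, v_kv, r_ohm):
    """Technical loss on a 3-phase feeder section: 3*I^2*R, S = sqrt(P^2+Q^2)."""
    s_kva = math.hypot(p_kw, q_kvar)
    i_amp = s_kva / (math.sqrt(3) * v_kv)   # line current from kVA and kV
    return 3 * i_amp**2 * r_ohm / 1000.0    # convert W to kW

# Illustrative feeder section: 500 kW nominal load, voltage sag to 0.97 pu.
p = zip_power(500.0, 0.97, a=0.3, b=0.3, c=0.4)
loss = section_loss_kw(p, q_kvar=150.0, v_kv=13.8 * 0.97, r_ohm=0.4)
print(f"load: {p:.1f} kW, section loss: {loss:.2f} kW")
```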
6

Nonstationary Techniques For Signal Enhancement With Applications To Speech, ECG, And Nonuniformly-Sampled Signals

Sreenivasa Murthy, A January 2012 (has links) (PDF)
For time-varying signals such as speech and audio, short-time analysis becomes necessary to compute specific signal attributes and to keep track of their evolution. The standard technique is the short-time Fourier transform (STFT), which decomposes a signal in terms of windowed Fourier bases. An advancement over the STFT is wavelet analysis, in which a function is represented in terms of shifted and dilated versions of a localized function called the wavelet. A specific modeling approach, particularly in the context of speech, is based on short-time linear prediction or short-time Wiener filtering of noisy speech. In most nonstationary signal processing formalisms, the key idea is to analyze the properties of the signal locally, either by first truncating the signal and then performing a basis expansion (as in the STFT), or by choosing compactly-supported basis functions (as with wavelets). We retain the same motivation as these approaches, but use polynomials to model the signal on a short-time basis ("short-time polynomial representation"). To emphasize the local nature of the modeling, we refer to it as "local polynomial modeling (LPM)." We pursue two main threads of research in this thesis: (i) short-time approaches for speech enhancement; and (ii) LPM for enhancing smooth signals, with applications to ECG, noisy nonuniformly-sampled signals, and voiced/unvoiced segmentation in noisy speech.

Improved iterative Wiener filtering for speech enhancement

A constrained iterative Wiener filter solution for speech enhancement was proposed by Hansen and Clements. Sreenivas and Kirnapure improved the performance of the technique by imposing codebook-based constraints in the parameter estimation; the key advantage is that the optimal parameter search space is confined to the codebook. These nonstationary signal enhancement solutions assume stationary noise. In practical applications, however, noise is not stationary and the noise statistics must be updated. We present a new approach to reliable noise estimation based on spectral subtraction: we first estimate the signal spectrum and perform signal subtraction to estimate the noise power spectral density, then smooth the estimated noise spectrum to ensure reliability. The key contributions are: (i) adaptation of the technique to nonstationary noises; (ii) a new initialization procedure for faster convergence and higher accuracy; (iii) experimental determination of the optimal LP-parameter space; and (iv) objective criteria and speech recognition tests for performance comparison.

Optimal local polynomial modeling and applications

We next address the problem of fitting a piecewise-polynomial model to a smooth signal corrupted by additive noise. Since the signal is smooth, it can be represented using low-order polynomial functions provided that they are locally adapted to the signal. We choose the mean-square error as the criterion of optimality. Since the model is local, it preserves the temporal structure of the signal and can also handle nonstationary noise. We show that there is a trade-off between the adaptability of the model to local signal variations and its robustness to noise (the bias-variance trade-off), which we solve using a stochastic optimization technique known as the intersection of confidence intervals (ICI) technique. The key trade-off parameter is the duration of the window over which the optimum LPM is computed.
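A minimal sketch of the spectral-subtraction-style noise update described above might look as follows: estimate the noise power spectral density by subtracting an estimated signal spectrum from the observed one, then recursively smooth the result across frames. The window choice, smoothing constant, and flooring rule are illustrative assumptions, not the thesis's exact settings.

```python
import numpy as np

def update_noise_psd(noisy_frame, signal_psd_est, noise_psd_prev, alpha=0.9):
    """One frame of spectral-subtraction-based noise PSD estimation.

    noisy_frame:    time-domain samples of the current analysis frame
    signal_psd_est: current estimate of the clean-signal PSD (rfft-sized)
    noise_psd_prev: smoothed noise PSD from the previous frame (rfft-sized)
    alpha:          recursive smoothing constant (illustrative value)
    """
    windowed = noisy_frame * np.hanning(len(noisy_frame))
    noisy_psd = np.abs(np.fft.rfft(windowed)) ** 2
    # Subtract the signal estimate; floor at a fraction of the noisy PSD
    # so the noise estimate never goes negative.
    raw_noise = np.maximum(noisy_psd - signal_psd_est, 0.05 * noisy_psd)
    # Recursive smoothing across frames for a reliable estimate.
    return alpha * noise_psd_prev + (1 - alpha) * raw_noise
```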
Within the LPM framework, we address three problems: (i) signal reconstruction from noisy uniform samples; (ii) signal reconstruction from noisy nonuniform samples; and (iii) classification of speech signals into voiced and unvoiced segments. The generic signal model is x(t_n) = s(t_n) + d(t_n), 0 ≤ n ≤ N − 1. In problems (i) and (iii), t_n = nT (uniform sampling); in (ii) the samples are taken at nonuniform instants. The signal s(t) is assumed to be smooth, i.e., it should admit a local polynomial representation. The problem in (i) and (ii) is to estimate s(t) from x(t_n); that is, we are interested in optimal signal reconstruction on a continuous domain starting from uniform or nonuniform samples. We show that, in both cases, the bias grows with the length L of the window over which the polynomial fitting is performed, through a function f of s(t) that typically comprises the higher-order derivatives of s(t), the order itself depending on the order of the polynomial, while the variance decays with L through a function g of the noise variance; the mean square error (MSE) is the sum of the squared bias and the variance. The bias and variance thus have complementary characteristics with respect to L. Directly optimizing the MSE would give a value of L that involves the functions f and g. The function g may be estimated, but f is not known since s(t) is unknown, so it is not practical to compute the minimum-MSE (MMSE) solution. We therefore obtain an approximate result by solving the bias-variance trade-off in a probabilistic sense using the ICI technique. We also propose a new approach to optimally selecting the ICI technique parameters, based on a new cost function that is the sum of the probability of false alarm and the area covered by the confidence interval. In addition, we address issues related to optimal model-order selection, the search space for window lengths, the accuracy of noise estimation, etc.

The next issue addressed is voiced/unvoiced segmentation of the speech signal. Speech segments show different spectral and temporal characteristics depending on whether the segment is voiced or unvoiced, and most speech processing techniques process the two segments differently. The challenge lies in making detection techniques robust in the presence of noise. We propose a new technique for voiced/unvoiced classification that exploits the fact that voiced segments have a certain degree of regularity whereas unvoiced segments do not possess any smoothness. To capture the regularity in voiced regions, we employ the LPM: the key idea is that regions where the LPM is inaccurate are more likely to be unvoiced than voiced. Within this framework, we formulate a hypothesis testing problem based on the accuracy of the LPM fit and devise a test statistic for performing V/UV classification. Since the technique is based on LPM, it is capable of adapting to nonstationary noises. We present Monte Carlo results to demonstrate the accuracy of the proposed technique.
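As a hedged sketch of local polynomial smoothing with a bias-variance-driven window choice, the snippet below fits a low-order polynomial around each sample for a range of window lengths and keeps, per sample, the largest window whose confidence intervals still intersect — the ICI rule in its simplest form. The polynomial order, window grid, confidence constant, and variance estimate are illustrative assumptions rather than the thesis's tuned settings.

```python
import numpy as np

def lpm_estimate(x, n, L, order=2):
    """Fit a degree-`order` polynomial to x over a window of half-width L
    centred at index n; return (estimate at n, crude std dev of estimate)."""
    lo, hi = max(0, n - L), min(len(x), n + L + 1)
    t = np.arange(lo, hi) - n                      # centre the time axis at n
    coeffs = np.polyfit(t, x[lo:hi], order)
    resid = x[lo:hi] - np.polyval(coeffs, t)
    sigma = np.std(resid) / np.sqrt(len(t))        # rough estimator std dev
    return np.polyval(coeffs, 0.0), sigma

def ici_smooth(x, windows=(4, 8, 16, 32), gamma=2.0):
    """Per-sample ICI rule: grow the window while confidence intervals overlap."""
    y = np.empty_like(x, dtype=float)
    for n in range(len(x)):
        lo_bound, hi_bound = -np.inf, np.inf
        est = x[n]
        for L in windows:
            e, s = lpm_estimate(x, n, L)
            lo_bound = max(lo_bound, e - gamma * s)
            hi_bound = min(hi_bound, e + gamma * s)
            if lo_bound > hi_bound:                # intervals stopped intersecting
                break
            est = e                                # accept this window's estimate
        y[n] = est
    return y

# Illustrative use: smooth a noisy ECG-like bump.
t = np.linspace(0, 1, 400)
clean = np.exp(-((t - 0.5) ** 2) / 0.002)
noisy = clean + np.random.default_rng(1).normal(0, 0.1, t.size)
smoothed = ici_smooth(noisy)
```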
