21

Synthesis of Catalytic Membrane Surface Composites for Remediating Azo Dyes in Solution

Sutherland, Alexander January 2019 (has links)
In the past 30 years, zero-valent iron (ZVI) has become an increasingly popular reducing-agent technology for remediating environmental contaminants prone to chemical degradation. Azo dyes and chlorinated organic compounds (COCs) are two classes of such contaminants, both of which include toxic compounds with known carcinogenic potential. ZVI has been successfully applied to the surfaces of permeable reactive barriers, as well as grown into nanoscale particles (nZVI) and applied in situ, to chemically reduce these contaminants into more environmentally benign compounds. However, the reactivity of ZVI and nZVI in these technologies is limited by their finite supply of electrons for facilitating chemical reduction, and by the tendency of nZVI particles to homo-aggregate in solution and form colloids with a reduced surface-area-to-volume ratio, and thus reduced reactivity. The goal of this project was to combine reactive nanoparticle and membrane technologies to create an electro-catalytic permeable reactive barrier that overcomes the weaknesses of nZVI for the enhanced electrochemical filtration of azo dyes in solution. Specifically, nZVI was successfully grown and stabilized in a network of functionalized carbon nanotubes (CNTs) and deposited as an electrically conductive thin film on the surface of a polymeric microfiltration support membrane. Under a cathodic applied voltage, this thin film facilitated the direct reduction of the azo dye methyl orange (MO) in solution and regenerated nZVI reactivity for enhanced electro-catalytic operation. The electro-catalytic performance of these nZVI-CNT membrane surface composites in removing MO was validated, modelled, and optimized in a batch system, and also tested in a dead-end continuous-flow cell system. In the batch experiments, systems with nZVI and a -2 V applied potential demonstrated synergistic enhancement of MO removal, which indicated the regeneration of nZVI reactivity and allowed for the complete removal of 0.25 mM MO batches within 2-3 hours. Partial least squares regression (PLSR) modelling was used to determine the impact of each experimental parameter in the batch system and provided the means for an optimization leading to maximized MO removal. Finally, tests in a continuous system yielded rates of MO removal 1.6 times greater than those of the batch system in a single pass, and demonstrated ~87% molar removal of MO at fluxes of approximately 422 LMH (L m⁻² h⁻¹). The work herein lays the foundation for a promising technology that, if further developed, could be applied to remediate azo dyes and COCs in textile-industry effluents and groundwater sites, respectively. / Thesis / Master of Applied Science (MASc)
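The PLSR step described in this abstract is easy to illustrate. The sketch below is hypothetical (the parameter names, value ranges and synthetic response are not the thesis data); it only shows how batch parameters such as applied potential, nZVI loading and initial MO concentration can be regressed on MO removal with scikit-learn, with coefficients on standardized variables ranking parameter impact:

```python
# Hypothetical sketch: ranking batch-parameter impact on methyl orange (MO)
# removal with PLS regression, in the spirit of the PLSR step described above.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 40
# Synthetic batch parameters (NOT the thesis data): applied potential (V),
# nZVI loading (mg), initial MO concentration (mM).
X = np.column_stack([
    rng.uniform(-2.0, 0.0, n),
    rng.uniform(0.0, 50.0, n),
    rng.uniform(0.05, 0.25, n),
])
# Synthetic response: fractional MO removal with a synergistic potential*nZVI term.
y = 0.4 - 0.2 * X[:, 0] + 0.01 * X[:, 1] - 0.004 * X[:, 0] * X[:, 1] - 1.0 * X[:, 2]
y = y + rng.normal(0, 0.02, n)

# Standardize X so the PLS coefficients are directly comparable across parameters.
Xs = StandardScaler().fit_transform(X)
pls = PLSRegression(n_components=2).fit(Xs, y)
print("R^2:", round(pls.score(Xs, y), 3))
for name, c in zip(["potential_V", "nZVI_mg", "MO0_mM"], pls.coef_.ravel()):
    print(f"{name}: {c:+.3f}")  # magnitude ~ impact per standard deviation
```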
22

Project and development of a flow-injection system for simultaneous spectrophotometric determination of copper and nickel exploiting differential kinetics and multivariate calibration

Sasaki, Milton Katsumi 09 June 2011 (has links)
Differential kinetic analysis exploits differences in reaction rates between the analytes and a common reactant system; prior analyte-separation steps can thus be dispensed with. Flow-injection analysis (FIA) systems are an important tool for methods involving this strategy because they allow precise control of sample/reagent dispersion and timing. The aim of this work was to exploit these two favorable aspects for the simultaneous determination of copper and nickel through their reactions with the chromogenic reagent 5-Br-PADAP. Three sample aliquots were simultaneously inserted, by means of a proportional injector, into the reagent carrier stream (75 mg L⁻¹ 5-Br-PADAP + 0.5 mol L⁻¹ acetic acid/acetate buffer, pH 4.7) of a single-line FIA system. During transport towards the detector, the established zones coalesced, resulting in a complex zone that was monitored at 562 nm. The local maximum and minimum values of the resulting concentration/time function were used for multivariate calibration with the PLS-2 (partial least squares 2) chemometric tool. The reagent concentration, buffering capacity, temperature, flow rate, the lengths of the analytical path and sampling loops, and the initial distance between the established sample zones were evaluated for the construction of the mathematical models. To this end, 24 mixed standard solutions of Cu²⁺ and Ni²⁺ (0.00-1.60 mg L⁻¹, in 0.1% v/v HNO₃) were used. Two latent variables were enough to capture > 98% of the variance inherent in the data set, and average prediction errors (RMSEP) were estimated as 0.025 and 0.071 mg L⁻¹ for Cu and Ni, highlighting the good precision of the calibration model. The proposed system presents good figures of merit: physical stability when kept in operation for four uninterrupted hours, consumption of 314 µg of 5-Br-PADAP per sample, a sample throughput of 33 h⁻¹ (165 data points, 66 determinations) and errors in absorbance readings typically < 5%. However, the predictions made by the proposed model were inaccurate when compared to results obtained by ICP OES. Further studies involving this type of matrix, as well as masking techniques for the potential interferents present, are therefore recommended.
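PLS-2 is ordinary PLS fitted with a multi-column response, which is what scikit-learn's PLSRegression does when Y has two columns. A minimal sketch on synthetic mixture signals (not the thesis measurements; the feature construction is an assumption) showing a two-latent-variable Cu/Ni calibration and an RMSEP estimate:

```python
# Hypothetical sketch of PLS-2 calibration: one model predicting Cu and Ni
# simultaneously from multi-point FIA peak features (synthetic data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_features = 24, 10  # e.g. local max/min absorbances along the FIA profile
C = rng.uniform(0.0, 1.6, (n_samples, 2))                 # [Cu, Ni] in mg L-1
S = rng.normal(0, 1, (2, n_features))                     # synthetic pure-component responses
X = C @ S + rng.normal(0, 0.02, (n_samples, n_features))  # mixture signals + noise

X_cal, X_val, C_cal, C_val = train_test_split(X, C, test_size=6, random_state=0)
pls2 = PLSRegression(n_components=2)  # two latent variables, as in the abstract
pls2.fit(X_cal, C_cal)
pred = pls2.predict(X_val)
rmsep = np.sqrt(((pred - C_val) ** 2).mean(axis=0))       # per-analyte RMSEP
print(f"RMSEP Cu: {rmsep[0]:.3f} mg L-1, Ni: {rmsep[1]:.3f} mg L-1")
```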
24

Multivariate data analysis using spectroscopic data of fluorocarbon alcohol mixtures / Nothnagel, C.

Nothnagel, Carien January 2012 (has links)
Pelchem, a commercial subsidiary of Necsa (South African Nuclear Energy Corporation), produces a range of commercial fluorocarbon products while driving research and development initiatives to support the fluorine product portfolio. One such initiative is to develop improved analytical techniques to analyse product composition during development and for the quality assurance of products. Generally, the C–F-type products produced by Necsa are in a solution of anhydrous HF and cannot be directly analysed with traditional techniques without derivatisation. A technique such as vibrational spectroscopy, which can analyse these products directly without further preparation, would have a distinct advantage. However, spectra of mixtures of similar compounds are complex and not suitable for traditional quantitative regression analysis. Multivariate data analysis (MVA) can be used in such instances to exploit the complex nature of the spectra and extract quantitative information on the composition of mixtures. A selection of fluorocarbon alcohols was made to act as representatives of fluorocarbon compounds. Experimental design theory was used to create a calibration range of mixtures of these compounds. Raman and infrared (NIR and ATR–IR) spectroscopy were used to generate spectral data of the mixtures, and these data were analysed with MVA techniques through the construction of regression and prediction models. Selected samples from the mixture range were chosen to test the predictive ability of the models. The regression models (PCR, PLS2 and PLS1) gave good fits (R² values larger than 0.9). Raman spectroscopy was the most efficient technique and gave high prediction accuracy (at 10% accepted standard deviation), provided the minimum mass of a component exceeded 16% of the total sample. The infrared techniques also performed well in terms of fit and prediction. The NIR spectra suffered from signal saturation as a result of the long-path-length sample cells used, which was shown to be the main reason for the loss in efficiency of this technique compared to Raman and ATR–IR spectroscopy. It was shown that multivariate data analysis of spectroscopic data of the selected fluorocarbon compounds can be used to quantitatively analyse mixtures, with the possibility of further optimization of the method. The study is representative, indicating that the combination of MVA and spectroscopy can be used successfully in the quantitative analysis of other fluorocarbon compound mixtures. / Thesis (M.Sc. (Chemistry))--North-West University, Potchefstroom Campus, 2012.
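The PCR/PLS comparison above can be sketched compactly. The example below uses synthetic Gaussian "bands" rather than real Raman or IR spectra; PCR is implemented, as usual, as PCA followed by linear regression:

```python
# Hypothetical sketch contrasting PCR and PLS1 on synthetic "spectra"
# (Gaussian bands), echoing the model comparison described above.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
axis = np.linspace(0, 1, 200)  # normalized wavenumber axis

def band(center, width):
    # One synthetic vibrational band.
    return np.exp(-0.5 * ((axis - center) / width) ** 2)

n = 30
conc = rng.uniform(0.1, 1.0, n)              # mass fraction of one alcohol
X = (np.outer(conc, band(0.3, 0.05))         # analyte band
     + np.outer(1 - conc, band(0.7, 0.08))   # second-component band
     + rng.normal(0, 0.01, (n, 200)))        # noise

pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X, conc)
pls1 = PLSRegression(n_components=3).fit(X, conc)
print("PCR  R^2:", round(pcr.score(X, conc), 4))
print("PLS1 R^2:", round(float(pls1.score(X, conc)), 4))
```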
26

Drivers of liking for grape nectars in the traditional commercial and light versions using partial least squares regression (PLSR)

Alves, Leonardo Rangel 07 October 2008 (has links)
Advisor: Helena Maria Andre Bolini / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia de Alimentos / Abstract: This study aimed to identify drivers of liking for eight commercial grape nectar samples (traditional and light) by using advanced statistical methodologies to relate sensory profile, physico-chemical and acceptability data. Eight commercial brands (four traditional and their respective light versions) were analyzed. An acceptance test using the hybrid hedonic scale was performed with 114 consumers. Fourteen descriptive terms were evaluated by a sensory panel of fourteen members, and six physico-chemical attributes were measured. The most accepted samples were A and C, and the most rejected were CL and DL (light). An internal preference map was constructed, followed by a cluster analysis on the consumers' overall impression scores. Two clusters of consumers were found; the main difference between them was the portion of the scale each group used. PLSR was used to relate consumer acceptance to the descriptive terms and physico-chemical attributes, providing correlations between them and showing the importance of each sensory or physico-chemical attribute for the model projection. The results showed that grape flavor, residual grape flavor, total titratable acidity, grape aroma, wine color, °Brix, viscosity, sourness, turbidity, astringency, total phenols and consistency, in this order of importance, were strongly and positively correlated with the consumers' overall impression and are therefore the drivers of liking found. / Master's degree in Food and Nutrition (Food Consumption and Quality)
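As a hedged illustration of the drivers-of-liking idea: regress consumer liking on standardized descriptor intensities with PLSR and read candidate drivers off the signs and sizes of the coefficients. The descriptor names and data below are invented for the example:

```python
# Hypothetical sketch: finding "drivers of liking" by regressing consumer
# liking on sensory descriptors with PLSR (synthetic data, illustrative names).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
descriptors = ["grape_flavor", "sourness", "astringency", "consistency"]
n_products = 8
X = rng.uniform(0, 9, (n_products, len(descriptors)))  # mean panel intensities
# Synthetic liking: driven positively by grape flavor, negatively by astringency.
liking = 2.0 + 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.2, n_products)

Xs = StandardScaler().fit_transform(X)
pls = PLSRegression(n_components=2).fit(Xs, liking)
# Sort descriptors by coefficient: large positive values are candidate drivers.
for name, c in sorted(zip(descriptors, pls.coef_.ravel()), key=lambda t: -t[1]):
    print(f"{name}: {c:+.2f}")
```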
27

On the effective deployment of current machine translation technology

González Rubio, Jesús 03 June 2014 (has links)
Machine translation is a fundamental technology that is gaining more importance each day in our multilingual society. Companies and individuals are turning their attention to machine translation since it dramatically cuts down their expenses on translation and interpreting. However, the output of current machine translation systems is still far from the quality of translations generated by human experts. The overall goal of this thesis is to narrow this quality gap by developing new methodologies and tools that improve the broader and more efficient deployment of machine translation technology. We start by proposing a new technique to improve the quality of the translations generated by fully-automatic machine translation systems. The key insight of our approach is that different translation systems, implementing different approaches and technologies, can exhibit different strengths and limitations. Therefore, a proper combination of the outputs of such different systems has the potential to produce translations of improved quality. We present minimum Bayes' risk system combination, an automatic approach that detects the best parts of the candidate translations and combines them to generate a consensus translation that is optimal with respect to a particular performance metric. We thoroughly describe the formalization of our approach as a weighted ensemble of probability distributions and provide efficient algorithms to obtain the optimal consensus translation according to the widespread BLEU score. Empirical results show that the proposed approach is indeed able to generate statistically better translations than the provided candidates. Compared to other state-of-the-art system combination methods, our approach reports similar performance while requiring no data beyond the candidate translations. Then, we focus our attention on how to improve the utility of automatic translations for the end-user of the system. Since automatic translations are not perfect, a desirable feature of machine translation systems is the ability to predict at run-time the quality of the generated translations. Quality estimation is usually addressed as a regression problem where a quality score is predicted from a set of features that represents the translation. However, although the concept of translation quality is intuitively clear, there is no consensus on which features actually account for it. As a consequence, quality estimation systems for machine translation have to utilize a large number of weak features to predict translation quality. This involves several learning problems related to feature collinearity and ambiguity, and to the 'curse' of dimensionality. We address these challenges by adopting a two-step training methodology. First, a dimensionality reduction method computes, from the original features, the reduced set of features that better explains translation quality. Then, a prediction model is built from this reduced set to finally predict the quality score. We study various reduction methods previously used in the literature and propose two new ones based on statistical multivariate analysis techniques. More specifically, the proposed dimensionality reduction methods are based on partial least squares regression. The results of a thorough experimentation show that the quality estimation systems trained following the proposed two-step methodology obtain better prediction accuracy than systems trained using all the original features.
Moreover, one of the proposed dimensionality reduction methods obtained the best prediction accuracy with only a fraction of the original features. This feature reduction ratio is important because it implies a dramatic reduction of the operating times of the quality estimation system. An alternative use of current machine translation systems is to embed them within an interactive editing environment where the system and a human expert collaborate to generate error-free translations. This interactive machine translation approach has been shown to reduce the supervision effort of the user in comparison to the conventional decoupled post-editing approach. However, interactive machine translation considers the translation system as a passive agent in the interaction process. In other words, the system only suggests translations to the user, who then makes the necessary supervision decisions. As a result, the user is bound to exhaustively supervise every suggested translation. This passive approach ensures error-free translations but it also demands a large amount of supervision effort from the user. Finally, we study different techniques to improve the productivity of current interactive machine translation systems. Specifically, we focus on the development of alternative approaches where the system becomes an active agent in the interaction process. We propose two different active approaches. On the one hand, we describe an active interaction approach where the system informs the user about the reliability of the suggested translations. The hope is that this information may help the user to locate translation errors, thus improving the overall translation productivity. We propose different scores to measure translation reliability at the word and sentence levels and study the influence of such information on the productivity of an interactive machine translation system. Empirical results show that the proposed active interaction protocol is able to achieve a large reduction in supervision effort while still generating translations of very high quality. On the other hand, we study an active learning framework for interactive machine translation. In this case, the system is not only able to inform the user of which suggested translations should be supervised, but it is also able to learn from the user-supervised translations to improve its future suggestions. We develop a value-of-information criterion to select which automatic translations undergo user supervision. However, given its high computational complexity, in practice we study different selection strategies that approximate this optimal criterion. Results of large-scale experiments show that the proposed active learning framework is able to obtain better trade-offs between the quality of the generated translations and the human effort required to obtain them. Moreover, in comparison to a conventional interactive machine translation system, our proposal obtained translations of twice the quality with the same supervision effort. / González Rubio, J. (2014). On the effective deployment of current machine translation technology [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37888 / TESIS
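The minimum Bayes' risk idea can be shown in miniature. The thesis builds a consensus translation that is optimal under BLEU; the sketch below implements only the simpler MBR *selection* variant, scoring each candidate against the others as pseudo-references with a unigram-F1 utility (a self-contained stand-in for sentence-level BLEU):

```python
# Hypothetical sketch of minimum Bayes' risk (MBR) selection among candidate
# translations: pick the candidate with the highest expected utility when the
# other candidates are treated as pseudo-references.
from collections import Counter

def unigram_f1(hyp: str, ref: str) -> float:
    """Unigram F1 overlap, a stand-in for sentence-level BLEU."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def mbr_select(candidates: list[str]) -> str:
    """Return the candidate maximizing average utility vs. all the others."""
    def expected_utility(c: str) -> float:
        others = [o for o in candidates if o is not c]
        return sum(unigram_f1(c, o) for o in others) / len(others)
    return max(candidates, key=expected_utility)

candidates = [
    "the cat sat on the mat",
    "the cat is sitting on the mat",
    "a cat sat on a mat",
    "the dog ran in the park",   # the outlier scores lowest against the rest
]
print(mbr_select(candidates))
```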
28

Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation

Vitale, Raffaele 03 November 2017 (has links)
The present Ph.D. thesis, primarily conceived to support and reinforce the relation between the academic and industrial worlds, was developed in collaboration with Shell Global Solutions (Amsterdam, The Netherlands) in the endeavour of applying and possibly extending well-established latent variable-based approaches (i.e. Principal Component Analysis (PCA), Partial Least Squares regression (PLS) and Partial Least Squares Discriminant Analysis (PLSDA)) for complex problem solving, not only in the fields of manufacturing troubleshooting and optimisation but also in the wider environment of multivariate data analysis. To this end, novel efficient algorithmic solutions are proposed throughout all chapters to address very disparate tasks, from calibration transfer in spectroscopy to real-time modelling of streaming flows of data. The manuscript is divided into the following six parts, focused on various topics of interest: Part I - Preface, where an overview of this research work, its main aims and their justification are given, together with a brief introduction to PCA, PLS and PLSDA; Part II - On kernel-based extensions of PCA, PLS and PLSDA, where the potential of kernel techniques, possibly coupled to specific variants of the recently rediscovered pseudo-sample projection formulated by the English statistician John C. Gower, is explored and their performance compared to that of more classical methodologies in four different application scenarios: segmentation of Red-Green-Blue (RGB) images, discrimination of on-/off-specification batch runs, monitoring of batch processes and analysis of mixture designs of experiments; Part III - On the selection of the number of factors in PCA by permutation testing, where an extensive guideline on how to accomplish the selection of PCA components by permutation testing is provided through the comprehensive illustration of an original algorithmic procedure implemented for such a purpose; Part IV - On modelling common and distinctive sources of variability in multi-set data analysis, where several practical aspects of two-block common and distinctive component analysis (carried out by methods like Simultaneous Component Analysis (SCA), DIStinctive and COmmon Simultaneous Component Analysis (DISCO-SCA), Adapted Generalised Singular Value Decomposition (Adapted GSVD), ECO-POWER, Canonical Correlation Analysis (CCA) and 2-block Orthogonal Projections to Latent Structures (O2PLS)) are discussed, a new computational strategy for determining the number of common factors underlying two data matrices sharing the same row- or column-dimension is described, and two innovative approaches for calibration transfer between near-infrared spectrometers are presented; Part V - On the on-the-fly processing and modelling of continuous high-dimensional data streams, where a novel software system for the rational handling of multi-channel measurements recorded in real time, the On-The-Fly Processing (OTFP) tool, is designed; Part VI - Epilogue, where final conclusions are drawn, future perspectives are delineated, and annexes are included.
/ Vitale, R. (2017). Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90442 / TESIS
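One way to read the permutation-testing procedure of Part III (this sketch is an assumption, not the thesis' exact algorithm): permute each column of X independently to destroy between-variable correlation, and retain leading components only while their explained variance beats the permutation null:

```python
# Hypothetical sketch of selecting the number of PCA components by permutation
# testing: a component is kept while its explained variance exceeds the 95th
# percentile of the corresponding null distribution from column-permuted data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Synthetic data with 2 real latent factors plus noise.
T = rng.normal(size=(100, 2))
P = rng.normal(size=(2, 12))
X = T @ P + 0.3 * rng.normal(size=(100, 12))

real_ev = PCA().fit(X).explained_variance_
null_ev = np.array([
    PCA().fit(np.column_stack([rng.permutation(col) for col in X.T])).explained_variance_
    for _ in range(200)  # 200 permutations build the null distribution
])
threshold = np.percentile(null_ev, 95, axis=0)

k = 0
while k < len(real_ev) and real_ev[k] > threshold[k]:
    k += 1
print("components retained:", k)  # expected: 2 for this synthetic example
```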
29

Quality by Design through multivariate latent structures

Palací López, Daniel Gonzalo 14 January 2019 (has links)
The present Ph.D. thesis is motivated by the growing need in most companies, especially (but not solely) those in the pharmaceutical, chemical, food and bioprocess fields, to increase the flexibility of their operating conditions in order to reduce production costs while maintaining or even improving the quality of their products. To this end, this thesis focuses on the application of the concepts of Quality by Design for the exploitation and extension of existing methodologies, and on the development of new algorithms aimed at the proper implementation of tools for the design of experiments, multivariate data analysis and process optimization, especially (but not only) in the context of mixture design. Part I - Preface, where a summary of the research work, its main goals and their justification are presented, together with an introduction to the most relevant concepts used in subsequent chapters, such as design of experiments and latent variable-based multivariate data analysis techniques. Part II - Mixture design optimization, in which a review of existing tools for the design of experiments and data analysis of mixture designs via traditional approaches, as well as some latent variable-based techniques such as Partial Least Squares (PLS) regression, is provided. A kernel-based extension of PLS for mixture-design data analysis is also proposed, and the different methods presented are compared. Finally, the software MiDAs is briefly presented; it was developed to offer users a simple way to compare different methodologies for the design of experiments and data analysis in mixture problems. Part III - Design space and optimization through the latent space, which addresses a fundamental issue within the Quality by Design philosophy: the definition of the so-called 'design space', i.e. the subspace comprising all combinations of process operating conditions, raw materials, etc. that guarantee a product meeting the required quality standard. The proper formulation of the optimization problem is also tackled, not only as a tool for quality improvement but also for the exploration and flexibilisation of production processes, with the aim of establishing an efficient and robust optimization procedure suited to the diverse problems that call for such optimization. Part IV - Epilogue, where final conclusions are drawn, future research lines are suggested, and annexes are included. / Palací López, DG. (2018). Quality by Design through multivariate latent structures [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/115489 / TESIS
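The "optimization through the latent space" idea of Part III can be caricatured in a few lines: fit a PLS model from process conditions to a quality attribute, search over the latent scores rather than the raw variables, and map candidate scores back to operating conditions through the X-loadings. Everything below (data, target, grid search) is an illustrative assumption, not the thesis' procedure:

```python
# Hypothetical sketch of latent-space optimization with a fitted PLS model:
# candidate operating conditions are generated from latent scores, which keeps
# the search inside the correlation structure seen in the training data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n, p = 60, 5
X = rng.normal(size=(n, p))                      # historical process conditions
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.0]) + 0.1 * rng.normal(size=n)

x_mean, x_std = X.mean(axis=0), X.std(axis=0)
pls = PLSRegression(n_components=2).fit(X, y)    # scale=True by default
P = pls.x_loadings_                              # (p, 2): scores -> scaled X

target = 2.0                                     # desired quality value
lim = np.abs(pls.x_scores_).max(axis=0)          # stay inside the training score range
T_grid = np.array([(a, b)
                   for a in np.linspace(-lim[0], lim[0], 41)
                   for b in np.linspace(-lim[1], lim[1], 41)])
X_cand = x_mean + x_std * (T_grid @ P.T)         # map scores back to raw units
pred = pls.predict(X_cand).ravel()
best = np.argmin(np.abs(pred - target))
print("candidate operating conditions:", np.round(X_cand[best], 2))
print("predicted quality:", round(float(pred[best]), 3))
```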
30

Internal combustion engine durability monitor: Identifying and analysing engine parameters affecting knock and lambda

Jääskö, Pontus, Morén, Petter January 2021 (has links)
This study has been performed at Powertrain Engineering Sweden AB (PES), a fully owned subsidiary of Volvo Cars Group, which is constantly working to develop and improve internal combustion engines. As part of this work, durability tests are performed to analyse the impact of wear on the engines. At present, there is a strong focus on visual inspections after the engines have undergone durability tests. PES wants to develop a method where collected data from these tests can be used to explain how the phenomenon of knocking and the control of lambda change over time. The study analyses one specific durability test and investigates a methodology for data analysis using the open-source software platform Sympathy for Data, with an add-on developed by Volvo Cars Group, for data management, visualisation and analysis. To execute the analysis, engine parameters that affect these systems, as well as parameters suitable as response variables, were identified through literature studies of internal combustion engine fundamentals, internal material, and knowledge acquired at the company. The result is presented in the form of an analysis generated by the node for partial least squares regression (PLSR) pre-programmed in Sympathy for Data, together with the images and graphs obtained as output. For knock, the signal for the final ignition angle was found to be suitable as the response variable in the PLSR. A suitable response variable for lambda was more difficult to identify, which is why the signals for both the measured lambda and the lambda adaptation are analysed. Studies of the internal material highlighted the fact that several engine subsystems are highly dependent on each other and that even deeper research would be necessary to fully understand the process and identify the primary causes of the variations observed in the generated models. Nevertheless, partial least squares regression was performed using parameters derived from the literature reviews as input (predictors) in order to produce regression models explaining the variance in the sought responses. Well-fitting models could be created, with the number of latent variables needed varying between responses. The output obtained from the PLSR enables further studies of the specific cases as well as of the methodology itself, and can thereby increase the use of data analysis with the software used at the durability-testing department at PES.
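The abstract notes that different responses needed different numbers of latent variables. A common way to choose that number, sketched here on synthetic stand-in data (not PES engine logs), is cross-validated R² over a range of component counts:

```python
# Hypothetical sketch: choosing the number of PLSR latent variables by
# cross-validation, as one would when modelling a response such as the final
# ignition angle from logged engine parameters (synthetic stand-in data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n, p = 200, 15                        # logged samples x engine parameters
X = rng.normal(size=(n, p))
# Synthetic response depending on three underlying directions plus noise.
w = np.zeros(p)
w[[0, 4, 9]] = [1.0, -0.7, 0.4]
y = X @ w + 0.3 * rng.normal(size=n)

scores = []
for k in range(1, 9):
    r2 = cross_val_score(PLSRegression(n_components=k), X, y,
                         cv=5, scoring="r2").mean()
    scores.append((k, r2))
    print(f"{k} latent variables: CV R^2 = {r2:.3f}")
print("selected:", max(scores, key=lambda t: t[1])[0])
```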
