1271 |
Variable Ratio Matrix Transformer based LLC Converter for Two-Stage Low-Voltage DC-DC Converter Efficiency Improvement. Hou, Zhengming, 12 December 2022.
The low-voltage dc-dc converter (LDC) in electric vehicles (EVs) converts the high dc voltage (270 V~430 V) of the traction battery to a low dc voltage (12.5 V~15.5 V) for the vehicle auxiliary systems. Galvanic isolation is required in the LDC for safety reasons. Three challenges exist in LDC design: (1) a wide regulation range; (2) high output current; (3) thermal management. Single-stage solutions, such as the phase-shift full-bridge converter and the LLC resonant converter, have been widely studied in the past. A matrix transformer is widely adopted in single-stage LDC designs to handle the large output current. In addition, its low-profile design allows a large footprint area, which benefits power density and eases the cooling design.
However, a trade-off between wide regulation range and efficiency exists in single-stage LDC design. Recently, a two-stage solution has been proposed to achieve high efficiency and a wide regulation range at the same time. A fixed-turns-ratio LLC stage serves as a dc transformer (DCX) to meet the galvanic isolation requirement, and a PWM dc-dc stage regulates the output voltage.
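As a rough sketch of the two-stage relationship (the symbols below are assumptions for illustration, not taken from the thesis), the DCX stage provides a fixed voltage division set by its turns ratio n, and a buck-type PWM stage with duty cycle D would then regulate the output:

```latex
V_{bus} \approx \frac{V_{batt}}{n}, \qquad V_{out} \approx D\,V_{bus}
```

Making n selectable, as the proposed variable ratio matrix transformer does, lets the intermediate bus voltage track the battery voltage so the PWM stage can operate closer to its most efficient duty cycle.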
In this thesis, a variable ratio matrix transformer-based LLC converter is proposed for two-stage LDC efficiency improvement. The transformer secondary copper losses are reduced by taking advantage of an adaptive number of element transformers. In addition, the PWM dc-dc stage achieves better efficiency with a variable intermediate bus voltage. The operating principle and design considerations are studied in this thesis. The proposed 1600 W two-stage LDC prototype achieves 96.82% full-load efficiency under the 400 V input condition, a 1.2% efficiency improvement over the fixed-ratio LLC-based two-stage design. Last but not least, the prototype shows efficiency comparable to the fixed-ratio LLC-based two-stage design even under the low input voltage (270 V) condition. / M.S. / The electric vehicle market has grown rapidly in recent years. However, the driving range is one of the bottlenecks that could limit market growth in the future. Thus, efficient power modules in electric vehicles are desired to extend the driving range. The low-voltage dc-dc converter is one such power module; it is rated at several kilowatts and converts the traction battery voltage for the vehicle auxiliary systems, such as the air conditioner, headlights, and power steering. In this thesis, a variable ratio matrix transformer-based LLC converter is proposed to improve the efficiency of the two-stage low-voltage dc-dc converter. Consequently, the driving range of electric vehicles is further extended.
|
1272 |
Load-Independent Class-E Power Conversion. Zhang, Lujie, 13 April 2020.
The Class-E topology was presented as a single-switch power amplifier with high efficiency at the optimum condition, where the switch enjoys zero-voltage switching (ZVS) and zero-voltage-derivative switching (ZDS). It is also used in MHz dc-dc converters and in inverters for wireless power transfer, induction heating, and plasma pulsing. The load current in these applications usually varies over a range. The efficiency of a conventional Class-E design degrades dramatically due to hard switching beyond the optimum condition. Keeping ZVS across the load range in a Class-E topology is therefore preferred.
Soft switching under load variation has been realized by duty cycle modulation with an additional transformer, matching network, or resistance compression network. Since two ZVS requirements need to be satisfied in a conventional Class-E design, at least two parameters are tuned under load variation. Thus, changing the switching frequency, duty cycle, and component values has been used. Impressively, a load-independent Class-E inverter design was presented in 1990 that maintains ZVS and output voltage over a given load change without tuning any parameters, and it was validated with experimental results recently. The operating principle of this special design (inconsistent with the conventional design) has not been elucidated in the published literature.
Load-independency elucidated by a Thevenin model – A Thevenin model is established (although the Class-E circuit is nonlinear) to explain the load-independency with fixed switching frequency and duty cycle. The input block of a Class-E inverter (Vin, Lin, Cin, and S) behaves as a fixed voltage source vth1 in series with a fixed capacitive impedance Xth1 at the switching frequency. When the output block (Lo and Co) is designed to compensate Xth1, the output current phase is always equal to the phase of vth1 with a resistive load (satisfying the ZVS requirement of a load-independent design). Thus, soft switching is maintained under load variation. The output voltage is equal to vth1 since Xth1 is canceled, so the output voltage is constant regardless of the output resistance. Load-independency is achieved without adding any components or tuning any parameters.
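A minimal sketch of this argument in the notation above (the load resistance R_L is an assumed symbol used only for illustration): at the switching frequency the output loop reduces to a voltage divider, so

```latex
v_o = v_{th1}\,\frac{R_L}{R_L + j\,(X_{th1} + X_{L_o} + X_{C_o})}
\quad\Longrightarrow\quad
v_o = v_{th1} \ \text{ and } \ \angle i_o = \angle v_{th1}
\quad\text{when } X_{L_o} + X_{C_o} = -X_{th1},
```

so both the constant-output condition and the ZVS phase condition follow from the single compensation choice.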
Sequential design and tuning of a load-independent ZVS Class-E inverter with constant voltage based on the Thevenin model – Based on the model, it is found that each circuit parameter is linked to only one of the targeted performances (ZVS, fixed voltage gain, and load range). Thus, sequential design equations and steps are derived and presented. In each step, the desired performance (e.g., ZVS) can be used to check and tune component values, so that ZVS and a fixed voltage gain over the desired load range are guaranteed in the final Class-E inverter, even when component values deviate from expectations. The Thevenin model and the load-independent design are then extended to arbitrary duty cycles. A prototype switched at 6.78 MHz with 10-V input, 11.3-V output, and 22.5-W maximum output power was fabricated and tested to validate the theory. Soft switching is maintained with 3% output voltage variation while the output power is reduced tenfold.
A load-independent ZVS Class-E inverter with constant current by combining the constant-voltage design and a trans-susceptance network – A load-independent ZVS Class-E inverter with constant current under load variation is then presented, by combining the presented design (generating a constant voltage) with a trans-susceptance network (converting the voltage to a current). The impact of different network types and positions is discussed, and an LCL network is selected so that both constant current and soft switching are maintained under load variation. The operating principle, design, and tuning procedures are illustrated. The trade-off between input current ripple, output current amplitude, and the working load range is discussed. The expectations were validated by a design switched at 6.78 MHz with 10-V input, 1.4-A output, and 12.6-W maximum output power. Soft switching is maintained with 16% output current variation over a 10:1 output power range.
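For reference, the property exploited here can be sketched with an idealized lossless LCL tee network whose series and shunt branches are tuned to the same reactance magnitude X at the operating frequency (an idealization, not the thesis's exact network values): the network behaves as an immittance converter, so the output current depends only on the driving voltage,

```latex
I_o = \frac{V_{in}}{jX},
```

independent of the load impedance, which is why a constant driving voltage yields a constant output current.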
A "ZVS" Class-E dc-dc converter by adding a diode rectifier bridge and compensate the induced varying capacitance at full-load condition - The load-independent Class-E design is extended to dc-dc converter by adding a diode rectifier bridge followed by the Class-E inverter. The equivalent impedance seen by the inverter consists of a varying capacitance and a varying resistance when the output changes. As illustrated before, ZVS and constant output can only be maintained with resistive load. Since the varying capacitance cannot be compensated for the whole load range, performance with using different compensation is discussed. With the selected full-load compensation, ZVS is achieved at full load condition and slight non-ZVS occurs for the other load conditions. The expectation was validated by a dc-dc converter switched at 6.78 MHz with 11 V input, 12 V output, and 22 W maximum output power. ZVS (including slight non-ZVS) is maintained with 16% output voltage variation over 20:1 output power range.
Design of a variable capacitor by connecting two voltage-sensitive capacitors in series and controlling their bias voltage – The equivalent varying capacitance in the Class-E dc-dc converter can be compensated over the whole load range only with a variable component. The sensitivity of a Class-E power conversion can also be improved by using variable capacitors. Thus, a Voltage Controlled Capacitor (VCC) is presented, based on the intrinsic property of Class II dielectric materials that their permittivity changes strongly with electric field. Its equivalent circuit consists of two identical Class II capacitors in series. By changing the voltage at the common point of the two capacitors (named the control voltage), the two capacitances and the total capacitance are both changed. Its operating principle, measured characteristics, and SPICE model are illustrated. The capacitance changes from 1 μF to 0.2 μF with a control voltage from 0 V to 25 V, resulting in a 440% capacitance range. Since the voltage across the two capacitors (named the output voltage) also affects one of the capacitances when the control voltage is applied, the capacitance range drops to only 40% with higher bias in the output voltage. Thus, a Linear Variable Capacitor (LVC) is presented. Its equivalent circuit is the same as the VCC, but one of the capacitances is designed to be much larger to mitigate the effect of the output voltage. The structure, operating principle, required specifications, design procedures, and component selection were validated by a design example, with a 380% maximum capacitance range and less than 20% drop over the designed capacitor voltage range.
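The basic relation behind both structures is simply the series combination of the two bias-dependent capacitances (symbols assumed for illustration):

```latex
C_{eq} = \frac{C_1(V_1)\,C_2(V_2)}{C_1(V_1) + C_2(V_2)},
```

where V1 and V2, the dc biases across the two capacitors, are set by the control voltage applied at their common point. Making one capacitance much larger than the other, as in the LVC, leaves C_eq dominated by the smaller capacitance and therefore less sensitive to the voltage across the whole series pair.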
This work contributes to
• An analytical framework and Thevenin model for load-independent Class-E power conversion
• Variable capacitance with a wide range / Doctor of Philosophy / The Class-E topology was presented as a single-switch power amplifier with high efficiency at the optimum condition. The efficiency of a conventional Class-E design degrades dramatically with load variation due to hard switching beyond the optimum condition.
Since two requirements need to be satisfied for soft switching in a conventional Class-E design, at least two parameters are tuned under load variation. Impressively, a load-independent Class-E inverter design was presented for maintaining Zero-Voltage-Switching (ZVS) and output voltage under a given load change without tuning any parameters, and it was validated with experimental results recently.
A Thevenin model is established in this work to explain the realization of load-independency with fixed switching frequency and duty cycle. Based on that, a sequential design and tuning process is presented. A prototype switched at 6.78 MHz with 10-V input, 11.3-V output, and 22.5-W maximum output power was fabricated and tested to validate the theory. Soft switching is maintained with 3% output voltage variation while the output power is reduced tenfold.
A load-independent ZVS Class-E inverter with constant current under load variation is then presented, by combining the presented design and a trans-susceptance network. The expectations were validated by a design switched at 6.78 MHz with 10-V input, 1.4-A output, and 12.6-W maximum output power. Soft switching is maintained with 16% output current varying over a 10:1 output power range.
The load-independent Class-E design is extended to a dc-dc converter by adding a diode rectifier bridge, which induces a varying capacitance. With the selected full-load compensation, ZVS is achieved at the full-load condition and slight non-ZVS occurs at the other load conditions. The expectation was validated by a dc-dc converter switched at 6.78 MHz with 11-V input, 12-V output, and 22-W maximum output power. ZVS (including slight non-ZVS) is maintained with 16% output voltage variation over a 20:1 output power range.
The varying capacitance in the Class-E dc-dc converter needs a variable component for full-range compensation. Thus, a Voltage Controlled Capacitor (VCC) is presented. Its capacitance changes from 1 μF to 0.2 μF with a control voltage from 0 V to 25 V, resulting in a 440% capacitance range. The capacitance range drops to only 40% with higher bias in the output voltage. Thus, a Linear Variable Capacitor (LVC) is presented, with a 380% maximum capacitance range and less than 20% drop over the designed capacitor voltage range.
|
1273 |
An Efficient Knapsack-Based Approach for Calculating the Worst-Case Demand of AVR Tasks. Bijinemula, Sandeep Kumar, 01 February 2019.
Engine-triggered tasks are real-time tasks that are released when the crankshaft arrives at certain positions in its path of rotation. This makes the rate of release of these jobs a function of the crankshaft's angular speed and acceleration. In addition, several properties of engine-triggered tasks, such as the execution time and deadlines, depend on the speed profile of the crankshaft. Such tasks are referred to as adaptive variable-rate (AVR) tasks. Existing methods to calculate the worst-case demand of AVR tasks are either inaccurate or computationally intractable. We propose a method to efficiently calculate the worst-case demand of AVR tasks by transforming the problem into a variant of the knapsack problem. We then propose a framework to systematically narrow down the search space associated with finding the worst-case demand of AVR tasks. Experimental results show that our approach is at least 10 times faster, with an average runtime improvement of 146 times for randomly generated task sets, when compared to the state-of-the-art technique. / Master of Science / Real-time systems require temporal correctness along with accuracy. This notion of temporal correctness is achieved by specifying deadlines for each of the tasks. In order to ensure that all the deadlines are met, it is important to know the processor requirement, also known as demand, of a task over a given interval. For some tasks, the demand is not constant; instead, it depends on several external factors. For such tasks, it becomes necessary to calculate the worst-case demand. Engine-triggered tasks are activated when the crankshaft in an engine is at certain points in its path of rotation. This makes their activation rate dependent on the angular speed and acceleration of the crankshaft. In addition, several properties of engine-triggered tasks, such as the execution time and deadlines, depend on the speed profile of the crankshaft. Such tasks are referred to as adaptive variable-rate (AVR) tasks. Existing methods to calculate the worst-case demand of AVR tasks are either inaccurate or computationally intractable. We propose a method to efficiently calculate the worst-case demand of AVR tasks by transforming the problem into a variant of the knapsack problem. We then propose a framework to systematically narrow down the search space associated with finding the worst-case demand of AVR tasks. Experimental results show that our approach is at least 10 times faster, with an average runtime improvement of 146 times for randomly generated task sets, when compared to the state-of-the-art technique.
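The abstract does not spell out its knapsack variant; purely as a point of reference, the standard 0/1 knapsack dynamic program that such formulations build on can be sketched as follows (item weights, values, and the capacity are placeholders, not quantities from the thesis):

```python
# Standard 0/1 knapsack DP: maximize total value within a weight capacity.
# Illustrative only; the thesis maps worst-case AVR demand to a *variant*
# of this problem, which is not reproduced here.
def knapsack(weights, values, capacity):
    best = [0] * (capacity + 1)                   # best[c] = max value with capacity c
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):      # iterate backwards so each item is used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Example with placeholder numbers
print(knapsack(weights=[3, 4, 5], values=[4, 5, 6], capacity=8))  # -> 10
```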
|
1274 |
Measuring Expected Returns in a Fluid Economic Environment. Evans, Donald C. III, 15 March 2004.
This paper examines the components of the Capital Asset Pricing Model (CAPM) and the model's use in analyzing portfolio returns. It also looks at subsequent versions of the CAPM, including a multi-variable CAPM that incorporates selected macro-variables as well as a non-stationary-beta CAPM, to estimate portfolio returns. A new model is proposed that combines the multi-variable component with the non-stationary-beta component to derive a new CAPM that is more effective at capturing current market conditions than the traditional CAPM with a fixed beta coefficient.
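A hypothetical form of such a specification (the symbols and the exact set of regressors are assumptions for illustration; the empirical results below identify quarterly changes in GDP, the unemployment rate, and the CPI as significant factors) is

```latex
R_{p,t} - R_{f,t} = \alpha + \beta_t\,(R_{m,t} - R_{f,t})
  + \gamma_1\,\Delta \mathrm{GDP}_t + \gamma_2\,\Delta \mathrm{UR}_t
  + \gamma_3\,\Delta \mathrm{CPI}_t + \varepsilon_t,
```

where the market beta β_t is allowed to vary over time rather than being held fixed.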
The multi-variable CAPM with non-stationary beta is applied, together with the selected macro-variables, to estimate the returns of a portfolio of assets in the oil sector of the economy. It looks at returns during the period 1995-2001, when the economy exhibited a wide range of variation in market returns. This paper tests the hypothesis that adapting the traditional CAPM to include beta non-stationarity better estimates portfolio returns in a fluid market environment.
The empirical results suggest that the new model is statistically significant in measuring portfolio returns. The model is estimated with an Ordinary Least Squares (OLS) estimation process and identifies three factors that are statistically significant: quarterly changes in the Gross Domestic Product (GDP), the Unemployment Rate, and the Consumer Price Index (CPI). / Master of Arts
|
1275 |
Numerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank. Cho, Taewon, 20 November 2017.
Inverse problems arise in many applications, in areas ranging from astronomy to geoscience. For example, image reconstruction and deblurring require methods for solving inverse problems. Since these problems are affected by many factors and by noise, general inversion methods cannot simply be applied. Furthermore, in the problems of interest, the number of unknown variables is huge, and some unknowns may depend nonlinearly on the data, so nonlinear problems must be solved. Solving nonlinear problems is quite different from, and significantly more challenging than, solving linear inverse problems, and more sophisticated methods are needed for these kinds of problems. / Master of Science / In various research areas, many required measurements cannot be observed directly for physical or economic reasons. Instead, these unknown quantities can be recovered from known measurements. This recovery can be modeled and solved mathematically.
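As a generic sketch of the problem class named in the title (notation assumed, not taken from the thesis), a separable nonlinear inverse problem can be written as a least-squares problem in which the data depend linearly on one block of unknowns and nonlinearly on the other:

```latex
\min_{\mathbf{x},\,\mathbf{y}} \;\bigl\| A(\mathbf{y})\,\mathbf{x} - \mathbf{b} \bigr\|_2^2,
```

so that, for a fixed y, the linear unknowns x can be eliminated by solving a linear least-squares subproblem.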
|
1276 |
Application and feasibility of visible-NIR-MIR spectroscopy and classification techniques for wetland soil identification. Whatley, Caleb, 10 May 2024.
Wetland determinations require visual identification of anaerobic soil indicators by an expert, which is a complex and subjective task. To eliminate bias, an objective method is needed to identify wetland soil. Currently, no such method exists that is rapid and easily interpretable. This study proposes a method for wetland soil identification using visible through mid-infrared (MIR) spectroscopy and classification algorithms. Wetland and non-wetland soils (n = 440) were collected across Mississippi. Spectra were measured from fresh and dried soil. Support Vector Classification and Random Forest modeling techniques were used to classify spectra with a 75%/25% calibration/validation split. POWERSHAP Shapley feature selection and Gini importance were used to locate the highest-contributing spectral features. Average classification accuracy was ~91%, with a maximum accuracy of 99.6% on MIR spectra. The most important features were related to iron compounds, nitrates, and soil texture. This study improves the reliability of wetland determinations by providing an objective and rapid wetland soil identification method that removes the need for an expert determination.
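A minimal sketch of this kind of classification workflow, assuming preprocessed spectra stored in hypothetical files (the file names, model settings, and preprocessing are illustrative, not the study's actual pipeline):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical inputs: rows are soil samples, columns are reflectance values per wavelength.
X = np.load("soil_spectra.npy")   # shape (n_samples, n_wavelengths) -- placeholder file
y = np.load("soil_labels.npy")    # 1 = wetland, 0 = non-wetland     -- placeholder file

# 75% calibration / 25% validation split, as described in the abstract.
X_cal, X_val, y_cal, y_val = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

for name, model in [("SVC", SVC(kernel="rbf")),
                    ("Random Forest", RandomForestClassifier(n_estimators=500, random_state=0))]:
    model.fit(X_cal, y_cal)
    acc = accuracy_score(y_val, model.predict(X_val))
    print(f"{name}: validation accuracy = {acc:.3f}")
```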
|
1277 |
Application of the self-consistent method of moments to the investigation of dynamic and optical characteristics of plasmas. Dubovtsev, Denis, 02 September 2019.
[EN] The method of moments occupies a special place among the theoretical methods dedicated to the study of systems with Coulomb interaction between particles. Its essence lies in the fact that the system linear response function is parameterized as a fractional-linear transformation of a (Nevanlinna) parameter function (NPF) with certain mathematical properties. The zero-frequency approximation is applied to determine the latter, which makes it possible to relate it, on the basis of justified physical considerations, to the moments themselves. This NPF static approximation is shown to be consistent with the Shannon entropy maximization method.
In the present work, the self-consistent version of the method of moments is applied to the investigation of the dynamic local field correction and other dynamic characteristics of classical strongly coupled one-component systems, such as dense Coulomb and Yukawa plasmas. The self-consistency of the approach means that the dynamic properties are obtained without any data input from simulations so that the dielectric function satisfies the first five sum rules automatically. Moreover, the dynamic structure factor, dispersion and the dynamic local-field correction are determined using exclusively the static structure factor calculated from the hypernetted chain approximation. A good quantitative agreement with molecular dynamics simulation data is achieved.
In addition, little discrepancy is observed among the plasma dynamic characteristics calculated with static structure factors obtained by various methods, namely the hyper-netted chain approximation (HNC), the modified HNC (MHNC), and the variationally modified HNC (VMHNC). This stability implies the robustness of the present approach.
Possibilities to abandon the NPF static approximation are analyzed as well. / Dubovtsev, D. (2019). Application of the self-consistent method of moments to the investigation of dynamic and optical characteristics of plasmas [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/125711
|
1278 |
Statistical Control of Quantitative Variables through Attribute Inspection Supported by the Design of Gauges with Optimal Dimensions (Control estadístico de variables cuantitativas mediante inspección por atributos apoyada en el diseño de galgas con dimensiones óptimas). Mosquera Restrepo, Jaime, 16 December 2019.
[EN] In Statistical Process Control, control charts by variables are usually the tool used to monitor a quantitative quality characteristic. To implement these charts, periodic measurement of a sample of process units is required. In some processes, obtaining an accurate measurement of the quality variable is a complex task, which demands a large amount of resources (costs, time, labor), or which deteriorates or destroys the inspected unit. In these cases, a more agile and economical alternative is to perform the control based on the verification of units with a gauge. Since verification with a gauge is usually as simple as checking an attribute, control based on gauges is much more agile and economical than control based on exact measurements.
Several proposals of control schemes based on inspection by gauges are found in the Statistical Process Control literature. In this PhD thesis we review these proposals and propose a new control scheme based on gauges, whose statistical performance is always equal to or better than that of any of the previous proposals. This scheme is designed for the control of the mean/variance of a quality variable with normal distribution and is subsequently extended to the control of these parameters in asymmetric distributions (log-normal, skew-normal and Weibull).
In addition, on the new gauge-based control scheme, an adaptation of the adaptive sample size strategies (double sampling and variable sample size) is carried out, and memory is incorporated into the control statistic through an exponentially weighted moving average (EWMA) scheme. As a result, new control schemes are obtained whose operation and implementation are as simple as those of control charts by attributes, but with better statistical performance than control charts by variables. / I would first like to thank the Universidad del Valle, Cali, Colombia, for the financial support that made possible my stay in the city of Valencia and the development of this doctoral thesis. / Mosquera Restrepo, J. (2019). Control estadístico de variables cuantitativas mediante inspección por atributos apoyada en el diseño de galgas con dimensiones óptimas [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/133059
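The EWMA memory mentioned above follows the standard recursion (a generic reference form; the thesis's exact gauge-based statistic is not reproduced here):

```latex
Z_t = \lambda\,Y_t + (1-\lambda)\,Z_{t-1}, \qquad 0 < \lambda \le 1,\quad Z_0 = \mu_0,
```

where Y_t would be the attribute-type count obtained from the gauge inspection of sample t and λ controls how quickly past samples are forgotten.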
|
1279 |
Performance of reverse osmosis based desalination process using spiral wound membrane: Sensitivity study of operating parameters under variable seawater conditions. Aladhwani, S.H., Al-Obaidi, Mudhar A.A.R., Mujtaba, Iqbal, 28 March 2022.
The Reverse Osmosis (RO) process accounts for 80% of the world's desalination capacity. There has been a rapid increase in the deployment of the RO process in seawater desalination due to its high efficiency in removing salts at a reduced energy consumption compared to thermal desalination technologies such as MSF and MED. Among the different types of membranes, the spiral wound membrane is one of the most widely used. However, there is no in-depth study of the performance of spiral wound membranes in terms of salt rejection, water quality, water recovery and specific energy consumption over a wide range of seawater salinity, temperature, feed flowrate and pressure using a high-fidelity yet realistic process model; this is therefore the focus of this study. The membrane is subjected to conditions within the manufacturer's recommendations. The outcome of this research will help designers select the optimum RO network configuration for a large-scale desalination process.
|
1280 |
High-Dimensional Functional Graphs and Inference for Unknown Heterogeneous Populations. Chen, Han, 21 November 2024.
In this dissertation, we develop innovative methods for analyzing high-dimensional, heterogeneous functional data, focusing specifically on uncovering hidden patterns and network structures within such complex data. We utilize functional graphical models (FGMs) to explore the conditional dependence structure among random elements. We mainly focus on the following three research projects.
The first project combines the strengths of FGMs with finite mixture of regression models (FMR) to overcome the challenges of estimating conditional dependence structures from heterogeneous functional data. This novel approach facilitates the discovery of latent patterns, proving particularly advantageous for analyzing complex datasets, such as brain imaging studies of autism spectrum disorder (ASD). Through numerical analysis of both simulated data and real-world ASD brain imaging, we demonstrate the effectiveness of our methodology in uncovering complex dependencies that traditional methods may miss due to their homogeneous data assumptions.
Secondly, we address the challenge of variable selection within FMR in high-dimensional settings by proposing a joint variable selection technique. This technique employs a penalized expectation-maximization (EM) algorithm that leverages shared structures across regression components, thereby enhancing the efficiency of identifying relevant predictors and improving the predictive ability. We further expand this concept to mixtures of functional regressions, employing a group lasso penalty for variable selection in heterogeneous functional data.
Lastly, we recognize the limitations of existing methods in testing the equality of multiple functional graphs and develop a novel, permutation-based testing procedure. This method provides a robust, distribution-free approach to comparing network structures across different functional variables, as illustrated through simulation studies and functional magnetic resonance imaging (fMRI) analysis for ASD.
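As a rough illustration of the permutation idea only (the function below is generic and assumes a user-supplied statistic, for example a distance between the networks estimated from each group; it is not the dissertation's actual test):

```python
import numpy as np

def permutation_test(group_a, group_b, statistic, n_perm=1000, seed=0):
    """Distribution-free two-group test: 'statistic' measures the discrepancy
    between the two groups; larger values indicate a larger difference."""
    rng = np.random.default_rng(seed)
    observed = statistic(group_a, group_b)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)            # relabel subjects at random
        if statistic(perm[:n_a], perm[n_a:]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)            # permutation p-value
```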
Hence, these research works provide a comprehensive framework for functional data analysis, significantly advancing the estimation of network structures, functional variable selection, and testing of functional graph equality. This methodology holds great promise for enhancing our understanding of heterogeneous functional data and its practical applications. / Doctor of Philosophy / This study introduces innovative techniques for analyzing complex, high-dimensional functional data, such as functional magnetic resonance imaging (fMRI) data from the brain. Our goal is to reveal underlying patterns and network connections, particularly in the context of autism spectrum disorder (ASD). In functional data, we treat each signal curve from various locations as a single data point. These datasets are characterized by high dimensionality, with the number of model parameters exceeding the sample size.
We employ functional graphical models (FGMs) to investigate the conditional dependencies among data elements. Our approach combines FGMs with finite mixture of regression models (FMR), allowing us to uncover hidden patterns that traditional methods assuming homogeneity might miss. Additionally, we introduce a new method for selecting relevant variables in high-dimensional regression contexts. This method enhances prediction accuracy by utilizing shared information among regression components.
Furthermore, we develop a robust testing framework to facilitate the comparison of network structures between groups without relying on distribution assumptions. This enables precise evaluations of functional graphs.
Hence, our research works contribute to a deeper understanding of complex, diverse functional data, paving the way for novel insights across various fields.
|