71 |
An exploratory study of manufacturing data and its potential for continuous process improvements from a production economical perspective
Todorovac, Kennan, Wiking, Nils January 2021 (has links)
Background: Continuous improvements in production are essential in order to compete on the market. To be an active competitor, companies need to know their strengths and weaknesses and develop their production continually. Today's process industries generate enormous volumes of data, and data are considered a valuable source for companies to find new ways to boost their operations' productivity and profitability. Data Mining (DM) is the process of discovering useful patterns and trends in large data sets. Several authors have pointed out data mining as a good data-analysis process for manufacturing due to the large amount of data generated and collected from production processes. In manufacturing, DM has two primary goals: descriptive, focused on discovering patterns that describe the data, and predictive, where a model is used to determine future values of important variables. Objectives: The objective of this study was to gain a deeper understanding of how data collected from production can lead to insights regarding potential production-economic improvements by following the CRISP-DM methodology. In particular, for the chosen production line, the study asked whether there were any differences in replenishment durations between different procedures. Duration in this study is the time the line is halted during a material replenishment. The procedures in question are single-replenishment versus double-replenishment. It was further investigated whether there were any differences in replenishment duration depending on which shift team performed the replenishment and at what shift time. Methods: In this study the CRISP-DM methodology was used for structuring the data collected from the case company. The data were primarily historical data from a continuous production process. To verify the objective of the study, three hypotheses derived from the objective were tested using a t-test and a Bonferroni test. Results: The results showed that the duration of a double-replenishment is lower compared to two single-replenishments. Further results showed a significant difference in single-replenishment duration between different shift times and different working teams. The interpretation is that, in the short term, implementing double-replenishments can reduce the throughput time and possibly also the lead time. Conclusions: This study contributes knowledge for others who seek a way to use data to uncover information or deeper knowledge about a continuous production process. The findings could be specifically interesting for cable manufacturers and, in general, for continuous process manufacturers. A further conclusion is that time-based competition is one way to increase competitive advantage in the market. By using manufacturing-generated data, it is possible to analyse and find valuable information that can contribute to continuous process improvements and increase competitive advantage.
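As an illustrative sketch of the hypothesis-testing step described in the Methods, the snippet below runs a Welch t-test with a Bonferroni correction. The numbers are synthetic, not the case company's data, and the study's exact test setup may differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical halt durations in seconds: two single replenishments
# versus one double replenishment (illustrative distributions only).
two_singles = rng.normal(2 * 95, 20, size=40)
double = rng.normal(150, 20, size=40)

# Welch's t-test for a difference in mean halt duration
t, p = stats.ttest_ind(double, two_singles, equal_var=False)

# Bonferroni correction: with m hypotheses tested on the same data,
# each p-value is compared against alpha/m (m = 3 as in the study).
m, alpha = 3, 0.05
print(f"t = {t:.2f}, p = {p:.4g}, significant after Bonferroni: {p < alpha / m}")
```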
|
72 |
Objekt-Orientierung im Compilerbau / Object Orientation in Compiler Construction
Kühl, Bernhard 25 February 2002 (has links)
Since the first compilers were developed, compilers have been written for a great many programming languages. Every computer science student is familiar with, for example, a C or perhaps a Java compiler. In the beginning, compilers were considered very difficult programs to implement. Since then, however, systematic techniques and tools for automatically generating (parts of) compilers have been developed.
Only a few people ever find themselves developing a compiler for a high-level programming language. But other "things" must also be analyzed and processed like programs of a language:
Filter tools such as grep, sed, or awk act on a (line-by-line) stream of characters, analyze the characters, and emit a stream of characters. Applications read, inspect, and write configuration files. Pretty-printers and preprocessors recognize programs of a high-level language and output them again in textually modified form.
Compiler construction techniques therefore continue to find frequent use in software development. The development of such software, however, is still not free of problems: writing translator software by hand is very tedious and error-prone, and the resulting translators are hard to extend. It is therefore advisable to use tools for generating the software. These tools, however, are not easy for beginners to learn and are rigidly limited to their built-in functionality. Extending the tools themselves is hardly possible.
Around 1970, Xerox PARC, building on ideas from the language Simula, invented the technique of object-oriented programming with the programming language Smalltalk. Object-oriented programming has been the defining programming concept of recent years. Many of the programming languages in use today are either object-oriented from the ground up (for example Java, C++, or Smalltalk) or have been given object-oriented extensions over time (for example Basic or Pascal). Even some scripting languages have object-oriented features (for example JavaScript or Python), and even non-object-oriented languages such as C can be used in an object-oriented way.
It is therefore natural to apply object orientation to the problem of processing languages, in order to show that ...
... the use of object-oriented programming can simplify the development of (parts of) compilers.
... the tools developed this way, and their results, gain in power compared with the tools and results of the usual techniques.
... object-oriented compiler-construction software also enjoys the advantage of easy extensibility.
... the reuse of classes and objects eases the development and use of scanners or compilers (see the sketch after this list).
... object-oriented programming is an automatic mechanism for divide & conquer and can be exploited profitably here as well; even complex compiler-construction algorithms decompose into small, simple subproblems and thus become easy for students to understand in class, or even to discover on their own.
... the use of various typical object-oriented design patterns substantially increases the flexibility and extensibility of, for example, a scanner or a compiler.
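As a toy illustration of these claims, a scanner can be organized so that every token kind is a class and the core loop never changes when new kinds are added. This sketch is purely illustrative and not taken from the thesis:

```python
import re
from dataclasses import dataclass

@dataclass
class Token:
    """Base class: each token kind is a subclass, so new kinds extend the
    scanner without touching its core loop (the extensibility claim above)."""
    text: str

class Number(Token): pattern = r"\d+"
class Ident(Token):  pattern = r"[A-Za-z_]\w*"
class Op(Token):     pattern = r"[+\-*/=]"

class Scanner:
    token_classes = [Number, Ident, Op]   # extend by appending new subclasses

    def scan(self, source):
        pos = 0
        while pos < len(source):
            if source[pos].isspace():
                pos += 1
                continue
            for cls in self.token_classes:
                m = re.match(cls.pattern, source[pos:])
                if m:
                    yield cls(m.group())
                    pos += m.end()
                    break
            else:
                raise SyntaxError(f"unexpected character {source[pos]!r}")

print(list(Scanner().scan("x = 42 + y1")))
```

Adding, say, a string-literal token is then a one-class change, which is the divide & conquer effect the enumeration argues for.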
|
73 |
Šifrované souborové systémy / Encrypted File Systems
Hlavinka, Michal January 2008 (has links)
This thesis deals with encrypted filesystems, focusing mainly on Linux solutions. It first explains why it is necessary to encrypt your data at all. It then discusses the pros and cons of encryption methods such as whole-disk encryption, partition encryption, container encryption, and file encryption. After the introduction, it covers the most common security issues that can help an attacker decrypt or modify your data. The thesis then gives a short description of the most frequently used encrypted filesystems, a small how-to for the most important encrypted filesystems available on Linux, and a comparison of their speed. The next part describes dm-crypt and LUKS. The last chapter summarizes all the findings and discusses the benefits of this work and what can be done in the future.
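As a minimal sketch of the dm-crypt/LUKS workflow such a how-to covers, the standard cryptsetup invocations are wrapped in Python below for illustration. The device and mapping names are placeholders, and this is not the thesis's own how-to:

```python
import subprocess

def run(cmd):
    """Run a command, raising on failure."""
    subprocess.run(cmd, check=True)

device = "/dev/sdX1"   # placeholder block device; WARNING: luksFormat destroys its data
mapping = "secret"     # placeholder device-mapper name

run(["cryptsetup", "luksFormat", device])         # initialize LUKS header (prompts for passphrase)
run(["cryptsetup", "open", device, mapping])      # unlock; creates /dev/mapper/secret
run(["mkfs.ext4", f"/dev/mapper/{mapping}"])      # plaintext mapping behaves like a normal block device
run(["mount", f"/dev/mapper/{mapping}", "/mnt"])
# ... work with files under /mnt ...
run(["umount", "/mnt"])
run(["cryptsetup", "close", mapping])             # lock the volume again
```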
|
74 |
Návrh a implementace jader real-time operačních systémů běžících na HC08 / Design and Implementation of Real-Time Operating System Kernels Running on HC08
Bednář, Jan January 2008 (has links)
The project is aimed at testing the kernels of real-time operating systems on the HC08 platform. The Rate Monotonic (RM), Earliest Deadline First (EDF), and polled-loop mechanisms are compared, as well as the freely available FreeRTOS and QP systems. The project also describes the techniques used in the development, acquisition, and implementation of the test environments. The evaluation is based on tests made on the HC08 platform and on the knowledge gained from programming each individual type of real-time OS.
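As background for the RM versus EDF comparison, the classic single-processor schedulability checks can be sketched as follows. The task set is hypothetical and the tests are the textbook utilization bounds, not the project's own benchmarks:

```python
def rm_utilization_bound(n):
    """Liu & Layland sufficient bound for Rate Monotonic: U <= n(2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def is_schedulable_rm(tasks):
    """tasks: list of (wcet, period). Sufficient (not necessary) RM test."""
    u = sum(c / t for c, t in tasks)
    return u <= rm_utilization_bound(len(tasks))

def is_schedulable_edf(tasks):
    """EDF on one processor with implicit deadlines: schedulable iff U <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

tasks = [(1, 4), (2, 6), (1, 10)]   # hypothetical (WCET, period) pairs
print(is_schedulable_rm(tasks), is_schedulable_edf(tasks))
```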
|
75 |
Renormalization Group Approach to two Strongly Correlated Condensed Matter Models
Ghamari, M. Sedigh 11 1900 (has links)
This thesis presents renormalization group (RG) analyses of two strongly correlated condensed matter systems.
In the first part, the phase diagram of the spin-$\frac{1}{2}$ Heisenberg antiferromagnetic model on a spatially anisotropic triangular lattice is discussed. This model, together with a Dzyaloshinskii-Moriya (DM) interaction, describes the magnetic properties of the layered Mott insulator Cs$_{2}$CuCl$_{4}$. Employing a real-space RG approach, it is found, in agreement with a previous similar study, that a fragile collinear antiferromagnetic (CAF) state can be stabilized at sufficiently strong anisotropies. The presented RG analysis indicates only the presence of the CAF and spiral states in the phase diagram, with no extended quantum-disordered state at strong anisotropies. Specifically, it reveals a fine-tuning of couplings that entails the persistence of ferromagnetic correlations between second-nearest chains over large length scales even in the CAF phase. This has important implications for how numerical studies on finite-size systems should be interpreted, and it reconciles the presence of the CAF state with the observation of only ferromagnetic correlations in numerical studies. The effect of a weak DM interaction is also examined within this RG approach. It is concluded that Cs$_{2}$CuCl$_{4}$ lies well within the stability region of the spiral ordering.
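For reference, this family of models is conventionally written in the following form. This is a sketch of the standard Hamiltonian used in the Cs$_{2}$CuCl$_{4}$ literature; sign and bond conventions vary between papers:

```latex
H = J \sum_{\langle ij \rangle \,\in\, \text{chains}} \mathbf{S}_i \cdot \mathbf{S}_j
  + J' \sum_{\langle ij \rangle \,\in\, \text{interchain}} \mathbf{S}_i \cdot \mathbf{S}_j
  + \sum_{\langle ij \rangle} \mathbf{D}_{ij} \cdot \left( \mathbf{S}_i \times \mathbf{S}_j \right)
```

Here $J'/J < 1$ parametrizes the spatial anisotropy of the triangular lattice, and $\mathbf{D}_{ij}$ are the Dzyaloshinskii-Moriya vectors.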
In the second part, the fate of a neck-narrowing Lifshitz transition in two dimensions, in the presence of weak interactions, is studied. Such a transition is a topological quantum phase transition with no change in symmetry. At the critical point of this transition, the density of states at the Fermi energy is logarithmically divergent and a van Hove singularity appears. It is found that, at the critical point, the Wilsonian effective action is intrinsically non-local. This non-locality is attributed to integrating out an emergent soft degree of freedom. Away from the critical point, a local perturbative RG description is presented, and it is shown that weak attractive interactions grow as $\log^2 L$ (where $L$ is the physical length). However, this local description is restricted to a finite momentum range that shrinks as the critical point is approached. / Thesis / Candidate in Philosophy
|
76 |
Robustness of State-of-the-Art Visual Odometry and SLAM Systems / Robusthet hos moderna Visual Odometry och SLAM system
Mannila, Cassandra January 2023 (has links)
Visual(-Inertial) Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) are hot topics in Computer Vision today. These technologies have various applications, including robotics, autonomous driving, and virtual reality. They may also be valuable in studying human behavior and navigation through head-mounted visual systems. A complication for SLAM and VIO systems could potentially be visual degradation such as motion blur. This thesis attempts to evaluate the robustness to motion blur of two open-source state-of-the-art VIO and SLAM systems, namely Delayed Marginalization Visual-Inertial Odometry (DM-VIO) and ORB-SLAM3. There are no real-world benchmark datasets with varying amounts of motion blur today. Instead, a semi-synthetic dataset was created by applying a dynamic trajectory-based motion blurring technique to an existing dataset, TUM VI. The systems were evaluated in two sensor configurations, Monocular and Monocular-Inertial, using the Root Mean Square (RMS) of the Absolute Trajectory Error (ATE). Based on the findings, DM-VIO is highly influenced by the visual input, and performance decreases substantially as motion blur increases, regardless of the sensor configuration. In the Monocular setup, performance declines significantly, going from centimeter to decimeter precision. Performance improves slightly with the Monocular-Inertial configuration. ORB-SLAM3 is unaffected by motion blur, performing at centimeter precision, and there is no significant difference between the sensor configurations. Nevertheless, a stochastic behavior can be noted in ORB-SLAM3 that can cause some sequences to deviate from this. In total, ORB-SLAM3 outperforms DM-VIO on all sequences in the semi-synthetic dataset created for this thesis. The code used in this thesis is available on GitHub at https://github.com/cmannila, along with forked repositories of DM-VIO and ORB-SLAM3. / Visual(-Inertial) Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) are of great interest within Computer Vision. These systems have a variety of applications such as robotics, self-driving cars, and VR (Virtual Reality). A further potential application is to integrate SLAM/VIO into head-mounted systems, such as glasses, in order to study the wearer's behavior and navigation. A complication for SLAM and VIO could be a visual degradation of the visual system, such as motion blur. This thesis attempts to evaluate the robustness against motion blur of two available state-of-the-art systems, DM-VIO (Delayed Marginalization Visual-Inertial Odometry) and ORB-SLAM3. Today there are no available datasets containing specifically varying amounts of motion blur. Thus, a semi-synthetic dataset was created based on an existing one, TUM VI. This was done with dynamic rendering of motion blur along a known trajectory obtained from the dataset. With this technique, different amounts of exposure time could be simulated. DM-VIO and ORB-SLAM3 were evaluated with two sensor configurations, Monocular (one camera) and Monocular-Inertial (one camera with an Inertial Measurement Unit). The objective metric used to compare the systems was the Root Mean Square of the Absolute Trajectory Error in meters. The results of this work show that DM-VIO depends heavily on the visual signal, and performance decreases substantially as motion blur increases, regardless of sensor configuration. When only one camera (Monocular) is used, performance drops from centimeter to decimeter precision. ORB-SLAM3 is unaffected by motion blur and performs with centimeter precision on all sequences. Nor can any significant difference between the sensor configurations be shown. Nevertheless, a stochastic behavior in ORB-SLAM3 can be noted, which may have caused some sequences to behave anomalously. Overall, ORB-SLAM3 outperforms DM-VIO on all sequences in the semi-synthetic dataset created for this work. The code used in this work is available on GitHub at https://github.com/cmannila together with forked repositories of DM-VIO and ORB-SLAM3.
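For reference, the RMS ATE metric used above can be sketched as follows, assuming timestamp-associated position sequences and a standard least-squares rigid alignment step. This is illustrative and not the exact evaluation code of the thesis:

```python
import numpy as np

def align_rigid(est, gt):
    """Least-squares SE(3) alignment (rotation + translation, no scale) of
    estimated positions onto ground truth via SVD (Kabsch/Umeyama step).
    est, gt: (N, 3) arrays of timestamp-associated positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """Root mean square of the absolute trajectory error after alignment."""
    R, t = align_rigid(est, gt)
    err = gt - (est @ R.T + t)
    return np.sqrt((np.linalg.norm(err, axis=1) ** 2).mean())
```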
|
77 |
Diseño y verificación de sistemas de tiempo real heterogéneos / Design and Verification of Heterogeneous Real-Time Systems
Páez, Francisco Ezequiel 12 March 2021
A Real-Time System (RTS) must guarantee that its results are not only correct but also satisfy a set of timing constraints. In general, this means ensuring that its tasks finish executing before a deadline. To achieve this, predictability and determinism are of the utmost importance.
The classic application domain of RTS has been industry, for example avionics, space exploration, medical equipment, control systems, etc. All these systems have in common that they are mission-critical, where a failure has serious consequences: material and economic losses, damage to the environment, or risk to human life. These systems are usually static and use hardware architectures and algorithms of proven effectiveness. Their design and implementation is often ad hoc.
In recent decades, however, the application domain of RTS has been expanding. Today they are used in the most varied settings and products: household appliances, consumer electronics, cell phones, automobiles, communications, ticket reservation systems, etc.
Many of these systems consist of both critical and non-critical tasks. A failure in the execution of the former has serious consequences, while a violation of the timing constraints of the latter degrades the expected quality of service. It is therefore vital that the non-critical tasks do not affect the correct execution of the critical tasks. An RTS with these characteristics is called heterogeneous. In recent years, thanks to the increased computing power of microprocessors and the reduction in their cost, the number of RTS in which these two task sets coexist has grown.
Correctly executing a heterogeneous RTS requires techniques that compute and manage the available idle time online. In this way, the scheduler can maintain the guarantee that the timing constraints of the critical tasks are met while, at the same time, giving acceptable service to the tasks without strict requirements. Today, more powerful microprocessors open up the possibility of implementing these methods even in systems that used to have very little computing power. However, the overhead these methods add is not negligible, so reducing their computational cost remains highly useful even when more capable hardware is available.
There is an extensive literature addressing the scheduling of heterogeneous RTS. However, a significant gap exists between the theoretical developments in the discipline and the techniques actually used in industry. It is necessary to be able to verify the real cost, and the objective advantages and disadvantages, of implementing the state-of-the-art theoretical models.
Many theoretical models do not take into account additional costs present in concrete implementations. These are commonly considered negligible in the modeling in order to simplify the analysis, evaluation, and development of the system. As a consequence, these parameters are overestimated in the real implementation, which results in lower system efficiency. A common example is the use of microprocessors with computing capacity well above what is actually required, which has a negative impact on energy consumption and cost. A more detailed model in the design, implementation, and verification stages would improve the performance of the final system without abandoning the guarantee of temporal predictability. Equally important, techniques and tools are needed that allow these theoretical methods to be implemented efficiently.
This thesis is based on the hypothesis that heterogeneous RTS can be efficient both in scheduling their task sets and in using their computational resources. Under this premise, new models and improvements to existing models are presented, together with the simulations, tests, and developments needed to verify them. The work rests strongly on the practical implementation of the theoretical results, identifying the real difficulties that putting them into practice entails. In this way, it seeks to reduce the gap between the results obtained by scientific research in the RTS discipline and what is concretely used and implemented in industry, research, and technological development. / A Real-Time System (RTS) must guarantee that its results are correct and fulfill a set of temporal restrictions. This implies that each task completes its execution before a specific deadline. To accomplish this, the predictability and determinism of the system as a whole are very important.
These kinds of systems are used in several industries, like aircraft avionics, space exploration, medical equipment, etc., which are mission-critical. A failure in these systems could have catastrophic consequences, like the loss of human lives. Most of the time the design and implementation of these systems is ad hoc.
In the last decades, thanks to the growth and sophistication of embedded systems, the application domain of RTS has widened. Nowadays they can be found in consumer electronics, cellphones, communication systems, cars, etc.
Many of these new kinds of real-time systems are composed of both critical and non-critical tasks. A failure in the execution of the former has severe consequences, but a missed deadline of the latter only affects the quality of service. Such an RTS is known as a heterogeneous one.
To accomplish correct execution of a heterogeneous RTS, methods and techniques that compute and manage the system idle time are needed. With these tools, the system scheduler can guarantee that all the time-critical tasks meet their deadlines. Nonetheless, these techniques add an execution overhead to the system.
Although several works in the literature propose solutions for many of the scheduling problems in a heterogeneous RTS, a gap exists between these results and what is actually used and implemented in industry.
Many theoretical models do not take into account the additional costs present in a concrete implementation. These are commonly considered negligible in order to simplify the analysis, evaluation, and development of the system. As a consequence, some parameters are overestimated, resulting in reduced system efficiency. A common scenario is the use of microprocessors more powerful than required, with a negative impact on energy consumption and production costs. A more detailed model in the design and implementation stages could improve the performance of the final system, without abandoning the guarantee of temporal predictability. Equally important, there must be techniques and tools that allow these theoretical results to be implemented.
The working hypothesis of this thesis is that a heterogeneous RTS can be efficient in the scheduling of its tasks and the use of its resources. Following this premise, new models and improvements to existing ones are presented, in conjunction with several simulations and implementations of the theoretical results, in order to identify the real difficulties that implementation brings about. This seeks to reduce the gap between scientific research in the RTS discipline and what is actually implemented in industry.
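To make the idle-time management idea concrete, the sketch below simulates a fixed-priority timeline and collects the slots left over for non-critical work. It is purely illustrative (hypothetical task set, unit time slots, overheads ignored, RM-schedulability assumed) and is not the thesis's own algorithm:

```python
from math import gcd
from functools import reduce

def background_slots(critical, horizon):
    """Discrete-time sketch: run periodic critical tasks (wcet, period) under
    Rate Monotonic priorities; any slot with no pending critical work is idle
    time a heterogeneous RTS can hand to its non-critical tasks.
    Assumes the critical set is schedulable; deadline checks omitted."""
    remaining = [0] * len(critical)   # unfinished work of each task's current job
    order = sorted(range(len(critical)), key=lambda i: critical[i][1])  # shorter period = higher priority
    idle = []
    for t in range(horizon):
        for i, (c, p) in enumerate(critical):
            if t % p == 0:            # new job released
                remaining[i] = c
        for i in order:               # serve the highest-priority pending job
            if remaining[i] > 0:
                remaining[i] -= 1
                break
        else:
            idle.append(t)            # slot free for non-critical work
    return idle

tasks = [(1, 4), (2, 6), (1, 12)]     # hypothetical (WCET, period) pairs
h = reduce(lambda a, b: a * b // gcd(a, b), (p for _, p in tasks))  # hyperperiod
print(f"{len(background_slots(tasks, h))}/{h} slots available for non-critical tasks")
```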
|
78 |
Generalized Terminal Modeling of Electro-Magnetic Interference
Baisden, Andrew Carson 10 December 2009 (has links)
Terminal models have been used for various power electronic applications. In this work a two- and three-terminal black box model is proposed for electro-magnetic interference (EMI) characterization. The modeling procedure starts with a time-variant system at a particular operating condition, which can be a converter, set of converters, sub-system or collection of components. A unique, linear equivalent circuit is created for applications in the frequency domain. Impedances and current / voltage sources define the noise throughout the entire EMI frequency spectrum. All parameters needed to create the model are clearly defined to ensure convergence and maximize accuracy.
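As an illustration of how such a linear terminal equivalent can be identified, a common extraction approach measures the terminal voltage spectrum under two known terminations and solves for a Norton pair at each frequency point. The sketch below assumes the Norton source form and is not necessarily the exact procedure of this work:

```python
import numpy as np

def norton_from_two_loads(v1, v2, z1, z2):
    """Per-frequency Norton equivalent (noise current source i_n in parallel
    with source impedance z_n) from terminal voltage spectra v1, v2 (complex
    arrays) measured under two known terminations z1, z2.

    Model: v_k = i_n * (z_n * z_k) / (z_n + z_k), k = 1, 2."""
    z_n = z1 * z2 * (v2 - v1) / (v1 * z2 - v2 * z1)
    i_n = v1 * (z_n + z1) / (z_n * z1)
    return i_n, z_n
```

The two terminations must differ enough that the linear system is well conditioned; in practice this is what dedicated impedance fixtures provide.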
The model is then used to predict the attenuation caused by a filter, with greater accuracy than the small-signal insertion-gain measurements performed with network analyzers. Knowledge of the EMI filter's interactions with the converter allows advanced techniques and design constraints to optimize the filter for size, weight, and cost. Additionally, the model is demonstrated when the operating point of the system does not remain constant, as with AC power systems. Modeling a varying operating point requires information about all operating conditions for a complete and accurate model. However, the data collection and processing quickly become unmanageable due to the large amounts of data needed. Therefore, simplification techniques are used to reduce the complexity of the model while maintaining accuracy throughout the frequency spectrum.
The modeling approach is verified for linear and power electronic networks, including a dc-dc boost converter, a phase-leg module, and a simulated dc-ac inverter. The accuracy of the model is confirmed up to 100 MHz in simulation and to at least 50 MHz in experimental validation. / Ph. D.
|
79 |
Frequency Domain Conductive Electromagnetic Interference Modeling and Prediction with Parasitics Extraction for Inverters
Huang, Xudong 06 October 2004 (has links)
This dissertation focuses on the development of a modeling and simulation methodology to predict conducted electromagnetic interference (EMI) for high-power converters. Conventionally, EMI prediction relies on applying the Fast Fourier Transform (FFT) to time-domain simulation results, which requires long hours of simulation and a large amount of data. The proposed approach is to use a frequency-domain analysis technique that computes the EMI spectrum directly by decomposing the noise sources and their propagation paths. This method not only largely reduces the computational effort, but also provides insightful information about the critical components of EMI generation and distribution. The study was first applied to a dc/dc chopper circuit by deriving high-frequency equivalent circuit models for differential-mode (DM) and common-mode (CM) EMI. The noise source was modeled as trapezoidal current and voltage pulses, and the noise cut-off frequency was identified as a function of the rise time and fall time of the trapezoidal waves. The noise propagation path was modeled as lumped parasitic inductors and capacitors, and an additional noise cut-off frequency was identified as a function of the parasitic components. Using the noise source and path models, the proposed method effectively predicts the EMI performance, and the results were verified with hardware experiments. With the EMI prediction methodology well proven on a dc/dc chopper, the method was then extended to the prediction of DM and CM EMI of three-phase inverters under complex pulse width modulation (PWM) patterns. The inverter noise source requires the double Fourier integral technique because its switching cycle and fundamental cycle are on two different time scales. The noise path requires parasitic parameter extraction through finite element analysis for the complex-structured power bus bar and printed circuit layout. After the inverter noise source and path are identified, the effects of different modulation schemes on the EMI spectrum are evaluated through the proposed frequency-domain analysis technique and verified by hardware experiment. The results, again, demonstrate that the proposed frequency-domain analysis technique is valid and is considered a promising approach to effectively predicting the EMI spectrum up to the tens of MHz range. / Ph. D.
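The trapezoidal noise-source model and its rise/fall-time cut-off frequencies can be made concrete with the classic spectral envelope of a trapezoidal pulse train. The sketch below is illustrative; the waveform parameters are hypothetical and equal rise and fall times are assumed:

```python
import numpy as np

def trapezoid_envelope(f, amplitude, period, width, t_edge):
    """Spectral-magnitude envelope of a trapezoidal pulse train.
    Flat at 2*A*width/period, rolling off at -20 dB/decade above the first
    corner f1 = 1/(pi*width) and -40 dB/decade above the second corner
    f2 = 1/(pi*t_edge), where t_edge is the rise (= fall) time."""
    env = 2.0 * amplitude * (width / period)
    env = env * np.minimum(1.0, 1.0 / (np.pi * f * width))
    env = env * np.minimum(1.0, 1.0 / (np.pi * f * t_edge))
    return env

f = np.logspace(4, 8, 401)                                # 10 kHz .. 100 MHz
env = trapezoid_envelope(f, 300.0, 50e-6, 25e-6, 50e-9)   # hypothetical 300 V, 20 kHz, 50 ns edges
print(20 * np.log10(env[0] / 1e-6), "dBuV at 10 kHz")
```

Shortening the switching edges pushes the second corner frequency upward, which is why faster devices raise the high-frequency end of the spectrum.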
|
80 |
EMI Terminal Behavioral Modeling of SiC-based Power Converters
Sun, Bingyao 28 September 2015 (has links)
With GaN and SiC switching devices becoming more commercially available, higher switching frequency is being applied to achieve higher efficiency and power density in power converters. However, electro-magnetic interference (EMI) becomes a more severe problem as a result. In this thesis, the switching frequency effect on conducted EMI noise is assessed.
As EMI noise increases, the EMI filter plays a more important role in the power converter. As a result, an effective EMI modeling technique for the power converter system is required in order to design an EMI filter that is both effective and optimized for size.
The frequency-domain model is verified to be an efficient and straightforward way to explore EMI noise generation and propagation in the system. Of the various models, the unterminated behavioral model can simultaneously predict the CM input and output noise of an inverter, with predictions that fall in line with measurements up to around 10 MHz or higher. The DM terminated behavioral model can predict the DM input or output noise of a motor drive above 20 MHz. These two models are easy to extract and have high prediction capability; this was verified on a Si motor drive with a 10 kHz switching frequency. It is worthwhile to explore the prediction capability of the two models when they are applied to a SiC-based power inverter with switching frequencies ranging from 20 kHz to 70 kHz.
In this thesis, the CM unterminated behavioral model is first applied to the SiC power inverter, and the results show that the model's prediction capability is limited by the noise floor of the oscilloscope measurement. The proposed segmented-frequency-range measurement is developed and verified to be a good solution to the noise-floor problem. With the improved impedance fixtures, the prediction from the CM model matches the measurement up to 30 MHz.
To predict the DM input and output noise of the SiC inverter, the DM terminated behavioral model can be used under the condition that the CM and DM noise are decoupled. Through system noise analysis, the DM output side is verified to be independent of the CM noise and of the input side. The DM terminated behavioral model is extracted at the inverter output and predicts the DM output noise up to 30 MHz after solving the noise-floor and DM-choke saturation problems.
At the DM input side, the CM and DM noise are seen to be coupled with each other. It is found experimentally that the mixing of CM and DM noise results from the asymmetric impedance of the system. The mixed-mode terminated behavioral model is proposed to predict the DM noise when a mixed CM effect exists. The model can capture the DM noise up to 30 MHz when the impedance between the inverter and CM ground is not balanced. The issue often arises in the extraction of the model impedance and is solved by the curve-fitting optimization described in the thesis.
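For reference, the CM/DM decomposition underlying this mode analysis can be sketched as follows (standard convention; illustrative only):

```python
import numpy as np

def cm_dm_decompose(v_p, v_n):
    """Split two line-to-ground voltage spectra into common-mode (CM) and
    differential-mode (DM) components, using the usual convention
    v_cm = (v_p + v_n) / 2 and v_dm = (v_p - v_n) / 2. With symmetric
    propagation paths the modes stay independent; any impedance asymmetry
    mixes them, which is the mixed-mode effect discussed above."""
    return (v_p + v_n) / 2.0, (v_p - v_n) / 2.0

# Hypothetical two-line spectra (complex arrays over frequency):
v_p = np.array([1.0 + 0.2j, 0.5 - 0.1j])
v_n = np.array([0.9 - 0.1j, -0.4 + 0.3j])
v_cm, v_dm = cm_dm_decompose(v_p, v_n)
```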
This thesis ends with a summary of contributions, limitations, and some future research directions. / Master of Science
|