641

Rate Distortion Theory for Causal Video Coding: Characterization, Computation Algorithm, Comparison, and Code Design

Zheng, Lin January 2012 (has links)
Due to the sheer volume of data involved, video coding is an important application of lossy source coding, and it has received wide industrial interest and support, as evidenced by the development and success of a series of video coding standards. All MPEG-series and H-series video coding standards proposed so far are based on a paradigm called predictive video coding, in which video source frames X_i, i = 1, 2, ..., N, are encoded frame by frame: the encoder and decoder for each frame X_i can enlist help only from the previously encoded frames S_j, j = 1, 2, ..., i-1. In this thesis, we look beyond all existing and proposed video coding standards and introduce a new coding paradigm called causal video coding, in which the encoder for each frame X_i can use all previous original frames X_j, j = 1, 2, ..., i-1, as well as all previous encoded frames S_j, while the corresponding decoder can use only the previous encoded frames. All studies, comparisons, and designs of causal video coding in this thesis are carried out from an information-theoretic point of view. Let R*_c(D_1,...,D_N) (respectively, R*_p(D_1,...,D_N)) denote the minimum total rate required to achieve a given distortion level D_1,...,D_N > 0 in causal video coding (respectively, predictive video coding).
A novel computation approach is proposed to analytically characterize, numerically compute, and compare the minimum total rate R*_c(D_1,...,D_N) of causal video coding required to achieve a given distortion (quality) level D_1,...,D_N > 0. Specifically, we first show that for jointly stationary and ergodic sources X_1, ..., X_N, R*_c(D_1,...,D_N) equals the infimum over all n of the n-th order total rate distortion function R_{c,n}(D_1,...,D_N), where R_{c,n}(D_1,...,D_N) itself is given by the minimum of an information quantity over a set of auxiliary random variables. We then present an iterative algorithm for computing R_{c,n}(D_1,...,D_N) and demonstrate its convergence to the global minimum. The global convergence of the algorithm further enables us not only to establish a single-letter characterization of R*_c(D_1,...,D_N) in a novel way when the N sources form an independent and identically distributed (IID) vector source, but also to demonstrate a somewhat surprising result (dubbed the more-and-less coding theorem): under some conditions on the source frames and distortion levels, the more frames that need to be encoded and transmitted, the less data actually has to be sent after encoding. With the help of the algorithm, it is also shown by example that R*_c(D_1,...,D_N) is in general much smaller than the total rate offered by the traditional greedy coding method, in which each frame is encoded in a locally optimal manner based on all information available to its encoder. As a by-product, an extended Markov lemma is established for correlated ergodic sources.
From an information-theoretic point of view, it is also interesting to compare causal video coding with predictive video coding, on which all existing video coding standards are based. In this thesis, fixing N = 3, we first derive a single-letter characterization of R*_p(D_1,D_2,D_3) for an IID vector source (X_1,X_2,X_3) in which X_1 and X_2 are independent, and then demonstrate the existence of such X_1, X_2, X_3 for which R*_p(D_1,D_2,D_3) > R*_c(D_1,D_2,D_3) under some conditions on the source frames and distortion levels. This result makes causal video coding an attractive framework for future video coding systems and standards.
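The iterative computation of rate-distortion quantities described above is, at its core, an alternating-minimization procedure. As a point of reference only, the sketch below implements the classical Blahut-Arimoto algorithm for the ordinary single-source rate-distortion function R(D); the source distribution, distortion matrix, and Lagrange multiplier `beta` are illustrative assumptions, and this is not the thesis's algorithm for R_{c,n}(D_1,...,D_N).

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=500, tol=1e-10):
    """Classical Blahut-Arimoto iteration for the rate-distortion function.

    p_x  : source distribution over the source alphabet (1-D array)
    d    : distortion matrix, d[i, j] = d(x_i, x_hat_j)
    beta : Lagrange multiplier trading rate against distortion
    Returns (rate_bits, distortion) for the point parameterized by beta.
    """
    m = d.shape[1]
    q = np.full(m, 1.0 / m)                 # output (reproduction) marginal
    for _ in range(n_iter):
        # conditional p(x_hat | x) minimizing I(X; X_hat) + beta * E[d]
        w = q[None, :] * np.exp(-beta * d)
        w /= w.sum(axis=1, keepdims=True)
        q_new = p_x @ w                     # updated output marginal
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    w = q[None, :] * np.exp(-beta * d)
    w /= w.sum(axis=1, keepdims=True)
    distortion = float(np.sum(p_x[:, None] * w * d))
    rate = float(np.sum(p_x[:, None] * w *
                        np.log2(np.maximum(w, 1e-300) / np.maximum(q[None, :], 1e-300))))
    return rate, distortion

# Example: binary uniform source with Hamming distortion
p_x = np.array([0.5, 0.5])
d = 1.0 - np.eye(2)
print(blahut_arimoto(p_x, d, beta=4.0))
```

Sweeping `beta` traces out the R(D) curve; the thesis's algorithm performs an analogous alternating minimization over the auxiliary random variables defining R_{c,n}.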
The design of causal video coding systems is also considered in the thesis from an information-theoretic perspective by modeling each frame as a stationary information source. We first put forth a concept called causal scalar quantization, and then propose an algorithm for designing optimal fixed-rate causal scalar quantizers for causal video coding that minimize the total distortion over all sources. Simulation results show that, in comparison with fixed-rate predictive scalar quantization, fixed-rate causal scalar quantization offers up to a 16% quality improvement (distortion reduction).
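For orientation, the sketch below implements the standard Lloyd-Max design of a fixed-rate scalar quantizer for a single source. It is only the classical single-source building block, not the causal scalar quantization algorithm proposed in the thesis, and the Gaussian training data and rate are illustrative assumptions.

```python
import numpy as np

def lloyd_max(samples, n_levels, n_iter=100, tol=1e-9):
    """Fixed-rate scalar quantizer design by the Lloyd-Max iteration.

    samples  : 1-D array of training samples drawn from the source
    n_levels : number of reproduction levels (rate = log2(n_levels) bits/sample)
    Returns (codebook, mse).
    """
    # initialize the codebook from sample quantiles
    codebook = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    prev_mse = np.inf
    for _ in range(n_iter):
        # nearest-neighbor partition
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        # centroid update (keep the old level if a cell is empty)
        for j in range(n_levels):
            cell = samples[idx == j]
            if cell.size:
                codebook[j] = cell.mean()
        mse = np.mean((samples - codebook[idx]) ** 2)
        if prev_mse - mse < tol:
            break
        prev_mse = mse
    return np.sort(codebook), mse

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)              # illustrative Gaussian source
codebook, mse = lloyd_max(x, n_levels=4)  # 2 bits/sample
print(codebook, mse)
```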
642

Analysis of Power Transistor Behavioural Modeling Techniques Suitable for Narrow-band Power Amplifier Design

Amini, Amir-Reza January 2012 (has links)
The design of power amplifiers within a circuit simulator requires a good nonlinear model that accurately predicts the electromagnetic behaviour of the power transistor. In recent years, a class of large-signal, frequency-dependent, black-box behavioural modeling techniques known as Poly-Harmonic Distortion (PHD) models has been devised to mimic the nonlinear unmatched RF transistor. These models promise a good prediction of device behaviour under multi-harmonic periodic continuous-wave inputs. This thesis describes the capabilities of the PHD modeling framework and the type of behaviour it can theoretically predict; in particular, the PHD framework cannot necessarily predict the response to a broadband aperiodic signal. The analysis is performed by deriving the PHD modeling framework as a simplification of the Volterra series kernel functions under the assumption that the power transistor operates with continuous periodic multi-harmonic voltage and current signals in a stable circuit. A PHD model is viewed as a set of describing functions that predict the response of the Device Under Test (DUT) to any nonlinear periodic continuous-wave input with a specific fundamental frequency. Two popular implementations of PHD models found in the literature are the X-parameter and Cardiff models; each formulates the describing functions of the general PHD model differently. The mathematical formulations of the X-parameter and Cardiff models are discussed in order to provide a theoretical basis for comparing their robustness. The X-parameter model is seen as the first-order Taylor series approximation of the PHD model describing functions around a Large Signal Operating Point (LSOP) of the device under test, while the Cardiff large-signal model uses Fourier series coefficient functions that vary with the magnitude of the large signal(s) as the PHD model describing functions. This thesis provides a breakdown of the measurement procedure required to extract these models, the challenges involved in the measurements, and the mathematical extraction of the model coefficients from measurement data. As each of these models has extended versions that enhance its predictive capability under more strongly nonlinear modes of operation, a comparison is used to represent the cost of increased model accuracy as a function of increased model complexity for each model. The order of complexity of each model manifests itself in the mathematical formulation, the number of parameters required, and the measurement time needed to extract the model for a given DUT. This comparison fairly assesses the relative strengths and weaknesses of each model.
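To make the first-order describing-function form concrete, the sketch below numerically evaluates a single-port, two-harmonic scattered wave using the widely published X-parameter equation (a large-signal term plus S- and T-type sensitivity terms). The coefficient values are invented for illustration and are not measurements of any real device or the models extracted in the thesis.

```python
import numpy as np

def xparam_B(A, XF, XS, XT):
    """Evaluate scattered waves B_m at harmonics m of one port using the
    first-order X-parameter (PHD) form:

        B_m = XF_m(|A1|) P^m
              + sum_n [ XS_{m,n}(|A1|) P^(m-n) A_n + XT_{m,n}(|A1|) P^(m+n) conj(A_n) ]

    where P = exp(j * angle(A1)) and the sum runs over the small incident
    harmonics n >= 2 (the large signal A1 is handled by the XF term).
    A  : dict {harmonic index: complex incident wave}, must contain key 1
    XF : dict {m: complex};  XS, XT : dicts {(m, n): complex}
    """
    A1 = A[1]
    P = np.exp(1j * np.angle(A1))
    B = {}
    for m in XF:
        b = XF[m] * P**m
        for n, An in A.items():
            if n == 1:
                continue
            b += XS.get((m, n), 0.0) * P**(m - n) * An
            b += XT.get((m, n), 0.0) * P**(m + n) * np.conj(An)
        B[m] = b
    return B

# Illustrative (made-up) coefficients, notionally extracted at |A1| = 1.0
XF = {1: 0.8 * np.exp(1j * 0.3), 2: 0.15 * np.exp(-1j * 0.9)}
XS = {(1, 2): 0.05 + 0.02j, (2, 2): 0.10 - 0.01j}
XT = {(1, 2): 0.01 - 0.03j, (2, 2): 0.02 + 0.01j}
A = {1: 1.0 * np.exp(1j * 0.5), 2: 0.05 * np.exp(-1j * 1.2)}  # incident waves
print(xparam_B(A, XF, XS, XT))
```

In a full model the coefficients also depend on |A1|, bias, and fundamental frequency, which is exactly what the measurement-based extraction discussed above supplies.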
643

Performance Improvement Of A 3d Reconstruction Algorithm Using Single Camera Images

Kilic, Varlik 01 July 2005 (has links) (PDF)
In this study, the aim is to improve a set of image processing techniques used in a previously developed method for reconstructing the 3D parameters of a secondary passive target from single-camera images. This 3D reconstruction method was developed and implemented on a setup consisting of a digital camera, a computer, and a positioning unit, and some automatic target recognition techniques were also included in the method. The passive secondary target used is a circle with two internal spots. In order to achieve real-time target detection, the existing binarization, edge detection, and ellipse detection algorithms are debugged, modified, or replaced to increase speed, eliminate run-time errors, and become compatible with target tracking. An overall speed of 20 Hz is achieved for 640x480 pixel, 8-bit grayscale images on a 2.8 GHz computer. A novel target tracking method with various tracking strategies is introduced to reduce the search area for target detection and to achieve detection and reconstruction at the maximum frame rate of the hardware. Based on the previously suggested lens distortion model, methods for distortion measurement, distortion parameter determination, and distortion correction are developed for both radial and tangential distortions. By implementing this distortion correction method, the accuracy of the 3D reconstruction is enhanced. The overall 3D reconstruction method is implemented in an integrated software and hardware environment as a combination of the methods with the best performance among their alternatives. This autonomous, real-time system is able to detect the secondary passive target and reconstruct its 3D configuration parameters at a rate of 25 Hz. Even under extreme conditions in which it is difficult or impossible to detect the target, no runtime failures are observed.
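As an illustration of the kind of radial and tangential correction referred to above, the sketch below applies the widely used Brown-Conrady distortion model to normalized image coordinates and inverts it by fixed-point iteration. The distortion coefficients are arbitrary example values, not the parameters determined in the thesis.

```python
import numpy as np

def distort(xy, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to
    normalized, undistorted image coordinates xy (shape (N, 2)),
    following the Brown-Conrady model."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x**2 + y**2
    radial = 1.0 + k1 * r2 + k2 * r2**2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.stack([xd, yd], axis=1)

def undistort(xy_d, k1, k2, p1, p2, n_iter=20):
    """Invert the distortion by fixed-point iteration (sufficient for
    the mild distortions typical of machine-vision lenses)."""
    xy = xy_d.copy()
    for _ in range(n_iter):
        delta = distort(xy, k1, k2, p1, p2) - xy
        xy = xy_d - delta
    return xy

# Example with arbitrary coefficients
pts = np.array([[0.3, -0.2], [0.05, 0.4]])
d = distort(pts, k1=-0.12, k2=0.02, p1=1e-3, p2=-5e-4)
print(undistort(d, k1=-0.12, k2=0.02, p1=1e-3, p2=-5e-4))  # ~ recovers pts
```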
644

Quantitative MRI and Micro-CT of Bone Architecture: Applications and Limitations in Orthopaedics

Hopper, Timothy Andrew John January 2005 (has links)
The aim of this thesis was to investigate methods for quantitative analysis of bone structure, particularly techniques that might ultimately be applied post-operatively following orthopaedic reconstruction operations. Initially, the efficacy of MRI in quantifying bone structure at high resolution was explored by comparing high-resolution MRI against 'gold standards' such as Scanning Electron Microscopy (SEM) and optical histology. This basic study provided a measure of the distortions in the morphological bone parameters derived from MR images due to susceptibility artefacts and partial volume effects. The study of bone architecture was then extended to a model of advanced renal osteodystrophy in the growing rat. For this study, high-resolution micro computed tomography (microCT) was used, and as a result of the high-resolution images obtained, three new bone morphological parameters were introduced to characterise the bone structure. The desire to study bone architecture post-operatively in hip replacements led to a preliminary study on ex-vivo sheep acetabulae following total hip replacement, to determine the extent to which the bone architecture around the acetabulum could be investigated. The motivation for studying the acetabulum was the high occurrence of debonding at the bone/prosthesis interface. This study demonstrated the superiority of 3D MRI over conventional x-ray radiographs in early quantitation of fibrous membranes located between the host bone and the non-metallic implant and/or the bone cement; the presence of such fibrous membranes is strongly indicative of failure of the prosthesis. When clinical MRI is used to image a post-operative hip replacement, image quality is severely affected by the presence of the metallic implant, whose head is a metal sphere located in the acetabular cup. This problem was investigated by performing simulations of MR images in the presence of the field perturbation induced by a metal sphere, with the effects of slice excitation and frequency encoding incorporated into the simulations. The simulations were compared with experimental data obtained by imaging a phantom comprising a stainless steel ball bearing immersed in agarose gel. The simulations were used to predict the effects of changing the imaging parameters that influence artefact size, and also to show how current metal artefact reduction techniques such as view angle tilting (VAT) work and to identify their limitations. It was shown that 2D spin-echo (SE) and VAT imaging techniques should not be used when metallic prostheses are present, due to extreme slice distortion, whereas 3D MRI provides a method with no slice distortion, although the effects of using a frequency-encoding gradient still remain.
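For intuition about the simulated field perturbation, the sketch below evaluates the textbook dipole-like field offset outside a uniformly magnetized sphere and converts it into an apparent in-plane displacement along the frequency-encoding direction. The susceptibility difference, sphere radius, field strength, and readout gradient are illustrative assumptions rather than the values used in the thesis simulations.

```python
import numpy as np

def delta_Bz(x, y, z, a, chi_diff, B0):
    """Field offset (T) outside a uniformly magnetized sphere of radius a (m)
    with susceptibility difference chi_diff, in a main field B0 along z:
        dBz = (chi_diff / 3) * B0 * (a / r)**3 * (3 cos^2(theta) - 1)
    Valid for r >= a; points inside the sphere are masked out."""
    r = np.sqrt(x**2 + y**2 + z**2)
    cos_t = np.divide(z, r, out=np.zeros_like(r), where=r > 0)
    dB = (chi_diff / 3.0) * B0 * (a / r) ** 3 * (3 * cos_t**2 - 1)
    return np.where(r >= a, dB, np.nan)

# Illustrative numbers: 10 mm sphere, chi difference 3e-3, 1.5 T, 10 mT/m readout
a, chi_diff, B0, G_read = 5e-3, 3e-3, 1.5, 10e-3
x = np.linspace(-0.05, 0.05, 4)          # readout direction (m)
z = np.linspace(-0.05, 0.05, 4)          # main-field / slice direction (m)
X, Z = np.meshgrid(x, z)
dB = delta_Bz(X, 0.0, Z, a, chi_diff, B0)
shift_mm = dB / G_read * 1e3             # apparent displacement along readout
print(np.round(shift_mm, 2))
```

The same off-resonance map, fed through the slice-excitation and frequency-encoding steps, is what produces the characteristic signal pile-up and voids around metallic implants.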
645

Optimal source coding with signal transfer function constraints

Derpich, Milan January 2009 (has links)
Research Doctorate - Doctor of Philosophy (PhD) / This thesis presents results on optimal coding and decoding of discrete-time stochastic signals, in the sense of minimizing a distortion metric subject to constraints on the bit-rate and on the signal transfer function from source to reconstruction. The first (preliminary) contribution of this thesis is the introduction of a new distortion metric that extends the mean squared error (MSE) criterion. We give this extension the name Weighted-Correlation MSE (WCMSE) and use it as the distortion metric throughout the thesis. The WCMSE is a weighted sum of two components of the MSE: the variance of the error component uncorrelated with the source, on the one hand, and the remainder of the MSE, on the other. The WCMSE can take account of signal transfer function constraints by assigning a larger weight to deviations from a target signal transfer function than to source-uncorrelated distortion.
Within this framework, the second contribution is the solution of a family of feedback quantizer design problems for wide-sense stationary sources using an additive noise model for quantization errors. These problems consist of finding the frequency responses of the filters deployed around a scalar quantizer that minimize the WCMSE for a fixed quantizer signal-to-(granular)-noise ratio (SNR). This general structure, which incorporates pre-, post-, and feedback filters, includes as special cases well-known source coding schemes such as pulse-code modulation (PCM), differential pulse-code modulation (DPCM), Sigma-Delta converters, and noise-shaping coders. The optimal frequency response of each filter in this architecture is found when each possible subset of the remaining filters is given and fixed. These results are then applied to oversampled feedback quantization. In particular, it is shown that, within the linear model used, and for a fixed quantizer SNR, the MSE decays exponentially with the oversampling ratio, provided optimal filters are used at each oversampling ratio. If a subtractively dithered quantizer is used, then the noise model is exact, and the SNR constraint can be directly related to the bit-rate if entropy coding is used, regardless of the number of quantization levels. On the other hand, in the case of fixed-rate quantization, the SNR is related to the number of quantization levels, and hence to the bit-rate, when overload errors are negligible. It is shown that, for sources with unbounded support, the latter condition is violated for sufficiently large oversampling ratios. By deriving an upper bound on the contribution of overload errors to the total WCMSE, a lower bound on the decay rate of the WCMSE as a function of the oversampling ratio is found for fixed-rate quantization of sources with finite or infinite support.
The third main contribution of the thesis is the introduction of the rate-distortion function (RDF) when the WCMSE is the distortion metric, denoted the WCMSE-RDF, for which a complete characterization is provided for Gaussian sources. The WCMSE-RDF yields, as special cases, Shannon's RDF as well as the recently introduced RDF for source-uncorrelated distortions (RDF-SUD). For cases where only source-uncorrelated distortion is allowed, the RDF-SUD is extended to include the possibility of linear time-invariant feedback between the reconstructed signal and the coder input. It is also shown that feedback quantization schemes can achieve a bit-rate only 0.254 bits/sample above this RDF by using the same filters that minimize the reconstruction MSE under a quantizer-SNR constraint.
The fourth main contribution of this thesis is a set of conditions under which knowledge of a realization of the RDF can be used directly to solve encoder-decoder design optimization problems. This result has direct implications for the design of subband coders with feedback, as well as for the design of encoder-decoder pairs for applications such as networked control. As the fifth main contribution, the RDF-SUD is used to show that, for stationary Gaussian sources with memory and the MSE distortion criterion, an upper bound on the information-theoretic causal RDF can be obtained by means of an iterative numerical procedure, at all rates. This bound is tighter than 0.5 bits/sample. Moreover, if there exists a realization of the causal RDF in which the reconstruction error is jointly stationary with the source, then the bound obtained coincides with the causal RDF. The iterative procedure proposed here to compute this bound also yields a characterization of the filters in a scalar feedback quantizer whose operational rate exceeds the bound by less than 0.254 bits/sample. This constitutes an upper bound on the optimal performance theoretically attainable by any causal source coder for stationary Gaussian sources under the MSE distortion criterion.
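As a small illustration of the kind of feedback quantizer architecture discussed above, the sketch below simulates a first-order error-feedback (noise-shaping) loop around a subtractively dithered uniform scalar quantizer and estimates the in-band reconstruction MSE for an oversampled low-pass source. The filter choice, oversampling ratio, and quantizer step are illustrative assumptions, not the optimal WCMSE filters derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_dithered_quantizer(v, step, rng):
    """Subtractively dithered uniform quantizer with step size 'step'."""
    dither = rng.uniform(-step / 2, step / 2, size=np.shape(v))
    return np.round((v + dither) / step) * step - dither

def error_feedback_quantizer(x, step, rng):
    """First-order error feedback: the previous quantization error is
    subtracted from the next input sample, shaping the noise by (1 - z^-1)."""
    y = np.empty_like(x)
    e_prev = 0.0
    for k, xk in enumerate(x):
        v = xk - e_prev
        y[k] = uniform_dithered_quantizer(v, step, rng)
        e_prev = y[k] - v              # quantization error fed back
    return y

# Oversampled low-pass test source (illustrative): bandlimited noise, OSR = 16
osr, n = 16, 1 << 16
w = rng.normal(size=n)
W = np.fft.rfft(w)
f = np.fft.rfftfreq(n)
W[f > 0.5 / osr] = 0.0                 # keep only the in-band part
x = np.fft.irfft(W, n)

y = error_feedback_quantizer(x, step=0.25, rng=rng)
E = np.fft.rfft(y - x)
inband = f <= 0.5 / osr
mse_inband = np.sum(np.abs(E[inband]) ** 2) / n**2 * 2   # approx. in-band error power
print(f"in-band MSE ~ {mse_inband:.2e}")
```

Because the shaping pushes the (white, dithered) quantization noise out of the signal band, the in-band MSE falls as the oversampling ratio grows, which is the behaviour the thesis analyses and optimizes under the WCMSE criterion.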
646

Experimental study of water droplet flows in a model PEM fuel cell gas microchannel

Minor, Grant 17 January 2008 (has links)
Liquid water formation and flooding in PEM fuel cell gas distribution channels can significantly degrade fuel cell performance by causing substantial pressure drops in the channels and by inhibiting the transport of reactants to the reaction sites at the catalyst layer. A better understanding of the mechanisms of discrete water droplet transport by air flow in such small channels may be developed through the application of quantitative flow visualization techniques, and this improved knowledge could contribute to improved gas channel design and higher fuel cell efficiencies. An experimental investigation was undertaken to gain a better understanding of the relationships between air velocity in the channel, secondary rotational flows inside a droplet, droplet deformation, and the threshold shear, drag, and pressure forces required for droplet removal. Micro digital particle image velocimetry (micro-DPIV) techniques were used to provide quantitative visualizations of the flow inside the liquid phase for the case of air flow around a droplet adhered to the wall of a 1 mm x 3 mm rectangular gas channel model. The sidewall to which the droplet adhered was composed of PTFE-treated carbon paper to simulate the porous GDL surface of a fuel cell gas channel. Visualizations of droplet shape and internal flow patterns, and velocity measurements at the central cross-sectional plane of symmetry of the droplet, were obtained for different air flow rates. A variety of rotational secondary flow patterns within the droplet were observed; the nature of these flows depended primarily on the air flow rate. The peak velocities of these secondary flow fields were observed to be around two orders of magnitude below the calculated channel-averaged driving air velocities, and the resulting flow fields show in particular that the velocity at the air-droplet interface is finite. The experimental data collected in this study may be used to validate numerical simulations of such droplet flows. Further study of such flow scenarios using the techniques developed in this experiment, including the general optical distortion correction algorithm developed as part of this work, may provide insight into an improved force balance model for a droplet exposed to an air flow in a gas channel.
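For readers unfamiliar with how PIV extracts velocities, the sketch below estimates the displacement of a particle-image interrogation window between two frames from the peak of their FFT-based cross-correlation. The synthetic images and window size are illustrative only; this is not the micro-DPIV processing chain used in the experiment.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement of window B relative to
    window A from the peak of their FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map circular FFT indices to signed shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return shifts[1], shifts[0]          # (dx, dy)

# Synthetic particle images: random particles shifted by (dx, dy) = (3, -2) pixels
rng = np.random.default_rng(2)
img = np.zeros((64, 64))
ys, xs = rng.integers(5, 59, size=(2, 40))
img[ys, xs] = 1.0
frame_a = img
frame_b = np.roll(img, shift=(-2, 3), axis=(0, 1))   # dy = -2, dx = +3
print(piv_displacement(frame_a, frame_b))            # expect (3, -2)
```

Dividing each displacement by the interframe time and the magnification converts the pixel shift into a velocity vector, which is how the droplet-internal velocity fields above are obtained.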
647

A novel induction heating system using multilevel neutral point clamped inverter

Al Shammeri, Bashar Mohammed Flayyih January 2017 (has links)
This thesis investigates a novel DC/AC resonant inverter for an induction heating (IH) system, presenting a multilevel neutral point clamped inverter (MNPCI) topology as a new part of the power supply design. The main function of the prototype is to provide maximum, steady-state power transfer from the converter to the resonant load tank by achieving zero current switching (ZCS), selecting the best load tank topology, and exploiting the advantages of both the voltage-fed inverter (VFI) and the current-fed inverter (CFI); it can therefore be considered a hybrid (HVCFI) category. The new design benefits from the series resonant inverter design by using two bulk voltage-source capacitors to feed a constant voltage to the MNPCI inverter at half the DC rail voltage, which decreases the switching losses and mitigates the over-voltage surges that occur in the inverter switches during operation and may cause damage in high-power systems. The design also benefits from the resonant load topology of the parallel resonant inverter through the use of an LLC resonant load tank. This gives the advantage of an output current gain of about the quality factor (Q) times the inverter current and absorbs the parasitic components. Conversely, decreasing the inverter current means decreasing the switching frequency and thus the switching losses of the system. This aspect increases the output power, which increases the heating efficiency. In order for the proposed system to be more reliable and to match the characteristics of the IH process, the prototype is modelled with a variable LLC topology instead of fixed load parameters, achieving soft switching in both ZCS and zero voltage switching (ZVS) modes at all load conditions with a negligibly small phase shift between output current and voltage. To reduce harmonic distortion, a new harmonic control modulation is introduced that controls the ON switching time to obtain the minimum Total Harmonic Distortion (THD) content together with optimum power for heating.
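To make the THD-versus-ON-time idea concrete, the sketch below builds a quasi-square inverter voltage with an adjustable conduction (ON) interval per half-cycle, computes its THD from the FFT, and sweeps the ON time to locate the minimum. The waveform shape, DC-link voltage, and sweep range are illustrative assumptions, not the harmonic control modulation implemented in the thesis.

```python
import numpy as np

def thd(v, n_harmonics=50):
    """Total harmonic distortion of one period of a waveform, from its FFT."""
    spec = np.abs(np.fft.rfft(v)) / len(v) * 2       # single-sided amplitudes
    fund = spec[1]
    harm = spec[2:2 + n_harmonics]
    return np.sqrt(np.sum(harm**2)) / fund

def quasi_square(duty, n=4096, vdc=1.0):
    """One period of a quasi-square inverter voltage: +Vdc over a centred ON
    interval covering a fraction 'duty' of the positive half-cycle, -Vdc over
    the mirrored interval of the negative half-cycle, and zero elsewhere."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)     # time in periods
    v = np.zeros(n)
    on = duty / 2.0                                  # ON width in periods
    v[np.abs(t - 0.25) < on / 2] = vdc
    v[np.abs(t - 0.75) < on / 2] = -vdc
    return v

# Sweep the ON time (fraction of the half-cycle) and report the minimum-THD point
duties = np.linspace(0.2, 1.0, 81)
thds = [thd(quasi_square(d)) for d in duties]
best = duties[int(np.argmin(thds))]
print(f"minimum THD ~ {min(thds):.3f} at ON-time fraction ~ {best:.2f}")
```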
648

Probability distortion in clinical judgment: field study and laboratory experiments

Hainguerlot, Marine 21 December 2017 (has links)
This thesis studies probability distortion in clinical judgment in order to compare physicians' judgment with statistical models. We consider that physicians form their clinical judgment by integrating an analytical component and an intuitive component, and we document that physicians may suffer from several biases in the way they evaluate and integrate the two components. The dissertation gathers findings from the field and the lab. Using actual medical practice data, we found that physicians were not as good as statistical models at consistently integrating medical evidence: they overestimated small probabilities that the patient had the disease and underestimated large probabilities, and their biased probability judgments might cause unnecessary health care treatment. How, then, can physician judgment be improved? First, we considered replacing physician judgment with the probability generated by our statistical model. To actually improve decisions, it was necessary to develop a statistical score that combines the analytical model, the physician's intuitive component, and the physician's observed deviation from the expected decision. Second, we tested in the lab factors that may affect information processing. We found that participants' ability to learn the value of the analytical component, without external feedback, depends on the quality of their intuitive component and on their working memory. We also found that participants' ability to integrate the two components depends on their working memory but not on their evaluation of the intuitive component.
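As a worked illustration of the over/under-estimation pattern described above, the sketch below fits the commonly used linear-in-log-odds (LLO) probability-distortion model, w(p) = delta * p^gamma / (delta * p^gamma + (1 - p)^gamma), to hypothetical pairs of model-based and judged probabilities. The data points are invented for illustration and are not the clinical data analysed in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def llo(p, gamma, delta):
    """Linear-in-log-odds probability distortion:
    logit(w) = gamma * logit(p) + log(delta), i.e.
    w(p) = delta * p**gamma / (delta * p**gamma + (1 - p)**gamma).
    gamma < 1 -> small probabilities over-estimated, large ones under-estimated."""
    num = delta * p**gamma
    return num / (num + (1.0 - p) ** gamma)

# Hypothetical (invented) data: model-based probabilities vs. judged probabilities
p_model = np.array([0.02, 0.05, 0.10, 0.20, 0.40, 0.60, 0.80, 0.90, 0.95])
p_judged = np.array([0.10, 0.15, 0.22, 0.30, 0.45, 0.58, 0.70, 0.78, 0.84])

(gamma, delta), _ = curve_fit(llo, p_model, p_judged, p0=[1.0, 1.0],
                              bounds=([0.01, 0.01], [5.0, 5.0]))
print(f"gamma ~ {gamma:.2f}, delta ~ {delta:.2f}")   # gamma < 1 indicates distortion
```

A fitted gamma well below one quantifies exactly the compression of the probability scale (overweighting small probabilities, underweighting large ones) reported in the abstract.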
649

Multilevel energy conversion systems obtained through the interconnection of two-level static power converter modules

MAIA, Ayslan Caisson Norões. 22 February 2016 (has links)
Static converters are widely used in power systems to control the flow of electrical energy between sources and loads. In this context, there is a demand for converter topologies that generate high-quality waveforms and are capable of supplying loads of ever larger power. In high-power applications, such as industrial and power systems, the development of a special class of converter topologies, called multilevel converters, has been widely recognized as a viable solution for overcoming the operational limits of semiconductor devices. In this work, multilevel structures are developed and analyzed: DC-AC structures applied to six-phase machine drives, and AC-DC-AC structures feeding single-phase and three-phase loads. These topologies are obtained by interconnecting two-level converter modules in order to optimize the system: reducing the losses in the semiconductor devices, the harmonic distortion of the signals, and the voltage and/or current ratings of the power switches.
For this investigation, steady-state analyses were performed, evaluating the operating limits of the structures under the imposed control conditions and the behavior of the fundamental components of voltage and current. In addition, for each investigated topology, dynamic models, PWM techniques, control strategies, simulation results, and experimental results were developed. The impact of the optimization is quantified by calculating the THD and WTHD of the current and voltage signals generated by the converter and by estimating the losses in the semiconductor devices. Finally, a comparative study using conventional converters as a reference is carried out in order to evaluate the performance of the proposed topologies.
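Since both figures of merit appear above, the sketch below computes THD and WTHD directly from a waveform's harmonic spectrum and compares an ideal two-level square wave against a simple three-level stepped waveform. The waveforms are textbook idealizations chosen for illustration, not the PWM patterns studied in the work.

```python
import numpy as np

def spectrum(v):
    """Single-sided harmonic amplitudes of one period of waveform v."""
    return np.abs(np.fft.rfft(v)) / len(v) * 2

def thd_wthd(v, n_harmonics=100):
    """THD and WTHD (weighted THD, each harmonic divided by its order)
    of one period of waveform v, relative to the fundamental."""
    spec = spectrum(v)
    v1 = spec[1]
    n = np.arange(2, 2 + n_harmonics)
    vn = spec[2:2 + n_harmonics]
    thd = np.sqrt(np.sum(vn**2)) / v1
    wthd = np.sqrt(np.sum((vn / n) ** 2)) / v1
    return thd, wthd

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
two_level = np.where(t < 0.5, 1.0, -1.0)              # ideal square wave
three_level = np.zeros_like(t)                        # three-level stepped wave
three_level[(t > 1/12) & (t < 5/12)] = 1.0            # +1 for the middle 120 deg of the positive half
three_level[(t > 7/12) & (t < 11/12)] = -1.0          # -1 for the middle 120 deg of the negative half
for name, v in [("two-level", two_level), ("three-level", three_level)]:
    thd, wthd = thd_wthd(v)
    print(f"{name}: THD = {thd:.3f}, WTHD = {wthd:.4f}")
```

The lower THD and WTHD of the stepped waveform illustrates, in the simplest possible setting, why adding voltage levels improves waveform quality before any PWM is applied.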
650

Study of signal distortion in analog optical systems due to the combined effects of polarization mode dispersion and polarization dependent loss

LUIS CARLOS BLANCO LINARES 03 November 2005 (has links)
This work presents a study of the mathematical formalism for the theory of the polarization of light, polarization mode dispersion (PMD), polarization dependent loss (PDL), and their combined effects. Experimental measurements characterizing the parameters of the devices that make up the experimental set-up are presented, and the measurement techniques used are described. A new theoretical model for the combined effects of polarization mode dispersion and polarization dependent loss in analog optical systems is then presented. Theoretical and experimental curves of harmonic distortion as a function of the various parameters involved in the model confirm the interferometric nature of the phenomena under study. Experimental measurements show close agreement with the model developed here, exhibit differences of 5 dB with respect to the model presented in [6], and demonstrate that the mathematical model of [6] does not correctly describe the phenomena involved.
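As a small illustration of the Jones-matrix formalism underlying such combined PMD/PDL models, the sketch below cascades a first-order PMD element (differential group delay tau) with a PDL element whose axes are rotated with respect to the PMD principal states, and evaluates the frequency-dependent power transmission for a given input polarization. The DGD, PDL value, angles, and input state are arbitrary example values, not those of the thesis's model or experiment.

```python
import numpy as np

def rot(theta):
    """Rotation of the Jones basis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def pmd(omega, tau):
    """First-order PMD element: differential group delay tau between
    its two principal states (diagonal in its own basis)."""
    return np.diag([np.exp(1j * omega * tau / 2), np.exp(-1j * omega * tau / 2)])

def pdl(gamma_db):
    """PDL element: unity transmission on one axis, attenuated on the other."""
    g = 10 ** (-gamma_db / 20)           # field attenuation of the lossy axis
    return np.diag([1.0, g])

tau = 25e-12                              # 25 ps DGD (illustrative)
theta = np.deg2rad(30)                    # angle between PMD and PDL axes (illustrative)
e_in = np.array([np.cos(np.deg2rad(20)), np.sin(np.deg2rad(20))])  # input SOP, unit power

freqs = np.linspace(-20e9, 20e9, 5)       # optical frequency offsets (Hz)
for f in freqs:
    omega = 2 * np.pi * f
    J = rot(theta) @ pdl(3.0) @ rot(-theta) @ pmd(omega, tau)  # PMD first, then rotated PDL
    e_out = J @ e_in
    T = np.vdot(e_out, e_out).real        # output power for unit input power
    print(f"{f/1e9:+6.1f} GHz : transmission = {T:.3f}")
```

Because the PMD element rotates the state of polarization with optical frequency relative to the PDL axes, the transmission (and hence the detected signal) acquires a frequency dependence; it is this interplay, acting on the modulated field, that gives rise to the interferometric harmonic distortion studied in the thesis.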
