61

Contribution de la Théorie des Valeurs Extrêmes à la gestion et à la santé des systèmes / Contribution of Extreme Value Theory to the management and health of systems

Diamoutene, Abdoulaye 26 November 2018 (has links) (PDF)
In general, the operation of a system can be affected by an unforeseen incident. When such an incident has serious consequences both for the integrity of the system and for the quality of its products, it falls within the scope of so-called extreme events. Researchers are therefore paying increasing attention to the modeling of extreme events for studies such as system reliability and the prediction of the various risks that can hinder the proper functioning of a system. This thesis is set in that context. We use Extreme Value Theory (EVT) and extreme order statistics as decision-support tools for modeling and managing risk in machining and aviation. More precisely, we model the roughness surface of machined parts and the reliability of the associated cutting tool with extreme order statistics. We also build a model using the "Peaks-Over-Threshold" (POT) approach to predict potential casualties in American General Aviation (AGA) following extreme accidents. Furthermore, systems subject to environmental factors or covariates are most often modeled with proportional hazards models based on the hazard function. In proportional hazards models, the baseline hazard function is generally of Weibull type, which is monotone; yet analysis of systems such as the industrial cutting tool has shown that a system may perform poorly in one phase and improve in the next. Modifications have therefore been made to the Weibull distribution to obtain non-monotone baseline hazard functions, in particular increasing-then-decreasing hazard functions.
Despite these modifications, accounting for extreme operating conditions and the overestimation of risk remain problematic. Starting from the standard Gumbel distribution, we therefore propose an increasing-then-decreasing baseline hazard function that accounts for extreme operating conditions, and we establish the corresponding mathematical proofs. An application example from industry is also given. The thesis is divided into four chapters, plus a general introduction and conclusion. Chapter 1 reviews basic notions of extreme value theory. Chapter 2 covers the fundamental concepts of survival analysis, particularly those relating to reliability analysis, and proposes an increasing-then-decreasing hazard function for the proportional hazards model. Chapter 3 deals with the use of extreme order statistics in machining, notably the detection of defective parts in batches, the reliability of the cutting tool, and the modeling of the best roughness surfaces. The final chapter addresses the prediction of potential casualties in American General Aviation from historical data using the "Peaks-Over-Threshold" approach.
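The Peaks-Over-Threshold approach described above can be sketched numerically: exceedances over a high threshold are fitted with a Generalized Pareto Distribution, from which return levels are derived. This is a minimal illustration on synthetic data, not the thesis's actual AGA dataset or methodology; the threshold choice and the `return_level` helper are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic severity data standing in for historical accident records
# (illustrative only -- the thesis uses real AGA data).
sample = rng.exponential(scale=2.0, size=5000)

threshold = np.quantile(sample, 0.95)          # choose a high threshold
excesses = sample[sample > threshold] - threshold

# Fit the Generalized Pareto Distribution to the excesses (POT approach),
# with the location fixed at 0 as the theory prescribes.
shape, _, scale = stats.genpareto.fit(excesses, floc=0)

def return_level(m, u, xi, sigma, p_exceed):
    """Value exceeded on average once every m observations (standard POT formula)."""
    if abs(xi) < 1e-9:                          # exponential-tail limit
        return u + sigma * np.log(m * p_exceed)
    return u + (sigma / xi) * ((m * p_exceed) ** xi - 1.0)

p_u = excesses.size / sample.size               # empirical exceedance probability
level = return_level(10_000, threshold, shape, scale, p_u)
print(f"threshold={threshold:.2f}, xi={shape:.3f}, sigma={scale:.3f}, "
      f"10000-obs return level={level:.2f}")
```

For exponential data the true shape parameter is zero, so the fitted `xi` should be close to 0 and the return level well above the threshold.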
62

"Fatores que influenciam a resolução em energia na espectrometria de partículas alfa com diodos de Si" / "Factors affecting the energy resolution in alpha particle spectrometry with silicon diodes"

Camargo, Fabio de 10 May 2005 (has links)
This work presents studies of the response of a silicon diode with a multiple-guard-ring structure for the detection and spectrometry of alpha particles. The ion-implanted diode (Al/p+/n/n+/Al) was processed on a 300 µm thick n-type Si substrate with a resistivity of 3 kΩ·cm and an active area of 4 mm².
To use the diode as a detector, the bias voltage was applied to the n+ side, the first guard ring was grounded, and the electrical signals were read out from the p+ side. These signals were sent directly to a preamplifier developed in our laboratory, based on the Amptek A250 hybrid circuit, followed by conventional nuclear electronics. The results obtained with this system for the direct detection of alpha particles from Am-241 showed excellent response stability with high detection efficiency (≈ 100 %). The diode's performance in alpha-particle spectrometry was studied with emphasis on the influence of the bias voltage, the electronic noise, the temperature, and the source-detector distance on the energy resolution. The results showed that the largest contribution to the degradation of this parameter comes from the thickness of the diode's dead layer (1 µm). However, even at room temperature, the measured energy resolution (FWHM = 18.8 keV) for the 5485.6 keV alpha particles of Am-241 is comparable to that obtained with the conventional surface-barrier detectors frequently used for alpha-particle spectrometry.
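The abstract above lists several independent contributions to the energy resolution (electronic noise, dead layer, geometry). A common way to budget such contributions, assumed here rather than stated in the abstract, is to add the FWHM of each independent broadening mechanism in quadrature; the individual values below are illustrative placeholders, not measurements from the thesis.

```python
import math

# Hypothetical FWHM contributions in keV -- illustrative placeholders only;
# the thesis reports just the combined measured FWHM of 18.8 keV.
contributions = {
    "electronic noise": 8.0,
    "dead-layer straggling": 14.0,
    "source-detector geometry": 5.0,
}

# Independent broadening mechanisms add in quadrature.
total_fwhm = math.sqrt(sum(v ** 2 for v in contributions.values()))
print(f"combined FWHM = {total_fwhm:.1f} keV")   # ~16.9 keV with these inputs

for name, v in contributions.items():
    share = v ** 2 / total_fwhm ** 2
    print(f"{name}: {share:.0%} of the variance")
```

This kind of budget makes the abstract's conclusion concrete: the largest single term dominates the quadrature sum, so reducing the smaller terms barely moves the total.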
64

Re-Discovering Kolchak: Elevating the Influence of the First Television Supernatural Drama

Herrmann, Andrew F. 03 April 2014 (has links)
Each panelist has chosen an artifact (or type, genre, etc.) from the recent past and interrogated its role as an influence on contemporary popular culture, working to show the linkage between then and now. This type of work is underappreciated, and we aim to show how informing ourselves about popular culture's past can make us better critics in the present. Our hope is to inspire others to take up that cause as well. In that spirit, we encourage people to come prepared to discuss ideas and share their own work in a workshop-type environment.
65

Energieffektivisering av luftningssteget på Käppalaverket, Lidingö / Energy optimization of the aeration at Käppala wastewater treatment plant in Stockholm

Thunberg, Andreas January 2007 (has links)
This master's thesis on energy optimization was carried out during the autumn of 2006 at the Käppala wastewater treatment plant in Lidingö, Stockholm. A preceding thesis, in which all electricity consumption was mapped, showed that aeration in the biological treatment is the single largest consumer in the plant, so it is of interest to reduce this cost. The oxygen control strategy used at Käppala WWTP works well from a nutrient-removal point of view, but not from an economic one. The last aerobic zones have very low oxygen consumption during low-loading periods, which gives rise to elevated dissolved oxygen concentrations, with excessive costs and reduced denitrification as a result. Unnecessarily high oxygen concentrations are also sometimes maintained during periods of normal loading. By modifying the aeration control strategy, three full-scale experiments were performed with the aim of reducing air consumption. The experiments were carried out during weeks 37-50 of autumn 2006 and showed that savings could be made. The regular oxygen control at Käppala WWTP regulates the oxygen level in the aerobic compartment with two DO setpoints: one in the first aerobic zone and one in the last. The zones in between are controlled by an airflow fractionation that depends on the oxygen levels in the first and last zones. In the first strategy evaluated, all four zones in the aerated section were controlled individually, each with its own setpoint, and two different setpoint combinations were tested. By exploiting the fact that the oxygen transfer rate is more efficient at low airflows, savings of approximately 16 % were achieved. In the second strategy, an ammonia feedback combined with a DO feedback set the DO setpoint in the first aerobic zone. This strategy adjusted the DO setpoints to the loading variations, decreasing the airflow by approximately 9 %. Finally, the two strategies were combined:
all zones were controlled individually with DO setpoints set by an ammonium feedback and a DO feedback. This combined strategy reduced the airflow by approximately 18 %. In all three trials the aerated zones were used more efficiently; the estimated savings, calculated from 2005 electricity prices, amount to 550 000 SEK/year, with preserved nutrient-removal efficiency.
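The combined strategy above (an ammonium feedback trimming the DO setpoint, with a per-zone loop tracking it via airflow) can be sketched as a toy cascade controller. All process dynamics, gains, and limits below are invented for illustration; the real plant control is far more elaborate.

```python
# Toy sketch of cascade aeration control: ammonium feedback adjusts the DO
# setpoint; a per-zone PI loop tracks the setpoint by commanding airflow.
# All numbers are assumptions, not plant parameters.

NH4_TARGET = 1.0            # mg/l, assumed effluent ammonium goal
DO_MIN, DO_MAX = 0.5, 3.0   # allowed DO setpoint range (assumed)

def ammonium_feedback(nh4_out, do_setpoint, gain=0.2):
    """Raise the DO setpoint when ammonium is above target, lower it otherwise."""
    do_setpoint += gain * (nh4_out - NH4_TARGET)
    return min(max(do_setpoint, DO_MIN), DO_MAX)

def do_pi_controller(do_meas, do_sp, state, kp=50.0, ki=5.0, dt=1.0):
    """Per-zone PI loop: returns an airflow command (arbitrary units)."""
    error = do_sp - do_meas
    state["integral"] += error * dt
    return max(0.0, kp * error + ki * state["integral"])

# One simulated step: a high ammonium load pushes the setpoint up,
# and the PI loop commands more air to close the DO error.
sp = ammonium_feedback(nh4_out=2.5, do_setpoint=1.5)
state = {"integral": 0.0}
airflow = do_pi_controller(do_meas=1.2, do_sp=sp, state=state)
print(f"DO setpoint: {sp:.2f} mg/l, airflow command: {airflow:.0f}")
```

The design point of such a cascade is exactly what the thesis reports: setpoints follow the load, so zones are not aerated beyond what nitrification requires.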
66

Antiresonance and Noise Suppression Techniques for Digital Power Distribution Networks

Davis, Anto K January 2015 (has links) (PDF)
Power distribution network (PDN) design was a non-existent entity in the early days of microprocessors because of the low frequencies of operation. Once microprocessor switching frequencies moved towards and beyond the MHz region, the parasitic inductance of PCB tracks and planes started playing an important role in determining the maximum voltage on a PDN. The voltage regulator module (VRM) supplies only the DC power for a microprocessor. When the MOSFETs inside a processor switch, they consume current during the transition time. If this current is not provided, the voltage on the supply rails can drop below the processor's specifications. For lower-MHz processors, a few ceramic capacitors, known as 'decoupling capacitors', were connected between power and ground to supply this transient current demand. As processor frequencies increased beyond MHz, the number of capacitors grew from a handful to hundreds. Nowadays the PDN is considered to comprise all components from the VRM to the die: the VRM, bulk capacitors, PCB power planes, capacitor mounting pads and vias, the electronic package mount, package capacitors, the die mount, and the internal die capacitance. The PDN has thus evolved into a very complex system over the years. A PDN serves three distinct roles: 1) provide the transient current required by the processor; 2) act as a stable reference voltage for the processor; 3) filter out the noise currents injected by the processor. The first two are required for the correct operation of the processor. The third is a requirement of analog or other sensitive circuits connected to the same PDN. If the noise exits the printed circuit board (PCB), it can result in conducted and radiated EMI, which can in turn cause a product to fail EMC testing.
Every PDN design starts with the calculation of a target impedance, given as the ratio of the maximum allowed ripple voltage to the maximum transient current required by the processor. The transient current is usually taken as half the average input current. The definition of target impedance assumes that the PDN is flat over the entire frequency range of operation, which is true only for a resistive network. This is seldom true for a practical PDN, since it contains inductances and capacitances; a practical PDN therefore has an uneven impedance-versus-frequency envelope. Whenever two capacitors with different self-resonant frequencies are connected in parallel, their equivalent impedance exhibits a pole between the two self-resonant frequencies, known as an antiresonance peak. Because of this, a PDN has phase angles associated with its impedance. These antiresonance peaks are also energy reservoirs that are excited by the varying currents during the normal operation of a processor. The transient current of a microprocessor is modeled as a gamma function, but for practical cases it can be approximated by triangular waveforms during the transition time, which is normally 10 % of the time period. The peak value of this waveform varies with the micro-operations running inside the processor. It is filtered by the on-chip capacitors, the package inductance, and the package capacitors. Due to power gating, clock gating, IO operations, matrix multiplications, and magnetic memory readings, the waveforms at the board are pulse-like, with widths determined by these operations. In the literature, these two types of waveforms are used for PDN analysis, depending on the point at which the study is conducted. Chapter 1 introduces the need for PDN design and the main roles of a PDN. The issue of antiresonance is introduced from a PDN perspective. The different types of capacitors used on a PDN are discussed, along with their strengths and limitations.
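The target-impedance calculation that opens the paragraph above is simple arithmetic and can be made concrete. The supply values below are illustrative assumptions, not figures from the thesis.

```python
# Conventional target-impedance calculation: Z_target = allowed ripple voltage
# divided by the maximum transient current (taken as half the average current).
# All values are illustrative assumptions.
v_supply = 1.0          # V, rail voltage (assumed)
ripple_pct = 0.05       # 5 % allowed ripple (assumed)
i_avg = 10.0            # A, average input current (assumed)

i_transient = i_avg / 2.0                          # half the average current
z_target = (v_supply * ripple_pct) / i_transient
print(f"Z_target = {z_target * 1000:.1f} mOhm")    # 10.0 mOhm here
```

The design goal is then to keep the PDN impedance envelope below this value over the whole frequency range of interest, which is exactly what antiresonance peaks break.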
The general nature of the switching noise injected by a microprocessor is also discussed, as are the thesis contributions and the existing work in the field. Chapter 2 introduces a new method to calculate the target impedance (Zt) that includes the phase angles of a PDN and is based on a maximum-voltage calculation. This new Zt equals the conventional Zt for symmetrical triangular switching-current waveforms, and is less than the conventional Zt for trapezoidal excitation patterns. By adding resonance effects, a maximum voltage value is obtained in this chapter. The new method includes the maximum voltage produced on a PDN when multiple antiresonance peaks are present. Example simulations are provided for triangular and pulse-type excitations. A measured input current waveform for a PIC16F677 microcontroller driving eight IO ports is provided to support the assumption of pulse-type waveforms. For the triangular excitation waveform, the maximum voltage predicted by the expression was −0.6153 V, while the simulated maximum voltage was −0.5412 V, which is less than the predicted value; the value predicted by the Zt method was 1.9845 V. This shows that both the conventional and the new target impedance methods can overestimate the maximum voltage in certain cases, because most of the harmonics fall on the minimum impedance values of the PDN. If the PDN envelope is changed by temperature and component tolerances, the maximum voltage can vary, so the best option is to design with the target impedance method. When pulse-current excitation was studied for a particular PDN, the maximum voltage produced was −139.39 mV, the target impedance method gave −100.24 mV, and the maximum voltage predicted by the equation was −237 mV. This shows that the conventional target impedance method can sometimes underestimate the PDN voltage.
From these studies, it is shown that time-domain analysis is as important as frequency-domain analysis. Another important observation is that the antiresonance peaks on a PDN should be damped both in number and in peak value. Chapter 3 studies antiresonance-peak suppression methods for general cases. As discussed earlier, antiresonance peaks are produced when two capacitors with different self-resonant frequencies are connected in parallel. This chapter studies the effect of magnetic coupling between the mounting loops of two capacitors in parallel. The mounting-loop area is the major contributor to the parasitic inductance of a capacitor. Other contributing factors are the equivalent series inductance (ESL) and the plane spreading inductance. The ESL depends on the size of the capacitor and on how its internal plates are formed. The spreading inductance is contributed by the parts of the planes connecting the capacitor vias to the die connections or to other capacitor vias; if the power and ground planes are closer together, the spreading inductance is lower. The inductance contributed by the mounted area of the capacitor is known as the mounting inductance. On one- and two-layer boards, dedicated power/ground planes are absent, so the spreading inductance is replaced by PCB track inductances. The dependencies of various circuit parameters on the antiresonance peak are studied using circuit theory, and a general condition for damping the antiresonance is formulated. The antiresonance peak reduces with the Q factor. The conventional critical condition for antiresonance-peak damping needs modification when magnetic coupling is present between the mounting loops of two parallel capacitors of unequal value. By varying the connection geometry, it is possible to obtain negative and positive coupling coefficients.
The connection geometries that produce these two cases are shown. An example with simulation and experimental results is given for the positive and negative coupling-coefficient cases. For the example discussed, RC = 32 Ω for k = +0.6 and RC = 64 Ω for k = −0.6, where RC is the critical damping resistance and k is the magnetic coupling coefficient between the two mounting loops. The reason is that the antiresonance peak impedance is higher for the negative coupling-coefficient case than for the positive one. Above the self-resonant frequencies of both capacitors, the equivalent impedance of the parallel combination becomes inductive. This case is studied with two equal-value capacitors in parallel, and it is shown that the equivalent inductance is lower for the negative coupling-coefficient case than for the positive one. An example is provided with simulation and experimental results; in the experiments, the parasitic inductance is observed to be 2.6 times lower for the negative coupling-coefficient case. When equal-value capacitors are connected in parallel, it is therefore advantageous to use a negative-coupling geometry. Chapter 4 introduces a new method to damp the antiresonance peak using a magnetically coupled resistive loop. Reducing the Q factor is one way to suppress the peak; in this method, the Q-factor reduction is achieved by introducing losses through a magnetically coupled resistive loop. The proposed circuit is analyzed with circuit theory, and the governing equations are obtained. The optimum resistance for achieving maximum damping is derived through analysis. Simulation and experimental results are shown to validate the theory; in the experiments, an approximately 247-fold reduction in the antiresonance peak is observed with the proposed method.
The effectiveness of the new method is limited by the magnetic coupling coefficient between the two capacitor mounting loops; it can be further improved if the coupling coefficient can be increased at the antiresonance frequency. Chapter 5 focuses on the third objective of a PDN: reducing the noise injected by the microprocessor. A new method is proposed to reduce the conducted noise from a microprocessor using switched supercapacitors. Conventional switched-capacitor filters are based on the idea that a flying capacitor switching at high frequency looks like a resistor at low frequency, so for audio-frequency use the flying capacitors switch at MHz frequencies. This chapter studies the opposite scenario: the flying capacitors are the energy-storage elements of a switched-capacitor converter and switch at lower frequencies than the noise frequencies. Two basic circuits (1:1 voltage conversion ratio) providing noise isolation are discussed. They have distinct steady-state input current waveforms and are explained with PSPICE simulations. In a practical implementation, the inrush current through the switches is capable of destroying them; a practical solution using a PMOS-PNP pair is proposed. The converter's self-introduced switching noise is lower when the switching frequency is low and the turn-on/turn-off times are long. If power MOSFETs are used, the turn-on and turn-off are slow. The switching frequency can be lowered based on the voltage drop and power loss. The governing equations were formulated and simulated, and it is found that the switching frequency can be lowered by increasing the capacitance value without affecting the voltage drop or power loss. The equations also show that the design parameters have a cyclic dependency. Noise can leak through the parasitic capacitance of the switches.
Two circuits were proposed to improve the noise isolation: 1) the T switch and 2) the Π switch. Of these, the Π switch has the higher measured transfer impedance. Experimental results showed a noise reduction of 40-20 dB over the conducted-emission frequency range of 150 kHz to 30 MHz with the proposed 1:1 switched-capacitor converter. One possible improvement of this method is to combine the noise isolation with an existing switched-capacitor converter (SCC) topology. The discussed example had a switching frequency of 700 Hz, and it is shown that it can isolate switching noise in the kHz and MHz regions. A PDN has antiresonance peaks in the kHz region; if the proposed circuit is placed close to a microprocessor, it can reduce the excitation currents of these low-frequency antiresonance peaks. Chapter 6 concludes the thesis by stating the major contributions and applications of the concepts introduced, and discusses their future scope.
67

Técnicas de seguimento do ponto de máxima potência para sistemas fotovoltaicos com sombreamento parcial / Maximum power point tracking techniques for photovoltaic systems under partial shading

FURTADO, Artur Muniz Szpak 26 February 2016 (has links)
The power-voltage characteristic of a series array of photovoltaic modules with protective bypass diodes under partial-shading conditions displays multiple peaks.
These multiple peaks make classical maximum power point tracking (MPPT) strategies ineffective. First, this work performs a statistical analysis that determines a trapezoidal region of the power-voltage plane within which the global maximum power point lies for any combination of multiple irradiances and temperature, for central-inverter configurations with modules connected in pure series strings or in parallel-connected series strings. Next, it surveys the global MPPT techniques that track the global maximum power point of power-voltage curves with multiple peaks; two of these techniques are studied to assess how quickly they find the global maximum power point and how much energy is lost during the search. Finally, a new global MPPT technique is proposed, based on the preliminary statistical study and taking advantage of the trapezoidal region delimited in that analysis.
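The idea of restricting the global search to a known region, as the abstract proposes with its trapezoidal region, can be sketched with a toy two-peak P-V curve. The curve shape, the search window, and the refinement step are all invented for illustration; they are not the thesis's method or data.

```python
import numpy as np

# Toy two-peak P-V curve standing in for a partially shaded series string.
def pv_power(v):
    return 60 * np.exp(-((v - 18) / 6) ** 2) + 100 * np.exp(-((v - 32) / 4) ** 2)

# Coarse global scan restricted to a window (a stand-in for the trapezoidal
# region of the statistical analysis), followed by a local refinement.
v_grid = np.linspace(10, 40, 61)
p_grid = pv_power(v_grid)
v_best = v_grid[np.argmax(p_grid)]

# Local perturb-and-observe refinement around the coarse optimum.
step = 0.25
for _ in range(100):
    if pv_power(v_best + step) > pv_power(v_best):
        v_best += step
    elif pv_power(v_best - step) > pv_power(v_best):
        v_best -= step
    else:
        step /= 2
print(f"global MPP near V = {v_best:.2f}, P = {pv_power(v_best):.1f}")
```

The coarse scan avoids locking onto the local peak near 18 V, while the bounded window limits the energy spent sweeping, which is the trade-off the abstract's statistical region is designed to improve.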
68

Regulação financeira por objetivos: um modelo regulatório para o futuro? / Financial regulation by objectives: a regulatory model for the future?

Melo Filho, Augusto Rodrigues Coutinho de 05 February 2018 (has links)
The present work investigates the different models of financial regulation, focusing on the model of regulation by objectives, or "twin peaks", in the context of recent transformations in the structure of the global financial market, notably regarding (i) the emergence of new products and services that are not easily framed within a specific segment of the financial market and may simultaneously perform functions typical of the banking, securities and insurance markets; and (ii) changes in the characteristics of the main participants in this market, among which stand out (ii.1) the unification of financial organizations that operated in different sectors of the financial market (investment banks, commercial banks, brokerage firms, insurance companies, etc.), resulting in the formation of financial conglomerates; and (ii.2) the entry of technology companies that provide financial services in innovative ways, challenging the classic participants of this market. These changes are significant in terms of market structure and must be observed by regulators in order to assess the compatibility of their respective regulatory structures with the new risks arising from this "new financial market" in formation.
The work is based on the premise that an effective regulatory structure needs to reflect the market structure it is intended to regulate; such a misalignment risks producing regulation that is excessively costly and fails to promote the objectives for which it was created. The hypothesis of this work is that the model of regulation by objectives, as opposed to the unified and sector-based models, is the most apt to provide effective financial regulation, considering that its regulatory structure: (i) enables the fulfillment of the multiple objectives of financial regulation in a harmonized manner, at a stage when the risks emanating from the financial system are increasingly complex, mitigating the possibility of objectives overlapping within the same agency; and (ii) expands the regulatory perimeter, since the legal criteria for determining the competence of regulators are not tied to concepts specific to the banking, insurance and securities sectors, whose division is increasingly blurred in financial practice, where activities are carried out in a cross-sectoral way. To develop this hypothesis, the dissertation first reviews the theoretical literature on regulation models, aiming to identify the characteristics that make a regulatory model "optimal" from the point of view of effectiveness. After this analysis, it examines the practical changes the financial market is undergoing and how they affect the different regulatory structures in force, propitiating the emergence of regulation by objectives as the predominant model in some jurisdictions. In this context, regulation by objectives is an option to be considered by national regulators to face the new risks of the global financial market and to support the full development of financial markets in the coming decades.
69

Modelagem Bayesiana dos tempos entre extrapolações do número de internações hospitalares: associação entre queimadas de cana-de-açúcar e doenças respiratórias / Bayesian modelling of the times between peaks of hospital admissions: association between sugar cane plantation burning and respiratory diseases

Mayara Piani Luna da Silva Sicchieri 19 December 2012 (has links)
The relation between respiratory diseases and air pollution has been the subject of many scientific works, but the relation between respiratory diseases and sugar cane burning is still not well studied in the literature. Pre-harvest burning of sugarcane fields, used primarily to get rid of the dried leaves, is common in most of São Paulo state, Southeast Brazil, especially in the Ribeirão Preto region. The burning sites are detected by surveillance satellites of CPTEC/INPE (Center for Weather Forecasting and Climate Studies of the National Institute for Space Research). In this work, we take as our data of interest the time, in days, between peaks in the number of hospitalizations due to respiratory diseases. Different statistical models are introduced to analyze the data on pre-harvest burning of sugar cane fields and their relations with hospitalizations due to respiratory diseases. These new models are considered to analyze the data in the presence or absence of a covariate representing the number of burnings. Under a Bayesian approach, we obtain the posterior summaries of interest using MCMC (Markov Chain Monte Carlo) simulation methods, and use existing Bayesian discrimination methods to choose the best model. For the data of the Ribeirão Preto region, the models including the covariate give accurate posterior inferences and a good fit to the data. We conclude that there is evidence of a relationship between sugar cane burning and the times between peaks of hospitalizations: when a larger number of burnings is observed before a peak, the time until the next peak is shorter.
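The modeling pattern described above (a time-to-event likelihood for inter-peak times with a covariate for the number of burnings, posterior obtained by MCMC) can be sketched minimally. This is an assumed stand-in, not the thesis's model: the Exponential likelihood with a log-linear covariate effect, the Normal priors, the parameter names, and the synthetic data are all choices made here for illustration.

```python
# Hedged sketch, not the thesis's actual model: inter-peak times t_i modeled as
# Exponential with rate exp(alpha + beta * x_i), where x_i is the number of
# burnings before peak i; posterior explored with random-walk Metropolis.
import math
import random

random.seed(0)

# Synthetic data with true alpha = -2.0, beta = 0.1: more burnings -> higher
# rate -> shorter waiting time until the next hospitalization peak.
TRUE_A, TRUE_B = -2.0, 0.1
xs = [random.randint(0, 30) for _ in range(200)]
ts = [random.expovariate(math.exp(TRUE_A + TRUE_B * x)) for x in xs]

def log_post(a, b):
    """Log-posterior: Exponential log-likelihood + vague Normal(0, 10^2) priors."""
    ll = sum((a + b * x) - math.exp(a + b * x) * t for x, t in zip(xs, ts))
    return ll - (a * a + b * b) / (2.0 * 10.0 ** 2)

def metropolis(n_iter=6000):
    a, b, lp = 0.0, 0.0, log_post(0.0, 0.0)
    draws = []
    for _ in range(n_iter):
        a_new = a + random.gauss(0.0, 0.05)
        b_new = b + random.gauss(0.0, 0.005)
        lp_new = log_post(a_new, b_new)
        if math.log(random.random()) < lp_new - lp:   # accept/reject step
            a, b, lp = a_new, b_new, lp_new
        draws.append((a, b))
    return draws[n_iter // 2:]   # discard the first half as burn-in

draws = metropolis()
beta_mean = sum(b for _, b in draws) / len(draws)   # posterior mean of beta
```

A positive posterior mean for `beta` corresponds to the abstract's conclusion: more burnings before a peak are associated with shorter times between peaks.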
70

Etude de l'encrassement biologique de matériaux cimentaires en eau de rivière : analyse de l'influence des paramètres de surface des pâtes cimentaires / A study of the biofouling of cementitious materials in river water: analysis of the influence of surface parameters of cement pastes

Ben Ahmed, Karim 12 July 2016 (has links)
Biological aspects are generally not considered in the design of civil engineering works, although biocolonization may affect their durability. This thesis focuses on the biofouling of cementitious materials in river water. An accelerated laboratory test of phototrophic biocolonization, simulating river conditions, was developed, validated, and used to study cement pastes of different formulations. Colonization was assessed by the surface coverage rate, estimated with a proposed image-analysis method. The influence of roughness on the bioreceptivity of the material was studied through several roughness parameters of different natures; the peak density (a spacing parameter) showed the most decisive influence. A model was proposed to explain this influence and gave satisfactory results. The influences of porosity and pH appeared to be limited under the test conditions. Finally, micro-indentation was adapted to the mechanical evaluation of the deterioration of thin layers of cement pastes; this technique may be used to evaluate biodeterioration.
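Estimating a coverage rate from an image, as the abstract describes, amounts to classifying each pixel as colonized or not and taking the colonized fraction. The sketch below is a hedged illustration, not the thesis's method: the green-dominance rule, the `margin` threshold, and the toy image are all assumptions chosen here because phototrophic growth is typically green.

```python
# Hedged sketch (the thesis's actual image-analysis method is not given here):
# estimate a surface coverage rate by flagging pixels whose green channel
# dominates red and blue, then taking the flagged fraction of all pixels.

def coverage_rate(image, margin=20):
    """image: 2D grid of (r, g, b) tuples; returns fraction of colonized pixels.

    A pixel counts as colonized when green exceeds both red and blue by
    `margin` (an assumed threshold standing in for a calibrated classifier).
    """
    total = colonized = 0
    for row in image:
        for r, g, b in row:
            total += 1
            if g > r + margin and g > b + margin:
                colonized += 1
    return colonized / total if total else 0.0

# Toy 2x4 image: three algae-green pixels out of eight, so the rate is 3/8.
img = [
    [(90, 160, 80), (200, 200, 200), (85, 150, 70), (210, 205, 200)],
    [(95, 170, 90), (190, 188, 185), (200, 198, 202), (205, 203, 201)],
]
rate = coverage_rate(img)   # -> 0.375
```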
