About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.

Capacidade máxima de acúmulo de carbono em solos cultivados com cana-de-açúcar / Maximum capacity of carbon accumulation in soils cultivated with sugarcane

Brandani, Carolina Braga, 27 August 2013
One of the main obstacles to soil sustainability in sugarcane production is the management of the crop and its harvest. The aim of this study was to evaluate carbon (C) stocks and their dynamics in different soil organic matter (SOM) fractions in sugarcane areas managed with and without pre-harvest burning and under organic fertilization with different periods of adoption (4 and 12 years), with an area of native vegetation (Cerradão) as a reference. The study areas are located in Goianésia, GO, Brazil, all representative of Latossolos Vermelho-Amarelo distróficos (dystrophic Red-Yellow Oxisols). We evaluated soil C and N contents and stocks, as well as the C and N contents and the natural isotopic abundances of 13C and 15N, for the < 53 µm, 53-75 µm, and 75-2000 µm (organic and organomineral) SOM fractions. With these results, we then simulated the maximum capacity of C accumulation with the Century model, emphasizing the evaluated management practices and contrasting soil textures. C contents were higher under organic-4 management than in the other sugarcane areas evaluated. C and N stocks were higher in the areas with higher clay content (burning and organic-4). Because of the textural differences, stocks were recalculated with adjustments for equivalent soil mass and clay content, using the native vegetation as the reference; on that basis the largest stocks were observed for organic-12. Microbial biomass C (C-MB) increased with improved management. Regarding the C and N contents of the SOM fractions, the organic managements (organic-12 and organic-4) showed the highest values, especially for the < 53 µm organomineral fraction at 5 cm depth. These results were directly reflected in the δ13C values, the corresponding proportion of C derived from C4 residue, and the δ15N values, indicating greater accumulation and proportion of sugarcane-derived C in the SOM fractions; the δ15N values also indicated a degree of OM humification closer to that observed for the native vegetation. The modeling study emphasized the importance of soil texture as well as management practices for soil C accumulation. Conservation management systems, in contrast to burning, resulted in C stocks higher by 78% and 98% when simulated for soils with higher and lower clay contents, respectively. The results support the use of the Century model in practical applications of C accumulation in sugarcane soils, particularly for the C stocks and δ13C of the evaluated soils, indicating that the model can be an important tool for establishing management strategies that increase soil C stocks over time. The long evaluation period showed that concepts such as the "steady state" of soil C can be investigated via modeling, for example with the Century model. We conclude that conservation management practices tend to increase soil C and N stocks over time, and that this increase is strongly influenced by the soil clay content.
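The proportion of sugarcane-derived (C4) carbon mentioned above follows from a standard two-end-member δ13C mixing model. The sketch below illustrates the calculation; the end-member values are typical literature defaults for C3 native vegetation and C4 residue, assumed for illustration, not the thesis's measured values.

```python
def c4_fraction(delta_sample, delta_c3=-27.0, delta_c4=-12.5):
    """Fraction of SOM carbon derived from the C4 crop (sugarcane),
    from a two-pool delta-13C mixing model:
        f_C4 = (d_sample - d_C3) / (d_C4 - d_C3)
    End-member values are illustrative defaults, not the thesis's data."""
    f = (delta_sample - delta_c3) / (delta_c4 - delta_c3)
    return min(max(f, 0.0), 1.0)  # clamp to the physically meaningful range

# A soil fraction measuring -21.2 per mil lies between the two end-members:
print(c4_fraction(-21.2))  # 0.4 -> 40 % of this fraction's C is C4-derived
```

A fraction whose δ13C sits closer to the C4 end-member carries a larger share of crop-derived carbon, which is how the thesis tracks the replacement of native-vegetation C by sugarcane C.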

Untersuchungen zur Validität und Praktikabilität des mathematisch bestimmten maximalen Laktat-steady-states bei radergometrischen Belastungen / Investigations into the validity and practicability of the mathematically determined maximal lactate steady state in cycle-ergometer exercise

Hauser, Thomas, 27 February 2013
The maximal lactate steady state (MLSS) is regarded as a physiological parameter of endurance capacity. As early as the 1980s, Mader (1984) developed a calculation method, based on Michaelis-Menten kinetics, for determining the power output at the MLSS. This method requires knowledge of the maximal reaction rates of glycolysis and respiration. The gold-standard method for determining the power at the MLSS consists of several 30-minute constant-load trials. The main aim of the present work was to compare the calculated with the empirically determined power at the MLSS. 57 male subjects first underwent, in randomized order, a test to determine the maximal lactate production rate and the maximal oxygen uptake. The subjects then completed several 30-minute constant-load tests to determine the power at the MLSS empirically. The results show a highly significant correlation between the two methods (r = 0.89; p < 0.001) and a mean difference of -13 W. From these results it can be concluded that the power at the MLSS determined with the method of Mader (1984) agrees, on average, very well with the empirically determined power at the MLSS. In addition to this main study, the present work also examined the reliability and day-to-day variability of the power at the MLSS, the influence of test duration on the lactate production rate, and the practicability of the calculated MLSS power in an individual time trial.

Energetics in Canoe Sprint

Li, Yongming, 11 May 2015
This study first reviewed the development of race results in canoe sprint over the past decades. The race results of MK1-1000 and WK1-500 have improved by 32.5 % and 42.1 %, respectively, a corresponding 5.0 % and 6.5 % improvement per decade. This development resulted from contributions of several kinds. The recruitment of taller and stronger athletes improved the physiological capacity of paddlers. Direct investigation of energy contributions in canoe sprint strengthened the emphasis on aerobic capacity and aerobic endurance training. Advances in equipment design improved paddling efficiency. Physiological and biomechanical diagnostics in canoe sprint led to more scientific training. Other aspects may also have contributed: for example, the establishment of national teams after World War II made systematic training possible, and the use of drugs in the last century accelerated the development of race results in that period. Recent investigations of energetics in high-intensity exercise demonstrated an underestimate of WAER % in the table provided by some textbooks since the 1960s. An exponential correlation between WAER % and the duration of high-intensity exercise was concluded from summarizing most of the relevant reports, including reports using different methods of energy calculation. However, when reports using the MAOD and Pcr-La-O2 methods were summarized separately, a greater overestimate of WAER % was found from MAOD than from Pcr-La-O2, in line with the critical reports on MAOD. Because the validity of comparisons between MAOD and Pcr-La-O2 has not been investigated, it is still unclear which method generates more accurate and more reliable results. With regard to kayaking, a range of variation in WAER % was observed.
Many factors might contribute to this variation. Therefore, the methods used to calculate the energy contributions, different paddling conditions, and performance level were investigated in kayaking. The findings indicated that the method used to calculate the energy contributions, rather than the paddling condition or the performance level of the paddlers, is the likely factor associated with WAER %. Other possible factors associated with WAER % still need to be investigated. After verifying the dependence of WAER % on the method of energy calculation, but not on paddling condition or performance level, the energy contributions in kayaking were investigated for the three racing distances on a kayak ergometer with junior paddlers. Energetic profiles in kayaking varied with paddling distance. At 500 m and 1000 m the aerobic system was dominant (WAER % of 57.8 % and 76.2 %), whereas at 200 m the anaerobic system was dominant (WAER % of 31.1-32.4 %). Muscle volume seemed to influence absolute energy production. The anaerobic alactic system determined performance during the first 5 to 10 s. The anaerobic lactic system probably played the dominant role from the 5th-10th s to the 30th-40th s. The aerobic system dominated the energy contribution after 30-40 s. This energetic profile in kayaking could provide physiological support for developing the training philosophy for these three distances. Additionally, the method introduced by Beneke et al. seemed to be a valid way to calculate the energy contributions in maximal kayaking. Energy contributions in canoeing were similar to those in kayaking. The relative energy contributions in open-water canoeing were 75.3 ± 2.8 % aerobic, 11.5 ± 1.9 % anaerobic lactic, and 13.2 ± 1.9 % anaerobic alactic at maximal speed over a simulated 1000 m.
Further, the energy cost (C) of canoeing also seemed similar to the findings reported for kayaking, following the function y = 0.0242 · x^2.1225. Training programs could therefore be designed similarly for kayaking and canoeing with regard to the energetic profile. To extend the findings on energetics in canoe sprint to other exercises, energy contributions in kayaking, canoeing, running, cycling, and arm cranking were compared at the same durations. Results indicated that WAER % during maximal exercise of a given duration seems to be independent of movement pattern, given similar VO2 kinetics during the maximal exertion. The exponential relationship between WAER % and duration in maximal exercise was thus supported after excluding the influence of movement pattern. Additionally, the MLSS in kayaking was investigated. The blood lactate value at MLSS was found to be 5.4 mM in kayaking, which expands the knowledge of MLSS across different forms of locomotion. The MLSS in kayaking might be attributed to the muscle mass involved in this locomotion, which could allow a certain level of lactate removal and thus an equilibrium between lactate production and removal. LT5, instead of LT4, was recommended for diagnostics in kayaking, given an incremental test as used in this study.
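The power-function energy cost and the three-way energy partition quoted above can be sketched as follows. The function coefficients are the ones reported in the abstract; the function and variable names are illustrative, and units follow the original study.

```python
def canoe_energy_cost(speed):
    """Energy cost of canoeing as a power function of speed, using the
    coefficients quoted in the text (y = 0.0242 * x**2.1225).  Treat this
    as a descriptive fit from the study, with its original units."""
    return 0.0242 * speed ** 2.1225

def relative_contributions(aer, lac, alac):
    """Normalize absolute energy shares (any consistent unit) to percent,
    as in the aerobic / anaerobic-lactic / anaerobic-alactic partition."""
    total = aer + lac + alac
    return tuple(round(100 * x / total, 1) for x in (aer, lac, alac))

# The reported 1000 m open-water canoeing split is already in percent,
# so it normalizes to itself:
print(relative_contributions(75.3, 11.5, 13.2))  # (75.3, 11.5, 13.2)
```

Because the exponent exceeds 2, the energy cost grows faster than the square of speed, which is one reason small speed gains at race pace are energetically expensive.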

Exploration of a Scalable Holomorphic Embedding Method Formulation for Power System Analysis Applications

January 2017
The holomorphic embedding method (HEM) applied to the power-flow problem (HEPF) has been used in the past to obtain the voltages and flows for power systems. The incentive for using this method over traditional Newton-Raphson based numerical methods lies in the claim that the method is theoretically guaranteed to converge to the operable solution, if one exists. In this report, HEPF is used for two power system analysis purposes: (a) estimating the saddle-node bifurcation point (SNBP) of a system, and (b) developing reduced-order network equivalents for distribution systems. Typically, the continuation power flow (CPF) is used to estimate the SNBP of a system, which involves solving multiple power-flow problems. One of the advantages of HEPF is that the solution is obtained as an analytical expression of the embedding parameter; using this property, three of the proposed HEPF-based methods can estimate the SNBP of a given power system without solving multiple power-flow problems (if generator VAr limits are ignored). If VAr limits are considered, the mathematical representation of the power-flow problem changes and an iterative process has to be performed to estimate the SNBP; this would typically still require fewer power-flow solutions than CPF. Another proposed application is to develop reduced-order network equivalents for radial distribution networks that retain the nonlinearities of the eliminated portion of the network and hence remain more accurate than traditional Ward-type reductions (which linearize about the given operating point) when the operating condition changes. Different ways of accelerating the convergence of the power series obtained as part of HEPF are explored, and it is shown that the eta method is the most efficient of all methods tested. Local-measurement-based methods of estimating the SNBP are also studied.
Nonlinear Thévenin-like networks as well as multi-bus networks are built using model data to estimate the SNBP, and it is shown that the structure of these networks can be made arbitrary by appropriately modifying the nonlinear current injections, which can simplify the process of building such networks from measurements. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2017
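The idea of accelerating a slowly converging power series, central to the HEPF work above, can be illustrated with Aitken's delta-squared process. This is only a simple stand-in for the eta method the dissertation actually finds most efficient, and it is applied here to a generic slowly converging series rather than to an HEPF voltage series.

```python
import math

def aitken(s):
    """One pass of Aitken's delta-squared process on a list of partial
    sums (assumes nonzero second differences).  A simple stand-in for
    the eta-type series accelerators evaluated in the dissertation."""
    return [
        s[i + 2] - (s[i + 2] - s[i + 1]) ** 2 / (s[i + 2] - 2 * s[i + 1] + s[i])
        for i in range(len(s) - 2)
    ]

# Partial sums of the slowly converging series ln 2 = 1 - 1/2 + 1/3 - ...
sums = []
total = 0.0
for k in range(12):
    total += (-1) ** k / (k + 1)
    sums.append(total)

accel = aitken(aitken(sums))  # two passes sharpen the estimate further
# accel[-1] lands far closer to ln 2 than the raw partial sum sums[-1].
```

Acceleration matters for HEPF because the voltage power series may converge slowly (or only after analytic continuation) near the bifurcation point, so an accelerated estimate extracts more from the same number of computed series coefficients.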

Development and Deployment of Renewable and Sustainable Energy Technologies

Jung, Jae Sung, 06 March 2014
Solar and wind generation are among the most rapidly growing renewable energy sources and are regarded as an appealing alternative to conventional power generated from fossil fuel. This is leading to significant levels of distributed renewable generation being installed on distribution circuits. Although renewable generation brings many advantages, circuit problems are created by its intermittency, and overcoming these problems is a key challenge to achieving high penetration. It is necessary for utilities to understand the impacts of photovoltaic (PV) generation on distribution circuits and operations. An impact study is intended to quantify the extent of the issues, discover any problems, and investigate alternative solutions. Accordingly, system-wide and local impact studies are proposed in the dissertation.

1) System-wide impact study. This study considers system effects due to the addition of Plug-in Hybrid Electric Vehicles (PHEV) and Distributed Energy Resource (DER) generation. The DER and PHEV are considered with energy storage technology applied to the residential distribution system load. Two future-year scenarios are considered, 2020 and 2030. The models used are of real distribution circuits located near Detroit, Michigan, and every customer load on the circuit and type of customer are modeled. Monte Carlo simulations are used to randomly select the customers that receive PHEV, DER, and/or storage systems. The Monte Carlo simulations provide not only the expected average result, but also its uncertainty.

2) Local impact study. Analyses of high PV penetration in distribution circuits using both steady-state and quasi-steady-state impact studies are presented. The steady-state analysis evaluates impacts on the distribution circuit by comparing conditions before and after extreme changes in PV generation at three extreme circuit conditions: maximum load, maximum PV generation, and maximum difference between the PV generation and the circuit load.
The quasi-steady-state study consists of a series of steady-state impact studies performed at evenly spaced time points to evaluate the spectrum of impacts between the extremes. Results addressing the impacts of cloud cover and various power-factor control strategies are presented. PV penetration levels are limited and depend on the PV generation control strategies and on the circuit design and loading. There are tradeoffs in PV generation control concerning circuit voltage variations, circuit losses, and the motion of automated utility control devices. The steady-state and quasi-steady-state impact studies provide information that is helpful in evaluating the effect of PV generation on distribution circuits, including circuit problems that result from the PV generation. In order to fully benefit from wind power, accurate wind power forecasting is an essential tool, which has motivated researchers to develop better forecasts of the wind resource and the resulting power. As a contribution on the wind side, a frequency-domain approach is proposed in the dissertation to characterize and analyze wind speed patterns.

3) Frequency-domain approach. This study introduces the frequency-domain approach to characterize and analyze wind speed patterns. It first presents the technique and the prerequisite conditions for the approach. Three years of wind speed data at 10 different locations were used. The study demonstrates that wind speed patterns at different times and at different locations can be well characterized using the frequency-domain approach in its compact and structured format. Analysis of the characterized dataset affirms that the frequency-domain approach is a useful indicator for understanding the characteristics of wind speed patterns and can express the information with superior accuracy.
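The core of a frequency-domain characterization is to represent a wind speed record by its strongest Fourier components. The sketch below uses a plain discrete Fourier transform on an assumed diurnal pattern; the sample values and the function name are illustrative, not the dissertation's data or code.

```python
import cmath
import math

def dominant_harmonics(samples, k=3):
    """Characterize a periodic wind-speed record by its k strongest
    one-sided DFT components (a compact frequency-domain fingerprint).
    Plain DFT for clarity; hourly samples over one period are assumed."""
    n = len(samples)
    spectrum = [
        sum(samples[t] * cmath.exp(-2j * math.pi * f * t / n) for t in range(n)) / n
        for f in range(n // 2 + 1)
    ]
    mags = [(abs(c), f) for f, c in enumerate(spectrum)]
    return sorted(mags, reverse=True)[:k]  # (magnitude, frequency index)

# An assumed diurnal pattern: mean 6 m/s with a once-per-day swing of 2 m/s.
day = [6 + 2 * math.sin(2 * math.pi * t / 24) for t in range(24)]
top = dominant_harmonics(day)
# Largest components: DC (the mean) at f=0 and the diurnal cycle at f=1.
```

A handful of (frequency, magnitude) pairs like these is exactly the kind of compact, structured descriptor that lets patterns from different sites and seasons be compared directly.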
Among the various technical challenges under high PV penetration, voltage rise caused by reverse power flows is one of the foremost concerns. The voltage rises due to the PV generation, and the need to limit this rise prevents PV generators from injecting more active power into the distribution network. This can be one of the obstacles to high penetration of PV into circuits. As a solution for solar generation, coordinated control of automated devices and PV is proposed in the dissertation. 4) Coordinated Automated Device and PV Control A coordinating, model-centric control strategy for mitigating voltage rise problems due to PV penetration into power distribution circuits is presented. The coordinating control objective is to maintain an optimum circuit voltage distribution and voltage schedule, where the optimum circuit operation is determined without PV generation on the circuit. In determining the optimum circuit voltage distribution and voltage schedule, the control strategy schedules utility controls, such as switched capacitor banks and voltage regulators, separately from PV inverter controls. Optimization addresses minimizing circuit losses and the motion of utility controls. The coordinating control action provides control setpoints to the PV inverters that are a function of the circuit loading or time of day, and also of the location of the PV inverter. Three PV penetration scenarios are considered: 10%, 20%, and 30%. Baselines with and without coordinating controls for circuit performance without PV generation are established, and these baselines are compared against the three PV penetration scenarios with and without coordinating control. Simulation results are compared, and differences in voltage variations and circuit losses are considered along with differences in utility control motion. 
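The voltage rise mechanism, and why inverter setpoints help, can be illustrated with the standard first-order feeder approximation ΔV ≈ (RP + XQ)/V². This approximation is not taken from the dissertation; it is a common textbook sketch, and the feeder impedance and power values below are hypothetical.

```python
def voltage_rise_pu(p_kw, q_kvar, r_ohm, x_ohm, v_kv):
    """Approximate per-unit voltage rise at the point of connection for
    power P + jQ injected through a feeder impedance R + jX:
        dV ~ (R*P + X*Q) / V^2   (per unit).
    Absorbing reactive power (Q < 0) offsets part of the rise."""
    v_volts = v_kv * 1e3
    return (r_ohm * p_kw * 1e3 + x_ohm * q_kvar * 1e3) / v_volts ** 2

# 500 kW of PV at unity power factor on a 0.4 + j0.3 ohm feeder at 12.47 kV
rise = voltage_rise_pu(500, 0, 0.4, 0.3, 12.47)
# the same active power while absorbing 300 kvar reduces the rise
rise_q = voltage_rise_pu(500, -300, 0.4, 0.3, 12.47)
print(rise, rise_q)
```

The tradeoff visible here is the one the coordinating control must balance: absorbing reactive power lowers the voltage rise but increases current and hence circuit losses.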
Results show that the coordinating control can solve the voltage rise problem while minimizing circuit losses and reducing utility control motion. The coordinating control will work with existing PV inverter controls that accept control setpoints, without having to modify the inverter controls. 5) Coordinated Local and Centralized PV Control Existing distribution systems and their associated controls have been in place for decades. Most distribution circuits have the capacity to accommodate some level of PV generation, but the question is how much they can handle without creating problems. This study proposes a Configurable, Hierarchical, Model-based, Scheduling Control (CHMSC) of automated utility control devices and photovoltaic (PV) generators. Here the automated control devices are assumed to be owned by the utility, and the PV generators and PV generator controls by another party. The CHMSC, which exists in a hierarchical, failure-tolerant control architecture, strives to maintain the voltage level that existed before introducing the PV into the circuit while minimizing the circuit loss and reducing the motion of the automated control devices. This is accomplished using prioritized objectives. The CHMSC sends control signals to the local controllers of the automated control devices and to the PV controllers. To evaluate the performance of the CHMSC, increasing levels of PV adoption are analyzed in a model of an actual circuit that has significant existing PV penetration and automated voltage control devices. The CHMSC performance is compared with that of existing local control. Simulation results presented demonstrate that the CHMSC algorithm results in better voltage control, lower losses, and reduced automated control device motion, especially as the penetration level of PV increases. / Ph. D.
58

Quelques résultats sur la percolation d'information dans les marchés OTC.

Bayade, Sophia January 2014 (has links)
Résumé (translated from French): The main characteristic of OTC (Over-The-Counter) markets is the absence of a centralized trading mechanism (such as auctions, specialists, or limit-order books). Buyers and sellers are therefore often ignorant of the prices currently available from other potential counterparties and have limited knowledge of the size of trades recently negotiated elsewhere in the market. This is why OTC markets are described as relatively opaque and were named "Dark Markets" by Duffie (2012) in his recent monograph, to reflect the fact that investors are somewhat in the dark about the best available price and about whom to contact to make the best trade. In this work, we are particularly interested in the evolution over time of the transmission of information during trading sessions. More precisely, we seek to establish the asymptotic stability of the information-sharing dynamics within a large population of investors characterized by the frequency/intensity of meetings between investors. The optimal effort deployed by an agent searching for information depends on the agent's current level of information and on the cross-sectional distribution of the search efforts of the other agents. In the framework defined by Duffie-Malamud-Manso (2009), in equilibrium, agents search maximally until the quality of their information reaches a certain level, which triggers a new phase of minimal search. In the context of information percolation between agents, information can be transmitted perfectly or imperfectly. The first study of this percolation problem was carried out by Duffie-Manso (2007), and then by Duffie-Giroux-Manso (2010). In that second study, the case of information percolation through groups of more than two investors was addressed and solved. 
That last study led to the problem of extending Wild sums in Bélanger-Giroux (2013). On the other hand, in Duffie-Malamud-Manso (2009), each agent is endowed with signals regarding the likely outcome of a random variable of common concern, with a view to the transmission of information in a large population of agents. Such a setting leads to nonlinear systems of evolution equations. Their objective is to obtain an equilibrium policy determined by a set of parameters of a trigger policy, reflecting the fact that the search effort must be minimal once an agent possesses enough information. In this work, we are able to obtain the existence of the steady state even when the intensity function is not a product. Moreover, we are also able to show asymptotic stability for any initial law by a change of kernels. Finally, we extend the hypotheses of Bélanger-Giroux (2012) to show exponential stability via the Routh-Hurwitz criterion for another example of a system with a finite number of equations. // Abstract : Over-the-counter (OTC) markets have the main characteristic that they do not use a centralized trading mechanism (such as auctions, specialists, or limit-order books) to aggregate bids and offers and to allocate trades. Buyers and sellers often have limited knowledge of trades recently negotiated elsewhere in the market, and they negotiate in potential ignorance of the prices currently available from other counterparties. This is the reason why OTC markets are said to be relatively opaque and are qualified as "Dark Markets" by Duffie (2012) in his recent monograph, to reflect the fact that investors are somewhat in the dark about the most attractive available deals and about whom to contact. 
In this work, we are particularly interested in the evolution over time of the distribution across investors of information learned from private trade negotiations. Specifically, we aim to establish the asymptotic stability of equilibrium dynamics of information sharing in a large interaction set. An agent’s optimal current effort to search for information sharing opportunities depends on that agent’s current level of information and on the cross-sectional distribution of information quality and search efforts of other agents. Under the Duffie-Malamud-Manso (2009) framework, in equilibrium, agents search maximally until their information quality reaches a trigger level and then search minimally. In the context of percolation of information between agents, the information can be transmitted directly or indirectly. The first studies of such a problem were made by Duffie-Manso (2007) and then by Duffie-Giroux-Manso (2010). In that second study the case of the percolation of information by groups of more than 2 investors was addressed and solved for a perfect information transmission kernel. That last study has led Bélanger-Giroux (2013) to the problem of extending the Wild sums for a general interacting kernel (not only for the kernel which adds the information). On the other hand, in Duffie-Malamud-Manso (2009), the authors explain that, for the information sharing in a large population, each agent is endowed with signals regarding the likely outcome of a random variable of common concern, like the price of an asset of common interest. Such a setting leads to nonlinear systems of evolution equations. The agents’ goal is to obtain an equilibrium policy specified by a set of parameters of a trigger policy; more specifically the minimal search effort trigger policies. We concentrate our study on those trigger policies in order to provide more intuitive and practical results. 
Doing so, we are able to obtain the existence of the steady state even when the intensity function is not a product. In our framework, we are even able to show asymptotic stability starting from any initial law. This can be done because we are able to show that, by a change of kernels, the systems of ODEs, which are expressed by a set of kernels (one 1-ary and one 2-ary), are equivalent to systems expressed with a single 2-ary kernel, even with a constant intensity equal to one (by a change of time). We also show that, starting from any distribution, the solution converges to the limit proportions. Furthermore, we are able to show exponential stability using the Routh-Hurwitz criterion for an example of a finite system of differential equations. The solution of such a system of equations describes the cross-sectional distribution of types in the market.
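The Routh-Hurwitz step invoked above can be illustrated on a hypothetical two-equation linearization; the Jacobian below is made up for illustration and is not taken from the thesis. For a 2x2 system the Routh-Hurwitz conditions reduce to sign conditions on the trace and determinant, which is easy to check numerically.

```python
import numpy as np

def routh_hurwitz_2x2(J):
    """For a 2x2 Jacobian J, the Routh-Hurwitz conditions for
    exponential stability of x' = J x reduce to
        trace(J) < 0  and  det(J) > 0."""
    return np.trace(J) < 0 and np.linalg.det(J) > 0

# hypothetical linearization of a two-type information-sharing system:
# each proportion decays toward its steady state, with cross coupling
J = np.array([[-2.0, 0.5],
              [ 1.0, -1.5]])
print(routh_hurwitz_2x2(J))                   # True: the steady state is stable
print(np.all(np.linalg.eigvals(J).real < 0))  # eigenvalue check agrees: True
```

Both checks agree because the Routh-Hurwitz conditions are exactly equivalent, in this dimension, to all eigenvalues having negative real parts, which is what exponential stability of the linearized system requires.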
59

Sequential estimation in statistics and steady-state simulation

Tang, Peng 22 May 2014 (has links)
At the onset of the "Big Data" age, we are faced with ubiquitous data in various forms and with various characteristics, such as noise, high dimensionality, autocorrelation, and so on. The question of how to obtain accurate and computationally efficient estimates from such data is one that has stoked the interest of many researchers. This dissertation mainly concentrates on two general problem areas: inference for high-dimensional and noisy data, and estimation of the steady-state mean for univariate data generated by computer simulation experiments. We develop and evaluate three separate sequential algorithms for the two topics. One major advantage of sequential algorithms is that they allow for careful experimental adjustments as sampling proceeds. Unlike one-step sampling plans, sequential algorithms adapt to different situations arising from the ongoing sampling; this makes these procedures efficacious as problems become more complicated and more-delicate requirements need to be satisfied. We will elaborate on each research topic in the following discussion. Concerning the first topic, our goal is to develop a robust graphical model for noisy data in a high-dimensional setting. Under a Gaussian distributional assumption, the estimation of undirected Gaussian graphs is equivalent to the estimation of inverse covariance matrices. Particular interest has focused upon estimating a sparse inverse covariance matrix to reveal insight on the data as suggested by the principle of parsimony. For estimation with high-dimensional data, the influence of anomalous observations becomes severe as the dimensionality increases. To address this problem, we propose a robust estimation procedure for the Gaussian graphical model based on the Integrated Squared Error (ISE) criterion. The robustness result is obtained by using ISE as a nonparametric criterion for seeking the largest portion of the data that "matches" the model. 
Moreover, an l₁-type regularization is applied to encourage sparse estimation. To address the non-convexity of the objective function, we develop a sequential algorithm in the spirit of a majorization-minimization scheme. We summarize the results of Monte Carlo experiments supporting the conclusion that our estimator converges weakly (i.e., in probability) to the true inverse covariance matrix as the sample size grows large. The performance of the proposed method is compared with that of several existing approaches through numerical simulations. We further demonstrate the strength of our method with applications in genetic network inference and financial portfolio optimization. The second topic consists of two parts, and both concern the computation of point and confidence interval (CI) estimators for the mean µ of a stationary discrete-time univariate stochastic process X = {X_i : i = 1, 2, ...} generated by a simulation experiment. Point estimation is relatively easy when the underlying system starts in steady state, but the traditional way of calculating CIs usually fails since the data encountered in simulation output are typically serially correlated. We propose two distinct sequential procedures that each yield a CI for µ with user-specified reliability and absolute or relative precision. The first sequential procedure is based on variance estimators computed from standardized time series applied to nonoverlapping batches of observations, and it is characterized by its simplicity relative to methods based on batch means and its ability to deliver CIs for the variance parameter of the output process (i.e., the sum of covariances at all lags). The second procedure is the first sequential algorithm that uses overlapping variance estimators to construct asymptotically valid CI estimators for the steady-state mean based on standardized time series. 
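The majorization-minimization idea used above for the graphical-model estimator can be illustrated on a simpler convex cousin of the problem: iterative soft-thresholding (ISTA) for l₁-regularized least squares, where each iteration minimizes a quadratic majorizer of the smooth term. This is only an illustrative sketch under made-up data, not the thesis's ISE-based estimator.

```python
import numpy as np

def ista(A, b, lam, step=None, iters=500):
    """Majorization-minimization sketch: ISTA for
        min_x 0.5*||Ax - b||^2 + lam*||x||_1 .
    Each iteration minimizes a quadratic majorizer of the smooth
    term, which reduces to a gradient step plus soft-thresholding."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                      # gradient of smooth part
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]              # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=50)
x_hat = ista(A, b, lam=0.5)
print(np.nonzero(np.round(x_hat, 2))[0])           # recovers a sparse support
```

The same majorize-then-minimize pattern carries over to the non-convex inverse-covariance objective: each surrogate subproblem is tractable even when the overall objective is not.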
The advantage of this procedure is that compared with other popular procedures for steady-state simulation analysis, the second procedure yields significant reduction both in the variability of its CI estimator and in the sample size needed to satisfy the precision requirement. The effectiveness of both procedures is evaluated via comparisons with state-of-the-art methods based on batch means under a series of experimental settings: the M/M/1 waiting-time process with 90% traffic intensity; the M/H_2/1 waiting-time process with 80% traffic intensity; the M/M/1/LIFO waiting-time process with 80% traffic intensity; and an AR(1)-to-Pareto (ARTOP) process. We find that the new procedures perform comparatively well in terms of their average required sample sizes as well as the coverage and average half-length of their delivered CIs.
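A minimal sketch of the nonoverlapping-batch construction that underlies the procedures compared above follows. It uses a fixed batch count and no sequential stopping rule, with a synthetic AR(1) series standing in for correlated simulation output; all parameter choices are illustrative.

```python
import math
import random

def batch_means_ci(data, n_batches=20, t_crit=2.093):
    """Nonoverlapping batch means: split a correlated series into
    batches, treat the batch means as approximately i.i.d., and form
    a t-based CI for the steady-state mean (t_{0.975,19} ~ 2.093)."""
    m = len(data) // n_batches
    means = [sum(data[i * m:(i + 1) * m]) / m for i in range(n_batches)]
    grand = sum(means) / n_batches
    s2 = sum((x - grand) ** 2 for x in means) / (n_batches - 1)
    half = t_crit * math.sqrt(s2 / n_batches)
    return grand - half, grand + half

# AR(1) with mean 10: serially correlated, like typical simulation output
rng = random.Random(1)
x, data = 10.0, []
for _ in range(200_000):
    x = 10 + 0.8 * (x - 10) + rng.gauss(0, 1)
    data.append(x)
lo, hi = batch_means_ci(data)
print(lo, hi)   # a 95% CI for the steady-state mean (true value: 10)
```

Batching is what restores an approximately independent sample: a naive CI from the raw serially correlated observations would be far too narrow, which is the failure mode the sequential procedures above are designed to avoid.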
60

A Mixed Frequency Steady-State Bayesian Vector Autoregression: Forecasting the Macroeconomy

Unosson, Måns January 2016 (has links)
This thesis suggests a Bayesian vector autoregressive (VAR) model that allows for explicit parametrization of the unconditional mean for data measured at different frequencies, without the need to aggregate data to the lowest common frequency. Using a normal prior for the steady state and a normal-inverse Wishart prior for the dynamics and error covariance, a Gibbs sampler is proposed to sample the posterior distribution. A forecast study is performed using monthly and quarterly data for the US macroeconomy between 1964 and 2008. The proposed model is compared to a steady-state Bayesian VAR model estimated on data aggregated to quarterly frequency and to a quarterly least squares VAR with standard parametrization. Forecasts are evaluated using root mean squared errors and the log-determinant of the forecast error covariance matrix. The results indicate that the inclusion of monthly data improves the accuracy of quarterly forecasts of monthly variables for horizons up to a year. For quarterly variables, the one- and two-quarter-ahead forecasts are improved when monthly data are used.
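The steady-state (mean-adjusted) parametrization the thesis builds on can be sketched for a single-frequency VAR(1). This toy simulation, with made-up μ and Π and no mixed-frequency data or Gibbs sampling, only shows why the unconditional mean is an explicit parameter in this form.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([2.0, 1.0])        # steady state, parametrized explicitly
Pi = np.array([[0.5, 0.1],
               [0.0, 0.3]])      # stable dynamics (eigenvalues inside unit circle)

# steady-state form of a VAR(1): y_t - mu = Pi (y_{t-1} - mu) + eps_t,
# so mu is the unconditional mean rather than a derived quantity
y = mu.copy()
draws = []
for _ in range(50_000):
    y = mu + Pi @ (y - mu) + rng.normal(0, 0.5, size=2)
    draws.append(y)
print(np.mean(draws, axis=0))    # sample mean is close to mu = [2, 1]
```

Because μ enters the likelihood directly, an informative prior can be placed on the steady state itself, which is the feature the normal prior in the thesis exploits.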
