941

Customer Load Profiling and Aggregation

Chang, Rung-Fang 28 June 2002 (has links)
Power industry restructuring has created many opportunities for customers to reduce their electricity bills. In order to facilitate retail choice in a competitive power market, knowledge of the hourly load shape by customer class is necessary. Requiring a meter as a prerequisite for lower-voltage customers to choose a power supplier is not considered practical at the present time. In order for an Energy Service Provider (ESP) to assign customers to specific load profiles with certainty factors, a technique based on load research and customers' monthly energy usage data is required for a preliminary screening of customer load profiles. Distribution systems supply electricity to different mixtures of customers; due to the lack of field measurements, the load point data used in distribution network studies have various degrees of uncertainty. In order to take the expected uncertainties in the demand into account, many previous methods have used fuzzy load models in their studies. However, the issue of deriving these models has not been discussed. To address this issue, an approach for building these fuzzy load models is needed. Load aggregation allows customers to purchase electricity at a lower price. In some contracts, load factor is considered a critical aspect of aggregation. To facilitate better load aggregation in distribution networks, feeder reconfiguration can be used to improve the load factor in a distribution subsystem. To solve the aforementioned problems, two data mining techniques, namely the fuzzy c-means (FCM) method and an Artificial Neural Network (ANN) based pattern recognition technique, are proposed for load profiling and customer class assignment. As a variant of this load profiling technique, customer hourly load distributions obtained from load research can be converted to fuzzy membership functions based on a possibility-probability consistency principle. With the customer class fuzzy load profiles, customer monthly power consumption and feeder load measurements, the hourly loads of each distribution transformer on the feeder can be estimated and used in distribution network analysis. After the feeder models are established, feeder reconfiguration based on a binary particle swarm optimization (BPSO) technique is used to improve feeder load factors. Test results based on several simple sample networks have shown that the proposed feeder reconfiguration method can improve customers' position for a good bargain in electricity service.
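A minimal sketch of the kind of fuzzy c-means load profiling described above, assuming daily load curves are normalized to unit energy; the cluster centres then play the role of class load profiles and the membership matrix provides certainty factors for customer class assignment. The synthetic customer classes and parameter values are illustrative assumptions, not the thesis data.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=200, tol=1e-6, seed=0):
    """Cluster rows of X into c fuzzy clusters (fuzzifier m > 1).

    Returns (centers, U) where U[i, k] is the membership of sample i in
    cluster k -- usable as a certainty factor for class assignment."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships of each sample sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared Euclidean distance of every sample to every centre
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        U_new = 1.0 / (d2 ** (1.0 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Illustrative data: 24-point daily load curves for two synthetic customer classes.
rng = np.random.default_rng(1)
t = np.arange(24)
residential = 0.5 + 0.5 * np.exp(-0.5 * ((t - 20) / 3.0) ** 2)   # evening peak
commercial = 0.3 + 0.7 * ((t >= 9) & (t <= 17))                   # office hours
profiles = np.vstack([residential + 0.05 * rng.standard_normal((30, 24)),
                      commercial + 0.05 * rng.standard_normal((30, 24))])
profiles /= profiles.sum(axis=1, keepdims=True)                   # unit daily energy

centers, U = fuzzy_c_means(profiles, c=2)
print("class load profiles shape:", centers.shape)     # (2, 24)
print("certainty factors of first customer:", U[0])    # fuzzy class memberships
```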
942

Distributed Computation With Communication Delays: Design And Analysis Of Load Distribution Strategies

Bharadwaj, V 06 1900 (has links)
Load distribution problems in distributed computing networks have attracted much attention in the literature. A major objective in these studies is to distribute the processing load so as to minimize the time of processing of the entire load. In general, the processing load can be indivisible or divisible. An indivisible load has to be processed in its entirety on a single processor. On the other hand, a divisible load can be partitioned and processed on more than one processor. Divisible loads are either modularly divisible or arbitrarily divisible. Modularly divisible loads can be divided into pre-defined modules and cannot be further sub-divided. Further, precedence relations between modules may exist. Arbitrarily divisible loads can be divided into several fractions of arbitrary lengths which usually do not have any precedence relations. Such loads are characterized by their large volume and the property that each data element requires identical and independent processing. One of the important problems here is to obtain an optimal load distribution, which minimizes the processing time when the distribution is subject to communication delays in the interconnecting links. A specific application in which such loads are encountered is edge-detection of images. Here the given image frame can be arbitrarily divided into many sub-frames and each of these can be independently processed. Other applications include the processing of massive experimental data. The problems associated with the distribution of such arbitrarily divisible loads are usually analysed in the framework of what is known as divisible job theory. The research work reported in this thesis is a contribution in the area of distributing arbitrarily divisible loads in distributed computing systems subject to communication delays. The main objective in this work is to design and analyse load distribution strategies to minimize the processing time of the entire load in a given network. Two types of networks are considered, namely (i) single-level tree (or star) networks and (ii) linear networks. In both networks we assume that there is a non-zero delay associated with load transfer. Further, the processors in the network may or may not be equipped with front-ends (i.e., communication co-processors). The main contributions in this thesis are summarized below. First, a mathematical formulation of the load distribution problem in single-level tree and linear networks is presented. In both networks, it is assumed that there are (m + 1) processors and m communication links. In the case of single-level tree networks, the load to be processed is assumed to originate at the root processor, which divides the load into (m + 1) fractions, keeps its own share of the load for processing, and distributes the rest to the child processors one at a time and in a fixed sequence. In all the earlier studies in the literature, it had been assumed that for a load distribution to be optimal, it should be such that all the processors stop computing at the same time. In this thesis, it is shown that this assumption is in general not true, and holds only for a restricted class of single-level tree networks which satisfy a certain condition. The concept of an equivalent network is introduced to obtain a precise formulation of this condition in terms of the processor and link speed parameters.
It is shown that this condition can be used to identify processor-link pairs which can be eliminated from a given network (i.e., these processors need not be given any computational load) without degrading its time performance. It is proved that the resultant reduced network (a network from which these inefficient processor-link pairs have been removed) gives the optimal time performance if and only if the load distribution is such that all the processors stop computing at the same time instant. These results are first proved for the case when the root processor is equipped with a front-end and then extended to the case when it is not. In the latter case, an additional condition, between the speed of the root processor and the speed of each of the links, to be satisfied by the network is specified. An optimal sequence for applying these conditions is also obtained. In the case of linear networks the processing load is assumed to originate at the processor situated at one end of the network. Each processor in the network keeps its own load fraction for computing and transmits the rest to its successor. Here too, in all the earlier studies in the literature, it has been assumed that for the processing time to be a minimum, the load distribution must be such that all the processors stop computing at the same instant in time. Though this condition has been proved by others to be both necessary and sufficient, a different and more rigorous proof, similar to the case of the single-level tree network, is presented here. Finally, the effect of inaccurate modelling on the processing time and on the above conditions is discussed through an illustrative example, and it is shown that the model adopted in this thesis gives reasonably accurate results. In the case of single-level tree networks, so far it has been assumed that the root processor distributes the processing load in a fixed sequence. However, since there are m child processors, a total of m! different sequences of load distribution are possible. Using the closed-form expression derived for the processing time, it is proved here that the optimal sequence of load distribution follows the decreasing order of link speeds. Further, if physical rearrangement of processors and links is allowed, then it is shown that the optimal arrangement follows a decreasing order of link and processor speeds with the fastest processor at the root. The entire analysis is first done for the case when the root processor is equipped with a front-end, and then extended to the case when it is not. In the without-front-end case, it is shown that the same optimal sequencing result holds. However, in an optimal arrangement, the root processor need not be the fastest. In this case an algorithm has been proposed for obtaining an optimal arrangement. Illustrative examples are given for all the cases considered. Next, a new strategy of load distribution is proposed by which the processing time obtained in earlier studies can be further minimized. Here the load is distributed by the root processor to a child processor in more than one installment (instead of in a single installment) such that the processing time is further minimized. First, the case in which all the processors are equipped with front-ends is considered. Recursive equations are obtained for a heterogeneous network and these are solved for the special case of a homogeneous network (having identical processors and identical links).
Using this closed-form solution, the ultimate limits of performance are explored through an asymptotic analysis with respect to the number of installments and the number of processors in the network. Trade-off relationships between the number of installments and the number of processors in the network are also presented. These results are then extended to the case when the processors are not equipped with front-ends. Finally, the efficiency of this new strategy of load distribution is demonstrated by comparing it with the existing single-installment strategy in the literature. The multi-installment strategy explained above is then applied to linear networks. Here, the processing load is assumed to originate at one extreme end of the network. First, the case when all the processors are equipped with front-ends is considered. Recursive equations for a heterogeneous network are obtained and these are solved for the special case of a homogeneous network. Using this closed-form solution, an asymptotic analysis is performed with respect to the number of installments. However, the asymptotic results with respect to the number of processors were obtained computationally since analytical results could not be obtained. It is found that for a given network, once the number of installments is fixed, there is an optimum number of processors to be used in the network, beyond which the time performance degrades. Trade-off relationships between the number of installments and the number of processors are obtained. These results are then extended to the case when the processors are not equipped with front-ends. Comparisons with the existing single-installment strategy are also presented. The single-installment strategy discussed in the literature has the disadvantage that the front-ends of the processors are not utilized efficiently in a linear network. This is due to the fact that a processor starts computing its own load fraction only after the entire load to be communicated through its front-end has been received. In this thesis, a new strategy is proposed in which a processor starts computing as soon as it receives its load fraction, simultaneously allowing its front-end to receive and transmit load to its successors. Recursive equations are developed and solved for the special case of a heterogeneous network in which the processors and links are arranged in decreasing order of speeds. Further, it is shown that in this strategy, if the processing load originates in the interior of the network, the sequence of load distribution should be such that the load is first distributed to the side with the smaller number of processors. An expression for the optimal load origination point in the network is derived. A comparative study of this strategy with an earlier strategy is also presented. Finally, it is shown that even though the analysis is carried out for a special case of a heterogeneous network, this load distribution strategy can also be applied to a linear network in which the processors and links are arbitrarily arranged, and still yields a significant improvement in time performance.
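As a rough numerical illustration of the single-level tree analysis summarized above, the sketch below uses the standard divisible-load notation (w_i and z_i for inverse processor and link speeds, T_cp and T_cm for unit computation and communication times, root with a front-end), computes load fractions under the assumption that every participating processor stops computing at the same instant, and brute-forces all distribution sequences to illustrate the decreasing-link-speed result. The speed values are invented; this is not the thesis's own code.

```python
import numpy as np
from itertools import permutations

# Divisible-load notation: w[0] is the root's inverse compute speed, w[1..m]
# the children's, z[1..m] the inverse link speeds; Tcp, Tcm scale unit
# computation and communication time. The root has a front-end, so it keeps
# computing while it communicates. All numbers below are illustrative.
Tcp, Tcm = 1.0, 1.0
w_root = 1.0
children = [(1.0, 0.2), (2.0, 0.5), (1.5, 1.0)]   # (w_i, z_i); each child keeps its link

def finish_time(order):
    """Load fractions and overall time when the root serves the children in
    `order`, assuming all participating processors finish simultaneously."""
    w = [w_root] + [c[0] for c in order]
    z = [None] + [c[1] for c in order]
    alpha = [1.0]   # unnormalised fractions, alpha_0 = 1
    # Equal-finish-time recursion:
    #   alpha_i * w_i * Tcp = alpha_{i+1} * (z_{i+1} * Tcm + w_{i+1} * Tcp)
    for i in range(len(order)):
        alpha.append(alpha[i] * w[i] * Tcp / (z[i + 1] * Tcm + w[i + 1] * Tcp))
    alpha = np.array(alpha) / sum(alpha)            # fractions sum to the whole load
    return alpha[0] * w_root * Tcp, alpha           # root's compute time = overall time

best_order, best_T = None, np.inf
for order in permutations(children):
    T, _ = finish_time(order)
    if T < best_T:
        best_order, best_T = order, T

print("best sequence (w_i, z_i):", best_order)   # expected: increasing z_i,
print("processing time:", round(best_T, 4))      # i.e. decreasing link speed
```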
943

The extended Hertzian approach for lateral loading

Schwarzer, Norbert 11 February 2006 (has links) (PDF)
Motivated by the structure of the normal surface stress of the extended Hertzian approach [1], given by terms of the form r^(2n)*(a^2-r^2)^(1/2) (n = 0, 2, 4, 6, ...), it seems attractive to evaluate the complete elastic field also for shear loadings of this form. The reason for this lies in the demand for analytical tools for the description of mixed loading conditions as they appear, for example, in scratch experiments. [1] N. Schwarzer, "Elastic Surface Deformation due to Indenters with Arbitrary symmetry of revolution", J. Phys. D: Appl. Phys., 37 (2004) 2761-2772
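As a small side calculation (not part of the paper), the total normal load carried by each basis term r^(2n)*(a^2-r^2)^(1/2) over a contact circle of radius a can be evaluated both numerically and in closed form via a Beta function (substituting u = r^2 gives P_n = pi * a^(2n+3) * B(n+1, 3/2)); the sketch below checks the two against each other for an assumed unit contact radius.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

a = 1.0   # contact radius, arbitrary units (assumed)

def term_load_numeric(n, a):
    """Total load of a pressure term p_n(r) = r^(2n) * sqrt(a^2 - r^2),
    integrated over the contact circle: P_n = 2*pi * int_0^a p_n(r) * r dr."""
    val, _ = quad(lambda r: r ** (2 * n) * np.sqrt(a ** 2 - r ** 2) * r, 0.0, a)
    return 2.0 * np.pi * val

def term_load_closed_form(n, a):
    """Same integral via u = r^2, which yields a Beta function."""
    return np.pi * a ** (2 * n + 3) * beta(n + 1, 1.5)

for n in (0, 2, 4, 6):
    print(n, term_load_numeric(n, a), term_load_closed_form(n, a))
```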
944

Development of Moderate-Cost Methodologies for the Aerodynamic Simulation of Contra-Rotating Open Rotors.

Gonzalez-Martino, Ignacio 19 May 2014 (has links) (PDF)
This study is devoted to the development of moderate-cost methodologies for the aerodynamic simulation of open rotors. The main goals are, on the one hand, to develop and validate these rapid methodologies and, on the other hand, to better understand the mechanisms behind propeller in-plane loads, also called 1P loads. To reach the first goal, the HOST-MINT code, based on lifting-line theory, has been adapted and improved for the unsteady simulation of propellers and open rotors. The code has been assessed by comparison with experimental data and with more complex and precise CFD simulations. Finally, the first developments and tests of a Lagrangian/Eulerian coupling strategy between HOST-MINT and the elsA CFD code have been performed. These studies open the way to a number of applications of this type of rapid methodology in the aerodynamic design of future open rotors. Moreover, these methodologies may be adapted to other domains linked to aerodynamics, such as aeroelastic problems or preliminary aeroacoustic predictions.
945

Effiziente MapReduce-Parallelisierung von Entity Resolution-Workflows / Efficient MapReduce Parallelization of Entity Resolution Workflows

Kolb, Lars 11 December 2014 (has links) (PDF)
In recent years, the newly emerged Infrastructure-as-a-Service paradigm has massively changed the IT world. The provision of computing infrastructure by external providers makes it possible to acquire, on demand and at short notice, large amounts of computing power, storage and bandwidth without upfront investment. At the same time, both the amount of freely available data and the amount of data to be managed within companies are growing dramatically. The need to manage and analyse these data volumes efficiently required the further development of existing IT technologies and led to the emergence of new research fields and a multitude of innovative systems. A typical characteristic of these systems is distributed storage and data processing in large clusters of commodity hardware. The MapReduce programming model in particular has gained importance over the past ten years. It enables the distributed processing of large data volumes and abstracts from the details of distributed computing and the handling of hardware failures. This dissertation focuses on using the MapReduce concept for the automatic parallelization of computationally intensive entity resolution tasks. Entity resolution is an important subfield of information integration whose goal is to discover records, within one or several data sources, that describe the same real-world object. The dissertation presents, step by step, techniques that solve different subproblems of the MapReduce-based execution of entity resolution workflows. To detect duplicates, entity resolution approaches typically compare pairs of records using several similarity measures. Evaluating the Cartesian product of n records leads to a quadratic complexity of O(n²) and is therefore practical only for small to medium-sized data sources. For data sources with more than 100,000 records, runtimes of several hours arise even with distributed execution. So-called blocking techniques are therefore used to reduce the search space. The underlying assumption is that records that fall below a certain minimum similarity need not be compared with each other. The thesis presents a MapReduce-based implementation of the evaluation of the Cartesian product as well as of several well-known blocking techniques. After the records have been compared, the candidate pairs are finally classified as matches or non-matches. With a growing number of attribute values and similarity measures, manually defining a high-quality strategy for combining the resulting similarity values becomes barely manageable. For this reason, the thesis investigates the integration of machine learning methods into MapReduce-based entity resolution workflows. Implementing blocking techniques with MapReduce requires partitioning the set of pairs to be compared and assigning the partitions to the available processes. The assignment is based on a semantic key that is derived from the records' attribute values according to the concrete blocking strategy. For example, when deduplicating product records, one might compare only products of the same manufacturer with each other.

Processing all records that share the same key in a single process leads, under data skew, to severe load-balancing problems, which are aggravated by the inherent quadratic complexity. This drastically reduces the runtime efficiency and scalability of the corresponding MapReduce programs, since most of a cluster's resources remain idle while a few processes have to do most of the work. Providing several techniques for the even utilization of the available resources is another focus of the thesis. Blocking strategies always have to trade off efficiency against data quality. A large reduction of the search space promises a significant speed-up, but it also means that similar records, e.g. those with erroneous attribute values, are not compared with each other. It is therefore helpful to generate, for each record, several semantic keys derived from different attributes. This, however, causes similar records to be compared redundantly with respect to different keys. The thesis therefore presents algorithms for avoiding such redundant similarity computations. As the result of this work, the entity resolution framework Dedoop is presented, which abstracts from the developed MapReduce algorithms and enables a high-level specification of complex entity resolution workflows. Dedoop combines all techniques and optimizations presented in this thesis in a user-friendly system. The prototype automatically translates user-defined workflows into a set of MapReduce jobs and manages their parallel execution on MapReduce clusters. Through the full integration of the cloud services Amazon EC2 and Amazon S3 into Dedoop, and through its public availability, end users without MapReduce knowledge can execute complex entity resolution workflows on private or dynamically provisioned external MapReduce clusters.
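As a rough illustration of the workflow just described, deriving a semantic blocking key per record and comparing only the pairs that share a key, the following self-contained sketch mimics the map, group and reduce phases in plain Python rather than on a real MapReduce runtime. The records, the manufacturer-based key, the similarity function and the match threshold are illustrative assumptions, not taken from Dedoop.

```python
from collections import defaultdict
from difflib import SequenceMatcher
from itertools import combinations

# Illustrative product records: (id, manufacturer, title).
records = [
    (1, "acme", "ACME Laser Printer 3000"),
    (2, "acme", "Acme LaserPrinter 3000"),
    (3, "acme", "ACME Ink Cartridge"),
    (4, "globex", "Globex Laser Printer 3000"),
]

def map_phase(record):
    """Map: derive a semantic blocking key (here the manufacturer) and emit (key, record)."""
    _, manufacturer, _ = record
    yield manufacturer, record

def reduce_phase(key, group, threshold=0.8):
    """Reduce: compare all record pairs within one block and classify them."""
    for r1, r2 in combinations(group, 2):
        sim = SequenceMatcher(None, r1[2].lower(), r2[2].lower()).ratio()
        yield r1[0], r2[0], round(sim, 2), "match" if sim >= threshold else "non-match"

# Shuffle/group step: gather all records emitted under the same key.
groups = defaultdict(list)
for rec in records:
    for key, value in map_phase(rec):
        groups[key].append(value)

for key, group in groups.items():
    for result in reduce_phase(key, group):
        print(key, result)
```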
946

Para idênticas frequências cardíacas, a magnitude das adaptações agudas metabólicas, cárdio-respiratórias e perceptuais é variável em função do modo de exercício / For identical heart rates, the magnitude of acute metabolic, cardiorespiratory and perceptual adaptations varies with exercise mode

Abrantes, Catarina Isabel Neto Gavião January 2001 (has links)
No description available.
947

Efeitos da inclinação do terreno e da carga sobre o trabalho mecânico e o custo de transporte na caminhada humana / Effects of gradient and load on the mechanical work and the cost of transport in human walking

Gomeñuka, Natalia Andrea January 2011 (has links)
The purpose of the present study was to compare the behaviour of the mechanical parameters (Wext, Wint, Wtot, SF, SL), the energetic parameters (metabolic power, C, Eff, optimal speed) and the pendular mechanism (R, Rint, %congruity) during walking with load on the level (0%) and on gradients (+7% and +15%) at different walking speeds. Ten young men, healthy, physically active and not adapted to carrying loads in backpacks, participated in the study. The subjects walked on a treadmill for five minutes at five different speeds, without and with a load (25% of body mass) carried in a backpack, and on three different gradients (0%, 7% and 15%). Three-dimensional movement analysis (four video cameras) was performed simultaneously with the VO2 analysis, and computational routines for processing the kinematic data were written in Matlab®. The results were analyzed using repeated-measures ANOVA (factors: speed, gradient, load) with the Bonferroni correction for post-hoc comparisons (p < 0.05; SPSS 17.0). The results for the mechanical parameters indicate modifications due to speed and gradient; the load did not modify some of the variables. All of the mechanical variables increased with increasing speed; Wint and SF decreased at 7% and then increased at 15%; Wext and Wtot increased with the gradient; and SL decreased with increasing gradient. In most situations the load did not affect Wext and Wtot, showing that the mechanical parameters are, in general, independent of the load both on the level and on gradients. The energetic parameters of walking were influenced by speed, gradient and load. The metabolic power increased with increasing speed, gradient and load. The cost of transport decreased and then increased with increasing speed, reaching a minimum at intermediate speeds, and it also increased with increasing gradient and load. The efficiency increased with speed and decreased with increasing gradient and load. The optimal walking speed was reduced with increasing gradient. The pendular mechanism was found to be modified mainly as a consequence of speed and gradient, and to be independent of the load. R and Rint increase with increasing walking speed, decrease with increasing gradient, and are both independent of the load. The conclusion is that the different constraints imposed through the variation of load and gradient cause adaptations in the mechanics and energetics of human locomotion, sustaining the optimal speed and the reconversion of mechanical energies (R) on gradients. In this way, although to a smaller extent, the strategy of minimizing energy through the pendular mechanism still persists under these conditions.
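For readers unfamiliar with the pendular parameters referred to above, the sketch below shows one common way of computing the external work Wext and the recovery index R from centre-of-mass energies, in the style of Cavagna's method of summing positive energy increments. The sinusoidal centre-of-mass trajectory, body mass and sampling rate are invented for illustration and are not the study's data.

```python
import numpy as np

g = 9.81                      # m/s^2
mass = 70.0                   # kg, illustrative body mass
dt = 0.01                     # s, sampling interval
t = np.arange(0.0, 1.0, dt)   # one stride, illustrative

# Invented centre-of-mass (COM) kinematics: forward speed oscillating around
# 1.2 m/s and vertical position oscillating out of phase, as in walking.
v_fwd = 1.2 + 0.1 * np.sin(2 * np.pi * 2 * t)
z_com = 1.0 + 0.02 * np.cos(2 * np.pi * 2 * t)
v_vert = np.gradient(z_com, dt)

# Mechanical energies of the COM.
E_kf = 0.5 * mass * v_fwd ** 2                        # forward kinetic energy
E_v = mass * g * z_com + 0.5 * mass * v_vert ** 2     # potential + vertical kinetic
E_tot = E_kf + E_v

def positive_work(energy):
    """Sum of the positive increments of an energy curve (J)."""
    inc = np.diff(energy)
    return inc[inc > 0].sum()

W_f = positive_work(E_kf)      # work against forward speed fluctuations
W_v = positive_work(E_v)       # work against gravity / vertical motion
W_ext = positive_work(E_tot)   # external work on the COM

# Pendular recovery: fraction of energy exchanged between E_kf and E_v.
R = 100.0 * (W_f + W_v - W_ext) / (W_f + W_v)
print(f"Wext = {W_ext:.1f} J, recovery R = {R:.1f} %")
```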
948

Valorisation des gisements de flexibilité dans les investissements de smart grid / Impact of demand response on investment in smart grid

Battegay, Archie 09 October 2015 (has links)
The research presented in this thesis aims to evaluate the investment savings related to the implementation of direct load control. To this end, the proposed approach fits into the framework of system adequacy analysis, comparing the electrical infrastructure with demand projections. In this perspective, the models we have developed are structured in three stages. First, the ability of direct load control to change consumer demand has been evaluated. The model we propose takes into account both the side effects of load shedding and the limits of consumer availability. Based on this model, we have proposed a second model to assess the contribution of this control to the long-term supply-demand balance; it quantifies the investment savings in generation capacity enabled by flexibilities within the electric demand. Finally, we completed this approach by evaluating the impact of these flexibilities on the sizing of electrical networks. Applying these models to an energy scenario developed within the GreenLys project led to some important conclusions. Most of the infrastructure savings induced by direct load control concern generation capacities. Nevertheless, a use of consumer flexibilities that is optimal for the supply-demand balance translates locally into increased network costs. In particular, our simulations show that such a load control strategy, optimal at the national level, locally induces an increase in power flows during the most heavily loaded hours of the year. We have also shown that occasional modifications of the dispatch programs that are optimal for the supply-demand balance are sufficient to generate benefits for the entire electrical system. In the context of our study, these modifications are motivated by situations of probable network failure, which result from the conjunction of unfavourable climatic and technical hazards. The analysis we have produced shows that, if the specific situation of distribution networks is not taken into account, the shoulder seasons and off-peak hours could become more critical in the management of these networks than they are today.
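A toy numerical sketch of the first modelling stage described above: direct load control sheds part of the demand during peak hours, a share of the curtailed energy is paid back over the following hours (the "side effect" of load shedding), and the shed power is capped by a consumer-availability limit. The hourly profile, payback ratio and limits are invented assumptions, not GreenLys data.

```python
import numpy as np

# Invented hourly demand profile for one day (MW), with an evening peak.
hours = np.arange(24)
demand = 800 + 300 * np.exp(-0.5 * ((hours - 19) / 2.5) ** 2)

def apply_load_control(demand, shed_hours, max_shed_mw, payback_ratio=0.6, payback_hours=3):
    """Shed up to max_shed_mw during shed_hours; a share of the curtailed
    energy reappears evenly over the following payback_hours (rebound effect)."""
    controlled = demand.astype(float).copy()
    for h in shed_hours:
        shed = min(max_shed_mw, controlled[h])      # consumer-availability limit
        controlled[h] -= shed
        payback = payback_ratio * shed / payback_hours
        for k in range(1, payback_hours + 1):
            controlled[(h + k) % len(controlled)] += payback
    return controlled

controlled = apply_load_control(demand, shed_hours=[18, 19, 20], max_shed_mw=60.0)

print(f"original peak:   {demand.max():.0f} MW at hour {demand.argmax()}")
print(f"controlled peak: {controlled.max():.0f} MW at hour {controlled.argmax()}")
print(f"daily energy change: {controlled.sum() - demand.sum():+.0f} MWh")
```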
949

Carga eletrônica CA programável com regeneração de energia / Programmable AC electronic load with energy regeneration

Klein, Rafael Luís 28 February 2012 (has links)
This study concerns the design and implementation of a programmable AC electronic load with power regeneration capability. The equipment can be used in burn-in tests and in the development of switching power supplies. The main advantages of this kind of emulator are the reduction of power consumption, a smaller footprint compared to conventional loads, no additional cooling costs, peak-demand reduction, and the agility and ease of configuring a wide range of linear and non-linear load current profiles. The emulator is composed of a current-controlled rectifier, which draws the desired current profile from the equipment under test, and a current-controlled inverter, which injects current into the grid in phase opposition to the voltage and is thus responsible for regenerating the energy. Initially, a study of applications for the emulator is presented, in which the applicable standards and test requirements are analyzed. The power structures of the emulator are then presented. Next, the high-frequency filters are analyzed and designed, the mathematical models of the circuits needed for the controller design are obtained, and a control design methodology based on the frequency response is presented. Simulation results complement the study and validate the proposed methodology. Finally, a 4.5 kVA prototype is developed and tested, and the experimental results are analyzed and discussed.
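A rough sketch of the current-control principle behind such an emulator: a PI controller shapes the converter voltage so that the current drawn through an L-R coupling filter from the equipment under test follows a programmed, deliberately distorted reference. The filter values, controller gains and reference profile are illustrative assumptions and are not taken from the 4.5 kVA prototype.

```python
import numpy as np

# Illustrative plant: converter connected to the equipment under test
# (assumed to behave as a stiff 60 Hz source) through an L-R filter.
L, R = 2e-3, 0.1          # H, ohm (assumed coupling filter)
Vpk, f = 311.0, 60.0      # peak source voltage (V), frequency (Hz)
dt = 1e-5                 # simulation step (s)
t = np.arange(0.0, 0.1, dt)

v_src = Vpk * np.sin(2 * np.pi * f * t)

# Programmed load current: fundamental plus a 3rd harmonic (a "non-linear" profile).
i_ref = 10.0 * np.sin(2 * np.pi * f * t) + 3.0 * np.sin(2 * np.pi * 3 * f * t)

# Discrete PI current controller with source-voltage feedforward
# (gains chosen by trial for this toy plant).
kp, ki = 20.0, 2e4
i, integ = 0.0, 0.0
i_meas = np.zeros_like(t)
for k in range(len(t)):
    err = i_ref[k] - i
    integ += err * dt
    v_conv = v_src[k] - (kp * err + ki * integ)   # converter voltage command
    # L di/dt = v_src - v_conv - R*i  (current drawn from the source)
    i += dt * (v_src[k] - v_conv - R * i) / L
    i_meas[k] = i

# Tracking error over the last cycle, ignoring the start-up transient.
last = t >= (t[-1] - 1.0 / f)
rms_err = np.sqrt(np.mean((i_meas[last] - i_ref[last]) ** 2))
print(f"RMS tracking error over the last cycle: {rms_err:.3f} A")
```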
