  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

Uma metodologia baseada em algoritmos genéticos para melhorar o perfil de tensão diário de sistemas de potência = A genetic-algorithm-based methodology for improving daily voltage profile of power systems

Leone Filho, Marcos de Almeida 06 August 2012 (has links)
Advisor: Takaaki Ohishi / Doctoral thesis (Tese de doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Made available in DSpace on 2018-08-21 (GMT); previous issue date: 2012 / Abstract: The main contribution of this thesis is the proposal of a new methodology, together with the implementation of a decision support system, to support the daily scheduling of power systems in real-time transmission grid operation. The methodology improves the voltage profile of a power transmission network by fine-tuning the transformer taps so that the bus voltages in the same area stay close to a pre-specified level. Genetic Algorithms drive this optimization, so that at the end of the process a set of tap values is obtained which, if applied to the transmission network, brings the voltages closer to the desired level. Furthermore, the proposed approach is not only able to analyze a static "picture" of the power load, but can also program the hourly tap strategy for a full day of operation according to the variations of the daily load profile. The methodology is first evaluated on the IEEE 30-bus and IEEE 118-bus test systems and then applied to the Brazilian national interconnected power system (SIN). In addition, a decision support system was implemented during the course of this work; it was designed so that the transmission grid operator could use it to evaluate load flows, run sensitivity analyses with respect to possible load fluctuations at operation time, and assess contingency scenarios. / Doctorate in Electrical Engineering (Energia Elétrica), Universidade Estadual de Campinas
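The listing above only summarizes the approach; as a rough illustration of the kind of tap-optimization loop it describes, the sketch below runs a minimal genetic algorithm over discrete tap positions. The run_power_flow() function, the tap range, and the number of controllable transformers are all assumptions standing in for a real AC power-flow model, so this is a toy sketch of the idea rather than the thesis's implementation.

```python
# Illustrative sketch only: a minimal GA over transformer tap settings.
# run_power_flow() is a hypothetical stand-in for an AC power-flow solver
# that returns per-bus voltage magnitudes (p.u.) for a given tap vector.
import random

TAP_STEPS = [0.95 + 0.0125 * k for k in range(9)]   # assumed discrete tap range 0.95..1.05
N_TAPS = 4                                          # assumed number of controllable transformers
V_TARGET = 1.0                                      # desired voltage level (p.u.)

def run_power_flow(taps):
    # Placeholder: in a real setting this would solve the network equations.
    # Here it just returns dummy voltages so the sketch is runnable.
    return [V_TARGET + 0.02 * (t - 1.0) * (i + 1) for i, t in enumerate(taps)]

def fitness(taps):
    # Lower is better: total absolute deviation of bus voltages from the target level.
    voltages = run_power_flow(taps)
    return sum(abs(v - V_TARGET) for v in voltages)

def evolve(pop_size=30, generations=50, mutation_rate=0.1):
    population = [[random.choice(TAP_STEPS) for _ in range(N_TAPS)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_TAPS)            # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:          # random-reset mutation of one tap
                child[random.randrange(N_TAPS)] = random.choice(TAP_STEPS)
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

print(evolve())
```

The same loop could be run once per hourly load snapshot to build a daily tap schedule, which is the extension the abstract describes.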
472

Estratégias para a correção dos efeitos de atraso de sistemas Hardware In the Loop (HIL) / Strategies to correct the effects of delay on the Hardware In the Loop (HIL) systems

Gordillo Carrillo, Camilo Andrés 20 August 2018 (has links)
Advisor: Janito Vaqueiro Ferreira / Master's thesis (Dissertação de mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Made available in DSpace on 2018-08-20 (GMT); previous issue date: 2012 / Abstract: The Hardware In the Loop (HIL) concept is very useful in the automotive and aerospace industries, since complex systems are difficult to model. It gives great reliability to the results, reduces the risk of damage to equipment and users during operation, and shortens project development time, all without requiring a large budget or elaborate prototypes for testing. This work proposes two strategies to correct the delay exhibited by the response signal in real-time HIL systems, taking into account the actual execution sequence of the processes as well as other aspects of the acquisition and actuation systems (inertia, hardware and software limitations, sampling time). The results obtained with the proposed strategies were analyzed and compared against numerical results on an experimental test bench, showing good agreement and eliminating the delay in the response. / Master's degree in Mechanical Engineering (Mecânica dos Sólidos e Projeto Mecânico), Universidade Estadual de Campinas
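The abstract does not detail the two correction strategies; purely as one generic illustration of delay compensation in a sampled HIL loop (not necessarily what the thesis does), the sketch below extrapolates a delayed feedback signal forward by a known number of samples with a first-order predictor. The class name and the assumed integer-sample delay are illustrative choices.

```python
# Illustrative sketch: compensating a known d-sample delay in a sampled HIL
# feedback signal by linear extrapolation of the two most recent samples.
# This is a generic technique, not the specific strategies of the thesis.
from collections import deque

class DelayCompensator:
    def __init__(self, delay_samples):
        self.d = delay_samples
        self.history = deque(maxlen=2)   # keep the last two delayed samples

    def update(self, delayed_sample):
        self.history.append(delayed_sample)
        if len(self.history) < 2:
            return delayed_sample        # not enough data yet to extrapolate
        y_prev, y_curr = self.history
        slope = y_curr - y_prev          # per-sample rate of change
        return y_curr + slope * self.d   # predict d samples ahead

# Usage: feed the delayed measurement each control step and use the
# compensated value in the real-time loop instead of the raw one.
comp = DelayCompensator(delay_samples=3)
for k, y in enumerate([0.0, 0.1, 0.21, 0.33, 0.46]):
    print(k, comp.update(y))
```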
473

[en] A FAULT-TOLERANT MICROCOMPUTER FOR REAL-TIME CONTROL / [pt] UM MICROCOMPUTADOR TOLERANTE A FALHAS PARA CONTROLE EM TEMPO REAL

HELANO DE SOUSA CASTRO 16 April 2007 (has links)
[en] This work describes the design and implementation of a fault-tolerant microcomputer for real-time control applications. The system consists of a duplex structure, and the dissimilarity concept is used to minimize the probability of common-mode faults. Several fault detection mechanisms were incorporated to increase the coverage of the system. To reduce the hardcore, the only central element is the output selector, and the two processors synchronize with each other by exchanging messages.
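As a rough software analogue of the duplex arrangement with a single output selector described above (a sketch under assumed interfaces, not the actual hardware design), the snippet below compares the outputs of two dissimilar channels, forwards the value when they agree within a tolerance, and otherwise holds the last good output and raises a fault flag.

```python
# Illustrative sketch of a duplex output selector: two dissimilar channels
# compute the same control output; the selector forwards it only when the
# two results agree, otherwise it holds the last good value and raises a flag.
# This is a generic pattern, not the actual design described in the thesis.

class DuplexSelector:
    def __init__(self, tolerance=1e-6):
        self.tolerance = tolerance
        self.last_good = None
        self.fault_detected = False

    def select(self, output_a, output_b):
        if abs(output_a - output_b) <= self.tolerance:
            self.last_good = output_a        # channels agree: forward the value
            self.fault_detected = False
            return output_a
        self.fault_detected = True           # disagreement: signal a fault
        return self.last_good                # fail-safe: hold the last good output

selector = DuplexSelector(tolerance=0.01)
print(selector.select(1.000, 1.001))   # agreement -> 1.000
print(selector.select(1.000, 1.500))   # mismatch  -> holds 1.000, flags fault
print(selector.fault_detected)
```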
474

Real-time rendering of synthetic terrain

McRoberts, Duncan Andrew Keith 07 June 2012 (has links)
M.Sc. / Real-time terrain rendering (RTTR) is an exciting field in computer graphics. The algorithms and techniques developed in this domain allow immersive virtual environments to be created for interactive applications. Many difficulties are encountered in this field of research, including acquiring the data to model virtual worlds, handling huge amounts of geometry, and texturing landscapes that appear to go on forever. RTTR has been widely studied, and powerful methodologies have been developed to overcome many of these obstacles. Complex natural terrain features such as detailed vertical surfaces, overhangs and caves, however, are not easily supported by the majority of existing algorithms. It becomes difficult to add such detail to a landscape. Existing techniques are incredibly efficient at rendering elevation data, where for any given position on a 2D horizontal plane we have exactly 1 altitude value. In this case we have a many-to-1 mapping between 2D position and altitude, as many 2D coordinates may map to 1 altitude value but any single 2D coordinate maps to 1 and only 1 altitude. In order to support the features mentioned above we need to allow for a many-to-many mapping. As an example, with a cave feature for a given 2D coordinate we would have elevation values for the floor, the roof and the outer ground. In this dissertation we build upon established techniques to allow for this many-to-many mapping, and thereby add support for complex terrain features. The many-to-many mapping is made possible by making use of geometry images in place of height-maps. Another common problem with existing RTTR algorithms is texture distortion. Texturing is an inexpensive means of adding detail to rendered terrain. Many existing techniques map texture coordinates in 2D, leading to distortion on steep surfaces. Our research attempts to reduce texture distortion in such situations by allowing a more even spread of texture coordinates. Geometry images make this possible as they allow for a more even distribution of sample positions. Additionally we devise a novel means of blending tiled textures that enhances the important features of the individual textures. Fully sampled terrain employs a single global texture that covers the entire landscape. This technique provides great detail, but requires a huge volume of data. Tiled texturing requires comparatively little data, but suffers from disturbing regular patterns. We seek to reduce the gap between tiled textures and fully sampled textures. In particular, we aim at reducing the regularity of tiled textures by changing the blending function. In summary, the goal of this research is twofold. Firstly we aim to support complex natural terrain features, specifically detailed vertical surfaces, overhangs and caves. Secondly we wish to improve terrain texturing by reducing texture distortion, and by blending tiled textures together in a manner that appears more natural. We have developed a level of detail algorithm which operates on geometry images, and a new texture blending technique to support these goals.
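To make the height-map versus geometry-image distinction concrete, here is a hedged sketch under assumed array layouts (not the dissertation's renderer): a height map stores one altitude per 2D cell, a many-to-one mapping, while a geometry image stores a full 3D position per texel, so several texels can share the same (x, y) with different heights, which is what permits overhangs and caves.

```python
# Illustrative sketch (not the dissertation's implementation): sampling a
# height map gives exactly one altitude per (x, y), while a geometry image
# stores a full 3D position per texel, so overhanging surfaces are possible.
import numpy as np

# Height map: grid of altitudes, indexed by 2D position -> one z per (x, y).
height_map = np.zeros((4, 4))
height_map[2, 2] = 5.0
z = height_map[2, 2]                      # the single altitude at that cell

# Geometry image: grid of (x, y, z) positions; texel coordinates are just
# parameter space, so two texels may map to the same (x, y) with different z.
geometry_image = np.zeros((4, 4, 3))
geometry_image[1, 1] = [2.0, 2.0, 5.0]    # cave roof above ...
geometry_image[2, 1] = [2.0, 2.0, 1.0]    # ... cave floor at the same (x, y)

# Triangulating a geometry image works the same way as for a height map:
# neighbouring texels are connected into two triangles per quad.
def triangles(gimg):
    h, w, _ = gimg.shape
    tris = []
    for i in range(h - 1):
        for j in range(w - 1):
            a, b, c, d = gimg[i, j], gimg[i, j + 1], gimg[i + 1, j], gimg[i + 1, j + 1]
            tris.append((a, b, c))
            tris.append((b, d, c))
    return tris

print(len(triangles(geometry_image)))     # 3*3 quads -> 18 triangles
```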
475

The local binary pattern approach to texture analysis — extensions and applications

Mäenpää, T. (Topi) 08 August 2003 (has links)
Abstract: This thesis presents extensions to the local binary pattern (LBP) texture analysis operator. The operator is defined as a gray-scale invariant texture measure, derived from a general definition of texture in a local neighborhood. It is made invariant against the rotation of the image domain, and supplemented with a rotation invariant measure of local contrast. The LBP is proposed as a unifying texture model that describes the formation of a texture with micro-textons and their statistical placement rules. The basic LBP is extended to facilitate the analysis of textures with multiple scales by combining neighborhoods with different sizes. The possible instability in sparse sampling is addressed with Gaussian low-pass filtering, which seems to be somewhat helpful. Cellular automata are used as texture features, presumably for the first time ever. With a straightforward inversion algorithm, arbitrarily large binary neighborhoods are encoded with an eight-bit cellular automaton rule, resulting in a very compact multi-scale texture descriptor. The performance of the new operator is shown in an experiment involving textures with multiple spatial scales. An opponent-color version of the LBP is introduced and applied to color textures. Good results are obtained in static illumination conditions. An empirical study with different color and texture measures however shows that color and texture should be treated separately. A number of different applications of the LBP operator are presented, emphasizing real-time issues. A very fast software implementation of the operator is introduced, and different ways of speeding up classification are evaluated. The operator is successfully applied to industrial visual inspection applications and to image retrieval.
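For reference, the basic 8-neighbour LBP code mentioned above can be computed as in the sketch below. This is the standard textbook formulation of the operator, not the thesis's optimized real-time implementation; the image sizes and the histogram usage are illustrative.

```python
# Illustrative sketch of the basic 3x3 local binary pattern (LBP) operator:
# each of the 8 neighbours is thresholded against the centre pixel and the
# resulting bits are packed into an 8-bit code.
import numpy as np

def lbp_basic(image):
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Offsets of the 8 neighbours, enumerated clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= centre:
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes

# A texture is then described by the histogram of these codes.
patch = np.random.randint(0, 256, (16, 16))
hist = np.bincount(lbp_basic(patch).ravel(), minlength=256)
print(hist.sum())   # (16-2)*(16-2) = 196 codes
```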
476

An Image and Processing Comparison Study of Antialiasing Methods

Grahn, Alexander January 2016 (has links)
Context. Aliasing is a long-standing problem in computer graphics. It occurs because the graphics card is unable to sample the scene with infinite accuracy, which causes the application to lose colour information for the pixels. This gives objects and textures unwanted jagged edges. Post-processing antialiasing methods are one way to reduce or remove these issues in real-time applications. Objectives. This study compares two popular post-processing antialiasing methods used in modern games, Fast Approximate Antialiasing (FXAA) and Subpixel Morphological Antialiasing (SMAA). The main aim is to understand how both methods work and how they perform relative to each other. Methods. The two methods are implemented in a real-time application using DirectX 11.0. Images and processing data are collected, where the processing data consists of the update frequency of the screen rendering, known as frames per second (FPS), and the elapsed time on the graphics processing unit (GPU). Conclusions. FXAA has difficulty handling diagonal edges well but shows only minor graphical artefacts on vertical and horizontal edges. The method can produce unwanted blur along edges. The edge pattern detection in SMAA makes it able to handle all directions well. The performance results show that FXAA does not lose much FPS and is quick; FXAA is at least three times faster than SMAA on the GPU.
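To give a flavour of how a luma-based post-process filter such as FXAA decides where to blend, the sketch below computes per-pixel luma and flags pixels whose local luma contrast exceeds a threshold. It is a simplified illustration of the published FXAA idea, not the study's DirectX 11.0 implementation, and the luma weights and threshold are assumed values.

```python
# Simplified sketch of the edge-detection step used by luma-based post-process
# antialiasing such as FXAA: a pixel is treated as lying on an edge when the
# luma contrast among itself and its 4 neighbours exceeds a threshold.
import numpy as np

def luma(rgb):
    # Perceptual luma weights commonly used for edge detection.
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def edge_mask(image, threshold=0.1):
    l = luma(image.astype(np.float32))
    h, w = l.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neighbourhood = [l[i, j], l[i - 1, j], l[i + 1, j], l[i, j - 1], l[i, j + 1]]
            contrast = max(neighbourhood) - min(neighbourhood)
            if contrast > threshold:
                mask[i, j] = True      # candidate pixel: blend along the edge here
    return mask

frame = np.zeros((8, 8, 3), dtype=np.float32)
frame[:, 4:] = 1.0                     # a hard vertical edge down the middle
print(edge_mask(frame).sum())          # non-zero: the edge columns are detected
```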
477

Real-time rendering of subsurface scattering and skin / Realtidsrendering av hud

Holst, Daniel January 2017 (has links)
Rendering of skin and translucent materials as a real-time solution for a game engine.
478

Physically-based Real-time Animation

Torstensson, Erik January 2006 (has links)
The field of real-time computer animation is undergoing major changes, and many of the methods used to this point are no longer sufficient to achieve the degree of realism that is desired. There is a need for an animation method that provides greater realism, simpler ways to create animations, and more vivid and lifelike virtual creatures. This thesis suggests the possibility of doing that with a physically-based method, by researching current and alternative solutions, developing an architecture for a physically-based system, and describing an implementation of such a system.
479

High Quality Shadows for Real-time Surface Visualization

Zachrisson, Mikael January 2016 (has links)
This thesis describes the implementation of a shadowing system able to produce hard shadows. Shadow mapping is the most common real-time shadowing algorithm, but it suffers from severe aliasing artifacts and self-shadowing effects. Different advanced techniques based on shadow mapping are implemented in this thesis with the objective of creating accurate hard shadows. First, an implementation based on Cascaded Shadow Maps is presented. This technique improves the visual quality of shadow mapping by using multiple smaller shadow maps instead of a single large one. The technique addresses the fact that objects near the viewer require a higher shadow map resolution than objects far away. The second technique presented is Sub-pixel Shadow Mapping. By storing information about occluding triangles in the shadow map, this technique is able to produce accurate hard shadows with sub-pixel precision. Both methods can be combined in order to improve the resulting shadow quality. Finally, a collection of advanced biasing techniques that minimize the self-shadowing artifacts generated by shadow mapping is presented. The final implementation achieves real-time performance with considerably improved quality compared to standard shadow mapping.
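As a concrete piece of the cascaded-shadow-map idea, the sketch below computes cascade split distances with the commonly used practical split scheme, a weighted blend of logarithmic and uniform partitioning of the view range. This is a standard formulation rather than code from the thesis, and the near/far values and blend factor are illustrative.

```python
# Illustrative sketch: split distances for cascaded shadow maps using the
# "practical split scheme", a weighted blend of logarithmic and uniform
# partitions of the view range [near, far].

def cascade_splits(near, far, cascades, blend=0.5):
    splits = []
    for i in range(1, cascades + 1):
        p = i / cascades
        log_split = near * (far / near) ** p      # logarithmic partition
        uni_split = near + (far - near) * p       # uniform partition
        splits.append(blend * log_split + (1.0 - blend) * uni_split)
    return splits

# Example: a 1..1000 unit view range split into 4 cascades. Nearby cascades
# end up much shorter, so objects close to the viewer get more shadow-map
# resolution, which is exactly the motivation given in the abstract.
print(cascade_splits(near=1.0, far=1000.0, cascades=4))
```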
480

Procedural Rendering of Geometry-Based Grass in Real-Time

Tillman, Markus January 2013 (has links)
Since grass is abundant on our planet, it plays an important role in the rendering of many different outdoor scenes. This study focuses on the real-time rendering of many individual grass blades with geometry. As a grass blade in real life is very thin and has a simple shape, it can be represented with only a handful of vertices. The challenge arises when a meadow of grass is to be rendered, as it can contain billions of grass blades. Two different algorithms were developed: one uses traditional vertex buffers to store and render the grass blades, while the other makes use of textures. Quantitative data was generated from these algorithms, including images of the scene. These images were used in a questionnaire to collect qualitative information about the grass. All the generated data was then analyzed and interpreted to find advantages and disadvantages of the algorithms. The buffer-based algorithm was found to be slightly more computationally efficient than the texture-based algorithm. The visual quality was perceived as leaning towards good, while the realism was perceived as mediocre at best. The advantage of the texture-based algorithm is that it allows more options for handling the grass-blade data during rendering. Using the terrain data to generate the grass blades was concluded to be advantageous. The realism of the grass could have been improved by using a grass blade texture as well as by introducing variety in density and grass species.
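To illustrate the geometry-based representation compared in the study, here is a hedged sketch under assumed data layouts (position-only vertices, a fixed number of vertices per blade), not the study's DirectX implementation: it generates a few tapered blades per terrain cell from height data and packs them into a flat vertex-buffer-style list.

```python
# Illustrative sketch: generating geometry-based grass blades from terrain
# height data and packing them into a flat vertex-buffer-style array.
import random

def make_blade(x, z, ground_height, blade_height=0.5, width=0.02):
    # A thin tapered blade: three quad levels narrowing toward a tip vertex.
    y = ground_height
    return [
        (x - width, y, z), (x + width, y, z),                    # base
        (x - width * 0.6, y + blade_height * 0.5, z),
        (x + width * 0.6, y + blade_height * 0.5, z),            # middle
        (x - width * 0.2, y + blade_height * 0.85, z),
        (x + width * 0.2, y + blade_height * 0.85, z),           # near tip
        (x, y + blade_height, z),                                # tip
    ]

def build_vertex_buffer(heightmap, cell_size=1.0, blades_per_cell=3):
    buffer = []
    for i, row in enumerate(heightmap):
        for j, h in enumerate(row):
            for _ in range(blades_per_cell):
                # Jitter blade positions inside the terrain cell for a natural look.
                x = (j + random.random()) * cell_size
                z = (i + random.random()) * cell_size
                buffer.extend(make_blade(x, z, h))
    return buffer

terrain = [[0.0, 0.1], [0.2, 0.3]]       # tiny 2x2 height grid
vb = build_vertex_buffer(terrain)
print(len(vb))                           # 2*2 cells * 3 blades * 7 vertices = 84
```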
