651

Development Of Algorithms For Improved Planning And Operation Of Deregulated Power Systems

Surendra, S 02 1900 (has links) (PDF)
Transmission pricing and congestion management are two important aspects of modern power sectors operating in, or moving towards, a deregulated (open access) environment. The transformation of the power sector to open access, with the participation of private players and prospective power suppliers under a regime that trades electricity as a commodity, is aimed at overcoming some of the limitations of the vertically integrated system. It is believed that this transformation will bring in new technologies and efficient, alternative sources of power that are greener, self-sustaining and competitive. The demand for electrical power is ever increasing, driven by changing lifestyles, modernization and growth. Augmentation of existing capacity, siting of new power plants, and the search for viable alternative energy sources with lower environmental impact are being taken up. The cost of energy production differs across the plants integrated into the grid, depending on their type, location and technology. In interconnected networks, power can flow between two points along a very large number of possible paths, determined by the circuit parameters, operating conditions, network topology and connected loads. The transmission facility provided for power transfer has to recover its charges from the entities in the network in proportion to their utilization. Since transmission losses account for nearly 4 to 8% of total generation, they too must be accounted for and shared properly among the entities according to the connected generation/load. In this context, this thesis evaluates the shortcomings of existing tracing methods and proposes a tracing method based on the actual operating conditions of the network, taking into account the network parameters, the voltage gradient between connected buses, and the network topology as obtained from the online state estimator/load flow studies. The proposed concept is relatively simple and easy to implement within a given transactional period, and it is compared against one of the existing tracing techniques available in the literature. Active and reactive power tracing are handled in a single pass. The sum of the partial contributions from all sources in any given line of the system always matches the respective base-case flow. The AC power flow equations are nonlinear in nature. Since the sum of the partial flows in a given branch always equals the original flow, these are termed virtual flows, and the effect of nonlinearity on them is not yet established. The virtual flows in a given line are complex quantities whose complex sum equals the original complex power flow of the base case. It must therefore be determined whether these are the true partial flows. To answer this, a DC equivalent of the original AC network, called the R-P equivalent model, is proposed. This model retains only the resistances of the original network's lines and transformers, neglecting the series reactances and the shunt charging. The real power injections of the AC network, i.e., sources into their respective buses and loads (negative real power injections), are taken as the injection measurements of this R-P model, and the bus voltages (purely real quantities) are estimated by the method of least squares.
Complex quantities are absent in this model; only real terms, which are either sums or differences, are present. Virtual flows are evaluated for this model, and it has been verified that the virtual real power contributions from the sources are in near agreement with those of the original AC network. This implies that the virtual flows determined for the original network can be applied in day-to-day applications. An important feature of virtual flows is that counter-flow components can be identified. Counter-flow components are transactions taking place in the direction opposite to the net flow in a branch. If a particular source produces a counter flow in a given line, it is in effect relieving congestion to that extent. This information is lacking in most existing techniques, and counter flows are useful in managing congestion. HVDC links are integrated with HVAC systems in order to transfer bulk power and for the additional advantages they offer. The incremental cost of a DC link is zero due to the closed-loop control techniques implemented to maintain constant power transfer (excluding constant voltage or constant current control); consequently, cost allocation to HVDC remains a problem. The proposed virtual power flow tracing method is extended to HVAC systems integrated with HVDC in order to determine the extent to which a given link is utilized by the sources. Before evaluating the virtual contributions to the HVDC links, the steady-state operating condition of the combined system is obtained by performing a sequential load flow. Congestion is one of the main aspects of a deregulated system and results from several transactions taking place simultaneously through a given transmission facility. Congestion can be managed by providing pricing signals for transmission usage to the parties involved. It can also arise from the non-availability of transmission paths due to line outages resulting from contingencies. In such cases, active power redispatch of generation is considered a viable option, in addition to other available controls such as phase shifters and UPFCs, to streamline the transactions within the available corridors. The virtual power flow tracing technique proposed in this thesis is used as a guide for managing, to the extent possible, congestion arising from transactions or contingencies. The utilization of a given line by the sources in the network, in terms of real power flow, is thus obtained. These line utilization factors are called T-coefficients, and they remain approximately constant for moderate changes in the active power from the sources. A simple fuzzy-logic-based decision system is proposed to obtain the active power rescheduling of the sources for managing network congestion. To enhance system stability after rescheduling, reactive power optimization has been carried out, and sample as well as real-life systems are used to illustrate the proposed approaches. For secure operation of the network, the ideal proportion of active power scheduled from the sources for a given load pattern is found from the network [FLG] matrix. The elements of this matrix are used in computing the static voltage stability index (L-index). The [FLG] matrix is obtained from the partitioned network YBUS matrix and gives the Relative Electrical Distance (RED) of each load with respect to the sources in the network. From this RED, the ideal proportion of real power to be drawn by a given load from the different sources can be determined.
This proportion of active power scheduling from the sources is termed the Desired Proportion of Generation (DPG). If generation is scheduled accordingly, the network operates with smaller angular separation between system buses (improved angular stability), improved voltage profiles and better voltage stability. Further, the partitioned [KGL] matrix reveals the relative proportion in which the loads should draw active power from the sources as per the DPG, irrespective of the present scheduling. Another partitioned matrix, [Y'GG], is useful in finding the deviation of the present active power output of the sources from the ideal schedule. Many regional power systems are interconnected to form large integrated grids for both technical and economic benefits. In such situations, Generation Expansion Planning (GEP) has to be undertaken along with the augmentation of existing transmission facilities. Generation expansion at certain locations needs new transmission networks, which involves serious problems such as obtaining right-of-way and environmental clearance. An approach to finding suitable generation expansion locations in different zones with the least requirements for transmission network expansion has been attempted using the concept of RED. For the anticipated load growth, the capacity and siting of generation facilities are identified on a zonal basis. The validity of the proposed approach is demonstrated on sample and real-life systems using performance criteria such as voltage stability, effect on line MVA loadings and real power losses.
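As a rough illustration of the R-P idea described above, the sketch below builds the conductance Laplacian of a purely resistive network, recovers bus voltages from real power injections by least squares, and derives branch flows from the voltage gradient. It is a linearized toy with hypothetical bus numbering and values, treating injections as linear in voltage; the thesis's actual estimator works on AC operating data from a state estimator or load flow.

```python
import numpy as np

# Hypothetical 4-bus example: branches as (from_bus, to_bus, resistance in p.u.).
branches = [(0, 1, 0.02), (1, 2, 0.04), (0, 2, 0.05), (2, 3, 0.03)]
n_bus = 4

# Conductance Laplacian G of the resistive (R-P) equivalent network.
G = np.zeros((n_bus, n_bus))
for i, j, r in branches:
    g = 1.0 / r
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

# Real power injections (sources positive, loads negative); they sum to zero
# in this lossless, linearized sketch.
P = np.array([1.5, 0.5, -1.2, -0.8])

# Least-squares estimate of bus "voltages": G is singular, so the
# pseudoinverse picks the minimum-norm solution (implicit reference bus).
V = np.linalg.pinv(G) @ P

# Branch flows follow from the voltage gradient across each branch.
for i, j, r in branches:
    print(f"branch {i}-{j}: {(V[i] - V[j]) / r:+.3f} p.u.")
```

Because this toy model is linear, the flow contributed by each source could be obtained by re-solving with that source's injection alone, which is one hypothetical route to the per-source partial (virtual) flows the abstract describes.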
652

Echographie oculaire transcornéenne par sonde linéaire multi-éléments haute-fréquence : étude et correction des effets aberrateurs du cristallin dans la reconstruction d'image en mode-B / Transcorneal ocular ultrasonography with high frequency linear array : study and correction of the phase aberration induced by the crystalline lens in B-mode imaging

Matéo, Tony 18 December 2014 (has links)
In ophthalmic ultrasonography the crystalline lens is known to be the main source of phase aberration, as ultrasound (US) propagates in it about 10% faster than in the surrounding intra-ocular media. It therefore significantly impairs both the spatial and contrast resolution of axial B-scans and, in addition, causes substantial distortion, especially at the ocular fundus. To address this issue, and in view of the imminent arrival of high-frequency US arrays in ophthalmologic practice, this thesis develops an adapted beamforming (BF) free from crystalline lens aberrations. It relies on a two-point ray-tracing approach to compute focusing delays that take the crystalline lens aberrations into account, including refraction at its interfaces with the humors. Initially developed assuming a uniform US velocity in the lens, the adapted BF has been extended to account for the velocity gradient present in the real lens. In vitro and ex vivo results obtained with a 20 MHz linear array driven by a US research scanner (the ECODERM) are reported.
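To make the focusing-delay idea concrete, here is a minimal Python sketch that computes per-element transmit delays for a focal point lying behind a faster lens layer: each straight ray is split at the layer bounds and its segments are weighted by the local sound speed. All geometry and speeds are hypothetical, and, unlike the two-point ray tracing of the thesis, this simplification ignores refraction at the lens interfaces.

```python
import numpy as np

c0, c_lens = 1532.0, 1650.0               # m/s: humors vs. lens (~8-10% faster; illustrative)
z1, z2 = 3e-3, 7e-3                       # lens layer bounds along depth (hypothetical)
elements = np.linspace(-5e-3, 5e-3, 64)   # x-positions of the linear-array elements
xt, zt = 0.0, 20e-3                       # focal point, e.g. on the ocular fundus

times = []
for xe in elements:
    # Straight ray from element (xe, 0) to target (xt, zt); depth grows
    # linearly along it, so the lens occupies the fraction (z2 - z1) / zt.
    length = np.hypot(xt - xe, zt)
    f1, f2 = z1 / zt, z2 / zt
    t = (f1 * length) / c0 + ((f2 - f1) * length) / c_lens + ((1 - f2) * length) / c0
    times.append(t)

times = np.array(times)
delays = times.max() - times              # transmit delays aligning arrivals at the focus
print(delays[:4] * 1e9, "ns")
```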
653

Sketch-based intuitive 3D model deformations

Bao, Xin January 2014 (has links)
In 3D modelling software, deformations are used to add, remove or modify geometric features of existing 3D models so as to create new models with similar but slightly different details. Traditional techniques for deforming virtual 3D models require users to explicitly define control points and regions of interest (ROIs), and to specify precisely how the ROIs are deformed via the control points. The awkwardness of defining these factors in traditional 3D modelling software makes it difficult for people with limited 3D modelling experience to deform existing models as they intend. As applications requiring virtual 3D model processing become more and more widespread, it becomes increasingly desirable to lower the "difficulty of use" threshold of 3D model deformation for users. This thesis argues that the user experience, in terms of intuitiveness and ease of use, of a user interface for deforming virtual 3D models can be greatly enhanced by sketch-based deformation techniques that require minimal interaction, while preserving the plausibility of the deformation results and the responsiveness of the algorithms on modern consumer-grade computing devices. A prototype system for sketch-based 3D model deformation is developed and implemented to support this hypothesis; it allows the user to perform a deformation with a single deforming stroke, eliminating the need to explicitly select control points, the ROI and the deforming operation. GPU-based acceleration is employed to optimise the runtime performance of the system so that it is responsive enough for real-time interaction. Studies of the runtime performance and usability of the prototype system are conducted to provide evidence supporting the hypothesis.
654

Extensão da transformada imagem-floresta diferencial para funções de conexidade com aumentos baseados na raiz e sua aplicação para geração de superpixels / Extending the differential image foresting transform to connectivity functions with root-based increases and its application for superpixels generation

Marcos Ademir Tejada Condori 11 December 2017 (has links)
Image segmentation is a problem of great relevance in computer vision, in which an image is divided into relevant regions, for example to isolate an object of interest for a given application. Segmentation methods based on the Image Foresting Transform (IFT) with monotonically incremental (MI) connectivity functions have achieved great success in several contexts. In interactive image segmentation, where the user may specify the desired object, new seeds can be added and/or removed to correct the labeling until the expected segmentation is achieved. This process generates a sequence of IFTs that can be computed more efficiently by the Differential Image Foresting Transform (DIFT). Recently, non-monotonically incremental (NMI) connectivity functions have been used successfully in the IFT framework for image segmentation, allowing the incorporation of high-level information such as shape constraints, boundary polarity and connectivity constraints in order to tailor the segmentation to a given target object. NMI functions have also been successfully exploited in the generation of superpixels, via sequences of IFT executions. In this work, we present a study of the Differential Image Foresting Transform in the case of NMI functions. Our study indicates that the original DIFT algorithm presents a series of inconsistencies for non-monotonically incremental functions. This work extends the DIFT to a subset of NMI functions on directed graphs and shows its application in the context of superpixel generation. Another application presented, to highlight the relevance of NMI functions, is the Bandeirantes algorithm for curve tracing and boundary tracking.
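For readers unfamiliar with the IFT, the sketch below implements its core on a 2D grid with a simple additive, monotonically incremental path cost: a Dijkstra-like propagation that grows an optimum-path forest from labeled seeds (4-neighborhood, arc cost read from the destination pixel; both are assumptions for illustration). The thesis's contribution concerns the differential (DIFT) recomputation and the trickier NMI cost functions, neither of which this baseline covers.

```python
import heapq
import numpy as np

def ift_forest(weights, seeds):
    """Minimal Image Foresting Transform with the additive (MI) cost
    f(pi . <s,t>) = f(pi) + w(t). `weights` holds per-pixel arc costs;
    `seeds` maps (row, col) -> label."""
    h, w = weights.shape
    cost = np.full((h, w), np.inf)
    label = np.zeros((h, w), dtype=int)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        cst, r, c = heapq.heappop(heap)
        if cst > cost[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                new_cost = cst + weights[nr, nc]
                if new_cost < cost[nr, nc]:
                    cost[nr, nc] = new_cost
                    label[nr, nc] = label[r, c]
                    heapq.heappush(heap, (new_cost, nr, nc))
    return label, cost

# Example: two seeds competing across a high-cost column acting as a barrier.
w = np.array([[0, 0, 9, 0]] * 4, dtype=float)
labels, costs = ift_forest(w, {(0, 0): 1, (0, 3): 2})
print(labels)
```

In the differential setting the thesis studies, adding or removing a seed would invalidate and recompute only the affected subtrees of this forest rather than rerunning the whole propagation.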
655

Numerický model uspořádání dutých vláken v tepelném výměníku / Numerical model of hollow fiber arrangement in heat exchanger

Cabalová, Klára January 2020 (has links)
This paper deals with a numerical model of the fiber arrangement in a heat exchanger. The heat exchanger is scanned in an industrial tomograph and the acquired data are represented as a voxel field. The method used here traces fiber fragments by means of image analysis and then numerically connects the fragments. The result is a set of fibers, each represented by the points in the field through which it passes.
656

Numerické modelování zdrojů světla / Numerical Modelling of the Light Source

Pavelka, Adam January 2011 (has links)
The master's thesis deals with photometric units used in lighting engineering. It defines methods for modelling illumination systems, together with their advantages, disadvantages and possible uses. Furthermore, the thesis deals with the modelling of two illumination systems by ray tracing in the MATLAB programming environment, and describes the analysis of the problem and the program design process. The model results are then compared with real measurements of both illumination systems, allowing a discussion of the results and of the models' deviations.
657

Simulace poslechového prostoru, azimutu a vzdálenosti zvukového zdroje pro vícekanálové ozvučovací systémy / Simulations of auditory space and azimuth & distance of sound source, for multichannel sound systems

Orlovský, Kristián January 2011 (has links)
This thesis is aimed at the simulation of auditory space. It describes the most frequently used panning method, Vector Base Amplitude Panning (VBAP), and also focuses on the image-source method, which allows the parameters of the direct sound wave and its reflections in a rectangular room to be computed; this method is compared with the commonly used ray-tracing method. The thesis also deals with the frequency-dependent absorption of materials when a sound wave reflects from a wall. Based on this information, two applications were designed in the MATLAB development environment: the first simulates the auditory space, and the second pans a sound source by its azimuth and distance.
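As a pointer to how VBAP works in its simplest two-loudspeaker, 2D form, the Python sketch below (the thesis's own implementations are in MATLAB) solves Pulkki's gain equation g = L^-1 p and power-normalizes the result; the speaker and source azimuths are illustrative, and the full method generalizes to loudspeaker triplets in 3D.

```python
import numpy as np

def vbap_2d(source_az_deg, spk1_az_deg, spk2_az_deg):
    """Gains for a phantom source between two loudspeakers (2D VBAP):
    solve p = g1*l1 + g2*l2, then normalize so g1^2 + g2^2 = 1."""
    to_vec = lambda az: np.array([np.cos(np.radians(az)), np.sin(np.radians(az))])
    p = to_vec(source_az_deg)                         # unit vector toward the source
    L = np.column_stack([to_vec(spk1_az_deg), to_vec(spk2_az_deg)])
    g = np.linalg.solve(L, p)                         # unnormalized gain factors
    return g / np.linalg.norm(g)                      # constant-power normalization

# Source at +10 degrees between speakers at -30 and +30: the second gain dominates.
print(vbap_2d(10.0, -30.0, 30.0))
```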
658

Mobilní app pro měření odstupu od předchozího vozidla v provozu / Mobile App for Measuring the Range from the Preceding Vehicle in Traffic

Henry, Andrii January 2015 (has links)
This master's thesis deals with the development of a mobile app for measuring the distance to the preceding vehicle in traffic using vision-based methods. It describes the implementation of computer vision algorithms for detecting and tracking objects and for horizon detection on desktop and mobile devices, and it also deals with vision-based range measurement without any additional sensing hardware. The output of the work is a set of vehicle and horizon detectors implemented with the OpenCV library on the Windows platform, together with a draft user interface for a mobile application on the Android platform.
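A minimal sketch of the kind of detector-plus-range pipeline such a thesis describes, using OpenCV's standard cascade detector. The cascade file name, camera index and calibration constants are hypothetical stand-ins; the thesis's actual detectors and range method may differ.

```python
import cv2

# Hypothetical cascade file: the thesis trains/uses its own vehicle detector;
# any Haar/LBP cascade XML trained for rear views of cars would slot in here.
cascade = cv2.CascadeClassifier("vehicle_rear_cascade.xml")
assert not cascade.empty(), "provide a trained cascade file"

cap = cv2.VideoCapture(0)  # dashboard camera (index is an assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Multi-scale sliding-window detection over the grayscale frame.
    vehicles = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    for (x, y, w, h) in vehicles:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # A monocular range cue: with a known real vehicle width W and focal
        # length f (in pixels), distance ~ f * W / w; calibration is required.
    cv2.imshow("preceding vehicle", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```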
659

Amélioration des techniques de reconnaissance automatique de mines marines par analyse de l'écho à partir d'images sonar haute résolution / Improvement of automatic recognition techniques of marine mines by analyzing echo in high resolution sonar images

Elbergui, Ayda 10 December 2013 (has links)
Underwater target classification is mainly based on the analysis of acoustic shadows. The new generation of imaging sonars provides a more accurate description of the acoustic wave scattered by targets, so combining the analysis of shadows and echoes is a promising way to improve automated target classification. Some reliable schemes for automated target classification rely on model-based learning, instead of using only experimental samples of target acoustic responses to train the classifier. With this approach, a good level of classification performance can be obtained if the modeling of the target's acoustic response is accurate enough. Implementing the classification method therefore first required precisely modeling the acoustic response of the targets. The result of this modeling process is a simulator called SIS (Sonar Image Simulator). As imaging sonars operate at high or very high frequency, the core of the model is based on acoustic ray tracing. Several phenomena are taken into account to increase the realism of the acoustic response (multi-path propagation, interaction with the surrounding seabed, edge diffraction, etc.). The first stage of the classifier follows a model-based approach using the highlight information of the target's acoustic signature, called the "A-scan": the A-scan of the detected target is compared with a set of A-scans simulated by SIS under the same operational conditions. To train the classifier, a template base of A-scans is created by modeling manmade objects of simple and complex shapes (Mine-Like Objects or not). The comparison is based on matched filtering, which allows a more flexible result by introducing a degree of match related to the maximum correlation coefficient. With this approach the training set can be progressively extended to improve classification when classes are strongly correlated. If the difference between the correlation coefficients of the most likely classes is not sufficient, the result is considered ambiguous. A second stage is proposed to discriminate such classes by adding new features and/or extending the initial training set with more A-scans in new configurations derived from the ambiguous ones. The classification process is assessed mainly on simulated side-scan sonar data, but also on a limited set of real data. The use of A-scans achieves good classification performance in a mono-view configuration and improves the classification result for some recurrent confusions of methods based only on shadow analysis.
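The template-matching stage lends itself to a compact sketch: normalize each A-scan, take the peak of the sliding correlation against every simulated template as the degree of match, and flag the decision as ambiguous when the two best classes score too closely. The function name and margin value below are illustrative, not taken from the thesis.

```python
import numpy as np

def best_template_match(ascan, templates, ambiguity_margin=0.05):
    """Compare a measured A-scan against simulated templates; the peak of the
    normalized sliding correlation serves as the degree of match, and the
    result is flagged ambiguous when the two best classes are too close."""
    x = (ascan - ascan.mean()) / (ascan.std() + 1e-12)
    scores = {}
    for name, tpl in templates.items():
        t = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
        corr = np.correlate(x, t, mode="full") / len(t)   # sliding correlation
        scores[name] = corr.max()
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    ambiguous = len(ranked) > 1 and ranked[0][1] - ranked[1][1] < ambiguity_margin
    return ranked[0], ambiguous, scores

# Toy usage with synthetic signals standing in for SIS-generated templates.
rng = np.random.default_rng(0)
tpls = {"cylinder": rng.standard_normal(64), "sphere": rng.standard_normal(64)}
measured = np.concatenate([np.zeros(20), tpls["cylinder"], np.zeros(20)])
print(best_template_match(measured, tpls)[:2])
```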
660

Concepts for In-memory Event Tracing: Runtime Event Reduction with Hierarchical Memory Buffers

Wagner, Michael 03 July 2015 (has links)
This thesis contributes to the field of performance analysis in High Performance Computing with new concepts for in-memory event tracing. Event tracing records the runtime events of an application and stores each with a precise time stamp and further relevant metrics. The high resolution and detailed information allow an in-depth analysis of dynamic program behavior, interactions in parallel applications, and potential performance issues. For long-running, large-scale parallel applications, event-based tracing faces three as-yet-unsolved challenges: the number of resulting trace files limits scalability, the huge amount of collected data overwhelms file systems and analysis capabilities, and measurement bias, in particular due to intermediate memory buffer flushes, prevents a correct analysis. This thesis proposes concepts for an in-memory event tracing workflow. These concepts include new enhanced encoding techniques to increase memory efficiency and novel strategies for runtime event reduction that dynamically adapt the trace size during runtime. An in-memory event tracing workflow based on these concepts meets all three challenges: first, it not only overcomes the scalability limitation due to the number of resulting trace files but eliminates the overhead of file system interaction altogether; second, the enhanced encoding techniques and event reduction lead to remarkably smaller trace sizes; finally, an in-memory event tracing workflow completely avoids intermediate memory buffer flushes, which minimizes measurement bias and allows a meaningful performance analysis. The concepts further include the Hierarchical Memory Buffer data structure, which incorporates a multi-dimensional, hierarchical ordering of events by common metrics such as time stamp, calling context, event class, and function call duration. This hierarchical ordering enables low-overhead event encoding, event reduction and event filtering, as well as new hierarchy-aided analysis requests. An experimental evaluation based on real-life applications and a detailed case study underlines the capabilities of the concepts presented in this thesis. The new enhanced encoding techniques reduce memory allocation during runtime by a factor of 3.3 to 7.2 while introducing no additional overhead. Furthermore, the combined concepts (the enhanced encoding techniques, event reduction, and a new filter based on function duration within the Hierarchical Memory Buffer) reduce the resulting trace size by up to three orders of magnitude and keep an entire measurement within a single fixed-size memory buffer, while still providing a coarse but meaningful analysis of the application. The thesis includes a discussion of the state of the art and related work, a detailed presentation of the enhanced encoding techniques, the event reduction strategies, and the Hierarchical Memory Buffer data structure, and an extensive experimental evaluation of all concepts.
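To give a feel for two of these ideas (delta-based timestamp encoding and runtime event reduction in place of buffer flushes), here is a deliberately simplified sketch. The class name, capacity, and the drop-shortest-calls policy are illustrative choices; the real Hierarchical Memory Buffer orders events along several metric dimensions rather than the single duration criterion used here.

```python
from dataclasses import dataclass, field

@dataclass
class InMemoryTraceBuffer:
    """Toy fixed-capacity event buffer: timestamps are stored as small deltas
    instead of absolute values, and when the buffer fills, the shortest
    completed function calls are discarded instead of flushing to disk."""
    capacity: int = 1024
    events: list = field(default_factory=list)   # (delta_ts, name, duration)
    last_ts: int = 0

    def record(self, ts, name, duration):
        self.events.append((ts - self.last_ts, name, duration))  # delta encoding
        self.last_ts = ts
        if len(self.events) > self.capacity:
            self._reduce()

    def _reduce(self):
        # Runtime event reduction: drop roughly the shortest quarter of calls,
        # so the buffer never flushes and measurement bias stays low.
        cutoff = sorted(e[2] for e in self.events)[len(self.events) // 4]
        kept, dropped_dt = [], 0
        for dt, name, dur in self.events:
            if dur >= cutoff:
                # Fold the deltas of dropped events into the next kept event
                # so absolute timestamps remain reconstructible.
                kept.append((dt + dropped_dt, name, dur))
                dropped_dt = 0
            else:
                dropped_dt += dt
        self.events = kept
```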
