  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Shortening time-series power flow simulations for cost-benefit analysis of LV network operation with PV feed-in

López, Claudio David January 2015 (has links)
Time-series power flow simulations are consecutive power flow calculations on each time step of a set of load and generation profiles that represent the time horizon under which a network needs to be analyzed. These simulations are one of the fundamental tools for carrying out cost-benefit analyses of grid planning and operation strategies in the presence of distributed energy resources; unfortunately, their execution time is substantial. In the specific case of cost-benefit analyses, the execution time of time-series power flow simulations can easily become excessive, as typical time horizons are on the order of a year and different scenarios need to be compared, which results in time-series simulations that require a rather large number of individual power flow calculations. Often only a set of aggregated simulation outputs is required for assessing grid operation costs, examples of which are total network losses, power exchange through MV/LV substation transformers, and total power provision from PV generators. It can thus be beneficial to explore alternatives to running time-series power flow simulations on complete input data: methods that approximate the required results with an accuracy suitable for cost-benefit analyses but require less time to compute. This thesis explores and compares different methods for shortening time-series power flow simulations by reducing the amount of input data, and thus the required number of individual power flow calculations. It focuses on two of them: one reduces the time resolution of the input profiles through downsampling, while the other finds similar time steps in the input profiles through vector quantization and simulates them only once.
The results show that considerable execution time reductions and sufficiently accurate results can be obtained with both methods, but vector quantization requires much less data to produce the same level of accuracy as downsampling. Vector quantization delivers a far superior trade-off between data reduction, time savings, and accuracy when the simulations consider voltage control or when more than one simulation with the same input data is required, since in such cases the data reduction process needs to be carried out only once. A disadvantage shared by both methods is that they do not reproduce peak values in the result profiles accurately, because downsampling disregards certain time steps in the input profiles and vector quantization has an averaging effect on them. This makes the shortened simulations less suitable, for example, for detecting voltage violations.
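The two input-reduction strategies can be sketched in a few lines. This is a minimal illustration, assuming a made-up 24-step load profile; the thesis works with year-long load and PV profiles and runs a full power flow per retained time step, which is elided here:

```python
import random

# Hypothetical hourly load profile for one day (kW); illustrative values only.
profile = [0.3, 0.3, 0.4, 0.5, 0.9, 1.4, 1.2, 1.0, 0.8, 0.9, 1.1, 1.5,
           1.6, 1.3, 1.0, 0.9, 1.2, 1.8, 2.0, 1.7, 1.1, 0.7, 0.5, 0.4]

def downsample(profile, factor):
    """Keep every `factor`-th time step, so only len/factor power flows run."""
    return profile[::factor]

def vector_quantize(profile, k, iters=20, seed=0):
    """1-D k-means: similar time steps share one representative value, which
    is simulated once; cluster sizes weight the aggregated results."""
    rng = random.Random(seed)
    centroids = rng.sample(profile, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in profile:
            nearest = min(range(k), key=lambda j: abs(x - centroids[j]))
            clusters[nearest].append(x)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    weights = [len(c) for c in clusters]
    return centroids, weights

coarse = downsample(profile, 4)              # 6 power flow calculations
reps, counts = vector_quantize(profile, 6)   # also 6, but data-driven
print(len(coarse), len(reps))
```

Both reductions leave six power flow calculations instead of twenty-four; the difference the thesis measures is how well the aggregated outputs computed from the reduced sets approximate those of the full simulation.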
232

Quantização segundo o formalismo BRST-BFV de uma teoria com simetria de gauge e simetria conforme em um espaço-tempo com (d+2) dimensões / Quantization according to the BRST-BFV formalism of a theory with gauge symmetry and conformal symmetry in a spacetime with (d+2) dimensions

Sacramento, Wilson Pereira do 18 September 2003 (has links)
Generally covariant systems have a vanishing canonical Hamiltonian, and their time evolution is determined by an effective Hamiltonian. This effective Hamiltonian is gauge dependent, and its form varies with the gauge choice. Dirac proposed a group-theoretical method for determining the effective Hamiltonian; we propose a method based on gauge theories, in the BRST-BFV formalism, to determine it. The method is applied to the relativistic particle and to a two-time model, which is also generally covariant. For the massless relativistic particle with spin N/2 we obtain the effective Hamiltonian in the canonical gauges proposed by Dirac, called the forms of dynamics: instant, front, and point. To this end, we determine the appropriate gauge-fixing fermionic function in the BRST-BFV formalism. The gauge-fixing function breaks the symmetries of the original action, both local and global, so that the effective Hamiltonian is invariant under a symmetry group smaller than that of the classical action. In two-time physics, the symmetry group of the classical action is the conformal group SO(d,2), larger than the Poincaré group of the relativistic particle; the action is also invariant under the local symmetry OSp(N|2). Using the same technique applied to the relativistic particle, we determine the effective Hamiltonians after the gauges are fixed. Their symmetries are smaller than those of the original action, but larger than those of the relativistic particle. We find an arbitrary non-relativistic Hamiltonian, invariant under rotations in a space with (d-1) spatial dimensions and spin N/2. In this work we address some problems that appear in the two-time physics formulated by I. Bars, such as the arbitrariness of the Hamiltonians and of the gauge choices that lead to them.
Bars chose the Hamiltonians arbitrarily as combinations of generators of the conformal group, and made complicated and arbitrary gauge choices. We present simpler gauge choices that, in a systematic way, result in Hamiltonians with symmetry groups smaller than that of the original action. Moreover, the result described above, i.e., the arbitrary Hamiltonian with spin N/2, had not been obtained before.
233

Alguns problemas de quantização em teorias com fundos não-abelianos e em espaços-tempo não-comutativos / Some quantization problems in theories with non-Abelian backgrounds and in non-commutative spacetimes

Fresneda, Rodrigo 06 October 2008 (has links)
This thesis is based on three papers published by the author and collaborators. The first paper treats the quantization of pseudoclassical models of scalar particles in non-Abelian background fields, focusing on the derivation of these models by path-integral methods. The second paper investigates the possibility of realizing dilaton gravity models on noncommutative two-dimensional manifolds, relying on a method for the analysis of constraints and symmetries developed specifically for noncommutative dilaton gravity in two dimensions. The third paper discusses renormalizable models in noncommutative spacetimes with a bifermionic noncommutativity parameter in four dimensions.
234

An empirical analysis of scenario generation methods for stochastic optimization

Löhndorf, Nils 17 May 2016 (has links) (PDF)
This work presents an empirical analysis of popular scenario generation methods for stochastic optimization, including quasi-Monte Carlo, moment matching, and methods based on probability metrics, as well as a new method referred to as Voronoi cell sampling. Solution quality is assessed by measuring the error that arises from using scenarios to solve a multi-dimensional newsvendor problem, for which analytical solutions are available. In addition to the expected value, the work also studies scenario quality when minimizing the expected shortfall using the conditional value-at-risk. To quickly solve problems with millions of random parameters, a reformulation of the risk-averse newsvendor problem is proposed which can be solved via Benders decomposition. The empirical analysis identifies Voronoi cell sampling as the method that provides the lowest errors, with particularly good results for heavy-tailed distributions. A controversial finding concerns evidence for the ineffectiveness of widely used methods based on minimizing probability metrics under high-dimensional randomness.
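The error measure described above can be illustrated with a one-dimensional sketch (the thesis uses a multi-dimensional newsvendor; the normal demand parameters and costs below are made up): the analytical optimum is the critical-ratio quantile of the demand distribution, and a scenario solution is the corresponding empirical quantile of the sampled demands.

```python
import math
import random

# Single-item newsvendor with Normal(mu, sigma) demand (illustrative values).
mu, sigma = 100.0, 20.0
cost_under, cost_over = 4.0, 1.0                 # underage / overage costs
ratio = cost_under / (cost_under + cost_over)    # critical ratio = 0.8

def normal_quantile(p, mu, sigma):
    """Quantile of Normal(mu, sigma) via bisection on the CDF (math.erf)."""
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        cdf = 0.5 * (1 + math.erf((mid - mu) / (sigma * math.sqrt(2))))
        lo, hi = (mid, hi) if cdf < p else (lo, mid)
    return 0.5 * (lo + hi)

q_exact = normal_quantile(ratio, mu, sigma)      # analytical optimum

def scenario_solution(n, seed=0):
    """Order quantity from n plain Monte Carlo demand scenarios:
    the empirical critical-ratio quantile of the sampled demands."""
    rng = random.Random(seed)
    demands = sorted(rng.gauss(mu, sigma) for _ in range(n))
    return demands[min(n - 1, int(ratio * n))]

for n in (10, 100, 10000):
    print(n, abs(scenario_solution(n) - q_exact))
```

The gap between the scenario solution and the analytical optimum is exactly the kind of error the empirical analysis measures, with plain Monte Carlo as the baseline against which moment matching, probability-metric methods, and Voronoi cell sampling are compared.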
235

Controle quântico ótimo: fundamentos, aplicações e extensões da teoria / Optimal quantum control: fundamentals, applications and extensions of the theory

Lisboa, Alexandre Coutinho 31 March 2015 (has links)
Firstly, the fundamental concepts and basic issues of the control of quantum systems are presented, highlighting the physical and dynamical questions involved, the main control types and methodologies in the quantum context, and current and potential applications of quantum control, many of them at the forefront of science and technology. An exposition of the basic theoretical framework and standard formalism of quantum mechanics follows, providing the elements needed to understand quantum systems, their dynamics, and their control. The concept of controllability is then presented in the context of quantum systems. Subsequently, the fundamentals of optimal quantum control are developed as an extension of classical optimal control theory, with examples of application. Special attention is devoted to the problem of transferring quantum states to a target state in minimal time, given its great relevance in state-of-the-art technological applications such as quantum computation and quantum information processing. From the physical limitations inherent to any quantum system regarding the minimal time needed for a state transition, figures of merit are proposed to quantify the efficiency of optimal quantum controls that minimize the state-transfer time. Examples of application, theoretical studies, and case studies are carried out to define the associated figures of merit. The work ends with studies of a possible formulation of optimal quantum control theory in terms of path integrals for treating continuous quantum systems, in particular the space-time control of quantum particles. A possible use of the Aharonov-Bohm effect as a quantum control strategy is also discussed.
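The "physical limitation on the minimal transition time" mentioned above is commonly formalized by the Mandelstam-Tamm quantum speed limit, quoted here as standard context (the thesis's own figures of merit may differ):

```latex
% Mandelstam--Tamm bound: the minimal time for a state to evolve into an
% orthogonal state, given the energy uncertainty \Delta E of the state.
\tau_{\perp} \;\ge\; \frac{\pi \hbar}{2\,\Delta E}
```

Figures of merit of the kind described can then compare the transfer time achieved by an optimal control against such a fundamental bound.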
236

Geração de imagens artificiais e quantização aplicadas a problemas de classificação / Artificial image generation and quantization applied to classification problems

Thumé, Gabriela Salvador 29 April 2016 (has links)
Each image can be represented as a combination of several features, such as the color intensity histogram or texture properties. These features compose a multidimensional vector that represents the image, which is commonly given as input to a pattern classification method that, after learning from many examples, builds a decision model. Studies suggest that image preparation, through careful acquisition, preprocessing, and segmentation, can significantly impact classification. Besides the lack of image treatment before feature extraction, class imbalance is also an obstacle to satisfactory classification. Images have properties that can be exploited to improve the description of the objects of interest and, therefore, their classification. Possible improvements include: reducing the number of image intensities before feature extraction, instead of applying quantization methods to the already extracted feature vector; and generating new images from the original ones, so as to balance datasets in which the number of examples per class is uneven. This dissertation therefore proposes to improve image classification by applying image processing methods before feature extraction, specifically analyzing the influence of dataset balancing and of quantization on classification. The study also analyzes the visualization of the feature space after artificial image generation and after interpolation of the features extracted from the original images (SMOTE), compared with the original space, with emphasis on the importance of class rebalancing.
The results indicate that quantization simplifies the images before feature extraction and subsequent dimensionality reduction, producing more compact vectors; and that rebalancing image classes through artificial image generation can improve classification of the image dataset, compared with the original classification and with methods applied in the space of already extracted features.
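The "reduce intensities before feature extraction" idea can be sketched as follows; the 4x4 image and the choice of four gray levels are made up for illustration:

```python
# Toy 8-bit grayscale "image" (illustrative values only).
image = [
    [ 12,  40,  77, 200],
    [ 35,  90, 140, 220],
    [ 60, 110, 180, 250],
    [  5,  70, 130, 255],
]

def quantize(image, levels):
    """Map 8-bit intensities onto `levels` uniform bins before any
    feature is extracted."""
    step = 256 // levels
    return [[min(p // step, levels - 1) for p in row] for row in image]

def histogram_features(image, levels):
    """Normalized intensity histogram of an already-quantized image."""
    hist = [0] * levels
    for row in image:
        for p in row:
            hist[p] += 1
    total = sum(hist)
    return [h / total for h in hist]

small = quantize(image, 4)                 # 4 gray levels instead of 256
features = histogram_features(small, 4)    # 4-dimensional feature vector
print(features)
```

Quantizing first shrinks the histogram feature vector from 256 dimensions to 4, which is the "more compact vectors" effect reported in the results; the dissertation's actual pipelines and descriptors are of course richer than this sketch.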
237

Wideband extension of narrowband speech for enhancement and coding

Epps, Julien, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2000 (has links)
Most existing telephone networks transmit narrowband coded speech which has been bandlimited to 4 kHz. Compared with normal speech, this speech has a muffled quality and reduced intelligibility, which is particularly noticeable in sounds such as /s/, /f/ and /sh/. Speech which has been bandlimited to 8 kHz is often coded for this reason, but this requires an increase in the bit rate. Wideband enhancement is a scheme that adds a synthesized highband signal to narrowband speech to produce a higher quality wideband speech signal. The synthesized highband signal is based entirely on information contained in the narrowband speech, and is thus achieved at zero increase in the bit rate from a coding perspective. Wideband enhancement can function as a post-processor to any narrowband telephone receiver, or alternatively it can be combined with any narrowband speech coder to produce a very low bit rate wideband speech coder. Applications include higher quality mobile, teleconferencing, and internet telephony. This thesis examines in detail each component of the wideband enhancement scheme: highband excitation synthesis, highband envelope estimation, and narrowband-highband envelope continuity. Objective and subjective test measures are formulated to assess existing and new methods for all components, and the likely limitations to the performance of wideband enhancement are also investigated. A new method for highband excitation synthesis is proposed that uses a combination of sinusoidal transform coding-based excitation and random excitation. Several new techniques for highband spectral envelope estimation are also developed. The performance of these techniques is shown to be approaching the limit likely to be achieved. Subjective tests demonstrate that wideband speech synthesized using these techniques has higher quality than the input narrowband speech. 
Finally, a new paradigm for very low bit rate wideband speech coding is presented in which the quality of the wideband enhancement scheme is improved further by allocating a very small bitstream for highband envelope and gain coding. Thus, this thesis demonstrates that wideband speech can be communicated at or near the bit rate of a narrowband speech coder.
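As a point of reference for the highband excitation synthesis discussed above, a crude classic baseline (not the thesis's proposed sinusoidal-plus-random excitation method) is spectral folding: inserting a zero between samples doubles the sampling rate and mirrors the narrowband spectrum into the new highband. A sketch with a made-up 1 kHz tone:

```python
import math

def spectral_fold(narrowband):
    """Zero-insertion upsampling by 2: the output spectrum contains the
    original narrowband content plus a mirror image in the highband."""
    wide = []
    for s in narrowband:
        wide.extend([s, 0.0])
    return wide

fs = 8000                                  # narrowband sampling rate (Hz)
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(64)]
wide = spectral_fold(tone)                 # 16 kHz signal; the 1 kHz tone
                                           # now has an image near 7 kHz
print(len(tone), len(wide))
```

In practice the folded highband must still be shaped by an estimated highband spectral envelope and gain, which is where the envelope estimation techniques developed in the thesis come in.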
238

Vektorkvantisering för kodning och brusreducering / Vector quantization for coding and noise reduction

Cronvall, Per January 2004 (has links)
This thesis explores the possibility of avoiding the issues generally associated with compression of noisy imagery through the use of vector quantization. By exploiting the learning aspects of vector quantization, image processing operations such as noise reduction can be implemented in a straightforward way. Several techniques are presented and evaluated. A direct comparison shows that for noisy imagery, vector quantization, in spite of its simplicity, has clear advantages over MPEG-4 encoding.
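The learning aspect can be illustrated with a toy sketch (my own construction under simplified assumptions, not the thesis's pipeline): 2-sample vectors from a noisy piecewise-constant signal are mapped onto a small codebook learned with k-means, and since each codeword averages many noisy vectors, the reconstruction suppresses the noise.

```python
import random

rng = random.Random(7)
clean = [0.0] * 64 + [1.0] * 64                  # piecewise-constant signal
noisy = [x + rng.gauss(0.0, 0.1) for x in clean]
vectors = [noisy[i:i + 2] for i in range(0, len(noisy), 2)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_codebook(vecs, k, iters=25):
    """k-means over vectors; initialized with vectors spread across the
    data (an assumption to keep this toy stable)."""
    codebook = [list(vecs[i * (len(vecs) - 1) // (k - 1)]) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vecs:
            nearest = min(range(k), key=lambda j: dist2(v, codebook[j]))
            clusters[nearest].append(v)
        for i, c in enumerate(clusters):
            if c:
                codebook[i] = [sum(col) / len(c) for col in zip(*c)]
    return codebook

codebook = train_codebook(vectors, 2)
denoised = []
for v in vectors:                                 # encode + decode
    denoised.extend(min(codebook, key=lambda c: dist2(v, c)))

mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
print(mse(denoised, clean) < mse(noisy, clean))
```

The same encode/decode step that compresses the signal (only codeword indices need be stored) is what performs the denoising, which is the combination of coding and noise reduction the thesis exploits.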
239

Evaluating and Implementing JPEG XR Optimized for Video Surveillance

Yu, Lang January 2010 (has links)
This report describes both the evaluation and the implementation of the upcoming image compression standard JPEG XR, with the aim of determining whether JPEG XR is an appropriate standard for IP-based video surveillance. Video surveillance, especially IP-based video surveillance, has an increasingly important role in the security market. To suit surveillance, the video stream generated by the camera must have a low bit rate and low network latency while keeping a high dynamic display range. The thesis starts with an in-depth study of the JPEG XR encoding standard. Since the standard admits different settings, optimized settings are applied to the JPEG XR encoder to fit the requirements of network video surveillance. A comparative evaluation of JPEG XR versus JPEG is then delivered in both objective and subjective terms. Part of the JPEG XR encoder is subsequently implemented in hardware as an accelerator for further evaluation, with SystemVerilog as the coding language; a TSMC 40 nm process library and the Synopsys ASIC tool chain are used for synthesis. The throughput, area, and power of the encoder are given and analyzed. Finally, the system integration of the JPEG XR hardware encoder into the Axis ARTPEC-X SoC platform is discussed.
240

Imperfect Channel Knowledge for Interference Avoidance

Lajevardi, Saina 06 1900 (has links)
This thesis examines various signal processing techniques required for establishing efficient (near-optimal) communications in multiuser multiple-input multiple-output (MIMO) environments. The central part of the thesis is dedicated to the acquisition of information about the MIMO channel state at both the receiver and the transmitter; this information is required to organize a communication setup that utilizes all the available channel resources. A realistic channel model, the spatial channel model (SCM), is used in this study, together with the modern long-term evolution (LTE) standard. The work consists of three major themes: (a) estimation of the channel at the receiver, also known as tracking; (b) quantization of the channel information and its feedback from the receiver to the transmitter (feedback quantization); and (c) reconstruction of the channel knowledge at the transmitter, and its use for data precoding during transmission. / Communications
