61 |
Novas formulações para o problema de reconfiguração de redes de distribuição de energia elétrica / New formulations for the reconfiguration problem in energy distribution systems. García Cabezas, Ana María. 26 September 2007 (has links)
The reconfiguration of electricity distribution systems consists of altering the network topology by opening or closing the interconnection switches in the primary distribution feeders, so as to optimize a given objective function. The usual objectives are minimization of total active loss, fault isolation, load balancing among feeders, and improvement of the voltage profile; in this work, the objective is the minimization of total active loss. The main difficulties of the reconfiguration problem arise from the size of real systems, which leads to a large number of binary variables representing the switches, and from the quadratic relationship between electric loss and the currents flowing through the network branches. This work develops new formulations for the distribution system reconfiguration problem using Mixed-Integer Nonlinear Programming. In addition, it is shown that the continuous part of every formulation is convex, which guarantees the uniqueness of the optimal solution for a given switch profile. This property allows the Newton method to be used for solving the continuous subproblem, with two advantages: the method cannot converge to a local minimum instead of the desired global minimum, and it converges in a single iteration owing to the quadratic nature of the formulations. The proposed formulations were implemented as computer programs, and their performance is described and analyzed through several case studies.
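The single-iteration convergence claimed above is a general property of Newton's method on quadratic objectives: for f(x) = ½xᵀAx − bᵀx with A positive definite, one Newton step from any starting point lands exactly on the global minimizer A⁻¹b. A minimal numeric sketch (the matrix and vector below are illustrative, not taken from the thesis):

```python
import numpy as np

# Illustrative convex quadratic: f(x) = 0.5 x^T A x - b^T x, with A
# symmetric positive definite (a stand-in for the convex continuous
# part of a reconfiguration formulation).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x0 = np.zeros(2)                      # arbitrary starting point
grad = A @ x0 - b                     # gradient of f at x0
x1 = x0 - np.linalg.solve(A, grad)    # one Newton step: solve A d = grad

# After a single step the gradient vanishes: x1 is the global minimum,
# and because f is convex there is no local minimum to get stuck in.
assert np.allclose(A @ x1 - b, 0.0)
```

Uniqueness of the minimizer for a fixed switch state is what makes this safe: the continuous subproblem has exactly one stationary point, so the single Newton step cannot land anywhere else.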
|
62 |
Computations in Prime Fields using Gaussian Integers. Engström, Adam. January 2006 (has links)
In this thesis it is investigated whether representing a field Z_p, p ≡ 1 (mod 4) prime, by another field Z[i]/<a + bi> over the Gaussian integers, with p = a² + b², results in arithmetic architectures using a smaller number of logic gates. Only bit-parallel architectures are considered, and the programs Espresso and SIS are used for Boolean minimization of the architectures. When counting gates, only NAND, NOR, and inverters are used. Two arithmetic operations are investigated: addition and multiplication. For addition, the architecture over Z[i]/<a + bi> uses a significantly greater number of gates than an architecture over Z_p. For multiplication, the architecture using Gaussian integers uses slightly fewer gates than the architecture over Z_p for p = 5 and p = 17, and only a few more gates for p = 13. Only the values 5, 13, and 17 have been compared for multiplication; for addition, 12 values ranging from 5 to 525313 have been compared. It is also shown that using a blif model as the input architecture to SIS yields much better minimization performance than a truth-table architecture.
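The field isomorphism underlying this comparison can be checked numerically: for p = a² + b², the map φ(x + yi) = x + y·j (mod p), where j ≡ −a·b⁻¹ (mod p) satisfies j² ≡ −1, is a ring homomorphism from Z[i]/<a + bi> onto Z_p. A small sketch for p = 5 = 1² + 2² (the helper names are mine, not from the thesis):

```python
# Verify that arithmetic on Gaussian-integer residues mirrors
# arithmetic in Z_p, for p = 5 = 1^2 + 2^2 (a = 1, b = 2).
p, a, b = 5, 1, 2

# j plays the role of i inside Z_p: it must satisfy a + b*j = 0 (mod p),
# i.e. j = -a * b^{-1} (mod p), which forces j^2 = -1 (mod p).
j = (-a * pow(b, -1, p)) % p
assert (j * j) % p == p - 1           # j^2 = -1 (mod p)

def phi(z):
    """Map x + y*i (a Gaussian residue) to an element of Z_p."""
    x, y = z
    return (x + y * j) % p

def gmul(z1, z2):
    """Multiply two Gaussian integers: (x1 + y1*i)(x2 + y2*i)."""
    (x1, y1), (x2, y2) = z1, z2
    return (x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)

# phi is multiplicative: phi(u * v) = phi(u) * phi(v) in Z_p, so a
# multiplier built over Z[i]/<a + bi> computes the same products
# as one built directly over Z_p.
u, v = (2, 3), (4, 1)
assert phi(gmul(u, v)) == (phi(u) * phi(v)) % p
```

The gate-count question the thesis asks is then whether the (x, y) representation on the left of this map admits cheaper Boolean logic than the single-residue representation on the right.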
|
63 |
Community participation in waste minimization: the case of Emfuleni Local Municipality. Ludidi, Nompazamo Alma. January 2009 (has links)
The purpose of this research is to understand the factors contributing to successes and challenges in community participation, especially in waste minimization initiatives, in order to inform policy and contribute to improving the design of such initiatives. The objectives of the research are: firstly, to understand the current state of public participation in waste minimization at Emfuleni Local Municipality; secondly, to determine the extent of the community's willingness to participate in waste minimization initiatives; thirdly, to determine strategies for promoting public participation in waste minimization; and fourthly, to identify the constraints and challenges of public participation in waste minimization and the kind of support the community requires in order to participate in waste minimization initiatives at Emfuleni Local Municipality. Respondents were drawn from fifty households in the community of Bophelong Extension 13, Emfuleni recycling groups, waste management officials, the stakeholder participation unit of the Gauteng Department of Agriculture, Conservation and Environment, and waste buyers in Vanderbijlpark. The researcher employed a mainly qualitative research design, and data were collected through questionnaires and interviews. A quantitative study was done on officials, and responses were ranked according to the importance of the factors influencing community participation. The findings indicate that the community is willing to participate in waste minimization initiatives, although 42% of respondents are not currently doing so; all community respondents stated that they are willing to participate in waste minimization strategies. The research further indicates that a considerable number of constraints and challenges prohibit successful community participation in waste minimization.
The constraints include a lack of knowledge, especially regarding composting initiatives to minimize organic waste and the separation of waste; a lack of infrastructure to exchange waste for cash; and a lack of time, transport, political support, starter packs to initiate their own waste minimization plants, and financial support to ensure that waste minimization initiatives create job opportunities. This study recommends, amongst others, that the community be supported to participate meaningfully in waste minimization initiatives through awareness and education, infrastructure for reclaimed waste, waste recycling bins, project funding, and community involvement and support from the Emfuleni Local Municipality and the private sector. / Thesis (M. Development and Management)--North-West University, Vaal Triangle Campus, 2009.
|
64 |
Post-mapping Topology Rewriting for FPGA Area Minimization. Chen, Lei. January 2009 (has links)
Circuit designers require Computer-Aided Design (CAD) tools when compiling designs into Field Programmable Gate Arrays (FPGAs) in order to achieve high-quality results, given the complexity of the compilation tasks involved. Technology mapping is one critical step in the FPGA CAD flow: the final mapping result has a significant impact on the subsequent steps of clustering, placement, and routing, with respect to the objectives of delay, area, and power dissipation. While depth-optimal FPGA technology mapping can be solved in polynomial time, area minimization has been proven NP-hard.
Most modern state-of-the-art FPGA technology mappers are structural in nature; they are based on cut enumeration and use various heuristics to yield depth- and area-minimized solutions. However, the results produced by structural technology mappers depend strongly on the structure of the input netlists. Hence, it is common to apply additional heuristics after technology mapping to further optimize area and reduce structural bias without harming depth.
Recently, SAT-based Boolean matching has been used for post-mapping area minimization. However, SAT-based matching is computationally complex and too time-consuming in practice.
This thesis proposes an alternative Boolean matching approach based on NPN equivalence. Using a library of pre-computed topologies, the matching problem becomes as simple as performing NPN encoding followed by a hash lookup, which is very efficient. In conjunction with Ashenhurst decomposition, the NPN-based Boolean matching can handle Boolean functions of up to 10 inputs.
When applied to a large set of designs, the proposed algorithm yields, on average, more than a 3% reduction in circuit area without harming circuit depth. A priori generation of a library of topologies can be difficult, which represents one limitation of the proposed algorithm.
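The NPN lookup described above can be sketched for small functions: a function of n inputs is represented by its 2ⁿ-bit truth table, and its NPN canonical form is the minimum truth table over all input permutations, input negations, and output negation. Two functions match exactly when their canonical forms collide in the hash table. A brute-force sketch for 3-input functions (the helper names are mine; production mappers use far more efficient encodings):

```python
from itertools import permutations

N = 3                        # number of inputs; brute force is fine here
FULL = (1 << (1 << N)) - 1   # all-ones truth table (for output negation)

def transform(tt, perm, neg):
    """Apply an input permutation and an input-negation mask to tt.

    tt is a 2^N-bit integer; bit m holds f(m), where bit k of the
    minterm index m is the value of input k.
    """
    t = 0
    for m in range(1 << N):
        mm = 0
        for k in range(N):
            bit = ((m >> k) & 1) ^ ((neg >> k) & 1)  # negate input k?
            mm |= bit << perm[k]                      # permute input k
        if (tt >> mm) & 1:
            t |= 1 << m
    return t

def npn_canonical(tt):
    """Minimum truth table over input perms, input and output negation."""
    best = FULL
    for perm in permutations(range(N)):
        for neg in range(1 << N):
            t = transform(tt, perm, neg)
            best = min(best, t, t ^ FULL)  # t ^ FULL = negated output
    return best

# AND3 (truth table 0b10000000) and NOR3 (0b00000001) are NPN-equivalent:
# NOR(a, b, c) = AND(not a, not b, not c).
assert npn_canonical(0b10000000) == npn_canonical(0b00000001)

# Matching against a pre-computed topology library is a hash lookup:
library = {npn_canonical(0b10000000): "and3 topology"}
assert library[npn_canonical(0b00000001)] == "and3 topology"
```

XOR3 (0b10010110), by contrast, falls into a different NPN class, so its canonical form misses the library entry, which is exactly the behavior a matcher needs.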
|
67 |
Convex relaxation for the planted clique, biclique, and clustering problems. Ames, Brendan. January 2011 (has links)
A clique of a graph G is a set of pairwise adjacent nodes of G. Similarly, a biclique (U, V) of a bipartite graph G is a pair of disjoint, independent vertex sets such that each node in U is adjacent to every node in V in G. We consider the problems of identifying the maximum clique of a graph, known as the maximum clique problem, and identifying the biclique (U, V) of a bipartite graph that maximizes the product |U| · |V|, known as the maximum edge biclique problem. We show that finding a clique or biclique of a given size in a graph is equivalent to finding a rank-one matrix satisfying a particular set of linear constraints. These problems can be formulated as rank minimization problems and relaxed to convex programs by replacing rank with its convex envelope, the nuclear norm. Both problems are NP-hard, yet we show that our relaxation is exact in the case that the input graph contains a large clique or biclique plus additional nodes and edges. For each problem, we provide two analyses of when our relaxation is exact. In the first, the diversionary edges are added deterministically by an adversary. In the second, each potential edge is added to the graph independently at random with fixed probability p. In the random case, our bounds match the earlier bounds of Alon, Krivelevich, and Sudakov, as well as Feige and Krauthgamer, for the maximum clique problem.
We extend these results and techniques to the k-disjoint-clique problem. The maximum node k-disjoint-clique problem is to find a set of k disjoint cliques of a given input graph containing the maximum number of nodes. Given an input graph G and nonnegative edge weights w, the maximum mean weight k-disjoint-clique problem seeks the set of k disjoint cliques of G that maximizes the sum of the average weights of the edges, with respect to w, of the complete subgraphs of G induced by the cliques. These problems may be considered ways of posing the clustering problem. In clustering, one wants to partition a given data set so that the data items in each partition, or cluster, are similar and the items in different clusters are dissimilar. For the graph G whose nodes represent a given data set and in which two nodes are adjacent if and only if the corresponding items are similar, clustering the data into k disjoint clusters is equivalent to partitioning G into k disjoint cliques. Similarly, given a complete graph with nodes corresponding to a given data set and edge weights indicating the similarity between each pair of items, the data may be clustered by solving the maximum mean weight k-disjoint-clique problem.
We show that both instances of the k-disjoint-clique problem can be formulated as rank-constrained optimization problems and relaxed to semidefinite programs using the nuclear norm relaxation of rank. We also show that when the input instance corresponds to a collection of k disjoint planted cliques plus additional edges and nodes, this semidefinite relaxation is exact for both problems. We provide theoretical bounds that guarantee the exactness of our relaxation, and we provide empirical examples of successful applications of our algorithm to synthetic data sets, as well as data sets from clustering applications.
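The rank-one reformulation can be illustrated directly: if v is the 0/1 indicator vector of a clique C, the matrix X = vvᵀ/|C| has rank one, trace one, and support only on the clique's node pairs, which is the structure the nuclear-norm relaxation is designed to recover. A small numpy sketch (the graph and planted clique below are my illustration, not an example from the thesis):

```python
import numpy as np

# A 6-node graph containing the planted clique {0, 1, 2},
# plus a few diversionary edges.
n = 6
edges = {(0, 1), (0, 2), (1, 2),     # the planted clique
         (2, 3), (3, 4), (4, 5)}     # extra (diversionary) edges
clique = [0, 1, 2]

# Rank-one certificate: X = v v^T / |C| for the clique indicator v.
v = np.zeros(n)
v[clique] = 1.0
X = np.outer(v, v) / len(clique)

assert np.linalg.matrix_rank(X) == 1
assert np.isclose(np.trace(X), 1.0)

# X puts weight only on clique pairs (and the diagonal): every
# off-diagonal nonzero of X corresponds to an edge of the graph,
# which is the feasibility condition the linear constraints enforce.
for i in range(n):
    for j in range(i + 1, n):
        if X[i, j] > 0:
            assert (i, j) in edges
```

The relaxation replaces the nonconvex rank-one constraint on X by minimizing the nuclear norm (here equal to 1, since X is a scaled rank-one projection) subject to these linear constraints.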
|
68 |
Restoration of Atmospheric Turbulence Degraded Video using Kurtosis Minimization and Motion Compensation. Li, Dalong. 30 November 2006 (has links)
In this thesis, the background of atmospheric turbulence degradation in imaging is reviewed, and two aspects are highlighted: blurring and geometric distortion. The turbulence blurring parameter is determined by the atmospheric turbulence condition, which is often unknown; therefore, a blur identification technique based on higher-order statistics (HOS) was developed. It was observed that kurtosis generally increases as an image becomes blurred (smoothed). This observation was interpreted in the frequency domain in terms of phase correlation, and kurtosis-minimization-based blur identification is built upon it. It is shown that kurtosis minimization is effective in identifying the blurring parameter directly from the degraded image. Kurtosis minimization is a general method for blur identification; it has been tested on a variety of blurs, such as Gaussian blur, out-of-focus blur, and motion blur. To compensate for the geometric distortion, earlier work on turbulent motion compensation was extended to handle situations in which there is camera or object motion. Trajectory smoothing is used to suppress the turbulent motion while preserving the real motion. Although the scintillation effect of atmospheric turbulence is not considered separately, it can be handled in the same way as multiple-frame denoising once the motion trajectories are built.
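The observation that kurtosis rises under blurring can be reproduced on a one-dimensional toy signal: averaging neighbouring samples pushes the sample distribution toward Gaussian (kurtosis 3) by the central-limit effect, so a sub-Gaussian signal's kurtosis grows as the blur widens. A numpy sketch (the uniform-noise signal and box kernels are my illustration; the thesis works with 2-D images and turbulence point-spread functions):

```python
import numpy as np

rng = np.random.default_rng(0)

def kurtosis(x):
    """Sample kurtosis (non-excess): E[(x - mu)^4] / sigma^4."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

# A sub-Gaussian "sharp" signal: uniform noise has kurtosis ~1.8.
sharp = rng.uniform(-1.0, 1.0, 1_000_000)

# Blur with moving-average kernels of increasing width.
kurts = []
for width in (1, 5, 25):
    kernel = np.ones(width) / width
    blurred = np.convolve(sharp, kernel, mode="valid")
    kurts.append(kurtosis(blurred))

# Kurtosis increases with blur strength, approaching the Gaussian
# value 3 -- the monotone behaviour that blur identification by
# kurtosis minimization exploits in reverse.
assert all(a < b for a, b in zip(kurts, kurts[1:]))
```

Blur identification runs this relationship backwards: among candidate restorations, the one whose kurtosis is smallest corresponds to the least-smoothed, i.e. best-deblurred, estimate.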
|
69 |
IP Routing Table Compression Using TCAM and Distance-one Merge. Bollapalli, Kalyana Chakravarthy. December 2009
In an attempt to slow the exhaustion of the Internet Protocol (IP) address space, Classless Inter-Domain Routing (CIDR) was proposed and adopted. However, the decision to utilize CIDR also increases the size of the routing table, since it allows an arbitrary partitioning of the routing space. We propose a scheme to reduce the size of the routing table in the CIDR context. Our approach utilizes a well-known and highly efficient heuristic for 2-level logic minimization in order to compress the routing table. By treating the IP routing table as a set of completely specified logic functions, we demonstrate that our technique can achieve about a 25% reduction in the size of IP routing tables, while ensuring that our approach can handle routing table updates in real time. The resulting routing table can be used with existing routers without any change in architecture, yet realizing the IP routing table as proposed in this thesis requires less complex hardware than the Ternary CAMs (TCAMs) traditionally used to implement IP routing tables. The proposed architecture also reduces lookup latency by about 46%, hardware area by 9%, and power consumption by 15% relative to a TCAM-based implementation.
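The 2-level minimization step can be illustrated with the distance-one merge of the title: two ternary entries with the same next hop that differ in exactly one care bit collapse into a single entry with a don't-care (*) in that position. A Python sketch (the entry format and the greedy pass are simplified assumptions of mine, not the thesis's data structures):

```python
def merge_distance_one(e1, e2):
    """Merge two equal-length ternary strings ('0', '1', '*') that
    differ in exactly one position where both have care bits.
    Returns the merged entry, or None if they cannot be merged."""
    if len(e1) != len(e2):
        return None
    diff = [i for i, (a, b) in enumerate(zip(e1, e2)) if a != b]
    if len(diff) != 1 or '*' in (e1[diff[0]], e2[diff[0]]):
        return None
    i = diff[0]
    return e1[:i] + '*' + e1[i + 1:]

def compress(entries):
    """Greedy passes: repeatedly merge mergeable prefix pairs that
    share a next hop. A toy stand-in for 2-level logic minimization."""
    changed = True
    while changed:
        changed = False
        items = sorted(entries.items())
        for (p1, h1) in items:
            for (p2, h2) in items:
                if p1 < p2 and h1 == h2:
                    m = merge_distance_one(p1, p2)
                    if m is not None and p1 in entries and p2 in entries:
                        del entries[p1], entries[p2]
                        entries[m] = h1
                        changed = True
    return entries

# Four 4-bit prefixes pointing at the same next hop collapse to one.
table = {'1010': 'A', '1011': 'A', '1000': 'A', '1001': 'A', '0110': 'B'}
print(compress(table))  # {'0110': 'B', '10**': 'A'}
```

Each merge eliminates one table entry without changing the routing function, which is where the reported ~25% table-size reduction comes from.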
|
70 |
Resource conservation and optimization via process integration. Gabriel, Frederico Burjack. 12 April 2006 (has links)
The process industries are characterized by the enormous use of natural resources such as raw materials, solvents, water, and utilities. Additionally, significant amounts of waste are discharged from industrial facilities. As the world moves toward sustainable progress, that is, meeting the demands of the current generation without compromising those of future generations, process facilities must focus on resource conservation and pollution prevention. The purpose of this work is to introduce a new process integration methodology for the conservation and optimization of resources in the process industries; the work is also geared toward reducing waste discharge from processing facilities. The optimal management of fresh resources and waste disposal requires the appropriate allocation, generation, and separation of streams and species. Material recycle/reuse/substitution, reaction alteration, and process modification are some of the main strategies employed to conserve resources in the process industries.
The overall problem addressed in this dissertation can be stated as follows: given is a process with a number of streams (sources) that are characterized by certain criteria (e.g., compositions of certain compounds, targeted properties), where these streams can be utilized in a number of process units (sinks) if they satisfy given constraints on flow rate, compositions, and/or properties. Additionally, interception devices may be used to adjust stream criteria. The objective is to develop targeting procedures and synthesis tools for the identification of minimum usage of fresh resources, minimum discharge of waste, and maximum integration of process resources. The devised methodology addresses four classes of problems:
- Targeting techniques using direct recycle strategies
- Recycle and interception procedures for single-component systems
- Recycle and interception procedures for multi-component systems
- Property integration for direct recycle strategies
The framework provided by this dissertation couples traditional mass integration with groundbreaking property integration techniques to target, synthesize, and optimize a plant for maximal conservation of resources. In particular, this work introduces new techniques such as material recycle pinch analysis, simultaneous recycle and interception networks, and property-based allocation. Additionally, graphical, algebraic, and optimization approaches are developed and validated with case studies in order to illustrate the applicability of the devised procedures.
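The direct-recycle targeting idea can be sketched numerically: allocate the cleanest available process sources to each sink up to its flow demand and impurity-load limit, and whatever demand remains must be met by fresh resource. A simplified single-component Python sketch (the stream data are invented for illustration, and this greedy pass is a stand-in for the dissertation's pinch-based targeting procedures):

```python
# Sources: (flow, impurity fraction). Sinks: (flow demand, max inlet fraction).
sources = [(50.0, 0.02), (60.0, 0.05), (40.0, 0.10)]
sinks = [(70.0, 0.03), (80.0, 0.06)]

def min_fresh(sources, sinks):
    """Greedy direct-recycle targeting for one impurity (fresh is pure).

    For each sink (tightest constraint first), blend recycled sources
    so the inlet meets the flow demand without exceeding the sink's
    impurity-load limit (flow * max fraction). Unmet demand is covered
    by pure fresh resource; the total fresh usage is returned.
    """
    avail = [list(s) for s in sorted(sources, key=lambda s: s[1])]
    fresh = 0.0
    for demand, zmax in sorted(sinks, key=lambda s: s[1]):
        load_cap = demand * zmax          # allowable impurity load
        taken = 0.0
        for src in avail:                 # cleanest sources first
            if taken >= demand or src[0] <= 0:
                continue
            # how much of this source fits the remaining flow and load?
            by_flow = demand - taken
            by_load = load_cap / src[1] if src[1] > 0 else by_flow
            use = min(src[0], by_flow, by_load)
            src[0] -= use
            taken += use
            load_cap -= use * src[1]
        fresh += demand - taken           # balance met with pure fresh
    return fresh

print(min_fresh(sources, sinks))  # 12.0 for the data above
```

With these numbers, the first sink is satisfied entirely by recycle, while the second sink's impurity-load limit pinches the allocation and forces 12 units of fresh resource, the kind of target the material recycle pinch analysis identifies graphically.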
|