1

A compressive sensing approach to solving nonograms

Lopez, Oscar Fabian 12 December 2013 (has links)
A nonogram is a logic puzzle in which one shades certain cells of a 2D grid to reveal a hidden image, using the sequences of numbers along the left and top of the grid to determine how many cells to shade and which ones. We propose a new technique for solving nonograms using compressive sensing. Our method avoids (1) partial fill-ins, (2) heuristics, and (3) over-complication, and requires only the solution of a binary integer programming problem. / text
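The abstract does not reproduce the formulation; as a toy illustration of the underlying feasibility problem (not the compressive-sensing method itself), a tiny nonogram can be solved by brute force over binary grids, checking the run-length clues directly:

```python
from itertools import product

def runs(line):
    """Run lengths of consecutive shaded (1) cells in a row or column."""
    out, count = [], 0
    for cell in line:
        if cell:
            count += 1
        elif count:
            out.append(count)
            count = 0
    if count:
        out.append(count)
    return out

def solve_nonogram(row_clues, col_clues):
    """Enumerate all binary grids; return the first consistent one."""
    n_rows, n_cols = len(row_clues), len(col_clues)
    for bits in product((0, 1), repeat=n_rows * n_cols):
        grid = [list(bits[r * n_cols:(r + 1) * n_cols]) for r in range(n_rows)]
        if all(runs(grid[r]) == row_clues[r] for r in range(n_rows)) and \
           all(runs([grid[r][c] for r in range(n_rows)]) == col_clues[c]
               for c in range(n_cols)):
            return grid
    return None

# A 3x3 puzzle whose unique hidden image is a plus sign.
solution = solve_nonogram([[1], [3], [1]], [[1], [3], [1]])
```

Enumeration is exponential in the grid size, which is exactly why the thesis replaces it with a binary integer programming formulation.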
2

Modeling the problem of assigning Virginia’s localities to DCR’s Regional Offices

Srinivasan, Sudharshana 15 July 2009 (has links)
Virginia’s Department of Conservation and Recreation (DCR) assigns each of Virginia’s localities to a Regional Office to help conserve the natural resources of that region. In this paper, we present a mathematical model for optimizing such assignments by minimizing their travel time and cost. With rising fuel costs and tighter budgets, our solution finds alternate assignments that cut costs by 14% annually. We use integer programming techniques to find optimal assignments that satisfy requirements, respect limitations, and minimize cost. Several plans are suggested, some keeping soil and water conservation districts together and some not.
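The paper's actual model is not given in the abstract; the flavor of such an assignment problem can be sketched with a hypothetical cost matrix and a brute-force search over locality-to-office assignments (a real instance would hand the same formulation to an integer programming solver):

```python
from itertools import product

# Hypothetical travel-cost matrix: cost[locality][office] in hours.
cost = [
    [2.0, 5.0],   # locality 0
    [4.0, 1.0],   # locality 1
    [3.0, 3.5],   # locality 2
]
capacity = 2  # assumed limit: each office serves at most 2 localities

best_cost, best_assign = float("inf"), None
for assign in product(range(2), repeat=len(cost)):
    # Respect the per-office capacity limit.
    if any(assign.count(o) > capacity for o in range(2)):
        continue
    total = sum(cost[l][o] for l, o in enumerate(assign))
    if total < best_cost:
        best_cost, best_assign = total, assign
```

Here the optimum sends localities 0 and 2 to office 0 and locality 1 to office 1.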
3

Improving the Scalability of an Exact Approach for Frequent Item Set Hiding

LaMacchia, Carolyn 01 January 2013 (has links)
Technological advances have led to the generation of large databases of organizational data, recognized as an information-rich, strategic asset for internal analysis and for sharing with trading partners. Data mining techniques can discover patterns in large databases, including relationships considered strategically relevant to the owner of the data. The frequent item set hiding problem is an area of active research studying approaches for hiding sensitive knowledge patterns before disclosing the data outside the organization. Several methods address hiding sensitive item sets, including an exact approach that generates an extension to the original database that, when combined with the original database, limits the discovery of sensitive association rules without impacting other non-sensitive information. To generate the database extension, this method formulates a constraint optimization problem (COP). Solving the COP formulation is the dominant factor in the computational resource requirements of the exact approach. This dissertation develops heuristics that address the scalability of the exact hiding method. The heuristics are directed at improving the performance of the COP solver by reducing the size of the COP formulation without significantly affecting the quality of the solutions generated. The first heuristic decomposes the COP formulation into multiple smaller problem instances that are processed separately by the COP solver to generate partial extensions of the database. The smaller database extensions are then combined to form a database extension close to the one generated with the original, larger COP formulation. The second heuristic evaluates the revised border used to formulate the COP and reduces the number of variables and constraints by selectively substituting multiple item sets with composite variables. Solving the COP with fewer variables and constraints reduces the computational cost of the processing.
Results of the heuristic processing were compared with the existing exact approach based on the size of the database extension, the ability to hide sensitive data, and the impact on non-sensitive data.
4

Ridership Based Substation Planning for Mass Rapid Transit System

Fan, Liang-Jan 19 June 2000 (has links)
This thesis investigates a power system operation strategy for an electrified mass rapid transit (MRT) network, with load transfer among main transformers to accommodate load growth. As annual ridership increases, the loading factors of the main transformers are improved so that power system loss can be reduced. In conventional planning of an electrified MRT system serving public transportation in a metropolitan area, transformer capacity is often designed not only to cover peak demand but also to provide 100% capacity reserve for system operation in the target year. With such high backup capability, the transformers are very lightly loaded for most of the operating time, and significant core loss is introduced over the life cycle. In this thesis, the train motion equation is applied to find the mechanical power required, and a strategy of unit commitment of main transformers and network reconfiguration by switching operation is considered to enhance the operating efficiency of the MRT power system. To demonstrate the effectiveness of the proposed methodology, the Taipei MRT network is selected for computer simulation. It is found that the loading factors of the main transformers can be improved dramatically and that load balance among the transformers can be obtained by proper switching operation. An efficient transformer planning strategy that takes the load growth rate into account is also derived, so that the overall investment cost of the main transformers can be justified. The load characteristics and load growth rate of the MRT system are derived by an Energy Management Model (EMM), and AC load flow analysis is used to solve for transformer copper loss and core loss over the study period.
To obtain the optimal planning and operation strategy of main transformers for the MRT power system, the transformers' initial investment and depreciation costs, peak power loss and energy loss, and the reliability cost of distribution transformers are combined to form the overall cost function. By performing dynamic programming (DP), the unit commitment of main transformers considering the annual peak and off-peak power loading of the whole MRT system is derived. It is found that more efficient system operation can be obtained by the proposed methodology.
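As a rough sketch of the dynamic-programming idea (with made-up capacities, loss coefficients, and loads, not the thesis's actual EMM or load-flow data), a DP over the number of transformers committed per period might look like:

```python
# Hypothetical setting: choose how many main transformers to keep online
# each period so online capacity covers the load, minimizing a loss cost
# (core loss per online unit plus copper loss growing with per-unit load)
# and a small switching penalty between periods.
CAP = 30.0                   # MVA per transformer (assumed)
CORE, COPPER = 1.0, 0.002    # loss coefficients (assumed)
SWITCH = 0.5                 # penalty per unit switched on/off (assumed)
loads = [25.0, 55.0, 80.0, 40.0]   # per-period peak demand (assumed)
MAX_UNITS = 4

def period_cost(load, n):
    if n * CAP < load:
        return float("inf")          # infeasible commitment
    share = load / n                  # load carried by each transformer
    return n * (CORE + COPPER * share * share)

# DP over periods: best[n] = minimum total cost ending with n units online.
best = {n: period_cost(loads[0], n) for n in range(1, MAX_UNITS + 1)}
for load in loads[1:]:
    best = {n: period_cost(load, n) +
               min(best[m] + SWITCH * abs(n - m) for m in best)
            for n in range(1, MAX_UNITS + 1)}
plan_cost = min(best.values())
```

The thesis's version works over annual peak and off-peak loadings with real loss figures from AC load flow; the recursion structure is the same.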
5

A Pairwise Comparison Matrix Framework for Large-Scale Decision Making

January 2013 (has links)
A Pairwise Comparison Matrix (PCM) is used to compute the relative priorities of criteria or alternatives and is an integral component of widely applied decision-making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, a PCM suffers from several issues that limit its application to large-scale decision problems: (1) the curse of dimensionality, that is, a large number of pairwise comparisons must be elicited from a decision maker (DM); (2) inconsistent and (3) imprecise preferences may be obtained due to the limited cognitive power of DMs. This dissertation proposes a PCM Framework for Large-Scale Decisions that addresses these limitations in three phases. The first phase proposes a binary integer program (BIP) to intelligently decompose a PCM into several mutually exclusive subsets using interdependence scores. As a result, the number of pairwise comparisons is reduced and the consistency of the PCM is improved. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets and derive the global weights of the elements from the original PCM. The proposed BIP is applied to both the AHP and ANP methodologies. However, the optimal number of subsets is provided subjectively by the DM and hence is subject to biases and judgment errors. The second phase therefore proposes a trade-off PCM decomposition methodology that splits a PCM into a number of optimally identified subsets. A BIP is proposed to balance (1) the time savings from fewer pairwise comparisons and the level of PCM inconsistency against (2) the accuracy of the weights. The proposed methodology is applied to the AHP to demonstrate its advantages and is compared with established methodologies. In the third phase, a beta distribution is proposed to generalize a wide variety of imprecise pairwise comparison distributions via a method-of-moments methodology.
A Nonlinear Programming model is then developed that calculates PCM element weights which simultaneously maximize the preferences of the DM and minimize the inconsistency. Comparison experiments are conducted on datasets collected from the literature to validate the proposed methodology. / Dissertation/Thesis / Ph.D. Industrial Engineering 2013
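For context, priorities can be recovered from a small, near-consistent PCM with the row geometric-mean approximation, a common stand-in for the principal-eigenvector method used in AHP (the matrix below is invented for illustration):

```python
import math

# Hypothetical 3x3 pairwise comparison matrix: A[i][j] states how strongly
# criterion i is preferred over criterion j (Saaty's 1-9 scale), with the
# reciprocal property A[j][i] = 1 / A[i][j].
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

# Row geometric means, normalized to sum to 1, approximate the
# principal-eigenvector priorities for a near-consistent matrix.
gm = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(gm) for g in gm]
```

For a matrix of size n this still requires eliciting n(n-1)/2 comparisons, which is the curse of dimensionality the dissertation's decomposition phases attack.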
6

AN EXACT ALGORITHM FOR THE SHARE-OF-CHOICE PROBLEM

KANNAN, SRIRAM 18 July 2006 (has links)
No description available.
7

An Optimizing Approach For Highway Safety Improvement Programs

Unal, Serter Ziya 01 June 2004 (has links) (PDF)
Improvements to highway safety have become a high priority for highway authorities due to increasing public awareness and concern over the high social and economic costs of accidents. However, satisfying this priority in an environment of limited budgets is difficult. It is therefore important to ensure that the funding available for highway safety improvements is efficiently utilized. In an attempt to maximize the overall highway safety benefits, highway professionals usually invoke an optimization process. The objective of this thesis is to develop a model for selecting appropriate improvements on a set of black spots that provides the maximum reduction in the expected number of accidents (total return), subject to the constraint that the money needed to implement these improvements does not exceed the available budget. For this purpose, a computer program, BSAP (Black Spot Analysis Program), was developed. BSAP comprises two separate but integrated programs: the User Interface Program (UIP) and the Main Analysis Program (MAP). The MAP, coded in MATLAB, contains the optimization procedure itself and performs all the necessary calculations using a binary integer optimization model. The UIP, coded in Visual Basic, provides the menus for efficient data preparation and a user-friendly environment.
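The selection problem described is essentially a 0/1 knapsack: maximize expected accident reduction subject to the budget. A minimal dynamic-programming sketch with invented figures (the thesis's MATLAB model handles the full binary integer formulation):

```python
# Hypothetical black-spot data: each candidate improvement has a cost
# (in budget units) and an expected annual accident reduction.
costs = [4, 3, 5, 2]
reductions = [10.0, 7.0, 11.0, 4.0]
budget = 9

# Classic 0/1 knapsack DP: best[b] = max accident reduction within budget b.
# Iterating b downward ensures each improvement is selected at most once.
best = [0.0] * (budget + 1)
for cost, red in zip(costs, reductions):
    for b in range(budget, cost - 1, -1):
        best[b] = max(best[b], best[b - cost] + red)
max_reduction = best[budget]
```

With these numbers, the optimum spends the full budget of 9 for a reduction of 21 expected accidents per year (e.g., improvements 0 and 2).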
8

THREE ESSAYS ON PRODUCTION AND INVENTORY MANAGEMENT

FENG, KELI 29 September 2005 (has links)
No description available.
9

PMU-Based Applications for Improved Monitoring and Protection of Power Systems

Pal, Anamitra 07 May 2014 (has links)
Monitoring and protection of power systems is a task with manifold objectives. Among others, it involves performing data mining, optimizing available resources, assessing system stresses, and conditioning data. The role of PMUs in fulfilling these four objectives forms the basis of this dissertation. Classification and regression trees (CART) built using phasor data have been used extensively in power systems. The splits in CART are based on a single attribute or on a combination of variables chosen by CART itself rather than by the user. But as PMU data consists of complex numbers, both components should be considered simultaneously when making critical decisions. An algorithm is proposed here that expresses high-dimensional, multivariate data as a single attribute in order to perform splits in CART successfully. To reap maximum benefit from the placement of PMUs in the power grid, their locations must be selected judiciously. A gradual PMU placement scheme is developed here that ensures observability as well as protects critical parts of the system. To circumvent the computational burden of the optimization, this scheme is combined with a topology-based system partitioning technique, making it applicable to systems of virtually any size. A power system is a dynamic entity, and its health needs to be monitored at all times. Two metrics are proposed here to monitor the stress of a power system in real time. The angle difference between buses located across the network and the voltage sensitivity of buses lying in the middle are found to accurately reflect the static and dynamic stress of the system. The results indicate that by setting appropriate alert/alarm limits based on these two metrics, more secure power system operation can be realized. A PMU-only linear state estimator is intrinsically superior to its predecessors with respect to performance and reliability. However, ensuring the quality of the data stream that leaves this estimator is crucial.
A methodology for performing synchrophasor data conditioning and validation that fits neatly into the existing linear state estimation formulation is developed here. The results indicate that the proposed methodology provides a computationally simple, elegant solution to the synchrophasor data quality problem. / Ph. D.
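The abstract leaves the single-attribute construction unspecified; one simple possibility (an assumption for illustration, not necessarily the dissertation's algorithm) is to project each complex phasor onto a direction chosen to separate the labeled classes, so a CART-style split then needs only one real threshold:

```python
import cmath

# Hypothetical labeled phasor samples (per-unit voltage phasors, invented).
stable   = [complex(1.0, 0.1), complex(0.9, 0.2), complex(1.1, 0.0)]
unstable = [complex(0.2, 0.9), complex(0.1, 1.1), complex(0.3, 1.0)]

def separation(theta):
    """Gap between the two classes when projected onto direction theta."""
    proj = lambda z: (z * cmath.exp(complex(0, -theta))).real
    a = [proj(z) for z in stable]
    b = [proj(z) for z in unstable]
    return min(a) - max(b)   # positive => classes split by one threshold

# Coarse search over projection angles for the best class separation.
best_theta = max((k * 0.01 for k in range(628)), key=separation)
gap = separation(best_theta)
```

Once such a scalar attribute exists, any off-the-shelf CART implementation can split on it directly.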
10

Otimizacão da coordenação de relés de sobrecorrente direcionais em sistemas elétricos de potência utilizando a programação inteira binária / Optimization of coordination of directional overcurrent relays in electric power systems using binary integer programming

Corrêa, Rafael 23 February 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work aims to optimize the coordination of microprocessor-based directional overcurrent relays in power systems using Binary Integer Programming (BIP). Two new BIP mathematical models are presented. The first determines only the Time Multiplier of each relay, while the second determines the Time Multiplier and the Current Multiplier of each relay simultaneously. Compared with Linear Programming (LP) and Nonlinear Programming (NLP) models, which use continuous variables, these models have the great advantage of determining the relay settings directly within the discrete range the relays provide. This avoids rounding the settings to the closest values available in the relays, which can cause coordination failures. Moreover, the algorithms used to solve these BIP models do not require an initial guess, unlike NLP, and the search avoids getting trapped in local minima. This work presents the LP and NLP models considered and the changes required to obtain the two new BIP models. To validate the new relay coordination models and compare them with the continuous-variable models, the proposed methodology is applied to the phase and earth protection of two test systems of different sizes, with and without the instantaneous unit of each relay. The results are evaluated in terms of the objective function, the obtained settings, and the relay operating times for faults within the primary protection zone. It is shown that the proposed models can determine the optimal solution of the problem in reduced computational time and without any modification of the final solution. These models can also be integrated into decision-support software for the protection engineer, allowing interaction in the construction of the mathematical model so that the final solution can be customized.
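To make the coordination constraint concrete, here is a brute-force toy with one primary/backup relay pair on the IEC standard-inverse curve and discrete Time Multiplier settings (all numbers invented; the thesis's BIP models coordinate many relay pairs at once):

```python
from itertools import product

# Two relays must be coordinated: for the same fault, the backup trips at
# least CTI seconds after the primary. IEC standard-inverse curve:
#   t = TMS * 0.14 / (M**0.02 - 1), M = fault current / pickup current.
def op_time(tms, m):
    return tms * 0.14 / (m ** 0.02 - 1)

TMS_STEPS = [round(0.05 * k, 2) for k in range(1, 21)]   # 0.05 .. 1.00
M_PRIMARY, M_BACKUP = 10.0, 5.0   # fault-current multiples (assumed)
CTI = 0.3                          # coordination time interval, s (assumed)

best = None
for tp, tb in product(TMS_STEPS, repeat=2):
    t1, t2 = op_time(tp, M_PRIMARY), op_time(tb, M_BACKUP)
    if t2 - t1 >= CTI:                       # coordination constraint
        total = t1 + t2                      # minimize total operating time
        if best is None or total < best[0]:
            best = (total, tp, tb)
```

Because the candidate settings are already the relay's discrete steps, no post-hoc rounding is needed, which is the advantage the BIP models claim over continuous LP/NLP formulations.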
