51. Long-Term Spatial Load Forecasting Using Human-Machine Co-construct Intelligence Framework. Hong, Tao, 28 October 2008 (has links)
This thesis presents a formal study of the long-term spatial load forecasting problem: given the small-area electric load history of a service territory together with current and future land use information, forecast the load over the next 20 years. A hierarchical S-curve trending method is developed to produce the basic forecast. Because of uncertainties in the electric load data, the results from the computerized program may conflict with the nature of the load growth. Sometimes the computerized program is unaware of local development because the land use data lacks such information. A human-machine co-construct intelligence framework is therefore proposed to improve the robustness and reasonableness of the purely computerized load forecasting program. The proposed algorithm has been implemented and applied at several utility companies to forecast long-term electric load growth in their service territories, with satisfactory results.
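To make the S-curve trending idea concrete, here is a minimal sketch of fitting a logistic growth curve to one small area's load history and extrapolating it 20 years ahead. The function names, synthetic history, and use of scipy's curve_fit are illustrative assumptions, not drawn from the thesis; the hierarchical aggregation and human-review steps are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def s_curve(t, saturation, rate, midpoint):
    """Logistic S-curve: load approaches `saturation` as the small area fills in."""
    return saturation / (1.0 + np.exp(-rate * (t - midpoint)))

# Illustrative small-area history: 15 years of annual peak load in MW (synthetic).
rng = np.random.default_rng(0)
years = np.arange(15.0)
load = 12.0 / (1.0 + np.exp(-0.4 * (years - 8.0))) + rng.normal(0.0, 0.2, 15)

# Fit the trend; bounds keep the saturation level and growth rate physically sensible.
params, _ = curve_fit(s_curve, years, load,
                      p0=[load.max() * 1.5, 0.3, years.mean()],
                      bounds=([0.0, 0.0, 0.0], [100.0, 5.0, 50.0]))

# Extrapolate 20 years beyond the last observed year.
future = np.arange(years[-1] + 1, years[-1] + 21)
print(dict(zip(["saturation_MW", "rate", "midpoint_year"], np.round(params, 2))))
print(np.round(s_curve(future, *params), 2))
```

In the framework described above, a planner would then review areas whose fitted saturation level or timing conflicts with known local development and override the machine forecast.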
52. Soft Computing Approaches to Routing and Wavelength Assignment in Wavelength-Routed Optical Networks. Lea, Djuana, 30 November 2004 (has links)
The routing and wavelength assignment (RWA) problem is essential for achieving efficient performance in wavelength-routed optical networks. For a network without wavelength conversion capabilities, the RWA problem consists of selecting an appropriate path and wavelength for each connection request while ensuring that paths sharing common links are not assigned the same wavelength. The purpose of this research is to develop efficient adaptive methods for routing and wavelength assignment in wavelength-routed optical networks with dynamic traffic. The proposed methods employ soft computing techniques including genetic algorithms, fuzzy control theory, simulated annealing, and tabu search. All four algorithms consider the current availability of network resources before making a routing decision. Simulations show that each method outperforms fixed and alternate routing strategies, with the fuzzy-controlled algorithm achieving the lowest blocking rates and the shortest running times in most cases.
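As background for the wavelength-continuity constraint the abstract refers to, the sketch below shows a plain first-fit assignment over k candidate paths; it is not one of the thesis's soft-computing algorithms. The use of networkx for path enumeration and the 8-wavelength ring topology are assumptions for illustration.

```python
from itertools import islice
import networkx as nx  # assumed available; used only to enumerate candidate paths

NUM_WAVELENGTHS = 8

def assign(graph, src, dst, k=3):
    """Try the k shortest candidate paths; return (path, wavelength) or None.

    Wavelength continuity: with no wavelength conversion, the same wavelength
    must be free on every link of the chosen path.
    """
    for path in islice(nx.shortest_simple_paths(graph, src, dst), k):
        links = list(zip(path, path[1:]))
        for w in range(NUM_WAVELENGTHS):
            if all(w not in graph.edges[u, v]["used"] for u, v in links):
                for u, v in links:
                    graph.edges[u, v]["used"].add(w)  # reserve the wavelength
                return path, w
    return None  # request is blocked

# Toy 4-node ring topology (illustrative only).
G = nx.cycle_graph(4)
nx.set_edge_attributes(G, {e: set() for e in G.edges}, "used")
print(assign(G, 0, 2))
```

An adaptive method, like those studied above, would replace the fixed candidate list and first-fit rule with choices driven by the current link and wavelength usage.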
53. Ad-hoc Wireless Routing for Wildlife Tracking with Environmental Power Constraint. McClusky, Douglas, 18 December 2006 (has links)
The purpose of this paper is to suggest an algorithm by which mica motes can organize themselves into a network that relays packets as quickly as possible under energy constraints imposed by environmental harvesting. This problem is part of a larger project to monitor red wolves using a mica mote network. The network has three parts: sensor motes attached to collars on the wolves; one or more base stations that receive packets and display them in usable form for scientists; and relay motes that forward packets from the sensor motes to a base station. The proposed algorithm adapts Hohlt et al.'s Flexible Power Scheduling to work under Kansal et al.'s environmental harvesting power constraint. Employing this strategy changes energy consumption from a performance objective to a constraint, allowing me to add my own throughput-maximizing component to the algorithm, based on dynamic programming and microeconomics. I also discuss the ongoing development of a simulation of this algorithm, designed to test its performance and to resolve implementation problems.
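The throughput-maximizing idea under a harvesting constraint can be pictured with a small dynamic program over (time slot, stored energy). This is a generic sketch, not Hohlt et al.'s Flexible Power Scheduling or the thesis's algorithm, and all numbers (battery size, per-packet cost, harvest profile) are invented for illustration.

```python
def max_packets(harvest, battery_cap, cost_per_packet, max_per_slot):
    """DP over (time slot, stored energy): maximize packets forwarded.

    harvest[t]      -- energy expected from the environment in slot t
    battery_cap     -- maximum energy the mote can store
    cost_per_packet -- energy spent to relay one packet
    max_per_slot    -- radio limit on packets per slot
    All quantities are integers in this sketch.
    """
    # best[e] = most packets forwardable so far, ending with e units stored
    best = {battery_cap // 2: 0}  # assumed initial charge
    for h in harvest:
        nxt = {}
        for energy, sent in best.items():
            avail = min(energy + h, battery_cap)
            for k in range(min(max_per_slot, avail // cost_per_packet) + 1):
                e_left = avail - k * cost_per_packet
                if nxt.get(e_left, -1) < sent + k:
                    nxt[e_left] = sent + k
        best = nxt
    return max(best.values())

# Illustrative numbers: 6 slots of solar harvest, small battery.
print(max_packets(harvest=[3, 5, 0, 2, 4, 1], battery_cap=10,
                  cost_per_packet=2, max_per_slot=3))
```

Treating energy as a hard constraint, as above, leaves throughput as the sole objective, which mirrors the shift in objectives described in the abstract.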
54. Assessing the Impact of Strategic Safety Stock Placement in a Multi-Echelon Supply Chain. Bryksina, Elena Alexandrovna, 16 December 2005 (has links)
The objective of this study is to develop prescriptions for strategically placing safety stocks in an arborescent supply chain in which there are moderate to severe risks of disruptions in supply. Our work builds on recently published work by Graves and Willems (2003), which demonstrates that a simple-to-compute, congestion-based adjustment to supply lead times, first developed by Ettl et al. (2000), can be embedded in a nonlinear optimization problem to minimize total investment in safety stock across the entire supply chain. We are interested in investigating how the Graves and Willems (GW) model performs under uncertainty in supply. We first propose an adjustment to the model (Mod-GW) that considers two types of fulfillment times, a normal fulfillment time and a worst possible fulfillment time, which allows us to account for supply uncertainty, or disruptions in supply. We evaluate the performance of GW and Mod-GW using Monte Carlo simulation and, drawing motivation from timed Petri net analysis, develop an Informed Safety Stock Adjustment (ISSA) algorithm to compute the additional buffer stock levels necessary to improve downstream service performance to the target level. We find that the service performance of the Mod-GW solution is most sensitive to the probability of disruption at any node in the supply chain, requiring higher safety stock adjustments through ISSA as this probability increases. In particular, the relative value of the holding costs for components and finished goods, and the resulting impact on where safety stock is held in the network, is an important moderating factor in determining how much the service performance of the Mod-GW solution degrades as either the probability of disruption at node j or the ratio of the disrupted and normal lead times increases (i.e., as disruptions exert more impact on the network). The Informed Safety Stock Adjustment algorithm generally suggests a sufficient complementary amount of safety stock.
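For orientation, the sketch below shows a guaranteed-service-style safety stock expression of the kind used in the Graves and Willems line of work, together with a simple expected-value lead-time blend standing in for the Mod-GW adjustment. The parameter names, formulas, and numbers are assumptions for illustration, not the thesis's model.

```python
import math

Z = 2.05  # safety factor for roughly a 98% service target (illustrative)

def safety_stock(sigma_demand, inbound_s, proc_time, outbound_s):
    """Guaranteed-service-style expression: stock covers demand variation over
    the net replenishment time tau = inbound_s + proc_time - outbound_s."""
    tau = max(inbound_s + proc_time - outbound_s, 0.0)
    return Z * sigma_demand * math.sqrt(tau)

def disruption_adjusted_time(normal_t, worst_t, p_disrupt):
    """Simple expected-value blend of normal and worst-case fulfillment times --
    an assumed stand-in for the Mod-GW adjustment described above."""
    return (1.0 - p_disrupt) * normal_t + p_disrupt * worst_t

# Two-stage example: a component stage feeds a finished-goods stage.
sigma = 40.0  # weekly demand standard deviation (illustrative)
t_comp = disruption_adjusted_time(normal_t=2.0, worst_t=8.0, p_disrupt=0.1)
print(round(safety_stock(sigma, inbound_s=0.0, proc_time=t_comp, outbound_s=1.0), 1))
print(round(safety_stock(sigma, inbound_s=1.0, proc_time=1.0, outbound_s=0.0), 1))
```

Raising the disruption probability inflates the adjusted fulfillment time and hence the required buffer, which is the qualitative behavior the ISSA adjustments respond to.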
55. Optimization in Job Shop Scheduling Using Alternative Routes. Davenport, Catherine Elizabeth, 09 January 2003 (has links)
The ability of a production system to complete orders on time is a critical measure of customer service. While there is typically a preferred routing for a job through the processing machines, an alternative route is often available that can be used to avoid bottleneck operations and improve due-date performance. In this paper a heuristic approach is given for dynamically selecting routing alternatives for a set of jobs to be processed in a job shop. The approach is coupled with a job shop scheduling algorithm developed by Hodgson et al. (1998, 2000) to minimize the maximum job lateness (Lmax).
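A toy version of the alternative-route idea: send each job down whichever of its routes loads the currently busiest machine least. This greedy sketch is illustrative only; it is not the thesis heuristic or the Hodgson et al. algorithm, and the job data are invented.

```python
def choose_routes(jobs, machine_load):
    """Greedy alternative-route selection: assign each job the route that adds
    the least work to the most heavily loaded machine it would use.

    jobs maps job -> list of candidate routes; a route is a list of
    (machine, processing_time) pairs.
    """
    assignment = {}
    for job, routes in jobs.items():
        def congestion(route):
            return max(machine_load[m] + t for m, t in route)
        best = min(routes, key=congestion)
        for m, t in best:
            machine_load[m] += t  # commit the chosen route's workload
        assignment[job] = best
    return assignment

# Illustrative data: two jobs, each with a preferred and an alternative route.
jobs = {
    "J1": [[("M1", 5), ("M2", 3)], [("M3", 6), ("M2", 3)]],
    "J2": [[("M1", 4), ("M3", 2)], [("M2", 4), ("M3", 2)]],
}
load = {"M1": 0.0, "M2": 0.0, "M3": 0.0}
print(choose_routes(jobs, load))
print(load)
```

A full approach would feed the selected routes into a scheduling algorithm and evaluate the resulting Lmax, re-routing jobs when bottlenecks emerge.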
56. Heuristic Methods for Gang-Rip Saw Arbor Design and Scheduling. Aksakalli, Vural, 14 November 1999 (has links)
(Under the direction of Dr. Yahya Fathi.) This research considers the problem of designing and scheduling arbors for gang-rip saw systems. Such systems are typically used within the furniture manufacturing industry for processing lumber: lumber boards are first ripped lengthwise into strips of different widths and then cut to the required lengths used in manufacturing. A saw with multiple cutting channels performs this operation. The saw has fixed blades at specific positions on a rotating shaft, which rip incoming lumber boards into the required finished widths. The pattern of cutting channels (i.e., the setting of the blades) along the saw shaft is referred to as an "arbor". A typical instance of the problem consists of (1) a set of required finished widths and their corresponding demands, (2) a frequency distribution of lumber boards in the uncut stock, (3) a shaft length, and (4) a blade width. The objective is to design a set of one or more arbors, and the corresponding quantity of lumber to run through each arbor, such that the total amount of waste generated is minimized while the demand is satisfied. In this research, we focus on solving the problem using only one arbor. First, we discuss the computational complexity of the problem and propose a total enumeration procedure that can be used to solve relatively small instances. Then, we develop algorithms based on heuristic approaches such as local improvement procedures, simulated annealing, and genetic algorithms. Our computational experiments indicate that a local improvement procedure with two nested loops, performing local search with a different neighborhood structure within each loop, gives very high quality solutions to the problem within very short execution times.
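To illustrate the single-arbor local improvement idea, here is a toy sketch: a simplified waste model (channels laid from one board edge, demand constraints ignored) and a one-neighborhood hill climb over channel widths. The widths, kerf, shaft length, and board distribution are invented, and the waste model is a deliberate simplification of the real gang-rip process.

```python
import random

random.seed(0)
WIDTHS = [1.75, 2.25, 2.75, 3.25]   # candidate finished widths (inches, illustrative)
KERF = 0.125                        # blade width
SHAFT = 12.0                        # usable shaft length
BOARDS = [random.uniform(7.0, 12.0) for _ in range(500)]  # sampled stock widths

def expected_waste(arbor):
    """Toy waste model: channels are laid from the left board edge; a board
    yields every channel it fully covers, and the remainder is waste."""
    edges, pos = [], 0.0
    for w in arbor:
        pos += w + KERF
        edges.append(pos)
    waste = 0.0
    for b in BOARDS:
        usable = sum(w for w, e in zip(arbor, edges) if e <= b)
        waste += b - usable
    return waste / len(BOARDS)

def local_improvement(arbor):
    """One-neighborhood hill climb: change a single channel's width whenever
    that reduces expected waste and the arbor still fits on the shaft."""
    improved = True
    while improved:
        improved = False
        for i in range(len(arbor)):
            for w in WIDTHS:
                trial = arbor[:i] + [w] + arbor[i + 1:]
                if (sum(trial) + KERF * len(trial) <= SHAFT
                        and expected_waste(trial) < expected_waste(arbor)):
                    arbor, improved = trial, True
    return arbor

best = local_improvement([2.25, 2.25, 2.25, 2.25])
print(best, round(expected_waste(best), 3))
```

The nested-loop procedure reported above would alternate between neighborhoods of this kind rather than using a single width-swap move.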
57. A Wavelet-Based Procedure for Process Fault Detection. Lada, Emily Kate, 06 April 2000 (has links)
The objective of this research is to develop a procedure for detecting faults in a particular process by analyzing data generated by the process on a compressed wavelet scale. In order to compress the data, three different methods are compared for reducing the number of wavelet coefficients needed to obtain an accurate representation of the data. One method is ultimately chosen to compress all data used in this study. This method, which involves minimizing the relative reconstruction error, effectively balances model parsimony against reconstruction error. The compressed data from seven in-control runs is used to define a standard, or baseline, by which to compare new data sets. The compressed baseline process is defined by forming the union of the top, or coarsest, coefficient positions selected by the relative reconstruction error method for each of the seven runs. In order to determine if a new, compressed data set differs significantly from the baseline, a variant of Hotelling's T²-statistic for two-sample problems is formulated. This statistic is tested by applying it to four induced-fault data sets, as well as to 21 in-control data sets. The results provide substantial evidence of the statistic's effectiveness in detecting process faults from compressed data. It is also shown that traditional bootstrapping methods cannot be implemented in this case.
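A rough sketch of the compress-then-test idea, assuming PyWavelets is available: keep the largest wavelet coefficients of each in-control run, pool the retained positions, and score a new run against the baseline. The diagonal-variance statistic below is a simplification of the two-sample Hotelling T² variant described above, and the signals are synthetic.

```python
import numpy as np
import pywt  # assumed available (PyWavelets)

def top_coefficient_positions(signal, keep=16, wavelet="haar"):
    """Flatten the discrete wavelet decomposition and return the indices of the
    `keep` largest-magnitude coefficients along with the full coefficient vector."""
    coeffs = np.concatenate(pywt.wavedec(signal, wavelet))
    return np.argsort(np.abs(coeffs))[-keep:], coeffs

def t2_statistic(new_run, baseline_runs, keep=16):
    """Diagonal-covariance stand-in for the two-sample T^2 test: distance of a
    new run's retained coefficients from the pooled baseline positions."""
    positions, base = set(), []
    for run in baseline_runs:
        pos, coeffs = top_coefficient_positions(run, keep)
        positions |= set(pos.tolist())
        base.append(coeffs)
    idx = sorted(positions)
    base = np.array(base)[:, idx]
    mean, var = base.mean(axis=0), base.var(axis=0, ddof=1) + 1e-9
    _, new_coeffs = top_coefficient_positions(new_run, keep)
    return float(np.sum((new_coeffs[idx] - mean) ** 2 / var))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128)
baseline = [np.sin(6 * np.pi * t) + 0.1 * rng.standard_normal(128) for _ in range(7)]
faulty = np.sin(6 * np.pi * t) + 0.6 * (t > 0.5) + 0.1 * rng.standard_normal(128)
print(t2_statistic(baseline[0], baseline[1:]))  # small: in control
print(t2_statistic(faulty, baseline))           # large: induced mean shift
```

Pooling the retained positions across the seven runs mirrors the union construction of the compressed baseline described above.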
58. Simulation Optimization Using Soft Computing. Medaglia, Andres L., 25 January 2001 (has links)
To date, most of the research in simulation optimization has been focused on single response optimization over a continuous space of input parameters. However, the optimization of more complex systems does not fit this framework. Decision makers often face the problem of optimizing multiple performance measures of systems with both continuous and discrete input parameters. Previously acquired expert knowledge of the system is seldom incorporated into the simulation optimization engine. Furthermore, when the goals of the system design are stated in natural language or vague terms, current techniques are unable to deal with this situation. For these reasons, we define and study the fuzzy single response simulation optimization (FSO) and fuzzy multiple response simulation optimization (FMSO) problems.

The primary objective of this research is to develop an efficient and robust method for simulation optimization of complex systems with multiple vague goals. This method uses a fuzzy controller to incorporate existing knowledge to generate high quality approximate Pareto optimal solutions in a minimum number of simulation runs.

For comparison purposes, we also propose an evolutionary method for solving the FMSO problem. Extensive computational experiments on the design of a flow line manufacturing system (in terms of tandem queues with blocking) have been conducted. Both methods are able to generate high quality solutions in terms of Zitzler and Thiele's "dominated space" metric. Both methods are also able to generate an even sample of the Pareto front. However, the fuzzy controlled method is more efficient, requiring fewer simulation runs than the evolutionary method to achieve the same solution quality.

To accommodate the complexity of natural language, this research also provides a new Bezier curve-based mechanism to elicit knowledge and express complex vague concepts. To date, this is perhaps the most flexible and efficient mechanism for both automatic and interactive generation of membership functions for convex fuzzy sets.
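The Bezier-based membership mechanism can be pictured with a short sketch: sample a cubic Bezier defined by four (value, membership) control points and interpolate to obtain a membership function. The control points and the "good throughput" example are illustrative assumptions, not the thesis's elicitation procedure.

```python
import numpy as np

def bezier_membership(control_points, samples=200):
    """Sample a cubic Bezier curve and return a callable membership function.

    control_points: four (x, mu) pairs with mu in [0, 1]; monotone x-coordinates
    give a single-valued curve suitable as one shoulder of a convex fuzzy set.
    """
    p = np.asarray(control_points, dtype=float)
    t = np.linspace(0.0, 1.0, samples)[:, None]
    curve = ((1 - t) ** 3 * p[0] + 3 * (1 - t) ** 2 * t * p[1]
             + 3 * (1 - t) * t ** 2 * p[2] + t ** 3 * p[3])
    xs, mus = curve[:, 0], np.clip(curve[:, 1], 0.0, 1.0)
    return lambda x: float(np.interp(x, xs, mus))

# Illustrative elicitation of a vague goal: "throughput around 90 or better is good".
good = bezier_membership([(70, 0.0), (80, 0.1), (85, 0.9), (90, 1.0)])
for x in (72, 84, 95):
    print(x, round(good(x), 2))
```

Moving the two interior control points reshapes the shoulder of the curve, which is what makes this kind of construction convenient for interactive elicitation.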
59. Implicit Runge-Kutta Methods for Stiff and Constrained Optimal Control Problems. Biehn, Neil David, 23 March 2001 (has links)
The purpose of the research presented in this thesis is to better understand and improve direct transcription methods for stiff and state-constrained optimal control problems. When some implicit Runge-Kutta methods are implemented as approximations to the dynamics of an optimal control problem, a loss of accuracy occurs when the dynamics are stiff or constrained. A new grid refinement strategy which exploits the variation of accuracy is discussed. In addition, the use of a residual function in place of classical error estimation techniques is proven to work well for stiff systems. Computational experience reveals the improvement in efficiency and reliability when the new strategies are incorporated as part of a direct transcription algorithm. For index-three differential-algebraic equations, the solutions of some implicit Runge-Kutta methods may not converge. However, computational experience reveals apparent convergence for the same methods when index-three state inequality constraints become active. It is shown that the solution chatters along the constraint boundary, allowing for better approximations. Moreover, the consistency of the nonlinear programming problem formed by a direct transcription algorithm using an implicit Runge-Kutta approximation is proven for state constraints of arbitrary index.
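As a reminder of why implicit Runge-Kutta methods matter for stiff dynamics, the sketch below applies backward Euler (the simplest implicit RK scheme) to a stiff test equation; an explicit step of the same size would be unstable. This is generic background, not the thesis's direct transcription algorithm, grid refinement strategy, or residual-based error estimate.

```python
import numpy as np
from scipy.optimize import fsolve

def f(t, y):
    """Stiff test dynamics: very fast decay toward a slowly varying target."""
    return -1000.0 * (y - np.cos(t))

def backward_euler(y0, t0, t1, steps):
    """Backward Euler, a one-stage implicit RK method: each step solves
    y_{n+1} = y_n + h * f(t_{n+1}, y_{n+1}) with a nonlinear solver."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        t_next = t + h
        y = fsolve(lambda z: z - y - h * f(t_next, z), y)[0]
        t = t_next
    return y

# With only 50 steps (h = 0.02) the implicit method stays stable on this stiff
# problem; explicit Euler with the same step size (|1 - 1000h| >> 1) blows up.
print(backward_euler(y0=0.0, t0=0.0, t1=1.0, steps=50))
print(np.cos(1.0))  # the solution rapidly locks onto cos(t)
```

Direct transcription embeds steps of this kind as equality constraints in a nonlinear program, which is where the accuracy and consistency questions studied above arise.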
60. Fault Detection and Model Identification in Linear Dynamical Systems. Horton, Kirk Gerritt, 05 April 2001 (has links)
(Under the direction of Dr. Stephen La Vern Campbell.) Linear dynamical systems, Ex'+Fx=f(t), in which E is singular, are useful in a wide variety of applications. Because of this widespread applicability, much research has been done recently to develop theory for the design of linear dynamical systems. A key aspect of system design is fault detection and isolation (FDI). One avenue of FDI is the multi-model approach, in which the parameters of the nominal, unfailed model of the system are known, as well as the parameters of one or more fault models. The design goal is to obtain an indicator of when a fault has occurred and, when more than one type is possible, which type of fault it is. A choice that must be made in the system design is how to model noise. One way is as a bounded-energy signal. This approach places very few restrictions on the types of noisy systems that can be addressed and imposes no complex modeling requirement. This thesis applies the multi-model approach to FDI in linear dynamical systems, modeling noise as bounded-energy signals. A complete algorithm is developed, requiring very little on-line computation, with which nearly perfect fault detection and isolation over a finite horizon is attained. The algorithm applies techniques that convert complex system relationships into necessary and sufficient conditions for the solutions to optimal control problems. The first such problem provides the fault indicator via the minimum-energy detection signal, while the second provides for fault isolation via a separating hyperplane. The algorithm is implemented and tested on a suite of examples in commercial optimization software. The algorithm is shown to have promise for nonlinear problems, time-varying problems, and certain types of linear problems for which existing theory is not suitable.
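The multi-model idea can be illustrated with a crude residual comparison: simulate both the nominal and the fault model under the same input and see which one better explains the measured output. This sketch is only a caricature of the approach above; the thesis's minimum-energy detection signal and separating-hyperplane construction are not implemented here, and the models and noise level are invented.

```python
import numpy as np

def residual_energy(A, B, u, y, x0):
    """Simulate x_{k+1} = A x_k + B u_k and accumulate the energy of the output
    residual y_k - x_k (an identity output map is assumed for the sketch)."""
    x, energy = np.asarray(x0, dtype=float), 0.0
    for uk, yk in zip(u, y):
        energy += float(np.sum((yk - x) ** 2))
        x = A @ x + B @ uk
    return energy

# Nominal and fault models (illustrative 2-state discrete-time systems).
A_nom = np.array([[0.9, 0.1], [0.0, 0.8]])
A_flt = np.array([[0.9, 0.1], [0.0, 0.3]])  # fault: degraded actuator pole
B = np.array([[0.0], [1.0]])

rng = np.random.default_rng(1)
u = [np.array([1.0]) for _ in range(50)]

# Generate data from the *fault* model plus small noise, then ask which model's
# residual energy is smaller -- a simple multi-model fault indicator.
x, y = np.zeros(2), []
for uk in u:
    y.append(x + 0.01 * rng.standard_normal(2))
    x = A_flt @ x + B @ uk

print("nominal residual:", round(residual_energy(A_nom, B, u, y, np.zeros(2)), 3))
print("fault residual:  ", round(residual_energy(A_flt, B, u, y, np.zeros(2)), 3))
```

In the bounded-energy-noise setting studied above, the detection signal is instead chosen by an optimization so that no admissible noise can make the two models' outputs indistinguishable.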