71 |
Primary and secondary log breakdown simulation. Todoroki, Christine Louisa, January 1997.
Log breakdown by sawing can be viewed as a multi-phase process that converts logs into boards by a series of cutting operations. In the primary phase, logs are sawn into slabs of wood known as flitches or cants. These are further processed by secondary operations that resaw, edge (cut lengthwise), and trim (cut widthwise) the raw material, resulting in the manufacture of a board product whose value is influenced by its dimensions and quality (as indicated by a grade). Board grade is in turn determined by the number, type, size, and location of defects. Owing to its biological origins, each log, and each subsequent board, is unique. Furthermore, as each sawmill, and each processing centre within a mill, has a unique configuration, the problem of determining how each log entering a mill should be sawn is very complex. Effective computer simulation of log breakdown processes must therefore entail detailed descriptions of both the geometry and the quality of individual logs, along with appropriate strategies at each breakdown phase. In this thesis, models for emulating log breakdown are developed in conjunction with an existing sawing simulation system that requires, as input, detailed three-dimensional descriptions of both internal and external log characteristics. Models based on heuristic and enumerative procedures, and those based upon the principles of dynamic programming (DP), are formulated, encoded, and compared. Log breakdown phases are considered both independently and in a combined, integrated approach, working backwards from the board product through to the primary log breakdown phase. This approach permits methodology developed for the later processes to be embedded within the primary phase, thus allowing the determination of a global rather than local solution to the log breakdown problem, whose objective is to seek the highest possible solution quality within the minimum possible time. Simulation results indicate that solution quality and processing speed are influenced by both the solution methodology and the degree of data complexity. When the structure of either factor is simplified, solutions are generated more rapidly, but with an accompanying reduction in solution quality. A promising compromise that combines DP techniques with mathematical functions based on a subset of the original data is presented.
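The abstract does not reproduce the thesis's actual recursion, but the value-recovery idea behind DP-based breakdown can be illustrated with a one-dimensional analogue: choosing crosscuts so that the boards recovered from a log maximize total value. The sketch below is a minimal illustration only; the product lengths and values are hypothetical, and the real simulator works with full three-dimensional geometry and defect data.

```python
# Illustrative one-dimensional sketch only: real sawing simulators such as the
# one described in the thesis work with 3-D log geometry and defect data.
def best_breakdown(log_length, products):
    """Dynamic program over cut positions.

    products: list of (length, value) pairs for board products (hypothetical data).
    Returns the maximum recoverable value and the chosen board lengths.
    """
    best = [0.0] * (log_length + 1)      # best[x] = max value of a log of length x
    choice = [None] * (log_length + 1)
    for x in range(1, log_length + 1):
        for length, value in products:
            if length <= x and best[x - length] + value > best[x]:
                best[x] = best[x - length] + value
                choice[x] = length
    boards, x = [], log_length
    while x > 0 and choice[x] is not None:
        boards.append(choice[x])
        x -= choice[x]
    return best[log_length], boards

# Hypothetical products: (length in dm, value in $)
products = [(8, 12.0), (12, 20.0), (16, 29.0)]
print(best_breakdown(49, products))   # -> (87.0, [16, 16, 16])
```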
|
72 |
Platform based approach for economic production of a product family. Choubey, Anand M, January 1900.
Master of Science / Department of Industrial & Manufacturing Systems Engineering / David H. Ben-Arieh / In the present competitive market, there is growing concern for ascertaining and fulfilling individual customers' wants and needs. The focus of manufacturing has therefore been shifting from mass production to mass customization, which requires manufacturers to introduce an increasing number of products with shorter life spans and at lower cost. A further challenge is to manage this variety of products in an environment where demands are stochastic and the lead times to fulfill those demands are short.
The focus of this thesis is to develop and investigate platform-based production strategies, as opposed to producing each product independently, that ensure economic production of a broad range of specialized products with short final assembly times and under demand uncertainty.
The thesis proposes three platform-based production models. The first model considers the economic production of products based on a single platform with forecasted product demands. It is formulated as a general optimization problem that minimizes total production costs.
The second model extends the first to production based on multiple platforms and minimizes total production costs together with the setup costs of having multiple platforms.
The third model also extends the first, treating product demands as stochastic. It minimizes total production costs plus the shortage costs of lost demand and the holding costs of surplus platforms under demand uncertainty. The problem is modeled as a two-stage stochastic program with recourse.
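The abstract does not spell out the third model's formulation; for reference, the generic two-stage stochastic program with recourse that it instantiates has the standard form below (x denotes the first-stage platform decisions, y the second-stage recourse, and ξ the random demand scenario; the symbols are generic, not the thesis's notation).

```latex
% Generic two-stage stochastic program with recourse (standard form).
\begin{aligned}
\min_{x}\quad & c^{\top}x + \mathbb{E}_{\xi}\big[\,Q(x,\xi)\,\big]
  && \text{s.t. } Ax = b,\; x \ge 0,\\
Q(x,\xi) \;=\; \min_{y}\quad & q(\xi)^{\top}y
  && \text{s.t. } W y = h(\xi) - T(\xi)\,x,\; y \ge 0.
\end{aligned}
```

In the production setting described above, the first stage would fix how many of each platform to build before demand is known, while the recourse stage customizes platforms into final products, or absorbs shortage and holding costs, once a demand scenario is realized.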
As only small instances of the models could be solved exactly in reasonable time, various heuristics are developed by combining genetic evolutionary search approaches with operations research techniques to solve realistic-size problems. The production models are validated, and the performance of the heuristics tailored for these applications is investigated, by applying the solution approaches to a case study of cordless drills.
|
73 |
Simultaneously lifting sets of variables in binary Knapsack problems. Sharma, Kamana, January 1900.
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Integer programming (IP) has been and continues to be widely used by industries to minimize cost and effectively manage resources. Faster computers and innovative IP techniques have enabled the solution of many large-scale IPs. However, IPs are NP-hard, and many require exponential time to solve.
Lifting is one of the most widely used techniques for reducing computational time and is widely applied in today's commercial IP software. Lifting was first introduced by Gomory for bounded integer programs, and a theoretical, though computationally intractable, technique to simultaneously lift sets of variables was introduced by Zemel in 1978.
This thesis presents a new algorithm, the Maximal Simultaneous Lifting Algorithm (MSLA), to simultaneously uplift sets of binary integer variables into a cover inequality. The resulting lifted inequalities are strong and are facet defining under fairly moderate assumptions.
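For readers unfamiliar with the seed inequalities that MSLA strengthens, the standard cover inequality for a binary knapsack constraint is shown below (classical background, not a result of this thesis). It is valid because setting every variable of a cover to one would violate the knapsack constraint, and lifting strengthens it by assigning the largest valid coefficients to the remaining variables.

```latex
% Binary knapsack constraint: \sum_{j \in N} a_j x_j \le b,\; a_j > 0,\; x_j \in \{0,1\}.
% A set C \subseteq N is a cover if \sum_{j \in C} a_j > b; its cover inequality is
\sum_{j \in C} x_j \;\le\; |C| - 1 .
```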
A computational study shows that this algorithm can find numerous strong inequalities for random Knapsack (KP) instances. The pre-processing time observed for these instances is less than 1/50th of a second, which is negligible. These simultaneously lifted inequalities are easy to find, and incorporating these cuts into KP instances reduced solution time by an average of 41%. Therefore, implementing MSLA should be highly beneficial for large real-world problems.
|
74 |
Exact synchronized simultaneous uplifting over arbitrary initial inequalities for the knapsack polytope. Beyer, Carrie Austin, January 1900.
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Integer programs (IPs) are mathematical models that can provide an optimal solution to a variety of different problems. They have been used to reduce costs and optimize organizations. Additionally, IPs are NP-complete, and as a result many IPs cannot be solved in practice. Cutting planes, or valid inequalities, have been used to decrease the time required to solve IPs.

Lifting is a technique that strengthens existing valid inequalities. Lifting can result in facet defining inequalities, which are the theoretically strongest valid inequalities. Because of these properties, lifting procedures are used in software to reduce the time required to solve an IP. This thesis introduces a new algorithm for exact synchronized simultaneous uplifting over an arbitrary initial inequality for knapsack problems. Synchronized Simultaneous Lifting (SSL) is a pseudopolynomial time algorithm requiring O(nb + n³) effort. It exactly uplifts two sets simultaneously into an initial arbitrary valid inequality and creates multiple inequalities of a particular form. This previously undiscovered class of inequalities generated by SSL can be facet defining.
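The abstract does not reproduce SSL's recursion; for context, the classical exact up-lifting of a single binary variable, which synchronized simultaneous lifting generalizes to whole sets, is the following (standard background, generic notation).

```latex
% Suppose \sum_{j \in C} \alpha_j x_j \le \beta is valid for the knapsack polytope
% restricted to x_k = 0. The exact up-lifting coefficient of x_k is
\alpha_k \;=\; \beta \;-\; \max\Big\{ \sum_{j \in C} \alpha_j x_j \;:\;
      \sum_{j \in C} a_j x_j \;\le\; b - a_k,\;\; x \in \{0,1\}^{C} \Big\},
% after which \alpha_k x_k + \sum_{j \in C} \alpha_j x_j \le \beta is valid for the full polytope.
```

The inner maximization is itself a knapsack problem, which is why lifting schemes that handle several variables or sets at once must be structured carefully to remain pseudopolynomial, as the O(nb + n³) bound above indicates.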
A small computational study shows that SSL is quick to execute, requiring on average less than a quarter of a second. Additionally, applying SSL inequalities to a knapsack problem enabled commercial software to solve problems that it could not solve without them.
|
75 |
Optimization and resource management in wireless sensor networks. Roseveare, Nicholas, January 1900.
Doctor of Philosophy / Department of Electrical and Computer Engineering / Balasubramaniam Natarajan / In recent years, there has been a rapid expansion in the development and use of low-power, low-cost wireless modules with sensing, computing, and communication functionality. A wireless sensor network (WSN) is a group of these devices networked together wirelessly. Wireless sensor networks have found widespread application in infrastructure, environmental, and human health monitoring, surveillance, and disaster management. While there are many interesting problems within the WSN framework, we address the challenge of energy availability in a WSN tasked with a cooperative objective. We develop approximation algorithms and carry out an analysis of concave utility maximization in resource-constrained systems. Our analysis motivates a unique algorithm, which we apply to resource management in WSNs. We also investigate energy harvesting as a way of improving system lifetime. We then analyze the effect of using these limited and stochastically available communication resources on the convergence of decentralized optimization techniques. The main contributions of this research are: (1) new optimization formulations that explicitly consider the energy states of a WSN executing a cooperative task; (2) several analytical insights regarding the distributed optimization of resource-constrained systems; (3) a varied set of algorithmic solutions, some novel to this work and others based on extensions of existing techniques; and (4) an analysis of the effect of using stochastic resources (e.g., energy harvesting) on the performance of decentralized optimization methods. Throughout this work, we apply our developments to distribution estimation and rate maximization. The simulation results obtained help verify algorithm performance. This research provides valuable intuition concerning the trade-offs between energy conservation and system performance in WSNs.
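As a simplified, concrete instance of the concave utility maximization analyzed in the dissertation, consider splitting a shared energy budget B across n tasks, where task i consumes c_i units of energy per unit rate and yields utility log(1 + r_i). The sketch below, a minimal illustration with hypothetical parameters rather than the dissertation's formulation, solves it by bisection on the Lagrange multiplier, the standard water-filling argument.

```python
# Minimal sketch: maximize sum_i log(1 + r_i) subject to sum_i c_i * r_i <= B, r_i >= 0.
# KKT conditions give r_i(lam) = max(0, 1/(lam * c_i) - 1); bisect on lam so the
# budget is met. Parameters below are illustrative only.
def water_fill(costs, budget, iters=100):
    def rates(lam):
        return [max(0.0, 1.0 / (lam * c) - 1.0) for c in costs]

    def spent(lam):
        return sum(c * r for c, r in zip(costs, rates(lam)))

    lo, hi = 1e-9, 1e9            # bracket for the multiplier
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if spent(mid) > budget:   # spending decreases as lam grows
            lo = mid
        else:
            hi = mid
    return rates(hi)

costs, budget = [1.0, 2.0, 4.0], 10.0
print([round(r, 3) for r in water_fill(costs, budget)])   # approx [4.667, 1.833, 0.417]
```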
|
76 |
An employee assignment optimization model exploring cross-training and specialization through multiple management strategies. Wipperfurth, Christy, January 1900.
Master of Agribusiness / Department of Agricultural Economics / Jason Bergtold / Company managers continually face challenges in the market, such as increased demand for their services and variability in the types of service requested. In addition, managers may face internal challenges during periods of adjustment, such as moving the company forward through a hiring freeze. In these situations, a manager must be able to allocate scarce resources in a way that allows the company to continue to perform. For employees, this could mean specializing in tasks or increasing cross-training to improve work schedule flexibility. The objective of this research is to determine the optimal allocation of employees to tasks, given resource constraints and the need for staff flexibility, to satisfy alternative management strategies. The setting is the service industry, in particular a laboratory providing testing and consulting services.

An optimization model was developed to incorporate key aspects of a company's operation and determine how labor should be allocated among tasks, and for how many hours, to satisfy the manager's objective. The model estimates the optimal allocation of labor and how much production and net revenue would be generated with more specialized employees. A sensitivity analysis was employed to determine the impact of cross-training current staff. Results indicate that cross-training affords flexibility; however, the impact on overall production varies depending on the employee trained. The highest benefit is derived from training a lower-producing employee into a high-value task at a high productivity rate. Specialization can help improve net returns for higher-valued tasks, but may limit flexibility, as employees cannot switch between tasks as readily.
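The thesis's full model is not reproduced in the abstract; a minimal linear-programming sketch of the same idea, allocating employee hours to tasks to maximize net revenue subject to available hours and a skills (cross-training) matrix, might look as follows. All employee names, rates, and hours are illustrative assumptions, and the scipy solver merely stands in for whatever software the thesis used.

```python
# Minimal sketch of an hour-allocation LP (illustrative data, not the thesis model).
# Decision variables: h[e][t] = hours employee e spends on task t. Maximize net
# revenue subject to each employee's available hours; untrained employee/task
# pairs are fixed to zero via variable bounds.
import numpy as np
from scipy.optimize import linprog

employees = ["A", "B", "C"]
tasks = ["testing", "consulting"]
hours_available = [40, 40, 30]                  # per employee
net_revenue_per_hour = np.array([[50, 0],       # A is not cross-trained in consulting
                                 [45, 80],
                                 [30, 60]], dtype=float)
trained = net_revenue_per_hour > 0              # skills matrix implied by the rates

n_e, n_t = len(employees), len(tasks)
c = -net_revenue_per_hour.flatten()             # linprog minimizes, so negate

# One "total hours" constraint per employee: sum_t h[e][t] <= hours_available[e]
A_ub = np.zeros((n_e, n_e * n_t))
for e in range(n_e):
    A_ub[e, e * n_t:(e + 1) * n_t] = 1.0
b_ub = hours_available

bounds = [(0, None) if trained[e, t] else (0, 0)
          for e in range(n_e) for t in range(n_t)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x.reshape(n_e, n_t), -res.fun)        # hours matrix and total net revenue
```

Cross-training an employee in an additional task would correspond to relaxing the matching (0, 0) bound and supplying a rate for that pair, then re-solving, which mirrors the sensitivity analysis described above.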
|
77 |
Optimal mobility patterns in epidemic networks. Nirkhiwale, Supriya, January 1900.
Master of Science / Department of Electrical and Computer Engineering / Caterina M. Scoglio / Disruption Tolerant Networks, or opportunistic networks, represent a class of networks in which there is no contemporaneous path from source to destination. In other words, these are networks with intermittent connections. They are generally sparse or highly mobile wireless networks. Each node has a limited radio range, and connections between nodes may be disrupted by node movement, hostile environments, power sleep schedules, and so on. A common example of such a network is a sensor network monitoring a natural environment, a military field, or a herd of animals under study. Epidemic routing is a widely proposed routing mechanism for data propagation in these types of networks. Under this mechanism, the source copies its packets to all the nodes it meets within its radio range. These nodes in turn copy the received packets to the other nodes they meet, and so on. The data to be transmitted travels in a way analogous to the spread of an infection in a biological network. The destination finally receives the packet, and measures are taken to eradicate the packet from the network. Routing in epidemic networks must balance minimizing the delivery delay against reducing the consumption of resources: every node has severe power constraints, and the network is also susceptible to temporary but random node failures. In previous work, the mobility parameter has been treated as constant for a given setting; here we consider a varying mobility parameter. In this framework, we determine the optimal mobility pattern and the forwarding policy that a network should follow in order to meet the trade-off between delivery delay and power consumption. In addition, the mobility pattern should be one that can be practically implemented. We formulate an optimization problem that is solved using the principles of dynamic programming, and extensive simulations show that the optimization problem has a global solution.
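The epidemic copying described above has a standard fluid-model approximation that is often used as the starting point for such delay/energy trade-off analyses (classical background, not this thesis's specific model). If I(t) is the expected number of nodes carrying a copy of the packet out of N nodes, and β is the pairwise meeting rate induced by the mobility pattern, then

```latex
\frac{dI(t)}{dt} \;=\; \beta\, I(t)\,\bigl(N - I(t)\bigr), \qquad I(0)=1
\quad\Longrightarrow\quad
I(t) \;=\; \frac{N}{\,1 + (N-1)\,e^{-\beta N t}\,}.
```

Because β depends on how the nodes move, varying the mobility pattern effectively controls β over time, which is what makes the delay versus power trade-off amenable to a dynamic programming treatment.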
|
78 |
Pricing American options with jump-diffusion by Monte Carlo simulation. Fouse, Bradley Warren, January 1900.
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Chih-Hang Wu / In recent years the stock markets have shown tremendous volatility, with significant spikes and drops in stock prices. Within the past decade, there have been numerous jumps in the market; one key example was on September 17, 2001, when the Dow Jones Industrial Average dropped 684 points following the 9-11 attacks on the United States. These evident jumps in the markets show the inaccuracy of the Black-Scholes model for pricing options. Merton provided the first research to address this problem in 1976 when he extended the Black-Scholes model to include jumps in the market. In recent years, Kou has shown that the distribution of jump sizes used in Merton's model does not efficiently model the actual movements of the markets. Consequently, Kou modified Merton's model, changing the jump-size distribution from a normal distribution to the double exponential distribution.

Kou's research utilizes mathematical equations to estimate the value of an American put option where the underlying stock follows a jump-diffusion process. The research contained within this thesis extends Kou's work using Monte Carlo simulation (MCS) coupled with least-squares regression to price this type of American option. Utilizing MCS provides a continuous exercise and pricing region, which is a distinct difference, and advantage, between MCS and other analytical techniques. The aim of this research is to investigate whether or not MCS is an efficient means of pricing American put options where the underlying stock undergoes a jump-diffusion process. This thesis also extends the simulation to utilize copulas in the pricing of baskets containing several of the aforementioned American options. The use of copulas creates a joint distribution from the individual marginal distributions and provides an efficient means of modeling multiple options and the correlation between them.
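The simulation details are not given in the abstract; a minimal sketch of generating price paths under a Kou-style double exponential jump-diffusion, the input that a least-squares MCS pricing step would consume, is shown below. The parameter values are purely illustrative, and the full method would add a Longstaff-Schwartz-style regression over the exercise dates, which is omitted here.

```python
# Minimal sketch: Kou double exponential jump-diffusion paths (illustrative parameters).
# Risk-neutral dynamics: dS/S = (r - lam*kappa) dt + sigma dW + jumps, where log jump
# sizes are double exponential: upward Exp(eta1) with prob p, downward Exp(eta2) otherwise.
import numpy as np

rng = np.random.default_rng(0)

def kou_paths(s0, r, sigma, lam, p, eta1, eta2, T, n_steps, n_paths):
    dt = T / n_steps
    kappa = p * eta1 / (eta1 - 1.0) + (1.0 - p) * eta2 / (eta2 + 1.0) - 1.0  # E[e^Y] - 1, needs eta1 > 1
    drift = (r - 0.5 * sigma**2 - lam * kappa) * dt
    log_s = np.full(n_paths, np.log(s0))
    paths = [np.exp(log_s)]
    for _ in range(n_steps):
        diffusion = sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)
        jumps = np.zeros(n_paths)
        for i in np.nonzero(n_jumps)[0]:
            for _ in range(n_jumps[i]):
                if rng.random() < p:
                    jumps[i] += rng.exponential(1.0 / eta1)   # upward jump in log price
                else:
                    jumps[i] -= rng.exponential(1.0 / eta2)   # downward jump
        log_s = log_s + drift + diffusion + jumps
        paths.append(np.exp(log_s))
    return np.array(paths)            # shape (n_steps + 1, n_paths)

# Sanity check with a European put (no early exercise): average discounted payoff.
S = kou_paths(s0=100, r=0.05, sigma=0.2, lam=1.0, p=0.4, eta1=10.0, eta2=5.0,
              T=1.0, n_steps=50, n_paths=20000)
print(np.exp(-0.05) * np.maximum(100 - S[-1], 0.0).mean())
```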
The research contained within this thesis shows that MCS provides a means of accurately pricing American put options where the underlying stock follows a jump-diffusion process. It also shows that the approach can be extended with copulas to price baskets of such options. Numerical examples are presented for both parts to exemplify the excellent results obtained using MCS for pricing options in both single-dimensional and multidimensional problems.
|
79 |
Cliqued holes and other graphic structures for the node packing polytope. Conley, Clark Logan, January 1900.
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Graph theory is a widely studied topic. A graph is defined by two important features: nodes and edges. Nodes can represent people, cities, variables, resources, or products, while edges represent a relationship between two nodes. Using graphs to solve problems has played a major role in a diverse set of industries for many years.

Integer programs (IPs) are mathematical models used to optimize a problem. Often this involves maximizing the utilization of resources or minimizing waste. IPs are most notably used when resources must take integer values, or cannot be split. IPs have been utilized by many companies for resource distribution, scheduling, and conflict management.

The node packing or independent set problem is a common combinatorial optimization problem. The objective is to select the maximum number of nodes in a graph such that no two selected nodes are adjacent. Node packing has been used in a wide variety of problems, including routing of vehicles and scheduling of machines.
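Two classical families of valid inequalities for the node packing polytope, which give the flavor of the structures introduced below, are the clique and odd hole inequalities (standard background, not results of this thesis). For a clique K and an odd hole, i.e. a chordless odd cycle, H of the graph:

```latex
\sum_{v \in K} x_v \;\le\; 1,
\qquad\qquad
\sum_{v \in H} x_v \;\le\; \frac{|H| - 1}{2}.
```

Both are valid because a packing can use at most one node of a clique and at most (|H| - 1)/2 nodes of an odd cycle; the cliqued hole and related structures of this thesis yield further inequalities in the same spirit.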
This thesis introduces several new graph structures, the cliqued hole, odd bipartite hole, and odd k-partite hole, and their corresponding valid inequalities for the node packing polyhedron. These are shown to be new valid inequalities, and conditions are provided for when they are facet defining, the strongest class of valid inequalities. These new valid inequalities can be used by practitioners to help solve node packing instances and integer programs.
|
80 |
Synchronized simultaneous lifting in binary knapsack polyhedra. Bolton, Jennifer Elaine, January 1900.
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Integer programs (IPs) are used by companies and organizations across the world to reach financial and time-related goals, most often through optimal resource allocation and scheduling. Unfortunately, integer programs are computationally difficult to solve, and in some cases the optimal solutions are unknown even with today's advanced computing machines.

Lifting is a technique that is often used to decrease the time required to solve an IP to optimality. Lifting begins with a valid inequality and strengthens it by changing the coefficients of variables in the inequality. Often, this technique results in facet defining inequalities, which are the theoretically strongest inequalities.

This thesis introduces a new type of lifting called synchronized simultaneous lifting (SSL). SSL lifts multiple sets of simultaneously lifted variables at the same time, which generates a new class of inequalities that previously would have required an oracle to find. Additionally, this thesis describes an algorithm to perform synchronized simultaneous lifting for a binary knapsack inequality, called the Synchronized Simultaneous Lifting Algorithm (SSLA). SSLA is a quadratic time algorithm that exactly simultaneously lifts two sets of simultaneously lifted variables.
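SSLA itself is not reproduced in the abstract; to make the underlying operation concrete, the sketch below brute-forces the exact up-lifting coefficient of a single variable into a cover inequality, the one-variable special case of the lifting discussed above. The knapsack instance is hypothetical and tiny, since plain enumeration is exponential, unlike SSLA's quadratic procedure.

```python
# Minimal brute-force sketch of exact (single-variable) up-lifting; SSLA handles
# whole sets of variables in quadratic time, which this toy version does not attempt.
from itertools import product

def uplift_coefficient(a, b, alpha, beta, k):
    """Largest valid coefficient for x_k in  alpha.x + alpha_k*x_k <= beta,
    given the binary knapsack  sum_j a[j]*x_j <= b."""
    support = list(alpha)                 # variables already in the inequality
    best = 0
    for bits in product((0, 1), repeat=len(support)):
        x = dict(zip(support, bits))
        if sum(a[j] * x[j] for j in support) <= b - a[k]:   # feasible with x_k = 1
            best = max(best, sum(alpha[j] * x[j] for j in support))
    return beta - best

# Hypothetical knapsack: 5x1 + 4x2 + 3x3 + 6x4 <= 9.
a, b = {1: 5, 2: 4, 3: 3, 4: 6}, 9
# Cover {1, 2, 3} gives the seed inequality x1 + x2 + x3 <= 2.
alpha, beta = {1: 1, 2: 1, 3: 1}, 2
print(uplift_coefficient(a, b, alpha, beta, k=4))   # -> 1, i.e. x1 + x2 + x3 + x4 <= 2
```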
Short computational studies show that SSLA can sometimes solve IPs to optimality that CPLEX, an advanced integer programming solver, cannot solve alone. Specifically, the SSL cuts allowed a 76 percent improvement over CPLEX alone.
|