331
Intermittent demand forecasting with integer autoregressive moving average models. Mohammadipour, Maryam. January 2009.
This PhD thesis focuses on using time series models for counts in modelling and forecasting a special type of count series called intermittent series. An intermittent series is a series of non-negative integer values with some zero values. Such series occur in many areas including inventory control of spare parts. Various methods have been developed for intermittent demand forecasting, with Croston's method being the most widely used. Some studies focus on finding a model underlying Croston's method. Since none of these studies has succeeded in demonstrating an underlying model for which Croston's method is optimal, the focus should now shift towards stationary models for intermittent demand forecasting. This thesis explores the application of a class of models for count data called Integer Autoregressive Moving Average (INARMA) models. INARMA models have had applications in areas such as medical science and economics, but this is the first attempt to use such a model-based method to forecast intermittent demand. In this PhD research, we first fill some gaps in the INARMA literature by finding the unconditional variance and the autocorrelation function of the general INARMA(p,q) model. The conditional expected value of the aggregated process over the lead time is also obtained, to be used as a lead time forecast. The accuracy of h-step-ahead and lead time INARMA forecasts is then compared to that obtained by the benchmark methods of Croston, the Syntetos-Boylan Approximation (SBA) and Shale-Boylan-Johnston (SBJ). The results of the simulation suggest that in the presence of high autocorrelation in the data, INARMA yields much more accurate one-step-ahead forecasts than the benchmark methods. The degree of improvement increases for longer data histories. It is also shown that, instead of identifying the autoregressive and moving average orders of the INARMA model, the most general of the candidate models can be used for forecasting. This is especially useful for series with short histories and high autocorrelation. The findings of the thesis have been tested on two real data sets: (i) Royal Air Force (RAF) demand history of 16,000 SKUs and (ii) 3,000 series of intermittent demand from the automotive industry. The results show that for sparse data with a long history, there is a substantial improvement in using INARMA over the benchmarks in terms of Mean Square Error (MSE) and Mean Absolute Scaled Error (MASE) for the one-step-ahead forecasts. However, for series with a short history the improvement is smaller. The improvement is greater for h-step-ahead forecasts. The results also confirm the superiority of INARMA over the benchmark methods for lead time forecasts.
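As an illustrative aside (not code from the thesis): a minimal Python sketch of the simplest member of this model class, the INAR(1) process with binomial thinning and Poisson innovations, together with its one-step-ahead conditional-mean forecast and Croston's method for comparison. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_inar1(alpha, lam, n):
    """Simulate an INAR(1) process: X_t = alpha o X_{t-1} + eps_t,
    where 'o' is binomial thinning and eps_t ~ Poisson(lam)."""
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning
        x[t] = survivors + rng.poisson(lam)
    return x

def inar1_forecast(x_t, alpha, lam):
    """One-step-ahead conditional mean: E[X_{t+1} | X_t] = alpha*X_t + lam."""
    return alpha * x_t + lam

def croston(x, a=0.1):
    """Croston's method: exponential smoothing of non-zero demand sizes (z)
    and of inter-demand intervals (p), separately; forecast = z / p."""
    z = p = None
    q = 1  # periods since the last non-zero demand
    for d in x:
        if d > 0:
            z = d if z is None else z + a * (d - z)
            p = q if p is None else p + a * (q - p)
            q = 1
        else:
            q += 1
    return z / p if z is not None else 0.0

series = simulate_inar1(alpha=0.6, lam=0.3, n=200)  # intermittent-looking counts
print("INAR(1) forecast:", inar1_forecast(series[-1], 0.6, 0.3))
print("Croston forecast:", croston(series))
```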
332
Planejamento de produção através do dimensionamento de lotes de itens únicos / Production planning by single-item lot sizing. Oliveira, Pedro Henrique Simoes de. 18 March 2011.
This text treats one of the core subjects in production planning, the single-item lot-sizing problem. A brief, informal description follows. Consider a time interval divided into periods, with a demand for an item associated with each period. Given the costs and any restrictions on production and storage, determine the periods in which production should occur, and in what quantity, so that the demands are met at the lowest possible cost while respecting the imposed restrictions. We present results on the optimal structure of the problem, its complexity, and algorithms for the basic cases of the problem.
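Illustrative sketch (assuming the simplest setting: no capacity restrictions, linear holding cost): the uncapacitated single-item problem described above admits the classic Wagner-Whitin dynamic program, one of the basic cases the text refers to. The demand and cost figures below are invented for demonstration.

```python
def wagner_whitin(demand, setup_cost, hold_cost):
    """Classic O(T^2) dynamic program for uncapacitated single-item
    lot sizing. best[t] = minimum cost of meeting demand in periods 0..t-1,
    using the property that each lot covers a contiguous block of periods."""
    T = len(demand)
    best = [0.0] + [float("inf")] * T
    last = [0] * T                    # last[t-1]: start period of the final lot
    for t in range(1, T + 1):
        for j in range(t):            # a lot produced in period j covers j..t-1
            holding = sum(hold_cost * (k - j) * demand[k] for k in range(j, t))
            cost = best[j] + setup_cost + holding
            if cost < best[t]:
                best[t], last[t - 1] = cost, j
    lots, t = [], T                   # backtrack to recover the lots
    while t > 0:
        j = last[t - 1]
        lots.append((j, sum(demand[j:t])))  # (production period, lot size)
        t = j
    return best[T], lots[::-1]

cost, lots = wagner_whitin(demand=[20, 50, 10, 50, 50, 10],
                           setup_cost=100, hold_cost=1)
print(cost, lots)
```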
333
Cost-Sensitive Selective Classification and its Applications to Online Fraud Management. January 2019.
Fraud is defined as the use of deception for illegal gain by hiding the true nature of the activity. Organizations lose around $3.7 trillion in revenue to financial crimes and fraud worldwide, and these crimes significantly affect all levels of society. In this dissertation, I focus on credit card fraud in online transactions. Every online transaction carries a fraud risk, and it is the merchant's liability to detect and stop fraudulent transactions. Merchants use various mechanisms to prevent and manage fraud, such as automated fraud detection systems and manual transaction reviews by expert fraud analysts. Many proposed solutions focus mostly on fraud detection accuracy and ignore financial considerations; the highly effective manual review process is also overlooked. First, I propose the Profit Optimizing Neural Risk Manager (PONRM), a selective classifier that (a) constitutes optimal collaboration between machine learning models and human expertise under industrial constraints, and (b) is cost- and profit-sensitive. I suggest directions on how to characterize fraudulent behavior and assess the risk of a transaction, and show that my framework outperforms cost-sensitive and cost-insensitive baselines on three real-world merchant datasets. While PONRM works with many supervised learners and obtains convincing results, using probability outputs directly from the trained model can pose problems, especially in deep learning, as softmax output is not a true uncertainty measure. This phenomenon, together with the wide and rapid adoption of deep learning by practitioners, has brought unintended consequences in many situations, as in the infamous case of Google Photos' racist image recognition algorithm, and thus necessitates the use of quantified uncertainty for each prediction. There have been recent efforts towards quantifying uncertainty in conventional deep learning methods (e.g., dropout as Bayesian approximation); however, their optimal use in decision making is often overlooked and understudied. Thus, I present a mixed-integer programming framework for selective classification, called MIPSC, that investigates and combines model uncertainty and the predictive mean to identify optimal classification and rejection regions. I also extend this framework to cost-sensitive settings (MIPCSC) and, focusing on the critical real-world problem of online fraud management, show that my approach significantly outperforms industry-standard methods in real-world settings.
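A hedged sketch of the core cost-sensitive selective-classification idea, not of PONRM or MIPSC themselves: a transaction is approved, declined, or routed to manual review according to whichever action minimizes expected cost. The cost model and all figures are illustrative assumptions.

```python
def decide(p_fraud, amount, review_cost, chargeback_mult=1.0, margin=0.05):
    """Cost-sensitive selective decision for one transaction.

    p_fraud     : model's estimated probability that the transaction is fraud
    amount      : transaction amount
    review_cost : cost of sending the case to a human analyst
    Expected costs (illustrative assumptions):
      approve -> lose amount * chargeback_mult if the transaction is fraud
      decline -> lose the profit margin on the sale if it is legitimate
      review  -> flat analyst cost (assumed to resolve the case correctly)
    """
    costs = {
        "approve": p_fraud * amount * chargeback_mult,
        "decline": (1 - p_fraud) * amount * margin,
        "review": review_cost,
    }
    return min(costs, key=costs.get), costs

action, costs = decide(p_fraud=0.15, amount=400.0, review_cost=10.0)
print(action, costs)  # review: approve costs 60.0, decline 17.0, review 10.0
```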
334
Detecting Covert Members of Terrorist Networks. Paul, Alice. 31 May 2012.
Terrorism threatens international peace and security and is a national concern. Terrorist organizations are believed to rely heavily on a few key leaders, and destroying such an organization's leadership is essential to reducing its influence. Martonosi et al. (2011) argue that increasing the amount of communication through a key leader increases the likelihood of detection. If we model a covert organization as a social network whose edges represent communication between members, we want to determine the subset of members whose removal maximizes the amount of communication through the key leader. A mixed-integer linear program representing this problem is presented, along with a decomposition of this optimization problem. As these approaches prove impractical for larger graphs, often running out of memory, the last section focuses on structural characteristics of vertices and subsets that increase communication. Future work should develop these structural properties as well as heuristics for solving this problem.
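An illustrative brute-force sketch of the underlying combinatorial question, not the thesis's MILP: it uses the leader's betweenness centrality as a stand-in for "communication through the key leader" (that objective is an assumption of this sketch) and requires the networkx package. Enumeration is exponential in k, so this is only viable for small graphs.

```python
import itertools
import networkx as nx

def best_removal(G, leader, k):
    """Brute force: find a set of at most k non-leader vertices whose
    removal maximizes the leader's betweenness centrality (a proxy for
    the share of communication routed through the leader)."""
    candidates = [v for v in G.nodes if v != leader]
    best_set, best_val = (), nx.betweenness_centrality(G)[leader]
    for r in range(1, k + 1):
        for subset in itertools.combinations(candidates, r):
            H = G.copy()
            H.remove_nodes_from(subset)
            if nx.is_connected(H):  # keep the residual network usable
                val = nx.betweenness_centrality(H)[leader]
                if val > best_val:
                    best_set, best_val = subset, val
    return best_set, best_val

G = nx.karate_club_graph()
print(best_removal(G, leader=0, k=2))
```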
335
Mathematical programming techniques for solving stochastic optimization problems with certainty equivalent measures of risk. Vinel, Alexander. 01 May 2015.
The problem of risk-averse decision making under uncertainty is studied from both modeling and computational perspectives. First, we consider a framework for constructing coherent and convex measures of risk, inspired by the infimal convolution operator, and prove that the proposed approach constitutes a new general representation of these classes. We then discuss how this scheme may be effectively employed to obtain a class of certainty equivalent measures of risk that directly incorporate the decision maker's preferences as expressed by utility functions. This approach is then used to introduce a new family of measures, the log-exponential convex measures of risk. Numerical experiments show that this family can be a useful tool when modeling risk-averse decision preferences under heavy-tailed distributions of uncertainties. Next, numerical methods for solving the arising optimization problems are developed, with special attention devoted to the class of p-order cone programming problems and mixed-integer models. The proposed solution approaches include approximation schemes for p-order cone and more general nonlinear programming problems, lifted conic and nonlinear valid inequalities, mixed-integer rounding conic cuts, and new linear disjunctive cuts.
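Illustrative sketch: the classical entropic (exponential-utility) certainty equivalent is a simple representative of the utility-based certainty equivalent measures discussed above; the log-exponential family in the thesis generalizes this idea. The sample and parameters below are invented, and the normal closed form is used only as a sanity check.

```python
import numpy as np

def entropic_risk(losses, theta):
    """Certainty-equivalent (entropic) risk measure:
    rho(X) = (1/theta) * log E[exp(theta * X)], theta > 0.
    Uses a log-sum-exp shift for numerical stability."""
    x = theta * np.asarray(losses)
    m = x.max()
    return (m + np.log(np.mean(np.exp(x - m)))) / theta

rng = np.random.default_rng(0)
sample = rng.normal(10.0, 2.0, 500_000)  # loss sample from N(mu=10, sigma=2)

for theta in (0.1, 0.5, 1.0):
    # sanity check: for N(mu, sigma^2) the closed form is mu + theta*sigma^2/2
    print(theta, entropic_risk(sample, theta), 10.0 + theta * 4.0 / 2)
```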
336
On stochastic network design: modeling approaches and solution techniques. Richmond, Nathaniel. 01 December 2016.
Network design problems have been prevalent and popular in the operations research community for decades, because of their practical and theoretical significance. Due to the relentless progression of technology and the creative development of intelligent, efficient algorithms, today we are able to efficiently solve or give excellent heuristic solutions to many network design problem instances. The purpose of this work is to thoroughly examine and tackle two classes of highly complex network design problems which find themselves at the cutting edge of modern research.
First we examine the stochastic incremental network design problem. This problem differs from traditional network design problems through the addition of both temporal and stochastic elements. We present a modeling framework for this class of problems, conduct a thorough theoretical analysis of the solution structure, and give insights into solution methods.
Next we introduce the robust network design problem with decision-dependent uncertainties. Traditional stochastic optimization approaches shy away from randomness that is directly influenced by a user's decisions, due to the computational challenges that arise. We present a two-stage stochastic programming framework, noting that the complexity of this class of problems derives from a highly nonlinear term in the first-stage objective function, a term due to the decision-dependent nature of the uncertainty. We perform a rigorous computational study in which we implement a variety of solution algorithms, both exact and heuristic, both well-studied and original.
For each of the two classes of problems examined in our work, we give suggestions for future study and offer insights into effective ways of tackling these problems in practice.
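A toy sample-average sketch of the plain two-stage structure, not of the decision-dependent model studied in the thesis: first-stage edge-building decisions, second-stage shortest-path routing per scenario, solved by brute-force enumeration (requires networkx; all data invented and only viable at this tiny scale).

```python
import itertools
import networkx as nx

# candidate edges: (u, v, build_cost); per-scenario second-stage routing costs
candidates = [("s", "a", 4), ("s", "b", 3), ("a", "t", 3), ("b", "t", 5), ("a", "b", 1)]
scenarios = [  # (probability, {edge: routing cost in that scenario})
    (0.5, {("s","a"): 2, ("s","b"): 2, ("a","t"): 2, ("b","t"): 2, ("a","b"): 1}),
    (0.5, {("s","a"): 2, ("s","b"): 6, ("a","t"): 2, ("b","t"): 9, ("a","b"): 1}),
]

def total_cost(built):
    """First-stage build cost plus expected second-stage s-t routing cost."""
    build = sum(c for (u, v, c) in built)
    expected = 0.0
    for prob, costs in scenarios:
        G = nx.Graph()
        G.add_weighted_edges_from((u, v, costs[(u, v)]) for (u, v, _) in built)
        if not (G.has_node("s") and G.has_node("t") and nx.has_path(G, "s", "t")):
            return float("inf")  # the design must connect s to t in every scenario
        expected += prob * nx.shortest_path_length(G, "s", "t", weight="weight")
    return build + expected

best = min(
    (subset for r in range(1, len(candidates) + 1)
     for subset in itertools.combinations(candidates, r)),
    key=total_cost,
)
print([e[:2] for e in best], total_cost(best))
```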
337
Performance optimization of wind turbines. Zhang, Zijun. 01 May 2012.
Improving the performance of wind turbines through effective control strategies, so as to reduce the cost of power generation, is highly desired by the wind industry. The majority of the literature on wind turbine performance has focused on models derived from first principles of physics. Physics-based models are usually complex and inaccurate, because wind turbines involve mechanical, electrical, and software components. These components interact with each other and are subjected to variable loads introduced by the wind as well as by the rotating elements of the turbine. Recent advances in data-acquisition systems allow the collection of large volumes of wind energy data. Although the primary purpose of data collection is monitoring the condition of wind turbines, the collected data offers a golden opportunity to address the most challenging issues of wind turbine systems. In this dissertation, data mining is applied to construct accurate models from the collected turbine data. To solve the data-driven models, evolutionary computation algorithms are applied; as data-driven models are non-parametric, evolutionary computation makes an ideal solution tool. Optimizing wind turbines with different objectives is studied to accomplish different research goals. Two research directions are pursued: optimizing the performance of a single wind turbine and optimizing the performance of a wind farm. The goal of single-turbine optimization is to improve turbine efficiency and life cycle. Wind farm optimization aims to minimize the total cost of operating the farm based on computed turbine scheduling strategies. The methodology presented in the dissertation is applicable to processes beyond the wind industry.
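Illustrative sketch of the solution pattern described above: a simple (mu + lambda) evolution strategy tuning two control set-points against a black-box performance model. The surrogate function here is an invented stand-in for a data-driven model trained on turbine data; all names and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate_power(params):
    """Stand-in for a data-driven (non-parametric) turbine model mapping
    control settings -> predicted power; in practice this would be a
    model trained on collected turbine data."""
    pitch, torque = params
    return -(pitch - 2.0) ** 2 - 0.5 * (torque - 7.0) ** 2 + 100.0

def evolve(fitness, bounds, mu=10, lam=40, sigma=0.5, generations=60):
    """Simple (mu + lambda) evolution strategy; suits non-parametric
    objectives where gradients are unavailable."""
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(mu, len(bounds)))
    for _ in range(generations):
        parents = pop[rng.integers(mu, size=lam)]
        offspring = np.clip(parents + rng.normal(0, sigma, parents.shape), lo, hi)
        union = np.vstack([pop, offspring])
        pop = union[np.argsort([-fitness(x) for x in union])[:mu]]  # elitist select
    return pop[0]

best = evolve(surrogate_power, bounds=[(0.0, 10.0), (0.0, 15.0)])
print("best settings:", best, "predicted power:", surrogate_power(best))
```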
338
Workforce planning in manufacturing and healthcare systems. Jin, Huan. 01 August 2016.
This dissertation explores workforce planning in manufacturing and healthcare systems. In manufacturing systems, existing workforce planning models often lack fidelity with respect to the mechanism of learning. Learning refers to the increase in employees' productivity as they gain experience; workforce scheduling in the short term therefore has a longer-term impact on an organization's capacity. The mathematical representations of learning are usually nonlinear. This nonlinearity complicates the planning models and provides opportunities to develop solution methodologies for realistically-sized instances. This research formulates the workforce planning problem as a mixed-integer nonlinear program (MINLP) and overcomes the limitations of current solution methods. Specifically, it develops a reformulation technique that converts the MINLP to a mixed-integer linear program (MILP) and proposes several techniques to speed up the solution of the MILP.
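A minimal sketch of the basic ingredient behind such MINLP-to-MILP reformulations: replacing a nonlinear learning curve with a piecewise-linear approximation whose breakpoints can then enter a MILP (e.g., via SOS2 variables). The exponential learning form and the parameters are illustrative assumptions, not the thesis's actual reformulation.

```python
import numpy as np

def learning_curve(x, p_max=100.0, tau=20.0):
    """Illustrative exponential learning curve: productivity as a
    function of cumulative experience x (units produced)."""
    return p_max * (1.0 - np.exp(-x / tau))

def pwl_breakpoints(f, x_max, n_pieces):
    """Breakpoints (x_i, f(x_i)) for a piecewise-linear approximation;
    in a MILP these become SOS2 variables or binary-selected segments."""
    xs = np.linspace(0.0, x_max, n_pieces + 1)
    return list(zip(xs, f(xs)))

def pwl_eval(breaks, x):
    """Evaluate the piecewise-linear interpolant at x."""
    xs, ys = zip(*breaks)
    return float(np.interp(x, xs, ys))

breaks = pwl_breakpoints(learning_curve, x_max=100.0, n_pieces=5)
for x in (10, 35, 80):
    print(x, learning_curve(x), pwl_eval(breaks, x))  # exact vs PWL value
```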
In organizations that use group work, workers learn not only individually but also from knowledge transferred by team members. Managers face the decision of how to pair or team workers so that the organization benefits from this transfer of learning. Using a mathematical representation that incorporates both individual learning and knowledge transfer between workers, this research considers the problem of grouping workers into teams and assigning teams to sets of jobs based on the workers' learning and knowledge-transfer characteristics. This study builds a mixed-integer nonlinear program (MINLP) for parallel systems with the objective of maximizing system throughput, and proposes exact and heuristic solution approaches for solving the MINLP.
In healthcare systems, we focus on managing medical technicians in medical laboratories, in particular phlebotomists. Phlebotomists draw specimens from patients based on doctors' orders, which arrive randomly throughout the day. According to the literature, optimizing scheduling and routing in hospital laboratories has not been regarded as a necessity for laboratory management. This study is motivated by a real case at the University of Iowa Hospitals and Clinics, where a team of phlebotomists cannot fulfill all doctors' requests in the morning shift. The goal of this research is to route these phlebotomists to patient units so that as many orders as possible are fulfilled during the shift. The problem is a team orienteering problem with stochastic rewards and service times. This research develops an a priori approach that applies a variable neighborhood search heuristic, improving daily performance compared to the hospital's current practice.
339
Novel Models and Efficient Algorithms for Network-based Optimization in Biomedical Applications. Sajjadi, Seyed Javad. 30 June 2014.
We introduce and study a novel graph optimization problem that searches for multiple cliques with the maximum overall weight, which we denote the Maximum Weighted Multiple Clique Problem (MWMCP). This problem arises in research involving network-based data mining, specifically in bioinformatics, where complex diseases such as various types of cancer and diabetes are conjectured to be triggered and influenced by a combination of genetic and environmental factors. To integrate potential effects from interplays among underlying candidate factors, we propose a new network-based framework to identify effective biomarkers by searching for "groups" of synergistic risk factors with high predictive power for disease outcome. An interaction network is constructed with vertex weights representing the individual predictive power of candidate factors and edge weights representing pairwise synergistic interactions among factors. This network-based biomarker identification problem is then formulated as a MWMCP. To achieve near-optimal solutions for large-scale networks, an analytical algorithm based on the column generation method as well as a fast greedy heuristic have been derived. To obtain exact solutions, an advanced branch-price-and-cut algorithm is designed after a study of the properties of the problem. Our algorithms for MWMCP have been implemented and tested on random graphs with promising results. They have also been used to analyze two biomedical datasets: a Type 1 Diabetes (T1D) dataset from the Diabetes Prevention Trial-Type 1 (DPT-1) Study, and a breast cancer genomics dataset for metastasis prognosis. The results demonstrate that our network-based methods can identify important biomarkers with better prediction accuracy than conventional feature selection, which considers only individual effects.
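A hedged sketch in the spirit of the fast greedy heuristic mentioned above (the details below are this sketch's assumptions, not the thesis's algorithm): grow vertex-disjoint cliques one at a time, each by repeatedly adding the vertex with the largest marginal gain, i.e., its own weight plus the synergy edges into the current clique.

```python
def greedy_multi_clique(v_weight, e_weight, adj, k):
    """Greedy heuristic for the Maximum Weighted Multiple Clique Problem:
    build up to k vertex-disjoint cliques.

    v_weight : {vertex: weight}
    e_weight : {frozenset({u, v}): weight}
    adj      : {vertex: set of neighbours}
    """
    remaining = set(v_weight)
    cliques = []
    for _ in range(k):
        if not remaining:
            break
        clique = {max(remaining, key=v_weight.get)}  # seed: heaviest vertex
        while True:
            # candidates must be adjacent to every current clique member
            cands = {u for u in remaining - clique if clique <= adj[u]}
            if not cands:
                break
            gain = lambda u: v_weight[u] + sum(
                e_weight.get(frozenset({u, w}), 0.0) for w in clique)
            u = max(cands, key=gain)
            if gain(u) <= 0:
                break
            clique.add(u)
        cliques.append(clique)
        remaining -= clique
    return cliques

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {5}, 5: {4}}
vw = {1: 1.0, 2: 1.0, 3: 1.0, 4: 2.0, 5: 2.0}
ew = {frozenset({1, 2}): 3.0, frozenset({1, 3}): 1.0,
      frozenset({2, 3}): 1.0, frozenset({4, 5}): 0.5}
print(greedy_multi_clique(vw, ew, adj, k=2))  # e.g. [{4, 5}, {1, 2, 3}]
```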
340
Optimal Drill Assignment for Multi-Boom Jumbos. Champion, Michael. Unknown date.
Development drilling is used in underground mining to create access tunnels. A common method involves using a drilling rig, known as a jumbo, to drill holes into the face of a tunnel. Jumbo drill rigs have two or more articulated arms, with drills as end-effectors, that extend outwards from a vehicle. Once drilled, the holes are charged with explosives and fired to advance the tunnel. There is an ongoing imperative within the mining industry to reduce development times, and reducing time spent drilling is seen as the best opportunity for achieving this. Notwithstanding that three-boom jumbos have been available for some years, the industry has maintained a preference for jumbo rigs with two drilling booms. Three-boom machines have the potential to reduce drilling time by as much as one third, but they have proven difficult to operate and, in practice, this benefit has not been realized. The key difficulty lies in manoeuvring the booms within the tight confines of the tunnel and in sequencing the drilling of holes so that each boom spends the maximum time drilling. This thesis addresses the problem of optimally sequencing multi-boom jumbo drill rigs to minimize the overall time to drill a blast-hole pattern, taking into account the various constraints on the problem, including the geometric constraints restricting motion of the booms. The specific aims of the thesis are to:

- develop the algorithmic machinery needed to determine minimum- or near-minimum-time drill assignments for multi-boom jumbos, suitable for "real-time" implementation;
- use this drill-pattern assignment algorithm to quantify the benefits of optimal drill-pattern assignment with three-boom jumbos; and
- investigate the management of unplanned events, such as boom breakdowns, and assess the potential of the algorithm to assist a human operator with the forward planning of drill-hole selection.

Jumbo drill task assignment is a combinatorial optimization problem. A methodology based on receding-horizon mixed-integer programming is developed to solve it. At any time, the set of drill-holes available to a boom is restricted by the locations of the other booms as well as by the tunnel perimeter; importantly, these constraints change as the problem evolves. The methodology builds these constraints into the problem through a feasibility tensor that encodes the moves available to each boom given the configurations of the other booms. The feasibility tensor is constructed off-line using a rapidly exploring random tree algorithm. Simulations conducted using the sequencing algorithm predict, for a standard drill-hole pattern, a 10-22% reduction in drilling time with the three-boom rig relative to two-boom machines. The algorithms developed in this thesis have two intended applications. The first is automated jumbo drill rigs, for which the capability to plan drilling sequences algorithmically is a prerequisite; automated drill rigs are still some years from being a reality. The second, and more immediate, application is in providing decision support for drill-rig operators. It is envisaged that the algorithms described here might form the basis of an operator-assist system that provides guidance on which holes to drill next with each boom, adapting the plan as circumstances change.
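A heavily simplified, illustrative dispatcher for the sequencing problem, not the thesis's receding-horizon MIP with a feasibility tensor: whenever a boom is free, it is sent to the nearest undrilled hole that keeps a minimum clearance from the other booms' current targets. All geometry, speeds, and clearances below are invented.

```python
import heapq
import math

def greedy_drill_schedule(holes, booms, drill_time=60.0, move_speed=0.5,
                          clearance=1.5):
    """Toy greedy dispatcher for multi-boom drill sequencing.
    holes : list of (x, y) hole positions on the tunnel face
    booms : list of (x, y) initial boom-tip positions
    Returns the makespan (time when the last hole is finished)."""
    undrilled = set(range(len(holes)))
    target = {}                                      # boom -> assigned hole
    events = [(0.0, b) for b in range(len(booms))]   # (time boom is free, boom)
    heapq.heapify(events)
    pos = list(booms)
    finish = 0.0
    while undrilled:
        t, b = heapq.heappop(events)
        target.pop(b, None)                          # boom b is now free
        busy = [holes[h] for h in target.values()]
        feasible = [h for h in undrilled
                    if all(math.dist(holes[h], p) >= clearance for p in busy)]
        if not feasible:                             # wait for another boom
            heapq.heappush(events, (t + 1.0, b))
            continue
        h = min(feasible, key=lambda h: math.dist(pos[b], holes[h]))
        undrilled.remove(h)
        target[b] = h
        done = t + math.dist(pos[b], holes[h]) / move_speed + drill_time
        pos[b] = holes[h]
        finish = max(finish, done)
        heapq.heappush(events, (done, b))
    return finish

holes = [(x * 1.0, y * 1.0) for x in range(5) for y in range(4)]
print(greedy_drill_schedule(holes, booms=[(0.0, 0.0), (4.0, 3.0), (2.0, 1.5)]))
```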