About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
201

Mathematical modelling of blood spatter with optimization and other numerical methods / Anetta van der Walt

Van der Walt, Anetta January 2014 (has links)
The current methods used by forensic experts to analyse blood spatter neglect the influence of gravity and drag on the trajectory of a droplet. This research attempts to suggest a more accurate method to determine the trajectory of a blood droplet using multi-target tracking. The multi-target tracking problem can be rewritten as a linear programming problem and solved by means of optimization and numerical methods. A literature survey of relevant articles on blood spatter analysis and multi-target tracking is presented. In contrast to more advanced approaches that assume a background in probability, mathematical modelling and forensic science, this dissertation aims to give a comprehensive mathematical exposition of particle tracking. The dynamic programming methods used to solve the multi-target tracking problem are coded in the MATLAB programming language, and results are obtained for different scenarios and option inputs. Research strategies include studying documents, articles, journal entries and books. / MSc (Applied Mathematics), North-West University, Potchefstroom Campus, 2014
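The abstract's central point is that straight-line ("stringing") reconstruction ignores gravity and drag. A minimal sketch of that effect, with all physical parameters invented for illustration (the thesis's actual models and MATLAB code are not reproduced here):

```python
import math

# Hypothetical illustration: compare the straight-line range assumed by
# traditional stringing analysis with a trajectory that includes gravity
# and quadratic air drag, integrated with the explicit Euler method.
# Every parameter value below is an assumption, not taken from the thesis.

G = 9.81          # gravitational acceleration, m/s^2
RHO_AIR = 1.2     # air density, kg/m^3
CD = 0.47         # drag coefficient of a sphere (assumed)
D = 0.004         # droplet diameter, m (assumed)
RHO_BLOOD = 1060  # blood density, kg/m^3

def trajectory(v0, angle_deg, dt=1e-4, with_drag=True):
    """Return the horizontal distance travelled before impact (y = 0)."""
    area = math.pi * (D / 2) ** 2
    mass = RHO_BLOOD * (4 / 3) * math.pi * (D / 2) ** 3
    theta = math.radians(angle_deg)
    x, y = 0.0, 1.5                      # launched 1.5 m above the floor (assumed)
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while y > 0:
        speed = math.hypot(vx, vy)
        # drag deceleration per unit velocity component (zero if disabled)
        f = 0.5 * RHO_AIR * CD * area * speed / mass if with_drag else 0.0
        vx += -f * vx * dt
        vy += (-G - f * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

straight = trajectory(5.0, 10.0, with_drag=False)
dragged = trajectory(5.0, 10.0, with_drag=True)
print(f"range without drag: {straight:.2f} m, with drag: {dragged:.2f} m")
```

Even for these toy numbers, the drag-aware range is shorter, which is why back-projecting along a straight line overestimates the distance to the origin.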
202

A feasibility study of combining expert system technology and linear programming techniques in dietetics / Annette van der Merwe

Van der Merwe, Annette January 2014 (has links)
Linear programming is widely used to solve various complex problems with many variables, subject to multiple constraints. Expert systems are created to provide expertise on complex problems through the application of inference procedures and advanced expert knowledge on facts relevant to the problem. The diet problem is well known for its contribution to the development of linear programming. Over the years many variations and facets of the diet problem have been solved by means of linear programming techniques and expert systems respectively. In this study the feasibility of combining expert system technology and linear programming techniques to solve a diet problem topical to South Africa is examined. A computer application is created that incorporates goal programming and multi-objective linear programming models as the inference engine of an expert system. The program is successfully applied to test cases obtained through knowledge acquisition. The system delivers an eating plan for an individual that conforms to the nutritional requirements of a healthy diet, includes the personal food preferences of that individual, and selects the food items that result in the lowest total cost. It further allows prioritization of the food-preference and least-cost factors through the use of weights. Based on the results, recommendations and contributions to the linear programming and expert system fields are presented. / MSc (Computer Science), North-West University, Potchefstroom Campus, 2014
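The least-cost facet of the classical diet problem mentioned above can be sketched as a small linear program. The foods, prices, and nutrient figures below are invented for illustration; the study's actual data came from knowledge acquisition with dietitians.

```python
from scipy.optimize import linprog

# Minimal least-cost diet sketch (all food data invented for illustration).
# Decision variables: servings of each food; objective: minimise total cost
# subject to minimum nutrient requirements, as in the classical diet problem.
foods = ["maize meal", "beans", "milk"]
cost = [2.0, 3.5, 4.0]                   # currency units per serving (assumed)
# nutrients per serving: [energy kJ, protein g]
nutrients = [[1500, 8], [600, 9], [550, 8]]
minimums = [8000, 50]                    # daily minimums (assumed)

# linprog minimises c @ x subject to A_ub @ x <= b_ub, so the ">= minimum"
# nutrient constraints are expressed with flipped signs.
A_ub = [[-n[j] for n in nutrients] for j in range(len(minimums))]
b_ub = [-m for m in minimums]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * len(foods), method="highs")
print("servings:", [round(v, 2) for v in res.x], "cost:", round(res.fun, 2))
```

The multi-objective and goal-programming extensions in the study add preference and weighting terms on top of this basic cost-minimisation core.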
203

A decision support system for selecting IT audit areas using a capital budgeting approach / Dewald Philip Pieters

Pieters, Dewald Philip January 2015 (has links)
Internal audit departments strive to control risk within an organization. To do this they choose specific audit areas to include in an audit plan. In order to select areas, they usually focus on those areas with the highest risk. Even though high-risk areas are considered, there are various other restrictions, such as resource constraints (in terms of funds, manpower and hours), that must also be considered. In some cases, management might also have special requirements. Traditionally this area selection process is conducted manually and requires significant decision-maker experience. This makes it difficult to take all possibilities into consideration while also catering for all resource constraints and special management requirements. In this study, mathematical techniques used in capital budgeting problems are explored to solve the IT audit area selection problem. A decision support system (DSS) is developed which implements some of these mathematical techniques, such as a linear programming model, a greedy heuristic, an improved greedy heuristic and an evolutionary heuristic. The DSS also implements extensions to the standard capital budgeting model to make provision for special management requirements. The performance of the mathematical techniques in the DSS is tested by applying different decision rules to each of the techniques and comparing the results. The DSS, empirical experiments and results are also presented in this research study. Results have shown that in most cases a binary 0-1 model outperformed the other techniques. Internal audit management should therefore consider this model to assist with the construction of an IT internal audit plan. / MSc (Computer Science), North-West University, Potchefstroom Campus, 2015
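The contrast the abstract draws between a greedy heuristic and an exact binary 0-1 model can be sketched on a toy instance. The audit areas, risk scores, and hour costs below are invented; the study's formulations and decision rules are richer than this.

```python
from itertools import combinations

# Hypothetical sketch of audit-area selection as 0-1 capital budgeting:
# each candidate area has a risk score (the "return") and an hours cost;
# select areas to maximise covered risk within an hours budget.
areas = {"firewalls": (90, 120), "backups": (70, 60), "access": (85, 100),
         "change mgmt": (40, 30), "DR plan": (60, 80)}
BUDGET = 200  # available audit hours (assumed)

def greedy(areas, budget):
    """Pick areas in decreasing risk-per-hour order while the budget allows."""
    chosen, hours = [], 0
    ranked = sorted(areas.items(), key=lambda kv: kv[1][0] / kv[1][1],
                    reverse=True)
    for name, (risk, cost) in ranked:
        if hours + cost <= budget:
            chosen.append(name)
            hours += cost
    return chosen

def exhaustive(areas, budget):
    """Solve the binary 0-1 model exactly by enumerating all subsets."""
    best, best_risk = [], -1
    names = list(areas)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(areas[n][1] for n in subset)
            risk = sum(areas[n][0] for n in subset)
            if cost <= budget and risk > best_risk:
                best, best_risk = list(subset), risk
    return best, best_risk

print("greedy:", greedy(areas, BUDGET))
print("exact :", exhaustive(areas, BUDGET))
```

Exhaustive enumeration is only viable for tiny instances; the DSS in the study uses a linear programming model for the exact solution instead.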
204

Portfolio optimisation models

Arbex Valle, Cristiano January 2013 (has links)
In this thesis we consider three different problems in the domain of portfolio optimisation. The first problem we consider is that of selecting an Absolute Return Portfolio (ARP). ARPs are usually seen as financial portfolios that aim to produce a good return regardless of how the underlying market performs, but our literature review shows that there is little agreement on what constitutes an ARP. We present a clear definition via a three-stage mixed-integer zero-one program for the problem of selecting an ARP. The second problem considered is that of designing a Market Neutral Portfolio (MNP). MNPs are generally defined as financial portfolios that (ideally) exhibit performance independent from that of an underlying market, but, once again, the existing literature is very fragmented. We consider the problem of constructing a MNP as a mixed-integer non-linear program (MINLP) which minimises the absolute value of the correlation between portfolio return and underlying benchmark return. The third problem is related to Exchange-Traded Funds (ETFs). ETFs are funds traded on the open market which typically have their performance tied to a benchmark index. They are composed of a basket of assets; most attempt to reproduce the returns of an index, but a growing number try to achieve a multiple of the benchmark return, such as two times or the negative of the return. We present a detailed performance study of the current ETF market and we find, among other conclusions, consistent underperformance among ETFs that aim to do more than simply track an index. We present a MINLP for the problem of selecting the basket of assets that compose an ETF, which, to the best of our knowledge, is the first in the literature. For all three models we present extensive computational results for portfolios derived from universes defined by S&P international equity indices with up to 1200 stocks. We use CPLEX to solve the ARP problem and the software package Minotaur for both MINLPs, covering the MNP and ETF problems.
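The market-neutral objective described above — minimising the absolute correlation between portfolio and benchmark returns — can be illustrated on a toy two-asset instance. All return series are invented, and the grid search below only demonstrates the objective; the thesis solves a full MINLP with integer holdings.

```python
import statistics as st

# Toy sketch of the market-neutral objective: given invented return series
# for two assets and a benchmark, grid-search the blend weight that
# minimises |correlation(portfolio, benchmark)|.
bench = [0.01, -0.02, 0.015, -0.01, 0.02, -0.005]
a = [0.012, -0.018, 0.02, -0.008, 0.022, -0.004]   # moves with the market
b = [0.003, 0.004, -0.001, 0.002, 0.001, 0.0]      # weakly market-linked

def corr(x, y):
    """Pearson correlation of two equal-length series."""
    mx, my = st.mean(x), st.mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = (sum((xi - mx) ** 2 for xi in x)
           * sum((yi - my) ** 2 for yi in y)) ** 0.5
    return num / den

best_w, best_c = None, float("inf")
for i in range(101):                       # weight on asset a, rest in b
    w = i / 100
    port = [w * ra + (1 - w) * rb for ra, rb in zip(a, b)]
    c = abs(corr(port, bench))
    if c < best_c:
        best_w, best_c = w, c

print(f"weight on a: {best_w:.2f}, |correlation|: {best_c:.3f}")
```

Because one asset is positively and the other negatively correlated with the benchmark, an interior blend drives the portfolio's correlation toward zero, which is exactly what the MINLP objective seeks at scale.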
205

Active-set prediction for interior point methods

Yan, Yiming January 2015 (has links)
This research studies how to efficiently predict the optimal active constraints of an inequality constrained optimization problem, in the context of Interior Point Methods (IPMs). We propose a framework based on shifting/perturbing the inequality constraints of the problem. Despite being a class of powerful tools for solving Linear Programming (LP) problems, IPMs are well known to encounter difficulties with active-set prediction due essentially to their construction. When applied to an inequality constrained optimization problem, IPMs generate iterates that belong to the interior of the set determined by the constraints, thus avoiding/ignoring the combinatorial aspect of the solution. This comes at the cost of difficulty in predicting the optimal active constraints that would enable termination, as well as increasing ill-conditioning of the solution process. Existing techniques for active-set prediction, however, suffer from difficulties in making an accurate prediction at the early stage of the iterative process of IPMs; when these techniques are ready to yield an accurate prediction towards the end of a run, as the iterates approach the solution set, the IPMs have to solve increasingly ill-conditioned, and hence difficult, subproblems. To address this challenge, we propose the use of controlled perturbations. Namely, in the context of LP problems, we consider perturbing the inequality constraints (by a small amount) so as to enlarge the feasible set. We show that if the perturbations are chosen judiciously, the solution of the original problem lies on or close to the central path of the perturbed problem. We solve the resulting perturbed problem(s) using a path-following IPM while predicting on the way the active set of the original LP problem; we find that our approach is able to accurately predict the optimal active set of the original problem before the duality gap for the perturbed problem gets too small. Furthermore, depending on problem conditioning, this prediction can happen sooner than predicting the active set for the perturbed problem or for the original one if no perturbations are used. Proof-of-concept algorithms are presented, and encouraging preliminary numerical experience is reported when comparing activity prediction for the perturbed and unperturbed problem formulations. We also extend the idea of using controlled perturbations to enhance optimal active-set prediction for IPMs for convex Quadratic Programming (QP) problems. QP problems share many properties with LP problems, but some of the corresponding results require more care; encouraging preliminary numerical experience is also presented for the QP case.
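The basic objects in this abstract — an LP's active set, and a slightly enlarged feasible region — can be illustrated on a small example. This is not the thesis's algorithm (no path-following iterates are inspected here); it only shows that a judicious small perturbation of the right-hand sides leaves the optimal active set unchanged. The LP data are invented.

```python
from scipy.optimize import linprog

# Solve a small LP with and without a controlled perturbation that enlarges
# the feasible set by eps, then read the active inequality constraints off
# the slack values. Data invented for illustration.
c = [-1, -2]                       # maximise x + 2y (linprog minimises)
A = [[1, 1], [1, 3], [2, 1]]
b = [4, 6, 5]

def active_set(b_vec, tol=1e-8):
    res = linprog(c, A_ub=A, b_ub=b_vec,
                  bounds=[(0, None)] * 2, method="highs")
    # a constraint is active when its slack is (numerically) zero
    return {i for i, s in enumerate(res.slack) if s < tol}, res

eps = 1e-3
orig_active, res0 = active_set(b)
pert_active, res1 = active_set([bi + eps for bi in b])

print("active constraints, original :", sorted(orig_active))
print("active constraints, perturbed:", sorted(pert_active))
```

For this instance the optimum sits on constraints 1 and 2 in both problems; the thesis's contribution is predicting that set early from interior iterates, well before the duality gap closes.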
206

Cyber-physical acquisition strategy for COTS-based agility-driven engineering

Knisely, Nathan C. L. 27 May 2016 (has links)
The rising cost of military aircraft has driven the DoD to increase the utilization of commercial off-the-shelf (COTS) components in new acquisitions. Despite several demonstrated advantages of COTS-based systems, challenges relating to obsolescence arise when attempting to design and sustain such systems using traditional acquisition processes. This research addresses these challenges through the creation of an Agile Systems Engineering framework that is specifically aimed at COTS-based systems. This framework, known as the Cyber-physical Acquisition Strategy for COTS-based Agility-Driven Engineering (CASCADE), amends the traditional systems engineering process through the addition of an "identification phase" during which requirements are balanced against the capabilities of commercially-available components. The CASCADE framework motivates the creation of a new Mixed Integer Linear Programming (MILP) formulation which enables the creation of optimal obsolescence mitigation plans. Using this CASCADE MILP formulation, two sets of experiments are carried out: First, verification experiments demonstrate that the CASCADE MILP conforms to expected trends and agrees with existing results. Next, the CASCADE MILP is applied to a representative set of COTS-based systems in order to determine the appropriate level of obsolescence forecast accuracy, and to uncover new system-level cost-vs-reliability trends associated with COTS component modification.
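The flavour of an obsolescence mitigation plan can be conveyed with a deliberately tiny stand-in for the MILP: parts go obsolete at known years, sustaining an obsolete part costs a bridge-buy fee per year, and a redesign refreshes everything at a fixed cost. All names, dates, and costs are invented, and brute-force enumeration replaces the CASCADE MILP purely for illustration.

```python
from itertools import combinations

# Toy obsolescence planning: choose redesign years over a horizon to
# minimise total cost = redesign costs + bridge-buy costs for each year
# a part is obsolete and not yet refreshed. (Invented stand-in for the
# MILP formulation; a redesign here refreshes a part permanently.)
HORIZON = range(2025, 2035)
obsolete_year = {"cpu": 2027, "display": 2029, "radio": 2031}
REDESIGN_COST = 50.0
BRIDGE_COST = 8.0   # per obsolete part per year (assumed)

def plan_cost(redesigns):
    cost = REDESIGN_COST * len(redesigns)
    for year in HORIZON:
        last = max([r for r in redesigns if r <= year], default=None)
        for part, obs in obsolete_year.items():
            refreshed = last is not None and last >= obs
            if year >= obs and not refreshed:
                cost += BRIDGE_COST
    return cost

# enumerate all plans with up to three redesigns
best = min((frozenset(s) for r in range(4)
            for s in combinations(HORIZON, r)), key=plan_cost)
print("redesign years:", sorted(best), "cost:", plan_cost(best))
```

Even this toy shows the core trade-off the MILP optimises: a well-timed redesign amortises its fixed cost against the bridge-buy costs it eliminates.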
207

Decision-making in the future electricity grid: home energy management, pricing design, and architecture development

Hubert, Tanguy F 27 May 2016 (has links)
As the number of autonomous decision-making entities in the electricity grid increases, it is necessary to develop (1) new decision-making capabilities embedded within the grid's control and management, and (2) new grid architecture models ensuring that both individual and system objectives are met. This work develops (1) new decision-making mechanisms enabling residential energy users and electricity providers to interact through the use of dynamic price signals, and (2) policy recommendations to facilitate the emergence of shared architecture models describing the future state of the electricity grid. In the first part, two optimization models that capture the emerging flexible consumption, storage, and generation capabilities of residential end-users are formulated. An economic dispatch model that explicitly accounts for end-users' internal dynamics is proposed. A non-iterative pricing algorithm using convex and inverse linear programming is developed to induce autonomous residential end-users to behave cooperatively and minimize the provider's generation costs. In the second part, several factors that make the development of grid architecture models necessary from a public policy standpoint are identified and discussed. The grid architecture problem is rigorously framed as both a market failure legitimizing government intervention, and a meta-problem requiring the development of non-conventional methods of solution. A policy approach drawing on the theoretical concepts of broker, boundary object and boundary organization is proposed.
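The economic dispatch idea in the first part — a provider scheduling generation while end-users' flexible loads shift in time — can be sketched as a small LP. The generators, costs, and the single deferrable load below are invented, and this sketch omits the inverse-programming pricing step the thesis develops.

```python
from scipy.optimize import linprog

# Toy economic dispatch with one flexible residential load (data invented):
# two generators (cheap, peaker) serve fixed demand in two hours plus a
# deferrable load of 10 units that may run in either hour.
# Variables: g_cheap[0], g_peak[0], g_cheap[1], g_peak[1], d[0], d[1]
cost = [20, 50, 20, 50, 0, 0]     # $/unit generated; deferral itself is free
fixed = [30, 45]                  # fixed demand per hour
CAP_CHEAP = 40                    # cheap generator capacity

A_eq = [
    [1, 1, 0, 0, -1, 0],          # hour 0: generation = fixed + deferrable
    [0, 0, 1, 1, 0, -1],          # hour 1: generation = fixed + deferrable
    [0, 0, 0, 0, 1, 1],           # deferrable load totals 10 units
]
b_eq = [fixed[0], fixed[1], 10]
bounds = [(0, CAP_CHEAP), (0, None),
          (0, CAP_CHEAP), (0, None),
          (0, None), (0, None)]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("deferrable schedule:", res.x[4], res.x[5], "total cost:", res.fun)
```

The LP shifts the flexible load into the off-peak hour, where the cheap generator still has headroom; the thesis's pricing algorithm then constructs price signals that induce self-interested households to reproduce such a cooperative schedule on their own.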
208

A manufacturing strategy: fuzzy multigoal mathematical programming applied to the Stanley cordless power tools

李沛雄, Lee, Pui-hung, Johnelly. January 1993 (has links)
Published or final version / Master of Business Administration (Business Administration)
209

Strategic Forest Management Planning Under Uncertainty Due to Fire

Savage, David William 23 February 2010 (has links)
Forest managers throughout Canada must contend with natural disturbance processes that vary over both time and space when developing and implementing forest management plans designed to provide a range of economic, ecological, and social values. In this thesis, I develop a stochastic simulation model with an embedded linear programming (LP) model and use it to evaluate strategies for reducing uncertainty due to forest fires. My results showed that frequent re-planning was sufficient to reduce variability in harvest volume when the burn fraction was low; however, as the burn fraction increased above 0.45%, the best strategy to reduce variability in harvest volume was to account for fire explicitly in the planning process using Model III. A risk analysis tool was also developed to demonstrate a method for managers to improve decision-making under uncertainty. The impact of fire on mature and old forest areas was examined; LP forest management planning models reduce the areas of mature and old forest to the minimum required area, and fire further reduces the seral area. As the burn fraction increased, the likelihood of the mature and old forest areas satisfying the minimum area requirements decreased. However, if the seral area constraint was strengthened (i.e., the right-hand side of the constraint was increased), the likelihood improved. When the planning model was modified to maximize mature and old forest areas, the two fixed harvest volumes (i.e., 2.0 and 8.0 M m³/decade) had much different impacts on the areas of mature and old forest when the burn fraction was greater than 0.45%. Bootstrapped burn fraction confidence intervals were used to examine the impact of uncertain burn fraction estimates when using Model III to develop harvest schedules. I found that harvest volume bounds were large when the burn fraction was ≥0.45%. I also examined how uncertainty in natural burn fraction estimates (i.e., estimates of the pre-fire-suppression average annual area burned) used for ecosystem management can impact old forest area requirements and the resulting timber supply.
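The re-planning experiment described above can be caricatured with a Monte Carlo sketch: each decade a random fraction of the growing stock burns, and the harvest is re-planned from the surviving stock. All numbers are invented, and a simple proportional harvest rule stands in for the thesis's embedded LP (Model III).

```python
import random

# Stripped-down Monte Carlo sketch of harvest scheduling under fire risk
# (invented parameters; a proportional rule replaces the LP model).
random.seed(1)

def simulate(burn_fraction, decades=10, runs=2000):
    """Return mean and standard deviation of per-decade harvest volume."""
    volumes = []
    for _ in range(runs):
        stock = 1000.0
        for _ in range(decades):
            # random burn, averaging burn_fraction per year over a decade
            burned = stock * random.uniform(0, 2 * burn_fraction) * 10
            stock = max(0.0, stock - burned)
            harvest = 0.05 * stock        # re-planned: 5% of surviving stock
            volumes.append(harvest)
            stock -= harvest
            stock *= 1.02                 # regrowth per decade (assumed)
    mean = sum(volumes) / len(volumes)
    sd = (sum((v - mean) ** 2 for v in volumes) / len(volumes)) ** 0.5
    return mean, sd

low = simulate(0.001)   # 0.1% annual burn fraction
high = simulate(0.009)  # 0.9% annual burn fraction
print(f"low burn : mean {low[0]:.1f}, sd {low[1]:.1f}")
print(f"high burn: mean {high[0]:.1f}, sd {high[1]:.1f}")
```

The sketch reproduces the qualitative point that higher burn fractions depress achievable harvest volumes; the thesis quantifies when re-planning alone stops being enough and explicit fire-aware planning pays off.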
210

Complex question answering: minimizing the gaps and beyond

Hasan, Sheikh Sadid Al January 2013 (has links)
Current Question Answering (QA) systems have advanced significantly in answering simple factoid and list questions. Such questions are easier to process as they require small snippets of text as the answers. However, there is a category of questions that represents a more complex information need, which cannot be satisfied easily by simply extracting a single entity or a single sentence. For example, the question: “How was Japan affected by the earthquake?” suggests that the inquirer is looking for information in the context of a wider perspective. We call these “complex questions” and focus on the task of answering them with the intention to minimize the existing gaps in the literature. The major limitation of the available search and QA systems is that they lack a way of measuring whether a user is satisfied with the information provided. This was our motivation to propose a reinforcement learning formulation of the complex question answering problem. Next, we presented an integer linear programming formulation in which sentence compression models were applied to the query-focused multi-document summarization task in order to investigate whether sentence compression improves the overall performance. Both compression and summarization were treated as global optimization problems. We also investigated the impact of syntactic and semantic information in a graph-based random walk method for answering complex questions. Decomposing a complex question into a series of simple questions and then reusing the techniques developed for answering simple questions is an effective means of answering complex questions. We proposed a supervised approach for automatically learning good decompositions of complex questions in this work. A complex question often asks about a topic of the user’s interest. Therefore, the problem of complex question decomposition closely relates to the problem of topic-to-question generation. We addressed this challenge and proposed a topic-to-question generation approach to enhance the scope of our problem domain. / xi, 192 leaves : ill. ; 29 cm
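The graph-based random walk mentioned in the abstract can be sketched in a bare-bones form: sentences are nodes, cosine similarity gives edge weights, and a damped power iteration biased toward the question's terms ranks the sentences. The sentences and question below are invented, and the thesis's syntactic and semantic features are omitted.

```python
import math
from collections import Counter

# Bare-bones question-biased random walk over a sentence similarity graph
# (a PageRank-style power iteration; invented toy data).
question = "how was japan affected by the earthquake"
sentences = [
    "the earthquake in japan damaged roads and ports",
    "rescue teams arrived from several countries",
    "japan's economy was affected by factory shutdowns after the earthquake",
    "the weather was mild that week",
]

def cosine(a, b):
    """Cosine similarity of two texts over raw word counts."""
    ca, cb = Counter(a.split()), Counter(b.split())
    num = sum(ca[w] * cb[w] for w in ca)
    den = (math.sqrt(sum(v * v for v in ca.values()))
           * math.sqrt(sum(v * v for v in cb.values())))
    return num / den if den else 0.0

n = len(sentences)
sim = [[cosine(si, sj) for sj in sentences] for si in sentences]
bias = [cosine(s, question) for s in sentences]
bias = [x / sum(bias) for x in bias]       # normalise the question bias

d = 0.85                                   # damping factor
score = [1.0 / n] * n
for _ in range(50):                        # power iteration
    score = [(1 - d) * bias[i]
             + d * sum(score[j] * sim[j][i] / sum(sim[j]) for j in range(n))
             for i in range(n)]

best = max(range(n), key=score.__getitem__)
print("top sentence:", sentences[best])
```

Sentences sharing vocabulary with the question and with other central sentences accumulate score, so the earthquake-related sentences outrank the off-topic ones.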
