701

Essays in information relaxations and scenario analysis for partially observable settings

Ruiz Lacedelli, Octavio January 2019 (has links)
This dissertation consists of three main essays in which we study important problems in engineering and finance. In the first part, we study the use of information relaxations to obtain dual bounds in the context of Partially Observable Markov Decision Processes (POMDPs). POMDPs are in general intractable, and the best we can do is obtain suboptimal policies. To evaluate these policies, we investigate and extend the information relaxation approach developed originally for Markov Decision Processes. The use of information relaxation duality for POMDPs presents important challenges, and we show how change-of-measure arguments can be used to overcome them. As a second contribution, we show that many value function approximations for POMDPs are supersolutions. By constructing penalties from supersolutions we are able to achieve significant variance reduction when estimating the duality gap directly, and the resulting dual bounds are guaranteed to be tighter than those provided by the supersolutions themselves. Applications in robotic navigation and telecommunications are given in Chapter 2. A further application of this approach, in the context of personalized medicine, is provided in Chapter 5.

In the second part, we discuss a number of weaknesses inherent in traditional scenario analysis. For instance, the standard approach computes the P&L of a portfolio resulting from joint stresses to underlying risk factors while setting all unstressed risk factors to zero, thereby ignoring the conditional distribution of the unstressed risk factors given the stressed ones. We address these weaknesses by embedding the scenario analysis within a dynamic factor model for the underlying risk factors, using multivariate state-space models that capture real-world features of financial markets such as volatility clustering. These models are also sufficiently tractable to permit computation of (or simulation from) the conditional distribution of the unstressed risk factors. Our approach accommodates both observable and unobservable risk factors. We provide applications to fixed income and options portfolios, where we show the degree to which the two scenario analysis approaches can lead to dramatically different results.

In the third part, we propose a framework for studying a human-machine interaction system in the context of financial robo-advising. In this setting, based on risk-sensitive dynamic games, the robo-advisor adaptively learns the preferences of the investor as the investor makes decisions that optimize her risk-sensitive criterion. The investor's and machine's objectives are aligned, but the presence of asymmetric information makes this joint optimization a game with strategic interactions. By considering an investor with mean-variance risk preferences, we are able to reduce the game to a POMDP. The human-machine interaction protocol features a trade-off between allowing the robo-advisor to learn the investor's preferences through costly communications and optimizing the investor's objective using possibly outdated information.
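The conditional-scenario idea in this abstract — stressing some risk factors and conditioning the rest on them, rather than zeroing them out — can be sketched in the simplest possible setting, a static jointly Gaussian factor model. This is only an illustrative simplification (the dissertation uses dynamic state-space models); the function name and parameters are ours:

```python
import numpy as np

def conditional_scenario(mu, cov, stressed_idx, stressed_vals):
    """Mean and covariance of the unstressed risk factors given the
    stressed factors, assuming all factors are jointly Gaussian.
    This is the standard Gaussian conditioning formula, used here as a
    toy stand-in for the dissertation's dynamic factor models."""
    all_idx = np.arange(len(mu))
    free_idx = np.setdiff1d(all_idx, stressed_idx)
    mu_f, mu_s = mu[free_idx], mu[stressed_idx]
    S_ff = cov[np.ix_(free_idx, free_idx)]      # unstressed block
    S_fs = cov[np.ix_(free_idx, stressed_idx)]  # cross-covariance
    S_ss = cov[np.ix_(stressed_idx, stressed_idx)]
    # conditional mean shifts with the stress; setting unstressed
    # factors to zero would ignore this adjustment entirely
    cond_mu = mu_f + S_fs @ np.linalg.solve(S_ss, stressed_vals - mu_s)
    cond_cov = S_ff - S_fs @ np.linalg.solve(S_ss, S_fs.T)
    return cond_mu, cond_cov
```

With correlated factors, even a stress to a single factor moves the conditional mean of every other factor, which is exactly the effect traditional scenario analysis discards.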
702

Using robust statistical methodology to evaluate the performance of project delivery systems: A case study of horizontal construction

Charoenphol, Dares 06 January 2017 (has links)
The objective of this study is to demonstrate the application of the bootstrapping M-estimator (a robust analysis of variance, ANOVA) to test the null hypothesis of means equality among the cost and schedule performance of three project delivery systems (PDS). A planned-contrast methodology is applied after the robust ANOVA to determine where the differences in means lie.

The results of this research concluded that the traditional PDS (Design-Bid-Build, DBB) outperformed the two alternative PDS (Design-Build (DB) and Construction Manager/General Contractor (CMGC)), that DBB and CMGC outperformed DB, and that DBB outperformed CMGC, for the Cost Growth and Change Order Cost Factor measures. Conversely, the alternative PDS (DB and CMGC) outperformed DBB, DB and CMGC separately outperformed DBB, and, between the two alternative PDS, CMGC outperformed DB, for the Schedule Cost Growth measure.

These findings can help decision makers and owners make informed, cost- and schedule-related decisions when choosing a PDS for their projects. Though this case study is based on sample data from the construction industry, the same methodology and statistical process can be applied to other industries and variables of interest when the sample data are unbalanced and the normality and homogeneity-of-variance assumptions are violated.
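The bootstrap idea behind the study's robust ANOVA can be sketched with a much simpler statistic: a percentile bootstrap test that several groups share a common trimmed mean. This is a simplified stand-in, not the bootstrapped M-estimator procedure the dissertation actually uses, and all names and defaults are ours:

```python
import numpy as np

def boot_trimmed_anova(groups, trim=0.2, n_boot=2000, seed=0):
    """Percentile-bootstrap test that all group trimmed means are equal.
    Trimmed means resist outliers; bootstrapping avoids the normality and
    equal-variance assumptions of classical ANOVA."""
    rng = np.random.default_rng(seed)

    def tmean(x):
        x = np.sort(x)
        k = int(trim * len(x))
        return x[k:len(x) - k].mean()

    # observed statistic: dispersion of the group trimmed means
    obs = np.var([tmean(g) for g in groups])
    # center each group at its own trimmed mean to mimic the null
    centered = [g - tmean(g) for g in groups]
    stats = []
    for _ in range(n_boot):
        stats.append(np.var([tmean(rng.choice(c, size=len(c), replace=True))
                             for c in centered]))
    p_value = np.mean(np.asarray(stats) >= obs)
    return obs, p_value
```

A small p-value rejects means equality, after which planned contrasts (pairwise comparisons chosen in advance) locate which groups differ, as in the study.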
703

An implicit enumeration model with grouped activities

January 1974 (has links)
704

An integrated approach to the analysis of the working capital assets

January 1981 (has links)
The central question investigated is that of the potential worth of a simultaneous model of the working capital system (and its solution). The published research on working capital management can be shown to be deficient in that it lacks a system-wide perspective. Arguments have been made in support of the need for a theory of, and a method for, analyzing the working capital accounts as a whole. This integrated approach has not yet been devised, since most research has concerned itself with only a single current-asset account.

This research develops two models to aid in integrating the analysis of the working capital accounts. The first is a Deterministic Analytical Model designed to express the costs and returns experienced from a single day's credit sales. The second is a Stochastic Model (a simulation) which replicates a working capital system and considers the probabilistic nature of a 'real-world' system. A Solution Improvement Algorithm is devised and used with the Stochastic Model to find a solution (policy) for the working capital system which approximates the simultaneous solution.

This author concludes that a simultaneous model could yield improvement significant enough, beyond that obtained by the methods used in current practice, to justify its continued development. Tests made with the Solution Improvement Algorithm and Stochastic Model evaluate the sensitivity of this conclusion to various system parameter changes. The author concludes that the system is sensitive to changes in the system's cost structure and internal functional relationships, and that in each case the initial conclusion holds true (although the exact amount of improvement differs from one situation to another).
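A deterministic model of "the costs and returns experienced from a single day's credit sales" might, in its simplest form, net gross profit against expected bad-debt losses and the financing cost of carrying the receivable until collection. The following sketch is ours; the parameter values are illustrative and not taken from the dissertation's model:

```python
def daily_credit_sales_value(sales, gross_margin=0.25, bad_debt_rate=0.02,
                             collection_days=45, annual_rate=0.12):
    """Net value of one day's credit sales: cash eventually collected,
    less cost of goods sold and the cost of financing the receivable
    over the collection period. All parameters are hypothetical."""
    collected = sales * (1.0 - bad_debt_rate)          # expected collections
    gross_profit = collected - sales * (1.0 - gross_margin)
    carrying_cost = sales * annual_rate * collection_days / 365.0
    return gross_profit - carrying_cost
```

Even this toy version shows the system-wide coupling the abstract argues for: credit policy (bad-debt rate, collection period) interacts with financing cost, so optimizing one account in isolation can mislead.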
705

An interactive optimization approach to scheduling long-term debt repayment for an expanding hospital

January 1977 (has links)
706

A Jacobian transformation method for nonlinear programming: a Lagrangian approach

January 1975 (has links)
707

Scheduling to minimize conflict cost

January 1973 (has links)
708

A response surface analysis of the effects of scheduling flexibility alternatives on labor utilization in a tour environment

Unknown Date (has links)
This dissertation examines labor scheduling in a tour environment. The impetus for this research lies in the lack of improvement in service sector productivity compared to manufacturing productivity since 1960. A contributing factor in this lack of productivity improvement is the inability of service delivery systems to stockpile product during periods of low demand for use in periods of high demand. This inability means that service delivery systems must often have large amounts of overcapacity during periods of low demand in order to meet requirements during periods of high demand. Thus, any increase in the efficiency with which labor is scheduled will lead to an increase in labor utilization and subsequently productivity.

This dissertation examines four scheduling flexibility factors — shift length, meal-break window, start-time interval, and tour length — which were found in previous research to have an effect on labor utilization. These factors were examined in an environment that is typical of service systems such as department stores, restaurants, and amusement parks.

The results of the analysis indicated that shift length and meal-break window are the important factors in determining labor utilization for the environment used in this dissertation. The analysis also indicated that ILP solutions to labor scheduling problems may have undesirable characteristics. The implications of these findings are that (1) efforts to improve labor utilization should concentrate on labor policies affecting shift length and meal-break window and (2) labor scheduling approaches which rely on optimal solutions should examine the surplus labor that is scheduled.

Source: Dissertation Abstracts International, Volume: 53-03, Section: B, page: 1588. / Major Professor: William A. Shrode. / Thesis (Ph.D.)--The Florida State University, 1992.
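The kind of shift-coverage model underlying this work can be sketched on a toy instance: choose how many workers start on each allowed shift so that every period's demand is covered at minimum total paid hours. Real tour-scheduling instances are solved as ILPs; the exhaustive search below is only a stand-in for tiny examples, and the instance data are ours:

```python
from itertools import product

def min_labor_hours(demand, shifts, max_per_shift=6):
    """Exhaustively solve a tiny shift-coverage problem.
    demand[t] = workers required in period t; each shift is a tuple of the
    periods it covers. Returns (total_hours, workers_per_shift) for the
    cheapest feasible staffing, or None if infeasible."""
    n_periods = len(demand)
    best = None
    for counts in product(range(max_per_shift + 1), repeat=len(shifts)):
        coverage = [0] * n_periods
        hours = 0
        for c, shift in zip(counts, shifts):
            hours += c * len(shift)        # paid hours = shift length x headcount
            for t in shift:
                coverage[t] += c
        if all(coverage[t] >= demand[t] for t in range(n_periods)):
            if best is None or hours < best[0]:
                best = (hours, counts)
    return best
```

Surplus labor (coverage above demand) falls as the shift menu grows — the flexibility effect the dissertation measures through shift length and related factors.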
709

Financial Risk Management: Portfolio Optimization.

Yang, Song. Unknown Date (has links)
Risk management is a core activity of financial institutions. There are different types of financial risk, e.g. market, credit, operational, model, liquidity, and business risk. Managing these risks to minimize potential losses is essential to ensuring the viability and good reputation of financial institutions. It is therefore necessary to have an accurate model and a proper measurement that describe the risk.

In this thesis, we model the risks with proper measurements such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR). The dependence between risks is described by a copula, which connects marginal distributions to a joint distribution. Among many popular copulas, we identify one suited to describing the correlations between risks and between financial data. Portfolio optimization problems with VaR and CVaR as risk measures are solved, and numerical results indicate that the model describes real-world risk very well. In addition, we propose another method based on Independent Component Analysis: by a linear transformation, we obtain models in independent components with the same optimal solution. The time required to solve the new models is greatly reduced with the same accuracy.
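The two risk measures named in this abstract are simple to state empirically: VaR at level alpha is a high quantile of the loss distribution, and CVaR is the average loss beyond that quantile. A minimal sample-based sketch (our own, separate from the thesis's copula-based models):

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR at level alpha from a sample of portfolio
    losses (positive values = losses). VaR is the alpha-quantile of the
    sorted sample; CVaR is the mean of losses at or beyond VaR."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * len(losses))) - 1   # index of the alpha-quantile
    var = losses[k]
    cvar = losses[losses >= var].mean()          # average tail loss
    return var, cvar
```

Because CVaR averages the whole tail, it is always at least as large as VaR and, unlike VaR, is coherent and convex — which is why CVaR-based portfolio optimization is tractable.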
710

Implementing Innovation in Planning Practice: The Case of Travel Demand Forecasting.

Newmark, Gregory Louis. Unknown Date (has links)
Travel demand modeling is a core technology of transportation planning and has been so for half a century. This technology refers to the structured use of mathematical formulae and spatial data to forecast the likely travel impacts of possible transportation, land use, and demographic scenarios. Although this planning practice is pervasive, critics have long argued that it has been resistant to innovation. As the policy scenarios explored through modeling become increasingly complex, particularly in the face of climate change, the question arises of whether regional planning agencies will be able to change their practices through implementing innovation. This research addresses this question by examining the history of travel demand modeling as practiced at regional planning agencies, interviewing travel demand modeling experts, conducting detailed case studies of model practice evolution at two metropolitan planning organizations — the San Francisco Bay Area's Metropolitan Transportation Commission (MTC) and the capital region's Sacramento Area Council of Governments (SACOG) — and analyzing the early impacts of California's groundbreaking climate change legislation on the modeling practiced in the Golden State. The findings suggest that, far from being a static practice, travel demand modeling at regional agencies has advanced, particularly with public interest in exploring the impacts of major policy interventions. The nature of travel demand models does not naturally foster changes in practice; however, government action can structure the innovation process by establishing clear expectations of agency modeling capabilities to meet legislative mandates, providing resources for investments in new approaches, and creating forums for interagency interaction and information dissemination.
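The "structured use of mathematical formulae and spatial data" this abstract refers to is typified by the four-step model; its trip-distribution step is often a gravity model. The fragment below is a textbook singly-constrained gravity model, not any specific agency's implementation, and the deterrence parameter is assumed:

```python
import numpy as np

def gravity_trips(productions, attractions, cost, beta=0.1):
    """Distribute trips produced in each zone across destination zones in
    proportion to zone attractions, discounted by an exponential
    travel-cost deterrence function. cost[i, j] is the travel cost from
    zone i to zone j; rows of the result sum to each zone's productions."""
    deterrence = np.exp(-beta * cost)
    weights = attractions[None, :] * deterrence
    weights = weights / weights.sum(axis=1, keepdims=True)  # row-normalize
    return productions[:, None] * weights                    # trip matrix T[i, j]
```

Policy scenarios enter through the inputs: a land-use change alters productions and attractions, while a pricing or infrastructure change alters the cost matrix, and the model forecasts the resulting trip pattern.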
