871

Long-term versus Short-term Contracting in Salesforce Compensation

Long, Fei January 2019 (has links)
This dissertation investigates multi-period salesforce incentive contracting. The first chapter is an overview of the problems as well as the main findings. The second chapter continues with a review of the related literature. The third and fourth chapters address a central question in salesforce contracting: how frequently should a firm compensate its sales agents over a long-term horizon? Agents can game the long-term contract by varying their effort levels dynamically over time, as discussed in Chapter 3, or by alternating between a “bold” action and a “safe” action dynamically over time, as discussed in Chapter 4. Chapter 3 studies multi-period salesforce incentive provisions when agents are able to vary their demand-enhancing effort levels dynamically. I establish a stylized agency-theory model to analyze this central question. I consider salespeople's dynamic responses in exerting effort (often known as “gaming”). I find that long time horizon contracts weakly dominate short time horizon contracts, even though they enable gaming by the agent, because they allow compensation to be contingent on more extreme outcomes; this not only motivates the salesperson more, but also leads to lower expected payment to the salesperson. A counterintuitive observation from my analysis is that, under the optimal long time horizon contract, the firm may find it optimal to induce the agent not to exert high effort in every period. This provides a rationale for effort exertion patterns that are often interpreted as suboptimal for the firm (e.g., exerting effort only in early periods, often called “giving up”; exerting effort only in later periods, often called “postponing effort”). I also discuss the implications of sales pull-in and push-out, and of dependence across periods (through limited inventory), for the structure of the optimal contract. Chapter 4 examines multi-period salesforce incentive contracting where sales agents can dynamically choose between a bold action with higher sales potential but also higher variance, and a safe action with limited sales potential but lower variance. I find that the contract format is determined by how much the firm wants later actions to depend on earlier outcomes. Making later actions independent of earlier demand outcomes reduces agents' gaming, but it also reduces an agent's incentive to take bold actions. When the two periods are independent, an extreme two-period contract with a hard-to-achieve quota, or a polarized two-period contract allowing agents to make up sales, can strictly dominate a period-by-period contract, because they induce more bold actions in earlier periods by making later actions dependent on earlier outcomes. However, when the two periods are dependent through a limited inventory to be sold across both periods, the period-by-period contract can strictly dominate the two-period contract by allowing the principal more flexibility in adjusting the contract.
872

Ein Workshopzuteilungsverfahren als zweistufige Auktion zur Enthüllung privater Präferenzen (A Workshop Assignment Procedure as a Two-Stage Auction for Revealing Private Preferences)

Thede, Anke January 2006 (has links)
Also: Karlsruhe, Univ., dissertation, 2006
873

A distance and shape-based methodology for the unequal area facility layout problem

Kriausakul, Thanat 16 November 2001 (has links)
Significant improvements in production effectiveness have resulted from implementing cellular manufacturing systems (CMS). Following cell formation, an important issue that needs to be addressed is the unequal cell (or department/facility) layout problem, a sub-problem of the cell formation (CF) problem. The work reported in this thesis addresses the assignment of unequal-area cells to locations given known traffic movements on a shop floor. In addition, this research addresses the impact of the geometry, or shape, of the department as an important design factor in the unequal area facility layout problem, an issue that has not been addressed by previous researchers. The problem is formulated as a mixed-binary non-linear programming model and is proven to be NP-hard in the strong sense. Due to its computational complexity, a higher-level heuristic based on tabu search is proposed to solve the problem efficiently. Six versions of the tabu search-based heuristic are tested on three different problem structures. The experimental results indicate that the tabu search-based heuristic using short-term memory and variable tabu-list sizes is preferred over the other heuristics as the problem size increases. A performance comparison with previous research shows that the solutions obtained in this research for well-known problems are better than those reported previously. / Graduation date: 2002
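As a rough illustration of the heuristic family named in this abstract, the sketch below implements a tabu search with short-term memory and a variable tabu-list size over a permutation encoding of the layout. The cost function, data layout, and parameters are invented for illustration; the dissertation's actual model additionally handles unequal areas and department shapes.

```python
# Hypothetical tabu search with short-term memory and variable tabu tenure.
# Departments are assigned to locations via a permutation; flow and dist are
# invented department-flow and location-distance matrices.
import random

def layout_cost(perm, flow, dist):
    """Total material-handling cost: flow between departments times the
    distance between the locations they are assigned to."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n) if i != j)

def tabu_search(flow, dist, iters=500, min_tenure=3, max_tenure=9):
    n = len(flow)
    current = list(range(n))
    random.shuffle(current)
    best, best_cost = current[:], layout_cost(current, flow, dist)
    tabu = {}  # (i, j) swap -> iteration until which the move stays tabu

    for it in range(iters):
        candidates = []
        for i in range(n):
            for j in range(i + 1, n):
                if tabu.get((i, j), -1) >= it:
                    continue  # move forbidden by short-term memory
                neigh = current[:]
                neigh[i], neigh[j] = neigh[j], neigh[i]
                candidates.append((layout_cost(neigh, flow, dist), (i, j), neigh))
        if not candidates:
            continue
        cost, move, neigh = min(candidates)
        current = neigh
        # Variable tabu-list size: forbid reversing this swap for a random spell.
        tabu[move] = it + random.randint(min_tenure, max_tenure)
        if cost < best_cost:
            best, best_cost = neigh[:], cost
    return best, best_cost
```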
874

Comparison of regression and ARIMA models with neural network models to forecast the daily streamflow of White Clay Creek.

Liu, Greg Qi. Unknown Date (has links)
Linear forecasting models have played major roles in many applications for over a century. If the error terms in a model are normally distributed, linear models are capable of producing the most accurate forecasting results. The central limit theorem (CLT) provides theoretical support for applying linear models. / During the last two decades, nonlinear models such as neural network models have gradually emerged as alternatives for modeling and forecasting real processes. In hydrology, neural networks have been applied to rainfall-runoff estimation as well as stream and peak flow forecasting. Successful nonlinear methods rely on the generalized central limit theorem (GCLT), which provides theoretical justification for applying nonlinear methods to real processes in impulsive environments. / This dissertation attempts to predict the daily streamflow of White Clay Creek by making intensive comparisons of linear and nonlinear forecasting methods. Data are modeled and forecasted by seven linear and nonlinear methods: the random walk with drift method; the ordinary least squares (OLS) regression method; the time series Autoregressive Integrated Moving Average (ARIMA) method; the feed-forward neural network (FNN) method; the recurrent neural network (RNN) method; the hybrid OLS regression and feed-forward neural network (OLS-FNN) method; and the hybrid ARIMA and recurrent neural network (ARIMA-RNN) method. The first three methods are linear and the remaining four are nonlinear. The OLS-FNN method and the ARIMA-RNN method are two new nonlinear methods proposed in this dissertation. These two hybrid methods have three special features that distinguish them from any existing hybrid method available in the literature: (1) using the OLS or ARIMA residuals as the targets of the subsequent neural networks; (2) training two neural networks in parallel for each hybrid method with two objective functions (the minimum mean squared error function and the minimum mean absolute error function); and (3) using the two trained neural networks to obtain respective forecasting results and then combining them with a Bayesian Model Averaging technique. Final forecasts from the hybrid methods have linear components resulting from the regression or ARIMA method and nonlinear components resulting from the feed-forward or recurrent neural networks. / Forecasting performance is evaluated by both root mean square error (RMSE) and mean absolute error (MAE). The forecasting results indicate that linear methods provide the lowest-RMSE forecasts when data are normally distributed and data records are long enough, while nonlinear methods provide more consistent RMSE and MAE forecasts when data are non-normally distributed. Nonlinear neural network methods also provide lower RMSE and MAE forecasts than linear methods even for data that are normally distributed but have small sample sizes. The hybrid methods provide the most consistent RMSE and MAE forecasts for data that are non-normally distributed. / The original flow series is differenced and log-differenced to obtain two increment series: the difference series and the log difference series. These two series are then decomposed based on stochastic process decomposition theorems to produce two, three, and four variables that are used as input variables in the regression models and neural network models.
/ By working on an increment series, either the difference series or the log difference series, instead of the original flow series, we gain two benefits. First, we obtain a clear time series model. Second, while the original flow series is autocorrelated, an increment series is approximately an independently distributed series, for which parameters such as the mean and standard deviation can be estimated easily. / The length of the data record used for modeling is very important in practice. Model parameters and forecasts are estimated from 30 data samples (1 month), 90 data samples (3 months), 180 data samples (6 months), and 360 data samples (1 year).
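For readers unfamiliar with residual-based hybrids, the sketch below shows one possible reading of the idea: fit a linear ARIMA model to the log difference series, then train a small feed-forward network on the ARIMA residuals and add its correction to the linear forecast. The recurrent network, the dual objective functions, and the Bayesian Model Averaging step of the dissertation are omitted, and the lag order and network size are placeholders.

```python
# Minimal hybrid linear/nonlinear forecast: ARIMA on the log difference
# series plus a feed-forward network trained on the ARIMA residuals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(flow, order=(2, 0, 1), lags=3):
    flow = np.asarray(flow, dtype=float)
    diff = np.diff(np.log(flow))            # log difference series
    arima = ARIMA(diff, order=order).fit()  # linear component
    resid = np.asarray(arima.resid)

    # Lagged residuals become inputs for the nonlinear component.
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    y = resid[lags:]
    nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    nn.fit(X, y)

    # One-step-ahead forecast = linear part + nonlinear residual correction.
    linear_part = arima.forecast(steps=1)[0]
    nonlinear_part = nn.predict(resid[-lags:].reshape(1, -1))[0]
    next_logdiff = linear_part + nonlinear_part
    return flow[-1] * np.exp(next_logdiff)   # back-transform to a flow level
```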
875

Two stochastic control problems in finance: American options and illiquid investments.

Karahan, Cenk Cevat. Unknown Date (has links)
Stochastic control problems are ubiquitous in modern finance. However, explicit solutions and policies for the problems faced by investors receive disproportionately little attention. This dissertation focuses on characterizing and solving the policies for two stochastic control problems that buy-side investors face in the market: exercising American options and optimally redeeming illiquid investments such as hedge funds. / The return an investor realizes from an American or Bermudan style derivative is highly dependent on the exercise policy he employs. Despite the fact that the exercise policy is as crucial to the option buyer as the price, constructing these policies has not received as much attention as the pricing problem. The existing research on optimal exercise policies is complex, impractical, and conducted only to the extent needed to reach accurate prices. Here we present a simple and practical new heuristic for developing exercise policies for American and Bermudan style derivatives, which are immensely valuable to buy-side entities in the market. Our heuristic relies on a simple look-ahead algorithm, which yields comparatively good exercise policies for Bermudan options with few exercise opportunities. We resort to policy improvement to construct more accurate exercise frontiers for American options with frequent exercise opportunities. This exercise frontier is in turn used to estimate the price of the derivative via simulation. Numerical examples yield prices comparable in accuracy to those of existing sophisticated simulation methods. Chapter 1 introduces the problem and lays out the valuation framework; Chapter 2 defines and describes our heuristic approach; Chapter 3 provides algorithms for implementation, with numerical examples provided in Chapter 4. / Optimal redemption policies for illiquid investments are studied in Chapter 5, where we consider a risk-averse investor whose investable assets are held in a perfectly liquid asset (a portfolio of cash and liquid assets, or a mutual fund) and in another investment that has liquidity restrictions. The illiquidity could be due to restrictions on the investment (as with hedge funds) or to the nature of the asset held (such as real estate). The investor's objective is to maximize the utility he derives from his terminal wealth at a future end date of his investment horizon. Furthermore, the investor wants to hold his liquid wealth above a certain subsistence level, below which he incurs hefty borrowing costs or a shortfall penalty. We characterize the conditions under which it is optimal for the investor to liquidate his illiquid assets. The redemption notification problem for hedge fund investors has a certain affinity with the optimal control methods used in widely studied inventory management problems, and the optimal policy has a monotone structure similar in nature to those problems. / Chapter 6 concludes the study and suggests possible extensions.
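The sketch below illustrates the general idea of a look-ahead exercise rule for a Bermudan put under assumed geometric Brownian motion dynamics: at each exercise date, intrinsic value is compared with a crude simulated estimate of the value of waiting one more period. It is an illustration of the concept only, not the dissertation's heuristic or its policy-improvement step, and all parameters are placeholders.

```python
# One-step look-ahead exercise rule for a Bermudan put under GBM (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def gbm_step(s, r, sigma, dt, n):
    """Simulate n risk-neutral GBM prices one period ahead from level s."""
    z = rng.standard_normal(n)
    return s * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)

def lookahead_put_price(s0=100, k=100, r=0.05, sigma=0.2, T=1.0,
                        dates=10, n_paths=5_000, n_inner=100):
    dt = T / dates
    disc = np.exp(-r * dt)
    payoffs = np.empty(n_paths)
    for p in range(n_paths):
        s, value = s0, 0.0
        for t in range(1, dates + 1):
            s = gbm_step(s, r, sigma, dt, 1)[0]
            exercise = max(k - s, 0.0)
            if t == dates:
                value = exercise * disc**t
                break
            # Look-ahead: expected discounted payoff of waiting one period
            # and then exercising (or not) at the next date.
            nxt = gbm_step(s, r, sigma, dt, n_inner)
            continuation = disc * np.mean(np.maximum(k - nxt, 0.0))
            if exercise > continuation and exercise > 0:
                value = exercise * disc**t
                break
        payoffs[p] = value
    return payoffs.mean()

print(lookahead_put_price())
```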
876

Factors Affecting Forecast Accuracy of Scrap Price in the U.S. Steel Industry.

Hardin, Kristin. Unknown Date (has links)
The volatility of steel scrap pricing makes formulating accurate scrap price forecasts difficult, if not impossible. While literature abounds regarding price forecasts for oil, electricity, and other commodities, no reliable scrap price forecasting models exist. The purpose of this quantitative futuristic study was (1) to assess the factors that influence scrap prices and (2) based on this information, to develop an effective scrap price forecast model to help steel managers make effective purchasing decisions. The theoretical foundation for the study was provided by probability theory, which has traditionally informed futures research. Probability theory draws conclusions about a dataset based on statistical and logical consequence, without attempting to identify physical causation. Secondary data from the American Metal Market were subjected to time series techniques and autoregressive moving average models. The research led to the development of two key indices, the West Texas Index and the Advanced Decliner Index, that should improve the reliability of this scrap price forecast model. The literary, business, and social change implications of this work include a unique price forecasting technique for scrap material; a globally more competitive, profitable, and sustainable steel industry in America; and, consequently, increased employment opportunities in an industrial sector vital to the health of the entire American economy and society.
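As a hedged illustration of fitting an ARMA-type model with an exogenous explanatory series, the sketch below uses a synthetic price series and a hypothetical driver index; it does not reproduce the American Metal Market data or the West Texas and Advanced Decliner indices developed in the study.

```python
# Illustrative ARMA-with-exogenous-regressor fit on synthetic scrap prices.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
oil_index = np.cumsum(rng.normal(0, 1, 120)) + 60          # hypothetical driver series
scrap_price = 200 + 2.0 * oil_index + rng.normal(0, 5, 120)  # synthetic prices

model = SARIMAX(scrap_price, exog=oil_index, order=(1, 0, 1)).fit(disp=False)
next_oil = np.array([[oil_index[-1]]])                      # assumed next driver value
print(model.forecast(steps=1, exog=next_oil))               # one-step-ahead price forecast
```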
877

Dynamic electricity pricing for smart homes.

Uckun, Canan. Unknown Date (has links)
The electricity industry is undergoing a dramatic transformation as computerization is driven down to the home level through smart meters and smart home technologies. Smart meters enable consumers to control and manage their energy consumption in a more informed way, and they give retail energy providers the ability to price in real time and to interrupt consumers individually. Currently, retailers face electricity prices that change instantaneously with wholesale markets, but consumers see fixed rates that do not change over time in the short run. The main objective of this work is to investigate the conditions under which dynamic pricing to smart homes can improve social welfare, the magnitude of these improvements, and their sensitivity to home characteristics. / We develop a mathematical framework for a smart home's optimal dynamic response to dynamic price signals. We view the home as an energy system, which we decompose by consumption category and appliance. We provide the first models for price-responsive, occupant-aware appliances. We propose approaches to estimate the utility function for thermal comfort. We also prove structural policy results. / We also provide a hierarchical pricing methodology for an electricity utility that sends price signals to homes, which in turn send prices to their appliances. Starting from first principles, we show that under certain conditions it is socially optimal for the electricity utility to pass spot prices through to its customers. We also provide a methodology to simulate a real-sized city. / Finally, we present extensive numerical results on ComEd's residential customers' responses to dynamic prices through air conditioners during a summer month. Our results suggest that dynamic prices reduce power bills significantly, and even more so with price-responsive appliances. On the other hand, dynamic pricing increases power bills significantly on peak days, although price-responsive air conditioners mitigate these increases. Overall, social welfare may increase by up to 2.6% for the month and up to 6.8% on a peak day. We discuss future directions for further exploring the benefits of smart homes and pricing.
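A minimal sketch of what a price-responsive appliance decision might look like appears below: an air conditioner chooses an hourly on/off plan by trading off a dynamic electricity price against a quadratic thermal-comfort penalty. The thermal model, prices, and comfort weights are invented placeholders and are far simpler than the dissertation's framework.

```python
# Toy price-responsive air conditioner: choose an on/off plan over a short
# horizon to balance energy cost against a thermal-comfort penalty.
import numpy as np

prices = np.array([0.08, 0.10, 0.25, 0.40, 0.30, 0.12])    # assumed $/kWh by hour
outdoor = np.array([30, 31, 33, 34, 33, 31], dtype=float)   # assumed outdoor temp, deg C
comfort_target, comfort_weight = 24.0, 0.05                 # assumed comfort utility
power_kw, cool_effect, leak = 2.0, 1.5, 0.3                 # assumed thermal/energy params

def total_cost(run_schedule, indoor0=26.0):
    """Energy cost plus comfort penalty for a given hourly on/off plan."""
    indoor, cost = indoor0, 0.0
    for t, on in enumerate(run_schedule):
        indoor += leak * (outdoor[t] - indoor) - (cool_effect if on else 0.0)
        cost += prices[t] * power_kw * on
        cost += comfort_weight * (indoor - comfort_target) ** 2
    return cost

# Brute-force the best plan over the short horizon (2^6 = 64 options).
best = min(range(2 ** len(prices)),
           key=lambda b: total_cost([(b >> t) & 1 for t in range(len(prices))]))
print([(best >> t) & 1 for t in range(len(prices))])
```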
878

Photogrammetry in Traffic Accident Reconstruction

O’Shields, Lara Lynn 01 August 2007 (has links)
The aim of this research is to utilize PhotoModeler, a close-range photogrammetry software package, in various traffic accident reconstruction applications. More specifically, three distinct studies were conducted: (1) vehicle crush measurement, (2) road curve measurement, and (3) an evaluation of common traffic accident reconstruction measurement methodologies. The first study applied the photogrammetric process to controlled crash information generated by the National Highway Traffic Safety Administration (NHTSA). A statistical procedure known as bootstrapping was used to generate distributions from which the variability was examined. The “within”-subject analysis showed that 44.8% of the variability is due to the technique itself, and the “between”-subjects analysis showed that 55.2% of the variability is attributable to vehicle type, roughly half and half. Additionally, a 95% confidence interval for the “within” analysis revealed that the mean difference (between this study and NHTSA) fell between -2.52 mph and +2.74 mph; the “between” analysis showed a mean difference between -3.26 mph and +2.41 mph. The second study focused on photogrammetry in road curve measurement. More specifically, this study applied photogrammetry to (simulated) road curves in lieu of traditional measurement methods, such as measuring tapes and measuring wheels. In this work, thirty (30) curves with different, known radii of curvature were deliberately constructed. Photogrammetry was then used to measure each of the constructed curves, and a comparison was made between the known radii “R” (control group) and photogrammetry's values of “R” (treatment group). Matched-pairs (paired) comparisons were then used to examine these two populations. The difference between the photogrammetry “R” and the known “R” ranges between 0.001% and 0.874%. Additionally, we are 95% confident that the mean difference between the two techniques is between -0.33 and 0.51 feet. Since this interval contains zero, we conclude that the two techniques do not differ significantly.
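The sketch below illustrates the kind of bootstrap confidence interval reported above: paired differences are resampled with replacement and a 95% percentile interval for the mean difference is read off. The data are synthetic placeholders, not the NHTSA crash data used in the study.

```python
# Bootstrap percentile confidence interval for a mean paired difference.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical paired differences (e.g., photogrammetry speed minus reference speed, mph).
diffs = rng.normal(0.1, 1.3, size=30)

boot_means = np.array([rng.choice(diffs, size=diffs.size, replace=True).mean()
                       for _ in range(10_000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% CI for mean difference: [{lo:.2f}, {hi:.2f}] mph")
```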
879

Reliability in Lean Systems

Keyser, Robert S 01 December 2008 (has links)
Implementation of Lean manufacturing systems often turns into an expensive hit-or-miss proposition. Whereas many organizations that lack immediate success quickly abandon their ‘Lean’ plans in hopes that the next great marketing panacea will solve their efficiency woes, organizations that experience early success often have difficulty sustaining their Lean efforts. To further exacerbate the dilemma, knowledge of the reliability of Lean systems is currently inadequate. This paper proposes a contemporary Lean paradigm, reliability in Lean systems, through the development of an innovative Lean System Reliability Model (LSRM). Principally, LSRM models the reliability of Lean subsystems as a basis for determining the reliability of the Lean system as a whole; Lean subsystems, in turn, consist of reliability measures for Lean components. Once principal components analysis techniques are employed to determine the critical subsystems, value stream mapping is used to illustrate the critical subsystem workflow sequence. Monte Carlo simulations are performed for the Lean system, its subsystems, and its components and are then compared with historical data to determine the adequacy of the LSRM model. In addition, a regression model is developed to ascertain the contribution of LSRM towards predicting percent on-time delivery.
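The sketch below illustrates the Monte Carlo step in miniature: system reliability is estimated from assumed component reliabilities, under the simplifying (and here hypothetical) assumption that the critical Lean subsystems operate in series along the value stream.

```python
# Monte Carlo estimate of system reliability from component reliabilities.
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical subsystems and component reliabilities (placeholders only).
subsystems = {
    "supplier_delivery": [0.98, 0.95],
    "cell_production":   [0.97, 0.99, 0.96],
    "shipping":          [0.99],
}

def simulate_system(n_trials=100_000):
    up = np.ones(n_trials, dtype=bool)
    for comps in subsystems.values():
        for r in comps:
            up &= rng.random(n_trials) < r   # does this component work on each trial?
    return up.mean()                          # fraction of trials with every component up

print(f"Estimated Lean system reliability: {simulate_system():.4f}")
```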
880

Person-Environment Fit and Readiness for Change: Exploring the Moderating Role of Leader-Member Exchange and Perceived Organizational Support

Pimentel, Joana R.C. 01 December 2008 (has links)
This paper aimed to investigate the interplay of multiple facets of person-environment fit with individual readiness for change, and to expose potential moderators of this relationship, namely organizational support and the quality of the relationship with supervisors. The extant research on the relationships between person-environment fit and a number of individual- and organization-level outcomes reveals considerable discrepancies, mainly attributed to the measurement of person-environment fit and to potential moderators. With this in mind, moderated multiple regressions (MMR) were conducted to test the hypothesized interaction effects. The results revealed no significant interactions between facets of person-environment fit and the proposed moderators. However, the significant correlations found between tenure and the readiness for change dimensions led to a series of post hoc analyses to explore whether different tenure groups exhibited different relationship patterns across the variables measured in this study, and to investigate a potential moderating effect of tenure on the relationship between person-environment fit and readiness for change. The results indicated that tenure significantly increased the prediction of readiness for change by person-environment fit, underscoring the importance of workforce composition in readiness for change research. The findings hold interesting implications for both research and practice concerning the measurement of person-environment fit, and with respect to the impact of individual- and organization-level variables on the relationship between person-environment fit and readiness for change. These implications, along with the limitations of the present study, are discussed.
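For readers unfamiliar with moderated multiple regression, the sketch below shows the form of such a test: the outcome is regressed on the predictor, the moderator, and their interaction term. The variable names and data are hypothetical and do not reproduce the study's measures.

```python
# Moderated multiple regression (MMR): main effects plus an interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "pe_fit":  rng.normal(0, 1, n),   # hypothetical person-environment fit score
    "support": rng.normal(0, 1, n),   # hypothetical perceived organizational support
})
# Synthetic outcome with no built-in interaction, so the interaction term
# should come out near zero, mirroring a null moderation result.
df["readiness"] = 0.4 * df.pe_fit + 0.3 * df.support + rng.normal(0, 1, n)

# The '*' term expands to both main effects plus the pe_fit:support interaction.
model = smf.ols("readiness ~ pe_fit * support", data=df).fit()
print(model.summary().tables[1])
```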
