1

The Pricing Decision Process in Software-as-a-Service Companies

Wilczkowski, Susanna January 2015
This study examines various approaches used by companies providing software-as-a-service (SaaS) in a business-to-business (B2B) environment to find a pricing strategy. A good pricing strategy is vital to meeting competition in a global market. Pricing is an important part of marketing and must be congruent with the company's overall objectives. Strategic pricing is made up of different factors represented in the strategic pricing pyramid, which is based on a value-based approach. It is paramount to know your customers and their preferences when designing a pricing strategy and selecting pricing models, price metrics, market segmentation, bundling, and price levels. After estimating how much value a product or service creates for a customer, this value must be communicated to potential customers in order to convince them to purchase the offering. Choosing the right pricing strategy is not a one-time occurrence but an ongoing process. In this qualitative study, three case studies are performed to tie theory to real-world practice.
2

Essays In Effects of Market Power

Burya, Anastasia January 2023
My dissertation within macroeconomics puts special emphasis on uncovering the effects of market power within product and labor markets. I conduct these studies using novel empirical techniques and detailed granular data sets at the firm and household levels.

In the first chapter, coauthored with Shruti Mishra, we consider how firms' price-setting decisions are affected by the properties of their markup. We start by designing a general oligopoly framework that accounts for firm heterogeneity, firm granularity, and the effects of market share distribution. We use this structural model to decompose the effect of price on the quantity demanded into a direct price effect and an indirect effect coming from the impact of market-level aggregates, such as the market-level price. This decomposition allows us to handle all the degrees of heterogeneity in a flexible manner. Under plausible assumptions, the most crucial of which we test in the data, all the information about the distribution of shares within the market is accounted for by the variation of the market aggregates. Under these conditions, we can estimate structural parameters that do not depend on the distribution of shares within the market. We use the model to inform our empirical strategy and apply it to the ACNielsen Retail Scanner Data. We test the assumptions put forward by the theory, estimate structural parameters, and then use the decomposition formulas to calculate the elasticity of the firm's demand and other parameters important for markup variation. We find that elasticity depends sharply on the firm's market share and decreases significantly as market shares increase. There is a positive dependence of demand elasticities on relative prices (superelasticity), in line with Marshall's second law of demand. Additionally, elasticity depends on the level of competitiveness within the market: even if a firm's market share stays the same, its elasticity decreases if the market becomes less competitive. Lastly, we apply our estimates to calculate the optimal pass-through of marginal costs into prices and strategic complementarity. We find that an individual firm's pass-through is contained between zero and one but depends sharply on the firm's market share. We find that strategic complementarity between two firms depends on both of their shares and is not symmetric, so the degree of strategic complementarity between a small and a large firm, between two small firms, between two large firms, or between a large and a small firm would all be different. We then assess the non-linear effects of a marginal cost shock on the price and find that pass-through depends positively on the size of the marginal cost shock. This means that the total effect of a marginal cost shock on prices is non-linear and that firm prices are more responsive to marginal cost increases than to marginal cost decreases. For market leaders, the pass-through of a large negative marginal cost shock would be close to zero, while the pass-through of a large positive marginal cost shock would approach that of small firms.

In the second chapter, coauthored with Rui Mano, Yannick Timmer, and Anke Weber, we study the effect of firm granularity in the labor market on hiring decisions. We argue that the prevalence of firms controlling large vacancy shares plays an important role in the transmission of monetary policy to labor demand and wage growth and can partially explain the flattening of the wage Phillips curve after the Global Financial Crisis. Accommodative monetary policy raises the marginal product of labor, incentivizing all firms to hire more. However, since the wage elasticity of labor demand is lower for high-vacancy-share firms, they can hire more workers without raising wages disproportionately. We study this effect in the Burning Glass Technologies vacancy microdata and, consistent with this mechanism, show that accommodative monetary policy increases labor demand more for high-vacancy-share firms and that this comes without a disproportionate response in wages. In aggregate, this implies that, due to the presence of firms controlling large vacancy shares, accommodative monetary policy can lead to a decline in the unemployment rate that is decoupled from an increase in wage growth. Quantitatively, a firm at the 50th percentile of the vacancy share distribution increases its labor demand by ≈ 7% in response to a 10 basis point surprise monetary loosening, while a firm at the 95th percentile increases labor demand by ≈ 9%. Moreover, the effect of monetary policy shocks on firms with high vacancy shares is much more persistent, with effects economically large and statistically significant for at least eight quarters. At the same time, there is no comparable differential response of wages, so even though firms with high vacancy shares hire more, they do not have to increase their wages by more. In this case, more hiring does not result in a comparable increase in wage inflation. This channel can partly explain the flattening of the wage Phillips curve and the "wage-less" recovery after the Global Financial Crisis.

In the third and last chapter, coauthored with Shruti Mishra, we study the impact of wealth heterogeneity on labor supply decisions. In the standard model, a positive wealth effect decreases the willingness to supply labor. In a macroeconomic setting, this means that the direction and magnitude of the wealth effect determine whether people search for jobs more actively after a monetary intervention. For example, if unemployed consumers are indebted, they experience a negative wealth effect after a monetary contraction, search for jobs more actively, and increase their probability of finding a job, thereby reducing the total unemployment response. The sign and magnitude of the overall effect of monetary policy on unemployment will therefore depend on whether unemployed consumers are indebted and on the magnitude of their debt. To study this mechanism, we develop a theoretical framework with heterogeneous consumers and employment search effort and then decompose the effect of a monetary policy shock on aggregate unemployment. We test the predictions of the model in both micro and aggregate data. In the aggregate, we estimate the coefficient on the interaction term between the debt-to-income ratio and the Romer and Romer monetary policy shock. For the microdata, we run a similar regression with unemployment and mortgage variables for individual consumers from the PSID panel dataset. Consistent with the proposed mechanism, we find that the intuitive negative effect of a monetary contraction on employment is virtually non-existent or even reversed for indebted consumers.

The three chapters together paint a complex picture of the impact of market power on macroeconomic variables. First, product market power affects firms' price-setting decisions and the dynamics of prices and inflation, effectively leading less concentrated economies to behave as if they had more flexible prices. Second, firms that control a large share of vacancies in their labor market hire differently from their smaller counterparts, leading to more quantity expansion. Lastly, labor markets exhibit complex supply dynamics as well, with labor supply potentially intensifying during recessions, which might make the bargaining power of firms countercyclical. All these effects hold first-order significance for macroeconomic dynamics and influence our ability to forecast the future or assess the effects of monetary policy.
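The aggregate test described in the third chapter can be illustrated with a simple interaction regression. Below is a minimal sketch in Python, with hypothetical variable and file names (`unemp_change`, `rr_shock`, `dti`, `aggregate_panel.csv`); the actual specification, controls, and data construction are those of the dissertation, not this sketch.

```python
# Minimal sketch of the chapter-3 aggregate test: does the response of
# unemployment to a Romer-Romer monetary policy shock depend on household
# indebtedness? All variable names here are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aggregate_panel.csv")  # hypothetical input file

# The coefficient on rr_shock:dti captures how the unemployment response
# to the monetary shock varies with the debt-to-income ratio.
model = smf.ols("unemp_change ~ rr_shock * dti", data=df).fit(
    cov_type="HAC", cov_kwds={"maxlags": 4}  # Newey-West standard errors
)
print(model.summary())
```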
3

Integrating Machine Learning and Optimization for Problems in Contextual Decision-Making and Dynamic Learning

Zhao, Yunfan January 2023
In this thesis, we study the intersection of optimization and machine learning, especially how to use machine learning and optimization tools to make decisions. In Chapter 1, we propose a novel approach for accurate policy evaluation in personalized pricing. We solve an optimization problem to evaluate new pricing strategies while searching over worst-case revenue functions. In Chapter 2, we consider problems where parameters are predicted using a machine learning model and then used for downstream optimization tasks. Recent works have proposed an integrated approach that accounts for how predictions are used in the downstream optimization problem, instead of just minimizing prediction error. We analyze the asymptotic performance of methods under the integrated and traditional approaches, in the sense of first-order stochastic dominance. We argue that when the model class is rich enough to cover the ground truth, the traditional predict-then-optimize approach outperforms the integrated approach, and the performance ordering between the two approaches is reversed when the model is misspecified. In Chapter 3, we present a new class of architectures for reinforcement learning, Implicit Two-Tower (ITT) policies, in which actions are chosen based on the attention scores of their learnable latent representations with those of the input states. We show that ITT architectures are particularly suited for evolutionary optimization and that the corresponding policy training algorithms outperform their vanilla unstructured implicit counterparts as well as commonly used explicit policies. In Chapter 4, we consider an active learning problem in which the learner can sequentially select unlabeled samples for labeling. A typical active learning algorithm samples more points in "difficult" regions of the feature space to use the sampling budget more efficiently and reduce excess risk. For nonparametric classification with smooth regression functions, we show that nuances in notions of margin involving the uniqueness of the Bayes classifier, which have no apparent effect on rates in passive learning, determine whether any active learner can outperform passive learning rates.
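The ITT idea in Chapter 3 can be sketched briefly: states and actions are embedded by separate towers, and the policy scores each action by the inner product of its latent representation with that of the state. A minimal NumPy sketch, with all dimensions, names, and the linear towers invented for illustration (the thesis trains learnable networks with evolutionary optimization):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8-dim states, 16 discrete actions, 32-dim latents.
STATE_DIM, N_ACTIONS, LATENT_DIM = 8, 16, 32

# Two "towers" as simple linear maps, standing in for learnable networks.
W_state = rng.normal(size=(LATENT_DIM, STATE_DIM))
action_embeddings = rng.normal(size=(N_ACTIONS, LATENT_DIM))

def itt_policy(state: np.ndarray) -> int:
    """Pick the action whose latent embedding best matches the state's."""
    z_state = W_state @ state              # state tower
    scores = action_embeddings @ z_state   # attention-style scores
    return int(np.argmax(scores))          # greedy action choice

action = itt_policy(rng.normal(size=STATE_DIM))
```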
4

Beyond Worst-Case Analysis for Sequential Decision Making

Perivier, Noemie January 2023
Traditionally, algorithms have been evaluated through worst-case analysis, where the input is presumed to take its worst possible configuration. However, in many real-world settings, the data is not adversarially constructed and, on the contrary, exhibits recognizable patterns. This often makes worst-case guarantees poor indicators of an algorithm's performance. To overcome this limitation, a growing body of work on Beyond Worst-Case analysis has recently emerged. In this thesis, we are concerned with sequential decision-making problems, where an agent must take successive decisions over multiple time steps without knowing the forthcoming input in advance. Examples of such settings include ride-sharing, online retail, and job scheduling. Motivated by the unprecedented surge of data in these domains, which may help to overcome worst-case barriers by allowing the decision-maker to at least partially predict the future, we explore three distinct frameworks for Beyond Worst-Case analysis of sequential decision-making: (i) semi-random models, (ii) parametric models, and (iii) algorithms with predictions. While all three pursue the same objective of using previously collected data to provide stronger theoretical guarantees, they differ mainly in how the data is utilized. We examine each of them separately and present novel results for five different online optimization problems: minimum cost matching, assortment optimization (with and without inventory constraints), pricing, and scheduling.
5

Simplifying Revenue Management

Sheth, Harsh Tarak January 2024
In this thesis, we study three revenue management problems and propose simple algorithms with provable guarantees. While online marketplaces provide retailers with tremendous flexibility, they are often large, noisy, involve multiple stakeholders, and can be challenging to characterize. These complexities give rise to a preference for simple, interpretable policies. Further, traditional marketplaces such as brick-and-mortar stores cannot always leverage tools designed for online environments, due to physical constraints, higher latency, etc. With these motivations in mind, we develop algorithms for assortment optimization and pricing that are easy to implement in practice and have theoretical justification for their performance. In Chapter 1, we consider a dynamic assortment optimization problem where the seller has a fixed inventory of multiple substitutable products to sell over a fixed time horizon. We consider two modifications to the traditional problem. First, we simplify assortment planning by restricting assortment changes to "product retirements": when a product is retired, it becomes unavailable to all future customers. Second, we assume the seller has flexibility regarding which customers to approach. In each period, the seller chooses which subset of products to retire and selects a customer to visit. The selected customer then receives an option to purchase one of the available products, i.e., non-retired products with positive remaining inventory. We provide two policies for this problem. Our first policy guarantees a constant fraction of the best possible revenue. Our second policy is near-optimal but requires the problem to have a specific structure. In Chapter 2, we study the fundamental joint pricing and inventory management problem. The optimal policy for the model we consider is known to be an (s, S, p) policy: when the inventory level drops to s units, the seller immediately places an order to replenish the inventory to S units, and the pricing policy p charges a different price in every inventory state. We propose simple policies requiring no more than three prices and prove that they are near-optimal compared to optimal policies, which require more prices and are less robust. In particular, when orders cannot be backlogged, we show that a single price is sufficient for good performance. In Chapter 3, we analyze assortment optimization and pricing with opaque products. An opaque product is one for which only partial information is available to the buyer at the time of purchase. When a customer selects the opaque product, the seller can fulfill the purchase using any of the offered products. Opaque products can help sellers boost total sales. We propose simple policies for assortment optimization with provable constant-factor guarantees, which are near-optimal in numerical experiments. We also provide upper bounds on the advantage of selling opaque products.
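A minimal sketch of the (s, S, p) structure described in Chapter 2, with all numbers invented; the thesis's contribution is showing that a small menu of prices (at most three, or a single price without backlogging) already performs near-optimally:

```python
# One period of an (s, S, p) policy with hypothetical parameters.
# When inventory falls to s or below, reorder up to S; the price charged
# depends on the current inventory level (here via a tiny price menu).

s, S = 20, 100  # hypothetical reorder point and order-up-to level

def price_for(inventory: int) -> float:
    """A three-price menu keyed to inventory level (illustrative only)."""
    if inventory > 70:
        return 9.0   # high stock: price low to move units
    if inventory > 35:
        return 10.0
    return 11.0      # scarce stock: price high

def step(inventory: int, demand: int) -> tuple[int, float]:
    """Set the price, serve demand, and replenish if at or below s."""
    p = price_for(inventory)
    inventory = max(inventory - demand, 0)  # no backlogging in this sketch
    if inventory <= s:
        inventory = S                       # replenish up to S
    return inventory, p

inv, price = step(inventory=40, demand=12)
```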
6

Analysis of the Benefits of Resource Flexibility, Considering Different Flexibility Structures

Hong, Seong-Jong 28 May 2004
We study the benefits of resource flexibility, considering two different flexibility structures. First, we want to understand the impact of the firm's pricing strategy on its resource investment decision, considering a partially flexible resource. Second, we study the benefits of a delayed decision-making strategy, considering a resource flexibility structure that has not been studied in the previous literature.

First, we study the capacity investment decision faced by a firm that offers two products/services and is a price-setter for both. The products offered by the firm are of varying levels (complexities), such that the resources that can be used to produce the higher-level product can also be used to produce the lower-level one. Although the firm must make its capacity investment decision under high demand uncertainty, it can utilize this limited (downward) resource flexibility, in addition to pricing, to match its supply with demand more effectively. Sample applications include a service company whose technicians are of different capabilities, such that a higher-level technician can perform all tasks performed by a lower-level technician, and a firm that owns a main plant, satisfying both end-product and intermediate-product demand, and a subsidiary, satisfying intermediate-product demand only. We formulate this decision problem as a two-stage stochastic programming problem with recourse and characterize the structural properties of the firm's optimal resource investment strategy when resource flexibility and pricing flexibility are considered in the investment decision. We show that the firm's optimal resource investment strategy follows a threshold policy. This structure allows us to understand the impact of coordinated decision-making on the firm's optimal investment strategy when resource flexibility is taken into account, and to establish the conditions under which the firm invests in the flexible resource. We also study the impact of demand correlation on the firm's optimal resource investment strategy and show that it may be optimal for the firm to invest in both flexible and dedicated resources even when product demand patterns are perfectly positively correlated. Our results offer managerial principles and insights on the firm's optimal resource investment strategy and extend the newsvendor problem with pricing by allowing for multiple resources (suppliers), multiple products, and resource pooling.

Second, we study the benefits of a delayed decision-making strategy under demand uncertainty, considering a system that satisfies two demand streams with two capacitated and flexible resources. Resource flexibility allows the firm to delay its resource allocation decision to a time when partial information on demands has been obtained and demand uncertainty is reduced. We characterize the structure of the firm's optimal delayed resource allocation strategy. This characterization allows us to study how the revenue benefits of the delayed resource allocation strategy depend on demand and capacity parameters and on the length of the selling season. Our study shows that the revenue benefits of this strategy can be significant, especially when the demand rates of the two streams are close while resource capacities differ substantially. Based on our analysis, we provide guidelines on the utilization of such strategies.

Finally, we incorporate uncertainty in demand parameters into our models and study the effectiveness of several delayed capacity allocation mechanisms that utilize resource flexibility. In particular, we consider that demand forecasts are uncertain at the start of the selling season and are updated in a Bayesian framework as early demand figures are observed. We propose several heuristic capacity allocation policies that are easy to implement, as well as a heuristic procedure that relies on a stochastic dynamic programming formulation, and perform a numerical study. Our study determines the conditions under which each policy is effective.
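The Bayesian forecast-updating step can be sketched with a standard conjugate model. A minimal example, assuming Poisson demand with a Gamma prior on the rate; the specific distributions and update scheme used in the thesis may differ:

```python
# Gamma-Poisson updating of an uncertain demand rate as early-season
# sales are observed. With a Poisson likelihood, the Gamma(alpha, beta)
# prior on the rate is conjugate, so the posterior is in closed form.

alpha, beta = 4.0, 2.0   # hypothetical prior: mean rate alpha/beta = 2.0/period

early_sales = [3, 1, 4]  # observed demand in the first periods

for d in early_sales:
    alpha += d           # posterior: Gamma(alpha + sum of demands, beta + n)
    beta += 1.0

posterior_mean_rate = alpha / beta  # updated forecast used for allocation
print(f"updated demand-rate estimate: {posterior_mean_rate:.2f}/period")
```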
7

Data-driven Decision-making: New Insights on Algorithm Performance and Data Value

Mouchtaki, Omar January 2024
With the rise of data-driven algorithms, both industrial practitioners and academics have aimed to understand how one can use past information to make better future decisions. This question is particularly challenging, as any answer necessarily depends on several parameters, such as the features of the data used (e.g., the quantity and relevance of data), the downstream problem being solved, and the type of algorithms deployed to leverage the data. Most of the current literature analyzes the value of data by anchoring its methods in the large-data regime, making the implicit assumption that data is widely available in practice. In this work, we depart from this implicit assumption and posit that, in fact, relevant data is a scarce resource in many practical settings. For instance, data is usually aggregated across different times, product categories, and geographies, and therefore the effective size of datasets is orders of magnitude lower than it may appear to be. The goal of this thesis is to bridge the gap between the theoretical understanding of data-driven decisions and practical performance by developing a problem-centric theory of data-driven decision-making, in which we assess the value of data by quantifying its impact on downstream decisions. In particular, we design methodological tools tailored to the problem at hand and derive fine-grained, problem-specific guarantees for algorithms.

In the first chapter, we study the data-driven newsvendor problem under the modeling assumption that data is independently and identically distributed. We are interested in analyzing central policies in the literature, such as Sample Average Approximation (SAA), along with optimal ones, and in characterizing the performance achievable across data sizes, both small and large. Specifically, we characterize the performance of SAA exactly and uncover novel fundamental insights on the value of data. Our analysis reveals that tens of samples are sufficient to perform very efficiently, but also that more data can lead to worse out-of-sample performance for SAA. In turn, we derive an algorithm that is optimal in the minimax sense, enhancing decision quality with limited data.

The second chapter explores the impact of data relevance on decision quality, addressing the challenge of using historical data from varying sources that may not be fully indicative of the future. We quantify the performance of SAA in these heterogeneous environments and design rate-optimal policies for settings where SAA falters. We illustrate the versatility of our framework by analyzing several prototypical problems across various fields: the newsvendor, pricing, and ski rental problems. Our analysis shows that the type of achievable asymptotic performance varies significantly across different problem classes and heterogeneity notions.

Finally, the third chapter develops a framework for contextual decision-making, examining how the relevance and quantity of past data affect policy performance. Focusing on the contextual newsvendor problem, we analyze the wide class of Weighted Empirical Risk Minimization (WERM) policies, which weight past data according to its relevance. This class includes the SAA policy (also referred to as ERM), k-Nearest Neighbors, and kernel-based methods. While past literature focuses on upper bounds via concentration inequalities, we instead take an optimization approach and isolate a structure in the newsvendor loss function that allows us to reduce the infinite-dimensional optimization problem over worst-case distributions to a simple line search. In addition to this methodological contribution, our exact analysis offers new granular insights into the learning curves of algorithms in contextual settings. Through these contributions, the thesis advances our understanding of data-driven decision-making, offering both theoretical foundations and practical insights for diverse operational applications.
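For the newsvendor, the SAA policy analyzed in the first chapter has a well-known closed form: it orders the empirical quantile of past demand at the critical ratio. A minimal sketch with invented costs and demand samples:

```python
import numpy as np

# Sample Average Approximation for the newsvendor: order the empirical
# demand quantile at the critical ratio cu / (cu + co), where cu is the
# per-unit underage cost and co the per-unit overage cost.

cu, co = 3.0, 1.0                    # hypothetical unit costs
critical_ratio = cu / (cu + co)      # here 0.75

demand_samples = np.array([42, 35, 51, 38, 47, 40, 44, 36])

q_saa = np.quantile(demand_samples, critical_ratio, method="inverted_cdf")
print(f"SAA order quantity: {q_saa}")
```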
8

Dynamic demand modelling and pricing decision support systems for petroleum

Fox, David January 2014
Pricing decision support systems have been developed to help retail companies optimise the prices they set when selling their goods and services. This research aims to enhance the essential forecasting and optimisation techniques that underlie these systems. First, the method of Dynamic Linear Models is applied to provide sales forecasts of higher accuracy than current methods. Second, the method of Support Vector Regression is used to forecast future competitor prices. This new technique aims to produce forecasts of greater accuracy than the assumption currently used in pricing decision support systems that each competitor's price will simply remain unchanged. Third, for the case where competitor prices are not forecasted, a new pricing optimisation technique is presented which provides the highest guaranteed profit. Existing pricing decision support systems optimise price assuming that competitor prices will remain unchanged, but this optimisation cannot be trusted since competitor prices are never actually forecasted. Finally, when competitor prices are forecasted, an exhaustive search of a game tree is presented as a new way to optimise a retailer's price. This optimisation incorporates future competitor price moves, something which is vital when analysing the success of a pricing strategy but is absent from current pricing decision support systems. Each approach is applied to the forecasting and optimisation of daily retail vehicle fuel pricing using real commercial data, showing improved results in each case.
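As a flavour of the competitor-price forecasting step, here is a minimal scikit-learn sketch that regresses tomorrow's price on the last three daily prices; the lag-3 feature construction, hyperparameters, and data are invented for illustration and are not the thesis's specification:

```python
import numpy as np
from sklearn.svm import SVR

# Forecast the next competitor price from the last three daily prices.
prices = np.array([139.9, 140.1, 139.7, 139.5, 139.9, 140.3, 140.1, 139.8])

LAGS = 3  # hypothetical choice of lag window
X = np.array([prices[i : i + LAGS] for i in range(len(prices) - LAGS)])
y = prices[LAGS:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
next_price = model.predict(prices[-LAGS:].reshape(1, -1))[0]
print(f"forecast competitor price: {next_price:.2f}")
```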
9

Essays in Macroeconomics

Duarte Mascarenhas, Rui January 2023
This dissertation consists of three chapters, each containing a distinct research paper in the field of macroeconomics. In the first chapter, I estimate the impact of mutual fund flows on corporate bond prices, issuance, and firm investment. I leverage variation caused by the COVID-19-induced financial panic of March 2020 and find that safer firms suffered a larger impact in the component of bond spreads that does not compensate for expected default risk. However, I do not detect impacts of fund flows on issuance or investment. A simple model predicts liquidation decisions and price responses as being driven by demand and liquidation elasticities, which depend on the characteristics of the bond return processes. In the second chapter, we ask: what is the importance of firm and bank credit factors in determining investment responses to monetary policy? We decompose variation in corporate loan growth rates into purely firm-level and bank-level variation. The estimated factors are correlated with a set of variables that proxy for the firm's and bank's financial health. Firms with a higher borrowing factor experience relatively larger investment responses to an unexpected interest rate shock; the effect is muted when the shock is the revelation of central bank information. The bank factor does not induce similar heterogeneity in investment responses. In the third chapter, we ask: what is the nature of optimal monetary policy and central bank disclosure when the monetary authority is uncertain about the economic state? We consider a model in which firms make nominal pricing decisions and the central bank sets the nominal interest rate under incomplete information. We find that implementing flexible-price allocations is both feasible and optimal despite the existence of numerous measurability constraints, and we explore a series of different implementations. When monetary policy is sub-optimal, public information disclosure by the central bank is welfare-improving as long as either firm or central bank information is sufficiently precise.
