571

Performance optimization strategies for discrete event and hybrid systems

Pepyne, David Lawrence 01 January 1999
This work considers the performance optimization of systems involving discrete entities competing for the services provided by a limited number of resources. Such systems are found in transportation, computer and communication networks, and manufacturing, and are typically modeled as discrete event dynamic systems (DEDS), or, more generally, as hybrid dynamical systems (HDS). For DEDS two new performance optimization strategies are proposed. For those whose performance is a function of a controllable parameter vector, an on-line adaptive control scheme is developed. The scheme, inspired by classical gain scheduling techniques for nonlinear systems, centers around a lookup table containing the best parameter values to use for different operating conditions. By estimating the instantaneous operating condition and performing a table lookup, the controller is able to adapt to changing operating conditions. The scheme can be used open-loop with a fixed lookup table, or the table can be constructed dynamically for a closed-loop controller. Another DEDS optimization strategy involves the use of calculus of variations (CV) techniques. Except for one other work, CV techniques have not previously been applied to DEDS. Difficulties associated with the nonsmooth ‘max’ and ‘min’ functions which commonly appear in event-driven dynamics, integer state variables, and uncertainty are addressed. While the first two difficulties can be overcome at least for some problems, CV techniques cannot optimally solve problems involving uncertainty. Nevertheless, the approach supplies insights useful for developing controllers that are robust with respect to the uncertainty. For HDS a new framework combining event-driven dynamics with time-driven dynamics is proposed.
The framework, which uses “max-plus” recursive equations to describe the event-driven dynamics and differential equations to describe the time-driven dynamics, appears useful for modeling many manufacturing systems such as those in steelmaking, food processing, and pharmaceuticals. Optimal control problems trading off demands on the event-driven states against demands on the time-driven states are formulated, and simple examples are analyzed using variational techniques. Since the problems generally cannot be solved in closed form, structural properties of optimal solutions are derived and used to develop quick and efficient numerical algorithms.
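The event-driven half of such a framework can be illustrated with a minimal sketch: a max-plus recursion for departure times in a single-server queue (the queueing example and names are illustrative assumptions, not taken from the thesis):

```python
# Illustrative sketch: event-driven dynamics written as a "max-plus"
# recursion, in the spirit of the framework described above.

def departure_times(arrivals, service_times):
    """d[k] = max(a[k], d[k-1]) + s[k]: the k-th job departs after it
    has both arrived and waited for the previous job to finish."""
    d = []
    prev = 0.0
    for a, s in zip(arrivals, service_times):
        prev = max(a, prev) + s   # max-plus update of the event-driven state
        d.append(prev)
    return d

# Three jobs arriving at t = 0, 1, 5, each needing 2 time units of service.
print(departure_times([0.0, 1.0, 5.0], [2.0, 2.0, 2.0]))  # [2.0, 4.0, 7.0]
```

In a hybrid model, each event time produced by this recursion would also reset or switch a set of differential equations governing the time-driven states.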
572

Productivity and comparative analysis in service and manufacturing operations

Chen, Yao 01 January 2000
Productivity assessment has received increased attention over the past several years. At the same time, the focus has moved from single-factor productivity measures, or attempts to characterize performance in terms of simple ratios, to a multi-factor construct. In this dissertation, I propose a new framework of productivity assessment—using notions of relative, absolute, and comparative performance evaluation—via Data Envelopment Analysis (DEA)-based Malmquist indexes. Performance evaluation of member units of a set, whether companies or other organizational units, with respect to a technology that is determined by the entire set may be termed relative performance evaluation. Alternatively, performance evaluation of units of such a set against a different technology may be termed absolute performance evaluation. Traditional DEA evaluates the relative efficiency of such a set of units with respect to the frontier determined by the entire set, and can be used to determine relative performance change. Absolute performance evaluation can be in the context of a different technology, whether determined by a different industry or technology or determined by a different period of time. The latter context occurs in Malmquist productivity index calculations. The former context has not been considered in the open literature. This context defines what is termed comparative performance evaluation, namely, evaluating the performance of a set of units or observed behaviors with respect to the frontier (or best practice) of another set of units with a different technology, held to be exemplary or to comprise a benchmark set. To operationalize absolute and comparative performance evaluation, and to allow calculation of DEA-based Malmquist productivity indexes, an extension to DEA models is developed in this dissertation.
The extension, namely the Benchmark DEA model and score, is applied in illustrative empirical studies of the productivity of service and manufacturing industries. The aggregated nature of the constituent calculations of the Malmquist index obscures sources and patterns of productivity change. We introduce a process for the analysis of the components of the Malmquist index, which reveals such patterns and presents a new interpretation along with the managerial implication of each component. The approach that is developed can identify strategy shifts of individual companies in a particular time period. Furthermore, we are able to make judgments on whether or not such strategy shifts are favorable and promising. The developed productivity measurement approach is applied to address productivity trends in the computer and automobile industries using Global Fortune 500 data for the period 1991–1997.
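The Malmquist index computation mentioned above can be sketched from its distance-function building blocks. The decomposition below into efficiency ("catch-up") change and technical (frontier-shift) change is the standard geometric-mean form; the function name and sample values are illustrative assumptions, not the dissertation's Benchmark DEA model:

```python
import math

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Malmquist index from four distance-function values, where
    d_a_b is the distance of period-b data against the period-a frontier.
    Returns (index, efficiency change, technical change)."""
    ec = d_t1_t1 / d_t_t                                    # catch-up to the frontier
    tc = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))   # shift of the frontier itself
    return ec * tc, ec, tc

# Illustrative values: the unit improves relative to its own frontier
# while the frontier also moves.
m, ec, tc = malmquist(0.8, 1.1, 0.7, 0.9)
print(round(ec, 3), round(tc, 3), round(m, 3))
```

A value of the index above 1 indicates productivity growth between the two periods; the two factors show whether that growth came from catching up to the frontier or from the frontier advancing.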
573

Statics and dynamics of complex network systems: Supply chain analysis and financial networks with intermediation

Ke, Ke 01 January 2004
In this dissertation, I considered various novel extensions of network equilibrium problems in both static and dynamic settings from modeling, qualitative analysis, and computational perspectives. For each problem, I identified the network structure, described the behavior of the network agents involved, presented the formulations, derived the equilibrium conditions, established the qualitative properties, and proposed the appropriate algorithm for computations. A variety of numerical examples were provided for illustration of the models presented. First, I proposed a multilevel network perspective for the conceptualization of the dynamics underlying supply chains in the presence of competition. Rather than being formulated over a single network, as was done by Nagurney, Dong, and Zhang (2002) and Nagurney et al. (2002), who proposed static models of supply chain networks under competition, the multilevel network consisted of: the logistical network, the informational network, and the financial network. The network agents, in turn, consisted of the manufacturers, the retailers, and the consumers located at the demand markets. Next, I studied financial network equilibrium problems with intermediation in a static setting and described the disequilibrium dynamics as well. I considered an economy consisting of three types of agents: those with sources of funds, intermediaries, and the consumers located at demand markets corresponding to the uses of funds. Subsequently, I generalized the modeling framework to incorporate the impact of electronic transactions on the financial networks with intermediation. The modeling framework captured both competition and cooperation, and included transaction costs, which brought a greater degree of realism to the study of financial intermediation.
In order to capture the influence of the decision-makers' risk attitudes upon the financial network equilibrium, I further developed a value function approach in which the risk for each decision-maker was penalized by a variable weight. The models and computational methods were based on the methodologies of variational inequality theory for the study of the statics (cf. Nagurney (1999)) and projected dynamical systems for the dynamics (cf. Nagurney and Zhang (1996)). I concluded this dissertation with a summary of the modeling framework developed and provided suggestions for possible future extensions.
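The projected dynamical systems methodology cited above (cf. Nagurney and Zhang (1996)) can be sketched with a simple Euler iteration: project a gradient-like step back onto the feasible set until a stationary point, which coincides with a variational inequality solution, is reached. The toy cost map, step size, and feasible set (the nonnegative orthant) below are illustrative assumptions, not the dissertation's financial network model:

```python
# Illustrative Euler scheme for a projected dynamical system whose
# stationary points solve the variational inequality F(x*)'(x - x*) >= 0
# for all x in K, here with K the nonnegative orthant.

def project_nonneg(x):
    return [max(v, 0.0) for v in x]

def solve_vi(F, x0, step=0.1, tol=1e-8, max_iter=10000):
    x = list(x0)
    for _ in range(max_iter):
        x_new = project_nonneg([xi - step * fi for xi, fi in zip(x, F(x))])
        if max(abs(a - b) for a, b in zip(x, x_new)) < tol:
            return x_new
        x = x_new
    return x

# Toy symmetric market: F(x) = (x1 + 0.5*x2 - 2, x2 + 0.5*x1 - 2);
# the equilibrium is x1 = x2 = 4/3.
F = lambda x: [x[0] + 0.5 * x[1] - 2.0, x[1] + 0.5 * x[0] - 2.0]
x_star = solve_vi(F, [0.0, 0.0])
print(x_star)
```

Because the toy map is strongly monotone, the iteration converges for a small enough step; monotonicity conditions of this kind are what the qualitative analysis in such models typically establishes.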
574

Topics in univariate time series analysis with business applications

Khachatryan, Davit 01 January 2010
Recent technological advances in sensor and computer technology allow the observation of business and industrial processes at fairly high frequencies. For example, data used for monitoring critical parameters of industrial furnaces, conveyor belts, or chemical processes may be sampled every minute or second. A high sampling rate is also possible in business-related processes such as mail order distribution, fast food restaurant operations, and electronic commerce. Data obtained from frequently monitored business processes are likely to be autocorrelated time series that may or may not be stationary. If left alone, processes will typically not be stable, and hence they will usually not possess a fixed mean, thus exhibiting homogeneous non-stationarity. For monitoring, control, and forecasting of such potentially non-stationary processes it is often important to develop an understanding of their dynamic properties. However, it is sometimes difficult if not impossible to conduct deliberate experiments on full-scale industrial plants or business processes to gain the necessary insight into their dynamic properties. Fortunately, intentional or inadvertent process changes that occur in the course of normal operation sometimes offer an opportunity to identify and estimate aspects of the dynamic behavior. To determine if a time series is stationary, the standard exploratory data analytic approach is to check that the sample autocorrelation function (ACF) fades out relatively quickly. An alternative, and at times sounder, approach is to use the variogram – an exploratory tool widely used in spatial (geo)statistics for the investigation of spatial correlation of data. The first objective of this dissertation is to derive the basic properties of the variogram and to review the literature on confidence intervals for the variogram.
We then show how to use the multivariate Delta method to derive asymptotic confidence intervals for the variogram that are both practical and computationally appealing. The second objective of this dissertation is to review the theory of dynamic process modeling based on time series intervention analysis and to show how this theory can be used for an assessment of the dynamic properties of business and industrial processes. This is accompanied by a detailed example of the study of a large-scale ceramic plant that was exposed to an intentional but unplanned structural change (a quasi-experiment). The third objective of this dissertation concerns the analysis of multiple interventions. Multiple interventions occur either as a result of multiple changes made to the same process or because of a single change having non-homogeneous effects on the time series. For evaluating the effects of undertaken structural changes, it is important to assess and compare the effects, such as gains or losses, of multiple interventions. A statistical hypothesis test for comparing the effects among multiple interventions on process dynamics is developed. Further, we investigate the statistical power of the suggested test and elucidate the results with examples.
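The variogram diagnostic described above can be sketched in a few lines (illustrative code, not the dissertation's): for a stationary series the sample variogram levels off with lag, while for a homogeneously non-stationary series, such as a random walk, it keeps growing.

```python
import random

def sample_variogram(z, max_lag):
    """gamma(h) = 0.5 * average of (z[t+h] - z[t])^2 over the series."""
    gamma = {}
    for h in range(1, max_lag + 1):
        diffs = [(z[t + h] - z[t]) ** 2 for t in range(len(z) - h)]
        gamma[h] = 0.5 * sum(diffs) / len(diffs)
    return gamma

# A random walk with unit-variance increments has theoretical
# variogram gamma(h) = h/2, growing without bound.
random.seed(1)
walk = [0.0]
for _ in range(5000):
    walk.append(walk[-1] + random.gauss(0, 1))

g = sample_variogram(walk, 5)
print([round(g[h], 2) for h in range(1, 6)])  # roughly 0.5, 1.0, 1.5, 2.0, 2.5
```

The steady growth of gamma(h) here, in contrast to a plateau, is the visual signal of non-stationarity that the dissertation's confidence intervals would make formal.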
575

Enabling easy consumer access to services and products

Hasdemir, Baris 01 January 2012
The spatial dispersion of the population of a country is a function of its geography, history, and economic development. Enabling access to services and products for spatially dispersed populations is ever more pertinent in today's fiscally constrained socio-political landscape and relevant to both the public and private sectors. Consumer access is improved when larger segments of the population have access within shorter threshold distances, hence consuming less energy. A higher quality of access to centers that provide a service can be brought about by improving the infrastructure in a manner that facilitates access for larger segments of the population of a country or a region. This dissertation develops models for locating centers that address the enabling of access to centers for distance-differentiated segments of the population of a region, whether a continent, country, state, or city. A network of centers provides better access if the centers are located so as to serve maximal populations within each of multiple threshold distances. The models incorporate the consumer's higher utility for shorter distances by employing concentric discs or concentric rings of multiple distance thresholds to account for the percentages of the population that are afforded a specific quality of access. The first model, referred to as the Multiple-Concentric-Disc Location Model, uses multiple distance thresholds, modeled as concentric discs, to differentiate the access that is afforded to segments of the population. The model is applied to examining access to locations of branches of the Registry of Motor Vehicles and access to locations of Walmart stores in the Commonwealth of Massachusetts.
Further, the use of the model to reveal the relationship between the geographic dispersion of a population and its access to centers, and thereby to allow an examination of the challenges of locating centers in different regions of the world, is demonstrated via an extended application of the model to India, Africa, Europe, and the USA. Another model, referred to as the Multiple-Concentric-Ring Location Model with Utility Decay, brings visibility to the differences in distance to centers for populations within each of a set of concentric rings by aggregating all populated places that are within each concentric ring. The model incorporates the consumer's utility for distance and frequency of usage and assumes that a distance decay function captures the consumer's utility by ascribing probabilities that a consumer will access a center that is within a specified distance. The use of these models to inform location decisions in a variety of infrastructure and economic development scenarios is also demonstrated. The computational studies in this dissertation require generating instances of the location models for multiple regions of the world. To facilitate these studies, a major component of the research is the ability to generate the data required for instances of each of the models, for any region, whether a continent, a country, a state, or a city. To facilitate the solution of thousands of model instances, special-purpose software employing efficient computational algorithms and complex data structures has been developed to generate the data files, in metric or imperial units, for model instances. Visualization procedures for generating layered maps have been developed to facilitate discussion. Computational studies involving thousands of model instances, each with up to 598,488 variables and 448,526 constraints, reveal insights about possible access to services and products for a country.
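The concentric-disc coverage idea can be illustrated with a small greedy sketch. Note that the dissertation solves exact optimization model instances; the greedy heuristic, names, and weights below are assumptions for illustration only:

```python
# Illustrative sketch: each candidate center covers population within
# concentric distance thresholds, with closer discs weighted more heavily.

def coverage_value(center, towns, thresholds, weights):
    """Weighted population covered, each town counted in the innermost
    concentric disc (distance threshold) that contains it."""
    total = 0.0
    for (x, y, pop) in towns:
        d = ((x - center[0]) ** 2 + (y - center[1]) ** 2) ** 0.5
        for r, w in zip(thresholds, weights):
            if d <= r:
                total += w * pop
                break  # innermost covering disc only
    return total

def greedy_centers(candidates, towns, k, thresholds, weights):
    """Pick k centers one at a time, each maximizing remaining coverage."""
    chosen = []
    remaining = list(towns)
    for _ in range(k):
        best = max(candidates,
                   key=lambda c: coverage_value(c, remaining, thresholds, weights))
        chosen.append(best)
        # drop towns already covered within the largest threshold
        remaining = [t for t in remaining
                     if ((t[0] - best[0]) ** 2 + (t[1] - best[1]) ** 2) ** 0.5
                     > thresholds[-1]]
    return chosen
```

In the exact models, the same threshold structure appears as coverage constraints in an integer program rather than a greedy selection rule.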
576

A planning methodology for the analysis and design of wind-power systems

Dambolena, Ismael Gerardo 01 January 1974
Abstract not available
577

Steiner minimal trees in three-dimensional Euclidean space

Badri, Toppur N 01 January 2002
The difficulty of straightedge and compass solutions to the Euclidean Steiner Minimal Tree Problem for more than three vertices has been known for at least three centuries. Analytic geometry methods, in addition to these tools, use algebra and Cartesian frames of reference. In E2, optimal solutions can be achieved for 10,000 points and more. For more than ten vertices in E3 or higher dimensions, these exact formulations have proved difficult and cumbersome from the point of view of an algorithmic solution. A discrete version of the problem was shown to be NP-complete in 1977. Decomposition heuristics based on computational geometry were suggested for these larger point sets. The thesis features a literature review of the considerable research efforts on the Steiner problem in two- and three-dimensional space with the Euclidean metric. Heuristics of polynomial complexity that have proven satisfactory for large point sets are considered after the O.R. methods, which are of exponential order. The Steiner ratio ρ of a vertex set is the length of the Steiner Minimal Tree (SMT) divided by the length of the Minimum Spanning Tree (MST), and, in addition to execution time, it is a key measure of performance for algorithms and heuristics devoted to this problem. The consideration and comparison of the performance of algorithms leads to the issue of the best Steiner ratio for a particular space. The d-Sausage, an unending geometric arrangement of regular simplices, has yielded the best Steiner ratio in three- and higher-dimensional Euclidean space. A particular Full Steiner Tree topology, which we refer to as the path topology, is proven to be the optimal topology for the d-Sausage when d = 1 or 2. Other structural properties of the flat sausage and the [special characters omitted]-Sausage, as these two instances of the d-Sausage are referred to, are proved as lemmas and theorems.
This theoretical framework serves as a foundation for a heuristic for finding SMTs for very large point sets. The sausage has been shown to have a superior Steiner ratio compared to a simplex. For this reason it is the preferred primitive for a decomposition technique. Finally, the ties between the Steiner Minimal Tree Problem and the Euclidean Graph Embedding Problem are explored in the light of the Minimum Energy Configuration of molecules and Maxwell's theorem.
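The Steiner ratio defined above can be checked on the classic three-point case (an illustrative computation, not from the thesis): for the vertices of a unit equilateral triangle, the single Steiner point is the Fermat point (here the centroid), and the ratio SMT/MST equals sqrt(3)/2.

```python
import math

# Unit equilateral triangle and its Fermat (Steiner) point, the centroid.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
fermat = (0.5, math.sqrt(3) / 6)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

smt = sum(dist(p, fermat) for p in pts)  # three spokes meeting at 120 degrees
mst = 2.0                                # two unit edges span the three vertices

print(round(smt / mst, 4))  # 0.866, i.e. sqrt(3)/2
```

The famous Gilbert-Pollak value sqrt(3)/2 is attained by exactly this configuration in the plane; in three and higher dimensions the best known configurations are the sausage arrangements discussed in the abstract.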
578

An empirical examination of the relationship between competitive strategy and process technology in the tooling and machining industry

Congden, Steven Wayne 01 January 1991
A considerable segment of the business literature has espoused the importance of appropriately using process or manufacturing technology to support competitive strategy. This literature implicitly and explicitly suggests the importance of "fit" between a firm's business-level strategy and its process technology. Three gaps remain with respect to the "fit" assertion: (1) The nature of fit is insufficiently specified. (2) No empirical research has attempted to statistically validate the existence of fit within an industry. (3) No empirical research has attempted to statistically link fit to firm performance. To address these issues, this dissertation surveys firms in the U.S. tooling and machining industry to test hypotheses about the nature of fit, its existence, and its impact on performance. Strategy is assessed as membership in one of six strategic groups derived from clustering eight strategy factors. Factor analysis results in four technology factors: Dedicated Automation, Non-Dedicated Automation, Range of Capabilities, and Computer Aided Design. Performance comprises ROS (return on sales) and average annual sales growth. Findings regarding the nature of fit suggest: (1) Dedicated and non-dedicated automation relate positively to new and existing product stability. Broad product range (products very different from each other) relates negatively to dedicated automation, but does not relate to non-dedicated automation. (2) Linkages may be obscured because multiple capabilities are often bundled in a given technology, so that different strategies use the same technology for different reasons. (3) Process technology appears to relate primarily to strategic dimensions concerning physical product characteristics, and very little to service dimensions. The existence of fit is demonstrated by highly significant differences in technology between groups, combined with the qualitative plausibility with which these differences appear to correspond to each strategic group.
Although insufficient support was found for fit linked to performance (technology moderating strategic group membership's impact on performance), results suggest that performance advantage from a technology is gained not in the group where it is most appropriate or a given, but in a group where it is also appropriate, but less widespread.
579

A multivariate analysis of shares listed on the Johannesburg Stock Exchange

Visser, Francesca January 1983
This thesis examines the usefulness of multivariate statistical techniques to portfolio theory by applying two different multivariate techniques to two separate classificatory problems concerning shares listed on the Johannesburg Stock Exchange. In Chapter 1 the two techniques and two classificatory problems are introduced and their context within the general structure of portfolio theory is explained. Chapter 2 gives a theoretical overview of the first technique used, namely Factor Analysis. Chapters 3 and 4 discuss the application of factor analytic techniques to shares listed on the Johannesburg Stock Exchange. Chapter 5 gives a theoretical overview of Multiple Discriminant Analysis, the second multivariate technique used. Chapter 6 presents a survey of previous applications of Multiple Discriminant Analysis in the field of Finance, while Chapters 7 and 8 discuss the application of this technique to shares listed on the Johannesburg Stock Exchange. Finally, Chapter 9 gives a brief summary of the main conclusions of this thesis.
580

Past price and trend effects in promotion planning: from prediction to prescription

Cohen-Hillel, Tamar January 2020
Thesis: Ph.D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, September 2020. Includes bibliographical references (pages 261-268).

Sales promotions are a popular type of marketing strategy in which products are promoted using short-term price reductions to stimulate demand and increase sales. These promotions are widely used in practice by retailers. When undertaking a sales promotion, retailers must take into consideration both the direct and indirect effects of price promotions on consumers and, as a result, on demand. In this thesis, we consider the impact of two of these indirect effects on the promotion planning process. First, we consider the promotion planning problem for fast-moving consumer goods. The main challenge here is the negative indirect effect of promotions on future sales: while temporary price reductions substantially increase demand, in the periods following a temporary price reduction retailers observe a slowdown in sales.

To capture this post-promotion slowdown, we suggest a new set of past prices (namely, the last price seen as well as the minimum price seen within a limited number of past periods) as features in the demand model. We refer to demand models that use this set of past prices as Bounded Memory Peak-End models. When tested on real-world data, our suggested demand model improved estimation quality relative to a traditional estimation approach, with a relative improvement in WMAPE of approximately 1-19%. In addition to the improvement in prediction accuracy, we analyze the sensitivity of our proposed Bounded Memory Peak-End demand model to demand misspecification.
Through statistical analysis, and using principles from duality theory, we establish that even in the face of demand misspecification, the proposed Bounded Memory Peak-End model can capture the demand with provably low estimation error and with low impact on the resulting optimal pricing policy. The structure of the proposed demand model allows us to derive fast algorithms that find the optimal solution to the promotion planning problem for a single item. For the case of promotion planning for multiple items, although we show that the problem is NP-hard in the strong sense, we propose a Polynomial Time Approximation Scheme that can solve the problem efficiently. Overall, we show that using our proposed approach, the retailer can obtain an increase of 4-15.6% in profit compared to current practice.

Second, we consider the promotion targeting problem for trendy commodities, whose demand is driven, among other factors, by social trends. Examples of trendy commodities include fashion items, wearable electronics, and smartphones. To capture the demand with high accuracy, retailers must understand how the purchasing behavior of customers can impact the future purchasing behavior of other customers. Social media can be instrumental in learning how consumers impose trends on one another; unfortunately, many retailers are unable to obtain this information due to high costs and privacy issues. This has motivated us to develop a model that detects customer relationships based only on transaction data history. Incorporating the customer-to-customer trend in the demand estimation, we observe a significant improvement of 12% in the WMAPE forecasting metric. The proposed customer-to-customer trend-based demand model subsequently allows us to formulate the promotion targeting optimization problem in a way that considers the indirect effect of targeted promotions through trends.
We show that the problem of finding the personalized promotion policy that maximizes the profit function is NP-hard. Nonetheless, we introduce an adaptive greedy algorithm that is intuitive to implement and finds a provably near-optimal personalized promotion policy. We tested our approach on Oracle data and observed a 5-12% improvement in terms of profit.
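The two past-price features of the Bounded Memory Peak-End idea can be sketched directly (an illustrative sketch with assumed names, not the thesis's implementation): the "end" is the most recent price, and the "peak" is the deepest discount remembered within a bounded window.

```python
# Illustrative sketch of Bounded Memory Peak-End demand features:
# the last price seen and the minimum price seen within a limited
# number of past periods.

def peak_end_features(prices, t, memory):
    """Features for period t from the history prices[0..t-1]."""
    window = prices[max(0, t - memory):t]
    last_price = window[-1]   # "end": most recent price
    min_price = min(window)   # "peak": deepest discount in bounded memory
    return last_price, min_price

history = [5.0, 5.0, 3.0, 5.0, 5.0]  # a promotion at 3.0 in period 2
print(peak_end_features(history, 5, 4))  # (5.0, 3.0): discount still remembered
print(peak_end_features(history, 5, 2))  # (5.0, 5.0): memory too short, forgotten
```

The bounded memory is what makes the resulting promotion planning problem tractable: once the promotion leaves the window, it no longer depresses demand, so the dynamic program over price paths has limited state.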
