  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Dynamic Switching Times For Season And Single Tickets In Sports And Entertainment With Time Dependent Demand Rates

Pakyardim, Yusuf Kenan 01 August 2011 (has links) (PDF)
The most important market segmentation in the sports and entertainment industry is between customers who buy bundle (season) tickets and those who buy single tickets. A common selling practice is to start the selling season with bundle ticket sales and to switch to single-ticket sales later on. The aim of this practice is to increase the number of customers who buy bundles, to create funds before the season starts, and to increase the load factor of games with low demand. In this thesis, we investigate the effect of time-dependent demand on dynamic switching times and the potential revenue gain over the case where the demand rate is assumed to be constant over time.
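As a rough illustration of the kind of trade-off involved (not the model developed in the thesis), the sketch below evaluates revenue for a single event over a grid of candidate switching times under hypothetical time-dependent demand rates for bundle and single-ticket buyers; the prices, rates, and capacity are made-up parameters.

```python
import numpy as np

def expected_revenue(switch_t, horizon=1.0, capacity=100,
                     bundle_price=60.0, single_price=80.0):
    """Revenue for one event when bundles are sold on [0, switch_t) and single
    tickets on [switch_t, horizon], with assumed time-dependent demand rates
    (bundle interest fades over time, single-ticket interest grows)."""
    bundle_rate = lambda t: 120.0 * (1.0 - t)   # assumed bundle demand rate
    single_rate = lambda t: 150.0 * t           # assumed single-ticket demand rate
    grid = np.linspace(0.0, horizon, 1001)
    dt = grid[1] - grid[0]
    bundle_demand = sum(bundle_rate(t) * dt for t in grid if t < switch_t)
    single_demand = sum(single_rate(t) * dt for t in grid if t >= switch_t)
    bundles_sold = min(bundle_demand, capacity)
    singles_sold = min(single_demand, capacity - bundles_sold)
    return bundle_price * bundles_sold + single_price * singles_sold

# Grid search over candidate switching times for the revenue-maximizing one.
candidates = np.linspace(0.0, 1.0, 51)
best = max(candidates, key=expected_revenue)
print(f"best switching time: {best:.2f}, revenue: {expected_revenue(best):.0f}")
```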
72

Essays on monetary policy and banking regulation

Li, Jingyuan 15 November 2004 (has links)
A central bank is usually assigned two functions: the control of inflation and the maintenance of a safe banking sector. What are the precise conditions under which trigger strategies from the private sector can solve the time-inconsistency problem and induce the central bank to choose zero inflation under a nonstationary natural rate? Can an optimal contract be used together with reputation forces to implement a desired socially optimal monetary policy rule? How can a truth-telling contract be designed to control the risk-taking behavior of banks? My dissertation addresses these issues using three primary methodologies: monetary economics, game theory, and optimal stochastic control theory.
73

The Use of Landweber Algorithm in Image Reconstruction

Nikazad, Touraj January 2007 (has links)
Ill-posed sets of linear equations typically arise when discretizing certain types of integral transforms. A well-known example is image reconstruction, which can be modelled using the Radon transform. After expanding the solution into a finite series of basis functions, a large, sparse and ill-conditioned linear system arises. We consider the solution of such systems. In particular we study a new class of iteration methods named DROP (for Diagonal Relaxed Orthogonal Projections), constructed for solving both linear equations and linear inequalities. This class can also be viewed, when applied to linear equations, as a generalized Landweber iteration. The method is compared with other iteration methods using test data from a medical application and from electron microscopy. Our theoretical analysis includes convergence proofs of the fully simultaneous DROP algorithm for linear equations without consistency assumptions, and of block-iterative algorithms, both for linear equations and linear inequalities, in the consistent case.

When applying an iterative solver to an ill-posed set of linear equations, the error typically decreases at first, but after some iterations (depending on the amount of noise in the data and the degree of ill-posedness) it starts to increase. This phenomenon is called semi-convergence. It is therefore vital to find good stopping rules for the iteration.

We describe a class of stopping rules for Landweber-type iterations for solving linear inverse problems. The class includes, e.g., the well-known discrepancy principle and the monotone error rule. We also unify the error analysis of these two methods. The stopping rules depend critically on a certain parameter whose value needs to be specified. A training procedure is therefore introduced for securing robustness. The advantages of using trained rules are demonstrated on examples taken from image reconstruction from projections. / We consider the solution of the linear systems of equations that arise when discretizing inverse problems. These problems are characterized by the fact that the sought information cannot be measured directly. A well-known example is computed tomography, where one measures how much radiation passes through an object illuminated by a radiation source positioned at different angles relative to the object. The purpose is, of course, to generate images of the object's interior (in medical applications, of the interior of the body). We study a class of iterative methods for solving these systems of equations. The methods are applied to test data from image reconstruction and compared with other proposed iteration methods. We also carry out a convergence analysis for different choices of method parameters.

When using an iterative method, one starts with an initial approximation that is then gradually improved. However, inverse problems are sensitive even to relatively small errors in the measured data. This shows itself in the iterates improving at first and deteriorating later on. This phenomenon, so-called semi-convergence, is well known and well understood. It does mean, however, that it is important to construct good stopping rules: if the iteration is stopped too early the resolution is poor, and if it is stopped too late the image becomes blurred and noisy.

The thesis studies a class of stopping rules. These are analyzed theoretically and tested on measured data. In particular, a training procedure is proposed in which the stopping rule is presented with data for which the correct value of the stopping index is known. These data are used to determine an important parameter in the rule. The rule is then applied to new, unknown data. Such a trained stopping rule is shown to work well on test data from the image reconstruction field.
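A minimal sketch of the basic ingredients discussed above: plain Landweber iteration stopped by the discrepancy principle. It is not the DROP class or the trained stopping rules studied in the thesis; the test matrix, noise level, and parameter tau are illustrative assumptions.

```python
import numpy as np

def landweber(A, b, noise_level, tau=1.1, omega=None, max_iter=5000):
    """Plain Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k),
    stopped by the discrepancy principle ||A x_k - b|| <= tau * noise_level."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # convergence needs 0 < omega < 2/||A||^2
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        residual = b - A @ x
        if np.linalg.norm(residual) <= tau * noise_level:   # discrepancy principle
            break
        x = x + omega * A.T @ residual
    return x, k

# Tiny ill-conditioned example with synthetic noisy data.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 50), 8, increasing=True)    # ill-conditioned design matrix
x_true = rng.standard_normal(8)
noise = 1e-3 * rng.standard_normal(50)
b = A @ x_true + noise
x_rec, stop_index = landweber(A, b, noise_level=np.linalg.norm(noise))
print(stop_index, np.linalg.norm(x_rec - x_true))
```

Stopping too late lets the noise-dominated components re-enter the iterate, which is exactly the semi-convergence behaviour the trained stopping rules are designed to handle.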
74

Accelerated Fuzzy Clustering

Parker, Jonathon Karl 01 January 2013 (has links)
Clustering algorithms are a primary tool in data analysis, facilitating the discovery of groups and structure in unlabeled data. They are used in a wide variety of industries and applications. Despite their ubiquity, clustering algorithms have a flaw: they take an unacceptable amount of time to run as the number of data objects increases. The need to compensate for this flaw has led to the development of a large number of techniques intended to accelerate their performance. This need grows greater every day, as collections of unlabeled data grow larger and larger. How does one increase the speed of a clustering algorithm as the number of data objects increases and at the same time preserve the quality of the results? This question was studied using the Fuzzy c-means clustering algorithm as a baseline. Its performance was compared to the performance of four of its accelerated variants. Four key design principles of accelerated clustering algorithms were identified. Further study and exploration of these principles led to four new and unique contributions to the field of accelerated fuzzy clustering. The first was the identification of a statistical technique that can estimate the minimum amount of data needed to ensure a multinomial, proportional sample. This technique was adapted to work with accelerated clustering algorithms. The second was the development of a stopping criterion for incremental algorithms that minimizes the amount of data required, while maximizing quality. The third and fourth techniques were new ways of combining representative data objects. Five new accelerated algorithms were created to demonstrate the value of these contributions. One additional discovery made during the research was that the key design principles most often improve performance when applied in tandem. This discovery was applied during the creation of the new accelerated algorithms. Experiments show that the new algorithms improve speedup with minimal quality loss, are demonstrably better than related methods and occasionally are an improvement in both speedup and quality over the base algorithm.
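For orientation, the sketch below implements the standard, unaccelerated fuzzy c-means baseline mentioned above; it is not any of the accelerated variants or new contributions of the dissertation, and the synthetic data, fuzzifier m, and tolerance are assumed values.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, tol=1e-5, max_iter=300, seed=0):
    """Standard (unaccelerated) fuzzy c-means: alternate between updating cluster
    centers and fuzzy memberships until the membership matrix stops changing."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # each row is a membership vector summing to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:      # convergence check on the memberships
            U = U_new
            break
        U = U_new
    return centers, U

# Example on synthetic 2-D data with three well-separated groups.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(100, 2)) for loc in ([0, 0], [3, 0], [0, 3])])
centers, U = fuzzy_c_means(X, c=3)
print(centers.round(2))
```

Every pass over the full data set costs O(n * c * d), which is the cost the accelerated variants attack, e.g. by working on samples or on representative objects.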
75

Iterative Decoding of Codes on Graphs

Sankaranarayanan, Sundararajan January 2006 (has links)
The growing popularity of a class of linear block codes called low-density parity-check (LDPC) codes can be attributed to the low complexity of their iterative decoders and their potential to achieve performance very close to the Shannon capacity. This makes them attractive candidates for ECC applications in communication systems. This report proposes methods to systematically construct regular and irregular LDPC codes.

A class of regular LDPC codes is constructed from incidence structures in finite geometries such as projective geometry and affine geometry. A class of irregular LDPC codes is constructed by systematically splitting blocks of balanced incomplete block designs to achieve desired weight distributions. These codes are decoded iteratively using message-passing algorithms, and the performance of these codes over various channels is presented in this report.

The application of iterative decoders is generally limited to codes whose graph representations are free of small cycles. Unfortunately, the large class of conventional algebraic codes, such as RS codes, has many four-cycles in its graph representations. This report proposes an algorithm that aims to alleviate this drawback by constructing an equivalent graph representation that is free of four-cycles. It is shown theoretically that the four-cycle-free representation is better suited to iterative erasure decoding than the conventional representation. The new representation is also exploited to realize, with limited success, iterative decoding of Reed-Solomon codes over the additive white Gaussian noise channel.

Wiberg, Forney, Richardson, Koetter, and Vontobel have made significant contributions in developing theoretical frameworks that facilitate finite-length analysis of codes. With the exception of Richardson's, most of these frameworks are best suited to the analysis of short codes. In this report, we further the understanding of failures in iterative decoders for the binary symmetric channel. The failures of the decoder are classified into two categories by defining trapping sets and propagating sets. Such a classification leads to a successful estimation of the performance of codes under the Gallager B decoder. In particular, the estimation techniques show great promise in the high signal-to-noise-ratio regime, where simulation techniques are less feasible.
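As a toy illustration of hard-decision iterative decoding (a simple bit-flipping relative of the Gallager B decoder discussed above, not a code construction from the report), the sketch below corrects a single error on the (7,4) Hamming code; a real LDPC parity-check matrix would take the place of the small matrix used here.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit-flipping decoding for a binary code with parity-check
    matrix H: repeatedly flip the bits involved in the most unsatisfied parity
    checks until all checks are satisfied (or the iteration budget runs out)."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():                   # all parity checks satisfied
            return x, True
        unsat_counts = H.T @ syndrome            # per-bit count of unsatisfied checks
        worst = unsat_counts == unsat_counts.max()
        x[worst] ^= 1                            # flip the most suspicious bits
    return x, False

# (7,4) Hamming code parity-check matrix as a toy stand-in for an LDPC code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)                # the all-zero word is always a codeword
received = codeword.copy()
received[2] ^= 1                                 # single bit error on a BSC
decoded, ok = bit_flip_decode(H, received)
print(ok, decoded)
```

Short cycles such as the four-cycles mentioned above make the per-bit counts (and, in message-passing decoders, the exchanged messages) strongly correlated, which is why cycle-free or four-cycle-free representations decode better.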
76

Cost minimization under sequential testing procedures using a Bayesian approach

Snyder, Lukas 04 May 2013 (has links)
In sequential testing an observer must choose when to observe additional data points and when to stop observation and make a decision. This stopping rule is traditionally based upon probability of error as well as certain cost parameters. The proposed stopping rule will instruct the observer to cease observation once the expected cost of the next observation increases. There is often a great deal of information about what the observer should see. This information will be used to develop a prior distribution for the parameters. The proposed stopping rule will be analyzed and compared to other stopping rules. Analysis of simulated data shows under which conditions the cost of the proposed stopping rule will approximate the minimum expected cost. / Department of Mathematical Sciences
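A toy simulation in the spirit of the proposed rule: stop once one more observation is no longer expected to pay for itself. It uses a Beta prior on a Bernoulli parameter with a one-step-lookahead comparison; the cost structure, prior, and lookahead form are illustrative assumptions, not the thesis's formulation.

```python
import numpy as np
from scipy.stats import beta

def expected_decision_cost(a, b_, wrong_cost=10.0, threshold=0.5):
    """Expected cost of deciding now under a Beta(a, b_) posterior on the Bernoulli
    parameter p: decide the more probable side of 'p > threshold' and pay wrong_cost
    with the posterior probability of being wrong."""
    p_above = 1.0 - beta.cdf(threshold, a, b_)
    return wrong_cost * min(p_above, 1.0 - p_above)

def sequential_test(data, a=1.0, b_=1.0, obs_cost=0.05):
    """Observe points one at a time; stop once the expected cost of taking one more
    observation (observation cost plus expected posterior decision cost, averaged
    over the predictive distribution of the next point) exceeds deciding now."""
    n_used = 0
    for x in data:
        cost_now = expected_decision_cost(a, b_)
        p_next_one = a / (a + b_)                # posterior predictive P(next obs = 1)
        cost_continue = obs_cost + (
            p_next_one * expected_decision_cost(a + 1, b_)
            + (1 - p_next_one) * expected_decision_cost(a, b_ + 1)
        )
        if cost_continue >= cost_now:            # the next observation no longer pays off
            break
        a, b_ = a + x, b_ + (1 - x)              # Bayesian update with the new observation
        n_used += 1
    return n_used, a, b_

rng = np.random.default_rng(2)
data = rng.binomial(1, 0.7, size=200)
print(sequential_test(data))
```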
77

Study of the effect of phase on the stopping power and straggling for low-energy protons in organic gases and their polymers

Mohammadi, Ahmad January 1984 (has links)
No description available.
78

Modified iterative Runge-Kutta-type methods for nonlinear ill-posed problems

Pornsawad, Pornsarp, Böckmann, Christine January 2014 (has links)
This work is devoted to the convergence analysis of a modified Runge-Kutta-type iterative regularization method for solving nonlinear ill-posed problems under a priori and a posteriori stopping rules. Convergence-rate results for the proposed method are obtained under a Hölder-type source-wise condition, provided the Fréchet derivative is properly scaled and locally Lipschitz continuous. Numerical results are obtained using the Levenberg-Marquardt and Radau methods.
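A sketch of one of the ingredients named above, a plain Levenberg-Marquardt iteration with the discrepancy principle as an a posteriori stopping rule, applied to an invented smooth nonlinear ill-posed test problem; it is not the modified Runge-Kutta-type method analyzed in the paper, and the damping schedule and parameters are assumptions.

```python
import numpy as np

def levenberg_marquardt(F, J, y_delta, x0, noise_level, tau=1.5,
                        alpha=1.0, max_iter=100):
    """Levenberg-Marquardt iteration for F(x) = y with noisy data y_delta:
    x_{k+1} = x_k + (J^T J + alpha I)^{-1} J^T (y_delta - F(x_k)),
    stopped a posteriori once ||F(x_k) - y_delta|| <= tau * noise_level."""
    x = x0.copy()
    for k in range(max_iter):
        r = y_delta - F(x)
        if np.linalg.norm(r) <= tau * noise_level:   # discrepancy principle
            return x, k
        Jk = J(x)
        step = np.linalg.solve(Jk.T @ Jk + alpha * np.eye(x.size), Jk.T @ r)
        x = x + step
        alpha = max(0.7 * alpha, 1e-8)               # gradually relax the damping
    return x, max_iter

# Invented nonlinear ill-posed test problem: exponential of a smoothed signal.
t = np.linspace(0.0, 1.0, 20)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.02) * (t[1] - t[0])   # smoothing kernel
F = lambda x: np.exp(-(A @ x))
J = lambda x: -np.exp(-(A @ x))[:, None] * A                         # Frechet derivative of F

rng = np.random.default_rng(3)
x_true = np.sin(np.pi * t)
noise = 1e-3 * rng.standard_normal(t.size)
y_delta = F(x_true) + noise
x_rec, stop_index = levenberg_marquardt(F, J, y_delta, x0=np.zeros_like(t),
                                        noise_level=np.linalg.norm(noise))
print(stop_index, np.linalg.norm(x_rec - x_true))
```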
79

Steuern und Stoppen undiskontierter Markoffscher Entscheidungsmodelle (Control and stopping of undiscounted Markov decision models)

Fassbender, Matthias. January 1990 (has links)
Thesis (doctoral)--Rheinische Friedrich-Wilhelms-Universität Bonn, 1989. / Includes bibliographical references (p. 82-83).
80

Semiclassical, Monte Carlo model of atomic collisions : stopping and capture of heavy charged particles and exotic atom formation /

Beck, William A., January 1997 (has links)
Thesis (Ph. D.)--University of Washington, 1997. / Vita. Includes bibliographical references (leaves [112]-119).
