41

Contributions to filtering under randomly delayed observations and additive-multiplicative noise

Allahyani, Seham January 2017 (has links)
This thesis deals with the estimation of unobserved variables or states from a time series of noisy observations. Approximate minimum variance filters for a class of discrete time systems with both additive and multiplicative noise, where the measurement might be delayed randomly by one or more sample times, are investigated. Observations delayed by up to N sample times are modelled using N Bernoulli random variables taking values 0 or 1. We seek to minimize variance over a class of filters which are linear in the current measurement (although potentially nonlinear in past measurements) and present a closed-form solution. An interpretation of the multiplicative noise in both transition and measurement equations in terms of filtering under additive noise and stochastic perturbations in the parameters of the state space system is also provided. This filtering algorithm extends to the case when the system has continuous time state dynamics and discrete time state measurements. The Euler scheme is used to transform the process into a discrete time state space system in which the state dynamics have a smaller sampling time than the measurement sampling time. The number of sample times by which the observation is delayed is considered to be uncertain and a fraction of the measurement sample time. The same problem is considered for nonlinear state space models of discrete time systems, where the measurement might be delayed randomly by one sample time. The linearisation error is modelled as an additional source of noise which is multiplicative in nature. The algorithms developed are demonstrated throughout with simulated examples.
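The one-step (N = 1) delay model described in the abstract can be sketched with a short simulation: a minimal scalar linear system in which the sample actually received is the current or the previous measurement, chosen by a Bernoulli indicator. All gains and noise variances below are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar linear system with additive noise. The sample received at time k
# is either the current measurement or the previous one, selected by a
# Bernoulli indicator beta_k (1 = delayed by one sample time); this is the
# N = 1 case of the delay model.
a, c = 0.9, 1.0      # state transition and measurement gains (illustrative)
q, r = 0.1, 0.2      # process and measurement noise variances (illustrative)
p_delay = 0.3        # P(beta_k = 1), probability of a one-step delay

n = 200
x = np.zeros(n)      # true state
y = np.zeros(n)      # undelayed measurement
z = np.zeros(n)      # measurement actually received by the filter
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(0.0, np.sqrt(q))
    y[k] = c * x[k] + rng.normal(0.0, np.sqrt(r))
    beta = rng.binomial(1, p_delay)
    z[k] = (1 - beta) * y[k] + beta * y[k - 1]   # possibly stale sample
```

A minimum variance filter that is linear in the current measurement would then be run on `z`; delays of up to N sample times replace the single indicator with N Bernoulli variables.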
42

Population viability analysis for plants : practical recommendations and applications

Ramula, Satu January 2006 (has links)
Population viability analysis (PVA) is commonly used in conservation biology to predict population viability in terms of population growth rate and risk of extinction. However, large data requirements limit the use of PVA for many rare and threatened species. This thesis examines the possibility of conducting a matrix model-based PVA for plants with limited data and provides some practical recommendations for reducing the amount of work required. Moreover, the thesis applies different forms of matrix population models to species with different life histories. Matrix manipulations on 37 plant species revealed that the amount of demographic data required can often be reduced using a smaller matrix dimensionality. Given that an individual’s fitness is affected by plant density, linear matrix models are unlikely to predict population dynamics correctly. Estimates of population size of the herb Melampyrum sylvaticum were sensitive to the strength of density dependence operating at different life stages, suggesting that in addition to identifying density-dependent life stages, it is important to estimate the strength of density dependence precisely. When a small number of matrices are available for stochastic matrix population models, the precision of population estimates may depend on the stochastic method used. To optimize the precision of population estimates and the amount of calculation effort in stochastic matrix models, selection of matrices and Tuljapurkar’s approximation are preferable methods to assess population viability. Overall, these results emphasize that in a matrix model-based PVA, the selection of a stage classification and a model is essential because both factors significantly affect the amount of data required as well as the precision of population estimates. By integrating different environmental and genetic factors into population dynamics, matrix population models may be used more effectively in conservation biology and ecology in the future.
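A matrix model-based PVA of the kind discussed above rests on projecting a stage-classified matrix and reading the asymptotic growth rate off its dominant eigenvalue. A minimal sketch with a hypothetical three-stage life cycle (all matrix entries are illustrative, not data from the thesis):

```python
import numpy as np

# Hypothetical 3-stage life cycle (seedling, juvenile, adult); entries are
# illustrative fecundity/survival rates, not estimates from the thesis.
A = np.array([
    [0.0,  0.5,  2.0],   # fecundity of juveniles and adults
    [0.3,  0.4,  0.0],   # seedling survival, juvenile stasis
    [0.0,  0.35, 0.9],   # juvenile maturation, adult survival
])

# Asymptotic population growth rate = dominant eigenvalue of A.
lam = max(np.linalg.eigvals(A).real)
growing = lam > 1.0      # population projected to grow if lambda > 1

# One projection step: n_{t+1} = A n_t for a stage-abundance vector.
n_t = np.array([50.0, 20.0, 10.0])
n_next = A @ n_t
```

Reducing matrix dimensionality, as the abstract recommends, amounts to collapsing stages of `A`, which shrinks the demographic data needed to fill it.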
43

Hedging Strategies of a European Claim Written on a Nontraded Asset

Kaczorowska, Dorota, Wieczorek, Piotr Unknown Date (has links)
The article by Zariphopoulou and Musiela, "An example of indifference prices under exponential preferences", was the background of our work.
46

Hybrid is good: stochastic optimization and applied statistics for OR

Chun, So Yeon 08 May 2012 (has links)
In the first part of this thesis, we study revenue management in resource exchange alliances. We first show that without an alliance the sellers will tend to price their products too high and sell too little, thereby foregoing potential profit, especially when capacity is large. This provides an economic motivation for interest in alliances, because the hope may be that some of the foregone profit may be captured under an alliance. We then consider a resource exchange alliance, including the effect of the alliance on competition among alliance members. We show that the foregone profit may indeed be captured under such an alliance. The problem of determining the optimal amounts of resources to exchange is formulated as a stochastic mathematical program with equilibrium constraints. We demonstrate how to determine whether there exists a unique equilibrium after resource exchange, how to compute the equilibrium, and how to compute the optimal resource exchange. In the second part of this thesis, we study the estimation of risk measures in risk management. In the financial industry, sell-side analysts periodically publish recommendations of underlying securities with target prices. However, this type of analysis does not provide risk measures associated with underlying companies. In this study, we discuss linear regression approaches to the estimation of law invariant conditional risk measures. Two estimation procedures are considered and compared; one is based on residual analysis of the standard least squares method and the other is in the spirit of the M-estimation approach used in robust statistics. In particular, Value-at-Risk and Average Value-at-Risk measures are discussed in detail. Large sample statistical inference of the estimators is derived. Furthermore, finite sample properties of the proposed estimators are investigated and compared with theoretical derivations in an extensive Monte Carlo study. Empirical results on real data (different financial asset classes) are also provided to illustrate the performance of the estimators.
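The thesis studies regression-based estimators of conditional risk measures; as background, the two measures it discusses have simple unconditional empirical versions. A sketch on simulated losses (the standard-normal loss distribution is just a stand-in):

```python
import numpy as np

def value_at_risk(losses, alpha=0.95):
    """Empirical VaR: the alpha-quantile of the loss sample."""
    return np.quantile(losses, alpha)

def average_value_at_risk(losses, alpha=0.95):
    """Empirical AVaR (CVaR): mean loss beyond the VaR level."""
    var = value_at_risk(losses, alpha)
    return losses[losses >= var].mean()

# Simulated standard-normal losses stand in for a real return series.
rng = np.random.default_rng(1)
losses = rng.normal(0.0, 1.0, 100_000)
var95 = value_at_risk(losses)
avar95 = average_value_at_risk(losses)
```

For N(0, 1) losses the true values are about 1.645 (VaR) and 2.063 (AVaR) at the 95% level, so the estimates should land close to those; AVaR always lies at or above VaR because it averages the tail beyond it.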
47

Syntactic foundations for machine learning

Bhat, Sooraj 08 April 2013 (has links)
Machine learning has risen in importance across science, engineering, and business in recent years. Domain experts have begun to understand how their data analysis problems can be solved in a principled and efficient manner using methods from machine learning, with its simultaneous focus on statistical and computational concerns. Moreover, the data in many of these application domains has exploded in availability and scale, further underscoring the need for algorithms which find patterns and trends quickly and correctly. However, most people actually analyzing data today operate far from the expert level. Available statistical libraries and even textbooks contain only a finite sample of the possibilities afforded by the underlying mathematical principles. Ideally, practitioners should be able to do what machine learning experts can do--employ the fundamental principles to experiment with the practically infinite number of possible customized statistical models as well as alternative algorithms for solving them, including advanced techniques for handling massive datasets. This would lead to more accurate models, the ability in some cases to analyze data that was previously intractable, and, if the experimentation can be greatly accelerated, huge gains in human productivity. Fixing this state of affairs involves mechanizing and automating these statistical and algorithmic principles. This task has received little attention because we lack a suitable syntactic representation that is capable of specifying machine learning problems and solutions, so there is no way to encode the principles in question, which are themselves a mapping between problem and solution. This work focuses on providing the foundational layer for enabling this vision, with the thesis that such a representation is possible. We demonstrate the thesis by defining a syntactic representation of machine learning that is expressive, promotes correctness, and enables the mechanization of a wide variety of useful solution principles.
48

Joint pricing and inventory control under reference price effects

Gimpl-Heersink, Lisa 05 1900 (has links) (PDF)
In many firms the pricing and inventory control functions are separated. However, a number of theoretical models suggest a joint determination of inventory levels and prices, as prices also affect stocking risks. In this work, we address the problem of simultaneously determining a pricing and inventory replenishment strategy under reference price effects. This reference price effect models the empirically well established fact that consumers not only react sensitively to the current price, but also to deviations from a reference price formed on the basis of past purchases. The current price is then perceived as a discount or surcharge relative to this reference price. Thus, immediate effects of price reductions on profits have to be weighted against the resulting losses in future periods. We study how the additional dynamics of the consumers' willingness to pay affect an optimal pricing and inventory control model and whether a simple policy such as a base-stock-list-price policy holds in such a setting. For a one-period planning horizon we analytically prove the optimality of a base-stock-list-price policy with respect to the reference price under general conditions. We then extend this result to the two-period time horizon for the linear and loss-neutral demand function and to the multi-period case under even more restrictive assumptions. However, numerical simulations suggest that a base-stock-list-price policy is also optimal for the multi-period setting under more general conditions. We furthermore show by numerical investigations that the presence of reference price effects decreases the incentive for price discounts to deal with overstocked situations. Moreover, we find that the potential benefits from simultaneously determining optimal prices and stocking quantities compared to a sequential procedure can increase considerably, when reference price effects are included in the model. This makes an integration of pricing and inventory control with reference price effects by all means worth the effort. (author's abstract)
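The reference price dynamics described above are commonly modeled by exponential smoothing of past prices, with demand responding both to the current price and to its deviation from the reference. A minimal sketch; the smoothing parameter, sensitivities, and the linear demand form are all invented for illustration, not taken from the thesis:

```python
import numpy as np

# Illustrative single-product model: the reference price is an exponentially
# smoothed memory of past prices, and demand responds both to the current
# price and to its gap from the reference. All coefficients are made up.
alpha = 0.7            # memory parameter of the reference price
b, gamma = 2.0, 1.5    # own-price and reference-gap sensitivities
base = 100.0

def demand(p, r):
    return base - b * p + gamma * (r - p)   # (r - p) > 0 acts as a discount

T = 50
prices = np.full(T, 10.0)
prices[25:] = 8.0          # permanent price cut halfway through
ref = 10.0
d = np.zeros(T)
for t in range(T):
    d[t] = demand(prices[t], ref)
    ref = alpha * ref + (1 - alpha) * prices[t]   # reference price update
# Demand jumps at the cut (perceived discount), then partly erodes as the
# reference price adapts downward.
```

The transient illustrates the trade-off in the abstract: a price cut boosts demand immediately, but by dragging the reference price down it sacrifices some future demand.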
49

Stochastic modeling of cooperative wireless multi-hop networks

Hassan, Syed Ali 18 October 2011 (has links)
Multi-hop wireless transmission, where radios forward the message of other radios, is becoming popular both in cellular as well as sensor networks. This research is concerned with the statistical modeling of multi-hop wireless networks that do cooperative transmission (CT). CT is a physical layer wireless communication scheme in which spatially separated wireless nodes collaborate to form a virtual array antenna for the purpose of increased reliability. The dissertation has two major parts. The first part addresses a special form of CT known as the Opportunistic Large Array (OLA). The second part addresses the signal-to-noise ratio (SNR) estimation for the purpose of recruiting nodes for CT. In an OLA transmission, the nodes from one level transmit the message signal concurrently without any coordination with each other, thereby producing transmit diversity. The receiving layer of nodes receives the message signal and repeats the process using the decode-and-forward cooperative protocol. The key contribution of this research is to model the transmissions that hop from one layer of nodes to another under the effects of channel variations, carrier frequency offsets, and path loss. It has been shown for a one-dimensional network that the successive transmission process can be modeled as a quasi-stationary Markov chain in discrete time. By studying various properties of the Markov chain, the system parameters, for instance, the transmit power of relays and distance between them can be optimized. This optimization is used to improve the performance of the system in terms of maximum throughput, range extensions, and minimum delays while delivering the data to the destination node using the multi-hop wireless communication system. A major problem for network sustainability, especially in battery-assisted networks, is that the batteries are drained pretty quickly during the operation of the network. However, in dense sensor networks, this problem can be alleviated by using a subset of nodes which take part in CT, thereby saving the network energy. SNR is an important parameter in determining which nodes to participate in CT. The more distant nodes from the source having least SNR are most suitable to transmit the message to next level. However, practical real-time SNR estimators are required to do this job. Therefore, another key contribution of this research is the design of optimal SNR estimators for synchronized as well as non-synchronized receivers, which can work with both the symbol-by-symbol Rayleigh fading channels as well as slow flat fading channels in a wireless medium.
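The Markov-chain view of OLA hops can be illustrated with a toy simulation: the chain state is the number of nodes in a level that decode the message, and each node in the next level decodes when its aggregate received power, summed over the current transmitters with independent exponential (Rayleigh-fading) power gains, clears a threshold. This is a simplification of the dissertation's model (it ignores carrier frequency offsets, path loss, and geometry), and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy one-dimensional OLA hop model: state = number of decoding nodes in a
# level. Ignores path loss and frequency offsets; parameters illustrative.
m = 10            # nodes per level
threshold = 3.0   # decoding threshold on aggregate received power

def next_state(k):
    """Sample the number of decoding nodes at the next level,
    given k concurrent transmitters at the current level."""
    if k == 0:
        return 0                                  # message has died out
    gains = rng.exponential(1.0, size=(m, k))     # (receiver, transmitter)
    return int(np.sum(gains.sum(axis=1) >= threshold))

# Estimate the probability that the message survives 20 hops.
trials, hops = 500, 20
survived = 0
for _ in range(trials):
    k = m
    for _ in range(hops):
        k = next_state(k)
    survived += k > 0
survival_rate = survived / trials
```

Sweeping the threshold (i.e., the relay transmit power) in such a chain is the kind of parameter study the dissertation uses to trade off range extension against energy.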
50

Prioritization and optimization in stochastic network interdiction problems

Michalopoulos, Dennis Paul, 1979- 05 October 2012 (has links)
The goal of a network interdiction problem is to model competitive decision-making between two parties with opposing goals. The simplest interdiction problem is a bilevel model consisting of an 'adversary' and an interdictor. In this setting, the interdictor first expends resources to optimally disrupt the network operations of the adversary. The adversary subsequently optimizes in the residual interdicted network. In particular, this dissertation considers an interdiction problem in which the interdictor places radiation detectors on a transportation network in order to minimize the probability that a smuggler of nuclear material can avoid detection. A particular area of interest in stochastic network interdiction problems (SNIPs) is the application of so-called prioritized decision-making. The motivation for this framework is as follows: In many real-world settings, decisions must be made now under uncertain resource levels, e.g., interdiction budgets, available man-hours, or any other resource depending on the problem setting. Applying this idea to the stochastic network interdiction setting, the solution to the prioritized SNIP (PrSNIP) is a rank-ordered list of locations to interdict, ranked from highest to lowest importance. It is well known in the operations research literature that stochastic integer programs are among the most difficult optimization problems to solve. Even for modest levels of uncertainty, commercial integer programming solvers can have difficulty solving models such as PrSNIP. However, metaheuristic and large-scale mathematical programming algorithms are often effective in solving instances from this class of difficult optimization problems. The goal of this doctoral research is to investigate different methods for modeling and solving SNIPs (optimization) and PrSNIPs (prioritization via optimization). We develop a number of different prioritized and unprioritized models, as well as exact and heuristic algorithms for solving each problem type. The mathematical programming algorithms that we consider are based on row and column generation techniques, and our heuristic approach uses adaptive tabu search to quickly find near-optimal solutions. Finally, we develop a group of hybrid algorithms that combine various elements of both classes of algorithms.
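The bilevel structure described above can be made concrete with a toy, deterministic version of the detector-placement game: the interdictor installs detectors on at most B arcs, after which the smuggler takes the s-t path that maximizes evasion probability. The network, detection probabilities, and budget below are invented for the sketch (the dissertation's models add stochastic elements), and brute force over placements and paths is fine at this size.

```python
import itertools

# Toy deterministic interdiction game on a 4-node network.
arcs = {                      # arc -> (evasion prob without, with a detector)
    ('s', 'a'): (1.0, 0.3),
    ('s', 'b'): (1.0, 0.5),
    ('a', 't'): (1.0, 0.2),
    ('b', 't'): (1.0, 0.4),
    ('a', 'b'): (1.0, 0.6),
}
paths = [                     # all simple s-t paths in this small network
    [('s', 'a'), ('a', 't')],
    [('s', 'b'), ('b', 't')],
    [('s', 'a'), ('a', 'b'), ('b', 't')],
]
B = 2                         # detector budget

def evasion(path, placed):
    """Probability the smuggler traverses `path` undetected."""
    prob = 1.0
    for arc in path:
        clean, covered = arcs[arc]
        prob *= covered if arc in placed else clean
    return prob

# Interdictor's problem: min over placements of the smuggler's best
# response (max over paths) -- the min-max that makes the model bilevel.
optimal_evasion, optimal_placement = min(
    (max(evasion(p, set(placed)) for p in paths), placed)
    for placed in itertools.combinations(arcs, B)
)
```

A prioritized (PrSNIP-style) solution would instead return a ranked list of arcs so that any budget realization can be served by taking a prefix of the list.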
