  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Unscharfe Verfahren für lokale Phänomene in Zeitreihen

Herbst, Gernot 16 June 2011 (has links)
This dissertation deals with non-stationary, uni- or multivariate time series that arise from observing complex nonlinear dynamical systems and that elude modelling by a single global model. In many natural or societal processes, however, one can observe recurring phenomena influenced by the rhythms of those processes; likewise, technical processes may exhibit repeated but non-periodic behaviour, for instance due to demand-oriented control. For such systems and time series, a partial modelling by several local models is therefore proposed, in which the recurring phenomena are described as temporally bounded patterns. To cope with the intrinsic uncertainties of this and the subsequent tasks, fuzzy approaches to the modelling of patterns and to their further processing are developed and used throughout this work. The task of recognising patterns in streaming time series is generalised so that incomplete pattern instances that are still developing can be recognised. Based on this early recognition, the course of the time series, and thereby the further behaviour of the system, can be predicted locally. Peculiarities and intrinsic difficulties of the novel task of online recognition of incomplete patterns are discussed with the help of suitable examples, and the practical usability of the pattern-based prediction principle is demonstrated on real-world data.
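The fuzzy recognition of incomplete, still-developing pattern instances can be illustrated with a minimal sketch (Python with NumPy; the Gaussian membership function, the mean aggregation, and the template are illustrative choices, not the thesis's actual models):

```python
import numpy as np

def fuzzy_match(prefix, template, sigma=0.5):
    """Fuzzy membership degree of a developing (possibly incomplete)
    pattern instance against the first len(prefix) samples of a stored
    template: pointwise Gaussian memberships, aggregated by the mean."""
    n = len(prefix)
    d = np.asarray(prefix) - np.asarray(template)[:n]
    return float(np.exp(-(d / sigma) ** 2).mean())

# A stored pattern and an incomplete, noisy instance of its beginning:
template = np.sin(np.linspace(0.0, np.pi, 50))
rng = np.random.default_rng(0)
partial = template[:20] + 0.05 * rng.normal(size=20)

score = fuzzy_match(partial, template)       # high: the pattern is emerging
other = fuzzy_match(np.zeros(20), template)  # lower: no match
```

In this spirit, a recognition score can be tracked online as each new sample arrives, enabling the early, pattern-based prediction the abstract describes.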
42

Unit root, outliers and cointegration analysis with macroeconomic applications

Rodríguez, Gabriel 10 1900 (has links)
Thesis digitised by the Direction des bibliothèques de l'Université de Montréal. / In this thesis, we deal with three particular issues in the literature on nonstationary time series. The first essay deals with various unit root tests in the context of structural change. The second paper studies some residual-based tests used to identify cointegration. Finally, in the third essay, we analyze several tests used to identify additive outliers in nonstationary time series. The first paper analyzes the hypothesis that some time series can be characterized as stationary with a broken trend. We extend the class of M-tests and the ADF test for a unit root to the case where a change in the trend function is allowed to occur at an unknown time. These tests (MGLS, ADFGLS) adopt the Generalized Least Squares (GLS) detrending approach to eliminate the set of deterministic components present in the model. We consider two models in the context of the structural change literature: the first allows for a change in slope, and the other for a change in slope as well as intercept. We derive the asymptotic distribution of the tests as well as that of the feasible point optimal test (PTGLS), which allows us to find the power envelope. The asymptotic critical values of the tests are tabulated, and we compute the non-centrality parameter used for the local GLS detrending that permits the tests to have 50% asymptotic power at that value. Two methods to select the break point are analyzed: the first estimates the break point that yields the minimal value of the statistic; in the second, the break point is selected such that the absolute value of the t-statistic on the change in slope is maximized. We show that the MGLS and PTGLS tests have an asymptotic power function close to the power envelope.
An extensive simulation study analyzes the size and power of the tests in finite samples under various methods to select the truncation lag for the autoregressive spectral density estimator. In an empirical application, we consider two U.S. macroeconomic annual series widely used in the unit root literature: real wages and common stock prices. Our results suggest a rejection of the unit root hypothesis; in other words, these series can be considered trend stationary with a broken trend. Since the GLS detrending approach yields gains in the power of unit root tests, a natural extension is to apply it to residual-based tests for cointegration, which is the objective of the second paper. We propose residual-based tests for cointegration using local GLS detrending to eliminate separately the deterministic components in the series. We consider two cases, one where only a constant is included and one where a constant and a time trend are included. The limiting distributions of various residual-based tests are derived for a general quasi-differencing parameter, and critical values are tabulated for c = 0, irrespective of the nature of the deterministic components, as well as for other values proposed in the unit root literature. Simulations show that GLS detrending yields tests with higher power; furthermore, using c = -7.0 or c = -13.5 as the quasi-differencing parameter, depending on the case analyzed, is preferable. The third paper is an extension of a recently proposed method to detect outliers which explicitly imposes the null hypothesis of a unit root. It works in an iterative fashion to select multiple outliers in a given series.
We show, via simulation, that under the null hypothesis of no outliers, it has the right size in finite samples to detect a single outlier but when applied in an iterative fashion to select multiple outliers, it exhibits severe size distortions towards finding an excessive number of outliers. We show that this iterative method is incorrect and derive the appropriate limiting distribution of the test at each step of the search. Whether corrected or not, we also show that the outliers need to be very large for the method to have any decent power. We propose an alternative method based on first-differenced data that has considerably more power. The issues are illustrated using two US/Finland real exchange rate series.
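The local GLS detrending step underlying tests of this family can be sketched as follows (a simplified constant-plus-trend case without the structural break; c̄ = -13.5 follows the unit root literature, but the code is an illustration, not the paper's implementation):

```python
import numpy as np

def gls_detrend(y, c_bar=-13.5):
    """Local GLS detrending (constant + linear trend): quasi-difference
    the series and the deterministics with rho = 1 + c_bar/T, estimate
    the trend coefficients by least squares on the quasi-differenced
    data, then remove the fitted trend from the original series."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    rho = 1.0 + c_bar / T
    Z = np.column_stack([np.ones(T), np.arange(1.0, T + 1.0)])
    yq = np.concatenate([[y[0]], y[1:] - rho * y[:-1]])   # quasi-differenced y
    Zq = np.vstack([Z[0], Z[1:] - rho * Z[:-1]])          # quasi-differenced Z
    beta, *_ = np.linalg.lstsq(Zq, yq, rcond=None)
    return y - Z @ beta

rng = np.random.default_rng(1)
series = 2.0 + 0.5 * np.arange(200) + rng.normal(size=200)  # trend stationary
detrended = gls_detrend(series)
```

A unit root statistic (M-type or ADF-type) would then be computed on `detrended`; the quasi-differencing is what distinguishes this from ordinary OLS detrending.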
43

Identifikace tepelné vodivosti a tepelné kapacity stavebních látek metodou „Hot Wire Method“ / Identification of Thermal Conductivity and Thermal Capacity of Building Materials by the "Hot Wire Method"

Průša, David January 2019 (has links)
This thesis deals with the mechanisms of heat dissipation and the physical phenomena that accompany the non-stationary measurement of thermal characteristics by the hot-wire method. In particular, we study the thermal conductivity coefficient and its dependence on variables such as the temperature of the measured sample, its moisture state, its volume and its porosity. These findings are used to design a non-stationary measuring device, based on regulated heating, for measuring the thermal conductivity coefficient and the heat capacity by the hot-wire method. The last part of the thesis verifies the functionality of the proposed device, the suitability of the algorithm created for processing the measured data, and the reproducibility of the measurements; the results are compared with commonly used measurement methods, and the influence of humidity on the thermal conductivity coefficient is evaluated.
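The evaluation step of the hot-wire method rests on the line-source solution, in which the late-time temperature rise grows linearly in ln t with slope q/(4πλ). A minimal sketch of this inference on idealised data (illustrative values, not measurements from the thesis):

```python
import numpy as np

# Line-source model: dT(t) ≈ (q / (4·pi·lam)) · ln(t) + const  for late t,
# hence lam = q / (4·pi·slope) with slope from a fit of dT against ln(t).
q = 10.0        # heat input per unit wire length [W/m]   (illustrative)
lam_true = 0.8  # sample thermal conductivity [W/(m·K)]   (illustrative)

t = np.linspace(10.0, 200.0, 100)                    # measurement times [s]
dT = q / (4.0 * np.pi * lam_true) * np.log(t) + 0.3  # idealised rise [K]

slope = np.polyfit(np.log(t), dT, 1)[0]
lam_est = q / (4.0 * np.pi * slope)                  # recovers lam_true
```

On real data the fit window matters: very early times violate the line-source asymptotics and very late times suffer boundary effects, which is part of what a practical device must handle.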
44

Nonequilibrium fluctuations of a Brownian particle

Gomez-Solano, Juan Rubén 08 November 2011 (has links) (PDF)
This thesis describes an experimental study of the fluctuations of a Brownian particle immersed in a fluid, confined by optical tweezers and subjected to two different kinds of non-equilibrium conditions. We aim to gain a rather general understanding of the relation between spontaneous fluctuations, linear response and total entropy production for processes away from thermal equilibrium. The first part addresses the motion of a colloidal particle driven into a periodic non-equilibrium steady state by a nonconservative force, and its response to an external perturbation. The dynamics of the system is analyzed in the context of several generalized fluctuation-dissipation relations derived from different theoretical approaches. We show that, when taking into account the role of the currents due to broken detailed balance, the theoretical relations are verified by the experimental data. The second part deals with the fluctuations and response of a Brownian particle in two different aging baths relaxing towards thermal equilibrium: a Laponite colloidal glass and an aqueous gelatin solution. The experimental results show that heat fluxes from the particle to the bath during the relaxation process play the same role as steady-state currents in the non-equilibrium correction of the fluctuation-dissipation theorem. The present thesis thus provides evidence that total entropy production constitutes a unifying concept linking the statistical properties of fluctuations and the linear response function for non-equilibrium systems in either stationary or non-stationary states.
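As a minimal equilibrium reference point for such experiments (not the thesis's non-equilibrium setup), one can simulate an optically trapped Brownian particle as an Ornstein-Uhlenbeck process and check the relation to which the fluctuation-dissipation theorem reduces at equilibrium, equipartition ⟨x²⟩ = k_BT/k. All parameter values below are illustrative:

```python
import numpy as np

kBT = 4.1e-21   # thermal energy at room temperature [J]   (illustrative)
k = 1e-6        # optical trap stiffness [N/m]             (illustrative)
gamma = 1e-8    # Stokes drag coefficient [kg/s]           (illustrative)

dt, n = 1e-5, 400_000
rng = np.random.default_rng(2)
noise = np.sqrt(2.0 * kBT / gamma * dt) * rng.normal(size=n)

x = np.empty(n)   # Euler-Maruyama integration of the overdamped Langevin equation
x[0] = 0.0
for i in range(1, n):
    x[i] = x[i - 1] - (k / gamma) * x[i - 1] * dt + noise[i]

var_measured = x.var()
var_predicted = kBT / k   # equipartition: <x^2> = kB*T / k
```

The non-equilibrium corrections studied in the thesis show up precisely as systematic deviations from this equality, quantified by currents or heat fluxes.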
45

ARIMA demand forecasting by aggregation

Rostami Tabar, Bahman 10 December 2013 (has links) (PDF)
Demand forecasting performance is subject to the uncertainty underlying the time series an organisation is dealing with. There are many approaches that may be used to reduce demand uncertainty and consequently improve forecasting (and inventory control) performance. One intuitively appealing approach that is known to be effective is demand aggregation. One option is to aggregate demand in lower-frequency 'time buckets'; such an approach is often referred to, in the academic literature, as temporal aggregation. Another approach discussed in the literature is cross-sectional aggregation, which involves aggregating different time series to obtain higher-level forecasts. This research discusses whether it is appropriate to use the original (non-aggregated) data to generate a forecast, or whether one should rather aggregate the data first and then forecast. This Ph.D. thesis reveals the conditions under which each approach leads to superior performance, as judged by forecast accuracy. Throughout this work, it is assumed that the underlying structure of the demand time series follows an AutoRegressive Integrated Moving Average (ARIMA) process. In the first part of our research, the effect of temporal aggregation on demand forecasting is analysed. It is assumed that the non-aggregate demand follows an autoregressive moving average process of order one, ARMA(1,1); the associated special cases of a first-order autoregressive process, AR(1), and a moving average process of order one, MA(1), are also considered, and a Single Exponential Smoothing (SES) procedure is used to forecast demand. These demand processes are often encountered in practice, and SES is one of the standard estimators used in industry. Theoretical Mean Squared Error expressions are derived for the aggregate and the non-aggregate demand in order to contrast the relevant forecasting performances.
The theoretical analysis is validated by an extensive numerical investigation and experimentation with an empirical dataset. The results indicate that the performance improvements achieved through aggregation are a function of the aggregation level, the smoothing constant used for SES, and the process parameters. In the second part of our research, the effect of cross-sectional aggregation on demand forecasting is evaluated. More specifically, the relative effectiveness of the top-down (TD) and bottom-up (BU) approaches is compared for forecasting the aggregate and sub-aggregate demands. It is assumed that the sub-aggregate demand follows either an ARMA(1,1) process or a non-stationary Integrated Moving Average process of order one, IMA(1,1), and an SES procedure is used to extrapolate future requirements. Such demand processes are often encountered in practice and, as discussed above, SES is one of the standard estimators used in industry (in addition to being the optimal estimator for an IMA(1,1) process). Theoretical Mean Squared Errors are derived for the BU and TD approaches in order to contrast the relevant forecasting performances. The theoretical analysis is supported by an extensive numerical investigation at both the aggregate and sub-aggregate levels, in addition to empirical validation of our findings on a real dataset from a European superstore. The results show that the superiority of each approach is a function of the series autocorrelation, the cross-correlation between series, and the comparison level. Finally, for both parts of the research, valuable insights are offered to practitioners and an agenda for further research in this area is provided.
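The temporal-aggregation setup can be sketched numerically: simulate ARMA(1,1) demand, aggregate it into non-overlapping time buckets, and apply SES at both levels. This is a toy illustration with assumed parameters; the thesis derives the MSE comparison analytically, and note that the two MSEs below are on different scales (per period versus per bucket), so a fair comparison must account for the aggregation level:

```python
import numpy as np

def ses(y, alpha=0.2):
    """Single exponential smoothing; returns one-step-ahead forecasts."""
    f = np.empty(len(y))
    f[0] = y[0]
    for i in range(1, len(y)):
        f[i] = alpha * y[i - 1] + (1.0 - alpha) * f[i - 1]
    return f

# ARMA(1,1) demand around a level: d_t = level + u_t,
# u_t = phi*u_{t-1} + e_t + theta*e_{t-1}   (assumed parameters)
rng = np.random.default_rng(3)
phi, theta, level, n = 0.3, 0.2, 50.0, 6000
e = rng.normal(size=n)
u = np.empty(n)
u[0] = e[0]
for t in range(1, n):
    u[t] = phi * u[t - 1] + e[t] + theta * e[t - 1]
d = level + u

m = 3                               # aggregation level (bucket size)
agg = d.reshape(-1, m).sum(axis=1)  # non-overlapping temporal aggregation

mse_orig = np.mean((d[1:] - ses(d)[1:]) ** 2)     # per-period units
mse_agg = np.mean((agg[1:] - ses(agg)[1:]) ** 2)  # per-bucket units
```

Varying `m`, `alpha`, `phi` and `theta` in such a simulation reproduces the qualitative finding that the benefit of aggregation depends on all three of these factors.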
46

Newborn EEG seizure detection using adaptive time-frequency signal processing

Rankine, Luke January 2006 (has links)
Dysfunction in the central nervous system of the neonate is often first identified through seizures. The difficulty of detecting clinical seizures, which involves observing the physical manifestations characteristic of newborn seizure, has placed greater emphasis on the detection of newborn electroencephalographic (EEG) seizure. The high incidence of newborn seizure has resulted in considerable mortality and morbidity rates in the neonate, and accurate and rapid diagnosis of neonatal seizure is essential for proper treatment and therapy. This has impelled researchers to investigate possible methods for the automatic detection of newborn EEG seizure. This thesis focuses on the development of algorithms for automatic newborn EEG seizure detection using adaptive time-frequency signal processing. The assessment of newborn EEG seizure detection algorithms requires large datasets of nonseizure and seizure EEG, which are not always readily available and are often hard to acquire. This has led to the proposition of realistic models of newborn EEG which can be used to create large datasets for the evaluation and comparison of newborn EEG seizure detection algorithms. In this thesis, we develop two simulation methods which produce synthetic newborn EEG background and seizure. The simulation methods use nonlinear and time-frequency signal processing techniques to account for the demonstrated nonlinear and nonstationary characteristics of the newborn EEG. Atomic decomposition techniques incorporating redundant time-frequency dictionaries are new signal processing methods which deliver adaptive signal representations or approximations. In this thesis we investigate two prominent atomic decomposition techniques, matching pursuit (MP) and basis pursuit, for their possible use in an automatic seizure detection algorithm. Our investigation showed that matching pursuit generally provided the sparsest (i.e. most compact) approximation for various real and synthetic signals over a wide range of signal approximation levels; for this reason, we chose MP as the preferred atomic decomposition technique for this thesis. A new measure, referred to as structural complexity, which quantifies the degree of correlation between signal structures and the decomposition dictionary, was proposed. Using the change in structural complexity, a generic method of detecting changes in signal structure was proposed. This detection methodology was then applied to the newborn EEG for the detection of state transitions (i.e. nonseizure to seizure) in the EEG signal. To optimize the seizure detection process, we developed a time-frequency dictionary that is coherent with the newborn EEG seizure state, based on the time-frequency analysis of newborn EEG seizure. It was shown that, using the new coherent time-frequency dictionary and the change in structural complexity, we can detect the transition from nonseizure to seizure states in synthetic and real newborn EEG. Repetitive spiking in the EEG is a classic feature of newborn EEG seizure; the automatic detection of spikes can therefore be fundamental to the detection of newborn EEG seizure. The capacity of two adaptive time-frequency signal processing techniques to detect spikes was investigated. It was shown that a relationship between the EEG epoch length and the number of repetitive spikes governs the ability of both matching pursuit and the adaptive spectrogram to detect repetitive spikes. However, this relationship was demonstrated to be less restrictive for the adaptive spectrogram, which was shown to outperform matching pursuit in detecting repetitive spikes. The window length of the adaptive spectrogram used in this thesis was adapted using the maximum correlation criterion. It was observed that, at the time instants where signal spikes occurred, the optimal window lengths selected by the maximum correlation criterion were small. Spike detection directly from the adaptive window optimization method was therefore demonstrated and also shown to outperform matching pursuit. An automatic newborn EEG seizure detection algorithm was proposed based on the detection of repetitive spikes using the adaptive window optimization method. The algorithm shows excellent performance on real EEG data. A comparison of the proposed algorithm with four well-documented newborn EEG seizure detection algorithms is provided. The results of the comparison show that the proposed algorithm performs significantly better than the existing algorithms (it achieved a good detection rate (GDR) of 94% and a false detection rate (FDR) of 2.3%, compared with the leading existing algorithm, which produced a GDR of 62% and an FDR of 16%). In summary, the novel contribution of this thesis to the fields of time-frequency signal processing and biomedical engineering is the successful development and application of algorithms based on adaptive time-frequency signal processing techniques to automatic newborn EEG seizure detection.
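The greedy matching pursuit procedure at the heart of these atomic decompositions can be sketched as follows (a generic MP over an illustrative overcomplete cosine dictionary, not the thesis's coherent seizure dictionary):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP: at each step pick the unit-norm atom most correlated
    with the current residual and subtract its projection."""
    residual = np.asarray(signal, dtype=float).copy()
    approx = np.zeros_like(residual)
    chosen = []
    for _ in range(n_atoms):
        corr = dictionary @ residual          # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))
        approx += corr[k] * dictionary[k]
        residual -= corr[k] * dictionary[k]
        chosen.append(k)
    return approx, residual, chosen

# Overcomplete dictionary of unit-norm cosine atoms (illustrative)
N = 128
freqs = np.linspace(0.5, 20.0, 200)
atoms = np.array([np.cos(2.0 * np.pi * f * np.arange(N) / N) for f in freqs])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

signal = 3.0 * atoms[40] + 1.5 * atoms[120]  # sparse in this dictionary
approx, residual, chosen = matching_pursuit(signal, atoms, n_atoms=5)
```

The residual norm after a fixed number of atoms is one way to quantify how well a dictionary matches a signal, which is the intuition behind the structural-complexity measure described above.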
47

Human locomotion analysis, classification and modeling of normal and pathological vertical ground reaction force signals in elderly / Analyse, classification et modélisation de la locomotion humaine : application a des signaux GRF sur une population âgée

Alkhatib, Rami 12 July 2016 (has links)
Walking is defined by sequences of repetitive cyclic gestures. It has already been shown that the speed and the variability of these sequences can reveal motor abilities or impairments. The originality of this work is to analyse and characterise the strides of elderly subjects from the pressure signals of instrumented insoles during walking, using signal processing tools. A preliminary study of the pressure signals generated during walking showed that these signals are cyclostationary. Here, we exploit the non-stationarity of the signals in search of new indicators that can help classify gait signals between normal and Parkinsonian subjects in an elderly population; the parameters are tested on a population of 47 subjects. First, in preprocessing the vertical ground reaction force (VGRF) signals, we showed that filtering can remove a vital part of the signal, which is why an adaptive filter based on empirical mode decomposition (EMD) was designed; turning points were then filtered using a time-frequency technique, synchrosqueezing. We also showed that the content of gait force signals is strongly affected by unquantifiable factors such as cognitive tasks, which makes the signals hard to normalise; the extracted features are therefore all derived from inter-subject comparison, for example the difference in load distribution between the feet. This work also recommends using the mid-foot sensor rather than the sum of forces over the sensor array for classification purposes. A hypothesis of balanced versus unbalanced gait is verified to improve classification accuracy; its potential is demonstrated using the load distribution together with the age × speed product in a first classifier and the correlation in a second. A time-series simulation of the VGRF based on a modified first-order non-stationary Markov model is then derived; this model predicts gaits in normal subjects well and those of Parkinson subjects reasonably. Finally, since the three modes of time, frequency and space all prove useful for analysing force signals, parallel factor analysis is introduced as a tensor method for future work.
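The inter-subject load-distribution feature mentioned in the abstract can be sketched as follows (synthetic stand-in VGRF signals, not the thesis's dataset; the asymmetry index is one simple formalisation of the "difference in load distribution between feet"):

```python
import numpy as np

def load_asymmetry(vgrf_left, vgrf_right):
    """Relative difference of mean vertical load between the feet:
    0 for a perfectly balanced gait, larger for an unbalanced one."""
    ml, mr = float(np.mean(vgrf_left)), float(np.mean(vgrf_right))
    return abs(ml - mr) / (ml + mr)

# Synthetic stand-in VGRF records for the two feet [N] (illustrative):
# alternating stance phases, modelled as half-rectified sinusoids + noise
rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 1000)
left = 400.0 + 100.0 * np.maximum(np.sin(2 * np.pi * t), 0.0) \
       + rng.normal(0, 5, t.size)
right = 380.0 + 100.0 * np.maximum(np.sin(2 * np.pi * t + np.pi), 0.0) \
        + rng.normal(0, 5, t.size)

asym = load_asymmetry(left, right)   # small but nonzero asymmetry
```

Such a scalar index, being a within-subject ratio, sidesteps the normalisation problem the abstract raises for absolute force levels.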
49

Modèles graphiques évidentiels / Evidential graphical models

Boudaren, Mohamed El Yazid 12 January 2014 (has links)
Hidden Markov chain (HMC) based approaches have been shown to be efficient for a wide range of inverse problems in image and signal processing, unsupervised segmentation of data in particular. According to such models, the observed data are considered a noised version of the sought segmentation, which can be modelled by a finite-state Markov chain; Bayesian techniques such as MPM can then estimate this segmentation, even in an unsupervised way, thanks to algorithms that estimate the model parameters from the observed data alone. HMCs have been generalised to pairwise Markov chains (PMCs) and triplet Markov chains (TMCs), which offer more modelling possibilities at comparable computational complexity and thus make it possible to handle challenging situations that conventional HMCs cannot. An interesting link has also been established between TMCs and the Dempster-Shafer theory of evidence, which gives these models the ability to handle multisensor data. Hence, in this thesis, we address three difficulties that conventional HMCs cannot handle: non-stationarity of the hidden process and/or of the noise distributions, noise correlation, and multisensor information fusion. For this purpose, we propose original models grounded in the rich theory of TMCs. First, we introduce HMCs with M-stationary noise (also called jumping-noise HMCs), which account for the heterogeneity of the noise distributions in a manner analogous to switching HMCs; ML-stationary HMCs then consider non-stationarity of both the a priori distribution and the noise densities. Second, we tackle non-stationary PMCs in two ways. In the Bayesian context, we define the M-stationary PMC and the MM-stationary (switching) PMC, which treat the data as piecewise stationary; in the evidential context, we propose the evidential PMC, in which the heterogeneity of the hidden process is modelled by a mass function. Finally, we introduce non-stationary multisensor HMCs, in which Dempster-Shafer fusion is used both to model the non-stationarity of the data (as in hidden evidential Markov chains) and to fuse the information provided by the different sensors (as in multisensor hidden Markov fields). For each proposed model, we describe the associated segmentation and parameter-estimation procedures, and we demonstrate its interest relative to the conventional models through experiments on synthetic and real data.
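The MPM segmentation that these models build on can be sketched for the simplest case, a classical HMC with Gaussian noise: forward-backward smoothing followed by a per-sample argmax. This is a baseline illustration, not the thesis's pairwise/triplet or evidential models:

```python
import numpy as np

def mpm_segment(y, A, means, sigma):
    """MPM segmentation of a hidden Markov chain observed in Gaussian
    noise: normalised forward-backward passes, then the argmax of the
    posterior marginal at each time step."""
    T, K = len(y), len(means)
    lik = np.exp(-0.5 * ((y[:, None] - means) / sigma) ** 2)  # emissions
    alpha = np.empty((T, K))
    beta = np.empty((T, K))
    alpha[0] = lik[0] / K                       # uniform initial law
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                       # forward pass
        alpha[t] = lik[t] * (alpha[t - 1] @ A)
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # backward pass
        beta[t] = A @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    post /= post.sum(axis=1, keepdims=True)
    return post.argmax(axis=1)

# Two-state sticky chain observed in Gaussian noise (illustrative)
rng = np.random.default_rng(5)
A = np.array([[0.95, 0.05], [0.05, 0.95]])
means = np.array([0.0, 3.0])
states = np.zeros(300, dtype=int)
for t in range(1, 300):
    states[t] = rng.choice(2, p=A[states[t - 1]])
y = means[states] + rng.normal(0.0, 0.7, size=300)

est = mpm_segment(y, A, means, sigma=0.7)
accuracy = (est == states).mean()
```

The pairwise and triplet generalisations replace the factorised transition/emission structure with richer joint laws, but the smoothing-then-argmax estimation scheme keeps the same shape.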
50

Quadratic Spline Approximation of the Newsvendor Problem Optimal Cost Function

Burton, Christina Marie 10 March 2012 (has links) (PDF)
We consider a single-product dynamic inventory problem where the demand distributions in each period are known and independent, each admitting a density. We assume that the lead time and the fixed ordering cost are zero and that there are no capacity constraints. There is a holding cost and a backorder cost for unfulfilled demand, which is backlogged until it is filled by a later order. The problem may be nonstationary, and in fact our spline approximation of the optimal cost function is most advantageous when demand falls suddenly; in this case the myopic policy, which is most often used in practice to compute the inventory level, would be very costly. Our algorithm uses quadratic splines to approximate the optimal cost function of this dynamic inventory problem and computes the optimal inventory level and optimal cost.
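The single-period building block behind such dynamic inventory problems is the newsvendor critical-fractile solution, which the dynamic base-stock levels generalise. A minimal sketch with illustrative costs and a normal demand density:

```python
from statistics import NormalDist

# With holding cost h per left-over unit and backorder cost b per
# unfulfilled unit, the single-period optimal base stock is the
# critical fractile S* = F^{-1}(b / (b + h)) of the demand cdf F.
h, b = 1.0, 9.0                            # costs per unit (illustrative)
demand = NormalDist(mu=100.0, sigma=20.0)  # demand density  (illustrative)

critical_fractile = b / (b + h)            # probability of covering demand
S_star = demand.inv_cdf(critical_fractile) # newsvendor-optimal base stock
```

In the dynamic, possibly nonstationary problem, the optimal cost-to-go function replaces this closed form, which is what the quadratic-spline approximation targets.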
