71

Generation of the steady state for Markov chains using regenerative simulation.

January 1993 (has links)
by Yuk-ka Chung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 73-74). / Contents: Chapter 1, Introduction (p. 1); Chapter 2, Regenerative Simulation (p. 5), with § 2.1 Discrete time discrete state space Markov chain (p. 5) and § 2.2 Discrete time continuous state space Markov chain (p. 8); Chapter 3, Estimation (p. 14), with § 3.1 Ratio estimators (p. 14), § 3.2 General method for generation of steady states from the estimated stationary distribution (p. 17), § 3.3 Bootstrap method (p. 22), § 3.4 A new approach: the scoring method (p. 26), § 3.4.1 G(0) method (p. 29) and § 3.4.2 G(1) method (p. 31); Chapter 4, Bias of the Scoring Sampling Algorithm (p. 34), with § 4.1 General form (p. 34), § 4.2 Bias of the G(0) estimator (p. 36), § 4.3 Bias of the G(1) estimator (p. 43) and § 4.4 Estimation of bounds for bias: stopping criterion for simulation (p. 51); Chapter 5, Simulation Study (p. 54); Chapter 6, Discussion (p. 70); References (p. 73).
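The regenerative approach named in the contents above can be sketched in a few lines. This is an illustrative sketch only, not the thesis's code: a finite-state chain is simulated in i.i.d. tours between returns to a fixed regeneration state, and a ratio estimator (expected visits to a state per tour over expected tour length) recovers the stationary distribution. The two-state chain used here is invented for the example.

```python
import random

def regenerative_estimate(P, regen=0, n_tours=20000, seed=1):
    """Estimate steady-state probabilities of a finite Markov chain with
    transition matrix P (list of rows) by regenerative simulation: simulate
    i.i.d. tours that start and end at the regeneration state, then use the
    ratio estimator pi(j) = E[visits to j per tour] / E[tour length]."""
    rng = random.Random(seed)
    n = len(P)
    visits = [0.0] * n
    total_len = 0
    for _ in range(n_tours):
        state = regen
        while True:
            visits[state] += 1
            total_len += 1
            # sample the next state from row P[state]
            u, acc, nxt = rng.random(), 0.0, 0
            for j, p in enumerate(P[state]):
                acc += p
                if u < acc:
                    nxt = j
                    break
            state = nxt
            if state == regen:          # tour ends on return to regen state
                break
    return [v / total_len for v in visits]

# Two-state chain with known stationary distribution (0.6, 0.4)
P = [[0.8, 0.2], [0.3, 0.7]]
pi = regenerative_estimate(P)
```

Because the tours are independent and identically distributed, classical confidence intervals apply to the ratio estimator, which is the main appeal of the regenerative method over a single long run.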
72

Application of Markov chains in determining tourist circuits in Peru

Farro Díaz, Víctor Daniel 03 October 2011 (has links)
This research has two objectives: to identify the departments, or regional governments, most likely to be visited by a tourist, national or international, and to provide the routes with the shortest distances between those departments. The theoretical basis of the study consists primarily of Markov chains and route design, which drive the application developed here; the topics of vectors and statistical sampling are also covered as support for applying the former. The review of the tourism sector aims to give a picture of its current state and of how it has been improving, showing that its contribution to our country has grown steadily, which makes clear the interest in carrying out this tourism-related research. The application of Markov chains to tourist circuits consists of formulating models, or matrices, for each macro-region (north, center and south) and at the national level; solving them yields the probabilities of tourists arriving at the different departments. Data were gathered through surveys of tourists, domestic and foreign, and from information provided by travel and tourism agencies. Route design uses the "Savings" method or algorithm: only the departments with the highest probabilities are used, and the different feasible routes are detailed, always keeping the total distance minimal. Finally, the results show that the main national route with the shortest distance is Lima – Arequipa – Puno – Cuzco – Ica – Lima, together with the different routes derived from it and the routes for each macro-region (north, center and south). / Tesis
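The arrival probabilities described above come from solving the Markov model for its stationary distribution. As a hedged sketch of that step, power iteration on a small transition matrix suffices; the three-department matrix below is invented for illustration, not the thesis's data.

```python
def stationary(P, tol=1e-12, max_iter=10000):
    """Power iteration: repeatedly apply the transition matrix to an initial
    distribution until it stops changing; the fixed point gives the long-run
    probability of a tourist being found in each department."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

# Hypothetical 3-department matrix (e.g. Lima, Cuzco, Arequipa); numbers invented
P = [[0.2, 0.5, 0.3],
     [0.4, 0.4, 0.2],
     [0.5, 0.3, 0.2]]
pi = stationary(P)
```

The departments with the largest entries of `pi` would then be the candidates fed into the savings algorithm for route design.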
73

Phase transition for the contact process on Z^d

Oliveros Ramos, David Ricardo 24 April 2015 (has links)
The contact process is a type of continuous-time Markov process whose state space, whose elements are also called configurations, is X = {0, 1}^(Z^d), and in which each coordinate of a configuration flips from 1 to 0 at constant rate 1, while the flip from 0 to 1 occurs at a rate proportional to the number of ones among the neighboring coordinates, with λ the proportionality constant that parametrizes the model. This work shows that the contact process can be constructed formally from this description of the transition rates between configurations, and moreover that there is a unique Markov process defined by those rates. Basic techniques for the study of interacting particle systems (monotonicity, coupling, duality) are used to prove properties of the contact process, such as self-duality and the monotonicity of ergodicity with respect to the process parameter. The main result shows that in one dimension (d = 1) there is a finite critical parameter λc that determines a phase transition for the ergodicity of the process: the process is ergodic if λ < λc, and there exist at least two invariant measures if λ > λc. This result is generalized to the process in d dimensions, showing that the critical parameter λd is bounded by 1/(2d) ≤ λd ≤ 2/d. / Tesis
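The transition rates described above can be simulated directly. The sketch below is a finite stand-in for the process on Z: a Gillespie-style simulation on a ring of n sites, with the periodic boundary and all parameter values chosen for illustration rather than taken from the thesis.

```python
import random

def contact_process(n=100, lam=2.0, t_max=5.0, seed=7):
    """Gillespie-style simulation of the contact process on a ring of n sites
    (a finite stand-in for Z).  A 1 flips to 0 at rate 1; a 0 flips to 1 at
    rate lam times its number of occupied neighbours.  Returns the final
    density of occupied sites."""
    rng = random.Random(seed)
    x = [1] * n                      # start from the all-ones configuration
    t = 0.0
    while t < t_max:
        # per-site flip rates under the contact-process dynamics
        rates = []
        for i in range(n):
            if x[i] == 1:
                rates.append(1.0)
            else:
                k = x[(i - 1) % n] + x[(i + 1) % n]
                rates.append(lam * k)
        total = sum(rates)
        if total == 0:               # absorbed at the empty configuration
            break
        t += rng.expovariate(total)  # time to the next event
        # choose the flipping site proportionally to its rate
        u, acc = rng.random() * total, 0.0
        for i, r in enumerate(rates):
            acc += r
            if u < acc:
                x[i] ^= 1
                break
    return sum(x) / n

density = contact_process()
```

Running this with λ well below 1 drives the density toward the absorbing empty configuration, while larger λ sustains a positive density, which is the finite-volume shadow of the phase transition discussed above.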
74

Application of hidden Markov chains to consumer preferences in the beer market

Patiño Antonioli, Miguel Ángel 06 December 2011 (has links)
Given the competitive environment in Peru's mass-consumption industries, it is of great interest to determine consumer preferences in order to estimate their needs more efficiently. At this point the use of stochastic tools becomes important for developing long-term predictions, evaluating possible switching between brands, and determining key factors in the consumer's choice process. This analysis is made possible by stochastic models, which are based on probabilities and are therefore useful for estimating the decisions of potential customers. The objective of this document is to develop in depth and present hidden Markov models, orienting the analysis toward discrete-time stochastic processes, namely Markov chains, on the premise that the analysis is improved by recognizing hidden states: states that are difficult to define and that, in hidden Markov models, are the pillar for obtaining the desired results. Related topics are covered and the concepts needed to understand hidden Markov chains and their direct application to the mass-consumption sector are explained. Finally, their direct application to consumer preferences and their contribution to future related studies are demonstrated. Regarding the application to consumer preferences, especially in the ever-changing beer market, the two main critical variables were chosen that decisively affect, and also feed, the uncertainty for which a mathematical-stochastic model is one of the most convenient solutions. These two variables are: the (estimated) sales volume of each company, and the transitions between each company's representative brands.
For these two variables, then, our analysis will put the classical Markov model to the test against the hidden Markov model. / Tesis
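A minimal illustration of the hidden-Markov machinery described above is the forward algorithm, which scores an observed purchase sequence under hidden "loyalty" states. The two-state model and all probabilities below are invented for the example; they are not estimates from the beer-market data.

```python
def forward_likelihood(A, B, pi, obs):
    """Forward algorithm for a discrete HMM: A is the hidden-state transition
    matrix, B[state][symbol] the emission (purchase) probabilities, pi the
    initial distribution.  Returns P(obs) under the model."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [B[s][o] * sum(alpha[r] * A[r][s] for r in range(n))
                 for s in range(n)]
    return sum(alpha)

# Hypothetical two hidden "loyalty" states and two observed brands
A  = [[0.9, 0.1], [0.2, 0.8]]      # loyal <-> switch-prone
B  = [[0.8, 0.2], [0.4, 0.6]]      # P(purchased brand | hidden state)
pi = [0.5, 0.5]
p = forward_likelihood(A, B, pi, [0, 0, 1, 0])
```

Comparing such likelihoods under a fitted hidden model versus a plain (observable) Markov chain is one way to carry out the classical-versus-hidden comparison the abstract proposes.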
75

Methods for modelling precipitation persistence

Weak, Brenda Ann January 2010 (has links)
Digitized by Kansas Correctional Industries
76

Studies into global asset allocation strategies using the Markov-switching model

Emery, Martin, Banking & Finance, Australian School of Business, UNSW January 2008 (has links)
This thesis presents the potential opportunities of global asset allocation and the possible enhancement of these opportunities through a Markov switching model. It extends previous conditional asset pricing studies in global asset allocation, such as those by Ilmanen (1995), Harvey, Solnik and Zhou (1992) and Bilson (1993), in which expected future returns are forecast from conditioning variables. The findings of these studies, and many others, are combined with work on Markov switching models and market segmentation theories to create a uniform structure for analysing regime switching properties in currencies, international equities and international bond markets. The thesis is organised into four major parts: chapters 1-4 develop the unified framework used in the analysis of markets, while chapters 5-7 focus on currencies, international equities and international bonds respectively. For each market a model is constructed based on the structure proposed by Frankel and Froot (1988), in which the market is segmented into two groups: value-based investors and momentum-based investors. To replicate this structure, a two-regime Markov switching model is used, where one regime is constructed as a value regime and the second as a momentum regime. These models are then compared to linear versions to see whether there is any additional benefit to applying regime switching methods. In conjunction with testing the potential benefits of the Markov regime switching process, this study also investigates the very nature, or characteristics, of regime switching in international markets. This is undertaken through some alternate models and enhancements, to see whether any predictability or characterisation of the switching process is possible.
To ensure a comprehensive analysis, several analytical methods have been used, including extensive econometric modelling, statistical analysis of forecasts and portfolio back-testing. A number of conclusions can be drawn from the results. Firstly, there is substantial evidence of regime switching in international markets of the kind described by the Frankel-Froot framework. This has major implications for understanding how international markets function, and the empirical evidence supports many anecdotal observations of market participants. Secondly, there appears to be a strong level of economic relevance to the modelling: the models are shown to generate a theoretical economic profit, which suggests that international markets are only semi-efficient. Further, forecasts generated by the Markov switching models outperform their linear counterparts in economic significance in portfolio tests, although for both equities and bonds the general accuracy of the forecasts tends to be inferior to the linear counterparts. Finally, the nature of regime switching is investigated in detail, particularly with reference to three potential drivers: greed, fear and success. The evidence shows that these can help explain the characteristics of regime switching and in some cases potentially add economic value; however, it seems that success is more important than the broader economic environment.
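The filtered regime probabilities that drive such a two-regime model are typically computed with the Hamilton filter. The sketch below assumes Gaussian returns in each regime; the transition matrix, means and volatilities are invented for illustration and are not the thesis's estimates.

```python
import math

def hamilton_filter(returns, P, mu, sigma):
    """Hamilton filter for a two-regime Markov switching model: P is the 2x2
    regime transition matrix, mu/sigma the per-regime Gaussian parameters.
    Returns the filtered probability of regime 0 after each observation."""
    def density(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    prob = [0.5, 0.5]                 # initial regime probabilities
    filtered = []
    for r in returns:
        # predict: propagate regime probabilities one step ahead
        pred = [sum(prob[i] * P[i][j] for i in range(2)) for j in range(2)]
        # update: weight by the likelihood of the observed return
        lik = [pred[j] * density(r, mu[j], sigma[j]) for j in range(2)]
        total = sum(lik)
        prob = [l / total for l in lik]
        filtered.append(prob[0])
    return filtered

# Hypothetical low-volatility "value" and high-volatility "momentum" regimes
P = [[0.95, 0.05], [0.10, 0.90]]
f = hamilton_filter([0.01, -0.02, 0.08, -0.07], P, mu=[0.0, 0.0], sigma=[0.02, 0.06])
```

A large return such as 0.08 is far more likely under the high-volatility regime, so the filtered probability of the low-volatility regime drops sharply at that observation.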
77

High-dimensional Markov chain models for categorical data sequences with applications

Fung, Siu-leung. January 2006 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
78

Testing reversibility for multivariate Markov processes

Navarro, Marcelo de Carvalho. January 1999 (has links)
Thesis (Ph. D.)--University of Chicago, Dept. of Economics, June 1999. / Includes bibliographical references. Also available on the Internet.
79

A Design of Mandarin Speech Recognition System for Addresses in Taiwan, Hong Kong and China

Wang, San-ming 06 September 2007 (has links)
The objective of this thesis is to design and implement a speech input system for addresses in Taiwan, Mainland China and Hong Kong. The completed system can identify full census and postal addresses in Taiwan, and full postal addresses in Peking, Shanghai, Tien-Jin and Chungchin in China. For Hong Kong, a partial address system is implemented, covering region and street names as well as schools, hotels and other public location names. In this thesis, Mel-frequency cepstrum coefficients, hidden Markov models and a lexicon search strategy are applied to choose the initial address candidates; a Mandarin intonation classification technique is then used to increase the final correct rate. In the speaker-dependent case, a 90% correct rate is reached using an Intel Celeron 2.4 GHz CPU and the Red Hat Linux 9.0 operating system, and the entire address-input task completes within 3 seconds.
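The candidate-selection step described above rests on hidden Markov model decoding. As a hedged sketch of that idea (not the thesis's implementation), the Viterbi algorithm below recovers the most likely hidden-state path for a toy two-state model whose parameters are invented for the example.

```python
def viterbi(A, B, pi, obs):
    """Viterbi decoding: the most likely hidden-state sequence given the
    observations, analogous to picking the best-matching address candidate."""
    n = len(pi)
    delta = [pi[s] * B[s][obs[0]] for s in range(n)]
    back = []
    for o in obs[1:]:
        prev, new = [], []
        for s in range(n):
            best_r = max(range(n), key=lambda r: delta[r] * A[r][s])
            prev.append(best_r)
            new.append(delta[best_r] * A[best_r][s] * B[s][o])
        back.append(prev)
        delta = new
    # backtrack from the best final state
    path = [max(range(n), key=lambda s: delta[s])]
    for prev in reversed(back):
        path.append(prev[path[-1]])
    return list(reversed(path))

# Toy two-state model; parameters invented for illustration
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.9, 0.1], [0.2, 0.8]]
pi = [0.6, 0.4]
path = viterbi(A, B, pi, [0, 0, 1, 1])
```

In a real recognizer the states would be sub-syllable acoustic units and the observations Mel-frequency cepstral feature vectors, with the lexicon constraining the allowable paths.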
80

Acceleration of Iterative Methods for Markov Decision Processes

Shlakhter, Oleksandr 21 April 2010 (has links)
This research focuses on Markov Decision Processes (MDPs), one of the most important and challenging areas of Operations Research. Every day people make many decisions: today's decisions impact tomorrow's, and tomorrow's will impact the ones made the day after. Problems in Engineering, Science, and Business often pose similar challenges: a large number of options and uncertainty about the future. MDPs are among the most powerful tools for solving such problems. There are several standard methods for finding optimal or approximately optimal policies for an MDP; widely employed approaches include value iteration and policy iteration. Although simple to implement, these approaches are nevertheless limited in the size of problems that can be solved, due to the excessive computation required to find close-to-optimal solutions. This thesis proposes new value iteration and modified policy iteration methods for classes of expected discounted MDPs and average cost MDPs. We establish a class of operators that can be integrated into value iteration and modified policy iteration algorithms for Markov Decision Processes, so as to speed up the convergence of the iterative search. Applying these operators requires little additional computation per iteration but reduces the number of iterations significantly. The development of the acceleration operators relies on two key properties of the Markov operator, namely the contraction mapping property and monotonicity in a restricted region. Since the Markov operators of the classical value iteration and modified policy iteration methods for average cost MDPs do not possess the contraction mapping property, for these models we restrict our study to average cost problems that can be formulated as stochastic shortest path problems. The performance improvement is significant, while the implementation of the operators into value iteration is trivial.
Numerical studies show that the accelerated methods can be hundreds of times more efficient for solving MDP problems than the other known approaches. The computational savings can be significant especially when the discount factor approaches 1 and the transition probability matrix becomes dense, in which case the standard iterative algorithms suffer from slow convergence.
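The value iteration scheme that the proposed operators accelerate can be sketched as follows. This is plain, unaccelerated value iteration for a discounted MDP with a tiny invented example; the acceleration operators themselves are the thesis's contribution and are not reproduced here.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Standard value iteration for a discounted MDP.  P[a][s][t] are
    transition probabilities, R[a][s] expected one-step rewards.  The Bellman
    operator is a gamma-contraction, so the sweeps converge geometrically to
    the optimal value function."""
    n = len(R[0])
    V = [0.0] * n
    while True:
        newV = [max(R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(n))
                    for a in range(len(R)))
                for s in range(n)]
        # contraction-based stopping rule for the sup-norm error
        if max(abs(a - b) for a, b in zip(newV, V)) < tol * (1 - gamma) / (2 * gamma):
            return newV
        V = newV

# Tiny 2-state, 2-action example; numbers invented for illustration
P = [[[0.9, 0.1], [0.1, 0.9]],     # action 0
     [[0.5, 0.5], [0.5, 0.5]]]     # action 1
R = [[1.0, 0.0], [0.8, 0.3]]
V = value_iteration(P, R)
```

The slow convergence mentioned above shows up here directly: as gamma approaches 1 the contraction factor weakens and the number of sweeps grows rapidly, which is the regime the acceleration operators target.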
