51

FSR-BAY: modelo probabilístico para la fusión sensorial robótica / FSR-BAY: a probabilistic model for robotic sensory fusion

Aznar Gregori, Fidel, 13 June 2006
Throughout their evolution, both humans and animals have developed the capacity to use their senses to help them survive. One of the pillars of this evolution, sensory fusion, is achieved naturally by animals and humans to obtain the best possible interaction with the surrounding environment. In computing, the emergence of new sensors, advanced processing techniques, and improved hardware has made the fusion of many different types of data feasible. Sensory fusion systems are now used extensively for object tracking, automatic identification, reasoning, and so on. Beyond many other areas of application (such as the monitoring of complex systems and the automatic control of industrial manufacturing), fusion techniques are also used in the fields of artificial intelligence and robotics. This thesis presents the FSR-BAY model for robotic sensory fusion. The model takes into account certain aspects that, in our view, have been treated only secondarily by most current fusion architectures: incomplete and uncertain information, learning capabilities, and the use of a homogeneous representation of information that is independent of the fusion level. Two case studies of the proposed model applied to an autonomous agent are also described. The first deals with cooperative fusion, combining information from several sensors of the same type; the second performs competitive fusion of both homogeneous and heterogeneous information.
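The FSR-BAY model itself is not reproduced in this abstract, but the core idea of Bayesian sensor fusion can be illustrated with a minimal, generic sketch: two independent Gaussian measurements of the same quantity combine into a precision-weighted estimate whose variance is lower than either sensor's alone. The sensor values below are invented for illustration.

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Bayesian fusion of two independent Gaussian measurements of the
    same quantity: the posterior is Gaussian with precision equal to the
    sum of the precisions (a precision-weighted average)."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Two range sensors observing the same distance (metres, made-up values)
mu, var = fuse_gaussian(2.10, 0.04, 1.95, 0.09)
# The fused estimate lies between the readings, closer to the more
# precise sensor, and has lower variance than either sensor alone.
```

This is the "cooperative fusion of sensors of the same type" case in its simplest form; competitive fusion of heterogeneous sources requires modeling each source's likelihood separately.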
52

Statistical physics for compressed sensing and information hiding / Física Estatística para Compressão e Ocultação de Dados

Manoel, Antonio André Monteiro, 22 September 2015
This thesis is divided into two parts. In the first part, we show how problems of statistical inference and combinatorial optimization may be approached within a unified framework that employs tools from fields as diverse as machine learning, statistical physics and information theory, allowing us to i) design algorithms to solve the problems, ii) analyze the performance of these algorithms both empirically and analytically, and iii) compare the results obtained with the optimal achievable ones. In the second part, we use this framework to study two specific problems, one of inference (compressed sensing) and the other of optimization (information hiding). In both cases, we review current approaches, identify their flaws, and propose new schemes to address these flaws, building on the use of message-passing algorithms, variational inference techniques, and spin glass models from statistical physics.
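The thesis builds on message-passing algorithms such as approximate message passing; as a much simpler baseline for the same sparse-recovery problem, the following sketch uses iterative soft thresholding (ISTA), not the thesis's own method. The problem sizes and sparsity pattern are illustrative.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1,
    a baseline sparse-recovery algorithm for compressed sensing."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the quadratic
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 200)) / np.sqrt(50)   # 50 measurements, 200 unknowns
x_true = np.zeros(200)
x_true[[5, 70, 130]] = [3.0, -2.0, 4.0]        # 3-sparse signal
y = A @ x_true                                 # noiseless measurements
x_hat = ista(A, y, lam=0.05, n_iter=2000)      # recovers the support of x_true
```

Message-passing methods like AMP achieve the same recovery with far fewer iterations by exploiting the statistics of the measurement matrix, which is part of what the statistical-physics analysis in the thesis makes precise.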
53

FBST seqüencial / Sequential FBST

Arruda, Marcelo Leme de, 04 June 2012
The FBST (Full Bayesian Significance Test) is a tool developed by Pereira and Stern (1999) to provide a Bayesian alternative to tests of precise hypotheses. Since its introduction, the FBST has proved very useful for solving problems that had no frequentist solution. The test, however, requires that the sample be collected only once; afterwards, the posterior distribution of the parameters is obtained and the evidence measure computed. Motivated by this aspect, analytic and computational approaches are presented for extending the FBST to the sequential decision context (DeGroot, 2004). An algorithm for executing the Sequential FBST is presented and analyzed, together with the source code of software based on this algorithm.
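The thesis's sequential algorithm is not given in the abstract, but the evidence measure the FBST is built on (the e-value) is easy to sketch: it is one minus the posterior mass of the "tangent set" of points with density higher than at the null value. The coin-toss numbers below are an invented example, computed by plain Monte Carlo over a Beta posterior.

```python
import math
import random

def beta_pdf(x, a, b):
    """Density of Beta(a, b), via log-gamma for numerical stability."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def fbst_evalue(a, b, theta0, n_draws=100_000, seed=1):
    """Monte Carlo e-value of the FBST for the precise hypothesis
    theta = theta0 under a Beta(a, b) posterior: one minus the posterior
    mass of the tangent set {theta : p(theta) > p(theta0)}."""
    rng = random.Random(seed)
    p0 = beta_pdf(theta0, a, b)
    hits = sum(beta_pdf(rng.betavariate(a, b), a, b) > p0
               for _ in range(n_draws))
    return 1.0 - hits / n_draws

# Coin example: 7 heads in 10 tosses, uniform prior -> Beta(8, 4) posterior
ev_fair = fbst_evalue(8, 4, 0.5)   # moderate evidence regarding theta = 0.5
ev_mode = fbst_evalue(8, 4, 0.7)   # at the posterior mode the e-value is 1
```

A small e-value counts as evidence against the precise hypothesis; the sequential extension studied in the thesis decides after each observation whether to stop or keep sampling.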
54

Inferencia bayesiana en el modelo de regresión spline penalizado con una aplicación a los tiempos en cola de una agencia bancaria / Bayesian inference in the penalized spline regression model with an application to queue waiting times at a bank branch

Huaraz Zuloaga, Diego Eduardo, 08 April 2013
In many fields of application, regression models are needed to analyze the relationship between two variables. When this relationship is complex, it is difficult to model the data using traditional parametric techniques, so such cases require the flexibility of nonparametric models. Among the various nonparametric models is penalized spline regression, which can be formulated within a linear mixed-model framework; software originally developed for classical and Bayesian inference in mixed models can therefore be used to estimate it. This thesis focuses on Bayesian inference in the penalized spline regression model. To this end, it provides a brief theoretical treatment of this semiparametric model and its relationship with the linear mixed model, Bayesian inference for the model, and a simulation study comparing classical and Bayesian inference across scenarios with different numbers of knots, sample sizes and dispersion levels in the simulated data. Finally, based on the simulation results, the model is applied to estimate customers' queue waiting times at bank branches in order to compute optimal staffing levels under given service-level targets.
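The mixed-model formulation mentioned above can be sketched in a few lines: with a truncated-linear basis, the knot coefficients play the role of random effects and the ridge penalty weight corresponds to the variance ratio sigma_eps^2 / sigma_u^2. This is a generic illustration, not the thesis's implementation; the simulated data and parameter values are invented.

```python
import numpy as np

def pspline_fit(x, y, n_knots=10, lam=1.0):
    """Penalized spline with a truncated-linear basis. Only the knot
    coefficients are penalized (ridge), mirroring the linear mixed model
    in which they are random effects and lam = sigma_eps^2 / sigma_u^2."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    X = np.column_stack([np.ones_like(x), x])          # fixed effects: intercept, slope
    Z = np.maximum(x[:, None] - knots[None, :], 0.0)   # random effects: knot basis
    C = np.hstack([X, Z])
    D = np.diag([0.0, 0.0] + [1.0] * n_knots)          # penalize knot coefficients only
    beta = np.linalg.solve(C.T @ C + lam * D, C.T @ y)
    return C @ beta

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)
fit = pspline_fit(x, y, n_knots=15, lam=0.1)           # smooth estimate of sin(2*pi*x)
```

In the Bayesian version studied in the thesis, lam is not fixed but inferred jointly with the variance components, e.g. by MCMC over the mixed-model representation.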
55

Un enfoque de credibilidad bajo espacios de Hilbert y su estimación mediante modelos lineales mixtos / A credibility approach under Hilbert spaces and its estimation via linear mixed models

Ruíz Arias, Raúl Alberto, 08 April 2013
Credibility theory provides a set of methods that allow an insurance company to adjust future premiums on the basis of individual past experience and information from the whole portfolio. This work presents the main credibility models used in practice, namely those of Bühlmann (1967), Bühlmann-Straub (1970), Jewell (1975) and Hachemeister (1975), analyzing their properties from a geometric point of view through the theory of Hilbert spaces and estimating them through linear mixed models. A simulation study shows the advantage of the latter estimation approach.
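The simplest of the models listed, the Bühlmann (1967) model, can be sketched directly: each contract's premium is a credibility-weighted average of its own mean and the collective mean, with weight Z = n / (n + s²/a), where s² is the within-contract variance and a the between-contract variance. The claims matrix below is an invented toy example, not data from the thesis.

```python
import numpy as np

def buhlmann_premium(claims):
    """Bühlmann (1967) credibility premiums from a claims matrix:
    rows = contracts, columns = years. Returns the premiums and the
    credibility factor Z = n / (n + s2 / a)."""
    claims = np.asarray(claims, dtype=float)
    k, n = claims.shape
    ind_means = claims.mean(axis=1)                    # per-contract experience
    coll_mean = ind_means.mean()                       # collective experience
    s2 = claims.var(axis=1, ddof=1).mean()             # expected within variance
    a = max(ind_means.var(ddof=1) - s2 / n, 0.0)       # between-contract variance
    Z = n / (n + (s2 / a if a > 0 else np.inf))
    return Z * ind_means + (1 - Z) * coll_mean, Z

# Three contracts observed over three years (illustrative figures)
claims = [[100, 120, 110], [300, 280, 320], [150, 160, 140]]
premiums, Z = buhlmann_premium(claims)
# Heterogeneous contracts -> Z close to 1: premiums stay near each
# contract's own mean, shrunk slightly toward the collective mean.
```

Casting this as a linear mixed model (contract effects as random intercepts) is what lets standard mixed-model software produce the same estimates, which is the estimation route the thesis advocates.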
56

Portafolios óptimos bajo estimadores robustos clásicos y bayesianos con aplicaciones al mercado peruano de acciones / Optimal portfolios under classical and Bayesian robust estimators with applications to the Peruvian stock market

Vera Chipoco, Alberto Manuel, 20 July 2015
The portfolio model proposed by Markowitz (1952) is one of the most important in finance. In it, an agent seeks an optimal level for his investments by considering the risk and return of a portfolio composed of a set of stocks. This work proposes an extension of the classical risk estimation in the portfolio model using robust estimators, such as those obtained by the Minimum Volume Ellipsoid, the Minimum Covariance Determinant, the Orthogonalized Gnanadesikan-Kettenring estimator, the estimator based on the covariance matrix of the multivariate t-Student distribution, and Bayesian inference; in the latter case, multivariate Normal and multivariate t-Student models are used. For all the models described, the economic impact and statistical benefits of using these techniques in the investor's portfolio instead of the classical estimation are evaluated, using assets from the Lima Stock Exchange.
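The role of the covariance estimator in the Markowitz model can be sketched generically: minimum-variance weights are a function of the estimated covariance matrix, so outliers that distort the sample covariance distort the weights. The sketch below contrasts the sample covariance with a crude outlier-trimmed alternative; it is only MCD-flavored, not any of the thesis's actual estimators, and the return data are simulated.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w = Sigma^-1 1 / (1' Sigma^-1 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(42)
returns = rng.multivariate_normal(
    mean=[0.001, 0.002, 0.0015],
    cov=[[4e-4, 1e-4, 5e-5], [1e-4, 9e-4, 1e-4], [5e-5, 1e-4, 2.5e-4]],
    size=500)
returns[:5] *= 10                       # a few gross outliers contaminate the sample

w_sample = min_variance_weights(np.cov(returns, rowvar=False))

# Crude robust alternative: covariance of the 90% of observations closest
# to the coordinate-wise median (an MCD-style trimmed subset).
med = np.median(returns, axis=0)
d = np.linalg.norm(returns - med, axis=1)
subset = returns[d <= np.quantile(d, 0.9)]
w_robust = min_variance_weights(np.cov(subset, rowvar=False))
```

The thesis's comparison is precisely of this kind, but with the formally defined robust and Bayesian covariance estimators listed in the abstract in place of the trimmed covariance used here.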
57

Ensaios sobre macroeconometria bayesiana aplicada / Essays on applied Bayesian macroeconometrics

Santos, Fernando Genta dos, 03 February 2012
The three articles that make up this thesis have in common the use of Bayesian macroeconometric techniques, applied to dynamic stochastic general equilibrium (DSGE) models, to investigate specific problems; the thesis thereby seeks to fill important gaps in the national and international literature. In the first article, I estimate the importance of the cost channel of monetary policy through a New Keynesian DSGE model. To this end, the conventional model is modified by assuming that a share of firms must borrow to pay their payroll, so that an increase in the nominal interest rate raises the effective unit labor cost and may produce an inflation hike. The article analyzes the conditions needed for the model to exhibit a positive response of inflation to a monetary tightening, a phenomenon known as the price puzzle. Because the DSGE-VAR methodology is used, the results can be compared both with the empirical literature that treats the puzzle as an identification problem of VAR models and with the theoretical literature that evaluates the cost channel through New Keynesian models.
In the second article, we assess the extent to which inflation expectations generated by a DSGE model are consistent with the expectations compiled by the Central Bank of Brazil (BCB). This procedure allows us to analyze the rationality of economic agents' expectations in Brazil, comparing them not with observed inflation but with the forecasts of a model built on the rational-expectations hypothesis. In addition, we analyze the impact of using the expectations compiled by the BCB in the estimation of our model, looking at the structural parameters, impulse response functions and variance decomposition. Finally, in the third article, I modify the conventional New Keynesian model to include the theory of unemployment proposed by the economist Jordi Galí. This fills an important gap in the national literature, dominated by models that do not allow for labor-market disequilibria capable of generating involuntary unemployment. The alternative interpretation of the labor market used here overcomes identification problems notoriously present in the literature, making the resulting model more robust to the Lucas critique. The resulting model is then used, among other things, to assess the determinants of the unemployment rate over the last decade.
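The second article's rationality question has a classical textbook counterpart: a Mincer-Zarnowitz regression of outcomes on forecasts, where unbiased (rational) expectations imply an intercept near 0 and a slope near 1. The sketch below runs that test on simulated data from a rational forecaster; it is a generic illustration, not the thesis's DSGE-based procedure, and all figures are invented.

```python
import numpy as np

def mincer_zarnowitz(actual, forecast):
    """OLS of actual on forecast. Under rational (unbiased) expectations
    the intercept should be ~0 and the slope ~1."""
    X = np.column_stack([np.ones_like(forecast), forecast])
    (a, b), *_ = np.linalg.lstsq(X, actual, rcond=None)
    return a, b

rng = np.random.default_rng(7)
expected = rng.uniform(3.0, 7.0, 120)            # surveyed 12-month expectations (%)
actual = expected + rng.normal(0.0, 0.5, 120)    # rational: forecast error is pure noise
a, b = mincer_zarnowitz(actual, expected)
# For this simulated rational forecaster, a is near 0 and b near 1.
```

The thesis replaces the observed-inflation benchmark with the model-implied rational forecast, which is what makes the comparison a test of the expectations rather than of the model's fit.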
59

Dynamic Bayesian statistical models for the estimation of the origin-destination matrix

Anselmo Ramalho Pitombeira Neto, 29 June 2015
In transportation planning, one of the first steps is to estimate travel demand. A product of the estimation process is the so-called origin-destination matrix (OD matrix), whose entries correspond to the number of trips between pairs of zones in a geographic region in a reference time period. Traditionally, the OD matrix has been estimated through direct methods, such as home-based surveys, roadside interviews and automatic license plate recognition. These direct methods require large samples to achieve a target statistical error, which may be technically or economically infeasible. Alternatively, one can use a statistical model to indirectly estimate the OD matrix from observed traffic volumes on links of the transportation network. The first estimation models proposed in the literature assume that traffic volumes in a sequence of days are independent and identically distributed samples from a static probability distribution; such static models do not allow for variations in mean OD flows or non-constant variability over time. In contrast, day-to-day dynamic models are in theory better able to capture underlying changes in system parameters that are only indirectly observed through variations in traffic volumes. Even so, there is still a dearth of statistical models in the literature that account for the day-to-day dynamic evolution of transportation systems. In this thesis, our objective is to assess the potential gains and limitations of day-to-day dynamic models for the estimation of the OD matrix based on link volumes. First, we review the main static and dynamic models available in the literature. We then describe our proposed day-to-day dynamic Bayesian model based on the theory of linear dynamic models. The proposed model is tested by means of computational experiments and compared with a static estimation model and with the generalized least squares (GLS) model.
The results show some advantage in favor of dynamic models in informative scenarios, while in non-informative scenarios the performance of the models was equivalent. The experiments also indicate a significant dependence of the estimation errors on the assignment matrices.
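The GLS benchmark mentioned above can be sketched on a toy network: link volumes v = A x relate the unobserved OD flows x to counts through an assignment matrix A, and since there are typically fewer links than OD pairs, the estimator combines the counts with a prior (target) matrix. This is a generic Cascetta-style GLS sketch, not the thesis's dynamic Bayesian model; the network and all numbers are invented.

```python
import numpy as np

def gls_od_estimate(A, v, W, x0, V0):
    """GLS estimator of OD flows x from link volumes v = A x + e,
    combining a target matrix x0 (covariance V0) with count noise
    covariance W: minimize (x-x0)' V0^-1 (x-x0) + (v-Ax)' W^-1 (v-Ax)."""
    V0_inv = np.linalg.inv(V0)
    W_inv = np.linalg.inv(W)
    H = V0_inv + A.T @ W_inv @ A
    return np.linalg.solve(H, V0_inv @ x0 + A.T @ W_inv @ v)

# Toy network: 3 OD pairs observed through 2 counted links
A = np.array([[1.0, 1.0, 0.0],      # link 1 carries OD flows 1 and 2
              [0.0, 1.0, 1.0]])     # link 2 carries OD flows 2 and 3
x_true = np.array([100.0, 50.0, 80.0])
v = A @ x_true                      # observed link counts
x0 = np.array([70.0, 60.0, 50.0])   # outdated target matrix
x_hat = gls_od_estimate(A, v, W=np.eye(2), x0=x0, V0=np.eye(3) * 100.0)
# x_hat nearly reproduces the counts while staying close to the target,
# since the problem is underdetermined (2 equations, 3 unknowns).
```

The day-to-day dynamic model in the thesis replaces the fixed target with a state that evolves over days via a linear dynamic model, so each day's counts update the OD estimate recursively.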
60

Lógica probabilística baseada em redes Bayesianas relacionais com inferência em primeira ordem / Probabilistic logic based on relational Bayesian networks with first-order inference

Polastro, Rodrigo Bellizia, 03 May 2012
This work presents three main contributions: i. a new probabilistic description logic; ii. a new first-order inference algorithm for terminologies expressed in this logic; and iii. practical applications to real tasks. The proposed logic, referred to as crALC (credal ALC), adds probabilistic inclusions to the popular logic ALC, combining the usual acyclicity and Markov conditions and adopting interpretation-based semantics. As exact inference does not scale well due to the presence of quantifiers (existential and universal restrictions), we present a first-order loopy propagation algorithm that behaves appropriately for non-trivial domain sizes. A series of tests was performed comparing the proposed algorithm against traditional ones; the results clearly favor the first-order algorithm. Two applications in the field of mobile robotics are presented, using the new probabilistic logic and the inference algorithm. Though the problems addressed are relatively simple, they form the basis of many other tasks in mobile robotics and are an important step in representing the knowledge of autonomous agents/robots and reasoning about it.
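The key point about domain sizes can be shown with a tiny grounded example (invented, not from the thesis): a probabilistic concept and a probabilistic inclusion ground into per-individual Bayesian-network fragments, and the answer to an existential query depends on how many individuals the domain contains, which is why the inference must be first-order rather than propositional.

```python
# Toy crALC-style terminology grounded over a finite domain:
#   P(A(x)) = 0.4 for every individual x
#   P(B(x) | A(x)) = 0.8,  P(B(x) | not A(x)) = 0.1
# Query: probability that at least one individual satisfies B
# (the grounding of an existential restriction).
P_A, P_B_A, P_B_NOT_A = 0.4, 0.8, 0.1

def p_b():
    """Marginal P(B(x)) for a single individual, by total probability."""
    return P_A * P_B_A + (1 - P_A) * P_B_NOT_A

def p_exists_b(n):
    """P(some individual satisfies B) in a domain of size n, assuming
    individuals are probabilistically independent. The value grows with
    n, so first-order inference must track the domain size explicitly."""
    return 1.0 - (1.0 - p_b()) ** n

# P(B(x)) = 0.4 * 0.8 + 0.6 * 0.1 = 0.38 regardless of the domain,
# while the existential query approaches 1 as the domain grows.
probs = [p_exists_b(n) for n in (1, 5, 20)]
```

Real crALC terminologies couple individuals through roles, producing loopy grounded networks; the thesis's first-order loopy propagation exploits the symmetry across individuals instead of enumerating groundings as done here.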
