181

Reinforcement learning and reward estimation for dialogue policy optimisation

Su, Pei-Hao January 2018 (has links)
Modelling dialogue management as a reinforcement learning task enables a system to learn to act optimally by maximising a reward function. This reward function is designed to induce the system behaviour required for goal-oriented applications, which usually means fulfilling the user's goal as efficiently as possible. However, in real-world spoken dialogue systems, the reward is hard to measure, because the goal of the conversation is often known only to the user. Certainly, the system can ask the user whether the goal has been satisfied, but this can be intrusive; furthermore, in practice, the reliability of the user's response has been found to be highly variable. In addition, due to the sparsity of the reward signal and the large search space, reinforcement learning-based dialogue policy optimisation is often slow. This thesis presents several approaches to address these problems. To better evaluate a dialogue for policy optimisation, two methods are proposed. First, a recurrent neural network-based predictor, pre-trained from off-line data, is proposed to estimate task success during subsequent on-line dialogue policy learning, avoiding noisy user ratings and the problem of not knowing the user's goal. Second, an on-line learning framework is described in which a dialogue policy is jointly trained alongside a reward function modelled as a Gaussian process with active learning; this mitigates the noisiness of user ratings and minimises user intrusion. It is shown that both the off-line and on-line methods achieve practical policy learning in real-world applications, while the latter provides a more general joint learning system that learns directly from users. To enhance policy learning speed, the use of reward shaping is explored and shown to be effective and complementary to the core policy learning algorithm. Furthermore, as deep reinforcement learning methods have the potential to scale to very large tasks, this thesis also investigates their application to dialogue systems.
Two sample-efficient algorithms, trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER), are introduced. In addition, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning to handle the cold start problem. Combining these two methods, a practical approach is demonstrated to effectively learn deep reinforcement learning-based dialogue policies in a task-oriented information seeking domain. Overall, this thesis provides solutions which allow truly on-line and continuous policy learning in spoken dialogue systems.
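The reward shaping explored above, in one standard form (potential-based shaping), can be sketched in a few lines. This is an illustrative Python sketch, not the thesis implementation; the dialogue state, potential function, and discount factor are assumptions:

```python
# Hypothetical sketch of potential-based reward shaping for a slot-filling
# dialogue. The potential phi and the state representation are illustrative
# placeholders, not the thesis's actual design.

GAMMA = 0.99  # discount factor (assumed)

def phi(state):
    # Toy potential: progress towards filling the required slots.
    return state["slots_filled"] / state["slots_total"]

def shaped_reward(reward, state, next_state, gamma=GAMMA):
    # F(s, s') = gamma * phi(s') - phi(s): potential-based shaping
    # preserves the optimal policy while densifying the reward signal.
    return reward + gamma * phi(next_state) - phi(state)

s  = {"slots_filled": 1, "slots_total": 4}
s2 = {"slots_filled": 2, "slots_total": 4}
print(shaped_reward(0.0, s, s2))  # 0.245 -- small bonus for progress
```

The added term telescopes over an episode, so the total shaped return differs from the original only by a constant depending on the start and end states.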
182

Navegação autônoma para robôs móveis usando aprendizado supervisionado. / Autonomous navigation for mobile robots using supervised learning

Jefferson Rodrigo de Souza 21 March 2014 (has links)
Autonomous navigation is a fundamental problem in the field of mobile robotics. Algorithms capable of driving a robot to its destination safely and efficiently are a prerequisite for mobile robots to successfully perform the various tasks assigned to them. Depending on the complexity of the environment and the task to be executed, programming navigation algorithms is not a trivial problem. This thesis addresses the development of autonomous navigation systems based on supervised learning techniques. More specifically, two distinct problems are considered: robot/vehicle navigation in urban environments and robot navigation in unstructured environments. In the first case, the robot/vehicle must avoid obstacles and keep itself on the navigable road, learning from examples provided by a human driver. In the second case, the robot must identify and avoid irregular areas (higher vibration), reducing energy consumption; here, learning was based on information obtained from sensors. In both cases, supervised learning algorithms allowed the robots to navigate safely and efficiently during the experimental tests performed.
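The second, sensor-supervised setting can be illustrated with a toy classifier; the vibration feature values and labels below are invented for illustration and are not the thesis's model or data:

```python
# Illustrative sketch (not the thesis code): a nearest-neighbour rule
# labelling terrain patches as smooth or rough from a single vibration
# feature, the kind of sensor-derived supervision described above.

# (accelerometer variance, label) training pairs -- made-up numbers
train = [(0.02, "smooth"), (0.05, "smooth"), (0.40, "rough"), (0.55, "rough")]

def classify(vibration):
    # 1-NN on the scalar feature: pick the closest training example.
    return min(train, key=lambda p: abs(p[0] - vibration))[1]

print(classify(0.03))  # smooth
print(classify(0.50))  # rough
```

A real system would use richer features (accelerometer spectra, speed, attitude) and a stronger learner, but the supervision pattern is the same: sensor-derived labels, then prediction on unseen terrain.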
183

Uma abordagem computacional para predição de mortalidade em UTIs baseada em agrupamento de processos gaussianos / A Gaussian process clustering based approach to mortality prediction in ICUs

Caixeta, Rommell Guimarães 09 September 2016 (has links)
The analysis of a patient's physiological variables can improve death-risk classification in Intensive Care Units (ICUs) and support decision making and resource management. This work proposes a computational approach to mortality prediction in ICUs through the analysis of physiological variables. Physiological variables that constitute time series (e.g., blood pressure) are represented as Dependent Gaussian Processes (DGPs). Variables that do not represent time series (e.g., age) are used to cluster the DGPs with decision trees. Classification is performed according to a distance measure that combines Dynamic Time Warping and the Kullback-Leibler divergence. On the test dataset considered, this approach outperforms the standard SAPS-I method already in use, and its results are similar to other computational methods published by the research community. Results comparing variations of the proposed method show that there is an advantage in using the proposed clustering of DGPs.
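One half of the hybrid distance measure above, Dynamic Time Warping, admits a compact sketch (illustrative only, on toy numeric series; the thesis applies the combined DTW + Kullback-Leibler measure to Gaussian-process representations):

```python
# A minimal dynamic-time-warping distance between two sequences, using
# the classic O(n*m) dynamic-programming recurrence with absolute-value
# local cost.

def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0 -- warping absorbs the repeat
```

Unlike a pointwise Euclidean distance, DTW tolerates local time shifts, which matters when comparing physiological series sampled at irregular moments.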
184

Gaussianização de interferência, modulação multinível e identificação de distorções em sistemas de comunicações / Gaussianization of interference, multilevel modulation and identification of distortions in communication systems

Gomes, Marco Aurelio Cazarotto, 1984- 07 March 2014 (has links)
Advisor: Renato da Rocha Lopes / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / This work is divided into two parts. In the first, we propose a method for improving the estimation of impedance-mismatch information in coaxial cable networks. These impedance mismatches generate micro-reflections, which in turn distort the network. Information about the location of the mismatches can help maintenance teams locate and repair them. Currently, this information is estimated in the DOCSIS preventive-maintenance standard, based only on equalizer coefficients already known by the system; the method therefore requires no change to the system. We show that, in contrast with the current DOCSIS approach, our method provides a finer estimate and allows the information to be estimated even when more than one micro-reflection is present. In the second part, we propose solutions to two problems related to the Gaussian channel. First, we consider the problem from the point of view of the first user of a multiple access channel (MAC), where the second user's signal is treated as noise during detection. This strategy is known as successive interference cancellation (SIC). The first user employs a Low-Density Parity-Check (LDPC) code, whose detection assumes additive white Gaussian noise (AWGN). We therefore propose two strategies (shaping and a Fourier transform) to modify the distribution of the second user's signal so that it approaches a Gaussian distribution, and we show that both outperform a strategy based on modulation with a discrete uniform distribution. Second, we propose a simple system that approaches the capacity of the AWGN channel by exploiting a parallel between the AWGN channel and the MAC. In our proposal, what were users in the MAC become parallel, independent levels of a multilevel code, where each level uses a binary input and a capacity-approaching code, resulting in a system with simple coding that operates close to capacity. Decoding again uses SIC, so the receiver consists of a series of simple binary receivers. We show that the proposed system operates at a small gap from the AWGN capacity, and that this gap can be attributed solely to the gap of the code itself. / Doctorate in Electrical Engineering (Telecommunications and Telematics)
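The successive interference cancellation step can be illustrated with a two-user toy model; the amplitudes and BPSK symbols below are illustrative assumptions, not the thesis's coded system:

```python
# Toy illustration of successive interference cancellation (SIC): detect
# the stronger user's symbol first, subtract its contribution from the
# received sample, then detect the weaker user on the residual.

A1, A2 = 2.0, 1.0  # user amplitudes (user 1 is the stronger one)

def sic_decode(y):
    s1 = 1 if y >= 0 else -1          # detect user 1, treating user 2 as noise
    residual = y - A1 * s1            # cancel user 1's contribution
    s2 = 1 if residual >= 0 else -1   # detect user 2 on the residual
    return s1, s2

y = A1 * 1 + A2 * (-1) + 0.05  # transmitted x1 = +1, x2 = -1, small noise
print(sic_decode(y))  # (1, -1)
```

In the thesis's multilevel-coding view, the "users" become independent binary levels of one code, each decoded by such a simple binary receiver in turn.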
185

Predicting the absorption rate of chemicals through mammalian skin using machine learning algorithms

Ashrafi, Parivash January 2016 (has links)
Machine learning (ML) methods have been applied to the analysis of a range of biological systems. This thesis evaluates the application of these methods to the problem domain of skin permeability. ML methods offer great potential both in predictive ability and in their capacity to provide mechanistic insight into, in this case, the phenomenon of skin permeation. Historically, refining the mathematical models used to predict percutaneous drug absorption has been regarded as a key factor in this field, and Quantitative Structure-Activity Relationship (QSAR) models are used extensively for this purpose. However, advanced ML methods successfully outperform traditional linear QSAR models. In this thesis, the application of ML methods to percutaneous absorption is investigated and evaluated. The principal approach used is Gaussian process (GP) regression. This research seeks to enhance prediction performance by using local non-linear models obtained by applying clustering algorithms. In addition, to increase model quality, a kernel is constructed from both numerical chemical variables and categorical experimental descriptors. A Monte Carlo algorithm is also employed to generate reliable models from the variable data that are inevitable in biological experiments. The datasets used in this study are small, which may raise over-fitting and under-fitting problems; I therefore attempt to find optimal values of skin permeability using GP optimisation algorithms within small datasets. Although these methods are applied here to the field of percutaneous absorption, they may be applied more broadly to any biological system.
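The idea of a kernel built from both numerical and categorical descriptors can be sketched as a product of an RBF term and a categorical matching term; the descriptor names, values, length-scale, and the product form are assumptions for illustration, not the kernel the thesis constructs:

```python
# Sketch of a mixed numerical/categorical covariance: an RBF kernel over
# numeric chemical descriptors multiplied by the fraction of matching
# categorical experimental descriptors. All inputs here are invented.
import math

def mixed_kernel(x, y, length_scale=1.0):
    # x, y: (numeric_tuple, categorical_tuple)
    num_x, cat_x = x
    num_y, cat_y = y
    sq = sum((a - b) ** 2 for a, b in zip(num_x, num_y))
    rbf = math.exp(-sq / (2 * length_scale ** 2))
    # simple overlap kernel on the categorical part
    overlap = sum(a == b for a, b in zip(cat_x, cat_y)) / len(cat_x)
    return rbf * overlap

a = ((0.5, 1.2), ("human", "invitro"))
b = ((0.5, 1.2), ("human", "invivo"))
print(mixed_kernel(a, b))  # 0.5 -- identical numerics, half the categories match
```

Because a product of valid kernels is itself a valid kernel, such a construction can be dropped directly into a GP regression model.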
186

Uncertainty quantification on Pareto fronts and high-dimensional strategies in Bayesian optimization, with applications in multi-objective automotive design / Quantification d’incertitude sur fronts de Pareto et stratégies pour l’optimisation bayésienne en grande dimension, avec applications en conception automobile

Binois, Mickaël 03 December 2015 (has links)
This dissertation deals with optimizing expensive or time-consuming black-box functions to obtain the set of all optimal compromise solutions, i.e. the Pareto front. In automotive design, the evaluation budget is severely limited by the numerical simulation times of the physical phenomena considered. In this context, it is common to resort to “metamodels” (models of models) of the numerical simulators, especially Gaussian processes, which enable new observations to be added sequentially while balancing local search and exploration. Complementing existing multi-objective Expected Improvement criteria, we propose to estimate the position of the whole Pareto front, along with a quantification of the associated uncertainty, from conditional simulations of Gaussian processes. A second contribution addresses this problem from a different angle, using copulas to model the multivariate cumulative distribution function. To cope with a possibly high number of input variables, we adopt the REMBO algorithm: through a random embedding defined by a matrix, it allows fast optimization when only a few variables are actually influential, though unknown. Several improvements are proposed, including a dedicated covariance kernel, a selection procedure for the low-dimensional domain and for the random directions, and an extension to the multi-objective setup. Finally, an industrial application in car crash-worthiness demonstrated significant gains in performance and in the number of simulations required, and served to test the R package GPareto developed during this thesis.
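The central object of the dissertation, the empirical Pareto front of evaluated designs, can be extracted with a short helper (a naive O(n²) sketch assuming minimisation of all objectives and no duplicate points):

```python
# Extract the non-dominated points (empirical Pareto front) from a list
# of objective vectors, assuming all objectives are to be minimised.

def pareto_front(points):
    front = []
    for p in points:
        # p is dominated if some other point is <= in every objective
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(pts))  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```

In the Bayesian optimization setting above, this helper would be applied not to raw evaluations but to conditional GP simulations, yielding a distribution of simulated fronts and hence the uncertainty quantification the thesis describes.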
187

Étude de classes de noyaux adaptées à la simplification et à l’interprétation des modèles d’approximation. Une approche fonctionnelle et probabiliste. / Covariance kernels for simplified and interpretable modeling. A functional and probabilistic approach.

Durrande, Nicolas 09 November 2011 (has links)
The framework of this thesis is the approximation of functions whose value is known at a limited number of points. More precisely, we consider the so-called kriging models from two points of view: approximation in reproducing kernel Hilbert spaces, and Gaussian process regression. When the function to approximate depends on many variables, the required number of points can become very large, and the interpretation of the obtained models remains difficult because the model is still a high-dimensional function. In light of those remarks, the main part of our work addresses the issue of simplified models by studying a key concept of kriging models, the kernel. More precisely, the following aspects are addressed: additive kernels for additive models, and kernel decomposition for sparse modeling. Finally, we propose a class of kernels that is well suited for functional ANOVA representation and global sensitivity analysis.
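The additive-kernel construction mentioned above can be sketched directly: summing one-dimensional kernels yields GP models whose sample paths are sums of univariate functions. The RBF form and length-scale below are illustrative choices, not the thesis's specific kernels:

```python
# Sketch of an additive kernel: k(x, y) = k1(x1, y1) + k2(x2, y2).
# A GP with this covariance has sample paths of the form f1(x1) + f2(x2),
# i.e. an additive model that stays interpretable in high dimension.
import math

def rbf_1d(a, b, ls=1.0):
    # squared-exponential kernel on a single coordinate
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

def additive_kernel(x, y):
    return sum(rbf_1d(xi, yi) for xi, yi in zip(x, y))

print(additive_kernel((0.0, 0.0), (0.0, 0.0)))  # 2.0 (two unit diagonal terms)
```

Since a sum of valid covariance kernels is again a valid covariance kernel, the construction is legitimate for any number of input dimensions.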
188

Model independent searches for New Physics using Machine Learning at the ATLAS experiment / Recherche de Nouvelle Physique indépendante d'un modèle en utilisant l’apprentissage automatique sur l’experience ATLAS

Jimenez, Fabricio 16 September 2019 (has links)
We address the problem of model-independent searches for New Physics (NP) at the Large Hadron Collider (LHC) using the ATLAS detector. Particular attention is paid to the development and testing of novel machine learning techniques for that purpose. This work presents three main results. First, we put in place a system for automatic generic-signature monitoring within TADA, an ATLAS software tool. We explored over 30 signatures during the 2017 data-taking period, and no particular discrepancy was observed with respect to simulations of Standard Model processes. Second, we propose a collective anomaly detection method for model-independent searches for NP at the LHC: a parametric approach using a semi-supervised learning algorithm. This approach uses a penalized likelihood and is able simultaneously to perform appropriate variable selection and to detect possible collective anomalous behavior in the data with respect to a given background sample. Third, we present preliminary studies on modeling backgrounds and detecting generic signals in invariant-mass spectra using Gaussian processes (GPs) with no prior information on the mean. Two methods were tested on two datasets: a two-step procedure on a dataset drawn from the Standard Model simulations used for the ATLAS General Search, in the channel containing two jets in the final state, and a three-step procedure on a dataset simulated for signal (Z′) and background (Standard Model) in the search for resonances in the top-pair invariant-mass spectrum. Our study is a first step towards a method that takes advantage of GPs as a modeling tool applicable to several signatures in a more model-independent setup.
189

Hyper-optimalizace neuronových sítí založená na Gaussovských procesech / Gaussian Processes Based Hyper-Optimization of Neural Networks

Coufal, Martin January 2020 (has links)
The aim of this master's thesis is to create a tool for optimising the hyper-parameters of artificial neural networks. The tool must be able to optimise several hyper-parameters at once, and these may moreover be correlated. I solved this problem by implementing an optimiser that uses Gaussian processes to predict the influence of the individual hyper-parameters on the resulting accuracy of the neural network. Experiments on several benchmark functions showed that the implemented tool achieves better results than optimisers based on random search, and thus reduces, on average, the number of optimisation steps required. Random-search optimisation achieved better results only in the first optimisation steps, before the Gaussian-process-based optimiser had built a sufficiently accurate model of the problem. However, almost all experiments performed on the MNIST dataset showed better results for the random-search optimiser. These differences between the experiments are probably due to the complexity of the chosen benchmark functions or to the chosen parameters of the implemented optimiser.
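A Gaussian-process hyper-parameter optimiser of the kind described typically maximises an acquisition function such as expected improvement; the following is a generic sketch of that criterion for minimisation, not the implemented tool's exact formulation:

```python
# Expected improvement (EI) at a candidate point, given the GP posterior
# mean mu and standard deviation sigma there, and the best observed
# objective value so far (minimisation convention).
import math

def expected_improvement(mu, sigma, best):
    if sigma == 0.0:
        return max(best - mu, 0.0)  # no uncertainty: improvement is deterministic
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))       # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    # exploitation term + exploration term
    return (best - mu) * cdf + sigma * pdf

# a point predicted slightly better than the incumbent, with some uncertainty
print(round(expected_improvement(mu=0.9, sigma=0.2, best=1.0), 4))  # 0.1396
```

The two terms make the exploration/exploitation trade-off explicit: EI grows both when the predicted mean beats the incumbent and when the posterior uncertainty is large, which is why random search can win early, before the GP model becomes accurate.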
190

Matéria escura como campo escalar : aspectos teóricos e observacionais / Dark matter as a scalar field: theoretical and observational aspects

Escobal, Anderson Almeida January 2020 (has links)
Advisor: José Fernando de Jesus / We studied the real scalar field as a possible candidate to explain dark matter in the universe. In the context of a free scalar field with a quadratic potential, after finding the dynamical equations of the model we used observational data to constrain the free parameters and thus found a lower limit for the mass of order $10^{-34}$ eV, a value close to that found by other authors. It was not possible to find an upper limit for the scalar-field dark matter mass by combining the $H(z)$ and SNe Ia data. As verified in this work and observed in other studies, dark matter can be described by a real scalar field. In another line of research, using a non-parametric statistical method involving the so-called Gaussian Processes, we obtained a transition redshift of $z_t = 0.59^{+0.12}_{-0.11}$ from the $H(z)$ data and $z_t = 0.683^{+0.11}_{-0.082}$ from the SNe Ia data. / Master's
