About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Experimentos com probabilidade e estatística : Jankenpon, Monte Carlo, variáveis antropométricas / Experiments with probability and statistics : Jankenpon, Monte Carlo, anthropometric variables

Coura, André da Silva, 1984- 26 August 2018 (has links)
Advisor: Laura Leticia Ramos Rifo / Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Matemática Estatística e Computação Científica / Abstract: This dissertation presents a practical approach to teaching mathematics at the elementary and secondary levels. More specifically, it presents concepts of basic statistics, such as handling information and studying probabilities. These concepts are of great importance both scientifically (in experimental work, for example) and socially (in understanding population characteristics), and they are part of students' everyday lives. We therefore considered it essential to develop the competencies and skills needed to organize and understand information. Experiments were carried out to apply the concepts presented in the classroom, along with a survey posing questions about eating habits and physical exercise. Beyond applying the concepts, these experiments aim to develop logical reasoning and a critical eye in the target audience for topics related to mathematics, using everyday situations. For the analysis, we organized and interpreted the information using tables and charts. The main goal of the survey was to show how statistical theory is used for decision-making and, in this case, for improving one's own quality of life. We hope that the methodology presented in this work can contribute to disseminating knowledge of these mathematical tools at the elementary and secondary school levels / Master's / Matemática em Rede Nacional / Mestre em Matemática em Rede Nacional
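As a toy illustration of the classroom experiment named in the title (not taken from the dissertation itself), the sketch below uses Monte Carlo simulation to estimate the outcome probabilities of Jankenpon (rock-paper-scissors) under uniformly random play; each probability converges to 1/3.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def jankenpon_trial(rng):
    """Play one round between two players choosing uniformly at random."""
    a, b = rng.choice(MOVES), rng.choice(MOVES)
    if a == b:
        return "tie"
    return "win" if BEATS[a] == b else "loss"

def estimate_probabilities(n_trials=100_000, seed=0):
    """Monte Carlo estimate of player A's win/tie/loss probabilities."""
    rng = random.Random(seed)
    counts = {"win": 0, "tie": 0, "loss": 0}
    for _ in range(n_trials):
        counts[jankenpon_trial(rng)] += 1
    return {k: v / n_trials for k, v in counts.items()}

print(estimate_probabilities())  # each value is close to 1/3
```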
492

Teoremas limiares para o modelo SIR estocástico de epidemia / Threshold theorems for the SIR stochastic epidemic model

Estrada López, Mario Andrés, 1989- 27 August 2018 (has links)
Advisor: Élcio Lebensztayn / Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Matemática Estatística e Computação Científica / Abstract: This work studies the SIR (susceptible-infected-removed) epidemic model in its deterministic and stochastic versions. Our objective is to find bounds for the probability that the size of the epidemic does not exceed a certain proportion of the initial number of susceptible individuals. We begin by presenting the definitions and dynamics of the deterministic model for a general epidemic, and we obtain a threshold value of the initial number of susceptibles that determines whether or not the epidemic takes off. As the central part of this work, we consider a stochastic SIR epidemic model with no latent period, that is, where an infected individual can transmit the infection from the moment of being infected. The model starts from an initial configuration of susceptible and infected individuals, and the study focuses on the random variable "size of the epidemic", defined as the difference between the number of susceptible individuals at the start and at the end of the propagation of the disease. As in the deterministic part, we obtain threshold theorems for the stochastic epidemic model. The methods used to prove them are analysis of the embedded Markov chain and stochastic comparison / Master's / Statistics / Mestre em Estatística
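The "size of the epidemic" variable studied above is easy to explore by simulation. The following sketch (not from the dissertation; all parameter values are assumed) runs the embedded jump chain of the general stochastic epidemic with no latent period and estimates the probability that the epidemic size stays below a given proportion of the initial susceptibles — the kind of event the threshold theorems bound.

```python
import random

def epidemic_size(s0, i0, beta, gamma, rng):
    """Simulate the embedded jump chain of the general stochastic epidemic.

    At state (s, i) the next event is an infection with probability
    beta*s / (beta*s + gamma), and a removal otherwise. Returns the
    epidemic size: the number of initially susceptible individuals
    ever infected.
    """
    s, i = s0, i0
    while i > 0 and s > 0:
        if rng.random() < beta * s / (beta * s + gamma):
            s, i = s - 1, i + 1   # infection
        else:
            i -= 1                # removal
    return s0 - s

def prob_size_below(prop, n_runs=10_000, s0=100, i0=1, beta=0.02, gamma=1.0):
    """Estimate P(epidemic size <= prop * s0) over many simulated epidemics."""
    rng = random.Random(0)
    hits = sum(epidemic_size(s0, i0, beta, gamma, rng) <= prop * s0
               for _ in range(n_runs))
    return hits / n_runs

print(prob_size_below(0.2))  # P(size <= 20% of initial susceptibles)
```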
493

Renewal theory for uniform random variables

Spencer, Steven Robert 01 January 2002 (has links)
This project focuses on finding formulas for E[N(t)], starting from one of the classical problems in the discipline and then extending the scope of the problem to overall times greater than the time t of the original problem. The expected values in these cases are found using the uniform and exponential distributions of random variables.
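As a quick illustration of the renewal function in the uniform case (a standard fact, not a result specific to this project): for i.i.d. Uniform(0,1) interarrival times and 0 ≤ t ≤ 1, E[N(t)] = e^t − 1, which a short simulation confirms.

```python
import math
import random

def count_renewals(t, rng):
    """Count N(t): renewals with Uniform(0,1) interarrival times by time t."""
    total, n = 0.0, 0
    while True:
        total += rng.random()   # next interarrival time
        if total > t:
            return n
        n += 1

def mean_renewals(t, n_runs=200_000, seed=0):
    rng = random.Random(seed)
    return sum(count_renewals(t, rng) for _ in range(n_runs)) / n_runs

for t in (0.25, 0.5, 1.0):
    print(f"t={t}: simulated {mean_renewals(t):.4f}, exact {math.exp(t) - 1:.4f}")
```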
494

Arbitrage Theory Under Portfolio Constraints

Li, Zhi January 2020 (has links)
In this dissertation, we adopt the viability approach to mathematical finance developed in the book of Karatzas and Kardaras (2020) and extend it to settings where portfolio choice is constrained. We introduce in Chapter 2 the notions of supermartingale numeraire, supermartingale deflator, and viability. After that, we characterize all supermartingale deflators under conic constraints on portfolio choice. Most importantly, we prove a fundamental theorem for equity market structure and arbitrage theory under such conic constraints, to the effect that the existence of the supermartingale numeraire is equivalent to market viability. Further, and always under the assumption of viability, we establish some additional optimality properties of the supermartingale numeraire. At the end of Chapter 2, we pose and solve a problem of robust maximization of asymptotic growth under some realistic assumptions. In Chapter 3, we state and prove the Optional Decomposition Theorem under conic constraints. Using this version of the Optional Decomposition Theorem, we deal with the problem of superhedging contingent claims. In Chapter 4, we consider yet another portfolio optimization problem: under simultaneous conic constraints on portfolio choice and drawdown constraints on the generated wealth, we maximize the long-term growth rate from investment. Application of the Azema-Yor transform allows us to show that the optimal portfolio for this problem is a simple path transformation of a supermartingale numeraire portfolio. Some asymptotic properties of this portfolio are also discussed.
495

Predicting Plans and Actions in Two-Player Repeated Games

Mathema, Najma 22 September 2020 (has links)
Artificial intelligence (AI) agents will need to interact with both other AI agents and humans. One way to enable effective interaction is to create models of associates that help predict the modeled agents' actions, plans, and intentions. If AI agents can predict what other agents in their environment will be doing in the future and can understand the intentions of these other agents, they can use these predictions in their planning and decision-making and in assessing their own potential. Prior work [13, 14] introduced the S# algorithm, designed as a robust algorithm for many two-player repeated games (RGs) to enable cooperation among players. Because S# generates actions, has (internal) experts that seek to accomplish an internal intent, and associates plans with each expert, it is a useful algorithm for exploring intent, plan, and action in RGs. This thesis presents a graphical Bayesian model for predicting the actions, plans, and intents of an S# agent. The same model is also used to predict human actions. The actions, plans, and intentions associated with each S# expert are (a) identified from the literature and (b) grouped by expert type. The Bayesian model then uses its transition probabilities to predict the action and expert type from observing human or S# play. Two techniques were explored for translating probability distributions into specific predictions: a Maximum A Posteriori (MAP) approach and an aggregation approach. The Bayesian model was evaluated on three RGs (Prisoner's Dilemma, Chicken, and Alternator) as follows. The model's prediction accuracy was compared to predictions from machine learning models (J48, Multilayer Perceptron, and Random Forest) as well as from the fixed strategies presented in [20]. Prediction accuracy was obtained by comparing the model's predictions against the actual player's actions. Accuracy for plan and intent prediction was measured by comparing predictions to the actual plans and intents followed by the S# agent. Since the plans and intents of human players were not recorded in the dataset, this thesis does not measure the accuracy of the Bayesian model against actual human plans and intents. Results show that the Bayesian model effectively models the actions, plans, and intents of the S# algorithm across the various games. Additionally, the Bayesian model outperforms the other methods for predicting human actions. When the games do not allow players to communicate using so-called cheap talk, the MAP-based predictions are significantly better than the aggregation-based predictions. There is no significant difference between the performance of MAP-based and aggregation-based predictions for modeling human behavior when cheap talk is allowed, except in the game of Chicken.
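The MAP and aggregation steps can be illustrated with a toy posterior over an associate's expert types. This is not the thesis's actual model; the type names, action distributions, and numbers below are all assumed for illustration.

```python
# Hypothetical posterior over expert types after observing play so far.
posterior = {"cooperator": 0.5, "defector": 0.3, "tit_for_tat": 0.2}

# Each hypothetical expert type's distribution over next actions.
action_given_type = {
    "cooperator":  {"cooperate": 0.9, "defect": 0.1},
    "defector":    {"cooperate": 0.1, "defect": 0.9},
    "tit_for_tat": {"cooperate": 0.7, "defect": 0.3},
}

def map_prediction(posterior, action_given_type):
    """MAP: commit to the most probable expert type, then its most likely action."""
    best_type = max(posterior, key=posterior.get)
    actions = action_given_type[best_type]
    return max(actions, key=actions.get)

def aggregated_prediction(posterior, action_given_type):
    """Aggregation: mix the action distributions, weighted by the posterior."""
    mixture = {}
    for t, p in posterior.items():
        for a, q in action_given_type[t].items():
            mixture[a] = mixture.get(a, 0.0) + p * q
    return max(mixture, key=mixture.get)

print(map_prediction(posterior, action_given_type))         # cooperate
print(aggregated_prediction(posterior, action_given_type))  # cooperate (0.62 vs 0.38)
```

The two rules can disagree: MAP ignores all but the single most probable type, while aggregation lets several moderately probable types outvote it.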
496

Purchase Probability Prediction : Predicting likelihood of a new customer returning for a second purchase using machine learning methods

Alstermark, Olivia, Stolt, Evangelina January 2021 (has links)
When a company evaluates a customer as a potential prospect, one of the key questions to answer is whether the customer will generate profit in the long run. A possible step toward answering this question is to predict the likelihood of the customer returning to the company after the initial purchase. The aim of this master's thesis is to investigate the possibility of using machine learning techniques to predict the likelihood of a new customer returning for a second purchase within a certain time frame. To investigate to what degree machine learning techniques can be used to predict the probability of return, a number of different model setups of Logistic Lasso, Support Vector Machine, and Extreme Gradient Boosting are tested. Model development is performed to ensure well-calibrated probability predictions and to possibly overcome the difficulty arising from an imbalanced ratio of returning and non-returning customers. Throughout the thesis work, a number of actions are taken to account for data protection. One such action is to add noise to the response feature, ensuring that the true fraction of returning and non-returning customers cannot be derived. To further guarantee data protection, axis values of evaluation plots are removed and evaluation metrics are scaled. Nevertheless, it is perfectly possible to select the superior model out of all investigated models. The results obtained show that the best performing model is a Platt-calibrated Extreme Gradient Boosting model, which performs much better than the other models with regard to the considered evaluation metrics while also providing predicted probabilities of high quality. Further, the results indicate that the setups investigated to account for imbalanced data do not improve model performance. The main conclusion is that it is possible to obtain probability predictions of high quality for new customers returning to a company for a second purchase within a certain time frame, using machine learning techniques. This provides a powerful tool for a company when evaluating potential prospects.
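For readers unfamiliar with Platt calibration, the sketch below shows one common way to wrap a gradient-boosting classifier in a sigmoid (Platt) calibrator with scikit-learn. It is a generic illustration on synthetic data, not the thesis's pipeline; sklearn's GradientBoostingClassifier stands in for Extreme Gradient Boosting.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for "did the customer return?" labels.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# method="sigmoid" is Platt scaling: a logistic model fit on held-out scores.
model = CalibratedClassifierCV(GradientBoostingClassifier(random_state=0),
                               method="sigmoid", cv=5)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
print(f"Brier score (lower means better calibrated): "
      f"{brier_score_loss(y_test, proba):.4f}")
```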
497

Sequential Rerandomization in the Context of Small Samples

Yang, Jiaxi January 2021 (has links)
Rerandomization (Morgan & Rubin, 2012) is designed to eliminate covariate imbalance at the design stage of causal inference studies. By improving covariate balance, rerandomization helps provide more precise and trustworthy estimates (i.e., lower variance) of the average treatment effect (ATE). However, only a limited number of studies have considered rerandomization strategies or discussed the covariate balance criteria that are checked before conducting the rerandomization procedure. In addition, researchers may find it more difficult to ensure covariate balance across groups with small samples. Furthermore, researchers conducting experimental design studies in psychology and education may not be able to gather data from all subjects simultaneously: subjects may not arrive at the same time, and experiments can hardly wait until all subjects have been recruited. As a result, we pose the following research questions: 1) How does the rerandomization procedure perform when the sample size is small? 2) Are there balancing criteria that work better than the Mahalanobis distance in the context of small samples? 3) How well does the balancing criterion work in a sequential rerandomization design? Based on the Early Childhood Longitudinal Study, Kindergarten Class, a Monte Carlo simulation study is presented for finding a better covariate balance criterion with respect to small samples. In this study, a neural network prediction model is used to calculate missing counterfactuals. Then, to ensure covariate balance in the context of small samples, the rerandomization procedure uses various criteria measuring covariate balance to find the specific criterion yielding the most precise estimate of the sample average treatment effect. Lastly, a relatively good covariate balance criterion is adapted to Zhou et al.'s (2018) sequential rerandomization procedure and its performance is examined. In this dissertation, we aim to identify the best covariate balance criterion for the rerandomization procedure to determine the most appropriate randomized assignment with respect to small samples. Using Bayesian logistic regression with a Cauchy prior as the covariate balance criterion yields a 19% decrease in the root mean square error (RMSE) of the estimated sample average treatment effect compared to pure randomization procedures. Additionally, it is shown to work effectively in sequential rerandomization, thus making a meaningful contribution to the studies of psychology and education. It further enhances the power of hypothesis testing in randomized experimental designs.
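The core rerandomization loop of Morgan & Rubin (2012) is simple to sketch: redraw the random assignment until the Mahalanobis distance between treatment and control covariate means falls below a threshold. The code below is a generic illustration with assumed data, sample size, and threshold, not the dissertation's simulation study.

```python
import numpy as np

def mahalanobis_balance(X, assign):
    """Mahalanobis distance between treatment and control covariate means."""
    diff = X[assign == 1].mean(axis=0) - X[assign == 0].mean(axis=0)
    n1, n0 = (assign == 1).sum(), (assign == 0).sum()
    cov = np.cov(X, rowvar=False) * (1 / n1 + 1 / n0)  # cov of the mean difference
    return float(diff @ np.linalg.solve(cov, diff))

def rerandomize(X, n_treat, threshold, rng, max_tries=10_000):
    """Redraw assignments until the balance criterion falls below the threshold."""
    n = X.shape[0]
    for _ in range(max_tries):
        assign = np.zeros(n, dtype=int)
        assign[rng.choice(n, size=n_treat, replace=False)] = 1
        if mahalanobis_balance(X, assign) < threshold:
            return assign
    raise RuntimeError("No acceptable assignment found; loosen the threshold.")

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))  # a small sample: 30 units, 4 covariates
assign = rerandomize(X, n_treat=15, threshold=2.0, rng=rng)
print("accepted balance:", mahalanobis_balance(X, assign))
```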
498

Some Exactly Solvable Models And Their Asymptotics

Rychnovsky, Mark January 2021 (has links)
In this thesis, we present three projects studying exactly solvable models in the KPZ universality class and one project studying a generalization of the SIR model from epidemiology. The first chapter gives an overview of the results and how they fit into the study of KPZ universality when applicable. Each of the following four chapters corresponds to a published or submitted article. In the first project, we study an oriented first passage percolation model for the evolution of a river delta. We show that at any fixed positive time, the width of a river delta of length L approaches a constant times L²/³ with Tracy-Widom GUE fluctuations of order L⁴/⁹. This result can be rephrased in terms of a particle system generalizing pushTASEP. We introduce an exactly solvable particle system on the integer half line and show that after running the system for only a finite time, the particle positions have Tracy-Widom fluctuations. In the second project, we study n-point sticky Brownian motions: a family of n diffusions that evolve as independent Brownian motions when they are apart, and interact locally so that the set of coincidence times has positive Lebesgue measure with positive probability. These diffusions can also be seen as n random motions in a random environment whose distribution is given by so-called stochastic flows of kernels. For a specific type of sticky interaction, we prove exact formulas characterizing the stochastic flow and show that in the large deviations regime, the random fluctuations of these stochastic flows are Tracy-Widom GUE distributed. An equivalent formulation of this result states that the extremal particle among n sticky Brownian motions has Tracy-Widom distributed fluctuations in the large n and large time limit. These results are proved by viewing sticky Brownian motions as a diffusive limit of the exactly solvable beta random walk in random environment. In the third project, we study a class of probability distributions on the six-vertex model, which originates from the higher spin vertex model. For these random six-vertex models we show that the behavior near their base is asymptotically described by the GUE-corners process. In the fourth project, we study a model for the spread of an epidemic that generalizes the classical SIR model to account for inhomogeneity in the infectiousness and susceptibility of individuals in the population. A first statement of this model is given in terms of infinitely many coupled differential equations. We show that solving these equations can be reduced to solving a one-dimensional first-order ODE, which is easy to solve numerically. We use the explicit form of this ODE to characterize the total number of people who are ever infected before the epidemic dies out. This model is not related to the KPZ universality class.
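For the classical SIR special case that the fourth project generalizes, the total number of people ever infected is already characterized by a single scalar equation, the textbook final-size relation z = 1 − exp(−R₀·z). The sketch below solves it numerically; it is standard material, not the thesis's generalized model.

```python
import math
from scipy.optimize import brentq

def final_size(r0):
    """Fraction of the population ever infected in the classical SIR model.

    Solves the textbook final-size equation z = 1 - exp(-r0 * z).
    z = 0 is always a root; a positive root exists only when r0 > 1.
    """
    if r0 <= 1.0:
        return 0.0
    f = lambda z: z - (1.0 - math.exp(-r0 * z))
    # The positive root lies in (0, 1]; start just above 0 to skip z = 0.
    return brentq(f, 1e-9, 1.0)

for r0 in (0.9, 1.5, 2.0, 3.0):
    print(f"R0 = {r0}: final epidemic size = {final_size(r0):.3f}")
```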
499

Tirer parti de la structure des données incertaines / Leveraging the structure of uncertain data

Amarilli, Antoine 14 March 2016 (has links)
The management of data uncertainty can lead to intractability, in the case of probabilistic databases, or even undecidability, in the case of open-world reasoning under logical rules. My thesis studies how to mitigate these problems by restricting the structure of uncertain data and rules. My first contribution investigates conditions on probabilistic relational instances that ensure the tractability of query evaluation and lineage computation. I show that these tasks are tractable when we bound the treewidth of instances, for various probabilistic frameworks and provenance representations. Conversely, I show intractability under mild assumptions for any other condition on instances. The second contribution concerns query evaluation on incomplete data under logical rules, and under the finiteness assumption usually made in database theory. I show that this task is decidable for unary inclusion dependencies and functional dependencies. This establishes the first positive result for finite open-world query answering on an arbitrary-arity language featuring both referential constraints and number restrictions.
500

Combined complexity of probabilistic query evaluation / Complexité combinée de l'évaluation de requêtes sur des données probabilistes

Monet, Mikaël 12 October 2018 (has links)
Query evaluation over probabilistic databases (probabilistic query evaluation, or PQE) is known to be intractable in many cases, even in data complexity, i.e., when the query is fixed. Although some restrictions of the queries and instances have been proposed to lower the complexity, these known tractable cases usually do not apply to combined complexity, i.e., when the query is not fixed. My thesis investigates the question of which queries and instances ensure the tractability of PQE in combined complexity. My first contribution is to study PQE of conjunctive queries on binary signatures, which we rephrase as a probabilistic graph homomorphism problem. We restrict the query and instance graphs to be trees and show the impact on the combined complexity of diverse features such as edge labels, branching, or connectedness. While the restrictions imposed in this setting are quite severe, my second contribution shows that, if we are ready to increase the complexity in the query, then we can evaluate a much more expressive language on more general instances. Specifically, I show that PQE for a particular class of Datalog queries on instances of bounded treewidth can be solved with linear complexity in the instance and doubly exponential complexity in the query. To prove this result, we use techniques from tree automata and knowledge compilation. The third contribution is to show the limits of some of these techniques by proving general lower bounds on knowledge compilation and tree automata formalisms.
