231

Information theoretic models of social interaction

Salge, Christoph, January 2013
This dissertation demonstrates, in a non-semantic information-theoretic framework, how the principles of 'maximisation of relevant information' and 'information parsimony' can guide the adaptation of an agent towards agent-agent interaction. Central to this thesis is the concept of digested information; I argue that an agent is intrinsically motivated to (a) process the relevant information in its environment and (b) display this information in its own actions. From the perspective of similar agents, who require similar information, this differentiates other agents from the rest of the environment by virtue of the information they provide, and creates an informational incentive to observe other agents and integrate their information into one's own decision-making process. This process is formalised in the framework of information theory, which allows for a quantitative treatment of the resulting effects, specifically of how the digested information of an agent is influenced by several factors, such as the agent's performance and the integrated information of other agents. Two specific phenomena based on information maximisation arise in this thesis. One is flocking behaviour, similar to boids, that results when agents search for a location in a gridworld and integrate the information in other agents' actions via Bayes' theorem. The other is an effect where integrating information from too many agents becomes detrimental to an agent's performance, for which several explanations are provided.
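The Bayesian integration of other agents' actions can be sketched as a simple belief update. The toy gridworld, likelihood model, and numbers below are illustrative assumptions, not taken from the dissertation:

```python
# A toy 1-D "gridworld": the observer holds a belief over which cell is the
# goal and updates it after seeing another agent act. The likelihood model
# (a competent agent tends to step toward the goal) is an assumed illustration.
cells = [0, 1, 2, 3]
prior = {c: 1.0 / len(cells) for c in cells}      # uniform prior over goal cells

def action_likelihood(action, goal, agent_pos=0):
    """P(observed action | goal location): stepping toward the goal is more likely."""
    toward = 1 if goal > agent_pos else -1
    return 0.8 if action == toward else 0.2

observed_action = 1                                # the other agent stepped right
unnorm = {c: prior[c] * action_likelihood(observed_action, c) for c in cells}
z = sum(unnorm.values())
posterior = {c: p / z for c, p in unnorm.items()}  # Bayes' theorem
```

After the update, cells to the right of the observed agent gain probability mass; an observer repeating this for many observed actions tends to move where others move, which is the mechanism behind the flocking effect described above.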
232

Using Machine Learning to Detect Malicious URLs

Cheng, Aidan, 01 January 2017
There is a need for a better predictive model that reduces the number of malicious URLs being sent through emails. This system should learn from existing metadata about URLs. The ideal solution for this problem would be able to learn from its predictions: for example, if it predicts a URL to be malicious, and that URL is deemed safe by the sandboxing environment, the predictor should refine its model to account for this data. The problem, then, is to construct a model with these characteristics that can make these predictions for the vast number of URLs being processed. Given that the current system does not employ machine learning methods, we intend to investigate multiple such models and summarize which of them might be worth pursuing on a large scale.
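The self-correcting behaviour described above can be sketched as a minimal online learner. The lexical features, weights, and perceptron-style update below are illustrative assumptions, not the system investigated in the thesis:

```python
# Minimal sketch of the feedback loop: a linear model scores URL features and
# is corrected whenever the sandbox verdict disagrees with its prediction.
def url_features(url):
    return {
        "length": len(url) / 100.0,                      # scaled URL length
        "digits": sum(ch.isdigit() for ch in url) / 10.0,  # scaled digit count
        "at_sign": 1.0 if "@" in url else 0.0,           # '@' is a common red flag
    }

weights = {"length": 0.0, "digits": 0.0, "at_sign": 0.0}

def predict(url):
    score = sum(weights[k] * v for k, v in url_features(url).items())
    return score > 0  # True = predicted malicious

def sandbox_feedback(url, is_malicious, lr=0.5):
    """Perceptron-style correction applied only when the sandbox disagrees."""
    if predict(url) != is_malicious:
        sign = 1.0 if is_malicious else -1.0
        for k, v in url_features(url).items():
            weights[k] += lr * sign * v

sandbox_feedback("http://evil@1337.example/ha", True)  # sandbox says malicious
```

A real system would use far richer metadata and a stronger learner, but the loop is the same: predict, compare with the sandbox verdict, and update.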
233

A machine learning approach to fundraising success in higher education

Ye, Liang, 01 May 2017
New donor acquisition and current donor promotion are the two major programs in fundraising for higher education, and developing proper targeting strategies plays an important role in both programs. This thesis presents machine learning solutions as targeting strategies for both programs, based on alumni data readily available in almost any institution. The targeting strategy for new donor acquisition is modeled as a donor identification problem. The Gaussian naïve Bayes, random forest, and support vector machine algorithms are used and evaluated. The test results show that, having been trained with enough samples, all three algorithms can distinguish donors from rejectors well, and big donors are identified more often than others. While there is a trade-off between the cost of soliciting candidates and the success of donor acquisition, the results show that in a practical scenario where the models are properly used as the targeting strategy, more than 85% of new donors and more than 90% of new big donors can be acquired when only 40% of the candidates are solicited. The targeting strategy for donor promotion is modeled as a machine learning problem of predicting promising donors, i.e., those who will upgrade their pledge. The Gaussian naïve Bayes, random forest, and support vector machine algorithms are tested. The test results show that all three algorithms can distinguish promising donors from non-promising donors (i.e., those who will not upgrade their pledge). When the age information is known, the best model produces an overall accuracy of 97% on the test set. The results show that in a practical scenario where the models are properly used as the targeting strategy, more than 85% of promising donors can be acquired when only 26% of the candidates are solicited.
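The practical use of such a targeting strategy, soliciting only the top-scoring fraction of candidates, can be sketched as follows. The scores and donor labels are made up for illustration; the thesis derives scores from naïve Bayes, random forest, and SVM models:

```python
# Rank candidates by model score, solicit the top fraction, and measure what
# share of the true donors that solicitation budget captures.
def donors_captured(scores, labels, solicit_frac):
    """Share of actual donors reached when soliciting the top-scoring fraction."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    k = int(len(ranked) * solicit_frac)
    reached = sum(lab for _, lab in ranked[:k])
    total = sum(labels)
    return reached / total if total else 0.0

# Hypothetical model scores and true donor labels for ten candidates
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05, 0.01]
labels = [1,   1,   0,   1,   0,   0,   0,    0,   0,    0]
rate = donors_captured(scores, labels, solicit_frac=0.4)  # solicit top 40%
```

With a well-calibrated model, most donors concentrate in the top-scoring fraction, which is exactly the trade-off the abstract quantifies (85%+ of donors from 40% of candidates).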
234

Loss functions in actuarial science (Fonctions de perte en actuariat)

Craciun, Geanina, January 2009
Thesis digitized by the Document and Archives Management Division of the Université de Montréal.
235

Shape from gradients: a psychophysical and computational study of the role complex illumination gradients, such as shading and mutual illumination, play in three-dimensional shape perception

Harding, Glen, January 2013
The human visual system gathers information about three-dimensional object shape from a wide range of sources. How effectively we can use these sources, and how they are combined to form a consistent and accurate percept of the 3D world, is the focus of much research. In complex scenes, inter-reflections of light between surfaces (mutual illumination) can occur, creating chromatic illumination gradients. These gradients provide a source of information about 3D object shape, but little research has been conducted into the capabilities of the visual system to use such information. The experiments described here were conducted with the aim of understanding the influence of chromatic gradients from mutual illumination on 3D shape perception. Psychophysical experiments are described that were designed to investigate: whether the human visual system takes account of mutual illumination when estimating 3D object shape, and how this might occur; how colour shading cues are integrated with other shape cues; and the relative influence on 3D shape perception of achromatic (luminance) shading and chromatic shading from mutual illumination. In addition, one chapter explores a selection of mathematical models of cue integration and their applicability in this case. The results of the experiments suggest that the human visual system is able to quickly assess and take account of colour mutual illumination when estimating 3D object shape, and can use chromatic gradients as an independent and effective cue. Finally, mathematical modelling reveals that the chromatic gradient cue is likely integrated with other shape cues in a way that is close to statistically optimal.
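"Statistically optimal" cue integration is usually taken to mean maximum-likelihood combination, in which each cue's estimate is weighted by its reliability (the inverse of its variance). A minimal sketch with illustrative numbers, not data from these experiments:

```python
# Maximum-likelihood cue combination: weight each cue by 1 / sigma^2.
# The combined estimate is more reliable than either cue alone.
def mle_combine(estimates, sigmas):
    weights = [1.0 / s**2 for s in sigmas]
    z = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / z
    combined_sigma = (1.0 / z) ** 0.5   # std. dev. of the combined estimate
    return combined, combined_sigma

# Hypothetical depth estimates: chromatic-gradient cue (reliable) says 10,
# luminance-shading cue (noisier) says 14.
est, sigma = mle_combine([10.0, 14.0], [1.0, 2.0])
```

The combined estimate lands closer to the more reliable cue, and its uncertainty is lower than that of either cue on its own, which is the signature of optimal integration that such modelling tests for.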
236

Application of Machine Learning Techniques for Real-time Classification of Sensor Array Data

Li, Sichu, 15 May 2009
There is a significant need to identify approaches for classifying chemical sensor array data with high success rates, which would enhance sensor detection capabilities. The present study attempts to fill this need by investigating six machine learning methods to classify a dataset collected using a chemical sensor array: K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Classification and Regression Trees (CART), Random Forest (RF), Naïve Bayes Classifier (NB), and Principal Component Regression (PCR). A total of 10 predictors, associated with the responses from 10 sensor channels, are used to train and test the classifiers. A training dataset of 4 classes containing 136 samples is used to build the classifiers, and a dataset of 4 classes with 56 samples is used for testing. The results generated with the six different methods are compared and discussed. RF, CART, and KNN are found to have success rates greater than 90% and to outperform the other methods.
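As a flavour of one of the six methods compared, here is a minimal K-Nearest Neighbor classifier; the toy two-feature samples stand in for the 10-channel sensor responses and are purely illustrative:

```python
# KNN: classify a query point by majority vote among its k nearest
# training samples (squared Euclidean distance in feature space).
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (features, label) pairs; returns the majority label."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy "sensor responses" for two chemical classes
train = [((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
         ((0.9, 0.8), "B"), ((0.8, 0.9), "B")]
pred = knn_predict(train, (0.15, 0.15))
```

KNN needs no training phase at all, which helps explain why it is competitive on small, well-separated sensor datasets like the one described above.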
237

Application of Dirichlet Distribution for Polytopic Model Estimation

Katkuri, Jaipal, 05 August 2010
The polytopic model (PM) structure is often used in the areas of automatic control and fault detection and isolation (FDI). It is an alternative to the multiple model approach that explicitly allows for interpolation among local models. This thesis proposes a novel approach to PM estimation by modeling the set of PM weights as a random vector with a Dirichlet distribution (DD). A new approximate (adaptive) PM estimator, referred to as a Quasi-Bayesian Adaptive Kalman Filter (QBAKF), is derived and implemented. The model weights and state estimation in the QBAKF are performed adaptively by a simple quasi-Bayesian weights estimator and a single Kalman filter on the PM with the estimated weights. Since the PM estimation problem is nonlinear and non-Gaussian, a DD marginalized particle filter (DDMPF) is also developed and implemented, similar to the marginalized particle filter (MPF). The simulation results show that the newly proposed algorithms have better estimation accuracy, simpler design, and lower computational requirements for PM estimation.
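The core modelling idea, that PM weights live on the probability simplex and can therefore be treated as Dirichlet-distributed, can be sketched as follows. The Dirichlet parameters and local model outputs are illustrative assumptions:

```python
# Sample simplex weights from a Dirichlet distribution (via normalized Gamma
# draws) and use them to interpolate among local model outputs.
import random

def dirichlet_sample(alphas, rng):
    """Draw a weight vector on the simplex: Gamma(a_i, 1) draws, normalized."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    z = sum(g)
    return [x / z for x in g]

rng = random.Random(0)
weights = dirichlet_sample([2.0, 2.0, 2.0], rng)  # symmetric prior over 3 models
local_models = [1.0, 2.0, 4.0]                    # illustrative local model outputs
blended = sum(w * m for w, m in zip(weights, local_models))
```

Because the weights always sum to one, the blended output is a convex combination of the local models, which is exactly the interpolation property the polytopic structure provides.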
238

A Bayesian approach for mapping QTL in experimental populations (Uma abordagem bayesiana para mapeamento de QTLs em populações experimentais)

Meyer, Andréia da Silva, 03 April 2009
Many traits in plants and animals are quantitative in nature, influenced by multiple genes. With the advent of new molecular techniques, it has become possible to map the loci that control quantitative traits, called QTL (Quantitative Trait Loci). Mapping a QTL means identifying its position in the genome as well as estimating its genetic effects. The greatest difficulty in mapping QTL is that the number of QTL is unknown. Bayesian approaches combined with Markov chain Monte Carlo (MCMC) methods have been applied to jointly infer the number of QTL, their positions in the genome, and their genetic effects. The challenge is to obtain a sample from the joint posterior distribution of these parameters, since the number of QTL is unknown and the dimension of the parameter space changes according to the number of QTL in the model. In this study, a Bayesian approach for mapping QTL was implemented in the statistical program R, with multiple QTL and epistatic effects considered in the model. Models with increasing numbers of QTL were fitted, and the Bayes factor was used to select the most suitable model and, consequently, to estimate the number of QTL controlling the phenotypes of interest. To evaluate the efficiency of the implemented methodology, a simulation study was carried out considering two different experimental populations, backcross and F2, with models both with and without epistasis for each population. The implemented approach proved very efficient: in every situation considered, the selected model was the one containing the true number of QTL used in the data simulation.
In addition, QTL mapping was performed for three phenotypes of tropical maize: plant height, ear height, and grain yield. The results obtained with the implemented methodology were compared with those found by the CIM method.
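Model selection by Bayes factor, as used here to choose the number of QTL, can be sketched with hypothetical marginal likelihoods. The values below are made up; the "2·log BF > 6 means strong evidence" reading follows the common Kass and Raftery convention, not a result from this thesis:

```python
# Hypothetical marginal log-likelihoods for models with 0..3 QTL
marginal_loglik = {0: -120.0, 1: -95.0, 2: -90.0, 3: -91.5}

def log_bayes_factor(m1, m2):
    """log Bayes factor of model m1 against m2 (difference of marginal log-likelihoods)."""
    return marginal_loglik[m1] - marginal_loglik[m2]

# Select the model with the highest marginal likelihood...
best = max(marginal_loglik, key=marginal_loglik.get)
# ...and check the strength of evidence over the simpler one-QTL model:
# on the Kass-Raftery scale, 2 * log BF > 6 is commonly read as strong evidence.
strong_over_one_qtl = 2 * log_bayes_factor(best, 1) > 6
```

In this toy example, adding a third QTL lowers the marginal likelihood, so the two-QTL model wins: the marginal likelihood's built-in penalty for complexity is what lets the Bayes factor estimate the number of QTL.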
239

What Men Want, What They Get and How to Find Out

Wolf, Alexander, 12 July 2017
This thesis is concerned with a fundamental unit of the economy: households. Even in advanced economies, upwards of 70% of the population live in households composed of multiple people. A large number of decisions are taken at the level of the household, that is to say, jointly by household members: how to raise children, how much and when to work, how many cartons of milk to purchase. How these decisions are made is therefore of great importance for the people who live in these households and for their well-being. But precisely because household members make decisions jointly, it is hard to know how those decisions come about and to what extent they benefit individual members. This is why households are often viewed as single decision makers in economics: even if they contain multiple people, they are treated as though they were a single person with a single set of preferences. This unitary approach is often sufficient and can be a helpful simplification, but in many situations it does not deliver an adequate description of household behavior. For instance, the unitary model does not permit the study of individual well-being and inequality inside the household. In addition, implications of the unitary model have been rejected repeatedly in the demand literature.

Bargaining models offer an alternative in which household members have individual preferences and come to joint decisions in various ways. There are by now a great number of such models, all of which allow for the study of bargaining power, a measure of the influence a member has in decision making. This concept is important because it has implications for the welfare of individuals: if one household member's bargaining power increases, the household's choices will be more closely aligned with that member's preferences, ceteris paribus.

The three chapters below can be divided into two parts. The first part consists of Chapter 1, which looks to detect the influence of intra-household bargaining in a specific set of consumption choices: consumption of the arts. The research in this chapter is designed to measure aspects of the effect of bargaining power in this domain, but does not seek to quantify bargaining power itself or to infer the economic well-being of household members. Precisely this last point, however, is the focus of the second part of the thesis, consisting of Chapters 2 and 3. These focus specifically on the recovery of one measure of bargaining power, the resource share. Resource shares have the advantage of being interpretable in terms of economic well-being, which is not true of all such measures. They are estimated as part of structural models of household demand; these models are versions of the collective model of household decision making.

Pioneered by Chiappori (1988) and Apps and Rees (1988), the collective model has become the go-to alternative to unitary approaches, in which the household is seen as a single decision-making unit with a single well-behaved utility function. Instead, the collective model allows for individual utility functions for each member of the household. The model owes much of its success to the simplicity of its most fundamental assumption: that whatever the structure of the intra-household bargaining process, outcomes are Pareto-efficient, meaning that no member can be made better off without making another worse off. Though the model nests unitary models as special cases, it does have testable implications.

The first chapter of the thesis is entitled "Household Decisions on Arts Consumption" and is joint work with Caterina Mauri, who has also collaborated with me on many other projects in her capacity as my girlfriend. In it, we explore the role of intra-household bargaining in arts consumption. We do this by estimating demand for various arts and cultural events, such as the opera or dance performances, using a large number of explanatory variables. One of these variables plays a special role: it is a distribution factor, meaning that it can reasonably be assumed to affect consumption only through the bargaining process, and not by modifying preferences. Such variables play an important role in the household bargaining literature. Here, three such variables are used; among them is the share of household income contributed by the husband, the canonical distribution factor.

The chapter fits into a literature on drivers of arts consumption, which has shown that, in addition to such factors as age, income and education, spousal preferences and characteristics are important in determining how much and which cultural goods are consumed. Gender differences in preferences for arts consumption have also been shown to be important and to persist after accounting for class, education and other socio-economic factors (Bihagen and Katz-Gerro, 2000). We explore to what extent this difference in preferences can be used to shed light on the decision process in couples' households. Using three different distribution factors, we infer whether changes in the relative bargaining power of spouses induce changes in arts consumption. Using a large sample from the US Current Population Survey, which includes data on the frequency of visits to various categories of cultural activities, we regress attendance rates on a range of socio-economic variables using a suitable count data model. We find that attendance by men at events such as the opera, ballet and other dance performances, which are more frequently attended by women than by men, shows a significant influence of the distribution factors. This significant effect persists irrespective of which distribution factor is used. We conclude that more influential men tend to participate in these activities less frequently than less influential men, conditional on a host of controls, notably including hours worked.

The second chapter centers on the recovery of resource shares. This chapter is joint work with Denni Tommasi, a fellow PhD student at ECARES. It relies on the collective model of the household, which assumes simply that household decisions are Pareto-efficient. From this assumption, a relatively simple household problem can be formulated: households can be seen as maximizers of weighted sums of their members' utility functions. Importantly, the weights, known as bargaining weights (or bargaining power), may depend on many factors, including prices. The household problem in turn implies structure for household demand, which is observed in survey data.

Collective demand systems do not necessarily identify measures of bargaining power, however. In fact, the ability to recover such a measure, and especially one that is useful for welfare analysis, was an important milestone in the literature. It was reached by Browning et al. (2013) (henceforth BCL), with a collective model capable of identifying resource shares (also known as a sharing rule). These shares provide a measure of how resources are allocated in the household and so can be used to study intra-household consumption inequality. They also take into account that households generate economies of scale for their members, a phenomenon known as a consumption technology: by sharing goods such as housing, members of households can generate savings that can be used elsewhere.

Estimation of these resource shares involves expressing household budget shares as functions of preferences, a consumption technology and a sharing rule, each of which is a function of observables, and letting the resulting system loose on the data. But obtaining such a demand system is not free. In addition to the usual empirical specifications of the various parts of the system, an identifying assumption has to be made to ensure that resource shares can be recovered in estimation. In BCL, this is the assumption that singles and adult members of households share the same preferences. In Chapter 2, however, an alternative assumption is used.

In a recent paper, Dunbar et al. (2013) (hereafter DLP) develop a collective model based on BCL that allows resource shares to be identified using assumptions on the similarity of preferences within and between households. The model uses demand only for assignable goods, a favorite of household economists: goods such as men's clothing and women's clothing for which it is known who in a household consumes them. In this chapter, we show why, especially when the data exhibit relatively flat Engel curves, the model is weakly identified and induces high variability and an implausible pattern in least squares estimates. We propose an estimation strategy nested in their framework that greatly reduces this practical impediment to the recovery of individual resource shares. To achieve this, we follow an empirical Bayes method that incorporates additional (or out-of-sample) information on singles and relies on mild assumptions on preferences. We show the practical usefulness of this strategy through a series of Monte Carlo simulations and by applying it to Mexican data. The results show that our approach is robust, gives a plausible picture of the household decision process, and is particularly beneficial for the practitioner who wishes to apply the DLP framework. Our welfare analysis of the PROGRESA program in Mexico is the first to include separate poverty rates for men and women in a CCT program.

The third chapter addresses a problem similar to the one discussed in Chapter 2. The goal, again, is to estimate resource shares and to remedy issues of imprecision and instability in the demand systems that can deliver them. Here, the collective model used is based on Lewbel and Pendakur (2008) and uses data on the entire basket of goods that households consume. The identifying assumption is similar to that used by BCL, although I allow for some differences in preferences between singles and married individuals. I set out to improve the precision and stability of the resulting estimates, and so to make the model more useful for welfare analysis. To do so, this chapter approaches, for the first time, the estimation of a collective household demand system from a Bayesian perspective. Using prior information on equivalence scales, as well as restrictions implied by theory, tight credible intervals are found for resource shares, a measure of the distribution of economic well-being in a household. A modern MCMC sampling method provides a complete picture of the high-dimensional parameter vector's posterior distribution and allows for reliable inference.

The share of household earnings generated by a household member is estimated to have a positive effect on her share of household resources in a sample of couples from the US Consumer Expenditure Survey. An increase in the earnings share of one percentage point is estimated to result in a shift of between 0.05% and 0.14% of household resources in the same direction, meaning that spouses partially insure one another against such shifts. The estimates imply an expected shift of 0.71% of household resources from the average man to the average woman in the same sample between 2008 and 2012, when men lost jobs at a greater rate than women.

Both Chapters 2 and 3 explore unconventional ways to achieve gains in estimator precision and reliability at relatively little cost. This represents a valuable contribution to a literature that, for all its merits in complexity and ingenious modeling, has not yet seriously endeavored to make itself empirically useful.
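The finding on earnings shares can be illustrated with a back-of-the-envelope calculation. The 0.05% to 0.14% range comes from the abstract above; the baseline resource share of 0.5 and the linear form are assumptions made only for illustration:

```python
# Estimated effect: each percentage point of earnings share shifts between
# 0.0005 and 0.0014 of household resources (0.05% to 0.14%) toward that spouse.
def resource_share(earnings_share, baseline=0.50, effect=0.001):
    """Spouse's resource share as a linear function of her earnings share.
    The baseline of 0.5 and linearity are illustrative assumptions."""
    return baseline + effect * (earnings_share * 100 - 50)

# A spouse moving from a 50% to a 60% earnings share:
low = resource_share(0.60, effect=0.0005)   # lower bound of the estimated effect
high = resource_share(0.60, effect=0.0014)  # upper bound of the estimated effect
```

Even the upper bound moves the resource share by only 1.4 percentage points for a 10-point swing in earnings, which is what "spouses partially insure one another" means: consumption shares respond far less than one-for-one to earnings shocks.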
240

Fault detection in dynamic systems with Bayesian networks learned from state estimation (Detecção de falhas em sistemas dinâmicos com redes bayesianas aprendidas a partir de estimação de estados)

Matsuura, Jackson Paul, 07 March 2006
The prompt detection of faults in dynamic systems is essential to prevent hazardous operating conditions and even physical damage to the system, which would put valuable resources, vital equipment and human lives at risk. Conventional fault detection methods, however, run into limitations such as physical space constraints, the need for an accurate mathematical model of the system, and the need for data on the system's behaviour when operating under faults, among others. This work proposes and evaluates a new fault detection method for dynamic systems that offers both qualitative and quantitative advantages over the methods already reported in the literature. The proposed method is easy to understand at a high level and closely resembles human supervision; it requires no additional equipment, no accurate model of the system, and no information whatsoever about previous faults, and can therefore be applied to systems where other methods would hardly produce satisfactory results. In it, a Bayesian network is learned from measurements of the system operating normally, without faults, and this network is then used for fault detection, inferring that deviations from the probabilistic behaviour learned as normal are caused by faults in the system. The results obtained with the new method, extremely encouraging, are compared with those obtained using a method based on analytical redundancy, and prove clearly superior. Additional results obtained in fault isolation and in fault detection for a nonlinear system corroborate these excellent results, pointing to the great potential of the proposed method.
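The detection principle, learning the probabilistic behaviour of the fault-free system and flagging measurements of low likelihood under that model, can be sketched with a single Gaussian standing in for the learned Bayesian network. The data and threshold rule are illustrative assumptions:

```python
# Learn a model of normal behaviour from fault-free measurements, then flag
# any new measurement whose log-likelihood falls below the worst value seen
# during normal operation.
import math
import statistics

normal_data = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98]  # fault-free training runs
mu = statistics.mean(normal_data)
sigma = statistics.stdev(normal_data)

def log_likelihood(x):
    """Gaussian log-density of a measurement under the learned normal model."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

threshold = min(log_likelihood(x) for x in normal_data)  # loosest training value

def is_fault(x):
    return log_likelihood(x) < threshold

flag = is_fault(2.5)   # far outside the learned normal regime
ok = is_fault(1.0)     # well inside it
```

Note that no faulty data is needed at any point: the threshold is set entirely from normal operation, which is the property that lets the method work where fault data is unavailable.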
