1 |
Risk in the development design
Crossland, Ross, January 1997 (has links)
No description available.
|
2 |
Generative probabilistic models of goal-directed users in task-oriented dialogs
Eshky, Aciel, January 2014 (has links)
A longstanding objective of human-computer interaction research is to develop better dialog systems for end users. User modelling research, specifically, aims to provide dialog researchers with models of user behaviour to aid the design and improvement of dialog systems. Where dialog systems are commercially deployed, they are often used by a vast number of users, and sub-optimal performance could lead to an immediate financial loss for the service provider, and even user alienation. Thus, there is a strong incentive to make dialog systems as functional as possible immediately, and crucially prior to their release to the public. Models of user behaviour fill this gap by simulating the role of human users in the lab, without the losses associated with sub-optimal system performance. User models can also greatly aid design decisions, by serving as tools for exploratory analysis of real user behaviour prior to designing dialog software. User modelling is the central problem of this thesis. We focus on a particular kind of dialog, termed task-oriented dialogs (those centred around solving an explicit task), because they represent the frontier of current dialog research and commercial deployment. Users taking part in these dialogs behave according to a set of user goals, which specify what they wish to accomplish from the interaction, and tend to exhibit variability of behaviour given the same set of goals. Our objective is to capture and reproduce (at the semantic utterance level) the range of behaviour that users exhibit while remaining consistent with their goals. We approach the problem as an instance of generative probabilistic modelling, with explicit user goals, induced entirely from data. We argue that doing so has numerous practical and theoretical benefits over previous approaches to user modelling, which have either lacked a model of user goals or have not been driven by real dialog data. A principal problem with user modelling development thus far has been the difficulty of evaluation. We demonstrate how treating user models as probabilistic models alleviates some of these problems, through the ability to leverage a whole raft of techniques and insights from machine learning for evaluation. We demonstrate the efficacy of our approach by applying it to two different kinds of task-oriented dialog domains, which exhibit two different sub-problems encountered in real dialog corpora. The first kind comprises informational (or slot-filling) domains, specifically those concerning flight and bus route information. In slot-filling domains, user goals take categorical values which allow multiple surface realisations, and are corrupted by speech recognition errors. We address this issue by adopting a topic model representation of user goals, which allows us to capture both synonymy and phonetic confusability in a unified model. We first evaluate our model intrinsically using held-out probability and perplexity, and demonstrate substantial gains over an alternative string-goal representation and over a non-goal-directed model. We then show in an extrinsic evaluation that features derived from our model lead to substantial improvements over a strong baseline in the task of discriminating between real dialogs (consistent dialogs) and dialogs composed of real turns sampled from different dialogs (inconsistent dialogs). We then move on to a spatial navigational domain in which user goals are spatial trajectories across a landscape.
The disparity between the representation of spatial routes as raw pixel coordinates and their grounding as semantic utterances creates an interesting challenge compared to conventional slot-filling domains. We derive a feature-based representation of spatial goals which facilitates reasoning and admits generalisation to new routes not encountered at training time. The probabilistic formulation of our model allows us to capture variability of behaviour given the same underlying goal, a property frequently exhibited by human users in the domain. We first evaluate intrinsically using held-out probability and perplexity, and find a substantial reduction in uncertainty brought by our spatial representation. We further evaluate extrinsically in a human judgement task and find that our model’s behaviour does not differ significantly from the behaviour of real users. We conclude by sketching two novel ideas for future work: the first is to deploy the user models as transition functions for MDP-based dialog managers; the second is to use the models as a means of restricting the search space for optimal policies, by treating optimal behaviour as a subset of the (distributions over) plausible behaviour which we have induced.
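As a rough, purely illustrative companion to the intrinsic evaluation described above (held-out probability and perplexity), the sketch below scores held-out dialogs under a toy goal-conditioned generative user model. The goal prior, act inventory, and probabilities are invented for the example; this is not the thesis's actual model.

```python
import numpy as np

# Toy goal-conditioned generative user model: a goal g is drawn from a prior,
# and each observed dialog act is drawn from a categorical distribution
# conditioned on g. Held-out log-probability and perplexity are the intrinsic
# metrics mentioned in the abstract.

rng = np.random.default_rng(0)
n_goals, n_acts = 3, 5
goal_prior = np.full(n_goals, 1.0 / n_goals)
act_given_goal = rng.dirichlet(np.ones(n_acts), size=n_goals)  # P(act | goal)

def dialog_log_prob(acts):
    """Marginal log P(acts) = log sum_g P(g) * prod_t P(act_t | g)."""
    per_goal = goal_prior * np.prod(act_given_goal[:, acts], axis=1)
    return np.log(per_goal.sum())

held_out = [[0, 2, 2, 4], [1, 1, 3]]            # two held-out dialogs (act indices)
total_lp = sum(dialog_log_prob(d) for d in held_out)
n_tokens = sum(len(d) for d in held_out)
perplexity = np.exp(-total_lp / n_tokens)       # lower is better
print(f"held-out log-prob = {total_lp:.3f}, perplexity = {perplexity:.3f}")
```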
|
3 |
[en] EFFECT OF THE PROBABILISTIC MODELING OF EARTH STATION ANTENNA SIDELOBE GAINS ON THE EVALUATION OF INTERFERENCE AMONG SATELLITE COMMUNICATION NETWORKS / [pt] EFEITO DA MODELAGEM PROBABILÍSTICA DOS GANHOS NOS LÓBULOS LATERAIS DAS ANTENAS DAS ESTAÇÕES TERRENAS NO CÁLCULO DE INTERFERÊNCIAS ENTRE REDES DE COMUNICAÇÃO POR SATÉLITE
ALFREDO OMAR CORDOVA MANCHEGO, 27 June 2014 (has links)
[en] When several communication systems share a particular frequency band, each of the systems operates subject to the interference generated by the others. Within this scenario, the importance of an accurate assessment of the effects of interference increases. Given the complexity of the problem, the evaluation of interference is usually done by considering several worst-case situations. These worst-case situations include, for example, the hypothesis that degradation due to rain affects only the victim link and does not affect the interfering links, the hypothesis that the earth stations involved are located at the most unfavorable (in terms of interference) spots in their service areas, and the use of reference patterns for the radiation patterns of the antennas. Obviously, these assumptions imply a conservative calculation of interference, in which the obtained interference levels are higher than the actual levels. In this work, as an alternative to the use of envelopes, the earth station antenna sidelobe gains are modeled as random variables. In this case, the resulting carrier-to-interference ratio is also a random variable. The statistical behavior of the carrier-to-interference ratio is then evaluated for two different models of the antenna sidelobe gains: as exponentially distributed random variables and as gamma-distributed random variables. The results are compared to those obtained when an envelope is used to characterize the antenna radiation patterns.
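A hedged illustration of the abstract's central idea: if the interfering earth station's sidelobe gain is modelled as an exponential or gamma random variable, the carrier-to-interference ratio becomes a random variable whose statistics can be sampled. The link-budget numbers below are assumptions for illustration only, not values from the work.

```python
import numpy as np

# Illustrative Monte Carlo sketch (not the thesis's actual link budget): the
# interfering earth-station sidelobe gain is a random variable, so the
# resulting carrier-to-interference ratio C/I is random as well.

rng = np.random.default_rng(1)
n = 100_000
carrier_dbw = -120.0              # assumed received carrier power
interferer_pwr_dbw = -150.0       # assumed interfering power before sidelobe gain

mean_sidelobe_gain_db = 5.0
mean_lin = 10 ** (mean_sidelobe_gain_db / 10)

# Two candidate models for the sidelobe gain (linear scale), as in the abstract:
gain_exp = rng.exponential(scale=mean_lin, size=n)               # exponential
gain_gamma = rng.gamma(shape=2.0, scale=mean_lin / 2.0, size=n)  # gamma, same mean

for name, gain in [("exponential", gain_exp), ("gamma", gain_gamma)]:
    interference_dbw = interferer_pwr_dbw + 10 * np.log10(gain)
    c_over_i_db = carrier_dbw - interference_dbw
    p5 = np.percentile(c_over_i_db, 5)   # value exceeded 95% of the time
    print(f"{name:11s}: mean C/I = {c_over_i_db.mean():.1f} dB, 5th percentile = {p5:.1f} dB")
```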
|
4 |
Addressing the Uncertainty Due to Random Measurement Errors in Quantitative Analysis of Microorganism and Discrete Particle Enumeration Data
Schmidt, Philip J., 10 1900 (has links)
Parameters associated with the detection and quantification of microorganisms (or discrete particles) in water such as the analytical recovery of an enumeration method, the concentration of the microorganisms or particles in the water, the log-reduction achieved using a treatment process, and the sensitivity of a detection method cannot be measured exactly. There are unavoidable random errors in the enumeration process that make estimates of these parameters imprecise and possibly also inaccurate. For example, the number of microorganisms observed divided by the volume of water analyzed is commonly used as an estimate of concentration, but there are random errors in sample collection and sample processing that make these estimates imprecise. Moreover, this estimate is inaccurate if poor analytical recovery results in observation of a different number of microorganisms than what was actually present in the sample. In this thesis, a statistical framework (using probabilistic modelling and Bayes’ theorem) is developed to enable appropriate analysis of microorganism concentration estimates given information about analytical recovery and knowledge of how various random errors in the enumeration process affect count data. Similar models are developed to enable analysis of recovery data given information about the seed dose. This statistical framework is used to address several problems: (1) estimation of parameters that describe random sample-to-sample variability in the analytical recovery of an enumeration method, (2) estimation of concentration, and quantification of the uncertainty therein, from single or replicate data (which may include non-detect samples), (3) estimation of the log-reduction of a treatment process (and the uncertainty therein) from pre- and post-treatment concentration estimates, (4) quantification of random concentration variability over time, and (5) estimation of the sensitivity of enumeration processes given knowledge about analytical recovery. The developed models are also used to investigate alternative strategies that may enable collection of more precise data. The concepts presented in this thesis are used to enhance analysis of pathogen concentration data in Quantitative Microbial Risk Assessment so that computed risk estimates are more predictive. Drinking water research and prudent management of treatment systems depend upon collection of reliable data and appropriate interpretation of the data that are available.
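For illustration only, the sketch below applies the kind of Bayesian reasoning described above to a single observed count: assuming Poisson-distributed organisms in the analysed volume and a known mean analytical recovery, it computes a grid posterior over the concentration. The count, volume, recovery, and prior are invented and are not taken from the thesis.

```python
import numpy as np
from scipy import stats

# Minimal sketch of Bayesian concentration estimation with imperfect recovery
# (not the thesis's actual model): organisms in the analysed volume are
# Poisson-distributed with mean c*V, each is observed with probability r
# (analytical recovery), so the observed count is Poisson(c * V * r).
# The posterior over c uses a flat prior on a grid.

count = 12          # organisms observed (assumed)
volume = 10.0       # litres analysed (assumed)
recovery = 0.4      # assumed mean analytical recovery

c_grid = np.linspace(0.01, 15.0, 2000)                  # candidate concentrations (per litre)
log_like = stats.poisson.logpmf(count, c_grid * volume * recovery)
post = np.exp(log_like - log_like.max())
dx = c_grid[1] - c_grid[0]
post /= post.sum() * dx                                  # normalise the posterior density

cdf = np.cumsum(post) * dx
median = c_grid[np.searchsorted(cdf, 0.5)]
lo, hi = c_grid[np.searchsorted(cdf, 0.025)], c_grid[np.searchsorted(cdf, 0.975)]
print(f"posterior median ~ {median:.2f} /L, 95% credible interval ~ ({lo:.2f}, {hi:.2f}) /L")
```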
|
5 |
IISS: A Framework to Influence Individuals through Social Signals on a Social Network
January 2014 (has links)
Contemporary online social platforms present individuals with social signals in the form of news feeds about their peers' activities. On networks such as Facebook and Quora, the network operator decides how that information is shown to an individual. The user, with her own interests and resource constraints, then selectively acts on a subset of the items presented to her. The network operator, in turn, shows that activity to a selection of peers, creating a behavioral loop. This mechanism of interaction and information flow raises some very interesting questions, such as: can the network operator design social signals to promote a particular activity, like sustainability or public health care awareness, or to promote a specific product? The focus of my thesis is to answer that question. In this thesis, I develop a framework to personalize social signals for users to guide their activities on an online platform. As a result, we gradually nudge the activity distribution on the platform from the initial distribution p to the target distribution q. My work is particularly applicable to guiding collaborations, guiding collective actions, and online advertising. In particular, I first propose a probabilistic model of how users behave and how information flows on the platform. The main part of this thesis then discusses the Influence Individuals through Social Signals (IISS) framework. IISS consists of four main components: (1) Learner: it learns users' interests and characteristics from their historical activities using a Bayesian model, (2) Calculator: it uses a gradient descent method to compute the intermediate activity distributions, (3) Selector: it selects users who can be influenced to adopt or drop specific activities, (4) Designer: it personalizes social signals for each user. I evaluate the performance of the IISS framework by simulation on several network topologies such as preferential attachment, small world, and random. I show that the framework gradually nudges users' activities towards the target distribution. I use both simulation and mathematical analysis to study convergence properties, such as how fast and how closely we can approach the target distribution. When the number of activities is 3, I show that for about 45% of target distributions we can achieve a KL-divergence as low as 0.05, but for some other distributions the KL-divergence can be as large as 0.5. / Dissertation/Thesis / M.S. Computer Science 2014
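A minimal sketch of the distribution-nudging idea, not the IISS update rule itself: it tracks how the KL-divergence between the platform's activity distribution p and a target q shrinks as p is pushed a small step toward q each round. The step size and distributions are assumptions for illustration.

```python
import numpy as np

# Track KL(q || p) as the activity distribution p is nudged toward the target q.
def kl(q, p):
    """KL(q || p) for discrete distributions with strictly positive entries."""
    return float(np.sum(q * np.log(q / p)))

p = np.array([0.7, 0.2, 0.1])   # initial activity distribution (3 activities)
q = np.array([0.3, 0.4, 0.3])   # target activity distribution
step = 0.15                      # assumed per-round influence strength

for t in range(1, 21):
    p = (1 - step) * p + step * q        # one round of nudging
    p /= p.sum()
    if t % 5 == 0:
        print(f"round {t:2d}: KL(q || p) = {kl(q, p):.4f}")
```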
|
6 |
Defects in E-PBF Ti-6Al-4V and their Effect on Fatigue Behaviour : Characteristics, Distribution and Impact on Life / Defekter i E-PBF Ti-6Al-4V och dess effekter på utmattningsegenskaper : Kännetecken, fördelning och livslängdspåverkan
Sandell, Viktor, January 2020 (has links)
Layer-by-layer manufacturing (additive manufacturing, AM) of metals is emerging as an alternative to conventional subtractive manufacturing, with the goal of enabling near net-shape production of complex part geometries with reduced material waste and shorter lead times. Recently, this field has experienced rapid growth through industrial adaptation but has simultaneously encountered challenges. One such challenge is the ability of AM metal to withstand loading conditions ranging from static loads to complex multiaxial thermo-mechanical fatigue loads. This makes the fatigue performance of AM materials a key consideration for the implementation of AM in production. This is especially true for AM in the aerospace industry, where safety standards are strict. Defects in metal AM materials include rough surfaces, pores, and lack-of-fusion (LOF) between build layers. These defects are detrimental to fatigue as they act as local stress concentrators that can give rise to cracks in the material. Some defects can be avoided by careful build process optimization and/or post-processing, but fully eliminating all defects is not possible. Because of this, a need arises for the capability to estimate the fatigue performance of AM-produced critical components containing defects. The aim of this thesis is to increase understanding of the connection between defect characteristics and fatigue behaviour in AM-produced Ti-6Al-4V. Defect distributions are statistically analysed for use in a simple fracture-mechanical model for fatigue life prediction. Other study areas include the impact of post-production treatments, such as chemical surface treatments and hot isostatic pressing (HIP), on defects and fatigue behaviour. The thesis comprises three scientific papers. The AM technique studied in these papers is Electron Beam Melting (EBM), in which an electron beam selectively melts pre-alloyed metal powder. In Paper 1, defects were studied using X-ray computed tomography (XCT) and fatigue crack initiation was related to the observed defect distribution. In Paper 2, XCT data was used to relate the surface morphology and roughness of post-production-treated EBM material to the near-surface defect distribution; the connection between this distribution and the manufacturing parameters was also explored. Paper 3 builds on and extends the work presented in Paper 1 by including further fatigue testing as well as a method for predicting fatigue life using statistical analysis of the observed defect distribution. The impact of a defect on the fatigue behaviour of the material was found to depend largely on its characteristics and position relative to the surface. Production and post-processing of the material were found to play a role in the severity of this impact. Finally, it was found that a probabilistic statistical analysis can be used to accurately predict the life of the studied material under the tested conditions. / SUDDEN
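As a hedged illustration of how a statistical defect-size distribution can drive a simple fracture-mechanics life estimate (this is not the model developed in the thesis), the sketch below samples initial defect depths from an assumed lognormal distribution and integrates the Paris law up to an assumed critical size; every material constant is an assumption.

```python
import numpy as np

# Sketch: Monte Carlo fatigue-life distribution from a defect-size distribution,
# integrating the Paris law da/dN = C*(dK)^m with dK = Y * dsigma * sqrt(pi * a).

rng = np.random.default_rng(2)
C, m = 1e-11, 3.0       # assumed Paris-law constants (dK in MPa*sqrt(m), da/dN in m/cycle)
Y = 0.65                # assumed geometry factor for an internal defect
d_sigma = 500.0         # assumed stress range, MPa
a_crit = 2e-3           # assumed critical crack depth, m

# Assumed lognormal distribution of initial defect depth (m), e.g. fitted to XCT data
a_init = rng.lognormal(mean=np.log(60e-6), sigma=0.5, size=50_000)

def paris_life(a_i, a_c):
    """Cycles to grow a crack from depth a_i to a_c under the Paris law (m != 2)."""
    k = C * (Y * d_sigma * np.sqrt(np.pi)) ** m
    e = 1.0 - m / 2.0
    return (a_c ** e - a_i ** e) / (k * e)

lives = paris_life(a_init, a_crit)
print(f"median life ~ {np.median(lives):.2e} cycles, "
      f"1st percentile ~ {np.percentile(lives, 1):.2e} cycles")
```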
|
7 |
Probabilistic Modelling of Hearing : Speech Recognition and Optimal Audiometry
Stadler, Svante, January 2009 (has links)
Hearing loss afflicts as many as 10% of our population. Fortunately, technologies designed to alleviate the effects of hearing loss are improving rapidly, including cochlear implants and the increasing computing power of digital hearing aids. This thesis focuses on theoretically sound methods for improving hearing aid technology. The main contributions are documented in three research articles, which treat two separate topics: modelling of human speech recognition (Papers A and B) and optimization of diagnostic methods for hearing loss (Paper C). Papers A and B present a hidden Markov model-based framework for simulating speech recognition in noisy conditions using auditory models and signal detection theory. In Paper A, a model of normal and impaired hearing is employed, in which a subject's pure-tone hearing thresholds are used to adapt the model to the individual. In Paper B, the framework is modified to simulate hearing with a cochlear implant (CI). Two models of hearing with CI are presented: a simple, functional model and a biologically inspired model. The models are adapted to the individual CI user by simulating a spectral discrimination test. The framework can estimate speech recognition ability for a given hearing impairment or cochlear implant user. This estimate could potentially be used to optimize hearing aid settings. Paper C presents a novel method for sequentially choosing the sound level and frequency for pure-tone audiometry. A Gaussian mixture model (GMM) is used to represent the probability distribution of hearing thresholds at 8 frequencies. The GMM is fitted to over 100,000 hearing thresholds from a clinical database. After each response, the GMM is updated using Bayesian inference. The sound level and frequency are chosen so as to maximize a predefined objective function, such as the entropy of the probability distribution. It is found through simulation that an average of 48 tone presentations are needed to achieve the same accuracy as the standard method, which requires an average of 135 presentations.
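A simplified sketch of the Bayesian stimulus-selection idea described for Paper C. The thesis uses a GMM over thresholds at 8 frequencies; here a single frequency with a grid posterior and an assumed psychometric function is used, and all numbers are invented for illustration.

```python
import numpy as np

# After each yes/no response the posterior over the threshold is updated, and the
# next tone level is chosen to minimize the expected posterior entropy.

levels = np.arange(-10, 101, 5.0)          # candidate tone levels, dB HL
grid = np.arange(-10, 101, 1.0)            # candidate thresholds, dB HL
prior = np.exp(-0.5 * ((grid - 30) / 20) ** 2)
prior /= prior.sum()                        # assumed roughly normal prior

def p_heard(level, threshold, slope=0.5):
    """Assumed psychometric function: P(respond 'heard' | level, threshold)."""
    return 1.0 / (1.0 + np.exp(-slope * (level - threshold)))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def next_level(posterior):
    """Pick the level minimizing the expected posterior entropy."""
    best, best_h = None, np.inf
    for lv in levels:
        like = p_heard(lv, grid)
        p_yes = np.sum(posterior * like)
        post_yes = posterior * like / p_yes
        post_no = posterior * (1 - like) / (1 - p_yes)
        h = p_yes * entropy(post_yes) + (1 - p_yes) * entropy(post_no)
        if h < best_h:
            best, best_h = lv, h
    return best

# One simulated run against an assumed true threshold of 45 dB HL
rng = np.random.default_rng(3)
posterior, true_thr = prior.copy(), 45.0
for _ in range(15):
    lv = next_level(posterior)
    heard = rng.random() < p_heard(lv, true_thr)
    like = p_heard(lv, grid) if heard else 1 - p_heard(lv, grid)
    posterior = posterior * like
    posterior /= posterior.sum()
print(f"estimated threshold ~ {np.sum(grid * posterior):.1f} dB HL")
```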
|
8 |
Obrazové možné světy / Pictorial Possible Worlds
Špelda, Petr, January 2017 (has links)
The present text develops a model designed to generate conceptual theories with respect to the pictorial (visual) form of representation. This is achieved by combining a computational approach to cognition with philosophical devices of the analytic tradition. The model itself, simulating the structure of reality, consists of (i) a metaphysical stage based on Armstrong's theory of combinatorial possibility, (ii) an epistemological stage proposing emergent phenomena founded upon the notion of computational irreducibility, and (iii) a semantic stage proposing a stochastic account of concepts anchored in the intensional/extensional apprehension of meaning. Towards the end, the model is applied to develop a conceptual account of a case of social, political, and economic organization of human communities as depicted in the visual propaganda of the so-called Islamic State.
|