401 |
Attribute Learning using Joint Human and Machine Computation. Law, Edith L.M. 01 August 2012 (has links)
This thesis is centered on the problem of attribute learning -- using the joint effort of humans and machines to describe objects, e.g., determining that a piece of music is "soothing," that the bird in an image "has a red beak," or that Ernest Hemingway is a "Nobel Prize-winning author." In this thesis, we present new methods for solving the attribute-learning problem using the joint effort of the crowd and machines via human computation games.
When creating a human computation system, typically two design objectives need to be simultaneously satisfied. The first objective is human-centric -- the task prescribed by the system must be intuitive, appealing and easy to accomplish for human workers. The second objective is task-centric -- the system must actually perform the task at hand. These two goals are often at odds with each other, especially in the casual game setting. This thesis shows that human computation games can accomplish both the human-centric and task-centric objectives, if we first design for humans, then devise machine learning algorithms to work around the limitations of human workers and complement their abilities in order to jointly accomplish the task of learning attributes. We demonstrate the effectiveness of our approach in three concrete problem settings: music tagging, bird image classification and noun phrase categorization.
Contributions of this thesis include a framework for attribute learning, two new game mechanisms, experiments showing the effectiveness of the hybrid human and machine computation approach for learning attributes in vocabulary-rich settings and under the constraints of workers' limited knowledge, as well as deployed games played by tens of thousands of people, generating large datasets for machine learning.
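To make the hybrid human-and-machine idea concrete, here is a minimal, hypothetical sketch (the function name, vote threshold and data layout are our own, not the thesis's pipeline) of the machine-side step that turns redundant, noisy game judgments into labels an attribute classifier can be trained on:

```python
from collections import Counter, defaultdict

def aggregate_game_labels(judgments, min_votes=3):
    """Majority-vote aggregation of noisy (item, attribute, vote) triples
    collected from players of a human computation game."""
    votes = defaultdict(Counter)
    for item, attribute, vote in judgments:      # vote is True or False
        votes[(item, attribute)][vote] += 1
    labels = {}
    for key, counter in votes.items():
        if sum(counter.values()) >= min_votes:   # require enough players
            labels[key] = counter.most_common(1)[0][0]
    return labels

# Three players judge whether a song has the attribute "soothing":
judgments = [("song_42", "soothing", True),
             ("song_42", "soothing", True),
             ("song_42", "soothing", False)]
print(aggregate_game_labels(judgments))   # {('song_42', 'soothing'): True}
```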
|
402 |
A Kernel Designed for Virtualization (Un kernel diseñado para la virtualización). Zabaljáuregui, Matías January 2012 (has links)
In this work, virtualization refers to the creation of abstractions of physical devices or resources in order to offer their services to one or more execution environments. Concretely, it is a technique that makes it possible to hide the functionality of a device (hard disk, network card, memory) or resource (server, network, operating system), to combine the capabilities of several of them so as to present them as a different entity with different capabilities, or to create a virtual equivalent of them. In general terms, virtualization makes instances of virtualized resources independent of the underlying physical substrate, presenting them transparently to the users and applications that use them without distinguishing them from the real ones.
(Paragraph extracted from the text as an abstract)
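As a purely illustrative sketch of this definition (the class names are invented, not drawn from the thesis), combining several physical devices and presenting them transparently as a single virtual entity might look like this:

```python
class PhysicalDisk:
    """A physical device exposing fixed-size blocks."""
    def __init__(self, blocks):
        self.blocks = blocks

    def read(self, i):
        return self.blocks[i]

class VirtualDisk:
    """Presents several physical disks as one larger device, hiding the
    underlying layout from users (simple concatenation of block ranges)."""
    def __init__(self, disks):
        self.disks = disks

    def read(self, i):
        for disk in self.disks:
            if i < len(disk.blocks):
                return disk.read(i)
            i -= len(disk.blocks)
        raise IndexError("block out of range")

vdisk = VirtualDisk([PhysicalDisk(["a", "b"]), PhysicalDisk(["c"])])
print(vdisk.read(2))   # "c", served transparently from the second disk
```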
|
403 |
Statistical Inference Utilizing Agent Based Models. Heard, Daniel Philip January 2014 (has links)
Agent-based models (ABMs) are computational models used to simulate the behaviors, actions and interactions of agents within a system. The individual agents each have their own set of assigned attributes and rules, which determine their behavior within the ABM system. These rules can be deterministic or probabilistic, allowing for a great deal of flexibility. ABMs allow us to observe how the behaviors of the individual agents affect the system as a whole and whether any emergent structure develops within the system. Examining rule sets in conjunction with the corresponding emergent structure shows how small-scale changes can affect large-scale outcomes within the system. Thus, we can better understand and predict the development and evolution of systems of interest.

ABMs have become ubiquitous: they are used in business (virtual auctions to select electronic ads for display), atmospheric science (weather forecasting), and public health (to model epidemics). But there is limited understanding of the statistical properties of ABMs. Specifically, there are no formal procedures for calculating confidence intervals on predictions, for assessing goodness-of-fit, or for testing whether a specific parameter (rule) is needed in an ABM. Motivated by important challenges of this sort, this dissertation focuses on developing methodology for uncertainty quantification and statistical inference in a likelihood-free context for ABMs.

Chapter 2 of the thesis develops theory related to ABMs, including procedures for model validation, assessing model equivalence and measuring model complexity. Chapters 3 and 4 focus on two approaches for performing likelihood-free inference involving ABMs, which is necessary because the likelihood function is intractable owing to the variety of input rules and the complexity of outputs. Chapter 3 explores the use of Gaussian process emulators in conjunction with ABMs to perform statistical inference. This draws upon a wealth of research on emulators, which find smooth functions on lower-dimensional Euclidean spaces that approximate the ABM. Emulator methods combine observed data with output from ABM simulations, using these to fit and calibrate Gaussian-process approximations. Chapter 4 discusses Approximate Bayesian Computation (ABC) for ABM inference, the goal of which is to obtain an approximation of the posterior distribution of a set of parameters given observed data.

The final chapters of the thesis demonstrate these approaches for inference in two applications. Chapter 5 presents an application that models the spread of HIV based on detailed data on a social network of men who have sex with men (MSM) in southern India. Use of an ABM allows us to determine which social/economic/policy factors contribute to the transmission of the disease. We aim to estimate the effect that proposed medical interventions will have on the spread of HIV in this community. Chapter 6 examines the function of a heroin market in the Denver, Colorado metropolitan area. Extending an ABM developed from ethnographic research, we explore a procedure for reducing the model, as well as estimating posterior distributions of important quantities based on simulations. / Dissertation
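As a hedged illustration of the likelihood-free setting (a toy contagion ABM and a plain ABC rejection sampler; the rule, summary statistic and tolerance are our own assumptions, not the dissertation's models), parameters are drawn from the prior, the simulator is run forward, and draws are kept when the simulated summary lands near the observed one:

```python
import random

def run_abm(p_adopt, n_agents=100, steps=20, seed=None):
    """Toy ABM: agents sit on a ring; each step, an inactive agent with an
    active neighbour activates with probability p_adopt."""
    rng = random.Random(seed)
    state = [False] * n_agents
    state[0] = True
    for _ in range(steps):
        nxt = state[:]
        for i in range(n_agents):
            if not state[i] and (state[i - 1] or state[(i + 1) % n_agents]):
                if rng.random() < p_adopt:
                    nxt[i] = True
        state = nxt
    return sum(state)          # summary statistic: number of active agents

def abc_rejection(observed, prior_draw, n_sims=1000, tol=5):
    """ABC by rejection: keep parameter draws whose simulated summary
    falls within tol of the observed summary."""
    return [theta for theta in (prior_draw() for _ in range(n_sims))
            if abs(run_abm(theta) - observed) <= tol]

# Uniform(0, 1) prior on the adoption probability:
posterior = abc_rejection(observed=25, prior_draw=random.random)
print(len(posterior), sum(posterior) / max(len(posterior), 1))
```

The accepted draws approximate the posterior over the rule's parameter without ever evaluating a likelihood, which is exactly the regime the abstract describes.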
|
404 |
Standing deadwood : an articulating landscape assemblage. Hare, Jason 12 September 2014 (has links)
The intention of this practicum is the exploration of my own articulating actions as performed within a landscape assemblage. The goal of this work is to act as a catalyst for the discussion surrounding the capacity for action within an existing landscape assemblage, the agency of territorialization/de-territorialization that may follow these actions, and small techniques that may facilitate an individual's design process within the discipline of landscape architecture. Questioning what might constitute a landscape assemblage, while understanding that its identity relies upon the articulation of the multiple agents that formalize its makeup, remains a constant.
|
405 |
Quantum information theory and the foundations of quantum mechanics. Timpson, Christopher Gordon January 2004 (has links)
This thesis is a contribution to the debate on the implications of quantum information theory for the foundational problems of quantum mechanics. In Part I an attempt is made to shed some light on the nature of information and quantum information theory. It is emphasized that the everyday notion of information is to be firmly distinguished from the technical notions arising in information theory; however, it is maintained that in both settings ‘information’ functions as an abstract noun, hence does not refer to a particular or substance. The popular claim ‘Information is Physical’ is assessed and it is argued that this proposition faces a destructive dilemma. Accordingly, the slogan may not be understood as an ontological claim, but at best, as a methodological one. A novel argument is provided against Dretske’s (1981) attempt to base a semantic notion of information on ideas from information theory. The function of various measures of information content for quantum systems is explored and the applicability of the Shannon information in the quantum context maintained against the challenge of Brukner and Zeilinger (2001). The phenomenon of quantum teleportation is then explored as a case study serving to emphasize the value of recognising the logical status of ‘information’ as an abstract noun: it is argued that the conceptual puzzles often associated with this phenomenon result from the familiar error of hypostatizing an abstract noun. The approach of Deutsch and Hayden (2000) to the questions of locality and information flow in entangled quantum systems is assessed. It is suggested that the approach suffers from an equivocation between a conservative and an ontological reading, and the differing implications of each are examined. Some results are presented on the characterization of entanglement in the Deutsch-Hayden formalism. Part I closes with a discussion of some philosophical aspects of quantum computation. In particular, it is argued against Deutsch that the Church-Turing hypothesis is not underwritten by a physical principle, the Turing Principle. Some general morals are drawn concerning the nature of quantum information theory.

In Part II, attention turns to the question of the implications of quantum information theory for our understanding of the meaning of the quantum formalism. Following some preliminary remarks, two particular information-theoretic approaches to the foundations of quantum mechanics are assessed in detail. It is argued that Zeilinger’s (1999) Foundational Principle is unsuccessful as a foundational principle for quantum mechanics. The information-theoretic characterization theorem of Clifton, Bub and Halvorson (2003) is assessed more favourably, but the generality of the approach is questioned and it is argued that the implications of the theorem for the traditional foundational problems in quantum mechanics remain obscure.
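For reference, the two standard measures at issue in the Brukner and Zeilinger exchange are the Shannon information of a measurement's outcome distribution and the von Neumann entropy of the quantum state itself:

```latex
H(\vec{p}) = -\sum_i p_i \log_2 p_i ,
\qquad
S(\rho) = -\operatorname{Tr}\!\left(\rho \log_2 \rho\right).
```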
|
406 |
Applications of mathematical modelling in demand analgesia. Lammer, Peter January 1986 (has links)
This thesis describes applications of mathematical modelling to systems of demand analgesia for the relief of acute postoperative pain. It builds upon work described in the D.Phil. thesis of M.P. Reasbeck. Following major surgery, patients are given a hand-held button which they press when in need of pain relief. The relief is afforded by automatic intravenous infusion of opiates. New clinical demand analgesia hardware, PRODAC, has been developed and data have been collected with it in two major trials involving a total of 80 patients. Patients' drug requirements have been found not to be correlated with body weight, contrary to conventional teaching. The type of operation was also found to have no significant influence upon drug requirements. The performance of transcutaneous nerve stimulation (TNS) as a method of analgesia for acute postoperative pain has been studied and found to be poor. Reasbeck's mathematical model of patients in pain has been corrected and extended. The representation of pharmacokinetics has been enhanced by modelling the transfer of drug between blood plasma and analgesic receptor sites as a first-order process. The time constant of this process has been calculated for morphine using a novel method and found to be 12 minutes. On-line estimation of 2nd-order pharmacokinetic time constants has been found in simulation not to be feasible. New software has been used to tune the revised model to the clinical data collected with PRODAC. Model behaviour is now demonstrably life-like, which was not previously the case. Blood samples taken during demand analgesia have permitted a comparison between measured and estimated drug concentrations, with good results.
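A minimal sketch of the first-order plasma-to-receptor transfer described above, in notation of our own choosing rather than the thesis's: with plasma concentration C_p(t), effect-site concentration C_e(t) and time constant tau,

```latex
\frac{dC_e}{dt} = \frac{1}{\tau}\left( C_p(t) - C_e(t) \right),
\qquad \tau \approx 12\ \text{min for morphine, as reported above.}
```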
|
407 |
Development of a Symbolic Computer Algebra Toolbox for 2D Fourier Transforms in Polar Coordinates. Dovlo, Edem 29 September 2011 (has links)
The Fourier transform is one of the most useful tools in science and engineering and can be extended to multiple dimensions and to curvilinear coordinates. Multidimensional Fourier transforms are widely used in image processing, tomographic reconstructions and, in fact, any application that requires a multidimensional convolution. By examining a function in the frequency domain, additional information and insights may be obtained.
In this thesis, the development of a symbolic computer algebra toolbox to compute two dimensional Fourier transforms in polar coordinates is discussed. Among the many operations implemented in this toolbox are different types of convolutions and procedures that allow for managing the toolbox effectively. The implementation of the two dimensional Fourier transform in polar coordinates within the toolbox is shown to be a combination of two significantly simpler transforms. The toolbox is also tested throughout the thesis to verify its capabilities.
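The "two significantly simpler transforms" referred to are, in one common sign and normalization convention (our rendering, not necessarily the toolbox's), an angular Fourier series combined with Hankel transforms of matching order:

```latex
f(r,\theta) = \sum_{n=-\infty}^{\infty} f_n(r)\, e^{i n \theta},
\qquad
F_n(\rho) = 2\pi\, (-i)^n \int_0^{\infty} f_n(r)\, J_n(\rho r)\, r \, dr,
\qquad
F(\rho,\psi) = \sum_{n=-\infty}^{\infty} F_n(\rho)\, e^{i n \psi}.
```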
|
408 |
Hermite Forms of Polynomial Matrices. Gupta, Somit January 2011 (has links)
This thesis presents a new algorithm for computing the Hermite form of a polynomial matrix. Given a nonsingular n by n matrix A filled with degree-d polynomials with coefficients from a field, the algorithm computes the Hermite form of A in an expected number of field operations similar to that of matrix multiplication. The algorithm is randomized, of the Las Vegas type.
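For reference, the standard definition (with an illustrative matrix of our own, not one from the thesis): the Hermite form of a nonsingular A over F[x] is the unique H = UA, with U unimodular, such that H is upper triangular, its diagonal entries are monic, and every entry above a diagonal entry has strictly smaller degree than that diagonal entry, e.g.

```latex
H \;=\; \begin{pmatrix}
x^{2}+1 & 3   & 5 \\
0       & x+2 & 2 \\
0       & 0   & x
\end{pmatrix}.
```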
|
409 |
In-network computation in sensor networks. Sappidi, Rajasekhar Reddy 22 November 2012 (has links)
Sensor networks are an important emerging class of networks that have many applications. A sink in these networks acts as a bridge between the sensor nodes and the end-user (which may be automated and/or part of the sink). Typically, convergecast is performed, in which all the data collected by the sensors is relayed to the sink, which in turn presents the relevant information to the end-user. Interestingly, some applications require the sink to relay just a function of the data collected by the sensors. For instance, in a fire alarm system, the sink needs to monitor the maximum of the temperature readings of all the sensors. For these applications, instead of performing convergecast, we can let the intermediate nodes process the data they receive, to significantly reduce the volume of traffic transmitted and increase the rate at which the data is collected and processed at the sink: this is known as in-network computation.

Most of the current literature on this novel technique focuses on asymptotic results for large networks and for very elementary functions. In this dissertation, we study a new class of functions for which we want to compute explicit solutions for networks of practical size.

We consider the applications where the sink is interested in the first M statistical moments of the data collected at a certain time. The k-th statistical moment is defined as the expectation of the k-th power of the data. The M=1 case represents the elementary functions like MAX, MIN, MEAN, etc. that are commonly considered in the literature. For this class of functions, we are interested in explicitly computing the maximum achievable throughput, including routing, scheduling and queue management, for any given network when in-network computation is allowed.
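To make concrete the decomposability that in-network computation exploits here, a minimal sketch in our own notation (not the dissertation's protocol): each node forwards only the sample count together with the first M power sums, intermediate nodes add summaries component-wise, and the sink recovers the moments exactly:

```python
def local_summary(readings, M):
    """What a sensor sends upstream: sample count plus the power sums
    needed for the first M statistical moments."""
    return [len(readings)] + [sum(x ** k for x in readings)
                              for k in range(1, M + 1)]

def merge(a, b):
    """In-network combination at an intermediate node: add the children's
    summaries component-wise instead of relaying raw readings."""
    return [x + y for x, y in zip(a, b)]

def moments(summary):
    """At the sink: the k-th moment E[X^k] is the k-th power sum / count."""
    n, *power_sums = summary
    return [s / n for s in power_sums]

# Two subtrees report; the sink computes the first M = 2 moments:
s = merge(local_summary([1.0, 2.0], M=2), local_summary([3.0], M=2))
print(moments(s))   # [2.0, 4.666...]: mean and mean square of {1, 2, 3}
```

The forwarded message always has M + 1 entries, however many readings a subtree contains, which is precisely where the traffic reduction over convergecast comes from.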
Flow models have been routinely used to solve optimal joint routing and scheduling problems when there is no in-network computation, and they are typically tractable for relatively large networks. However, deriving such models is not obvious when in-network computation is allowed. Considering a single-rate wireless network and the physical model of interference, we develop a discrete-time model for the real-time network operation and perform two transformations to obtain a flow model that keeps the essence of in-network computation. This model gives an upper bound on the maximum achievable throughput. To show the tightness of that upper bound, we derive a numerical lower bound by computing a feasible solution to the discrete-time model. This lower bound turns out to be close to the upper bound, proving that the flow model is an excellent approximation to the discrete-time model.

We then adapt the flow model to a wired multi-rate network with asynchronous transmissions on links with different capacities. To compute the lower bound for wired networks, we propose a heuristic strategy involving the generation of multiple trees and effective queue management that achieves a throughput close to the one computed by the flow model. This cross-validates the tightness of the upper bound and the goodness of our heuristic strategy. Finally, we provide several engineering insights on what in-network computation can achieve in both types of networks.
|
410 |
On the Pareto-Following Variation Operator for fast converging Multiobjective Evolutionary Algorithms. Talukder, A. K. M. K. A. January 2008 (has links)
The focus of this research is to provide an efficient approach to dealing with computationally expensive Multiobjective Optimization Problems (MOPs). Typically, approximation or surrogate-based techniques are adopted to reduce the computational cost. In such cases, the original expensive objective function is replaced by a cheaper mathematical model that mimics the input-output (design variable to objective value) behavior. However, it is difficult to model an exact substitute of the targeted objective function. Furthermore, if this kind of approach is used in an evolutionary search, the number of function evaluations is not literally reduced: each original function evaluation is simply replaced by a surrogate/approximate function evaluation. As a result, if a large number of individuals are considered, the surrogate model fails to offer a smaller computational cost.

To tackle this problem, we have reformulated the concept of surrogate modeling in a different way, one more suitable for the Multiobjective Evolutionary Algorithm (MOEA) paradigm. In our approach, we do not approximate the objective function; rather, we model the input-output behavior of the underlying MOEA itself. The model attempts to identify the search path (in both design-variable and objective spaces), and from this trajectory the model is guaranteed to generate non-dominated solutions (especially during the initial iterations of the underlying MOEA, with respect to the current solutions) for the next iterations of the MOEA. Therefore, the MOEA can avoid re-evaluating the dominated solutions and thus can save a large amount of computational cost due to expensive function evaluations. We have designed our approximation model as a variation operator that follows the trajectory of the fronts and can be "plugged in" to any kind of MOEA where non-domination based selection is used. Hence it is termed the "Pareto-Following Variation Operator (PFVO)". This approach also provides the added advantage that we can still use the original objective function, so the search procedure remains robust and suitable, especially for dynamic problems.

We have integrated the model into three baseline MOEAs: the "Non-dominated Sorting Genetic Algorithm II (NSGA-II)", the "Strength Pareto Evolutionary Algorithm II (SPEA-II)" and the recently proposed "Regularity Model Based Estimation of Distribution Algorithm (RM-MEDA)". We have also conducted an exhaustive simulation study using several benchmark MOPs. Detailed performance and statistical analysis reveals promising results. As an extension, we have implemented our idea for dynamic MOPs. We have also integrated PFVO into diffusion-based/cellular MOEAs in a distributed/Grid environment. Most experimental results and analyses reveal that PFVO can be used as a performance enhancement tool for any kind of MOEA.
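A heavily hedged sketch of the front-following idea as we read it from this abstract (the function and the extrapolation rule are invented for illustration, not the thesis's operator): offspring are proposed by extrapolating each solution's recent movement in design-variable space, so that candidates land ahead of the advancing front without new expensive evaluations:

```python
import random

def pareto_following_variation(prev_front, curr_front, step=1.0):
    """Extrapolate the front's recent trajectory: for each solution, move
    further along the direction it travelled between two iterations.
    prev_front and curr_front are paired lists of decision vectors."""
    offspring = []
    for x_prev, x_curr in zip(prev_front, curr_front):
        direction = [c - p for p, c in zip(x_prev, x_curr)]
        child = [c + step * random.random() * d
                 for c, d in zip(x_curr, direction)]
        offspring.append(child)
    return offspring

# The front moved from (1.0, 1.0) toward (0.8, 0.9); propose points beyond it.
print(pareto_following_variation([[1.0, 1.0]], [[0.8, 0.9]]))
```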
|