51

Alcohol screening and brief intervention in police custody suites: pilot Cluster Randomised Controlled Trial (AcCePT)

Addison, M., Mcgovern, R., Angus, C., Becker, F., Brennan, A., Brown, H., Coulton, S., Crowe, L., Gilvarry, E., Hickman, M., Howel, D., Mccoll, E., Muirhead, C., Newbury-Birch, D., Waqas, Muhammad, Kaner, E. 09 March 2020 (has links)
Aims: There is a clear association between alcohol use and offending behaviour, and significant police time is spent on alcohol-related incidents. This study aimed to test the feasibility of a trial of screening and brief intervention in police custody suites to reduce heavy drinking and re-offending behaviour. Short summary: We achieved target recruitment and high brief intervention delivery when this occurred immediately after screening. Low rates of return for counselling and retention at follow-up were challenges for a definitive trial. Conversely, high consent rates for access to police data suggested at least some outcomes could be measured remotely. Methods: A three-armed pilot Cluster Randomised Controlled Trial with an embedded qualitative interview-based process evaluation to explore acceptability issues in six police custody suites (north east and south west of the UK). Interventions included: 1. Screening only (controls); 2. 10 minutes of Brief Advice; 3. Brief Advice plus 20 minutes of Brief Counselling. Results: Of 3330 arrestees approached, 2228 were eligible for screening (67%) and 720 consented (32%); 386 (54%) scored 8+ on AUDIT; and 205 (53%) were enrolled (79 controls, 65 brief advice and 61 brief counselling). Follow-up rates at 6 and 12 months were 29% and 26%, respectively. However, routinely collected re-offending data were obtained for 193 (94%) participants. Indices of deprivation data were calculated for 184 (90%) participants; 37.6% of these resided in the 20% most deprived areas of the UK. Qualitative data showed that all arrestees reported awareness that participation was voluntary and that the trial was separate from police work, and the majority said trial procedures were acceptable. Conclusion: Despite hitting target recruitment and same-day brief intervention delivery, a future trial of alcohol screening and brief intervention in a police custody setting would only be feasible if routinely collected re-offending and health data were used for outcome measurement. / NIHR School for Public Health Research (SPHR) (SPHR-SWP-ALC-WP2). Fuse is a UK Clinical Research Collaboration (UKCRC) Public Health Research Centre of Excellence. Funding for Fuse from the British Heart Foundation, Cancer Research UK, Economic and Social Research Council, Medical Research Council, and the National Institute for Health Research, under the auspices of the UKCRC, is gratefully acknowledged.
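As a rough cross-check of the figures quoted in this abstract, the short Python sketch below reproduces the recruitment-funnel percentages from the raw counts; the stage labels are paraphrased from the abstract.

```python
# Minimal sketch: reproduce the recruitment-funnel percentages quoted in the
# AcCePT abstract above. Counts are taken directly from the abstract; the
# stage labels are paraphrases.
funnel = [
    ("approached", 3330, None),
    ("eligible for screening", 2228, 3330),    # 67%
    ("consented", 720, 2228),                  # 32%
    ("scored 8+ on AUDIT", 386, 720),          # 54%
    ("enrolled", 205, 386),                    # 53%
    ("re-offending data obtained", 193, 205),  # 94%
    ("deprivation data calculated", 184, 205), # 90%
]

for label, n, denom in funnel:
    if denom is None:
        print(f"{label}: {n}")
    else:
        print(f"{label}: {n} ({100 * n / denom:.0f}% of {denom})")
```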
52

Response Surface Design and Analysis in the Presence of Restricted Randomization

Parker, Peter A. 31 March 2005 (has links)
Practical restrictions on randomization are commonplace in industrial experiments due to the presence of hard-to-change or costly-to-change factors. Employing a split-plot design structure minimizes the number of required experimental settings for the hard-to-change factors. In this research, we propose classes of equivalent estimation second-order response surface split-plot designs for which the ordinary least squares estimates of the model are equivalent to the generalized least squares estimates. Designs that possess the equivalence property enjoy the advantages of best linear unbiased estimates and design selection that is robust to model misspecification and independent of the variance components. We present a generalized proof of the equivalence conditions that enables the development of several systematic design construction strategies and provides the ability to verify numerically that a design provides equivalent estimates, resulting in a broad catalog of designs. We explore the construction of balanced and unbalanced split-plot versions of the central composite and Box-Behnken designs. In addition, we illustrate the utility of numerical verification in generating D-optimal and minimal point designs, including split-plot versions of the Notz, Hoke, Box and Draper, and hybrid designs. Finally, we consider the practical implications of analyzing a near-equivalent design when a suitable equivalent design is not available. By simulation, we compare methods of estimation to provide a practitioner with guidance on analysis alternatives when a best linear unbiased estimator is not available. Our goal throughout this research is to develop practical experimentation strategies for restricted randomization that are consistent with the philosophy of traditional response surface methodology. / Ph. D.
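To illustrate the kind of numerical verification the abstract refers to, the sketch below checks whether the OLS and GLS estimator matrices coincide under a split-plot error structure. The design shown is a toy first-order example, not one of the second-order catalog designs from the thesis.

```python
# Minimal sketch of a numerical equivalence check: a split-plot design gives
# OLS = GLS estimates when the OLS and GLS estimator matrices coincide under
# the error structure V = I + d * Z Z' for every whole-plot-to-subplot
# variance ratio d. The toy design below is a first-order illustration.
import numpy as np

def is_equivalent_estimation(X, Z, ratios=(0.1, 1.0, 10.0), tol=1e-10):
    """Check numerically that (X'X)^-1 X' equals (X'V^-1 X)^-1 X'V^-1
    for several variance ratios d."""
    n = X.shape[0]
    ols = np.linalg.solve(X.T @ X, X.T)           # OLS estimator matrix
    for d in ratios:
        V = np.eye(n) + d * (Z @ Z.T)             # split-plot covariance
        Vinv = np.linalg.inv(V)
        gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)
        if np.max(np.abs(ols - gls)) > tol:
            return False
    return True

# Two whole plots (hard-to-change factor w), two sub-plot runs each (factor s).
X = np.array([[1,  1,  1],
              [1,  1, -1],
              [1, -1,  1],
              [1, -1, -1]], dtype=float)  # columns: intercept, w, s
Z = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)  # run -> whole plot
print(is_equivalent_estimation(X, Z))     # True for this balanced toy design
```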
53

Randomization for Efficient Nonlinear Parametric Inversion

Sariaydin, Selin 04 June 2018 (has links)
Nonlinear parametric inverse problems appear in many applications in science and engineering. We focus on diffuse optical tomography (DOT) in medical imaging. DOT aims to recover an unknown image of interest, such as the absorption coefficient in tissue to locate tumors in the body. Using a mathematical (forward) model to predict measurements given a parametrization of the tissue, we minimize the misfit between predicted and actual measurements up to a given noise level. The main computational bottleneck in such inverse problems is the repeated evaluation of this large-scale forward model, which corresponds to solving large linear systems for each source and frequency at each optimization step. Moreover, to efficiently compute derivative information, we need to solve, repeatedly, linear systems with the adjoint for each detector and frequency. As rapid advances in technology allow for large numbers of sources and detectors, these problems become computationally prohibitive. In this thesis, we introduce two methods to drastically reduce this cost. To efficiently implement Newton methods, we extend the use of simultaneous random sources to reduce the number of linear system solves to include simultaneous random detectors. Moreover, we combine simultaneous random sources and detectors with optimized ones that lead to faster convergence and more accurate solutions. We can use reduced order models (ROM) to drastically reduce the size of the linear systems to be solved in each optimization step while still solving the inverse problem accurately. However, the construction of the ROM bases still incurs a substantial cost. We propose to use randomization to drastically reduce the number of large linear solves needed for constructing the global ROM bases without degrading the accuracy of the solution to the inversion problem. We demonstrate the efficiency of these approaches with 2-dimensional and 3-dimensional examples from DOT; however, our methods have the potential to be useful for other applications as well. / Ph. D.
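The sketch below illustrates the simultaneous-random-sources idea in its simplest form: the data misfit is estimated from a few random combinations of the sources rather than one linear solve per source. The operator, sizes, and weights are illustrative stand-ins, not the DOT forward model used in the thesis.

```python
# Minimal sketch of simultaneous random sources: replace one solve per source
# column with k solves against random Rademacher combinations of the sources,
# which gives an unbiased estimate of the full data misfit.
import numpy as np

rng = np.random.default_rng(0)
n, n_src, n_det = 200, 64, 64

A = np.diag(np.linspace(1.0, 4.0, n)) + 0.01 * rng.standard_normal((n, n))  # stand-in forward operator
S = rng.standard_normal((n, n_src))        # source terms (one column per source)
P = rng.standard_normal((n, n_det))        # detector projections
D = P.T @ np.linalg.solve(A, S)            # "observed" data for this toy model

def misfit_full(A_model):
    R = P.T @ np.linalg.solve(A_model, S) - D            # n_src solves
    return np.linalg.norm(R, "fro") ** 2

def misfit_sketched(A_model, k=8):
    # Rademacher weights combine sources; E[misfit_sketched] = misfit_full.
    W = rng.choice([-1.0, 1.0], size=(n_src, k))
    R = P.T @ np.linalg.solve(A_model, S @ W) - D @ W     # only k solves
    return np.linalg.norm(R, "fro") ** 2 / k

A_trial = A + 0.05 * np.eye(n)             # perturbed model to evaluate
print(misfit_full(A_trial), misfit_sketched(A_trial))
```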
54

Numerical Methods for the Chemical Master Equation

Zhang, Jingwei 20 January 2010 (has links)
The chemical master equation, formulated on the Markov assumption of underlying chemical kinetics, offers an accurate stochastic description of general chemical reaction systems on the mesoscopic scale. The chemical master equation is especially useful when formulating mathematical models of gene regulatory networks and protein-protein interaction networks, where the numbers of molecules of most species are around tens or hundreds. However, solving the master equation directly suffers from the so-called "curse of dimensionality". This thesis first studies the numerical properties of the master equation using existing numerical methods and parallel machines. Next, approximation algorithms, namely the adaptive aggregation method and the radial basis function collocation method, are proposed as new paths to resolve the "curse of dimensionality". Several numerical results are presented to illustrate the promises and potential problems of these new algorithms. Comparisons with other numerical methods like Monte Carlo methods are also included. Development and analysis of the linear Shepard algorithm and its variants, all of which could be used for high-dimensional scattered data interpolation problems, are also included here, as a candidate to help solve the master equation by building surrogate models in high dimensions. / Ph. D.
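For readers unfamiliar with the chemical master equation, the sketch below builds the generator matrix for a single-species birth-death process truncated at N molecules and propagates the probability vector with a matrix exponential; the rates and truncation level are illustrative.

```python
# Minimal sketch of the chemical master equation for a single-species
# birth-death process (production at rate k, degradation at rate g*x),
# truncated at N molecules: dp/dt = A p with A[to, from] the generator matrix.
import numpy as np
from scipy.linalg import expm

k, g, N = 10.0, 1.0, 60             # production rate, degradation rate, truncation
A = np.zeros((N + 1, N + 1))
for x in range(N + 1):
    if x < N:
        A[x + 1, x] += k            # birth: x -> x+1
        A[x, x] -= k
    if x > 0:
        A[x - 1, x] += g * x        # death: x -> x-1
        A[x, x] -= g * x

p0 = np.zeros(N + 1); p0[0] = 1.0   # start with zero molecules
p = expm(A * 5.0) @ p0              # probability distribution at t = 5

mean = np.arange(N + 1) @ p
print(f"mean copy number ~ {mean:.2f} (stationary mean k/g = {k / g})")
```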
55

Nonparametric Combination Methodology: A Better Way to Handle Composite Endpoints?

Baurne, Yvette January 2015 (has links)
Composite endpoints are widely used in clinical trials. The outcome of a clinical trial can affect many individuals, so it is important that the methods used are as effective and correct as possible. Improvements to the standard method of testing composite endpoints have been proposed, and in this thesis the alternative method using nonparametric combination methodology is compared to the standard method. In a simulation study, the power of three combining functions (Fisher, Tippett and Logistic) is compared to the power of the standard method. The performance of the four methods is evaluated for different compositions of treatment effects, as well as for independent and dependent components. The results show that using the nonparametric combination methodology leads to higher power in both the dependent and independent cases. The combining functions are suitable for different compositions of treatment effects, with the Fisher combining function being the most versatile. The thesis is written with support from Statisticon AB.
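A minimal sketch of the nonparametric combination procedure compared in the thesis is given below: partial p-values per endpoint are combined with the Fisher, Tippett, and Logistic functions inside a permutation test. The simulated data and effect sizes are illustrative and do not reproduce the thesis design.

```python
# Minimal sketch of nonparametric combination (NPC) for a two-component
# composite endpoint: permute treatment labels, compute a partial p-value per
# component, combine them, and compare the observed combined statistic to its
# permutation distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40
treat = np.repeat([0, 1], n)
y = np.column_stack([                        # two endpoints with small effects
    rng.normal(0.4 * treat, 1.0),
    rng.normal(0.3 * treat, 1.0),
])

def partial_p(y, treat):
    # one-sided two-sample t-test p-value for each endpoint (column)
    t, p = stats.ttest_ind(y[treat == 1], y[treat == 0], alternative="greater")
    return p

def combine(p, how):
    if how == "fisher":   return -2 * np.sum(np.log(p))          # larger = stronger
    if how == "tippett":  return 1 - np.min(p)
    if how == "logistic": return -np.sum(np.log(p / (1 - p)))

def npc_pvalue(y, treat, how, n_perm=2000):
    obs = combine(partial_p(y, treat), how)
    count = sum(combine(partial_p(y, rng.permutation(treat)), how) >= obs
                for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)

for how in ("fisher", "tippett", "logistic"):
    print(how, npc_pvalue(y, treat, how))
```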
56

Fairneß, Randomisierung und Konspiration in verteilten Algorithmen

Völzer, Hagen 08 December 2000 (has links)
Concepts such as fairness (i.e., fair conflict resolution), randomization (i.e., coin flips), and partial synchrony are frequently used to solve fundamental synchronization and coordination problems in distributed systems, such as the mutual exclusion problem (mutex problem for short) and the consensus problem. For some problems it is proven that, without such concepts, no solution to the particular problem exists. Impossibility results of that kind improve our understanding of the way distributed algorithms work. They also improve our understanding of the trade-off between a tractable model and a powerful model of distributed computation. In this thesis, we prove two new impossibility results and we investigate their reasons. We are in particular concerned with models for randomized distributed algorithms, since little is yet known about the limitations of randomization with respect to the solvability of problems in distributed systems. By a solution through randomization we mean that the problem under consideration is solved with probability 1. In the first part of the thesis, we investigate the relationship between fairness and randomization. On the one hand, it is known that for some problems (e.g. the consensus problem), randomization admits a solution where fairness does not admit a solution. On the other hand, we show that there are problems (viz. the mutex problem) to which randomization does not admit a solution where fairness does admit a solution. These results imply that fairness cannot be implemented by coin flips. In the second part of the thesis, we consider a model which combines fairness and randomization. Such a model is quite powerful, allowing solutions to the mutex problem, the consensus problem, and a solution to the generalized mutex problem. In the generalized mutex problem (a.k.a. the dining philosophers problem), a neighborhood relation is given and mutual exclusion must be achieved for each pair of neighbors. We finally consider the crash-tolerant generalized mutex problem, where every hungry agent eventually becomes critical provided that neither itself nor one of its neighbors crashes. We prove that even the combination of fairness and randomization does not admit a solution to the crash-tolerant generalized mutex problem. We argue that the reason for this impossibility is the inherent occurrence of an undesirable phenomenon known as conspiracy. Conspiracy had not yet been properly characterized. We characterize conspiracy on the basis of non-sequential runs, and we show that conspiracy can be prevented with the help of the additional assumption of partial synchrony, i.e., we show that every conspiracy-prone system can be refined to a randomized system which is, with probability 1, conspiracy-free under the assumptions of partial synchrony and fairness. Partial synchrony means that each event consumes a bounded amount of time where, however, the bound is not known. We use a non-sequential semantics for distributed algorithms, which is essential to some parts of the thesis. In contrast to a sequential run, a non-sequential run represents the causal order rather than the temporal order of events. In particular, we develop a non-sequential semantics for randomized distributed algorithms since there is no such semantics in the literature. In this non-sequential semantics, causal independence is reflected by stochastic independence.
57

Visualizing Endpoint Security Technologies using Attack Trees

Pettersson, Stefan January 2008 (has links)
Software vulnerabilities in programs and malware deployments have been increasing almost every year since we started measuring them. Information about how to program securely, how malware can be avoided, and technological countermeasures for this is more available than ever. Still, the trend seems to favor the attacker. This thesis tries to visualize the effects of a selection of technological countermeasures that have been proposed by researchers. These countermeasures (non-executable memory, address randomization, system call interception and file integrity monitoring) are described along with the attacks they are designed to defend against. The coverage of each countermeasure is then visualized with the help of attack trees. Attack trees are normally used for describing how systems can be attacked, but here they instead serve the purpose of showing where in an attack a countermeasure takes effect. Using attack trees for this highlights a couple of important aspects of a security mechanism, such as how early in an attack it is effective and which variants of an attack it potentially defends against. This is done by the use of what we call defensive codes, which describe how a defense mechanism counters a sub-goal in an attack. Unfortunately the whole process is not well formalized and depends on many uncertain factors.
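The sketch below shows one way an attack tree annotated with countermeasures could be represented and queried, in the spirit of the defensive codes described above; the tree itself is an invented toy example, not one from the thesis.

```python
# Minimal sketch of an attack tree annotated with countermeasures: each leaf
# is an attacker sub-goal, AND/OR nodes combine them, and a node can be
# tagged with the defenses that counter it.
from dataclasses import dataclass, field

@dataclass
class Node:
    goal: str
    kind: str = "leaf"                  # "leaf", "and", or "or"
    children: list = field(default_factory=list)
    defenses: set = field(default_factory=set)

def achievable(node, deployed):
    """Is this (sub-)goal still achievable given the deployed defenses?"""
    if node.defenses & deployed:
        return False                    # a deployed defense counters this sub-goal
    if node.kind == "leaf":
        return True
    results = [achievable(c, deployed) for c in node.children]
    return all(results) if node.kind == "and" else any(results)

tree = Node("execute arbitrary code", "or", [
    Node("stack-smash and run shellcode", "and", [
        Node("overflow buffer"),
        Node("jump to injected code", defenses={"non-executable memory"}),
    ]),
    Node("return-to-libc", "and", [
        Node("overflow buffer"),
        Node("locate libc function", defenses={"address randomization"}),
    ]),
])

print(achievable(tree, {"non-executable memory"}))                            # True
print(achievable(tree, {"non-executable memory", "address randomization"}))   # False
```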
58

Digital rights management (DRM) - watermark encoding scheme for JPEG images

Samuel, Sindhu 12 September 2008 (has links)
The aim of this dissertation is to develop a new algorithm to embed a watermark in JPEG compressed images, using encoding methods. This encompasses the embedding of proprietary information, such as identity and authentication bitstrings, into the compressed material. This watermark encoding scheme involves combining entropy coding with homophonic coding in order to embed a watermark in a JPEG image. Arithmetic coding was used as the entropy encoder for this scheme. It is often desired to obtain a robust digital watermarking method that does not distort the digital image, even if this implies that the image is slightly expanded in size before final compression. In this dissertation an algorithm that combines homophonic and arithmetic coding for JPEG images was developed and implemented in software. A detailed analysis of this algorithm is given, together with the compression (in number of bits) obtained when using the newly developed algorithm (homophonic and arithmetic coding). This research shows that homophonic coding can be used to embed a watermark in a JPEG image by using the watermark information for the selection of the homophones. The proposed algorithm can thus be viewed as a ‘key-less’ encryption technique, where an external bitstring is used as a ‘key’ and is embedded intrinsically into the message stream. The algorithm creates JPEG images with minimal distortion, with Peak Signal to Noise Ratios (PSNR) above 35 dB. The resulting increase in the entropy of the file is within the expected 2 bits per symbol. This research endeavor consequently provides a unique watermarking technique for images compressed using the JPEG standard. / Dissertation (MEng)--University of Pretoria, 2008. / Electrical, Electronic and Computer Engineering / unrestricted
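To make the homophone-selection idea concrete, the toy sketch below embeds watermark bits by letting them choose among equivalent codewords for each symbol; the codeword table is invented for illustration and is unrelated to the JPEG/arithmetic-coding scheme developed in the dissertation.

```python
# Toy sketch of watermark embedding via homophone selection: each symbol has
# several equivalent codewords ("homophones"), and the watermark bit decides
# which one is emitted, so a decoder that knows the table recovers both the
# message and the watermark.
HOMOPHONES = {            # symbol -> list of equivalent codewords
    "a": ["00", "01"],
    "b": ["10", "11"],
}
DECODE = {cw: (sym, bit) for sym, cws in HOMOPHONES.items()
          for bit, cw in enumerate(cws)}

def embed(message, watermark_bits):
    out, bits = [], iter(watermark_bits)
    for sym in message:
        choice = next(bits, 0)          # pad with 0 when the watermark is exhausted
        out.append(HOMOPHONES[sym][choice])
    return "".join(out)

def extract(bitstream, n_symbols):
    msg, wm = [], []
    for i in range(n_symbols):
        sym, bit = DECODE[bitstream[2 * i: 2 * i + 2]]
        msg.append(sym)
        wm.append(bit)
    return "".join(msg), wm

stream = embed("abba", [1, 0, 1, 1])
print(stream, extract(stream, 4))       # recovers both "abba" and the watermark bits
```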
59

On learning and generalization in unstructured taskspaces

Mehta, Bhairav 08 1900 (has links)
Robotic learning holds incredible promise for embodied artificial intelligence, with reinforcement learning seemingly a strong candidate to be the software of robots of the future: learning from experience, adapting on the fly, and generalizing to unseen scenarios. However, our current reality requires vast amounts of data to train the simplest of robotic reinforcement learning policies, leading to a surge of interest in training entirely in efficient physics simulators. As the goal is embodied intelligence, policies trained in simulation are transferred onto real hardware for evaluation; yet, as no simulation is a perfect model of the real world, transferred policies run into the sim2real transfer gap: the errors accrued when shifting policies from simulators to the real world due to unmodeled effects in inaccurate, approximate physics models. Domain randomization - the idea of randomizing all physical parameters in a simulator, forcing a policy to be robust to distributional shifts - has proven useful in transferring reinforcement learning policies onto real robots. In practice, however, the method involves a difficult, trial-and-error process, showing high variance in both convergence and performance.
We introduce Active Domain Randomization, an algorithm that involves curriculum learning in unstructured task spaces (task spaces where a notion of difficulty - intuitively easy or hard tasks - is not readily available). Active Domain Randomization shows strong performance on zero-shot transfer on real robots. The thesis also introduces other variants of the algorithm, including one that allows for the incorporation of a safety prior and one that is applicable to the field of Meta-Reinforcement Learning. We also analyze curriculum learning from an optimization perspective and attempt to justify the benefit of the algorithm by studying gradient interference.
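For context, the sketch below shows plain (uniform) domain randomization, the baseline that Active Domain Randomization improves on: simulator parameters are resampled from fixed ranges at every episode. The parameter names, ranges, and the simulator/rollout helpers are hypothetical placeholders.

```python
# Minimal sketch of uniform domain randomization: at the start of every
# training episode, the simulator's physical parameters are resampled from
# fixed ranges, so the policy must be robust to the whole range rather than
# to one nominal model. Names and ranges are illustrative placeholders.
import random

RANDOMIZATION_RANGES = {
    "friction":   (0.5, 1.5),
    "link_mass":  (0.8, 1.2),
    "motor_gain": (0.7, 1.3),
}

def sample_environment_params(rng):
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

def train(policy_update, n_episodes=1000, seed=0):
    rng = random.Random(seed)
    for episode in range(n_episodes):
        params = sample_environment_params(rng)   # new simulator instance each episode
        # env = make_sim(**params)                # hypothetical simulator factory
        # rollout = collect_rollout(env, policy)  # hypothetical rollout collection
        policy_update(params)                     # stand-in for an RL update on that rollout

train(lambda params: None)   # no-op update, just to show the randomization loop runs
```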
60

Fisher Inference and Local Average Treatment Effect: A Simulation study

Tvaranaviciute, Iveta January 2020 (has links)
This thesis studies inference on the complier average treatment effect, denoted LATE. The standard approach is to base the inference on the two-stage least squares (2SLS) estimator and asymptotic Neyman inference, i.e., the t-test. The thesis suggests a Fisher Randomization Test based on the t-test statistic as an alternative to the Neyman inference. Based on a setup with a randomized experiment with noncompliance, for which one can identify the LATE, I compare the two approaches in Monte Carlo (MC) simulations. The result from the MC simulations is that the Fisher randomization test is not a valid alternative to Neyman's test, as it has too low power.
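A minimal sketch of the setting is given below: a randomized instrument with one-sided noncompliance, the Wald/2SLS point estimate of the LATE, and a Fisher randomization test of the sharp null of no treatment effect using a two-sample t-statistic of the outcome by assignment (the exact statistic used in the thesis may differ). All data-generating values are illustrative.

```python
# Minimal sketch: randomized encouragement Z with one-sided noncompliance,
# Wald/2SLS estimate of the LATE, and a Fisher randomization test. Under the
# exclusion restriction, the sharp null of no treatment effect fixes Y, so
# re-randomizing Z and recomputing a t-statistic of Y by Z gives an exact test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200
Z = rng.permutation(np.repeat([0, 1], n // 2))   # randomized instrument/assignment
complier = rng.random(n) < 0.6                   # 60% compliers
D = np.where(complier, Z, 0)                     # one-sided noncompliance
Y = 0.5 * D + rng.normal(size=n)                 # true LATE = 0.5

# Wald / 2SLS point estimate of the LATE
late_hat = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (D[Z == 1].mean() - D[Z == 0].mean())

def t_stat(Y, Z):
    return stats.ttest_ind(Y[Z == 1], Y[Z == 0]).statistic

obs = abs(t_stat(Y, Z))
perm = np.array([abs(t_stat(Y, rng.permutation(Z))) for _ in range(2000)])
p_frt = (np.sum(perm >= obs) + 1) / (perm.size + 1)

print(f"Wald/2SLS LATE estimate: {late_hat:.3f}   FRT p-value: {p_frt:.3f}")
```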
