
Randomization in Design of Experiments: A Case Study

Saldanha, Izabel Cristina Correa 22 October 2008 (has links)
This work presents guidelines for executing factorial experiments with restrictions on randomization, showing the importance of identifying such restrictions, based on the views of several authors and on the application of a case study. The case study was provided by Companhia Siderúrgica Nacional (CSN) and is presented through a comparison of two models, whose analyses reveal the differences that arise when the restriction on randomizing the experiment is taken into account in order to obtain an optimized response. As identified in the literature, few authors address the importance of resetting the factor levels in an industrial designed experiment. Resetting the factor levels, together with randomizing the order of the experimental runs, is what validates the assumption that the observations obtained in the experiment are independently distributed random variables. When complete randomization of the experiment cannot be achieved, it falls to the experimenter to design the experiment in a way that guarantees the correct statistical analysis and, consequently, the validation of the model. Identifying whether the experiment has restrictions on randomization, classifying it, distinguishing the easy- and hard-to-reset factors, and analysing it correctly avoids the mistaken or incomplete assessments illustrated in this work. Finally, the analysis, which accounts for the restriction on running a completely randomized experiment and for the presence of two error terms in the model, identified the experimental conditions that minimize the response for the case study.
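
The two-error-term analysis described above is characteristic of split-plot-style experiments, in which a hard-to-reset factor is not re-randomized for every run. As a rough illustration only, the sketch below simulates hypothetical data with a hard-to-reset factor A and an easy-to-reset factor B, then fits a mixed model whose random whole-plot intercept supplies the first error term; the factor names, data, and model are assumptions, not the CSN case study.

```python
# Rough sketch only (hypothetical data, not the CSN case study): a factorial
# experiment in which factor A is hard to reset, so it is re-randomized only per
# whole plot, while factor B is randomized within plots. A mixed model with a
# random whole-plot intercept supplies the two error terms discussed above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for whole_plot in range(8):
    a = whole_plot % 2                    # level of the hard-to-reset factor A
    wp_error = rng.normal(0, 1.0)         # whole-plot error (first error term)
    for b in range(2):                    # easy-to-reset factor B, randomized within
        y = 10 + 2 * a - 1.5 * b + 0.5 * a * b + wp_error + rng.normal(0, 0.5)
        rows.append({"y": y, "A": a, "B": b, "wp": whole_plot})
df = pd.DataFrame(rows)

# Random intercept per whole plot reflects the restriction on randomization;
# the residual variance is the second (sub-plot) error term.
fit = smf.mixedlm("y ~ C(A) * C(B)", df, groups="wp").fit()
print(fit.summary())
```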

Counter-conditioning habitual rumination with a concrete-thinking exercise

Buchanan, Max January 2017 (has links)
Abstract (Literature Review) Objective: Anxiety and depression have been conceptualised as being associated with "an abundance of habit and a dearth of control" (Hertel, 2015, p. 1). There has been a recent and burgeoning interest in understanding the role of habits in health psychology and, in particular, in obsessive-compulsive disorder and addiction. To the author's knowledge, no previous systematic review has aimed to summarise the research investigating the involvement of mental habits in anxiety and depression in clinical and non-clinical populations. Method: The term habit was operationalised and inclusion criteria were specified in the domains of habit measurement, research paradigms, and manipulation tasks. A search across four databases was conducted: Web of Science, EBSCOhost, PubMed and OVID (PsycARTICLES and Journals@OVID). A progressive screening procedure yielded eight relevant studies related to mental habits in anxiety (n = 1), depression (n = 4) and both anxiety and depression (n = 3). Results: Self-report habit measures correlate with the presence of symptoms. Computational modelling of reinforcement learning and goal-devaluation paradigms demonstrates that anxiety and depression are associated with deficits in goal-directed learning and decision-making in favour of habitual learning strategies. Cognitive bias modification meets the criteria for enabling habit change and can strengthen or weaken interpretative habits in response to training. Conclusions: Despite considerable variability and limitations in the design of the studies appraised in this review, the overall findings support habitual thought processes being implicated in anxiety and depression. Treating problematic thought processes in anxiety and depression as habitual (cued automatically by contextual cues, not goal-dependent, and resistant to change) may be beneficial for future research and clinical applications.

Abstract (Experimental Study) This study investigated predictions from the habit-goal framework for depressive rumination (Watkins & Nolen-Hoeksema, 2014) using a simultaneous-replication single-case experimental design in a multiple-baseline case series. Seven high ruminators were recruited from community and university settings (with one participant's data later excluded due to insufficient baseline rumination). Following a baseline monitoring period, participants received an intervention that included (i) spotting personal triggers for rumination and (ii) practising a scripted concrete-thinking exercise (CTE) in response to these triggers, utilising an implementation intention (If-Then plan). It was predicted that practice of the If-Then CTE, linked to warning signs, would result in a significant reduction in both the frequency and the automaticity of rumination in the intervention phase compared to baseline. At the group level, using randomization tests (Onghena & Edgington, 2005), reductions in automaticity of rumination were trending toward statistical significance, whilst the impact of the intervention on rumination frequency was not statistically significant. Effect-size calculations, using nonoverlap of all pairs, demonstrated a medium effect of the intervention on automaticity (NAP = .76) and a weak-to-medium effect on frequency of rumination (NAP = .66). Visual and statistical analysis of individual data demonstrated that two participants experienced statistically significant reductions (p < .05) in the automaticity of rumination, and one participant's frequency of rumination was significantly reduced. These two participants also showed the greatest levels of automaticity for the If-Then CTE intervention during the intervention phase. Five participants demonstrated a strong or medium effect of the intervention on automaticity, and two participants demonstrated a medium effect on frequency. Taken together, the data are broadly consistent with the predictions made by the habit-goal framework. Pre- and post-intervention measures indicate reductions for all participants in rumination-as-habit on the Self-Report Habit Index (SRHI) and in overall rumination levels on the Ruminative Responses Scale (RRS). At post-intervention, three participants no longer met the RRS criterion for inclusion in the study. Despite mixed results, feedback at debrief indicated that the intervention was acceptable to participants, who reported that they would carry on using it after the study ended.
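
Nonoverlap of all pairs (NAP), the effect size reported above, is simply the proportion of baseline/intervention observation pairs in which the intervention observation shows improvement, with ties counted as half. A minimal sketch follows, using made-up daily rumination counts rather than the study's data.

```python
# Sketch of the nonoverlap-of-all-pairs (NAP) effect size: the proportion of
# (baseline, intervention) observation pairs showing improvement, with ties
# counted as half. The daily rumination counts below are made up, and a lower
# score is treated as improvement.
from itertools import product

def nap(baseline, intervention, lower_is_better=True):
    pairs = list(product(baseline, intervention))
    score = 0.0
    for b, t in pairs:
        if t == b:
            score += 0.5
        elif (t < b) if lower_is_better else (t > b):
            score += 1.0
    return score / len(pairs)

baseline = [7, 6, 8, 7, 9]        # hypothetical baseline-phase observations
intervention = [5, 6, 4, 5, 3]    # hypothetical intervention-phase observations
print(nap(baseline, intervention))   # 0.98 for these made-up data
```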

Improved interval estimation of comparative treatment effects

Van Krevelen, Ryne Christian 01 May 2015 (has links)
Comparative experiments, in which subjects are randomized to one of two treatments, are often performed. There is no shortage of papers testing whether a treatment effect exists and providing confidence intervals for the magnitude of this effect. While it is well understood that the object and scope of inference for an experiment will depend on what assumptions are made, these entities are not always clearly presented. We have proposed one possible method, based on the ideas of Jerzy Neyman, for constructing confidence intervals in a comparative experiment. The resulting intervals, referred to as Neyman-type confidence intervals, can be applied in a wide range of cases. Special care is taken to note which assumptions are made and what object and scope of inference are being investigated. We have presented a notation that highlights which parts of a problem are being treated as random. This helps keep the focus on the appropriate scope of inference. The Neyman-type confidence intervals are compared to possible alternatives in two different inference settings: one in which inference is made about the units in the sample and one in which inference is made about units in a fixed population. A third inference setting, one in which inference is made about a process distribution, is also discussed. It is stressed that certain assumptions underlying this third type of inference are unverifiable. When these assumptions are not met, the resulting confidence intervals may cover their intended target well below the desired rate. Through simulation, we demonstrate that the Neyman-type intervals have good coverage properties when inference is being made about a sample or a population. In some cases the alternative intervals are much wider than necessary on average. Therefore, we recommend that researchers consider using our Neyman-type confidence intervals when carrying out inference about a sample or a population, as they may provide more precise intervals that still cover at the desired rate.
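
As a rough illustration of the flavour of such intervals (not the thesis' exact construction), the sketch below computes a classical Neyman-style interval for the average treatment effect in a two-arm completely randomized experiment, using the conservative variance estimate s1^2/n1 + s0^2/n0; the outcome data are hypothetical.

```python
# Rough sketch of a Neyman-style interval for the average treatment effect in a
# two-arm completely randomized experiment, using the classical conservative
# variance estimate s1^2/n1 + s0^2/n0. The thesis' Neyman-type intervals may
# differ in detail, and the outcome data below are hypothetical.
import numpy as np
from scipy import stats

treated = np.array([12.1, 9.8, 11.4, 10.9, 12.7, 11.0])
control = np.array([9.2, 10.1, 8.7, 9.9, 8.4, 10.3])

diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
z = stats.norm.ppf(0.975)
print(f"ATE estimate {diff:.2f}, 95% CI [{diff - z * se:.2f}, {diff + z * se:.2f}]")
```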

Site- and Location-Adjusted Approaches to Adaptive Allocation Clinical Trial Designs

Di Pace, Brian S 01 January 2019 (has links)
Response-Adaptive (RA) designs are used to adaptively allocate patients in clinical trials. These methods have been generalized to include Covariate-Adjusted Response-Adaptive (CARA) designs, which adjust treatment assignments for a set of covariates while maintaining features of the RA designs. Challenges may arise in multi-center trials if differential treatment responses and/or effects exist among sites. We propose Site-Adjusted Response-Adaptive (SARA) approaches to account for inter-center variability in treatment response and/or effectiveness, including either a fixed site effect or both random site and treatment-by-site interaction effects to calculate conditional probabilities. These success probabilities are used to update assignment probabilities for allocating patients between treatment groups as subjects accrue. Both frequentist and Bayesian models are considered. Treatment differences could also be attributed to differences in social determinants of health (SDH) that often manifest, especially if unmeasured, as spatial heterogeneity amongst the patient population. In these cases, patient residential location can be used as a proxy for these difficult-to-measure SDH. We propose the Location-Adjusted Response-Adaptive (LARA) approach to account for location-based variability in both treatment response and/or effectiveness. A Bayesian low-rank kriging model will interpolate spatially-varying joint treatment random effects to calculate the conditional probabilities of success, utilizing patient outcomes, treatment assignments and residential information. We compare the proposed methods with several existing allocation strategies that ignore site for a variety of scenarios where treatment success probabilities vary.
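
For readers unfamiliar with response-adaptive allocation, the toy sketch below shows the basic mechanism that SARA and LARA build on: posterior success probabilities are updated as outcomes accrue and used to skew assignment toward the better-performing arm. It uses a simple Beta-Binomial, Thompson-style rule with simulated data and no site or location adjustment, so it is only the plain RA backbone, not the proposed designs.

```python
# Toy Beta-Binomial response-adaptive allocation (Thompson-style): assignment
# probabilities drift toward the arm with the higher posterior success rate as
# outcomes accrue. No site, covariate, or spatial adjustment is included, so this
# is only the plain RA backbone, not the proposed SARA/LARA designs. Simulated data.
import numpy as np

rng = np.random.default_rng(7)
true_p = {"A": 0.45, "B": 0.60}           # unknown true success probabilities
post = {"A": [1, 1], "B": [1, 1]}         # Beta(alpha, beta) posterior per arm

for patient in range(200):
    # Thompson draw: allocate to the arm whose sampled success rate is larger.
    draws = {arm: rng.beta(a, b) for arm, (a, b) in post.items()}
    arm = max(draws, key=draws.get)
    success = int(rng.random() < true_p[arm])
    post[arm][0] += success               # update alpha on success
    post[arm][1] += 1 - success           # update beta on failure

for arm, (a, b) in post.items():
    print(f"arm {arm}: {a + b - 2} patients, posterior mean {a / (a + b):.2f}")
```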

Mass-balanced randomization: a significance measure for metabolic networks

Basler, Georg January 2012 (has links)
Complex networks have been successfully employed to represent different levels of biological systems, ranging from gene regulation to protein-protein interactions and metabolism. Network-based research has mainly focused on identifying unifying structural properties, including small average path length, large clustering coefficient, heavy-tail degree distribution, and hierarchical organization, viewed as requirements for efficient and robust system architectures. Existing studies estimate the significance of network properties using a generic randomization scheme - a Markov-chain switching algorithm - which generates unrealistic reactions in metabolic networks, as it does not account for the physical principles underlying metabolism. Therefore, it is unclear whether the properties identified with this generic approach are related to the functions of metabolic networks. Within this doctoral thesis, I have developed an algorithm for mass-balanced randomization of metabolic networks, which runs in polynomial time and samples networks almost uniformly at random. The properties of biological systems result from two fundamental origins: ubiquitous physical principles and a complex history of evolutionary pressure. The latter determines the cellular functions and abilities required for an organism’s survival. Consequently, the functionally important properties of biological systems result from evolutionary pressure. By employing randomization under physical constraints, the salient structural properties, i.e., the small-world property, degree distributions, and biosynthetic capabilities of six metabolic networks from all kingdoms of life, are shown to be independent of physical constraints, and thus likely to be related to evolution and functional organization of metabolism. This stands in stark contrast to the results obtained from the commonly applied switching algorithm. In addition, a novel network property is devised to quantify the importance of reactions by simulating the impact of their knockout. The relevance of the identified reactions is verified by the findings of existing experimental studies demonstrating the severity of the respective knockouts. The results suggest that the novel property may be used to determine the reactions important for viability of organisms. Next, the algorithm is employed to analyze the dependence between mass balance and thermodynamic properties of Escherichia coli metabolism. The thermodynamic landscape in the vicinity of the metabolic network reveals two regimes of randomized networks: those with thermodynamically favorable reactions, similar to the original network, and those with less favorable reactions. The results suggest that there is an intrinsic dependency between thermodynamic favorability and evolutionary optimization. The method is further extended to optimizing metabolic pathways by introducing novel chemically feasible reactions. The results suggest that, in three organisms of biotechnological importance, introduction of the identified reactions may allow for optimizing their growth. The approach is general and allows identifying chemical reactions which modulate the performance with respect to any given objective function, such as the production of valuable compounds or the targeted suppression of pathway activity. These theoretical developments can find applications in metabolic engineering or disease treatment.
The developed randomization method proposes a novel approach to measuring the significance of biological network properties, and establishes a connection between large-scale approaches and biological function. The results may provide important insights into the functional principles of metabolic networks, and open up new possibilities for their engineering.

In systems biology and bioinformatics, increasingly complex networks have been reconstructed in recent years to describe various biological processes, such as gene regulation, protein interactions and metabolism. A central goal of this research is to make the structural properties of networks usable for predictions about their function, that is, to establish a connection between network properties and function. Network-based research has so far aimed mainly at discovering properties shared by networks of different origin, such as the average length of connections in the network, the frequency of redundant connections, or the hierarchical organization of networks, which are regarded as prerequisites for efficient communication paths and robustness. This first requires determining which properties are of particular importance (significance) for the function of a network. Previous studies use a method for generating random networks which, when applied to metabolic networks, produces unrealistic chemical reactions because it disregards physical principles. It is therefore questionable whether the properties of metabolic networks identified with this generic method are relevant to their biological function and can thus be used for meaningful predictions in biology. In my dissertation, I developed a method for generating random networks that takes fundamental physical principles into account and thus enables a realistic assessment of the significance of network properties. Based on the metabolic networks of six organisms, the results show that many of the most widely studied network properties, such as the small-world phenomenon and the prediction of the biosynthesis of metabolic products, are of outstanding importance for their biological function and can therefore be used for prediction and modelling. The method makes it possible to identify chemical reactions that are likely to be of vital importance for the organism. Furthermore, the method allows the prediction of previously unknown but physically possible reactions that could enable specific cellular functions, such as increased growth in microorganisms. The method offers a novel approach to determining the functionally relevant properties of biological networks and opens up new possibilities for their manipulation.
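
To make the central constraint concrete: a mass-balanced randomization only accepts candidate reactions whose substrates and products contain the same number of atoms of every element. The sketch below is an illustrative balance check on made-up formulas and reactions; it omits the polynomial-time, near-uniform sampling machinery of the actual algorithm.

```python
# Minimal illustration of the mass-balance constraint behind the randomization
# scheme: a randomly proposed reaction is kept only if every element is conserved
# between substrates and products. The formulas and candidate reactions are made
# up, and the uniform-sampling machinery of the real algorithm is omitted.
import re
from collections import Counter

def element_counts(formula):
    """Parse a molecular formula like 'C6H12O6' into element counts."""
    counts = Counter()
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:
            counts[element] += int(num) if num else 1
    return counts

def is_mass_balanced(substrates, products):
    left, right = Counter(), Counter()
    for formula, stoich in substrates:
        for el, n in element_counts(formula).items():
            left[el] += stoich * n
    for formula, stoich in products:
        for el, n in element_counts(formula).items():
            right[el] += stoich * n
    return left == right

# Glucose -> 2 pyruvate (cofactors ignored): not balanced as written, so rejected.
print(is_mass_balanced([("C6H12O6", 1)], [("C3H4O3", 2)]))   # False: hydrogen differs
# Glucose -> 2 lactate: elementally balanced, so it would be accepted.
print(is_mass_balanced([("C6H12O6", 1)], [("C3H6O3", 2)]))   # True
```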

Multiparty Communication Complexity

David, Matei 06 August 2010 (has links)
Communication complexity is an area of complexity theory that studies an abstract model of computation called a communication protocol. In a $k$-player communication protocol, an input to a known function is partitioned into $k$ pieces of $n$ bits each, and each piece is assigned to one of the players in the protocol. The goal of the players is to evaluate the function on the distributed input by using as little communication as possible. In a Number-On-Forehead (NOF) protocol, the input piece assigned to each player is metaphorically placed on that player's forehead, so that each player sees everyone else's input but its own. In a Number-In-Hand (NIH) protocol, the piece assigned to each player is seen only by that player. Overall, the study of communication protocols has been used to obtain lower bounds and impossibility results for a wide variety of other models of computation. Two of the main contributions presented in this thesis are negative results on the NOF model of communication, identifying limitations of NOF protocols. Together, these results constitute stepping stones towards a better fundamental understanding of this model. As the first contribution, we show that randomized NOF protocols are exponentially more powerful than deterministic NOF protocols, as long as $k \le n^c$ for some constant $c$. As the second contribution, we show that nondeterministic NOF protocols are exponentially more powerful than randomized NOF protocols, as long as $k \le \delta \cdot \log n$ for some constant $\delta < 1$. For the third major contribution, we turn to the NIH model and we present a positive result. Informally, we show that a NIH communication protocol for a function $f$ can simulate a Stack Machine (a Turing Machine augmented with a stack) for a related function $F$, consisting of several instances of $f$ bundled together. Using this simulation and known communication complexity lower bounds, we obtain the first known (space vs. number of passes) trade-off lower bounds for Stack Machines.
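
As a concrete picture of the two models described above, the toy sketch below shows which pieces each player can see under the NOF and NIH conventions, together with the trivial NIH protocol in which every player forwards its piece to a single player; the function (bitwise XOR) and the inputs are arbitrary examples, not ones studied in the thesis.

```python
# Toy illustration of the input-visibility conventions described above, plus the
# trivial NIH protocol in which every player forwards its piece to one player who
# evaluates the function; the function and inputs are arbitrary examples.
from functools import reduce

def views(pieces, model):
    """What each of the k players sees under the NOF vs NIH conventions."""
    k = len(pieces)
    if model == "NOF":   # player i sees every piece except its own
        return {i: [p for j, p in enumerate(pieces) if j != i] for i in range(k)}
    if model == "NIH":   # player i sees only its own piece
        return {i: [pieces[i]] for i in range(k)}

def trivial_nih_protocol(pieces, f):
    """All players send their n-bit pieces to player 0: (k-1)*n bits of communication."""
    cost = (len(pieces) - 1) * len(pieces[0])
    return f(pieces), cost

pieces = ["1011", "0010", "1110"]                       # k = 3 players, n = 4 bits each
xor_all = lambda ps: format(reduce(lambda a, b: a ^ b,
                                   (int(p, 2) for p in ps)), "04b")
print(views(pieces, "NOF")[0])                          # player 0 sees pieces 1 and 2
print(trivial_nih_protocol(pieces, xor_all))            # ('0111', 8)
```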

Approximate Private Quantum Channels

Dickinson, Paul January 2006 (has links)
This thesis includes a survey of the results known for private and approximate private quantum channels. We develop the best known upper bound for ε-randomizing maps, showing that n + 2 log(1/ε) + c bits suffice to ε-randomize an arbitrary n-qubit state, by improving a scheme of Ambainis and Smith [5] based on small-bias spaces [16, 3]. We show by a probabilistic argument that in fact the great majority of random schemes using slightly more than this many bits of key are also ε-randomizing. We provide the first known nontrivial lower bound for ε-randomizing maps, and develop several conditions on them which we hope may be useful in proving stronger lower bounds in the future.
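
The sketch below gives a purely numerical feel for what an ε-randomizing map does: averaging a state over a modest number of randomly chosen unitaries already brings it close, in trace distance, to the maximally mixed state. It uses Haar-random single-qubit unitaries for simplicity and is not the small-bias-space construction analysed in the thesis.

```python
# Purely numerical toy: average a single-qubit state over m randomly chosen
# unitaries and watch its trace distance to the maximally mixed state I/2 shrink.
# Haar-random unitaries are used here only for simplicity; the thesis analyses
# far more economical constructions based on small-bias spaces.
import numpy as np
from scipy.stats import unitary_group

def trace_distance(rho, sigma):
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.abs(eigs).sum()

psi = np.array([1.0, 0.0])                 # input state |0>
rho = np.outer(psi, psi.conj())
mixed = np.eye(2) / 2

for m in (2, 8, 32, 128):
    unitaries = [unitary_group.rvs(2) for _ in range(m)]
    averaged = sum(U @ rho @ U.conj().T for U in unitaries) / m
    print(m, round(trace_distance(averaged, mixed), 3))
```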

Error characterization and quantum control benchmarking in liquid state NMR using quantum information processing techniques

Laforest, Martin 09 September 2008 (has links)
Quantum information processing has been the subject of countless discoveries since the early 1990s. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and benchmark the quality of the control over the qubits. Usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed from a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected. Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gate for single- and multi-qubit systems. Even though liquid state NMR is argued to be unsuitable for scalable quantum information processing, it remains the best test-bed system to experimentally implement, verify and develop protocols aimed at increasing the control over general quantum information processors. For this reason, all the protocols described in this thesis have been implemented in liquid state NMR, which then led to further development of control and analysis techniques.
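
As a loose illustration of estimating an average fidelity by random sampling (a toy stand-in, not the thesis' NMR randomization and symmetrization protocols), the sketch below applies single-qubit depolarizing noise, an assumed noise model, to Haar-random pure states and averages the overlap with the ideal output, recovering the analytic value 1 - p/2.

```python
# Toy stand-in for fidelity estimation by random sampling: apply an assumed
# single-qubit depolarizing channel to Haar-random pure states and average the
# overlap with the ideal (noiseless) output. For this channel the average
# fidelity is 1 - p/2, which the Monte Carlo estimate should recover.
import numpy as np

rng = np.random.default_rng(3)
p = 0.1                                   # assumed depolarizing probability

def random_pure_state():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def depolarize(rho, p):
    return (1 - p) * rho + p * np.eye(2) / 2

fidelities = []
for _ in range(5000):
    psi = random_pure_state()
    rho_out = depolarize(np.outer(psi, psi.conj()), p)
    fidelities.append(np.real(psi.conj() @ rho_out @ psi))

print(f"estimated average fidelity: {np.mean(fidelities):.4f} (analytic: {1 - p/2:.4f})")
```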
