  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Error characterization and quantum control benchmarking in liquid state NMR using quantum information processing techniques

Laforest, Martin 09 September 2008 (has links)
Quantum information processing has been the subject of countless discoveries since the early 1990s. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and benchmark the quality of the control over the qubits. The usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed in a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected.
Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gate for single- and multi-qubit systems. Even though liquid state NMR is argued to be unsuitable for scalable quantum information processing, it remains the best test-bed system to experimentally implement, verify and develop protocols aimed at increasing the control over general quantum information processors. For this reason, all the protocols described in this thesis have been implemented in liquid state NMR, which then led to further development of control and analysis techniques.
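The fidelity-estimation protocol described in this abstract builds on a simple randomization idea: under random gate sequences, independent per-gate errors compound, so the survival probability decays exponentially with sequence length and the per-gate error rate can be read off the decay. The sketch below is a purely classical toy model of that idea, not the thesis's NMR implementation; the error rate and sequence counts are illustrative assumptions.

```python
import random

def survival_fraction(num_seqs, seq_len, gate_error, rng):
    """Fraction of random sequences in which no gate failed.

    Toy model: each gate independently corrupts the state with
    probability `gate_error`, so the expected survival fraction
    is (1 - gate_error) ** seq_len.
    """
    survived = 0
    for _ in range(num_seqs):
        if all(rng.random() >= gate_error for _ in range(seq_len)):
            survived += 1
    return survived / num_seqs

def estimate_gate_error(num_seqs, seq_len, gate_error, rng):
    # Invert the exponential decay to recover the per-gate error rate.
    s = survival_fraction(num_seqs, seq_len, gate_error, rng)
    return 1.0 - s ** (1.0 / seq_len)

rng = random.Random(0)
e_hat = estimate_gate_error(num_seqs=2000, seq_len=20, gate_error=0.02, rng=rng)
```

The point of the randomized approach is visible in the cost: the estimate needs only a number of sequences sufficient for statistical accuracy, not the exponentially many measurements full process tomography would require.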
32

Using Novel Image-based Interactional Proofs and Source Randomization for Prevention of Web Bots

Shardul Vikram 2011 December 1900 (has links)
This work presents our efforts to prevent web bots from illegitimately accessing web resources. As the first technique, we present SEMAGE (SEmantically MAtching imaGEs), a new image-based CAPTCHA that capitalizes on the human ability to comprehend image content and to establish semantic relationships between images. As the second technique, we present NOID, a "NOn-Intrusive Web Bot Defense system" that aims to create a three-tiered defense against web automation programs, or web bots. NOID is a server-side technique that prevents web bots from accessing web resources by inherently hiding the HTML elements of interest through randomization and obfuscation in the HTML responses. A SEMAGE challenge asks a user to select semantically related images from a given image set. SEMAGE has a two-factor design: to pass a challenge, the user is required to figure out the content of each image and then understand and identify the semantic relationships between a subset of them. Most current state-of-the-art image-based systems, such as Asirra, only require the user to solve the first level, i.e., image recognition. Utilizing the semantic correlation between images to create more secure and user-friendly challenges is what makes SEMAGE novel. SEMAGE does not suffer from the limitations of traditional image-based approaches, such as lacking customization and adaptability. Unlike current text-based systems, SEMAGE is also very user-friendly, with a high fun factor. We conduct a first-of-its-kind large-scale user study involving 174 users to gauge and compare the accuracy and usability of SEMAGE against existing state-of-the-art CAPTCHA systems such as reCAPTCHA (text-based) and Asirra (image-based). The user study further reinforces our claims, showing that users achieve high accuracy using our system and consider it to be fun and easy.
We also design a novel server-side and non-intrusive web bot defense system, NOID, to prevent web bots from accessing web resources by inherently hiding and randomizing HTML elements. Specifically, to prevent web bots from uniquely identifying HTML elements for later automation, NOID randomizes the name/id parameter values of essential HTML elements such as "input textbox", "textarea" and "submit button" in each HTTP form page. In addition, to prevent powerful web bots from identifying special user-action HTML elements by analyzing the content of their accompanying "label text" HTML tags, we enhance NOID with a component, Label Concealer, which hides label indicators by replacing "label text" HTML tags with randomized images. To further prevent more powerful web bots from identifying HTML elements by recognizing their relative positions or surrounding elements in the web page, we enhance NOID with another component, Element Trapper, which obfuscates important HTML elements' surroundings by adding decoy elements without compromising usability. We evaluate NOID against five powerful state-of-the-art web bots, including XRumer, SENuke, Magic Submitter, Comment Blaster, and UWCS, on several popular open-source web platforms, including phpBB, Simple Machine Forums (SMF), and WordPress. According to our evaluation, NOID can prevent all of these web bots from automatically sending spam on these platforms with reasonable overhead.
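The core mechanism described here, replacing stable name/id values with per-response random tokens that only the server can map back, can be sketched in a few lines. This is an illustrative reconstruction, not NOID's actual code; the regex-based rewriting and the `restore_params` helper are assumptions made for the sketch.

```python
import re
import secrets

def randomize_form_names(html):
    """Replace every name="..." value with a random token.

    Returns the rewritten HTML and a token -> original-name mapping
    that the server keeps (e.g. in the session) for this response.
    """
    mapping = {}
    def repl(match):
        token = "f_" + secrets.token_hex(8)
        mapping[token] = match.group(1)
        return 'name="%s"' % token
    return re.sub(r'name="([^"]+)"', repl, html), mapping

def restore_params(params, mapping):
    """Translate submitted randomized field names back to the originals."""
    return {mapping.get(k, k): v for k, v in params.items()}
```

On each form response the server stores `mapping` and uses `restore_params` on submission, so a bot that hard-codes `name="username"` fails on the next request because the token has changed.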
33

Visualizing Endpoint Security Technologies using Attack Trees

Pettersson, Stefan January 2008 (has links)
Software vulnerabilities in programs and malware deployments have been increasing almost every year since we started measuring them. Information about how to program securely, how malware can be avoided, and technological countermeasures for both is more available than ever. Still, the trend seems to favor the attacker. This thesis tries to visualize the effects of a selection of technological countermeasures that have been proposed by researchers. These countermeasures (non-executable memory, address randomization, system call interception and file integrity monitoring) are described along with the attacks they are designed to defend against. The coverage of each countermeasure is then visualized with the help of attack trees. Attack trees are normally used to describe how systems can be attacked, but here they instead serve to show where in an attack a countermeasure takes effect. Using attack trees for this highlights a couple of important aspects of a security mechanism, such as how early in an attack it is effective and which variants of an attack it potentially defends against. This is done through what we call defensive codes, which describe how a defense mechanism counters a sub-goal in an attack. Unfortunately, the whole process is not well formalized and depends on many uncertain factors.
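The idea of annotating attack-tree nodes with defensive codes can be made concrete with a small AND/OR tree evaluator: a node is blocked if a deployed defense counters it, and feasibility propagates up through the tree's gates. This is a hypothetical sketch of the visualization idea, with invented node names and defense labels, not code from the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "leaf"                              # "leaf", "and", or "or"
    children: list = field(default_factory=list)
    countered_by: set = field(default_factory=set)  # defensive codes

def feasible(node, deployed):
    """Is this (sub-)attack still achievable given deployed defenses?"""
    if node.countered_by & deployed:
        return False                                # a countermeasure blocks this step
    if node.kind == "leaf":
        return True
    hits = [feasible(child, deployed) for child in node.children]
    return all(hits) if node.kind == "and" else any(hits)

# Hypothetical fragment: two alternative routes to code execution.
root = Node("execute arbitrary code", "or", [
    Node("inject and run shellcode", countered_by={"NX"}),
    Node("return-to-libc", countered_by={"ASLR"}),
])
```

Here `feasible(root, {"NX"})` is still true via the return-to-libc branch, while deploying both NX and ASLR closes every branch; that "which variants does a mechanism actually cover, and how early" question is exactly what the trees visualize.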
34

Topics in experimental and tournament design

Hennessy, Jonathan Philip 21 October 2014 (has links)
We examine three topics related to experimental design in this dissertation. Two are related to the analysis of experimental data and the other focuses on the design of paired comparison experiments, in this case knockout tournaments. The two analysis topics are motivated by how to estimate and test causal effects when the assignment mechanism fails to create balanced treatment groups. In Chapter 2, we apply conditional randomization tests to experiments where, through random chance, the treatment groups differ in their covariate distributions. In Chapter 4, we apply principal stratification to factorial experiments where the subjects fail to comply with their assigned treatment. The sources of imbalance differ, but, in both cases, ignoring the imbalance can lead to incorrect conclusions. In Chapter 3, we consider designing knockout tournaments to maximize different objectives given a prior distribution on the strengths of the players. These objectives include maximizing the probability the best player wins the tournament. Our emphasis on balance in the other two chapters comes from a desire to create a fair comparison between treatments. However, in this case, the design uses the prior information to intentionally bias the tournament in favor of the better players. / Statistics
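The conditional randomization tests described for Chapter 2 restrict the reference distribution to assignments that satisfy a balance criterion on the observed covariates. A minimal sketch of that idea for two groups and one covariate follows; the exhaustive enumeration, tolerance-based balance rule, and toy data are illustrative assumptions, not the dissertation's procedure.

```python
import itertools
import statistics

def conditional_randomization_test(y, x, treat, tol):
    """Permutation p-value for a treatment/control mean difference,
    conditioning on covariate balance: only assignments whose
    covariate-mean gap is within `tol` enter the reference set."""
    n, k = len(y), sum(treat)

    def mean_diff(values, t):
        g1 = [values[i] for i in range(n) if t[i]]
        g0 = [values[i] for i in range(n) if not t[i]]
        return statistics.mean(g1) - statistics.mean(g0)

    def balanced(t):
        return abs(mean_diff(x, t)) <= tol

    obs = mean_diff(y, treat)
    ref = [mean_diff(y, t)
           for idx in itertools.combinations(range(n), k)
           for t in [[i in idx for i in range(n)]]
           if balanced(t)]
    # Two-sided p-value over the conditional reference set
    p = sum(1 for d in ref if abs(d) >= abs(obs) - 1e-12) / len(ref)
    return obs, p
```

The observed assignment must itself satisfy the balance criterion for the conditioning to make sense; unbalanced hypothetical assignments are simply excluded from the comparison, which is how the test avoids penalizing an experiment for imbalance that could never have influenced the reference distribution.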
35

L'approche expérimentale du J-Pal en économie du développement : un tournant épistémologique? / The J-Pal's experimental approach in development economics: an epistemological shift?

Favereau, Judith 14 February 2014 (has links)
Cette thèse porte sur la méthode expérimentale utilisée par les chercheurs du J-PAL en économie du développement. La volonté de ces chercheurs est double: (1) produire des preuves d'efficacité des programmes de développement, (2) afin de guider la décision politique. L'objectif de la thèse est de mener une analyse épistémologique de l'approche du J-PAL. Cette analyse s'offre d'étudier une double dimension: une dimension méthodologique et une dimension théorique. La dimension méthodologique vise à interroger la méthode utilisée par le J-PAL: la randomisation. Deux principales questions guident cette analyse méthodologique: (1) la nature de gold standard méthodologique de la randomisation, (2) et la possible transposition des résultats obtenus par le J-PAL dans la sphère politique. La seconde dimension (théorique) questionne l'apport du J-PAL aux débats théoriques qui ont traversé l'économie du développement ces dix dernières années. S'intéresser à ces deux dimensions permet d'examiner l'approche du J-PAL dans son ensemble. La thèse montre alors que cette approche offre des résultats dont la validité interne est importante mais dont la validité externe est faible, rendant difficile l'utilisation de tels résultats dans la sphère politique. Cette tension entre la validité interne et la validité externe montre l'antagonisme des deux objectifs que se fixe le J-PAL et définit un problème épistémologique: le refus de théorie ainsi que l'absence de mise en évidence des mécanismes qui sous-tendent les résultats obtenus par le J-PAL empêche ce dernier de produire des recommandations politiques claires. / This thesis focuses on the experimental approach used by J-PAL's researchers in development economics. The goal of these researchers is twofold: (1) producing evidence concerning the efficiency of development programs, (2) in order to guide policy decisions. The aim of this thesis is to conduct an epistemological analysis of the J-PAL's approach.
This analysis studies two dimensions: a methodological one and a theoretical one. The methodological dimension aims to question the method used by the J-PAL: randomization. Two main questions guide this methodological analysis: (1) randomization's methodological status of "gold standard", (2) and the possibility of transposing the results obtained by the J-PAL into the political sphere. The second, theoretical, dimension questions the J-PAL's contributions to the theoretical debates within development economics over the past ten years. The study of these two dimensions allows an examination of the J-PAL's approach as a whole. The thesis shows that the results of this approach have strong internal validity at the expense of weak external validity, which makes the use of such results in the political sphere difficult. This tension between internal validity and external validity points out the antagonism of the J-PAL's two objectives and defines an epistemological issue: the refusal of theory, along with the lack of emphasis on the mechanisms underlying the J-PAL's results, prevents it from producing clear policy recommendations.
36

[en] USING LINEAR MIXED MODELS ON DATA FROM EXPERIMENTS WITH RESTRICTION IN RANDOMIZATION / [pt] UTILIZAÇÃO DE MODELOS LINEARES MISTOS EM DADOS PROVENIENTES DE EXPERIMENTOS COM RESTRIÇÃO NA ALEATORIZAÇÃO

MARCELA COHEN MARTELOTTE 04 October 2010 (has links)
[pt] Esta dissertação trata da aplicação de modelos lineares mistos em dados provenientes de experimentos com restrição na aleatorização. O experimento utilizado neste trabalho teve como finalidade verificar quais eram os fatores de controle do processo de laminação a frio que mais afetavam a espessura do material utilizado na fabricação das latas para bebidas carbonatadas. A partir do experimento, foram obtidos dados para modelar a média e a variância da espessura do material. O objetivo da modelagem era identificar quais fatores faziam com que a espessura média atingisse o valor desejado (0,248 mm). Além disso, era necessário identificar qual a combinação dos níveis desses fatores que produzia a variância mínima na espessura do material. Houve replicações neste experimento, mas estas não foram executadas de forma aleatória, e, além disso, os níveis dos fatores utilizados não foram reinicializados, nas rodadas do experimento. Devido a estas restrições, foram utilizados modelos mistos para o ajuste da média, e da variância, da espessura, uma vez que com tais modelos é possível trabalhar na presença de dados auto-correlacionados e heterocedásticos. Os modelos mostraram uma boa adequação aos dados, indicando que para situações onde existe restrição na aleatorização, a utilização de modelos mistos se mostra apropriada. / [en] This dissertation presents an application of linear mixed models to data from an experiment with restriction in randomization. The experiment used in this study aimed to verify which control factors in the cold-rolling process most affected the thickness of the material used to manufacture cans for carbonated beverages. From the experiment, data were obtained to model the mean and variance of the thickness of the material. The goal of the modeling was to identify which factors were significant for the thickness to reach the desired value (0.248 mm).
Furthermore, it was necessary to identify which combination of the levels of these factors produced the minimum variance in the thickness of the material. There were replications in this experiment, but they were not performed randomly; in addition, the levels of the factors were not reset between runs of the experiment. Due to these restrictions, mixed models were used to fit the mean and the variance of the thickness, since such models can handle autocorrelated and heteroscedastic data. The models showed a good fit to the data, indicating that the use of mixed models is appropriate in situations where there is restriction on randomization.
37

Moving Target Defense for Web Applications

January 2018 (has links)
abstract: Web applications remain the most popular method of interaction for businesses over the Internet. With their simplicity of use and management, they often function as the "front door" for many companies. As such, they are a critical component of the security ecosystem, as vulnerabilities present in these systems could potentially allow malicious users access to sensitive business and personal data. The inherent nature of web applications enables anyone to access them anytime and anywhere; this includes malicious actors looking to exploit vulnerabilities present in the web application. In addition, the static configuration of these web applications gives attackers the opportunity to perform reconnaissance at their leisure, increasing their success rate by allowing them time to discover information about the system. Defenders, on the other hand, are at a disadvantage, as they do not have the same temporal opportunity that attackers possess to perform counter-reconnaissance. Lastly, the unchanging nature of web applications allows undiscovered vulnerabilities to remain open for exploitation, requiring developers either to adopt a reactive approach that is often delayed, or to anticipate and prepare for all possible attacks, which is often cost-prohibitive. Moving Target Defense (MTD) seeks to remove the attackers' advantage by reducing the information asymmetry between attacker and defender. This research explores the concept of MTD and various methods of applying MTD to secure web applications. In particular, MTD concepts are applied by implementing an automated application diversifier that aims to mitigate specific classes of web application vulnerabilities and exploits. Evaluation is done using two open-source web applications to determine the effectiveness of the MTD implementation. Though developed for the chosen applications, the automation process can be customized to fit a variety of applications.
/ Dissertation/Thesis / Masters Thesis Computer Science 2018
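One way to read the diversifier idea in the abstract above is as a keyed, time-varying renaming of attack-surface identifiers, so that anything an attacker learns during reconnaissance expires. The sketch below is an assumption-laden illustration, not the thesis's implementation: the per-deployment secret, the epoch scheme, and the route names are all invented for the example.

```python
import hashlib
import hmac

SECRET = b"per-deployment-secret"  # hypothetical key, unique per deployment

def diversified_route(route, epoch):
    """Map a logical route to an opaque path that changes every epoch."""
    msg = f"{route}:{epoch}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:12]
    return f"/{tag}"

def resolve(path, routes, epoch):
    """Translate an opaque path back to its logical route.

    The previous epoch is also accepted, so requests in flight
    across a rotation still resolve."""
    for e in (epoch, epoch - 1):
        for route in routes:
            if diversified_route(route, e) == path:
                return route
    return None
```

A crawler that recorded the diversified path of `/login` during epoch `t` finds it gone by epoch `t + 2`; forcing that repeated rediscovery is the information-asymmetry reduction MTD aims for.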
38

Randomização progressiva para esteganalise / Progressive randomization for steganalysis

Rocha, Anderson de Rezende, 1980- 17 February 2006 (has links)
Orientador: Siome Klein Goldenstein / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Resumo: Neste trabalho, nós descrevemos uma nova metodologia para detectar a presença de conteúdo digital escondido nos bits menos significativos (LSBs) de imagens. Nós introduzimos a técnica de Randomização Progressiva (PR), que captura os artefatos estatísticos inseridos durante um processo de mascaramento com aleatoriedade espacial. Nossa metodologia consiste na progressiva aplicação de transformações de mascaramento nos LSBs de uma imagem. Ao receber uma imagem I como entrada, o método cria n imagens, que apenas se diferenciam da imagem original no canal LSB. Cada estágio da Randomização Progressiva representa possíveis processos de mascaramento com mensagens de tamanhos diferentes e crescente entropia no canal LSB. Analisando esses estágios, nosso arcabouço de detecção faz a inferência sobre a presença ou não de uma mensagem escondida na imagem I. Nós validamos nossa metodologia em um banco de dados com 20.000 imagens reais. Nosso método utiliza apenas descritores estatísticos dos LSBs e já apresenta melhor qualidade de classificação que os métodos comparáveis descritos na literatura / Abstract: In this work, we describe a new methodology to detect the presence of hidden digital content in the Least Significant Bits (LSBs) of images. We introduce the Progressive Randomization (PR) technique, which captures statistical artifacts inserted during the hiding process. Our technique is a progressive application of LSB-modifying transformations that receives an image as input and produces n images that differ from the initial image only in the LSB channel.
Each step of the progressive randomization approach represents a possible content-hiding scenario with increasing message size and increasing LSB entropy. Analyzing these steps, our detection framework infers whether or not the input image I contains a hidden message. We validate our method on 20,000 real, non-synthetic images. Our method uses only statistical descriptors of LSB occurrences and already performs better than comparable techniques in the literature / Mestrado / Visão Computacional / Mestre em Ciência da Computação
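The statistical signal exploited here can be seen in a toy example: LSB embedding drives the LSB plane toward maximal entropy, so re-embedding at progressively larger rates perturbs a clean image's LSB statistics far more than those of an image that already carries a message. The sketch below works on a synthetic pixel array; the biased sample data and the entropy descriptor are illustrative assumptions, not the dissertation's full descriptor set.

```python
import math
import random

def lsb_entropy(pixels):
    """Shannon entropy (bits) of the least-significant-bit plane."""
    ones = sum(p & 1 for p in pixels) / len(pixels)
    if ones in (0.0, 1.0):
        return 0.0
    return -(ones * math.log2(ones) + (1 - ones) * math.log2(1 - ones))

def embed_random_bits(pixels, rate, rng):
    """Simulate LSB embedding: overwrite a `rate` fraction of LSBs
    with random bits, as a spatially random hidden message would."""
    out = list(pixels)
    for i in rng.sample(range(len(out)), int(rate * len(out))):
        out[i] = (out[i] & ~1) | rng.getrandbits(1)
    return out

rng = random.Random(1)
clean = [0] * 900 + [1] * 100          # biased LSB plane, as in natural images
stages = [lsb_entropy(embed_random_bits(clean, r, rng))
          for r in (0.0, 0.25, 0.5, 1.0)]
```

A detector can compare how much each progressive stage changes the descriptors: large changes suggest a clean, biased LSB plane, while small changes suggest the plane was already near-random, i.e., a message is likely present.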
39

Additional comparisons of randomization-test procedures for single-case multiple-baseline designs: Alternative effect types

Levin, Joel R., Ferron, John M., Gafurov, Boris S. 08 1900 (has links)
A number of randomization statistical procedures have been developed to analyze the results from single-case multiple-baseline intervention investigations. In a previous simulation study, comparisons of the various procedures revealed distinct differences among them in their ability to detect immediate abrupt intervention effects of moderate size, with some procedures (typically those with randomized intervention start points) exhibiting power that was both respectable and superior to other procedures (typically those with single fixed intervention start points). In Investigation 1 of the present follow-up simulation study, we found that when the same randomization-test procedures were applied to either delayed abrupt or immediate gradual intervention effects: (1) the powers of all of the procedures were severely diminished; and (2) in contrast to the previous study's results, the single fixed intervention start-point procedures generally outperformed those with randomized intervention start points. In Investigation 2 we additionally demonstrated that if researchers are able to successfully anticipate the specific alternative effect types, it is possible for them to formulate adjusted versions of the original randomization-test procedures that can recapture substantial proportions of the lost powers.
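A randomized-start-point procedure of the kind compared in this study can be written compactly: the test statistic is a pre/post mean shift, and the reference distribution comes from the start points that could have been randomly assigned. The sketch below handles one AB case with an immediate abrupt effect; the data and the admissible start-point window are illustrative assumptions, not the article's simulation design.

```python
import statistics

def start_point_randomization_test(y, actual_start, possible_starts):
    """One-sided randomization test for a single-case AB design with a
    randomized intervention start point: the p-value is the share of
    admissible start points whose pre/post shift is at least as large
    as the one at the actual start."""
    def shift(s):
        return statistics.mean(y[s:]) - statistics.mean(y[:s])

    obs = shift(actual_start)
    ref = [shift(s) for s in possible_starts]
    p = sum(1 for e in ref if e >= obs) / len(ref)
    return obs, p

# Immediate abrupt effect of size 3 beginning at observation 5
y = [0, 0, 0, 0, 0, 3, 3, 3, 3, 3]
obs, p = start_point_randomization_test(y, actual_start=5,
                                        possible_starts=range(3, 8))
```

Adjusting the procedure for a delayed or gradual effect, as Investigation 2 suggests, amounts to swapping `shift` for a statistic matched to the anticipated effect type, e.g. one that skips a post-intervention lag before averaging.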
40

Universal prevention of anxiety and depression in school children

Åhlén, Johan January 2017 (has links)
Anxiety and depression are common in children and adolescents, and involve individual suffering, risk of future psychiatric problems, and high costs to society. However, only a limited number of children experiencing debilitating anxiety and depression are identified and receive professional help. One approach that could possibly reduce the prevalence of these conditions is universal school-based prevention aimed at reducing the impact of risk factors and strengthening protective factors involved in the development of anxiety and depression. The current thesis aimed to contribute to the literature on universal prevention of anxiety and depression in children. Study I involved a meta-analysis of earlier randomized and cluster-randomized trials of universal prevention of anxiety and depression. Overall, the meta-analysis showed small but significant effects of universal preventive interventions, meaning that lower levels of anxiety and depression were evident after intervention completion and partially evident at follow-up assessments. No variables were found to significantly enhance the effects; however, there was a tendency for larger effects to be associated with mental health professionals delivering the interventions. In Study II, a widely adopted prevention program called Friends for Life was evaluated in a large school-based cluster-randomized effectiveness trial. The results showed no evidence of an intervention effect for the whole sample. However, children with elevated depressive symptoms at baseline and children with teachers who participated extensively in supervision seemed to benefit from the intervention in the short term. Study III involved a 3-year follow-up of Study II and an examination of the effects of sample attrition. The results showed no long-term effects for the whole sample and no maintenance of the short-term subgroup effects observed in Study II.
Finally, to increase our understanding of the development of anxiety in children and to assist future improvements of universal prevention, Study IV evaluated different trajectories of overall anxiety together with related patterns of disorder-specific symptoms in a school-based sample over 39 months. Evidence favored a model of three different developmental trajectories across age. One trajectory was characterized by increasing levels of overall anxiety, but fluctuating disorder-specific symptoms arguably related to the normal challenges of children’s developmental level, which warrants an increased focus on age-relevant challenges in universal prevention. The four studies provide further understanding of the overall effectiveness of universal prevention of anxiety and depression in children, the short- and long-term effects of universal prevention in a Swedish context, and ideas for further development of preventive interventions.
