11

Efficient Reconstruction of User Sessions from HTTP Traces for Rich Internet Applications

Hooshmand, Salman January 2017 (has links)
The HTTP traffic generated by users' interactions with a Web application can be logged for further analysis. In this thesis, we present the "Session Reconstruction" problem, that is, the reconstruction of user interactions from the recorded request/response logs of a session. The reconstruction is especially useful when the only available information about the session is its HTTP trace, as could be the case during a forensic analysis of an attack on a website. New Web technologies such as AJAX and DOM manipulation have produced more responsive and smoother Web applications, sometimes called "Rich Internet Applications" (RIAs). Despite the benefits of RIAs, previous session reconstruction methods for traditional Web applications are no longer effective. Recovering information from a log is significantly more challenging in RIAs than in classical Web applications, because the HTTP traffic often contains only application data and no obvious clues about what the user did to trigger that traffic. This thesis studies different techniques for the efficient reconstruction of RIA sessions. We define the problem in the context of client/server applications and propose a solution for it. We present different algorithms that make session reconstruction possible in practice: learning mechanisms to guide the session reconstruction process efficiently, techniques for recovering user inputs and handling client-side randomness, and algorithms for detecting actions that do not generate any HTTP traffic. In addition, to further reduce the session reconstruction time, we propose a distributed architecture to reconstruct an RIA session concurrently over several nodes. To measure the effectiveness of our proposed algorithms, a prototype called D-ForenRIA has been implemented. The prototype consists of a proxy and a set of browsers. The browsers are responsible for trying candidate actions on each state, and the proxy, which holds the observed HTTP trace, is responsible for responding to the browsers' requests and validating the attempted actions on each state. We have used this tool to measure the effectiveness of the proposed techniques during the session reconstruction process. The results of our evaluation on several RIAs show that the proposed solution can efficiently reconstruct user sessions in practice.
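To make the browser/proxy division of labour concrete, here is a minimal, self-contained sketch of the reconstruction loop described in the abstract: candidate actions are tried against the recorded log and accepted only when the traffic they would generate matches the next requests in the trace. The simplified, state-independent data model and all names (reconstruct_session, candidate_actions) are illustrative assumptions, not D-ForenRIA's actual API.

```python
# Toy sketch of trace-driven session reconstruction: a "proxy" holds the
# recorded request log and validates the traffic each candidate action would
# produce. Real RIA reconstruction is state-dependent; this ignores state.

from collections import deque

def reconstruct_session(recorded_trace, candidate_actions):
    """
    recorded_trace: ordered request signatures observed in the HTTP log.
    candidate_actions: maps an action name to the request signatures it generates.
    Returns the sequence of actions whose replayed traffic matches the log.
    """
    remaining = deque(recorded_trace)
    reconstructed = []
    while remaining:
        for action, generated in candidate_actions.items():
            n = len(generated)
            if generated and list(remaining)[:n] == generated:   # proxy-side validation
                for _ in range(n):
                    remaining.popleft()                          # consume matched requests
                reconstructed.append(action)
                break
        else:
            raise RuntimeError("no candidate action explains the next request")
    return reconstructed

# Tiny usage example with made-up data.
trace = ["GET /api/items", "POST /api/cart", "GET /api/checkout"]
actions = {
    "click_browse":   ["GET /api/items"],
    "click_add":      ["POST /api/cart"],
    "click_checkout": ["GET /api/checkout"],
}
print(reconstruct_session(trace, actions))  # ['click_browse', 'click_add', 'click_checkout']
```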
12

Reflective Redo Within a Three-Dimensional Simulation and its Influence on Student Metacognition, Reflection, and Learning

Scoresby, Jon M. 01 August 2011 (has links)
The objective of this study was to investigate the effects of a specifically designed educational simulation, supported by unique technology, on students' metacognition, reflection, and learning. The simulation allows players' actions to be recorded so that they can be reviewed to identify mistakes. It also allows students to return to and redo actions, fixing previous mistakes instead of starting over at the beginning of a new scenario. When starting from the mistake or point of failure identified by a facilitator during the redo of the initial saved scenario, students reflect on the actions performed during the initial scenario. Student thinking during a redo of a scenario, after the initial scenario reflection, may be called reflective redo when the simulation technology can support starting from the point of failure. This research investigated how metacognition, reflection, and learning were affected by reflective redo. Two key findings were identified when analyzing reflective redo in how students learn the content and how they learn about their own use of metacognition and reflection. The first key finding, relating to the influence of reflective redo on learning, was that participants used reflection at levels that matched their need for it as a support mechanism. The second key finding was that the students' ability to place themselves in the problem space contributed to the amount of contextual information they needed to be successful, in this case, either starting from the beginning or from the point of failure.
13

Spatial Replay Protection for Proximity Services: Security and privacy aspects

Lindblom, Fredrik January 2016 (has links)
Proximity Services is a new feature in the 3rd Generation Partnership Project (3GPP) standard for mobile communication. This feature makes it possible to provide services locally if the targets are sufficiently close. However, in the current version of the proposed specification, there is no protection against a malicious user tunneling messages to a remote location to give the impression of proximity. This thesis proposes solutions to protect against such a spatial replay attack and evaluates them based on how well they preserve the user's location privacy, their complexity, and the overhead they add. It is not obvious today what the consequences of a spatial replay attack are and how serious such an attack could be. However, once the feature is deployed and people start using it, it could prove to be a major vulnerability. The methods presented in this thesis could be used to prevent spatial replay in the proximity services of 3GPP or similar standards. The chosen method is a geographical packet leash based on a poly-cylindrical grid, for which only a certain number of least-significant bits of the grid-cell identifier are included in the initial Discovery Message, while the rest can be used in the calculation of the Message Authentication Code.
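The following sketch illustrates the geographical packet leash idea on a toy square grid (not the poly-cylindrical grid of the thesis): the sender reveals only the least-significant bits of its grid-cell identifier and binds the full identifier into an HMAC, so a receiver that is not near the same cell cannot verify the message. Cell-ID computation, bit widths, neighbourhood search, and key handling are simplifying assumptions, not the thesis's exact construction.

```python
# Toy geographical packet leash: full cell ID bound to the MAC, only its
# low-order bits sent in the clear; a distant (relayed) receiver fails to verify.

import hmac, hashlib

LSB_BITS = 8  # how many low-order bits of the cell ID travel in clear text

def cell_id(lat, lon, cell_deg=0.01):
    """Map a position to a coarse grid-cell identifier (toy 2D grid)."""
    return (int(lat // cell_deg) << 20) | (int(lon // cell_deg) & 0xFFFFF)

def build_discovery(key, payload, lat, lon):
    cid = cell_id(lat, lon)
    lsb = cid & ((1 << LSB_BITS) - 1)                        # sent in the clear
    mac = hmac.new(key, payload + cid.to_bytes(8, "big"),    # full cell ID in the MAC
                   hashlib.sha256).digest()
    return payload, lsb, mac

def verify_discovery(key, payload, lsb, mac, lat, lon, neighbourhood=1):
    """Receiver tries cell IDs around its own position whose LSBs match."""
    base = cell_id(lat, lon)
    for d in range(-neighbourhood, neighbourhood + 1):
        candidate = base + d
        if candidate & ((1 << LSB_BITS) - 1) != lsb:
            continue
        expected = hmac.new(key, payload + candidate.to_bytes(8, "big"),
                            hashlib.sha256).digest()
        if hmac.compare_digest(expected, mac):
            return True
    return False

key = b"shared-discovery-key"
msg, lsb, tag = build_discovery(key, b"ProSe announce", 59.3293, 18.0686)
print(verify_discovery(key, msg, lsb, tag, 59.3293, 18.0686))   # True: receiver nearby
print(verify_discovery(key, msg, lsb, tag, 40.7128, -74.0060))  # False: relayed far away
```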
14

Efficiency determination of automated techniques for GUI testing

Jönsson, Tim January 2014 (has links)
In the research community, efficiency is not a well-defined term in software testing. In the industry, and specifically the test-tool industry, it has become a sales pitch without meaning. Manual GUI testing is a time-consuming task that testers often find repetitive and tedious. When human testers perform a task in which focus is hard to maintain, defects often go unnoticed. The purpose of this thesis is to collect knowledge on efficiency in software testing, focusing on efficiency in GUI testing to keep the scope manageable. Part of the purpose is also to test the hypothesis that automated GUI testing is more efficient than traditional, manual GUI testing. To reach this purpose, case study research was chosen as the main research method. Within the case study, a theoretical study was performed to gain knowledge on the subject. To gather data for analysis, a semi-experimental research approach was used in which one automated GUI testing technique, Capture & Replay, was tested against a more traditional approach to GUI testing. The results obtained throughout the case study give a definition of efficiency in software testing, as well as three measurements of efficiency: defect detection, repeatability of test cases, and time spent with human interaction. The results also include the findings from the semi-experimental approach, in which the testing tools Squish and TestComplete were used alongside a manual testing approach. The main conclusion drawn in this work is that an automated approach to GUI testing can become more efficient than a manual approach in the long run, when efficiency is determined by defect detection, repeatability, and time.
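As a rough illustration of the three efficiency measures named above (defect detection, repeatability of test cases, and time spent with human interaction), the sketch below compares a manual and an automated run; the numbers and the summary format are made up for demonstration only and are not results from the thesis.

```python
# Toy comparison of manual vs. automated GUI-test runs along the three
# efficiency measures discussed above. All figures are illustrative.

from dataclasses import dataclass

@dataclass
class TestRun:
    defects_found: int
    runs_without_intervention: int   # re-runs that needed no human help
    total_runs: int
    human_minutes: float             # tester time spent preparing/driving the run

def summarize(name, run):
    repeatability = run.runs_without_intervention / run.total_runs
    print(f"{name:>9}: defects={run.defects_found}, "
          f"repeatability={repeatability:.0%}, human time={run.human_minutes:.0f} min")

summarize("manual",    TestRun(defects_found=7, runs_without_intervention=0, total_runs=10, human_minutes=480))
summarize("automated", TestRun(defects_found=6, runs_without_intervention=9, total_runs=10, human_minutes=90))
```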
15

Contributions to statistical analysis methods for neural spiking activity

Tao, Long 27 November 2018 (has links)
With the technical advances in neuroscience experiments in the past few decades, we have seen a massive expansion in our ability to record neural activity. These advances enable neuroscientists to analyze more complex neural coding and communication properties, and at the same time raise new challenges for analyzing neural spiking data, which keeps growing in scale, dimension, and complexity. This thesis proposes several new statistical methods that advance statistical analysis approaches for neural spiking data, including sequential Monte Carlo (SMC) methods for efficient estimation of neural dynamics from membrane potential threshold crossings, state-space models using multimodal observation processes, and goodness-of-fit analysis methods for neural marked point process models. In the first project, we derive a set of iterative formulas that enable us to simulate trajectories from stochastic, dynamic neural spiking models that are consistent with a set of spike time observations. We develop an SMC method to simultaneously estimate the parameters of the model and the unobserved dynamic variables from spike train data, and we investigate the performance of this approach on a leaky integrate-and-fire model. In another project, we define a semi-latent state-space model to estimate information related to the phenomenon of hippocampal replay. Replay is a recently discovered phenomenon in which patterns of hippocampal spiking activity that typically occur during exploration of an environment are reactivated when an animal is at rest. This reactivation is accompanied by high-frequency oscillations in hippocampal local field potentials. However, methods to define replay mathematically remain undeveloped. In this project, we construct a novel state-space model that enables us to identify whether replay is occurring and, if so, to estimate the movement trajectories consistent with the observed neural activity and to categorize the content of each event. The state-space model integrates information from the spiking activity of the hippocampal population, the rhythms in the local field potential, and the rat's movement behavior. Finally, we develop a new, general time-rescaling theorem for marked point processes and use it to develop a general goodness-of-fit framework for neural population spiking models. We investigate this approach through simulation and a real-data application.
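For context, the classical time-rescaling theorem for an unmarked point process, which the thesis generalizes to marked point processes, can be stated as below; this is the standard result underlying point-process goodness-of-fit tests, not the thesis's generalized theorem.

```latex
% Classical time-rescaling theorem for a point process with spike times
% t_1 < t_2 < \dots and conditional intensity \lambda(t \mid H_t):
\Lambda(t_k) = \int_{0}^{t_k} \lambda(u \mid H_u)\, du, \qquad
\tau_k = \Lambda(t_k) - \Lambda(t_{k-1}).
% If the model is correct, the rescaled intervals \tau_k are i.i.d.
% Exponential(1); equivalently, z_k = 1 - e^{-\tau_k} are i.i.d. Uniform(0,1)
% and can be checked with a Kolmogorov--Smirnov test.
```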
16

High performance trace replay event simulation of parallel programs behavior / Ferramenta de alto desempenho para análise de comportamento de programas paralelos baseada em rastos de execução

Korndorfer, Jonas Henrique Muller January 2016 (has links)
Modern high performance systems comprise thousands to millions of processing units. The development of a scalable parallel application for such systems depends on an accurate mapping of application processes onto the available resources. The identification of unused resources and potential processing bottlenecks requires good performance analysis. The trace-based observation of a parallel program execution is one of the most helpful techniques for this purpose. Unfortunately, tracing often produces large trace files, easily reaching the order of gigabytes of raw data. Therefore trace-based performance analysis tools have to process such data into a human-readable form and must be efficient enough to allow a useful analysis. Most of the existing tools, such as Vampir, Scalasca, and TAU, focus on the processing of trace formats with a fixed and well-defined semantic. The corresponding file formats are usually designed to handle applications developed using popular libraries like OpenMP, MPI, and CUDA. However, not all parallel applications use such libraries, and so these tools are sometimes not useful. Fortunately, there are other tools that present a more dynamic approach by using an open trace file format without a specific semantic, among them Paraver, Pajé, and PajeNG. However, being generic comes with a cost: these tools frequently present low performance for the processing of large traces. The objective of this work is to present performance optimizations made in the PajeNG tool-set. This comprises the development of a parallelization strategy and a performance analysis to demonstrate our gains. The original PajeNG works sequentially, processing a single trace file with all data from the observed application, so the scalability of the tool is strongly limited by the reading of the trace file. Our strategy splits the file so that several pieces can be processed in parallel, and the method created to split the traces allows each piece to be processed in a separate execution flow. The experiments were executed on non-uniform memory access (NUMA) machines. The performance analysis considers several aspects such as thread locality, number of flows, disk type, and comparisons between the NUMA nodes. The obtained results are very promising, scaling up PajeNG by a factor of about eight to eleven, depending on the machine.
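The sketch below illustrates the splitting strategy in Python (PajeNG itself is a C++ tool-set, so this is not its code): the trace file is divided into byte ranges, each range is parsed by a separate worker, and the per-chunk results are merged. The file layout, event format, and merge step are simplifying assumptions.

```python
# Illustrative parallel processing of a large trace file by splitting it into
# independent byte-range chunks, one per worker process.

import os
from concurrent.futures import ProcessPoolExecutor

def parse_chunk(path, start, end):
    """Parse one byte range of the trace and return simple per-chunk statistics."""
    counts = {}
    with open(path, "rb") as f:
        f.seek(start)
        if start:                      # skip the partial line owned by the previous chunk
            f.readline()
        while f.tell() < end:
            line = f.readline()
            if not line:
                break
            event_type = line.split(b",", 1)[0].strip()   # assumes comma-separated events
            counts[event_type] = counts.get(event_type, 0) + 1
    return counts

def parallel_replay(path, workers=4):
    size = os.path.getsize(path)
    bounds = [(i * size // workers, (i + 1) * size // workers) for i in range(workers)]
    total = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(parse_chunk, [path] * workers,
                                [b[0] for b in bounds], [b[1] for b in bounds]):
            for k, v in partial.items():
                total[k] = total.get(k, 0) + v
    return total

# Usage: parallel_replay("trace.paje", workers=8)
```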
17

Instant Replay Television as a Method for Teaching Certain Physical Aspects of Choral Conducting

Hunter, John Richard 08 1900 (has links)
The problem of this study was to determine the effectiveness of instant replay television as a method for teaching certain physical aspects of choral conducting to undergraduate college or university students majoring in music education.
18

Light-Weight Authentication Schemes with Applications to RFID Systems

Malek, Behzad 03 May 2011 (has links)
The first line of defence against wireless attacks in Radio Frequency Identification (RFID) systems is the authentication of tags and readers. RFID tags are very constrained in terms of power, memory, and circuit size; therefore, they are not capable of performing sophisticated cryptographic operations. In this dissertation, we have designed light-weight authentication schemes to securely identify the RFID tags to readers and vice versa. The authentication schemes require simple binary operations and can be readily implemented in resource-constrained RFID tags. We provide a formal proof of security based on the difficulty of solving the Syndrome Decoding (SD) problem. Authentication verifies the unique identity of an RFID tag, making it possible to track a tag across multiple readers. We further protect the identity of RFID tags by a light-weight privacy-protecting identification scheme based on the difficulty of the Learning Parity with Noise (LPN) complexity assumption. To protect RFID tag authentication against relay attacks, we have designed a resistance scheme in the analog realm that does not have the practicality issues of existing solutions. Our scheme is based on chaos-suppression theory and is robust to inconsistencies such as noise and parameter mismatch. Furthermore, our solutions are based on asymmetric-key algorithms that better facilitate the distribution of cryptographic keys in large systems. We have provided a secure broadcast encryption protocol to efficiently distribute cryptographic keys throughout the system with minimal communication overhead. The security of the proposed protocol is formally proven in the adaptive adversary model, which simulates the attacker in the real world.
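To give a flavour of the "simple binary operations" that LPN-based schemes rely on, below is a generic HB-style authentication round. This is not the dissertation's construction: the parameters are toy-sized and deliberately insecure, and the noise rate and acceptance threshold are illustrative assumptions.

```python
# Generic HB-style LPN authentication: the tag answers GF(2) inner products of
# a shared secret with random challenges, flipping some answers with a small
# noise probability; the reader accepts if the error count stays below a threshold.

import secrets

N = 32                    # length of the shared secret (toy size)
ROUNDS = 64               # challenge-response rounds
ETA_NUM, ETA_DEN = 1, 8   # tag flips its answer with probability 1/8 (the LPN noise)
THRESHOLD = 16            # reader accepts if at most this many answers disagree

def dot(a, b):
    """Inner product over GF(2): parity of the bitwise AND of two bit vectors."""
    return bin(a & b).count("1") & 1

secret = secrets.randbits(N)          # shared between tag and reader

def tag_respond(challenge):
    noise = 1 if secrets.randbelow(ETA_DEN) < ETA_NUM else 0
    return dot(secret, challenge) ^ noise

def reader_authenticate():
    errors = 0
    for _ in range(ROUNDS):
        challenge = secrets.randbits(N)
        if tag_respond(challenge) != dot(secret, challenge):
            errors += 1
    return errors <= THRESHOLD

print("tag accepted:", reader_authenticate())
```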
