About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Benchmarking measurement-based quantum computation on graph states

Qin, Zhangjie 26 August 2024 (has links)
Measurement-based quantum computation is a form of quantum computing that operates on a prepared entangled graph state, typically a cluster state. In this dissertation, we will detail the creation of graph states across various physical platforms using different entangling gates. We will then benchmark the quality of graph states created with error-prone interactions through quantum wire teleportation experiments. By leveraging underlying symmetry, we will design graph states as measurement-based quantum error correction codes to protect against perturbations, such as ZZ crosstalk in quantum wire teleportation. Additionally, we will explore other measurement-based algorithms used for the quantum simulation of time evolution in fermionic systems, using the Kitaev model and the Hubbard model as examples. / Doctor of Philosophy / A quantum computer refers to a device that performs general computational functions relying on logic gates using units dominated by microscopic quantum properties. The fundamental difference between quantum computers and classical computers lies in the distinction between the basic quantum unit, the qubit, and the classical computational unit, the bit. Both qubits and bits can exist in states 0 and 1. However, qubits possess two characteristics that classical computational units do not: superposition and entanglement. Superposition allows a qubit to exist in a combination of both states 0 and 1 simultaneously. Entanglement refers to the phenomenon where qubits interact and form an inseparable unified state. The effective utilization of these unique properties enables quantum computers to exhibit capabilities far surpassing those of classical computers. Analogous to classical computers, qubits can be interconnected in a circuit-like manner similar to classical bits, forming an architecture known as circuit-based quantum computation (CBQC). However, given the unique properties of quantum systems, particularly entanglement, a novel architecture called measurement-based quantum computing (MBQC) can also be designed. MBQC relies on pre-entangled graph states, usually cluster states, and only requires single-qubit measurements to implement quantum algorithms. The MBQC framework also includes a universal gate set, similar to other quantum computing architectures like CBQC. In this dissertation, we will introduce the creation of graph states and the implementation of measurement-based quantum algorithms.
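The cluster states that MBQC relies on can be sketched numerically. The following minimal illustration (not taken from the dissertation) builds a three-qubit linear cluster state by applying CZ gates to |+⟩ states and verifies its defining stabilizers:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

def cz(n, a, b):
    """Controlled-Z between qubits a and b in an n-qubit register."""
    diag = np.ones(2 ** n)
    for idx in range(2 ** n):
        bits = format(idx, f"0{n}b")
        if bits[a] == "1" and bits[b] == "1":
            diag[idx] = -1.0
    return np.diag(diag)

# Linear three-qubit cluster state: |+>|+>|+> entangled by nearest-neighbour CZs
state = np.kron(np.kron(plus, plus), plus)
state = cz(3, 1, 2) @ cz(3, 0, 1) @ state

# A cluster state is the +1 eigenstate of K_i = X_i * prod_{j neighbour of i} Z_j
stabilizers = [kron(X, Z, I2), kron(Z, X, Z), kron(I2, Z, X)]
for K in stabilizers:
    assert np.allclose(K @ state, state)   # each stabilizer eigenvalue is +1
print("cluster state passes all stabilizer checks")
```

Measuring single qubits of such a state in suitably rotated bases is what drives computation in the MBQC model.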
2

Brief Adjustment Scale - 6 for Measurement-Based Care: Further Evaluation of Psychometric Properties, Ethnic Samples, and Clinical Utility

Ko, Hayoung 08 September 2021 (has links)
The Brief Adjustment Scale-6 (BASE-6) was recently developed and initially examined as a brief, reliable, no-cost survey for measuring general psychological functioning within Measurement-Based Care (MBC). This study aims to further evaluate the psychometric properties of the BASE-6 in both clinical and nonclinical populations. More specifically, psychometric properties including reliability and validity, generalizability across racial/ethnic samples, and clinical utility within the context of MBC were examined. Three adult samples participated in this study: online community participants (Sample 1: n = 394), college students (Sample 2: n = 249), and individuals receiving outpatient clinical services (Sample 3: n = 80). Participants in the clinical sample regularly completed the BASE-6 while receiving evidence-based treatment in a community-based psychological training clinic, where the practice of MBC is standard. Results showed that the BASE-6 had a high level of internal consistency and good test-retest reliability. It showed high convergent validity through a significant positive correlation with total and subscale scores of the Depression Anxiety Stress Scale-21 (DASS-21). Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) results supported a unidimensional model that fit the data well. All results were uniform across samples. When comparing the clinical and nonclinical samples, participants in the clinical sample scored higher on the total and on most individual items of the BASE-6. Finally, within the clinical sample, the BASE-6 was sensitive enough to capture change over time, demonstrating a large effect size for pre-post treatment changes and significant linear change in multilevel growth modeling. These results support the BASE-6 as a reliable and valid measure with a one-factor structure, regardless of race/ethnicity and participants’ clinical status.
Additionally, it can sensitively detect clinical change in individuals over the course of treatment. Thus, the BASE-6 appears to accurately monitor overall psychological adjustment. / M.S. / The Brief Adjustment Scale-6 (BASE-6) is a short, no-cost survey that assesses general psychological adjustment. It can be used in a psychotherapy setting to evaluate the overall functioning of individuals on a regular basis. This study aims to further investigate the BASE-6 in both clinical and nonclinical populations; more specifically, whether the BASE-6 questionnaire is reliable, consistent, and valid across various racial/ethnic populations. Additionally, it examined whether the BASE-6 can sensitively capture change in psychological adjustment over the course of treatment. Three adult groups participated in this study: online community participants (Sample 1: n = 394), college students (Sample 2: n = 249), and individuals receiving outpatient clinical services (Sample 3: n = 80). Participants in the clinical sample regularly completed the BASE-6 while receiving psychotherapy in a community-based psychological training clinic. Results showed that the BASE-6 was a reliable, consistent measure and that its items all measured the same construct: general psychological adjustment. The BASE-6 was also highly correlated with depression, anxiety, and stress. Results did not differ across samples or race/ethnicity. When comparing the clinical and nonclinical samples, participants in the clinical sample scored higher on the BASE-6. Finally, within the clinical sample, the BASE-6 was sensitive enough to capture change over time, demonstrating large treatment changes from the beginning to the end of treatment. These results support the BASE-6 as a reliable and valid measure regardless of race/ethnicity and participants’ clinical status. Additionally, it can sensitively detect clinical changes in clients over the course of treatment.
Thus, the BASE-6 appears to accurately monitor overall psychological adjustment.
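Internal consistency of the kind reported for the BASE-6 is typically quantified with Cronbach's alpha. As an illustration (the item scores below are invented, not from the study), alpha for a six-item scale can be computed directly from a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: six item scores (1-7 scale) for five respondents
scores = [[4, 5, 4, 5, 4, 5],
          [2, 2, 3, 2, 2, 3],
          [6, 6, 5, 6, 6, 5],
          [3, 4, 3, 3, 4, 3],
          [5, 5, 6, 5, 5, 6]]
print(round(cronbach_alpha(scores), 3))
```

Values above roughly 0.8 are conventionally read as high internal consistency; the strongly correlated toy items above land well above that threshold.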
3

Evaluating error when estimating the loss probability in a packet buffer

Wahid, Amna Abdul January 2016 (has links)
In this thesis we explore precision in the measurement of buffer overflow and loss probability. We see how buffer overflow probability compares with the queuing delay measurements covered in the literature. More specifically, we measure the overflow probability of a packet buffer for various sampling rates to see the effect of sampling rate on the estimation. There are various reasons for measurement in networks; one key context assumed here is Measurement Based Admission Control. We conduct simulation experiments with analytically derived VoIP and bursty traffic parameters, in Matlab, while treating the buffer under consideration as a two-state Markov Chain. We note that estimation error decreases with an increase in sampling gap (in other words, precision improves and variance decreases as the sampling rate decreases). We then perform experiments for VoIP and bursty data using the NS-2 simulator and record the buffer states generated therein. We see a similar trend of increasing precision with increasing sampling gap. In our simulations, we have mainly considered static traffic passing through the buffer, and we use elastic traffic (TCP) for comparison. We see from our results that the sampling error becomes constant beyond a certain asymptotic level. We thus look into the asymptotic error in estimation, for the lowest sampling gap, to establish a lower bound on the estimation error for buffer loss probability measurement. We use formulae given in recent literature for computing the experimental and theoretical asymptotic variance of the buffer state traces in our scenarios. We find that the theoretical and experimental asymptotic variance of overflow probability match when sampling a trace of buffer states modelled as a two-state Markov Chain in Matlab. We claim that this is a new approach to computing the lower bound on the measurement of buffer overflow probability, when the buffer states are modelled as a Markov process.
Using Markov Chain modelling for buffer overflow, we further explore the relationship between sampling rate and accuracy. We find that there is no relationship between sampling gap and estimation bias. Crucially, we go on to show that a more realistic simulation of a packet buffer reveals that the distribution of buffer overflow periods does not always permit simple Markov modelling of the buffer states; while the sojourn periods are exponential for the smaller burst periods, the tail of the distribution does not fit the same exponential. While our work validates the use of a two-state Markov model as a useful approximation for modelling the overflow of a buffer, we have established that earlier work relying on simple Markovian assumptions will thereby underestimate the error in the measured overflow probabilities.
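The two-state Markov-chain view of buffer occupancy can be sketched in a few lines (the transition probabilities below are invented for illustration): simulate the overflow/no-overflow state, then estimate the overflow probability by sampling the trace at different gaps and compare against the known stationary value.

```python
import random

def simulate_chain(steps, p01, p10, seed=1):
    """Two-state Markov chain trace; state 1 represents buffer overflow."""
    rng = random.Random(seed)
    state, trace = 0, []
    for _ in range(steps):
        if state == 0:
            state = 1 if rng.random() < p01 else 0
        else:
            state = 0 if rng.random() < p10 else 1
        trace.append(state)
    return trace

def estimate_overflow(trace, gap):
    """Estimate overflow probability from every gap-th observation."""
    sampled = trace[::gap]
    return sum(sampled) / len(sampled)

# Stationary overflow probability is p01 / (p01 + p10) = 0.1 here
p01, p10 = 0.02, 0.18
trace = simulate_chain(200_000, p01, p10)
for gap in (1, 10, 100):
    print(gap, round(estimate_overflow(trace, gap), 3))
```

With a fixed trace the point estimates all land near the stationary value; the thesis's question is how the *variance* of such estimators behaves as the gap (and hence the correlation between successive samples) changes.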
4

Anomaly Detection for Control Centers

Gyamfi, Cliff Oduro 06 1900 (has links)
The control center is a critical location in the power system infrastructure. Decisions regarding the power system’s operation and control are often made from the control center. These control actions are made possible through SCADA communication. This capability, however, makes the power system vulnerable to cyber attacks. Most of the decisions taken by the control center depend on the measurement data received from substations. These measurements estimate the state of the power grid. Measurement-based cyber attacks have been well studied as a major threat to control center operations. Stealthy false data injection attacks are known to evade bad data detection. Due to the limitations of bad data detection at the control center, many approaches have been explored, especially in the cyber layer, to detect measurement-based attacks. Though helpful, these approaches do not consider the physical layer. This study proposes an anomaly detection system for the control center that operates on the laws of physics. The system also identifies the specific falsified measurement and proposes an estimate of its true value. / United States Department of Energy (DOE) National Renewable Energy Laboratory (NREL) / Master of Science / Electricity is an essential need for human life. The power grid is one of the most important human inventions, one that fueled other technological innovations in the industrial revolution. Changing demands in usage have added to its operational complexity. Several modifications have been made to the power grid since its invention to make it robust and operationally safe. Integration of ICT has significantly improved the monitoring and operability of the power grid. Improvements through ICT have also exposed the power grid to cyber vulnerabilities. Since the power system is a critical infrastructure, there is a growing need to keep it secure and operable for the long run.
The control center of the power system serves mainly as the decision-making hub of the grid. It operates through communication links with the various dispersed devices and substations on the grid. This interconnection makes remote control and monitoring decisions possible from the control center. Data from the substations, passed through the control center, are also used in electricity markets and economic dispatch. The control center is, however, susceptible to cyber attacks, particularly measurement-based attacks. When attackers launch measurement attacks, their goal is to force control actions from the control center that can make the system unstable. They exploit vulnerabilities in the cyber layer to launch these attacks, injecting falsified data packets over the communication link to displace correct ones upon arrival at the control center. This study presents an anomaly detection system that can detect falsified measurements at the control center. It also indicates the specific falsified measurements and provides an estimated value for further analysis.
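The physics-based flagging idea can be illustrated with a toy DC state-estimation example (the network, measurement matrix, and attack magnitude below are all invented): a falsified measurement that is inconsistent with the physical model produces the largest residual, and re-estimating the state without it yields a proposed corrected value.

```python
import numpy as np

# Toy DC state estimation: two bus angles, five measurements
# (two redundant angle measurements per bus plus one line flow).
H = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0],
              [1.0, -1.0]])
x_true = np.array([0.10, 0.05])      # bus voltage angles (rad)
z = H @ x_true                       # clean measurements

z_attack = z.copy()
z_attack[4] += 0.2                   # falsify the line-flow measurement

# Least-squares residuals: a measurement at odds with the physics sticks out
x_hat, *_ = np.linalg.lstsq(H, z_attack, rcond=None)
r = z_attack - H @ x_hat
suspect = int(np.argmax(np.abs(r)))

# Re-estimate without the suspect to propose its corrected value
keep = [i for i in range(len(z)) if i != suspect]
x_clean, *_ = np.linalg.lstsq(H[keep], z_attack[keep], rcond=None)
corrected = float(H[suspect] @ x_clean)
print(suspect, round(corrected, 3))   # flags measurement 4, proposes 0.05
```

Note that this naive residual test only works because the remaining measurements are redundant; a stealthy attack crafted to lie in the column space of H (the false-data-injection case the abstract mentions) leaves the residuals unchanged, which is exactly why detection beyond bad-data tests is needed.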
5

Patient Compliance of Patient Reported Outcome Measures in Measurement-Based Care After an Abrupt Shift to Telehealth During COVID-19

Raines, Adam J. 02 1900 (has links)
Measurement-Based Care (MBC) is considered an evidence-based practice. Despite its well-documented efficacy, it is underutilized in the clinical community for various reasons, including clinician and patient buy-in. A key component of the successful implementation of MBC is the routine administration of Patient Reported Outcome Measures (PROMs). There is a lack of research describing the utilization of MBC in a telehealth setting. As technological innovations continue, a greater number of clinics are offering telemental health services. Additionally, the COVID-19 pandemic caused a majority of underprepared clinics to begin implementing telehealth. The present study sought to evaluate patient compliance with PROMs in MBC after an abrupt shift to telehealth due to the COVID-19 pandemic. Participants were drawn from a clinical population at a community-based psychological training clinic. The participants were separated into three modality groups: modality 1 (in-person services, n = 17), modality 2 (telehealth services, n = 17), and modality 3 (hybrid of modalities 1 and 2, n = 10), to assess the effect of modality on mean PROM compliance. The participants were also separated into adult (≥18 years of age, n = 23) and child (<18 years of age, n = 17) groups, to assess the effect of maturity on mean PROM compliance. Results showed that mean PROM compliance was significantly higher in the in-person modality than in the telehealth modality. Results also showed that PROM compliance was significantly higher in adults than in children. There was not a significant interaction effect of modality and maturity on PROM compliance. Additionally, results showed that PROM compliance decreased significantly after the switch from in-person services to telehealth services in the hybrid modality. These findings show that clinics may face significant barriers to the implementation of MBC after a sudden shift to telehealth caused by an unforeseen stressful event. / M.S.
/ Measurement-Based Care (MBC) is an effective practice for the treatment of patients in psychological practices. It is a collaborative process in which the clinician and patient track treatment progress and outcomes through consistently administered measures known as Patient Reported Outcome Measures (PROMs). Although MBC has been shown to be effective, there is little literature regarding its use in a telehealth setting. As a greater number of clinics begin offering telehealth services, questions regarding patient adherence to interventions have arisen. Furthermore, the global COVID-19 pandemic forced a majority of underprepared clinics to offer telehealth services. The current study sought to better understand potential barriers to the implementation of MBC in a telehealth setting. Participants were drawn from patients receiving therapy at a community-based psychological training clinic. The participants were separated into three groups: in-person services (n = 17), telehealth services (n = 17), and a hybrid of in-person and telehealth (n = 10), to assess the effect of modality on mean PROM compliance. The participants were also separated into adult (≥18 years of age, n = 23) and child (<18 years of age, n = 17) groups, to assess the effect of maturity on mean PROM compliance. Results showed that patients in the in-person therapy group were more likely to complete their measures than patients in the telehealth therapy group, regardless of their maturity. Additionally, adult patients were more likely to complete their measures than child patients, regardless of the modality. Results also showed that patients who experienced both in-person and telehealth services were more likely to complete their measures before the switch to telehealth. These findings show that clinics may face significant barriers to the implementation of MBC after a sudden shift to telehealth caused by an unforeseen stressful event.
6

Advancing Assessments on Aerosol Radiative Effect by Measurement-based Direct Effect Estimation and through Developing an Explicit Climatological Convective Boundary Layer Model

Zhou, Mi 09 November 2006 (has links)
The first part of the thesis assesses the aerosol direct radiative effect (ADRE) with a focus on ground-based AERONET and satellite MODIS measurements. The AERONET aerosol climatology is used, in conjunction with surface albedo and cloud products from MODIS, to calculate the ADRE and its normalized form (NADRE) for distinct aerosol regimes. The NADRE is defined as the ADRE normalized by the optical depth at 550 nm and is mainly determined by intrinsic aerosol optical properties and geographical parameters. These terms are evaluated for cloud-free and cloudy conditions and for all-mode and fine-mode aerosols. We find that the NADRE of fine-mode aerosol is larger at the TOA but smaller at the surface in comparison to that of all-mode aerosol. Cloudy-sky TOA ADRE is sensitive to the relative location of the aerosol and cloud layers. The high-resolution MODIS land surface albedo is also applied to study the clear-sky ADRE over North Africa and the Arabian Peninsula for summer 2001. TOA ADRE shows high spatial variability closely resembling that of the surface albedo. The second part of the thesis develops a 2-D conceptual model for a climatological convective boundary layer over land as a persistent and distinct component in climate models, in which the convective-scale motion is explicitly described by fluid dynamics and thermodynamics while the smaller-scale effect is parameterized for a neutral stratification. Our conceptual model reasonably reproduces the essential statistics of a convective boundary layer in comparison to large eddy simulations. The major difference is that our model produces a better organized and more constrained spatial distribution with coherent convective cells. The simulations for a climatological convective boundary layer are conducted for a prescribed constant and homogeneous surface heat flux and a specified cooling term representing the background large-scale thermal balance.
The results show the 2-D coherent structures of convective cells with characteristic scales comparable with PBL height; downward maximum velocities being 70-80% of the accompanying upward maxima; vertical profiles of a constant potential temperature and linear decreasing heat fluxes; a square-root increase in the velocity magnitude with increasing surface heat flux.
7

Exploration of a Scalable Holomorphic Embedding Method Formulation for Power System Analysis Applications

January 2017 (has links)
The holomorphic embedding method (HEM) applied to the power-flow problem (HEPF) has been used in the past to obtain the voltages and flows for power systems. The incentives for using this method over the traditional Newton-Raphson based numerical methods lie in the claim that the method is theoretically guaranteed to converge to the operable solution, if one exists. In this report, HEPF will be used for two power system analysis purposes: a. Estimating the saddle-node bifurcation point (SNBP) of a system b. Developing reduced-order network equivalents for distribution systems. Typically, the continuation power flow (CPF) is used to estimate the SNBP of a system, which involves solving multiple power-flow problems. One of the advantages of HEPF is that the solution is obtained as an analytical expression of the embedding parameter, and using this property, three of the proposed HEPF-based methods can estimate the SNBP of a given power system without solving multiple power-flow problems (if generator VAr limits are ignored). If VAr limits are considered, the mathematical representation of the power-flow problem changes and thus an iterative process would have to be performed in order to estimate the SNBP of the system. This would typically still require fewer power-flow problems to be solved than CPF in order to estimate the SNBP. Another proposed application is to develop reduced-order network equivalents for radial distribution networks that retain the nonlinearities of the eliminated portion of the network and hence remain more accurate than traditional Ward-type reductions (which linearize about the given operating point) when the operating condition changes. Different ways of accelerating the convergence of the power series obtained as a part of HEPF are explored, and it is shown that the eta method is the most efficient of all methods tested. The local-measurement-based methods of estimating the SNBP are studied. Non-linear Thévenin-like networks as well as multi-bus networks are built using model data to estimate the SNBP, and it is shown that the structure of these networks can be made arbitrary by appropriately modifying the nonlinear current injections, which can simplify the process of building such networks from measurements. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2017
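Series-acceleration methods like the eta method operate on the partial sums of a slowly converging power series. As a generic illustration (using Aitken's delta-squared process as a stand-in, not the eta method the thesis evaluates), acceleration can extract far more accuracy from the same handful of terms:

```python
import math

def partial_sums(n_terms):
    """Partial sums of the alternating series ln(2) = 1 - 1/2 + 1/3 - ..."""
    s, sums = 0.0, []
    for n in range(1, n_terms + 1):
        s += (-1) ** (n + 1) / n
        sums.append(s)
    return sums

def aitken(seq):
    """One pass of Aitken's delta-squared acceleration on a sequence."""
    return [seq[i] - (seq[i + 1] - seq[i]) ** 2
            / (seq[i + 2] - 2 * seq[i + 1] + seq[i])
            for i in range(len(seq) - 2)]

raw = partial_sums(12)
accelerated = aitken(aitken(raw))          # two passes on the same 12 terms
print(abs(raw[-1] - math.log(2)))          # error of the plain partial sum
print(abs(accelerated[-1] - math.log(2)))  # much smaller
```

The same idea motivates accelerating the HEPF voltage series: the truncated series converges slowly near the SNBP, so a transformed sequence of partial sums can reach a usable answer with far fewer computed coefficients.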
8

Stochastic harmonic emission model of aggregate residential customers

Blanco Castaneda, Ana Maria 10 July 2018 (has links) (PDF)
Harmonic propagation studies of public distribution networks require accurate models of aggregate residential customers (groups of customers) that simulate the harmonic emission of the multitude of household appliances in the network. Most present models were developed with the component-based approach, where models of individual household appliances are combined to build the model of multiple customers. This approach requires a large amount of input data, such as models of individual household appliances and detailed information on customer behavior and device composition, which is usually not easy to acquire. However, with the increasing number of PQ-analyzers in networks, the measurement-based approach is now considered more and more often for the modeling of aggregate customers. The measurement-based approach uses measurements of the network in combination with top-down methodologies to obtain models of the aggregate customers. Compared to the component-based approach it has several advantages, such as inherent consideration of the real operating changes of the individual household appliances, variation of customer behavior, the effect of line impedances, and cancellation and attenuation effects. This thesis presents the development of a time-series stochastic model of the low-order harmonic emission of aggregate residential customers based on a top-down, measurement-based approach. The model represents the daily variation of the harmonic magnitudes and phase angles. In addition, the model includes the representation of harmonic unbalances, which is of great importance for the proper analysis of harmonic propagation in medium-voltage networks. The model is parametrized for German networks, but the methodology can be applied to derive models for other regions or countries.
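A stochastic aggregate-emission model of this flavor can be sketched as follows (every distribution, magnitude, and phase parameter below is invented for illustration, not taken from the thesis): each customer's harmonic current is drawn as a complex phasor whose magnitude follows an assumed daily profile, and summing phasors over customers captures the partial cancellation that phase diversity produces.

```python
import cmath
import math
import random

# Invented per-harmonic parameters: mean magnitude (A) and mean phase (rad)
MEAN_MAG = {3: 1.0, 5: 0.6, 7: 0.3}
MEAN_PHASE = {3: 2.0, 5: -1.5, 7: 0.5}

def customer_emission(hour, h, rng):
    """Draw one customer's h-th harmonic current as a complex phasor."""
    # Assumed daily profile with an evening peak around 20:00
    daily = 0.5 + 0.5 * max(0.0, math.cos((hour - 20.0) * math.pi / 12))
    mag = max(0.0, rng.gauss(MEAN_MAG[h] * daily, 0.1 * MEAN_MAG[h] * daily))
    phase = rng.gauss(MEAN_PHASE[h], 0.3)
    return cmath.rect(mag, phase)

rng = random.Random(7)
customers = [customer_emission(20, 5, rng) for _ in range(50)]

aggregate = abs(sum(customers))              # phasor sum of the group
arithmetic = sum(abs(c) for c in customers)  # ignores phase diversity
print(aggregate < arithmetic)                # True: partial cancellation
```

Summing complex phasors rather than magnitudes is the essential modeling choice: it is what lets an aggregate model reproduce the cancellation and attenuation effects the abstract lists as advantages of the measurement-based approach.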
9

Monogamy relations between multipartite states and memory effects in projective-measurement-based quantum computation /

Filenga, Daví. January 2020 (has links)
Advisor: Felipe Fernandes Fanchini / Abstract: In this work, a study of the monogamous relations between entanglement of formation (EF) and quantum discord (QD) for multipartite quantum systems, as well as of the dissipative dynamics of one-qubit (NOT and Z gates) and two-qubit (CNOT gate) logic operations in measurement-based quantum computation (MBQC), is developed.
As a result, expressions generalizing conservation relations between EF and QD could be deduced, as well as QD distribution relations for n-partite quantum systems. In addition, expanding the multipartite-systems studies, a study of the influence of the amplitude damping (AD) and phase damping (PD) channels on an MBQC in highly non-Markovian environments could also be developed. In this sense, a measure called average gate fidelity (Fm) was proposed, from which we deduce analytical expressions for the channels in question and show that Fm is identical for the X and Z gates. In addition, we conducted a study of the optimal measurement times, where we conclude that neither fast application of the projective measurements necessarily implies better results, nor slow application necessarily implies worse results. Furthermore, it was also possible to demonstrate that while for the AD channel knowledge of the dissipative map is sufficient to determine the best measurement times, the same is not necessarily true for the PD channel, where the timing of the set of measurements becomes crucial, since a phase error in one qubit can fix the phase error that takes place in another. Finally, a study was carried out on ... (Complete abstract: click electronic access below) / Doctorate
10

Stochastic harmonic emission model of aggregate residential customers

Blanco Castaneda, Ana Maria 25 August 2017 (has links)
Harmonic propagation studies of public distribution networks require accurate models of aggregate residential customers (groups of customers) that simulate the harmonic emission of the multitude of household appliances in the network. Most present models were developed with the component-based approach, where models of individual household appliances are combined to build the model of multiple customers. This approach requires a large amount of input data, such as models of individual household appliances and detailed information on customer behavior and device composition, which is usually not easy to acquire. However, with the increasing number of PQ-analyzers in networks, the measurement-based approach is now considered more and more often for the modeling of aggregate customers. The measurement-based approach uses measurements of the network in combination with top-down methodologies to obtain models of the aggregate customers. Compared to the component-based approach it has several advantages, such as inherent consideration of the real operating changes of the individual household appliances, variation of customer behavior, the effect of line impedances, and cancellation and attenuation effects. This thesis presents the development of a time-series stochastic model of the low-order harmonic emission of aggregate residential customers based on a top-down, measurement-based approach. The model represents the daily variation of the harmonic magnitudes and phase angles. In addition, the model includes the representation of harmonic unbalances, which is of great importance for the proper analysis of harmonic propagation in medium-voltage networks. The model is parametrized for German networks, but the methodology can be applied to derive models for other regions or countries.
