101 |
Adaptive TDC : Implementation and Evaluation of an FPGA
Andersson Holmström, Simon January 2015 (has links)
A time-to-digital converter (TDC) is a digital unit that measures the time interval between two events. This is useful for determining the characteristics and patterns of a signal or an event. In this thesis a hybrid TDC is presented, consisting of a tapped delay line combined with a clock counter principle. The TDC is used to measure the time between received data in a QKD application; if the measured time does not exceed a certain value, the data has been sent without any interception. It is also possible to use TDCs in other fields, such as laser ranging and time-of-flight applications. The TDC consists of two carry chains, an encoder, a FIFO and a counter for each channel, an AXI module, and a control unit that generates command signals to all implemented channels. The time is measured by sampling the signal that has propagated through the carry chain and encoding the propagation length from this sample. In this thesis a TDC is implemented that has a 10 ns dead time and a resolution below 28 ps in a four-channel mode. The propagation variation is approximately two percent of the total value during testing. For the implementation, an FPGA board with a Zynq XC7Z020 SoC is used, programmed in SystemVerilog, a hardware description language (HDL).
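To make the hybrid principle concrete, the following is a minimal sketch (in Python rather than the thesis's SystemVerilog) of how such a TDC reconstructs a timestamp from a coarse counter and a fine carry-chain sample; the clock period, tap delay and function names are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of a hybrid TDC: coarse clock counter plus fine
# carry-chain interpolation. Values below are illustrative assumptions.

CLK_PERIOD_PS = 10_000   # assumed 100 MHz sampling clock -> 10 ns coarse step
TAP_DELAY_PS = 14        # assumed mean carry-chain tap delay (resolution < 28 ps)

def encode_thermometer(sample_bits):
    """Encode the sampled carry chain: find the 1 -> 0 transition index."""
    for i in range(len(sample_bits) - 1):
        if sample_bits[i] == 1 and sample_bits[i + 1] == 0:
            return i + 1
    return 0  # no edge captured in this sample

def timestamp_ps(counter_value, sample_bits):
    """Coarse counter time refined by the fine delay within one clock cycle."""
    fine_ps = encode_thermometer(sample_bits) * TAP_DELAY_PS
    return counter_value * CLK_PERIOD_PS - fine_ps

# The interval between two events is the difference of two such timestamps:
event_a = timestamp_ps(12, [1, 1, 1, 0, 0, 0, 0, 0])
event_b = timestamp_ps(14, [1, 1, 1, 1, 1, 1, 0, 0])
print(event_b - event_a, "ps")
```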
|
102 |
Maintaining Reductions in Challenging Behavior Following Reinforcement-Based Intervention with Schedule Thinning and Delay-to-Reinforcement
Emily V Gregori (7037888) 13 August 2019 (has links)
The purpose of this series of studies was to evaluate the effects of schedule thinning and delay-to-reinforcement following intervention for individuals diagnosed with intellectual and developmental disabilities. Study one was a systematic review of the available literature on schedule thinning, and study two evaluated the effects of a novel approach to delay-to-reinforcement following functional communication training. Both studies found that schedule thinning and delay-to-reinforcement are efficacious procedures for maintaining reductions in challenging behavior following intervention.
|
103 |
The development of a screening tool to evaluate infants who are HIV positive
Hilburn, Nicole Clare 06 April 2011 (links)
PhD, Faculty of Health Sciences, University of the Witwatersrand / HIV/AIDS continues to be one of the greatest health challenges which South Africa faces.
The epidemic in children is closely linked to that in women, the prevalence of which
continues to grow according to antenatal statistics from the South African Department of
Health (DOH). HIV is known to invade the central nervous system at the time of infection,
and causes widespread damage. In children, this leads to a well-researched condition
known as HIV encephalopathy, which affects all areas of neurodevelopment. The effects
of timely initiation of antiretroviral therapy on alleviating the impact of encephalopathy
have been well described.
Neurodevelopmental delay is a stage four disease indicator according to the World Health
Organisation (WHO), and therefore is a criterion for initiation of Highly Active Antiretroviral
Therapy (HAART). HAART is often only administered according to the virologic and
immunologic status of a child, as standardised neurodevelopmental assessment tools are
not widely available in South African clinics. When HAART initiation is dependent on
immunologic status, it is often too late to prevent encephalopathy. To date, the only means
of prevention of this condition is early initiation of HAART, which has not been widely
available in South Africa. Stringent guidelines for the commencement of this therapy
according to the WHO, and the South African Department of Health (DOH) have had to
be followed, leading to late initiation of HAART, and widespread central nervous system
encephalopathy. Studies which have been carried out in South African clinics have
demonstrated the high prevalence of this condition. Once there is evidence of
encephalopathy, children should be referred for assessments in all facets of development,
and where necessary, for rehabilitation. A standardised developmental screening tool
which is suitable for use in a developing country is therefore necessary in order to screen
for neurodevelopmental delays to allow for further assessment and referral to
rehabilitation services, as well as providing an additional assessment criterion for initiation
of HAART.
Paediatric HIV clinics in developing countries are understaffed, and children may be seen
by junior staff or screened by nurses due to the high numbers of clinic attendees. This
often results in neurodevelopment being inadequately assessed and children are
therefore not referred for intervention services. A standardised screening tool, which
could be administered by clinic staff in order to ensure correct and timely referral of
children for further assessment and intervention is therefore necessary. This is of
importance both locally and internationally where a screening tool, which has been
developed specifically for this purpose, does not exist.
The aim of this study was therefore to evaluate the agreement between the Bayley-III
Screening Test and the Bayley Scales of Infant Development (3rd version) in a population
of HIV positive infants in order to evaluate its appropriateness for use in South Africa. The
Bayley Scales of Infant Development have long been considered the ‘gold standard’ in
infant developmental assessment, which is why this tool was chosen as the reference
against which to evaluate the Bayley-III Screening Test. The developmental scores in each facet (cognitive,
motor or language) were evaluated to determine which should be included in an
assessment tool for this population. Further objectives for the study were to adapt the
screening tool to the needs of the population, or to develop a new screening tool should
the Bayley-III Screening Test not prove suitable for use in this population.
In order to meet the aims and objectives, a cross-sectional study was conducted where
112 HIV positive infants between the ages of six and eighteen months were assessed
using the Bayley-III Screening Test and the Bayley Scales of Infant Development (3rd
version) (BSID III). The infants were stratified into four age groups namely 6-8 months, 9-
12 months, 13-16 months, and 17-18 months. Children were recruited from Harriet Shezi
Children’s Clinic at Chris Hani Baragwanath Hospital in Soweto.
The agreement between the Bayley-III Screening Test and the Bayley Scales of Infant
Development (3rd version) was analysed using the kappa statistic, for the overall group
and for each age group. Overall agreement between the tools was as follows: κ = 0.58 for
the Cognitive facet, κ = 0.82 for the Expressive Communication facet, κ = 0.76 for the
Receptive Communication facet, κ = 0.44 for the Fine Motor facet and κ = 0.57 for the
Gross Motor facet. These values indicate that the Bayley-III Screening Test is not
acceptable for clinical use, as excellent agreement (κ ≥ 0.75) in all facets would be
necessary for this purpose.
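For reference, the kappa statistic used throughout this abstract can be computed from an agreement table as in the generic sketch below; the counts are made up for illustration and are not the study's data.

```python
# Generic sketch of Cohen's kappa from a square agreement table.

def cohens_kappa(table):
    """Cohen's kappa for an agreement table (rows: screener, cols: reference)."""
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    p_expected = sum(
        sum(table[i]) * sum(row[i] for row in table) / n ** 2
        for i in range(len(table))
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical 2x2 table: at-risk vs not-at-risk on both tools.
example = [[20, 4],
           [5, 83]]
print(round(cohens_kappa(example), 2))  # ~0.76 with these made-up counts
```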
A new screening tool therefore had to be developed. The infants’ developmental scores
from the BSID III were analysed to determine which facets of development were most
severely affected, and therefore which facets should be included in a new screening tool.
Gross motor function was demonstrated to be the area which was most severely affected,
followed by cognitive function. A gross motor screening tool would therefore be suitable
for use in this population, as no equipment would be necessary. Gross motor
development is the most universally similar aspect of development, as it is not
completely dependent on the cultural or socioeconomic factors that often influence
language and cognitive development.
Item selection from the BSID III was undertaken to determine which items should be
included in a brief screening tool. In each of the four age groups, item selection occurred
as follows: two items which discriminated the At-Risk group from the Emerging and
Competent groups (less than 20% in the At-Risk group, and 100% in the other groups)
were selected. Two items which discriminated between children in the Emerging and
Competent categories on the BSID III were selected (0-5% of children in the At-Risk
group obtained credit, 30-50% of the Emerging group obtained credit, and 100% of the
Competent group obtained credit). Lastly, two items were selected which discriminated
the Competent group from the other two groups (100%, or as high as possible, in the
Competent group, and 0% in the other groups).
The new gross motor screening tool was assembled using the selected items, scoring
was allocated, and it was tested against the scores obtained on the Gross Motor facet of
the BSID III for the initial 112 infants. Agreement between the tools was analysed using
the kappa statistic, and refinements were made according to the discrepancies. This was done three
times, until the kappa value revealed excellent agreement between the tools (κ = 0.87). A
panel of experts was then invited to examine the new gross motor screening tool, and to
comment on it, and further adjustments were made accordingly.
Preliminary concurrent validity testing of the new gross motor screening tool was then
carried out against the Gross Motor facet of the BSID III on 60 children, who were
recruited from the Harriet Shezi Children’s Clinic at Chris Hani Baragwanath Hospital in
Soweto. Statistical analysis revealed that the agreement between the BSID III and the
new screening tool was excellent (κ = 0.85). The diagnostic properties of the new gross
motor screening tool were as follows: sensitivity 97.4%, specificity 85.7%, positive
predictive value 92.7%, and negative predictive value 94.7%. These values indicate that the statistical properties of the tool are excellent, and the tool will not be predisposed to
under-referrals or over-referrals. Preliminary reliability testing was carried out on 15
children for test-retest/intrarater reliability and 15 children for interrater reliability.
Interrater, test-retest and intrarater reliability were excellent (r = 1, r = 0.98 and r = 0.98,
respectively). Further testing of reliability and validity should be undertaken in order to
establish these properties, and standardisation should also be carried out on healthy
children. Given the need for an assessment tool of this nature in South Africa and other
developing countries, and the statistical properties thus far, the tool may be used clinically
for the purposes for which it was developed.
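The diagnostic properties reported above follow from the standard 2x2 screening formulas, sketched below. The counts are hypothetical values chosen to reproduce the reported percentages for a 60-child sample; they are not the study's raw data.

```python
# Standard screening-tool properties from a 2x2 table versus the reference test.

def diagnostics(tp, fp, fn, tn):
    """Screening-tool properties versus the reference test (here, the BSID III)."""
    return {
        "sensitivity": tp / (tp + fn),  # delayed children correctly flagged
        "specificity": tn / (tn + fp),  # typical children correctly passed
        "ppv": tp / (tp + fp),          # flagged children who are truly delayed
        "npv": tn / (tn + fn),          # passed children who are truly typical
    }

# Hypothetical counts (tp + fp + fn + tn = 60), not the study's raw data:
print(diagnostics(tp=38, fp=3, fn=1, tn=18))
# -> sensitivity 0.974, specificity 0.857, ppv 0.927, npv 0.947
```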
|
104 |
Distractibility, Impulsivity, and Activation of Top-down Control Resources
Skogsholm, Lauren January 2011 (has links)
Thesis advisor: Elizabeth Kensinger / Thesis advisor: Katherine Mickley Steinmetz / Distractibility and impulsivity have long been thought of as two separate psychological processes; however, there is currently evidence that suggests otherwise. The aim of this study was to gain a better understanding, at the behavioral level, of the interaction between these two traits. I proposed a model in which some individuals have a higher than average threshold for activation of the top-down cognitive control resources that are important for directing and maintaining attention as well as for regulating impulsive behaviors. To test the strength of this model I used an experimental paradigm that combined two different types of tasks: a spatial working memory task and a delay-discounting task with a primary reward (juice). Participants were administered the Conners’ Adult ADHD Rating Scale in order to be classified in terms of their trait distractibility and trait impulsivity subscale scores. The results suggest that there is indeed an association between the traits of distractibility and impulsivity, and that they may be linked by a common mechanism involving a variable threshold of activation of top-down control resources to regulate these behaviors. / Thesis (BS) — Boston College, 2011. / Submitted to: Boston College. College of Arts and Sciences. / Discipline: College Honors Program. / Discipline: Psychology.
|
105 |
Predicting abnormal combustion phenomena in highly boosted spark ignition engines
Giles, Karl January 2018 (links)
As powertrains and IC engines continue to grow in complexity, many vehicle manufacturers (OEMs) are turning to simulation in an effort to reduce design validation and calibration costs. Ultimately, their aim is to complete this process entirely within the virtual domain, without the need for any physical testing. Practical simulation techniques for the prediction of knock in spark ignition (SI) engines rely on empirical ignition delay correlations (IDCs). These IDCs are used to approximate the complex ignition delay characteristics of real and surrogate fuel compositions with respect to temperature, pressure and mixture composition. Over the last 40 years, a large number of IDCs have been put forward in the literature, spanning a broad range of fuels, operating conditions and calibration methods. However, the applicability of these tools has yet to be verified at the high brake mean effective pressure (BMEP) operating conditions relevant to highly boosted, downsized engines. Here, the applicability of 16 gasoline-relevant IDCs for predicting knock onset at high loads (BMEP > 30 bar) has been investigated by comparing the knock predictions from each IDC against experimentally measured knock onset times. Firstly, a detailed investigation into cylinder pressure data processing techniques was performed to determine which knock detection and angle of knock onset (aKO) measurement methods were most appropriate at high loads. A method based on the maximum amplitude pressure oscillation (MAPO) during knock-free operation best estimated cycle classifications, whilst Shahlari’s Signal Energy Ratio technique [1] most accurately predicted knock onset. To the author’s knowledge, this is the first time that such a comprehensive study on the accuracy of these techniques at such high loads has been conducted. Importantly, these findings represent a valuable framework to inform other researchers in the field of knocking combustion on which techniques are needed to extract accurate and relevant information from measured cylinder pressure records. Secondly, the data processing techniques derived were applied to experimental data collected across a wide range of high BMEP operating conditions (up to a maximum of 32 bar) using a 1.6 litre, 4-cylinder SI engine. Trapped charge composition and temperature were predicted using a calibrated 1D model of the engine, whilst the temperature of a hypothetical hotspot in the unburned zone was estimated separately by assuming adiabatic compression from a point after intake valve closing and by mapping γ (the ratio of specific heat capacities) as a function of temperature. This revealed that none of the IDCs tested performed well at conditions relevant to modern, downsized engines. The IDC that achieved the best overall balance between aKO accuracy and cycle-classification agreement was the “cool-flame” correlation for iso-octane proposed by Ma [2]. However, this had an unacceptably high average aKO error of ±3.5° compared to the ±2°CA limit observed within the literature, and its average cycle-classification accuracy was below 60%. The main reason for this relatively modest accuracy was a large number of false-positive cycle classifications, which mainly occurred in slow or late burning cycles.
Further work should therefore focus on methods to reduce the number of false-positive classifications obtained with this correlation, which could be achieved using empirical correlations to describe the latest point in the cycle at which knock would be permitted to occur, in terms of other measurable combustion parameters. Overall, this research has generated a unique insight into combustion at very high loads, as well as an extensive dataset that can be used in future research to improve the accuracy of empirical knock modelling techniques. Furthermore, this work has demonstrated that, for the purposes of virtual spark timing calibration and the avoidance of knock, the current crop of practical simulation tools is not accurate enough at the conditions relevant to modern SI engines, and it has provided a better understanding of their limitations. These findings represent a major contribution to the field from both a research perspective and for industrial applications.
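As context for how an IDC is used in practice, the sketch below applies the classic Livengood-Wu knock integral with a Douaud-Eyzat-style correlation. The constants are those of the well-known 1978 correlation, given purely as an example; they are not one of the 16 correlations evaluated in this work, and the trace format is an assumption.

```python
import math

def tau_ms(p_atm, temp_k, octane=95.0):
    """Douaud-Eyzat-style ignition delay (ms) at pressure p_atm and temp_k."""
    return (17.68 * (octane / 100.0) ** 3.402
            * p_atm ** -1.7 * math.exp(3800.0 / temp_k))

def knock_onset_index(end_gas_trace, dt_ms):
    """Livengood-Wu: knock onset where the integral of dt/tau reaches 1."""
    integral = 0.0
    for i, (p_atm, temp_k) in enumerate(end_gas_trace):
        integral += dt_ms / tau_ms(p_atm, temp_k)
        if integral >= 1.0:
            return i  # sample index (e.g. crank-angle step) of predicted onset
    return None  # no knock predicted for this cycle

# Hypothetical end-gas pressure/temperature trace sampled every 0.05 ms:
trace = [(30 + 2 * i, 750 + 15 * i) for i in range(40)]
print(knock_onset_index(trace, dt_ms=0.05))
```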
|
106 |
Digitally-controlled programmable delay line for TV signal.
January 1974 (has links)
Thesis (M.Sc.)--Chinese University of Hong Kong. / Bibliography: leaves 84-86.
|
107 |
Att leva med endometrios, en sjukdom som styr livet / To live with endometriosis, a disease that rules the life
Dahlgren, Johanna, Kiesen, Clara January 2016 (has links)
Background: Endometriosis is a disease which, though it affects one in ten women, is met with a lack of knowledge in society and among healthcare professionals. This lack of knowledge causes delays in diagnosis and a lack of understanding for the women. Aim: The aim of this study was to describe women's experiences of living with endometriosis. Method: The study used a method for contributing to evidence-based nursing grounded in the analysis of qualitative research. Through the similarities and contradictions in the analyzed studies, themes and sub-themes were created. Result: Findings showed that the women's experiences were often negative. Friends, family and healthcare professionals normalized the pain, and the women often endured the pain in the belief that it was normal. The lack of knowledge about endometriosis caused misdiagnosis and diagnostic delays. Endometriosis limited the women's daily living, work and social life by decreasing their activity because of symptoms such as pain, nausea, fatigue, heavy bleeding and diarrhea. The disease also affected the women's relationships and their feeling of being a woman. Conclusion: The lack of knowledge about endometriosis caused unnecessary suffering among the women with the disease. If awareness of endometriosis were to increase in society and among healthcare professionals, the time to diagnosis and the encounters with patients suffering from endometriosis could improve.
|
108 |
Advanced signal processing techniques for GPR by taking into account the interface roughness of a stratified medium / Techniques avancées de traitement du signal pour applications GPR en tenant compte des rugosités d’interfaces des milieux stratifiés
Sun, Meng 30 September 2016 (links)
In this thesis, we focus on the development of new GPR methods to estimate the pavement structure. This thesis has two main objectives. First, it aims to improve the understanding of the scattering mechanisms for large-band radars in a stratified medium composed of rough interfaces. With increasing frequencies, pavement interfaces can no longer be considered flat, and the interface roughness must be taken into account in the propagation modelling. Thus, the influence of the roughness has been analysed. It has been shown that the interface roughness produces a continuous frequency decay of the magnitude of the echoes. This continuous frequency decay has then been introduced into the signal model. Secondly, several signal processing methods have been proposed to jointly estimate the roughness and thickness of pavement. Multidimensional methods have been proposed by taking into account the roughness. Then, in order to reduce the computational burden, one-dimensional methods have also been proposed. From simulations, it can be seen that the proposed algorithms provide a good performance in parameter estimation (time delay, permittivity, roughness and thickness). Finally, the proposed signal processing methods are tested on experimental data. The results confirm the theoretical prediction and show the feasibility of estimating both the thickness of thin pavements and the roughness parameter.
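A minimal sketch of the kind of modified echo model described above, in which each interface echo carries a roughness-induced exponential decay in frequency on top of its delay; all parameter names and values are illustrative assumptions, not the thesis's model or data.

```python
import numpy as np

def echo_spectrum(freqs_hz, amplitudes, delays_s, decays_s):
    """Sum of delayed echoes whose magnitudes decay with frequency."""
    spectrum = np.zeros_like(freqs_hz, dtype=complex)
    for amp, tau, alpha in zip(amplitudes, delays_s, decays_s):
        spectrum += (amp * np.exp(-alpha * freqs_hz)
                     * np.exp(-2j * np.pi * freqs_hz * tau))
    return spectrum

freqs = np.linspace(0.5e9, 3.0e9, 256)           # assumed ultra-wide band
model = echo_spectrum(freqs,
                      amplitudes=[1.0, 0.6],      # two-interface thin layer
                      delays_s=[1.0e-9, 1.5e-9],  # 0.5 ns echo separation
                      decays_s=[0.2e-9, 0.3e-9])  # roughness decay factors
print(np.exp(-0.2e-9 * freqs[[0, -1]]))  # echo-1 decay factor at band edges
```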
|
109 |
A Quality of Service Monitoring System for Service Level Agreement Verification
Ta, Xiaoyuan January 2006 (links)
Master of Engineering by Research / Service-level agreement (SLA) monitoring measures network Quality-of-Service (QoS) parameters to evaluate whether the service performance complies with the SLAs. It is becoming increasingly important for both Internet service providers (ISPs) and their customers. However, the rapid expansion of the Internet makes SLA monitoring a challenging task. As an efficient method to reduce both complexity and overheads for QoS measurements, sampling techniques have been used in SLA monitoring systems. In this thesis, I conduct a comprehensive study of sampling methods for network QoS measurements. I develop an efficient sampling strategy, which makes the measurements less intrusive and more efficient, and I design network performance monitoring software, which monitors such QoS parameters as packet delay, packet loss and jitter for SLA monitoring and verification. The thesis starts with a discussion on the characteristics of QoS metrics related to the design of the monitoring system and the challenges in monitoring these metrics. Major measurement methodologies for monitoring these metrics are introduced. Existing monitoring systems can be broadly classified into two categories: active and passive measurements. The advantages and disadvantages of both methodologies are discussed, and an active measurement methodology is chosen to realise the monitoring system. Secondly, the thesis describes the most common sampling techniques, such as systematic sampling, Poisson sampling and stratified random sampling. Theoretical analysis is performed on the fundamental limits of sampling accuracy. Theoretical analysis is also conducted on the performance of the sampling techniques, which is validated using simulation with real traffic. Both theoretical analysis and simulation results show that stratified random sampling with optimum allocation achieves the best performance, compared with the other sampling methods. However, stratified sampling with optimum allocation requires extra statistics from the parent traffic traces, which cannot be obtained in real applications. In order to overcome this shortcoming, a novel adaptive stratified sampling strategy is proposed, based on stratified sampling with optimum allocation. A least-mean-square (LMS) linear prediction algorithm is employed to predict the required statistics from past observations. Simulation results show that the proposed adaptive stratified sampling method closely approaches the performance of stratified sampling with optimum allocation. Finally, a detailed introduction to the SLA monitoring software design is presented. Measurement results are presented, which calibrate the systematic error in the measurements. Measurements between various remote sites have demonstrated the impressively good QoS provided by Australian ISPs for premium services.
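As an illustration of the adaptive strategy, the sketch below combines Neyman (optimum) allocation with a one-step LMS prediction of the per-stratum standard deviations; the filter order, step size and interfaces are assumptions, not the thesis's implementation.

```python
# Sketch: optimum (Neyman) allocation driven by LMS-predicted sigmas.

def neyman_allocation(n_total, strata_sizes, strata_sigmas):
    """Allocate n_total probes across strata in proportion to N_h * sigma_h."""
    weights = [n_h * s_h for n_h, s_h in zip(strata_sizes, strata_sigmas)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

class LMSSigmaPredictor:
    """One-step LMS linear prediction of a stratum's standard deviation."""

    def __init__(self, order=4, mu=0.01):
        self.weights = [0.0] * order
        self.history = [0.0] * order
        self.mu = mu

    def step(self, observed_sigma):
        """Predict from past values, then adapt the weights to the new one."""
        predicted = sum(w * x for w, x in zip(self.weights, self.history))
        error = observed_sigma - predicted
        self.weights = [w + self.mu * error * x
                        for w, x in zip(self.weights, self.history)]
        self.history = [observed_sigma] + self.history[:-1]
        return predicted

# Per measurement block: predict each stratum's sigma, then allocate probes.
# (Fall back to the raw observation while the filter is still warming up.)
predictors = [LMSSigmaPredictor() for _ in range(3)]
sigmas = [p.step(s) or s for p, s in zip(predictors, [2.0, 5.0, 1.0])]
print(neyman_allocation(100, strata_sizes=[300, 500, 200], strata_sigmas=sigmas))
```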
|
110 |
Flight Delay-Cost Simulation Analysis and Airline Schedule Optimization
Yuan, Duojia, S3024047@student.rmit.edu.au January 2007 (links)
In order to meet the fast-growing demand, airlines have applied much more compact air-fleet operation schedules, which directly lead to airport congestion. One result is flight delay, which occurs more frequently and more severely; flight delays can also significantly damage an airline's profitability and reputation. The aim of this project is to enhance the dispatch reliability of Australian X Airline's fleet through a newly developed approach to reliability modeling, which employs computer-aided numerical simulation of the departure delay distribution and related cost to achieve flight schedule optimization. The reliability modeling approach developed in this project is based on probability distributions and Monte Carlo Simulation (MCS) techniques. Initial (type I) delay and propagated (type II) delay are adopted as the criteria for data classification and analysis. The randomness of type I delay occurrence and the internal relationship between type II delay and the changed flight schedule are considered the core factors in this new approach to reliability modeling, which, compared to conventional assessment methodologies, proves to be more accurate for departure delay and cost evaluation modeling. The Flight Delay and Cost Simulation Program (FDCSP) has been developed (Visual Basic 6.0) to perform the complicated numerical calculations over a significant number of pseudo-samples. FDCSP is also designed to provide convenience for varied applications in dispatch reliability modeling. The end-users can be airlines, airports, aviation authorities, etc. As a result, through this project, a 16.87% reduction in departure delay is estimated to be achieved by Australian X Airline. The air-fleet dispatch reliability has been enhanced to a higher level - 78.94% compared to an initial 65.25%. Thus, 13.35% of system cost can be saved. Finally, this project also sets a more practical guideline for air-fleet database management with respect to overall dispatch reliability optimization.
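A minimal sketch of the type I / type II delay logic driving such a Monte Carlo simulation: initial delays are drawn at random, while propagated delays carry over to the next leg whenever the turnaround slack cannot absorb them. The exponential delay model, probabilities and slack values are illustrative assumptions, not FDCSP's actual parameters.

```python
import random

def simulate_day(slack_minutes, mean_initial=12.0, p_initial=0.3):
    """Return the departure delay (minutes) of each leg for one aircraft-day."""
    carried = 0.0  # type II delay propagated from the previous leg
    delays = []
    for slack in slack_minutes:
        initial = (random.expovariate(1.0 / mean_initial)
                   if random.random() < p_initial else 0.0)  # type I delay
        departure_delay = carried + initial
        delays.append(departure_delay)
        carried = max(0.0, departure_delay - slack)  # slack absorbs some delay
    return delays

# Estimate dispatch reliability over many pseudo-samples (on-time: < 15 min).
days = [simulate_day([20, 15, 25, 30]) for _ in range(10_000)]
flights = [d for day in days for d in day]
print(sum(d < 15.0 for d in flights) / len(flights))
```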
|