1

Clock Jitter in Communication Systems

Martwick, Andrew Wayne 21 May 2018 (has links)
For reliable digital communication between devices, the sources that contribute to data-sampling errors must be properly modeled and understood. Clock jitter is one such error source, occurring during data transfer between integrated circuits. Like electrical noise, clock jitter is a noise source in a communication link, but it is a time-domain noise variable that affects many different parts of the sampling process. This dissertation models the effect of clock jitter on sampling in communication systems with the degree of accuracy needed for modern high-speed data communication. The models developed here have been used to set the clocking specifications and silicon budgets for industry standards such as the PCI Express, USB 3.0, GDDR5 memory, and HBM memory interfaces.
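As a rough illustration of how jitter enters the sampling process (a sketch, not the dissertation's models), the following Monte Carlo example samples an ideal sine wave at jittered instants and compares the resulting SNR with the well-known closed-form limit SNR = -20·log10(2π·f_in·σ_jitter). The tone frequency, sample rate, and RMS jitter values are illustrative assumptions.

```python
# Minimal sketch: SNR penalty from Gaussian sampling-clock jitter,
# compared against the classic bound SNR = -20*log10(2*pi*f_in*sigma_j).
import numpy as np

rng = np.random.default_rng(0)

f_in = 1e9          # 1 GHz input tone (assumed for illustration)
f_s = 8e9           # 8 GS/s sampling rate (assumed)
sigma_j = 1e-12     # 1 ps RMS random jitter (assumed)
n = 1 << 16         # number of samples

t_ideal = np.arange(n) / f_s
t_jittered = t_ideal + rng.normal(0.0, sigma_j, n)

ideal = np.sin(2 * np.pi * f_in * t_ideal)      # samples taken at ideal instants
sampled = np.sin(2 * np.pi * f_in * t_jittered) # samples taken at jittered instants

error = sampled - ideal
snr_sim = 10 * np.log10(np.mean(ideal**2) / np.mean(error**2))
snr_theory = -20 * np.log10(2 * np.pi * f_in * sigma_j)

print(f"simulated SNR : {snr_sim:6.2f} dB")
print(f"theoretical   : {snr_theory:6.2f} dB")
```

With the values above, both estimates land near 44 dB, showing how a fixed RMS jitter budget caps the achievable signal-to-noise ratio as the input frequency rises.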
2

On Efficiency and Accuracy of Data Flow Tracking Systems

Jee, Kangkook January 2015 (has links)
Data Flow Tracking (DFT) is a technique broadly used in a variety of security applications such as attack detection, privacy leak detection, and policy enforcement. Although effective, DFT inherits the high overhead common to in-line monitors, which hinders its adoption in production systems. Typically, the runtime overhead of DFT systems ranges from 3× to 100× when applied to pure binaries, and from 1.5× to 3× when inserted during compilation. Many performance optimizations have been introduced to mitigate this problem by relaxing propagation policies under certain conditions, but these typically introduce inaccurate taint tracking that leads to over-tainting or under-tainting. Despite acknowledging these performance/accuracy trade-offs, the DFT literature consistently fails to provide insights about their implications. A core reason, we believe, is the lack of established methodologies for understanding accuracy.

In this dissertation, we attempt to address both efficiency and accuracy issues. To this end, we begin with libdft, a DFT framework for COTS binaries running atop commodity OSes, and we then introduce two major optimization approaches based on statically and dynamically analyzing program binaries. The first approach extracts the DFT tracking logic and abstracts it using TFA. We then apply classic compiler optimizations to eliminate redundant tracking logic and minimize interference with the target program; as a result, the optimization achieves a 2× speed-up over the baseline performance measured for libdft. The second approach decouples the tracking logic from execution and runs them in parallel, leveraging modern multi-core innovations. Applied again to libdft, it can run up to four times as fast while consuming fewer CPU cycles.

We then present a generic methodology and tool for measuring the accuracy of arbitrary DFT systems in the context of real applications. With TaintMark, a prototype implementation for the Android framework, we discovered that TaintDroid's various performance optimizations lead to serious accuracy issues, and that certain optimizations should be removed to vastly improve accuracy at little performance cost. The TaintMark approach is inspired by blackbox differential-testing principles for detecting inaccuracies in DFTs, but it also addresses numerous practical challenges that arise when applying those principles to real, complex applications. We introduce the TaintMark methodology by using it to understand taint-tracking accuracy trade-offs in TaintDroid, a well-known DFT system for Android.

While the aforementioned works focus on the efficiency and accuracy of DFT systems that dynamically track data flow, we also explore another design choice that statically tracks information flow by analyzing and instrumenting the application source code. We apply this approach to the different problem of integer error detection in order to reduce the number of false alarms.
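As a toy illustration of the taint-propagation policy that DFT systems enforce (in Python rather than at the binary level; the class and method names are hypothetical and are not part of libdft's or TaintDroid's APIs), the sketch below tags data at a source, propagates labels through derived values, and checks them at a sink. It also hints at why a coarse union policy can over-taint.

```python
# Toy sketch of dynamic data-flow (taint) tracking with a shadow map.
class TaintTracker:
    def __init__(self):
        self.shadow = {}            # variable name -> set of taint labels

    def source(self, var, label):
        """Mark data entering from a taint source (e.g. network input)."""
        self.shadow[var] = {label}

    def propagate(self, dst, *srcs):
        """dst = f(srcs): the destination inherits the union of source labels."""
        self.shadow[dst] = set().union(*(self.shadow.get(s, set()) for s in srcs))

    def check_sink(self, var):
        """Return labels reaching a sensitive sink (e.g. a system call)."""
        return self.shadow.get(var, set())

t = TaintTracker()
t.source("request", "NETWORK")
t.propagate("length", "request")          # value derived from tainted input
t.propagate("buffer", "length", "size")   # union policy: any tainted operand taints dst
print(t.check_sink("buffer"))             # {'NETWORK'} -> would trigger a policy check
```

Relaxing the `propagate` rule (for instance, skipping it for certain instruction classes) is exactly the kind of optimization that trades speed for the over- or under-tainting discussed above.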
3

Laser as a Tool to Study Radiation Effects in CMOS

Ajdari, Bahar 01 August 2017 (has links)
Energetic particles from cosmic-ray or terrestrial sources can strike sensitive areas of CMOS devices and cause soft errors. Understanding the effects of such interactions is crucial as device technology advances and chip reliability becomes more important than ever. Particle-accelerator testing has been the standard method for characterizing the sensitivity of chips to single-event upsets (SEUs). However, because of its cost and limited availability, other techniques have been explored. The pulsed laser has been a successful tool for characterizing SEU behavior, but to this day laser testing has not been recognized as a method comparable to beam testing. In this thesis, I propose a methodology for correlating laser soft error rate (SER) with data gathered from particle beams. Additionally, results are presented showing a temperature dependence of SER and the "neighbor effect" phenomenon, in which the close proximity of devices produces a "weakening effect" in the ON state.
For context, the sketch below shows the standard cross-section and SER arithmetic commonly used when comparing SEU test campaigns: a per-bit upset cross-section derived from observed upsets and fluence, then scaled by an environmental particle flux. It is not the correlation methodology proposed in the thesis, and all numbers are illustrative assumptions.

```python
# Minimal sketch of per-bit SEU cross-section and SER estimation.
def cross_section_cm2_per_bit(upsets, fluence_cm2, n_bits):
    """sigma = upsets / (fluence * bits)."""
    return upsets / (fluence_cm2 * n_bits)

def ser_fit(sigma_cm2_per_bit, flux_cm2_per_hr, n_bits):
    """Failures-in-time (FIT): expected errors per 1e9 device-hours."""
    return sigma_cm2_per_bit * flux_cm2_per_hr * n_bits * 1e9

N_BITS = 16 * 1024 * 1024                       # 16 Mbit test array (assumed)
sigma = cross_section_cm2_per_bit(upsets=100,   # upsets observed (assumed)
                                  fluence_cm2=1e9,
                                  n_bits=N_BITS)
print(f"cross-section: {sigma:.3e} cm^2/bit")
print(f"SER estimate : {ser_fit(sigma, flux_cm2_per_hr=13, n_bits=N_BITS):.1f} FIT")
```
4

Enhancing Value-Based Healthcare with Reconstructability Analysis: Predicting Risk for Hip and Knee Replacements

Froemke, Cecily Corrine 08 August 2017 (has links)
Legislative reforms aimed at slowing the growth of US healthcare costs are focused on achieving greater value, defined specifically as health outcomes achieved per dollar spent. To increase value while payments are diminishing and tied to individual outcomes, healthcare must improve at predicting risks and outcomes. One way to improve predictions is through better modeling methods. Current models are predominantly based on logistic regression (LR). This project applied Reconstructability Analysis (RA) to data on hip and knee replacement surgery and considered whether RA could create useful models of outcomes, and whether these models could produce predictions complementary to, or even stronger than, LR models.

RA is a data-mining method that searches for relations in data, especially non-linear and higher-ordinality relations. It decomposes the frequency distribution of the data into projections, several of which taken together define a model, which is then assessed for statistical significance. The predictive power of the model is expressed as the percent reduction of uncertainty (Shannon entropy) of the dependent variable (the DV) gained by knowing the values of the predictive independent variables (the IVs).

Results showed that LR and RA gave the same results for equivalent models, and that exploratory RA provided better models than LR. Sixteen RA predictive models were then generated across the four DVs: complications, skilled nursing discharge, readmissions, and total cost. While the first three DVs are nominal, RA generated continuous predictions for cost by calculating expected values. Models included novel comorbidity variables and non-hypothesized interaction terms, and often resulted in substantial reductions in uncertainty. Predictive variables consisted of both delivery-system variables and binary patient comorbidity variables. Complications were predicted by the total number of patient comorbidities. Skilled nursing discharges were predicted both by patient-related factors and by delivery-system variables (location, surgeon volume), suggesting that practice patterns influence utilization of skilled nursing facilities. Readmissions were not well predicted, suggesting that the data used in this project lack the right variables or that readmissions are simply unpredictable. Delivery-system variables (surgeon, location, and surgeon volume) were found to be the predominant predictors of total cost.

Risk ratios were generated as an additional measure of effect size and were used to classify the IV states of the models as indicating higher or lower risk of adverse outcomes. Some IV states showed nearly 25% of patients at increased risk, while others showed over 75% of patients at decreased risk. In real time, such risk predictions could support clinical decision making and custom-tailored utilization of services.

Future research might address the limitations of this project's data and employ additional RA techniques and training-test splits. Implementation of predictive models is also discussed, with considerations for data supply lines, maintenance of models, organizational buy-in, and the acceptance of model output by clinical teams for use in real-time clinical practice. If outcomes and risk are adequately predicted, areas for potential improvement become clearer, and focused changes can be made to drive improvements in patient care.
Better predictions, such as those resulting from the RA methodology, can thus support improvement in value: better outcomes at lower cost. As reimbursement increasingly evolves into value-based programs, understanding the outcomes achieved, and customizing patient care to reduce unnecessary costs while improving outcomes, will be an active area for clinicians, healthcare administrators, researchers, and data scientists for many years to come.
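A minimal sketch of the model-quality measure described in the abstract: the percent reduction in the Shannon uncertainty of the DV obtained by conditioning on the IVs. The tiny synthetic dataset and variable names are illustrative assumptions, not the project's data.

```python
# Percent reduction of uncertainty: (H(DV) - H(DV | IVs)) / H(DV).
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy, in bits, of a list of outcome labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def conditional_entropy(iv_rows, dv):
    """H(DV | IVs): frequency-weighted average of per-IV-state entropies."""
    groups = {}
    for key, y in zip(iv_rows, dv):
        groups.setdefault(key, []).append(y)
    n = len(dv)
    return sum(len(ys) / n * entropy(ys) for ys in groups.values())

# Hypothetical rows: (comorbidity_count, surgeon_volume_bucket) -> complication (0/1)
ivs = [(0, "hi"), (0, "hi"), (1, "hi"), (2, "lo"), (3, "lo"), (3, "lo"), (1, "lo"), (1, "lo")]
dv  = [0, 0, 0, 1, 1, 0, 0, 1]

h_dv = entropy(dv)
h_dv_given_ivs = conditional_entropy(ivs, dv)
print(f"H(DV)       = {h_dv:.3f} bits")
print(f"H(DV | IVs) = {h_dv_given_ivs:.3f} bits")
print(f"% reduction = {100 * (h_dv - h_dv_given_ivs) / h_dv:.1f}%")
```

On this toy table the IVs remove roughly half of the uncertainty in the DV; RA's exploratory search is, in effect, looking for the set of projections that maximizes this reduction while remaining statistically significant.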
