About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Formal verification of high-level synthesis with global code motions

Kim, Youngsik. January 2007
Thesis (Ph.D.)--Syracuse University, 2007. / "Publication number AAT 3281725."
12

Multiple constraint space-time direct data domain approach using nonlinear arrays.

Carlo, Jeffrey Thomas; Sarkar, Tapan; Salazar-Palma, Magdalena; Wicks, Michael C. Unknown Date
Thesis (Ph.D.)--Syracuse University, 2003. / "Publication number AAT 3114800."
13

Interdomain traffic engineering on a bandwidth broker-supported diffserv Internet.

Okumus, Ibrahim Taner; Chapin, Stephen J.; Hwang, Junseo. January 2003
Thesis (Ph.D.)--Syracuse University, 2003. / "Publication number AAT 3099527."
14

Distributed deployment algorithms for mobile wireless sensor networks

Heo, Nojeong; Varshney, Pramod K. January 2004
Thesis (Ph.D.)--Syracuse University, 2004. / Adviser: Varshney, Pramod K. "Publication number AAT 3132691."
15

Power Islands: a high-level synthesis methodology for reducing spurious switching activity and leakage

Dal, Deniz. January 2006
Thesis (Ph.D.)--Syracuse University, 2006. / "Publication number AAT 3251765."
16

Grid-based collaboration

Wang, Minjun. January 2006
Thesis (Ph.D.)--Syracuse University, 2006. / "Publication number AAT 3242510."
17

Wavelet Transform and Ensemble Logistic Regression for Driver Drowsiness Detection

Kannanthanathu, Amal Francis. 29 December 2017
Drowsy driving has become a serious concern over the last few decades. The rise in the number of automobiles, together with the stress and fatigue induced by lifestyle factors, has been a major contributor to this problem. Accidents due to drowsy driving have caused innumerable deaths and losses to the state. Therefore, detecting drowsiness accurately, and quickly enough to intervene before it impairs the driver, has become a major challenge. Previous researchers have found the electrocardiogram (ECG/EKG) to be an important signal for detecting drowsiness. Incorporating machine learning (ML) algorithms such as Logistic Regression (LR) can help detect drowsiness accurately to some extent. LR accuracy can be increased with a larger data set and more features for a robust machine learning model. However, a larger dataset and more features increase detection time, which can be fatal if the driver is drowsy. Reducing the dataset size for faster detection causes overfitting, in which the model performs better on training data than on test data.

In this thesis, we increased accuracy, reduced detection time, and addressed overfitting using a machine learning model based on Ensemble Logistic Regression (ELR). The ECG signal, after filtering, was first converted from the time domain to the frequency domain using the Wavelet Transform (WT) instead of the traditional Short-Time Fourier Transform (STFT). Frequency features were then extracted, and an ensemble-based logistic regression model was trained to detect drowsiness. The model was tested on twenty-five male and female subjects aged 20 to 60 years, and the results were compared with traditional methods for accuracy and detection time.

The model outputs the probability of drowsiness. Its accuracy is between 90% and 95% within a detection time of 20 to 30 seconds. A successful implementation of the above system could significantly reduce road accidents due to drowsy driving.
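A minimal sketch of the pipeline this abstract describes, in Python, assuming PyWavelets and scikit-learn are available; the wavelet choice, sub-band energy features, window length, and bagging-based ensemble are illustrative assumptions, not the thesis's exact design:

    import numpy as np
    import pywt
    from sklearn.ensemble import BaggingClassifier
    from sklearn.linear_model import LogisticRegression

    def wavelet_features(ecg_window, wavelet="db4", level=5):
        # Multi-level wavelet decomposition of one filtered ECG window;
        # relative sub-band energies serve as frequency-domain features.
        coeffs = pywt.wavedec(ecg_window, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    def build_model(n_estimators=25):
        # Bagged logistic regressions: one common way to realize
        # "ensemble logistic regression" that also tempers overfitting.
        return BaggingClassifier(
            estimator=LogisticRegression(max_iter=1000),
            n_estimators=n_estimators,
        )

    # Hypothetical usage: each row of X is one 20-30 s ECG window,
    # y holds 0/1 (alert/drowsy) labels from annotated recordings.
    # X = np.vstack([wavelet_features(w) for w in windows])
    # model = build_model().fit(X, y)
    # p_drowsy = model.predict_proba(X_new)[:, 1]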
18

Dependability analysis of fault-tolerant multiprocessor architectures through simulated fault injection

Clark, Jeffrey Alan. 01 January 1993
This dissertation develops a new approach for evaluating the dependability of fault-tolerant computer systems. Dependability has traditionally been evaluated through combinatorial and Markov modeling. These analytical techniques have several limitations which can restrict their applicability. Simulation avoids many of these limitations, allowing for more precise representation of system attributes than is feasible with analytical modeling. However, the computational demands of simulating a system in detail, at a low abstraction level, currently prohibit evaluation of high-level dependability metrics such as reliability and availability. The new approach abstracts a system at the architectural level and employs life testing through simulated fault injection to accurately and efficiently measure dependability. The simulation models needed to implement this approach have been derived and integrated into a generalized software testbed called the REliable Architecture Characterization Tool (REACT). The effectiveness of REACT is demonstrated through the analysis of several alternative fault-tolerant multiprocessor architectures. Specifically, two dependability tradeoffs associated with triple-modular redundant (TMR) systems are investigated. The first explores the reliability-performance tradeoff made by voting unidirectionally, instead of bidirectionally, on either memory read or write accesses. The second examines the reliability-cost tradeoff made by duplicating, rather than triplicating, memory modules and comparing their outputs via error detecting codes. Both studies show that in many cases, acceptably little reliability is sacrificed for potentially large performance increases or cost reductions, in comparison to the original TMR system design.
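As a rough illustration of life testing through simulated fault injection (a sketch of the general idea, not REACT itself), the following Python snippet estimates TMR reliability by injecting exponentially distributed module failures; the failure rate and the majority-vote failure criterion are assumptions made for illustration:

    import random

    def tmr_lifetime(rate=1e-4, rng=random):
        # Inject exponentially distributed failure times into the three
        # redundant modules; the TMR system loses its voting majority,
        # and thus fails, at the second module failure.
        failures = sorted(rng.expovariate(rate) for _ in range(3))
        return failures[1]

    def reliability(mission_time, trials=100_000):
        # Life testing: the fraction of simulated system lives that
        # exceed the mission time estimates R(t).
        survived = sum(tmr_lifetime() > mission_time for _ in range(trials))
        return survived / trials

    print(reliability(mission_time=5_000))  # R(5000 h) at rate 1e-4/h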
19

A hierarchical directory scheme for large-scale cache coherent multiprocessors

Maa, Yeong-Chang. 01 January 1994
The cache coherence problem is a major concern in the design of shared-memory multiprocessors. As the number of processors scales to higher orders of magnitude, traditional bus-based snoopy cache coherence schemes, which rely on broadcast as the notification medium, are no longer adequate. Instead, the directory-based scheme is a promising approach to the large-scale cache coherence problem. However, the storage overhead of a (full-map) directory scheme becomes prohibitive as the system size increases. This dissertation champions the use of a hierarchical full-map directory (HFMD) to reduce the storage requirement while still achieving satisfactory performance. The key idea is to exploit the locality of shared-data accesses in parallel programs. The organization and protocol for the HFMD scheme are defined and verified. A storage requirement comparison and trace-driven simulation are performed to evaluate the effectiveness of HFMD against other directory schemes. The results are quite encouraging: while reducing the storage overhead to less than 10% of that required by the full-map directory, the HFMD scheme performs competitively with the full-map directory scheme. The proposed hierarchical full-map directory scheme is thus a promising hardware approach for handling cache coherence in the design of future large-scale multiprocessor memory systems. Finally, possible extensions to enhance the performance of HFMD are discussed.
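The storage idea behind a hierarchical full-map directory can be sketched as a tree in which each node keeps presence bits only for its children; this Python sketch is a hypothetical illustration of that organization, not the thesis's actual protocol:

    class DirNode:
        # One directory node in a tree of fan-out k: presence bits are
        # kept per child cluster, so per-block storage grows with the
        # fan-out and tree depth rather than with the processor count.
        def __init__(self, fanout):
            self.presence = [False] * fanout
            self.children = [None] * fanout  # sub-directories, if any

        def record_sharer(self, path):
            # Mark the path from this node to a sharing cache, e.g.
            # [2, 0, 3] = child 2, then its child 0, then cache 3 there.
            node = self
            for hop in path[:-1]:
                node.presence[hop] = True
                node = node.children[hop]
            node.presence[path[-1]] = True

        def invalidate(self):
            # Send invalidations only down subtrees whose bit is set,
            # exploiting access locality instead of broadcasting to
            # every cache in the system.
            for i, present in enumerate(self.presence):
                if present and self.children[i] is not None:
                    self.children[i].invalidate()
                self.presence[i] = False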
20

Data memory subsystem resilient to process variations

Ben Naser, Mahmoud. 01 January 2008
As technology scales, more sophisticated fabrication processes cause variations in many different device parameters. These variations can severely affect the performance and power consumption of processors by making circuit latency less predictable, thus requiring conservative design approaches and/or performance-enhancing techniques that often affect power consumption. In this dissertation, we introduce and study, step by step, a 16KB cache subsystem in 32-nm CMOS technology at both the circuit and architecture levels, aiming for a single-cycle, process-variation-resilient subsystem design in a 1 GHz processor that is both high performance and power efficient. We use expected-case simulations in addition to worst-case circuit analysis to establish the overall delay and power consumption due to process variations under both typical and worst-case conditions. The distribution of the cache critical-path delay and power consumption in the typical scenario was determined by performing Monte Carlo simulations at different supply voltages, threshold voltages, and transistor lengths on the complete cache design. In addition to establishing the delay and power variations, we introduce an adaptive variable-cycle-latency cache architecture that mitigates the impact of process variations on access latency by closely following the typical latency behavior rather than assuming a conservative worst-case design point, while allowing tradeoffs between power and performance to be controlled. We show that the proposed adaptive cache is transparent to other processor subsystems and has negligible power and area overhead compared to a conventional design. We also establish the overall leakage power due to process variations. The distribution of the cache leakage power was determined before and after incorporating state-of-the-art leakage optimizations. Simulation results show that our adaptive data cache is process-variation-resilient and achieves an average 10% performance improvement on SPEC2000 applications in a superscalar processor, together with a 6X reduction in mean leakage power compared with a conservative design. Additional performance improvement is possible in processors in which the data cache access is on the critical path, by allowing a more aggressive clock rate in the processor.
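The Monte Carlo methodology the abstract mentions can be illustrated with a short Python sketch that samples supply voltage, threshold voltage, and transistor length per trial and evaluates a simple alpha-power-law delay model; all distributions and constants here are stand-in assumptions, not the thesis's circuit-level data:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Per-trial parameters: (nominal value, assumed relative sigma).
    vdd = rng.normal(0.9, 0.9 * 0.03, n)        # supply voltage (V)
    vth = rng.normal(0.3, 0.3 * 0.05, n)        # threshold voltage (V)
    leff = rng.normal(32e-9, 32e-9 * 0.04, n)   # transistor length (m)

    # Alpha-power-law gate delay: d = k * L * Vdd / (Vdd - Vth)**alpha.
    alpha, k = 1.3, 1e7
    delay = k * leff * vdd / (vdd - vth) ** alpha

    # An adaptive variable-cycle-latency cache targets the typical point
    # and adds a cycle only for the slow tail, instead of clocking every
    # access at the worst-case corner.
    median = np.percentile(delay, 50)
    tail = np.percentile(delay, 99.9)
    print(f"median {median:.3e} s, 99.9th pct {tail:.3e} s, "
          f"ratio {tail / median:.2f}")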
