41

Journey Mapping: A New Approach for Defining Automotive Drive Cycles

Divakarla, Kavya Prabha 06 1900 (has links)
Driving has become a routine activity for most people around the world. People are increasingly dependent on vehicles, contributing to the growth of the automotive industry, and new vehicles are released regularly to meet the high demand. With this increase in demand, the importance of vehicle testing has grown manyfold. Besides testing new vehicles to predict their performance, existing vehicles must also be tested to check their compliance with safety standards. Drive cycles, traditionally defined as velocity-over-time profiles, serve as the test beds for such evaluations. The need to redefine drive cycles is demonstrated by the large deviations between predicted and actual performance values. As such, a new approach for defining automotive drive cycles, Journey Mapping, is proposed. Journey Mapping defines a drive cycle more realistically as the journey of a particular vehicle from an origin to a destination, influenced along the way by conditions such as weather, terrain, traffic, driver behavior, and road, vehicle, and aerodynamic characteristics. This concept has been implemented in AMESim for a 2012 Ford Focus Electric. Journey Mapping predicted its energy consumption with about 5% error, whereas the error was about 13% against the US06 cycle, the most accurate of the traditional drive cycles tested for the selected scope. / Thesis / Master of Applied Science (MASc)
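A minimal sketch of how a journey-based drive cycle might be represented in code, in contrast to a flat velocity-time trace. All names, fields, and coefficients here are illustrative assumptions, not structures from the thesis or AMESim:

```python
from dataclasses import dataclass, field

@dataclass
class RoadSegment:
    """One leg of the journey, with the conditions that influence it."""
    length_km: float
    grade_pct: float          # terrain
    speed_limit_kmh: float    # road condition
    traffic_factor: float     # 0 (free flow) .. 1 (gridlock)
    headwind_ms: float        # aerodynamic condition
    ambient_temp_c: float     # weather (affects battery/HVAC load)

@dataclass
class JourneyCycle:
    """A drive cycle defined as an origin-to-destination journey."""
    origin: str
    destination: str
    driver_aggressiveness: float  # driver-behavior condition, 0..1
    segments: list[RoadSegment] = field(default_factory=list)

    def expected_speeds(self):
        """Derive a per-segment target speed from the conditions, which a
        vehicle simulator can then track instead of replaying a fixed trace."""
        return [s.speed_limit_kmh * (1.0 - 0.5 * s.traffic_factor)
                * (0.9 + 0.2 * self.driver_aggressiveness)
                for s in self.segments]
```

Keeping the influencing conditions explicit per segment is what lets the resulting speed and load profile vary with the specific vehicle and journey, which is the core of the Journey Mapping idea.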
42

Study of Dynamic Component Substitutions

Rao, Dhananjai M. 02 September 2003 (has links)
No description available.
43

NUMERICAL NEAR-STALL PERFORMANCE PREDICTION FOR A LOW SPEED SINGLE STAGE COMPRESSOR

SHUEY, MICHAEL G.E. January 2005 (has links)
No description available.
44

Improving the Efficiency of Parallel Applications on Multithreaded and Multicore Systems

Curtis-Maury, Matthew 15 April 2008 (has links)
The scalability of parallel applications executing on multithreaded and multicore multiprocessors is often quite limited due to large degrees of contention over shared resources on these systems. In fact, negative scalability frequently occurs, such that a non-negligible performance loss is observed through the use of more processors and cores. In this dissertation, we present a prediction model for identifying efficient operating points of concurrency in multithreaded scientific applications, with performance as the primary objective and power as a secondary one. We also present a runtime system that uses live analysis of hardware event rates through the prediction model to optimize applications dynamically. We discuss a dynamic, phase-aware performance prediction model (DPAPP), which combines statistical learning techniques, including multivariate linear regression and artificial neural networks, with runtime analysis of data collected from hardware event counters to locate optimal operating points of concurrency. We find that the scalability model achieves accuracy approaching 95%, sufficiently accurate to identify improved concurrency levels and thread placements from within real parallel scientific applications. Using DPAPP, we develop a prediction-driven runtime optimization scheme, called ACTOR, which throttles concurrency so that power consumption can be reduced and performance can be set at the knee of the scalability curve of each parallel execution phase in an application. ACTOR successfully identifies and exploits program phases where limited scalability results in a performance loss through the use of more processing elements, providing simultaneous reductions in execution time by 5%-18% and power consumption by 0%-11% across a variety of parallel applications and architectures. Further, we extend DPAPP and ACTOR to include support for runtime adaptation of DVFS, allowing for the synergistic exploitation of concurrency throttling and DVFS from within a single, autonomically-acting library, providing improved energy-efficiency compared to either approach in isolation. / Ph. D.
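A toy sketch of the counter-driven prediction idea described above: fit one regression per candidate thread count from hardware event rates, then throttle concurrency to the knee of the predicted scalability curve. The synthetic training data, cost model, and "knee" threshold are illustrative assumptions, not DPAPP/ACTOR's actual models:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical per-phase hardware event rates sampled at a base concurrency
# (e.g. cache misses/cycle, pipeline stalls/cycle, bus transactions/cycle).
n_phases, n_counters = 200, 3
event_rates = rng.uniform(0.0, 1.0, size=(n_phases, n_counters))

thread_counts = np.array([1, 2, 4, 8, 16])

# Synthetic "measured" speedups, degraded by contention -- a stand-in for
# the offline training runs a real deployment would use.
contention = event_rates.sum(axis=1, keepdims=True)
speedup = thread_counts / (1.0 + 0.3 * contention * np.log2(thread_counts + 1))

# One multivariate linear model per candidate thread count: predict a
# phase's speedup at that concurrency from its event rates alone.
models = [LinearRegression().fit(event_rates, speedup[:, j])
          for j in range(len(thread_counts))]

def throttled_concurrency(phase_rates, min_gain=0.10):
    """Stop adding threads once the predicted marginal gain falls below
    min_gain -- i.e. run at the knee of the scalability curve."""
    preds = np.array([m.predict(phase_rates[None, :])[0] for m in models])
    gains = np.diff(preds) / preds[:-1]
    below = np.nonzero(gains < min_gain)[0]
    knee = below[0] if below.size else len(preds) - 1
    return thread_counts[knee]

# A high-contention phase should be throttled to fewer threads.
print(throttled_concurrency(np.array([0.9, 0.8, 0.9])))
```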
45

Scheduling on Asymmetric Architectures

Blagojevic, Filip 22 July 2008 (has links)
We explore runtime mechanisms and policies for scheduling dynamic multi-grain parallelism on heterogeneous multi-core processors. Heterogeneous multi-core processors integrate conventional cores that run legacy codes with specialized cores that serve as computational accelerators. The term multi-grain parallelism refers to the exposure of multiple dimensions of parallelism from within the runtime system, so as to best exploit a parallel architecture with heterogeneous computational capabilities between its cores and execution units. To maximize performance on heterogeneous multi-core processors, programs need to expose multiple dimensions of parallelism simultaneously. Unfortunately, programming with multiple dimensions of parallelism is to date an ad hoc process, relying heavily on the intuition and skill of programmers. Formal techniques are needed to optimize multi-dimensional parallel program designs. We investigate user- and kernel-level schedulers that dynamically "rightsize" the dimensions and degrees of parallelism on asymmetric parallel platforms. The schedulers address the problem of mapping application-specific concurrency to an architecture with multiple hardware layers of parallelism, without requiring programmer intervention or sophisticated compiler support. Our runtime environment outperforms the native Linux and MPI scheduling environment by up to a factor of 2.7. We also present a model of multi-dimensional parallel computation for steering the parallelization process on heterogeneous multi-core processors. The model predicts with high accuracy the execution time and scalability of a program using conventional processors and accelerators simultaneously. More specifically, the model reveals optimal degrees of multi-dimensional, task-level and data-level concurrency, to maximize performance across cores. We evaluate our runtime policies, as well as the performance model we developed, on an IBM Cell BladeCenter and on a cluster composed of PlayStation 3 nodes, using two realistic bioinformatics applications. / Ph. D.
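A toy illustration of the kind of search such a model enables: enumerate candidate (task-level, data-level) degrees of concurrency against a predicted-time function and pick the best. The cost model and core counts below are invented for illustration; they are not the thesis's model of the Cell:

```python
# Toy execution-time model for a heterogeneous node: work is split across
# `tasks` concurrent tasks, each fanned out over `accel` accelerator cores,
# with a per-core offload overhead. All constants are illustrative.
def predicted_time(tasks, accel, work=1e9, accel_rate=4e8, offload_cost=1e-3):
    per_task = work / tasks
    return per_task / (accel_rate * accel) + offload_cost * accel

HOST_CORES, ACCEL_CORES = 2, 8   # conventional cores vs. accelerator cores

candidates = [(t, a)
              for t in range(1, HOST_CORES + 1)
              for a in range(1, ACCEL_CORES + 1)
              if t * a <= ACCEL_CORES]   # cannot oversubscribe accelerators
best = min(candidates, key=lambda ta: predicted_time(*ta))
print("chosen (task-level, data-level) degrees:", best)
```

Even this toy version shows the trade-off the model captures: fanning one task across all accelerators is not always optimal once per-core overheads are accounted for.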
46

Prediction Models for Multi-dimensional Power-Performance Optimization on Many Cores

Shah, Ankur Savailal 28 May 2008 (has links)
Power has become a primary concern for HPC systems. Dynamic voltage and frequency scaling (DVFS) and dynamic concurrency throttling (DCT) are two software tools (or knobs) for reducing the dynamic power consumption of HPC systems. To date, few works have considered the synergistic integration of DVFS and DCT in performance-constrained systems, and, to the best of our knowledge, no prior research has developed application-aware simultaneous DVFS and DCT controllers in real systems and parallel programming frameworks. We present a multi-dimensional, online performance prediction framework, which we deploy to address the problem of simultaneous runtime optimization of DVFS, DCT, and thread placement on multi-core systems. We present results from an implementation of the prediction framework in a runtime system linked to the Intel OpenMP runtime environment and running on a real dual-processor quad-core system as well as a dual-processor dual-core system. We show that the prediction framework derives near-optimal settings of the three power-aware program adaptation knobs that we consider. Our overall runtime optimization framework achieves significant reductions in energy (12.27% mean) and ED² (29.6% mean), through simultaneous power savings (3.9% mean) and performance improvements (10.3% mean). Our prediction and adaptation framework outperforms earlier solutions that adapt only DVFS or DCT, as well as one that sequentially applies DCT then DVFS. Further, our results indicate that prediction-based schemes for runtime adaptation compare favorably and typically improve upon heuristic search-based approaches in both performance and energy savings. / Master of Science
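The ED² objective mentioned above has a simple worked form: energy-delay-squared is E·D² = (P·t)·t² = P·t³, so it weights delay more heavily than energy. A minimal sketch (with invented numbers, not measurements from the thesis) of ranking (frequency, thread-count) settings once a prediction model supplies time and power estimates:

```python
def ed2(power_w, time_s):
    """Energy-delay-squared: E * D^2 = (P * t) * t^2 = P * t^3."""
    return power_w * time_s ** 3

# Hypothetical predicted (time_s, power_w) per (freq_GHz, threads) setting,
# as a stand-in for the output of a multi-dimensional prediction framework.
predictions = {
    (2.4, 8): (10.0, 95.0),
    (2.4, 4): (11.5, 70.0),
    (1.8, 8): (12.0, 72.0),
    (1.8, 4): (13.0, 55.0),
}

best = min(predictions,
           key=lambda knob: ed2(predictions[knob][1], predictions[knob][0]))
print("chosen (GHz, threads):", best)   # the setting with the lowest ED^2
```

Because ED² cubes the delay, a setting that saves power but slows the program substantially rarely wins, which matches the performance-constrained setting the abstract describes.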
47

Verification of Mechanistic-Empirical Pavement Deterioration Models Based on Field Evaluation of In-Service Pavements

Gramajo, Carlos Rafael 15 July 2005 (has links)
This thesis focused on a detailed structural evaluation of seven (three flexible and four composite) high-performance in-service pavements designated as high-priority routes, to verify the applicability of the Mechanistic-Empirical (M-E) models to high-performance pavements in the Commonwealth of Virginia. The structural evaluation included determination of layer thicknesses (from cores, GPR, and historical data), pavement condition assessment based on visual survey, estimation of layer moduli from FWD analysis, and material characterization. One of the main objectives of this study was to use the backcalculated moduli to predict the performance of this group of pavement structures using the M-E Design Guide software, allowing a quick verification of the performance prediction models by comparing their outcome with the current condition. The in-depth structural evaluation of the three flexible and four composite pavements showed that all the sites are structurally sound. The investigation also confirmed that using GPR to determine layer thicknesses, checked against a minimum number of cores, is a helpful tool for pavement structural evaluation. Despite some difficulties performing the backcalculation analysis for complex structures, the obtained results were considered reasonable and were useful in estimating the current structural adequacy of the evaluated structures. The comparison of the measured distresses with those predicted by the M-E Design Guide software showed poor agreement. In general, the predicted distresses were higher than the distresses actually measured. However, there was not enough evidence to determine whether this was due to errors in the prediction models or software, or to the use of default material properties, especially for the AC layers. It must be noted that although an in-depth field evaluation was performed, only Level 3 data was available for many of the input parameters. The results suggest that significant calibration and validation will be required before implementation of the M-E Design Guide. / Master of Science
48

A Pavement Structural Capacity Index for Use in Network-level Evaluation of Asphalt Pavements

Bryce, James Matthew 05 April 2012 (has links)
The objective of this research was to develop a structural index for use in network-level pavement evaluation, which facilitates the inclusion of the pavements' structural condition in many pavement management applications. The primary goal of network-level pavement management is to maintain an acceptable condition of the pavements within the network using available, and often limited, resources. Pavement condition is described in terms of functional and structural condition, and the current widespread practice is to consider only the functional condition during network-level evaluation. This practice results in treatments that are often under-designed or over-designed when considered in more detail at the project level. The disagreement may be reduced by considering the structural capacity of the pavements as part of the network-level decision process. This research was conducted by identifying various structural indices, choosing an appropriate index, and then applying data from the state of Virginia to modify the index and demonstrate example applications of it. It was concluded that the Modified Structural Index best met the research objectives. Project-level and network-level data were used to conduct a sensitivity analysis on the index, and example applications were presented. The results indicated that including the Modified Structural Index in the network-level decision process minimized the errors between network-level and project-level decisions, when compared to the current network-level decision-making process. Furthermore, the Modified Structural Index could be used in various pavement management applications, such as network-level structural screening and the development of structural performance measures. / Master of Science
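One common form for an index of this kind is the ratio of the pavement's effective structural number (backcalculated from FWD data via the AASHTO 1993 relation) to the structural number required for the forecast traffic. The sketch below shows that ratio only; the threshold and the exact formulation of the thesis's Modified Structural Index are assumptions for illustration, not its calibrated model:

```python
def sn_eff(total_thickness_in: float, ep_psi: float) -> float:
    """Effective structural number from the AASHTO 1993 relation:
    SNeff = 0.0045 * D * Ep^(1/3), D in inches, Ep in psi."""
    return 0.0045 * total_thickness_in * ep_psi ** (1.0 / 3.0)

def structural_index(sn_effective: float, sn_required: float) -> float:
    """Ratio of existing to required capacity; < 1 flags a structural need."""
    return sn_effective / sn_required

# Hypothetical section: 20 in of pavement, 75,000 psi effective modulus,
# against a required SN of 4.5 for the forecast traffic.
sci = structural_index(sn_eff(20.0, 75_000.0), sn_required=4.5)
print(f"SCI = {sci:.2f}",
      "-> structurally deficient" if sci < 1.0 else "-> adequate")
```

A ratio like this is cheap enough to compute network-wide from FWD surveys, which is what makes structural screening feasible at the network level.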
49

Adaptive Brain-Computer Interface Systems For Communication in People with Severe Neuromuscular Disabilities

Mainsah, Boyla O. January 2016 (has links)
Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.

This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs have relatively slower speeds when compared to other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.

In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing. / Dissertation
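A simplified sketch of the Bayesian evidence-accumulation and dynamic-stopping idea behind adaptive data collection in P300 spellers. The two-Gaussian classifier-score model, the 0.95 stopping threshold, and the eight-character alphabet are illustrative assumptions, not the thesis's exact algorithm; a language model would supply a non-uniform prior:

```python
import numpy as np

chars = list("ABCDEFGH")
prior = np.full(len(chars), 1.0 / len(chars))  # language model could go here
log_post = np.log(prior)

rng = np.random.default_rng(1)
target = 2  # index of the character the user is attending to (simulated)

for trial in range(100):
    flashed = rng.choice(len(chars), size=2, replace=False)  # stimulus subset
    # Noisy classifier score: higher on average when the target was flashed.
    score = rng.normal(1.0 if target in flashed else 0.0, 1.0)
    for c in range(len(chars)):
        mu = 1.0 if c in flashed else 0.0   # score model per target hypothesis
        log_post[c] += -0.5 * (score - mu) ** 2   # Gaussian log-likelihood
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    if post.max() > 0.95:                   # dynamic stopping: enough evidence
        break

print(f"selected '{chars[post.argmax()]}' after {trial + 1} flashes, "
      f"P = {post.max():.2f}")
```

The point of the dynamic-stopping rule is that easy selections terminate after few flashes while ambiguous ones collect more data, which is how adaptive data collection recovers communication speed without sacrificing accuracy.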
50

Throughput-oriented analytical models for performance estimation on programmable hardware accelerators / Potential performance analysis of a lattice QCD simulation on the Cell processor and GPU

Lai, Junjie 15 February 2013 (has links)
In this thesis, we worked on two main topics in GPU (Graphics Processing Unit) performance analysis. First, we developed an analytical method and a timing estimation tool (TEG) to predict the performance of CUDA applications on GT200-generation GPUs. TEG predicts performance at a cycle-approximate level, approaching the accuracy of cycle-accurate tools. Second, we developed an approach to estimate a GPU application's performance upper bound, based on application analysis and assembly-code-level benchmarking. With the performance upper bound of an application, we know how much optimization headroom remains and can decide how much optimization effort to invest. The analysis also reveals which parameters are critical to performance.
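A hedged sketch of the flavor of upper-bound reasoning described above: bound a kernel's best-case time by its dominant pipeline (instruction issue versus memory traffic). The rates and counts below are illustrative, not GT200 figures from the thesis:

```python
def time_lower_bound_us(n_inst, n_bytes, issue_rate_ginst_s, mem_bw_gb_s):
    """A kernel can finish no faster than its slowest resource allows:
    the max of the issue-limited and bandwidth-limited times is a lower
    bound on time, i.e. an upper bound on achievable performance."""
    t_compute = n_inst / (issue_rate_ginst_s * 1e3)   # microseconds
    t_memory = n_bytes / (mem_bw_gb_s * 1e3)          # microseconds
    return max(t_compute, t_memory)

# e.g. 2e8 issued instructions and 1.5e8 bytes of DRAM traffic
t = time_lower_bound_us(2e8, 1.5e8, issue_rate_ginst_s=300.0, mem_bw_gb_s=140.0)
print(f"best-case kernel time: {t:.1f} us")
```

Comparing measured time against such a bound tells the developer whether further optimization effort can plausibly pay off, which is the decision the abstract describes.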
