341

A Study of Deflagration To Detonation Transition In a Pulsed Detonation Engine

Chapin, David Michael 22 November 2005 (has links)
A Pulse Detonation Engine (PDE) is a propulsion device that takes advantage of the pressure rise inherent to the efficient burning of fuel-air mixtures via detonations. Detonation initiation is a critical process in the PDE cycle. A practical method of detonation initiation is Deflagration-to-Detonation Transition (DDT), in which a subsonic deflagration, created using low initiation energies, transitions to a supersonic detonation. This thesis presents the effects of obstacle spacing, blockage ratio, DDT section length, and airflow on DDT behavior in hydrogen-air and ethylene-air mixtures for a repeating PDE. The experiments were performed on a 2-in.-diameter, 40-in.-long continuous-flow PDE located at the General Electric Global Research Center in Niskayuna, New York. A fundamental study using a modular orifice-plate DDT geometry revealed that all three factors tested (obstacle blockage ratio, length of DDT section, and spacing between obstacles) have a statistically significant effect on flame acceleration. All of the interactions between the factors, except the interaction of blockage ratio with obstacle spacing, were also significant. To better capture the non-linearity of the DDT process, further studies were performed using a clear detonation chamber and a high-speed digital camera to track the flame chemiluminescence as it progressed through the PDE. Results show that obstacles in excess of what is minimally required to transition the flame to detonation increase the distance and time needed for the transition. Other key findings show that increasing the mass flow rate of air through the PDE significantly reduces the run-up time of DDT while having minimal effect on run-up distance. These experimental results provided validation runs for computational studies; in some cases the difference was as little as 20%. The minimum DDT length for 0.15 lb/s hydrogen-air studies was 8 L/D from the spark location, while for ethylene it was 16 L/D. It was also observed that increasing the airflow rate through the tube from 0.1 to 0.3 lb/s decreased the time required for DDT by 26%, from 3.9 ms to 2.9 ms.
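As a quick sanity check on the figures above, a hedged back-of-envelope sketch; the 2-in. tube diameter is a reconstruction of units apparently stripped from the abstract and is used only to convert the L/D values:

```python
# Back-of-envelope check of the DDT figures quoted in the abstract.
# Assumption: tube diameter D = 2 in., so run-up lengths given in L/D
# convert to inches by multiplying by 2.
D_in = 2.0

runup_h2 = 8 * D_in      # hydrogen-air minimum DDT length: 16 in.
runup_c2h4 = 16 * D_in   # ethylene-air minimum DDT length: 32 in.

t_low_flow, t_high_flow = 3.9e-3, 2.9e-3   # run-up times at 0.1 and 0.3 lb/s
reduction = (t_low_flow - t_high_flow) / t_low_flow

print(f"H2 run-up ~ {runup_h2:.0f} in., C2H4 run-up ~ {runup_c2h4:.0f} in.")
print(f"run-up time reduction: {reduction:.0%}")   # ~26%, as reported
```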
342

Tweel (TM) technology tires for wheelchairs and instrumentation for measuring everyday wheeled mobility

Meruani, Azeem 04 April 2007 (has links)
This thesis is focused on two aspects of wheeled mobility: 1) evaluating the impact of a new tire design on powered mobility, and 2) instrumentation that permits better monitoring and assessment of wheeled mobility in everyday use. The Tweel technology tires developed by Michelin USA consist of an outer polyurethane ring supported by polyurethane fins instead of metal spokes, which allow the tire to deflect under pressure. As a wheelchair tire they offer a potential breakthrough, since they have deflection properties similar to a pneumatic tire while maintaining the low maintenance of a solid foam-core tire. A study was conducted to compare the Tweel technology tires to standard solid foam-core tires for vibration transmission, traction, and overall life span. The Tweel technology tires failed to produce any significant difference in vibration transmitted to the user compared to solid foam-core tires. Additionally, the Tweel technology tires showed significant signs of deterioration after a month-long field trial, indicating a short life span. However, they provided better traction on both dry and wet concrete. Overall, Tweel technology tires would have to be re-engineered to provide better damping properties, leading to lower vibration levels transmitted to the user. The second section of this thesis addresses the need to develop a methodology for measuring mobility in everyday usage. This section is part of a larger ongoing research project at CATEA (Center for Assistive Technology and Environmental Access) aimed at understanding everyday wheelchair usage. A methodology was developed to measure the bouts of mobility that characterize wheelchair usage, including the number of starts, stops, and turns and the distance traveled through the day. Three technologies were tested: an accelerometer unit on the rim of the drive wheel, a gyro-accelerometer unit on the frame of the chair, and reed switches. Testing included various criteria for accuracy, durability, and compatibility for measuring bouts of everyday wheeled mobility. Although no single technology could measure all aspects of mobility, the accelerometer unit on the rim met the design criteria for measuring starts, stops, and distance, while the gyro-accelerometer unit met the requirements for measuring turns.
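To illustrate the kind of measurement the instrumentation section describes, a minimal sketch of converting wheel-revolution counts (from a reed switch or a rim-mounted accelerometer) into distance traveled; the 24-in. drive wheel diameter is an assumed typical value, not taken from the thesis:

```python
import math

WHEEL_DIAMETER_M = 24 * 0.0254   # assumed 24-in. drive wheel, in meters

def distance_traveled(revolutions: float) -> float:
    """Distance in meters covered in a given number of wheel revolutions."""
    return revolutions * math.pi * WHEEL_DIAMETER_M

# Each detected revolution also marks activity, so the same pulse stream can
# be segmented into starts and stops by looking for gaps between pulses.
print(f"{distance_traveled(100):.1f} m per 100 revolutions")   # ~191.5 m
```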
343

Hardware Acceleration of Electronic Design Automation Algorithms

Gulati, Kanupriya December 2009 (has links)
With the advances in very large scale integration (VLSI) technology, hardware is going parallel. Software, which was traditionally designed to execute on single-core microprocessors, now faces the tough challenge of taking advantage of this parallelism made available by the scaling of hardware. The work presented in this dissertation studies the acceleration of electronic design automation (EDA) software on several hardware platforms such as custom integrated circuits (ICs), field programmable gate arrays (FPGAs), and graphics processors. This dissertation concentrates on a subset of EDA algorithms that are heavily used in the VLSI design flow and have varying degrees of inherent parallelism. In particular, Boolean satisfiability, Monte Carlo based statistical static timing analysis, circuit simulation, fault simulation, and fault table generation are explored. The architectural and performance tradeoffs of implementing the above applications on these alternative platforms (in comparison to their implementation on a single-core microprocessor) are studied. In addition, this dissertation presents an automated approach to accelerate uniprocessor code using a graphics processing unit (GPU). The key idea is to partition the software application into kernels in an automated fashion, such that multiple instances of these kernels, when executed in parallel on the GPU, can maximally benefit from the GPU's hardware resources. The work presented in this dissertation demonstrates that several EDA algorithms can be successfully rearchitected to maximally harness their performance on alternative platforms such as custom-designed ICs, FPGAs, and graphics processors, obtaining speedups of up to 800X. The approaches in this dissertation collectively aim to enable the computer aided design (CAD) community to accelerate EDA algorithms on arbitrary hardware platforms.
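To make the parallelism argument concrete, a minimal sketch (not the dissertation's code) of why Monte Carlo statistical static timing analysis maps well to a GPU: each sample of the gate-delay random variables is an independent longest-path evaluation, shown here vectorized with NumPy as a stand-in for per-thread GPU kernels. The three-stage path and its delay statistics are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 10_000                       # each sample could map to one GPU thread

# Hypothetical 3-stage path: per-stage delays ~ N(mean, sigma), in picoseconds
means = np.array([50.0, 80.0, 65.0])
sigmas = np.array([5.0, 8.0, 6.0])

# Draw all samples at once; every row is an independent kernel instance
delays = rng.normal(means, sigmas, size=(n_samples, 3))
path_delay = delays.sum(axis=1)          # delays along a serial path add

print(f"mean path delay ~ {path_delay.mean():.1f} ps, "
      f"99th percentile ~ {np.percentile(path_delay, 99):.1f} ps")
```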
344

Vibration Transmission To Bicycle And Rider: A Field And A Laboratory Study

Arpinar-avsar, Pinar 01 August 2009 (has links) (PDF)
The purpose of this study is to investigate the frequency and amplitude characteristics of the vibration to which the bicycle and the rider are exposed, as well as the features of the vibration transmission to the riders' body. The findings showed that vibration transmission to the bicycle and the rider is effective along the x- and z-axes. As a result of increased roughness, the effective frequency range shifted to lower frequencies, between 15 and 30 Hz, at both saddle and stem. The severity of the vibration transmitted to the bicycle was found to be considerably higher in road bike trials (up to 25 m/s²). The frequency range of the vibration exposure of the body parts was 0-30 Hz, and independent of the level of vibration transmission, the peak values were within the range of 3-12 Hz. As the acceleration magnitude increased with road roughness, normalized rms EMG values also increased, by up to 50%, in the forearm extensor muscles during MTB trials and in the flexor muscles during road bike trials. Relative to trials without vibration, rms EMG values increased in order to maintain the same force output. Vibration transmission to the body tends to be amplified with increased force production. Transmission values were found to be higher at lower frequencies. Since the magnitude and frequency of vibration are known to have adverse effects on body functions, such as an impaired breathing pattern and increased muscle tone, vibration transmitted to the body can be considered to influence the riding comfort, controllability, and overall health of the cyclist.
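For context, a minimal sketch of the basic signal processing behind such measurements: the RMS level and dominant frequency of an acceleration record. The signal here is synthetic (a 20 Hz tone plus noise) standing in for saddle or stem data, and the sampling rate is an assumption:

```python
import numpy as np

fs = 500.0                                # assumed sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(1)
accel = 10.0 * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 2, t.size)

rms = np.sqrt(np.mean(accel ** 2))        # RMS acceleration, m/s^2

spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(accel.size, 1 / fs)
dominant = freqs[spectrum.argmax()]       # frequency of the largest peak

print(f"RMS: {rms:.1f} m/s^2, dominant frequency: {dominant:.1f} Hz")
```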
345

Dynamic Effects Of Moving Traffic On Railway Bridges

Cinek, Fatih 01 May 2010 (has links) (PDF)
In this study, the dynamic effects on high-speed railway bridges under moving traffic are investigated. Within this context, a clear definition of the possible dynamic effects is provided and the related studies in the literature are reviewed. In the light of those studies, analytical procedures for finding the critical dynamic responses, such as deflections, accelerations, and resonance conditions, are examined, and a program was written in the MATLAB programming language to obtain the responses for different train loadings and velocities. The reliability of the program is confirmed by comparing its results with related studies in the literature. In addition to the analytical procedures, the approaches in the European standards concerning the dynamic effects of railway traffic are described. A case study is investigated for a bridge within the scope of the Ankara-Sivas High Speed Railway Project. The bridge is modeled using the finite element program SAP2000 according to the definitions stated in the European standards. It is analysed with a real train, the French TGV, together with the HSLM trains defined in Eurocode, and the results are compared with each other. This study also includes analyses of the bridge performed for 7 different stiffness values and 3 different mass values to determine the parameters affecting dynamic behaviour.
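For reference, the resonance check commonly used in such studies (e.g., in EN 1991-2); this is a hedged sketch of the standard condition, not necessarily the exact criterion used in this thesis. Resonance is expected when a multiple of the axle-passage frequency of regularly spaced loads matches the bridge's first natural frequency:

```latex
% Resonance speeds for axle loads with regular spacing d crossing a bridge
% whose first natural (bending) frequency is n_0:
\[
  v_{\mathrm{res}} = \frac{n_0 \, d}{i}, \qquad i = 1, 2, 3, \dots
\]
% e.g. n_0 = 4 Hz and d = 18 m give v_res = 72 m/s (~259 km/h) for i = 1.
```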
346

VIBRATION EXPOSURE AND PREVENTION IN THE UNITED STATES

WASSERMAN, DONALD E. 05 1900 (has links)
No description available.
347

Adaptive Genetic Algorithms with Elitist Strategy for the Design of an Active Vibration Controller for a Linear Motor Positioning Platform

Chen, Yih-Ren 05 July 2001 (has links)
In this study, we use adaptive crossover and mutation probabilities, an elitist strategy, and an extinction-and-immigration strategy to improve the simple genetic algorithm. The improved search technique is expected to avoid getting trapped in local maxima due to premature convergence and to noticeably increase the chance of finding near-optimal parameters in a larger search space. An accelerometer is taken as the sensor for output measurement, and the designed actuator and digital PID controller are implemented to actively suppress the vibration of the platform caused by the excitation of the linear motor's high-speed, high-precision positioning. The computer simulations and experimental results show that the near-optimal digital PID controller designed by the modified genetic approach improves vibration suppression and also decreases the settling time. For vibration suppression in high-speed precision-positioning problems, the vibrating platform system can be stabilized quickly.
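A minimal sketch of adaptive crossover and mutation probabilities of the kind the abstract describes, following the classic Srinivas-Patnaik scheme; the exact formulation and constants used in the thesis are not given in the abstract, so the ones below are assumptions:

```python
def adaptive_pc(f_mate, f_max, f_avg, k1=1.0, k3=1.0):
    """Crossover probability: lower for above-average parents, full otherwise."""
    if f_mate >= f_avg:
        return k1 * (f_max - f_mate) / max(f_max - f_avg, 1e-12)
    return k3

def adaptive_pm(f, f_max, f_avg, k2=0.5, k4=0.5):
    """Mutation probability: protects near-best solutions (elitist pressure),
    perturbs below-average ones at the full base rate."""
    if f >= f_avg:
        return k2 * (f_max - f) / max(f_max - f_avg, 1e-12)
    return k4

# A near-best individual is barely mutated; a below-average one fully is.
print(adaptive_pm(f=9.8, f_max=10.0, f_avg=6.0))   # 0.025
print(adaptive_pm(f=4.0, f_max=10.0, f_avg=6.0))   # 0.5
```

Lowering the disruption of fit individuals while keeping strong pressure on poor ones is what lets the search escape premature convergence without losing the elite solutions the abstract's elitist strategy preserves.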
348

Entropy-based diagnostics of criticality Monte Carlo simulation and higher eigenmode acceleration methodology

Shi, Bo 10 June 2010 (has links)
Because of its accuracy and ease of implementation, Monte Carlo methodology is widely used in the analysis of nuclear systems. The resulting estimate of the multiplication factor (keff) or flux distribution is statistical by nature. In criticality simulation of a nuclear system, which is based on the power iteration method, the initial guessed source distribution is generally far from the converged fundamental one. Therefore, it is necessary to ensure that convergence has been achieved before data are accumulated. Discarding a larger number of initial histories reduces the risk of contaminating the results with non-converged data but increases the computational expense. This issue is amplified for large, loosely coupled nuclear systems with low convergence rates. Since keff is a generation-based global value, an explicit convergence criterion is frequently not applied to keff directly. As an alternative, a flux-based entropy check available in MCNP5 works well in many cases. However, when applied to a difficult storage fuel pool benchmark problem, it could not always detect the non-convergence of the flux distribution. Preliminary evaluation indicates that this is due to collapsing local information into a single number. This thesis addresses the problem with two new developments. First, it aims to find a more reliable way to assess convergence by analyzing the local flux change. Second, it introduces an approach that simultaneously computes both the first and second eigenmodes and, in doing so, can increase the convergence rate. Improvement in these two areas could have a significant impact on the practicality of Monte Carlo criticality simulations.
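For context, a minimal sketch of the flux-based (Shannon) entropy diagnostic the abstract refers to: the entropy of the fission-source distribution over a spatial mesh, which plateaus once the source converges. The mesh and example counts are invented, and MCNP5's actual implementation differs in details:

```python
import numpy as np

def shannon_entropy(source_counts):
    """H = -sum_i p_i log2 p_i over mesh cells holding source sites."""
    p = source_counts / source_counts.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return -(p * np.log2(p)).sum()

# A source spread evenly over 4 cells vs. one clustered toward a corner:
print(shannon_entropy(np.array([25., 25., 25., 25.])))   # 2.0 bits
print(shannon_entropy(np.array([60., 30., 10.,  0.])))   # ~1.3 bits
```

The abstract's criticism, that the check collapses local information into a single number, is visible here: very different spatial shapes can share the same entropy value.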
349

Application acceleration for wireless and mobile data networks

Zhuang, Zhenyun 27 August 2010 (has links)
This work studies application acceleration for wireless and mobile data networks. The problem of accelerating applications can be addressed along multiple dimensions. The first dimension is advanced network protocol design: optimizing the underlying network protocols, particularly the transport-layer and link-layer protocols. Even with advanced network protocol design, however, this work observes that certain application behaviors can fundamentally limit the performance achievable over wireless and mobile data networks; the performance difference is caused by the complex behaviors of non-FTP applications. Explicitly dealing with application behaviors can improve application performance in new environments. Along this second dimension of overcoming application behavior, we accelerate applications by studying specific types of applications, including client-server, peer-to-peer, and location-based applications. In exploring this dimension, we identify a set of application behaviors that significantly affect application performance. To accommodate these behaviors, we first extract general design principles that can apply to any application whenever possible; these principles can also be integrated into new application designs. We also apply these design principles to specific applications and build prototypes to demonstrate the effectiveness of the solutions. In the context of application acceleration, even when all the challenges belonging to the two aforementioned dimensions of advanced network protocol design and overcoming application behavior are addressed, application performance can still be limited by the underlying network capability, particularly the physical bandwidth. In this work, we study the possibility of speeding up data delivery by eliminating the traffic redundancy present in application traffic. Specifically, we first study traffic redundancy along multiple dimensions using traces obtained from multiple real wireless network deployments. Based on the insights obtained from this analysis, we propose Wireless Memory (WM), a two-ended AP-client solution that effectively exploits traffic redundancy in wireless and mobile environments. Application acceleration can also be pursued along two other dimensions: network provisioning and quality of service (QoS). Network provisioning allocates network resources such as physical bandwidth or wireless spectrum, while QoS gives different priorities to different applications, users, or data flows. These two dimensions have their respective limitations in the context of application acceleration, so this work focuses on the two dimensions of overcoming application behavior and eliminating traffic redundancy. The contribution of this work is as follows. First, we study the problem of application acceleration for wireless and mobile data networks and characterize the dimensions along which to address it. Second, we identify that application behaviors can significantly affect application performance, and we propose a set of design principles to deal with these behaviors; we also build prototypes to conduct systems research. Third, we consider traffic redundancy elimination and propose a wireless memory approach.
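A minimal sketch of hash-based traffic redundancy elimination, the general technique in the spirit of the Wireless Memory idea above; the chunking scheme, chunk size, and framing are assumptions for illustration, not the dissertation's protocol:

```python
import hashlib

CHUNK = 64   # illustrative fixed chunk size, in bytes

def encode(payload: bytes, cache: set) -> list:
    """Replace chunks the peer has cached with short digests; send new chunks raw."""
    out = []
    for i in range(0, len(payload), CHUNK):
        chunk = payload[i:i + CHUNK]
        digest = hashlib.sha1(chunk).digest()
        if digest in cache:
            out.append(("ref", digest))    # 20-byte reference replaces 64 bytes
        else:
            cache.add(digest)
            out.append(("raw", chunk))
    return out

cache = set()
payload = bytes(range(256))                # four distinct 64-byte chunks
first = encode(payload, cache)             # all chunks new: sent raw
second = encode(payload, cache)            # all chunks cached: sent as references
print([kind for kind, _ in first])         # ['raw', 'raw', 'raw', 'raw']
print([kind for kind, _ in second])        # ['ref', 'ref', 'ref', 'ref']
```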
350

Development and implementation of convergence diagnostics and acceleration methodologies in Monte Carlo criticality simulations

Shi, Bo 12 December 2011 (has links)
Because of its accuracy and ease of implementation, the Monte Carlo methodology is widely used in the analysis of nuclear systems. The estimated effective multiplication factor (keff) and flux distribution are statistical by nature. In eigenvalue problems, however, neutron histories are not independent but are correlated through subsequent generations. It is therefore necessary to ensure that only converged data are used for further analysis. Discarding a larger number of initial histories reduces the risk of contaminating the results with non-converged data but increases the computational expense. This issue is amplified for large nuclear systems with slow convergence. One solution is to use the convergence of keff or of the flux distribution as the criterion for initiating the accumulation of data. Although several approaches have been developed to identify convergence, these methods are not always reliable, especially for slowly converging problems. This dissertation attacks this difficulty by developing two independent but related methodologies. One aims to find a more reliable and robust way to assess convergence by statistically analyzing the local flux change. The other forms a basis for increasing the convergence rate and thus reducing the computational expense. Together, these two topics contribute to the ultimate goal of improving the reliability and efficiency of Monte Carlo criticality calculations.
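A minimal sketch of a convergence check based on local flux change, in the spirit of the first methodology above; the dissertation's actual statistical test is not specified in the abstract, so the tolerance and mesh here are invented:

```python
import numpy as np

def local_flux_converged(flux_prev, flux_curr, tol=0.05):
    """True once no mesh cell's flux changes by more than tol (relative),
    generation to generation; a global number like keff can converge
    long before a local criterion like this one is met."""
    rel_change = np.abs(flux_curr - flux_prev) / np.maximum(flux_prev, 1e-30)
    return bool(rel_change.max() < tol)

gen_n   = np.array([1.00, 2.00, 3.00, 4.00])
gen_np1 = np.array([1.02, 1.97, 3.05, 4.01])
print(local_flux_converged(gen_n, gen_np1))   # True: every cell within 5%
```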
