11

Analysis and optimization of cross-docking systems through simulation and analytical modeling

Kumar ML, Vinod Kumar January 2001 (has links)
No description available.
12

Experimental Study of Multi-Mesh Gear Dynamics

Del Donno, Andrew Mark 09 January 2009 (has links)
No description available.
13

LABORATORY AND MODELLING STUDY EVALUATING THERMAL REMEDIATION OF TETRACHLOROETHENE AND MULTI-COMPONENT NAPL IMPACTED SOIL

Zhao, Chen 02 October 2013 (has links)
In Situ Thermal Treatment (ISTT) is a candidate remediation technology for dense non-aqueous phase liquids (DNAPLs). However, the relationships between gas production, gas flow, and contaminant mass removal during ISTT are not fully understood. A laboratory study was conducted to assess the degree of mass removal, as well as the gas generation rate and the composition of the gas phase, as a function of heating time and initial DNAPL saturation. The temperature of the contaminated soil was measured continuously using a thermocouple to identify periods of heating, co-boiling, and boiling. Samples were collected from the aqueous and DNAPL phases of the condensate, as well as from the source soil, at different heating times, and analyzed by gas chromatography/mass spectrometry. In addition to the laboratory experiments, a mathematical model was developed to predict the co-boiling temperature and the transient composition of the gas phase during heating of a uniform source. Predictions for single-component sources matched the experiments well, with a co-boiling plateau at 88°C ± 1°C for experiments with tetrachloroethene (PCE) and water. A comparison of predicted and observed boiling behaviour showed a discrepancy at the end of the co-boiling period, with earlier temperature increases occurring in the experiments. The results of this study suggest that temperature observations related to the co-boiling period during ISTT applications may not provide a clear indication of complete NAPL mass removal, and that multi-compartment modeling associated with various NAPL saturation zones is required to capture mass-transfer limitations within the heated zone. Predictions for a multi-component DNAPL containing 1,2-dichloroethane (1,2-DCA), PCE, and chlorobenzene (CB) showed no co-boiling plateau. CB is the least volatile of the three components and dominates the vapour phase at the end of the co-boiling process, so it can be used as an indicator of the end of the co-boiling stage. Two field NAPL mixtures were simulated using the screening-level analytical model to demonstrate its potential application to ISTT. The two mixtures, with similar compositions but different mass fractions, resulted in distinct co-boiling temperatures and mass-transfer behaviour. The non-volatile component in the NAPL mixture resulted in larger amounts of water consumption and longer ISTT operation times. / Thesis (Master, Civil Engineering) -- Queen's University, 2013
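
The co-boiling plateau reported above (88°C ± 1°C for PCE and water) can be reproduced with a screening-level calculation: for two immiscible liquids, co-boiling begins where the sum of the pure-component vapour pressures reaches ambient pressure. Below is a minimal sketch of that calculation, assuming standard literature Antoine coefficients rather than the thesis's own parameterization:

```python
# Antoine equation: log10(P [mmHg]) = A - B / (C + T [degC])
# Coefficients are standard literature values (an assumption; the
# thesis may use a different parameterization).
ANTOINE = {
    "water": (8.07131, 1730.63, 233.426),   # valid roughly 1-100 degC
    "PCE":   (6.97683, 1386.92, 217.53),    # tetrachloroethene
}

def vapor_pressure(component, T):
    """Pure-component vapour pressure in mmHg at temperature T (degC)."""
    A, B, C = ANTOINE[component]
    return 10.0 ** (A - B / (C + T))

def coboiling_temperature(P_ambient=760.0):
    """Bisect for T where P_water(T) + P_PCE(T) = P_ambient.

    Immiscible phases each contribute their full vapour pressure, so
    the pair boils below either pure-component boiling point.
    """
    lo, hi = 20.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        total = vapor_pressure("water", mid) + vapor_pressure("PCE", mid)
        if total < P_ambient:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"Predicted PCE/water co-boiling point: {coboiling_temperature():.1f} degC")
# ~88 degC, consistent with the plateau observed in the experiments.
```
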
14

Empirical Studies on Incentives, Information Disclosure, and Social Interactions in Online Platforms

Guo, Chenhui January 2016 (has links)
Nowadays, people conduct a wide range of business and entertainment activities on online platforms. Despite their varied functionality, online platforms share a fundamental administrative problem: how do platform designers or administrators create proper online environments, including mechanisms and policies, to better manage user behavior and reach the platforms' goals? Starting with a taxonomy of online platforms, I introduce three critical dimensions that characterize such platforms: revenue model, heterogeneity in user roles, and level of user interaction. I then choose three online platforms as research contexts and conduct empirical studies to identify and understand the impact of incentive programs, quality information disclosure, and social influence on users' decision-making. The first essay investigates the effectiveness of incentive hierarchies, in which users achieve increasingly higher status in the community after completing increasingly challenging goals, in motivating user contributions on the platform. The findings have important implications for crowd-based online applications such as knowledge exchange and crowdsourcing. The second essay focuses on online consumer review sites and studies whether and how consumer-generated word-of-mouth for restaurants, in both volume and valence, is influenced by the disclosure of quality information from health inspectors, using analytical modeling and econometric analyses of data from a leading consumer review site. The third essay examines how social interactions matter in a large-scale online social game that adopts an increasingly popular freemium revenue model. The study leverages an econometric model to quantify the effect of peer consumption on players' repeated decisions to consume both free and premium services. Finally, I conclude the dissertation by highlighting the three fundamental issues in the design and management of online platforms.
15

Residual stress modeling in machining processes

Su, Jiann-Cherng 17 November 2006 (has links)
Residual stresses play an important role in the performance of machined components. Component characteristics influenced by residual stress include fatigue life, corrosion resistance, and part distortion. The functional behavior of machined components can be enhanced or impaired by residual stresses. Because of this, understanding the residual stress imparted by machining is an important aspect of understanding machining and overall part quality. Machining-induced residual stress prediction has been a topic of research since the 1950s. Research efforts have primarily comprised experimental findings, analytical modeling, finite element modeling, and various combinations of these approaches. Although there has been significant research in the area, there are still opportunities for advancing predictive residual stress methods. The objectives of the current research are as follows: (1) develop a method of predicting residual stress based on an analytical description of the machining process, and (2) validate the model with experimental data. The research focuses on predicting residual stresses in machining from first principles. Machining process outputs such as cutting forces and cutting temperatures are predicted as part of the overall modeling effort; these outputs serve as the basis for determining the loads that generate residual stresses during machining. The modeling techniques are applied to a range of machining operations including orthogonal cutting, broaching, milling, and turning. The strengths and weaknesses of the model are discussed, as well as opportunities for future work.
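
The abstract does not state which first-principles force model is used; as one classical illustration of predicting cutting forces analytically, the sketch below implements Merchant's orthogonal-cutting analysis (an assumption for illustration, not necessarily the thesis's formulation):

```python
import math

def merchant_forces(tau_s, w, t0, alpha_deg, beta_deg):
    """Classical Merchant orthogonal-cutting force estimate.

    tau_s : shear strength of the work material (MPa)
    w     : width of cut (mm); t0: uncut chip thickness (mm)
    alpha : rake angle (deg); beta: friction angle (deg)
    Returns cutting and thrust force components in N.
    """
    alpha = math.radians(alpha_deg)
    beta = math.radians(beta_deg)
    # Merchant's shear-angle relation (minimum-energy assumption)
    phi = math.pi / 4 + alpha / 2 - beta / 2
    # Shear force on the shear plane, area = w * t0 / sin(phi)
    Fs = tau_s * w * t0 / math.sin(phi)
    # Resultant force, then its cutting and thrust components
    R = Fs / math.cos(phi + beta - alpha)
    Fc = R * math.cos(beta - alpha)
    Ft = R * math.sin(beta - alpha)
    return Fc, Ft

# Illustrative (assumed) steel-like parameters
Fc, Ft = merchant_forces(tau_s=550.0, w=3.0, t0=0.25, alpha_deg=10, beta_deg=35)
print(f"Fc = {Fc:.0f} N, Ft = {Ft:.0f} N")
```

Forces of this kind, together with predicted cutting temperatures, are the loads from which a residual stress profile would then be computed.
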
16

Assessment Of Diffusive And Convective Mechanisms During Carbon Dioxide Sequestration Into Deep Saline Aquifers

Ozgur, Emre 01 December 2006 (has links) (PDF)
The analytical and numerical modeling of CO2 sequestration in deep saline aquifers with different properties was studied, considering both diffusion and convection mechanisms. Complete dissolution of CO2 in the aquifer by diffusion alone takes thousands, even millions, of years; in the diffusion-dominated system, an aquifer of 100 m thickness became saturated with CO2 only after 10,000,000 years, whereas saturation occurred much earlier in the convection-dominated system. In the diffusion-dominated process, the dissolution of CO2 in the aquifer increased with increasing porosity; however, in the convection-dominated process it decreased with increasing porosity. An increase in permeability accelerated the dissolution of CO2 significantly, owing to the increased velocity. Dissolution also proceeded faster in aquifers with lower dispersivity. The results of the convection-dominated mechanism in aquifers with permeability values of 1 md and 10 md were very close to those of the diffusion-dominated system. For aquifers with permeability higher than 10 md, the convection mechanism began to dominate gradually, and the system became fully convection-dominated at permeability values of 50 md and higher. These results were verified with calculated Rayleigh numbers and mixing zone lengths. The mixing zone length increased with porosity and time in the diffusion-dominated system; in the convection-dominated system, it decreased with increasing porosity and increased with increasing dispersivity and with permeability above 10 md.
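
The permeability threshold described above can be illustrated with the porous-medium Rayleigh number, where density-driven convection sets in once Ra exceeds the critical value of roughly 4π². The sketch below uses representative brine/aquifer properties chosen for illustration (assumptions, not values from the thesis):

```python
import math

MD_TO_M2 = 9.869e-16  # 1 millidarcy in m^2

def rayleigh_number(k_md, delta_rho=5.0, H=100.0, phi=0.3,
                    mu=6e-4, D=1e-8, g=9.81):
    """Porous-medium Rayleigh number Ra = k*drho*g*H / (phi*mu*D).

    All parameter values are illustrative assumptions:
    delta_rho : density increase of CO2-saturated brine (kg/m^3)
    H         : aquifer thickness (m); phi: porosity
    mu        : brine viscosity (Pa*s)
    D         : effective (dispersion-enhanced) diffusivity (m^2/s)
    """
    k = k_md * MD_TO_M2
    return k * delta_rho * g * H / (phi * mu * D)

RA_CRITICAL = 4 * math.pi ** 2  # ~39.5 for a porous layer

for k_md in (1, 10, 50, 100):
    ra = rayleigh_number(k_md)
    regime = "convective" if ra > RA_CRITICAL else "diffusive"
    print(f"k = {k_md:>3} md -> Ra = {ra:7.1f} ({regime})")
```

With these assumed values, the diffusive-to-convective crossover falls between 10 md and 50 md, the same range the study identifies.
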
17

Enhancing Petroleum Recovery From Heavy Oil Fields By Microwave Heating

Acar, Cagdas 01 June 2007 (has links) (PDF)
There are many heavy oil reservoirs with thin pay zones (less than 10 m) in the world and in Turkey. Conventional steam injection techniques are not cost-effective for such reservoirs, due to excessive heat loss through the overburden. Heat losses can be minimized through controlled heating of the pay zone; one way to introduce heat to the reservoir in a controlled manner is microwave heating. Laboratory studies on microwave heating of a scaled model of a heavy oil reservoir with a thin pay zone are presented, together with an economic feasibility assessment of the method. In this thesis, three different conceptual oil reservoirs from southeast Turkey are evaluated: Bati Raman (9.5 API) and Çamurlu (12 API) heavy crude oils, and paraffinic Garzan (26 API) crude oil. Using a graphite core holder packed with crushed limestone together with crude oil and water, the effects of operational parameters such as microwave heating time and waiting period, as well as rock and fluid properties such as permeability, porosity, wettability, salinity, and initial water saturation, are studied. The main recovery mechanisms in the experiments are viscosity reduction and gravity drainage. An analytical model, developed by coupling the heat equation with the electromagnetic power dissipated per unit volume based on Maxwell's equations, successfully models the experiments at temperatures below the pyrolysis temperature. The experiments are also scaled to the model using the geometric similarity concept. In the economic evaluation, the cost of oil is calculated based on domestic electricity prices.
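
As a rough illustration of the coupling described above, the sketch below solves a 1-D heat equation with a Lambert-law microwave source term by explicit finite differences; all material and power values are assumptions for illustration, not the thesis's scaled-model parameters:

```python
import math

# 1-D heated column: dT/dt = alpha * d2T/dz2 + q(z) / (rho*cp)
# Properties are illustrative guesses for oil-saturated limestone.
alpha = 8e-7        # thermal diffusivity (m^2/s)
rho_cp = 2.4e6      # volumetric heat capacity (J/m^3/K)
P0 = 2e5            # absorbed microwave power density at surface (W/m^3)
atten = 15.0        # Lambert-law attenuation coefficient (1/m)

n, L = 50, 0.3      # grid points, column length (m)
dz = L / (n - 1)
dt = 0.4 * dz**2 / alpha          # below explicit-scheme stability limit
T = [20.0] * n                    # initial temperature (degC)

def q(z):
    """Dissipated power density, decaying with depth (Lambert's law)."""
    return P0 * math.exp(-atten * z)

t = 0.0
while t < 600.0:                  # simulate 10 minutes of heating
    Tn = T[:]
    for i in range(1, n - 1):
        cond = alpha * (T[i+1] - 2*T[i] + T[i-1]) / dz**2
        Tn[i] = T[i] + dt * (cond + q(i * dz) / rho_cp)
    Tn[0], Tn[-1] = Tn[1], Tn[-2]  # insulated boundaries
    T, t = Tn, t + dt

print(f"Peak temperature after 10 min: {max(T):.1f} degC")
```

The same structure extends to the analytical model's regime of validity: the solution is meaningful only while temperatures stay below the pyrolysis temperature.
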
18

Efficient modeling of soft error vulnerability in microprocessors

Nair, Arun Arvind 11 July 2012 (has links)
Reliability has emerged as a first-class design concern as a result of the exponential increase in the number of transistors on chip and the lowering of operating and threshold voltages with each new process generation. Radiation-induced transient faults are a significant source of soft errors in current and future process generations. Techniques to mitigate their effect come at a significant cost in area, power, performance, and design effort. Architectural Vulnerability Factor (AVF) modeling has been proposed to easily estimate a processor's soft error rate and to enable designers to make appropriate cost/reliability trade-offs early in the design cycle. Using cycle-accurate microarchitectural or logic gate-level simulations, AVF modeling captures the masking effect of program execution on the visibility of soft errors at the output. AVF modeling is used to identify the structures in the processor that contribute most to the overall Soft Error Rate (SER) while running typical workloads, and to guide the design of SER mitigation mechanisms. The precise mechanisms of interaction between the workload and the microarchitecture that together determine the overall AVF are not well studied in the literature, beyond qualitative analyses. Consequently, there is no known methodology for ensuring that the workload suite used for AVF modeling offers sufficient SER coverage. Additionally, owing to the lack of an intuitive model, AVF modeling is reliant on detailed microarchitectural simulations for understanding the impact of scaling processor structures and for design space exploration studies. Microarchitectural simulations are time-consuming and do not easily provide insight into the mechanisms by which workload and microarchitecture interact to determine AVF, beyond aggregate statistics. These challenges are addressed in this dissertation through two methodologies. First, beginning with a systematic analysis of the factors affecting the occupancy of corruptible state in a processor, a methodology is developed that generates a synthetic workload for a given microarchitecture such that the SER is maximized. As it is impossible for every bit in the processor to simultaneously contain corruptible state, the worst-case realizable SER while running a workload is less than the sum of the circuit-level fault rates of all bits. Knowledge of the worst-case SER enables efficient design trade-offs by allowing the architect to validate the coverage of the workload suite, select an appropriate design point, and identify structures that may contribute heavily to SER. The methodology induces 1.4X higher SER in the core than the highest SER induced by SPEC CPU2006 and MiBench programs. Second, a first-order analytical model is proposed, developed from the first principles of out-of-order superscalar execution, that models the AVF induced by a workload in microarchitectural structures using inexpensive profiling. The central component of this model is a methodology for estimating the occupancy of correct-path state in various structures in the core. Owing to its construction, the model provides fundamental insight into the precise mechanism of interaction between the workload and the microarchitecture that determines AVF. The model is used to cheaply perform sizing studies for structures in the core, design space exploration, and workload characterization for AVF.
The model is also used to quantitatively explain results that may appear counter-intuitive from aggregate performance metrics. The Mean Absolute Error in determining the AVF of a 4-wide out-of-order superscalar processor using the model is less than 7% for each structure, and the Normalized Mean Square Error in determining overall SER is 9.0%, compared to cycle-accurate microarchitectural simulation.
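
A common construction for this kind of first-order occupancy model in the AVF literature is Little's law (average occupancy = arrival rate × residence time). The sketch below illustrates that style of estimate with hypothetical numbers; it is not the dissertation's actual model:

```python
def structure_avf(ace_arrival_rate, residence_cycles, num_entries):
    """First-order AVF estimate via Little's law.

    Average occupancy of ACE (architecturally required) state:
        N = arrival rate (ACE entries/cycle) * residence time (cycles)
    AVF is then the fraction of the structure holding ACE state.
    Inputs can come from cheap profiling rather than detailed simulation.
    """
    ace_occupancy = min(ace_arrival_rate * residence_cycles, num_entries)
    return ace_occupancy / num_entries

def chip_ser(structures):
    """Soft error rate as the sum of AVF-derated raw FIT rates."""
    return sum(avf * raw_fit for avf, raw_fit in structures)

# Hypothetical example: a 64-entry issue queue in which ACE instructions
# arrive at 2 per cycle and wait 12 cycles on average.
avf_iq = structure_avf(ace_arrival_rate=2.0, residence_cycles=12.0,
                       num_entries=64)
raw_fit_iq = 1e-3 * 64 * 80   # assumed 1e-3 FIT/bit, 80 bits/entry
print(f"issue-queue AVF ~ {avf_iq:.2f}, "
      f"derated SER ~ {chip_ser([(avf_iq, raw_fit_iq)]):.2f} FIT")
```
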
19

Using analytical and numerical modeling to assess deep groundwater monitoring parameters at carbon capture, utilization, and storage sites

Porse, Sean Laurids 09 April 2014 (has links)
Carbon dioxide (CO₂) enhanced oil recovery (EOR) is becoming an important bridge to commercializing geologic sequestration (GS) in order to help reduce anthropogenic CO₂ emissions. Current U.S. environmental regulations require operators to monitor operational and groundwater aquifer changes within permitted bounds, depending on the type of injection activity. We view one goal of monitoring as maximizing the chances of detecting adverse fluid migration signals in overlying aquifers. To maximize these chances, it is important to: (1) understand the limitations of monitoring pressure versus geochemistry in deep aquifers (i.e., >450 m) using analytical and numerical models, (2) conduct sensitivity analyses of specific model parameters to support monitoring design conclusions, and (3) compare the breakthrough times (in years) of pressure and geochemistry signals. Pressure response was assessed using an analytical model, derived from Darcy's law, which solves the pressure diffusivity equation in radial coordinates for the fluid migration rate. Aqueous geochemistry response was assessed using PHAST, a numerical single-phase reactive solute transport program that solves the advection-reaction-dispersion equation for 2-D transport. The conceptual modeling domain for both approaches included a fault that allows vertical fluid migration and one monitoring well, completed through a series of alternating confining units and distinct (brine) aquifers overlying a depleted oil reservoir, as observed in the Texas Gulf Coast, USA. Physical and operational data, including lithology, formation hydraulic parameters, and water chemistry obtained from field samples, were used as inputs. Uncertainty was evaluated with a Monte Carlo approach by sampling the fault width (normal distribution) via Latin Hypercube and the hydraulic conductivity of each formation from a beta distribution of field data. Each model ran for 100 realizations over a 100-year modeling period. The monitoring well location was varied spatially and vertically with respect to the fault to assess the arrival times of pressure signals and changes in geochemical parameters. Results indicate that the pressure-based subsurface monitoring system provided higher probabilities of fluid migration detection in all candidate monitoring formations, especially those closest (i.e., 1300 m depth) to the possible fluid migration source. For aqueous geochemistry monitoring, formations with higher permeabilities (i.e., greater than 4 x 10⁻¹³ m²) provided better spatial distributions of chemical changes, but these changes never preceded pressure signal breakthrough, and in some cases were delayed by decades relative to pressure. The differences in signal breakthrough indicate that pressure monitoring is a better choice for early migration signal detection. However, both pressure and geochemical parameters should be considered as part of an integrated monitoring program on a site-specific basis, depending on regulatory requirements for longer-term (i.e., >50 years) monitoring. By assessing the probability of fluid migration detection using these monitoring techniques at this field site, it may be possible to extrapolate the results to other CCUS fields with different geological environments.
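
A standard closed form for the radial pressure-diffusion problem described above is the Theis line-source solution, Δp = (Qμ/4πkb)·W(u) with u = r²φμc_t/(4kt), where W is the exponential-integral well function. The sketch below evaluates it with representative (assumed) parameter values, not the field-site data:

```python
import math
from scipy.special import exp1  # exponential integral E1 = well function W(u)

def pressure_change(r, t, Q=1e-3, mu=6e-4, k=1e-13, b=20.0,
                    phi=0.2, ct=1e-9):
    """Line-source (Theis) solution for radial pressure diffusion.

    dP(r, t) = (Q*mu / (4*pi*k*b)) * W(u),  u = r^2*phi*mu*ct / (4*k*t)
    Representative (assumed) values:
    Q  : leakage rate through the fault (m^3/s)
    k  : permeability (m^2); b: formation thickness (m)
    phi: porosity; ct: total compressibility (1/Pa); mu: viscosity (Pa*s)
    Returns overpressure in Pa.
    """
    u = r**2 * phi * mu * ct / (4.0 * k * t)
    return Q * mu / (4.0 * math.pi * k * b) * exp1(u)

YEAR = 3.156e7  # seconds per year
for years in (1, 10, 50):
    dp = pressure_change(r=500.0, t=years * YEAR)
    print(f"Monitoring well 500 m from fault, t = {years:>2} yr: "
          f"dP = {dp/1e3:.1f} kPa")
```

Because pressure diffuses much faster than solutes advect, a calculation of this kind is what makes pressure the earlier-arriving signal in the comparison above.
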
20

An Analytical Approach to Efficient Circuit Variability Analysis in Scaled CMOS Design

January 2011 (has links)
Process variations have become increasingly important for scaled technologies starting at 45 nm. The increased variations are primarily due to random dopant fluctuations, line-edge roughness, and oxide thickness fluctuation. These variations greatly impact all aspects of circuit performance and pose a grand challenge to future robust IC design. To improve robustness, an efficient methodology is required that considers the effect of variations in the design flow. Analyzing the timing variability of complex circuits with HSPICE simulations is very time consuming. This thesis proposes an analytical model to predict variability in CMOS circuits that is quick and accurate. There are several analytical models for estimating nominal delay performance, but very little work has been done to accurately model delay variability. The proposed model is comprehensive and estimates nominal delay and variability as a function of transistor width, load capacitance, and transition time. First, models are developed for library gates, and their accuracy is verified with HSPICE simulations for the 45 nm and 32 nm technology nodes; the difference between predicted and simulated σ/μ for the library gates is less than 1%. Next, the accuracy of the nominal delay model is verified for larger circuits, including the ISCAS'85 benchmark circuits. For 45 nm technology, the model's predictions are within 4% of HSPICE simulated results while taking a small fraction of the simulation time. Delay variability is analyzed for various paths, and it is observed that non-critical paths can become critical because of Vth variation. Variability analysis on the shortest paths shows that the rate of hold violations increases enormously with increasing Vth variation. / Dissertation/Thesis / M.S. Electrical Engineering 2011
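
One common first-order form for such a delay-variability model is the alpha-power law, where t_d ∝ C_L·V_DD/(W·(V_DD − V_th)^α) and linear propagation of the threshold-voltage spread gives σ/μ ≈ α·σ_Vth/(V_DD − V_th). The sketch below illustrates that construction with assumed parameters, not the thesis's fitted 45 nm/32 nm models:

```python
def delay_stats(CL, W, Vdd=1.0, Vth=0.35, sigma_vth=0.03,
                alpha=1.3, K=1.1e-3):
    """First-order gate delay mean and spread via the alpha-power law.

    t_d = K * CL * Vdd / (W * (Vdd - Vth)**alpha)
    Linear propagation of the Vth spread gives
        sigma_t / mu_t ~ alpha * sigma_vth / (Vdd - Vth).
    All values (K, alpha, sigma_vth, ...) are illustrative assumptions;
    K is a fitted constant whose units absorb V, F, and m.
    """
    mu = K * CL * Vdd / (W * (Vdd - Vth) ** alpha)
    ratio = alpha * sigma_vth / (Vdd - Vth)
    return mu, ratio * mu

# Example: a 5 fF load driven by a gate of 0.5 um effective width
mu, sigma = delay_stats(CL=5e-15, W=0.5e-6)
print(f"mean delay = {mu*1e12:.1f} ps, sigma/mu = {sigma/mu:.1%}")
```

The σ/μ ratio depends on V_th but not on load or width in this first-order form, which is why a small set of per-gate parameters can cover a whole library.
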
