
MEASUREMENT OF HEAT TRANSFER ENHANCEMENT AND PRESSURE DROP FOR TURBULENCE ENHANCING INSERTS IN LIQUID-TO-AIR MEMBRANE ENERGY EXCHANGERS (LAMEEs)

2014 April
The fluid flow channels of modern heat exchangers are often equipped with flow disturbance elements that enhance the convective heat transfer coefficient in each channel. These structural or surface roughness elements enhance flow mixing and convective heat transfer at low Reynolds numbers (500 < Re < 2200) by promoting fluid mixing near the channel walls and by increasing the surface area. These elements, however, are accompanied by higher pressure drops compared to smooth channels without inserts. The Run-Around Membrane Energy Exchanger (RAMEE) system is an air-to-air energy recovery system comprised of two remote liquid-to-air membrane energy exchangers (LAMEEs) coupled by a pumped liquid desiccant loop. LAMEEs use semi-permeable membranes that are permeable to water vapor but impermeable to liquid water. The membranes separate the liquid desiccant from the air flow channels while still allowing both heat and water vapor transfer. The air channels are equipped with turbulence enhancing inserts which serve dual purposes: (a) to support the adjacent flexible membranes, and (b) to enhance the convective heat and mass transfer. This research experimentally investigates the increase in the air pressure drop and the average convective heat transfer coefficient after an air-side insert is installed. Measurements are performed in a small-scale wind tunnel for exchanger insert testing (WEIT) facility, which is designed to simulate the air channels of a LAMEE and to measure all the properties required to determine the flow friction factor and Nusselt number. Experiments are conducted in the test section under steady-state conditions at Reynolds numbers between 900 and 2200 for a channel with and without inserts. The 500-mm-long test section has a rectangular cross section (5 mm wide and 152.4 mm high) and is designed to maintain a specified constant heat flux on each side wall.
The flow is laminar and hydrodynamically fully developed at the entrance of the test section, and thermal development occurs within the test section. Nine different insert panels are tested. Each insert is comprised of several plastic rib spacers, each aligned parallel to the stream-wise direction, and several cross-bars aligned normal to the flow direction. The plastic rib spacers are placed 30 mm, 20 mm, or 10 mm apart, and the distance between the cylindrical cross-bars is 30 mm, 45 mm, 60 mm, or 90 mm. The measured convective heat transfer coefficient and friction factor have uncertainties of less than ±7% and ±11%, respectively. It is found that the Nusselt number and friction factor depend on the insert geometry and the Reynolds number. Empirical correlations are developed for the inserts to predict the Nusselt number and friction factor within an air channel of a LAMEE; the correlations reproduce the experimental data within ±9% and ±20%, respectively. Results show that the insert bar spacing is the most important factor in determining the convective heat transfer improvement. As an application of the experimental data in this thesis, experimental and numerical results from a LAMEE with an insert in each airflow channel are presented. The results show that the insert within the air channel of the LAMEE improves the total effectiveness of the LAMEE by 4% to 15%, depending on the insert geometry, the air flow Reynolds number, and the operating inlet conditions of the exchanger.
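The data reduction behind the reported quantities can be sketched with the standard definitions of the hydraulic diameter, Nusselt number, and Darcy friction factor. This is an illustrative reconstruction, not the thesis's code: the channel dimensions match the test section above, but the heat flux, temperatures, pressure drop, and air properties are invented example values.

```python
def hydraulic_diameter(width_m, height_m):
    """D_h = 4A / P for a rectangular duct."""
    area = width_m * height_m
    perimeter = 2.0 * (width_m + height_m)
    return 4.0 * area / perimeter

def nusselt_number(q_flux, t_wall, t_mean, d_h, k_air):
    """Nu = h * D_h / k, with h = q'' / (T_wall - T_mean)."""
    h = q_flux / (t_wall - t_mean)          # convective coefficient, W/m^2K
    return h * d_h / k_air

def friction_factor(dp, length_m, d_h, rho, velocity):
    """Darcy friction factor from the measured channel pressure drop."""
    return dp * (d_h / length_m) / (0.5 * rho * velocity ** 2)

# 5 mm x 152.4 mm cross section from the test section description.
d_h = hydraulic_diameter(0.005, 0.1524)
# All operating values below are hypothetical, for illustration only.
nu = nusselt_number(q_flux=250.0, t_wall=45.0, t_mean=25.0,
                    d_h=d_h, k_air=0.026)
f = friction_factor(dp=40.0, length_m=0.5, d_h=d_h, rho=1.2, velocity=3.0)
```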

Pretreatment and hydrolysis of recovered fibre for ethanol production

Ruffell, John
Energy utilization is a determining factor for standards of living around the world, and the current primary source of energy is fossil fuels. A potential source of liquid fuels that could ease the strain caused by diminishing petroleum resources is bioethanol. Effective exploitation of biomass materials requires a pretreatment to disrupt the lignin and cellulose matrix. The pretreatment utilized for this research was oxygen delignification, which is a standard process stage in the production of bleached chemical pulp. The model substrate utilized as a feedstock for bioethanol was recovered fibre. An analysis of the substrate's digestibility resulted in a hexose yield of approximately 23%, which justified the need for an effective pretreatment. An experimental design was performed to optimize the delignification conditions over a range of temperatures, caustic loadings, and reaction times. Equations were developed that describe the dependence of various response parameters on the experimental variables. An empirical model was also developed that predicts sugar concentrations from enzymatic hydrolysis based on the Kappa number, enzyme loading, and initial fibre concentration. A study of hydrolysis feeding regimes for untreated recovered fibre (87 Kappa), pretreated recovered fibre (17 Kappa), and bleached pulp (6 Kappa) showed that the batch feeding regime offers reduced complexity and high sugar yields for lower-Kappa substrates. To evaluate the possibility of lignin recovery, the pH of the delignification liquor was reduced by the addition of CO₂ and H₂SO₄, resulting in up to 25% lignin yield. An experiment on the effect of post-delignification fibre washing on downstream hydrolysis found that a washing efficiency of approximately 90% is required to achieve a hexose sugar yield of 85%.

Analytical and empirical models of online auctions

Ødegaard, Fredrik
This thesis provides a discussion of some analytical and empirical models of online auctions. The objective is to provide an alternative framework for analyzing online auctions and to characterize the distribution of intermediate prices. Chapter 1 provides a mathematical formulation of the eBay auction format and background on the data used in the empirical analysis. Chapter 2 analyzes policies for optimally disposing of inventory using online auctions. It is assumed that a seller has a fixed number of items to sell using a sequence of, possibly overlapping, single-item auctions. The decision the seller must make is when to start each auction. The decision involves a trade-off between a holding cost for each period an item remains unsold and a cannibalization effect among competing auctions. Consequently, the seller must trade off the expected marginal gain from the ongoing auctions against the expected marginal cost of further deferring the release of the unreleased items. The problem is formulated as a discrete-time Markov decision problem. Conditions are derived to ensure that the optimal release policy is a control-limit policy in the current price of the ongoing auctions. Chapter 2 focuses on the two-item case, which has sufficient complexity to raise challenging questions. An underlying assumption in Chapter 2 is that the auction dynamics can be captured by a set of transition probabilities. Chapter 3 shows, for two fixed bidding strategies, how the transition probabilities can be derived for a given auction format and bidder arrival process. The two bidding strategies analyzed are when bidders bid: 1) a minimal increment, and 2) their true valuation. Chapters 4 and 5 provide empirical analyses of 4,000 eBay auctions conducted by Dell. Chapter 4 provides a statistical model in which, over discrete time periods, prices of online auctions follow a zero-inflated gamma distribution.
Chapter 5 provides an analysis of the 44,000 bids placed in the auctions, based on bids following a gamma distribution. Both models presented in Chapters 4 and 5 are based on conditional probabilities given the price and elapsed time of an auction, and certain parameters of the competing auctions. Chapter 6 concludes the thesis with a discussion of the main results and possible extensions.
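The zero-inflated gamma price model of Chapter 4 can be illustrated with a short sampler: in each period the price increment is zero with some probability, and otherwise is a gamma draw. A minimal sketch, assuming hypothetical parameter values; these are not estimates from the Dell auction data.

```python
import random

def sample_price_increment(p_zero, shape, scale, rng=random):
    """Draw one per-period price increment from a zero-inflated gamma:
    zero with probability p_zero, else Gamma(shape, scale)."""
    if rng.random() < p_zero:
        return 0.0
    return rng.gammavariate(shape, scale)

random.seed(1)
# Hypothetical parameters: 40% zero-inflation, gamma mean shape*scale = 10.
draws = [sample_price_increment(0.4, 2.0, 5.0) for _ in range(10000)]
zero_share = sum(d == 0.0 for d in draws) / len(draws)   # ≈ 0.4
```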

A Study of the Mean Residual Life Function and Its Applications

Mbowe, Omar B 12 June 2006
The mean residual life (MRL) function is an important function for characterizing lifetime in survival analysis, reliability, actuarial science, economics, and other social sciences. Different methods have been proposed for inference on the MRL, but their coverage probabilities for small sample sizes are not good enough. In this thesis, we apply the empirical likelihood method and carry out a simulation study of the MRL function using different statistical distributions. The simulation study compares the empirical likelihood method and the normal approximation method, based on the average lengths of confidence intervals and coverage probabilities. We also compare the median lengths of confidence intervals for the MRL. We find that the empirical likelihood method gives better coverage probability and shorter confidence intervals than the normal approximation method for almost all the distributions considered. Applying the two methods to real data, we also find that the empirical likelihood method gives narrower pointwise confidence bands.
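The quantity under study has a simple plug-in form for complete (uncensored) data: MRL(t) is the average remaining lifetime among observations surviving past t. A minimal sketch with made-up data; the thesis's empirical likelihood inference machinery is not reproduced here.

```python
def empirical_mrl(lifetimes, t):
    """Plug-in estimate of E[X - t | X > t] from complete lifetime data."""
    survivors = [x - t for x in lifetimes if x > t]
    if not survivors:          # no observation exceeds t
        return 0.0
    return sum(survivors) / len(survivors)

data = [2.0, 3.0, 5.0, 7.0, 11.0]
mrl_at_4 = empirical_mrl(data, 4.0)   # mean of (5-4, 7-4, 11-4) = 11/3
```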

Empirical Likelihood Confidence Intervals for the Sensitivity of a Continuous-Scale Diagnostic Test

Davis, Angela Elaine 04 May 2007
Diagnostic testing is essential to distinguish non-diseased individuals from diseased individuals. More accurate tests lead to improved treatment and thus reduce medical mistakes. Sensitivity and specificity are two important measurements of the accuracy of a diagnostic test. When the test results are continuous, it is of interest to construct a confidence interval for the sensitivity at a fixed level of specificity for the test. In this thesis, we propose three empirical likelihood intervals for the sensitivity. Simulation studies are conducted to compare the empirical likelihood-based confidence intervals with the existing normal approximation-based confidence interval. Our studies show that the new intervals have better coverage probability than the normal approximation-based interval in most simulation settings.
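The target quantity can be illustrated directly: fix the threshold at the specificity-level quantile of the non-diseased scores, then count the diseased scores above it. A hypothetical sketch with invented scores and a simple order-statistic quantile rule; it does not reproduce the thesis's interval construction.

```python
def sensitivity_at_specificity(diseased, healthy, specificity):
    """Empirical sensitivity of a continuous-scale test at a fixed
    specificity, using a simple order-statistic quantile of the
    healthy-group scores as the decision threshold."""
    s = sorted(healthy)
    k = min(len(s) - 1, max(0, int(specificity * len(s)) - 1))
    threshold = s[k]
    return sum(x > threshold for x in diseased) / len(diseased)

# Invented example scores: higher score suggests disease.
healthy = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
diseased = [0.7, 0.9, 1.1, 1.3, 1.5, 1.7, 1.9, 2.1]
sens = sensitivity_at_specificity(diseased, healthy, specificity=0.9)
```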

Empirical Likelihood Confidence Intervals for the Ratio and Difference of Two Hazard Functions

Zhao, Meng 21 July 2008
In biomedical research and lifetime data analysis, the comparison of two hazard functions usually plays an important role in practice. In this thesis, we consider the standard independent two-sample framework under right censoring. We construct efficient and useful confidence intervals for the ratio and difference of two hazard functions using smoothed empirical likelihood methods. The empirical log-likelihood ratio is derived, and its asymptotic distribution is shown to be chi-squared. Furthermore, the proposed method can be applied to medical diagnosis research. Simulation studies show that the proposed EL confidence intervals perform better in terms of coverage accuracy and average length than the traditional normal approximation method. Finally, our methods are illustrated with real clinical trial data. It is concluded that the empirical likelihood methods provide better inferential outcomes.
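The objects being compared can be estimated nonparametrically; a standard choice under right censoring is the Nelson-Aalen cumulative hazard. A hypothetical sketch with made-up data and distinct event times; the thesis's smoothed empirical likelihood intervals are not reproduced here.

```python
def nelson_aalen(times, events, t):
    """Nelson-Aalen cumulative hazard at time t: sum of d_i / n_i over
    observed event times <= t (events[i] = 1 for an event, 0 if censored).
    Assumes distinct observation times for simplicity."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    h = 0.0
    for i in order:
        if times[i] > t:
            break
        if events[i]:
            h += 1.0 / at_risk
        at_risk -= 1          # subject leaves the risk set either way
    return h

# Invented right-censored sample: 0 marks a censored observation.
times1, events1 = [1.0, 2.0, 3.0, 4.0, 5.0], [1, 1, 0, 1, 1]
h1 = nelson_aalen(times1, events1, 4.0)   # 1/5 + 1/4 + 1/2
```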

Empirical Likelihood Confidence Intervals for the Difference of Two Quantiles with Right Censoring

Yau, Crystal Cho Ying 21 November 2008
In this thesis, we study two independent samples under right censoring. Using a smoothed empirical likelihood method, we investigate the difference of quantiles between the two samples and construct pointwise confidence intervals for it. The empirical log-likelihood ratio is proposed, and its asymptotic limit is shown to be a chi-squared distribution. In simulation studies, we compare the empirical likelihood method and the normal approximation method in terms of coverage accuracy and average length of confidence intervals, and conclude that the empirical likelihood method performs better. Finally, real clinical trial data are used for illustration, and numerical examples are presented to demonstrate the efficacy of the method.

A New Jackknife Empirical Likelihood Method for U-Statistics

Ma, Zhengbo 25 April 2011
U-statistics generalize the concept of the mean of independent, identically distributed (i.i.d.) random variables and are widely utilized in many estimating and testing problems. The standard empirical likelihood (EL) method for U-statistics is computationally expensive because of its nonlinear constraint. The jackknife empirical likelihood method largely relieves the computational burden by circumventing the construction of the nonlinear constraint. In this thesis, we adopt a new jackknife empirical likelihood method to make inference for the general volume under the ROC surface (VUS), which is a typical U-statistic. Monte Carlo simulations show that the EL confidence intervals perform well in terms of coverage probability and average length for various sample sizes.
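The key device in jackknife empirical likelihood is the jackknife pseudo-value, V_i = n·U_n − (n−1)·U_(−i), which turns a U-statistic into an i.i.d.-like sample whose mean equals U_n. A sketch using a simple one-sample U-statistic (Gini's mean difference) rather than the VUS, which would need three-class data.

```python
from itertools import combinations

def gini_mean_difference(x):
    """U-statistic with kernel |a - b|: average absolute pairwise gap."""
    n = len(x)
    return sum(abs(a - b) for a, b in combinations(x, 2)) / (n * (n - 1) / 2)

def jackknife_pseudo_values(x, stat):
    """V_i = n * U_n - (n - 1) * U_(-i); their mean equals U_n exactly
    for U-statistics."""
    n = len(x)
    u_full = stat(x)
    return [n * u_full - (n - 1) * stat(x[:i] + x[i + 1:]) for i in range(n)]

x = [1.0, 2.0, 4.0, 8.0]
pseudo = jackknife_pseudo_values(x, gini_mean_difference)
mean_pseudo = sum(pseudo) / len(pseudo)   # equals gini_mean_difference(x)
```

Standard EL would then be applied to the pseudo-values as if they were i.i.d. observations with mean U_n.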

Empirical analysis of wireless sensor networks

Gupta, Ashish 10 September 2010
Wireless sensor networks are collections of wireless nodes deployed to monitor certain phenomena of interest. Once a node takes measurements, it transmits them to a base station over a wireless channel. The base station collects data from all the nodes and performs further analysis. To save energy, it is often useful to build clusters, with the head of each cluster communicating with the base station. Initially, we perform a simulation analysis of Zigbee networks in which a few nodes are more powerful than the others. The results show that in mobile heterogeneous sensor networks, due to the orphaning phenomenon and the high cost of route discovery and maintenance, the performance of the network degrades with respect to a homogeneous network. The core of this thesis is an empirical analysis of sensor networks. Due to their resource constraints, low-power wireless sensor networks face several technical challenges; many protocols work well on simulators but do not behave as expected in actual deployments. For example, sensors physically placed at the top of a heap experience a free-space propagation model, while sensors at the bottom of the heap have sharply fading channel characteristics. In this thesis, we show the impact of asymmetric links on the wireless sensor network topology and that the link quality between sensors varies considerably. We propose two ways to improve the performance of Link Quality Indicator (LQI) based algorithms in real asymmetric-link sensor networks. In the first, the network has no choice but to include some sensors that can transmit over a larger distance and become cluster heads; the number of cluster heads can be given by a Matérn hard-core process. In the second, we propose HybridLQI, which improves the performance of LQI-based algorithms without adding any overhead to the network. Later, we apply theoretical clustering approaches to real-world sensor networks.
We deploy the Matérn hard-core process and the Max-Min cluster formation heuristic on real Tmote nodes in sparse as well as highly dense networks. Empirical results show that the clustering process based on the Matérn hard-core process outperforms Max-Min cluster formation in terms of memory requirement, ease of implementation, and the number of messages needed for clustering. Finally, using an absorbing Markov chain and measurements, we study the performance of load balancing techniques in real sensor networks.
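Matérn hard-core (type II) thinning for cluster-head selection can be sketched in a few lines: every node draws a random mark and survives only if no neighbour within the hard-core radius holds a smaller mark, guaranteeing no two heads are closer than that radius. A hypothetical illustration with random node positions, not the thesis's Tmote implementation.

```python
import math
import random

def matern_hardcore(points, radius, rng=random):
    """Type-II Matern hard-core thinning: keep a point iff no other point
    within `radius` carries a smaller random mark."""
    marks = [rng.random() for _ in points]
    survivors = []
    for i, p in enumerate(points):
        if all(marks[j] >= marks[i] or math.dist(p, q) >= radius
               for j, q in enumerate(points) if j != i):
            survivors.append(p)
    return survivors

random.seed(7)
# 200 hypothetical node positions in a unit square; 0.15 hard-core radius.
nodes = [(random.random(), random.random()) for _ in range(200)]
heads = matern_hardcore(nodes, radius=0.15)
```

By construction no two surviving cluster heads lie within the hard-core radius of each other, which is what bounds the head density.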

An empirical power model of a low power mobile platform

Magudilu Vijayaraj, Thejasvi Magudilu 20 September 2013
Power is one of today's major constraints for both hardware and software design. Thus the need to understand the statistics and distribution of power consumption from a hardware and software perspective is high. Power models satisfy this requirement to a certain extent, by estimating the power consumption of a subset of applications or by providing a detailed power consumption distribution of a system. To date, many power models have been proposed for desktop and mobile platforms. However, most of these models were created from power measurements performed on the entire system while different microbenchmarks stressed different blocks of the system. The measured power and the profiled information from the subsystem-stressing benchmarks were then used to create a regression-based model. The power/energy prediction accuracy of models created in this way depends on both the method and accuracy of the power measurements and the type of regression used in generating the model. This work tries to eliminate the dependency of power model accuracy on the type of regression analysis by performing power measurements at subsystem granularity. When the power measurement of a single subsystem is obtained while stressing it, one knows the exact power it is consuming, instead of obtaining the power consumption of the entire system - without knowing the power consumption of the subsystem of interest - and depending on the regression analysis to provide the answer. We propose a generic method for creating power models of individual subsystems of mobile platforms, and validate the method by presenting an empirical power model of the OMAP4460-based Pandaboard-ES, created using the proposed method. The created model has an average energy prediction error of approximately -2.7% for the entire Pandaboard-ES system.
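A subsystem-granularity power model of the kind described above typically has an additive form: idle power plus a per-subsystem term scaled by that subsystem's activity. A toy sketch; the subsystem names and coefficients are invented, not measurements from the Pandaboard-ES.

```python
# Hypothetical per-subsystem power coefficients (watts at full utilisation).
SUBSYSTEM_COEFFS_W = {
    "cpu": 1.20,
    "gpu": 0.90,
    "memory": 0.45,
    "wifi": 0.30,
}
IDLE_POWER_W = 0.60   # invented platform idle power

def estimate_power(utilisation):
    """Estimate total platform power from per-subsystem utilisation,
    given as a mapping {subsystem: fraction in [0, 1]}."""
    return IDLE_POWER_W + sum(
        SUBSYSTEM_COEFFS_W[s] * u for s, u in utilisation.items())

p = estimate_power({"cpu": 0.5, "memory": 0.2})   # 0.6 + 0.6 + 0.09
```

Because each coefficient comes from stressing one subsystem in isolation, no regression over whole-system measurements is needed to attribute power to a subsystem.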
