351

Simulation of nonlinear internal wave based on two-layer fluid model

Wu, Chung-lin 25 August 2011 (has links)
The main topic of this research is the simulation of internal wave interaction with a two-dimensional numerical model developed by Lynett & Liu (2002) of Cornell University and later modified by Cheng et al. (2005). The governing equations comprise the two-dimensional momentum and continuity equations. The model assumes constant upper- and lower-layer densities; hence these densities, as well as the upper-layer thickness, must be determined before the simulation. This study determines the interface depth and the layer densities from the buoyancy frequency distribution, empirical orthogonal functions (EOF), and the eigenvalue problem based on the measured density profile. In addition, a method based on matching the two-layer KdV equation with the KdV equation of a continuously stratified fluid is proposed: by minimizing the differences in the linear celerity, nonlinear, and dispersion coefficients, the upper-layer thickness can also be determined. However, when the total water depth exceeds 500 meters, the interface obtained by the KdV method lies much deeper than the depth of the maximum temperature drop. Thus, the idealized buoyancy frequency formulas proposed by Vlasenko et al. (2005) or Xie et al. (2010) are used to modify the buoyancy frequency. The internal waves in the Luzon Strait and the South China Sea are well known and deserve detailed study. We use the KdV method to determine the parameters of the two-layer fluid model, which speeds up the simulation of the internal wave phenomena observed in satellite images.
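The coefficient-matching step described above can be sketched as follows: the KdV coefficients of a two-layer fluid (linear celerity, nonlinear coefficient, dispersion coefficient) are computed as functions of the upper-layer thickness and compared against target values from the continuously stratified profile. This is a minimal sketch; the densities, depth, and target coefficients below are illustrative placeholders, not values from the thesis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

g = 9.81          # gravity (m/s^2)
rho0 = 1025.0     # reference density (kg/m^3)
drho = 2.0        # layer density difference (kg/m^3), illustrative
H = 500.0         # total water depth (m), illustrative

def kdv_coeffs_two_layer(h1):
    """Linear celerity c0, nonlinear coefficient alpha, and dispersion
    coefficient beta of the two-layer KdV equation (Boussinesq, rigid lid;
    sign conventions vary in the literature)."""
    h2 = H - h1
    gp = g * drho / rho0                      # reduced gravity
    c0 = np.sqrt(gp * h1 * h2 / (h1 + h2))    # linear long-wave celerity
    alpha = 1.5 * c0 * (h1 - h2) / (h1 * h2)  # quadratic nonlinearity
    beta = c0 * h1 * h2 / 6.0                 # dispersion
    return c0, alpha, beta

# Hypothetical target coefficients from the continuously stratified KdV
# (in practice obtained from the eigenvalue problem for the measured profile).
c_t, a_t, b_t = 1.1, -0.01, 9.0e3

def mismatch(h1):
    c0, alpha, beta = kdv_coeffs_two_layer(h1)
    # normalised squared differences of the three coefficients
    return ((c0 - c_t) / c_t) ** 2 + ((alpha - a_t) / a_t) ** 2 + ((beta - b_t) / b_t) ** 2

res = minimize_scalar(mismatch, bounds=(10.0, H - 10.0), method="bounded")
print(f"equivalent upper-layer thickness h1 = {res.x:.1f} m")
```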
352

Novel Pulse Train Generation Method and Signal analysis

Mao, Chia-Wei 30 August 2011 (has links)
In this thesis we use a pulse shaping system to generate pulse trains, and we use empirical mode decomposition (EMD) and the short-time Fourier transform (STFT) to analyze terahertz radiation signals. The pulse shaping system modulates the amplitude and phase of the light, which enables pulse train generation. Compared with other methods, our approach first improves the stability of the time-delay control; second, it makes it easier to control the time delay and the number of pulses in the pulse train. In the past, the onset time of the high-frequency components was found by inspecting the time-domain terahertz waveform directly, but if that onset lies near the peak power of the terahertz radiation, it cannot be identified this way. The STFT can reveal the relationship between intensity and time, but if the modes in the signal have different spectral widths, the STFT must use different time windows to obtain the best frequency and time resolution. However, time windows of different widths give different frequency resolutions, and the intensity-time relationship changes with the frequency resolution, so different choices yield different results; a new signal analysis method is therefore needed. To solve this problem, we use EMD to decompose the different modes of the terahertz signal into intrinsic mode functions (IMFs), and then analyze them with the STFT to locate the onset time of the high-frequency components. Because the modes are separated into different IMFs, the STFT can be applied with a single time window. We expect this method to be useful for narrow-band, frequency-tunable THz wave generation.
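A minimal sketch of the EMD-then-STFT analysis described above, assuming the PyEMD package (import name `PyEMD`) and `scipy.signal.stft`; the synthetic waveform, sampling rate, and 1 THz threshold are placeholders, not the thesis data.

```python
import numpy as np
from scipy.signal import stft
from PyEMD import EMD  # pip install EMD-signal; assumed available

fs = 1.0e14                        # sampling rate (Hz), illustrative
t = np.arange(0, 20e-12, 1.0 / fs)
# synthetic "THz" waveform: a low-frequency component plus a late high-frequency burst
signal = np.sin(2 * np.pi * 0.3e12 * t)
signal += 0.4 * np.sin(2 * np.pi * 1.5e12 * t) * (t > 12e-12)

# 1) separate modes with EMD so that each IMF is roughly narrow-band
imfs = EMD().emd(signal)

# 2) apply the STFT with a single time window to each IMF
for k, imf in enumerate(imfs):
    f, tau, Z = stft(imf, fs=fs, nperseg=256)
    dominant = f[np.argmax(np.abs(Z), axis=0)]   # dominant frequency vs. time
    high = dominant > 1.0e12                     # "high-frequency" threshold (assumed)
    onset = tau[np.argmax(high)] if np.any(high) else None
    print(f"IMF {k}: high-frequency onset ~ {onset}")
```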
353

Speckle-reduction using the bidimensional empirical mode decomposition for fringe analysis

Chen, Ting-wei 31 August 2011 (has links)
Phase extraction from fringe patterns is an essential procedure in optical metrology and interferometry. However, when coherent light is used, speckle noise is introduced and degrades the precision of the wrapped phase map. In this thesis, we use the bidimensional empirical mode decomposition (BEMD) to perform speckle reduction. Different interpolation methods within BEMD are compared in terms of their speckle-reduction performance. Finally, a database is developed to make BEMD an automated tool for noise reduction; the database also shows that the performance of BEMD is strongly related to the fringe period, the fringe visibility, and the SNR of the speckle noise.
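The denoising idea can be illustrated with a crude single-scale BEMD sift: local extrema of the image are interpolated into upper and lower envelopes, their mean is subtracted to isolate the finest-scale (noise-dominated) BIMF, and that BIMF is discarded. This is only a sketch under simple assumptions (one interpolation method, a few sift iterations, a synthetic fringe pattern), not the interpolation-comparison study in the thesis.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import griddata

def extract_first_bimf(img, size=7, n_sift=3):
    """Crude BEMD pass: sift out the first (finest-scale) bidimensional IMF
    using local extrema and scattered-data envelope interpolation."""
    h = img.astype(float).copy()
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    pts = np.column_stack([yy.ravel(), xx.ravel()])
    for _ in range(n_sift):
        maxima = (h == maximum_filter(h, size))
        minima = (h == minimum_filter(h, size))
        upper = griddata(np.argwhere(maxima), h[maxima], pts,
                         method="cubic", fill_value=h.max()).reshape(h.shape)
        lower = griddata(np.argwhere(minima), h[minima], pts,
                         method="cubic", fill_value=h.min()).reshape(h.shape)
        h = h - 0.5 * (upper + lower)      # subtract the local mean envelope
    return h

# synthetic noisy fringe pattern (placeholder, not the thesis data)
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
fringes = 0.5 + 0.5 * np.cos(2 * np.pi * x / 16.0)
noisy = fringes + 0.2 * rng.standard_normal(fringes.shape)

bimf1 = extract_first_bimf(noisy)
denoised = noisy - bimf1                   # discard the noise-dominated first BIMF
print("residual std:", np.std(denoised - fringes))
```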
354

Turbulent flows induced by the interaction of continuous internal waves and a sloping bottom

Kuo, Je-Cheng 08 October 2012 (has links)
Internal waves occur at the interface between two fluid layers with density stratification. In order to better understand the characteristics of continuous internal waves, a series of experiments was conducted in a laboratory tank. The upper and lower layers are fresh water 15 cm thick and salt water 30 cm thick, respectively. The periods of the internal waves are 2.5, 5.5 and 6.6 sec. A micro-ADV is used to measure velocity profiles. Wave profiles at the density interface and the free surface are monitored by ultrasonic and capacitance wave gauges, respectively. Our results indicate that the particle velocities (u and w) above and below the density interface have opposite directions. The speed peaks near the density interface and weakens farther away from it. Empirical mode decomposition is used to remove noise from the observed particle velocities, and the resulting period is consistent with that derived from the interface elevations. The observed particle velocities also compare favorably with the theoretical results. When internal waves propagate without the interference of a sloping bottom, the induced turbulence is rather insignificant and is appreciable only near the density interface. With a sloping bottom, the internal waves gradually shoal and deform, the crest becomes sharp and steep, and finally the waves become unstable, break, and overturn. In this study the effects of the bottom slope and the steepness of the internal waves on the reflectivity of incoming waves are investigated. The reflectivity is smaller for gentler slopes, and it increases and approaches a constant value for steeper slopes. The observed energy dissipation rate ε is higher near the slope. Three methods were used to estimate the energy dissipation rate and shear stress, namely the inertial dissipation, TKE, and auto-correlation methods. The ε estimated from the auto-correlation method is larger than that from the other two methods, but their trends are similar. The energy dissipation rate is found to increase with a gentler sloping bottom.
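The TKE method mentioned above can be sketched as follows: velocity fluctuations are obtained by removing the wave-band signal (a simple moving average stands in here for the EMD-based separation), and the bed shear stress is taken proportional to the turbulent kinetic energy. The constant C1 ≈ 0.19 is the conventional choice, and the synthetic ADV record is a placeholder, not the laboratory data.

```python
import numpy as np

rho = 1000.0   # water density (kg/m^3)
C1 = 0.19      # conventional TKE proportionality constant (assumed)
fs = 25.0      # ADV sampling rate (Hz), illustrative

# synthetic ADV record: wave orbital motion plus random "turbulence"
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
u = 0.05 * np.sin(2 * np.pi * t / 5.5) + 0.005 * rng.standard_normal(t.size)
v = 0.002 * rng.standard_normal(t.size)
w = 0.02 * np.cos(2 * np.pi * t / 5.5) + 0.004 * rng.standard_normal(t.size)

def fluctuations(x, window=int(5.5 * fs)):
    """Remove the wave-band (low-frequency) part with a moving average."""
    kernel = np.ones(window) / window
    return x - np.convolve(x, kernel, mode="same")

up, vp, wp = (fluctuations(x) for x in (u, v, w))
tke = 0.5 * (np.mean(up**2) + np.mean(vp**2) + np.mean(wp**2))
tau = C1 * rho * tke                       # TKE-method bed shear stress
u_star = np.sqrt(tau / rho)                # friction velocity
print(f"TKE = {tke:.2e} m^2/s^2, tau = {tau:.3f} Pa, u* = {u_star:.4f} m/s")
```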
355

Object-oriented software development effort prediction using design patterns from object interaction analysis

Adekile, Olusegun 15 May 2009 (has links)
Software project management is arguably the most important activity in modern software development projects. In the absence of realistic and objective management, the software development process cannot be managed effectively. Software development effort estimation is one of the most challenging and researched problems in project management. With the advent of object-oriented development, there have been studies to transpose some of the existing effort estimation methodologies to the new development paradigm. However, no holistic estimation approach exists that allows an initial estimate produced in the requirements-gathering phase to be refined through the design phase. A SysML Point methodology is proposed that is based on a common, structured, and comprehensive modeling language (OMG SysML) and that factors the models corresponding to the primary phases of object-oriented development into the effort estimate. This dissertation presents a Function Point-like approach, named Pattern Point, which was conceived to estimate the size of object-oriented products using the design patterns found in object interaction modeling from the late OO analysis phase. In particular, two measures are proposed (PP1 and PP2) that are theoretically validated, showing that they satisfy well-known properties necessary for size measures. An initial empirical validation is performed to assess the usefulness and effectiveness of the proposed measures in predicting the development effort of object-oriented systems. Moreover, a comparative analysis is carried out, taking into account several other size measures. The experimental results show that the Pattern Point measure can be effectively used during the OOA phase to predict effort values with a high degree of confidence. The PP2 metric yielded the best results, with an aggregate PRED(0.25) = 0.874.
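The evaluation criterion quoted above, PRED(0.25), is the fraction of projects whose estimated effort falls within 25% of the actual effort. A minimal sketch with made-up effort values (not the dissertation's data set) is shown below, together with the related MMRE metric.

```python
import numpy as np

def pred(actual, estimated, q=0.25):
    """PRED(q): share of estimates whose magnitude of relative error is at most q."""
    actual, estimated = np.asarray(actual, float), np.asarray(estimated, float)
    mre = np.abs(actual - estimated) / actual
    return np.mean(mre <= q)

def mmre(actual, estimated):
    """Mean magnitude of relative error."""
    actual, estimated = np.asarray(actual, float), np.asarray(estimated, float)
    return np.mean(np.abs(actual - estimated) / actual)

# illustrative effort values in person-hours (placeholders)
actual    = [120, 340, 95, 410, 220, 160]
estimated = [110, 360, 80, 500, 230, 150]

print(f"PRED(0.25) = {pred(actual, estimated):.3f}")
print(f"MMRE       = {mmre(actual, estimated):.3f}")
```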
356

Mitigating cotton revenue risk through irrigation, insurance, and/or hedging

Bise, Elizabeth Hart 15 May 2009 (has links)
Texas is the leading U.S. producer of cotton, and the U.S. is the largest international market supplier of cotton. Risks and uncertainties plague Texas cotton producers with unpredictable weather, insects, diseases, and price variability. Risk management studies have examined the risk-reducing capabilities of alternative management strategies, but few have looked at the interaction of several strategies used in different combinations. The research in this study focuses on managing the risk faced by cotton farmers in Texas using irrigation, put options, and yield insurance. The primary objective was to analyze the interactions of irrigation, put options, and yield insurance as risk management strategies on the economic viability of a 1,000-acre cotton farm in the Lower Rio Grande Valley (LRGV) of Texas. The secondary objective was to determine the best combination of these strategies for decision makers with alternative preferences for risk aversion. Stochastic values for yields and prices were used in simulating a whole-farm financial statement for a 1,000-acre furrow-irrigated cotton farm in the LRGV with three types of risk management strategies. Net returns were simulated using a multivariate empirical distribution for 16 risk management scenarios. The scenarios were ranked across a range of risk-aversion levels using stochastic efficiency with respect to a function. Analyses for risk-averse decision makers showed that multiple irrigations are preferred, and that yield insurance is strongly preferred at lower irrigation levels. The benefits of purchasing put options increase with yields, so they are more beneficial when higher yields are expected from more irrigation applications.
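Stochastic efficiency with respect to a function (SERF) ranks scenarios by their certainty equivalents over a range of risk-aversion coefficients. A minimal sketch is given below under a negative-exponential utility assumption, with simulated net-return draws that are placeholders rather than the study's simulation output.

```python
import numpy as np

def certainty_equivalent(net_returns, r):
    """Certainty equivalent under negative-exponential utility U(x) = 1 - exp(-r*x)."""
    x = np.asarray(net_returns, dtype=float)
    if abs(r) < 1e-12:                  # risk-neutral limit
        return x.mean()
    return -np.log(np.mean(np.exp(-r * x))) / r

rng = np.random.default_rng(1)
# simulated whole-farm net returns ($) for two illustrative scenarios (placeholders)
scenarios = {
    "irrigated + yield insurance": rng.normal(250_000, 60_000, 500),
    "dryland, no insurance":       rng.normal(280_000, 150_000, 500),
}

for r in (0.0, 1e-6, 5e-6):             # range of absolute risk-aversion coefficients
    ranking = sorted(scenarios, key=lambda s: -certainty_equivalent(scenarios[s], r))
    print(f"r = {r:.0e}: preferred scenario = {ranking[0]}")
```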
357

A Recommendation System for Preconditioned Iterative Solvers

George, Thomas 2009 December 1900 (has links)
Solving linear systems of equations is an integral part of most scientific simulations. In recent years, there has been considerable interest in large-scale scientific simulation of complex physical processes. Iterative solvers are usually preferred for solving linear systems of such magnitude due to their lower computational requirements. Currently, computational scientists have access to a multitude of iterative solver options available as "plug-and-play" components in various problem solving environments. Choosing the right solver configuration from the available choices is critical for ensuring convergence and achieving good performance, especially for large complex matrices. However, identifying the "best" preconditioned iterative solver and parameters is challenging even for an expert due to issues such as the lack of a unified theoretical model, the complexity of the solver configuration space, and multiple selection criteria. Therefore, it is desirable to have principled practitioner-centric strategies for identifying solver configuration(s) for solving large linear systems. The current dissertation presents a general practitioner-centric framework for (a) problem-independent retrospective analysis, and (b) problem-specific predictive modeling of performance data. Our retrospective performance analysis methodology introduces new metrics, such as the area under the performance-profile curve and a conditional variance-based fine-tuning score, that facilitate a robust comparative performance evaluation as well as parameter sensitivity analysis. We present results using this analysis approach on a number of popular preconditioned iterative solvers available in packages such as PETSc, Trilinos, Hypre, ILUPACK, and WSMP. The predictive modeling of performance data is an integral part of our multi-stage approach for solver recommendation. The key novelty of our approach lies in our modular learning-based formulation that comprises three subproblems: (a) solvability modeling, (b) performance modeling, and (c) performance optimization, which provides the flexibility to effectively target challenges such as software failure and multiobjective optimization. Our choice of a "solver trial" instance space, represented in terms of the characteristics of the corresponding "linear system", "solver configuration" and their interactions, leads to a scalable and elegant formulation. Empirical evaluation of our approach on performance datasets associated with fairly large groups of solver configurations demonstrates that one can obtain high-quality recommendations that are close to the ideal choices.
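One of the retrospective metrics mentioned above, the area under the performance-profile curve, can be sketched as follows: Dolan-Moré-style profiles are built from a timing matrix and integrated over a range of performance ratios. The solver names and times below are placeholders, not results from the dissertation.

```python
import numpy as np

# rows = linear systems, columns = solver configurations; entries = solve time (s)
# np.inf marks a failed run. All values are illustrative placeholders.
times = np.array([
    [1.2,  0.9,    3.5],
    [0.4,  0.5,    np.inf],
    [10.0, 7.5,    6.0],
    [2.2,  np.inf, 1.8],
])
solvers = ["ilu+gmres", "amg+cg", "jacobi+bicgstab"]

best = np.min(times, axis=1, keepdims=True)
ratios = times / best                        # performance ratios r_{p,s}

taus = np.linspace(1.0, 10.0, 200)
for j, name in enumerate(solvers):
    # rho_s(tau): fraction of problems solved within a factor tau of the best solver
    profile = np.array([np.mean(ratios[:, j] <= tau) for tau in taus])
    auc = np.trapz(profile, taus) / (taus[-1] - taus[0])   # normalised area under curve
    print(f"{name:>16s}: AUC = {auc:.3f}")
```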
358

Mechanical Behavior of Small-Scale Channels in Acid-etched Fractures

Deng, Jiayao 2010 December 1900 (has links)
The conductivity of acid-etched fractures depends strongly on the spaces along the fracture, created by uneven etching of the fracture walls, that remain open after fracture closure. Formation heterogeneities such as variations in mineralogy and permeability result in channels that contribute significantly to the fracture conductivity. Current numerical simulators and empirical correlations do not account for this channeling characteristic because of scale limitations. The purpose of this study is to develop new correlations for the conductivity of acid-etched fractures at the intermediate scale. The new correlations close the gap between laboratory-scale measurements and macro-scale acid fracture models. Beginning with acid-etched fracture width profiles and conductivity at zero closure stress obtained in previous work, I modeled the deformation of the fracture surfaces as closure stress is applied to the fracture. At any cross-section along the fracture, I approximated the fracture shape as a series of elliptical openings. Assuming elastic behavior of the rock, the numerical simulation shows how many elliptical openings remain open, and their sizes, as a function of the applied stress. The sections of the fracture that are closed are assigned a conductivity due to small-scale roughness features, using a correlation obtained from laboratory measurements of acid fracture conductivity as a function of closure stress. The overall conductivity of the fracture is then obtained by numerically modeling the flow through this heterogeneous system. The statistical parameters of the permeability and mineralogy distributions, and Young’s modulus, are the primary factors that affect the overall conductivity of acid-etched fractures. A large number of deep, narrow channels through the entire fracture leads to high conductivity when the rock is strong enough to resist the closure stress effectively. Based on extensive numerical experiments, I developed new correlations in three categories to predict the fracture conductivity after closure. Essentially, they are exponential functions that incorporate the influential parameters. Combined with the correlations for conductivity at zero closure stress from previous work, the new correlations are applicable to a wide range of situations.
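The channel-closure step can be illustrated with a toy calculation: each cross-section channel is treated as a crack-like elliptical opening whose maximum width shrinks roughly as 4σa/E' under closure stress σ (a simplified plane-strain scaling assumed here, not necessarily the deformation model used in the thesis), and the contribution of the channels that stay open is tallied with a cubic-law scaling. All material and geometry values are placeholders.

```python
import numpy as np

E = 30.0e9               # Young's modulus (Pa), illustrative carbonate value
nu = 0.25                # Poisson's ratio
Ep = E / (1.0 - nu**2)   # plane-strain modulus

# one cross-section idealised as channels: (half-length a [m], initial max width w0 [m])
channels = [(0.05, 2.0e-3), (0.12, 5.0e-4), (0.30, 3.0e-3)]

def open_width(a, w0, sigma_c):
    """Remaining maximum width of a channel under closure stress sigma_c,
    using a crack-like plane-strain closure ~ 4*sigma*a/E' (simplified)."""
    closure = 4.0 * sigma_c * a / Ep
    return max(w0 - closure, 0.0)

for sigma_c in (0.0, 10e6, 30e6, 60e6):            # closure stress (Pa)
    widths = [open_width(a, w0, sigma_c) for a, w0 in channels]
    n_open = sum(w > 0 for w in widths)
    kf_w = sum(w**3 for w in widths) / 12.0        # cubic-law conductivity proxy
    print(f"sigma_c = {sigma_c/1e6:5.1f} MPa: {n_open} channels open, kf*w ~ {kf_w:.3e}")
```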
359

A Prescription for Partial Synchrony

Sastry, Srikanth 2011 May 1900 (has links)
Algorithms in message-passing distributed systems often require partial synchrony to tolerate crash failures. Informally, partial synchrony refers to systems where timing bounds on communication and computation may exist, but the knowledge of such bounds is limited. Traditionally, the foundation for the theory of partial synchrony has been real time: a time base measured by counting events external to the system, like the vibrations of Cesium atoms or piezoelectric crystals. Unfortunately, algorithms that are correct relative to many real-time based models of partial synchrony may not behave correctly in empirical distributed systems. For example, a set of popular theoretical models, which we call M_*, assume (eventual) upper bounds on message delay and relative process speeds, regardless of message size and absolute process speeds. Empirical systems with bounded channel capacity and bandwidth cannot realize such assumptions either natively, or through algorithmic constructions. Consequently, empirical deployment of the many M_*-based algorithms risks anomalous behavior. As a result, we argue that real time is the wrong basis for such a theory. Instead, the appropriate foundation for partial synchrony is fairness: a time base measured by counting events internal to the system, like the steps executed by the processes. By way of example, we redefine M_* models with fairness-based bounds and provide algorithmic techniques to implement fairness-based M_* models on a significant subset of the empirical systems. The proposed techniques use failure detectors — system services that provide hints about process crashes — as intermediaries that preserve the fairness constraints native to empirical systems. In effect, algorithms that are correct in M_* models are now proved correct in such empirical systems as well. Demonstrating our results requires solving three open problems. (1) We propose the first unified mathematical framework based on Timed I/O Automata to specify empirical systems, partially synchronous systems, and algorithms that execute within the aforementioned systems. (2) We show that crash tolerance capabilities of popular distributed systems can be denominated exclusively through fairness constraints. (3) We specify exemplar system models that identify the set of weakest system models to implement popular failure detectors.
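The dissertation above uses failure detectors as intermediaries between empirical systems and partially synchronous models. As a generic illustration of the failure-detector concept only (not the construction in the dissertation), a heartbeat-based detector with an adaptive timeout can be sketched as follows.

```python
import time
from collections import defaultdict

class HeartbeatFailureDetector:
    """Generic eventually-perfect-style failure detector sketch: suspect a
    process if no heartbeat arrives within an adaptive timeout, and grow the
    timeout whenever a suspicion turns out to be false."""

    def __init__(self, initial_timeout=1.0, backoff=1.5):
        self.timeout = defaultdict(lambda: initial_timeout)
        self.last_heartbeat = {}
        self.suspected = set()
        self.backoff = backoff

    def on_heartbeat(self, pid, now=None):
        now = time.monotonic() if now is None else now
        self.last_heartbeat[pid] = now
        if pid in self.suspected:              # false suspicion: be more patient
            self.suspected.discard(pid)
            self.timeout[pid] *= self.backoff

    def query(self, now=None):
        now = time.monotonic() if now is None else now
        for pid, last in self.last_heartbeat.items():
            if now - last > self.timeout[pid]:
                self.suspected.add(pid)
        return set(self.suspected)

# usage with explicit timestamps (simulated clock)
fd = HeartbeatFailureDetector()
fd.on_heartbeat("p1", now=0.0)
fd.on_heartbeat("p2", now=0.0)
print(fd.query(now=0.5))   # set()        -- nobody suspected yet
print(fd.query(now=2.0))   # {'p1', 'p2'} -- both timed out
fd.on_heartbeat("p1", now=2.1)            # p1 was falsely suspected; its timeout grows
print(fd.query(now=2.2))   # {'p2'}
```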
360

The Comparison of Parameter Estimation with Application to Massachusetts Health Care Panel Study (MHCPS) Data

Huang, Yao-wen 03 June 2004 (has links)
In this paper we propose two simple algorithms to estimate the parameters β and the baseline survival function in the Cox proportional hazards model, with application to the Massachusetts Health Care Panel Study (MHCPS) data (Chappell, 1991), which are left-truncated and interval-censored. We find that, in the estimation of β and the baseline survival function, the Kaplan-Meier algorithm is uniformly better than the Empirical algorithm. The Kaplan-Meier algorithm is also uniformly more powerful than the Empirical algorithm in testing whether two groups of survival functions are the same. We also define a distance measure D and compare the performance of the two algorithms through β and D.
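As a generic illustration of fitting a Cox proportional hazards model and Kaplan-Meier survival curves (not the interval-censored, left-truncated algorithms compared in the thesis), the widely used lifelines package can be applied to synthetic right-censored data as sketched below; lifelines and the synthetic data are assumptions, not part of the original work.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# synthetic right-censored survival data (placeholder, not the MHCPS data)
rng = np.random.default_rng(42)
n = 300
group = rng.integers(0, 2, n)                       # binary covariate
T = rng.exponential(scale=np.where(group == 1, 8.0, 5.0))
C = rng.exponential(scale=10.0, size=n)             # censoring times
df = pd.DataFrame({
    "duration": np.minimum(T, C),
    "event": (T <= C).astype(int),
    "group": group,
})

# Cox proportional hazards fit: estimates the coefficient beta for `group`
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.params_)                                   # fitted beta

# Kaplan-Meier estimate of the survival function per group
for g in (0, 1):
    sub = df[df["group"] == g]
    kmf = KaplanMeierFitter()
    kmf.fit(sub["duration"], event_observed=sub["event"], label=f"group {g}")
    print(kmf.median_survival_time_)
```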
