  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
841

Design, fabrication, and testing of a variable focusing micromirror array lens

Cho, Gyoungil 29 August 2005 (has links)
A reflective-type Fresnel lens using an array of micromirrors is designed and fabricated using the MUMPs surface micromachining process. The focal length of the lens can be rapidly changed by controlling both the rotation and translation of electrostatically actuated micromirrors. The suspension spring, pedestal, and electrodes are located under the mirror to maximize the optical efficiency. The micromirror translation and rotation are plotted versus the applied voltage. Relations are provided for the fill factor and the numerical aperture as functions of the lens diameter, the mirror size, and the tolerances specified by the MUMPs design rules. Linnik interferometry is used to measure the translation, rotation, and flatness of a fabricated micromirror. The reflective-type Fresnel lens is controlled by 16 channels of independent DC voltages in a 0 to 50 V range, and the translational and torsional stiffnesses are calibrated with measured data. The spot diameter produced from a point source by the fabricated, electrostatically controlled lens is measured to test its focusing quality.
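The fill-factor and numerical-aperture relations mentioned in this abstract can be sketched numerically. The geometry assumed below (square mirrors on a square grid, NA taken from the half-angle subtended by the aperture at the focus) and all numeric values are illustrative assumptions, not figures from the thesis.

```python
import math

def numerical_aperture(lens_diameter_um, focal_length_um):
    # NA = sin(theta), where theta is the half-angle subtended
    # by the lens aperture at the focal point
    return math.sin(math.atan(lens_diameter_um / (2.0 * focal_length_um)))

def fill_factor(mirror_size_um, gap_um):
    # Fraction of the aperture covered by reflective mirror area,
    # assuming square mirrors on a square grid with a fixed fabrication gap
    pitch = mirror_size_um + gap_um
    return (mirror_size_um / pitch) ** 2

# Hypothetical dimensions, chosen only for illustration
na = numerical_aperture(lens_diameter_um=500.0, focal_length_um=1000.0)
ff = fill_factor(mirror_size_um=20.0, gap_um=2.0)
```

Shrinking the gap (limited by design-rule tolerances) raises the fill factor, while a shorter focal length raises the NA, mirroring the trade-offs the abstract describes.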
842

Bayesian variable selection in clustering via Dirichlet process mixture models

Kim, Sinae 17 September 2007 (has links)
The increased collection of high-dimensional data in various fields has raised a strong interest in clustering algorithms and variable selection procedures. In this dissertation, I propose a model-based method that addresses the two problems simultaneously. I use Dirichlet process mixture models to define the cluster structure and to introduce in the model a latent binary vector to identify discriminating variables. I update the variable selection index using a Metropolis algorithm and obtain inference on the cluster structure via a split-merge Markov chain Monte Carlo technique. I evaluate the method on simulated data and illustrate an application with a DNA microarray study. I also show that the methodology can be adapted to the problem of clustering functional high-dimensional data. There I employ wavelet thresholding methods in order to reduce the dimension of the data and to remove noise from the observed curves. I then apply variable selection and sample clustering methods in the wavelet domain. Thus my methodology is wavelet-based and aims at clustering the curves while identifying wavelet coefficients describing discriminating local features. I exemplify the method on high-dimensional and high-frequency tidal volume traces measured under an induced panic attack model in normal humans.
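As a hedged illustration of the Dirichlet process mixture idea described in this abstract, the sketch below uses scikit-learn's truncated variational approximation of a DP mixture on invented data. The thesis's actual samplers (Metropolis updates of the selection vector, split-merge MCMC) are not available in scikit-learn; this only shows the clustering backbone.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic data: two well-separated clusters where only the first 2 of 5
# variables discriminate; the remaining 3 are pure noise.
X = np.vstack([
    np.hstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(0.0, 1.0, (50, 3))]),
    np.hstack([rng.normal(8.0, 1.0, (50, 2)), rng.normal(0.0, 1.0, (50, 3))]),
])

# Truncated variational approximation of a Dirichlet process mixture:
# n_components is only an upper bound; superfluous components receive
# negligible weight rather than being used.
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
labels = dpm.predict(X)
n_used = len(set(labels))  # number of clusters actually occupied
```

Unlike a plain Gaussian mixture, the number of occupied clusters is inferred rather than fixed, which is the property the dissertation exploits.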
843

In situ characterization of soil properties using visible near-infrared diffuse reflectance spectroscopy

Waiser, Travis Heath 17 September 2007 (has links)
Diffuse reflectance spectroscopy (DRS) is a rapid proximal-sensing method that is increasingly used in laboratory settings to measure soil properties. Diffuse reflectance spectroscopy research completed in laboratories shows promising results, but very little has been reported on how DRS performs in a field setting on soils scanned in situ. Seventy-two soil cores were obtained from six fields in Erath and Comanche Counties, Texas. Each soil core was scanned with a visible near-infrared (VNIR) spectrometer with a spectral range of 350-2500 nm in four different combinations of moisture content and pre-treatment: field-moist in situ, air-dried in situ, field-moist smeared in situ, and air-dried ground. Water potential was measured for the field-moist in situ scans. The VNIR spectra were used to predict total and fine clay content, water potential, organic C, and inorganic C of the soil using partial least squares (PLS) regression. The PLS model was validated with data from 30% of the original soil cores, which were randomly selected and not used in the calibration model. The root mean squared deviation (RMSD) of the air-dried ground samples was within the in situ RMSD and comparable to literature values for each soil property. The validation data set had a total clay content RMSD of 61 g kg-1 and 41 g kg-1 for the field-moist and air-dried in situ cores, respectively. The organic C validation data set had an RMSD of 5.8 g kg-1 and 4.6 g kg-1 for the field-moist and air-dried in situ cores, respectively. The RMSD values for inorganic C were 10.1 g kg-1 and 8.3 g kg-1 for the field-moist and air-dried in situ scans, respectively. Smearing the samples increased the uncertainty of the predictions for clay content, organic C, and inorganic C. Water potential did not improve model predictions, nor did it correlate with the VNIR spectra; r2-values were below 0.31.
These results show that DRS is an acceptable technique to measure selected soil properties in situ at varying water contents and across different parent materials.
844

The Relationships between Business Environment, Strategy, and Performance: An Identification of Opportunities and Threats

Wang, Tzu-wei 14 January 2009 (has links)
In recent years, corporate strategy has drawn a lot of attention in both academia and practice. However, there is little literature on how to put these ideas into practice, that is, how to quantify the interrelationships between the three key elements of strategic management (performance, strategies, and environments) and how to judge and measure the opportunities and threats (O & T) when the environment changes. This study is an attempt to answer these questions. The theoretical method developed incorporates a dynamic simultaneous equations model to express the interrelationship between these three elements. The method identifies O & T in a three-step procedure. Step 1 relates the strategic components to the performance measures through the management's concept of business and philosophy of resource deployment. Step 2 points out the suitable (or unsuitable) environmental circumstances for each of the scope and resource deployment elements. In Step 3, we link the results of Steps 1 and 2 to identify and measure O & T. The above methodology is applied to the case of Cathay Financial Holding Company, Taiwan's largest listed financial holding company, over the period 2002Q2-2007Q3. We use the instrumental variables three-stage least squares (IV-3SLS) method for estimation, and we also apply tests to ascertain the validity of the selected instrumental variables in order to obtain more reliable results. Our empirical results indicate that both firm strategies and the environment play significant roles in influencing the firm's performance. More specifically, the diversification of products and the debt allowance reservation rate are negatively associated with the cost/income ratio and positively associated with adjusted ROE and Tobin's Q. Additionally, managers can also increase investment efficiency by adjusting the content of the asset allocation, especially with regard to the holding of bonds.
We also extract major environmental factors, such as the unemployment rate, that affect the firm's performance, and we use the estimated results to identify and measure O & T.
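The thesis estimates a simultaneous-equation system with IV-3SLS; as a simplified single-equation stand-in, the sketch below illustrates the instrumenting step with two-stage least squares on invented data. Every variable and coefficient here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=n)  # instrument: correlated with x, uncorrelated with u
u = rng.normal(size=n)  # structural error
x = 0.8 * z + 0.5 * u + rng.normal(scale=0.3, size=n)  # endogenous regressor
y = 2.0 * x + u         # true structural coefficient is 2.0

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

beta_ols = ols(X, y)     # biased upward: x is correlated with u
x_hat = Z @ ols(Z, x)    # stage 1: project x onto the instrument space
beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)  # stage 2
```

The point of instrumenting, in the single-equation case as in 3SLS, is that the fitted `x_hat` retains only the variation in `x` that is uncorrelated with the structural error, so the second-stage slope is consistent while plain OLS is not.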
845

Design of the nth Order Adaptive Integral Variable Structure Derivative Estimator

Shih, Wei-Che 17 January 2009 (has links)
Based on the Lyapunov stability theorem, a methodology for designing an nth order adaptive integral variable structure derivative estimator (AIVSDE) is proposed in this thesis. The proposed derivative estimator not only is an improved version of the existing AIVSDE, but also can be used to estimate the nth derivative of a smooth signal that has continuous and bounded derivatives up to order n+1. Analysis results show that adjusting some of the parameters can facilitate the derivative estimation of signals with higher-frequency noise. An adaptive algorithm is incorporated in the estimation scheme for tracking the unknown upper bound of the input signal as well as its derivatives. The stability of the proposed derivative estimator is guaranteed, and a comparison between a recently proposed high-order sliding mode derivative estimator and the AIVSDE is also demonstrated.
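The AIVSDE itself is not specified in this abstract. As a loosely related illustration of sliding-mode-style derivative estimation, the sketch below simulates Levant's first-order robust exact differentiator (a different, well-known estimator from the same family the abstract compares against) tracking the derivative of sin(t). The gains and step size are illustrative choices for a signal with |f''| <= 1.

```python
import numpy as np

dt, T = 1e-3, 10.0
lam0, lam1 = 1.5, 1.1  # canonical gains for Lipschitz bound L = 1 on f''
z0, z1 = 0.0, 0.0      # z0 tracks f(t), z1 tracks f'(t)
errs = []
for k in range(int(T / dt)):
    t = k * dt
    f = np.sin(t)
    e = z0 - f
    # Sliding-mode correction: sqrt-scaled term on z0, sign term on z1
    z0 += dt * (z1 - lam0 * np.sqrt(abs(e)) * np.sign(e))
    z1 += dt * (-lam1 * np.sign(e))
    if t > 5.0:  # record tracking error after the finite-time transient
        errs.append(abs(z1 - np.cos(t)))
max_err = max(errs)
```

The discontinuous sign term is what gives the estimator its robustness to bounded noise, at the price of small chattering in the derivative estimate; the adaptive upper-bound tracking described in the thesis addresses the case where the Lipschitz bound is unknown.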
846

The Impact of Advertising on Investors' Behavior: Disposition Effect and Threshold Effect

Lee, Wan-shiuan 25 June 2009 (has links)
Previous research finds that advertising expenditure and performance can significantly influence fund flows. With unique data from the Securities Investment Trust and Consulting Association (SITCA) of Taiwan, we use monthly data on exact purchase amounts, redemption amounts, and advertising expenditures to gain more insight into investors' investment behavior. We examine the impact of advertising on mutual fund investors' behavior and on the performance-flow relationship. This paper differs from the existing literature, which is concerned only with the average effect of advertising on fund flows. We follow the time series autoregressive threshold procedure of Tsay (1989) and modify it into a cross-sectional threshold model to examine whether a threshold effect of advertising on fund flows exists. We generate four empirical results. (1) Performance is significantly associated with higher fund flows. (2) Advertising is significantly associated with higher fund flows. (3) A disposition effect exists in the Taiwanese mutual fund market, and advertising expenditure can partially enhance it. (4) We also measure the threshold effect of advertising on fund flows.
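A threshold effect of the kind described can be sketched with a two-regime regression whose cutoff is chosen by grid search over the sum of squared errors, in the spirit of threshold models. The variable names and data below are invented, not drawn from the SITCA sample.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
adv = rng.uniform(0, 10, n)   # threshold variable (stand-in for advertising)
perf = rng.normal(size=n)     # fund performance
true_cut = 6.0
slope = np.where(adv > true_cut, 2.0, 0.5)  # flow-performance slope changes
flow = slope * perf + rng.normal(scale=0.5, size=n)

def sse_at(cut):
    # Fit a separate OLS line in each regime; return the combined SSE
    sse = 0.0
    for mask in (adv <= cut, adv > cut):
        X = np.column_stack([np.ones(mask.sum()), perf[mask]])
        beta, *_ = np.linalg.lstsq(X, flow[mask], rcond=None)
        sse += float(np.sum((flow[mask] - X @ beta) ** 2))
    return sse

# Trimmed grid of candidate cutoffs, so each regime keeps enough observations
grid = np.quantile(adv, np.linspace(0.15, 0.85, 71))
best_cut = float(min(grid, key=sse_at))
```

The estimated cutoff is the advertising level at which the flow-performance relationship changes regime, which is the quantity a threshold-effect test is built around.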
847

Statistical validation and calibration of computer models

Liu, Xuyuan 21 January 2011 (has links)
This thesis deals with modeling, validation, and calibration problems in experiments on computer models. Computer models are mathematical representations of real systems developed for understanding and investigating those systems. Before a computer model is used, it often needs to be validated by comparing the computer outputs with physical observations and calibrated by adjusting internal model parameters in order to improve the agreement between the computer outputs and physical observations. As computer models become more powerful and popular, the complexity of input and output data raises new computational challenges and stimulates the development of novel statistical modeling methods. One challenge is to deal with computer models with random inputs (random effects). This kind of computer model is very common in engineering applications. For example, in a thermal experiment at Sandia National Laboratories (Dowding et al. 2008), the volumetric heat capacity and thermal conductivity are random input variables. If input variables are randomly sampled from particular distributions with unknown parameters, the existing methods in the literature are not directly applicable, because integration over the random-variable distribution is needed for the joint likelihood and this integration cannot always be expressed in closed form. In this research, we propose a new approach that combines the nonlinear mixed effects model and the Gaussian process model (kriging model). Different model formulations are also studied, using the thermal problem, to gain a better understanding of validation and calibration activities. Another challenge comes from computer models with functional outputs. While many methods have been developed for modeling computer experiments with a single response, the literature on modeling computer experiments with functional responses is sketchy.
Dimension reduction techniques can be used to overcome the complexity of functional responses; however, they generally involve two steps. Models are first fit at each individual setting of the input to reduce the dimensionality of the functional data, and the estimated parameters of those models are then treated as new responses, which are further modeled for prediction. Alternatively, pointwise models are first constructed at each time point, and functional curves are then fit to the parameter estimates obtained from the fitted models. In this research, we first propose a functional regression model that relates functional responses to both design and time variables in one single step. Secondly, we propose a functional kriging model that applies variable selection by imposing a penalty function. We show that the proposed model performs better than dimension-reduction-based approaches and than the kriging model without regularization. In addition, non-asymptotic theoretical bounds on the estimation error are presented.
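The kriging (Gaussian process) surrogate at the heart of such validation and calibration studies can be sketched as follows. The "computer model" here is a cheap stand-in function, and the kernel and design sizes are illustrative choices, not those of the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def computer_model(x):
    # Stand-in for an expensive simulator run
    return np.sin(3 * x) + 0.5 * x

# A small one-dimensional "space-filling" design of simulator runs
X_design = np.linspace(0.0, 2.0, 12).reshape(-1, 1)
y_runs = computer_model(X_design).ravel()

# Kriging surrogate: interpolates the runs and supplies predictive uncertainty
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8)
gp.fit(X_design, y_runs)

X_new = np.array([[0.7], [1.3]])
y_pred, y_std = gp.predict(X_new, return_std=True)
```

The surrogate's predictive standard deviation (`y_std`) is what distinguishes kriging from a plain interpolator: it quantifies how much to trust the emulator away from the design points, which calibration procedures exploit when weighing simulator output against physical observations.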
848

Contributions to variable selection for mean modeling and variance modeling in computer experiments

Adiga, Nagesh 17 January 2012 (has links)
This thesis consists of two parts. The first part reviews Variable Search, a variable selection procedure for mean modeling. The second part deals with variance modeling for robust parameter design in computer experiments. In the first chapter of my thesis, the Variable Search (VS) technique developed by Shainin (1988) is reviewed. VS has received quite a bit of attention from experimenters in industry. It uses the experimenters' knowledge about the process, in terms of good and bad settings and their importance. In this technique, a few experiments are first conducted at the best and worst settings of the variables to ascertain that they are indeed different from each other. Experiments are then conducted sequentially in two stages, namely swapping and capping, to determine the significance of variables one at a time. Finally, after all the significant variables have been identified, the model is fit and the best settings are determined. The VS technique has not been analyzed thoroughly. In this thesis, we analyze each stage of the method mathematically. Each stage is formulated as a hypothesis test, and its performance is expressed in terms of the model parameters. The performance of the VS technique is expressed as a function of the performance in each stage. On this basis, it is possible to compare its performance with that of traditional techniques. The second and third chapters of my thesis deal with variance modeling for robust parameter design in computer experiments. Computer experiments based on engineering models might be used to explore process behavior when physical experiments (e.g. fabrication of nanoparticles) are costly or time consuming. Robust parameter design (RPD) is a key technique for improving process repeatability. The absence of replicates in computer experiments (e.g. space-filling designs (SFD)) is a challenge in locating an RPD solution. Recently, there have been studies (e.g. Bates et al. (2005), Chen et al. (2006), Dellino et al.
(2010 and 2011), Giovagnoli and Romano (2008)) of RPD issues in computer experiments. The transmitted variance model (TVM) proposed by Shoemaker and Tsui (1993) for physical experiments can be applied in computer simulations. The approaches stated above rely heavily on the estimated mean model because they obtain expressions for the variance directly from mean models or use them for generating replicates. Variance modeling based on some kind of replicates relies on the estimated mean model to a lesser extent. To the best of our knowledge, there is no rigorous research on the variance modeling needed for RPD in computer experiments. We develop procedures for identifying variance models. First, we explore procedures to decide groups of pseudo replicates for variance modeling. A formal variance change-point procedure is developed to rigorously determine the replicate groups. Next, a variance model is identified and estimated through a three-step variable selection procedure. Properties of the proposed method are investigated under various conditions through analytical and empirical studies. In particular, the impact of correlated responses on performance is discussed.
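As a hedged sketch of the variance change-point idea mentioned above (the thesis's formal procedure is not reproduced here), the code below scans an ordered sequence for the split that maximizes a two-segment Gaussian log-likelihood, i.e. the point where the variance most plausibly changes.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic ordered residuals: the variance jumps (sd 1 -> 3) at index 100
x = np.concatenate([rng.normal(0, 1.0, 100), rng.normal(0, 3.0, 100)])

def gauss_loglik(seg):
    # Profile Gaussian log-likelihood of a segment with its own variance
    s2 = max(float(seg.var()), 1e-12)
    return -0.5 * len(seg) * (np.log(2 * np.pi * s2) + 1)

def best_split(x, trim=20):
    # Search over candidate change points, trimming the ends so each
    # segment keeps enough observations for a stable variance estimate
    n = len(x)
    cands = range(trim, n - trim)
    return max(cands, key=lambda k: gauss_loglik(x[:k]) + gauss_loglik(x[k:]))

k_hat = best_split(x)
```

Observations on each side of the detected split can then be treated as a group of pseudo replicates with a common variance, which is the role the change-point step plays in the variance-modeling pipeline described above.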
849

A Fold Recognition Approach to Modeling of Structurally Variable Regions

Levefelt, Christer January 2004 (has links)
A novel approach is proposed for modeling of structurally variable regions in proteins. In this approach, a prerequisite sequence-structure alignment is examined for regions where the target sequence is not covered by the structural template. These regions, extended with a number of residues from adjacent stem regions, are submitted to fold recognition. The alignments produced by fold recognition are integrated into the initial alignment to create a multiple alignment where gaps in the main structural template are covered by local structural templates. This multiple alignment is used to create a protein model by existing protein modeling techniques.

Several alternative parameters are evaluated using a set of ten proteins. One set of parameters is selected and evaluated using another set of 31 proteins. The most promising result is for loop regions not located at the C- or N-terminal of a protein, where the method produces an average RMSD 12% lower than the loop modeling provided with the program MODELLER. This improvement is shown to be statistically significant.
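The RMSD comparison used to evaluate the loop models presupposes an optimal superposition of the two coordinate sets. A minimal sketch of the standard Kabsch algorithm for that superposition follows; it is generic textbook code, not the thesis's pipeline.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    # Center both coordinate sets, find the optimal rotation by SVD,
    # then compute the RMSD over corresponding atoms
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

# Sanity check: a rigidly rotated copy of a structure superposes exactly
rng = np.random.default_rng(0)
coords = rng.normal(size=(30, 3))                # 30 hypothetical atoms
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
rmsd_same = kabsch_rmsd(coords @ Rz.T, coords)
```

Comparing a modeled loop against the experimental structure with this measure, over corresponding loop atoms, is the kind of evaluation the 12% improvement figure refers to.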
850

High precision motion control based on a discrete-time sliding mode approach

Li, Yufeng January 2001 (has links)
No description available.
