31

Application of Residual Mapping Calibration to a Transient Groundwater Flow Model

White, Jeremy 07 October 2005 (has links)
Residual mapping is an automated groundwater-model calibration technique which rapidly identifies parameter-zone configurations while limiting tendencies to over-parameterize. Residual mapping analyzes the model residual, or the difference between model-calculated head and spatially-interpolated observation data, for non-random trends. These trends are entered into the model as parameter zones. The values of hydrologic variables in each parameter zone are then optimized using parameter-estimation software. Statistics calculated by the parameter-estimation software are used to determine the statistical significance of the parameter zones. If the parameter-value ranges for adjacent zones do not have significant overlap, the zones are considered valid. This technique was applied to a finite-difference, transient groundwater flow model of a major municipal well field located in west-central Florida. A computer code automates the residual mapping process, making it practical for application to large, transient flow models. The calibration data set includes head values from 37 monitor wells over a period of 181 days, including a 96-day well-field-scale aquifer-performance test. The transient residual-mapping technique identified five significant transmissivity zones and one leakance zone.
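As a rough illustration of the residual-analysis step described in this abstract, the sketch below interpolates scattered head observations onto the model grid, computes the residual, and flags cells with large positive or negative residuals as candidate parameter zones. The interpolation method and threshold are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.interpolate import griddata

def residual_map(sim_heads, grid_xy, obs_xy, obs_heads, threshold=0.5):
    """Interpolate observed heads onto the model grid, compute the residual
    (model-calculated head minus interpolated observation), and flag cells
    whose residual exceeds an illustrative threshold as candidate zones."""
    # Spatially interpolate the scattered observations onto the grid nodes
    # (linear interpolation returns NaN outside the data hull; those cells
    # simply remain unzoned).
    obs_interp = griddata(obs_xy, obs_heads, grid_xy, method="linear")
    residual = sim_heads - obs_interp
    # Positive and negative trends become separate candidate zones.
    zone = np.zeros_like(residual, dtype=int)
    zone[residual > threshold] = 1    # model over-predicts head
    zone[residual < -threshold] = -1  # model under-predicts head
    return residual, zone
```

Each flagged zone would then be handed to the parameter-estimation step and retained only if its optimized value is statistically distinct from its neighbours, as the abstract describes.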
32

Stereolithography Cure Process Modeling

Tang, Yanyan 20 July 2005 (has links)
Although stereolithography (SL) is a remarkable improvement over conventional prototyping production, it is being pushed aggressively for improvements in both speed and resolution. However, it is not currently clear how these two features can be improved simultaneously or what the limits of such optimization are. To address this issue, a quantitative SL cure process model is developed which accounts for all the sub-processes involved in SL: exposure, photoinitiation, photopolymerization, and mass and heat transfer. To parameterize the model, the thermal and physical properties of a model compound system, ethoxylated (4) pentaerythritol tetraacrylate (E4PETeA) with 2,2-dimethoxy-2-phenylacetophenone (DMPA) as initiator, are determined. The free radical photopolymerization kinetics is also characterized by differential photocalorimetry (DPC), and a comprehensive kinetic model is parameterized for the model material. The SL process model is then solved using the finite element method in the software package FEMLAB and validated by its capability to predict fabricated part dimensions. The SL cure process model, also referred to as the degree of cure (DOC) threshold model, simulates the cure behavior during the SL fabrication process and provides insight into the part-building mechanisms. It predicts the cured part dimension within 25% error, while the prediction error of the exposure threshold model currently utilized in the SL industry is up to 50%. The DOC threshold model has been used to investigate the effects of material and process parameters on SL performance properties, such as resolution, speed, maximum temperature rise in the resin bath, and maximum DOC of the green part. The effective factors are identified and parameter optimization is performed, which also provides guidelines for SL material development as well as process and laser improvement.
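For context, the exposure threshold model that this abstract compares against is commonly expressed through the Jacobs working-curve relation. The sketch below uses illustrative resin constants (E_c, D_p), not measured values for the E4PETeA/DMPA system studied in the thesis.

```python
import numpy as np

def cure_depth(E_max, E_c=8.0, D_p=0.15):
    """Exposure-threshold (Jacobs working curve) estimate of cured depth:
    C_d = D_p * ln(E_max / E_c), with E_c the critical exposure [mJ/cm^2]
    and D_p the resin penetration depth [mm].  Constants are illustrative."""
    return D_p * np.log(E_max / E_c)

# Cured depth in mm for a 50 mJ/cm^2 peak exposure at the resin surface.
print(cure_depth(E_max=50.0))
```

The DOC threshold model described above replaces this purely exposure-based criterion with a criterion on the computed degree of cure, which is why it can track kinetics and heat/mass transfer effects that the working curve ignores.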
33

Rock Physics Based Determination of Reservoir Microstructure for Reservoir Characterization

Adesokan, Hamid 1976- 07 October 2013 (has links)
One of the most important, but often ignored, factors affecting the transport and seismic properties of a hydrocarbon reservoir is pore shape. Transport properties depend on the dimensions, geometry, and distribution of pores and cracks. Knowledge of pore shape distribution is needed to explain the often-encountered complex interrelationship between seismic parameters (e.g. seismic velocity) and the independent physical properties (e.g. porosity) of hydrocarbon reservoirs. However, our knowledge of reservoir pore shape distribution is very limited. This dissertation employs a pore structure parameter via a rock physics model to characterize mean reservoir pore shape. The parameter was used to develop a new physical concept of critical clay content, framed in terms of pore compressibility as a function of pore aspect ratio, for a better understanding of seismic velocity as a function of porosity. This study makes use of well log datasets from offshore Norway and from the North Viking Graben in the North Sea. In the studied North Sea reservoir, porosity and measured horizontal permeability were found to increase with increasing pore aspect ratio (PAR). PAR is relatively constant at 0.23 for volumes of clay (V_cl) less than 32%, with a significant decrease to 0.04 for V_cl above 32%. The point of inflection at 32% in the PAR–V_cl plane is defined as the critical clay volume. Much of the scatter in the compressional velocity–porosity cross-plots is observed where V_cl is above this critical value. For clay content higher than the critical value, Hertz-Mindlin (HM) contact theory over-predicts compressional velocity (V_p) by about 69%. This was reduced to 4% when the PAR distribution was accounted for in the original HM formulation. The pore structure parameter was also used to study a fractured carbonate reservoir in the Sichuan basin, China. Using the parameter, the fractured reservoir intervals can be distinguished from those with no fractures: the former have a pore structure parameter value ≥ 3.8, whereas the latter are < 3.8. This finding is consistent with the result of fracture analysis based on FMI images. The results from this dissertation will find application in reservoir characterization as the industry targets more complex, deeper, and unconventional reservoirs.
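For reference, a minimal sketch of the standard Hertz-Mindlin dry-pack moduli that the abstract refers to is given below. The grain properties, coordination number, porosity, and pressure are illustrative assumptions, not values from the dissertation, and the sketch does not include the PAR correction the author introduces.

```python
import numpy as np

def hertz_mindlin(phi, P, G=44e9, nu=0.06, C=9.0):
    """Standard Hertz-Mindlin dry-pack moduli (Pa) at porosity phi and
    effective pressure P (Pa).  G and nu are the grain shear modulus and
    Poisson ratio (quartz-like values assumed); C is the coordination
    number.  All inputs here are illustrative."""
    K_hm = (C**2 * (1 - phi)**2 * G**2 * P
            / (18 * np.pi**2 * (1 - nu)**2)) ** (1 / 3)
    G_hm = ((5 - 4 * nu) / (5 * (2 - nu))) * (
        3 * C**2 * (1 - phi)**2 * G**2 * P
        / (2 * np.pi**2 * (1 - nu)**2)) ** (1 / 3)
    return K_hm, G_hm

def vp_dry(K, G, rho):
    """P-wave velocity (m/s) from bulk and shear moduli and bulk density."""
    return np.sqrt((K + 4 * G / 3) / rho)

K, G = hertz_mindlin(phi=0.3, P=20e6)
print(vp_dry(K, G, rho=2650 * (1 - 0.3)))  # dry-frame velocity estimate
```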
34

Parameter Estimation of Microwave Filters

Sun, Shuo 12 1900 (has links)
The focus of this thesis is on developing theories and techniques to extract lossy microwave filter parameters from data. In the literature, Cauchy methods have been used to extract filters' characteristic polynomials from measured scattering parameters. These methods are described, and some examples are constructed to test their performance. The results suggest that the Cauchy method does not work well when the Q factors representing the loss of the filters are uneven. Based on some prototype filters and the relationship between Q factors and loss, we conduct preliminary studies on alternative representations of the characteristic polynomials. The parameters in these new models are extracted using the Levenberg–Marquardt algorithm to accurately estimate the characteristic polynomials and the loss information.
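As a hedged illustration of the extraction step, the sketch below fits a low-order rational (characteristic-polynomial) model to sampled S-parameter data with the Levenberg–Marquardt algorithm via SciPy. The model order, parameterization, and synthetic data are illustrative and are not the filter models studied in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

def s21_model(coeffs, s, n=3):
    """Rational model S21(s) = P(s)/E(s) with real coefficients packed into
    one vector: the first n+1 entries are P, the rest are E (3rd order here)."""
    p, e = coeffs[:n + 1], coeffs[n + 1:]
    return np.polyval(p, s) / np.polyval(e, s)

def residuals(coeffs, s, measured):
    # Stack real and imaginary parts so the solver sees real residuals.
    diff = s21_model(coeffs, s) - measured
    return np.concatenate([diff.real, diff.imag])

# Synthetic "measured" response along the imaginary axis, for illustration only.
w = np.linspace(-2, 2, 201)
s = 1j * w
measured = 0.5 / np.polyval([1, 0.4, 1.1, 0.3], s)

x0 = np.concatenate([np.ones(4), [1.0, 2.0, 2.0, 1.0]])  # stable starting guess
fit = least_squares(residuals, x0, args=(s, measured), method="lm")
print(fit.x)  # fitted polynomial coefficients (only defined up to a common scale)
```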
35

ESTIMATION OF RECEIVER OPERATING CHARACTERISTIC (ROC) CURVE PARAMETERS: SMALL SAMPLE PROPERTIES OF ESTIMATORS.

BORGSTROM, MARK CRAIG. January 1987 (has links)
When studying detection systems, parameters associated with the Receiver Operating Characteristic (ROC) curve are often estimated to assess system performance. In some applied settings it is not possible to test the detection system with large numbers of stimuli. The resulting small-sample statistics may have undesirable properties. The characteristics of these small-sample ROC estimators were examined in a Monte Carlo simulation. Three popular ROC parameters were chosen for study. One of the parameters was a single-parameter index of system performance, the Area under the ROC curve. The other parameters, ROC intercept and slope, were considered as a pair. ROC intercept and slope were varied along with sample size and the number of points on the certainty rating scale to form a four-way factorial design. Several types of estimators were examined. For the parameter Area under the curve, Maximum Likelihood (ML), three types of Least Squares (LS), and Distribution Free (DF) estimators were considered. Except for the DF estimator, the same estimators were considered for the intercept and slope parameters. These estimators were compared with respect to three characteristics: bias, efficiency, and consistency. For Area under the curve, the ML estimator was the least biased, the DF estimator was the most efficient, and all the estimators except the DF estimator appeared to be consistent. For intercept and slope, the LS estimator that minimized the vertical error of the points from the ROC curve (line) was the least biased for both parameters. This LS estimator was also the most efficient. This estimator, along with the ML estimator, also appeared to be the most consistent. The other two estimators had no significant trend toward consistency. These results, along with other findings, illustrate that different estimators may be "best" for different sample sizes and for different parameters. Therefore, researchers should carefully consider the characteristics of ROC estimators before using them as indices of system performance.
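The Distribution Free estimator of Area mentioned in this abstract is usually taken to be the Mann–Whitney (trapezoidal) statistic; a minimal sketch with hypothetical certainty-rating data follows.

```python
import numpy as np

def auc_distribution_free(signal_ratings, noise_ratings):
    """Distribution-free estimate of the Area under the ROC curve: the
    probability that a randomly chosen signal trial is rated higher than a
    randomly chosen noise trial, with ties counted as one half."""
    s = np.asarray(signal_ratings, dtype=float)
    n = np.asarray(noise_ratings, dtype=float)
    greater = (s[:, None] > n[None, :]).sum()
    ties = (s[:, None] == n[None, :]).sum()
    return (greater + 0.5 * ties) / (s.size * n.size)

# Hypothetical 5-point certainty ratings for a small sample of trials.
print(auc_distribution_free([5, 4, 4, 3, 5], [2, 3, 1, 2, 4]))
```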
36

DERIVED PARAMETER IMPLEMENTATION IN A TELEMETRY PREPROCESSOR

Bossert, Kathleen B. 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California / Today’s telemetry preprocessing systems are often required to create and process new telemetry parameters by combining multiple actual parameters in a telemetry data stream. The newly created parameters are commonly referred to as “derived parameters” and are often required for analysis in real time at relatively high speeds. Derived parameters are created through algebraic or logical combinations of multiple parameters distributed throughout the telemetry data frame. Creation and processing of derived parameters is frequently performed in telemetry system preprocessors, which are much more efficient at processing time division multiplex data streams than general purpose processors. Providing telemetry system users with a “user friendly” method for creating and installing newly derived parameter functions has been a subject of considerable discussion. Successful implementation of derived parameter processing has typically required the telemetry system user to be knowledgeable of the telemetry preprocessor architecture and to possess software programming skills. An innovative technique which requires no programming language skills is presented in this paper. Programmers or non-programmers may use the technique to easily define derived parameter calculations. Both single derived parameters and multiple derived parameters may be calculated in the preprocessor at high throughput rates.
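A minimal sketch of the derived-parameter idea described in this paper: an algebraic combination of actual parameters taken from a decommutated frame. The parameter names and the expression are hypothetical and are not taken from the paper.

```python
import math

def derived_velocity(frame):
    """Derived parameter formed by algebraically combining three actual
    parameters; `frame` maps parameter names to engineering-unit values."""
    return math.sqrt(frame["VX"]**2 + frame["VY"]**2 + frame["VZ"]**2)

minor_frame = {"VX": 3.0, "VY": 4.0, "VZ": 12.0}   # illustrative decommutated values
print(derived_velocity(minor_frame))                # -> 13.0
```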
37

Study of Multichannel Acquisition Method of Impulse Parameters

Yang, Mingji 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / One of the primary tasks of a telemetry system is to acquire multichannel impulse parameters. Old methods suffer from defects such as low availability of equipment, channels, and data, and they no longer suit the requirements of the modern telemetry system, which should be smaller, faster, more flexible, and lower in cost. This paper presents a new method that raises the availability mentioned above and forms an integrated telemetry system together with the acquisition of other parameters, thereby satisfying the requirements of a modern telemetry system.
38

First principles and black box modelling of biological systems

Grosfils, Aline 13 September 2007 (has links)
Living cells and their components play a key role within the biotechnology industry. Cell cultures and their products of interest are used for the design of vaccines as well as in the agri-food field. In order to ensure optimal operation of such bioprocesses, understanding the complex mechanisms which govern them is fundamental. Mathematical models may be helpful to grasp the biological phenomena which intervene in a bioprocess. Moreover, they allow prediction of system behaviour and are frequently used within engineering tools to ensure, for instance, product quality and reproducibility. Mathematical models of cell cultures may come in various shapes and be phrased with varying degrees of mathematical formalism. Typically, three main model classes are available to describe the nonlinear dynamic behaviour of such biological systems. They consist of macroscopic models which only describe the main phenomena appearing in a culture; indeed, a high model complexity may lead to long numerical computation times incompatible with engineering tools like software sensors or controllers. The first model class is composed of first principles or white box models. They consist of the system of mass balances for the main species (biomass, substrates, and products of interest) involved in a reaction scheme, i.e. a set of irreversible reactions which represent the main biological phenomena occurring in the considered culture. Whereas transport phenomena inside and outside the cell culture are often well known, the reaction scheme and associated kinetics are usually a priori unknown and require special care for their modelling and identification. The second kind of commonly used model belongs to black box modelling. Black boxes consider the system to be modelled in terms of its input and output characteristics. They consist of combinations of mathematical functions which do not allow any physical interpretation. They are usually used when no a priori information about the system is available. Finally, hybrid or grey box modelling combines the principles of white and black box models. Typically, a hybrid model uses the available prior knowledge while the reaction scheme and/or the kinetics are replaced by a black box, for instance an Artificial Neural Network. Among these numerous models, which one should be used to obtain the best possible representation of a bioprocess? We attempt to answer this question in the first part of this work. On the basis of two simulated bioprocesses and a real experimental one, two kinds of model are analysed. First principles models, whose reaction scheme and kinetics can be determined thanks to systematic procedures, are compared with hybrid model structures where neural networks are used to describe the kinetics or the whole reaction term (i.e. kinetics and reaction scheme). The most common artificial neural networks, the MultiLayer Perceptron and the Radial Basis Function network, are tested. Pure black box modelling is, however, not considered in this work; numerous papers already compare different neural networks with hybrid models, and the results of these previous studies converge to the same conclusion: hybrid models, which combine the available prior knowledge with the nonlinear mapping capabilities of neural networks, provide better results.
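A minimal sketch of the hybrid (grey box) idea described above: first-principles mass balances for a batch culture, with the specific growth rate supplied by a black box (here a trivial stand-in for a trained neural network). The structure and numbers are illustrative assumptions, not the models identified in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mu_black_box(S):
    """Stand-in for a trained neural network mapping substrate concentration
    to the specific growth rate (a Monod-like shape, purely for illustration)."""
    return 0.4 * S / (0.5 + S)

def hybrid_rhs(t, y, yield_coeff=0.5):
    """White-box mass balances for biomass X and substrate S in batch mode,
    with the kinetic term delegated to the black box."""
    X, S = y
    mu = mu_black_box(S)
    return [mu * X, -mu * X / yield_coeff]

sol = solve_ivp(hybrid_rhs, (0.0, 24.0), y0=[0.1, 10.0])
print(sol.y[:, -1])  # final biomass and substrate concentrations
```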
From this model comparison, and from the fact that a physical kinetic model structure may be viewed as a combination of basis functions much like a neural network, kinetic model structures allowing biological interpretation should be preferred. This is why the second part of this work is dedicated to the improvement of the general kinetic model structure used in the previous study. Indeed, in spite of its good performance (largely due to the associated systematic identification procedure), this kinetic model, which represents activation and/or inhibition effects by every culture component, suffers from some limitations: it does not explicitly address saturation by a culture component. The structure models this kind of behaviour by an inhibition which compensates for a strong activation. Note that the generalization of this kinetic model is a challenging task, as physical interpretation has to be improved while a systematic identification procedure has to be maintained. The last part of this work is devoted to another kind of biological system: proteins. Such macromolecules, which are essential parts of all living organisms and consist of combinations of only 20 different basis molecules called amino acids, are widely used in industry. In order to allow their functioning in non-physiological conditions, industrial users are prepared to modify the protein amino acid sequence. However, substituting one amino acid for another involves thermodynamic stability changes which may lead to the loss of the protein's biological functionality. Among several theoretical methods predicting stability changes caused by mutations, the PoPMuSiC (Prediction Of Proteins Mutations Stability Changes) program has been developed within the Genomic and Structural Bioinformatics Group of the Université Libre de Bruxelles. This software makes it possible to predict, in silico, changes in the thermodynamic stability of a given protein under all possible single-site mutations, either in the whole sequence or in a region specified by the user. However, PoPMuSiC suffers from limitations and should be improved thanks to recently developed techniques of protein stability evaluation, like the statistical mean force potentials of Dehouck et al. (2006). Our work proposes to enhance the performance of PoPMuSiC by combining the new energy functions of Dehouck et al. (2006) with the well-known artificial neural networks, the MultiLayer Perceptron and the Radial Basis Function network. This time, we attempt to obtain physically interpretable models thanks to an appropriate use of the neural networks.
39

Model identification and parameter estimation of stochastic linear models.

Vazirinejad, Shamsedin. January 1990 (has links)
It is well known that when the input variables of a linear regression model are subject to noise contamination, the model parameters cannot be estimated uniquely. In the statistical literature, this is referred to as the identifiability problem of errors-in-variables models. Further, in linear regression there is an explicit assumption of the existence of a single linear relationship. The statistical properties of errors-in-variables models under the assumption that the noise variances are either known or can be estimated are well documented. In many situations, however, such information is neither available nor obtainable. Although under such circumstances one cannot obtain a unique vector of parameters, the space, Ω, of feasible solutions can be computed. Additionally, the assumption of the existence of a single linear relationship may be presumptuous as well; a multi-equation model similar to the simultaneous-equations models of econometrics may be more appropriate. The goals of this dissertation are the following: (1) To present analytical techniques or algorithms to reduce the solution space, Ω, when any type of prior information, exact or relative, is available; (2) The data covariance matrix, Σ, can be examined to determine whether or not Ω is bounded. If Ω is not bounded, a multi-equation model is more appropriate. The methodology for identifying the subsets of variables within which linear relations can feasibly exist is presented; (3) The ridge regression technique is commonly employed to reduce the ills caused by collinearity. This is achieved by perturbing the diagonal elements of Σ. In certain situations, applying ridge regression causes some of the coefficients to change sign. An analytical technique is presented to measure the amount of perturbation required to render such variables ineffective. This information can assist the analyst in variable selection as well as in deciding on the appropriate model; (4) For situations when Ω is bounded, a new weighted regression technique based on the computed upper bounds on the noise variances is presented. This technique results in the identification of a unique estimate of the model parameters.
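A minimal sketch of the ridge perturbation discussed in goal (3): the diagonal of the cross-product matrix is inflated by k and the coefficient signs are tracked as k grows. The collinear data and the grid of k values are illustrative, not the analytical technique developed in the dissertation.

```python
import numpy as np

def ridge_path(X, y, ks):
    """Ridge estimates beta(k) = (X'X + kI)^{-1} X'y over a grid of
    diagonal perturbations k, returned as one row per k."""
    XtX, Xty = X.T @ X, X.T @ y
    I = np.eye(X.shape[1])
    return np.array([np.linalg.solve(XtX + k * I, Xty) for k in ks])

# Illustrative collinear design: the second column nearly repeats the first.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])
y = X @ np.array([1.0, -1.0]) + 0.1 * rng.normal(size=100)

betas = ridge_path(X, y, ks=np.logspace(-4, 2, 7))
print(np.sign(betas))  # watch for coefficients that change sign as k grows
```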
40

Optical properties of living organisms

Zhou, Yuming January 2000 (has links)
No description available.
