About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).

Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
121

The impact of inventory record inaccuracy on material requirements planning systems /

Bragg, Daniel Jay
Thesis (Ph. D.)--Ohio State University, 1984. / Includes bibliographical references (leaves 171-177). Available online via OhioLINK's ETD Center.
122

Regression calibration and maximum likelihood inference for measurement error models

Monleon-Moscardo, Vicente J. 08 December 2005
Graduation date: 2006 / Regression calibration inference seeks to estimate regression models with measurement error in explanatory variables by replacing the mismeasured variable by its conditional expectation, given a surrogate variable, in an estimation procedure that would have been used if the true variable were available. This study examines the effect of the uncertainty in the estimation of the required conditional expectation on inference about regression parameters, when the true explanatory variable and its surrogate are observed in a calibration dataset and related through a normal linear model. The exact sampling distribution of the regression calibration estimator is derived for normal linear regression when independent calibration data are available. The sampling distribution is skewed and its moments are not defined, but its median is the parameter of interest. It is shown that, when all random variables are normally distributed, the regression calibration estimator is equivalent to maximum likelihood provided a natural estimate of variance is non-negative. A check for this equivalence is useful in practice for judging the suitability of regression calibration. Results about relative efficiency are provided for both external and internal calibration data. In some cases maximum likelihood is substantially more efficient than regression calibration. In general, though, a more important concern when the necessary conditional expectation is uncertain is that inferences based on approximate normality and estimated standard errors may be misleading. Bootstrap and likelihood-ratio inferences are preferable.
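As a rough illustration of the two-stage procedure this abstract describes, here is a minimal Python sketch. The simulated data and all variable names are hypothetical; this is not the author's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# External calibration data: the true X and its error-prone surrogate W.
n_cal = 100
x_cal = rng.normal(0.0, 1.0, n_cal)
w_cal = x_cal + rng.normal(0.0, 0.5, n_cal)

# Stage 1: estimate E[X | W] with a normal linear calibration model.
A = np.column_stack([np.ones(n_cal), w_cal])
gamma, *_ = np.linalg.lstsq(A, x_cal, rcond=None)   # (intercept, slope)

# Main study: only W and the outcome Y are observed.
n = 500
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 0.5, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)         # true slope = 2

# Stage 2: replace W by the imputed E[X | W] in the usual regression.
x_hat = gamma[0] + gamma[1] * w
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x_hat]), y, rcond=None)

naive, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), w]), y, rcond=None)
print("naive slope (regress on W): ", naive[1])     # attenuated toward 0
print("regression-calibrated slope:", beta[1])      # close to the true slope 2
```

Note that the standard errors reported by the stage-2 regression ignore the uncertainty in the stage-1 calibration fit, which is precisely the concern the abstract raises; bootstrapping over both stages is one of the remedies it points to.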
123

Disassociation between arithmetic and algebraic knowledge in mathematical modeling /

Borchert, Katja. January 2003
Thesis (Ph. D.)--University of Washington, 2003. / Vita. Includes bibliographical references (leaves 88-93).
124

Code verification using the method of manufactured solutions

Murali, Vasanth Kumar. January 2002
Thesis (M.S.)--Mississippi State University. Department of Computational Engineering. / Title from title screen. Includes bibliographical references.
125

Truncated multiplications and divisions for the negative two's complement number system

Park, Hyuk, 1973- 28 August 2008
In the design of digital signal processing systems, where single-precision results are required, the power dissipation and area of parallel multipliers can be significantly reduced by truncating the less significant columns and compensating to produce an approximate rounded product. This dissertation presents the design of truncated multiplications of signed inputs utilizing a new number system, the negative fractional two's complement number system, which solves an inherent problem of the conventional two's complement number system. This research also presents a new truncated multiplication method to reduce the errors with only slightly more hardware. Error, area, delay, and dynamic power estimates are performed at the structural HDL level. The new method is also applied to various conventional number systems. For division, which is the slowest and most complex of the arithmetic operations, a new truncated division method is described that yields the same errors as those of true rounding without the additional execution time normally required for true rounding. The new method is also applied to various conventional number systems.
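For context, the following Python sketch simulates a generic constant-correction truncated multiplier — a textbook baseline, not the dissertation's new method or its negative fractional two's complement number system. All names and parameters are hypothetical.

```python
import random

def truncated_multiply(a: int, b: int, n: int = 8, k: int = 6) -> int:
    """Sum only the partial-product bits in columns >= k, plus a constant."""
    total = 0
    for i in range(n):
        for j in range(n):
            if (a >> i) & 1 and (b >> j) & 1 and i + j >= k:
                total += 1 << (i + j)
    # Expected value of the discarded bits for uniformly random inputs:
    # each bit-product is 1 with probability 1/4, and column c (c < k)
    # holds c + 1 bits, so E[discard] = ((k - 1) * 2**k + 1) / 4.
    correction = ((k - 1) * (1 << k) + 1) // 4
    return total + correction

errs = []
for _ in range(10_000):
    a, b = random.randrange(256), random.randrange(256)
    errs.append(truncated_multiply(a, b) - a * b)
print("mean error:", sum(errs) / len(errs))   # close to zero by construction
```

Variable-correction schemes instead estimate the discarded sum from the operand bits that are kept, trading a little extra hardware for a smaller error variance — the general trade-off behind the abstract's claim of reduced errors "with only slightly more hardware".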
126

Adaptive multiscale modeling of polymeric materials using goal-oriented error estimation, Arlequin coupling, and goals algorithms

Bauman, Paul Thomas, 1980- 29 August 2008
Scientific theories that explain how physical systems behave are described by mathematical models which provide the basis for computer simulations of events that occur in the physical universe. These models, being only mathematical characterizations of actual phenomena, are obviously subject to error because of the inherent limitations of all mathematical abstractions. In this work, new theory and methodologies are developed to quantify such modeling error in a special way that resolves a fundamental and standing issue: multiscale modeling, the development of models of events that transcend many spatial and temporal scales. Specifically, we devise the machinery for a posteriori estimates of relative modeling error between a model of fine scale and another of coarser scale, and we use this methodology as a general approach to multiscale problems. The target application is one of critical importance to nanomanufacturing: imprint lithography of semiconductor devices.

The development of numerical methods for multiscale modeling has become one of the most important areas of computational science. Technological developments in the manufacturing of semiconductors hinge upon the ability to understand physical phenomena from the nanoscale to the microscale and beyond. Predictive simulation tools are critical to the advancement of nanomanufacturing semiconductor devices. In principle, they can displace expensive experiments and testing and optimize the design of the manufacturing process. The development of such tools rests on the edge of contemporary methods and high-performance computing capabilities and is a major open problem in computational science.

In this dissertation, a molecular model is used to simulate the deformation of polymeric materials used in the fabrication of semiconductor devices. Algorithms are described which lead to a complex molecular model of polymer materials designed to produce an etch barrier, a critical component in imprint lithography approaches to semiconductor manufacturing. Each application of this so-called polymerization process leads to one realization of a lattice-type model of the polymer, a molecular statics model of enormous size and complexity. This is referred to as the base model for analyzing the deformation of the etch barrier, a critical feature of the manufacturing process. To reduce the size and complexity of this model, a sequence of coarser surrogate models is generated. These surrogates are the multiscale models critical to the successful computer simulation of the entire manufacturing process. The surrogate involves a combination of particle models, the molecular model of the polymer, and a coarse-scale model of the polymer as a nonlinear hyperelastic material. Coefficients for the nonlinear elastic continuum model are determined using numerical experiments on representative volume elements of the polymer model. Furthermore, a simple model of initial strain is incorporated in the continuum equations to model the inherent shrinking of the polymer during polymerization. A coupled particle and continuum model is constructed using a special algorithm designed to provide constraints on a region of overlap between the continuum and particle models. This coupled model is based on the so-called Arlequin method that was introduced in the context of coupling two continuum models with differing levels of discretization.
It is shown that the Arlequin problem for the particle-to-continuum model is well posed in a one-dimensional setting involving linear harmonic springs coupled with a linearly elastic continuum. Several numerical examples are presented. Numerical experiments in three dimensions are also discussed in which the polymer model is coupled to a nonlinear elastic continuum.

Error estimates in local quantities of interest are constructed in order to estimate the modeling error due to the approximation of the particle model by the coupled multiscale surrogate model. The estimates of the error are computed by solving an auxiliary adjoint, or dual, problem that incorporates as data the quantity of interest or its derivatives. The solution of the adjoint problem indicates how the error in the approximation of the polymer model influences the error in the quantity of interest. The error in the quantity of interest represents the relative error between the value of the quantity evaluated for the base model, a quantity typically unavailable or intractable, and the value of the quantity of interest provided by the multiscale surrogate model. To estimate the error in the quantity of interest, a theorem is employed that establishes that the error coincides with the value of the residual functional acting on the adjoint solution plus a higher-order remainder. For each surrogate in a sequence of surrogates generated, the residual functional acting on various approximations of the adjoint is computed.

These error estimates are used to construct an adaptive algorithm whereby the model is adapted by supplying additional fine-scale data in certain subdomains in order to reduce the error in the quantity of interest. The adaptation algorithm involves partitioning the domain and selecting which subdomains are to use the particle model, the continuum model, and where the two overlap. When the algorithm identifies that a region contributes a relatively large amount to the error in the quantity of interest, it is scheduled for refinement by switching the model for that region to the particle model. Numerical experiments on several configurations representative of nano-features in semiconductor device fabrication demonstrate the effectiveness of the error estimate in controlling the modeling error as well as the ability of the adaptive algorithm to reduce the error in the quantity of interest.

There are two major conclusions of this study: 1. an effective and well-posed multiscale model that couples particle and continuum models can be constructed as a surrogate to molecular statics models of polymer networks, and 2. the modeling error for such systems can be estimated with sufficient accuracy to provide the basis for very effective multiscale modeling procedures. The methodology developed in this study provides a general approach to multiscale modeling. The computational procedures, computer codes, and results could provide a powerful tool in understanding, designing, and optimizing an important class of semiconductor manufacturing processes. The study in this dissertation involves all three components of the CAM graduate program requirements: Area A, Applicable Mathematics; Area B, Numerical Analysis and Scientific Computation; and Area C, Mathematical Modeling and Applications.
The multiscale modeling approach developed here is based on the construction of continuum surrogates and coupling them to molecular statics models of the polymer, as well as a posteriori estimates of error and their adaptive control. A detailed mathematical analysis is provided for the Arlequin method in the context of coupling particle and continuum models for a class of one-dimensional model problems. Algorithms are described and implemented that solve the adaptive, nonlinear problem proposed in the multiscale surrogate problem. Large-scale, parallel computations for the base model are also shown. Finally, detailed studies of models relevant to applications to semiconductor manufacturing are presented.
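The linear-algebra core of the adjoint-based error estimate described in this abstract can be seen in a few lines. Below is a minimal Python sketch for a generic linear base model A u = f and a linear quantity of interest; sizes and names are hypothetical, and in the dissertation's nonlinear particle/continuum setting the identity holds only up to the higher-order remainder the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# "Base" model A u = f: a tridiagonal system standing in for the
# (normally intractable) fine-scale model.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = rng.normal(size=n)
u = np.linalg.solve(A, f)                  # base-model solution

# Surrogate solution: a cheap approximation (a few Jacobi sweeps).
u_s = np.zeros(n)
d = np.diag(A)
for _ in range(20):
    u_s = u_s + (f - A @ u_s) / d

# Quantity of interest Q(u) = q . u: a local average of the solution.
q = np.zeros(n)
q[n // 2 - 2 : n // 2 + 3] = 0.2

# Adjoint (dual) problem driven by the quantity of interest.
p = np.linalg.solve(A.T, q)

true_error = q @ u - q @ u_s
estimate = p @ (f - A @ u_s)               # residual functional on the adjoint
print(true_error, estimate)                # identical for a linear model
```

The adjoint solution p weights the residual by how strongly each component of the surrogate's error propagates into the quantity of interest, which is what lets the adaptive algorithm target refinement at the subdomains that matter.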
127

Controlled Lagrangian particle tracking: analyzing the predictability of trajectories of autonomous agents in ocean flows

Szwaykowska, Klementyna 13 January 2014
Use of model-based path planning and navigation is a common strategy in mobile robotics. However, navigation performance may degrade in complex, time-varying environments under model uncertainty because of loss of prediction ability for the robot state over time. Exploration and monitoring of ocean regions using autonomous marine robots is a prime example of an application where use of environmental models can have great benefits in navigation capability. Yet, in spite of recent improvements in ocean modeling, errors in model-based flow forecasts can still significantly affect the accuracy of predictions of robot positions over time, leading to impaired path-following performance. In developing new autonomous navigation strategies, it is important to have a quantitative understanding of error in predicted robot position under different flow conditions and control strategies. The main contributions of this thesis include development of an analytical model for the growth of error in predicted robot position over time and theoretical derivation of bounds on the error growth, where error can be attributed to drift caused by unmodeled components of ocean flow. Unlike most previous works, this work explicitly includes spatial structure of unmodeled flow components in the proposed error growth model. It is shown that, for a robot operating under flow-canceling control in a static flow field with stochastic errors in flow values returned at ocean model gridpoints, the error growth is initially rapid, but slows when it reaches a value of approximately twice the ocean model gridsize. Theoretical values for mean and variance of error over time under a station-keeping feedback control strategy and time-varying flow fields are computed. Growth of error in predicted vehicle position is modeled for ocean models whose flow forecasts include errors with large spatial scales. Results are verified using data from several extended field deployments of Slocum autonomous underwater gliders, in Monterey Bay, CA in 2006, and in Long Bay, SC in 2012 and 2013.
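A toy Monte Carlo experiment conveys the qualitative result: under flow-cancelling control, position error driven by a spatially correlated unmodeled flow grows quickly at first and then levels off at a scale comparable to the flow's decorrelation length. This one-dimensional Python sketch uses hypothetical parameters, not the thesis's model or field data.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 1.0                    # decorrelation length of the unmodeled flow
sigma = 0.1                # std. dev. of the unmodeled flow speed
n_modes, dt, steps, trials = 20, 0.1, 400, 200

mean_err = np.zeros(steps)
for _ in range(trials):
    # Static 1-D random field: random Fourier modes with wavelengths ~ L to 4L.
    k = rng.uniform(2 * np.pi / (4 * L), 2 * np.pi / L, n_modes)
    phase = rng.uniform(0.0, 2 * np.pi, n_modes)
    amp = sigma * np.sqrt(2.0 / n_modes)

    def flow(x):
        return np.sum(amp * np.sin(k * x + phase))

    x_true = 0.0           # predicted position stays at 0: the modeled flow
    for t in range(steps): # is cancelled, so only the unmodeled part drifts
        x_true += flow(x_true) * dt
        mean_err[t] += abs(x_true) / trials

# Error grows quickly at first, then levels off near the decorrelation scale.
print(mean_err[::50])
```

The saturation arises because a drifting vehicle settles near a zero of the static unmodeled field, and such zeros are typically within a correlation length of the start — a simple analogue of the thesis's observation that error growth slows near roughly twice the ocean-model grid size.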
128

Multiple-path stack algorithms for decoding convolutional codes

Haccoun, David January 1974
No description available.
129

The treatment of missing measurements in PCA and PLS models /

Nelson, Philip R. C. January 2002
Thesis (Ph.D.)--McMaster University, 2002. / Adviser: P.A. Taylor and John F. MacGregor. Includes bibliographical references. Also available via World Wide Web.
