41. Leveraged Plans for Measurement System Assessment

Browne, Ryan. January 2009.
In manufacturing, measurement systems are used to control processes and inspect parts, with the goal of producing high-quality product for the customer. Modern quality systems require the periodic assessment of key measurement systems to ensure that they are functioning as expected. Estimating the proportion of the process variation due to the measurement system is an important part of these assessments. The measurement system may be simple, for example with one gauge automatically measuring a single characteristic on every part, or complex, with multiple characteristics, gauges, operators, and so on. Traditional assessment plans involve selecting a random sample of parts and then repeatedly measuring each part under a variety of conditions that depend on the complexity of the measurement system.

In this thesis, we propose new plans for assessing the measurement system variation based on the concept of leveraging. In a leveraged plan, we select parts (non-randomly) with extreme initial values to measure repeatedly. Depending on the context, parts with initial measurements may be available from regular production or from a specially conducted baseline study. We use the term leveraging because of the re-use of parts with extreme values. The term leverage has been used by proponents of the problem-solving system initially proposed by Dorian Shainin, in which parts with relatively large and small values of the response are compared to identify the major causes of variation. The literature, however, contains no discussion of the theory of leveraging or of its application to measurement system assessment. In this thesis, we provide motivation for why leveraging is valuable and apply it to measurement system assessments.

We consider three common contexts: (1) simple measurement systems with one gauge, no operator effects, and no external information about the process performance; (2) measurement systems as above where external information is available, as would be the case, for example, if the measurement system were used for 100% inspection; and (3) measurement systems with multiple operators. For each of these contexts, we develop new leveraged assessment plans and show that they are substantially more efficient than traditional plans in estimating the proportion of the process variation due to the measurement system. In each case, we also provide methodology for planning the leveraged study and for analysing the resulting data.

We then develop another new application of leveraging in the assessment of a measurement system used for 100% inspection. A common practice is to re-measure all parts whose first measurement falls outside the inspection limits. We propose using these repeated measurements to assess the variation in the measurement system. Here the system itself does the leveraging, since repeated measurements are available only on relatively large or small parts. We recommend maximum likelihood estimation, but we show that the ANOVA estimator, although biased, is comparable to the MLE when the measurement system is reliable. We also provide guidelines on how to schedule such assessments.

To outline the thesis: in the first two chapters, we review the contexts described above and, for each, discuss how to characterize measurement system performance, the common assessment plans, and their analysis. In Chapter 3, we introduce the concept of leveraging and provide motivation for why it is effective. Chapters 4 to 7 contain the bulk of the new results. In Chapters 4, 5, and 6, which correspond to the three contexts above, we provide new leveraged plans, show their superiority to the standard plans, and provide a methodology to help design leveraged plans. In Chapter 7, we show how to assess an inspection system using repeated measurements on initially rejected parts. In the final chapter, we discuss potential applications of leveraging to other measurement system assessment problems and to a problem in genetics.
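The abstract does not give the estimators explicitly, but the core idea of a leveraged plan can be illustrated with a small simulation. The sketch below assumes a simple one-gauge model (observed value = true part value + independent Gaussian gauge error) and illustrative parameter values; none of the numbers or variable names come from the thesis. It selects the parts with the most extreme baseline measurements, re-measures them, and estimates the measurement system's share of the process variation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed model and study sizes, for illustration only.
sigma_p, sigma_m = 1.0, 0.3      # process and gauge standard deviations
N, k, n = 200, 10, 6             # baseline parts, parts re-measured, repeats each

true_parts = rng.normal(0.0, sigma_p, N)
baseline = true_parts + rng.normal(0.0, sigma_m, N)  # one initial measurement per part

# Leveraging: non-randomly select the k parts with the most extreme initial values.
extreme = np.argsort(np.abs(baseline - baseline.mean()))[-k:]
repeats = true_parts[extreme, None] + rng.normal(0.0, sigma_m, (k, n))

# The errors in the fresh repeats are independent of the selection, so the pooled
# within-part variance estimates the gauge variance; the baseline sample variance
# estimates the total (process + gauge) variance.
gauge_var = repeats.var(axis=1, ddof=1).mean()
total_var = baseline.var(ddof=1)
print(f"estimated measurement share: {gauge_var / total_var:.3f}, "
      f"true share: {sigma_m**2 / (sigma_p**2 + sigma_m**2):.3f}")
```

The efficiency gain from leveraging shows up in the precision of this ratio for a fixed number of measurements; the thesis quantifies that gain formally for each of the three contexts.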
42. Empirical spectral analysis of random number generators

Zeitler, David. January 2001.
Thesis (Ph.D.), Western Michigan University, 2001. Includes bibliographical references (leaves 94-102).
43. A BIST circuit for random jitter measurement

Lee, Jae Wook. 12 July 2012.
Jitter is a dominant factor contributing to a high bit error rate (BER) in high-speed I/O circuitry, and it degrades the quality of the clock signal from a phase-locked loop (PLL), subsequently impacting a given timing budget. The recent proliferation of systems-on-a-chip (SoCs), aided by technology scaling, makes jitter measurement more challenging as SoCs integrate more I/O circuitry and PLLs within a chip. Jitter has, however, been one of the most difficult parameters to measure accurately when validating high-speed serial I/O circuitry or PLLs, mostly because of its small magnitude. External instruments with full-fledged high-precision measurement hardware, along with comprehensive analysis tools, have been used for jitter measurement, but the increased test cost from long test times, signal-integrity concerns, and human intervention prevents this approach from being used in high-volume manufacturing testing. Built-in self-test (BIST) solutions have recently become attractive as a way to overcome these drawbacks, but complicated analog circuit designs that are sensitive to ever-increasing process variations, together with the associated complex analysis methods, impede their adoption in SoCs.

This dissertation studies practical random jitter measurement methods that achieve measurement accuracy by exploiting a differential approach and that are tester-friendly solutions for automatic test equipment (ATE). We first propose a method of measuring the average value of the random jitter, rather than measuring the jitter at every clock cycle, which can be converted to the root-mean-square (RMS) value of the random jitter, the key indicator of its magnitude. We then propose a simple but accurate delay measurement method that uses the proposed jitter measurement method when a reference signal, such as a golden PLL output in high-speed I/O validation, is not available. The validity of the proposed random jitter measurement method is supported by measurement results from a test chip, and the impact of substrate noise on the signal of interest is also shown with test-chip measurements. Finally, to address the random jitter of a clock signal operating in its functional mode, we demonstrate a novel method for random jitter measurement that exploits the shmoo capability of a low-cost production tester without relying on any BIST circuitry.
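The circuit techniques themselves cannot be reproduced here, but the average-to-RMS conversion the abstract alludes to can be sketched numerically. The snippet below assumes zero-mean Gaussian random jitter, for which the mean absolute deviation satisfies E|J| = sigma * sqrt(2/pi); the clock frequency, jitter magnitude, and this particular conversion are illustrative assumptions, not the dissertation's circuit.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical edge timestamps: nominal 1 GHz clock plus Gaussian random jitter.
period, sigma_j, n = 1e-9, 2e-12, 100_000   # assumed values, for illustration
edges = np.arange(n) * period + rng.normal(0.0, sigma_j, n)

jitter = edges - np.arange(n) * period      # time-interval error per edge
avg_abs = np.abs(jitter - jitter.mean()).mean()

# For zero-mean Gaussian jitter, E|J| = sigma * sqrt(2/pi), so an averaged
# measurement converts to RMS as below (assumed model, not the thesis circuit).
rms_from_avg = avg_abs * np.sqrt(np.pi / 2)
print(f"RMS via conversion: {rms_from_avg*1e12:.2f} ps, "
      f"direct: {jitter.std()*1e12:.2f} ps")
```

Averaging like this is attractive on ATE because it needs far less per-cycle timing resolution than capturing each edge individually.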
44. On some negative dependence structures and their applications

Lo, Ambrose (羅彥博). January 2014.
Recently, the study of negative dependence structures has aroused considerable interest amongst researchers in actuarial science and quantitative risk management. This thesis centres on two extreme negative dependence structures in different dimensions, counter-monotonicity and mutual exclusivity, and develops their novel characterizations and applications to risk management.

Bivariate random vectors are treated in the first part of the thesis, where the characterization of comonotonicity by the optimality of aggregate sums in convex order is extended to its bivariate antithesis, namely counter-monotonicity. It is shown that two random variables are counter-monotonic if and only if their aggregate sum is minimal with respect to convex order. This defining property of counter-monotonicity is then exploited to identify a necessary and sufficient condition for merging counter-monotonic positions to be risk-reducing.

In the second part, the notion of mutual exclusivity is introduced as a multi-dimensional generalization of counter-monotonicity. Various characterizations of mutually exclusive random vectors are presented, including their pairwise counter-monotonic behaviour, minimal convex sum property, and the characteristic function of their aggregate sums. These properties highlight the role of mutual exclusivity as the strongest negative dependence structure in a multi-dimensional setting. As an application, the practical problem of deriving general lower bounds on three common convex functionals of aggregate sums with arbitrary marginal distributions is considered. The sharpness of these lower bounds is characterized via the mutual exclusivity of the underlying random variables. Compared to existing bounds in the literature, the new lower bounds enjoy the advantages of generality and simplicity.

Doctor of Philosophy, Statistics and Actuarial Science.
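The minimal-convex-sum characterization can be checked empirically. In the sketch below, both marginals are taken to be standard exponential (an arbitrary choice for illustration, not from the thesis); the counter-monotonic pair is (F⁻¹(U), F⁻¹(1−U)), and any convex functional of the aggregate sum, such as the variance or a stop-loss premium, should be smallest under that coupling.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
u = rng.uniform(size=n)

# Illustrative marginals (assumed): both components Exp(1).
q = lambda p: -np.log1p(-p)      # quantile function of Exp(1)

sums = {
    "counter-monotonic": q(u) + q(1 - u),           # (F^-1(U), F^-1(1-U))
    "independent":       q(u) + q(rng.uniform(size=n)),
    "comonotonic":       q(u) + q(u),                # (F^-1(U), F^-1(U))
}

# Convex order shows up in any convex functional of the sum, e.g. the
# stop-loss premium E[(S - d)_+]; the counter-monotonic sum is smallest.
d = 3.0
for name, s in sums.items():
    print(f"{name:18s} var={s.var():.3f}  "
          f"stop-loss E[(S-{d})+]={np.maximum(s - d, 0).mean():.4f}")
```

All three sums share the same mean, so the differences in variance and stop-loss premium isolate the dependence effect that the thesis characterizes.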
45. Site-specific comparisons of random vibration theory-based and traditional seismic site response analysis

Ozbey, Mehmet Cem. 28 August 2008.
Not available.
46. Estimation of the Mean Value Function of a Gaussian Process

Driscoll, Michael Francis, 1944-. January 1971.
No description available.
47. The Weak Convergence of Recurrent Random Walk Conditioned by a Late Return to Zero

Kaigh, William Daniel, 1944-. January 1973.
No description available.
48. Topics in Combinatorics and Random Matrix Theory

Novak, Jonathan. 27 September 2009.
Motivated by the longest increasing subsequence problem, we examine sundry topics at the interface of enumerative/algebraic combinatorics and random matrix theory. We begin with an expository account of the increasing subsequence problem, contextualizing it as an "exactly solvable" Ramsey-type problem and introducing the RSK correspondence. New proofs and generalizations of some of the key results in increasing subsequence theory are given, including Regev's single scaling limit, Gessel's Toeplitz determinant identity, and Rains' integral representation. The double scaling limit (the Baik-Deift-Johansson theorem) is briefly described, although we have no new results in that direction.

Following up on the appearance of determinantal generating functions in increasing subsequence type problems, we are led to a connection between combinatorics and the ensemble of truncated random unitary matrices, which we describe in terms of Fisher's random-turns vicious walker model from statistical mechanics. We prove that the moment generating function of the trace of a truncated random unitary matrix is the grand canonical partition function for Fisher's random-turns model with reunions.

Finally, we consider unitary matrix integrals of a very general type, namely the "correlation functions" of entries of Haar-distributed random matrices. We show that these expand perturbatively as generating functions for class multiplicities in symmetric functions of Jucys-Murphy elements, thus addressing a problem originally raised by De Wit and 't Hooft and recently resurrected by Collins. We argue that this expansion is the CUE counterpart of genus expansion.

Thesis (Ph.D., Mathematics & Statistics), Queen's University, 2009.
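For readers unfamiliar with the abstract's starting point, the longest increasing subsequence of a permutation can be computed by patience sorting, whose pile count equals the length of the first row of the RSK insertion tableau. The sketch below is a standard computation, not code from the thesis.

```python
import bisect
import random

def lis_length(seq):
    """Length of the longest increasing subsequence via patience sorting.

    The pile count equals the length of the first row of the RSK
    insertion tableau of the sequence.
    """
    piles = []
    for x in seq:
        i = bisect.bisect_left(piles, x)   # leftmost pile whose top is >= x
        if i == len(piles):
            piles.append(x)                # start a new pile
        else:
            piles[i] = x                   # place x on that pile
    return len(piles)

# For a uniform random permutation of n, the LIS length concentrates around
# 2*sqrt(n) (Vershik-Kerov / Logan-Shepp); the Baik-Deift-Johansson theorem
# mentioned in the abstract describes its Tracy-Widom fluctuations.
random.seed(0)
n = 10_000
perm = random.sample(range(n), n)
print(lis_length(perm), round(2 * n ** 0.5))
```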
49. On approximate normalizing transformations

D'Avirro, Mario Michael Anthony. January 1974.
No description available.
50. The Probabilistic Method and Random Graphs

Ketelboeter, Brian. 1 October 2012.
The probabilistic method in combinatorics is a nonconstructive tool popularized through the work of Paul Erdős. Many difficult problems can be solved through a relatively simple application of probability theory, leading to solutions that are better than those given by known constructive methods. This thesis presents some of the basic tools used throughout the probabilistic method, along with some of its applications in Ramsey theory, graph theory, and other areas of combinatorial analysis.

The thesis then covers the topic of random graphs. The theory of random graphs was founded during the late fifties and early sixties to study questions involving the effect of probability distributions upon graphical properties. This thesis presents some of the basic results involving graph models and graph properties.
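As a concrete taste of the method, Erdős's classic union-bound argument shows R(k,k) > n whenever C(n,k) * 2^(1 - C(k,2)) < 1: the expected number of monochromatic k-cliques in a uniformly random 2-colouring of K_n is then below one, so some colouring has none. The sketch below is a standard textbook example, not necessarily one worked in the thesis.

```python
from math import comb

def ramsey_lower_bound(k):
    """Largest n for which the union-bound certificate gives R(k,k) > n.

    A random 2-colouring of K_n has expected monochromatic-K_k count
    C(n,k) * 2^(1 - C(k,2)); if this is < 1, some colouring has no
    monochromatic K_k, so R(k,k) > n.
    """
    n = k
    while comb(n + 1, k) * 2 ** (1 - comb(k, 2)) < 1:
        n += 1
    return n

for k in range(3, 8):
    print(f"R({k},{k}) > {ramsey_lower_bound(k)}")
```

The bound grows roughly like 2^(k/2), illustrating how a short counting argument certifies the existence of an object no explicit construction is known to match.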
