111 |
Scales and their Use in the Piano Music of Debussy and Ravel / Christensen, Elizabeth Ann, 01 1900
The purpose of this study is to acquaint the reader with certain scales, not merely as they are used to provide the theoretical "building stones" of all music, but as they have been used as actual scale passages in the piano music of two twentieth-century composers. This is a study of the use of various scales in the piano music of Debussy and Ravel. A list of the piano music of these composers which was available for the present investigation is given in the Bibliography. A careful analysis of the music has been necessary in order to determine: first, the types of scales the composer uses; second, the interpretation of the scale in its context; third, the ways in which some scale passages are employed to help project musical thought in the composition; and fourth, the frequency of occurrence of types of scale passages.
|
112 |
Development of confidence intervals for process capability assessment in short run manufacturing environment using bootstrap methodology / Knezevic, Zec Gorana, 01 October 2003
No description available.
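No abstract is available, but the title names a standard construction: a bootstrap confidence interval for a process capability index estimated from a short production run. Below is a minimal percentile-bootstrap sketch for the Cpk index; it is a generic illustration, not the thesis's procedure, and the specification limits and sample are hypothetical.

```python
import numpy as np

def cpk(x, lsl, usl):
    """Process capability index Cpk for sample x and spec limits [lsl, usl]."""
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

def bootstrap_ci_cpk(x, lsl, usl, n_boot=2000, alpha=0.05, seed=None):
    """Percentile-bootstrap (1 - alpha) confidence interval for Cpk."""
    rng = np.random.default_rng(seed)
    boot = np.array([cpk(rng.choice(x, len(x), replace=True), lsl, usl)
                     for _ in range(n_boot)])
    return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))

# Hypothetical short run of 25 measurements with spec limits [9.0, 11.0].
x = np.random.default_rng(1).normal(10.1, 0.25, size=25)
print(bootstrap_ci_cpk(x, lsl=9.0, usl=11.0, seed=2))
```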
|
113 |
Using the Timeline Followback to Identify Time Windows Representative of Annual Posttreatment Drinking / Gioia, Christopher J., 01 January 2011
Using 12-month posttreatment Timeline Followback drinking reports, data extrapolated from shorter time windows (e.g., 1 month, 6 months) were used to estimate total annual drinking. The objective was to determine whether data from a shorter time window would provide an estimate of annual drinking sufficiently consistent with the full-year report that it could be used in place of the full report. Data for this study were obtained from problem drinkers who voluntarily participated in a randomized controlled trial of a mail-based intervention. Complete follow-up data were obtained for 467 of the 825 participants who completed a 12-month Timeline Followback of their postintervention drinking. The results of this study suggest that 3 months is the minimum time window necessary to represent annual posttreatment drinking with alcohol abusers. The major implication of this finding is that alcohol treatment outcome studies can obtain follow-up data over a shorter, more time- and resource-efficient posttreatment window with little to no loss in the representativeness of those data.
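The extrapolation step itself is simple proportional scaling. A minimal sketch follows; the scaling rule is an assumption about the mechanics, and the data are hypothetical, not the study's:

```python
def annualize(daily_drinks, window_days):
    """Scale the total drinks reported in a short Timeline Followback
    window up to a 365-day estimate by simple proportion (illustrative;
    not necessarily the study's exact procedure)."""
    window_total = sum(daily_drinks[:window_days])
    return window_total * 365 / window_days

# Hypothetical daily drink counts; scale a 3-month (90-day) window.
daily = [2, 0, 5, 3] * 100
print(annualize(daily, window_days=90))
```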
|
114 |
Statistical inference for inequality measures based on semi-parametric estimators / Kpanzou, Tchilabalo Abozou, 12 1900
Thesis (PhD)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: Measures of inequality, also used as measures of concentration or diversity, are very popular in economics, especially for measuring the inequality in income or wealth within a population and between populations. However, they have applications in many other fields, e.g. in ecology, linguistics, sociology, demography, epidemiology and information science.

A large number of measures have been proposed to measure inequality. Examples include the Gini index, the generalized entropy, the Atkinson and the quintile share ratio measures. Inequality measures are inherently dependent on the tails of the population (underlying distribution) and their estimators are therefore typically sensitive to data from these tails (nonrobust). For example, income distributions often exhibit a long tail to the right, leading to the frequent occurrence of large values in samples. Since the usual estimators are based on the empirical distribution function, they are usually nonrobust to such large values. Because heavy-tailed distributions often occur in real-life data sets, remedial action needs to be taken in such cases.

The remedial action can be either a trimming of the extreme data or a modification of the (traditional) estimator to make it more robust to extreme observations. In this thesis we follow the second option, modifying the traditional empirical distribution function as estimator to make it more robust. Using results from extreme value theory, we develop more reliable distribution estimators in a semi-parametric setting. These new estimators of the distribution then form the basis for more robust estimators of the measures of inequality. These estimators are developed for the four most popular classes of measures, viz. Gini, generalized entropy, Atkinson and quintile share ratio. Properties of such estimators are studied especially via simulation. Using limiting distribution theory and the bootstrap methodology, approximate confidence intervals were derived. Through the various simulation studies, the proposed estimators are compared to the standard ones in terms of mean squared error, relative impact of contamination, confidence interval length and coverage probability. In these studies the semi-parametric methods show a clear improvement over the standard ones. The theoretical properties of the quintile share ratio have not been studied much; consequently, we also derive its influence function as well as the limiting normal distribution of its nonparametric estimator. These results have not previously been published.

In order to illustrate the methods developed, we apply them to a number of real-life data sets, showing how the methods can be used in practice for inference. In order to choose between the candidate parametric distributions, use is made of a measure of sample representativeness from the literature. These illustrations show that the proposed methods can be used to reach satisfactory conclusions in real-life problems.
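To make the idea concrete, here is a hedged sketch of the empirical Gini estimator together with one simple semi-parametric variant: fit a Pareto tail above a threshold via the Hill estimator and replace the largest observations with fitted quantiles before computing the index. This is a simplified stand-in for the thesis's EVT-based distribution estimators, not the exact construction; the sample, the choice k = 50 and the plotting positions are assumptions.

```python
import numpy as np

def gini(x):
    """Empirical Gini index via the sorted-sample formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return 2 * np.sum(i * x) / (n * np.sum(x)) - (n + 1) / n

def gini_pareto_tail(x, k=50):
    """Semi-parametric Gini: replace the k largest observations with
    quantiles of a Pareto tail fitted by the Hill estimator, then apply
    the empirical formula.  Simplified illustration, not the thesis's
    exact estimator."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    u = x[n - k - 1]                            # tail threshold
    gamma = np.mean(np.log(x[n - k:] / u))      # Hill tail-index estimate
    p = (np.arange(n - k, n) + 0.5) / n         # tail plotting positions
    x_tail = u * ((k / n) / (1 - p)) ** gamma   # fitted Pareto quantiles
    return gini(np.concatenate([x[: n - k], x_tail]))

income = np.random.default_rng(0).pareto(2.5, size=2000) + 1  # heavy-tailed
print(gini(income), gini_pareto_tail(income))
```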
|
115 |
Edgeworth-corrected small-sample confidence intervals for ratio parameters in linear regression / Binyavanga, Kamanzi-wa, 03 1900
Dissertation (PhD)--Stellenbosch University, 2002. / ENGLISH ABSTRACT: In this thesis we construct a central confidence interval for a smooth scalar non-linear function of the parameter vector β in a single general linear regression model Y = Xβ + ε. We do this by first developing an Edgeworth expansion for the distribution function of a standardised point estimator; the confidence interval is then constructed on the basis of this expansion. Simulation studies reported at the end of the thesis show the interval to perform well in many small-sample situations.

Central to the development of the Edgeworth expansion is our use of the index notation which, in statistics, has been popularised by McCullagh (1984, 1987).

The contributions made in this thesis are of two kinds. We revisit the complex McCullagh index notation, modify and extend it in certain respects, and repackage it in a manner that is more accessible to other researchers.

As for the new contributions, in addition to the introduction of a new small-sample confidence interval, we extend the theory of stochastic polynomials (SP) in three respects. First, a method, which we believe to be the simplest and most transparent to date, is proposed for deriving their cumulants. Secondly, the theory of the cumulants of the SP is developed both in the context of Edgeworth expansion and in the regression setting. Thirdly, our new method enables us to propose a natural alternative to the method of Hall (1992a, 1992b) regarding skewness reduction in Edgeworth expansions.
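As a much simpler analogue of this construction, the following sketch builds a confidence interval for a mean by numerically inverting a one-term Edgeworth expansion for the distribution function of the standardised sample mean. It is a one-sample illustration of the general recipe, not the thesis's regression-ratio interval; the expansion term and the numerical inversion are standard textbook choices, and the data are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def edgeworth_cdf(x, n, skew):
    """One-term Edgeworth expansion for the CDF of the standardised mean
    S_n = sqrt(n)(xbar - mu)/sigma:
    Phi(x) - phi(x) * skew * (x^2 - 1) / (6 * sqrt(n))."""
    return norm.cdf(x) - norm.pdf(x) * skew * (x**2 - 1) / (6 * np.sqrt(n))

def edgeworth_ci(x, alpha=0.05):
    """Confidence interval for the mean from Edgeworth-corrected quantiles."""
    x = np.asarray(x, dtype=float)
    n, xbar, s = len(x), x.mean(), x.std(ddof=1)
    skew = np.mean(((x - xbar) / s) ** 3)       # sample skewness
    q = lambda p: brentq(lambda t: edgeworth_cdf(t, n, skew) - p, -10, 10)
    lo, hi = q(alpha / 2), q(1 - alpha / 2)
    # Invert S_n = sqrt(n)(xbar - mu)/s  =>  mu = xbar - s * S_n / sqrt(n)
    return xbar - s * hi / np.sqrt(n), xbar - s * lo / np.sqrt(n)

sample = np.random.default_rng(2).exponential(1.0, size=30)  # skewed data
print(edgeworth_ci(sample))
```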
|
116 |
Algorithmic Developments in Monte Carlo Sampling-Based Methods for Stochastic Programming / Pierre-Louis, Péguy, January 2012
Monte Carlo sampling-based methods are frequently used in stochastic programming when an exact solution is not possible. In this dissertation, we develop two sets of Monte Carlo sampling-based algorithms to solve classes of two-stage stochastic programs. These algorithms follow a sequential framework such that a candidate solution is generated and evaluated at each step. If the solution is of the desired quality, then the algorithm stops and outputs the candidate solution along with an approximate (1 - α) confidence interval on its optimality gap. The first set of algorithms, which we refer to as the fixed-width sequential sampling methods, generate a candidate solution by solving a sampling approximation of the original problem. Using an independent sample, a confidence interval is built on the optimality gap of the candidate solution. The procedures stop when the confidence interval width plus an inflation factor falls below a pre-specified tolerance ε. We present two variants: the fully sequential procedures use deterministic, non-decreasing sample-size schedules, whereas in the other variant, the sample size at the next iteration is determined using current statistical estimates. We establish the desired asymptotic properties and present computational results. In the second set of sequential algorithms, we combine deterministically valid and sampling-based bounds. These algorithms, labeled sampling-based sequential approximation methods, take advantage of certain characteristics of the models, such as convexity, to generate candidate solutions and deterministic lower bounds through Jensen's inequality. A point estimate of the optimality gap is calculated by generating an upper bound through sampling. The procedure stops when the point estimate of the optimality gap falls below a fraction of its sample standard deviation. We show asymptotically that this algorithm finds a solution of the desired quality tolerance. We present variance reduction techniques and show their effectiveness through an empirical study.
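A hedged sketch of the fixed-width sequential idea, on a toy one-dimensional problem (minimise E[(x - ξ)²], whose sampling approximation is solved by the sample mean): grow the sample, estimate the optimality gap of the candidate on a fresh sample, and stop when the one-sided confidence interval width plus an inflation factor falls below ε. The toy problem, the gap estimator and the n^(-1/2) inflation factor are illustrative assumptions, not the dissertation's exact procedures.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
sample_xi = lambda n: rng.normal(5.0, 2.0, size=n)   # toy uncertainty
h = lambda x, xi: (x - xi) ** 2                      # toy second-stage cost

def gap_estimate(x_cand, n):
    """Gap estimate in the spirit of Mak-Morton-Wood: evaluate the candidate
    and the sample-optimal solution on one fresh sample (for this toy
    problem the sample optimum is the sample mean)."""
    xi = sample_xi(n)
    gap_obs = h(x_cand, xi) - h(xi.mean(), xi)
    return gap_obs.mean(), gap_obs.std(ddof=1) / np.sqrt(n)

def sequential_sampling(eps=0.05, alpha=0.05, n0=100, growth=1.5):
    """Fixed-width sequential sketch: grow the sample until the one-sided
    CI width on the optimality gap, plus a vanishing inflation factor,
    drops below eps."""
    n = n0
    while True:
        x_cand = sample_xi(n).mean()          # candidate from an SAA solve
        g, se = gap_estimate(x_cand, n)
        width = max(g, 0) + norm.ppf(1 - alpha) * se
        if width + 1 / np.sqrt(n) <= eps:     # inflation factor: n^(-1/2)
            return x_cand, width, n
        n = int(n * growth)

print(sequential_sampling())
```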
|
117 |
Méthodes et outils ensemblistes pour le pré-dimensionnement de systèmes mécatroniques / Set-membership methods and tools for the pre-design of mechatronic systems / Raka, Sid-Ahmed, 21 June 2011
Pre-sizing takes place upstream of the system design process: from a set of requirements, it consists in determining a set of possible technical solutions in an often very large search space, structured by the uncertain and partial knowledge available about the future system and its environment. Long before making the final choice and the final sizing of the system components, the designers have to use specifications, models of the main phenomena, and experience feedback to formalize constraints, make simplifying assumptions, consider various architectures and make choices based on imprecise data (i.e., intervals, finite sets of possible values, etc.). Since the choices made during pre-sizing often involve strong commitments for the subsequent development, it is very important to detect potential inconsistencies early and to verify the satisfaction of the requirements in an uncertain context. In this work, a methodology based on the exchange of set-membership models between principals and suppliers is proposed for the verification of requirements. This methodology is fully consistent with a design paradigm based on the reduction of uncertainties. After a part dedicated to the modeling of mechatronic systems, special attention is paid to dealing with deterministic uncertainties affecting continuous values: techniques based on interval analysis, such as constraint satisfaction (interval CSP), reachability computations for dynamic knowledge models, and the identification of set-membership behavioral models, are used and developed, providing a set of tools that implement the proposed methodology and contribute to the goal of verification with full and guaranteed coverage.
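To give the flavour of these interval techniques, here is a minimal sketch of interval arithmetic with a forward-backward contractor for the constraint x·y = z, the elementary step of an interval CSP. The torque/speed/power requirement is a made-up example, and the division routine assumes intervals that do not contain zero; real tools (and the thesis's methods) handle far more general cases.

```python
# Minimal interval arithmetic and a forward-backward contractor for the
# constraint x * y = z (intervals as (lo, hi) tuples).
def mul(a, b):
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("empty intersection: requirements are inconsistent")
    return (lo, hi)

def div(a, b):
    """Interval division, assuming 0 is not in b (enough for this sketch)."""
    return mul(a, (1.0 / b[1], 1.0 / b[0]))

def contract_mul(x, y, z):
    """Contract x, y, z under x * y = z (forward-backward propagation)."""
    z = intersect(z, mul(x, y))   # forward pass
    x = intersect(x, div(z, y))   # backward passes
    y = intersect(y, div(z, x))
    return x, y, z

# Made-up pre-sizing check: torque [Nm] * speed [rad/s] = power [W],
# with the requirement power in [900, 1100] W.
torque, speed, power = (1.0, 20.0), (50.0, 150.0), (900.0, 1100.0)
print(contract_mul(torque, speed, power))   # torque contracts to (6.0, 20.0)
```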
|
118 |
The Missae De Beata Virgine c. 1500-1520: A Study of Transformation From Monophonic to Polyphonic Modality / Woodruff, Lawrence Theodore, 08 1900
While musical sources and documents from throughout the Middle Ages reveal that mode was an enduring and consciously derived trait of monophonic chant, modality in later polyphony shares neither the historical span nor the theoretical clarity of its monophonic counterpart. Modern theorists are left with little more than circumstantial evidence of the early development of modality in polyphony. This study attempts to shed light on the problem by detailed analysis of a select body of paraphrase masses from the early sixteenth century. First, it correlates the correspondence between the paraphrased voice and the original chant, establishing points of observation that become the basis of melodic analysis. Then, these points are correlated with known rules of counterpoint. Exceptions are identified and examined for their potential to place emphasis on individual mode-defining pitches. A set of tools is derived for quantifying the relative strength of cadential actions. Levels of cadence are defined, ranging from full, structural cadences to surface-level accentuations of individual pitches by sixth-to-octave dyadic motions. These cadence levels are traced through the Missae de beata virgine repertoire from c. 1500-1520, a repertoire that includes masses of Josquin, Brumel, La Rue, Isaac, and Rener. While the Credos, based on two chant sources—one early (11th century) and one later (15th century)—showed little modal consistency, the Kyries show some suggestion of purposeful modal expression, and the Glorias show even greater implications. Results of the study have potential application in sixteenth-century music scholarship to such important issues as musica ficta, performance practice, text underlay, and form.
|
119 |
A Study of Lu-Pitch Name Signification: A Translation with Commentary / Tung, Anne An-Yi Lin, 12 1900
The purpose of this thesis is to provide a translation of documents on lu from two primary sources for a study of the theory of lu, with the main focus on the interpretation and the signification attached to each of the twelve lu-pitch names. To establish the background of the lu-lu system, an explanation of its acoustical properties is first presented. Based on the most important and widely used tonal system in ancient China -- the san-fen-sun-i system -- the process of tone generation is illustrated. Methods proposed by the main theorists who engaged in the discussion of the system of lu are presented. The introduction of the concept of yin and yang in reference to the twelve lu, and the signification of the lu-lu system in relation to the human and natural world, are also discussed. The main body of this study is devoted to the translation of written references on the meaning of the twelve lu. The first part is a translation of selected passages from The Anthology of the Historical Document of Ancient Chinese Music, edited by Tsai-Ping Liang; the second part is a translation of a modern exegesis from an article written by Deh-I Liu. This translation offers a perspective for understanding the concept of lu-pitch names from ancient points of view in relation to philosophy, education, religion, and science.
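The san-fen-sun-i generation can be sketched numerically: starting from a reference string length (traditionally 81), alternately take two-thirds ("subtract a third") and four-thirds ("add a third") of the previous length, and fold the results into one octave to obtain the twelve lu. The strict alternation below is the common textbook presentation; historical sources differ on details of the later steps, so treat this as a simplified sketch.

```python
def twelve_lu(start=81.0):
    """Generate the twelve lu string lengths by the san-fen-sun-i method
    (simplified strict alternation of 2/3 and 4/3)."""
    lengths = [start]
    for step in range(11):
        factor = 2 / 3 if step % 2 == 0 else 4 / 3
        lengths.append(lengths[-1] * factor)
    # Fold each length into the octave (start/2, start]; longer string =
    # lower pitch, so sorting descending lists the lu in ascending pitch
    # order from the fundamental.
    folded = []
    for L in lengths:
        while L > start:
            L /= 2
        while L <= start / 2:
            L *= 2
        folded.append(L)
    return sorted(folded, reverse=True)

print([round(L, 2) for L in twelve_lu()])
```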
|
120 |
Quantile-based methods for prediction, risk measurement and inference / Ally, Abdallah K., January 2010
The focus of this thesis is on the employment of theoretical and practical quantile methods in addressing prediction, risk measurement and inference problems. From a prediction perspective, the problem of creating model-free prediction intervals for a future unobserved value of a random variable drawn from a sample distribution is considered. With the objective of reducing prediction coverage error, two common distribution transformation methods, based on the normal and exponential distributions, are presented and theoretically demonstrated to attain exact and error-free prediction intervals respectively. The second problem studied is the estimation of expected shortfall via kernel smoothing. The goal here is to introduce methods that will reduce the estimation bias of expected shortfall. To this end, several one-step bias-correction expected shortfall estimators are presented, investigated via simulation studies and compared with uncorrected estimators. The third problem is that of constructing simultaneous confidence bands for quantile regression functions when the predictor variables are constrained within a region. In this context, a method is introduced that makes use of asymmetric Laplace errors in conjunction with a simulation-based algorithm to create confidence bands for quantile and interquantile regression functions. Furthermore, the simulation approach is extended to an ordinary least squares framework to build simultaneous bands for quantile functions of the classical regression model, both when the model errors are normally distributed and when this assumption is not fulfilled. Finally, attention is directed towards the construction of prediction intervals for realised volatility, exploiting an alternative volatility estimator based on the difference of two extreme quantiles. The proposed approach makes use of an AR-GARCH procedure to model time series of intraday quantiles and to forecast the predictive distribution of intraday returns. Moreover, two simple adaptations of an existing model are also presented.
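For the first of these problems, the basic model-free construction is easy to state: for an i.i.d. sample, the interval between two order statistics (X_(i), X_(j)) covers a future observation with probability exactly (j - i)/(n + 1). The sketch below illustrates only this starting point, not the thesis's transformation methods; the sample is hypothetical.

```python
import numpy as np

def prediction_interval(x, alpha=0.10):
    """Model-free prediction interval for one future observation based on
    order statistics: (X_(i), X_(j)) covers a new draw with probability
    (j - i)/(n + 1), so choose i, j symmetrically for ~(1 - alpha)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = max(int(np.floor((alpha / 2) * (n + 1))), 1)
    j = min(int(np.ceil((1 - alpha / 2) * (n + 1))), n)
    return x[i - 1], x[j - 1]   # 1-indexed order statistics

sample = np.random.default_rng(4).lognormal(size=99)
print(prediction_interval(sample))   # with n = 99, exactly 90% coverage
```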
|