141

Comparação de algoritmos usados na construção de mapas genéticos / Comparison of algorithms used in the construction of genetic linkage maps

Mollinari, Marcelo 23 January 2008 (has links)
Genetic linkage maps are linear arrangements showing the order of and distance between loci on the chromosomes of a particular species. Recently, the wide availability of molecular markers has made such maps increasingly saturated, so efficient methods are needed for their construction. One of the steps that deserves particular attention in the construction of genetic linkage maps is the ordering of the genetic markers within each linkage group. This ordering is a special case of the classic traveling salesman problem (TSP), which consists of choosing the best order among all possible ones. However, exhaustive search becomes unfeasible when the number of markers is large, and a viable alternative in such cases is to use algorithms that provide approximate solutions.
The aim of this work was therefore to evaluate the efficiency of the algorithms Try (TRY), Seriation (SER), Rapid Chain Delineation (RCD), Recombination Counting and Ordering (RECORD) and Unidirectional Growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood via hidden Markov chains), used together with the error-verification algorithm RIPPLE, in the construction of genetic linkage maps. To this end, a linkage map of a hypothetical diploid, monoecious plant species was simulated, containing 21 markers at a fixed distance of 3 centimorgans from one another. Using Monte Carlo simulation, 550 F2 populations of 100 and 400 individuals were generated with different combinations of dominant and codominant markers; missing-data rates of 10% and 20% were also simulated. The results showed that the algorithms TRY and SER performed well in all simulated situations, even in the presence of a large amount of missing data and of dominant markers linked in repulsion, and can therefore be recommended for practical use. The algorithms RECORD and UG performed well in the absence of dominant markers linked in repulsion and can be recommended when few dominant markers are present. Among all the algorithms, RCD was the least efficient. The criterion LHMC, applied with the RIPPLE algorithm, gave the best results when the goal is to check for ordering errors.
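For illustration, the adjacency criteria named above all score a candidate order from pairwise statistics; SARF, for example, sums the recombination fractions between adjacent markers and prefers the order that minimises that sum. The sketch below is a generic illustration of SARF, PARF and a brute-force search (marker names and recombination fractions are invented; none of this code is from the thesis), and the combinatorial blow-up of the exhaustive search is exactly what motivates the approximate algorithms compared here.

```python
from itertools import permutations
import math

def sarf(order, rf):
    """Sum of adjacent recombination fractions (SARF) for a candidate marker order."""
    return sum(rf[a][b] for a, b in zip(order, order[1:]))

def parf(order, rf):
    """Product of adjacent recombination fractions (PARF), computed on the log scale."""
    return math.exp(sum(math.log(rf[a][b]) for a, b in zip(order, order[1:])))

def best_order_exhaustive(markers, rf, criterion=sarf):
    """Exhaustive search over all orders; feasible only for a handful of markers,
    which is why approximate algorithms such as SER, RCD, RECORD and UG are needed
    for saturated maps."""
    return min(permutations(markers), key=lambda order: criterion(order, rf))

# Toy symmetric matrix of two-point recombination fractions for four markers.
rf = {
    "M1": {"M2": 0.03, "M3": 0.06, "M4": 0.09},
    "M2": {"M1": 0.03, "M3": 0.03, "M4": 0.06},
    "M3": {"M1": 0.06, "M2": 0.03, "M4": 0.03},
    "M4": {"M1": 0.09, "M2": 0.06, "M3": 0.03},
}
print(best_order_exhaustive(list(rf), rf))  # -> ('M1', 'M2', 'M3', 'M4') (tied with its reverse)
```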
142

Numerical analysis in energy dependent radiative transfer

Czuprynski, Kenneth Daniel 01 December 2017 (has links)
The radiative transfer equation (RTE) models the transport of radiation through a participating medium. In particular, it captures how radiation is scattered, emitted, and absorbed as it interacts with the medium. This process arises in numerous application areas, including neutron transport in nuclear reactors, radiation therapy in cancer treatment planning, and the investigation of forming galaxies in astrophysics. As a result, there is great interest in the solution of the RTE in many different fields. We consider the energy-dependent form of the RTE and allow media containing regions of negligible absorption. This case is not often considered because of the additional dimension and the stability issues that arise when absorption is allowed to vanish. In this thesis, we establish the existence and uniqueness of the underlying boundary value problem. We then develop a stable numerical algorithm for solving the RTE and, alongside the construction of the method, derive corresponding error estimates. To show the validity of the algorithm in practice, we apply it to four example problems, which we also use to validate our theoretical results.
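For context, one common way to write the energy-dependent RTE with an inflow boundary condition is shown below; the notation (intensity u, absorption coefficient σ_a, scattering coefficient σ_s, scattering kernel k, source f) is generic rather than quoted from the thesis, and regions of negligible absorption correspond to σ_a ≈ 0.

```latex
% One common form of the energy-dependent RTE (generic notation, not quoted from the thesis):
% u(x, \omega, E) is the intensity at position x, direction \omega and energy E.
\[
\omega \cdot \nabla_x u(x,\omega,E)
  + \bigl(\sigma_a(x,E) + \sigma_s(x,E)\bigr)\, u(x,\omega,E)
  = \sigma_s(x,E) \int_0^{\infty}\!\!\int_{S^2}
      k(x,\omega\cdot\omega',E',E)\, u(x,\omega',E')\, d\omega'\, dE'
  + f(x,\omega,E),
\]
% together with an inflow condition $u = u_{\mathrm{in}}$ on
% $\{(x,\omega): x \in \partial X,\ \omega \cdot n(x) < 0\}$.
```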
143

Augmented testing and effects on item and proficiency estimates in different calibration designs

Wall, Nathan Lane 01 May 2011 (has links)
Broadening the term augmented testing to include any combination of multiple measures used to assess examinee performance on a single construct, this dissertation investigates IRT item parameter and proficiency estimates. Its intent is to determine whether different IRT calibration designs lead to differences in item and proficiency parameter estimates and to understand the nature of those differences. Examinees were sampled from a testing program in which each examinee was administered three mathematics assessments measuring a broad mathematics domain at the high school level. This sample was used in a real-data analysis of the item and proficiency estimates, and a simulation study based on the real data was also conducted. The factors investigated in the real-data study were three IRT calibration designs and two IRT models. The calibration designs were: calibrating each assessment separately, calibrating all assessments jointly, and calibrating items separately within three distinct content areas. Joint calibration refers to using IRT methodology to calibrate two or more tests, administered to a single group, together so as to place all of the items on a common scale. The two IRT models were the one- and three-parameter logistic models. Five proficiency estimators were also investigated: maximum likelihood estimates, expected a posteriori (EAP), maximum a posteriori, summed-score EAP, and test characteristic curve estimates. The simulation study included the same calibration designs and IRT models, but the data were simulated with varying levels of correlation among the proficiencies to determine the effect on the item parameter estimates. The main findings indicate that item parameter and proficiency estimates are affected by the IRT calibration design. The discrimination parameter estimates of the three-parameter model were larger under the joint calibration design for one assessment but not for the other two; since equal item discrimination is an assumption of the 1-PL model, this finding raises questions about the degree of model fit when the 1-PL model is used. Items on a second assessment had lower difficulty parameter estimates in the joint calibration design, while the item parameter estimates of the other two assessments were higher. Differences in proficiency estimates between calibration designs were also found, and these resulted in examinees being inconsistently classified into performance categories. Differences were also observed with respect to the choice of IRT model. Finally, as the level of correlation among proficiencies in the simulated data increased, the differences observed in the item parameter estimates decreased. Based on these findings, IRT item parameter estimates resulting from different calibration designs should not be used interchangeably, and practitioners who use item pools should base the calibration design for pool refreshment on the one originally used to create the pool. Limitations of this study include the use of a single dataset consisting of high school examinees in only one subject area, so generalization of the findings to other content areas or grade levels should be made with caution.
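For reference, the two IRT models being compared assign the probability of a correct response as a logistic function of proficiency θ; the sketch below uses the standard textbook formulas (it is not code from the dissertation, and the item parameters are invented) to show why items with unequal discriminations strain a 1-PL calibration.

```python
import math

def p_3pl(theta, a, b, c):
    """Three-parameter logistic model: discrimination a, difficulty b, pseudo-guessing c.
    D = 1.7 is the conventional (optional) scaling constant."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def p_1pl(theta, b):
    """One-parameter (Rasch-type) model: a common discrimination and no guessing parameter."""
    return p_3pl(theta, a=1.0, b=b, c=0.0)

# Two items with equal difficulty but different discriminations: a 1-PL calibration
# cannot reproduce the difference, which is the model-fit concern noted above.
for theta in (-1.0, 0.0, 1.0):
    print(theta,
          round(p_3pl(theta, a=0.6, b=0.0, c=0.2), 3),
          round(p_3pl(theta, a=1.8, b=0.0, c=0.2), 3),
          round(p_1pl(theta, b=0.0), 3))
```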
144

An investigation of the methods for estimating usual dietary intake distributions : a thesis presented in partial fulfillment of the requirements for the degree of Master of Applied Statistics at Massey University, Albany, New Zealand

Stoyanov, Stefan Kremenov January 2008 (has links)
The estimation of the distribution of usual intake of nutrients is important for developing nutrition policies as well as for etiological research and educational purposes. In most nutrition surveys only a small number of repeated intake observations per individual are collected. The quantity of main interest is the long-term usual intake, defined as the long-term daily average intake of a dietary component. However, dietary intake on a single day is a poor estimate of an individual’s long-term usual intake. Furthermore, the distribution of individual intake means is also a poor estimator of the distribution of usual intake, since within-individual variability in dietary intake data is usually large relative to between-individual variability; hence the variance of the mean intakes is larger than the variance of the usual intake distribution. Essentially, estimating the distribution of long-term intake is equivalent to estimating the distribution of a random variable observed with measurement error. Some of the methods for estimating usual dietary intake distributions are reviewed in detail and applied to nutrient intake data in order to evaluate their properties. The results indicate that there are a number of robust methods which could be used to derive the distribution of long-term dietary intake. The methods share a common framework but differ in complexity and in their assumptions about the properties of the dietary consumption data. Hence, the choice of the most appropriate method depends on the specific characteristics of the data and the research purposes, as well as on the availability of analytical tools and statistical expertise.
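Most of the reviewed methods share the measurement-error framework sketched here: an observed daily intake is the usual intake plus large day-to-day noise, so the usual-intake distribution is recovered by shrinking the spread of individual means toward the overall mean. The code below is a minimal illustration of that shared step under a simple additive model on an approximately normal scale; the data and the simplifications are mine, not the thesis's.

```python
import numpy as np

def usual_intake_quantiles(intakes, probs=(0.05, 0.25, 0.5, 0.75, 0.95)):
    """intakes: array of shape (n_individuals, n_repeat_days) on a roughly normal scale.

    Model: y_ij = mu + b_i + e_ij, with between-individual variance s2_b and
    within-individual (day-to-day) variance s2_w.  The usual-intake distribution
    has variance s2_b, not var(individual means) = s2_b + s2_w / n_days, so the
    individual means are shrunk accordingly before reading off quantiles.
    """
    y = np.asarray(intakes, dtype=float)
    n, d = y.shape
    means = y.mean(axis=1)
    s2_w = y.var(axis=1, ddof=1).mean()       # pooled within-individual variance
    s2_means = means.var(ddof=1)
    s2_b = max(s2_means - s2_w / d, 0.0)      # between-individual variance
    shrink = np.sqrt(s2_b / s2_means) if s2_means > 0 else 0.0
    usual = means.mean() + shrink * (means - means.mean())
    return np.quantile(usual, probs)

rng = np.random.default_rng(0)
true_usual = rng.normal(70, 10, size=500)                           # hypothetical intakes
observed = true_usual[:, None] + rng.normal(0, 25, size=(500, 2))   # 2 noisy recall days each
print(usual_intake_quantiles(observed).round(1))
```

Real implementations wrap a normalising transformation and back-transformation around this step, which is one of the ways the reviewed methods differ.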
145

An analysis of population lifetime data of South Australia 1841 - 1996

Leppard, Phillip I. January 2003 (has links)
The average length of life from birth until death in a human population is a single statistic that is often used to characterise the prevailing health status of the population. It is one of many statistics calculated from an analysis that, for each age, combines the number of deaths with the size of the population in which these deaths occur. This analysis is generally known as life table analysis. Life tables have only occasionally been produced specifically for South Australia, although the necessary data have been routinely collected since 1842. In this thesis, the mortality pattern of South Australia over 150 years of European settlement is quantified using life table analyses and estimates of average length of life. In Chapter 1, a mathematical derivation is given for the lifetime statistical distribution function that is the basis of life table analysis, and from which the average length of life, or current expected life, is calculated. This derivation uses mathematical notation that clearly shows the deficiency of current expected life as a measure of the life expectancy of an existing population. Four statistical estimation procedures are defined, and the computationally intensive method of bootstrapping is discussed as a procedure for estimating the standard error of each estimate of expected life. A generalisation of this method is given to examine the robustness of the estimate of current expected life. In Chapter 2, gender- and age-specific mortality and population data are presented for twenty-five three-year periods, each encompassing one of the colonial (1841-1901) or post-Federation (1911-96) censuses taken in South Australia. For both genders within a census period, four types of estimate of current expected life, each with a bootstrap standard error, are calculated and compared, and a robustness assessment is made. In Chapter 3, an alternative measure of life expectancy known as generation expected life is considered. Generation expected life is derived by extracting, from official records arranged in temporal order, the mortality pattern of a notional group of individuals born in the same calendar year. Several estimates of generation expected life are calculated using South Australian data, and each is compared with the corresponding estimate of current expected life. Additional estimates of generation expected life, calculated using data obtained from the Roll of Honour at the Australian War Memorial, quantify the reduction in male generation expected life for 1881-1900 as a consequence of military service during World War I, 1914-18, and the Influenza Pandemic, 1919. / Thesis (M.Sc.) -- University of Adelaide, School of Applied Mathematics, 2003.
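As a point of reference for the life-table machinery described in Chapter 1, the sketch below computes current expected life at birth from age-specific death probabilities and attaches a bootstrap standard error by resampling age-specific death counts. It is generic period life-table arithmetic with an invented, deliberately crude mortality schedule, not the thesis's data or its four estimation procedures.

```python
import numpy as np

def e0_from_qx(qx):
    """Current expected life at birth from age-specific death probabilities q_x
    (single-year ages assumed; the final q_x should be 1, i.e. the last group is closed)."""
    qx = np.asarray(qx, dtype=float)
    lx = np.concatenate([[1.0], np.cumprod(1.0 - qx)[:-1]])   # survivors to exact age x
    dx = lx * qx                                              # deaths in [x, x+1)
    Lx = lx - 0.5 * dx                                        # person-years, mid-interval assumption
    return Lx.sum()

def e0_bootstrap_se(deaths, population, n_boot=500, seed=1):
    """Bootstrap SE of e0: resample age-specific death counts as binomials."""
    rng = np.random.default_rng(seed)
    deaths = np.asarray(deaths); population = np.asarray(population)
    qx_hat = deaths / population
    reps = [e0_from_qx(rng.binomial(population, qx_hat) / population)
            for _ in range(n_boot)]
    return e0_from_qx(qx_hat), np.std(reps, ddof=1)

# Invented toy schedule: 5 "ages" standing in for a full single-year mortality schedule.
deaths     = np.array([ 60,   15,   40,  200,  500])
population = np.array([1000, 1000, 1000, 1000,  500])
print(e0_bootstrap_se(deaths, population))
```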
146

Costing Constitutional Change: Estimates of the Financial Benefits of New States, Regional Governments, Unification and Related Reforms

Drummond, Mark Lea, n/a January 2007 (has links)
There have been numerous proposals to reform Australia's government structures, both prior to and since Federation in 1901, including calls for New Colonies and New States, Unification plans, Regional Government models spanning the federal-unitary continuum, and proposals to transfer functions between Commonwealth and State governments, such as the modern-day attempts by the Commonwealth government to establish a national Industrial Relations system. But while several functions have been transferred from the States to the Commonwealth since Federation, major changes sought by supporters of New States, Regional Governments and Unification have never been achieved. The financial benefits possible through various reformed government structures are first examined in terms of claims and estimates that have accompanied past reform proposals. Financial benefits are then estimated for the four years from 1998-99 to 2001-02 using population and expenditure data, per capita expenditure comparisons, and various linear and non-linear regression techniques. New States appear likely to cost in the order of $1 billion per annum per New State, and possibly more if costs associated with State-Territory borders are taken into account, but their financial viability could be vastly improved if New State formation follows or is accompanied by functional transfers to achieve national systems in areas such as health and education. It is estimated that Unification and some Regional Government models could achieve financial benefits in the order of five to ten per cent in both public and private sectors and the economy as a whole, which, in June 2002 dollar terms, would amount to some $15 billion to $30 billion per annum in the public sector, $25 billion to $50 billion in the private sector, and hence $40 billion to $80 billion per annum across both public and private sectors and the entire Australian economy. It is also estimated that for several functions, including education and health, unitary national systems under Commonwealth control could generate significant financial benefits, whereas for other functions, notably transport and communications, national systems could prove more costly. Additional research could clarify estimates, but ultimately the only way to fully check estimates is to observe and measure actual reforms in action. If all State-Territory level health care functions, for example, were transferred to the Commonwealth government to achieve a fully national health system, then the benefits and costs of such reform could be assessed with much more certainty than is possible through pre-reform empirical estimates. The establishment of a national health system could also diminish concerns that New States or Regional Governments might exacerbate problems associated with separate State laws, regulations and systems - problems likely to be tolerated least in health care given its life-and-death gravity. And for Unification advocates, a national health system would represent a significant step towards complete Unification across all functions.
Estimates appear to be robust when assessed in light of Commonwealth Grants Commission methodologies, differential levels of tax expenditures and privatisation across the current States and Territories, and Australia's economic and industrial geography, and on balance suggest that intelligent government structure reforms have the potential to significantly enhance Australia's financial and economic strength, and hence provide the financial capacity to achieve significantly improved social and environmental outcomes as well.
147

Analysis of Some Linear and Nonlinear Time Series Models

Ainkaran, Ponnuthurai January 2004 (has links)
This thesis considers some linear and nonlinear time series models. In the linear case, the analysis of a large number of short time series generated by a first-order autoregressive type model is considered. Conditional and exact maximum likelihood procedures are developed to estimate the parameters, and simulation results are presented comparing the bias and mean square errors of the parameter estimates. In Chapter 3, five important nonlinear models are considered and their time series properties are discussed. The estimating function approach for nonlinear models is developed in detail in Chapter 4, with examples to illustrate the theory. A simulation study is carried out to examine the finite-sample behavior of the proposed estimates based on estimating functions.
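The distinction between conditional and exact maximum likelihood for a first-order autoregressive model can be made concrete: the conditional likelihood treats the first observation as fixed, while the exact likelihood also includes its stationary distribution, and the difference matters most for short series. The sketch below is a generic zero-mean Gaussian AR(1) illustration, not the thesis's model for many short series.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik_ar1(params, y, exact=True):
    """Zero-mean Gaussian AR(1): y_t = phi * y_{t-1} + e_t, e_t ~ N(0, sigma2)."""
    phi, log_sigma2 = params
    sigma2 = np.exp(log_sigma2)
    resid = y[1:] - phi * y[:-1]
    ll = -0.5 * ((len(y) - 1) * np.log(2 * np.pi * sigma2) + resid @ resid / sigma2)
    if exact and abs(phi) < 1:
        # Exact likelihood adds the stationary density of the first observation.
        var0 = sigma2 / (1 - phi ** 2)
        ll += -0.5 * (np.log(2 * np.pi * var0) + y[0] ** 2 / var0)
    return -ll

rng = np.random.default_rng(42)
n, phi_true = 15, 0.6                      # a short series, where the two likelihoods differ most
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

for exact in (False, True):
    fit = minimize(neg_loglik_ar1, x0=[0.0, 0.0], args=(y, exact))
    print("exact" if exact else "conditional", round(fit.x[0], 3))
```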
148

Evolution equations and vector-valued Lp spaces: Strichartz estimates and symmetric diffusion semigroups.

Taggart, Robert James, Mathematics & Statistics, Faculty of Science, UNSW January 2008 (has links)
The results of this thesis are motivated by the investigation of abstract Cauchy problems. Our primary contribution is encapsulated in two new theorems. The first main theorem is a generalisation of a result of E. M. Stein. In particular, we show that every symmetric diffusion semigroup acting on a complex-valued Lebesgue space has a tensor product extension to a UMD-valued Lebesgue space that can be continued analytically to sectors of the complex plane. Moreover, this analytic continuation exhibits pointwise convergence almost everywhere. Both conclusions hold provided that the UMD space satisfies a geometric condition that is weak enough to include many classical spaces. The theorem is proved by showing that every symmetric diffusion semigroup is dominated by a positive symmetric diffusion semigroup. This allows us to obtain (a) the existence of the semigroup's tensor extension, (b) a vector-valued version of the Hopf–Dunford–Schwartz ergodic theorem and (c) a holomorphic functional calculus for the extension's generator. The ergodic theorem is used to prove a vector-valued version of a maximal theorem by Stein, which, when combined with the functional calculus, proves the pointwise convergence theorem. The second part of the thesis proves the existence of abstract Strichartz estimates for any evolution family of operators that satisfies an abstract energy estimate and a dispersive estimate. Some of these Strichartz estimates were already announced, without proof, by M. Keel and T. Tao. Those estimates which are not included in their result are new, and are an abstract extension of inhomogeneous estimates recently obtained by D. Foschi. When applied to physical problems, our abstract estimates give new inhomogeneous Strichartz estimates for the wave equation, extend the range of inhomogeneous estimates obtained by M. Nakamura and T. Ozawa for a class of Klein–Gordon equations, and recover the inhomogeneous estimates for the Schrödinger equation obtained independently by Foschi and M. Vilela. These abstract estimates are applicable to a range of other problems, such as the Schrödinger equation with a certain class of potentials.
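To fix ideas, the classical estimates that such abstract results generalise are the Strichartz estimates for the free Schrödinger equation, stated below in standard notation (not quoted from the thesis): a pair (q, r) is admissible when 2/q + n/r = n/2 with q, r ≥ 2 and (q, r, n) ≠ (2, ∞, 2).

```latex
% Classical Schrödinger Strichartz estimates, homogeneous and inhomogeneous,
% for admissible pairs (q, r) and (\tilde{q}, \tilde{r}); primes denote dual exponents.
\[
\bigl\| e^{it\Delta} f \bigr\|_{L^q_t L^r_x(\mathbb{R}\times\mathbb{R}^n)}
  \lesssim \| f \|_{L^2(\mathbb{R}^n)},
\qquad
\Bigl\| \int_0^t e^{i(t-s)\Delta} F(s)\, ds \Bigr\|_{L^q_t L^r_x}
  \lesssim \| F \|_{L^{\tilde{q}'}_t L^{\tilde{r}'}_x}.
\]
```

The inhomogeneous estimates of Foschi type referred to above extend the second inequality to certain non-admissible exponent pairs.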
149

Speed Choice : The Driver, the Road and Speed Limits

Haglund, Mats January 2001 (has links)
Speed choice is one of the more characteristic features of driver behaviour. The speed a driver chooses to travel at determines the degree of difficulty he or she operates under. Higher speeds lead to more accidents, higher accident risk and more severe consequences of an accident. The present thesis examines factors that are associated with drivers’ speed choice. Repeated measures of drivers’ speed showed a reasonably high correlation, but also that stability in speed varied with road layout between measurement sites. Effects of police enforcement were studied on roads with temporarily reduced speed limits (from 50 km/h to 30 km/h) during school hours. Lower speeds were found on roads with enforcement, and drivers observed on one such road showed a higher perceived probability of detection than drivers observed on a non-enforced road. In a laboratory study, however, higher driving speeds and lower accident risk were associated with enforced roads. Drivers not informed about existing speed limits overestimated the limits to a large extent and chose driving speeds above the limit, as did drivers informed about the limits. In an on-the-road survey, fast drivers reported higher driving speeds, thought a higher percentage of other drivers were speeding and had a more positive attitude towards speeding than slower drivers did. The results suggest that drivers’ travel speed is influenced by road factors, other road users and enforcement. Furthermore, drivers’ own judgements of what constitutes an appropriate speed are also important for speed choice.
150

Fast Polyhedral Adaptive Conjoint Estimation

Toubia, Olivier; Simester, Duncan; Hauser, John 02 1900 (has links)
We propose and test a new adaptive conjoint analysis method that draws on recent polyhedral “interior-point” developments in mathematical programming. The method is designed to offer accurate estimates after relatively few questions in problems involving many parameters. Each respondent’s questions are adapted based upon that respondent’s prior answers. The method requires computer support but can operate in both Internet and off-line environments with no noticeable delay between questions. We use Monte Carlo simulations to compare the performance of the method against a broad array of relevant benchmarks. While no method dominates in all situations, polyhedral algorithms appear to hold significant potential when (a) metric profile comparisons are more accurate than the self-explicated importance measures used in benchmark methods, (b) respondent wear-out is a concern, and (c) product development and/or marketing teams wish to screen many features quickly. We also test hybrid methods that combine polyhedral algorithms with existing conjoint analysis methods. We close with suggestions on how polyhedral methods can be used to address other marketing problems. / Sloan School of Management and the Center for Innovation in Product Development at MIT
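The core idea can be sketched in a few lines: the metric paired-comparison answers given so far confine the respondent's partworths to a polyhedron, an interior point of that polyhedron serves as the working estimate, and the next question is chosen to cut the polyhedron efficiently. The code below implements only the first two ingredients with generic SciPy tooling (a margin-maximising linear program followed by an analytic-center computation); the feature-difference vectors, answers and bounds are invented, and the question-selection step of the actual method is omitted.

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import linprog, minimize

def analytic_center(X, a, upper=100.0):
    """Interior-point estimate of partworths w satisfying X @ w = a, 0 <= w <= upper.

    X rows are feature-difference vectors of the paired-comparison questions asked so
    far, a the respondent's stated (metric) preference differences.  This is a plain
    analytic-center computation, a simplified stand-in for the polyhedral machinery
    described in the abstract.
    """
    m, p = X.shape
    # Step 1: find a strictly interior feasible point by maximising the margin t
    # in  X w = a,  t <= w_i <= upper - t  (variables: w_1..w_p, t).
    c = np.zeros(p + 1); c[-1] = -1.0
    A_eq = np.hstack([X, np.zeros((m, 1))])
    A_ub = np.vstack([np.hstack([-np.eye(p), np.ones((p, 1))]),    #  t - w_i <= 0
                      np.hstack([ np.eye(p), np.ones((p, 1))])])   #  w_i + t <= upper
    b_ub = np.concatenate([np.zeros(p), np.full(p, upper)])
    lp = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=a,
                 bounds=[(None, None)] * p + [(0, None)])
    w0 = lp.x[:p]
    # Step 2: maximise the log-barrier sum(log w) + sum(log(upper - w)) over the
    # affine set X w = a, parameterised through the null space of X.
    N = null_space(X)
    def neg_barrier(z):
        w = w0 + N @ z
        if np.any(w <= 0) or np.any(w >= upper):
            return np.inf
        return -(np.log(w).sum() + np.log(upper - w).sum())
    z_opt = minimize(neg_barrier, np.zeros(N.shape[1]), method="Nelder-Mead").x
    return w0 + N @ z_opt

# Hypothetical example: 4 product features, 2 metric paired-comparison answers so far.
X = np.array([[ 1.0, -1.0,  0.0,  0.0],    # profile A minus profile B for question 1
              [ 0.0,  1.0, -1.0,  1.0]])   # ... for question 2
a = np.array([20.0, 35.0])                 # stated preference differences
print(analytic_center(X, a).round(1))
```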
