
Estimation and Inference for Quantile Regression of Longitudinal Data: With Applications in Biostatistics

Karlsson, Andreas January 2006 (has links)
This thesis consists of four papers dealing with estimation and inference for quantile regression of longitudinal data, with an emphasis on nonlinear models.

The first paper extends the idea of quantile regression estimation from the case of cross-sectional data with independent errors to the case of linear or nonlinear longitudinal data with dependent errors, using a weighted estimator. The performance of different weights is evaluated, and a comparison is also made with the corresponding mean regression estimator using the same weights.

The second paper examines the use of bootstrapping for bias correction and calculations of confidence intervals for parameters of the quantile regression estimator when longitudinal data are used. Different weights, bootstrap methods, and confidence interval methods are used.

The third paper is devoted to evaluating bootstrap methods for constructing hypothesis tests for parameters of the quantile regression estimator using longitudinal data. The focus is on testing the equality between two groups of one or all of the parameters in a regression model for some quantile using single or joint restrictions. The tests are evaluated regarding both their significance level and their power.

The fourth paper analyzes seven longitudinal data sets from different parts of the biostatistics area by quantile regression methods in order to demonstrate how new insights can emerge on the properties of longitudinal data from using quantile regression methods. The quantile regression estimates are also compared and contrasted with the least squares mean regression estimates for the same data set. In addition to looking at the estimates, confidence intervals and hypothesis testing procedures are examined.
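The abstract does not spell out the weighted estimator itself. Purely as a rough sketch of the general idea, the snippet below fits a linear quantile regression by minimizing a weighted check (pinball) loss with a generic optimizer; the observation weights, the toy longitudinal data, and the variable names are illustrative placeholders, not the thesis's method.

```python
# Minimal sketch of a weighted linear quantile regression estimator.
# The check (pinball) loss rho_tau is minimized over beta; the weights w_i
# are placeholders for whatever longitudinal weighting scheme is chosen.
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Pinball loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def weighted_quantile_fit(X, y, tau=0.5, w=None):
    """Estimate beta minimizing sum_i w_i * rho_tau(y_i - x_i' beta)."""
    n, p = X.shape
    w = np.ones(n) if w is None else np.asarray(w)
    objective = lambda beta: np.sum(w * check_loss(y - X @ beta, tau))
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares starting point
    res = minimize(objective, beta0, method="Nelder-Mead")
    return res.x

# Toy longitudinal-style data: 20 subjects, 5 repeated measurements each.
rng = np.random.default_rng(0)
t = np.tile(np.arange(5.0), 20)
X = np.column_stack([np.ones_like(t), t])
y = 1.0 + 0.5 * t + rng.standard_normal(t.size)
print(weighted_quantile_fit(X, y, tau=0.75))
```

Setting tau to 0.5 recovers a (weighted) median regression; the dependent-error structure that motivates the thesis's weighting is not modelled here.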

Design, implementation and evaluation of MPVS: a tool to support the teaching of a programming method

Dony, Isabelle 14 September 2007 (has links)
Teaching formal methods is notoriously difficult and is linked to motivation problems among the students; we think that formal methods need to be supported by adequate tools to gain better acceptance from the students. One of the goals of the thesis is to build a practical tool to help students deeply understand the classical programming methodology based on specifications, loop invariants, and decomposition into subproblems advocated by Dijkstra, Gries, and Hoare, to name only a few famous computer scientists. Our motivation to build this tool is twofold. On the one hand, we demonstrate that existing verification tools (e.g., ESC/Java, Spark, SMV) are too complex to be used in a pedagogical context; moreover, they often lack completeness (and sometimes even soundness). On the other hand, teaching formal (i.e., rigorous) program construction with pen and paper does not motivate students at all. Thus, since students love to use tools, providing them with a tool that checks not only their programs but also their specifications and the structure of their reasoning seemed appealing to us. Obviously, building such a system is far from an easy task; experts in the field may even consider it completely unfeasible. Our approach is to restrict our ambition to a very simple programming language with simple types (limited to finite domains) and arrays. In this context, it is possible to specify problems and subproblems, both clearly and formally, using a specific assertion language based on mathematical logic. It appears that constraint programming over finite domains is especially convenient for checking the kind of verification conditions that are needed to express the correctness of imperative programs. However, to conveniently generate the constraint problems equivalent to a given verification condition, we wish to have at hand a powerful language that allows us to interleave constraint generation and constraint solving, and to specify a distribution strategy to overcome the incompleteness of the usual consistency techniques used by finite domain constraint programming. We show in this thesis that the Oz language includes all the programming mechanisms needed to reach our goals. Such a tool has been fully implemented and is intended to provide interesting feedback to students learning the programming method: it detects programming and/or reasoning errors and it provides typical counter-examples. We argue that our system is adapted to our pedagogical context, and we report on experiments of using the tool with students in a third-year programming course.
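The thesis's tool is built in Oz with finite-domain constraint solving. Purely as a language-agnostic illustration of the underlying idea, the sketch below checks a Hoare-style verification condition (invariant plus loop guard implies the invariant after the loop body) by exhaustively searching a small finite domain for counterexamples; the program, invariant, and domain are illustrative choices, not taken from the thesis.

```python
# Minimal sketch (not the thesis's Oz/constraint implementation): checking a
# Hoare-style verification condition by exhaustive search over a finite domain.
# Program: {s == 0 and i == 0}  while i < n: s += i; i += 1  {s == n*(n-1)//2}
# Loop invariant: s == i*(i-1)//2 and 0 <= i <= n.
from itertools import product

DOMAIN = range(0, 16)  # small finite domain, as in finite-domain constraint solving

def invariant(s, i, n):
    return s == i * (i - 1) // 2 and 0 <= i <= n

def preserved(s, i, n):
    """Invariant and guard must imply the invariant after one loop iteration."""
    if invariant(s, i, n) and i < n:           # premise
        return invariant(s + i, i + 1, n)      # conclusion after the body
    return True                                # vacuously true otherwise

counterexamples = [(s, i, n) for s, i, n in product(DOMAIN, DOMAIN, DOMAIN)
                   if not preserved(s, i, n)]
print("counterexamples:", counterexamples)     # empty list: the VC holds on this domain
```

A constraint solver performs the same check far more efficiently by propagating and distributing over the domains instead of enumerating them, which is what makes the finite-domain restriction practical.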

Real-Time Part Position Sensing

Gordon, Steven J., Seering, Warren P. 01 May 1988 (has links)
A light stripe vision system is used to measure the location of polyhedral features of parts from a single frame of video camera output. Issues such as accuracy in locating the line segments of intersection in the image and combining redundant information from multiple measurements and multiple sources are addressed. In 2.5 seconds, a prototype sensor was capable of locating a two inch cube to an accuracy (one standard deviation) of .002 inches (.055 mm) in translation and .1 degrees (.0015 radians) in rotation. When integrated with a manipulator, the system was capable of performing high precision assembly tasks.
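The abstract mentions combining redundant information from multiple measurements and sources. One standard way to do this (not necessarily the method used in the paper) is inverse-variance weighting, sketched below for a one-dimensional position estimate with illustrative numbers.

```python
# Hedged sketch: fusing redundant 1-D position estimates by inverse-variance
# weighting (a standard approach; the paper's exact fusion method may differ).
import numpy as np

def fuse(estimates, variances):
    """Combine independent measurements of the same quantity.

    Each measurement is weighted by 1/variance; the fused variance is
    1 / sum(1/variance_i), so adding measurements always tightens it.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused_value = np.sum(weights * estimates) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused_value, fused_variance

# Two redundant edge-position measurements (mm) from different stripe frames.
value, var = fuse([50.81, 50.78], [0.055**2, 0.055**2])
print(f"fused: {value:.3f} mm, std: {var**0.5:.4f} mm")
```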

Failure-Oblivious Computing and Boundless Memory Blocks

Rinard, Martin C. 01 1900 (has links)
Memory errors are a common cause of incorrect software execution and security vulnerabilities. We have developed two new techniques that help software continue to execute successfully through memory errors: failure-oblivious computing and boundless memory blocks. The foundation of both techniques is a compiler that generates code that checks accesses via pointers to detect out of bounds accesses. Instead of terminating or throwing an exception, the generated code takes another action that keeps the program executing without memory corruption. Failure-oblivious code simply discards invalid writes and manufactures values to return for invalid reads, enabling the program to continue its normal execution path. Code that implements boundless memory blocks stores invalid writes away in a hash table to return as the values for corresponding out of bounds reads. The net effect is to (conceptually) give each allocated memory block unbounded size and to eliminate out of bounds accesses as a programming error. We have implemented both techniques and acquired several widely used open source servers (Apache, Sendmail, Pine, Mutt, and Midnight Commander). With standard compilers, all of these servers are vulnerable to buffer overflow attacks as documented at security tracking web sites. Both failure-oblivious computing and boundless memory blocks eliminate these security vulnerabilities (as well as other memory errors). Our results show that our compiler enables the servers to execute successfully through buffer overflow attacks and to continue to correctly service user requests without security vulnerabilities. / Singapore-MIT Alliance (SMA)
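The techniques above are implemented as compiler-inserted checks on C code. Purely as a conceptual illustration of the boundless-memory-block semantics (not the actual implementation), the sketch below redirects out-of-bounds writes to a hash table and manufactures values for out-of-bounds reads so that execution can continue.

```python
# Conceptual sketch of boundless-memory-block semantics (the real technique is
# compiler-inserted bounds checking on C programs; this only mimics the behaviour).
class BoundlessBlock:
    def __init__(self, size, manufactured=0):
        self._data = [0] * size          # the actually allocated block
        self._overflow = {}              # hash table for out-of-bounds writes
        self._manufactured = manufactured

    def write(self, index, value):
        if 0 <= index < len(self._data):
            self._data[index] = value
        else:
            self._overflow[index] = value   # kept here instead of corrupting memory

    def read(self, index):
        if 0 <= index < len(self._data):
            return self._data[index]
        # Out-of-bounds read: return the matching stored write, else a made-up value.
        return self._overflow.get(index, self._manufactured)

buf = BoundlessBlock(4)
buf.write(10, 99)        # would corrupt adjacent memory in an unchecked C program
print(buf.read(10))      # 99: the block behaves as if it had unbounded size
print(buf.read(20))      # 0: manufactured value, execution simply continues
```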

Evaluation of Land-Atmosphere Interactions in Models of the North American Monsoon

Kelly, Patrick 01 January 2008 (has links)
Improving diurnal errors in surface-based heating processes in models might be a promising step towards improved seasonal simulation of the North American Monsoon (NAM). This study isolates model errors in the surface energy budget and examines diurnal heating implications for seasonal development of the NAM 500hPa anticyclone and 850-500hPa thickness ridge using observations and multi-model output. Field data from the 2004 North American Monsoon Experiment (NAME) and satellite estimates are used to evaluate land-atmosphere interactions in regional and global models as part of the North American Monsoon Model Assessment Project 2 (NAMAP2). Several key findings about heating in the NAM emerge:

• Models exhibit considerable differences in surface radiation of the NAM, beginning with albedo (Fig. 3.1). All models have highly biased albedo throughout summer (Fig. 3.2).
• Observed net surface radiation is around 125 W m-2 over land in the NAM region in summer (Table 3.5). Models overestimate it by an average of about 20 W m-2, despite their high albedo, apparently due to deficiencies in cloud radiative forcing.
• Partitioning of this net radiation into latent and sensible fluxes to the atmosphere differs substantially among models. Sensitivity of this partitioning to rainfall also varies widely among models, and appears clearly excessive in some models relative to observations (Fig. 4.10).
• Total sensible heating exceeds latent heating in the NAM (Table 4.1), since it covers a much larger area than the rainy core region (Fig. 4.11).
• Inter-model differences in sensible heating can be traced consistently from surface heat flux (Table 5.1), to PBL diurnal evolution (Fig. 5.1), to diurnal thickening of the lower troposphere (Fig. 5.2).
• Seasonal biases in the NAM's synoptic structure correspond well to diurnal heating biases (Fig. 5.3, Fig. 5.5), suggesting that diurnal cycle studies from a single field season may suffice to inform physical process improvements that could impact seasonal simulation and forecasting.

These NAMAP2 results highlight the range of uncertainty and errors in contemporary models, including those defining US national weather forecasting capability. Model experimentation will be necessary to fully interpret the lessons and harvest the fruits of this offline inter-comparison exercise.
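For readers unfamiliar with the quantities compared above, the short sketch below works through the surface energy budget arithmetic: net radiation from albedo and the radiative components, then a Bowen-ratio split into sensible and latent heat flux. All numbers are illustrative placeholders, not NAMAP2 or NAME data.

```python
# Illustrative surface energy budget arithmetic (numbers are made up, not NAMAP2 data).
sw_down, lw_down, lw_up = 600.0, 400.0, 460.0   # W m-2, daytime snapshot
albedo = 0.20                                    # a high-biased model albedo would lower Rnet

rnet = (1.0 - albedo) * sw_down + lw_down - lw_up   # net surface radiation
ground_flux = 0.1 * rnet                            # crude ground-heat-flux assumption
available = rnet - ground_flux

bowen_ratio = 1.5                                   # sensible/latent partitioning parameter
sensible = available * bowen_ratio / (1.0 + bowen_ratio)
latent = available / (1.0 + bowen_ratio)
print(f"Rnet={rnet:.0f}, H={sensible:.0f}, LE={latent:.0f} W m-2")
```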

On the evolution of codon usage bias

Shah, Premal R 01 May 2011 (has links)
The genetic code is redundant, with most amino acids coded by multiple codons. In many organisms, codon usage is biased towards particular codons. A variety of adaptive and non-adaptive explanations have been proposed to explain these patterns of codon usage bias. Using mechanistic models of protein translation and population genetics, I explore the relative importance of various evolutionary forces in shaping these patterns. This work challenges one of the fundamental assumptions made in over 30 years of research: that codons with higher tRNA abundances lead to lower error rates. I show that observed patterns of codon usage are inconsistent with selection for translation accuracy. I also show that almost all the variation in patterns of codon usage in S. cerevisiae can be explained by a model taking into account the effects of mutational biases and selection for efficient ribosome usage. In addition, by sampling suboptimal mRNA secondary structures at various temperatures, I show that melting of ribosomal binding sites in a special class of mRNAs known as RNA thermometers is a more general phenomenon.
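As a concrete handle on what "biased codon usage" means, the sketch below computes relative synonymous codon usage (RSCU), a common descriptive measure of bias. It is offered only as background and is not necessarily the statistic used in the thesis; the codon families shown and the toy sequence are illustrative.

```python
# Sketch: relative synonymous codon usage (RSCU), a common descriptive measure
# of codon usage bias (not necessarily the statistic used in the thesis).
from collections import Counter

# Synonymous codon families for two amino acids (illustrative subset).
FAMILIES = {
    "Phe": ["TTT", "TTC"],
    "Gly": ["GGT", "GGC", "GGA", "GGG"],
}

def rscu(sequence):
    codons = [sequence[i:i + 3] for i in range(0, len(sequence) - 2, 3)]
    counts = Counter(codons)
    scores = {}
    for family in FAMILIES.values():
        total = sum(counts[c] for c in family)
        if total == 0:
            continue
        expected = total / len(family)   # count expected if synonymous codons were used equally
        for c in family:
            scores[c] = counts[c] / expected
    return scores

print(rscu("TTTTTCTTTGGTGGTGGCGGA"))   # RSCU > 1 marks overused codons
```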

Regression calibration and maximum likelihood inference for measurement error models

Monleon-Moscardo, Vicente J. 08 December 2005 (has links)
Graduation date: 2006 / Regression calibration inference seeks to estimate regression models with measurement error in explanatory variables by replacing the mismeasured variable by its conditional expectation, given a surrogate variable, in an estimation procedure that would have been used if the true variable were available. This study examines the effect of the uncertainty in the estimation of the required conditional expectation on inference about regression parameters, when the true explanatory variable and its surrogate are observed in a calibration dataset and related through a normal linear model. The exact sampling distribution of the regression calibration estimator is derived for normal linear regression when independent calibration data are available. The sampling distribution is skewed and its moments are not defined, but its median is the parameter of interest. It is shown that, when all random variables are normally distributed, the regression calibration estimator is equivalent to maximum likelihood provided a natural estimate of variance is non-negative. A check for this equivalence is useful in practice for judging the suitability of regression calibration. Results about relative efficiency are provided for both external and internal calibration data. In some cases maximum likelihood is substantially more efficient than regression calibration. In general, though, a more important concern when the necessary conditional expectation is uncertain is that inferences based on approximate normality and estimated standard errors may be misleading. Bootstrap and likelihood-ratio inferences are preferable.
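The basic regression calibration recipe described above is easy to sketch: estimate E[X | W] from calibration data with a linear fit, then substitute the fitted values for the mismeasured variable in the outcome regression. The sketch below does this with simulated external calibration data; the variable names and the simulated model are illustrative, not from the thesis.

```python
# Minimal sketch of regression calibration: fit E[X | W] on calibration data,
# then regress Y on that fitted value in the main data.
import numpy as np

rng = np.random.default_rng(1)

# Calibration data: true X observed together with its surrogate W.
x_cal = rng.normal(0, 1, 200)
w_cal = x_cal + rng.normal(0, 0.5, 200)            # surrogate = truth + measurement error

# Main data: only W and the outcome Y are observed.
x_main = rng.normal(0, 1, 500)
w_main = x_main + rng.normal(0, 0.5, 500)
y_main = 2.0 + 1.5 * x_main + rng.normal(0, 1, 500)

# Step 1: linear model for E[X | W] estimated from the calibration data.
A_cal = np.column_stack([np.ones_like(w_cal), w_cal])
gamma = np.linalg.lstsq(A_cal, x_cal, rcond=None)[0]

# Step 2: replace the mismeasured variable by its predicted conditional mean.
x_hat = gamma[0] + gamma[1] * w_main
A_main = np.column_stack([np.ones_like(x_hat), x_hat])
beta = np.linalg.lstsq(A_main, y_main, rcond=None)[0]
print("regression calibration estimate of slope:", beta[1])   # close to the true 1.5
```

Regressing Y directly on W would give an attenuated slope; the calibration step corrects for that attenuation, which is the point the study's inference results build on.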

Knowing mathematics for teaching: a case study of teacher responses to students' errors and difficulties in teaching equivalent fractions

Ding, Meixia 15 May 2009 (has links)
The goal of this study is to align teachers’ Mathematical Knowledge for Teaching (MKT) with their classroom instruction. To reduce the classroom complexity while keeping the connection between teaching and learning, I focused on Teacher Responses to Student Errors and Difficulties (TRED) in teaching equivalent fractions, with an eye on students’ cognitive gains as the assessment of teaching effects. This research used a qualitative paradigm. Classroom videos concerning equivalent fractions from six teachers were observed and triangulated with tests of teacher knowledge and personal interviews. The data collection and analysis followed a naturalistic inquiry process. The results indicated that great differences in TRED existed across classrooms around six themes: two learning difficulties regarding critical prior knowledge, two common errors related to the learning goal, and two emergent topics concerning basic mathematical ideas. Each of these themes affected students’ cognitive gains. Teachers’ knowledge as reflected by teacher interviews, however, was not necessarily consistent with their classroom instruction. Among these six teachers, other than one teacher whose knowledge obviously lagged behind, the other five teachers demonstrated similarly good understanding of equivalent fractions. With respect to the basic mathematical ideas, their knowledge and sensitivity showed differences. The teachers who understood equivalent fractions and also the basic mathematical ideas were able to teach for understanding. Based on these six teachers’ practitioner knowledge, a Mathematical Knowledge Package for Teaching (MKPT) concerning equivalent fractions was provided as a professional knowledge base. In addition, this study argued that only when teachers had knowledge bases with strong connections to mathematical foundations could they flexibly activate and transfer their knowledge (CCK and PCK) to their use of knowledge (SCK) in the teaching contexts. Therefore, further attention is called for in collaboratively cultivating teachers’ mathematical sensitivity.

Forecasting the Equity Premium and Optimal Portfolios

Bjurgert, Johan, Edstrand, Marcus January 2008 (has links)
The expected equity premium is an important parameter in many financial models, especially within portfolio optimization. A good forecast of the future equity premium is therefore of great interest. In this thesis we seek to forecast the equity premium, use it in portfolio optimization and then give evidence on how sensitive the results are to estimation errors and how the impact of these can be minimized. Linear prediction models are commonly used by practitioners to forecast the expected equity premium, with mixed results. Choosing only the model that performs best in-sample for forecasting does not take model uncertainty into account. Our approach is to still use linear prediction models, but to take model uncertainty into consideration by applying Bayesian model averaging. The predictions are used in the optimization of a portfolio with risky assets to investigate how sensitive portfolio optimization is to estimation errors in the mean vector and covariance matrix. This is performed by using a Monte Carlo based heuristic called portfolio resampling. The results show that the predictive ability of linear models is not substantially improved by taking model uncertainty into consideration. This could mean that the main problem with linear models is not model uncertainty, but rather too low predictive ability. However, we find that our approach gives better forecasts than just using the historical average as an estimate. Furthermore, we find some predictive ability in GDP, the short-term spread and the volatility for the five years to come. Portfolio resampling proves to be useful when the input parameters in a portfolio optimization problem suffer from vast uncertainty.
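Portfolio resampling, the Monte Carlo heuristic mentioned above, can be sketched in a few lines: bootstrap the return history, re-estimate the mean vector and covariance matrix, re-optimize, and average the resulting weights. The sketch below uses a simple unconstrained mean-variance rule for the inner optimization; the thesis's exact constraints, risk aversion, and resampling scheme may differ, and the toy return data are made up.

```python
# Hedged sketch of portfolio resampling: repeatedly perturb the estimated
# inputs, re-optimize, and average the weights.
import numpy as np

def mean_variance_weights(mu, cov, risk_aversion=3.0):
    """Unconstrained mean-variance solution (1/lambda) * cov^-1 mu, renormalized."""
    raw = np.linalg.solve(cov, mu) / risk_aversion
    return raw / np.sum(np.abs(raw))

def resampled_weights(returns, n_draws=500, seed=0):
    rng = np.random.default_rng(seed)
    n_obs, n_assets = returns.shape
    weights = np.zeros(n_assets)
    for _ in range(n_draws):
        sample = returns[rng.integers(0, n_obs, n_obs)]   # bootstrap the return history
        mu, cov = sample.mean(axis=0), np.cov(sample, rowvar=False)
        weights += mean_variance_weights(mu, cov)
    return weights / n_draws                              # average over all resamples

# Toy monthly excess returns for three assets.
rng = np.random.default_rng(42)
rets = rng.multivariate_normal([0.005, 0.004, 0.003],
                               np.diag([0.002, 0.0015, 0.001]), size=120)
print(resampled_weights(rets))
```

Averaging over resamples smooths out the extreme allocations that a single noisy mean/covariance estimate tends to produce, which is why the heuristic helps when the inputs are highly uncertain.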

Preanalytical errors in hospitals: implications for quality improvement of blood sample collection

Wallin, Olof January 2008 (has links)
Background: Most errors in the venous blood testing process are preanalytical, i.e. they occur before the sample reaches the laboratory. Unlike the laboratory analysis, the preanalytical phase involves several error-prone manual tasks not easily avoided with technological solutions. Despite the importance of the preanalytical phase for a correct test result, little is known about how blood samples are collected in hospitals.

Aim: The aim of this thesis was to survey preanalytical procedures in hospitals to identify sources of error.

Methods: The first part of this thesis was a questionnaire survey. After a pilot study (Paper I), a questionnaire addressing clinical chemistry testing was completed by venous blood sampling staff (n=314, response rate 94%) in hospital wards and hospital laboratories (Papers II–IV). The second part of this thesis was an experimental study. Haematology, coagulation, platelet function and global coagulation parameters were compared between pneumatic tube-transported samples and samples that had not been transported (Paper V).

Results: The results of the questionnaire survey indicate that the desirable procedures for the collection and handling of venous blood samples were not always followed in the wards (Papers II–III). For example, as few as 2.4% of the ward staff reported to always label the test tube immediately before sample collection. Only 22% of the ward staff reported to always use wristbands for patient identification, while 18% reported to always use online laboratory manuals, the only source of updated information. However, a substantial part of the ward staff showed considerable interest in re-education (45%) and willingness to improve routines (44%) for venous blood sampling. Compared to the ward staff, the laboratory staff reported significantly higher proportions of desirable practices regarding test request management, test tube labelling, test information search procedures, and the collection and handling of venous blood samples, but not regarding patient identification. Of the ward staff, only 5.5% had ever filed an error report regarding venous blood sampling, compared to 28% of the laboratory staff (Paper IV). In the experimental study (Paper V), no significant preanalytical effect of pneumatic tube transport was found for most haematology, coagulation and platelet function parameters. However, time-to-clot formation was significantly shorter (16%) in the pneumatic tube-transported samples, indicating an in vitro activation of global coagulation.

Conclusions: The questionnaire study of the rated experiences of venous blood sampling ward staff is the first of its kind to survey manual tasks in the preanalytical phase. The results suggest a clinically important risk of preanalytical errors in the surveyed wards. Computerised test request management will eliminate some, but not all, of the identified risks. The better performance reported by the laboratory staff may reflect successful quality improvement initiatives in the laboratories. The current error reporting system needs to be functionally implemented. The experimental study indicates that pneumatic tube transport does not introduce preanalytical errors for regular tests, but manual transport is recommended for analysis with thromboelastographic technique. This thesis underscores the importance of quality improvement in the preanalytical phase of venous blood testing in hospitals.
