541 |
A Correlational Study of the Relationships Between Syntactical Accuracy, Lexical Accuracy and the Quality of Turkish EFL Student Writing / Gonulal, Talip, 22 June 2012 (has links)
No description available.
|
542 |
New Step Down Procedures for Control of the Familywise Error Rate / Yang, Zijiang, January 2008 (has links)
The main research topic in this dissertation is the development of multiple testing procedures based on the closure method. Holm (1979) first proposed a step-down procedure for multiple hypothesis testing that controls the familywise error rate (FWER), defined as the probability of committing at least one false discovery, under any kind of dependence. Under a normal distributional setup, Seneta and Chen (2005) sharpened the Holm procedure by taking into account the correlations between the test statistics. In this dissertation, considering a general setting that allows the underlying test statistics as well as the associated parameters to be dependent, the Seneta-Chen procedure is further modified, yielding a more powerful FWER-controlling step-down procedure. We then advance our research and propose another step-down procedure to control the generalized FWER (k-FWER), defined as the probability of making at least k false discoveries, and compare it with the Lehmann and Romano (2005) procedure. The proposed k-FWER procedure is more powerful, particularly when there is strong dependence among the tests. When the proportion of true null hypotheses is expected to be small, traditional tests are usually conservative by a factor associated with pi0, the proportion of true null hypotheses among all null hypotheses. Under independence, two procedures controlling the FWER and the k-FWER are proposed in this dissertation. Simulations are carried out to show that our procedures often provide much better FWER or k-FWER control and power than the traditional procedures. / Statistics
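The classical Holm step-down procedure that this dissertation sharpens can be sketched in a few lines; this is the baseline method only, not the correlation-adjusted variants developed in the thesis.

```python
def holm_step_down(p_values, alpha=0.05):
    """Holm (1979) step-down procedure: controls the FWER at level alpha
    under arbitrary dependence among the test statistics.

    Returns a list of booleans, True where the hypothesis is rejected."""
    m = len(p_values)
    # Sort p-values ascending, remembering original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for step, i in enumerate(order):
        # Step-down critical value: alpha / (m - step).
        if p_values[i] <= alpha / (m - step):
            reject[i] = True
        else:
            break  # once one hypothesis is retained, retain all remaining
    return reject
```

The correlation-aware procedures of Seneta and Chen (2005) and of this dissertation replace the `alpha / (m - step)` critical values with sharper, dependence-adjusted ones.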
|
543 |
Data Structure and Error Estimation for an Adaptive <i>p</i>-Version Finite Element Method in 2-D and 3-D Solids / Promwungkwa, Anucha, 13 May 1998 (has links)
Automation of finite element analysis based on a fully adaptive <i>p</i>-refinement procedure can reduce the magnitude of discretization error to the desired accuracy with minimum computational cost and computer resources. This study aims to 1) develop an efficient <i>p</i>-refinement procedure with a non-uniform <i>p</i> analysis capability for solving 2-D and 3-D elastostatic mechanics problems, and 2) introduce a stress error estimate. An element-by-element algorithm that decides the appropriate order for each element, where element orders can range from 1 to 8, is described. Global and element data structures that manage the complex data generated during the refinement process are introduced. These data structures are designed to match the concept of object-oriented programming where data and functions are managed and organized simultaneously.
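The element-by-element order decision described above can be sketched as follows; the record layout, error values, and target tolerance are illustrative assumptions, not the thesis's data structures.

```python
from dataclasses import dataclass

@dataclass
class Element:
    """Minimal element record: polynomial order plus an error indicator.
    (Hypothetical layout; the thesis couples this with object-oriented
    global and element data structures.)"""
    eid: int
    p: int = 1            # polynomial order, allowed range 1..8
    error: float = 0.0    # latest stress-error indicator for this element

def adapt_orders(elements, target, p_max=8):
    """One adaptive pass: raise p only where the local error indicator
    exceeds the target, giving a non-uniform p distribution over the mesh."""
    changed = []
    for el in elements:
        if el.error > target and el.p < p_max:
            el.p += 1
            changed.append(el.eid)
    return changed  # elements whose order increased; re-solve, then repeat
```

In a full solver this loop alternates with reanalysis until every element's indicator falls below the target accuracy.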
The stress error indicator introduced is found to be more reliable and to converge faster than the energy-norm error indicator known as the residual method. Using the stress error indicator results in approximately 20% fewer degrees of freedom than the residual method. Agreement between the calculated stress error values and the stress error indicator values confirms to the analyst that the final stresses have converged. The error order of the stress error estimate is postulated to be one order higher than that of the residual-method error estimate. Mapping a curved boundary element in the working coordinate system from a square-shaped element in the natural coordinate system results in a significant improvement in the accuracy of stress results.
Numerical examples demonstrate that refinement using non-uniform <i>p</i> analysis is superior to uniform <i>p</i> analysis in the convergence rates of output stresses or related terms. Non-uniform <i>p</i> analysis uses approximately 50% to 80% less computational time than uniform <i>p</i> analysis in solving the selected stress concentration and stress intensity problems. More importantly, the non-uniform <i>p</i>-refinement procedure reduces the number of equations by one half to three quarters, so a small-scale computer can be used to solve equation systems generated using high-order <i>p</i>-elements. In the calculation of the stress intensity factor of a semi-elliptical surface crack in a finite-thickness plate, non-uniform <i>p</i> analysis used fewer degrees of freedom than a conventional <i>h</i>-type element analysis found in the literature. / Ph. D.
|
544 |
Application of Improved Truncation Error Estimation Techniques to Adjoint Based Error Estimation and Grid Adaptation / Derlaga, Joseph Michael, 23 July 2015 (has links)
Numerical solutions obtained through the use of Computational Fluid Dynamics (CFD) are subject to discretization error, which is locally generated by truncation error. The discretization error is extremely difficult to estimate properly, which leads to uncertainty about the quality of numerical solutions obtained via CFD methods and of the engineering functionals computed from those solutions. Adjoint error estimation techniques specifically seek to estimate the error in functionals, but depend upon accurate truncation error estimates. This work examines the application of new, single-grid, truncation error estimation procedures to the problem of adjoint error estimation for both the quasi-1D and 2D Euler equations. The new truncation error estimation techniques are based on local reconstructions of the computed solutions; comparisons are made for the quasi-1D study in order to determine the most appropriate solution variables to reconstruct as well as the most appropriate reconstruction method. In addition, comparisons are made between the single-grid truncation error estimates and methods based on uniformly refining or coarsening the underlying numerical mesh on which the computed solutions are obtained. A method based on a refined-grid error estimate is shown to work well for a non-isentropic flow for the quasi-1D Euler equations, but all truncation error estimation methods ultimately result in overprediction of the functional discretization error in the presence of a shock in 2D. Alternatives to adjoint methods, which can only estimate the error in a single functional for each adjoint solution obtained, are examined for the 2D Euler equations. The defect correction method and error transport equations are capable of locally improving the entire computed solution, allowing for error estimates in multiple functionals.
It is found that all three functional discretization error estimates perform similarly for the same truncation error estimate, although the defect correction method is the most costly from a computational viewpoint. Comparisons are made between adaptive indicators based on truncation error and on adjoint-weighted truncation error. For the quasi-1D Euler equations both methods are found to be competitive; however, the truncation-error-based method is cheaper, as a separate adjoint solve is avoided. For the 2D Euler equations, the truncation error estimates on the adapted meshes suffer from a lack of smooth grid transformations, which are used in reconstructing the computed solutions. To complete this work, a new CFD code incorporating a variety of best practices from the field of Computer Science is developed, as well as a new method of performing code verification using the method of manufactured solutions which is significantly easier to implement than traditional manufactured solution techniques. / Ph. D.
|
545 |
Error Estimations in the Design of a Terrain Measurement System / Rainey, Cameron Scott, 22 March 2013 (has links)
Terrain surface measurement is an important tool in vehicle design work as well as in pavement classification and health monitoring. Non-deformable terrain is the primary excitation to vehicles traveling over it, and it is therefore important to be able to quantify terrain surfaces. Knowledge of the terrain can be used in combination with vehicle models to predict the force loads vehicles would experience while driving over the terrain surface. This is useful in vehicle design, as it can speed the design process through the use of simulation as opposed to prototype construction and durability testing. Additionally, accurate terrain maps can be used by highway engineers and maintenance personnel to identify deterioration in road surface conditions for immediate correction. Repeated measurements of terrain surfaces over an extended length of time also allow for long-term pavement health monitoring.
Many systems have been designed to measure terrain surfaces, historically most of them measuring single-line profiles, with more modern equipment capable of capturing three-dimensional measurements of the terrain surface. These modern systems are often constructed from a combination of sensors which allow the system to measure the height of the terrain relative to the terrain measurement system. Additionally, these systems are equipped with sensors which allow the system to be located in a global coordinate space and its angular attitude to be estimated. Since all sensors return estimated values with some uncertainty, combining a group of sensors also combines their uncertainties, resulting in a system which is less precise than any of its individual components. In order to predict the precision of the system, the individual probability densities of the components must be quantified, in some cases transformed, and finally combined. This thesis provides a proof of concept of how such an evaluation of final precision can be performed. / Master of Science
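The combination of sensor uncertainties can be sketched for a hypothetical single-beam rig in which terrain height is body height (GPS/INS) minus a laser range projected through the IMU pitch angle; the model, symbols, and noise levels below are illustrative assumptions, not the thesis's system.

```python
import math, random

def terrain_height_sigma(r, pitch, sigma_z, sigma_r, sigma_pitch):
    """First-order propagation of independent sensor uncertainties for
    z_terrain = z_body - r*cos(pitch): the component variances add after
    scaling by the squared partial derivatives of the measurement model."""
    dz = 1.0                   # d z_terrain / d z_body
    dr = -math.cos(pitch)      # d z_terrain / d r
    dth = r * math.sin(pitch)  # d z_terrain / d pitch
    return math.sqrt((dz * sigma_z) ** 2 + (dr * sigma_r) ** 2
                     + (dth * sigma_pitch) ** 2)

def terrain_height_sigma_mc(r, pitch, sigma_z, sigma_r, sigma_pitch, n=200_000):
    """Monte Carlo check of the same combination of probability densities."""
    rng = random.Random(0)
    samples = [rng.gauss(0.0, sigma_z)
               - (r + rng.gauss(0.0, sigma_r))
               * math.cos(pitch + rng.gauss(0.0, sigma_pitch))
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return math.sqrt(var)
```

The two estimates agree for nearly linear models; the thesis's point is that the combined standard deviation always exceeds that of the best individual sensor.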
|
546 |
NAVIGATING THE COMPLEXITIES OF MEDICAL ERROR AND ITS ETHICAL IMPLICATIONS / Kadakia, Esha, 0009-0002-2872-9605, 05 1900 (has links)
The discourse surrounding medical error and its ethical implications has become a pivotal focus within healthcare. Thus, this thesis aims to delve into the multifaceted aspects of and influences on medical error and its disclosure, with each chapter progressively shedding light on their complexities and ethical considerations.
The overarching argument posits that despite society’s general intolerance for errors and a recognized aim for perfection, error remains an unavoidable and inevitable aspect of the practice of medicine and medical training. There exists an inherent fallibility in healthcare juxtaposed against the gravity of the profession and its consequent medical and legal ramifications when something goes awry.
The following ten chapters collectively highlight the intricacies of error management in healthcare through discussions of societal expectations, medical training, error analysis, accountability, systemic influences, patient-provider relationships, legal implications, and bioethical tenets. Ultimately, the thesis advocates for a cultural shift towards greater transparency, collective accountability, systemic quality improvement, and support for healthcare professionals, so that errors can be addressed effectively while upholding patient safety and trust. It also recognizes the ethical imperative of error disclosure and the importance of fostering a balanced approach that acknowledges both the inevitability of errors in healthcare and the significant physical, emotional, and financial burdens they cause. / Urban Bioethics
|
547 |
Estimación y acotación del error de discretización en el modelado de grietas mediante el método extendido de los elementos finitos / González Estrada, Octavio Andrés, 19 February 2010 (has links)
The Finite Element Method (FEM) has established itself over recent decades as one of the most widely used numerical techniques for solving a great variety of problems in different areas of engineering, such as structural analysis, thermal and fluid analyses, manufacturing processes, etc. One of the applications where the method is of greatest interest is the analysis of problems in Fracture Mechanics, facilitating the study and evaluation of the structural integrity of mechanical components, their reliability, and the detection and control of cracks.
Recently, the development of new techniques such as the eXtended Finite Element Method (XFEM) has further increased the potential of the FEM. These techniques improve the description of problems with singularities, discontinuities, etc., by adding special functions that enrich the space of the conventional finite element approximation.
However, whenever a problem is approximated by numerical techniques, the solution obtained shows discrepancies with respect to the system it represents. In techniques based on a discrete representation of the domain by finite elements (FEM, XFEM, ...), the quantity to control is the so-called discretization error. Numerous references can be found in the literature to techniques that quantify this error in conventional finite element formulations. However, since XFEM is a relatively recent method, error estimation techniques for enriched finite element approximations have not yet been sufficiently developed.
The objective of this Thesis is to quantify the discretization error when XFEM-type enriched approximations are used to represent problems in Linear Elastic Fracture Mechanics (LEFM), such as the modeling of a crack. / González Estrada, OA. (2010). Estimación y acotación del error de discretización en el modelado de grietas mediante el método extendido de los elementos finitos [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7203
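The enrichment functions that distinguish XFEM from conventional FEM can be sketched directly; the standard LEFM crack-tip branch functions and sign enrichment are well known, but the toy point-evaluation routine below (its argument names and the omission of the Heaviside term) is an illustrative assumption, not this thesis's formulation.

```python
import math

def branch_functions(r, theta):
    """The four crack-tip enrichment functions used in XFEM for LEFM;
    they span the leading terms of the near-tip displacement field."""
    sr = math.sqrt(r)
    return [sr * math.sin(theta / 2),
            sr * math.cos(theta / 2),
            sr * math.sin(theta / 2) * math.sin(theta),
            sr * math.cos(theta / 2) * math.sin(theta)]

def heaviside(phi):
    """Sign enrichment modelling the displacement jump across crack faces."""
    return 1.0 if phi >= 0.0 else -1.0

def enriched_value(shape, std_dofs, tip_dofs, r, theta):
    """One scalar component of an enriched approximation at a point:
    u = sum_i N_i u_i + sum_i N_i sum_l F_l(r, theta) b_il.
    (Toy evaluation for tip-enriched nodes only; a real XFEM code adds
    the Heaviside term and assembles element-wise.)"""
    F = branch_functions(r, theta)
    u = 0.0
    for N, ui, bi in zip(shape, std_dofs, tip_dofs):
        u += N * ui + N * sum(Fl * bl for Fl, bl in zip(F, bi))
    return u
```

Note that the first branch function is discontinuous across theta = +/-pi, which is exactly what lets the enriched space represent the crack-face jump that the conventional FE space cannot.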
|
548 |
Cartesian grid FEM (cgFEM): High performance h-adaptive FE analysis with efficient error control. Application to structural shape optimization / Nadal Soriano, Enrique, 14 February 2014 (has links)
More and more challenging designs are required every day in today's industries. The traditional trial-and-error procedure commonly used for mechanical part design is no longer adequate, since it slows down the design process and yields suboptimal designs. For structural components, one alternative consists in using shape optimization processes, which provide optimal solutions. However, these techniques require a high computational effort and extremely efficient and robust Finite Element (FE) programs. FE software companies are aware that their current commercial products must improve in this sense and devote considerable resources to improving their codes. In this work we propose the Cartesian Grid Finite Element Method (cgFEM) as a tool for efficient and robust numerical analysis. The cgFEM methodology developed in this thesis uses the synergy of a variety of techniques to achieve this purpose, but the two main ingredients are the use of Cartesian FE grids independent of the geometry of the component to be analyzed and an efficient hierarchical data structure. These two features give the cgFEM technology the means to increase the efficiency of the cgFEM code with respect to commercial FE codes. As indicated in [1, 2], in order to guarantee the convergence of a structural shape optimization process we need to control the error of each geometry analyzed. In this sense the cgFEM code also incorporates appropriate error estimators, specifically adapted to the cgFEM framework to further increase its efficiency.
This work introduces a solution recovery technique, denoted SPR-CD, that in combination with the Zienkiewicz and Zhu error estimator [3] provides very accurate error measures of the FE solution. Additionally, we have developed error estimators and numerical bounds in Quantities of Interest based on the SPR-CD technique to allow for efficient control of the quality of the numerical solution. Regarding error estimation, we also present three new upper error bounding techniques for the error in energy norm of the FE solution, based on recovery processes. Furthermore, this work presents an error estimation procedure to control the quality of the recovered stress solution provided by the SPR-CD technique. Since the recovered stress field is commonly more accurate and has a higher convergence rate than the FE solution, we propose to substitute the raw FE solution by the recovered solution to decrease the computational cost of the numerical analysis. All these improvements are reflected by the numerical examples of structural shape optimization
problems presented in this thesis. These numerical analyses clearly show the improved behavior of the cgFEM technology over the classical FE implementations commonly used in industry. / Nadal Soriano, E. (2014). Cartesian grid FEM (cgFEM): High performance h-adaptive FE analysis with efficient error control. Application to structural shape optimization [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/35620
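The recovery-based error estimation idea underlying the abstract can be sketched in 1-D; plain nodal averaging below is a stand-in for the thesis's SPR-CD recovery, and the mesh and stress values are illustrative assumptions.

```python
import math

def zz_error_estimate(x, stress_fe):
    """Zienkiewicz-Zhu-type recovery error estimate on a 1-D mesh with
    element-wise constant FE stresses (linear elements).  A smoothed,
    node-based stress field sigma* is built by averaging the element
    values meeting at each node, and each element's error contribution
    is the exact L2 integral of (sigma* - sigma_h)^2 over the element."""
    n_el = len(stress_fe)
    # Recovered nodal stresses: average of the adjacent element stresses.
    sigma_star = [stress_fe[0]]
    sigma_star += [(stress_fe[e] + stress_fe[e + 1]) / 2 for e in range(n_el - 1)]
    sigma_star.append(stress_fe[-1])
    contrib = []
    for e in range(n_el):
        h = x[e + 1] - x[e]
        a = sigma_star[e] - stress_fe[e]       # mismatch at the left node
        b = sigma_star[e + 1] - stress_fe[e]   # mismatch at the right node
        # Exact integral of the linear mismatch squared: h*(a^2+ab+b^2)/3.
        contrib.append(h * (a * a + a * b + b * b) / 3.0)
    return math.sqrt(sum(contrib)), contrib
```

The element contributions are exactly what an h-adaptive driver, like the one described in the abstract, uses to decide where to refine.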
|
549 |
Adaptive Protocols to Improve TCP/IP Performance in an LMDS Network using a Broadband Channel Sounder / Eshler, Todd Jacob, 26 April 2002 (has links)
Virginia Tech researchers have developed a broadband channel sounder that can measure channel quality while a wireless network is in operation. Channel measurements from the broadband sounder hold the promise of improving TCP/IP performance by triggering configuration changes in an adaptive data link layer protocol. We present an adaptive data link layer protocol that can use different levels of forward error correction (FEC) coding and link-layer automatic repeat request (ARQ) to improve network and transport layer performance.
Using a simulation model developed in OPNET, we determine the effects of different data link layer protocol configurations on TCP/IP throughput and end-to-end delay under a Rayleigh fading channel model. Switching to higher levels of FEC encoding improves TCP/IP throughput at high bit error rates, but increases the end-to-end delay of TCP/IP segments. Overall, TCP/IP connections with link-layer ARQ showed approximately 150 Kbps greater throughput than those without ARQ, but led to the highest end-to-end delay for high-bit-error-rate channels.
Based on the simulation results, we propose algorithms to maximize TCP/IP throughput and minimize end-to-end delay using the current bit error rate of the channel. We propose a metric, the carrier-to-interference ratio (CIR), calculated from data retrieved from the broadband channel sounder, and algorithms that use it to control TCP/IP throughput and end-to-end delay.
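An adaptation policy of the kind described can be sketched as a simple threshold rule; the code rates, BER thresholds, and the `delay_sensitive` flag below are illustrative assumptions, not the thesis's simulation values.

```python
import math

def cir_db(carrier_power, interference_power):
    """Carrier-to-interference ratio in dB from two measured powers."""
    return 10.0 * math.log10(carrier_power / interference_power)

def select_fec_level(ber, delay_sensitive=False):
    """Hypothetical adaptation rule: pick an FEC code rate, and whether
    to enable link-layer ARQ, from the current channel bit error rate.
    ARQ is disabled for delay-sensitive traffic since retransmissions
    were the largest source of end-to-end delay in the simulations."""
    if ber < 1e-6:
        return ("rate-1", False)                   # clean channel: no FEC overhead
    elif ber < 1e-4:
        return ("rate-3/4", not delay_sensitive)   # light coding
    else:
        return ("rate-1/2", not delay_sensitive)   # heavy coding for bad channels
```

In the proposed system, sounder-derived CIR would be mapped to an estimated BER before applying a rule of this shape.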
The thesis also describes a monitor program to use in the broadband wireless system. The monitor program displays data collected from the broadband sounder and controls the settings for the data link layer protocol and broadband sounder while the network is in operation. / Master of Science
|
550 |
ECC Video: An Active Second Error Control Approach for Error Resilience in Video Coding / Du, Bing Bing, January 2003 (has links)
Supporting video communication in mobile environments has been one of the objectives of many telecommunication network engineers, and it has become a basic requirement of third-generation mobile communication systems. This dissertation explores the possibility of optimizing the utilization of shared, scarce radio channels for live video transmission over a GSM (Global System for Mobile Communications) network and of realizing error-resilient video communication in unfavorable channel conditions, especially mobile radio channels. The main contribution is the adoption of an SEC (Second Error Correction) approach using ECC (Error Correction Coding) based on a punctured convolutional coding scheme to cope with residual errors at the application layer and enhance the error resilience of a compressed video bitstream. The approach is developed further for improved performance in different circumstances, with additional enhancements involving intra-frame relay and interleaving, and the combination of the approach with packetization. Simulation results from applying the various techniques to the test video sequences Akiyo and Salesman are presented and analyzed for performance comparison with the conventional video coding standard. The proposed approach shows consistent improvements under these conditions. For instance, to cope with random residual errors, the simulation results show that when the residual BER (Bit Error Rate) reaches 10^-4, the video output reconstructed from a bitstream protected using the standard resynchronization approach is of unacceptable quality, while the proposed scheme can deliver a video output that is absolutely error free in a more efficient way. When the residual BER reaches 10^-3, the standard approach fails to deliver a recognizable video output, while the SEC scheme can still correct all the residual errors with a modest bit rate increase.
Under bursty residual error conditions, the proposed scheme also outperforms the resynchronization approach. Future work to extend the scope and applicability of the research is suggested in the last chapter of the thesis.
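Punctured convolutional coding, the basis of the SEC approach, can be sketched with a textbook encoder; the (7, 5) octal generators and the puncturing pattern below are standard examples, assumed for illustration rather than taken from the dissertation.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3 (generators
    7 and 5 in octal), starting from the all-zero state."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.append(bin(state & g1).count("1") % 2)  # parity over taps g1
        out.append(bin(state & g2).count("1") % 2)  # parity over taps g2
    return out

def puncture(coded, pattern=(1, 1, 0, 1)):
    """Delete coded bits according to a repeating puncturing pattern.
    The default keeps 3 of every 4 rate-1/2 output bits, raising the
    overall rate to 2/3; the decoder re-inserts erasures at the deleted
    positions before Viterbi decoding."""
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]
```

Puncturing is what lets a single mother code trade error-correcting power against bit rate overhead, matching the "modest bit rate increase" behavior reported above.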
|