491 |
Modeling and Control of Bilinear Systems: Application to the Activated Sludge Process. Ekman, Mats, January 2005 (has links)
This thesis concerns modeling and control of bilinear systems (BLS). BLS are linear in the state and linear in the control separately, but not jointly linear in both. In the first part of the thesis, a background to BLS and their applications to modeling and control is given. The second part, and likewise the principal theme of this thesis, is dedicated to theoretical aspects of identification, modeling and control of mainly BLS, but also linear systems. In the last part of the thesis, applications of bilinear and linear modeling and control to the activated sludge process (ASP) are given.
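As a concrete illustration of the system class, a single-input bilinear system can be written x' = A x + u (N x) + b u: linear in x for a fixed input and linear in u for a fixed state, but not jointly linear. Below is a minimal forward-Euler simulation sketch; all matrices and the input sequence are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Illustrative single-input bilinear system: x' = A x + u * (N x) + b * u
A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # linear drift
N = np.array([[0.0, 1.0], [-1.0, 0.0]])    # bilinear coupling term
b = np.array([1.0, 0.0])                   # affine input vector

def step(x, u, dt=0.01):
    """One forward-Euler step of the bilinear dynamics."""
    return x + dt * (A @ x + u * (N @ x) + b * u)

def simulate(x0, u_seq, dt=0.01):
    """Integrate the system over a sequence of control values."""
    x = np.asarray(x0, dtype=float)
    for u in u_seq:
        x = step(x, u, dt)
    return x

x_final = simulate([1.0, 0.0], [0.5] * 100)
```

For a fixed control sequence the state map is affine in the initial state; the product term u * (N @ x) is what destroys joint linearity and motivates the dedicated identification and control theory the thesis develops.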
|
492 |
Detection of frameshifts and improving genome annotation. Antonov, Ivan Valentinovich, 12 November 2012 (has links)
We developed a new program called GeneTack for ab initio frameshift detection in intronless protein-coding nucleotide sequences. The GeneTack program uses
a hidden Markov model (HMM) of a genomic sequence with possibly frameshifted
protein-coding regions. The Viterbi algorithm finds the maximum likelihood path
that discriminates between true adjacent genes and a single gene with a frameshift.
We tested GeneTack, along with two earlier programs, FrameD and
FSFind on 17 prokaryotic genomes with frameshifts introduced randomly into known
genes. We observed that the average frameshift prediction accuracy of GeneTack, in
terms of (Sn+Sp)/2 values, was higher by a significant margin than the accuracy of
the other two programs.
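The maximum-likelihood decoding GeneTack relies on is the standard Viterbi dynamic program over an HMM. The toy sketch below decodes a short observation sequence; the three "frame" states and all probabilities are illustrative assumptions, and GeneTack's actual model of frameshifted coding regions is far richer.

```python
import numpy as np

# Toy HMM: three sticky "reading frame" states, emissions over A,C,G,T (0..3).
# All numbers are illustrative, not GeneTack's actual parameters.
log_trans = np.log(np.array([
    [0.98, 0.01, 0.01],
    [0.01, 0.98, 0.01],
    [0.01, 0.01, 0.98],
]))
log_emit = np.log(np.array([
    [0.4, 0.2, 0.2, 0.2],
    [0.2, 0.4, 0.2, 0.2],
    [0.2, 0.2, 0.4, 0.2],
]))
log_init = np.log(np.array([1 / 3, 1 / 3, 1 / 3]))

def viterbi(obs):
    """Return the maximum-likelihood state path for an observation sequence."""
    T, S = len(obs), log_trans.shape[0]
    dp = np.full((T, S), -np.inf)       # best log-prob ending in state s at t
    back = np.zeros((T, S), dtype=int)  # backpointers for path recovery
    dp[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        for s in range(S):
            scores = dp[t - 1] + log_trans[:, s]
            back[t, s] = int(np.argmax(scores))
            dp[t, s] = scores[back[t, s]] + log_emit[s, obs[t]]
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

path = viterbi([0, 0, 0, 1, 1, 1, 2, 2])
```

In GeneTack the analogous path choice is what discriminates between two true adjacent genes and a single gene interrupted by a frameshift.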
GeneTack was used to screen 1,106 complete prokaryotic genomes and 206,991
genes with frameshifts (fs-genes) were identified. Our goal was to determine if a
frameshift transition was due to (i) a sequencing error, (ii) an indel mutation or (iii)
a recoding event. We grouped 102,731 fs-genes into 19,430
clusters based on sequence similarity between their protein products (fs-proteins),
conservation of predicted frameshift position, and its direction. While fs-genes in
2,810 clusters were classified as conserved pseudogenes and fs-genes in 1,200 clusters
were classified as hypothetical pseudogenes, 5,632 fs-genes from 239 clusters
possessing conserved motifs near frameshifts were predicted to be recoding candidates.
Experiments were performed for sequences derived from 20 out of the 239 clusters;
programmed ribosomal frameshifting with efficiency higher than 10% was observed
for four clusters.
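The clustering step described above can be illustrated with a greedy single-linkage grouping by pairwise sequence similarity. This sketch uses Python's difflib ratio on toy protein strings with an assumed threshold; the actual pipeline's alignment scoring and its checks on frameshift position and direction are not reproduced.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude pairwise similarity in [0, 1] between two sequences."""
    return SequenceMatcher(None, a, b).ratio()

def cluster(seqs, threshold=0.8):
    """Greedy single-linkage: join a cluster if similar to any member."""
    clusters = []
    for s in seqs:
        for c in clusters:
            if any(similarity(s, m) >= threshold for m in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

# Toy fs-protein fragments: two near-identical pairs
seqs = ["MKTAYIAKQR", "MKTAYIAKQK", "GSHMLEDPVA", "GSHMLEDPVV"]
groups = cluster(seqs)
```

Greedy single-linkage is order-dependent and quadratic in the number of sequences; at the scale of 102,731 fs-genes a production pipeline would use an indexed similarity search instead, but the grouping principle is the same.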
GeneTack was also applied to 1,165,799 mRNAs from 100 eukaryotic species, and 45,295 frameshifts were identified. A clustering approach similar to the one used for
prokaryotic fs-genes allowed us to group 12,103 fs-genes into 4,087 clusters. Known
programmed frameshift genes were among the obtained clusters. Several clusters may
correspond to new examples of dual coding genes.
We developed a web interface to browse a database containing all the fs-genes
predicted by GeneTack in prokaryotic genomes and eukaryotic mRNA sequences.
The fs-genes can be retrieved by similarity search to a given query sequence, by fs-
gene cluster browsing, etc. Clusters of fs-genes are characterized with respect to their
likely origin, such as pseudogenization, phase variation, programmed frameshifts, etc.
All the tools and the database of fs-genes are available at the GeneTack web site
http://topaz.gatech.edu/GeneTack/
|
493 |
Estimation and Inference for Quantile Regression of Longitudinal Data: With Applications in Biostatistics. Karlsson, Andreas, January 2006 (has links)
This thesis consists of four papers dealing with estimation and inference for quantile regression of longitudinal data, with an emphasis on nonlinear models. The first paper extends the idea of quantile regression estimation from the case of cross-sectional data with independent errors to the case of linear or nonlinear longitudinal data with dependent errors, using a weighted estimator. The performance of different weights is evaluated, and a comparison is also made with the corresponding mean regression estimator using the same weights. The second paper examines the use of bootstrapping for bias correction and calculations of confidence intervals for parameters of the quantile regression estimator when longitudinal data are used. Different weights, bootstrap methods, and confidence interval methods are used. The third paper is devoted to evaluating bootstrap methods for constructing hypothesis tests for parameters of the quantile regression estimator using longitudinal data. The focus is on testing the equality between two groups of one or all of the parameters in a regression model for some quantile using single or joint restrictions. The tests are evaluated regarding both their significance level and their power. The fourth paper analyzes seven longitudinal data sets from different parts of the biostatistics area by quantile regression methods in order to demonstrate how new insights can emerge on the properties of longitudinal data from using quantile regression methods. The quantile regression estimates are also compared and contrasted with the least squares mean regression estimates for the same data sets. In addition to the estimates, confidence intervals and hypothesis testing procedures are examined.
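Quantile regression estimators of the kind compared in these papers minimize the check (pinball) loss rather than squared error. The sketch below fits a median regression on synthetic cross-sectional data by direct minimization; the data, starting values, and unit weights are illustrative assumptions, and the thesis's weighted longitudinal machinery is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: y = 2 + 0.5 x + noise (illustrative, not from the thesis)
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)

def pinball(beta, x, y, tau, w=None):
    """Weighted check loss for the tau-th conditional quantile of y."""
    r = y - (beta[0] + beta[1] * x)
    loss = np.where(r >= 0, tau * r, (tau - 1) * r)
    w = np.ones_like(y) if w is None else w
    return np.sum(w * loss)

tau = 0.5  # the median; other tau values target other quantiles
fit = minimize(pinball, x0=[0.0, 0.0], args=(x, y, tau), method="Nelder-Mead")
beta_hat = fit.x
```

Passing non-uniform weights `w`, chosen to account for within-subject dependence, is the essence of the weighted estimator studied in the first paper; bootstrapping the fit over resampled subjects gives the confidence intervals and tests of the second and third papers.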
|
494 |
Interventions to Mitigate the Effects of Interruptions During High-risk Medication Administration. Prakash, Varuna, 13 January 2011 (links)
Research suggests that interruptions are ubiquitous in healthcare settings and have a negative impact on patient safety. However, there is a lack of solutions to reduce harm arising from interruptions. Therefore, this research aimed to design and test the effectiveness of interventions to mitigate the effects of interruptions during medication administration. A three-phase study was conducted. First, direct observation was conducted to quantify the state of interruptions in an ambulatory unit where nurses routinely administered high-risk medications. Second, a user-centred approach was used to design interventions targeting errors arising from these interruptions. Finally, the effectiveness of these interventions was evaluated through a high-fidelity simulation experiment. Results showed that medication administration error rates decreased significantly on 4 of 7 measures with the use of interventions, compared to the control condition. Results of this work will help guide the implementation of interventions in nursing environments to reduce medication errors caused by interruptions.
|
496 |
Analysis of Third- and Fifth-Grade Spelling Errors on the Test of Written Spelling-4: Do Error Types Indicate Levels of Linguistic Knowledge? Conway, Barbara Tenney, August 2011 (links)
A standardized test of spelling ability, the Test of Written Spelling-4, was used to explore the error patterns of Grade 3 and Grade 5 students in public and private schools in the southwestern region of the US. The study examined the relationship between the types of errors students make within a grade level (Grades 3 and 5 for this study) and the students’ spelling proficiency. A qualitative analysis of errors on the Test of Written Spelling-4 (TWS-4) resulted in distributions of errors categorized as phonological, phonetic, orthographic, etymological, and morphological. For both Grades 3 and 5, a higher proportion of phonological and phonetic errors was made by students in the lowest spelling achievement group. Students with higher standard spelling scores made a lower proportion of phonological and phonetic errors and a higher proportion of errors categorized as etymological and morphological. The Test of Silent Word Reading Fluency (TOSWRF; Mather, Allen, Hammill, & Roberts, 2004) was also administered to the students to examine the relationship of these error types to literacy. The correlation between reading fluency standard scores and phonological and phonetic errors was negative, whereas the correlation between reading fluency and orthographic, etymological, and morphological error types was positive. This study underscores the value of looking at spelling achievement as a part of students’ literacy profiles. In addition, it highlights the importance of ensuring that students beyond the years of very early reading and spelling development (Grades 3-5), especially those with low spelling proficiency, have the basic skills of phonological awareness and sound/symbol correspondence in place to support their ability to spell and to read, and that spelling is taught in a way that meets individual student needs.
|
497 |
Productivity and quality in the post-editing of outputs from translation memories and machine translation. Guerberof Arenas, Ana, 24 September 2012 (links)
This study presents empirical research on no-match, machine-translated and translation-memory segments, analyzed in terms of translators’ productivity, final quality and prior professional experience. The findings suggest that translators have higher productivity and quality when using machine-translated output than when translating on their own, and that the productivity and quality gained with machine translation are not significantly different from the values obtained when processing fuzzy matches from a translation memory in the 85-94 percent range. The translators’ prior experience affects the quality they deliver but not their productivity. These quantitative findings are triangulated with qualitative data from an online questionnaire and from one-to-one debriefings with the translators.
|
498 |
Design of Soft Error Robust High Speed 64-bit Logarithmic Adder. Shah, Jaspal Singh, January 2008 (links)
Continuous scaling of the transistor size and reduction of the operating voltage have led to a significant performance improvement of integrated circuits. However, the vulnerability of the scaled circuits to transient data upsets or soft errors, which are caused by alpha particles and cosmic neutrons, has emerged as a major reliability concern. In this thesis, we have investigated the effects of soft errors in combinational circuits and proposed soft error detection techniques for high-speed adders. In particular, we have proposed an area-efficient 64-bit soft error robust logarithmic adder (SRA). The adder employs the carry-merge Sklansky adder architecture, in which carries are generated every 4 bits. Since the particle-induced transient, often referred to as a single event transient (SET), typically lasts for 100-200 ps, the adder uses time redundancy by sampling the sum outputs twice. The sampling instants are set 110 ps apart. In contrast to traditional time redundancy, which requires two clock cycles to generate a given output, the SRA generates an output in a single clock cycle. The sampled sum outputs are compared using a 64-bit XOR tree to detect any possible error. An energy-efficient 4-input transmission-gate XOR logic is implemented to reduce delay and power. The pseudo-static logic (PSL), which has the ability to recover from a particle-induced transient, is used in the adder implementation. In comparison with the space-redundant approach, which requires hardware duplication for error detection, the SRA is 50% more area efficient. The proposed SRA is simulated for different operands with errors inserted at different nodes at the inputs, the carry-merge tree, and the sum generation circuit. The simulation vectors are carefully chosen such that the SET is not masked by the error-masking mechanisms inherently present in combinational circuits.
Simulation results show that the proposed SRA is capable of detecting 77% of the errors. The undetected errors primarily result when the SET causes an even number of errors and when errors occur outside the sampling window.
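The double-sampling detection scheme, and its blind spot, can be sketched behaviorally. The 110 ps sampling gap comes from the abstract; the 150 ps transient width (within the stated 100-200 ps range) and the single flipped bit are illustrative assumptions.

```python
# Behavioral sketch of time-redundancy error detection: the sum output is
# latched twice, SAMPLE_GAP_PS apart, and the two samples are XORed.
SET_DURATION_PS = 150   # assumed transient width, within the 100-200 ps range
SAMPLE_GAP_PS = 110     # gap between sampling instants, from the abstract

def sample_sum(true_sum, set_start, t, upset_bit):
    """Value latched at time t; one bit is flipped while the SET is active."""
    if set_start <= t < set_start + SET_DURATION_PS:
        return true_sum ^ (1 << upset_bit)
    return true_sum

def double_sample(true_sum, set_start, upset_bit, t1=0):
    """Sample twice and compare; a mismatch flags a detected error."""
    s1 = sample_sum(true_sum, set_start, t1, upset_bit)
    s2 = sample_sum(true_sum, set_start, t1 + SAMPLE_GAP_PS, upset_bit)
    return s1, s2, (s1 ^ s2) != 0   # the 64-bit XOR tree, reduced to one test

# SET overlaps only the first sampling instant: mismatch, error detected
_, _, detected = double_sample(0xDEADBEEF, set_start=-50, upset_bit=3)
# SET long enough to span both instants: both samples corrupted identically
_, _, missed = double_sample(0xDEADBEEF, set_start=-20, upset_bit=3)
```

The second call illustrates one of the undetected cases the abstract reports: a transient that spans both sampling instants corrupts both samples the same way, so the XOR tree sees no difference.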
|
500 |
Generika: Patientsäkert eller en risk? En litteraturstudie om synonympreparat utgör hinder för patientsäkerheten samt vilka förebyggande åtgärder hälso- och sjukvården kan vidta [Generics: patient-safe or a risk? A literature review of whether generic substitutes hinder patient safety and which preventive measures healthcare can take]. Öhman, Malin & Sigblad, Fanny, January 2012 (links)
The aim was to examine patient safety in generic drug use through a review of original research articles. The study focused on potential problems, preventive measures, and patients' own experiences of generic drug use. The method was a literature review in which articles were retrieved from several databases and through manual searches. Seventeen articles were included and appraised using a template for quality and results analysis. The results showed that nurses consider uncertainty, frequent medication switches, unclear communication, drugs with similar names and appearance, and heavy workload to be risk factors leading to medication errors in the form of under- and overdosing and/or omitted doses. Patient safety is also affected in that reduced adherence, side effects, and weaker effect are consequences of generic substitution. Qualified training for patients and staff was identified as a countermeasure. Clearly stated prescriptions, a reduced number of drugs with similar names and appearance, double checks, and updated medication lists were suggested improvements. Patients generally held negative attitudes toward, and had negative experiences of, generics; the attitudes were rooted mainly in insecurity and distrust. Patients experienced confusion, more or worse side effects, and weaker effect. The conclusion is that patients and nurses do experience problems with the use of generic drugs and that this has consequences for patient safety. More countermeasures should therefore be established and applied to prevent medication errors that jeopardize patient safety.
|