  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Role of Epistasis in Alzheimer's Disease Genetics

Ebbert, Mark T, 01 December 2014
Alzheimer's disease is a complex neurodegenerative disease whose basic etiology and genetic structure remain elusive, despite decades of intensive investigation. To date, the significant genetic markers identified have no obvious functional effects and are unlikely to play a role in Alzheimer's disease etiology themselves. These markers are likely linked to other genetic variations, rare or common. Regardless of what causal mutations are found, research has demonstrated that no single gene determines Alzheimer's disease development and progression. It is clear that Alzheimer's disease development and progression are based on a set of interactions between genes and environmental variables. This dissertation focuses on gene-gene interactions (epistasis) and their effects on Alzheimer's disease case-control status. We genotyped the top Alzheimer's disease genetic markers found on AlzGene.org (accessed 2014) and tested for interactions associated with Alzheimer's disease case-control status. We identified two potential gene-gene interactions: between rs11136000 (CLU) and rs670139 (MS4A4E) (synergy factor = 3.81; p = 0.016), and between rs3865444 (CD33) and rs670139 (MS4A4E) (synergy factor = 5.31; p = 0.003). Based on one data set alone, however, it is difficult to know whether the interactions are real. We replicated the CLU-MS4A4E interaction in an independent data set from the Alzheimer's Disease Genetics Consortium (synergy factor = 2.37; p = 0.007) using a meta-analysis. We also identified potential dosage (synergy factor = 2.98; p = 0.05) and APOE ε4 effects (synergy factor = 4.75; p = 0.005) in Cache County that did not replicate independently. The APOE ε4 effect is an association with Alzheimer's disease case-control status in APOE ε4-negative individuals. There is modest evidence, however, that both the dosage (synergy factor = 1.73; p = 0.02) and APOE ε4 (synergy factor = 2.08; p = 0.004) effects are real, because they replicate when the Cache County data are included in the meta-analysis. These results demonstrate the importance of understanding the role of epistasis in Alzheimer's disease.
During this research, we also developed a novel tool known as the Variant Tool Chest. The Variant Tool Chest has played an integral part in this research and other projects, and was developed to fill numerous gaps in next-generation sequence data analysis. Critical features include advanced, genotype-aware set operations on single- or multi-sample variant call format (VCF) files. These features are critical for genetics studies using next-generation sequencing data and were used to perform important analyses in the third study of this dissertation. By understanding the role of epistasis in Alzheimer's disease, researchers will begin to untangle the complex nature of Alzheimer's disease etiology. With this information, therapies and diagnostics will become possible, relieving millions of patients, their families and caregivers of the pain Alzheimer's disease inflicts upon them.
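The synergy factors reported above quantify whether two markers' joint effect exceeds the product of their individual effects. A minimal sketch of that calculation is given below; the counts and odds ratios are hypothetical, and the function names are illustrative rather than part of the dissertation's software.

```python
# Illustrative sketch: a synergy factor compares the observed odds ratio for
# carrying both risk factors with the product of the odds ratios for each
# factor alone. All numbers below are hypothetical.

def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Odds ratio for a single exposure from a 2x2 case-control table."""
    return (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

def synergy_factor(or_both, or_a, or_b):
    """SF > 1 suggests the joint effect exceeds the product of marginal effects."""
    return or_both / (or_a * or_b)

# Hypothetical marginal and joint odds ratios for two markers
or_a = 1.2      # e.g., carrying one risk allele alone
or_b = 1.1      # e.g., carrying the other risk allele alone
or_both = 5.0   # carrying both
print(synergy_factor(or_both, or_a, or_b))  # > 1, suggesting interaction
```

In practice the synergy factor is estimated with a confidence interval from the fitted interaction model rather than from point odds ratios, but the ratio structure is the same.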
2

Možnosti využití metod vícerozměrné statistické analýzy dat při hodnocení spolehlivosti distribučních sítí / Possibilities of using multi-dimensional statistical analysis methods when evaluating reliability of distribution networks

Geschwinder, Lukáš, January 2009
The aim of this study is to evaluate multi-dimensional (multivariate) statistical analysis methods as tools for simulating the reliability of a distribution network. The preferred methods are cluster analysis (CLU) and principal component analysis (PCA). CLU divides objects into groups with similar characteristics, based on their attributes and the distances between objects; the result can reveal hidden structure in the data. PCA is used to locate structure in the attributes of a multi-dimensional data matrix, where the attributes are individual quantities describing a given object. PCA decomposes the primary data matrix into a structural matrix and a noise matrix: the primary data are transformed into a new coordinate system of principal components, the transformed data are called scores, and the principal components form an orthogonal coordinate system. From the standpoint of reliability, a distribution network can be characterized by a number of statistical quantities. Reliability indicators include the number and duration of interruptions; integral reliability indicators include the system average interruption frequency index (SAIFI) and the system average interruption duration index (SAIDI). In conclusion, a SAIFI simulation based on the negative binomial distribution is compared with values provided by a distribution company, and tests of attribute dependences and outage distributions are performed.
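The two integral indices named above have simple definitions: SAIFI is total customer interruptions divided by customers served, and SAIDI is total customer interruption duration divided by customers served. A minimal sketch, with hypothetical outage data:

```python
# Illustrative sketch of the reliability indices discussed in the abstract.
# SAIFI = total customer interruptions / total customers served
# SAIDI = total customer-interruption duration / total customers served
# The outage events below are hypothetical.

def saifi(events, customers_served):
    """System Average Interruption Frequency Index (interruptions per customer)."""
    return sum(affected for affected, _ in events) / customers_served

def saidi(events, customers_served):
    """System Average Interruption Duration Index (same time unit as the inputs)."""
    return sum(affected * duration for affected, duration in events) / customers_served

# Each event: (customers affected, outage duration in minutes)
events = [(500, 30), (1200, 90), (300, 15)]
customers = 10_000

print(saifi(events, customers))  # 0.2 interruptions per customer
print(saidi(events, customers))  # 12.75 interrupted minutes per customer
```

A negative binomial model, as used in the study's simulation, would then be fitted to the per-period interruption counts feeding these indices.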
3

How States are Meeting the Highly Qualified Teacher Component of NCLB

Pinney, Jean, 20 May 2005
As part of the reauthorization of the Elementary and Secondary Education Act, the federal government added the requirement that all schools receiving Title I funds must have "highly qualified teachers" in every classroom. The term "highly qualified teacher" comes from the No Child Left Behind Act of 2001. What exactly is a "highly qualified" teacher? This part of the law is widely debated throughout the fifty states, but most agree that a teacher's subject-matter knowledge and experience result in increased student achievement (Ansell & McCase, 2003). Some states have made progress in meeting the "highly qualified" requirement of NCLB. However, most states have merely established the criteria for determining whether a teacher is highly qualified (Keller, 2003). The Education Trust has called for clarification from the Department of Education on the guidelines for the teacher quality provision of the law. Ten states have put into law all the requirements of the federal law, 22 have done some work toward that goal, and 18 states still have a long way to go (Keller, 2003). With so many states still grappling with compliance with the law, this study may give policy makers in those states options to consider that are already in use elsewhere. In addition, the study focuses on middle school and the possible impact these requirements will have on the staffing of middle schools. Policy makers would do well to look at this aspect closely, since middle school is often where education "loses" many students to dropping out. The middle school is also where the greatest number of non-certified teachers work and where the greatest percentage (44%) of teachers teach without even a minor in the subject they teach (Ingersoll, 2002).
4

Natural language processing techniques for the purpose of sentinel event information extraction

Barrett, Neil, 23 November 2012
An approach to biomedical language processing is to apply existing natural language processing (NLP) solutions to biomedical texts. Often, existing NLP solutions are less successful in the biomedical domain than in the domains they were built for (e.g., newspaper text). Biomedical NLP is likely best served by methods, information and tools that account for its particular challenges. In this thesis, I describe an NLP system specifically engineered for sentinel event extraction from clinical documents. The NLP system's design accounts for several biomedical NLP challenges. The specific contributions are as follows.
- Biomedical tokenizers differ, lack consensus over output tokens and are difficult to extend. I developed an extensible tokenizer, providing a tokenizer design pattern and implementation guidelines. It evaluated as equivalent to a leading biomedical tokenizer (MedPost).
- Biomedical part-of-speech (POS) taggers are often trained on non-biomedical corpora and applied to biomedical corpora, which reduces tagging accuracy. I built a token-centric POS tagger, TcT, that is more accurate than three existing POS taggers (mxpost, TnT and Brill) when trained on a non-biomedical corpus and evaluated on biomedical corpora. TcT achieves this increase in tagging accuracy by ignoring previously assigned POS tags and restricting the tagger's scope to the current, previous and following tokens.
- Two parsers, MST and Malt, had previously been evaluated using perfect POS tag input. Given that perfect input is unlikely in biomedical NLP tasks, I evaluated these two parsers on imperfect POS tag input and compared their results. MST was more affected by imperfectly POS-tagged biomedical text. I attributed MST's drop in performance to verbs and adjectives, where MST had more potential for performance loss than Malt, and I attributed Malt's resilience to POS tagging errors to its rich feature set and its local scope in decision making.
- Previous automated clinical coding (ACC) research focuses on mapping narrative phrases to terminological descriptions (e.g., concept descriptions). These methods make little or no use of the additional semantic information available through topology. I developed a token-based ACC approach that encodes tokens and manipulates token-level encodings by mapping linguistic structures to topological operations in SNOMED CT. My ACC method recalled most concepts given their descriptions and performed significantly better than MetaMap.
I extended these contributions for the purpose of sentinel event extraction from clinical letters. The extensions account for negation in text, use medication brand names during ACC and model (coarse) temporal information. My software system's performance is similar to state-of-the-art results. Given all of the above, my thesis is a blueprint for building a biomedical NLP system. Furthermore, my contributions likely apply to NLP systems in general.
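The token-centric tagging idea described in the abstract — decide each tag from the current, previous and next tokens only, ignoring previously assigned tags — can be illustrated with a toy sketch. The tiny lexicon and fallback rules below are hypothetical and are not TcT's actual model.

```python
# Toy illustration of token-centric POS tagging: each decision uses only the
# (previous, current, next) token window, never earlier tag assignments.
# The lexicon and suffix heuristics are invented for this example.

def window(tokens, i):
    """Return the (previous, current, next) tokens, padded at the boundaries."""
    prev_tok = tokens[i - 1] if i > 0 else "<s>"
    next_tok = tokens[i + 1] if i < len(tokens) - 1 else "</s>"
    return prev_tok, tokens[i], next_tok

LEXICON = {"patient": "NN", "denies": "VBZ", "chest": "NN", "pain": "NN"}

def tag(tokens):
    tags = []
    for i in range(len(tokens)):
        prev_tok, tok, next_tok = window(tokens, i)
        if tok in LEXICON:                       # lexicon lookup first
            tags.append(LEXICON[tok])
        elif tok.endswith("s") and prev_tok in LEXICON:
            tags.append("VBZ")                   # crude suffix fallback
        else:
            tags.append("NN")                    # default to noun
    return tags

print(tag(["patient", "denies", "chest", "pain"]))
```

A real tagger would learn window features from a corpus; the point here is only that no tag depends on a previously assigned tag, which is what makes errors non-propagating.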
5

NUMERICAL SIMULATIONS OF THE URBAN ATMOSPHERE WITH THE SUBMESO MODEL: APPLICATION TO THE CLU-ESCOMPTE CAMPAIGN OVER THE MARSEILLE AGGLOMERATION

Leroyer, Sylvie, 16 November 2006
To understand and predict the dispersion of pollutants emitted in urbanized areas, numerical simulations are carried out at high resolution. The objective is to reproduce atmospheric characteristics above a complex urbanized environment. A precise method was developed for conducting high-spatial-resolution numerical simulations of the urban atmosphere, based on three complementary tools optimized for the case of Marseille: the atmospheric model SUBMESO, in LES (large-eddy simulation) mode; the urbanized sub-meso-scale soil model SM2-U; and DFMap, software for mapping the morphological characteristics of the urban canopy. To simulate the atmosphere of coastal cities, a method for computing fluxes at the sea-atmosphere interface was developed and validated, adapted to the temperature data available from measurement or remote sensing. A sensitivity study was then conducted on an academic configuration of a city in its rural and/or coastal environment, using twelve simulations to evaluate the feedbacks between the soil model and the atmospheric model. Five further simulations were performed over the Marseille region during an intensive observation period of the CLU-ESCOMPTE experimental campaign, with three nested grids, allowing the first validation of the coupled SUBMESO and SM2-U models, the analysis of interactions between the city, breeze systems and topography, and the study of turbulent fields at very high resolution. The method developed can be used to study the air quality of other agglomerations.
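The abstract mentions computing fluxes at the sea-atmosphere interface from available temperature data. A common way to do this is a bulk-aerodynamic formula; the sketch below shows the standard form for sensible heat flux, with an illustrative transfer coefficient, and is not necessarily the exact scheme implemented in the thesis.

```python
# Bulk-aerodynamic sensible heat flux: H = rho * cp * C_H * U * (Ts - Ta).
# The transfer coefficient C_H and the input values are illustrative only.

RHO_AIR = 1.2     # air density, kg m^-3 (near-surface, illustrative)
CP_AIR = 1005.0   # specific heat of air at constant pressure, J kg^-1 K^-1

def sensible_heat_flux(wind_speed, t_sea, t_air, c_h=1.2e-3):
    """Sensible heat flux in W m^-2; positive when the sea warms the air."""
    return RHO_AIR * CP_AIR * c_h * wind_speed * (t_sea - t_air)

# Example: 5 m/s wind, sea surface 2 K warmer than the air above it
print(sensible_heat_flux(5.0, 293.15, 291.15))  # about 14.5 W m^-2
```

Latent heat flux follows the same bulk form with humidity differences in place of the temperature difference; the interest of such schemes here is that they need only the sea-surface temperature, which is what measurement or remote sensing provides.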
