431

Towards a Better Use: The Utah Shakespearean Festival, Teaching Artists, and Outreach Programs

Kidd, Karen Marie 16 December 2011 (has links) (PDF)
Teaching Artists are an important component of the Utah Shakespearean Festival's Education Department's outreach touring program, which visits K-12 schools throughout Utah each year. However, the Education Department could be using Teaching Artists in different and better ways to help K-12 teachers infuse theatre into their curriculum. This work looks carefully at the outreach offered by the Utah Shakespearean Festival's Education Department and compares it to the outreach work being done by the Oregon Shakespeare Festival and Shakespeare Santa Cruz. Based on the analysis of the three festivals, assessment benchmarks are identified to aid the Education Department in evaluating its use of Teaching Artists, and suggestions are made to help it strengthen its outreach programs through the creation of a Teaching Artist training program that would allow more Teaching Artists to work in Utah K-12 schools. The work concludes with ideas for lesson and unit plans for Teaching Artists of various levels to use in the K-12 classroom, aligned with the Common Core State Standards for Language Arts that Utah adopted in August 2010.
432

Machine Learning methods in shotgun proteomics

Truong, Patrick January 2023 (has links)
As high-throughput biology experiments generate increasing amounts of data, the field is naturally turning to data-driven methods for analysis and the extraction of novel insights. These insights into biological systems are crucial for understanding disease progression, drug targets, treatment development, and diagnostic methods, ultimately improving human health and well-being as well as deepening our insight into cellular biology. Biological data sources such as the genome, transcriptome, proteome, metabolome, and metagenome provide critical information about the structure, function, and dynamics of biological systems. The focus of this licentiate thesis is proteomics, the study of proteins, which is a natural starting point for understanding biological function, as proteins are crucial functional components of cells. Proteins play a central role in enzymatic reactions, structural support, transport, storage, cell signaling, and immune system function. In addition, proteomics has vast data repositories, and continual technical and methodological improvements yield ever more data. However, generating proteomic data involves multiple steps, each prone to errors, making sophisticated models essential for handling technical and biological artifacts and accounting for uncertainty in the data. This licentiate thesis investigates the use of machine learning and probabilistic methods to extract information from mass-spectrometry-based proteomic data. The thesis starts with an introduction to proteomics, including a basic biological background, followed by a description of how mass-spectrometry-based proteomics experiments are performed and the challenges in proteomic data analysis. The statistics of proteomic data analysis are also explored, and state-of-the-art software and tools for each step of the proteomics data analysis pipeline are presented. The thesis concludes with a discussion of future work and the presentation of two original research works. The first research work focuses on adapting Triqler, a probabilistic graphical model for protein quantification developed for data-dependent acquisition (DDA) data, to data-independent acquisition (DIA) data. Challenges in this study included verifying that DIA data conformed with the model used in Triqler, addressing benchmarking issues, and modifying Triqler's missing-value model for DIA data. The study showed that DIA data conformed with the properties required by Triqler, implemented a protein inference harmonization strategy, and adapted the missing-value model to DIA data, concluding that Triqler outperformed current protein quantification techniques. The second research work focused on developing a novel deep-learning-based MS2-intensity predictor by incorporating the transformer self-attention mechanism into Prosit, an established Recurrent Neural Network (RNN)-based deep learning framework for MS2 spectrum intensity prediction. RNNs are a type of neural network that process sequential data by carrying information forward from previous steps. The transformer self-attention mechanism instead allows a model to attend to different parts of its input sequence independently during processing, enabling it to capture dependencies and relationships between elements more effectively.
Transformers therefore remedy some of the drawbacks of RNNs; as such, we hypothesized that implementing the MS2-intensity predictor with transformers rather than RNNs would improve its performance. Hence, Prosit-transformer was developed, and the study showed that both the model training time and the similarity between the predicted and observed MS2 spectra improved. These original research works address various challenges in computational proteomics and contribute to the development of data-driven life science. / QC 2023-05-22
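The contrast drawn above between sequential RNN processing and transformer self-attention can be made concrete in a few lines of code. Below is a minimal sketch of scaled dot-product self-attention in NumPy; the dimensions and random inputs are illustrative placeholders and are not taken from Prosit-transformer.

```python
# Minimal scaled dot-product self-attention (a sketch, not Prosit-transformer).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) sequence; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Every position attends to every other position in a single step,
    # unlike an RNN, which must relay information through hidden states.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16  # e.g., 8 residues of a peptide, 16-dim embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (8, 16)
```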
433

The Effects of Self-Graphing Oral Reading Fluency in Tier 2 Response-to-Intervention

Hansen, Carolyn M. January 2014 (has links)
No description available.
434

Multi-Agent Replicator Control Methodologies for Sustainable Vibration Control of Smart Building and Bridge Structures

Gutierrez Soto, Mariantonieta 23 October 2017 (has links)
No description available.
435

台灣證券交易所發行量加權指數未納入現金股利之再投資因素對投資報酬及基金績效衡量之影響 / The Bias in Return Calculation and the Benchmark Error Problem Associated with Not Adjusting the Taiwan Stock Exchange Market Weighted Index for Cash Dividend

陳怡雯, Chen, Yi-Wen Unknown Date (has links)
The Taiwan Stock Exchange Market Weighted Index (TAIEX) is not adjusted for cash dividends. Since the TAIEX is commonly used to calculate the investment return of the Taiwan market and as the benchmark index for mutual fund performance evaluation, investment returns in Taiwan are underestimated and a benchmark error arises in the evaluation of mutual fund performance. This paper re-compiles the TAIEX by incorporating the reinvestment of cash dividends, beginning the adjustment on January 4, 1986. By October 31, 2000, the adjusted index stood at 6419.83 points versus 5544.18 for the unadjusted index, about 1.16 times the unadjusted level. Mutual fund performance rankings evaluated against the adjusted index differ little from those based on the unadjusted index, and whether a fund beats the market is also largely unaffected, because listed firms on the Taiwan Stock Exchange have paid small cash dividends in recent years and fund performance has been extreme, so the adjustment effect is not enough to overturn the evaluation of mutual fund performance. Nevertheless, for theoretical correctness, investment returns and mutual fund performance should be measured against the dividend-adjusted index to reduce the bias caused by benchmark error; otherwise, should the dividend payout behavior of listed firms or the return characteristics of funds change in the future, evaluations based on the unadjusted index could be severely biased.
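The re-compilation described above amounts to building a total-return index from a price index. The following is a hedged sketch of the computation; all figures are invented for illustration and are not TAIEX data.

```python
# Folding cash-dividend reinvestment into a price index (illustrative only).
prices = [5000.0, 5100.0, 5050.0, 5200.0]  # hypothetical index levels
dividends = [0.0, 15.0, 0.0, 20.0]         # hypothetical dividends, in index points

total_return = [prices[0]]
for t in range(1, len(prices)):
    # Assume the dividend paid in period t is reinvested in the index.
    growth = (prices[t] + dividends[t]) / prices[t - 1]
    total_return.append(total_return[-1] * growth)

# The ratio exceeds 1: ignoring reinvestment understates the return.
print(total_return[-1] / prices[-1])
```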
436

Studies of Accelerator-Driven Systems for Transmutation of Nuclear Waste / Studier av acceleratordrivna system för transmutation av kärnavfall

Dahlfors, Marcus January 2006 (has links)
Accelerator-driven systems for transmutation of nuclear waste have been suggested as a means for dealing with spent fuel components that pose a potential radiological hazard for long periods of time. While not entirely removing the need for underground waste repositories, this nuclear waste incineration technology provides a viable method for reducing both waste volumes and storage times. Potentially, the time spans could be diminished from hundreds of thousands of years to merely 1,000 years or even less. A central aspect of accelerator-driven systems design is the prediction of safety parameters and fuel economy. The simulations performed rely heavily on nuclear data, and especially on the precision of the neutron cross section representations of essential nuclides over a wide energy range, from the thermal to the fast regime. In combination with a more demanding neutron flux distribution than in ordinary light-water reactors, the expanded nuclear data energy regime makes exploring the cross section sensitivity of simulations of accelerator-driven systems a necessity. This was observed throughout the work, and a significant portion of the study is devoted to investigations of nuclear-data-related effects. The computer code package EA-MC, based on 3-D Monte Carlo techniques, is the main computational tool employed for the analyses presented. Directly related to the development of the code is the extensive IAEA ADS Benchmark 3.2, and an account of the results of the benchmark exercises as implemented with EA-MC is given. CERN's Energy Amplifier prototype is studied from the perspectives of neutron source types, nuclear data sensitivity and transmutation. The commissioning of the n_TOF experiment, a neutron cross section measurement project at CERN, is also described.
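To illustrate the kind of calculation a Monte Carlo code such as EA-MC performs at far greater fidelity, here is a toy one-group sketch of estimating a neutron multiplication factor from per-collision probabilities. All constants are invented placeholders, not evaluated nuclear data.

```python
# Toy Monte Carlo estimate of a multiplication factor (illustration only).
import random

P_FISSION = 0.30  # assumed fission probability per collision
P_CAPTURE = 0.45  # assumed capture probability per collision
NU = 2.5          # assumed mean neutrons released per fission

def neutrons_from_one_history():
    """Follow one neutron until absorption; return its expected fission yield."""
    while True:
        r = random.random()
        if r < P_FISSION:
            return NU   # fission terminates the history
        if r < P_FISSION + P_CAPTURE:
            return 0.0  # sterile capture terminates the history
        # otherwise the neutron scatters and collides again

random.seed(1)
n = 100_000
k = sum(neutrons_from_one_history() for _ in range(n)) / n
print(f"toy multiplication factor: {k:.3f}")  # ~1.0 with these numbers
```

Sensitivity to the cross sections is direct in this toy model: perturbing P_FISSION or P_CAPTURE by a few percent shifts the estimated factor correspondingly, which is why cross section precision matters so much for such simulations.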
437

Application of the Stimulus-Driven Theory of Probabilistic Dynamics to the hydrogen issue in level-2 PSA / Application de la Stimulus Driven Theory of Probabilistic Dynamics (SDTPD) au risque hydrogène dans les EPS de niveau 2

Peeters, Agnès 05 October 2007 (has links)
Level-2 Probabilistic Safety Analyses (PSA) of nuclear power plants aim to identify the possible sequences of events corresponding to the propagation of an accident from core damage to a potential loss of containment integrity, and to assess the frequency of occurrence of the different scenarios. These so-called severe accidents depend not only on hardware failures and human errors, but also on the occurrence of physical phenomena such as steam or hydrogen explosions. Handling these phenomena in the classical Boolean framework of event trees is not convenient, and dynamic methodologies for performing PSA studies are expected to provide a more consistent way of integrating the physical process evolution with the discrete changes of plant configuration along an accidental transient. This PhD thesis presents the application of one of the most recently proposed dynamic PSA methodologies, the Stimulus-Driven Theory of Probabilistic Dynamics (SDTPD), to several models of hydrogen deflagration in the containment of a plant, as well as the developments that made this application possible and the various improvements and techniques that were implemented.
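The stimulus concept can be sketched in a few lines: a process variable evolves deterministically, a stimulus is activated when it crosses a threshold, and the associated event is realized after a random delay. The model below is a heavily simplified illustration with invented parameters, not the SDTPD treatment used in the thesis.

```python
# Toy stimulus-activation model (invented parameters, illustration only).
import random

THRESHOLD = 0.08   # assumed flammability threshold (H2 volume fraction)
MEAN_DELAY = 50.0  # assumed mean ignition delay once flammable (s)
DT = 1.0           # time step (s)

def h2_concentration(t):
    # Placeholder release model: linear build-up saturating at 15 %.
    return min(0.15, 0.001 * t)

def sample_ignition_time(seed, t_max=1000.0):
    random.seed(seed)
    t, delay = 0.0, None
    while t < t_max:
        if delay is None and h2_concentration(t) >= THRESHOLD:
            delay = random.expovariate(1.0 / MEAN_DELAY)  # stimulus activated
        if delay is not None:
            delay -= DT
            if delay <= 0.0:
                return t                                  # event realized
        t += DT
    return None  # no ignition within the mission time

times = [sample_ignition_time(s) for s in range(1000)]
hits = [t for t in times if t is not None]
print(len(hits), sum(hits) / len(hits))  # frequency and mean ignition time
```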
438

Control basat en lògica difusa per sistemes de fangs activats. Disseny, implementació i validació en EDAR reals / Fuzzy-logic-based control for activated sludge systems: design, implementation and validation in real WWTPs

Fiter i Cirera, Mireia 24 March 2006 (has links)
This thesis presents two proposals of Fuzzy Logic Control Systems (FLCSs), from their design to their implementation and validation in two facilities: the Granollers and Taradell WWTPs. The first chapter explains the fundamental concepts needed to understand the development of the two FLCSs, together with a summary of scientific papers relating fuzzy logic to wastewater treatment. The objectives are established in Chapter 2, and the materials and methods are described in Chapter 3. Chapters 4 and 5 present the work to design, implement and validate the two FLCSs. Both chapters share the same structure: a presentation, a definition of the FLCS's goals, a description of the WWTP where it is implemented, the development of evaluation indices, a detailed explanation of the FLCS design and its evaluation through simulation studies, the implementation and validation of the FLCS at the WWTP and, finally, a discussion of the work carried out. The main conclusions are enumerated in Chapter 6.
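A minimal sketch of the fuzzy-control idea behind such an FLCS is given below: triangular membership functions fuzzify a dissolved-oxygen error, a small rule base maps it to an airflow correction, and a weighted average defuzzifies the result. The variable, set points and rule values are invented for illustration and are not those of the Granollers or Taradell controllers.

```python
# Minimal fuzzy controller sketch (invented parameters, illustration only).
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_aeration_correction(do_error):
    """do_error: dissolved-oxygen set-point error in mg/L (hypothetical)."""
    # Fuzzification: membership in three linguistic sets.
    low  = tri(do_error, -2.0, -1.0, 0.0)
    ok   = tri(do_error, -1.0,  0.0, 1.0)
    high = tri(do_error,  0.0,  1.0, 2.0)
    # Rule base: DO too low -> raise airflow; DO too high -> lower it.
    actions = {+0.5: low, 0.0: ok, -0.5: high}
    # Defuzzification: weighted average of singleton actions.
    total = sum(actions.values()) or 1.0
    return sum(a * w for a, w in actions.items()) / total

print(fuzzy_aeration_correction(-0.6))  # DO below set point -> 0.3 (raise airflow)
```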
439

Superpixels and their Application for Visual Place Recognition in Changing Environments

Neubert, Peer 03 December 2015 (has links) (PDF)
Superpixels are the result of an image oversegmentation. They are an established intermediate-level image representation used for various applications, including object detection, 3D reconstruction and semantic segmentation. While there are various approaches to creating such segmentations, there is a lack of knowledge about their properties; in particular, contradictory results have been published in the literature. This thesis identifies segmentation quality, stability, compactness and runtime as important properties of superpixel segmentation algorithms. While established evaluation methodologies exist for some of these properties, this is not the case for segmentation stability and compactness. This thesis therefore presents two novel metrics for their evaluation based on ground-truth optical flow. These two metrics are used together with other novel and existing measures to create a standardized benchmark for superpixel algorithms, which is used for an extensive comparison of available algorithms. The evaluation results motivate two novel segmentation algorithms that better balance the trade-offs of existing algorithms: the proposed Preemptive SLIC algorithm incorporates a local preemption criterion into the established SLIC algorithm and saves about 80% of the runtime; the proposed Compact Watershed algorithm combines seeded watershed segmentation with compactness constraints to create regularly shaped, compact superpixels at the even higher speed of the plain watershed transformation. Operating autonomous systems based on visual navigation over the course of days, weeks or months requires repeated recognition of places despite severe appearance changes, as induced, for example, by illumination changes, day-night cycles, changing weather or seasons - a severe problem for existing methods. The second part of this thesis therefore presents two novel approaches that incorporate superpixel segmentations into place recognition in changing environments. The first is the learning of systematic appearance changes: instead of matching images between, for example, summer and winter directly, an additional prediction step is proposed. Based on superpixel vocabularies, a predicted image is generated that shows how the summer scene might look in winter, or vice versa. The presented results show that, if certain assumptions on the appearance changes and the available training data are met, existing holistic place recognition approaches can benefit from this additional prediction step. Holistic approaches to place recognition are known to fail in the presence of viewpoint changes. This thesis therefore presents a new place recognition system based on local landmarks and Star-Hough. Star-Hough is a novel approach to incorporating the spatial arrangement of local image features in the computation of image similarities; it is based on star graph models and Hough voting, and is particularly suited to local features with low spatial precision and high outlier rates, as are expected in the presence of appearance changes. The novel landmarks are a combination of local region detectors and descriptors based on convolutional neural networks. This thesis presents and evaluates several new approaches to incorporating superpixel segmentations in local region detection. While the proposed system can be used with different types of local regions, the combination with regions obtained from the novel multiscale superpixel grid in particular is shown to outperform state-of-the-art methods - a promising basis for practical applications.
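For readers unfamiliar with superpixels, the following is a short sketch of generating a SLIC segmentation, the algorithm that Preemptive SLIC extends, using scikit-image. The image path and parameter values are placeholders for illustration.

```python
# SLIC superpixels with scikit-image (path and parameters are placeholders).
from skimage import io, segmentation

image = io.imread("scene.jpg")  # hypothetical input image
labels = segmentation.slic(
    image,
    n_segments=400,    # target number of superpixels
    compactness=10.0,  # trade-off between color similarity and shape regularity
    start_label=1,
)
# Draw the superpixel boundaries on the image and save the result.
outlined = segmentation.mark_boundaries(image, labels)  # float image in [0, 1]
io.imsave("superpixels.png", (outlined * 255).astype("uint8"))
```

Preemptive SLIC, described above, inserts a local preemption criterion into this algorithm's iterative refinement to save about 80% of the runtime.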
