121

Caractérisation des réservoirs basée sur des textures des images scanners de carottes / Reservoir characterization based on textures of core scanner images

Jouini, Mohamed Soufiane 04 February 2009 (has links)
Cores extracted during the drilling of oil wells are among the most important elements in the reservoir characterization chain. Acquiring them with a medical CT scanner provides high-resolution images that allow variations in deposit types to be studied at a finer scale. The main goal of this thesis is to establish links between 3D scanner images of cores and the various petrophysical and geological properties. To this end, the image-modelling stage, and in particular texture modelling, is crucial and must supply extracted descriptors with a sufficiently high degree of confidence. One approach explored for finding such descriptors was the study of parametric methods that allow the texture analysis to be validated through a synthesis process. Although this does not prove a bijective link between textures and parameters, it does at least guarantee some confidence in these descriptors. This thesis presents methods and algorithms developed to achieve the following goals: 1. Segment the main representative texture zones on the cores; this is done automatically through classification and learning based on the extracted texture parameters. 2. Establish the links between scanner images and the petrophysical properties of the rock; this is done by predicting petrophysical properties through texture learning and calibration against real data (a supervised learning process).
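The thesis itself relies on parametric texture models validated by synthesis; as a loose, hypothetical illustration of the texture-classification step, the Python sketch below summarizes 2D core-image patches with grey-level co-occurrence (GLCM) statistics and feeds them to a standard classifier. The GLCM features, patch handling, and classifier are stand-ins, not the author's actual descriptors.

```python
# Hypothetical sketch: texture-based zoning of core CT slices.
# GLCM statistics stand in for the thesis's parametric texture descriptors.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def texture_features(patch):
    """Summarize one 8-bit grayscale patch with a few GLCM statistics."""
    glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, prop).ravel()
                      for prop in ("contrast", "homogeneity", "energy", "correlation")])

def train_zoning_classifier(patches, labels):
    """patches: 2D uint8 arrays cut from scanner slices;
    labels: facies / homogeneity-zone labels from core descriptions."""
    X = np.array([texture_features(p) for p in patches])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```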
122

Využití nových výukových metod a výchovných strategií na podporu čtení a čtenářské gramotnosti na 1. stupni ZŠ / Using new teaching methods and educational strategies to support reading and reading literacy at the primary school.

Kapounová, Eva January 2013 (has links)
This thesis deals with the use of innovative teaching methods and instructional strategies to support and develop literacy in the first grade of primary school. The theoretical part describes the changes in education related to the curricular reform, as well as the shift in the objectives and content of education toward the formation and development of key competencies that prepare students for real life. The paper introduces the contemporary conception of Czech language and literature teaching and its place in the National Curriculum for Basic Education. The theoretical part also focuses on the concept of literacy and the criteria that define reading literacy, and it discusses methods of critical thinking that contribute to the development of reading skills. It further describes the course "Promoting literacy", which the author of the thesis completed in 2011. The research section presents action research in which the author verifies in practice the theoretical knowledge of the new specific methods and instructional strategies for promoting literacy. The research validates the effectiveness of the innovative methods in terms of the development of key competencies for learning and in terms of reading literacy criteria. It also examines whether these methods contribute to the education of students and whether they help to meet the objectives of cognitive...
123

Tsunami inundation : estimating damage and predicting flow properties

Wiebe, Dane Michael 22 March 2013 (has links)
The 2004 Indian Ocean and 2011 Tohoku tsunami events have shown the destructive power of tsunami inundation on the constructed environment, in addition to the tragic loss of life. A comparable event is expected for the Cascadia Subduction Zone (CSZ), which will impact the west coast of North America. Research efforts have focused on understanding and predicting the hazard to mitigate potential impacts. This thesis presents two manuscripts pertaining to estimating infrastructure damage and determining design loads from tsunami inundation. The first manuscript estimates damage to buildings and economic loss for Seaside, Oregon, for CSZ events ranging from 3 to 25 m of slip along the entire fault. The analysis provides a community-scale estimate of the hazard, with calculations performed at the parcel level. Hydrodynamic results are obtained from the numerical model MOST, and damage estimates are based on fragility curves from the recent literature. Seaside is located on low-lying coastal land, which makes it particularly sensitive to the magnitude of the events. For the range of events modeled, the percentage of buildings within the inundation zone ranges from 9 to 88%, with average economic losses ranging from $2 million to $1.2 billion. The second manuscript introduces a new tsunami inundation model based on the concept of an energy grade line, which estimates the maximum flow depth, velocity, and momentum flux between the shoreline and the extent of inundation along a 1D transect. Empirical relations derived from the numerical model FUNWAVE were used to tune the model. For simple bi-linear beaches, the average errors of the tuned model in flow depth, velocity, and momentum flux were 10, 23, and 10%, respectively; for complex bathymetry at Rockaway Beach, Oregon, without recalibration, the errors were 14, 44, and 14%, respectively. / Graduation date: 2013
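As a rough, hypothetical sketch of the energy-grade-line idea in the second manuscript (not the calibrated model itself), the snippet below marches landward along a 1D transect, decaying a total energy head from its shoreline value and splitting the head above ground into depth and velocity under an assumed constant Froude number. The friction slope and Froude number are illustrative placeholders, not the empirically tuned relations derived from FUNWAVE.

```python
# Hypothetical energy-grade-line sketch for a 1D tsunami transect.
# The friction slope and Froude number are illustrative constants.
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def energy_grade_line(x, z, eta_shore, froude=1.0, friction_slope=0.01):
    """x: landward distances (m); z: ground elevation (m);
    eta_shore: water-surface elevation at the shoreline (m)."""
    head = np.empty(len(z), dtype=float)
    head[0] = eta_shore
    for i in range(1, len(x)):
        # Total head decays landward at the assumed friction slope.
        head[i] = head[i - 1] - friction_slope * (x[i] - x[i - 1])
    # Split head above ground into depth + velocity head: H - z = h (1 + Fr^2 / 2)
    h = np.maximum(head - z, 0.0) / (1.0 + froude**2 / 2.0)
    u = froude * np.sqrt(G * h)   # flow velocity from the Froude assumption
    momentum_flux = h * u**2      # = Fr^2 * g * h^2, per unit width
    return h, u, momentum_flux
```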
124

Diffusion de l’information dans les médias sociaux : modélisation et analyse / Information diffusion in social media : modeling and analysis

Guille, Adrien 25 November 2014 (has links)
Social media have greatly changed the way we produce, diffuse and consume information, and have become powerful information vectors. The goal of this thesis is to help in understanding the information diffusion phenomenon in social media by providing means of modeling and analysis. First, we propose MABED (Mention-Anomaly-Based Event Detection), a statistical method for automatically detecting the events that most interest social media users from the stream of messages they publish. In contrast with existing methods, it does not focus solely on the textual content of messages but also leverages the frequency of the social interactions that occur between users. MABED also differs from the literature in that it dynamically estimates the period of time during which each event is discussed, rather than assuming a predefined fixed duration for all events. Second, we propose T-BASIC (Time-Based ASynchronous Independent Cascades), a probabilistic model based on the network structure underlying social media for predicting information diffusion, more specifically the evolution over time of the number of users who relay a given piece of information. In contrast with similar models that are also based on the network structure, the probability that a piece of information propagates from one user to another is not fixed but depends on time. We also describe a procedure for inferring the latent parameters of the model, whose originality is to formulate the parameters as functions of observable characteristics of social media users. Third, we propose SONDY (SOcial Network DYnamics), a free and extensible software package that implements state-of-the-art methods for mining data generated by social media, i.e. the messages published by users and the structure of the social network that interconnects them. As opposed to existing academic tools that focus either on analyzing messages or on analyzing the network, SONDY permits the joint analysis of these two types of data through the analysis of influence with respect to each detected event. Experiments conducted on data collected on the social media platform Twitter demonstrate the relevance of our proposals and shed light on properties that give us a better understanding of the mechanisms underlying information diffusion. First, by comparing the performance of MABED against that of recent methods from the literature, we show that taking the frequency of social interactions between users into account leads to more accurate event detection and improved robustness in the presence of noisy content. We also show that MABED helps with the interpretation of detected events by providing clearer textual descriptions and more precise temporal descriptions. Second, we demonstrate the relevance of the procedure we propose for estimating the pairwise diffusion probabilities on which T-BASIC relies, by illustrating the predictive power of the selected user characteristics and by comparing the performance of the proposed estimation method against that of state-of-the-art methods. We show the importance of having non-constant diffusion probabilities, which allows T-BASIC to incorporate the variation of users' level of receptivity through time. We also study how, and in what proportion, the social, topical and temporal characteristics of users impact information diffusion. Third, we illustrate with various scenarios the usefulness of SONDY, both for non-experts, thanks to its advanced user interface and adapted visualizations, and for researchers, thanks to its application programming interface.
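As an illustrative sketch (not the thesis's actual parameterization or inference procedure), the snippet below simulates an independent-cascade-style diffusion in which the probability that user u infects follower v decays with the time elapsed since u relayed the information, echoing T-BASIC's non-constant diffusion probabilities. The graph, base probabilities, and decay rate are all hypothetical.

```python
# Hypothetical sketch of a cascade with time-dependent diffusion probabilities,
# in the spirit of T-BASIC (not its actual parameterization or inference).
import math
import random

def simulate_cascade(followers, base_prob, seeds, horizon=20, decay=0.3, seed=0):
    """followers[u] -> users following u; base_prob[(u, v)] -> baseline
    probability that v relays after u. The effective probability decays
    exponentially with the time elapsed since u relayed the information."""
    rng = random.Random(seed)
    activated = {s: 0 for s in seeds}      # user -> activation time step
    volume = [len(seeds)]                  # newly activated users per step
    for t in range(1, horizon + 1):
        newly = set()
        for u, t_u in activated.items():
            for v in followers.get(u, ()):
                if v in activated or v in newly:
                    continue
                p = base_prob.get((u, v), 0.0) * math.exp(-decay * (t - t_u))
                if rng.random() < p:
                    newly.add(v)
        for v in newly:
            activated[v] = t
        volume.append(len(newly))
    return volume  # evolution of the relay volume through time
```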
125

Strategie předvídání a její místo ve výuce literární výchovy na 1.stupni ZŠ / Strategy of predicting and its role in teaching literature in primary schools

Jamburová, Tereza January 2016 (has links)
The thesis deals with predicting, one of the basic reading strategies applied throughout reading lessons in primary school. The major objective of this study is to identify the main principles of successful application of this strategy in the field, in terms of all three aspects involved: the teacher, the student and the text. The thesis consists of a theoretical and an empirical part. The theoretical part defines the key principles of the mental process by which a reader makes predictions; it lays out the theoretical framework and analyzes terms related to the issue. The practical part presents data gathered from action research, including plans of model reading lessons aimed at the strategy of predicting and my own teaching practice, which is then reflected upon. The results of this research provide support for teachers who decide to implement the strategy in their literature lessons. KEYWORDS didactics of literature, early literacy, reading literacy, reading strategies, reading comprehension, predicting, critical thinking
126

Dolovací modul systému pro dolování z dat na platformě NetBeans / Data Mining Module of a Data Mining System on NetBeans Platform

Výtvar, Jaromír January 2010 (has links)
The aim of this work is to give a basic overview of the process of obtaining knowledge from databases (data mining) and to analyze the data mining system developed at FIT BUT on the NetBeans platform in order to create a new mining module. We decided to implement a module for mining outliers and to extend the existing regression module with multiple linear regression using generalized linear models. The new methods build on the existing algorithms of Oracle Data Mining.
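As a minimal sketch of the regression extension described above, multiple linear regression expressed as a generalized linear model with a Gaussian family and identity link, here is how it can be phrased with statsmodels. The data are simulated placeholders, and this is not the FIT BUT module's own code.

```python
# Minimal sketch: multiple linear regression as a GLM with a Gaussian
# family and identity link (hypothetical data; not the module's own code).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three hypothetical predictors
y = 1.5 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=200)

X_design = sm.add_constant(X)                  # add an intercept column
model = sm.GLM(y, X_design, family=sm.families.Gaussian())
result = model.fit()
print(result.summary())                        # coefficients match OLS here
```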
127

Predikce škodlivosti aminokyselinových mutací s využitím metody MAPP / Predicting the Effect of Amino Acid Substitutions on Protein Function Using MAPP Method

Pelikán, Ondřej January 2014 (has links)
This thesis discusses the problem of predicting the effect of amino acid substitutions on protein function using the MAPP method. The method requires a multiple sequence alignment and a phylogenetic tree constructed by third-party tools. The main goal of this thesis is to find a combination of suitable tools and parameters for generating the inputs of the MAPP method, based on an analysis of one massively mutated protein. The MAPP method is then tested with the chosen combination of tools and parameters on two large independent datasets and compared with other tools for predicting the effect of mutations. In addition, a web interface for the MAPP method was created; it simplifies the use of the method, since the user does not need to install any tools or set any parameters.
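As a rough sketch of the kind of pipeline the thesis assembles, a third-party aligner and tree builder feeding MAPP, the snippet below chains the tools with subprocess. The MUSCLE and FastTree invocations follow their classic command-line usage; the MAPP flags shown are assumptions based on its documented Java interface and should be checked against the actual distribution.

```python
# Hypothetical pipeline sketch: build MAPP inputs with third-party tools.
# Tool paths and the MAPP flags are assumptions; verify against your installs.
import subprocess

def run_mapp_pipeline(fasta_in, workdir="."):
    aln = f"{workdir}/alignment.fasta"
    tree = f"{workdir}/tree.nwk"
    out = f"{workdir}/mapp_output.txt"

    # Multiple sequence alignment (MUSCLE v3 style invocation).
    subprocess.run(["muscle", "-in", fasta_in, "-out", aln], check=True)

    # Phylogenetic tree from the alignment (FastTree writes Newick to stdout).
    with open(tree, "w") as fh:
        subprocess.run(["FastTree", aln], stdout=fh, check=True)

    # MAPP itself; '-f', '-t', '-o' are assumed flags -- check the MAPP docs.
    subprocess.run(["java", "-jar", "MAPP.jar",
                    "-f", aln, "-t", tree, "-o", out], check=True)
    return out
```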
128

A Quantitative Investigation of the Relationship Between English Language Assessments and Academic Performance of Long-Term ELLs

Rios, Yesmi 01 January 2018 (has links)
Research shows academic literacy is a challenge for students classified as Long-Term English Language Learners (LTELLs). In the pseudonymous Windy Desert School District (WDSD), there are 17,365 students classified as LTELLs. Of these students, the majority are falling short of English academic literacy goals on the Assessing Comprehension and Communication in English State-to-State for English Language Learners (ACCESS for ELLs) test and 67% do not graduate from high school. This quantitative study examined the predictive relationship between ACCESS English language proficiency subscale scores in the language domains of speaking, listening, reading, and writing and course semester grades in English 9, English 10, and English 11. This longitudinal study, informed by theorists Cummins and Krashen, followed a cohort of 718 Grade 9 students for 3 years (2012-2015). Of the 718, only 161 participant data sets were valid for the final ordinal logistic regression analysis. ACCESS subscale scores in speaking, listening, reading, and writing comprised the predictor variables and English course semester grades comprised the criterion variables. Results revealed that LTELLs' ACCESS subscale scores in listening, reading, and writing were significant predictors of their English course grades whereas speaking scores were not. For each predictor variable, a 1-unit increase in the predictor decreased the likelihood of receiving a lower grade in the course. Social change can result from the WDSD using ACCESS results to create and implement effective instructional programs that develop LTELLs' proficiency in the language domains found significant in predicting their academic grades, thereby increasing their language proficiency, academic grades, and graduation rates over time.
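As a hedged sketch of the study's analytic step (not the actual WDSD data or model), the following fits an ordinal logistic regression of ordered course-grade categories on the four ACCESS subscale scores using statsmodels. All data here are simulated placeholders; only the sample size echoes the study.

```python
# Hypothetical sketch: ordinal logistic regression of ordered course grades
# on four subscale scores (simulated stand-in data, not WDSD records).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 161  # matches the study's valid sample size; the data are simulated
scores = pd.DataFrame(rng.normal(size=(n, 4)),
                      columns=["speaking", "listening", "reading", "writing"])
latent = scores[["listening", "reading", "writing"]].sum(axis=1) + rng.logistic(size=n)
grade = pd.cut(latent, bins=[-np.inf, -1, 1, np.inf],
               labels=["D_or_F", "C", "A_or_B"], ordered=True)

model = OrderedModel(grade, scores, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())  # positive coefficients lower the odds of a low grade
```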
129

Predicting Student Success in an Introductory Programming Course at an Urban Midwestern Community College with Computer Programming Experience, Self-Efficacy, and Hope

Newman, Reece Elton January 2021 (has links)
No description available.
130

Mitigating serverless cold starts through predicting computational resource demand : Predicting function invocations based on real-time user navigation

Persson, Gustav, Branth Sjöberg, William January 2023 (has links)
Serverless functions have emerged as a prominent paradigm in software deployment, providing automated resource scaling and, as a result, demand-based operational expenses. One of the most significant challenges associated with serverless functions is the cold start delay, which prevents organisations with latency-critical web applications from adopting serverless technology. Existing research on the cold start problem primarily focuses on mitigating the delay by modifying and optimising serverless platform technologies; however, these solutions have predominantly yielded modest reductions in time delay. Consequently, the purpose of this study is to establish the conditions and circumstances under which the cold start issue can be addressed through the type of approach presented here. Following a design science research methodology, a software artefact named Adaptive Serverless Invocation Predictor (ASIP) was developed to mitigate the cold start issue by monitoring web application user traffic in real time. Based on the user traffic, ASIP preemptively pre-initialises the serverless functions likely to be invoked, so as to avoid cold start occurrences. ASIP was tested against a realistic workload generated by test participants and evaluated by analysing the reduction in time delay achieved and comparing it against existing cold start mitigation strategies. The results indicate that predicting serverless function invocations based on real-time traffic analysis is a viable approach, as a tangible reduction in response time was achieved. In conclusion, the mitigation strategy assessed in this study may not provide a sufficiently significant effect relative to the required implementation effort and operational expenses; however, the study has generated valuable insights into circumstantial factors concerning cold start mitigation. It thus provides a proof of concept for a more sophisticated version of the strategy, with greater potential to deliver a significant delay reduction without requiring substantial computational resources.
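ASIP's internals are not spelled out in the abstract; as a minimal hypothetical sketch of the general idea, map a navigation event to the functions the next page is likely to invoke and ping them ahead of time so they stay warm, consider the following. The route table, threshold, and warm-up endpoint convention are all invented for illustration.

```python
# Hypothetical sketch of navigation-driven pre-warming, in the spirit of ASIP.
# Route table, threshold, and warm-up URLs are invented for illustration.
import urllib.request

# P(function f is invoked from the next page | user is currently on route r),
# e.g. estimated offline from navigation logs. Entirely hypothetical numbers.
TRANSITIONS = {
    "/cart": {"checkout-fn": 0.8, "recommend-fn": 0.35},
    "/search": {"recommend-fn": 0.6},
}
WARMUP_THRESHOLD = 0.5

def on_navigation(route):
    """Pre-invoke functions likely to be needed soon, keeping them warm."""
    for fn, prob in TRANSITIONS.get(route, {}).items():
        if prob >= WARMUP_THRESHOLD:
            # Assumes each function exposes a cheap no-op 'warmup' path.
            url = f"https://functions.example.com/{fn}/warmup"
            try:
                urllib.request.urlopen(url, timeout=2)
            except OSError:
                pass  # warming is best-effort; never block the user
```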
