71.
Nimuendajú revisitado: arqueologia da antiga Guiana Brasileira / Nimuendajú revisited: archaeology of the ancient Brazilian Guiana. João Aires Ataide da Fonseca Júnior, 16 December 2008 (has links)
O presente trabalho é um esforço metodológico ao tentar aplicar um modelo arqueológico preditivo em sítios do Amapá conhecidos como Alinhamentos de Pedra. Após serem feitas as análises de documentos históricos da década de 1920 e das pesquisas realizadas na década de 1940, juntamente com os levantamentos feitos pelo Museu Goeldi em 2005, foi possível testar em campo o modelo preditivo proposto. Para a sua construção foram utilizadas também as discussões sobre os processos de formação do registro arqueológico e o teste de hipóteses já levantadas sobre estes sítios oriundas desde as primeiras pesquisas em fins do século XIX. Os resultados alcançados, apesar de incipientes, permitiram um panorama da história da arqueologia amazônica e a avaliação que o uso de tecnologias como o Sistema de Informação Geográfica (SIG) podem trazer como resultados positivos para a pesquisa arqueológica na região. / This work is a methodological effort to apply an archaeological predictive model to sites known as Stone Alignments in the state of Amapá, Brazil. After analysing historical documents from the 1920s, the research carried out in the 1940s, and the surveys conducted by the Goeldi Museum in 2005, it was possible to test the proposed predictive model in the field. Its construction also drew on discussions of site formation processes and on the testing of hypotheses about these sites advanced since the first investigations at the end of the nineteenth century. The results, although preliminary, provide an overview of the history of Amazonian archaeology and an assessment of the positive contribution that technologies such as Geographic Information Systems (GIS) can make to archaeological research in the region.
72.
Deviating time-to-onset in predictive models: detecting new adverse effects from medicines. Wärn, Caroline, January 2015 (has links)
Identifying previously unknown adverse drug reactions becomes more important as the number of drugs and the extent of their use increase. The aim of this Master’s thesis project was to evaluate the performance of a novel approach for highlighting potential adverse drug reactions, also known as signal detection. The approach was based on deviating time-to-onset patterns and was implemented as a two-sample Kolmogorov-Smirnov test for non-vaccine data in the safety report database VigiBase. The method was outperformed by both disproportionality analysis and the multivariate predictive model vigiRank. The performance estimates indicate that deviating time-to-onset patterns are not a suitable basis for signal detection in non-vaccine data in VigiBase.
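A minimal sketch of the kind of comparison described in this abstract, not the thesis's actual implementation: for one drug-event pair, the observed time-to-onset values are compared against a background distribution with a two-sample Kolmogorov-Smirnov test. The data, distributions, and significance threshold below are hypothetical.

```python
# Hypothetical illustration: flag a drug-event pair whose time-to-onset
# distribution deviates from the background, via a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Days from start of treatment to onset of the reported reaction (made up).
background_tto = rng.exponential(scale=60, size=5000)   # all other reports
pair_tto = rng.exponential(scale=7, size=120)           # reports for one drug-event pair

statistic, p_value = ks_2samp(pair_tto, background_tto)
if p_value < 0.01:
    print(f"Deviating time-to-onset pattern (D={statistic:.2f}, p={p_value:.1e})")
```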
73.
SELF-SENSING CEMENTITIOUS MATERIALS. Houk, Alexander Nicholas, 01 January 2017 (has links)
Self-sensing cementitious materials are a steadily expanding topic of study in the materials and civil engineering fields; the term refers to the creation and use of cement-based materials (including cement paste, cement mortar, and concrete) that can sense (i.e., measure) stress and strain states without embedded or attached sensors. With the inclusion of electrically conductive fillers, cementitious materials can become truly self-sensing. Previous researchers have provided only qualitative studies of the stress-electrical response of self-sensing materials. The overall goal of this research was to modify and apply previously developed predictive models to cylinder compression test data in order to provide a means of quantifying stress-strain behavior from electrical response. The Vipulanandan and Mohammed (2015) stress-resistivity model was selected and modified to predict the stress state, up to yield, of cement cylinders enhanced with nanoscale iron(III) oxide (nanoFe2O3) particles, based on three mix design parameters: nanoFe2O3 content, water-cement ratio, and curing time. With the addition of a nonlinear model, parameter values were obtained and compiled for each combination of nanoFe2O3 content and water-cement ratio for the 28-day cured cylinders. This research provides a procedure and lays the framework for future expansion of the predictive model.
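The exact form of the modified model is not given in the abstract; as an illustration only, the sketch below fits a hyperbolic ("p-q") stress-resistivity relation of the general family associated with Vipulanandan-type models, sigma = x / (p + q*x), to made-up cylinder data with nonlinear least squares. The parameterisation and values are assumptions, not the thesis's results.

```python
# Hypothetical sketch: fit a hyperbolic stress-resistivity relation of the
# general p-q form, sigma = x / (p + q*x), where x is the fractional change
# in electrical resistivity. Data and parameterisation are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def pq_model(x, p, q):
    return x / (p + q * x)

# Fractional resistivity change (unitless) and measured stress (MPa), made up.
x = np.array([0.00, 0.02, 0.05, 0.10, 0.18, 0.30, 0.45])
stress = np.array([0.0, 2.1, 4.8, 8.5, 12.9, 17.2, 20.5])

(p, q), _ = curve_fit(pq_model, x, stress, p0=[0.01, 0.02])
print(f"fitted p={p:.4f}, q={q:.4f}")
print("predicted stress at x=0.25:", pq_model(0.25, p, q))
```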
74.
Une approche pragmatique pour mesurer la qualité des applications à base de composants logiciels / A pragmatic approach to measure the quality of component-based software applications. Hamza, Salma, 19 December 2014 (has links)
Ces dernières années, de nombreuses entreprises ont introduit la technologie orientée composant dans leurs développements logiciels. Le paradigme composant, qui prône l’assemblage de briques logiciels autonomes et réutilisables, est en effet une proposition intéressante pour diminuer les coûts de développement et de maintenance tout en augmentant la qualité des applications. Dans ce paradigme, comme dans tous les autres, les architectes et les développeurs doivent pouvoir évaluer au plus tôt la qualité de ce qu’ils produisent, en particulier tout au long du processus de conception et de codage. Les métriques sur le code sont des outils indispensables pour ce faire. Elles permettent, dans une certaine mesure, de prédire la qualité « externe » d’un composant ou d’une architecture en cours de codage. Diverses propositions de métriques ont été faites dans la littérature spécifiquement pour le monde composant. Malheureusement, aucune des métriques proposées n’a fait l’objet d’une étude sérieuse quant à leur complétude, leur cohésion et surtout quant à leur aptitude à prédire la qualité externe des artefacts développés. Pire encore, l’absence de prise en charge de ces métriques par les outils d’analyse de code du marché rend impossible leur usage industriel. En l’état, la prédiction de manière quantitative et « a priori » de la qualité de leurs développements est impossible. Le risque est donc important d’une augmentation des coûts consécutive à la découverte tardive de défauts. Dans le cadre de cette thèse, je propose une réponse pragmatique à ce problème. Partant du constat qu’une grande partie des frameworks industriels reposent sur la technologie orientée objet, j’ai étudié la possibilité d’utiliser certaines des métriques de codes "classiques", non propres au monde composant, pour évaluer les applications à base de composants. Parmi les métriques existantes, j’ai identifié un sous-ensemble d’entre elles qui, en s’interprétant et en s’appliquant à certains niveaux de granularité, peuvent potentiellement donner des indications sur le respect par les développeurs et les architectes des grands principes de l’ingénierie logicielle, en particulier sur le couplage et la cohésion. Ces deux principes sont en effet à l’origine même du paradigme composant. Ce sous-ensemble devait être également susceptible de représenter toutes les facettes d’une application orientée composant : vue interne d’un composant, son interface et vue compositionnelle au travers l’architecture. Cette suite de métrique, identifiée à la main, a été ensuite appliquée sur 10 applications OSGi open- source afin de s’assurer, par une étude de leur distribution, qu’elle véhiculait effectivement pour le monde composant une information pertinente. J’ai ensuite construit des modèles prédictifs de propriétés qualité externes partant de ces métriques internes : réutilisation, défaillance, etc. J’ai décidé de construire des modèles qui permettent de prédire l’existence et la fréquence des défauts et les bugs. Pour ce faire, je me suis basée sur des données externes provenant de l’historique des modifications et des bugs d’un panel de 6 gros projets OSGi matures (avec une période de maintenance de plusieurs années). Plusieurs outils statistiques ont été mis en œuvre pour la construction des modèles, notamment l’analyse en composantes principales et la régression logistique multivariée. 
Cette étude a montré qu’il est possible de prévoir avec ces modèles 80% à 92% de composants fréquemment buggés avec des rappels allant de 89% à 98%, selon le projet évalué. Les modèles destinés à prévoir l’existence d’un défaut sont moins fiables que le premier type de modèle. Ce travail de thèse confirme ainsi l’intérêt « pratique » d’user de métriques communes et bien outillées pour mesurer au plus tôt la qualité des applications dans le monde composant. / Over the past decade, many companies have introduced component-oriented software technology into their development environments. The component paradigm, which promotes the assembly of autonomous and reusable software bricks, is indeed an attractive way to reduce development and maintenance costs while improving application quality. In this paradigm, as in all others, architects and developers need to evaluate as early as possible the quality of what they produce, in particular throughout the design and coding process. Code metrics are indispensable tools for doing so: to a certain extent, they allow the "external" quality of a component or architecture being coded to be predicted. Several metrics have been proposed in the literature specifically for the component world. Unfortunately, none of them has been the subject of a serious study of its completeness, cohesion and, above all, its ability to predict the external quality of the developed artifacts. Worse still, the lack of support for these metrics in the code analysis tools on the market makes their industrial use impossible. As things stand, quantitative, a priori prediction of the quality of component-based developments is therefore impossible, and the risk of increased costs from the late discovery of defects is high. In this thesis, I propose a pragmatic answer to this problem. Starting from the observation that most industrial frameworks rest on object-oriented technology, I studied the possibility of using certain "conventional" code metrics, not specific to the component world, to evaluate component-based applications. These metrics have the advantage of being well defined, well known, well supported by tools and, above all, of having been the subject of numerous empirical validations of their predictive power for imperative or object-oriented code. Among the existing metrics, I identified a subset which, when interpreted and applied at appropriate levels of granularity, can potentially indicate how well developers and architects respect the major principles of software engineering, in particular coupling and cohesion; these two principles are, after all, at the very origin of the component paradigm. This subset also had to be able to represent all facets of a component-oriented application: the internal view of a component, its interface, and the compositional view through the architecture. This suite of metrics, identified by hand, was then applied to 10 open-source OSGi applications in order to check, by studying their distributions, that it indeed conveys relevant information for the component world. I then built predictive models of external quality properties (reusability, defect-proneness, etc.) from these internal metrics; building such models and analysing their power is the only way to empirically validate the usefulness of the proposed metrics. It is also possible to compare the power of these models with models from the literature built for the imperative and/or object-oriented world. I chose to build models that predict the existence and the frequency of defects and bugs. To do so, I relied on external data drawn from the change and bug history of a panel of 6 large, mature OSGi projects (with maintenance periods of several years). Several statistical tools were used to build the models, including principal component analysis and multivariate logistic regression. This study showed that these models can identify 80% to 92% of the frequently buggy components, with recall ranging from 89% to 98% depending on the project evaluated. The models intended to predict the existence of a defect are less reliable than the first kind of model. This thesis thus confirms the practical value of using common, well-supported metrics to measure application quality as early as possible in the component world.
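A minimal sketch of the statistical pipeline named above (principal component analysis followed by multivariate logistic regression) for predicting frequently buggy components from internal code metrics; the metric values and labels are synthetic, not the OSGi data used in the thesis.

```python
# Illustrative sketch: predict "frequently buggy" components from internal
# code metrics with PCA + multivariate logistic regression. Synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(42)
n = 400
# Columns stand in for hypothetical metrics: coupling, cohesion, size, interface count.
X = rng.normal(size=(n, 4))
# Synthetic ground truth: higher coupling and size -> more likely buggy.
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=3), LogisticRegression())
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
```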
75.
Development and Validation of an Administrative Data Algorithm to Identify Adults who have Endoscopic Sinus Surgery for Chronic Rhinosinusitis. Macdonald, Kristian I, January 2016 (has links)
Objectives: 1) To conduct a systematic review of the accuracy of chronic rhinosinusitis (CRS) identification in administrative databases; 2) to develop an administrative data algorithm to identify CRS patients who undergo endoscopic sinus surgery (ESS).
Methods: A chart review was performed for all ESS surgical encounters at The Ottawa Hospital from 2011-12. Cases were defined as encounters in which ESS was performed for Otolaryngologist-diagnosed CRS. An algorithm to identify patients who underwent ESS for CRS was developed using diagnostic and procedural codes within health administrative data. This algorithm was internally validated.
Results: Only three studies meeting the inclusion criteria were identified in the systematic review, and they showed inaccurate CRS identification. The final algorithm, developed from administrative and chart-review data, found that encounters with at least one CRS diagnostic code and one ESS procedural code identified ESS for CRS with excellent accuracy: sensitivity 96.0%, specificity 100%, and positive predictive value 95.4%. Internal validation showed similar accuracy.
Conclusion: Most published administrative data studies examining CRS do not consider the accuracy of case identification. We identified a simple algorithm based on administrative database codes that accurately identifies ESS-CRS encounters.
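A small sketch of the case-identification rule and its validation as described above: flag an encounter when it carries at least one CRS diagnostic code and at least one ESS procedural code, then compute sensitivity, specificity, and positive predictive value against chart review. The code lists and encounter data below are placeholders, not the actual codes used in the study.

```python
# Hypothetical illustration of the algorithm and its validation metrics.
import pandas as pd

CRS_DX_CODES = {"J32.0", "J32.4", "J32.9"}      # placeholder diagnostic codes
ESS_PROC_CODES = {"1ET87", "1EZ87"}             # placeholder procedural codes

encounters = pd.DataFrame({
    "dx_codes":   [{"J32.4", "I10"}, {"I10"}, {"J32.9"}, {"J01.9"}],
    "proc_codes": [{"1ET87"},        {"1ET87"}, {"2XX00"}, {"1EZ87"}],
    "chart_review_ess_crs": [True, False, False, False],  # reference standard
})

# Rule: at least one CRS diagnostic code AND at least one ESS procedural code.
encounters["algorithm_flag"] = encounters.apply(
    lambda r: bool(r.dx_codes & CRS_DX_CODES) and bool(r.proc_codes & ESS_PROC_CODES),
    axis=1,
)

tp = ((encounters.algorithm_flag) & (encounters.chart_review_ess_crs)).sum()
fp = ((encounters.algorithm_flag) & (~encounters.chart_review_ess_crs)).sum()
fn = ((~encounters.algorithm_flag) & (encounters.chart_review_ess_crs)).sum()
tn = ((~encounters.algorithm_flag) & (~encounters.chart_review_ess_crs)).sum()

print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp))
```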
76.
A Predictive Model for Benchmarking Academic Programs (pBAP) Using U.S. News Ranking Data for Engineering Colleges Offering Graduate Programs. Chuck, Lisa Gay Marie, 01 January 2005 (has links)
Improving national ranking is an increasingly important issue for university administrators. While research has been conducted on performance measures in higher education, research designs have lacked a predictive quality. Studies on the U.S. News college rankings have provided insight into the methodology; however, none of them has provided a model to predict what change in variable values would likely cause an institution to improve its standing in the rankings. The purpose of this study was to develop a predictive model for benchmarking academic programs (pBAP) for engineering colleges. The 2005 U.S. News ranking data for graduate engineering programs were used to create a four-tier predictive model (pBAP). The pBAP model correctly classified 81.9% of the cases in their respective tier. To test the predictive accuracy of the pBAP model, the 2005 U.S. News data were entered into the pBAP variate developed using the 2004 U.S. News data. The model predicted that 88.9% of the institutions would remain in the same ranking tier in the 2005 U.S. News rankings (compared with 87.7% in the actual data), and that 11.1% of the institutions would demonstrate tier movement (compared with 12.3% in the actual data). The likelihood of improving an institution's standing in the rankings was greater when increasing the values of 3 of the 11 variables in the U.S. News model: peer assessment score, recruiter assessment score, and research expenditures.
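The abstract refers to classifying institutions into tiers with a "pBAP variate"; the sketch below shows one plausible way to build such a tier classifier (linear discriminant analysis on a few U.S. News-style indicators). The method choice, indicator values, and tier labels are assumptions for illustration, not the study's actual variate.

```python
# Illustrative sketch: classify programs into ranking tiers from a few
# U.S. News-style indicators with linear discriminant analysis (synthetic data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 200
peer = rng.uniform(1, 5, n)          # peer assessment score
recruiter = rng.uniform(1, 5, n)     # recruiter assessment score
research = rng.uniform(5, 300, n)    # research expenditures ($M)
X = np.column_stack([peer, recruiter, research])

# Synthetic tiers driven mostly by the three indicators above.
score = 0.5 * peer + 0.3 * recruiter + 0.002 * research + rng.normal(0, 0.1, n)
tiers = np.digitize(score, np.quantile(score, [0.25, 0.5, 0.75]))  # 0..3

lda = LinearDiscriminantAnalysis()
print("cross-validated tier accuracy:", cross_val_score(lda, X, tiers, cv=5).mean())
```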
77.
A SNP Microarray Analysis Pipeline Using Machine Learning Techniques. Evans, Daniel T., January 2010 (has links)
No description available.
78.
Food webs and phenology models: evaluating the efficacy of ecologically based insect pest management in different agroecosystems. Philips, Christopher Robin, 02 September 2013 (has links)
Integrated pest management (IPM) is defined as an effective and environmentally sensitive approach to pest management that relies on a combination of common-sense practices. Integrated pest management programs use current, comprehensive information on the life cycles of pests and their interactions with host plants and the environment. This information, in combination with available pest control methods, is used to manage pest populations by the most economical means and with the least possible hazard to people, property, and the environment. True IPM takes advantage of all appropriate pest management options including, as appropriate, the judicious use of pesticides. It is currently estimated that IPM in its full capacity is practiced on less than ten percent of the agricultural land in the U.S.
The primary objective of this research was to evaluate land management decisions and create new tools to promote a true IPM approach and encourage growers to reevaluate their method of insect control. To accomplish this I developed new predictive tools to reduce or eliminate unnecessary insecticide application intended to target cereal leaf beetle in wheat, and assessed a conservation biological control technique, farmscaping, to determine its true impact on lepidopteran pest suppression in collards. / Ph. D.
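Phenology models of the kind named in the title are often driven by accumulated degree-days; the sketch below is a generic degree-day accumulation, not the cereal leaf beetle model developed in this work, and its threshold and decision point are hypothetical.

```python
# Hypothetical degree-day sketch of a phenology-style predictive tool:
# accumulate heat units above a lower developmental threshold and flag the
# date a scouting/treatment decision point is reached. The 9 C threshold and
# the 300 degree-day decision point are illustrative values only.
BASE_TEMP_C = 9.0
DECISION_POINT_DD = 300.0

def daily_degree_days(t_min, t_max, base=BASE_TEMP_C):
    """Simple average-method degree-days for one day."""
    return max(((t_min + t_max) / 2.0) - base, 0.0)

# (day-of-year, min C, max C) -- made-up spring temperature records
weather = [(d, 4 + 0.15 * d, 12 + 0.20 * d) for d in range(60, 180)]

accumulated = 0.0
for day, t_min, t_max in weather:
    accumulated += daily_degree_days(t_min, t_max)
    if accumulated >= DECISION_POINT_DD:
        print(f"Decision point reached on day {day} ({accumulated:.0f} DD)")
        break
```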
79.
Enhancing fuzzy associative rule mining approaches for improving prediction accuracy: integration of fuzzy clustering, Apriori and multiple support approaches to develop an associative classification rule base. Sowan, Bilal Ibrahim, January 2011 (has links)
Building an accurate and reliable prediction model for different application domains is one of the most significant challenges in knowledge discovery and data mining. This thesis focuses on building and enhancing a generic predictive model for estimating a future value by extracting association rules (knowledge) from a quantitative database. The model is applied to several data sets obtained from different benchmark problems, and the results are evaluated through extensive experimental tests. The thesis presents an incremental development process for the prediction model with three stages. Firstly, a Knowledge Discovery (KD) model is proposed by integrating Fuzzy C-Means (FCM) with the Apriori approach to extract Fuzzy Association Rules (FARs) from a database and build a Knowledge Base (KB) for predicting a future value. The KD model has been tested with two road-traffic data sets. Secondly, the initial model is further developed by adding a diversification method that improves the reliability of the FARs and selects the best and most representative rules. The resulting Diverse Fuzzy Rule Base (DFRB) maintains high-quality and diverse FARs, offering a more reliable and generic model. The model uses FCM to transform quantitative data into fuzzy data, while a Multiple Support Apriori (MSapriori) algorithm is adapted to extract the FARs from the fuzzy data. The correlation values for these FARs are calculated, and an efficient filtering of the FARs is performed as a post-processing step. The diversity of the FARs is maintained through clustering of the FARs, based on the sharing-function technique used in multi-objective optimization. The best and most diverse FARs are retained as the DFRB and used within a Fuzzy Inference System (FIS) for prediction. The third stage of development proposes a hybrid prediction model called the Fuzzy Associative Classification Rule Mining (FACRM) model. This model integrates the improved Gustafson-Kessel (G-K) algorithm, the proposed Fuzzy Associative Classification Rules (FACR) algorithm and the proposed diversification method. The improved G-K algorithm transforms quantitative data into fuzzy data, while the FACR algorithm generates significant rules (Fuzzy Classification Association Rules (FCARs)) by employing the improved multiple support threshold, associative classification and vertical scanning format approaches. These FCARs are then filtered by calculating the correlation value and the distance between them. The advantage of the proposed FACRM model is that it builds a generalized prediction model able to deal with different application domains. The validation of the FACRM model is conducted using different benchmark data sets from the University of California, Irvine (UCI) machine learning repository and the KEEL (Knowledge Extraction based on Evolutionary Learning) repository, and the results of the proposed FACRM are compared with those of other existing prediction models. The experimental results show that the error rate and generalization performance of the proposed model are better for the majority of data sets than those of the commonly used models. A new method for feature selection, entitled Weighting Feature Selection (WFS), is also proposed. The WFS method aims to improve the performance of the FACRM model by minimizing the prediction error and reducing the number of generated rules.
The prediction results of FACRM with WFS have been compared with those of the FACRM and Stepwise Regression (SR) models for different data sets. The performance analysis and comparative study show that the proposed prediction model provides an effective approach that can be used within a decision support system.
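A small sketch of the fuzzy-rule step underlying the models described above: quantitative values are mapped to fuzzy membership degrees, and the fuzzy support and confidence of a candidate rule are computed with the minimum as the t-norm. The membership functions and the road-traffic-style values are made up for illustration and are not the thesis's actual rules or data.

```python
# Illustrative fuzzy association rule step: membership degrees, then fuzzy
# support and confidence of a candidate rule "volume High -> time Long".
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical quantitative records: (traffic volume, travel time in minutes)
volume = np.array([120, 340, 560, 610, 200, 480])
time = np.array([14, 22, 35, 38, 16, 30])

mu_volume_high = triangular(volume, 300, 600, 900)   # "volume is High"
mu_time_long = triangular(time, 20, 35, 50)          # "time is Long"

n = len(volume)
# Fuzzy support of the rule and of its antecedent (min as the t-norm).
support_rule = np.sum(np.minimum(mu_volume_high, mu_time_long)) / n
support_antecedent = np.sum(mu_volume_high) / n
confidence = support_rule / support_antecedent

print(f"support={support_rule:.2f}, confidence={confidence:.2f}")
```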
80.
Looking Outward from the Village: The Contingencies of Soil Moisture on the Prehistoric Farmed Landscape near Goodman Point Pueblo. Brown, Andrew D, 08 1900 (has links)
Ancestral Pueblo communities of the central Mesa Verde region (CMVR) became increasingly reliant on agriculture for their subsistence needs during the Basketmaker III (BMIII) through Terminal Pueblo III (TPIII) periods (AD 600–1300). Researchers have been studying the Ancestral Pueblo people for over a century, using a variety of methods to understand the relationships between climate, agriculture, population, and settlement patterns. While this research has produced a well-developed cultural history of the region, studies at a smaller scale are still needed to understand changes in farming behavior and the distribution of individual sites across the CMVR. Soil moisture is the limiting factor for crop growth in the semi-arid region of the Goodman Watershed in the CMVR. Thus, I constructed a local-scale soil moisture proxy model (SMPM) that focuses on variables relevant to soil moisture: soil particle size, soil depth, slope, and aspect. From the SMPM output, areas of very high soil moisture are assumed to represent desirable farmland locations. I describe the relationship between very high soil moisture and site locations, and then infer the relevance of that relationship to settlement patterns and how those patterns changed over time (BMIII–TPIII). The results of the model and its application help to clarify how Ancestral Pueblo people changed as local farming communities. The results of this study indicate that farmers shifted away from preferred farmland during Terminal Pueblo III, which may have been caused by other cultural factors. The general outcome of this thesis is an improved understanding of human-environment relationships on the local landscape in the CMVR.
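A minimal sketch of a soil-moisture proxy surface of the kind described above: rasters for soil texture, soil depth, slope, and aspect are rescaled to 0-1 suitability and combined with weights, and the top cells are classed as "very high". The weights, rescaling functions, and data below are assumptions, not the SMPM's actual parameters.

```python
# Hypothetical weighted-overlay sketch of a soil moisture proxy model.
import numpy as np

rng = np.random.default_rng(3)
shape = (50, 50)                               # toy raster grid

soil_fineness = rng.uniform(0, 1, shape)       # finer particles hold more water
soil_depth = rng.uniform(0, 1, shape)          # deeper soil stores more water
slope_deg = rng.uniform(0, 30, shape)          # steeper slopes shed water
aspect_deg = rng.uniform(0, 360, shape)        # north-facing aspects lose less to sun

slope_score = 1.0 - np.clip(slope_deg / 30.0, 0, 1)
aspect_score = (1.0 + np.cos(np.radians(aspect_deg))) / 2.0  # 1 at north, 0 at south

weights = {"fineness": 0.3, "depth": 0.3, "slope": 0.25, "aspect": 0.15}
proxy = (weights["fineness"] * soil_fineness
         + weights["depth"] * soil_depth
         + weights["slope"] * slope_score
         + weights["aspect"] * aspect_score)

very_high = proxy >= np.quantile(proxy, 0.90)  # top decile as "very high" moisture
print("cells classed as very high soil moisture:", int(very_high.sum()))
```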