101

Comparing NR Expression among Metabolic Syndrome Risk Factors

Jacobsson, Annelie January 2003
The metabolic syndrome is a cluster of metabolic risk factors, such as type II diabetes, dyslipidemia, hypertension, obesity, microalbuminuria and insulin resistance, whose prevalence has increased greatly in recent years in many parts of the world. In this thesis, decision trees were applied to the BioExpress database, which includes both clinical data about donors and gene expression data, to investigate the ability of nuclear receptors to serve as markers for the metabolic syndrome. Decision trees were created and the classification performance for each individual risk factor was then analysed. The rules generated from the risk-factor trees were compared in order to identify similarities and dissimilarities. The comparisons were performed on pairs of risk factors, on groups of three, and on all risk factors, and they resulted in the discovery of a set of genes, the most interesting of which were the Peroxisome Proliferator Activated Receptor - Alpha, the Peroxisome Proliferator Activated Receptor - Gamma and the Glucocorticoid Receptor. These genes appear in pathways associated with the metabolic syndrome and in the recent scientific literature.
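As a sketch of the approach described above, and not the thesis's actual pipeline, the following trains one decision tree per risk factor and intersects the genes that appear in the trees' rules. The receptor names, risk-factor labels and tree settings are illustrative assumptions on synthetic data.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
receptors = ["PPARA", "PPARG", "NR3C1", "ESR1", "RXRA"]  # illustrative nuclear receptors
expr = pd.DataFrame(rng.normal(size=(200, len(receptors))), columns=receptors)

# Invented labels loosely tied to expression so the trees have signal to find
clinical = pd.DataFrame({
    "diabetes": (expr["PPARG"] + rng.normal(0, 0.5, 200) > 0).astype(int),
    "hypertension": (expr["NR3C1"] + rng.normal(0, 0.5, 200) > 0).astype(int),
    "obesity": (expr["PPARA"] + expr["PPARG"] + rng.normal(0, 0.5, 200) > 0).astype(int),
})

genes_per_factor = {}
for rf in clinical.columns:
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(expr, clinical[rf])
    print(f"--- rules for {rf} ---")
    print(export_text(tree, feature_names=receptors))  # human-readable rules
    # Internal nodes carry the feature index; leaves are marked with -2
    genes_per_factor[rf] = {receptors[i] for i in tree.tree_.feature if i >= 0}

# Genes whose expression appears in the rules of every risk-factor tree
print("shared across all factors:", set.intersection(*genes_per_factor.values()))
```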
102

A knowledge based approach of toxicity prediction for drug formulation : modelling drug vehicle relationships using soft computing techniques

Mistry, Pritesh January 2015
This multidisciplinary thesis is concerned with the prediction of drug formulations for the reduction of drug toxicity. Both scientific and computational approaches are utilised to make original contributions to the field of predictive toxicology. The first part of the thesis provides a detailed scientific discussion of all aspects of drug formulation and toxicity. The discussion focuses on the principal mechanisms of drug toxicity and how drug toxicity is studied and reported in the literature. A review of the current technologies available for formulating drugs for toxicity reduction is also provided, together with examples of studies in the literature that have used these technologies to reduce drug toxicity. The thesis then gives an overview of the computational approaches currently employed in in silico predictive toxicology, focusing on the machine learning approaches used to build predictive QSAR classification models, with examples from the literature. Two methodologies have been developed as the main work of this thesis. The first uses directed bipartite graphs and Venn diagrams to visualise and extract, from large un-curated datasets, drug-vehicle relationships that show changes in patterns of toxicity; these relationships can be rapidly extracted and visualised using the methodology proposed in chapter 4. The second methodology involves mining large datasets to extract drug-vehicle toxicity data. It uses an area-under-the-curve principle to make pairwise comparisons of vehicles, which are classified according to the toxicity protection they offer, and from these comparisons predictive classification models based on random forests and decision trees are built. The results of this methodology are reported in chapter 6.
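The area-under-the-curve principle for pairwise vehicle comparison might look like the minimal sketch below; the dose grid, response curves, and the rule that a lower toxicity AUC means more protection are all assumptions for illustration, not the thesis's exact protocol.

```python
import numpy as np

def toxicity_auc(doses, responses):
    """Area under the toxicity-response curve; lower = less toxic overall."""
    return np.trapz(responses, doses)

doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])               # assumed dose grid
curve_vehicle_a = np.array([0.0, 0.1, 0.3, 0.6, 0.9])     # drug in vehicle A (invented)
curve_vehicle_b = np.array([0.0, 0.05, 0.15, 0.4, 0.7])   # drug in vehicle B (invented)

auc_a = toxicity_auc(doses, curve_vehicle_a)
auc_b = toxicity_auc(doses, curve_vehicle_b)

# Pairwise label: the vehicle with the smaller AUC is taken to offer more
# toxicity protection; such labels could then feed a tree-based classifier.
protective = "A" if auc_a < auc_b else "B"
print(f"AUC(A)={auc_a:.2f}, AUC(B)={auc_b:.2f} -> vehicle {protective} more protective")
```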
103

Exploring the elements and dynamics of transformational change

Mdletye, Mbongeni Andile 01 May 2013
D.Phil. (Leadership in Performance and Change) / The desire for organisational competitiveness, driven by factors such as the changing and increasing needs of customers, deregulation, the globalisation of the economy and of work, increasing competition, the need to control costs and increase efficiency, and the fast pace of technological advancement, has compelled organisations to embark on changes at a fast and ever-increasing rate. However, organisations are largely not succeeding in implementing and institutionalising change initiatives effectively. There is a high failure rate in the implementation of transformational change efforts, attributed to the fact that managers are not well equipped to deal with the challenges associated with implementing transformational change in organisations. As a result of this high failure rate, a number of empirical studies have investigated the reasons behind the low success rate. Unfortunately, very few of these studies have focused on the human side of transformational change; most of the research has dwelt on the technical side of change. This quantitative study was therefore conducted to identify and explore the elements and dynamics of transformational change, which can be regarded as constituting the human dimension of transformational change. Specifically, the main objective was to determine the extent to which the elements and dynamics of transformational change (that is, perceptions, reactions, experiences, personal impact, and organisational impact) relate to the status of the change process. The research adopted a two-pronged approach: a literature study followed by an empirical study. The literature study contextualised the elements and dynamics of transformational change within the Correctional Services environment and provided an overview of transformational change in the Department of Correctional Services. Based on the results of the literature study, a theoretical model hypothesising the relationships between perceptions and experiences on one side and the status of change on the other was developed and empirically tested. The empirical data were collected by means of two survey questionnaires, one for correctional officials and one for offenders, administered to 1000 correctional officials and 500 offenders. Methodologically, the study was guided by exploratory, survey, descriptive, correlational and explanatory research designs, underpinned by ontological and epistemological perspectives. All completed and returned questionnaires were captured and the responses analysed. The results showed that the DCS change was characterised by positive perceptions; positive, negative and introspective-anxious experiences; negative responses in terms of emotional reactions and resistance; negative personal impact at intrapersonal and interpersonal levels; and positive organisational impact as the key aspects of the elements and dynamics of transformational change. The discussion in this thesis revolves around these elements and dynamics. Through exploratory and confirmatory factor analyses, a three-factor measurement model encompassing perception, experience and the status of change was identified and confirmed.
Structural equation modelling found that both perceptions and experiences were predictors of the status of change.
104

Predicting High-cost Patients in General Population Using Data Mining Techniques

Izad Shenas, Seyed Abdolmotalleb January 2012
In this research, we apply data mining techniques to nationally representative expenditure data from the US to predict very high-cost patients, those in the top 5 cost percentiles, among the general population. Samples are derived from the Medical Expenditure Panel Survey's Household Component data for 2006-2008, comprising 98,175 records. After pre-processing, partitioning and balancing the data, the final MEPS dataset of 31,704 records is modeled with decision trees (C5.0 and CHAID) and neural networks. Multiple predictive models are built and their performances analyzed using various measures including classification accuracy, G-mean, and area under the ROC curve. We conclude that the CHAID tree returns the best G-mean and AUC measures for the top-performing predictive models, ranging from 76% to 85% and from 0.812 to 0.942, respectively. Among a primary set of 66 attributes, the best predictors of the top 5% high-cost population include an individual's overall health perception, history of blood cholesterol checks, history of physical/sensory/mental limitations, age, and history of colonic prevention measures. It is worth noting that we do not consider the number of visits to care providers as a predictor, since it is highly correlated with expenditure and offers no new insight into the data (i.e. it is a trivial predictor); we predict high-cost patients without knowing how many times a patient saw a doctor or was hospitalized. Consequently, the results of this study can be used by policy makers, health planners, and insurers to plan and improve the delivery of health services.
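A small sketch of the evaluation measures the abstract names (G-mean and AUC), computed for a stand-in classifier on synthetic imbalanced data; the MEPS data and the C5.0/CHAID implementations themselves are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier  # stand-in for the CHAID/C5.0 trees
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data with roughly a 95/5 class split, mimicking a top-5% target
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
pred = clf.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
sensitivity = tp / (tp + fn)   # recall on the rare "high-cost" class
specificity = tn / (tn + fp)   # recall on the majority class
g_mean = np.sqrt(sensitivity * specificity)

print(f"G-mean = {g_mean:.3f}, AUC = {roc_auc_score(y_te, proba):.3f}")
```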
105

Building credit scoring models using selected statistical methods in R

Jánoš, Andrej January 2016
Credit scoring is an important and rapidly developing discipline. The aim of this thesis is to describe the basic methods used for building and interpreting credit scoring models, with an example application of these methods to the design of such models in the statistical software R. The thesis is organized into five chapters. Chapter one explains the term credit scoring, with the main examples of its application and the motivation for studying the topic. The following chapters introduce the three methods most often used in financial practice for building credit scoring models. Chapter two discusses the most developed of these, logistic regression; the main emphasis is on the logistic regression model, which is characterized from a mathematical point of view, and various ways to assess the quality of the model are presented. The other two methods, decision trees and random forests, are covered in chapters three and four. An important part of the thesis is a detailed application of the described models to a specific data set, Default, using R. The final, fifth chapter is a practical demonstration of building credit scoring models, their diagnostics and the subsequent evaluation of their applicability in practice using R. The appendices include the R code used throughout the thesis, together with functions developed for testing the final model. The key aim of the work is to provide enough theoretical knowledge and practical skill for the reader to fully understand the models and be able to apply them in practice.
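A minimal credit-scoring sketch in the same spirit, though in Python rather than the R used by the thesis; the simulated Default-style data and its coefficients are invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
data = pd.DataFrame({
    "balance": rng.normal(800, 400, n).clip(0),
    "income": rng.normal(35000, 12000, n).clip(0),
})
# Simulate defaults from an assumed logistic relationship
logit = -6 + 0.005 * data["balance"] - 0.00002 * data["income"]
data["default"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(
    data[["balance", "income"]], data["default"], random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Gini coefficient, a standard scorecard quality measure: Gini = 2*AUC - 1
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}, Gini = {2 * auc - 1:.3f}")
```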
106

Datamining - theory and its application

Popelka, Aleš January 2012
This thesis deals with the technology called data mining. First, it describes data mining as an independent discipline, its processing methods and its most common uses. The term data mining is then explained with the help of methodologies describing all parts of the knowledge discovery in databases process -- CRISP-DM and SEMMA. The thesis then presents the main data mining methods and particular algorithms -- decision trees, neural networks and genetic algorithms. This theoretical introduction is followed by a practical application: searching for the causes of meningoencephalitis development in a certain sample of patients. Decision trees in Clementine, one of the leading data mining tools, were used for the analysis.
107

Wide Area System Islanding Detection, Classification, and State Evaluation Algorithm

Sun, Rui 12 March 2013
An islanded power system indicates a geographical and logical detachment of a portion of a power system from the major grid, often accompanied by the loss of system observability. A power system islanding contingency can be one of the most severe consequences of wide-area system failures and may result in enormous losses to both power utilities and consumers. Even relatively small and stable islanding events may greatly disturb consumers' normal operation within the island. At the same time, power consumption in the U.S. has been increasing since the 1970s with the growth of the global economy, mass manufacturing, and the rising demands of modern customers. Together with extreme weather and natural disaster factors, the century-old U.S. power grid faces severe tests from potential islanding disturbances. Since the 1980s, the invention of the synchronized phasor measurement unit (PMU) has broadened the horizon for system monitoring, control and protection; its real-time operation and reliable measurements have made many online system schemes possible. The recent revolution in computers and electronic devices enables the implementation of complex methods (such as data mining methods) requiring large databases in power system analysis. The work presented in this dissertation is primarily focused on two studies: a power system islanding contingency detection, identification, classification and state evaluation algorithm using a decision tree algorithm and a topology approach, with its application to the Dominion Virginia power system; and an optimal PMU placement strategy using a binary integer programming algorithm that takes system islanding and redundancy issues into account. / Ph. D.
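The PMU placement side of this work can be illustrated as a binary integer program: place the fewest PMUs so that every bus is observed by itself or a neighbour. The toy 7-bus topology below is invented, and the dissertation's islanding and redundancy constraints are not modelled.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Observability matrix with self-loops: A[i][j] = 1 if a PMU at bus j observes bus i
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (1, 4)]  # invented topology
n = 7
A = np.eye(n)
for i, j in edges:
    A[i, j] = A[j, i] = 1

res = milp(
    c=np.ones(n),                           # minimize the number of PMUs placed
    constraints=LinearConstraint(A, lb=1),  # every bus observed at least once
    integrality=np.ones(n),                 # each x_j is integer...
    bounds=Bounds(0, 1),                    # ...and bounded to {0, 1}
)
print("PMUs at buses:", np.flatnonzero(res.x > 0.5))
```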
108

Comparison of Heuristic and Conventional Statistical Methods in Data Mining

Bitara, Matúš January 2019
The thesis deals with the comparison of conventional and heuristic data mining methods used for binary classification. In the theoretical part, four different models are described and model classification is demonstrated on simple examples. In the practical part, the models are compared on real data; this part also covers data cleaning, outlier removal, two different transformations and dimension reduction. The last part describes the methods used for quality testing of the models, as in the sketch below.
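A hedged sketch of this kind of comparison: one conventional method (logistic regression) against two heuristic ones (a random forest and a neural network) under the same cross-validation, on synthetic stand-in data rather than the thesis's real dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    # Scaling matters for neural networks, hence the pipeline
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(max_iter=500, random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```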
109

Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining

Hussain, Shahid 10 July 2016
This thesis is devoted to the development of extensions of dynamic programming for the study of decision trees. The considered extensions allow us to perform multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study cost-versus-cost and cost-versus-uncertainty relationships for decision trees by constructing the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include the study of totally optimal decision trees (simultaneously optimal relative to a number of cost functions) for Boolean functions, improved bounds on the complexity of decision trees for the diagnosis of circuits, a study of the time-memory trade-off for corner point detection, a study of decision rules derived from decision trees, a new procedure (multi-pruning) for the construction of classifiers, and a comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.
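The bi-criteria idea can be illustrated with a small sketch: given candidate decision trees scored by two costs to minimize, keep only the Pareto-optimal points. The (cost, uncertainty) pairs below are invented, and the thesis's dynamic programming construction is not reproduced.

```python
def pareto_optimal(points):
    """Return the points not dominated in both coordinates (minimization)."""
    front = []
    for p in sorted(points):                    # sort by cost, then uncertainty
        # p is kept only if no already-kept point has uncertainty <= p's
        if all(q[1] > p[1] for q in front):
            front.append(p)
    return front

candidates = [(3, 0.40), (4, 0.25), (5, 0.25), (5, 0.10), (7, 0.12), (8, 0.05)]
print(pareto_optimal(candidates))
# -> [(3, 0.4), (4, 0.25), (5, 0.1), (8, 0.05)]
```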
110

Optimization of Algorithms Using Extensions of Dynamic Programming

AbouEisha, Hassan M. 09 April 2017
We study and answer questions related to the complexity of various important problems, such as multi-frontal solvers for the hp-adaptive finite element method, sorting, and majority. We advocate dynamic programming as a viable tool for studying optimal algorithms for these problems. The main approach used to attack these problems is to model a class of algorithms that may solve a problem using a discrete model of computation, to define cost functions on this discrete structure that reflect different complexity measures of the represented algorithms, and finally to design dynamic programming algorithms that optimize those models (algorithms) and obtain exact results on the complexity of the studied problems. The first part of the thesis presents a novel model of computation (the element partition tree) that represents a class of algorithms for multi-frontal solvers, along with cost functions reflecting complexity measures such as time and space. It then introduces dynamic programming algorithms for multi-stage and bi-criteria optimization of element partition trees, and presents results based on optimal element partition trees for famous benchmark meshes, such as meshes with point and edge singularities. New improved heuristics for those benchmark meshes were obtained based on insights from the optimal results found by our algorithms. The second part of the thesis introduces a general problem to which different problems can be reduced and shows how to model such a problem with a decision table. We describe how decision trees and decision tests for this table correspond to adaptive and non-adaptive algorithms for the original problem. We present exact bounds on the average time complexity of adaptive algorithms for the eight-element sorting problem, followed by bounds on adaptive and non-adaptive algorithms for a variant of the majority problem. Adaptive algorithms are modeled as decision trees whose depth reflects the worst-case time complexity and whose average depth indicates the average-case time complexity; non-adaptive algorithms are represented as decision tests whose size expresses the worst-case time complexity. Finally, we present a dynamic programming algorithm that finds a minimum decision test (minimum reduct) for a given decision table.
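As an illustration of the final step, the toy sketch below finds a minimum decision test (reduct) for an invented decision table by exhaustive search over attribute subsets; the thesis's dynamic programming algorithm is not reproduced.

```python
from itertools import combinations

# Rows hold attribute values; the last column is the decision (invented table)
table = [
    (0, 0, 1, "a"),
    (0, 1, 1, "b"),
    (1, 0, 0, "a"),
    (1, 1, 0, "b"),
]

def is_test(attrs):
    """True if the chosen attributes still determine the decision uniquely."""
    seen = {}
    for row in table:
        key = tuple(row[i] for i in attrs)
        if seen.setdefault(key, row[-1]) != row[-1]:
            return False  # same attribute values, conflicting decisions
    return True

n_attrs = len(table[0]) - 1
for size in range(n_attrs + 1):
    tests = [attrs for attrs in combinations(range(n_attrs), size) if is_test(attrs)]
    if tests:
        print("minimum decision tests:", tests)  # here: attribute 1 alone
        break
```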
