31

Genetic Programming and Rough Sets: A Hybrid Approach to Bankruptcy Classification

McKee, Thomas E., Lensberg, Terje 16 April 2002 (has links)
The high social costs associated with bankruptcy have spurred searches for better theoretical understanding and prediction capability. In this paper, we investigate a hybrid approach to bankruptcy prediction, using a genetic programming algorithm to construct a bankruptcy prediction model with variables from a rough sets model derived in prior research. Both studies used data from 291 US public companies for the period 1991 to 1997. The second-stage genetic programming model developed in this research is a decision model that is 80% accurate on a validation sample, compared with the original rough sets model, which was 67% accurate. Additionally, the genetic programming model reveals relationships between variables that are not apparent in either the rough sets model or prior research. These findings indicate that genetic programming coupled with rough sets theory can be an efficient and effective hybrid modeling approach, both for developing a robust bankruptcy prediction model and for offering additional theoretical insights.
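As a rough illustration of the genetic-programming stage described above, the sketch below (a toy, not the authors' implementation) evolves arithmetic expression trees over three hypothetical financial ratios and classifies a firm as bankrupt when the evolved score is positive; the data and the hidden labeling rule are fabricated for the demo.

```python
# Minimal GP sketch: feature names, data, and the labeling rule are hypothetical.
import random

random.seed(0)
FEATURES = ["cash_flow_to_debt", "current_ratio", "net_income_margin"]
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_tree(depth=3):
    """Grow a random arithmetic expression tree over the features."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(FEATURES + [round(random.uniform(-1, 1), 2)])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, row):
    """Score one firm; the classifier predicts bankruptcy when score > 0."""
    if isinstance(tree, str):
        return row[tree]
    if not isinstance(tree, tuple):
        return tree  # numeric constant leaf
    op, left, right = tree
    return OPS[op](evaluate(left, row), evaluate(right, row))

def fitness(tree, data):
    """Classification accuracy over (row, bankrupt?) pairs."""
    return sum((evaluate(tree, r) > 0) == b for r, b in data) / len(data)

def random_subtree(t):
    """Walk down to a random node and return it."""
    while isinstance(t, tuple) and random.random() < 0.5:
        t = random.choice(t[1:])
    return t

def crossover(a, b):
    """Replace a random node of a with a random subtree of b."""
    if not isinstance(a, tuple) or random.random() < 0.3:
        return random_subtree(b)
    op, left, right = a
    if random.random() < 0.5:
        return (op, crossover(left, b), right)
    return (op, left, crossover(right, b))

def make_row():
    return {"cash_flow_to_debt": random.uniform(-1, 1),
            "current_ratio": random.uniform(0, 3),
            "net_income_margin": random.uniform(-0.5, 0.5)}

# Hypothetical sample: a hidden rule GP should rediscover.
data = [(r, r["cash_flow_to_debt"] - 0.3 * r["current_ratio"] < 0)
        for r in (make_row() for _ in range(200))]

population = [random_tree() for _ in range(100)]
for generation in range(25):
    population.sort(key=lambda t: fitness(t, data), reverse=True)
    elite = population[:20]  # elitism: keep the best trees
    population = elite + [crossover(random.choice(elite), random.choice(elite))
                          for _ in range(80)]

best = max(population, key=lambda t: fitness(t, data))
print("best accuracy on the toy sample:", fitness(best, data))
```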
32

Predicting and classifying atrial fibrillation from ECG recordings using machine learning

Bogstedt, Carl January 2023 (has links)
Atrial fibrillation is one of the most common types of heart arrhythmia and can cause irregular, weak and fast atrial contractions of up to 600 beats per minute. Its prevalence increases with age, and it is associated with increased risk of ischemia, as blood clots can form due to the weak contractions. During prolonged periods of atrial fibrillation, the atria can undergo a process called atrial remodelling, which causes electrophysiological and structural changes such as increased atrial size and altered calcium ion densities. These changes themselves promote the initiation and propagation of atrial fibrillation, which makes early detection crucial. Fortunately, atrial fibrillation can be detected on an electrocardiogram. Electrocardiograms measure the electrical activity of the heart during the cardiac cycle, including the initiation of the action potential, the depolarization of the atria and ventricles, and their repolarization. On the recording, these events appear as peaks and valleys, each of which can be traced back to one of these events. This means that during atrial fibrillation, the weak, irregular and fast atrial contractions can all be detected and measured. The aim of this project was to develop a machine learning model that could predict the onset of atrial fibrillation and classify ongoing atrial fibrillation. This was achieved by training one multiclass classification model using XGBoost, and three binary classification models using ROSETTA, on electrocardiogram recordings of people with and without atrial fibrillation. XGBoost is a tree boosting system which uses tree-like structures to classify data, while ROSETTA is a rule-based classification model which creates rules in an IF-THEN format to make decisions. The recordings were labelled according to three classes: no atrial fibrillation, atrial fibrillation, or preceding atrial fibrillation. The XGBoost model had a prediction accuracy of 99.3%, outperforming the three ROSETTA models and other published atrial fibrillation classification and prediction models. The ROSETTA models had high accuracies on the learning set; however, their predictions were subpar, indicating unsuitable settings for this type of data. The results of this project indicate that the models created can accurately classify ongoing atrial fibrillation and predict its onset, serving as a tool for early detection and verification of diagnosis.
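A minimal sketch of the multiclass XGBoost setup described above, assuming the xgboost and scikit-learn packages; the ECG-derived features and labels are hypothetical stand-ins for the thesis's feature extraction, and the ROSETTA pipeline is not reproduced here:

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-recording features, e.g. RR-interval mean/std, P-wave amplitude.
X = rng.normal(size=(600, 3))
# Classes as in the abstract: 0 = no AF, 1 = ongoing AF, 2 = preceding AF,
# with some synthetic signal in the first feature so the model has something to learn.
y = np.where(X[:, 0] > 0.8, 2, np.where(X[:, 0] < -0.8, 1, 0))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(
    objective="multi:softprob",  # multiclass with probability outputs
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```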
33

Towards a rough-fuzzy perception-based computing for vision-based indoor navigation

Duan, Tong 10 July 2014 (has links)
An indoor environment is typically characterized by a complex layout in a compact space. Since mobile robots can serve as substitutes for human beings in harmful or inaccessible locations, research on autonomous indoor navigation has attracted much interest. In general, a mobile robot navigates in an indoor environment where the acquired data are limited. Furthermore, sensor measurements may contain errors in a number of situations. The complexity of indoor environments and the limitations of sensors therefore make it insufficient to merely compute with data. This thesis presents a new rough-fuzzy approach to perception-based computing for an indoor navigation algorithm. This approach to perceptual computing is designed to store, analyze and summarize existing experience in a given environment so that the machine is able to detect the current situation and respond optimally. To improve the uncertainty reasoning of fuzzy logic control, rough set theory is integrated to regulate inputs before applying fuzzy inference rules. The behaviour extraction is evaluated and adjusted through entropy-based measures and multi-scale analysis. The rough-fuzzy control algorithm aims to minimize overshoot and optimize the transient-state period during navigation. The proposed algorithm is tested through simulations and experiments using practical, common situations. The performance is evaluated with respect to desired path keeping and transient-state adaptability.
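To make the fuzzy-control step concrete, here is a minimal sketch of a fuzzy steering rule base with triangular memberships and weighted-average defuzzification; the labels, ranges and rules are illustrative, and the rough-set input regulation and entropy-based tuning of the thesis are not shown:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Input sets over heading error (degrees); labels and ranges are illustrative.
ERROR_SETS = {"left":  lambda e: tri(e, -90, -45, 0),
              "zero":  lambda e: tri(e, -45, 0, 45),
              "right": lambda e: tri(e, 0, 45, 90)}

# Rule consequents: crisp steering corrections (degrees) for each rule.
RULES = {"left": 30.0, "zero": 0.0, "right": -30.0}

def steer(error_deg):
    """Fire all rules and defuzzify by the weighted average of consequents."""
    weights = {label: mf(error_deg) for label, mf in ERROR_SETS.items()}
    total = sum(weights.values())
    if total == 0:
        return 0.0  # error outside all supports; hold course
    return sum(weights[l] * RULES[l] for l in weights) / total

for e in (-60, -10, 0, 25):
    print(f"heading error {e:+4d} deg -> correction {steer(e):+6.2f} deg")
```

The weighted average keeps the control surface smooth between rule centers, which is what lets a fuzzy controller damp overshoot compared with a bang-bang rule table.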
34

A framework of adaptive T-S type rough-fuzzy inference systems (ARFIS)

Lee, Chang Su January 2009 (has links)
[Truncated abstract] Fuzzy inference systems (FIS) are information processing systems that use a fuzzy logic mechanism to represent the human reasoning process and to make decisions based on the uncertain, imprecise environments of our daily lives. Since the introduction of fuzzy set theory, fuzzy inference systems have been widely used for system modeling and industrial plant control in a variety of practical applications, as well as for other decision-making purposes: advanced data analysis in medical research, risk management in business, stock market prediction in finance, data analysis in bioinformatics, and so on. Many approaches have been proposed to address the automatic generation of membership functions and rules, with subsequent adjustment towards more satisfactory system performance, because one of the most important factors in building a high-quality FIS is the generation of its knowledge base, which consists of membership functions, fuzzy rules, fuzzy logic operators and other components for fuzzy calculations. The design of a FIS comes either from the experience of human experts in the corresponding field of research or from input and output data observations collected from system operation. It is therefore crucial to generate a high-quality FIS from a highly reliable design scheme that best models the desired system process. Furthermore, because fuzzy systems themselves lack a learning property, most of the suggested schemes incorporate hybridization techniques towards better performance within a fuzzy system framework. ... This systematic enhancement is required to update the FIS in order to produce flexible and robust fuzzy systems for unexpected, unknown inputs from real-world environments. This thesis proposes a general framework of Adaptive T-S (Takagi-Sugeno) type Rough-Fuzzy Inference Systems (ARFIS) for a variety of practical applications, in order to resolve the problems mentioned above in the context of a rough-fuzzy hybridization scheme. Rough set theory is employed to effectively reduce the number of attributes that pertain to input variables and to obtain a minimal set of decision rules based on input and output data sets. The generated rules are examined for validity before being used as T-S type fuzzy rules. Owing to its advantages in modeling non-linear systems, the T-S type fuzzy model is chosen to perform the fuzzy inference process. A T-S type fuzzy inference system is constructed by automatic generation of membership functions and rules, by the Fuzzy C-Means (FCM) clustering algorithm and the rough set approach respectively. The generated T-S type rough-fuzzy inference system is then adjusted by the least-squares method and a conjugate gradient descent algorithm towards better performance within a fuzzy system framework. To show the viability of the proposed framework, the performance of ARFIS is compared with other existing approaches in a variety of practical applications: pattern classification, face recognition, and mobile robot navigation. The results are very satisfactory and competitive, and suggest that ARFIS is a suitable new framework for fuzzy inference systems, showing better system performance with fewer attributes and rules in each application.
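As a sketch of the T-S inference step at the core of ARFIS: a first-order Takagi-Sugeno system outputs the firing-strength-weighted average of linear rule consequents. The two-rule setup below is an illustrative assumption; the FCM/rough-set rule generation and the least-squares and conjugate-gradient tuning of the consequent parameters are omitted:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Each rule: a Gaussian antecedent (center c, width s) and a linear
# consequent y = a*x + b. Two rules over one input for readability.
RULES = [
    {"c": 0.0, "s": 1.0, "a": 1.5, "b": 0.0},   # IF x is "low"  THEN y = 1.5x
    {"c": 4.0, "s": 1.0, "a": -0.5, "b": 8.0},  # IF x is "high" THEN y = -0.5x + 8
]

def ts_infer(x):
    """Weighted average of rule consequents, weighted by firing strength."""
    w = np.array([gauss(x, r["c"], r["s"]) for r in RULES])
    y = np.array([r["a"] * x + r["b"] for r in RULES])
    return float((w * y).sum() / w.sum())

for x in (0.0, 2.0, 4.0):
    print(f"x = {x:.1f} -> y = {ts_infer(x):.3f}")
```

Because the output is linear in each rule's a and b, those parameters can be fitted globally by least squares, which is what makes the T-S form attractive for the adaptive tuning the abstract describes.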
35

Redução de valores no critério de decisão em aplicações de Rough Sets com dominância e seus impactos na qualidade da aproximação / Reduction of decision-criterion values in dominance-based Rough Sets applications and its impact on the quality of the approximation

Moreira Filho, Roberto Malheiros 30 July 2012 (has links)
Creating rules to support the decision process is the central subject of Rough Sets Theory. The study first published by Pawlak in 1982 catalyzed numerous studies on creating decision rules from information systems with multiple condition attributes and one or more decision attributes. Over time, attributes with a dominance relation, i.e. attributes measured on some scale, have received particular attention; to handle this type of data, the Dominance-based Rough Sets Approach (DRSA) was developed. The excessive rigor that the basic DRSA proposal demands for rule creation led to new proposals that consider not only deterministic rules, with 100% certainty, but also probabilistic rules, which allow a controlled degree of uncertainty. Among the several lines of DRSA research, the approach proposed here explores the possibility of improving the quality of the approximation, and consequently the quality of the generated rules, by merging some classes of the decision attribute while preserving the dominance principle, so that the guiding principles of DRSA are maintained. Depending on the researcher's needs, this class reduction can be used in conjunction with the other DRSA variants proposed to date. Two proposals for merging classes of the decision-criterion attribute are presented, discussed and critiqued in this thesis: one based on probability density functions and the other on probabilistic transformations.
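As a toy illustration (not one of the thesis's two proposals) of why merging decision classes can raise the quality of approximation in DRSA: quality is computed below as the fraction of dominance-consistent objects, which for the full criteria set coincides with the standard quality of classification. The evaluation table is hypothetical, with gain-type criteria.

```python
def dominates(x, y):
    """x dominates y: at least as good on every criterion."""
    return all(a >= b for a, b in zip(x, y))

def consistent(i, objects, classes):
    """Object i respects dominance: no better-evaluated object has a worse
    class and no worse-evaluated object has a better class."""
    return not any(
        (dominates(objects[j], objects[i]) and classes[j] < classes[i]) or
        (dominates(objects[i], objects[j]) and classes[j] > classes[i])
        for j in range(len(objects)))

def quality(objects, classes):
    """Fraction of objects outside all boundary regions."""
    ok = sum(consistent(i, objects, classes) for i in range(len(objects)))
    return ok / len(objects)

# Hypothetical evaluations on two criteria; ordered classes 1 < 2 < 3.
# (2, 3) dominates (2, 1) yet has a worse class: a dominance inconsistency.
objects = [(1, 1), (2, 1), (2, 3), (3, 2), (3, 3)]
classes = [1, 2, 1, 3, 3]

print("quality before merging:", quality(objects, classes))        # 0.6
merged = [1 if c == 2 else c for c in classes]  # merge classes 1 and 2
print("quality after merging 1 and 2:", quality(objects, merged))  # 1.0
```

Merging the two lowest classes dissolves the inconsistency, so every object becomes consistent and the quality rises from 0.6 to 1.0 at the cost of a coarser decision criterion.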
36

Modeling the Interaction Space of Biological Macromolecules: A Proteochemometric Approach : Applications for Drug Discovery and Development

Kontijevskis, Aleksejs January 2008 (has links)
Molecular interactions lie at the heart of myriad biological processes. Knowledge of molecular recognition processes, and the ability to model and predict the interactions of any biological molecule with any chemical compound, are key to a better understanding of cell functions and the discovery of more efficacious medicines. This thesis presents contributions to the development of a novel chemo-bioinformatics approach called proteochemometrics: a general method for interaction space analysis of biological macromolecules and their ligands. In this work we explore proteochemometrics-based interaction models over broad groups of protein families, evaluate their validity and scope, and compare proteochemometrics to traditional modeling approaches. Through proteochemometric analysis of large interaction data sets of multiple retroviral proteases from various viral species, we investigate complex mechanisms of drug resistance in HIV-1 and discover general physicochemical determinants of substrate cleavage efficiency and binding in retroviral proteases. We further demonstrate how global proteochemometric models can be used to design protease inhibitors with broad activity against drug-resistant viral mutants, to monitor drug resistance mechanisms in the physicochemical sense, and to predict potential HIV-1 evolution trajectories. We provide novel insights into the complexity of HIV-1 protease specificity by constructing a generalized IF-THEN rule model based on bioinformatics analysis of the largest available set of HIV-1 protease substrates and non-substrates. We discuss how proteochemometrics can be used to map the recognition sites of entire protein families in great detail and demonstrate how it can incorporate target variability into the drug discovery process. Finally, we assess the utility of the proteochemometric approach in evaluating ADMET properties of drug candidates, with a special focus on inhibition of cytochrome P450 enzymes, and investigate application of the approach in the pharmacogenomics field.
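A minimal sketch of the proteochemometric idea, assuming scikit-learn and NumPy: a single model is fitted over concatenated protein and ligand descriptors plus their cross-terms, rather than one QSAR model per target. All descriptors and affinities below are synthetic stand-ins:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_pairs = 400
P = rng.normal(size=(n_pairs, 5))   # hypothetical protein (target) descriptors
L = rng.normal(size=(n_pairs, 8))   # hypothetical ligand descriptors

# Cross-terms let the model capture protein-ligand interaction effects.
cross = np.einsum("ni,nj->nij", P, L).reshape(n_pairs, -1)
X = np.hstack([P, L, cross])

# Synthetic affinities with a genuine interaction term baked in.
y = P[:, 0] * L[:, 1] + 0.5 * L[:, 0] + rng.normal(scale=0.1, size=n_pairs)

model = Ridge(alpha=1.0).fit(X[:300], y[:300])
print("held-out R^2:", round(model.score(X[300:], y[300:]), 3))
```

The cross-term block is what distinguishes a proteochemometric model from a per-target QSAR: the same fitted coefficients generalize across targets because target variability is an explicit input.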
37

Context Sensitive Transformation of Geographic Information

Ahlqvist, Ola January 2001 (has links)
This research is concerned with theoretical and methodological aspects of geographic information transformation between different user contexts. In this dissertation I present theories and methodological approaches that enable a context sensitive use and reuse of geographic data in geographic information systems. A primary motive for the reported research is that the patrons interested in answering environmental questions have increased in number and diversified during the last 10-15 years. The interest from international, national and regional authorities, together with multinational and national corporations, embraces a range of spatial and temporal scales from global to local, and from many-year/-decade perspectives to real-time applications. These differences in spatial and temporal detail will be expressed as rather different questions towards existing data. It is expected that geographic information systems will be able to integrate a large number of diverse data to answer current and future geographic questions and support spatial decision processes. However, there are still important deficiencies in contemporary theories and methods for geographic information integration. Literature studies and preliminary experiments suggested that any transformation between different users' contexts would change either the thematic, spatial or temporal detail, and that the result would include some amount of semantic uncertainty. Consequently, the reported experiments are separated into studies of change in either spatial or thematic detail. The work concerned with thematic detail searched for approaches to represent indiscernibility between categories, and the work concerned with spatial detail studied semantic effects caused by changing spatial granularity. The findings make several contributions to the current knowledge about transforming geographic information between users' contexts. When changing the categorical resolution of a geographic dataset, it is possible to represent cases of indiscernibility using the novel methods of rough classification described in the thesis. The use of rough classification methods together with manual landscape interpretations made it possible to evaluate semantic uncertainty in geographic data. Such evaluations of spatially aggregated geographic data sets show both predictable and non-predictable effects, and these effects may vary for different environmental variables. The development of methods that integrate crisp, fuzzy and rough data enables spatial decision support systems to consider various aspects of semantic uncertainty. By explicitly representing crisp, fuzzy and rough relations between datasets, a deeper semantic meaning is given to geographic databases. The explicit representation of semantic relations is called a Geographic Concept Topology and is held as a viable tool for context transformation and full integration of geographic datasets.
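The rough classification referred to above builds on Pawlak's indiscernibility-based approximations. A minimal sketch with a hypothetical land-cover table follows; the thesis's actual methods are more elaborate:

```python
from collections import defaultdict

# Hypothetical map units described by two attributes; target category: "forest".
units = {
    "u1": ("conifer", "dense"),  "u2": ("conifer", "dense"),
    "u3": ("mixed",   "sparse"), "u4": ("mixed",   "sparse"),
    "u5": ("open",    "sparse"),
}
forest = {"u1", "u2", "u3"}

# Indiscernibility: units with identical attribute tuples are indistinguishable.
blocks = defaultdict(set)
for unit, attrs in units.items():
    blocks[attrs].add(unit)

lower = set().union(*(b for b in blocks.values() if b <= forest))
upper = set().union(*(b for b in blocks.values() if b & forest))

print("lower approximation:", sorted(lower))           # certainly forest
print("upper approximation:", sorted(upper))           # possibly forest
print("boundary (indiscernible cases):", sorted(upper - lower))
```

Here u3 and u4 share the same description but only u3 is forest, so both land in the boundary: exactly the kind of category indiscernibility a rough classification makes explicit instead of forcing a crisp choice.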
38

Implementation av ett kunskapsbas system för rough set theory med kvantitativa mätningar / Implementation of a Rough Knowledge Base System Supporting Quantitative Measures

Andersson, Robin January 2004 (has links)
This thesis presents the implementation of a knowledge base system for rough sets [Paw92] within the logic programming framework. The combination of rough set theory with logic programming is a novel approach. The presented implementation serves as a prototype system for the ideas presented in [VDM03a, VDM03b]. The system is available at "http://www.ida.liu.se/rkbs". The presented language for describing knowledge in the rough knowledge base caters for implicit definition of rough sets by combining different regions (e.g. upper approximation, lower approximation, boundary) of other defined rough sets. The rough knowledge base system also provides methods for querying the knowledge base and for computing quantitative measures. We test the implemented system on a medium-sized application example to illustrate the usefulness of the system and the incorporated language, and we also provide performance measurements of the system.
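As a sketch of the region combination and quantitative measures the abstract mentions, written in Python rather than the system's logic-programming language; the rough sets and names below are hypothetical:

```python
def regions(lower, upper):
    """Lower approximation, boundary, and upper approximation as sets."""
    return {"lower": lower, "boundary": upper - lower, "upper": upper}

def accuracy(lower, upper):
    """Pawlak's accuracy measure: |lower| / |upper| (1.0 means crisp)."""
    return len(lower) / len(upper) if upper else 1.0

# Hypothetical rough set of "walkable" cells, given as (lower, upper).
lower = {"c1", "c2"}
upper = {"c1", "c2", "c3", "c4"}

r = regions(lower, upper)
print({name: sorted(s) for name, s in r.items()})
print("accuracy:", accuracy(lower, upper))            # 0.5

# Implicit definition by combining regions of other rough sets, as in the
# abstract: e.g. the boundary of one set intersected with the lower
# approximation of another (hypothetical second set).
other_lower = {"c3", "c5"}
print("boundary(walkable) & lower(other):", sorted(r["boundary"] & other_lower))
```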
