211

Improving predictive models of software quality using search-based metric selection and decision trees

Vivanco, Rodrigo Antonio 10 September 2010 (has links)
Predictive models are used to identify potentially problematic components that decrease product quality. Design and source code metrics are used as input features for predictive models; however, there exist a large number of structural measures that capture different aspects of coupling, cohesion, inheritance, complexity and size. An important question to answer is: which metrics should be used with a model for a particular predictive objective? Identifying a metric subset that improves the performance of the classifier may also provide insights into the structural properties that lead to problematic modules. In this work, a genetic algorithm (GA) is used as a search-based metric selection strategy. A comparative study has been carried out between GA, the Chidamber and Kemerer (CK) metrics suite, and principal component analysis (PCA) as metric selection strategies with different datasets. Program comprehension is important for programmers, and the first dataset evaluated uses source code inspections as a subjective measure of cognitive complexity. Predicting the likely location of system failures is important in order to improve a system’s reliability. The second dataset uses an objective measure of faults found in system modules in order to predict fault-prone components. The aim of this research has been to advance the current state of the art in predictive models of software quality by exploring the efficacy of a search-based approach to selecting appropriate metric subsets. Results show that GA performs well as a metric selection strategy when used with a linear discriminant analysis classifier. When predicting cognitively complex classes, GA achieved an F-value of 0.845, compared to an F-value of 0.740 using PCA and 0.750 for the CK metrics. By examining the GA-chosen metrics with a white-box predictive model (decision tree classifier), additional insights into the structural properties of a system that degrade product quality were observed. Source code metrics have been designed for human understanding and program comprehension, and predictive models for cognitive complexity perform well with source code metrics alone. Models for fault-prone modules do not perform as well when using only source code metrics and need additional non-source-code information, such as module modification history or testing history.
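The abstract above describes wrapping a genetic algorithm around a classifier to search for a good metric subset. As a rough illustration of that general idea (not the thesis's actual implementation), the sketch below evolves binary feature masks over a synthetic dataset and scores each mask by the cross-validated F1 of a linear discriminant analysis classifier; the data, GA parameters, and scoring choices are all assumptions made for the example.

```python
# Illustrative sketch only: a minimal GA wrapper for metric (feature) selection
# around an LDA classifier. Synthetic data and GA settings are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)

def fitness(mask):
    """Cross-validated F1 of LDA trained on the metrics selected by a binary mask."""
    if not mask.any():
        return 0.0
    clf = LinearDiscriminantAnalysis()
    return cross_val_score(clf, X[:, mask], y, cv=5, scoring="f1").mean()

# Genetic algorithm: binary chromosomes mark which metrics are kept.
pop = rng.integers(0, 2, size=(30, X.shape[1])).astype(bool)
for generation in range(25):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]                      # keep the fittest subsets
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05       # mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected metric indices:", np.flatnonzero(best))
```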
212

Ectopic Eruption of the Maxillary First Permanent Molar: Rate and Predictive Factors of Self-correction and Survey of Specialists' Attitudes Regarding Intervention

Dabbagh, Basma 21 November 2013 (has links)
Purpose: To retrospectively assess the incidence and predictive factors for self-correction of ectopic eruption of maxillary permanent first molars (EE) and the prevailing attitudes amongst surveyed specialists regarding intervention in cases of EE. Methods: Charts of patients diagnosed with EE were assessed for predictive clinical and radiographic factors. An online survey was sent to pediatric dentists and orthodontists. Results: The rate of self-correction was 71%. One third of self-corrections occurred after age 9. Increased amount of impaction (r(43)=0.59, p<.001) and degree of resorption (r(57)=0.41, p=.001) were positively correlated with irreversibility. Orthodontists estimated the spontaneous self-correction rate to be lower (t(1178)=19.2, p<.001) than pediatric dentists did. Conclusions: One third of self-corrections occurred after 9 years of age, and delaying treatment of EE may be a viable option when the outcome is uncertain. Reliable predictive factors of irreversibility of EE were identified. Differences exist between pediatric dentists and orthodontists regarding management of EE.
213

Scheduling quasi-min-max model predictive control

Lu, Yaohui 12 1900 (has links)
No description available.
214

Robust stability and performance for linear and nonlinear uncertain systems with structured uncertainty

Chellaboina, Vijaya-Sekhar 12 1900 (has links)
No description available.
215

Statistical Methods to Enhance Clinical Prediction with High-Dimensional Data and Ordinal Response

Leha, Andreas 25 March 2015 (has links)
Technological progress today makes it possible to examine the molecular configuration of individual cells or entire tissue samples. Such high-dimensional omics data from molecular biology, produced in large quantities, can be generated at ever lower cost and are therefore increasingly used for clinical questions as well. Personalised diagnosis, or the prediction of treatment success on the basis of such high-throughput data, is a modern application of machine learning techniques. In practice, clinical parameters, such as health status or the side effects of a therapy, are often recorded on an ordinal scale (for example good, normal, poor). It is common to treat classification problems with an ordinally scaled endpoint like general multi-class problems and thus to ignore the information contained in the ordering between the classes. However, neglecting this information can lead to reduced classification performance or even produce an unfavourable unordered classification. Classical approaches to modelling an ordinally scaled endpoint directly, such as a cumulative link model, typically cannot be applied to high-dimensional data. In this work we present hierarchical twoing (hi2), an algorithm for classifying high-dimensional data into ordinally scaled categories. hi2 exploits the power of well-understood binary classification to classify into ordinal categories as well. An open-source implementation of hi2 is available online. In a comparison study on the classification of real as well as simulated data with an ordinal endpoint, established methods designed specifically for ordered categories did not generally produce better results than state-of-the-art non-ordinal classifiers. An algorithm's ability to handle high-dimensional data dominates classification performance. We show that our algorithm hi2 consistently achieves good results and in many cases outperforms the other methods.
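As a rough sketch of the general strategy described above, reducing an ordinal classification problem to binary ones, the example below uses the simple threshold decomposition (one binary classifier per cut point) rather than the hi2 algorithm itself; the synthetic data and model choices are placeholders.

```python
# Illustrative sketch: ordinal classification via threshold decomposition into
# binary problems (Frank & Hall style), not the hi2 algorithm itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 50))                      # "high-dimensional" features
y = np.clip((X[:, 0] + rng.normal(scale=0.5, size=400)).round(), -1, 1) + 1
y = y.astype(int)                                   # ordinal classes 0 < 1 < 2

classes = np.unique(y)
binary_models = []
for k in classes[:-1]:
    # One binary classifier per threshold: does the sample exceed class k?
    clf = LogisticRegression(max_iter=1000).fit(X, (y > k).astype(int))
    binary_models.append(clf)

def predict_ordinal(Xnew):
    """Combine the threshold probabilities into (approximate) class probabilities."""
    p_greater = np.column_stack([m.predict_proba(Xnew)[:, 1] for m in binary_models])
    cum = np.hstack([np.ones((len(Xnew), 1)), p_greater, np.zeros((len(Xnew), 1))])
    probs = cum[:, :-1] - cum[:, 1:]                # P(y = k) = P(y > k-1) - P(y > k)
    return classes[np.argmax(probs, axis=1)]

print("training accuracy:", (predict_ordinal(X) == y).mean())
```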
216

Model predictive control of a Brayton cycle based power plant / Peter Kabanda Lusanga

Lusanga, Peter Kabanda January 2012 (has links)
The aim of this study is to implement model predictive control in order to optimally control the power output of a Brayton cycle based power plant. Other control strategies have been tried, but there is still a need for better performance. In real systems, a number of constraints exist, and incorporating these into the control design is no trivial task. Unlike most control strategies, model predictive control allows the designer to explicitly incorporate constraints in its formulation. The original design of the PBMR power plant is considered. It uses helium gas as the working fluid. The power output of the system can be controlled by manipulating the helium inventory of the gas cycle. A linear model of the power plant, modelled in Simulink®, is used as an evaluation platform for the control strategy. The helium inventory is manipulated by means of actuators which use values generated by the controller. The controller computes these values by minimising the cost of future outputs over a finite horizon in the presence of constraints. The dynamic response of the system is used to tune the controller. The power output performance of different controller configurations, both under perfect conditions and with disturbances, is examined, and the best configuration is used, resulting in an optimal power control system for the Brayton cycle based power plant. Results showed that the method employed can be used to implement the control strategy, and that better performance can be realised with model predictive control. / Thesis (M.Ing. (Electrical and Electronic Engineering))--North-West University, Potchefstroom Campus, 2012
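To make the receding-horizon idea concrete, here is a minimal sketch of model predictive control on a toy first-order linear plant with a bounded input; the plant model, horizon, and cost weights are hypothetical placeholders, not the PBMR/Simulink® model used in the thesis.

```python
# Minimal receding-horizon MPC sketch on a toy plant with an input constraint.
# All model parameters and weights are assumptions made for the illustration.
import numpy as np
from scipy.optimize import minimize

a, b = 0.9, 0.1           # toy discrete plant: x[k+1] = a*x[k] + b*u[k]
horizon, setpoint = 10, 1.0
u_min, u_max = -0.5, 0.5  # actuator (inventory) constraint

def cost(u_seq, x0):
    """Quadratic tracking cost over the prediction horizon plus control effort."""
    x, J = x0, 0.0
    for u in u_seq:
        x = a * x + b * u
        J += (x - setpoint) ** 2 + 0.01 * u ** 2
    return J

x = 0.0
for k in range(30):                              # receding-horizon loop
    res = minimize(cost, np.zeros(horizon), args=(x,),
                   bounds=[(u_min, u_max)] * horizon)
    u_now = res.x[0]                             # apply only the first move
    x = a * x + b * u_now                        # plant responds
    if k % 10 == 0:
        print(f"step {k:2d}: u = {u_now:+.3f}, output = {x:.3f}")
```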
218

Development of a correlation based and a decision tree based prediction algorithm for tissue to plasma partition coefficients

Yun, Yejin Esther 15 April 2013 (has links)
Physiologically based pharmacokinetic (PBPK) modeling is a tool used in drug discovery and human health risk assessment. PBPK models are mathematical representations of the anatomy, physiology and biochemistry of an organism. PBPK models, using both compound and physiologic inputs, are used to predict a drug’s pharmacokinetics in various situations. Tissue to plasma partition coefficients (Kp), a key PBPK model input, define the steady-state concentration differential between tissue and plasma and are used to predict the volume of distribution. Experimental determination of these parameters once limited the development of PBPK models; however, in silico prediction methods were introduced to overcome this issue. The developed algorithms vary in input parameters and prediction accuracy, and none are considered standard, warranting further research. Chapter 2 presents a newly developed Kp prediction algorithm that requires only readily available input parameters. Using a test dataset, this Kp prediction algorithm demonstrated good prediction accuracy and greater prediction accuracy than preexisting algorithms. Chapter 3 introduces a decision tree based Kp prediction method. In this novel approach, six previously published algorithms, including the one developed in Chapter 2, are utilized. The aim of the developed classifier is to identify the most accurate tissue-specific Kp prediction algorithm for a new drug. A dataset consisting of 122 drugs was used to train the classifier and identify the most accurate Kp prediction algorithm for a given physico-chemical space. Three versions of the tissue-specific classifier were developed, depending on the necessary inputs. The use of the classifier resulted in better prediction accuracy than the use of any single Kp prediction algorithm for all tissues, which is the current mode of use in PBPK model building. With built-in estimation equations for input parameters that are not readily available, this Kp prediction tool provides Kp predictions even when only limited input parameters are available. The two presented innovative methods will improve tissue distribution prediction accuracy, thus enhancing confidence in PBPK modeling outputs.
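As a loose illustration of the Chapter 3 idea, a classifier that recommends, per drug, which Kp algorithm to trust, the sketch below trains a decision tree on synthetic drug descriptors whose labels mark the hypothetically most accurate of three placeholder algorithms; none of the names or data correspond to the thesis's 122-drug dataset.

```python
# Illustrative sketch: a decision tree that picks, per drug, which of several
# candidate Kp prediction algorithms is likely most accurate. Descriptors,
# algorithms, and per-drug errors are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n_drugs = 122
descriptors = rng.normal(size=(n_drugs, 4))        # e.g. logP, pKa, fu, ... (assumed)
algorithms = ["algo_A", "algo_B", "algo_C"]        # hypothetical candidate algorithms

# Hypothetical per-drug errors of each algorithm; the training label is the best one.
errors = np.abs(rng.normal(size=(n_drugs, len(algorithms))) + descriptors[:, :3])
best_algorithm = np.argmin(errors, axis=1)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(descriptors, best_algorithm)

new_drug = rng.normal(size=(1, 4))                 # descriptors of a new compound
choice = algorithms[tree.predict(new_drug)[0]]
print("recommended Kp algorithm for new drug:", choice)
```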
219

Pattern-Aware Prediction for Moving Objects

Hoyoung Jeung Unknown Date (has links)
This dissertation addresses an unstudied area in the moving objects database domain: predicting the (long-term) future locations of moving objects. Moving object prediction enables a wide range of applications, such as traffic prediction, pre-detection of aircraft collisions, and reporting attractive gas prices to drivers along their routes ahead. Nevertheless, existing location prediction techniques are of limited use for such applications since they are generally capable only of short-term predictions. In the real world, many objects exhibit typical movement patterns. This pattern information can serve as important background knowledge to tackle the limitations of the existing prediction methods. We aim to lay foundations for pattern-aware prediction for moving objects, rendering more precise prediction results. Specifically, this thesis focuses on three parts. The first part of the thesis studies the problem of predicting future locations of moving objects in Euclidean space. We introduce a novel prediction approach, termed the hybrid prediction model, which utilizes not only the current motion of an object but also the object's trajectory patterns for prediction. We define, mine, and index the trajectory patterns with a novel access method for efficient query processing. We then propose two different query processing techniques depending on the given query time, i.e., for the near future and for the distant future. The second part covers the prediction problem for moving objects in network space. We formulate a network mobility model that offers a concise representation of mobility statistics extracted from massive collections of historical object trajectories. This model captures the turning patterns of objects at junctions, at the granularity of individual objects as well as globally. Based on the model, we develop three different algorithms, named the PathPredictors, for predicting the future path of a mobile user moving in a road network. The third part of the thesis extends the prediction problem from a single object to multiple objects. We introduce a convoy query that retrieves all groups of objects, i.e., convoys, from the objects' historical trajectories; each convoy consists of objects that have traveled together for some time and thus may also move together in the future. We then propose three efficient algorithms for convoy discovery, called the CuTS family, which adopt line simplification methods to reduce the size of the trajectories, permitting efficient query processing. For each part, we present comprehensive experimental results for our proposals, which show significantly improved accuracy for moving object prediction compared with state-of-the-art methods, while also facilitating efficient query processing.
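As a toy illustration of the turning-pattern idea in the second part, the sketch below builds a first-order transition model by counting segment-to-segment moves in a handful of made-up trajectories and greedily follows the most frequent turn; it is not the thesis's network mobility model or PathPredictors, and the segment IDs are invented.

```python
# Highly simplified sketch: a first-order Markov model over road segments,
# estimated by counting transitions in historical trajectories, used to guess
# the most likely next segments. Trajectories and segment IDs are toy data.
from collections import Counter, defaultdict

historical_trajectories = [
    ["s1", "s2", "s3", "s5"],
    ["s1", "s2", "s4"],
    ["s1", "s2", "s3", "s5"],
    ["s2", "s3", "s5"],
]

transitions = defaultdict(Counter)
for path in historical_trajectories:
    for here, nxt in zip(path, path[1:]):
        transitions[here][nxt] += 1      # turning counts at each segment/junction

def predict_path(start, steps=3):
    """Greedily follow the most frequent turn at each junction."""
    path = [start]
    for _ in range(steps):
        options = transitions.get(path[-1])
        if not options:
            break
        path.append(options.most_common(1)[0][0])
    return path

print(predict_path("s1"))   # e.g. ['s1', 's2', 's3', 's5']
```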
220

Negative outcomes of hospitalisation: predicting risk in older patients

Prabha Lakhan Unknown Date (has links)
Introduction: Most countries, including Australia, are experiencing ageing of their populations, with an increasing proportion of frail older persons requiring hospitalisation for acute illness. The ageing process places the older person at risk of geriatric syndromes, such as falling, dependency in performance of Activities of Daily Living and instrumental Activities of Daily Living, confusion, and bladder and bowel incontinence. New or deteriorating geriatric syndromes are a frequent occurrence among hospitalised older patients. Hospital-associated factors linked to these outcomes include complications of medical therapies, polypharmacy and excessive bed rest. Few studies have been conducted into factors predicting the risk of negative outcomes in older patients admitted to medical units of acute care teaching hospitals. If available, a screening tool with few predictive factors, able to be administered close to the time of admission, could be used to identify patients at lower and higher risk. It is imperative that such a tool be developed empirically and tested for its accuracy in identifying patients at high risk. Aims of the research: The first aim was to identify the proportion of patients aged ≥ 70 years admitted to acute care medical units who experienced a negative outcome. These outcomes included falls during hospitalisation, new pressure ulcers or a significant decline in existing pressure ulcers, a significant decline in independently performing Activities of Daily Living (ADLs), requiring increased care needs at discharge, readmission to hospital within 28 days of the index hospitalisation, bladder and bowel incontinence, and delirium. The second aim was to identify factors predicting the risk of two of these negative outcomes: requiring a higher level of care at discharge, and experiencing a decline in independently performing ADLs. Based on the predictive factors, two screening tools to identify patients at risk were developed and validated. Method: A prospective cohort study of 413 acute general medical patients, aged ≥ 70 years and consecutively admitted to an acute care metropolitan 700-bed teaching hospital, was conducted. Consenting patients expected to remain in hospital for more than 48 hours were included. Patients were excluded if they were admitted to intensive or coronary care units, admitted for terminal care only, or transferred from a general medical unit to another unit within 24 hours of admission to the ward. Trained research nurses assessed patients within 36 hours of admission and at discharge, using the interRAI Acute Care instrument to collect information on candidate predictive variables and negative outcomes. Patients were also followed daily to identify any transient negative outcomes during hospitalisation, and at 28 days following discharge to identify any readmission to hospital. The 413 cases were randomly split into a development cohort of 309 cases and a validation cohort of 104 cases. Logistic regression models were used to identify the predictive factors independently associated with two negative outcomes: requiring a higher level of care at discharge, and experiencing a decline in independently performing ADLs. Findings: At least one negative outcome was experienced by 53% of the development cohort and 63% of the validation cohort.
The most common negative outcomes experienced were delirium (27%, 23%), a significant decline in ADLs (19%, 22%), requiring a higher level of care at discharge (16%, 16%), and readmission to hospital within 28 days of discharge (17%, 28%) in the development and validation cohorts respectively. The logistic regression analysis identified four independent factors associated with requiring a higher level of care at discharge: ‘short term memory problems’ (OR 4.21, 95% CI 1.79, 9.89; p=0.001); ‘dependence in toilet use’ (OR 3.51, 95% CI 1.14, 10.84; p=0.029); ‘dependence in hygiene’ (OR 2.76, 95% CI 1.16, 6.56; p=0.021); and ‘use of community services prior to admission’ (OR 2.41, 95% CI 1.12, 5.16; p=0.024). A screening tool developed to classify patients as lower or higher risk had a sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of 77.27%, 73.66%, 36.56% and 94.29% respectively. Reasonable accuracy was evident when tested in the validation sample: sensitivity, specificity, PPV and NPV were 60%, 76.32%, 33.33% and 90.63% respectively. Predictive factors associated with a significant decline in ADLs were ‘history of falling’ (OR 2.21, 95% CI 1.12, 4.36; p=0.023), ‘no interest in things enjoyed normally’ (OR 4.30, 95% CI 1.92, 9.64; p<0.001), ‘dependence in management of finances’ (OR 3.93, 95% CI 1.63, 9.48; p=0.002) and ‘hearing problems’ (OR 2.38, 95% CI 1.05, 5.39; p=0.038). The corresponding screening tool had sensitivity, specificity, PPV and NPV of 74.55%, 69.13%, 36.6% and 92% respectively in the development cohort, and 45%, 65.79%, 25.7% and 82% respectively in the validation sample. Conclusion: The tools require further validation in larger samples in diverse settings. Future research should focus on developing a screening tool that could predict the risk of a number of negative outcomes, to enhance the provision of quality patient care.
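As a schematic illustration of how such a screening tool can be built and evaluated, the sketch below fits a logistic regression to simulated binary risk factors and reports sensitivity, specificity, PPV and NPV; the factors, effect sizes and data are invented for the example and do not reproduce the study's results.

```python
# Illustrative sketch: a logistic-regression screening model on binary risk
# factors, evaluated by sensitivity, specificity, PPV and NPV. All data and
# effect sizes are simulated placeholders, not the interRAI-derived cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
n = 400
# Hypothetical binary predictors: memory problems, toilet dependence,
# hygiene dependence, prior community services.
X = rng.integers(0, 2, size=(n, 4))
logit = -2.0 + X @ np.array([1.4, 1.3, 1.0, 0.9])           # assumed effect sizes
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # higher-care outcome

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"sensitivity {tp/(tp+fn):.2f}  specificity {tn/(tn+fp):.2f}  "
      f"PPV {tp/(tp+fp):.2f}  NPV {tn/(tn+fn):.2f}")
```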
