71

Operational Knowledge Acquisition of Refuse Incinerator Using Data Mining Techniques

Lai, Po-Chuan 05 August 2005 (has links)
The physical and chemical mechanisms in a refuse incinerator are complex, and a full understanding of the system is difficult to obtain without thorough research and long-term on-site experiments. In addition, refuse incineration plants are equipped with many sensors that collect large amounts of data, and these data should be useful because operational experience is embedded within them. To cope with data volumes that may exceed available computational capacity, the Sequential Forward Floating Search (SFFS) algorithm is used to reduce data dimensionality, select relevant features, and remove redundant information. In this research, data mining techniques are applied to three critical target attributes (steam production, NOx, and SOx) to build decision tree models and extract operational experience in the form of decision rules. The models are evaluated by their prediction accuracies, and the rules extracted from the decision tree models are also of great help to on-site operation and prediction.
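As a rough illustration of the pipeline this abstract describes, the sketch below combines forward feature selection with a decision tree using scikit-learn. Note the assumptions: scikit-learn's SequentialFeatureSelector performs plain greedy forward selection rather than the floating (SFFS) variant used in the thesis, and the file name, column names, and target are hypothetical placeholders.

```python
# Sketch only: greedy forward feature selection + decision tree, standing in
# for the SFFS + decision-tree pipeline described above.
# The CSV file and column names (e.g. "steam_production") are hypothetical.
import pandas as pd
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text

df = pd.read_csv("incinerator_sensors.csv")          # hypothetical sensor log
X = df.drop(columns=["steam_production"])            # candidate sensor features
y = df["steam_production"]                           # one of the three targets

# Reduce dimensionality: keep the 10 most useful sensor channels.
selector = SequentialFeatureSelector(
    DecisionTreeRegressor(max_depth=5, random_state=0),
    n_features_to_select=10,
    direction="forward",
    cv=5,
)
selector.fit(X, y)
X_sel = X.loc[:, selector.get_support()]

# Build the decision tree on the reduced feature set and print its rules.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=list(X_sel.columns)))
```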
72

A Framework for Designing Nursing Knowledge Management System and the Application to Pediatric Nursing

Chen, Wei-jen 17 March 2007 (has links)
With advances in technology, changes in the healthcare environment, and evolving user needs, computerized support systems and expert systems can cut the costs of unnecessary procedures and achieve higher levels of efficiency and productivity. Applied to a nursing department, such systems may improve the quality of care, reduce the time nurses spend duplicating patient histories, lessen nurses' workload, and enhance their problem-solving abilities. This research focuses on the nursing department of a pediatric ward. I propose a framework for nursing knowledge management based on subjective data, objective data, assessment, and care plan (SOAP), which nursing staff use as a decision-making process. The method is to collect subjective and objective data, consult relevant clinical practice guidelines, make clinical judgments about patients' actual or potential problems, and provide applicable nursing plans and interventions. The staff review these judgments, nursing plans, and related interventions and decide whether to accept or reject them. If the staff reject any judgment, plan, or intervention, the system issues inquiry signs to consult the physician and nursing staff, and the staff then correct the inappropriate items. These clear and easy-to-follow processes help student nurses and beginning nurses cultivate their caring abilities, and the framework is intended to serve as a guide for nursing teaching and clinical patient care.
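A very rough sketch of the accept/reject review loop described above is shown below. All field names, the example record, and the inquiry behavior are hypothetical illustrations, not details taken from the thesis.

```python
# Illustrative sketch of a SOAP-structured nursing record and a review step,
# loosely mirroring the accept/reject workflow described above.
# All fields and the example values are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SoapRecord:
    subjective: str                 # what the patient or family reports
    objective: dict                 # measurements, e.g. {"temp_c": 39.1}
    assessment: str                 # system-suggested clinical judgment
    care_plan: List[str]            # suggested interventions
    accepted: bool = False
    inquiries: List[str] = field(default_factory=list)

def review(record: SoapRecord, nurse_accepts: bool) -> SoapRecord:
    """Nurse reviews the suggested judgment; a rejection raises an inquiry."""
    if nurse_accepts:
        record.accepted = True
    else:
        record.inquiries.append(
            "Judgment rejected: please consult the physician and revise the plan."
        )
    return record

rec = SoapRecord(
    subjective="Parent reports persistent cough for 3 days",
    objective={"temp_c": 39.1, "spo2_pct": 95},
    assessment="Possible respiratory infection",
    care_plan=["Monitor temperature q4h", "Encourage fluid intake"],
)
print(review(rec, nurse_accepts=False).inquiries)
```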
73

Overview Of Solutions To Prevent Liquid Loading Problems In Gas Wells

Binli, Ozmen 01 February 2010 (has links) (PDF)
Every gas well eventually ceases producing as reservoir pressure depletes. Liquid that is usually present in the reservoir can cause further problems by accumulating in the wellbore and reducing production even more. A number of well-completion options can prevent liquid loading before it becomes a problem; tubing size and perforation interval optimization are the two most common. Although completion optimization will prevent liquid accumulation in the wellbore for a certain time, eventually, as reservoir pressure decreases further, the well will start loading. Once liquid loading occurs, it is crucial to recognize the problem at an early stage and select a suitable prevention method. Various methods exist to prevent liquid loading, such as gas lift, plunger lift, pumping, and velocity string installation. This study set out to construct a decision tree for a possible expert system that determines the best solution for a particular gas well. The findings are confirmed by testing the expert system against field applications.
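In the spirit of the decision-tree expert system the abstract describes, a minimal rule-based selector might look like the sketch below. The thresholds, criteria, and method names are placeholders chosen for illustration only; they are not values or rules from the thesis.

```python
# Illustrative sketch of a rule-based selector for a deliquification method.
# Thresholds and criteria are placeholders, not values from the thesis.
def suggest_deliquification(gas_rate_mscfd: float,
                            liquid_rate_bpd: float,
                            well_depth_ft: float,
                            has_packer: bool) -> str:
    """Return a candidate lift method for a liquid-loading gas well."""
    if liquid_rate_bpd > 100:
        # High liquid volumes usually call for artificial lift by pumping.
        return "pumping"
    if gas_rate_mscfd < 200 and not has_packer:
        # Low-rate wells with no packer are common plunger-lift candidates.
        return "plunger lift"
    if well_depth_ft < 8000:
        # Shallower wells may restore critical velocity with smaller tubing.
        return "velocity string"
    return "gas lift"

print(suggest_deliquification(gas_rate_mscfd=150, liquid_rate_bpd=20,
                              well_depth_ft=9500, has_packer=False))
```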
74

Improving Data Quality: Development and Evaluation of Error Detection Methods

Lee, Nien-Chiu 25 July 2002 (has links)
High-quality data are essential to decision support in organizations. However, estimates have shown that 15-20% of the data within an organization's databases can be erroneous. Some databases contain a large number of errors, posing a serious potential problem if they are used for managerial decision-making. To improve data quality, data cleaning efforts are needed and have been initiated by many organizations. Broadly, data quality problems can be classified into three categories: incompleteness, inconsistency, and incorrectness. Among the three, data incorrectness is the major source of low-quality data. Thus, this research focuses on error detection for improving data quality. In this study, we developed a set of error detection methods based on the semantic constraint framework, including uniqueness detection, domain detection, attribute value dependency detection, attribute domain inclusion detection, and entity participation detection. Empirical evaluation showed that some of the proposed techniques (e.g., uniqueness detection) achieved low miss rates and low false alarm rates. Overall, our error detection methods together could identify around 50% of the errors introduced by subjects during the experiments.
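Two of the named checks, uniqueness detection and domain detection, are easy to illustrate with a short sketch; the table, column names, and constraints below are hypothetical examples, assuming pandas is available.

```python
# Minimal sketch of two semantic-constraint checks described above:
# uniqueness detection and domain detection. Data and rules are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "emp_id": [1, 2, 2, 4],               # key that should be unique
    "age":    [34, 151, 28, 45],          # declared domain: 16..99
    "dept":   ["IT", "HR", "HR", "R&D"],
})

# Uniqueness detection: flag duplicated key values.
dup_mask = df["emp_id"].duplicated(keep=False)
print("Uniqueness violations:\n", df[dup_mask])

# Domain detection: flag values outside the declared attribute domain.
domain_mask = ~df["age"].between(16, 99)
print("Domain violations:\n", df[domain_mask])

# An attribute value dependency check could compare observed (dept -> age
# range) pairs against declared rules in the same masking style.
```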
75

Applying Data Mining Techniques to the Prediction of Marine Smuggling Behaviors

Lee, Chang-mou 26 July 2008 (has links)
none
76

A Study on Relationship between Metropolitan Population and Airport Yearly Enplanement-Based on the Airports in the Mainland of the United States

Yu, Heng-Tsung 19 January 2009 (has links)
Nowadays, aviation technology has become far more reliable than ever, and air transportation is by far the best choice for long-distance travel. Airports serve as the nodes of the air transportation network, and their construction and development are often among the most important development plans of a country or local government. The huge cost of constructing an airport and its long life cycle (usually more than fifty years) demand a comprehensive plan in the initial stage of construction. Underestimating the airport's transportation demand may make it difficult to extend the airport in the future and affect its subsequent operations. On the other hand, overestimating demand may result in over-investment and poor operating performance. Around the world, airports are increasingly run as enterprises. Governments and airport administrators have begun to pay attention to airport operating performance and to adopt performance indicators for assessment, hoping to reduce operating costs, increase profit, and enlarge competitive advantages. Among these indicators, yearly enplanement is widely considered a key one. This research collected data on commercial airports in the mainland United States whose yearly enplanements exceed 2,500 passengers. It employs statistical methods and decision trees to analyse the relationship between metropolitan population characteristics (population, population density, population change, etc.) and the change in an airport's yearly enplanement. We also examine the relationships among the number of airports in a metropolitan area, the distance from an airport to the closest business center, the distance from the airport to the nearest other airport, and the change in the airport's yearly enplanement.
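A sketch of the kind of decision-tree analysis described above is shown below, assuming scikit-learn. The CSV file and every column name are hypothetical stand-ins for the metropolitan and airport variables the abstract lists, not the thesis's actual dataset.

```python
# Sketch: decision-tree regression of yearly enplanement change on
# hypothetical metropolitan features.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

data = pd.read_csv("airports_metro.csv")   # hypothetical combined dataset
features = ["metro_population", "population_density", "population_change_pct",
            "airports_in_metro", "dist_to_business_center_mi",
            "dist_to_nearest_airport_mi"]
X = data[features]
y = data["enplanement_change_pct"]         # yearly enplanement change

tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20, random_state=0)
scores = cross_val_score(tree, X, y, cv=5, scoring="r2")
print("Cross-validated R^2:", scores.mean())
```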
77

The application of machine learning methods in software verification and validation

Phuc, Nguyen Vinh, 1955- 04 January 2011 (has links)
Machine learning methods have been employed in data mining to discover useful, valid, and beneficial patterns for applications in domains including business, medicine, agriculture, census, and software engineering. Focusing on software engineering, this report presents an investigation of machine learning techniques that have been used to predict programming faults during the verification and validation of software. Artifacts such as program execution traces, test case coverage information, and data on execution failures are of special interest for addressing the following concerns: completeness of test suite coverage; automation of test oracles to reduce human intervention in software testing; detection of faults causing program failures; and defect prediction in software. A survey of the literature on software verification and validation also revealed a novel concept designed to improve black-box testing using Category-Partition for test specifications and test suites. The report includes two experiments, using data extracted from source code available from the website (15), that demonstrate the application of a decision tree (C4.5) and a multilayer perceptron for fault prediction, and an example that shows a potential candidate for the Category-Partition scheme. The results from several research projects show that the application of machine learning in software testing has achieved various degrees of success in helping software developers improve their test strategies for the verification and validation of software systems. / text
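The comparison the report describes can be sketched as below, assuming scikit-learn. An entropy-based decision tree stands in for C4.5 (scikit-learn does not implement C4.5 itself), and the synthetic feature matrix is only a placeholder for per-module software metrics with fault labels.

```python
# Sketch: compare a decision tree (stand-in for C4.5) and a multilayer
# perceptron for fault prediction on synthetic placeholder data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 20 module-level metrics, imbalanced faulty/clean labels.
X, y = make_classification(n_samples=500, n_features=20, weights=[0.8, 0.2],
                           random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    f1 = cross_val_score(model, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: mean F1 = {f1:.3f}")
```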
78

Multivariate real options valuation

Wang, Tianyang 08 June 2011 (has links)
This dissertation focuses on modeling and evaluating multivariate uncertainties and the dependencies among them. Managing risk and making strategic decisions under uncertainty is critically important for both individual and corporate success. We present two new methodologies, the implied binomial tree approach and the dependent decision tree approach, for modeling multivariate decision-making problems, with practical applications in real options valuation. First, we present the implied binomial tree approach, which consolidates the representation of multiple sources of uncertainty into a univariate uncertainty while capturing the impact of these uncertainties on the project's cash flows. This approach provides a nonparametric extension of existing approaches in the literature by allowing the project value to follow a generalized diffusion process in which the volatility may vary with time and with asset prices, thereby offering more modeling flexibility. It was motivated by the Implied Binomial Tree (IBT) approach widely used to value complex financial options. By constructing the implied recombining binomial tree so as to be consistent with simulated market information, we extend the finance-based IBT method to real options valuation, in which the options are contingent on the value of one or more market-related uncertainties that are not traded assets. Further, we present a general framework, based on copulas, for modeling dependent multivariate uncertainties through the use of a decision tree. The proposed dependent decision tree model allows multiple dependent uncertainties with arbitrary marginal distributions to be represented in a decision tree with a sequence of conditional probability distributions. This framework can be applied naturally in decision analysis and real options valuation, as well as in more general applications of dependent probability trees. While this approach to modeling dependencies can be based on several popular copula families, as we illustrate, we focus on the normal copula and present an efficient computational method for multivariate decision and risk analysis that can be standardized for convenient application. / text
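One building block of the copula-based framework, sampling two dependent uncertainties through a normal (Gaussian) copula, can be sketched as below using NumPy and SciPy. The marginal distributions, the correlation value, and the variable names are illustrative assumptions, not parameters from the dissertation.

```python
# Minimal sketch: draw dependent samples of two uncertainties via a normal
# (Gaussian) copula. Marginals and the dependence parameter are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.6                                   # assumed dependence parameter
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1) Correlated standard normals, mapped to uniforms (the copula itself).
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
u = stats.norm.cdf(z)

# 2) Push the uniforms through arbitrary marginal inverse CDFs.
demand = stats.lognorm(s=0.4, scale=100).ppf(u[:, 0])       # e.g. market demand
cost = stats.triang(c=0.5, loc=50, scale=40).ppf(u[:, 1])   # e.g. unit cost

print("Sample rank correlation:", stats.spearmanr(demand, cost).correlation)
```

The resulting joint samples could then be discretized into the conditional probability branches of a dependent decision tree.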
79

Applying Data Mining Techniques on Continuous Sensed Data : For daily living activity recognition

Li, Yunjie January 2014 (has links)
Nowadays, with the rapid development of the Internet of Things, the application field of wearable sensors has been continuously expanding, especially in areas such as remote electronic medical treatment and smart homes. Recognizing human daily activities from the sensed data is one of the challenges. With a variety of data mining techniques, the activities can be recognized automatically, but due to the diversity and complexity of sensor data, not every data mining technique performs well without systematic analysis and improvement. In this thesis, several data mining techniques were applied to a continuously sensed dataset with the objective of recognizing human daily activities. The work studied several data mining techniques and focuses on three of them: decision tree, naive Bayes, and neural network, which are analyzed and compared according to their classification results. The thesis also proposes some improvements to the data mining techniques for this specific dataset. The comparison of the three classification results showed that each classifier has its own limitations and advantages. The proposed idea of combining the decision tree model with the neural network model significantly increased the classification accuracy in this experiment.
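One way to combine a decision tree with a neural network is a stacking ensemble, sketched below with scikit-learn. The thesis's exact combination scheme is not detailed in the abstract, so this is an illustrative stand-in, and the synthetic data merely plays the role of windowed wearable-sensor features labelled with activities.

```python
# Sketch: stacking a decision tree and an MLP as one possible way to combine
# the two models for activity recognition. Data here is a synthetic stand-in.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder for windowed sensor features with four activity classes.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=12,
                           n_classes=4, random_state=0)

combined = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=8, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
print("Combined accuracy:", cross_val_score(combined, X, y, cv=5).mean())
```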
80

預測模型中遺失值之選填順序研究 / Research on the Acquisition Order of Missing Values in Predictive Models

施雲天 Unknown Date (has links)
Predictive models are widely used in daily life, for example in bank credit scoring, consumer behavior analysis, and disease prediction. However, whether building or using a predictive model, we encounter missing values in the training or test data, which degrades predictive performance. There are many ways to handle missing values: deletion, imputation, model-based approaches, and machine learning methods. In addition, acquiring a missing value directly at some cost is also an option. This study focuses on acquiring missing values at a cost and uses decision trees (which can accommodate missing values during construction) as the predictive model, aiming to find an acquisition strategy that achieves higher accuracy at lower cost. Following the concept and logic of the Uncertainty Score used in prior work on Error Sampling, we propose U-Sampling to determine the order of importance of different feature values. Whereas Error Sampling ranks by the importance of instances (row-based), U-Sampling ranks by the importance of feature values (column-based). We conducted two sets of experiments on eight datasets from the UCI Machine Learning Repository, with a certain proportion of missing values in the training data and in the test data, respectively, and compared U-Sampling, Random Sampling, and the Error Sampling approach from the literature in terms of accuracy and error reduction rate. The results show that when the training data contain missing values, U-Sampling performs better on more than 70% of the datasets; when the test data contain missing values, U-Sampling performs better on 87.5% of the datasets. We also studied how different missing-value ratios affect these methods, which helps determine which acquisition strategy suits which situation. With U-Sampling, the important feature values can be filled first, so that higher accuracy is obtained from fewer acquired values, thereby reducing the cost of handling missing values.
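A heavily simplified sketch of column-based acquisition ordering is given below. The actual U-Sampling score in the thesis is based on uncertainty, not on decision-tree feature importances; the importance ranking, file names, column names, and budget used here are illustrative stand-ins only.

```python
# Sketch: rank columns by how much they matter to a decision tree, then
# "acquire" missing values for the most important columns first.
# This is NOT the thesis's U-Sampling score, only an illustrative stand-in.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

train = pd.read_csv("train.csv")            # hypothetical complete training set
test = pd.read_csv("test.csv")              # hypothetical test set with NaNs
target = "label"

tree = DecisionTreeClassifier(random_state=0)
tree.fit(train.drop(columns=[target]), train[target])

# Rank columns by importance and acquire missing cells in that order.
order = (pd.Series(tree.feature_importances_,
                   index=train.drop(columns=[target]).columns)
         .sort_values(ascending=False))
print("Acquisition order of columns:", list(order.index))

budget = 50                                  # how many missing cells we can buy
for col in order.index:
    missing_rows = test.index[test[col].isna()][:budget]
    budget -= len(missing_rows)
    # In practice each acquired value would come from its source at a cost;
    # here we only record which cells would be requested.
    print(f"request {len(missing_rows)} values for column '{col}'")
    if budget <= 0:
        break
```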
