181

A REVIEW AND ANALYSIS OF THE LINKED DECISIONS IN THE CONFISCATION OF ILLEGALLY TRADED TURTLES

Smith, Desiree 14 November 2023 (has links) (PDF)
Over the last few decades, freshwater turtles have become more common in the global illegal wildlife trade because of growing demand in the pet trade. Illegally traded turtles may be intercepted and deposited by a number of agencies. However, when turtles are confiscated, many uncertainties and risks make releasing them back to the wild difficult. Therefore, we used tools from decision analysis to achieve the following three objectives: (1) to identify points of intervention in the illegal turtle trade using conceptual models, (2) to outline the linked decisions for turtle confiscation and repatriation using decision trees, and (3) to evaluate the decision trees for two example scenarios, one with complete information and one with uncertainty. We used the wood turtle (Glyptemys insculpta), a species of conservation concern due in part to illegal wildlife trafficking, as a case study. We conducted informational interviews of biologists, law enforcement, land managers, and zoo staff, whom we collectively refer to as decision makers. Interviews revealed that decisions regarding the disposition of confiscated turtles are complicated by uncertainty in disease status and potential differences between origin and confiscation locations. Decision makers who handle confiscated turtles also recognize that their decisions are linked, with linkages relying on personal contacts. In evaluating our decision trees, we found that despite different amounts and kinds of uncertainty, release of the confiscated wood turtles to the wild provided the highest conservation value. Collectively, our research shows how decision trees can help improve decision making in the face of uncertainty.
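The core of the decision-tree evaluation described above is computing an expected conservation value at each chance node and picking the disposition with the highest value. A minimal sketch, with all probabilities and values invented for illustration (not taken from the thesis):

```python
# Hypothetical expected-value evaluation of a disposition decision tree.
# The probabilities and conservation values below are illustrative only.

def expected_value(branches):
    """Expected value of a chance node given (probability, value) pairs."""
    return sum(p * v for p, v in branches)

# Decision: release a confiscated wood turtle vs. keep it in captivity.
# Chance node for release: the turtle's disease status is uncertain.
p_disease = 0.2  # assumed probability the turtle carries disease
release = expected_value([
    (1 - p_disease, 1.0),   # healthy turtle returned to the wild
    (p_disease, -0.5),      # diseased turtle risks the wild population
])
captivity = 0.3             # assumed conservation value of captivity

best = max([("release", release), ("captivity", captivity)],
           key=lambda t: t[1])
print(best)  # with these numbers, release still wins: ('release', 0.7)
```

Under these assumed numbers, release remains the better option even with a 20% disease risk, mirroring the abstract's finding that release provided the highest conservation value despite uncertainty.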
182

Tillämpning av maskininlärning för att införa automatisk adaptiv uppvärmning genom en studie på KTH Live-In Labs lägenheter / Applying machine learning to introduce automatic adaptive heating through a study of the KTH Live-In Lab apartments

Vik, Emil, Åsenius, Ingrid January 2020 (has links)
The purpose of this study is to investigate whether it is possible to decrease Sweden's energy consumption through adaptive heating that uses climate data and machine learning to detect occupancy in apartments. The study was carried out using environmental data from one of the KTH Live-In Lab's apartments. The data was first used to investigate the possibility of detecting occupancy through machine learning and was then used as input to an adaptive heating model to investigate the potential benefits for the energy consumption and cost of heating. The results of the study show that occupancy can be detected using environmental data, though not with 100% accuracy. They also show that the features with the greatest impact on detecting occupancy are light and carbon dioxide, and that the best-performing machine learning algorithm, for the dataset used, is the decision tree. The potential energy savings through adaptive heating were estimated to be up to 10.1%. The final part of the paper discusses how a value-creating service can be built around adaptive heating and its potential to reach the market.
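The abstract's pipeline can be sketched as a tiny hand-written decision tree over the two most informative features (light and CO2) feeding an adaptive setpoint. The thresholds and readings below are invented for illustration and are not from the KTH Live-In Lab dataset:

```python
# Minimal sketch of occupancy-driven adaptive heating with a hand-rolled
# two-level decision tree; all thresholds and readings are assumptions.

def occupied(light_lux, co2_ppm):
    """Decision tree over the two most informative features."""
    if light_lux > 150:          # lights on strongly suggests presence
        return True
    return co2_ppm > 600         # elevated CO2 also indicates occupancy

def heating_setpoint(light_lux, co2_ppm):
    """Adaptive heating: lower the setpoint when the flat looks empty."""
    return 21.0 if occupied(light_lux, co2_ppm) else 17.0

readings = [(300, 800), (20, 450), (10, 900)]  # (lux, ppm) samples
print([heating_setpoint(l, c) for l, c in readings])  # [21.0, 17.0, 21.0]
```

In the actual study the tree structure and thresholds were learned from labeled sensor data rather than set by hand; this sketch only shows the shape of the resulting controller.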
183

A Data Analytics Framework for Regional Voltage Control

Yang, Duotong 16 August 2017 (has links)
Modern power grids are some of the largest and most complex engineered systems. Due to economic competition and deregulation, power systems are operated closer to their security limits. When the system is operating under a heavy loading condition, unstable voltage conditions may cause a cascading outage. Voltage fluctuations are presently being further aggravated by the increasing integration of utility-scale renewable energy sources. In this regard, a fast-responding and reliable voltage control approach is indispensable. The continuing success of synchrophasor technology has ushered in new subdomains of power system applications for real-time situational awareness, online decision support, and offline system diagnostics. The primary objective of this dissertation is to develop a data-analytics-based framework for regional voltage control utilizing high-speed data streams delivered from synchronized phasor measurement units. The dissertation focuses on the following three studies: The first is centered on the development of decision-tree-based voltage security assessment and control. The second proposes an adaptive decision tree scheme using online ensemble learning to update the decision model in real time. A system network partition approach is introduced in the last study; its aim is to reduce the size of the training sample database and the number of control candidates for each regional voltage controller. The methodologies proposed in this dissertation are evaluated based on an open source software framework. / Ph. D. / Modern power grids are some of the largest and most complex engineered systems. When the system is heavily loaded, a small contingency may cause a large system blackout. In this regard, a fast-responding and reliable control approach is indispensable. Voltage is one of the most important metrics for indicating the system condition.
This dissertation develops a cost-effective control method to secure the power system based on real-time voltage measurements. The proposed method is developed based on an open source framework.
184

Using random forest and decision tree models for a new vehicle prediction approach in computational toxicology

Mistry, Pritesh, Neagu, Daniel, Trundle, Paul R., Vessey, J.D. 22 October 2015 (has links)
Drug vehicles are chemical carriers that provide beneficial aid to the drugs they bear. Taking advantage of their favourable properties can potentially allow the safer use of drugs that are considered highly toxic. A means of vehicle selection without experimental trial would therefore save the industry time and money. Although machine learning is increasingly used in predictive toxicology, to our knowledge there is no reported work using machine learning techniques to model drug-vehicle relationships for vehicle selection to minimise toxicity. In this paper we demonstrate the use of data mining and machine learning techniques to process, extract and build models based on classifiers (decision trees and random forests) that allow us to predict which vehicle would be most suited to reduce a drug's toxicity. Using data acquired from the National Institutes of Health's (NIH) Developmental Therapeutics Program (DTP), we propose a methodology using an area under a curve (AUC) approach that allows us to distinguish which vehicle provides the best toxicity profile for a drug, and we build classification models based on this knowledge. Our results show that we can achieve prediction accuracies of 80% using random forest models, whilst the decision tree models produce accuracies in the 70% region. We consider our methodology widely applicable within the scientific domain and beyond for comprehensively building classification models for the comparison of functional relationships between two variables.
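The AUC idea in this abstract can be illustrated with a small sketch: for each candidate vehicle, integrate a dose-response toxicity curve and prefer the vehicle with the smaller area. The curves and vehicle names below are fabricated examples, not DTP data:

```python
# Illustrative AUC-based ranking of drug vehicles by toxicity profile.
# Dose-response values are invented; the real study used NIH DTP data.

def auc(xs, ys):
    """Trapezoidal area under a curve sampled at points (xs, ys)."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])))

doses = [0, 1, 2, 4]                      # assumed dose levels
toxicity_by_vehicle = {                   # assumed toxicity responses
    "saline":    [0.0, 0.1, 0.3, 0.8],
    "cremophor": [0.0, 0.3, 0.6, 0.9],
}
best = min(toxicity_by_vehicle,
           key=lambda v: auc(doses, toxicity_by_vehicle[v]))
print(best)  # saline has the smaller area, i.e. the milder profile
```

In the paper this per-vehicle comparison supplies the class labels on which the decision tree and random forest classifiers are trained; the sketch covers only the labelling step.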
185

The Foundation of Pattern Structures and their Applications

Lumpe, Lars 06 October 2021 (has links)
This thesis is divided into a theoretical part, aimed at developing statements around the newly introduced concept of pattern morphisms, and a practical part, where we present use cases of pattern structures. A first insight of our work clarifies the facts on projections of pattern structures: a projection of a pattern structure does not always lead again to a pattern structure. A solution to this problem, and one of the most important points of this thesis, is the introduction of pattern morphisms in Chapter 4. Pattern morphisms make it possible to describe relationships between pattern structures, and thus enable a deeper understanding of pattern structures in general. They also provide the means to describe projections of pattern structures that do lead to pattern structures again. In Chapters 5 and 6, we look at the impact of morphisms between pattern structures on concept lattices and on their representations, thus clarifying the theoretical background of existing research in this field. The application part reveals that random forests can be described through pattern structures, which constitutes another central achievement of our work. To demonstrate the practical relevance of our findings, we include a use case where this result is used to build an algorithm that solves a real-world classification problem of red wines. The random forest achieves better prediction accuracy, but the high interpretability of our algorithm makes it valuable. Another approach to the red wine classification problem is presented in Chapter 8, where, starting from an elementary pattern structure, we build a classification model that yields good results.
186

A Deep Learning Based Pipeline for Image Grading of Diabetic Retinopathy

Wang, Yu 21 June 2018 (has links)
Diabetic Retinopathy (DR) is one of the principal sources of blindness due to diabetes mellitus. It can be identified by lesions of the retina, namely microaneurysms, hemorrhages, and exudates. DR can be effectively prevented or delayed if discovered early enough and well managed. Prior studies on diabetic retinopathy typically extract features manually, which is time-consuming and inaccurate. In this research, we propose a framework using advanced retina image processing, deep learning, and a boosting algorithm for high-performance DR grading. First, we preprocess the retina image datasets to highlight signs of DR, then apply a convolutional neural network to extract features of the retina images, and finally apply a boosting tree algorithm to make a prediction based on the extracted features. Experimental results show that our pipeline has excellent performance when grading diabetic retinopathy images, as evidenced by scores on both the Kaggle dataset and the IDRiD dataset. / Master of Science / Diabetes is a disease in which insulin does not work properly, leading to long-term high blood sugar levels. Diabetic Retinopathy (DR), a result of diabetes mellitus, is one of the leading causes of blindness. It can be identified by lesions on the surface of the retina. DR can be effectively prevented or delayed if discovered early enough and well managed. Prior image processing studies of diabetic retinopathy typically detect features, such as retinal lesions, manually, which is time-consuming and inaccurate. In this research, we propose a framework using advanced retina image processing, deep learning, and a boosting decision tree algorithm for high-performance DR grading. Deep learning is a method that can be used to extract features of an image; a boosting decision tree is a method widely used in classification tasks. We preprocess the retina image datasets to highlight signs of DR, followed by deep learning to extract features of the retina images.
Then, we apply a boosting decision tree algorithm to make a prediction based on the extracted features. Experimental results show that our pipeline has excellent performance when grading the diabetic retinopathy score on both the Kaggle and IDRiD datasets.
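The three-stage pipeline described above (preprocess, extract features, grade) can be sketched as a chain of functions. The components below are deliberately simplistic stand-ins so the data flow is runnable; the real system uses a trained CNN and a gradient-boosted tree, neither of which is reproduced here:

```python
# Schematic of a preprocess -> feature-extraction -> grading pipeline.
# Every component is a stand-in; nothing here is the thesis's model.

def preprocess(image):
    """Normalize pixel intensities to [0, 1] (stand-in for DR-highlighting)."""
    lo, hi = min(image), max(image)
    return [(p - lo) / (hi - lo) for p in image]

def extract_features(image):
    """Stand-in for the CNN feature extractor: simple intensity statistics."""
    mean = sum(image) / len(image)
    bright_frac = sum(p > 0.8 for p in image) / len(image)
    return [mean, bright_frac]

def grade(features):
    """Stand-in for the boosted-tree grader: threshold on bright-lesion signal."""
    return 1 if features[1] > 0.2 else 0

retina = [10, 40, 200, 250, 30, 240, 220, 15]  # fake 1-D "image"
print(grade(extract_features(preprocess(retina))))  # 1
```

The value of the pipeline structure is that each stage can be swapped independently, e.g. replacing the stand-in extractor with a pretrained CNN while keeping the grading interface unchanged.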
187

Implementation of decision trees for embedded systems

Badr, Bashar January 2014 (has links)
This research work develops real-time incremental-learning decision tree solutions suitable for real-time embedded systems by virtue of having both a defined memory requirement and an upper bound on the computation time per training vector. In addition, the work provides embedded systems with the capability of rapid processing and training on streamed data problems, and adopts electronic hardware solutions to improve the performance of the developed algorithm. Two novel decision tree approaches, namely the Multi-Dimensional Frequency Table (MDFT) and the Hashed Frequency Table Decision Tree (HFTDT), represent the core of this research work. Both methods successfully incorporate a frequency table technique to produce a complete decision tree. The MDFT and HFTDT learning methods were designed with the ability to generate application-specific code for both training and classification according to the requirements of the targeted application. The MDFT allows the memory architecture to be specified statically before learning takes place, within a deterministic execution time. The HFTDT method is a development of the MDFT in which a reduction in the memory requirements is achieved, again within a deterministic execution time. The HFTDT achieved low memory usage compared to existing decision tree methods, and hardware acceleration improved performance by up to 10 times in terms of execution time.
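The frequency-table idea behind MDFT/HFTDT can be sketched as follows: class counts are accumulated per discretized feature vector in a fixed-size table, so both memory and per-vector update time are bounded, which is what makes the approach embeddable. The real algorithms build a complete decision tree from such tables; this hedged sketch shows only the bounded incremental bookkeeping, and the bucketing function is an invented stand-in for the thesis's hashing scheme:

```python
# Bounded incremental learning with a hashed frequency table (sketch).
# Fixed table size => defined memory; O(1) update => bounded time/vector.

from collections import Counter

TABLE_SIZE = 64  # fixed memory budget, as in an embedded deployment

table = [Counter() for _ in range(TABLE_SIZE)]

def bucket(vector):
    """Map a discretized feature vector into the fixed table (stand-in hash)."""
    return sum(v * 31 ** i for i, v in enumerate(vector)) % TABLE_SIZE

def train(vector, label):
    table[bucket(vector)][label] += 1   # O(1) per training vector

def classify(vector):
    counts = table[bucket(vector)]
    return counts.most_common(1)[0][0] if counts else None

for v, y in [((1, 0), "a"), ((1, 0), "a"), ((0, 1), "b")]:
    train(v, y)
print(classify((1, 0)), classify((0, 1)))  # a b
```

Note that distinct vectors can collide in a small table, trading accuracy for the memory bound, which is the essential engineering compromise the HFTDT makes explicit.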
188

官員職等陞遷分類預測之研究 / Classification prediction on government official’s rank promotion

賴隆平, Lai, Long Ping Unknown Date (has links)
公務人員的人事陞遷是一個複雜性極高,其中隱藏著許多不變的定律及過程,長官與部屬、各公務人員人之間的關係,更是如同蜘蛛網狀般的錯綜複雜,而各公務人員的陞遷狀況,更是隱藏著許多派系之間的鬥爭拉扯連動,或是提攜後進的過程,目前透過政府公開的總統府公報-總統令,可以清楚得知所有公務人員的任職相關資料,其中包含各職務之間的陞遷、任命、派免等相關資訊,而每筆資料亦包含機關、單位、職稱及職等資料,可以提供各種研究使用。 本篇係整理出一種陞遷序列的資料模型來進行研究,透過資料探勘的相關演算法-支撐向量機(Support Vector Machine,簡稱SVM)及決策樹(Decision Tree)的方式,並透過人事的領域知識加以找出較具影響力的屬性,來設計實驗的模型,並使用多組模型及多重資料進行實驗,透過整體平均預測結果及圖表方式來呈現各類別的預測狀況,再以不同的屬性資料來運算產生其相對結果,來分析其合理性,最後再依相關數據來評估此一方法的合理及可行性。 透過資料探勘設計的分類預測模型,其支撐向量機與決策樹都具有訓練量越大,展現之預測結果也愈佳之現象,這跟一般模型是相同的,而挖掘的主管職務屬性參數及關鍵屬性構想都跟人事陞遷的邏輯不謀而合,而預測結果雖各有所長,但整體來看則為支撐向量機略勝一籌,惟支撐向量機有一狀況,必須先行排除較不具影響力之屬性參數資料,否則其產生超平面的邏輯運算過程將產生拉扯作用,導致影響其預測結果;而決策樹則無是類狀況,且其應用較為廣泛,可以透過宣告各屬性值的類型,來進行不同屬性資料類型的分類實驗。 而透過支撐向量機與決策樹的產生的預測結果,其正確率為百分之77至82左右,如此顯示出國內中高階文官的陞遷制度是有脈絡可循的,其具有一定的制度規範及穩定性,而非隨意的任免陞遷;如此透過以上資料探勘的應用,藉著此特徵研究提供公務部門在進行人力資源管理、組織發展、陞遷發展以及組織部門精簡規劃上,作為調整設計參考的一些相關資訊;另透過一些相關屬性的輸入,可提供尚在服務的公務人員協助其預估陞遷發展的狀況,以提供其進行相關生涯規劃。 / Civil-service promotion is a highly complex process governed by many hidden regularities. The relationships between senior officials and their subordinates, and among civil servants generally, are as intricate as a spider's web, and individual promotions often conceal factional struggles or the grooming of junior staff. The publicly available Presidential Office Gazette (presidential orders) records the appointment history of all civil servants, including promotions, appointments, and dismissals; each record also lists the agency, unit, job title, and rank, and can therefore support a wide range of research.
This study organizes the data into promotion-sequence models and applies two data-mining algorithms, the Support Vector Machine (SVM) and the decision tree. Domain knowledge of personnel administration is used to identify the most influential attributes and to design the experimental models. Multiple models and datasets are evaluated, with overall average prediction results and per-class charts used to present performance; results computed from different attribute sets are compared to assess their reasonableness, and the resulting figures are then used to judge the soundness and feasibility of the method.
For both the SVM and the decision tree, prediction quality improves with the amount of training data, as with most models, and the managerial-post attributes and key attributes mined agree well with the logic of personnel promotion. Each classifier has its strengths, but overall the SVM performs slightly better. The SVM, however, requires that less influential attributes be removed beforehand, as they otherwise distort the construction of the separating hyperplane and degrade its predictions; the decision tree has no such problem and is more broadly applicable, since declaring the type of each attribute allows classification experiments over different kinds of attribute data.
The prediction accuracy achieved by the SVM and the decision tree is roughly 77% to 82%, which suggests that the promotion of mid- and high-ranking civil servants follows discernible patterns within a stable institutional framework, rather than arbitrary appointment. These findings can inform human-resource management, organizational development, promotion planning, and organizational downsizing in the public sector; in addition, given the relevant attributes as input, the models can help serving civil servants estimate their promotion prospects for career planning.
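The study's comparison of two classifiers on attribute vectors can be illustrated with a toy harness. The thesis used a real SVM and decision tree on gazette data; here a decision stump and a nearest-centroid rule stand in, and the data (seniority, grade) is fabricated:

```python
# Toy classifier-comparison harness in the spirit of the study above.
# Both classifiers and all data are illustrative stand-ins.

import statistics

def stump(threshold, feature):
    """One-split decision tree: predict 1 if the feature exceeds threshold."""
    return lambda x: int(x[feature] > threshold)

def nearest_centroid(train):
    """Simple linear-style classifier standing in for the SVM."""
    cents = {}
    for label in {y for _, y in train}:
        pts = [x for x, y in train if y == label]
        cents[label] = [statistics.mean(col) for col in zip(*pts)]
    def predict(x):
        return min(cents, key=lambda l:
                   sum((a - b) ** 2 for a, b in zip(x, cents[l])))
    return predict

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# (seniority_years, grade) -> promoted (1) or not (0); fabricated records
data = [((9, 3), 1), ((8, 2), 1), ((3, 1), 0),
        ((2, 2), 0), ((7, 3), 1), ((4, 1), 0)]
tree = stump(5, 0)  # promoted if seniority > 5 (assumed rule)
print(accuracy(tree, data), accuracy(nearest_centroid(data), data))
```

On this tiny separable dataset both models score perfectly; the interesting comparisons in the thesis arise on real data, where attribute selection matters for the SVM but not for the tree.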
189

Reservoir screening criteria for deep slurry injection

Nadeem, Muhammad January 2005 (has links)
Deep slurry injection is a process of solid waste disposal that involves grinding the solid waste to a relatively fine-grained consistency, mixing the ground waste with water and/or other liquids to form a slurry, and disposing of the slurry by pumping it down a well at a high enough pressure that fractures are created within the target formation. This thesis describes the site assessment criteria involved in selecting a suitable target reservoir for deep slurry injection. The main goals of this study are as follows: <ul> <li>Identify the geological parameters important for a prospective injection site</li> <li>Recognize the role of each parameter</li> <li>Determine the relationships among different parameters</li> <li>Design and develop a model which can assemble all the parameters into a semi-quantitative evaluation process that allows site ranking and elimination of unsuitable sites</li> <li>Evaluate the model against several real slurry injection cases and several prospective cases where slurry injection may take place in future</li> </ul> The quantitative and qualitative parameters recognized as important for making a decision regarding a target reservoir for deep slurry injection operations are the permeability, porosity, depth, areal extent, thickness, mechanical strength, and compressibility of the reservoir; the thickness and flow properties of the cap rock; the geographical distance between an injection well and a waste source or collection centre; and the regional and detailed structural and tectonic setting of the area. Additional factors affecting the security level of a site include the details of the lithostratigraphic column overlying the target reservoir and the presence of overlying fracture-blunting horizons. Each parameter is discussed in detail to determine its role in site assessment and its relationship with the other parameters.
A geological assessment model is developed, divided into two components: a decision tree and a numerical calculation system. The decision tree deals with the most critical parameters, those that render a site either unsuitable or suitable but of unspecified quality. The numerical calculation gives a score to a prospective injection site based on the rank numbers and weighting factors for the various parameters. The score for a particular site shows its favourability for the injection operation and allows a direct comparison with other available sites. Three categories have been defined for this purpose: below average, average, and above average. A score of 85 to 99 out of 125 places a site in the "average" category; a site is unsuitable for injection if it belongs to the "below average" category, i.e. if its total score is less than 85; and the best sites generally have scores in the "above average" category, i.e. 100 or higher. Sites that fall in the "average" category will require more detailed tests and assessments. The geological assessment model is evaluated using original geological data from North America and Indonesia, both for sites that have already undergone deep slurry injection operations and for some prospective sites. The results obtained from the model are satisfactory, as they agree with empirical observations. Areas for future work include writing a computer program for the geological model and further evaluating the model with original data from more areas representing more diverse geology around the world.
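The two-stage model above (pass/fail decision tree, then a weighted score compared against the 85 and 100 thresholds on the 125-point scale) can be sketched directly. The parameter names, weights, and ranks below are invented for illustration; only the thresholds and the two-stage structure come from the abstract:

```python
# Sketch of the two-stage site assessment: decision-tree screening, then
# a weighted score on a 125-point scale. Weights/ranks are assumptions.

CRITICAL = ["cap_rock_present", "no_active_faults"]   # decision-tree stage

WEIGHTS = {"permeability": 2, "porosity": 2, "depth": 1,
           "thickness": 1, "cap_rock_quality": 3}      # rank 1..5 each
MAX_SCORE = 125  # the thesis's scoring scale

def assess(site):
    # Stage 1: any failed critical criterion rules the site out entirely.
    if not all(site[c] for c in CRITICAL):
        return "unsuitable"
    # Stage 2: weighted ranks, rescaled onto the 125-point scale.
    raw = sum(w * site[p] for p, w in WEIGHTS.items())
    score = raw * MAX_SCORE // (5 * sum(WEIGHTS.values()))
    if score < 85:
        return "below average"
    return "average" if score < 100 else "above average"

site = {"cap_rock_present": True, "no_active_faults": True,
        "permeability": 4, "porosity": 4, "depth": 3,
        "thickness": 4, "cap_rock_quality": 5}
print(assess(site))  # above average
```

The separation mirrors the model's rationale: critical criteria are absolute vetoes the score cannot buy back, while the numeric stage ranks the surviving candidates against each other.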
190

Structures Markoviennes cachées et modèles à corrélations conditionnelles dynamiques : extensions et applications aux corrélations d'actifs financiers / Hidden Markov models and dynamic conditional correlation models: extensions and applications to stock market time series

Charlot, Philippe 25 November 2010 (has links)
L'objectif de cette thèse est d'étudier le problème de la modélisation des changements de régime dans les modèles à corrélations conditionnelles dynamiques, en nous intéressant plus particulièrement à l'approche Markov-switching. À la différence de l'approche standard basée sur le modèle à chaîne de Markov caché (HMM) de base, nous utilisons des extensions du modèle HMM provenant des modèles graphiques probabilistes. Cette discipline a en effet proposé de nombreuses dérivations du modèle de base permettant de modéliser des structures complexes. Cette thèse se situe donc à l'interface de deux disciplines : l'économétrie financière et les modèles graphiques probabilistes.
Le premier essai présente un modèle construit à partir d'une structure hiérarchique cachée markovienne qui permet de définir différents niveaux de granularité pour les régimes. Il peut être vu comme un cas particulier du modèle RSDC (Regime Switching for Dynamic Correlations). Basé sur le HMM hiérarchique, notre modèle permet de capter des nuances de régimes qui sont ignorées par l'approche Markov-switching classique.
La seconde contribution propose une version Markov-switching du modèle DCC construite à partir du modèle HMM factorisé. Alors que l'approche Markov-switching classique suppose que tous les éléments de la matrice de corrélation suivent la même dynamique, notre modèle permet à tous les éléments de la matrice de corrélation d'avoir leur propre dynamique de saut.
Dans la dernière contribution, nous proposons un modèle DCC construit à partir d'un arbre de décision. L'objectif de cet arbre est de relier le niveau des volatilités individuelles avec le niveau des corrélations. Pour cela, nous utilisons un arbre de décision markovien caché, qui est une extension du HMM. / The objective of this thesis is to study the modelling of regime change in dynamic conditional correlation models, focusing in particular on the Markov-switching approach. Unlike the standard approach based on the Hidden Markov Model (HMM), we use extensions of the HMM drawn from the theory of probabilistic graphical models, a discipline that has produced many derivations of the basic model for modelling complex structures. This thesis can thus be viewed as sitting at the interface of two disciplines: financial econometrics and probabilistic graphical models. The first essay presents a model constructed from a hierarchical hidden Markov structure which allows different levels of granularity to be defined for the regimes. It can be seen as a special case of the RSDC model (Regime Switching for Dynamic Correlations).
Based on the hierarchical HMM, our model can capture nuances of regimes that are ignored by the classical Markov-switching approach. The second contribution proposes a Markov-switching version of the DCC model built from the factorial HMM. While the classical Markov-switching approach assumes that all elements of the correlation matrix follow the same switching dynamic, our model allows each element of the correlation matrix to have its own switching dynamic. In the final contribution, we propose a DCC model built from a decision tree. The objective of this tree is to link the level of the individual volatilities with the level of the correlations. For this, we use a hidden Markov decision tree, which is an extension of the HMM.
