About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in Co-registered 18-FDG PET/CT Images

Markel, Daniel 14 December 2011 (has links)
Variability between oncologists in defining the tumor during radiation therapy planning can be as high as 700% by volume. Robust, automated definition of tumor boundaries could significantly improve treatment accuracy and efficiency. However, the information provided by computed tomography (CT) is not sensitive enough to the differences between tumor and healthy tissue, and positron emission tomography (PET) is hampered by blurriness and low resolution. The textural characteristics of thoracic tissue were investigated and compared with those of tumors found within the PET and CT images of 21 patients in order to enhance the differences, and the boundary, between cancerous and healthy tissue. A pattern recognition approach was used to learn the textural characteristics of each tissue type from these samples and to classify voxels as either normal or abnormal. The approach was compared with a number of alternative methods and found to have the highest overlap with an oncologist's tumor definition.
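A minimal sketch of the kind of voxel-classification pipeline this abstract describes, not the thesis's actual code: the texture features (local mean, variance, contrast), the window size, and the use of scikit-learn's random forest are all illustrative assumptions.

```python
# Hypothetical sketch: texture-based voxel classification on
# co-registered PET/CT volumes; all names and parameters are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def local_texture_features(volume, size=5):
    """Simple 3D texture features: local mean, variance, and contrast."""
    mean = uniform_filter(volume, size)
    sq_mean = uniform_filter(volume ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    return np.stack([mean, var, volume - mean], axis=-1)

def voxel_feature_matrix(pet, ct):
    """Concatenate PET and CT texture features into one row per voxel."""
    feats = np.concatenate(
        [local_texture_features(pet), local_texture_features(ct)], axis=-1
    )
    return feats.reshape(-1, feats.shape[-1])

# pet, ct: co-registered 3D arrays; labels: 1 = tumor voxel, 0 = healthy.
pet = np.random.rand(32, 32, 32)                 # placeholder volumes
ct = np.random.rand(32, 32, 32)
labels = (pet > 0.95).astype(int).ravel()        # placeholder ground truth

X = voxel_feature_matrix(pet, ct)
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
mask = clf.predict(X).reshape(pet.shape)         # binary tumor mask
```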
43

Failure-driven learning as model-based self-redesign

Stroulia, Eleni January 1994 (has links)
No description available.
44

Introspective multistrategy learning : constructing a learning strategy under reasoning failure

Cox, Michael Thomas 05 1900 (has links)
No description available.
45

PREDICTION OF CHROMATIN STATES USING DNA SEQUENCE PROPERTIES

Bahabri, Rihab R. 06 1900 (has links)
Activities of DNA are to a great extent controlled epigenetically through the internal structure of chromatin. This structure is dynamic and is influenced by different modifications of histone proteins. Various combinations of epigenetic modifications of histones pinpoint different functional regions of the DNA, determining the so-called chromatin states. However, the characterization of chromatin states by DNA sequence properties remains largely unknown. In this study we aim to explore whether DNA sequence patterns in the human genome can characterize different chromatin states. Using DNA sequence motifs, we built binary classifiers for each chromatin state to evaluate whether a given genomic sequence is a good candidate for belonging to that state. Of the four classification algorithms (C4.5, Naive Bayes, Random Forest, and SVM) used for this purpose, the decision-tree-based classifiers (C4.5 and Random Forest) yielded the best results among those we evaluated. Our results suggest that in general these models lack sufficient predictive power, although for four chromatin states (insulators, heterochromatin, and two types of copy number variation) we found that the presence of certain motifs in DNA sequences does imply an increased probability that the sequence belongs to one of those states.
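A minimal sketch of the per-state binary classification setup described above, using motif occurrence counts as features. The data shapes, the placeholder labels, and the scikit-learn random forest (standing in for C4.5-style decision trees) are assumptions, not the thesis's actual pipeline.

```python
# Hypothetical sketch: one binary classifier per chromatin state,
# trained on motif-count features; data and state list are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sequences, n_motifs = 1000, 50

# X[i, j] = occurrences of motif j in genomic sequence i (placeholder data).
X = rng.poisson(2.0, size=(n_sequences, n_motifs))

chromatin_states = ["insulator", "heterochromatin", "cnv_gain", "cnv_loss"]
for state in chromatin_states:
    y = rng.integers(0, 2, n_sequences)   # 1 = sequence is in this state
    clf = RandomForestClassifier(n_estimators=200)
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{state}: mean AUC = {scores.mean():.3f}")
```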
46

Efficient memory-based learning for robot control

Moore, Andrew William January 1990 (has links)
No description available.
47

Evaluating Forecasting Performance in the Context of Process-Level Decisions: Methods, Computation Platform, and Studies in Residential Electricity Demand Estimation

Huntsinger, Richard A. 01 May 2017 (has links)
This dissertation explores how decisions about the forecasting process can affect the evaluation of forecasting performance, in general and in the domain of residential electricity demand estimation. Decisions of interest include those around data sourcing, sampling, clustering, temporal magnification, algorithm selection, testing approach, evaluation metrics, and others. Models of the forecasting process and analysis methods are formulated in terms of a three-tier decision taxonomy, by which decision effects are exposed through systematic enumeration of the techniques resulting from those decisions. A computation platform based on the models is implemented to compute and visualize the effects. The methods and computation platform are first demonstrated by applying them to 3,003 benchmark datasets to investigate various decisions, including those that could impact the relationship between data entropy and forecastability. Then, they are used to study over 10,624 week-ahead and day-ahead residential electricity demand forecasting techniques, utilizing fine-resolution electricity usage data collected over 18 months on groups of 782 and 223 households by real smart electric grids in Ireland and Australia, respectively. The main finding from this research is that forecasting performance is highly sensitive to the interaction effects of many decisions. Sampling is found to be an especially effective data strategy, clustering much less so, and temporal magnification mixed. Other relationships between certain decisions and performance are surfaced, too. While these findings are empirical and specific to one practically scoped investigation, they are potentially generalizable, with implications for residential electricity demand estimation, smart electric grid design, and electricity policy.
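A minimal sketch of the systematic-enumeration idea behind the abstract: each forecasting "technique" is one combination of process-level decisions, and performance is evaluated per combination. The decision names, option lists, and evaluation stub are illustrative assumptions, not the dissertation's actual taxonomy or platform.

```python
# Hypothetical sketch: enumerate combinations of forecasting-process
# decisions and score each resulting technique; all options are placeholders.
from itertools import product

decisions = {
    "sampling": ["all_households", "random_sample"],
    "clustering": ["none", "kmeans"],
    "magnification": ["hourly", "daily", "weekly"],
    "algorithm": ["naive", "arima", "random_forest"],
}

def evaluate(technique):
    """Placeholder for fitting the technique and returning a forecast error."""
    return hash(frozenset(technique.items())) % 100 / 100.0  # fake error score

results = []
for combo in product(*decisions.values()):
    technique = dict(zip(decisions.keys(), combo))
    results.append((evaluate(technique), technique))

# Surface the decision combinations with the lowest (fake) error.
for error, technique in sorted(results, key=lambda r: r[0])[:3]:
    print(f"error={error:.2f}  {technique}")
```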
48

Forecasting Success in the National Hockey League Using In-Game Statistics and Textual Data

Weissbock, Joshua January 2014 (has links)
In this thesis, we look at a number of methods to forecast success (winners and losers), both of single games and of playoff series (best-of-seven games), in the sport of ice hockey, more specifically within the National Hockey League (NHL). Our findings indicate that there exists a theoretical upper bound, which seems to hold true for all sports, that makes prediction difficult. In the first part of this thesis, we look at predicting the outcome of individual games, i.e., which of the two teams will win. We use a number of traditional statistics (published on the league's website and used by the media) and performance metrics (used by Internet hockey analysts, and shown to have a much higher correlation with success over the long term). Despite the demonstrated long-term success of performance metrics, it was the traditional statistics that had the most value for automatic game prediction, allowing our model to achieve 59.8% accuracy. Interestingly, regardless of which features we used in our model, we were not able to increase the accuracy much beyond 60%. We compared the observed win percentage of teams in the NHL to many simulated leagues and found that there appears to be a theoretical upper bound of approximately 62% for single-game prediction in the NHL. Since a single game is difficult to predict, with a maximum accuracy of 62%, predicting a longer series of games should be easier. We looked at predicting the winner of a best-of-seven series between two teams using over 30 features, both traditional and advanced statistics, and found that we were able to increase our prediction accuracy to almost 75%. We then re-explored predicting single games with the use of pre-game textual reports written by hockey experts from http://www.NHL.com, using bag-of-words features and sentiment analysis. We combined these features with the numerical data in a multi-layer meta-classifier and were able to increase the accuracy to close to the upper bound.
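A minimal sketch of the kind of two-layer meta-classifier the abstract describes, combining numeric game statistics with bag-of-words features from pre-game reports. The feature names, placeholder data, base models, and stacking arrangement are assumptions, not the thesis's actual system.

```python
# Hypothetical sketch: stack a text model and a numeric model under a
# meta-classifier for game outcome prediction; all data is placeholder.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_predict

reports = ["home team rested and confident",
           "visitors on a long losing streak"] * 50   # placeholder pre-game texts
stats = np.random.rand(100, 6)     # e.g., goal differential, possession metrics
y = np.random.randint(0, 2, 100)   # 1 = home team won

# Level 1: separate base classifiers for text and numeric features,
# with out-of-fold probabilities to avoid leaking labels into level 2.
X_text = CountVectorizer().fit_transform(reports)
p_text = cross_val_predict(MultinomialNB(), X_text, y,
                           cv=5, method="predict_proba")[:, 1]
p_stats = cross_val_predict(LogisticRegression(), stats, y,
                            cv=5, method="predict_proba")[:, 1]

# Level 2: meta-classifier over the base classifiers' probabilities.
meta_X = np.column_stack([p_text, p_stats])
meta = LogisticRegression().fit(meta_X, y)
print("meta-classifier accuracy:", meta.score(meta_X, y))
```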
49

Learning control knowledge within an explanation-based learning framework

Desimone, Roberto V. January 1989 (has links)
No description available.
50

An examination of the causes of bias in semi-supervised learning

Fox-Roberts, Patrick Kirk January 2014 (has links)
No description available.
