151

An Advanced System for the Targeted Classification of Grassland Types with Multi-Temporal SAR Imagery

Metz, Annekatrin 05 October 2016 (has links)
In the light of the ongoing loss of biodiversity at the global scale, monitoring grasslands is of utmost importance given their functional relevance in terms of the ecosystem services they provide. Here, European Union guidelines such as the Fauna-Flora-Habitat Directive and the European Agricultural Fund for Rural Development with its High Nature Value (HNV) indicators are crucial: they form the legal framework for nature conservation and define grasslands as one of their conservation targets, whose status needs to be assessed and reported by all member states on a regular basis. In the light of these reporting requirements, the need for harmonised and thorough grassland monitoring is pressing, since most member states currently still rely on intensive field surveys or photo interpretation with differing levels of detail for mapping habitat distribution. For this purpose, Earth Observation data offer a cost-effective solution, provided that dedicated grassland monitoring methodologies are implemented which are capable of processing multi-temporal acquisitions collected throughout the entire growing season. Although optical data are best suited for characterising vegetation in terms of spectral information content, they are subject to weather conditions (especially cloud coverage), which hinder the collection of sufficient information over the full phenological cycle. Furthermore, only a few studies have so far employed high and very high resolution optical time series for grassland habitat monitoring, since such data (e.g., from the RapidEye satellites) have become available only in the recent past. To overcome this limitation, SAR systems can be employed, which provide imagery independently of weather and daylight conditions and hence enable vegetation analysis by means of complete time series. Compared to optical data, radar imagery is less affected by the physical-chemical characteristics of the surface and is instead sensitive to structural features such as geometry and roughness. However, only very few techniques have so far been implemented in this context, and these are not suitable for use in an operational framework. Furthermore, to address the classification task, supervised approaches (which require in situ information for all the land-cover classes present in the study area) represent the most accurate methodological solution; nevertheless, collecting an exhaustive ground truth is generally expensive in terms of both time and money, and may not even be feasible when the test site is remote. However, in many applications the end-users are only interested in a very few specific targeted land-cover classes which, for instance, have high ecological value or are associated with support actions, subsidies or benefits from national or international institutions. The categorisation of specific grassland and habitat types such as those addressed in this thesis falls within this category of problems, which is referred to in the literature as targeted land-cover classification. In this framework, a robust and effective targeted classification system for the automatic identification of grassland types by means of multi-temporal and multi-polarised SAR data has been developed within this thesis.
In particular, the proposed system is composed of three main blocks: the preprocessing of the SAR image time series, including the Kennaugh decomposition; the feature extraction, including multi-temporal filtering and texture analysis; and the hierarchical targeted classification, which consists of two phases: first, a one-class classifier is employed to outline the union of all the grassland types of interest, treated as a single information class; then, a multi-class classifier discriminates the specific targeted classes within the areas identified as positive by the one-class classifier. To evaluate the capabilities of the proposed methodology, several experimental trials have been carried out over two test sites located in Southern Bavaria (Germany) and Mecklenburg-Western Pomerania (Germany), for which six diverse datasets have been derived from multi-temporal series of dual-pol TerraSAR-X as well as dual-/quad-pol Radarsat-2 images. Four of the Natura 2000 habitat types of the Fauna-Flora-Habitat Directive, as well as all High Nature Value grassland types, have been considered as targeted classes for this study. Overall, the proposed system proved to be robust and confirmed the effectiveness of employing multi-temporal and multi-polarisation VHR SAR data for discriminating habitat types and High Nature Value grassland types, exhibiting high potential for future employment even at larger scales. In particular, it was demonstrated that the proposed hierarchical targeted classification approach outperforms the available state-of-the-art methods and has a clear advantage over standard approaches in terms of robustness, reliability and transferability.
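The two-stage strategy described above can be illustrated with a minimal sketch: a one-class classifier is first trained on the union of all targeted grassland types, and a multi-class classifier is then applied only within the area the first stage flags as grassland. The specific classifiers, the random placeholder data and the feature layout below are assumptions for illustration only, not the implementation used in the thesis.

```python
# Minimal sketch of a two-stage "targeted" classification scheme similar in spirit
# to the one described above; classifier choices and the feature array are
# illustrative assumptions, not the thesis's actual method.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import RandomForestClassifier

# Assumed layout: each row is a pixel, each column a multi-temporal SAR feature
# (e.g. Kennaugh elements, temporally filtered backscatter, texture measures).
rng = np.random.default_rng(0)
X_grass = rng.normal(0.0, 1.0, (500, 12))       # reference samples of the targeted grassland types
y_grass = rng.integers(0, 4, 500)               # e.g. 4 targeted habitat types
X_scene = rng.normal(0.2, 1.2, (10_000, 12))    # all pixels of the scene (unlabelled)

# Stage 1: one-class classifier trained on the union of all targeted grassland
# types, used to separate "grassland of interest" from everything else.
occ = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_grass)
is_grassland = occ.predict(X_scene) == 1        # +1 = inside the grassland class

# Stage 2: multi-class classifier applied only within the positive region,
# discriminating the specific targeted grassland/habitat types.
mcc = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_grass, y_grass)
labels = np.full(len(X_scene), -1)              # -1 = "not a targeted class"
labels[is_grassland] = mcc.predict(X_scene[is_grassland])
```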
152

Machine Learning Methods for Septic Shock Prediction

Darwiche, Aiman A. 01 January 2018 (has links)
Sepsis is a life-threatening organ dysfunction caused by a dysregulated body response to infection. Sepsis is difficult to detect at an early stage, and when not detected early it is difficult to treat and results in high mortality rates. Developing improved methods for identifying patients at high risk of suffering septic shock has been the focus of much research in recent years. Building on this body of literature, this dissertation develops an improved method for septic shock prediction. Using data from the MIMIC-III database, an ensemble classifier is trained to identify high-risk patients. A robust prediction model is built by obtaining a risk score from fitting a Cox proportional hazards model on multiple input features. The score is added to the list of features and a Random Forest ensemble classifier is trained to produce the model. The proposed method, Cox Enhanced Random Forest (CERF), is evaluated by comparing its predictive accuracy to that of extant methods.
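A minimal sketch of the "Cox risk score as an additional feature" idea outlined above is given below, assuming the lifelines and scikit-learn libraries; the column names, hyperparameters and library choice are illustrative assumptions rather than the dissertation's actual CERF implementation.

```python
# Sketch: fit a Cox proportional hazards model, use its per-patient risk score as an
# extra feature, then train a Random Forest on the augmented feature set.
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.ensemble import RandomForestClassifier

def train_cerf(df: pd.DataFrame, feature_cols, duration_col, event_col, label_col):
    # Fit the Cox model on the input features plus follow-up time / event columns.
    cph = CoxPHFitter()
    cph.fit(df[feature_cols + [duration_col, event_col]],
            duration_col=duration_col, event_col=event_col)

    # Use the partial hazard as a per-patient risk score and append it as a feature.
    X = df[feature_cols].copy()
    X["cox_risk_score"] = cph.predict_partial_hazard(df[feature_cols])

    # Train the Random Forest classifier on the augmented feature set.
    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    rf.fit(X, df[label_col])
    return cph, rf
```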
153

Case Influence and Model Complexity in Regression and Classification

Tu, Shanshan 17 October 2019 (has links)
No description available.
154

Classification of Dense Masses in Mammograms

Naram, Hari Prasad 01 May 2018 (has links) (PDF)
This dissertation details techniques developed to aid in the classification of tumors, non-tumors, and dense masses in mammograms. Characteristics of the mammographic image, such as texture, are used to identify the regions of interest as part of the classification, and pattern recognition techniques, namely a nearest mean classifier and a support vector machine classifier, are used to classify the extracted features. The initial stages process the mammographic image to extract the features needed for classification, and in the final stage the features are classified using the pattern recognition techniques mentioned above. The goal of this research is to provide medical experts and researchers with an effective method to aid them in identifying tumors, non-tumors, and dense masses in a mammogram. First, the breast region is extracted from the entire mammogram by creating masks and using them to isolate the region of interest pertaining to the tumor. A chain code is employed to extract the various regions, which could potentially be classified as tumors, non-tumors, or dense regions. Adaptive histogram equalization is applied to enhance the contrast of the image; applying it repeatedly yields a saturated image that contains only the bright spots of the mammogram, which appear as dense regions. These dense masses could be potential tumors requiring treatment. Texture characteristics of the mammographic image are used for feature extraction, and a total of thirteen Haralick features are used to classify the three classes with the nearest mean and support vector machine classifiers. The support vector machine classifier is used for two-class problems with a radial basis function (RBF) kernel, and the best possible (C and gamma) values are searched for. The results obtained in this research suggest that the best classification accuracy was achieved using support vector machines for both tumor vs. non-tumor and tumor vs. dense masses: the maximum accuracy achieved for tumor vs. non-tumor is above 90% and for the dense masses is 70.8%, using 11 features with support vector machines. Support vector machines performed better than the nearest mean classifier. Case studies were performed using two distinct datasets, each consisting of data from 24 patients in two views per patient, the cranio-caudal and medio-lateral oblique views, from which the regions of interest that could possibly be a tumor, non-tumor, or dense region (mass) were extracted.
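As a rough illustration of the texture-based pipeline described above, the sketch below computes the thirteen Haralick features for each region of interest (here via the mahotas library, an assumption) and trains an RBF-kernel support vector machine whose C and gamma are selected by grid search; the parameter grid and helper names are hypothetical, not the dissertation's code.

```python
# Sketch: 13 Haralick texture features per ROI, fed to an RBF-kernel SVM with
# C and gamma chosen by grid search.
import numpy as np
import mahotas
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def haralick_features(roi: np.ndarray) -> np.ndarray:
    # mahotas returns a 4x13 matrix (one row per co-occurrence direction);
    # averaging over directions gives the usual 13-dimensional Haralick vector.
    return mahotas.features.haralick(roi.astype(np.uint8)).mean(axis=0)

def train_tumor_vs_nontumor(rois, labels):
    X = np.array([haralick_features(r) for r in rois])
    param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
    grid = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    grid.fit(X, labels)          # labels: 0 = non-tumor, 1 = tumor (two-class problem)
    return grid.best_estimator_, grid.best_params_
```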
155

Variant Detection Using Next Generation Sequencing Data

Pyon, Yoon Soo 08 March 2013 (has links)
No description available.
156

An Improved Classifier Chain Ensemble for Multi-Dimensional Classification with Conditional Dependence

Heydorn, Joseph Ethan 01 July 2015 (has links) (PDF)
We focus on multi-dimensional classification (MDC) problems with conditional dependence, which we call multiple output dependence (MOD) problems. MDC is the task of predicting a vector of categorical outputs for each input. Conditional dependence in MDC means that the choice for one output value affects the choice for others, so it is not desirable to predict outputs independently. We show that conditional dependence in MDC implies that a single input can map to multiple correct output vectors. This means it is desirable to find multiple correct output vectors per input. Current solutions for MOD problems are not sufficient because they predict only one of the correct output vectors per input, ignoring all others. We modify four existing MDC solutions, including chain classifiers, to predict multiple output vectors. We further create a novel ensemble technique named weighted output vector ensemble (WOVE) which combines these multiple predictions from multiple chain classifiers in a way that preserves the integrity of output vectors and thus preserves conditional dependence among outputs. We verify the effectiveness of WOVE by comparing it against 7 other solutions on a variety of data sets and find that it shows significant gains over existing methods.
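The base chain-classifier idea the abstract builds on can be sketched as follows: each output is predicted by a classifier that also receives the previously predicted outputs as features, so the choice for one output can influence the next. This is only a simplified illustration with an arbitrary base learner and chain order, not the thesis's WOVE ensemble.

```python
# Simplified classifier chain for multi-dimensional (multi-output) classification.
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import RandomForestClassifier

class SimpleClassifierChain:
    def __init__(self, base_estimator=None, order=None):
        self.base_estimator = base_estimator or RandomForestClassifier(random_state=0)
        self.order = order                      # permutation of output indices

    def fit(self, X, Y):
        n_outputs = Y.shape[1]
        self.order_ = self.order if self.order is not None else list(range(n_outputs))
        self.estimators_ = []
        X_aug = X
        for j in self.order_:
            est = clone(self.base_estimator).fit(X_aug, Y[:, j])
            self.estimators_.append(est)
            # During training, the true value of this output becomes a feature
            # for the classifiers later in the chain.
            X_aug = np.column_stack([X_aug, Y[:, j]])
        return self

    def predict(self, X):
        Y_pred = np.zeros((X.shape[0], len(self.order_)), dtype=int)
        X_aug = X
        for est, j in zip(self.estimators_, self.order_):
            # At prediction time, earlier predicted outputs feed later ones.
            Y_pred[:, j] = est.predict(X_aug)
            X_aug = np.column_stack([X_aug, Y_pred[:, j]])
        return Y_pred
```

An ensemble in the spirit of WOVE would train several such chains with different output orderings and combine whole predicted output vectors, rather than merging individual outputs independently, so that conditional dependence among the outputs is preserved.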
157

Text Identification by Example

Preece, Daniel Joseph 02 August 2007 (has links) (PDF)
The World-Wide Web contains a vast amount of information, and reading through web pages to collect this information is tedious, time-consuming and error-prone. Users need an automated solution for extracting or highlighting the data that they are interested in. Building a regular expression to match the text of interest automates the process, but regular expressions are hard to create and are certainly not feasible for non-programmers to construct. Text Identification by Example (TIBE) makes it easier for end-users to harvest information from the web and other text documents. With TIBE, training text classifiers from user-selected positive and negative examples replaces the hand-writing of regular expressions. The text classifiers can then be used to extract or highlight text on web pages.
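A toy sketch of the "by example" idea is shown below: the user marks positive and negative example strings, a generic text classifier is trained on them, and candidate fragments from a page are then scored instead of being matched by a hand-written regular expression. The character n-gram features, the logistic-regression learner and the example strings are assumptions for illustration; TIBE's own classifiers are not described here.

```python
# Toy "identification by example": train on user-selected examples, score candidates.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# User-selected examples (hypothetical): phone numbers vs. other page text.
positives = ["801-555-1234", "(415) 555-0199", "212 555 7600"]
negatives = ["August 2, 2007", "ISBN 0-13-110362-8", "Suite 1234", "page 42"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-gram features
    LogisticRegression(max_iter=1000),
)
model.fit(positives + negatives, [1] * len(positives) + [0] * len(negatives))

# Candidate fragments extracted from a page can then be highlighted when the
# classifier scores them as positive.
candidates = ["646-555-2398", "02 August 2007", "303-555-8810"]
print([c for c in candidates if model.predict([c])[0] == 1])
```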
158

Relationships Among Learning Algorithms and Tasks

Lee, Jun won 27 January 2011 (has links) (PDF)
Metalearning aims to obtain knowledge of the relationship between the mechanism of learning and the concrete contexts in which that mechanism is applicable. As new mechanisms of learning are continually added to the pool of learning algorithms, the chances of encountering behavioral similarity among algorithms increase. Understanding the relationships among algorithms and the interactions between algorithms and tasks helps to narrow down the space of algorithms to search for a given learning task. In addition, this process helps to disclose factors contributing to the similar behavior of different algorithms. We first study general characteristics of learning tasks and their correlation with the performance of algorithms, isolating two metafeatures whose values are fairly distinguishable between easy and hard tasks. We then devise a new metafeature that measures the difficulty of a learning task independently of the performance of learning algorithms on it. Building on these preliminary results, we investigate more formally how we might measure the behavior of algorithms at a finer-grained level than a simple dichotomy between easy and hard tasks. We prove that, among many possible candidates, the Classifier Output Difference (COD) measure is the only one possessing the properties of a metric necessary for further use in our proposed behavior-based clustering of learning algorithms. Finally, we cluster 21 algorithms based on COD and show the value of the clustering in 1) highlighting interesting behavioral similarities among algorithms, which leads us to a thorough comparison of Naive Bayes and Radial Basis Function Network learning, and 2) designing more accurate algorithm selection models by predicting clusters rather than individual algorithms.
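The COD measure and the behavior-based clustering it enables can be sketched briefly: COD between two algorithms is the fraction of instances on which their predictions disagree, and the resulting pairwise distances can be fed to hierarchical clustering. The placeholder predictions, the number of algorithms and the linkage method below are illustrative assumptions.

```python
# Sketch: Classifier Output Difference (COD) as a pairwise distance, then
# hierarchical clustering of algorithms on the COD distance matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cod(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
    # Fraction of test instances on which the two classifiers disagree.
    return float(np.mean(preds_a != preds_b))

# predictions[i] holds algorithm i's predicted labels on a common test set
# (in practice aggregated over many tasks); random placeholders here.
rng = np.random.default_rng(0)
predictions = [rng.integers(0, 3, 1000) for _ in range(6)]

n = len(predictions)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = cod(predictions[i], predictions[j])

# COD is a metric, so it can be used directly as a distance for clustering.
Z = linkage(squareform(dist), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))   # e.g. split the algorithms into 2 clusters
```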
159

Coarse Radio Signal Classifier on a Hybrid FPGA/DSP/GPP Platform

Nair, Sujit S. 12 January 2010 (has links)
The Virginia Tech Universal Classifier Synchronizer (UCS) system can enable a cognitive receiver to detect and classify a received signal, extract all the parameters needed for physical layer demodulation, and configure a cognitive radio accordingly. Currently, UCS can process analog amplitude modulation (AM) and frequency modulation (FM), digital narrowband M-PSK and M-QAM, and wideband orthogonal frequency division multiplexing (OFDM) signals. A fully developed prototype of the UCS system was designed and implemented in our laboratory using the GNU Radio software platform and the Universal Software Radio Peripheral (USRP) hardware. That system suffers from significant latency because of the limited USB data transfer speeds between the USRP and the host computer, as well as inherent latencies and timing uncertainties in the General Purpose Processor (GPP) software itself. Solving the timing and latency problems requires running key parts of the software-defined radio (SDR) code on a hybrid platform combining a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP) and a GPP. Our objective is to port the entire UCS system to the Lyrtech SFF SDR platform, which is such a hybrid DSP/FPGA/GPP platform. Since the FPGA allows parallel processing of a wideband signal, its computing speed is substantially faster than that of GPPs and most DSPs, which process signals sequentially. In addition, the Lyrtech Small Form Factor (SFF) SDR development platform integrates the FPGA and the RF module on one board, which further reduces the latency in moving signals from the RF front end to the computing component. For UCS to be commercially viable, it also needs to be ported to a more portable platform that can be transitioned to a handset radio in the future. This thesis is a proof-of-concept implementation of the coarse classifier, which is the first step of classification. Both fixed-point and floating-point implementations are developed, and no compiler-specific or vendor-specific libraries are used. This makes it possible to transition the design to other hardware, such as GPPs and DSPs from other vendors, without changing the basic framework and design. / Master of Science
160

A Comparative study of YOLO and Haar Cascade algorithm for helmet and license plate detection of motorcycles

Mavilla Vari Palli, Anusha Jayasree, Medimi, Vishnu Sai January 2022 (has links)
Background: Every country has seen an increase in motorcycle accidents over the years due to social and economic differences as well as regional variations in transportation circumstances. The motorcycle is a common mode of transportation for the middle class, and every rider is legally required to wear a helmet. However, some riders ignore their safety and violate traffic rules by riding without a helmet. Police have tried to address this issue manually, but this is ineffective and challenging in practice, so automating the procedure is essential for effective enforcement of road safety. As a result, automated systems have been created employing a variety of techniques, including Convolutional Neural Networks (CNN), the Haar Cascade Classifier, You Only Look Once (YOLO), the Single Shot multi-box Detector (SSD), and others. In this study, YOLOv3 and the Haar Cascade Classifier are compared for motorcycle helmet and license plate detection.

Objectives: This thesis aims to compare machine learning algorithms that detect motorcycles' license plates and helmets. The Haar Cascade Classifier and YOLO algorithms are trained on the US License Plates and Helmet Detection datasets, and the accuracy obtained in detecting the helmets and license plates of the motorcycles is analysed.

Methods: An experiment is performed to answer the research question, namely to find the accuracy of the models in detecting the helmets and license plates of motorcycles. The datasets are from Kaggle and include 764 pictures of two distinct classes, i.e., with and without a helmet, along with 447 unique license plate images. Preprocessing techniques are applied to the US License Plates and Helmet Detection datasets, which are then split into a training set (80%) and a test set (20%). The models are trained on the pre-processed training sets using the Haar Cascade Classifier and YOLOv3 algorithms and evaluated on the 20% test data. Finally, the prediction results of the two models are recorded and the accuracy is measured by generating a confusion matrix.

Results: The most efficient algorithm for detecting the helmets and license plates of motorcycles is identified from the experiment. Based on the results, the YOLOv3 algorithm is the more accurate in detecting motorcycles' helmets and license plates.

Conclusions: Models are trained using the Haar Cascade and YOLOv3 algorithms on the US License Plates and Helmet Detection training datasets, and their accuracy in detecting the helmets and license plates of motorcycles is checked using the test datasets. The model trained with the YOLOv3 algorithm has the higher accuracy; hence, the neural-network-based YOLOv3 technique is considered the better and more efficient approach.
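As a hedged illustration of the evaluation step described in the Methods and Results above, the sketch below runs a hypothetical trained Haar cascade over test images and summarises helmet / no-helmet predictions with a confusion matrix and an accuracy score; the YOLOv3 branch would produce predictions in the same form and be scored identically. The cascade file, image names, labels and per-image decision rule are assumptions, not the thesis's code.

```python
# Sketch: evaluate a trained Haar cascade on test images via a confusion matrix.
import cv2
from sklearn.metrics import accuracy_score, confusion_matrix

def predict_helmet(image_path: str, cascade: cv2.CascadeClassifier) -> int:
    # 1 = helmet detected in the image, 0 = no helmet detected.
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return int(len(detections) > 0)

cascade = cv2.CascadeClassifier("helmet_cascade.xml")   # hypothetical trained cascade
test_images = ["rider_001.jpg", "rider_002.jpg"]         # hypothetical test set
y_true = [1, 0]                                          # ground-truth labels
y_pred = [predict_helmet(p, cascade) for p in test_images]

print(confusion_matrix(y_true, y_pred))
print("accuracy:", accuracy_score(y_true, y_pred))
```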
